Noarchive Tag

TL;DR

A Noarchive Tag prevents search engines from showing a cached copy of a page in their search results. It is implemented by placing a piece of code (typically a meta tag) on a page in order to exclude that page from being cached by search engine bots.

What is a Noarchive Tag?

A Noarchive Tag is a robots directive that tells search engines not to store a cached copy of a page or of an entire website. Webmasters use this meta tag when they want to prevent a page from being cached and to stop search engines from showing a cached snapshot of the site in the SERPs.

Why should a website use a Noarchive Tag?

A Noarchive Tag makes it harder for scrapers to lift a site's content out of Google's cache. It is also used by webmasters who want to stop Google or another search engine from republishing the content of their pages as a cached copy.

A Noarchive Tag is also useful for ecommerce sites whose prices change frequently, or for pages whose content is updated several times a day. If such a page is not re-crawled often, users looking at the cached copy may see an outdated version of it.

Why should a website not use a Noarchive Tag?

Showing the cached version of a page can be useful in some situations. If a page is accidentally deleted, if the server is overloaded or if the site is temporarily unavailable, users can still reach the content through the cached copy.

How to implement a Noarchive Tag?

A Noarchive Tag is implemented by adding a robots meta tag with the `noarchive` directive to the `<head>` of a webpage. The piece of code looks like this:

`<meta name="robots" content="noarchive">`

In this way, search engines will not show a cached copy of the webpage in their search results.
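As a minimal sketch (the page structure and title below are invented for illustration), this is how the directive might sit in a page's `<head>`; a `googlebot`-specific meta name limits the rule to Google's crawler only:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example product page</title>
    <!-- Applies to all crawlers that honor robots meta tags -->
    <meta name="robots" content="noarchive">
    <!-- Alternatively, target only Google's crawler -->
    <meta name="googlebot" content="noarchive">
  </head>
  <body>
    <!-- Page content -->
  </body>
</html>
```

For non-HTML resources such as PDF files, the same directive can usually be sent as an `X-Robots-Tag: noarchive` HTTP response header instead of a meta tag.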

