The noarchive tag prevents search engines from showing a cached copy of a page in their search results listings. It is implemented by placing a piece of code (a meta tag) on a website to exclude specific pages from being cached by the search engine robots.
What is a Noarchive Tag?
The noarchive tag is a robots meta directive that tells search engines not to store a cached copy of a page or an entire website. In other words, this meta tag is used when the webmaster wants to prevent a page from being cached and to stop search engines from showing a cached preview of the site in the SERP.
Why should a website use a Noarchive Tag?
The noarchive tag prevents scrapers from lifting a site's content out of Google's cache. This meta tag is also commonly used by webmasters who want to stop Google or another search engine from republishing the content of their sites.
The noarchive tag can also be useful for eCommerce sites whose prices change often, or for pages whose content changes several times a day. In these cases, if the page has not been reindexed yet, users may otherwise see an outdated cached version of it.
Why should a website not use a Noarchive Tag?
Showing the cached version of a page can be useful in some cases. For example, if a page is accidentally deleted, overloaded, or temporarily unavailable, users can still reach the cached version of the website.
How to implement a Noarchive Tag?
Implementing the noarchive tag is done by adding a `noarchive` robots meta tag to the `<head>` section of a webpage. The code should look like this:
<meta name="robots" content="noarchive">
With this tag in place, search engines won't show a cached copy of the page in their search results.
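The directive can also be scoped to a single crawler by naming it instead of using the generic `robots` value:

```html
<!-- Applies only to Google's crawler; other search engines may still cache the page -->
<meta name="googlebot" content="noarchive">
```

For non-HTML files such as PDFs, which have no `<head>` section, the same directive can be sent as an `X-Robots-Tag` HTTP header. As a minimal sketch, assuming an Apache server with mod_headers enabled:

```apacheconf
# Send "noarchive" for every PDF served by this site
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noarchive"
</FilesMatch>
```

Note that, like the meta tag, the header only takes effect once the search engine recrawls the affected URLs.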