Crawlability

TL;DR

Crawlability describes how easily search engines' spiders can crawl a website to gather information about it and index it.

What is Crawlability? 

Crawlability is how easily search engines can crawl a website without missing essential content or having their crawler blocked. Crawlers, or spiders, are the search engines' bots that crawl a website to gather information about its content so it can be ranked appropriately.

Why is Crawlability important for a website?

If a search engine's bot crawls the website correctly and fetches all the information, the website and its pages will be indexed successfully.

However, broken links or an incorrect sitemap setup can cause crawlability issues, leaving the search engine's spider unable to access, crawl, and index specific content on the site.

To ensure a proper and smooth crawl of a site, work through this checklist of issues that could prevent the spiders from crawling:

  • Make sure the robots.txt file is correct and that the robots meta tag on a specific page does not block the crawler (see the sketch after this list). 
  • Check the HTTP status codes; e.g., status code 200 means the request succeeded and the page was served normally.
  • Verify the HTTP response header fields that can affect your SEO strategy, such as X-Robots-Tag (e.g., X-Robots-Tag: noindex), Server (e.g., Server: nginx), Location (make sure the redirect targets work), and Link, which shows that the requested resource has a relationship with other resources (a small header-checking sketch follows this list).
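
As a minimal illustration of the first point, here is a sketch of a robots.txt file and a robots meta tag. The /admin/ path and the sitemap URL are hypothetical examples; adjust them to the site in question.

    # robots.txt - allow crawling of the whole site, but keep bots out of /admin/
    # (the path and sitemap URL below are placeholder examples)
    User-agent: *
    Disallow: /admin/
    Sitemap: https://www.example.com/sitemap.xml

    <!-- A robots meta tag like this on a page tells crawlers not to index or follow it -->
    <meta name="robots" content="noindex, nofollow">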
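
A quick way to check the status code and the response headers mentioned above is a short script. The following is a minimal sketch using Python's requests library; the URL is a placeholder, and in practice you would point it at the pages you want to audit.

    import requests

    # Placeholder URL; replace with the page you want to check
    url = "https://www.example.com/"

    # Disable automatic redirects so 3xx codes and the Location header stay visible
    response = requests.get(url, allow_redirects=False)

    # 200 means the request succeeded; 3xx codes point to the Location header
    print("Status code:", response.status_code)

    # Response headers that can affect crawling and indexing
    for header in ("X-Robots-Tag", "Server", "Location", "Link"):
        print(header + ":", response.headers.get(header, "(not set)"))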