This Is Why Google Aims To Set An Official Standard For Using Robots.txt


For the past 25 years, SEO experts and webmasters have followed the unofficial rules outlined in the Robots Exclusion Protocol (REP) when using robots.txt. The protocol lets publishers specify which parts of their websites they want crawled, and because robots.txt is a plain public file, interested users can view those rules. They are observed by Googlebot, other major crawlers, and the roughly 500 million sites that rely on REP. Now, Google has finally proposed to standardise them.

Google noted that these unofficial rules pose a challenge to SEO professionals and website owners because they are ambiguously written. The search engine giant has therefore documented how the REP is used today and submitted its findings to the Internet Engineering Task Force (IETF) for review. It is important to note, though, that the draft does not alter any of the rules established in 1994; it only updates them for modern usage.

Some of the updated rules are: (1) developers must parse at least the first 500 kibibytes of a robots.txt file, (2) known disallowed pages are not crawled for a reasonably long period when a robots.txt file becomes inaccessible because of server failures, and (3) robots.txt is no longer limited to HTTP and can be used with any URI-based transfer protocol, such as CoAP and FTP.
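To see how crawlers interpret these rules in practice, here is a minimal sketch using Python's standard-library robots.txt parser. The robots.txt content and the example.com URLs are hypothetical, chosen purely for illustration; real crawlers such as Googlebot implement their own parsers.

```python
from urllib import robotparser

# Hypothetical robots.txt rules for an imaginary site.
rules = """\
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

# Parse the rules, then check whether specific URLs may be crawled.
parser = robotparser.RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "https://example.com/private/page"))  # False
print(parser.can_fetch("*", "https://example.com/public/page"))   # True
```

In a live crawler you would point the parser at the site's actual file with `parser.set_url(...)` followed by `parser.read()` instead of supplying the lines directly.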

Google encourages everyone to send feedback on the proposed draft so it can be refined into the best possible version.

Information used in this article was gathered from https://www.searchenginejournal.com/google-wants-to-establish-an-official-standard-for-using-robots-txt/314817/.

Taking advantage of affordable local SEO services is one effective way to improve your SERP rankings and keep your traffic consistently high. Visit Position1SEO right now and find out more about our available services.