Nitro-Net.com – Internet Marketing Services – A Global Marketing Group Company

Google’s John Mueller responded to a comment about Google’s guideline that says “Use the robots.txt file on your web server to manage your crawling budget by preventing crawling of infinite spaces such as search result pages.” He said this is less about spam and more about “watering down your indexed content with useless pages that compete with each other.”

He posted this on Twitter:

Here is the original tweet from Lily Ray from this conversation:

In 2007, Google told webmasters to block internal search results from being indexed. The original guideline read “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” Now it reads “Use the robots.txt file on your web server to manage your crawling budget by preventing crawling of infinite spaces such as search result pages.”
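As a rough sketch, blocking internal search result pages in robots.txt might look like the following (the `/search/` path and `?s=` query parameter are assumptions for illustration — use whatever URL pattern your site’s search pages actually follow):

```
# Hypothetical robots.txt rules blocking internal search result pages
User-agent: *
# Assumed path-based search results, e.g. example.com/search/widgets
Disallow: /search/
# Assumed query-parameter search results, e.g. example.com/?s=widgets
Disallow: /*?s=
```

Note that these rules only prevent crawling; URLs that are already indexed or linked externally may still appear in results without a snippet.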

Then ten years later, Google’s John Mueller explained why Google doesn’t want your search result pages in its index. He said “they make infinite spaces (crawling), they’re often low-quality pages, often lead to empty search results/soft-404s.”

So it really wasn’t always about spam, but rather about blocking pages that might not be as relevant to Google.

Forum discussion at Twitter.


