Publishers Propose New Controls on Search Engines’ Access to Content

Publishers unveiled a proposal today to establish more flexible rules governing how search engines index their content. Currently, the AP notes, publishers can use a “robots.txt” file to tell a search engine not to index a given page. The plan, announced today at an NYC gathering of a publishers’ consortium and known as the Automated Content Access Protocol (ACAP), would give publishers more say in what search engines can do: rather than a simple do/do-not-index request, publishers could set specific rules, such as how long a search engine may retain content or which links it may follow.

AP CEO Tom Curley said the technology could play an important role in blocking sites that distribute content without permission. A Google (NSDQ: GOOG) spokesperson said the company still needs to evaluate ACAP to ensure that it would work for a wide variety of websites, not just those backing it. Search Engine Land’s Danny Sullivan said robots.txt “certainly is long overdue for some improvements,” but he questioned whether ACAP would do much to prevent future legal battles.
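For comparison, a conventional robots.txt file can express only a binary allow/disallow request. The sketch below contrasts that baseline with the kind of time- and usage-based directives ACAP envisions; the ACAP-style directive names here are illustrative assumptions, not the published specification.

```text
# Conventional robots.txt: a simple do/do-not-index request
User-agent: *
Disallow: /archive/

# ACAP-style extensions (hypothetical directive names, for illustration only):
# a publisher could limit retention time or restrict link-following
ACAP-crawler: *
ACAP-disallow-follow: /archive/
ACAP-max-retention: /news/ 7d
```

The point of the proposal is that line for line, each rule carries conditions (duration, permitted uses) rather than a flat yes/no, which existing crawlers would need to be updated to honor.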
