Automated Content Access Protocol

Automated Content Access Protocol ("ACAP") is a proposed method of providing machine-readable permissions information for content, allowing automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by the publishing industry with technical partners, including search engines. It is intended to support more sophisticated online publishing business models, but has been [http://blogs.telegraph.co.uk/technology/iandouglas/dec2007/acap.htm criticised for being biased towards the fears of publishers who see search and aggregation as a threat], rather than as a source of traffic and new readers.

Current status

In November 2007 ACAP announced that [http://the-acap.org/download.php?ACAP-TF-CrawlerCommunications-Part2-V1.0.pdf the first version of the standard] was ready. No non-ACAP members, whether publishers or search engines, have yet adopted it. A Google spokesman appeared to have ruled out adoption [ [http://blog.searchenginewatch.com/blog/080313-090443 Search Engine Watch report of Rob Jonas' comments on ACAP] ]. Google's CEO has since indicated that Google has no objection to implementing ACAP [ [http://www.itwire.com/content/view/17206/53/ IT Wire report of Eric Schmidt's comments on ACAP] ] and is working to resolve the technical issues that currently prevent implementation. No progress has been announced since those remarks in March 2008, and Google [ [http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog] ], along with Yahoo and MSN, has since reaffirmed its commitment to the use of robots.txt and sitemaps.

Previous milestones

In April 2007 ACAP commenced a pilot project in which the participants and technical partners undertook to specify and agree various use cases for ACAP to address. A technical workshop, attended by the participants and invited experts, was held in London to discuss the use cases and agree next steps.

By February 2007 the pilot project was launched and participants announced.

By October 2006, ACAP had completed a feasibility stage and was formally announced [ [http://www.the-acap.org/press_releases/Frankfurt_acap_press_release_6_oct_06.pdf Official ACAP press release announcing project launch] ] at the Frankfurt Book Fair on 6 October 2006. A pilot programme commenced in January 2007 involving a group of major publishers and media groups working alongside search engines and other technical partners.

ACAP and search engines

One of ACAP's initial goals is to provide better rules to search engine crawlers (or robots) when accessing websites. In this role it can be considered as an extension to the Robots Exclusion Standard (or "robots.txt") for communicating website access information to automated web crawlers.
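To illustrate the baseline ACAP builds on: the Robots Exclusion Standard answers only one question per crawler and path, namely whether fetching is allowed. A minimal sketch using Python's standard-library parser (the file contents are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A conventional robots.txt: each rule is a binary allow/deny
# decision per user agent -- the limitation ACAP set out to address.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The only question robots.txt can answer: may this agent fetch this path?
print(parser.can_fetch("*", "/private/page.html"))  # False
print(parser.can_fetch("*", "/public/page.html"))   # True
```

Anything beyond fetch-or-don't-fetch, such as how long content may be cached or whether it may be re-displayed, falls outside what this format can express.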

It has been suggested [ [http://googlesystem.blogspot.com/2006/09/news-publishers-want-full-control-of.html News Publishers Want Full Control of the Search Results] ] that ACAP is unnecessary, since the "robots.txt" protocol already exists for the purpose of managing search engine access to websites. However, others [ [http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol Why you should care about Automated Content Access Protocol] ] support ACAP’s view [ [http://www.the-acap.org/faqs.php#existing_protocols ACAP FAQ on robots.txt] ] that "robots.txt" is no longer sufficient. ACAP argues that "robots.txt" was devised at a time when both search engines and online publishing were in their infancy and as a result is insufficiently nuanced to support today’s much more sophisticated business models of search and online publishing. ACAP aims to make it possible to express more complex permissions than the simple binary choice of “inclusion” or “exclusion”.
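The richer permissions ACAP describes can be pictured as usage-qualified directives layered onto the robots.txt format. The sketch below parses such directives; the field names (`ACAP-crawler`, `ACAP-allow-*`, `ACAP-disallow-*`) follow the general shape of the v1 crawler-communication draft but should be treated as an assumption here, not normative syntax:

```python
# Illustrative parser for ACAP-style extended robots.txt directives.
# Field names are an assumption modelled on the ACAP v1 draft, not
# a normative rendering of the specification.
from typing import NamedTuple

class Permission(NamedTuple):
    allow: bool   # True for ACAP-allow-*, False for ACAP-disallow-*
    usage: str    # the permitted usage, e.g. "crawl" or "index"
    path: str     # the resource the rule applies to

def parse_acap(text: str) -> dict[str, list[Permission]]:
    """Group ACAP permission lines under the crawler they apply to."""
    rules: dict[str, list[Permission]] = {}
    crawler = None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "acap-crawler":
            crawler = value
            rules.setdefault(crawler, [])
        elif field.startswith(("acap-allow-", "acap-disallow-")) and crawler:
            allow = field.startswith("acap-allow-")
            usage = field.split("-", 2)[2]   # text after "acap-allow-"/"acap-disallow-"
            rules[crawler].append(Permission(allow, usage, value))
    return rules

example = """\
ACAP-crawler: *
ACAP-allow-crawl: /
ACAP-disallow-index: /archive/
"""
print(parse_acap(example)["*"])
```

The point of the sketch is the data model: each rule carries a usage verb in addition to a path, so "you may crawl this but not index it" becomes expressible, which a plain Disallow line cannot say.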

As an early priority, ACAP is intended to provide a practical and consensual solution to some of the rights-related issues which in some cases have led to litigation [ [http://www.out-law.com/page-7427 "Is Google Legal?" OutLaw article about Copiepresse litigation] ] [ [http://media.guardian.co.uk/newmedia/comment/0,,2013051,00.html Guardian article about Google's failed appeal in Copiepresse case] ] between publishers and search engines.

Only one search engine, the little-known Exalead, has confirmed that it will adopt ACAP.

Comment and debate

The project has generated considerable online debate in the search [ [http://blog.searchenginewatch.com/blog/060922-104102 Search Engine Watch article] ], content [ [http://shore.com/commentary/newsanal/items/2006/200601002publishdrm.html Shore.com article about ACAP] ] and intellectual property [ [http://www.ip-watch.org/weblog/index.php?p=408&res=1280_ff&print=0 IP Watch article about ACAP] ] communities. If there is one linking theme to the commentary, it is that keeping the specification simple will be critical to its successful implementation, and that the aims of the project are focussed on the needs of publishers rather than readers, a focus many have seen as a flaw.

ACAP participants

Publishers confirmed as participating in the ACAP pilot project include (as of 16 February 2007):

* [http://www.afp.com/ Agence France-Presse]
* [http://www.persgroep.be/ De Persgroep]
* [http://www.impresa.pt/ Impresa]
* [http://www.inmplc.com/ Independent News & Media Plc]
* [http://www.wiley.com/ John Wiley & Sons]
* [http://www.macmillan.com/ Macmillan / Holtzbrinck]
* [http://www.media24.com/ Media 24]
* [http://www.reedelsevier.com/ Reed Elsevier]
* [http://www.sanoma.fi/english Sanoma Corporation]

External links

* [http://www.the-acap.org/ Official website]
* [http://www.bl.uk/ British Library website]
* [http://media.guardian.co.uk/columnists/story/0,,1935057,00.html Article about ACAP and Google] in The Guardian newspaper
* [http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol Yelvington article about ACAP]
* [http://www.wildlyappropriate.com/article/139/automated-content-access-protocol-why Automated Content Access Protocol: Why?] - Wildly Appropriate
* [http://www.currybet.net/cbet_blog/2007/12/acap_flawed_and_broken.php Acap flawed and broken] - Martin Belam
* [http://blogs.telegraph.co.uk/technology/iandouglas/dec2007/acap.htm Acap] - Telegraph Blogs
* [http://blogs.telegraph.co.uk/technology/iandouglas/jan2008/acapshootsback.htm Acap shoots back] - Telegraph Blogs
* [http://www.websearchguide.ca/netblog/archives/007094.html Acap needs publishers]
* [http://www.mediainfo.com/eandp/departments/online/article_display.jsp?vnu_content_id=1003724998 WAN calls on Google to embrace Acap] - Editor and Publisher
* [http://www.journalism.co.uk/2/articles/531181.php Google rejects adoption of Acap standard] - journalism.co.uk
