Why you should care about Automated Content Access Protocol

Lost in the knee-jerk anti-MSM reaction to the recent Belgian copyright case was a distribution-rights development that could be more important in the long run: ACAP, the Automated Content Access Protocol. It's an initiative to define a machine-readable "industry standard to enable the providers of all types of content published on the World Wide Web to communicate information relating to permissions for access and use of that content."

The project is being driven by traditional publishers, mostly in Europe, which probably taints it in the minds of many. But its aims are not at odds with those of Creative Commons, which says its goal is "enabling the legal sharing and reuse of cultural, educational, and scientific works." Both groups support the idea that a creator of content should be able to encourage usage under limited terms. Creative Commons in fact offers about a dozen alternative licenses setting various restrictions on content reuse.

The problem is that there is no standard way of saying, for example, "you may read/index/link to this item but you may not repurpose it into your own website" or "you may republish this, but only without alteration and only in a noncommercial context" in a way that computers can understand. ACAP aims to develop such a standard, building on preexisting technologies.
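To make the gap concrete, here is a sketch of what such a machine-readable permissions file might look like if it extended the familiar robots.txt convention. This is purely illustrative: every `Permission-*` directive below is invented, since ACAP has not yet published any actual syntax.

```
# Hypothetical site permissions file, robots.txt style.
# All "Permission-*" directives are invented for illustration only;
# the real ACAP vocabulary remains to be defined.
User-agent: *
Allow: /articles/

# "You may read/index/link to this item, but not repurpose it."
Permission-index: yes
Permission-link: yes
Permission-republish: no

# "You may republish, but only unaltered and noncommercially."
Permission-republish-verbatim-noncommercial: yes
```

The point is not the particular directive names but that a crawler could parse such a file mechanically, instead of a human having to read a terms-of-use page.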

The ACAP project is expected to deliver a standard by November of next year, and will involve not only publishers but at least one search engine company in the planning process. If it achieves its goals, the outcome will not be a wrestling match between content publishers and developers of new network services, but rather the framework for partnership without misunderstandings, confrontations and lawsuits.



I strongly agree that one should care about ACAP and the goals the publishers are trying to meet with this initiative. So I had a closer look at the details available at www.the-acap.org. Unfortunately, there is not much there yet.

It seems they are trying to build on ONIX, an approach for formally describing the license agreements of book publishers with university libraries (at least that is their showcase). ONIX relies on formal ontologies to describe a contract and make it machine-understandable. Given my experience explaining formal-ontology concepts to non-experts, explaining the formalisation of a legal contract to the lawyers should be real fun. Building the inference engines that understand the contracts is at least a technological challenge. And even if they do succeed, open questions remain, e.g. how the contracts will be signed.

IMHO, if this is the ACAP approach, it is overengineered and far too complex to work at internet scale. In summary, both the approach and the goal of ACAP remind me very much of ICE, which AFAIK has gone the way of the dodo. At least compared to RSS and Atom, the success of ICE was limited ;-).

To meet those goals, a more pragmatic approach to ACAP, based on extensions of the CC framework, looks more promising to me than the current direction. The basic functionality missing from the CC framework right now (at least as I see it):

  • some basic mechanism to trigger human interaction between the interested parties, e.g. a mail address or a URL to contact in case somebody wants to buy the content
  • a means to answer the following two questions when selecting a CC license: is (commercial) indexing allowed? And is (commercial) caching allowed, and for how many days?
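As a sketch of how those two extensions might look in practice, building on CC's existing habit of embedding license metadata in HTML: the `acap:*` property names below are hypothetical, not part of any published vocabulary.

```html
<!-- Standard CC license link, as Creative Commons already uses. -->
<a rel="license" href="http://creativecommons.org/licenses/by-nc/2.5/">
  Some rights reserved
</a>

<!-- Hypothetical extensions: all acap:* names are invented here. -->
<meta name="acap:contact" content="mailto:rights@example.com" />
<meta name="acap:commercial-indexing" content="allowed" />
<meta name="acap:commercial-caching" content="allowed" />
<meta name="acap:commercial-caching-days" content="7" />
```

The attraction of this route is that search engines already parse both HTML metadata and CC license links, so only a handful of new properties would need to be agreed on.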

The implicit dependency of ONIX on Rightscom technology (for the formal ontologies) doesn't look like an open standard to me, especially since I haven't recognised Rightscom as a major player in the field of formal ontologies and/or the semantic web. And there is a lot of Rightscom in the ACAP scenario: first their work within ONIX, then they did the ACAP feasibility study, and now they are a major player in the pilot project.