AtomGraph/Processor

AtomGraph Processor is a server for declarative, read-write Linked Data applications. If you have a triplestore with RDF data that you want to serve as Linked Data, or want to write RDF over a RESTful HTTP interface, AtomGraph Processor is the only component you need.

AtomGraph Processor provides the following generic features out of the box:

  • API logic in a single Linked Data Templates ontology
  • control of RDF input quality with SPARQL-based constraints
  • SPARQL endpoint and Graph Store Protocol endpoint
  • HTTP content negotiation and caching support

AtomGraph's direct use of semantic technologies results in an extremely extensible and flexible design and leads the way towards declarative Web development. You can forget all about broken hyperlinks and concentrate on building great apps on quality data. For more details, see articles and presentations about AtomGraph.

For a compatible frontend framework for end-user applications, see AtomGraph Web-Client.

Getting started

For full documentation, see the wiki index.

Usage

Docker

Processor is available from Docker Hub as the atomgraph/processor image. It accepts the following environment variables (which become webapp context parameters):

  • ENDPOINT: SPARQL 1.1 Protocol endpoint (URI)
  • GRAPH_STORE: SPARQL 1.1 Graph Store Protocol endpoint (URI)
  • ONTOLOGY: Linked Data Templates ontology (URI)
  • AUTH_USER: SPARQL service HTTP Basic auth username (string, optional)
  • AUTH_PWD: SPARQL service HTTP Basic auth password (string, optional)
  • PREEMPTIVE_AUTH: use preemptive HTTP Basic auth? (true/false, optional)
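
As an illustrative sketch, the variables can be passed with docker run. The endpoint URLs and the ontology URI below are placeholders, not defaults shipped with the image; substitute your own services:

```shell
# Run Processor against a hypothetical local SPARQL service.
# All URLs below are placeholders -- replace them with your own.
docker run --rm -p 8080:8080 \
    -e ENDPOINT="http://host.docker.internal:3030/ds/" \
    -e GRAPH_STORE="http://host.docker.internal:3030/ds/" \
    -e ONTOLOGY="https://example.org/my-ontology#" \
    atomgraph/processor
```

The container listens on port 8080 internally, so `-p 8080:8080` exposes it on the same host port.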

If you want your ontologies read from local files rather than dereferenced from their URIs, you can define a custom location mapping, which is appended to the system location mapping. The mapping has to be a file in N3 format, mounted at the /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/custom-mapping.n3 path. Validate the file syntax beforehand to avoid errors.
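
A minimal mapping file might look like the following. The ontology URI and the local filename are hypothetical; the vocabulary is Apache Jena's location mapping vocabulary:

```turtle
@prefix lm: <http://jena.hpl.hp.com/2004/08/location-mapping#> .

# Map a (hypothetical) ontology URI to a file bundled into the webapp
[] lm:mapping
   [ lm:name    "https://example.org/my-ontology#" ;
     lm:altName "file:///usr/local/tomcat/webapps/ROOT/WEB-INF/classes/my-ontology.ttl" ] .
```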

To enable logging, mount a log4j.properties file to /usr/local/tomcat/webapps/ROOT/WEB-INF/classes/log4j.properties.
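
For example, a minimal log4j.properties that logs to the console could look like this (the log level and pattern are illustrative, not mandated by Processor):

```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
```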

Examples

The examples show Processor running with combinations of

  • default and custom LDT ontologies
  • local and remote SPARQL services
  • Docker commands

Other combinations are supported as well.

Default ontology and a local SPARQL service

The Fuseki example shows how to run a local Fuseki SPARQL service together with Processor and how to set up nginx as a reverse proxy in front of Processor. Fuseki loads its RDF dataset from a file; Processor uses a built-in LDT ontology. The example uses the docker-compose command.

Run the Processor container together with the Fuseki and nginx containers:

cd examples/fuseki

docker-compose up

After that, open one of the following URLs in the browser to retrieve RDF descriptions:

Alternatively, you can run curl http://localhost:8080/ etc. from the shell.
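
Since Processor supports HTTP content negotiation, you can also request a specific RDF serialization with an Accept header. The media types below are common RDF serializations; this sketch assumes the example containers are running:

```shell
# Request the Turtle representation of the root resource
curl -H "Accept: text/turtle" http://localhost:8080/

# Request the RDF/XML representation instead
curl -H "Accept: application/rdf+xml" http://localhost:8080/
```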

In this setup Processor is also available on http://localhost/, which is the nginx host. The internal hostname rewriting is done by nginx and is useful when the Processor hostname differs from the application's dataset base URI and SPARQL queries would therefore not match any triples. The dataset for this example contains a second http://example.org/ base URI, which works with the rewritten example.org hostname.

Custom ontology and a remote SPARQL service

The Wikidata example shows how to run Processor with a custom LDT ontology and a remote SPARQL service. It uses the docker-compose command.

Run the Processor container with the Wikidata example:

cd examples/wikidata

docker-compose up

After that, open one of the following URLs in the browser to retrieve RDF descriptions:

Alternatively, you can run curl http://localhost:8080/ etc. from the shell.

Note that Wikidata's SPARQL endpoint https://query.wikidata.org/bigdata/namespace/wdq/sparql is very popular and therefore often overloaded. An error response received from Wikidata by the SPARQL client will result in a 500 Internal Server Error response from Processor.

Maven

Processor is released on Maven Central as com.atomgraph:processor.
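
A dependency declaration for your pom.xml would look like the following; no version is hard-coded here, so check Maven Central for the latest release:

```xml
<dependency>
    <groupId>com.atomgraph</groupId>
    <artifactId>processor</artifactId>
    <!-- replace with the latest release version from Maven Central -->
    <version>X.Y.Z</version>
</dependency>
```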

Datasource

AtomGraph Processor does not include an RDF datasource. It queries RDF data on the fly from a SPARQL endpoint using the SPARQL 1.1 Protocol over HTTP. SPARQL endpoints are provided by most RDF triplestores.
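
As a sketch of what "SPARQL 1.1 Protocol over HTTP" means in practice, here is a query sent to an endpoint with curl. The endpoint URL is a placeholder (a local Fuseki dataset in this example):

```shell
# Send a SELECT query to a (hypothetical) SPARQL 1.1 Protocol endpoint
# via HTTP GET with a URL-encoded query parameter
curl --get \
     --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 10" \
     -H "Accept: application/sparql-results+json" \
     "http://localhost:3030/ds/"
```

This is the same protocol Processor speaks to whatever endpoint the ENDPOINT variable points at.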

The easiest way to set up a SPARQL endpoint on an RDF dataset is Apache Jena Fuseki running as a Docker container using our fuseki image. There are also a number of public SPARQL endpoints.

For a commercial triplestore with SPARQL 1.1 support, see Dydra.

Test suite

Processor includes a basic HTTP test suite for Linked Data Templates, SPARQL Protocol and the Graph Store Protocol.

Support

Please report issues if you've encountered a bug or have a feature request.

Commercial consulting, development, and support are available from AtomGraph.

Community

Please join the W3C Declarative Linked Data Apps Community Group to discuss and develop AtomGraph and declarative Linked Data architecture in general.