This repository contains code from my experiments with semantic web technologies.
The following architecture is currently used.
These are the sources the raw data comes from; a data source could be an OPC server, an XML file, or a microcontroller.
The lifters know their data source and transform the raw data into semantic data. Each lifter can act as an agent or as a standalone program.
The backend provides a service layer that the lifters use to push semantic data into the global model. It also provides a SPARQL interface for the webClient. Jena is currently used as the underlying framework, but Sesame, for example, could do the same job. Like the lifters, the backend can be started in standalone mode or as an agent.
The webClient provides a search interface for the user. Searches can be refined using filters.
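As a sketch of the lifting step described above: a minimal lifter could use the Jena API to turn a raw sensor reading into RDF statements before pushing them to the backend. All class, namespace, and property names below are hypothetical, and the classic `com.hp.hpl.jena` packages (Jena 2.x) are assumed:

```scala
import com.hp.hpl.jena.rdf.model.{Model, ModelFactory}
import com.hp.hpl.jena.vocabulary.RDF

object SensorLifter {
  // Hypothetical namespace for resources in the global model
  val NS = "http://example.org/sensors#"

  /** Lift a raw temperature reading into a Jena model. */
  def lift(sensorId: String, celsius: Double): Model = {
    val model = ModelFactory.createDefaultModel()
    model.setNsPrefix("s", NS)
    val sensor = model.createResource(NS + sensorId)
    // `type` is a Scala keyword, hence the backticks
    sensor.addProperty(RDF.`type`, model.createResource(NS + "TemperatureSensor"))
    sensor.addProperty(model.createProperty(NS, "value"), celsius.toString)
    model
  }

  def main(args: Array[String]): Unit = {
    // Serialize the lifted data as Turtle; a real lifter would
    // push the model to the backend's service layer instead.
    lift("arduino-1", 21.5).write(System.out, "TURTLE")
  }
}
```

A real lifter would replace the `write` call with a push to the backend service, or wrap the same logic in a JADE agent behaviour.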
The following software and tools are used:
- W3C RDF Validator
- rdf:about RDF Validator
- Arduino IDE - IDE for programming arduino µControllers
- Sesame - Semantic Web framework for Java
- Jena - Semantic Web framework for Java
- JADE - Agent platform
- AgentOWL - library for RDF/OWL support in JADE
- Scala - Programming language for the JVM
- sbt - a build tool for Scala
- CoffeeScript - a little language that compiles into JavaScript
- rdfstore-js - a great RDF store with SPARQL support, 2013
- rdf.js - RDF Tooling, 2011
- js3 - generates RDF out of JavaScript values and objects, 2010
- rdfQuery - JavaScript library for RDF-related processing, 2011
- Jstle - RDF serialization language, 2010
- jOWL - a jQuery plugin for processing OWL, MIT, 2009
- RDFAuthor, GPLv3, 2011
- rdf-parser - a simple RDF parser, 2006
- hercules - framework for semantic web applications, 2008
- sparql.js - JS library for processing SPARQL queries, 2007
- vie.js - Library for making RDFa-annotated content on web pages editable, 2013
- jquery-sparql - a SPARQL jQuery plugin, 2010
- node-neo4j - Neo4j graph database driver (REST API client) for Node.js, 2013
- LevelGraph - A graph database built on top of LevelUp, 2013
- Protégé - OWL-Editor
- RDF-JSON
- JQbus - XMPP query services
- SPARQL By Example - A Tutorial
- Semantic web basics
- SPARQL implementations
- Jena JavaDoc
- Jena ARQ JavaDoc
- Joseki - SPARQL Endpoint for Jena
- Scala Style Guide
- Strategies for Building Semantic Web Applications
- http://semanticweb.com/
- SemanticOverflow
- Where can I learn about the semantic web
- Linked Data Patterns
For compiling JAR files, sbt (Setup) is used.
To build a project, change to the corresponding directory and type sbt to reach the sbt prompt. To compile the sources, just type
> compile
or, if you want to compile on every change, use
> ~compile
With run you start the compiled program, and with assembly you create a JAR file with all dependencies included.
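For reference, a build definition wiring in Jena could look like the following. This is only a sketch in the `build.sbt` style of newer sbt versions (older sbt releases use a Scala project class instead), and the project name and dependency version are illustrative:

```scala
// build.sbt -- illustrative only; adjust to the sbt version in use
name := "semantic-web-experiments"

scalaVersion := "2.9.2"

// Jena from Maven Central; the version here is an example
libraryDependencies += "com.hp.hpl.jena" % "jena" % "2.6.4"
```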
To use Scala with IntelliJ, install the sbt-idea plugin as a processor and execute idea to create IDEA project files:
> *sbtIdeaRepo at http://mpeltonen.github.com/maven/
> *idea is com.github.mpeltonen sbt-idea-processor 0.1-SNAPSHOT
> update
> idea
Then install the Scala plugin in IntelliJ and have fun!
Run sbt and generate the Eclipse project and classpath files with
> eclipse
For more information, have a look at http://www.scala-ide.org/ and https://github.com/musk/SbtEclipsify
- Install Node.js and CoffeeScript.
- Compile .coffee files with
> coffee -c *.coffee
sbt supports scaladoc, so just type
> doc
to create the documentation.
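Scaladoc picks up `/** ... */` comments on classes and members. A small example of the comment style it expects (the class itself is made up for illustration):

```scala
/** A resource in the global model, identified by its URI.
  *
  * @param uri the resource URI
  */
class ModelResource(val uri: String) {
  /** Returns true if the URI lies in the given namespace. */
  def inNamespace(ns: String): Boolean = uri.startsWith(ns)
}
```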
The source code is licensed under the GPLv3.