Sunday, December 21, 2014

A Solr RDF Store and SPARQL endpoint in just 2 minutes

How to store and query RDF data in Solr? Here is a quick guide: 2 minutes / 5 steps and you will get there ;)

1. All you need

  • A shell  (in case you are on the dark side of the moon, all steps can be easily done in Eclipse or whatever IDE) 
  • Java (7)
  • Apache Maven (3.x)
  • git 

2. Checkout SolRDF code

Open a shell and type the following:

# cd /tmp
# git clone https://github.com/agazzarini/SolRDF.git solrdf-download

3. Build and Run SolRDF

# cd solrdf-download/solrdf
# mvn clean install
# cd solrdf-integration-tests
# mvn clean package cargo:run

The very first time you run this command, a lot of things will be downloaded, Solr included. At the end you should see something like this:

[INFO] Jetty 7.6.15.v20140411 Embedded started on port [8080]
[INFO] Press Ctrl-C to stop the container...

SolRDF is up and running! 

4. Add some data

Open another shell and type the following:

# curl -v http://localhost:8080/solr/store/update/bulk?commit=true \
  -H "Content-Type: application/n-triples" \
  --data-binary @/tmp/solrdf-download/solrdf/src/test/resources/sample_data/bsbm-generated-dataset.nt 

Wait a moment... ok! You just added (about) 5,000 triples!

5. Execute some queries

Open another shell and type the following:

# curl "" \
  --data-urlencode "q=SELECT * WHERE { ?s ?p ?o } LIMIT 10" \
  -H "Accept: application/sparql-results+json"

# curl "" \
  --data-urlencode "q=SELECT * WHERE { ?s ?p ?o } LIMIT 10" \
  -H "Accept: application/sparql-results+xml"

Et voilà! Enjoy! I'm still working on this... any suggestion about the idea is warmly welcome... and if you meet some annoying bug, feel free to give me a shout ;)

Monday, December 01, 2014

Loading RDF (i.e. custom) data in Solr

Update: SolRDF, a working example of the topic discussed in this post is here. Just 2 minutes and you will be able to index and query RDF data in Solr.

The Solr built-in UpdateRequestHandler supports several formats of input data. It delegates the actual data loading to a specific ContentStreamLoader, depending on the content type of the incoming request (i.e. the Content-type header of the HTTP request). Currently, these are the available content types declared in the UpdateRequestHandler class:
  • application/xml or text/xml
  • application/json or text/json
  • application/csv or text/csv
  • application/javabin
So, a client has several options for sending its data to Solr; all it needs to do is prepare the data in one of those formats and call the UpdateRequestHandler (usually located at the /update endpoint), specifying the corresponding content type:

> curl http://localhost:8080/solr/update -H "Content-Type: text/json" --data-binary @/home/agazzarini/data.json

The UpdateRequestHandler can be extended, customized, and replaced; so we can write our own UpdateRequestHandler that accepts a custom format, adding a new content type or overriding the default set of supported content types.

In this brief post, I will describe how to use Jena to load RDF data into Solr, in any format supported by the Jena IO API.
This is a quick and easy task mainly because:
  • the UpdateRequestHandler already has the logic to index data
  • the UpdateRequestHandler can be easily extended
  • Jena already provides all the parsers we need
So doing that is just a matter of subclassing UpdateRequestHandler in order to override the content type registry:

public class RdfDataUpdateRequestHandler extends UpdateRequestHandler {

    @Override
    protected Map<String, ContentStreamLoader> createDefaultLoaders(final NamedList parameters) {
        final Map<String, ContentStreamLoader> registry
                = new HashMap<String, ContentStreamLoader>();
        final ContentStreamLoader loader = new RdfDataLoader();
        for (final Lang language : RDFLanguages.getRegisteredLanguages()) {
            registry.put(language.getContentType().toHeaderString(), loader);
        }
        return registry;
    }
}
As you can see, the registry is a simple Map that associates a content type (e.g. "application/xml") with an instance of ContentStreamLoader. In our example, since all those content types map to RDF data, we create a single instance of a dedicated ContentStreamLoader (RdfDataLoader); that one instance is associated with every content type built into Jena. That means each time an incoming request has a content type like
  • text/turtle
  • application/turtle
  • application/x-turtle
  • application/rdf+xml
  • application/rdf+json
  • application/ld+json
  • text/plain (for n-triples)
  • application/n-triples
  • (others)
our RdfDataLoader will be in charge of parsing and loading the data. Note that the list above is not exhaustive; there are a lot of other content types registered in Jena (see the RDFLanguages class).
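The registry idea above — many content types, one shared loader instance — can be sketched with plain collections. This is just an illustration: Loader and RdfLoader below are stand-ins for Solr's ContentStreamLoader and the RdfDataLoader of this post, and the content-type list is a subset.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class LoaderRegistrySketch {

    // Stand-in for Solr's ContentStreamLoader
    interface Loader { }

    // Stand-in for the single, shared RdfDataLoader
    static class RdfLoader implements Loader { }

    public static void main(String[] args) {
        // One loader instance...
        final Loader loader = new RdfLoader();
        final Map<String, Loader> registry = new HashMap<String, Loader>();

        // ...registered under every RDF content type (subset shown)
        for (final String contentType : Arrays.asList(
                "text/turtle",
                "application/rdf+xml",
                "application/n-triples")) {
            registry.put(contentType, loader);
        }

        // Lookups by different media types yield the very same instance
        System.out.println(registry.get("text/turtle") == registry.get("application/rdf+xml"));
    }
}
```

The point is that a lookup by any RDF media type returns the same stateless loader, so the parsing logic lives in exactly one place.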

So, what about the format of the data? Of course, it still depends on the content type of your RDF data and, most important, it has nothing to do with the data we usually send to Solr (i.e. SolrInputDocuments serialized in some format).
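For instance, an N-Triples payload (content type application/n-triples) is simply one triple per line, each terminated by a dot; the URIs and literals below are illustrative:

```
<http://example.org/book/1> <http://purl.org/dc/elements/1.1/title> "A sample book" .
<http://example.org/book/1> <http://purl.org/dc/elements/1.1/creator> "John Doe" .
```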

The RdfDataLoader is a subclass of ContentStreamLoader

public class RdfDataLoader extends ContentStreamLoader

and, not surprisingly, it overrides the load() method:

public void load(
            final SolrQueryRequest request,
            final SolrQueryResponse response,
            final ContentStream stream,
            final UpdateRequestProcessor processor) throws Exception {

        final PipedRDFIterator<Triple> iterator = new PipedRDFIterator<Triple>();
        final PipedRDFStream<Triple> inputStream = new PipedTriplesStream(iterator);

        // We use an executor for running the parser in a separate thread
        final ExecutorService executor = Executors.newSingleThreadExecutor();

        final Runnable parser = new Runnable() {
            public void run() {
                try {
                    // Parse the incoming stream, pushing triples to the piped stream
                    RDFDataMgr.parse(
                            inputStream,
                            stream.getStream(),
                            RDFLanguages.contentTypeToLang(stream.getContentType()));
                } catch (final IOException exception) {
                    // (error handling omitted)
                }
            }
        };

        executor.submit(parser);

        while (iterator.hasNext()) {
            final Triple triple = iterator.next();

            // create and populate the Solr input document
            final SolrInputDocument document = new SolrInputDocument();
            // (field population from the triple omitted)

            // create the update command
            final AddUpdateCommand command = new AddUpdateCommand(request);

            // populate it with the input document we just created
            command.solrDoc = document;

            // add the document to the index
            processor.processAdd(command);
        }

That's all: once the request handler has been registered within Solr (i.e. in solrconfig.xml), with a file containing RDF data in N-Triples format, we can send Solr a command like this:

> curl http://localhost:8080/solr/store/update -H "Content-Type: application/n-triples" --data-binary @/home/agazzarini/triples_dogfood.nt
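For completeness, the registration mentioned above is a standard requestHandler entry in solrconfig.xml. A sketch, where the package name is a placeholder — use the actual package of your RdfDataUpdateRequestHandler:

```xml
<!-- Maps /update to the custom handler so RDF content types are accepted -->
<requestHandler name="/update" class="com.example.RdfDataUpdateRequestHandler" />
```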