KnowledgeWeb RDF Benchmark : Corese RDF engine

olivier.corby@sophia.inria.fr

http://www.inria.fr/acacia/soft/corese

September 2005

 1.  A brief description of the tool.

Corese is an RDF engine based on Conceptual Graphs. It implements RDF, RDFS, some statements of OWL Lite, and the query pattern part of SPARQL. The query language integrates additional features such as approximate search, group, count, and graph path. Corese also integrates an RDF rule language based on the CG rule model; the inference rule engine works in forward chaining. Corese is embedded in a Semantic Web server (based on Tomcat). It has been applied in more than 10 applications at INRIA and is available for download. Corese's data structures are based on the Notio CG platform, and it uses the ARP RDF parser from HP.

   2. The process followed for executing the benchmark suites (including any modifications performed in the tool).

Import test:
Copy the benchmark into a local directory.
Load the files into Corese.
Export the content of each file back to RDF using a generic query:


// Create a Corese engine, telling it which data to load and where to find it
Corese corese =
    new Corese("benchmark.properties",  // which data to load
               "/user/path/data");      // path where to find the data

// Generic query: for each source document, retrieve its triples,
// grouped and sorted by source document
String query =
    "select group ?src  sort ?src  where " +
    "source ?src (?x ?p ?y  ) " +
    "filter ( ?src ~ 'graph'  && ?x != ?src )  ";

// Execute the query and print each result graph back as RDF/XML
RDFResult res = corese.query(query);
for (Enumeration en = res.getValues(); en.hasMoreElements(); ) {
  CoreseGraph cg = (CoreseGraph) en.nextElement();
  System.out.println(cg.toRDF());
}

No a priori modifications were needed in order to load the RDF documents.

Export test:

Load the O'Comma ontology and an additional toy ontology so as to cover the whole benchmark.
Write a set of SPARQL queries, execute the queries against the ontology, and save the results in separate files.
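For illustration, one such export query might look like the following standard SPARQL pattern (a sketch; the rdfs:subClassOf pattern is a hypothetical example, not necessarily one of the actual benchmark queries):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?class ?super
WHERE { ?class rdfs:subClassOf ?super }
```

Each such query is run against the loaded ontology and its result set is saved to its own file.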

   3. The following comments on each benchmark execution:

          * The expected result of the benchmark (see bench-import.tar.gz and bench-export.tar.gz).
          * If the tool passes the benchmark or does not.

The tool passes the benchmark, except for loops in rdfs:subClassOf and rdfs:subPropertyOf, the semantics of which is undefined in Corese.
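A loop here means a cycle in the class or property hierarchy; a minimal example in N3/Turtle syntax (ex: is a hypothetical namespace):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ns#> .

ex:A rdfs:subClassOf ex:B .
ex:B rdfs:subClassOf ex:A .
```

Under RDFS semantics such a cycle simply makes ex:A and ex:B mutually subsumed (equivalent classes), but the Conceptual Graph hierarchy underlying Corese assumes a partial order, so its behaviour on such input is undefined.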


          * If not, the reasons for not passing the benchmark.
          * If the tool does not pass the benchmark and is corrected to pass it, and the changes performed.

1. The RDF pretty printer printed the source of each document in the answer to a generic query, because the source is represented as a standard resource within the graph. The pretty printer has been corrected so as not to print the source unless required.

2. The RDF pretty printer did not correctly handle the case where xxx rdf:type class1 and class1 rdf:type class2: the second type statement was not printed. This has been corrected.
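In RDF/XML this case looks like the following (hypothetical ex: namespace and resource names); a correct printer must emit both typed nodes:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/ns#">
  <!-- xxx rdf:type ex:class1 -->
  <ex:class1 rdf:about="http://example.org/ns#xxx"/>
  <!-- ex:class1 rdf:type ex:class2 -->
  <ex:class2 rdf:about="http://example.org/ns#class1"/>
</rdf:RDF>
```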

3. The RDF pretty printer printed an rdfs:label (when possible) even if not required by the query. Now it does so only at the user's option.


   4. Comments on the results.


   5. Comments on improvements on the tool.


   6. Comments on the benchmark suites.

There is no test for the specialization of the metamodel, e.g.:

c:subRelation  rdfs:subPropertyOf  rdfs:subPropertyOf
c:type  c:subRelation  rdf:type

(here c:subRelation is declared as a subproperty of rdfs:subPropertyOf and then itself used as a predicate)


There is no test for literals with language tags.


The graph66 document has graph65 as its namespace instead of graph66; is this intentional?

   7. The files containing the imported and exported ontologies in the specific format of each tool and in RDF/XML respectively.

No specific format; RDF/XML only.