These pages are deprecated; please go to the pages of our new team, Edelweiss.


Semantic Web Enabled Technologies for Wikis


     Wikis are social web sites enabling a large number of participants to modify any page or create a new page using their web browser. As they grow, wikis suffer from a number of problems: anarchical structure, a large number of pages, aging navigation paths, etc. In SweetWiki we investigate the design of a wiki built around a semantic web server, i.e. the use of semantic web technologies to support and ease the life cycle of the wiki. The very model of wikis was declaratively described using semantic web frameworks: an OWL schema captures concepts such as WikiWord, wiki page, forward and back link, author, date of modification, version, etc. This ontology is then exploited by an existing semantic search engine (Corese) embedded in our server: using RDF/S and OWL descriptions, the engine solves SPARQL queries ranging from classic WikiWord resolution to the dynamic production of indexes or "see also" recommendations. In addition, SweetWiki integrates a standard WYSIWYG editor (Kupu) that we extended to directly support semantic annotation, following the "social tagging" approach made popular by social bookmarking web sites and search engines. When editing a page, the user can freely enter keywords in an AJAX-powered text field. An auto-completion mechanism proposes existing keywords by issuing SPARQL queries to identify existing concepts with compatible labels, and shows the number of other pages sharing these concepts. With this approach, tagging is both easy (keyword-like) and motivating (real-time display of the number of pages you are linking to). Concepts are thus collected and used as in folksonomies. In order to maintain and reengineer the folksonomy, SweetWiki reuses the web-based editors available in the underlying semantic web server to edit semantic web ontologies and annotations. Another distinctive feature of SweetWiki is its persistence mechanism: unlike other wikis, its pages are stored directly in XHTML, ready to be served to browsers. 
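The auto-completion queries mentioned above are not spelled out here; as a sketch, assuming a SKOS-style labelling of tags and an illustrative page-tagging property (neither is necessarily SweetWiki's actual schema), a query of the following shape could match concepts whose label starts with what the user typed and count the pages annotated with each:

```java
// Sketch of the kind of SPARQL query a tag auto-completion field could issue.
// The skos:prefLabel and hasTag properties are illustrative assumptions,
// not SweetWiki's documented vocabulary.
public class TagCompletionQuery {

    /** Builds a query matching tags whose label starts with the typed prefix. */
    static String build(String typedPrefix) {
        return
            "PREFIX skos: <http://www.w3.org/2004/02/skos/core#>\n" +
            "SELECT ?tag (COUNT(?page) AS ?nbPages) WHERE {\n" +
            "  ?tag skos:prefLabel ?label .\n" +
            "  ?page <http://example.org/sweetwiki#hasTag> ?tag .\n" +
            "  FILTER regex(str(?label), \"^" + typedPrefix + "\", \"i\")\n" +
            "}\n" +
            "GROUP BY ?tag";
    }

    public static void main(String[] args) {
        // e.g. the user has typed "sem" so far
        System.out.println(build("sem"));
    }
}
```

Each completion candidate would then be shown with its `?nbPages` count, which is what makes the real-time feedback motivating.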
Semantic annotations are located in the wiki pages themselves using the RDFa syntax under specification at W3C. This embedded RDF is extracted using a standard GRDDL XSLT stylesheet, thus providing semantic annotations directly to the semantic search engine. Therefore, if someone sends a wiki page to someone else, the annotations follow it, and if an application crawls the wiki site it can extract the metadata and reuse them. SweetWiki also supports a powerful set of macros using JSP tags offered by the underlying semantic web server. These can be inserted directly at editing time, for instance to include the result of a SPARQL query in a page and display it as a table with sortable columns. To summarize, the overall scenario is that of regular users simply editing wiki pages and tagging them with keywords, as they do in other tools. IT managers, editors or administrators check the folksonomy being built, look at the keywords and concepts proposed by the users, and may (re)organize them by adding new relationships (e.g. subClassOf, seeAlso). The annotations that users entered are not changed, but faceted navigation and search based on semantic queries are improved by these new links.
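The GRDDL extraction step can be illustrated in miniature with the standard javax.xml.transform API. The page and the stylesheet below are toy stand-ins (the real GRDDL stylesheet produces RDF/XML, not plain text), but the principle is the same: an XSLT transform applied to the stored XHTML yields the embedded metadata.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal illustration of the GRDDL principle: applying an XSLT stylesheet to
// an XHTML page extracts the metadata embedded in it. The toy stylesheet below
// just prints the value of every element carrying a "property" attribute.
public class GrddlSketch {

    static final String PAGE =
        "<html><body><p property=\"dc:title\">SweetWiki</p></body></html>";

    static final String XSLT =
        "<xsl:stylesheet version=\"1.0\" " +
        "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">" +
        "<xsl:output method=\"text\"/>" +
        "<xsl:template match=\"*[@property]\">" +
        "<xsl:value-of select=\"@property\"/> = <xsl:value-of select=\".\"/>" +
        "</xsl:template>" +
        "</xsl:stylesheet>";

    static String extract() throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(PAGE)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extract()); // dc:title = SweetWiki
    }
}
```

Because the extraction is a plain transform over the stored XHTML, any crawler that knows the stylesheet can recover the same annotations, which is what makes the pages self-describing.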


Obsolete list; see the GRDDL RDFa profiles.
To use it in command mode:
java -jar RDFaParser-0.0.1.jar
To use it in a java application:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.Reader;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;

import fr.inria.rdfa.RDFaParser;


// Get a parser: subclass it (anonymously here) to receive the parsing events
RDFaParser aParser = new RDFaParser() {

	public void handleDataProperty(String subjectURI, String subjectNodeID,
			String propertyURI, String value, String datatype, String lang) {
		// handle the data property event (literal-valued triple)
	}

	public void handleObjectProperty(String subjectURI, String subjectNodeID,
			String propertyURI, String objectURI, String objectNodeID) {
		// handle the object property event (resource-valued triple)
	}
};

Reader aReader = null; // replace this by a reader on the source
String aBase = null;   // base or URL of the source
try {
	aParser.parse(aReader, aBase);
} catch (TransformerConfigurationException e) {
	// handle stylesheet configuration errors
} catch (FileNotFoundException e) {
	// handle a missing source
} catch (IOException e) {
	// handle read errors
} catch (TransformerException e) {
	// handle transformation errors
}
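The two handlers are event callbacks in the SAX style: the parser walks the XHTML and fires one event per triple it finds. The following self-contained sketch (independent of the RDFaParser class itself; all names in it are illustrative) shows the same pattern, with the events collected into a list of triples that could then be loaded into a triple store:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained illustration of the event-callback style used by the parser:
// an event source pushes each triple it finds to a handler, which the caller
// overrides to collect or process them. All names here are illustrative.
public class CallbackSketch {

    static abstract class TripleHandler {
        abstract void handleDataProperty(String subject, String property, String value);
    }

    /** Simulates a parser firing one event per triple found in a page. */
    static void fakeParse(TripleHandler handler) {
        handler.handleDataProperty("http://example.org/page", "dc:title", "SweetWiki");
        handler.handleDataProperty("http://example.org/page", "dc:creator", "Michel Buffa");
    }

    /** Collects every event into a list of triple strings. */
    static List<String> collect() {
        List<String> triples = new ArrayList<>();
        fakeParse(new TripleHandler() {
            void handleDataProperty(String s, String p, String v) {
                triples.add(s + " " + p + " \"" + v + "\"");
            }
        });
        return triples;
    }

    public static void main(String[] args) {
        collect().forEach(System.out::println);
    }
}
```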

Contacts within Acacia: Michel Buffa, Fabien Gandon
