16.2. SPARQL

16.2.1. SPARQL Implementation Details

Virtuoso's RDF support includes in-built support for the SPARQL query language. It also includes a number of powerful extensions that cover path traversal and business intelligence features. In addition, there is in-built security based on Virtuoso's support for row level policy-based security, custom authentication, and named graphs.

The current implementation does not support some SPARQL features:

On the other hand, Virtuoso implements some extensions to SPARQL:

The following listing shows the SPARQL grammar expressed in BNF, including all Virtuoso extensions but excluding rules for the syntax of each lexical element. Rule numbers in square brackets are from the W3C normative SPARQL grammar. An asterisk indicates that the rule differs from the W3C grammar due to Virtuoso extensions; [Virt] marks a rule that is Virtuoso-specific, and [DML] marks a data manipulation language extension from SPARUL.

[1]*	Query		 ::=  Prolog ( QueryBody | SparulAction* | ( QmStmt ('.' QmStmt)* '.'? ) )
[1]	QueryBody	 ::=  SelectQuery | ConstructQuery | DescribeQuery | AskQuery
[2]*	Prolog		 ::=  Define* BaseDecl? PrefixDecl*
[Virt]	Define		 ::=  'DEFINE' QNAME (QNAME | Q_IRI_REF | String )
[3]	BaseDecl	 ::=  'BASE' Q_IRI_REF
[4]	PrefixDecl	 ::=  'PREFIX' QNAME_NS Q_IRI_REF
[5]*	SelectQuery	 ::=  'SELECT' 'DISTINCT'? ( ( Retcol ( ','? Retcol )* ) | '*' )
			DatasetClause* WhereClause SolutionModifier
[6]	ConstructQuery	 ::=  'CONSTRUCT' ConstructTemplate DatasetClause* WhereClause SolutionModifier
[7]*	DescribeQuery	 ::=  'DESCRIBE' ( VarOrIRIrefOrBackquoted+ | '*' )
			DatasetClause* WhereClause? SolutionModifier
[8]	AskQuery	 ::=  'ASK' DatasetClause* WhereClause
[9]	DatasetClause	 ::=  'FROM' ( DefaultGraphClause | NamedGraphClause )
[10]*	DefaultGraphClause	 ::=  SourceSelector SpongeOptionList?
[11]*	NamedGraphClause	 ::=  'NAMED' SourceSelector SpongeOptionList?
[Virt]	SpongeOptionList	 ::=  'OPTION' '(' ( SpongeOption ( ',' SpongeOption )* )? ')'
[Virt]	SpongeOption	 ::=  QNAME PrecodeExpn
[Virt]	PrecodeExpn	 ::=  Expn	(* Only global variables can occur in Expn, local cannot *)
[13]	WhereClause	 ::=  'WHERE'? GroupGraphPattern
[14]	SolutionModifier	 ::=  OrderClause?
			((LimitClause OffsetClause?) | (OffsetClause LimitClause?))?
[15]	OrderClause	 ::=  'ORDER' 'BY' OrderCondition+
[16]*	OrderCondition	 ::=  ( 'ASC' | 'DESC' )?
			( FunctionCall | Var | ( '(' Expn ')' ) | ( '[' Expn ']' ) )
[17]	LimitClause	 ::=  'LIMIT' INTEGER
[18]	OffsetClause	 ::=  'OFFSET' INTEGER
[19]*	GroupGraphPattern	 ::=  '{' ( GraphPattern | SelectQuery ) '}'
[20]	GraphPattern	 ::=  Triples? ( GraphPatternNotTriples '.'? GraphPattern )?
[21]*	GraphPatternNotTriples	 ::=
			QuadMapGraphPattern
			| OptionalGraphPattern
			| GroupOrUnionGraphPattern
			| GraphGraphPattern
			| Constraint
[22]	OptionalGraphPattern	 ::=  'OPTIONAL' GroupGraphPattern
[Virt]	QuadMapGraphPattern	 ::=  'QUAD' 'MAP' ( IRIref | '*' ) GroupGraphPattern
[23]	GraphGraphPattern	 ::=  'GRAPH' VarOrBlankNodeOrIRIref GroupGraphPattern
[24]	GroupOrUnionGraphPattern	 ::=  GroupGraphPattern ( 'UNION' GroupGraphPattern )*
[25]*	Constraint	 ::=  'FILTER' ( ( '(' Expn ')' ) | BuiltInCall | FunctionCall )
[26]*	ConstructTemplate	 ::=  '{' ConstructTriples '}'
[27]	ConstructTriples	 ::=  ( Triples1 ( '.' ConstructTriples )? )?
[28]	Triples		 ::=  Triples1 ( '.' Triples? )?
[29]	Triples1	 ::=  VarOrTerm PropertyListNotEmpty | TriplesNode PropertyList
[30]	PropertyList	 ::=  PropertyListNotEmpty?
[31]	PropertyListNotEmpty	 ::=  Verb ObjectList ( ';' PropertyList )?
[32]*	ObjectList	 ::=  ObjGraphNode ( ',' ObjectList )?
[Virt]	ObjGraphNode	 ::=  GraphNode TripleOptions?
[Virt]	TripleOptions	 ::=  'OPTION' '(' TripleOption ( ',' TripleOption )? ')'
[Virt]	TripleOption	 ::=  'INFERENCE' ( QNAME | Q_IRI_REF | SPARQL_STRING )
[33]	Verb		 ::=  VarOrBlankNodeOrIRIref | 'a'
[34]	TriplesNode	 ::=  Collection | BlankNodePropertyList
[35]	BlankNodePropertyList	 ::=  '[' PropertyListNotEmpty ']'
[36]	Collection	 ::=  '(' GraphNode* ')'
[37]	GraphNode	 ::=  VarOrTerm | TriplesNode
[38]	VarOrTerm	 ::=  Var | GraphTerm
[39]*	VarOrIRIrefOrBackquoted	 ::=  Var | IRIref | Backquoted
[40]*	VarOrBlankNodeOrIRIrefOrBackquoted	 ::=  Var | BlankNode | IRIref | Backquoted
[Virt]	Retcol	 ::=  ( Var | ( '(' Expn ')' ) | RetAggCall ) ( 'AS' ( VAR1 | VAR2 ) )?
[Virt]	RetAggCall	 ::=  AggName '(' ( '*' | ( 'DISTINCT'? Var ) ) ')'
[Virt]	AggName	 ::=  'COUNT' | 'AVG' | 'MIN' | 'MAX' | 'SUM'
[41]*	Var	 ::=  VAR1 | VAR2 | GlobalVar | ( Var ( '+>' | '*>' ) IRIref )
[Virt]	GlobalVar	 ::=  QUEST_COLON_PARAMNAME | DOLLAR_COLON_PARAMNAME
			| QUEST_COLON_PARAMNUM | DOLLAR_COLON_PARAMNUM
[42]*	GraphTerm	 ::=  IRIref | RDFLiteral | ( '-' | '+' )? NumericLiteral
			| BooleanLiteral | BlankNode | NIL | Backquoted
[Virt]	Backquoted	 ::=  '`' Expn '`'
[43]	Expn		 ::=  ConditionalOrExpn
[44]	ConditionalOrExpn	 ::=  ConditionalAndExpn ( '||' ConditionalAndExpn )*
[45]	ConditionalAndExpn	 ::=  ValueLogical ( '&&' ValueLogical )*
[46]	ValueLogical	 ::=  RelationalExpn
[47]*	RelationalExpn	 ::=  NumericExpn
			( ( ('='|'!='|'<'|'>'|'<='|'>='|'LIKE') NumericExpn )
			| ( 'IN' '(' Expns ')' ) )?
[48]	NumericExpn	 ::=  AdditiveExpn
[49]	AdditiveExpn	 ::=  MultiplicativeExpn ( ('+'|'-') MultiplicativeExpn )*
[50]	MultiplicativeExpn	 ::=  UnaryExpn ( ('*'|'/') UnaryExpn )*
[51]	UnaryExpn	 ::=   ('!'|'+'|'-')? PrimaryExpn
[58]	PrimaryExpn	 ::=
			BracketedExpn | BuiltInCall | IRIrefOrFunction
			| RDFLiteral | NumericLiteral | BooleanLiteral | BlankNode | Var
[55]	IRIrefOrFunction	 ::=  IRIref ArgList?
[52]*	BuiltInCall	 ::=
			( 'STR' '(' Expn ')' )
			| ( 'IRI' '(' Expn ')' )
			| ( 'LANG' '(' Expn ')' )
			| ( 'LANGMATCHES' '(' Expn ',' Expn ')' )
			| ( 'DATATYPE' '(' Expn ')' )
			| ( 'BOUND' '(' Var ')' )
			| ( 'sameTERM' '(' Expn ',' Expn ')' )
			| ( 'isIRI' '(' Expn ')' )
			| ( 'isURI' '(' Expn ')' )
			| ( 'isBLANK' '(' Expn ')' )
			| ( 'isLITERAL' '(' Expn ')' )
			| RegexExpn
[53]	RegexExpn	 ::=  'REGEX' '(' Expn ',' Expn ( ',' Expn )? ')'
[54]	FunctionCall	 ::=  IRIref ArgList
[56]*	ArgList	 ::=  ( NIL | '(' Expns ')' )
[Virt]	Expns	 ::=  Expn ( ',' Expn )*
[59]	NumericLiteral	 ::=  INTEGER | DECIMAL | DOUBLE
[60]	RDFLiteral	 ::=  String ( LANGTAG | ( '^^' IRIref ) )?
[61]	BooleanLiteral	 ::=  'true' | 'false'
[63]	IRIref		 ::=  Q_IRI_REF | QName
[64]	QName		 ::=  QNAME | QNAME_NS
[65]*	BlankNode	 ::=  BLANK_NODE_LABEL | ( '[' ']' )
[DML]	SparulAction	 ::=
			CreateAction | DropAction | LoadAction
			| InsertAction | InsertDataAction | DeleteAction | DeleteDataAction
			| ModifyAction | ClearAction
[DML]*	InsertAction	 ::=
			'INSERT' ( ( 'IN' | 'INTO' ) 'GRAPH' ( 'IDENTIFIED' 'BY' )? )? PrecodeExpn
			ConstructTemplate ( DatasetClause* WhereClause SolutionModifier )?
[DML]*	InsertDataAction	 ::=
			'INSERT' 'DATA' ( ( 'IN' | 'INTO' ) 'GRAPH' ( 'IDENTIFIED' 'BY' )? )?
			PrecodeExpn ConstructTemplate
[DML]*	DeleteAction	 ::=
			'DELETE' ( 'FROM' 'GRAPH' ( 'IDENTIFIED' 'BY' )? )? PrecodeExpn
			ConstructTemplate ( DatasetClause* WhereClause SolutionModifier )?
[DML]*	DeleteDataAction	 ::=
			'DELETE' 'DATA' ( 'FROM' 'GRAPH' ( 'IDENTIFIED' 'BY' )? )?
			PrecodeExpn ConstructTemplate
[DML]*	ModifyAction	 ::=
			'MODIFY' ( 'GRAPH' ( 'IDENTIFIED' 'BY' )? PrecodeExpn )?
			'DELETE' ConstructTemplate 'INSERT' ConstructTemplate
			( DatasetClause* WhereClause SolutionModifier )?
[DML]*	ClearAction	 ::=  'CLEAR' ( 'GRAPH' ( 'IDENTIFIED' 'BY' )? PrecodeExpn )?
[DML]*	LoadAction	 ::=  'LOAD' PrecodeExpn
			( ( 'IN' | 'INTO' ) 'GRAPH' ( 'IDENTIFIED' 'BY' )? PrecodeExpn )?
[DML]*	CreateAction	 ::=  'CREATE' 'SILENT'? 'GRAPH' ( 'IDENTIFIED' 'BY' )? PrecodeExpn
[DML]*	DropAction	 ::=  'DROP' 'SILENT'? 'GRAPH' ( 'IDENTIFIED' 'BY' )? PrecodeExpn
[Virt]	QmStmt		 ::=  QmSimpleStmt | QmCreateStorage | QmAlterStorage
[Virt]	QmSimpleStmt	 ::=
			QmCreateIRIClass | QmCreateLiteralClass | QmDropIRIClass | QmDropLiteralClass
			| QmCreateIRISubclass | QmDropQuadStorage | QmDropQuadMap
[Virt]	QmCreateIRIClass	 ::=  'CREATE' 'IRI' 'CLASS' QmIRIrefConst
			( ( String QmSqlfuncArglist )
			| ( 'USING' QmSqlfuncHeader ',' QmSqlfuncHeader ) )
[Virt]	QmCreateLiteralClass	 ::=  'CREATE' 'LITERAL' 'CLASS' QmIRIrefConst
			'USING' QmSqlfuncHeader ',' QmSqlfuncHeader QmLiteralClassOptions?
[Virt]	QmDropIRIClass	 ::=  'DROP' 'IRI' 'CLASS' QmIRIrefConst
[Virt]	QmDropLiteralClass	 ::=  'DROP' 'LITERAL' 'CLASS' QmIRIrefConst
[Virt]	QmCreateIRISubclass	 ::=  'IRI' 'CLASS' QmIRIrefConst 'SUBCLASS' 'OF' QmIRIrefConst
[Virt]	QmIRIClassOptions	 ::=  'OPTION' '(' QmIRIClassOption (',' QmIRIClassOption)* ')'
[Virt]	QmIRIClassOption	 ::=
			'BIJECTION'
			| 'DEREF'
			| 'RETURNS' STRING ('UNION' STRING)*
[Virt]	QmLiteralClassOptions	 ::=  'OPTION' '(' QmLiteralClassOption (',' QmLiteralClassOption)* ')'
[Virt]	QmLiteralClassOption	 ::=
			( 'DATATYPE' QmIRIrefConst )
			| ( 'LANG' STRING )
			| 'BIJECTION'
			| 'DEREF'
			| 'RETURNS' STRING ('UNION' STRING)*
[Virt]	QmCreateStorage	 ::=  'CREATE' 'QUAD' 'STORAGE' QmIRIrefConst QmSourceDecl* QmMapTopGroup
[Virt]	QmAlterStorage	 ::=  'ALTER' 'QUAD' 'STORAGE' QmIRIrefConst QmSourceDecl* QmMapTopGroup
[Virt]	QmDropStorage	 ::=  'DROP' 'QUAD' 'STORAGE' QmIRIrefConst
[Virt]	QmDropQuadMap	 ::=  'DROP' 'QUAD' 'MAP' 'GRAPH'? QmIRIrefConst
[Virt]	QmDrop	 ::=  'DROP' 'GRAPH'? QmIRIrefConst
[Virt]	QmSourceDecl	 ::=
			( 'FROM' QTABLE 'AS' PLAIN_ID QmTextLiteral* )
			| ( 'FROM' PLAIN_ID 'AS' PLAIN_ID QmTextLiteral* )
			| QmCondition
[Virt]	QmTextLiteral	 ::=  'TEXT' 'XML'? 'LITERAL' QmSqlCol ( 'OF' QmSqlCol )? QmTextLiteralOptions?
[Virt]	QmTextLiteralOptions	 ::=  'OPTION' '(' QmTextLiteralOption ( ',' QmTextLiteralOption )* ')'
[Virt]	QmMapTopGroup	 ::=  '{' QmMapTopOp ( '.' QmMapTopOp )* '.'? '}'
[Virt]	QmMapTopOp	 ::=  QmMapOp | QmDropQuadMap | QmDrop
[Virt]	QmMapGroup	 ::=  '{' QmMapOp ( '.' QmMapOp )* '.'? '}'
[Virt]	QmMapOp		 ::=
			( 'CREATE' QmIRIrefConst 'AS' QmMapIdDef )
			| ( 'CREATE' 'GRAPH'? QmIRIrefConst 'USING' 'STORAGE' QmIRIrefConst QmOptions? )
			| ( QmNamedField+ QmOptions? QmMapGroup )
			| QmTriples1
[Virt]	QmMapIdDef	 ::=  QmMapTriple | ( QmNamedField+ QmOptions? QmMapGroup )
[Virt]	QmMapTriple	 ::=  QmFieldOrBlank QmVerb QmObjField
[Virt]	QmTriples1	 ::=  QmFieldOrBlank QmProps
[Virt]	QmNamedField	 ::=  ('GRAPH'|'SUBJECT'|'PREDICATE'|'OBJECT') QmField
[Virt]	QmProps		 ::=  QmProp ( ';' QmProp )?
[Virt]	QmProp		 ::=  QmVerb QmObjField ( ',' QmObjField )*
[Virt]	QmObjField	 ::=  QmFieldOrBlank QmCondition* QmOptions?
[Virt]	QmIdSuffix	 ::=  'AS' QmIRIrefConst
[Virt]	QmVerb		 ::=  QmField | ( '[' ']' ) | 'a'
[Virt]	QmFieldOrBlank	 ::=  QmField | ( '[' ']' )
[Virt]	QmField		 ::=
			NumericLiteral
			| RdfLiteral
			| ( QmIRIrefConst ( '(' ( QmSqlCol ( ',' QmSqlCol )* )? ')' )? )
			| QmSqlCol
[Virt]	QmCondition	 ::=  'WHERE' ( ( '(' SQLTEXT ')' ) | String )
[Virt]	QmOptions	 ::=  'OPTION' '(' QmOption ( ',' QmOption )* ')'
[Virt]	QmOption	 ::=  ( 'SOFT'? 'EXCLUSIVE' ) | ( 'ORDER' INTEGER ) | ( 'USING' PLAIN_ID )
[Virt]	QmSqlfuncHeader	 ::=  'FUNCTION' SQL_QTABLECOLNAME QmSqlfuncArglist 'RETURNS' QmSqltype
[Virt]	QmSqlfuncArglist	 ::=  '(' ( QmSqlfuncArg ( ',' QmSqlfuncArg )* )? ')'
[Virt]	QmSqlfuncArg	 ::=  ('IN' | QmSqlId) QmSqlId QmSqltype
[Virt]	QmSqltype	 ::=  QmSqlId ( 'NOT' 'NULL' )?
[Virt]	QmSqlCol	 ::=  QmSqlId | spar_qm_sql_id
[Virt]	QmSqlId		 ::=  PLAIN_ID | 'TEXT' | 'XML'
[Virt]	QmIRIrefConst	 ::=  IRIref | ( 'IRI' '(' String ')' )

Example: Using OFFSET and LIMIT

Virtuoso uses a zero-based index for OFFSET. Thus, in the example below, the query returns 1000 rows starting from, and including, record 9001 of the result set. Note that the MaxSortedTopRows parameter in the [Parameters] section of the virtuoso.ini configuration file defaults to 10000, so for this example its value will need to have been increased beforehand.

SQL>SPARQL
SELECT ?name
WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }   # any pattern binding ?name will do
ORDER BY ?name
OFFSET 9000
LIMIT 1000 ;

LIMIT applies to the solution produced by the graph patterns specified in the WHERE clause. This means that SELECT and CONSTRUCT/DESCRIBE queries behave slightly differently. In the case of a SELECT, the translation is direct, i.e. LIMIT 4 yields at most 4 records in the result set. In the case of a CONSTRUCT, where the solution is a graph (so duplicates and/or unbound variables are common), LIMIT is effectively a maximum-triples threshold of [Solution Triples] x [LIMIT].

Example query:

SQL>SPARQL
prefix dct:<http://purl.org/dc/terms/>
prefix rdfs:<http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT { ?resource dct:title ?title ;
                      a ?type }

FROM <http://msone.computas.no/graphs/inferred/classification>
FROM <http://msone.computas.no/graphs>
FROM <http://msone.computas.no/graphs/instance/nfi>
FROM <http://msone.computas.no/graphs/instance/mo>
FROM <http://msone.computas.no/graphs/ontology/mediasone>
FROM <http://msone.computas.no/graphs/vocab/mediasone>
FROM <http://msone.computas.no/graphs/inferred/nfi/realisation1>
FROM <http://msone.computas.no/graphs/inferred/mo/realisation1>
FROM <http://msone.computas.no/graphs/inferred/nfi/realisation2>
FROM <http://msone.computas.no/graphs/inferred/mo/realisation2>
FROM <http://msone.computas.no/graphs/inferred/agent-classification>
FROM <http://msone.computas.no/graphs/ontology/mediasone/agent>

WHERE {
  {
    ?resource a ?type .
    FILTER (?type = <http://www.w3.org/2002/07/owl#Class> ) .
    ?resource rdfs:label ?title .
  } UNION {
    ?resource a ?type .
    FILTER (?type in (
            <http://musicbrainz.org/mm/mm-2.1#Track> ,
            <http://www.csd.abdn.ac.uk/~ggrimnes/dev/imdb/IMDB#Movie> ,
            <http://xmlns.com/foaf/0.1/Image> ,
            <http://www.computas.com/mediasone#Text> ) ) .
    ?resource dct:title ?title .
  }
  FILTER regex(?title, "turi", "i")
}
ORDER BY ?title LIMIT 4 OFFSET 0

Example: Working Around the Sorted LIMIT/OFFSET Row Limit

The DBpedia SPARQL endpoint is configured with the following INI setting:

MaxSortedTopRows = 40000

The setting above sets a threshold on the number of rows that can be sorted. Thus, a basic SPARQL query with OFFSET and LIMIT values that exceed this threshold, such as the following, will hit the hard limit set in the INI file:

DEFINE sql:big-data-const 0
SELECT DISTINCT  ?p ?s
FROM <http://dbpedia.org>
WHERE
  {
    ?s ?p <http://dbpedia.org/resource/Germany>
  }
ORDER BY ASC(?p)
OFFSET  40000
LIMIT   1000

returns the following error on execution:

HttpException: 500 SPARQL Request Failed

Virtuoso 22023 Error SR353: Sorted TOP clause specifies more then 41000 rows to sort.
Only 40000 are allowed.
Either decrease the offset and/or row count or use a scrollable cursor

To work around the limit outlined above, you can use a subquery, which makes better use of the temporary storage associated with this kind of query. An example would take the form:

SELECT ?p ?s
WHERE
  {
    {
      SELECT DISTINCT ?p ?s
      FROM <http://dbpedia.org>
      WHERE
        {
          ?s ?p <http://dbpedia.org/resource/Germany>
        } ORDER BY ASC(?p)
    }
  }
OFFSET 50000
LIMIT 1000

16.2.1.1. SPARQL and XQuery Core Function Library

In the current implementation, the XQuery Core Function Library is not available from SPARQL.

As a temporary workaround, string parsing functions are made available, because they are widely used in W3C DAWG examples and the like. They are:

xsd:boolean (in strg any) returns integer
xsd:dateTime (in strg any) returns datetime
xsd:double (in strg varchar) returns double precision
xsd:float (in strg varchar) returns float
xsd:integer (in strg varchar) returns integer

(assuming that the query contains the declaration: 'PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>')
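
For example, assuming the default graph contains literal objects that parse as integers (the graph IRI and triple pattern below are placeholders), these functions can be used inside a FILTER:

SQL>SPARQL
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?s ?o
FROM <http://example.com/data>
WHERE
  {
    ?s ?p ?o .
    FILTER ( xsd:integer (?o) > 10 )
  }
LIMIT 10 ;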



16.2.2. Query Constructs

Starting with version 5.0, Virtuoso supports filtering the objects of RDF triples by a given predicate.

16.2.2.1. Examples

The boolean functions bif:contains, bif:xcontains, bif:xpath_contains and bif:xquery_contains can be used for objects that come from Linked Data Views as well as for regular "physical" triples. Each of these functions takes two arguments and returns a boolean value. The first argument is a local variable which should also be used as an object field in the group pattern where the filter condition is placed.

In order to execute the examples below please run these commands:

SQL>SPARQL CLEAR GRAPH <http://MyTest.com>;
DB.DBA.RDF_QUAD_URI_L ('http://MyTest.com', 'sxml1', 'p_all1', xtree_doc ('<Hello>world</Hello>'));
DB.DBA.RDF_QUAD_URI_L ('http://MyTest.com', 'sxml2', 'p_all2', xtree_doc ('<Hello2>world</Hello2>'));
DB.DBA.RDF_QUAD_URI_L ('http://MyTest.com', 'nonxml1', 'p_all3', 'Hello world');
VT_INC_INDEX_DB_DBA_RDF_OBJ();
DB.DBA.RDF_OBJ_FT_RULE_ADD ('http://MyTest.com', null, 'My test RDF Data');

bif:contains

SQL>SPARQL
SELECT *
FROM <http://MyTest.com>
WHERE { ?s ?p ?o . ?o bif:contains "world" };

s             p         o
VARCHAR       VARCHAR   VARCHAR
_______________________________________________________________________________

sxml1         p_all1    <Hello>world</Hello>
nonxml1       p_all3    Hello world
sxml2         p_all2    <Hello2>world</Hello2>

3 Rows. -- 20 msec.

bif:xcontains

SQL>SPARQL
SELECT *
FROM <http://MyTest.com>
WHERE { ?s ?p ?o . ?o bif:xcontains "//Hello[text-contains (., 'world')]" };
s                  p          o
VARCHAR            VARCHAR    VARCHAR
_______________________________________________________________________________

sxml1              p_all1     <Hello>world</Hello>

1 Rows. -- 10 msec.

bif:xpath_contains

SQL>SPARQL
SELECT *
FROM <http://MyTest.com>
WHERE { ?s ?p ?o . ?o bif:xpath_contains "//*" };

s             p         o
VARCHAR       VARCHAR   VARCHAR
_______________________________________________________________________________

sxml1         p_all1    <Hello>world</Hello>
sxml2         p_all2    <Hello2>world</Hello2>

2 Rows. -- 20 msec.

bif:xquery_contains

SQL>SPARQL
SELECT *
FROM <http://MyTest.com>
WHERE { ?s ?p ?o . ?o bif:xquery_contains "//Hello2 , world" };

s             p         o
VARCHAR       VARCHAR   VARCHAR
_______________________________________________________________________________

sxml2         p_all2    <Hello2>world</Hello2>

1 Rows. -- 20 msec.


16.2.3. SPARQL Web Services & APIs

16.2.3.1. Introduction

The Virtuoso SPARQL query service implements the SPARQL Protocol for RDF (W3C Working Draft 25 January 2006) providing SPARQL query processing for RDF data available on the open internet.

The query processor extends the standard protocol to provide support for multiple output formats. At present this uses additional query parameters.

Supported features include:

Virtuoso also supports the /sparql-graph-crud/ web service endpoint, which implements the current draft of the W3C SPARQL Graph Update protocol. Both the /sparql/ and /sparql-graph-crud/ endpoints use the same "SPARQL" user account, so this user must be a member of the SPARQL_UPDATE group in order to modify data via the Graph Update protocol. Note that the /sparql/ endpoint has a /sparql-auth/ variant that uses web authentication; similarly, /sparql-graph-crud/ has a /sparql-graph-crud-auth/ variant. Once a user is a member of the SPARQL_UPDATE group, he or she can modify the stored data via /sparql-graph-crud-auth/ as well as via /sparql-auth/. The /sparql-graph-crud/ endpoint is intended primarily for serving requests from applications, not for manual interaction via a browser. See our SPARQL Authentication section for more information.
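
For example, a named graph can be replaced or fetched over the Graph Update protocol with plain HTTP requests. The sketch below is illustrative only: the host, credentials, graph IRI, and file name are placeholders, and it assumes the graph IRI is passed in the graph-uri query parameter:

# replace the graph <urn:example:g1> with the contents of data.ttl,
# authenticating as a SQL account that is a member of SPARQL_UPDATE
curl --digest --user dba:dba -T data.ttl \
     "http://localhost:8890/sparql-graph-crud-auth?graph-uri=urn:example:g1"

# fetch the same graph back as Turtle via the unauthenticated endpoint
curl -H "Accept: text/turtle" \
     "http://localhost:8890/sparql-graph-crud?graph-uri=urn:example:g1"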


16.2.3.2. Service Endpoint

Virtuoso uses the pre-assigned endpoints "/sparql" and "/SPARQL" as the defaults for exposing its REST based SPARQL Web Services.

The port number associated with the SPARQL services is determined by the 'ServerPort' key value in the '[HTTPServer]' section of the virtuoso.ini file. Thus, if the Virtuoso instance is configured to listen on port 8890, the SPARQL endpoint would be accessible at http://example.com:8890/sparql/.

The SPARQL endpoint supports both GET and POST requests. The client chooses between GET and POST automatically, using the length of the query text as the criterion. If the SPARQL endpoint is accessed without the requisite SPARQL protocol parameters, an interactive HTML page for composing SPARQL queries is presented.
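
For example, a query can be issued with a simple GET request against the endpoint; the host name below is a placeholder and the query text is URL-encoded:

curl -H "Accept: application/sparql-results+json" \
     "http://example.com:8890/sparql?query=SELECT%20%2A%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20LIMIT%2010"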

16.2.3.2.1. Customizing SPARQL Endpoint Page

The SPARQL Endpoint page can be customized using an XSL stylesheet.

This is done by setting a registry key with isql:

SQL> registry_set ('sparql_endpoint_xsl', 'http://host:port/path/isparql.xsl');

where host, port, path, and the file name isparql.xsl should be replaced to point at your own stylesheet.



16.2.3.3. SPARQL Protocol Extensions

16.2.3.3.1. Request Parameters
Table: 16.2.3.3.1.1. Request Parameters List
Parameter Notes Required?
service Service URI such as 'http://example.com/sparql/' Yes
query Text of the query Yes
dflt_graph Default graph URI (string or NULL) No
named_graphs Vector of named graphs (or NULL to prevent overriding named graphs specified in the query) Yes
req_hdr Additional HTTP headers that should be passed to the service, e.g. 'Host: ...' No
maxrows Limit on the numbers of rows that should be returned (the actual size of the result set may differ) No
xslt-uri Absolute URL of any XSLT stylesheet file to be applied to the SPARQL query results No
timeout Timeout for "anytime" query execution, in milliseconds, values less than 1000 are ignored; see Anytime Queries for more details No


16.2.3.3.2. Response Codes

If the query is a CONSTRUCT or a DESCRIBE then the result set consists of a single row and a single column. The value inside is a dictionary of triples in 'long valmode'. Note that the dictionary object cannot be sent to a SQL client, say, via ODBC. The client may lose the database connection trying to fetch a result set row that contains a dictionary object. This disconnection does not disrupt the server, so the client may readily reconnect to the server, but the disconnected transaction will have been rolled back.
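
One way to avoid fetching a dictionary object over ODBC is to have the server serialize the constructed triples, using the output:format pragma shown later in this chapter, so the result set contains plain text instead. A minimal sketch, reusing the <http://MyTest.com> graph from the earlier full-text examples:

SQL>SPARQL define output:format "NT"
CONSTRUCT { ?s ?p ?o }
FROM <http://MyTest.com>
WHERE { ?s ?p ?o }
LIMIT 10 ;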

See Also:

16.2.3.3.3. Response Format

All standard SPARQL protocol MIME types are supported by the SPARQL web service client. Moreover, the SPARQL web service endpoint supports additional MIME types and, in some cases, additional query types for standard MIME types.

Server Response Formats
Table: 16.2.3.3.3.1. Server Response Formats
Content-Type SPARQL query type Description
'application/sparql-results+xml' SELECT, ASK Canonical XML presentation of SPARQL result set
'text/rdf+n3' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'text/rdf+ttl' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'text/rdf+turtle' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'text/turtle' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'text/n3' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'application/turtle' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'application/x-turtle' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle
'application/x-nice-turtle' SELECT, ASK, CONSTRUCT, DESCRIBE Turtle, like above, but the server will try to use "list" and "[...]" notations to make the document easier to read. This formatting is a slow procedure so long results will be formatted as plain Turtle.
'text/rdf+nt' SELECT Format for NT (each triple is printed separately without abbreviations)
'text/plain' SELECT Format for NT (each triple is printed separately without abbreviations)
'text/ntriples' SELECT, CONSTRUCT, DESCRIBE Format for NT (each triple is printed separately without abbreviations)
'application/x-trig' SELECT, CONSTRUCT, DESCRIBE TriG syntax for result sets, triples and quads (or sets of graphs with triples). When no quads are present, the output is the same as for Turtle.
'application/rdf+xml' SELECT, CONSTRUCT, DESCRIBE Canonical RDF/XML presentation
'application/soap+xml' SELECT SOAP XML
'application/soap+xml;11' SELECT SOAP XML
'text/html' SELECT HTML document for plain browsing; it's a TABLE for result set and HTML with micro-data for triples
'text/md+html' SELECT, CONSTRUCT, DESCRIBE HTML with microdata; for triples only
'text/microdata+html' SELECT, CONSTRUCT, DESCRIBE HTML with microdata; for triples only
'text/x-html+ul' SELECT, CONSTRUCT, DESCRIBE HTML with triples grouped into hierarchical list
'text/x-html+tr' SELECT, CONSTRUCT, DESCRIBE HTML with triples in form of a table
'application/vnd.ms-excel' SELECT HTML table for loading data into spreadsheets
'text/csv' SELECT, CONSTRUCT, DESCRIBE Comma-separated values
'text/tab-separated-values' SELECT Tab-separated values
'application/javascript' SELECT JavaScript data fragment
'application/json' SELECT JSON
'application/sparql-results+json' SELECT, ASK JSON result set
'application/odata+json' SELECT, ASK, CONSTRUCT, DESCRIBE JSON in ODATA style
'application/microdata+json' SELECT, CONSTRUCT, DESCRIBE Microdata as JSON; for triples only
'application/rdf+json' CONSTRUCT, DESCRIBE JSON in TALIS style; for triples only
'application/x-rdf+json' CONSTRUCT, DESCRIBE JSON in TALIS style; for triples only
'application/x-json+ld' CONSTRUCT, DESCRIBE JSON in Linked Data style; for triples only
'application/ld+json' CONSTRUCT, DESCRIBE JSON in Linked Data style; for triples only
'text/cxml' SELECT, CONSTRUCT, DESCRIBE CXML output for rendering in the Pivot Viewer of Microsoft Silverlight. Result sets and triples are handled in different ways.
'text/cxml+qrcode' SELECT,CONSTRUCT, DESCRIBE CXML output with QRcode imprinted into each picture; for result sets and triples
'application/atom+xml' SELECT, CONSTRUCT, DESCRIBE Atom-style XML
'application/xhtml+xml' SELECT RDFa placed into XHTML; for triples only


Client Response Formats
Table: 16.2.3.3.3.2. Client Response Formats
Content-Type SPARQL query type Description
'application/sparql-results+xml' SELECT, ASK Canonical XML presentation of SPARQL result set
'text/rdf+n3' CONSTRUCT, DESCRIBE Turtle
'text/rdf+ttl' CONSTRUCT, DESCRIBE Turtle
'text/rdf+turtle' CONSTRUCT, DESCRIBE Turtle
'text/turtle' CONSTRUCT, DESCRIBE Turtle
'text/n3' CONSTRUCT, DESCRIBE Turtle
'application/turtle' CONSTRUCT, DESCRIBE Turtle
'application/x-turtle' CONSTRUCT, DESCRIBE Turtle
'application/rdf+xml' CONSTRUCT, DESCRIBE Canonical RDF/XML presentation


The current implementation does not support returning the results of SELECT as RDF/XML or 'sparql-results-2'.

If the HTTP header returned by the remote server does not contain a 'Content-Type' line, the client may guess MIME type from the text of the returned body.

Error messages returned from the service are returned as XML documents, using the MIME type application/xml. The documents consist of a single element containing an error message.


16.2.3.3.4. Additional Response Formats -- SELECT

Use the format parameter to select one of the following alternate output formats:

Table: 16.2.3.3.4.1. Additional Response formats list -- SELECT
Format Value Description Mimetype
HTML The result is an HTML document containing a query summary and tabular results. The format is human-readable but not intended for use by applications, because it makes strings indistinguishable from IRIs and loses other details such as the exact datatypes of returned values. text/html
json Two separate MIME types exist for JSON: the JSON serialization of result sets is 'application/sparql-results+json' and conforms to the draft specification "Serializing SPARQL Query Results in JSON"; the JSON serialization of triples is 'application/rdf+json' and is interoperable with Talis. Sometimes a client needs JSON but does not know what type of query it is sending to the Virtuoso web service endpoint. In this case the client can specify either the single MIME type 'application/json' or both 'application/sparql-results+json' and 'application/rdf+json' in the "Accept" header line, and Virtuoso will choose the appropriate one automatically. A similar trick works for other result types: Virtuoso inspects the whole "Accept" header line to find the most appropriate return type for the given query. application/sparql-results+json
json application/rdf+json
js JavaScript serialization of the results generates an HTML table with the CSS class sparql. The table contains a column indicating the row number and additional columns for each query variable. Each query solution contributes one row of the table. Unbound variables are indicated with a non-breaking space in the appropriate table cells. application/javascript
table text/html
XML text/html
TURTLE text/html
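
For example, appending format=json to a request URL selects the JSON serialization listed above (the host name and query below are placeholders):

http://example.com:8890/sparql?query=SELECT%20%2A%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20LIMIT%2010&format=json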


16.2.3.3.5. Additional Response Formats -- CONSTRUCT & DESCRIBE

Example output of DESCRIBE in rdf+json serialization format

  1. Go to the SPARQL endpoint at http://host:port/sparql, for example http://dbpedia.org/sparql
  2. Enter a query in the "Query text" area, for example:
    DESCRIBE <http://dbpedia.org/resource/%22S%22_Bridge_II>
    
  3. For "Display Results As", select: JSON
  4. Click the "Run Query" button.
  5. The following output should be produced:
    {
      { 'http://dbpedia.org/resource/%22S%22_Bridge_II' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/ontology/Place' } ,
          { 'type' : 'uri', 'value' : 'http://dbpedia.org/ontology/Resource' } ,
          { 'type' : 'uri', 'value' : 'http://dbpedia.org/ontology/HistoricPlace' } } ,
        { 'http://dbpedia.org/ontology/added' : { 'type' : 'literal', 'value' : '1973-04-23' , 'datatype' : 'http://www.w3.org/2001/XMLSchema#date' } } ,
        { 'http://www.w3.org/2003/01/geo/wgs84_pos#lat' : { 'type' : 'literal', 'value' : 39.99305725097656 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#float' } } ,
        { 'http://www.w3.org/2003/01/geo/wgs84_pos#long' : { 'type' : 'literal', 'value' : -81.74666595458984 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#float' } } ,
        { 'http://dbpedia.org/property/wikiPageUsesTemplate' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Template:infobox_nrhp' } } ,
        { 'http://dbpedia.org/property/name' : { 'type' : 'literal', 'value' : '"S" Bridge II' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/property/nearestCity' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/New_Concord%2C_Ohio' } ,
          { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Ohio' } } ,
        { 'http://dbpedia.org/property/latDirection' : { 'type' : 'literal', 'value' : 'N' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/property/governingBody' : { 'type' : 'literal', 'value' : 'State' , 'lang' : 'en' } } ,
        { 'http://www.georss.org/georss/point' : { 'type' : 'literal', 'value' : '39.99305556 -81.74666667' } ,
          { 'type' : 'literal', 'value' : '39.9930555556 -81.7466666667' } } ,
        { 'http://xmlns.com/foaf/0.1/name' : { 'type' : 'literal', 'value' : '"S" Bridge II' } } ,
        { 'http://dbpedia.org/property/latDegrees' : { 'type' : 'literal', 'value' : 39 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/latMinutes' : { 'type' : 'literal', 'value' : 59 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/latSeconds' : { 'type' : 'literal', 'value' : 35 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/longDirection' : { 'type' : 'literal', 'value' : 'W' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/property/architect' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Benjamin_Latrobe' } } ,
        { 'http://dbpedia.org/property/added' : { 'type' : 'literal', 'value' : '1973-04-23' , 'datatype' : 'http://www.w3.org/2001/XMLSchema#date' } } ,
        { 'http://www.w3.org/2000/01/rdf-schema#label' : { 'type' : 'literal', 'value' : '"S" Bridge II (Muskingum County, Ohio)' , 'lang' : 'nl' } ,
          { 'type' : 'literal', 'value' : '"S" Bridge II' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/ontology/architect' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Benjamin_Latrobe' } } ,
        { 'http://xmlns.com/foaf/0.1/img' : { 'type' : 'uri', 'value' : 'http://upload.wikimedia.org/wikipedia/commons/d/d4/FoxRunS-Bridge_NewConcordOH.jpg' } } ,
        { 'http://dbpedia.org/property/locmapin' : { 'type' : 'literal', 'value' : 'Ohio' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/property/refnum' : { 'type' : 'literal', 'value' : 73001513 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/abstract' : { 'type' : 'literal', 'value' : '"S" Bridge II is a historic S bridge near New Concord, Ohio, United States. A part of the National Road, the first federally-financed highway in the United States, it was built in 1828. Its peculiar shape, typical for an S bridge, is designed to minimize the span and allow easy access. In 1973, it was listed on the National Register of Historic Places.' , 'lang' : 'en' } ,
          { 'type' : 'literal', 'value' : '"S" Bridge II bij New Concord, Ohio, is een deel van de National Road, een van de eerste highways die door de federale overheid vanaf 1811 werden aangelegd. De vorm, die de brug als een S Brug kenmerkt, is bedoeld om de overspanning zo klein mogelijk te houden en toch gemakkelijk toegang tot de brug te verlenen. De brug staat sinds 1973 op de lijst van het National Register of Historic Places als monument vermeld.' , 'lang' : 'nl' } } ,
        { 'http://www.w3.org/2004/02/skos/core#subject' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Category:National_Register_of_Historic_Places_in_Ohio' } ,
          { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Category:Bridges_on_the_National_Register_of_Historic_Places' } } ,
        { 'http://dbpedia.org/ontology/nearestCity' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/Ohio' } ,
          { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/New_Concord%2C_Ohio' } } ,
        { 'http://xmlns.com/foaf/0.1/depiction' : { 'type' : 'uri', 'value' : 'http://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/FoxRunS-Bridge_NewConcordOH.jpg/200px-FoxRunS-Bridge_NewConcordOH.jpg' } } ,
        { 'http://dbpedia.org/property/caption' : { 'type' : 'literal', 'value' : 'The bridge in the fall' , 'lang' : 'en' } } ,
        { 'http://dbpedia.org/property/longDegrees' : { 'type' : 'literal', 'value' : 81 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/longMinutes' : { 'type' : 'literal', 'value' : 44 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://dbpedia.org/property/longSeconds' : { 'type' : 'literal', 'value' : 48 , 'datatype' : 'http://www.w3.org/2001/XMLSchema#integer' } } ,
        { 'http://www.w3.org/2000/01/rdf-schema#comment' : { 'type' : 'literal', 'value' : '"S" Bridge II is a historic S bridge near New Concord, Ohio, United States.' , 'lang' : 'en' } ,
          { 'type' : 'literal', 'value' : '"S" Bridge II bij New Concord, Ohio, is een deel van de National Road, een van de eerste highways die door de federale overheid vanaf 1811 werden aangelegd.' , 'lang' : 'nl' } } ,
        { 'http://xmlns.com/foaf/0.1/page' : { 'type' : 'uri', 'value' : 'http://en.wikipedia.org/wiki/%22S%22_Bridge_II' } } } ,
      { 'http://dbpedia.org/resource/%22S%22_Bridge_II_%28Muskingum_County%2C_Ohio%29' : { 'http://dbpedia.org/property/redirect' : { 'type' : 'uri', 'value' : 'http://dbpedia.org/resource/%22S%22_Bridge_II' } } }
    }
    
    

Example output of CONSTRUCT in rdf+json serialization format

  1. Go to the SPARQL endpoint at http://host:port/sparql, for example http://dbpedia.org/sparql
  2. Enter a query in the "Query text" area, for example:
    CONSTRUCT
    {
     ?s a ?Concept .
    }
    WHERE
    {
     ?s a ?Concept .
    }
    LIMIT 10
    
  3. For "Display Results As", select: JSON
  4. Click the "Run Query" button.
  5. The following output should be produced:
    {
      { 'http://dbpedia.org/ontology/Place' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/Area' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/City' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/River' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/Road' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/Lake' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/LunarCrater' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/ShoppingMall' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/Park' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } } ,
      { 'http://dbpedia.org/ontology/SiteOfSpecialScientificInterest' : { 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' : { 'type' : 'uri', 'value' : 'http://www.w3.org/2002/07/owl#Class' } } }
    }
    

For interoperability with clients that were developed before the current versions of the SPARQL protocol and format specifications were issued, Virtuoso supports some obsolete variants of standard MIME types. 'text/rdf+n3', 'text/rdf+ttl', 'application/turtle' and 'application/x-turtle' are understood for Turtle output, and 'application/x-rdf+json' and 'application/rdf+json' are understood for "Serializing SPARQL Query Results in JSON". When a client specifies an obsolete MIME type but not its standard variant, the obsolete variant is returned for interoperability.


16.2.3.3.6. View Results Page of SPARQL Query Execution

To view the SPARQL Endpoint results page for a SPARQL query, use the query parameter, i.e. the SPARQL Protocol URL should look like:

http://cname/sparql?default-graph-uri=&query=...

Example

Suppose the following simple query:

SELECT *
WHERE
  {
    ?s ?p ?o
  }
LIMIT 10

See this example link against the Virtuoso Demo Server SPARQL Endpoint using a SPARQL Protocol URL.
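
With the query above URL-encoded, the full SPARQL Protocol URL would look like the following (the host name is a placeholder):

http://example.com:8890/sparql?default-graph-uri=&query=SELECT%20%2A%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20LIMIT%2010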


16.2.3.3.7. View Editor Page of SPARQL Query

To view the SPARQL Endpoint editor page for a SPARQL query, use the qtxt parameter, i.e. the SPARQL Protocol URL should look like:

http://cname/sparql?default-graph-uri=&qtxt=...

Example

Suppose the following simple query:

SELECT *
WHERE
  {
    ?s ?p ?o
  }
LIMIT 10

Suppose also the results page link against the Virtuoso Demo Server SPARQL Endpoint from the previous section, using a SPARQL Protocol URL.

Replace the parameter name query with qtxt.

Access the new link, which should present the SPARQL Endpoint Editor page with "Query Text" area filled in with the SPARQL Query from above.
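
In other words, the editor-page URL for the same query would look like the following (the host name is a placeholder):

http://example.com:8890/sparql?default-graph-uri=&qtxt=SELECT%20%2A%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20LIMIT%2010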


16.2.3.3.8. Virtuoso/PL APIs

Virtuoso also provides SPARQL protocol client APIs in Virtuoso PL, so you can communicate with SPARQL Query Services from Virtuoso stored procedures. The APIs are as follows:

Table: 16.2.3.3.8.1. Virtuoso/PL APIs
API Notes
DB.DBA.SPARQL_REXEC Behaves like DBA.SPARQL_EVAL, but executes the query on the specified server. The procedure does not return anything. Instead, it creates a result set.
DB.DBA.SPARQL_REXEC_TO_ARRAY Behaves like DBA.SPARQL_EXEC_TO_ARRAY(), but executes the query on the specified server. The function returns a vector of rows, where every row is represented by a vector of field values.
DB.DBA.SPARQL_REXEC_WITH_META Has no local SPARQL_EVAL analog. It produces an array of result rows together with an array of result set metadata, in the format used by the exec() function.
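
As a rough sketch, DB.DBA.SPARQL_REXEC might be called from isql as below, assuming its arguments map positionally to the parameters listed in the table above (service, query, dflt_graph, named_graphs, req_hdr, maxrows); the remote endpoint URL is a placeholder, and the exact signature, including any trailing optional arguments, should be checked against the function reference:

SQL> DB.DBA.SPARQL_REXEC (
       'http://dbpedia.org/sparql',               -- service: remote endpoint URL
       'SELECT * WHERE { ?s ?p ?o } LIMIT 5',     -- query text
       null,                                      -- dflt_graph
       null,                                      -- named_graphs
       null,                                      -- req_hdr
       10                                         -- maxrows
     );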


16.2.3.3.9. SPARQL Anytime Queries

Starting with version 6, Virtuoso offers a partial query evaluation feature that guarantees answers to arbitrary queries within a fixed time. This is intended for use in publicly available SPARQL or SQL endpoints on large databases. It enforces a finite duration for all queries and strives to return meaningful partial results. Thus it provides the same protection as a transaction timeout while being more user friendly, since results are generally returned, including for aggregate queries. Outside of a public query service, this may also be handy when exploring a large data set with unknown properties.

The feature is activated with the statement

set result_timeout = <expression>;

Find more detailed information in the Anytime Queries section.
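
For example, in an isql session (the timeout value is arbitrary; the comment describes the behaviour expected under this feature):

SQL> set result_timeout = 2000;
SQL> SPARQL SELECT COUNT(*) WHERE { ?s ?p ?o };
-- after roughly 2 seconds the server stops and returns the partial
-- aggregate computed so far instead of running to completion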

Example Dump arbitrary query result as N-Triples

Assume the following arbitrary query:

SPARQL define output:format "NT"
CONSTRUCT { ?s a ?t }
FROM virtrdf:
WHERE { ?s a ?t };

To iterate over the result set of an arbitrary query, use exec_next() in a loop that begins with a call to exec() with a cursor output variable as an argument, and ends with exec_close() once the cursor is out of data.
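
A minimal Virtuoso/PL sketch of that loop (the procedure and variable names are illustrative; the query is the one above):

create procedure DUMP_NT_EXAMPLE ()
{
  declare stat, msg varchar;
  declare mdta, dta, h any;
  -- open a cursor over the SPARQL query; h receives the cursor handle
  exec ('SPARQL define output:format "NT" CONSTRUCT { ?s a ?t } FROM virtrdf: WHERE { ?s a ?t }',
        stat, msg, vector (), 0, mdta, dta, h);
  -- fetch rows one at a time until the cursor is out of data
  while (0 = exec_next (h, null, null, dta))
    {
      -- dta is a vector of column values for the current row;
      -- with output:format "NT" the column contains N-Triples text
      dbg_obj_print (dta);
    }
  exec_close (h);
}
;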




16.2.3.4. Service Endpoint Security

Earlier releases of Virtuoso secured the SPARQL endpoint via privileges assigned to the service-specific SQL user account "SPARQL". This account was optionally granted the "SPARQL_SELECT" or "SPARQL_UPDATE" roles. By default only the "SPARQL_SELECT" role was assigned, enabling all users to at least perform SELECT queries. The "SPARQL_UPDATE" role must be granted to allow updates to the Quad Store, a prerequisite for the Virtuoso Sponger services to be functional, i.e. to allow the Sponger to populate and update the Quad Store. Virtuoso release 5.0.7 introduced a new "SPARQL_SPONGE" role which can be assigned specifically to allow Sponger services to update the Quad Store, but not SPARQL users via the SPARQL endpoint.

Restricting a user's access to specific graphs can be done using Virtuoso Graph security functionality, via one of the Virtuoso Data Access APIs: ODBC, JDBC, ADO.Net or PL code.

See Also:

For example, users of OpenLink Data Space (ODS) applications are restricted in the RDF graphs accessible to them as follows:

DB.DBA.TABLE_DROP_POLICY ('DB.DBA.RDF_QUAD', 'S');

create procedure DB.DBA.RDF_POLICY (in tb varchar, in op varchar)
{
 declare chost, ret varchar;
 chost := DB.DBA.WA_CNAME ();
 ret := sprintf ('(ID_TO_IRI (G) NOT LIKE \'http://%s/dataspace/%%/private#\' ' ||
 'OR G = IRI_TO_ID (sprintf (\'http://%s/dataspace/%%U/private#\', USER)))', chost, chost);
 return ret;
}
;

grant execute on DB.DBA.RDF_POLICY to public;

DB.DBA.TABLE_SET_POLICY ('DB.DBA.RDF_QUAD', 'DB.DBA.RDF_POLICY', 'S');

where DB.DBA.WA_CNAME () is an ODS function returning the default host name.

The effect of this policy is that each user can access all non-private graphs plus their own private graph, http://cname/dataspace/<user>/private#, but not the private graphs of other users.

16.2.3.4.1. SPARQL Auth Endpoint Usage Example

Virtuoso reserves the path '/sparql-auth/' for a SPARQL service supporting authenticated SPARUL. This endpoint allows specific SQL accounts to perform SPARUL over the SPARQL protocol. To be allowed to log in via SQL or ODBC and update physical triples, a user must be granted the "SPARQL_UPDATE" role. To grant this role:

  1. Go to the Virtuoso administration UI i.e. http://host:port/conductor
  2. Login as user dba
  3. Go to System Admin->User Accounts->Users
    Conductor UI

    Figure: 16.2.3.4.1.1.1. Conductor UI
  4. Click the link "Edit"
  5. Set "User type" to "SQL/ODBC Logins and WebDAV".
  6. From the list of available Account Roles, select the "SPARQL_UPDATE" role and click the ">>" button to add it to the right-hand list.
  7. Click the "Save" button.

Note that if a table is used in a Linked Data View and SELECT permission on this table has not been granted to SPARQL_SELECT (or to SPARQL_UPDATE, which implicitly confers SPARQL_SELECT), then all SELECTs on a graph defined by that Linked Data View will return an access violation error, because the user account has no permission to read the table. The user must have appropriate privileges on all tables included in a Linked Data View in order to be able to select on all of its graphs.
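
For example, once an account has been granted SPARQL_UPDATE, it can issue SPARUL over the protocol using HTTP digest authentication against /sparql-auth/. The sketch below uses placeholder host, credentials, and graph IRI:

curl --digest --user exampleuser:examplepassword \
     --data-urlencode 'query=INSERT INTO GRAPH <urn:example:g1> { <urn:example:s1> <urn:example:p1> "hello" }' \
     "http://localhost:8890/sparql-auth"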


16.2.3.4.2. Managing a SPARQL Web Service Endpoint

Virtuoso web service endpoints may provide different default configurations for different host names mentioned in an HTTP request. Host name configuration for SPARQL web service endpoints can be managed via the table DB.DBA.SYS_SPARQL_HOST.

create table DB.DBA.SYS_SPARQL_HOST (
  SH_HOST	varchar not null primary key, -- host mask
  SH_GRAPH_URI	varchar,                -- default 'default graph' uri
  SH_USER_URI	varchar,                  -- reserved for any use in applications
  SH_DEFINES	long varchar              -- additional defines for requests
)

You can find detailed descriptions of the table columns here. Also, please read these notes on managing public web service endpoints.
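
For example, to give requests arriving with a particular Host header their own default graph (the host name and graph IRI below are placeholders):

SQL> INSERT INTO DB.DBA.SYS_SPARQL_HOST (SH_HOST, SH_GRAPH_URI)
       VALUES ('www.example.com', 'http://www.example.com/data');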


16.2.3.4.3. Authentication

Virtuoso 5.0.7 introduced a new "SPARQL_SPONGE" role which can be assigned specifically for controlling Sponger middleware services which perform writes and graph creation in the RDF Quad Store. This role only allows updates through the Sponger. Quad Store updates via any other route require granting the SPARQL_UPDATE role.

Virtuoso 5.0.11 onwards added three new methods for securing SPARQL endpoints that include:

Each of these authentication methods is associated with a purpose specific default SPARQL endpoint along the following lines:

Note: sparql-ssl is an alias of sparql-webid.

The Virtuoso Authentication Server offers a UI with options for managing:

Virtuoso Authentication Server can be installed by downloading and installing the conductor_dav.vad package.

The Authentication UI is accessible from the Conductor UI -> Linked Data -> Access Control -> SPARQL-WebID. Here is a sample scenario:

SPARQL-WebID Authentication Example
  1. Download and install the conductor_dav.vad package.
  2. Generate an X.509 Certificate hosted WebID.
  3. Go to http://<cname>:<port>/conductor, where <cname>:<port> are replaced by your local server values.
  4. Log in as user "dba" or another user with DBA privileges.
  5. Go to Linked Data -> Access Controls -> SPARQL-WebID:
    SPARQL-WebID

    Figure: 16.2.3.4.3.1.1. SPARQL-WebID
  6. In the presented form, enter a WebID, for example:
    http://id.myopenlink.net/dataspace/person/demo#this
    

    and select a "SPARQL Role", for example "Sponge".

    SPARQL-WebID

    Figure: 16.2.3.4.3.1.2. SPARQL-WebID
  7. Click the "Register" button.
  8. As a result, the WebID Protocol ACLs will be created:
    SPARQL-WebID

    Figure: 16.2.3.4.3.1.3. SPARQL-WebID
  9. Go to the SPARQL-WebID endpoint https://<cname>:<port>/sparql-webid
  10. Select the user's certificate from above:
    SPARQL-WebID

    Figure: 16.2.3.4.3.1.4. SPARQL-WebID
  11. As a result, the SPARQL Query UI will be presented:
    SPARQL-WebID

    Figure: 16.2.3.4.3.1.5. SPARQL-WebID
  12. Execute a sample query and view the results:
    SPARQL-WebID

    Figure: 16.2.3.4.3.1.6. SPARQL-WebID


16.2.3.4.4. SPARQL OAuth Endpoint

OAuth provides a secure data transmission level mechanism for your SPARQL endpoint. It enables you to interact securely with your RDF database from a variety of locations. It also allows you to provide controlled access to private data to selected users.

Virtuoso OAuth Server can be installed by downloading and installing the ods_framework_dav.vad package. The OAuth UI is accessible from the URL http://cname:port/oauth

A user must have SQL privileges in order to run secured SPARQL statements.

Here is a sample scenario:

  1. Download and install the conductor_dav.vad and ods_framework_dav.vad packages.
  2. Generate an X.509 Certificate hosted WebID.
  3. Go to http://<cname>:<port>/conductor, where <cname>:<port> are replaced by your local server values.
  4. Log in as user "dba" or another user with DBA privileges.
  5. Go to System Admin->User Accounts:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.1. SPARQL OAuth Endpoint
  6. Click "Create New Account":
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.2. SPARQL OAuth Endpoint
  7. In the presented form enter respectively:
    1. Account name, for ex:demo1; a password and then confirm the password;
    2. User type: SQL/ODBC and WebDAV;
    3. Account role: SPARQL_UPDATE
      SPARQL OAuth Endpoint

      Figure: 16.2.3.4.4.1.1.1. SPARQL OAuth Endpoint
  8. Click the "Save" button.
  9. The created user should be shown in the list of registered users:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.3. SPARQL OAuth Endpoint
  10. Go to http://<cname>:<port>/oauth/, where <cname>:<port> are replaced by your local server values.
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.4. SPARQL OAuth Endpoint
  11. Click the "OAuth keys" link:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.5. SPARQL OAuth Endpoint
  12. Log in as user demo1:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.6. SPARQL OAuth Endpoint
  13. The OAuth application registration form will be shown.
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.7. SPARQL OAuth Endpoint
  14. Select SPARQL from the "Application name" list, and click the "Generate Keys" button.
  15. A Consumer Key for SPARQL will be generated:
    90baa79108b1d972525bacc76c0279c02d6421e8
    
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.8. SPARQL OAuth Endpoint
  16. Click the "Back to main menu" link.
  17. Click the "Protected SPARQL Endpoint" link.
  18. The OpenLink Virtuoso SPARQL Query form will be displayed.
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.9. SPARQL OAuth Endpoint
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.10. SPARQL OAuth Endpoint
  19. Enter a simple query, for ex:
    SELECT *
    WHERE
      {
        ?s ?p ?o
      }
    LIMIT 10
    
  20. Enter the Consumer Key generated above as the "OAuth token" value:
    90baa79108b1d972525bacc76c0279c02d6421e8
    
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.11. SPARQL OAuth Endpoint
  21. Click the "Run Query" button.
  22. In the OAuth Authorization Service form enter the password for user demo1 and click the "Login" button.
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.12. SPARQL OAuth Endpoint
  23. Next you should authorize the request:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.13. SPARQL OAuth Endpoint
  24. On successful authentication and authorization, the query results should be shown:
    SPARQL OAuth Endpoint

    Figure: 16.2.3.4.4.1.14. SPARQL OAuth Endpoint

16.2.3.4.5. WebID Protocol ACLs

WebID Protocol is an implementation of a conceptual authentication and authorization protocol that links a WebID to a public key, creating a global, decentralized/distributed, open yet secure authentication system that works with existing browsers.

To use WebID Protocol, download and install the conductor_dav.vad VAD package. Once installed, to access the WebID Protocol ACLs UI, go to URL http://cname:port/conductor -> Linked Data -> Access Controls -> SPARQL-WebID .

WebID

Figure: 16.2.3.4.5.1. WebID

Configuring WebID Protocol ACLs with a WebID Protocol certificate and a WebID allows secure SPARQL queries to be performed against a Virtuoso SPARQL-WebID endpoint, and the query results to be viewed. The SPARQL-WebID endpoint URL is of the form https://cname:port/sparql-webid

Note: SPARQL-SSL is alias of SPARQL-WebID.

An example of how to configure a sample WebID Protocol ACL is outlined in the SPARQL-WebID Authentication Example above.

See Also:

WebID Protocol ODBC Login


16.2.3.4.6. Creating and Using a SPARQL-WebID based Endpoint

The following section describes the basic steps for setting up an SSL-protected, WebID-based SPARQL Endpoint (SPARQL-WebID). The guide covers the use of Virtuoso PL functions and the Virtuoso Conductor for SPARQL endpoint creation and configuration, and also the use of cURL for exercising the newly generated SPARQL-SSL endpoint. Note: SPARQL-SSL is an alias of SPARQL-WebID.

  1. Setup the CA issuer and https listener
  2. To create the /sparql-webid endpoint, install the policy_manager.vad package, or manually define the /sparql-webid endpoint on an HTTPS-based listener (HTTPS service endpoint), for example using Virtuoso PL:
    DB.DBA.VHOST_DEFINE (
    	 lhost=>'127.0.0.1:443',
    	 vhost=>'localhost',
    	 lpath=>'/sparql-webid',
    	 ppath=>'/!sparql/',
    	 is_dav=>1,
    	 auth_fn=>'DB.DBA.FOAF_SSL_AUTH',
    	 vsp_user=>'dba',
    	 ses_vars=>0,
    	 auth_opts=>vector ( 'https_cert',
    	                     'db:https_key_localhost',
    	                     'https_key',
    	                     'db:https_key_localhost',
    	                     'https_verify',
    	                     3,
    	                     'https_cv_depth',
    	                     10 ),
    	 opts=>vector ('noinherit', 1),
    	 is_default_host=>0
    );
    
  3. Setup the SPARQL-WebID endpoint and define ACLs using the Virtuoso Conductor
  4. Export your private key and its associated WebID-based X.509 certificate from your Firefox browser or your system's key manager into a PKCS#12 (.p12) file
    1. If using Firefox, use the menu path: Advanced -> View Certificates, then click Backup for your certificate, named "mykey" in this example.
    2. The file "mykey.p12" will be created. To convert it to PEM and disable password protection, so that the file can be used in non-interactive mode (e.g. with cURL and other HTTP clients), execute:
      openssl pkcs12 -in mykey.p12 -out mykey.pem -nodes
      
  5. Test the SPARQL-WebID endpoint with cURL (listening on the default HTTPS port 443):
    • Note: In this example we use the "-k / --insecure" option with cURL, since we are using self-signed X.509 certificates signed by a self-signed root CA.
    	curl -k -E mykey.pem "https://localhost/sparql-webid?query=select+*+where+\{+%3Fx+%3Fy+%3Fz+.+\}+limit+10&format=text%2Fn3"
    
    @prefix res: <http://www.w3.org/2005/sparql-results#> .
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    _:_ a res:ResultSet .
    _:_ res:resultVariable "x" , "y" , "z" .
    @prefix ns0:    <https://localhost/tutorial/> .
    @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:hosting ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:xml ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:repl ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:rdfview ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:services ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:wap ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:bpeldemo ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:web ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:web2 ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    _:_ res:solution [
          res:binding [ res:variable "x" ; res:value ns0:xmlxslt ] ;
          res:binding [ res:variable "y" ; res:value rdf:type ] ;
          res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
    
  6. Import your key via the Conductor UI:
    1. Go to Conductor -> System Admin -> User Accounts
      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.1. Import key via Conductor UI
    2. Click "Edit" for your user
      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.2. Import key via Conductor UI
    3. Change "User type" to: SQL/ODBC and WebDAV
      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.3. Import key via Conductor UI
    4. Enter your ODS user WebID:
      http://cname:port/dataspace/person/username#this

      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.4. Import key via Conductor UI
    5. Click "Save"
    6. Click "Edit" again for your user
    7. In "PKCS12 file:" click the "Browse" button and select your key.
    8. Enter a local key name, e.g. "cli_key"
    9. Enter the key password
      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.5. Import key via Conductor UI
    10. Click "Import Key"
    11. As a result, the key will be stored under the given name, e.g. "cli_key"
      Import key via Conductor UI

      Figure: 16.2.3.4.6.1.1.6. Import key via Conductor UI
    12. Click "Save"
  7. Test the SPARQL-WebID endpoint (listening on the default HTTPS port 443) with http_client:
    1. Log in to Virtuoso ISQL with your user credentials:
      C:\>isql localhost:1111 johndoe****
      Connected to OpenLink Virtuoso
      Driver: 06.01.3127 OpenLink Virtuoso ODBC Driver
      OpenLink Interactive SQL (Virtuoso), version 0.9849b.
      Type HELP; for help and EXIT; to exit.
      SQL>
      
    2. Execute:
      SQL>select http_client ('https://localhost/sparql-webid?query=select+*+where+{+%3Fx+%3Fy+%3Fz+.+}+limit+10&format=text%2Fn3', cert_file=>'db:cli_key', insecure=>1);
      callret
      VARCHAR
      _______________________________________________________________________________
      
      
      @prefix res: <http://www.w3.org/2005/sparql-results#> .
      @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
      _:_ a res:ResultSet .
      _:_ res:resultVariable "x" , "y" , "z" .
      @prefix ns0:    <https://localhost/tutorial/> .
      @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:hosting ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:xml ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:repl ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:rdfview ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:services ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:wap ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:bpeldemo ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
            res:binding [ res:variable "x" ; res:value ns0:web ] ;
            res:binding [ res:variable "y" ; res:value rdf:type ] ;
            res:binding [ res:variable "z" ; res:value "Tutorial" ] ] .
      _:_ res:solution [
      
      
      1 Rows. -- 281 msec.
      
See Also:

Demo Example Using HTTP client to perform WebID Protocol connection.


16.2.3.4.7. Disable Default SPARQL Endpoint
Using iSQL:
  1. To disable /sparql, execute:
    DB.DBA.VHOST_REMOVE (lpath=>'/sparql');
    
  2. To add the endpoint again via PL, execute:
    DB.DBA.VHOST_DEFINE (lpath=>'/sparql/', ppath => '/!sparql/', is_dav => 1, vsp_user => 'dba', opts => vector('noinherit', 1));
    

Using Conductor UI:
  1. Go to http://cname:port/conductor .
  2. Enter user dba credentials.
  3. Go to "Web Application Server" -> "Virtual Domains & Directories".
    Disable SPARQL Endpoint

    Figure: 16.2.3.4.7.1.1. Disable SPARQL Endpoint
  4. Find the logical path "/sparql".
    Disable SPARQL Endpoint

    Figure: 16.2.3.4.7.1.2. Disable SPARQL Endpoint
  5. Click "Edit" from the "Action" column.
    Disable SPARQL Endpoint

    Figure: 16.2.3.4.7.1.3. Disable SPARQL Endpoint
  6. Change "VSP User" to "nobody".
    Disable SPARQL Endpoint

    Figure: 16.2.3.4.7.1.4. Disable SPARQL Endpoint
  7. Click "Save Changes".
  8. As a result, the SPARQL endpoint should be shown as disabled:
    Disable SPARQL Endpoint

    Figure: 16.2.3.4.7.1.5. Disable SPARQL Endpoint



16.2.3.5. Request Methods

Table: 16.2.3.5.1. Methods List
Method   Supported?   Notes
GET      Yes          Short queries are sent in GET mode.
POST     Yes          Queries longer than 1900 bytes are POST-ed.
DELETE   No
PUT      No


16.2.3.6. Functions

The SPARQL client can be invoked by three similar functions:

Table: 16.2.3.6.1. Functions List
Function Notes
DB.DBA.SPARQL_REXEC Behaves like DBA.SPARQL_EVAL, but executes the query on the specified server. The procedure does not return anything. Instead, it creates a result set.
DB.DBA.SPARQL_REXEC_TO_ARRAY Behaves like DBA.SPARQL_EXEC_TO_ARRAY (), but executes the query on the specified server. The function returns a vector of rows, where every row is represented by a vector of field values.
DB.DBA.SPARQL_REXEC_WITH_META Has no local 'SPARQL_EVAL' analog. It produces an array of result rows together with an array of result set metadata in the same format as produced by the exec () function. This function can be used when the result should be passed later to exec_result_names () and exec_result () built-in functions. To process a local query in similar style, an application can use the SQL built-in function exec () - a SPARQL query (with the 'SPARQL' keyword in front) can be passed to exec () instead of a plain SQL SELECT statement.

create procedure DB.DBA.SPARQL_REXEC (
    in service varchar, in query varchar, in dflt_graph varchar, in named_graphs any,
    in req_hdr any, in maxrows integer, in bnode_dict any );
create function DB.DBA.SPARQL_REXEC_TO_ARRAY (
    in service varchar, in query varchar, in dflt_graph varchar, in named_graphs any,
    in req_hdr any, in maxrows integer, in bnode_dict any )
    returns any;
create procedure DB.DBA.SPARQL_REXEC_WITH_META (
    in service varchar, in query varchar, in dflt_graph varchar, in named_graphs any,
    in req_hdr any, in maxrows integer, in bnode_dict any,
    out metadata any,  -- metadata like exec () returns.
    out resultset any) -- results as 'long valmode' values.
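
As an illustration, the sketch below calls DB.DBA.SPARQL_REXEC from ISQL against a remote endpoint; the endpoint URL, query, and row limit are placeholder values:

DB.DBA.SPARQL_REXEC (
    'http://dbpedia.org/sparql',              -- service: remote SPARQL protocol endpoint
    'SELECT ?s WHERE { ?s a ?o } LIMIT 5',    -- query text
    null,                                     -- dflt_graph: no default graph
    null,                                     -- named_graphs: none
    null,                                     -- req_hdr: no extra request headers
    5,                                        -- maxrows
    null );                                   -- bnode_dict: no blank node dictionary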

16.2.3.7. Examples

Virtuoso's SPARQL demo offers a live demonstration of Virtuoso's implementation of the DAWG's SPARQL test-suite, a collection of SPARQL query language use cases that enable interactive and simplified testing of a triple store implementation. If you have installed the SPARQL Demo VAD locally, it can be found at a URL similar to 'http://example.com:8080/sparql_demo/'; the exact form will depend on your local configuration. Alternatively, a live version of the documentation is available at the Virtuoso Demo Server.

16.2.3.7.1. Example SPARQL query issued via curl
curl -F "query=SELECT DISTINCT ?p FROM <http://demo.openlinksw.com/DAV/home/demo/rdf_sink/> WHERE {?s ?p ?o}" http://demo.openlinksw.com/sparql

The result should be:

<?xml version="1.0" ?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/sw/DataAccess/rf1/result2.xsd">
 <head>
  <variable name="p"/>
 </head>
 <results distinct="false" ordered="true">
  <result>
   <binding name="p"><uri>http://www.w3.org/1999/02/22-rdf-syntax-ns#type</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/nick</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/name</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/homepage</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/knows</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/workplaceHomepage</uri></binding>
  </result>
  <result>
   <binding name="p"><uri>http://xmlns.com/foaf/0.1/mbox</uri></binding>
  </result>
 </results>
</sparql>

16.2.3.7.2. Other Examples of SPARQL query issued via curl

Further example SPARQL queries:

curl -F "query=SELECT DISTINCT ?Concept FROM <http://dbpedia.org> WHERE {?s a ?Concept} LIMIT 10" http://dbpedia.org/sparql
curl -F "query=SELECT DISTINCT ?Concept FROM <http://myopenlink.net/dataspace/person/kidehen> WHERE {?s a ?Concept} LIMIT 10" http://demo.openlinksw.com/sparql
curl -F "query=SELECT DISTINCT ?Concept FROM <http://data.openlinksw.com/oplweb/product_family/virtuoso> WHERE {?s a ?Concept} LIMIT 10" http://demo.openlinksw.com/sparql
curl -F "query=SELECT DISTINCT ?Concept FROM <http://openlinksw.com/dataspace/organization/openlink> WHERE {?s a ?Concept} LIMIT 10" http://demo.openlinksw.com/sparql

16.2.3.7.3. Example with curl and SPARQL-WebID endpoint
$ curl -H "Accept: text/rdf+n3"  --cert test.pem -k https://demo.openlinksw.com/dataspace/person/demo
Enter PEM pass phrase: *****
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ns1:    <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook/1046#> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
ns1:this        rdf:type        foaf:Person .
@prefix ns3:    <http://www.pipian.com/rdf/tami/juliette.n3#> .
ns3:juliette    rdf:type        foaf:Document .
@prefix ns4:    <https://demo.openlinksw.com/dataspace/person/> .
ns4:demo        rdf:type        foaf:PersonalProfileDocument .
@prefix ns5:    <https://demo.openlinksw.com/dataspace/person/demo#> .
@prefix geo:    <http://www.w3.org/2003/01/geo/wgs84_pos#> .
ns5:based_near  rdf:type        geo:Point .
@prefix ns7:    <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook/1042#> .
ns7:this        rdf:type        foaf:Person .
ns5:this        rdf:type        foaf:Person .
@prefix ns8:    <https://demo.openlinksw.com/dataspace/person/demo/online_account/> .
@prefix sioc:   <http://rdfs.org/sioc/ns#> .
ns8:demo        rdf:type        sioc:User .
@prefix ns10:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/myAddressBook/1001#> .
ns10:this       rdf:type        foaf:Person .
@prefix ns11:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook/1045#> .
ns11:this       rdf:type        foaf:Person .
@prefix ns12:   <https://demo.openlinksw.com/dataspace/demo#> .
ns12:this       rdf:type        sioc:User .
ns5:org rdf:type        foaf:Organization .
@prefix ns13:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook/1048#> .
ns13:this       rdf:type        foaf:Person .
@prefix ns14:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/myAddressBook/1001#this#> .
ns14:org        rdf:type        foaf:Organization .
@prefix ns15:   <https://demo.openlinksw.com/dataspace/person/imitko#> .
ns15:this       rdf:type        foaf:Person .
@prefix ns16:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/myAddressBook/1049#> .
ns16:this       rdf:type        foaf:Person .
@prefix ns17:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/myAddressBook/1000#> .
ns17:this       rdf:type        foaf:Person .
ns8:MySpace     rdf:type        sioc:User .
@prefix ns18:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook/1044#> .
ns18:this       rdf:type        foaf:Person .
@prefix dc:     <http://purl.org/dc/elements/1.1/> .
ns4:demo        dc:title        "demo demo's FOAF file" .
ns14:org        dc:title        "OpenLink" .
ns5:org dc:title        "OpenLink" .
ns18:this       foaf:name       "Kingsley Idehen" .
ns13:this       foaf:name       "Juliette" .
ns17:this       foaf:name       "Kingsley Idehen" .
ns5:this        foaf:name       "demo demo" .
ns15:this       foaf:name       "Mitko Iliev" .
ns10:this       foaf:name       "test test12" .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
ns5:this        rdfs:seeAlso    ns4:demo .
ns15:this       rdfs:seeAlso    ns4:imitko .
ns4:demo        foaf:maker      ns5:this .
ns15:this       foaf:nick       "imitko" .
ns7:this        foaf:nick       "Orri Erling" .
ns13:this       foaf:nick       "Juliette" .
ns10:this       foaf:nick       "test1" .
ns5:this        foaf:nick       "demo" .
ns18:this       foaf:nick       "Kingsley" .
ns17:this       foaf:nick       "Kingsley" .
ns16:this       foaf:nick       "test2" .
ns1:this        foaf:nick       "TEST" .
ns11:this       foaf:nick       "TEST" .
ns5:this        foaf:holdsAccount       ns8:demo ,
                ns8:MySpace ,
                ns12:this .
@prefix ns21:   <http://myopenlink.net/dataspace/person/imitko#> .
ns5:this        foaf:knows      ns21:this ,
                ns17:this ,
                ns16:this ,
                ns3:juliette ,
                ns10:this ,
                ns7:this .
@prefix ns22:   <http://myopenlink.net/dataspace/person/kidehen#> .
ns5:this        foaf:knows      ns22:this ,
                ns18:this ,
                ns11:this ,
                ns1:this .
@prefix ns23:   <http://bblfish.net/people/henry/card#me\u0020> .
ns5:this        foaf:knows      ns23: ,
                ns13:this ,
                ns15:this ;
        foaf:firstName  "demo" ;
        foaf:family_name        "demo" ;
        foaf:gender     "male" ;
        foaf:icqChatID  "125968" ;
        foaf:msnChatID  "45demo78" ;
        foaf:aimChatID  "demo1234" ;
        foaf:yahooChatID        "demo678" ;
        foaf:based_near ns5:based_near .
@prefix ns24:   <http://www.openlinksw.com> .
ns5:this        foaf:workplaceHomepage  ns24: .
ns5:org foaf:homepage   ns24: .
ns5:this        foaf:homepage   ns24: .
ns14:org        foaf:homepage   ns24: .
ns4:demo        foaf:primaryTopic       ns5:this .
ns5:based_near  geo:lat "47.333332" ;
        geo:long        "13.333333" .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ns1:    <https://demo.openlinksw.com/dataspace/demo#> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
ns1:this        rdf:type        foaf:OnlineAccount .
@prefix ns3:    <https://demo.openlinksw.com/dataspace/person/demo/online_account/> .
ns3:MySpace     rdf:type        foaf:OnlineAccount .
ns3:demo        rdf:type        foaf:OnlineAccount .
@prefix ns4:    <https://demo.openlinksw.com/dataspace/person/demo#> .
ns4:this        foaf:holdsAccount       ns3:MySpace ,
                ns1:this ,
                ns3:demo .
@prefix vcard:  <http://www.w3.org/2001/vcard-rdf/3.0#> .
ns4:this        vcard:ADR       ns4:addr .
ns4:addr        vcard:Country   "United States" ;
        vcard:Locality  "New York" ;
        vcard:Region    "Nebraska" .
@prefix ns6:    <http://myspace.com> .
ns3:MySpace     foaf:accountServiceHomepage     ns6: .
@prefix ns7:    <skype:demo?> .
ns3:demo        foaf:accountServiceHomepage     ns7:chat ;
        foaf:accountName        "demo" .
ns3:MySpace     foaf:accountName        "MySpace" .
@prefix ns8:    <http://vocab.org/bio/0.1/> .
ns4:this        ns8:olb "this is short resume of user Demo." .
@prefix ns9:    <https://demo.openlinksw.com/dataspace/> .
ns4:this        foaf:openid     ns9:demo ;
        ns8:keywords    "demo, openlinksw, virtuoso, weblog, rdf" .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
@prefix ns1:    <https://demo.openlinksw.com/dataspace/demo/subscriptions/> .
@prefix ns2:    <https://demo.openlinksw.com/dataspace/person/demo#> .
ns1:DemoFeeds   foaf:maker      ns2:this .
@prefix ns3:    <https://demo.openlinksw.com/dataspace/demo/community/> .
ns3:demoCommunity       foaf:maker      ns2:this .
@prefix ns4:    <https://demo.openlinksw.com/dataspace/demo/eCRM/demo%27s%20eCRM> .
ns4:    foaf:maker      ns2:this .
@prefix ns5:    <https://demo.openlinksw.com/dataspace/demo/calendar/> .
ns5:mycalendar  foaf:maker      ns2:this .
@prefix ns6:    <https://demo.openlinksw.com/dataspace/demo/photos/> .
ns6:MyGallery   foaf:maker      ns2:this .
@prefix ns7:    <https://demo.openlinksw.com/dataspace/demo/briefcase/> .
ns7:mybriefcase foaf:maker      ns2:this .
@prefix ns8:    <https://demo.openlinksw.com/dataspace/demo/wiki/> .
ns8:ESBWiki     foaf:maker      ns2:this .
@prefix ns9:    <https://demo.openlinksw.com/dataspace/demo/bookmark/> .
ns9:mybookmarks foaf:maker      ns2:this .
@prefix ns10:   <https://demo.openlinksw.com/dataspace/demo/weblog/> .
ns10:myblog     foaf:maker      ns2:this .
@prefix ns11:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/demo%27s%20AddressBook> .
ns11:   foaf:maker      ns2:this .
@prefix ns12:   <https://demo.openlinksw.com/dataspace/demo/community/demo%27s%20Community> .
ns12:   foaf:maker      ns2:this .
ns8:mywiki      foaf:maker      ns2:this .
@prefix ns13:   <https://demo.openlinksw.com/dataspace/demo/eCRM/demo%20demo%27s%20eCRM> .
ns13:   foaf:maker      ns2:this .
@prefix ns14:   <https://demo.openlinksw.com/dataspace/demo/polls/> .
ns14:mypolls    foaf:maker      ns2:this .
@prefix ns15:   <https://demo.openlinksw.com/dataspace/demo/socialnetwork/> .
ns15:myAddressBook      foaf:maker      ns2:this .
ns3:SP2 foaf:maker      ns2:this .
ns2:this        foaf:made       ns11: ,
                ns4: ,
                ns3:demoCommunity ,
                ns12: ,
                ns15:myAddressBook ,
                ns10:myblog ,
                ns9:mybookmarks ,
                ns7:mybriefcase ,
                ns5:mycalendar ,
                ns14:mypolls ,
                ns8:mywiki ,
                ns1:DemoFeeds ,
                ns8:ESBWiki ,
                ns6:MyGallery ,
                ns3:SP2 ,
                ns13: .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
ns9:mybookmarks rdfs:label      "demo demo's Bookmarks" .
ns15:myAddressBook      rdfs:label      "demo demo's AddressBook" .
ns4:    rdfs:label      "demo demo's eCRM" .
ns12:   rdfs:label      "demo's Community" .
ns14:mypolls    rdfs:label      "demo demo's Polls" .
ns13:   rdfs:label      "demo demo's eCRM Description" .
ns8:mywiki      rdfs:label      "demo demo's Wiki" .
ns7:mybriefcase rdfs:label      "demo demo's Briefcase" .
ns1:DemoFeeds   rdfs:label      "demo demo's Feeds" .
ns10:myblog     rdfs:label      "demo's Weblog" .
ns5:mycalendar  rdfs:label      "demo demo's Calendar" .
ns11:   rdfs:label      "demo demo's AddressBook" .
ns6:MyGallery   rdfs:label      "demo demo's Gallery" .
ns8:ESBWiki     rdfs:label      "demo demo's Wiki" .
ns3:demoCommunity       rdfs:label      "demo demo's Community" .
ns3:SP2 rdfs:label      "demo demo's Community" .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ns1:    <https://demo.openlinksw.com/dataspace/person/demo#> .
@prefix ns2:    <http://www.w3.org/ns/auth/rsa#> .
ns1:cert        rdf:type        ns2:RSAPublicKey .
@prefix dc:     <http://purl.org/dc/elements/1.1/> .
@prefix ns4:    <https://demo.openlinksw.com/dataspace/person/demo/projects#ods%20project> .
ns4:    dc:title        "ods project" .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
ns4:    foaf:maker      ns1:this .
ns1:this        foaf:made       ns4: .
@prefix ns6:    <http://www.w3.org/ns/auth/cert#> .
ns1:cert        ns6:identity    ns1:this ;
        ns2:modulus     ns1:cert_mod .
ns1:cert_mod    ns6:hex "b8edefa13092d05e85257d6be0aca54218091278583f1d18759c4bced0007948fa6e920018abc3c30b8885d303ec2e679f3a7c15036d38452ddd9ebfcbb41
e1bd08dca66b7737b744fd9e441ebefa425311363711714cd0fe3b334a79ce50be9eb3443193bcbf2f1486481e775382f1a1792a2a8438543ca6f478c3b13c5db2a7f9a12a9a5aed5ec498
6be0169a1859d027170812a28914d158fb76a5933f11777a06c8db64d10f7c02900c4bb4bbf2d24c0e34c6ca135fdb5e05241bc029196ceef13a2006f07d1800f17762c0cfe05b3dac3042
09e1b7a3973122e850e96fcd0396544f82f0b11a46f0d868ba0f3d8efd957e7ef224871905a06c3c5d85ac9" .
ns1:cert        ns2:public_exponent     ns1:cert_exp .
ns1:cert_exp    ns6:decimal     "65537" .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ns1:    <https://demo.openlinksw.com/dataspace/person/demo#> .
@prefix ns2:    <http://vocab.org/bio/0.1/> .
ns1:event       rdf:type        ns2:Birth .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
@prefix ns4:    <mailto:demo@openlinksw.com> .
ns1:this        foaf:mbox       ns4: ;
        foaf:birthday   "01-01" .
@prefix dc:     <http://purl.org/dc/elements/1.1/> .
ns1:event       dc:date "1968-01-01" .
ns1:this        ns2:event       ns1:event .

16.2.3.7.4. Example with curl and SPARQL-OAuth endpoint

Note: This is just an example, as the token shown has already expired. You can go to this section to see how to interact with the Virtuoso UI.

$ curl "http://demo.openlinksw.com/oauth/sparql.vsp?debug=on&default-graph-uri=&format=text%2Fhtml&oauth_consumer_key=27f105a327f5f23163e0636f78901
8dacdd70bb5&oauth_nonce=a14d43339fcb2638&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1242106643&oauth_token=42e2af4d9264ef42521c1010aff99f60a8
ee95a2&oauth_version=1.0&query=select%20distinct%20%3FURI%20%3FObjectType%20where%20%7B%3FURI%20a%20%3FObjectType%7D%20limit%2050&oauth_signature=C
w9yJ2saU1vgHuFxWcughai5cZY%3D"
<table class="sparql" border="1">
  <tr>
    <th>URI</th>
    <th>ObjectType</th>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarbinary</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarbinary-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-uri</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-uri-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-uri</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-uri-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-doubleprecision</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-doubleprecision-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-date</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-date-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-datetime</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-datetime-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#multipart-uri</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#multipart-uri-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#multipart-uri-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#multipart-literal-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-uri-fn</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-uri-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-uri-fn</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-uri-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-literal-fn</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-literal-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-literal-fn</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-integer-literal-fn-nullable</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.w3.org/1999/02/22-rdf-syntax-ns#type</td>
    <td>http://www.w3.org/1999/02/22-rdf-syntax-ns#Property</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-iid-nonblank-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#default-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarchar-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarbinary-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-longvarbinary-nullable-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
  <tr>
    <td>http://www.openlinksw.com/virtrdf-data-formats#sql-varchar-uri-SuperFormats</td>
    <td>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</td>
  </tr>
</table>

16.2.3.7.5. Example with CONSTRUCT

Go to the SPARQL endpoint UI, i.e. http://host:port/sparql

For the Default Graph URI enter: http://www.w3.org/2001/sw/DataAccess/proto-tests/data/construct/simple-data.rdf

Select "Retrieve remote RDF data for all missing source graphs".

For the query text enter:

SELECT * WHERE {?s ?p ?o}

Click the "Run Query" button.

The query results, shown below, are cached locally (the remote network resource is fetched on demand). The remote RDF data is saved in the local RDF quad store as graph http://www.w3.org/2001/sw/DataAccess/proto-tests/data/construct/simple-data.rdf

s  	                                  p  	                                           o
http://www.example/jose/foaf.rdf#jose 	  http://www.w3.org/1999/02/22-rdf-syntax-ns#type  http://xmlns.com/foaf/0.1/Person
http://www.example/jose/foaf.rdf#jose 	  http://xmlns.com/foaf/0.1/nick 	           Jo
http://www.example/jose/foaf.rdf#jose 	  http://xmlns.com/foaf/0.1/name 	           Jose Jimen~ez
http://www.example/jose/foaf.rdf#jose 	  http://xmlns.com/foaf/0.1/knows 	           http://www.example/jose/foaf.rdf#juan
http://www.example/jose/foaf.rdf#jose 	  http://xmlns.com/foaf/0.1/homepage 	           http://www.example/jose/
http://www.example/jose/foaf.rdf#jose 	  http://xmlns.com/foaf/0.1/workplaceHomepage 	   http://www.corp.example/
http://www.example/jose/foaf.rdf#kendall  http://xmlns.com/foaf/0.1/knows                  http://www.example/jose/foaf.rdf#edd
http://www.example/jose/foaf.rdf#julia 	  http://www.w3.org/1999/02/22-rdf-syntax-ns#type  http://xmlns.com/foaf/0.1/Person
http://www.example/jose/foaf.rdf#julia 	  http://xmlns.com/foaf/0.1/mbox 	           mailto:julia@mail.example
http://www.example/jose/foaf.rdf#juan 	  http://www.w3.org/1999/02/22-rdf-syntax-ns#type  http://xmlns.com/foaf/0.1/Person
http://www.example/jose/foaf.rdf#juan 	  http://xmlns.com/foaf/0.1/mbox 	           mailto:juan@mail.example
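
Because the remote RDF data is now cached in the local quad store, it can also be queried directly from ISQL against that graph; a minimal sketch:

SQL>SPARQL SELECT COUNT(*) FROM <http://www.w3.org/2001/sw/DataAccess/proto-tests/data/construct/simple-data.rdf> WHERE { ?s ?p ?o };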

Now let's take the CONSTRUCT query:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX myfoaf: <http://www.example/jose/foaf.rdf#>
CONSTRUCT
  { myfoaf:jose foaf:depiction <http://www.example/jose/jose.jpg>.
    myfoaf:jose foaf:schoolHomepage <http://www.edu.example/>.
    ?s ?p ?o.
  }
FROM <http://www.w3.org/2001/sw/DataAccess/proto-tests/data/construct/simple-data.rdf>
WHERE
  {
    ?s ?p ?o. myfoaf:jose foaf:nick "Jo".
    FILTER ( ! (?s = myfoaf:kendall && ?p = foaf:knows && ?o = myfoaf:edd )
    && ! ( ?s = myfoaf:julia && ?p = foaf:mbox && ?o = <mailto:julia@mail.example> )
    && ! ( ?s = myfoaf:julia && ?p = rdf:type && ?o = foaf:Person))
  }

From an HTTP client, issue the GET command with the above query added as a URL-encoded parameter value:

GET -e -s http://host:port/sparql/?query=PREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0D%0APREFIX+foaf%3A+%3Chttp%3A%2F%2Fxmlns.com%2Ffoaf%2F0.1%2F%3E%0D%0APREFIX+myfoaf%3A+%3Chttp%3A%2F%2Fwww.example%2Fjose%2Ffoaf.rdf%23%3E%0D%0A%0D%0ACONSTRUCT+%7B+myfoaf%3Ajose+foaf%3Adepiction+%3Chttp%3A%2F%2Fwww.example%2Fjose%2Fjose.jpg%3E.%0D%0A++++++++++++myfoaf%3Ajose+foaf%3AschoolHomepage+%3Chttp%3A%2F%2Fwww.edu.example%2F%3E.%0D%0A++++++++++++%3Fs+%3Fp+%3Fo.%7D%0D%0AFROM+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2Fsw%2FDataAccess%2Fproto-tests%2Fdata%2Fconstruct%2Fsimple-data.rdf%3E%0D%0AWHERE+%7B+%3Fs+%3Fp+%3Fo.+myfoaf%3Ajose+foaf%3Anick+%22Jo%22.%0D%0A+++++++FILTER+%28+%21+%28%3Fs+%3D+myfoaf%3Akendall+%26%26+%3Fp+%3D+foaf%3Aknows+%26%26+%3Fo+%3D+myfoaf%3Aedd+%29%0D%0A++++++++++++++%26%26+%21+%28+%3Fs+%3D+myfoaf%3Ajulia+%26%26+%3Fp+%3D+foaf%3Ambox+%26%26+%3Fo+%3D+%3Cmailto%3Ajulia%40mail.example%3E+%29%0D%0A++++++++++%26%26+%21+%28+%3Fs+%3D+myfoaf%3Ajulia+%26%26+%3Fp+%3D+rdf%3Atype+%26%26+%3Fo+%3D+foaf%3APerson%29%29%0D%0A%7D%0D%0A&format=application%2Frdf%2Bxml

The request response will be similar to:

200 OK
Connection: close
Date: Fri, 28 Dec 2007 10:06:14 GMT
Accept-Ranges: bytes
Server: Virtuoso/05.00.3023 (Win32) i686-generic-win-32  VDB
Content-Length: 2073
Content-Type: application/rdf+xml; charset=UTF-8
Client-Date: Fri, 28 Dec 2007 10:06:14 GMT
Client-Peer: 83.176.40.177:port
Client-Response-Num: 1

<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#juan"><ns0pred:mbox xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="mailto:juan@mail.example"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:schoolHomepage xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="http://www.edu.example/"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:type xmlns:ns0pred="http://www.w3.org/1999/02/22-rdf-syntax-ns#" rdf:resource="http://xmlns.com/foaf/0.1/Person"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:homepage xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="http://www.example/jose/"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#juan"><ns0pred:type xmlns:ns0pred="http://www.w3.org/1999/02/22-rdf-syntax-ns#" rdf:resource="http://xmlns.com/foaf/0.1/Person"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:workplaceHomepage xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="http://www.corp.example/"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:nick xmlns:ns0pred="http://xmlns.com/foaf/0.1/">Jo</ns0pred:nick></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:depiction xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="http://www.example/jose/jose.jpg"/></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:name xmlns:ns0pred="http://xmlns.com/foaf/0.1/">Jose Jiménez</ns0pred:name></rdf:Description>
<rdf:Description rdf:about="http://www.example/jose/foaf.rdf#jose"><ns0pred:knows xmlns:ns0pred="http://xmlns.com/foaf/0.1/" rdf:resource="http://www.example/jose/foaf.rdf#juan"/></rdf:Description>
</rdf:RDF>
Done

16.2.3.7.6. Example of extracting part of a literal as a variable

The following example shows how to extract part of a literal as a variable for use in a numeric comparison in SPARQL.

Suppose there are the following triples inserted:

SQL>SPARQL INSERT INTO GRAPH <http://mygraph.com> {  <:a>
                                                     <:p>
                                                     "123 abc" };
callret-0
VARCHAR
_______________________________________________________________________________

Insert into <http://mygraph.com>, 1 triples -- done

1 Rows. -- 30 msec.
SQL>SPARQL INSERT INTO GRAPH <http://mygraph.com> {  <:a>
                                                     <:p>
                                                     "234 abc" };
callret-0
VARCHAR
_______________________________________________________________________________

Insert into <http://mygraph.com>, 1 triples -- done

1 Rows. -- 0 msec.

In order to extract the numeric part and then perform a numeric comparison (<, >, =) on it, you can use atoi (), atol () or atof () in the filter:

SQL>SPARQL
SELECT *
FROM <http://mygraph.com>
WHERE
  {
    ?s ?p ?o . filter (bif:atoi (?o) > 130)
  };
s        p         o
VARCHAR  VARCHAR   VARCHAR
___________________________________

:a       :p        234 abc

1 Rows. -- 10 msec.

16.2.3.7.7. Example of how to define a rule
See details here on how to define a rule context that is initialized from the contents of a given graph.
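
For reference, such a rule context is typically created with the rdfs_rule_set () function; the sketch below uses placeholder names:

-- Define an inference rule context whose rules come from the contents of the given graph
-- (the context name and graph IRI are placeholders):
rdfs_rule_set ('my_rule_context', 'http://mygraph.com/ontology');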

16.2.3.8. Implementation Notes

This service has been implemented using Virtuoso Server.


16.2.3.9. Virtuoso 'Semantic Bank' End Point

What is Piggy Bank?

Piggy Bank is an extension to the Firefox Web browser that turns it into a Semantic Web browser, letting you make use of existing information on the Web in more useful and flexible ways not offered by the original Web sites.

What is Semantic Bank?

Semantic Bank is the server companion of Piggy Bank that lets you persist, share, and publish data collected by individuals, groups, or communities.

What can I do with this?

A Semantic Bank allows you to:

How can I help?

Semantic Bank is Open Source software and built around the spirit of open participation and collaboration.

There are several ways you can help:

Licensing and Legal Issues

Semantic Bank is open source software and is licensed under the BSD license.

Note, however, that this software ships with libraries that are not released under the same license; that we interpret their licensing terms to be compatible with ours and that we are redistributing them unmodified. For more information on the licensing terms of the libraries Semantic Bank depends on, please refer to the source code.

Download location:

"http://simile.mit.edu/dist/semantic-bank/

The Virtuoso Semantic Bank End Point

Before you can publish, you must register with one or more Semantic Banks:

What is the graph name used by Virtuoso for the triples from PiggyBank?

http://simile.org/piggybank/<piggybank-generated-name>

The piggybank-generated-name is a Virtuoso DAV user ID.
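
For example, the triples stored for a given Piggy Bank user can be inspected from ISQL against the corresponding graph; the user ID in the graph IRI below is a placeholder:

SQL>SPARQL SELECT * FROM <http://simile.org/piggybank/dav_user_1> WHERE { ?s ?p ?o } LIMIT 10;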


16.2.3.10. Making Linked Data Views Dereferenceable - Northwind Example

Consider an application that makes some relational data available for SPARQL requests, as described in the first part of the Northwind Linked Data View example. This may be sufficient for some clients, but the IRIs of the described subjects are not dereferenceable. This means that external SPARQL processors cannot retrieve that data using the Virtuoso Sponger or the like. It also means that if some external resource refers to the IRI of a Northwind subject and a user browses that resource, they cannot reach the application's data by clicking on the subject link.

To make RDF access complete, applications can do the following:

  1. Create a virtual directory
  2. Instruct the server how to prepare RDF resources on demand
  3. Configure rendering of RDF resources for non-RDF clients (including Web search engines)
  4. Make the used ontology available
  5. Provide an index or sitemap page to help users who try to browse published data but do not know the proper URLs

The following sequence of operations demonstrates how to implement the listed features without writing any special web pages. All requests (except the application-specific index/sitemap) will be handled by existing web service endpoints.

As a precaution, we erase any URL rewriting rule lists created by this example that may be in the database following a previous run of the script.

DB.DBA.URLREWRITE_DROP_RULELIST ('demo_nw_rule_list1', 1)
;

Do the same for individual rewrite rules:

DB.DBA.URLREWRITE_DROP_RULE ('demo_nw_rule1', 1)
;
DB.DBA.URLREWRITE_DROP_RULE ('demo_nw_rule2', 1)
;
DB.DBA.URLREWRITE_DROP_RULE ('demo_nw_rule3', 1)
;
DB.DBA.URLREWRITE_DROP_RULE ('demo_nw_rule4', 1)
;

As a sanity check we ensure that there are no other similarly named rules:

SQL>SELECT signal ('WEIRD', sprintf ('Rewrite rule "%s" found', URR_RULE))
FROM DB.DBA.URL_REWRITE_RULE WHERE URR_RULE like 'demo_nw%'
;

Next we create URI rewrite rules based on regular expressions by calling DB.DBA.URLREWRITE_CREATE_REGEX_RULE, so the same path will be redirected to different places depending on the MIME types the client can accept.

For a given input path, that is a URI identifying a particular Linked Data entity, the rewrite rule below generates an N3 or RDF/XML representation of the entity using a CONSTRUCT query. (Note: In the regular expression identifying the Accept: MIME types this rule applies to, i.e. in rdf.n3 and rdf.xml, each period (.) replaces a literal character because some SPARQL web clients published before the relevant W3C recommendations produce slightly incorrect "Accept:" strings.)

SQL>DB.DBA.URLREWRITE_CREATE_REGEX_RULE (
    'demo_nw_rule2',
    1,
    '(/[^#]*)',
    vector('path'),
    1,
    '/sparql?query=CONSTRUCT+{+%%3Chttp%%3A//^{URIQADefaultHost}^%U%%23this%%3E+%%3Fp+%%3Fo+}+FROM+%%3Chttp%%3A//^{URIQADefaultHost}^/Northwind%%3E+WHERE+{+%%3Chttp%%3A//^{URIQADefaultHost}^%U%%23this%%3E+%%3Fp+%%3Fo+}&format=%U',
    vector('path', 'path', '*accept*'),
    null,
    '(text/rdf.n3)|(application/rdf.xml)',
    0,
    null
    );
Note:

The request URL for the SPARQL web service looks terrible because it is URL-encoded; the sprintf format string for it is even worse. The easiest way to compose encoded strings of this sort is to use the Conductor UI for configuring rewrite rules. Alternatively, open the SPARQL endpoint page (assuming it offers a UI for entering queries when no query string is specified), type the desired CONSTRUCT or DESCRIBE statement into the web form (using some sample URI), execute it, and copy the URL of the results page from the address line of the browser window. Paste it into the script, then replace the host name with ^{URIQADefaultHost}^, every percent sign with a double percent, and the parts of the sample IRI to be substituted with %U; finally, adjust the vector of replacement parameters so that its length equals the number of %U or other format specifiers in the template.

The next rule redirects to the RDF browser service to display a description of the subject URI and let the user explore related subjects.

SQL>DB.DBA.URLREWRITE_CREATE_REGEX_RULE (
    'demo_nw_rule1',
    1,
    '(/[^#]*)',
    vector('path'),
    1,
    '/rdfbrowser/index.html?uri=http%%3A//^{URIQADefaultHost}^%U%%23this',
    vector('path'),
    null,
    '(text/html)|(\\*/\\*)',
    0,
    303
    );

This next rule removes any trailing slash from the input path. Note that \x24 is the hex character code for the end-of-line pattern $. It is written escaped because the dollar sign indicates the beginning of macro in ISQL.

SQL>DB.DBA.URLREWRITE_CREATE_REGEX_RULE (
    'demo_nw_rule3',
    1,
    '(/[^#]*)/\x24',
    vector('path'),
    1,
    '%s',
    vector('path'),
    null,
    null,
    0,
    null
    );

To configure the server to furnish the ontology underpinning the example Northwind Linked Data View, the procedure LOAD_NW_ONTOLOGY_FROM_DAV, listed below, takes the ontology described in file /DAV/VAD/demo/sql/nw.owl and loads it into graph http://demo.openlinksw.com/schemas/NorthwindOntology/1.0/ in the local quad store. A rewrite rule is then created to query this graph when the input path identifies entities from this ontology.

SQL>create procedure DB.DBA.LOAD_NW_ONTOLOGY_FROM_DAV()
{
  declare content1, urihost varchar;
  SELECT cast (RES_CONTENT as varchar) INTO content1 from WS.WS.SYS_DAV_RES WHERE RES_FULL_PATH = '/DAV/VAD/demo/sql/nw.owl';
  DB.DBA.RDF_LOAD_RDFXML (content1, 'http://demo.openlinksw.com/schemas/northwind#', 'http://demo.openlinksw.com/schemas/NorthwindOntology/1.0/');
  urihost := cfg_item_value(virtuoso_ini_path(), 'URIQA','DefaultHost');
  if (urihost = 'demo.openlinksw.com')
  {
    DB.DBA.VHOST_REMOVE (lpath=>'/schemas/northwind');
    DB.DBA.VHOST_DEFINE (lpath=>'/schemas/northwind', ppath=>'/DAV/VAD/demo/sql/nw.owl', vsp_user=>'dba', is_dav=>1, is_brws=>0);
    DB.DBA.VHOST_REMOVE (lpath=>'/schemas/northwind#');
    DB.DBA.VHOST_DEFINE (lpath=>'/schemas/northwind#', ppath=>'/DAV/VAD/demo/sql/nw.owl', vsp_user=>'dba', is_dav=>1, is_brws=>0);
  }
};

DB.DBA.LOAD_NW_ONTOLOGY_FROM_DAV();

drop procedure DB.DBA.LOAD_NW_ONTOLOGY_FROM_DAV;

DB.DBA.URLREWRITE_CREATE_REGEX_RULE (
    'demo_nw_rule4',
    1,
    '/schemas/northwind#(.*)',
    vector('path'),
    1,
    '/sparql?query=DESCRIBE%20%3Chttp%3A//demo.openlinksw.com/schemas/northwind%23%U%3E%20FROM%20%3Chttp%3A//demo.openlinksw.com/schemas/NorthwindOntology/1.0/%3E',
    vector('path'),
    null,
    '(text/rdf.n3)|(application/rdf.xml)',
    0,
    null
    );

Next we define virtual directory /Northwind and associate with this a rulelist containing the URL rewriting rules defined above. Requests matching the rewriting rules should then be properly redirected to produce the requested data. Attempts to access the virtual directory root will execute the application's default VSP page, namely sfront.vspx.

SQL>DB.DBA.URLREWRITE_CREATE_RULELIST (
    'demo_nw_rule_list1',
    1,
    vector (
                'demo_nw_rule1',
                'demo_nw_rule2',
                'demo_nw_rule3',
                'demo_nw_rule4'
          ));

VHOST_REMOVE (lpath=>'/Northwind');
DB.DBA.VHOST_DEFINE (lpath=>'/Northwind', ppath=>'/DAV/home/demo/', vsp_user=>'dba', is_dav=>1, def_page=>'sfront.vspx',
          is_brws=>0, opts=>vector ('url_rewrite', 'demo_nw_rule_list1'));

Finally, to register the namespace prefix northwind as persistent we execute:

SQL>DB.DBA.XML_SET_NS_DECL ('northwind', 'http://demo.openlinksw.com/schemas/northwind#', 2);

16.2.3.11. Sponger Proxy URI Service

In certain cases, such as Ajax applications, it's prohibited to issue HTTP requests to a server other than the original server. In other cases it is necessary to transform the content of a target to an RDF format. To this end Virtuoso Server provides a Sponger Proxy URI Service. This service takes as an argument a target URL and may return the target's content "as is" or the Sponger may try to transform the content and return an RDF representation of the target. When transforming to RDF, the RDF format (RDF/XML, N3, TURTLE etc) of the output can be forced by a URL parameter or by content negotiation.

When the cartridges_dav.vad package is installed, Virtuoso reserves the path '/about/[id|html|data|rdf]/http/' for the RDF proxy service. In the current implementation, Virtuoso defines virtual directories for HTTP requests that come to the port specified as 'ServerPort' in the '[HTTPServer]' section of Virtuoso configuration file and refer to the above path string. So, if the Virtuoso installation on host example.com listens for HTTP requests on port 8080, client applications should use the 'service endpoint' string equal to 'http://example.com:8080/about/[id|html|data|rdf]/http/'.

If the cartridges_dav.vad VAD package is not installed, then the path '/proxy/rdf/' is used for the Sponger Proxy URI Service.

The old pattern for the Sponger Proxy URI Service, '/proxy/', is now deprecated.

Note: If you do not have the cartridges package installed, in order for the Sponger Proxy URI Service to work correctly, you must grant the SPARQL_UPDATE role to user SPARQL and grant execute permission on procedure RDF_SPONGE_UP.

To enable SPARQL_UPDATE using the Conductor UI:

  1. Go to the Virtuoso Administration Conductor, i.e. http://host:port/conductor
  2. Log in as the dba user
  3. Go to System Admin -> User Accounts -> Roles
  4. Click the "Edit" link for "SPARQL_UPDATE"
  5. Select "SPARQL" from the list of available users/groups and click the ">>" button to add it to the list on the right.
  6. Click the "Update" button.
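
Alternatively, the same role grant can be issued directly from ISQL:

grant SPARQL_UPDATE to "SPARQL";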

To grant execute permission on RDF_SPONGE_UP:

grant execute on DB.DBA.RDF_SPONGE_UP to "SPARQL";

When invoked with a URL of the form http://host:port/proxy?..., the Sponger Proxy URI Service accepts the following query string parameters:

When RDF data is requested and 'output-format' is not specified, the result will be serialized with a MIME type determined by the request's 'Accept' headers, i.e. the proxy service will perform content negotiation.

Example: RDF file with URL: http://www.w3.org/People/Berners-Lee/card

-- Access the url in order to view the result in HTML format:
http://host:port/about/html/http/www.w3.org/People/Berners-Lee/card
-- Access the url in order to view the result in RDF:
http://host:port/about/rdf/http://www.w3.org/People/Berners-Lee/card
-- or use the following proxy invocation style:
http://host:port/proxy/rdf/http://www.w3.org/People/Berners-Lee/card
-- or this one:
http://host:port/proxy?url=http://www.w3.org/People/Berners-Lee/card&force=rdf

Note: It is not permitted, when using the style http://host:port/proxy/rdf, to pass URL query string parameters to the proxy.
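
Any of the invocation styles above can also be issued programmatically from ISQL using http_client (); the sketch below is illustrative (the host and port are placeholders):

SELECT http_client ('http://localhost:8890/about/rdf/http://www.w3.org/People/Berners-Lee/card');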

Now go to the SPARQL endpoint, i.e. http://host:port/sparql

For the 'Default Graph URI' enter the URL of the RDF file: http://www.w3.org/People/Berners-Lee/card

For 'Query' enter:

SELECT *
WHERE
  {
    ?s ?p ?o
  }

Query result:

s  	                                        p  	                                           o
http://www.w3.org/People/Berners-Lee/card 	http://www.w3.org/1999/02/22-rdf-syntax-ns#type    http://xmlns.com/foaf/0.1/PersonalProfileDocument
http://www.w3.org/People/Berners-Lee/card 	http://purl.org/dc/elements/1.1/title 	           Tim Berners-Lee's FOAF file
http://www.w3.org/People/Berners-Lee/card 	http://creativecommons.org/ns#license 	           http://creativecommons.org/licenses/by-nc/3.0/
http://www.w3.org/People/Berners-Lee/card 	http://xmlns.com/foaf/0.1/maker 	           http://www.w3.org/People/Berners-Lee/card#i
etc ...

16.2.3.12. SPARQL INI service

The [SPARQL] section of the virtuoso.ini configuration file sets parameters and limits for the SPARQL query web service. The values contained in the [SPARQL] section can be exposed in RDF form via the URL pattern http://cname/sparql?ini

Example: http://demo.openlinksw.com/sparql?ini

<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:MaxQueryCostEstimationTime xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">1000</ns0pred:MaxQueryCostEstimationTime></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:ExternalXsltSource xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">1</ns0pred:ExternalXsltSource></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:DefaultQuery xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">SELECT ?Subject ?Concept WHERE {?Subject a ?Concept}</ns0pred:DefaultQuery></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:ResultSetMaxRows xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">100000</ns0pred:ResultSetMaxRows></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:MaxQueryExecutionTime xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">30</ns0pred:MaxQueryExecutionTime></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:ExternalQuerySource xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">1</ns0pred:ExternalQuerySource></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:DefaultGraph xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">http://demo.openlinksw.com/dataspace/person/demo</ns0pred:DefaultGraph></rdf:Description>
<rdf:Description rdf:about="http://www.openlinksw.com/schemas/virtini#SPARQL"><ns0pred:PingService xmlns:ns0pred="http://www.openlinksw.com/schemas/virtini#">http://rpc.pingthesemanticweb.com/</ns0pred:PingService></rdf:Description>
</rdf:RDF>

16.2.3.13. SPARQL Endpoint with Excel MIME Type Output Option

The SPARQL endpoint offers an Excel MIME type output option.

From http://cname:port/sparql, select "Spreadsheet" for the "Display Results As:" option and click the "Run Query" button.

SPARQL Endpoint with Excel MIME type output

Figure: 16.2.3.13.1. SPARQL Endpoint with Excel MIME type output

The resulting query string contains a format parameter value of "application/vnd.ms-excel". A URL such as the one shown in the figure below will be generated, and it can be opened directly with Excel.
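
Such a URL can also be fetched programmatically from ISQL; the sketch below is illustrative only (the host, port, and query are placeholders):

SELECT http_client ('http://localhost:8890/sparql?query=SELECT+%3Fs+%3Fp+%3Fo+WHERE+%7B%3Fs+%3Fp+%3Fo%7D+LIMIT+10&format=application%2Fvnd.ms-excel');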

SPARQL Endpoint with Excel MIME type output

Figure: 16.2.3.13.2. SPARQL Endpoint with Excel MIME type output

16.2.3.14. SPARQL Endpoint with RDF+JSON Output: SPARQL UI Example

The SPARQL endpoint also offers an RDF+JSON output option.

From http://cname:port/sparql, select "JSON" for the "Display Results As:" option and click the "Run Query" button.

SPARQL Endpoint with RDF+JSON output

Figure: 16.2.3.14.1. SPARQL Endpoint with RDF+JSON output

As a result, a URL containing the format parameter application/sparql-results+json will be generated, and the content should look like:

SPARQL Endpoint with JSON+RDF

Figure: 16.2.3.14.2. SPARQL Endpoint with JSON+RDF

16.2.3.15. SPARQL Endpoint with JSON/P Output Option: Curl Example

The SPARQL endpoint also offers a JSON/P output option.

The SPARQL endpoint accepts a 'callback' URL parameter; when the 'format' parameter is 'json' and a callback is supplied, the endpoint produces JSON/P output.

$ curl "http://lod.openlinksw.com/sparql?query=select+*+where+\{+%3Fx+a+%3Fz+.+\}+limit+10&format=json&debug=on&callback=func"
func(

{ "head": { "link": [], "vars": ["x", "z"] },
  "results": { "distinct": false, "ordered": true, "bindings": [
    { "x": { "type": "bnode", "value": "nodeID://b196899188" }  , "z": { "type": "uri", "value": "http://www.w3.org/2000/10/swap

    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2005/04/sparqlette/#profile" } , "z": { "type": "uri", "value":
services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/#b-profile" }   , "z": { "type": "uri",
aml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/#a-profile" }   , "z": { "type": "uri",
aml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/index.rdf#a-profile" }  , "z": { "type":
://www.daml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/index.rdf#b-profile" }  , "z": { "type":
://www.daml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2006/06/blogmatrix/#profile" } , "z": { "type": "uri", "value":
services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/#b-profile" }   , "z": { "type": "uri",
aml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/#a-profile" }   , "z": { "type": "uri",
aml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }},
    { "x": { "type": "uri", "value": "http://www.wasab.dk/morten/2003/12/nearestAirport/#b-profile" }   , "z": { "type": "uri",
aml.org/services/owl-s/1.1/Service.owl#ServiceProfile" }} ] } })


16.2.4. Troubleshooting SPARQL Queries

A short SPARQL query can be compiled into a long SQL statement, especially if the data comes from many quad map patterns. A moderately sized application with 50 tables and 10 columns per table may create thousands of quad map patterns for subjects spanning hundreds of different types. An attempt to "select everything" from a Linked Data View of that complexity may easily produce 5000 lines of SQL code. Thus it is to be expected that some queries will be rejected even though the same queries would work fine if the RDF data were held as physical quads in default storage rather than synthesized through a Linked Data View.

In addition, the SQL compiler signals an error if a table or column name is unknown, so typos in SQL are caught efficiently. SPARQL uses IRIs that are long and sometimes unreadable, but there is no "closed world" schema of the data, so a typo in an IRI is not an error; it is simply some other IRI. A typo in an IRI or in a namespace prefix therefore causes missing bindings for some triple patterns of the query and an incomplete result, but usually no errors are reported. A typo in a graph or predicate IRI may cause the SPARQL compiler to generate code that accesses default (quad) storage instead of a relational source, or to generate empty code that accesses nothing.

The SQL compiler does not signal casting errors when it runs the statement generated from SPARQL, because the generated SQL code contains option (QUIETCAST). This means that mismatches between expected and actual datatypes of values stay invisible and may cause rounding errors (e.g. integer division instead of floating-point) and even empty joins (due to join conditions that silently return NULL instead of returning a comparison error).

In other words, SPARQL queries are so terse that they leave no room for the details that would let the compiler distinguish intent from a bug. This terseness masks query complexity, misuse of names, and type mismatches. Debugging can be made easier by making queries longer and more explicit.

Two very helpful debugging tools are automatic void variable recognition and plain old code inspection. "Automatic" means "cheap" so the very first step of debugging is to ensure that every triple pattern of the query may in principle return something. This helps in finding typos when the query gets data from Linked Data Views. It also helps when a query tries to join two disjoint sorts of subjects. If the define sql:signal-void-variables 1 directive is placed in the preamble of the SPARQL query, the compiler will signal an error if it finds any triple pattern that cannot bind variables or any variable that is proved to be always unbound. This is especially useful when data are supposed to come from an option (exclusive) or option (soft exclusive) quad map. Without one of these options, the SPARQL compiler will usually bind variables using "physical quads"; the table of physical quads may contain any rows that match any given triple pattern; thus many errors will remain undiscovered. If the name of a quad map pattern is known then it is possible to force the SPARQL compiler to use only that quad map for the whole query or a part of the query. This is possible by using the following syntax:

QUAD MAP quad-map-name { group-pattern }

If some triple pattern inside group-pattern cannot be bound using quad-map-name or one of its descendants then define sql:signal-void-variables 1 will force the compiler to signal the error.
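
As a minimal illustration (the quad map name here is hypothetical, and the graph IRI and northwind prefix follow the example used later in this section), a group pattern restricted to a single quad map combined with the error-signalling directive might look like this:

SQL> SPARQL
define sql:signal-void-variables 1
PREFIX northwind: <http://demo.openlinksw.com/schemas/northwind#>
SELECT ?order ?emp
FROM <http://myhost.example.com/Northwind>
WHERE
  {
    QUAD MAP virtrdf:NorthwindOrders   # hypothetical quad map name
      {
        ?order northwind:has_salesrep ?emp .
      }
  };

If the pattern inside the group cannot be matched through that quad map, the error is reported at compile time instead of the query silently returning no rows.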

Note:

Although it is technically possible to use QUAD MAP to improve the performance of a query that tries to access redundant Linked Data Views, it is much better to achieve the same effect by providing a more restrictive query or by changing/extending the Linked Data View. If an application relies on this trick then interoperable third-party SPARQL clients may experience problems because they cannot use Virtuoso-specific extensions.

If the automated query checking gives nothing, function sparql_to_sql_text can be used in order to get the SQL text generated from the given query. Its only argument is the text of the SPARQL query to compile (without any leading SPARQL keyword or semicolon at the end). The returned value is the SQL text. The output may be long but it is the most authoritative source of diagnostic data.

When called from ISQL or an ODBC client, the return value of sparql_to_sql_text may be transferred as a BLOB so ISQL requires the "set blobs on" instruction to avoid data truncation. Even better, the SQL text can be saved to a file:

string_to_file ('debug.sql', sparql_to_sql_text ('SELECT * WHERE { graph ?g { ?s a ?type }}'), -2);

(The -2 is to overwrite the previous version of the file, as this function may be called many times).

Note:

When passing the query text to sparql_to_sql_text, if the query contains single quotes, each embedded single quote must be doubled up. Use double quotes in SPARQL queries to avoid this inconvenience.
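
For instance, a query containing the string literal 'debug' (an illustrative literal) would be passed with the embedded quotes doubled:

string_to_file ('debug.sql',
  sparql_to_sql_text ('SELECT ?s WHERE { ?s ?p ''debug'' }'), -2);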

As an example, let's find out why the query

SQL>SPARQL
PREFIX northwind: <http://demo.openlinksw.com/schemas/northwind#>
SELECT DISTINCT ?emp
FROM <http://myhost.example.com/Northwind>
WHERE {
    ?order1 northwind:has_salesrep ?emp ; northwind:shipCountry ?country1 .
    ?order2 northwind:has_salesrep ?emp ; northwind:shipCountry ?country2 .
    filter (?country1 != ?country2) }

is much slower than a similar SQL statement. A call to sparql_to_sql_text returns the equivalent SQL statement:

SELECT DISTINCT sprintf_iri ( 'http://myhost.example.com/Northwind/Employee/%U%U%d#this' ,
    /*retval[*/ "s-6-1-t0"."b067b7d~FirstName~0" /* emp */ /*]retval*/ ,
    /*retval[*/  "s-6-1-t0"."b067b7d~FirstName~1" /*]retval*/ ,
    /*retval[*/  "s-6-1-t0"."b067b7d~FirstName~2" /*]retval*/ ) AS /*tmpl*/ "emp"
FROM (SELECT "s-6-1-t0-int~orders"."OrderID" AS /*tmpl*/ "20ffecc~OrderID",
         "s-6-1-t0-int~employees"."FirstName" AS /*as-name-N*/ "b067b7d~FirstName~0",
         "s-6-1-t0-int~employees"."LastName" AS /*as-name-N*/ "b067b7d~FirstName~1",
         "s-6-1-t0-int~employees"."EmployeeID" AS /*as-name-N*/ "b067b7d~FirstName~2"
         FROM Demo.demo.Employees AS "s-6-1-t0-int~employees", Demo.demo.Orders AS "s-6-1-t0-int~orders"
         WHERE /* inter-alias join cond */
       "s-6-1-t0-int~orders".EmployeeID = "s-6-1-t0-int~employees".EmployeeID) AS "s-6-1-t0",
    (SELECT "s-6-1-t1-int~orders"."OrderID" AS /*tmpl*/ "20ffecc~OrderID",
        "s-6-1-t1-int~orders"."ShipCountry" AS /*tmpl*/ "e45a7f~ShipCountry"
        FROM Demo.demo.Orders AS "s-6-1-t1-int~orders") AS "s-6-1-t1",
    (SELECT "s-6-1-t2-int~orders"."OrderID" AS /*tmpl*/ "20ffecc~OrderID",
        "s-6-1-t2-int~employees"."FirstName" AS /*as-name-N*/ "b067b7d~FirstName~0",
	"s-6-1-t2-int~employees"."LastName" AS /*as-name-N*/ "b067b7d~FirstName~1",
	"s-6-1-t2-int~employees"."EmployeeID" AS /*as-name-N*/ "b067b7d~FirstName~2"
	FROM Demo.demo.Employees AS "s-6-1-t2-int~employees", Demo.demo.Orders AS "s-6-1-t2-int~orders"
    WHERE /* inter-alias join cond */
       "s-6-1-t2-int~orders".EmployeeID = "s-6-1-t2-int~employees".EmployeeID) AS "s-6-1-t2",
    (SELECT "s-6-1-t3-int~orders"."OrderID" AS /*tmpl*/ "20ffecc~OrderID",
        "s-6-1-t3-int~orders"."ShipCountry" AS /*tmpl*/ "e45a7f~ShipCountry"
    FROM Demo.demo.Orders AS "s-6-1-t3-int~orders") AS "s-6-1-t3"
WHERE /* two fields belong to same equiv */
    /*retval[*/  "s-6-1-t0"."20ffecc~OrderID" /* order1 */ /*]retval*/  =
    /*retval[*/  "s-6-1-t1"."20ffecc~OrderID" /* order1 */ /*]retval*/
    AND /* two fields belong to same equiv */
    sprintf_iri ( 'http://myhost.example.com/Northwind/Employee/%U%U%d#this' ,
        /*retval[*/  "s-6-1-t0"."b067b7d~FirstName~0" /* emp */ /*]retval*/ ,
	/*retval[*/  "s-6-1-t0"."b067b7d~FirstName~1" /*]retval*/ ,
	/*retval[*/  "s-6-1-t0"."b067b7d~FirstName~2" /*]retval*/ ) =
    sprintf_iri ( 'http://myhost.example.com/Northwind/Employee/%U%U%d#this' ,
        /*retval[*/  "s-6-1-t2"."b067b7d~FirstName~0" /* emp */ /*]retval*/ ,
	/*retval[*/  "s-6-1-t2"."b067b7d~FirstName~1" /*]retval*/ ,
	/*retval[*/  "s-6-1-t2"."b067b7d~FirstName~2" /*]retval*/ )
    AND /* two fields belong to same equiv */
    /*retval[*/  "s-6-1-t2"."20ffecc~OrderID" /* order2 */ /*]retval*/  =
    /*retval[*/  "s-6-1-t3"."20ffecc~OrderID" /* order2 */ /*]retval*/
    AND /* filter */
   ( /*retval[*/  "s-6-1-t1"."e45a7f~ShipCountry" /* country1 */ /*]retval*/  <>
        /*retval[*/  "s-6-1-t3"."e45a7f~ShipCountry" /* country2 */ /*]retval*/ )
OPTION (QUIETCAST)

The query is next to unreadable, but comments split it into meaningful expressions. Every triple (or list of similar triples) becomes a subquery that returns the fields needed to build the values of bound variables. The fields are printed wrapped in comments like /*retval[*/ expression /* original variable name */ /*]retval*/. Names like "s-6-1-t0" contain the source line number where a group pattern begins (6) and the serial number of the triple (0). The comment /* inter-alias join cond */ means that the expression which follows is the condition as written in the declaration of the quad map pattern. The comment /* filter */ precedes expressions for FILTER expressions in the source SPARQL. The word "equiv" means "equivalence class", i.e. a group of occurrences of variables in the source query such that all occurrences are bound to the same value. For example, when a name repeats in many triples of a group, all its occurrences form an equivalence class. In some cases the compiler can prove that two variables are always equal even if the names differ; these variables are also placed into an "equiv".

Looking at this query, you may notice equalities like sprintf_iri (...) = sprintf_iri (...). That is sub-optimal: it indicates that no index will be used to optimize the join and that there will be one function call per row. When the variable ?emp appears in two different triples, the value of the variable must be the same in both triples. The query compares IRIs instead of comparing the arguments of sprintf_iri because the format string is not proven to be a bijection. Indeed, it cannot be a bijection for arbitrary strings, but the declaration only needs to hold for the real-world data. If it is assumed that real names of persons never start with a digit, then within the %d%U format fragment the digits are always distinguishable from the name, so the IRI class can be declared as a bijection even though this is not true for arbitrary strings. The script can then include the "suspicious" option (bijection) as follows:

create iri class sample:Employee "http://example.com/Employee/%d%U#this"
  (in employee_id integer not null, in employee_lastname varchar not null)
  option (bijection) .

Unfortunately, attempts to use the same trick with the declaration from the Northwind example will fail:

create iri class northwind:Employee "http://^{URIQADefaultHost}^/Northwind/Employee/%U%U%d#this"
  (in employee_firstname varchar not null, in employee_lastname varchar not null, in employee_id integer not null)
  option (bijection) .

Bijection allows the parsing, but it will never give the proper result: the string parser processes the format from left to right, so the first %U consumes the whole concatenation that should have been split between %U%U%d, leaving nothing before the #this for the second %U (this is an error) and nothing for the %d (an explicit parse error, because the integer field cannot be empty). The compiler may sometimes report an error if it can prove that the format string is not appropriate for bijection.

The correct way of improving the Northwind example is to enable reliable bijection by adding strong delimiters:

create iri class northwind:Employee "http://^{URIQADefaultHost}^/Northwind/Employee/%U/%U/%d#this"
  (in employee_firstname varchar not null, in employee_lastname varchar not null, in employee_id integer not null)
  option (bijection) .

After running the updated script, the query contains three comparisons of fields that were arguments of sprintf_iri in the previous version.

Example for casting string as IRI type

create function DB.DBA.RDF_DF_GRANTEE_ID_URI (in id integer)
{
  declare isrole integer;
  isrole := coalesce ((SELECT top 1 U_IS_ROLE FROM DB.DBA.SYS_USERS WHERE U_ID = id));
  if (isrole is null)
    return NULL;
  else if (isrole)
    return sprintf ('http://%s/sys/group?id=%d', registry_get ('URIQADefaultHost'), id);
  else
    return sprintf ('http://%s/sys/user?id=%d', registry_get ('URIQADefaultHost'), id);
}
;

grant execute on DB.DBA.RDF_DF_GRANTEE_ID_URI to SPARQL_SELECT
;

create function DB.DBA.RDF_DF_GRANTEE_ID_URI_INVERSE (in id_iri varchar)
{
  declare parts any;
  parts := sprintf_inverse (id_iri, sprintf ('http://%s/sys/user?id=%%d', registry_get ('URIQADefaultHost')), 1);
  if (parts is not null)
    {
      if (exists (SELECT TOP 1 1 FROM DB.DBA.SYS_USERS WHERE U_ID = parts[0] and not U_IS_ROLE))
        return parts[0];
    }
  parts := sprintf_inverse (id_iri, sprintf ('http://%s/sys/group?id=%%d', registry_get ('URIQADefaultHost')), 1);
  if (parts is not null)
    {
      if (exists (SELECT TOP 1 1 FROM DB.DBA.SYS_USERS WHERE U_ID = parts[0] and U_IS_ROLE))
        return parts[0];
    }
  return NULL;
}
;

grant execute on DB.DBA.RDF_DF_GRANTEE_ID_URI_INVERSE to SPARQL_SELECT
;


create iri class oplsioc:grantee_iri using
  function DB.DBA.RDF_DF_GRANTEE_ID_URI (in id integer) returns varchar ,
  function DB.DBA.RDF_DF_GRANTEE_ID_URI_INVERSE (in id_iri varchar) returns integer
  option ( bijection ,
    returns	"http://^{URIQADefaultHost}^/sys/group?id=%d"
    union	"http://^{URIQADefaultHost}^/sys/user?id=%d" ) .


16.2.5. SPARQL Inline in SQL

Virtuoso extends the SQL 92 syntax with SPARQL queries and subqueries. Instead of writing a SQL SELECT query or subquery, one can write the SPARQL keyword and a SPARQL query after the keyword.

SQL>SPARQL SELECT DISTINCT ?p WHERE { graph ?g { ?s ?p ?o } };
p
varchar
----------
http://example.org/ns#b
http://example.org/ns#d
http://xmlns.com/foaf/0.1/name
http://xmlns.com/foaf/0.1/mbox
...


SQL>SELECT distinct subseq ("p", strchr ("p", '#')) as fragment
  FROM (SPARQL SELECT DISTINCT ?p WHERE { graph ?g { ?s ?p ?o } } ) as all_predicates
  WHERE "p" like '%#%';
fragment
varchar
----------
#query
#data
#name
#comment
...

Note that the names of variables returned from SPARQL are always case-sensitive; no case-mode rules apply to them. Depending on the CaseMode parameter in the Virtuoso configuration file, double quotes may be needed to refer to them in the surrounding SQL code.

It is possible to pass parameters to a SPARQL query via a Virtuoso-specific syntax extension. ?? or $? indicates a positional parameter similar to ? in plain SQL. ?? can be used in graph patterns or anywhere else in place of a SPARQL variable. The value of a parameter should be passed in SQL form, i.e. a number or an untyped string. An IRI ID can be passed in all cases where an absolute IRI can, except the obvious case where the variable is an argument of a function that requires a string. If the parameter is used in the 'graph', 'subject' or 'object' position of a SPARQL pattern, the string parameter is converted into an IRI automatically. In other cases an IRI string is indistinguishable from a string literal, so it is necessary to call the built-in SPARQL function iri(), e.g. iri (??). Using this notation, any dynamic SQL client (whether ODBC, JDBC or some other) can execute parameterized SPARQL queries, binding parameters just as with dynamic SQL.

SQL> create function param_passing_demo ()
{
  declare stat, msg varchar;
  declare mdata, rset any;
  exec ('SPARQL SELECT ?s WHERE { graph ?g { ?s ?? ?? }}',
    stat, msg,
    vector ( /* Vector of two parameters */
      'http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#int1',
      4 ),
    10, /* Max no of rows */
    mdata, /* Variable to get metadata */
    rset ); /* Variable to get result-set */
  if (length (rset) = 0)
    signal ('23000',
      'No data found, try demo database with installed Virtuoso tutorials');
  return rset[0][0];
}

SQL> SELECT param_passing_demo ();
callret
VARCHAR
_______________________________________________________________________________

http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#four

1 Rows. -- 00000 msec.

Another example:

INSERT INTO GRAPH <http://localhost:8890/Northwind>
{ `iri($?)` <http://localhost:8890/schemas/northwind#has_province> "Valencia" };

An inline SPARQL query can refer to SQL variables that are in scope in the SQL query or stored procedure containing it. Virtuoso extends the SPARQL syntax with a special notation to this effect. A reference to SQL variable X can be written as ?:X or $:X. A reference to column C of a table or a sub-select with alias T can be written as ?:T.C or $:T.C. Both notations can be used in any place where a variable name is allowed, except the 'AS' clause described below.

A column of a result set of a SPARQL SELECT can be used in SQL code inside a for statement just like any column from a SQL select.

SQL rules about double-quoted names are applicable to variables that are passed to a SPARQL query or selected from one. If a variable name contains unusual characters or should not be normalized according to SQL conventions, then the name should be escaped with double quotes. For example, the notation ?:"OrderLine" always refers to a variable or column named OrderLine, whereas ?:OrderLine may be converted to ORDERLINE or orderline.

It is safer to avoid variable names that conflict with the column names of the RDF system tables, especially G, S, P and O. These names are not reserved at present, but they may cause subtle bugs when an incorrect SPARQL subquery is compiled into SQL code that refers to identically named table columns. Some of these names may be rejected as syntax errors by future Virtuoso versions.

SQL> create procedure sql_vars_demo ()
{
#pragma prefix sort0: <http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#>
  declare RES varchar;
  declare obj integer;
  result_names (RES);
  obj := 4;
  for (SPARQL SELECT ?subj WHERE { graph ?g { ?subj sort0:int1 ?:obj } } ) do
    result ("subj");
}

SQL> sql_vars_demo ();
RES
VARCHAR
_______________________________________________________________________________

http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#four

1 Rows. -- 00000 msec.

The example also demonstrates the Virtuoso/PL pragma line for procedure-wide declarations of namespace prefixes. This makes the code more readable and eliminates duplicate declarations of namespace prefixes when the procedure contains many SPARQL fragments that refer to a common set of namespaces.

A SPARQL ASK query can be used as an argument of the SQL EXISTS predicate.

create function sparql_ask_demo () returns varchar
{
  if (exists (sparql ask where { graph ?g { ?s ?p 4}}))
    return 'YES';
  else
    return 'NO';
}

SQL> SELECT sparql_ask_demo ();
_______________________________________________________________________________

YES

16.2.5.1. Controlling SPARQL Output Data Types

The compilation of a SPARQL query may depend on an environment that is usually provided by the SPARQL protocol and which includes the default graph URI. Environment settings that come from the SPARQL protocol may override settings in the text of a SPARQL query. To let an application configure the environment for a query, SPARQL's syntax has been extended with the 'define' clause:

define parameter-qname parameter-value

Examples of supported parameters are output:valmode and output:format.

output:valmode specifies which data types (i.e. representation) should be used for values in the result set. The default is "SQLVAL", meaning that a query returns result set values in SQL format and behaves as a typical SQL select - IRIs and string literals are returned as strings, making the output compatible with ODBC and the standard SQL routines. To compose triple vectors in Virtuoso PL code, an application may need data in long format. A valmode of "LONG" means that IRIs are returned as IRI_IDs and string literals may be returned as special "RDF boxes" even if they are actually plain strings. This may cause problems if these new datatypes are not known to the data recipient or if IRIs come from RDF Views (in which case IRI_IDs are created on the fly and 'pollute' the database), but it can result in fewer data conversions and thus better speed if used properly. "AUTO" disables all types of conversion for the result set, so the latter can comprise a mix of values across "SQLVAL" and "LONG" value modes, as well as some internal representations. It is better to avoid using this mode in user applications because the output may change from version to version.

If the query contains a

define output:valmode 'LONG'

clause, then all returned values are in long format. For example, the following query returns IRI_IDs instead of IRI strings.

SQL>SPARQL define output:valmode 'LONG' SELECT distinct ?p WHERE { graph ?g { ?s ?p ?o } };
p
----------
#i1000001
#i1000003
#i1000005
#i1000006
...

output:format instructs the SPARQL compiler that the result of the query should be serialized into an RDF document - that document will be returned as a single column of a single row result set. output:format is especially useful if a SPARQL CONSTRUCT or SPARQL DESCRIBE query is executed directly via an ODBC or JDBC database connection and the client cannot receive the resulting dictionary of triples (there's no way to transfer such an object via ODBC). Using this option, the client can receive the document that contains the whole result set of a SELECT or the dictionary of triples of a CONSTRUCT/DESCRIBE, and parse it locally.

Supported values for output:format are RDF/XML and TURTLE (or TTL). If both output:valmode and output:format are specified, output:format has higher priority, raising an error if output:valmode is set to a value other than LONG.

When a SPARQL query is compiled, the compiler checks whether the result set is to be sent to a remote ODBC/JDBC client or used in some other way. The compiler will automatically set output:format to TURTLE if compiling for execution by an SQL client.

The example below demonstrates how different values of output:format affect the result of a SPARQL SELECT. Note the 10 rows and 4 columns in the first result, and the single LONG VARCHAR in the others. When using the ISQL client, issue the 'set blobs on;' directive before fetching long texts to avoid receiving a 'data truncated' warning.

SQL> SPARQL SELECT * WHERE {graph ?g { ?s ?p ?o }} limit 10;
g                                            s                    p                              o
VARCHAR                                      VARCHAR              VARCHAR                        VARCHAR
______________________________________________________________________

http://local.virt/DAV/bound/manifest.rdf     nodeID://1000000000 http://example.com/test#query http://local.virt/DAV/bound/bound1.rq
. . .
http://local.virt/DAV/examples/manifest.rdf nodeID://1000000019 http://example.com/test#query http://local.virt/DAV/examples/ex11.2.3.1_1.rq

10 Rows. -- 00000 msec.

SQL> SPARQL define output:format "TTL" SELECT * WHERE {graph ?g { ?s ?p ?o }} limit 10;
callret-0
LONG VARCHAR
_______________________________________________________________________________

@prefix :rdf <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix :rs <http://www.w3.org/2005/sparql-results#> .
@prefix :xsd <http://www.w3.org/2001/XMLSchema#> .
[ rdf:type rs:results ;
  rs:result [
      rs:binding [ rs:name "g" ; rs:value <http://local.virt/DAV/bound/manifest.rdf> ] ;
      rs:binding [ rs:name "s" ; rs:value _:nodeID1000000000 ] ;
      rs:binding [ rs:name "p" ; rs:value <http://example.com/test#query> ] ;
      rs:binding [ rs:name "o" ; rs:value <http://local.virt/DAV/bound/bound1.rq> ] ;
      ] ;

. . .

  rs:result [
      rs:binding [ rs:name "g" ; rs:value <http://local.virt/DAV/examples/manifest.rdf> ] ;
      rs:binding [ rs:name "s" ; rs:value _:nodeID1000000019 ] ;
      rs:binding [ rs:name "p" ; rs:value <http://example.com/test#query> ] ;
      rs:binding [ rs:name "o" ; rs:value <http://local.virt/DAV/examples/ex11.2.3.1_1.rq> ] ;
      ] ;
    ] .

1 Rows. -- 00000 msec.

SQL> SPARQL define output:format "RDF/XML" SELECT * WHERE {graph ?g { ?s ?p ?o }} LIMIT 10;
callret-0
LONG VARCHAR
_______________________________________________________________________________

<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:rs="http://www.w3.org/2005/sparql-results#"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
   <rs:results rdf:nodeID="rset">
  <rs:result rdf:nodeID="sol206">
   <rs:binding rdf:nodeID="sol206-0" rs:name="g"><rs:value rdf:resource="http://local.virt/DAV/bound/manifest.rdf"/></rs:binding>
   <rs:binding rdf:nodeID="sol206-1" rs:name="s"><rs:value rdf:nodeID="1000000000"/></rs:binding>
   <rs:binding rdf:nodeID="sol206-2" rs:name="p"><rs:value rdf:resource="http://example.com/test#query"/></rs:binding>
   <rs:binding rdf:nodeID="sol206-3" rs:name="o"><rs:value rdf:resource="http://local.virt/DAV/bound/bound1.rq"/></rs:binding>
  </rs:result>

. . .

  <rs:result rdf:nodeID="sol5737">
   <rs:binding rdf:nodeID="sol5737-0" rs:name="g"><rs:value rdf:resource="http://local.virt/DAV/examples/manifest.rdf"/></rs:binding>
   <rs:binding rdf:nodeID="sol5737-1" rs:name="s"><rs:value rdf:nodeID="1000000019"/></rs:binding>
   <rs:binding rdf:nodeID="sol5737-2" rs:name="p"><rs:value rdf:resource="http://example.com/test#query"/></rs:binding>
   <rs:binding rdf:nodeID="sol5737-3" rs:name="o"><rs:value rdf:resource="http://local.virt/DAV/examples/ex11.2.3.1_1.rq"/></rs:binding>
  </rs:result>
 </rs:results>
</rdf:RDF>

1 Rows. -- 00000 msec.

SPARQL CONSTRUCT and SPARQL DESCRIBE results are serialized as one would expect:

SQL> SPARQL
define output:format "TTL"
CONSTRUCT { ?s ?p "004" }
WHERE
  {
    graph ?g { ?s ?p 4 }
  };
callret-0
LONG VARCHAR
_______________________________________________________________________________

<http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#four> <http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#int1> "004" .
_:b1000000913 <http://www.w3.org/2001/sw/DataAccess/tests/result-set#index> "004" .


1 Rows. -- 00000 msec.

SQL> SPARQL
define output:format "RDF/XML"
CONSTRUCT { ?s ?p "004" }
WHERE
  {
    graph ?g { ?s ?p 4 }
  };
callret-0
LONG VARCHAR
_______________________________________________________________________________

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description about="http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#four"><ns0pred:int1 xmlns:ns0pred="http://www.w3.org/2001/sw/DataAccess/tests/data/Sorting/sort-0#">004</ns0pred:int1></rdf:Description>
<rdf:Description rdf:nodeID="b1000000913"><ns0pred:index xmlns:ns0pred="http://www.w3.org/2001/sw/DataAccess/tests/result-set#">004</ns0pred:index></rdf:Description>
</rdf:RDF>

1 Rows. -- 00000 msec.

SPARQL ASK returns a non-empty result set if a match is found for the graph pattern, and an empty result set otherwise. If output:format is specified, then the query produces a 'boolean result' document instead:

SQL> SPARQL ASK WHERE {graph ?g { ?s ?p 4 }};
__ask_retval
INTEGER
_______________________________________________________________________________

1

1 Rows. -- 00000 msec.

SQL> SPARQL ASK WHERE {graph ?g { ?s ?p "no such" }};
__ask_retval
INTEGER
_______________________________________________________________________________


0 Rows. -- 00000 msec.

SQL> SPARQL define output:format "TTL" ASK WHERE {graph ?g { ?s ?p 4 }};
callret
VARCHAR
_______________________________________________________________________________

@prefix :rdf <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 @prefix :rs <http://www.w3.org/2005/sparql-results#> .
[ rdf:type rs:results ; rs:boolean TRUE ]

1 Rows. -- 00000 msec.

SQL> SPARQL define output:format "RDF/XML" ASK WHERE {graph ?g { ?s ?p 4 }};
callret
VARCHAR
_______________________________________________________________________________

<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:rs="http://www.w3.org/2005/sparql-results#"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
   <rs:results rdf:nodeID="rset">
    <rs:boolean rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">1</rs:boolean></results></rdf:RDF>

1 Rows. -- 00000 msec.


16.2.6. API Functions

SPARQL can be used inline wherever SQL can be used. The only API functions that one needs to know are the ones for loading RDF data into the store. Dynamic SQL client applications can issue SPARQL queries against Virtuoso through the regular SQL client API, ODBC, JDBC or any other supported API, simply by prefixing the SPARQL query with the SPARQL keyword. Parameters work just as with dynamic SQL. Stored procedures can have SPARQL expressions inline and can declare cursors over SPARQL result sets.

Value conversions between SQL and SPARQL are most often automatic and invisible. In some cases one needs to be aware of the different SPARQL value representations (valmodes). SPARQL offers declarations for specifying whether returned graphs are to be serialized as XML or Turtle, or whether they will be hash tables of triples. See dict_new() and related functions for a description of the hash table SQL data type. The use of dictionaries is convenient for further programmatic processing of graphs.

RDF-related procedures use Virtuoso/PL vectors and dictionaries to represent RDF triples and sets of triples.

Valmode means the "format of values returned by an expression", i.e. 'short', 'long' or 'SQL value'.

A triple vector is a vector (array) of S, P and O, where all values are in 'long' format: IRI_IDs for IRI values, numbers or datetimes for the corresponding XMLSchema types, and special "RDF box" objects if O is neither a string nor an IRI.

A dictionary of triples (or hash table of triples) is a dictionary object made by the SQL function dict_new() whose keys are triple vectors and whose values are not specified; this is a good storage format for an unordered set of distinct triples.

A dictionary of blank node names is a dictionary used for processing a number of TURTLE or RDF/XML descriptions of subgraphs that come from a common graph. Imagine a situation where different descriptions actually refer to the same blank nodes of the original graph and, moreover, the application that generates these descriptions always generates the same blank node id string for the same node. A reader of the descriptions can correctly join the described subgraphs into one big subgraph by filling in a dictionary that contains blank node id strings as keys and the IRI_IDs assigned to those strings as dependent data. Sharing the same node dictionary among all readers of an application ensures that no blank node is duplicated.

16.2.6.1. Data Import

16.2.6.1.1. Using TTLP

DB.DBA.TTLP() parses TTL (TURTLE or N3 resource) and places its triples into DB.DBA.RDF_QUAD.

create procedure DB.DBA.TTLP (
    in strg any,       -- text of the resource
    in base varchar,   -- base IRI to resolve relative IRIs to absolute
    in graph varchar, -- target graph IRI, parsed triples will appear in that graph.
    in flags int)   -- bitmask of flags that permit some sorts of syntax errors in resource, use 0.

For loading a file of any significant length, it is more practical to use the file_to_string_output function.

It is important that the file be accessible to the Virtuoso server. You need to have properly set the DirsAllowed parameter in the [Parameters] section of the Virtuoso database INI file. For example, on Windows it could be:

virtuoso.ini file:
[Parameters]
...
DirsAllowed =  .\tmp
...

So, in this example, the file you want to import should be in the tmp folder or in one of its subfolders. Note that this example folder is a subfolder of the Virtuoso server's working directory.

SQL> DB.DBA.TTLP (file_to_string_output ('.\tmp\data.ttl'), '', 'http://my_graph', 0);

16.2.6.1.2. Using TTLP_MT

The DB.DBA.TTLP_MT() procedure is like DB.DBA.TTLP() but loads the file on multiple threads, using parallel I/O and multiprocessing if available. The function does not leave a transaction log. Hence, after a successful load, one should execute the checkpoint statement to make sure that a server restart does not wipe out the results.

create procedure DB.DBA.TTLP_MT (
    in strg any,       -- text of the resource
    in base varchar,   -- base IRI to resolve relative IRIs to absolute
    in graph varchar,  -- target graph IRI, parsed triples will appear in that graph.
    in flags int) -- flags, use 0
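
A minimal usage sketch, reusing the file and graph IRI from the TTLP example above and followed by the checkpoint that makes the load durable:

SQL> DB.DBA.TTLP_MT (file_to_string_output ('.\tmp\data.ttl'), '', 'http://my_graph', 0);
SQL> checkpoint;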

16.2.6.1.3. Using RDF_LOAD_RDFXML_MT

For loading large resources when transactional integrity is not important (loading of a single resource may take more than one transaction), you can also use the DB.DBA.RDF_LOAD_RDFXML_MT() procedure:

create procedure DB.DBA.RDF_LOAD_RDFXML_MT (
    in strg varchar,  -- text of the resource
    in base varchar,  -- base IRI to resolve relative IRIs to absolute
    in graph varchar) -- target graph IRI, parsed triples will appear in that graph.

The following example demonstrates importing data from the RDF resource with URI: http://www.w3.org/People/Berners-Lee/card

SQL>create procedure MY_LOAD_FILE (in full_uri varchar, in in_resultset integer := 0)
{
  declare REPORT varchar;
  declare graph_uri, dattext varchar;
  declare app_env any;
  app_env := null;
  whenever sqlstate '*' goto err_rep;
  if (not in_resultset)
    result_names (REPORT);
  dattext := cast (XML_URI_GET_AND_CACHE (full_uri) as varchar);
  MY_SPARQL_REPORT (sprintf ('Downloading %s: %d bytes',
      full_uri, length (dattext) ) );
  graph_uri := full_uri;
  DELETE FROM RDF_QUAD WHERE G = DB.DBA.RDF_MAKE_IID_OF_QNAME (graph_uri);
  DB.DBA.RDF_LOAD_RDFXML_MT (dattext, full_uri, graph_uri);
  return graph_uri;
err_rep:
  result (sprintf ('%s: %s', __SQL_STATE, __SQL_MESSAGE));
  return graph_uri;
}
;

Done. -- 0 msec.

SQL>create procedure MY_SPARQL_REPORT(in strg varchar)
{
  if (__tag(strg) <> 182)
    strg := cast (strg as varchar) || sprintf (' -- not a string, tag=%d', __tag(strg));
  strg := replace (strg, 'SPARQL_DAV_DATA_URI()', '\044{SPARQL_DAV_DATA_URI()}');
  strg := replace (strg, 'SPARQL_DAV_DATA_PATH()', '\044{SPARQL_DAV_DATA_PATH()}');
  strg := replace (strg, 'SPARQL_FILE_DATA_ROOT()', '\044{SPARQL_FILE_DATA_ROOT()}');
  result (strg);
}
;

Done. -- 0 msec.

SQL> MY_LOAD_FILE('http://www.w3.org/People/Berners-Lee/card');
REPORT
VARCHAR
_______________________________________________________________________________

Downloading http://www.w3.org/People/Berners-Lee/card: 17773 bytes

1 Rows. -- 4046 msec.

SQL>SPARQL
SELECT *
FROM <http://www.w3.org/People/Berners-Lee/card>
WHERE {?s ?p ?o} ;

s                                             p                                               o
VARCHAR                                       VARCHAR                                         VARCHAR
__________________________________________________________________________________________________________

http://bblfish.net/people/henry/card#me       http://xmlns.com/foaf/0.1/name                  Henry Story
http://www.w3.org/People/Berners-Lee/card#i   http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://xmlns.com/foaf/0.1/Person
http://www.w3.org/People/Berners-Lee/card#i   http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2000/10/swap/pim/contact#Male
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/nick                  TimBL
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/nick                  timbl
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/mbox                  mailto:timbl@w3.org
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/mbox_sha1sum          965c47c5a70db7407210cef6e4e6f5374a525c5c
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://bblfish.net/people/henry/card#me
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://hometown.aol.com/chbussler/foaf/chbussler.foaf#me
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://danbri.org/foaf#danbri
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://norman.walsh.name/knows/who#norman-walsh
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://www.aaronsw.com/about.xrdf#aaronsw
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://www.ivan-herman.net/foaf.rdf#me
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://www.w3.org/People/Berners-Lee/card#amy
http://www.w3.org/People/Berners-Lee/card#i   http://xmlns.com/foaf/0.1/knows                 http://dig.csail.mit.edu/People/RRS
..........


16.2.6.1.4. Using RDF_TTL2HASH

The DB.DBA.RDF_TTL2HASH() function does not load TTL content; instead, it returns a dictionary of triples in 'long valmode'.

create function DB.DBA.RDF_TTL2HASH (
    in strg any,
    in base varchar,
    in graph varchar ) returns any

The flags parameter (a bitmask, as accepted by DB.DBA.TTLP() and DB.DBA.TTLP_MT()) is useful when the syntax of the resource is TURTLE-like but not strictly correct TURTLE. By default, use a zero value. Add 1 to let string literals contain end-of-line characters. Add 2 to suppress error messages on blank node verbs. Add 4 to allow variables instead of blank nodes. Add 8 to silently skip triples with literal subjects.


16.2.6.1.5. Using RDF_LOAD_RDFXML

The DB.DBA.RDF_LOAD_RDFXML() procedure parses RDF/XML and places its triples into DB.DBA.RDF_QUAD.

create procedure DB.DBA.RDF_LOAD_RDFXML (
    in strg any,           -- text of an XML document
    in base_iri varchar,   -- base IRI to resolve relative IRIs
    in graph_iri varchar ) -- the IRI of destination graph

See the example in the previous section.

16.2.6.1.6. Using RDF_QUAD_URI, RDF_QUAD_URI_L and RDF_QUAD_URI_L_TYPED

To insert a single quad into the DB.DBA.RDF_QUAD table, use one of these procedures:

-- Simple insertion of a quad where the object is a node
create procedure DB.DBA.RDF_QUAD_URI (
  in g_uri varchar, in s_uri varchar, in p_uri varchar,
  in o_uri varchar ) -- IRI string or IRI_ID

-- Simple insertion of a quad where the object is a literal value in 'SQL valmode'
create procedure DB.DBA.RDF_QUAD_URI_L (
  in g_uri varchar, in s_uri varchar, in p_uri varchar,
  in o_lit any ) -- string, number or datetime, NULL is not allowed

create procedure DB.DBA.RDF_QUAD_URI_L_TYPED (
  in g_uri varchar, in s_uri varchar, in p_uri varchar,
  in o_lit any,     -- string value of the literal
  in dt any,        -- datatype as IRI string or IRI_ID, can be NULL
  in lang varchar ) -- language as string or NULL

The g_uri, s_uri and p_uri arguments of these three procedures should be IRI strings or IRI_IDs. All string arguments should be in UTF-8 encoding; otherwise they will be stored but will not be queryable via SPARQL.
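
A brief usage sketch (the graph, subject, predicate and object IRIs, and the literals, are illustrative):

-- object is another node
DB.DBA.RDF_QUAD_URI ('http://example.com/g', 'http://example.com/s',
  'http://example.com/p', 'http://example.com/o');

-- object is a plain literal in 'SQL valmode'
DB.DBA.RDF_QUAD_URI_L ('http://example.com/g', 'http://example.com/s',
  'http://example.com/p', 'a plain literal');

-- object is a language-tagged literal with no datatype IRI
DB.DBA.RDF_QUAD_URI_L_TYPED ('http://example.com/g', 'http://example.com/s',
  'http://example.com/p', 'une chaine', NULL, 'fr');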


16.2.6.2. Data Export

These two procedures serialize a vector of triples into a session, in TURTLE or RDF/XML syntax. In their current versions, every triple is printed in a separate top-level record (say, in an rdf:Description tag), without any pretty-printing or nesting optimization.

create procedure DB.DBA.RDF_TRIPLES_TO_TTL (
    inout triples any, -- vector of triples in 'long valmode'.
    inout ses any )    -- an output stream in server default encoding

create procedure DB.DBA.RDF_TRIPLES_TO_RDF_XML_TEXT (
    inout triples any,          -- vector of triples in 'long valmode'.
    in print_top_level integer, -- zero if only rdf:Description tags should be written,
                                -- non-zero if the rdf:RDF top-level element should also be written
    inout ses any )             -- an output stream in server default encoding
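
A hedged round-trip sketch: the Turtle snippet and graph IRI are illustrative; dict_list_keys() turns the dictionary returned by DB.DBA.RDF_TTL2HASH() into the vector of triples expected by DB.DBA.RDF_TRIPLES_TO_TTL(), and string_output()/string_output_string() collect the serialized text:

create procedure TTL_ROUNDTRIP_DEMO ()
{
  declare dict, triples, ses any;
  -- parse a tiny Turtle snippet into a dictionary of triples ('long valmode')
  dict := DB.DBA.RDF_TTL2HASH (
    '@prefix ex: <http://example.com/> . ex:s ex:p "hello" .',
    '', 'http://example.com/tmp');
  triples := dict_list_keys (dict, 1);   -- vector of triple vectors
  ses := string_output ();
  DB.DBA.RDF_TRIPLES_TO_TTL (triples, ses);
  return string_output_string (ses);     -- the serialized TURTLE text
}
;

SELECT TTL_ROUNDTRIP_DEMO ();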

16.2.6.3. Data Query

-- Local execution of SPARQL via SPARQL protocol, produces a result set of SQL values.
create procedure DB.DBA.SPARQL_EVAL (
    in query varchar,      -- text of SPARQL query to execute
    in dflt_graph varchar, -- default graph IRI, if not NULL then this overrides what's specified in query
    in maxrows integer )   -- limit on numbers of rows that should be returned.

-- Similar to SPARQL_EVAL, but returns a vector of vectors of SQL values.
create function DB.DBA.SPARQL_EVAL_TO_ARRAY (
    in query varchar,      -- text of SPARQL query to execute
    in dflt_graph varchar, -- default graph IRI, if not NULL then this overrides what's specified in query
    in maxrows integer )   -- limit on numbers of rows that should be returned.
returns any
-- Remote execution of SPARQL via SPARQL protocol, produces a result set of SQL values.
create procedure DB.DBA.SPARQL_REXEC (
    in service varchar,    -- service URI to call via HTTP
    in query varchar,      -- text of SPARQL query to execute
    in dflt_graph varchar, -- default graph IRI, if not NULL then this overrides what's specified in query
    in named_graphs any,   -- vector of named graph IRIs, if not NULL then this overrides what's specified in query
    in req_hdr any,        -- additional HTTP header lines that should be passed to the service; 'Host: ...' is most popular.
    in maxrows integer,    -- limit on numbers of rows that should be returned.
    in bnode_dict any )    -- dictionary of bnode ID references.

-- Similar to SPARQL_REXEC (), but returns a vector of vectors of SQL values.
-- All arguments are the same.
create function DB.DBA.SPARQL_REXEC_TO_ARRAY (
    in service varchar, in query varchar, in dflt_graph varchar, in named_graphs any,
    in req_hdr any, in maxrows integer, in bnode_dict any)
returns any

-- Similar to SPARQL_REXEC (), but fills in output parameters with metadata (like exec metadata) and a vector of vectors of 'long valmode' values.
-- First seven arguments are the same.
create procedure DB.DBA.SPARQL_REXEC_WITH_META (
    in service varchar, in query varchar, in dflt_graph varchar, in named_graphs any,
    in req_hdr any, in maxrows integer, in bnode_dict any,
    out metadata any,  -- metadata like exec () returns.
    out resultset any) -- results as 'long valmode' value.

If the query is a CONSTRUCT or DESCRIBE then the result set consists of a single row and column, the value inside is a dictionary of triples in 'long valmode'.
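
For example, DB.DBA.SPARQL_EVAL can be called directly from ISQL to run a query locally and get a result set of SQL values (the query text and row limit are illustrative):

SQL> DB.DBA.SPARQL_EVAL ('SELECT DISTINCT ?p WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 10', NULL, 10);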



16.2.7. Useful Internal Functions

16.2.7.1. Conversion Functions for XMLSchema/RDF Data Serialization Syntax

These functions emulate constructor functions from XQuery Core Function Library.

create function DB.DBA."http://www.w3.org/2001/XMLSchema#boolean" (in strg any) returns integer
create function DB.DBA."http://www.w3.org/2001/XMLSchema#dateTime" (in strg any) returns datetime
create function DB.DBA."http://www.w3.org/2001/XMLSchema#double" (in strg varchar) returns double precision
create function DB.DBA."http://www.w3.org/2001/XMLSchema#float" (in strg varchar) returns float
create function DB.DBA."http://www.w3.org/2001/XMLSchema#integer" (in strg varchar) returns integer
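
For instance, these can be called like ordinary SQL functions (the literal values are illustrative):

SQL> SELECT DB.DBA."http://www.w3.org/2001/XMLSchema#integer" ('42');
SQL> SELECT DB.DBA."http://www.w3.org/2001/XMLSchema#dateTime" ('2010-01-01T12:00:00');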

16.2.7.2. RDF-specific Predicates

-- Returns 1 if string s matches pattern p, 0 otherwise
create function DB.DBA.RDF_REGEX (
    in s varchar,            -- source string to check
    in p varchar,            -- regular expression pattern string
    in coll varchar := null) -- unused for now (modes are not yet implemented)

-- Returns 1 if language identifier r matches lang pattern t
create function DB.DBA.RDF_LANGMATCHES (
  in r varchar, -- language identifier (string or NULL)
  in t varchar) -- language pattern (exact name, first two letters or '*')
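
For instance (the arguments are illustrative):

SQL> SELECT DB.DBA.RDF_REGEX ('Timothy Berners-Lee', 'Berners');  -- 1, the pattern matches
SQL> SELECT DB.DBA.RDF_LANGMATCHES ('en-gb', 'en');               -- 1, 'en-gb' matches the pattern 'en'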


16.2.8. Default and Named Graphs

Sometimes the default graph IRI is not known when the SPARQL query is composed. It can be added at the very last moment by providing the IRI in a 'define' clause as follows:

define input:default-graph-uri <http://example.com>

Such a definition overrides the default graph URI set in the query by the 'FROM ...' clause (if any).

The query may contain more than one define input:default-graph-uri. The set of values of input:default-graph-uri has the highest possible priority and cannot be redefined in the rest of the text of the query by FROM clauses.
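
For example (the graph IRI is illustrative):

SPARQL
define input:default-graph-uri <http://example.com>
SELECT * WHERE { ?s ?p ?o } LIMIT 10;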

FROM NAMED clauses can be used multiple times in one query:

SPARQL
  SELECT ?id
  FROM NAMED <http://example.com/user1.ttl>
  OPTION (get:soft "soft", get:method "GET")
  FROM NAMED <http://example.com/user2.ttl>
  OPTION (get:soft "soft", get:method "GET")
  WHERE { GRAPH ?g { ?id a ?o } }

Similarly, define input:named-graph-uri <http://example.com> is a replacement for a FROM NAMED clause.

When Virtuoso receives a SPARQL request via HTTP, the value of the default graph can be set in the protocol using a default-graph-uri HTTP parameter. Multiple occurrences of this parameter are allowed. This HTTP parameter is converted into define input:default-graph-uri. There's similar support for named-graph-uri HTTP parameter. For debugging purposes, graph names set in the protocol are sent back in the reply header as X-SPARQL-default-graph: ... and X-SPARQL-named-graph: ... header lines, one line per graph.

A web service endpoint may provide different default configurations for different host names mentioned in HTTP requests. This facility is configured via table DB.DBA.SYS_SPARQL_HOST.

create table DB.DBA.SYS_SPARQL_HOST (
  SH_HOST	varchar not null primary key, -- host mask
  SH_GRAPH_URI	varchar,                -- 'default default' graph uri
  SH_USER_URI	varchar,                  -- reserved for any use in applications
  SH_DEFINES	long varchar              -- additional defines for requests
)

When the SPARQL web service endpoint receives a request, it checks the Host HTTP header line. This line contains zero or more target host names, delimited by commas. For every host name in the line, the service scans the DB.DBA.SYS_SPARQL_HOST table in search of a row containing a matching host name in SH_HOST. The SH_HOST field acts as the 'pattern' argument of the SQL string operator LIKE. If a matching row is found, the text of the SPARQL request is extended. If a default graph is not explicitly set by the HTTP parameters and SH_GRAPH_URI is not null, then the default graph is set to SH_GRAPH_URI. If SH_DEFINES is not null, then it is added in front of the query, so this field is a good place for the text of any define options.

SH_USER_URI is for arbitrary user data and can be used in any way by the application that is "responsible" for the declared host.

The search of DB.DBA.SYS_SPARQL_HOST stops at the first found row, other possible matches are silently ignored.
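
A configuration sketch (the host mask and graph IRI are illustrative): the row below registers a 'default default' graph for requests whose Host header matches data.example.com.

INSERT INTO DB.DBA.SYS_SPARQL_HOST (SH_HOST, SH_GRAPH_URI, SH_USER_URI, SH_DEFINES)
  VALUES ('data.example.com%', 'http://data.example.com/graph', NULL, NULL);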


16.2.9. Calling SQL from SPARQL

A SPARQL expression can contain calls to Virtuoso/PL functions and built-in SQL functions, both in the WHERE clause and in the result set. Two namespace prefixes, bif and sql, are reserved for these purposes. When a function name starts with the bif: namespace prefix, the rest of the name is treated as the name of a SQL BIF (Built-In Function). When a function name starts with the sql: namespace prefix, the rest of the name is treated as the name of a Virtuoso/PL function owned by DBA with database qualifier DB, e.g. sql:example(...) is converted into DB.DBA."example"(...).

In both cases, the function receives arguments in SQL format ('SQL valmode') and also returns the result in SQL format. The SPARQL compiler will automatically add code for format conversion into the resulting SQL code so SQL functions can be used even if define output:valmode 'LONG' forces the use of RDF representation in the result set.

16.2.9.1. Example with sql: namespace prefix

SQL>create procedure DB.DBA.ComposeInfo (
  in pname varchar,
  in pnick varchar := '',
  in pbox  varchar := '')
{
   declare ss varchar;
   ss := concat(pname, ' ', pnick, ' ', pbox);
   ss := rtrim (ss, ' ');
   return ss;

};
Done. -- 0 msec.

SQL>SPARQL
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT (sql:ComposeInfo (?name, ?nick, ?box))
FROM <http://www.w3.org/People/Berners-Lee/card>
WHERE
  {
    ?s rdf:type foaf:Person .
    optional{?s foaf:name ?name }.
    optional{?s foaf:nick ?nick }.
    optional{?s foaf:box ?box }.
    filter (?nick like '%TimBL%') .
  };
callret-0
VARCHAR
_______________________________________________________________________________

Timothy Berners-Lee TimBL

1 Rows. -- 30 msec.

16.2.9.2. Example with sql: namespace prefix and bif:contains

SQL>SPARQL
SELECT DISTINCT ?cityUri ?cityName (sql:BEST_LANGMATCH (?cityName, 'en, en-gb;q=0.8, fr;q=0.7, *;q=0.1', '')) as ?bestCityName
WHERE
  {
    ?cityUri ?predicate ?value.
    ?cityUri a <http://dbpedia.org/ontology/City>.
    ?value bif:contains "London".
    OPTIONAL
      {
        ?cityUri rdfs:label ?cityName
      }
  };

cityUri                                              cityName                      bestCityName
ANY                                                  ANY 	                         ANY
______________________________________________________________________________________________________________
http://dbpedia.org/resource/Anerley	                 Anerley	                     Anerley
http://dbpedia.org/resource/Felixstowe	             Felixstowe	                   Felixstowe
http://dbpedia.org/resource/Chesham	                 Chesham	                     Chesham
http://dbpedia.org/resource/Stratford%2C_London	     Stratford, London	           Stratford, London
http://dbpedia.org/resource/Ashford%2C_Surrey	       Ashford (Surrey)	             Ashford (Surrey)
http://dbpedia.org/resource/Newmarket%2C_Suffolk	   Newmarket (Suffolk)	         Newmarket (Suffolk)
http://dbpedia.org/resource/North_Rhine-Westphalia	 Renania d'o Norte-Westfalia	 Renania d'o Norte-Westfalia
http://dbpedia.org/resource/West_Bromwich	           West Bromwich	               West Bromwich
....


16.2.9.3. Example with bif: namespace prefix

SQL>SPARQL
SELECT *
FROM <http://www.w3.org/people#>
WHERE { ?s ?p ?o . ?o bif:contains '"Timo*"'};
s                                               p                                     o
VARCHAR                                         VARCHAR                               VARCHAR
_______________________________________________________________________________

 http://www.w3.org/People/Berners-Lee/card#i	http://xmlns.com/foaf/0.1/name	      Timothy Berners-Lee
 http://www.w3.org/People/Berners-Lee/card#i	http://xmlns.com/foaf/0.1/givenname   Timothy

2 Rows. -- 2 msec.



16.2.10. SPARQL DESCRIBE

The SPARQL specification does not define the precise output of DESCRIBE, so different applications may need different results for the same subject. Some applications need quick generation of short and incomplete results whereas others may need detailed reports composed from multiple sources.

The supported option values for sql:describe-mode include "SPO" and "CBD", both described below.

If define sql:describe-mode "xxx" is specified then the generated SQL code will use the procedures named:

DB.DBA.SPARQL_DESC_DICT_xxx (in subj_dict any, in consts any, in graphs any, in storage_name any, in options any)

and

DB.DBA.SPARQL_DESC_DICT_xxx_PHYSICAL (in subj_dict any, in consts any, in graphs any, in storage_name any, in options any)

In a new blank database, only two such pairs of procedures are created. The procedures DB.DBA.SPARQL_DESC_DICT_SPO and DB.DBA.SPARQL_DESC_DICT_SPO_PHYSICAL are for sql:describe-mode "SPO". This pair of procedures searches for all triples where the input IRIs are used as subjects; they are faster than the default routine, which searches for all triples where the input IRIs are used as subjects or objects. Similarly, DB.DBA.SPARQL_DESC_DICT_CBD and DB.DBA.SPARQL_DESC_DICT_CBD_PHYSICAL are for sql:describe-mode "CBD". CBD stands for the Concise Bounded Description of a given subject (i.e., SPO plus the CBD of each blank node object found by SPO, recursively).

In each pair, both procedures have the same semantics but the second one is used if and only if the SPARQL compiler can prove that all subjects to process are from physical storage (DB.DBA.RDF_QUAD). Thus the second procedure will not search for subjects in Linked Data Views.

Each procedure should return a dictionary with triples as keys and integer 1 as values. So the dictionary is filled by calls like:

dict_put (resulting_dict,
          vector (subj_iri_id, pred_iri_id, obj_iri_id_or_rdf_box),
          1);

The procedure arguments correspond to those shown in the signatures above.

One should grant execute permission on both procedures to SPARQL_SELECT before referring to them in SPARQL.

16.2.10.1. SPARQL DESCRIBE Examples

Assume the following statements are executed:

__rdf_set_bnode_t_treshold();
SET blobs ON;
SET echo ON;

SPARQL PREFIX xmp: <http://example.com/xmp/>
CLEAR GRAPH xmp:good1;

SPARQL PREFIX xmp: <http://example.com/xmp/>
CLEAR GRAPH xmp:good2;

SPARQL PREFIX xmp: <http://example.com/xmp/>
CLEAR GRAPH xmp:bad1;

SPARQL PREFIX xmp: <http://example.com/xmp/>
CLEAR GRAPH xmp:bad2;

SPARQL PREFIX xmp: <http://example.com/xmp/>
INSERT IN xmp:good1
  {
    xmp:Top1 xmp:item xmp:TheSubject .
    xmp:TheSubject xmp:details
            xmp:ChildObject ,
            ( xmp:car xmp:cadr xmp:caddr ) .
  };

SPARQL PREFIX xmp: <http://example.com/xmp/>
INSERT IN xmp:good2
  {
    xmp:Top2 xmp:items [ rdf:_1 xmp:TheSubject ; rdf:_2 xmp:OtherSubject ] .
  };

16.2.10.1.1. Examples SPARQL DESCRIBE -- No Option

SPARQL
DEFINE output:format "NICE_TTL"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2;

fmtaggret-NICE_TTL
LONG VARCHAR
 @prefix ns0: <http://example.com/xmp/> .
 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 ns0:Top1 ns0:item ns0:TheSubject .
 ns0:TheSubject ns0:details ns0:ChildObject , [ ] .
 _:vb78520012 rdf:_1 ns0:TheSubject .


16.2.10.1.2. Examples SPARQL DESCRIBE Option "SPO"

Example 1

SPARQL
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "SPO"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2;

fmtaggret-NICE_TTL
LONG VARCHAR
 @prefix ns0: <http://example.com/xmp/> .
 ns0:TheSubject ns0:details ns0:ChildObject , [ ] .

Example 2

SQL>set blobs on;
SQL>SPARQL
define sql:describe-mode "SPO"
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sioct: <http://rdfs.org/sioc/types#>

DESCRIBE ?forum
FROM <http://demo.openlinksw.com/dataspace>
WHERE {
  ?forum rdf:type sioct:Weblog .
}
LIMIT 1;

callret-0
LONG VARCHAR
_______________________________________________________________________________

<http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdfs.org/sioc/types#Weblog> ,
                <http://atomowl.org/ontologies/atomrdf#Feed> ;
        <http://rdfs.org/sioc/ns#description> "XML templates demo's Weblog" ;
        <http://rdfs.org/sioc/ns#has_space> <http://demo.openlinksw.com/dataspace/bloguser/space#this> ;
        <http://rdfs.org/sioc/ns#container_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/20> ,
                <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/21> ;
        <http://rdfs.org/sioc/ns#id> "bloguser_blog" ;
        <http://xmlns.com/foaf/0.1/maker> <http://demo.openlinksw.com/dataspace/person/bloguser#this> ;
        <http://rdfs.org/sioc/ns#link> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> ;
        <http://atomowl.org/ontologies/atomrdf#entry> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/20> ,
                <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/21> ;
        <http://atomowl.org/ontologies/atomrdf#contains> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/21> ,
                <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/20> ;
        <http://atomowl.org/ontologies/atomrdf#title> "bloguser_blog" ;
        <http://www.w3.org/2000/01/rdf-schema#label> "XML templates demo's Weblog" ;
        <http://rdfs.org/sioc/ns#scope_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog#owner> ;
        <http://rdfs.org/sioc/ns#has_owner> <http://demo.openlinksw.com/dataspace/bloguser#this> ;
        <http://www.w3.org/2000/01/rdf-schema#isDefinedBy> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/sioc.rdf> ;
        <http://purl.org/dc/elements/1.1/identifier> "62"^^<http://www.w3.org/2001/XMLSchema#integer> ;
        <http://rdfs.org/sioc/services#has_service> <http://demo.openlinksw.com/RPC2> ,
                <http://demo.openlinksw.com/mt-tb> ,
                <http://demo.openlinksw.com/Atom/bloguser-blog-0> ,
                <http://demo.openlinksw.com/GData/bloguser-blog-0> .
<http://demo.openlinksw.com/RPC2> <http://rdfs.org/sioc/services#service_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/mt-tb> <http://rdfs.org/sioc/services#service_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog#owner> <http://rdfs.org/sioc/ns#has_scope> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/20> <http://rdfs.org/sioc/ns#has_container> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> ;
        <http://atomowl.org/ontologies/atomrdf#source> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog/21> <http://rdfs.org/sioc/ns#has_container> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> ;
        <http://atomowl.org/ontologies/atomrdf#source> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/bloguser#this> <http://rdfs.org/sioc/ns#owner_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/bloguser/space#this> <http://rdfs.org/sioc/ns#space_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/dataspace/person/bloguser#this> <http://xmlns.com/foaf/0.1/made> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/Atom/bloguser-blog-0> <http://rdfs.org/sioc/services#service_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .
<http://demo.openlinksw.com/GData/bloguser-blog-0> <http://rdfs.org/sioc/services#service_of> <http://demo.openlinksw.com/dataspace/bloguser/weblog/bloguser_blog> .


1 Rows. -- 240 msec.

16.2.10.1.3. Examples SPARQL DESCRIBE -- Option "CBD"

Example 1

SPARQL
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "CBD"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2;

Query result:
fmtaggret-NICE_TTL
LONG VARCHAR
 @prefix ns0: <http://example.com/xmp/> .
 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 ns0:TheSubject ns0:details ns0:ChildObject , ( ns0:car ns0:cadr ns0:caddr ) .

Example 2

SPARQL
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "CBD"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject xmp:good1 FROM xmp:good2;

Example 3

SQL>SPARQL
DEFINE sql:describe-mode "CBD"
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
DESCRIBE ?friend
WHERE
  {
    ?s foaf:knows ?friend  .
    ?friend foaf:nick ?nick.
    filter (?s=<http://www.advogato.org/person/rmorgan/foaf.rdf#me>)
}
;

@prefix rdf:	<http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ns1:	<http://www.advogato.org/person/chrisd/foaf.rdf#> .
@prefix foaf:	<http://xmlns.com/foaf/0.1/> .
ns1:me	rdf:type	foaf:Person .
@prefix rdfs:	<http://www.w3.org/2000/01/rdf-schema#> .
ns1:me	rdfs:seeAlso	<http://www.advogato.org/person/chrisd/foaf.rdf> ;
	foaf:name	"Chris DiBona" ;
	foaf:nick	"chrisd" ;
	foaf:homepage	<http://www.dibona.com> ;
	foaf:mbox_sha1sum	"e8231d19ac0d11ccbdc565485054461e5d71f0d3" .
@prefix ns4:	<http://www.advogato.org/person/schoen/foaf.rdf#> .
ns1:me	foaf:knows	ns4:me .
@prefix ns5:	<http://www.advogato.org/person/jpick/foaf.rdf#> .
ns1:me	foaf:knows	ns5:me .
@prefix ns6:	<http://www.advogato.org/person/benson/foaf.rdf#> .
ns1:me	foaf:knows	ns6:me .
@prefix ns7:	<http://www.advogato.org/person/conrad/foaf.rdf#> .
ns1:me	foaf:knows	ns7:me .
@prefix ns8:	<http://www.advogato.org/person/starshine/foaf.rdf#> .
ns1:me	foaf:knows	ns8:me .
@prefix ns9:	<http://www.advogato.org/person/chip/foaf.rdf#> .
ns1:me	foaf:knows	ns9:me .
@prefix ns10:	<http://www.advogato.org/person/crackmonkey/foaf.rdf#> .
.....

16.2.10.1.4. Example SPARQL DESCRIBE -- Option "OBJCBD"

Example 1

SPARQL
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "OBJCBD"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2;

Query result:
fmtaggret-NICE_TTL
LONG VARCHAR
 @prefix ns0: <http://example.com/xmp/> .
 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
 ns0:Top1 ns0:item ns0:TheSubject .
 ns0:Top2 ns0:items [ rdf:_1 ns0:TheSubject ] .

16.2.10.1.5. Example SPARQL DESCRIBE -- Custom "Post-Processing" function

create function DB.DBA.SPARQL_DESC_POSTPROC_SOURCES (inout triples any array, in good_graphs any array, in bad_graphs any array, in storage varchar, in opts any array)
{
  declare triple, val, so_dict, so_lst, addon_dict, dlst any;
  dbg_obj_princ ('DB.DBA.SPARQL_DESC_POSTPROC_SOURCES (', triples, good_graphs, bad_graphs, storage, opts, ')');
  so_dict := dict_new (dict_size (triples));
  dict_iter_rewind (triples);
  while (dict_iter_next (triples, triple, val))
    {
      declare v any;
      v := triple[0];
      dict_put (so_dict, iri_to_id_nosignal (v), 1);
      v := triple[2];
      if (isiri_id (v) or (isstring (v) and box_tag (v) = 1))
        dict_put (so_dict, iri_to_id_nosignal (v), 1);
    }
  so_lst := dict_list_keys (so_dict, 1);
  foreach (any s_itm in so_lst) do
    {
      for (sparql select distinct ?g where { graph ?g {{ `iri(?:s_itm)` ?p ?o } union { ?s ?p  `iri(?:s_itm)` }} filter (<LONG::bif:position>(?g, ?:good_graphs))}) do
        dict_put (triples, vector (s_itm, iri_to_id('Source'), "g"), 1);
    }
  return triples;
}
;

SPARQL
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "CBD+SOURCES"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2;


SELECT sparql_to_sql_text ('
DEFINE output:format "NICE_TTL"
DEFINE sql:describe-mode "CBD+SOURCES"
PREFIX xmp: <http://example.com/xmp/>
DESCRIBE xmp:TheSubject FROM xmp:good1 FROM xmp:good2') as x long varchar;



16.2.11. Transitivity in SPARQL

Virtuoso SPARQL allows access to Virtuoso's SQL transitivity extension. Read the SQL section for a definition of the options.

The SPARQL syntax differs slightly from the SQL syntax, although the option names and meanings are the same.

In SPARQL, the transitive options occur after a subquery enclosed in braces:
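
In outline, the pattern is as follows (a sketch only; the full list of options is given later in this section):

  {
    { SELECT ?x ?y WHERE { ... } }
    OPTION ( TRANSITIVE, t_in (?x), t_out (?y), ... )
  }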

The query below produces all the IRIs that are the same as <http://dbpedia.org/resource/New_York>.

SPARQL
SELECT ?syn
WHERE
  {
    {
      SELECT ?x ?syn
      WHERE
        {
          { ?x owl:sameAs ?syn }
          UNION
          { ?syn owl:sameAs ?x }
        }
    }
    OPTION ( TRANSITIVE, t_in (?x), t_out (?syn), t_distinct, t_min (0) )
    FILTER (?x = <http://dbpedia.org/resource/New_York>) .
  }
  

In this case, we provide a binding for ?x in the filter outside of the transitive subquery, so the subquery is made to run from in to out. The same effect would be achieved if we bound ?syn and selected ?x: the designations of in and out are arbitrary, and for transitive steps that can be evaluated equally well in both directions this makes no difference.

The transitive subquery in the above is

{SELECT ?syn
 WHERE
  {
    { SELECT ?x ?syn
      WHERE
       {
         { ?x owl:sameAs ?syn }
         UNION
         { ?syn owl:sameAs ?x}
       }
    } OPTION (TRANSITIVE, t_in (?x), t_out (?syn), t_distinct, t_min (0) )
  }
} .
  

Leaving out the option would just look for one step of owl:sameAs. Making it transitive applies the subquery to the bindings it produces until no new ones are found; the t_distinct modifier ensures each binding is visited only once.

If the transitive step consists of a single triple pattern, there is a shorthand:

  <alice> foaf:knows ?friend option (transitive t_min (1))
  

will bind ?friend to everyone known directly or indirectly via foaf:knows. If t_min had been 0, <alice> itself would also have been among the generated bindings.
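
A complete query using the shorthand could look like the following sketch (the subject IRI is a placeholder):

SPARQL
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?friend
WHERE
  {
    <http://example.com/alice> foaf:knows ?friend OPTION (TRANSITIVE t_min (1)) .
  };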

The syntax is

  option (transitive transitivity_option[,...])

  transitivity_option ::=  t_in (<variable_list>)
  | t_out (<variable_list>)
  | t_distinct
  | t_shortest_only
  | t_no_cycles
  | t_cycles_only
  | t_min (INTNUM)
  | t_max (INTNUM)
  | t_end_flag (<variable>)
  | t_step (<variable_or_step>)
  | t_direction INTNUM

  variable_list ::= <variable> [,...]

  variable_or_step ::= <variable> | 'path_id' | 'step_no'
  

Unlike SQL, variable names are used instead of column numbers. Otherwise all the options have the same meaning.

Some examples of the use of transitivity are:

16.2.11.1. Collection of Transitivity Option Demo Queries for SPARQL

16.2.11.1.1. Example for finding out what graphs contain owl:sameAs for "New York"

To find out what graphs contain owl:sameAs statements for New York, we do:

   SELECT ?g ?x count (*) as ?count
   WHERE {
           {
             SELECT ?x ?alias ?g
             WHERE {
                     {
                       GRAPH ?g {?x owl:sameAs ?alias }
                     }
             UNION
                     {
                      GRAPH ?g {?alias owl:sameAs ?x}
                     }
                   }
           }
           OPTION ( TRANSITIVE,
                    t_in (?x),
                    t_out (?alias),
                    t_distinct,
                    t_min (1)) .
           FILTER (?x = <http://dbpedia.org/resource/New_York> ) .
         }
  

Here we select all paths that start with the initial URI and pass through one or more sameAs statements. Each step produces a result of the transitive subquery. The graph where the sameAs triple was found is returned and used as the grouping column. In this way we see how many times each graph is used. Note that graphs are counted many times since the graphs containing immediate sameAs statements are counted for paths of length 1, then again as steps on paths that reach to their aliases and so on.


16.2.11.1.2. Example for query that takes all the people known by Tim Berners-Lee, to a depth between 1 and 4 applications of the subquery

This query takes all the people known by Tim Berners-Lee, to a depth between 1 and 4 applications of the subquery. It then sorts them by the distance and by the descending count of connections of each found person. This is equivalent to the default connections list shown by LinkedIn.

  SPARQL
  SELECT ?o ?dist ((SELECT COUNT (*) WHERE {?o foaf:knows ?xx}))
  WHERE
    {
      {
        SELECT ?s ?o
        WHERE
          {
            ?s foaf:knows ?o
          }
      } OPTION ( TRANSITIVE,
                 t_distinct,
                 t_in(?s),
                 t_out(?o),
                 t_min (1),
                 t_max (4),
                 t_step ('step_no') as ?dist ) .
      FILTER (?s= <http://www.w3.org/People/Berners-Lee/card#i>)
    }
  ORDER BY ?dist DESC 3
  LIMIT 50
  

16.2.11.1.3. Example for query that takes all the people known by Tim Berners-Lee, to a depth between 2 and 4 applications of the subquery

This query takes all the people known by Tim Berners-Lee, to a depth between 2 and 4 applications of the subquery. It then sorts them by the distance and by the descending count of connections of each found person. This is equivalent to the default connections list shown by LinkedIn.

  SPARQL
  SELECT ?o ?dist ((SELECT COUNT (*) WHERE {?o foaf:knows ?xx}))
  WHERE
    {
      {
        SELECT ?s ?o
        WHERE
          {
            ?s foaf:knows ?o
          }
      } OPTION ( TRANSITIVE,
                 t_distinct,
                 t_in(?s),
                 t_out(?o),
                 t_min (2),
                 t_max (4),
                 t_step ('step_no') as ?dist) .
      FILTER (?s= <http://www.w3.org/People/Berners-Lee/card#i>)
    }
  ORDER BY ?dist DESC 3
  LIMIT 50
  

16.2.11.1.4. Example for finding how two people know each other and what graphs are involved in the connection

To find how two people know each other and what graphs are involved in the connection, we do:

  SPARQL
  SELECT ?link ?g ?step ?path
  WHERE
    {
      {
        SELECT ?s ?o ?g
        WHERE
          {
            graph ?g {?s foaf:knows ?o }
          }
      } OPTION ( TRANSITIVE,
                 t_distinct,
                 t_in(?s),
                 t_out(?o),
                 t_no_cycles,
                 t_shortest_only,
                 t_step (?s) as ?link,
                 t_step ('path_id') as ?path,
                 t_step ('step_no') as ?step,
                 t_direction 3) .
      FILTER (?s= <http://www.w3.org/People/Berners-Lee/card#i>
      && ?o = <http://www.advogato.org/person/mparaz/foaf.rdf#me>)
    }
    LIMIT 20
  

This query binds both the t_in and t_out variables; ?g is left as a free variable. By specifying ?s and the system-defined constants step_no and path_id with t_step, we get for each transitive step a row of results with the intermediate binding of ?s, the count of steps from the initial ?s, and a distinct identifier for the individual path, since there can be many distinct paths linking the ?s and ?o specified in the filter.

See the SQL transitive option section for details on the meaning of step_no and path_id.


16.2.11.1.5. Example for TBox Subsumption

Subsumption Demo Using Transitivity Clause

Yago Class Hierarchy (TBox) Subsumption

AlphaReceptors

# all subjects with IRI: <http://dbpedia.org/class/yago/AlphaReceptor105609111>,
# that are sub-classes of anything (hence ?y)
# without restrictions on tree levels
SELECT ?y
FROM <http://dbpedia.org/resource/classes/yago#>
WHERE
  {
    {
      SELECT *
      WHERE
        {
          ?x rdfs:subClassOf ?y .
        }
    }
    OPTION (TRANSITIVE, t_distinct, t_in (?x), t_out (?y) ) .
    FILTER (?x = <http://dbpedia.org/class/yago/AlphaReceptor105609111>)
  }

16.2.11.1.6. Example for Receptors
SELECT ?x
FROM <http://dbpedia.org/resource/classes/yago#>
WHERE
  {
    {
      SELECT *
      WHERE
        {
          ?x rdfs:subClassOf ?y .
        }
    } OPTION (transitive, t_distinct, t_in (?x), t_out (?y) ) .
  FILTER (?y = <http://dbpedia.org/class/yago/Receptor105608868>)
}

16.2.11.1.7. Inference Rule example using transitive properties from SKOS vocabulary
The following example demonstrates how to retrieve the SKOS ontology, add triples for skos:broaderTransitive to the graph, define an inference rule, and finally execute a SPARQL query with the inference rule and the transitivity option. The queries were executed against the LOD instance (http://lod.openlinksw.com):
  1. Make the Context graph, assuming you don't want to load the entire SKOS vocabulary into your Quad Store:
    SQL>SPARQL
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    INSERT INTO GRAPH <urn:rules.skos> { skos:broader rdfs:subPropertyOf skos:broaderTransitive .
                                         skos:narrower rdfs:subPropertyOf skos:narrowerTransitive };
    
  2. Or load the entire SKOS ontology into the Quad Store via the iSQL interface (command line or the HTML-based Conductor):
    SQL>DB.DBA.RDF_LOAD_RDFXML (http_get ('http://www.w3.org/2009/08/skos-reference/skos-owl1-dl.rdf'), 'no', 'urn:rules.skos');
    Done.
    
  3. Make Context Rule:
    SQL>rdfs_rule_set ('skos-trans', 'urn:rules.skos');
    Done.
    
  4. Go to the SPARQL endpoint, e.g. http://lod.openlinksw.com/sparql
  5. Use the inference rule pragma to set the context rule for the SPARQL query, i.e.:
    SPARQL
    DEFINE input:inference "skos-trans"
    PREFIX p: <http://dbpedia.org/property/>
    PREFIX dbpedia: <http://dbpedia.org/resource/>
    PREFIX category: <http://dbpedia.org/resource/Category:>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX geo: <http://www.georss.org/georss/>
    
    SELECT DISTINCT ?m ?n ?p ?d
    WHERE
      {
        ?m rdfs:label ?n.
        ?m skos:subject ?c.
        ?c skos:broaderTransitive category:Churches_in_Paris OPTION (TRANSITIVE) .
        ?m p:abstract ?d.
        ?m geo:point ?p
        FILTER ( lang(?n) = "fr" )
        FILTER ( lang(?d) = "fr" )
      }
    
  6. You will get 22 rows returned from the query. Note that, for comparison, if the option (transitive) is omitted, only 2 rows will be returned in our example query:
    Figure: 16.2.11.1.7.1.1. Transitive option

16.2.11.1.8. Inference Rule example using transitive properties from SKOS vocabulary: Variant II
This example shows how to find entities that are subcategories of Protestant Churches, no deeper than 3 levels within the concept scheme hierarchy, filtered by a specific subcategory. It demonstrates the use of inference rules, sub-queries, and a filter to obtain entities associated with category:Protestant_churches, combined with the use of the transitive closure limited to a maximum of 3 steps down a SKOS-based concept scheme hierarchy:
  1. Make sure the inference rule "skos-trans" is created as described in the previous example
  2. Go to the SPARQL endpoint, e.g. http://lod.openlinksw.com/sparql
  3. Use the inference rule pragma to set the context rule for the SPARQL query, i.e.:
    DEFINE input:inference "skos-trans"
    PREFIX p: <http://dbpedia.org/property/>
    PREFIX dbpedia: <http://dbpedia.org/resource/>
    PREFIX category: <http://dbpedia.org/resource/Category:>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX geo: <http://www.georss.org/georss/>
    
    SELECT DISTINCT ?c AS ?skos_broader
           ?trans AS ?skos_narrower
           ?dist AS ?skos_level
           ?m ?n ?p AS ?geo_point
    WHERE
      {
        {
          SELECT ?c  ?m ?n ?p ?trans ?dist
          WHERE
            {
      	      ?m rdfs:label ?n.
      	      ?m skos:subject ?c.
      	      ?c skos:broaderTransitive category:Protestant_churches .
      	      ?c skos:broaderTransitive ?trans
                    OPTION ( TRANSITIVE,
                             t_distinct,
                             t_in (?c),
                             t_out (?trans),
                             t_max (3),
                             t_step ( 'step_no' ) as ?dist ) .
      	      ?m p:abstract ?d.
      	      ?m geo:point ?p
      	      FILTER ( lang(?n) = "en" )
      	      FILTER ( lang(?d) = "en" )
            }
        }
        FILTER ( ?trans = <http://dbpedia.org/resource/Category:Churches_in_London> )
      }
    ORDER BY ASC (?dist)
    
  4. You will get 22 rows returned from the query.
    Figure: 16.2.11.1.8.1.1. Transitive option



16.2.12. Supported SPARQL-BI "define" pragmas

The SPARQL-BI compiler and run-time are not isolated from their environment; some of the heuristics they use are not perfect, and different use cases sometimes require different behavior within the same standard. Such situations are common in the industry and the solution is well known: compiler pragmas. Virtuoso therefore allows them at the beginning of any SPARQL query, in the form:

define QName value

16.2.12.1. Pragmas to control the input data set for the query

input:default-graph-uri works like "FROM" clause;
input:named-graph-uri works like "FROM NAMED" clause;
input:default-graph-exclude works like "NOT FROM" clause;
input:named-graph-exclude works like "NOT FROM NAMED" clause.

The difference is that these pragmas have higher priority, and they can be used for security restrictions in combination with define input:freeze, which blocks further changes to the list of source graphs. A web service endpoint (or a similar non-web application) can edit the incoming query by placing a list of pragmas ending with input:freeze in front of the query text. Even if an intruder tries to inject graph names, the result is a compilation error, not access to the data. input:freeze also disables all input:grab-... pragmas.
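
A minimal sketch of what an endpoint might prepend to a user query (the graph IRI and the query body are placeholders):

define input:default-graph-uri <http://example.com/allowed-graph>
define input:freeze
SELECT * WHERE { ?s ?p ?o } LIMIT 10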

All these pragmas are described in more detail here, but in addition there are some experimental ones:

These defines are also described here. Note that all of them can be used in the option list of the "FROM ... OPTION (get:...)" extended SPARQL-BI syntax for FROM/FROM NAMED clauses.

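For example, a per-FROM option list might look like the following sketch (the graph IRI is a placeholder, and get:soft is used here only to illustrate the syntax):

SPARQL
SELECT *
FROM <http://example.com/some-graph> OPTION (get:soft "soft")
WHERE { ?s ?p ?o }
LIMIT 10;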


16.2.12.2. Pragmas to control code generation


16.2.12.3. Pragmas to control the type of the result


16.2.12.4. Supported formats that return a string session


16.2.12.5. Supported formats that do not return a string session to the caller

Supported formats that do not return a string session to the caller, but form an HTTP response instead and send it directly to the client HTTP connection with an appropriate HTTP header:


16.2.12.6. Supported Special formats

A special format "_JAVA_" is for SPARQL queries sent via JDBC. It changes only the output of ASK queries.

The "_JAVA_" and "_UDBC_" are aliases in Virtuoso Version 6.1.5. Till Virtuoso 6.1.5 the default behaves as "TTL". For Virtuoso version 6.1.5 and higher it is ODBC/JDBC oriented e.g. "_UDBC_" is the default format for ODBC/JDBC clients.

Note: If you want to revert to old TTL behaviour, you should specify it explicitly via:

define output:format "TTL"

Note: The pragmas output:valmode and output:format may conflict if used together; when they do not conflict, output:valmode is redundant, because the compiler knows which output:valmode settings are needed by the various output:format values.


16.2.12.7. Minor notes

Values of most pragmas are strings. Exceptions are:

that have integer values.

Values of some pragmas are passed through the compiler to the run-time, so they are seen in the generated SQL code as arguments of procedures:

so you may sometimes encounter them in SQL debugger output and the like.



16.2.13. Built-in bif functions


16.2.14. Sending SOAP Requests to Virtuoso SPARQL Endpoint

This section presents a sample scenario on how to execute a SPARQL query as a SOAP request to the Virtuoso SPARQL Endpoint.

  1. Assume the following sample SOAP request containing a simple SPARQL query:
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
       <soapenv:Body>
          <query-request xmlns="http://www.w3.org/2005/09/sparql-protocol-types/#">
             <query xmlns="">SELECT DISTINCT ?z FROM virtrdf: {?x ?y ?z .} LIMIT 10</query>
          </query-request>
       </soapenv:Body>
    </soapenv:Envelope>
    
  2. Save the content above locally, e.g. to a file named "soap.xml".
  3. To pass the SOAP request to a Virtuoso SPARQL Endpoint, execute the following curl command:
    $ curl -d@soap.xml -H "Content-Type:text/xml" -H "SOAPAction: ''" http://localhost:8890/sparql
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <query-result xmlns="http://www.w3.org/2005/09/sparql-protocol-types/#">
          <sparql xmlns="http://www.w3.org/2005/sparql-results#" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/sw/DataAccess/rf1/result2.xsd">
            <head>
             <variable name="z"/>
            </head>
            <results distinct="false" ordered="true">
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadStorage</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMap</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMapValue</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapColumn</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMapColumn</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapATable</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMapATable</uri>
                </binding>
              </result>
              <result>
                <binding name="z">
                 <uri>http://www.openlinksw.com/schemas/virtrdf#QuadMapFText</uri>
                </binding>
              </result>
            </results>
          </sparql>
        </query-result>
      </soapenv:Body>
    </soapenv:Envelope>
    

16.2.15. Use of Hash Join With RDF

For queries that touch large quantities of RDF data and have many selection conditions, use of hash join is often desirable. For short lookup queries it usually is not.

Depending on the version, the query optimizer may or may not do hash joins with RDF. This is controlled by the hash_join_enable flag.

To check the flag do:

SELECT sys_stat ('hash_join_enable');

The flag is set in the ini file in the [Flags] section:

[Flags]
hash_join_enable = 2

The flag may be transiently set with the SQL statement:

__dbf_set ('hash_join_enable', 2);

To check the effectiveness of hash joins and whether the optimizer introduces these in the first place, it is most convenient to use the profile function.

In the following example, we look at the star schema benchmark Q4:

SPARQL PREFIX rdfh: <http://lod2.eu/schemas/rdfh#>
SELECT SUM(?rdfh_lo_revenue) AS ?lo_revenue ?d_year ?p_brand1 
FROM <http://lod2.eu/schemas/rdfh-inst#ssb1_ttl_qb>
WHERE 
  {
    ?li a rdfh:lineorder ;
        rdfh:lo_orderdate ?lo_orderdate ;
        rdfh:lo_partkey ?lo_partkey ;
        rdfh:lo_suppkey ?lo_suppkey ;
	      rdfh:lo_revenue ?rdfh_lo_revenue .

    ?lo_orderdate rdfh:d_year ?d_year .
    ?lo_partkey rdfh:p_brand1 ?p_brand1 .
    ?lo_partkey rdfh:p_category "MFGR#12" .
    ?lo_suppkey rdfh:s_region "AMERICA" . 
  } 
GROUP BY ?d_year ?p_brand1 
ORDER BY ?d_year ?p_brand1
;

The query aggregates rows from a large fact table and selects based on a date range, a brand and the location of the supplier. To run this, it is best to put the query in a file and have profile ('sparql... ') wrapped around the text. Then in isql:

SET BLOBS ON;
LOAD q4.sql;
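
As a sketch, q4.sql would contain the query text from above wrapped in profile(), for example:

profile ('SPARQL PREFIX rdfh: <http://lod2.eu/schemas/rdfh#>
SELECT SUM(?rdfh_lo_revenue) AS ?lo_revenue ?d_year ?p_brand1
FROM <http://lod2.eu/schemas/rdfh-inst#ssb1_ttl_qb>
WHERE
  {
    ?li a rdfh:lineorder ;
        rdfh:lo_orderdate ?lo_orderdate ;
        rdfh:lo_partkey ?lo_partkey ;
        rdfh:lo_suppkey ?lo_suppkey ;
        rdfh:lo_revenue ?rdfh_lo_revenue .
    ?lo_orderdate rdfh:d_year ?d_year .
    ?lo_partkey rdfh:p_brand1 ?p_brand1 .
    ?lo_partkey rdfh:p_category "MFGR#12" .
    ?lo_suppkey rdfh:s_region "AMERICA" .
  }
GROUP BY ?d_year ?p_brand1
ORDER BY ?d_year ?p_brand1');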

Without hash join the profile is:

{ 
time   1.9e-06% fanout         1 input         1 rows
 
Precode:
      0: __rdflit := Call __rdflit (rdflit170373)
      5: __rdflit := Call __rdflit (rdflit16802)
      10: BReturn 0
Subquery 31 
{ 
time     1e-06% fanout         1 input         1 rows
{ fork
time   0.00035% fanout         1 input         1 rows
{ fork
time       3.6% fanout     1e+06 input         1 rows
RDF_QUAD     1e+06 rows(s_18_9_t6.S, s_18_9_t6.O)
 inlined  P =  #dfh#p_brand1  G =  #nst#ssb1_ttl_qb 
time       1.7% fanout   0.03979 input     1e+06 rows
RDF_QUAD_POGS unq      0.04 rows (s_18_9_t7.S)
 P =  #dfh#p_category  ,  O = rdflit170373 ,  S = s_18_9_t6.S ,  G =  #nst#ssb1_ttl_qb 
time       2.5% fanout   180.179 input     39790 rows
 
Precode:
      0: __ro2sq := Call __ro2sq (s_18_9_t6.O)
      5: BReturn 0
RDF_QUAD_POGS   4.4e+02 rows(s_18_9_t2.S)
 P =  #dfh#lo_partkey  ,  O = k_s_18_9_t6.S G =  #nst#ssb1_ttl_qb 
time        35% fanout         1 input 7.16932e+06 rows
RDF_QUAD         1 rows(s_18_9_t3.O, s_18_9_t3.S)
 inlined  P =  #dfh#lo_suppkey  ,  S = s_18_9_t2.S G =  #nst#ssb1_ttl_qb 
time       4.5% fanout  0.201214 input 7.16932e+06 rows
RDF_QUAD_POGS unq       0.2 rows ()
 P =  #dfh#s_region  ,  O = rdflit16802 ,  S = cast ,  G =  #nst#ssb1_ttl_qb 
time        21% fanout         1 input 1.44256e+06 rows
RDF_QUAD         1 rows(s_18_9_t4.S, s_18_9_t4.O)
 inlined  P =  #dfh#lo_revenue  ,  S = k_s_18_9_t2.S G =  #nst#ssb1_ttl_qb 
time        12% fanout         1 input 1.44256e+06 rows
RDF_QUAD_POGS unq       0.8 rows (s_18_9_t0.S)
 P =  #-ns#type  ,  O =  #dfh#lineorder  ,  S = k_s_18_9_t2.S ,  G =  #nst#ssb1_ttl_qb 
time        14% fanout         1 input 1.44256e+06 rows
RDF_QUAD         1 rows(s_18_9_t1.O)
 inlined  P =  #dfh#lo_orderdate  ,  S = s_18_9_t0.S G =  #nst#ssb1_ttl_qb 
time       3.5% fanout         1 input 1.44256e+06 rows
RDF_QUAD         1 rows(s_18_9_t5.O)
 inlined  P =  #dfh#d_year  ,  S = cast G =  #nst#ssb1_ttl_qb 
time       1.9% fanout         0 input 1.44256e+06 rows
Sort (s_18_9_t5.O, s_18_9_t6.O) -> (s_18_9_t4.O, __ro2sq)
 
}
time   4.1e-05% fanout       280 input         1 rows
group by read node  
(s_18_9_t5.O, s_18_9_t6.O, aggregate, __ro2sq)
time   0.00043% fanout         0 input       280 rows
 
Precode:
      0: __ro2sq := Call __ro2sq (s_18_9_t5.O)
      5: BReturn 0
Sort (__ro2sq, __ro2sq) -> (aggregate)
 
}
time   2.9e-05% fanout       280 input         1 rows
Key from temp (aggregate, __ro2sq, __ro2sq)
 
 
After code:
      0: lo_revenue :=  := artm aggregate
      4: d_year :=  := artm __ro2sq
      8: p_brand1 :=  := artm __ro2sq
      12: BReturn 0
time   7.6e-07% fanout         0 input       280 rows
Subquery Select(lo_revenue, d_year, p_brand1)
}
 
After code:
      0: lo_revenue := Call __ro2sq (lo_revenue)
      5: d_year := Call __ro2sq (d_year)
      10: p_brand1 := Call __ro2sq (p_brand1)
      15: BReturn 0
time   6.3e-07% fanout         0 input       280 rows
Select (lo_revenue, d_year, p_brand1)
}


 5542 msec 2420% cpu, 2.11877e+07 rnd 8.13668e+06 seq   85.6039% same seg   13.6018% same pg 
Compilation: 10 msec 0 reads         0% read 0 messages         0% clw


With hash join the profile is:

{ 
time   1.4e-05% fanout         1 input         1 rows
time         7% fanout         1 input         1 rows
 
Precode:
      0: __rdflit := Call __rdflit (rdflit170373)
      5: __rdflit := Call __rdflit (rdflit16802)
      10: BReturn 0
{ hash filler
time     0.088% fanout     1e+06 input         1 rows
RDF_QUAD     1e+06 rows(s_18_9_t6.S, s_18_9_t6.O)
 inlined  P =  #dfh#p_brand1  G =  #nst#ssb1_ttl_qb 
time      0.15% fanout         0 input     1e+06 rows
Sort hf 39 (s_18_9_t6.S, s_18_9_t6.S) -> (s_18_9_t6.O)
 
}
time   0.00046% fanout         1 input         1 rows
{ hash filler
time    0.0004% fanout      2556 input         1 rows
RDF_QUAD_POGS   2.6e+03 rows(s_18_9_t5.S, s_18_9_t5.O)
 inlined  P =  #dfh#d_year  G =  #nst#ssb1_ttl_qb 
time   0.00056% fanout         0 input      2556 rows
Sort hf 56 (s_18_9_t5.S) -> (s_18_9_t5.O)
 
}
time    0.0036% fanout         1 input         1 rows
{ hash filler
time   0.00094% fanout     12068 input         1 rows
RDF_QUAD_POGS   1.2e+04 rows(s_18_9_t8.S)
 P =  #dfh#s_region  ,  O = rdflit16802 G =  #nst#ssb1_ttl_qb 
time   0.00046% fanout         0 input     12068 rows
Sort hf 69 (s_18_9_t8.S)
}
time     0.012% fanout         1 input         1 rows
{ hash filler
time    0.0026% fanout     39790 input         1 rows
RDF_QUAD_POGS     4e+04 rows(s_18_9_t7.S)
 P =  #dfh#p_category  ,  O = rdflit170373 G =  #nst#ssb1_ttl_qb 
time    0.0013% fanout         0 input     39790 rows
Sort hf 82 (s_18_9_t7.S)
}
Subquery 88 
{ 
time   1.5e-05% fanout         1 input         1 rows
{ fork
time   1.3e-05% fanout         1 input         1 rows
{ fork
time        52% fanout 7.16932e+06 input         1 rows
RDF_QUAD   1.8e+08 rows(s_18_9_t2.O, s_18_9_t2.S)
 inlined  P =  #dfh#lo_partkey  G =  #nst#ssb1_ttl_qb 
hash partition+bloom by 86 (tmp)hash join merged always card      0.04 -> ()
time       6.1% fanout         1 input 7.16932e+06 rows
 
Precode:
      0: s_18_9_t7.S :=  := artm s_18_9_t2.O
      4: BReturn 0
Hash source 82 merged into ts       0.04 rows(cast) -> ()
time         7% fanout  0.201214 input 7.16932e+06 rows
RDF_QUAD         1 rows(s_18_9_t3.O, s_18_9_t3.S)
 inlined  P =  #dfh#lo_suppkey  ,  S = s_18_9_t2.S G =  #nst#ssb1_ttl_qb 
hash partition+bloom by 73 (tmp)hash join merged always card       0.2 -> ()
time    0.0018% fanout         1 input 1.44256e+06 rows
Hash source 69 merged into ts        0.2 rows(cast) -> ()
time       2.3% fanout         1 input 1.44256e+06 rows
RDF_QUAD_POGS unq       0.8 rows (s_18_9_t0.S)
 P =  #-ns#type  ,  O =  #dfh#lineorder  ,  S = k_s_18_9_t2.S ,  G =  #nst#ssb1_ttl_qb 
time       2.3% fanout         1 input 1.44256e+06 rows
RDF_QUAD         1 rows(s_18_9_t1.O, s_18_9_t1.S)
 inlined  P =  #dfh#lo_orderdate  ,  S = s_18_9_t0.S G =  #nst#ssb1_ttl_qb 
hash partition+bloom by 60 ()
time      0.38% fanout         1 input 1.44256e+06 rows
Hash source 56           1 rows(cast) -> (s_18_9_t5.O)
time       2.2% fanout         1 input 1.44256e+06 rows
RDF_QUAD         1 rows(s_18_9_t4.O)
 inlined  P =  #dfh#lo_revenue  ,  S = k_s_18_9_t0.S G =  #nst#ssb1_ttl_qb 
time        20% fanout         1 input 1.44256e+06 rows
Hash source 39         1.6 rows(k_s_18_9_t2.O, k_s_18_9_t7.S) -> (s_18_9_t6.O)
time      0.86% fanout         0 input 1.44256e+06 rows
Sort (set_no, s_18_9_t5.O, s_18_9_t6.O) -> (s_18_9_t4.O)
 
}
time   0.00023% fanout       280 input         1 rows
group by read node  
(gb_set_no, s_18_9_t5.O, s_18_9_t6.O, aggregate)
time       0.1% fanout         0 input       280 rows
 
Precode:
      0: __ro2sq := Call __ro2sq (s_18_9_t6.O)
      5: __ro2sq := Call __ro2sq (s_18_9_t5.O)
      10: BReturn 0
Sort (__ro2sq, __ro2sq) -> (aggregate)
 
}
time    0.0002% fanout       280 input         1 rows
Key from temp (aggregate, __ro2sq, __ro2sq)
 
 
After code:
      0: lo_revenue :=  := artm aggregate
      4: d_year :=  := artm __ro2sq
      8: p_brand1 :=  := artm __ro2sq
      12: BReturn 0
time   5.3e-06% fanout         0 input       280 rows
Subquery Select(lo_revenue, d_year, p_brand1)
}
 
After code:
      0: lo_revenue := Call __ro2sq (lo_revenue)
      5: d_year := Call __ro2sq (d_year)
      10: p_brand1 := Call __ro2sq (p_brand1)
      15: BReturn 0
time   5.5e-06% fanout         0 input       280 rows
Select (lo_revenue, d_year, p_brand1)
}


 3101 msec 993% cpu, 1.14967e+07 rnd 1.81041e+08 seq   99.5619% same seg  0.417643% same pg 
Compilation: 23 msec 0 reads         0% read 0 messages         0% clw

These are runs on warm cache on a dataset of scale factor 30, about 3 billion triples.

We notice that the hash based plan completes faster and has a lower CPU percentage. This is to be expected, since hash joins are especially useful for joins between a large table and a smaller one.

The index based plan does 21M random index lookups whereas the hash based one does only 11M. We also note that the index access pattern is more local with the hash plan, with 99% of lookups hitting the same segment as the previous one, against only 85% for the index based plan.

These numbers appear in the summary line at the bottom of each profile.

The index based plan starts with the smallest selection, in this case the parts with the given brand. From this it joins to lineorder and gets the supplier. It fetches the region of the supplier and leaves out the ones not in America.

The hash based plan makes a hash table of all the parts with the brand, all the suppliers in America and all the days in the time dimension. It then scans lineorder and first drops the rows whose part is not in the hash, then the rows where the supplier is not in the hash, then gets the year of each date. This last operation does not drop any rows but is still done by hash because there are relatively few days and the day to year translation is done a very large number of times.

The number of rows in and out of each operator is given after the time percent, above the operator. Fanout is the number of rows of output per one row of input.

For the long-running queries of any workload, you can perform the same comparison to determine whether hash join is useful in the case at hand. Looking at the real time and CPU% is usually enough.

You may experiment with these options and look at the profile output for each.

For some analytics workloads enabling hash join may give a factor of 2 or 3 more performance. For lookup workloads there may be no gain.

Sometimes a hash join may be used when an index lookup would be better, so in some cases it makes sense to turn off hash joins, either per query or globally.
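
For example, using the hash_join_enable flag shown earlier, hash joins can be switched off server-wide at run time (a per-query switch is version dependent and not shown here):

__dbf_set ('hash_join_enable', 0);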