vaimee / sepa

Get notifications about changes in your SPARQL endpoint.

Java 99.85% Dockerfile 0.15%
java semantic-web sparql sparql-endpoints sparql-query rdf rdf-store rdf-triples internet-of-things web-of-things

sepa's People

Contributors

andre-bisa, dependabot[bot], desmovalvo, ferrariandrea, fr4ncidir, gregoriomonari, leobel96, lroffia, ludovicogranata, relu91, trivo78


sepa's Issues

Update scheduler UncaughtExceptionHandler

java.lang.OutOfMemoryError: Java heap space
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Update Scheduler"
Exception in thread "I/O dispatcher 1" java.lang.OutOfMemoryError: Java heap space

Secure WebSockets on browser

Trying to connect to the secure WebSocket address of SEPA from a browser results in an error. The error occurs because the browser does not accept the SEPA certificate. After manually accepting it by visiting the following URL https://localhost:9443/secure/sparql the issue is gone.

Provide a better error when wsClient is not initialized

This issue is related to #27. As stated in that issue, when the SPARQL11SEProtocol(properties) constructor is used, the wsClient is not initialized.

Subscribing with the procedure below results in a 500 "Secure mode: unsecure request not allowed" response, which is totally unrelated to the root cause of the problem.

final SPARQL11SEProperties properties = new SPARQL11SEProperties(new File("file.jsap"));
client = new SPARQL11SEProtocol(properties);

SubscribeRequest sub = new SubscribeRequest("select * where {?a ?b ?c}");
Response answer = client.subscribe(sub);

Suggested solution
Change the error to reflect the missing subscription handler or, better, remove the SPARQL11SEProtocol(properties) constructor and use only SPARQL11SEProtocol(SPARQL11SEProperties properties, ISubscriptionHandler handler).

Re-use socket

The socket of the HTTP server should be created with the SO_REUSEADDR option to avoid a painful wait before restarting the engine.
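A minimal sketch of how that option could be set, assuming the engine's HTTP server is built with Apache HttpCore's NIO ServerBootstrap (as the BindException stack trace reported in a later issue suggests); the port and class name are illustrative only:

import org.apache.http.impl.nio.bootstrap.HttpServer;
import org.apache.http.impl.nio.bootstrap.ServerBootstrap;
import org.apache.http.impl.nio.reactor.IOReactorConfig;

public class ReuseAddressSketch {
    public static void main(String[] args) throws Exception {
        // SO_REUSEADDR lets the listener rebind immediately after a restart,
        // instead of waiting for sockets stuck in TIME_WAIT to expire.
        IOReactorConfig config = IOReactorConfig.custom()
                .setSoReuseAddress(true)
                .build();

        HttpServer server = ServerBootstrap.bootstrap()
                .setListenerPort(8000)          // illustrative port
                .setIOReactorConfig(config)
                .create();
        server.start();
    }
}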

Access-control-allow-origin: http://localhost:4200 (duplicated)

POST /sparql HTTP/1.1
Host: www.vaimee.com:8443
Connection: keep-alive
Content-Length: 174
Pragma: no-cache
Cache-Control: no-cache
Accept: application/json
Origin: http://localhost:4200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3115.0 Safari/537.36
Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJTRVBBVGVzdCIsImF1ZCI6WyJodHRwczpcL1wvd290LmFyY2VzLnVuaWJvLml0Ojg0NDNcL3NwYXJxbCIsIndzczpcL1wvd290LmFyY2VzLnVuaWJvLml0Ojk0NDNcL3NwYXJxbCJdLCJuYmYiOjE0OTYyMTUzNzUsImlzcyI6Imh0dHBzOlwvXC93b3QuYXJjZXMudW5pYm8uaXQ6ODQ0M1wvb2F1dGhcL3Rva2VuIiwiZXhwIjoxNDk2MjE1MzgxLCJpYXQiOjE0OTYyMTUzNzYsImp0aSI6IjdiNGIzNzdkLWVjY2EtNDBlNi1hZmE1LTdhYjMzYmNmMWUzZjphMjM3OTdmYS02YTkwLTQyMDQtODZjMC04NjExNDhiYjAwYmIifQ.V5W6ateLz7s7RFw6oQMCmGWPHF5NH51bvkpL9P9ef8SfAP_7uuTz426yj_MmPy6jx1wp-Rr2gocjNQLjuBbPYNVFi0XfNYjmUWSJpvYzMBsi-3n1kE0r7Wi_sU5Uot7sJwU7Vmt0D2XSA1t1f3DxLifnYiEc-6ujP442EdZpCU4
Content-Type: application/sparql-update
Referer: http://localhost:4200/page/unit-test
Accept-Encoding: gzip, deflate, br
Accept-Language: it-IT,it;q=0.8,en-US;q=0.6,en;q=0.4,la;q=0.2

HTTP/1.1 200 OK
Date: Wed, 31 May 2017 07:22:56 GMT
Content-type: application/json
Access-control-allow-origin: http://localhost:4200
Access-control-allow-origin: http://localhost:4200
Content-length: 488

XMLHttpRequest cannot load https://www.vaimee.com:8443/sparql. The 'Access-Control-Allow-Origin' header contains multiple values 'http://localhost:4200, http://localhost:4200', but only one is allowed. Origin 'http://localhost:4200' is therefore not allowed access.
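A duplicated header like this typically comes from appending the CORS header a second time instead of overwriting it. A minimal sketch of the distinction, assuming Apache HttpCore response objects (the library the engine's HTTP stack traces elsewhere point to); this is not the engine's actual handler code:

import org.apache.http.HttpResponse;

public class CorsHeaderSketch {
    // setHeader overwrites any existing value, so the header is emitted once;
    // calling addHeader on a response that already carries it produces the
    // duplicated Access-Control-Allow-Origin seen above.
    static void allowOrigin(HttpResponse response, String origin) {
        response.setHeader("Access-Control-Allow-Origin", origin);
        // response.addHeader("Access-Control-Allow-Origin", origin); // would append a duplicate
    }
}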

java.net.BindException: Address already in use

Closing and re-starting SEPA I often get this exception:

java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processSessionRequests(DefaultListeningIOReactor.java:243)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(DefaultListeningIOReactor.java:144)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
    at org.apache.http.impl.nio.bootstrap.HttpServer$2.run(HttpServer.java:122)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
org.apache.http.nio.reactor.IOReactorException: Failure binding socket to address 0.0.0.0/0.0.0.0:8000
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processSessionRequests(DefaultListeningIOReactor.java:248)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(DefaultListeningIOReactor.java:144)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
    at org.apache.http.impl.nio.bootstrap.HttpServer$2.run(HttpServer.java:122)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processSessionRequests(DefaultListeningIOReactor.java:243)
    ... 6 more

Then I have to wait some minutes before being able to restart SEPA.

Include constrained devices

Is your feature request related to a problem? Please describe.
There is not enough memory on the device to store everything: JSAP, YSAP, code, etc.

Describe the solution you'd like
To make an update, I send a link to a well-known JSAP or YSAP, the tag of the update, and the forced bindings. SEPA will perform the substitutions...

Additional context
Constrained devices, on constrained connections

Reduce log during mvn tests

Is your feature request related to a problem? Please describe.
Right now the log is so verbose that it is almost useless. Its length exceeds the maximum line count in IntelliJ and TravisCI, which makes it hard to pinpoint Java exceptions or errors.

Describe the solution you'd like
Move most of the DEBUG messages to INFO level and use DEBUG as the default level during the Maven build process.

Describe alternatives you've considered
An alternative is to change the log level of the Maven build process to ERROR.

Additional context
Issue #7 may be related to this feature request as we already removed some logging verbosity in the past.
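For comparison, the verbosity can also be capped programmatically at test startup; a small sketch, assuming the engine logs through Log4j 2 (which the timestamped log format in other issues suggests), and independent of the Maven-level change proposed above:

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class QuietTestLogging {
    // Could be invoked from a JUnit @BeforeClass hook so that only errors
    // reach the console while mvn test runs.
    public static void silenceDebugOutput() {
        Configurator.setRootLevel(Level.ERROR);
    }
}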

"LONG" Query POST to Virtuoso ends with "wrong path" while GET trunks the the query

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

System information(please complete the following information):

  • OS: [e.g. iOS]
  • Engine version [e.g. 22]
  • SparqlEnpoint used [e.g. blazegraph 1.0.7]

Additional context
Add any other context about the problem here.

SEPA 2nd run fails with default .jpar(s)

Describe the bug
Error message:

[...]
# WEB: http://site.unibo.it/wot                                                          #
# WIKI: https://github.com/arces-wot/SEPA/wiki                                           #
##########################################################################################
2018-10-14T21:02:07,194 [ERROR] [main                     ] (EngineProperties.java:182) scheduler-timeout is missing
null

To Reproduce
Run the engine v0.9.5 two times.

Expected behavior
The engine runs normally.

System information(please complete the following information):

  • OS: Ubuntu 18.04.1 LTS
  • Engine version: 0.9.5
  • SparqlEnpoint used [e.g. blazegraph 1.0.7]: blazegraph 2.1.4

Additional context
Deleting the .jpar files makes the engine run again.

Subscription duplication function

When I subscribe, I get a subscription id.

Let's consider that I have to build up another client, and its task is related to the same SPARQL subscription.
For now, I just take the same SPARQL and subscribe as I did previously.

Why not do something like

dup_subid = duplicate_subscription(subid)

which adds another client to the ones interested in a pre-existing subscription?

Grizzly exception (Websocket)

Jun 06, 2017 3:50:06 PM org.glassfish.grizzly.filterchain.DefaultFilterChain execute
WARNING: GRIZZLY0013: Exception during FilterChain execution
java.lang.IllegalStateException: Unknown protocol MCTP/1.0
at org.glassfish.grizzly.http.Protocol.valueOf(Protocol.java:111)
at org.glassfish.grizzly.http.HttpHeader.getProtocol(HttpHeader.java:815)
at org.glassfish.grizzly.http.HttpServerFilter.prepareResponse(HttpServerFilter.java:867)
at org.glassfish.grizzly.http.HttpServerFilter.encodeHttpPacket(HttpServerFilter.java:834)
at org.glassfish.grizzly.http.HttpServerFilter.commitAndCloseAsError(HttpServerFilter.java:1185)
at org.glassfish.grizzly.http.HttpServerFilter.sendBadRequestResponse(HttpServerFilter.java:1177)
at org.glassfish.grizzly.http.HttpServerFilter.onHttpHeaderError(HttpServerFilter.java:796)
at org.glassfish.grizzly.http.HttpCodecFilter.handleRead(HttpCodecFilter.java:583)
at org.glassfish.grizzly.http.HttpServerFilter.handleRead(HttpServerFilter.java:334)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:526)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.SameThreadIOStrategy.executeIoEvent(SameThreadIOStrategy.java:103)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:415)
at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:384)
at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:348)
at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:279)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)
at java.lang.Thread.run(Thread.java:745)

Forced binding doesn't use the default type

Describe the bug
When you bind a SPARQL variable to a value through the Java API, the API ignores the default type stated in the JSAP and uses xsd:string.

To Reproduce
Steps to reproduce the behavior:

  1. Create a JSAP with one update and one forcedBinding with type xsd:decimal
  2. Set the forced binding using the one-argument constructor of RDFTermLiteral
  3. Execute the update
  4. Check the datatype of the value inserted in the DB.

Expected behavior
The datatype should be xsd:decimal instead of xsd:string.

System information(please complete the following information):

  • OS: Windows
  • Engine version: 0.9.5
  • SparqlEnpoint used blazegraph

SPARQL11SEProtocol NPE on subscribe

Using SPARQL11SEProtocol with the default constructor can cause a NullPointerException when the subscribe method is called.

How to reproduce

final SPARQL11SEProperties properties = new SPARQL11SEProperties(new File("file.jsap"));
client = new SPARQL11SEProtocol(properties);

SubscribeRequest sub = new SubscribeRequest("select * where {?a ?b ?c}");
client.subscribe(sub); //NPE

Cause
The SPARQL11SEProtocol(properties) constructor doesn't initialize the wsClient with an instance of SPARQL11SEWebsocket. This was probably done to avoid pointless usage of system resources, since the client would never receive any notification because it doesn't specify an ISubscriptionHandler.

Expected behavior
The client should use a specific exception to handle this kind of bad API use. Better, it shouldn't provide the SPARQL11SEProtocol(properties) constructor at all, because if the client only wants to update and query, a SPARQL11Protocol instance is better suited.
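A hypothetical illustration of the first suggestion (failing fast with a dedicated exception instead of a NullPointerException); the class name is invented for this example and does not exist in the SEPA sources:

// Hypothetical: thrown by subscribe() when no ISubscriptionHandler was supplied,
// so the misuse is reported explicitly rather than surfacing as an NPE.
public class SubscriptionHandlerMissingException extends IllegalStateException {
    public SubscriptionHandlerMissingException() {
        super("No ISubscriptionHandler configured: use "
                + "SPARQL11SEProtocol(SPARQL11SEProperties, ISubscriptionHandler) "
                + "for subscriptions, or SPARQL11Protocol for update/query only.");
    }
}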

Sepa as web thing

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Broken subscriptions for queries with ASK statement

Describe the bug
Subscribing to any SEPA engine with a query containing an ASK statement causes unexpected behaviours in the engine.

To Reproduce
Steps to reproduce the behaviour:

  1. Use any sepa client api, like SEPA-JS
  2. Subscribe with this sparql:
    ASK { GRAPH auth:Acl { ?uuid acl:accessTo ?foi; acl:mode acl:Read, acl:Write; acl:agent ?webId } }
  3. Perform an update for the previously created subscription

Expected behavior
As a SPARQL-compliant endpoint, the SEPA engine should support the ASK statement for updates, queries and subscriptions.

Blazegraph warning

WARN : HttpParser.java:1347: badMessage: 400 Illegal character 0x3 in state=START in '\x03<<<\x00\x00*%\xE0\x00\x00\x00\x00\x00Cookie:...=Test\r\n\x01\x00\x08\x00\x03\x00\x00\x00>>>1.1\r\nContent-Type...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' for HttpChannelOverHttp@6d644f25{r=0,c=false,a=IDLE,uri=-}
WARN : HttpParser.java:1347: badMessage: 400 Illegal character 0x3 in state=START in '\x03<<<\x00\x00*%\xE0\x00\x00\x00\x00\x00Cookie:...=Test\r\n\x01\x00\x08\x00\x03\x00\x00\x00>>>1.1\r\nContent-Type...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' for HttpChannelOverHttp@42b1498d{r=0,c=false,a=IDLE,uri=-}
WARN : HttpParser.java:1347: badMessage: 400 Illegal character 0x3 in state=START in '\x03<<<\x00\x00+&\xE0\x00\x00\x00\x00\x00Cookie:...hello\r\n\x01\x00\x08\x00\x03\x00\x00\x00>>>harset=UTF-8\r\nCon...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' for HttpChannelOverHttp@2bd8a226{r=0,c=false,a=IDLE,uri=-}
WARN : HttpParser.java:1347: badMessage: 400 Illegal character 0x3 in state=START in '\x03<<<\x00\x00+&\xE0\x00\x00\x00\x00\x00Cookie:...hello\r\n\x01\x00\x08\x00\x03\x00\x00\x00>>>harset=UTF-8\r\nCon...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' for HttpChannelOverHttp@ecaaaf8{r=0,c=false,a=IDLE,uri=-}

No notifications with the default graph (or in triple mode)

Describe the bug
There are no notifications when the subscription (and update) queries use the default graph (by not specifying any graph in queries).

To Reproduce
Steps to reproduce the behavior via the SEPA Playground:

  1. Add subscription:
SELECT ?o WHERE {
  <http://sepatest/testsubj> <http://sepatest/hasValue> ?o
}
  2. Issue the update:
DELETE {
    <http://sepatest/testsubj> <http://sepatest/hasValue> ?o
} INSERT {
    <http://sepatest/testsubj> <http://sepatest/hasValue> "c"
} WHERE {
  OPTIONAL {
    <http://sepatest/testsubj> <http://sepatest/hasValue> ?o
}}
  3. There are no notifications in the Subscribe tab. However, the same query that was used to subscribe returns the new value.

Expected behavior
I expect to see notification about new value of the subject in the Subscribe tab of the Playground.

System information(please complete the following information):

  • OS: [Windows 10]
  • Engine version [SEPA Broker Ver 0.9.11] (though the jar file I downloaded is named engine-0.9.12.jar)
  • SparqlEnpoint used [blazegraph 2.1.6-SNAPSHOT]

Additional context
Yes, the simple fix from my side would be to use some explicitly specified graph. However, that turns out to be impossible: I am planning to use Blazegraph with inference enabled as the endpoint, but Blazegraph doesn't support inference in quads mode, and it obviously doesn't support graphs in triples mode.

Handling Queries Content-type

SPARQL endpoints support many Content-Types; SEPA should at least provide the Content-Types of the underlying SPARQL endpoint.

Supporting formats like, for example, N-Triples would make it easier to parse the results of CONSTRUCT queries. @desmovalvo, is it?
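For reference, a SPARQL client selects the result format through the Accept header; a minimal sketch with the JDK 11 HTTP client, assuming a broker query endpoint at http://localhost:8000/query (host, port and path are taken from the JSAP example in a later issue, not from an official SEPA default):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConstructAsNTriples {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/query"))    // illustrative endpoint
                .header("Content-Type", "application/sparql-query")
                .header("Accept", "application/n-triples")         // ask for N-Triples back
                .POST(HttpRequest.BodyPublishers.ofString(
                        "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}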

<help wanted> Problems about implementing delete function based on NGSI-LD

Hello, happy new year guys.
I have a problem when implementing the NGSI-LD protocol; I just cloned the NGSI-LD branch directly. The get-entity function is implemented, and I would like to implement a delete function by entity ID.

The delete function is in NgsiLdRdfMapper.java, as follows:

public boolean deleteEntityById(String entityId) {
    if (getEntityById(entityId) == null) {
        return false;
    }

    JsonObject jsonld = getEntityById(entityId);

    if (jsonld == null)
        return false;

    RDFDataset ds = fromJsonLd(jsonld);

    if (ds == null | ds.isEmpty())
        return false;

    String triples = nTriples(ds);
    String sparql = "DELETE DATA { GRAPH <" + NgsiLdRdfMapper.ngsiLdEntitiesGraph + "> {" + triples + "}";
    Response ret = update(sparql);

    if (ret.isError()) {
        ErrorResponse error = (ErrorResponse) ret;
        lastError = NgsiLdError.InternalError;
        lastError.setTitle(error.getError());
        lastError.setDetail(error.getErrorDescription());
        return false;
    }

    return true;
}

And in EntityById.java, in the handlers folder, I declared the delete function:

protected void delete(String link) {
    super.delete(link);

    // Get the entity ID
    String entityId = "";
    entityId = matcher.group("id");

    // Get entity graph
    JsonObject jsonld = ngsiLdRdfMapper.getEntityById(entityId);

    if (jsonld == null) {
        NgsiLdError error = ngsiLdRdfMapper.getLastError();
        setResponse(error.getErrorCode(), "application/json", error.getJsonResponse(), null);
        return;
    }

    if (ngsiLdRdfMapper.deleteEntityById(entityId)) {
        setResponse(204, null, null, null);
    }
}

When I try to delete the entity I posted using NGSI-LD, there is a Java NullPointerException.
My idea for implementing delete is to first construct all the RDF triples about the entity with a query, and then use an update to delete them. The error is always a NullPointerException. Besides, I don't know how to unit test this part.

It would be really nice if you guys could have a look at my code and maybe teach me how to unit test the protocols, since each time I build the project I need to skip all tests, otherwise it fails.

best wishes
Bang

Automate engine jar creation

Right now there are some problems in the engine jar creation. In particular, the assembly goal does not create a valid manifest with the Main-Class field.

The expected behavior should be a single Maven goal that creates a single rar file with:

  • jar with all the dependencies
  • sepa.jks
  • engine.jpar
  • endpoint.jpar

Violation of Liskov substitution principle in JSAP file

I was working on the DTN branch and I saw what, in my opinion, is an important error made at design time. The JSAP class violates the Liskov substitution principle (https://en.wikipedia.org/wiki/Liskov_substitution_principle): to be more accurate, that class states that JSAP IS-A SPARQL11SEProperties, but this is not correct. The original thought, probably, was that the JSAP class USES the SPARQL11SEProperties class. The JSAP class is an abstraction of a real configuration file, which cannot itself be a collection of properties. This is an important difference that, in my opinion, should be fixed.
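A minimal sketch of the two designs (simplified classes, not the actual SEPA sources): inheritance claims that JSAP IS-A SPARQL11SEProperties, while composition expresses that a JSAP file USES a set of properties:

// Simplified illustration only; the real SPARQL11SEProperties and JSAP classes are richer.
class SPARQL11SEProperties {
    String host() { return "localhost"; }
}

// Current design: JSAP IS-A SPARQL11SEProperties (the LSP violation described above).
class JsapAsProperties extends SPARQL11SEProperties { }

// Proposed direction: JSAP USES SPARQL11SEProperties (composition).
class Jsap {
    private final SPARQL11SEProperties properties;

    Jsap(SPARQL11SEProperties properties) {
        this.properties = properties;
    }

    SPARQL11SEProperties properties() { return properties; }
}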

json-ld and other formats

I've been asked:
"If I have a JSON-LD file, why do I have to: (i) parse it, (ii) build my INSERT DATA, (iii) post it to SEPA?"
I think we should consider the possibility of accepting that kind of file directly, as an insert-only or delete-only update.

I will get some feedback, be prepared!

Impossible to use secure subscriptions (with Python3 APIs)

SEPA denies all the secure subscription requests through Python3 APIs (the registration and request of a token are fully working, as well as the secure update and query requests over HTTPS).

2018-01-11T11:50:01,894 [DEBUG] WebSocketWorker-44 (AuthorizationManager.java:525) Validate token
2018-01-11T11:50:01,894 [WARN ] WebSocketWorker-44 (SecureWebsocketServer.java:61) NOT AUTHORIZED

Subscriptions with OPTIONAL give wrong notifications

From @desmovalvo :

Let's consider an empty datastore. Subscribe to:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> 
PREFIX td: <http://wot.arces.unibo.it/ontology/web_of_things#> 
SELECT ?thing ?property ?action ?event ?tName ?pName ?aName ?eName 
WHERE { 
   ?thing rdf:type td:Thing . 
   ?thing td:hasName ?tName . 
   OPTIONAL { 
      ?thing td:hasEvent ?event . 
      ?event td:hasName ?eName } . 
   OPTIONAL { 
      ?thing td:hasAction ?action . 
      ?action td:hasName ?aName } . 
   OPTIONAL { 
      ?thing td:hasProperty ?property . 
      ?property td:hasName ?pName }
}

And consider the following two updates:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> 
PREFIX td: <http://wot.arces.unibo.it/ontology/web_of_things#> 
PREFIX qmul: <http://eecs.qmul.ac.uk/wot#> 
INSERT DATA {
   qmul:bad31340-4058-4a82-8f2e-3360b88cf910 rdf:type td:Thing. 
   qmul:bad31340-4058-4a82-8f2e-3360b88cf910 td:hasName 'fooThing' 
} 
PREFIX wot: <http://wot.arces.unibo.it/sepa#> 
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> 
PREFIX td: <http://wot.arces.unibo.it/ontology/web_of_things#> 
PREFIX qmul: <http://eecs.qmul.ac.uk/wot#> 
INSERT DATA {
  qmul:bad31340-4058-4a82-8f2e-3360b88cf910 td:hasEvent qmul:00f95a14-b605-41cb-85a6-543e57be9c45 .  
  qmul:00f95a14-b605-41cb-85a6-543e57be9c45 rdf:type td:Event . 
  qmul:00f95a14-b605-41cb-85a6-543e57be9c45 td:hasName 'fooEvent' . 
  qmul:00f95a14-b605-41cb-85a6-543e57be9c45 wot:hasOutputDataSchema '-' 
}

The first SPARQL Update is correctly notified with:

[Notif. 1] Added: [{'thing': {'type': 'uri', 'value': 'http://eecs.qmul.ac.uk/wot#bad31340-4058-4a82-8f2e-3360b88cf910'}, 'tName': {'type': 'literal', 'value': 'fooThing'}}]
[Notif. 1] Removed: []

The second SPARQL Update instead produces a wrong notification with removed results (that are not really removed!):

[Notif. 2] Added: [{'thing': {'type': 'uri', 'value': 'http://eecs.qmul.ac.uk/wot#bad31340-4058-4a82-8f2e-3360b88cf910'}, 'tName': {'type': 'literal', 'value': 'fooThing'}, 'event': {'type': 'uri', 'value': 'http://eecs.qmul.ac.uk/wot#00f95a14-b605-41cb-85a6-543e57be9c45'}, 'eName': {'type': 'literal', 'value': 'fooEvent'}}]
[Notif. 2] Removed: [{'thing': {'type': 'uri', 'value': 'http://eecs.qmul.ac.uk/wot#bad31340-4058-4a82-8f2e-3360b88cf910'}, 'tName': {'type': 'literal', 'value': 'fooThing'}}]

I have been able to reproduce the same result with both Python3 and JavaScript. SEPA version: 0.8.4.
To easily reproduce the bug, simply copy and run the following Python3 script (and the JSAP file below):

#!/usr/bin/python3

# global reqs
import logging
from uuid import uuid4
from sepy.LowLevelKP import *
from sepy.JSAPObject import *
from sepy.BasicHandler import *

class MyHandler:

    def __init__(self):
        self.notif = 0
    

    def handle(self, added, removed):
        logging.debug("[Notif. %s] Added: %s" % (self.notif, added))
        logging.debug("[Notif. %s] Removed: %s" % (self.notif, removed))
        self.notif += 1
    

# main
if __name__ == "__main__":

    # initialize the logging system
    logger = logging.getLogger('annotatorWT')
    logging.basicConfig(format='[%(levelname)s] %(message)s', level=logging.DEBUG)
    logging.debug("Logging subsystem initialized")
    
    # create an instance of the JSAP Object and the KP
    jsap = JSAPObject("bugtest.jsap")
    kp = LowLevelKP(None)
    
    # delete the content of SEPA
    # the initial state will be the "empty graph", but this is not mandatory
    logging.debug("Deleting the entire graph")
    kp.update(jsap.updateUri, jsap.getUpdate("DELETE_ALL", {}))

    # subscribe
    logging.debug("Subscribing:")
    sText = jsap.getQuery("THINGS", {})
    logging.info(sText)
    kp.subscribe(jsap.subscribeUri, sText, "things", MyHandler())

    # put a thing
    input("Press <ENTER> for the first insert\n")
    logging.debug("Performing the following update:")
    tid = str(uuid4())
    u = jsap.getUpdate("ADD_NEW_THING", {
        "name": "fooThing",
        "thing": jsap.namespaces["qmul"] + tid
    })
    logging.debug(u)
    kp.update(jsap.updateUri, u)

    # add an event
    input("Press <ENTER> for the second insert\n")
    eid = str(uuid4())
    logging.debug("Performing the following update:")
    u = jsap.getUpdate("ADD_EVENT", {
        "event": jsap.namespaces["qmul"] + eid,
        "thing": jsap.namespaces["qmul"] + tid,
        "eName": "fooEvent",
        "outDataSchema": "-"
    })
    logging.debug(u)
    kp.update(jsap.updateUri, u)    
        
    # wait, then destroy data
    try:
        input("Press <ENTER> to quit\n")
    except KeyboardInterrupt:
        logging.debug("Bye")
The bugtest.jsap file:

{
    "parameters": {
        "host": "localhost",
        "ports": {
            "http": 8000,
            "https": 8443,
            "ws": 9000,
            "wss": 9443
        },
        "paths": {
            "query": "/query",
            "update": "/update",
            "subscribe": "/subscribe",
            "register": "/oauth/register",
            "tokenRequest": "/oauth/token",
            "securePath": "/secure"
        }
    },
    "namespaces": {
        "wot": "http://wot.arces.unibo.it/sepa#",
        "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
        "dul": "http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#",
        "td": "http://wot.arces.unibo.it/ontology/web_of_things#",	
        "qmul": "http://eecs.qmul.ac.uk/wot#"
    },
    "updates": {
	"DELETE_ALL":{
	    "sparql":"DELETE { ?s ?p ?o } WHERE { ?s ?p ?o }",
	    "forcedBindings":{}
	},
	"ADD_NEW_THING": {
            "sparql": "INSERT DATA {?thing rdf:type td:Thing. ?thing td:hasName ?name}",
            "forcedBindings": {
                "thing": {
                    "type": "uri",
                    "value": ""
                },
                "name": {
                    "type": "literal",
                    "value": ""
                }
            }
        },
        "ADD_EVENT": {
            "sparql": "INSERT DATA {?thing td:hasEvent ?event. ?event rdf:type td:Event . ?event td:hasName ?eName . ?event wot:hasOutputDataSchema ?outDataSchema }",
            "forcedBindings": {
                "event": {
                    "type": "uri",
                    "value": ""
                },
                "thing": {
                    "type": "uri",
                    "value": ""
                },
                "eName": {
                    "type": "literal",
                    "value": ""
                },
                "outDataSchema": {
                    "type": "literal",
                    "value": ""
                }
            }
        }	
    },
    "queries":{
	"THINGS":{
	    "sparql":"SELECT ?thing ?property ?action ?event ?tName ?pName ?aName ?eName WHERE { ?thing rdf:type td:Thing . ?thing td:hasName ?tName . OPTIONAL { ?thing td:hasEvent ?event . ?event td:hasName ?eName } . OPTIONAL { ?thing td:hasAction ?action . ?action td:hasName ?aName } . OPTIONAL { ?thing td:hasProperty ?property . ?property td:hasName ?pName }}",
	    "forcedBindings":{}
	}
    }
}

404 WebSocket Upgrade Failure

I have built the Maven project successfully. However, after running the engine and trying to subscribe, I get a 404 WebSocket Upgrade Failure error.

To Reproduce
Steps to reproduce the behavior:

  1. SEPA/json-ld
  2. mvn clean install
  3. Run engine
  4. go to http://127.0.0.1:9000/
  5. See the error

System information(please complete the following information):

  • OS: [Ubuntu 18.04.4]
  • Engine version [pulled the sepa/ngsi-ld]
  • SparqlEnpoint used [e.g. blazegraph 2.1.6]

Additional context
If there is something that I'm missing, please let me know.

Critical: websocket block

Uncaught exception in thread "WebSocketWorker-16":java.lang.OutOfMemoryError: Java heap space
        at java.util.LinkedHashMap.newNode(LinkedHashMap.java:256)
        at java.util.HashMap.putVal(HashMap.java:630)
        at java.util.HashMap.put(HashMap.java:611)
        at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:146)
        at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:128)
        at sun.util.resources.OpenListResourceBundle.handleKeySet(OpenListResourceBundle.java:96)
        at java.util.ResourceBundle.containsKey(ResourceBundle.java:1807)
        at sun.util.locale.provider.LocaleResources.getTimeZoneNames(LocaleResources.java:263)
        at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayNameArray(TimeZoneNameProviderImpl.java:124)
2017-07-23 19:33:39,098 (KeepAlive.java:79) [DEBUG] @unsubscribeAll
2017-07-23 19:33:39,101 (TokenHandler.java:80) [DEBUG] Get token #254 (Available: 982)

        at sun.util.locale.provider.TimeZoneNameProviderImpl.getDisplayName(TimeZoneNameProviderImpl.java:99)
        at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getName(TimeZoneNameUtility.java:240)
2017-07-23 19:33:39,101 (KeepAlive.java:91) [DEBUG] >> Scheduling UNSUBSCRIBE request #254

2017-07-23 19:33:39,308 (RequestResponseHandler.java:124) [DEBUG] >> UNSUBSCRIBE #254 sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5
        at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:198)
2017-07-23 19:33:39,309 (Scheduler.java:146) [DEBUG] >> UNSUBSCRIBE #254 sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5

2017-07-23 19:33:39,309 (Processor.java:95) [DEBUG] *Process* UNSUBSCRIBE #254 sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5
        at sun.util.locale.provider.TimeZoneNameUtility$TimeZoneNameGetter.getObject(TimeZoneNameUtility.java:184)
2017-07-23 19:33:39,309 (SPUManager.java:64) [DEBUG] Process UNSUBSCRIBE #254

2017-07-23 19:33:39,309 (Processor.java:102) [DEBUG] << {"unsubscribed":"sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5"}
        at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:281)
2017-07-23 19:33:39,310 (RequestResponseHandler.java:80) [DEBUG] << {"unsubscribed":"sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5"}

2017-07-23 19:33:39,310 (Scheduler.java:198) [DEBUG] << UNSUBSCRIBE RESPONSE #254 {"unsubscribed":"sepa://subscription/cea0cd5e-e165-4c33-b46f-8bb2031a9aa5"}
        at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:265)
2017-07-23 19:33:39,310 (RequestResponseHandler.java:248) [DEBUG] Waiting for UNSUBSCRIBE requests...

        at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayNamesImpl(TimeZoneNameUtility.java:166)
        at sun.util.locale.provider.TimeZoneNameUtility.retrieveDisplayName(TimeZoneNameUtility.java:137)
        at java.util.TimeZone.getDisplayName(TimeZone.java:400)
        at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1271)
        at java.text.SimpleDateFormat.format(SimpleDateFormat.java:966)
        at java.text.SimpleDateFormat.format(SimpleDateFormat.java:936)
        at java.text.DateFormat.format(DateFormat.java:345)
        at org.java_websocket.drafts.Draft_6455.getServerTime(Draft_6455.java:38)
        at org.java_websocket.drafts.Draft_6455.postProcessHandshakeResponseAsServer(Draft_6455.java:24)
        at org.java_websocket.WebSocketImpl.decodeHandshake(WebSocketImpl.java:250)
        at org.java_websocket.WebSocketImpl.decode(WebSocketImpl.java:173)
        at org.java_websocket.server.WebSocketServer$WebSocketWorker.run(WebSocketServer.java:781)

Readme for libraries

The client-api and client-pac-pattern modules deserve their own README with a section on how to add them to a Java project (Maven / Gradle).

Bug on Client class

There is a bug in the Client class, in the addDefaultDatatype function.
When the bindings argument is null, it explodes with a NullPointerException.

To fix it, I suggest checking for the null value:
if (bindings == null) return null;

General Question about the SPARQL UPDATE using DELETE Function.

Describe the bug
The DELETE function is not working well.

Hello, I have a general question about the update function in the SEPA Dashboard.

DELETE {
  GRAPH <http://uri.etsi.org/ngsi-ld/Entities> {
    <urn:ngsi-ld:Vehicle:V123> <http://uri.fiware.org/ns/datamodels/speed> ?blank .
    ?blank <http://uri.etsi.org/ngsi-ld/hasValue> ?d
  }
} INSERT {
  GRAPH <http://uri.etsi.org/ngsi-ld/Entities> {
    <urn:ngsi-ld:Vehicle:V123> <http://uri.fiware.org/ns/datamodels/speed> ?blank .
    ?blank <http://uri.etsi.org/ngsi-ld/hasValue> "42"^^<http://www.w3.org/2001/XMLSchema#integer>
  }
}

I use this code in the SEPA Dashboard to test the deletion. I would like to replace the original speed value of 23 with 42. I query everything necessary, including the name of the graph, and now I would like to change the value. However, no change happens. I also subscribe to the speed value in the Dashboard, and there is no notification.

Expected Values
After the update, the speed value should change from the original 23 to the new integer value 42. Besides, the subscription to the speed value should produce a notification.

System information(please complete the following information):

  • OS: MacOS 11.1
  • Engine version [Version in the NGSI-LD branch]
  • blazegraph 2.1.5

Blazegraph out of memory

5259008 [Friday, September 29, 2017 2:27:39 PM UTC], commitCounter=1559918, commitRecordAddr={off=NATIVE:-3945402,len=422}, commitRecordIndexAddr={off=NATIVE:-8398,len=220}, blockSequence=2, quorumToken=-1, metaBitsAddr=60112764984, metaStartAddr=23836, storeType=RW, uuid=fd5915a5-0498-4a65-b30e-d47583c9fc2a, offsetBits=42, checksum=789653958, createTime=1506351399782 [Monday, September 25, 2017 2:56:39 PM UTC], closeTime=0}
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:552)
        at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460)
        at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more
Caused by: org.openrdf.query.UpdateExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=kb.spo.SPOC: lastRootBlock=rootBlock{ rootBlock=0, challisField=1559918, version=3, nextOffset=88965952577743, localTime=1506695260175 [Friday, September 29, 2017 2:27:40 PM UTC], firstCommitTime=1506351400906 [Monday, September 25, 2017 2:56:40 PM UTC], lastCommitTime=1506695259008 [Friday, September 29, 2017 2:27:39 PM UTC], commitCounter=1559918, commitRecordAddr={off=NATIVE:-3945402,len=422}, commitRecordIndexAddr={off=NATIVE:-8398,len=220}, blockSequence=2, quorumToken=-1, metaBitsAddr=60112764984, metaStartAddr=23836, storeType=RW, uuid=fd5915a5-0498-4a65-b30e-d47583c9fc2a, offsetBits=42, checksum=789653958, createTime=1506351399782 [Monday, September 25, 2017 2:56:39 PM UTC], closeTime=0}
        at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1080)
        at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152)
        at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1934)
        at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1536)
        at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1501)
        at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:714)
        ... 4 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=kb.spo.SPOC: lastRootBlock=rootBlock{ rootBlock=0, challisField=1559918, version=3, nextOffset=88965952577743, localTime=1506695260175 [Friday, September 29, 2017 2:27:40 PM UTC], firstCommitTime=1506351400906 [Monday, September 25, 2017 2:56:40 PM UTC], lastCommitTime=1506695259008 [Friday, September 29, 2017 2:27:39 PM UTC], commitCounter=1559918, commitRecordAddr={off=NATIVE:-3945402,len=422}, commitRecordIndexAddr={off=NATIVE:-8398,len=220}, blockSequence=2, quorumToken=-1, metaBitsAddr=60112764984, metaStartAddr=23836, storeType=RW, uuid=fd5915a5-0498-4a65-b30e-d47583c9fc2a, offsetBits=42, checksum=789653958, createTime=1506351399782 [Monday, September 25, 2017 2:56:39 PM UTC], closeTime=0}
        at com.bigdata.journal.AbstractJournal.commit(AbstractJournal.java:3131)
        at com.bigdata.rdf.store.LocalTripleStore.commit(LocalTripleStore.java:98)
        at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.commit2(BigdataSail.java:3695)
        at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.commit2(BigdataSailRepositoryConnection.java:330)
        at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertCommit(AST2BOpUpdate.java:375)
        at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:321)
        at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1072)
        ... 9 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=kb.spo.SPOC
        at com.bigdata.journal.Name2Addr.handleCommit(Name2Addr.java:859)
        at com.bigdata.journal.AbstractJournal.notifyCommitters(AbstractJournal.java:2716)
        at com.bigdata.journal.AbstractJournal.access$1700(AbstractJournal.java:255)
        at com.bigdata.journal.AbstractJournal$CommitState.notifyCommitters(AbstractJournal.java:3422)
        at com.bigdata.journal.AbstractJournal$CommitState.access$2600(AbstractJournal.java:3298)
        at com.bigdata.journal.AbstractJournal.commitNow(AbstractJournal.java:4092)
        at com.bigdata.journal.AbstractJournal.commit(AbstractJournal.java:3129)
        ... 15 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=kb.spo.SPOC
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at com.bigdata.journal.Name2Addr.handleCommit(Name2Addr.java:749)
        ... 21 more
Caused by: java.lang.RuntimeException: Could not commit index: name=kb.spo.SPOC
        at com.bigdata.journal.Name2Addr$CommitIndexTask.call(Name2Addr.java:578)
        at com.bigdata.journal.Name2Addr$CommitIndexTask.call(Name2Addr.java:513)
        ... 4 more
Caused by: java.lang.OutOfMemoryError: Java heap space
WARN : QueuedThreadPool.java:591: 
ERROR: Banner.java:160: Uncaught exception in thread
java.lang.OutOfMemoryError: Java heap space
ERROR: Banner.java:160: Uncaught exception in thread
java.lang.OutOfMemoryError: Java heap space
WARN : ServletHandler.java:665: Error for /blazegraph/namespace/kb/sparql
java.lang.OutOfMemoryError: Java heap space
Killed

WebSocketGate does not validate webtoken

Describe the bug
SEPA engine launched in secure mode does not validate subscription requests.

To Reproduce
Steps to reproduce the behavior:

  1. Start SEPA with the secure option set to true
  2. Create a secure WebSocket connection with SEPA engine
  3. Send a subscribe message
  4. SEPA replies with the query results

Expected behavior
SEPA should reply with the error message specified here

System information(please complete the following information):

  • OS: Windows
  • Engine version: 0.9.7 and before
  • SparqlEnpoint: Blazegraph 2.14

Pre-flight requests fail

Pre-flight requests fail with the latest version of SEPA. It is very easy to reproduce the error: simply open the dashboard, or any other application that interacts with SEPA, and perform updates/queries through HTTP.

jpar files

Should we put them in .gitignore?
In any case, I think they should be downloaded when cloning the repo, but modifications to them should not be tracked.

Improving EngineProperties saving

In my opinion, the class it.unibo.arces.wot.sepa.engine.core.EngineProperties manages the default values wrongly. To be more precise, the defaults are duplicated in the file: they appear both in the "defaults" function and in every single getXXXX function.
I think the best solution is to use one class to manage the current settings, loading and saving it in JSON format using the GSON library.
The structure I propose is (see the sketch after this list):

  • EngineProperties offers the methods loadProperties, saveProperties and loadDefaults.
  • EngineProperties has one field for each property to save.
  • Keep the getXXXX functions so as not to change all the project code.
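A minimal sketch of the proposed structure, assuming GSON for (de)serialization; the field names and default values are illustrative, not the current EngineProperties API:

import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class EnginePropertiesSketch {
    // One field per property: the defaults live here, in exactly one place.
    private int schedulerTimeout = 5000;   // illustrative default
    private int httpPort = 8000;
    private int wsPort = 9000;

    public static EnginePropertiesSketch loadProperties(String path) throws IOException {
        try (Reader reader = new FileReader(path)) {
            return new Gson().fromJson(reader, EnginePropertiesSketch.class);
        }
    }

    public static EnginePropertiesSketch loadDefaults() {
        return new EnginePropertiesSketch(); // the field initializers are the defaults
    }

    public void saveProperties(String path) throws IOException {
        try (Writer writer = new FileWriter(path)) {
            new GsonBuilder().setPrettyPrinting().create().toJson(this, writer);
        }
    }

    // Getters in the existing style can stay, so the rest of the engine code is unchanged.
    public int getSchedulerTimeout() { return schedulerTimeout; }
    public int getHttpPort() { return httpPort; }
    public int getWsPort() { return wsPort; }
}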
