fraunhoferisst / dataspaceconnector
This is an IDS Connector reference implementation.
License: Apache License 2.0
"For our Supply Chain Execution Project, I want to use the IDS Connector on an ARM based Raspbery Pi 4 B+ 4GB with Raspbian 32 Bit. I can't use the provided docker images. It would be good to provide some ARM images in the future."
Return a list of all stored ContractAgreements. This may be useful for automated processes as an addition to direct database interaction.
The test run fails; the same applies to building the Docker container.
Build and run with -DskipTests works.
Details:
Paths to the keystore and truststore are provided as URIs, e.g. "@id" : "file:///conf/keystore-localhost.p12"
KeyStoreManager extracts the path but does not remove the leading /
as expected by classLoader.getResourceAsStream(...)
Stacktrace:
Caused by: java.lang.NullPointerException
at de.fraunhofer.isst.ids.framework.configuration.KeyStoreManager.loadKeyStore(KeyStoreManager.java:105)
at de.fraunhofer.isst.ids.framework.configuration.KeyStoreManager.<init>(KeyStoreManager.java:65)
at de.fraunhofer.isst.ids.framework.spring.starter.ConfigProducer.<init>(ConfigProducer.java:52)
For the Configuration Manager and probably other services, it would be interesting to regularly check whether the connector is still "online"/accessible.
Don't mind me...
Add public endpoint for self-description without resource catalogs.
As far as I can see, the configuration is now static within the project's JAR file, which means the JAR has to be rebuilt every time the configuration changes.
Would it be an option to allow mounting the configuration into the file system of a Docker container? This would let the same Docker image be used with different configurations, which greatly improves the practical usability of the DataspaceConnector.
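A minimal sketch of such a setup, assuming a hypothetical image name and mount path (neither is part of the project's published artifacts):

```shell
# Sketch: keep the connector config outside the image and mount it at runtime.
# Image name, tag, and paths are assumptions for illustration only.
CONFIG_DIR="$(pwd)/conf"
MOUNT_SPEC="${CONFIG_DIR}:/conf:ro"
# Echoed rather than executed here; remove the leading 'echo' to actually run it.
echo docker run -d -p 8080:8080 -v "${MOUNT_SPEC}" dataspaceconnector:latest
```

With such a mount, the same image could be started against different config.json files simply by pointing CONFIG_DIR somewhere else.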
The default config.json contains a proxy definition with the hard-coded address http://proxy.dortmund.isst.fraunhofer.de:3128. I cannot reach this proxy, and I don't think it is necessary for users outside the ISST. In fact, not being able to connect to the proxy causes issues when trying to run the connector. Would it be possible to remove the proxy definition? Or did I overlook another use case for it? Thank you :)
On resource creation, changes, metadata/data requests, contract negotiation, data accesses etc.
Remove unused imports from classes.
For advanced policy negotiation, it would be helpful if requested self-descriptions were deserialized and interpreted, or at least if a user were able to add known connectors to a list that is persisted by the connector.
A connector object in the repository could have the following information:
The H2 console cannot be accessed. The console reports: H2 console error: No suitable driver found for 08001/0.
The problem seems to stem from the use of custom filtering.
Hotfix: disable the @Component annotation on the HttpTraceFilter. It is the only custom filter at the moment. Note that this disables HTTP tracing!
The version is passed through from pom.xml to OpenAPI (see issue #60). The same method can be used for the rest of the information needed in OpenAPI.
Hi All,
This issue is mostly to document our discussion.
Currently, when using the DataspaceConnector on the server side, to expose an existing REST service, one has to specify the exact REST request that corresponds to a given Resource. However, many REST APIs require the passing of query parameters to be useful.
For example the service at https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1 provides air quality measurements for all of Europe. A typical use-case for this service would be:
https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things?$filter=Datastreams/ObservedProperty/Name eq 'NO2'
https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things?$filter=geo.intersects(Locations/location,geography'POLYGON ((0 55.7,0 52.4,5.6 52.4,5.6 55.7,0 55.7))')
https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Things(2281)/Datastreams?$select=name,id,unitOfMeasurement,observationType,properties&$expand=ObservedProperty
https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1/Datastreams(245)/Observations?$filter=phenomenonTime ge 2020-08-01T00:00:00Z and phenomenonTime lt 2020-09-01T20:00:00Z&$select=result,phenomenonTime&$orderby=phenomenonTime asc
In all of these requests, further parameters can be used to tune the data that is returned, both to minimise the number of requests required and to minimise the amount of data that needs to be sent, for instance with the $select and $expand parameters.
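With curl, such parameters can be passed without manual percent-encoding by combining -G with --data-urlencode. A sketch against the public endpoint above (echoed rather than executed, since it would perform a live request):

```shell
# Sketch: pass OData-style parameters to the SensorThings endpoint via curl.
# curl -G appends --data-urlencode pairs to the URL, percent-encoded.
BASE="https://airquality-frost.k8s.ilt-dmz.iosb.fraunhofer.de/v1.1"
FILTER="Datastreams/ObservedProperty/Name eq 'NO2'"
echo curl -G "${BASE}/Things" --data-urlencode "\$filter=${FILTER}"
# Remove the leading 'echo' to perform the live request.
```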
Furthermore, for each request that returns a list of items, the response can be a subset of the list and contain an @iot.nextLink property that points to the next subset. This nextLink is dynamically generated by the server; the DataspaceConnector on the client side would have to replace those links with links pointing to itself.
The service contains data from January 2018 until now: about 300 million Observations in 17974 Datastreams for 4382 Stations.
A demo map with the data of this service can be found at https://datacoveeu.github.io/API4INSPIRE/maps/AirQuality.html
The complete API standard document can be found here: https://www.ogc.org/standards/sensorthings
Two connectors automatically negotiate a contract before the actual data is exchanged.
Getting one's self-description has to be done via POST multipart. (Here with the given message type: https://github.com/International-Data-Spaces-Association/InformationModel/blob/1e6029c0a575381017c75280b69143822fa49e4a/taxonomies/Message.ttl#L69)
The relevant line in the current openapi.yaml: DataspaceConnector/openapi.yaml, line 174 at commit 1168191.
We suggest the following Docker build improvements/fixes:
cp *.jar will fail otherwise.
Creating a new resource in the Swagger GUI with the default settings creates a resource without a default policy. This leads to
Metadata could not be saved: Cannot invoke "de.fraunhofer.iais.eis.ContractOffer.toRdf()" because the return value of "java.util.ArrayList.get(int)" is null
when requesting that resource at the admin/api/request/description endpoint.
When following the tutorial "Hands-on IDS Communication" with the provided demo application running in Docker, it is not possible to publish a resource data string or to read that data string back.
First, we register a resource through POST /admin/api/resources/resource using the Swagger UI running on localhost:8080/admin. We use the provided JSON as the request body.
Subsequently, we register a corresponding resource representation using POST /admin/api/resources/{resource-id}/representation, passing the UUID returned in the previous step. We amended the JSON in the request body as follows:
{ "type": "json", "byteSize": 105, "sourceType": "local", "source": { "username": "-", "password": "-" } }
Subsequently, we try publishing a resource data string "Test Data" using PUT /admin/api/resources/{resource-id}/data, passing the previously returned UUID. We receive a 201 code with the response "Resource published".
However, once we try requesting the data string using GET /admin/api/resources/{resource-id}/data, we receive a 404 error telling us "Resource not found".
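The sequence above can be sketched as curl calls. Host, port, and request details beyond the paths given in the report are assumptions; the curl lines are commented out because they need a running connector.

```shell
# Sketch of the reported sequence; host is an assumption,
# and $ID stands for the UUID returned by step 1.
HOST="http://localhost:8080"
DATA="Test Data"
# 1) Register the resource; the response body contains the new UUID.
# curl -X POST "$HOST/admin/api/resources/resource" -H 'Content-Type: application/json' -d @resource.json
# 2) Register a representation for that UUID.
# curl -X POST "$HOST/admin/api/resources/$ID/representation" -H 'Content-Type: application/json' \
#   -d '{"type":"json","byteSize":105,"sourceType":"local","source":{"username":"-","password":"-"}}'
# 3) Publish the data string (reportedly returns 201 "Resource published").
# curl -X PUT "$HOST/admin/api/resources/$ID/data" -d "$DATA"
# 4) Fetch it back (this is the step that returns 404 in the report).
# curl "$HOST/admin/api/resources/$ID/data"
echo "$DATA"
```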
I'm having problems including the IDS certificate. Steps done:
server.ssl.key-store=classpath:conf/connector.p12
"ids:connectorDeployMode" : { "@id" : "idsc:PRODUCTIVE_DEPLOYMENT" },
"ids:keyStore" : { "@id" : "file:///conf/ieeconnector.p12" }
Starting the connector works without problems. On making a Description Request to another Connector, I get the answer
"ids:rejectionReason" : { "@id" : "idsc:MALFORMED_MESSAGE" }, "ids:securityToken" : { "@type" : "ids:DynamicAttributeToken", "@id" : "https://w3id.org/idsa/autogen/dynamicAttributeToken/60027237-5eff-465f-afeb-71dfd7769086", "ids:tokenValue" : "rejected!", "ids:tokenFormat" : { "@id" : "idsc:JWT" } }, "ids:senderAgent" : { "@id" : "https://w3id.org/idsa/autogen/baseConnector/42d834ec-855b-456e-8cac-009d5d56593a" }, "ids:correlationMessage" : { "@id" : "https://INVALID" },
and the text
Token could not be parsed! JWT strings must contain exactly 2 period characters. Found: 0
The console produces the following error:
2020-11-26 09:47:20 ERROR TokenManagerService:164 - Error retrieving token: Unexpected code Response{protocol=http/1.1, code=400, message=Bad Request, url=https://daps.aisec.fraunhofer.de/v2/token}
Where is the problem?
6 of 12 test classes fail on mvn clean package
Failures:
ArtifactRequestMessageHandlingTest.requestArtifact_invalidId:140 expected:<https://w3id.org/idsa/code/NOT_FOUND> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
ArtifactRequestMessageHandlingTest.requestArtifact_validId_provisionInhibited:120 expected:<https://w3id.org/idsa/code/NOT_AUTHORIZED> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
DescriptionRequestMessageHandlingTest.requestArtifactDescription_invalidId:110 expected:<https://w3id.org/idsa/code/NOT_FOUND> but was:<https://w3id.org/idsa/code/MALFORMED_MESSAGE>
Errors:
ArtifactRequestMessageHandlingTest.requestArtifact_validId_provisionAllowed:93 » InvalidTypeId
DescriptionRequestMessageHandlingTest.requestArtifactDescription_validId:89 » InvalidTypeId
DescriptionRequestMessageHandlingTest.requestSelfDescription:67 » InvalidTypeId
After running the Maven license plugin and the Maven dependency plugin, there appear to be unused included dependencies.
Getting rid of those dependencies removes unused license entries. The image build time will also decrease, since those dependencies won't be downloaded.
By using Maven build profiles, different build configurations can be enabled/disabled. This allows different build goals to be reached while the build complexity, and thereby the build time of the project, can be adjusted.
The following build profiles should be introduced:
No testing:
This profile should disable the test phase of the build. While implementing new features, running the tests with every compilation adds a significant amount of build time.
No documentation:
This profile should disable the update and generation of the documentation. While implementing new features, updating and generating the documentation is not necessary for every compilation. Skipping this step reduces the build time, resulting in faster access to the application.
Release build:
The release build should enable the developer to build the application with the strictest rule sets and may run additional steps that ensure the quality of the build (e.g. not allowing the application to build while there are unchecked warnings).
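A hedged sketch of how these three profiles could look in the pom.xml; the profile ids and the exact properties are suggestions, not existing project configuration:

```xml
<!-- Sketch: possible profile definitions; ids and properties are suggestions. -->
<profiles>
  <profile>
    <id>no-tests</id>
    <properties>
      <maven.test.skip>true</maven.test.skip>
    </properties>
  </profile>
  <profile>
    <id>no-documentation</id>
    <properties>
      <!-- skip javadoc generation during day-to-day builds -->
      <maven.javadoc.skip>true</maven.javadoc.skip>
    </properties>
  </profile>
  <profile>
    <id>release</id>
    <properties>
      <!-- fail the build on compiler warnings, as one possible strict rule -->
      <maven.compiler.failOnWarning>true</maven.compiler.failOnWarning>
    </properties>
  </profile>
</profiles>
```

A profile would then be selected per build, e.g. mvn clean package -P no-tests.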
The version of the Dataspace Connector has to be set manually in multiple project files, so there is a high risk that the version is not updated in all of them. This could lead to major problems down the line, since the version is used in the connector's runtime descriptions.
I suggest setting the version only once, in the pom.xml; the other locations can pull the information from the pom.
There are currently (at least) three locations where the version has to be set manually:
The last git-tagged version is "v.3.2.1", while the current Dataspace Connector is at version 3.3.0. The tags from v.3.2.1 onward should be added.
Move sourceType as type to BackendSource.
The representation's source could be extended by a SourceType.MONGODB.
Dear DataspaceConnector-Team,
we've set up two connectors on a server. After successfully registering a data resource and corresponding representation on one, we tried accessing it from the other (which runs on a different port). First, we tried requesting the connector's description using POST /admin/api/request/description, passing the URL http://{ip-address}:{port}/api/ids/data. We configured the relevant files in the repository for HTTP to work.
This worked in older versions. With the latest commit, however, we obtain an error message in the response body:
Failed to send description request message.
Did anyone else encounter this yet?
In the pom.xml, multiple references to the old GitLab repository are still present. These need to be updated.
The pom information is currently passed through via the project.properties file. The file application.properties already contains configuration settings; merging the two files would make the configuration more centralized.
The problem that needs to be investigated (and the reason two files exist in the first place) is that file filtering causes problems when turned on for the resource containing the application.properties file.
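One common way to make this work, assuming the project follows Spring Boot conventions (this is a sketch, not the project's current build): filter only application.properties, and rely on Spring Boot's @..@ filtering delimiters so Spring's own ${...} placeholders are not touched:

```xml
<!-- Sketch: enable Maven resource filtering only for application.properties.
     With the Spring Boot parent, filtered placeholders use @..@ delimiters,
     so Spring's runtime ${...} placeholders survive filtering unchanged. -->
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
      <includes>
        <include>application.properties</include>
      </includes>
    </resource>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>false</filtering>
      <excludes>
        <exclude>application.properties</exclude>
      </excludes>
    </resource>
  </resources>
</build>
```

In application.properties, a pom value would then be referenced as, e.g., version=@project.version@.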
The function is quite simple but is missing exception handling and some functionality, such as choosing which of the UUIDs in a URI should be selected.
If a description request (/admin/api/request/description) is sent with a requested resource URI that does not exist on the requested connector, the connector returns HTTP status code 500 with the message shown below.
In the example setup with two connectors the following request to the data-consumer connector:
https://localhost:8080/admin/api/request/description?recipient=https%3A%2F%2Flocalhost%3A8081%2Fapi%2Fids%2Fdata&requestedArtifact=https%3A%2F%2Fw3id.org%2Fidsa%2Fautogen%2FpublicKey%2F78eb73a3-3a2a-4626-a0ff-631ab50a00f9
would give the following response:
Metadata could not be saved: Could not resolve type id 'ids:RejectionMessage' as a subtype of [simple type, class de.fraunhofer.iais.eis.DescriptionResponseMessage]: known type ids = [ids:DescriptionResponseMessage] at [Source: (String)"{"@context":{"ids":"...
Expected:
Status Code 404 with the rejection message from the requested connector as payload.
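The percent-encoded recipient and requestedArtifact parameters in the example request above can be reproduced with a generic one-liner (python3 here is just a convenient encoder, not part of the connector's tooling):

```shell
# Percent-encode a URL for safe use as a query parameter value.
RECIPIENT="https://localhost:8081/api/ids/data"
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$RECIPIENT")
echo "$ENCODED"   # https%3A%2F%2Flocalhost%3A8081%2Fapi%2Fids%2Fdata
```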
The payload of the DescriptionResponseMessage is mapped to the DSC's metadata model. This causes errors when communicating with other connectors.
The representation's source could be extended by a SourceType.MINIO.
The readme mentions two zip files that seem to be missing: java-setup.zip and docker-setup.zip. This makes it impossible to follow the "Getting Started" chapter.
During a demonstration, the counter was increased, but the policy restriction was already shown on the very first access.
The PIP currently links to an endpoint on the consumer side. This makes no sense if the provider specifies the URI in its policy.
Currently all data stored in the PostgreSQL database in the docker setup is lost after restarting. Add a persistent volume to keep data across restarts.
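A minimal sketch of such a named volume in the Compose file; the service and volume names are assumptions, to be adapted to the project's actual docker-compose.yml:

```yaml
# Sketch: persist PostgreSQL data across container restarts via a named volume.
# Service/volume names and the image tag are assumptions.
services:
  postgres:
    image: postgres:13
    volumes:
      - connector-data:/var/lib/postgresql/data

volumes:
  connector-data:
```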