
solid-spec's Introduction

Solid


Re-decentralizing the Web

Solid is a proposed set of standards and tools for building decentralized Web applications based on Linked Data principles.

Read more on solidproject.org.

solid-spec's People

Contributors

acoburn, bblfish, csarven, dmitrizagidulin, dsebastien, gitter-badger, jordanshurmer, kjetilk, melvincarvalho, michielbdejong, mitzi-laszlo, nicola, noeldemartin, otto-aa, rhiaro, rubenverborgh, sandhawke, sourcejedi, squarejaw, stebalien, steffandroid, timbl, tmciver


solid-spec's Issues

HTTP Put to Create needs conditional

If you want to avoid accidentally overwriting a resource, you need a header that performs a conditional PUT. Since this is quite an obvious requirement, it should be added to the example to make it clear that this is feasible.
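The conditional PUT asked for here already exists in plain HTTP (RFC 7232): sending `If-None-Match: *` tells the server to refuse the write if any representation already exists. A minimal sketch — the helper function and URL are invented for illustration:

```javascript
// Hypothetical sketch: build a PUT that only creates the resource if it
// does not already exist, using the standard If-None-Match header.
function conditionalCreateRequest(url, body) {
  return {
    method: 'PUT',
    url: url,
    headers: {
      'Content-Type': 'text/turtle',
      // "*" matches any existing representation, so the server must answer
      // 412 Precondition Failed instead of silently overwriting.
      'If-None-Match': '*'
    },
    body: body
  };
}

const req = conditionalCreateRequest('https://example.org/data/new', '<> a <#Thing> .');
console.log(req.headers['If-None-Match']); // "*"
```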

rww-play as implementation of SoLiD

I think rww-play counts as an implementation of SoLiD. Could it be listed too? This is where a table of the features SoLiD specifies would be useful: each server could then state which features it implements, perhaps to what level.

ACL guidelines

A little more guidance on ACL would help people understand it, and would help those deploying Solid make sure they do not compromise access to information on the server.

  1. I understand that ACL works per instance of ldp:Resource (ldp:Container) and that one should not allow queries on the whole dataset.

  2. Can one add an ldp:member (ldp:contains) triple to include data accessible via a certain instance of ldp:Resource in the responses to queries made on the ldp:Container (used as the subject of ldp:member)?

  3. How does setting access on an ldp:Container compare to OAuth scopes? https://tools.ietf.org/html/rfc6749#section-3.3 @aaronpk

Side-effects of a "like" activity

An example would be https://www.w3.org/wiki/Socialwg/Social_API/User_stories#Liking_And_Showing_Likes. If a user likes some content, that should have three effects:

  1. A "like" activity is included in the user's recent activities stream.
  2. The user is included on the list of people who "like" the object.
  3. The object is included in the list of objects the user "likes".

Would maintaining these effects be done by the API client, or would servers be expected to ensure these side-effects?
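Whichever party ends up responsible, the three effects amount to one atomic state update. A hypothetical sketch — all shapes and names are illustrative, not part of any spec:

```javascript
// Illustrative only: apply the three side-effects of a "like" in one step,
// returning a new state object rather than mutating the old one.
function applyLike(state, actor, object) {
  return {
    // 1. A "like" activity lands in the actor's recent activities stream.
    activities: {
      ...state.activities,
      [actor]: [...(state.activities[actor] || []), { type: 'Like', actor, object }]
    },
    // 2. The actor joins the object's list of "likers".
    likers: {
      ...state.likers,
      [object]: [...(state.likers[object] || []), actor]
    },
    // 3. The object joins the actor's list of liked things.
    liked: {
      ...state.liked,
      [actor]: [...(state.liked[actor] || []), object]
    }
  };
}

const next = applyLike({ activities: {}, likers: {}, liked: {} },
                       'https://alice.example/#me', 'https://example.org/post1');
console.log(next.likers['https://example.org/post1'].length); // 1
```

Doing this as one update is what makes the client-vs-server question matter: a client crash between steps leaves the three lists inconsistent, which a server could avoid.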

Post vs. Ping to shared data space + assigning graph names

I would like to discuss different approaches to posting to group / shared data spaces. To my understanding, SoLiD currently recommends a direct HTTP POST to such a shared data space.

post to shared space

IndieWeb, by contrast, always recommends publishing in your personal data space first and then pinging the other space to request transclusion of the content you published independently.

ping to shared space

I find it especially interesting to analyze the graph names a shared data space could assign in those two scenarios:

  1. URI of container in shared data space Alice directly POST-ed to
  2. URI of webmention endpoint which Alice POST-ed to requesting transclusion
  3. URI of resource in Alice's personal data space used in webmention as source param
  4. URI of Alice's identity

Since data can easily be duplicated in multiple graphs if needed, I find it especially attractive to maintain an additional named graph per identity, which could have the same state in the shared data space regardless of whether we use the direct POST or the ping approach.
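The "named graph per identity" idea could be sketched as a naming convention — entirely hypothetical, not part of SoLiD:

```javascript
// Illustrative only: derive one named graph per posting identity, so the
// shared space ends up with the same graph name whether the data arrived
// via a direct POST or via a ping/transclusion request.
function graphForIdentity(sharedSpace, identityUri) {
  // Hypothetical convention: graph name = shared space + encoded identity URI.
  return sharedSpace + 'graphs/' + encodeURIComponent(identityUri);
}

const g1 = graphForIdentity('https://shared.example/', 'https://alice.example/#me');
console.log(g1); // "https://shared.example/graphs/https%3A%2F%2Falice.example%2F%23me"
```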

seeAlso: http://webmention.org (converspace/webmention#40)

data workspaces should not be hardwired

From the current spec it seems that the names of the data spaces are hardwired. This is not good from the point of view of Linked Data, extensibility, or internationalisation.

Instead we need an ontology for the relations from the account to the different resources. The current set of workspaces seems to be determined by Access Control rules, in which case the rights to them are in any case defined in the ACLs of the Containers.

It seems that these ontologies have not been created yet.

ACL Groups infinite loops

Consider this scenario:

  • Server A
    • Groups/WorkingGroup.ttl
    • Groups/WorkingGroup.ttl.acl (it says only B/Groups/Admins.ttl can read!)
  • Server B
    • Groups/Admins.ttl
    • Groups/Admins.ttl.acl (it says only A/Groups/WorkingGroup.ttl can read!)
    • Work/File.ttl
    • Work/File.ttl.acl (it says only A/Groups/WorkingGroup.ttl can read! )

Now, when anyone tries to access a file that WorkingGroup or Admins can read (e.g. B/Work/File.ttl), each server will ask the other to read the group; reading that group requires reading the other group's permissions, which requires reading the first group again, and so on, in an infinite loop.

This currently affects gold; I have commented out groups in ldnode.
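One way out of the loop is for each server to track which groups it is already resolving and deny on a revisit. A sketch, with invented names and a toy membership function:

```javascript
// Sketch of guarding group resolution against the loop described above:
// track group URIs already being resolved and treat a revisit as a denial
// instead of recursing forever. All names here are invented for illustration.
function isMember(membersOf, agent, groupUri, resolving = new Set()) {
  if (resolving.has(groupUri)) return false; // cycle detected: stop, deny
  resolving.add(groupUri);
  for (const m of membersOf(groupUri)) {
    if (m === agent) return true;
    // Nested group: recurse, carrying the set of groups already in flight.
    if (m.startsWith('group:') && isMember(membersOf, agent, m, resolving)) return true;
  }
  return false;
}

// Two groups that reference each other, as in the scenario above; without
// the `resolving` set this would recurse forever.
const membersOf = g =>
  ({ 'group:A': ['group:B'], 'group:B': ['group:A', 'alice'] }[g] || []);
console.log(isMember(membersOf, 'alice', 'group:A')); // true
```

In the cross-server case, `resolving` would have to travel with the request (e.g. as a header), since neither server alone sees the whole cycle.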

WebDAV methods?

Just saw that there are a lot of methods in the SoLiD examples: https://github.com/linkeddata/SoLiD

HTTP/1.1 200 OK
Accept-Patch: application/json, application/sparql-update
Accept-Post: text/turtle, application/ld+json
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: OPTIONS, HEAD, GET, PATCH, POST, PUT, MKCOL, DELETE, COPY, MOVE, LOCK, UNLOCK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: User, Triples, Location, Link, Vary, Last-Modified, Content-Length
Allow: OPTIONS, HEAD, GET, PATCH, POST, PUT, MKCOL, DELETE, COPY, MOVE, LOCK, UNLOCK

Are all the WebDAV ones used?

WebID RSA - some questions

I think this is in general a very good idea, and it will be needed for HTTP/2, since TLS client-certificate authentication cannot be used there.
It does require cryptography in the browser, as I understand it. Which browsers support that? Do we know what the UIs currently look like? How does the user select a public key to authenticate with? It would be good to have a section dedicated to that - a wiki page perhaps. And the particular protocol could be specified on the WebID Specs page.

Second, since a WebID can have more than one key, my guess is that the browser has to send the public key as a link or in a header to the server. Otherwise the server needs to test each WebKey listed in the WebID Profile, and there may be a few.

I think there could be a better name too. RSA is a cryptographic algorithm, whereas what you are defining here is a protocol. Perhaps

  • WebID-Token-Auth
  • WebID-JS-Auth
  • ...

Do we need APIs for account management?

In this issue I bring back the conversation we had in October.
Quick note: this aims at future Solid specs, and it is more of a discussion than a proposal.

We use special APIs for account management, do we really need them?

Non-LDP API (current)

  • Creating an account: POST https://databox.me/,system/newAccount
  • Deleting an account: POST https://databox.me/,system/deleteAccount (I assume)

LDP API (possible)

  • Creating an account: POST https://nicola.databox.me/
  • Deleting an account: DELETE https://nicola.databox.me/

Note: Certificate generation with Keygen should be added - somehow
Note: this example assumes that each new account will be in a subdomain (although Suborigins will eventually make subfolder WebIDs possible)

That said, I do see the value of the non-LDP APIs (they basically solve the issues explained in the two notes above, and they are not really intrusive as of right now; see the 2-factor deletion discussed in #67). Keeping things consistent would be great, though!

Open for feedback!

(conversation started several times with @deiu that already super simplified the process, and with @dmitrizagidulin recently)

Can't connect to websocket using node ws

var WebSocket = require('ws');

process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

var ws = new WebSocket('wss://databox.me/');

ws.on('open', function open() {
  console.log('connected');
  ws.send('ping');
});

ws.on('close', function close() {
  console.log('disconnected');
});

ws.on('message', function message(data, flags) {
  console.log(data);
});

Gives

node test.js 

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: unexpected server response (403)
    at ClientRequest.response (/var/www/ld/test/node_modules/ws/lib/WebSocket.js:674:15)
    at ClientRequest.g (events.js:180:16)
    at ClientRequest.EventEmitter.emit (events.js:95:17)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1688:21)
    at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23)
    at CleartextStream.socketOnData [as ondata] (http.js:1583:20)
    at CleartextStream.read [as _read] (tls.js:511:12)
    at CleartextStream.Readable.read (_stream_readable.js:320:10)
    at EncryptedStream.write [as _write] (tls.js:366:25)
    at doWrite (_stream_writable.js:223:10)

Same issue on localhost. I suspect this could be an issue with the node ws module. Will investigate further.

API to download/export account data

Create an API to enable users to download or export their account data. Useful when moving to/from a different server.

Open questions:

  • Is this just to export the WebID profile, or all of the data on a storage provider? (profile, preferences, all the workspaces)?

To do:

  • get consensus on the account export API
  • determine the format of the exported data (see #72)
  • document the API on solid-spec
  • implement on gold - see linkeddata/gold#27
  • implement on ldnode - create issue
  • add an 'Export Account' link to the profile-editor app

Ability to HTTP Patch the response headers

I may really be overthinking this or overlooking something, but:

If it isn't already possible, I think it would be useful to be able to PATCH only the headers, e.g., Link. GETting the complete response, processing it, and then sending it all back to the server doesn't feel right for a number of reasons - the most obvious one being that it will introduce unintended diffs.

SoLiD ontologies

We need coherence in the ontologies we use so that our apps can work together. At a minimum we need to list the ontologies apps are using, perhaps with descriptions of the problems found in those ontologies. This can help us choose interoperable ontologies. We should also document how different apps use them, so that we can start gathering deployment patterns.

split SoLiD spec into mini specs

The SoLiD spec is a bit long to read in one go. It would be better to split it into smaller documents, each orthogonal to the others. There could then be a table listing which of these specs is implemented by which platforms, which could guide future working groups.

implementation levels

Different parts of the SoLiD stack described here have different levels of implementation. Some are well established and even standardised, like LDP. Some are not standardised yet but have years of experience behind them, such as WebID-TLS and Web Access Control. Some are exploratory but pretty obvious, such as the SEARCH method applied to SPARQL queries. And some are very bleeding edge, such as WebID-RSA, which requires recent browsers, the development of a protocol for finding stored private keys, and a security review (see issue #5).

It would be good if the document made clear, for each part:

  • which are implemented by which servers or client libraries
  • which specs they refer to for their implementation
  • ... ?

One could make a simple table for this so as to help guide people about where consensus has been achieved and where more research or implementation feedback is needed.

Account Creation

The account creation example seems to create a special case where JSON is sent instead of JSON-LD.
It would be good to have something more LDP-compatible.

It would avoid inventing new concepts if account creation were just a special form of LDPC interaction. We did that with rww-play, where creation on the root would create subdomains. This should not be surprising: @timbl has stated that the one error he sees in the Web was the order of domain components, which would have been better as io.rww instead of the actual rww.io. Seen that way, one can understand why POSTing to rww.io would create a subdomain bblfish.rww.io, since with the order reversed io.rww would create io.rww.bblfish. LDPCs do allow resources to be created whose names are not inside the container's namespace; i.e. LDP Containers are not intuitive containers - though intuitive containment may be something nice to specify for LDP Next. Still, in this case it is helpful that LDPCs are not intuitive containers.

globbing needs to be follow-your-nose

Currently the globbing feature is not advertised: a client cannot know that a server supports it. There should either be a Link header of type ldp:globed, or the LDPC should carry that link, following the patterns set by the LDP spec.
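Whichever relation is chosen, follow-your-nose discovery boils down to parsing the Link header. A generic sketch (the ldp:globed relation above is only a proposal; this parser is simplified, not a full RFC 8288 implementation):

```javascript
// Sketch: parse a Link response header into a map of relation -> target,
// so a client can check whether the server advertises a given feature.
function linkRelations(linkHeader) {
  const rels = {};
  for (const part of linkHeader.split(',')) {
    // Matches entries of the form <target>; rel="relation"
    const m = part.match(/<([^>]*)>\s*;\s*rel="([^"]*)"/);
    if (m) rels[m[2]] = m[1];
  }
  return rels;
}

// Header taken from the SoLiD examples elsewhere in this document.
const rels = linkRelations('<https://example.org/data/.acl>; rel="acl", ' +
                           '<https://example.org/data/.meta>; rel="describedby"');
console.log(rels.acl); // "https://example.org/data/.acl"
```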

Question on ACL and defaultForNew

I am writing my document on access control, and I would like to know how to handle this case:

<#AppendOnly>
 <http://www.w3.org/ns/auth/acl#defaultForNew> <./>;
 <http://www.w3.org/ns/auth/acl#agentClass> <http://xmlns.com/foaf/0.1/Agent>;
 <http://www.w3.org/ns/auth/acl#mode> <http://www.w3.org/ns/auth/acl#Read> .

If I try to read ./ on ldnode I currently get no access, but I do when I use:

<#AppendOnly>
 <http://www.w3.org/ns/auth/acl#accessTo> <./>;
 <http://www.w3.org/ns/auth/acl#agentClass> <http://xmlns.com/foaf/0.1/Agent>;
 <http://www.w3.org/ns/auth/acl#mode> <http://www.w3.org/ns/auth/acl#Read> .

Should the first case work as well?
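The behaviour the first example expects could be implemented as a fallback lookup: when a resource has no ACL of its own, consult the parent container's ACL and apply its defaultForNew rules. A sketch with invented data structures:

```javascript
// Illustrative only: resolve the effective ACL for a resource. If the
// resource has its own ACL, use it; otherwise fall back to the containing
// folder's acl:defaultForNew rules. The `acls` map is a stand-in for
// fetching and parsing the real .acl documents.
function effectiveAcl(acls, resourcePath) {
  if (acls[resourcePath]) return acls[resourcePath];
  const parent = resourcePath.replace(/[^/]+\/?$/, ''); // containing folder
  const parentAcl = acls[parent];
  if (parentAcl && parentAcl.defaultForNew) return parentAcl.defaultForNew;
  return null; // no access determinable
}

const acls = { '/shared/': { defaultForNew: { mode: 'Read', agentClass: 'foaf:Agent' } } };
console.log(effectiveAcl(acls, '/shared/notes.ttl').mode); // "Read"
```

Under this reading the first example above should indeed grant read access to resources in ./ that lack their own ACL.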

PUT for create requires extension of LDP

In the section HTTP PUT to create, the following is written:

Another useful feature that is not yet part of LDP deals with using HTTP PUT to create new resources. This feature is really useful when the client wants to make sure it has absolute control over the URI namespace -- e.g. migrating from one pod to another. Although this feature is defined in HTTP/1.1 (RFC 2616), we decided to improve it slightly by having servers create the full path to the resource, if it didn't exist before.

Currently it is not clear how a client would ever know that a server implements this feature without out-of-band information. Because LDP Containers are not intuitive containers, a client has no guarantee that it is even allowed to PUT into a given part of the namespace. Furthermore it has no standard way of knowing that the intermediate directories will be created.

I don't see this listed on the LDP Next Charter, though there is still time to add this feature there if it is felt to be useful enough.
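The "create the full path" behaviour quoted above implies the server must materialise every missing intermediate container. A sketch of computing those containers (helper name invented):

```javascript
// Illustrative only: list the intermediate containers a server would have
// to create before accepting a deep PUT such as PUT /a/b/c.ttl.
function intermediateContainers(base, path) {
  const segments = path.split('/').slice(0, -1).filter(Boolean); // drop the leaf
  const out = [];
  let prefix = base;
  for (const seg of segments) {
    prefix += seg + '/';
    out.push(prefix); // each prefix is a container the server must ensure exists
  }
  return out;
}

console.log(intermediateContainers('https://example.org/', 'a/b/c.ttl'));
// [ 'https://example.org/a/', 'https://example.org/a/b/' ]
```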

NASCAR and Web Access Control

LDP OPTIONS and HEAD can return an Allow header. It is not clear from the specs whether this shows:

a. the methods that the agent accessing the resource, with the credentials it has provided, can use, or
b. the methods that the agent with the most privileges can use - i.e. the methods the resource CAN allow.

• If a server interprets the spec as (a), then a client that does not see a method has to wonder whether it would gain access to that method were it to authenticate.
• If a server interprets the spec as (b), then a client that gets a 401 when trying a method will want to find out whether it is even worth authenticating: will I get access if I try one of my many credentials?

This is where we hit the NASCAR problem. Unless the client reads the WAC file, it won't know what types of credentials are required to act on the resource. So it would have to ask the user to try out all possible ways of authenticating, and none of them may actually be the right one. This would be:

  1. very inconvenient for the user as he'd have to try logging in, in many different ways
  2. a big source of privacy leaks, as the user would have to try many different credentials before being able to authenticate, and so give away more information about himself than needed.

There was an interesting thread on the Credentials CG about the NASCAR problem, for which we actually have a very useful answer with WAC. I gave an answer there showing how we can solve the problem.

PUT image example odd

In the paragraph Creating Document Files the example of a PUT is the following:

PUT / HTTP/1.1
Host: example.org
Content-Type: image/jpeg

Are you PUTting an image on the root? That seems a weird example; furthermore, the root is likely to be an LDPC, so you would be squashing your LDPC. There are a number of other errors, among them that the responses don't contain the Link header.

Side-effects of a "follow" activity

If user A follows user B, there should be four changes to the state of the world:

  1. User "A" should be added to user B's "followers" collection.
  2. User "B" should be added to user A's "following" collection.
  3. A "follow" activity should be added to user A's recent activities stream.
  4. The same "follow" activity should be added to user B's inbox.

Whose responsibility is it to handle these additions in SoLiD?

SoLiD requires LDP

I've been studying the SoLiD spec more lately, and it occurs to me that as SoLiD mandates the use of an LDP stack, it's not suitable as a candidate for the SocialAPI that the WG needs to come out with. I don't believe that the WG's recommendations will get much uptake if they dictate, for example, what database developers implementing social applications must use. On the other hand, the work on SoLiD is really useful for any developers who do want to use LDP-compliant servers. It makes sense to me to see SoLiD as a thin layer between LDP and the SocialAPI (whatever that ends up looking like). Of course anything that is not LDP-specific would have potential to usefully feed in as the SocialAPI spec and SoLiD develop in conjunction.

I could imagine a Micropub client being able to interoperate with an LDP server via the SoLiD layer along the lines of (to post a note):

  • Client discovers the micropub endpoint by looking for <link rel="micropub">, or a Link header.
    • The LDP server might store a relation between a [user|blog|profile|..] and the micropub endpoint. SoLiD allows retrieval of that relation and presents it for discovery by the client.
  • Client POSTs microformats JSON, optionally including a slug.
  • Client expects a 201 and a Location header.
    • SoLiD returns a 201 and a Location header.

And, for an ActivityPump client to interoperate with an LDP server via a SoLiD layer:

  • Client discovers outbox endpoint by looking for "outbox" relation in JSON of a user's profile.
    • LDP has stored a relation between a user and their outbox. SoLiD returns this appropriately via the user's profile.
  • Client POSTs an AS2 Activity JSON with Content-Type application/activity+json.
    • SoLiD converts that into turtle and generates the Slug headers, and posts to the appropriate container.
  • I'm not sure off the top of my head what an AP client expects. Probably 201 and Location header as previously.

Conversely, if a client wants to assume an LDP server, the SoLiD layer would allow it to anyway interoperate with any other servers that comply with the SocialAPI but not LDP.

I hope that makes some degree of sense. I'm still getting my head around SoLiD and LDP, so apologies if my examples are a bit sketchy. My main concern is that it's unrealistic for output of the WG to assume an LDP server or even data stored as any form of RDF. The SoLiD effort would be better served then making sure that (on the other end of the scale) we don't unwittingly create an API that would make it impossible to use the API with an LDP server. And then... everyone is happy?

link relations: acl, describedby and meta

Background

https://github.com/linkeddata/SoLiD#wac

HTTP/1.1 200 OK
....
Link: <https://example.org/data/.acl>; rel="acl"
Link: <https://example.org/data/.meta>; rel="describedby"

https://github.com/linkeddata/SoLiD#pubsub-notifications

HTTP/1.1 200 OK
...
Updates-Via: wss://example.org/

Questions

  1. Do you plan to register the acl and describedby link relations with IANA and/or microformats.org?
  2. Will the acl and describedby link relations also have full URIs, e.g. for use in RDF?
  3. How does rel="describedby" differ from rel="meta"?
  4. Why the choice of an Updates-Via header rather than, for example, link rel="updates-via"?
  5. Why the .meta and .acl naming conventions, if we can find the relevant resources via link relations?

seeAlso

Can't find an ACL

curl -I https://inartes.databox.me/Public/dante/inferno-01
HTTP/1.1 200 OK
Link: <https://inartes.databox.me/Public/dante/inferno-01,acl>; rel="acl", <https://inartes.databox.me/Public/dante/inferno-01,meta>; rel="meta"

Follow your nose:

curl -I https://inartes.databox.me/Public/dante/inferno-01,acl
HTTP/1.1 404 Not Found

search via query does not have a follow your nose process

The new section suggesting the use of a query does not offer a follow-your-nose process of discovery. How is a client meant to know which URL to send a query to? What should the language of that query be? Should it use POST or GET? What are the interactions?

The advantage of SEARCH, or GET + query as described in issue #4 and as proposed in the initial SoLiD spec, was that it made it easy to use the patterns already expressed by LDP to show which features are available.

Standardize on a different term for "skins"

This first started as a discussion on PR 174, where I asked about the --skin command line parameter that's passed to ldnode to specify the default app it's going to use to render a resource.

"Skin" is an overloaded term, that most developers and web users understand to mean in the UI sense, a "theme". What is currently referred to as a "skin" by Solid devs, is the bit of app code (a viewer/renderer/editor etc) that is presenting the resource that's being browsed. So, for example, Warp is a "skin", since it determines how a particular resource (the file or directory being browsed) is displayed.

(@csarven suggested the term "default app", in that initial discussion.)

In the Fri Jan 8th team dev meeting, we discussed this issue again. There seems to be a general consensus that we should rename the term.

The question is - to what.

The candidates mentioned so far are:

  1. "default app"
  2. "fallback app"
  3. "suggested app" or "preferred app"
  4. "viewer" or "editor"
  5. "opener"
  6. "component" (in the Ember.js / Angular sense, which means "a view with some controller code/logic to go with it)
  7. "renderer"

What would be your preferred term?

Implement a privileged "admin" user on Solid servers

Implement a privileged admin/root user on Solid servers.

  1. The admin user id (in the form of a WebID) will be passed (as a command-line argument or config option) to the server on startup.
  2. This privileged admin user will have the ability to edit all other users' ACLs, and is to be used in the classic unix root user sense, for administration, account management and reporting.

This user can also be given access to an administrative-type app (Warp at first, but dedicated apps later on) which would give them an idea of system usage (how many users are on this particular server, what their disk usage is, etc, etc).

(action item from the Fri Jan 8th 2016 dev meeting)

move to schema.org?

In issue #22 I suggest documenting the ontologies we use. We should also consider whether one existing ontology would settle a lot of our decisions for us. schema.org is much more complete than most others and deals with the kinds of problems we would find useful; it is maintained by Dan Brickley and backed by Google, Microsoft and other search engines. Its vocabularies are much more complete than foaf, for example.

  • we have a schema:Person class that resembles foaf:Person a lot but has many more relations. It can:
    • link a person to a person with schema:knows
    • link a person to an organisation via schema:worksFor - this can't be done in foaf other than through foaf:workplaceHomepage, which points to a homepage, not an organisation
    • link a person or a contact point to a telephone number via schema:telephone (as timbl's contact ontology does)
  • It has https://schema.org/Event for events.
  • ...

The schema.org ontology is designed not to allow much inferencing, and we don't use much anyway. It's a bit of a pity for some useful inverse functional properties such as foaf:mbox, which do allow us to make good cases about indirect identifiers. But we can still use that one when needed, as well as foaf:blog, foaf:openid, etc.

In schema.org one needs to always specify the types of the objects, as the relations are not strong enough to allow one to infer the types from them.

I think one can also use them correctly with hash URIs. One just has to note that schema.org only tells people to use the schema:url relation instead of the JSON-LD @id tag, which means that, in effect, people using this who don't understand httpRange-14 are not saying anything false, just something vague. For example, as I pointed out in the Social Web WG, a normal user who only cares about being well placed in search results, and not about writing a distributed social web, will write:

{
  "@context": {
     "schema": "https://schema.org/"
  },
  "schema:url": "http://bblfish.net/",
  "schema:knows":  { "schema:url": "http://www.w3.org/People/Berners-Lee/" }
}

But SoLiD apps could complement that by writing something much more precise and efficient for Linked Data crawlers:

{
  "@context": {
     "schema": "https://schema.org/"
  },
  "@id": "http://bblfish.net/#me",
  "schema:knows":  { 
     "@id": "http://www.w3.org/People/Berners-Lee/card#i" 
   }
}

The first graph represents the following turtle:

[] schema:url <http://bblfish.net/>;
   schema:knows [ schema:url <http://www.w3.org/People/Berners-Lee/> ] .

The second graph represents the following:

<http://bblfish.net/#me> schema:knows <http://www.w3.org/People/Berners-Lee/card#i> .

Both graphs say something true. The first just says that the person whose "url" is http://bblfish.net/ knows the person whose "url" is http://www.w3.org/People/Berners-Lee/, which is a pretty vague relation that may mean something like the location of a document. The second graph is precise, and much more easily usable by a machine (a lot less searching and guessing for it to do). Any client must be able to fall back to the vague form anyway, for example when a fragment link does not resolve for some reason. When that happens, it might require user intervention to fix or interpret the link, which weakens the result and slows down automation.

Implement server-side /logout functionality

During the current WebID-TLS workflow, there's no way for a client to request to Log Out (stop authenticating as a particular WebID user / certificate).

Need to implement a server-side capability to break the WebID-TLS session.

Social WG oriented version of this draft

I have the impression that the current version will receive a big push-back from many Social WG participants. A few suggestions:

  1. JSON-LD only. Those who want to use Turtle can still do that, but for now it can scare people away.
  2. No SPARQL! If SoLiD can't function without it, then it is very unlikely that people in the context of this WG will adopt it; if it can, see comment 1.
  3. No LD Patch. To get started we can work with PUT and see whether JSON Patch can satisfy the WG's use cases. While we seem to agree on not breaking compatibility with RDF as much as we can, I don't think people will agree on an RDF-first approach.

To clarify, I am not criticising the current design; on the contrary, I find it amazing work. But what may look like a cool way of doing things to people used to RDF will very likely scare away many Social WG members who, at least currently, don't look at the Web from this perspective.

semantic pingback/webmention

There is a discussion on semantic pingback/webmention going on here w3c/webmention#3

Pointing to some agreed-on vocabularies would be very useful, but could make the spec far too long. This argues for splitting the spec up, with the front page consisting of an overview and an index to the other parts.

PubSub

Background

HTTP/1.1 200 OK
...
Updates-Via: wss://example.org/
sub https://example.org/data/test
pub https://example.org/data/test

Questions

  1. Any plans to support Server-Sent Events besides WebSockets? I understand that the client just subscribes and doesn't send any data to the server.
  2. Could the client receive "fat pings" that include a payload, for example a patch for the resource, with an optional checksum for integrity checking?

seeAlso
