hyperledger / aries-rfcs

Hyperledger Aries is infrastructure for blockchain-rooted, peer-to-peer interactions

Home Page: https://wiki.hyperledger.org/display/ARIES/Hyperledger+Aries

License: Apache License 2.0

Languages: Python 30.48%, HTML 17.85%, Rust 21.76%, Jupyter Notebook 28.78%, Shell 1.12%
Topics: aries

aries-rfcs's People

Contributors

amanji, andrewwhitehead, ashcherbakov, b1conrad, berendsliedrecht, brentzundel, dbluhm, dgunseli, dhh1128, genaris, jameskebert, jcourt562, johncallahan, jsaur, kdenhartog, mikelodder7, mikelytaev, oskar-van-deventer, patrik-stas, peacekeeper, ryjones, sklump, sukalpomitra, swcurran, talltree, telegramsam, timoglastra, tplooker, troyronda, vinomaster


aries-rfcs's Issues

RFC 0025: rename "didcomms" --> "didcomm"

The rationale for this change is described, and may be debated, in Rocket.Chat: https://chat.hyperledger.org/channel/aries?msg=haDnrASscA5S7HJC9

Here is an inline copy of the beginning of that thread:

[Daniel H]: In English, we have the phenomenon of count nouns and non-count nouns. A count noun is something that you count with integers: cars, shoes, phones, etc. We say, "I own 3 cars." A non-count noun is something that you quantify as a collective or mass: "My car is emitting a lot of smoke." "I bought 3.5 bushels of wheat."

Count nouns carry plural markers in English: I own 3 cars.

Non-count nouns do NOT carry plural markers in English. We would not say, "I saw 3 smokes coming out of my tailpipe when I started the car."

It is possible for a noun to be both a count noun and a non-count noun. "Water" is a great example. Normally, water is a non-count noun--we measure it in real units, as a collective. However, when a waiter takes our order at a restaurant, he may say, "Okay, so that's 3 waters, right?" When this happens, we're implying that we've packaged the non-count, collective noun in a discrete form.

When you turn nouns into adjectives, usually the non-count, non-plural version is more common. We talk about "water-borne illnesses" and "watercraft", not "waters-born illnesses" and "waterscraft".

So what about the word "communication"?

Well, "communication" is normally a non-count noun. We say things like "There's a lot of communication happening" (non-count collective) or "The internet backbone carries 3.9 terabytes of communication per second in the transatlantic cable" (units +real quantifying a collective but not discrete items).

We also use communication as a count noun, in the same way that my water example does. If each discrete communication event is counted, then we can say things like, "27 communications arrived on this radio channel in the past hour."

So, what would "didcomm" imply, and what would "didcomms" imply?

"Didcomm" means communication in bulk, uncounted. It is also the appropriate adjective form. "Didcomms" means individual, countable didcomm events.

Almost always, we want the non-plural, non-count sense.

As I said before, I really don't care how we talk informally--but in formal written artifacts, I suggest that we follow best practice grammar and use the form without the 's'.

RFC 0067: Correct JSON-LD context usage DID document conventions

On the last Aries WG call we discussed the current state of the DID communication DID document conventions. @talltree raised the point that the currently defined convention omits a JSON-LD context notation in the service declaration.

I'm opening this issue to ask how this aspect should be expressed.

As currently defined in the RFC, we have the following:

{
  "service": [{
    "id": "did:example:123456789abcdefghi#did-communication",
    "type": "did-communication",
    "priority" : 0,
    "recipientKeys" : [ "did:example:123456789abcdefghi#1" ],
    "routingKeys" : [ "did:example:123456789abcdefghi#1" ],
    "serviceEndpoint": "https://agent.example.com/"
  }]
}

Note that we have type, which is different from the @type used by JSON-LD; are these intended to be distinct?

If so, would adding JSON-LD support at the service-block level look like the following?

{
  "service": [{
    "@context": "https://schema.org",
    "@type": "DidCommunication",
    "id": "did:example:123456789abcdefghi#did-communication",
    "type": "did-communication",
    "priority" : 0,
    "recipientKeys" : [ "did:example:123456789abcdefghi#1" ],
    "routingKeys" : [ "did:example:123456789abcdefghi#1" ],
    "serviceEndpoint": "https://agent.example.com/"
  }]
}

The alternative is to opt for a syntax similar to the following

{
  "id": "did:example:123456789abcdefghi#did-communication",
  "type": "did-communication",
  "serviceEndpoint": {
    "@context": "https://schema.org",
    "@type": "DidCommunication",
    "recipientKeys": [ "did:example:789#key-1" ],
    "routingKeys": [ "did:example:789#key-1" ],
    "endpoint": "https://agent.endpoint.org"
  }
}

@peacekeeper @talltree your expertise on the DID spec and how it currently defines services would be greatly appreciated :)

RFC 0075 Payment decorator doesn't account for fiat

If the goal of this RFC is to handle various forms of payment, then we are missing a lot of information. This works for peer-to-peer payments like cash, but in the real world that is not how most payments work.

There are actually four parties when it comes to payments: the payer, the payer's issuing bank, the payee's acquiring bank, and the payee. With credit/debit cards, the payer authorizes the payee to charge his card. The payee passes the consent to charge to his acquiring bank, which then passes it to the credit/debit card networks. The networks do some verification, like fraud checking, then pass it on to the issuing bank. The issuing bank checks the consent and releases the funds over the network back to the acquiring bank. The acquiring bank holds the funds until the payee fulfills his part of the deal; for example, the payee sold something to the payer and must deliver the product to collect payment. Reimbursement is different because the payee has already delivered something.

I think this is low priority right now for the spec but wanted to make sure that this portion is noted.
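
To make the four-party model concrete, here is a purely hypothetical sketch of how a payment decorator could carry the extra roles. None of these field names come from RFC 0075; they are illustrative assumptions only.

# Hypothetical extension of a payment decorator to the four-party card model.
# All field names below are invented for illustration, not taken from RFC 0075.
payment_request = {
    "~payment_request": {
        "amount": "10.00",
        "currency": "USD",
        "parties": {
            "payer": "did:example:payer123",
            "payee": "did:example:payee456",
            "issuing_bank": "did:example:issuer789",      # payer's issuing bank
            "acquiring_bank": "did:example:acquirer012",  # payee's acquiring bank
        },
        # Consent-to-charge flows payer -> payee -> acquirer -> network -> issuer.
        "consent_to_charge": "<base64url(signed authorization)>",
    }
}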

RFC 0030: Connection Sync protocol merge algorithms

Have we considered the use of a CRDT as the merge algorithm for the connection sync protocol? CRDTs have a few advantages, one of which is strong eventual consistency: merge conflicts are resolved by the data structure itself.
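
For readers unfamiliar with CRDTs, here is a minimal sketch (not a proposal for RFC 0030's actual data model) of a last-writer-wins map, one of the simplest CRDTs: merging is commutative, associative, and idempotent, so replicas converge regardless of the order in which they exchange updates.

# Minimal last-writer-wins (LWW) map CRDT, for illustration only.
def lww_set(state, key, value, timestamp):
    """Record a write if it is newer than what this replica already has."""
    current = state.get(key)
    if current is None or timestamp > current[0]:
        state[key] = (timestamp, value)
    return state

def lww_merge(a, b):
    """Merge two replicas by keeping the newest write per key."""
    merged = dict(a)
    for key, (ts, value) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

replica1 = lww_set({}, "endpoint", "https://a.example.com", 1)
replica2 = lww_set({}, "endpoint", "https://b.example.com", 2)
# Merge order does not matter; both replicas converge to the same state.
assert lww_merge(replica1, replica2) == lww_merge(replica2, replica1)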

RFC 0028 - Enhance `request` message

Problem
The request message of the Introduce protocol assumes both introducees already know each other - they just haven't formed a "DIDComm" connection between them.

The above assumption leaves out a large swathe of real-life introductions where the two introducees have no previous knowledge of each other.

Furthermore, requests for introductions in these types of scenarios typically include prerequisites:

  • My kitchen sink is clogged and leaking, can you recommend a good plumber?
  • We're thinking of buying a house - can you introduce me to your tax consultant?

Proposal
To address the first issue (no prior knowledge), let's explicitly make name an optional attribute of please_introduce_to.

I'm open to ideas to address the second issue (prerequisites). The key thing is that the requestor needs to convey the prerequisites to the Introducer, narrowing down for the latter the space of potential introducees to provide.

To me it seems like a good approach would be to include the request-presentations~attach object from the Present Proof protocol in a semantically meaningful way, so the Introducer understands that the request for proof is not for them to fulfill - it is for them to relay to the potential candidate introducees on behalf of the original requestor.
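
A purely hypothetical sketch of what such a request could look like; apart from please_introduce_to and request-presentations~attach, the field names and the @type form are assumptions, not text from RFC 0028.

# Hypothetical Introduce "request" where the introducee is unknown by name and the
# prerequisites travel as an attached presentation request. Illustrative only.
introduce_request = {
    "@type": "<introduce/1.0/request message type>",   # exact type URI omitted on purpose
    "@id": "5678876542345",
    "please_introduce_to": {
        # "name" omitted: the requestor does not know the introducee.
        "description": "A licensed plumber you would recommend",
    },
    "request-presentations~attach": [
        {
            "@id": "presreq-1",
            "mime-type": "application/json",
            "data": {"base64": "<base64url(presentation request)>"},
        }
    ],
}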

RFC 0116: Should evidence be limited to physical documents?

The initial motivation for this protocol was to address audit/regulatory/compliance concerns associated with KYC processing that may demand access to original source documents.

Based on conversations with sponsor users, there may be another use of the protocol: specifically, as a conduit for sharing non-physical evidence, such as cryptographic security features that aid in building confidence around levels of identity-proofing assurance.

0114: Identity for Protocol Test Suite

I think the concept of predefined identities provides a potential route around the problem of bootstrapping communication between the protocol test suite and agents under test. If we had a predefined identity for the protocol test suite, tests could be conducted without first establishing a connection through DID Exchange.

Especially looking for comment from @dhh1128 on this one.

RFC0037: missing "Presentation Reject" message

Please describe the "Presentation Reject" message present in the choreography diagram of present-proof - there is no mention of it in the text.

Is this just the problem-report message?

Note: you can view the diagram in PR #171

For non local agents, should HTTPS still be required?

I was thinking about the implications of serving a UI over an HTTP endpoint, and I think it opens up the possibility of a few web-app attacks like XSS, CSRF, and JS injection. Should we require that agents with web-app front ends run over HTTPS to prevent this?

RFC 0056 & RFC 0023 Usage of inline public keys

A recurring issue has been how to represent inline keys in a concise syntax while preserving the required information about a public key's encoding and underlying type.

The did:key method appears to solve these issues by leveraging the following list of multicodecs.

Using this method, we could replace the references we currently have to inline public keys, which at present do not include any information about the key's encoding or type.

An example of this, in the case of the service decorator, would be the following.

{
    "@type": "somemessagetype",
    "~service": {
        "recipientKeys": ["B12NYF8RrR3h41TDCTJojY59usg3mbtbjnFs7Eud1Y6u"],
        "routingKeys": ["B12NYF8RrR3h41TDCTJojY59usg3mbtbjnFs7Eud1Y6u"],
        "serviceEndpoint": "https://example.com/endpoint"
    }
}

Would change to

{
    "@type": "somemessagetype",
    "~service": {
        "recipientKeys": ["did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"],
        "routingKeys": ["did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"],
        "serviceEndpoint": "https://example.com/endpoint"
    }
}

Resolving did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH would then yield the following:

{
  "@context": "https://w3id.org/did/v1",
  "id": "did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH",
  "publicKey": [
    {
      "id": "did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH",
      "publicKeyBase58": "B12NYF8RrR3h41TDCTJojY59usg3mbtbjnFs7Eud1Y6u"
    }
  ]
 //Further information from the did doc omitted
}

This yields the underlying public key (B12NYF8RrR3h41TDCTJojY59usg3mbtbjnFs7Eud1Y6u), its encoding (base58 in this case), and the type of key it is (Ed25519VerificationKey2018).
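
As a rough illustration of the mapping, here is a minimal sketch that recovers the base58 key from a did:key identifier. It assumes the third-party base58 package and the 0xED 0x01 multicodec prefix for Ed25519 public keys.

import base58  # third-party package, assumed available

ED25519_MULTICODEC_PREFIX = b"\xed\x01"  # multicodec varint for ed25519-pub

def did_key_to_base58(did_key: str) -> str:
    """Recover the raw Ed25519 public key (base58) from a did:key identifier."""
    fingerprint = did_key.split(":")[-1]
    assert fingerprint.startswith("z"), "expected base58btc multibase encoding"
    decoded = base58.b58decode(fingerprint[1:])
    assert decoded[:2] == ED25519_MULTICODEC_PREFIX, "not an Ed25519 did:key"
    return base58.b58encode(decoded[2:]).decode()

print(did_key_to_base58("did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"))
# Expected, per the example above: B12NYF8RrR3h41TDCTJojY59usg3mbtbjnFs7Eud1Y6u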

RFC 0021 - Sender and recipient identifiers used in envelope as DID key references

Currently, at the envelope level of DIDComm messages, senders and recipients are referred to by public keys. For example, in the RFC:

Below is the JWE header

{
    "enc": "xchacha20poly1305_ietf",
    "typ": "JWM/1.0",
    "alg": "Authcrypt",
    "recipients": [
        {
            "encrypted_key": "L5XDhH15Pm_vHxSeraY8eOTG6RfcE2NQ3ETeVC-7EiDZyzpRJd8FW0a6qe4JfuAz",
            "header": {
                "kid": "GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL",
                "iv": "a8IminstXHi54_J-Je5IWlOcNgSwD9TB",
                "sender": "ftimwiiYRG7rRQbXgJ13C5aTEQIrsWDI_bsxDqiWbTlVSKPmw6418vz3HmMlelM8AuSiKlaLCmRDI4sDFSgZIcAVXo134V8o8lFoV1BdDI7fdKOZzrKbqCixKJk="
            }
        },
        {
            "encrypted_key": "eAMiD6GDmOtzREhI-TV05_Rhippy8jwOAu5D-2IdVOJgI8-N7QNSulYyCoWiE16Y",
            "header": {
                "kid": "HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum",
                "iv": "D4tNtHd2rs65EG_A4GB-o0-9BgLxDMfH",
                "sender": "sJ7piu4UDuL_o2pXb-J_JApxsaFrxiTmgpZjltWjYFTUir4b8MWmDdtzp0OnTeHLK9mFrhH4GVA1wVtnokUKogCdNWHscarQscQCRPZDKrW6boftwH8_EYGTL0Q="
            }
        }
    ]
}

Both the sender and the kid for the recipients are inline public keys.

The same is present in the non-repudiation envelope RFC

Below is the JWT header

{
    "alg":"EdDSA",
    "kid":"FYmoFw55GeQH7SRFa37dkx1d2dZ3zUF8ckg7wmL7ofN4"
}

Again, the recipient, identified by the kid, is an inline public key.

I think all of these references should be converted to DID key references: at the envelope level of DIDComm, the senders and recipients of messages should be identified via DIDs rather than public keys.

This is the path already taken by did-auth-jose
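
For illustration only (the DID and key fragment below are made-up values, not taken from the RFC), a recipient block keyed by a DID key reference rather than a raw key might look like this:

# Hypothetical recipient block where kid is a DID URL instead of an inline public key.
recipient = {
    "encrypted_key": "<base64url(encrypted CEK)>",
    "header": {
        "kid": "did:example:123456789abcdefghi#keys-1",  # DID URL, resolved to a key
        "iv": "<base64url(nonce)>",
        "sender": "<base64url(encrypted sender key reference)>",
    },
}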

RFC 0029: signature verification and MTC

After writing an implementation of Message Trust Contexts, I get the feeling that it may be helpful to have a context for signatures being successfully verified (SIG_VERIFY_OK or something). I think it plausible, for instance, to have a decorator pre-processor verify and unpack signatures such as connection~sig in the connection protocol, adding the context to the message. However, having more than one signature decorator inside of a message where one verifies and another does not muddies the waters. We would have to be able to specify which field verified and which one did not.

This might be scope creep for the concept of MTCs, but I think we already face the same issue with the nonrepudiable context. If a message has a signature decorator inside of it that verifies, do we apply the nonrepudiable context to the whole message? Are MTCs, perhaps, recursive, meaning that sections of a message can have their own MTC? Or do we need another data structure that adds more context to a specific message trust context?
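
A minimal sketch of what per-section trust contexts could look like, assuming a simple flag-set representation; the flag names are loosely inspired by the trust labels discussed around MTCs but should be treated as illustrative, not as RFC 0029's actual vocabulary.

from enum import Flag, auto

# Illustrative flag-set message trust context with a signature-verified flag.
class MTC(Flag):
    CONFIDENTIALITY = auto()
    INTEGRITY = auto()
    AUTHENTICATED_ORIGIN = auto()
    NONREPUDIATION = auto()
    SIG_VERIFY_OK = auto()

# One way to answer "which field verified?" is to keep an MTC per message section.
message_mtc = {
    "<message as a whole>": MTC.CONFIDENTIALITY | MTC.INTEGRITY,
    "connection~sig": MTC.SIG_VERIFY_OK | MTC.NONREPUDIATION,  # decorator that verified
    "other~sig": MTC(0),                                       # decorator that failed
}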

cc @dhh1128

Should decorators be required?

While it seems every mature implementation I can think of is using all of the decorators described, we don't have an explicit requirement for agents to understand them. What are some arguments for or against requiring that full and/or static agents process decorators?

RFC 0037: pres req should clarify intent of verifier

This is somewhat related to terms of use, but may not be identical. Tagging @mikelodder7, who knows more.

The scenario that triggered this issue was work on OIDC->VC bridging, involving @tplooker and @swcurran and others. The observation was: if an OIDC provider engages in a VC-based proving protocol with the express purpose of relaying info about the prover, from credentials to an OIDC relying party, then the prover ought to know that the information they're supplying is going to flow to the OIDC relying party; they should not assume the verifier they're talking to is going to keep the info to itself.

This is just one example of a broader phenomenon, though.

A presentation request should probably express the intentions of the verifier. How formally it should do so is a question. We could imagine extreme formalism, which is a non-repudiable commitment by the verifier to terms of use. Or we could imagine something very fuzzy, like just a string that says "I need this data just for my own curiosity." Possibly a very formal expression of intent should be a subprotocol, and not bog down this one?

The actual presentation itself could express the intentions (or, in the extremely formal view, the requirements) of the prover.
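
As a deliberately fuzzy, hypothetical example of the middle ground, a presentation request could carry an intent statement roughly like the sketch below; the ~intent decorator name and its fields are invented here and are not part of RFC 0037.

# Hypothetical presentation request carrying a verifier-intent statement.
presentation_request = {
    "@type": "<request-presentation message type>",
    "@id": "abc-123",
    "~intent": {
        "purpose": "Relay your verified email address to an OIDC relying party",
        "data_recipients": ["https://rp.example.com"],
        "retention": "not stored after the OIDC session ends",
    },
    "request_presentations~attach": [
        {"@id": "libindy-req-0", "data": {"base64": "<base64url(proof request)>"}}
    ],
}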

Signing Or Encrypting Part of a Message

The following is a description for encrypting but applies equally well to signing.

1.13. Encrypted Data Resource

Encrypted data resources also require an encryption/decryption key. Best practice for secure encryption/decryption exchange between two parties is to use asymmetric keys exchanged via a Diffie-Hellman key exchange, which adds a type of authentication to the encryption/decryption. This provides additional security against exploits. The NaCl (libsodium) library provides support for such ECC-based Diffie-Hellman authenticated encryption (ECDH) with its crypto_box function. As mentioned previously, NaCl keys use the Curve25519 standard. These are different from Ed25519 (EdDSA) signing keys. Consequently, a separate key identifier for encrypted data is required by any entity that owns encrypted data resources. Curve25519 public keys are 32 binary bytes long, or 64 hex-encoded characters, or 44 Base64-encoded characters (with padding).

The entity data resource's indexed key list needs to include its public asymmetric encryption keys.

Example entity resource with a public encryption key, denoted by the key "kind" field with value "Curve25519":

{
  "did": "did:igo:abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ",
  "signer": "did:igo:abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ#0",
  "changed": "2000-01-01T00:00:00+00:00",
  "keys":
  [
    {
      "key": "abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ",
      "kind": "EdDSA"
    },
    {
      "key": "abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVW123",
      "kind": "EdDSA"
    },
    {
      "key": "abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWabc",
      "kind": "Curve25519"
    }
  ]
}

In the two-party Diffie-Hellman key exchange, the actual keys used for encryption and decryption are never transmitted. Instead, the asymmetric private key of the first party is combined with the asymmetric public key of the second party to generate a "shared" key. Likewise, the second party uses its private key and the first party's public key to generate an equivalent "shared" key. The shared key is not a symmetric key. This approach is used for the exchange of data between two entities within Indigo. Generating the shared key requires knowledge of both the encryptor and decryptor entity.

When an entity encrypts data it becomes the encryptor of the data. In a two-party exchange of encrypted data using ECDH, the recipient of the data is the decryptor. Both these entities need to be identified so that both know how the associated ECDH shared key for encryption/decryption is to be generated from the associated public/private keys.

Example encrypted data resource with signature:

{
  "did": "did:igo:abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQR",
  "signer": "did:igo:abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ#1",
  "changed": "2008-09-15T15:53:00",
  "encryptor":     "did:igo:abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ#2",
  "decryptor": "did:igo:abcdefghijklmnopqrABCDEFGHIJKLMNOPQRSTUVWXYZ#2",
  "crypt": "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
}
\r\n\r\n
"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

The encryptor field provides a reference to the entity and public key (via fragment key index) that encrypted the data.

The decryptor field provides a reference to the designated entity and public key (via fragment key index) meant to decrypt the data.

The crypt field provides the encrypted "crypttext" encoded in Base64 URL-file safe form, including padding (to a multiple of 4 characters).

Suppose for example the encrypted data comes from the following source JSON object:

{
  "name" : "John Smith",
  "city" : "San Jose",
  "zip" : "94088",
  "phone" : "8005551212"
}

To generate the crypt field value, the encryptor first serializes the source data using JSON to generate the plaintext. The encryptor then generates a shared key from the private key associated with the referenced encryptor public key and the decryptor public key. The encryptor then encrypts the JSON-serialized string to generate the crypttext. The encryptor then Base64 URL-file safe encodes the crypttext, padding it to a multiple of 4 characters, to create the value of the crypt field. The encryptor then JSON serializes the full data resource, signs it, and attaches the signature.

For signing instead of encrypting, the plaintext is Base64 encoded (to avoid JSON ser/deser round-trip issues) and then signed.

The decryptor first verifies the full data resource against the attached signature. The decryptor then JSON deserializes the data resource and decodes the Base64 URL-file safe value of the crypt field. The decryptor then decrypts using a shared key generated from the encryptor's public key and the decryptor's private key associated with the referenced decryptor public key. The resulting plaintext is a JSON-serialized object. The decryptor then deserializes it to get the source data.

When an entity wishes to encrypt data for its own consumption, that is, the entity is storing private data, then the entity is both the encryptor and decryptor. There are two ways to accomplish this. One is for the entity to use two key pairs of its own to generate the shared key. This requires no additional support. The other is for the service to support secret-key or symmetric-key encryption.
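
A minimal sketch of the encrypt/decrypt flow described above, assuming the PyNaCl binding to libsodium; the DID references and the signing step are omitted for brevity.

import base64
import json

from nacl.public import PrivateKey, Box  # PyNaCl binding to libsodium

# Encryptor and decryptor each hold a Curve25519 key pair.
encryptor_sk = PrivateKey.generate()
decryptor_sk = PrivateKey.generate()

source = {"name": "John Smith", "city": "San Jose", "zip": "94088", "phone": "8005551212"}
plaintext = json.dumps(source).encode()

# Encryptor: shared key from own private key + decryptor's public key (crypto_box).
crypttext = Box(encryptor_sk, decryptor_sk.public_key).encrypt(plaintext)
crypt_field = base64.urlsafe_b64encode(bytes(crypttext)).decode()  # padded to a multiple of 4

# Decryptor: equivalent shared key from own private key + encryptor's public key.
recovered = Box(decryptor_sk, encryptor_sk.public_key).decrypt(
    base64.urlsafe_b64decode(crypt_field)
)
assert json.loads(recovered) == source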

Tracking implementations to understand breaking change impacts

We've had two recent incidents of breaking changes made in RFCs that affect implementations: DID Exchange (discussed previously) and the new Discover Features (the feature formerly known as protocol-discovery). In these early days, I think we need a way to track when there are implementations, so that when breaking changes are made, the existing implementations are known and can be directly notified.

The implementations and their association with Aries-RFCs should be self-reported and should be tied to a specific version and point in time, so that when a breaking change is proposed, the impact can be understood.

Not sure what is the best way to do this that will handle our current case and scale appropriately:

  • Add an Implementations section to the RFCs that contain pointers to implementations set to the version implemented.
  • Add an Implementations.md document that lists pointers to implementations and the concepts/features implemented by the implementations.
  • Add an Implementations.md document that lists implementations with a pointer to an aries-rfcs-supported.md document within the implementation repo. That document in turn lists the supported RFC versions.

Maybe with the latter the definition can be strict enough that a utility can be created to walk the lists and report the impact of an RFC change?

Anyone have suggestions from what has been done in other communities at this stage of growth?

Term suggestion: Custodial Agent replace Cloud Agent, Non-custodial replace Edge

I'd like to suggest a change in terms to follow suit with common verbiage arising in the digital currency space. It appears to have backing from Coincenter, who are leaders in public-policy advocacy with regard to cryptocurrencies. The distinction emphasizes the key difference between the two: a custodial wallet, while under your control, can be taken control of by another party for a variety of legal-compliance reasons or through malicious activity (impersonation).

DIDComm message URL format

Currently, in the did-exchange-protocol, we define a URL format that connection invitations can be rendered as. This URL can then optionally be transmitted as-is or encoded into a QR code.

As a reminder, the current format defined in this RFC is as follows:

https://<domain>/<path>?c_i=<invitationstring>

Here the invitation string is a base64 URL-safe encoding of a connection invitation message.

Following this definition, the notion of more ephemeral interactions was introduced, whereby protocols such as the presentation protocol could be performed using the ~service decorator to convey message-sending details. This change has created a need to generalize the URL format into something that is divorced from the particular DIDComm message being packed inside it.

@tmarkovski @swcurran @dhh1128 @TelegramSam, I'm tagging you in this issue as I know you have all done quite a bit of thinking on this issue in particular.

The motivation for pursuing a URL format for DIDComm messages is driven mainly by the need for discoverability and the need for messages to be invokable in an identity wallet via deep links.

One of the key questions that has been asked previously is whether to use a custom scheme for DIDComm messages or stick with http/https.

A custom scheme such as didcomm would be similar to how the mailto scheme works for email clients. The price paid for this format is that if a user is sent a DIDComm message in a URL prefixed with the didcomm scheme, invoking the URL on a device that has no identity wallet installed would likely confuse the user, as nothing on the device would recognize the scheme.

Using a scheme such as http/https would, in contrast, mean that if the URL were not recognized by an installed application, the browser would invoke the URL and could redirect the user to developer resources to download a supported mobile wallet.

A notable limitation of registering a URL based on the http/https scheme on iOS is that ownership of the domain must be proven by the app that wants to register the URL for deep-link invocation.

This would prevent a generalized developer resource page from being hosted.

My personal thought on the matter is that we should opt for a custom URL scheme at the price of discoverability. Evidence of this approach can be seen in the following examples:

  • Email clients mailto
  • Slack slack
  • OpenID Connect openid

This resolves the issue around registering apps to invoke the custom scheme. An example of this format could be the following:

didcomm://?m=<did-comm-msg-b64URLsafe>

Another point of conversation related to this topic has been the need to constrain the information in DIDComm messages sent using this format, to ensure the URL is still of a size that can be encoded into a QR code and reliably scanned.

I would like to surface the idea of using a redirect parameter to load the contents of the DIDComm message, similar to what is being suggested in the DIF interop project.

didcomm://?r_uri="https://example.com/message/12345"

The behaviour would mean a supporting application that received this URL would invoke an HTTP GET on the URL given via the r_uri query parameter.

I am not unaware of the unresolved questions around user privacy and the potential attack vectors, but I remain unconvinced that there is a better solution, or that countermeasures could not make this approach sufficiently safe. I also know that this option means the format would not always be offline-capable when a QR code is used as the transport medium.
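
A rough sketch of the encode/decode round trip for the proposed didcomm://?m=... form; the scheme and parameter name are the ones proposed above, everything else (helper names, padding handling) is an assumption.

import base64
import json
from urllib.parse import parse_qs

def message_to_url(message: dict) -> str:
    """Pack a DIDComm message into the proposed didcomm://?m=<b64url> form."""
    b64 = base64.urlsafe_b64encode(json.dumps(message).encode()).decode().rstrip("=")
    return "didcomm://?m=" + b64

def url_to_message(url: str) -> dict:
    """Unpack the message from a didcomm://?m=<b64url> URL."""
    b64 = parse_qs(url.split("?", 1)[1])["m"][0]
    b64 += "=" * (-len(b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(b64))

invitation = {"@type": "example/1.0/invitation", "label": "Alice"}
assert url_to_message(message_to_url(invitation)) == invitation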

DIDComm methods to prevent replay attacks

As far as I'm aware, the only requirement on content-layer messages is that they include an @type field. I believe it may be necessary to also require the @id field, or a timestamp, in order to prevent replay attacks. While I'm not able to think of how it could be abused right now, because the content is encrypted, I think it would be best practice for us to include something of this nature to prevent this from being combined and abused later.

@TelegramSam can you comment on the requirements of the content layer?
@mikelodder7 any thoughts on this based on what you're aware of?
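
For illustration, a minimal sketch of the kind of replay guard an agent could keep if @id were required; the function and constant names are assumptions, not part of any RFC.

import time

# Illustrative replay guard: remember recently seen message @id values and reject
# anything seen before, while bounding memory with a freshness window.
SEEN_IDS: dict[str, float] = {}
FRESHNESS_WINDOW = 300  # seconds

def accept_message(message: dict) -> bool:
    msg_id = message.get("@id")
    if msg_id is None:
        return False  # @id (or a timestamp) would be required under this proposal
    now = time.time()
    for old_id, seen_at in list(SEEN_IDS.items()):
        if now - seen_at > FRESHNESS_WINDOW:
            del SEEN_IDS[old_id]
    if msg_id in SEEN_IDS:
        return False  # replay
    SEEN_IDS[msg_id] = now
    return True

assert accept_message({"@type": "example/1.0/ping", "@id": "abc"}) is True
assert accept_message({"@type": "example/1.0/ping", "@id": "abc"}) is False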

RFC 0050: Wallet Name

During our call we started a discussion about using a different name for the module that handles the keys. It seems the current name, Wallet, introduces a lot of confusion, and it is already used by many projects and people in a wider context.

Current definition[1]:

Wallet
A software module, and optionally an associated hardware module, for securely storing and accessing Private Keys, Link Secrets, other sensitive cryptographic key material, and other Private Data used by an Entity. A Wallet is accessed by an Agent. In Sovrin infrastructure, Wallets implement the emerging DKMS standards for interoperable decentralized cryptographic key management.

This issue is meant to discuss options and elaborate on proper naming.

[1] https://docs.google.com/document/d/1gfIz5TT0cNp2kxGMLFXr19x1uoZsruUe_0glHst2fZ8/edit#heading=h.dj01ko326kf8

Features section folder organization?

I noticed that in the features section we've got a bit of overlap between different classes of features. Some are protocols, whereas some are modifications to generalized messaging (e.g. decorators or message threading). I think this may be confusing to someone looking at this repository as it grows. Would anyone mind (and does anyone have suggestions) if we grouped features together based on where they might fit?

The alternative to this is to allow the RFCs to exist as a flat structure, and then to create a GHPages site to organize logically similar features and concepts.

DIDComm Nomenclature

Opening this issue based on the conversation on the Aries WG call, tagging those that had a direct opinion on the call @swcurran @TelegramSam @dhh1128 @talltree @kdenhartog.

The pitch I made concerns the conflated use of the term protocol.

At present we speak about DIDComm as a messaging protocol that is used to support different protocols such as credential exchange. DIDComm is also then hosted and carried by a myriad of different transport-layer protocols.

So in short we use protocol at three levels.

My thought on this matter is to redefine DIDComm along the following lines:

DIDComm is a messaging standard that sets out the semantics that are valid for DIDComm messages. As a standard, it provides the basis for implementing DIDComm protocols such as credential exchange.

That way we have DIDComm protocols (e.g. credential exchange) that are realised through the DIDComm messaging standard and are sent and received over different transports.

This may seem like a very subtle change in our language but I do feel it is a valuable clarification that will enable better communication with the wider community.

RFC 36: Suggested modifications to support revocation

I suggest two enhancements related to credential revocation:

  1. When an issuer revokes a credential, a message type is needed to notify the holder that the issuer has revoked the credential. This is preferable to having the holder find out later, when trying to use the credential, that it has been revoked. This is a common scenario, such as when a person is fired or quits their job. Or would it be more appropriate to introduce a new message family?

  2. The issue-credential message for indy does not have a slot to pass the revocation registry definition (i.e. the 2nd return value from issuerCreateAndStoreRevocReg) from the issuer to the prover. The prover requires this value when calling proverStoreCredential to store the credential.

Should the Aries Community DID be under did:aries, did:github, or https (rather than did:sov)?

From the discussion on #115

https://github.com/hyperledger/aries-rfcs/pull/115/files#r304002908

If the "magic" DID were a peer DID (or another non-globally resolvable namespace), there could be an RFC that defines both the did:peer:id along with the associated DID document (to allow for resolving service endpoints). Aries implementations could include that magic "DID" and DID document, as defined by the RFC. Implementations could update this DID/DID document as the RFC evolves.

RFC 0116: RFC Discussion/Questions

In RFC #116 there's a request_type attribute included in the evidence_request message. Then, when the sender is processing the request_type object in the evidence_response format, they need to refer back to the previous message they sent in order to process the message. Could we include the request_type attribute in some way for easier processing of the response?

[Question] 0005-didcomm: How is trust with Issuer/Verifier established?

Wanted to understand the mechanism/protocol for establishing a connection with "trusted" verifiers and issuers.

Use case/scenario: Alice is looking to open up a bank account with Big Blue Credit Union (BBCU) and does not have a previous relationship with BBCU. Upon navigating to BBCU, she thinks she is on the BBCU website and is making a connection with BBCU (she does not know that the BBCU DID is tied to BBCU unless there is a DID-entity-resolution registry). How does Alice know that she is interacting with a trusted BBCU and not an untrusted "Big Red Credit Union"?

Similar to how we interact with websites, there is a secure-connection lock in a browser that indicates whether the website is trusted prior to connecting to it. I am suggesting a similar experience needs to exist when end users are connecting with varying issuing/verifying entities with which they do not have a previous relationship.

The problem we are addressing is ensuring that end users are not providing information to entities purporting to be something else, and instead indicating to end users that this is a trusted entity. "Trusted entity" can be defined by an agency determining whether an entity is on a ledger that is trusted, or by a governing group that attests to an acceptable governance framework being adhered to (contractual, business, and legal).

Question: is there a protocol in DIDComm that can allow for this type of "trusted" connection response check, or do we need to create a separate protocol standard that allows for a standardized approach to check and render a response indicating "trusted" or "untrusted", based on community governance and/or an agency implementation that determines whether the entity is on a "trusted ledger", with the corresponding checks that the ledger governance framework is adhered to? I suspect some agencies will be more strict than others, given their industry and regulatory requirements.

CC: @vinomaster

RFC 0019: Updating to JWE compliant data model

Recently there's been discussion to update this work to be fully JWE compliant. One of the proposed solutions that's come up is to generate the following format:

{
    "protected": base64url({
        "typ": "prs.hyperledger.aries-auth-message",
        "alg": "ECDH+XC20PKW",
        "enc":"XC20P"
    }),
    "recipients": [
        {
            "encrypted_key": "base64url(encrypted CEK)",
            "header": {
                "kid": "base58(Recipient vk)",
                "iv": "base64url(CEK encryption IV)",
                "pk": "compactJWE(Sender PK)"
            }
        },
        {
            "encrypted_key": "base64url(encrypted CEK)",
            "header": {
                "kid": "base58(Recipient vk)",
                "iv": "base64url(CEK encryption IV)",
                "pk": "compactJWE(Sender PK)"
            }
        }
    ],
    "aad": "base64url(sha256(concat('.',sort([recipients[0].kid, recipients[n].kid]))))",
    "iv": "base64url(content encryption IV)",
    "ciphertext": "base64url(ciphertext)",
    "tag": "base64url(AEAD Authentication Tag)"
}

Exploded example of compactJWE(Sender PK):
{
    "protected": base64url({
        "iv": "base64url(CEK encryption IV)",
        "epk": "Ephemeral JWK",
        "typ": "jose",
        "cty": "jwk+json",
        "alg": "ECDH-ES+XC20PKW",
        "enc":"XC20P"
    }),
    "encrypted_key": "base64url(encrypted CEK)",
    "iv": "base64url(content encryption IV)",
    "ciphertext": "base64url(Encrypted Sender vk as JWK)",
    "tag": "base64url(AEAD Authentication Tag)"
}

Here's a provided example of what this would look like:

{
    "protected": "eyJ0eXAiOiJwcnMuaHlwZXJsZWRnZXIuYXJpZXMtYXV0aC1tZXNzYWdlIiwiYWxnIjoiRUNESCtYQzIwUEtXIiwiZW5jIjoiWEMyMFAifQ",
    "recipients": [
        {
            "encrypted_key": "whpkJkvHRP0XX-EqxUOHhHIfuW8i5EMuR3Kxlg5NNIU",
            "header": {
                "kid": "5jMonJACEPcLfqVaz8jpqBLXHHKYgCE71XYBmFXhjZVX",
                "iv": "tjGLK6uChZatAyACFzGmFR4V9othKN8S",
                "tag": "ma9pIjkQuzaqvq_5y5vUlQ",
                "pk": "eyJpdiI6IldoVGptNi1DX2xiTlQ4Q2RzN2dfNjdMZzZKSEF3NU5BIiwiZXBrIjp7Imt0eSI6Ik9LUCIsImNydiI6IlgyNTUxOSIsIngiOiJnNjRfblJSSFQyYk1JX1hjT1dHRTdJOGdQcU1VWTF4aUNub2J0LVhDUkNZIn0sInR5cCI6Impvc2UiLCJjdHkiOiJqd2sranNvbiIsImFsZyI6IkVDREgtRVMrWEMyMFBLVyIsImVuYyI6IlhDMjBQIn0.4zUt5tOOlcQWskJqxfMi0tNsfUCAzb5_PDfPqQ1h0Vw.xYkeEXV1_cSYFEd6UBMIfl8MWQfHaDex.XSNKTRXye5-iSXQ-aS_vQVZNEgFE6iA9X_KgSRMzihQBMoI1j4WM3o-9dMT9TeSyMvdq3gXt1NpvLdZHpJplahhk3mxMZL-vawm5Prtf.H7a5N-dggwdesjHyJCl06w"
            }
        },
        {
            "encrypted_key": "dDHydlp_wlGt_zwR-yUvESx9fXuO-GRJFGtaw2u6CEw",
            "header": {
                "kid": "TfVVqzPT1FQHdq1CUDe9XYcg6Wu2QMusWKhGBXEZsosg",
                "iv": "7SFlGTxQ4Q2l02D9HRNdFeYQnwntyctb",
                "tag": "9-O6djpNAizix-ZnjAx-Fg",
                "pk": "eyJpdiI6IkV6UjBFaVRLazJCT19oc05qOVRxeU9PVmVLRFFPYVp1IiwiZXBrIjp7Imt0eSI6Ik9LUCIsImNydiI6IlgyNTUxOSIsIngiOiJoU1g1NGt5ZTdsd0pBdjlMaUplTmh4eFhaV1N0M3hLSDBXUmh6T1NOb1c0In0sInR5cCI6Impvc2UiLCJjdHkiOiJqd2sranNvbiIsImFsZyI6IkVDREgtRVMrWEMyMFBLVyIsImVuYyI6IlhDMjBQIn0.qKmU5xO8Z1ZtRBWEjEMixb5VZG0mrY0LnjUGjLqktwg.EG-VOZSC2vLdoO5l2_A37IYvdXCckLZp.D4kgD6bYL1YfXyApk5ESKE2sc8TUiO-QGBtY-M5hcV_F88JPZdsi53Qofxk02ZxPHJZK-abDy45pIMH6-KUMDfE6WKhW3nPQhydPYutv.0SO4VjM8sDH-wGHcEpinTg"
            }
        }
    ],
    "aad": "OGY5ZDIxMDE3YTQ4MTc4YWE5MTk0MWQyOGJmYjQ1ZmZmMTYzYTE3ZjUxYjc4YjA3YTlmY2FlMmMwOTFlMjBhZg",
    "ciphertext": "x1lnQq_pZLgU2ZC4",
    "tag": "2JgOe9SRjJXddT9TyIjqrg",
    "iv": "fDGEXswlWXOBx6FxPC_u6qIuhADnOrW1"
}
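
As a sanity check on the format, the aad field is defined above as base64url(sha256(concat('.', sort([recipient kids])))). A minimal sketch of that computation, using the kid values from the example, follows; note the snippet above does not pin down whether the raw digest or its hex form is what gets base64url-encoded, so treat this as illustrative.

import base64
import hashlib

# Compute aad per the stated formula: base64url(sha256('.'-joined, sorted recipient kids)).
kids = [
    "5jMonJACEPcLfqVaz8jpqBLXHHKYgCE71XYBmFXhjZVX",
    "TfVVqzPT1FQHdq1CUDe9XYcg6Wu2QMusWKhGBXEZsosg",
]
digest = hashlib.sha256(".".join(sorted(kids)).encode()).digest()
aad = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
print(aad)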

I'm curious what others' opinions are on this approach. I personally am not a fan of the bloat the compact JWE adds in order to encrypt the sender's public key. However, I've not found another approach yet that seems satisfactory. If anyone has suggestions, they would be appreciated.

RFC 0019: Are the additional layers of encryption relevant for security or (only) privacy?

Dear Hyperledger-Aries and broader SSI Community,

I hope this is an appropriate place to reach out to you.

I am a PhD student interested in formal verification of security protocols, in particular relating to identity. I have been following the SSI work for a while. I believe that it is very promising, and I have become interested in contributing. I have started thinking if and how formal verification could help achieve desired security properties for agent-to-agent communication protocols (built on DIDComm).

As a first step, I am trying to better understand where cryptography is applied to achieve which security goals, which brings me to my question:

As described in RFC 0019, it looks like an encryption layer is added in each routing step. For example, when, in 1 --> 2 --> 8 --> 9 --> 3 --> 4, 2 encrypts the message (anoncrypt) with 8's public key. It seems to me that for security, the innermost encryption layer (with 4's public key) would be sufficient. Are the additional encryption layers done for privacy purposes only (such that e.g. 8 only knows it should forward to 9 but does not know that the final recipient will be 4)? I see why this can be important; I just wonder whether it is also important for confidentiality and integrity of the message itself.

Thank you,
Sven

P.S: In general, what would you recommend I do to get involved in the community? I consider going to RWOT in September, are there other important events or similar coming up?

Request: selection criteria for identifying parties

(this is a follow up to this comment)

Currently, all exchanges of credentials and proofs are depicted as a simple interaction between two Agents where the RP/Verifier asks the Issuer/Holder for data. We need to support scenarios where the Recipient is unable to fulfill those requests directly, but can redirect the RP/Verifier towards a third Agent that can.

I think a generic solution could be devised such that it can be reused across several protocols (introduce, issue-credential, present-proof).

Some cases where such a feature would be useful:

  • An "CoolEventsHappeningNearYou" app that requests access to the User's calendar
  • A security service that requests access to the User's home sensors
  • Alice requests Bob for an introduction to any plumber he might know
  • Alice has her blood glucose levels checked regularly and provides her physician access to her lab results

RFC 0067: Service Conventions - inline keys

When a DID resolver resolves a DID URL with a service query, it provides the block for the whole service, with key references.

Does it make sense for the DID resolver to compile all the keys inline, so that the requester does not have to resolve each key separately?

Keep in mind that the DID resolver does not necessarily need to run remotely; it could run locally, resolving the DID document according to the DID Resolution spec.

RFC 0094: Why routing agents, why onion routing

@swcurran @TelegramSam

Aries RFC 0094 introduces routing agents and routing envelopes. I am having serious second thoughts about this, also in relation to Aries RFC 0025 about DIDComm Transports.

Here is my reasoning.

  • Use HTTP(S) as transport, if you just need a simple no-frills service endpoint.
  • Use WebSockets as transport, if you need an efficient way to transmit multiple messages without the overhead of individual requests.
  • Use XMPP as transport, if you need an effective transport for incoming DID-Communication messages directly to mobile agents, like smartphones.
  • Use SMTP as transport, if you need a post-office type of service endpoint.
  • Use TOR as transport, if you need the privacy of onion routing.

Adding more transports is easy: just add them to Aries RFC 0025 with a reference to how messages are encapsulated, cf. Aries RFC 0024 on DIDComm over XMPP.

I argue that the DIDComm layer should be kept cleanly separated from the routing layer, which is best internet practice. We are not routing experts. It makes no sense to me that we are trying to reinvent the wheel and introduce routing at the DIDComm layer, especially onion routing. If a party needs high privacy for incoming DIDComm messages, then advertise a TOR endpoint (e.g. 32rfckwuorlf4dlv.onion). If a party needs high privacy for outgoing messages, then route the messages via the TOR network and a public exit node.

As for the rationale of RFC 0094 ("what exactly does Alice have to know about Bob's domain to send Bob a message?"), my preferred answer is "just Bob's service endpoint(s) and associated encryption key(s)". It is a best telco/internet practice not to expose the inner workings of one party to another.

Note that I am aware of the "accepted" status of RFC 0094. I still hope I am allowed to question it.

Oskar

RFC 0021 - Encoding byte for content type of envelope and content messages

Currently, as RFC 0021 states, we depend on a JOSE-based format at the envelope level and a JSON-based structure for content-level messages.

I would like to informally surface an idea, to gauge community support: we should have an encoding byte for both of these message levels that defines the expected content type of the message.

The behavior would involve the first byte of a message (at both the envelope and content levels) defining the encoding. For example, 0x01 = JOSE format for an envelope-level message.

Rationale

For DID communication to be a future-proof protocol, I believe it should not have a dependency on a particular serialization format; instead, it should merely define a data model and a set of supported serialization formats. Delaying support for this will make it harder to add in the future, therefore I believe it is important to have this discussion in the near term.

Considerations

With both message levels, adding this encoding type opens a new avenue for negotiation, which will increase complexity for parties using DID communication.

RFC 0036 and 0037: remove mime-type and encoding from previews?

I'm concerned about the use of mime-type and encoding in previews. The vast majority of properties in credentials should be scalar values (strings or integers or floats or dates). There are no mime types for such things, AFAIK (text/plain is given as a default value; do any other values make sense?). So having a property named mime-type doesn't make sense to me. I'm not sure that encoding makes any sense, either. We don't give any meaningful guidance about values for the property, other than to say that its default is null. If we mean encoding in the way that Indy means it, do we need to say that?

@sklump @swcurran @kdenhartog

RFC 0011 - Support for versioning decorators

This is already marked as a pending question in RFC-0011 but I am creating an issue to draw closer attention to it and promote discussion.

Since the conception of decorators, many have been defined; it is now timely to discuss strategies for how they could be versioned to allow their continual evolution.

Draft Syntax

A simple syntax could be to define a separator character after the decorator's name that separates the name from a semver-based version:

{
  "~decorator_name/1.0" : {
    "decorator_field" : "value"
  }
}
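
A quick sketch of how an agent might split such a key into name and version under this draft syntax; the "/" separator is the one proposed above, while the parsing logic and default version are assumptions.

# Parse a versioned decorator key of the form "~name/major.minor" (draft syntax above).
def parse_decorator_key(key: str) -> tuple[str, tuple[int, int]]:
    name, _, version = key.partition("/")
    if not version:
        return name, (1, 0)  # assumed default when no version is given
    major, minor = version.split(".")
    return name, (int(major), int(minor))

assert parse_decorator_key("~decorator_name/1.0") == ("~decorator_name", (1, 0))
assert parse_decorator_key("~thread") == ("~thread", (1, 0))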

Considerations

At present we have a discovery protocol that supports discovering the supported message families of another party; would this protocol's scope be expanded to also support the disclosure of supported decorators?

DID Resolution work

As we all know, DID Resolution is a component we have to grapple with, and there are a lot of differing opinions on this subject. I'm earmarking this issue for discussing this work so that we can communicate about the topic in between calls related to it. @tplooker I know you had some work on this and @swcurran had some ideas as well. Would you mind adding your proposal here to kick off the discussion? Pinging @esplinr @jovfer @dhh1128 @ashcherbakov and @troyronda as well because I believe they would be interested in the discussion.

RFC 0066: Non-repudiable signature support

Right now, JWS requires support for SHA-256-based signatures, with many others optional. Would we want to support only EdDSA (Ed25519 signatures) and add additional signature algorithms in the future when we add additional key support?

This will affect how I write the RFC as well as what I take into consideration with the implementation.

@TelegramSam @mikelodder7 @tplooker @swcurran @dhh1128 pinging here since I think it's a better place to have the discussion.

enforce hyperlink hygiene with unit test

We have lots of broken hyperlinks, and suboptimal hyperlinks, in our RFCs. It would be nice to prevent such problems in an automated fashion. Here are things we could enforce (a rough checker sketch follows the list):

  1. Hyperlinks (to markdown, to the web) aren't broken.
  2. Hyperlinks are in relative form (allowing for the RFCs repo to be embedded somewhere as when we use ReadTheDocs like we did in indy-hipe)
  3. Hyperlinks that point to indy should be redirected to aries where there is an equivalent.
  4. Hyperlinks should point to README.md rather than just a folder (so browsing takes people directly to a doc instead of to a folder with dozens of docs)
  5. Hyperlinks that include a fragment are valid.
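
A minimal sketch of a unit-test-style checker for points 1 and 2 above (relative markdown links resolve to an existing file); web links, indy-to-aries redirects, and fragment validation would need more work.

import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)(#[^)]*)?\)")  # [text](target#fragment)

def broken_relative_links(repo_root: str) -> list[str]:
    """Return 'file: target' entries for relative links that do not resolve."""
    problems = []
    for md in Path(repo_root).rglob("*.md"):
        for match in LINK_RE.finditer(md.read_text(encoding="utf-8")):
            target = match.group(1).strip()
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # web links need a separate (network) check
            if not (md.parent / target).exists():
                problems.append(f"{md}: {target}")
    return problems

if __name__ == "__main__":
    for problem in broken_relative_links("."):
        print(problem)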

RFC 0020: DID Reference Changes

The DID spec is evolving, and the DID dereference example used in RFC 0020 isn't going to be accurate. Current example:
did:sov:123456789abcdefghi1234;spec/exampleprotocol/1.0/exampletype
Likely new syntax:
did:sov:123456789abcdefghi1234;service=spec/exampleprotocol/1.0/exampletype

When should we make this change? What considerations are needed for the transition?

0067-didcomm-diddoc-conventions DID inline representation using DID Query

Using DID Query for Key Material

Introduction

During the Aries WG call on 2019/07/10, a suggestion was made to use DID query parameters as a mechanism for providing key material, namely the crypto-suite type used to create the signature, for inline or ephemeral DID expressions used in DIDComm protocol fields. This is useful for peer DIDs as well as other applications.

One use case is in the service block as follows:

{
  "service": [{
    "id": "did:example:123456789abcdefghi#did-communication",
    "type": "did-communication",
    "priority" : 0,
    "recipientKeys" : [ "did:example:123456789abcdefghi#1" ],
    "routingKeys" : [ "did:example:123456789abcdefghi#1" ],
    "serviceEndpoint": "https://agent.example.com/"
  }]
}

In the block above, the "id" field and the recipientKeys and routingKeys array values could all benefit, in some cases, from an inline representation of the key material (crypto-suite type) to allow participants to verify signatures made with the private keys underlying the associated DIDs without having to look up the associated DID documents. The inline representation could also be treated as a preload for a key-material cache that expires according to another DID query parameter. This provides a compact way of managing the expiration of stale ephemeral DIDs.

Potential Resolution Issues

The relevant documents on DID syntax are the DID spec itself
(W3C DID Spec: https://w3c-ccg.github.io/did-spec/)
and the draft DID resolution specification
(W3C DID Resolution Spec: https://w3c-ccg.github.io/did-resolution/).

By way of background, the original design of the DID syntax and semantics included the idea that a DID could include many of the features of a URL (URI, URN). These features included path, query, and fragment components. Initially only the fragment had a specific use case, where a special type of fragment component was identified, called a DID Fragment (not to be confused with a generic URL fragment). The query and path were overlooked. Later discussion and pull requests repaired that by including the query and path components as formal parts of the DID ABNF. However, over time the spec has evolved, and now the only place that query and path show up in the ABNF is the new DID URL representation. It appears this was done to simplify the DID service endpoint resolution algorithm, but some important semantics may have been lost in the process. The only use case described by the specification for a DID that includes path, query, or fragment components is a DID URL that includes a matrix parameter "service". The DID path and query on the DID URL would then be appended to the service endpoint URL. This is both good and bad: good because the spec does not prevent us from adding additional semantics to the query, and bad because the spec may not provide sufficient guidance to implementers. Importantly, this may be problematic WRT the current DID Resolution algorithm spec. My reading of the algorithm is that it is either ambiguous or incomplete and leaves some edge cases undefined. It may be simply that I am misreading the algorithm.

The relevant clauses from the DID spec follow:

did                = "did:" method-name ":" method-specific-id
method-name        = 1*method-char
method-char        = %x61-7A / DIGIT
method-specific-id = *idchar *( ":" *idchar )
idchar             = ALPHA / DIGIT / "." / "-" / "_"
did-url            = did *( ";" param ) path-abempty [ "?" query ]
                     [ "#" fragment ]
param              = param-name [ "=" param-value ]
param-name         = 1*param-char
param-value        = *param-char
param-char         = ALPHA / DIGIT / "." / "-" / "_" / ":" /
                     pct-encoded

"
4.5 Path
A generic DID path is identical to a URI path and MUST conform to the path-abempty ABNF rule in [RFC3986]. A DID path SHOULD be used to address resources available via a DID service endpoint. See Section § 5.6 Service Endpoints .
A specific DID scheme MAY specify ABNF rules for DID paths that are more restrictive than the generic rules in this section.

4.6 Query
A generic DID query is identical to a URI query and MUST conform to the query ABNF rule in [RFC3986]. A DID query SHOULD be used to address resources available via a DID service endpoint. See Section § 5.6 Service Endpoints .
A specific DID scheme MAY specify ABNF rules for DID queries that are more restrictive than the generic rules in this section.

4.7 Fragment
A generic DID fragment is identical to a URI fragment and MUST conform to the fragment ABNF rule in [RFC3986]. A DID fragment MUST be used only as a method-independent reference into the DID Document to identify a component of a DID Document (e.g. a unique key description). To resolve this reference, the complete DID URL including the DID fragment MUST be used as the value of the key for the target component in the DID Document object.
A specific DID scheme MAY specify ABNF rules for DID fragments that are more restrictive than the generic rules in this section.
It is desirable that we enable tree-based processing of DIDs that include DID fragments (which resolve directly within the DID document) to locate metadata contained directly in the DID document or the service resource given by the target URL without needing to rely on graph-based processing.
Implementations SHOULD NOT prevent the use of JSON pointers ([RFC6901]).
"
Note that for the DID path and query, only SHOULD is applied for usage in a DID URL with a service endpoint; other usages are allowed. The DID fragment uses MUST, but the use case is not sufficiently well defined, and other uses should be allowed outside the defined use case.

To be more specific, in the DID Resolution spec the DID URL Resolution algorithm has a switch with 3 cases. These cases are summarized below (intermediate processing steps elided) with their input conditions and results. I assume that the rules are processed in order. Because the resultant action specifies a return, I am assuming that rule processing is aborted once a valid antecedent is reached. In other words, IF the antecedent is true THEN the consequent is evaluated and no further rules are processed; that is, IF the antecedent is false THEN skip to the next rule. If instead the rules are cumulative, in that "return" does not mean return but means "include in the result", and all the antecedents are always checked, then the rules may be valid.

A) IF the input DID URL is equal to the input DID itself: THEN Return the resolved DID Document.

Not sure how to interpret this. If the DID URL is just a valid expression for a DID, then this rule will always be true, even with a query, path, or fragment component, which would mean the next two rules are not evaluated. If a DID URL with a path, query, or fragment is not equal to the DID itself, then the resolver will discard the DID unless it satisfies B or C.

B) IF the input DID URL contains the matrix parameter service and optionally a DID Path, DID Query, and/or DID Fragment: THEN Return the output service endpoint URL.

C) IF the input DID URL contains a DID Fragment: THEN Return the output resource. (JSON-LD object whose id property matches the input DID URL)

This seems to be too greedy, as it does not allow for JSON Pointers or resolution into the DID document. Not all fragments correspond to an id property. This seems to be problematic no matter the definition.

Semantics

In a conventional URL the path component may be empty. The path component consists of a sequence of path segments separated by a slash (/). In a conventional URL (not a DID URL) a path is ALWAYS defined for a URI, though the defined path may be empty (zero length). When the path is empty there is some default resource that is ALWAYS provided. This is not the case for a DID URL; that is, when a path component is missing there is no defined default resource unless the DID URL includes a service matrix parameter. Then the default resource is provided by the service endpoint.

I suggest that there is a valid default resource that could be defined for a DID URL in all other cases except when the DID URL includes a service matrix parameter. That default would be the DID document itself. This is consistent with the precedent set by the DID Fragment specification, where the fragment resolves to an object within the DID document. To be consistent with conventional URL syntax, a default path component in a DID URL would also ALWAYS be defined.
The semantic is as follows:
WHEN the DID includes a service matrix parameter THEN the default path is given by the service endpoint resolution algorithm;
OTHERWISE the default path resolves to the DID itself and its metadata from the DID document.
The resolution of a non-empty path (i.e. no default) would be method-dependent.

Likewise, the semantics of the DID query could also be defined:
WHEN the DID includes a service matrix parameter THEN the DID query is applied to the service endpoint;
OTHERWISE the DID query is applied to the resource specified by the path. When the path is empty, the default path applies, which would be the DID itself and its metadata from the DID document.

In the conventional URL usage, a query string modifies the resource specified by the URL. Originally the main use case was to provide the data values for the fields belonging to a form resource. This usage is evocative of the proposed use herein of providing the field values for a key material authentication block associated with the DID as a resource.

What remains then is to specify what the query parameters should be.

I suggest mimicking the field names from the Authentication block.

{
  "id": "did:example:123456789abcdefghi#keys-2",
  "type": "Ed25519VerificationKey2018",
  "controller": "did:example:123456789abcdefghi",
  "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
}

Maybe put "auth" in front, such as:

did:example:12345678abcees?auth_type=Ed25519VerificationKey2018

Another query parameter could be an expiration date, which would force a DID document resolution to get the most recent authentication block:

did:example:12345678abcees?auth_type=Ed25519VerificationKey2018&auth_expires=20190712
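
A small sketch of how an agent could read such parameters off a DID URL; the auth_type/auth_expires names are the ones proposed above, while the parsing approach itself is an assumption.

from urllib.parse import parse_qs

def did_query_params(did_url: str) -> dict[str, str]:
    """Extract query parameters such as auth_type / auth_expires from a DID URL."""
    if "?" not in did_url:
        return {}
    query = did_url.split("?", 1)[1].split("#", 1)[0]  # drop any fragment
    return {k: v[0] for k, v in parse_qs(query).items()}

params = did_query_params(
    "did:example:12345678abcees?auth_type=Ed25519VerificationKey2018&auth_expires=20190712"
)
assert params["auth_type"] == "Ed25519VerificationKey2018"
assert params["auth_expires"] == "20190712"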

TODO:

Clarify DID URL Resolution algorithm

Define Fields

Update associated specifications

Another use case for the DID query is Hierarchically Deterministic Keychains.
The query string can include the derivation path for, say, a BIP-44-derived DID. This is described in this paper:

https://github.com/WebOfTrustInfo/rwot6-santabarbara/blob/master/final-documents/DecentralizedAutonomicData.pdf

These are now two cogent use cases for the DID query serving an important purpose other than modifying a service endpoint.

RFC 0036 and 0037: need to explain why payloads aren't shown

Per a discussion on the Aug 7 2019 community call...

The RFCs show messages, but omit detail about specific payloads (the actual content of credentials and presentations). The omission is deliberate, to allow credential formats to evolve independently of the protocol that exchanges them. However, the RFCs don't explain this. We need verbiage in the RFCs to clarify.

We probably also need links in the RFCs to help people find the format of the payloads, even if the RFCs themselves don't show the data.
