
keri's Introduction

The KERI Working Group is no longer active under DIF. Ongoing KERI work has moved to the Web of Trust GitHub repository.

KERI (Archived at DIF)

Key Event Receipt Infrastructure - the spec and implementation of the KERI protocol

KERI Logo

Starting Points

  • For an overview and introduction, see the explanatory slideshow here or the public-facing website.
  • A partial draft of the spec (still pending substantial editorial reduction) is available in the KIDs directory.
  • The most recent version of the whitepaper is here. It contains an overview of the functional requirements and design of the KERI protocol, but should not be taken as authoritative as a specification and predates the experimental implementation.
  • Definitions, Questions and Answers categorized by topic here.
  • Contributor guidelines can be found here.
  • As the first experimental prototype is developed in Python, updates to the white paper's concepts and new implementation guidance alike are being moved into "KERI implementation documents" (KIDs), which are the building blocks of the spec being refined in dialogue with the first implementations done through the group. ALL of this work (implementation and specs alike) is in PROPOSED stage at best, and in some cases even in EXPERIMENTAL mode (some code commits not even discussed).
  • There are separate repositories for Python, Rust, JavaScript, Go, and Java.

Ways to contribute

  • Feel free to open an issue here if you have use-case ideas, or if you have suggestions on how to make the current KIDs more concise, crisper, or easier to understand from an implementer's point of view.

keri's People

Contributors

bumblefudge, chunningham, csuwildcat, hackmd-deploy, henkvancann, m00sey, mitfik, nembal, or13, paulgrehan, peacekeeper, pfeairheller, smithsamuelm, stevetodd


keri's Issues

KERI Q&A

@henkvancann

Many of the questions in the Q&A revolve around statements like "why is KERI reinventing the wheel?", "why is KERI not using a blockchain?", or "how can KERI solve some problem that a blockchain solves?"

I think a preface to these questions along the following lines might help.

KERI attempts to solve the "secure attribution" problem. One way of stating this problem is that we want to be able to securely attribute any statement to its source. Fundamentally, non-repudiable digital signatures do that, but with one caveat: a digital signature is only a mechanism of secure attribution to the entity that controls, or is in possession of, the associated private key. If more than one entity holds that private key, then we may no longer securely attribute any statement signed with that key to a given controlling entity. What KERI recognizes is that we can't guarantee secure attribution given key compromise, but we can make a lesser guarantee: we can detect key compromise when it happens, or at least before it may harm any validator that wishes to act on a supposedly securely attributed statement. This lesser guarantee is provided by something called duplicity detection. A digitally signed statement is non-repudiable to the set of entities that have possession of the private key. Thus a key compromise may be seen as a malicious expansion of the set of entities that possess the private key beyond a purported set. For some entity to benefit from gaining control of a private key, they must make signed statements with that key. These are still non-repudiable to the set of possessors of the private key; it's just that the set may include malicious actors once the key has been compromised.

KERI primarily cares about statements about the key state of an identifier, so the key events in KERI are signed statements about key state. The construction of key event messages in KERI is designed to leave no ambiguity about key state. All the statements belong to a hash-chained signed data structure we call a Key Event Log (KEL). One may not make an inconsistent statement about key state with respect to a given KEL without it being detectable as inconsistent. One merely needs a copy of the KEL, and given this copy any statement is either provably inconsistent, provably consistent, or unconnected. When a set of consistent statements about key state are assembled they form a KEL. Any two versions of a key event log where all events are not identical means that the KELs are duplicitous. And duplicity detection mechanisms do not require a totally ordering distributed consensus algorithm; an immutable event log of key event messages is sufficient. So KERI says either we can make secure attribution when there is no detected duplicity, or we detect duplicity and we can no longer make secure attribution because we have proof that the keys have been compromised by virtue of the duplicity (or equivalently that the set of possessors of the private key includes a malicious actor who is acting duplicitously).
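To make the detection mechanism concrete, here is a minimal Python sketch (with hypothetical field names, not the actual implementation) of a validator comparing two purported copies of a KEL: any difference at the same sequence number is non-repudiable evidence of duplicity, since both versions carry valid signatures.

from typing import Dict, List, Optional

def first_divergence(kel_a: List[Dict], kel_b: List[Dict]) -> Optional[int]:
    """Compare two purported copies of a KEL for one identifier.

    Events are modeled as dicts carrying an integer sequence number
    under "s" and their canonical signed serialization under "raw"
    (hypothetical field names). Returns the sequence number of the
    first conflicting event, or None if the overlapping portions of
    the two logs are identical.
    """
    by_sn_a = {e["s"]: e["raw"] for e in kel_a}
    by_sn_b = {e["s"]: e["raw"] for e in kel_b}
    for sn in sorted(by_sn_a.keys() & by_sn_b.keys()):
        if by_sn_a[sn] != by_sn_b[sn]:
            return sn  # duplicity: two different signed events at one sn
    return None  # no duplicity detected in the overlap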

Given duplicity has been detected, the remaining question is how to resolve it. The easy answer is: don't trust. The harder answer is to resolve the duplicity such that one version may be appraised as the trusted one. Many of the elements of KERI are mechanisms to allow unambiguous resolution of duplicity via recovery from the associated key compromise. But KERI does not guarantee that all duplicity is resolvable or recoverable, merely that it is detectable, and once detected any validator may be protected from harm by not trusting any statements with unresolved duplicity. In contrast, a distributed consensus ledger is not solving the secure attribution problem; it is solving the total ordering problem. These problems are not the same, and hence the solutions may be very different in spite of some similarities.

Life Cycle Events and Key Rotation in KERI

A related issue is this one. It explains how the term revocation has two different meanings in the identity space.
#52

In KERI, rotation is usually applied to keys and revocation to signed authorizations or signed statements such as VCs. This is because in KERI key rotation performs a key revocation and then a key replacement in one operation, so we rarely need to use the term key revocation in KERI. In general, statement revocation and key revocation refer to two different things. However, in token-based security systems, a key revocation typically results in automatic revocation of all signed authorizations (tokens), so the two operations often appear synonymous, which is the source of the confusion. They are not synonymous at all, and it makes a big difference in your key management policy and infrastructure to understand why and how they differ.

Transferability

In KERI, identifier prefixes may be transferable or non-transferable. In this case transfer means transfer of control authority. In KERI, control authority is held by the holder of the private key(s), so a transfer of control authority is executed by rotating the authoritative key set to a new key set. This means that under the hood the old key set is revoked and then replaced by a new key set.

In KERI, a Key Event Log (KEL) allows the establishment of the control authority (i.e., the authoritative key set) for any and all events in the log. This means that with KERI, key rotation does not result in the automatic revocation of signed statements anchored to the KEL. This is important because it supports the vital use case of the VC issuer, holder, verifier model. If an issuer wants to issue thousands of VCs to holders and not have to reissue them (forcing the holder to refresh their copy of their VC) whenever the issuer's key management policy says it's time to rotate keys, then some mechanism is needed for determining what the authoritative keys were when the VC was issued, in order that the verifier may later verify that a VC is still valid despite one or more key rotations by the issuer.

The issuer, holder, verifier model of VCs trips up many security practitioners who are familiar with the OIDC policy of automatically revoking all tokens (analogous to all issued VCs) whenever the keys are rotated. I had one practitioner tell me that VCs cannot be used with rotatable keys because a key rotation would force revocation of all issued VCs. The practitioner was using the wrong model. One reason DIDs and VCs have been largely ledger based is that the ledger enables a verifier to determine what the authoritative keys were when a VC was issued, not merely the authoritative keys at the time of verification.

The decoupling of authorization statement revocation from key rotation means that in order to revoke an authorization, an explicit revocation statement must be issued. Revocation registries keep track of statement revocations.

A KERI KEL keeps track of all statements that are anchored to it via KERI seals, essentially a list of digests or hash tree root digests. This allows a KEL to be authoritative for both issuance and revocation of statements, not merely the establishment of the history of authoritative keys.
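As a rough sketch of how a verifier might use this (Python; field names follow the compact labels discussed elsewhere in this repo, and SHA-256 stands in for the actual digest suite), one can replay a KEL in order, tracking the authoritative keys, and stop at the event whose seal list anchors the statement in question:

import hashlib
from typing import Dict, List, Optional

def keys_at_anchor(kel: List[Dict], statement: bytes) -> Optional[List[str]]:
    """Walk a KEL in order, tracking the current authoritative keys.

    Events are dicts with an ilk under "t", signing keys under "k",
    and a seal list under "a" whose entries carry digests under "d"
    (illustrative labels; digest algorithm is a stand-in). Returns the
    keys that were authoritative when a seal anchoring the statement
    first appears, else None.
    """
    dig = hashlib.sha256(statement).hexdigest()
    keys: List[str] = []
    for event in kel:
        if event["t"] in ("icp", "rot"):  # establishment events set key state
            keys = event["k"]
        if any(seal.get("d") == dig for seal in event.get("a", [])):
            return keys  # authoritative keys at time of issuance
    return None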

Lifecycle Events for Key Rotation

In KERI, because identifier prefixes are cryptographically derived from public key(s), there are two ways to effectively rotate keys.

1- The simplest way is to abandon an identifier and create a new one. The new identifier will have a new set of keys. In this case the identifier is not transferable (i.e., the control authority of the identifier is not transferable to a new set of keys), but no KEL is ever needed.

2- The other approach is to rotate the authoritative key set for the identifier using a rotation event. In this case a KEL is needed to track the rotation events.

Depending on the application, one might use 1-, 2-, or some hybrid to manage key rotation.

Several circumstances indicate a need for key rotation.

  1. key hygiene: Over time, with use, signing keys become exposed and may be vulnerable to a side-channel attack, so a prophylactic rotation resets the exposure time to zero.

  2. key compromise: A key may become compromised, or there may be evidence to suggest likely key compromise, due to carelessness of the key holder or evidence of duplicitous events.

  3. transfer of ownership of the identifier: Some identifiers represent public value, brand value, social value, or whatever. Often a business will have public-facing identifiers that anchor its business transactions. But should the business be sold, the new owner may not want to use keys that are already known to the old owner and therefore not secret. The new owner needs new keys.

  4. This is a special case of 3, where the identifier is for an IoT device. The device itself may change ownership or control, and the new owners/controllers want new private keys. This means rotation. This last case is the most illustrative because IoT applications may depend on resource-constrained devices where keeping a lot of KELs might be viewed as heavyweight in terms of storage, whereas the first three cases typically involve non-resource-constrained devices where storing a bunch of KELs is not a limitation.

IoT Use Case

We examine case 4) below.

Typically in IoT applications devices are marshaled onto a network. This is the equivalent of user on-boarding; it's just that IoT devices are not human, so they need some entity or some agent software to perform the marshaling process. Typically, marshaling a device onto a network involves filling in a network mapping table. Think of a DNS zone file, for example.

A physical device is identifiable by its atoms; its atoms persist, so possession of or access to the device may be used as an authentication method. That is, physical access to the device may be used to authenticate the "marshal" with the device when the device is marshaled onto a network.

Given that authentication may happen as a result of physical access to the device, one may choose to rotate the identifier for a given device instead of rotating the keys for the identifier of a given device. In other words, one may use non-transferable identifiers, which do not require KELs.

In other applications the persistence of the identifier is important (because it represents an abstract public entity such as a corporation, business, or brand), not some physical device associated with an identifier. In this case keeping the identifier persistent is paramount. This means rotating the keys but keeping the identifier fixed. The cryptographic root-of-trust underpinning a KEL allows secure verifiable rotation where the only attack vector is a side-channel attack on the key management infrastructure. The key history is end-verifiable and is not dependent on the security of any intervening infrastructure.

A hybrid approach is to use both non-transferable and transferable identifiers. In this hybrid case, the marshal has a transferable identifier with rotatable keys, but devices only have non-transferable identifiers (i.e., non-rotatable keys). These are explained in the example below.

Transfer of Ownership

Suppose the device is a thermostat. We will call this thermostat Tim for short. The marshal assigns an IP address to Tim. Because neither the IP address nor the name Tim has any security properties, no other device can ever trust a message from Tim: both the source IP and the packet contents are forgeable.

However, if the marshal assigns to that physical thermostat a self-certifying identifier (SCID) in addition to its name Tim and its IP address, then as long as the device itself is able to keep secure control over the associated private key for its SCID, any device may verify that a message came from Tim, as long as the message is signed by that private key.

Marshaling Tim onto the network means loading the mapping between Tim's SCID and IP address into all the other devices on the network that need to receive or send messages from/to Tim. Likewise, Tim's network mapping table must have a mapping of the SCIDs for all the other devices that Tim must send or receive messages to/from. If a SCID includes the public key, then devices may extract the public key from the SCID itself. If not, then the inception event exposes the public key, and the mapping table includes the inception event for each SCID.

This mapping is a type of identity system security overlay (a network overlay of the SCID onto the IP address mapping).
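A toy version of such an overlay table might look like the following Python sketch (structure and field names are illustrative, not from the spec):

# One device's security-overlay table, keyed by peer SCID. Each device
# holds an entry for every peer it exchanges messages with.
network_table = {
    "B_tim_scid_placeholder": {   # Tim's self-certifying identifier
        "name": "Tim",
        "ip": "192.168.1.42",
        "icp": None,              # inception event stored here when the
    },                            # SCID does not embed the public key
}

def resolve_ip(scid: str) -> str:
    """Look up the IP address currently bound to a peer's SCID."""
    return network_table[scid]["ip"]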

Example Life Cycle Event: New owner of the building that houses the Thermostat Tim.

The new building owner buys the building and all the devices, including the thermostat Tim. The new owner does not want any possibility of the old owner having a back door into any device for which the old owner could have had a private key, which would let the old owner spoof verifiably signed messages with that private key.

So the new owner needs to rotate all the private keys of all the devices to keys that the new owner controls. There are two ways to do this with KERI.

  1. Rotate the SCIDs. For each device the new owner creates a new SCID with new private keys. The new owner re-marshals the devices by updating the mapping between SCID and IP address in the associated SCID-to-IP-address mapping table on all the devices. IP addresses do not need to change.

  2. Rotate the keys for each SCID. The device SCID does not change but the authoritative key pair does. This means that the mapping table in each device for Tim must include the history of key events for Tim's SCID, not simply the SCID itself. Every device must have a KEL for every other device it communicates with.

The advantage of 2) is that the rotation event for each device may be sent out as a message to the network signed by the old private key already stored on the device. Each device may update the KELs of the devices it communicates with asynchronously and independently. The IP mapping tables never change. The disadvantage of 2) is that the devices must store the key event history for all the devices they communicate with.

The advantage of 1) is that the devices do not have to hold the key event history. The disadvantage of 1) is that marshaling the table requires some form of authentication. This means either a marshaling SCID or a physical authentication such as a button on the device. Most IoT devices use the button approach. A button requires physical access to the device. The new owner can be assured that the old owner does not have physical access to the building, so they can depend on physical authentication as secure. The IP mapping tables must be updated for all devices.

  3. A hybrid is to have a marshaling SCID. Then instead of each device keeping a key event history of every other device it must communicate with, it only keeps a key event history of the marshaling SCID. In this case the device SCIDs are non-transferable. Re-marshaling means rotating each device SCID, which means creating a new set of keys from which the new SCID is derived. But each device may authenticate changes to the marshaling (mapping) table against the marshaling SCID, not a physical button. This marshaling SCID is transferable (i.e., it supports rotatable keys). So a change of owner means the new owner gets a new key pair for the marshaling SCID. The old owner sends out a rotation event that informs all the devices that any new marshaling commands must now be authenticated with the new key pair, for which only the new owner has the private key. Now the new owner can remotely update the marshaling tables by rotating device SCIDs without having to physically authenticate to each device. So only one key event history must be maintained by the devices: that of the marshal's SCID. All other SCIDs are non-transferable, and a change of their keys means getting a new SCID.

So for inexpensive devices that can be re-marshaled with a physical authentication mechanism, the preferred approach is 1).

For more expensive devices, or remote devices where physical security is not a given, 2) or 3) is the preferred approach.

If the remote devices are computers, i.e. not resource constrained, then 2) is simpler.

If the devices are resource constrained then 3) may be the best trade-off.

data type ambiguous

data = config data list (icp) or data seal list (rot, itx)

Prefer not to see an or-type; can we split this up into two property names?

sith type ambiguous

sith = signing threshold or list of weighted thresholds (icp, rot, dip, drt)

Can it be exactly one of these, and not an or-type?

Prefer it be just a list of weighted thresholds (icp, rot, dip, drt)... but with some better example data.

Use Cases Document

Use cases for KERI (outside of DID methods) should be documented in order to gather requirements for the remaining parts of the specification.

Cryptographic Material Encoding—3rd Alternative Binary Encoding

Since the binary representation is being merged directly to master, rather than developed in a branch and then discussed and merged through a pull request, I'll open this as a separate issue.

The kid0001 commentary under the heading Cryptographic Material Encoding discusses the proposed binary representation of cryptographic material. The section discusses three encodings of cryptographic material:

  1. (R) Raw Binary Material with Code: a tuple of the derivation code (dc) and the raw material (rm): (dc, rm). Best seen as the parsed, in-memory representation.
  2. (T) Base64 Encoding: the one we're all familiar with and used within the body of the event
  3. (S) Binary Encoding: this is proposed in the link above

The commentary describes a choice between two options for converting from R to T and R to S:

  1. Binary-First: Derivation codes are one or more 8-bit tokens. The base64 representation is the concatenation of the derivation code and the raw material, which is then base64 encoded. The binary representation is simply the concatenation without base64 encoding.
  2. base64 code + base64 material: Derivation codes are one or more 6-bit tokens. The base64 representation is the concatenation of the derivation code and the base64-encoded raw material. The binary representation is essentially the base64 representation, base64 decoded.

There is a third option, which I would like to propose:

  3. base64 derivation code + (base64 or binary): Derivation codes are one or more 6-bit tokens, as in option (2). The base64 representation is the same as option (2). The binary representation, however, is the base64 representation of the derivation code with the raw material concatenated. Said a different way, the representations of the code itself are equivalent at the binary level.

This third option preserves "the stable derivation code characters in the textual domain" while also simplifying the decoding in the binary domain. Decoding in the textual domain is well understood. In the binary domain, a processor is able to read and interpret the codes as whole bytes and avoids having to do the extra work to extract the six-bit codes from the bytes.

From a compactness standpoint, for one-character codes, the length in bytes is equivalent; option (2) essentially just shifts the raw material 2 bits and leaves 2 empty bits at the end. For two-character codes, it is also the same length, since the code will occupy 12 bits, which requires 2 bytes and leaves 4 empty bits at the end. For three-character codes (not currently used--a separate topic), the numbers are 3 bytes and 6 empty bits. Finally, it isn't until four-character derivation codes that option (2) becomes shorter.

This "mixed encoding" representation might seem odd, but perhaps it is better to think of it instead as having a constraint on derivation codes that allows it to be represented in both the textual and binary domain without translation.

Data model description

In writing #86, I put together a pseudocode representation of the data model, which I will put here and then reference there. We may want to use different notation in the spec, but this may be useful in composing kid0002:

// Notation Info
Foo(bar:int, baz:[Byte]) // data structure with bar int field and baz Byte array
Biz = Qux | Quux | Quuz // Biz is union type of Qux, Quux, Quuz
Corge(...Foo, grault:int) // Corge is composed of all the fields of Foo with the addition of grault

// Receipts
Receipt(...Message, ...EventCoordinatesWithDigest, a:EventCoordinatesWithDigest) 
// ^^^^ a:EventCoordinatesWithDigest must point to establishment event
//         (root) EventCoordinatesWithDigest point to receipted event

// Events
EstablishmentEvent(...Event, ...KeyConfig, ...WitnessConfig)
InceptionEvent(...EstablishmentEvent, c:[ConfigurationTrait])
RotationEvent(...EstablishmentEvent, a:[Seal])

DelegatedInceptionEvent(...InceptionEvent, da:EventCoordinatesWithPreviousDigest)
DelegatedRotationEvent(...RotationEvent, da:EventCoordinatesWithPreviousDigest)

KeyConfig(kt:SignatureThreshold, k:[PublicKey])
SignatureThreshold = integer | WeightedSignatureThreshold

WitnessConfig(wt:integer, w:[BasicPrefix])

ConfigurationTrait = EstablishmentEventsOnly | DoNotDelegate
Seal = Digest | MerkleTreeRoot | EventCoordinatesWithDigest

KeyEvent(...Message, ...EventCoordinatesWithPreviousDigest)

// Message
Message(v:Version, t:MessageType, signatures:[Signature])

// Common
EventCoordinatesWithDigest(...EventCoordinates, d:Digest)
EventCoordinatesWithPreviousDigest(...EventCoordinates, p:Digest)
EventCoordinates(i:Prefix, s:integer)

Prefix = BasicPrefix | SelfAddressedPrefix | SelfSignedPrefix
BasicPrefix = PublicKey
SelfAddressedPrefix = Digest
SelfSignedPrefix = Signature

Version(major:integer, minor:integer, format:Format, size:integer)

Format = JSON | CBOR | MessagePack

// Crypto Primitives
MerkleTreeRoot(rd:Digest)

Signature(algorithm:SignatureAlgorithm, data:[byte])
Digest(algorithm:DigestAlgorithm, data:[byte])

SignatureAlgorithm = ED25519|ED448|ECSECP256K1
DigestAlgorithm = BLAKE3|...(omitted)...
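For readers who prefer executable notation, the composition (...) and union (|) constructs above map naturally onto Python dataclass inheritance and type unions. A partial sketch of a few of the types (my own rendering, not normative):

from dataclasses import dataclass
from typing import Union

@dataclass
class Digest:
    algorithm: str       # e.g. "BLAKE3"
    data: bytes

@dataclass
class MerkleTreeRoot:
    rd: Digest

@dataclass
class EventCoordinates:
    i: str               # Prefix
    s: int               # sequence number

@dataclass
class EventCoordinatesWithDigest(EventCoordinates):
    d: Digest            # "...EventCoordinates" composition becomes inheritance

# "Seal = Digest | MerkleTreeRoot | EventCoordinatesWithDigest" becomes a Union
Seal = Union[Digest, MerkleTreeRoot, EventCoordinatesWithDigest]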

Support for Ledger Registrar Backers

Currently the only Backer type supported by KERI is witnesses that participate in a witness pool using the KAACE distributed consensus algorithm. Witnesses are identified with an ephemeral (non-transferable) identifier that uses dedicated derivation codes for ephemeral non-transferable identifiers. Such coded identifiers include the public key as the identifier itself, so no inception event or KEL is needed to verify the signatures of a witness. This is a performance optimization: it avoids having to download KELs for witnesses and avoids the infinite regress of validating signatures of witnesses who have witnesses, etc. When a witness's key is compromised, instead of rotating the witness keys, the witness identifier itself is rotated to a new ephemeral identifier. We rotate identifiers to rotate keys. The KEL events support backer (witness) rotation by changing the set of identifiers.

However, it has long been contemplated that KERI will support Backers that are registrars or oracles for shared ledgers such as Sovrin or Bitcoin. Currently there is no mechanism to distinguish between the Backer types, and a ledger registrar backer does not participate in a KAACE consensus pool with witness backers.

The proposed mechanism here is that all non-witness backer types use non-ephemeral but non-transferable identifiers. A non-ephemeral but non-transferable identifier uses a transferable derivation code, thereby requiring an inception event, but the nxt digest in its inception event is empty, making it effectively non-transferable. But because an inception event is required, we may use the seal list in the anchor field of the inception event to include a seal (digest) of all the configuration or metadata needed to know which ledger etc. the Backer is tied to. This incepting metadata can be provided as an attachment to the inception event for that Backer's identifier. Using a non-ephemeral but effectively non-transferable identifier may avoid the infinite regress in that a Backer may not have backers (other than implicitly via the ledger itself that the backer is tied to).

This will require some code changes to detect non-witness Backers based on identifier type and inception event contents. This will be a different escrow path than for witness backers. But the inception event allows us to securely designate any metadata needed to distinguish the Backer type and whatever else is needed to validate against that backer. Because the Backer identifier is still effectively non-transferable, should the backer key pair become compromised, the Backer must be replaced with a new identifier. The same KEL event support for witness identifier rotation will work similarly for non-witness Backer identifier rotation.
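A rough Python sketch of such a detection rule (field names per the compact labels; the derivation-code test is purely illustrative and not the actual code table):

def is_ledger_registrar_backer(icp_event: dict) -> bool:
    """Sketch: a non-witness Backer is incepted with a transferable
    derivation code but an empty next-key digest ("n"), making it
    effectively non-transferable, and carries its ledger configuration
    as a seal in the anchor list ("a")."""
    ephemeral = icp_event["i"].startswith("B")  # assumed ephemeral code
    return (not ephemeral
            and icp_event.get("n") == ""
            and len(icp_event.get("a", [])) > 0)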

notice of concern DIF-TOIP

Dear Sam, dear Charles,

corresponding to the DIF KERI WG call of June 22nd, 2021 (and previous expressions of concern in the DIF KERI WG Slack channel), I'd like to again list my concerns about the proposed move of the DIF KERI WG to TOIP.

starting point:
as I understand it, Sam proposed (with support from Drummond) to move the DIF KERI WG to TOIP for the following reasons:

  • 3 contributions were rejected by DIF due to IPR concerns
  • TOIP has more members as an organization (~240 TOIP vs 200 DIF)
  • GLEIF is a member of TOIP not of DIF (although in both cases GLEIF could maintain membership for free)
  • TOIP worldview is closer to Sam's and Drummond's view of KERI

note:
moving a standards WG from one organization to another is rare, and would be easier to justify if there were concrete causes with significant limitations preventing a group such as the DIF KERI WG from continuing its work under its current organization (in this case DIF).
If such significant limitations are identified, it lies in the role of the co-chairs (of the DIF KERI WG in this case) to facilitate a transparent discussion with all stakeholders to resolve the issues at hand. A well-documented course of action and possible resolution should follow from such a discussion.

critique:

  • there was no initiative to discuss the move of the DIF KERI WG to TOIP with the community in a proper way (e.g. describe the issues, suggest a discussion, invite all stakeholders by email, have a dedicated meeting to find a solution)
  • there was no community decision made according to the spirit or the letter of DIF's governing documents
  • the differences between TOIP and DIF, and the IPR-related roadblocks to contribution, have been treated in a manner disproportionate to their potential impact and the damage to the work that the community has put into KERI to date
  • if the move of the DIF KERI WG to TOIP were to take place, there has been little to no thinking/evaluation done about the potential loss of trust and the adverse reputational effect this could have on adoption of KERI
  • when KERI was contributed to DIF, a promise was made to the community to build out KERI together under a governance umbrella we all feel comfortable under. Many have since started contributing and some even implementing KERI. The DIF KERI WG has been continuously gaining momentum. Invalidating this promise is a delicate and substantial community concern. It needs to be treated as such with sincere care and transparency.

concerns as a community member:

  • governance of IPR is essential to ensure potential adoption by corporations as well as governments. For the future of KERI it is important to have stable and reliable governance of IPR
  • Jolocom has invested heavily in KERI (the Rust implementation); Jolocom has also integrated KERI into the Jolocom SDK and SmartWallet. KERI has become a building block of Jolocom's stack. Potential IPR issues could quite literally put our investment, as well as our entire company’s roadmap, at risk.
  • there seems to be no understanding of the difference in membership quality that such a move would mean for current DIF members. E.g., Jolocom currently is a full (“Associate”) member of DIF and a non-paying, non-governing contributing member of TOIP. To match the same quality of membership at TOIP, Jolocom would first need to become a member of the Linux Foundation, and only then could it become a general member of TOIP with equivalent governance options. This would result in a significant difference in membership fees (5x), which could be prohibitive for us. Given the centrality of KERI and other DIF specs to our future, we feel more comfortable having a say in the governance of the organization; we could not take such a move lightly, for reasons not covered by the IPR/Membership differences document.

call to action:

  • invite all stakeholders by email, with a proper advance notice, to a dedicated call regarding the "proposition to move DIF KERI WG to TOIP"
  • facilitate a solution-driven discussion to resolve the issues at hand
  • define and decide on next steps
  • the DIF KERI WG co-chairs have no mandate from their WG so far to initiate any action to move the DIF KERI WG to TOIP; the co-chairs should make this transparent to all parties involved at this time
  • reconsider the current chairs, as they are not representing the interests of the KERI community but their own.

KEL/KERL Replay Mode Road Map

Replay Modes

KERI enables and benefits from several different modes for replaying event logs. To a large extent this is a result of KERI's asynchronous design: signatures and receipts on events may be collected asynchronously, which means replay may also be performed asynchronously. This also introduces some complexity. This issue outlines the different options or modes for replay, how KERI may support them, and the performance trade-offs of each mode. For interoperability's sake, we need to road-map and prioritize the implementation of the different modes.

The existing event and receipt messages allow what I call bare metal disjoint replay mode. This replays events over TCP with attached indexed signatures, and then receipts each separately with attached indexed signatures, couples, or quadruples as appropriate to the type of signer of the receipt: witness, watcher, or validator. This is the least efficient bare metal mode but the easiest to support. For example:

ICP Event Msg | Count Code | Attached Signatures | RCT Msg | Count Code | Attached Couples (Prefix + Signature) | VRC Msg | Count Code | Attached Signatures

The other mode is called bare metal conjoint mode because all the signatures, couples, and quadruples may be attached to the associated event message in a single composed stream of attachments over TCP, without any intervening receipt messages. The expanded Counter Code table enables full explication of the heterogeneous attached streaming material required for bare metal conjoint mode. This is the most efficient mode. For example:

ICP Event Msg | Extended Count Code | Attached Signatures | Extended Count Code | Attached Couples (Prefix + Signature) | Extended Count Code | Attached Signatures

Concurrent Version

ICP Event Msg | Stacked Count Codes | Attached Signatures | Stacked Count Codes | Attached Couples (Prefix + Signature) | Stacked Count Codes | Attached Signatures

Another streaming replay mode is qb2, where a composed qb64 stream is converted en masse to qb2 and streamed, and then at the other end either parsed as qb2 or converted back to qb64 before parsing.

Other modes would use a REST interface over HTTP to replay. The simplest is to return the event in the response body and the attached signatures in HTTP headers, but this does not stream events.

A more streaming-like interface could be built with a push mechanism such as standard HTTP SSE (server-sent events), with either disjoint or conjoint streaming of the events plus attached signatures, couples, or quadruples. Another HTTP-related replay mode would be to use a web socket and then run either disjoint or conjoint streaming over the web socket.

The final suggested mode would be an enveloped REST query mode where the attached signatures are wrapped in a JSON, CBOR, or MsgPack envelope as a list of attachments and then returned as one large response body. This is probably the most cumbersome and least performant mode, but it may be a popular way to replay.

In addition to replay over the internet (external replay modes), we also need to support internal replay for cloning the database as a security measure. Replay through the validation logic ensures the database was not undetectably corrupted while offline.

There is some replay logic to be defined for delegated identifiers, where a replay of a delegated identifier is preceded by its delegator's replay. Likewise for VRC receipts, or else the escrow logic needs to be adjusted.

The replay roadmap also exposes the need to road-map HTTP support, UXD (Unix domain socket) support, and UDP support.
For internal replay between processes, UXD is much more performant than TCP, and unlike its cousin UDP, UXD is reliable and does not suffer from MTU fragmentation issues. UDP is the most performant and scalable of the external replay modes but requires designing a reliable fragmentation protocol to address UDP MTU size limitations. This is not difficult to do but does require some extra work.

A replay consumer needs some way to initiate replay, and for each of the replay modes we need to define how.
This suggests a new message type, a replay query, by which a consumer can poll a KERL host for some or all of the KELs in its database. A clone query asks for all the KELs in the database, whereas a focused query could simply ask for one KEL at a time, or even just some events from a KEL. The query could also be limited to include some combination of controller signatures, witness signatures, watcher couples, and validator quadruples.
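Purely for illustration (none of these fields is spec'd; the ilk follows the qry suggestion below), such a replay query might look like:

# A hypothetical replay query message, shown as a Python dict. A KERL
# host receiving it would replay one KEL from the given sequence number
# with the requested attachment kinds.
replay_query = {
    "v": "KERI10JSON00011c_",   # version string
    "t": "qry",                 # one of the suggested new ilks below
    "i": "Eabc_placeholder",    # prefix of the KEL to replay; omit to clone all
    "s": "0",                   # starting sequence number
    "r": ["sigs", "wits"],      # requested attachments (illustrative)
}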

There are two types of replay in terms of event selection: full first-seen replay and recovered replay. Call these clone replay and sequence-numbered replay.

For the HTTP RESTful replay modes, a query string format could provide all those query options. For a bare metal replay request we need a message with one or more query fields that provide the same level of functionality.

Another consideration for replay is white-listing and black-listing play and replay.
The white list is used by a controller for playing messages into its own KEL. It never accepts play-in for its controlled KELs over the internet but only internally, so that its own database never gets corrupted by inappropriate first-seen events for its own KELs. It also may white-list clones from other hosts so that it only accepts replay of either its own KELs or any KELs from hosts it trusts. A controller, or any host for that matter, may black-list replay-out or play-in for KELs that have been erased, or from duplicitous watchers or witnesses.

new message type: qry, qkl, or req

Proposed RoadMap

A) Bare Metal TCP Disjoint Replay
Add support for indexed witness attached signatures using new count code
Add query replay message and query fields

B) Bare Metal TCP Conjoint Replay
B1) Add support for count codes for couples and quadruples
B2) qb2 streaming support
B3) Internal cloning of database (same process)
B4) White listing
B5) Composable Text Event Format (counter codes)

C) Add HTTP support to KERI
C1) Add HTTP REST interface for push, put of all messages with attached signatures in Signature headers
C2) Define headers for attached signatures and receipt couples and quadruples
C3) Add query syntax for GET requests
C4) Dump disjoint or conjoint replay into response body; use chunked encoding
C5) Add SSE GET response push with disjoint or conjoint replay with chunked encoding
C6) Add enveloped GET response

D) Black Listing

E) Add UXD support to KERI for more performant internal inter-process replay (cloning)

F) Add UDP support to KERI

The order of implementation could be as listed, or we could do A) and then parallelize B) and C), where one work stream is B) and the other, in parallel, is C). This would allow someone to implement E) before finishing B), C), and D).

The design for B) beyond what is in A) is mostly done now with the expanded counter codes, so it could be underway while the design for C) is ongoing.

Compact Event Element Label Names

Proposal: use compact element label names as a compromise between performance (bandwidth, storage) and friendliness.

Compact Element Labels
Tradeoff between convenience and performance (bandwidth and storage)

vs = version string (all)
id = identifier prefix (all)
sn = sequence number (all)
ilk = event ilk type (all)
dig = digest previous event (rot, itx)
sith = signing threshold or list of weighted thresholds (icp,rot)
keys = list of signing public keys (icp, rot)
next = next digest of ensuing threshold and signers (icp, rot)
toad = witness threshold of accountable duplicity tally (icp, rot)
wits = witness list of id prefixes (icp)
cuts = witness cut prune list (rot)
adds = witness add graft list (rot)
data = config data list (icp) or data seal list (rot or itx)
sigs = signature offset list (all)
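For illustration, an inception event under this proposal might look roughly like the following (my own rendering with placeholder values, not spec'd example data):

# Hypothetical icp event using the proposed compact labels.
icp = {
    "vs":   "KERI10JSON00011c_",
    "id":   "E_prefix_placeholder",
    "sn":   "0",
    "ilk":  "icp",
    "sith": "1",
    "keys": ["D_pubkey_placeholder"],
    "next": "E_next_digest_placeholder",
    "toad": "1",
    "wits": [],
    "data": [],
}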

Comparing KERI to Certificate Transparency

Wrote this while thinking about @csuwildcat's comments from yesterday:

Comparing KERI to Certificate Transparency

I think one of the best ways to understand KERI is by comparing it to Certificate Transparency (CT). Many of the same mechanisms used in CT are used in KERI to provide verifiability and detection of what CT calls suspicious behavior. In KERI, the focus is usually on duplicitous behavior: a controller that tries to spread conflicting key histories.

Consider the domain name system and some of its components and actors--the domain name, the owner, the domain registrar, key pairs, certificate authorities, certificates, and user agents (e.g., browsers)--and add to that Certificate Transparency. Here's how they map to KERI:

  • Domain Name ≈ KERI prefix/identifier/name; cryptographically derived from the public key and a digest of the next key (and other configuration data); universally unique
  • Domain Name Owner, Registrar, Certificate Authority, Private Keys Holder ≈ KERI Controller
  • Certificate ≈ KERI event log, self-signed by the controller
  • Certificate Transparency Loggers ≈ Witnesses
  • Certificate Transparency Monitors ≈ Watchers
  • User Agents/Web Browsers ≈ Verifiers

There are important distinctions that I want to highlight. In KERI, the prefix/identifier/name is cryptographically derived from the configuration of the identifier. This configuration includes the initial public key, a digest of the next key, the algorithms used, the witness identifiers (if the identifier is to be public, discussed later), and other pieces less relevant to this discussion. This configuration is signed with the initial key pair's private key. This configuration constitutes the inception log entry for the identifier. The identifier is the digest of the inception log entry (in the case of self-addressing prefixes). The controller can assert control of that identifier through the use of the private key. Unlike the certificate authority system, which uses CAs to bind the domain name and the key pair, the controller binds their identifier to the key through the above cryptographically-based derivation process.
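A simplified Python sketch of that derivation (hashlib.blake2b stands in for the actual digest suite, and the serialization and dummy-field conventions here are only approximations of the real rules):

import base64
import hashlib
import json

def self_addressing_prefix(inception: dict) -> str:
    """Sketch: serialize the inception entry with a dummy identifier
    field, digest the serialization, and encode the digest as the
    prefix. Real KERI pins down serialization, padding, and derivation
    codes precisely; this only shows the shape of the binding."""
    entry = dict(inception, i="")  # dummy prefix field for derivation
    raw = json.dumps(entry, separators=(",", ":")).encode("utf-8")
    dig = hashlib.blake2b(raw, digest_size=32).digest()
    return "E" + base64.urlsafe_b64encode(dig).decode("ascii").rstrip("=")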

Later, if the controller included a digest of the next key in their inception, they can rotate the keys. The controller accomplishes this by creating a new log entry containing:

  • A digest of the previous log entry
  • The public key of the next key that was provided digested in the previous log entry
  • Any changes to the witness set

The controller then signs the entry and sends it to the witnesses to disseminate. (Aside to Daniel: outside of witnesses, I think Sidetree uses a similar mechanism.)

For publicly resolvable and verifiable identifiers, KERI uses witnesses to publicize the identifier's log. These are analogous to the loggers in Certificate Transparency (CT). The controller creates the log entry, signs it, and sends it to the witnesses. Each witness signs a receipt and sends it to the other witnesses in the identifier's witness set. When a witness has received the required number of receipts, it appends the log entry to its copy of the identifier's log, which is shared with watchers and validators. Witnesses will never share another version of that log entry (identified by the sequence number in the log entry). When witnesses share the log entry, they also share all the receipts they have of that entry.
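A toy model of that witness behavior (Python, my own structure and names, not the implementation):

from typing import Dict, Set, Tuple

class WitnessLog:
    """Toy witness for one identifier: collect receipts per sequence
    number, publish an entry once the receipt threshold is met, and
    never accept a second version of an already-published entry."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.pending: Dict[int, Tuple[dict, Set[str]]] = {}
        self.published: Dict[int, dict] = {}

    def receive(self, sn: int, entry: dict, receipt_from: str) -> None:
        if sn in self.published:
            return  # will never share another version of this entry
        stored, receipts = self.pending.setdefault(sn, (entry, set()))
        if stored != entry:
            raise ValueError(f"duplicitous event at sn {sn}")
        receipts.add(receipt_from)
        if len(receipts) >= self.threshold:
            self.published[sn] = entry  # shared along with its receipts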

CT employs monitors to ensure that loggers never change their logs and to watch for suspicious activity. Likewise, KERI watchers retrieve the logs from witnesses to look for similar suspicious/duplicitous activity from witnesses and controllers. Watchers communicate among themselves about the logs they receive to prevent eclipse attacks. Any conflicting logs propagated by controllers or witnesses are therefore quickly detectable. What has been seen by witnesses and watchers can't be unseen.

User agents can verify identifiers by either contacting specialized verifiers or contacting watchers and examining the logs. The white paper discusses composing verifiers from a system of judges and jurors. Essentially, the watchers provide a verifier with all they need to obtain and verify an identifier's log to determine the current state and detect any duplicity.

At this point, the KERI identifier is similar in value to a certificate signed by Let's Encrypt—the controller can prove rightful control of the identifier by exercising the private key. If the controller wants to assert its identity as a person or organization, they can do so with verifiable credentials linked to the identifier.

Note: to highlight the relevant mechanisms, I have omitted some of the features of KERI, such as multi-sig, interaction events, and delegation.

Normative reference issue in spec

Dear KERI community,

I would like to remind everyone that normative references should only point to IPR protected work items.
As I read through the spec I found that the first normative reference is a personal repository on GitHub.
Either get the owner to donate the repository to DIF, or reconsider using it as a normative reference, as otherwise the spec will not be approvable by DIF.

Thanks,

privileges/"org member"?

I need privileges to set up the project boards/kanban stuff... and I'm pretty much a total n00b as far as GitHub goes.

Restore external content seals (anchors) to inception and delegated inception events. NFT use case example.

Restore External Content Anchoring to Inception Event

Inadvertently Dropped Support for External Content Anchoring from Inception Event

When we revised the KERI events to use compact labels, we overlooked important use cases that depend on external content seals (anchors) in the inception events (icp and dip; henceforth "inception event" refers to all types of inception events).

Originally the inception event included a data field list that could store both configuration traits and seals. This was later changed to a cnfg field list, still with the intention of storing both configuration traits and seals. However, we further simplified the configuration traits to be strings when we created the compact-label algorithm, and in the process overlooked the use cases for having seals in the inception event.

Currently the compact c field is only a list of configuration trait strings and does not allow for seals. Given the naming convention and semantics for compact labels, this provided clarity of purpose but also excluded those seminal use cases. Some of this language (for seals in inception events) was also lost when we replaced the algorithm for deriving self-addressing self-certifying identifier prefixes from extracted data with one using the event serialization with a dummy prefix i field.

It is true that in general one can include external content anchors in an ixn event that immediately follows the icp, without requiring the friction of a rotation event which rotates keys. However, there are seminal use cases where the identifier wants to be strongly bound to the external content, which requires a cryptographic commitment to that external content via a digest in the inception event itself (in a seal). In this case the identifier prefix must be self-addressing or self-signing. As a result the identifier derivation itself will be cryptographically bound to the external content. To restate: strong binding of external content to the identifier prefix (i field with self-addressing derivation code) requires that the external content, or at least a cryptographic commitment to the content via a digest, be included in the inception event itself. It was an unfortunate oversight that this functionality was dropped, and it needs to be restored.

The KERI white paper clearly assumes this capability, for example:

This approach also enables content or document specific but self-certifying identifiers. This provides a mechanism for a content addressable database to enforce non-repudiable unique attribution to the content creator as controller. This binds the content to the creator in the content address. Different creator means different address. This makes confidential (encrypted) content more usable as the content address is bound to the controller from whom a decryption key must be obtained.

Add a field to Inception Event

Given the semantic cleanliness of separating config traits, which may only appear in the inception event, from seals that anchor external content, it does not make sense to overload the existing c field in the inception event to restore this functionality. Instead, add an a field as a list of anchors (seals) to external content. This is consistent with the other events that have an a field.

Revised Inception with content seals:

The proposed revised icp event with restored content anchoring support is shown below:

{
    "v" : "KERI10JSON00011c_",
    "i" : "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
    "s" : "0",
    "t" :  "icp",
    "kt":  "1",
    "k" :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n" :  "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "wt":  "1",
    "w" : ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "c" :  ["EO"],
    "a" :
            [
               {
                   "i": "EJJR2nmwyYAfSVPzhzS6b5CMZAoTNZH3ULvaU6Z-i0d8",
                   "s": "0",
                   "d": "ELvaU6Z-i0d8JJR2nmwyYAZAoTNZH3UfSVPzhzS6b5CM"
               }
            ]
}

As per the convention used throughout, when there are no external seals the a field value is an empty list.

{
    "v" : "KERI10JSON00011c_",
    "i" : "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
    "s" : "0",
    "t" :  "icp",
    "kt":  "1",
    "k" :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n" :  "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "wt":  "1",
    "w" : ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "c" :  ["EO"],
    "a" : []
}

Likewise, the revised delegated inception event, dip, is shown below:

{
    "v" : "KERI10JSON00011c_",
    "i" :  "EJJR2nmwyYAfSVPzhzS6b5CMZAoTNZH3ULvaU6Z-i0d8",
    "s" : "0",
    "t" :  "dip",
    "kt":  "1",
    "k" :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n" :  "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "wt":  "1",
    "w" : ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "c" :  ["DND"],
    "a" :
            [
               {
                   "i": "EJJR2nmwyYAfSVPzhzS6b5CMZAoTNZH3ULvaU6Z-i0d8",
                   "s": "0",
                   "d": "ELvaU6Z-i0d8JJR2nmwyYAZAoTNZH3UfSVPzhzS6b5CM"
               }
            ],
    "da" :
           {
             "i":  "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
             "s": "1",
             "t": "rot",
             "p": "E8JZAoTNZH3ULZ-i0dvaU6JR2nmwyYAfSVPzhzS6b5CM"
           }
}

Without any external content anchoring seals:

{
    "v" : "KERI10JSON00011c_",
    "i" :  "EJJR2nmwyYAfSVPzhzS6b5CMZAoTNZH3ULvaU6Z-i0d8",
    "s" : "0",
    "t" :  "dip",
    "kt":  "1",
    "k" :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n" :  "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "wt":  "1",
    "w" : ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "c" :  ["DND"],
    "a" : [],
    "da" :
           {
             "i":  "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
             "s": "1",
             "t": "rot",
             "p": "E8JZAoTNZH3ULZ-i0dvaU6JR2nmwyYAfSVPzhzS6b5CM"
           }
}

Implications

The proposed revisions will not require many changes to actual code but will break many test vectors and as a result should be staged appropriately.

Seminal Use Cases

An important use case for disseminating and using digital or electronic content is to provenance the origin and chain-of-custody of that content. Cryptographically, the primary mechanism for provenancing origin and chain-of-custody is to make non-repudiable attestations about that data. Most commonly these non-repudiable attestations are composed of a cryptographic commitment to that data that is in turn signed with a non-repudiable digital signature. The problem is that one needs to secure that attestation, which requires some form of identifier system, such as KERI. With KERI, the way to make the originating attestation as secure as possible is to bind the content to the identifier's derivation. This means using the self-addressing derivation, which also binds the identifier (and content) to the incepting set of key pairs. Now any reference to that identifier is uniquely also a reference to that content and those key pairs. With KERI, the highest level of security over key state is obtained when only establishment events are allowed in the KEL, i.e., no interaction events. This makes the pre-rotated keys from each rotation event one-time-use only.

This highest degree of security is important when one wants to establish a chain-of-custody over valuable content. With KERI, control of that content, via its identifier with the external content seal in the inception event, is established via its KEL. The KEL establishes control authority for key pairs. Whoever controls the private keys controls the key state.

KERI via rotation allows transfer of control over an identifier from one set of keys to another. The controlling entities of subsequent key sets do not have to be the same. Thus one entity can transfer control of the identifier, and hence the anchored content, to another entity by rotating to a set of keys controlled by that other entity. All the first entity needs is a list of public keys designated by the second entity, which it uses to derive the next key digest in a new rotation. These public keys correspond to the private keys held by the second entity. To clarify: the first entity does a rotation where the next digest is the digest of this next set of public keys. If the inception event specifies in its config that only establishment events are allowed, via the config trait EO, then upon publication of that rotation the effective control of the KEL is in the hands of the second entity.
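A Python sketch of the commitment step (the digest scheme and field conventions here are placeholders, not the actual KERI derivation):

import hashlib

def next_digest(threshold: str, public_keys: list) -> str:
    """Sketch: the first entity commits to the second entity's key set
    by digesting the next threshold and public keys. Only this digest
    (the "n" field) appears in the rotation event, so the actual keys
    stay hidden until the second entity itself rotates."""
    material = threshold + "".join(public_keys)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

# The first entity publishes a rotation whose "n" is next_digest(...);
# control of the KEL then passes to the holder of those private keys.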

NFT (Non Fungible Tokens)

More recently, NFTs or non-fungible tokens have become popular as distributed shared ledger mechanisms for asserting and transferring ownership of unique digital content. Often NFTs are strongly tied to the cryptocurrency associated with a given ledger. As a result, most of these NFTs are not portable between shared ledgers, or their portability requires significant additional support via another protocol, an atomic swap, or a tunneling wrapper in order to move them to a different ledger.

In this context I use the term ledger to mean a Byzantine fault tolerant algorithm involving a pool of nodes, with some form of shared governance, that provides total ordering of all transactions from any number of clients into a cryptographically verifiable data structure called a ledger or blockchain. To be clear, there are several main types of ledger algorithms with these properties, such as Proof-of-Work (PoW), Proof-of-Stake (PoS), Byzantine Agreement (BA), and Verifiable Random Function (VRF). Extant ledger algorithms include many variants of each type and many hybrids of the four types. The important properties all ledgers share are liveness, safety, and double-spend proofing.

KERI has a flexible witness designation feature: a KERI witness may be a ledger oracle. Thus when appropriate a KERI identifier may leverage all the properties of a ledger, such as liveness, safety, and double-spend proofing, by declaring its witness to be a ledger oracle. KERI also has the concept of watchers; in this case any watchers would also be oracles of the same ledger.

The KERI protocol also supports modes of operation, namely direct and indirect, where the witnesses and watchers may not be ledger oracles. In these modes KERI does not guarantee double-spend proofing or liveness, but does guarantee safety. A guarantee of safety is sufficient for solving the problem of secure attribution, which is the primary problem an identifier system needs to solve. So when used in the majority use case, KERI does not incur the performance, latency, throughput, scalability, and governance costs of ledgers, but may leverage cloud infrastructure, which has a cost floor that is usually several orders of magnitude lower than a ledger's cost floor.

However, an NFT requires, in addition to safety, both liveness and double-spend proofing. Thus to use KERI for an NFT means designating a ledger oracle as its witness. However, because witnesses may be rotated in KERI via a rotation event, KERI thereby provides easy portability of its NFTs from one ledger to another. Thus KERI NFTs may have much lower friction for portability.

But in order to enable the most secure form of NFTs on KERI, the inception event needs to include a content seal (anchor) to the digital content that is the target of the NFT.

Editorial Sprint

There has been some discussion and concern expressed about rehousing the KERI WG. In the interest of maximising cooperation and minimising the potential for divergence, there is a proposal from some WG contributors to focus on collating the KIDs and whitepaper sections into a conventional specification document, particularly the core of the spec, which is least likely to undergo breaking changes. Ideally, this document can remain compatible with future work. If the group can agree that this document is descriptive and comprehensive enough, it can be ratified into a 1.0 version of the core spec and provide a basis for continued work for contributors who may not wish to participate in future stages of the project's lifecycle.

It is my hope that this proposal can ameliorate concerns about confusion of the status of the project w.r.t. rehousing and spec-maturity as well as prevent divergence in the community and implementations. Concretely, I would suggest that this process take no longer than 6 weeks, ideally just 4, and that this process not prevent any further contribution of any form at any venue.

Regarding Voting

There was a question in the meeting today about how voting works. The DIF Project Charter specifies the following, which I think applies:

  4. Decision Making
    4.1. Consensus/Voting/Approval . The Steering Committee and each Working Group will endeavor to make all decisions by consensus. Where the Steering Committee or Working Group cannot reach consensus with respect to a particular decision, the Steering Committee or Working Group will make that decision by a Supermajority Vote of the Steering Committee or Working Group Participants, as applicable.
    4.2. Notifications and Electronic Voting . The Executive Director is responsible for issuing all notifications of meetings and votes of the Steering Committee and each Working Group chair is responsible for issuing all notifications of meetings and votes of the Working Group for which it is the chair, in each case subject to the following minimum criteria: (i) in-person meetings require at least 30 days prior written notice, (ii) teleconference meetings require at least 7 days prior written notice (this requirement only applies to the notification of the first meeting of automatically recurring teleconference meetings), (iii) electronic votes require no advance notice but must be made pursuant to a clear and unambiguous ballot with only “yes” and “no” options, and the voting must remain open for no less than 7 days. These notification requirements with respect to the Project or that particular Working Group may be overridden upon unanimous consent of the Steering Committee or all applicable Working Group Participants that have attended and participated in at least 50% of the last 4 meetings of the Steering Committee or Particular Working Group.

Also relevant is the definition of Working Group Participant:

15.15. “​Working Group Participant​” means a Steering Member, Associate, or Contributor who executed the Working Group Charter for a particular Working Group.

Transaction event logs (TELs) -- how to tackle this?

Hi,

The last KERI spec meeting covered the topic of how to address Verifiable Credential (VC) verification in case the key pair has been rotated (i.e., in the KEL). So the Transaction Event Log (TEL) concept was introduced by Sam.

As a reminder, this is the concept:
With such a concept it should basically be possible to create as many TELs as one would like, where each TEL serves exactly one state machine. So far so good, but immediately some questions appear. In particular, what kind of seal is the seal pointed to in the KEL in the above diagram: a digest seal that anchors the digest of a signed document (i.e., a VC)? If so, that would make the KEL include a digest of every signed document ever issued (so it would grow large) for such a statement to hold, which I am not so sure about. I may also not understand correctly the purpose of Transaction here, so an explanation in this area would be highly appreciated.
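For concreteness, here is a hedged sketch of the digest-seal interpretation asked about above: a verifier replays the KEL, indexes every anchored seal digest, and then checks a presented VC against that index. All names are hypothetical and the digest coding is a stand-in:

```python
import base64
import hashlib
import json

def qb64_digest(ser: bytes) -> str:
    # Illustrative digest coding; real KERI derivation codes may differ.
    dig = hashlib.blake2b(ser, digest_size=32).digest()
    return "E" + base64.urlsafe_b64encode(dig).decode().rstrip("=")

# Hypothetical index a verifier/watcher builds while replaying a KEL:
# anchored seal digest -> (sn, digest) of the anchoring KEL event.
anchored: dict = {}

def vc_is_anchored(vc: dict) -> bool:
    """True iff this exact VC serialization was sealed somewhere in the KEL."""
    ser = json.dumps(vc, separators=(",", ":")).encode()
    return qb64_digest(ser) in anchored
```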

Apart from the above questions, there are also more infrastructural concerns for which I currently don't see a clear answer. The simplest is how to build this thing. :-)

Let's elaborate a bit:
Imagine that there is an institution that issues some licenses and third parties (verifiers) would like to be able to do two things:

  • ensure that the license has been signed by that institution;
  • the license hasn't been revoked.

For the sake of this discussion, assume that whether it is the Open Loop Split Model or the Closed Loop Split Model (basically direct or indirect mode) is irrelevant.

To the questions/doubts:

  • how will the verifier be able to answer the first question, i.e., how does it verify that the license has been signed by the given issuer? Query the KEL and look for the digest of the VC (assuming the first paragraph of this GitHub issue stays valid)?
  • who controls the TEL, and what is the concept for TELs from the organizational point of view? In particular, should any issuer that issues anything provide such a TEL, and should any verifier be able to reach it somehow? In such a scenario, shall we care about, e.g., high availability, or is it the issuer's concern how to handle that? To put it another way: should any KERI controller provide the query logic for any additional TEL it provides?

I believe these questions come from the feeling that TELs seem to add some complexity to the KERI ecosystem. Whether it is a narrow or broad topic, I hope we can clarify it in this thread.

Witness Rotation and Economics (subject of future non-normative section)

I am just pulling out one great comment from the KERI <> Cert Transparency thread because I think it should be preserved as an open issue for later editorial work.

@fimbault wrote:
Ok but "just replace the witnesses" is easy only if you control them somehow (feasible in a corporate setting) or have a public estimate of the trust you can put into them. Otherwise I can hope for the best, choose randomly (supposing they're discoverable) and rerotate if needed, but what if I want more assurances? The question is how do you pick them? how many ? what's the economic model for witnesses?

I think there are many good questions condensed into this paragraph; maybe only one of them belongs in the spec (how does replacing the witnesses work as one step of the recovery-rotation process), but the others are definitely something to return to after the spec is v0.1!

Proposed Key Event State Messages

Key Event State or Key State Messages

There are two forms: one for when the message is signed by an entity with a non-transferable identifier, such as a witness or watcher, and the other for when it is signed by an entity with a transferable identifier, such as a controller or validator. The transferable form includes an event seal of the establishment event that provides the keys used to sign the event.

This proposal uses the two-character labels from the proposed new coding (see issue #75), but they can be changed to whatever we decide.

Message Kind (Ilk)

Non-transferable signer:
mt = "ksn"

Transferable signer:
mt = "kst"

The message for a non-transferable signer is shown below. A non-transferable sourced key state message has an attached couplet of signer identifier and signature. This is the same signature format as the non-transferable sourced receipt message.

{
  "vn"   : "1.0",   // Version Number major.minor  only string not full version string
  "ip"  : "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",  // identifier prefix qualified Base64
  "sn"   : "0",  // lowercase hex string no leading zeros
  "mt"  : "ksn",  // message type of this message
  "et"   : "rot",  // event type of latest event in KEL
  "dg" :  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",  // qualified Base64
  "st" : "1",  // lowercase hex string no leading zeros or list
  "kl" : ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],  // list of qual Base64
  "nk"  : "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",  // qualified Base64
  "wt" : "1",  // lowercase hex string no leading zeros
  "wl" : ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"],  // list of qualified Base64
  "eo" : "true",  // list of config ordered mappings
  "ee" : 
         {
           "sn":  "1",
           "dg":  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM"
          }
  "dd": "false",
  "dp": ""
}

-AABB8KY1sKmgyjAiUDdUBPNPyrSz_ad_Qf9yzhDNZlEKiMc0Bi6u-ogCjhGeXMUV0Vls9RbefJ-W_daYc6aBVPY5fqMsBYhkl47TrhbpescYp-yBcfkEEKUQHEpZhoXgzw3IeDQ

The message for a transferable signer is shown below. A transferable sourced key state message includes an event seal for the event in the signer's KEL that provided the keys used to sign. Attached are indexed signatures using the keys listed in the event indicated by the seal. This is the same signature format as the transferable sourced receipt message.

{
  "vn"   : "1.0",   // Version Number major.minor  only string not full version string
  "ip"  : "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",  // identifier prefix qualified Base64
  "sn"   : "0",  // lowercase hex string no leading zeros
  "mt"  : "kst",  // message type of this message
  "et"   : "rot",  // event type of latest event in KEL
  "dg" :  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",  // qualified Base64
  "st" : "1",  // lowercase hex string no leading zeros or list
  "kl" : ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],  // list of qual Base64
  "nk"  : "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",  // qualified Base64
  "wt" : "1",  // lowercase hex string no leading zeros
  "wl" : ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"],  // list of qualified Base64
  "eo" : "true",  // list of config ordered mappings
  "ee" : 
         {
           "sn":  "1",
           "dg":  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM"
          }
  "dd": "false",
  "dp": "",
  "es":
         {
             "ip":  "EJZAoTNZH3ULvYAfSVPzhzS6b5aU6JR2nmwyZ-i0d8CM",
              "sn":  "1",
              "dg":  "EULvaU6JR2nmwyAoTNZH3YAfSVPzhzZ-i0d8JZS6b5CM"
        }
}

-AAB
AAwyZ-i0d8JZaU6JR2nAoTNZH3ULvYAfSVPzhzS6b5CM0BIJB4pny2YpuB3m6-pgyl4cU65RRCASb9qBNXUerVs4IDYnai29AXcPQJtudLPfzfvehicA7LrswWBPmNlNQK9g
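A small sketch of how a parser might dispatch on the two proposed forms (labels per this proposal; the function name and dict interface are hypothetical):

```python
def signature_scheme(ksm: dict) -> str:
    """Classify a key state message by its message type, per this proposal."""
    mt = ksm.get("mt")
    if mt == "ksn":
        # Non-transferable signer: attached couplets of (prefix, signature);
        # verify directly against the key embedded in the signer's prefix.
        return "couplet"
    if mt == "kst":
        # Transferable signer: the "es" seal locates the establishment event
        # in the signer's KEL whose listed keys the indexed signatures use.
        return "indexed"
    raise ValueError(f"unknown key state message type: {mt}")
```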

Query Mode Format

  • Define abstract functionality of query replay modes
  • Define query message: new message type qry, qkl, or req
  • Define query string parameters

Security of KERI Software/Hardware Infrastructure.

Some comments on KERI security guarantees with respect to the root-of-trust in a SCID (self-certifying identifier).

One can say that as long as KERI signing and verification software runs in a trusted execution environment such as SGX, TrustZone, an HSM, a security controller, etc., that is evaluated to be “as secure as the key storage and RoT technology”, then one can make the claim that with KERI, as long as your secrets remain secret, the rest of the system is as secure as the crypto strength. How you protect your secrets is application dependent. However, one may use the threshold structure of KERI to enable secure signing and verification without TEEs.

Two Roots-of-Trust

The verifier's software/hardware needs to be as secure, from the perspective of the verifier, as the root-of-trust of the signer. But it depends on what the "verifier" is. A verifier may be a single device or employ a network of watchers. Indeed KERI was designed specifically so that a pool of watchers may be employed for the "verifier" or validator function. When a pool of watchers is employed then that pool may be designed to be as secure as needed but without using TEEs (see the detailed explanation below). Moreover, because KERI splits the system into two sides, there are two distinct roots-of-trust, each of a different kind. Because the verifier side is the easiest to manage, the KERI WP does not spend much time on it. The security of both hardware and software on both sides should be considered.

To elaborate, the promulgation side (the controller) uses a root-of-trust that is a SCID (self-certifying identifier) whose security is based on the controller's private keys. The confirmation side (verifier) has a distinct root-of-trust which is based on the verifier's hardware/software. The verifier's root-of-trust is independent of the controller's; it is not a function of the key management of the controller. However, both the controller's side and the verifier's side can multiply their attack surfaces such that relatively weak software/hardware on either side is compensated for with distributed consensus. So in the special case when the verifier is a single device or uses a single watcher, the watcher/verifier may very well benefit from using a TEE to have the same degree of security as the controller when the controller is using a TEE. But if the verifier is using a pool of watchers then the watchers as a pool can be as secure as a controller's TEE without any of the watchers using a TEE (see my further qualifications to this statement below).

Threshold Structure Security vs. TEE Security

One can have a long discussion about how secure a distributed consensus pool may be. But when comparing apples (key management and execution environment security) and oranges (distributed consensus security) approaches to security, it's hard to say that the security of a distributed consensus algorithm is necessarily less secure than the key management infrastructure root-of-trust of any of its nodes. Although as a general rule, in an apples-to-apples comparison, more complex is less secure.

This early influential paper by Nick Szabo does a good job of explaining why the two types of security are complementary and that the whole is greater than the sum of the parts.

Advances in Distributed Security

Szabo describes the concept of a "threshold structure", which underlies the ideas in the security guarantees of multi-signatures, distributed consensus, and MFA to name a few. KERI itself employs threshold structures namely, multi-sig and distributed consensus, and assumes that the key management is using best practices such as MFA.

What a threshold structure for security does for us is that we can have weaker key management or execution environment infrastructure individually but use a threshold structure to multiply our attack surface to achieve greater overall security. In other words, overall security is greater than that of any of the individual parts on its own. For example, in MFA the combination of something you have and something you know is much more secure than either of the factors by themselves.

As is well known, a distributed consensus algorithm is a provable algorithm using a set of nodes where the algorithm provides a security guarantee that is greater than the security guarantee of any of its constituent nodes. It makes a guarantee that in spite of a compromise of F of the 3F+1 nodes, the ledger written by the pool as a whole is still secure. So it's improving security by multiplying the attack surface. This may be more or less complex depending on how one defines complexity.

To restate, the value of Byzantine Fault Tolerant algorithms is that one can arbitrarily increase the degree of difficulty to an attacker by multiplying the attack surface using a distributed network of nodes with sufficient non-common mode failure vulnerabilities.

This applies to KERI as well. The witnesses and watchers independently multiply the attack surfaces of the promulgator and the confirmer (validator/verifier) such that each witness or watcher may be relatively insecure but the system as a whole is highly secure. A better way of re-stating the topic statement is that:

Through threshold structures, KERI enables the promulgator to make its system no less secure than the security of its secrets and enables the confirmer (validator) to make its security at least as secure as the security of the promulgator.

Obviously in resource constrained IoT applications there are limits to distributed consensus.

Nonetheless, the point of KERI is to capture the goodness of distributed consensus without the cost, latency, and throughput limitations of conventional blockchain distributed consensus algorithms for crypto-currencies. The most important feature of a crypto-currency is that it must be double spend proof. Because KERI's key event operations are idempotent they do not need to be double spend proofed, so we can greatly simplify the distributed consensus algorithm in KERI, which makes KERI relatively more attractive for IoT applications by comparison.

As a result of the relaxation of double spend proofing, KERI is able to break the distributed consensus algorithm into two halves and simplify it in the process. As mentioned above the two halves are the promulgation half and the confirmation half.

The promulgation half is under the control of the holder of the private keys. The controller of the private keys has sole control over the security guarantees of the signed key events and the promulgation of those events. The controller is not dependent on any other entity for its security. The controller is free to make its promulgation system (witness set) arbitrarily as secure as it desires, although there is no point in making it more secure than its root-of-trust in its key management. The promulgation infrastructure is simply receipting. The receipted events are then broadcast. No attacker can forge any signed event without compromising the signing infrastructure: in indirect mode this means more than F of the witnesses, and when using multi-sig, K of the N signatories. The worst an attacker can do without compromising the signing infrastructure of F of the witnesses is to DDOS or delete the broadcast. This is a well delimited and well understood type of attack with well understood mitigations. It's not a serious impediment.

The confirmation half is under the control of the validator (verifier). The validator is confirming events promulgated by the controller. The validator is free to design and implement the security guarantees of its confirmation system (watchers) to be as secure as it needs and is not dependent on any external entity for that. The verification happens at the edge, so an attacker must attack a multitude of edge verifiers (watchers) with little net gain. This multiplies the attack surface for the attacker. Mitigation mechanisms for attacks on verifier hardware/software are also well known. Given that verification is largely a fixed operation in software, the verification hardware/software can be deployed by the verifier in a manner that is not predictable by any attacker. Once deployed, the attacker must individually exploit each edge verifier (watcher). By multiplying the attack surface with multiple watchers, each validator may arbitrarily increase the difficulty of attack without having to use TEEs for its watchers.
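A minimal sketch of the threshold idea, assuming a hypothetical watcher interface:

```python
def confirm(event_dig: str, watchers, threshold: int) -> bool:
    """Accept an event iff at least `threshold` watchers independently report
    having verified it. `watchers` is a hypothetical pool whose members
    expose a verified(digest) -> bool method."""
    agree = sum(1 for w in watchers if w.verified(event_dig))
    return agree >= threshold

# For example, with 3F+1 watchers, tolerating F compromised: threshold = 2F+1.
```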

By splitting the attack surfaces into two independent sides, the controller (promulgation side) and the validator (confirmation side) may independently set their security levels by multiplying their own attack surfaces separately.

KERI by design puts verification at the edge. The weakest case is an application where there is only one verifier/watcher, such as in a mobile device under the physical control of the verifier. But even in this case verification at the edge means the attacker is fighting an asymmetric war where the asymmetry is in favor of the verifier. An attacker may only compromise one device at a time for a lot of effort. That does not mean that there are not common mode exploits such as attacking the crypto library on GitHub that the verifier software uses.

In use cases such as a remote payment card reading device, or some other similar application where the sole verifier is a known high value target, the verifying device benefits from being a TEE.

KERI's design enables the confirmer (validator) to make a trade-off. A given verifier could be secured by using a verifiable, attestable HSM (TEE) to perform the end verification operations on the KERI events, i.e., the watcher. This provides assurance to the validator that the cryptographic signatures are verified correctly and that the verification software/firmware on the watcher has not been tampered with. Or the validator may choose to use a threshold structure to secure the signature verifications of its watchers through a distributed consensus pool of confirming nodes that each perform the signature verification and then compare results. Or the validator could do a combination where each watcher verifier is using an HSM for signature verification so that fewer verifiers are needed because each verifier is more secure due to its HSM. Likewise the promulgator (controller) can make similar trade-offs for its witnesses: more witnesses without HSM/TEE for signature verification and receipting (signature generation), or fewer with HSM/TEE. But most importantly, the separation of control via the splitting up of promulgation and confirmation enables each to design and implement according to their desired guarantees of security.

As a use case, the end verifiability of KERI opens up an opportunity for HSMs for signature verification. Currently there are many low cost hardware devices (HSMs) such as the Ledger Nano, YubiKey, Trezor, etc. They provide signature generation, but not signature verification. With KERI there may be applications, especially in IoT, where verifiable, attestable HSMs may be used for event signature verification.

The Trusted Computing Group recently released a standard for low cost HSM chips that would be applicable to HSMs for signature verification for end verifiers:

https://trustedcomputinggroup.org/wp-content/uploads/TCG-DICE-Arch-Implicit-Identity-Based-Device-Attestation-v1-rev93.pdf

This combined with the IETF standard for remote attestation:

https://datatracker.ietf.org/wg/rats/about/

would enable a new class of HSM for end verification where a validator could query the HSM and be assured via a signed attestation that its signature verification firmware has not been tampered with.

Typically, mobile phone devices could be used as a type of general purpose HSM for KERI event signature verification, but a dedicated USB device that only does verification could provide a better security trade-off.

Verifier Assumptions

KERI makes some assumptions about the verifier.

  1. The software/hardware that is doing the verification is under the control of the verifier.
  2. The verification hardware/software is not dependent on any external infrastructure to do the verification at the time of verification other than to receive a copy of the events.
  3. The software/hardware that is verifying signatures has not been compromised: singly, when there is only one verifier/watcher, or multiply, above a threshold, when there are multiple watchers.

If the software/hardware that is under the control of the verifier has been compromised then no cryptographic operations run by the verifier may be trusted by the verifier.

But given 1), 2), and 3), the worst that an external attacker can do is a DDOS or delete attack to prevent the validator from seeing the events, not forge an event and trick a validator into accepting it.
Duplicity detection, which protects not against an external attacker but against a malicious controller, does require access to watchers that are also recording duplicitous events. So this may be called external, but it's generic, and the goal is to make it ambient such that a malicious controller may be easily detected.

One useful way of describing KERI is that it is a decentralized key management infrastructure based on key change events that supports attestable key events and consensus-based verification of key events.

With KERI security becomes largely a function of your own infrastructure such as key management and not some other entity's “trusted” internet infrastructure. Each controller of an identifier gets to pick their own infrastructure. This is a big advantage as it does away with the whole house of cards that are the “trusted entities” in DNS/CA.

It’s much easier to secure one's own keys well than to secure everyone else's internet computing infrastructure well.

Proposal: Eliminate VRC event type

Since transferable identifier receipt quadruples can be attached to events and receipts, it is redundant to continue to have a VRC event. I propose that we eliminate the VRC event type and only use RCT. We probably want to wait until after IIW so we don't mess up demos.

Tracking kid0004 update across implementations

DFS-serialization for serialized extracted data set creation poses a malleability vulnerability when weighted sig thresholds are introduced (as explained in this PR in keripy). To maintain interop, this must be updated. The new process is as follows (a sketch follows the list):

  1. collect inception data into an inception event with a dummy identifier prefix, which is just # repeated to the correct length for the chosen derivation code
  2. get the correct version/length string from the icp event and set the version/length string
  3. serialize
  4. use the serialization as input to prefix derivation
  5. set the icp identifier prefix to the derived prefix
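A minimal sketch of the procedure above, assuming JSON serialization and Blake2b as a stand-in digest (keripy's actual derivation codes and version string machinery differ in detail):

```python
import base64
import hashlib
import json

DUMMY = "#" * 44  # same length as a 1-char-code, 32-byte-digest prefix

def derive_prefix(event: dict) -> str:
    event = dict(event, pre=DUMMY)                       # 1. dummy prefix
    ser = json.dumps(event, separators=(",", ":")).encode()
    event["vs"] = f"KERI10JSON{len(ser):06x}_"           # 2. version/length
    ser = json.dumps(event, separators=(",", ":")).encode()  # 3. serialize
    dig = hashlib.blake2b(ser, digest_size=32).digest()      # 4. derive
    return "E" + base64.urlsafe_b64encode(dig).decode().rstrip("=")

# The vs placeholder must already have the final 17-char length so that the
# size computed in step 2 stays valid after substitution.
icp = {"vs": "KERI10JSON000000_", "pre": "", "sn": "0", "ilk": "icp"}
icp["pre"] = derive_prefix(icp)                          # 5. set derived prefix
```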

@chunningham @Sethi.Shivam please comment on this thread when OX and JS are PRd or merged to match

Follow up on Tuesday call

Dear WG members,

Thank you for joining the call Tuesday.

I hope we were able to clarify some of the misunderstandings and helped to bring everyone up to speed regarding the logistics of voting, the proposed move, and IPR implications of the above. If there is an outstanding question, please do not hesitate to contact me.

  • we agreed that all of you are interested in keeping the community together
  • we discussed the importance and added value of a ratified specification for referenceable IPR protection and recognition towards future implementers, distinct from the prototype codebases that are driving the design process
  • we talked about the differences between a spec and code/implementation
  • we clarified decision-making mechanisms and the importance of the community, and the value of hearing everyone's voice

While I am not an active contributor to the work in KERI WG, in my role as DIF operations lead,I hope we can collaborate closely in the future and be fully transparent and solution-oriented about issues that might arise. What I can do, however, is to pledge to advance in good faith the following initiatives:

  • Work with the Steering Committee & JDF Legal team to implement the Contributor Agreement for individuals (to replace the current Feedback Agreement and provide better protection and better opportunities to our members) as soon as possible.
  • Make sure to hear every individual's case who considers contributing to KERI WG and collaborate with the group and the SC and do our best to enable the person to contribute while maintaining DIF's IPR processes. In the past, our policy was not to entertain additional attestations or affidavits in direct compliance with JDF’s blanket policy on the subject, but we are discussing ways to do so on a case-by-case with JDF counsel.
  • Support the WG to edit the current KIDs into a “beta version” of the specifications and sub-specifications mentioned in the charter work as soon as possible so that all contributors can refer their implementations to a ratified specification hosted in DIF, even if it is partial and only covers base layers of the prototype.
  • Support the WG to make transparent decisions regarding the next steps.

For those who missed the call (2.5 hours), you can find the recording here

Digest Agility Adding back in sn to event seal

An earlier version of the protocol included the "sn" sequence number field in the event seal as follows:
JSON encoding for example.

{
  "pre": "Ebcdefgh...",
  "sn": "2",
  "dig": "E38c4rjto..."
}

This was changed to the following:

{
  "pre": "Ebcdefgh...",
  "dig": "E38c4rjto..."
}

One reason for the change is that the digest of the event is unique and can be used to look up the associated event in the KEL database merely by digest. This assumes, however, that the digest type used to index the event in the database is the same as the digest type in the seal. This is not a valid assumption in general unless the digest type for events entered in the database, for event chaining, and for event seals is fixed for a given version of the KERI protocol.

In addition, there is ambiguity as to which digest type to use when one gets an inception event, as the inception event itself does not have a digest of a preceding event, so we can't get a hint from the inception event itself. So anytime we store an event in the database by digest we have to assume a digest type. Any other digest of a different type requires recomputing that new type of digest on the event.

The decision is between the options below:

A)
Add back in the sn to the event seal. This means that lookup for the event referenced by the seal happens by sn, not digest. The lookup by sn then returns a digest that is then used to look up the actual event. The reason for this two-step index is that with a recovery the KEL may be forked and multiple events will have the same sn. If the digest type used to index the event into the database is different from the seal digest type, then a new digest of the event using the seal type must be computed to determine if the seal digest is indeed pointing to the correct event.

A complication would be if the event referenced by the seal is not a rotation (establishment) event. In this case multiple recoveries may have occurred so that the sn appears in multiple forks. Fortunately, for transferable receipts the seal may only refer to a rotation event, which may not be recovered and appear in multiple forks. Likewise a delegation seal may only refer to an establishment event, which may not be recovered and appear multiple times. Thus the latest event at that sn must be the event or else the seal is invalid. This means that we may only use event seals to refer to establishment events, which may not be forked. Interaction events may be forked and the same sn may appear multiple times. We may then have to search all the instances to find the match. I believe that in all cases of prior event matching, only the latest version of an event with the same sn is useful for matching the previous event digest on a new event. So we would only have to compare against the last event version, not all of them.

Likewise, when receiving a new event and checking the previous event digest against the key state, one would have to first compare digest types and, if they do not match, recompute the key event state digest with the type of the new event's prior digest to determine if the digests are for the same event.

In general anytime a digest comparison is made, the comparison must first compare the types of the digests and if different then recompute one in order to make a comparison between digests of the same type to see if they refer to the same data.
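A hedged sketch of that comparison rule, with hypothetical derivation code assignments:

```python
import base64
import hashlib

# Hypothetical map from one-char derivation code to digest function;
# 'E' and 'F' here are stand-ins, not normative code assignments.
DIGERS = {
    "E": lambda ser: hashlib.blake2b(ser, digest_size=32).digest(),
    "F": lambda ser: hashlib.sha3_256(ser).digest(),
}

def matches(dig: str, event_ser: bytes) -> bool:
    """True iff `dig` digests `event_ser` under dig's own derivation code."""
    raw = DIGERS[dig[0]](event_ser)
    return dig == dig[0] + base64.urlsafe_b64encode(raw).decode().rstrip("=")

def same_event(dig_a: str, dig_b: str, event_ser: bytes) -> bool:
    """Compare possibly different-typed digests, recomputing when needed."""
    if dig_a[0] == dig_b[0]:       # same type: direct comparison suffices
        return dig_a == dig_b
    return matches(dig_a, event_ser) and matches(dig_b, event_ser)
```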

B)
The alternative is to standardize on one digest type for a given version of the protocol. Then nothing changes. The disadvantage is that it might be a problem for some implementations (but any implementation that doesn't support a given digest type already has an interoperability problem), and it may make it worse in the sense that whatever digest we pick must be the best one. This would not affect digests for external data seals or for the nxt keys, but only the event seal and any digest comparisons of event digests. The advantage is we would not ever have to do the extra digest checks and computations nor incur the cost of adding sn to the event seal. We can assume that all events and event seals to events of a given type share the same digest type for a given version of the protocol. We would still have digest agility across versions but not by event.

C) Add an "own event digest type code" element to each event to indicate how to hash that event and reference it in general. This has the disadvantage of adding a new element to every event but does allow agility without incurring the type comparison and recomputation of digests when types don't match. So it's more bandwidth for all events vs. more computation for some events. All entries in databases and all seals of an event would use the hash type specified by the own digest type code element for that event. It would not require adding the "sn" field back into the event seal.

Please provide feedback as to concerns, etc., and preferences for which way to go to repair this.

Seal (Anchor) Format and Policy

Seal (Anchors) for arbitrary data:

Long discussion about arbitrary data anchored to KERI events via KERI Seals.
The original discussion was about how to make this anchoring extensible. The conclusion of that discussion was to have defined tags or labels corresponding to the purpose of the anchored data. However, after more consideration, having any type of exposed or disclosed purpose provides a correlation vector that hurts privacy. Also, because the data is not in the KERL, the only way to interpret or use the data is if it's provided externally. This means its purpose may also be disclosed externally when the data is provided. Thus there is no need to provide purpose extensibility, only anchor composition extensibility.

Proposal: seals do not have an exposed data purpose but merely an anchor function.

Three types of anchor functions:

  • digest: Digest
  • root: Hash Tree Root
  • prefix, sn, digest: Event Link

or, in compact form:

  • dig
  • root
  • id, sn, dig
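Expressed as ordered mappings (illustrative values only, borrowed from the seal examples elsewhere in this document):

```python
digest_seal = {"dig": "E38c4rjto..."}           # digest anchor
root_seal = {"root": "EwZtr54sHON1vWl6FE..."}   # hash tree root anchor
event_seal = {                                  # event link
    "pre": "Ebcdefgh...",
    "sn": "2",
    "dig": "E38c4rjto...",
}
```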

Background

The interpretation of the data associated with the digest or hash tree root in the seal is independent of KERI. This allows KERI to be agnostic about anchored data semantics. Another way of saying this is that seals are data agnostic; they don’t care about the semantics of its associated data. This better preserves privacy because the seal itself does not leak any information about the purpose or specific content of the associated data. Furthermore, because digests are a type of content address, they are self-discoverable. This means there is no need to provide any sort of context or content specific tag or label for the digests. Applications that use KERI may provide discovery of a digest via a hash table (mapping) whose indexes (hash keys) are the digests and the values in the table are the location of the digest in a specific event. To restate, the semantics of the digested data are not needed for discovery of the digest within a key event sequence.
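A minimal sketch of such a discovery table (hypothetical structure):

```python
# Hypothetical discovery index: seal digest -> (prefix, sn, event digest),
# i.e. the location of the seal within a specific key event.
index: dict = {}

def index_seals(pre: str, sn: str, event_dig: str, seals: list) -> None:
    """Record where each seal's digest or root appears in the event sequence."""
    for seal in seals:
        for label in ("digest", "root"):
            if label in seal:
                index[seal[label]] = (pre, sn, event_dig)

def locate(dig: str):
    """Content-addressed lookup: no purpose tag or semantics required."""
    return index.get(dig)
```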

To elaborate, the provider of the data understands the purpose and semantics and may disclose those as necessary, but the act of verifying authoritative control does not depend on the data semantics merely the inclusion of the seal in an event. It’s up to the provider of the data to declare or disclose the semantics when used in an application. This may happen independently of verifying the authenticity of the data via the seal. This declaration may be provided by some external application programmer interface (API) that uses KERI. In this way, KERI provides support to applications that satisfies the spanning layer maxim of minimally sufficient means. Seals merely provide evidence of authenticity of the associated (anchored) data whatever that may be.

This approach follows the design principle of context independent extensibility. Because the seals are context agnostic, the context is external to KERI. Therefore the context extensibility is external to and hence independent of KERI. This is in contrast to context dependent extensibility or even independently extensible contexts that use extensible context mechanisms such as linked data or tag registries [84; 101; 105; 135]. Context independent extensibility means that KERI itself is not a locus of coordination between contexts for anchored data. This maximizes decentralization and portability. Extensibility is provided instead at the application layer above KERI through context-specific external APIs that reference KERI seals in order to establish control authority and hence authenticity of the anchored (digested) data. Each API provides the context, not KERI. This means that interoperability within KERI is focused solely on interoperability of control establishment. But that interoperability is total and complete and is not dependent on anchored data context. This approach further reflects KERI's minimally sufficient means design aesthetic.

A seal is an ordered self-describing data structure. Abstractly this means each element of the seal has a tag or label that describes the associated element’s value. A minimal seal must have an element whose label indicates that the value is either a digest or the root of a hash tree. In either case (digest or root) the value is fully qualified Base64 with a prepended derivation code that indicates the type of hash algorithm used to create the digest or hash root. When the seal is referring to another event it must also include the sequence number (sn) and identifier prefix elements in order to indicate the location of the sealed event in its key event sequence. These element labels (digest, root, prefix, sn) are so far the only normative labels needed to process a KERI seal.
The data structure that provides the elements of a seal must have a canonical order so that it may be reproduced in a digest of the elements of an event. Canonical ordering may be provided by a list of single element mappings. The order of appearance of each {label: value} mapping in the list is standardized.

Currently there are three types of seals: digest, hash tree root (Merkle tree), and event. All are fully qualified Base64 with a prepended derivation code that indicates the hash algorithm.

Anchors are hash links, not a data format. Keep KERI pure (qua standard, and the logs it produces): no need to define tags in KERI.

Seals have labeled elements. A seal must have a digest or root (hash tree root) element. An event seal may also have sn and prefix elements.

Hash trees are sparse trees with finite tree depth, using a depth prefix on internal hash nodes. This prevents second pre-image attacks, similar to Certificate Transparency.

Hyperledger Indy uses a sparse Merkle tree library for its revocation registry. Is that a good library to use for KERI seal Merkle trees?

Original Discussion Notes


- Mitwicki (HCF): we use VCs this way, to contain a minimal payload (hashlink). secure data storage or more public data hubs can be linked this way.
    - Sam: not just hashlink but an identifier that has to be dereferenced 
    - Robert: We're using Manu's hashlinks (DRI) because it contains its own crypto suite etc, but we're a little worried that it exposes these logs to some kinds of correlation attacks.
    - Nader: Here's the reference for that hashlink standard https://tools.ietf.org/html/draft-sporny-hashlink-04
    - Sam: Tag-dependent level of correlatability/lock-in to one encoding or dereferencing dependency
    - Robert: [In our usage of VCs?], we have been thinking about the discovery mechanism via DID Resolution (going from DID to DID:Doc to Service Endpoint [to agent and thus to vault/storage and returned resource]
- 	Sam: Payloads should be encoding agnostic; mixing and matching should work. This is because KERI only cares about the anchor/digest format. The anchored data is not transported in the KERI event, so the anchored data can be any encoding.
- 	Mark L Smartopia: Kantara Initiative has been working on consent receipts along these lines, and standardizing on these [essentially legal] events across ontologies has been difficult; we've been able to standardize this and present it at ISO. A consent state record would be a notification and a receipt, and I think this could line up well; we have a GDPR delegation (in close dialogue w/ w3c data privacy WG on ontology for this). In the last year, lots of standards have been completed and we're working towards a common record format (legally required to be open).
    - 	Sam: anchor/tag structure and tag-specific link out to data works with your logging needs?
    - 	Sam: a parser needs to know what is anchored by the anchor and how to consume/interpret the digest; the "tag name" is just for a KERI-level categorization of event types
    - 	Mark: we think of delegation as tripartite: regulator, subject, authority; our main data object is also tripartite: notice (event), notification (intermediary), receipt (record)
    - 	Sam: I think you might be imposing semantics from a similar but different definition of authority; I am trying to keep this super agnostic (and specifically about control authority events); 
    - 	Mark: Notary/notification ledger: legal notary + SSI-based ledger; 
    - 	Sam: is KERI preventing you from using KERI in this way? Is this a use-case and can KERI be used without adding anything to it? 
        - 	documentable as a use case for accompanying documentation? **TODO ITEM?**
    - 	Mark: 
    - 	Michael Gorlick: Digest format bears a resemblance to something I worked on IPFS
    - 	Sam: not coincidental! Based on that precedent
    - 	Mark: See also https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=coel
    - 	Nader: See also https://github.com/ipld/specs/blob/master/block-layer/content-addressable-archives.md for IPLD assembling hashes into a digest


- 	Robert: Payload <----> tag relationship (discoverable semantics?) 
    - 	Sam: But I'm afraid that puts the cart in front of the horse and creates a dependency; I want the tag to define the form of the digest, but I want to indirect AWAY from that and force the semantics out to the reference; this is the secure overlay for the internet, not everybody's data format 
    - 	Charles: arbitrary data gets us away from minimal sufficiency
    - 	Sam: Just anchors, that's what I'm calling minimally sufficient; the spec should not dictate anything about what is anchored, no ontological or semantic conditions or restraints
- 	Robert: digest prefix ?
- 	Sam: This list of tags relates to the DID core spec registry of DID doc properties; KERI only cares about the prefixes and assumes a namespace of self-certifying identifiers (not just DIDs); KERI needs to be a secure spanning layer for identifiers that remains agnostic to secure events happening in specific namespaces/resolution spaces;
- stateless services and self-certifying identifiers
- 	example: a self-certifying did wrapped in a keri DID:Un
did:un:AVrTkep6H-Qt27fThWoNZsa884HA8tr54sHON1vWl6FE/path?kkey=me#flab

Sidetree / DID Peer Interop

For a while now, I have had a set of tests that look sorta like this:

  1. Keri Inception Event
  2. Serialization for Sidetree
  3. Serialization for DID Peer
  4. Keri Recovery Event
  5. Serialization for Sidetree
  6. Serialization for DID Peer

Today, I'm pretty sure that all these things would look really different... but in a year or 2... they might look more similar...

And since this is a Rust repo.... let me be the first to ask for a WASM module that we can get bindings for, so that it runs in web browsers / nodejs / golang / python / java / etc...

Signature/Digest Algorithm Forward Compatibility

The current encoding has to fail the connection when an unrecognized signature or digest material code is used. When these codes are used in attachments, there is not enough information for the protocol decoder to continue processing since the length of the material is unknown. Without that length, it is impossible for the decoder to know how far to advance beyond the unrecognized material. As a result, the connection enters a failed state.

To further illustrate, validator v1 provides a watcher with a receipt using a signature algorithm newly standardized in the KERI community. The watcher recognizes the code and successfully verifies the receipt. The watcher is then contacted by another validator, v2, that hasn't been updated yet and the watcher provides the receipt of v1 to v2. v2 encounters the signature with the new algorithm, doesn't recognize it, and doesn't know how long the signature is. v2 has to stop processing the stream. The watcher has no indication of why the connection failed. This can also happen with witnesses, though the validator case is interesting since it is not under the control or influence of the controller.

This is mitigated if the material is located in an attachment grouping whose count code carries a length in Base64 4-char quadlets, though it renders that group unreadable. If an attachment with an unknown code is outside one of these groupings, however, the failure is unrecoverable.
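To illustrate the distinction, here is a hedged sketch of a decoder that can skip an opaque member only when it sits inside a counted group; the framing details (a 2-char Base64 count of 4-char quadlets) are assumptions for illustration, not the normative encoding:

```python
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def advance(stream: str, known: dict) -> str:
    """Advance past the next attachment. `known` maps recognized material
    codes to their full length in Base64 chars; a leading '-' marks a counted
    group whose 2-char count gives the body size in 4-char quadlets."""
    if stream.startswith("-"):
        quadlets = B64.index(stream[2]) * 64 + B64.index(stream[3])
        return stream[4 + 4 * quadlets:]   # can skip even opaque members
    code = stream[0]
    if code not in known:
        # Bare unknown code: length unknown, decoder cannot re-synchronize.
        raise ConnectionError("unrecognized material code; stream failed")
    return stream[known[code]:]
```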

Inception Event Key Representations is defined in which RFC?

{
  "vs"   : "KERI10JSON00011c_",
  "id"   : "AaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",  // qualified Base64
  "sn"   : "0",  // lowercase hex string no leading zeros
  "ilk"  : "icp",
  "sith" : "1",  // lowercase hex string no leading zeros or list
  "keys" : ["AaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],  // list of qual Base64
  "next" : "DZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",  // qualified Base64
  "toad" : "1",  // lowercase hex string no leading zeros
  "wits" : [],  // list of qualified Base64
  "data" : [],  // list of config ordered mappings
  "sigs" : []  // optional list of or single lowercase hex string(s) no leading zeros
}

AaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM.... what is this, where is it defined (which RFC?)

Ideally, we can start with some known entropy, generate a well known key format (like base58btc multicodec / JWK) and then show how to convert that to a KERI key representation...

I am strongly opposed to inventing a new Base64-based key representation along with KERI.... if we do that, we had better make it configurable across the board.

Key rotation rules

This is based on the W3C DID issue here: w3c/did-core#386
The text is reproduced here because it references KERI and is relevant to basic use cases of KERI.

Revocation vs Rotation

The whole point of digital signatures is that they are non-repudiable; they commit the signer to a historical action in a way that they can't deny. This is the basis for holding a party accountable. It's the non-repudiation that makes them useful.

If I use the DID Doc assertionMethod to sign a mortgage, I cannot say that when I rotate my key, it invalidates my commitment to pay the mortgage. The bank gave me money, and I incurred a debt. I can certainly sign a mortgage and then rotate my key; this prevents my old key from being used to incur new debts, but it doesn't cancel my old one. Rotation cannot be retroactive, because it would give the signer unilateral power to reinterpret history.

Revocation is different. It presupposes a shared understanding between multiple parties that unambiguous history needs to be given new semantics based on future developments. With that understanding in place, it's possible to construct mechanisms that allow either party on their own, both parties together, or even third parties to apply new semantics to a historical event, no matter how strongly attested. "Yes, Fred really did sign this mortgage. No question about it. But Fred wasn't legally competent when he did so. Therefore the mortgage is invalid -- not because we deny existence of the signature, but because we are viewing the event with different semantics."

We have to be able to tell the difference between rotation (unilaterally change rules for the future) and revocation (joint agreement about how/when to change semantics we apply to the past).

If an issuer uses assertionMethod to sign a VC, the issuer cannot say that when they rotate the key, it cancels their commitment to the assertions they once stood behind. That's because an unknown number of verifiers have already made decisions based on the reputational capital the issuer staked against their assertions; allowing the issuer to back out is like letting a lender off the hook for a mortgage without any loan repayment. There exists no shared understanding that the issuer has this unilateral ability to escape accountability and reputational consequences for their actions.

Besides, if rotating a key cancelled all previous assertions, how would we do the other operation (the one that merely changes future possibilities, without attempting to apply a new lens to the past)?

I don't know what language in the spec, if any, we might want to revise to clarify this point, but I think there's no question this must be the understanding we impart.

Revocation

Yes, the term revocation is used in two completely different ways in the identity space. In the key management world one may speak of revoking keys. In the statement issuance, authorization issuance, or credential issuance space one may speak of revoking an authorization, a credential, or a token.

This becomes confusing when revoking keys also implicitly revokes authorizations signed with those keys.

KERI terminology usually avoids that confusion because a key rotation operation is the equivalent of a key revocation operation followed by a key replacement operation. So one operation, rotate, is implemented instead of two operations (revoke and replace). A bare revocation is indicated by replacement with a null key. So only one operation is needed, that is, rotate where a special case of rotation is to rotate to a null key.

Given the KERI definition of rotate as applied to keys, within KERI, revocation is usually unambiguously applied to mean revocation of authorization statements. When in doubt, just use the modifier: key revocation vs. statement revocation.

I hope that clarifies my terminology better.

Key Revocation vs. Signed Statement Revocation

Key rotation versus signed statement revocation: the authority of a signed statement is imbued to it by its signature and the keys used to create the signature. Is a signed statement authoritative/authorized after the keys used to sign it have been rotated? If not, then the statement is effectively revoked as no longer being an authoritative/authorized statement. If the statement is still authoritative/authorized after the keys used to sign it have been rotated, then it is not effectively revoked by the rotation itself but requires a separate signed revocation statement that rescinds/revokes its authoritative/authorized status. This revocation statement is signed by the current set of authoritative keys, which may be different from the keys used to sign the statement being revoked.

Authorization tokens, which are a form of signed statement, often employ rule 2): when the keys used to sign the token have been rotated, the token's authorization is revoked. Effectively the token is always verified by the current set of signing keys, so it will fail verification after rotation. Whereas in rule 1), verification is w.r.t. the set of signing keys used to create the signature at the time the statement was issued and signed. This means the verifier has to have a way of determining what the history or lineage of control authority was via a log or ledger, to know that a statement was signed with the authoritative set of keys at the time. This means that the log or ledger must not only log the lineage of keys (key rotation history) but also the statements signed by those keys (a digest of the statement is sufficient). Otherwise a compromise of the current signing keys (which rotation protects from) would allow an exploit to create verifiable, supposedly authorized statements after the keys have been rotated. So it must be rule 1, 2, or 3. And non-automatic revocation of signed statements requires a log of both the key rotation history and the signed statement history.

Obviously if keys are not rotatable, then any signed statement may not be revoked by merely rotating keys; instead a revocation registry may be used to determine if a signed statement has been revoked by explicitly using a revocation statement. So non-rotatable keys may use a modified rule 4) where there is no key rotation history log or signed statement log but merely a revoked statement log. Although typically non-rotatable keys are used for ephemeral identifiers, in which case a revocation log is not used. Instead of rotating keys for ephemeral identifiers, you just rotate the identifier (make a new one with a new set of keys) and abandon the old identifier and all its signed statements.

This makes sense to me. I always try to map these discussions into the healthcare use case where the physician is licensed by the DEA IFF they can be held accountable for signing a (controlled substance) prescription in a non-repudiable manner and also the prescription can be revoked when dispensed or "lost".

Under the DEA rules, the physician's authority to sign that prescription can flow one of two ways. Either the process and tech itself is certified and audited, or another DEA credentialed person (maybe another physician or notary) is held accountable for issuing the credential of the prescriber. The issuer physician acts effectively as a notary. They are examining the process and other claims of the would-be prescriber, signing using their non-repudiable signature AND, importantly, making a log entry that they, as the notary, will keep of this transaction. The notary binds the issuing process (typically represented by a document such as a loan) to a non-repudiable signature of the prescriber (because the prescriber shows the notary a deduplicated, legally binding credential like a driver's license.)

This use-case is relevant because it is decentralized. The self-sovereign prescriber chooses the notary. The notary's logs are not public or aggregated but they are secure and auditable. The notary's logs have a reference to the document (the digest @SmithSamuelM mentions). For some related detail, here's a link to the recent HIE of One DHS SVIP (rejected) proposal for Trustee(R) Notary which I truly hope some in our community help us with.

It's turtles all the way down in the sense that the self-sovereign prescriber then takes on the role of issuer when Alice wants her Adderall. The prescriber signs her prescription, coded as a verifiable credential, and keeps the log with the digest of the prescription in case of audit. Revocation of this VC remains an unsolved problem, and this is where the DID aspect of SSI decentralization threatens to break down, because the revocation mechanism has to be centralized and incredibly privacy-invasive for Alice. It's called a prescription drug monitoring program (PDMP) and the privacy issues they raise would fill a book.

Since this issue is about revocation of a VC, I would suggest that we need to give data holders the option to avoid the centralized revocation privacy problem by letting them authorize the verifier to get the VC directly from the issuer instead of passing the VC through an intermediate store. This is a privacy compromise because it leaks information to the issuer but it avoids the rotation problem because the VC can be ephemeral. Our SSI designs #382 must not take this option away from Alice. The logic is described in #370 (comment)

Delegator and Delegate communication

I tried to split the logic for Delegator and Delegate, but it seems to be quite tangled up.

I thought about a delegating event as an impulse for the Delegate to make a delegated event. However, the Delegator can't create one without a delegated event hash. If generating a delegation event pair starts with the delegated event, how does the Delegate know that it should create one?

So the question is how the Delegator and Delegate communicate with each other, and especially how, and by whom, the delegation operation is initiated.

implementation: where and when do we start?

In this doc we listed a few languages and put some names there:

Languages: Python, JavaScript (nodejs), Rust, Python+Rust
Implementations: keripy, kerijs, keriox
Discussion:
    Sam: Py
    Spherity (hi!): nodeJS
    Rust? JOLOCOM ftw (or at least Charles) “KeriOX”

@SmithSamuelM suggests we convene at the biweekly Identifier & Discovery group meetings hosted by DIF.

I'm a newbie at both Python and Rust (but experienced at Java & Golang) and I wish to get involved. Where do I track this work so I can get involved? Do I need to wait for the next DIF ID+Discovery call?

Rules for Recovery of Delegated Pre-Rotated Keys

Cooperative Recovery of Delegated Pre-Rotated Keys

Superseding Recovery

Supersede means that after an event has already been accepted as first seen into a KEL, a different event with the same sequence number is accepted that supersedes the pre-existing event at that sn. This enables recovery of events signed by compromised keys. The result of superseded recovery is that the KEL is forked at the sn of the superseding event. All events in the superseded branch of the fork still exist but, by virtue of being superseded, are disputed. The set of superseding events forms the authoritative branch of the KEL. All events, superseded or not, remain in the KEL and may be viewed in order of their acceptance, because the database stores events in order of acceptance and denotes this order using the first seen ordinal number, or fn. The fn is not the same as the sn (sequence number). Each event accepted into a KEL has a unique fn, but multiple events may share the same sn due to recovery forks.

Superseding Rules for Recovery at a Given SN (sequence number)

A. any rotation event may supersede an interaction event at the same sn. A non-delegated rotation may
not supersede another rotation at the same sn. An interaction event may not supersede any event. (This is the existing rule).

(B. and C. below provide the new rules)

B. A delegated rotation may supersede another delegated rotation at the same sn under either of the following conditions:
B1. The superseding rotation's delegating event is later than (its sn is higher than) the superseded rotation's delegating event in the delegator's KEL.
B2. The sn of the superseding rotation's delegating event is the same as the sn of the superseded rotation's delegating event in the delegator's KEL, and the superseding rotation's delegating event is a rotation while the superseded rotation's delegating event is an interaction.

C. If neither A nor B is satisfied, then recursively apply rules A and B to the delegating events of those delegating events, and so on, until A or B is satisfied. If neither is ever satisfied, then the superseding rotation is discarded. The terminal case occurs at the root (non-delegated) KEL, where either A or B must be satisfied or else the superseding rotation must be discarded.
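
To make the recursion concrete, below is a minimal sketch of one reading of rules A, B, and C, assuming a hypothetical event shape { ilk, sn, delegatingEvent } with ilk values "ixn", "rot", and "drt". This is an illustration of the rules above, not normative logic.

// Hypothetical shape: { ilk, sn, delegatingEvent }, where delegatingEvent is
// the anchoring event in the delegator's KEL (absent for non-delegated events).
function maySupersede(superseding, superseded) {
  if (superseding.ilk === "ixn") return false;      // A: interactions never supersede
  if (superseded.ilk === "ixn") return true;        // A: any rotation supersedes an interaction
  if (superseding.ilk === "rot" || superseded.ilk === "rot") {
    return false;                                   // A: non-delegated rotation vs rotation fails
  }
  // Both are delegated rotations ("drt"): compare their delegating events.
  const nd = superseding.delegatingEvent;
  const od = superseded.delegatingEvent;
  if (nd.sn > od.sn) return true;                   // B1: later delegating event wins
  if (nd.sn === od.sn && nd.ilk !== "ixn" && od.ilk === "ixn") {
    return true;                                    // B2: rotation delegation beats interaction
  }
  return maySupersede(nd, od);                      // C: recurse; the root (non-delegated)
                                                    // KEL terminates the recursion
}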

ECDSA secp256k1 derivation data lengths

Table 14.2 in the paper describes one-char derivation codes (for 32-byte inputs) and includes the secp256k1 public key derivation procedures. These public keys are actually 33 bytes in length (or, rather annoyingly, exactly 257 bits of information: a 256-bit x-coordinate plus one sign bit), so they'll actually need a 4-char code. Alternatively, they could be digested to 32 bytes, but this seems unnecessary. Tentatively I'd propose they be labelled A and B in the yet-to-have-rows 4-char table.
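
For context on why 33 bytes forces a longer code: Base64 emits 4 characters per 3 bytes, and the one-character derivation codes stand in for what would otherwise be pad characters (see the design decision on leveraging Base64 pad characters later in this page). A quick arithmetic check:

// Base64 encodes 3 bytes as 4 chars; leftover bytes surface as '=' pad chars,
// which KERI's derivation codes repurpose.
const padChars = (n) => (3 - (n % 3)) % 3;

console.log(padChars(32)); // 1 -> a 1-char derivation code can replace the single pad char
console.log(padChars(33)); // 0 -> no pad char to reuse; a 4-char code keeps 24-bit alignment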

Proposal for 2 character Labels on message elements

The current messages use short labels of 4 characters or less. This is a compromise between over-the-wire efficiency and understandability. New implementers of the protocol have noted that the short labels are not particularly informative, and it may be no worse to just go with shorter 2-character labels. One-character labels would lose any semblance of understandability. It seems that 2-character labels could be abbreviations of two-word descriptive labels, such that the abbreviation would still be sufficiently evocative of its meaning to be useful.

In the table below, the existing labels are provided in the first column, with a primary suggestion and secondary suggestions in successive columns:

| Label | Description | New | Alt1 | Alt2 | Alt3 |
|-------|-------------|-----|------|------|------|
| vs    | Version String | vs | | | |
| pre   | Identifier Prefix | px | ip | | |
| sn    | Sequence Number | sn | | | |
| ilk   | Message Kind | mk | ik | | |
| dig   | Previous Event Digest, Event Digest, Seal Digest | dg | ed | sd | |
| sith  | Signing Threshold | st | | | |
| keys  | List of Signing Keys | ks | kl | | |
| nxt   | Next Key Set Commitment | nx | nt | nk | |
| toad  | Threshold of Accountable Duplicity, Witness Threshold | wt | | | |
| wits  | List of Witnesses | ws | wl | | |
| cnfg  | Configuration Data List | cd | cl | | |
| cuts  | List of Witnesses to Cut | cs | cl | wc | |
| adds  | List of Witnesses to Add | as | al | wa | |
| data  | Payload Data List (Seal List) | pd | pl | sl | |
| seal  | Delegator Event Seal | ds | sl | es | |
| seal  | Signer Event Seal | ss | es | | |
| trait | Configuration Trait | ct | tt | | |
| root  | Merkle Tree Root Digest | rt | rg | rd | |
|       | Establishment Only | eo | | | |
|       | Last Establishment Event | le | ee | | |
|       | Delegated | dd | | | |
|       | Delegator Prefix | dx | dp | | |
|       | Version Number | vn | | | |
|       | Event Type | et | | | |

Ordered Mapping

This issue was to explore the concept and elicit feedback. The resulting proposal has been drafted into kid0003.md and a new issue is opened for kid0003.md

Ordered Mappings

Design Goals

  • future-proof crypto agility and event content
    crypto agility means variable-length crypto material; fixed fields are problematic

  • reasonably compact for IoT and data streaming applications
    msgpack and CBOR are more compact than json
    UDP and other transports besides TCP
    reasonable element labels

  • KERI is a well defined, limited application, so it may trade off generality for increased performance (i.e. it is not as general as IPFS, so it may be more optimized)

  • reasonably web friendly. Supports HTTP/JSON but not HTTP only or JSON only

  • reasonably programmer friendly
    use labeled mappings (label, value) and arrays not just concatenated values if possible

  • events are only signed by the controller, so ordering of labels within events per se is not strictly required

  • event serialization discovery is simplified if labels are ordered so that version string label is always first. Discovery better supports other transports besides http

  • extracted data sets for digests must be reproducibly ordered; this is not optional.

Extracted Data

  • Needs to be repeatable so ordering matters
    Extracted Data Sets:
    • Inception event elements used to derive digest of self-addressing prefix.
    • Inception event elements used to derive signature of self-signing prefix.
    • Event elements used to derive digest of next key set with threshold
    • Event elements used to derive digest in seal of delegated prefix inception event.
    • Seal elements.

Design Decisions

  • Events are mappings and serialized as mappings.

  • Base64 is the most compact encoding for crypto material that is still web friendly. We don't need to support multiple web-friendly encodings; one is enough, with no need for more generality such as multiple bases.

  • Base64-compatible derivation code. Supports crypto agility and is future proof. Most compact representation, leveraging Base64 pad characters. KERI's constrained context means we only need to encode the crypto suite, not the context. One is enough; no need for more generality.

Ordering Choices

Fixed fields are not viable because crypto material may be of variable length.

  • extract to concatenated element values
    most compact, but least programmer friendly and least future proof
    Order preserving

  • extract to lists of lists
    compact, and more programmer friendly and future proof, but still not very programmer friendly
    Order preserving

  • extract to list of lists (list of tuples) of (label, value) pairs
    less compact, but much more programmer friendly and fully future proof
    encoding independent.
    Order preserving.

  • encode mapping without extraction
    least compact (although compact label names can be used) but most programmer friendly and future proof. Not order preserving, however, unless additional ordering rules are applied.

Mapping Ordering Choices

This proposal was originally based on the new ordered-mapping support in Javascript ES6 for the Map object. Unfortunately, after looking more closely, JSON.stringify does not transparently support serializing Map objects even with the replacer/reviver options; these change the contents of the serialized mapping, so it is not interoperable. This proposal has been revised to reflect the more limited choices. Instead, the ownPropertyKeys method does preserve property creation order and may be used with a replacer function to correctly serialize Javascript objects in property creation order. This needs to be verified.

  1. Sorted Label Mapping enforces lexicographic ordering in JSON.
    Ordering is lexicographic, not logical. Must play games with names to ensure the natural logical order;
    for example, for the version string to be first, its label has to be "_vs".
    May not use ordering semantically.
    Order of appearance or missing elements may not carry meaning.
    Use JSON.stringify with a replacer to sort lexicographically.

  2. Sorted by creation order using a replacer function and the ownPropertyKeys method in Javascript.
    Ordering is logical, not lexicographic.
    Do not have to play games with the version string label; "vs" is ok.
    May use ordering semantically.
    Use JSON.stringify with a replacer to sort by ownPropertyKeys.

Proposal

The proposal is twofold:
A) Impose an ordered mapping constraint on KERI event serializations to ensure the first field element is the version string, enabling easier discovery.
For Javascript this may be accomplished by applying one of two options to the JSON.stringify method:
1- sorting the fields lexicographically, using the modified version string label "_vs" to ensure the version string element sorts first.
2- sorting the fields logically (creation order) to ensure the version string element is first, assuming a Javascript replacer using ownPropertyKeys works with the unmodified "vs" label.

B) Extracted subsets of KERI events.
1- Use mappings sorted logically assuming Javascript ownPropertyKeys works.
2- Use list of lists of (label, value) pairs to ensure consistent ordering of extracted data sets
3- Use list of lists of values to ensure consistent ordering of extracted data sets
4- Use concatenated values from lists of lists of (label, value) pairs or lists of values

The options in B are sorted in order of decreasing preference.

Because extraction sets are not sent over the wire, they do not impinge on normal JSON REST tooling.
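
For a rough illustration of option B2 above (with hypothetical field names and placeholder values; the actual extraction sets are the ones listed under Extracted Data), an extraction could look like:

// Hypothetical event; the labels list fixes the extraction order (option B2).
const event = { vs: "KERI10JSON00011c_", keys: ["DaU6..."], sith: "1", nxt: "E..." };
const labels = ["sith", "keys"];                     // order fixed by the spec, not by the event
const extracted = labels.map((l) => [l, event[l]]);  // list of [label, value] pairs
const digestInput = JSON.stringify(extracted);       // reproducible bytes to digest
console.log(digestInput);                            // [["sith","1"],["keys",["DaU6..."]]]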

Discussion:

One of the complications of serialization of mappings is whether or not the elements of the
mapping appear in any given order. The best ordering is insertion order. This allows logical
ordering independent of lexicographic ordering of the labels. This also shows up as the canonicalization order problem. For small, well defined data structures like KERI events, imposing an ordering is straightforward. The complication is trading off ordering against programmer convenience or ease of understanding. Data structures that support labeled fields ease understanding for programmers. A universally applicable data structure that imposes ordering but still provides self-describing labels of the elements is a list of lists of (label, value) pairs. This is not quite as programmer friendly as a mapping with {label: value} pairs. Unfortunately, mappings were historically implemented with hash tables that had no predictable ordering. Since Python 3.6 all dicts are ordered by default, and the various Python JSON parsing libraries preserve that ordering by default, so it comes for free. Ruby objects are hash table based and have been ordered since 2007. As far as I can tell, the Rust serde library allows ordered mapping to/from JSON serializations, so it's not a problem in Rust. I don't know about other languages like Go or Java, but I anticipate they are fine because their objects are more structured (not hash-table-based objects like those in Python and JavaScript). In binary languages like Rust, data structures are explicitly laid out, so we do not have the hash table problem of unpredictable order that objects in Javascript, and until recently dicts in Python, had.

Using ordered event mappings, at least for the first value, helps with serialization discovery.
If logical ordering is possible, it is also more user friendly than lexicographic ordering.

Automatic determination of the serialization encoding of the event when parsing it becomes trivial when the serializations are guaranteed to preserve the order of at least the version string element. This means we can depend on the "vs" or "_vs" element being the first element serialized. Parsing the encoding then means looking at the first few bytes to extract the version string, which will always appear in an unambiguous location at the front of the serialization instead of some variable location within it.
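
As an illustrative sketch of such discovery in nodejs (the offsets and regex here are examples, not normative):

// Peek at the leading bytes; with an ordered mapping the version string is
// guaranteed to appear near the front regardless of encoding.
function sniffKind(raw) {
  const head = Buffer.from(raw).slice(0, 24).toString("latin1");
  const match = /KERI\d\d(JSON|CBOR|MGPK)/.exec(head);
  if (!match) throw new Error("version string not found");
  return match[1]; // serialization kind, e.g. "JSON"
}

console.log(sniffKind('{"vs":"KERI10JSON00011c_",')); // JSON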

Ordered extraction-set mappings are more user friendly than lists of lists of (label, value) pairs; this would make the library a little more programmer friendly. We must impose ordering on extracted sets of elements in order to reliably compute digests. This may be done universally by using lists of lists of (label, value) pairs, which is the way the paper is written, but this is less programmer friendly than using ordered mappings that serialize and de-serialize in order. This is dependent on Javascript support for ordered serialization in logical order (object property creation order via the ownPropertyKeys method).

Ordered Javascript Objects using ownPropertyKeys Method

We may impose ordered serialization with the JSON.stringify method by using the replacer option and the [[ownPropertyKeys]] internal method added in Javascript ES6 (2015), exposed as Reflect.ownKeys(). [[ownPropertyKeys]] uses property creation order. At the time of ES6, JSON.stringify was not required to use the [[ownPropertyKeys]] iteration order (i.e. property creation order); this appears to have been fixed in later versions of ES (see the discussion below). As of this writing, nodejs v14.2.0 and the latest versions of Chrome, Safari, and Firefox all appear to preserve property creation order in JSON.stringify(). This means we can depend on and use that ordering in the specification. For earlier versions that support ES6, a custom replacer function for JSON.stringify(x, replacer) will ensure that property creation order is used.

Using Reflect.ownKeys() that implements [[ownPropertyKeys]]

I adapted the following nodejs code to ensure that we may enforce property creation order in JSON.stringify for ES6 or later versions of JavaScript.

"use strict"
const replacer = (key, value) =>
  value instanceof Object && !(value instanceof Array) ? 
    Reflect.ownKeys(value)
    .reduce((ordered, key) => {
      ordered[key] = value[key];
      return ordered 
    }, {}) :
    value;

// Usage

// JSON.stringify({c: 1, a: { d: 0, c: 1, e: {a: 0, 1: 4}}}, replacer);

var main = function() 
{
    console.log("Running Main.");
    
    let x = { zip: 3, apple: 1, bulk: 2, _dog: 4 };
    let y = JSON.stringify(x);
    console.log(y);
    console.log(Object.keys(x));
    console.log(Object.getOwnPropertyNames(x));
    console.log(Reflect.ownKeys(x));
    console.log(replacer);
    let z = JSON.stringify(x, replacer);
    console.log(z);
}

if (require.main === module) 
{
    main();
}

Console Output

Running Main.
{"zip":3,"apple":1,"bulk":2,"_dog":4}
[ 'zip', 'apple', 'bulk', '_dog' ]
[ 'zip', 'apple', 'bulk', '_dog' ]
[ 'zip', 'apple', 'bulk', '_dog' ]
[Function: replacer]
{"zip":3,"apple":1,"bulk":2,"_dog":4}

https://stackoverflow.com/questions/30076219/does-es6-introduce-a-well-defined-order-of-enumeration-for-object-properties/30919039

In ES6 the stringify order does not follow the ownPropertyKeys (property creation order) ordering,
but this will be fixed in ES2020 (ES11). It looks like Babel supports ES2020 already.

This question is about EcmaScript 2015 (ES6). But it should be noted that the EcmaScript 2017 specification has removed the following paragraph that previously appeared in the specification for Object.keys, here quoted from the EcmaScript 2016 specification:

If an implementation defines a specific order of enumeration for the for-in statement, the same order must be used for the elements of the array returned in step 3.
Furthermore, the EcmaScript 2020 specification removes the following paragraph from the section on EnumerableOwnPropertyNames, which still appears in the EcmaScript 2019 specification:

Order the elements of properties so they are in the same relative order as would be produced by the Iterator that would be returned if the EnumerateObjectProperties internal method were invoked with O.
These removals mean that from EcmaScript 2020 onwards, Object.keys enforces the same specific order as Object.getOwnPropertyNames and Reflect.ownKeys, namely the one specified in OrdinaryOwnPropertyKeys. The order is:

1. Own properties that are array indexes [1], in ascending numeric index order
2. Other own String properties, in ascending chronological order of property creation
3. Own Symbol properties, in ascending chronological order of property creation

[1] An array index is a String-valued property key that is a canonical numeric String [2] whose numeric value i is an integer in the range +0 ≤ i < 2^32 − 1.

[2] A canonical numeric String is a String representation of a Number that would be produced by ToString, or the string "-0". So for instance, "012" is not a canonical numeric String, but "12" is.

It should be noted that all major implementations had already aligned with this order years ago.
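
The ordering above is easy to observe directly in nodejs:

// Integer-like keys first in ascending order, then string keys in creation
// order, then symbol keys in creation order.
const o = { b: 1, 2: "two", a: 3, 1: "one", [Symbol("s")]: 5 };
console.log(Reflect.ownKeys(o)); // [ '1', '2', 'b', 'a', Symbol(s) ]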

The replacer above is similar to the following code snippet, except that instead of lexicographic sorting it uses the ownPropertyKeys order.
The following was used as example code and modified above.
https://stackoverflow.com/questions/16167581/sort-object-properties-and-json-stringify

https://gist.github.com/davidfurlong/463a83a33b70a3b6618e97ec9679e490

// Spec http://www.ecma-international.org/ecma-262/6.0/#sec-json.stringify
const replacer = (key, value) =>
  value instanceof Object && !(value instanceof Array) ?
    Object.keys(value)
      .sort()
      .reduce((sorted, key) => {
        sorted[key] = value[key];
        return sorted;
      }, {}) :
    value;

// Usage

// JSON.stringify({c: 1, a: { d: 0, c: 1, e: {a: 0, 1: 4}}}, replacer);

JSON serialization ordered using ES6 Map Objects NOT VIABLE

Unfortunately, although Javascript ES6 adds a new datatype called a Map that supports defined, ordered elements that may be iterated in order and serialized/deserialized in order, JSON.stringify does not transparently support serializing these objects in a way that both preserves ordering and maintains an interoperable JSON format. To clarify, Map objects in Javascript ES6 may be JSON encoded/decoded in an order-preserving manner using the JSON.stringify and JSON.parse functions with replacer/reviver options, but these change the format of the resultant JSON.

After looking more closely: the replacer/reviver approach changes the contents of the serialized mapping, so it is a different serialization. This makes it not viable for interoperability.

Left here for future reference.
https://stackoverflow.com/questions/29085197/how-do-you-json-stringify-an-es6-map

Both JSON.stringify and JSON.parse support a second argument, replacer and reviver respectively. With the replacer and reviver below it's possible to add support for the native Map object, including deeply nested values:

function replacer(key, value) {
  const originalObject = this[key];
  if(originalObject instanceof Map) {
    return {
      dataType: 'Map',
      value: Array.from(originalObject.entries()), // or with spread: value: [...originalObject]
    };
  } else {
    return value;
  }
}
function reviver(key, value) {
  if(typeof value === 'object' && value !== null) {
    if (value.dataType === 'Map') {
      return new Map(value.value);
    }
  }
  return value;
}

Usage:

const originalValue = new Map([['a', 1]]);
const str = JSON.stringify(originalValue, replacer);
const newValue = JSON.parse(str, reviver);
console.log(originalValue, newValue);
Deep nesting with combination of Arrays, Objects and Maps

const originalValue = [
  new Map([['a', {
    b: {
      c: new Map([['d', 'text']])
    }
  }]])
];
const str = JSON.stringify(originalValue, replacer);
const newValue = JSON.parse(str, reviver);
console.log(originalValue, newValue);

next type ambiguous

next = next digest of ensuing threshold and signers (icp, rot, dip, drt)

This appears to be a commitment scheme; I would prefer to have it named commitment or similar... and the structure of the commitment input defined in JSON.

Revised KSN key state notification message

In implementing the "ksn" key state notification message I found that not having the "s" (sequence number) field at the top level was jarring and confusing, due to the inconsistency with every other message type, which does have it at the top level. Indeed, when leveraging code for creating messages or when reasoning about the code, it kept tripping me up not to find it there. We have several fields at the top level in the key state message that are also at the top level in other messages, but not the "s" field. This lack of consistency also bubbles up into the parametrization of utility functions, etc.

I don't think any other implementations have implemented the key state message, so I don't think it would affect anyone but the Python implementation, and fixing it is worth the improvement in consistency with other messages and the improved coherence that results when reasoning about the code. It also moves three fields essentially used to compare key state with key events in a log back to the top level instead of nesting them, thus simplifying the comparison code syntax.

I understand that we added a nested dict to hold fields from the latest current event in the key state. The reason for adding that dict was that the "t" field for the latest current event type ('icp' or 'rot') would conflict with the "t" field for the 'ksn' type of the key state message; rather than change the label, we added a nested dict with the "e" label. But we also moved two other fields into "e" that didn't need to be moved: the "s" and "d" fields, which are unique at the top level and are consistent with other messages that have them at the top level. The problem is that a nested block holding only one field (the current event's "t") seemed out of place, so two other fields were moved as well, instead of merely adding a new unique label. Given my experience implementing this, I now think that was not the best choice. After doing the implementation it became apparent to me that adding a new label would have been simpler, less confusing, and easier to implement and reason about.

The proposal is as follows:

  • remove the "e" block
  • move the "s" and "d" fields from the "e" block to the top level
  • add a new field at the top level with the label "te" for the message type of the latest current event

The proposed revised key state message looks like this.

{
  "v": "KERI10JSON00011c_",
  "i": "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
  "s": "2",
  "t": "ksn",
  "d": "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",
  "te": "rot",
  "kt": "1",
  "k": ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
  "n": "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
  "wt": "1",
  "w": ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"],
  "c": ["eo"],
  "ee":
    {
      "s":  "1",
      "d":  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",
      "wr": ["Dd8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CMZ-i0"],
      "wa": ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"]
    },
  "di": "EJZAoTNZH3ULvYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8",
  "a":
    {
      "i":  "EJZAoTNZH3ULvYAfSVPzhzS6b5aU6JR2nmwyZ-i0d8CM",
      "s":  "1",
      "d":  "EULvaU6JR2nmwyAoTNZH3YAfSVPzhzZ-i0d8JZS6b5CM"
    }
}

Instead of the current:

{
  "v": "KERI10JSON00011c_",
  "i": "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
  "t": "ksn",
  "kt": "1",
  "k": ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
  "n": "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
  "wt": "1",
  "w": ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"],
  "c": ["eo"],
  "e":
    {
      "s": "2",
	  "t": "rot",
      "d": "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",
    },
  "ee":
    {
      "s":  "1",
      "d":  "EAoTNZH3ULvaU6JR2nmwyYAfSVPzhzZ-i0d8JZS6b5CM",
      "wr": ["Dd8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CMZ-i0"],
      "wa": ["DnmwyYAfSVPzhzS6b5CMZ-i0d8JZAoTNZH3ULvaU6JR2"]
    },
  "di": "EJZAoTNZH3ULvYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8",
  "a":
    {
      "i":  "EJZAoTNZH3ULvYAfSVPzhzS6b5aU6JR2nmwyZ-i0d8CM",
      "s":  "1",
      "d":  "EULvaU6JR2nmwyAoTNZH3YAfSVPzhzZ-i0d8JZS6b5CM"
    }
}

Should the DID method spec include how to get a DID Doc from the DIDDoc Metadata?

I notice that there is not an example of an actual DIDDoc produced from a Key Event State, along with a definition of how it would be produced. Should that be documented? The purpose of a DID Method is to resolve a DID to a DIDDoc, so I think the process for producing the DIDDoc should be documented in the DID Method spec, beyond "It may include verification method and won't include service".

My assumption is that a KERI DID Resolver is a piece of software that would take a DID, (somehow) find the KEL for the DID, process the KEL to get to a Key State, and then produce a DIDDoc based on the content of the Key State. AFAIK, all the steps up to that last one are documented elsewhere, and that step should be documented in the DID Method document.

A (mostly related) question: Is there a way to insert an endpoint into the KEL, surface it in a Key State, and then insert it into a DIDDoc derived from that Key State? That would seem to be a good use of a KERI identifier -- a stable ID, with provenance, a public key, and a way to contact the controller of the key.

Perhaps a separate issue: I would think the "version_id" and/or "version_time" DID resolution parameters should be supported in the DID Method. As I understand it, the KEL would be processed up to the requested version's Key State, and the DIDDoc produced from that state.

Compact Version String

Use Compact version string instead of full mime type:

KERI_cbor_1.0
KERI_json_1.0
KERI_mgpk_1.0

KERI_application/keri+json_1.0

Even if we use the full MIME type, it has to be extracted from the version string, and we have to parse the version string for the version number anyway. So the compact form adds little extra work and is more compact, for better performance.

Version = (1, 0) (major, minor)

major increments with backward breaking changes
minor increments with backward compatible changes
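
As an illustrative sketch of parsing the proposed compact form (the regex and field names here are assumptions based on this proposal, not normative):

// Parse a "KERI_<kind>_<major>.<minor>" compact version string.
function parseVersion(vs) {
  const m = /^KERI_(json|cbor|mgpk)_(\d+)\.(\d+)$/.exec(vs);
  if (!m) throw new Error("unrecognized version string: " + vs);
  return { kind: m[1], major: Number(m[2]), minor: Number(m[3]) };
}

console.log(parseVersion("KERI_json_1.0")); // { kind: 'json', major: 1, minor: 0 }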

Difference in semantics of message fields in events vs receipts

The identifier prefix has different semantics among message types. In events, the identifier prefix identifies the generator and signer of the message. In receipts, the identifier prefix relates to the event being receipted. The current structure does allow the system to work, though it was counter-intuitive when I first encountered it.

To be more intuitive, I would propose the following structure for receipts:

{
  "v" : "KERI10JSON00011c_",
  "i" : "AYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8JZAoTNZH3ULv", 
  "s" : "4", 
  "t" : "rct",
  "d" : "DZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
  "a" : 
        [
          {
            "i" : "AaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
            "s" : "1",
            "d" : "DZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM"
          }
        ]
}
(signature count)(signature)

The structure appears the same as a receipt by a transferable identifier, but the root (i, s, d) coordinates are swapped with the "anchor" (i, s, d) coordinates. This is more verbose, for sure, but it also parallels more closely the events that anchor data.

One nice thing this does is make receipts work and appear similar to interaction events. Indeed, they are very parallel: the anchored data/seals in receipts, interaction events, and rotation events are attestations, by the identifier identified at the root level of the message, about the anchored data.

Simplify Delegated Events using Hetero Attachments

Background

When the delegated inception and delegated rotation events were first designed and implemented, we did not have support for complex grouped hetero attachment types. Now that we do, we are able to simplify some of the events. Recent work on event types for Transaction Event Logs (TELs) let us do that design informed by the capability of hetero attachments, which allowed us to simplify it. TEL events share some similarities with delegated identifier events in terms of their security posture: both are authorized in a KEL (by an Issuer or Delegator, respectively), so they share some of the same security considerations. This thinking about TELs prompted an understanding of a way to simplify delegated events in KERI.

Performance Problems Due to Synchrony Constraints of Existing Delegation Operation

Recall that the protocol for creating delegated inception and rotation events involves an exchange with two handshakes between the Delegator and Delegate. During this exchange the Delegator may not create any new events in its KEL. The Delegator is blocked until the exchange completes, which may include network timeouts due to delayed or dropped packets or connections. This creates a synchrony constraint which may cause performance issues at scale for Delegators. This was always recognized as a potential performance issue, but at the time there did not seem to be an alternative. With hetero attachments we now have an alternative that eliminates any synchrony in the exchange. The Delegator may create events asynchronously. The Delegate must still block, but the new proposed exchange has only one handshake, not two.

For example, suppose we have a delegated identifier node that wants to do a rotation. It has to submit to the delegator node a request for a rotation. The delegator then responds with the event sn, type, and prior event digest of a new event it will create with an event seal to anchor the delegation. The delegator now has to wait for the delegate to respond with the finished delegated event, so the delegator can digest it and put an anchoring seal to the finished delegated event in its new event. Once that is done, the Delegate must get confirmation of the creation of the anchoring event in the Delegator's KEL before it is safe to publish the delegated event in its own KEL. If any of these steps times out due to network problems, it becomes a performance issue for the delegator: the delegator has to block its processing, waiting for a subsequent response from the delegate, before it can finish creating the delegating event or any subsequent events.

The reason for the exchange is that the delegated events include, in the signed portion of the event, a location seal containing the tuple (i, s, t, p): the compact labels for, respectively, the delegator's identifier prefix, the sequence number of the delegating event, the event type, and the digest of the event prior to the delegating event.

Security Considerations

After some thought, however, only the identifier prefix i needs to be in the body of the delegated event. This makes a cryptographic commitment by the delegate to the delegator's key via the identifier. The other fields (s, t, p) merely enable a verifier to look up the delegating event and find the anchoring seal; they may be provided in an attachment for this lookup to occur.

From a security posture, KERI is not doing an atomic swap when it creates a delegation. A delegation merely authorizes the delegated event. Multiple authorizations from the same source (Delegator) provide equivalent authority: any one will do, and more than one doesn't change that authority. For security, the delegate needs to uniquely commit to the delegator as the source, and the delegator needs to commit to the delegated event. If the Delegate did not commit to a Delegator, then the same delegated event could be authorized by multiple Delegators, which is an attack. But once the Delegate commits to a given Delegator, only one KEL will be authoritative for delegating seals (delegations). Which specific event creates the delegation (delegating seal), or whether more than one does, does not matter; they are all similarly verifiable as authorized. All a verifier needs is a reference to find one of the delegating seals; which one doesn't matter.

To elaborate, I see no advantage to an attacker in having multiple authorizing seals for the same delegated event. The authoritative ordering of authorized events is determined by the order in which they appear in the delegate's KEL, not the order in which the authorizing seals appear in the delegator's KEL. The first-seen policy of any verifier of both the delegate's KEL and the delegator's KEL is what is decisive for fully verifying a delegate's KEL. But the only ordering that is verified when verifying a delegate's events is the ordering of events in the Delegate's KEL as determined by that KEL. The verifier first verifies a new event with respect to the delegate's latest event KEL sequence and digest and then looks up whether there is a delegating seal in the Delegator's KEL. It does not care if there happen to be other copies of that very same delegating seal in other events in the delegator's KEL. Any one would satisfy its verification logic; it just needs one.

To reiterate, from a security perspective it doesn't appear to matter if there is more than one delegating event. Any one of the events is sufficient to authorize the delegation. The number or order of delegating events doesn't appear to matter as long as there is at least one. They all make the identical commitments to the very same delegated event.

There may be an odd corner case where the delegated event anchors do not appear in the same order in the delegator's KEL as they appear in the delegate's KEL. I don't think this matters. The ordering that is authoritative is the delegate's KEL, not the delegator's. Moreover, a malicious delegator can't predict a future rotation event in the delegate's KEL because it doesn't know the delegate's pre-rotated keys. So the corner case would only happen if the delegate created multiple delegated event requests close together and the delegator processed them asynchronously, that is, verified them in one processing order (which would have to be the delegated KEL order, or they would not verify) but then anchored them in a different order in its delegating KEL. The first-seen rule of a verifier when verifying the delegate's KEL means the verifier won't accept a delegated KEL event unless it has already been anchored in the delegator's KEL and the verifier can look it up. So it will only verify in the delegated KEL's ordering, not the looked-up anchor ordering, since it doesn't care when or where the anchor appeared in the delegating KEL, just that it has already appeared by the time the verifier does the lookup, and that the verifier knows how to find it.

A malicious delegator that knows the delegate's pre-rotated keys may create any delegations it wants anyway, so there is no loss that I can see. Changing the order of delegating seals in the Delegator's KEL does not change the order of the delegated events in the Delegate's KEL. All the verifiable delegating seals commit to the same event ordering in the Delegate's KEL. Any verifier is still protected by the first-seen duplicity detection rule.

Proposed change

Currently the location seal in the delegated event commits the delegate to a unique event in the delegator's KEL. But as explained above, there does not appear to be a good security reason why any delegating seal is not equivalent. (This, by the way, was the reasoning that arose out of examining the security posture of TELs.) Any verifier has to commit to one first-seen version of the Delegator's KEL and one first-seen version of the Delegate's KEL. KERI is not currently imposing any ordering requirements on the order of appearance of delegating seals, just that the one pointed to by the delegated event can be found. But we could point to it with an attachment instead.

In other words, now that we have complex attachments, we can simplify the KERI delegated events by replacing, in the delegated event, the delegating location seal with the delegator's identifier prefix, and then attaching the delegating event's sequence number and digest to the delegated event as an attachment.

This has two performance advantages:

  • It removes the synchronized handshake. With the proposed change, delegated identifiers may be created asynchronously by the delegator. The delegate can create the full delegated event and send it to the delegator, who anchors it and sends back, in its reply, an attachment with the sn and digest that point to the delegating event. The delegate must still wait for the response before publishing its delegated event, but the delegator is now never blocked waiting for a response from a delegate.

  • It simplifies the lookup and verification. Currently we must use a location seal in the delegated event. The location seal does not have the digest of the delegating event but the digest of the event prior to the delegating event. This is because, in a cross-anchor between two KELs, one of the anchors can't digest the other event if the other event includes a digest of the one event. So we have to use other information besides a digest to uniquely identify the delegating event, namely three fields: the sequence number, the event type, and the prior event digest. As a result, when doing validation, a validator has to look up two events, both the event prior to the delegating event and the delegating event itself, to verify the location seal. With the proposed change, because an attachment may be created after the delegating event is finished, we can simplify the attachment to only include the sequence number and actual digest of the delegating event. This simplifies lookup and verification of all delegated events (see the sketch below).
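
For illustration, a minimal sketch of the simplified verification, assuming hypothetical kels lookup, atSn, and digest helpers:

// Verify a delegated event under the proposed scheme: the event body commits
// only to the delegator's prefix ("di"); the attachment supplies the
// delegating event's sequence number (s) and digest (d).
function verifyDelegation(delegatedEvt, attachment, kels) {
  const delegatorKel = kels[delegatedEvt.di];             // commitment is to the delegator's KEL
  const delegating = delegatorKel.atSn(attachment.s);     // single lookup by sequence number
  if (digest(delegating) !== attachment.d) return false;  // attachment names the actual event
  // Any seal in that event matching the delegated event's digest authorizes it.
  return delegating.a.some((seal) => seal.d === digest(delegatedEvt));
}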

The new proposed delegated events are as follows:

Delegated Inception

{
    "v" : "KERI10JSON00011c_",
    "i" :  "EJJR2nmwyYAfSVPzhzS6b5CMZAoTNZH3ULvaU6Z-i0d8",
    "s" : "0",
    "t" :  "dip",
    "kt":  "1",
    "k" :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n" :  "EZ-i0d8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "wt":  "1",
    "w" : ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "c" :  ["DND"],
    "a" : [],
    "di" :"EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM"
}

Attached to the event is a new hetero attachment group with delegating event sequence number and digest.

di is the compact label for delegator identifier prefix. This is the same label used in the key state message.

Delegated Rotation

{
    "v" : "KERI10JSON00011c_",
    "i" :  "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
    "s" : "1",
    "t" :  "drt",
    "p" : "EULvaU6JR2nmwyZ-i0d8JZAoTNZH3YAfSVPzhzS6b5CM",
    "kt" :  "1",
    "k"  :  ["DaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"],
    "n"  :  "EYAfSVPzhzZ-i0d8JZAoTNZH3ULvaU6JR2nmwyS6b5CM",
    "wt":  "1",
    "wa":  ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
    "wr":   ["DH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8TNZJZAo5CM"],
    "a" : [ ],
    "di" : "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM"
}

Attached to the event is a new hetero attachment group with delegating event sequence number and digest.

di is the compact label for delegator identifier prefix. This is the same label used in the key state message.
