
taps's People

Contributors

adityasaky, awwad, ecordell, hannesm, heartsucker, jhdalek55, joshuagl, justincappos, lukpueh, mnm678, ojdo, rdimitrov, santiagotorres, trishankatdatadog, trishankkarthik, vladimir-v-diaz


taps's Issues

Define preferred style for wrapping and enforce with linter

As discussed during the review of #163 (#163 (comment)), we should define a stance on wrapping and add a linter to ensure pull requests conform to the preferred style.

@mnm678 expressed a desire to support line-wrapping at sentences as is done in OCI specs (see https://github.com/opencontainers/image-spec#markdown-style), rather than a strict character limit.

Are there linters which support this option? When SLSA looked in mid-2021, we could not easily find one (see slsa-framework/slsa#75).

TAP request: artifact discovery, index files and targets metadata

The following observations have been made at several points in time by different people and might be worth an informational TAP:

  • Artifact (target) discovery is not part of the TUF design, and only possible to a limited extent with TUF metadata
  • Artifact discovery is a crucial feature in many content repositories, e.g.
    • download latest version of project
    • list all versions of project
  • Content repositories may employ a custom search index for this purpose
  • The search index must be included in targets metadata for security reasons
  • If the search index is protected by targets metadata and lists additional information about artifacts, the actual artifacts may be omitted from targets metadata, reducing metadata overhead.
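The last point can be illustrated with a minimal sketch (all names and contents are made up for illustration; the field layout follows standard TUF targets metadata): the repository publishes its search index as an ordinary target, and the client verifies the index bytes before trusting anything the index says.

```python
import hashlib
import json

# Hypothetical sketch: the repository lists its search index as an ordinary
# TUF target, so a client can verify the downloaded index bytes against
# targets metadata before using the index for artifact discovery.
index_bytes = json.dumps({"project": ["1.0", "1.1", "2.0"]}).encode()

targets_metadata = {
    "targets": {
        "search-index.json": {
            "length": len(index_bytes),
            "hashes": {"sha256": hashlib.sha256(index_bytes).hexdigest()},
        }
    }
}

def verify_index(raw: bytes, meta: dict) -> bool:
    """Check a downloaded index against its targets metadata entry."""
    info = meta["targets"]["search-index.json"]
    return (
        len(raw) == info["length"]
        and hashlib.sha256(raw).hexdigest() == info["hashes"]["sha256"]
    )
```

If the index itself records lengths and hashes for the artifacts it lists, the individual artifacts no longer need their own targets entries.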

TAP 4: potentially confusing explanation of delegations

(minor)

TAP 4's section Mechanism that Assigns Targets to Repositories provides a list of what is required for each mapping.

There are a few issues:

First:

Item A is worded such that it can be misread as referring to multiple mappings: "A. An ordered list of one or more repositories. When the updater is instructed to download metadata or target files, it tries each repository in the order listed." This can be cleared up a bit by adding "in the delegation" after "tries each repository".

Consider also adding "until a threshold of repositories in agreement about the metadata has been reached" to the "in the order listed" line.

Consider also having "Each mapping contains the following elements:" be on its own line to draw more attention to the list's label (which is currently just the end of a paragraph).

Second:

Item D should be worded better, I think. It currently reads "D. A threshold that indicates the minimum number of repositories that are required to sign for the same length and hash of a requested target as specified by element (B)." It sounds as if the length and hash (or just the hash) of a requested target were specified by element B?

It should read something like: "D. A threshold indicating the minimum number of the repositories (listed in B) that are required to sign for the same information about the requested target (length and hash) for that target information to be considered valid."

If not, it at least needs a comma before "as specified by element (B)".

Backwards Compatibility (TAPs 3, 4, 5)

There's some hand-waving in the current TAPs about backwards-compatibility that I'd like to address. I think that backwards-compatibility is an important goal that would aid adoption, not just of TAPs but of TUF in general, by setting the precedent that the code you deploy today will continue to work tomorrow for some reasonable definition of tomorrow.

I think that the TAPs should analyze the pre- and post-TAP world for both client and server, and determine if it is safe to change the TAP to be backwards compatible. A client failure would be appropriate in situations where the feature provided by the TAP is in use and critical for valid client behavior.

TAP 3

Current Backwards Compatibility Section:

This TAP is incompatible with previous implementations of TUF because the targets metadata file format has been changed in a backwards-incompatible manner. However, note that it should take very little effort to adapt an existing implementation to resolve or encode delegations using the new file format.

New Backwards Compatibility Section Proposal:

This TAP can be made backwards compatible with the previous implementation of TUF by treating metadata with a name field the same as metadata with a names field containing one entry.

Unaware clients (that is, clients that don't understand TAP3's changes) will fail to update to new metadata containing the new names key and not the name key.

Aware clients that attempt to update to new metadata containing the pre-TAP3 format will understand it as the degenerate case of TAP3's format and proceed as normal.

Because TAP3 does not address any security flaws, it is safe to allow old and new clients to coexist for a time.
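The normalization described above could be sketched as follows (field names follow TAP 3's name/names proposal; the helper function is hypothetical):

```python
# Sketch (assumed field names): a TAP3-aware client treats a pre-TAP3
# delegation with a single "name" as the degenerate case of "names".
def normalize_delegation(role: dict) -> dict:
    if "name" in role and "names" not in role:
        role = dict(role)  # avoid mutating the caller's metadata
        role["names"] = [role.pop("name")]
    return role

old_style = {"name": "django", "keyids": ["abc"], "threshold": 1}
new_style = normalize_delegation(old_style)
assert new_style["names"] == ["django"]
```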

TAP 4

Current Backwards Compatibility Section:

This specification is not backwards-compatible because it requires:

  • TUF clients to support additional, optional fields in the root metadata file.
  • A repository to use a specific filesystem layout.
  • A client to use a map file.
  • A client to use a specific filesystem layout.
  • A client to download metadata and target files from a repository in a specific manner.

New Backwards Compatibility Section Proposal:

This TAP can be made backwards compatible with the previous implementation of TUF by treating a client without a mapping.json file as one with a single entry that delegates * to the root URL configured in the TUF client.
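A sketch of synthesizing that degenerate map when no mapping.json is present (the file layout follows TAP 4's map file format; load_map and the "default" repository name are illustrative):

```python
import json
import os

# Sketch: when no mapping.json exists, synthesize the degenerate map that
# delegates * to the client's configured root URL, so an aware client
# behaves exactly like an unaware one. "repository_url" stands in for
# however the client configures its repository.
def load_map(path: str, repository_url: str) -> dict:
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {
        "repositories": {"default": [repository_url]},
        "mapping": [{"paths": ["*"], "repositories": ["default"]}],
    }
```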

Unaware clients (that is, clients that don't understand TAP4's changes) will not have the ability to distribute trust in the way TAP4 specifies, but will continue to work assuming all requested targets live in the single configured repository.

Aware clients that attempt to update, but don't have a mapping file, will continue to operate the same as unaware clients. Aware clients that attempt to update and have a mapping file by definition operate as specified in this TAP.

There may be a case where a user assumes that an Unaware client is actually an Aware client, in which case the user believes a mapping file is being consumed when it is in fact being ignored. In most useful cases, the user will become aware of the issue because they either have no way to provide a mapping file as specified by the client's interface, or because the attempted operation will fail (unable to find required metadata). In the case where the mapping file is used for simple mirroring, and the interface is simply the presence of a mapping file on the filesystem, the user may be unaware that the metadata they use is not coming from the source they expect. This should be solved by clear messaging around the source of the metadata during update operations. Additionally, any targets downloaded under these assumptions still have the full guarantees of TUF metadata in general.

Because TAP4 does not address any security flaws, it is safe to allow old and new clients to coexist for a time.

TAP 5

Current Backwards Compatibility Section:

This specification is not backwards-compatible with clients that do not recognize TAP 5, because the changes to the root and snapshot metadata file has implications on how metadata files are downloaded. However, note that it should take very little effort to incorporate these changes with an existing implementation.

New Backwards Compatibility Section Proposal:

This TAP is backwards compatible with the previous version of TUF.

Unaware clients (that is, clients that don't understand TAP5's changes) will not understand the URLs field and will either fail or ignore it. If they ignore it, all metadata files will be fetched from the same repository as root.json, after which the target file will either be found in the metadata or not. If it is found, the client may be putting more load on the repository than intended (by ignoring the mirrors listed in URLs), or it may be served a different version of the target (with the same name). If it is not found, the target did not exist and the URLs were being used to serve alternate metadata, so a client failure is appropriate.

Aware clients will see metadata without URLs as existing in the same repository as the root file, which is the same as the pre-TAP5 case. Aware clients can ignore the root hash, if it exists in the snapshot file, so they will continue to work with repositories that publish data under the pre-TAP format.

Because TAP5 does not address any security flaws, it is safe to allow old and new clients to coexist for a time.

@trishankkarthik @JustinCappos (please loop in anyone else that may be interested, just didn't want to spam everyone)

Discussion of TAP 13: User Selection of the Top-Level Target Files Through Mapping Metadata

TAP 13 was introduced in #118.

There is a reference implementation available (built on the legacy python-tuf code) at theupdateframework/python-tuf#1103.

Open questions (from the pr discussion):

  • repository- vs. client-hosted alternate top-level targets metadata: The TAP currently describes repository-hosted metadata that is pointed to by the client, but this does not give the client as much control over the subset of packages on a repository that they trust. For example, a client may want to limit the use of a repository to 2 specific packages. In the current proposal they would need to upload new metadata that specifies these exact packages, which may not be possible if the repository is controlled by a third party. For this reason, I think we should consider allowing the client to provide a mapping to a local top-level targets file.

  • client mapping format: This depends a bit on the above, but there are a couple of mapping formats described here and in the PoC. One big question is whether this should be a part of the existing repository mapping metadata.

  • metadata on multiple repositories: How does this TAP interact with TAP 4? Specifically, does the targets mapping apply in the same way for every repository, or should a client be able to specify a different mapping for each repository? It would give the client more flexibility if they could set different mappings for each repository, though this may require a larger change to the repository mapping metadata.

Discussion of TAP-17: Remove signature wrapper from TUF spec

This is a place to discuss removal of the signature wrapper from the TUF spec, introduced in #138.

Link to implementation: To Do

Outstanding issues and questions relating to the TAP:

  • Should the spec recommend/suggest/mention a format which implements the proposed properties? i.e., DSSE
  • What should the pedagogical examples look like after the spec is updated per this TAP? Do we continue to use JSON for file format examples?
  • Should we include guidance on payload type, particularly given this is a motivation for the TAP?
  • What recommendations should the spec make about capturing implementation details in a POUF, especially payload type (how to identify the implementation)?

Discussion of TAP 10: Remove native support for compressed metadata

A few questions / notes:

  1. I think we should provide two options: either you describe compressed metadata in TUF (for those who want it), or you don't. Right now, the only option offered is that you don't. We have to describe the pros and cons of each approach.

  2. Why is it that we had the "compression_algorithms" attribute in the root metadata? Is it because if it was "['gzip']", then you wouldn't trust bzip metadata given to you? OK, I suppose that is somewhat useful, but why the focus on the algorithms? Is it because some compression algorithms are safer than others? Also, what about compression libraries --- wouldn't that also make sense? Where do we draw the line here?

  3. Why mandate anything in the root metadata at all? Why not list compressed / uncompressed metadata in the snapshot metadata? If a client likes it, it can download compressed metadata. Otherwise, it won't, and the server can use HTTP compression anyway. Even in the HTTP compression case, the client is not forced into using whatever compression the server suggests --- uncompressed is always mandatory.

  4. Very good observation, Vlad, that there is no point in using compression in TUF unless snapshot metadata also provides hashes. I think this is where Mercury-hash makes sense. We'll have to note the increased bandwidth cost due to: (1) possibly double the number of files (compressed + uncompressed), and (2) both hashes and version numbers for each file.

  5. What's the problem with compressed metadata unpacking to unexpected directory or filename? Does the client really have no control over this? Can't the client decompress things to a temp directory and see what comes out? Can compressed files even decompress to absolute file paths?

  6. I don't think the client update workflow is terribly complicated by compression. Timestamp and root metadata must always be available uncompressed. The rest follows from snapshot metadata. I think that's all, right?
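Point 3 above, listing both encodings in snapshot metadata so a client may pick either, might look something like this (the field layout is illustrative, not normative):

```python
import gzip
import hashlib
import json

# Sketch of listing both the uncompressed and compressed encodings of a
# metadata file in snapshot metadata, each with its own hash and length,
# so a client may choose either. Field layout is illustrative only.
targets_bytes = json.dumps({"_type": "targets", "version": 7}).encode()
targets_gz = gzip.compress(targets_bytes)

snapshot_meta = {
    "targets.json": {
        "version": 7,
        "length": len(targets_bytes),
        "hashes": {"sha256": hashlib.sha256(targets_bytes).hexdigest()},
    },
    "targets.json.gz": {
        "version": 7,
        "length": len(targets_gz),
        "hashes": {"sha256": hashlib.sha256(targets_gz).hexdigest()},
    },
}
```

A client that does not want compression simply ignores the .gz entry; a client that does can verify the compressed bytes directly before decompressing.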

POUF1 is out of date

Implementation Version(s) Covered: v0.12.*

We're 9 versions behind 😬.

If we do update this, I think we should try to improve content as well:

  • A lot of the POUF is just repeating the spec, which seems unnecessary? Or maybe I misunderstand the purpose
  • the http API is basically undocumented: the only reference seems to be this incorrect statement

    A client downloads these files by HTTP post request

  • filenames and related "input" validation is not handled

Discussion of TAP 15: Succinct hash bin delegations

TAP 15: Succinct hash bin delegations is available at https://github.com/theupdateframework/taps/blob/master/tap15.md

Pull requests related to TAP 15: #120

Outstanding items for TAP 15:

  • Can we make it clearer in the object format that a delegations object needs either name or succinct_hash_delegations, and that path_hash_prefixes is optional? If the delegations object includes a succinct_hash_delegations field then any path_hash_prefixes field is ignored.
    • would be nice to do so in a way that suggests a relationship between succinct_hash_delegations and path_hash_prefixes
  • Resolve on the use of name vs bin_name_prefix (probably through some experimenting on the PoC) see #120 (comment)
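As a reading aid, here is one way a succinct hash bin delegation might resolve a target path to a bin rolename (the bit_length parameter and the "bin" name prefix are assumptions for illustration, not the TAP's exact field names):

```python
import hashlib

# Sketch: with succinct hash bin delegations, the bin for a target is
# derived from the first bit_length bits of the SHA-256 of its path, so
# 2**bit_length bins together cover every possible target path.
def bin_for_path(path: str, bit_length: int, prefix: str = "bin") -> str:
    digest = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    index = digest >> (256 - bit_length)   # keep the first bit_length bits
    width = (bit_length + 3) // 4          # hex digits needed for 2**bit_length bins
    return f"{prefix}-{index:0{width}x}"
```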

TAP 19: should discuss privacy

While I was reading up on passim (an mDNS-based content-addressable storage), I noticed that their main use case, fwupd, does not use passim for artifact downloads for privacy reasons: they don't want the other download "source" (which is potentially untrusted) to know which firmware files get downloaded.

I think TAP 19 should discuss the privacy implications as well: I think they exist in some form for all content-addressable storage where the participants are not completely trusted (in comparison to traditional TUF, where the privacy leak happens only to the relatively more trusted TUF artifact repository).

Support for downgrading/revoking a version?

Dear all,
Is this the right place (since the discussion is empty)? If not, please point me to the proper place/process.

When dealing with production issues, sometimes these are caused by introducing a new version of a package deployed from TUF. Currently, I am not aware of any way to revert (downgrade) a deployment to the last known working version of the package.
Is this something I should introduce as a TAP?

Another improvement (or earlier mitigation of production issues) would be A/B testing, rolling deployment (10%... 20%... 50% of clients...), or rings (beta/pre-release/early-adopters/slow-ring...).

WDYT?


ANSWER: we need to implement this ourselves in our client(s).

[TAP 8] Should rotate files be listed in snapshot metadata?

When working on the implementation of TAP 8, it became clear that there is an issue with TAP 8's requirement that the version number of the latest rotate file for each role is listed in snapshot metadata. Rotate files must be verified before the metadata for that role is used, but timestamp and snapshot roles must be verified before snapshot metadata is used. Thus the client cannot check snapshot metadata for version numbers of rotate files for timestamp and snapshot. I see two possible solutions.

Option 1: Disallow rotate files for the timestamp and snapshot role. Rotate files would only be used for targets roles, and could use the existing snapshot metadata checks.

pros:

  • the snapshot metadata check ensures that clients have the most up-to-date set of keys
  • use of snapshot metadata ensures only rotate files that exist are downloaded

cons:

  • prevents rotating of the frequently-used timestamp and snapshot roles

Option 2: Skip the snapshot check for timestamp and snapshot rotate files (but leave it in place for targets). Rotate files would be downloaded similarly to the root metadata today, with clients looking for version 1, then version 2, and so on.

pros:

  • maintain the ability to rotate all roles

cons:

  • additional network calls are required even when no rotate files are present
  • potential denial of service (only if attacker controls the network)

I am leaning toward option 2, as I think the denial of service is already possible for an attacker that controls the network, and so the ability to rotate keys more frequently seems worth it, but I would appreciate feedback from others.
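Option 2's download loop could be sketched as follows (fetch is a hypothetical transport callback returning file bytes, or None when the file does not exist; the rotate-file naming scheme is assumed for illustration):

```python
# Sketch of option 2: walk rotate-file versions starting at 1, the same way
# clients walk root metadata versions today, stopping at the first 404.
# "fetch" is a hypothetical callback: filename -> bytes, or None on 404.
def fetch_rotate_files(fetch, role: str) -> list:
    files = []
    version = 1
    while True:
        data = fetch(f"{role}.rotate.{version}.json")  # assumed naming scheme
        if data is None:
            break  # no more rotate files published
        files.append(data)
        version += 1
    return files
```

Note the con listed above is visible here: even when no rotate files exist, the loop costs one extra request per role.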

Thanks to @jku for helping me find this issue

Removing assumed TAP numbers in PR

I've noticed there have been some collisions for future TAP numbers: like for TAP 11 with #74 and #106, and TAP 12, with #103 and #107. According to TAP 1, numbers are supposed to be assigned when a proposal has been accepted. If this is the case, could these proposals be renamed to avoid confusion when discussing proposals?

Inaccuracies in TAP 4

The following sentence is inaccurate:

A "terminating" flag that instructs the updater whether to continue searching subsequent mappings after failing to download requested target files from the repositories specified in the first mapping.

First, it's not about failing to "download," but failing to find the target files. Second, it's not the "first mapping," but the current mapping, IF it matches the requested target file.

TAP 4/5 consolidation

Unless I am misunderstanding something, I believe that with some slight modification TAP4 can completely encapsulate the functionality of TAP5.

In the example in TAP5, a new root file is generated that specifies a url for the root (hosted privately), and the PyPI-hosted Django delegation (targets) file is specified in the targets block, meaning that the targets file is pulled from the PyPI but the keys used to sign it are pinned by the privately-hosted root file.

I believe the same effect can be created with TAP4 alone. Could you not generate an alternate root that specifies only the Django keys in the targets file? This would still be privately held and hosted, and you would simply specify your private root in the map.json file.

For example:

{
  "repositories": {
    "Django": ["http://example.com/django/metadata"],
    "PyPI":   ["https://pypi.python.org/"]
  },
  "mapping": [
    {
      // django targets go through the privately-hosted root
      "paths":        ["*django*"],
      "repositories": ["Django", "PyPI"],
      "terminating":  false
    },
    {
      // Map all other targets only to PyPI.
      "paths":        ["*"],
      "repositories": ["PyPI"]
    }
  ]
}

and the root file at https://example.com/django/metadata/ has organization-supplied keys and pins to the Django project target keys?

Am I missing a fundamental difference between the two? If the only difference is that the snapshot doesn't list the root (so that roots can be swapped at will), couldn't that be specified by TAP4 as well?

I'm also wondering if it would make sense to allow a mapping to point to a local file as well as a url, so that a user could pin a delegated set of keys by generating a root locally.

Discussion of Fulcio TAP (TAP 18)

There are some remaining issues on the Fulcio TAP:

  • link to Sigstore documentation (as it is created) for root signing, signing, bundles, and verification

TAP 4: misleading graphic

Regarding TAP 4, in section Searching for Files on Multiple Repositories:

The graphic explaining the foo-1.0.tgz situation is either confusing or wrong:
It reads: "If a match for foo-1.0.tgz had not been found, the search would have terminated. Since the third mapping's pattern is set to *, it will always match with foo-1.0.tgz." But the 2nd delegation has terminating set to False in the graphic, so, no, the search would not have terminated, but would have continued to the 3rd delegation.

On a related, minor note: item 4 in this section should read "If the metadata is not a match, or if fewer than the threshold of repositories signed metadata about the desired target…"

Use Case 3 in TAP 4 doesn't seem to make sense

Use Cases 1 and 4 cover clients selecting which repository to trust for which targets and trusting a combination of multiple repositories for a given target, and so I think they already cover what Use Case 3 intends to demonstrate.

I think Use Case 3 just confuses things. If it's about client choice, that's covered in 1 and 4, and if it's about repository maintainer choice, then that's actually provided by TAP 3, counter to what Use Case 3 suggests.

TAP 6 forces parsing of untrusted data

TAP 6's proposal to add the spec_version field to the signed portion of metadata forces all clients to fully parse data before doing any validation on it. Ideally, a client parses as little untrusted data as possible.

For example, a client might prefer to send a (protobuf) message like:

message SignedMetadata {
  repeated Signature signatures = 1;
  bytes signed = 2;
}

message Signature {
  string keyid = 1;
  string method = 2;
  bytes sig = 3;
}

message RootMetadata {
  ...
}
...

Where the signed field in the SignedMetadata message would itself be a serialized RootMetadata protobuf message. This would allow a client to skip parsing the signed data until after the contents have been verified.

In theory, this could be done with JSON where the object is

{
  "signatures": [...],
  "signed": "<base64 string>"
}

And the base64 string is itself an encoded root metadata object. This would actually be convenient, as it would remove the need for canonical JSON since only a string is being signed.
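A sketch of that verify-before-parse flow (HMAC stands in for a real asymmetric signature to keep the example self-contained; the envelope layout mirrors the JSON shape above):

```python
import base64
import hashlib
import hmac
import json

# Sketch: sign opaque bytes, then on the client side verify the signature
# over those raw bytes BEFORE parsing them as metadata. A real client would
# use an asymmetric signature; HMAC keeps this example self-contained.
key = b"demo-key"
payload = json.dumps({"_type": "root", "spec_version": "1.0.0"}).encode()
envelope = {
    "signatures": [{"keyid": "k1",
                    "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}],
    "signed": base64.b64encode(payload).decode(),
}

# Client side: decode, verify, and only then parse the untrusted bytes.
raw = base64.b64decode(envelope["signed"])
expected = hmac.new(key, raw, hashlib.sha256).hexdigest()
assert hmac.compare_digest(expected, envelope["signatures"][0]["sig"])
metadata = json.loads(raw)
```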

I don't actually have a solid suggestion at this time for how to handle the cases of varying specification versions. Putting the spec_version field in attacker controlled space is problematic in some ways as is forcing a client to fully parse all data.

Related: #35

TAP 4: example mentions unknown functionality

In section Example using the Reference Implementation's Map File:

There is an unexplained bit about excluding URLs:

	      // Some paths need not have a URL.  Then those paths will not be updated.
	      ...

First: There's no detail for this no-URL no-update feature anywhere else in the TAP. (We've discussed it outside the TAP, but it's not in there.)

Second: This is a feature of repositories in the map file, not paths. Paths don't have a URL. Paths map to a repository, which might in turn not have a URL assigned. Further, if that delegation is encountered before others and is terminating, then the targets delegated by that delegation will not be updated. The priority between delegations and the settings in delegations determine this behavior.

Discussion of TAP 16: Snapshot Merkle trees

This is a thread to discuss snapshot Merkle trees, introduced in #125.

Pull requests relating to TAP 16: #133

Outstanding issues and questions relating to this TAP (from @joshuagl in #125 (comment))

  • A reader has to understand Merkle trees to implement this TAP, for example to understand how to compute the root hash from a leaf node. Is that a concern? Should we link to a good overview of Merkle trees? Should we specify how this is expected to work?
    • are the metadata extensions for the Merkle tree truly abstract enough to support arbitrary (non-Merkle) tree algorithms? Should we PoC some other algorithms? perhaps in a different implementation?
  • Further PoC development of auditor integrations?
  • Recommendations on specifying the algorithms for a POUF
  • The Specification section suggests the snapshot Merkle tree replaces the single snapshot metadata file – is there any reason we shouldn't generate both? If we generate a Merkle tree only, should integrations still have a snapshot role with associated key? Should we explore this a bit more?
  • In places the TAP seems to suggest an auditor is required, whereas in others it indicates auditors are optional. Let's be sure to clarify.

TAP process (TAP 1)

I think the TAP process could use some clarification/modernisation. The following is a list of questions that arose whilst reviewing TAP 1.

  • Per TAP 1 the Final state is when the features of a TAP are integrated/implemented within the reference implementation. There is no mention of integration into the specification. Should there be an additional state for "in the spec" vs "in the reference implementation"? Or should Final be "in the spec"?

  • Should TAP formats recommend line-wrapping? The existing TAPs mostly seem to line wrap at ~72-79 characters.

  • Are Post History, and the mailing-list-centric nature of TAP 1, still relevant? It seems to me that it makes sense to discuss TAPs here, on GitHub, with posts to the mailing list to draw interested parties into the discussion. TAP 1 suggests that TAP submissions, reviews, and ownership transfers should happen on the mailing list. I haven't seen that happen in practice.

  • TAP 1 suggests that

    If there are multiple authors, each should be on a separate line following RFC 2822 continuation line conventions.

    that doesn't appear to be true in any of the current TAPs. Should we update the TAPs or change the guidance in TAP 1?

  • A TAP is accepted as a Draft, but there's also an Accepted status. It's not clear to me from reading TAP 1 when we are talking about accepted as Draft and accepted as Accepted. The statuses and how a TAP moves through them should be cleared up.

audit logs for root metadata changes

Given the importance of the root metadata, changes made to it are significant. I was recently tasked with keeping an audit log of such changes. The naive approach is to more or less add a new DB column and move on. However, when I start to think of the qualities such a solution needs, such as being tamper resistant, I start to wonder if this shouldn't be optionally supported (or even recommended) in the TUF spec. For example, maybe we could add a new optional attribute, reason, to the signed root metadata. Maybe a free-form string or some free-form object, e.g.:

signed:
  _Type: Root
  expires: 2022-08-19T16:23:01Z
  version: 2
  reason: User(1234) rotated root keys that were due to expire

then we add a new targets signing key and get

signed:
  _Type: Root
  expires: 2022-09-19T16:23:01Z
  version: 3
  reason: User(456) added targets keyid 1234

I'm not totally sure this belongs in the root metadata and might need to be its own new artifact. Just curious if people had thought about this and if there was interest in something along these lines?

Should POUF-1 allow whitespace for prettified output?

In this conversation on #tuf, I'm in the process of renaming rust-tuf's interchange types to pouf, since that's more aligned with the TUF project. One complication, though, is that rust-tuf supports a prettified version of POUF-1 with JsonPretty. We use this to generate golden files to make sure code changes don't unintentionally change pregenerated metadata. It's much easier to read pretty JSON than minified JSON. However, according to POUF-1, this format uses OLPC's canonical JSON format, which disallows whitespace.

rust-tuf, and I'm guessing all the other implementations of POUF-1, can work with prettified metadata without issue. Should this be something that's formally supported? Or should implementations like this be treated as a non-standard POUF? If the latter, how should we refer to them? Should we avoid using the term POUF for things like this?
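The difference is easy to demonstrate (this approximates canonical encoding with sorted keys and no whitespace; OLPC canonical JSON differs in some details):

```python
import json

# Approximation of the tension described above: a canonical-style encoding
# (no whitespace, sorted keys; OLPC canonical JSON differs in details)
# versus a prettified encoding of the same metadata. Both parse to the
# same object, but their bytes, and therefore their hashes, differ.
meta = {"signed": {"_type": "timestamp", "version": 3}, "signatures": []}

canonical = json.dumps(meta, sort_keys=True, separators=(",", ":"))
pretty = json.dumps(meta, sort_keys=True, indent=2)

assert json.loads(canonical) == json.loads(pretty)
assert canonical != pretty
```

So prettified golden files are fine for comparing parsed content, but any signature or hash computed over the bytes must pick one encoding and stick to it.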

Introduce a status for approved/accepted TAPs that are not intended to make it into the core specification

In the most recent community meeting there was a sidebar discussion on the complexity of implementing TUF and how several TAPs (specifically TAP 4 and TAP 8) increase complexity for optional features.

As part of the discussion I proposed that we add an additional TAP status, or update the accepted status, to include a notion of a TAP which is reviewed and approved but, due to its optional nature, is considered supplementary to the specification and is not destined to become a part of the core specification document.

During the discussion the following pros and cons were discussed:

Pros

  • implementation simplicity and safety for those only interested in the core TUF functionality of today

Cons

  • confusion in how implementations/adoptions communicate which combination of TUF + TAPs are implemented
  • this potentially makes it harder to find a TUF implementation which suits all of an adopter's needs
  • testing combinations of features is harder
  • unclear what this means for the reference implementation(s)

FWIW some of these cons (i.e., compatibility across implementations, lack of clarity around what exactly a TUF implementation implements) already exist today.

Filing this issue as a place to continue this discussion.

TAP process: issue with pre-implementation phase visibility

I'm writing down some observations on the TAP process as a relative newcomer here, as promised in the community meeting:

The TAP process seems useful: the requirement for a design document and the requirement for a reference implementation for the design both make sense. However, my problem has been that reviewing at least the currently open TAPs has been much more difficult than expected. I feel that a core reason is that the problem definition and idea formulation phases are not publicly visible. TAP 1 does say this:

Each TAP MUST have a champion -- someone who writes the TAP using the style and format described below, shepherds the discussions in the appropriate forums, and attempts to build community consensus around the idea. The TAP champion (a.k.a. Author) SHOULD first attempt to ascertain whether the idea is TAP-able. Posting to the TUF issue tracker, the #tuf channel on CNCF Slack, or the TUF mailing list are good ways to go about this.

but in reality finding these discussions has been difficult.


In the community meeting I compared the TAP process to traditional open source SW development which has two visible phases:

  1. Problem definition phase: Issue/RFE is opened and discussion happens: "is this really a problem?", "what is the root cause?", "what options do we have?", "which option should we choose and why?"
  2. Implementation phase: A pull request of a specific solution is made and reviewed

whereas the TAP process is roughly:

  1. Implementation phase: A TAP pull request of a specific solution is made
  2. Implementation phase: A reference implementation pull request is made

There are still two steps, but both are about implementing a specific solution. The problem space and the competing solutions are seemingly never discussed. I'm sure the discussion happened somewhere, but unfortunately not seeing it, and not being able to refer to it later, affects my ability to review the actual implementations: if I don't understand why the initial design choices were made, how can I say whether the implementation is reasonable?

I don't really want to propose making the process more complex by establishing a specific way to discuss problems/ideas... but maybe someone else has good ideas on this?

TAP 3: Unspecified targets

I'm not sure if a TAP can be amended (or if an amendment is needed in the following case), but while reviewing the accepted version of TAP 3 I noticed that targets are never specified in the examples.

I kept reading comments like:

// We specify the name, keyids, and threshold of a single role allowed to sign the following targets.

Here is one example from the TAP:

{
  "signed": {
    "delegations": {
      "roles": [
        // This is the first delegation to a single role.
        {
          // We specify the name, keyids, and threshold of a single role allowed
          // to sign the following targets.
          // Each role uses a filename based on its rolename.
          "name": ROLENAME-1,
          "keyids": [KEYID-1],
          "threshold": THRESHOLD-1,
          ...
        },
        // This is the second delegation to a single role.
        // Note that this delegation is separate from the first one.
        // The first delegation may override this delegation.
        {
          "name": ROLENAME-2,
          "keyids": [KEYID-2],
          "threshold": THRESHOLD-2,
          ...
        }
        // Note that, unfortunately, there is no way to require multiple
        // roles to sign targets in a single delegation.
      ],
      ...
    },
    ...
}
