root-signing's Introduction

sigstore framework


sigstore/sigstore contains common Sigstore code: that is, code shared by infrastructure (e.g., Fulcio and Rekor) and Go language clients (e.g., Cosign and Gitsign).

This library currently provides:

  • A signing interface (with support for ECDSA, Ed25519, RSA, and DSSE (in-toto))
  • OpenID Connect Fulcio client code

The following KMS systems are available:

  • AWS Key Management Service
  • Azure Key Vault
  • HashiCorp Vault
  • Google Cloud Platform Key Management Service

For example code, look at the relevant test code for each main code file.
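
As a rough illustration of the signing interface, a sign/verify round trip looks something like the sketch below (a minimal sketch assuming the pkg/signature API; the test files remain the authoritative examples):

package main

import (
	"bytes"
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"

	"github.com/sigstore/sigstore/pkg/signature"
)

func main() {
	// Generate an ECDSA key and wrap it in a SignerVerifier.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	sv, err := signature.LoadECDSASignerVerifier(priv, crypto.SHA256)
	if err != nil {
		panic(err)
	}

	// Sign a message and verify the resulting signature.
	msg := []byte("hello, sigstore")
	sig, err := sv.SignMessage(bytes.NewReader(msg))
	if err != nil {
		panic(err)
	}
	if err := sv.VerifySignature(bytes.NewReader(sig), bytes.NewReader(msg)); err != nil {
		panic(err)
	}
}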

Fuzzing

The fuzzing tests are within https://github.com/sigstore/sigstore/tree/main/test/fuzz

Security

Should you discover any security issues, please refer to Sigstore's security process.

For container signing, you want cosign

root-signing's People

Contributors

asraa, bdehamer, bobcallaway, cpanato, danbev, dekkagaijin, dependabot[bot], dlorenc, gabibguti, github-actions[bot], haydentherapper, jku, joshuagl, k4leung4, kommendorkapten, lkatalin, loosebazooka, lukehinds, malancas, mihaimaruseac, mnm678, priyawadhwa, santiagotorres, sigstore-bot, trishankatdatadog, woodruffw, yrobla


root-signing's Issues

Delegations!

I think we need to start adding support for delegations here so we can let other projects reuse this!

TUF metadata should normalise to UTC time.

Description

While taking a look at the generated metadata I noticed that the expires entries in the metadata encode a timezone:

"expires": "2021-12-18T13:28:12.99008-06:00"

(from 1.root.json)

Whereas the specification suggests that time should always be in UTC:

Metadata date-time follows the ISO 8601 standard. The expected format of the combined date and time string is "YYYY-MM-DDTHH:MM:SSZ". Time is always in UTC, and the "Z" time zone designator is attached to indicate a zero UTC offset. An example date-time string is "1985-10-21T01:21:00Z".

AFAIK the spec suggests UTC to simplify comparisons of expiration times (which is a common source of bugs in implementations) – I don't think it's a major issue that there's a non-UTC date time in the metadata, but would be good to resolve before the next round of metadata updates.
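
For illustration, normalising that value to UTC in Go is a one-liner (a sketch of the desired output format, not go-tuf's actual serialization code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// The expires value from 1.root.json, carrying a -06:00 offset.
	t, err := time.Parse(time.RFC3339Nano, "2021-12-18T13:28:12.99008-06:00")
	if err != nil {
		panic(err)
	}
	// Re-serialize in UTC so the "Z" designator is attached.
	fmt.Println(t.UTC().Format(time.RFC3339Nano)) // 2021-12-18T19:28:12.99008Z
}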

Verify root/targets before snapshotting. (Remove placeholder signatures)

Description

This caused a failure in root v3 -- empty sigs will return an invalid signature: https://github.com/theupdateframework/go-tuf/blob/ed6788e710fc3093a7ecc2d078bf734c0f200d8d/verify/verify.go#L107

It is interesting that Snapshot doesn't fail if the root/targets are invalid. This would be a nice feature to add to go-tuf for resilience. If that's out of scope for the repo manager, we should verify the metadata files here.
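
A pre-snapshot sanity check along these lines would catch the placeholder case (a minimal sketch using only the generic TUF metadata fields; the staged paths are illustrative, and this is not the repo's actual tooling):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// checkPlaceholderSigs returns an error if any signature in a staged TUF
// metadata file is still an empty placeholder.
func checkPlaceholderSigs(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var meta struct {
		Signatures []struct {
			KeyID string `json:"keyid"`
			Sig   string `json:"sig"`
		} `json:"signatures"`
	}
	if err := json.Unmarshal(raw, &meta); err != nil {
		return err
	}
	for _, s := range meta.Signatures {
		if s.Sig == "" {
			return fmt.Errorf("%s: placeholder signature for key %s", path, s.KeyID)
		}
	}
	return nil
}

func main() {
	// Hypothetical staged file locations; adjust to the ceremony layout.
	for _, f := range []string{"staged/root.json", "staged/targets.json"} {
		if err := checkPlaceholderSigs(f); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
	}
}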

Tagging "releases": Capture state at each root-signing event

Description

This would be helpful to tag so that we can easily fetch the state of the repo initialization scripts for each root signing event.

I am testing a v3 change and realized it is probably easiest for me to "init" v1 w/ the v1 scripts, and so on.

Monitoring for metadata expiration?

Question

Do we have monitors/pagers in place to make sure that root/timestamp/snapshot/targets metadata are renewed before actually expiring in production? Otherwise, we risk accidentally running into the old "forgot to renew TLS cert in prod..."
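
Even something as small as the sketch below, run on a schedule, would cover the timestamp role (the public HTTP URL is an assumption derived from the gs://sigstore-tuf-root bucket used by the sync workflow):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	resp, err := http.Get("https://storage.googleapis.com/sigstore-tuf-root/timestamp.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var meta struct {
		Signed struct {
			Expires time.Time `json:"expires"`
		} `json:"signed"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&meta); err != nil {
		panic(err)
	}
	// Page someone if the metadata expires within a week.
	if until := time.Until(meta.Signed.Expires); until < 7*24*time.Hour {
		fmt.Printf("WARNING: timestamp.json expires in %s\n", until)
		os.Exit(1)
	}
}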

Prerequisites for verification with old targets

Context: sigstore/cosign#1273

To support verifying old targets, we will need to enable consistent snapshots to persist versioned targets and snapshots.

  • Enable consistent snapshots in Sigstore root - I have tested locally with cosign and don't see any issues with consistent snapshots and versioned target names.
  • Confirm that versioned targets.json and all targets are in Sigstore's root, synced to the GCS bucket - This step should be a no-op. I have tested locally that when a new target is added to staged/targets and the targets metadata is regenerated, we see both a versioned "x.targets.json" and the target being added to the targets folder as hash.target.ext alongside the previous target (the expected naming is sketched after this list).
  • Stop generating snapshot frequently. I don't see value in generating snapshots as frequently as timestamps. Snapshots can be generated when targets are updated. If we don't reduce this and separate this from timestamp generation, as we increase timestamp generation frequency, storage costs will increase.
  • Generate timestamp more frequently - Optional for now as we aren't frequently updating targets
  • Automate checking in PR for timestamp generation - Optional, but highly recommended if we increase timestamp generation to daily or less than daily
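
For reference, consistent snapshots rename files roughly as follows (a sketch of the TUF naming convention, not this repo's exact layout code):

package main

import "fmt"

// consistentMetadataName returns the versioned name for a metadata file,
// e.g. version 3 of targets.json becomes "3.targets.json".
func consistentMetadataName(version int, role string) string {
	return fmt.Sprintf("%d.%s.json", version, role)
}

// consistentTargetName returns the hash-prefixed name for a target,
// e.g. "<sha256-hex>.fulcio.crt.pem", stored alongside the previous version.
func consistentTargetName(sha256Hex, target string) string {
	return fmt.Sprintf("%s.%s", sha256Hex, target)
}

func main() {
	fmt.Println(consistentMetadataName(3, "targets"))
	fmt.Println(consistentTargetName("<sha256-hex>", "fulcio.crt.pem"))
}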

Lemme know if you have any questions or concerns with any of these bullet points.

@asraa @bobcallaway @dlorenc

E2E testing: Create go e2e tests creating a repository and testing initialize

Description

  • We would need to refactor the scripts and commands to:
  • (1) Be key-agnostic
  • (2) Be repo-store agnostic (use a temp filestore or an in-memory store)
  • (3) Be callable from Go, with no repo processing in the script directories.

We will need to be very careful of making these changes in order to not disturb the current repo management workflows.

GCS Bucket Sync using GitHub workflow

Description

The sync job (https://github.com/sigstore/root-signing/blob/main/.github/workflows/sync.yml) uses the google-auth action to authenticate to GCP. However, the job has been failing, requiring manual updates to the GCS bucket (https://github.com/sigstore/root-signing/actions/runs/1642206727), due to

"ServiceException: 401 Anonymous caller does not have storage.objects.create access to the Google Cloud Storage object."

despite being given GCS bucket ownership.

This is currently due to https://github.com/google-github-actions/setup-gcloud#workload-identity-federation-preferred

warning The bq and gsutil tools do not currently support Workload Identity Federation! You will need to use traditional service account key authentication for now.

Check TUF Key ID output in verification

Description

Verifying keys now prints out the TUF key ID for better matching. However, this truncates the ID (probably an int conversion issue)

+ ./verify keys --root piv-attestation-ca.pem --key-directory /home/asraa/git/test-sigstore-root/ceremony/2022-03-09/keys

Outputting key verification and OpenSSL commands...

VERIFIED KEY 18361710
	TUF key id: 824636473168
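
For reference, the TUF key ID is the hex-encoded SHA-256 digest of the canonically serialized public key and should be handled as a string end to end; the sketch below shows the shape of the fix under that assumption (not the actual verify code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// printKeyID prints the full TUF key ID for a canonically serialized public
// key. canonicalKey stands in for whatever canonical JSON encoding the
// verification tool already produces.
func printKeyID(canonicalKey []byte) {
	sum := sha256.Sum256(canonicalKey)
	// %s on the hex string; converting through an integer truncates the ID.
	fmt.Printf("\tTUF key id: %s\n", hex.EncodeToString(sum[:]))
}

func main() {
	printKeyID([]byte("example canonical key bytes"))
}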

Separate SA for sync and signing

Description

Use separate SA for syncing to the GCS bucket and signing with the online keys on GCP to prevent compromises from having too far-reached consequences.

Automatic fetching of Fulcio signing certificates

Background

The managed version of Fulcio is backed by GCP CA Service. When creating a CA, the CA is created under a CA pool. CA pools store the list of active CAs. Certificate issuance is done through the CA pool, and the pool selects which CA the issued certificate will chain up to at random. The benefit of this is that you can rotate CAs without changing the certificate issuance code, as that code always targets the CA pool.

There is also an API for CA pools to fetch the list of CA certificates in a pool.

Enhancement

Regular Fulcio root certificate rotation should occur. While CA Service does not currently support automatic rotation, we could build a tool that triggers rotation (or this could be a manual process for now). Once a new CA is created, a script running in this repo could pick up the new root by calling the CA pool API for the list of CA certificates. The script could create PRs to sign the new Fulcio root. Once signed and checked in, the tool that triggers rotation could pick up on this and automatically disable and delete the previous CA in the pool.
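
A sketch of the fetch step, assuming the GCP CA Service Go client (cloud.google.com/go/security/privateca/apiv1) and its FetchCaCerts call; the exact client API and pool name should be double-checked before building on this:

package main

import (
	"context"
	"fmt"

	privateca "cloud.google.com/go/security/privateca/apiv1"
	"cloud.google.com/go/security/privateca/apiv1/privatecapb"
)

func main() {
	ctx := context.Background()
	client, err := privateca.NewCertificateAuthorityClient(ctx)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The pool name is a placeholder; the real pool lives in the Fulcio project.
	resp, err := client.FetchCaCerts(ctx, &privatecapb.FetchCaCertsRequest{
		CaPool: "projects/PROJECT_ID/locations/LOCATION/caPools/POOL_ID",
	})
	if err != nil {
		panic(err)
	}
	// Each entry is a chain of PEM certificates for one active CA in the pool;
	// new roots found here could be staged as targets for signing.
	for _, chain := range resp.GetCaCerts() {
		for _, pem := range chain.GetCertificates() {
			fmt.Println(pem)
		}
	}
}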

cc @asraa

Package up the root!

Similar to ca-certs, we should package this up and make it easy to install and check against!

deps: Update go-tuf to v0.3.0

Description

Attempted in #196

Blocked on next release of sigstore/cosign:

# github.com/sigstore/cosign/pkg/cosign/tuf
../../go/pkg/mod/github.com/sigstore/[email protected]/pkg/cosign/tuf/client.go:429:27: undefined: client.IsLatestSnapshot

Root/Targets Signing v3

Description

Tracking issue for all the changes we need for our v3 root signing enhancements. The v2 root will expire on 2022-05-11. Let's aim to re-sign in February.

Changes to do in root/targets:

  • Reduce root/target expiration to 4 months rather than 6
  • Rotate out one key-holder TBD
  • Create a test setup with >1 YubiKey
  • Add custom metadata indicating usage/lifetime/version of the target metadata for the Rekor/Fulcio/CT key
  • Add a revocation delegation @haydentherapper
  • Enable consistent snapshots (#80)
  • (Optional?): Start including the HSM certs into the root metadata
  • #161

Changes in snapshot/targets:

  • Increase snapshot re-signing to... 3 weeks? (and on delegation changes)
  • Decrease timestamp signing WITH automatic merging

Perhaps there are other changes in delegations, but they don't need to be signed by the root/targets so I'm not as worried about them.

cc @dlorenc @haydentherapper

Create staging environments

We need a staging environment for two purposes:

  1. Testing the TUF remote before it's used by Cosign. This is like a "pre-prod" environment, where the keys and targets are the same as production.
  2. Creating TUF metadata for the Sigstore staging environment, which will have a different set of targets. We should use a different set of target signing keys.

Let's create a GCS remote for each case.

For (1), we can configure the GCS sync to first push to the remote, and delay the GCS sync to the production remote by a day (I think we should still automate this, I'd prefer that we don't need to give explicit approval for each GCS sync), which will give us time to validate risky changes.

For (2), I know that we already have a TUF delegation set up for staging. Do we want to use this? Or do we want an entirely separate TUF root?

Upcoming keyholder rotation!

@mnm678 @SantiagoTorres @dlorenc @lukehinds @bobcallaway
CC: @asraa @haydentherapper

Hi fellow keyholders!

We're approaching the time when we need to rotate key holders on the sigstore public TUF root, so there are a couple of things we need to pull together to prepare:

  • Deciding who rotates off
  • Establishing guidelines for the set of keyholders
  • Who we propose to add in this rotation

We should aim to select a new keyholder and perform the rotation by March 15th. Please give your feedback on the process, the proposed policies, and any ideas to make this more fun / exciting 😄

Who rotates off?

We need to select one of the 5 of us to step down; if anyone would like to volunteer, please reply-all and let the group know, and we will skip this process this time. However, feedback is appreciated on the more general process described below:

At a sigstore community meeting, we will have a community member (not a keyholder) share their screen and use an RNG to pick a number between 1 and 5. The number selected will correspond to the current list of keyholders sorted in dictionary order by key ID:

Current list for example:

@dlorenc: 2f64fb5eac0cf94dd39bb45308b98920055e9a0d8e012a7220787834c60aef97
@lukehinds: bdde902f5ec668179ff5ca0dabf7657109287d690bf97e230c21d65f99155c62
@mnm678: eaf22372f417dd618a46f6c627dbc276e9fd30a004fc94f9be946e73f8bd090b
@SantiagoTorres: f40f32044071a9365505da3d1e3be6561f6f22d0e60cf51df783999f6c3429cb
@bobcallaway: f505595165a177a41750a8e864ed1719b1edfccd5a426fd2c0ffda33ce7ff209

The person selected will be the lucky(?) person who will rotate off this time. 

Proposed guidelines for keyholders (feedback welcome!)

Terms: 

  • T: threshold for the root TUF role

MUST

  • No more than T-1 keyholders can be employed by the same company (including subsidiaries) at the same time 
    • Any change in employment or acquisition/merger that causes the current set of keyholders to be in violation of this rule must be rectified within 14 days
  • At least 1 member of the sigstore Technical Advisory Council @sigstore/core-team must always be a keyholder

SHOULD

  • We should aim for keyholders to be geographically distributed 
  • We should aim for keyholders to be representative of our diverse community
    • New (but engaged) community members
    • Those from underrepresented populations
    • Those who work or study in academic settings

More maintainers for repo

Description

We need a list of requirements to be a maintainer here. This should include:

  • Strong understanding of TUF specification
  • Good understanding of go-tuf
  • Good understanding of repo automation
  • Rough understanding of signing ceremony

Init from config

Description

I was making enhancements to the root and realized it would be really nice to have a config that could be used to initialize the next roots based on params:

  • expiration for each target type
  • thresholds
  • delegations
  • targets (with custom meta)
  • signers (either locations of HSM key data or GCP KMS signers)

What are some alternatives to dropping a custom YAML?
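
Whatever the on-disk encoding ends up being, something like the struct below could back such a config (names are purely illustrative, not an agreed schema; YAML is just one possible encoding):

package config

import "time"

// RepoConfig drives initialization of the next root.
type RepoConfig struct {
	// Expirations per role, e.g. "root", "targets", "snapshot", "timestamp".
	Expirations map[string]time.Duration `yaml:"expirations"`
	// Signature thresholds per role.
	Thresholds  map[string]int `yaml:"thresholds"`
	Delegations []Delegation   `yaml:"delegations"`
	Targets     []Target       `yaml:"targets"`
	Signers     []Signer       `yaml:"signers"`
}

type Target struct {
	Path   string                 `yaml:"path"`
	Custom map[string]interface{} `yaml:"custom,omitempty"` // usage/lifetime/version metadata
}

type Delegation struct {
	Name      string   `yaml:"name"`
	Paths     []string `yaml:"paths"`
	Threshold int      `yaml:"threshold"`
}

type Signer struct {
	// Exactly one of these would be set.
	HSMKeyDir string `yaml:"hsmKeyDir,omitempty"` // directory of HSM key data
	GCPKMSRef string `yaml:"gcpkmsRef,omitempty"` // e.g. a GCP KMS key reference
}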

Race condition when publishing new TUF metadata to GCS

From actions workflow:

Run gcloud alpha --quiet storage cp --cache-control=no-store -r repository/repository/* gs://sigstore-tuf-root/
Copying file://repository/repository/1.root.json to gs://sigstore-tuf-root/1.root.json
Copying file://repository/repository/2.root.json to gs://sigstore-tuf-root/2.root.json
  
Copying file://repository/repository/rekor.json to gs://sigstore-tuf-root/rekor.json
Copying file://repository/repository/root.json to gs://sigstore-tuf-root/root.json
Copying file://repository/repository/snapshot.json to gs://sigstore-tuf-root/snapshot.json
Copying file://repository/repository/staging.json to gs://sigstore-tuf-root/staging.json
Copying file://repository/repository/targets/fulcio.crt.pem to gs://sigstore-tuf-root/targets/fulcio.crt.pem
Copying file://repository/repository/targets/ctfe.pub to gs://sigstore-tuf-root/targets/ctfe.pub
Copying file://repository/repository/targets/artifact.pub to gs://sigstore-tuf-root/targets/artifact.pub
Copying file://repository/repository/targets/rekor.pub to gs://sigstore-tuf-root/targets/rekor.pub
Copying file://repository/repository/targets/rekor.0.pub to gs://sigstore-tuf-root/targets/rekor.0.pub
Copying file://repository/repository/targets/fulcio_v1.crt.pem to gs://sigstore-tuf-root/targets/fulcio_v1.crt.pem
Copying file://repository/repository/targets.json to gs://sigstore-tuf-root/targets.json
Copying file://repository/repository/timestamp.json to gs://sigstore-tuf-root/timestamp.json

GCS has consistency within objects, but not across objects. That is, if a client attempts to download the TUF metadata after snapshot is updated but before timestamp.json is finished updating, they might get a mismatch.

Possible solutions:

  1. Stick the whole TUF repo into a tarfile or something similar to take advantage of within-object atomicity.
  2. Use something like the "stick files in a directory named after the current timestamp, and have a symlink/pointer to the latest directory" trick
  3. Turn on TUF consistent snapshots, and do (1) but only for the TUF metadata
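
If (3) is adopted, the only mutable object left is timestamp.json, so the sync can write all immutable (versioned/hash-named) files first and timestamp.json last; a sketch using the GCS Go client (the current workflow uses gcloud storage cp instead):

package gcssync

import (
	"context"

	"cloud.google.com/go/storage"
)

// uploadRepo writes every file except timestamp.json first, then timestamp.json
// last, so a client never fetches a timestamp that points at metadata which has
// not been uploaded yet.
func uploadRepo(ctx context.Context, client *storage.Client, files map[string][]byte) error {
	bkt := client.Bucket("sigstore-tuf-root")
	write := func(name string, data []byte) error {
		w := bkt.Object(name).NewWriter(ctx)
		w.CacheControl = "no-store"
		if _, err := w.Write(data); err != nil {
			return err
		}
		return w.Close()
	}
	for name, data := range files {
		if name == "timestamp.json" {
			continue // defer the entry point until everything else is in place
		}
		if err := write(name, data); err != nil {
			return err
		}
	}
	return write("timestamp.json", files["timestamp.json"])
}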

CC @asraa

Documentation: Enhance linux instructions

Description

I hit an error earlier:

error: connecting to pscs: the Smart card resource manager is not running

We should add some docs telling keyholders who hit this error to start and enable the pcscd daemon:

$ systemctl start pcscd.service
$ systemctl enable pcscd.service

Clarify the usage of each key

Description

Some suggestions for docs improvement.

For a new user evaluating the trustworthiness of these roots it would be good if there were some notes about how each of the keys listed in the repository is used.

There is a list of keyholders, but I couldn't see how to map them to the actual keys. For the non-root keys as well, I didn't find a description of who is the owner of a key or when it might be used. In addition to being a better overview of how the repo is managed, this info could also help if there is ever an issue with a leaking key or dubious signature.

The same could be improved for targets. Some of them are listed in the readme, but it would be great if they had more specific descriptions. For example, I don't know how artifact.pub is used or why there are multiple Fulcio certificates (is one of them superseding the other?). For trusted roots that issue their own certificates (like Fulcio), it would be good to link to the policy they commit to for verifying the user before granting them a trusted certificate.

TUF expiry contains microseconds

From TUF specification:

The expected format of the combined date and time string is "YYYY-MM-DDTHH:MM:SSZ". Time is always in UTC, and the "Z" time zone designator is attached to indicate a zero UTC offset. An example date-time string is "1985-10-21T01:21:00Z".

To me this says that the expiry string should not contain microseconds. Current sigstore metadata contains microseconds: https://github.com/sigstore/root-signing/blob/main/repository/repository/2.root.json#L27

I'm not sure if defining expiry this strictly in the spec is useful, but the definition seems clear, and in python-tuf we currently implement the spec strictly, so we fail to deserialize this metadata.

The related python-tuf issue (we'll have to decide if we should support msecs or not): theupdateframework/python-tuf#1858
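
On the producing side, dropping sub-second precision before serializing would sidestep the question entirely (a sketch of the desired output, not go-tuf's serialization code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Truncate to whole seconds so the output matches "YYYY-MM-DDTHH:MM:SSZ".
	expires := time.Now().UTC().AddDate(0, 4, 0).Truncate(time.Second)
	fmt.Println(expires.Format(time.RFC3339)) // e.g. 2022-07-09T17:05:31Z
}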

Enhancement: Store the current keys/ folder under the top-level repository/

Description

Right now repository/ just contains the published metadata. It would be nice to store the current keys, so that I don't have to cp them from the previous ceremony directory when I chain the init for the next one. When I do that, I also need to grab the updated snapshot/timestamp from the published repository.

If I store keys/ in the published repo, then the previous repo always becomes repository/ and I don't need to search a ceremony directory for the old key material.

When I make this change, I'll start the playbook for orchestrating key ceremonies, since this is a weird detail of the ceremony & repository folder structure.

WDYT?
@haydentherapper @dlorenc
I'd like to make that change, I think it's benign and simplifies the management.

Enhance metadata verification

Description

Currently the output of staged metadata verification looks like this after init

Outputting metadata verification at /home/asraa/git/test-sigstore-root/ceremony/2022-03-09...

Verifying root...
	Contains 0/3 valid signatures

Verifying snapshot...
	missing metadata snapshot.json

Verifying targets...
	Contains 0/3 valid signatures

Verifying timestamp...
	missing metadata timestamp.json

It would be great to enhance this output (see the sketch after this list):

  • Add expirations
  • Add version information
  • Add any signature's key IDs
  • Add delegations
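
A per-file summary of that kind can be derived from the generic TUF metadata fields alone; a minimal sketch (not the repository's actual verify command):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// summarize prints the version, expiration, and signing key IDs of one staged
// TUF metadata file.
func summarize(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var meta struct {
		Signatures []struct {
			KeyID string `json:"keyid"`
		} `json:"signatures"`
		Signed struct {
			Version int       `json:"version"`
			Expires time.Time `json:"expires"`
		} `json:"signed"`
	}
	if err := json.Unmarshal(raw, &meta); err != nil {
		return err
	}
	fmt.Printf("%s: version %d, expires %s\n", path, meta.Signed.Version, meta.Signed.Expires.UTC().Format(time.RFC3339))
	for _, s := range meta.Signatures {
		fmt.Printf("\tsigned by key ID %s\n", s.KeyID)
	}
	return nil
}

func main() {
	for _, f := range os.Args[1:] {
		if err := summarize(f); err != nil {
			fmt.Println(err)
		}
	}
}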
