
This project was forked from aleonet/aleo-setup.


Namada Trusted Setup Ceremony for the Multi-Asset Shielded Pool (MASP) enabling asset-agnostic private transfers

Home Page: https://namada.net

License: Other



Namada Trusted Setup

The Namada Trusted Setup Ceremony generates the public parameters for the Multi-Asset Shielded Pool (MASP) circuit and guarantees its security. For more context, see the article Announcing the Namada Trusted Setup. This repository contains the coordinator code and CLI for the Ceremony.

Signing up for the Ceremony

Ceremony dashboard

During the ceremony, valid contributions will appear on the Namada Ceremony Dashboard.

About Namada

Namada is a Proof-of-Stake layer 1 protocol for asset-agnostic, interchain privacy. Namada is Anoma's first fractal instance.

To learn more about the protocol, see the resources linked from the Namada home page.

Participating

The Namada Trusted Setup CLI exposes two ways to contribute:

  • default — performs the entire contribution on the current machine
  • offline — computes the contribution on a separate (possibly offline) machine (see "Computation on another machine" below)

This documentation provides instructions to contribute:

  1. By building the CLI from source
  2. From prebuilt binaries (manual setup)
  3. From prebuilt binaries (automated setup)

Participants are also encouraged to contribute via their own custom clients. For suggestions on best practices, refer to RECOMMENDATIONS.md.

Contribution tokens

  • Each participant needs a unique token ($TOKEN) to participate in the ceremony. If your slot was confirmed, you should have received it by email before the start of the ceremony (09:00 UTC on 19 November 2022).
  • If you didn't receive a token but wish to participate, you can use the first-come, first-served list of tokens during the free-for-all cohorts of the ceremony. Follow @namadanetwork on Twitter for updates.

1. Building and contributing from source

First, you will need to install some dependencies. On Debian-based systems you can use:

sudo apt update && sudo apt install -y curl git build-essential pkg-config libssl-dev

After that, you'll need to install Rust by entering the following command:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

If you already have Rust installed, make sure it is the most up-to-date version:

rustup update

Once everything is installed, clone the Namada Trusted Setup Ceremony GitHub repository and change directories into namada-trusted-setup:

git clone https://github.com/anoma/namada-trusted-setup.git
cd namada-trusted-setup && git checkout v1.1.0

Build the binary:

cargo build --release --bin namada-ts --features cli

Move the binary to a directory on your $PATH (might require sudo):

mv target/release/namada-ts /usr/local/bin 

Start your contribution:

namada-ts contribute default https://contribute.namada.net $TOKEN

2. Contributing from prebuilt binaries (manual setup)

We provide prebuilt x86_64 binaries for Linux, macOS and Windows. Go to the Releases page and download the latest version of the client.

After downloading, you might need to grant execution permissions with: chmod +x namada-ts-{distrib}-{version}

Finally start the client with:

./namada-ts-{distrib}-{version} contribute default https://contribute.namada.net $TOKEN

3. Contributing from prebuilt binaries (automated setup)

If you are on Linux or macOS, we also provide an install script to automate the binary setup. Run the following command:

curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/anoma/namada-trusted-setup/main/install.sh | sh

and you are ready to contribute:

namada-ts contribute default https://contribute.namada.net $TOKEN

Troubleshooting

On macOS, you might see the warning "cannot be opened because the developer cannot be verified". To resolve this, open the "Security & Privacy" pane in System Preferences. In the General tab, next to the message saying the binary was prevented from running, click Allow Anyway. Run the binary again; this time a different prompt is shown. Click Open and the binary should run as expected.

Advanced features

Advanced features are available to encourage creativity during your contribution.

Computation on another machine

You can generate the parameters on a machine that is offline or was never connected to the internet, for example an air-gapped machine, a brand-new computer, or a Raspberry Pi. You will still need a second, internet-connected machine to carry out the necessary communication with the coordinator.

On the online machine, run the following command:

cargo run --release --bin namada-ts --features cli contribute another-machine https://contribute.namada.net $TOKEN

This will start the communication process to join the ceremony and download/upload the necessary files. On the offline machine use the following command:

cargo run --release --bin namada-ts --features cli contribute offline

which computes the contribution itself. This second command expects the challenge.params file obtained from the online machine to be present in the current working directory, and it produces a contribution.params file to be passed back to the online machine for shipment to the coordinator. You are responsible for moving these files between the two machines.

Verify a contribution

If you want to verify a contribution, you can do so via the CLI. After you have successfully contributed, a file called namada_contributor_info_round_${round_height}.json will be generated and saved in the same folder as the namada-ts binary, together with the parameter file namada_contribution_round_{ROUND}_public_key_{PUBLIC_KEY}.params. The file contains a JSON structure; copy the values of the following fields:

  • public_key
  • contribution_hash
  • contribution_hash_signature
  • parameter_path - This is an optional argument, and it's the absolute path to the parameter file generated by the CLI (mentioned above)

and input them to:

namada-ts verify-contribution $public_key $contribution_hash $contribution_hash_signature [$parameter_path]

With the same procedure you can also verify any other contribution: you'll find all the data that you need at https://ceremony.namada.net.

Client Contribution Flow

  1. The client will ask you if you want to contribute anonymously:

    • If yes, your contribution will show as "anonymous" on the dashboard.
    • If no, you'll be asked to provide a name and an email address.
  2. Generation of a mnemonic: every participant will be asked to generate a mnemonic. It is compatible with Namada accounts, and you will need it if you end up being rewarded for your contribution!

    • The CLI will ask you to verify 3 words of your mnemonic.
    • If you fail the verification, the CLI will crash and you'll have to start over.
  3. You will need to wait a bit until it is your turn. Each round lasts between 4 and 20 minutes. For the whole ceremony, please do not close your terminal or drop your internet connection. If you stay offline for more than 2 minutes, the coordinator will kick you out of the queue.

  4. When it is your turn, the client will download the challenge from the coordinator and save it to the root folder. The client will then ask you to enter:

    • A frenetically typed string
    • Or a string representation of your alternative source of randomness
  5. You have at most 20 minutes to compute your contribution and send it back to the coordinator.

  6. After successfully contributing, you can optionally submit a public attestation URL (e.g. a link to a tweet, an article documenting your setup, a video, etc.). Note that the URL must use http or https.

  7. If your contribution was valid, it'll show up on the dashboard!

Directory Structure

This repository contains several Rust crates that implement the different building blocks of the MPC. The high-level structure of the repository is as follows:

  • phase2-cli: Rust crate providing an HTTP client that communicates with the REST API endpoints of the coordinator and uses the necessary cryptographic functions to contribute to the trusted setup.
  • phase2-coordinator: Rust crate providing a coordinator library and an HTTP REST API that allow contributors to interact with the coordinator. The coordinator handles the operational steps of the ceremony, such as adding a new contributor to the queue, authenticating a contributor, sending and receiving challenge files, removing inactive contributors, reassigning a challenge file to a new contributor after a drop, verifying contributions, and creating new files.
  • phase2 and setup-utils: contain utilities used by both the client and the coordinator.
  • The remaining files contain configs for CI and for deployment to AWS EC2 and an S3 bucket.

Audits

The original implementation of the coordinator for the Aleo Trusted Setup was audited by:

License

All code in this workspace is licensed under either of

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Community support

Contributors

apruden2008, awasunyin, bmerge, brechtpd, cwgoes, dependabot-preview[bot], ebfull, emmorais, fraccaman, gakonst, garethtdavies, gluk64, gnosed, grarco, howardwu, ibaryshnikov, jasondavies, jules, kellpossible, kobigurk, mmaker, niklaslong, poma, raychu86, shamatar, skywinder, ssdbkey, str4d, weijiekoh, zosorock


Issues

fix: when participant fails to send contribution, add next participant from the queue

In the current implementation, when a participant fails to contribute, the coordinator removes them from the contributors and adds a replacement "coordinator-contributor". Unfortunately, as it stands, the "coordinator-contributor" doesn't do anything and the ceremony gets stuck.

Instead of adding the "coordinator-contributor" we can add a real participant that is waiting in the queue.

The code that needs to be fixed is in coordinator.rs at fn drop_participant_from_storage(&mut self, drop: &DropParticipant) -> Result<(), CoordinatorError>:

// Assign a replacement contributor to the dropped tasks for the current round.
// FIXME: add next contributor from the queue instead of the replacement_contributor from the coordinator, that doesn't make any action.
round.add_replacement_contributor_unsafe(replace_action.replacement_contributor.clone())?;

Store keypair

Store the generated keypair to a JSON-encoded file.
Encrypt this file and add info about the contributions: contribution hash, timestamp, and contribution hash file.

Tests

We need to:

  • Fix the broken tests in test_coordinator and e2e
  • Implement tests for the attestation endpoint

Repo misc fixes

Fix

  • cargo check warnings
  • review masp import
  • all FIXMEs
  • Add a phantom cohort with no tokens at the end of the ceremony to leave the participants of the last cohort enough time to complete their contributions
  • How to provide the contributors.json file to the frontend, two ways:
    • Keep the coordinator running forever while shutting down the join_queue endpoint (this would not require a phantom cohort)
    • Shut the coordinator down and publish the contributors.json file to S3
  • Remove unused dependencies
  • Produce final release tag
  • Upload last contribution verified to S3?
  • Fix messages and docs
  • Rename phase1 to phase2

Test

  • contribution on AnotherMachine
  • token blacklisting
  • Namada keypair generation
  • restart of coordinator?

Review

  • values for the env variables to set
  • state of the Coordinator server
  • state of the S3 server
  • state of the Amazon Parameter Store

Fix GET requests

GET requests carrying a body are not supported when running the coordinator behind the AWS CDN. The solution depends on the specific endpoint. There are two cases:

  • GET request with a body used only to verify the signature -> use Rocket's request guards, which also give access to the managed state (https://api.rocket.rs/v0.4/rocket/request/struct.State.html)
  • GET request with a proper payload -> change the HTTP method to POST

Either way, we'll also need to rework the corresponding requests.

Missing ceremony files

When running the ceremony on the local machine, all the expected files get produced. When running the ceremony on the AWS instance, though, the *.verified.signature files are not produced.

CLI flow refactor

We need to apply some changes to the CLI flow:

  1. Pass the token as a CLI parameter instead of asking the user for it interactively
  2. Remove the step "do you want to participate in the incentivised testnet?"
  3. Request the following info:
    a. Not anonymous: full name + email address
    b. Anonymous: no info provided
  4. After they've completed the contribution, they're asked to submit an attestation

For point 4 we are going to send a signed request with the attestation to a new endpoint /contributor/attestation. Since we need to sign this message it will be possible to send this request as long as the CLI is running (since the keypair is in memory).

The attestation field must be added to the contributor.json files.

We need to publish the name of the contributor (possibly Anonymous) in the contributors.json file.

My token is invalid

My token came from my email, and when I first used it, it was accepted. Later, however, I saw my token listed among the unused tokens of Cohort 8. I don't understand why: the token originally belongs to me, but it has been made public.

Broken tests

Both the coordinator and e2e tests are currently broken due to the many changes applied lately, and need to be fixed.

Dropped Contributors

Issue

Some contributors are being kicked out of the ceremony while waiting in the queue. The problem seems to appear while the coordinator is verifying contributions. @gnosed correctly conjectured that the problem may reside in one of the Coordinator's timeouts.

More specifically, the coordinator has three timeouts that affect a contributor:

  • queue_seen_timeout applies to contributors in the queue and should be refreshed by the heartbeat signal sent by the contributor
  • contributor_seen_timeout seems to affect the current contributor and should also be refreshed by the heartbeat
  • participant_lock_timeout seems to be linked only to the lock on the chunk and is not affected by the heartbeat (so it depends only on how long the contributor holds the lock, and therefore on its speed of computation)

In the current implementation, the timeouts are set to two minutes, while the contributor sends a heartbeat every minute. This should suffice to prevent the contributor from being dropped from the ceremony, but apparently it doesn't.

Explanation

The reason for this bug lies in the verification step performed periodically (once every minute) by the Coordinator. The function that performs the verification takes some time to execute (in tests done by @gnosed it took anywhere from 60 to 80 seconds). This function also needs to acquire a write lock on the Coordinator, meaning that no other task can proceed while the coordinator is verifying the pending contributions.
A scenario like the following can therefore arise:

             60s
heartbeat --------heartbeat------------------------
heartbeat --------verify-------------------heartbeat
                                 80s

The contributor sends a first heartbeat. The coordinator receives the request: at this moment the lock on the shared state is free and can be acquired. The heartbeat is processed and the response is sent back to the contributor.
After 60 seconds the contributor sends another heartbeat: this time, though, the lock on the coordinator has just been acquired by the verify function, which will release it only after 80 seconds. Only then can the heartbeat handler acquire the lock but, by that time, 140 seconds have already elapsed (well over the 120 s timeout) and the contributor is dropped.
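The failure mode can be captured in a tiny model. This is a sketch, not coordinator code: the 60 s heartbeat interval, 80 s verify duration, and 120 s timeout are the values from the scenario described above.

```rust
use std::time::Duration;

/// Returns true if a contributor whose heartbeat is delayed behind a
/// long-running `verify` would exceed the liveness timeout.
/// All values are hypothetical, taken from the scenario described above.
fn is_dropped(heartbeat_interval: Duration, verify_duration: Duration, timeout: Duration) -> bool {
    // Worst case: the heartbeat arrives just after `verify` grabbed the
    // write lock, so it is processed only after `verify` releases it.
    let observed_gap = heartbeat_interval + verify_duration;
    observed_gap > timeout
}

fn main() {
    // 60 s interval + 80 s verify = 140 s observed gap > 120 s timeout -> dropped.
    assert!(is_dropped(
        Duration::from_secs(60),
        Duration::from_secs(80),
        Duration::from_secs(120)
    ));
    // With a relaxed 180 s timeout, the same scenario survives.
    assert!(!is_dropped(
        Duration::from_secs(60),
        Duration::from_secs(80),
        Duration::from_secs(180)
    ));
    println!("timeout model ok");
}
```

The model makes the fix obvious: the timeout has to cover the heartbeat interval plus the worst-case lock-hold time of verify.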

Timings

I benchmarked all the functions that require a write lock (in --release mode) on my machine, with the following results:

  • join_queue: 846μs
  • update_coordinator: 27ms
  • write_contribution: 96ms
  • write_contribution_signature: 7ms
  • lock_chunk: 3ms
  • get_challenge: 77ms
  • contribute_chunk: 364ms
  • verify: 2m 31s
  • heartbeat: 760μs

The only function that takes a long enough time to threaten the heartbeat period is verify.

Solution

Starting from the assumption that, to prevent data races, we can neither get rid of nor improve the lock system on the Coordinator, the only possible solutions are:

  • improving the speed of the verification step
  • relaxing the timeouts values to keep the verify execution time in account

The current implementation of the verify function is already an improvement, and there is no guarantee that we'll be able to improve it further, nor by how much.
The solution is therefore to increase the timeouts of the Coordinator to account for the extra time required by the verification step.

Pre-built binaries - no binary for Darwin/arm64

I'm trying to do option three - download the pre-built binaries.

I get the following result:

Creating binary folder...
Downloading new binary for Darwin/arm64...
No binary for Darwin/arm64.

Any ideas?

Graceful shutdown

At the moment, the graceful shutdown procedure doesn't seem to work correctly. The rocket server does shut down gracefully, but the final state persisted to disk is not correct.

The correct procedure to close the ceremony is:

  1. Notify the rocket server to stop so that it doesn't accept new requests
  2. Give the REST server all the time it needs to execute the currently pending requests (possibly no grace time)
  3. Abort the update and verify tasks
  4. Call the verify function to perform one last verification of any pending contributions
  5. Call the update method to update the state of the coordinator for the last time
  6. Call the shutdown method of the coordinator to save the state to disk and stop the ceremony

Steps 3 to 5 must be executed once the REST server has shut down, that is, in the tokio::select! expression of the main function.

Monitoring

It would be nice to have an endpoint to feed monitoring. We would need to export:

  • current round height
  • ceremony start time
  • cohort duration

We can place these inside the coordinator.json file and export it
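A minimal shape for the exported monitoring data could look like the sketch below. The struct and field names are suggestions, not the actual coordinator.json schema, and the JSON serialization is hand-rolled to keep the sketch dependency-free (the real coordinator would use serde).

```rust
/// Hypothetical monitoring snapshot; struct and field names are suggestions.
struct Monitoring {
    round_height: u64,
    ceremony_start_time: &'static str, // RFC 3339 timestamp
    cohort_duration_secs: u64,
}

/// Hand-rolled JSON serialization to keep the sketch dependency-free.
fn to_json(m: &Monitoring) -> String {
    format!(
        "{{\"round_height\":{},\"ceremony_start_time\":\"{}\",\"cohort_duration_secs\":{}}}",
        m.round_height, m.ceremony_start_time, m.cohort_duration_secs
    )
}

fn main() {
    let m = Monitoring {
        round_height: 42,
        ceremony_start_time: "2022-11-19T09:00:00Z",
        cohort_duration_secs: 7200,
    };
    let json = to_json(&m);
    assert!(json.contains("\"round_height\":42"));
    println!("{json}");
}
```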

KeyPair encoding

To support the incentivization process through the genesis file, the public key must be encoded with the same scheme used in the ledger. Currently, we use base64 to encode the key in the executable and base58 when we use the pubkey as part of a file name. We should instead serialize it with borsh and hex-encode it, just like in the ledger, to provide a 1:1 mapping.
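The hex step is straightforward; here is a dependency-free sketch. The borsh serialization that would precede it in the real code is omitted, and the byte values are made up.

```rust
/// Hex-encode a byte slice into lowercase hex, as discussed above.
/// (The borsh serialization step that would precede this in the real
/// code is omitted here.)
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let pubkey_bytes = [0x00u8, 0x1f, 0xab, 0xff]; // made-up key material
    assert_eq!(hex_encode(&pubkey_bytes), "001fabff");
    println!("{}", hex_encode(&pubkey_bytes));
}
```

Unlike base58, this representation round-trips byte-for-byte, which is what gives the 1:1 mapping with the ledger encoding.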

CDN timeout

Queue position: 2
Queue size: 2
Estimated waiting time: 10 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Queue position: 1
Queue size: 1
Estimated waiting time: 5 min
Do you want to contribute on another machine? [y/n]
n
Do you want to input your own seed of randomness? [y/n]
n
Enter a random string to be used as entropy
dwqdqdqwdwqdw
2022-07-01T12:09:04.289492Z  INFO phase1_coordinator::commands::computation: Contribution hash: 0xf686713d2ac17fe0185ffc64755507ba0d6ed495f66c85cb954cd98fbbcc2e0fb03d8fe70f0e2372ece7f57b0694b704fae34bcd343436c5efc20aa89b941267
Randomness has been correctly produced in the target file
2022-07-01T12:09:05.558110Z  INFO phase1: Completed contribution in 12 seconds
thread 'main' panicked at 'Couldn't get the status of contributor: Server("… CloudFront 504 error page: \"The request could not be satisfied. CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.\" …")', phase1-cli/src/bin/phase1.rs:356:14
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Compress contribution info

The endpoint /contribution_info returns a JSON encoding of a list of dictionaries, which will grow quite a bit in size during the ceremony. We should make sure to compress the response of this endpoint.

FFA token

We need to rework the token struct to discriminate between:

  • Cohort token
  • FFA token

This way we can keep the blacklisting logic implemented in #65 for cohort tokens while avoiding blacklisting FFA tokens.
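One way to model the distinction is an enum. This is a sketch: the variant and function names are invented for illustration, not taken from the codebase.

```rust
/// Hypothetical token classification; names are invented for illustration.
#[derive(Debug, PartialEq)]
enum TokenKind {
    /// Tied to a specific cohort; subject to the blacklisting logic of #65.
    Cohort { cohort: u64 },
    /// Free-for-all token; never blacklisted.
    Ffa,
}

/// Only cohort tokens are subject to blacklisting after use.
fn should_blacklist_after_use(token: &TokenKind) -> bool {
    matches!(token, TokenKind::Cohort { .. })
}

fn main() {
    assert!(should_blacklist_after_use(&TokenKind::Cohort { cohort: 3 }));
    assert!(!should_blacklist_after_use(&TokenKind::Ffa));
    println!("token kinds ok");
}
```

Encoding the distinction in the type means the blacklisting branch cannot be applied to an FFA token by accident.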

CLI cohort info

At the moment the CLI doesn't print a meaningful message when the join_queue method fails because of an invalid token for the current cohort. We should give the user some info about:

  • The cohort they've been assigned to
  • The start datetime of that cohort
  • The time left until that datetime

We can implement this in two ways:

  1. Add an extra endpoint, queried at the beginning of the contribution, to retrieve this info
  2. Embed this info directly in the token

The second option seems to be the best one, since it would avoid extra calls to the coordinator server and would allow us to implement this feature entirely client-side. To support it, we can create a JSON like the following:

{
  "start_datetime": "<UtcDateTime>",
  "cohort_duration": "<u64>",
  "cohort_number": "<u64>",
  "token": "<String>"
}

This JSON can then be serialized and base64-encoded to provide a compact representation for the CLI input. The CLI can then reconstruct the original data and extract the relevant information for the user.

To better support this, the token may become a CLI parameter instead of being requested from the user during the contribution process.
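Client-side, the CLI would base64-decode the token and pull the cohort fields out of the JSON above. The sketch below shows only the extraction step on an already-decoded payload; the base64 decoding and proper JSON parsing are elided, and a real client would use the base64 and serde_json crates instead of this naive helper.

```rust
/// Naive field extraction from a flat JSON object; good enough for the
/// sketch, but a real client would use serde_json instead.
fn extract_field<'a>(json: &'a str, key: &str) -> Option<&'a str> {
    let needle = format!("\"{key}\":");
    let start = json.find(&needle)? + needle.len();
    let rest = json[start..].trim_start();
    let rest = rest.strip_prefix('"').unwrap_or(rest);
    let end = rest.find(|c: char| c == '"' || c == ',' || c == '}')?;
    Some(rest[..end].trim())
}

fn main() {
    // Hypothetical, already base64-decoded token payload following the
    // schema proposed above.
    let decoded = r#"{"start_datetime":"2022-11-19T09:00:00Z","cohort_duration":"7200","cohort_number":"4","token":"abc123"}"#;
    assert_eq!(extract_field(decoded, "cohort_number"), Some("4"));
    assert_eq!(extract_field(decoded, "token"), Some("abc123"));
    println!("cohort {}", extract_field(decoded, "cohort_number").unwrap());
}
```

With the data embedded this way, the CLI can tell the user their cohort and its start time without any call to the coordinator.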

CLI crashes after submission of contribution: ContributionFileSizeMismatch

The contribution was uploaded correctly according to https://ceremony.namada.net/

However, I wasn't able to submit an attestation link.

Final console log output:

[11/11] Notifying the coordinator of your uploaded contribution.
Your contribution is being processed... This might take a minute...
thread 'main' panicked at 'Contribution failed: Server-side error: Coordinator failed: ContributionFileSizeMismatch', phase2-cli/src/bin/namada-ts.rs:503:22

Running commit 9c3e6c6 on Ubuntu 22.04.1 LTS.

Refactor repo

  • Clean the repo of all the unused files/folders coming from the original Aleo repo
  • Remove unused imports from the Cargo.toml files
  • Restrict the elements imported from phase1-coordinator into phase1-cli to only the strictly necessary ones by using features
  • Eliminate all warnings in compilation
  • Add documentation and user guide

issue

The error I got while contributing:

thread 'main' panicked at 'ERROR: could not contact the Coordinator, please check the url you provided: Client("… full HTML of the ceremony site's 404 page: \"404: This page could not be found\" …")', phase2-cli/src/bin/namada-ts.rs:644:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Cohort-based authentication token

If the queue is open for anyone to join, many users can join at the same time, leading to long waiting times.

To prevent that, we need to add support for cohorts. A cohort is a set of pre-selected users who are allowed to contribute in a pre-determined timeframe. Each user in a cohort receives a unique token that allows them to join the queue during the attributed timeframe.

For each cohort, tokens are stored in `./tokens/namada_tokens_cohort_{cohort}.json`. Each time a user attempts to join the queue, the coordinator loads this list and checks whether the received token is in it.

`COHORT_TIME` determines the total time each cohort has to contribute.

Example
The ceremony starts on 09.09 at 12 PM UTC and we have `COHORT_TIME = Duration::from_hours(24)`. The first cohort (`cohort = 0`) will have 24 hours to contribute using the token they received. The second cohort (`cohort = 1`) will start on 10.09 at 12 PM UTC.
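The cohort-window check described above can be sketched as follows. This is a minimal illustration, not the coordinator's actual API: the function names and the epoch-seconds representation are assumptions made for the example.

```rust
use std::time::Duration;

/// Which cohort's timeframe is open at `now`?
/// `ceremony_start` and `now` are seconds since the Unix epoch.
fn active_cohort(ceremony_start: u64, now: u64, cohort_time: Duration) -> Option<u64> {
    if now < ceremony_start {
        return None; // ceremony has not started yet
    }
    Some((now - ceremony_start) / cohort_time.as_secs())
}

/// A token is accepted only if it belongs to the cohort whose
/// timeframe is currently open.
fn token_is_valid_now(token_cohort: u64, ceremony_start: u64, now: u64, cohort_time: Duration) -> bool {
    active_cohort(ceremony_start, now, cohort_time) == Some(token_cohort)
}
```

With `COHORT_TIME` set to 24 hours, a token from `cohort = 1` is rejected during the first day and accepted during the second.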

Health endpoint

It would be nice to have a `/healthcheck` endpoint returning a 200 status code. The endpoint should try to read a file containing arbitrary JSON content and, if the file is present, return that JSON as the response body.
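The file-lookup part of that endpoint could look like the sketch below; the function name and path handling are illustrative, and a route handler would return 200 with this body when it is `Some` (or an empty body otherwise).

```rust
use std::fs;
use std::path::Path;

/// Read the healthcheck JSON file if it exists.
/// Returns the raw file contents to be used as the response body.
fn healthcheck_body(path: &Path) -> Option<String> {
    fs::read_to_string(path).ok()
}
```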

Requests signature

Requests should be signed by the contributors (or coordinator) themselves.

Endpoints accessible only by the coordinator should accept only signatures coming from the coordinator itself.
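One prerequisite for request signing is that client and coordinator agree on exactly which bytes are signed. The sketch below shows one way to assemble a deterministic payload over (method, path, body); the scheme, field choice, and function name are assumptions for illustration, not the ceremony's actual signing format.

```rust
/// Build a canonical message to sign for a request.
/// Each field is length-prefixed so field boundaries are unambiguous,
/// preventing two different requests from producing the same bytes.
fn signing_payload(method: &str, path: &str, body: &str) -> String {
    format!(
        "{}:{}{}:{}{}:{}",
        method.len(), method,
        path.len(), path,
        body.len(), body
    )
}
```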

TLS support in requests

Requests in `phase1-cli/src/requests.rs` should support TLS when the `aws` compilation flag is set.

Token blacklisting

At the moment, a participant is able to contribute multiple times with the same token (provided that these contributions happen in the correct cohort). This effectively prevents the correct load distribution across the cohorts. We should blacklist a token once a contribution (either valid or invalid) has been received, and we must include a check on blacklisted tokens in the `/contributor/join_queue` endpoint to prevent people from joining the ceremony with a blacklisted token.

Also, when blacklisting a token, we should check that no other contributor in the queue has provided that token when joining. It is simpler to perform this check earlier: when a participant joins the queue, we check that the token is not in the blacklist and that no other participant in the queue, nor the current contributor, has provided the same token. To do this we need a support set storing the tokens currently in the queue or contributing. This way we don't need to attach the token to the participant info, nor do we need any additional check on the queue (or the current contributor) when switching to a new round.

This logic could be controlled by an environment variable so that we can switch it on or off for testing or test runs.

More specifically:

  1. After a complete contribution has been received (i.e. after the call to the `/contributor/contribute_chunk` endpoint has been handled correctly), the token must be blacklisted regardless of the validity of the contribution. At the same time, the token must be removed from the set of current tokens
  2. The `/contributor/join_queue` endpoint must check that the provided token has not been blacklisted and that it is not in the set of current tokens
  3. When joining the queue, the contributor's token must be added to the set of current tokens
  4. If a contributor is dropped during the contribution (for any reason) we should allow them to rejoin, so their token should be removed from the set of current tokens (here we apply the same logic as for the IP address)
  5. No need to take action when a participant gets banned, since their token has already been blacklisted in step 1
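The state transitions above can be sketched with two sets, a blacklist and a set of tokens currently in use. The type and method names are illustrative; the coordinator's real data structures differ.

```rust
use std::collections::HashSet;

#[derive(Default)]
struct TokenTracker {
    blacklist: HashSet<String>,
    /// Tokens of participants currently in the queue or contributing.
    current: HashSet<String>,
}

impl TokenTracker {
    /// Steps 2 and 3: accept a token only if it is neither
    /// blacklisted nor already in use, then mark it as in use.
    fn try_join(&mut self, token: &str) -> bool {
        if self.blacklist.contains(token) || self.current.contains(token) {
            return false;
        }
        self.current.insert(token.to_string());
        true
    }

    /// Step 1: after any contribution (valid or invalid), blacklist
    /// the token and remove it from the set of current tokens.
    fn on_contribution_received(&mut self, token: &str) {
        self.current.remove(token);
        self.blacklist.insert(token.to_string());
    }

    /// Step 4: a dropped contributor may rejoin with the same token.
    fn on_drop(&mut self, token: &str) {
        self.current.remove(token);
    }
}
```

Step 5 falls out for free: a banned participant's token is already in `blacklist`, so `try_join` rejects it with no extra bookkeeping.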

CLI User flow

The CLI needs to ask the user a few questions that determine the execution flow:

  1. Do you want to participate in the incentivised trusted setup?
  2. Do you want to take part in the contest?
  3. Do you want to contribute on another machine?
  4. Do you want to input your own seed of randomness?

Questions 1 and 2 are asked when the user runs the CLI. At the end of the ceremony, the input data (full name, email, `contest_participant`) needs to be signed together with the contribution hash, the contribution file hash and the public key, then sent back to the coordinator.

Questions 3 and 4 are asked once the user has acquired the lock and the challenge has been saved to a file. If the user wants to contribute on another machine, we should have an `--offline` flag that skips the communication steps with the coordinator and goes straight to the contribution part.
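The mapping from the four answers to the execution flow could be captured as below. The struct, its field names, and the rule that the contest answer only matters for incentivised participants are assumptions for illustration.

```rust
#[derive(Debug, PartialEq)]
struct FlowOptions {
    incentivised: bool,        // question 1
    contest_participant: bool, // question 2
    offline: bool,             // question 3: contribute on another machine
    custom_seed: bool,         // question 4: user-provided randomness seed
}

/// Derive the flow from the four yes/no answers. As an assumption,
/// contest participation is only recorded for incentivised users.
fn flow_from_answers(q1: bool, q2: bool, q3: bool, q4: bool) -> FlowOptions {
    FlowOptions {
        incentivised: q1,
        contest_participant: q1 && q2,
        offline: q3,
        custom_seed: q4,
    }
}
```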
