input-output-hk / mithril

Stake-based threshold multi-signatures protocol

Home Page: https://mithril.network

License: Apache License 2.0

Topics: blockchain, cardano, cardano-node, mithril, rust, scalability, multi-signature-aggregation, multi-signatures, multisig

mithril's Introduction

Mithril 🛡️

☀️ Introduction

Mithril is a research project whose goal is to provide stake-based threshold multi-signatures (STM) on top of the Cardano network.

In a nutshell, Mithril can be summarized as:

A protocol that enables stakeholders in a proof-of-stake (PoS) blockchain network to individually sign messages, which are then aggregated into a multi-signature, guaranteeing that they represent a minimum share of the total stake.

In other words, an adversarial participant with less than this share of the total stake will be unable to produce valid multi-signatures 🔐.

The goal of the first implementation of the Mithril network protocol is to provide a way to bootstrap a fully operating Cardano node quickly, in less than two hours, compared to the days it previously took.

To unleash the power of Mithril and leverage new use cases, we have also implemented a framework in the Mithril network that allows the certification of multiple types of data, provided they can be computed deterministically.

🛡️ Mainnet availability

Mithril is currently a work in progress, and is available in its beta version on mainnet.

✔️ It is ready to be safely deployed in the SPO production infrastructure for Cardano mainnet.

⚠️ The artifacts produced by the network are NOT yet ready for production use until a minimum level of participation in the network is reached (this level depends on the artifact type).

Disclaimer

By using the Mithril protocol, you acknowledge that the protocol is still in development and that use of the mithril-signer, mithril-aggregator, and mithril-client on mainnet is entirely at your own risk.

You also acknowledge and agree to have an adequate understanding of the risks associated with use of the Mithril network, and that all information and materials published, distributed, or otherwise made available on mithril.network and the Mithril GitHub repository are made available on an 'AS IS' and 'AS AVAILABLE' basis, without any representations or warranties of any kind. All implied terms are excluded to the fullest extent permitted by law. For details, see also sections 7, 8 and 9 of the Apache 2.0 License.

🚀 Getting started with Mithril

If you are a Cardano SPO, a good entry point is the SPO onboarding guide. Additionally, you can find detailed instructions for running a signer node in this guide.

If you are interested in fast bootstrapping of a Cardano node, please refer to this guide.

Get access to tutorials, the user manual, guides, and plenty of documentation on our website!

The Mithril wiki is also available here.

📡 Structure of the repository

This repository consists of the following parts:

  • Mithril aggregator: the node of the Mithril network responsible for collecting individual signatures from the Mithril signers and aggregating them into a multi-signature. The Mithril aggregator uses this ability to provide certified snapshots of the Cardano blockchain.

  • Mithril client: this is the client library that can be used by developers to interact with Mithril certified data in their applications.

  • Mithril client CLI: the CLI used for retrieving the certified artifacts produced by the Mithril network, e.g. the certified Cardano chain snapshots used to securely restore a Cardano node.

  • Mithril client WASM: the WASM-compatible library used for retrieving the certified artifacts produced by the Mithril network.

  • Mithril common: this is the common library that is used by the Mithril network nodes.

  • Mithril STM: the core library that implements the Mithril protocol's cryptographic engine.

  • Mithril explorer: the explorer website that connects to a Mithril aggregator and displays its certificate chain and artifacts.

  • Mithril infrastructure: the infrastructure used to power a Mithril network in the cloud.

  • Mithril signer: the node of the Mithril network responsible for producing individual signatures that are collected and aggregated by the Mithril aggregator.

  • Internal: the shared tools and API used by Mithril crates.

  • Mithril test lab: the suite of tools that allow us to test and stress the Mithril protocol implementations.

    • Mithril devnet: the private Mithril/Cardano network used to scaffold a Mithril network on top of a Cardano network.

    • Mithril end to end: the tool used to run test scenarios against a Mithril devnet.

  • Protocol demonstration: a simple CLI that helps understand how the Mithril protocol works and the role of its protocol parameters.

  • Examples: out-of-the-box working examples to get familiar with Mithril.

🌉 Contributing

The best way to contribute right now is to provide feedback. Start by taking a look at our documentation.

Should you have any questions, ideas, or issues, we would like to hear from you.

When contributing to this project and interacting with others, please follow our Code of Conduct and our Contributing Guidelines.

mithril's People

Contributors

36thchambersoftware, abailly-iohk, abakst, albertodvp, alenar, algurin, brouwerq, ch1bo, curiecrypt, danielmain, demonsh, dlachaume, falcucci, ghubertpalo, ilya-korotya, iquerejeta, jbgi, jldodds, jpraynaud, m10f, manveru, neilburgess42, obrezhniev, olgahryniuk, proxiweb, sfauvel, spaceships, sviksha, trevorbenson, yvan-sraka


mithril's Issues

How to produce snapshots

As the actual value of Mithril lies in the capability it gives cardano-node users to boot their nodes faster, in minutes instead of hours, we want to understand how to produce reliable and reproducible snapshots from a node's DB.

Tasks to do:

  • Determine information needed to create a valid snapshot
  • the whole immutable folder of the database (required)
  • the protocolMagicId file (required)
  • the latest ledger state snapshot file in the ledger folder (optional)
  • Verify that the snapshotted files are platform independent

Tests successfully ran on the first 20 chunks of the immutable folders (macOS, Ubuntu, Ubuntu on Docker, Windows on 3 separate computers) 🟢

  • Determine best option to create a snapshot (difficulty to produce vs size vs time to restore)

The best option is Immutable + Ledger State, but this implies modifying the Cardano node.

Mainnet

| Data | Node | Full | Archive | Snapshot | Upload | Download | Restore | Startup |
|------|------|------|---------|----------|--------|----------|---------|---------|
| Immutable Only | standard | 43GB | 24GB | ~28m | ~45m | ~25m | ~12m | ~420m |
| With Ledger State | modified | 45GB | 25GB | ~28m | ~45m | ~25m | ~12m | ~65m |

Testnet

| Data | Node | Full | Archive | Snapshot | Upload | Download | Restore | Startup |
|------|------|------|---------|----------|--------|----------|---------|---------|
| Immutable Only | standard | 9.5 GB | 3.5 GB | ~7m | ~5m | ~3m | ~2m | ~130m |
| With Ledger State | modified | 10 GB | 3.5 GB | ~7m | ~5m | ~3m | ~2m | ~6m |

Host: x86 / +2 cores / +8GB RAM / +100GB HDD
Network: Download 150Mbps / Upload 75Mbps
Compression: gzip
Cardano Node: not running during snapshotting

  • Modify the Cardano Node such that it can produce deterministic snapshots

Commit: abailly-iohk/ouroboros-network@ae552cc
This task will be finalized in a separate issue: #100

  • Create a light CLI/shell script that can produce a snapshot and a deterministic digest from it (see the sketch below)

Question: Do we need to stop the node when the snapshot is done?
Answer: We can keep it running
Commit: 8fcbce9
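
A minimal sketch of how such a deterministic digest could be computed over the immutable folder (assuming the sha2 crate; the file selection and hashing layout are illustrative, not the actual snapshotter format):

```rust
use sha2::{Digest, Sha256};
use std::{fs, io, path::Path};

/// Hash the files of the `immutable` folder in a fixed (sorted) order so the
/// resulting digest does not depend on directory traversal order or platform.
fn immutable_digest(db_dir: &Path) -> io::Result<String> {
    let mut entries: Vec<_> = fs::read_dir(db_dir.join("immutable"))?
        .collect::<Result<Vec<_>, _>>()?;
    entries.sort_by_key(|e| e.file_name());

    let mut hasher = Sha256::new();
    for entry in entries {
        // Hash the file name as well, so renames change the digest.
        hasher.update(entry.file_name().to_string_lossy().as_bytes());
        hasher.update(fs::read(entry.path())?);
    }
    Ok(hasher
        .finalize()
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect())
}
```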

Allow to extract the key pair from the `StmSigner`

It might be of interest to expose in the API a function for extracting the key pair (see the discussion in #43). The initial decision was not to do this, to avoid misuse of the keys, but it might be more useful than dangerous.

Enhance documentation

We need to enhance the documentation:

  • Extract the documentation production in a separate workflow on the CI
  • Make a separate website with Docusaurus, like Hydra's Hydra.family

Package and deploy mithril library for internal consumption

There is interest from other projects within IOG in using capabilities provided by the Mithril library (e.g. ATMS for the EVM-as-a-Sidechain project by @dzajkowski). We need to:

  • Automate build and packaging of the library
  • Package standalone executable when available (see #44 )
  • Deploy those binaries to some well-known location for easy retrieval

Should we make `k` a generic parameter?

An aggregate signature requires individual signatures over k different indices.

Should we make k a generic parameter? We seemed to remember (but couldn't find) that we wanted to support multiple settings of k. What is the best way to achieve that?
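
Two illustrative shapes this could take (not the current API; names and fields are assumptions), contrasting a runtime parameter with a compile-time const generic:

```rust
/// Option A: `k` as a runtime field of the protocol parameters, so different
/// settings can coexist without generating separate code.
struct ParamsRuntime {
    k: u64,
    m: u64,
    phi_f: f64,
}

/// Option B: `k` as a const generic, so each setting of `k` is a distinct
/// type and mismatched settings are rejected at compile time.
struct ParamsConst<const K: u64> {
    m: u64,
    phi_f: f64,
}

fn main() {
    // Values are illustrative.
    let _a = ParamsRuntime { k: 357, m: 2643, phi_f: 0.2 };
    let _b: ParamsConst<357> = ParamsConst { m: 2643, phi_f: 0.2 };
}
```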

Prepare open-sourcing of repository

Mandatory Tasks

  • Clean up the doc directory by removing IOG-specific elements (will be done in #98)
  • Align the LICENSE with the rest of the ecosystem (Apache License 2.0, same as the Cardano node)
  • #245
  • Refine README files
  • #191
  • Review/Comment/Improve Blog Post
  • Implement latest tag on Docker images
  • Design SPO registration form and provide link
  • Get legal sign off on website
  • Add missing comments in repository code for Rust (add #![warn(missing_docs)] in lib.rs and main.rs files)
  • Get sign-off from mgmt for public visibility of the repository
  • Add CONTRIBUTING.md
  • Add branch protection rules before merging a PR with Required Approval (see)

Write OpenApi specification for Aggregator

  • This specification will define the interface between the signers and the aggregators in a language agnostic way
  • This needs to be done early in order to also provide the Monitor with a well-defined interface describing the messages it will see exchanged in the network
  • We can start from https://github.com/input-output-hk/mithril/blob/main/monitor/src/Mithril/Messages.hs, generate basic JSON from there and then use the YAML as the source of truth

Tasks to do:

  • Create the openapi.yaml file at the root of the repository
  • Update the architecture diagram to better reflect this specification
  • Add a workflow in the CI to create a UI from this specification

Truncated Tree Traversal

Currently, mithril certificates involve verifying a few hundred paths over a Merkle tree. For the parts of the paths that are close to the root, there will be large amounts of overlap. For example, there are exactly two children under the root. We thus expect ~half of the paths to use one child and the rest to use the other. This implies that both children appear multiple times in the certificate.

A simple approach to mitigate this is to select the X-th layer of the tree and expose it. This will require 2^X hashes in space. However, it will also allow us to truncate paths at the X+1 layer, rather than the root, saving k * X hashes. For X ~ log(k), this is a clear gain.

Motivational Example:

For k~256, we set X=8. We use 256 hashes to represent the exposed layer, but save 8 steps from each path. Since the 256 exposed hashes amortize to roughly one hash per path, each path is effectively shortened by 7 steps.

Trees of 2^10/2^20/2^30 leaves have paths of length 10/20/30. Additionally, leaf preimages, ev values, signatures, and path data add the equivalent of ~5 steps for each party. Thus, the approximate savings are 7/15, 7/25, and 7/35, i.e. 46%, 28%, and 20%, respectively.

For the current pool count of ca. 2^12, this gives 7/17, i.e. a 42% size reduction.
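
A minimal sketch of what truncated paths and their verification against an exposed layer could look like (illustrative types, not the mithril-stm API):

```rust
/// A Merkle path truncated at layer X + 1: it contains only the sibling
/// hashes from the leaf up to the exposed layer X (counted from the root,
/// so layer X has 2^X nodes), instead of going all the way to the root.
struct TruncatedPath {
    leaf_index: usize,
    siblings: Vec<[u8; 32]>, // length = tree depth - X
}

/// Verify a truncated path against the exposed layer instead of the root.
/// `hash_pair` stands in for the tree's internal node hash.
fn verify_truncated(
    leaf_hash: [u8; 32],
    path: &TruncatedPath,
    exposed_layer: &[[u8; 32]], // the 2^X nodes of layer X, left to right
    hash_pair: impl Fn(&[u8; 32], &[u8; 32]) -> [u8; 32],
) -> bool {
    let mut node = leaf_hash;
    let mut index = path.leaf_index;
    for sibling in &path.siblings {
        node = if index % 2 == 0 {
            hash_pair(&node, sibling)
        } else {
            hash_pair(sibling, &node)
        };
        index /= 2;
    }
    // After consuming the siblings, `index` addresses the ancestor in layer X.
    exposed_layer[index] == node
}
```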

Changes in the primitive:

Potentially. We can either change certificate aggregation and verification to expect compressed paths or add a pair of compression/decompression functions to convert between the two representations.

Changes in node logic:

Minimal. Nodes might need to store the “exposed” layer rather than only the AVK in order to aggregate.

Relevant to BP/Halo:

Yes. The benefits will be similar, albeit smaller if the circuit uses a higher arity for the tree. The circuit will be hardcoded to expect a compressed representation of the tree, but the end result will be a more compact circuit. NB: there is no need for multiple versions of the tree here; we only need the compressed one.

Security impact:

None.

Low-Effort Alternative:

We can try using off-the-shelf compression to exploit the overlap. This will likely provide smaller benefits, as a compression function cannot infer the value of a parent from the values of the children (and thus omit all parents after the exposed layer), but it will be much simpler to implement. However, this low-effort approach does not work for circuit-based proofs.

Merkle Tree Discussion and Todo

  • Avoiding hashing the leaves. Iñigo asked whether we could avoid hashing the leaves of the tree to save space. Our Merkle tree does not take ownership of the leaves (field element + u64 for the stake). Hashing them is a space win, since the actual leaves are then dropped by the caller. Do you agree?
  • Merkle trees with a non-power-of-2 number of leaves contain nodes with only one child. We think that the standard way of handling this case is to hash the single child (see the sketch after this list). Do you agree?
  • Using our own Merkle tree implementation. We didn't have a lot of confidence in the Ark MT implementation. We had specific requirements at first, because of Poseidon, but when we dropped that, we overlooked the fact that we could use a library. We think it's worth investigating; it would lower the maintenance cost and complexity of the library. @abakst will look into this.
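
A minimal sketch of the single-child rule mentioned above (illustrative, not the library's implementation): when building the next layer, a lone trailing node is hashed on its own.

```rust
/// Build the parent layer from `layer`, hashing a lone trailing child with
/// `hash_single` instead of padding the tree to a power of two.
fn next_layer(
    layer: &[[u8; 32]],
    hash_pair: impl Fn(&[u8; 32], &[u8; 32]) -> [u8; 32],
    hash_single: impl Fn(&[u8; 32]) -> [u8; 32],
) -> Vec<[u8; 32]> {
    layer
        .chunks(2)
        .map(|pair| match pair {
            [left, right] => hash_pair(left, right),
            [only] => hash_single(only),
            _ => unreachable!(),
        })
        .collect()
}
```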

Store snapshots on public address

We need to create snapshot archives and store them at a publicly accessible address:

  • Create a snapshot (include latest ledger state even if it's not deterministic at the moment). Use process developed for mithril-snapshotter-poc
  • Upload to Google Cloud as a first step
  • Check the expiration feature of the archive on the cloud for cost efficiency
  • Investigate secrets management with GitHub, or best practices at IOHK

Remove unnecessary dependencies

rand_core and rand_chacha are dependencies of rand and, as far as I can see, we only need those two, so it is probably best to depend on rand_core and rand_chacha explicitly instead of rand.
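
For illustration, a minimal sketch of what the call sites look like when depending on rand_core and rand_chacha directly (the seed handling here is only for the example):

```rust
use rand_chacha::ChaCha20Rng;
use rand_core::{RngCore, SeedableRng};

/// Draw 32 random bytes from a ChaCha20 RNG without pulling in the full
/// `rand` facade. In the library, the seed would come from the caller or OS.
fn sample_bytes(seed: [u8; 32]) -> [u8; 32] {
    let mut rng = ChaCha20Rng::from_seed(seed);
    let mut out = [0u8; 32];
    rng.fill_bytes(&mut out);
    out
}
```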

Typestate for Clerk and Signer

A Clerk or a Signer should only be initialised once the registration is closed. This can be enforced at type level. For instance, rather than creating an StmSigner out of an iterator of RegPartys, we should create it out of a ClosedKeyReg instance. Moreover, this creation should consume the StmInitializer, so that, in case the stake or participating parties change, we enforce the creation of a new StmInitializer.

This opens a question. Do we want to allow a transition from an StmSigner back to an StmInitializer? Or are we OK in just initializing from scratch an StmInitializer every time that stake/participants change?

I think the former makes sense, so that we can use the StmSigner to "keep" the consistent data (such as the party_id, sk, pk). What are your thoughts?
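
A minimal typestate sketch of this proposal (names follow the discussion; fields and signatures are illustrative, not the actual mithril API):

```rust
struct StmInitializer {
    party_id: u64,
    stake: u64,
    // sk, pk, ...
}

/// A key registration that has been closed: no further parties can join.
struct ClosedKeyReg {
    // frozen registration data: AVK, total stake, ...
}

struct StmSigner {
    party_id: u64,
    stake: u64,
    // sk, avk, ...
}

impl StmInitializer {
    /// A signer can only be built from a *closed* registration, and building
    /// it consumes the initializer, so any change in stake or participants
    /// forces the creation of a fresh `StmInitializer`.
    fn new_signer(self, _closed_reg: &ClosedKeyReg) -> StmSigner {
        StmSigner { party_id: self.party_id, stake: self.stake }
    }
}

impl StmSigner {
    /// The optional reverse transition discussed above: keep the stable data
    /// (party_id, keys) and return to the initializer state for a new epoch.
    fn into_initializer(self, new_stake: u64) -> StmInitializer {
        StmInitializer { party_id: self.party_id, stake: new_stake }
    }
}
```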

Note: this is a breaking change for the C-API.
cc: @abailly-iohk @algurin

Verify the signatures in `dedup_sigs_for_indices` instead of in `aggregate`

The aggregation function first selects one signature per index and only then verifies the signatures. Because signatures are dismissed without being verified, we might select an invalid signature and dismiss a valid one for the same index. Then, when signatures are verified, a single failing signature makes the whole aggregation fail. This is not necessary either: invalid signatures may be submitted without the aggregation having to fail, since we only need k valid signatures.

This would allow an adversary to invalidate aggregation.
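
A minimal sketch of the proposed ordering (illustrative types, not the mithril-stm API): verify each candidate while deduplicating per index, so an invalid signature can never displace a valid one.

```rust
use std::collections::HashMap;

struct Sig {
    index: u64,
    bytes: Vec<u8>,
}

/// Keep at most one *verified* signature per index; return `None` if fewer
/// than `k` indices end up covered. `verify` stands in for real verification.
fn dedup_verified_sigs<'a>(
    candidates: &'a [Sig],
    verify: impl Fn(&Sig) -> bool,
    k: usize,
) -> Option<Vec<&'a Sig>> {
    let mut per_index: HashMap<u64, &Sig> = HashMap::new();
    for sig in candidates {
        // Verify *before* claiming the index, instead of dismissing first.
        if per_index.contains_key(&sig.index) || !verify(sig) {
            continue;
        }
        per_index.insert(sig.index, sig);
        if per_index.len() >= k {
            break;
        }
    }
    (per_index.len() >= k).then(|| per_index.into_values().collect())
}
```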

Define skeletal End-to-end test for client snapshots verification

The goal is to put in place ETE test infrastructure consisting of:

  • run the ETE test as part of CI
  • A private cluster of 3 cardano block producers,
  • A transaction generator/feeder,
  • 3 signers, one for each cardano node,
  • An aggregator server,
  • A "client" using the aggregator's API to download, verify and use a snapshot.

All of the above will obviously be mostly stubbed or faked at the moment.

Extra argument in aggregation function

Msp::aggregate_sigs has an unused msg input (code), because the paper includes it. We think Msp is not intended to be a general API, just a dependency for Mithril, so it makes sense not to have msg as an argument. What do you think?

Sizes of the pointer can be checked

It should be possible to abstract the pointer sizes away from the caller. Otherwise, we should check that the given pointer size matches what the library expects.

Investigate use of db-analyser to produce snapshots

The tool db-analyser should provide a good way to create deterministic snapshots of the ledger state:

  • Check if we can rely on the slot number or the block number to produce consistent snapshots
  • Create a script/cli that can produce a deterministic snapshot from an existing ledger state

Include more complete benchmarks

It is of interest to include benchmarks that show the behaviour of STMs and MSPs with different numbers of participants and parameters. Similarly, it is of interest to include benchmarks on how the size of the proof grows as the number of participants grows.
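
One possible shape for such benchmarks, sketched with criterion; the `sign_and_aggregate` helper is hypothetical and would wrap the actual STM registration, signing, and aggregation for a given number of parties:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

/// Hypothetical helper: register `n_parties`, produce signatures, aggregate.
fn sign_and_aggregate(n_parties: usize) {
    let _ = n_parties; // placeholder for the real STM calls
}

fn stm_benches(c: &mut Criterion) {
    let mut group = c.benchmark_group("stm_sign_and_aggregate");
    for n in [8usize, 32, 128, 512] {
        group.bench_with_input(BenchmarkId::from_parameter(n), &n, |b, &n| {
            b.iter(|| sign_and_aggregate(n))
        });
    }
    group.finish();
}

criterion_group!(benches, stm_benches);
criterion_main!(benches);
```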

Fail to compile rust library on linux

When I try to cargo build inside the rust directory, I get the following error:

error: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/curry/mithril/rust/target/debug/deps/libzeroize_derive-f42e2238bc0119fb.so)
   --> /home/curry/.cargo/registry/src/github.com-1ecc6299db9ec823/zeroize-1.4.1/src/lib.rs:220:9
    |
220 | pub use zeroize_derive::Zeroize;
    |         ^^^^^^^^^^^^^^

error: could not compile `zeroize` due to previous error
warning: build failed, waiting for other jobs to finish...
error: build failed

I have tried googling and found issues related to static linking suggesting musl should be used instead, but even removing the staticlib target does not fix it.

Proof Trait Todos

This issue tracks our progress on the following tasks:

  • Comment code: why we used From<Statement> in mithril_proof
  • Comment: why Rc is used in mithril_proof::Statement
  • Code: change the output of Witness::verify to use a Result type so we can know more about what failed than false.
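
A minimal sketch of what the last item could look like (the error enum and its variants are illustrative, not an existing type):

```rust
/// Possible reasons a proof fails to verify; returning these instead of a
/// bare `false` tells the caller what went wrong.
#[derive(Debug)]
enum ProofError {
    /// The aggregate verification key does not match the statement.
    AvkMismatch,
    /// Fewer than `k` valid signatures were found.
    NotEnoughSignatures { found: usize, required: usize },
    /// An individual signature failed to verify.
    InvalidSignature { party_index: usize },
}

trait Witness {
    /// Proposed change: a structured error instead of `bool`.
    fn verify(&self) -> Result<(), ProofError>;
}
```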

Modify Cardano node snapshot policy

Modify the Cardano Node such that it can produce deterministic snapshots.
This subject was previously addressed with a POC in #84

  • Modify the Cardano node snapshot policy so that it creates snapshots at regular block intervals
  • Add a test that validates the snapshot creation is deterministic and the same on different nodes
  • Create a PR on the Cardano node repository (https://github.com/input-output-hk/cardano-node)

Clarify Registration and PartyId

The UC security modelling in https://eprint.iacr.org/2021/916.pdf makes use of a PartyId.

Clarify how registration happens and the role of PartyId.

  • KeyReg initialization should take a list of valid party IDs, or create them itself. PartyId is currently used as an index into the Merkle Tree as well as a global identifier for each party. The former could be chosen by KeyReg while the latter could be a hash of the public key.
  • Address maliciously chosen PartyIds by ensuring the public keys are unique.

Hash to Curve

Implement hash-to-curve as given in this spec

  • Question: what is the public key Y for us? Do we ignore it? @iquerejeta
  • Mod by q after hashing
  • Implement it

Shorter Certificates via Early Stopping

Currently, mithril operates with a single set of f/m/k parameters. The parameter f determines the probability of success, and the protocol then requires k successes over m indices. For security and liveness, k and m are chosen so that an adversarial stake will have a negligible probability of success while an honest one will have a significant one. Importantly, this choice assumes that (1) the adversarial stake will refuse to cooperate with honest users and that (2) the adversarial stake is the maximum allowed by the definition.

If we relax the two assumptions above, we can be much more aggressive in our choice of parameters, with no security impact against adversarial forgeries. There is, however, a considerable impact on denial of service / liveness. This can be overcome:

We can select two* pairs (k_a, m_a) and (k_c, m_c), with the first one being aggressively parametrized and the second one conservatively. Signers operate as normal. Aggregators attempt to create an aggressively parametrized certificate before a conservative one, and verifiers prefer aggressive certificates to conservative ones.

This solution realizes the space savings of the aggressive parametrization if the adversary is weaker than allowed, but retains liveness against a maximal adversarial stake. Against forgeries, the adversary gains a small benefit, but the overall gain is on the order of 1 bit of security, i.e. from one chance at 2^{-100} to (less than) two chances at 2^{-100}.

Motivational Example:

One current (k/m) pair is 357 out of 2643. Shorter certificates can be set to 228/1400 (36% smaller) with a fallback to the initial values if we are unable to locate 228 sigs in the first 1400 indices.
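
A minimal sketch of the aggregator-side fallback this describes (the parameter values, types, and `try_aggregate` helper are illustrative, not the actual aggregator API):

```rust
#[derive(Clone, Copy)]
struct Params {
    k: u64,
    m: u64,
}

enum Certificate {
    Aggressive(Vec<u8>),
    Conservative(Vec<u8>),
}

/// Try to build a short certificate with the aggressive parameters first and
/// fall back to the conservative ones if not enough signatures are found.
fn aggregate_with_fallback(
    aggressive: Params,   // e.g. k = 228, m = 1400
    conservative: Params, // e.g. k = 357, m = 2643
    try_aggregate: impl Fn(Params) -> Option<Vec<u8>>,
) -> Option<Certificate> {
    try_aggregate(aggressive)
        .map(Certificate::Aggressive)
        .or_else(|| try_aggregate(conservative).map(Certificate::Conservative))
}
```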

Changes in the primitive:

Low to none (assuming k and m are not hardcoded or embedded in sigs/certs; see also #9).

Changes in node logic:

Yes, limited. Nodes need to be aware of both values and use them in the correct order.

Relevant to BP/Halo:

Yes, the benefits will be similar. Need a “short” version of the circuit to handle short proofs, but no structural changes are needed.

Security impact:

Negligible. We can quantify the advantage of the adversary as less than 2x the original. We can either accept that, or re-parametrize for a 2^{-101} base advantage, which will still be bounded by 2^{-100} after doubling.

Unbalanced Trees for Unbalanced Stake Distributions

Currently, the Merkle tree used for the AVK is balanced which implies that signatures produced by any two parties will have the same length. Since parties with high stake are expected to produce signatures more often, it will be beneficial to place them in short branches. Vice versa, parties with low stake may be placed deeper with little cost (if amortized over the frequency of signing).

A variant of Huffman coding may be the optimal choice for this, but allowing the path length to vary completely might be too complex for circuit-based systems. As an in-between solution, consider a root with two subtrees: a “short” tree with the 2^Y largest stakeholders, and a “long” tree with everybody else.
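
A minimal sketch of how parties could be split between the two subtrees (illustrative types, not the actual key-registration code):

```rust
struct Party {
    id: u64,
    stake: u64,
}

/// Put the 2^Y heaviest stakeholders in the "short" subtree and everybody
/// else in the "long" one; returns (short_leaves, long_leaves).
fn split_short_long(mut parties: Vec<Party>, y: u32) -> (Vec<Party>, Vec<Party>) {
    parties.sort_by(|a, b| b.stake.cmp(&a.stake)); // descending stake
    let cut = (1usize << y).min(parties.len());
    let long = parties.split_off(cut);
    (parties, long)
}
```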

Motivational Example

On June 08, Cardano had 2800 stake pools. This rounds up to a tree with 2^12 leaves and paths of length 12.

From these pools, the top 512 held 92% of the stake. A subtree for these users would have paths of length 9 (+1 due to the subtree). The other users would lie in a tree with paths of length 12 (+1 due to the subtree). This implies that ~92% of signatures would be 2 steps shorter than before, and ~8% would be one step longer. On average, a signature would be shorter by 1.76 steps (~14%).

Changes in the primitive:

Yes. Verification must be able to handle paths of varying lengths (might be implicitly supported in the current version). Aggregate key generation will need to be changed to produce unbalanced trees.
Changes in node logic:
None.

Relevant to BP/Halo:

Limited. Circuit-based systems do not work well with conditional branches as we often need to represent both branches in the circuit and thus pay the cost of both options (or the more costly of the two if we can fold them together). This implies that a single circuit approach will obtain no benefit from the above.

For the basic short/long option, we might opt to prepare a number of preset circuits, each having a different mix of short and long branches, opting to use the smallest circuit that is appropriate at each point. This will require some amount of effort and limit the benefits to a degree.

Security impact:

None. (Also see Scaling Potential).

Scaling Potential:

Incentive structures in place may keep the size of the stakeholder pool close to its current value. This implies that the savings percentage will likely not increase further. On the other hand, the above proposal can maintain performance against malicious/capricious users who wish to operate a significant number of stake pools with insignificant stake.

Include probability of success for different parameters

It is of interest to understand what the chances are of producing valid certificates given a set of parameters. To this end, it would be useful to produce a matrix that determines the chances of succeeding in certificate generation for different values of m, k, phi_f, and the number of participants (and their stake).
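
A rough sketch of the kind of computation such a matrix needs, under a simplified model (assumed here: indices are independent, every participant submits a signature, phi(w) = 1 - (1 - phi_f)^w for relative stake w, and the certificate succeeds if at least k of the m indices are won by someone):

```rust
/// Lottery-winning probability for a party with relative stake `w`.
fn phi(phi_f: f64, w: f64) -> f64 {
    1.0 - (1.0 - phi_f).powf(w)
}

/// P[certificate succeeds] = P[Binomial(m, p) >= k], where p is the chance
/// that at least one participant wins a given index.
fn success_probability(m: u64, k: u64, phi_f: f64, stakes: &[f64]) -> f64 {
    debug_assert!(k <= m);
    let p: f64 = 1.0 - stakes.iter().map(|&w| 1.0 - phi(phi_f, w)).product::<f64>();
    if p >= 1.0 {
        return 1.0;
    }
    // Binomial tail computed iteratively to avoid huge factorials.
    let mut term = (1.0 - p).powi(m as i32); // P[X = 0]
    let mut below_k = 0.0;
    for x in 0..k {
        below_k += term;
        term *= (m - x) as f64 / (x + 1) as f64 * p / (1.0 - p);
    }
    (1.0 - below_k).max(0.0)
}
```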

Test the C API and the rust library with a single command

Ideally, we would test the Rust library and the C API with a single, all-powerful cargo test. Super ideally, we could document how to use the C API with the single, all-powerful cargo doc. It seems there might be ways to achieve this. Otherwise, a Makefile would be just fine (but not as cool).

Question: Ev output

On one hand, the output of Msp::ev should be an element of Zp with a real proof system. On the other hand, with trivial concatenation, it is an output of blake2b. Finally, it must be compared against the output of phi, which is a real number in the range [0,1]. The paper does not define how to do this conversion. We are currently just dividing the 64-bit output from the hash by 2^64 and comparing that with the output of phi. What should we do in general here?
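
For reference, a minimal sketch of the conversion currently described (assuming the blake2 crate; the message/index layout is illustrative):

```rust
use blake2::{Blake2b512, Digest};

/// Map 64 bits of the blake2b output into [0, 1) by dividing by 2^64.
fn ev_as_unit_interval(msg: &[u8], index: u64) -> f64 {
    let mut hasher = Blake2b512::new();
    hasher.update(msg);
    hasher.update(index.to_be_bytes());
    let out = hasher.finalize();
    let ev = u64::from_be_bytes(out[..8].try_into().unwrap());
    ev as f64 / 2f64.powi(64)
}

/// Lottery check: compare the mapped value against phi(stake).
fn wins_lottery(msg: &[u8], index: u64, phi_of_stake: f64) -> bool {
    ev_as_unit_interval(msg, index) < phi_of_stake
}
```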

Define "End-to-End" property in Monitor language

We want to assert and verify (roughly) the following (Safety/liveness) property:

  • Nodes retrieve protocol parameters
  • Nodes register their keys
  • Snapshot is triggered
  • Nodes send signatures to aggregator
  • Aggregator eventually produces a valid certificate for the required snapshot

Mithril Client can restore Cardano Node from snapshot

We need to implement a lightweight Mithril client (at this stage, without certificate verification):

  • Retrieve snapshots list from the Mithril Aggregator
  • Retrieve a snapshot's details from the Mithril Aggregator
  • Download the archive of a snapshot (stored on a public server, http download only)
  • Unpack the archive and restore it to a database folder of a Cardano node

Deploy Mithril Aggregator cloud service

The Mithril Aggregator should be accessible on a public address:

  • Bootstrap a hosting environment on Google Cloud (Terraform and use packaged Docker image)
  • Create a CD workflow in the CI to update the hosted aggregator (staging/QA/prod environments)

Spin-up devnet locally for testing purpose

We need to create a local devnet for testing the Mithril network components:

  • Create a script/process to launch a devnet working locally and/or for all the developers on a limited network

Validate all signatures or only `k`

We are planning to reward signers for successfully participating in the process, right (in order to incentivise them)? If that's the case, what should happen in the following scenario? Suppose we have a threshold k. Now, suppose that there exist l > k valid signatures submitted by the participants. Should we reward all l signers, or should we only reward the “first” k?

Create an executable exposing various Mithril TMS operations

Having a CLI available to do various operations in Mithril would be helpful for documenting, explaining, and experimenting. Every language and environment can easily spawn processes or run shell commands, so this would provide a crude but simple integration point.

Reuse keys across epochs

Regarding StmInitializer, @pyrros observes

"We might want to separate key generation vs registration. Keeping them separate might be simpler for reusing keys across epochs, keeping them together might simplify registration error handling."

I think this sounds like a good idea. It seems to me that perhaps what we want is an StmInitializer that can produce multiple StmSigners (currently the API has a more linear flavor, consuming the StmInitializer via the finish() method). Based on the above though, I think the API ought to:

  • Allow creation of an StmInitializer
  • Allow client to generate a new key for the StmInitializer
  • Allow client to register current StmInitializer with the registration service
  • In one step retrieve all participants from the registration service and produce an StmSigner (currently this is split into a build_avk() step and a finish() step), but without consuming the StmInitializer, so that it can easily be used to re-register.
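
A minimal sketch of the API shape those four bullets suggest (names and signatures are illustrative, not the actual mithril API): the initializer is borrowed rather than consumed, so it can be reused to re-register in a later epoch.

```rust
struct StmInitializer { /* party_id, sk, pk, ... */ }
struct RegistrationClose { /* all registered parties for the epoch */ }
struct StmSigner { /* sk, avk, ... */ }

impl StmInitializer {
    fn new() -> Self {
        StmInitializer {}
    }

    /// Generate (or rotate) the key pair held by this initializer.
    fn generate_new_key(&mut self) { /* ... */ }

    /// Register the current key with the registration service.
    fn register(&self) { /* ... */ }

    /// One step replacing build_avk() + finish(): fetch all participants and
    /// produce a signer *without* consuming `self`, so the same initializer
    /// (and keys) can be registered again for the next epoch.
    fn new_signer(&self, _closing: &RegistrationClose) -> StmSigner {
        StmSigner {}
    }
}
```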
