entropyxyz / synedrion

Implementation of the Canetti-Gennaro-Goldfeder-Makriyannis-Peled threshold signing scheme

Home Page: https://docs.rs/synedrion

License: GNU Affero General Public License v3.0

Rust 99.44% Makefile 0.42% TypeScript 0.14%

synedrion's Introduction

Synedrion


A threshold signing library based on the CGGMP'21 scheme.

WARNING: the library is a work in progress (see Issues), and has not been audited. Use at your own risk.

This library is an implementation of a scheme described in "UC Non-Interactive, Proactive, Threshold ECDSA with Identifiable Aborts" by R. Canetti, R. Gennaro, S. Goldfeder, N. Makriyannis, and U. Peled (preprint is available at https://eprint.iacr.org/2021/060, and the published version at https://dl.acm.org/doi/10.1145/3372297.3423367).

Protocols

The library implements the following protocols from the paper:

  • ECDSA Key-Generation - generates the initial secret key shares and distributes the public counterparts between the nodes;
  • Auxiliary Info. & Key Refresh in Three Rounds - generates updates to the secret key shares and auxiliary information required for ZK proofs;
  • ECDSA Pre-Signing (Three-Round w/ O(n^2) Identification Cost) - performs all the signing calculations that do not depend on the message that is being signed;
  • ECDSA Signing (for Three-Round Presigning) - finishes up signing given a pre-hashed message.

The following components are work in progress:

  • Full support for identifiable aborts - proofs are currently generated when malicious behavior is detected, but no API for checking them is exposed; see #43;
  • ECDSA Pre-Signing & Signing (Six-Round w/ O(n) Identification Cost) - see the tracking issue #36;
  • Threshold signing - basic functionality is available via ThresholdKeyShare, see #20 for more details;
  • Multiple shares per party - see #31;
  • Generic support for arbitrary curves - currently SECP256k1 is hardcoded, see #27 for more details.

High-level API

The library exposes a state-machine-like API which is the optimal choice for the majority of users. The available protocols are bundled to match the common tasks:

  • KeyGen is a merge of Key-Generation and Key Refresh protocols from the paper, generating full key shares with the auxiliary information;
  • KeyRefresh is the Key Refresh protocol by itself, used for updating the key shares; and
  • InteractiveSigning is a merge of 3-round Presigning and the corresponding Signing protocols.

The initial state for each protocol is instantiated by calling a function from the sessions module (e.g. make_key_gen_session for the KeyGen protocol). Besides the RNG, each protocol constructor takes the following common parameters:

  • The randomness shared by all participants. This is used to generate the session ID, which is included in the messages and is necessary to distinguish between parallel executions of the same protocol on the same machine;
  • A signer object to sign outgoing messages;
  • A list of verifiers corresponding to all the nodes participating in this session (that is, it includes the verifier of the local node).

Note that the order of the verifiers corresponds to the order of parties in the KeyShare object. That is, if you are executing a KeyGen protocol, the returned KeyShare will have shares in the order of the given verifiers, and if you are executing a KeyRefresh or InteractiveSigning protocol (which take a KeyShare as one of the inputs), the order of the shares in the used KeyShare must match the order in verifiers.
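
A rough sketch of what constructing a session might look like is below. The exact signature of make_key_gen_session is not reproduced here; the argument order, the error handling, and the concrete signer/verifier types are assumptions made for illustration only.

// A hedged sketch; see the `sessions` module documentation for the real signature.
// `shared_randomness` is a byte string agreed upon by all participants beforehand,
// `signer` signs our outgoing messages, and `verifiers` lists the verifying keys
// of all participants (including our own) in a fixed, agreed-upon order.
let session = make_key_gen_session(
    &mut OsRng,
    &shared_randomness,
    signer,
    &verifiers,
)
.unwrap();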

After the initial state is created, it goes through several rounds, in each of which it is used to create outgoing messages, verify and process the incoming messages, and finalize the round, creating a new state or the result. This would typically happen in a loop:

// <<< `session` was created by one of the constructors >>>

let mut session = session;
let mut cached_messages = Vec::new();

let key = session.verifier();

let result = loop {
    // This is kept in the main task since it's mutable,
    // and we don't want to bother with synchronization.
    let mut accum = session.make_accumulator();

    // Note: generating/sending messages and verifying newly received messages
    // can be done in parallel, with the results being assembled into `accum`
    // sequentially in the host task.

    let destinations = session.message_destinations();
    for destination in destinations.iter() {
        // In production usage, this will happen in a spawned task
        // (since it can take some time to create a message),
        // and the artifact will be sent back to the host task
        // to be added to the accumulator.
        let (message, artifact) = session
            .make_message(&mut OsRng, destination)
            .unwrap();

        // <<< send out `message` to `destination` here >>>

        // This will happen in a host task
        accum.add_artifact(artifact).unwrap();
    }

    for preprocessed in cached_messages {
        // In production usage, this will happen in a spawned task.
        let result = session.process_message(preprocessed).unwrap();

        // This will happen in a host task.
        accum.add_processed_message(result).unwrap().unwrap();
    }

    while !session.can_finalize(&accum).unwrap() {
        // This can be checked if a timeout expired, to see which nodes have not responded yet.
        let unresponsive_parties = session.missing_messages(&accum);
        assert!(!unresponsive_parties.is_empty());

        let (from, message) = // <<< receive `message` from `from` here >>>

        // Perform quick checks before proceeding with the verification.
        let preprocessed = session
            .preprocess_message(&mut accum, &from, message)
            .unwrap();

        if let Some(preprocessed) = preprocessed {
            // In production usage, this will happen in a spawned task.
            let result = session.process_message(preprocessed).unwrap();

            // This will happen in a host task.
            accum.add_processed_message(result).unwrap().unwrap();
        }
    }

    match session.finalize_round(&mut OsRng, accum).unwrap() {
        FinalizeOutcome::Success(result) => break result,
        FinalizeOutcome::AnotherRound {
            session: new_session,
            cached_messages: new_cached_messages,
        } => {
            session = new_session;
            cached_messages = new_cached_messages;
        }
    }
};

The library follows a "sans-I/O" design, so the user API is a little convoluted. See below for explanations of what is happening in the loop.

Accumulator

The session object is immutable so that it can be passed to spawned tasks by reference. You may want to offload creating new messages and processing incoming ones to tasks, since these operations may take a significant amount of time (up to seconds). The accumulator, created anew in each round, lives in the main task and collects the results of the spawned tasks.
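
As a rough illustration of this division of labor, here is a sketch using plain scoped threads and a channel in place of an async runtime. Only the Session and accumulator methods from the loop above are reused; everything else is an assumption of this sketch, including that the relevant types are Send/Sync.

use std::sync::mpsc;
use std::thread;

// Heavy work (processing messages) runs on worker threads; only `accum`,
// owned by the host thread, is ever mutated.
let (tx, rx) = mpsc::channel();
thread::scope(|scope| {
    for preprocessed in cached_messages {
        let tx = tx.clone();
        let session = &session;
        scope.spawn(move || {
            let result = session.process_message(preprocessed).unwrap();
            tx.send(result).unwrap();
        });
    }
    // Drop the original sender so the receiving loop below terminates
    // once all workers are done.
    drop(tx);

    // The host thread assembles the results into the accumulator sequentially.
    for result in rx {
        accum.add_processed_message(result).unwrap().unwrap();
    }
});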

Cached messages

It may happen that some nodes have already received all the messages from this round and started the next one, sending you messages from that round. If that happens, those messages will be saved in the accumulator, and returned on finalization as a part of FinalizeOutcome::AnotherRound. It is the user's responsibility to apply them in the next round.

Possible results

The state of each protocol is parametrized by a type implementing ProtocolResult. The Success type denotes the type of the contents of FinalizeOutcome::Success (e.g. it will be KeyShare or RecoverableSignature). The two remaining types correspond to the provable errors and correctness proofs described in the Errors section below.

Errors

Every method of the state returns Error, or one of its subcomponents if the error can be narrowed down. There are four different types of errors (a sketch of handling them follows the list):

  • Local - indicates a usage error (unfortunately, not everything can be enforced by types) or a bug in the library;
  • Provable - a remote party is at fault and can be immediately identified as such. The contents of this variant can be published and are sufficient to prove that the party with the given verifying key misbehaved;
  • Proof - a more complicated case where there has been a fault at the protocol level, but the faulty party cannot be immediately identified. The contents of this variant are a proof that you did your share of the work correctly; some arbiter must collect these proofs from every party, and at least one will necessarily turn out to be missing or invalid, indicating the faulty party;
  • Remote - there has been a problem with the remote party, but the fault cannot be proven at the library's level. For example, if the message's signature is invalid or the message is corrupted, we cannot publish that as a proof of misbehavior, because we could have easily forged such a message ourselves. Depending on the delivery channel used, one may or may not have some tangible evidence against the remote node in this case, but it cannot be handled at this library's level. Alternatively, one may flag such a node internally as unreliable, which can then be used to, say, avoid selecting it for future sessions.
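
Here is a hedged sketch of how a caller might dispatch on these four kinds of errors. The variant names follow the list above, but the payload shapes (party, evidence, proof and so on) are illustrative placeholders, not the library's actual definitions.

fn handle_error(error: Error) {
    match error {
        // A usage error or a bug in the library; nothing useful to do remotely.
        Error::Local(err) => panic!("usage error or library bug: {err:?}"),
        // Publish `evidence`: it is sufficient to prove that `party` misbehaved.
        Error::Provable { party, evidence } => { /* ... */ }
        // Hand the proof of our own correct behavior to an arbiter that collects
        // such proofs from every party to identify the faulty one.
        Error::Proof(proof) => { /* ... */ }
        // Cannot be proven to third parties; optionally flag the node internally
        // as unreliable to avoid selecting it for future sessions.
        Error::Remote(err) => { /* ... */ }
    }
}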

synedrion's People

Contributors

ameba23, fjarri, jakehemmerle, tuxxy, vitropy


synedrion's Issues

Can we avoid repetition of passing `num_parties`/`index` in `SessionState` impls?

An impl of SessionState::get_messages() takes num_parties and index and passes them on to Stage::get_messages(). Can it somehow be done automatically, so that there's less boilerplate in an impl? These values are needed in Stage to initialize the message accumulator. The alternatives are:

  • Add methods returning these to the Round trait. Cons: have to impl for each type implementing Round, adding boilerplate there.
  • Make Stage keep them. Cons: have to explicitly pass these values to Stage::new in each Stage impl.

Seems like the former is more logical: each round knows how many parties it works with, and how many messages it expects in return. If we have these available at the trait level, we can even have a trait-default method that returns (Receiving<R>, ToSendTyped, Accum) or something, so that we don't have to create the accumulator manually every time.

Tracking issue for the initial implementation

(Figure and section numbers correspond to the reference paper https://eprint.iacr.org/2021/060)

The 6-round protocol was moved to a separate issue (#36).

Main protocols:

  • Figure 3: Threshold ECDSA: Online Signing (includes Figs. 5, 6, 7, 8)
  • Figure 4: Threshold ECDSA: Non-Interactive Signing (includes Figs. 5, 6, 7, 8)

Preliminary steps:

  • Figure 5: ECDSA Key-Generation (includes Π^{sch})
  • Figure 6: Auxiliary Info. & Key Refresh in Three Rounds (includes Π^{sch}, Π^{prm}, Π^{fac}, Π^{mod})

3-round signing:

  • Figure 7: ECDSA Pre-Signing (includes Π^{enc}, Π^{aff-g}, Π^{log*}, Π^{mul}, Π^{dec})
  • Figure 8: ECDSA Signing (includes Π^{aff-g}, Π^{mul*}, Π^{dec})

Sigma-protocols:

  • Figure 14: Paillier Encryption in Range ZK – Π^{enc} (#17)
  • Figure 15: Paillier Affine Operation with Group Commitment in Range ZK – Π^{aff-g} (#16)
  • Figure 16: Paillier-Blum Modulus ZK – Π^{mod}
  • Figure 17: Ring-Pedersen Parameters ZK – Π^{prm}
  • Figure 22: Schnorr PoK – Π^{sch}
  • Figure 25: Knowledge of Exponent vs Paillier Encryption – Π^{log*} (#18)
  • Figure 28: No Small Factor Proof – Π^{fac} (#19)
  • Figure 29: Paillier Multiplication – Π^{mul} (#23)
  • Figure 30: Paillier Decryption modulo q – Π^{dec} (#24)
  • Figure 31: Multiplication Paillier vs Group – Π^{mul*} (#25)

Add an associated type for `crypto_bigint::DynResidueParams` to Uint traits.

Currently, every time we need an integer in Montgomery form, we use UintModLike::new, passing it the modulus and effectively instantiating DynResidueParams every time. This is slow; in many cases we would be able to reuse the parameters. To do that, they have to become an associated type in UintModLike, which means we will need a trait for them as well.
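
A rough sketch of the proposed shape (the trait, type, and method names here are illustrative, not the current ones):

// Illustrative only: the precomputed Montgomery parameters become an associated
// type, so callers can create them once per modulus and reuse them.
pub trait UintModLike: Sized {
    type RawUint;
    type Precomputed; // would wrap something like `crypto_bigint::DynResidueParams<LIMBS>`

    fn precompute(modulus: &Self::RawUint) -> Self::Precomputed;
    fn new(value: &Self::RawUint, precomputed: &Self::Precomputed) -> Self;
}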

Precompute safe primes to use in tests

Generating safe primes seems to be the slowest part of running the tests. Even with test Uint sizes it's still slow. Can we somehow pre-generate a bunch of primes (at the start of the test, or even just do it once and save in a file) and use them?
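
A sketch of the "generate once per test run" option, assuming a hypothetical generate_safe_prime helper standing in for whatever routine the tests currently use:

use crypto_bigint::U512;
use once_cell::sync::Lazy;
use rand_core::OsRng;

// Illustrative only: a small pool of safe primes is generated lazily on first
// use and then shared by all tests, instead of being regenerated per test.
static TEST_SAFE_PRIMES: Lazy<Vec<U512>> =
    Lazy::new(|| (0..8).map(|_| generate_safe_prime(&mut OsRng)).collect());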

Log the random seed in ZK proofs and `protocols` tests

If there's a rare failure, it may be useful to know the seed to be able to reproduce it. Therefore instead of using OsRng everywhere, we will have to use it once to generate a seed, log it, and then use some PRNG seeded by it.
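
A minimal sketch of that approach (the choice of ChaCha8Rng here is just for illustration):

use rand_chacha::ChaCha8Rng;
use rand_core::{OsRng, RngCore, SeedableRng};

// Draw the seed once from OsRng, log it, then drive the test with a PRNG
// seeded by it, so that a rare failure can be reproduced later.
let mut seed = [0u8; 32];
OsRng.fill_bytes(&mut seed);
println!("test RNG seed: {seed:02x?}");
let mut rng = ChaCha8Rng::from_seed(seed);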

Add tests with `ProductionParams`

Currently we only test with TestParams. ProductionParams may have their own quirks, so it would be nice to add an option to test with them. Can be potentially added to CI as well - it's slow, but not unbearably slow.

Reduce boilerplate when merging rounds

The merged KeyGen + KeyRefresh protocol requires us to write quite a bit of boilerplate where all we do is just merge two types into one or separate them to pass on to lower levels. Can it be done generically?

Seems like we can't do it via a trait: the blanket impl will conflict with the sequential wrapper. Also, for the parallel merge we need to assert that round numbers, destinations etc. are the same between the two rounds. So a macro would probably be better.

Encapsulate operations with bitvectors

In several places in the protocol we need a vector of random bits, which can be hashed or XOR'ed with another vector. See random_bits() to find where those are created. We need to either encapsulate them in our own type, or use something like bitvec.
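
An illustrative sketch of the "own type" option: a thin newtype over a byte vector exposing only the operations the protocol needs (XOR and access to the raw bytes for hashing). The name and representation are placeholders.

#[derive(Clone, PartialEq, Eq)]
pub struct BitVec(Vec<u8>);

impl BitVec {
    pub fn xor(&self, other: &Self) -> Self {
        assert_eq!(self.0.len(), other.0.len());
        Self(self.0.iter().zip(&other.0).map(|(a, b)| a ^ b).collect())
    }

    // Raw bytes, e.g. for feeding into a hash.
    pub fn as_bytes(&self) -> &[u8] {
        &self.0
    }
}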

Missing sanity checks in the ZK proofs

Some things that need to be checked:

  • correct range of Signed and Bounded input values
  • check that the randomizers are made for the correct Paillier public keys (or encapsulate the keys in them)
  • check that the ciphertexts are made for the correct Paillier public keys (or encapsulate the keys in them, see #59)

Support one party owning several key shares

This, of course, can be emulated by one party maintaining several separate Session objects, but that is inefficient, since the party will then effectively have to create proofs for itself. It would also be nice to bundle messages from that party into one message to reduce network latency.

We will need some kind of a ShareDistribution object (should it be separate, or a part of KeyShare?) that Session will be able to make use of.

Research the extension to threshold signing

The paper itself only describes n-of-n signing. We want at least t-of-n, and ideally a compartmentalized model (t-of-n or t' of k or something). The paper says (Section 1.2.8):

In this work we mainly focus on n-out-of-n multi-party signing, and do not explicitly consider the more general t-out-of-n threshold signing for t < n. Such a protocol can be derived almost immediately from our protocol herein for the online variant using Shamir secret-sharing, with relevant changes to the protocol’s components, similarly to Gennaro and Goldfeder [32].

([32] is "Fast Multiparty Threshold ECDSA with Fast Trustless Setup", https://dl.acm.org/doi/10.1145/3243734.3243859)

We would want that extension to be as independent from the main scheme as possible; ideally, it should be decoupled enough to constitute its own crate.

Possible implementations to use as an example:

Only create `KeyShareSeed` in a centralized way

Right now we have KeyShare::new_centralized() which generates the key shares and the auxiliary parameters. But then on the next refresh the nodes will re-generate the auxiliary parameters. Perhaps we should just create KeyShareSeed in a centralized way, and let the nodes handle the refresh since they are going to do that anyway?

This would also eliminate the following concern: if the centralized user generates the auxiliary parameters, they should also generate all the relevant proofs asserting that the generated parameters are well-formed. If we only generate KeyShareSeed, we won't need to worry about that.

Expose backend curve objects from the user API

We need, at least:

  • access to k256::ecdsa::VerifyingKey from KeyShare
  • access to k256::ecdsa::Signature and RecoveryId from whatever the signing returns. Since there's no single "recoverable signature" object anymore, maybe we have to create one.

Improved message passing

The unpublished version of the paper contains the following note:

To reconcile the two communications models, we propose the following process (in the spirit of echo-broadcasting [?]). In the point-to-point case, for an outgoing message m_{i,k} sent from P_i at round k. P_i is instructed to send a commitment h_{i,k} = com(m_{i,k}) over the broadcast channel and send the relevant data directly to each party using the point-to-point network. That way, if (any part of) m_{i,k} is malformed, an honest party P_j may report P_i as such; by having both parties reveal the underlying (faulty message). Notice that this process incurs a round complexity penalty because the protocol proceeds only if no one reports an issue. However, since the rounds can be “piggybacked” (cf. Remark 3.2) and the last round of each of our protocol phases is a broadcast round, this process does not incur any round-complexity penalty for us.

Remark 3.2.:

For non-unanimous halting [?], by sacrificing identifiable abort, the broadcast channel can be entirely replaced using echo-broadcasting with one extra round of communication; this can be achieved by having the parties “echo” the (hash of the) message they received from each party at the previous round. To obtain only one extra round of communication, notice that the parties are instructed to “piggyback” the echo on the rounds of the signing protocol.

So several things here:

  • Do we want to sacrifice identifiable abort?
  • Currently we do echo broadcasting as a separate round; how hard would it be to attach it to the next round generically?
  • We currently re-send the full messages, not hashes - that can be improved.
  • Do we need to do this thing with broadcasting the commitment in direct messages?

Group similar methods using traits

Consider RPParamsMod that has three methods commit(), commit_wide() and commit_xwide() which do the same thing, but with different types of integers as arguments. Instead, we can have a Commit trait with different associated types. The same applies e.g. for Ciphertext constructors.
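
A rough sketch of the proposed generalization: one trait, generic over the integer type, replaces the commit()/commit_wide()/commit_xwide() family. The trait name and argument list are placeholders.

pub trait Commit<Int> {
    type Commitment;
    fn commit(&self, secret: &Int, randomizer: &Int) -> Self::Commitment;
}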

Implement the special error round for AuxData protocol

See Fig. 6, the "Dec Error" part in the Output section. This would happen in the finalize() of round 3, and on failure we need to send some additional messages. So how would that look for the user? Would every node have to wait for some time for a possible error message? Or perhaps we add an explicit round sending an OK/error report instead?

Simplify declaring that a round is not a broadcast or a direct round

Currently to do that we have to make dummy implementations like

impl<P: SchemeParams> DirectRound for Round1<P> {
    type Message = ();
    type Payload = ();
    type Artefact = ();
}

Would be much better to be able to write

not_a_direct_round!(Round1);

It would be even better for that to be the case by default, but that would require Rust to support default associated type values, which is an unstable feature. Or is there another way?
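
On stable Rust a declarative macro could do this today; a sketch, assuming the round type is generic over SchemeParams as in the example above:

macro_rules! not_a_direct_round {
    ($round:ident) => {
        impl<P: SchemeParams> DirectRound for $round<P> {
            type Message = ();
            type Payload = ();
            type Artefact = ();
        }
    };
}

not_a_direct_round!(Round1);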

Precompute values in Paillier keys

In various algorithms we use values that can be precomputed based on the Paillier secret key p, q or the public key N = pq. For example, the totient (phi(N) = (p-1)(q-1)), or N^(-1) mod phi(N). Two ways to optimize that:

  • Caching methods. Not very convenient in Rust, it's not Python.
  • Two types, one containing the minimal info, the other the minimal info plus all the precomputed values, convertible into each other. The minimal one can be serialized. (A sketch of this option follows the list.)
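
A sketch of the second option (field and type names are placeholders; the width alias is arbitrary):

type Uint = crypto_bigint::U1024; // placeholder width for this sketch

// The minimal form is what gets serialized...
pub struct SecretKeyPaillier {
    p: Uint,
    q: Uint,
}

// ...and the precomputed form is derived from it once and used in hot paths.
pub struct SecretKeyPaillierPrecomputed {
    sk: SecretKeyPaillier,
    modulus: Uint,  // N = p q
    totient: Uint,  // phi(N) = (p - 1)(q - 1)
    // ... other cached values, e.g. N^(-1) mod phi(N)
}

impl SecretKeyPaillier {
    pub fn into_precomputed(self) -> SecretKeyPaillierPrecomputed {
        todo!("compute and cache the derived values once")
    }
}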

Checked ops or wrapped ops?

If we have an arithmetic operation not expected to fail (e.g. adding p and q both cast to DoubleUint, which means it'll never overflow), what do we do? Use checked ops and unwrap (checked ops in crypto-bigint return CtOption, e.g. https://docs.rs/crypto-bigint/latest/crypto_bigint/trait.CheckedAdd.html), or use wrapping ops? The former is consistent with the intent, but it is slower since CtOption::unwrap() will not be optimized away in release. Although the performance hit is probably minimal, since we only do that very occasionally.
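
A minimal illustration of the two options for an addition that cannot overflow (exact trait signatures may differ between crypto-bigint versions):

use crypto_bigint::{CheckedAdd, U512};

let a = U512::ONE;
let b = U512::ONE;

// Checked: expresses the intent, but goes through `CtOption`, whose `unwrap()`
// is not optimized away in release builds.
let checked = a.checked_add(&b).unwrap();

// Wrapping: no overflow handling at all; correctness relies on the caller's reasoning.
let wrapping = a.wrapping_add(&b);

assert_eq!(checked, wrapping);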

Optimize test Uint sizes

The sizes of Uints don't have to be exactly double and quadruple; we can get by with 256 bits + 1 limb, 512 bits + 1 limb, 1024 bits + 1 limb - all we need is to cover the range of secp256k1 scalars with p and q. But that will only matter for tests anyway, and release tests are fast enough that we can do with a few extra limbs.

Should key shares have a non-zero scalar type?

RustCrypto stack offers a NonZeroScalar wrapper for scalars. If we were generating a regular secret key we would use that, but for key shares the usefulness is questionable. The paper does not ask us to generate non-zero random scalars. Should we bother with them at all? The only thing that a NonZeroScalar really provides is a safe inversion, and we only ever invert a sum of scalars obtained from all the nodes, which we cannot statically enforce to be non-zero (although we might enforce it at a protocol level, halting if it turns out to be zero, which is admittedly quite improbable, even with malicious nodes). So maybe just get rid of NonZeroScalar completely?

Signing without revealing the signature to all nodes

In some applications it may be useful to not allow the nodes to see the final signature. To do that, we may cut short the Signing protocol, and instead of each node broadcasting sigma_i, just return them as a protocol result, along with the nonce.

Open questions:

  • Is it safe to do so? If we are broadcasting these values anyway, probably yes?
  • What is the best way to describe that in the current framework? A wrapper for the Presigning protocol returning a different result? Or a method on PresigningData (which would mean that we need to expose the Presigning protocol itself)?
  • What is the course of action in case the summed-up signature is invalid? The paper prescribes generating a number of ZK proofs to identify the culprit, so we will need to return all the required randomness and execution transcript along with the result, so that the nodes could hold on to it, and build a proof later on user request.

Multiplication/exponentiation speed-ups

Currently, most of the time in the signing protocol is spent in Montgomery exponentiation. Key refresh is split between exponentiation and prime number generation, but the latter is mainly exponentiation again (most of the time is spent in Miller-Rabin tests). So it would help a lot if the exponentiation performance were improved.

Possible avenues:

  1. Replace schoolbook multiplication with Karatsuba or Toom-Cook. This may start making a difference at our integer sizes (2048 bit). This has to be done within crypto-bigint, see RustCrypto/crypto-bigint#66
  2. Use wNAF exponentiation instead of the current fixed-window one (for the cases where the exponent is not secret). This has to be done within crypto-bigint.
  3. crypto-bigint's pow() supports exponents of arbitrary size (that is, you can raise a Uint<N> to a Uint<M> power). We currently only raise Uint<N> to Uint<N>, and implement Uint<N>^Uint<2*N> and Uint<N>^Uint<4*N> by breaking the exponent into halves and exponentiating separately. If we could use the arbitrary-size exponentiation, it could make this faster, because we would not have to calculate x^{2^N} separately to merge the halves - it is already calculated by the fixed-window algorithm.
  4. In some places where we calculate x^y mod N we also know phi(N) (the totient), so we can instead calculate x^(y mod phi(N)) mod N (valid by Euler's theorem when gcd(x, N) = 1). If y is large (of the order of N^2), this may be faster than direct exponentiation. (A tiny worked example follows the list.)
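
A tiny worked example of item 4, using plain integers for illustration only (N = 15, phi(N) = 8, x = 2, y = 20, and gcd(2, 15) = 1):

// 2^20 mod 15 == 2^(20 mod 8) mod 15 == 2^4 mod 15 == 1
assert_eq!(2u64.pow(20) % 15, 2u64.pow(20 % 8) % 15);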

Possible missing proofs

In Round1 of Presigning we are generating an enc proof for the ciphertext K, but not for the ciphertext G, which looks like an omission. Should we generate one for G as well?

32-bit and 64-bit compatibility testing

Since the code of this library is intended to be run both on 32- (e.g. WASM) and 64-bit platforms, and needs to perform consistently on all of them, we need to test that behavior. There has already been an issue with the ZK proof challenges being generated differently on different bitness (see #45).

To test that, we need to somehow run the protocol sessions with different nodes using different bitness. How do we do that? Can we use the existing WASM bindings to simulate a 32-bit target?

Generalize the library to use any curve

Right now k256 is hardcoded, but it doesn't have to be. The scheme is applicable to any group where the discrete logarithm problem is hard. At the very least we can generalize it to anything implementing the necessary traits from the RustCrypto stack.

The main problem would be to make sure, as statically as possible, that the chosen Uint size exceeds the order of the curve scalar.

Enforce statically that the user can't obtain direct/broadcast messages from a round

Currently the user requests the message destination list, and then, if it is not None, passes it on to the message-generating method. Can it be made stricter, so that it is not possible to request a message to an arbitrary destination (thus eliminating some possible errors)? Perhaps, instead of the list of destinations, return a list of "message factory" objects? Would we be able to pass them on to the spawned tasks so that the messages can be generated in parallel?

Bundle the public key with `Ciphertext`

Instead of specifying the public key with each homomorphic operation on Ciphertexts (clunky and error-prone), we can create a bundled struct with the public key inside. So instead of

let x = y
    .homomorphic_mul(&pk, &a)
    .homomorphic_add(&pk, &b);

we will be able to just write

let x = y
    .homomorphic_mul(&a)
    .homomorphic_add(&b);
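
A sketch of what the bundled type might look like. Pk stands in for whatever type represents a Paillier public key, cloning it is assumed to be cheap (or it would be reference-counted), and the inner call reuses the existing key-taking method shown above.

pub struct CiphertextWithPk {
    ciphertext: Ciphertext,
    pk: Pk,
}

impl CiphertextWithPk {
    pub fn homomorphic_add(&self, rhs: &Self) -> Self {
        Self {
            // Delegates to the existing method that takes the key explicitly.
            ciphertext: self.ciphertext.homomorphic_add(&self.pk, &rhs.ciphertext),
            pk: self.pk.clone(),
        }
    }
}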

Implement failure/correctness proof checking for identifiable aborts

With #39 and #41 merged, we now have (draft) proof generation on errors. A node may return either a proof of misbehavior of another node, or a proof of its own correct behavior.

What is currently missing is an API to verify these proofs. We need to decide what should be attached to them (the relevant signed messages by other nodes, certainly, but is there anything else?), and how a third party can verify them.

In the paper the error path is a part of the protocol, but since in practice we have a global state (blockchain) and, possibly, an independent arbiter (the user initiating the signing process), we can perhaps simplify things and decouple this verification from the happy path of the protocol execution.

Even if we did have only the nodes with no global state, it may still be worth it to keep the verification external to the happy path, because otherwise the message routing would get complicated (and it is complicated enough already), and every round would have to branch out depending on what path it received messages from, for each node - a generalization nightmare.

Message bundling

We may want to send several messages from one node at the same time:

  • To accommodate for one node owning several key shares (see #31);
  • To merge broadcast and direct messages (since technically we don't have a broadcasting facility anyway; although some users may have it, so how do we support both ways?); (this was done in #112)
  • To piggyback broadcast consensus messages on the messages from the next round (see #114).

A separate question: do we sign the whole bundle (faster), or each message separately (easier to attach only the needed ones to the proofs)?

Extract conversion between `Scalar` and `Uint` to the scheme parameters level

See mul_by_point(), mul_by_generator(), uint_from_scalar(). This connection should really be established at the SchemeParameters level, because implementors of that trait are the ones fixing the scalar and integer types in a single parameters object. This especially needs to be done in connection with #27.

Also, it may be worth splitting to_scalar() (returning an error if the range is exceeded) from to_scalar_reduced() (which reduces the integer to the scalar range); see the sketch below.
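
A sketch of the proposed split; the trait is a placeholder, and in practice these methods would live at the scheme-parameters level as discussed above.

pub trait UintToScalar {
    /// Fails if the integer exceeds the scalar range.
    fn to_scalar(&self) -> Option<Scalar>;
    /// Reduces the integer modulo the order of the scalar group.
    fn to_scalar_reduced(&self) -> Scalar;
}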

Add a "refresh ID" to key shares

After a KeyRefresh/Auxiliary protocol we get a key share with the same verifying key, which makes it easy to confuse with an obsolete key share (and, for example, start a signing protocol with one). We need some kind of a publicly accessible "refresh ID" which will be updated after each refresh. Can the rho value generated during the protocol be used for that? Or we could just hash PublicAux.

Handling of session IDs

Session IDs and sub-session IDs for the protocols need a review. We need to make sure that:

  • The implementation satisfies the requirements the security proofs in the paper are based on
  • The implementation doesn't care about session IDs more than it needs to, offloading the rest to the user (e.g. if the user wants to run several concurrent processes of key share generation on the same set of nodes, it's up to them to route the messages to a specific instance of the protocol state)
  • We enforce/guide the correct usage of session IDs as much as possible via the API

To be more specific (see Fig 3 for the outline):

The KeyGen protocol (Fig. 5) depends on some unspecified session ID sid (which is included in the hashes), and produces a random rid value (same for every node). Note that sid in the paper includes the curve parameters (which are currently fixed), and the sorted list of party IDs. Would it be enough to just have a random sid?

The AuxData protocol (Fig. 6) depends on ssid = sid + rid + [X] + i where [X] are the public key shares and i is the number of the party. But [X] and i are included in the hash along with ssid, and rid already corresponds to the generated [X], so I think we can take ssid = sid + rid. But since we may have to run AuxData many times (refreshing the key shares), we probably don't want to use the same ssid every time? Should we just make it entirely random? Or make it ssid = sid + rid + nonce (or a hash thereof), where nonce is incremented?

The Presigning protocol (Fig. 7) should correspond to the parameters generated in AuxData, so it should depend on ssid, and also on some nonce value (which is denoted \ell in the paper) because we may run it many times for the same AuxData parameters. Who keeps and increments the nonce? Probably the user should take care of that?
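
A sketch of the "hash of sid + rid + nonce" option discussed above; the hash choice and the domain-separation tag are assumptions of this sketch, not the library's actual derivation.

use sha2::{Digest, Sha256};

fn derive_ssid(sid: &[u8], rid: &[u8], nonce: u64) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(b"synedrion-ssid"); // domain separation (illustrative)
    hasher.update(sid);
    hasher.update(rid);
    hasher.update(nonce.to_be_bytes());
    hasher.finalize().into()
}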

Implement Presigning/Signing with `O(N)` identification cost

(Figure and section numbers correspond to the reference paper https://eprint.iacr.org/2021/060)

  • Figure 9: ECDSA Pre-Signing (Rounds 1 to 4) (includes Π^{enc-elg}, Π^{aff-g}, Π^{aff-p}, Π^{log*})
  • Figure 10: ECDSA Pre-Signing (Round 5, 6 & Output) (includes Π^{log*}, Π^{elog})
  • Figure 13: ECDSA Signing

Errors:

  • Figure 11: Nonce-Reveal Fail (includes Π^{Nth})
  • Figure 12: Pseudo-Key Reveal Fail (includes Π^{Nth}, Π^{log})

Sigma-protocols:

  • Π^{Nth} (Section 2.2.2) - no figure but probably the same as Fig. 23?
  • Figure 23: Dlog Equality – Π^{log}
  • Figure 24: Dlog with El-Gamal Commitment – Π^{elog}
  • Figure 26: Paillier Affine Operation with Paillier Commitment ZK-Proof – Π^{aff-p}
  • Figure 27: Range Proof w/ El-Gamal Commitment – Π^{enc-elg}

Error propagation

We have unwraps and panics in a lot of places now. All recoverable errors should be propagated to the user. It's harder to say about misuse errors: should we still panic, or let the user catch them and write them to a log?

Also, how do we handle errors? A big catch-all enum? Or several layered enums (perhaps one for the protocol level, and one for the session level)?

thiserror does not seem to support no-std, but snafu does. Use it, or just do things manually?

Report misbehaving party in errors

Now that the initial error propagation is in, we need to actually make some error enum and properly report misbehaving parties instead of returning an all-encompassing String.

Unfortunately, thiserror does not support no-std, and snafu is weird. displayerror may help with Display derivation, but it may be easier to just create the enums manually.

More descriptive type aliases for integers

Currently we have SingleUint for numbers modulo p and q, DoubleUint for numbers modulo N = pq or phi(N), and QuadUint for numbers modulo N^2 (are there any other possibilities? Can't remember at the moment). Perhaps it would be better to make these aliases more descriptive (e.g. GroupElement or something).

Also note here that for tests the "double uint" may not be exactly double: e.g. "single" is 256 bits + 1 limb and "double" is 512 bits + 1 limb, to accommodate the additional bit we need in p and q to cover the full range of secp256k1 scalars (see #15).
