
Comments (5)

irakliyk commented on April 28, 2024

Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.

Yes - verifier work is roughly proportional to the proof size (which is roughly proportional to the number of queries). So, it is possible to submit a proof 10x the expected size and make the verifier do 10x more work (something like a 1MB proof vs. the usual 100KB proof). Making sure that proof sizes are in a "sane" range is left to the users of the library, because by the time a proof gets to the verifier it is assumed to have already been deserialized. So, if someone decides to send a 100MB proof, it should be caught before it reaches the verifier.

That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.

That's a good point. I wonder if a better solution could be something like this:

pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    acceptable_options: &AcceptableOptions,
) -> Result<(), VerifierError>

Where AcceptableOptions could look something like:

pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    MinProvenSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}

This way, users of the library will have to explicitly specify which proof parameters are acceptable, but they retain the flexibility to do so either by directly defining a set of parameters or by specifying minimum acceptable security levels under different assumptions.
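To make the idea concrete, here is a minimal sketch of how such an AcceptableOptions check could work. The ProofOptions fields and the security estimate below are simplified stand-ins for illustration, not the actual Winterfell types or its security formula:

```rust
// Hypothetical sketch of the proposed AcceptableOptions check. ProofOptions
// and the conjectured-security estimate are simplified placeholders, not the
// real Winterfell implementation.

#[derive(Clone, Debug, PartialEq)]
pub struct ProofOptions {
    pub num_queries: u32,
    pub blowup_factor: u32, // assumed to be a power of two > 1
    pub grinding_factor: u32,
}

impl ProofOptions {
    // Very rough conjectured-security estimate in bits (placeholder formula):
    // each query contributes log2(blowup) bits, plus the grinding bits.
    pub fn conjectured_security(&self) -> u32 {
        self.num_queries * self.blowup_factor.ilog2() + self.grinding_factor
    }
}

pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}

impl AcceptableOptions {
    /// Returns true if the parameters embedded in a proof are acceptable.
    pub fn validate(&self, opts: &ProofOptions) -> bool {
        match self {
            AcceptableOptions::MinConjecturedSecurity(bits) => {
                opts.conjectured_security() >= *bits
            }
            AcceptableOptions::OptionSet(set) => set.contains(opts),
        }
    }
}

fn main() {
    let opts = ProofOptions { num_queries: 28, blowup_factor: 8, grinding_factor: 16 };
    // 28 queries * 3 bits + 16 grinding bits = 100 bits
    assert!(AcceptableOptions::MinConjecturedSecurity(100).validate(&opts));
    assert!(!AcceptableOptions::MinConjecturedSecurity(128).validate(&opts));
}
```

With this shape, the verify() entry point would call validate() on the options carried by the proof before doing any other work.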

from winterfell.

daira commented on April 28, 2024

So, it is up to the users of this library to make sure they check security level of the proof before passing it to the verify() function.

I've never heard of any other zk proof library requiring that; to me it's clearly broken. Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly. Anything else gives a prover adversary power that it shouldn't have. (Even allowing the prover to choose "stronger" parameters is still power that it doesn't need and shouldn't have, since it increases the attack surface.)


irakliyk commented on April 28, 2024

Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly.

In the context of STARKs this is frequently not desirable for a couple of reasons:

  • We may want to provide options to generate proofs at different security levels for the same circuit. An example of this may be a STARK-based virtual machine which gives users an option to generate proofs at 100-bit or 128-bit security levels.
  • We may want to provide options to generate proofs at the same security level using different parameters (e.g., different combinations of blowup factor, number of queries, grinding factor, etc.). This is useful because these parameters affect proof generation time and proof size differently. For example, in some cases it may be OK for the prover to take much longer to produce a smaller proof; in other cases, we may tolerate bigger proofs but want the prover to run as fast as possible. So, for the same circuit and the same security level, we may want to adjust parameters to suit a specific use case.

In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.
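As a rough illustration of this tradeoff, one can use the common approximation that conjectured security is about num_queries · log2(blowup) + grinding bits (ignoring the caps imposed by field size and hash output, so these are ballpark figures only). Two rather different parameter sets can then target the same level:

```rust
// Illustrative approximation only: ignores the security caps from field size
// and hash collision resistance, so treat the results as ballpark figures.
fn approx_security_bits(num_queries: u32, blowup_factor: u32, grinding_bits: u32) -> u32 {
    num_queries * blowup_factor.ilog2() + grinding_bits
}

fn main() {
    // Smaller blowup: cheaper prover, but more queries, so a larger proof.
    assert_eq!(approx_security_bits(28, 8, 16), 100);
    // Larger blowup: costlier prover, but fewer queries, so a smaller proof.
    assert_eq!(approx_security_bits(21, 16, 16), 100);
}
```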

So, given the above, the question is how the prover lets the verifier know which set of parameters was chosen for a given proof. This could be done in a variety of ways; the way it is currently done in Winterfell is that the prover includes the set of parameters with the proof, and it is then up to the verifier to accept or reject the proof based on these parameters.
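A hedged sketch of what the verifier side of this arrangement could look like (all types and names below are simplified stand-ins, not the actual Winterfell API): the embedded parameters are checked against an explicit allow-list before any verification work is done, which also addresses the denial-of-service concern, since an oversized proof is rejected before any of it is processed.

```rust
// Hypothetical sketch: reject prover-chosen parameters up front, before any
// verification work is done. All types here are simplified stand-ins.

#[derive(Clone, Debug, PartialEq)]
struct ProofOptions {
    num_queries: u32,
    blowup_factor: u32,
}

struct StarkProof {
    options: ProofOptions,
    // ... trace commitments, FRI layers, query proofs, etc.
}

#[derive(Debug, PartialEq)]
enum VerifierError {
    UnacceptableProofOptions,
}

fn verify(proof: &StarkProof, allowed: &[ProofOptions]) -> Result<(), VerifierError> {
    // Checking the embedded parameters first bounds verifier work: a proof
    // with 10x the expected number of queries is rejected immediately.
    if !allowed.contains(&proof.options) {
        return Err(VerifierError::UnacceptableProofOptions);
    }
    // ... actual STARK verification would proceed here ...
    Ok(())
}

fn main() {
    let allowed = [ProofOptions { num_queries: 27, blowup_factor: 8 }];
    let bloated = StarkProof {
        options: ProofOptions { num_queries: 270, blowup_factor: 8 },
    };
    assert_eq!(verify(&bloated, &allowed), Err(VerifierError::UnacceptableProofOptions));
}
```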

I do agree that this can cause issues, hence this issue and the three potential options above to address it. Other options are welcome too.


daira commented on April 28, 2024

In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.

That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.

Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.


irakliyk commented on April 28, 2024

Closed by #219. We went with the approach described in the comment above. This will be a part of the v0.7.0 release.

