Comments (5)
> Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.
Yes - verifier work is roughly proportional to the proof size (which is roughly proportional to the number of queries). So, it is possible to submit a proof 10x the expected size and make the verifier do 10x more work (e.g., a 1MB proof vs. the usual 100KB proof). Making sure that proof sizes are in a "sane" range is left to the users of the library, as by the time the proof gets to the verifier it is assumed to have already been deserialized. So, if someone decides to send a 100MB proof, it should be caught before it gets to the verifier.
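Since verifier work scales with proof size, one cheap mitigation is a byte-length sanity check before deserialization. A minimal sketch, assuming the application receives the proof as raw bytes; `MAX_PROOF_BYTES`, `ProofSizeError`, and `check_proof_size` are illustrative names, not part of the winterfell API:

```rust
// Illustrative pre-verification guard: reject oversized proof blobs before
// they are deserialized, so an attacker cannot force 10x (or 1000x) verifier
// work by shipping an inflated proof. The 1 MB bound is an arbitrary example.
const MAX_PROOF_BYTES: usize = 1024 * 1024;

#[derive(Debug, PartialEq)]
pub enum ProofSizeError {
    TooLarge { size: usize, max: usize },
}

pub fn check_proof_size(proof_bytes: &[u8]) -> Result<(), ProofSizeError> {
    if proof_bytes.len() > MAX_PROOF_BYTES {
        Err(ProofSizeError::TooLarge {
            size: proof_bytes.len(),
            max: MAX_PROOF_BYTES,
        })
    } else {
        Ok(())
    }
}
```

Only after this (and any other transport-level checks) pass would the bytes be deserialized and handed to the verifier.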
> That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.
That's a good point. I wonder if a better solution could be something like this:
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
    proof: StarkProof,
    pub_inputs: AIR::PublicInputs,
    acceptable_options: &AcceptableOptions,
) -> Result<(), VerifierError>
Where AcceptableOptions could look something like:
pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    MinProvenSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}
This way, users of the library will have to explicitly specify which proof parameters are acceptable, but they would retain the flexibility to do so either by directly defining a set of parameters or by specifying minimum acceptable security levels under different assumptions.
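To make the proposal concrete, here is a rough sketch of how the acceptance check itself might look. The ProofOptions below is a simplified stand-in that only reports its security levels (the real type carries blowup factor, query count, etc.), and is_acceptable is a hypothetical helper that verify() could call before doing any other work:

```rust
// Simplified stand-in for the real ProofOptions type: just enough to
// evaluate the acceptance policy sketched in the proposal above.
#[derive(Clone, PartialEq)]
pub struct ProofOptions {
    pub conjectured_security: u32,
    pub proven_security: u32,
}

pub enum AcceptableOptions {
    MinConjecturedSecurity(u32),
    MinProvenSecurity(u32),
    OptionSet(Vec<ProofOptions>),
}

impl AcceptableOptions {
    /// Returns true if the options embedded in a proof satisfy this policy.
    pub fn is_acceptable(&self, proof_options: &ProofOptions) -> bool {
        match self {
            Self::MinConjecturedSecurity(min) => proof_options.conjectured_security >= *min,
            Self::MinProvenSecurity(min) => proof_options.proven_security >= *min,
            Self::OptionSet(set) => set.contains(proof_options),
        }
    }
}
```

With a check like this at the top of verify(), a proof whose embedded options fail the policy would be rejected with a VerifierError before any verification work is done, so callers no longer need a separate pre-verification step.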
from winterfell.
> So, it is up to the users of this library to make sure they check the security level of the proof before passing it to the verify() function.
I've never heard of any other zk proof library requiring that; to me it's clearly broken. Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly. Anything else gives a prover adversary power that it shouldn't have. (Even allowing the prover to choose "stronger" parameters is still power that it doesn't need and shouldn't have, since it increases the attack surface.)
> Typically, what the verifier does is provide a circuit-specific verifying key, and that key determines the proving system parameters exactly.
In the context of STARKs this is frequently not desirable for a couple of reasons:
- We may want to provide options to generate proofs at different security levels for the same circuit. An example of this may be a STARK-based virtual machine which gives users an option to generate proofs at 100-bit or 128-bit security levels.
- We may want to provide options to generate proofs at the same security level using different parameters (e.g., different combinations of blowup factor, number of queries, grinding factor, etc.). This is useful because these parameters affect proof generation time and proof size differently. For example, in some cases it may be OK for the prover to take much longer to produce a smaller proof; in other cases, we may tolerate bigger proofs but want the prover to run as fast as possible. So, for the same circuit and the same security level, we may want to adjust parameters to suit a specific use case.
In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.
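The tradeoff between these knobs can be seen with a toy security model. The formula below (queries x log2(blowup) + grinding bits) is a simplified placeholder for a conjectured-security estimate, not winterfell's actual accounting, and the concrete numbers are made up for illustration:

```rust
// Toy model of STARK parameter tradeoffs: two different parameter sets can
// target the same overall security level. Higher blowup yields more bits per
// query (fewer queries, smaller proofs) but a slower prover; lower blowup
// needs more queries (bigger proofs) but lets the prover run faster.
pub struct StarkParams {
    pub blowup_factor: u32, // low-degree extension blowup (power of two)
    pub num_queries: u32,   // number of queries included in the proof
    pub grinding_bits: u32, // proof-of-work bits added on top
}

impl StarkParams {
    /// Simplified security estimate, for illustration only.
    pub fn approx_security_bits(&self) -> u32 {
        self.num_queries * self.blowup_factor.ilog2() + self.grinding_bits
    }
}

// Smaller proof, slower prover: 27 * log2(16) + 20 = 128 bits.
pub const SMALL_PROOF: StarkParams =
    StarkParams { blowup_factor: 16, num_queries: 27, grinding_bits: 20 };

// Faster prover, bigger proof: 36 * log2(8) + 20 = 128 bits.
pub const FAST_PROVER: StarkParams =
    StarkParams { blowup_factor: 8, num_queries: 36, grinding_bits: 20 };
```

A verifier targeting 128-bit security would want to accept proofs generated under either set, which is why requiring one exact parameter match per circuit is too restrictive here.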
So, given the above, the question is how the prover lets the verifier know which set of parameters they've chosen for a given proof. This could be done in a variety of ways; currently in Winterfell, the prover includes the set of parameters with the proof, and it is then up to the verifier to accept or reject the proof based on the included parameters.
I do agree that this can cause issues, hence this issue and the three potential options to address it. Other options are welcome too.
> In practice, this frequently means we could have up to a dozen (or maybe even more) different parameter sets for the same circuit, and the verifier should accept proofs for any of these sets.
That doesn't preclude having the verifying key specify what range of parameters are acceptable, so that the code using the verifier doesn't have to perform additional checks.
Btw, is the verifier work dependent on the parameters? If it is, then specifying only a minimum security level is not sufficient, because then producing proofs that are more expensive to verify could cause a denial of service.
Closed by #219. We went with the approach described in the comment above. This will be a part of the v0.7.0 release.
Related Issues (20)
- panic in merkle's verify_batch HOT 1
- catch a panic in merkle proof verification HOT 1
- Update proven security to include IOP terms HOT 2
- RandomCoin trait simplification HOT 1
- Will it be made into zkvm in the future? HOT 1
- Remove duplicate query check in FRI HOT 3
- Implementing Keccak256 HOT 4
- mulfib8 example circuit is underconstrained
- `f64` field: `BaseElement` should not be convertible from `u64` or `u128` without error HOT 1
- Add serialization/deserialization for `usize` type HOT 1
- Accomodating more expressive transition constraints HOT 2
- `TraceTable::with_meta()` should be marked `unsafe`
- Suggestion: Remove outdated griffin hash implementation HOT 1
- Generalize auxiliary trace building logic HOT 1
- Simplify 2-d matrix types
- Generalize `TransitionConstraints` and `BoundaryConstraints` HOT 1
- Consider using the standard benchmark harness instead of criterion HOT 1
- DEEP polynomial with Lagrange kernel
- `Deserializable` should have an associated type error