facebook / winterfell
A STARK prover and verifier for arbitrary computations.
License: MIT License
Per the CI this command succeeds:
cargo build --no-default-features --features alloc --target wasm32-unknown-unknown --release
However, a build with wasm-pack or neon might be easier to use.
Does anyone have any examples of this library being called from JavaScript?
This issue aims to start a discussion on the merits of changing RandomCoin<B: StarkField, H: Hasher> from a struct to a trait. The reason is that for some choices of H (specifically, the ones based on the sponge construction) and of B (e.g., when it is the base field of the algebraic hasher H), seeding (which can be done by absorbing) and drawing random values (which can be done by squeezing) can be achieved in more efficient ways.
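To make the discussion concrete, here is a rough sketch of what such a trait could look like; the method set and signatures are assumptions made for illustration, not the library's actual API.

use winter_crypto::Hasher;
use winter_math::{FieldElement, StarkField};

// Sketch only: a coin generic over the hasher, so that sponge-based hashers can
// implement reseeding as an absorb and drawing as a squeeze.
pub trait RandomCoin {
    type BaseField: StarkField;
    type Hasher: Hasher;

    /// Creates a new coin from the specified seed.
    fn new(seed: &[u8]) -> Self;

    /// Reseeds the coin with new data (for a sponge-based hasher, an absorb).
    fn reseed(&mut self, data: <Self::Hasher as Hasher>::Digest);

    /// Draws the next pseudo-random field element (for a sponge-based hasher, a squeeze).
    fn draw<E: FieldElement<BaseField = Self::BaseField>>(&mut self) -> E;
}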
Currently, the Matrix struct stores trace and polynomial evaluation data in column-major order. This allows us to run the LDE for each column independently, but presents challenges during constraint evaluation, where we need to process two sequential rows. Thus, we spend some time reading rows from the trace by copying values from each column into an EvaluationFrame struct.
A better approach could be to store data in a Matrix in row-major order. This has the following advantages: EvaluationFrame can just store pointers to rows in the trace - no need to move any data around.
We currently make a hard assumption that Merkle trees are used for vector commitments (i.e., the trace LDE commitment, constraint evaluation commitment, FRI layer commitments, etc.). It would be nice to abstract this assumption away and provide a way for the user to specify which vector commitment type to use.
This would help with such things as #38 and would also make #9 simpler, as we could provide SaltedMerkleTree as one of the vector commitment types.
To make vector commitments fully generic, we could use something like this:
pub trait VectorCommitment: Sized {
type Options;
type Item: Clone + Serializable + Deserializable;
type Commitment: Copy + Serializable + Deserializable;
type Proof: Clone + Serializable + Deserializable;
type MultiProof: Clone + Serializable + Deserializable;
type Error: Debug;
fn new(items: Vec<Self::Item>, options: Self::Options) -> Result<Self, Self::Error>;
fn commitment(&self) -> Self::Commitment;
fn open(&self, index: usize) -> Result<(Self::Item, Self::Proof), Self::Error>;
fn open_many(&self, indexes: &[usize]) -> Result<(Vec<Self::Item>, Self::MultiProof), Self::Error>;
fn verify(
commitment: Self::Commitment,
index: usize,
item: Self::Item,
proof: &Self::Proof
) -> Result<(), Self::Error>;
fn verify_many(
commitment: Self::Commitment,
indexes: &[usize],
items: &[Self::Item],
proof: &Self::MultiProof
) -> Result<(), Self::Error>;
}
The first step for implementing this is probably to update our current MerkleTree implementation to comply with this interface.
Right now, we include the first FRI layer in the FRI proof. This is redundant, as the verifier can compute the values for the first FRI layer directly from trace and constraint queries. Removing the first FRI layer from the proof should reduce proof size by roughly 15%.
However, we can do this only when perfect zero knowledge is not required. When we implement the ability to turn on perfect zero knowledge (via #9), we will need to include first FRI layer queries in the FRI proof. So, it probably makes sense to address this issue once #9 has been addressed.
Implementing a high-performance version of the Rescue hash function would be the first step towards recursive STARKs. Target performance should be 100K+ hashes (64B -> 32B) per second.
To achieve this level of performance, we would need to implement Rescue in a small (~64-bit) field. A description of the Rescue hash function can be found here:
Potential parameters (for ~128-bit security level) could be:
We could also expose this implementation as two different variants: one with a 4-field-element output (~128-bit collision resistance) and one with a 3-field-element output (96-bit collision resistance). Although, it may be better to have a separate implementation for the 96-bit version, as that could be instantiated with a state width of 9 elements and thus should be noticeably faster than the one using 12 elements.
Hello,
I tried running the STARK VDF (https://github.com/bobbinth/stark-vdf) using winterfell, and when I run it with cargo run, I get this:
To fix this, I ran it with cargo run --release and got the following output:
With this restriction, if we have an 11-step trace table, then we must pad it to length 16.
Then, when implementing the get_pub_inputs method of Prover, we just cannot determine the last trace row from the trace table parameter. We still need another variable indicating the real length of the trace table - in the above example, that is 11, not 16.
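A minimal sketch of the workaround described above (the struct and field names are hypothetical): carry the unpadded length in the public inputs so that get_pub_inputs can expose the meaningful last row rather than a padding row.

use winter_math::fields::f128::BaseElement;

// Hypothetical public inputs for an 11-step computation padded to 16 rows.
pub struct PublicInputs {
    /// result taken from row `real_length - 1` of the padded trace
    pub result: BaseElement,
    /// number of meaningful steps before padding (11 in the example above)
    pub real_length: usize,
}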
A new MerkleTree has already been implemented but it is not yet integrated into the rest of the library. Migrating to this new implementation will allow us to relax the condition that hash function output must be 32 bytes.
winterfell/utils/core/src/lib.rs
Lines 78 to 82 in c99530d
pub fn uninit_vector<T>(length: usize) -> Vec<T> {
let mut vector = Vec::with_capacity(length);
unsafe {
let data_ptr = vector.as_mut_ptr();
std::ptr::write_bytes(data_ptr, 0, length);
vector.set_len(length);
}
return vector;
}
This implementation uses std::ptr::write_bytes to zero-fill the newly allocated buffer before calling set_len, so the returned vector's memory is not left uninitialized. Note, however, that an all-zero byte pattern is not a valid value for every type T, so the function is still not sound for arbitrary T.
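For comparison, a fully safe alternative is sketched below (the function name is hypothetical). For types where the default value is an acceptable fill, the vec! macro produces an initialized vector, and for numeric types the compiler typically lowers the fill to a memset, so the cost relative to the unsafe version is often negligible.

pub fn zeroed_vector<T: Clone + Default>(length: usize) -> Vec<T> {
    // safe: every element is a real, valid value of T
    vec![T::default(); length]
}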
Currently, all our examples are done in a 128-bit field. It would be good to create an example in a 62-bit field. There are a few options here - but I think signature aggregation would be one of the more interesting ones.
When testing the FFT, I tried to interpolate some trace points and then evaluate them, as below.
let points: Vec<u32> = (1..=4).collect();
let mut a: Vec<BaseElement> = points.iter().map(|e| BaseElement::from(e.clone()) ).collect();
let inv_twiddles = get_inv_twiddles::<BaseElement>(points.len());
println!("{:?}", points);
fft::interpolate_poly(&mut a, &inv_twiddles);
fft::evaluate_poly(&mut a, &inv_twiddles);
let res : Vec<_> = a.iter().map(|e| { e.as_int() }).collect();
println!("post {:?}", res);
output:
prev: 1, 2, 3, 4
post: 1, 4, 3, 2
I expect that the post output should be the same as the prev. Can you please help explain how this happens?
It doesn't look like the merkle crate supports consistency proofs between two different trees, but it does support (batched) inclusion proofs for leaves. Would y'all be open to adding consistency proof support?
Unless I should be using another method for serialization, deserialization of zero values in this field does not work due to their canonical representation. The code below will panic with the error message: InvalidValue("invalid field element: value 18446744069414584321 is greater than or equal to the field modulus")
use winter_math::{fields::f64::BaseElement, StarkField};
use winter_utils::Deserializable;
use winter_utils::SliceReader;
let bytes = BaseElement::from(0u8).as_int().to_le_bytes();
let mut reader = SliceReader::new(&bytes);
BaseElement::read_batch_from(&mut reader, 1).unwrap();
The issue is the following line, which should be changed to a greater than inequality:
winterfell/math/src/field/f64/mod.rs
Line 584 in cd76df2
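Sketched below is the one-character change being suggested, using the value from the error message above; the surrounding check is reconstructed for illustration and is not quoted from the crate.

// 2^64 - 2^32 + 1, the f64 modulus; also the value that as_int() produced for
// zero in the report above.
const M: u64 = 18_446_744_069_414_584_321;

fn main() {
    let value: u64 = M;
    // current check: rejects the value, producing the InvalidValue error
    assert!(value >= M);
    // suggested check: would accept value == M and reject only values above it
    assert!(!(value > M));
}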
vadym@ubuntu:~/winterfell$ cargo build --release --manifest-path examples/Cargo.toml --features concurrent
Downloaded crossbeam-epoch v0.9.5
Downloaded memoffset v0.6.4
Downloaded rayon-core v1.9.1
Downloaded crossbeam-utils v0.8.5
Downloaded rayon v1.5.1
Downloaded crossbeam-deque v0.8.1
Downloaded crossbeam-channel v0.5.1
Downloaded 7 crates (423.1 KB) in 0.75s
Compiling autocfg v1.0.1
Compiling crossbeam-utils v0.8.5
Compiling crossbeam-epoch v0.9.5
Compiling scopeguard v1.1.0
Compiling rayon-core v1.9.1
Compiling either v1.6.1
Compiling num_cpus v1.13.0
Compiling structopt v0.3.25
Compiling memoffset v0.6.4
Compiling rayon v1.5.1
Compiling crossbeam-channel v0.5.1
Compiling crossbeam-deque v0.8.1
Compiling winter-utils v0.2.0 (/home/vadym/winterfell/utils/core)
Compiling winter-math v0.2.0 (/home/vadym/winterfell/math)
Compiling winter-rand-utils v0.2.0 (/home/vadym/winterfell/utils/rand)
Compiling winter-crypto v0.2.0 (/home/vadym/winterfell/crypto)
error[E0277]: `[B; N]` is not an iterator
--> crypto/src/hash/rescue/mod.rs:23:27
|
23 | result.iter_mut().zip(tail).for_each(|(r, t)| *r *= t);
| ^^^^
| |
| expected an implementor of trait `IntoIterator`
| help: consider borrowing here: `&tail`
|
= note: the trait bound `[B; N]: IntoIterator` is not satisfied
= note: required because of the requirements on the impl of `IntoIterator` for `[B; N]`
error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>`, but its trait bounds were not satisfied
--> crypto/src/hash/rescue/mod.rs:23:33
|
23 | result.iter_mut().zip(tail).for_each(|(r, t)| *r *= t);
| ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>` due to unsatisfied trait bounds
|
::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
|
13 | pub struct Zip<A, B> {
| -------------------- doesn't satisfy `_: Iterator`
|
= note: the following trait bounds were not satisfied:
`[B; N]: Iterator`
which is required by `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
`std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
error[E0277]: `[[winter_math::fields::f62::BaseElement; 12]; 12]` is not an iterator
--> crypto/src/hash/rescue/rp62_248/mod.rs:262:27
|
262 | result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
| ^^^
| |
| expected an implementor of trait `IntoIterator`
| help: consider borrowing here: `&MDS`
|
= note: the trait bound `[[winter_math::fields::f62::BaseElement; 12]; 12]: IntoIterator` is not satisfied
= note: required because of the requirements on the impl of `IntoIterator` for `[[winter_math::fields::f62::BaseElement; 12]; 12]`
error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>`, but its trait bounds were not satisfied
--> crypto/src/hash/rescue/rp62_248/mod.rs:262:32
|
262 | result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
| ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>` due to unsatisfied trait bounds
|
::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
|
13 | pub struct Zip<A, B> {
| -------------------- doesn't satisfy `_: Iterator`
|
= note: the following trait bounds were not satisfied:
`[[winter_math::fields::f62::BaseElement; 12]; 12]: Iterator`
which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
`std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
error[E0277]: `[winter_math::fields::f62::BaseElement; 12]` is not an iterator
--> crypto/src/hash/rescue/rp62_248/mod.rs:311:26
|
311 | state.iter_mut().zip(acc).for_each(|(s, a)| *s *= a);
| ^^^
| |
| expected an implementor of trait `IntoIterator`
| help: consider borrowing here: `&acc`
|
= note: the trait bound `[winter_math::fields::f62::BaseElement; 12]: IntoIterator` is not satisfied
= note: required because of the requirements on the impl of `IntoIterator` for `[winter_math::fields::f62::BaseElement; 12]`
error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>`, but its trait bounds were not satisfied
--> crypto/src/hash/rescue/rp62_248/mod.rs:311:31
|
311 | state.iter_mut().zip(acc).for_each(|(s, a)| *s *= a);
| ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>` due to unsatisfied trait bounds
|
::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
|
13 | pub struct Zip<A, B> {
| -------------------- doesn't satisfy `_: Iterator`
|
= note: the following trait bounds were not satisfied:
`[winter_math::fields::f62::BaseElement; 12]: Iterator`
which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
`std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
error[E0277]: `[[winter_math::fields::f64::BaseElement; 12]; 12]` is not an iterator
--> crypto/src/hash/rescue/rp64_256/mod.rs:262:27
|
262 | result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
| ^^^
| |
| expected an implementor of trait `IntoIterator`
| help: consider borrowing here: `&MDS`
|
= note: the trait bound `[[winter_math::fields::f64::BaseElement; 12]; 12]: IntoIterator` is not satisfied
= note: required because of the requirements on the impl of `IntoIterator` for `[[winter_math::fields::f64::BaseElement; 12]; 12]`
error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>`, but its trait bounds were not satisfied
--> crypto/src/hash/rescue/rp64_256/mod.rs:262:32
|
262 | result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
| ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>` due to unsatisfied trait bounds
|
::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
|
13 | pub struct Zip<A, B> {
| -------------------- doesn't satisfy `_: Iterator`
|
= note: the following trait bounds were not satisfied:
`[[winter_math::fields::f64::BaseElement; 12]; 12]: Iterator`
which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
`std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
error: aborting due to 8 previous errors
Some errors have detailed explanations: E0277, E0599.
For more information about an error, try `rustc --explain E0277`.
error: could not compile `winter-crypto`
To learn more, run the command again with --verbose.
vadym@ubuntu:~/winterfell$
Currently, our implementation of degree-preserving projection folds evaluation domains by a factor of 4. Would be good to have the ability to dynamically change the folding factor between values of 4, 8, and 16 to see what impact this makes on prover performance, proof size etc.
This will let us generate proofs at a 128-bit security level even in small fields (e.g. ~64 bits).
For the 62-bit field we are currently using, the irreducible polynomial could be x^3 - x + 2. But this won't work for our 128-bit field. This is not a problem per se, because we don't need to do cubic extensions of 128-bit fields, but it might be a good idea to investigate approaches for tying extension fields to a specific base field (rather than giving one generic implementation for all extensions of a given degree).
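As a purely illustrative sketch (not the library's code), multiplication in a cubic extension F_p[x]/(x^3 - x + 2) comes down to a schoolbook polynomial product followed by reduction using x^3 = x - 2 (and hence x^4 = x^2 - 2x):

// Element a0 + a1*x + a2*x^2 is represented as [a0, a1, a2]; p is any prime < 2^63.
fn ext3_mul(a: [u64; 3], b: [u64; 3], p: u64) -> [u64; 3] {
    let mul = |x: u64, y: u64| ((x as u128 * y as u128) % p as u128) as u64;
    let add = |x: u64, y: u64| (x + y) % p;
    let sub = |x: u64, y: u64| (x + p - y) % p;
    // schoolbook product: c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4
    let c0 = mul(a[0], b[0]);
    let c1 = add(mul(a[0], b[1]), mul(a[1], b[0]));
    let c2 = add(add(mul(a[0], b[2]), mul(a[1], b[1])), mul(a[2], b[0]));
    let c3 = add(mul(a[1], b[2]), mul(a[2], b[1]));
    let c4 = mul(a[2], b[2]);
    // fold x^3 and x^4 back into degree < 3 using x^3 = x - 2 and x^4 = x^2 - 2x
    [
        sub(c0, mul(2, c3)),
        sub(add(c1, c3), mul(2, c4)),
        add(c2, c4),
    ]
}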
Hi,
Could you port the examples folder to a different repository so it is easier to use? I am trying to use it and it has been a bit difficult. It would be much easier if you had a separate repository for examples.
A reference that has done this can be found here.
Thank you and thank you so much for the project. Very useful!
Our README says:
Therefore, our final goal is to efficiently distribute proof generation across many machines.
What's the current status of distributed proving? What are the plans?
For the v0.6 release I'm thinking of adding the following to the library:
VectorCommitment trait.
Suggestions/comments are welcome.
In the following code, the degree for state[0] and state[1] is set to 2. However, I don't see how the value of state[0] or state[1] is derived by multiplying the value of any other state, so the degree should be 1:
https://github.com/novifinancial/winterfell/blob/main/examples/src/lamport/aggregate/air.rs#L51
https://github.com/novifinancial/winterfell/blob/main/examples/src/lamport/aggregate/air.rs#L52
Let me know if I am missing something.
Thanks
Currently, the trace LDE is implemented as a concrete struct (see here). It might be good to instead define it as a trait and provide a default implementation (similar to what we did with RandomCoin recently). Making it a trait would make the structure more flexible and allow for custom implementations of the trace LDE which might be more suitable for various use cases (e.g., long-running provers where memory may not need to be allocated/de-allocated all the time, or provers which outsource LDE computations to hardware, etc.).
The trait could look something like this:
pub trait TraceLde {
type BaseField: StarkField;
type HashFn: ElementHasher<BaseField = Self::BaseField>;
/// Takes main trace columns as input, interpolates them into polynomials in coefficient form,
/// evaluates the polynomials over the LDE domain, and commits to the polynomial evaluations.
///
/// Returns a tuple containing the column polynomials in coefficient form and the commitment
/// to the polynomial evaluations over the LDE domain.
fn set_main_trace(
main_trace: &ColMatrix<Self::BaseField>,
) -> (ColMatrix<Self::BaseField>, <Self::HashFn as Hasher>::Digest);
/// Takes auxiliary trace segment columns as input, interpolates them into polynomials in
/// coefficient form, evaluates the polynomials over the LDE domain, and commits to the
/// polynomial evaluations.
///
/// Returns a tuple containing the column polynomials in coefficient form and the commitment
/// to the polynomial evaluations over the LDE domain.
fn add_aux_segment<E: FieldElement<BaseField = Self::BaseField>>(
aux_trace: &ColMatrix<E>,
) -> (ColMatrix<E>, <Self::HashFn as Hasher>::Digest);
/// Reads current and next rows from the main trace segment into the specified frame.
fn read_main_trace_frame_into(
&self,
lde_step: usize,
frame: &mut EvaluationFrame<Self::BaseField>,
);
/// Reads current and next rows from the auxiliary trace segment into the specified frame.
fn read_aux_trace_frame_into<E: FieldElement<BaseField = Self::BaseField>>(
&self,
lde_step: usize,
frame: &mut EvaluationFrame<E>,
);
/// Returns trace table rows at the specified positions along with Merkle authentication paths
/// from the commitment root to these rows.
fn query(&self, positions: &[usize]) -> Vec<Queries>;
/// Returns the number of rows in the execution trace.
fn trace_len(&self) -> usize;
/// Returns blowup factor which was used to extend original execution trace into trace LDE.
fn blowup(&self) -> usize;
}
Creating an instance of TraceLde would be handled by the prover. To do this, we'd need to add the following to the Prover trait:
pub trait Prover {
...
type TraceLde: TraceLde<BaseField = Self::BaseField>;
fn new_trace_lde(&self, trace_info: &TraceInfo, blowup: usize) -> Self::TraceLde;
}
This structure can be further improved by abstracting the evaluation frame away behind an associated type and replacing the read_* methods with some sort of a frame iterator (which would be helpful for #80). But I'm leaving this for a separate issue.
Does Winterfell support recursive proofs? Is there a simple way to write a circuit which takes as inputs and checks a zkSNARK generated for another circuit (or the same circuit)? Or how would you recommend going about doing this with Winterfell?
winter-utils is failing to build with --no-default-features for all targets.
Vec should be pulled from super, and not from the prelude, as it will not be available by default under no-std.
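A sketch of the usual no-std pattern this implies; the exact module layout is an assumption and not necessarily how the crate is organized.

// In the crate root (requires `extern crate alloc;` when the `std` feature is disabled):
#[cfg(not(feature = "std"))]
pub use alloc::vec::Vec;
#[cfg(feature = "std")]
pub use std::vec::Vec;

// In submodules, pull the re-export in via `super` instead of relying on the prelude:
// use super::Vec;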
Hi @irakliyk, is there any plan to support the Poseidon hash over GL or BN128?
Ideally, this should be a dev dependency, as we use methods which rely on the rand crate almost exclusively for testing and benchmarking.
In the conjectured security evaluation, we have:
let field_security = field_size - lde_domain_size.trailing_zeros();
(i.e., the field security is computed as the field size in bits minus log2 of the LDE domain size.)
This conflicts with the conjectured security formula given in the EthSTARK paper, page 40, where for the field security we only consider the size of the extension field.
I was then wondering where this adjustment comes from - perhaps it is part of the proven security estimate rather than the conjectured one?
If that is the case, then wouldn't it make sense to stick to the "conjectured field security" estimate for the conjectured security level?
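For a concrete instance of the difference (numbers chosen only for illustration): with a 128-bit extension field and an LDE domain of size 2^24, the formula quoted above yields 104 bits, while a formula that considers only the extension field size would give 128 bits.

let field_size: u32 = 128; // log2 of the (extension) field size, illustrative value
let lde_domain_size: u64 = 1 << 24;
let field_security = field_size - lde_domain_size.trailing_zeros();
assert_eq!(field_security, 104);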
It would be great if Hasher::Digest also had serde support:
https://github.com/novifinancial/winterfell/blob/main/crypto/src/hash/mod.rs#L22
Some AIR constraints cannot be expressed as functions involving just two consecutive rows of trace registers. An example of such a constraint is found in the Cairo whitepaper, where a multi-set permutation check is used to ensure that memory is continuous and read-only. This constraint checks address-value pairs for four different pointer types (pc, dst, op0, op1). When these pointers are located in the same trace row, it is not possible to implement the required cumulative product step constraint for nontrivial programs, as memory accesses for more than one pointer type can occur in a single execution step and are not broken down into individual rows. Thus, when we specify additional auxiliary columns for all of the various address-value pairs (regardless of pointer type, and sorted by address), there will be a mismatch in trace length. While sorted address-value columns could be constructed for each pointer type, and continuity constraints could be modified to ensure that memory accesses restricted to a specific pointer type are read-only, it does not appear possible to ensure that the same memory accesses across different pointer types also remain read-only. To solve this issue, the Cairo whitepaper introduces "virtual columns" and "subcolumns" (page 49) to allow for larger evaluation frames that effectively provide non-overlapping windows of the execution trace.
With the above in mind, a simple modification of the current frame implementation could allow for larger window sizes, and otherwise seek to keep the interface the same. We could split the current EvaluationFrame struct into an EvaluationFrame trait and a ConsecutiveEvaluationFrame struct implementing this trait (similar to how ExecutionTrace was split in a previous pull request). The ConsecutiveEvaluationFrame (there is probably a better name) will have the same functionality as the current struct (it has a window size of 1), and remain in use for both the two OOD point evaluations and for AIR transition evaluations that only need the current and next row. Users that need a larger window for their transition function (e.g. the current 4 rows and next 4 rows) can create a custom struct satisfying the trait. It might also make sense to allow for the current and next functions to have differing window sizes. I think at a minimum we will need to modify the evaluate_fragment function in ConstraintEvaluator and the read_main_trace_frame_into function in TraceLde.
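A very rough sketch of the trait split suggested above; apart from the EvaluationFrame and ConsecutiveEvaluationFrame names, which come from the discussion, the shape of the trait is an assumption made for illustration.

use winter_math::FieldElement;

pub trait EvaluationFrame<E: FieldElement> {
    /// Number of rows exposed in the "current" window.
    const CURRENT_WINDOW: usize;
    /// Number of rows exposed in the "next" window.
    const NEXT_WINDOW: usize;
    /// Returns a row from the current window.
    fn current(&self, offset: usize) -> &[E];
    /// Returns a row from the next window.
    fn next(&self, offset: usize) -> &[E];
}

/// Behaves like today's struct: a window of one current row and one next row.
pub struct ConsecutiveEvaluationFrame<E: FieldElement> {
    current: Vec<E>,
    next: Vec<E>,
}

impl<E: FieldElement> EvaluationFrame<E> for ConsecutiveEvaluationFrame<E> {
    const CURRENT_WINDOW: usize = 1;
    const NEXT_WINDOW: usize = 1;
    fn current(&self, _offset: usize) -> &[E] {
        &self.current
    }
    fn next(&self, _offset: usize) -> &[E] {
        &self.next
    }
}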
These currently fail.
Also, it would be good to not do an automatic bench run if a PR's status is draft, and do it only once the status changes from draft to ready for review.
We currently only allow prime finite fields to implement the StarkField trait. However, I believe that the library would benefit greatly from relaxing this requirement to any finite field, provided it still has large 2-adicity.
In particular, I have in mind adding support for Mersenne prime fields, for the following reason:
What's more, for Mersenne prime fields that would be of interesting size for STARKs:
The same field would allow support for the FourQ curve, adding efficient curve support for applications that may need it (I haven't done comparisons with ecgfp5 over f64, so I don't know by how much FourQ is more efficient).
Also the prime field
f64
, as the paper comparison was done over the binary field of same size. Unfortunately, the smallest
u32
, which could come in handy if (or when) GPU support is added to Winterfell (which I believe, @irakliyk feel free to correct me on this, will come sooner or later, whether through generic libraries or dedicated ones), especially as GPU register sizes are 32 bits.
In addition, the hashing throughput of algebraic hashes over the prime field
Currently, the last step in FRI verification is https://github.com/novifinancial/winterfell/blob/e43b75d365e03a71288ede25a6c2c9d9dc021a4d/fri/src/verifier/mod.rs#L307-L319
This is inefficient in, at least, two ways:
(position, remainder[position]).
Instead of checking that remainder[position] == evaluation, just compute the set commitment to (position, evaluations[position]) and then check that the two set-commitments are equal. This assumes that the queries at the initial layer are drawn in such a way that the last folded positions span the range [0, remainder.len() - 1]. I assume this is already implemented in Winterfell, but I haven't checked.
As for the set-commitment, a sequential hash of remainder[i] for
Right now, we specify a single folding factor for all FRI layers. Out of supported folding factors (4, 8, 16), it seems like 8 results in the smallest proof in most situations. However, it might be possible to optimize this even further by using different folding factors across different layers. I wouldn't expect the impact to be huge - but getting something around 5% proof size reduction may be possible.
One way to go about this is to deterministically create a schedule of folding factors based on the extended domain size, field element size, and hash function digest size. This schedule can then be provided as an input to the FRI prover/verifier.
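A sketch of one way such a schedule could be generated; the thresholds and factor choices below are arbitrary placeholders (and field element and digest sizes are ignored for simplicity), not a worked-out policy.

// Returns the folding factor to use at each successive FRI layer until the
// remainder is small enough.
fn folding_schedule(mut domain_size: usize, max_remainder_size: usize) -> Vec<usize> {
    let mut schedule = Vec::new();
    while domain_size > max_remainder_size {
        // fold more aggressively while the domain is still large
        let factor = if domain_size >= 1 << 20 { 16 } else { 8 };
        schedule.push(factor);
        domain_size /= factor;
    }
    schedule
}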
This is not a real bug because, I guess, it at most generates an extra unused random coefficient.
https://github.com/novifinancial/winterfell/blob/10a5f5ad90592889fe977c505012cd9f2159fb23/air/src/air/mod.rs#L512
In this line, 3 coefficients are sampled for each trace polynomial: \alpha_i for the trace opening at z, \beta_i for the opening at g*z, and \gamma_i for showing that the trace poly is defined over the base field. If we are not working over an extension field, then we don't need to prove the third statement. I guess this is not a big problem because, at most, the extra coefficient never gets read.
Currently, the only commitment scheme supported by Winterfell is Merkle trees. We should investigate adding an additional commitment scheme: Verkle tree.
By using Verkle trees we could reduce proof sizes significantly (by a factor of roughly 6x - 8x) while giving up only post-quantum security. The big question is how it would affect proof generation time (e.g., how long would it take to construct a Verkle tree with 1M nodes?). Also, for performance and other reasons, we should probably use IPA-based Verkle trees (as opposed to KZG-based ones).
If the performance is acceptable, we should add Verkle tree commitments as one of the dynamically configurable parameters - e.g. a commitment_scheme parameter with a type looking something like this:
pub enum CommitmentScheme {
MerkleTree,
VerkleTree,
}
Some references on Verkle trees:
Hi, I am delighted to find the winterfell repository; it has been helpful to me. I am trying to understand the examples you gave, and I found the degree variable in air.rs difficult to understand. The documentation describes that all trace columns have a degree of 1 and that when multiplying trace columns together, the degree increases by 1. Further, if a constraint involves the multiplication of one trace column and one periodic column, we should use with_cycles.
Here, in the Merkle tree example, the first six terms all have a degree of 5.
let degrees = vec![
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::with_cycles(5, vec![HASH_CYCLE_LEN]),
TransitionConstraintDegree::new(2),
];
However, when I looked further into evaluate_transition, I found that the rescue::enforce_round constraints have a degree of 3 for the first six terms above. And then there are the following constraints.
// when hash_flag = 0, make sure accumulated hash is placed in the right place in the hash
// state for the next round of hashing. Specifically: when index bit = 0 accumulated hash
// must go into registers [0, 1], and when index bit = 1, it must go into registers [2, 3]
let hash_init_flag = not(hash_flag);
let bit = next[6];
let not_bit = not(bit);
result.agg_constraint(0, hash_init_flag, not_bit * are_equal(current[0], next[0]));
result.agg_constraint(1, hash_init_flag, not_bit * are_equal(current[1], next[1]));
result.agg_constraint(2, hash_init_flag, bit * are_equal(current[0], next[2]));
result.agg_constraint(3, hash_init_flag, bit * are_equal(current[1], next[3]));
// make sure capacity registers of the hash state are reset to zeros
result.agg_constraint(4, hash_init_flag, is_zero(next[4]));
result.agg_constraint(5, hash_init_flag, is_zero(next[5]));
The hash_init_flag comes from a periodic column. not_bit and bit come from a normal trace column. Take the first term as an example: the degree here is 3, and adding the rescue::enforce_round constraints, the total degree should be 6.
result.agg_constraint(0, hash_init_flag, not_bit * are_equal(current[0], next[0]));
// hash_init_flag * not_bit * are_equal(current[0], next[0]);
I don't get how the degree of 5 in the code was determined.
In addition, each of the two terms below should have a degree of 4, and together with the rescue::enforce_round constraints, the degree should be 7.
result.agg_constraint(2, hash_init_flag, bit * are_equal(current[0], next[2]));
result.agg_constraint(3, hash_init_flag, bit * are_equal(current[1], next[3]));
Can someone help me? I'd appreciate it.
Hi,
First, this is a really nice project. Thank you!
Onto my question: the recent Cairo paper describes using Randomized AIRs with Preprocessing (RAPs) for Cairo's memory. While they don't describe in detail how they modify the STARK protocol, it appears to allow the prover to first commit to a set of trace columns, then use that commitment to generate randomness that can be used in an additional set of trace columns. The main use case here is that with such capability, local constraints (as in AIR) can be used to verify global properties (such as that one column is a permutation of another, which is what is done in Cairo's memory).
Is this of interest to anyone here? I'd really appreciate some pointers or advice on how the API for winterfell might be best modified to support this. Of course, if the modifications are of use to anyone here I'd be happy to contribute them.
Thanks and have a nice weekend
Whenever there is a trace column with the same repeated entry, there's a mismatch in the expected degree of the composition poly, as long as it is computed as constraint_degree * (trace_length - 1) - divisor_degree,
here https://github.com/novifinancial/winterfell/blob/46dce1adf06dcb0778b546054e938f0144fc1288/prover/src/constraints/evaluation_table.rs#L440 and here https://github.com/novifinancial/winterfell/blob/46dce1adf06dcb0778b546054e938f0144fc1288/air/src/air/transition/degree.rs#L103
The problem seems to be the use of trace_length - 1 instead of the actual column degree.
Although it seems to be only a problem for debugging, it is annoying because it is not the expected behaviour. I wonder if this problem might also happen when some trace columns are generated, for example, as the evaluations of the identity polynomial.
Proofs currently generated by the library leak info. This is not an issue when the primary use case for proofs is succinctness (e.g. light client, signature aggregation), but this is not desirable for use cases which need to hide private data (e.g. anon token, hash pre-image). Ideally, we should have an option to enable perfect zero-knowledge for proofs.
This will require the following changes:
The into_batch() method of the Queries struct should return a vector of field elements as a second tuple value.
Rescue Prime hash function has been implemented as a part of #50. However, it is not yet integrated into the prover and verifier.
There is one challenging aspect here: RP62_248 works only over the f62 field; however, the prover and verifier expect the hash function to be generic over all fields. It would be nice if we could do some kind of "type narrowing" - e.g., if there is a mismatch between the base field of the AIR and the hash function, a runtime error would be thrown; otherwise, everything would work as expected. But I'm not sure if this is currently possible in Rust.
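One way the runtime check could be expressed, sketched purely for illustration (the TypeId-based mechanism here is just one possibility, not something settled in the issue): compare the two base-field types at runtime and return an error on mismatch.

use core::any::TypeId;

// Returns an error if the AIR's base field and the hash function's base field differ.
fn check_same_base_field<FAir: 'static, FHash: 'static>() -> Result<(), String> {
    if TypeId::of::<FAir>() == TypeId::of::<FHash>() {
        Ok(())
    } else {
        Err("hash function base field does not match the AIR base field".to_string())
    }
}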
Hi there. I'm working on a blockchain-related project which uses winterfell, and I need to compile my code to WebAssembly so that it can be executed on an EVM-compatible runtime.
I found that the generated wasm binary imports core::panicking::panic somehow, even if panic = "abort" is enabled:
> wasm-dis wasm32-unknown-unknown/release/contract.wasm | grep "import.*panic"
(import "env" "_ZN4core9panicking5panic17h6619a393253f8e08E" (func $_ZN4core9panicking5panic17h6619a393253f8e08E (param i32 i32 i32)))
And this function is called by $_ZN17compiler_builtins3int5shift4Ashl4ashl17h85b84ee9c80145b7E, which is called by $_ZN17compiler_builtins3int5shift9__ashlti317hbfcf8d3fd2119232E, which is called by $__ashlti3, which is called by $_ZN11winter_math5field6traits10StarkField17get_root_of_unity17hc9d7430c89370043E.
And here is $_ZN11winter_math5field6traits10StarkField17get_root_of_unity17hc9d7430c89370043E:
https://github.com/novifinancial/winterfell/blob/f89e6c1011a9e3f42667905e94b81265d8999dae/math/src/field/traits.rs#L218-L227
So I guess rustc has some internal checks to ensure that the shl operation doesn't overflow, which always call panic.
I made the following change, and the compiled wasm no longer imports panic.
- let power = Self::PositiveInteger::from(1u32) << (Self::TWO_ADICITY - n);
- Self::TWO_ADIC_ROOT_OF_UNITY.exp(power)
+ let power = 1u64 << (Self::TWO_ADICITY - n); // `TWO_ADICITY` is 39 in f62, and 40 in f128, so u64 is sufficient (?)
+ Self::TWO_ADIC_ROOT_OF_UNITY.exp(Self::PositiveInteger::from(power))
Since I am a beginner at both Rust and winterfell, I am wondering whether my modified code works in the same way.
If it makes sense, I am willing to submit a pull request.
We should implement custom serialization for the StarkProof struct. One of the benefits of this is that it would let us remove serde as a dependency from all crates.
PR #124 broke the build with the concurrent feature by changing the scope of the fft_in_place function (originally in the fft/serial module) to private and moving it to the fft/fft_inputs module without updating the calls in the fft/concurrent module.
I think something like this has already happened in the past - a change breaking builds with parallelization went unnoticed in the main branch. Perhaps we should consider having a check for this in the CI?
This issue was brought up by Quan Thoi Minh Nguyen from Google.
The core of the issue is how we define the verifier interface here. Basically, the verifier accepts a StarkProof as an input, and the StarkProof struct contains a ProofOptions struct. In turn, ProofOptions contains a bunch of protocol parameters which, in the end, define the security level of the proof. This setup could allow a potential attacker to send a fake proof at a very low security level (e.g., just a few bits), and the verifier would accept this proof.
So, it is up to the users of this library to make sure they check the security level of the proof before passing it to the verify() function. While this works, it is probably not ideal, as people might forget that this needs to be done, leading to unintended consequences.
I can think of 3 ways of changing the verifier interface to make this type of bug less likely.
Option 1: return the security level from the verify() function
We could change the signature of the verify() function to something like this:
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
proof: StarkProof,
pub_inputs: AIR::PublicInputs,
) -> Result<u32, VerifierError>
Where the returned u32 would specify the security level of the verified proof.
Option 2: pass the ProofOptions struct separately
We could change the signature of the verify() function to something like this:
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
proof: StarkProof,
options: ProofOptions,
pub_inputs: AIR::PublicInputs,
) -> Result<(), VerifierError>
This would also require that we remove ProofOptions from StarkProof.
Option 3: pass a minimum acceptable security level to the verify() function
We could change the signature of the verify() function to something like this:
pub fn verify<AIR: Air, HashFn: ElementHasher<BaseField = AIR::BaseField>>(
proof: StarkProof,
pub_inputs: AIR::PublicInputs,
min_security_level: u32,
) -> Result<(), VerifierError>
My thinking is that option 1 is not really explicit enough and the return value could be ignored by the caller just as easily. Option 2 makes the interface less ergonomic and difficult to use, in my opinion. So, my preference would be with option 3 - but would love to know if others think differently.
Currently supported hash functions (SHA3 and BLAKE3) produce 256-bit outputs. This is overkill for proofs which target security levels below 128 bits.
For example, for proofs targeting a 100-bit security level, a hash output of 200 bits should be sufficient (as we still get 100-bit collision resistance). Using such a hash function should reduce proof sizes by about 20%.
As the first step, we could add just a single hash function - e.g. BLAKE3 with 192-bit output (we could call it Blake3_192 to keep the naming consistent with the Blake3_256 that we currently have).
In this line there's a TODO saying "make sure layer queries hash into leaves of layer proof". Sounds like the verifier currently does not verify that, or is it verified somewhere else?