halo2's Introduction

halo2 Crates.io

Minimum Supported Rust Version

Requires Rust 1.56.1 or higher.

Minimum supported Rust version can be changed in the future, but it will be done with a minor version bump.

Controlling parallelism

halo2 currently uses rayon for parallel computation. The RAYON_NUM_THREADS environment variable can be used to set the number of threads.

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

halo2's People

Contributors

3for, alexander-camuto, ashwhitehat, brechtpd, chihchengliang, cperezz, daira, dconnolly, defuse, dependabot[bot], ebfull, han0110, honeywest, jonathanpwang, kilic, kunxian-xia, lispc, naure, ntampakas, nuttycom, parazyd, pinkiebell, r3ld3v, rex4539, spherel, str4d, therealyingtong, trapdoorheader, velaciela, zhenfeizhang


halo2's Issues

Polynomials are evaluated twice (before multiopen and inside multiopen)

The ProverQuery struct does not contain the evaluation of the polynomial at the query point, even though that evaluation is already computed before we invoke the multiopen prover.

/// A polynomial query at a point
#[derive(Debug, Clone)]
pub struct ProverQuery<'com, C: CurveAffine> {
    /// point at which polynomial is queried
    pub(crate) point: C::Scalar,
    /// coefficients of polynomial
    pub(crate) poly: &'com Polynomial<C::Scalar, Coeff>,
    /// blinding factor of polynomial
    pub(crate) blind: Blind<C::Scalar>,
}

The multiopen prover will then evaluate the polynomials again. I therefore suggest adding an eval field of type C::Scalar to the ProverQuery struct.

[feat] disable compress_selector

In our use case, we don't want the serialized vk to be too big (dozens of GiB), so we would rather not store the [num_selectors × 2^degree rows] of selector values. To achieve that, we prefer to stop compressing selectors and instead treat each selector as a simple fixed column. The vk then only needs to store the commitment to each of those fixed columns instead of 2^degree values.

We need to adjust the implementation carefully so that users cannot abuse it or trigger errors accidentally.

ELI15:

Which section of the Halo 2 book were you reading?

What was unclear?

What would help to make it clearer to you?

feat halo2-dev branch

We need a new halo2-dev-MMDD branch.

Features to include:
scroll-dev-1220
tianyi/fft/mem-opt
sync with pse/main

scroll-dev proof is not accepted by PSE verifier

I created a GWC proof on the scroll-dev-1220 branch (I also tried scroll-dev-0920) using halo2_proofs/tests/plonk_api.
Then I switched to the PSE v2022_09_10 tag (before the configurable instance query change) and read the previous proof back in. The verifier does not accept it:

thread 'plonk_api' panicked at 'assertion failed: strategy.finalize()', halo2_proofs/tests/plonk_api.rs:532:9

What exactly was changed in scroll-dev (I'm guessing something related to turning zk off)?

perf: optimize parallel fft

There have been two implementations of the FFT in halo2:

  1. the older one is parallel_fft() (v2022_0809)
  2. and the current is recursive_fft() (after privacy-scaling-explorations#36)

When splitting out sub-FFTs, parallel_fft() performs many redundant memory accesses. We have solved this problem and achieved sublinear acceleration on multi-core CPUs.

When unsafe code (L270, L344) is allowed, we gain another ~10% improvement.


To integrate this patch, you need to pick the latest two commits from the parallel_fft_opt branch.



For benchmarking, use branch: test_fft_opt, and run:
cargo test --release --package halo2_proofs --lib domain::test_fft -- --nocapture

AMD EPYC 7R32 (48 cores)

log_n = 8   ori_time: 1.788235ms     opt_time: 148.862µs     speedup: 12.08
log_n = 9   ori_time: 1.46092ms      opt_time: 264.874µs     speedup: 5.530
log_n = 10  ori_time: 819.161µs      opt_time: 2.270721ms    speedup: 0.360
log_n = 11  ori_time: 1.835406ms     opt_time: 2.252321ms    speedup: 0.814
log_n = 12  ori_time: 3.148143ms     opt_time: 5.397674ms    speedup: 0.583
log_n = 13  ori_time: 5.576377ms     opt_time: 5.612477ms    speedup: 0.993
log_n = 14  ori_time: 4.472701ms     opt_time: 5.989972ms    speedup: 0.746
log_n = 15  ori_time: 8.151602ms     opt_time: 6.512139ms    speedup: 1.251
log_n = 16  ori_time: 11.674981ms    opt_time: 8.898862ms    speedup: 1.311
log_n = 17  ori_time: 19.881883ms    opt_time: 11.798853ms   speedup: 1.685
log_n = 18  ori_time: 35.840553ms    opt_time: 18.853219ms   speedup: 1.901
log_n = 19  ori_time: 71.152448ms    opt_time: 32.842491ms   speedup: 2.166
log_n = 20  ori_time: 151.443976ms   opt_time: 67.273882ms   speedup: 2.251
log_n = 21  ori_time: 331.807999ms   opt_time: 136.822963ms  speedup: 2.425
log_n = 22  ori_time: 704.271073ms   opt_time: 278.359093ms  speedup: 2.530
log_n = 23  ori_time: 1.477827549s   opt_time: 553.424693ms  speedup: 2.670
log_n = 24  ori_time: 2.919710946s   opt_time: 1.105393876s  speedup: 2.641
log_n = 25  ori_time: 6.390272215s   opt_time: 2.231981721s  speedup: 2.863
log_n = 26  ori_time: 13.128261527s  opt_time: 4.529503699s  speedup: 2.898
log_n = 27  ori_time: 25.954740655s  opt_time: 9.248495642s  speedup: 2.806
log_n = 28  ori_time: 55.198835287s  opt_time: 19.78027198s  speedup: 2.790

AMD EPYC 7R32 (192 cores)

log_n = 8   ori_time: 3.74807ms      opt_time: 149.613µs     speedup: 25.15
log_n = 9   ori_time: 5.804869ms     opt_time: 366.215µs     speedup: 15.85
log_n = 10  ori_time: 5.041949ms     opt_time: 680.519µs     speedup: 7.413
log_n = 11  ori_time: 7.945028ms     opt_time: 1.073015ms    speedup: 7.404
log_n = 12  ori_time: 5.281402ms     opt_time: 2.455723ms    speedup: 2.151
log_n = 13  ori_time: 5.849819ms     opt_time: 4.801575ms    speedup: 1.218
log_n = 14  ori_time: 8.588547ms     opt_time: 9.530919ms    speedup: 0.901
log_n = 15  ori_time: 20.888314ms    opt_time: 10.381802ms   speedup: 2.012
log_n = 16  ori_time: 24.97523ms     opt_time: 32.097187ms   speedup: 0.778
log_n = 17  ori_time: 37.41044ms     opt_time: 29.850816ms   speedup: 1.253
log_n = 18  ori_time: 51.713583ms    opt_time: 18.666604ms   speedup: 2.770
log_n = 19  ori_time: 90.150187ms    opt_time: 23.275967ms   speedup: 3.873
log_n = 20  ori_time: 180.770491ms   opt_time: 46.031966ms   speedup: 3.927
log_n = 21  ori_time: 320.897128ms   opt_time: 1.164406098s  speedup: 0.275
log_n = 22  ori_time: 648.377941ms   opt_time: 149.23508ms   speedup: 4.344
log_n = 23  ori_time: 1.278251211s   opt_time: 259.321524ms  speedup: 4.929
log_n = 24  ori_time: 2.641640615s   opt_time: 515.228742ms  speedup: 5.127
log_n = 25  ori_time: 5.586176699s   opt_time: 1.449970708s  speedup: 3.852
log_n = 26  ori_time: 11.023842933s  opt_time: 1.934147691s  speedup: 5.699
log_n = 27  ori_time: 25.63376698s   opt_time: 3.835092488s  speedup: 6.684
log_n = 28  ori_time: 47.17084451s   opt_time: 7.657266252s  speedup: 6.160

dev tracker: cpu version of halo2-fri prover

Why change the assignment of unusable rows for advice columns?

In the advice columns, the blinding-factor assignment to the unusable rows has been changed: only the last row is now assigned, with the value Scalar::one().

In my understanding, the blinding factors are added to ensure zero knowledge. Can zero knowledge still hold with this deterministic assignment to the last row? Or does the change serve some other purpose?

In Zcash
https://github.com/zcash/halo2/blob/76b3f892a9d598923bbb5a747701fff44ae4c0ea/halo2_proofs/src/plonk/prover.rs#L294-L298

In Scroll-tech

// Add blinding factors to advice columns
for advice_values in &mut advice_values {
    //for cell in &mut advice_values[unusable_rows_start..] {
    //    *cell = C::Scalar::random(&mut rng);
    //    *cell = C::Scalar::one();
    //}
    let idx = advice_values.len() - 1;
    advice_values[idx] = Scheme::Scalar::one();
}

Modified in commit 4b53eee.
