
scroll-tech / zkevm-circuits


This project forked from privacy-scaling-explorations/zkevm-circuits

828.0 15.0 335.0 34.54 MB

License: MIT License

Rust 98.57% Makefile 0.08% Go 0.45% Solidity 0.73% Shell 0.10% Dockerfile 0.04% Handlebars 0.03%

zkevm-circuits's Introduction

Circuits for zkEVM


Check out the work in progress specification to learn how it works.

Getting started

To run the same tests as the CI, please use: make test-all.

Running benchmarks

There are currently several benchmarks in the workspace covering the circuits. All of them use the DEGREE env var to specify the degree k of the circuit used in the bench process.

  • Keccak Circuit prover benches -> DEGREE=16 make packed_multi_keccak_bench
  • EVM Circuit prover benches -> DEGREE=18 make evm_bench
  • State Circuit prover benches -> DEGREE=18 make state_bench

You can also run all benchmarks by running: make circuit_benches DEGREE=18.
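The DEGREE handling can be sketched in Rust; this is a minimal sketch with a hypothetical helper name, and the real benches may read the variable differently:

```rust
use std::env;

/// Parse a degree value, falling back to a default when the input is
/// absent or unparsable. (Hypothetical helper; the real benches may
/// read DEGREE differently.)
fn degree_from(var: Option<String>, default: u32) -> u32 {
    var.and_then(|s| s.parse().ok()).unwrap_or(default)
}

fn main() {
    // Read the circuit degree k from the DEGREE environment variable.
    let k = degree_from(env::var("DEGREE").ok(), 18);
    // A degree-k circuit has 2^k rows available.
    println!("degree k = {}, rows = {}", k, 1u64 << k);
}
```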

GH Actions Benchmark Results

Circuit Benchmark Results are accessible here: https://grafana.zkevm-testnet.org/d/vofy8DAVz/circuit-benchmarks?orgId=1

  • circuit_benchmarks panel displays:
    • overall test result
    • timers and system statistics
    • url for downloading prover log and sys stat files
    • clickable sysstats_url element that loads the memory and cpu utilization profiles for the given test

Project Layout

This repository contains several Rust packages that implement the zkEVM. The high-level structure of the repository is as follows:

bus-mapping

  • a crate designed to parse EVM execution traces and manipulate all of the data they provide in order to obtain structured witness inputs for the EVM Proof and the State Proof.

circuit-benchmarks

  • Measures the performance of each circuit in terms of proving and verifying time, as well as execution trace parsing and generation, for each subcircuit

eth-types

  • Different types helpful for various components of the zkevm, such as execution trace parsing or circuits

external-tracer

  • Generates traces by connecting to an external tracer

gadgets

geth-utils

  • Provides output from latest geth APIs (debug_trace) as test vectors

integration-tests

  • Integration tests for all circuits

keccak256

  • Modules for Keccak hash circuit

mock

  • Mock definitions and methods that are used to test circuits or opcodes

testool

  • CLI that provides tools for testing

zkevm-circuits

  • Main package that contains all circuit logic

zktrie

  • Modules for Merkle Patricia Trie circuit

zkevm-circuits's People

Contributors

adria0, aronisat79, ashwhitehat, brechtpd, chihchengliang, cperezz, darth-cy, davidnevadoc, dreamwugit, ed255, han0110, haoyuathz, kunxian-xia, leolara, lightsing, lispc, naure, noel2004, ntampakas, pinkiebell, roynalnaruto, rrzhang139, scroll-dev, silathdiir, smtmfft, succinctpaul, therealyingtong, xiaodino, z2trillion, zhenfeizhang


zkevm-circuits's Issues

precompile codehash could be unsound

In the new test cases for begin-tx with precompile #306, we found that a call to a precompile with value causes CodeDB to initialise the code hash of the precompile to EMPTY_HASH (previous value: Word::zero(), since the account did not exist).

This could be unsound, since the code hash of the precompile address becomes inconsistent (Word::zero() for a non-existent account, EMPTY_HASH for an empty account).

state circuit optimization

  1. speed up the padding-rows assignment
  2. in the assignment phase, check "first access reads don't change value" and "non-first access reads don't change value", reporting violations via log::error
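The read-consistency check in item 2 can be sketched over one key's accesses sorted by rw counter; the types here are simplified stand-ins, not the real circuit structs:

```rust
/// One access to a cell in the rw table (shapes hypothetical).
#[derive(Clone, Copy)]
struct Access {
    is_write: bool,
    value: u64, // stand-in for a field element
}

/// A first-access read must see `initial`, and every later read must
/// repeat the last value; in the assignment phase a violation would be
/// reported via log::error.
fn reads_consistent(initial: u64, accesses: &[Access]) -> bool {
    let mut current = initial;
    for a in accesses {
        if a.is_write {
            current = a.value;
        } else if a.value != current {
            return false;
        }
    }
    true
}

fn main() {
    let ok = [
        Access { is_write: false, value: 0 }, // first read sees initial 0
        Access { is_write: true, value: 7 },
        Access { is_write: false, value: 7 }, // read repeats last write
    ];
    assert!(reads_consistent(0, &ok));
}
```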

TODOs for keccak circuit

  1. write the spec
  2. merge the three impls?
  3. q_padding and is_paddings have totally different meanings.
  4. when q_round_last is active, q_round is false; while when q_padding_last is active, q_padding is true.
  5. ROWS cannot be bigger than 28.
  6. change the env var names, like ROWS and DEGREE

In the rw table, there should not be a single tag for `AccountDestructed`

Currently in rw_table, the entry for a SELFDESTRUCT op is put in a dedicated row tagged 'AccountDestructed', i.e. a different tag from the common account rw. As a result, all account-deletion entries end up below the other account rw entries in the rw table.

Such a layout may be problematic for real-world contracts. With the CREATE2 op, a developer can relaunch a contract at the same address where one was destructed before, so consider the following scenario (in the same block):

  1. contract A calls SELFDESTRUCT and deletes itself at address A
  2. a CREATE2 call re-creates a contract at address A

And we expect to see the following layout in rw_table:

RWC               Tag                Address  FieldTag  Remark
rwc for action 2  Account            A        Nonce     write at address A for re-creation
rwc for action 2  Account            A        CodeHash  write at address A for re-creation
rwc for action 1  AccountDestructed  A        -         destruct A

With the proposed scheme for mpt_lookup, this table cannot build a correct mpt_table, because the entry for destruction has to be put before the rw actions that re-launch A.

And if we consider that destructing A also means writing 0 to the Nonce, Balance and CodeHash of A, this rw table in fact violates the rule that, for a given address and tag, the rw counters must be in ascending order.
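That ordering rule can be stated as a small check over a simplified rw table; the row shape here is a stand-in for the real witness types:

```rust
use std::collections::HashMap;

/// Row of a simplified rw table: (tag, address, rw counter).
type Row = (u8, u64, u64);

/// Within each (tag, address) group, rw counters must be strictly
/// ascending (simplified sketch of the real constraint).
fn rwc_ascending(rows: &[Row]) -> bool {
    let mut last: HashMap<(u8, u64), u64> = HashMap::new();
    for &(tag, addr, rwc) in rows {
        if let Some(&prev) = last.get(&(tag, addr)) {
            if rwc <= prev {
                return false;
            }
        }
        last.insert((tag, addr), rwc);
    }
    true
}

fn main() {
    // A destruct entry (rwc 10) placed after re-creation writes (rwc 20)
    // under the same tag would violate the rule.
    assert!(rwc_ascending(&[(1, 0xA, 10), (1, 0xA, 20)]));
    assert!(!rwc_ascending(&[(1, 0xA, 20), (1, 0xA, 10)]));
}
```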

I propose a solution like:

  1. Abolish the 'AccountDestructed' tag
  2. For SELFDESTRUCT, break it down into 3 rw actions with the same rwc:
    • Write 0 to nonce
    • Write 0 to balance
    • Write 0 to codeHash
  3. The witness generator for the mpt circuit can consider an account to be deleted if the current state has nonce == 0 and codeHash == 0. So for the entry in mpt_table which writes codeHash to 0, it would provide the witness for a deletion instead of a common write action (before this entry there should be another one which writes nonce to 0).
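Step 2 of the proposal can be sketched as follows; the struct shapes and field names are hypothetical stand-ins for the real rw-table types:

```rust
/// Account field written by an rw row (hypothetical shapes).
#[derive(Debug, PartialEq, Clone, Copy)]
enum FieldTag {
    Nonce,
    Balance,
    CodeHash,
}

#[derive(Debug)]
struct AccountRw {
    rwc: u64,
    field: FieldTag,
    is_write: bool,
    value: u64, // stand-in for a 256-bit word
}

/// Expand one SELFDESTRUCT into three zero-writes sharing the same
/// rw counter, as step 2 above describes.
fn selfdestruct_rws(rwc: u64) -> Vec<AccountRw> {
    [FieldTag::Nonce, FieldTag::Balance, FieldTag::CodeHash]
        .iter()
        .map(|&field| AccountRw { rwc, field, is_write: true, value: 0 })
        .collect()
}

fn main() {
    let rows = selfdestruct_rws(42);
    assert_eq!(rows.len(), 3);
    assert!(rows.iter().all(|r| r.rwc == 42 && r.is_write && r.value == 0));
}
```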

New design of public input circuit

The design doc is here.

The overall goal is to reduce the gas cost spent in verifying the final proof. The main observation is that the keccak() function is much cheaper than AddMod/MulMod.

The new design uses keccak to compress the raw public inputs. Each tx's data is first hashed using keccak to get its tx_hash. Then these 32-byte tx_hash values are hashed, together with the block fields, using keccak:

  keccak([
       coinbase,
       gas_limit,
       number,
       time,
       difficulty,
       base_fee,
       chain_id,
       prev_block_hashes[0],
       ...,
       prev_block_hashes[255],
       
       block_hash,
       prev_state_root,
       state_root,
       
       keccak(rlp(txs[0])), 
       keccak(rlp(txs[1])), 
       keccak(rlp(txs[2])), 
       keccak(rlp(txs[3])), 
       ....,
       keccak(rlp(txs[n-1])), 
       DUMMY_PADDING_TX_HASH,
       ....,
       DUMMY_PADDING_TX_HASH,        
   ])

The impl of this new PI circuit can be found at the feat/pi-circuit branch.
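The preimage ordering above can be sketched as plain byte concatenation. Everything is simplified to 32-byte words and keccak itself is out of scope, so this only illustrates the layout, not the real encodings:

```rust
/// Assemble the public-input preimage in the documented order:
/// header fields, the 256 previous block hashes, the three roots,
/// then the (padded) per-tx hashes. All fields are modelled as
/// 32-byte words for simplicity; real encodings may differ.
fn pi_preimage(
    header_fields: &[[u8; 32]],     // coinbase .. chain_id
    prev_block_hashes: &[[u8; 32]], // prev_block_hashes[0..=255]
    roots: &[[u8; 32]; 3],          // block_hash, prev_state_root, state_root
    tx_hashes: &[[u8; 32]],         // keccak(rlp(tx)) plus padding entries
) -> Vec<u8> {
    let mut out = Vec::new();
    for word in header_fields
        .iter()
        .chain(prev_block_hashes)
        .chain(roots.iter())
        .chain(tx_hashes)
    {
        out.extend_from_slice(word);
    }
    out
}

fn main() {
    let zero = [0u8; 32];
    let pre = pi_preimage(&[zero; 7], &[zero; 256], &[zero; 3], &[zero; 4]);
    // 7 header fields + 256 prev hashes + 3 roots + 4 tx hashes
    assert_eq!(pre.len(), (7 + 256 + 3 + 4) * 32);
}
```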

META: management of "lts" branches named like scroll-dev-MMDD

current active branch

scroll-dev-0521

Branch management

We have some diffs not merged into upstream but needed for our integration.
So we periodically fork a branch from upstream main and apply our diffs.
Once a scroll-dev-MMDD branch is made, only small patches (compatibility guaranteed, including API, perf and tests) and bug fixes will be applied onto it (we make PRs against the scroll-dev-MMDD branch). If we need big new features, we fork a new scroll-dev-MMDD branch from upstream main and rebase (squash or not, both are OK) our old and new patches.

We should never force push.

reconstruct access list correctly in bus-mapping module

Problem

a simple bytecode like extcodesize 0xXXXX may break the proof system, due to the access list being calculated incorrectly and locally in statedb.rs.

Solution 1

reconstruct access list correctly in bus-mapping module locally @lightsing

ref:
https://eips.ethereum.org/EIPS/eip-2929
https://github.com/ethereum/go-ethereum/blob/b3fc9574ecba5143ee1b61f172100c9228a04e18/core/vm/eips.go#L115
https://github.com/ethereum/go-ethereum/blob/b3fc9574ecba5143ee1b61f172100c9228a04e18/core/state/statedb.go#L1000

in short (implement in this order):

  • extcodesize
  • extcodecopy
  • balance
  • precompiled contract
  • tx sender, tx receiver (maybe done?)
  • sstore sload (done)
  • *call* (done?)
  • optional access list in eip2930 (done?)

based on this branch: #164
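A minimal sketch of what "reconstruct locally" could look like, tracking warm addresses per EIP-2929; the types are simplified, and the real statedb.rs keeps more state (e.g. (address, storage key) pairs):

```rust
use std::collections::HashSet;

type Address = [u8; 20];

/// Warm-address tracking per EIP-2929 (simplified sketch; the real
/// access list also tracks storage slots).
#[derive(Default)]
struct AccessList {
    warm: HashSet<Address>,
}

impl AccessList {
    /// Mark an address warm; returns true if it was already warm
    /// (i.e. the access is charged the warm cost).
    fn touch(&mut self, addr: Address) -> bool {
        !self.warm.insert(addr)
    }
}

fn main() {
    let mut al = AccessList::default();
    let addr = [0xA0; 20]; // placeholder address
    assert!(!al.touch(addr)); // first touch: cold
    assert!(al.touch(addr)); // second touch: warm
}
```

EXTCODESIZE, EXTCODECOPY, BALANCE and the *CALL* family would all call `touch` on their target address, which is what the checklist above enumerates.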

Solution 2

Add it into StructLog directly: ethereum/go-ethereum#25278

CI speed / bitwise lookup table / faster MockProver

currently it costs more than 30 minutes to finish CI in appliedzkp/zkevm-circuits. I tested it locally and found that the bitwise op tests cost much time (other op tests cost 10-30s, while a bitwise op test needs 150-180s). The reason is that bitwise op tests need the full lookup table (with the bitwise lookup tables; e.g. the bitwise XOR table has 256*256 entries, and the same for bitwise AND/OR), while other ops need only a stripped lookup table (without the bitwise tables). Using the full lookup table needs more rows, namely a higher degree for the circuit.

In MockProver::verify, every gate and every lookup is evaluated on every row (including the empty/padding rows). So if we can verify on fewer rows, we can make verify much faster. We can add a method called verify_at_rows(row_ids: Box<dyn Iterator<Item = usize>>); since we know which rows can be skipped (padding rows, and non-q_step rows), the application circuit side can provide a much smaller set of row ids to the verifier to speed up verification.
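The application-side row selection can be sketched as follows; the proposed verify_at_rows API itself is hypothetical, so this only shows how the shortlist might be computed:

```rust
/// Collect the row ids worth verifying: rows inside the used region
/// whose step selector is on. Padding rows and non-q_step rows are
/// skipped (sketch; real circuits know their own layout).
fn rows_to_verify(total_rows: usize, used_rows: usize, q_step: &[bool]) -> Vec<usize> {
    (0..total_rows)
        .filter(|&r| r < used_rows && q_step.get(r).copied().unwrap_or(false))
        .collect()
}

fn main() {
    // 8 total rows, only the first 4 used, selector on at rows 0 and 2.
    let q_step = [true, false, true, false];
    assert_eq!(rows_to_verify(8, 4, &q_step), vec![0, 2]);
}
```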

We can do this after kilic rebases appliedzkp/halo2 on zcash/halo2.

Bug: bytecode step in evm test will add STOP even if the original bytecode doesn't have one

What command(s) is the bug in?

No response

Describe the bug

in any test like the following, even though the STOP opcode is not added to the bytecode, the running geth steps will contain a STOP step.

fn test_ok(opcode: OpcodeId, value: Word) {
    let n = opcode.postfix().expect("opcode with postfix");
    let mut bytecode = bytecode! {
        PUSH32(value)
    };
    for _ in 0..n - 1 {
        bytecode.write_op(OpcodeId::DUP1);
    }
    bytecode.write_op(opcode);
    // note: no STOP is appended here

    CircuitTestBuilder::new_from_test_ctx(
        TestContext::<2, 1>::simple_ctx_with_bytecode(bytecode).unwrap(),
    )
    .run();
}

tried it on the scroll-stable branch; will try on the main branch.

Concrete steps to reproduce the bug

No response

backlog: bring the memory gadget impl up to date

if we compare the code of the memory gadget with calldatacopy/calldataload, we find the memory gadget impl is quite old. For example: 32 identical lookups, versus using a condition to control lookup on/off and the rwc increment.

bug: call.address in access list?

for this tx: https://ethtx.info/mainnet/0x4096e88107bd14522dfc3500325bff580728504580f5103d8247a2a32425889f/

it seems that when this call is made, the USDC contract address 0xa0b8... (they use delegatecall; the entrance/state contract is 0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48, the logic contract is 0xa2327a938febf5fec13bacfb16ae10ecbc4cbdcf) is already in the access list. We need to check whether this is true. If so, when was it added to the access list?

Add a 'println' in geth, and re-trace the tx?

`verify_proof` fails with 'assertion failed: bases.len() >= size'

I am trying to write a simple integration test to generate proof and verify it for a given block for StateCircuit. Here is my test function:

async fn state_circuit_random_run(block_num: u64) {
    use halo2_proofs::pairing::bn256::Fr;
    let mut folder = PathBuf::from(r"./outputs");
    let params = {
        folder.push("state_circuit.params");
        let mut fd = std::fs::File::open(folder.as_path()).unwrap();
        folder.pop();
        Params::<G1Affine>::read(&mut fd).unwrap()
    };
    log::info!("Read {} file successfully", "state_circuit.params");


    let vk = {
        folder.push("state_circuit.vkey");
        let mut fd = std::fs::File::open(folder.as_path()).unwrap();
        folder.pop();
        VerifyingKey::<G1Affine>::read::<_, StateCircuit<Fr>>(&mut fd, &params).unwrap()
    };

    log::info!("Read {} file successfully", "state_circuit.vkey");
    log::info!("test state circuit, block number: {}", block_num);
    let cli = get_client();
    let cli = BuilderClient::new(cli).await.unwrap();
    let builder = cli.gen_inputs(block_num).await.unwrap();

    // Generate state proof
    let stack_ops = builder.block.container.sorted_stack();
    // log::info!("stack_ops: {:#?}", stack_ops);
    let memory_ops = builder.block.container.sorted_memory();
    // log::info!("memory_ops: {:#?}", memory_ops);
    let storage_ops = builder.block.container.sorted_storage();
    // log::info!("storage_ops: {:#?}", storage_ops);

    const DEGREE: u32 = 17;

    let rw_map = RwMap::from(&OperationContainer {
        memory: memory_ops,
        stack: stack_ops,
        storage: storage_ops,
        ..Default::default()
    });

    let randomness = Fr::default();
    let circuit = StateCircuit::<Fr>::new(randomness, rw_map, 1 << 16);

    let pk = keygen_pk(&params, vk, &circuit).expect("keygen_pk should not fail");

    let instance = circuit.instance();
    let instance_slices: Vec<&[Fr]> = instance.iter().map(|v| &v[..]).collect();
    let mut transcript = Blake2bWrite::<_, _, Challenge255<_>>::init(vec![]);

    create_proof(
        &params,
        &pk,
        &[circuit],
        &[&instance_slices[..]], //&[&[]],
        OsRng,
        &mut transcript,
    )
    .expect("proof generation should not fail");
    let proof = transcript.finalize();
    let index = 0;
    {
        folder.push(format!("state_circuit_proof{}.data", index));
        log::info!("{}", folder.as_path().display());
        let mut fd = std::fs::File::create(folder.as_path()).unwrap();
        folder.pop();
        fd.write_all(&proof).unwrap();
    }

    {
        folder.push(format!("state_circuit_instance{}.data", index));
        let mut fd = std::fs::File::create(folder.as_path()).unwrap();
        folder.pop();
        let instances = json!(&[&instance_slices[..]]
            .iter()
            .map(|l1| l1
                .iter()
                .map(|l2| l2
                    .iter()
                    .map(|c: &Fr| {
                        let mut buf = vec![];
                        c.write(&mut buf).unwrap();
                        buf
                    })
                    .collect())
                .collect::<Vec<Vec<Vec<u8>>>>())
            .collect::<Vec<Vec<Vec<Vec<u8>>>>>());
        write!(fd, "{}", instances.to_string()).unwrap();
    }
    let vk = {
        folder.push("state_circuit.vkey");
        let mut fd = std::fs::File::open(folder.as_path()).unwrap();
        folder.pop();
        VerifyingKey::<G1Affine>::read::<_, StateCircuit<Fr>>(&mut fd, &params).unwrap()
    };
    
    // let params = params.verifier::<Bn256>(1).unwrap();
    let general_params: Params<G1Affine> =
        Params::<G1Affine>::unsafe_setup::<Bn256>(DEGREE.try_into().unwrap());
    let verifier_params: ParamsVerifier<Bn256> =
        general_params.verifier(DEGREE as usize * 2).unwrap();
    let strategy = halo2_proofs::plonk::SingleVerifier::new(&verifier_params);
    let mut verifier_transcript = Blake2bRead::<_, _, Challenge255<_>>::init(&proof[..]);
    log::info!("Got to {}", "halo2_proofs::plonk::verify_proof");
    halo2_proofs::plonk::verify_proof(
        &verifier_params,
        &vk,
        strategy,
        &[&instance_slices[..]], //&[&[]],
        &mut verifier_transcript, 
    )
    .unwrap();
}

Running this test causes a panic upon verify_proof call pointing to the commitment.rs file. Here is the panic:

thread 'test_state_circuit_random_run' panicked at 'assertion failed: bases.len() >= size', /.cargo/git/checkouts/halo2-afe2d0b0be6b3c3a/1fc6770/halo2_proofs/src/poly/commitment.rs:308:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Am I missing something here?

here you can find params and vkey files.

test: enable state circuit test for every opcode

Whenever we add an opcode, we add an evm circuit test for it using run_test_circuit_incomplete_fixed_table(witness::build_block_from_trace_code_at_start(&bytecode)).

On the other hand, there are only a few hand-written state circuit tests in state.rs. We need to enable state tests for each opcode.

The "zero" account should has 0 value for codehash

Currently busmapping's "zero" AccountData has a codehash value of Keccak(nil):

pub fn zero() -> Self {
    Self {
        nonce: Word::zero(),
        balance: Word::zero(),
        storage: HashMap::new(),
        code_hash: *CODE_HASH_ZERO,
    }
}

This may cause issues: for example, when creating a new contract at an empty address (for which the trie still has no leaf), we get an mpt update entry for codehash of Keccak(nil) -> <contract codehash> instead of 0 -> <contract codehash>.

Also, while reading a non-existent address, the account object being put into stateDb does not match the codehash value read from the trace (the trace reads zero, but we have Keccak(nil) in stateDb).

I propose updating the 'zero' account object to have a codehash equal to zero, plus an additional update of the codehash to Keccak(nil) when an address is touched, as proposed in EIP-2929.
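The proposed change can be sketched with simplified stand-in types (the real AccountData holds Words and a storage map, which are omitted here):

```rust
/// Simplified stand-in for busmapping's AccountData (sketch).
#[derive(Debug, PartialEq)]
struct Account {
    nonce: u64,
    balance: u64,
    code_hash: [u8; 32], // stand-in for a 256-bit word
}

impl Account {
    /// Proposed "zero" account: code_hash is 0, not Keccak(nil).
    /// Keccak(nil) would only be written once the address is touched.
    fn zero() -> Self {
        Account { nonce: 0, balance: 0, code_hash: [0u8; 32] }
    }
}

fn main() {
    let a = Account::zero();
    // A trie with no leaf for this address then sees 0 -> <codehash>
    // on contract creation, instead of Keccak(nil) -> <codehash>.
    assert_eq!(a.code_hash, [0u8; 32]);
}
```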

code improvements issues

  1. unify the "caller fetches return data from callee" code. There is a lot of redundant code: expand, copy, fill with 0, etc.
  2. unify the naming of current_call and call.
  3. add comments to CallContext fields like value, to make the code easier to understand/debug.

err handling for return/revert stack/oog

Describe the feature you would like

background: #304 (review)

I am afraid the code is not robust enough...

let gas_refund = geth_step.gas.0 - memory_expansion_gas_cost - code_deposit_cost;
let caller_gas_left = geth_step_next.gas.0 - gas_refund;

i need some test cases:

  1. return on empty stack
  2. return but gas not enough, due to memory expansion (non creation case)
  3. revert on empty stack
  4. revert but gas not enough, due to memory expansion

not high priority

Additional context

No response

inner block corner case

What command(s) is the bug in?

No response

Describe the bug

build a test case where there are 3 (inner) blocks: the first has 1 tx, and the second and third have 0 txs.

I am afraid the constraints "block number does not change if this is the last inner block" and "Constrain state machine transitions" may fail.

Concrete steps to reproduce the bug

No response

handle codehash 0->keccak(nil) in call

when we transfer some ether to a non-existent account in a call, we need to add Rw { is_write: true, field_tag: CodeHash, prev_value: 0, value: keccak(nil) }, similar to begin_tx:

// if the transfer values make an account go from non-existent to existent
// we need to handle the codehash change
if !call.value.is_zero() {
    state.account_write(
        &mut exec_step,
        call.address,
        AccountField::CodeHash,
        CodeDB::empty_code_hash().to_word(),
        CodeDB::empty_code_hash().to_word(), // or Word::zero()?
    )?;
}

real world case: in this tx, https://ethtx.info/mainnet/0x0deef21e4b563f4394b5d4dae01f476403c1fdb28137042907e8b725bf19c3fa/ , the address 0xaffa7abb81e9717b60d7f077fbaedc72bf1ed5dd is transferred some ether, making it go from non-existent to existent.

reproduce problem:

RUST_LOG=trace TX_ID=0x0deef21e4b563f4394b5d4dae01f476403c1fdb28137042907e8b725bf19c3fa CIRCUIT=state cargo test --profile release --test mainnet --features circuits test_mock_prove_tx -- --nocapture

`block_convert` is quite confusing

RE: #65 (comment)

from https://github.com/appliedzkp/zkevm-circuits/blob/main/integration-tests/tests/circuits.rs#L287-L293 & https://github.com/appliedzkp/zkevm-circuits/blob/main/zkevm-circuits/src/evm_circuit/witness.rs#L589-L596 we can see that block_convert takes the bytecode of a call/tx, but why do we choose to pass a whole bus_mapping::circuit_input_builder::Block into it?
Does block_convert want to deal with a block, or just a piece of bytecode of a call/tx?

  1. if a block:
    it's very confusing that in https://github.com/appliedzkp/zkevm-circuits/blob/main/zkevm-circuits/src/evm_circuit/witness.rs#L554 we use the "bytecode of a whole block" for tx_convert
  2. if a piece of bytecode of a call/tx:
    it's very confusing that we use all the ops in a block ([1] & [2]), and it's also very confusing that we iterate over all the txs

the above only affects EvmCircuit; in StateCircuit it's more straightforward and easier to understand:
we always treat a block as a whole, and pass all the ops in a block into StateCircuit

Unable to run make test-all on MacOS Ventura 13.0

Hi everybody,

I am unable to run make test or make test-all on MacOS Ventura 13.0
Here is the error I am getting

error: the 'cargo' binary, normally provided by the 'cargo' component, is not applicable to the 'nightly-2022-08-23-aarch64-apple-darwin' toolchain
make: *** [test] Error 1

Is there any particular toolchain I need to run this, or any configuration I need to set up on my computer?

Best regards.

bug: returndatasize after successful create

What command(s) is the bug in?

No response

Describe the bug

In the EIP, it is said that "CREATE and CREATE2 are considered to return the empty buffer in the success case and the failure data in the failure case." I think this is inconsistent with the current code:

https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/65577a99d042c326cfb9ba5131d804805d504b0f/bus-mapping/src/circuit_input_builder/input_state_ref.rs#L859

Concrete steps to reproduce the bug

No response

2 same swap errors

Everything worked well except for 2 identical swap errors:
Transaction Hash 0x9a93fcb277d1c87d396d88a973ab32755955bf55466e187b3eb1d5a0b6d8669d
Transaction Hash 0x398b7ef3fe60774aff6d89b8a507178984b997f6b04e1f668d0a1e2b4bbee402
