taikoxyz / taiko-mono
A based rollup. 🥁
Home Page: https://taiko.xyz
License: MIT License
The first public testnet is likely to have only partial ZKP integration. We may want to add to the protocol a list of addresses that are allowed to propose/prove blocks.
Benchmark stats for the current open-source ECDSA circuit library:
proof generation: 29.5 s
verification time: ~60 ms
memory usage: ~7.5 GB
Since proof generation can be parallelized, if we can optimize it down to maybe <15 s and add an aggregation circuit that batches tens of transactions in maybe ~30 s, we could use signature proofs to protect our L2 against MEV. Our protocol design could then be better, and our assumption in #84 might change accordingly.
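As a rough sanity check on those targets, a back-of-envelope model of batched proving time; all numbers and the perfect-parallelism assumption are illustrative, not measured:

```typescript
// Estimate end-to-end time to prove a batch of signatures, assuming the
// per-signature proofs are independent (so they parallelize across workers)
// and a single aggregation step runs over all of them afterwards.
function estimateBatchProvingSeconds(
  numSigs: number,
  perSigSeconds: number,
  workers: number,
  aggregationSeconds: number
): number {
  // Each "round" runs up to `workers` signature proofs in parallel.
  const parallelRounds = Math.ceil(numSigs / workers);
  return parallelRounds * perSigSeconds + aggregationSeconds;
}

// e.g. 50 signatures at 15 s each on 25 workers, plus ~30 s aggregation:
// 2 rounds * 15 s + 30 s = 60 s end-to-end.
console.log(estimateBatchProvingSeconds(50, 15, 25, 30));
```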
Let's track future benchmark data here.
Unit and Integration tests for Bridge contracts
Now when we want to verify that something happened on L2, we use a state proof. This works great in most cases but has some limitations. It can also be useful to use the transactions trie to verify things like how many transactions were included in a block, at what position a transaction happened, what data was used in a transaction, etc.
To give one specific example: we want the prover to call a specific function on L2 at the start of the block with some fixed inputs. We could potentially use this method to make sure this is done correctly. The only thing to do on the circuits side is to allow the prover to do any tx at position 0 (and we then check on L1 that the expected tx was done); all other transactions need to use the data from the tx queue, which is an easy enough change.
I'm not yet sure if this is the best approach.
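A minimal sketch of the L1-side half of that check, assuming we can read the transaction at position 0 of the proposed block; the `L2Tx` shape and field names are hypothetical:

```typescript
// The circuits allow any tx at position 0; this is the complementary check
// that the tx placed there is exactly the expected "anchor" call with the
// fixed target and calldata the protocol requires at the start of a block.
interface L2Tx {
  to: string;   // target contract address, 0x-prefixed hex
  data: string; // calldata, 0x-prefixed hex
}

function isValidAnchorTx(
  txAtPositionZero: L2Tx,
  expectedTarget: string,
  expectedCalldata: string
): boolean {
  // Both the target and the full calldata must match the fixed inputs.
  return (
    txAtPositionZero.to.toLowerCase() === expectedTarget.toLowerCase() &&
    txAtPositionZero.data === expectedCalldata
  );
}
```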
We need to understand how deposits, withdrawals, and general communications are verified on the destination layer.
1) integration:LibTrieProof
verify()
verifies:
invalid argument 2: hex number > 64 bits
ProviderError: HttpProviderError
at HttpProvider.request (/home/runner/work/taiko-mono/taiko-mono/packages/protocol/node_modules/hardhat/src/internal/core/providers/http.ts:78:19)
at LocalAccountsProvider.request (/home/runner/work/taiko-mono/taiko-mono/packages/protocol/node_modules/hardhat/src/internal/core/providers/accounts.ts:187:34)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async EthersProviderWrapper.send (/home/runner/work/taiko-mono/taiko-mono/packages/protocol/node_modules/@nomiclabs/hardhat-ethers/src/internal/ethers-provider-wrapper.ts:13:20)
Looks like this is also introduced by #179 ?
With the current protocol implementation, if two L2 blocks are proposed within the same L1 block, those L2 blocks will have the same value for block.timestamp, which differs from Ethereum's client spec. However, we do not think this is an issue or that it will have any impact on developer experience.
Taiko's developer documentation does need to mention this explicitly.
Some very useful discussions are available in this issue
Users:
Costs: TXS_FEES: L2 transaction fees paid in the L2's native coin. The native coin can be interpreted as Ether, TAI, or potentially another ERC20 token on L1.

Proposers:
Costs: 1) PROPOSING_TX_FEE: ETH paid as L1 tx fees, 2) PROVER_FEE: block proving fees, and 3) small fiat fees for IT infrastructure (ignored in our discussion).
Income: TXS_FEES
Profit: TXS_FEES - PROPOSING_TX_FEE - PROVER_FEE

Provers:
Costs: 1) PROVING_TX_FEE: ETH paid as L1 tx fees, and 2) IT_COST: a considerable amount of ZKP computation hardware cost.
Income: 1) PROVER_FEE: block proving fees, and potentially 2) PROVER_REWARD: a performance-based block reward in TAI.
Profit: PROVER_FEE + PROVER_REWARD - PROVING_TX_FEE - IT_COST
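The per-role profit formulas can be written down directly; variable names mirror the text, and all inputs are assumed to be in the same unit of account:

```typescript
// Proposer profit: transaction fees collected minus L1 posting cost and
// the fee paid out to the prover (fiat IT costs ignored, as in the text).
function proposerProfit(txsFees: number, proposingTxFee: number, proverFee: number): number {
  return txsFees - proposingTxFee - proverFee;
}

// Prover profit: proving fee plus any performance-based reward, minus the
// L1 tx fee for submitting the proof and the ZKP hardware cost.
function proverProfit(proverFee: number, proverReward: number, provingTxFee: number, itCost: number): number {
  return proverFee + proverReward - provingTxFee - itCost;
}
```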
Here I chose to use proposers instead of sequencers because in our rollup contract there is nothing related to ordering transactions in a block. Given that Ethereum uses PBS (proposer builder separation), we should probably also use proposers and builders for clarity. The builder-vs-proposer economics is beyond the scope of our tokenomics design.
The need for ZKPs will inevitably create a market where only the highest bids receive their desired proofs. Such a market will serve as a foundation for almost all ZKP projects that are fulfillable by the prover's services/hardware.
We may choose to or have to build our own prover market, and other projects may also do the same, but eventually, there will be project-independent markets that try to maximize the prover's financial return and minimize the proposer's cost. (It is very similar to DSP, or demand-side platform in web2 ads ecosystem). Such a market will most likely operate off-chain but offer on-chain verifications of off-chain deals.
INFO: A demand-side platform (DSP) is a type of software that allows an advertiser to buy advertising with the help of automation. Because they allow mobile advertisers to buy high-quality traffic at scale with minimal friction, DSPs are a powerful marketing automation tool.
It's safer to assume that there will be multiple such markets, and we want to allow our provers to choose their favorite markets and use the deals they have reached off-chain with third-party provers in our protocol. We need to support custom deal adapters, as illustrated below:
function proposeBlock(
    BlockContext memory context,
    bytes calldata txList,
    bytes calldata zkDeal, // new field
    address zkDealAgent // new field
)
    external
    nonReentrant
{
    // First check that the agent is whitelisted by our protocol.
    checkDealAgent(zkDealAgent);
    // Then check that the deal is still valid and do some bookkeeping,
    // for example, lock the penalties for XX hours.
    IDealAgent(zkDealAgent).verifyDeal(zkDeal);
    // Deposit proving fees.
    taiToken.transferFrom(…);
    …
}
Once a deal is verified, a deal-prover is attached to the pending block with a deadline. If the deal-prover proves the block before the deadline, the deposited proving fees are transferred to the prover once the block is finalized, and the deal is consumed entirely or partially (a deal may serve multiple blocks).
If the deal-prover fails to prove the block before the deadline, the block is open to all provers. If another prover submits a ZKP successfully, a large penalty (also specified by the deal) is transferred out of the deal-prover's stake. These tokens are used for 1) paying the actual prover an amount higher than the fair market price, and 2) funding our DAO for other purposes.
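A sketch of this settlement logic; the `proverShareBps` split of the penalty between the fallback prover and the DAO is an invented parameter, since the text only says the actual prover is paid above fair market price:

```typescript
// Settlement outcome when a block is finalized.
interface Settlement {
  toProver: number; // paid to whoever actually proved the block
  toDao: number;    // remainder of any slashed penalty
}

function settleBlock(
  dealProver: string,    // prover named in the deal
  actualProver: string,  // prover who submitted the accepted ZKP
  provedAt: number,      // timestamp the proof landed
  deadline: number,      // deal deadline
  fee: number,           // deposited proving fee
  penalty: number,       // deal-prover's staked penalty
  proverShareBps: number // share of penalty paid to the fallback prover, in bps
): Settlement {
  if (actualProver === dealProver && provedAt <= deadline) {
    // Happy path: the deal-prover delivered in time and earns the fee.
    return { toProver: fee, toDao: 0 };
  }
  // Deal-prover missed the deadline: slash its stake. The fallback prover
  // receives the fee plus a share of the penalty; the DAO keeps the rest.
  const bonus = Math.floor((penalty * proverShareBps) / 10_000);
  return { toProver: fee + bonus, toDao: penalty - bonus };
}
```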
A deal is something like this:
struct Deal {
address prover;
bytes32 contextHash;
address feeToken;
uint256 fee; // in TAI token
uint256 deadline;
uint256 penalties; // in TAI
bytes extra;
bytes signature;
}
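For illustration, a TypeScript mirror of the struct with the basic checks a deal agent's verifyDeal might perform; the specific checks are assumptions, and signature recovery is stubbed out:

```typescript
// Off-chain mirror of the on-chain Deal struct; field names follow the text.
interface Deal {
  prover: string;
  contextHash: string;
  feeToken: string;
  fee: bigint;
  deadline: bigint;  // unix seconds
  penalties: bigint;
  extra: string;
  signature: string;
}

function isDealUsable(deal: Deal, nowSeconds: bigint, proverStake: bigint): boolean {
  if (nowSeconds > deal.deadline) return false;   // deal expired
  if (proverStake < deal.penalties) return false; // stake cannot cover the penalty
  if (deal.signature.length === 0) return false;  // must carry the prover's signature
  // A real agent would also recover the signer and compare it to deal.prover.
  return true;
}
```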
TAI will be minted when: PROVER_FEE, and use it to buy back TAI to burn.

Related to this idea: actor.createLayer2(theirOwnTokenAddressOnL1) to clone our canonical L2 to their own L2, such that the native fee token on their L2 is the given theirOwnTokenAddressOnL1 token.
The previous PoS design required a block proposer to submit its own ZKP; that ZKP could not be stolen by others (i.e., Ethereum miners). With the current design, if a ZKP is valid, it's valid for all transactors, so it is easy to steal a ZKP from the mempool.
We do need to somehow allow the prover to include its own transacting address as part of the public data.
A challenge: even if the above is possible, how can we verify in Solidity that the ZKP is associated with msg.sender without accessing the public input in the blob in the future?
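One possible shape for the sender binding, sketched below: fold the sender address into the committed public input, so a proof replayed from the mempool fails for any other msg.sender. SHA-256 stands in for keccak256 to keep the sketch dependency-free, and this does not address the blob-access question:

```typescript
import { createHash } from "node:crypto";

// Commit to both the proof's public inputs and the intended sender.
// The L1 contract would recompute this with msg.sender; a thief replaying
// the proof from a different address produces a different commitment.
function boundPublicInput(publicInputHash: string, sender: string): string {
  return createHash("sha256")
    .update(publicInputHash.toLowerCase())
    .update(sender.toLowerCase())
    .digest("hex");
}
```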
This issue is to improve initial documentation on the site. This will mainly answer basic factual questions but also include a start for some testnet guides.
If time permits, can also explore the idea of an i18n dropdown on the documentation website, to allow convenience to those where english is difficult. (otherwise will open another issue).
idea: use something like https://crowdin.com/ to create multiple versions of the pages in various languages (this is what ethereum-org uses for their website, and popular projects like pnpm)
In the EIP-1559 spec, there's an additional check to ensure that the block's gas limit won't fluctuate too much (by more than 1/1024 of the parent's gas limit):
Since we have not enabled EIP-1559 on layer 2, this check is currently not necessary. But if we enable EIP-1559 in the future, should we add this check to the contract? (And should the zkEVM, with EIP-1559 enabled, verify this too?)
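The referenced EIP-1559 header check, sketched as a standalone function using the spec's 1/1024 bound and 5000 minimum gas limit:

```typescript
// Per EIP-1559 header validity: the gas limit must stay strictly within
// parent_gas_limit ± parent_gas_limit / 1024, and above the 5000 minimum.
function isGasLimitValid(parentGasLimit: bigint, gasLimit: bigint): boolean {
  const maxDelta = parentGasLimit / 1024n; // bigint division floors, per the spec
  if (gasLimit >= parentGasLimit + maxDelta) return false; // raised too fast
  if (gasLimit <= parentGasLimit - maxDelta) return false; // lowered too fast
  if (gasLimit < 5000n) return false;                      // below protocol minimum
  return true;
}
```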
1) LibTxUtils
should verify invalid transaction signatures:
AssertionError: expected '0x1127Bf5f01a23266A2af0A3B2034DC98242…' to equal '0x00000000000000000000000000000000000…'
+ expected - actual
-0x1127Bf5f01a23266A2af0A3B2034DC9824205D64
+0x0000000000000000000000000000000000000000
at Context.<anonymous> (test/libs/LibTxUtils.test.ts:169:21)
I found that both cannonical and canonical are used in the code, e.g.:
cannonical: https://github.com/taikochain/taiko-mono/blob/9b19fc2d1c227dc693f82cf21309864a1bca87c6/packages/protocol/contracts/bridge/TokenVault.sol#L30
canonical: https://github.com/taikochain/taiko-mono/blob/9b19fc2d1c227dc693f82cf21309864a1bca87c6/packages/protocol/contracts/bridge/TokenVault.sol#L50
I think canonical is the correct spelling, per: https://ell.stackexchange.com/questions/291438/cannonical-or-canonical
This is a way to implement the rough idea Karl Floersch had when I talked with him at Devcon, where he was thinking of a way to speed up withdrawals when some state hasn't changed for some time (but without knowing how it could be done).
Currently only describes a way to implement the most basic case for ETH, but I believe this could be extended further for other assets and with fewer constraints.
Also still very possible this method is flawed because I might have missed something that makes this approach impossible. So please let me know if you think there's a problem somewhere!
In the general case, withdrawals can only be processed for a block that is proven and whose parent blocks are all proven as well. Because of that, withdrawal times are limited by the proof generation time for a block. In some cases, however, a full ZK-EVM proof is not required as long as certain conditions apply.
Let's say there's an onchain finalized block at X. At block X+n (n blocks later), a user submits an L2 transaction to the L2 ETH bridge to withdraw 1 ETH out of the rollup. Our goal is to be able to prove in another smart contract (on L1 or another L2) that the user's withdrawal transaction is a valid withdrawal transaction, without requiring the onchain finalization of block X+n.
To do this we first have to know when a transaction is actually valid:
1. The transaction signature is valid.
2. The transaction nonce matches the account's current nonce.
3. The account balance covers at least tx_gas_limit * tx_gas_price (plus the transferred value).
4. tx_gas_limit is high enough to successfully do the required transaction.

Conditions 1 and 4 do not depend on any state, so they are easy to check just from the transaction data itself. For conditions 2 and 3 we do need to know some select (but not necessarily complete) data about the account at the point when the withdrawal transaction is done. But we can take some shortcuts:
If we simplify the problem as much as possible, we can just require that the account did not send any transaction in blocks X+1 up to the block containing the withdrawal transaction (this is certainly not a requirement for the system to work; it just makes things easier). This makes it easy to know the relevant account data (its nonce and ETH balance) from the proven data at block X:
We can read this data directly from the post-state blockhash from block X. The transaction data we can read directly from the transaction data in block X+n. All that's left is for us to be able to efficiently prove that no transactions were done from the account in intermediate blocks (simplified case, otherwise we need to track the nonce and ETH balance, which should also be very doable). To do this efficiently we could depend on an optimized data structure to be attached to proposed blocks that allows this. The most basic one would be a hash of the list of originating addresses of each transaction in the proposed block which allows for a very cheap/simple ZKP to be generated.
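The "hash of the list of originating addresses" idea can be sketched like this; SHA-256 is a stand-in for whatever hash the circuits would use, and the per-block shape is hypothetical:

```typescript
import { createHash } from "node:crypto";

// Commitment each proposed block would carry: a hash over the ordered list
// of transaction sender addresses in that block.
function sendersCommitment(senders: string[]): string {
  const h = createHash("sha256");
  for (const s of senders) h.update(s.toLowerCase());
  return h.digest("hex");
}

// Off-chain check that `account` sent no transaction in any of the
// intermediate blocks X+1..X+n: each block is given as its plain sender
// list, which must re-hash to the commitment stored for that block.
function accountInactive(
  account: string,
  blocks: { commitment: string; senders: string[] }[]
): boolean {
  for (const b of blocks) {
    // The supplied sender list must match the committed hash...
    if (sendersCommitment(b.senders) !== b.commitment) return false;
    // ...and must not contain the withdrawing account.
    if (b.senders.some((s) => s.toLowerCase() === account.toLowerCase())) return false;
  }
  return true;
}
```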
This simplified system would already allow immediate withdrawals of ETH as long as (very roughly) the ETH in the account was not transferred in too recently (roughly within the time to generate a ZK-EVM proof). This seems very useful because it allows users to store ETH on L2 that is immediately available on both L2 and L1/other L2s, in a capital-efficient manner and without any dependency on third-party liquidity providers. On top of this basic system, it seems likely that support can be built for assets like NFTs, whose withdrawals cannot be sped up in other ways.
For optimistic rollups this seems like a very attractive system, because with very minimal circuit/ZKP support they can offer much faster withdrawals. For zkRollups the usefulness largely depends on how fast blocks can already be proven with a full ZK-EVM proof.
HTTP API for exposing events via a chain ID and address pairing. Unsure if we will need this, as it's possible the frontend can query The Graph or something instead.