
helios's People

Contributors

0xmodene, abdelstark, ckoopmann, controlcpluscontrolv, dadepo, danilowhk, drspacemn, enitrat, fmhall, giovannivignone, glihm, haoyuathz, hsyodyssey, kevinheavey, ltfschoen, ncitron, orenyomtov, patstiles, pcarranzav, prestwich, qiweiii, refcell, rex4539, rkreutz, sagar-a16z, sergey-melnychuk, simon-saliba, sragss, whfuyn, zvolin

helios's Issues

error: could not compile `client`

Steps taken:

  1. git clone https://github.com/a16z/helios.git
  2. cargo build

I get the following error:

thread 'rustc' panicked at 'forcing query with already existing DepNode

  • query-key: Canonical { max_universe: U0, variables: [CanonicalVarInfo { kind: Region(U0) }, CanonicalVarInfo { kind: Region(U0) }], value: ParamEnvAnd { param_env: ParamEnv { caller_bounds: [Binder(OutlivesPredicate(ReLateBound(DebruijnIndex(1), BoundRegion { var: 0, kind: BrAnon(0, None) }), ReLateBound(DebruijnIndex(1), BoundRegion { var: 1, kind: BrAnon(1, None) })), []), Binder(OutlivesPredicate(rpc::RpcInner, ReLateBound(DebruijnIndex(1), BoundRegion { var: 1, kind: BrAnon(1, None) })), [])], reveal: UserFacing, constness: NotConst }, value: Binder(TraitPredicate(<for<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, 'i> {std::future::ResumeTy, &'a mut execution::evm::Evm<'b, execution::rpc::http_rpc::HttpRpc>, &'c execution::types::CallOpts, &'d execution::evm::Evm<'e, execution::rpc::http_rpc::HttpRpc>, impl std::future::Future<Output = std::result::Result<std::collections::HashMap<ethers::types::H160, execution::types::Account>, execution::errors::EvmError>>, ()} as std::marker::Send>, polarity:Positive), []) } }
  • dep-node: evaluate_obligation(a5bd18beafa14380-25b55d5386557813)', /rustc/42325c525b9d3885847a3f803abe53c562d289da/compiler/rustc_query_system/src/dep_graph/graph.rs:316:9

........
........

note: rustc 1.67.0-nightly (42325c525 2022-11-11) running on x86_64-apple-darwin

note: compiler flags: --crate-type lib -C embed-bitcode=no -C split-debuginfo=unpacked -C debuginfo=2 -C incremental=[REDACTED]

note: some of the compiler flags provided by cargo are hidden

query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation for<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, 'i> {core::future::ResumeTy, &'a mut execution::evm::Evm<'b, execution::rpc::http_rpc::HttpRpc>, &'c execution::types::CallOpts, &'d execution::evm::Evm<'e, execution::rpc::http_rpc::HttpRpc>, impl core::future::future::Future<Output = core::result::Result<std::collections::hash::map::HashMap<primitive_types::H160, execution::types::Account>, execution::errors::EvmError>>, ()}: core::marker::Send
#1 [typeck] type-checking rpc::<impl at client/src/rpc.rs:112:1: 112:31>::estimate_gas
#2 [typeck_item_bodies] type-checking all item bodies
#3 [analysis] running analysis passes on this crate
end of query stack
error: could not compile client

bug: avoid hardcoded gas limit when fetching access lists

Right now we use a 10M gas limit to create the access list, which can cause errors if your address doesn't have enough funds to pay for the gas. To fix this, use an untrusted eth_estimateGas call to set the gas limit. We should also gracefully handle errors when someone uses an address with no gas funds as the from parameter. When this happens, we can just return an empty accounts map and force the evm to fetch state on the fly (which will be slow).
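
A rough sketch of that flow, using hypothetical slimmed-down types (the real helios rpc traits, error types, and account map differ):

use std::collections::HashMap;

// Hypothetical, simplified interfaces for illustration only.
struct CallOpts { from: Option<[u8; 20]>, gas: Option<u64> }
struct Account;
type AccessList = Vec<([u8; 20], Vec<[u8; 32]>)>;

trait ExecutionRpc {
    fn estimate_gas(&self, opts: &CallOpts) -> Result<u64, String>;
    fn create_access_list(&self, opts: &CallOpts, gas: u64) -> Result<AccessList, String>;
}

// Estimate the gas limit with an untrusted call instead of hardcoding 10M, and
// fall back to an empty accounts map if the `from` address cannot pay for gas.
fn fetch_access_list<R: ExecutionRpc>(rpc: &R, opts: &CallOpts) -> HashMap<[u8; 20], Account> {
    let gas = rpc.estimate_gas(opts).unwrap_or(10_000_000);
    match rpc.create_access_list(opts, gas) {
        Ok(list) => list.into_iter().map(|(addr, _slots)| (addr, Account)).collect(),
        // e.g. insufficient funds for gas: return an empty map and let the evm
        // fetch state on the fly instead of failing the whole call.
        Err(_) => HashMap::new(),
    }
}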

fix memory leak in node

Currently, Node stores every historical ExecutionPayload entry in a hashmap. This mapping is unbounded, so as the node runs for a long period of time it can grow quite large. We should probably replace it with a more suitable data structure, such as a bounded queue.
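
A minimal sketch of what a bounded replacement could look like (type and field names are illustrative):

use std::collections::VecDeque;

// Old entries are evicted from the front once capacity is reached, so memory
// use stays constant no matter how long the node runs.
struct PayloadCache<P> {
    capacity: usize,
    payloads: VecDeque<(u64, P)>, // (block number, payload)
}

impl<P> PayloadCache<P> {
    fn new(capacity: usize) -> Self {
        Self { capacity, payloads: VecDeque::with_capacity(capacity) }
    }

    fn push(&mut self, block_number: u64, payload: P) {
        if self.payloads.len() == self.capacity {
            self.payloads.pop_front();
        }
        self.payloads.push_back((block_number, payload));
    }

    fn get(&self, block_number: u64) -> Option<&P> {
        self.payloads.iter().find(|(n, _)| *n == block_number).map(|(_, p)| p)
    }
}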

write readme

We should include sections on installation and contributing, as well as an in-depth description of how everything works.

Feature request: Seed data back to the p2p network

I quote from the Mustekala project by MetaMask (shelved during crypto winter):

"Let’s say there are 10,000 discoverable online/active Full Ethereum Nodes. If we were to turn all Metamask users (About 1.5 million) into Light Clients it would overwhelm them very quickly."

It could be cool for Helios to contribute back to the p2p network if it is to reach widespread adoption.

The Mustekala architecture docs could be an inspiration.

This feature would be synergistic with #59.

Remaining RPC Methods

There are a few RPC methods from the Ethereum RPC Spec that have not been implemented within Helios yet.

The main ones that might be useful to our users are the following...

  1. eth_syncing
  2. eth_coinbase
  3. eth_feeHistory
  4. eth_newFilter
  5. eth_newBlockFilter
  6. eth_newPendingTransactionFilter
  7. eth_uninstallFilter
  8. eth_getFilterChanges
  9. eth_getFilterLogs
  10. eth_getTransactionByBlockHashAndIndex
  11. debug_getRawHeader
  12. debug_getRawBlock
  13. debug_getRawTransaction
  14. debug_getRawReceipts
  15. debug_getBadBlocks

Installed binary fails due to glibc version

Tried installing on Ubuntu 18.04 following the instructions for heliosup, but attempting to run the resulting helios binary failed with the following errors:

helios: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by helios)
helios: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by helios)

(Ubuntu 18.04 has glibc 2.27.)

Normally this would happen because the build was performed on a newer operating system. Building on an old one should give better compatibility. Alternatively, statically linking a libc (by targeting musl) would avoid this issue.

support socks5 proxies

This will allow Tor and Nym support.

I have a draft of this already, but am noticing that many RPC providers reject connections coming from non-browsers via Tor. Need to see if there is a good way around this.

missing hexadecimal prefix

I was experimenting with react-native-helios and encountered the following discrepancy when calling ethers.provider.getCode() against the Helios RPC:

const helios = getHeliosProvider(ethereumMainnet);
const alchemy = new ethers.providers.AlchemyProvider(
  'mainnet',
   apiKey,
);

const address = "0x00000000006c3852cbEf3e08E8dF289169EdE581";

const alchemyCode = await alchemy.getCode(address); // 0x60806040526004361015610013575b6...
const heliosCode = await helios.getCode(address); // 60806040526004361015610013575b6... (invalid hexlify value)

It looks like we're not returning with the expected hexadecimal prefix. If I add 0x to the response returned by Helios, the two responses are identical.

I'm running helios at 4c72344b55991b6296ccbb12b3c9e3ad634d593e; I'll bring this up to latest to see if the issue persists. πŸ™
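
For reference, a minimal sketch of the kind of fix this suggests (the helper name is hypothetical): add the 0x prefix to the hex-encoded bytecode before returning it from the RPC server.

// Ensure responses are returned as 0x-prefixed hex strings.
fn ensure_hex_prefix(code: &str) -> String {
    if code.starts_with("0x") {
        code.to_string()
    } else {
        format!("0x{}", code)
    }
}

fn main() {
    assert_eq!(ensure_hex_prefix("6080604052"), "0x6080604052");
    assert_eq!(ensure_hex_prefix("0x6080604052"), "0x6080604052");
}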

bug: when verifying proofs, shared_prefix_length doesn't exit when a non-matching nibble is found

Hi! I noticed the shared_prefix_length function in the proofs module compares the paths from the proof and the path we're verifying, but it doesn't exit early if a nibble that doesn't match is found:

https://github.com/a16z/helios/blob/master/execution/src/proof.rs#L136-L138

For instance, when comparing 0xabcd with 0xabfd it will return a shared prefix length of 3 instead of 2.

This means a proof could actually be showing a divergent path, but because some nibbles match at the end of the path, the verifier will advance to later nibbles and potentially validate an invalid proof (though this is probably highly impractical to exploit in a real-world account or storage proof).

Moreover, as we're walking along the proof, if such a divergent path were encountered, the verifier should reject the proof if this is not the last node.

(I noticed this while comparing the verifier implementation to https://github.com/lidofinance/curve-merkle-oracle/blob/main/contracts/MerklePatriciaProofVerifier.sol#L74-L95 as it has a similar structure)

I have a fix for this ready in https://github.com/pcarranzav/helios/tree/fix-shared-prefix-length and will PR right after posting this (my laptop just seems to be taking ages to install the dependencies so that I can check the tests run properly...)
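
For reference, a minimal early-exit version could look like the following (illustrative only; the actual fix is in the linked branch):

// Compare nibble slices and stop at the first mismatch, so 0xabcd vs 0xabfd
// yields a shared prefix length of 2 rather than 3.
fn shared_prefix_length(path: &[u8], node_path: &[u8]) -> usize {
    path.iter()
        .zip(node_path.iter())
        .take_while(|(a, b)| a == b)
        .count()
}

fn main() {
    // nibbles of 0xabcd vs 0xabfd
    assert_eq!(shared_prefix_length(&[0xa, 0xb, 0xc, 0xd], &[0xa, 0xb, 0xf, 0xd]), 2);
}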

Illegal instruction (core dumped)

Running with helios --execution-rpc https://eth-mainnet.g.alchemy.com/v2/... results in the following:

[2022-12-15T02:50:59Z INFO  client::rpc] rpc server started at 127.0.0.1:8545
Illegal instruction (core dumped)

Expose all types under `execution/src/types`

Following #145, additional types need to be exported in order to use the helios lightclient externally.

I will submit a PR proposing to export the remaining types, which will enable the use of the client functions from external locations.

add interaction diagram

Hey y'all! Found your project through beerus, a Starknet light client that leverages helios.

I'm trying to understand the interaction flow that occurs in a helios light client and thought that a simple diagram (excalidraw-like) could help!

something in the spirit of

(example diagram images attached to the issue)

Thank you all for your help!

missing storage slots when getting accounts causing panics

Some calls are causing a very unexpected failure.

For example, calling renderBroker(5) on 0x8bb9a8baeec177ae55ac410c429cbbbbb9198cac always fails on line 287 of evm.rs. This is very unexpected, since in this case we are just fetching and proving that slot, and immediately fetching the slot out of the map and unwrapping it.

some checkpoint values cause failed syncs

Using 0x85e6151a246e8fdba36db27a0c7678a575346272fe978c9281e13a8b26cdfa68 for example causes an RpcError on fetching the bootstrap, immediately followed by an InvalidPeriod error. It seems other checkpoints in this sync period are failing as well.

helios returns hex value with leading zeros, and golang reports: json: cannot unmarshal hex number with leading zero digits into Go value of type *hexutil.Big

The golang client makes an eth_getBalance request:
{"jsonrpc":"2.0","id":1,"method":"eth_getBalance","params":["0x0143bd0cc24d0e1ea46353be5a7972f42abcb175","latest"]}

helios returns:
{"jsonrpc":"2.0","result":"0x000000000000000000000000000000000000000000000000000824962cde342e","id":1}

then the golang client panics with a fatal error:
json: cannot unmarshal hex number with leading zero digits into Go value of type *hexutil.Big

Other rpc endpoints I checked return:
{"jsonrpc":"2.0","result":"0x824962cde342e","id":1}

complete representation for beacon block body

Currently BeaconBlockBody includes dummy values for proposer_slashings, attester_slashings, and voluntary_exits. This prevents our beacon block verification from working whenever any of these fields are present.

better configuration

Right now, the configuration options leave a lot to be desired. The Config struct basically holds all configuration data, regardless of whether that configuration should be user facing or not. For example, the user probably doesn't care about the slot fork blocks of each hardfork or the genesis validator root, but they almost certainly care about their rpc url or checkpoint block. We should probably separate these concerns.

The interaction between CLI flags and config options is also somewhat poorly structured and could probably use some refactoring. Along with that, I think we should allow users to set config options using environment variables.

implement block tx count rpc methods

Implement eth_getBlockTransactionCountByHash and eth_getBlockTransactionCountByNumber.

To implement this, I suggest looking at how eth_getBlockByHash and eth_getBlockByNumber are implemented. We should be able to service these requests without making any rpc requests, since the ExecutionPayload that we have already has a transactions field which we can check the length of.
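
A rough sketch of that idea with hypothetical, simplified types (the real ExecutionPayload and payload storage differ):

// Serve the tx-count methods straight from the payloads we already store,
// without making any additional rpc requests.
struct ExecutionPayload {
    block_hash: [u8; 32],
    transactions: Vec<Vec<u8>>, // raw encoded transactions
}

fn get_block_transaction_count_by_hash(
    payloads: &[ExecutionPayload],
    hash: [u8; 32],
) -> Option<u64> {
    payloads
        .iter()
        .find(|p| p.block_hash == hash)
        .map(|p| p.transactions.len() as u64)
}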

handle application distribution

We probably want this to be as easy as possible to install. At the very least we should add some CI to cross compile binaries for all major platforms. It might also be a good idea to build something similar to foundryup to automate the install process.

bug: handle full_tx parameter in rpc calls

eth_getBlockByNumber and eth_getBlockByHash both take a full_tx parameter. If it is true, we need to provide the full tx for each transaction in the transaction list rather than just the hash. Right now, we ignore this parameter and assume it is false. This can cause bugs in certain dapps (it seems to break aave).

mismatched types when attempting to import helios and ethers into the same project

Trying the sample code in the README leads me to a few compilation errors such as this:

error[E0308]: mismatched types
   --> /home/naps62/.cargo/git/checkouts/helios-b3bc464b79507b80/e4071fe/execution/src/evm.rs:256:13
    |
255 |         Ok(Some(AccountInfo::new(
    |                 ---------------- arguments to this function are incorrect
256 |             account.balance,
    |             ^^^^^^^^^^^^^^^ expected struct `primitive_types::U256`, found struct `ethers::types::U256`
    |
    = note: struct `ethers::types::U256` and struct `primitive_types::U256` have similar names, but are actually distinct types

I tried overriding ethers to use the same fork specified in this repo, but that doesn't seem to change anything.

replace trie lib with a more permissive lib

Currently, for Merkle tries we use openethereum/parity-ethereum, which is GPL-licensed. Since we want to release this under MIT, we cannot use this library due to its copyleft terms. I think the best replacement here would be paritytech/trie, which is licensed under Apache.

support calling engine api

For integrating with EL full nodes, we should support calling the engine api, specifically engine_forkchoiceUpdatedV1, which Akula uses to follow the chain.

parallelize rpc calls

Right now we are using mutex locks to access the node from the rpc. This is definitely not optimal, since it means we cannot do concurrent reads, and different rpc calls can block each other. In practice, we very rarely take the lock to perform a write and very often perform reads. We need to research what the best mechanism is to make these concurrent reads work.
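
One option worth evaluating is a reader-writer lock; a minimal sketch using tokio's RwLock (the Node here is a hypothetical stand-in):

use std::sync::Arc;
use tokio::sync::RwLock;

struct Node {
    latest_block: u64,
}

// Many rpc handlers can hold read locks concurrently, while the sync task
// still takes an exclusive write lock when it advances the chain.
async fn block_number(node: Arc<RwLock<Node>>) -> u64 {
    node.read().await.latest_block
}

async fn advance(node: Arc<RwLock<Node>>, new_block: u64) {
    node.write().await.latest_block = new_block;
}

#[tokio::main]
async fn main() {
    let node = Arc::new(RwLock::new(Node { latest_block: 0 }));
    advance(node.clone(), 16_000_000).await;
    println!("latest block: {}", block_number(node).await);
}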

bug: shutdown sometimes hangs

Reported by Mason. I can't repro this consistently, but have had this happen in the past. A nice solution might be to follow what some other nodes do, and allow the user to ctrl-c multiple times in a row to force exit the application.
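
A sketch of the multiple-ctrl-c idea using tokio's signal handling (the shutdown() cleanup future here is hypothetical and error handling is elided):

use tokio::signal;

async fn shutdown() {
    // flush caches, close connections, etc.
}

#[tokio::main]
async fn main() {
    // first ctrl-c starts a graceful shutdown
    signal::ctrl_c().await.expect("failed to listen for ctrl-c");
    eprintln!("shutting down... press ctrl-c again to force exit");

    // a second ctrl-c while shutdown is hanging forces the process to exit
    tokio::select! {
        _ = shutdown() => {}
        _ = signal::ctrl_c() => {
            eprintln!("forced exit");
            std::process::exit(1);
        }
    }
}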

Export config::checkpoints

Currently "checkpoints" are not being exported.

So I submitted a PR adding it to the exports, along with any other config modules if needed.

ensure that client never returns stale data

Right now, if the client is unable to advance on the beaconchain, all client fetch methods continue to operate on the stale chain state. This is not optimal, since it could lead to unintended behavior (such as parameterizing a uniswap slippage limit with a stale price quote).

To fix this, we should check the timestamp of the most recent beacon block and throw an error if it is too old when users make requests using the latest or finalized block numbers, or call eth_blockNumber.
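
A minimal sketch of such a staleness check (the threshold and function name are illustrative):

use std::time::{SystemTime, UNIX_EPOCH};

// Reject requests against `latest` if the most recent verified beacon block
// is older than some threshold.
const MAX_HEAD_AGE_SECS: u64 = 5 * 60;

fn check_head_age(latest_block_timestamp: u64) -> Result<(), String> {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before unix epoch")
        .as_secs();
    let age = now.saturating_sub(latest_block_timestamp);
    if age > MAX_HEAD_AGE_SECS {
        Err(format!("chain data is stale: head is {} seconds old", age))
    } else {
        Ok(())
    }
}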

check checkpoint age when starting up

When Helios starts, it needs to fetch a recent beacon blockhash to use as the weak subjectivity checkpoint. If this blockhash is too old (under worst case conditions, too old is ~14 days), it is possible for an attacker to trick Helios into following the wrong chain. While this attack is hard to pull off (it requires millions in capital to fill the staking deposit and withdrawal queues), we should still check the checkpoint age, and if it is too old, throw an error and tell the user how to fetch a good blockhash.

To do this, use the bootstrap fetched here and check bootstrap.header.slot's age using the expected_current_slot and slot_timestamp methods in Consensus.

If it is older than 14 days, throw an error (consensus errors can be found here)
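
A rough sketch of the age check (constants and names are illustrative; the real check should go through the Consensus helpers mentioned above):

// ~14 days expressed in slots, assuming 12-second mainnet slots.
const SECONDS_PER_SLOT: u64 = 12;
const MAX_CHECKPOINT_AGE_SECS: u64 = 14 * 24 * 60 * 60;

fn checkpoint_too_old(bootstrap_slot: u64, expected_current_slot: u64) -> bool {
    let age_slots = expected_current_slot.saturating_sub(bootstrap_slot);
    age_slots * SECONDS_PER_SLOT > MAX_CHECKPOINT_AGE_SECS
}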

bug: dns and too many open files errors

Logs provided by Mason. I am still unable to reproduce locally though.

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
    /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
    /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
    /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:53:02Z INFO  consensus::consensus] applying optimistic update     slot=4756462  confidence=98.05%  delay=00:00:15
[2022-09-23T02:53:05Z WARN  client::rpc] Too many open files (os error 24)
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:53:41Z INFO  consensus::consensus] applying optimistic update     slot=4756465  confidence=99.22%  delay=00:00:18
[2022-09-23T02:53:43Z WARN  client::rpc] Too many open files (os error 24)
[2022-09-23T02:55:12Z INFO  consensus::consensus] applying optimistic update     slot=4756472  confidence=98.83%  delay=00:00:25
[2022-09-23T02:55:22Z WARN  client::rpc] (code: -32000, message: already known, data: None)
[2022-09-23T02:55:24Z INFO  consensus::consensus] applying optimistic update     slot=4756474  confidence=99.41%  delay=00:00:13
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[2022-09-23T02:56:03Z INFO  consensus::consensus] applying optimistic update     slot=4756477  confidence=99.02%  delay=00:00:16
[2022-09-23T02:56:05Z WARN  client::rpc] (code: -32000, message: nonce too low, data: None)
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:56:48Z INFO  consensus::consensus] applying finality update       slot=4756416  confidence=83.01%  delay=00:13:793
[2022-09-23T02:56:49Z INFO  consensus::consensus] applying optimistic update     slot=4756480  confidence=83.01%  delay=00:00:26
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN  client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
   0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
   1: dns error: failed to lookup address information: nodename nor servname provided, or not known
   2: failed to lookup address information: nodename nor servname provided, or not known
Location:
    execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:57:39Z INFO  consensus::consensus] applying optimistic update     slot=4756485  confidence=97.85%  delay=00:00:16
[2022-09-23T02:57:41Z WARN  client::rpc] Too many open files (os error 24)

cache checkpoint hash for future runs

When the node shuts down, we should write a suitable checkpoint block to a file for later use. Otherwise it's possible that people will use very old checkpoints that are outside of the weak subjectivity period.
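
A minimal sketch of saving and loading the checkpoint (path and format are illustrative):

use std::fs;
use std::path::Path;

// Persist the last verified checkpoint on shutdown so the next run can prefer
// it over a potentially very old user-supplied checkpoint.
fn save_checkpoint(dir: &Path, checkpoint: &[u8; 32]) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    let hex: String = checkpoint.iter().map(|b| format!("{:02x}", b)).collect();
    fs::write(dir.join("checkpoint"), hex)
}

fn load_checkpoint(dir: &Path) -> Option<String> {
    fs::read_to_string(dir.join("checkpoint")).ok()
}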

add finalized block tags

Currently, block numbers are represented as Option<u64> with None referring to the latest block. We should change this representation to an enum with variants for a specific number, latest, and finalized.
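
A minimal sketch of the proposed representation (names are illustrative):

enum BlockTag {
    Latest,
    Finalized,
    Number(u64),
}

fn resolve(tag: BlockTag, latest: u64, finalized: u64) -> u64 {
    match tag {
        BlockTag::Latest => latest,
        BlockTag::Finalized => finalized,
        BlockTag::Number(n) => n,
    }
}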

refactor core consensus

Right now the core machinery inside consensus/consensus.rs isn't quite identical to the spec. I chose to rewrite it at the time since I didn't like how the spec structures it, but I think for safety it will be best to perfectly mirror the spec to avoid introducing any vulnerabilities.

implement logging rpc methods

We currently don't have any of the rpc methods related to fetching logs implemented. There seem to be a lot of them, and I'm not sure which ones dapps use the most (it seems dapps are using logs less and less nowadays). I think the best plan is to use a bunch of dapps, keep track of which of the log methods they use, and implement those.

improve error handling

We need to do a pretty thorough audit of the error handling throughout the codebase.

  • Make sure that we never do a naked unwrap which will crash the application
  • Never return Err in main or the application crashes
  • Gracefully handle rpc servers returning malformed data
  • Figure out the best behavior for when we catch the rpc lying
    • We could just gracefully handle this and return an rpc error and emit a warning log
    • We might also want to actually panic and crash the application since it is important for the user to immediately know
  • Use specific error types instead of generic eyre

use access lists to parallelize state reads in calls

Certain calls touch large amounts of state across many accounts. Most notably, the Uniswap frontend makes a multicall to read the user's balance of every single token in the default tokenlist, which ends up touching the state of ~200 contracts. When processing this call, the node ends up needing to grab proofs for each different token, which it fetches sequentially as needed during the call. This leads to the call being processed incredibly slowly.

We can use access lists to significantly speed up these kinds of calls. To start, we fetch the access list for the call using the eth_createAccessList rpc call. From there, we can dispatch an eth_getProof call for each state access in parallel. As the calls return, we can store them. Once all calls have returned, we can perform the evm computation using the stored values.

One important edge case that we need to handle here is if the evm accesses state that isn't in our storage data structure. This can happen in cases where eth_createAccessList is not deterministic (block hashes can be used to do this). In these cases, we still need to fetch the proof via rpc.
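
A rough sketch of the parallel prefetch using hypothetical stand-in types (the real helios types and rpc calls differ):

use futures::future::join_all;
use std::collections::HashMap;

type Address = [u8; 20];
struct Proof;

async fn get_proof(address: Address) -> (Address, Proof) {
    // in the real implementation this would be an eth_getProof rpc call
    (address, Proof)
}

// Fetch proofs for every account in the access list concurrently, then fall
// back to on-demand fetching if the evm touches state not in the map.
async fn prefetch_state(access_list: Vec<Address>) -> HashMap<Address, Proof> {
    let futs = access_list.into_iter().map(get_proof);
    join_all(futs).await.into_iter().collect()
}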

Panic

Finally got it running only to now get this output:

thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26

create top level crate with cleaner exports

We probably want a single top level crate that re-exports most of the functionality, while hiding a lot of the internals better. This means that most people who want to consume helios can then just import one crate rather than a bunch of the required crates in this workspace. We can also hide a lot of functionality that users don't need here. If anyone does happen to need it (such as some of the internal consensus stuff), they are free to import any of the individual packages.

add wasm bindings

We may need to experiment with removing references to the filesystem in the checkpoint saving logic to make this work. It also looks like ethers-rs is not properly compiling to wasm at the moment. BLST may need some messing around with as well. Overall it's a decent sized endeavor, but it will really help deliver on the portability front.

The ethers-rs issue is tracked at gakonst/ethers-rs#1824

Missing RPC documentation

We are currently missing a central location to view all RPC methods implemented by Helios. The rpc.rs file defines the methods but we do not have documentation.

fetch beaconchain data from p2p network

This one might be a stretch, but if we can fetch all beaconchain data (updates and beacon blocks) from the p2p network, we could entirely remove the need for an eth2 rpc. Nimbus nodes already gossip all the light client data we need.

investigate other chains that can be added to helios

Other EVM chains may be able to work with Helios. Some chains may have viable light sync protocols that we can swap in, and some L2s may be workable. We should look into the following.

  • Avalanche
  • Polygon
  • BSC
  • Optimism
  • Arbitrum
  • zkSync
