License: GNU General Public License v3.0



Parity Bridges Common

This is a collection of components for building bridges.

These components include Substrate pallets for syncing headers, passing arbitrary messages, as well as libraries for building relayers to provide cross-chain communication capabilities.

Three bridge nodes are also available. The nodes can be used to run test networks which bridge other Substrate chains.

🚧 The bridges are currently under construction - a hardhat is recommended beyond this point 🚧


Installation

To get up and running you need both stable and nightly Rust. Rust nightly is used to build the WebAssembly (WASM) runtime for the node. You can configure the WASM support as follows:

rustup install nightly
rustup target add wasm32-unknown-unknown --toolchain nightly

Once this is configured you can build and test the repo as follows:

git clone https://github.com/paritytech/parity-bridges-common.git
cd parity-bridges-common
cargo build --all
cargo test --all

You can also build the repo with the Parity CI Docker image:

docker pull paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408
mkdir ~/cache
chown 1000:1000 ~/cache # processes in the container run as the "nonroot" user with UID 1000
docker run --rm -it -w /shellhere/parity-bridges-common \
                    -v /home/$(whoami)/cache/:/cache/    \
                    -v "$(pwd)":/shellhere/parity-bridges-common \
                    -e CARGO_HOME=/cache/cargo/ \
                    -e SCCACHE_DIR=/cache/sccache/ \
                    -e CARGO_TARGET_DIR=/cache/target/  paritytech/ci-unified:bullseye-1.77.0-2024-04-10-v20240408 cargo build --all
#artifacts can be found in ~/cache/target

If you want to reproduce other steps of the CI process you can use the following guide.

If you need more information about setting up your development environment, Substrate's Installation page is a good resource.

High-Level Architecture

This repo has support for bridging foreign chains together using a combination of Substrate pallets and external processes called relayers. A bridge chain is one that is able to follow the consensus of a foreign chain independently. For example, consider the case below where we want to bridge two Substrate based chains.

+---------------+                 +---------------+
|               |                 |               |
|     Rococo    |                 |    Westend    |
|               |                 |               |
+-------+-------+                 +-------+-------+
        ^                                 ^
        |       +---------------+         |
        |       |               |         |
        +-----> | Bridge Relay  | <-------+
                |               |
                +---------------+

The Rococo chain must be able to accept Westend headers and verify their integrity. It does this by using a runtime module designed to track GRANDPA finality. Since two blockchains can't interact directly they need an external service, called a relayer, to communicate. The relayer will subscribe to new Westend headers via RPC and submit them to the Rococo chain for verification.
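The relay loop above can be sketched in a few lines. This is a toy, synchronous model with hypothetical in-memory types (the real relayer uses async RPC subscriptions and submits finality proofs, not bare headers):

```rust
// A minimal sketch of a header relay: read finalized headers from a source
// chain and submit the ones the target chain is missing, in order.
// All types here are illustrative stand-ins, not the repo's actual types.

#[derive(Clone, Debug)]
pub struct Header {
    pub number: u64,
}

/// Stand-in for an RPC client of the source chain (e.g. Westend).
pub struct SourceChain {
    pub finalized: Vec<Header>,
}

/// Stand-in for the on-chain light client on the target chain (e.g. Rococo).
pub struct TargetChain {
    pub best_imported: Option<Header>,
}

impl TargetChain {
    /// Accept a header only if it directly extends the best imported one.
    pub fn submit(&mut self, header: Header) -> Result<(), &'static str> {
        if let Some(best) = &self.best_imported {
            if header.number != best.number + 1 {
                return Err("non-contiguous header");
            }
        }
        self.best_imported = Some(header);
        Ok(())
    }
}

/// One relay iteration: submit every source header the target is missing.
pub fn relay_once(source: &SourceChain, target: &mut TargetChain) -> usize {
    let from = target.best_imported.as_ref().map(|h| h.number + 1).unwrap_or(0);
    let mut submitted = 0;
    for header in source.finalized.iter().filter(|h| h.number >= from) {
        if target.submit(header.clone()).is_ok() {
            submitted += 1;
        }
    }
    submitted
}

fn main() {
    let source = SourceChain { finalized: (0..5).map(|n| Header { number: n }).collect() };
    let mut target = TargetChain { best_imported: None };
    let n = relay_once(&source, &mut target);
    println!("submitted {n} headers; best = {:?}", target.best_imported);
}
```

A second call of `relay_once` with the same source submits nothing, which is the idempotence a relayer needs when several instances race to submit the same headers.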

Take a look at the Bridge High Level Documentation for a more in-depth description of the bridge interaction.

Project Layout

Here's an overview of how the project is laid out. The main bits are the bin, which is the actual "blockchain", the modules which are used to build the blockchain's logic (a.k.a the runtime) and the relays which are used to pass messages between chains.

├── modules                  // Substrate Runtime Modules (a.k.a Pallets)
│  ├── beefy                 // On-Chain BEEFY Light Client (in progress)
│  ├── grandpa               // On-Chain GRANDPA Light Client
│  ├── messages              // Cross Chain Message Passing
│  ├── parachains            // On-Chain Parachains Light Client
│  ├── relayers              // Relayer Rewards Registry
│  ├── xcm-bridge-hub        // Multiple Dynamic Bridges Support
│  └── xcm-bridge-hub-router // XCM Router that may be used to Connect to XCM Bridge Hub
├── primitives               // Code shared between modules, runtimes, and relays
│  └── ...
├── relays                   // Application for sending finality proofs and messages between chains
│  └── ...
└── scripts                  // Useful development and maintenance scripts

Running the Bridge

Apart from the live Rococo <> Westend bridge, you may spin up local networks and see how the bridge works locally. More details may be found in this document.


parity-bridges-common's Issues

Forbid appending blocks to forks that are competing with finalized chain

follow up of #128 (comment)

Let's say we need 3-of-5 signatures to treat a block as finalized. ValidatorsSet1(validator1, validator2, validator3, validator4, validator5) has generated headers:

(H1, validator1) -> (H2, validator2) -> (H3, validator2)
                 \-> (H2', validator3) -> (H3', validator3) -> (H4', validator3) -> (H5', validator3)

The header H2 signals an authorities change to the 2-of-3 set ValidatorsSet2(validator6, validator7, validator8). Then headers H4(validator3) and H5(validator4) are imported, updating the best finalized header to H2 and activating ValidatorsSet2.

The current implementation allows subsequent import of (H6', validator1) + (H7', validator2) + (H8', validator3) => we'll end up with an actually unfinalized header (H6') and reject the proper H6. We should forbid that. This should be possible without any additional storage reads - we may use the same ancestry we're using when computing finality votes.

Related bonus issue: when we finalize some block and the best block is not a descendant of it, we must update the best block to that finalized block. Ideally, it should be best(all descendants of finalized block), but that would require maintaining additional structs in storage. Best would be updated anyway when the next header is imported.
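The finality rule in the example above can be modelled with a toy function: a block counts as finalized once at least `threshold` distinct validators have signed descendants of it on the same fork. This is a simplification for illustration, not the pallet's actual ancestry walk:

```rust
use std::collections::HashSet;

/// One fork as (block number, signing validator id) pairs, in order.
/// Returns the highest block number that `threshold` distinct validators
/// have built on top of (a toy model of the finality rule in the example).
pub fn best_finalized(chain: &[(u64, u32)], threshold: usize) -> Option<u64> {
    let mut best = None;
    for (i, (number, _)) in chain.iter().enumerate() {
        // Distinct validators that signed strict descendants of this block.
        let descendants: HashSet<u32> = chain[i + 1..].iter().map(|(_, v)| *v).collect();
        if descendants.len() >= threshold {
            best = Some(*number);
        }
    }
    best
}

fn main() {
    // H1..H5 from the example: 3-of-5 finalizes H2 but not H3.
    let fork = [(1, 1), (2, 2), (3, 2), (4, 3), (5, 4)];
    println!("best finalized: {:?}", best_finalized(&fork, 3));
}
```

On the example fork this yields H2: validators 2, 3 and 4 have built on H2, while only 3 and 4 have built on H3, which is below the 3-of-5 threshold.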

Prepare actual weights for runtime transactions

In #69 I had to switch to Substrate master to be able to use GrandpaJustification::decode_and_verify_finalizes() from builtin. It turned out that the main change is that all runtime calls (transactions) now require annotated weights. So before considering the PoA -> Substrate headers sync finished, we need to prepare actual weights for all calls. I've started to do that, but it's quite a complicated process + as some runtime changes are still planned (I'm mostly talking about #62 and maybe #38), it isn't the right moment to do that now.

The main problem is that we can't compute the weight of a header import call by looking at the header itself. The main source of uncertainty is finality computations - we may need to read an arbitrary number of headers (and write some updates after that) if validators haven't finalized headers for too long. At the beginning I had some ideas that we'd cache computation results in the storage, so we don't need to recompute them on every block, but I got stuck on something - so most probably this is impossible, and we'll need storage reads anyway :/ But it may be worth looking at again.

That said, most probably, we need to introduce some amortized weight for import calls. This may also affect some params/default constants in the relay, because the weight may be too big to fit into a 'normal' block.

So I'm proposing to mark all methods with #[weight=0] (or whatever you suggest) and change that afterwards.

Unify headerchain of ethereum and substrate

The idea is to have a common low-level module to track header chain. That module could be instantiated for different chains and hopefully re-used for other projects as well (like BTC bridge).

Rough plan:

  • verify_events(at_hash, Vec<event>, proof_data) method
  • verify_transactions(at_hash, Vec<tx>, proof_data) method
  • Incentivisation hooks, but by default not incentivised.
  • Ideally a way to send unsigned transactions, but it should be configurable.
  • We can deploy multiple instances of the same module with different parameters.
    Governance to control the status of the module (accepting headers or not, etc).
  • Has some notion of finality.
  • Prune old headers, similar to how ethereum bridge does that.
  • Hook for pruned headers (to allow some extra indexing in case we want to build a CHT or something in the future).
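The plan above translates naturally into a Rust trait that each chain can implement. Everything below is a hypothetical sketch of such an interface, with placeholder types, not the eventual pallet API:

```rust
// Sketch of a common header-chain interface, following the rough plan above.
// All names and types are illustrative placeholders.
pub type Hash = [u8; 32];
pub struct Event(pub Vec<u8>);
pub struct Transaction(pub Vec<u8>);
pub struct Proof(pub Vec<u8>);

pub trait HeaderChain {
    /// Verify that `events` were emitted in block `at_hash`, using `proof`.
    fn verify_events(&self, at_hash: Hash, events: &[Event], proof: &Proof) -> bool;
    /// Verify that `txs` were included in block `at_hash`, using `proof`.
    fn verify_transactions(&self, at_hash: Hash, txs: &[Transaction], proof: &Proof) -> bool;
    /// Some notion of finality.
    fn is_finalized(&self, at_hash: Hash) -> bool;
    /// Prune headers older than `keep_recent` blocks; returns the pruned
    /// hashes so a hook can do extra indexing (e.g. for a future CHT).
    fn prune(&mut self, keep_recent: u64) -> Vec<Hash>;
}

/// A do-nothing instance, only to show the trait can be implemented per chain
/// (Ethereum, Substrate, maybe BTC) and deployed multiple times.
pub struct Stub;
impl HeaderChain for Stub {
    fn verify_events(&self, _: Hash, _: &[Event], _: &Proof) -> bool { false }
    fn verify_transactions(&self, _: Hash, _: &[Transaction], _: &Proof) -> bool { false }
    fn is_finalized(&self, _: Hash) -> bool { false }
    fn prune(&mut self, _: u64) -> Vec<Hash> { Vec::new() }
}

fn main() {
    let mut chain = Stub;
    println!("finalized? {}", chain.is_finalized([0u8; 32]));
    println!("pruned: {}", chain.prune(10).len());
}
```

Incentivisation hooks, unsigned-transaction support and governance control would sit on top of this as associated types or configuration, which is what makes multiple differently-parameterised instances possible.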

Improve Test Readability

We should strive to make tests more readable by abstracting common setup/teardown stuff into helper functions. We should also try and be explicit about what is being checked in a test with the use of intermediate variables. There's some room for improvement in places like modules/ethereum/src/{finality, import}.rs, but we should be checking the entire codebase.

Clean tests are not limited to the things I've suggested here, but it's just what I've noticed we could work on with the existing tests.

Compile PoA relay

It seems that the PoA <-> Substrate relay code currently does not compile on CI; we should get that sorted out.

Check benchmarks compilation on CI

#136 brings the first benchmark and there are some others planned. We need to run cargo check on CI to verify that benchmarks still compile - they're fragile, especially when we touch test code.

Rework Relay CLI

Currently the CLI is built with clap and the way we handle params has, imho, two disadvantages:

  1. There is a bunch of back-and-forth and stringified parameters between cli.yml and main.rs
  2. We start with default values for all parameters and only override them if they are provided on the CLI. Some parameters, however, should be mandatory (like the eth-contract address). The defaults are configured for the --dev chain, but to be fair I'd prefer to have them required instead of learning one-by-one which are misconfigured.

I'd suggest to rewrite the CLI to structopt.
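Point 2 is really about failing fast on missing mandatory parameters instead of silently falling back to --dev defaults. With structopt that falls out of the derive (a non-Option field without a default is required); the same idea is modelled below without any crate so the sketch stays self-contained. Field names are illustrative:

```rust
// Std-only sketch: a mandatory parameter is an error when absent, an
// optional one gets an explicit, documented default.
pub struct RelayParams {
    pub eth_contract: String, // mandatory: no sensible default exists
    pub sub_host: String,     // optional: documented default
}

pub fn parse(args: &[(&str, &str)]) -> Result<RelayParams, String> {
    let get = |key: &str| {
        args.iter()
            .find(|(name, _)| *name == key)
            .map(|(_, value)| value.to_string())
    };
    Ok(RelayParams {
        eth_contract: get("eth-contract").ok_or("--eth-contract is required")?,
        sub_host: get("sub-host").unwrap_or_else(|| "localhost".to_string()),
    })
}

fn main() {
    match parse(&[("sub-host", "node.example")]) {
        Ok(_) => println!("parsed"),
        Err(e) => println!("error: {e}"),
    }
}
```

Typed struct fields also remove the stringified round-trips of point 1: the parameters exist exactly once, as a Rust type, rather than in both cli.yml and main.rs.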

Tracking Issue from Substrate Repo for Sub2Sub bridge components

I'm migrating this from the Substrate repo for the sake of continuity. I think that these TODOs should be broken up into individual issues at some point, but since the design is still in a bit of flux that can wait.

Original PR TODOs

This PR is going to track the development of #1850. Development will be done on other branches and PR'd into this one. This is being done in order to not have half baked code in Substrate, as well as making it easier for others to review.

Some of the TODOs are:

  • Have a module that can act as a light client for another chain (not necessarily do anything with that data though)
  • Verify GRANDPA finality proofs on-chain
  • Verify block ancestry proofs on-chain
  • Have an RPC for actors to submit claims from other chains
  • Message delivery and acknowledgement infrastructure
  • Default message handler for the receiving end of the bridge

Additionally, a bridge relay node will need to be implemented, which will need to do the following:

  • Relay finalized headers to bridged chains
  • Relay outbound messages to bridged chains
  • Relay message delivery acknowledgements to bridged chains

The bridge relay node issues could probably be tracked in a separate issue as they are fairly independent of the module itself.

Use compact trie proofs for both PoA receipts and transactions

Right now the whole set of a block's PoA transaction receipts and transactions is used to prove that a given transaction/receipt has been included in a PoA block. We could make this proof smaller by only providing the affected trie nodes.

This would make PoA -> Substrate bridge testing a bit harder, so imo this could be done after initial launch.

Make relay (easy) configurable for other chains

original comment: #58 (comment)

The problem is that we can't (at least I do not know how atm) make the relay generic over runtime/chain. That's why if we're going to point it to another chain, it would need to be reconfigured and rebuilt before any transactions can be submitted.

The only way I could think of is to extract everything that is runtime/chain specific (including the crypto type) into a separate file, like this one. This also must be noted in the crate's readme.

Relay healthcheck endpoint

It would be good if the relay was able to report its health. This is useful for dockerized deployments or simply for monitoring.

Related: #141

Detect and report validators misbehavior

Imagine #34 in the unsigned transactions context - we can't penalize a validator for spamming the tx pool and blocks with good-looking 'bad' headers AND the validator can simply not relay these blocks to other PoA peers => it'll never be reported as malicious using builtin PoA mechanisms. So we need to detect such validators in the bridge and report them back to the PoA chain.

The easiest fix (suggested by @tomusdrw ): (c) "...should basically just make sure that every header seen on the bridge is gossiped in the network and that should be enough."

Repo settings have been changed.

Repository's settings have been updated in the following ways:

  • protected branch set to "master"
  • status checks must pass in order for a pull request to be merged

other settings:

branch: "master",
required_status_checks: {
  contexts: ["cla"],
  strict: false
},
enforce_admins: true,
required_pull_request_reviews: null,
restrictions: null

For more information, see octokit documentation

This issue was created for informational purposes only. Feel free to close this issue at any time.

Regards,
cla-bot-2020

Relay Substrate headers using Offchain Worker

To replace header relayers we could send transactions from offchain workers. This is usable for one-direction submission of headers for:

  • Substrate->Substrate (both signed and unsigned)
  • Substrate->PoA (arguable if worth it)

The main goal of this task would be to provide:

  1. A pallet that wraps headers into chain-specific transactions + nonce management (for signed)
  2. Configurable gas pricing/fee.
  3. An offchain worker implementation submitting such transactions (store status in local storage, handle missed headers?)
  4. Generic RPC support (different formats for Substrate and Eth)
  5. Making transaction format generic (so that we can support multiple chains).
  6. Potentially abstract into a crate (so that can be used separately from the runtime).

Only accept headers-in-unsigned-transactions when we know we're almost at the tip of PoA chain

extracted point(4) from #26, because it'll need another round of research:
...
4) one other restriction that we may introduce: we know approximate block time of PoA (it is stepDuration...stepDuration * maximumEmptySteps) - what if we only accept unsigned transactions when we believe we're at the tip of PoA chain (i.e. when timestampModule.timestamp - BestKnownPoAHeader.timestamp < N * approximateBlockTime)?
...

We need to be cautious when computing the approximate block time, because (c): "there were some issues recently on a network (running some super old PoA), where after an upgrade the current step and "expected step" did not really fully match. I.e. due to some time drifts, validators being down, etc. the step calculated from stepDuration did not match the actual step in the network. Maybe if we make it relative, i.e. using the step from the last known block as a reference point, this could work"
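The acceptance rule from the quoted point (4) is a one-line timestamp comparison. A sketch, with parameter names mirroring the quote and `n` as a tolerance constant (all names hypothetical):

```rust
// Only accept an unsigned header-submission transaction when we believe
// we're near the tip of the PoA chain, i.e. when
// timestampModule.timestamp - BestKnownPoAHeader.timestamp < n * approxBlockTime.
pub fn near_tip(
    our_timestamp: u64,      // timestampModule.timestamp
    best_poa_timestamp: u64, // BestKnownPoAHeader.timestamp
    approx_block_time: u64,  // derived from stepDuration, ideally relative to the last known block
    n: u64,                  // tolerance, in blocks
) -> bool {
    // saturating_sub guards against a best header timestamp slightly in our future
    our_timestamp.saturating_sub(best_poa_timestamp) < n * approx_block_time
}

fn main() {
    println!("close to tip:  {}", near_tip(100, 90, 5, 3)); // lag 10 < 15
    println!("far from tip:  {}", near_tip(100, 50, 5, 3)); // lag 50 >= 15
}
```

Making `approx_block_time` relative to the last known block, as the quote suggests, would absorb the step-drift issue: the reference point moves with the chain instead of being computed from genesis.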

Test BridgeStorage trait

There are now two implementors of the Storage trait in the PoA module: InMemoryStorage and BridgeStorage. When the module was created, I used in-memory storage to embed everything in parity-ethereum. #45 adds more logic to some storage methods && there are no actual tests for it.

So we either need to: (1) add tests for BridgeStorage, or (2) change all existing tests to use BridgeStorage.

Bridge (Relay) state dashboard

Built for instance using https://redash.io/
Might require exposing some server endpoints from the relayer, or adding some reporting to things like Prometheus or Telemetry.

Things that would be good to see:

  • Latest PoA Block (Foreign Chain)
  • Latest Substrate Block (Home Chain)
  • Latest foreign chain block on home chain (and vice-versa)
  • Last relayed block (transaction sent) (both directions)
  • Uptime
  • Any internal relayer metrics we would like to see

RUSTSEC-2018-0006: Uncontrolled recursion leads to abort in deserialization

Uncontrolled recursion leads to abort in deserialization

Details
Package yaml-rust
Version 0.3.5
URL chyh1990/yaml-rust#109
Date 2018-09-17
Patched versions >= 0.4.1

Affected versions of this crate did not prevent deep recursion while
deserializing data structures.

This allows an attacker to make a YAML file with deeply nested structures
that causes an abort while deserializing it.

The flaw was corrected by checking the recursion depth.

See advisory page for additional details.

Steps required to setup and operate PoA -> Substrate bridge

Get all the required things to run PoA -> Substrate one way bridge (with #26).

Stuff like:

  • Required machines and clients (get in touch with devops team)
  • Monitoring (and some basic failure detection, possibly post in the riot channel)
  • Status page
  • Westend Runtime Upgrade process (who & how)
  • Clarify API required from the PoA header chain module (most likely verify_event and verify_transaction. Maybe verify_storage?).

Share data reads when several headers are imported at once

Now when we're importing several PoA headers in a single call, every header import starts from scratch - i.e. we're creating ImportContext, ValidatorsSet and FinalityVotes (see #116). But we actually may reuse FinalityVotes from the parent header's import (if the parent is imported in the same call) + construct ImportContext and ValidatorsSet after the parent is imported (without any additional reads). This will decrease the number of redundant storage reads significantly, because the relay is now configured to submit 32 headers in a single transaction.
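The shape of the optimisation is a per-call cache keyed by header hash: only parents that were not imported in the same call cost a storage read. A toy model (types and counters are stand-ins for ImportContext/FinalityVotes, not the pallet's real structures):

```rust
use std::collections::HashMap;

// Toy stand-in for the finality state carried from parent to child.
#[derive(Clone, Default)]
pub struct FinalityVotes {
    pub depth: u32,
}

/// Import a batch of (hash, parent_hash) headers; returns the number of
/// storage reads paid. Parents imported earlier in the same call are served
/// from the in-call cache for free.
pub fn import_batch(headers: &[(u64, u64)]) -> u32 {
    let mut cache: HashMap<u64, FinalityVotes> = HashMap::new();
    let mut storage_reads = 0;
    for (hash, parent) in headers {
        let votes = match cache.get(parent) {
            // Parent imported in this very call: extend its votes for free.
            Some(parent_votes) => FinalityVotes { depth: parent_votes.depth + 1 },
            // Parent only exists in storage: pay for the read.
            None => {
                storage_reads += 1;
                FinalityVotes { depth: 1 }
            }
        };
        cache.insert(*hash, votes);
    }
    storage_reads
}

fn main() {
    // 32 contiguous headers, as the relay submits them: one storage read total.
    let batch: Vec<(u64, u64)> = (1..=32).map(|n| (n, n - 1)).collect();
    println!("storage reads: {}", import_batch(&batch));
}
```

For the relay's 32-header batches this turns up to 32 parent lookups into a single one, which is where the "significant" saving comes from.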

Prune old headers with separate transaction

And potentially reward submitter for pruning (if prune transaction is signed? or should we only accept signed pruning transactions?)

PART 1: Problem with unsigned transactions

  1. authority generates multiple valid headers at its slot;
  2. (most of) headers are 'mined' and occupy storage;
  3. we have potentially unlimited number of entries in the HeadersByNumber map;
  4. later, when one of these headers is finalized, we may end up exceeding the block (weight? fee?) limit because we need to prune these headers here => a transaction with a valid header submitted by a good validator is not accepted => the sync is stuck.

PART 2: Problem with signed transactions
The problem is the same, though a malicious authority will pay for its actions. But the sync will get stuck anyway.

PART 3: Potential solution
Do not intermix headers sync with headers pruning - there should be another API entry for pruning obsolete headers. Probably we need to reward for calling this method?

Another solution would be to disable pruning entirely.
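Part 3 can be sketched as a standalone pruning entry point with a hard cap on work per call, so the pruning transaction's weight stays bounded regardless of how many headers a malicious authority has piled up. The map mirrors HeadersByNumber; all names are illustrative, and a real pallet might reward the caller per pruned header:

```rust
use std::collections::BTreeMap;

/// Prune headers with numbers below `prune_below`, removing at most
/// `max_headers_per_call` headers per call. Returns how many were pruned;
/// leftover stale headers are picked up by the next pruning call.
pub fn prune_headers(
    headers_by_number: &mut BTreeMap<u64, Vec<u64>>, // number -> header hashes
    prune_below: u64,
    max_headers_per_call: usize,
) -> usize {
    let mut pruned = 0;
    let stale: Vec<u64> = headers_by_number.range(..prune_below).map(|(n, _)| *n).collect();
    for number in stale {
        if pruned == max_headers_per_call {
            break; // budget exhausted; resume in a later call
        }
        let hashes = headers_by_number.get_mut(&number).expect("collected above");
        while pruned < max_headers_per_call && hashes.pop().is_some() {
            pruned += 1;
        }
        if hashes.is_empty() {
            headers_by_number.remove(&number);
        }
    }
    pruned
}

fn main() {
    let mut headers = BTreeMap::from([(1, vec![10, 11]), (2, vec![20]), (9, vec![90])]);
    let pruned = prune_headers(&mut headers, 5, 2);
    println!("pruned {pruned}, remaining numbers: {:?}", headers.keys().collect::<Vec<_>>());
}
```

Decoupled like this, a flood of competing headers at one number can never push a header-import transaction over the block limit, so the sync cannot get stuck on pruning.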

Support multiple instances of Bridge pallet

It should be possible to have multiple instances of bridge pallet in the runtime. For instance to support Kovan + Custom PoA testnet.

Features:

  • Runtime flags to enable / disable particular instance of the bridge ("admin" rights).
  • Relay to specify which instance it should be connected to
  • (?) A BridgeHub pallet to list available bridges and their metadata (to use in UIs for instance).

Alternatively we can have a BridgeHub pallet being a proxy to individual deployments (so the access control is done in BridgeHub for instance)

May involve tweaking Relay to speak to the right instance.

Gas Pricing helper tool

Since the gas price situation in some networks can change drastically, I think we should introduce a separate tool that would make sure transactions get through in a timely fashion.
I imagine that this can be implemented separately; the main responsibilities of the tool would be:

  1. Monitor transactions sent from a particular account.
  2. Re-send the same transactions with higher gas price in case they are not getting through.

Obviously that will require the tool to have access to the private key. The gas price bumping strategy should be configurable.

See for some context: #65 (comment)
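A configurable bumping strategy for such a tool could be as small as the sketch below: resend a stuck transaction with the same nonce and a higher gas price, up to a cap. The multiplier and cap are illustrative knobs, not values from the relay:

```rust
// Hypothetical gas-price bumping strategy: geometric increase with a cap.
pub struct BumpStrategy {
    pub multiplier_percent: u64, // e.g. 125 = +25% per retry
    pub max_gas_price: u64,      // refuse to bid above this
}

impl BumpStrategy {
    /// Next gas price to resend with, or None once the cap would be exceeded.
    pub fn next_price(&self, current: u64) -> Option<u64> {
        let bumped = current * self.multiplier_percent / 100;
        (bumped <= self.max_gas_price).then(|| bumped)
    }
}

fn main() {
    let strategy = BumpStrategy { multiplier_percent: 125, max_gas_price: 1_000 };
    let mut price = 100;
    // Resending with the same nonce and a higher price replaces the stuck tx.
    while let Some(next) = strategy.next_price(price) {
        price = next;
        println!("resending at gas price {price}");
    }
}
```

The cap is what keeps the tool safe to run unattended with access to a private key: however wild the network gets, it never bids above a configured ceiling.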

Support forced GRANDPA validators change

There's no way currently to support that in light clients - we need some way to trust that handoff. So we need to stop syncing (that fork) when we see a forced change.

CLI to relay transaction proofs

A simple CLI tool that the user can run for themselves that will:

  • Monitor transactions from address A to a pre-defined locking contract address B
  • Each time the transaction is made it will construct an inclusion proof and submit it to Substrate chain (note it has to wait for the headers to be available first).
  • The transaction is sent from a pre-configured (and topped up) substrate account.

In the future we might extend this to monitor multiple ETH addresses instead.
(so you can run a relayer for others)

Explore sync-headers-using-unsigned-transactions option in PoA bridge

What I learned from talks with Tomek: switching from signed transactions to unsigned would allow us to deduplicate on txpool/network (since all txs for block B will have the same hash H). This, however, brings some problems:

  1. we can't reward submitters for providing correct blocks - this could be solved by switching to reward-for-messages scheme, because to sync messages submitters would need to sync headers first;
  2. we can't submit several blocks in single transaction - the hashes would be different.

I'd like to explore this option && see:

  1. how many sync-header transactions we could verify per second - unsigned transactions need to be verified before they are accepted to the tx pool;
  2. whether using unsigned transactions would lead to slower sync (because we can't provide multiple blocks in single tx), or not.
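The deduplication property behind the idea can be shown with a toy encoding: an unsigned submission is a pure function of the header, so every relayer produces identical bytes (hence an identical hash), while a signed one also carries signer and nonce, so each relayer's copy is distinct. The encodings below are fake, for illustration only:

```rust
use std::collections::HashSet;

/// Unsigned submission: depends only on the header bytes.
pub fn unsigned_tx(header: &[u8]) -> Vec<u8> {
    header.to_vec()
}

/// Signed submission: header plus per-relayer signer id and nonce.
pub fn signed_tx(header: &[u8], signer: u8, nonce: u64) -> Vec<u8> {
    let mut tx = header.to_vec();
    tx.push(signer);
    tx.extend_from_slice(&nonce.to_le_bytes());
    tx
}

/// Number of distinct transactions the pool would keep (dedup by content,
/// standing in for dedup by transaction hash).
pub fn pool_size(txs: &[Vec<u8>]) -> usize {
    txs.iter().cloned().collect::<HashSet<_>>().len()
}

fn main() {
    let header = b"header-B";
    let unsigned: Vec<Vec<u8>> = (0..3).map(|_| unsigned_tx(header)).collect();
    let signed: Vec<Vec<u8>> = (0u8..3).map(|i| signed_tx(header, i, 0)).collect();
    println!("unsigned pool: {}, signed pool: {}", pool_size(&unsigned), pool_size(&signed));
}
```

This also makes point 2 above concrete: a batch of several headers in one unsigned transaction would no longer be a pure function of a single header, so the identical-hash dedup is lost.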

Eth2Sub transactions are removed from txpool

It seems that after the last Substrate update there's a problem with eth2sub syncing. Some transactions are removed from the tx pool with an ExhaustsResources error. The relay currently does not watch submitted transactions => because nothing is updated in Substrate state, it waits 5 minutes before 'restarting' itself and resubmitting the same headers again.

I'm not sure, if that's the problem with:

  1. our configuration - maybe it works that way with #[weight = 0]? something is missed from the runtime;
  2. some already-fixed Substrate issue - probably ref update will solve this;
  3. not-yet-fixed Substrate issue.

In any case, dropping transactions from the pool with that error seems strange - the pool is nearly empty (there are at most 4 signed transactions there during eth2sub), so which resources are exhausted?

Required for #91


Relay can't detect if node is offline

Part 1: It looks like after we migrated to a shared client, we also lost the ability to receive transport-related RPC errors. I.e. if you start the relay with both nodes online, but then stop and start some node again, the relay will hang forever, because transport errors are lost somewhere in background threads.

Part 2 (depends on how we resolve part 1): if stopping a node spoils the jsonrpsee::Client forever, we'll need to create a new instance. I've started implementing that before realizing that all transport errors are lost. Intermediate results are in this branch. I'll leave it there for now.
