hyperlane-xyz / hyperlane-monorepo
The home for Hyperlane core contracts, sdk packages, and other infrastructure

Home Page: https://hyperlane.xyz

License: Other


hyperlane-monorepo's Introduction

Hyperlane


Versioning

Note: this is the branch for Hyperlane v3.

V2 is deprecated in favor of V3. The code for V2 can be found in the v2 branch. For V1 code, refer to the v1 branch.

Overview

Hyperlane is an interchain messaging protocol that allows applications to communicate between blockchains.

Developers can use Hyperlane to share state between blockchains, allowing them to build interchain applications that live natively across multiple chains.

To read more about interchain applications, how the protocol works, and how to integrate with Hyperlane, please see the documentation.

Working on Hyperlane

Foundry

First ensure you have Foundry installed on your machine.

Run the following to install foundryup:

curl -L https://foundry.paradigm.xyz | bash

Then run foundryup to install forge, cast, anvil and chisel.

foundryup

Check out the Foundry Book for more information.

Node

This repository targets Node v20. We recommend using nvm to manage your node version.

To install nvm:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

To install version 20:

nvm install 20
nvm use 20

With the included .nvmrc file, your node version should switch automatically.

Workspaces

This monorepo uses Yarn Workspaces. Installing dependencies, building, testing, and running prettier for all packages can be done from the root directory of the repository.

  • Installing dependencies

    yarn install
  • Building

    yarn build

If you are using VSCode, you can launch the multi-root workspace with code mono.code-workspace, install the recommended workspace extensions, and use the editor settings.

Logging

The TypeScript tooling uses Pino-based logging, which outputs structured JSON logs by default. The verbosity level and style can be configured with environment variables:

LOG_LEVEL=DEBUG|INFO|WARN|ERROR|OFF
LOG_FORMAT=PRETTY|JSON
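As a rough sketch of how those variables might map onto Pino options (the actual helper in the monorepo may differ; `optionsFromEnv` is a hypothetical name):

```typescript
// Hypothetical sketch: translate the documented env vars into pino options.
// "silent" is pino's convention for disabling output entirely.
interface LogOptions {
  level: string;   // passed to pino({ level })
  pretty: boolean; // if true, route through the pino-pretty transport
}

function optionsFromEnv(env: Record<string, string | undefined>): LogOptions {
  const raw = (env.LOG_LEVEL ?? "INFO").toLowerCase();
  return {
    level: raw === "off" ? "silent" : raw,
    pretty: env.LOG_FORMAT === "PRETTY", // JSON output is the default
  };
}
```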

Rust

See rust/README.md

Release Agents

  • Tag the commit with the current date in the format agents-yyyy-mm-dd; e.g. agents-2023-03-28.
  • Create a Github Release with a changelog against the previous version titled Agents MMMM DD, YYYY, e.g. Agents March 28, 2023.
  • Include the agent docker image tag in the description of the release
  • Create a summary of change highlights
  • Create a "breaking changes" section with any changes required
  • Deploy agents with the new image tag (if it makes sense to)


hyperlane-monorepo's Issues

Support "commitments" in watcher

See #55 for a description of "commitments".

The watcher should watch Replicas. Every time a commitment is played on a Replica, it should call a view function on the Home contract to confirm a matching snapshot.

If there is not a matching snapshot, the watcher should:

  1. Call xAppConnectionManager.unenrollReplica
  2. Call Home.improperUpdate

Parameterize optics-deploy script by environment

Many of the optics-deploy scripts are now the same boilerplate code with a different environment specified. We should move these to scripts/common and have the user pass the environment as an argument.

Private Fork Branch Management

This is a special situation which requires us to deal with branches in a slightly different way. A goal is to keep certain changes private for competitive reasons, specifically around deployment/infra tooling, agents and possible expansions. For those we should have separate branches in addition to the bridge-buddies main branch:

  • tooling
  • agents
  • expansion

For all changes that can be public we should make them against the public repo. Upon merge, also merge them into private main.

For changes that can't be public, we should:

  1. Make PRs against the above feature branches
  2. When merging, do not merge with the PR numbers in the title
  3. Manually merge the feature branches into private main

Whenever comfortable, we should then occasionally merge from the feature branches into public main.

Do not require xApps to deserialize messages

xApp developers are forced to deserialize byte arrays into arguments, which is cumbersome. Instead, we should give them an interface that is roughly equivalent to function calls, which is what they're already used to.

Example xApp developer experience:

import {IERC20} from "IERC20.sol";
contract ERC20TokenRouter is IERC20TokenRouter {
  mapping(uint32 => bytes32) public routers;
  mapping(address => uint256) public balances;

  function sendRemoteTransfer(uint32 _destination, address _to, uint256 _value) external {
    bytes32 _router = _mustHaveRouter(_destination);
    balances[msg.sender] -= _value;
    bytes memory callData = abi.encodeCall(
        IERC20TokenRouter.receiveRemoteTransfer,
        // It would be better if the xApp developer did not have to pass these themselves, as the
        // Home contract will need to require that they are correct.
        (_localDomain, address(this), msg.sender, _to, _value)
    );
    Home(xAppConnectionManager.home()).remoteCallV1(_destination, _router, callData);
  }

  function receiveRemoteTransfer(uint32 _origin, address _sender, address _to, uint256 _value) external onlyReplica onlyTokenRouter(_origin, _sender) {
    // Credit, rather than debit, the recipient on the receiving side.
    balances[_to] += _value;
  }
}

https://github.com/bridge-buddies/issues/issues/28

Tooling to migrate existing Optics v2 xApps to Abacus

While many of the relay smart contracts are upgradable, new instances of the relay will be deployed whenever major changes are released.

xApps can migrate to new relay instances, so long as they are backwards compatible, by calling xAppConnectionManager.enrollReplica() to enroll the new replicas.

Once all the new replicas are enrolled, xApps can call xAppConnectionManager.setHome to start directing messages to the new relay.

Eventually, xApps should call xAppConnectionManager.unenrollReplica to deprecate the old relay instance.

There should be governance tooling to support this process.
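A toy TypeScript model of that migration ordering (the real xAppConnectionManager is a Solidity contract; the relay names here are hypothetical):

```typescript
// Toy in-memory model of xAppConnectionManager, used only to illustrate the
// enroll -> setHome -> unenroll migration sequence described above.
class XAppConnectionManagerModel {
  home = "relay-v1-home";
  private replicas = new Set<string>(["relay-v1-replica"]);

  enrollReplica(replica: string): void { this.replicas.add(replica); }
  unenrollReplica(replica: string): void { this.replicas.delete(replica); }
  setHome(home: string): void { this.home = home; }
  isReplica(replica: string): boolean { return this.replicas.has(replica); }
}

const mgr = new XAppConnectionManagerModel();
// 1. Enroll the new replicas (old ones keep working during the overlap).
mgr.enrollReplica("relay-v2-replica");
// 2. Direct outbound messages at the new relay instance.
mgr.setHome("relay-v2-home");
// 3. Deprecate the old relay instance.
mgr.unenrollReplica("relay-v1-replica");
```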

Support "commitments" in relayer

See #55 for a description of "commitments".

The relayer should read commitments published by the updater, and selectively push them to Replica contracts.

Proposed criteria:

  • The commitment contains a new message for the Replica
  • It has been more than s seconds since the last commitment relayed to the Replica, where s is configurable per destination chain, balancing latency and cost
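A sketch of that decision rule, assuming both criteria must hold (names are illustrative; `minSeconds` is the per-destination `s`):

```typescript
// Decide whether to push a commitment to a Replica: the commitment must
// contain a new message, and at least minSeconds must have elapsed since
// the last commitment relayed to this Replica.
interface RelayState {
  lastRelayedLeafIndex: number;
  lastRelayedAt: number; // unix seconds
}

function shouldRelay(
  state: RelayState,
  commitmentLeafIndex: number,
  nowSeconds: number,
  minSeconds: number,
): boolean {
  const hasNewMessage = commitmentLeafIndex > state.lastRelayedLeafIndex;
  const cooledDown = nowSeconds - state.lastRelayedAt > minSeconds;
  return hasNewMessage && cooledDown;
}
```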

Proper gas estimation for Replica.process

The current version of Replica.sol requires that the process transaction gasLimit greatly exceed the gas that the transaction will actually consume, see this line.

This scares users who are paying for their own message processing. The rationale is to prevent messages from being marked as processed when they were given very little gas.

Instead, we can process the message, passing along gasleft() - RESERVE_GAS. If the call fails and was given less than PROCESS_GAS, we revert the transaction.

This approach preserves the current semantics without giving users a misleading gas limit.
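A sketch of that rule in TypeScript (the RESERVE_GAS and PROCESS_GAS values are hypothetical; the real constants live in Replica.sol):

```typescript
// Model of the proposed rule: forward gasleft() - RESERVE_GAS to the message
// handler; if the handler fails AND it was forwarded less than PROCESS_GAS,
// revert the transaction instead of marking the message processed.
const RESERVE_GAS = 15_000;   // hypothetical value
const PROCESS_GAS = 850_000;  // hypothetical value

type Outcome = "processed" | "failed" | "reverted";

function processOutcome(gasLeft: number, handlerSucceeded: boolean): Outcome {
  const forwarded = gasLeft - RESERVE_GAS;
  if (handlerSucceeded) return "processed";
  // The handler failed: if it had ample gas, the failure is its own fault;
  // otherwise it may be underfunded, so revert the whole transaction.
  return forwarded < PROCESS_GAS ? "reverted" : "failed";
}
```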

xApp configurable fraud window

Allow xApps to specify the fraud window based on their specific security needs.

One proposal:

Replicas:

  • store the timestamp at which each root was submitted
  • call destination.xAppConnectionManager().acceptableRoot(root, rootTimestamp) before processing a message

xAppConnectionManager:

  • stores a mapping of [destination xApp => fraud window] as well as a default fraud window, with the setter callable by the destination xApp

xApps:

  • can optionally call xAppConnectionManager.setFraudWindow() to set their fraud window
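A toy model of the proposal (method names follow the bullets above; the timestamp arithmetic is the assumed semantics of acceptableRoot):

```typescript
// In-memory model: per-xApp fraud windows with a fallback default.
class FraudWindowManager {
  private windows = new Map<string, number>(); // xApp address => seconds

  constructor(private defaultWindowSeconds: number) {}

  setFraudWindow(xApp: string, seconds: number): void {
    this.windows.set(xApp, seconds);
  }

  // A root is acceptable for an xApp once that xApp's fraud window has
  // elapsed since the root was submitted.
  acceptableRoot(xApp: string, rootTimestamp: number, nowSeconds: number): boolean {
    const window = this.windows.get(xApp) ?? this.defaultWindowSeconds;
    return nowSeconds - rootTimestamp >= window;
  }
}
```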

Support "commitments" in Home and Replica

This issue is to replace the existing "update" model for synchronizing Home and Replica contracts.

The Home contract will no longer keep a queue of merkle roots. Instead, upon calls to an external function snapshot, it will store a mapping(root -> leaf index).

The updater will wait the appropriate confirmation time and then sign a commitment to (root, leaf index). Replicas will accept this commitment iff the leaf index is greater than the greatest leaf index that they've already committed.

Updaters are slashable iff they've signed a commitment to a snapshot that is not present in the mapping on Home.
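The flow above can be modeled in a few lines (roots are plain strings standing in for bytes32 values; signature handling is omitted):

```typescript
// Home stores a mapping(root => leaf index), populated by snapshot().
class HomeModel {
  snapshots = new Map<string, number>();

  snapshot(root: string, leafIndex: number): void {
    this.snapshots.set(root, leafIndex);
  }

  // An updater is slashable iff they signed a (root, leafIndex) pair
  // that is not present in the mapping.
  isSlashable(root: string, leafIndex: number): boolean {
    return this.snapshots.get(root) !== leafIndex;
  }
}

// A Replica accepts a commitment iff its leaf index exceeds the greatest
// leaf index the Replica has already committed.
class ReplicaModel {
  private greatestLeafIndex = -1;

  acceptCommitment(leafIndex: number): boolean {
    if (leafIndex <= this.greatestLeafIndex) return false;
    this.greatestLeafIndex = leafIndex;
    return true;
  }
}
```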

Smart contract updater

Replicas can have updates sent to them by a smart contract, which is responsible for verifying the update.

One approach would be for a chain's UpdaterManager to not just manage the updater for Home, but also any replicas deployed on the same chain. UpdaterManager becomes responsible for verifying signatures and calling Replica.update(), which is gated onlyUpdaterManager.

This allows us to keep the same style of updater (ECDSA signatures) for now, but in the future easily move to more decentralized approaches by upgrading (or rotating) UpdaterManager.

Namespace commitments by version

To keep the same updater/validator set while migrating to a new relay instance, commitments will need to be namespaced by instance, so that commitments from instance n are not treated as fraudulent commitments in instance n+1.

This will be important once the validator set is large enough that requiring everyone to configure new keys would be impractical.

Users only need to pay gas on the source chain

Cross-chain transactions introduce additional UX friction around gas payments by requiring that gas be paid on multiple chains, with multiple currencies. It is important that we design an experience that abstracts this reality away from the user. It should "just work".

Single message design

To illustrate how we might do this, we start by describing a solution that allows a user to pay on the source chain for the processing of a single cross-chain message.

Process bounties

In short, the user pays for the gas to dispatch the message, and funds a processBounty for that message. When the message is processed on the destination chain, the Replica records who processed the message. The processor is then able to dispatch a message back to the source chain, redeeming the bounty.

We do this by building a ProcessBounty xApp. On the source chain, users call ProcessBounty.post(msgHash, value), which places a bounty of value on the message with hash msgHash. On the destination chain, processors call ProcessBounty.claim(msgHash), which checks the Replica to confirm that the message was processed, and dispatches a message back to the source chain ProcessBounty instance. The processor processes that message to claim the bounty. Replicas will need to store who processed a particular message, which can be done for additional cost over the present system by overloading the Replica.messages mapping.

Multi message design

While most (all?) cross-chain transactions of today pass only a single message between chains, transactions of the future are likely to spawn multiple cross-chain messages. This significantly complicates the problem of paying for message processing only on the source chain, for two reasons:

  1. Putting a bounty on the original message hash is insufficient, as there may be many other messages associated with this transaction.
  2. The bounty will need to be divided fairly between multiple processors, to each according to their contribution to processing the messages associated with this transaction.

Message context

To solve (1), we introduce the concept of message context. An additional bytes32 context storage variable is added to the Home contract. Before a Replica processes a message, it sets context equal to the message hash of the message that is about to be processed. When the Replica is done processing a message, it clears context.

When an xApp sends a cross-chain message to the Home contract, the current context is automatically added to the message, much like origin and sender are. This means that all messages spawned as part of the same cross-chain transaction will contain a commitment to the history of messages within that transaction that preceded them.

Storing message processing costs

To solve (2), when a message is processed, Replicas store a mapping of msgHash => keccak(processor, gas*gasPrice, messageContext).

Putting it all together

When a user initiates a cross-chain transaction, they call ProcessBounty.placeBounty(sourceMsgHash, value).

The bounty is valid for [one day], during which processors call ProcessBounty.fileClaim(processedMessageHash, sourceMessageHash, sourceMessageContext, processorAddress, gasCost, proofOfContext) on the destination chain for each message that they processed as part of the user's cross-chain transaction. The proofOfContext consists of all message hashes between sourceMessageHash and processedMessageHash, which can be used along with sourceMessageContext to reconstruct processedMessageContext and confirm that processedMessage is indeed downstream of sourceMessage. ProcessBounty can then confirm that processedMessage was processed by processorAddress at a cost of gasCost.
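A sketch of the context-chain verification (sha256 stands in for keccak256, and the message encoding is invented for illustration):

```typescript
import { createHash } from "node:crypto";

// sha256 stands in for keccak256 in this sketch.
const hash = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// Each message embeds the context it was dispatched under: the hash of the
// message being processed at the time of dispatch.
interface Message {
  context: string;
  body: string;
}

const messageHash = (m: Message): string => hash(m.context + "|" + m.body);

// proofOfContext: the intermediate messages linking sourceMessageHash to the
// processed message, oldest first. Walk the chain, checking that each hop's
// embedded context equals the hash of its parent.
function isDownstream(
  sourceMessageHash: string,
  processed: Message,
  intermediates: Message[],
): boolean {
  let expectedContext = sourceMessageHash;
  for (const m of intermediates) {
    if (m.context !== expectedContext) return false;
    expectedContext = messageHash(m);
  }
  return processed.context === expectedContext;
}
```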

The ProcessBounty router on the domain on which the message was processed then sends the message ProcessBounty.acceptClaim(sourceMessageHash, processorAddress, gasCost, gasCurrency) on the source domain.

The processor processes the message on the source domain, and the ProcessBounty contract stores a commitment to (sourceMessageHash, processorAddress, gasCost, gasCurrency).

After the deadline to file claims has expired, anyone can purchase the bounty by paying a multiplier of [110%] of the total gas costs incurred in each token. Each processor gets the portion of these funds associated with the gas costs they paid, allowing the bounty to be split fairly without the need for oracle prices.

For example, if the transaction incurred fees of 0.1 CELO, 0.05 SOL, and 0.2 ETH, paid by processors A, B, and C respectively, anyone could purchase the bounty in exchange for sending 0.11 CELO to A, 0.055 SOL to B, and 0.22 ETH to C.
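The arithmetic from that example, computed in integer milli-token units to avoid floating-point drift (the [110%] multiplier is expressed in basis points):

```typescript
// The purchaser pays each processor 110% of the gas cost they incurred, in
// the same token they paid it in. Amounts are in milli-tokens (1 CELO = 1000).
const MULTIPLIER_BPS = 11_000; // 110%

interface Claim {
  processor: string;
  token: string;
  gasCostMilli: number;
}

const claims: Claim[] = [
  { processor: "A", token: "CELO", gasCostMilli: 100 }, // 0.1 CELO
  { processor: "B", token: "SOL",  gasCostMilli: 50  }, // 0.05 SOL
  { processor: "C", token: "ETH",  gasCostMilli: 200 }, // 0.2 ETH
];

const payouts = claims.map((c) => ({
  ...c,
  payoutMilli: (c.gasCostMilli * MULTIPLIER_BPS) / 10_000,
}));
// A receives 110 milli-CELO (0.11), B 55 milli-SOL (0.055), C 220 milli-ETH (0.22)
```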

Bounty purchases can be made more efficient by using wrapped tokens that exist on the same chain as the bounty.

Further improvements

In order to make sure their messages get processed, the user will need to set a bounty higher than the cost of processing their messages plus the cost of claiming the bounty. This excess value can be thought of as split between the following three parties:

  1. The user
  2. The processors
  3. The purchaser

The design described in the previous section does not return any of this excess value to the user. Instead, the excess value is split between:

  • The processors, who receive [10%] more tokens than they spent on processing the user's messages. The processors are not reimbursed for the cost of filing claims, and thus the [10%] must be set to be high enough to cover this.
  • The purchaser, who receives the excess value remaining after the processors have taken their cut.

Refunding excess value to the user

Rather than allowing the purchaser to purchase the entire bounty, you could make the bounty available to the purchaser grow over time, starting at [50%] of the bounty specified by the user. Any remaining bounty that was not purchased is refunded to the user at the time of purchase.
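A sketch of that schedule, assuming linear growth from 50% to 100% over the claim period (only the [50%] starting point is specified above; the linear shape is an assumption):

```typescript
// Fraction of the bounty a purchaser may buy, growing linearly from 50%
// at t = 0 to 100% at the end of the claim period.
function purchasableFraction(elapsedSeconds: number, periodSeconds: number): number {
  const t = Math.min(Math.max(elapsedSeconds / periodSeconds, 0), 1);
  return 0.5 + 0.5 * t;
}

// Whatever fraction was not purchasable at purchase time is refunded
// to the user.
function refundToUser(bounty: number, elapsedSeconds: number, periodSeconds: number): number {
  return bounty * (1 - purchasableFraction(elapsedSeconds, periodSeconds));
}
```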

Make optics-deploy the logical home for config

Config is currently fragmented across three packages:

  • optics-deploy

    • config/**/$NETWORK.ts
      • Chain object contains network name, domain, rpc info, deployer key, and tx gas settings
        • One object per network
        • Consumed by optics deploy in order to send transactions
      • CoreConfig/BridgeConfig objects contain info about how to parameterize the contracts (e.g. agent addresses).
        • One object per environment per network
        • Consumed by optics deploy in order to initialize contracts, verify on-chain state matches expectation
    • scripts/$ENVIRONMENT/agentConfig.ts
      • AgentConfig object contains gcloud/aws info, docker image
        • One object per environment
        • Consumed by optics deploy in order to configure agent helm charts
  • optics-provider

    • src/optics/domains/$ENVIRONMENT.ts
      • OpticsDomain object contains info about contract addresses, network domains, and pagination (for polygon)
        • One object per environment per network
        • Consumed by optics provider in order to instantiate contract objects
  • rust

    • config/$ENVIRONMENT/$NETWORK_config.json
      • RustConfig (ts) / Settings (rust) object contains Home/Replica contract addresses, network domains, signer info, rpc info, tracing/db config
        • One file per network per environment
        • Consumed by the agents as a Settings object, used to configure DB, indexing, tracing, signers, and create contract objects
    • config/$ENVIRONMENT/$NETWORK_contracts.json
      • One file per network per environment
        • Consumed by optics-deploy to instantiate CoreContracts object, used for checking and modifying existing deploys
        • Seemingly unused by the rust agents
    • config/$ENVIRONMENT/$NETWORK_verification.json
      • One file per network per environment
        • Consumed by optics-deploy to verify contracts on etherscan(s)
        • Seemingly unused by the rust agents

Proposal:

At a high level, my suggestion is to move as much config into optics-deploy as possible. This means removing unused config files in rust, and unifying the OpticsDomain type with what's saved in optics-deploy.

Specifically, I'm proposing the following:

  • optics-deploy

    • config/networks/$NETWORK.ts
      • ChainConfig object contains network name and domain
      • Deployer object contains rpc info, deployer key, and tx gas settings
    • config/environments/$ENVIRONMENT/agent.ts
      • One AgentConfig object containing gcloud/aws info, docker image
    • config/environments/$ENVIRONMENT/core.ts
      • One CoreContractsConfig object per network containing info about how to initialize and configure contracts
    • config/environments/$ENVIRONMENT/bridge.ts
      • One BridgeConfig object per network containing info about how to initialize and configure contracts
    • config/environments/$ENVIRONMENT/contracts/$NETWORK_contracts.json
      • ContractAddresses object, which contains Core and Bridge contract addresses (and domains)
    • config/environments/$ENVIRONMENT/contracts/$NETWORK_verification.json
      • A list of ContractVerification objects
  • optics-provider

    • src/optics/domains/$ENVIRONMENT/$NETWORK_contracts.json
      • ContractAddresses object, copied over from optics-deploy.
      • To minimize changes to optics-provider (for now) we add code in optics-provider to parse into OpticsDomain
      • Can relatively easily follow up to remove OpticsDomain and use ContractAddresses instead
      • Can add a CI check to enforce files match those in optics-deploy
  • rust

    • config/$ENVIRONMENT/$NETWORK.json
      • To minimize changes to agents we keep everything here the same
      • As at present, generated programmatically by optics-deploy from config that lives in optics-deploy

Support "commitments" in updater agent

See #55 for a description of "commitments".

Updater agents should call Home.snapshot() at a configurable and regular interval.

For any snapshots that the updater hasn't already signed, the updater should sign the snapshot and push the commitment somewhere it can be read by the relayer.

Future versions of the updater can and should be smarter about when they choose to call snapshot().

Remove updater from Common/Home

After #55, the Home no longer needs to know who the updater is. improperUpdate can be moved to UpdaterManager, which should have permission to call Home.setFailed.
