aptos-labs / aptos-core

Aptos is a layer 1 blockchain built to support the widespread use of blockchain through better technology and user experience.

Home Page: https://aptosfoundation.org

License: Other

Shell 0.27% Makefile 0.01% HTML 0.03% Rust 56.47% JavaScript 0.07% Python 1.81% TypeScript 4.72% Dockerfile 0.05% Smarty 0.01% Mustache 0.05% HCL 0.68% PLpgSQL 0.01% Move 35.77% Go 0.01% Boogie 0.01% PowerShell 0.06%
aptos blockchain blockchain-network move smart-contracts

aptos-core's People

Contributors

areshand, banool, bchocho, bmwill, clay-aptos, davidiw, dependabot-preview[bot], dependabot[bot], geekflyer, gregnazario, huitseeker, ibalajiarun, igor-aptos, joshlind, junkil-park, lightmark, meng-xu-cs, movekevin, msmouse, phlip9, rexhoffman, rustielin, sblackshear, sherry-x, sunshowers, vgao1996, wrwg, xli, zacharydenton, zekun000


aptos-core's Issues

Improve Faucet observability

  • Add metrics support to Faucet so that we can gain observability
  • When logged errors happen, try to provide traces

Revisit all the configuration settings

Scan through our repo to revisit all the configuration settings and make sure that:

  • default values are reasonable
  • each setting is documented, explaining what it does and how to configure it
  • unused fields are removed (e.g., enabling API)

[VM] investigate VM error scheme on state fetch failure

In benchmark runs, when I accidentally broke the proof format, which should have resulted in an error from VerifiedStateView::get(), the VM instead discarded the transactions. That seems like the wrong thing to do (panicking might be a better choice here).

Try to reproduce, investigate, and probably fix.

[storage] expose low priority writes from rocksdb and misc

RocksDB has a write option to mark a write as low priority. Our pruners should use this option.

Other options that seem like no-brainers to enable:

  1. use "multithreaded" flavor of the DB https://docs.rs/rocksdb/latest/rocksdb/type.DB.html#compatibility-and-multi-threaded-mode

  2. these options upon DB open:

       // assuming the rust `rocksdb` crate's Options and the `num_cpus` crate
       let mut opts = Options::default();
       opts.increase_parallelism(num_cpus::get() as i32);
       opts.optimize_level_style_compaction(1024 * 1024);
       opts.set_use_adaptive_mutex(true);

[storage] `Either::or` should take closures

pub fn or(cond: bool, a: A, b: B) -> Self {

^ take closures so that parameter evaluation can be lazy, like here:

InMemSubTreeInfo::create_leaf_with_update(self.updates[0], self.generation),

Run cargo bench -p scratchpad --features bench to run the criterion-based benchmark (diem/diem#8388) and see if there's a performance regression.
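As a sketch of what the closure-taking variant could look like (the `or_lazy` name and the `Either` definition here are illustrative, not the actual aptos-core API):

```rust
// Hypothetical sketch of a lazy variant of `Either::or`: the unused branch
// is never evaluated because the caller passes closures instead of values.
enum Either<A, B> {
    Left(A),
    Right(B),
}

impl<A, B> Either<A, B> {
    // Eager version, as in the signature above: both `a` and `b` are
    // evaluated by the caller before this function is even entered.
    fn or(cond: bool, a: A, b: B) -> Self {
        if cond { Either::Left(a) } else { Either::Right(b) }
    }

    // Lazy version: only the selected branch's closure runs.
    fn or_lazy(cond: bool, a: impl FnOnce() -> A, b: impl FnOnce() -> B) -> Self {
        if cond { Either::Left(a()) } else { Either::Right(b()) }
    }
}

fn main() {
    // The expensive computation on the untaken branch is skipped entirely.
    let e: Either<i32, i32> = Either::or_lazy(true, || 1, || panic!("never evaluated"));
    match e {
        Either::Left(v) => println!("left: {}", v),
        Either::Right(v) => println!("right: {}", v),
    }
}
```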

Operational tooling

Operational tooling is critical to our success and to that of our developers and partners. The goal of this issue is to build a plan, get buy-in, and execute:

  • Identify and describe all the current tools (diem-move, config/management, config/seed-peers-generator, config/generate-key, swiss-knife, diem-documentation-tool -- one can simply grep for main.rs)
  • Determine and establish consensus on the required features
  • Help convert existing permission-centric tooling to be agnostic -- where appropriate -- this may tie into the platform (smart contract space)
  • Explore ol's approach: https://github.com/OLSF/libra/tree/main/ol/cli

Some additional requirements:

  • Validator and fullnode creation and registration
  • Key and address rotation
  • Backup and restore in collaboration with state sync
  • Retrieve and print node configuration
  • Retrieve and print on-chain information for a node
  • Validate local deployment (maybe by running a mini testnet?)
  • Add back smoke tests that were deleted during the port from DPN to aptos

Remove DPN and experimental and converge on aptos-framework

Currently, our code base has several frameworks, but only one is under active development and maintenance: aptos-framework. Converging on it requires a lot of work:

  • Identify critical features only found in the DPN/experimental branches that we want and build them out
  • Delete experimental branch and any dependencies
  • Create a new aptos-framework-types crate and migrate over all DPN / aptos-framework types
  • Update REST to support an aptos-framework configuration (maybe?)
  • Migrate E2E Testsuite and E2E Tests to aptos-framework
  • Migrate Smoke tests and Forge to aptos-framework
  • Update the genesis builder to aptos-framework
  • Delete DPN
  • Actually make sure core is running through appropriate transactional-tests independent of aptos-framework

Support 32-byte addresses

The initial account authenticator, roughly the hash of the public keys for an account, represents the account address at the time of creation. After that point, a user can change their account authenticator to whatever they like. Right now, the address is represented by 16 bytes; as a result, the only way to securely create these accounts without some sort of land grab is to require proof of key ownership. This is very unnatural in the web3 space, where accounts can often be created as a by-product of another operation. In order to support that behavior, we must expand our account addresses back to 32 bytes. That begins with modifying https://github.com/diem/move/blob/main/language/move-core/types/src/account_address.rs#L21 with a new feature for 32 bytes -- something I am happy to accept.

From there, one needs to dig through our code particularly in types/src/transactions/auth to determine where we have other quirks and slowly make this a reality.

Implement deep health check for node

Implement a health check function to evaluate all component health, for example:

  • Is the node participating in consensus?
  • Is the node up-to-date with the latest version?
  • Is the node connected to other nodes?
  • Is the node accepting connection from others?

We should be able to hit the health check endpoint and get an okay/bad answer for the node's health.
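A minimal sketch of what aggregating those component checks into one ok/bad answer could look like (the `ComponentHealth` type and the component names are illustrative, not the actual aptos-core API):

```rust
// Hypothetical deep health check: each component reports its own health,
// and the endpoint handler collapses them into ok (all healthy) or bad
// (listing the failing components).
struct ComponentHealth {
    name: &'static str,
    healthy: bool,
}

fn deep_health_check(components: &[ComponentHealth]) -> Result<(), Vec<&'static str>> {
    // Collect the names of every unhealthy component.
    let failing: Vec<&'static str> = components
        .iter()
        .filter(|c| !c.healthy)
        .map(|c| c.name)
        .collect();
    if failing.is_empty() { Ok(()) } else { Err(failing) }
}

fn main() {
    let components = [
        ComponentHealth { name: "consensus_participation", healthy: true },
        ComponentHealth { name: "latest_version_synced", healthy: true },
        ComponentHealth { name: "peer_connections", healthy: false },
        ComponentHealth { name: "inbound_connections", healthy: true },
    ];
    match deep_health_check(&components) {
        Ok(()) => println!("ok"),
        Err(bad) => println!("bad: {:?}", bad),
    }
}
```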

Eliminate compiled code from the repo

There's a lot of compiled Move code in the repository, and a few tasks are needed to eliminate it. First, the motivation: compiled code makes it very tricky to make sense of our eventual complete renaming, and it is a legacy of the past. There's a lot of fragility around the compiled code, and it doesn't mesh with our working model. For example, when trying to rename, one must update the compiled binaries manually or a complete rebuild of the project will fail. There's no documentation around that.

  • Eliminate genesis and framework binaries and instead rely on "fresh" instances
  • Delete all notion of compiled scripts, and maybe even scripts, from within the codebase since we don't have any applications right now (this includes porting all the e2e-testsuites to use script functions)

One should be able to delete the following from .gitignore:

!aptos-move/diem-framework/DPN/releases/artifacts/**
!aptos-move/diem-framework/experimental/releases/artifacts/**
!aptos-move/diem-framework/aptos-framework/releases/artifacts/**

RUSTSEC in dependencies in branch main

Found RUSTSEC in dependencies in job https://github.com/aptos-labs/diem-internal/actions/runs/1903324290

    Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
      Loaded 398 security advisories (from /opt/cargo/advisory-db)
    Updating crates.io index
    Scanning Cargo.lock for vulnerabilities (702 crate dependencies)
Crate:         chrono
Version:       0.4.19
Title:         Potential segfault in `localtime_r` invocations
Date:          2020-11-10
ID:            RUSTSEC-2020-0159
URL:           https://rustsec.org/advisories/RUSTSEC-2020-0159
Solution:      No safe upgrade is available!
Dependency tree: 
chrono 0.4.19
├── x 0.1.0
├── tera 1.7.1
│   └── move-prover-boogie-backend 0.1.0
│       └── move-prover 0.1.0
│           ├── move-stdlib 0.1.0
│           │   ├── smoke-test 0.1.0
│           │   │   └── shuffle 0.1.0
│           │   ├── shuffle-custom-move-code 0.1.0
│           │   ├── move-unit-test 0.1.0
│           │   │   ├── shuffle-custom-move-code 0.1.0
│           │   │   ├── shuffle 0.1.0
│           │   │   ├── move-cli 0.1.0
│           │   │   │   ├── shuffle 0.1.0
│           │   │   │   ├── move-transactional-test-runner 0.1.0
│           │   │   │   │   └── diem-transactional-test-harness 0.1.0
│           │   │   │   │       └── diem-framework 0.1.0
│           │   │   │   │           ├── vm-genesis 0.1.0
│           │   │   │   │           │   ├── vm-validator 0.1.0
│           │   │   │   │           │   │   ├── diem-mempool 0.1.0
│           │   │   │   │           │   │   │   ├── state-sync-v1 0.1.0
│           │   │   │   │           │   │   │   │   ├── state-sync-multiplexer 0.1.0
│           │   │   │   │           │   │   │   │   │   └── diem-node 0.1.0
│           │   │   │   │           │   │   │   │   │       └── shuffle 0.1.0
│           │   │   │   │           │   │   │   │   ├── diem-node 0.1.0
│           │   │   │   │           │   │   │   │   └── diem-fuzzer 0.1.0
│           │   │   │   │           │   │   │   │       └── diem-fuzz 0.1.0
│           │   │   │   │           │   │   │   ├── diem-node 0.1.0
│           │   │   │   │           │   │   │   ├── diem-key-manager 0.1.0
│           │   │   │   │           │   │   │   │   └── smoke-test 0.1.0
│           │   │   │   │           │   │   │   ├── diem-json-rpc 0.1.0
│           │   │   │   │           │   │   │   │   ├── smoke-test 0.1.0
│           │   │   │   │           │   │   │   │   ├── diem-node 0.1.0
│           │   │   │   │           │   │   │   │   ├── diem-key-manager 0.1.0
│           │   │   │   │           │   │   │   │   ├── diem-fuzzer 0.1.0
│           │   │   │   │           │   │   │   │   └── diem-api 0.1.0
│           │   │   │   │           │   │   │   │       └── diem-node 0.1.0
│           │   │   │   │           │   │   │   ├── diem-fuzzer 0.1.0
│           │   │   │   │           │   │   │   ├── diem-api 0.1.0
│           │   │   │   │           │   │   │   └── consensus 0.1.0
│           │   │   │   │           │   │   │       ├── generate-format 0.1.0
│           │   │   │   │           │   │   │       ├── diem-node 0.1.0
│           │   │   │   │           │   │   │       └── diem-fuzzer 0.1.0
│           │   │   │   │           │   │   ├── diem-key-manager 0.1.0
│           │   │   │   │           │   │   ├── diem-json-rpc 0.1.0
│           │   │   │   │           │   │   ├── diem-api 0.1.0
│           │   │   │   │           │   │   └── consensus 0.1.0
│           │   │   │   │           │   ├── state-sync-v1 0.1.0
│           │   │   │   │           │   ├── state-sync-driver 0.1.0
│           │   │   │   │           │   │   └── state-sync-multiplexer 0.1.0
│           │   │   │   │           │   ├── language-e2e-tests 0.1.0
│           │   │   │   │           │   │   ├── language-e2e-testsuite 0.1.0
│           │   │   │   │           │   │   ├── diem-transactional-test-harness 0.1.0
│           │   │   │   │           │   │   ├── diem-transaction-benchmarks 0.1.0
│           │   │   │   │           │   │   ├── diem-fuzzer 0.1.0
│           │   │   │   │           │   │   └── diem-e2e-tests-replay 0.1.0
│           │   │   │   │           │   ├── genesis-viewer 0.1.0
│           │   │   │   │           │   ├── executor-test-helpers 0.1.0
│           │   │   │   │           │   │   ├── vm-validator 0.1.0
│           │   │   │   │           │   │   ├── state-sync-v1 0.1.0
│           │   │   │   │           │   │   ├── state-sync-multiplexer 0.1.0
│           │   │   │   │           │   │   ├── state-sync-driver 0.1.0
│           │   │   │   │           │   │   ├── executor 0.1.0
│           │   │   │   │           │   │   │   ├── vm-validator 0.1.0
│           │   │   │   │           │   │   │   ├── state-sync-v1 0.1.0
│           │   │   │   │           │   │   │   ├── state-sync-multiplexer 0.1.0
│           │   │   │   │           │   │   │   ├── state-sync-driver 0.1.0
│           │   │   │   │           │   │   │   ├── executor-test-helpers 0.1.0
│           │   │   │   │           │   │   │   ├── executor-benchmark 0.1.0
│           │   │   │   │           │   │   │   ├── execution-correctness 0.1.0
│           │   │   │   │           │   │   │   │   └── consensus 0.1.0
│           │   │   │   │           │   │   │   ├── diem-node 0.1.0
│           │   │   │   │           │   │   │   ├── diem-key-manager 0.1.0
│           │   │   │   │           │   │   │   ├── diem-json-rpc 0.1.0
│           │   │   │   │           │   │   │   ├── diem-genesis-tool 0.1.0
│           │   │   │   │           │   │   │   │   ├── state-sync-v1 0.1.0
│           │   │   │   │           │   │   │   │   ├── state-sync-multiplexer 0.1.0
│           │   │   │   │           │   │   │   │   ├── smoke-test 0.1.0
│           │   │   │   │           │   │   │   │   ├── shuffle 0.1.0
│           │   │   │   │           │   │   │   │   ├── forge 0.0.0
│           │   │   │   │           │   │   │   │   │   ├── testcases 0.0.0
│           │   │   │   │           │   │   │   │   │   │   └── forge-cli 0.0.0
│           │   │   │   │           │   │   │   │   │   ├── smoke-test 0.1.0
│           │   │   │   │           │   │   │   │   │   ├── shuffle 0.1.0
│           │   │   │   │           │   │   │   │   │   ├── jsonrpc-integration-tests 0.0.0
│           │   │   │   │           │   │   │   │   │   └── forge-cli 0.0.0
│           │   │   │   │           │   │   │   │   ├── executor-test-helpers 0.1.0
│           │   │   │   │           │   │   │   │   ├── executor-benchmark 0.1.0

[VM] proper prologue error report

The prologue is used to validate whether the txn is good for execution, including checking the sequence number, gas deposit, auth key, etc.

fn run_prologue<S: MoveResolver>(

Failed checks are move-abort, but the error is translated to DiscardedVMStatus in

pub fn convert_prologue_error(

DiscardedVMStatus is an enum of hard-coded error code.

The reason we do the conversion is that a move-abort is supposed to charge gas and persist, but a move-abort in the prologue is going to be discarded. The keep_or_discard function is here.

We'd like a better way to report prologue errors to users, providing flexibility for customized prologue errors instead of changing the Move status codes.

[vm] adapter features

There are a few features we'd want to land on the adapter side to better support the aptos framework:

[Bug] Inaccessibility of metrics when running in Docker

๐Ÿ› Bug

It seems that inside the container, the application is listening on 127.0.0.1:9091 (instead of 0.0.0.0:9091), which causes connections from the host to be refused:

curl 127.0.0.1:9101/metrics 
curl: (7) Failed to connect to 127.0.0.1 port 9101: Connection refused

To reproduce

Just run FullNode in Docker, then:

curl 127.0.0.1:9101/metrics 2> /dev/null | grep aptos_state_sync_version | grep type

Expected Behavior

aptos_state_sync_version{type="committed"} 113184
aptos_state_sync_version{type="highest"} 113184
aptos_state_sync_version{type="synced"} 113184
aptos_state_sync_version{type="target"} 113185
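The underlying fix is the bind address: a listener bound to 127.0.0.1 only accepts connections originating inside the container, while 0.0.0.0 accepts them on all interfaces, which is what a Docker port mapping needs. A small sketch of the difference using only std (port 0 asks the OS for an ephemeral port; the real metrics endpoint would use 9101):

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Reachable only from inside the container / host loopback.
    let loopback_only = TcpListener::bind("127.0.0.1:0")?;
    // Reachable on all interfaces, so Docker's port mapping can forward to it.
    let all_interfaces = TcpListener::bind("0.0.0.0:0")?;
    println!("loopback: {}", loopback_only.local_addr()?);
    println!("all interfaces: {}", all_interfaces.local_addr()?);
    Ok(())
}
```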

[storage][statedb] M1.2: Make resources and module under the same account separate items in the global state

  • #402

    • Currently the JMT LeafNode holds the actual value of a state item embedded, and the leaf node moves (the nibble path changes for a leaf node as its neighbors are created), which makes indexing on the values tricky / impossible.
    • separate that out:
      • the "fact" table can be keyed by (creation_version, HashValue), which minimizes the impact on performance when compaction is done, because compaction will be rare in the first place.
      • an index can be (RawStateKey, creation_version), which will be frequently under compaction, but the record size is minimized to mitigate the performance hit.
      • with the index, a range_get(ledger_version, start_key, end_key, limit) needs to be implemented.
  • Proofs can be fetched separately from the state values.

  • The in-mem SMT on the executor side gets both the value and the proof.

    • (stretch) move the proof to a separate pipeline so that it doesn't add to txn execution latency.
  • The API and other in-node places query the index and access the value without the proof.

    • the account/{}/resource and account/{}/modules REST API calls need to be implemented with range_get()

Devex tooling

First impressions last. The perception of a dev coming into a new platform is shaped by how much time they have to spend figuring out just how to get started vs being able to jump straight into code. I've got some initial thoughts on tooling to make that onboarding devex more painless. I'm approaching this from the viewpoint of a developer who is going to build applications on top of Aptos (as opposed to someone who would meaningfully contribute to actual Aptos coding/development). There are a handful of things that can be done relatively quickly to speed the process of 0 to 1 for a new Aptos dev.

  • A page on aptos.dev that provides guidance on basic IDE setup, e.g. links to VS Code/IntelliJ plugins with maybe a sentence or two on each one in terms of utility, whether it is a candidate for longer-term support/development, etc. Guidance on setting up an IDE for debugging.

  • Another page with quick links to relevant references. There's some good material out there (move-book.com, https://diem.github.io/move/introduction.html, etc). Some of it is linked to currently, but it's buried in pages. Having it all referenced in one prominent location would be helpful.

  • In addition to the tutorial on running a local testnet, we need a tutorial on running a local faucet. At the bottom of the testnet tutorial page is a line that says, "It is important to note, that this solution does not include a Faucet. That is left as an exercise to the reader." That's likely a punt from Diem that carried over, but I don't really want homework, and there's not a lot of utility in blindly struggling through the faucet setup (which I am struggling with right now with INVALID_AUTH errors). A local testnet is fairly useless without coin to pay gas fees.

  • Some kind of debug support is critical. I've got no way of knowing what's going on in code right now other than elaborate unit tests and waiting for errors or unexpected outcomes. There is reference to Std::Debug in the code with a Debug::print function, but it doesn't seem to be implemented. Not sure where the debug::print output would even show up - but assuming maybe it would hit the console in a local testnet?

  • Related, it would be great to have the ability to do a real debug session in VS Code with breakpoints, REPL, watches, etc. Implementing something like that is way beyond my domain knowledge, but that seems like a heavy lift. Still very important, but that's probably a longer term item.

  • Command line tools for basic things like account creation, compiling & publishing modules, faucet access, init a project shell, etc. I've seen some references to CLI in the various repositories, but it's unclear as to whether or not those will be actively supported.

  • The "will they be supported" comment applies broadly to the various things scattered throughout the Aptos repository. I'd prioritize an effort to separate things that are likely going away (#81 #83 and #183 address this partly).

  • In advance of (or in addition to) the CLI, a reference repository with a shell of a project ready for development with the main.rs and move_unit_tests.rs files (and perhaps some explanation about their role in linking to .move files). Ideally, that would include tasks/scripts for compiling and deploying.

  • Library packages/SDKs (e.g. npm and equivalents) of basic utility code (everything in first_transaction.(ts | py | rs) would be a good start).

Those are the thoughts I've had to date. Will add more here as I come up with additional bright ideas.

[network] Support Load balanced Full Nodes

🚀 Feature Request

Supporting a load balancer in front of multiple full nodes. E.g. /dns/load-balancer-name/tcp/6080/.../.

Motivation

Right now, full nodes have to be addressed individually. This would allow an operator to scale further with a single address and not have to distribute lists of addresses.

Pitch

Allow for sharing an identity (a public key, peer id pair) over multiple nodes without conflicts:

  • The mempool client makes assumptions about the status of the upstream
  • Ambiguous on state sync
  • Are there any other services that need investigation, e.g. state sync v2?

[Feature Request] Ledger history pruning

🚀 Feature Request

Remove old transactions, events and anything else relevant from the DB to free up disk space.

The length of history to keep should be configurable (in number of transaction versions).

Motivation

Just like we prune historical nodes in the StateDB, the ledger itself, specifically the transactions, the transaction output writesets, the transaction accumulator, the events and relevant indices need to be pruned from the node. A validator basically needs only the latest data of everything, but a fullnode might choose to serve data from a relatively long window, so make it configurable.

Pitch

Describe the solution you'd like

  • Try to issue deletes in atomic DB transactions (schemadb::SchemaBatch) so that the DB is internally consistent.
  • Try to delete with range_delete() to lower the burden on the DB.
  • Design algorithms on the global accumulator to keep the correct internal nodes.
  • Design solution to clean up the index for get_transaction_by_hash() API, which can be tricky.
  • Provide ability to report the oldest version the DB currently has.
  • It makes sense to make the configurable size of the ledger history window DIFFERENT than the state prune window, because the latter is more costly and less useful.

references:

Downstream mempools are unaware if a transaction is discarded upstream

A failed txn (without enough gas) is stuck in the mempool after the mempool has been notified by consensus.

From the logs,

debug!(LogSchema::new(LogEntry::CleanRejectedTxn).txns(txns_log));
is triggered.

However, the txn is not cleared from the mempool (the mempool size is still 1): http://mon.dev.testnet.aptoslabs.com/grafana/d/mempool/mempool?orgId=1&from=1646460874257&to=1646461407666

[storage][statedb] M0: Make it possible to store different types of objects, in addition to account blob

With the ability to store new things under the global state DB, in addition to the account blobs we have today, we can start implementing advanced container data structures like the table (like this but not targeting the EVM).

  • The state DB should now talk about "state items" instead of "account state blobs".
    • create enum StateKey with AccountStateBlob as the sole variant for now.
      • The enum has a function encode() to convert the key to a struct RawStateKey(Vec<u8>); for the state blob, the encoding should be simply "acc_blb_" | account_address
    • In the DB interfaces (e.g. DbReader, StateStore), use StateKey as the key and RawStateValue(Vec<u8>) as the value (instead of AccountAddress and AccountStateBlob respectively).
      • actually, use RawStateValue(Option<Vec<u8>>) as the value, to model deletion.
    • bring AccountStateBlob-related serialization / deserialization up to the call sites.
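The enum and its encoding described above can be sketched roughly as follows (the 16-byte address width and exact types are illustrative, taken from the issue's description rather than the actual aptos-core code):

```rust
// Hypothetical sketch of the proposed StateKey, with AccountStateBlob as
// the sole variant for now, encoding to "acc_blb_" | account_address.
#[derive(Debug)]
enum StateKey {
    AccountStateBlob([u8; 16]),
}

// Opaque byte-string form of a key, as stored in the state DB.
struct RawStateKey(Vec<u8>);

impl StateKey {
    fn encode(&self) -> RawStateKey {
        match self {
            StateKey::AccountStateBlob(address) => {
                // tag prefix followed by the raw address bytes
                let mut bytes = b"acc_blb_".to_vec();
                bytes.extend_from_slice(address);
                RawStateKey(bytes)
            }
        }
    }
}

fn main() {
    let key = StateKey::AccountStateBlob([0xAB; 16]);
    let raw = key.encode();
    // 8-byte "acc_blb_" tag plus a 16-byte address
    println!("encoded length: {}", raw.0.len());
}
```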

refs:

mempool enhancement

  • Issue #79: need to figure out a way to populate transactions that are discarded during execution back to fullnode.
  • more efficient dissemination with more nodes
  • leverage proof of availability from DAG protocols

Benchmark performance with 1k nodes

We'd like to benchmark with a 1k-node cluster for performance and see if there's any low-hanging fruit we can optimize, especially on the consensus critical path.

Potential optimizations we could do:

  • support compression for network messages
  • parallelize signature verification
  • make message verification async

[operational-tooling] revamp it for aptos-framework

The operational-tooling test suites in smoke-test were disabled during the migration to aptos-framework. AptosFramework will have a different staking-based validator system; after that lands, we should bring back those tools.

[Feature Request][Consensus] Stake aware broadcast protocol

🚀 Feature Request

Implement Stake aware broadcast protocol in Aptos

Motivation

Currently, the broadcast protocol naively broadcasts data to validators in the order of their account addresses. This has two issues:

  1. Validators with lower account addresses always get priority in receiving the broadcast communication for the proposal, and have an unfair advantage in being able to participate in the next voting round.
  2. We don't take the validator's stake into account, which can slow consensus down if the validators with the highest stake receive the broadcast last, potentially delaying the gathering of votes from 2f+1 stake.

Describe the solution you'd like

Modify the broadcast protocol to implement stake-aware broadcasting. This can be done by ordering the validators by their voting power (or stake) instead of by account address. However, we also don't want to make this order deterministic, as then the validators with the highest stake would always have the advantage of participating in the voting first. Instead, we can introduce a random weighted ordering technique, which ensures that a validator with x% of the total stake has an x% chance to be at the top of the broadcast list.
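One way to realize such a random weighted ordering is the Efraimidis–Spirakis trick: each validator draws a uniform u in (0,1) and is ranked by u^(1/stake), which puts a validator holding x% of the total stake first with probability x%. The sketch below is illustrative only (the tiny LCG stands in for a real RNG, and `stake_weighted_order` is not an aptos-core API):

```rust
// Deterministic toy RNG so the sketch is self-contained; a real
// implementation would use a proper, unpredictable randomness source.
fn lcg_next(state: &mut u64) -> f64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    // map the top 53 bits into the open interval (0,1)
    (((*state >> 11) as f64) + 1.0) / (((1u64 << 53) as f64) + 2.0)
}

// Order validators by u^(1/stake), descending: higher key goes earlier.
fn stake_weighted_order(stakes: &[(String, u64)], seed: u64) -> Vec<String> {
    let mut state = seed;
    let mut keyed: Vec<(f64, String)> = stakes
        .iter()
        .map(|(name, stake)| {
            let u = lcg_next(&mut state);
            (u.powf(1.0 / (*stake as f64)), name.clone())
        })
        .collect();
    keyed.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    keyed.into_iter().map(|(_, name)| name).collect()
}

fn main() {
    let validators = vec![
        ("alice".to_string(), 60),
        ("bob".to_string(), 30),
        ("carol".to_string(), 10),
    ];
    // alice (60% of stake) lands first about 60% of the time across seeds.
    println!("{:?}", stake_weighted_order(&validators, 42));
}
```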

[VM] support different module publish strategies

Module (re)publishing is one core feature we want to support. Although we enabled open publishing in genesis, we lack the ability to control republishing.

Currently, Move supports republishing as long as the module passes verification. https://github.com/diem/move/blob/170ea3d47e0e2888c148872629675f698d9b1e84/language/move-vm/runtime/src/runtime.rs#L65

We'd like more granular and explicit control over the publish strategies, e.g. support for frozen modules. Starcoin achieves this by adding PublishModuleBundleOption to the session API: https://github.com/starcoinorg/starcoin/blob/master/vm/vm-runtime/src/starcoin_vm.rs#L484-L498. It forks the move repo to achieve it: https://github.com/starcoinorg/move/blob/starcoin-main/language/move-vm/runtime/src/move_vm_adapter.rs#L62

[Storage] Make SchemaBatch Preserve Order

🚀 Feature Request

Motivation

The current problem is that the writeOps are not executed in the same order in which they were inserted into the SchemaBatch (https://github.com/aptos-labs/aptos-core/blob/dd905906b82541b196ed06dee02a179c8d64f06e/storage/schemadb/src/lib.rs#L64).
This leads to unexpected behavior. For example, if we insert a sequence of writeOps -- insert key 6, insert key 7, insert key 8, delete key range [6, 8) -- we expect key 8 to be left in the batch while keys 6 and 7 are deleted.
However, as SchemaBatch is a map, the actual execution order can be: delete key range [6, 8), insert key 6, insert key 7, insert key 8, which results in nothing being deleted.

Pitch

Describe the solution you'd like
Change the map used for SchemaBatch to a vec.

Are you willing to open a pull request? (See CONTRIBUTING)
yes
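A minimal sketch of the proposed fix, with a toy in-memory store standing in for RocksDB (the WriteOp and SchemaBatch types here are illustrative, not the actual schemadb API):

```rust
use std::collections::BTreeMap;

// Toy write operations: a put and a half-open range delete [begin, end).
#[derive(Debug)]
enum WriteOp {
    Put(u64, String),
    DeleteRange(u64, u64),
}

// The fix: record ops in a Vec so they replay in insertion order,
// instead of a map that reorders them by key.
struct SchemaBatch {
    ops: Vec<WriteOp>,
}

impl SchemaBatch {
    fn new() -> Self {
        SchemaBatch { ops: Vec::new() }
    }
    fn put(&mut self, key: u64, value: &str) {
        self.ops.push(WriteOp::Put(key, value.to_string()));
    }
    fn delete_range(&mut self, begin: u64, end: u64) {
        self.ops.push(WriteOp::DeleteRange(begin, end));
    }

    // Replay the ops in insertion order against a toy store.
    fn apply(&self, store: &mut BTreeMap<u64, String>) {
        for op in &self.ops {
            match op {
                WriteOp::Put(k, v) => {
                    store.insert(*k, v.clone());
                }
                WriteOp::DeleteRange(b, e) => {
                    store.retain(|k, _| *k < *b || *k >= *e);
                }
            }
        }
    }
}

fn main() {
    let mut batch = SchemaBatch::new();
    batch.put(6, "six");
    batch.put(7, "seven");
    batch.put(8, "eight");
    batch.delete_range(6, 8); // deletes 6 and 7, leaves 8, as the issue expects
    let mut store = BTreeMap::new();
    batch.apply(&mut store);
    println!("remaining keys: {:?}", store.keys().collect::<Vec<_>>());
}
```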

Additional context

[storage][statedb] M3: logical ownership tree

(details TBD)

  • State items have a link to their "parent" in the ownership hierarchy.
  • By simply looking at the root of this hierarchy, one will know the total number of separate state items and their total size in bytes. Rent can be charged accordingly.
  • To support such a hierarchy for the mapping data structure, we might need to introduce a new type of global state item: StateKey::TableMeta

[storage] SchemaBatch::range_delete() & SchemaBatch::range_delete_inclusive()

  1. Expose the range_delete() RocksDB API to the code base.

It's currently available on schemadb::DB but not on the batch, so one can't include a range delete operation in a batch, mixed with other write ops.

  2. Add range_delete_inclusive() capability to the DB and the batch.

range_delete() deletes a half-open range, as in [begin, end), the right side of which is exclusive. For convenience, we want to have range_delete_inclusive([first, last]).


I think we can implement it as range_delete([first, last || 0x0), lmk if I'm wrong.

yeah I was wrong.. it takes a bit more to calculate the next possible key.. (only when it's 0xFF..F that the next key is that affixed with a 0 byte)
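The successor-key computation the comments above converge on can be sketched as follows, assuming fixed-length keys (so a 0x00 byte cannot simply be appended): increment the key as a big-endian integer, and if every byte is 0xFF there is no same-length successor, so the caller must delete to the end of the key space instead. The `next_key` name is illustrative, not an actual schemadb API:

```rust
// Smallest same-length key strictly greater than `key`, if one exists.
fn next_key(key: &[u8]) -> Option<Vec<u8>> {
    let mut next = key.to_vec();
    for byte in next.iter_mut().rev() {
        if *byte != 0xFF {
            *byte += 1;
            return Some(next);
        }
        *byte = 0x00; // carry into the next more-significant byte
    }
    None // key was 0xFF..FF: no same-length successor exists
}

fn main() {
    println!("{:?}", next_key(&[0x01, 0x02])); // successor of [0x01, 0x02]
    println!("{:?}", next_key(&[0x01, 0xFF])); // carry: successor is [0x02, 0x00]
    println!("{:?}", next_key(&[0xFF, 0xFF])); // no same-length successor
}
```

With this helper, range_delete_inclusive([first, last]) becomes range_delete([first, next_key(last))), falling back to "delete to the end" when next_key returns None.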

[storage][statedb] M1.1 create native mapping data structure in the VM adapter

  • [external dependency] ability of Move VM to yield write sets that write things other than resources and modules.
  • [external dependency] See if the Move VM can bring support for "native structs".
  • [dependency] #289 ability of the statedb to store things other than the account blobs
  • Depending on if "native structs" are supported, implement a mapping data structure as either a "native struct" or Move struct.
    • Either way, a table_id (a GUID) is used to identify the table. The "handle" type, held as part of a resource and representing the table, holds the table_id.
    • Add StateKey::TableItem type support to the statedb. (Note that TableMeta is not required at this stage.)
    • Write native functions that convert mapping access operations into operations on table items in the statedb.
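The bullets above can be sketched roughly as follows. The type names and layout here are hypothetical, chosen only to illustrate the idea that a mapping access becomes a statedb lookup keyed by (table_id, serialized key); the real aptos-core types differ:

```rust
// Hypothetical sketch of the state key variants discussed above.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum StateKey {
    AccountBlob { address: [u8; 16] },
    TableItem { table_id: u128, key: Vec<u8> },
}

// A native mapping access like `table[key]` is translated into a statedb
// operation keyed by (table_id, serialized key): the "handle" held in a
// resource stores only the table_id, while each item is its own state item.
fn table_item_key(table_id: u128, serialized_key: &[u8]) -> StateKey {
    StateKey::TableItem {
        table_id,
        key: serialized_key.to_vec(),
    }
}

fn main() {
    let read = table_item_key(42, b"some-serialized-key");
    println!("{:?}", read);
}
```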

Implement an NFT module and user space

We need a simple NFT module for aptos-framework, what does that entail?

  • NFT module in move (description at the bottom)
  • Simple forge test
  • User space structures for reading NFTs in Rust

A good starting spot is actually our NFT and NFTGallery in experimental:
https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/experimental/sources/NFTGallery.move
https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/experimental/sources/NFT.move

There's a lot of cruft in this code though

  1. We can't store too large a collection on a single user account, so we should either not create explicit collections (making them an event-based concept where only indexers know the relationship of a collection) or cap their size.
  2. There are many public functions that are inaccessible; if a function isn't used, we should delete it. Apparently whoever wrote it thought we could call functions from userspace :(
  3. It isn't entirely clear how it works or how to publish new NFTs -- e.g., documentation is missing.

Key requirements:

  • Able to store a lot of data with minimal impact on gas fees associated with users minting or transferring
  • Enumerable -- should be able to programmatically access any NFT from within a module
  • Allows for both the owner and creator / minter to be discoverable and prove ownership
  • Allows for the owner to transfer to a new owner

Proposal 1)

User points to one or more NFT collections, where each collection points to a set of NFTs.

  • Each user has a list of NFT collections that they manage (optional)
  • Each NFT collection is stored on a distinct account and has { minter, creator, uri, name, number of tokens, creation timestamp, namespace }
  • Each NFT is stored on a distinct account and has { owner, uri, token id, collection address, secret }
  • Permissions
    • Minter and creator can increase the number of tokens
  • Process for creating accounts:
    • Generate sha3 256 of { contract, minter, name } or { contract, name, token id, namespace } and append "0x00" for preimage
    • Set auth key to zero
    • Specify owner
  • Namespace -> { minter, name, VM timestamp }
  • Additional transactions can write data into the NFT accounts

Issue: an adversary could notice that a user is attempting to create a bunch of new accounts and try to claim those accounts beforehand by submitting the same data hash. This could result in lost resources, so it may be good to have the namespace be generated by something less easy to compute.

[State Sync] Intelligent Aptos Data Client

Today, the Aptos Data Client is very primitive: in order to identify the availability of data around it, it samples the storage summaries of peers in a random fashion. As a result, nodes often don't have a sufficient view of the state around them until enough samples have been taken (which may take a non-negligible amount of time).

To address this, we should make the Aptos Data Client a little more intelligent:

  1. First, when a new peer connects to a node, the Aptos Data Client should immediately poll that peer (i.e., to identify the peer's data summary as soon as possible).
  2. Second, when regularly polling peers (i.e., to stay up-to-date with changes in data availability), the Aptos Data Client should avoid using random sampling and instead provide some form of fairness or guarantees (e.g., a round-robin scheme, or selecting the next peer by identifying the one with the oldest storage summary).
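The second point above could be sketched like this. The types and names are hypothetical (not the actual Aptos Data Client interfaces); the idea is simply that never-polled peers win immediately and, among polled peers, the one with the stalest summary goes next:

```rust
// Sketch with hypothetical types: pick the next peer to poll by preferring
// never-polled (newly connected) peers, then the peer with the stalest
// storage summary, instead of sampling at random.
struct PeerState {
    id: u64,
    last_polled: Option<u64>, // tick of last poll; None = never polled
}

fn next_peer_to_poll(peers: &[PeerState]) -> Option<u64> {
    // Option<u64> orders None before Some(_), so a brand-new peer always
    // wins; among polled peers, the smallest (oldest) tick wins.
    peers.iter().min_by_key(|p| p.last_polled).map(|p| p.id)
}

fn main() {
    let peers = vec![
        PeerState { id: 1, last_polled: Some(10) },
        PeerState { id: 2, last_polled: None }, // just connected
        PeerState { id: 3, last_polled: Some(5) },
    ];
    assert_eq!(next_peer_to_poll(&peers), Some(2)); // poll the new peer first
}
```

Once every peer has been polled at least once, this degrades gracefully into oldest-summary-first, which gives the fairness guarantee the issue asks for.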

There may also be other optimizations/improvements we can make, but I'll leave that up to the reader 😄.

Delete DiemID from Aptos-Core

DiemID is a legacy service like ENS that made sense in our DPN world. We plan on removing all DPN constructs, but this is one of them that is relatively modular and can be cleanly eliminated. It also is very visible in our public interfaces -- like Faucet. Success can be verified by running our e2e and faucet tests.

cargo test --package smoke-tests
cargo test --package diem-faucet

Here's a couple pointers of what should definitely be gone:
https://github.com/aptos-labs/aptos-core/blob/main/types/src/diem_id_identifier.rs
https://github.com/aptos-labs/aptos-core/blob/main/diem-move/diem-framework/DPN/sources/DiemId.move

By the end you should have a clear idea of how to remove (and add) move <-> userspace functionality -- which will be particularly useful as we expand on the platform in the coming weeks / months.

[State Sync] Tracker: State Sync v2 (MVP)

Last Update: 18 March 2022
Owner: @JoshLind

Goals

This issue is to track: (i) the development and deployment of state sync v2; and (ii) the deprecation and removal of state sync v1 (a secondary task).

Task Completion and Checklist

  • Propose a new storage API for state sync v2 and identify the information currently missing from the implementation and APIs.
  • Clean up the interfaces and abstractions between state sync, mempool and consensus.
  • Fix the panic in the existing reconfiguration notification code.
  • Clean up the reconfiguration subscription service and add support for event subscriptions.
  • Build a simple template for the Storage Service (server-side). Stub out unsupported API calls.
  • Define the Aptos Data Client API and implement a simple interface that can be used as a starting point.
  • Build a Data Streaming Service that supports (bulk) epoch and transaction fetching.
  • Extend the Data Streaming Service to support (bulk) transaction outputs and account fetching.
  • Extend the Data Streaming Service to support (continuous) transaction and transaction output fetching.
  • Extend the Data Streaming Service to support stream termination and multiple stream creation.
  • Add simple integration and unit tests to the Data Streaming Service.
  • Add logging for the Data Streaming Service.
  • Add metrics for the Data Streaming Service.
  • Add configs for the Data Streaming Service.
  • Extend the Data Streaming Service to handle data advertising attacks (i.e., handle each case in a simple way).
  • Clean up some of the naming around the Data Streaming Service.
  • Build a simple State Sync Multiplexer that wraps State Sync v1.
  • Build a simple Storage Service.
  • Add configs to the Storage Service.
  • Add metrics and logging to the Storage Service.
  • Implement strategies B1 and S1 (transaction execution) in state sync v2.
  • Mark state sync v1.1 for deprecation.
  • Modify storage to serve account state chunks dynamically.
  • Implement strategy B4 (waypoint account state syncing) in state sync v2.
  • Implement strategy B3 (epoch-skipping and waypoint syncing) in state sync v2.
  • Modify storage to persist and serve transaction outputs.
  • Implement strategies B2 and S2 (applying transaction outputs) in state sync v2.
  • Remove state sync v1.1 from the codebase.
  • Update strategies B1 and S1 to use pipelined execution (i.e., decouple execute and commit).
  • Update runbooks to explain different state syncing strategies
  • Update metric dashboards to include state sync v2

Post MVP Tasks

  • Reason about Data Advertising Attacks
  • Explore issues around maximum data sizes in a single network packet

Improve our CLA process

We're gonna have signed CLAs go to [email protected].
Someone will then have to update a DB with the GitHub user id of the person that signed.
When reviewing contributions, someone will have to manually validate that the person has signed a CLA.

Maybe we can make it so that a GitHub Action reads from the DB and automatically sets whether or not they have signed the CLA -- much like in diem/diem.

[storage] `diemdb::RocksdbPropertyReporter` reports more properties

  1. The diemdb::RocksdbPropertyReporter mentioned in the title currently reports only a few properties out of a potentially large list.

  2. The manually maintained ROCKSDB_PROPERTY_MAP also seems not worth the maintenance burden.

  3. Potentially expose all of these:
    https://github.com/facebook/rocksdb/blob/d74468e348068b1d47e836c2c71141e670ae287b/include/rocksdb/db.h#L1056-L1101

  4. Figure out whether those are applicable to all CFs, or whether some are meaningful only for the default CF. Adjust the code accordingly.

  5. Decide whether ROCKSDB_PROPERTY_MAP stays.

Update Rust packages

We haven't updated our packages in quite a while; we should do this periodically, especially for Rust crates.

Delete Scripts from the Repo

Legacy code used Scripts to perform operations on the VM. Scripts are awesome in that you can perform complex operations in a single transaction. They are not user friendly, however, in that they require users to write and compile Move snippets even for trivial things like P2P transactions. The team then introduced Script Functions to complement Scripts and cover the common use cases. Script Functions are how many other blockchains work: a transaction indicates a function and a set of parameters to be executed by the VM. As part of our migration to 32-byte addresses and to Aptos-Framework, we have a lot of (potentially) useful tests that depend on Script Functions; let's port them over and delete the DPN Scripts:

  • Clean up framework code on scripts (aptos-move/framework///legacy)
  • Review aptos-move/e2e-testsuite
  • Identify other uses of script( and clean that up

Remove GovernanceRoles

GovernanceRoles are only relevant to the DPN and shouldn't be in Aptos core at all.

We can most certainly eliminate the notion and prioritization of it in mempool.

I'd be curious if we could remove it from more pieces of the code and just treat it as a scalar value until we can eliminate the diem-vm adapter.

Make sure all DB APIs return proper error message when the content requested is out of the pruner window.

This is part of #103

  1. Whenever possible, our error message should indicate that the content was pruned.
  2. When not possible (e.g., get_txn_by_hash), return None.
  3. APIs shouldn't panic on out-of-window queries.
  4. It would be nice to convert the DB interface to return AptDbError, so tests can assert on error types. We can do something like impl From<anyhow::Error> for AptDbError plus an AptDbError::Misc(anyhow::Error) variant so that not much needs to change.
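A minimal sketch of that error shape is below. AptDbError is the name suggested in the issue, but the variants are illustrative, and std::io::Error stands in for anyhow::Error to keep the sketch dependency-free:

```rust
use std::fmt;

// Sketch only: variant names and the stand-in source error are assumptions.
#[derive(Debug)]
enum AptDbError {
    NotFound { version: u64, min_readable_version: u64 }, // pruned content
    Misc(String),
}

impl fmt::Display for AptDbError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AptDbError::NotFound { version, min_readable_version } => write!(
                f,
                "version {} has been pruned; oldest readable version is {}",
                version, min_readable_version
            ),
            AptDbError::Misc(msg) => write!(f, "{}", msg),
        }
    }
}

// A blanket conversion (in the issue: From<anyhow::Error>) keeps call
// sites using `?` unchanged while tests match on error variants.
impl From<std::io::Error> for AptDbError {
    fn from(e: std::io::Error) -> Self {
        AptDbError::Misc(e.to_string())
    }
}

// A DB read that reports pruning via a typed error instead of panicking.
fn get_at_version(version: u64, min_readable_version: u64) -> Result<(), AptDbError> {
    if version < min_readable_version {
        return Err(AptDbError::NotFound { version, min_readable_version });
    }
    Ok(())
}

fn main() {
    let err = get_at_version(3, 100).unwrap_err();
    println!("{}", err);
}
```

Tests can then use `matches!(err, AptDbError::NotFound { .. })` instead of string-matching error messages.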

[Feature Request] Introduce mock framework for unit testing

🚀 Feature Request

Introduce mock framework for easier unit testing.

Motivation

There are a number of manually written mocks in the Aptos codebase (see MockSharedMempool, for example), because no good mock testing framework was available when the code was written. https://docs.rs/mockall/latest/mockall/ appears to be a mature, well-supported mocking framework. We can introduce it and transition these tests to use it.
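For context, here is the hand-rolled pattern mockall automates. The trait and names are hypothetical (not the real Aptos interfaces); with mockall, `#[automock]` on the trait would generate the mock instead of writing it by hand:

```rust
// A hypothetical dependency the code under test talks to.
trait Mempool {
    fn add_txn(&mut self, txn: u64) -> bool;
}

// What a manually written mock looks like today: record calls, return a
// canned answer. mockall's `#[automock]` generates this boilerplate.
#[derive(Default)]
struct MockMempool {
    added: Vec<u64>, // calls recorded for later assertions
    accept: bool,    // canned return value
}

impl Mempool for MockMempool {
    fn add_txn(&mut self, txn: u64) -> bool {
        self.added.push(txn);
        self.accept
    }
}

// Code under test only sees the trait, so the mock slots in transparently.
fn submit_all(pool: &mut dyn Mempool, txns: &[u64]) -> usize {
    txns.iter().filter(|&&t| pool.add_txn(t)).count()
}

fn main() {
    let mut mock = MockMempool { added: vec![], accept: true };
    assert_eq!(submit_all(&mut mock, &[1, 2, 3]), 3);
    assert_eq!(mock.added, vec![1, 2, 3]);
}
```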

[Feature Request] Enable 20-byte account address

🚀 Feature Request

Motivation

We want more bytes in the account address to provide more addresses for future application scenarios and better security.
Currently, Diem Move provides a feature supporting 20-byte addresses. However, there is still a lot of work required to onboard 20-byte addresses. Here are some of the steps:

Step 1: fix some broken modules in Move. These are still on 16 bytes and will break our tests even if we are on 20 bytes:
https://github.com/diem/move/blob/main/language/testing-infra/transactional-test-runner/src/vm_test_harness.rs#L36

Step 2: there are multiple incompatible changes on the Move side. The function we used no longer exists, and we have to change our function call if we want to include the change in step 1. Some of the incompatible changes are: (1) requiring a custom cost table to be passed in (areshand/move@1a039e4), and (2) regex pinned to a higher version (1.5.5) in Move while we pin to a lower version (1.4.3) in multiple places, which causes conflicts.

Step 3: we could probably then switch to the 20-byte address Move by updating our toml file with the address20 feature.

Pitch

Describe the solution you'd like
Use the 20-byte account address provided by Move.

Describe alternatives you've considered
A 32-byte address is under discussion.

Are you willing to open a pull request? (See CONTRIBUTING)
Yes

Additional context
