
aips's People

Contributors

0xmaayan, aladeenb, alexfilip2, alinush, areshand, banool, bchocho, danielxiangzl, davidiw, gerben-stavenga, hariria, igor-aptos, johnchanguk, joshlind, junkil-park, lightmark, michelle-aptos, movekevin, msmouse, mstraka100, rex1fernando, runtian-zhou, sherry-x, sitalkedia, thepomeranian, vusirikala, wintertoro, wrwg, xindingw, zjma


aips's Issues

[AIP-26][Discussion]Quorum Store

AIP-26 - Quorum Store

Summary

Quorum Store is a production-optimized implementation of Narwhal [1] that improves consensus throughput. Quorum Store was tested on previewnet, a mainnet-like network of 100+ nodes, where it increased TPS by 3x. It will remove consensus as the primary bottleneck for throughput on mainnet. It has a wide surface area, changing details of how validators disseminate, order, and execute transactions.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-26.md

[AIP-X] [Add Smart Contract developer fee sharing]

AIP Discussion

Implement a fee-sharing mechanism, where half of the transaction fee of a smart contract call will go to the contract’s contributor address(es), and the other half will be burnt as usual, or distributed per AIP-7.

Discussion and feedback thread for AIP.

Link to AIP:

Motivation

Developers have historically had a difficult time finding funding sources unless they either a) create their own token, or b) are funded by a foundation/grant. In the United States especially, creating your own token raises significant legal hurdles, since it must be structured so as not to be considered a security by the SEC. Receiving grants from a foundation is also difficult, but not insurmountable. Adding a clear monetization/incentive path for smart contract developers would be a massive boon for the Aptos ecosystem.

In several competing ecosystems, including Near, smart contract developers receive 30% of all fees spent invoking a smart contract. This incentivizes smart contract developers to build on the platform, as gaining more mindshare means more smart contract invocations. Furthermore, it gives developers a clearer incentive and monetization path.

Juno is another smart contract platform that recently added a fee-sharing mechanism, with developers receiving 50% of fees and validators receiving the other 50%.

Pitch


Rather than burn all fees spent on block creation, share part of them with smart contract developers. Ideally, this share would be configurable by governance so that it can be changed on the fly.
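A minimal sketch of what a governance-configurable split could look like on-chain (the struct and field names are purely illustrative, not a proposed API):

// Hypothetical on-chain config, updated via a governance proposal.
struct FeeShareConfig has key {
    // Basis points (out of 10,000) of each transaction fee routed to the
    // called contract's contributor address(es); the remainder is burnt
    // or distributed per AIP-7.
    contributor_share_bps: u64,
}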

[AIP-6][Discussion] Delegation pool for node operators

AIP-6: Delegation pool for node operators

Summary

Currently, only token owners with 1M APT are able to stake their tokens and earn staking rewards. We want to enable everyone in the community to have access to staking rewards as well. We propose adding a delegation contract to the framework. This extension enables multiple token owners to contribute any amount of APT to the same permissionless delegation pool. As long as the total amount of APT in a delegation pool meets the minimum stake required for staking, the validator node it is set on will be able to join the active set and earn staking rewards.

Participants (delegators) are rewarded proportionally to their deposited stake at the granularity of an epoch. Delegators have the same stake management controls (such as add_stake, unlock, withdraw, etc.) in the delegation pool as pool owners do in the existing staking-contract implementation.

For the P0, existing stake pools cannot be converted into delegation stake pools; a new delegation stake pool would have to be created. However, this could be a future development.

Motivation

The current staking mechanism puts the community at a disadvantage, as it is improbable that a single individual holds the 1M APT required to start their own validator.

Once this functionality is enabled, the community can participate in staking activities and incentivize validators, thus adding value to the ecosystem.

On the other hand, the entry stake requirement makes it difficult for some actors, possibly experienced in maintaining and securing blockchain validators, to join the network as node operators. With this new functionality, they can get support from the community to enter the active set.

In the current staking implementation, a validator's activeness is determined by a single entity's stake (the stake-pool owner's), and that entity can leave the node unstaked at any time (in practice, on lockup period expiration). In the new implementation, this scenario is less likely to occur.

Rationale

This feature has the potential to increase the number of validators in the ecosystem leading to further decentralization and a higher amount of tokens staked on the protocol.

Specification

A detailed specification of the proposed solution can be found here.

To summarize, the following behavior would be supported:

  • Admin of the delegation pool = node operator
    • Initiates the delegation pool
    • Joins the validator set
    • Leaves the validator set
    • Sets the commission rate at the start of the contract
      • The contract pays out commission first, then rewards to principal
      • The commission rate cannot be changed
  • Delegators = token owners
    1. Add stake
    2. Schedule stake for unlocking
    3. Cancel unlocking of stake (moves stake back to its previous state)
    4. Withdraw stake
  • Reset-lockup = no one
  • Delegated voter = Admin

The operator fee, previously configured by the owner, will be immutably set at the pool's creation by the node operator itself. Delegators therefore have the option to participate in pools of their choice with regard to the commission rate.
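To make the delegator operations above concrete, here is a sketch of the delegator-facing entry points (the signatures are illustrative; the final framework API may differ):

// Delegator adds `amount` octas of APT to the pool at `pool_address`.
public entry fun add_stake(delegator: &signer, pool_address: address, amount: u64);
// Schedules `amount` of the delegator's active stake for unlocking.
public entry fun unlock(delegator: &signer, pool_address: address, amount: u64);
// Cancels unlocking: moves `amount` of pending-inactive stake back to active.
public entry fun reactivate_stake(delegator: &signer, pool_address: address, amount: u64);
// Withdraws `amount` of the delegator's inactive (unlocked) stake.
public entry fun withdraw(delegator: &signer, pool_address: address, amount: u64);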

Reference Implementation

There is a reference implementation, integrated directly into the framework and treating the aptos_framework::stake module as a black box, at https://github.com/bwarelabs/aptos-core/tree/bwarelabs/shares_based_delegation_pool.

Risks and Drawbacks

The staking API initially exposed would incur a higher gas cost (only for delegation pools), as additional resources have to be stored and maintained to keep track of individual delegators' stakes and the rewards earned by the pool's aggregated stake.

Future Potential

We could uniformly enforce that a delegator cannot decrease the total stake on the pool below the active threshold or decide to fully unstake the delegation pool.

We could require the node operator to deposit a minimum stake amount in order for its pool to accept delegations.

Suggested implementation timeline

We hope to get it on the mainnet in Q1, but this is pending further technical scoping.

[AIP-3][Discussion] Governance Multi-Step Proposal

Summary

The Aptos on-chain governance is a process by which the Aptos community can govern operations and development of the Aptos blockchain via proposing, voting on, and resolving proposals.

Currently, an on-chain proposal can only contain one script. Because the MoveVM applies changes to the VM only at the end of a script, we cannot include changes that depend on previous changes in the same script. This means we often need multiple scripts to complete an upgrade.

When we have an upgrade that involves multiple scripts, we need to make multiple single-step proposals, have voters vote on each of them, and resolve each of them manually. We would like to simplify this process by supporting multi-step proposals: a single on-chain proposal may contain multiple scripts, which are executed in order if the proposal passes.

Motivation

With multi-step proposals, we will be able to create an on-chain proposal that contains multiple steps, have voters vote on it, and execute all scripts in order at once. This will save effort for the community, as well as enforce that the scripts are executed in order.

Alternatives Considered

Alternative Solution I: Keeping the Status Quo

Currently, proposing an upgrade with multiple scripts using single-step proposals is time-consuming, manual, and error-prone if only part of the scripts get executed. We think adding support for multi-step proposals is better because it saves time for proposers, voters, and resolvers, and ensures that the steps execute in the right order.

Alternative Solution II: Making a change to the MoveVM

Because the MoveVM applies all changes at the end of a script, we cannot include all upgrades within one single script when some changes depend on other changes in the same script.

We could make a VM-level change to apply changes to the VM at the end of each transaction; this would allow executing all upgrades within one single script. However, it would be a much larger change, taking multiple months and affecting more parts of the system. We think the smart contract change essentially achieves the same thing and is simpler than a VM-level change.

Specification

To support multi-step proposals, we introduce the concept of a chain of scripts. The voter approves / rejects the entire chain of scripts by voting yes / no on the first script of the chain. There is no partial yes / no.

For example, if we have a multi-step proposal that contains <script_a, script_b, script_c, script_d>, the user will only vote on the first script script_a. By voting yes on script_a, they say yes to the entire chain of scripts <script_a, script_b, script_c, script_d>.

When a multi-step proposal passes, we will use the CLI to resolve the chain of scripts sequentially.

Chain of Scripts
When we produce the execution scripts, we start from the last script (say it is the x-th script), hash it, and pass the hash into the (x-1)-th script. The on-chain proposal only contains the first script, but the content and order of the entire chain are implicitly committed to by the hash embedded in the first script. We will provide CLI tooling for generating the chain of scripts and for verifying that all scripts are present and in the right order.

Creating and Voting on Multi-Step Proposals

The flow will mostly stay the same for creating and voting on proposals:

  • When creating a proposal, the proposer will only pass in the first script in the execution_hash parameter.
  • When voting on a proposal, voting yes / no on the first script indicates that the voter approves / rejects the entire chain of execution scripts.

Resolving Multi-Step Proposals

In our proposed design, resolving a multi-step proposal works as follows:

  • aptos_governance::resolve_multi_step_proposal() returns a signer and replaces the current execution hash with the hash of the next script.
  • voting::resolve() checks that the hash of the current script matches the proposal’s current execution hash, updates proposal.execution_hash on-chain with the next_execution_hash, and emits an event for the current execution hash.
  • voting::resolve() marks the proposal as resolved if the current script is the last script in the chain.
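Each step in the chain is an ordinary governance script; here is a sketch of one intermediate step, assuming the resolve function described above (the hash value and the step's upgrade logic are placeholders):

script {
    use aptos_framework::aptos_governance;

    fun main(proposal_id: u64) {
        // Placeholder: the hash of the next script in the chain, embedded
        // when the chain of scripts is generated; empty for the final step.
        let next_execution_hash = x"00";
        // Returns the framework signer and swaps the proposal's on-chain
        // execution hash for next_execution_hash, so that only the intended
        // next script can be resolved afterwards.
        let _framework_signer = aptos_governance::resolve_multi_step_proposal(
            proposal_id,
            @aptos_framework,
            next_execution_hash,
        );
        // This step's actual upgrade logic would run here using the signer.
    }
}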

Security

  • Smart Contracts Security & Audit

The implementation will go through a security audit with a reputable third-party audit firm to ensure the safety and correctness of the code and operations.

  • Validation

We are mostly concerned about two types of validation:

  1. validate that all scripts are present;
  2. validate that executing them one by one in order works (the chain and replay behave correctly).

We will support CLI tooling for the above validations.

  • Decentralization & Governance

This proposal changes the mechanism of what the content of a proposal includes (multiple scripts vs. single scripts), and does not change the powers of the on-chain governance. There is no impact on decentralization or governance abilities.

Tooling

We will add CLI and governance UI support for creating, voting on, and resolving multi-step proposals.

Reference Implementation

Draft PR:

aptos-labs/aptos-core#5445

Risks and Drawbacks

Decentralization - no impact or changes.

Security - the added governance code will go through strict testing and auditing.

Tooling - CLI support only covers Aptos governance, but the multi-step mechanism can be reused for general-purpose DAO/governance. In the near future, the Aptos team and community can improve the CLI/tooling to be more generic.

Tentative Implementation Timeline

11/07 - 11/22

  • publish draft AIP for community feedback.
  • design review and preliminary security review for the multi-step proposal design.

11/22 - 11/30

  • formalize AIP
  • flesh out draft PR, add unit tests, and get the PR landed.
  • deploy Move changes on Devnet.

11/30 - 12/14

  • add CLI support.
  • operational testing.
  • deploy Move changes on Testnet.

12/15 - 12/20

  • e2e testing.
  • potentially deploy Move changes on Mainnet.

[AIP-35][Charging Invariant Violation Errors]

AIP Discussion

Summary

Charge transactions that trigger invariant violation errors instead of discarding them.

Motivation

An invariant violation error is a special type of error triggered in the Aptos VM when some unexpected invariant is violated. Right now, transactions that trigger such an error are marked as discarded, which could potentially be a DDoS vector for our network, as it lets users submit computations without being charged.

Examples of transactions that could trigger an invariant violation error are transactions that violate the MoveVM's paranoid type checker.

Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-35.md

Wallet Interface

Some wallets export an aptos object, and some do not.

In addition, the current aptos interface lacks query methods such as:

getAccountModule(s)
getAccountResource(s)
getAccountTransaction(s)

[AIP-27][Discussion]Sender Aware Transaction Shuffling

AIP-27 - Sender Aware Transaction Shuffling

Summary

Sender-aware shuffling reorders the transactions within a single block so that transactions from the same sender are spaced as far apart as possible. This reduces the number of conflicts and re-executions during parallel execution. Our end-to-end performance benchmarks show that sender-aware shuffling can improve TPS by 25%.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-27.md

[AIP-8][Discussion] Higher-Order Inline Functions for Collections

aip: 8
title: Higher-Order Inline Functions for Collections
author: wrwg
discussions-to: #33
Status: Draft
last-call-end-date: TBD
type: Standard (Framework)
created: 2023/1/9
updated: 2023/1/9

Summary

Recently, the concept of inline functions was added to Aptos Move. These functions are expanded at compile time and have no equivalent in the Move bytecode. This allows them to implement a feature that is currently not available for regular Move functions: taking functions, given as lambda expressions, as parameters. Given this, we can define popular higher-order functions like for_each, filter, map, and fold for collection types in Move. In this AIP, we suggest a set of conventions for those functions.

Motivation

It is well known that higher-order functions lead to more concise and correct code for collection types. They are widely popular in mainstream languages today, including Rust, TypeScript, Java, C++, and more.

Rationale

Move currently has no traits, which would allow defining an Iterator type that comprehends the available functions across multiple collection types. Here, we want to establish at least a convention for the naming and semantics of the most common such functions. This lets framework writers know which functions to provide, developers remember which functions are available, and auditors understand what they mean in code.

Specification

Foreach

Each iterable collection SHOULD offer the following three functions (illustrated here with the vector<T> type):

public inline fun for_each<T>(v: vector<T>, f: |T|);
public inline fun for_each_ref<T>(v: &vector<T>, f: |&T|);
public inline fun for_each_mut<T>(v: &mut vector<T>, f: |&mut T|);

Each of these functions iterates over the collection in the order specific to that collection. The first consumes the collection, the second allows referring to the elements, and the last allows updating them. Here is an example using for_each_ref:

fun sum(v: &vector<u64>): u64 {
  let r = 0;
  for_each_ref(v, |x| r = r + *x);
  r
}

Fold, Map, and Filter

Each iterable collection SHOULD offer the following three functions, which transform the collection into a new collection of the same or a different type:

public inline fun fold<T, R>(v: vector<T>, init: R, f: |R, T|R): R;
public inline fun map<T, S>(v: vector<T>, f: |T|S): vector<S>;
public inline fun filter<T: drop>(v: vector<T>, p: |&T|bool): vector<T>;

To illustrate the semantics of the fold and map functions, we show the definition of the latter in terms of the former:

public inline fun map<T, S>(v: vector<T>, f: |T|S): vector<S> {
    let result = vector<S>[];
    for_each(v, |elem| push_back(&mut result, f(elem)));
    result
}
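As a usage sketch, the sum example shown earlier for for_each_ref can be written with fold in a single expression:

fun sum_via_fold(v: vector<u64>): u64 {
    // Accumulate the running total; `acc` is the R value, `x` each element.
    fold(v, 0, |acc, x| acc + x)
}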

Affected Data Types in Aptos

Those data types in the Aptos frameworks should get the higher-order functions (TO BE COMPLETED):

  • Move stdlib
    • vector
    • option
  • Aptos stdlib
    • simple map
    • ?
  • Aptos framework
    • ?
  • Aptos tokens
    • property map
    • ?

Notes

  • It is recommended that collection types outside of the Aptos frameworks use the same conventions.
  • Only collection types which are expected to be iterable in a single transaction should offer these functions. For example, tables are not such collections.
  • Some collection types may diverge by adding fewer or more of those functions, depending on the type of the collection.

Reference Implementation

TODO: link to the code in the framework at main once the PRs landed

Risks and Drawbacks

None visible.

Future Potential

  • Parts of the Aptos framework can be rewritten to make them more auditable by removing low-level loops and replacing them with calls to higher-order functions.
  • The Move Prover can benefit from these functions to avoid low-level loops, which are one of the most challenging parts of working with the prover.
  • In the future, function parameters may also be supported by the Move VM. This proposal then simply generalizes, such that higher-order functions on collections no longer need to be inline functions.

Suggested implementation timeline

The Move compiler implementation became feature complete with PR 822. The remaining effort is small, so we expect to fit this into the very next release.

[AIP-17][Discussion] Reducing Execution Costs by Decoupling Transaction Storage and Execution Charges

AIP-17 Discussion

Discussion and feedback thread for AIP-17.

Link to AIP:

https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-17.md

AIP-17 - Reducing Execution Costs by Decoupling Transaction Storage and Execution Charges

Summary

This AIP proposes decoupling storage-space-related gas charges from execution and I/O gas charges. Execution and I/O charges will continue to be determined by the gas unit price, hence the Aptos "fee market". Storage-space-related charges will be based on absolute values in the native token. This decoupling enables a substantial reduction in transaction costs, especially for transactions heavy on execution and I/O.

Motivation

In Blockchain architecture, there are two fundamental types of resources, categorized by the nature of their scarcity:

  1. Transient: CPU, IOPS, and bandwidth are effectively replenished every instant that the system remains online. The price of such effectively unlimited resources can be driven by demand, making an underutilized system cheap or even free. Because these resources are bought or rented as a sunk cost, not using them is nearly pure waste, with the caveat that there is some cost associated with power and possibly network traffic.
  2. Permanent: State items, once allocated, cost disk space and impose a performance penalty on the entire fleet forever, unless deleted. It makes sense to charge for state items whenever, and for however long, they exist in the DB. Similarly, transactions themselves and emitted events occupy disk space for a non-negligible period of time (although not permanently, since nodes do prune the ledger) and should be charged accordingly.

Each transaction must have a maximum amount of gas to limit both how long it can execute and how much storage it can consume. As a result, execution, I/O, and space consumption must be priced relative to each other: execution fees tend to be above market rate, whereas storage fees do not reflect the scarcity of storage, which makes supporting concepts like refunds challenging.

The proposal here is to charge for storage space consumption in terms of the native token instead of gas units, so that it is independent of the user-specified gas unit price. To simplify the implementation, the user interface does not change; the effective cost is simply deducted from the maximum transaction fee specified with a user transaction.

Alternative - Maximum Storage Space in a Transaction

A more systematic approach would be to introduce a new field on the transaction to specify maximum storage fees. While this provides a more systematic and explicit means to indicate intended outcomes, it creates a burden on the ecosystem to adopt a new transaction standard. Current expectations are that such additional friction provides limited value; however, over time, Aptos will aggregate more useful features to expose and will adopt this along with other useful updates to the transaction interface.

Specification

Language and framework

No visible change to how one writes Move, and no visible change to how one uses the SDK and CLI. But the economics change:

Economics

The dimensions of the storage gas charges will remain, because these operations do impose transient resource consumption at runtime:

  • per_item_create and per_item_write will be adjusted.
  • per_byte_create and per_byte_write remain the same.

As a follow-up, the distinction between the create and write variants of the storage gas parameters can be removed. On top of that, all storage gas charges can potentially be scaled down, as they no longer bear the responsibility of defending against state explosion.

In addition, storage-space-related gas parameters will be defined in units of the native token. At runtime, the cost is calculated in the native token and converted to gas units according to the user-specified gas unit price: charge_in_gas_units = charge_in_octas / gas_unit_price. For example, a 50,000 Octa storage fee on a transaction with a gas unit price of 100 Octas per gas unit is deducted as 500 gas units.

Configuration

These entries will be added to the global gas schedule to specify the native token costs for various storage space consuming operations:

  • storage_fee_per_state_slot_create
  • storage_fee_per_excess_state_byte
  • storage_fee_per_event_byte
  • storage_fee_per_transaction_byte

Each of the per-byte charges respects a per-transaction free quota:

  • free_write_bytes_quota (1KB): existing; now also governs storage_fee_per_excess_state_byte.
  • large_transaction_cutoff (600 bytes): existing; now also governs storage_fee_per_transaction_byte.
  • free_event_bytes_quota (1KB): new; governs storage_fee_per_event_byte.
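For illustration, a per-byte charge under a free quota amounts to the following (a sketch; the function and parameter names are for illustration only):

// Octas charged for bytes beyond the free quota.
fun fee_for_excess_bytes(bytes: u64, free_quota: u64, fee_per_excess_byte: u64): u64 {
    if (bytes <= free_quota) {
        0
    } else {
        (bytes - free_quota) * fee_per_excess_byte
    }
}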

These per-transaction hard limits for different categories of gas charges will be added to the global gas schedule, reflecting that the network has different amounts of resources in different categories: a reasonable amount of gas spent on disk space allocation in a single transaction might be enough for another transaction to run CPU-consuming operations for minutes.

  • max_execution_gas: in gas units.
  • max_io_gas: in gas units, governing the transient aspects of the storage cost, i.e., IOPS and bandwidth.
  • max_storage_fee: in Octas, governing the new category of fees described in this proposal.

Reference Implementation

#6683
#6816
#6837

| Transaction Type | Current Cost | New Cost | Change | Reduced by factor |
| --- | --- | --- | --- | --- |
| (minimal per-transaction charge) | 15000 | 200 | -98.67% | 75.0x |
| Transfer | 54200 | 600 | -98.89% | 90.3x |
| CreateAccount | 153600 | 101600 | -33.85% | 1.5x |
| CreateTransfer | 188000 | 101900 | -45.80% | 1.8x |
| CreateStakePool | 776200 | 207700 | -73.24% | 3.7x |
| RotateConsensusKey | 2178300 | 21800 | -99.00% | 99.9x |
| JoinValidator100 | 625300 | 461100 | -26.26% | 1.4x |
| AddStake | 675700 | 461500 | -31.70% | 1.5x |
| UnlockStake | 163500 | 1600 | -99.02% | 102.2x |
| WithdrawStake | 153900 | 1600 | -98.96% | 96.2x |
| LeaveValidatorSet100 | 610500 | 460900 | -24.50% | 1.3x |
| CreateCollection | 174100 | 100800 | -42.10% | 1.7x |
| CreateTokenFirstTime | 382100 | 152400 | -60.12% | 2.5x |
| MintToken | 117100 | 1200 | -98.98% | 97.6x |
| MutateToken | 272200 | 52200 | -80.82% | 5.2x |
| MutateToken2ndTime | 129000 | 1300 | -98.99% | 99.2x |
| MutateTokenAdd10NewProperties | 430900 | 4300 | -99.00% | 100.2x |
| MutateTokenMutate10ExistingProperties | 451100 | 4500 | -99.00% | 100.2x |
| PublishSmall | 745700 | 107400 | -85.60% | 6.9x |
| UpgradeSmall | 660200 | 8100 | -98.77% | 81.5x |
| PublishLarge | 10735800 | 9810700 | -8.62% | 1.1x |

Risks and Drawbacks

Folding the storage fee into the gas charge is unintuitive. Better tooling will be required to gain visibility into the cost structure of transactions. Furthermore, the documentation site needs to be updated with these changes.

Future Potential

Deletion Refund

To incentivize "state hygiene", i.e. state space cleanup, a further effort will enable refunding part or all of the storage fee paid for allocating a storage slot. The fact that the allocation is charged in the native token instead of gas units, as a result of this proposal, largely eliminates the concerns around storage refund arbitrage.

Suggested deployment timeline

Testnet and Mainnet in March.

[AIP-33][Block Gas Limit]

AIP-33 - Block Gas Limit

Summary

The per-block gas limit (or simply block gas limit) is a new feature that terminates block execution when the gas consumed by the committed prefix of transactions exceeds the limit. This ensures that each block is executed within a predetermined budget of computational resources and time, thereby providing predictable latencies for latency-critical applications, even in the presence of highly sequential and computationally heavy transactions.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-33.md

[AIP-15][Discussion] Token Standard Reserved Properties

AIP Discussion

Discussion and feedback thread for AIP.


aip: 16
title: Token Standard Reserved Properties
author: areshand
discussions-to: #28
Status: Review
last-call-end-date (*optional):
type: Standard (framework)
created: 01/06/2023

Summary

This proposal introduces framework-reserved properties to

  1. prevent properties used by the token framework from being changed freely by creators;
  2. reserve properties that can be extended to support programmable token behavior, e.g. token freezing, soulbound tokens, etc.

Motivation

We have existing token properties used by our token standard to control who can burn a token. However, when a token's default properties are mutable, creators can add these control properties after the token has been minted, and the creator can then burn these tokens from collectors. This is a known issue, called out in the token standard, with the recommended best practice of setting token default properties to be immutable. To prevent this entirely, this proposal makes it infeasible to update the control properties after token creation.

The reserved framework properties can also be utilized to control token behavior and make it programmable. One example is a framework-reserved property that freezes tokens at the token store.

Specification

We have 3 existing control properties:

  • TOKEN_BURNABLE_BY_CREATOR
  • TOKEN_BURNABLE_BY_OWNER
  • TOKEN_PROPERTY_MUTATBLE

When these 3 properties exist in the TokenData’s default_properties, the creator can use them to control burn and mutation eligibility. We want to prevent the creators from mutating these framework-reserved properties after the token creation.

Reserve all keys with the "TOKEN_" prefix for framework usage. When the creator mutates the token_properties stored with the token, or the default_properties stored with the TokenData, we check whether each property name starts with the "TOKEN_" prefix and abort the mutation if any does.

We add friend functions add/update_non_framework_reserved_properties to the property map module. These functions check all the properties to be added or updated, so that only non-framework-reserved properties can be added or updated.

// this function will be called by token mutation methods to add or update token properties
public(friend) fun add_non_framework_reserved_properties(map: &mut PropertyMap, key: String, value: PropertyValue)
public(friend) fun update_non_framework_reserved_properties(map: &mut PropertyMap, key: String, value: PropertyValue)
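The prefix check itself can be a small helper along these lines (a sketch; the helper name and module placement are illustrative):

use std::string::{Self, String};

// Returns true if `key` starts with the framework-reserved "TOKEN_" prefix.
fun is_framework_reserved(key: &String): bool {
    let prefix = string::utf8(b"TOKEN_");
    let plen = string::length(&prefix);
    // Early exit: keys shorter than the prefix cannot be reserved.
    if (string::length(key) < plen) {
        false
    } else {
        string::sub_string(key, 0, plen) == prefix
    }
}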

Risks and Drawbacks

Overhead of token default property mutation cost

Validating framework-reserved properties costs extra gas when calling mutate_tokendata/token_property. Currently, when creators mutate properties, the function loops through all the properties to be mutated and updates each property value. The additional cost is checking whether each property key starts with TOKEN_. This cost should be minimal, using substring matching with an early exit from comparisons. Also, since the current function already performs many validations (string length, key duplication, etc.), the overhead of checking whether a key has the prefix should have a negligible impact on gas costs and user experience.

Timelines

The change has yet to be implemented. Ideally, it should land on main in 1 to 2 weeks, then on testnet, followed by mainnet.

Future Potential

This unblocks follow-up AIPs on soulbound tokens and token freezing that leverage these framework-reserved properties.

References

[AIP-5][Discussion] Soulbound and Freezing Token Standard

[AIP-5] Soulbound and Freezing Token Standard

This standard allows for the existence of soulbound NFTs and freezing tokens.

Motivation

Soulbound tokens have enormous potential in a wide variety of applications. However, with no standard, soulbound tokens are incompatible with the Aptos-Token standard. Consensus on a common standard is required to allow for broader ecosystem adoption, as well as to make it easier for application developers to build expressive applications on top of the standard.

This AIP (Aptos Improvement Proposal) envisions soulbound tokens as non-transferable NFTs that will unlock primitives such as robust proof of attendance, credentials, distributed governance rights, and much more. However, soulbound tokens must be distinguishable from Aptos-Tokens to provide this utility.

Benefits:

  • Consensus on a soulbound token standard
  • Monitoring a single event store for soulbound and non-soulbound tokens
  • Compatibility with the Aptos-Token standard.

Requirements

  • Tokens minted to an account cannot be transferred.
  • Work with all existing dApps (decentralized applications).
  • Ability to tell if a token is soul bound or not.

Approach

Extend property map to have framework reserved keys

Reserve all keys with the "##" prefix for framework usage.

When adding keys to a property_map, we need to check whether the key starts with "##" and disallow adding or creating a property_map with such keys.

We add a token package friend function for adding framework-reserved control flags:

// This function should only be called by framework modules
public(friend) fun add_framework_reserved_control_flag(map: &mut PropertyMap, key: String, value: PropertyValue)

Note: after this change, the property_map becomes a dedicated data structure for the token package.

  • TODO: Verify that property_map is not widely used by the community and that a similar data structure is provided in the framework package.
  • TODO: Validate the extra gas cost of the framework-key check incurred on every property map creation and insertion. Verify the gas cost.

Annotate the token as soul bound

  • Introduce a framework-reserved token property "##freezing_type" of type u8. The value 0 is reserved for soulbound tokens.
  • Provide new methods that create soulbound tokens by setting this property.
  • Disallow withdrawal if the token is frozen.
// Mint soulbound tokens specifically
public entry fun create_mint_soulbound_token(creator: &signer, owner: address, collection_name: String, token_name: String) {
    // Create token_data
    let token_data_id = create_token_data(...);
    // Mint a token from token_data_id
    let token = mint_token(token_data_id);

    // Offer the token to the owner
    token_transfers::offer(creator, owner, token_id, ..);
    // Annotate the token property ##freezing_type as soulbound (0)
    add_framework_reserved_control_flag(&mut token.token_properties, "##freezing_type", 0);
}

public fun is_soulbound_token(token_id: TokenId): bool {
    // Check the token properties
}

public fun withdraw_token(account: &signer, id: TokenId, amount: u64){
    let token_data_id = id.token_data_id;
    // Check the token's properties and validate if the ##freezing_type value is set
    ...
}

Extension

We can extend the approach to support general token freezing in token stores. For example, the token owner or creator may want to freeze the token in their token store until an expiration time. We can use ##freezing_type = 2 for time-based freezing, and introduce another framework-reserved control flag, ##freezing_expiration_time, to specify the timestamp.

public fun freeze_token(owner: &signer, token_id: TokenId, expiration_time: u64){
    // annotate the token with two properties above
}

public fun withdraw(...) {
    // Check whether the token is frozen and whether the expiration time has passed.
}

Other Alternatives

  • Store tokens inside locker modules; the biggest downside is the onboarding burden on dApps (decentralized applications), which would need to monitor the new locker standard.

Suggested Implementation Timeline

To be determined.

References

--

Special thanks to Bo Wu.

[AIP-34][Unit Gas Price Estimation]

AIP-34 - Unit Gas Price Estimation

Motivation

Transactions are prioritized in mempool, consensus, and execution based on unit gas price: the price (in APT) that each transaction has committed to pay for each unit of gas. Clients need a way to estimate the unit gas price they should use to get a transaction executed on-chain in a reasonable amount of time. This proposal contains a design where Aptos fullnodes provide an API that clients use for this purpose.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-34.md

[AIP-X][Discussion] NFT Royalty Enforcement

AIP Discussion

Discussion and feedback thread for AIP.

Link to AIP: #62

Summary

This document provides our recommendation and alternatives for how royalty enforcement can work on Aptos. We are collecting community feedback on the options presented here so we may decide the best path moving forward together.

Motivation

We have observed that NFT communities on other chains are divided over the controversial topic of royalty enforcement on marketplaces. We want to ensure creators' royalties are respected on Aptos so that creators feel safe and incentivized to continue building on Aptos.

Currently, royalties on Aptos depend on our community marketplaces and dapps charging them on behalf of creators. This creates an opportunity for royalty-free marketplaces to reduce prices by skipping royalty payments.

Without the community forming a royalty enforcement standard together, the creators and royalty-required marketplaces are at a competitive disadvantage to royalty-free marketplaces.

Rationale

When comparing different options for enforcing royalties, the main considerations are:

  • The strength of the enforcement: The enforcement should be effective enough to dissuade anyone from purchasing a token without paying royalties.
  • The impact on the future growth of the NFT projects: The selected approach should not limit the potential of building new and exciting applications.
  • The decentralization of the approach: The selected approach should be decentralized, agreed upon and executed by the ecosystem instead of any single entity.

Specification

Recommended approach

Detecting NFT Violating Royalty Treatment

Aptos may provide an API service that helps creators find all tokens that violate their royalties. Creators can then use this information to annotate those NFTs, flagging them as royalty-violating tokens.

The API service would be built on top of an indexer that scans on-chain NFT transactions, looking for token trades that do not pay the appropriate royalties to the creator's royalty account. This indexer can be deployed and run by the community.

Annotating NFT Violating the Royalty

We can introduce a new on-chain map, RoyaltyViolation, that records all NFTs violating royalties. The creator has the authority to set values in this on-chain map; anyone can read from it to check whether a given token has violated royalties.

Note: only NFTs qualify for this annotation, since an NFT has a globally unique token_id.

struct RoyaltyViolation has key {
    royalty_violation_token: Table<TokenId, bool>,
}

public fun set_royalty_violated(creator: &signer, token_id: TokenId);
public fun is_royalty_violated(token_id: TokenId): bool;

The creator can use the API service, together with their own heuristics, to detect violating tokens. Once the creator knows a token has been traded without paying royalties, they can annotate the token with 'true' in the RoyaltyViolation table.

Enforcement

Enforcement will be done through Aptos ecosystem projects. Violating tokens can be identified and restricted from trading and from participating in various dapps. There are three main areas where enforcement can take place; any one of them can greatly restrict a violating token's usability. A sketch of such a check follows the list below.

  1. Token Utility: The creator can easily check if the token has violated the royalty and stop providing utility or service to these tokens.
  2. Major Marketplaces: We can work with major marketplaces to stop listing tokens that have violated the royalty. The trade of these tokens would be limited.
  3. Other Dapps: Any dapps can also adopt their own restriction and stop providing service to holders of violating tokens.
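As an illustration of area 1, a creator or dapp could gate its services with a check along these lines (a sketch; the guard function and error constant are hypothetical):

const E_ROYALTY_VIOLATED: u64 = 1;

// Hypothetical guard a dapp calls before serving a token holder.
fun assert_not_royalty_violated(token_id: TokenId) {
    assert!(!is_royalty_violated(token_id), E_ROYALTY_VIOLATED);
}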

Alternatives

We could allow creators to maintain a whitelist or blacklist of accounts eligible for token transfer. With these two lists, the NFT can be deposited only to whitelisted accounts, or the NFT cannot be deposited to blacklisted accounts. Creators update these lists when they observe a marketplace or app violating royalties.

The caveat of the blacklist approach is that marketplaces can easily and frequently switch to new accounts without restricting their operation. Creators probably cannot keep their blacklists updated at the same pace as the marketplaces, and frequently updating and maintaining a large blacklist costs a lot of gas.

The caveat of the whitelist approach is that it greatly restricts token liquidity and prevents other interesting applications from being built on top of the token. The owner cannot transfer their tokens for normal usage, and it stops other developers from building interesting applications for these tokens, as they cannot store the tokens in their own contracts.

Options comparison

| | Annotation (recommended) | Blacklist alternative | Whitelist alternative |
| --- | --- | --- | --- |
| Enforcement strength | Strong | Low | Strong |
| Growth limitation risk | Low | Low | High |
| Decentralization | Yes | Yes | Yes |

FAQ:

If I notice that only some fungible token holders are violating royalties, how can I annotate those tokens as royalty-violating?

You can use the mutate token properties method to turn those specific fungible tokens into NFTs, where they will have their own unique token_ids. You can then annotate these tokens as royalty-violating.

As a buyer who does not want to buy any token that could be annotated by creators, what should I do?

Creators’ desired royalty is recorded on-chain within the token metadata. With our current proposal, you can avoid royalties only by purchasing tokens that have no royalty requirements.

[AIP-2][Discussion] Multiple Token Changes

Summary

This proposal contains the following changes:

  1. Token CollectionData mutation functions: provide set functions to mutate the CollectionData fields based on the collection mutability settings.
  2. Token TokenData metadata mutation functions: provide set functions to mutate the TokenData fields based on the token mutability settings.
  3. A bug fix for collection supply underflow: fix a bug that caused an underflow error when burning an unlimited collection's TokenData.
  4. Fix the order of events: when minting a token, the deposit event entered the queue before the mint event. This change corrects the order so that the token deposit event comes after the mint event.
  5. Make a public entry for transfer with opt-in: provide an entry function to allow transferring tokens directly when users opt in to direct transfer.
  6. Ensure the royalty numerator is smaller than the denominator: we observe about 0.004% of tokens having a royalty > 100%. This change introduces an assertion to ensure royalty is always smaller than or equal to 100% (see the sketch after this list).
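For change 6, the check amounts to a one-line assertion at royalty creation or mutation time; a sketch (the error constant name is illustrative):

// Abort if the royalty would exceed 100%.
assert!(royalty_points_numerator <= royalty_points_denominator, EINVALID_ROYALTY_NUMERATOR_DENOMINATOR);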

Motivation

Changes 1, 2: support CollectionData and TokenData mutation based on the mutability config, so that creators can update fields according to their own application logic to support new product features.

Changes 3, 4: fix existing issues so that the token contract works correctly.

Change 5: allow dapps to call the function directly without deploying their own contracts or scripts.

Change 6: prevent potentially malicious tokens from charging a fee higher than the token price.

Rationale

Changes 1 and 2 fulfill the existing token standard specification without introducing new functionality.

Changes 3, 4, 5, and 6 are small, straightforward fixes.

Reference Implementation

The PRs of the changes above are listed below:

Change 1, 2:
aptos-labs/aptos-core#5382
aptos-labs/aptos-core#5265
aptos-labs/aptos-core#5017

Change 3: aptos-labs/aptos-core#5096

Change 4: aptos-labs/aptos-core#5499

Change 5: aptos-labs/aptos-core#4930

Change 6: aptos-labs/aptos-core#5444

Risks and Drawbacks

Changes 1 and 2 have been internally reviewed and are undergoing auditing to fully vet the risks.

Changes 3, 4, and 6 reduce identified risks and drawbacks.

Change 5 improves usability and doesn't introduce new functionality.

Timeline
Reference implementation changes will be deployed in devnet on 11/14 (PST) for ease of testing and providing feedback/discussion.
This AIP will be open for public comments until 11/17.
After discussion, reference implementation changes will be deployed in testnet on 11/17 for testing.

[AIP-X][Account abstraction to realize Social recovery]

AIP Discussion

Discussion and feedback thread for AIP.
I believe that an account abstraction function and a social recovery function are needed for the Aptos ecosystem.

Link to AIP:

To realize social recovery, not only an authority called a guardian but also a paymaster function (and possibly a bundler) is needed. Let's implement this function.

Please refer to these pages:
https://eips.ethereum.org/EIPS/eip-4337
cardano-foundation/CIPs#309

AIP28: Implementation of Account Abstraction in Aptos

overview:
In this proposal, we describe how to apply the concept of Account Abstraction from the Ethereum ecosystem to the Aptos blockchain. Aptos uses the Move language, and we focus on UserOperation simulation and validation. In particular, there are restrictions on the information and storage that can be accessed when validating UserOperations.

specification:

For UserOperation validation simulation, call simulateValidation(userop):

  • Do not call forbidden opcodes, and restrict storage access.
  • Storage is limited to data associated with the sender address.

The proposed implementation simulates and validates UserOperations and restricts interaction with special contracts such as factories, paymasters, and signature aggregators. It also incorporates concepts such as access lists and forbidden opcodes.

Additionally, the concept of alternate mempools is introduced to allow for specific use cases by whitelisting specific paymasters and signature aggregators.

This AIP-28 proposes an implementation of Account Abstraction on the Aptos blockchain, enabling flexible account management with a focus on UserOperation simulation and validation.

Thank you for reading. Please wait for the update...

[AIP-1][Discussion] Proposer selection improvements within Consensus Protocol

AIP Discussion

Discussion and feedback thread for AIP.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-1.md

Summary

This change brings two simple improvements to proposer selection:

  • it changes so that we look at more recent voting history, making system react faster
  • it makes proposer selection much less predictable, reducing attack surface by malicious actors

Background

In the Aptos blockchain, progress is organized into rounds. In each round, a new block is proposed and voted on. There is a special proposer role, deterministically decided for each round, that is responsible for collecting votes from the previous round and proposing a new block for the current round. The goals of proposer selection (deciding which node should be the proposer in a round) are to:

  • be fair to all nodes, both so that all nodes are asked to do their fair share of work and so that they can get their fair share of rewards (in combination with the staking rewards logic). Fair share means proportional to their stake.
  • prefer nodes that are operating correctly, as round failures increase commit latency and reduce throughput

The current proposer selection is done via the ProposerAndVoter LeaderReputation algorithm. It looks at past history, using one window for proposer history and a smaller window for voting history. A reputation_weight is then chosen for each node:

  • if the node's proposal-round failure rate within the proposer window is strictly above a threshold, use failed_weight (currently 1)
  • otherwise, if the node had no proposal rounds and no successful votes, use inactive_weight (currently 10)
  • otherwise, use the default active_weight (currently 1000)

Then, reputation_weight is scaled by staked_amount, and the next proposer is pseudo-randomly selected given those weights.
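The weight assignment above can be summarized as follows (a sketch in Move-style pseudocode; the actual logic lives in the validator software, and the constants are the currently configured values quoted above):

// Reputation weight per node, per the ProposerAndVoter rules above.
fun reputation_weight(failed_above_threshold: bool, had_proposals_or_votes: bool): u64 {
    if (failed_above_threshold) {
        1     // failed_weight
    } else if (!had_proposals_or_votes) {
        10    // inactive_weight
    } else {
        1000  // active_weight
    }
}

// The weight used for the pseudo-random draw of the next proposer.
fun selection_weight(reputation: u64, staked_amount: u64): u64 {
    reputation * staked_amount
}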

Window sizes are chosen to be large enough to provide enough signal to be reasonably stable, but not too large, so the algorithm can adapt to changes quickly. For every block, we get a proposer signal for only a single node, but a voting signal for two-thirds of the nodes. That means the proposer window needs to be larger, while the voting window can be kept shorter.

Motivation and Specification

This proposal upgrades the ProposerAndVoter selection algorithm to ProposerAndVoterV2. The new algorithm makes two changes to the logic:

  • voter history window.
    • For historical node performance, we look at proposals within the (round - 10*num_validators - 20, round - 20) window. For voters, we currently look at (round - 10*num_validators - 20, round - 9*num_validators - 20). We ignore the last 20 rounds because history is computed over committed information, and since consensus and commit are decoupled, there can be a few rounds of delay between them. Beyond that, the voter window is unnecessarily stale. With the new change, we will look at the (round - num_validators - 20, round - 20) range for voters.
    • The main effect of this change is that nodes that are joining the validator set, or were offline/lagging for a while and have just caught up, will have a significantly shorter delay before being treated as active and selected as proposers.
  • seed for pseudo-random selection
    • Currently, the seed used for pseudo-random selection is the tuple (epoch, round). This makes every round an independent random choice, but also makes it predictable: it is relatively easy to figure out which proposers will be selected for future rounds, giving malicious actors easier ways to attack or exploit the network. There are various known ways in which predictable leader election simplifies attacks: denial-of-service is easier when only the leaders need to be attacked, front-running of transactions is easier if the leader is known in advance, etc. With the new change, the seed becomes (root_hash, epoch, round), making it much less predictable.

Reference Implementation

aptos-labs/aptos-core#4253
aptos-labs/aptos-core#4973

Risks and Drawbacks

Future Potential

Suggested implementation timeline

The PRs above have been committed and are being tested and prepared for release to mainnet.
To enable the change, an additional governance proposal for the consensus on-chain config needs to be executed. An E2E smoke test has been landed as well, to confirm the governance proposal can be executed smoothly.

It has been running on devnet for more than a week, though devnet is limited (only Aptos Labs runs validators), so the change is not stress-tested.
We will test it on testnet in a week or two.
If no further changes are needed, the proposal is planned to be created and sent for voting by the end of December.

[AIP-37][Filter duplicate transactions within a block]

AIP Discussion

Motivation

With Quorum Store (see AIP-26), a block proposal may include duplicate transactions because Quorum Store batches are opaque. (Duplicate transactions are also possible with byzantine proposers within Quorum Store.) Duplicate transactions cannot affect the correctness of execution, i.e., the first occurrence will always succeed and the duplicates will be discarded. However, there is concern that duplicate transactions could hurt the performance of parallel block execution because they can induce conflicts. We propose filtering out the duplicates before blocks are executed in the VM.

Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-37.md

[AIP-23][Discussion] Make Ed25519 Public Key Validation Return False if Key Is the Wrong Length

AIP-23 - Make Ed25519 public key validation native return false if key has the wrong length

Summary

Changes the function native_public_key_validate, used by the native Move function native fun pubkey_validate_internal, to return false if the public key provided has the wrong length. Previously, this function would abort if the key length was incorrect. This change is gated by a feature flag.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-23.md

[AIP-19][Discussion]Enable updating commission_percentage

AIP-19 - Enable updating commission_percentage

Discussion and feedback thread for AIP 19: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-19.md

Summary

This AIP proposes an update to staking_contract.move, which would allow the stake pool owner to change commission_percentage.

Motivation

Currently, commission_percentage cannot be changed. The ability to update the commission percentage will allow for better adaptability to changing market conditions.

Rationale

Considerations:

  1. The staking contract tracks how much commission needs to be paid out to the operator. Updating commission_percentage is a convenience function added to staking_contract.move to allow a stake pool owner to update the commission percentage paid to the operator.
  2. commission_percentage can be updated by the stake pool owner at any time. Commission is earned on a per-epoch basis. The change takes effect immediately for all future commissions earned once the update function is called, but is not retroactively applied to previously earned commissions.
  3. An UpdateCommissionEvent is emitted when the update_commission function is called.

Alternative solutions:

The staking contract would have to be ended and a new one created in order to change commission_percentage. This is a less ideal solution, as it creates more operational overhead and would result in missed staking rewards.

Reference Implementation

https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/staking_contract.move

https://github.com/aptos-labs/aptos-core/pull/6623/

Risks and Drawbacks

Changing commission_percentage may introduce uncertainty into commission earnings, because operators may be paid at different commission rates during an unlock cycle. However, no additional action is required from the operator, as changes take effect immediately.

We can mitigate this in a future iteration by implementing a max commission change per period. This is not a concern with the current owner-operator structure.

Future Potential

This feature will give the stake pool owner more flexibility over the commission_percentage to reflect changing market conditions.

Suggested implementation timeline

Targeting end of Q1

Suggested deployment timeline

This feature is currently on devnet and testnet as part of v1.3.

[AIP-12][Discussion] Multisig Accounts v2

aip: 12
title: Multisig Accounts v2
author: movekevin
discussions-to:
Status: Draft
last-call-end-date (*optional):
type: Standard (Framework)
created:
updated:

Summary

This AIP proposes a new multisig account standard that is primarily governed by transparent data structures and functions in a smart contract (multisig_account), with more ease of use and more powerful features than the current multied25519-auth-key-based accounts. There is also a strong direction for this to evolve as part of a more general account abstraction in Aptos, with more types of accounts and more functionality for users to manage their accounts.

This is not meant to be a full-fledged multisig wallet product but instead just the primitive construct and potentially SDK support to enable the community to build more powerful multisig products.

Motivation

Multisig accounts are important in crypto and are used:

  • As part of a DAO or developer group to upgrade, operate, and manage smart contracts.
  • To manage on chain treasuries.
  • For individuals to secure their own assets, so that the loss of one key does not lead to a loss of funds.

Currently, Aptos supports multied25519 auth keys, which allows for multisig transactions:

  • This is different from multi-agent transactions, where multiple signers sign the transaction separately, leading to multiple signers being created when the transaction is executed.
  • The multisig account can be created by calling create_account with the right address, which is a hash of the concatenation of the list of owners’ public keys, the threshold k (for k-of-n multisig), and the multied25519 scheme identifier (1). Multisig enforcement is then done through the multisig account’s auth key.
  • To create a multisig transaction, the transaction payload needs to be passed around, and k private keys that are part of the multisig account need to sign with the right [authenticator setup](https://aptos.dev/guides/creating-a-signed-transaction/#multisignature-transactions).
  • To add or remove an owner or change the threshold, owners need to send a transaction with enough signatures to change the auth key to reflect the new list of owner public keys and the new threshold.

There are several problems with this current multisig setup that make it hard to use:

  • It’s not easy to tell who are the current owners of the multisig account and what the required signature threshold is. This information needs to be manually parsed from the auth key.
  • To create the multisig account’s auth key, users need to concatenate the owners’ public keys and add the signature threshold at the end. Most people don’t even know how to get their public keys or that they are different from addresses.
  • Users would have to manually pass around the tx payload to gather enough signatures. Even if the SDK makes the signing part easy, storing and passing this tx requires a database somewhere and some coordination to execute when there are enough signatures.
  • The nonce in the multisig tx has to be the multisig account’s nonce, not the owner accounts’ nonces. This would usually invalidate a multisig tx if other txs were executed before it, increasing the nonce. Managing the nonce here for multiple in-flight txs can be tricky.
  • Adding or removing owners is not easy as it involves changing the auth key. The payload for such transactions would not be easily understandable and needs some special logic for parsing/diffing.

Proposal

We can create a more user-friendly multisig account standard that the ecosystem can build on top of. This consists of two main components:

  1. A multisig account module that governs creating/managing multisig accounts and creating/approving/rejecting/executing multisig account transactions. The execution function will be private by default and only invokable via:
  2. A new transaction type that allows an executor (who has to be one of the owners) to execute a transaction payload on behalf of a multisig account. This will authenticate by calling the multisig account module’s private execution function. This transaction type can also be generalized to support other impersonation/delegation use cases, such as paying for gas to execute another account’s transaction.

Data structures and multisig_account module

  • A multisig_account module that allows easier creating and operating a multisig account
    • The multisig account will be created as a standalone resource account with its own address
    • The multisig account will store the multisig configs (list of owners, threshold), and a list of transactions to execute. The transactions must be executed (or rejected) in order, which adds determinism.
    • This module also allows owners to create and approve/reject multisig transactions using the standard user transactions (these functions will be standard public entry functions). Only executing these multisig account transactions would need the new transaction type.
struct MultisigAccount has key {
  // The list of all owner addresses.
  owners: vector<address>,
  // The number of signatures required to pass a transaction (k in k-of-n).
  signatures_required: u64,
  // Map from transaction id (incrementing id) to transactions to execute for this multisig account.
  // Already executed transactions are deleted to save on storage but can always be accessed via events.
  transactions: Table<u64, MultisigTransaction>,
  // Last executed or rejected transaction id. Used to enforce in-order executions of proposals.
  last_transaction_id: u64,
  // The transaction id to assign to the next transaction.
  next_transaction_id: u64,
  // The signer capability controlling the multisig (resource) account. This can be exchanged for the signer.
  // Currently not used as the MultisigTransaction can validate and create a signer directly in the VM but
  // this can be useful to have for on-chain composability in the future.
  signer_cap: Option<SignerCapability>,
}

/// A transaction to be executed in a multisig account.
/// This must contain either the full transaction payload or its hash (stored as bytes).
struct MultisigTransaction has copy, drop, store {
  payload: Option<vector<u8>>,
  payload_hash: Option<vector<u8>>,
  // Owners who have approved. Uses a simple map to deduplicate.
  approvals: SimpleMap<address, bool>,
  // Owners who have rejected. Uses a simple map to deduplicate.
  rejections: SimpleMap<address, bool>,
  // The owner who created this transaction.
  creator: address,
  // Metadata about the transaction such as description, etc.
  // This can also be reused in the future to add new attributes to multisig transactions such as expiration time.
  metadata: SimpleMap<String, vector<u8>>,
}

New transaction type to execute multisig account transactions

// Existing struct used for EntryFunction payload, e.g. to call "coin::transfer"
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize)]
pub struct EntryFunction {
    pub module: ModuleId,
    pub function: Identifier,
    pub ty_args: Vec<TypeTag>,
    #[serde(with = "vec_bytes")]
    pub args: Vec<Vec<u8>>,
}

// We use an enum here for extensibility so we can add Script payload support
// in the future for example.
pub enum MultisigTransactionPayload {
    EntryFunction(EntryFunction),
}

/// A multisig transaction that allows an owner of a multisig account to execute a pre-approved
/// transaction as the multisig account.
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize)]
pub struct Multisig {
    pub multisig_address: AccountAddress,

    // Transaction payload is optional if already stored on chain.
    pub transaction_payload: Option<MultisigTransactionPayload>,
}

End-to-end flow

  1. Owners can create a new multisig account by calling multisig_account::create.
    1. This can be done as a normal user tx (entry function) or on chain via another module that builds on top.
  2. Owners can be added/removed at any time by calling multisig_account::add_owners or remove_owners. The transactions to do so still need to follow the k-of-n scheme specified for the multisig account.
  3. To create a new transaction, an owner can call multisig_account::create_transaction with the transaction payload: specified module (address + name), the name of the function to call, and argument values.
    1. The payload data structure is still under experimentation. We want to make it easy for off-chain systems to correctly construct this payload (or payload hash) and to debug if there are issues.
    2. This will store the full transaction payload on chain, which adds decentralization (censorship is not possible) and makes it easier to fetch all transactions waiting for execution.
    3. If gas optimization is desired, an owner can alternatively call multisig_account::create_transaction_with_hash where only the payload hash is stored (module + function + args). Later execution will be verified using the hash.
    4. Only owners can create transactions and a transaction id (incrementing id) will be assigned.
  4. Transactions must be executed in order. But owners can create multiple transactions in advance and approve/reject them.
  5. To approve or reject a transaction, other owners can call multisig_account::approve() or reject() with the transaction id (see the sketch after this list).
  6. If there are enough rejections (≥ signatures threshold), any owner can remove the transaction by calling multisig_account::remove().
  7. If there are enough approvals (≥ signatures threshold), any owner can execute the next transaction (by creation) using the special MultisigTransaction type with an optional payload if only a hash is stored on chain. If the full payload was stored at creation, the multisig transaction doesn’t need to specify any params beside the multisig account address itself. Detailed flow in VM:
    1. Transaction prologue: The VM will first invoke a private function (multisig_account::validate_multisig_transaction) to verify that the next transaction in the queue for the provided multisig account exists and has enough approvals to be executed.
    2. Transaction execution:
      1. VM first obtains the payload of the underlying call in the multisig tx. This shouldn’t fail if transaction prologue (validation) has succeeded.
      2. VM then tries to execute this function and records the result.
      3. If successful, VM invokes multisig_account::successful_transaction_execution_cleanup to track and emit events for the successful execution
      4. If failed, VM throws away the results of executing the payload (by resetting the vm session) while keeping the gas spent so far. It then invokes multisig_account::failed_transaction_execution_cleanup to track the failure.
      5. At the end, gas is charged to the sender account and any pending Move module publishing is resolved (in case the multisig tx publishes a module).
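
To make the owner-side flow concrete, below is a hedged sketch of steps 3 and 5 in Move. The function names mirror the flow above; the exact signatures and the payload encoding are assumptions until the module is finalized.

script {
    use aptos_framework::multisig_account;

    // Illustrative only: `multisig_addr` is an existing multisig account and
    // `payload` stands in for a BCS-encoded transaction payload.
    fun propose_and_approve(owner_a: &signer, owner_b: &signer, multisig_addr: address) {
        let payload: vector<u8> = x"00"; // placeholder bytes
        // Step 3: owner A creates a transaction, storing the full payload on chain.
        multisig_account::create_transaction(owner_a, multisig_addr, payload);
        // Step 5: owner B approves the pending transaction by its id.
        multisig_account::approve(owner_b, multisig_addr, 1);
        // Step 7: once approvals >= k, any owner submits the special
        // MultisigTransaction type to execute the payload as the multisig account.
    }
}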

Reference Implementation

(WIP)

aptos-labs/aptos-core#5894

Risks and Drawbacks

The primary risk is smart contract risk where there can be bugs or vulnerabilities in either the smart contract code (multisig_account module) or API and VM execution. This can be mitigated with thorough security audit and testing.

Future Potential

An immediate extension to this proposal is to add script support for a multisig tx. This would allow defining more complex atomic multisig txs.

In the longer term:

The proposal as-is would not allow on-chain execution of multisig transactions - other modules can only create transactions and allow owners to approve/reject. Execution would require sending a dedicated transaction of the multisig transaction type. However, in the future, this can be made easier with dynamic dispatch support in Move. This would allow on chain execution and also off-chain execution via the standard user transaction type (instead of the special multisig transaction type). Dynamic dispatch could also allow adding more modular components to the multisig account model to enable custom transaction authentication, etc.

Another direction multisig account can enable is more generic account abstraction models where on-chain authentication can be customized to allow account A to execute a transaction as account B if allowed via modules/functionalities defined by account B. This would enable more powerful off-chain systems such as games to abstract away transaction and authentication flow without the users needing intimate understanding of how they work.

Suggested implementation timeline

Targeted code complete (including security audit) and testnet release: February 2023.

[AIP-21][Discussion] Fungible Asset using objects

AIP Discussion

Discussion and feedback thread for AIP.

Link to AIP: #96


aip: 21
title: Fungible Asset Standard using objects
author: lightmark
Status: Draft
type: Standard (Framework)
created: 04/11/2022

[AIP-x] Fungible Asset using Objects

Summary

This AIP proposes a standard of Fungible Asset (FA) using Move Objects. In this model, any object, which is called Metadata in the standard, can be used as metadata to issue fungible asset units. This standard provides the building blocks for applications to explore the possibilities of fungibility.

Motivation

We are eager to build fungible assets on Aptos, as they play a critical role in the Web3 ecosystem beyond cryptocurrency. They enable the tokenization of various assets, including commodities, real estate, and financial instruments, and facilitate the creation of decentralized financial applications.

  • Tokenization of securities and commodities provides fractional ownership, making these markets more accessible to a broader range of investors.
  • Fungible tokens can also represent ownership of real estate, enabling fractional ownership and providing liquidity to the traditionally illiquid market.
  • In-game assets such as virtual currencies and characters can be tokenized, enabling players to own and trade their assets, creating new revenue streams for game developers and players.

Besides the aforementioned features, fungible assets are a superset of cryptocurrency, as a coin is just one type of fungible asset. The coin module in Move could be replaced by the fungible asset framework.

Rationale

The rationale is twofold:

We have witnessed a drastically increasing need for a fungible asset framework from the Aptos community and partners. The earlier coin module is obsolete and insufficient for today's needs, partially due to the rigidity of Move structs and the inherently poor extensibility they entail. Also, the basic model of authorization management is not flexible enough to enable creative innovations in fungible asset policy.

The old coin module has a noticeable deficiency: the store ability makes ownership tracing a nightmare. Therefore, it is not amenable to centralized management such as account freezing, because it is not programmatically feasible to find all the coins belonging to an account.

The fungible asset framework is designed to solve both issues.

Specification

fungible_asset::Metadata serves as the metadata, or information, associated with a kind of fungible asset. Any object has to be extended with this resource to become a metadata object capable of issuing fungible units. Note that this object can have other resources attached to provide richer context. For example, if the fungible asset represents a gem, the object can hold another Gem resource with fields like color, size, quality, rarity, etc.

#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
    /// Define the metadata required of an object to be fungible.
    struct Metadata has key {
        /// The current supply of the fungible asset.
        supply: u64,
        /// The maximum supply limit where `option::none()` means no limit.
        maximum: Option<u64>,
        /// Name of the fungible metadata, i.e., "USDT".
        name: String,
        /// Symbol of the fungible metadata, usually a shorter version of the name.
        /// For example, Singapore Dollar is SGD.
        symbol: String,
        /// Number of decimals used for display purposes.
        /// For example, if `decimals` equals `2`, a balance of `505` coins should
        /// be displayed to a user as `5.05` (`505 / 10 ** 2`).
        decimals: u8,
    }
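
As a sketch of the gem example above, a companion resource can be attached to the same object that holds Metadata; the Gem fields here are illustrative, not part of the standard:

#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
/// Illustrative companion resource living on the metadata object, giving the
/// fungible asset domain-specific context beyond Metadata.
struct Gem has key {
    color: String,
    size: u64,
    rarity: u8,
}

At creation time, the issuer would attach both Metadata and Gem to the same object; the constructor plumbing is omitted here.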

FungibleStore only resides in an object as a container/holder of the balance of a specific fungible asset.

FungibleAsset is an instance of a fungible asset used as a "hot potato": it cannot be stored directly and must be deposited back into a store before the transaction ends.

#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
/// The store object that holds fungible assets of a specific type associated with an account.
    struct FungibleStore has key {
        /// The address of the base metadata object.
        metadata: Object<Metadata>,
        /// The balance of the fungible asset.
        balance: u64,
        /// Fungible asset transfers are a common operation; this flag allows for freezing/unfreezing accounts.
        allow_ungated_balance_transfer: bool,
    }

/// FungibleAsset can be passed into functions for type safety and to guarantee a specific amount.
/// FungibleAsset is ephemeral: it cannot be stored directly and has to be deposited back into a store.
    struct FungibleAsset {
        metadata: Object<Metadata>,
        amount: u64,
    }

Primary and Secondary Stores

Each account can own multiple FungibleStores, but only one is primary and the rest are called secondary stores. The primary store address is deterministic: hash(owner_address | metadata_address | 0xFC). Secondary stores can be created whenever needed.

The differences between primary and secondary stores are summarized as:

  1. Primary store address is deterministic to the owner account so there is no need to index.
  2. Primary stores support unilateral sending, so one will be created on demand if it does not exist, whereas secondary stores will not.
  3. Primary store cannot be deleted.

Reference Implementation

aptos-labs/aptos-core#7183

aptos-labs/aptos-core#7379

aptos-labs/aptos-core#7608

Fungible asset main APIs

public entry fun transfer<T: key>(
    sender: &signer,
    from: Object<T>,
    to: Object<T>,
    amount: u64,
)
public fun withdraw<T: key>(
        owner: &signer,
        store: Object<T>,
        amount: u64,
    ): FungibleAsset
public fun deposit<T: key>(store: Object<T>, fa: FungibleAsset)
public fun mint(ref: &MintRef, amount: u64): FungibleAsset
public fun mint_to<T: key>(ref: &MintRef, store: Object<T>, amount: u64)
public fun set_ungated_transfer<T: key>(ref: &TransferRef, store: Object<T>, allow: bool)
public fun burn(ref: &BurnRef, fa: FungibleAsset)
public fun burn_from<T: key>(
        ref: &BurnRef,
        store: Object<T>,
        amount: u64
    )
public fun withdraw_with_ref<T: key>(
        ref: &TransferRef,
        store: Object<T>,
        amount: u64
    )
public fun deposit_with_ref<T: key>(
        ref: &TransferRef,
        store: Object<T>,
        fa: FungibleAsset
    )
public fun transfer_with_ref<T: key>(
        transfer_ref: &TransferRef,
        from: Object<T>,
        to: Object<T>,
        amount: u64,
    )

Fungible store main APIs

#[view]
public fun primary_store_address<T: key>(owner: address, metadata: Object<T>): address

#[view]
    /// Get the balance of `account`'s primary store.
    public fun balance<T: key>(account: address, metadata: Object<T>): u64

#[view]
/// Return whether the given account's primary store can do direct transfers.
public fun ungated_balance_transfer_allowed<T: key>(account: address, metadata: Object<T>): bool

/// Withdraw `amount` of fungible asset from the owner's primary store.
public fun withdraw<T: key>(owner: &signer, metadata: Object<T>, amount: u64): FungibleAsset

/// Deposit `amount` of fungible asset to the given account's primary store.
public fun deposit(owner: address, fa: FungibleAsset)

/// Transfer `amount` of fungible asset from sender's primary store to receiver's primary store.
    public entry fun transfer<T: key>(
        sender: &signer,
        metadata: Object<T>,
        recipient: address,
        amount: u64,
    )
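
Putting the two API surfaces together, below is a hedged usage sketch. The primary_store module path follows this AIP's naming, and the MintRef/metadata setup is assumed to already exist:

// Illustrative flow: mint new units, deposit them into the issuer's primary
// store, then transfer some to a recipient.
use std::signer;
use aptos_framework::fungible_asset::{Self, Metadata, MintRef};
use aptos_framework::object::Object;
use aptos_framework::primary_store;

fun mint_and_send(
    issuer: &signer,
    mint_ref: &MintRef,
    metadata: Object<Metadata>,
    recipient: address,
) {
    let fa = fungible_asset::mint(mint_ref, 100);
    primary_store::deposit(signer::address_of(issuer), fa);
    // Unilateral send: the recipient's primary store is created on demand.
    primary_store::transfer(issuer, metadata, recipient, 25);
}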

Risks and Drawbacks

  • Making an asset fungible is an irreversible operation; there is no way to clear the fungible asset data if it is no longer needed.
  • The solution of using a primary store is not perfect, in that the same DeriveRef could also be used by another module to squat the primary store object. The creator of the metadata has to bear that in mind, so we limit the function to being callable only by the primary_store module for now. The reason behind this is that the name-deriving scheme has no native domain separator for different modules.

Future Potential

There is still some room for improvement in management capabilities and in the way fungible asset objects are located. Once we have a powerful indexer with a different programming model, it may no longer be necessary to have a primary store.

Suggested implementation timeline

By end of March.

Suggested deployment timeline

On Devnet by early April, on Testnet by mid-April, and on Mainnet by early May.

[AIP-28][Partial voting for on chain governance]

AIP 28 - Partial voting for on chain governance

Summary

With delegation_pool.move, end users are able to participate in staking, but the delegation pool owner votes on behalf of the entire pool. Partial voting proposes changes to Aptos Governance by enabling delegators to participate in on chain governance and vote on governance proposals in proportion to their stake amount.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-28.md

[AIP-X][Discussion] Improve multisig v2 owner schema update flow

Background

This AIP is being submitted per the request of @movekevin in aptos-labs/aptos-core#8525

Summary

Presently the multisig v2 Move APIs require multiple transactions for certain operations, which would be streamlined via abstraction and helper functions. aptos-labs/aptos-core#8525 adds these functions.

Motivation

Multisig v2 operations are already cumbersome compared with single-signer operations. Streamlining the process will lead to less friction for distributed governance solutions.

Specification

The following function signatures are proposed:

    /// Like `create_with_owners`, but removes the calling account after creation.
    ///
    /// This is for creating a vanity multisig account from a bootstrapping account that should not
    /// be an owner after the vanity multisig address has been secured.
    public entry fun create_with_owners_then_remove_bootstrapper(
        bootstrapper: &signer,
        owners: vector<address>,
        num_signatures_required: u64,
        metadata_keys: vector<String>,
        metadata_values: vector<vector<u8>>,
    ) acquires MultisigAccount {

    /// Swap an owner in for an old one, without changing required signatures.
    entry fun swap_owner(
        multisig_account: &signer,
        to_swap_in: address,
        to_swap_out: address
    ) acquires MultisigAccount {

    /// Swap owners in and out, without changing required signatures.
    entry fun swap_owners(
        multisig_account: &signer,
        to_swap_in: vector<address>,
        to_swap_out: vector<address>
    ) acquires MultisigAccount {

    /// Swap owners in and out, updating number of required signatures.
    entry fun swap_owners_and_update_signatures_required(
        multisig_account: &signer,
        new_owners: vector<address>,
        owners_to_remove: vector<address>,
        new_num_signatures_required: u64
    ) acquires MultisigAccount {

    /// Add new owners, remove owners to remove, update signatures required.
    fun update_owner_schema(
        multisig_address: address,
        new_owners: vector<address>,
        owners_to_remove: vector<address>,
        optional_new_num_signatures_required: Option<u64>,
    ) acquires MultisigAccount {

The final function is an abstraction that can be substituted for existing owner modification functions.
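
For instance, swap_owner can plausibly be expressed as a thin wrapper over update_owner_schema; a sketch (not necessarily the exact implementation in the PR):

    /// Sketch: single-owner swap delegating to the shared schema-update logic,
    /// leaving the signature threshold unchanged.
    entry fun swap_owner(
        multisig_account: &signer,
        to_swap_in: address,
        to_swap_out: address
    ) acquires MultisigAccount {
        update_owner_schema(
            std::signer::address_of(multisig_account),
            vector[to_swap_in],
            vector[to_swap_out],
            std::option::none<u64>(),
        );
    }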

Alternatives

Do nothing, thus enforcing transactional complexity

Reference implementation

aptos-labs/aptos-core#8525

Risks and drawbacks

Smart contract risk

[AIP-X][Discussion] Allocate places for Aptos Ru Community team to validate Aptos mainnet

Proposal/Solution

Hello Aptos team and community

We represent the Aptos Ru Community team: one of the earliest and most active adopters, coordinators and contributors of Aptos across the Russian-speaking world. Our team includes both Aptos Moderators and Coordinators.

Among many other things we are creators of the following projects:
• Weekly RuWorkshop events/Ecosystem reviews
• AptosRuHub website
• Russian translation of Official Aptoslabs Medium
• Russian translation of Official Aptos Developer Documentation
• Step-by-step video guides how to launch Aptos node (from the earliest devnet up to Ait3)
• Telegram group with c. 2000 subscribers
• Weekly summaries of Move Mondays and Ru Workshops

Our team has also helped to set up many nodes in DevNet and has educated many people on Aptos.
Each of us has a proven track record and a strong reputation in the community and on the Aptos Discord server.
We are constantly working on community development, improving the quality of materials and events.

Our objective
Grow the community by increasing its activity and size.

Limitation
Lack of financial resources for the preparation of better quality materials (articles, videos, etc.), event marketing, attracting influencers, promotional gifts, giveaways, etc.

Proposal/solution
Allocate places for Aptos Ru Community team to validate Aptos mainnet.

This proposal allows to achieve the following:

  • accelerate the development of the Russian-speaking community
  • improve decentralization of the Aptos network

Allocating places for validation would solve the financing problem of the community project (Aptos Ru Community), improve the quality of its materials, and stimulate activity, since the rewards from network validation would not leave the project but would be used to promote different activities and, as a result, speed up the attraction/onboarding of users and builders to the Aptos Network across the Russian-speaking audience.

Given our successful participation in all stages of the AIT, as well as in validation activity in other projects, we have all the necessary experience to participate in Mainnet validation.

We are Aptos Ru community team and we want to grow our community and help Aptos to engage new members and grow the whole Ru community.

All our resources
https://link3.to/aptosrucommunity
https://t.me/AptosRUcommunity
https://tiny.one/aptosruhub
https://youtube.com/@aptosrucommunity
https://t.me/AptosWorkshopRU
https://cr-nepos.gitbook.io/aptos-ru/
https://medium.com/@aptos-in-russian

[AIP-13][Discussion] Coin standard improvements

Summary

Since the launch of the Aptos Mainnet, there are some improvements suggested by the community that will speed up adoption of the Coin standard. Specifically:

  1. Implicit coin registration: Currently recipients of a new Coin need to explicitly register to be able to receive it the first time. This creates friction in user experience with exchanges (CEXs and DEXs alike) and also with wallets. Switching to an implicit model where registration is not required will significantly improve UX.
  2. Added support for batch transfers: This is a minor convenience improvement that allows an account to send coins, both the network coin (APT) and others, to multiple accounts in a single transaction.

Since (2) is a minor improvement, the rest of this proposal will discuss the details for (1).

Motivation and Rationale

Currently, in many cases, coins (Aptos' equivalent of ERC-20 tokens, including APT) cannot be sent directly to an account if:

  • The account has not been created. aptos_account::transfer or account::create_account needs to be called by another account that can pay for gas. This is generally a slight annoyance but not too big of a pain.
  • The account has not registered to receive that coin type. This has to be done for every custom coin (APT is registered by default when creating an account). This is the main cause of complaints/pain.

The primary historical reason for this design is to let an account explicitly opt in to the tokens/coins that they want and not receive random ones they don’t. However, this has led to user and developer pains, as they need to remember to register the coin type, especially since only the recipient account can do this. One important use case that has run into this issue is CEX transfers that involve custom coins.

Proposal

We can switch to a model where CoinStore (created by registration) is implicitly created upon transfer if it doesn't exist for a specific CoinType. This can be added as a separate flow in aptos_account, similar to aptos_account::transfer, which implicitly creates an account upon an APT transfer if one doesn't exist. In addition, accounts can choose to opt out of this behavior if desired. The detailed flow is below (a sketch follows the list):

  1. aptos_account::transfer_coins<CoinType>(from: &signer, to: address, amount: u64) by default will register the recipient address (create CoinStore) for CoinType if one doesn't exist.
  2. An account can choose to opt out (e.g. to avoid receiving spammy coins) by calling aptos_account::set_allow_direct_coin_transfers(false). They can also later revert this with set_allow_direct_coin_transfers(true). By default, any existing accounts before this proposal is implemented, and new accounts afterward, will be implicitly opted into receiving all coins.
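
A minimal sketch of what the implicit-registration path could look like, assuming a framework-internal create_signer capability; the helper names (can_receive_direct_coin_transfers, create_signer) are illustrative, not a final API:

// Illustrative only.
public entry fun transfer_coins<CoinType>(from: &signer, to: address, amount: u64) {
    if (!coin::is_account_registered<CoinType>(to)) {
        // Abort unless the recipient allows direct transfers (the default).
        assert!(can_receive_direct_coin_transfers(to), 1);
        // Framework-internal signer creation registers the CoinStore on the
        // recipient's behalf, without a separate registration transaction.
        coin::register<CoinType>(&create_signer(to));
    };
    coin::transfer<CoinType>(from, to, amount);
}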

Implementation

Reference implementation:

Risks and Drawbacks

Since this is a new flow instead of modifying the existing coin::transfer function, existing dependency on coin::register and coin::transfer should not be affected. There is only one known potential risk: Since an account is opted-in to receiving arbitrary coins by default, they can get spammed by malicious users sending hundreds or thousands of these spammy coins. This can be mitigated:

  • Wallets can maintain a known reputable coin list and use it to filter their users' coin lists. This is a standard UX commonly seen in other chains/wallets such as Metamask.
  • The Resources API currently allows pagination, which helps when an account has too many resources. This mitigates any risk of "DDOSing" an account by creating too many resources (one CoinStore per arbitrary coin).
  • Indexing systems can further help filter out spammy coins. This can be seen with popular explorers on other chains such as Etherscan.

Timeline

This change can be rolled out to Testnet for testing in the week of 2/1 or 2/8 (PST time).

[AIP-36][Universally Unique Identifiers]

AIP Discussion

Summary

This AIP proposes to add a new native function create_uuid to the Aptos framework that generates and outputs a unique 256-bit identifier (of type address) for each function call. Calls to create_uuid run efficiently and in parallel: when two transactions call create_uuid, they can be executed in parallel without any conflicts. Initially, we will use these unique identifiers internally as addresses for newly created Move Objects.

Motivation

There is a general need to be able to create unique identifiers or addresses. There is no such utility in Move today, so various alternatives have been used, which bring performance implications. We want to provide such a utility for all use cases that need it.

Concretely, when a new object is created, we need to associate it with a unique address. For named objects, we deterministically derive it from the name. But for all other objects, we currently derive it from a GUID (Globally Unique Identifier) that we create on the fly. A GUID consists of a tuple (address, creation_num). We create a GUID by having address be the account address of the object or resource’s creator, and creation_num be the GUID sequence number of the objects or resources created by that account. As the sequence number creation_num has to be incremented for each object/resource creation, GUID generation is inherently sequential within the same address. As an example, in Token V2, whenever a new token is minted using token::create_from_account, an object is created to back it, which uses a GUID generated based on the collection address, so all such mints from the same collection are inherently sequential.

This AIP thereby creates a new type of identifier called UUID (Universally Unique Identifier). This is a 256-bit identifier that is universally unique among all the identifiers generated by all accounts for all purposes. We propose adding a new native function create_uuid to the Aptos framework. Every time Move code calls create_uuid, the function outputs a universally unique 256-bit value.
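
A minimal usage sketch follows; the module hosting the native is not specified in this summary, so the path below is a placeholder:

// Illustrative only: each call returns a fresh address with no shared counter,
// so concurrent transactions calling it do not conflict.
fun two_fresh_addresses(): (address, address) {
    let a = aptos_framework::uuid::create_uuid();
    let b = aptos_framework::uuid::create_uuid();
    assert!(a != b, 0); // universally unique across all callers
    (a, b)
}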

Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-36.md

[AIP-9][Discussion] Resource Groups

AIP-9 - Resource Groups (Discussion)

aip: 9
title: Resource Groups
author: davidiw, wrwg, msmouse
discussions-to: #26
Status: Draft
last-call-end-date:
type: Standard (Interface, Framework)
created: 2023/1/5
updated: N/A
requires: N/A

Summary

This AIP proposes resource groups to support storing multiple distinct Move resources together into a single storage slot.

Motivation

Over the course of development, it often becomes convenient to add new fields to a resource or support an optional, heterogeneous set of resources. However, resources and structs are immutable after being published to the blockchain, hence, the only pathway to add a new field is via a new resource.

Each distinct resource within Aptos requires a storage slot. Each storage slot is a unique entry within a Merkle tree or authenticated data structure. Each proof within the authenticated data structure occupies 32 * log2(N) bytes, where N is the total number of storage slots. At N = 1,000,000, this results in a 640-byte proof.

With 1,000,000 storage slots in use, adding even a new resource that contains only an event handle uses approximately 680 bytes, where the event handle itself requires only 40. The remaining 640 bytes come from the new authenticated data proofs, which can be orders of magnitude larger than the data being authenticated. Beyond the capacity demands, reads and writes incur additional costs associated with proof verification and generation, respectively.

Resource groups allow for dynamic co-location of data, such that adding a new event can be done even after creation of the resource group, with fixed storage and execution costs independent of the number of slots in storage. In turn, this provides a convenient way to evolve data types and co-locate data from different resources.

Rationale

A resource group co-locates data into a single storage slot by encoding, within the Move source files, attributes that specify which resources should be combined. Resource groups have no semantic effect on Move, only on the organization of storage.

At the storage layer, the resource groups are stored as a BCS-encoded BTreeMap where the key is a BCS-encoded fully qualified struct name (address::module_name::struct_name, e.g., 0x1::account::Account) and the value is the BCS-encoded data associated with the resource.

[Figure: storage at address 0xcafef00d, showing the standalone resource 0x1::account::Account alongside the resource group 0xaa::resource::Group packing multiple resources into one slot]

The above diagram illustrates data stored at address 0xcafef00d. 0x1::account::Account is a resource stored at address 0xcafef00d. 0xaa::resource::Group contains a set of resources, or a resource group, stored at the same address. The resource group packs multiple resources into the group. Resources within a resource group require nested reading: first the resource group must be read from storage, followed by reading the specific resource from the resource group.

Alternative 1 — Any within a SimpleMap

One alternative that was considered is storing data in a SimpleMap using the any module. While this is a model that could be shipped without any change to Aptos-core, it incurs some drawbacks around developer and application complexity both on and off-chain. There’s no implicit caching, and therefore any read or write would require a deserialization of the object and any write would require a serialization. This means a transaction with 3 writes would result in 3 deserializations and 3 serializations. In order to get around this, the framework would need substantial, non-negligible changes, though with the emergence of SmartMap there may be more viability here. Finally, due to the lack of a common pattern, indexers and APIs would not be able to easily access this data.

Alternative 2 — Generics

Another alternative was using templates. The challenge with using templates is that data cannot be partially read without knowing what the template type is. For example, consider an object that might be a token. With resource groups, one could easily read the Object or the Token resource. With templates, one would need to read Object<Token>. This could also be worked around by complex framework changes and by accepting risks around partially reading BCS-encoded data, an approach which has yet to be explored. The same issues in Move would impact those using the REST API.

Generalizations of Issues

There are myriad combinations between the above two approaches. In general, the drawbacks are

  • High costs associated with deserialization and serialization for each read and/or write.
  • The current limitations around returning a reference to global memory limit utility of generics and add overheads to reads and writes of objects.
  • Limitations on standards resulting in more complexity for API and Indexer usage.
  • Data access within models that want to leverage a struct with store. A struct with key ability has stricter and more understandable properties than store. For example, the latter can lead to data being placed in arbitrary places, complicating global addressing and discoverability, which may be desirable for certain applications.

Specification

Within the Framework

A resource group consists of several distinct resources, i.e., Move structs that have the key ability.

Each resource group is identified by a common Move struct:

#[resource_group(scope = global)]
struct ObjectGroup { }

This struct has no fields and carries the resource_group attribute. The attribute's scope parameter limits where other entries within the resource group may be defined:

  • module — only resources defined within the same module may be stored within the same resource group.
  • address — only resources defined within the same address may be stored within the same resource group.
  • global — there are no limitations to where the resource is defined, any resource can be stored within the same resource group.

The motivation for using a struct is that:

  1. It allows all resources within the group to have a compile time validation that they are within that group
  2. It can build upon the existing storage model that knows how to read and write data stored at StructTags. This limits the implementation impact to the VM and readers of storage; storage itself can remain agnostic to this change.
  3. Only struct and fun can have attributes, which in turn lets us define additional parameters like scope.

Each entry in a resource group is identified by the resource_group_member attribute:

#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
struct Object has key {
    guid_creation_num: u64,
}

#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
struct Token has key {
    name: String,
}

During compilation and publishing, these attributes are checked to ensure that:

  1. A resource_group has no abilities and no fields.
  2. The scope within the resource_group can only become more permissive; that is, it can either remain at the same level of accessibility or increase to the next.
  3. Each resource within a resource group has a resource_group_member attribute.
  4. The group parameter is set to a struct that is labeled as a resource_group.
  5. During upgrade, an existing struct can neither add nor remove a resource_group_member attribute.

The motivation for each of these requirements are:

  1. Ensures that a resource_group struct won't be used for other storage purposes. While there is no strict requirement that this be true, it is intended to mitigate confusion for developers.
  2. Making a scope less permissive can result in breakage of deployed resource_group_members.
  3. Without explicitly labeling a resource as a resource_group_member, there is no way for Move to know that it is within a resource_group.
  4. Is discussed above as the intent to enforce clean typesafety and a single place to define the properties of the resource group.
  5. If there exists data stored within a resource, entering or leaving a resource group can result in that data being inaccessible.
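
Because grouping is purely a storage-layout attribute, Move code declares and accesses group members exactly as it would ordinary resources. A minimal sketch (module and field names are illustrative):

module example::gallery {
    use std::string::String;

    #[resource_group_member(group = aptos_framework::object::ObjectGroup)]
    struct Token has key {
        name: String,
    }

    public fun token_name(addr: address): String acquires Token {
        // No special API: the VM resolves the enclosing resource group
        // (AccessPath::ResourceGroup) and extracts Token transparently.
        borrow_global<Token>(addr).name
    }
}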

Within Storage

From a storage perspective, a resource group is stored as a BCS-encoded BTreeMap<StructTag, BCS encoded MoveValue>, where a StructTag is a known structure in Move of the form: { account: Address, module_name: String, struct_name: String }. Whereas, a typical resource is stored as a BCS encoded MoveValue.

Resource groups introduce a new storage access path, ResourceGroup, to distinguish from existing access paths. This provides a cleaner interface and segregation of different types of storage. This becomes advantageous to indexers and other direct readers of storage, which can now parse storage without inspecting module metadata. Using the example above, 0x1::account::Account is stored at AccessPath::Resource(0xcafef00d, 0x1::account::Account), whereas the resource group and its contents are stored at AccessPath::ResourceGroup(0xcafef00d, 0xaa::resource::Group).

The only way to tell that a resource is within a resource group is by reading the module metadata associated with the resource. After reading the module metadata, the storage client should either directly read from AccessPath::Resource, or first read AccessPath::ResourceGroup, followed by deserializing the BTreeMap and then extracting the appropriate resource.

At write time, an element of a resource group must be appropriately updated within its resource group by determining the delta to the resource group as a result of the write operation. This results in a handful of possibilities:

  • The resource group doesn’t exist in storage: this is a new write op.
  • The resource group already exists: this is a modification, even if the element is new to the resource group or an element is being removed.
  • All the elements have been removed: this is a deletion.

Within the Gas Schedule and the VM

The implications for the gas schedule are:

  • Reading a single resource from a resource group results in a charge for all the data within a resource group.
  • Writing a single resource to a resource group results in a charge for all the data within a resource group.

Within the Interface

The storage discussion above covers the layout for resources and resource groups. User-facing interfaces, such as the REST API, should not be exposed to resource groups; they are entirely a Move concept. A direct read on a resource group should be avoided. A resource group should be flattened and included within the set of resources when reading bulk resources at an address.

Reference Implementation

aptos-labs/aptos-core#6040

Risks and Drawbacks

  • This requires adjustments to the API layer to read resources stored within a resource group.
  • Paginated reads of resources on an account would need to be able to handle the discrepancy between distinct resources and those that are directly countable.
  • Entities reading change sets would need to be aware of how resources within resource groups are stored.
  • Each resource within a resource group adds the cost of a StructTag (likely much less than 100 bytes). Accesses to a resource group incur an extra deserialization for reads and an extra deserialization and serialization for writes. This is cheaper than the alternatives and still substantially cheaper than storage costs. Of course, developers are free to explore the delta in their own implementations, as resource groups do not eliminate individual resources.

None of these are major roadblocks and will be addressed as part of the implementation of Resource Groups.

Future Potential

While resources cannot be seamlessly adopted into resource groups, it is likely that many of the commonly used resources will be migrated into new resources within resource groups to give more flexibility to upgradeability, because a resource group does not lock developers into a fixed resource layout. In fact, this returns Aptos to supporting a more idiomatic Move, which co-locates resources stored at an address, freed from the performance considerations that hindered developers before.

Another area worth investigating is whether or not a templated struct can be within a resource group depending on what the generic type is. Consider the current Aptos Account and the CoinStore<AptosCoin>. Storing them separately has a negative impact on performance and storage costs.

In the current VM implementation, resources are cached upon read. This can be improved with caching of the entire resource group at read time.

Suggested implementation timeline

  • Heading to DevNet before end of January.
  • Should be in the February testnet cut.
  • Ideally lands in Mainnet sometime in March.

References

AIP template issues

The YAML header does not contain documentation for the following fields and their format:

  • created -> Explain what this is and its appropriate (date?) format.
  • last-call-end-date -> Explain what this is and its appropriate (date?) format.

Furthermore, for the aip field, it says "(this is determined by the AIP Manager)". However, one would expect to just put the AIP # here. So please clarify if this should be entered by the user (and what should be entered), or if it should be left blank. "Determined by the AIP manager" does not clarify if I need to enter anything there :)

[AIP-20][Discussion] Generic Cryptography Algebra and BLS12-381 Implementation

AIP Discussion

Discussion and feedback thread for AIP.

Link to AIP: #86

Summary

This AIP proposes the support of generic cryptography algebra operations in Aptos standard library.

The initial list of the supported generic operations includes group/field element serialization/deserialization, basic arithmetic, pairing, hash-to-structure, casting.

The initial list of supported algebraic structures includes groups/fields used in BLS12-381, a popular pairing-friendly curve as described here.

Either the operation list or the structure list can be extended by future AIPs.

Motivation

Algebraic structures are fundamental building blocks for many cryptographic schemes, but also hard to implement efficiently in pure Move.
This change should allow Move developers to implement generic constructions of those schemes, then get different instantiations by only switching the type parameter(s).

For example, if BLS12-381 groups and BN254 groups are supported, one can implement a generic Groth16 proof verifier construction, then be able to use both BLS12-381-based Groth16 proof verifier and BN254-based Groth16 proof verifier.

BLS12-381-based Groth16 proof verifier has been implemented this way as part of the reference implementation.
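
As a flavor of what such generic code looks like, below is a hedged sketch of a pairing-based signature check written once against the proposed API; the scheme and names are illustrative, not part of this AIP:

// Illustrative: verify e(H(msg), pk) == e(sig, g2) for any pairing-admitting
// triple (G1, G2, Gt) and hash-to-G1 suite H.
use aptos_std::crypto_algebra::{Self, Element};

public fun verify_sig<G1, G2, Gt, H>(
    pk: &Element<G2>,
    msg: &vector<u8>,
    sig: &Element<G1>,
    dst: &vector<u8>,
): bool {
    let h = crypto_algebra::hash_to<G1, H>(dst, msg);
    let lhs = crypto_algebra::pairing<G1, G2, Gt>(&h, pk);
    let rhs = crypto_algebra::pairing<G1, G2, Gt>(sig, &crypto_algebra::one<G2>());
    crypto_algebra::eq(&lhs, &rhs)
}

Instantiating verify_sig with BLS12-381 marker types yields a BLS-style verifier; swapping in BN254 types (if supported later) would yield another, with no code changes.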

Rationale

An alternative non-generic approach is to expose instantiated schemes directly in aptos_stdlib.
For example, we can define a Groth16 proof verification function
0x1::groth16_<curve>::verify_proof(vk, proof, public_inputs): bool
for every pairing-friendly elliptic curve <curve>.

For ECDSA signatures which require a hash function and a group, we can define
0x1::ecdsa_<hash>_<group>::verify_signature(pk, msg, sig):bool
for each pair of proper hash function <hash> and group <group>.

Compared with the proposed approach, the alternative approach saves Move developers the work of constructing the schemes. However, the size of aptos_stdlib could grow too quickly in the future.
Furthermore, the non-generic approach is not scalable from a development standpoint: a new native is needed for every combination of cryptosystem and its underlying algebraic structure (e.g., elliptic curve).

To keep the Aptos stdlib concise while still covering as many use cases as possible, the proposed generic approach should be chosen over the alternative approach.

Specifications

Generic Operations

Structs and Functions

Module aptos_std::crypto_algebra is designed to have the following definitions.

  • A generic struct Element<S> that represents an element of algebraic structure S.
  • Generic functions that represent group/field operations.

Below is the full specification in pseudo-Move.

module aptos_std::crypto_algebra {
    /// An element of the algebraic structure `S`.
    struct Element<S> has copy, drop;

    /// Check if `x == y` for elements `x` and `y` of an algebraic structure `S`.
    public fun eq<S>(x: &Element<S>, y: &Element<S>): bool;

    /// Convert a u64 to an element of an algebraic structure `S`.
    public fun from_u64<S>(value: u64): Element<S>;

    /// Return the additive identity of field `S`, or the identity of group `S`.
    public fun zero<S>(): Element<S>;

    /// Return the multiplicative identity of field `S`, or a fixed generator of group `S`.
    public fun one<S>(): Element<S>;

    /// Compute `-x` for an element `x` of a structure `S`.
    public fun neg<S>(x: &Element<S>): Element<S>;

    /// Compute `x + y` for elements `x` and `y` of a structure `S`.
    public fun add<S>(x: &Element<S>, y: &Element<S>): Element<S>;

    /// Compute `x - y` for elements `x` and `y` of a structure `S`.
    public fun sub<S>(x: &Element<S>, y: &Element<S>): Element<S>;

    /// Try computing `x^(-1)` for an element `x` of a structure `S`.
    /// Return none if `x` does not have a multiplicative inverse in the structure `S`
    /// (e.g., when `S` is a field, and `x` is zero).
    public fun inv<S>(x: &Element<S>): Option<Element<S>>;

    /// Compute `x * y` for elements `x` and `y` of a structure `S`.
    public fun mul<S>(x: &Element<S>, y: &Element<S>): Element<S>;

    /// Try computing `x / y` for elements `x` and `y` of a structure `S`.
    /// Return none if `y` does not have a multiplicative inverse in the structure `S`
    /// (e.g., when `S` is a field, and `y` is zero).
    public fun div<S>(x: &Element<S>, y: &Element<S>): Option<Element<S>>;

    /// Compute `x^2` for an element `x` of a structure `S`. Faster and cheaper than `mul(x, x)`.
    public fun sqr<S>(x: &Element<S>): Element<S>;

    /// Compute `2*P` for an element `P` of a structure `S`. Faster and cheaper than `add(P, P)`.
    public fun double<G>(element_p: &Element<G>): Element<G>;

    /// Compute `k*P`, where `P` is an element of a group `G` and `k` is an element of the scalar field `S` of group `G`.
    public fun scalar_mul<G, S>(element_p: &Element<G>, scalar_k: &Element<S>): Element<G>;

    /// Compute `k[0]*P[0]+...+k[n-1]*P[n-1]`, where
    /// `P[]` are `n` elements of group `G` represented by parameter `elements`, and
    /// `k[]` are `n` elements of the scalarfield `S` of group `G` represented by parameter `scalars`.
    ///
    /// Abort with code `std::error::invalid_argument(E_NON_EQUAL_LENGTHS)` if the sizes of `elements` and `scalars` do not match.
    public fun multi_scalar_mul<G, S>(elements: &vector<Element<G>>, scalars: &vector<Element<S>>): Element<G>;

    /// Efficiently compute `e(P[0],Q[0])+...+e(P[n-1],Q[n-1])`,
    /// where `e: (G1,G2) -> (Gt)` is a pre-compiled pairing function from groups `(G1,G2)` to group `Gt`,
    /// `P[]` are `n` elements of group `G1` represented by parameter `g1_elements`, and
    /// `Q[]` are `n` elements of group `G2` represented by parameter `g2_elements`.
    ///
    /// Abort with code `std::error::invalid_argument(E_NON_EQUAL_LENGTHS)` if the sizes of `g1_elements` and `g2_elements` do not match.
    public fun multi_pairing<G1,G2,Gt>(g1_elements: &vector<Element<G1>>, g2_elements: &vector<Element<G2>>): Element<Gt>;

    /// Compute a pre-compiled pairing function (a.k.a., bilinear map) on `element_1` and `element_2`.
    /// Return an element in the target group `Gt`.
    public fun pairing<G1,G2,Gt>(element_1: &Element<G1>, element_2: &Element<G2>): Element<Gt>;

    /// Try deserializing a byte array to an element of an algebraic structure `S` using a given serialization format `F`.
    /// Return none if the deserialization failed.
    public fun deserialize<S, F>(bytes: &vector<u8>): Option<Element<S>>;

    /// Serialize an element of an algebraic structure `S` to a byte array using a given serialization format `F`.
    public fun serialize<S, F>(element: &Element<S>): vector<u8>;

    /// Get the order of structure `S`, a big integer little-endian encoded as a byte array.
    public fun order<G>(): vector<u8>;

    /// Cast an element of a structure `S` to a super-structure `L`.
    public fun upcast<S,L>(element: &Element<S>): Element<L>;

    /// Try casting an element `x` of a structure `L` to a sub-structure `S`.
    /// Return none if `x` is not a member of `S`.
    ///
    /// NOTE: Membership check is performed inside, which can be expensive, depending on the structures `L` and `S`.
    public fun downcast<L,S>(element_x: &Element<L>): Option<Element<S>>;

    /// Hash an arbitrary-length byte array `msg` into structure `S` with a domain separation tag `dst`
    /// using the given hash-to-structure suite `H`.
    public fun hash_to<St, Su>(dst: &vector<u8>, msg: &vector<u8>): Element<St>;

    #[test_only]
    /// Generate a random element of an algebraic structure `S`.
    public fun rand_insecure<S>(): Element<S>;
}

In general, every structure implements basic operations like (de)serialization, equality check, random sampling.

For example, a group may also implement the following operations (additive notation is used).

  • order() for getting the group order.
  • zero() for getting the group identity.
  • one() for getting the group generator (if exists).
  • neg() for group element inversion.
  • add() for basic group operation.
  • sub() for group element subtraction.
  • double() for efficient group element doubling.
  • scalar_mul() for group scalar multiplication.
  • multi_scalar_mul() for efficient group multi-scalar multiplication.
  • hash_to() for hash-to-group.

As another example, a field may also implement the following operations.

  • order() for getting the field order.
  • zero() for the field additive identity.
  • one() for the field multiplicative identity.
  • add() for field addition.
  • sub() for field subtraction.
  • mul() for field multiplication.
  • div() for field division.
  • neg() for field negation.
  • inv() for field inversion.
  • sqr() for efficient field element squaring.
  • from_u64() for quick conversion from u64 to field element.

Similarly, for 3 groups G1, G2, Gt that admit a bilinear map, pairing<G1, G2, Gt>() and multi_pairing<G1, G2, Gt>() may be implemented.

For a subset/superset relationship between 2 structures, upcast() and downcast() may be implemented.
E.g., in BLS12-381, Gt is a multiplicative subgroup of Fq12, so upcasting from Gt to Fq12 and downcasting from Fq12 to Gt can be supported.

Shared Scalar Fields

Some groups share the same group order, and an ergonomic design for this is to allow such groups to share the same scalar field (mainly for the purpose of scalar multiplication).

In other words, the following should be supported.

// crypto_algebra.move
public fun scalar_mul<G,S>(element_p: &Element<G>, scalar_k: &Element<S>): Element<G>;

// user_contract.move
let k: Element<ScalarForBx> = somehow_get_k();
let p1 = one<GroupB1>();
let p2 = one<GroupB2>();
let q1 = scalar_mul<GroupB1, ScalarForBx>(&p1, &k);
let q2 = scalar_mul<GroupB2, ScalarForBx>(&p2, &k);

Handling Incorrect Type Parameter(s)

There is currently no easy way to ensure type safety for the generic operations.
E.g., pairing<A,B,C>(a,b) can compile even if groups A, B and C do not admit a pairing.

Therefore, the backend should handle the type checks at runtime.
If a group operation that takes 2+ type parameters is invoked with incompatible type parameters, it must abort. For example, scalar_mul<GroupA, ScalarB>(), where GroupA and ScalarB have different orders, will abort with a “not implemented” error.

Invoking operation functions with user-defined types should also abort with a “not implemented” error.
For example, zero<std::option::Option<u64>>() will abort.

Implementation of BLS12-381 structures

To support a wide-enough variety of BLS12-381 operations using the aptos_std::crypto_algebra API,
we implement several marker types for the relevant groups of order r (for G1, G2 and Gt) and fields (e.g., Fr, Fq12).

We also implement marker types for popular serialization formats and hash-to-group suites.

Below, we describe all possible marker types we could implement for BLS12-381
and mark the ones that we actually implement as "implemented".
These, we believe, should be sufficient to support most BLS12-381 applications.

Fq

The finite field $F_q$ used in BLS12-381 curves with a prime order $q$ equal to
0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab.

FormatFqLsb

A serialization format for Fq elements,
where an element is represented by a byte array b[] of size 48 with the least significant byte (LSB) coming first.

FormatFqMsb

A serialization format for Fq elements,
where an element is represented by a byte array b[] of size 48 with the most significant byte (MSB) coming first.

Fq2

The finite field $F_{q^2}$ used in BLS12-381 curves,
which is an extension field of Fq, constructed as $F_{q^2}=F_q[u]/(u^2+1)$.

FormatFq2LscLsb

A serialization format for Fq2 elements,
where an element in the form $(c_0+c_1\cdot u)$ is represented by a byte array b[] of size 96,
which is a concatenation of its coefficients serialized, with the least significant coefficient (LSC) coming first:

  • b[0..48] is $c_0$ serialized using FormatFqLsb.
  • b[48..96] is $c_1$ serialized using FormatFqLsb.

FormatFq2MscMsb

A serialization format for Fq2 elements,
where an element in the form $(c_0+c_1\cdot u)$ is represented by a byte array b[] of size 96,
which is a concatenation of its coefficients serialized, with the most significant coefficient (MSC) coming first:

  • b[0..48] is $c_1$ serialized using FormatFqLsb.
  • b[48..96] is $c_0$ serialized using FormatFqLsb.

Fq6

The finite field $F_{q^6}$ used in BLS12-381 curves,
which is an extension field of Fq2, constructed as $F_{q^6}=F_{q^2}[v]/(v^3-u-1)$.

FormatFq6LscLsb

A serialization scheme for Fq6 elements,
where an element in the form $(c_0+c_1\cdot v+c_2\cdot v^2)$ is represented by a byte array b[] of size 288,
which is a concatenation of its coefficients serialized, with the least significant coefficient (LSC) coming first:

  • b[0..96] is $c_0$ serialized using FormatFq2LscLsb.
  • b[96..192] is $c_1$ serialized using FormatFq2LscLsb.
  • b[192..288] is $c_2$ serialized using FormatFq2LscLsb.

Fq12 (implemented)

The finite field $F_{q^{12}}$ used in BLS12-381 curves,
which is an extension field of Fq6, constructed as $F_{q^{12}}=F_{q^6}[w]/(w^2-v)$.

FormatFq12LscLsb (implemented)

A serialization scheme for Fq12 elements,
where an element $(c_0+c_1\cdot w)$ is represented by a byte array b[] of size 576,
which is a concatenation of its coefficients serialized, with the least significant coefficient (LSC) coming first.

  • b[0..288] is $c_0$ serialized using FormatFq6LscLsb.
  • b[288..576] is $c_1$ serialized using FormatFq6LscLsb.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.

G1Full

A group constructed by the points on the BLS12-381 curve $E(F_q): y^2=x^3+4$ and the point at infinity,
under the elliptic curve point addition.
It contains the prime-order subgroup $G_1$ used in pairing.

G1 (implemented)

The group $G_1$ in BLS12-381-based pairing $G_1 \times G_2 \rightarrow G_t$.
It is a subgroup of G1Full with a prime order $r$
equal to 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001.
(so Fr is the associated scalar field).

FormatG1Uncompr (implemented)

A serialization scheme for G1 elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a G1 element p and outputs a byte array of size 96.

  1. Let (x,y) be the coordinates of p if p is on the curve, or (0,0) otherwise.
  2. Serialize x and y into b_x[] and b_y[] respectively using FormatFqMsb.
  3. Concatenate b_x[] and b_y[] into b[].
  4. If p is the point at infinity, set the infinity bit: b[0] := b[0] | 0x40.
  5. Return b[].

Below is the deserialization procedure that takes a byte array b[] and outputs either a G1 element or none.

  1. If the size of b[] is not 96, return none.
  2. Compute the compression flag as b[0] & 0x80 != 0.
  3. If the compression flag is true, return none.
  4. Compute the infinity flag as b[0] & 0x40 != 0.
  5. If the infinity flag is set, return the point at infinity.
  6. Deserialize [b[0] & 0x1f, b[1], ..., b[47]] to x using FormatFqMsb. If x is none, return none.
  7. Deserialize [b[48], ..., b[95]] to y using FormatFqMsb. If y is none, return none.
  8. Check if (x,y) is on curve E. If not, return none.
  9. Check if (x,y) is in the subgroup of order r. If not, return none.
  10. Return (x,y).

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
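For illustration, a minimal sketch of a deserialize-then-serialize round trip through the proposed API, assuming the generic deserialize/serialize functions of aptos_std::crypto_algebra:

use std::option;
use aptos_std::crypto_algebra::{Self, Element};
use aptos_std::bls12381_algebra::{G1, FormatG1Uncompr};

// Aborts if the bytes are rejected by the deserialization procedure above
// (wrong size, compression flag set, off-curve, or outside the order-r subgroup).
fun roundtrip_g1(bytes: vector<u8>): vector<u8> {
    let maybe_p = crypto_algebra::deserialize<G1, FormatG1Uncompr>(&bytes);
    let p: Element<G1> = option::extract(&mut maybe_p);
    crypto_algebra::serialize<G1, FormatG1Uncompr>(&p)
}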

FormatG1Compr (implemented)

A serialization scheme for G1 elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a G1 element p and outputs a byte array of size 48.

  1. Let (x,y) be the coordinates of p if p is on the curve, or (0,0) otherwise.
  2. Serialize x into b[] using FormatFqMsb.
  3. Set the compression bit: b[0] := b[0] | 0x80.
  4. If p is the point at infinity, set the infinity bit: b[0] := b[0] | 0x40.
  5. If y > -y, set the lexicographical flag: b[0] := b[0] | 0x20.
  6. Return b[].

Below is the deserialization procedure that takes a byte array b[] and outputs either a G1 element or none.

  1. If the size of b[] is not 48, return none.
  2. Compute the compression flag as b[0] & 0x80 != 0.
  3. If the compression flag is false, return none.
  4. Compute the infinity flag as b[0] & 0x40 != 0.
  5. If the infinity flag is set, return the point at infinity.
  6. Compute the lexicographical flag as b[0] & 0x20 != 0.
  7. Deserialize [b[0] & 0x1f, b[1], ..., b[47]] to x using FormatFqMsb. If x is none, return none.
  8. Solve the curve equation with x for y. If no such y exists, return none.
  9. Let y' be max(y,-y) if the lexicographical flag is set, or min(y,-y) otherwise.
  10. Check if (x,y') is in the subgroup of order r. If not, return none.
  11. Return (x,y').

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
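For illustration, a small Move sketch of the flag decoding used in steps 2–6 of the procedures above (a helper for exposition only, not part of the proposed API):

use std::vector;

fun decode_flags(b: &vector<u8>): (bool, bool, bool) {
    let b0 = *vector::borrow(b, 0);
    let compression = (b0 & 0x80) != 0;     // bit 7: compressed encoding
    let infinity = (b0 & 0x40) != 0;        // bit 6: point at infinity
    let lexicographical = (b0 & 0x20) != 0; // bit 5: y is max(y, -y)
    (compression, infinity, lexicographical)
}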

G2Full

A group constructed by the points on a curve $E'(F_{q^2}): y^2=x^3+4(u+1)$ and the point at infinity,
under the elliptic curve point addition.
It contains the prime-order subgroup $G_2$ used in pairing.

G2 (implemented)

The group $G_2$ in BLS12-381-based pairing $G_1 \times G_2 \rightarrow G_t$.
It is a subgroup of G2Full with a prime order $r$ equal to
0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001.
(so Fr is the scalar field).

FormatG2Uncompr (implemented)

A serialization scheme for G2 elements derived from
https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a G2 element p and outputs a byte array of size 192.

  1. Let (x,y) be the coordinates of p if p is on the curve, or (0,0) otherwise.
  2. Serialize x and y into b_x[] and b_y[] respectively using FormatFq2MscMsb.
  3. Concatenate b_x[] and b_y[] into b[].
  4. If p is the point at infinity, set the infinity bit in b[]: b[0] := b[0] | 0x40.
  5. Return b[].

Below is the deserialization procedure that takes a byte array b[] and outputs either a G2 element or none.

  1. If the size of b[] is not 192, return none.
  2. Compute the compression flag as b[0] & 0x80 != 0.
  3. If the compression flag is true, return none.
  4. Compute the infinity flag as b[0] & 0x40 != 0.
  5. If the infinity flag is set, return the point at infinity.
  6. Deserialize [b[0] & 0x1f, ..., b[95]] to x using FormatFq2MscMsb. If x is none, return none.
  7. Deserialize [b[96], ..., b[191]] to y using FormatFq2MscMsb. If y is none, return none.
  8. Check if (x,y) is on the curve E'. If not, return none.
  9. Check if (x,y) is in the subgroup of order r. If not, return none.
  10. Return (x,y).

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.

FormatG2Compr (implemented)

A serialization scheme for G2 elements derived from
https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a G2 element p and outputs a byte array of size 96.

  1. Let (x,y) be the coordinates of p if p is on the curve, or (0,0) otherwise.
  2. Serialize x into b[] using FormatFq2MscMsb.
  3. Set the compression bit: b[0] := b[0] | 0x80.
  4. If p is the point at infinity, set the infinity bit: b[0] := b[0] | 0x40.
  5. If y > -y, set the lexicographical flag: b[0] := b[0] | 0x20.
  6. Return b[].

Below is the deserialization procedure that takes a byte array b[] and outputs either a G2 element or none.

  1. If the size of b[] is not 96, return none.
  2. Compute the compression flag as b[0] & 0x80 != 0.
  3. If the compression flag is false, return none.
  4. Compute the infinity flag as b[0] & 0x40 != 0.
  5. If the infinity flag is set, return the point at infinity.
  6. Compute the lexicographical flag as b[0] & 0x20 != 0.
  7. Deserialize [b[0] & 0x1f, b[1], ..., b[95]] to x using FormatFq2MscMsb. If x is none, return none.
  8. Solve the curve equation with x for y. If no such y exists, return none.
  9. Let y' be max(y,-y) if the lexicographical flag is set, or min(y,-y) otherwise.
  10. Check if (x,y') is in the subgroup of order r. If not, return none.
  11. Return (x,y').

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.

Gt (implemented)

The group $G_t$ in BLS12-381-based pairing $G_1 \times G_2 \rightarrow G_t$.
It is a multiplicative subgroup of Fq12,
with a prime order $r$ equal to 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001.
(so Fr is the scalar field).
The identity of Gt is 1.

FormatGt (implemented)

A serialization scheme for Gt elements.

To serialize, it treats a Gt element p as an Fq12 element and serializes it using FormatFq12LscLsb.

To deserialize, it uses FormatFq12LscLsb to deserialize an Fq12 element, then tests its membership in Gt.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.

Fr (implemented)

The finite field $F_r$ that can be used as the scalar field
associated with the groups $G_1$, $G_2$, $G_t$ in BLS12-381-based pairing.

FormatFrLsb (implemented)

A serialization format for Fr elements,
where an element is represented by a byte array b[] of size 32 with the least significant byte (LSB) coming first.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0, blst-0.3.7.

FormatFrMsb (implemented)

A serialization scheme for Fr elements,
where an element is represented by a byte array b[] of size 32 with the most significant byte (MSB) coming first.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0, blst-0.3.7.

HashG1XmdSha256SswuRo (implemented)

The hash-to-curve suite BLS12381G1_XMD:SHA-256_SSWU_RO_ that hashes a byte array into G1 elements.
Full specification is defined in https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hash-to-curve-16#name-bls12-381-g1.

HashG2XmdSha256SswuRo (implemented)

The hash-to-curve suite BLS12381G2_XMD:SHA-256_SSWU_RO_ that hashes a byte array into G2 elements.
Full specification is defined in https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hash-to-curve-16#name-bls12-381-g2.
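For illustration, a minimal sketch of hashing a message to G1 through the proposed API, assuming the generic hash_to function of aptos_std::crypto_algebra:

use aptos_std::crypto_algebra::{Self, Element};
use aptos_std::bls12381_algebra::{G1, HashG1XmdSha256SswuRo};

// dst is the domain-separation tag required by the hash-to-curve spec.
fun hash_msg_to_g1(dst: vector<u8>, msg: vector<u8>): Element<G1> {
    crypto_algebra::hash_to<G1, HashG1XmdSha256SswuRo>(&dst, &msg)
}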

Reference Implementation

https://github.com/aptos-labs/aptos-core/pull/6550/files

Risks and Drawbacks

Developing cryptographic schemes, whether in Move or in any other language, is very difficult due to the inherent mathematic complexity of such schemes, as well as the difficulty of using cryptographic libraries securely.

As a result, we caution Move application developers that
implementing cryptographic schemes using crypto_algebra.move and/or the bls12381_algebra.move modules will be error prone and could result in vulnerable applications.

That being said, the crypto_algebra.move and the bls12381_algebra.move Move modules have been designed with safety in mind.
First, we offer a minimal, hard-to-misuse abstraction for algebraic structures like groups and fields.
Second, our Move modules are type safe (e.g., inversion in a group G returns an Option).
Third, our BLS12-381 implementation always performs prime-order subgroup checks when deserializing group elements, to avoid serious implementation bugs.

Future Potential

The crypto_algebra.move Move module can be extended to support more structures (e.g., new elliptic curves) and operations (e.g., batch inversion of field elements). This will:

  1. Allow porting existing generic cryptosystems built on top of this module to new, potentially-faster-or-smaller curves.
  2. Allow building more complicated cryptographic applications that leverage new operations or new algebraic structures (e.g., rings, hidden-order groups, etc.)

Once the Move language is upgraded with support for some kind of interfaces,
it can be used to rewrite the Move-side specifications to ensure type safety at compile time.

Suggested implementation timeline

The change should be available on devnet in April 2023.

[AIP-29][Peer monitoring service]

AIP-29 - Peer monitoring service

Summary

This AIP proposes a new “Peer Monitoring” service that operates on Aptos nodes and allows nodes to track, share and monitor peer information and behaviors. The service aims to: (i) improve node latency and performance, by allowing components to make smarter decisions when disseminating and receiving data; (ii) improve operational quality and reliability, by allowing nodes to dynamically work around failures and unstable peers; (iii) improve peer discovery and selection, by providing a foundation for peer bootstrapping and gossip; (iv) improve security, by allowing nodes to better detect and react to malicious peers and bad behaviors; and (v) improve network observability, by gathering and exporting important network information.

Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-29.md

[AIP-18][Discussion] Introducing SmartVector and SmartTable to aptos_std


aip: 18
title: Introducing SmartVector and SmartTable to aptos_std
author: lightmark
Status: Draft
type: Standard (Framework)
created: 03/07/2022

AIP-18 - Introducing SmartVector and SmartTable to aptos_std

Summary

This AIP proposes to move two storage-efficient data structures into the Aptos Framework. In general, these two structs
can lower the storage footprint by packing several elements into one storage slot, instead of one element per slot as a
normal Table would do.

Motivation

Move is not hard to learn, but the interplay between Move and the underlying infrastructure is not intuitive, such as
how gas scheduling works for both storage and execution. Specifically, how Move data structures are persisted in
storage, how the data is represented, and what the layout looks like are not well understood. Having witnessed many
misuses of vector and table, our sequential and associative container types, across various ecosystem projects on
Aptos, we are well aware that, due to this lack of understanding of our gas schedule, most Move developers on Aptos are
not able to write the most gas-efficient smart contract code. This leads to:

  1. Some projects complained that gas charged is more expensive than expected.
  2. People abuse Table, which is what we try to disincentivize in the long run for small state storage.

So we plan to provide a one-size-fits-all solution for both vector and table data structures that handles data scaling
in a more optimized way, taking the storage model and gas schedule into account. Most developers then no longer need to
weigh the gas costs of different container types against each other; instead, they can focus on the product logic.

Rationale

The design principle is to put more data into one slot without significant write amplification.

  • SmartVector takes as few slots as possible. Each slot can contain more than one element. When a predefined
    slot size is reached, it opens a new slot, balancing the cost of bytes written against the cost of item creation.
  • SmartTable also packs as many key-value pairs into one slot as possible. When a slot exceeds a threshold, the
    table grows one bucket at a time. Meanwhile, the number of key-value pairs in each slot should not
    be too skewed.

Specification

SmartVector

Data Structure Specification

struct SmartVector<T> has store {
    inline_vec: vector<T>,
    big_vec: Option<BigVector<T>>,
}

In a nutshell, SmartVector consists of a vector<T> (inline_vec) and an Option<BigVector<T>> (big_vec), where
BigVector<T> is inherently a TableWithLength with metadata. It is noted that we use a plain vector rather than an
Option for inline_vec, to avoid imposing a drop capability constraint on T. The idea is:

  1. When the total size of the data in the smart vector is relatively small, only inline_vec holds data, and it stores
    all the data as a normal vector. At this point, the smart vector is just a wrapper around a normal vector.
  2. When the number of elements in inline_vec reaches a threshold (M), a new BigVector<T> is created in
    big_vec with a bucket size (K) calculated from the estimated average serialized size of T. All the
    following elements to push are then put into this BigVector<T>.

Interfaces

SmartVector implements most of the basic functions of std::vector.

It is noted that remove, reverse and append would be very costly in terms of storage fees, because they all
involve modifying a number of table items.
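A minimal usage sketch, assuming the function names mirror std::vector (the constructor name is an assumption):

use aptos_std::smart_vector::{Self, SmartVector};

fun demo(): u64 {
    let v: SmartVector<u64> = smart_vector::new();
    smart_vector::push_back(&mut v, 42);
    let x = *smart_vector::borrow(&v, 0);
    smart_vector::pop_back(&mut v);
    smart_vector::destroy_empty(v);
    x
}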

Determine default configurations

The current solution works as follows: when pushing a new element, size_of_val of that element is multiplied
by len(inline_vec) + 1; if the result is greater than a hardcoded value, 150, the new element becomes the first element
in big_vec, whose bucket_size, K, is calculated by dividing a hardcoded value, 1024, by the average serialized size of
the elements in inline_vec plus the element being pushed.
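A sketch of this decision logic, under the constants above (the helper name is illustrative; size_of_val is aptos_std::type_info::size_of_val):

use std::vector;
use aptos_std::type_info;

// Returns true when the element being pushed should go into big_vec
// instead of growing inline_vec further.
fun should_spill_to_big_vec<T>(inline_vec: &vector<T>, new_elem: &T): bool {
    let estimated_bytes = type_info::size_of_val(new_elem) * (vector::length(inline_vec) + 1);
    estimated_bytes > 150
}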

SmartTable

Data Structure Specification

/// SmartTable entry contains both the key and value.
struct Entry<K, V> has copy, drop, store {
    hash: u64,
    key: K,
    value: V,
}

struct SmartTable<K, V> has store {
    buckets: TableWithLength<u64, vector<Entry<K, V>>>,
    num_buckets: u64,
    // number of bits to represent num_buckets
    level: u8,
    // total number of items
    size: u64,
    // Split will be triggered when target load threshold is reached when adding a new entry. In percent.
    split_load_threshold: u8,
    // The target size of each bucket, which is NOT enforced so oversized buckets can exist.
    target_bucket_size: u64,
}

SmartTable is basically a TableWithLength where the key is h(hash(user_key)), the u64 hash of the user key modulo the
number of buckets, and the value is a bucket, represented by a vector of all user key-value (kv) pairs whose keys hash
to the same bucket. Compared to the native Table, it makes table slots more compact by packing several kv pairs into
one slot instead of one per slot.

SmartTable internally adopts the linear hashing (LH) algorithm, which implements a hash table that grows one bucket at
a time. In our proposal, each bucket takes one slot, represented by a vector<Entry<K, V>> as the value type in a
TableWithLength. LH serves the motivation well, because its goal is to minimize the number of slots while dynamically
maintaining a table-like structure.

There are two parameters determining the behavior of SmartTable.

  • split_load_threshold: when a new kv pair is inserted, the current load factor is calculated
    as load_factor = 100% * size / (target_bucket_size * num_buckets).
    • If load_factor ≥ split_load_threshold, the current table is getting bloated and a bucket needs to be split.
    • Otherwise, no action is needed, since the current number of buckets is sufficient to hold all the data.
  • target_bucket_size: the ideal number of kv pairs each bucket holds. It is noted that this is not enforced, but only
    used as an input to the load factor calculation. In reality, an individual bucket's size can sometimes exceed this value.

Interfaces

SmartTable implements all the std::table functions.
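A minimal usage sketch, assuming SmartTable mirrors the std::table API as stated (the constructor name is an assumption):

use aptos_std::smart_table::{Self, SmartTable};

fun demo(): u64 {
    let t: SmartTable<address, u64> = smart_table::new();
    smart_table::add(&mut t, @0xcafe, 100);
    let v = *smart_table::borrow(&t, @0xcafe);
    smart_table::remove(&mut t, @0xcafe);
    smart_table::destroy_empty(t);
    v
}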

Determine default configurations

  • split_load_threshold: 75%
  • target_bucket_size: max(1, 1024 / max(1, size_of_val(first_entry_inserted)))

The current heuristic, used to automatically calculate target_bucket_size when it is not specified, is to divide the
free quota, 1024, by the size of the first entry inserted into the table.

Linear Hashing(LH) in SmartTable

  • LH stores kv pairs in buckets. Each bucket stores all the kv pairs whose keys have the same hash. In
    SmartTable, each bucket is represented as a vector<Entry<K, V>>. A potential follow-up is to replace it with
    a native ordered map.
  • LH requires a family of hash functions. At any time, two functions from this family are in use. SmartTable uses
    h(key) = hash(key) mod 2^{level} and H(key) = hash(key) mod 2^{level + 1} as its hash functions; the result is
    always an integer.
  • level is an internal variable starting from 0. When 2^{level} buckets have been created, level increments, so h(key)
    and H(key) double their modulo bases together. For example, if previously h(key) = hash(key) % 2 and H(key) = hash(key) % 4,
    then after level increments, h(key) = hash(key) % 4 and H(key) = hash(key) % 8.
Split
  1. SmartTable starts with 1 bucket and level = 0, so h(key) = hash(key) % 1 and H(key) = hash(key) % 2. Each round of
    splitting starts from bucket 0.
  2. When splits happen, the bucket to split advances incrementally until it reaches the last bucket of this level
    round, 2^level - 1. Once that last bucket is split, 2^level buckets have been split during the round, producing
    2^level additional buckets, so the total number of buckets has doubled. Then level increments, and another split
    round starts from bucket 0 again. Correspondingly, h(key) and H(key) double their modulo bases together.
  3. The index of the bucket to split is always num_buckets ^ (1 << level), equivalently num_buckets % (1 << level),
    not the bucket we just inserted a kv pair into. For example, with level = 2 and num_buckets = 5, the bucket to
    split is 5 ^ 4 = 1.
  4. When a split happens, all the entries in the split bucket are redistributed between it and the new bucket
    using H(key).
Lookup

Lookup is tricky, as we have to use both h(key) and H(key). First we compute bucket_index = H(key). If the result is
the index of an existing bucket, H(key) works, and we use bucket_index to find the right bucket. However, if the result
is not a valid index of an existing bucket, the corresponding bucket has not been split yet, so we turn to h(key) to
find the correct bucket.
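A sketch of this index computation, consistent with the description above (the function name is illustrative):

fun bucket_index(level: u8, num_buckets: u64, hash: u64): u64 {
    let index = hash % (1 << (level + 1)); // H(key)
    if (index < num_buckets) {
        index // the bucket addressed by H(key) already exists
    } else {
        index % (1 << level) // not split yet; fall back to h(key)
    }
}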

Reference Implementation

smart_vector and
smart_table

Risks and Drawbacks

The potential drawbacks of these two data structures are:

  1. Indexing is harder, as each of them packs multiple entries into one slot/bucket.
  2. For SmartTable, the gas savings may not be ideal for some operations yet, since lookup does a linear search,
    and adding an item may trigger bucket splitting and reshuffling.
  3. The smart data structures are not well supported by the indexer, as they involve tables with opaque internals.
  4. Under the current gas schedule, the gas cost may be much higher, since the storage fee is re-charged each time.
    But we expect a different gas schedule to be published soon, at which point we will benchmark the gas cost of the
    smart data structures.

Drawback 2 can be mitigated by using a native ordered map implementation as a bucket.

Gas Saving

After the 100x execution gas reduction, we benchmarked the gas cost of creating, and of adding one element to, a
vector/SmartVector/Table/SmartTable.

gas units    | creation with 10000 u64 elements | push a new element | read an existing element
vector       | 4080900                          | 3995700            | 2600
smart vector | 5084900                          | 2100               | 400

gas units    | creation with 1000 u64 kv pairs  | add a new kv pair  | read an existing kv
table        | 50594900                         | 50800              | 300
smart table  | 2043500                          | 700                | 300

As reflected in the tables above, the smart data structures significantly outperform vector and table for large
datasets, in terms of both creation and updates.

In a nutshell, we recommend using smart data structures for use cases involving large datasets, such as whitelists.
They can also be easily destroyed if the internal elements have drop.

Future Potential

  • Currently, we use size_of_val to automatically determine the configurations of both data structures. If Move
    supported serialized-size estimation natively, the cost of those operations could drop a lot.
  • As mentioned before, bucket splitting may incur reshuffling, and lookup requires a linear scan, which is costly
    when a vector is used as a bucket. With a native map struct, the gas cost would be cut down significantly.

Suggested implementation timeline

Code complete: March 2023

Suggested deployment timeline

Testnet release: March 2023

[AIP-4][Discussion] Update Simple Map To Save Gas

AIP: Update Simple Map To Save Gas

Summary

Change the internal implementation of SimpleMap in order to reduce gas prices with minimal impact on Move and Public APIs.

Motivation

The current implementation of SimpleMap uses a BCS-based logarithmic comparator to identify the slot where data is stored in a Vector. Unfortunately, this is substantially more expensive than a trivial linear implementation, because each comparison requires BCS serialization followed by a comparison. The conversion to BCS cannot be resolved as quickly as a traditional comparator and substantially impacts gas prices.

Proposal

  • Replace the internal definition of find from the current logarithmic implementation to a linear search across the vector.
  • Replace the functionality within add to call vector::push_back and append new values instead of inserting them into their sorted position.

Rationale

This will result in the following performance differences:

Operation                                              | Gas Unit Before | Gas Unit After | Delta
CreateCollection                                       | 174200          | 174200         |
CreateTokenFirstTime                                   | 384800          | 384800         |
MintToken                                              | 117100          | 117100         |
MutateToken                                            | 249200          | 249200         |
MutateTokenAdd10NewProperties                          | 1148700         | 390700         | 64%
MutateTokenMutate10ExistingProperties                  | 1698300         | 411200         | 75%
MutateTokenAdd90NewProperties                          | 20791800        | 10031700       | 51%
MutateTokenMutate100ExistingProperties                 | 27184500        | 10215200       | 62%
MutateTokenAdd300NewProperties (100 existing, 300 new) | 126269000       | 135417900      | -7%
MutateTokenMutate400ExistingProperties                 | 143254200       | 136036800      | 5%

When the token has only one property on-chain, the mutation cost doesn't change. However:

  • if the user wants to add 10 new properties or update 10 existing properties, the gas cost is reduced by 64% and 75% respectively;
  • if the user wants to store around 100 properties on-chain, the gas cost is reduced by 51% and 62%.

Mutating 600 properties was also tested, but failed due to exceeding the maximum gas.

Implementation

Draft benchmark PR https://github.com/aptos-labs/aptos-core/pull/5765/files

// Before the proposed change
public fun add<Key: store, Value: store>(
    map: &mut SimpleMap<Key, Value>,
    key: Key,
    value: Value,
) {
    let (maybe_idx, maybe_placement) = find(map, &key);
    assert!(option::is_none(&maybe_idx), error::invalid_argument(EKEY_ALREADY_EXISTS));

    // Append to the end and then swap elements until the list is ordered again
    vector::push_back(&mut map.data, Element { key, value });

    let placement = option::extract(&mut maybe_placement);
    let end = vector::length(&map.data) - 1;
    while (placement < end) {
        vector::swap(&mut map.data, placement, end);
        placement = placement + 1;
    };
}

// After the change
public fun add<Key: store, Value: store>(
    map: &mut SimpleMap<Key, Value>,
    key: Key,
    value: Value,
) {
    let maybe_idx = find_element(map, &key);
    assert!(option::is_none(&maybe_idx), error::invalid_argument(EKEY_ALREADY_EXISTS));

    vector::push_back(&mut map.data, Element { key, value });
}
// Before the proposed change
fun find<Key: store, Value: store>(
    map: &SimpleMap<Key, Value>,
    key: &Key,
): (option::Option<u64>, option::Option<u64>) {
    let length = vector::length(&map.data);

    if (length == 0) {
        return (option::none(), option::some(0))
    };

    let left = 0;
    let right = length;

    while (left != right) {
        let mid = left + (right - left) / 2;
        let potential_key = &vector::borrow(&map.data, mid).key;
        if (comparator::is_smaller_than(&comparator::compare(potential_key, key))) {
            left = mid + 1;
        } else {
            right = mid;
        };
    };

    if (left != length && key == &vector::borrow(&map.data, left).key) {
        (option::some(left), option::none())
    } else {
        (option::none(), option::some(left))
    }
}

// After the change
fun find_element<Key: store, Value: store>(
    map: &SimpleMap<Key, Value>,
    key: &Key,
): option::Option<u64> {
    let len = vector::length(&map.data);
    let i = 0;
    while (i < len) {
        let element = vector::borrow(&map.data, i);
        if (&element.key == key) {
            return option::some(i)
        };
        i = i + 1;
    };
    option::none<u64>()
}

Risks and Drawbacks

The internal representations of two SimpleMaps built from the same entries before and after the change will differ. For example, inserting {c: 1, b: 2, a: 3} in that order previously produced an internal vector sorted as {a: 3, b: 2, c: 1}; after this change, it is stored in insertion order as {c: 1, b: 2, a: 3}. This results in two forms of breaking changes:

  • Within Move, a SimpleMap’s equality property changes.
  • As a client, the internal layout for SimpleMap changes.

The impact of these risks is relatively limited to our knowledge. Using a SimpleMap for equality is an esoteric application that the team has yet to see in the wild. The layout of SimpleMap is not relied upon at either the API layer or any of the Aptos Labs SDKs.

We also see a performance drop for very large SimpleMaps, once there are around 400 or more properties on-chain.

Timeline

TBD

[AIP-X][Discussion]


aip:
title: Calling between move contracts
author: evilboyajay
discussions-to:
Status: Draft
last-call-end-date:
type: Framework
created: 2023/01/22
updated:
requires:


AIP-(TBD) - (Calling between move contracts)

Summary

This AIP proposes letting one contract call another contract directly, without importing it, by using the contract address and function name.

Motivation

While writing multisig Move contracts on Aptos, I have not been able to dynamically call other smart contracts, because there is no mechanism for calling between contracts at runtime. This limits the ability of smart contracts to interact with other smart contracts, which ultimately impacts governance.

Rationale

This AIP will give smart contracts the ability to dynamically call other contracts. This enables more complex and modular smart contract systems, such as multisig contracts that dynamically call other contracts once all the multisig owners have signed the transaction. In the transaction, the data, accounts, module name, and function are passed as parameters to the contract.

Specification

This is just an example: if there were a call function that could invoke another contract's functions, a transaction builder could be written like this (call is hypothetical and does not exist today):

public fun call_contract(account: &signer, module_address: address, module_function: vector<u8>, parameters: vector<u8>) {
    call(module_address, module_function, parameters)
}

Here, parameters are the function arguments that need to be passed,
module_address is the address of the contract, and
module_function is the name of the function to be called.

Reference Implementation

https://docs.solana.com/developing/programming-model/calling-between-programs

https://solidity-by-example.org/delegatecall/

Risks and Drawbacks

To be decided

Future Potential

This can be used in governance systems, where a smart contract manages voting power and, based on it, executes transactions against other modules.

Suggested implementation timeline

To be decided

[AIP-X][Discussion] Code Verification API

AIP Discussion

This API proposal seeks to introduce a standard protocol for the verification of Aptos Move smart contract code. This API is designed to ensure the safety and trustworthiness of smart contract code, assisting developers in effectively verifying their code. The standard suggests the common rules that should be followed by all parties building and operating smart contract code verification API servers within the Aptos network.

Read more about it here: < Link to AIP >

[AIP-7][Discussion] Transaction fee distribution

AIP PR: #22

Summary

Currently, all transaction fees are burnt on Aptos. This design choice does not motivate validators to prioritise the highest-value transactions, nor to use bigger blocks of transactions, leading to lower system throughput. For example, a validator can submit an empty block and still get rewarded. We would like to solve this issue by distributing transaction fees to validators. In particular, we want to collect transaction fees for each block, store them, and then redistribute them at the end of each epoch.

Motivation

If transaction fees are redistributed to validators, we will be able to both 1) ensure that the highest-value transactions have priority, and 2) increase the throughput of the system to gain more of the performance advantages of parallel execution.

Proposal

Enable collection and per-epoch distribution of transaction fees to validators.

Rationale

In order to keep the system configurable, we have an on-chain parameter which dictates what percentage of transaction fees should be burnt (called burn_percentage). This way, burning 100% of transaction fees would allow the system to have the same behaviour as it has now. Burning 0% would mean that all transaction fees are collected and distributed to validators. The formula deciding how much to burn and how much to collect is the following:

burnt_amount   = burn_percentage * transaction_fee / 100
deposit_amount = transaction_fee - burnt_amount

Based on the discussion with the community, the initial burning percentage can be set and later upgraded via a governance proposal. While it seems like 0% burning would be a reasonable initial parameter value, one has to consider the effects on the tokenomics, e.g. inflation.
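For concreteness, a sketch of this computation as a Move helper (illustrative only, not part of the proposed framework API):

fun split_fee(transaction_fee: u64, burn_percentage: u8): (u64, u64) {
    let burnt_amount = (burn_percentage as u64) * transaction_fee / 100;
    let deposit_amount = transaction_fee - burnt_amount;
    (burnt_amount, deposit_amount)
}

For example, with burn_percentage = 25 and a fee of 1000 Octas, 250 Octas are burnt and 750 Octas are deposited for distribution.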

Alternatives

Alternative I: Distributing fees every transaction

Recall that we want to distribute the transaction fees to the validator which proposed the block. Doing it per transaction has numerous disadvantages:

  1. Updating the balance of the validator on each transaction is a bottleneck for the parallel execution engine, as it creates read-modify-write dependencies. While this could be alleviated by implementing the balance as an aggregator, that is not possible in the current system.

  2. Validators obtain voting rewards on a per-epoch basis, and certain smart contracts can rely on that fact. Having a validator's balance change per transaction can break this logic and compatibility.

Alternative II: Distributing fees every block

In order to solve the first disadvantage of Alternative I, we can instead collect fees into a special balance which uses an aggregator to avoid read-modify-write conflicts in parallel execution. This way, each transaction in the block adds its fee to the aggregator value, and when the block ends the total is distributed to the proposer of the block.

Note that this approach does not solve the second disadvantage from Alternative I, which leads to our proposal.

Implementation

Draft PR: aptos-labs/aptos-core#4967

Algorithm overview

  1. When executing a single transaction in the block, the epilogue puts the gas fees into a special aggregatable coin stored on the system account. In contrast to a standard coin, changing the value of an aggregatable coin does not cause conflicts during parallel execution. An aggregatable coin can be "drained" to produce a standard coin.

  2. In the first transaction of the next block (i.e. BlockMetadata), we process the collected fees in the following way:

  • Drain the aggregatable coin to obtain the sum of the transaction fees of the previous block. As mentioned above, draining returns a standard coin.
  • Decide what proportion of the fees is burnt and what is distributed later (see the formula in the rationale).
  • Burn the amount which is not supposed to be distributed.
  • Store the amount which is supposed to be distributed in a special table on the system account. Basically, this creates (or updates) a map entry from the proposer's address to the fee yet to be distributed.
  • Store the address of this block's proposer so that in the next block we can repeat the procedure in (2) and process the fees.

  3. At the end of the current epoch, distribute the collected fees. For each pending active and inactive validator, get the collected fee and deposit it to the validator's stake pool.

Note: in the current implementation, fees for governance proposals are simply burnt. This is subject to discussion and may change in the future.
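For reference, a sketch of the block-level state this algorithm relies on, along the lines of the draft PR (field types are assumptions):

use std::option::Option;
use aptos_framework::aptos_coin::AptosCoin;
use aptos_framework::coin::AggregatableCoin;

/// Collected fees for the current block, stored on the system account.
struct CollectedFeesPerBlock has key {
    // Sum of the fees of the block's transactions, updated without conflicts.
    amount: AggregatableCoin<AptosCoin>,
    // Proposer of the block whose fees are being collected, if known.
    proposer: Option<address>,
    // Percentage of the collected fees to burn; the rest is distributed.
    burn_percentage: u8,
}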

Risks and Drawbacks

Decentralization - no impact or changes.

Performance - the usage of the aggregator avoids conflicts in the parallel executor.

Security - the added code will go through strict testing and auditing.

Tooling - no impact or changes.

Future Potential

The proposed change allows controlling what fraction of the transaction fees is collected and what fraction is burnt. In the future, this can be adjusted using a governance proposal.

Timeline

The change is currently being tested using e2e Move tests and smoke tests. Ideally, it should be enabled in the testnet soon.
