aptos-foundation / aips

Aptos Improvement Proposals (AIPs)
Home Page: https://governance.aptosfoundation.org/
Quorum Store is a production-optimized implementation of Narwhal [1] that improves consensus throughput. Quorum Store was tested in previewnet, a mainnet-like 100+ node network, where it increased TPS by 3x. It will remove consensus as the primary bottleneck for throughput in mainnet. It has a wide surface area that changes details of how validators disseminate, order, and execute transactions.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-26.md
Discussion and feedback thread for AIP.
Link to AIP:
< Copy AIP text here as well for reader's ease >
Implement a fee-sharing mechanism, where half of the transaction fee of a smart contract call will go to the contract’s contributor address(es), and the other half will be burnt as usual, or distributed per AIP-7.
Discussion and feedback thread for AIP.
Link to AIP:
Developers historically have had a difficult time finding funding sources unless they either a) create their own token, or b) are funded by a foundation/grant. In the United States especially, creating your own token raises significant legal hurdles, since the token must avoid being classified as a security by the SEC. Receiving grants from a foundation is also a difficult task, but not insurmountable. Adding a clear monetization/incentivization path for smart contract developers would be a massive boon for the Aptos ecosystem.
In several competing ecosystems, including Near, smart contract developers get 30% of all fees spent invoking a smart contract. This incentivizes smart contract developers to build on the platform, as gaining further mindshare means further smart contract invocations. It also gives developers a clearer incentive/monetary path.
Juno is another such Smart Contract platform that has recently added a fee-share mechanism, with developers receiving 50% of fees and validators receiving the other 50%.
Describe the solution you'd like
Rather than burn all fees spent on block creation, share part of it with smart contract developers. Ideally this would be a value configurable by governance so an on-the-fly change can be made.
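As a concrete illustration of the mechanics, here is a minimal Python sketch (all names are hypothetical; a real implementation would live in the Move framework) of a fee split whose developer share is a governance-configurable parameter:

```python
# Hypothetical sketch: split a transaction fee between the contract's
# contributor address(es) and the burn, with the developer share expressed
# in basis points so governance can tune it on the fly.

def split_fee(fee_octas: int, developer_share_bps: int):
    """Return (developer_amount, burn_amount) for a fee in Octas.

    developer_share_bps is the developer's share in basis points,
    e.g. 5000 = 50% as proposed here, 3000 = 30% as on Near.
    """
    developer_amount = fee_octas * developer_share_bps // 10_000
    burn_amount = fee_octas - developer_amount
    return developer_amount, burn_amount
```

For example, `split_fee(1000, 5000)` sends 500 Octas to the developer and leaves 500 to be burnt as usual (or distributed per AIP-7).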
Currently, only token owners with 1M APT are able to stake their tokens and earn staking rewards. We want to enable everyone in the community to have access to staking rewards as well. We propose adding a delegation contract to the framework. This extension enables multiple token owners to contribute any amount of APT to the same permissionless delegation pool. As long as the total amount of APT in a delegation pool meets the minimum stake required for staking, the validator node it is set on will be able to join the active set and earn staking rewards.
Participants (delegators) would be rewarded proportionally to their deposited stake at the granularity of an epoch. Delegators will continue to have the same stake management controls (such as `add_stake`, `unlock`, `withdraw`, etc.) in the delegation pool, similar to pool owners in the existing staking-contract implementation.
For the P0, existing stake pools cannot be converted into a delegation stake pool. A new delegation stake pool would have to be created. However, this could be a future development.
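The proportional-reward accounting can be modeled with a shares-based pool, as the reference implementation's name suggests. The following Python sketch is a deliberately simplified model (no epochs, lockups, or commission; names are illustrative), not the Move implementation:

```python
# Simplified shares-based delegation pool: deposits buy shares at the
# current share price, rewards increase the pool's total coins, and every
# delegator's stake grows proportionally without per-delegator bookkeeping.

class DelegationPool:
    def __init__(self):
        self.total_coins = 0
        self.total_shares = 0
        self.shares = {}  # delegator address -> shares held

    def add_stake(self, delegator: str, coins: int) -> None:
        # The first deposit establishes a 1:1 coin-to-share price.
        if self.total_shares == 0:
            new_shares = coins
        else:
            new_shares = coins * self.total_shares // self.total_coins
        self.shares[delegator] = self.shares.get(delegator, 0) + new_shares
        self.total_shares += new_shares
        self.total_coins += coins

    def distribute_rewards(self, coins: int) -> None:
        # Rewards raise the share price; share counts stay constant.
        self.total_coins += coins

    def stake_of(self, delegator: str) -> int:
        return self.shares.get(delegator, 0) * self.total_coins // self.total_shares
```

For instance, after Alice stakes 100 and Bob stakes 300, a 40-coin epoch reward leaves Alice with 110 and Bob with 330 — proportional to their deposits, with reward distribution costing O(1) regardless of the number of delegators.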
The current staking mechanism puts the community at a disadvantage, as it is less probable that a single individual has 1M APT tokens to start their own validator.
Given this functionality is enabled, the community can participate in staking activities and incentivize validators thus adding value to the ecosystem.
On the other hand, the entry stake requirement makes it difficult for some actors, possibly experienced in maintaining and securing blockchain validators, to join the network as node operators. With this new functionality, they can get support from the community to enter the active set.
In the current staking implementation, the activeness of a validator is influenced by a single entity's stake (the stake-pool owner), who can leave the node unstaked at any time (in practice, on lockup-period expiration). In the new implementation, this scenario is less likely to occur.
This feature has the potential to increase the number of validators in the ecosystem leading to further decentralization and a higher amount of tokens staked on the protocol.
A detailed specification of the proposed solution can be found here.
To summarize, the following behavior would be supported:
The operator fee, previously configured by the owner, will be immutably set at the pool's creation by the node operator itself. Therefore, delegators have the option to participate in pools of their choice with regard to the commission fee.
There is a reference implementation, integrated directly into the framework, treating the `aptos_framework::stake` module as a black box, at https://github.com/bwarelabs/aptos-core/tree/bwarelabs/shares_based_delegation_pool.
The staking API initially exposed would incur a higher gas cost (only for delegation pools), as additional resources have to be stored and maintained in order to keep track of individual delegators' stakes and the rewards earned by the pool's aggregated stake.
We could uniformly enforce that a delegator cannot decrease the total stake of the pool below the active threshold, or decide to fully unstake the delegation pool.
We could require the node operator to deposit a minimum stake amount before its pool can accept delegations.
We hope to get it on the mainnet in Q1, but this is pending further technical scoping.
We enhanced the libraries with FixedPoint64 support, extra math functions, string formatting and extra inline functions.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-24.md
Discussion and feedback thread for AIP.
Link to AIP:
Pending
The Aptos on-chain governance is a process by which the Aptos community can govern operations and development of the Aptos blockchain via proposing, voting on, and resolving proposals.
Currently, an on-chain proposal can only contain one script. Because the MoveVM applies changes to the VM at the end of a script, we cannot include changes that are dependent on previous changes in the same script. This means that we oftentimes need multiple scripts to complete an upgrade.
When we have any upgrade that involves multiple scripts, we need to make multiple single-step proposals, have voters vote on each of them, and resolve each of them manually. We would like to simplify this process by supporting multi-step proposals - this will allow multiple scripts within one on-chain proposal and will execute the scripts in order if the proposal passes.
With multi-step proposals, we will be able to create an on-chain proposal that contains multiple steps, have voters vote on it, and execute all scripts in order at once. This will save effort from the community, as well as enforce that the scripts will be executed in order.
Alternative Solution I: Keeping the Status Quo
Currently, proposing an upgrade with multiple scripts using single-step proposals is time-consuming, manual and likely error-prone if only part of the scripts get executed. We think adding support for multi-step proposals is better because it will save time for proposers, voters, resolvers, and ensures that we execute the steps in the right order.
Alternative Solution II: Making a change to the MoveVM
Because the MoveVM applies all changes at the end of the script, we cannot include all upgrades within one single script when some changes are dependent on the other ones in the same script.
We could make a VM-level change to apply changes to the VM at the end of each transaction - this would allow executing all upgrades within one single script. However, it will be a much larger change that will take more than multiple months and affects more parts of the system. We think the smart contracts change essentially achieves the same thing, and is simpler than a VM-level change.
To support multi-step proposals, we introduce the concept of a chain of scripts. The voter approves / rejects the entire chain of scripts by voting yes / no on the first script of the chain. There is no partial yes / no.
For example, if we have a multi-step proposal that contains `<script_a, script_b, script_c, script_d>`, the user will only vote on the first script, `script_a`. By voting yes on `script_a`, they say yes to the entire chain of scripts `<script_a, script_b, script_c, script_d>`.
When a multi-step proposal passes, we will use the CLI to resolve the chain of scripts sequentially.
Chain of Scripts
When we produce the execution scripts, we start from the last script (say it is the `x`-th script), hash it, and pass the hash to the `(x-1)`-th script. The on-chain proposal will only contain the first script, but the content and order of the entire chain are implicitly committed to by the hash in the first script. We will provide CLI tooling for generating the chain of scripts and verifying that all scripts are present and in the right order.
Creating and Voting on Multi-Step Proposals
The flow will mostly stay the same for creating and voting on proposals, with the addition of an `execution_hash` parameter.

Resolving Multi-Step Proposals
In our proposed multi-step proposal design, when resolving a multi-step proposal:

- `aptos_governance::resolve_multi_step_proposal()` returns a signer and replaces the current execution hash with the hash of the next script.
- `voting::resolve()` checks that the hash of the current script is the same as the proposal's current execution hash, updates the proposal's `execution_hash` on-chain with the `next_execution_hash`, and emits an event for the current execution hash.
- `voting::resolve()` marks the proposal as resolved if the current script is the last script in the chain.

Security
The implementation will go through security audit with a respectable third-party audit firm to ensure the safety and correctness of the code and the operations.
We are mostly concerned about two types of validation:
We will support CLI tooling for the above validations.
This proposal changes the mechanism of what the content of a proposal includes (multiple scripts vs. single scripts), and does not change the powers of the on-chain governance. There is no impact on decentralization or governance abilities.
Tooling
We will add CLI and governance UI support for creating, voting on, and resolving multi-step proposals.
Draft PR:
Decentralization - no impact or changes.
Security - the added governance code will go through strict testing and auditing.
Tooling - CLI support only covers Aptos governance but the multi-step mechanism can be reused for general-purpose DAO/governance. In the near future, the Aptos team and community can improve CLI/tooling to be more generic.
11/07 - 11/22
11/22 - 11/30
11/30 - 12/14
12/15 - 12/20
Details to the AIP proposal:
Please leave your feedback in this issue!
Charge transactions that triggered invariant violation error instead of discarding them.
An invariant violation error is a special type of error triggered in the Aptos VM when some unexpected invariant is violated. Right now, transactions that trigger such an error are marked as discarded, which could be a DDoS vector for our network, as it lets users submit computations without being charged.
Examples of transactions that could trigger an invariant violation error are transactions that violate the MoveVM's paranoid type checker.
Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-35.md
Some wallets export the `aptos` object, and some wallets do not. Also, the `aptos` interface currently lacks query methods such as:

- `getAccountModule(s)`
- `getAccountResource(s)`
- `getAccountTransaction(s)`
Sender-aware shuffling shuffles the transactions within a single block so that transactions from the same sender are spaced as far apart as possible. This is done to reduce the number of conflicts and re-executions during parallel execution. Our end-to-end performance benchmarks show that sender-aware shuffling can improve TPS by 25%.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-27.md
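One simple way to realize the spacing described above is round-robin interleaving over per-sender queues. The Python sketch below is an illustration of the idea, not the validator's actual algorithm:

```python
# Sketch: interleave transactions round-robin across per-sender queues.
# Transactions from the same sender end up spread apart, while each
# sender's internal order is preserved.
from collections import OrderedDict, deque

def sender_aware_shuffle(txns):
    """txns is a list of (sender, payload) pairs in original block order."""
    queues = OrderedDict()
    for sender, payload in txns:
        queues.setdefault(sender, deque()).append((sender, payload))
    shuffled = []
    while queues:
        # Take one transaction from each sender still holding transactions.
        for sender in list(queues):
            shuffled.append(queues[sender].popleft())
            if not queues[sender]:
                del queues[sender]
    return shuffled
```

For example, `[("a", 1), ("a", 2), ("b", 3), ("b", 4)]` becomes `[("a", 1), ("b", 3), ("a", 2), ("b", 4)]` — consecutive transactions from sender `a` no longer sit next to each other, so the parallel executor sees fewer write-write conflicts.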
aip | title | author | discussions-to | Status | last-call-end-date | type | created | updated |
---|---|---|---|---|---|---|---|---|
8 | Higher-Order Inline Functions for Collections | wrwg | #33 | Draft | TBD | Standard (Framework) | 2023/1/9 | 2023/1/9 |
Recently, the concept of inline functions has been added to Aptos Move. Those functions are expanded at compile time and do not have an equivalent in the Move bytecode. This ability allows them to implement a feature which is currently not available for regular Move functions: taking functions, given as lambda expressions, as parameters. Given this, we can define popular higher-order functions like 'for_each', 'filter', 'map', and 'fold' for collection types in Move. In this AIP, we suggest a set of conventions for those functions.
It is well-known that higher order functions lead to more concise and correct code for collection types. They are widely popular today in mainstream languages, including Rust, TypeScript, Java, C++, and more.
Move currently has no traits that would allow one to define an `Iterator` type comprehending the available functions across multiple collection types. Here, we want to establish at least a convention for the naming and semantics of the most common of such functions. This allows framework writers to know which functions to provide, developers to remember which functions are available, and auditors to understand what they mean in the code.
Each iterable collection SHOULD offer the following three functions (illustrated for the `vector<T>` type):
```move
public inline fun for_each<T>(v: vector<T>, f: |T|);
public inline fun for_each_ref<T>(v: &vector<T>, f: |&T|);
public inline fun for_each_mut<T>(v: &mut vector<T>, f: |&mut T|);
```
Each of those functions iterates over the collection in the order specific to that collection. The first one consumes the collection, the second one allows referring to the elements, and the last one allows updating the elements. Here is an example using `for_each_ref`:

```move
fun sum(v: &vector<u64>): u64 {
    let r = 0;
    for_each_ref(v, |x| r = r + *x);
    r
}
```
Each iterable collection SHOULD offer the following three functions, which transform it into a new collection of the same or a different type:

```move
public inline fun fold<T, R>(v: vector<T>, init: R, f: |R, T|R): R;
public inline fun map<T, S>(v: vector<T>, f: |T|S): vector<S>;
public inline fun filter<T: drop>(v: vector<T>, p: |&T|bool): vector<T>;
```
To illustrate the semantics of the `fold` and `map` functions, we show the definition of the latter in terms of the former:

```move
public inline fun map<T, S>(v: vector<T>, f: |T|S): vector<S> {
    let result = vector<S>[];
    for_each(v, |elem| push_back(&mut result, f(elem)));
    result
}
```
These data types in the Aptos frameworks should get the higher-order functions (TO BE COMPLETED):

TODO: link to the code in the framework at `main` once the PRs have landed.
None visible.
The Move compiler implementation became feature complete with PR 822. The remaining effort is small, so we expect to fit this into the very next release.
Discussion and feedback thread for AIP-17.
Link to AIP:
https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-17.md
This AIP proposes to decouple storage-space-related gas charges from execution and I/O gas charges. Execution and I/O charges will continue to be determined by the gas unit price, hence the Aptos "fee market". Storage-space-related gas charges will be based on absolute values in the native token. This decoupling enables a substantial reduction in transaction costs, especially for execution-heavy and I/O-heavy transactions.
In blockchain architecture, there are two fundamental types of resources, categorized by the nature of their scarcity:
Each transaction must have a maximum amount of gas to limit both the time it can execute and how much storage it can consume. As a result, execution, IO, and space consumption must be priced relative to each other. Consequently, execution fees tend to be above market rate, whereas storage fees do not reflect their scarcity and make support for concepts like refunds challenging.
The proposal here is to charge storage space consumption in terms of the native token instead of gas units, so it's independent of user specified gas unit price. To simplify implementation, the user interface does not change, so the effective cost will be deducted from the maximum transaction fee specified with a user transaction.
A more systematic approach would be to introduce a new field on the transaction to specify maximum storage fees. While this provides a more systematic and explicit means to indicate intended outcomes, it creates a burden on the ecosystem to adopt a new transaction standard. Current expectations are that such additional friction provides limited value; however, over time, Aptos will aggregate more useful features to expose and will adopt this along with other useful updates to the transaction interface.
No visible change to how one writes Move, no visible change to how one uses the SDK and CLI. But the economics changes:
The dimensions of storage gas charges will remain, because these operations do impose transient resource consumption at runtime:

- `per_item_create` and `per_item_write` will be adjusted.
- `per_byte_create` and `per_byte_write` remain the same.

As a follow-up, the distinction between the `create` and `write` variations of the storage gas parameters can be removed. On top of that, all storage gas charges can potentially scale lower, as they no longer bear the responsibility of defending against state explosion.
In addition, storage-space-related gas parameters will be defined in units of the native token. At runtime, the cost is calculated in the native token and converted to gas units according to the user-specified gas unit price: `charge_in_gas_unit = charge_in_octas / gas_unit_price`
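The effect of this conversion can be sketched as follows (Python illustration; the exact rounding rule is an assumption, not taken from the proposal):

```python
# Sketch: a storage fee fixed in Octas is translated into gas units at
# charge time, so the Octa amount deducted is independent of the gas unit
# price the user picked.

def storage_fee_in_gas_units(charge_in_octas: int, gas_unit_price: int) -> int:
    # charge_in_gas_unit = charge_in_octas / gas_unit_price
    # (integer division here; actual rounding behavior is assumed)
    return charge_in_octas // gas_unit_price
```

For example, a hypothetical 50,000-Octa slot-creation fee costs 500 gas units at a gas unit price of 100, and 250 gas units at a price of 200 — either way, the user pays the same 50,000 Octas for the storage.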
These entries will be added to the global gas schedule to specify the native token costs for various storage-space-consuming operations:

- `storage_fee_per_state_slot_create`
- `storage_fee_per_excess_state_byte`
- `storage_fee_per_event_byte`
- `storage_fee_per_transaction_byte`
Each of the per-byte charges respects a per-transaction free quota:

- `free_write_bytes_quota` (1 KB): existing; now governs `storage_fee_per_excess_state_byte` as well.
- `large_transaction_cutoff` (600 bytes): existing; now governs `storage_fee_per_transaction_byte` as well.
- `free_event_bytes_quota` (1 KB): new; governs `storage_fee_per_event_byte`.
These per-transaction hard limits for different categories of gas charges will be added to the global gas schedule, reflecting that the network has different amounts of resources under different categories: a reasonable amount of gas spent on disk-space allocation in a single transaction might be sufficient for another transaction to run CPU-consuming operations for minutes.

- `max_execution_gas`: in gas units.
- `max_io_gas`: in gas units, governing the transient aspects of the storage cost, i.e. IOPS and bandwidth.
- `max_storage_fee`: in Octas, governing the new category of fees described in this proposal.
: This is in Octas, governing the new category of fees described in this proposal.Transaction Type | Current Cost | New Cost | Change | reduced by factor |
---|---|---|---|---|
(minimal per transaction charge) | 15000 | 200 | -98.67% | 75.0x |
Transfer | 54200 | 600 | -98.89% | 90.3x |
CreateAccount | 153600 | 101600 | -33.85% | 1.5x |
CreateTransfer | 188000 | 101900 | -45.80% | 1.8x |
CreateStakePool | 776200 | 207700 | -73.24% | 3.7x |
RotateConsensusKey | 2178300 | 21800 | -99.00% | 99.9x |
JoinValidator100 | 625300 | 461100 | -26.26% | 1.4x |
AddStake | 675700 | 461500 | -31.70% | 1.5x |
UnlockStake | 163500 | 1600 | -99.02% | 102.2x |
WithdrawStake | 153900 | 1600 | -98.96% | 96.2x |
LeaveValidatorSet100 | 610500 | 460900 | -24.50% | 1.3x |
CreateCollection | 174100 | 100800 | -42.10% | 1.7x |
CreateTokenFirstTime | 382100 | 152400 | -60.12% | 2.5x |
MintToken | 117100 | 1200 | -98.98% | 97.6x |
MutateToken | 272200 | 52200 | -80.82% | 5.2x |
MutateToken2ndTime | 129000 | 1300 | -98.99% | 99.2x |
MutateTokenAdd10NewProperties | 430900 | 4300 | -99.00% | 100.2x |
MutateTokenMutate10ExistingProperties | 451100 | 4500 | -99.00% | 100.2x |
PublishSmall | 745700 | 107400 | -85.60% | 6.9x |
UpgradeSmall | 660200 | 8100 | -98.77% | 81.5x |
PublishLarge | 10735800 | 9810700 | -8.62% | 1.1x |
Combining the storage fee as part of the gas charge is unintuitive. Better tooling will be required to gain visibility into the cost structure of transactions. Furthermore, the documentation site needs to be updated with these changes.
To incentivize "state hygiene", or state space cleanup, a further effort will enable refunding part or all of the storage fee paid for allocating a storage slot. The fact that the allocation will be charged in the native token instead of gas units as a result of this proposal largely eliminates the concerns around storage refund arbitrage.
Testnet and Mainnet in March.
The per-block gas limit (or simply block gas limit) is a new feature that terminates block execution when the gas consumed by the committed prefix of transactions exceeds the block gas limit. This ensures that each block is executed within a predetermined limit of computational resources / time, thereby providing predictable latencies for latency-critical applications that involve even highly sequential and computationally heavy transactions.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-33.md
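The committed-prefix semantics described above can be sketched as follows (simplified Python model; the real cut-off interacts with parallel execution and block metadata):

```python
# Sketch: execute transactions in block order and stop committing once the
# cumulative gas of the committed prefix reaches the block gas limit.
# Remaining transactions are left for a later block.

def committed_prefix(txn_gas_costs, block_gas_limit):
    committed, total = [], 0
    for gas in txn_gas_costs:
        total += gas
        committed.append(gas)
        if total >= block_gas_limit:
            break  # limit exceeded by the committed prefix: stop here
    return committed
```

For example, with per-transaction gas costs `[10, 20, 30, 40]` and a block gas limit of 50, only the first three transactions are committed, bounding the block's execution time even when individual transactions are computationally heavy.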
Discussion and feedback thread for AIP.
This proposal introduces framework-reserved properties to the token standard's property map.
We have existing token properties used by our token standard to control who can burn the token. However, when the token’s default properties are mutable, creators can add these control properties after the token has been minted and then burn tokens held by collectors. This is a known issue; the token standard calls out setting the token default properties to immutable as a best practice. To prevent this entirely, this proposal makes it infeasible to update the control properties after token creation.
The reserved framework properties can be utilized for controlling token behavior to make it programmable. One example is having a framework reserved property to freeze tokens at the token store.
We have 3 existing control properties:

- `TOKEN_BURNABLE_BY_CREATOR`
- `TOKEN_BURNABLE_BY_OWNER`
- `TOKEN_PROPERTY_MUTATBLE`
When these 3 properties exist in the TokenData’s default_properties, the creator can use them to control burn and mutation eligibility. We want to prevent the creators from mutating these framework-reserved properties after the token creation.
Reserve all keys with the “TOKEN_” prefix for framework usage. When the creator mutates the token_properties stored with the token, or the default_properties stored with the TokenData, we will check whether the property name starts with “TOKEN_” and abort the mutation if any property carries that prefix.
We add friend functions `add_non_framework_reserved_properties` and `update_non_framework_reserved_properties` to the property map module. These functions check all the properties to be added or updated, so that only non-framework-reserved properties can be added or updated.
```move
// These functions will be called by token mutation methods to add or update token properties
public(friend) fun add_non_framework_reserved_properties(map: &mut PropertyMap, key: String, value: PropertyValue)
public(friend) fun update_non_framework_reserved_properties(map: &mut PropertyMap, key: String, value: PropertyValue)
```
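The reservation check these functions perform can be modeled as follows (Python sketch of the logic only; the real check lives in the Move property map module):

```python
# Sketch: reject any property mutation whose key starts with the reserved
# "TOKEN_" prefix, so creators cannot add or change control properties
# after token creation.
FRAMEWORK_RESERVED_PREFIX = "TOKEN_"

def assert_non_framework_reserved(keys):
    for key in keys:
        if key.startswith(FRAMEWORK_RESERVED_PREFIX):
            raise ValueError(f"property key '{key}' is framework-reserved")
```

A prefix test like this only inspects the first few characters of each key, which is why the gas overhead discussed below is expected to be minimal.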
Overhead of token default property mutation cost
The validation of framework-reserved properties will cost extra gas when calling `mutate_tokendata/token_property`. Currently, when creators mutate the properties, the function loops through all the properties to be mutated and updates the property values. The additional cost is that we need to check whether each property key starts with `TOKEN_`. This cost should be minimal, thanks to substring matching and early exit from comparisons. Also, since the current function already performs many validations (string length, key duplication, etc.), the overhead of checking whether a key has the prefix should have a negligible impact on gas costs and user experience.
The change has yet to be implemented. Ideally, it should land on main in 1 to 2 weeks, then testnet, followed by mainnet.
This unblocks follow-up AIPs on soulbound tokens and token freezing that leverage these framework-reserved properties.
Discussion and feedback thread for AIP.
Link to AIP: #59
Soulbound tokens have enormous potential in a wide variety of applications. However, with no standard, soulbound tokens are incompatible with the Aptos-Token standard. Consensus on a common standard is required to allow for broader ecosystem adoption, as well as to make it easier for application developers to build expressive applications on top of the standard.
This AIP (Aptos Improvement Proposal) envisions soulbound tokens as non-transferable NFTs that will unlock primitives such as robust proof of attendance, credentials, distributed governance rights, and much more. However, soulbound tokens must be distinguishable from Aptos-Tokens to provide this utility.
Benefits:
Extend property map to have framework reserved keys
Reserve all keys with the “##” prefix for framework usage.
When adding keys to the property_map, we need to check whether the key starts with “##” and disallow adding or creating a property_map with such keys.
We add a token package friend function for adding framework-reserved control flags
```move
// This function should only be called by framework modules
public(friend) fun add_framework_reserved_control_flag(map: &mut PropertyMap, key: String, value: PropertyValue)
```

Note: after this change, the `property_map` will become a dedicated data structure for the token package.
Annotate the token as soulbound

```move
// Mint soulbound tokens specifically
public entry fun create_mint_soulbound_token(creator: &signer, owner: address, collection_name: String, token_name: String) {
    // Create token_data
    let token_data_id = create_token_data(...);
    // Mint a token from token_data_id
    let token = mint_token(token_data_id);
    // Offer the token to the owner
    token_transfers::offer(creator, owner, token_id, ..);
    // Annotate the token property ##freezing_type with value 1
    add_framework_reserved_control_flag(token.token_properties, "##freezing_type", 1);
}
```
```move
public fun is_soulbound_token(token_id: TokenId): bool {
    // Check the token properties
}

public fun withdraw_token(account: &signer, id: TokenId, amount: u64) {
    let token_data_id = id.token_data_id;
    // Check the token's properties and validate whether the ##freezing_type value is set
    ...
}
```
We can extend the approach to support generally freezing tokens in token stores. For example, the token owner or creator may want to freeze the token in their token store until an expiration time. We can use `##freezing_type = 2` for time-based freezing, and introduce another system-reserved control flag, `##freezing_expiration_time`, to specify the timestamp.
```move
public fun freeze_token(owner: &signer, token_id: TokenId, expiration_time: u64) {
    // annotate the token with the two properties above
}

public fun withdraw(...) {
    // check if the token is frozen and the expiration time
}
```
To be determined.
--
Special thanks to Bo Wu.
Transactions are prioritized at mempool, consensus, and execution based on unit gas price — the price (in APT) that each transaction has committed to pay for each unit of gas. Clients need a way to estimate the unit gas price they should use to get a transaction executed on chain in a reasonable amount of time. This proposal contains a design where Aptos fullnodes provide an API to be used by clients for this purpose.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-34.md
Discussion and feedback thread for AIP.
Link to AIP: #62
This document provides our recommendation and alternatives for how royalty enforcement can work on Aptos. We are collecting community feedback on the options presented here so we may decide the best path moving forward together.
We observed the NFT communities on other chains are divided over the controversial topic of royalty enforcement on marketplaces. We want to ensure the creators’ royalties are respected on Aptos so creators can feel safe and incentivized to continue building on Aptos.
Currently, royalty on Aptos is dependent on our community marketplaces and dapps charging the royalties on behalf of the creators. This creates the opportunity for royalty-free marketplaces to reduce the price by skipping the royalty payments.
Without the community forming a royalty enforcement standard together, the creators and royalty-required marketplaces are at a competitive disadvantage to royalty-free marketplaces.
When comparing different options for enforcing royalties, the main considerations are:
Detecting NFT Violating Royalty Treatment
Aptos may provide an API service that provides information to help creators find all the tokens that violate the royalty. The creator can then use the information to annotate those NFTs and flag them as royalty-violating tokens.
The API service would be built on top of an indexer that scans through the NFT transactions on-chain to look for token trades that do not pay the appropriate royalties to the creator’s royalty account. This indexer can be deployed and run by the community.
Annotating NFT Violating the Royalty
We can introduce a new on-chain map RoyaltyViolation that records all the NFTs that violate the royalty. The creator has the authority to set the value in this on-chain map. Anyone can read from the map to check if any token has violated the royalty.
Note: Only NFTs are qualified for this annotation where an NFT has a globally unique token_id.
```move
struct RoyaltyViolation has key {
    royalty_violation_token: Table<TokenId, bool>
}

public fun set_royalty_violated(creator: &signer, token_id: TokenId): bool;
public fun is_royalty_violated(token_id: TokenId): bool;
```
The creator can then use the API service to detect the violating tokens, or apply their own heuristics. Once the creator knows a token has been traded without paying royalties, they can annotate the token with `true` in the RoyaltyViolation table.
Enforcement
The enforcement will be done through Aptos Ecosystem projects. The violating tokens can be identified and restricted from trading and participating in various dapps. There are three main areas where the enforcement can take place. Any one of these areas can greatly restrict the violating token’s usability.
We can allow the creators to update a whitelist or blacklist of accounts eligible for token transfer. With these two lists, the NFT can be deposited only to the whitelisted accounts, or the NFT cannot be deposited to the blacklisted accounts. Creators can update these lists when they observe a marketplace or app violating the royalty.
The caveat of this blacklist approach is that the marketplaces can easily switch to a new account frequently without restricting its operation. Creators probably cannot keep up with updating their blacklists at the same pace as the marketplaces. Meanwhile, updating frequently and maintaining a large blacklist will cost lots of gas.
The caveat of the whitelist approach is that it greatly restricts the token's liquidity and prevents other interesting applications from being built on top of the token. The owner cannot transfer their tokens for normal usage. It also stops other developers from building interesting applications for these tokens, as they cannot store the tokens in their own contracts.
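The two list mechanisms can be modeled as a single deposit check (hypothetical Python sketch, not the token contract's API):

```python
# Sketch: with a whitelist, deposits are allowed only to listed accounts;
# with a blacklist, deposits are allowed except to listed accounts.
from typing import Optional, Set

def deposit_allowed(recipient: str,
                    whitelist: Optional[Set[str]],
                    blacklist: Set[str]) -> bool:
    if whitelist is not None:
        # Whitelist mode: only listed accounts may receive the NFT.
        return recipient in whitelist
    # Blacklist mode: everyone except listed accounts may receive it.
    return recipient not in blacklist
```

The caveats above show up directly in this model: a blacklisted marketplace can evade the check by rotating to a fresh account, while a whitelist blocks every unlisted account, including legitimate holders and contracts.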
| | Annotation recommendation | Blacklist Alternative | Whitelist Alternative |
|---|---|---|---|
| Enforcement strength | Strong | Low | Strong |
| Growth Limitation Risk | Low | Low | High |
| Decentralization | Yes | Yes | Yes |
If I notice only some fungible token holders are violating the royalty, how can I annotate the token as a royalty violation?
You can use the mutate-token-properties method to turn those specific fungible tokens into NFTs, where they will have their own unique token_ids. You can then annotate these tokens as royalty-violating tokens.
As a buyer who does not want to buy any token that could be annotated by creators, what should I do?
Creators’ desired royalty is recorded on-chain within the token metadata. With our current proposal, you can avoid royalties only by purchasing tokens that have no royalty requirements.
Summary
This proposal contains the following changes:
TokenData mutation functions: provide set functions to mutate the TokenData fields based on the token mutability setting.
CollectionData metadata mutation functions: provide set functions to mutate the CollectionData fields based on the collection mutability setting.
Motivation
Change 1, 2: the motivation is to support CollectionData and TokenData mutation based on the mutability config, so that creators can update the fields based on their own application logic to support new product features.
Change 3, 4: the motivation is to fix existing issues so that the token contract works correctly.
Change 5: the motivation is to allow dapps to directly call the function without deploying their own contracts or scripts.
Change 6: this is to prevent a potentially malicious token from charging a fee higher than the token price.
Rationale
Changes 1 and 2 fulfill the existing token standard specification without introducing new functionality.
Changes 3, 4, 5, and 6 are small, straightforward fixes.
Reference Implementation
The PRs of the changes above are listed below:
Change 1, 2:
aptos-labs/aptos-core#5382
aptos-labs/aptos-core#5265
aptos-labs/aptos-core#5017
Change 3: aptos-labs/aptos-core#5096
Change 4: aptos-labs/aptos-core#5499
Change 5: aptos-labs/aptos-core#4930
Change 6: aptos-labs/aptos-core#5444
Risks and Drawbacks
Changes 1 and 2 have been internally reviewed and will undergo auditing to fully vet the risks.
Changes 3, 4, and 6 reduce identified risks and drawbacks.
Change 5 improves usability; it doesn't introduce new functionality.
Timeline
Reference implementation changes will be deployed in devnet on 11/14 (PST) for ease of testing and providing feedback/discussion.
This AIP will be open for public comments until 11/17.
After discussion, reference implementation changes will be deployed in testnet on 11/17 for testing.
Discussion and feedback thread for AIP.
I believe an account abstraction function and a social recovery function are needed for the Aptos ecosystem.
Link to AIP:
To realize social recovery keys, we need not only an authority called a guardian, but also a paymaster function (and possibly a bundler).
Let's implement this function with reference to these pages:
https://eips.ethereum.org/EIPS/eip-4337
cardano-foundation/CIPs#309
< Copy AIP text here as well for reader's ease >
AIP-28: Implementation of Account Abstraction in Aptos
Overview:
In this proposal, we propose how to apply the Ethereum ecosystem's concept of Account Abstraction to the Aptos blockchain. Aptos uses the Move language, and this proposal focuses on UserOperation simulation and validation. In particular, there are restrictions on the information and storage that can be accessed when validating UserOperations.
Specification:
For UserOperation validation simulation, call simulateValidation(userop)
Does not call forbidden opcodes and restricts storage access
Storage is limited to data associated with the sender address
The proposed implementation simulates and validates UserOperations and restricts interaction with special contracts such as factories, paymasters and signature aggregators. It also incorporates concepts such as access lists and forbidden opcodes.
Additionally, the concept of alternate mempools has been introduced to allow for specific use cases by whitelisting specific paymasters and signature aggregators.
This AIP28 proposes an implementation of Account Abstraction on the Aptos blockchain, enabling flexible account management with a focus on UserOperation simulation and verification.
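As a rough illustration of the sender-only storage restriction during validation simulation, consider the following sketch (field names and types are hypothetical, loosely following EIP-4337; the AIP does not fix a concrete layout):

```rust
// Hypothetical shape of a UserOperation. Account addresses are simplified
// to u64 for the sketch.
struct UserOperation {
    sender: u64,        // account the operation acts on behalf of
    nonce: u64,
    payload: Vec<u8>,   // the call to execute after validation
    signature: Vec<u8>,
}

// During validation simulation, storage access is restricted to data
// associated with the sender address; anything else is rejected.
fn validation_may_access(op: &UserOperation, storage_owner: u64) -> bool {
    storage_owner == op.sender
}

fn main() {
    let op = UserOperation { sender: 0xA11CE, nonce: 0, payload: vec![], signature: vec![] };
    println!("{}", validation_may_access(&op, 0xA11CE)); // true: sender's own storage
    println!("{}", validation_may_access(&op, 0xB0B));   // false: foreign storage
}
```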
Thank you for reading. Please wait for further updates.
Discussion and feedback thread for AIP.
Link to AIP:
< Copy AIP text here as well for reader's ease >
Discussion and feedback thread for AIP.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-1.md
This change brings two simple improvements to proposer selection:
In the Aptos blockchain, progress is organized into rounds. In each round, a new block is proposed and voted on. There is a special proposer role, deterministically decided for each round, that is responsible for collecting votes from the previous round and proposing a new block for the current round. The goals of proposer selection (a decision on which node should be the proposer in a round) are:
Current proposer selection is done via ProposerAndVoter LeaderReputation algorithm. It looks at the past history, in one window for proposer history, and a smaller window for voting history. Then reputation_weight is chosen for each node:
And then, reputation_weight is scaled by staked_amount, and next proposer is pseudo-randomly selected, given those weights.
Window sizes are chosen so that they are large enough to provide a reasonably stable signal, but not too large, so the algorithm can adapt to changes quickly. For every block, we get a proposer signal for only a single node, but a voting signal for two-thirds of the nodes. That means the proposer window needs to be larger, while the voting window can be kept shorter.
This proposal is to upgrade ProposerAndVoter into ProposerAndVoterV2 selection algorithm. New proposer selection algorithm makes two changes to the logic:
For proposers, we look at the (round - 10*num_validators - 20, round - 20) window. For voters, we currently look at (round - 10*num_validators - 20, round - 9*num_validators - 20). We ignore the last 20 rounds because history is read from committed information; consensus and commit are decoupled, so there can be a delay of a few rounds between them. Beyond that, the voter window is unnecessarily stale. With the new change, we will look at the (round - num_validators - 20, round - 20) range for voters.
aptos-labs/aptos-core#4253
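The windows above can be sketched as plain arithmetic (this is not the aptos-core implementation; saturating subtraction is added just to keep the sketch total for small rounds):

```rust
// Proposer history window: (round - 10*num_validators - 20, round - 20).
fn proposer_window(round: u64, num_validators: u64) -> (u64, u64) {
    (round.saturating_sub(10 * num_validators + 20), round.saturating_sub(20))
}

// V2 voter history window is much fresher: (round - num_validators - 20, round - 20).
fn voter_window_v2(round: u64, num_validators: u64) -> (u64, u64) {
    (round.saturating_sub(num_validators + 20), round.saturating_sub(20))
}

fn main() {
    // With 100 validators at round 10,000:
    println!("{:?}", proposer_window(10_000, 100)); // (8980, 9980)
    println!("{:?}", voter_window_v2(10_000, 100)); // (9880, 9980)
}
```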
aptos-labs/aptos-core#4973
The PRs above have been committed and are being tested and prepared for release to mainnet.
To enable the change above, additional governance proposal for consensus onchain config needs to be executed. E2E smoke test has been landed as well, to confirm governance proposal can be executed smoothly.
It has been running on devnet for more than a week, though devnet is limited (only Aptos Labs runs validators), so the change is not stress-tested.
We will test it out on testnet in a week or two.
If no further changes are needed, the proposal is planned to be created and sent for voting by the end of December.
This AIP proposes to add a flag in delegation_pool.move
to allow “permissioned” acceptance of stakers.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-31.md
With Quorum Store (see AIP-26), a block proposal may include duplicate transactions due to Quorum Store batches being opaque. (Duplicate transactions are also possible with byzantine proposers within quorum store.) Duplicate transactions cannot affect correctness of execution, i.e., the first version will always succeed and duplicates will be discarded. However, there is concern that duplicate transactions could affect the performance of parallel execution of blocks because they could induce conflicts. We propose filtering the duplicates before blocks are executed in the VM.
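The proposed filtering can be sketched as a first-occurrence de-duplication pass over the block before execution (transaction hashes simplified to integers; not the aptos-core code):

```rust
use std::collections::HashSet;

// Keep only the first occurrence of each transaction hash; later duplicates
// are dropped before the block reaches the parallel executor, so they
// cannot induce artificial conflicts.
fn dedup_block(txns: Vec<u64>) -> Vec<u64> {
    let mut seen = HashSet::new();
    // HashSet::insert returns false if the value was already present.
    txns.into_iter().filter(|h| seen.insert(*h)).collect()
}

fn main() {
    println!("{:?}", dedup_block(vec![1, 2, 1, 3, 2])); // [1, 2, 3]
}
```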
Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-37.md
Changes the function native_public_key_validate, used by the native Move function native fun pubkey_validate_internal, to return false if the public key provided has the wrong length. Previously, this function would abort if the key length was incorrect. This change is gated by a feature flag.
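A sketch of the gated behavior, assuming the standard 32-byte Ed25519 public key size (the actual curve-point checks are elided; this is not the aptos-core native function):

```rust
const ED25519_PUBLIC_KEY_LENGTH: usize = 32;

// New behavior: a wrongly sized key yields `false` instead of aborting.
fn public_key_validate(bytes: &[u8]) -> bool {
    if bytes.len() != ED25519_PUBLIC_KEY_LENGTH {
        return false; // previous behavior: abort the transaction here
    }
    // ... on-curve / small-order checks would go here in the real native ...
    true
}

fn main() {
    println!("{}", public_key_validate(&[0u8; 31])); // false: wrong length
    println!("{}", public_key_validate(&[0u8; 32])); // passes the length check
}
```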
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-23.md
Proposed is to refund part of the storage fee (introduced in AIP-17) paid for storage slot allocation to the original payer when the slot is freed by deletion.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-32.md
Discussion and feedback thread for AIP 19: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-19.md
This AIP proposes an update to staking_contract.move
, which would allow the stake pool owner to change commission_percentage
.
Currently, commission_percentage
cannot be changed. The updating commission percentage feature will allow for better adaptability to changing market conditions.
Considerations:
commission_percentage is updated through a convenience function added to staking_contract.move that allows a stake pool owner to update the commission percentage paid to the operator.
commission_percentage can be updated by the stake pool owner at any time. Commission is earned on a per-epoch basis. The change takes effect immediately for all future commissions earned once the update_commission function is called, but is not retroactively applied to any previously earned commissions.
Alternative solutions:
Without this feature, the staking contract would have to be ended and a new one created in order to change the commission_percentage. This is a less ideal solution, as it creates more operational overhead and would result in missed staking rewards.
https://github.com/aptos-labs/aptos-core/pull/6623/
Changing commission_percentage may introduce uncertainty into commission earnings, because operators may be paid at different commission rates during a single unlock cycle. However, no additional action is required from the operator, as changes take effect immediately.
We can mitigate this in a future iteration by implementing a max commission change per period. This is not a concern with the current owner-operator structure.
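To make the per-epoch behavior concrete, here is an illustrative calculation (arithmetic only, not code from staking_contract.move): commission accrues each epoch at whatever rate is then in effect, so a mid-cycle update means different epochs can pay different rates.

```rust
// Commission paid to the operator for one epoch, at the rate in effect
// when that epoch's rewards are distributed.
fn epoch_commission(epoch_rewards: u64, commission_percentage: u64) -> u64 {
    epoch_rewards * commission_percentage / 100
}

fn main() {
    // Rate updated from 10% to 12% mid-cycle: earlier epochs keep the old rate.
    println!("{}", epoch_commission(1_000, 10)); // 100
    println!("{}", epoch_commission(1_000, 12)); // 120
}
```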
This feature will give the stake pool owner
more flexibility over the commission_percentage
to reflect changing market conditions.
Targeting end of Q1
This feature is currently on devnet and testnet as part of v1.3.
title: Multisig Accounts v2
author: movekevin
discussions-to:
Status: Draft
last-call-end-date (*optional:
type: Standard (Framework)
created:
updated:
This AIP proposes a new multisig account standard that is primarily governed by transparent data structures and functions in a smart contract (multisig_account), with more ease of use and more powerful features than the current multi-ed25519-auth-key-based accounts. There's also a strong direction for this to evolve as part of a more general account abstraction in Aptos, with more types of accounts and more functionality for users to manage their accounts.
This is not meant to be a full-fledged multisig wallet product but instead just the primitive construct and potentially SDK support to enable the community to build more powerful multisig products.
Multisig accounts are important in crypto and are used:
Currently, Aptos supports multi-ed25519 auth keys, which allow for multisig transactions:
There are several problems with this current multisig setup that make it hard to use:
We can create a more user-friendly multisig account standard that the ecosystem can build on top of. This consists of two main components:
struct MultisigAccount has key {
// The list of all owner addresses.
owners: vector<address>,
// The number of signatures required to pass a transaction (k in k-of-n).
signatures_required: u64,
// Map from transaction id (incrementing id) to transactions to execute for this multisig account.
// Already executed transactions are deleted to save on storage but can always be accessed via events.
transactions: Table<u64, MultisigTransaction>,
// Last executed or rejected transaction id. Used to enforce in-order executions of proposals.
last_transaction_id: u64,
// The transaction id to assign to the next transaction.
next_transaction_id: u64,
// The signer capability controlling the multisig (resource) account. This can be exchanged for the signer.
// Currently not used as the MultisigTransaction can validate and create a signer directly in the VM but
// this can be useful to have for on-chain composability in the future.
signer_cap: Option<SignerCapability>,
}
/// A transaction to be executed in a multisig account.
/// This must contain either the full transaction payload or its hash (stored as bytes).
struct MultisigTransaction has copy, drop, store {
payload: Option<vector<u8>>,
payload_hash: Option<vector<u8>>,
// Owners who have approved. Uses a simple map to deduplicate.
approvals: SimpleMap<address, bool>,
// Owners who have rejected. Uses a simple map to deduplicate.
rejections: SimpleMap<address, bool>,
// The owner who created this transaction.
creator: address,
// Metadata about the transaction such as description, etc.
// This can also be reused in the future to add new attributes to multisig transactions such as expiration time.
metadata: SimpleMap<String, vector<u8>>,
}
// Existing struct used for EntryFunction payload, e.g. to call "coin::transfer"
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize)]
pub struct EntryFunction {
pub module: ModuleId,
pub function: Identifier,
pub ty_args: Vec<TypeTag>,
#[serde(with = "vec_bytes")]
pub args: Vec<Vec<u8>>,
}
// We use an enum here for extensibility so we can add Script payload support
// in the future for example.
pub enum MultisigTransactionPayload {
EntryFunction(EntryFunction),
}
/// A multisig transaction that allows an owner of a multisig account to execute a pre-approved
/// transaction as the multisig account.
#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize, Deserialize)]
pub struct Multisig {
pub multisig_address: AccountAddress,
// Transaction payload is optional if already stored on chain.
pub transaction_payload: Option<MultisigTransactionPayload>,
}
(WIP)
The primary risk is smart contract risk where there can be bugs or vulnerabilities in either the smart contract code (multisig_account module) or API and VM execution. This can be mitigated with thorough security audit and testing.
An immediate extension to this proposal is to add script support for a multisig tx. This would allow defining more complex atomic multisig txs.
In the longer term:
The proposal as-is would not allow on-chain execution of multisig transactions - other modules can only create transactions and allow owners to approve/reject. Execution would require sending a dedicated transaction of the multisig transaction type. However, in the future, this can be made easier with dynamic dispatch support in Move. This would allow on chain execution and also off-chain execution via the standard user transaction type (instead of the special multisig transaction type). Dynamic dispatch could also allow adding more modular components to the multisig account model to enable custom transaction authentication, etc.
Another direction multisig account can enable is more generic account abstraction models where on-chain authentication can be customized to allow account A to execute a transaction as account B if allowed via modules/functionalities defined by account B. This would enable more powerful off-chain systems such as games to abstract away transaction and authentication flow without the users needing intimate understanding of how they work.
Targeted code complete (including security audit) and testnet release: February 2023.
Discussion and feedback thread for AIP.
Link to AIP: #96
This AIP proposes a standard for Fungible Assets (FA) using Move Objects. In this model, any object, called Metadata in the standard, can be used as metadata to issue fungible asset units. This standard provides the building blocks for applications to explore the possibilities of fungibility.
We are eager to build fungible assets on Aptos, as they play a critical role in the Web3 ecosystem beyond cryptocurrency: they enable the tokenization of various assets, including commodities, real estate, and financial instruments, and facilitate the creation of decentralized financial applications.
Beyond the aforementioned features, fungible assets are a superset of cryptocurrency, as a coin is just one type of fungible asset. The coin module in Move could be replaced by the fungible asset framework.
The rationale is twofold:
We have seen drastically increasing demand for a fungible asset framework from the Aptos community and partners. The earlier coin module is obsolete and insufficient for today's needs, partially due to the rigidity of the Move structs it is built upon and their inherently poor extensibility. Also, its basic model of authorization management is not flexible enough to enable creative innovations in fungible asset policy.
The old coin module has a noticeable deficiency: the store ability makes ownership tracing a nightmare. It is therefore not amenable to centralized management such as account freezing, because it is not programmatically feasible to find all the coins belonging to an account.
The fungible asset framework is designed to solve both issues.
fungible_asset::Metadata serves as the metadata, or information, associated with a kind of fungible asset. Any object can be extended with this resource to become a metadata object. Note that this object can have other resources attached to provide richer context. For example, if the fungible asset represents a gem, it can hold another Gem resource with fields like color, size, quality, rarity, etc.
#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
/// Define the metadata required for an object to be fungible.
struct Metadata has key {
/// The current supply of the fungible asset.
supply: u64,
/// The maximum supply limit where `option::none()` means no limit.
maximum: Option<u64>,
/// Name of the fungible metadata, i.e., "USDT".
name: String,
/// Symbol of the fungible metadata, usually a shorter version of the name.
/// For example, Singapore Dollar is SGD.
symbol: String,
/// Number of decimals used for display purposes.
/// For example, if `decimals` equals `2`, a balance of `505` coins should
/// be displayed to a user as `5.05` (`505 / 10 ** 2`).
decimals: u8,
}
FungibleStore resides in an object and acts as the container/holder of the balance of a specific fungible asset.
FungibleAsset is an instance of a fungible asset, modeled as a "hot potato": it has no abilities, so it cannot be copied, dropped, or stored and must be consumed (e.g., deposited) before the transaction ends.
#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
/// The store object that holds fungible assets of a specific type associated with an account.
struct FungibleStore has key {
/// The address of the base metadata object.
metadata: Object<Metadata>,
/// The balance of the fungible metadata.
balance: u64,
    /// Transferring fungible assets is a common operation; this flag allows for freezing/unfreezing accounts.
allow_ungated_balance_transfer: bool,
}
/// FungibleAsset can be passed into function for type safety and to guarantee a specific amount.
/// FungibleAsset is ephemeral: it cannot be stored directly and must be deposited back into a store.
struct FungibleAsset {
metadata: Object<Metadata>,
amount: u64,
}
Each account can own multiple FungibleStores, but only one is primary; the rest are called secondary stores. The primary store address is deterministic: hash(owner_address | metadata_address | 0xFC). Secondary stores, by contrast, are created on demand.
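The derivation can be sketched as follows. Note this uses Rust's DefaultHasher purely as a stand-in for the framework's real address hash; only the input layout (owner_address | metadata_address | 0xFC) reflects the scheme in the text:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministically derive a (stand-in) primary store address from the
// concatenation of owner address, metadata address, and the 0xFC separator.
fn primary_store_address(owner: &[u8], metadata: &[u8]) -> u64 {
    let mut input = Vec::new();
    input.extend_from_slice(owner);
    input.extend_from_slice(metadata);
    input.push(0xFC);
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same inputs always yield the same address: no registry lookup needed.
    let a = primary_store_address(b"owner", b"usdt_metadata");
    let b = primary_store_address(b"owner", b"usdt_metadata");
    println!("deterministic: {}", a == b);
}
```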
The differences between primary and secondary stores are summarized as:
public entry fun transfer<T: key>(
sender: &signer,
from: Object<T>,
to: Object<T>,
amount: u64,
)
public fun withdraw<T: key>(
owner: &signer,
store: Object<T>,
amount: u64,
): FungibleAsset
public fun deposit<T: key>(store: Object<T>, fa: FungibleAsset)
public fun mint(ref: &MintRef, amount: u64): FungibleAsset
public fun mint_to<T: key>(ref: &MintRef, store: Object<T>, amount: u64)
public fun set_ungated_transfer<T: key>(ref: &TransferRef, store: Object<T>, allow: bool)
public fun burn(ref: &BurnRef, fa: FungibleAsset)
public fun burn_from<T: key>(
ref: &BurnRef,
store: Object<T>,
amount: u64
)
public fun withdraw_with_ref<T: key>(
ref: &TransferRef,
store: Object<T>,
amount: u64
)
public fun deposit_with_ref<T: key>(
ref: &TransferRef,
store: Object<T>,
fa: FungibleAsset
)
public fun transfer_with_ref<T: key>(
transfer_ref: &TransferRef,
from: Object<T>,
to: Object<T>,
amount: u64,
)
#[view]
public fun primary_store_address<T: key>(owner: address, metadata: Object<T>): address
#[view]
/// Get the balance of `account`'s primary store.
public fun balance<T: key>(account: address, metadata: Object<T>): u64
#[view]
/// Return whether the given account's primary store can do direct transfers.
public fun ungated_balance_transfer_allowed<T: key>(account: address, metadata: Object<T>): bool
/// Withdraw `amount` of fungible asset from `store` by the owner.
public fun withdraw<T: key>(owner: &signer, metadata: Object<T>, amount: u64): FungibleAsset
/// Deposit `amount` of fungible asset to the given account's primary store.
public fun deposit(owner: address, fa: FungibleAsset)
/// Transfer `amount` of fungible asset from sender's primary store to receiver's primary store.
public entry fun transfer<T: key>(
sender: &signer,
metadata: Object<T>,
recipient: address,
amount: u64,
)
DeriveRef could also be used by other modules to squat the primary store object; the creator of the metadata needs to bear that in mind. For now, we therefore limit the function so it can only be called by the primary_store module. The underlying reason is that the name-deriving scheme has no native domain separator for different modules.
There is still room for improvement in management capabilities and in how fungible asset objects are located. Once we have a powerful indexer with a different programming model, a primary store may no longer be necessary.
By end of March.
On devnet by early April, on testnet by mid-April, and on mainnet by early May.
I think this would make the Aptos chain more active and incentivize people to act on chain.
For example, in the current Souffl3 Bake-off event, each bake costs 0.06 APT in gas fees.
With delegation_pool.move
, end users are able to participate in staking, but the delegation pool owner votes on behalf of the entire pool. Partial voting proposes changes to Aptos Governance that enable delegators to participate in on-chain governance and vote on governance proposals in proportion to their stake amount.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-28.md
This AIP is being submitted per the request of @movekevin in aptos-labs/aptos-core#8525
Presently the multisig v2 Move APIs require multiple transactions for certain operations, which would be streamlined via abstraction and helper functions. aptos-labs/aptos-core#8525 adds these functions.
Multisig v2 operations are already cumbersome compared with single-signer operations. Streamlining the process will reduce friction for distributed governance solutions.
The following function signatures are proposed:
/// Like `create_with_owners`, but removes the calling account after creation.
///
/// This is for creating a vanity multisig account from a bootstrapping account that should not
/// be an owner after the vanity multisig address has been secured.
public entry fun create_with_owners_then_remove_bootstrapper(
bootstrapper: &signer,
owners: vector<address>,
num_signatures_required: u64,
metadata_keys: vector<String>,
metadata_values: vector<vector<u8>>,
) acquires MultisigAccount {
/// Swap an owner in for an old one, without changing required signatures.
entry fun swap_owner(
multisig_account: &signer,
to_swap_in: address,
to_swap_out: address
) acquires MultisigAccount {
/// Swap owners in and out, without changing required signatures.
entry fun swap_owners(
multisig_account: &signer,
to_swap_in: vector<address>,
to_swap_out: vector<address>
) acquires MultisigAccount {
/// Swap owners in and out, updating number of required signatures.
entry fun swap_owners_and_update_signatures_required(
multisig_account: &signer,
new_owners: vector<address>,
owners_to_remove: vector<address>,
new_num_signatures_required: u64
) acquires MultisigAccount {
/// Add new owners, remove owners to remove, update signatures required.
fun update_owner_schema(
multisig_address: address,
new_owners: vector<address>,
owners_to_remove: vector<address>,
optional_new_num_signatures_required: Option<u64>,
) acquires MultisigAccount {
The final function is an abstraction that can be substituted for existing owner modification functions.
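A rough sketch of the owner-schema update described above (plain Rust with hypothetical types standing in for the Move implementation; addresses simplified to integers):

```rust
// Add new owners, remove owners to remove, optionally update the signature
// threshold, then validate the resulting schema.
fn update_owner_schema(
    owners: &mut Vec<u64>,
    new_owners: &[u64],
    owners_to_remove: &[u64],
    optional_new_num_signatures_required: Option<u64>,
    num_signatures_required: &mut u64,
) -> Result<(), &'static str> {
    for owner in new_owners {
        if !owners.contains(owner) {
            owners.push(*owner); // deduplicate on insert
        }
    }
    owners.retain(|owner| !owners_to_remove.contains(owner));
    if let Some(threshold) = optional_new_num_signatures_required {
        *num_signatures_required = threshold;
    }
    // The threshold must stay satisfiable by the remaining owners.
    if *num_signatures_required == 0 || *num_signatures_required > owners.len() as u64 {
        return Err("invalid signatures_required for remaining owners");
    }
    Ok(())
}
```

The swap_owner/swap_owners variants then reduce to calls of this one function with different argument shapes, which is why the AIP treats it as the common abstraction.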
Alternative considered: do nothing, thus retaining the transactional complexity.
Primary risk: smart contract risk.
Proposal/Solution
Hello Aptos team and community
We represent the Aptos Ru Community team: one of the earliest and most active adopters, coordinators, and contributors of Aptos across the Russian-speaking world. Our team includes both Aptos Moderators and Coordinators.
Among many other things we are creators of the following projects:
• Weekly RuWorkshop events/Ecosystem reviews
• AptosRuHub website
• Russian translation of Official Aptoslabs Medium
• Russian translation of Official Aptos Developer Documentation
• Step-by-step video guides on how to launch an Aptos node (from the earliest devnet up to AIT3)
• Telegram group with c. 2000 subscribers
• Weekly summaries of Move Mondays and Ru Workshops
Our team has also helped set up many nodes in devnet and educated many people about Aptos.
Each of us has a proven track record and a strong reputation in the community and on the Aptos Discord server.
We are constantly working on community development and on improving the quality of our materials and events.
Our objective
Grow the community by increasing its activity and size.
Limitation
Lack of financial resources for the preparation of higher-quality materials (articles, videos, etc.), event marketing, attracting influencers, promotional gifts, giveaways, etc.
Proposal/solution
Allocate places for Aptos Ru Community team to validate Aptos mainnet.
This proposal would achieve the following:
Allocating validator slots would solve the problem of financing the community project (Aptos Ru Community), improve the quality of its materials, and stimulate activity, since the rewards from network validation would not leave the project but would be used to promote different activities and, as a result, speed up the attraction and onboarding of users and builders to the Aptos network across the Russian-speaking audience.
Given our successful participation in all stages of the AIT, as well as our validation activity in other projects, we have all the necessary experience to participate in mainnet validation.
We are the Aptos Ru Community team; we want to grow our community and help Aptos engage new members and grow the whole Ru community.
All our resources
https://link3.to/aptosrucommunity
https://t.me/AptosRUcommunity
https://tiny.one/aptosruhub
https://youtube.com/@aptosrucommunity
https://t.me/AptosWorkshopRU
https://cr-nepos.gitbook.io/aptos-ru/
https://medium.com/@aptos-in-russian
Since the launch of the Aptos Mainnet, there are some improvements suggested by the community that will speed up adoption of the Coin standard. Specifically:
Since (2) is a minor improvement, the rest of this proposal will discuss the details for (1).
Currently, in many cases, coins (the Aptos equivalent of ERC-20 tokens, including APT) cannot be sent directly to an account if:
The primary historical reason for this design is to let an account explicitly opt in to the tokens/coins it wants and not receive random ones it doesn't. However, this has led to user and developer pain, as they need to remember to register the coin type, especially since only the recipient account can do this. One important use case that has run into this issue is CEX transfers that involve custom coins.
We can switch to a model where the CoinStore (created by registration) is implicitly created upon transfer if it doesn't exist for a specific CoinType. This can be added as a separate flow in aptos_account, similar to aptos_account::transfer, which implicitly creates an account upon an APT transfer if one doesn't exist. In addition, accounts can choose to opt out of this behavior if desired. The detailed flow is below:
1. aptos_account::transfer_coins(from: &signer, to: address, amount: u64) by default will register the recipient address (create CoinStore) for CoinType if one doesn't exist.
2. An account can choose to opt out of this behavior (e.g., to avoid receiving spammy coins) by calling aptos_account::set_allow_direct_coin_transfers(false). They can later revert this with set_allow_direct_coin_transfers(true). By default, any account existing before this proposal is implemented, as well as any new account created afterward, is implicitly opted into receiving all coins.
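The opt-in/opt-out gating described in steps 1 and 2 can be sketched as follows (hypothetical types; not the aptos_account implementation):

```rust
// Per-recipient state relevant to a direct coin transfer.
struct Account {
    allow_direct_coin_transfers: bool, // the opt-out flag, true by default
    registered: bool,                  // whether a CoinStore exists for this CoinType
}

// Direct transfer: implicitly register the recipient unless they opted out.
fn transfer_coins(recipient: &mut Account) -> Result<(), &'static str> {
    if !recipient.registered {
        if !recipient.allow_direct_coin_transfers {
            return Err("recipient has opted out of direct coin transfers");
        }
        recipient.registered = true; // implicitly create the CoinStore
    }
    Ok(()) // deposit would proceed here
}

fn main() {
    let mut acct = Account { allow_direct_coin_transfers: true, registered: false };
    println!("{:?}", transfer_coins(&mut acct)); // Ok(()), CoinStore created
}
```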
Reference implementation:
Since this is a new flow rather than a modification of the existing coin::transfer function, existing dependencies on coin::register and coin::transfer should not be affected. There is only one known potential risk: since accounts are opted into receiving arbitrary coins by default, malicious users could spam them with hundreds or thousands of spammy coins. This can be mitigated:
This change can be rolled out to Testnet for testing in the week of 2/1 or 2/8 (PST time).
This AIP proposes a 1.5% yearly decrease in staking rewards, which is part of the Aptos tokenomics requirement.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-30.md
This AIP proposes to add a new native function create_uuid to the Aptos framework that generates and outputs a unique 256-bit identifier (of type address) on each function call. Calls to create_uuid run efficiently and in parallel: when two transactions call the create_uuid method, they can be executed in parallel without any conflicts. Initially, we will use these unique identifiers internally as addresses for newly created Move Objects.
There is a general need to be able to create unique identifiers or addresses. There is no such utility in Move today, so various alternatives have been used, which bring performance implications. We want to provide such a utility for all use cases that need it.
Concretely, when a new object is created, we need to associate it with a unique address. For named objects, we deterministically derive it from the name. But for all other objects, we currently derive it from a GUID (Globally Unique Identifier) that we create on the fly. A GUID consists of a tuple (address, creation_num). We create a GUID by having address be the account address of the object or resource's creator, and creation_num be the GUID sequence number of the objects or resources created by that account. As the sequence number creation_num has to be incremented for each object/resource creation, GUID generation is inherently sequential within the same address. As an example, in Token V2, whenever a new token is minted using token::create_from_account, an object is created to back it, which uses a GUID generated based on the collection address, so all such mints from the same collection are inherently sequential.
This AIP therefore creates a new type of identifier called a UUID (Universally Unique Identifier). This is a 256-bit identifier that is universally unique among all the identifiers generated by all accounts for all purposes. We propose adding a new native function create_uuid to the Aptos framework. Every time Move code calls create_uuid, the function outputs a universally unique 256-bit value.
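To see why GUID-based derivation serializes object creation under one address while the proposed scheme does not, consider this sketch (names illustrative, not the aptos-framework API):

```rust
#[derive(Debug, PartialEq)]
struct Guid {
    addr: u64,         // creator's account address (simplified to u64)
    creation_num: u64, // per-address sequence number
}

// Every GUID creation performs a read-modify-write on the shared per-address
// counter, so two creations under the same address conflict and cannot be
// executed in parallel. A conflict-free create_uuid removes this counter.
fn next_guid(addr: u64, counter: &mut u64) -> Guid {
    let guid = Guid { addr, creation_num: *counter };
    *counter += 1; // the serialization point
    guid
}

fn main() {
    let mut counter = 0;
    let a = next_guid(0xCAFE, &mut counter);
    let b = next_guid(0xCAFE, &mut counter);
    println!("{:?} {:?}", a, b); // creation_num 0, then 1
}
```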
Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-36.md
aip: 9
title: Resource Groups
author: davidiw, wrwg, msmouse
discussions-to: #26
status: Draft
last-call-end-date:
type: Standard (Interface, Framework)
created: 2023/1/5
updated: N/A
requires: N/A
This AIP proposes resource groups to support storing multiple distinct Move resources together into a single storage slot.
Over the course of development, it often becomes convenient to add new fields to a resource or support an optional, heterogeneous set of resources. However, resources and structs are immutable after being published to the blockchain, hence, the only pathway to add a new field is via a new resource.
Each distinct resource within Aptos requires a storage slot. Each storage slot is a unique entry within a Merkle tree or authenticated data structure. Each proof within the authenticated data structure occupies `32 * log2(N)` bytes, where `N` is the total number of storage slots. At `N = 1,000,000`, this results in a 640-byte proof.
With 1,000,000 storage slots in use, adding even a new resource that contains only an event handle uses approximately 680 bytes, where the event handle itself requires only 40. The remaining 640 bytes come from the new authenticated data proofs, which can be orders of magnitude larger than the data being authenticated. Beyond the capacity demands, reads and writes incur additional costs associated with proof verification and generation, respectively.
Resource groups allow for dynamic co-location of data, such that adding a new event can be done even after creation of the resource group, and with fixed storage and execution costs independent of the number of slots in storage. In turn, this provides a convenient way to evolve data types and co-locate data from different resources.
A resource group co-locates data into a single storage slot by encoding within the Move source files attributes that specify which resources should be combined into a single storage slot. Resource groups have no semantic effect on Move, only on the organization of storage.
At the storage layer, a resource group is stored as a BCS-encoded `BTreeMap`, where the key is a BCS-encoded fully qualified struct name (`address::module_name::struct_name`, e.g., `0x1::account::Account`) and the value is the BCS-encoded data associated with the resource.
The above diagram illustrates data stored at address `0xcafef00d`. `0x1::account::Account` is a resource stored at address `0xcafef00d`. `0xaa::resource::Group` contains a set of resources, i.e., a resource group, stored at the same address. The resource group packs multiple resources into the group. Resources within a resource group require nested reading: first the resource group must be read from storage, followed by reading the specific resource from the resource group.
One alternative that was considered is storing data in a `SimpleMap` using the `any` module. While this is a model that could be shipped without any change to aptos-core, it incurs drawbacks around developer and application complexity, both on- and off-chain. There is no implicit caching, so any read or write requires a deserialization of the object, and any write additionally requires a serialization. This means a transaction with 3 writes would result in 3 deserializations and 3 serializations. To get around this, the framework would need substantial, non-negligible changes, though with the emergence of `SmartMap` there may be more viability here. Finally, due to the lack of a common pattern, indexers and APIs would not be able to easily access this data.
Another alternative was using templates. The challenge with templates is that data cannot be partially read without knowing the template type. For example, consider an object that might be a token. With resource groups, one could easily read the `Object` or the `Token` resource. With templates, one would need to read the `Object<Token>`. This could be worked around with complex framework changes and risky partial reading of BCS-encoded data, an approach which has yet to be considered. The same issues in Move would impact those using the REST API.
There are myriad combinations of the above two approaches. In general, the common drawback is that the data stored within these structures would be a `struct` with `store`. A `struct` with the `key` ability has stricter and more understandable properties than `store`. For example, the latter can lead to data being placed in arbitrary places, complicating global addressing and discoverability, which may be desirable for certain applications.

A resource group consists of several distinct resources, or Move `struct`s that have the `key` ability.
Each resource group is identified by a common Move `struct`:
#[resource_group(scope = global)]
struct ObjectGroup { }
Where this `struct` has no fields and the attribute `resource_group`. The attribute `resource_group` has the parameter `scope` that limits the location of other entries within the resource group:

- `module` — only resources defined within the same module may be stored within the same resource group.
- `address` — only resources defined within the same address may be stored within the same resource group.
- `global` — there are no limitations on where the resource is defined; any resource can be stored within the same resource group.

The motivation for using a `struct` is that:

- It can be referenced by a `StructTag`. Thus it limits the implementation impact to the VM and readers of storage; storage can remain agnostic to this change.
- Both `struct` and `fun` can have attributes, which in turn lets us define additional parameters like `scope`.

Each entry in a resource group is identified by the `resource_group_member` attribute:
#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
struct Object has key {
guid_creation_num: u64,
}
#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
struct Token has key {
name: String,
}
During compilation and publishing, these attributes are checked to ensure that:

- A `resource_group` struct has no abilities and no fields.
- The `scope` within a `resource_group` can only become more permissive; that is, it can either remain at the same level of accessibility or increase to the next.
- Each entry in a resource group has the `resource_group_member` attribute.
- The `group` parameter is set to a struct that is labeled as a `resource_group`.
- After publishing, a `struct` cannot either add or remove a `resource_group_member` attribute.

The motivation for each of these requirements is:

- Ensuring that the `resource_group` struct won't be used for other storage purposes. While there is no strict requirement that this be true, it is intended to mitigate confusion for developers.
- Keeping `scope` changes only more permissive ensures that existing `resource_group_member`s remain valid.
- Without explicitly labeling a struct as a `resource_group_member`, there is no way for Move to know that it is within a `resource_group`.

From a storage perspective, a resource group is stored as a BCS-encoded `BTreeMap<StructTag, BCS encoded MoveValue>`, where a `StructTag` is a known structure in Move of the form `{ account: Address, module_name: String, struct_name: String }`. In contrast, a typical resource is stored as a BCS-encoded `MoveValue`.
Resource groups introduce a new storage access path, `ResourceGroup`, to distinguish from existing access paths. This provides a cleaner interface and segregation of different types of storage. This is advantageous to indexers and other direct readers of storage, which can now parse storage without inspecting module metadata. Using the example above, `0x1::account::Account` is stored at `AccessPath::Resource(0xcafef00d, 0x1::account::Account)`, whereas the resource group and its contents are stored at `AccessPath::ResourceGroup(0xcafef00d, 0xaa::resource::Group)`.

The only way to tell that a resource is within a resource group is by reading the module metadata associated with the resource. After reading module metadata, the storage client should either directly read from the `AccessPath::Resource`, or first read the `AccessPath::ResourceGroup`, followed by deserializing the `BTreeMap` and then extracting the appropriate resource.
At write time, an element of a resource group must be appropriately updated within its resource group by determining the delta to the resource group resulting from the write operation. This results in a handful of possibilities for creating, modifying, or deleting the group and its members, with corresponding implications for the gas schedule.
The above text on storage discusses the layout for resources and resource groups. User-facing interfaces, such as the REST API, should not be exposed to resource groups; it is entirely a Move concept. A direct read on a resource group should be avoided; a resource group should be flattened and included within the set of resources when reading bulk resources at an address.

Each entry in a resource group additionally stores its `StructTag` (likely much less than 100 bytes). Accesses to a resource group incur an extra deserialization for reads, and an extra deserialization and serialization for writes. This is cheaper than the alternatives and still substantially cheaper than the storage costs. Of course, developers are free to explore the delta in their own implementations, as resource groups do not eliminate individual resources. None of these are major roadblocks and will be addressed as part of the implementation of Resource Groups.
While existing resources cannot be seamlessly adopted into resource groups, it is likely that many of the commonly used resources will be migrated into new resources within resource groups to give more flexibility to upgradeability, because a resource group does not lock developers into a fixed resource layout. In fact, this returns Aptos to supporting a more idiomatic Move that co-locates resources stored at an address, freed from the performance considerations that hindered developers before.

Another area worth investigating is whether or not a templated struct can be within a resource group, depending on what the generic type is. Consider the current Aptos `Account` and the `CoinStore<AptosCoin>`: storing them separately has a negative impact on performance and storage costs.

In the current VM implementation, resources are cached upon read. This can be improved by caching the entire resource group at read time.
The YAML header does not contain documentation for the following fields and their format:

- `created` -> Explain what this is and its appropriate (date?) format.
- `last-call-end-date` -> Explain what this is and its appropriate (date?) format.

Furthermore, for the `aip` field, it says "(this is determined by the AIP Manager)". However, one would expect to just put the AIP # here. So please clarify if this should be entered by the user (and what should be entered), or if it should be left blank. "Determined by the AIP manager" does not clarify if I need to enter anything there :)
Discussion and feedback thread for AIP.
Link to AIP: #86
This AIP proposes the support of generic cryptography algebra operations in Aptos standard library.
The initial list of the supported generic operations includes group/field element serialization/deserialization, basic arithmetic, pairing, hash-to-structure, casting.
The initial list of supported algebraic structures includes groups/fields used in BLS12-381, a popular pairing-friendly curve as described here.
Either the operation list or the structure list can be extended by future AIPs.
Algebraic structures are fundamental building blocks for many cryptographic schemes, but they are also hard to implement efficiently in pure Move.
This change should allow Move developers to implement generic constructions of those schemes, then get different instantiations by only switching the type parameter(s).
For example, if BLS12-381 groups and BN254 groups are supported, one can implement a generic Groth16 proof verifier construction, then be able to use both BLS12-381-based Groth16 proof verifier and BN254-based Groth16 proof verifier.
BLS12-381-based Groth16 proof verifier has been implemented this way as part of the reference implementation.
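The generic pattern can be sketched as follows; `pairing` and `eq` are from the proposed `aptos_std::crypto_algebra` API, while the module and function names are hypothetical, and a full Groth16 verifier (which also folds public inputs) is omitted for brevity.

```move
// Sketch: one generic pairing check, usable with any curve's marker types.
module example::generic_pairing_check {
    use aptos_std::crypto_algebra::{Element, pairing, eq};

    // For any (G1, G2, Gt) that admit a pairing, the same code checks
    // e(a, b) == e(c, d); instantiate with BLS12-381 or BN254 markers.
    public fun pairings_equal<G1, G2, Gt>(
        a: &Element<G1>, b: &Element<G2>,
        c: &Element<G1>, d: &Element<G2>,
    ): bool {
        eq(&pairing<G1, G2, Gt>(a, b), &pairing<G1, G2, Gt>(c, d))
    }
}
```

Switching the instantiation only changes the three type arguments, not the verifier logic.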
An alternative non-generic approach is to expose instantiated schemes directly in aptos_stdlib.
For example, we can define a Groth16 proof verification function `0x1::groth16_<curve>::verify_proof(vk, proof, public_inputs): bool` for every pairing-friendly elliptic curve `<curve>`.
For ECDSA signatures, which require a hash function and a group, we can define `0x1::ecdsa_<hash>_<group>::verify_signature(pk, msg, sig): bool` for each pair of proper hash function `<hash>` and group `<group>`.
Compared with the proposed approach, the alternative approach saves the work of constructing the schemes for Move developers. However, the size of aptos_stdlib can multiply too fast in the future.
Furthermore, the non-generic approach is not scalable from a development standpoint: a new native is needed for every combination of cryptosystem and its underlying algebraic structure (e.g., elliptic curve).
To keep the Aptos stdlib concise while still covering as many use cases as possible, the proposed generic approach should be chosen over the alternative approach.
Module `aptos_std::crypto_algebra` is designed to have the following definitions:

- A generic type `Element<S>` that represents an element of an algebraic structure `S`.

Below is the full specification in pseudo-Move.
module aptos_std::crypto_algebra {
/// An element of the group `G`.
struct Element<S> has copy, drop;
/// Check if `x == y` for elements `x` and `y` of an algebraic structure `S`.
public fun eq<S>(x: &Element<S>, y: &Element<S>): bool;
/// Convert a u64 to an element of an algebraic structure `S`.
public fun from_u64<S>(value: u64): Element<S>;
/// Return the additive identity of field `S`, or the identity of group `S`.
public fun zero<S>(): Element<S>;
/// Return the multiplicative identity of field `S`, or a fixed generator of group `S`.
public fun one<S>(): Element<S>;
/// Compute `-x` for an element `x` of a structure `S`.
public fun neg<S>(x: &Element<S>): Element<S>;
/// Compute `x + y` for elements `x` and `y` of a structure `S`.
public fun add<S>(x: &Element<S>, y: &Element<S>): Element<S>;
/// Compute `x - y` for elements `x` and `y` of a structure `S`.
public fun sub<S>(x: &Element<S>, y: &Element<S>): Element<S>;
/// Try computing `x^(-1)` for an element `x` of a structure `S`.
/// Return none if `x` does not have a multiplicative inverse in the structure `S`
/// (e.g., when `S` is a field, and `x` is zero).
public fun inv<S>(x: &Element<S>): Option<Element<S>>;
/// Compute `x * y` for elements `x` and `y` of a structure `S`.
public fun mul<S>(x: &Element<S>, y: &Element<S>): Element<S>;
/// Try computing `x / y` for elements `x` and `y` of a structure `S`.
/// Return none if `y` does not have a multiplicative inverse in the structure `S`
/// (e.g., when `S` is a field, and `y` is zero).
public fun div<S>(x: &Element<S>, y: &Element<S>): Option<Element<S>>;
/// Compute `x^2` for an element `x` of a structure `S`. Faster and cheaper than `mul(x, x)`.
public fun sqr<S>(x: &Element<S>): Element<S>;
/// Compute `2*P` for an element `P` of a structure `S`. Faster and cheaper than `add(P, P)`.
public fun double<G>(element_p: &Element<G>): Element<G>;
/// Compute `k*P`, where `P` is an element of a group `G` and `k` is an element of the scalar field `S` of group `G`.
public fun scalar_mul<G, S>(element_p: &Element<G>, scalar_k: &Element<S>): Element<G>;
/// Compute `k[0]*P[0]+...+k[n-1]*P[n-1]`, where
/// `P[]` are `n` elements of group `G` represented by parameter `elements`, and
/// `k[]` are `n` elements of the scalar field `S` of group `G` represented by parameter `scalars`.
///
/// Abort with code `std::error::invalid_argument(E_NON_EQUAL_LENGTHS)` if the sizes of `elements` and `scalars` do not match.
public fun multi_scalar_mul<G, S>(elements: &vector<Element<G>>, scalars: &vector<Element<S>>): Element<G>;
/// Efficiently compute `e(P[0],Q[0])+...+e(P[n-1],Q[n-1])`,
/// where `e: (G1,G2) -> (Gt)` is a pre-compiled pairing function from groups `(G1,G2)` to group `Gt`,
/// `P[]` are `n` elements of group `G1` represented by parameter `g1_elements`, and
/// `Q[]` are `n` elements of group `G2` represented by parameter `g2_elements`.
///
/// Abort with code `std::error::invalid_argument(E_NON_EQUAL_LENGTHS)` if the sizes of `g1_elements` and `g2_elements` do not match.
public fun multi_pairing<G1,G2,Gt>(g1_elements: &vector<Element<G1>>, g2_elements: &vector<Element<G2>>): Element<Gt>;
/// Compute a pre-compiled pairing function (a.k.a., bilinear map) on `element_1` and `element_2`.
/// Return an element in the target group `Gt`.
public fun pairing<G1,G2,Gt>(element_1: &Element<G1>, element_2: &Element<G2>): Element<Gt>;
/// Try deserializing a byte array to an element of an algebraic structure `S` using a given serialization format `F`.
/// Return none if the deserialization failed.
public fun deserialize<S, F>(bytes: &vector<u8>): Option<Element<S>>;
/// Serialize an element of an algebraic structure `S` to a byte array using a given serialization format `F`.
public fun serialize<S, F>(element: &Element<S>): vector<u8>;
/// Get the order of structure `S`, a big integer little-endian encoded as a byte array.
public fun order<G>(): vector<u8>;
/// Cast an element of a structure `S` to a super-structure `L`.
public fun upcast<S,L>(element: &Element<S>): Element<L>;
/// Try casting an element `x` of a structure `L` to a sub-structure `S`.
/// Return none if `x` is not a member of `S`.
///
/// NOTE: Membership check is performed inside, which can be expensive, depending on the structures `L` and `S`.
public fun downcast<L,S>(element_x: &Element<L>): Option<Element<S>>;
/// Hash an arbitrary-length byte array `msg` into structure `S` with a domain separation tag `dst`
/// using the given hash-to-structure suite `H`.
public fun hash_to<St, Su>(dst: &vector<u8>, msg: &vector<u8>): Element<St>;
#[test_only]
/// Generate a random element of an algebraic structure `S`.
public fun rand_insecure<S>(): Element<S>;
}
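As a usage sketch (not part of the specification above), code written against this API works uniformly over any supported structure; the hypothetical module below assumes only the signatures listed above.

```move
// Sketch only: works for any field marker type `S` that the framework
// supports (e.g., the BLS12-381 marker types described later).
module example::algebra_demo {
    use aptos_std::crypto_algebra::{Self, Element};

    // Computes 1 + 1 generically; for a field `S`, this equals from_u64<S>(2).
    public fun two<S>(): Element<S> {
        crypto_algebra::add(&crypto_algebra::one<S>(), &crypto_algebra::one<S>())
    }
}
```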
In general, every structure implements basic operations like (de)serialization, equality check, and random sampling.
For example, a group may also implement the following operations (additive notation is used):

- `order()` for getting the group order.
- `zero()` for getting the group identity.
- `one()` for getting the group generator (if it exists).
- `neg()` for group element inversion.
- `add()` for the basic group operation.
- `sub()` for group element subtraction.
- `double()` for efficient group element doubling.
- `scalar_mul()` for group scalar multiplication.
- `multi_scalar_mul()` for efficient group multi-scalar multiplication.
- `hash_to()` for hash-to-group.

As another example, a field may also implement the following operations:

- `order()` for getting the field order.
- `zero()` for the field additive identity.
- `one()` for the field multiplicative identity.
- `add()` for field addition.
- `sub()` for field subtraction.
- `mul()` for field multiplication.
- `div()` for field division.
- `neg()` for field negation.
- `inv()` for field inversion.
- `sqr()` for efficient field element squaring.
- `from_u64()` for quick conversion from u64 to a field element.

Similarly, for 3 groups `G1`, `G2`, `Gt` that admit a bilinear map, `pairing<G1, G2, Gt>()` and `multi_pairing<G1, G2, Gt>()` may be implemented.
For a subset/superset relationship between 2 structures, `upcast()` and `downcast()` may be implemented. E.g., in BLS12-381, `Gt` is a multiplicative subgroup of `Fq12`, so upcasting from `Gt` to `Fq12` and downcasting from `Fq12` to `Gt` can be supported.
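A sketch of the `Gt`/`Fq12` casting described above; the `aptos_std::bls12381_algebra` marker types are from this AIP, while the module and function names here are hypothetical.

```move
// Sketch: a Gt element round-trips through Fq12 via upcast/downcast.
module example::cast_demo {
    use std::option;
    use aptos_std::crypto_algebra::{upcast, downcast, Element};
    use aptos_std::bls12381_algebra::{Gt, Fq12};

    public fun roundtrips(x: &Element<Gt>): bool {
        // Upcasting to the super-structure always succeeds.
        let y: Element<Fq12> = upcast<Gt, Fq12>(x);
        // Downcasting performs the (potentially expensive) membership check
        // and returns none for Fq12 elements outside Gt.
        option::is_some(&downcast<Fq12, Gt>(&y))
    }
}
```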
Some groups share the same group order, and an ergonomic design for this is to allow multiple groups to share the same scalar field (mainly for the purpose of scalar multiplication) if they have the same order. In other words, the following should be supported:
// algebra.move
pub fun scalar_mul<G,S>(element: &Element<G>, scalar: &Scalar<S>)...
// user_contract.move
let k: Scalar<ScalarForBx> = somehow_get_k();
let p1 = one<GroupB1>();
let p2 = one<GroupB2>();
let q1 = scalar_mul<GroupB1, ScalarForBx>(&p1, &k);
let q2 = scalar_mul<GroupB2, ScalarForBx>(&p2, &k);
There is currently no easy way to ensure type safety for the generic operations. E.g., `pairing<A,B,C>(a,b)` can compile even if groups `A`, `B` and `C` do not admit a pairing.
Therefore, the backend should handle the type checks at runtime. For example, if a group operation that takes 2+ type parameters is invoked with incompatible type parameters, it must abort. For example, `scalar_mul<GroupA, ScalarB>()`, where `GroupA` and `ScalarB` have different orders, will abort with a "not implemented" error. Invoking operation functions with user-defined types should also abort with a "not implemented" error. For example, `zero<std::option::Option<u64>>()` will abort.
To support a wide-enough variety of BLS12-381 operations using the `aptos_std::crypto_algebra` API, we implement several marker types for the relevant groups of order `r` (for `G1`, `G2` and `Gt`) and fields (e.g., `Fr`, `Fq12`). We also implement marker types for popular serialization formats and hash-to-group suites. Below, we describe all possible marker types we could implement for BLS12-381, and mark the ones that we actually implement as "implemented". These, we believe, should be sufficient to support most BLS12-381 applications.
`Fq`

The finite field `Fq` used in BLS12-381 curves, with a prime order equal to 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab.

`FormatFqLsb`

A serialization format for `Fq` elements, where an element is represented by a byte array `b[]` of size 48 with the least significant byte (LSB) coming first.

`FormatFqMsb`

A serialization format for `Fq` elements, where an element is represented by a byte array `b[]` of size 48 with the most significant byte (MSB) coming first.
`Fq2`

The finite field `Fq2` used in BLS12-381 curves, which is an extension field of `Fq`, constructed as `Fq2 = Fq[u] / (u^2 + 1)`.

`FormatFq2LscLsb`

A serialization format for `Fq2` elements, where an element `(c0 + c1 * u)` is represented by a byte array `b[]` of size 96, which is a concatenation of its serialized coefficients, with the least significant coefficient (LSC) coming first:

- `b[0..48]` is `c0` serialized using `FormatFqLsb`.
- `b[48..96]` is `c1` serialized using `FormatFqLsb`.

`FormatFq2MscMsb`

A serialization format for `Fq2` elements, where an element `(c0 + c1 * u)` is represented by a byte array `b[]` of size 96, which is a concatenation of its serialized coefficients, with the most significant coefficient (MSC) coming first:

- `b[0..48]` is `c1` serialized using `FormatFqMsb`.
- `b[48..96]` is `c0` serialized using `FormatFqMsb`.

`Fq6`
The finite field `Fq6` used in BLS12-381 curves, which is an extension field of `Fq2`, constructed as `Fq6 = Fq2[v] / (v^3 - u - 1)`.

`FormatFq6LscLsb`

A serialization scheme for `Fq6` elements, where an element `(c0 + c1 * v + c2 * v^2)` is represented by a byte array `b[]` of size 288, which is a concatenation of its serialized coefficients, with the least significant coefficient (LSC) coming first:

- `b[0..96]` is `c0` serialized using `FormatFq2LscLsb`.
- `b[96..192]` is `c1` serialized using `FormatFq2LscLsb`.
- `b[192..288]` is `c2` serialized using `FormatFq2LscLsb`.

`Fq12` (implemented)

The finite field `Fq12` used in BLS12-381 curves, which is an extension field of `Fq6`, constructed as `Fq12 = Fq6[w] / (w^2 - v)`.
`FormatFq12LscLsb` (implemented)

A serialization scheme for `Fq12` elements, where an element `(c0 + c1 * w)` is represented by a byte array `b[]` of size 576, which is a concatenation of its serialized coefficients, with the least significant coefficient (LSC) coming first:

- `b[0..288]` is `c0` serialized using `FormatFq6LscLsb`.
- `b[288..576]` is `c1` serialized using `FormatFq6LscLsb`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
`G1Full`

A group constructed by the points on the BLS12-381 curve `E(Fq): y^2 = x^3 + 4` under the elliptic curve point addition. It contains the prime-order subgroup `G1` used in pairing.

`G1` (implemented)

The group `G1` in BLS12-381-based pairing `(G1, G2, Gt)`. It is a subgroup of `G1Full` with a prime order `r` equal to 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001 (so `Fr` is the associated scalar field).
`FormatG1Uncompr` (implemented)

A serialization scheme for `G1` elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a `G1` element `p` and outputs a byte array of size 96.

1. Let `(x,y)` be the coordinates of `p` if `p` is on the curve, or `(0,0)` otherwise.
2. Serialize `x` and `y` into `b_x[]` and `b_y[]` respectively using `FormatFqMsb`.
3. Concatenate `b_x[]` and `b_y[]` into `b[]`.
4. If `p` is the point at infinity, set the infinity bit: `b[0] := b[0] | 0x40`.
5. Return `b[]`.

Below is the deserialization procedure that takes a byte array `b[]` and outputs either a `G1` element or none.

1. If the size of `b[]` is not 96, return none.
2. Compute the compression flag as `b[0] & 0x80 != 0`. If it is set, return none.
3. Compute the infinity flag as `b[0] & 0x40 != 0`. If it is set, return the point at infinity.
4. Deserialize `[b[0] & 0x1f, b[1], ..., b[47]]` to `x` using `FormatFqMsb`. If `x` is none, return none.
5. Deserialize `[b[48], ..., b[95]]` to `y` using `FormatFqMsb`. If `y` is none, return none.
6. Check if `(x,y)` is on curve `E`. If not, return none.
7. Check if `(x,y)` is in the subgroup of order `r`. If not, return none.
8. Return `(x,y)`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
`FormatG1Compr` (implemented)

A serialization scheme for `G1` elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a `G1` element `p` and outputs a byte array of size 48.

1. Let `(x,y)` be the coordinates of `p` if `p` is on the curve, or `(0,0)` otherwise.
2. Serialize `x` into `b[]` using `FormatFqMsb`.
3. Set the compression bit: `b[0] := b[0] | 0x80`.
4. If `p` is the point at infinity, set the infinity bit: `b[0] := b[0] | 0x40`.
5. If `y > -y`, set the lexicographical flag: `b[0] := b[0] | 0x20`.
6. Return `b[]`.

Below is the deserialization procedure that takes a byte array `b[]` and outputs either a `G1` element or none.

1. If the size of `b[]` is not 48, return none.
2. Compute the compression flag as `b[0] & 0x80 != 0`. If it is not set, return none.
3. Compute the infinity flag as `b[0] & 0x40 != 0`. If it is set, return the point at infinity.
4. Compute the lexicographical flag as `b[0] & 0x20 != 0`.
5. Deserialize `[b[0] & 0x1f, b[1], ..., b[47]]` to `x` using `FormatFqMsb`. If `x` is none, return none.
6. Solve the curve equation with `x` for `y`. If no such `y` exists, return none.
7. Let `y'` be `max(y,-y)` if the lexicographical flag is set, or `min(y,-y)` otherwise.
8. Check if `(x,y')` is in the subgroup of order `r`. If not, return none.
9. Return `(x,y')`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
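A sketch of consuming this format from Move; the marker types are from this AIP, while the module and function names here are hypothetical.

```move
// Sketch: validating an untrusted 48-byte compressed G1 point. Per the
// procedure above, deserialization performs the size, curve, and
// prime-order subgroup checks, returning none on any failure.
module example::g1_parse {
    use std::option::Option;
    use aptos_std::crypto_algebra::{deserialize, Element};
    use aptos_std::bls12381_algebra::{G1, FormatG1Compr};

    public fun parse(bytes: &vector<u8>): Option<Element<G1>> {
        deserialize<G1, FormatG1Compr>(bytes)
    }
}
```

Because the subgroup check is built into deserialization, callers never obtain an `Element<G1>` outside the order-`r` subgroup.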
`G2Full`

A group constructed by the points on a curve `E'(Fq2): y^2 = x^3 + 4(u + 1)` under the elliptic curve point addition. It contains the prime-order subgroup `G2` used in pairing.

`G2` (implemented)

The group `G2` in BLS12-381-based pairing `(G1, G2, Gt)`. It is a subgroup of `G2Full` with a prime order `r` equal to 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001 (so `Fr` is the scalar field).
`FormatG2Uncompr` (implemented)

A serialization scheme for `G2` elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a `G2` element `p` and outputs a byte array of size 192.

1. Let `(x,y)` be the coordinates of `p` if `p` is on the curve, or `(0,0)` otherwise.
2. Serialize `x` and `y` into `b_x[]` and `b_y[]` respectively using `FormatFq2MscMsb`.
3. Concatenate `b_x[]` and `b_y[]` into `b[]`.
4. If `p` is the point at infinity, set the infinity bit in `b[]`: `b[0] := b[0] | 0x40`.
5. Return `b[]`.

Below is the deserialization procedure that takes a byte array `b[]` and outputs either a `G2` element or none.

1. If the size of `b[]` is not 192, return none.
2. Compute the compression flag as `b[0] & 0x80 != 0`. If it is set, return none.
3. Compute the infinity flag as `b[0] & 0x40 != 0`. If it is set, return the point at infinity.
4. Deserialize `[b[0] & 0x1f, ..., b[95]]` to `x` using `FormatFq2MscMsb`. If `x` is none, return none.
5. Deserialize `[b[96], ..., b[191]]` to `y` using `FormatFq2MscMsb`. If `y` is none, return none.
6. Check if `(x,y)` is on the curve `E'`. If not, return none.
7. Check if `(x,y)` is in the subgroup of order `r`. If not, return none.
8. Return `(x,y)`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
`FormatG2Compr` (implemented)

A serialization scheme for `G2` elements derived from https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-11.html#name-zcash-serialization-format-.

Below is the serialization procedure that takes a `G2` element `p` and outputs a byte array of size 96.

1. Let `(x,y)` be the coordinates of `p` if `p` is on the curve, or `(0,0)` otherwise.
2. Serialize `x` into `b[]` using `FormatFq2MscMsb`.
3. Set the compression bit: `b[0] := b[0] | 0x80`.
4. If `p` is the point at infinity, set the infinity bit: `b[0] := b[0] | 0x40`.
5. If `y > -y`, set the lexicographical flag: `b[0] := b[0] | 0x20`.
6. Return `b[]`.

Below is the deserialization procedure that takes a byte array `b[]` and outputs either a `G2` element or none.

1. If the size of `b[]` is not 96, return none.
2. Compute the compression flag as `b[0] & 0x80 != 0`. If it is not set, return none.
3. Compute the infinity flag as `b[0] & 0x40 != 0`. If it is set, return the point at infinity.
4. Compute the lexicographical flag as `b[0] & 0x20 != 0`.
5. Deserialize `[b[0] & 0x1f, b[1], ..., b[95]]` to `x` using `FormatFq2MscMsb`. If `x` is none, return none.
6. Solve the curve equation with `x` for `y`. If no such `y` exists, return none.
7. Let `y'` be `max(y,-y)` if the lexicographical flag is set, or `min(y,-y)` otherwise.
8. Check if `(x,y')` is in the subgroup of order `r`. If not, return none.
9. Return `(x,y')`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
`Gt` (implemented)

The group `Gt` in BLS12-381-based pairing `(G1, G2, Gt)`. It is a multiplicative subgroup of `Fq12`, with a prime order `r` equal to 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001 (so `Fr` is the scalar field). The identity of `Gt` is 1.

`FormatGt` (implemented)

A serialization scheme for `Gt` elements. To serialize, it treats a `Gt` element `p` as an `Fq12` element and serializes it using `FormatFq12LscLsb`. To deserialize, it uses `FormatFq12LscLsb` to try deserializing to an `Fq12` element, then tests membership in `Gt`.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0.
`Fr` (implemented)

The finite field `Fr` of order 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001, associated with the groups `G1`, `G2` and `Gt` as their scalar field.

`FormatFrLsb` (implemented)

A serialization format for `Fr` elements, where an element is represented by a byte array `b[]` of size 32 with the least significant byte (LSB) coming first.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0, blst-0.3.7.

`FormatFrMsb` (implemented)

A serialization scheme for `Fr` elements, where an element is represented by a byte array `b[]` of size 32 with the most significant byte (MSB) coming first.

NOTE: other implementation(s) using this format: ark-bls12-381-0.4.0, blst-0.3.7.
`HashG1XmdSha256SswuRo` (implemented)

The hash-to-curve suite `BLS12381G1_XMD:SHA-256_SSWU_RO_` that hashes a byte array into `G1` elements. The full specification is defined in https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hash-to-curve-16#name-bls12-381-g1.

`HashG2XmdSha256SswuRo` (implemented)

The hash-to-curve suite `BLS12381G2_XMD:SHA-256_SSWU_RO_` that hashes a byte array into `G2` elements. The full specification is defined in https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hash-to-curve-16#name-bls12-381-g2.
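A sketch of invoking a hash-to-group suite; the marker types are from this AIP, while the module name, function name, and domain separation tag below are illustrative assumptions.

```move
// Sketch: hashing a message into G1 with a domain separation tag (DST).
module example::h2c_demo {
    use aptos_std::crypto_algebra::{hash_to, Element};
    use aptos_std::bls12381_algebra::{G1, HashG1XmdSha256SswuRo};

    public fun hash_msg(msg: &vector<u8>): Element<G1> {
        // The DST is application-chosen; this value is illustrative only.
        // Distinct DSTs keep hashes from different applications disjoint.
        let dst = b"EXAMPLE_APP_BLS12381G1_XMD:SHA-256_SSWU_RO_";
        hash_to<G1, HashG1XmdSha256SswuRo>(&dst, msg)
    }
}
```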
https://github.com/aptos-labs/aptos-core/pull/6550/files
Developing cryptographic schemes, whether in Move or in any other language, is very difficult due to the inherent mathematical complexity of such schemes, as well as the difficulty of using cryptographic libraries securely. As a result, we caution Move application developers that implementing cryptographic schemes using the `crypto_algebra.move` and/or `bls12381_algebra.move` modules will be error prone and could result in vulnerable applications.
That being said, the `crypto_algebra.move` and `bls12381_algebra.move` Move modules have been designed with safety in mind.
First, we offer a minimal, hard-to-misuse abstraction for algebraic structures like groups and fields.
Second, our Move modules are type safe (e.g., inversion in a group G returns an Option).
Third, our BLS12-381 implementation always performs prime-order subgroup checks when deserializing group elements, to avoid serious implementation bugs.
The `crypto_algebra.move` Move module can be extended to support more structures (e.g., new elliptic curves) and operations (e.g., batch inversion of field elements). Once the Move language is upgraded with support for some kind of interfaces, it can be used to rewrite the Move-side specifications to ensure type safety at compile time.
The change should be available on devnet in April 2023.
This AIP proposes a new “Peer Monitoring” service that operates on Aptos nodes and allows nodes to track, share and monitor peer information and behaviors. The service aims to: (i) improve node latency and performance, by allowing components to make smarter decisions when disseminating and receiving data; (ii) improve operational quality and reliability, by allowing nodes to dynamically work around failures and unstable peers; (iii) improve peer discovery and selection, by providing a foundation for peer bootstrapping and gossip; (iv) improve security, by allowing nodes to better detect and react to malicious peers and bad behaviors; and (v) improve network observability, by gathering and exporting important network information.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-29.md
This AIP proposes to move two storage-efficient data structures into the Aptos framework. In general, these two structs can lower the storage footprint by packing several elements into one storage slot, instead of one element per slot as a normal `Table` would do.
Move is not hard to learn, but the intricacies between Move and the underlying infrastructure are not that intuitive, such as how gas scheduling works for storage and execution. Specifically, how Move data structures are stored, how the data is represented, and what the layout looks like are not well understood. Having witnessed many misuses of `vector` and `table`, our sequencing and associative container types, across various ecosystem projects on Aptos, we are well aware that, due to this lack of understanding of our gas schedule for both execution and storage, most Move developers on Aptos are not able to write smart contract code that is efficient in gas. This leads to, for example, spending one storage slot per small element in a `Table`, which is what we try to disincentivize in the long run for small state storage.
So we plan to provide a one-size-fits-all solution for both `vector` and `table` data structures that can handle data scaling in a more optimized way, considering the storage model and gas schedule. Therefore, most developers would not have to be overly concerned with gas cost differences between container types; instead, they could focus more on the product logic.
The design principle is to put more data into one slot without significant write amplification.
struct SmartVector<T> has store {
inline_vec: vector<T>,
big_vec: Option<BigVector<T>>,
}
In a nutshell, SmartVector
consists of an Option<vector<T>>
and an option<BigVector<T>>
, which is
a TableWithLength<T>
with metadata inherently. It is noted that we use vector
to replace option
here to
avoid drop
capability constraint on T
.The idea is:
inline_vec
will have data and it storesinline_vec
reached a threshold(M), it will create a new BigVector<T>
big_vec
with a bucket size(K) calculated based on the estimated average serialized size of T
. Then all theBigVector<T>
.SmartVector implements most basic functions of std::vector
.
It is noted that remove
, reverse
and append
would be very costly in terms of storage fee because they all
involve a number of table items modification.
The current solution is using the size_of_val
(T) of the current element to push multiplied by len(inline_vec) + 1
, if it is greater than a hardcoded value, 150, this new element will become the first element in big_vec
, whose
bucket_size, K
, is calculated by dividing a hardcoded value, 1024, by the average serialized size of all the
elements in inline_vec
and the element to push.
/// SmartTable entry contains both the key and value.
struct Entry<K, V> has copy, drop, store {
hash: u64,
key: K,
value: V,
}
struct SmartTable<K, V> has store {
buckets: TableWithLength<u64, vector<Entry<K, V>>>,
num_buckets: u64,
// number of bits to represent num_buckets
level: u8,
// total number of items
size: u64,
// Split will be triggered when target load threshold is reached when adding a new entry. In percent.
split_load_threshold: u8,
// The target size of each bucket, which is NOT enforced so oversized buckets can exist.
target_bucket_size: u64,
}
SmartTable
is basically a TableWithLength
where key is a u64
hash mod h(hash) of the user key and value is a
bucket, represented by a vector of all user key-value(kv) pairs with the same hashed user key. Compared to
native Table
, it makes table slot more compact by packing several kv pairs into one slot instead of one per slot.
SmartTable internally adopt linear hashing(LH) algorithm which
implements a hash table
and grows one bucket at a time. In our proposal, each bucket take one slot, represented by vector<Entry<K, V>>
in as
value type in a TableWithLength
. LH serves well for the motivation because the goal is to minimize the number of slots
while maintaining a table-like structure dynamically.
There are two parameters determining the behavior of SmartTable.
load_factor = 100% * size / (target_bucket_size * num_buckets)
.
SmartTable implements all the std::table
functions.
max(1, 1024 / max(1, size_of_val(first_entry_inserted)))
The current heuristic to automatically calculate target_bucket_size if not specified, is dividing the free quota, 1024,
by the size of the first entry inserted into the table.
vector<Entry<K, V>>
. A potential followup is to replace it withh(key)=hash(key) mod 2^{level}
and H(key)=hash(key) mod 2^{level + 1}
as hash functions that the result ish(key)
H(key)
double their modulo base together. For example, previously h(key) = hash(key) % 2, and H(key) = hash2^level - 1
. When the last bucket is split, actually during this round we have split 2^level buckets,num_buckets ^ (1 << level)
not the one we just inserted a kv pairnum_buckets % (1 << level)
H(key)
Lookup is tricky as we have to use both h(key)
and H(key)
for lookups. First we calculate bucket_index = H(key)
if
the result is an index of an existing bucket, it means the H(key)
actually works so we just use bucket_index
to find
the right bucket. However, if the result is invalid for existing bucket, it means the corresponding bucket has not been
split yet. So we have to turn to h(key)
to find the correct bucket.
The potential drawbacks of these two data structures are:
2 can be mitigated by using a native ordered map implementation as a bucket.
After the 100x execution gas reduction, the we benchmark the gas cost of creation and add 1 element into
vector/SmartVector/Table/SmartTable.
gas units | creation with 10000 u64 elements | push a new element | read an existing element |
---|---|---|---|
vector | 4080900 | 3995700 | 2600 |
smart vector | 5084900 | 2100 | 400 |
gas units | creation with 1000 u64 kv pairs | add a new kv pair | read an existing kv |
---|---|---|---|
table | 50594900 | 50800 | 300 |
smart table | 2043500 | 700 | 300 |
Reflected by the table above, smart data structures outperform vector and table for large datasets a lot in terms of
both creation and updates.
In a nutshell, we recommend using smart data structures for use cases involving large datasets such as whitelist. They
also can be easily destroyed if the internal elements have drop
.
size_of_val
to automatically determine the configurations of both data structures. If Move canvector
is used as a bucket. If there is a native map
struct, the gas cost would be highly cut down.Code complete: March 2023
Testnet release: March 2023
Change the internal implementation of SimpleMap in order to reduce gas prices with minimal impact on Move and Public APIs.
The current implementation of SimpleMap uses a BCS-based logarithmic comparator identifying slots on where to store data in a Vector. Unfortunately this is substantially more expensive than a trivial linear implementation, because each comparison requires BCS serialization followed by a comparison. The conversion to BCS cannot be resolved as quickly as traditional comparator can and substantially impacts gas prices.
find
from the current logarithmic implementation to a linear search across the vector.add
to call vector::push_back
and append new values instead of inserting them into their sorted position.This will result in the following performance differences:
Operation | Gas Unit Before | Gas Unit After | Delta |
---|---|---|---|
CreateCollection | 174200 | 174200 | |
CreateTokenFirstTime | 384800 | 384800 | |
MintToken | 117100 | 117100 | |
MutateToken | 249200 | 249200 | |
MutateTokenAdd10NewProperties | 1148700 | 390700 | 64% |
MutateTokenMutate10ExistingProperties | 1698300 | 411200 | 75% |
MutateTokenAdd90NewProperties | 20791800 | 10031700 | 51% |
MutateTokenMutate100ExistingProperties | 27184500 | 10215200 | 62% |
MutateTokenAdd300NewProperties (100 existing, 300 new) | 126269000 | 135417900 | -7% |
MutateTokenMutate400ExistingProperties | 143254200 | 136036800 | 5% |
When the token only has 1 property on-chain, we can see the mutation token cost doesn’t change. However,
if the user wants to add 10 new properties or update existing properties, the gas cost is reduced by 64% and 75%.
if the user wants to store 100 properties on-chain, the gas cost is reduced by 51% and 62%.
600 property mutations were also tested, but failed due to exceeding maximum gas.
Draft benchmark PR https://github.com/aptos-labs/aptos-core/pull/5765/files
// Before the proposed change
public fun add<Key: store, Value: store>(
map: &mut SimpleMap<Key, Value>,
key: Key,
value: Value,
) {
let (maybe_idx, maybe_placement) = find(map, &key);
assert!(option::is_none(&maybe_idx), error::invalid_argument(EKEY_ALREADY_EXISTS));
// Append to the end and then swap elements until the list is ordered again
vector::push_back(&mut map.data, Element { key, value });
let placement = option::extract(&mut maybe_placement);
let end = vector::length(&map.data) - 1;
while (placement < end) {
vector::swap(&mut map.data, placement, end);
placement = placement + 1;
};
}
// After the change
public fun add<Key: store, Value: store>(
map: &mut SimpleMap<Key, Value>,
key: Key,
value: Value,
) {
let maybe_idx = find_element(map, &key);
assert!(option::is_none(&maybe_idx), error::invalid_argument(EKEY_ALREADY_EXISTS));
vector::push_back(&mut map.data, Element { key, value });
}
// Before the proposed change
fun find<Key: store, Value: store>(
map: &SimpleMap<Key, Value>,
key: &Key,
): (option::Option<u64>, option::Option<u64>) {
let length = vector::length(&map.data);
if (length == 0) {
return (option::none(), option::some(0))
};
let left = 0;
let right = length;
while (left != right) {
let mid = left + (right - left) / 2;
let potential_key = &vector::borrow(&map.data, mid).key;
if (comparator::is_smaller_than(&comparator::compare(potential_key, key))) {
left = mid + 1;
} else {
right = mid;
};
};
if (left != length && key == &vector::borrow(&map.data, left).key) {
(option::some(left), option::none())
} else {
(option::none(), option::some(left))
}
}
// After the change
fun find_element<Key: store, Value: store>(
map: &SimpleMap<Key, Value>,
key: &Key,
): option::Option<u64>{
let leng = vector::length(&map.data);
let i = 0;
while (i < leng) {
let element = vector::borrow(&map.data, i);
if (&element.key == key){
return option::some(i)
};
i = i + 1;
};
option::none<u64>()
}
The internal representation for two SimpleMaps generated before and after the change will be different. For example, assuming a set of {c: 1, b: 2, a: 3}
. The behavior before would create a set with an internal vector of the following: {a: 3, b: 2, c: 1}
. After this change, it will be stored as {c: 1, b: 2, a: 3}
. This results in two forms of breaking changes:
The impact of these risks is relatively limited from our knowledge. Using a SimpleMap for equality is an esoteric application that the team has yet to see in the wild. The layout of SimpleMap is not considered at either the API layer or any of the AptosLabs SDKs.
We also see a performance drop for a very large simple_map once there are over 400 properties on-chain.
Timeline
TBD
We enable some select set of structs as valid parameters to entry and view functions.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-25.md
aip:
title: Calling between move contracts
author: evilboyajay
discussions-to:
Status: Draft
last-call-end-date:
type: Framework
created: 2023/01/22
updated:
requires:
This AIP proposes one contract to call another contract directly without importing using the contract address and function.
While writing multisig move contracts on APTOS, I have not been able to dynamically call other smart contracts because of missing calling between the contract function. This limits the ability of smart contracts to interact with other smart contracts, which eventually impacts governance.
This AIP will provide the ability for smart contracts to dynamically call other contracts. This will enable the creation of more complex and modular smart contract systems like multisig contracts that can call other contracts dynamically when all the multi-sig owners sign the transaction. In the transaction, data, accounts, module name, and functions are passed as parameters to the contract.
This is just an example if we make a function call which can call another contract functions, this is how we can make a transaction builder.
public fun call_contract(account: &signer, module_address: address, module_function: u8, parameters: vector<u8>){
call(module_address,module_function,parameters)
}
Here, parameters are the parameter of functions that needs to be passed,
module_address is the address of the contract,
module_function is the name of the function to be called.
https://docs.solana.com/developing/programming-model/calling-between-programs
https://solidity-by-example.org/delegatecall/
To be decided
This can be used in governance systems, where the smart contract can manage the voting power and based on that it can execute the transactions to other modules.
To be decided
Discussion and feedback thread for AIP.
Link to API:
This API proposal seeks to introduce a standard protocol for the verification of Aptos Move smart contract code. This API is designed to ensure the safety and trustworthiness of smart contract code, assisting developers in effectively verifying their code. The standard suggests the common rules that should be followed by all parties building and operating smart contract code verification API servers within the Aptos network.
Read more about it here: < Link to AIP >
This AIP proposes a framework around Token Objects that enables developers to create tokens and collections without writing any Move code. It does this by making decisions on business logic, data layout, and providing entry functions.
Link to AIP: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-22.md
AIP PR: #22
Currently, all transaction fees are burnt on Aptos. This design choice does not motivate validators to prioritise the highest value transactions, as well as use bigger blocks of transactions, leading to lower throughput of the system. For example, a validator can submit an empty block and still get rewarded. We would like to solve this issue by distributing transaction fees to validators. In particular, we want to collect transaction fees for each block, store them, and then redistribute at the end of each epoch.
If transaction fees are redistributed to validators, we will be able to both 1) ensure that the highest value transactions have a priority 2) increase the throughput of the system to gain more performance advantages of the parallel execution.
Enable collection and per-epoch distribution of transaction fees to validators.
In order to keep the system configurable, we have an on-chain parameter which dictates what percentage of transaction fees should be burnt (called burn_percentage
). This way, burning 100% of transaction fees would allow the system to have the same behaviour as it has now. Burning 0% would mean that all transaction fees are collected and distributed to validators. The formula deciding how much to burn and how much to collect is the following:
burnt_amount = burn_percentage * transaction_fee / 100
deposit_amount = transaction_fee - burnt_amount
Based on the discussion with the community, the initial burning percentage can be set and later upgraded via a governance proposal. While it seems like 0% burning would be a reasonable initial parameter value, one has to consider the effects on the tokenomics, e.g. inflation.
Alternative I: Distributing fees every transaction
Recall that we want to distribute transaction fee to the validator which proposed the block. Doing it per transaction has numerous disadvantages:
Updating the balances of validators per each transaction is a bottleneck for the parallel execution engine, as it creates a read-modify-write dependencies. While it can be alleviated by implementing balance as aggregator, it is not possible in the current system.
Validators obtain voting rewards on per-epoch basis, and certain smart contracts can rely on that fact. Having the balance of the validator change per transaction can break this logic and be not compatible.
Alternative II: Distributing fees every block
In order to solve the first disadvantage from Alternative I, we can instead collect into a special balance which uses aggregator to avoid read-modify-write conflicts in parallel execution. This way each transaction in the block updates the aggregator value with the fee, and when the block ends the total value is distributed to the proposer of the block.
Note that this approach does not solve the second disadvantage from Alternative I, which leads to our proposal.
Draft PR: aptos-labs/aptos-core#4967
Algorithm overview
When executing a single transaction in the block, the epilogue puts the gas fees into a special aggregatable coin stored on the system account. In contrast to standard coin, changing the value of aggregatable coin does not cause conflicts during the parallel execution. Aggregatable coin can be "drained" to produce the standard coin.
In the first transaction of the next block (i.e. BlockMetadata
), we process collected fees in the following way:
Note: in the current implementation fees for governance proposals are simply burnt. This is subject to discussion and can change in the future
Decentralization - no impact or changes.
Performance - the usage of aggregator avoids the conflicts in parallel executor.
Security - the added code will go through strict testing and auditing.
Tooling - no impact or changes.
The proposed change allows to control what fraction of the transaction fees is collected and what fraction is burnt. In the future it can be adjusted using a governance proposal.
The change is currently being tested using e2e Move tests and smoke tests. Ideally, it should be enabled in the testnet soon.
A declarative, efficient, and flexible JavaScript library for building user interfaces.
🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.
TypeScript is a superset of JavaScript that compiles to clean JavaScript output.
An Open Source Machine Learning Framework for Everyone
The Web framework for perfectionists with deadlines.
A PHP framework for web artisans
Bring data to life with SVG, Canvas and HTML. 📊📈🎉
JavaScript (JS) is a lightweight interpreted programming language with first-class functions.
Some thing interesting about web. New door for the world.
A server is a program made to process requests and deliver data to clients.
Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.
Some thing interesting about visualization, use data art
Some thing interesting about game, make everyone happy.
We are working to build community through open source technology. NB: members must have two-factor auth.
Open source projects and samples from Microsoft.
Google ❤️ Open Source for everyone.
Alibaba Open Source for everyone
Data-Driven Documents codes.
China tencent open source team.