stacks-network / clarity-benchmarking
License: MIT License
Smoothing could improve the approachability of the costs from a contract developer's point of view. Smoothing is definitely a good idea for functions that really should be identical, like arithmetic functions and comparisons.
We should go through and manually smooth "once": there are still some changes potentially coming from the benchmarking results, so once those land, we should smooth out some of the results.
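A minimal sketch of what that one-time pass could look like, assuming the fitted results are available as a map from cost-function name to linear (a, b) coefficients; the function name and the grouping are illustrative, not the repo's actual code:
use std::collections::HashMap;

// Illustrative sketch: give every cost function in a group (e.g. the arithmetic
// and comparison ops) the same coefficients, so contracts see identical costs
// for operations that really should be identical.
fn smooth_group(fitted: &mut HashMap<String, (u64, u64)>, group: &[&str]) {
    // Take the element-wise maximum over the group so no function ends up undercharged.
    let (mut a_max, mut b_max) = (0u64, 0u64);
    for name in group {
        if let Some((a, b)) = fitted.get(*name) {
            a_max = a_max.max(*a);
            b_max = b_max.max(*b);
        }
    }
    for name in group {
        fitted.insert((*name).to_string(), (a_max, b_max));
    }
}
Taking the maximum is only one possible choice; averaging would also work if slightly undercharging the slowest member of the group is acceptable.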
Add more information to the graphs in the notebook: label the graph axes and note which functions are transformed.
These functions are currently measured as constants in the benchmarking suite:
cost_fetch_entry
cost_set_entry
cost_fetch_var
cost_set_var
cost_list_cons
cost_index_of
cost_hash160
cost_sha256
cost_sha512
cost_sha512t256
cost_keccak256
cost_print
I think they should be measured as linear (in PR #15, list_cons is changed to linear).
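For reference, the two shapes in question, written in the usual runtime = a * n + b form; these helpers are illustrative, not the cost functions' actual implementation:
// Constant cost: the measured input size n is ignored.
fn constant_cost(_n: u64, b: u64) -> u64 {
    b
}

// Linear cost: runtime grows with the input size n.
fn linear_cost(n: u64, a: u64, b: u64) -> u64 {
    a.saturating_mul(n).saturating_add(b)
}
Moving a function from the first shape to the second means the benchmark has to sweep n and fit both a and b instead of reporting a single number.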
Right now the Clarity interpreter is invoked directly, meaning the clarity-wasm runtime is never used. In order to benchmark clarity-wasm, we need to run Clarity contracts through the full clarity-wasm execution path instead.
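A rough Criterion skeleton of the comparison; both helpers below are hypothetical stand-ins with their bodies omitted (the real clarity-wasm entry points are not shown here), the point being only that the timed closure has to go through the wasm runtime rather than the interpreter:
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical stand-ins for the two execution paths; bodies intentionally omitted.
fn run_via_interpreter(_src: &str) { /* direct interpreter evaluation */ }
fn run_via_wasm_runtime(_src: &str) { /* full clarity-wasm execution path */ }

fn bench_execution_paths(c: &mut Criterion) {
    let src = "(+ u1 u2)";
    c.bench_function("interpreter", |b| b.iter(|| run_via_interpreter(src)));
    c.bench_function("clarity-wasm", |b| b.iter(|| run_via_wasm_runtime(src)));
}

criterion_group!(benches, bench_execution_paths);
criterion_main!(benches);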
Currently the cost of executing contract-of is static and set to 1000. In the proposed costs it has been set to 45847. I believe both values are way too high.
contract-of takes a Trait input and unwraps it to a Principal.
In reality it is impossible to store a Trait as a variable, so it has to be passed in from the outside world or hard-coded in the contract code. In addition to that, we don't have a Trait data type in Clarity - it is an internal data type used only by this function.
For example, with code like this:
(use-trait token-a-trait 'SPAXYA5XS51713FDTQ8H94EJ4V579CXMTRNBZKSF.token-a.token-trait)
(define-public (do-something (contract <token-a-trait>))
  (begin
    (ok (contract-of contract))))
we call the do-something function using:
(contract-call? .myContract do-something 'SP000000000000000000002Q6VF78.my-token)
The principal SP000000000000000000002Q6VF78.my-token is wrapped internally by the Clarity VM to satisfy the Trait datatype, and then unwrapped back to a principal by contract-of.
Because of that, I think it should be as cheap as any other unwrapping function, if not cheaper.
The function introduced here needs to be benchmarked: stacks-network/stacks-core#3294
Database operations should not use an actual MARF backend in the benchmarks: these operations are already charged for their MARF operations by being assessed a MARF read or write in those cost dimensions. The runtime component of their cost should not incorporate any MARF overhead (see the sketch after the list below).
These operations are:
cost_at_block
cost_create_ft
cost_block_info
cost_stx_balance
cost_stx_transfer
cost_ft_mint
cost_ft_transfer
cost_ft_balance
cost_ft_get_supply
cost_ft_burn
poison_microblock
cost_analysis_storage
cost_analysis_use_trait_entry
cost_analysis_get_function_entry
cost_load_contract
cost_create_map
cost_create_var
cost_create_nft
cost_fetch_entry
cost_set_entry
cost_fetch_var
cost_set_var
cost_contract_storage
cost_nft_mint
cost_nft_transfer
cost_nft_owner
cost_nft_burn
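For context, a data operation is charged in several dimensions at once; the struct below mirrors the five cost dimensions tracked by the stacks-blockchain ExecutionCost type, and the field values in the example are purely illustrative:
// The five cost dimensions Clarity tracks per execution. A map or FT operation is
// already assessed its MARF work in the read/write count and length dimensions,
// so the benchmark for its runtime component should run against an in-memory
// store and measure only the `runtime` dimension.
#[allow(dead_code)]
struct ExecutionCost {
    runtime: u64,
    write_length: u64,
    write_count: u64,
    read_length: u64,
    read_count: u64,
}

fn main() {
    // Illustrative assessment of a single map-set style operation.
    let _set_entry = ExecutionCost {
        runtime: 0,       // what the benchmark measures, with no MARF overhead
        write_length: 64, // illustrative size of the written value
        write_count: 1,   // one MARF write, charged separately from runtime
        read_length: 0,
        read_count: 0,
    };
}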
For each of the generator functions, we now return input size as a field of the output.
In order to avoid having to cross-reference the stacks-blockchain repo, we should add comments to each generator function giving the unit of the input size (e.g., size of type signature, length of contract, etc.).
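Something along these lines would do; the generator name, signature, and return shape here are illustrative rather than the suite's actual ones:
/// Hypothetical generator: builds a `concat` benchmark expression (assumes scale >= 1).
/// Input size unit: total number of elements across the two input lists.
fn gen_concat(scale: u64) -> (String, u64) {
    let items: Vec<String> = (0..scale).map(|i| format!("u{}", i)).collect();
    let list = items.join(" ");
    // Two lists of `scale` elements each, so the reported input size is 2 * scale.
    (format!("(concat (list {}) (list {}))", list, list), 2 * scale)
}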
Fix the commented-out estimation functions in the notebook.
Clarity cost functions typically sum the input arguments' object sizes to produce the input to the cost function. For example, in https://github.com/blockstack/stacks-blockchain/blob/master/src/vm/functions/sequences.rs#L45:
let mut arg_size = 0;
for a in args.iter() {
    arg_size = arg_size.cost_overflow_add(a.size().into())?;
}
runtime_cost(ClarityCostFunction::ListCons, env, arg_size)?;
However, the input size in the benchmarks refers to different units (e.g., an input size of 10 may mean 10 integers). When digesting the benchmark results, the analysis scripts should scale the x dimension appropriately.
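For example (assuming a.size() reports 16 bytes for an int value), a list_cons call over 10 int arguments feeds arg_size = 10 * 16 = 160 into ClarityCostFunction::ListCons, while the benchmark would record an input size of 10; the notebook would then need to multiply that x value by 16 before fitting or comparing against the proposed cost function.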