
clarity-benchmarking's Issues

Smooth outputted cost functions

Smoothing the outputted cost functions could make the costs more approachable from a contract developer's point of view. Smoothing is clearly warranted for functions that really should cost the same, such as arithmetic functions and comparisons.

We should go through and manually smooth once: there are still some changes potentially coming from the benchmarking results, so once those are finalized, we should smooth out some of the results.
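As a sketch, such a manual smoothing pass over the fitted constants could look like the following. The function names, cost values, and groupings here are hypothetical; taking each group's maximum is one conservative choice, so that no operation ends up under-charged.

    # Hypothetical fitted constant costs, keyed by cost function name.
    FITTED_COSTS = {
        "cost_add": 11, "cost_sub": 12, "cost_mul": 13, "cost_div": 14,
        "cost_le": 7, "cost_leq": 8, "cost_ge": 7, "cost_geq": 8,
    }

    # Groups of functions that really should cost the same.
    SMOOTHING_GROUPS = [
        ["cost_add", "cost_sub", "cost_mul", "cost_div"],  # arithmetic
        ["cost_le", "cost_leq", "cost_ge", "cost_geq"],    # comparisons
    ]

    def smooth(costs, groups):
        # Assign every member of a group the group's maximum cost.
        smoothed = dict(costs)
        for group in groups:
            ceiling = max(costs[name] for name in group)
            for name in group:
                smoothed[name] = ceiling
        return smoothed

    print(smooth(FITTED_COSTS, SMOOTHING_GROUPS))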

Annotate Jupyter

Add more information to the graphs in the notebook: label the graph axes, and label which functions are transformed.
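A minimal sketch of the kind of annotation this asks for, using matplotlib in the notebook (the function name and sample points are hypothetical):

    import matplotlib.pyplot as plt

    def plot_cost(ax, xs, ys, fn_name, transformed=False):
        # Plot one benchmarked cost function with labeled axes, and mark
        # in the title whether the values were transformed before fitting.
        ax.plot(xs, ys, marker="o")
        ax.set_xlabel("input size")
        ax.set_ylabel("runtime cost")
        ax.set_title(fn_name + (" (transformed)" if transformed else ""))

    fig, ax = plt.subplots()
    plot_cost(ax, [1, 2, 4, 8], [10, 19, 41, 80], "cost_list_cons")
    plt.show()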

Functions that should be linear are tested as constants

These functions are currently measured as constants in the benchmarking suite:

cost_fetch_entry
cost_set_entry
cost_fetch_var
cost_set_var
cost_list_cons
cost_index_of
cost_hash160
cost_sha256
cost_sha512
cost_sha512t256
cost_keccak256
cost_print

I think they should be measured as linear (in PR #15, list_cons is changed to linear).
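One way to sanity-check the shape before re-measuring is to fit both models to the benchmark samples and see whether the linear fit buys a substantial error reduction; a line always fits at least as well as a constant, so a naive comparison would always pick linear. A rough sketch with hypothetical data, not part of the existing analysis scripts:

    import numpy as np

    def suggest_shape(sizes, runtimes, improvement=0.5):
        # Prefer a linear model only if it reduces squared error by a
        # substantial margin over the best constant model.
        xs = np.asarray(sizes, dtype=float)
        ys = np.asarray(runtimes, dtype=float)
        const_err = np.sum((ys - ys.mean()) ** 2)
        slope, intercept = np.polyfit(xs, ys, 1)
        lin_err = np.sum((ys - (slope * xs + intercept)) ** 2)
        if const_err == 0.0:
            return "constant"
        return "linear" if lin_err < (1 - improvement) * const_err else "constant"

    # cost_sha256's runtime should grow with the hashed input's length:
    print(suggest_shape([32, 64, 128, 256], [50, 92, 171, 330]))  # "linear"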

Reduce costs of `contract-of` function

Currently the cost of executing contract-of is static and set to 1000; in the proposed costs it has been set to 45847. I believe both values are way too high.

contract-of takes a Trait input and unwraps it to a Principal.
In reality it is impossible to store a Trait in a variable, so it must be passed in from the outside world or hard-coded in the contract code. In addition, Clarity has no Trait data type; it is an internal data type used only by this function.

For example, with code like this:

(use-trait token-a-trait 'SPAXYA5XS51713FDTQ8H94EJ4V579CXMTRNBZKSF.token-a.token-trait)
(define-public (do-something (contract <token-a-trait>))
  (begin
    (ok (contract-of contract))))

we call the do-something function using:

(contract-call? .myContract do-something 'SP000000000000000000002Q6VF78.my-token)

The principal SP000000000000000000002Q6VF78.my-token is wrapped internally by the Clarity VM to satisfy the Trait datatype, and then unwrapped back to a Principal by contract-of.

Because of that, I think it should be as cheap as any other unwrapping function, if not cheaper.

Database operations should use a MemoryBackingStore rather than a MARF

Database operations should not use an actual MARF backend in the benchmarks: these operations are already charged for their MARF operations by being assessed a MARF read or write in those cost dimensions. The runtime component of their cost should not incorporate any MARF overhead.

These operations are:

cost_at_block
cost_create_ft
cost_block_info
cost_stx_balance
cost_stx_transfer
cost_ft_mint
cost_ft_transfer
cost_ft_balance
cost_ft_get_supply
cost_ft_burn
poison_microblock
cost_analysis_storage
cost_analysis_use_trait_entry
cost_analysis_get_function_entry
cost_load_contract
cost_create_map
cost_create_var
cost_create_nft
cost_fetch_entry
cost_set_entry
cost_fetch_var
cost_set_var
cost_contract_storage
cost_nft_mint
cost_nft_transfer
cost_nft_owner
cost_nft_burn

Add docs for input size unit

For each of the generator functions, we now return input size as a field of the output.

In order to avoid having to cross-reference the stacks-blockchain repo, we should add a comment to each generator function stating the unit of the input size (e.g., size of a type signature, length of a contract).

Benchmark "input size" needs to be scaled to their clarity size

Clarity cost functions are typically fed the sum of the input arguments' object sizes. For example, in https://github.com/blockstack/stacks-blockchain/blob/master/src/vm/functions/sequences.rs#L45:

    let mut arg_size = 0;
    for a in args.iter() {
        arg_size = arg_size.cost_overflow_add(a.size().into())?;
    }

    runtime_cost(ClarityCostFunction::ListCons, env, arg_size)?;

However, the input size in the benchmarks refers to different units (e.g., an input size of 10 may mean 10 integers). When digesting the benchmark results, the analysis scripts should scale the x dimension appropriately.
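A sketch of the rescaling meant here, assuming the benchmark's x values count elements of a known type; the per-type byte sizes below are assumptions to cross-check against Value::size() in the stacks-blockchain repo (Clarity ints are 128-bit, hence 16 bytes):

    # Assumed per-element Clarity value sizes in bytes; verify against
    # Value::size() in the stacks-blockchain repo.
    CLARITY_SIZE = {"int": 16, "uint": 16, "bool": 1}

    def scale_input_sizes(sample_points, element_type):
        # Convert x values that count elements (e.g. "10" meaning ten
        # integers) into the summed argument size, in bytes, that the
        # cost function is actually keyed on.
        per_element = CLARITY_SIZE[element_type]
        return [(n * per_element, y) for (n, y) in sample_points]

    # Ten ints benchmarked at runtime 120 becomes x = 160 bytes:
    print(scale_input_sizes([(10, 120), (20, 245)], "int"))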
