algorand / graviton
🧑‍🔬 verify your TEAL program by experiment and observation
License: MIT License
Like algorand/pyteal#610 and algorand/py-algorand-sdk#408, graviton bundles extra artifacts (e.g. tests) into its source distribution. This story's request is to limit the source distribution to the graviton package itself.
The goal clerk dryrun command (as opposed to goal clerk dryrun-remote) is in theory capable of running dry runs without an actual node running. If we use this functionality, we could potentially save most of the C.I. startup time in this repo (and in pyteal as well). In theory, this would work as follows:
- add an offline option to executions: e.g. DryRunExecutor.execute_one_dryrun(offline=True, ...)
- when offline is set, shell out to goal clerk dryrun ... instead of hitting a node
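A hypothetical sketch of the shell-out path, assuming goal's -t/--txfile flag for passing a signed transaction file (verify against the installed goal version):

```python
import subprocess

# Hypothetical sketch of the offline path: build and run a `goal clerk dryrun`
# command instead of POSTing to a node. The -t/--txfile flag is an assumption
# based on goal's CLI; confirm it before relying on this.
def build_offline_dryrun_command(txn_file: str) -> list[str]:
    return ["goal", "clerk", "dryrun", "-t", txn_file]

def run_offline_dryrun(txn_file: str) -> str:
    result = subprocess.run(
        build_offline_dryrun_command(txn_file),
        capture_output=True,
        text=True,
        check=True,  # raise if goal exits non-zero
    )
    return result.stdout
```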
blackbox.DryRunInspector
PySDK's PR #283 introduces DryRunInspector, which is very similar in scope to blackbox.DryRunTransactionResult.
The two classes should be unified.
One possible approach is to rename the existing class and subclass PySDK's as follows:
class DryRunInspector(algosdk.DryRunTransactionResult):
...
Quoting @michaeldiamant:
... we ought to disambiguate steps needed for act.
In order to get act to pass, I sourced various dependencies that I believe are not needed for github.
Disambiguation here means creating 1 or more steps dedicated to act dependencies. Goal is to make it easier to maintain the build.
I think the following dependencies are act-specific, though it requires a round of testing:
- curl
- nodejs
- python3-pip
- Docker install (e.g. docker-ce)
- Setup docker-compose
My own thoughts:
- build.yml should refer back to our Makefile
- make commands that have act as their suffix (e.g. make dependencies-act)
- make install-for-github v. make install-for-act that only differ in one command that runs only locally

See below for a list of DryRunExecutor methods that allow executing dry runs. It is cumbersome and confusing to have so many methods that basically do the same thing. Ideally, it would be nice to have a single method with the following API:
class DryRunExecutor:
    @singledispatch
    @classmethod
    def execute(
        cls,
        algod: AlgodClient,
        program: str,
        input: Any,
        *,
        mode: ExecutionMode = ExecutionMode.Application,
        abi_method_signature: Optional[str] = None,
        # ... several more params such as sender, sp, is_app_create, accounts ...
    ):
        raise NotImplementedError("Implement execute method")
This makes use of functools' singledispatch decorator.
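For illustration, a minimal singledispatchmethod-based dispatcher (not graviton's actual API) showing how input's type could select the implementation:

```python
from functools import singledispatchmethod

# Minimal illustration (not graviton's API): dispatch on the type of the first
# non-self argument, so a scalar runs one dry run and a list runs a sequence.
class Executor:
    @singledispatchmethod
    def execute(self, input):
        raise NotImplementedError(f"unsupported input type: {type(input)}")

    @execute.register
    def _(self, input: int):
        return f"one dryrun on {input}"

    @execute.register
    def _(self, input: list):
        return [self.execute(item) for item in input]
```

Note that singledispatchmethod dispatches only on the first argument after self/cls, which is one reason the nested input shapes flagged below (list[tuple], etc.) might force a match-and-delegate approach instead.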
Currently, graviton's dry run executions are inconsistent in how ABI information is handled. Some allow abi_argument_types and abi_return_types to be provided, while some also allow abi_method_signature to be provided. The last parameter includes all the information that abi_argument_types and abi_return_types provide, so it is confusing to allow all of these parameters at once.
After an investigation into the usage of dry run execution in PyTeal, it became apparent that in all relevant situations, an ABIReturnSubroutine
object is available for inspection, and it comes with a method method_signature()
which could be used to populate an abi_method_signature
parameter for dry run execution.
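To illustrate why the signature alone suffices, here is a naive parser (illustrative only; nested ABI tuple types need the py-sdk's real method parsing):

```python
import re

# Illustrative only: split "name(arg1,arg2)return" into its parts. Nested tuple
# types such as "(uint64,byte[])" would break the naive comma split, so real
# code should rely on the SDK's ABI method parsing instead.
def parse_method_signature(sig: str) -> tuple[str, list[str], str]:
    match = re.fullmatch(r"(\w+)\((.*)\)(\S*)", sig)
    if match is None:
        raise ValueError(f"not a method signature: {sig}")
    name, args, ret = match.groups()
    return name, args.split(",") if args else [], ret
```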
Streamline the dry run execution API by refactoring the dry run executing methods as follows:
- abi_argument_types
- abi_return_type
- abi_method_signature
- an execute() function using @singledispatch and @process.register to dispatch via input's type (Further investigation is required here. It may not be possible given the nesting of possible inputs (list[tuple], etc.). We may also prefer a multipledispatch approach taking into account the mode parameter's type. It may also be infeasible to dispatch directly, and therefore we might need to match directly and delegate.)
- a run() method, with 2 companion convenience methods run_one() and run_sequence()
- supress_abi (not done)

Methods that currently allow executing dry runs:
- DryRunExecutor.execute_one_dryrun()
- DryRunExecutor.dryrun_logicsig()
- DryRunExecutor.dryrun_app()
- DryRunExecutor.dryrun_logicsig_on_sequence()
- DryRunExecutor.dryrun_multiapps_on_sequence()
- DryRunExecutor.dryrun_app_pair_on_sequence()
- ABIContractExecutor.dryrun_app_on_sequence()
@barnjamin introduces Dry Run to Atomic Transaction Composer in PY-SDK PR #278
To better compose with existing functionality, we should refactor the recent work introduced in #21 so as to leverage the "official way" of running dry run for Atomic Transaction Composers.
#27 introduces badges for "powered by algorand" and for visitor counts. It would be great to add additional badges for:
As you can see in the full printout below, when we call invariant.validates()
on an invariant that was defined
by a 2-variable predicate, we get an inscrutable expected
description.
This invariant was defined via:
predicate = (lambda args, actual: "PASSE" == actual if args[0] else True)
name = 'DryRunProperty.status'
invariant = Invariant(predicate, name=name)
and spat out the following msg
:
<<<<<<<<<<<Invariant for 'DryRunProperty.status' failed for for args (2,): actual is [PASS] BUT expected [<function test_exercises.<locals>.<lambda> at 0x1082557e0>]>>>>>>>>>>>
E AssertionError: ===============
E <<<<<<<<<<<Invariant for 'DryRunProperty.status' failed for for args (2,): actual is [PASS] BUT expected [<function test_exercises.<locals>.<lambda> at 0x1082557e0>]>>>>>>>>>>>
E ===============
E App Trace:
E step | PC# | L# | Teal | Scratch | Stack
E --------+-------+------+-------------------+-----------+----------------------
E 1 | 1 | 1 | #pragma version 6 | | []
E 2 | 2 | 2 | arg_0 | | [0x0000000000000002]
E 3 | 3 | 3 | btoi | | [2]
E 4 | 7 | 6 | label1: | | [2]
E 5 | 9 | 7 | store 0 | 0->2 | []
E 6 | 11 | 8 | load 0 | | [2]
E 7 | 13 | 9 | pushint 2 | | [2, 2]
E 8 | 14 | 10 | exp | | [4]
E 9 | 6 | 4 | callsub label1 | | [4]
E 10 | 15 | 11 | retsub | | [4]
E ===============
E MODE: ExecutionMode.Signature
E TOTAL COST: None
E ===============
E FINAL MESSAGE: PASS
E ===============
E Messages: ['PASS']
E Logs: []
E ===============
E -----BlackBoxResult(steps_executed=10)-----
E TOTAL STEPS: 10
E FINAL STACK: [4]
E FINAL STACK TOP: 4
E MAX STACK HEIGHT: 2
E FINAL SCRATCH: {0: 2}
E SLOTS USED: [0]
E FINAL AS ROW: {'steps': 10, ' top_of_stack': 4, 'max_stack_height': 2, 's@000': 2}
E ===============
E Global Delta:
E []
E ===============
E Local Delta:
E []
E ===============
E TXN AS ROW: {' Run': 1, ' cost': None, ' last_log': '`None', ' final_message': 'PASS', ' Status': 'PASS', 'steps': 10, ' top_of_stack': 4, 'max_stack_height': 2, 's@000': 2, 'Arg_00': 2}
E ===============
E <<<<<<<<<<<Invariant for 'DryRunProperty.status' failed for for args (2,): actual is [PASS] BUT expected [<function test_exercises.<locals>.<lambda> at 0x1082557e0>]>>>>>>>>>>>
E ===============
The fix will include this PR, which will improve typing in this repo.
Hi, found your project on the Algorand Discord and it's made my testing significantly easier, great job and thanks! I hope it will be fully integrated into PyTeal very soon.
While incorporating it into my own unit tests, I wrote an additional wrapper around the BlackboxWrapper that you included in the PyTeal repository to skip some of the steps that I was doing repeatedly, like compiling the TEAL, arranging inputs correctly, casting to numeric types etc.
This wrapper allows you to write tests like this:
import pyteal as pt
@SubroutineRunner(input_types=[pt.TealType.uint64, pt.TealType.uint64])
@pt.Subroutine(pt.TealType.uint64)
def uint64_subr(x: pt.Expr, y: pt.Expr) -> pt.Expr:
return pt.Div(x, y)
@SubroutineRunner(input_types=[pt.TealType.uint64, pt.TealType.uint64])
@pt.Subroutine(pt.TealType.bytes)
def bytes_subr(x: pt.Expr, y: pt.Expr) -> pt.Expr:
return pt.BytesDiv(pt.BytesMul(pt.Itob(x), pt.Bytes("base16", "0x03e8")), pt.Itob(y))
x, y = [8, 10, 50], [2, 9, 10]
for input, output in uint64_subr.run_sequence(y=y, x=x).inout:
assert output == input["x"] // input["y"]
for input, output in bytes_subr.run_sequence(x=x, y=y).inout:
assert output == (1000 * input["x"]) // input["y"]
...which I think could be of interest to others. It automatically takes care of mapping keyword arguments to the correct order when calling the subroutine; calls stack_top() for uint64 and last_log() for bytes and converts to Python ints; and calls the dry run for apps or logicsigs depending on an optional pt.Mode argument in the decorator. Otherwise it is a pretty simple extra wrapper that just gives a shorter path from the subroutine to the dry runs.
The PyTeal repository doesn't seem like the place to share it, since right now graviton is only used within the tests, so I don't know where I could do a pull request, as it depends on code I found within the PyTeal repo. In case you think it's of any interest now or in the future, just let me know where you think is the best place to share the code, otherwise feel free to close it. :)
There's lots more to write here. But start with TASK 1:
Since #33 we have better visibility into program budget via budget_added and budget_consumed. However, these weren't added to the report() function. E.g., part of a recent report had a TXN AS ROW missing this info:
TXN AS ROW: {' Run': 0, ' cost': -2009, ' last_log': '`0000000000011950', ' final_message': 'PASS', ' Status': 'PASS', 'steps': 89, ' top_of_stack': 72016, 'max_stack_height': 3, 's@000': 72016, 's@001': 3}
Add budget_added and budget_consumed into csv_row()
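A hedged sketch of what the csv_row() addition might look like (the attribute names are assumptions, not graviton's actual internals):

```python
from types import SimpleNamespace

# Hypothetical sketch: extend a csv_row-style dict with the two budget fields.
# Attribute names here are assumptions about the transaction result object.
def csv_row(txn) -> dict:
    return {
        " Run": txn.run,
        " cost": txn.cost,
        "steps": txn.steps,
        "budget_added": txn.budget_added,
        "budget_consumed": txn.budget_consumed,
    }
```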
Since the main consumer of this repo is PyTeal, and PyTeal requires mypy and black, this repo should follow similar conventions and processes.
go-algorand's Dry Run
As of go-algorand PR #3957, the dry run response transaction top-level field cost is being deprecated. There is one usage of this field in non-deprecated graviton code.
The good news is that new, better fields BudgetConsumed and BudgetAdded are being introduced in the same PR, and that the cost can be calculated as
net cost = BudgetConsumed - BudgetAdded
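The formula above can be sketched as a small helper, with a fallback for responses that predate the new fields (the key names are assumptions about the response dict shape):

```python
# Hypothetical sketch: prefer the new budget fields, falling back to the
# deprecated top-level `cost` when a response predates go-algorand PR #3957.
# The key names are assumptions, not a confirmed response schema.
def net_cost(txn_result: dict) -> int:
    if "budget-consumed" in txn_result:
        return txn_result["budget-consumed"] - txn_result.get("budget-added", 0)
    return txn_result["cost"]
```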
- Calculate cost with the net cost formula above and stop referencing the original cost field
- Expose BudgetConsumed and BudgetAdded in the same way that cost is

Graviton was introduced into PyTeal's C.I. process in its PR #249. In that PR, a docker image of our Sandbox algod was introduced. This approach is significantly faster and more lightweight than the approach taken in this repo (building the sandbox from scratch each time, while leveraging Github Action docker layer caching). We should:
Modify Makefile
and .github/workflows/build.yml
to mimic those of PyTeal with regards to running the integration tests.
pip install graviton?
Currently, to install graviton locally you have to clone with:
git clone https://github.com/algorand/graviton.git
or install straight from GitHub with:
pip install git+https://github.com/algorand/graviton
Graviton should be available as most other python libraries are.
Some points to consider:
There are various improvements that can be made.
Of note is the current docker-compose setup, which probably is not necessary according to this list of Ubuntu image installed software.
This is a follow up to a discussion on #27 screenshot here:
Lingering TODO from #21: when pay / axfr / ... transactions are too far away (compared to the number of such args in the method signature and/or foreign array), the program is rejected with an error.

Currently, dry runs return program execution traces which are exposed in the Inspector class via its report()
method. Thus, in principle, it should be possible to compute statistics on which lines of associated TEAL program were executed over a sequence of dry runs. With such statistics in hand, coverage reports are therefore possible.
Flesh this issue out with deliverables and example code coverage frameworks that can serve as inspiration.
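As a sketch of what such a statistic could look like (the trace shape is assumed: a list of executed TEAL source line numbers per dry run, as in the L# column of report()):

```python
# Sketch only: given, for each dry run, the list of TEAL source lines that its
# trace visited, compute the fraction of program lines covered by the sequence.
def line_coverage(executed_lines_per_run: list[list[int]], total_lines: int) -> float:
    executed = {line for run in executed_lines_per_run for line in run}
    return len(executed) / total_lines
```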
There are several type aliases in the repo. They can be marked with TypeAlias. E.g.:
from typing import TypeAlias, Union  # TypeAlias is in typing from Python 3.10+ (typing_extensions before that)

MyCompoundType: TypeAlias = Union[str, int, None]
There are a bunch of D.R.Y. violations in these methods. This deserves a re-evaluation with a bias towards removing the extract* methods if possible.