Defined appropriately, the Pedersen hash satisfies a composition property H(a || b) = H(a) · H(b). This is used at various points for efficiency, but in an ad hoc manner. There should be some abstraction that captures this nicely.
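One way to capture the property is as a monoid homomorphism: hashing sends concatenation of inputs to the group operation on digests. A minimal sketch of such an abstraction, with hypothetical module names (this is not snarky's actual API):

```ocaml
(* A hash that is a homomorphism from inputs-under-concatenation
   to a group, i.e. hash (a @ b) = hash a * hash b.
   Module names (Group, Homomorphic_hash) are illustrative. *)

module type Group = sig
  type t
  val one : t                 (* identity: the hash of the empty input *)
  val ( * ) : t -> t -> t     (* group operation on digests *)
end

module type Homomorphic_hash = sig
  type bits
  module G : Group

  val hash : bits -> G.t
  (* Law: hash (append a b) = G.( * ) (hash a) (hash b).
     Callers can then combine precomputed digests instead of
     re-hashing a concatenation from scratch. *)
end
```

Code that currently exploits the property ad hoc could be written against this signature, so the law lives in one place instead of being re-derived at each call site.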
Right now in snarky there are a bunch of different interpreters, which all essentially do the same recursion over the AST. I think these should be unified to reduce bug surface area.
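One way to unify them: write the recursion once as a fold, and express each interpreter as an algebra handed to that fold. A sketch, using a stand-in AST rather than snarky's actual one:

```ocaml
(* Stand-in AST: + and * with variable indices at the leaves. *)
type 'f cvar =
  | Var of int
  | Constant of 'f
  | Add of 'f cvar * 'f cvar
  | Mul of 'f cvar * 'f cvar

(* An "interpreter" is just a choice of algebra. *)
type ('f, 'a) algebra =
  { var : int -> 'a
  ; constant : 'f -> 'a
  ; add : 'a -> 'a -> 'a
  ; mul : 'a -> 'a -> 'a
  }

(* The recursion, written exactly once. *)
let rec fold (alg : ('f, 'a) algebra) (t : 'f cvar) : 'a =
  match t with
  | Var i -> alg.var i
  | Constant c -> alg.constant c
  | Add (x, y) -> alg.add (fold alg x) (fold alg y)
  | Mul (x, y) -> alg.mul (fold alg x) (fold alg y)

(* Example: an evaluator is one algebra... *)
let eval lookup =
  fold { var = lookup; constant = Fun.id; add = ( +. ); mul = ( *. ) }

(* ...and a size-counter is another, with no new recursion to get wrong. *)
let size t =
  fold
    { var = (fun _ -> 1); constant = (fun _ -> 1)
    ; add = (fun a b -> a + b + 1); mul = (fun a b -> a + b + 1) }
    t
```

Each interpreter then carries only its per-node logic; the traversal (and its bugs) exists in one place.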
Right now witness generation is fairly slow. I suspect a lot of time is spent evaluating Cvar.ts, which are just ASTs of + and * with indices into an array at the leaves. There is sharing between Cvar.ts, so this should be reflected by sharing the intermediate results of computation.
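Concretely, sharing could be exploited by memoizing evaluation on physical identity, so a subterm reachable from several parents is evaluated once. A sketch against a stand-in AST (not snarky's actual `Cvar.t`):

```ocaml
(* Stand-in for Cvar.t: + and * with variable indices at the leaves. *)
type t =
  | Var of int
  | Add of t * t
  | Mul of t * t

(* Evaluate, caching results per physical node so shared subterms
   are computed once. Keys compare with ( == ), so structurally
   equal but physically distinct nodes are still evaluated separately. *)
let eval (lookup : int -> int) (root : t) : int =
  let module H = Hashtbl.Make (struct
    type nonrec t = t
    let equal = ( == )           (* physical identity *)
    let hash = Hashtbl.hash      (* consistent with ( == ) *)
  end) in
  let cache = H.create 16 in
  let rec go node =
    match H.find_opt cache node with
    | Some v -> v
    | None ->
      let v =
        match node with
        | Var i -> lookup i
        | Add (x, y) -> go x + go y
        | Mul (x, y) -> go x * go y
      in
      H.add cache node v;
      v
  in
  go root
```

With this, a DAG that prints as an exponentially large tree is evaluated in time proportional to its number of distinct nodes.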
Implicit coercions of Boolean.var to Cvar.t are ultimately, I think, bad, because the coercion leaves implicit the mapping between booleans and field elements. It would be better to have a function Boolean.to_zero_one : Boolean.var -> Cvar.t.
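A sketch of what that interface could look like, with the 0/1 encoding stated at the one place it is decided (signatures illustrative, not snarky's actual API):

```ocaml
(* Proposed shape: the boolean -> field-element mapping is named
   explicitly instead of hidden in a coercion. *)
module type S = sig
  module Cvar : sig
    type t
  end

  module Boolean : sig
    type var

    (* Explicitly interpret a boolean as a field element:
       true |-> 1, false |-> 0. Call sites then say what
       encoding they rely on. *)
    val to_zero_one : var -> Cvar.t
  end
end
```

If a different encoding were ever wanted (say 1/-1), it would get its own named function rather than silently changing the meaning of every coercion site.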
Logs are hard to trace when every number is 300 (soon to be 800) bits long. When tracing logs to understand behavior, we should have a switch in logproc to hide these details.
Right now every process that does proving has a separate copy of the proving keys, so memory usage is several times higher than it needs to be. We should change this so the keys are shared.
Transition hashes should probably just go in the state so that states have computationally unique histories. As is, there is "malleability" in what the nonce is, since it's not included in the state.
I suggest getting rid of write_or_drop, as its use can be error-prone: there is not necessarily any relationship between the writer and reader provided.
Have a system like "number" which unifies packed and unpacked variables. It's annoying to manually track which things are packed and which are unpacked. It would be better to have lazy evaluation for unpacking, so that repeated unpacking is free.
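A minimal sketch of such a type: carry the packed form plus a lazily computed unpacked form, so unpacking is forced at most once no matter how many times it is requested. `Cvar`, `Boolean`, and `unpack` are hypothetical stand-ins:

```ocaml
(* A "number" carrying both representations; unpacking is memoized
   by Lazy, so only the first force pays for it. *)
module Number (Cvar : sig type t end) (Boolean : sig type var end) =
struct
  type t =
    { packed : Cvar.t
    ; unpacked : Boolean.var list Lazy.t  (* forced at most once *)
    }

  (* unpack is whatever circuit/witness operation splits a field
     element into bits; it is a parameter here, not a real API. *)
  let of_packed ~(unpack : Cvar.t -> Boolean.var list) packed =
    { packed; unpacked = lazy (unpack packed) }

  let packed t = t.packed
  let unpacked t = Lazy.force t.unpacked   (* free after first call *)
end
```

Callers then pass `Number.t` around without caring which representation a downstream consumer needs; the first consumer of each representation pays for it, and everyone after rides free.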
To do so:
[ ] Move the c-bindings to a separate project with libsnark as a submodule
[ ] Inline those c-bindings into our monorepo, make libsnark a submodule (recursively)
We have write_or_drop (which prefers earlier things).
We have write_or_exn (which dies if data gets backed up).
We don't have the "prefer later things" notion. What we'd want here is some way to cancel downstream tasks that haven't finished yet, to unblock the pipes.
Specifically, we may want to use this when we're too slow at fetching ledgers but fast at generating ledger hashes for the ledger-fetcher, since that slowness prevents the miner from getting the most up-to-date information fast enough.
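The "prefer later things" primitive above could be sketched as a single-slot cell that overwrites older unconsumed values and invalidates a token handed to whatever downstream work was started for them. This is plain OCaml illustrating the shape, not Async's actual pipe API:

```ocaml
(* A one-slot "latest wins" cell. Each value comes with a
   "still current?" token; writing a newer value flips the old
   token to false so stale downstream work can abort early. *)
module Latest_cell = struct
  type 'a t = { mutable slot : ('a * bool ref) option }

  let create () = { slot = None }

  (* Prefer-later write: drop any unconsumed older value and
     signal (via its token) that work on it should stop. *)
  let write t v =
    (match t.slot with
     | Some (_, current) -> current := false
     | None -> ());
    t.slot <- Some (v, ref true)

  (* Consume the newest value, if any, along with its token.
     Downstream tasks poll the token and bail out once it is false. *)
  let read t =
    match t.slot with
    | None -> None
    | Some (v, current) ->
      t.slot <- None;
      Some (v, current)
end
```

In the ledger-fetcher scenario, each ledger-hash would be written here; a slow in-flight ledger fetch for a superseded hash would see its token go false and abandon the work, unblocking the pipeline for the newest hash.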