perlin-network / wavelet
Write once, run forever. Deploy robust, scalable, decentralized WebAssembly applications on Wavelet.
Home Page: https://wavelet.perlin.net
License: MIT License
There is no proper shutdown for the ledger as of now, which is why in tests we currently shut down the gRPC server and wait for some amount of time. This makes the tests non-deterministic, since a lot of requests are still trying to access nodes, and port collisions sometimes happen.
There are a number of attacks that can be made if we assume such a relationship.
Therefore, there should only exist transaction senders that create transactions that affect their own account.
A set percentage of the value of a transaction (specifically for transfer transactions only) should be deducted as a transaction fee from the transaction's sender.
Just for testing's sake, the set percentage should be 5%.
Once that is done, we should then award this 5% transaction fee to a validator that deserves it as a reward.
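As a minimal sketch of the deduction described above (the Transfer struct and applyTransferFee helper are hypothetical stand-ins, not Wavelet's actual types):

```go
package main

import "fmt"

// feePercent is the testing value proposed in this issue.
const feePercent = 5

// Transfer is a hypothetical stand-in for a transfer transaction.
type Transfer struct {
	SenderBalance    uint64
	Amount           uint64
	ValidatorBalance uint64
}

// applyTransferFee deducts the amount plus a 5% fee from the sender and
// credits the fee to the rewarded validator, returning the fee charged.
func applyTransferFee(t *Transfer) uint64 {
	fee := t.Amount * feePercent / 100
	t.SenderBalance -= t.Amount + fee
	t.ValidatorBalance += fee
	return fee
}

func main() {
	t := &Transfer{SenderBalance: 1000, Amount: 200}
	fee := applyTransferFee(t)
	fmt.Println(fee, t.SenderBalance, t.ValidatorBalance) // prints "10 790 10"
}
```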
5:47PM INF Listening for peers. addr: 127.0.0.1:3000
panic: the private key specified is invalid: 67724480df7c62815c9891fd50492ea1b589df130a072db09afc664431a76f6e12eaa2f2688b8eb9cd64aa293378b6adce7d63bae070f8995e84e73da6ed745c
goroutine 1 [running]:
main.start(0xc0002bd0a8, 0x1b25580, 0xc0000a2000, 0x1b1d3c0, 0xc00000e010)
/home/rkeene/devel/perlin-dev/wavelet/cmd/wavelet/main.go:334 +0x17d3
main.Run.func3(0xc0002a0160, 0x0, 0x0)
/home/rkeene/devel/perlin-dev/wavelet/cmd/wavelet/main.go:274 +0x779
gopkg.in/urfave/cli%2ev1.HandleAction(0x18757c0, 0xc0002728d0, 0xc0002a0160, 0x0, 0x0)
/home/rkeene/go/pkg/mod/gopkg.in/urfave/[email protected]/app.go:490 +0xc8
gopkg.in/urfave/cli%2ev1.(*App).Run(0xc00029e000, 0xc000030080, 0x8, 0x8, 0x0, 0x0)
/home/rkeene/go/pkg/mod/gopkg.in/urfave/[email protected]/app.go:264 +0x590
main.Run(0xc000030080, 0x8, 0x8, 0x1b25580, 0xc0000a2000, 0x1b1d3c0, 0xc00000e010, 0xc00007e000)
/home/rkeene/devel/perlin-dev/wavelet/cmd/wavelet/main.go:282 +0x1859
main.main()
/home/rkeene/devel/perlin-dev/wavelet/cmd/wavelet/main.go:75 +0x7b
It doesn't work when using the path for the private-key either.
If any of the peers specified as CLI arguments to "wavelet" are down, then consensus stalls.
I modified the recursive invocation example so that instead of starting the bomb by explicitly calling the bomb function on the contract, it's kicked off in on_money_received. To set up the bomb, set_own_id is first called to allow the contract to store the ID for recursive invocation.
I expected this to behave like the example, expending all of the sender's account PERLs in gas fees; though this would be even more nefarious, because it's kicked off from a simple pay [address] [amount] call.
Actual behavior: The contract is recursively invoked as expected but the gas is never deducted from the caller (see the logs in the gist above).
I'm not sure what the desired behavior here is; maybe the gas not being deducted is a feature to protect senders from this kind of attack? Will this terminate eventually when some max recursion limit is hit? I left it running on my local test net for about 5 minutes with no end in sight.
Right now rate limiting is done on a per-node scope for transactions being ingested. This implies that a node will lock itself down from ingesting any more transactions should it individually be under high load.
What should be done instead is that we have rate limiting done on a per-IP/host basis.
Additionally, we should double check if transactions submitted from the API are properly prioritized over nops being broadcasted by a single node.
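A per-IP/host limiter could look roughly like the following; a stdlib-only token-bucket sketch (ipLimiter and its parameters are illustrative, not existing Wavelet types):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// ipLimiter is a simple per-host token bucket, refilled over time.
type ipLimiter struct {
	mu     sync.Mutex
	tokens map[string]float64
	last   map[string]time.Time
	rate   float64 // tokens added per second
	burst  float64 // maximum bucket size
}

func newIPLimiter(rate, burst float64) *ipLimiter {
	return &ipLimiter{
		tokens: map[string]float64{},
		last:   map[string]time.Time{},
		rate:   rate,
		burst:  burst,
	}
}

// Allow reports whether a transaction from host may be ingested right now.
func (l *ipLimiter) Allow(host string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	now := time.Now()
	if t, ok := l.last[host]; ok {
		// Refill proportionally to the time elapsed since the last request.
		l.tokens[host] += now.Sub(t).Seconds() * l.rate
		if l.tokens[host] > l.burst {
			l.tokens[host] = l.burst
		}
	} else {
		l.tokens[host] = l.burst
	}
	l.last[host] = now

	if l.tokens[host] < 1 {
		return false
	}
	l.tokens[host]--
	return true
}

func main() {
	l := newIPLimiter(1, 2) // 1 tx/sec with a burst of 2, per host
	fmt.Println(l.Allow("10.0.0.1"), l.Allow("10.0.0.1"), l.Allow("10.0.0.1"))
	fmt.Println(l.Allow("10.0.0.2")) // an independent bucket per host
}
```

Note that a real implementation would also need to evict idle hosts from the maps to bound memory.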
Currently, Snowball is used for dealing with two kinds of entities: rounds, for finalization, and plain booleans, for determining whether a node is out of sync.
The generic approach at the moment is to store in Snowball any entity that implements an interface exposing its ID (so it can be stored in the map).
A better approach might be possible; maybe code generation, maybe something else.
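The interface-based approach described above can be sketched like so (all names here are illustrative, not Wavelet's actual API):

```go
package main

import "fmt"

// Identifiable is the one requirement Snowball places on its entities:
// the ability to report an ID usable as a map key.
type Identifiable interface {
	ID() string
}

// Round is used for finalization.
type Round struct{ Index uint64 }

func (r Round) ID() string { return fmt.Sprintf("round-%d", r.Index) }

// OutOfSync wraps the boolean used for out-of-sync detection.
type OutOfSync struct{ Value bool }

func (o OutOfSync) ID() string { return fmt.Sprintf("out-of-sync-%t", o.Value) }

// Snowball tallies votes for whatever identifiable entities it is given.
type Snowball struct {
	counts    map[string]int
	preferred Identifiable
}

func NewSnowball() *Snowball { return &Snowball{counts: map[string]int{}} }

// Tick records a vote and updates the preferred entity when it is outvoted.
func (s *Snowball) Tick(v Identifiable) {
	s.counts[v.ID()]++
	if s.preferred == nil || s.counts[v.ID()] > s.counts[s.preferred.ID()] {
		s.preferred = v
	}
}

func main() {
	s := NewSnowball()
	s.Tick(Round{Index: 1})
	s.Tick(Round{Index: 1})
	s.Tick(Round{Index: 2})
	fmt.Println(s.preferred.ID()) // the entity with the most votes so far
}
```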
Right now, state syncing is done by loading the state into RAM and serving out 64 KiB chunks. This needs to be done by streaming from disk to avoid DoS.
Wavelet uses the default source from the math/rand package, which is not seeded by default. I looked through the source code and I don't see it being seeded anywhere. Not sure if this is intended or not.
As an aside, to further mitigate safety/liveness issues, there are more changes that I, @rkeene, and @losfair propose appending to Snowball; these may only apply w.r.t. Wavelet's DAG and consensus round structure.
One huge improvement is to allow for nodes to overwrite their preferred round decision based on the contents of said round.
A node may choose to overwrite their preferred round if:
Some round that is valid and collapsible has an ending critical transaction that is at a lower depth than the node's current preferred round's ending critical transaction depth.
Some round that is valid and collapsible encompasses a larger number of ancestral transactions from its ending critical transaction.
Spawning many contracts quickly currently uses a lot of CPU; investigate why.
Check performance of multiple nodes benchmarking at the same time.
We want to change the way fees are rewarded in a round to be proportional to the stake of each sender in the round, with numerical rounding applied.
For example, take a round with 5 transactions, each at 10 kens in fees (50 kens total), with the following stake weights:
The total stake in the round is SenderA.stake (10) + SenderB.stake (3) + SenderC.stake (5) + SenderA.stake (10) + SenderD.stake (8) == 36.
The rewards would be:
SenderA: 20 / 36 == 55.56% (27.78 kens == 28 kens)
SenderB: 3 / 36 == 8.33% (4.17 kens == 4 kens)
SenderC: 5 / 36 == 13.89% (6.94 kens == 7 kens)
SenderD: 8 / 36 == 22.22% (11.11 kens == 11 kens)
(It's significant to note that the rewards should sum to exactly the fees, so the rounding must be done in a way that achieves that.)
The advantage here is that computing the reward (as well as the fees, added in #232) only requires walking the graph one time, and fees are distributed more evenly rather than encouraging nodes to create low-value transactions to increase their likelihood of getting a reward.
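To make the payouts sum exactly to the fees, one option is the largest-remainder method; a minimal Go sketch (distributeFees is an illustrative helper, with SenderA's two transactions combined into a single stake weight of 20):

```go
package main

import (
	"fmt"
	"sort"
)

// distributeFees splits the total fees among stakes proportionally,
// rounding so the payouts sum exactly to the fees (largest-remainder method).
func distributeFees(total uint64, stakes []uint64) []uint64 {
	var sum uint64
	for _, s := range stakes {
		sum += s
	}

	out := make([]uint64, len(stakes))
	rem := make([]uint64, len(stakes))
	var paid uint64
	for i, s := range stakes {
		out[i] = total * s / sum // floor of the proportional share
		rem[i] = total * s % sum // remainder, used to break ties
		paid += out[i]
	}

	// Hand the leftover kens to the largest remainders first.
	idx := make([]int, len(stakes))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool { return rem[idx[a]] > rem[idx[b]] })
	for i := uint64(0); i < total-paid; i++ {
		out[idx[i]]++
	}
	return out
}

func main() {
	// The worked example above: 50 kens in fees, stakes 20/3/5/8.
	fmt.Println(distributeFees(50, []uint64{20, 3, 5, 8})) // prints "[28 4 7 11]"
}
```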
Build fails on 32 bit raspberry pi from source archive wavelet-0.1.1
The relevant error is:
...
# github.com/perlin-network/wavelet/avl
avl/node.go:371:17: constant 4294967295 overflows int
avl/node.go:379:19: constant 4294967295 overflows int
avl/node.go:519:16: constant 4294967295 overflows int
avl/node.go:529:19: constant 4294967295 overflows int
The problem is that len returns an int, whose size is platform-dependent.
Removing the checks:
if len(n.key) > math.MaxUint32 {
panic("avl: key is too long")
}
lets the build succeed, though I imagine you'll want to find a platform-independent way to keep those assertions.
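One possible platform-independent variant of the check is to widen before comparing, so the untyped constant never needs to fit in int; a sketch:

```go
package main

import "math"

// checkKeyLen panics on oversized keys. Comparing as uint64 compiles on
// both 32-bit and 64-bit platforms, since math.MaxUint32 always fits in
// uint64, while keeping the assertion meaningful on 64-bit builds.
func checkKeyLen(key []byte) {
	if uint64(len(key)) > math.MaxUint32 {
		panic("avl: key is too long")
	}
}

func main() {
	checkKeyLen([]byte("short key")) // fine on any platform
}
```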
Transaction IDs, and smart contract IDs are currently hex encoded inside the ledger.
It is not necessary though, and we can just pass raw bytes around internally inside the ledger.
We can remove this redundant hex encoding/decoding for transaction IDs and smart contract IDs.
The only place where we really need hex encoding/decoding of transaction/smart contract IDs is in the HTTP API (the api package), where JSON requests/responses to the HTTP API would contain hex-encoded versions of the transaction/smart contract IDs.
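At that boundary, the decode can then happen exactly once; a sketch (idLength and parseID are illustrative names, with the 32-byte size assumed):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// idLength is the assumed raw ID size; adjust to the actual ID type.
const idLength = 32

// parseID decodes a hex ID once at the HTTP API boundary; everything
// inside the ledger then passes the raw bytes around.
func parseID(s string) ([idLength]byte, error) {
	var id [idLength]byte
	b, err := hex.DecodeString(s)
	if err != nil {
		return id, err
	}
	if len(b) != idLength {
		return id, fmt.Errorf("expected %d bytes, got %d", idLength, len(b))
	}
	copy(id[:], b)
	return id, nil
}

func main() {
	id, err := parseID("00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff")
	fmt.Println(id[0], err) // prints "0 <nil>"
}
```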
Parameters such as:
... should be configurable in real-time via CLI.
We want to support the use case where a user with no balance may call a smart contract that is so defined; alternatively put, a user's balance is neither considered nor affected by a call to a smart contract so defined. Instead, a contract-specified or user-specified wallet is used to fund the transaction.
Similar to: https://medium.com/tabookey/1-800-ethereum-gas-stations-network-for-toll-free-transactions-4bbfc03a0a56
Additionally, this use case requires that either only certain public keys or all public keys be permitted to engage with this functionality of the specific smart contract. Some mechanism must also exist for the "funding wallet" to authorize its use by the smart contract.
Indeed the mechanism may be a new type of message where wallets permit contracts to pull funds from their account for specified other accounts.
This is an investigation into what it would take, and what the resulting feature would look like, and if it should be pursued.
panic: avl: could not find node cc17901df20f06d798e5a553a00d6c1a
goroutine 48 [running]:
github.com/perlin-network/wavelet/avl.(*Tree).mustLoadNode(...)
/src/avl/tree.go:297
github.com/perlin-network/wavelet/avl.(*Tree).mustLoadLeft(0xc0017baba0, 0xc000287290, 0xc000f894a8)
/src/avl/tree.go:332 +0xbb
github.com/perlin-network/wavelet/avl.(*node).lookup(0xc000287290, 0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x22, 0xffffffffffffffff, 0xc000287170, 0x0)
/src/avl/node.go:246 +0xf6
github.com/perlin-network/wavelet/avl.(*node).lookup(0xc000287170, 0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x22, 0x1, 0xc000286f30, 0x0)
/src/avl/node.go:249 +0x18d
github.com/perlin-network/wavelet/avl.(*node).lookup(0xc000286f30, 0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x22, 0xffffffffffffffff, 0xc000286e10, 0x0)
/src/avl/node.go:251 +0x221
github.com/perlin-network/wavelet/avl.(*node).lookup(0xc000286e10, 0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x22, 0x1, 0x0, 0x203000)
/src/avl/node.go:249 +0x18d
github.com/perlin-network/wavelet/avl.(*node).lookup(0xc000032090, 0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x30, 0x40de26, 0xc000042060, 0xc000042060)
/src/avl/node.go:251 +0x221
github.com/perlin-network/wavelet/avl.(*Tree).Lookup(0xc0017baba0, 0xc000042060, 0x22, 0x30, 0x22, 0xc000042060, 0x1, 0x30)
/src/avl/tree.go:99 +0x5b
github.com/perlin-network/wavelet.readUnderAccounts(0xc0017baba0, 0xe6e1c5a99ef1a568, 0x9f155a1789a13618, 0x589eeb073c27dc36, 0x18327e9529358b44, 0x14a926e, 0x1, 0x1, 0xc0017baba0, 0xbffe70, ...)
/src/db.go:258 +0xf1
github.com/perlin-network/wavelet.ReadAccountStake(0xc0017baba0, 0xe6e1c5a99ef1a568, 0x9f155a1789a13618, 0x589eeb073c27dc36, 0x18327e9529358b44, 0x0, 0x0)
/src/db.go:149 +0x64
github.com/perlin-network/wavelet.CollectVotesForSync(0xc000194240, 0xc00019a1c0, 0xc00025ade0, 0xc0000ba690, 0x2)
/src/vote.go:64 +0x4c2
created by github.com/perlin-network/wavelet.(*Ledger).SyncToLatestRound
/src/ledger.go:908 +0x11e
There's a minor bug here: https://github.com/perlin-network/wavelet/blob/master/tx_json.go#L288-L292
Also, it's a good idea to write proper tests for the functions in tx_json.go.
From within a smart contract, calling another smart contract may be broken; it was working at one point, but tests need to be added around it.
It would be useful to have a genesis/ folder available, in which the genesis of the ledger's state is compiled from all files within the folder.
An account state may be specified within the genesis/ folder in the form of an [account address].json file with key-value pairs.
A smart contract may be specified within the genesis/ folder in the form of a [contract address].wasm file with accompanying [contract address].[page index].dmp files representing the contract's memory pages.
Commands may also be implemented for dumping out specified account state or smart contract state like so:
dump/dmp <account/contract address>
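As a sketch of what an [account address].json file could hold (the field names here are purely illustrative; the actual key-value pairs would mirror whatever the ledger stores per account):

```json
{
  "balance": 10000000,
  "stake": 5000,
  "reward": 0
}
```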
Track down issue that causes consensus rounds to get held up, currently experienced on Linux, and not Mac/Darwin
We want to change the way fees are computed to be charged proportional to the transaction size (e.g., transaction.fee = tx.sizeInBytes * feePerByte, where feePerByte is a network-wide constant).
Investigation in progress
An improvement that may be made is to the ordering of transactions for a round. Currently, we perform a BFS over all transactions from the round's ending transaction.
Instead, to enforce an ordering, after discovering the ancestry of a consensus round, we should sort the transactions by depth and then lexicographically by their transaction IDs, and then proceed to apply them to the ledger.
This gives the DAG a more linear, topological ordering in which transactions are applied to the ledger.
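The proposed ordering can be sketched with a two-key sort (the tx struct here is a simplified stand-in for the real transaction type):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// tx is a simplified stand-in: a depth plus a transaction ID.
type tx struct {
	Depth uint64
	ID    []byte
}

// orderRound sorts a round's ancestry by depth first, then by
// lexicographic ID, so every node applies transactions in the same order.
func orderRound(txs []tx) {
	sort.Slice(txs, func(a, b int) bool {
		if txs[a].Depth != txs[b].Depth {
			return txs[a].Depth < txs[b].Depth
		}
		return bytes.Compare(txs[a].ID, txs[b].ID) < 0
	})
}

func main() {
	txs := []tx{{2, []byte{0xbb}}, {1, []byte{0xcc}}, {1, []byte{0xaa}}}
	orderRound(txs)
	for _, t := range txs {
		fmt.Printf("%d %x\n", t.Depth, t.ID)
	}
}
```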
Recently we added a restart mechanism (#204). This restart is not very graceful.
In particular, transactions which we have added locally but not gossiped out may be lost.
If possible, have the restart mechanism shut down the RPC listener and wait a bit before restarting, to give these transactions a greater chance of being gossiped out.
They seem to be failing
From the looks of things, it seems that a code cleanup is in order for tx_applier.go.
Lines 31 to 46 in 05cdfa2
There are three separate structs that I believe may either be merged into one or otherwise gotten rid of. This'll help reduce some memory allocations along the hot paths of applying transactions to the ledger state.
Low priority for the time being, but definitely something necessary.
I believe there is a simpler method for creating a difficulty system, which would also allow us to remove anything to do with timestamps in Wavelet.
The purpose of the difficulty system is to ensure there is enough of a time window for all nodes to be able to submit transactions in a single consensus round.
All we need is one single protocol parameter, MIN_DIFFICULTY.
As a prerequisite, all users must now include the depth of their transaction in all of their transactions.
Difficulty is now: MIN_DIFFICULTY * log2(len(graph)) / log2(graph.height). len(graph) is the number of transactions within the view-graph at some consensus round ID, and the height is the topological height of the view-graph itself.
Based on experiments on a local PC where communication is optimal, MIN_DIFFICULTY could feasibly be set to 8. The intuition behind the formulation is that the number of transactions in a graph is linear with respect to the number of nodes in the network.
The height of the graph is a weak representation of how out-of-sync nodes are. This is because nodes simultaneously select parents at approximately the same depth, and therefore the height logically grows more slowly, relative to the number of nodes in the network, than the number of transactions in the graph.
By having the ratio of the number of transactions in the graph, and the height of the graph as a logarithmic factor, adversaries that attempt to flood the network with transactions will not be able to do any damage. Any flooding of transactions is a mere constant factor added to the number of transactions in the graph.
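For reference, the proposed formula as code (using the experimentally suggested MIN_DIFFICULTY of 8; a sketch, not the deployed implementation):

```go
package main

import (
	"fmt"
	"math"
)

// minDifficulty is the value suggested as feasible by local experiments.
const minDifficulty = 8

// difficulty implements the proposed formula:
// MIN_DIFFICULTY * log2(len(graph)) / log2(graph.height).
func difficulty(numTxs, height float64) float64 {
	return minDifficulty * math.Log2(numTxs) / math.Log2(height)
}

func main() {
	// Flooding only changes numTxs by a constant factor, which the
	// logarithm dampens.
	fmt.Println(difficulty(1024, 32)) // 8 * 10 / 5 = 16
}
```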
Fix the issue identified by the spamming test added by #153
Attempt to get a rough, non-binding consensus from a set of peers before sending a transaction to the network. This will make it more likely that a transaction will be successfully inserted into the ledger in times of high network load.
Create Disconnect and Connect API and CLI Methods.
CLI: /connect <host> [<port>]
CLI: /disconnect <host> [<port>]
CLI: /restart <--hard>
API: /node/connect/<:host>[/<:port>]
API: /node/disconnect/<:host>[/<:port>]
API: /node/restart?hard=true
For Connect calls: if the port is missing, use the configured port.
For Disconnect calls: if the port is missing, match any port (disconnect all if there are multiple matches).
If a node is disconnected (shut down), Wavelet's peer table will still show its address. This is intended behavior: since we don't know when the node will come back up, we should still consider it a peer.
But if the node then connects again with the same public key but a different IP address, the target node's peer table still shows that public key in a single instance, but with the old IP address.
Expected: the node should be present in the peer table, but with the new address.
Note: this is not the case if the node is manually disconnected with either the API or CLI methods, because in that case it gets removed from the node's peer table.
Investigate whether PoW for transactions should be added to increase the cost of spam, or if transaction fees are sufficient.
If PoW is required, decide how it should work.
It could be used as a prioritizer (more work = higher priority), as a threshold, etc.
There is inevitable overhead in the way state is managed right now: transitioning between gossiping and querying by shutting down all of the goroutines associated with each respective state.
To mitigate this, all goroutines to do with handling received messages (listenForQueries(), listenForGossip(), etc.) should only be spawned once, and should never be shut down unless the ledger is explicitly shut down/closed.
All goroutines to do with handling consensus logic/state (query(), gossip()) may optionally be handled by a single goroutine as well. Switching between query() and gossip() is a matter of checking whether our node has a preferred transaction or not, which can make sense as a single goroutine.
This effectively would alleviate the need to switch between goroutine contexts all the time, improving performance.
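The switching logic described above could collapse into something like the following (illustrative types only, not Wavelet's actual structures):

```go
package main

import "fmt"

// Transaction is an illustrative stand-in.
type Transaction struct{}

// Ledger holds the one piece of state that drives the switch: whether a
// preferred transaction currently exists.
type Ledger struct {
	preferred *Transaction
}

// step picks the consensus action for one iteration of a single
// long-lived goroutine, instead of tearing goroutines down on each switch.
func (l *Ledger) step() string {
	if l.preferred != nil {
		return "query" // vote on the preferred transaction
	}
	return "gossip" // broadcast pending transactions
}

func main() {
	l := &Ledger{}
	fmt.Println(l.step()) // prints "gossip"
	l.preferred = &Transaction{}
	fmt.Println(l.step()) // prints "query"
}
```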
When I call a contract function that will take 700k PERL gas, on a contract that has 100mil PERL gas deposited, from a wallet with 500k PERLs, with a 1 PERL gas limit, I get an "insufficient gas" error, the wallet pays 2 PERLS in transaction fees, and the contract's gas deposit balance stays the same.
When I call the same function from a wallet with 100mil PERLs, it executes successfully, deducts gas from the contract's gas balance, and only deducts the 2 PERL transaction fee from the wallet's balance.
It is expected that when the contract pays the gas fees, the calling wallet account does not have to have enough PERLs to theoretically pay the gas fees. This allows users to run expensive contracts without needing a large amount of PERLs in their wallet.
Referring to issue #245.
Lens generates private keys that have 0 prefixed zero bits.
This will cause an issue if you use such a key to run Wavelet, since Wavelet's S/Kademlia requires the private key to have 1 prefixed zero bit (C1).
The possible solutions are:
Update Lens and/or Wavelet to make C1 match.
Note: if we make Lens fulfil C1, it'll increase CPU usage of the Lens server.
Make C1 configurable, and have the user pass the correct value of C1.
As of right now, a lot of bandwidth is being wasted sending out-of-sync checks several times a second.
Only if we notice our peers are consistently at a higher view ID should we attempt to check if we are out of sync.
We can start checking when we receive gossip requests or query requests with transactions that are signed to have a higher view ID than our current one.
For future work, we should only start checking once we receive N gossip/query requests with higher view IDs within a certain time period (10 seconds) before checking if we are out of sync; this is otherwise known as the circuit breaker pattern.
Variate benchmarks in terms of tag, payload, number of transactions, and number of times CollapseTransactions gets called.
Currently the graph (or some subset of it) is held entirely in memory, but as it becomes larger we need to be able to page out parts of it.
Look into a safe way to have parts of the graph exist only on disk while some parts exist in working memory and on disk.
One option is to create a CLI command to dump a pprof heap and goroutine dump; pretty much the same as we do when our max memory detector is invoked in recovery.go.
Another option is to expose pprof over the HTTP API (but this may be subject to abuse, rate-limiting may be enough though)
It has been reported that wavelet#166 caused the websocket polling to be impaired.
Right now, if fewer than alpha * K peers consider a transaction you gossip out to be correct, your node will consider your transaction to have failed.
Out of the K peers that you gossip to, however, it is possible that a small number of those peers have considered your transaction to be correct. If so, and they or their neighboring peers create a critical transaction, your transaction would have been accepted by the end of the consensus round.
One option is to consider gossiping successful overall if any single peer out of your K peers considers your transaction correct, and to notify whether or not the transaction was in the end accepted/rejected by the time the consensus round ends.
Additionally, should gossiping for a single transaction fail, it would be good to retry gossiping a number of times by keeping a circular buffer of transactions to retry resending.
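The retry mechanism could be backed by a small circular buffer like this sketch (retryBuffer is an illustrative name; real entries would be transactions, not strings):

```go
package main

import "fmt"

// retryBuffer is a fixed-size circular buffer of transactions whose
// gossip failed and should be retried.
type retryBuffer struct {
	txs  []string
	next int
	size int
}

func newRetryBuffer(capacity int) *retryBuffer {
	return &retryBuffer{txs: make([]string, capacity)}
}

// Push records a transaction to retry, overwriting the oldest entry once full.
func (b *retryBuffer) Push(tx string) {
	b.txs[b.next] = tx
	b.next = (b.next + 1) % len(b.txs)
	if b.size < len(b.txs) {
		b.size++
	}
}

// Pending returns the queued transactions, oldest first.
func (b *retryBuffer) Pending() []string {
	out := make([]string, 0, b.size)
	for i := 0; i < b.size; i++ {
		out = append(out, b.txs[(b.next-b.size+i+len(b.txs))%len(b.txs)])
	}
	return out
}

func main() {
	b := newRetryBuffer(2)
	b.Push("tx1")
	b.Push("tx2")
	b.Push("tx3") // overwrites tx1, the oldest entry
	fmt.Println(b.Pending())
}
```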