myelnet / pop

Run a point-of-presence within Myel, the community-powered content delivery network.

Home Page: https://myel.dev

License: BSD 3-Clause "New" or "Revised" License

Go 96.53% Makefile 0.18% Dockerfile 0.54% Shell 2.36% JavaScript 0.34% HTML 0.05%
filecoin ipfs p2p peer-to-peer decentralized

pop's Introduction


๐Ÿฟ

pop


Run a point-of-presence within Myel, the community-powered content delivery network.

Technical Highlights

  • Libp2p, IPFS and Filecoin protocols for data transfer.
  • Content is routed via Gossipsub.
  • New content to cache is directly replicated by connected peers.
  • Payments to retrieve content are made via Filecoin payment channels.

Background

Our mission is to build a community-powered content delivery network that is resilient 🦾, scalable 🌍, and peer-to-peer ↔️ to suit the long-term needs of Web3 applications.

We're currently using Filecoin building blocks and are aspiring to make this library as interoperable as possible with existing Web3 backends such as IPFS.

This library is still experimental, so feel free to open an issue if you have any suggestions or would like to contribute!

Install

As a CLI:

$ go install github.com/myelnet/pop/cmd/pop@latest

As a library:

$ go get github.com/myelnet/pop/exchange

CLI Usage

Run any command with the -h flag for more details.

$ pop
USAGE
  pop subcommand [flags]

This CLI is still under active development. Commands and flags will
change until a first stable release. To get started run 'pop start'.

SUBCOMMANDS
  start   Starts a POP daemon
  off     Gracefully shuts down the Pop daemon
  ping    Ping the local daemon or a given peer
  put     Put a file into an exchange transaction for storage
  status  Print the state of any ongoing transaction
  commit  Commit a DAG transaction
  get     Retrieve content from the network
  list    List all content indexed in this pop
  wallet  Manage your wallet

FLAGS
  -log info  Set logging mode

Metrics Collection

pop nodes can push statistics measuring the performance of retrievals to an InfluxDB v2 database when the following environment variables are set:

export INFLUXDB_URL=<INSERT InfluxDB ENDPOINT>
export INFLUXDB_TOKEN=<INSERT TOKEN>
export INFLUXDB_ORG=<INSERT ORG>
export INFLUXDB_BUCKET=<INSERT BUCKET>

Auto-updating

pop supports auto-updating via GitHub webhooks.

To activate it, pass the -upgrade-secret flag to pop start so that it matches the GitHub webhook secret set for your server. Make sure you run as sudo, or that your current user has permission to modify the pop binary.
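
For example (the secret value is whatever you configured in your repository's webhook settings):

$ sudo pop start -upgrade-secret <YOUR_GITHUB_WEBHOOK_SECRET>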

Your pop will then automatically download and install new releases.

Deployment

You can deploy a cluster of nodes on AWS using Kubernetes, as detailed in infra/k8s.

Library Usage

See the go docs.

pop's People

Contributors

alexander-camuto, cat-turner, gallexis, stefanvanburen, tchardin


pop's Issues

Remove dependency on Filecoin's FFI

Since we don't plan on supporting storage anymore, let's remove all traces of it in:

  • build scripts
  • wallet (remove support for BLS keys)
  • anywhere else it might be requested
  • remove popp since we don't need it either

Smart chunking

When it is possible to detect the file type from the extension, we should select an appropriate chunking strategy. This improves deduplication and data transfer speed, and makes the network more efficient overall.
Here are some general guidelines:

  • Audio and video content: trickle layout with 1MB chunk sizes.
  • Images and compressed archives (.zip etc.): size splitter with 1MB chunks, balanced layout.
  • Text, JSON, etc.: Buzhash chunker with balanced layout and 16kb chunks for best deduplication.
    We can probably experiment with different params, but these seem like reasonable defaults (see the selection sketch below).
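
A minimal sketch of extension-based selection, assuming the go-ipfs-chunker package; the chunkerForPath helper and extension lists are hypothetical, and the trickle vs. balanced layout choice would still be made separately in the DAG builder:

package chunking

import (
	"io"
	"path/filepath"
	"strings"

	chunk "github.com/ipfs/go-ipfs-chunker"
)

const mediaChunkSize = 1 << 20 // 1MB chunks for media and archives

// chunkerForPath picks a chunking strategy from the file extension.
func chunkerForPath(path string, r io.Reader) chunk.Splitter {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".mp3", ".mp4", ".mov", ".wav", ".zip", ".gz", ".png", ".jpg":
		// Media, images and archives: fixed 1MB chunks.
		return chunk.NewSizeSplitter(r, mediaChunkSize)
	case ".txt", ".json", ".html", ".md":
		// Text-like content: content-defined chunking for deduplication.
		// (Buzhash's default chunk sizes are larger than 16kb, so hitting
		// the 16kb target above would need a tuned chunker.)
		return chunk.NewBuzhash(r)
	default:
		return chunk.DefaultSplitter(r)
	}
}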

Issue when installing pop on Mac M1

When following the doc to install pop on a Mac with an M1 CPU, I had to do two steps that were not in the doc:

  • Install Cargo
  • Run make as root, as otherwise it wouldn't install pop to /usr/local/bin/ (I'm not sure we want to run make as root, but it was the only way for me to install it)

Exchange: Tx should only combine DAGs together and not be concerned with "DAG-ifying" content

Currently, the Tx interface exposes a PutFile method which takes a path and takes care of adding the file as a unixfs DAG using our defined params. This method should be removed in favor of the more generic Put, which only links to a CID. Put is already implemented, and methods using PutFile should be updated to use the implementation-specific method available in their usage context. An example of how this is done can be found here:

pop/node/server.go

Lines 243 to 262 in 8796fb3

c, err := s.node.Add(r.Context(), tx.Store().DAG, part)
if err != nil {
	http.Error(w, "failed to add file", http.StatusInternalServerError)
	return
}
stats, err := utils.Stat(r.Context(), tx.Store(), c, sel.All())
if err != nil {
	http.Error(w, "failed to get file stat", http.StatusInternalServerError)
	return
}
key := part.FileName()
if key == "" {
	// If it's not a file the key should be the form name
	key = part.FormName()
}
err = tx.Put(key, c, int64(stats.Size))
if err != nil {
	http.Error(w, "failed to put file in tx", http.StatusInternalServerError)
	return
}

This call should be updated in the same way as above:
err := nd.tx.PutFile(args.Path)

Run the tests in node_test.go to check it works as expected.

All the addFile and other related methods/fields such as ChunkSize can be removed from the Tx struct.

tx_test.go will need to be updated to use a new mocknet convenience method. Since the existing mocknet method creates a new store ID, a new method needs to be created that takes a specific store instead. So this method:

func (tn *TestNode) LoadFileToNewStore(ctx context.Context, t testing.TB, dirPath string) (ipld.Link, multistore.StoreID, []byte) {
	stID := tn.Ms.Next()
	store, err := tn.Ms.Get(stID)
	require.NoError(t, err)
	f, err := os.Open(dirPath)
	require.NoError(t, err)
	var buf bytes.Buffer
	tr := io.TeeReader(f, &buf)
	file := files.NewReaderFile(tr)
	// import to UnixFS
	bufferedDS := ipldformat.NewBufferedDAG(ctx, store.DAG)
	params := helpers.DagBuilderParams{
		Maxlinks:   unixfsLinksPerLevel,
		RawLeaves:  true,
		CidBuilder: nil,
		Dagserv:    bufferedDS,
	}
	db, err := params.New(chunk.NewSizeSplitter(file, int64(unixfsChunkSize)))
	require.NoError(t, err)
	nd, err := balanced.Layout(db)
	require.NoError(t, err)
	err = bufferedDS.Commit()
	require.NoError(t, err)
	// save the original files bytes
	return cidlink.Link{Cid: nd.Cid()}, stID, buf.Bytes()
}

Could be changed into something like:

func (tn *TestNode) LoadFileToStore(ctx context.Context, t testing.TB, store *multistore.Store, path string) (ipld.Link, []byte) {
	f, err := os.Open(path)
	require.NoError(t, err)

	var buf bytes.Buffer
	tr := io.TeeReader(f, &buf)
	file := files.NewReaderFile(tr)

	// import to UnixFS
	bufferedDS := ipldformat.NewBufferedDAG(ctx, store.DAG)

	params := helpers.DagBuilderParams{
		Maxlinks:   unixfsLinksPerLevel,
		RawLeaves:  true,
		CidBuilder: nil,
		Dagserv:    bufferedDS,
	}

	db, err := params.New(chunk.NewSizeSplitter(file, int64(unixfsChunkSize)))
	require.NoError(t, err)

	nd, err := balanced.Layout(db)
	require.NoError(t, err)

	err = bufferedDS.Commit()
	require.NoError(t, err)

	// save the original file's bytes
	return cidlink.Link{Cid: nd.Cid()}, buf.Bytes()
}

func (tn *TestNode) LoadFileToNewStore(ctx context.Context, t testing.TB, dirPath string) (ipld.Link, multistore.StoreID, []byte) {
	stID := tn.Ms.Next()
	store, err := tn.Ms.Get(stID)
	require.NoError(t, err)

	link, bytes := tn.LoadFileToStore(ctx, t, store, dirPath)
	return link, stID, bytes
}

So then in the test instead of this:

require.NoError(t, tx.PutFile(fname))

You can write:

link, bytes := pnodes[0].LoadFileToStore(ctx, t, tx.Store(), fname)
rootCid := link.(cidlink.Link).Cid
require.NoError(t, tx.Put(KeyFromPath(fname), rootCid, int64(len(bytes))))

This is definitely more verbose, but based on a conversation with Adin from PL, this module shouldn't be concerned with how files are imported. It's also a good way to learn more about how files are imported/DAG-ified before getting put into the transaction.

Separate hashing function when responding to caching requests

  • Currently providers can claim they already have a CID locally when receiving a caching request. This opens an attack vector whereby malicious providers could lie about the content they hold locally to intentionally reduce the replication factor / redundancy of specific pieces of content.

  • A simple solution is for the provider to back up this claim with a new hash of the content DAG (e.g. Keccak / SHA-3) that the CID alone wouldn't provide -- serving as a simple proof that the provider does have the content.

    • The problem is that the provider can hold content, hash it, and subsequently delete it, but store the hash to respond to subsequent requests.
    • A potential solution is to use a keyed hash function, whereby a node sending caching requests also includes a randomly generated key as a payload. The nodes responding then have to hash the DAG using that key, proving that, at least at that given point in time, they did actually hold the content (see the sketch below).
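
A minimal sketch of the keyed-hash response, assuming the requester's random key arrives with the caching request; ProveContent and the blocks argument are hypothetical (a real implementation would walk the DAG from the blockstore):

package cacheproof

import (
	"crypto/hmac"

	"golang.org/x/crypto/sha3"
)

// ProveContent computes a keyed hash (HMAC-SHA3-256) over the raw blocks
// of a DAG using the key supplied in the caching request. A provider that
// deleted the content cannot precompute this for a key it has never seen.
func ProveContent(key []byte, blocks [][]byte) []byte {
	mac := hmac.New(sha3.New256, key)
	for _, blk := range blocks {
		mac.Write(blk)
	}
	return mac.Sum(nil)
}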

`pop off` cli command

Gracefully shut down the pop with a CLI command. shutdown might be a more appropriate name, but off is shorter and more fun.

Add `-maxppb` flag to the start config

-maxppb (max price per byte) is a parameter currently used by the get command. We should also give the option to set this parameter globally when starting the node. To do this, add a survey step to the start command so it is also saved in the JSON config file. The default should be 5, and the prompt should display the unit (attoFIL).
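
A minimal sketch of the flag wiring using the standard flag package; the flag name and default follow this issue, everything else is hypothetical:

package main

import (
	"flag"
	"fmt"
)

func main() {
	// Global default matching the get command's parameter; unit is attoFIL.
	maxPPB := flag.Int64("maxppb", 5, "max price per byte (attoFIL)")
	flag.Parse()
	// In pop this value would be persisted to the JSON config by the survey step.
	fmt.Printf("maxppb = %d attoFIL\n", *maxPPB)
}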

Decentralized hole punching

  • We currently use ngrok as a NAT hole-punching solution. Although easy to use, it introduces a number of setbacks:
    • ngrok servers are not distributed geographically, so we take a real hit in performance
    • ngrok code is proprietary, so it's hard to figure out what exactly is going on behind the scenes
    • ngrok relays / servers can't act as providers on our network -- which is a missed opportunity

In light of the PL Project Flare developments, we should consider rolling out their circuit relay v2 implementation for the public nodes on our networks (i.e. those not behind a NAT).

Some brief notes on how to implement this:

  • Use AutoNAT to determine whether a node is behind a NAT -- if not, automatically promote the node to a relay.
  • Multi-addresses can contain information as to whether a peer requires a connection via a relay -- and through which relay.
  • Relays can see requests for content and could determine whether they themselves should cache the content to boost performance and avoid the messaging roundtrips hole punching requires -- this would be a first step in introducing a performance-boosting hierarchy to the network.
  • Baking in support for WebRTC would remove the need for the DNS records we currently have to maintain -- which would make it easier to onboard new providers (WebRTC is currently the only protocol that can perform hole punching browser-side).

TestMapFieldSelector seems to be racy

Just noting this so we can investigate if it happens again.

=== RUN   TestMapFieldSelector
{"level":"error","error":"set pipe: deadline not supported","time":"2021-06-11T18:08:45Z","message":"failed to set read deadline"}
{"level":"error","error":"no peer given ID","time":"2021-06-11T18:08:45Z","message":"failed to record latency"}
{"level":"error","error":"set pipe: deadline not supported","time":"2021-06-11T18:08:45Z","message":"failed to set read deadline"}
{"level":"error","error":"No state for /1623434925932: datastore: key not found","time":"2021-06-11T18:08:46Z","message":"attempting to configure data store"}
2021-06-11T18:08:46.937Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `13`, event `10`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
panic: test timed out after 10m0s

Add websocket transport

I believe the websocket transport is already enabled by default; all we need is to add "/ip4/0.0.0.0/tcp/41505/ws" to the listening addresses.
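
A minimal sketch, assuming a recent go-libp2p where New takes options directly; whether raw TCP and websocket can share a single port depends on the libp2p version, so the sketch uses adjacent ports:

package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// The websocket transport ships with go-libp2p by default; listening on
	// a /ws multiaddr is enough to enable it.
	h, err := libp2p.New(
		libp2p.ListenAddrStrings(
			"/ip4/0.0.0.0/tcp/41504",
			"/ip4/0.0.0.0/tcp/41505/ws",
		),
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("listening on:", h.Addrs())
}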

Check default address balance before executing retrievals

Both clients and providers need a minimum amount of FIL to execute a paid retrieval. On the client side, we should check that there are sufficient funds in the wallet before executing an offer. This should be implemented in node.Load during triage, when selecting an offer to execute.
On the provider side, we should also check that there are sufficient funds in the wallet to pay for gas fees when collecting funds. This should be executed at the exchange.handleQuery level. I think 0.1 FIL should be enough for most gas transactions.
Balance can be read using the Filecoin API: call StateGetActor and read the Balance property on the Actor type. A sketch follows.
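
A minimal sketch of the check, assuming a Lotus-style API; the stateAPI interface and hasMinBalance helper are hypothetical:

package retrieval

import (
	"context"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/lotus/chain/types"
)

// stateAPI is the slice of the filecoin API this check needs.
type stateAPI interface {
	StateGetActor(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*types.Actor, error)
}

// hasMinBalance reports whether addr holds at least min attoFIL.
func hasMinBalance(ctx context.Context, api stateAPI, addr address.Address, min types.BigInt) (bool, error) {
	act, err := api.StateGetActor(ctx, addr, types.EmptyTSK)
	if err != nil {
		return false, err
	}
	return act.Balance.GreaterThanEqual(min), nil
}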

Index should keep track of individual transaction keys

Problem

In the current system, every time a node receives or creates a new transaction, a new ref is added to the index:

pop/exchange/index.go

Lines 72 to 80 in 8796fb3

type DataRef struct {
	PayloadCID  cid.Cid
	PayloadSize int64
	StoreID     multistore.StoreID
	Freq        int64
	BucketID    int64
	// do not serialize
	bucketNode *list.Element
}

And uses this method to set it in the index:

pop/exchange/index.go

Lines 272 to 290 in 8796fb3

// SetRef adds a ref in the index and increments the LFU queue
func (idx *Index) SetRef(ref *DataRef) error {
	idx.mu.Lock()
	defer idx.mu.Unlock()
	k := ref.PayloadCID.String()
	idx.Refs[k] = ref
	idx.size += uint64(ref.PayloadSize)
	if idx.ub > 0 && idx.lb > 0 {
		if idx.size > idx.ub {
			idx.evict(idx.size - idx.lb)
		}
	}
	// We evict the item before adding the new one
	idx.increment(ref)
	if err := idx.root.Set(context.TODO(), k, ref); err != nil {
		return err
	}
	return idx.Flush()
}

This only registers the root of the transaction (PayloadCID). In some situations, a node may retrieve only a single key from the transaction yet still set the root in the index; the node then thinks it has the full transaction and doesn't query the network.

Solution

We need to add a Keys field to the DataRef struct. The field could be an array or a map; I think a map might be more convenient/faster for checking whether a key is in the index, so maybe map[string]bool. This requires regenerating the cbor file by running the following (in /exchange) once the new field is added to the struct:

go generate ./index.go

Thinking we could add a convenience method to *DataRef:

if ref.Has("hello.txt") {
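
A minimal sketch of that method, assuming Keys ends up as map[string]bool:

// Has reports whether a key was registered in this transaction's ref.
func (r *DataRef) Has(key string) bool {
	return r.Keys[key]
}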

Then this is mainly used here:

pop/exchange/tx.go

Lines 420 to 430 in 8796fb3

// IsLocal tells us if this node is storing the content of this transaction or if it needs to retrieve it
func (tx *Tx) IsLocal() bool {
	// We have entries: this means the content from this root is stored locally
	if len(tx.entries) > 0 {
		return true
	}
	if _, err := tx.index.GetRef(tx.root); err != nil {
		return false
	}
	return true
}

Which could become:

func (tx *Tx) IsLocal(key string) bool {
	// We have entries: this means the content from this root is stored locally
	if len(tx.entries) > 0 {
		return true
	}
	ref, err := tx.index.GetRef(tx.root)
	if err != nil {
		return false
	}
	return ref.Has(key)
}

Lastly, there are a few places that should be updated, mainly:

pop/exchange/tx.go

Lines 377 to 381 in 8796fb3

err := tx.index.SetRef(&DataRef{
	PayloadCID:  tx.root,
	StoreID:     tx.storeID,
	PayloadSize: tx.size,
})

which should use keys from all the entries set in the Put step.

pop/node/popn.go

Lines 755 to 760 in 8796fb3

// Register new blocks in our supply by default
err = nd.exch.Index().SetRef(&exchange.DataRef{
	PayloadCID:  c,
	StoreID:     tx.StoreID(),
	PayloadSize: int64(res.Size),
})

I might be forgetting some tests as well.

Test

If all this goes well, uncommenting this test should pass:

pop/node/node_test.go

Lines 532 to 549 in 8796fb3

// @TODO: register keys in index
// Now let's try to request the second file
// got2 := make(chan *GetResult, 2)
// cn.notify = func(n Notify) {
// 	require.Equal(t, n.GetResult.Err, "")
// 	got2 <- n.GetResult
// }
// cn.Get(ctx, &GetArgs{
// 	Cid:      fmt.Sprintf("/%s/data2", ref.PayloadCID.String()),
// 	Strategy: "SelectFirst",
// 	Timeout:  1,
// })
// res = <-got2
// require.NotEqual(t, "", res.DealID)
// res = <-got2
// require.Greater(t, res.TransLatSeconds, 0.0)

Specify peers for pop dispatch

pop import can currently import CAR files and dispatch data to -cache-rf peers -- but this method doesn't let us specify which peers receive the duplicated data.

As we roll out tests to the network and onboard new peers it would be great to be able to dispatch test data solely to new peers (for efficiency etc...).

We could add a -peer flag to pop import that specifies one or more peers to replicate content to, or create a new dispatch method which takes the same flag and sends already-imported data to the specified peers.

Unify CLI api in a standard JSON RPC format which can be sent either via IPC or Websockets

Still spec'ing out what the RPC library should look like, but we could start by switching the Message to use an RPC format instead of the basic JSON struct. Here is a good example of JSON-RPC messages:
https://github.com/ethereum/go-ethereum/blob/5cff9754d795971451f2f4e8a2cc0c6f51ce9802/rpc/json.go#L49-L58

See #93 for details about websockets. I think it should be fairly simple to send through the same connection, but the websocket protocol can be troublesome sometimes.
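
For reference, a sketch of that envelope mirroring the linked go-ethereum struct (one message type covering requests, responses and errors):

package rpc

import "encoding/json"

// jsonrpcMessage mirrors go-ethereum's JSON-RPC 2.0 envelope.
type jsonrpcMessage struct {
	Version string          `json:"jsonrpc,omitempty"`
	ID      json.RawMessage `json:"id,omitempty"`
	Method  string          `json:"method,omitempty"`
	Params  json.RawMessage `json:"params,omitempty"`
	Error   *jsonError      `json:"error,omitempty"`
	Result  json.RawMessage `json:"result,omitempty"`
}

type jsonError struct {
	Code    int         `json:"code"`
	Message string      `json:"message"`
	Data    interface{} `json:"data,omitempty"`
}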

TestMultipleGet is racy

Sometimes fails on CI with this output:

=== RUN   TestMultipleGet
{"level":"error","error":"No state for /1623943149477: datastore: key not found","time":"2021-06-17T15:19:10Z","message":"attempting to configure data store"}
2021-06-17T15:19:10.488Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-06-17T15:19:10.489Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
{"level":"error","error":"No state for /1623943149478: datastore: key not found","time":"2021-06-17T15:19:10Z","message":"attempting to configure data store"}
2021-06-17T15:19:10.499Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-06-17T15:19:10.500Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-06-17T15:19:10.512Z	ERROR	gs-traversal	runtraversal/runtraversal.go:43	failed to load link=bafy2bzacecddt5b3fodntlgkrtin6nj6ybrflotjzcpqutw4zgxdpk73lfnnk, nBlocksRead=0, err=blockstore: block not found
2021-06-17T15:19:10.512Z	ERROR	gs-traversal	runtraversal/runtraversal.go:32	traversal completion check failed, nBlocksRead=0, err=skip
    node_test.go:623: 
        	Error Trace:	node_test.go:623
        	            				popn.go:280
        	            				popn.go:697
        	            				popn.go:752
        	            				node_test.go:626
        	Error:      	Not equal: 
        	            	expected: ""
        	            	actual  : "context deadline exceeded"
        	            	
        	            	Diff:
        	            	--- Expected
        	            	+++ Actual
        	            	@@ -1 +1 @@
        	            	-
        	            	+context deadline exceeded
        	Test:       	TestMultipleGet

Use a Myel node for bootstrap and gate connections from IPFS nodes

  • Confirm that a Myel node can be used as a bootstrap node. (I think DHT server mode is enabled by default, but I might be wrong, so you can double-check by connecting 2 nodes to a 3rd Myel node and seeing if they discover each other.)
  • Use a connection gater to prevent Myel nodes from connecting to IPFS nodes. Two ways to go about it would be checking either the user agent header or the protocols a peer supports.

Requesting the same content multiple times from the same provider fails

Problem

Gossipsub prevents peers from receiving duplicate messages, so when requesting the same CID multiple times generates the same message, providers will treat the later requests as duplicates and not reply again. This could be an issue if a client needs to request the same content multiple times, for example because they don't have space to cache it locally.

Solution

We need some type of ID to differentiate the requests; a sketch of one option follows.
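
A minimal sketch: a random nonce on the query so identical CIDs still hash to distinct gossipsub message IDs. The Query shape is hypothetical:

package exchange

import (
	"crypto/rand"
	"encoding/binary"

	"github.com/ipfs/go-cid"
)

// Query is a sketch of the gossip message with a per-request nonce.
type Query struct {
	PayloadCID cid.Cid
	Nonce      uint64
}

func newQuery(c cid.Cid) (Query, error) {
	var b [8]byte
	if _, err := rand.Read(b[:]); err != nil {
		return Query{}, err
	}
	return Query{PayloadCID: c, Nonce: binary.BigEndian.Uint64(b[:])}, nil
}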

cli start -Bootstrap flag should allow a list of bootstrap addresses

Currently the flag is a string which is passed as a single slice item to the node options. The survey step should also support multiple addresses, either comma- or space-separated. You can test by adding the address of the gateway on top of the bootstrap node: /ip4/3.129.144.139/tcp/41505/p2p/12D3KooWQkRgaZoMykWoyWpdYUcCpzisd9vnkagnZsw3n3w7jbji.
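
A minimal sketch of the parsing; parseBootstrap is a hypothetical helper:

package cli

import (
	"strings"
	"unicode"
)

// parseBootstrap splits the -bootstrap flag value on commas and/or whitespace.
func parseBootstrap(s string) []string {
	return strings.FieldsFunc(s, func(r rune) bool {
		return r == ',' || unicode.IsSpace(r)
	})
}

The resulting slice can then be passed straight to the node options in place of the current single-item slice.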

Add protocol multiplexer

Explore the benefits of using a connection-multiplexing library such as cmux, which would allow Pop to serve different protocols (HTTP / RPC / TCP / ...) on the same listener.

Generate SSL certificates for Myel providers

How could Myel providers who run a Pop on their home devices easily generate an SSL certificate so clients can retrieve over WSS?

Benefits:
Each swarm could use its own SSL certificates to ensure:

  1. Possible communication between peers using web browsers and the providers
  2. A safe communication channel

Problem:

  1. Deal with ACME DNS challenge
  2. Might be too much of a single point of failure (SPOF)

Hints:

Create a garbage collection strategy

Since we use a single blockstore for storing everything, eviction does not garbage-collect the designated blocks but only removes their references in the index. I'm still debating between periodic garbage collection and triggering it upon eviction. Creating some testground scenarios might be useful to compare both strategies.

Create nodes with "testing" roles

We need nodes with "testing" roles in that they are capable of autonomously:

  • Sending files to be cached by other peers.
  • Retrieving files from peers at random time intervals.
  • Logging and displaying stats from these actions.

The nodes should be capable of running multiple test scenarios.

e.g. nodes send File A 99% of the time to emulate the coverage of a "popular" / oft-requested file. They send File B 1% of the time and then gather stats that compare coverage and retrieval performance for A and B.

e.g. nodes have a library of files in order of increasing size. They collect stats on pushing / retrieval times relative to file size.

TestMultiTx is racy

Failed in CI with output:

{"level":"error","error":"No state for /1626167599029: datastore: key not found","time":"2021-07-13T09:13:20Z","message":"attempting to configure data store"}
2021-07-13T09:13:20.035Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-07-13T09:13:20.035Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
{"level":"error","error":"No state for /1626167599030: datastore: key not found","time":"2021-07-13T09:13:20Z","message":"attempting to configure data store"}
2021-07-13T09:13:20.051Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-07-13T09:13:20.051Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `0`, event `4`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
2021-07-13T09:13:20.053Z	ERROR	fsm	fsm/fsm.go:92	Executing event planner failed: Invalid transition in queue, state `6`, event `27`:
    github.com/filecoin-project/go-statemachine/fsm.eventProcessor.Apply
        /home/runner/go/pkg/mod/github.com/filecoin-project/[email protected]/fsm/eventprocessor.go:137
    tx_test.go:415: could not finish gtx2

Improve payment channel settlement on the provider side

  • Context: In the case of a simple retrieval, the provider can redeem the received vouchers and settle the channel once the transfer is finished.
  • Problem: if the client wishes to reuse a payment channel a while longer, say for progressively retrieving parts of a DAG, it becomes more complex for the provider to know when a good time to settle the payment channel is.
  • Naive solution: wait for the client to call settle so the provider knows the client no longer needs the channel and can redeem all the vouchers as one. This is nice because it means the provider needn't pay for the settle gas costs.
  • Caveat: what if the client disappears for whatever reason without calling settle? The provider must then have a way to collect their earnings. It also means the provider must subscribe to chain events.
  • Enhanced solution: The client must set a MinSettleHeight param on the vouchers which guarantees no one can call settle before then. The provider reads the value and can decide to update the payment channel using the voucher or just wait knowing more transfers might be coming. If the client doesn't call settle by the chain height, the provider can just redeem and settle the channel.
  • Security consideration: providers shouldn't accept vouchers with a TimelockMin value more than 12h over the MinSettleHeight, as that would mean the client could call settle and collect back their funds before the provider can redeem the vouchers. A sketch of this check follows the list.
  • Additional improvements: Subscribing to chain epochs from a lotus node RPC puts too much strain and dependency on 3rd party infrastructure. Nodes should connect to a few lotus peers directly and subscribe to the gossip topic announcing new blocks. This could be a good start for enabling pushing blocks directly in the future.
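
A minimal sketch of the 12h security check, assuming ~30s Filecoin epochs and the SignedVoucher fields from the paych actor; acceptableVoucher is a hypothetical helper:

package provider

import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/specs-actors/actors/builtin/paych"
)

// Filecoin epochs are ~30s, so 12 hours is roughly 1440 epochs.
const maxTimelockSlack = abi.ChainEpoch(12 * 60 * 60 / 30)

// acceptableVoucher rejects vouchers whose timelock would let the client
// settle and collect before the provider can redeem.
func acceptableVoucher(v *paych.SignedVoucher) bool {
	return v.TimeLockMin <= v.MinSettleHeight+maxTimelockSlack
}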
