
lightnode's Issues

Caching data spends a lot of time marshaling/unmarshaling data.

Currently, the Lightnode's cacher uses a TTL-based key-value table, which marshals/unmarshals responses on every write/read. Since responses can be very large, this marshaling is expensive.

For a queryBlocks request, the response can be 6-10 MB. A single read/write can take 0.2-0.3 seconds.

The best solution I can think of is a TTL implementation of the bounded map that does not marshal/unmarshal on write/read.

Wait for confirmations before forwarding transactions to Darknodes

Lightnodes should accept RenVM transactions with any number of confirmations (including zero confirmations).

Until a transaction has the minimum number of confirmations (which depends on the underlying blockchain: Bitcoin, Bitcoin Cash, Zcash, Ethereum, etc.), the Lightnode should store the transaction and periodically check its confirmations.

Once the transaction has the minimum number of confirmations, the Lightnode must forward the transaction to multiple Darknodes.

Tracking

  • Add SQL support to the Lightnodes.
  • Persist transactions into SQL.
  • Periodically check transactions for the minimum number of confirmations.
  • Forward confirmed transactions to multiple Darknodes.
  • Periodically garbage collect unconfirmed transactions (they can be resubmitted later).
  • Periodically garbage collect completed transactions (only remember the last N number of transactions).
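The periodic check-and-forward step above could be sketched as follows. The `Tx` type, the per-chain confirmation minimums, and the `confirmations`/`forward` stubs are all illustrative assumptions, not the Lightnode's real API.

```go
package main

import (
	"fmt"
	"time"
)

// Tx is a hypothetical stand-in for a RenVM transaction.
type Tx struct {
	Hash  string
	Chain string
}

// minConfirmations per underlying chain (illustrative values only).
var minConfirmations = map[string]int{
	"Bitcoin":  6,
	"Ethereum": 12,
	"Zcash":    24,
}

// confirmations would query the underlying chain; stubbed here.
func confirmations(tx Tx) int { return 6 }

// forward would send the tx to multiple Darknodes; stubbed here.
func forward(tx Tx) { fmt.Println("forwarding", tx.Hash) }

// pollConfirmations checks pending txs once, forwards any that have
// reached their minimum confirmations, and returns the remainder.
func pollConfirmations(pending []Tx) []Tx {
	remaining := pending[:0]
	for _, tx := range pending {
		if confirmations(tx) >= minConfirmations[tx.Chain] {
			forward(tx)
			continue
		}
		remaining = append(remaining, tx)
	}
	return remaining
}

func main() {
	pending := []Tx{{Hash: "abc", Chain: "Bitcoin"}, {Hash: "def", Chain: "Ethereum"}}
	// In the real Lightnode this loop would run on a longer interval,
	// with pending txs persisted to SQL between ticks.
	ticker := time.NewTicker(time.Millisecond)
	defer ticker.Stop()
	<-ticker.C
	pending = pollConfirmations(pending)
	fmt.Println("still pending:", len(pending))
}
```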

Support for transaction tagging

Currently, there is no way to query transactions by any kind of application-specific or use-case-specific data. The only possibility is querying transactions by status. Ideally, RenVM is able to support user flows where the user is identified in an arbitrary way (this allows maximum flexibility between both chains and use-cases) and can query their transactions.

Design

This issue proposes adding an optional "tags" parameter to the "ren_submitTx" and "ren_queryTxs" methods. These tags act as indices for the transaction, and transactions can be looked up using these tags ("ren_queryTxs" will filter by the tags when the parameter is present).

This optional parameter is intentionally malleable (it can be modified during gossiping throughout the peer network), because different nodes might want to index transactions differently, and we do not want to force nodes to respect the index. In this sense, the only sensible way to use tags is when interacting with a trusted Lightnode (either one that is owned, or one that is managed by a trusted API provider like the Ren team). Note: using tags with untrusted Lightnodes is not a security concern, it just provides no guarantees that the tag will be respected.

Multiples

We should implement this as "tags": [] rather than "tag", and let nodes decide the maximum number of supported tags. Defaulting to 1 is essentially the same as only supporting "tag", but it is forward compatible with multiple tags if that becomes needed.

Pagination

A direct consequence of optional tags is pagination and ordering. The "ren_queryTxs" method currently requires no specific ordering. This needs to change so that transactions are ordered by their transaction hashes (given that most storage will already be indexing by this value, it is the least intrusive value to order by). We also need to include optional "page" and "pageSize" parameters (where each node enforces its own maximum page size). The "page" tells the node to skip the first page*pageSize transactions. The "pageSize" is the maximum number of transactions to return. Returning 0 transactions implies there are no more pages.
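The page/pageSize arithmetic above, including the clamping of negative values described in the implementation notes, is small enough to sketch directly. The function name and cap parameter are illustrative.

```go
package main

import "fmt"

// pageBounds converts the optional "page" and "pageSize" parameters into
// an offset/limit pair. Negative values are clamped to zero and pageSize
// is capped at the node's own maximum.
func pageBounds(page, pageSize, maxPageSize int) (offset, limit int) {
	if page < 0 {
		page = 0
	}
	if pageSize < 0 {
		pageSize = 0
	}
	if pageSize > maxPageSize {
		pageSize = maxPageSize
	}
	// Skip the first page*pageSize transactions.
	return page * pageSize, pageSize
}

func main() {
	offset, limit := pageBounds(2, 10, 10)
	fmt.Println(offset, limit) // skips 20 txs, returns up to 10
}
```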

Hosted Lightnodes

The hosted Lightnodes managed by the Ren team will respect tags. We will default to a maximum of 1 tag for now, and if there is evidence of need for more, we will reconsider this limit. Pages will be limited to 10 transactions.

Darknodes

Darknodes will not respect any tags. This minimises their resource requirements.

Implementation

  • Add the "ren_queryTxs" method.
  • Add an optional "tags" parameter to "ren_submitTx" and "ren_queryTxs". It must be an arbitrary list of 32-byte hashes.
  • Add optional "page" and "pageSize" parameters (both integers) to "ren_queryTxs". Negative values should be clamped to zero.
  • Add an option to the JSON-RPC 2.0 server to limit the number of "tags" supported and the maximum "pageSize" (default to one).
  • Add "tag0" as a column to the Lightnode SQL database.
  • Support SELECT * WHERE tag0 = ... style queries against the database when resolving content for the "ren_queryTxs" method.
  • Require "ren_queryTxs" to return transactions ordered by hash only when the pagination parameters are present. At all other times, transactions can be returned in any order (ordering only matters because of pagination).
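Combining the tag0 filter with hash-ordered pagination might look like the query builder below. The "txs" table and "hash"/"tag0" column names are assumptions for illustration, not the actual Lightnode schema.

```go
package main

import "fmt"

// buildQueryTxs assembles the SQL for a "ren_queryTxs" lookup against a
// hypothetical "txs" table with a "tag0" column. Ordering by hash keeps
// pagination stable across requests.
func buildQueryTxs(tag0 []byte, offset, limit int) (string, []interface{}) {
	if tag0 == nil {
		// No tag filter: plain paginated listing.
		return `SELECT hash FROM txs ORDER BY hash LIMIT $1 OFFSET $2`,
			[]interface{}{limit, offset}
	}
	// Tag filter present: restrict to matching tag0 before paginating.
	return `SELECT hash FROM txs WHERE tag0 = $1 ORDER BY hash LIMIT $2 OFFSET $3`,
		[]interface{}{tag0, limit, offset}
}

func main() {
	query, args := buildQueryTxs([]byte{0x01}, 0, 10)
	fmt.Println(query, len(args))
}
```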

GSN support in lightnode

Lightnodes can be configured (on a per-transaction basis) to turn "done" RenVM transactions into GSN-compatible transactions and submit them to the GSN. This would require the Lightnode to analyse the RenVM transaction payload, convert it into whatever format the GSN requires, and send it to the GSN. This should happen automatically, allowing users to send their BTC to a Bitcoin Gateway, shut down their machine, and walk away. In the background, the Lightnodes continue to handle everything else (waiting for confirmations, submission to RenVM, waiting for "done", and then optional submission to the GSN).

Lightnode redesign

Motivation

The current lightnode is not up to date with phi or the current darknode. The lightnode can no longer easily use the server implementation in the darknode, so this part will need to be rewritten. This is also a chance to take lessons from the previous implementations and simplify the design somewhat: requests sent to the lightnode should mostly just be passed through to the darknodes, and the only logic the lightnode should be concerned with is deciding which darknodes to send requests to and how to consolidate the (possibly multiple) responses from the darknodes into one response for the user.

Architecture

The architecture will be a basic pipeline, with some additions.

Pipeline Stages

Server

The first stage of the pipeline is the server. Rate limiting will live here. The only other task before handing the request on is some type conversion.

Validator

This stage will include logic that checks whether or not a request should be sent to the darknode. The more we can filter out at this stage, the easier it is going to be for the darknodes.

Cache

The lightnode will use caching to improve response times for repeat requests. A cached response can be keyed by the hash of the request object.

Dispatcher

This is responsible for sending a request to the darknodes and propagating a response. The logic here can vary in two ways:

  • how many/which darknodes will the request be sent to?
  • when sending a request to multiple darknodes, how will their responses be combined into a single response for the user that sent the request to the lightnode?

The interface for the dispatcher should be such that an implementer need only consider these two pieces of logic.
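One way to capture exactly those two pieces of logic in an interface, sketched below. The type names and the trivial `firstResult` policy are illustrative only; a real implementation might require a quorum of matching responses.

```go
package main

import "fmt"

// Request and Response are simplified stand-ins for the JSON-RPC types.
type Request struct{ Method string }
type Response struct{ Result string }

// Dispatcher captures the two decisions the design calls out: which
// darknodes receive the request, and how their responses collapse into one.
type Dispatcher interface {
	// Targets selects the darknode addresses the request is sent to.
	Targets(req Request, known []string) []string
	// Combine merges the (possibly multiple) darknode responses.
	Combine(responses []Response) Response
}

// firstResult is a trivial policy: send to all known darknodes and
// return the first response received.
type firstResult struct{}

func (firstResult) Targets(req Request, known []string) []string { return known }

func (firstResult) Combine(responses []Response) Response {
	if len(responses) == 0 {
		return Response{}
	}
	return responses[0]
}

func main() {
	var d Dispatcher = firstResult{}
	targets := d.Targets(Request{Method: "ren_queryTx"}, []string{"dn1", "dn2"})
	resp := d.Combine([]Response{{Result: "ok"}, {Result: "ok"}})
	fmt.Println(len(targets), resp.Result)
}
```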

Additional Parts

The lightnode will also need a background task that keeps track of the darknodes currently in the network. This can be achieved by the task periodically selecting a random subset of the known darknodes and querying them for their peers. After receiving these peers, it sends them to the dispatcher task, which maintains a store of darknode addresses and updates it with any new addresses.
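The sampling and store-merging halves of that background task could be sketched as below; function names are hypothetical and the network query itself is elided.

```go
package main

import (
	"fmt"
	"math/rand"
)

// sampleDarknodes picks a random subset of known darknodes to query for
// their peers; the dispatcher would merge the results into its store.
func sampleDarknodes(known []string, n int) []string {
	if n > len(known) {
		n = len(known)
	}
	idx := rand.Perm(len(known))[:n]
	subset := make([]string, 0, n)
	for _, i := range idx {
		subset = append(subset, known[i])
	}
	return subset
}

// mergePeers adds newly discovered addresses to the store, deduplicating
// against addresses that are already known.
func mergePeers(store map[string]bool, peers []string) {
	for _, p := range peers {
		store[p] = true
	}
}

func main() {
	store := map[string]bool{"dn1": true}
	mergePeers(store, sampleDarknodes([]string{"dn2", "dn3", "dn4"}, 2))
	fmt.Println(len(store))
}
```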

Improve handling of invalid or incorrectly formatted data

The Lightnode (on Devnet) currently makes some assumptions about the type of data it expects to receive. For example, a user can claim a value is of type extBtcCompatUTXO but provide a decimal, causing the Lightnode to panic.

Using connection pool for redis

We have seen a lot of the following error recently:

[watcher] error setting last checked block number in redis: ERR max number of clients reached

According to Heroku, using a connection pool might be a good solution for this error. We should look into the Go Redis client and start using a connection pool in the lightnode.
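For illustration of the idea, a minimal bounded pool using a buffered channel is sketched below; go-redis itself ships a built-in pool (configured via its options), so this is only to show why bounding connections avoids the "max number of clients reached" error. All names are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// Conn is a stand-in for a Redis connection.
type Conn struct{ id int }

// Pool bounds the number of simultaneously open connections so the node
// never exceeds the server's client limit.
type Pool struct {
	conns chan *Conn
}

func NewPool(size int) *Pool {
	p := &Pool{conns: make(chan *Conn, size)}
	for i := 0; i < size; i++ {
		p.conns <- &Conn{id: i}
	}
	return p
}

// Get fails fast when the pool is exhausted rather than opening a new
// client and tripping "max number of clients reached".
func (p *Pool) Get() (*Conn, error) {
	select {
	case c := <-p.conns:
		return c, nil
	default:
		return nil, errors.New("pool exhausted")
	}
}

// Put returns a connection to the pool for reuse.
func (p *Pool) Put(c *Conn) { p.conns <- c }

func main() {
	pool := NewPool(2)
	c1, _ := pool.Get()
	c2, _ := pool.Get()
	if _, err := pool.Get(); err != nil {
		fmt.Println("bounded:", err)
	}
	pool.Put(c1)
	pool.Put(c2)
}
```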

Improve test coverage

We want to improve test coverage for each package and have an integration test for the whole lightnode. Tests should be integrated into our existing CI and triggered on every new commit. Since the lightnode is the main interface users interact with, it is especially important that it behaves reliably.

Tests

  • Tests should check the error message is meaningful when the input is invalid or something goes wrong.
  • Benchmarking (use a top-level integration test that floods the Lightnode with many parallel requests).
  • Fuzzing (random bytes)
  • Fuzzing (random JSON)
  • Fuzzing (random well-formed but invalid requests)
  • Ensure that every package has some level of testing
  • Improve top-level integration testing
  • Aim for 80%+ test coverage.

CI

  • Use GitHub Actions to run all tests and report test coverage.
  • Update the badges in the README.md file.

Reuse code from the Darknode where possible to remove code duplication

The Lightnode has been updated (#25) to wait for confirmations before sending transactions to Darknodes. Due to the structure of the Darknode, there were certain aspects that could not be imported directly such as the transformer pipeline or hash calculations. This task involves updating the Lightnode (and the Darknode, where required) to make it possible to import these directly and prevent the codebases from falling out of sync in the future.

Migrate to GitHub Actions from CircleCI

We want to migrate all of our CI/CD into GitHub Actions so that we can minimize the number of different services that we are using, but still maximize the integration of different features. Moving to GitHub Actions means that we can drop CircleCI (which we have been having issues with), but still allows for a deep integration with GitHub.

Tracking

  • Translate the CircleCI config file into one that is compatible with GitHub Actions.
  • Check that Actions is passing all checks.
  • Require status checks from Actions to be passing before allowing PRs to merge.
