
polygon-dash's Introduction

Polygon Dash

Polygon Dash contains two projects in one repo:

  • Python backend app
  • Vue dashboard frontend

Polygon Dashboard

Usage

Polydash uses the Poetry dependency manager.

  1. Install Poetry
    curl -sSL https://install.python-poetry.org | python3 -
  2. Install dependencies
    poetry install
  3. Enter Poetry-managed Python environment
    poetry shell
  4. The shell prompt should change to something like (polydash-py3.10) ~/.../backend. Now run Polydash as a Python module
    python -m polydash

Frontend

Run the deploy script:

```
 ./deploy_frontend.sh
```

Installing Node.js using NVM

Ubuntu or macOS:

  1. Install NVM (Node Version Manager):

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

    On macOS, also run

    source ~/.zshrc
  2. Install the latest LTS version of Node.js using NVM:

    nvm install --lts
  3. After running the above command, you can confirm the installation by checking the Node.js version:

    node -v

Project Setup

npm install

Compile and Hot-Reload for Development

npm run dev

Compile and Minify for Production

npm run build

Lint with ESLint

npm run lint

Run app in Docker

Bring up and rebuild the backend:

docker compose up --build

polygon-dash's People

Contributors

shockedshodan, ichorid, kagameow, super-dev-rust, grimadas

polygon-dash's Issues

Transactions pre-confirmation dashboard

We want to build a dashboard showing all the transactions currently in the Polygon mempool and the probability of those being mined in the next block. The dashboard must work in real time.

Roadmap:

  • Normalize the schema in the current database (to keep it DRY and reduce the stored data)
  • Convert the mempool history to use hosted Timescale DB (reduce storage size)
  • Create a table with denormalized data for pending and just-mined transactions
  • Implement a simple data-series-based algorithm for predicting the probability of a transaction being mined in the next block (gas-based)
  • Add an endpoint for querying a transaction's probability of being mined within the next X blocks (for use with Policy Vault and Keyper Wallet)
  • Design and implement the frontend for the new element of the dashboard (pending transactions and probabilities)

Improve the deanon process

There are several fixes/improvements for the deanon process:

  1. Do not stop at the highest-confidence match, discarding a node we have already put into the give-away list; instead, continue the process. This fixes the issue where only a handful of IPs were returned by the system. Should be handled by this MR
  2. Check the IP address for liveness - this fixes the issue of giving away dead IP addresses
  3. Merge the tables DeanonNodeByBlock and DeanonNodeByTx to make the addition of new heuristics simpler
  4. Introduce more heuristics to map PeerID to PubKey (see the sketch after this list); some examples are:
    • look at the PK of the transaction signer - if the transaction was received from peer P, we can increase the confidence of the association between P and PK
    • normalize the first heuristic's score by dividing it by the total number of txs/blocks seen from this peer(?)
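
A minimal sketch of how the signer-PK heuristic and the proposed normalization could be tracked; the function names and in-memory counters are illustrative assumptions, not the existing polydash schema:

```
from collections import defaultdict

# Hypothetical in-memory confidence tracker for the PeerID -> PubKey mapping.
# confidence[(peer_id, pubkey)] counts supporting observations, while
# peer_totals[peer_id] counts all txs/blocks seen from that peer, so the
# score can be normalized as suggested in item 4.
confidence = defaultdict(int)
peer_totals = defaultdict(int)

def observe_signed_tx(peer_id: str, signer_pubkey: str) -> None:
    """Heuristic: a tx signed by `signer_pubkey` was first received from `peer_id`."""
    confidence[(peer_id, signer_pubkey)] += 1
    peer_totals[peer_id] += 1

def normalized_confidence(peer_id: str, pubkey: str) -> float:
    """Confidence normalized by the total number of txs seen from this peer."""
    total = peer_totals[peer_id]
    return confidence[(peer_id, pubkey)] / total if total else 0.0
```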

Simple transaction pre-confirmation algorithm

The algorithm estimates how long a transaction must wait until it is accepted into a block. The estimation is based on its mempool popularity, gas fees, etc. In time, we will be able to analyze the historical mempool data and find better strategies.

The goal is an accurate estimation of the likelihood that a transaction will be accepted (or not) within the next X blocks.
The estimation is expressed as a confidence from 0% to 100% and is conservative by design.

For the first version, we want the simplest solution possible.

Algorithm Details

The algorithm will be developed incrementally in phases.

Phase 1. Simple Algorithm based on Gas Estimate of pending transactions

We closely monitor the network for pending transactions and record those into our local database with a timestamp. Each new transaction goes into the “pending” list.
We also monitor the blocks to mark some pending transactions as “final.” When marked as “final,” they are removed from the pending list.
The pending transactions are sorted/ranked according to their priority in the block. Gray transactions are all pending transactions, and blue transactions are transactions that (we estimate) will be selected for the next block.
When a user asks whether some transaction X has priority, we answer High if it is in the estimated selected subset and Low if not (see the sketch below).

(Figure: pending transactions ranked by priority; gray = all pending, blue = estimated next-block subset.)
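
A minimal sketch of the Phase 1 idea, assuming transactions are ranked purely by gas price and the next block is filled up to a fixed gas limit; the 30M gas limit and the PendingTx fields are illustrative assumptions, not polydash's actual data model:

```
from dataclasses import dataclass

BLOCK_GAS_LIMIT = 30_000_000  # assumed block gas limit, for illustration only

@dataclass
class PendingTx:
    tx_hash: str
    gas_price: int  # wei
    gas_limit: int

def estimate_next_block(pending: list[PendingTx]) -> set[str]:
    """Greedily fill the next block with the highest-paying pending txs."""
    selected, used_gas = set(), 0
    for tx in sorted(pending, key=lambda t: t.gas_price, reverse=True):
        if used_gas + tx.gas_limit <= BLOCK_GAS_LIMIT:
            selected.add(tx.tx_hash)
            used_gas += tx.gas_limit
    return selected

def priority(tx_hash: str, pending: list[PendingTx]) -> str:
    """High if the tx is in the estimated next-block subset, Low otherwise."""
    return "High" if tx_hash in estimate_next_block(pending) else "Low"
```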

Similar Projects and Links

BlockCypher - Bitcoin confidence factor

https://www.blockcypher.com/dev/bitcoin/#confidence-factor

https://eprint.iacr.org/2012/248.pdf

RPC Router

We want to be able to "best-effort" route RPC transactions to the best miner based on the rankings served by our dashboard. To do that, we should create a kind of "transparent proxy" imitating the Alchemy/Infura API format.
The logic of the proxy is as follows (a sketch follows the list):

  • "write"-type requests (e.g. send transaction) are routed to the best-ranked available miner
  • all other request types (e.g. "read" blockchain state) are forwarded to the centralized RPC providers Alchemy/Infura
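
A minimal sketch of the routing rule, assuming standard Ethereum JSON-RPC method names; the miner-ranking lookup and upstream URLs are placeholders, not an existing polydash API:

```
# "Write" methods from the standard Ethereum JSON-RPC spec; everything else
# is treated as a read and forwarded to a centralized provider.
WRITE_METHODS = {"eth_sendRawTransaction", "eth_sendTransaction"}

def route(request: dict, best_miner_url: str, fallback_rpc_url: str) -> str:
    """Return the upstream URL a JSON-RPC request should be forwarded to."""
    if request.get("method") in WRITE_METHODS:
        return best_miner_url   # route to the best-ranked available miner
    return fallback_rpc_url     # forward to Alchemy/Infura
```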

Scalable mempool collection engine

Our mempool collection method (single-threaded local polling) is currently unscalable and inefficient. We should:

  • separate the mempool collector into an independent bot-service
  • switch to a different DB engine that is more scalable and better suited for distributed events gathering
  • create a Docker container with the collector bot and create some AWS/Ansible/etc scripts for easy deployment

Related to #8

Add pending transactions table

Extracting data from a fully normalized remote DB can be costly. We can simplify and speed up the process by effectively caching the set of pending transactions for use in the upcoming transactions dashboard. To do so, we will create a single table with denormalized data for the currently pending transactions. As soon as a transaction is mined, it will be marked for removal from this table after a short period (e.g. an hour).

Alternatively, we may opt for an in-memory data structure, completely omitting SQL usage.
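
A minimal sketch of what the denormalized table and its cleanup could look like, using SQLAlchemy; the column set, table name, and one-hour retention are illustrative assumptions:

```
from datetime import datetime, timedelta
from sqlalchemy import Column, DateTime, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class PendingTransaction(Base):
    """Denormalized cache of currently pending transactions (illustrative)."""
    __tablename__ = "pending_transactions"
    tx_hash = Column(String, primary_key=True)
    sender = Column(String, nullable=False)
    gas_price = Column(Numeric, nullable=False)
    first_seen = Column(DateTime, nullable=False)
    mined_at = Column(DateTime, nullable=True)  # set once the tx is mined

def purge_mined(session, older_than=timedelta(hours=1)) -> None:
    """Remove rows that were mined more than `older_than` ago."""
    cutoff = datetime.utcnow() - older_than
    session.query(PendingTransaction).filter(
        PendingTransaction.mined_at.isnot(None),
        PendingTransaction.mined_at < cutoff,
    ).delete(synchronize_session=False)
```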

Normalize the transactions DB

Currently, some transaction data fields exist in multiple copies in the DB, e.g., transaction hashes used for addressing, etc. This poses two problems:

  1. storing multiple copies violates the DRY principle, which may lead to inconsistencies and errors
  2. storing long fields (e.g. hashes) is less space-efficient than storing short keys

Before moving onward, we need to normalize the database.
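
A minimal illustration of the target shape: each hash is stored exactly once and referenced elsewhere by a short integer key; the table and column names are hypothetical, not the current polydash schema:

```
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Transaction(Base):
    """The long hash is stored once, keyed by a compact integer id."""
    __tablename__ = "transactions"
    id = Column(Integer, primary_key=True)
    tx_hash = Column(String, unique=True, nullable=False)

class MempoolSighting(Base):
    """Other tables reference the short integer key instead of repeating the hash."""
    __tablename__ = "mempool_sightings"
    id = Column(Integer, primary_key=True)
    tx_id = Column(Integer, ForeignKey("transactions.id"), nullable=False)
    peer_id = Column(String, nullable=False)
    seen_at = Column(DateTime, nullable=False)
```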

Dashboard redesign

The dashboard design should get a facelift and display more data.
This will require some attention from a UI designer first.

Transaction pre-confirmation endpoint

To serve the probabilities of transactions being mined, we need to add a new endpoint to our backend. The endpoint schema should be as close to the Alchemy/Infura format as possible to ease interoperability (a sketch follows the list of users below).

There will be two main users for the endpoint:

  • the dashboard web UI
  • Guardian Vault / Keyper Wallet apps
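
A minimal sketch of one possible shape for the endpoint, assuming a FastAPI-style backend; the route, parameter names, and response fields are illustrative assumptions rather than the final Alchemy/Infura-compatible schema:

```
from fastapi import FastAPI

app = FastAPI()

def estimate_confidence(tx_hash: str, blocks: int) -> float:
    """Placeholder: the pre-confirmation algorithm plugs in here."""
    return 0.0

@app.get("/v1/tx/{tx_hash}/confidence")
def tx_confidence(tx_hash: str, blocks: int = 1) -> dict:
    """Confidence (0-100%) that `tx_hash` is mined within the next `blocks` blocks."""
    return {
        "tx_hash": tx_hash,
        "blocks": blocks,
        "confidence": estimate_confidence(tx_hash, blocks),
    }
```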

Database optimization

SQLite is insufficient for the dashboard's needs; it already drops transactions. We should switch to PostgreSQL, Redis, or some other scalable solution.

Attack on Polygon

We should demonstrate an MEV and/or censorship attack on the Polygon network and prevent/detect it with our Dashboard and MEV-router tech.

Miner deanonymization

The principal problem in making the findings of our dashboard actionable is associating miner identities/addresses from blocks with exact RPC endpoints. We should resort to some of the following methods (a sketch of the first follows the list):

  • passively observing what RPC node (miner) reported the block/transaction first
  • actively sending transactions to RPCs to test hypotheses about what RPC is associated with what node
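
A minimal sketch of the passive heuristic: credit whichever RPC endpoint reported a block first with that block's miner address; all names here are illustrative, not existing polydash code:

```
from collections import defaultdict

first_reporter: dict[str, str] = {}  # block_hash -> rpc_endpoint that reported it first
association: dict[tuple[str, str], int] = defaultdict(int)  # (rpc_endpoint, miner_address) -> score

def on_block_seen(block_hash: str, miner_address: str, rpc_endpoint: str) -> None:
    """Record the first reporter of a block and strengthen its link to the miner."""
    if block_hash not in first_reporter:
        first_reporter[block_hash] = rpc_endpoint
        association[(rpc_endpoint, miner_address)] += 1
```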

Move mempool data to Timescale DB

Historical mempool data takes up a lot of space. Essentially, transaction occurrences in validators' mempools are a time series of events.
TimescaleDB, a time-series extension for Postgres that is also available as a managed cloud service on AWS, provides an excellent opportunity to reduce our storage requirements dozens of times. Basically, Timescale introduces specialized, time-series-optimized compression for Postgres tables.

We should move our Postgres instance to Timescale's cloud ASAP to reduce operational costs.
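
A minimal sketch of the migration, assuming a hypothetical mempool_events table with a seen_at timestamp column; the compression interval would need tuning:

```
import psycopg2

# Hypothetical table/column names; the DSN is a placeholder.
STATEMENTS = [
    "CREATE EXTENSION IF NOT EXISTS timescaledb",
    "SELECT create_hypertable('mempool_events', 'seen_at', if_not_exists => TRUE)",
    "ALTER TABLE mempool_events SET (timescaledb.compress)",
    "SELECT add_compression_policy('mempool_events', INTERVAL '7 days')",
]

def migrate(dsn: str) -> None:
    """Turn an existing mempool events table into a compressed hypertable."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)
```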
