
oracle-core's Introduction

Oracle Core v2.0

The oracle core requires that the user has access to a full node wallet in order to create transactions and perform UTXO-set scanning. Furthermore, each oracle core is designed to work with only a single oracle pool. If an operator runs oracles in several oracle pools, a single full node can be used, but a separate oracle-core instance must be run for each pool (each configured with a different API port).

The current oracle core is built to run the protocol specified in the EIP-0023 PR.

Getting started

Docker Image

AMD64 and ARM64 images are available from the Docker Hub repo.

The container runs as the oracle-core user (UID 9010). If you use a bind mount for the container's /data folder (where config files and other data live), set the host folder's ownership to that UID (e.g. chown -R 9010:9010 oracle_data).

An example docker run command:

docker run -d \
 -v /path/on/host:/data \
 -p 9010:9010 \
 -p 9011:9011 \
 -e ORACLE_NODE_API_KEY=CHANGE_ME_KEY \
 ergoplatform/oracle-core:latest

To enter container shell for debugging or pool modifications:

docker exec -it -u oracle-core <container id> /bin/sh

Download

Get the latest release binary from Releases, or install it from source with:

cargo install --path core

If you want to run it as a systemd daemon, check out this section. Run it with oracle-core --help or oracle-core <SUBCOMMAND> --help to see the available commands and their options.

Setup

Generate an oracle config file from the default template with:

oracle-core generate-oracle-config

and set the required parameters:

  • oracle_address - the node wallet address this oracle-core instance will use (to pay tx fees, hold tokens, etc.). Make sure it has coins;
  • node_url - the node's URL;
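For illustration, a minimal excerpt of the generated oracle_config.yaml with these two parameters filled in (the values below are placeholders, assuming a locally running mainnet node; all other fields are left as generated by the template):

# Illustrative excerpt only — replace the placeholders with your own values.
oracle_address: "<your node wallet address with some ERG>"
node_url: http://127.0.0.1:9053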

Set the environment variable ORACLE_NODE_API_KEY to the node's API key. You can put it in the .secrets file and then run source .secrets to load it into the environment. This way, the key does not get stored in the shell history.
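For example, the .secrets file could contain just the export line (the key value is a placeholder):

# .secrets - loaded with `source .secrets` so the key stays out of the shell history
export ORACLE_NODE_API_KEY=CHANGE_ME_KEY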

Bootstrapping a new oracle pool

To bootstrap a new oracle pool:

  • Run
oracle-core bootstrap --generate-config-template bootstrap.yaml

to generate an example of the bootstrap config file.

  • Edit bootstrap.yaml (see the parameters list below);
  • Make sure the node's wallet is unlocked;
  • Run
oracle-core bootstrap bootstrap.yaml

to mint the tokens and create the pool, refresh, and update boxes. The pool_config.yaml file will be generated; it contains the configuration needed to run this pool;

  • Run an oracle with
oracle-core run

Bootstrap parameters available to edit:

  • [token]:name, description - token names and descriptions that will be used to mint tokens;
  • [token]:quantity - number of tokens to mint;
  • data_point_source - can be one of the following: NanoErgUsd, NanoErgXau, NanoErgAda;
  • min_data_points - minimal number of posted datapoint boxes needed to update the pool box (consensus);
  • max_deviation_percent - a cut-off for the lowest and highest posted datapoints (i.e., datapoints that deviate more than this are filtered out and do not take part in the refresh of the pool box);
  • epoch_length - minimal number of blocks between refresh (pool box) actions;
  • min_votes - minimal number of posted ballot boxes voting for a change to the pool box contracts;
  • min_storage_rent - box value in nanoERG used in oracle and ballot boxes;

Check out How I bootstrapped an ERG/XAU pool on testnet report for an example.
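For illustration, a hedged sketch of what a filled-in bootstrap.yaml might look like. The parameter names come from the list above; the per-token layout and all values are illustrative assumptions, so treat the template generated by --generate-config-template as the authoritative format:

# Illustrative values only.
data_point_source: NanoErgUsd
epoch_length: 30
min_data_points: 4
max_deviation_percent: 5
min_votes: 6
min_storage_rent: 10000000
# Assumed per-token layout following the "[token]: name, description, quantity" notation above.
oracle_tokens:
  name: example-oracle-token
  description: Oracle tokens for the example pool
  quantity: 15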

Invite new oracle to the running pool

To invite a new oracle, the person that bootstrapped the pool needs to send one oracle token and one reward token. On bootstrap, X oracle and reward tokens are sent to the oracle_address, where X is the total oracle token quantity minted on bootstrap. Use scripts/send_new_oracle.sh to send one oracle, one reward, and one ballot token. Besides the tokens, the pool config file you are currently running should be sent as well: send pool_config.yaml to the new oracle.

Joining a running pool

To join an existing pool, one oracle token and one reward token must be received at the address that will be used as oracle_address in the oracle's config file. The received pool_config.yaml config file must be placed accordingly.

To run the oracle:

  • Make sure the node's wallet is unlocked;
  • Run an oracle with
oracle-core run

Extract reward tokens

Since earned reward tokens accumulate in the oracle box, there is a command to send all accumulated reward tokens minus 1 (needed for the contract) to the specified address:

oracle-core extract-reward-tokens <ADDRESS>

To show the amount of accumulated reward tokens in the oracle box, run

oracle-core print-reward-tokens

Transfer the oracle token to a new operator

Be aware that reward tokens currently accumulated in the oracle box should be extracted with the extract-reward-tokens command first, before transferring the oracle token to the new address. Run

oracle-core transfer-oracle-token <ADDRESS>

Ensure the new address has enough coins for tx fees to run in the pool. As with inviting a new oracle, the pool config file you are currently running should be sent as well: send pool_config.yaml to the new operator.

Updating the contracts/tokens

Changes to the contracts (parameters) and tokens can be done in three steps:

  • prepare-update command submits a new refresh box with the updated refresh contract;
  • vote-update-pool command submits oracle's ballot box voting for the changes;
  • update-pool command submits the update transaction, which produces a new pool box.

Each step is described below. See also the detailed instructions on Updating the epoch length.

Create a new refresh box with prepare-update command

Create a YAML file describing which contract parameters should be updated (see an example of such a YAML file in Updating the epoch length). Then run:

oracle-core prepare-update <YAML file>

This will generate the pool_config_updated.yaml config file, which should be used in the update-pool command. The output also shows the new pool box contract hash and the reward token amounts for the subsequent dozen epochs, to be used in the vote-update-pool command run by the oracles in the next step.

Vote for contract update with vote-update-pool command

Run

oracle-core vote-update-pool <NEW_POOL_BOX_ADDRESS_HASH_STR> <UPDATE_BOX_CREATION_HEIGHT>

Where:

  • <NEW_POOL_BOX_ADDRESS_HASH_STR> - base16-encoded blake2b hash of the serialized pool box contract for the new pool box
  • <UPDATE_BOX_CREATION_HEIGHT> - The creation height of the existing update box.

are required parameters, with the following optional ones (in case a new reward token is minted):

  • <REWARD_TOKEN_ID_STR> - base16-encoded reward token id in the new pool box (use existing if unchanged)
  • <REWARD_TOKEN_AMOUNT> - reward token amount in the pool box at the time the update transaction is committed

They are printed in the output of the prepare-update command.

Update the pool box contract with update-pool command

Make sure the pool_config_updated.yaml config file generated during the prepare-update command is in the same folder as the oracle-core binary. Run

oracle-core update-pool

With optional parameters (only if a new reward token was minted):

  • <REWARD_TOKEN_ID_STR> - base16-encoded reward token id in the new pool box;
  • <REWARD_TOKEN_AMOUNT> - reward token amount in the pool box at the time the update transaction is committed.

This will submit an update tx. After the update tx is confirmed, remove scanIds.json and use pool_config_updated.yaml to run the oracle (i.e., rename it to pool_config.yaml and restart the oracle). Distribute the pool_config.yaml file to all the oracles and make sure they delete scanIds.json before restarting.
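As a sketch, the per-oracle steps after the update tx is confirmed could look like this (assuming all files live in the oracle's working directory and the oracle is run directly rather than via systemd):

rm scanIds.json
mv pool_config_updated.yaml pool_config.yaml
oracle-core run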

Import update pool config with import-pool-update command

Make sure the pool_config_updated.yaml config file generated during the prepare-update command is at hand. Run

oracle-core import-pool-update pool_config_updated.yaml

This will update pool_config.yaml and remove scanIds.json. Restart the oracle afterwards.

How to run as systemd daemon

To run oracle-core as a systemd unit, the unit file in systemd/oracle-core.service should be installed. The default configuration file path is ~/.config/oracle-core/oracle_config.yaml; this can be changed inside the .service file:

cp systemd/oracle-core.service ~/.config/systemd/user/oracle-core.service
systemctl --user enable oracle-core.service
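After enabling the unit, it can be started and its logs followed with the standard user-session systemd commands, for example:

systemctl --user start oracle-core.service
journalctl --user -u oracle-core.service -f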

Verifying contracts against EIP-23

It is recommended to check that the contracts used are indeed coming from EIP-23. Run the following command to get encoded hashes of each contract:

./oracle-core print-contract-hashes

or if running from source files:

cargo test check_contract_hashes -- --nocapture

Check these values against those described in EIP-23.

Metrics

Prometheus metrics are disabled by default and can be enabled by setting the metrics_port parameter in the oracle config file. A Grafana dashboard is available in the scripts folder.
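For example, a hedged config excerpt enabling metrics (assuming port 9011, the second port published in the Docker example above, is the one you want to expose for metrics):

# Illustrative oracle config excerpt — pick any free port.
metrics_port: 9011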

oracle-core's People

Contributors

alesfatalis, greenhat, hammerbu, hkedia, kettlebell, kushti, mmahut, reqlez, robkorn, ross-weir, scalahub, sethdusek, snaksolaso


oracle-core's Issues

[300 SigmaUSD] Vote for update(pool) command

Should be done after #45 is merged.

To vote for the pool box update as per https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#vote-for-update (with updates introduced in v2.0a, e.g. reward tokens were moved from the refresh box to the pool box), add a new command line option --vote-update-pool with the following parameters:

  • new pool box hash (blake2b256);
  • height of the update box;

If the ballot token is locked in the existing ballot box (i.e. output of the previous update tx) it should be spent to create the new ballot box.

Bootstrap the oracle pool command

The goal is to automate the whole bootstrap process for the new oracle pool and run it with a single command.

Run bootstrap via a command-line option and a special config file with bootstrap parameters: "--bootstrap bootstrap_config.yaml".

In the bootstrap_config.yaml the following parameters are required:

The bootstrap process mints:

  • pool NFT token;
  • refresh NFT token;
  • oracle tokens;
  • reward tokens;

and creates the pool and refresh boxes via chained transactions (inputs for the subsequent transactions are outputs of the previous transactions in the mempool) in one block. The contracts should be constructed from the contracts defined in the contracts module with minted tokens via with* calls.

Oracle tokens should be put in a box guarded by the node wallet's PK for distribution among the oracles later.

The bootstrap process also generates an output config file to run the oracle pool app. In the output config file, the following info should be present:

  • minted token IDs

Display Token IDs/Parameters of Oracle Pool after Bootstrap

Hi, after a successful bootstrap, it would be great if all of the Token IDs could be displayed, ideally in the same format as oracle_config.yaml, so that parameters can be easily copied into the oracle_config file. Right now you need to open explorer with the Transaction IDs and copy the token parameters from there.

I can assign myself to this issue if you think this would be a worthwhile addition

Systemd compatibility

Adapt the app to be launched as daemon under systemd.
To check:

  • output, logging;
  • signal handling;

Test bootstrapping and running oracles on testnet

Scenario:

  1. Bootstrap a new oracle pool for the XAU/ERG pair with a minimum consensus of 2 oracles;
  2. Start 1 oracle;
  3. Check that pool box refresh is failing;
  4. Start the 2nd oracle;
  5. Check that pool box refresh is working;
  6. Check that reward tokens are transferred from the pool box into oracle boxes;

[300 SigmaUSD] Rescan blocks after registering set-scans

Hi, while doing some testnet runs, I noticed that after registering the scans, you have to use the node interface to rescan. Ideally oracle-core should call the rescan endpoint itself. However, we would need some way to store the height of the bootstrap transaction(s), because rescanning from height 0 can be quite time consuming.

Maybe bootstrap should store "creation_height" as a temporary file, and when oracle-core run/OraclePool::new() is called, it should check that file and rescan from there after registering scans.
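A rough sketch of that idea in Rust (the creation_height file name comes from the suggestion above; the rescan_from callback is hypothetical, standing in for whatever node rescan call is eventually used):

use std::fs;

/// Rescan from the height recorded by bootstrap, if such a file exists.
fn rescan_from_bootstrap_height<F>(rescan_from: F) -> Result<(), String>
where
    F: Fn(u32) -> Result<(), String>,
{
    if let Ok(contents) = fs::read_to_string("creation_height") {
        let height: u32 = contents
            .trim()
            .parse()
            .map_err(|e: std::num::ParseIntError| e.to_string())?;
        // Ask the node to rescan starting from the bootstrap height instead of 0.
        rescan_from(height)?;
    }
    Ok(())
}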

Monitoring the oracle pool

Motivation

During the oracle operation, various events might occur that require the operator's involvement to resolve (failing to post datapoint, oracle count dropping below consensus level, etc.). The goal is to provide the metrics fed into the external monitoring system.

Implementation details

Provide Prometheus-compatible metrics and a Grafana dashboard for the following data:

  • the height of the current refresh box;
  • current height reported by the node;
  • oracle datapoint boxes count in the current epoch;
  • min oracle datapoint boxes needed for consensus;

With these metrics exposed, an operator would be able to build various alerts using alertmanager/grafana/pagerduty/etc. For example, "current height reported by the node - height of the current refresh box > epoch_length" would produce an alert if no datapoint collection happened during the last epoch.
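For instance, a hypothetical Prometheus alerting rule along those lines (the metric names are assumptions, not the actual names exposed by oracle-core, and 30 stands in for epoch_length):

groups:
  - name: oracle-pool
    rules:
      - alert: NoDatapointCollectionInLastEpoch
        # node height minus refresh box height exceeding the epoch length means no refresh happened
        expr: oracle_node_height - oracle_refresh_box_height > 30
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "No pool box refresh happened during the last epoch"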

Get working pool on testnet

Hi @greenhat I've got an ergo node on testnet running and have been trying to get oracle bootstrap happening (there are a bunch of bugs on develop branch that I will fix in upcoming commits). But I immediately have a problem when trying to sign the first transaction by the node for minting the first token.

Using the wallet/transaction/sign endpoint, JSON payload for the UnsignedTransaction is:

{
  "inputs": [
    {
      "boxId": "4bdd60da23c2dcbb347626845ac66b3a92297273022ed3c93d61b4e0f5a2ec66",
      "extension": {}
    }
  ],
  "dataInputs": [],
  "outputs": [
    {
      "value": 1000000,
      "ergoTree": "0008cd030458fff90453884bec45858745be6d48748ceb4559944f996fbc67a7ebb81189",
      "assets": [
        {
          "tokenId": "4bdd60da23c2dcbb347626845ac66b3a92297273022ed3c93d61b4e0f5a2ec66",
          "amount": 1
        }
      ],
      "additionalRegisters": {
        "R4": "0e134552472f555344204f5020706f6f6c204e4654",
        "R5": "0e134552472f555344204f5020506f6f6c204e4654",
        "R6": "0e0131"
      },
      "creationHeight": 235914
    },
    {
      "value": 14000000,
      "ergoTree": "0008cd030458fff90453884bec45858745be6d48748ceb4559944f996fbc67a7ebb81189",
      "assets": null,
      "additionalRegisters": {},
      "creationHeight": 235914
    },
    {
      "value": 59984000000,
      "ergoTree": "0008cd02428fbf621f43617dbb2b949427f7ef888003b9f83c741bbf971ddb48182bc8e0",
      "assets": null,
      "additionalRegisters": {},
      "creationHeight": 235914
    },
    {
      "value": 1000000,
      "ergoTree": "1005040004000e36100204a00b08cd0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798ea02d192a39a8cc7a701730073011001020402d19683030193a38cc7b2a57300000193c2b2a57301007473027303830108cdeeac93b1a57304",
      "assets": null,
      "additionalRegisters": {},
      "creationHeight": 235914
    }
  ]
}

Note that the existing code wraps up the transaction with

{ 
  "tx": "...json above"
}

Now the node rejects this request with the following error message:

Node Response: The request content was malformed:
C[A]: DownField(assets),MoveRight,DownArray,DownField(outputs),DownField(tx)

Now it looks to me that the problem is with the assets field under tx.outputs[1]. Any idea what's going on?

Submit string as datapoint...

Hi... I am interested in Ergo and the oracle pools concept. I am trying to launch an oracle pool on my local Ergo testnet, but I need to submit data types other than just numbers (f64) as the datapoint.
Is there any specific reason that this version of oracle-core and the connector work with just numbers (f64 or u64) as the datapoint?
I changed all datapoint-related data types (u64 and f64) to str and modified the connector and oracle-core modules in the connector project, and now I'm trying to change the deviation checks in oracle-core/api.rs, but I wonder whether there is any logical obstacle in my way.
Thanks in advance for your help.

[300 SigmaUSD] Network agnostic-ness

It should be done after #80 is merged.

I've looked through the code and did not find a reason to distinguish between mainnet and testnet.

Implementation details.
Use network-agnostic Address for *ContractParameters::p2s. Parse base58 address with Address::unchecked_parse_from_str().
Contracts in conf files should be encoded as base16 serialized bytes (node/explorer format).

EDIT: We should probably leave contracts as P2S strings in conf files, to be able to use plutomonkey for contract compilation.

[300 SigmaUSD] Transfer oracle token command

The oracle token should be transferred with a command --transfer-oracle-token with the following parameters:

  • public key (P2PK address) of the new owner;

According to https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#transfer-oracle-token the new output box should be created with the new owner's PK stored in R4. To avoid accidental reward token transfers, let's exit if there is more than one reward token in the input oracle box (see https://github.com/ergoplatform/eips/pull/41/files#r870440838).

[200 SigmaUSD] Newtypes for pool, refresh, oracle, etc. tokens

Motivation

To employ type checks in API where Token is expected now.

Implementation details

Make newtype for each token type and use them instead of Token and TokenId throughout the codebase. Each newtype checked ctor should check the token id against the one loaded from the config.
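A minimal sketch of what such a newtype with a checked constructor could look like (the type and field names here are stand-ins, not the project's or ergo-lib's actual types):

// Stand-in types for illustration only.
#[derive(Debug, Clone, PartialEq, Eq)]
struct TokenId(String);

#[derive(Debug, Clone)]
struct Token {
    id: TokenId,
    amount: u64,
}

#[derive(Debug)]
enum TokenError {
    WrongId { expected: TokenId, got: TokenId },
}

#[derive(Debug, Clone)]
struct OracleToken(Token);

impl OracleToken {
    /// Checked ctor: accept the token only if its id matches the one loaded from the config.
    fn new(token: Token, configured_id: &TokenId) -> Result<Self, TokenError> {
        if &token.id == configured_id {
            Ok(OracleToken(token))
        } else {
            Err(TokenError::WrongId {
                expected: configured_id.clone(),
                got: token.id,
            })
        }
    }
}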

[100 SigmaUSD] Remove node's address conversion

Namely

/// Given a P2S Ergo address, extract the hex-encoded serialized ErgoTree (script)
pub fn address_to_tree(address: &P2SAddressString) -> Result<String> {
    new_node_interface().p2s_to_tree(address)
}

/// Given a P2S Ergo address, convert it to a hex-encoded Sigma byte array constant
pub fn address_to_bytes(address: &P2SAddressString) -> Result<String> {
    new_node_interface().p2s_to_bytes(address)
}

/// Given an Ergo P2PK Address, convert it to a raw hex-encoded EC point
pub fn address_to_raw(address: &P2PKAddressString) -> Result<String> {
    new_node_interface().p2pk_to_raw(address)
}

/// Given an Ergo P2PK Address, convert it to a raw hex-encoded EC point
/// and prepend the type bytes so it is encoded and ready
/// to be used in a register.
pub fn address_to_raw_for_register(address: &P2PKAddressString) -> Result<String> {
    new_node_interface().p2pk_to_raw_for_register(address)
}

/// Given a raw hex-encoded EC point, convert it to a P2PK address
pub fn raw_to_address(raw: &String) -> Result<P2PKAddressString> {
    new_node_interface().raw_to_p2pk(raw)
}

/// Given a raw hex-encoded EC point from a register (thus with type encoded characters in front),
/// convert it to a P2PK address
pub fn raw_from_register_to_address(typed_raw: &String) -> Result<P2PKAddressString> {
    new_node_interface().raw_from_register_to_p2pk(typed_raw)
}

Replace any used functions with local implementations.

CI tests for complex scenarios

Motivation

To test more complex scenarios where outputs from one operation are the inputs to another operation. For example, consider the following scenario:

  1. Bootstrap new oracle pool;
  2. Invite oracles (send them oracle token + reward token);
  3. Invited oracles posting their datapoints;
  4. Run refresh command to collect posted datapoints;
  5. Repeat steps 3, and 4 one more time.

Manually testing this scenario is a tedious and error-prone job.

Implementation details

Firing up real nodes with attached oracles is a no-go. Let's try to simulate the node's role and the whole UTXO model with a simplified implementation similar to the one we have in https://github.com/ergoplatform/ergo-playgrounds (BlockchainSimulation and Party traits and their implementation)

Consider updating ergo-node-interface

Hi, working on getting pool setup on testnet, and I was unable to start the bootstrap process. It keeps erroring with the following:

Node Response: The request content was malformed:
String: DownArray,DownField(inputsRaw)

After some testing, it appears that the serialization of UnsignedTransaction in the current ergo-node-interface is the issue:

       let endpoint = "/wallet/transaction/sign";
       let prepared_body = json!({ "tx": unsigned_tx, "inputsRaw": boxes_to_spend, "dataInputsRaw": data_input_boxes });

Comparing this to the serialization of UnsignedTransaction in ergo-lib, it seems the correct field is inputs and not inputsRaw, since that seems to be a different format (boxes are serialized and not in JSON form). Newer versions of ergo-node-interface fix this. I can send a PR to update it if you wish

[100 SigmaUSD] Add ERG/XAU pair datapoint source

Coingecko API should be able to show ERG in XAU prices. See discord - https://discord.com/channels/668903786361651200/669989266478202917/986627954568204320

See the existing NanoErgUsd datapoint source as an example -

impl DataPointSource for NanoErgUsd {
    fn get_datapoint(&self) -> Result<i64, DataPointSourceError> {
        get_nanoerg_usd_price()
    }
}

// Number of nanoErgs in a single Erg
static NANO_ERG_CONVERSION: f64 = 1000000000.0;

static CG_RATE_URL: &str =
    "https://api.coingecko.com/api/v3/simple/price?ids=ergo&vs_currencies=USD";

/// Acquires the price of Ergs in USD from CoinGecko, convert it
/// into nanoErgs per 1 USD, and return it.
fn get_nanoerg_usd_price() -> Result<i64, DataPointSourceError> {
    let resp = reqwest::blocking::Client::new().get(CG_RATE_URL).send()?;
    let price_json = json::parse(&resp.text()?)?;
    if let Some(p) = price_json["ergo"]["usd"].as_f64() {
        // Convert from price Erg/USD to nanoErgs per 1 USD
        let nanoerg_price = (1.0 / p) * NANO_ERG_CONVERSION;
        Ok(nanoerg_price as i64)
    } else {
        Err(DataPointSourceError::JsonMissingField)
    }
}

Add new datapoint source handling in the oracle config load -

match &*self.data_point_source {
    "NanoErgUsd" => Box::new(NanoErgUsd),
    "NanoAdaUsd" => Box::new(NanoAdaUsd),
    _ => return Err(anyhow!("Config: data_point_source is invalid (must be one of 'NanoErgUsd' or 'NanoAdaUsd')")),
}

Remove side-effects from `main`

Currently the main function is bloating up due to side effects from CLI commands. Refactor it to have a similar workflow to pool commands.

Convert ErgoTree Scans to Coll[Byte]

See: ergoplatform/ergo#1781 (comment)

Currently, scans for contract addresses fail, because it seems the ergo node expects a serialized Coll[Byte] in scans. The easiest way to convert ErgoTree to this format would be something like:

fn ergo_tree_to_scan_bytes(tree: &ErgoTree) -> String {
   base16::encode_lower(Constant::from(tree.sigma_serialize_bytes().unwrap()).sigma_serialize_bytes())
}

edit: I have a fix in my updatecontract branch, and I can confirm that the scans now work. Before, the ergo node was also throwing an exception in one of the scans.

[200 SigmaUSD] Build the binary and publish it as a release artifact via CI

We'd want to have binaries available to download from the GitHub releases page for Linux, macOS, and ideally Windows.
The build should be triggered when a GitHub release is created.

EDIT:
To support old GLIBC versions starting from RHEL 8 which has 2.28 let's try building on the ubuntu-20.04 image (see https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners) on GA.
Here is an explanation of why GLIBC version matters in static binary build - https://www.reddit.com/r/rust/comments/n6udyk/building_rust_binaries_in_ci_that_work_with_older/

Consider avoiding `with_...` call chains to build a contract

Motivation

Builder pattern for contracts might be a bad idea. We might end up with a partially updated contract if some call to with... is missed. I have not yet seen a case where we need to update only one parameter of the contract.

Implementation details

A simple c'tor with required parameters might be a better option than a builder pattern.
Don't rely on ORACLE_CONFIG. I suspect we might get rid of it since we might need to mock some options in tests.

[500 SigmaUSD] Implement submit data point operation

Remove the existing separate app which submits data points to the oracle main app via REST API.

Implement data point submission according to EIP-23 in the oracle main app in the main event loop using process-build_action-execute_action workflow (see PoolCommand::Refresh and PoolAction::Refresh for examples).

The overall design should accommodate using an alternative custom data point source by running the shell command provided via the config file option (which can be hard-coded for now).
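A hedged sketch of such a shell-command-backed data point source (the function name is hypothetical, not the project's actual API; the command's stdout is assumed to be the datapoint as a plain integer):

use std::process::Command;

/// Run the configured shell command and parse its stdout as the datapoint value.
fn datapoint_from_shell_command(cmd: &str) -> Result<i64, String> {
    let output = Command::new("sh")
        .arg("-c")
        .arg(cmd)
        .output()
        .map_err(|e| e.to_string())?;
    if !output.status.success() {
        return Err(format!("command exited with {}", output.status));
    }
    let stdout = String::from_utf8(output.stdout).map_err(|e| e.to_string())?;
    stdout.trim().parse::<i64>().map_err(|e| e.to_string())
}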

References

EIP-23 - https://github.com/ergoplatform/eips/blob/eip23/eip-0023.md
Contracts and tests - https://github.com/scalahub/OraclePool

[100 SigmaUSD] Minted update NFT should be locked in update box on bootstrap

After #51

On the bootstrap op, the minted update NFT should be placed in the update box locked with the update contract: https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#update-contract
Otherwise, a token holder would be able to singlehandedly change the pool box (see the pool contract: https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#pool-contract).

Fix first run UX

When the app is launched for the first time (without oracle-config.yml present), show a friendly error message and the bootstrap command help.

Negative tests for refresh action

Add tests for refresh action that simulate the environment where refresh action should fail (not enough oracles, node stuck, etc.)

[700 SigmaUSD] Contracts with new parameters

After #28

1. Make contract instances static via lazy_static (see the existing ORACLE_CONFIG).

  1. Make contract ctors build contracts with new parameters (via with_* calls);
  2. Load the contracts (P2S, indices for constants) from the config and use parameters loaded from the config file (token ids, etc.) to make updated contracts (via the ctors from 1);
  3. Ideally, we'd want to check that the contracts with updated parameters from 2 are the same as those currently deployed in the pool. This could be achieved by logging a warning when we're fetching the boxes from the node in Scan. We cannot panic since we could be in the middle of updating contracts/boxes via voting.

EDIT: static contract instances are not viable since we need to be able to pass(inject) contract instances in tests without config file.

Consider separating parameter values from `*ContractParameters`

Now we're using them as expected constant values on contract parsing. I suggest we remove them from *ContractParameters since the structs are already heavy. Plus, we might introduce a new struct below.

In bootstrap, we should be able to set new min_data_points and other contract parameter values rather than compiling the contract with a new value to have the new constant value.

OR

re-designate them to be treated as new parameters on bootstrap or expected parameters in all other cases. Distinguish via two methods in *Contract (load/create?)

[100 SigmaUSD] Check that node wallet is unlocked on app start

Since most of the commands at some point require an unlocked node wallet we should check it explicitly. Check that wallet is unlocked for the relevant commands in main() on command dispatch. Write an error log message and panic if the wallet is locked.

Extract reward(tokens) commands

Reward tokens should be extracted with a command --extract-reward-tokens with the following parameters:

  • public key (P2PK address) to guard the output box with the extracted tokens;

As per https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#extract-reward-tokens one reward token should be left in the new oracle box. This means that the command should move all but one reward token to the output box. If there is only one reward token in the oracle box, the command should exit immediately.

Additionally, we need a command --print-reward-tokens to check the available reward tokens in our oracle box.
