ergoplatform / oracle-core
Core off-chain component of Oracle Pools
License: Apache License 2.0
Add tests for the refresh action that simulate environments where it should fail (not enough oracles, node stuck, etc.).
Commit hash from which the app was built should be included in the app version.
Todo:
Add a --version flag that prints the app version and exits.
The oracle token should be transferred with a new command, --transfer-oracle-token, with the following parameters:
According to https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#transfer-oracle-token, a new output box should be created with the new owner's PK stored in R4. To avoid accidental reward token transfers, let's exit if there is more than one reward token in the input oracle box (see https://github.com/ergoplatform/eips/pull/41/files#r870440838).
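The guard against accidental reward-token transfers might be sketched roughly like this; Token here is a stand-in for the ergo-lib type, and the real check would inspect the input oracle box's token list:

```rust
/// Stand-in for the ergo-lib Token type.
#[derive(Debug)]
struct Token {
    token_id: String,
    amount: u64,
}

/// Returns an error if the oracle box carries more than one reward token,
/// so an accidental reward-token transfer aborts before the tx is built.
fn check_reward_tokens(reward_token: &Token) -> Result<(), String> {
    if reward_token.amount > 1 {
        Err(format!(
            "refusing to transfer: oracle box holds {} reward tokens (expected 1)",
            reward_token.amount
        ))
    } else {
        Ok(())
    }
}
```

The command would run this check on the input oracle box and exit on `Err` before signing anything.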
Remove the existing separate app which submits data points to the main oracle app via its REST API.
Implement data point submission according to EIP-23 in the main oracle app's event loop, using the process -> build_action -> execute_action workflow (see PoolCommand::Refresh and PoolAction::Refresh for examples).
The overall design should accommodate using an alternative custom data point source by running the shell command provided via the config file option (which can be hard-coded for now).
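A sketch of such a shell-command data point source, assuming the configured command prints the data point on stdout and an i64 data point type (both assumptions for illustration):

```rust
use std::process::Command;

/// Runs a user-supplied shell command (from the config file) and parses
/// its stdout as the data point.
fn fetch_custom_datapoint(shell_cmd: &str) -> Result<i64, String> {
    let output = Command::new("sh")
        .arg("-c")
        .arg(shell_cmd)
        .output()
        .map_err(|e| format!("failed to run data point command: {}", e))?;
    if !output.status.success() {
        return Err(format!("data point command exited with {}", output.status));
    }
    let stdout = String::from_utf8(output.stdout)
        .map_err(|e| format!("non-UTF-8 output: {}", e))?;
    stdout
        .trim()
        .parse::<i64>()
        .map_err(|e| format!("cannot parse '{}' as i64: {}", stdout.trim(), e))
}
```

A hard-coded command string from the config would be passed straight through, e.g. `fetch_custom_datapoint("curl -s https://example.com/price | jq .nanoerg")`.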
EIP-23 - https://github.com/ergoplatform/eips/blob/eip23/eip-0023.md
Contracts and tests - https://github.com/scalahub/OraclePool
The CoinGecko API should be able to provide ERG prices in XAU. See Discord - https://discord.com/channels/668903786361651200/669989266478202917/986627954568204320
See the existing NanoErgUsd datapoint source as an example -
oracle-core/core/src/datapoint_source/erg_usd.rs
Lines 8 to 32 in fcffd8a
Add new datapoint source handling in the oracle config load -
oracle-core/core/src/oracle_config.rs
Lines 52 to 56 in fcffd8a
Make a struct with exactly typed fields and load the config file only once (via lazy_static).
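The load-once behavior could be sketched as follows. The issue asks for lazy_static; std's OnceLock (used here so the sketch stays dependency-free) gives the same load-once semantics. The field names are illustrative, not the actual oracle_config.yaml schema:

```rust
use std::sync::OnceLock;

/// Exactly-typed config struct (fields are illustrative).
#[derive(Debug)]
struct OracleConfig {
    node_url: String,
    epoch_length: u32,
}

/// Loads the config at most once; later calls return the cached instance.
fn oracle_config() -> &'static OracleConfig {
    static CONFIG: OnceLock<OracleConfig> = OnceLock::new();
    CONFIG.get_or_init(|| {
        // Real code would read and deserialize oracle_config.yaml here.
        OracleConfig {
            node_url: "http://127.0.0.1:9053".to_string(),
            epoch_length: 30,
        }
    })
}
```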
Should be done after #26 and after #51
As per https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#update-pool-box (with updates introduced in v2.0a, e.g. reward tokens were moved from the refresh box to the pool box), add a command line option --update-pool-box
with the following parameters:
After #28
1. Make contract instances static via lazy_static (see the existing ORACLE_CONFIG). (with_* calls); Scan. We cannot panic since we could be in the middle of updating contracts/boxes via voting.
EDIT: static contract instances are not viable, since we need to be able to pass (inject) contract instances in tests without a config file.
To test more complex scenarios where outputs from one operation are the inputs to another operation. For example, consider the following scenario:
Manually testing this scenario is a tedious and error-prone job.
Firing up real nodes with attached oracles is a no-go. Let's try to simulate the node's role and the whole UTXO model with a simplified implementation similar to the one we have in https://github.com/ergoplatform/ergo-playgrounds (the BlockchainSimulation and Party traits and their implementations).
To be in line with the node/explorer.
Now we're using them as expected constant values on contract parsing. I suggest we remove them from *ContractParameters, since the structs are already heavy. Plus, we might introduce a new struct below.
In bootstrap, we should be able to set new min_data_points and other contract parameter values, rather than compiling the contract with a new value to get the new constant value.
OR
re-designate them to be treated as new parameters on bootstrap, or as expected parameters in all other cases. Distinguish via two methods in *Contract (load/create?).
We'd want to have binaries available to download from the GitHub releases page for Linux, macOS, and ideally Windows.
The build should be triggered when a GitHub release is created.
EDIT:
To support old GLIBC versions, starting from RHEL 8 which has 2.28, let's try building on the ubuntu-20.04 image (see https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners) on GitHub Actions.
Here is an explanation of why the GLIBC version matters in static binary builds - https://www.reddit.com/r/rust/comments/n6udyk/building_rust_binaries_in_ci_that_work_with_older/
During oracle operation, various events might occur that require the operator's involvement to resolve (failing to post a data point, the oracle count dropping below the consensus level, etc.). The goal is to provide metrics that can be fed into an external monitoring system.
Provide Prometheus-compatible metrics and a Grafana dashboard for the following data:
With these metrics exposed, an operator would be able to build various alerts using alertmanager/grafana/pagerduty/etc. For example, "current height reported by the node - height of the current refresh box > epoch_length" will produce an alert if no data point collection happened during the last epoch.
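The example alert condition above can be expressed as a small predicate (a sketch; the u32 height type is an assumption):

```rust
/// True when the node height has moved more than one epoch past the
/// current refresh box's height, i.e. no data point collection happened
/// during the last epoch.
fn refresh_stalled(node_height: u32, refresh_box_height: u32, epoch_length: u32) -> bool {
    // saturating_sub guards against a refresh box that is (briefly)
    // ahead of the node's reported height.
    node_height.saturating_sub(refresh_box_height) > epoch_length
}
```

Exposed as two gauges, this becomes a one-line PromQL alert expression in alertmanager/grafana.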
Namely
oracle-core/core/src/node_interface.rs
Lines 114 to 145 in b852636
Scenario:
Instead of bootstrap_config_from_yaml, use serde_yaml for YAML parsing.
Also, we get serialization "for free" to provide the end-user with a generated bootstrap config example (like a bootstrap dump-config command). This way we avoid the need to ship the bootstrap config example externally.
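The dump-config idea can be sketched as follows. The struct fields are an illustrative subset, not the real bootstrap schema; real code would `#[derive(Serialize)]` and call serde_yaml::to_string, while the YAML is rendered by hand here to keep the sketch dependency-free:

```rust
/// Bootstrap parameters (illustrative subset of the schema).
struct BootstrapConfig {
    oracle_address: String,
    total_oracles: u32,
}

/// With serde this would be `serde_yaml::to_string(&config)` on a
/// `#[derive(Serialize)]` struct; rendered manually for the sketch.
fn dump_config_yaml(c: &BootstrapConfig) -> String {
    format!(
        "oracle_address: {}\ntotal_oracles: {}\n",
        c.oracle_address, c.total_oracles
    )
}
```

A `bootstrap dump-config` command would print this for a default-constructed config, giving users a ready-to-edit example.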
Hi, after a successful bootstrap, it would be great if all of the Token IDs could be displayed, ideally in the same format as oracle_config.yaml, so that parameters can be easily copied into the oracle_config file. Right now you need to open explorer with the Transaction IDs and copy the token parameters from there.
I can assign myself to this issue if you think this would be a worthwhile addition
The goal is to automate the whole bootstrap process for the new oracle pool and run it with a single command.
Run bootstrap via a command-line option and a special config file with bootstrap parameters: "--bootstrap bootstrap_config.yaml".
In the bootstrap_config.yaml the following parameters are required:
The bootstrap process mints:
and creates pool and refresh boxes via chained transactions (inputs for the subsequent transactions are outputs of the previous transactions in the mempool) in one block. The contracts should be constructed from the contracts defined in the contracts module, with minted tokens set via with* calls.
Oracle tokens should be put in a box guarded by the node wallet's PK for distribution among the oracles later.
The bootstrap process also generates an output config file to run the oracle pool app. In the output config file the following info should be present:
Hi, while doing some testnet runs, I noticed that after registering the scans, you have to use the node interface to rescan. Ideally, oracle-core should call the rescan endpoint itself. However, we would need some way to store the height of the bootstrap transaction(s), because rescanning from height 0 can be quite time-consuming.
Maybe bootstrap should store "creation_height" as a temporary file, and when oracle-core run/OraclePool::new() is called, it should check that file and rescan from there after registering scans.
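The temporary-file idea could be sketched like this (the file name, location, and u32 height type are assumptions):

```rust
use std::fs;
use std::path::Path;

/// Persist the bootstrap transaction height so a later `run` /
/// `OraclePool::new()` can rescan from there instead of from genesis.
fn store_creation_height(path: &Path, height: u32) -> std::io::Result<()> {
    fs::write(path, height.to_string())
}

/// Returns the stored height, or None if the file is absent or
/// unparseable, in which case the caller falls back to a full rescan.
fn load_creation_height(path: &Path) -> Option<u32> {
    fs::read_to_string(path).ok()?.trim().parse().ok()
}
```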
Currently the main function is bloating up due to side effects from CLI commands. Refactor it to have a similar workflow to pool commands.
Build failed due to errors building the openssl-sys crate - https://github.com/ergoplatform/oracle-core/runs/7742607564?check_suite_focus=true
See the contract source - https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#update-contract (with updates introduced in v2.0a, e.g. reward tokens were moved from the refresh box to the pool box).
For example, see how the other contracts (contracts::*) are implemented.
Hi @greenhat, I've got an ergo node on testnet running and have been trying to get the oracle bootstrap happening (there are a bunch of bugs on the develop branch that I will fix in upcoming commits). But I immediately have a problem when trying to get the node to sign the first transaction for minting the first token.
Using the wallet/transaction/sign endpoint, the JSON payload for the UnsignedTransaction is:
{
"inputs": [
{
"boxId": "4bdd60da23c2dcbb347626845ac66b3a92297273022ed3c93d61b4e0f5a2ec66",
"extension": {}
}
],
"dataInputs": [],
"outputs": [
{
"value": 1000000,
"ergoTree": "0008cd030458fff90453884bec45858745be6d48748ceb4559944f996fbc67a7ebb81189",
"assets": [
{
"tokenId": "4bdd60da23c2dcbb347626845ac66b3a92297273022ed3c93d61b4e0f5a2ec66",
"amount": 1
}
],
"additionalRegisters": {
"R4": "0e134552472f555344204f5020706f6f6c204e4654",
"R5": "0e134552472f555344204f5020506f6f6c204e4654",
"R6": "0e0131"
},
"creationHeight": 235914
},
{
"value": 14000000,
"ergoTree": "0008cd030458fff90453884bec45858745be6d48748ceb4559944f996fbc67a7ebb81189",
"assets": null,
"additionalRegisters": {},
"creationHeight": 235914
},
{
"value": 59984000000,
"ergoTree": "0008cd02428fbf621f43617dbb2b949427f7ef888003b9f83c741bbf971ddb48182bc8e0",
"assets": null,
"additionalRegisters": {},
"creationHeight": 235914
},
{
"value": 1000000,
"ergoTree": "1005040004000e36100204a00b08cd0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798ea02d192a39a8cc7a701730073011001020402d19683030193a38cc7b2a57300000193c2b2a57301007473027303830108cdeeac93b1a57304",
"assets": null,
"additionalRegisters": {},
"creationHeight": 235914
}
]
}
Note that the existing code wraps up the transaction with
{
"tx": "...json above"
}
Now the node rejects this request with the following error message:
Node Response: The request content was malformed:
C[A]: DownField(assets),MoveRight,DownArray,DownField(outputs),DownField(tx)
Now it looks to me like the problem is with the assets field under tx.outputs[1]. Any idea what's going on?
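The error cursor (DownField(assets), DownArray, DownField(outputs)) points at the outputs whose assets field is null. A plausible fix (an assumption here, not verified against the node's decoder) is that the node's JSON decoder expects an array and rejects null, so empty asset lists should be serialized as [] instead, e.g.:

```json
{
  "value": 14000000,
  "ergoTree": "0008cd030458fff90453884bec45858745be6d48748ceb4559944f996fbc67a7ebb81189",
  "assets": [],
  "additionalRegisters": {},
  "creationHeight": 235914
}
```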
It should be done after #80 is merged.
I've looked through the code and did not find a reason to distinguish between mainnet and testnet.
Implementation details.
Use the network-agnostic Address for *ContractParameters::p2s. Parse the base58 address with Address::unchecked_parse_from_str().
Contracts in conf files should be encoded as base16 serialized bytes (node/explorer format).
EDIT: We should probably leave contracts as P2S strings in conf files, to be able to use plutomonkey for contract compilation.
Existing contracts and actions are using v2.0 from https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md
The new v2.0a version, with reward tokens in the pool box (instead of in the refresh box), is available at https://github.com/scalahub/OraclePool/tree/v2/src/main/scala/oraclepool/v2a
The proof should be verifiable by anyone.
I'm thinking of hashes of the contracts that are published in the EIP-23 and verified in our tests.
I explained this idea in more detail at ergoplatform/ergo-playgrounds#25. Which should be implemented as a part of this issue.
To avoid manual name matching on config load at
oracle-core/core/src/oracle_config.rs
Lines 52 to 56 in fcffd8a
To avoid issues described in #80 (comment)
Otherwise, it's displayed as /10 with decimals set to 1 (as it is now).
In the TxBuilder::mint_token() call.
Adapt the app to be launched as a daemon under systemd.
To check:
Hi, I'm working on getting a pool set up on testnet, and I was unable to start the bootstrap process. It keeps erroring with the following:
Node Response: The request content was malformed:
String: DownArray,DownField(inputsRaw)
After some testing, it appears that the serialization of UnsignedTransaction in the current ergo-node-interface is the issue:
let endpoint = "/wallet/transaction/sign";
let prepared_body = json!({ "tx": unsigned_tx, "inputsRaw": boxes_to_spend, "dataInputsRaw": data_input_boxes });
Comparing this to the serialization of UnsignedTransaction in ergo-lib, it seems the correct field is inputs and not inputsRaw, since that seems to be a different format (boxes are serialized and not in JSON form). Newer versions of ergo-node-interface fix this. I can send a PR to update it if you wish
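For illustration, the "serialized, not JSON" format for inputsRaw could be sketched like this; ErgoBox here is a stand-in for the ergo-lib type (the real one provides sigma_serialize_bytes()), and the hex encoding is done by hand to keep the sketch dependency-free:

```rust
/// Stand-in for ergo-lib's ErgoBox.
struct ErgoBox {
    bytes: Vec<u8>,
}

impl ErgoBox {
    /// The real type serializes the full box; this stand-in just
    /// returns its stored bytes.
    fn sigma_serialize_bytes(&self) -> Vec<u8> {
        self.bytes.clone()
    }
}

/// `inputsRaw` expects base16-serialized boxes rather than JSON objects,
/// so each box is serialized and hex-encoded before going into the body.
fn boxes_to_inputs_raw(boxes: &[ErgoBox]) -> Vec<String> {
    boxes
        .iter()
        .map(|b| {
            b.sigma_serialize_bytes()
                .iter()
                .map(|byte| format!("{:02x}", byte))
                .collect::<String>()
        })
        .collect()
}
```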
To employ type checks in the API where Token is expected now.
Make a newtype for each token type and use them instead of Token and TokenId throughout the codebase. Each newtype's checked constructor should check the token id against the one loaded from the config.
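A sketch of one such newtype with a checked constructor; TokenId and Token are stand-ins for the ergo-lib types, and the error type is illustrative:

```rust
/// Stand-ins for ergo-lib's TokenId and Token.
#[derive(Clone, Debug, PartialEq, Eq)]
struct TokenId(String);

#[derive(Clone, Debug, PartialEq, Eq)]
struct Token {
    token_id: TokenId,
    amount: u64,
}

/// Newtype for the oracle token; can only be constructed from a token
/// whose id matches the one loaded from the config.
#[derive(Debug)]
struct OracleToken(Token);

impl OracleToken {
    fn checked_new(token: Token, config_oracle_token_id: &TokenId) -> Result<OracleToken, String> {
        if &token.token_id == config_oracle_token_id {
            Ok(OracleToken(token))
        } else {
            Err(format!(
                "token id {:?} is not the configured oracle token id",
                token.token_id
            ))
        }
    }
}
```

APIs can then take `OracleToken` (and sibling newtypes like a hypothetical `RewardToken`) instead of a bare `Token`, so mixing token kinds becomes a compile error.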
After #51
During bootstrap, the minted update NFT should be placed in the update box, locked with the update contract: https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#update-contract
Otherwise, a token holder would be able to single-handedly change the pool box (see the pool contract: https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#pool-contract).
To avoid #96.
Plus, there is no real need for all interested parties to have their own instance.
Reward tokens should be extracted with a command --extract-reward-tokens with the following parameters:
As per https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#extract-reward-tokens, one reward token should be left in the new oracle box. This means that the command should move all but one reward token to the output box. If there is only one reward token in the oracle box, the command should exit immediately.
Additionally, we need a --print-reward-tokens command to check the available reward tokens in our oracle box.
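The all-but-one rule can be sketched as a small helper (the function name and error strings are illustrative):

```rust
/// Returns how many reward tokens to move out, leaving exactly one in
/// the new oracle box as EIP-23 requires; errors out when there is
/// nothing to extract, so the command can exit immediately.
fn reward_tokens_to_extract(reward_tokens_in_box: u64) -> Result<u64, String> {
    match reward_tokens_in_box {
        0 => Err("no reward tokens in the oracle box".to_string()),
        1 => Err("only one reward token in the oracle box, nothing to extract".to_string()),
        n => Ok(n - 1),
    }
}
```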
Should be done after #45 is merged.
To vote for the pool box update as per https://github.com/ergoplatform/eips/blob/bd7a6098db78d67ac2ca9ada4c09bfe709d5ac03/eip-0023.md#vote-for-update (with updates introduced in v2.0a, e.g. reward tokens were moved from the refresh box to the pool box), add a new command line option --vote-update-pool with the following parameters:
If the ballot token is locked in the existing ballot box (i.e. output of the previous update tx) it should be spent to create the new ballot box.
These are not parameters to the contracts, and we already have TokensToMint::oracle_tokens and ballot_tokens.
See: ergoplatform/ergo#1781 (comment)
Currently, scans for contract addresses fail, because the ergo node seems to expect a serialized Coll[Byte] in scans. The easiest way to convert an ErgoTree to this format would be something like:
fn ergo_tree_to_scan_bytes(tree: &ErgoTree) -> String {
    base16::encode_lower(Constant::from(tree.sigma_serialize_bytes().unwrap()).sigma_serialize_bytes())
}
edit: I have a fix in my updatecontract branch, and I can confirm that the scans now work. Before, the ergo node was also throwing an exception in one of the scans.
Keep process as simple as possible. Fetch the data point in build_publish_datapoint_action.
When the app is launched for the first time (without oracle-config.yml present), show a friendly error message and the bootstrap command help.
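A sketch of the first-launch check; the message wording and function shape are illustrative:

```rust
use std::path::Path;

/// On first launch, point the user at bootstrap instead of failing with
/// a raw io error. Returns Some(message) when the config is missing.
fn config_check_message(config_path: &Path) -> Option<String> {
    if config_path.exists() {
        None
    } else {
        Some(format!(
            "Config file {} not found. If this is a new oracle, run the bootstrap command (see `--help` for usage).",
            config_path.display()
        ))
    }
}
```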
We don't need the whole pool box, just the current epoch count.
We don't use these parameters, and besides that, we have the token quantities set in the TokensToMint.
The builder pattern for contracts might be a bad idea. We might end up with a partially updated contract if some call to with... is missed. I have not yet seen a case where we need to update only one parameter of a contract.
A simple constructor with required parameters might be a better option than a builder pattern.
Don't rely on ORACLE_CONFIG. I suspect we might get rid of it, since we might need to mock some options in tests.
Since most of the commands at some point require an unlocked node wallet, we should check it explicitly. Check that the wallet is unlocked for the relevant commands in main() on command dispatch. Write an error log message and panic if the wallet is locked.
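A sketch of the dispatch-time check; the command list and function shapes are illustrative, and the unlocked flag would come from querying the node's wallet status in real code:

```rust
/// Which commands need an unlocked node wallet (illustrative list).
fn requires_unlocked_wallet(command: &str) -> bool {
    matches!(
        command,
        "bootstrap" | "run" | "transfer-oracle-token" | "extract-reward-tokens"
    )
}

/// Dispatch-time check: fail fast before running a command that would
/// otherwise die mid-way on a locked wallet; the caller logs the error
/// and panics.
fn check_wallet(command: &str, wallet_unlocked: bool) -> Result<(), String> {
    if requires_unlocked_wallet(command) && !wallet_unlocked {
        Err(format!(
            "node wallet is locked; unlock it before running '{}'",
            command
        ))
    } else {
        Ok(())
    }
}
```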
Hi, I am interested in Ergo and the oracle pools concept. I am trying to launch an oracle pool on my local Ergo testnet, but I need to submit data types other than just numbers (f64) as the data point.
I want to ask: is there any specific reason that this version of oracle-core and the connector work with just numbers (f64 or u64) as the data point?
I changed all the data point related data types (u64 and f64) to str and modified the connector and oracle-core modules in the connector project, and now I'm trying to change the deviation checks in oracle-core/api.rs, but I am not sure whether there is any logical obstacle in my way.
Thanks in advance for your help!