hyperledger-archives / sawtooth-core


Core repository for Sawtooth Distributed Ledger

Home Page: https://wiki.hyperledger.org/display/sawtooth

License: Apache License 2.0

Languages: Shell 2.00%, Python 71.67%, JavaScript 0.08%, Rust 24.56%, Dockerfile 1.54%, Just 0.14%
Topics: sawtooth, hyperledger, distributed-ledger, sawtooth-lake

sawtooth-core's Introduction

Hyperledger Sawtooth

This project has moved (see below).

Hyperledger Sawtooth was a project to provide an enterprise solution for building, deploying, and running distributed ledgers (also called blockchains).

Project Status

This Hyperledger project, Hyperledger Sawtooth, has been archived and is no longer active within Hyperledger.

Sawtooth is now maintained by the Splinter community. For more information, visit: https://github.com/splintercommunity/sawtooth-core/

License

Hyperledger Sawtooth software is licensed under the Apache License Version 2.0 software license.

sawtooth-core's People

Contributors

abdelkrim, agunde406, annechenette, arsulegai, askmish, boydjohnson, christo4ferris, cianx, danintel, danxmack, dcmiddle, delventhalz, dplumb94, feihujiang, jjason, jsmitchell, ltseeley, mfford, nconde, nick-drozd, ojalatodd, peterschwarz, pjholmes, rberg2, rbuysse, ryanlassigbanks, shannynalayna, tombarnes, trbs, vaporos

sawtooth-core's Issues

Unable to run the single-node Docker Compose environment on macOS or Windows:

sawtooth-validator-default | [2022-04-17 12:13:46.810 INFO cli] config [path]: config_dir = "/etc/sawtooth"; config [path]: key_dir = "/etc/sawtooth/keys"; config [path]: data_dir = "/var/lib/sawtooth"; config [path]: log_dir = "/var/log/sawtooth"; config [path]: policy_dir = "/etc/sawtooth/policy"
sawtooth-validator-default | [2022-04-17 12:13:46.811 WARNING cli] Network key pair is not configured, Network communications between validators will not be authenticated or encrypted.
sawtooth-validator-default | [2022-04-17 12:13:46.861 DEBUG state_verifier] verifying state in /var/lib/sawtooth/merkle-00.lmdb
sawtooth-validator-default exited with code 137

Lightweight Nodes

Add support for lightweight nodes. A lightweight node does not do validation, but instead relies on state receipts to reconstruct state. However, to clients it does look like a validator node (provides the same API), so it is possible to run a REST API against it and/or subscribe to events.

Non-publishing Nodes

Support validator nodes which do not publish blocks.

In addition to configuring a validator not to publish blocks, the rest of the network must still send blocks to that validator for validation while knowing it will not publish any.

Bug with ias_url

Hello, I've tried to modify /etc/sawtooth/poet_enclave_sgx.toml with the ias_url for a development environment. I inserted "https://api.trustedservices.intel.com/sgx/dev/" for ias_url, but when I run the command sawset proposal, it returns:

sawset proposal create -k /etc/sawtooth/keys/validator.priv sawtooth.consensus.algorithm.name=PoET sawtooth.consensus.algorithm.version=0.1 sawtooth.poet.report_public_key_pem="$(cat /etc/sawtooth/ias_rk_pub.pem)" sawtooth.poet.valid_enclave_measurements=$(poet enclave --enclave-module sgx measurement) sawtooth.poet.valid_enclave_basenames=$(poet enclave --enclave-module sgx basename) sawtooth.poet.enclave_module_name=sawtooth_poet_sgx.poet_enclave_sgx.poet_enclave -o config.batch
[21:36:52 WARNING poet_enclave] SGX PoET enclave initialized.
[21:36:53 ERROR   ias_client] get_signature_revocation_lists HTTP Error code : 404
[21:36:53 WARNING poet_enclave] Failed to retrieve initial sig rl from IAS: 404 Client Error: Resource Not Found for url: https://api.trustedservices.intel.com/attestation/v3/sigrl/00000AD9
[21:36:53 WARNING poet_enclave] Retrying in 60 sec

But the request should be sent to https://api.trustedservices.intel.com/sgx/dev/attestation/v3/sigrl/00000AD9
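For what it's worth, the URL in the error looks like the result of joining the request path as an absolute path, which drops the base path from ias_url. A minimal Python sketch of that behaviour (this is an assumption about how the client builds the URL, not the actual ias_client code):

from urllib.parse import urljoin

base = "https://api.trustedservices.intel.com/sgx/dev/"

# Relative path: the base path is preserved (the expected URL).
print(urljoin(base, "attestation/v3/sigrl/00000AD9"))
# -> https://api.trustedservices.intel.com/sgx/dev/attestation/v3/sigrl/00000AD9

# Absolute path: the base path is dropped (matches the URL in the error).
print(urljoin(base, "/attestation/v3/sigrl/00000AD9"))
# -> https://api.trustedservices.intel.com/attestation/v3/sigrl/00000AD9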

Partial Validation Nodes

Partial validation nodes are nodes that cannot themselves validate all transactions, perhaps because of a missing transaction processor, and instead rely on state receipt hashes to fill in the portions they cannot validate.

Is a batch in Sawtooth the same as in traditional two-phase commit?

Newbie here, please be patient with my question :). I was reading about two-phase commit in Oracle Blockchain Platform and wondering whether a Sawtooth batch achieves the same thing. Also, can a Sawtooth batch span multiple Sawtooth instances (chains), e.g. a distributed transaction across multiple Sawtooth chains?

Thanks,

Change the default database

Sawtooth saves batches to LMDB, and I would like to save these batches to another NoSQL database instead.
Please let me know whether this is possible; I would be thankful if you could share some code examples.
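For context, here is a purely illustrative sketch of the kind of adapter such a change would involve, assuming a backend with simple get/put semantics. The method names are hypothetical; the actual interface to match is the LMDB wrapper under sawtooth_validator/database/ (e.g. lmdb_nolock_database.py):

# Hypothetical sketch only, not the validator's real database contract.
class KeyValueDatabaseAdapter:
    """Stores validator key/value entries in an arbitrary backend."""

    def __init__(self, backend):
        # `backend` is anything dict-like: a NoSQL client wrapper,
        # or a plain dict for testing.
        self._backend = backend

    def get(self, key):
        return self._backend.get(key)

    def put(self, key, value):
        self._backend[key] = value

    def set_batch(self, add_pairs, del_keys=()):
        # The validator writes merkle-tree updates in batches.
        for key, value in add_pairs:
            self._backend[key] = value
        for key in del_keys:
            self._backend.pop(key, None)


# Quick experiment with a plain dict standing in for the NoSQL backend.
db = KeyValueDatabaseAdapter({})
db.set_batch([("abc", b"\x01"), ("def", b"\x02")])
print(db.get("abc"))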

Add append/remove to Setting Transaction Family

Currently the Settings Transaction Family only supports proposals that replace a setting entirely with a new value. This is technically usable for storing long lists of public keys, but becomes very unfriendly as those lists grow and are edited more frequently.

The SettingsProposal protobuf will need to be extended to accomplish this; currently it allows "value" but also needs to support "append" and "remove":

"append" - adds the specified value to the end of the setting, using an optional delimiter (defaults to ",")
"remove" - finds a value within a setting and removes it, based on an optional delimiter (defaults to ",")

Duplicated by - https://jira.hyperledger.org/browse/STL-261
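A rough sketch of the intended append/remove semantics, assuming comma-delimited setting values (illustrative only, not the settings TP implementation):

def append_setting(current, value, delimiter=","):
    """Add `value` to the end of an existing delimited setting."""
    if not current:
        return value
    return current + delimiter + value


def remove_setting(current, value, delimiter=","):
    """Remove `value` from a delimited setting, if present."""
    return delimiter.join(p for p in current.split(delimiter) if p != value)


# Example with truncated placeholder keys:
keys = "02d2...a0,03f1...b7"
keys = append_setting(keys, "02ab...9c")   # "02d2...a0,03f1...b7,02ab...9c"
keys = remove_setting(keys, "03f1...b7")   # "02d2...a0,02ab...9c"
print(keys)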

problem with docker-compose

I have been following the documentation to deploy a Sawtooth test network (using Docker) but failed with both the PoET and PBFT versions.

The nightly Docker tag is outdated and no longer works with the docker-compose file; however, I was able to deploy the PoET version by changing the image tags in docker-compose from nightly to latest. After this change, the "sawtooth peer list" and "curl http://sawtooth-rest-api-default-0:8008/peers" commands work properly.
Yet when I try to set a key named MyKey to the value 999 with the following command, I run into other problems:
intkey set --url http://sawtooth-rest-api-default-0:8008 MyKey 999
I get the response:
{
"link": "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=8a9cde13ada7bc4f06e6736acf54689515946d8865538544b6e71744a50f861d6621a9e5e94d073b495858116c7c7d5685c6341e65389af30d12bdea78d6b90c"
}

which seems reasonable according to the documentation, but when I run this command:
intkey show --url http://sawtooth-rest-api-default-1:8008 MyKey
I get the following response:
Error: No such key: MyKey

Now, if I curl the link returned by the intkey set command, I see that the status is pending:
curl "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=8a9cde13ada7bc4f06e6736acf54689515946d8865538544b6e71744a50f861d6621a9e5e94d073b495858116c7c7d5685c6341e65389af30d12bdea78d6b90c"
the response:
{
"data": [
{
"id": "8a9cde13ada7bc4f06e6736acf54689515946d8865538544b6e71744a50f861d6621a9e5e94d073b495858116c7c7d5685c6341e65389af30d12bdea78d6b90c",
"invalid_transactions": [],
"status": "PENDING"
}
],
"link": "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=8a9cde13ada7bc4f06e6736acf54689515946d8865538544b6e71744a50f861d6621a9e5e94d073b495858116c7c7d5685c6341e65389af30d12bdea78d6b90c"
}
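For reference, here is a small Python sketch (using the requests package) for polling that link until the batch leaves PENDING; the wait query parameter asks the REST API to hold the request open for up to that many seconds. A batch that never leaves PENDING has not been committed into a block yet:

import time
import requests

def wait_for_commit(status_link, timeout=60):
    """Poll a /batch_statuses link until the batch is no longer PENDING."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(status_link, params={"wait": 10})
        status = resp.json()["data"][0]["status"]
        if status != "PENDING":
            return status
        time.sleep(1)
    return "PENDING"

link = ("http://sawtooth-rest-api-default-0:8008/batch_statuses?id="
        "8a9cde13ada7bc4f06e6736acf54689515946d8865538544b6e71744a50f861d"
        "6621a9e5e94d073b495858116c7c7d5685c6341e65389af30d12bdea78d6b90c")
print(wait_for_commit(link))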

Sawtooth should run on modern systems

Less urgent than, but related to, #2454 (regarding OpenSSL): Sawtooth's documented

Supported operating systems: Ubuntu 16.04 ...

is now some way back, and the Dockerfiles are largely based on ubuntu:xenial and ubuntu:bionic.

At BTP we've had some success bringing Sawtooth up on more recent Debian; it may be worth getting it building on a debian:bookworm-slim image or suchlike.

Separately, Sawtooth currently seems to work best with old Docker Compose; v1 no longer even receives updates. It is probably worth updating the compose files to work with the current version. In this effort, also note the discussion on #2434.

Software library/package/crate dependencies for the source code may also be worth updating, for ease of both initial installation and subsequent bug patches. On the Rust side, cargo audit is useful.

In general, while backward compatibility is welcome where feasible, Sawtooth should work on a modern system with up-to-date dependencies.

This issue is created to capture discussion on versions of,

  • base Linux image
  • Docker and Docker Compose
  • software libraries

and to include related discussion on whether Sawtooth packages still need to be provided (e.g. *.deb) or whether Docker images suffice, which may inform what we can most feasibly achieve.

Inconsistency in the Sawtooth documentation

Hello! While trying to debug an issue using the Sawtooth documentation, I've noticed that the tutorial section found here has an inconsistency. The sawtooth-default-pbft.yaml and sawtooth-default-poet.yaml files point to the main branch, which has a different architecture than the 1-2 branch (the files in the main branch don't have the Transaction Processors attached to them).

Also, the sawtooth-kubernetes-default-pbft.yaml and sawtooth-kubernetes-default-poet.yaml links in the same tutorial section (just a little further down) lead to a page-not-found error. Cheers!

Rewrite REST API in Rust

Are we going to port the REST API to Rust for better performance?

If so, Rocket or actix-web? I would think this would make a nice self-contained project for someone, and we could keep both the Python implementation and the Rust implementation for a while until everyone is completely happy.

The ISOLATION_ID variable is not set. Defaulting to a blank string.

I'm trying to run

sudo docker-compose -f sawtooth-build.yaml up

But I'm getting the following error:

WARNING: The ISOLATION_ID variable is not set. Defaulting to a blank string. ERROR: no such image: sawtooth-validator-local:: invalid reference format

Is this issue because I didn't set up an HTTP proxy?

I am using Ubuntu 22.04 LTS and Docker version 23.0.1.

sawtooth-validator-default

When I use Docker for a single Sawtooth node, following the steps on the website, the problem below appears in Step 3 (Start the Sawtooth Docker Environment):

sawtooth-settings-tp-default | INFO | sawtooth_sdk::proces | Message: cf7742d809ff457fba0fc06eedc907e2
sawtooth-settings-tp-default | INFO | sawtooth_sdk::proces | sending PingResponse
sawtooth-validator-default | [2023-10-31 03:09:49.534 DEBUG interconnect] No response from 713cce62a68f7672e0ade9a65366537d1387fc01b6fea726893ab30449d76b92f62d207742a703762d9f866d9ea45e97c57061195fea078e28e9241ec2ff5e69 in 10.007903337478638 seconds - beginning heartbeat pings.
sawtooth-validator-default | [2023-10-31 03:09:49.535 DEBUG interconnect] No response from 02744ab3a5235fe142c08ed48ae078e1f91ff11615f09dda419b8b5759e7c449008a4f670648583d6259e7361d9c9a25db4f68a4e2f011bc74da3bb7ab8252ae in 10.007768869400024 seconds - beginning heartbeat pings.
sawtooth-validator-default | [2023-10-31 03:09:49.536 DEBUG interconnect] No response from 010132a6c832702654c52751072ccdf2fd5379befa1f245fcbe4cb25083689bd100e37f7905c56d709308f96c46067a16a0a66f6d094d5fda79505cead42c302 in 10.006694316864014 seconds - beginning heartbeat pings.
sawtooth-intkey-tp-python-default | [2023-10-31 03:09:49.536 DEBUG core] received message of type: PING_REQUEST
sawtooth-xo-tp-python-default | [2023-10-31 03:09:49.537 DEBUG core] received message of type: PING_REQUEST
sawtooth-settings-tp-default | INFO | sawtooth_sdk::proces | Message: c084d9c80c744fa79534ee9c2a5407c0
sawtooth-settings-tp-default | INFO | sawtooth_sdk::proces | sending PingResponse
sawtooth-validator-default | [2023-10-31 03:09:59.540 DEBUG interconnect] No response from 713cce62a68f7672e0ade9a65366537d1387fc01b6fea726893ab30449d76b92f62d207742a703762d9f866d9ea45e97c57061195fea078e28e9241ec2ff5e69 in 10.001838684082031 seconds - beginning heartbeat pings.
sawtooth-validator-default | [2023-10-31 03:09:59.541 DEBUG interconnect] No response from 02744ab3a5235fe142c08ed48ae078e1f91ff11615f09dda419b8b5759e7c449008a4f670648583d6259e7361d9c9a25db4f68a4e2f011bc74da3bb7ab8252ae in 10.00159239768982 seconds - beginning heartbeat pings.
sawtooth-validator-default | [2023-10-31 03:09:59.541 DEBUG interconnect] No response from 010132a6c832702654c52751072ccdf2fd5379befa1f245fcbe4cb25083689bd100e37f7905c56d709308f96c46067a16a0a66f6d094d5fda79505cead42c302 in 10.00166654586792 seconds - beginning heartbeat pings

So, is there a problem with one of the earlier steps, or with my proxy?

OpenSSL v1.1.1 EOL

OpenSSL v1.1.1 reaches its end of life on 11th September 2023. Beyond that, security fixes are available only via paid support.

Our current Docker builds rely on OpenSSL installed from Ubuntu bionic, i.e. libssl1.1. (We even still make some use of xenial.)

We should ensure that Sawtooth depends on a version of OpenSSL for which security patches remain available.

unify sawtooth CLI argument behavior

Subcommands vary unexpectedly. E.g.,

  • picking up a URL set in $SAWTOOTH_HOME/etc/cli.toml instead of requiring --url
  • parsing option -F as short for --format

then

  • identity and settings don't do those
  • block, peer, and state do those.

It's probably worth reviewing the command-line arguments across sawtooth's subcommands to look for easy opportunities for unification. With luck, some of this is a bit of copy-and-paste that can be factored out to a common helper.
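As a sketch of what such a shared helper might look like (the function names below are hypothetical, not existing sawtooth_cli code):

import argparse


def add_url_argument(parser, default_url=None):
    """Add --url consistently; the default could be read from cli.toml."""
    parser.add_argument(
        "--url",
        default=default_url,
        help="identify the URL of the validator's REST API")


def add_format_argument(parser):
    """Add --format with the -F short option on every subcommand."""
    parser.add_argument(
        "-F", "--format",
        default="default",
        choices=["csv", "json", "yaml", "default"],
        help="choose the output format")


parser = argparse.ArgumentParser(prog="sawtooth")
subparsers = parser.add_subparsers(dest="command")
for name in ("block", "peer", "state", "identity", "settings"):
    sub = subparsers.add_parser(name)
    add_url_argument(sub)
    add_format_argument(sub)

print(parser.parse_args(["block", "-F", "json"]))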

Doc for consensus engine

Hi,
I am currently doing an internship related to blockchain where we learned to work with Sawtooth.
Since I was editing the devmode engine and there was no documentation for consensus engines, I wrote some for future reference, using the devmode code as a basis. It's more my own notes (mostly complete for the code part): https://github.com/adi-g15/notes/blob/main/hyperledger/sawtooth/consensus-engine/docs-rust.md

In case that doc can help, I would be pleased to add/edit its contents 😅.

PoET not accepting any requests after getting a "too many requests" (429) error

1. Issue
I am running a heavy workload on a Sawtooth network for test purposes. When I run the network using PBFT or Raft consensus I get "too many requests" errors, but the network continues to accept requests. However, using PoET, the network stops accepting any requests after I get a 429 error.

2. System information

This is the validator configuration for PoET

validator-0:
    image: hyperledger/sawtooth-validator:chime
    container_name: sawtooth-validator-default-0
    expose:
      - 4004
      - 5050
      - 8800
    volumes:
      - poet-shared:/poet-shared
    command: "bash -c \"\
        sawadm keygen --force && \
        mkdir -p /poet-shared/validator-0 || true && \
        cp -a /etc/sawtooth/keys /poet-shared/validator-0/ && \
        while [ ! -f /poet-shared/poet-enclave-measurement ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet-enclave-basename ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet.batch ]; do sleep 1; done && \
        cp /poet-shared/poet.batch / && \
        sawset genesis \
          -k /etc/sawtooth/keys/validator.priv \
          -o config-genesis.batch && \
        sawset proposal create \
          -k /etc/sawtooth/keys/validator.priv \
          sawtooth.consensus.algorithm.name=PoET \
          sawtooth.consensus.algorithm.version=0.1 \
          sawtooth.poet.report_public_key_pem=\
          \\\"$$(cat /poet-shared/simulator_rk_pub.pem)\\\" \
          sawtooth.poet.valid_enclave_measurements=$$(cat /poet-shared/poet-enclave-measurement) \
          sawtooth.poet.valid_enclave_basenames=$$(cat /poet-shared/poet-enclave-basename) \
          -o config.batch && \
        sawset proposal create \
          -k /etc/sawtooth/keys/validator.priv \
             sawtooth.poet.target_wait_time=5 \
             sawtooth.poet.initial_wait_time=25 \
             sawtooth.publisher.max_batches_per_block=100 \
          -o poet-settings.batch && \
        sawadm genesis \
          config-genesis.batch config.batch poet.batch poet-settings.batch && \
        sawtooth-validator -v \
          --bind network:tcp://eth0:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --peering static \
          --endpoint tcp://validator-0:8800 \
          --scheduler parallel \
          --maximum-peer-connectivity 10000
    \""
    environment:
      PYTHONPATH: "/project/sawtooth-core/consensus/poet/common:\
        /project/sawtooth-core/consensus/poet/simulator:\
        /project/sawtooth-core/consensus/poet/core"
    stop_signal: SIGKILL

I am running the Sawtooth network on an AWS VM with 8 GB of RAM and 2 CPUs, running Ubuntu 18.04.

3. Question
Any idea how to solve this issue?
Is there a way to disable the back-pressure check?

Error while loading rust contract

I was trying to load the Hyperledger Grid smart contracts onto a Sawtooth network.

After successfully running the docker-compose file with 4 active validators, I am getting the following error.

I have only included the logs for one contract; the other 5 give the same error.

➜  ~ docker logs tnt-contract-builder
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270" }
Response Body:
StatusResponse {"data":[{"id": "958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270", "status": "PENDING", "invalid_transactions": []}], "link": "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270&wait=30"}
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=dc410ca4b86c35a7ce37f5b82a0c3c7a8209b980ad4638e6120b47703ba0fe9f6b02f717b68c89fb640b6cfda0d3f2e81f80f8750163d417e61141c09fa5bd53" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=eb421c54bb740a278fbd65ef37879a2fa193b897828cac1f250c8b8060899ce516b19c13fa63b1f081222402f2563dba7d707aace5c33594f587e7c8d5a051d9" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=a6d84ff7056902908026264bc224c046290f99337770d2b6913a5f76fe8ed5814e5277dc5244b9564d4d12823dc56a2ae1f7c4986f6ac7344a19a20fc382e639" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=a33bd2e40a34aac8399a8c39bfe073129a8abcaf23575247a521106af8eba43104aff6743efe26300a8ff3854dac20ec74492722d1f321aa02cf320131e83ef2" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=9e6140c04c1bd0cf39e281754bf4f8508fc07ab40887250f75e2dd13f9eab3cd4bf7ccacb8d3de88e0f52258627adc709dceaa501d80a18268b9fac8c869fcfb" }

This is my docker-compose file:

version: "3.6"

volumes:
  contracts-shared:
  grid-shared:
  pbft-shared:
  gridd-alpha:
  templates-shared:
  cache-shared:

services:
  # ---== shared services ==---

  sabre-cli:
    image: hyperledger/sawtooth-sabre-cli:latest
    volumes:
      - contracts-shared:/usr/share/scar
      - pbft-shared:/pbft-shared
    container_name: sabre-cli
    stop_signal: SIGKILL

  tnt-contract-builder:
    image: piashtanjin/tnt-contract-builder:latest
    container_name: tnt-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_track_and_trace --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/track_and_trace.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create a43b46  --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm a43b46 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee01 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= grid schema contract is loaded =========---------'
      "

  schema-contract-builder:
    image: piashtanjin/schema-contract-builder:latest
    container_name: schema-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_schema --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/schema.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee01 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee01 grid_schema --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_schema --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= grid schema contract is loaded =========---------'
      "

  # pike-contract-builder:
  pike-contract-builder:
    image: piashtanjin/pike-contract-builder:latest
    container_name: pike-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_pike --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/pike.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee05 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_pike --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= pike contract is loaded =========---------'
      "

  product-contract-builder:
    image: piashtanjin/product-contract-builder:latest
    container_name: product-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_product --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/product.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee05 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee01 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee02 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_product --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee01 grid_product --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee02 grid_product --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= grid_product contract is loaded =========---------'
      "

  location-contract-builder:
    image: piashtanjin/location-contract-builder:latest
    container_name: location-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_location --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/location.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee04 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_location --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee01 grid_location --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee04 grid_location --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= grid_location contract is loaded =========---------'
      "
  purchase-order-contract-builder:
    image: piashtanjin/purchase-order-contract-builder:latest
    container_name: purchase-order-contract-builder
    volumes:
      - pbft-shared:/pbft-shared
    entrypoint: |
      bash -c "
        while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
        sabre cr --create grid_purchase_order --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre upload --filename /tmp/purchase_order.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre ns --create 621dee06 --key /pbft-shared/validators/validator --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee05 grid_purchase_order --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
        sabre perm 621dee06 grid_purchase_order --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
        echo '---------========= grid_purchase_order contract is loaded =========---------'
      "
  # if [ ! -e sabre-admin.batch ]; then
  #     sawset proposal create \
  #         -k /root/.sawtooth/keys/my_key.priv \
  #         sawtooth.swa.administrators=$$(cat /pbft-shared/validators/validator-0.pub) \
  #         -o sabre-admin.batch
  #       sawadm genesis sabre-admin.batch
  validator-0:
    image: hyperledger/sawtooth-validator:nightly
    container_name: sawtooth-validator-default-0
    expose:
      - 4004
      - 5050
      - 8800
    ports:
      - "4004:4004"
    volumes:
      - pbft-shared:/pbft-shared
    command: |
      bash -c "
        if [ -e /pbft-shared/validators/validator-0.priv ]; then
          cp /pbft-shared/validators/validator-0.pub /etc/sawtooth/keys/validator.pub
          cp /pbft-shared/validators/validator-0.priv /etc/sawtooth/keys/validator.priv
        fi &&
        if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
          sawadm keygen
          mkdir -p /pbft-shared/validators || true
          cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-0.pub
          cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-0.priv
        fi &&
        if [ ! -e config-genesis.batch ]; then
          sawset genesis -k /etc/sawtooth/keys/validator.priv -o config-genesis.batch
        fi &&
        while [[ ! -f /pbft-shared/validators/validator-1.pub || \
                 ! -f /pbft-shared/validators/validator-2.pub || \
                 ! -f /pbft-shared/validators/validator-3.pub || \
                 ! -f /pbft-shared/validators/validator-4.pub ]];
        do sleep 1; done
        echo sawtooth.consensus.pbft.members=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] &&
        if [ ! -e /root/.sawtooth/keys/my_key.priv ]; then
          sawtooth keygen my_key
        fi &&
        if [ ! -e config.batch ]; then
         sawset proposal create \
            -k /etc/sawtooth/keys/validator.priv \
            sawtooth.consensus.algorithm.name=pbft \
            sawtooth.consensus.algorithm.version=1.0 \
            sawtooth.validator.transaction_families='[{"family": "sabre", "version": "0.5"}, {"family":"sawtoo", "version":"1.0"}, {"family":"xo", "version":"1.0"}]' \
            sawtooth.identity.allowed_keys=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
            sawtooth.swa.administrators=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
            sawtooth.consensus.pbft.members=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
            sawtooth.publisher.max_batches_per_block=1200 \
            -o config.batch 
        fi &&
        if [ ! -e /var/lib/sawtooth/genesis.batch ]; then
          sawadm genesis config-genesis.batch config.batch 
        fi &&
        sawtooth-validator -vv \
          --endpoint tcp://validator-0:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --bind network:tcp://eth0:8800 \
          --scheduler parallel \
          --peering static \
          --maximum-peer-connectivity 10000
      "
  validator-1:
    image: hyperledger/sawtooth-validator:nightly
    container_name: sawtooth-validator-default-1
    expose:
      - 4004
      - 5050
      - 8800
    volumes:
      - pbft-shared:/pbft-shared
    command: |
      bash -c "
        if [ -e /pbft-shared/validators/validator-1.priv ]; then
          cp /pbft-shared/validators/validator-1.pub /etc/sawtooth/keys/validator.pub
          cp /pbft-shared/validators/validator-1.priv /etc/sawtooth/keys/validator.priv
        fi &&
        if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
          sawadm keygen
          mkdir -p /pbft-shared/validators || true
          cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-1.pub
          cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-1.priv
        fi &&
        sawtooth keygen my_key &&
        sawtooth-validator -vv \
          --endpoint tcp://validator-1:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --bind network:tcp://eth0:8800 \
          --scheduler parallel \
          --peering static \
          --maximum-peer-connectivity 10000 \
          --peers tcp://validator-0:8800
      "
  validator-2:
    image: hyperledger/sawtooth-validator:nightly
    container_name: sawtooth-validator-default-2
    expose:
      - 4004
      - 5050
      - 8800
    volumes:
      - pbft-shared:/pbft-shared
    command: |
      bash -c "
        if [ -e /pbft-shared/validators/validator-2.priv ]; then
          cp /pbft-shared/validators/validator-2.pub /etc/sawtooth/keys/validator.pub
          cp /pbft-shared/validators/validator-2.priv /etc/sawtooth/keys/validator.priv
        fi &&
        if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
          sawadm keygen
          mkdir -p /pbft-shared/validators || true
          cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-2.pub
          cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-2.priv
        fi &&
        sawtooth keygen my_key &&
        sawtooth-validator -vv \
          --endpoint tcp://validator-2:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --bind network:tcp://eth0:8800 \
          --scheduler parallel \
          --peering static \
          --maximum-peer-connectivity 10000 \
          --peers tcp://validator-0:8800 \
          --peers tcp://validator-1:8800
      "
  validator-3:
    image: hyperledger/sawtooth-validator:nightly
    container_name: sawtooth-validator-default-3
    expose:
      - 4004
      - 5050
      - 8800
    volumes:
      - pbft-shared:/pbft-shared
    command: |
      bash -c "
        if [ -e /pbft-shared/validators/validator-3.priv ]; then
         cp /pbft-shared/validators/validator-3.pub /etc/sawtooth/keys/validator.pub
         cp /pbft-shared/validators/validator-3.priv /etc/sawtooth/keys/validator.priv
        fi &&
        if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
         sawadm keygen
         mkdir -p /pbft-shared/validators || true
         cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-3.pub
         cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-3.priv
        fi &&
        sawtooth keygen my_key &&
        sawtooth-validator -vv \
          --endpoint tcp://validator-3:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --bind network:tcp://eth0:8800 \
          --scheduler parallel \
          --peering static \
          --maximum-peer-connectivity 10000 \
          --peers tcp://validator-0:8800 \
          --peers tcp://validator-1:8800 \
          --peers tcp://validator-2:8800
      "
  validator-4:
    image: hyperledger/sawtooth-validator:nightly
    container_name: sawtooth-validator-default-4
    expose:
      - 4004
      - 5050
      - 8800
    volumes:
      - pbft-shared:/pbft-shared
    command: |
      bash -c "
        if [ -e /pbft-shared/validators/validator-4.priv ]; then
          cp /pbft-shared/validators/validator-4.pub /etc/sawtooth/keys/validator.pub
          cp /pbft-shared/validators/validator-4.priv /etc/sawtooth/keys/validator.priv
        fi &&
        if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
          sawadm keygen
          mkdir -p /pbft-shared/validators || true
          cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-4.pub
          cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-4.priv
        fi &&
        sawtooth keygen my_key &&
        sawtooth-validator -vv \
          --endpoint tcp://validator-4:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --bind network:tcp://eth0:8800 \
          --scheduler parallel \
          --peering static \
          --maximum-peer-connectivity 10000 \
          --peers tcp://validator-0:8800 \
          --peers tcp://validator-1:8800 \
          --peers tcp://validator-2:8800 \
          --peers tcp://validator-3:8800
      "
  sawtooth-rest-api:
    image: hyperledger/sawtooth-rest-api:latest
    container_name: sawtooth-rest-api-default-0
    expose:
      - 8008
    ports:
      - "8008:8008"
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth-rest-api \
          --connect tcp://validator-0:4004 \
          --bind sawtooth-rest-api:8008
      "
    stop_signal: SIGKILL

  sawtooth-rest-api1:
    image: hyperledger/sawtooth-rest-api:nightly
    container_name: sawtooth-rest-api-1
    expose:
      - 8008
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth-rest-api -v --connect tcp://validator-1:4004 --bind sawtooth-rest-api1:8008
      "
    stop_signal: SIGKILL

  sawtooth-rest-api2:
    image: hyperledger/sawtooth-rest-api:nightly
    container_name: sawtooth-rest-api-2
    expose:
      - 8008
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth-rest-api -v --connect tcp://validator-2:4004 --bind sawtooth-rest-api2:8008
      "
    stop_signal: SIGKILL
  sawtooth-rest-api3:
    image: hyperledger/sawtooth-rest-api:nightly
    container_name: sawtooth-rest-api-3
    expose:
      - 8008
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth-rest-api -v --connect tcp://validator-3:4004 --bind sawtooth-rest-api3:8008
      "
    stop_signal: SIGKILL

  sawtooth-rest-api4:
    image: hyperledger/sawtooth-rest-api:nightly
    container_name: sawtooth-rest-api-4
    expose:
      - 8008
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth-rest-api -v --connect tcp://validator-4:4004 --bind sawtooth-rest-api4:8008
      "
    stop_signal: SIGKILL

  sawtooth-settings-tp:
    image: hyperledger/sawtooth-settings-tp:latest
    container_name: sawtooth-settings-tp
    expose:
      - 4004
    command: settings-tp -v -C tcp://validator-0:4004
    stop_signal: SIGKILL

  sawtooth-settings-tp-1:
    image: hyperledger/sawtooth-settings-tp:latest
    container_name: sawtooth-settings-tp-1
    expose:
      - 4004
    command: settings-tp -v -C tcp://validator-1:4004
    stop_signal: SIGKILL
  
  sawtooth-settings-tp-2:
    image: hyperledger/sawtooth-settings-tp:latest
    container_name: sawtooth-settings-tp-2
    expose:
      - 4004
    command: settings-tp -v -C tcp://validator-2:4004
    stop_signal: SIGKILL

  sawtooth-settings-tp-3:
    image: hyperledger/sawtooth-settings-tp:latest
    container_name: sawtooth-settings-tp-3
    expose:
      - 4004
    command: settings-tp -v -C tcp://validator-3:4004
    stop_signal: SIGKILL

  sawtooth-settings-tp-4:
    image: hyperledger/sawtooth-settings-tp:latest
    container_name: sawtooth-settings-tp-4
    expose:
      - 4004
    command: settings-tp -v -C tcp://validator-4:4004
    stop_signal: SIGKILL

  sabre-tp:
    image: hyperledger/sawtooth-sabre-tp:0.8
    container_name: sawtooth-sabre-tp
    depends_on:
      - validator-0
    entrypoint: sawtooth-sabre -vv --connect tcp://validator-0:4004

  sawtooth-client:
    image: hyperledger/sawtooth-shell:nightly
    container_name: sawtooth-shell
    volumes:
      - pbft-shared:/pbft-shared
    depends_on:
      - validator-0
    command: |
      bash -c "
        sawtooth keygen &&
        tail -f /dev/null
      "
    stop_signal: SIGKILL
  pbft-0:
    image: hyperledger/sawtooth-pbft-engine:nightly
    container_name: sawtooth-pbft-engine-default-0
    command: pbft-engine -vv --connect tcp://validator-0:5050
    stop_signal: SIGKILL

  pbft-1:
    image: hyperledger/sawtooth-pbft-engine:nightly
    container_name: sawtooth-pbft-engine-default-1
    command: pbft-engine -vv --connect tcp://validator-1:5050
    stop_signal: SIGKILL

  pbft-2:
    image: hyperledger/sawtooth-pbft-engine:nightly
    container_name: sawtooth-pbft-engine-default-2
    command: pbft-engine -vv --connect tcp://validator-2:5050
    stop_signal: SIGKILL

  pbft-3:
    image: hyperledger/sawtooth-pbft-engine:nightly
    container_name: sawtooth-pbft-engine-default-3
    command: pbft-engine -vv --connect tcp://validator-3:5050
    stop_signal: SIGKILL

  pbft-4:
    image: hyperledger/sawtooth-pbft-engine:nightly
    container_name: sawtooth-pbft-engine-default-4
    command: pbft-engine -vv --connect tcp://validator-4:5050
    stop_signal: SIGKILL

  # ---== alpha node ==---

  db-alpha:
    image: postgres
    container_name: db-alpha
    hostname: db-alpha
    restart: always
    expose:
      - 5432
    environment:
      POSTGRES_USER: grid
      POSTGRES_PASSWORD: grid_example
      POSTGRES_DB: grid

  gridd-alpha:
    image: gridd
    container_name: gridd-alpha
    hostname: gridd-alpha
    build:
      context: ../..
      dockerfile: daemon/Dockerfile
      args:
        - REPO_VERSION=${REPO_VERSION}
        - CARGO_ARGS= --features experimental
    volumes:
      - contracts-shared:/usr/share/scar
      - pbft-shared:/pbft-shared
      - gridd-alpha:/etc/grid/keys
      - cache-shared:/var/cache/grid
    expose:
      - 8080
    ports:
      - "8080:8080"
    environment:
      GRID_DAEMON_KEY: "alpha-agent"
      GRID_DAEMON_ENDPOINT: "http://gridd-alpha:8080"
    entrypoint: |
      bash -c "
        # we need to wait for the db to have started.
        until PGPASSWORD=grid_example psql -h db-alpha -U grid -c '\q' > /dev/null 2>&1; do
            >&2 echo \"Database is unavailable - sleeping\"
            sleep 1
        done
        grid keygen --skip && \
        grid keygen --system --skip && \
        grid -vv database migrate \
            -C postgres://grid:grid_example@db-alpha/grid &&
        gridd -vv -b 0.0.0.0:8080 -k root -C tcp://validator-0:4004 \
            --database-url postgres://grid:grid_example@db-alpha/grid
      "

Problem with building

When I ran docker-compose -f docker/compose/sawtooth-build.yaml up, I got:

WARNING: The ISOLATION_ID variable is not set. Defaulting to a blank string.
ERROR: no such image: sawtooth-validator-local:: invalid reference format

How do I deal with this?

Ubuntu 20.04

Is publishing Sawtooth packages for Ubuntu 20.04 being considered? If so, is there an estimated date?

State Checkpointing

State checkpointing essentially resets the genesis block of the network, such that all nodes agree on a new starting state/block. This requires many features to accomplish, and an RFC should be produced early in the process.

This feature enables the network to "throw away" historical transactions (those prior to the checkpoint). It also enables very long-running (multi-year) networks while keeping the transaction log a reasonable length.

Can't execute tests in GitHub Actions (Docker)

I'm working in a fork repo (which has some custom changes) and I want to be running the same testing as upstream does but not using Jenkins, using GitHub Actions or another CI platform.

From what I can tell I've copied the steps from Jenkinsfile correctly but I am seeing a failure: ERROR: test_config (nose2.loader.ModuleImportFailure) -> ModuleNotFoundError: No module named 'sawtooth_cli.protobuf'.

Everything is at gluwa#31.

For the record, if I execute the same steps on my computer everything seems to work fine: containers are built and the tests pass.

Help/hints/RTFM are much appreciated.

lmdb.MapFullError limit with mapsize?

Hello, my Docker container is returning all transaction results in a pending state, and the logs throw this error:

[2021-12-29 19:16:54.971 ERROR    future] An unhandled error occurred while running future callback
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sawtooth_validator/networking/future.py", line 79, in run_callback
    self._callback_func(self._request, self._result)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/execution/executor.py", line 127, in _future_done_callback
    data=data)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/execution/scheduler_serial.py", line 108, in set_transaction_execution_result
    batch_id=batch_signature)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/execution/scheduler_serial.py", line 299, in _calculate_state_root_if_required
    state_hash = self._compute_merkle_root(required_state_hash)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/execution/scheduler_serial.py", line 273, in _compute_merkle_root
    persist=True, clean_up=True)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/execution/context_manager.py", line 409, in _squash
    state_hash = tree.update(updates, deletes, virtual=virtual)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/state/merkle.py", line 247, in update
    self._database.set_batch(update_batch)
  File "/usr/lib/python3/dist-packages/sawtooth_validator/database/lmdb_nolock_database.py", line 121, in set_batch
    txn.put(k.encode(), packed, overwrite=True)
lmdb.MapFullError: mdb_put: MDB_MAP_FULL: Environment mapsize limit reached

Is it possible to change the environment mapsize, or is this a limit that cannot be changed?
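For reference, the lmdb Python binding's map size is configurable both when opening an environment and at runtime. A minimal example (the path and sizes are illustrative; the relevant place to change this would be wherever the validator opens its LMDB environments, and the file should not be touched while the validator is running):

import lmdb

# Open the environment with a larger map size, e.g. ~20 GiB.
# merkle-00.lmdb is a single file, hence subdir=False.
env = lmdb.open("/var/lib/sawtooth/merkle-00.lmdb",
                map_size=20 * 1024**3,
                subdir=False)

# An already-open environment can also be grown at runtime.
env.set_mapsize(40 * 1024**3)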

Add general purpose filter query parameters to REST API

On routes that return lists of resources, if there is a query parameter that doesn't match any other parameters, the REST API should assume it is a filter, and only return resources that match that filter.

For example:

/batches?batcher_pubkey=02d260a46457a064733153e09840c322bee1dff34445d7d49e19e60abd18fd0758

Should only return batches with a batcher_pubkey that matches "02d260a46457a064733153e09840c322bee1dff34445d7d49e19e60abd18fd0758".

Features:

  • Comma-separated values behave as a logical OR
  • Dot notation can be used to reference nested keys or indexes
  • header. can be omitted from the beginning of header keys

(Duplicated by https://jira.hyperledger.org/projects/STL/issues/STL-224)
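A rough sketch of the matching rules listed above, treating each resource as a plain dict as returned by the REST API (illustrative only, not the proposed implementation):

def resolve(resource, dotted_key):
    """Follow dot notation into nested dict keys; None if missing."""
    for part in dotted_key.split("."):
        if not isinstance(resource, dict) or part not in resource:
            return None
        resource = resource[part]
    return resource


def matches(resource, key, raw_value):
    """Comma-separated values act as a logical OR; `header.` may be omitted."""
    wanted = raw_value.split(",")
    value = resolve(resource, key)
    if value is None:
        value = resolve(resource, "header." + key)
    return str(value) in wanted


def apply_filters(resources, filters):
    return [r for r in resources
            if all(matches(r, k, v) for k, v in filters.items())]


# Truncated placeholder keys for illustration.
batches = [
    {"header": {"batcher_pubkey": "02d260...0758"}, "header_signature": "8a9c..."},
    {"header": {"batcher_pubkey": "03aa11...ffee"}, "header_signature": "dc41..."},
]
print(apply_filters(batches, {"batcher_pubkey": "02d260...0758"}))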

All-In-One

Compile the validator, transaction processors, and consensus engine into a single process.

Batches submitted via REST API during genesis are lost

When a batch is submitted via the REST API to a validator that is processing the genesis block, it will be lost.

Steps to reproduce:

  1. Create a genesis block with at least one sawtooth settings transaction
  2. Start a validator process
  3. Start a REST API process
  4. Submit an intkey transaction (via intkey load, for example).
  5. Start tp_settings
  6. Start tp_intkey_python

At this point, the genesis block will be created, but the intkey transaction(s) will not be processed.

To fix this, either:

  • the batch should be preserved in a pending queue (which would require the journal to be active), or
  • client messages should not be accepted until the genesis block is complete (probably the most desirable)

(Duplicated by https://jira.hyperledger.org/browse/STL-508)

Meta: Issue triage and labeling

This issue captures community discussion on how we triage issues. This could focus simply on categorizing, making it easy for individuals to find what they might next do, or it could include more prioritization for team-directed selection of issues. GitHub labels could usefully assist triage but require answering: (a) How eagerly to create labels? (b) Who can label issues? Examples of possibly useful types of label include,

  • GitHub defaults: bug, enhancement, question, ...
  • generic from elsewhere: performance, security, testing, ...
  • by component: validator, tp, sdk, pbft, rest, cli, ...
  • by language/technology: js, python, rust, ...
  • stage of process: build, install, run, ...
  • priorities.

Problem with protocol buffer

Hi,

While following the steps at https://sawtooth.hyperledger.org/docs/core/releases/latest/app_developers_guide/ubuntu.html, I encountered a problem when executing Step 4.4; the error is the following:

[....]
File "/usr/lib/python3/dist-packages/sawtooth_cli/protobuf/transaction_pb2.py", line 22
[...]
TypeError: new() got an unexpected keyword argument 'serialized_options'

I have been looking around and have seen that this problem arises when the *_pb2.py files are generated by a version of the protoc compiler newer than the protocol buffer runtime library.

I have tried to upgrade my Python runtime library but still get the same error.
The current version of the protocol buffer library is:

Name: protobuf
Version: 3.15.8
Summary: Protocol Buffers
Home-page: https://developers.google.com/protocol-buffers/
Author: None
Author-email: None
License: 3-Clause BSD License
Location: /home/mario/.local/lib/python3.6/site-packages
Requires: six
Required-by: sawtooth-sdk

I have checked sys.path to see whether the package is installed with a different version on a different path, but my search path is:

Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

import sys
sys.path
[' ', '/usr/lib/python36.zip', '/usr/lib/python3.6', '/usr/lib/python3.6/lib-dynload', '/home/mario/.local/lib/python3.6/site-packages', '/usr/local/lib/python3.6/dist-packages', '/usr/local/lib/python3.6/dist-packages/dnspython-2.1.1.dev32+geed2172-py3.6.egg', '/usr/lib/python3/dist-packages']

As you can see, the protobuf path that the pip command shows is in fourth place, and none of the preceding paths contain a protocol buffer library, so I don't know what is going on.

Does anyone know what could be happening?

EDIT

I have just realized that the command in Step 4 runs as another user with "sudo -u sawtooth", so I am pretty sure that the /home/mario/.local/.... path is not available for that command, since that is a user-scoped path. The sawtooth user does not have a local copy of the protobuf package, so the search for protobuf falls through to the last path (/usr/lib/python3/dist-packages), where there is another protobuf installation, this time version 3.0.0. I don't know when that package was installed there; I assume during the Sawtooth installation, but it seems odd since that version makes the environment fail.

I am not sure how to tell pip to upgrade not the local package but the global one installed on that path; I have tried with sudo but it doesn't work.

This explanation seems reasonable, but I am not sure why nobody else has this problem. If someone could confirm, it would be helpful.
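One quick way to confirm which protobuf the sawtooth user actually resolves is to print the version and import path from the same interpreter (run it via the same sudo -u sawtooth invocation the guide uses; this is just a diagnostic, not part of the guide):

# Prints the protobuf runtime version and where it was imported from.
import sys
import google.protobuf

print("python executable:", sys.executable)
print("protobuf version: ", google.protobuf.__version__)
print("imported from:    ", google.protobuf.__file__)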

EDIT 2

Alright, so I have managed to tell pip to update the packages contained in /usr/lib/python3/dist-packages and now it works. After solving this I had a couple more errors:

  • The sawtooth_signing package was missing
  • The sawtooth_sdk package was also missing

Again, these packages were installed under /home/mario/..., so I had to install them in the other location. While trying to install them I got another odd behaviour: pip tried to install dependencies of these packages that were already installed, and to make matters worse, installing those dependencies generated an error, specifically with the secp256k1 package, which failed to compile. It seems that pip defaulted to --ignore-installed behaviour.

Basically, in case you end up with the same installation problem in your environment, what you have to do is install the missing packages manually with the following command:

sudo python3 -m pip install -t /usr/lib/python3/dist-packages/ --no-deps protobuf sawtooth_sdk sawtooth_signing

I hope this issue helps somebody else facing the same problem.

Implement BlockStore replay tool

Implement a tool which will replay the blocks contained in a block store through the scheduler/executor/context manager/TPs.

Initially, this tool is primarily for testing scheduler implementations and debugging.

This tool should make aggressive use of the schedulers, by default, spanning multiple blocks across a single scheduler. It should also have options to make it behave closer to the journal (currently single block at a time).

This tool should print performance statistics as it operates.

Duplicates https://jira.hyperledger.org/browse/STL-475
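A self-contained toy sketch of the replay flow, with stand-in classes in place of the validator's real block store and scheduler (all names here are placeholders, not real sawtooth_validator APIs):

import time


class ToyBlock:
    def __init__(self, num, batches):
        self.block_num = num
        self.batches = batches


class ToyScheduler:
    """Stand-in for a real scheduler; it just counts the batches it receives."""
    def __init__(self):
        self.batch_count = 0

    def add_batch(self, batch):
        self.batch_count += 1

    def finalize(self):
        pass


def replay(blocks, blocks_per_schedule=16):
    """Replay blocks, spanning several blocks across a single scheduler."""
    start = time.time()
    total_batches = 0
    for i in range(0, len(blocks), blocks_per_schedule):
        scheduler = ToyScheduler()
        for block in blocks[i:i + blocks_per_schedule]:
            for batch in block.batches:
                scheduler.add_batch(batch)
        scheduler.finalize()
        total_batches += scheduler.batch_count
    elapsed = time.time() - start
    # Print performance statistics as the tool operates.
    print("replayed %d blocks / %d batches in %.3fs"
          % (len(blocks), total_batches, elapsed))


replay([ToyBlock(n, ["batch-%d" % n]) for n in range(100)])

Setting blocks_per_schedule=1 would mimic the journal's current single-block-at-a-time behaviour.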
