
A portable development level Indy Node network.

License: Apache License 2.0

von verifiable-claims hyperledger hyperledger-indy verifiable-organizations-network citz

von-network's Introduction


VON Network

A portable development level Indy Node network, including a Ledger Browser. The Ledger Browser (for example the BC Gov's Ledger for the GreenLight Demo Application) allows a user to see the status of the nodes of a network and browse/search/filter the Ledger Transactions.

von-network is being developed as part of the Verifiable Organizations Network (VON). For more information on VON, see https://vonx.io. Even better, join in with what we are doing and contribute to the VON, Aries, and Indy communities.

VON Network is Not a Production Level Indy Node Network

VON Network is not a production level Indy Node network. It was designed as a provisional network for development and testing purposes only. It provides you with an exceptionally simple way to spin up an Indy Node network, but is lacking many of the features and safeguards needed for a production level network.

VON Network is provided as is for development and testing. Please do not use it for production environments.

The VON-Network Ledger Browser and API

With the Ledger Browser (for example: http://greenlight.bcovrin.vonx.io/), you can see:

  • The status of the Ledger nodes
  • The detailed status of the Ledger Nodes in JSON format (click the "Detailed Status" link)
  • The three ledgers of an Indy Network - Domain, Pool and Config (click the respective links)
  • The Genesis Transactions for the Indy Network instance.
    • In an Indy Agent, use the URL <server>/genesis to GET the genesis file to use in initializing the Agent.

By using the "Authenticate a new DID" part of the UI or posting the appropriate JSON to the VON-Network API (see an example script here), a new DID can be added to the Ledger. A known and published Trust Anchor DID is used to write the new DID to the Ledger. This operation would not be permitted in this way on the Sovrin Main Network. However, it is a useful mechanism on sandbox Indy Networks used for testing.

In the Domain Ledger screen (example), you can browse through all of the transactions that have been created on this instance of the Ledger. As well, you can use a drop down filter to see only specific Ledger transaction types (nym - aka DID, schema, CredDef, etc.), and search for strings in the content of the transactions.

VON Network Quick Start Guide

New to VON Network? We have a tutorial about using VON Network to get you started.

Note that in order to use Docker Desktop (> version 3.4.0), make sure you uncheck "Use Docker Compose V2" in Docker Desktop > Preferences > General. Refer to this issue for additional details: #170

Want to see a full demo that includes applications and verifiable credentials being issued? The VON Quick Start Guide provides the instructions for running a local instance of a full demo of the components, including an Indy Network, an instance of TheOrgBook and GreenLight. This is a great way to see the VON Network in action.

Indy-Cli Container Environment

This repository includes a fully containerized Indy-Cli environment, allowing you to use the Indy-Cli without having to build or install the Indy-SDK or any of its dependencies on your machine.
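
For a quick check that the containerized environment is available, you can display the built-in help for the two related commands:

./manage indy-cli -h
./manage cli -h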

For more information refer to Using the containerized indy-cli

Ledger Troubleshooting

Refer to the Troubleshooting document for some tips and tools you can use to troubleshoot issues with a ledger.

VON Quick Start Guide

The environment provides a set of batch script templates and a simple variable substitution layer that allows the scripts to be reused for a number of purposes.

For examples of how to use this capability, refer to Writing Transactions to a Ledger for an Un-privileged Author

Running the Network Locally

The tutorial about using VON Network has information on starting (and stopping) the network locally.

Running the web server in Docker against another ledger

  1. Run docker to start the web server, and pass in the GENESIS_URL and LEDGER_SEED parameters:

For example, to connect to the Sovrin Test Network:

./manage build
GENESIS_URL=https://raw.githubusercontent.com/sovrin-foundation/sovrin/master/sovrin/pool_transactions_sandbox_genesis ./manage start-web

Note that it takes some time to get the transactions and status from the network. Once the UI appears, fetch the Genesis Transactions to confirm that the server started up properly.
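
For example, a quick way to confirm the web server is serving the genesis file (assuming the default web server port of 9000 on your host):

curl http://localhost:9000/genesis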

Running the web server on your local machine

You can run the web server/ledger browser on its own, and point to another Indy/Sovrin network.

  1. Install Python and pip (we recommend using a virtual environment such as virtualenv)

  2. Download this repository:

git clone https://github.com/bcgov/von-network.git
cd von-network
  3. If using virtualenv, set up a virtual environment and activate it:
virtualenv --python=python3.6 venv
source venv/bin/activate
  4. Install requirements:
pip install -r server/requirements.txt
  5. Run the server. You can specify a genesis file or a URL from which to download a genesis file, and optionally a seed for the DID used to connect to this ledger:
GENESIS_FILE=/tmp/some-genesis.txt PORT=9000 python -m server.server

Or:

GENESIS_URL=https://some.domain.com/some-genesis.txt LEDGER_SEED=000000000000000000000000SomeSeed PORT=9000 python -m server.server

For example, to connect to the STN:

GENESIS_URL=https://raw.githubusercontent.com/sovrin-foundation/sovrin/master/sovrin/pool_transactions_sandbox_genesis LEDGER_SEED=000000000000000000000IanCostanzo PORT=9000 python -m server.server

Running the Network on a VPS

Requirements

  • Ubuntu 16.04
  • at least 1GB RAM
  • accepting incoming TCP connections on ports 9701-9708
  • root access
  1. Install unzip utility:

    # Requires root privileges
    apt install unzip
  2. Install Docker and Docker Compose:

    curl -fsSL get.docker.com -o get-docker.sh
    # Requires root privileges
    sh get-docker.sh
    curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
  3. Download this repository:

    curl -L https://github.com/bcgov/von-network/archive/main.zip > bcovrin.zip && \
        unzip bcovrin.zip && \
        cd von-network-main && \
        chmod a+w ./server/
  4. Build the Docker container:

    ./manage build
  5. Run the network of nodes:

    # This command requires the publicly accessible ip address of the machine `public_ip_address`
    # WEB_SERVER_HOST_PORT maps the docker service port to a public port on the machine
    # LEDGER_INSTANCE_NAME sets the display name of the ledger on the page headers.
    ./manage start public_ip_address WEB_SERVER_HOST_PORT=80 "LEDGER_INSTANCE_NAME=My Ledger" &

AWS EC2 Security considerations

If you are installing on an Amazon EC2 node, you may find the Indy nodes are failing to connect to each other. The signature for this is a message repeating every 60 seconds when you view the logs via "./manage logs":

node2_1 | 2020-05-07 23:56:30,728|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2020-05-07 23:56:30,729|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2020-05-07 23:56:30,730|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2020-05-07 23:56:30,730|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2020-05-07 23:56:30,730|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.

The Indy nodes are configured to talk to each other via their "public" address, not the Virtual Private Cloud address of the EC2 node. It is common practice to tightly restrict traffic inbound to public IPs when first setting up a deployment in AWS. You will need to adjust the Inbound and Outbound traffic rules on your Security Groups to allow traffic specifically from the public EC2 address.
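
A minimal sketch of opening the node ports with the AWS CLI (the security group ID and public IP below are placeholders; adjust them to your own deployment):

# Allow inbound traffic on the Indy node/client ports from the instance's public IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9701-9708 \
  --cidr 203.0.113.10/32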

Connecting to the Network

With the CLI

Once the nodes are all running and have connected to each other, you can run the Indy client to test the connection in a separate terminal window:

./manage cli

If you want to connect to a remote indy-node pool, you can optionally supply an ip address. (Currently only supports a test network running on a single machine with a single ip address.)

./manage cli <ip address>

The Indy CLI should boot up and you should see the following:

Indy-CLI (c) 2017 Evernym, Inc.
Type 'help' for more information.
Running Indy 1.1.159

indy>

Now connect to our new Indy network to make sure the network is running correctly:

pool connect sandbox

What you should see is:

indy> pool connect sandbox
Pool "sandbox" has been connected

If you see this, congratulations! Your nodes are running correctly and you have a connection to the network.

Extra Features for Development

Running BCovrin also runs a thin webserver (at http://localhost:9000 when using Docker) to expose some convenience functions:

Genesis Transaction Exposed

The genesis transaction record required to connect to the node pool is made available at:

<ip_address>/genesis
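
For example, to save the genesis file locally for use when initializing an agent (<ip_address> is the same placeholder as above):

curl http://<ip_address>/genesis -o pool_transactions_genesis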

Write a new DID for a seed

The node pool can have a trust anchor write a DID for you. That feature is available in the UI.

Customize your Ledger Browser Deployment

It is possible to customize some aspects of the Ledger Browser at runtime by using the following environment variables (see the example after the list):

  • REGISTER_NEW_DIDS: if set to True, it will enable the user interface allowing new identity owners to write a DID to the ledger. It defaults to False.
  • LEDGER_INSTANCE_NAME: the name of the ledger instance the Ledger Browser is connected to. Defaults to Ledger Browser.
  • INFO_SITE_URL: a URL that will be displayed in the header, and can be used to reference another external website containing details/resources on the current ledger browser instance.
  • INFO_SITE_TEXT: the display text used for the INFO_SITE_URL. If not specified, it will default to the value set for INFO_SITE_URL.
  • WEB_ANALYTICS_SCRIPT: the JavaScript code used by web analytics servers. Populate this environment variable if you want to track the usage of your site with Matomo, Google Analytics or any other JavaScript based trackers. Include the whole <script type="text/javascript">...</script> tag, ensuring quotes are escaped properly for your command-line interpreter (e.g.: bash, git bash, etc.).
  • LEDGER_CACHE_PATH: if set, it will instruct the ledger to create an on-disk cache rather than an in-memory one. The image supplies a folder for this purpose: $HOME/.indy_client/ledger-cache. The file should be placed into this directory (e.g.: /home/indy/.indy_client/ledger-cache/ledger_cache_file or $HOME/.indy_client/ledger-cache/ledger_cache_file).
  • INDY_SCAN_URL: the URL to the external IndyScan ledger browser instance for the network. This will replace the links to the builtin ledger browser tools.
  • INDY_SCAN_TEXT: the display text used for the INDY_SCAN_URL. If not specified, it will default to the value set for INDY_SCAN_URL.
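
For example, the variables can be supplied when starting the network (a minimal sketch; the values below are placeholders and assume the same environment-variable style used with ./manage elsewhere in this document):

REGISTER_NEW_DIDS=True \
LEDGER_INSTANCE_NAME="My Dev Ledger" \
INFO_SITE_URL=https://example.org/ledger-info \
INFO_SITE_TEXT="Ledger Info" \
./manage start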

Using IndyScan with VON Network

IndyScan is a production-level transaction explorer for Hyperledger Indy networks. It's a great tool for exploring and searching through the transactions on the various ledgers.

You might be asking... Why would I want to use IndyScan with von-network, when von-network has a built-in ledger browser?

The short answer is performance at scale. The built-in ledger browser works great for most local development purposes. However, it starts running into performance issues when your instance contains over 100,000 transactions. IndyScan, on the other hand, is backed by Elasticsearch and can easily scale well beyond that limitation. So if you're hosting an instance of von-network for your organization to use for testing, like BC Gov does with BCovrin Test, you'll want to look into switching over to IndyScan as your ledger browser.

To use IndyScan as your ledger browser for von-network, you're responsible for setting up and hosting your own instance of IndyScan. Please refer to the IndyScan repository for information on how to accomplish this. Once your IndyScan instance is up and running you can configure your von-network instance to provide a link to it on the main page by using the INDY_SCAN_URL and INDY_SCAN_TEXT variables described in the previous section. The link to your IndyScan instance will replace the links to von-network's built in ledger browser tools.
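
For example (a minimal sketch; the IndyScan URL below is a placeholder for your own hosted instance):

INDY_SCAN_URL=https://indyscan.example.com \
INDY_SCAN_TEXT="IndyScan for My Ledger" \
./manage start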

Contributing

Pull requests are always welcome!

Please see the Contributions Guide for the repo.

You may also create an issue if you would like to suggest additional resources to include in this repository.

All contributions to this repository should adhere to our Code of Conduct.


von-network's Issues

ACA-Py Agent throwing 500 internal server error for 3rd agent reusing a single use invite

Issue

Given a single-use invite that has already been used to establish a connection between Alice (inviter) and Bob (invitee): if Mallory attempts to use that invite to initiate a connection with Alice, Alice responds to Mallory with a 500 internal server error.

Steps to Reproduce

Scenario: Inviter Sends invitation for one agent second agent tries after connection

Given we have "3" agents
| name | role |
| Alice | inviter |
| Bob | invitee |
| Mallory | inviteinterceptor |
And "Alice" generated a single-use connection invitation
And "Bob" received the connection invitation
And "Bob" sent a connection request
And "Alice" accepts the connection request by sending a connection response
And "Alice" and "Bob" have a connection
When "Mallory" sends a connection request to "Alice" based on the connection invitation
Then "Alice" sends a request_not_accepted error  -> Alice sends a 500 internal server error instead

Outcome

500 Internal Server Error\n\nServer got itself in trouble

Expected Outcome

A request_not_accepted error

Workaround

None

Test Cases Affected

T003-API10-RFC0160, T004-API10-RFC0160

Probable Side Issue

Probable Side Issue: Based on the RFC for the connection protocol I had expected to get a request_not_accepted error if Alice and Bob were in the share phase of the sequence when Mallory attempted to reuse the invite. At this point it seems that unless the inviter is in a state of "active" for that invite then anyone can come in and begin to facilitate a connection with the inviter. I do believe that once Bob accepts invitation/sends connection request that anyone trying to reuse that invite should get a request_not_accepted error. Someone let me know if you wish to log a separate issue for this.

Severity

Medium

Business Priority

Medium

TypeError: GenericMeta object argument after ** must be a mapping, not list

Running this as-is on an Ubuntu machine using Docker Compose, here are the logs for Node1:

init_indy_node Node1 9418 80
Node-stack name is Node1
Client-stack name is Node1C
Generating keys for random seed b'cac2EE084B4AAaeE47F9D55f3B77c6F7'
Init local keys for client-stack
Public key is 28349f48e70b4013dd76dba0f1b4cdcaf6b05da015c95be4c95ccb496ef96264
Verification key is 968a23ea955a4df8833d813ea3e7d20e5901be20f7c794dc558986e8615c6a3c
Init local keys for node-stack
Public key is 28349f48e70b4013dd76dba0f1b4cdcaf6b05da015c95be4c95ccb496ef96264
Verification key is 968a23ea955a4df8833d813ea3e7d20e5901be20f7c794dc558986e8615c6a3c
BLS Public key is FufFb6gVM3ivGXpKbeSjCKBrbwbaFodvroUcX4wFh7rv8hvYtEfwSnVxDEG4VMPDvwfuugFDP1ajChv9PZpMWbqV1C9rRRNLcC2aLybXV5BVUJgFJrmZxPd8Dxbu75zLWkgvak2WfUSHUn4JazSgNzMkAWyHpFh611kYBmonWtmZMb
generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 1 --ips 172.16.10.1,172.16.10.1,172.16.10.1,172.16.10.1
Generating keys for provided seed b'000000000000000000000000000Node1'
Init local keys for client-stack
Public key is f5a2927d4eb8e23cdd0167c2e786613993590dab50e6d68bc3821df3b8c34f1f
Verification key is ecbb3dd3a659f1e94d160fb2a77d85b92fda9de1b62c46e80ebf78735970056d
Init local keys for node-stack
Public key is f5a2927d4eb8e23cdd0167c2e786613993590dab50e6d68bc3821df3b8c34f1f
Verification key is ecbb3dd3a659f1e94d160fb2a77d85b92fda9de1b62c46e80ebf78735970056d
BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
Nodes will not run locally, so writing /etc/indy/indy.env
This node with name Node1 will use ports 9701 and 9702 for nodestack and clientstack respectively
BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
Generated genesis transaction file:
/home/indy/.indy-cli/networks/sandbox/pool_transactions_genesis
{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","client_ip":"172.16.10.1","client_port":9702,"node_ip":"172.16.10.1","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv","identifier":"Th7MpTaRZVRYnPiabds81Y","txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62","type":"0"}
{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","client_ip":"172.16.10.1","client_port":9704,"node_ip":"172.16.10.1","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb","identifier":"EbP4aYNeTHL6q385GuVpRV","txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc","type":"0"}
{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","client_ip":"172.16.10.1","client_port":9706,"node_ip":"172.16.10.1","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya","identifier":"4cU41vWW82ArfxJxHkzXPG","txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4","type":"0"}
{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","client_ip":"172.16.10.1","client_port":9708,"node_ip":"172.16.10.1","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA","identifier":"TWwCRQRZ2ZHMJFn9TzLp7W","txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008","type":"0"}
start_indy_node Node1 9701 9702
2018-03-09 17:14:47,390 | DEBUG    | __init__.py          (60) | register | Registered VCS backend: git
2018-03-09 17:14:47,413 | DEBUG    | __init__.py          (60) | register | Registered VCS backend: hg
2018-03-09 17:14:47,466 | DEBUG    | __init__.py          (60) | register | Registered VCS backend: svn
2018-03-09 17:14:47,467 | DEBUG    | __init__.py          (60) | register | Registered VCS backend: bzr
2018-03-09 17:14:48,380 | DEBUG    | selector_events.py   (53) | __init__ | Using selector: EpollSelector
2018-03-09 17:14:48,398 | DEBUG    | ledger.py            (201) | start | Starting ledger...
2018-03-09 17:14:48,411 | DEBUG    | ledger.py            (67) | recoverTree | Recovering tree from transaction log
2018-03-09 17:14:48,434 | DEBUG    | ledger.py            (82) | recoverTree | Recovered tree in 0.022596266004256904 seconds
2018-03-09 17:14:48,477 | DEBUG    | ledger.py            (201) | start | Starting ledger...
2018-03-09 17:14:48,485 | DEBUG    | ledger.py            (67) | recoverTree | Recovering tree from transaction log
2018-03-09 17:14:48,500 | DEBUG    | ledger.py            (82) | recoverTree | Recovered tree in 0.01505104498937726 seconds
2018-03-09 17:14:48,509 | INFO     | node.py              (2636) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2018-03-09 17:14:48,514 | INFO     | pool_manager.py      (394) | _order_node | Node1 node Node1 ordered, NYM Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
2018-03-09 17:14:48,515 | INFO     | pool_manager.py      (394) | _order_node | Node1 node Node2 ordered, NYM 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
2018-03-09 17:14:48,515 | INFO     | pool_manager.py      (394) | _order_node | Node1 node Node3 ordered, NYM DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
2018-03-09 17:14:48,516 | INFO     | pool_manager.py      (394) | _order_node | Node1 node Node4 ordered, NYM 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
2018-03-09 17:14:48,535 | INFO     | node.py              (786) | _create_bls_bft | BLS: BLS Signatures will be used for Node Node1
2018-03-09 17:14:48,544 | INFO     | node.py              (2636) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2018-03-09 17:14:48,553 | INFO     | node.py              (612) | setPoolParams | Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node1', 'Node3', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'view_change_done': Quorum(3), 'view_change': Quorum(3), 'observer_data': Quorum(2), 'propagate_primary': Quorum(2), 'consistency_proof': Quorum(2), 'f': 1, 'prepare': Quorum(2), 'election': Quorum(3), 'bls_signatures': Quorum(3), 'propagate': Quorum(2), 'timestamp': Quorum(2), 'same_consistency_proof': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'ledger_status': Quorum(2), 'checkpoint': Quorum(2)}
2018-03-09 17:14:48,607 | INFO     | plugin_loader.py     (118) | _load | plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
2018-03-09 17:14:48,609 | DISPLAY  | replicas.py          (41) | grow | Node1 added replica Node1:0 to instance 0 (master)
2018-03-09 17:14:48,609 | DISPLAY  | replicas.py          (41) | grow | Node1 added replica Node1:1 to instance 1 (backup)
2018-03-09 17:14:48,610 | DEBUG    | plugin_helper.py     (23) | loadPlugins | Plugin loading started to load plugins from plugins_dir: /var/lib/indy/plugins
2018-03-09 17:14:48,611 | DEBUG    | plugin_helper.py     (28) | loadPlugins | Plugin directory created at: /var/lib/indy/plugins
2018-03-09 17:14:48,612 | DEBUG    | plugin_helper.py     (63) | loadPlugins | Total plugins loaded from plugins_dir /var/lib/indy/plugins are : 0
2018-03-09 17:14:48,647 | DEBUG    | ledger.py            (201) | start | Starting ledger...
2018-03-09 17:14:48,658 | DEBUG    | ledger.py            (67) | recoverTree | Recovering tree from transaction log
2018-03-09 17:14:48,689 | DEBUG    | ledger.py            (82) | recoverTree | Recovered tree in 0.03132294199895114 seconds
2018-03-09 17:14:48,701 | INFO     | node.py              (2636) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2018-03-09 17:14:48,702 | DEBUG    | ledger.py            (199) | start | Ledger already started.
2018-03-09 17:14:48,702 | DEBUG    | ledger.py            (199) | start | Ledger already started.
2018-03-09 17:14:48,703 | DEBUG    | ledger.py            (199) | start | Ledger already started.
2018-03-09 17:14:48,705 | DEBUG    | authenticator.py     (31) | start | Starting ZAP at inproc://zeromq.zap.1
2018-03-09 17:14:48,705 | DEBUG    | base.py              (72) | allow | Allowing 0.0.0.0
2018-03-09 17:14:48,705 | DEBUG    | base.py              (112) | configure_curve | Configure curve: *[/var/lib/indy/sandbox/keys/Node1/public_keys]
2018-03-09 17:14:48,707 | INFO     | stacks.py            (84) | start | CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
2018-03-09 17:14:48,708 | DEBUG    | authenticator.py     (31) | start | Starting ZAP at inproc://zeromq.zap.2
2018-03-09 17:14:48,708 | DEBUG    | base.py              (72) | allow | Allowing 0.0.0.0
2018-03-09 17:14:48,708 | DEBUG    | base.py              (112) | configure_curve | Configure curve: *[*]
2018-03-09 17:14:48,710 | INFO     | node.py              (853) | start | Node1 first time running...
2018-03-09 17:14:48,712 | INFO     | zstack.py            (584) | connect | CONNECTION: Node1 looking for Node4 at 172.16.10.1:9707
2018-03-09 17:14:48,714 | INFO     | zstack.py            (584) | connect | CONNECTION: Node1 looking for Node3 at 172.16.10.1:9705
2018-03-09 17:14:48,715 | INFO     | zstack.py            (584) | connect | CONNECTION: Node1 looking for Node2 at 172.16.10.1:9703
2018-03-09 17:14:49,121 | INFO     | keep_in_touch.py     (98) | _connsChanged | CONNECTION: Node1 now connected to Node4
2018-03-09 17:14:49,315 | INFO     | node.py              (1751) | preLedgerCatchUp | Node1 reverted 0 batches before starting catch up for ledger 0
2018-03-09 17:14:49,315 | INFO     | ledger_manager.py    (878) | mark_ledger_synced | CATCH-UP: Node1 completed catching up ledger 0, caught up 0 in total
2018-03-09 17:14:49,317 | INFO     | keep_in_touch.py     (98) | _connsChanged | CONNECTION: Node1 now connected to Node3
2018-03-09 17:14:49,332 | INFO     | looper.py            (273) | shutdown | Looper shutting down now...
Traceback (most recent call last):
  File "/usr/local/bin/start_indy_node", line 17, in <module>
    run_node(config, self_name, int(sys.argv[2]), int(sys.argv[3]))
  File "/usr/local/lib/python3.5/dist-packages/indy_node/utils/node_runner.py", line 34, in run_node
    looper.run()
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 290, in __exit__
    self.shutdownSync()
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 286, in shutdownSync
    self.loop.run_until_complete(self.shutdown())
  File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 276, in shutdown
    await self.runFut
  File "/usr/lib/python3.5/asyncio/futures.py", line 363, in __iter__
    return self.result()  # May raise too.
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/local/lib/python3.5/dist-packages/indy_node/utils/node_runner.py", line 34, in run_node
    looper.run()
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 260, in run
    return self.loop.run_until_complete(what)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 223, in runForever
    await self.runOnceNicely()
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 206, in runOnceNicely
    msgsProcessed = await self.prodAllOnce()
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 151, in prodAllOnce
    s += await n.prod(limit)
  File "/usr/local/lib/python3.5/dist-packages/indy_node/server/node.py", line 275, in prod
    c = await super().prod(limit)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/node.py", line 976, in prod
    c += await self.serviceNodeMsgs(limit)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/node.py", line 1010, in serviceNodeMsgs
    await self.processNodeInBox()
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/node.py", line 1566, in processNodeInBox
    await self.nodeMsgRouter.handle(m)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/router.py", line 81, in handle
    res = self.handleSync(msg)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/router.py", line 70, in handleSync
    return self.getFunc(msg[0])(*msg)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/message_req_processor.py", line 50, in process_message_rep
    return handler.process(msg, frm)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/message_handlers.py", line 62, in process
    valid_msg = self.create(msg.msg, **params)
  File "/usr/local/lib/python3.5/dist-packages/plenum/server/message_handlers.py", line 75, in create
    return LedgerStatus(**msg)
TypeError: GenericMeta object argument after ** must be a mapping, not list

My environment:

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.4 LTS
Release:	16.04
Codename:	xenial

Linux kernel info:

$ uname -a
Linux ben-dev-machine 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Docker-compose info:

$ docker-compose version
docker-compose version 1.19.0, build 9e633ef
docker-py version: 2.7.0
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

Docker info:

$ docker info
Containers: 6
 Running: 1
 Paused: 0
 Stopped: 5
Images: 334
Server Version: 17.12.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-116-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.67GiB
Name: ***** censored *****
ID: JTGA:HCOJ:L6VQ:U7JT:UGNN:QYQ2:PCB7:V3RO:HAIC:LYYL:YAGF:6GFU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Add missing topics

TL;DR

Topics greatly improve the discoverability of repos; please add the short code from the table below to the topics of your repo so that ministries can use GitHub's search to find out what repos belong to them and other visitors can find useful content (and reuse it!).

Why Topic

In short order we'll add our 800th repo. This large number clearly demonstrates the success of using GitHub and our Open Source initiative. This huge success means it's critical that we work to make our content as discoverable as possible; through discoverability, we promote code reuse across a large decentralized organization like the Government of British Columbia as well as allow ministries to find the repos they own.

What to do

Below is a table of abbreviations, a.k.a. short codes, for each ministry; they're the ones used in all @gov.bc.ca email addresses. Please add the short codes of the ministry or organization that "owns" this repo as a topic.

add a topic

That's it, you're done!

How to use

Once topics are added, you can use them in GitHub's search. For example, enter something like org:bcgov topic:citz to find all the repos that belong to Citizens' Services. You can refine this search by adding key words specific to a subject you're interested in. To learn more about searching through repos check out GitHub's doc on searching.

Pro Tip 🤓

  • If your org is not in the list below, or the table contains errors, please create an issue here.

  • While you're doing this, add additional topics that would help someone searching for "something". These can be the language used (javascript or R); something like opendata or data for data-only repos; or any other key words that are useful.

  • Add a meaningful description to your repo. This is hugely valuable to people looking through our repositories.

  • If your application is live, add the production URL.

Ministry Short Codes

Short Code Organization Name
AEST Advanced Education, Skills & Training
AGRI Agriculture
ALC Agriculture Land Commission
AG Attorney General
MCF Children & Family Development
CITZ Citizens' Services
DBC Destination BC
EMBC Emergency Management BC
EAO Environmental Assessment Office
EDUC Education
EMPR Energy, Mines & Petroleum Resources
ENV Environment & Climate Change Strategy
FIN Finance
FLNR Forests, Lands, Natural Resource Operations & Rural Development
HLTH Health
FLNR Indigenous Relations & Reconciliation
JEDC Jobs, Economic Development & Competitiveness
LBR Labour Policy & Legislation
LDB BC Liquor Distribution Branch
MMHA Mental Health & Addictions
MAH Municipal Affairs & Housing
BCPC Pension Corporation
PSA Public Safety & Solicitor General & Emergency B.C.
SDPR Social Development & Poverty Reduction
TCA Tourism, Arts & Culture
TRAN Transportation & Infrastructure

NOTE See an error or omission? Please create an issue here to get it remedied.

New nodes

Hello, how can I update the pool with a newly inserted node using indy-cli?

Unknown connection id: ian-co

Everything works fine, but issuing a credential fails with the following error message:

curl -X POST \
  http://localhost:5001/ian-co/issue-credential \
  -H 'content-type: application/json' \
  -d '[
  {
    "attributes": {
        "corp_num": "1234567890",
        "legal_name": "My Test Corp",
        "permit_id": "123834234234999",
        "permit_type": "Unlimited Use Authorization",
        "permit_issued_date": "2020-03-07T08:00:00+00:00",
        "permit_status": "ACT",
        "effective_date": "2020-03-07T08:00:00+00:00"
    },
    "schema": "ian-permit.ian-co",
    "version": "1.0.0"
  }
]
'
[{"success": false, "result": "Unknown connection id: ian-co"}]%                                                                                                                             

cred_def.py hard codes the cred-def tag

cli-scripts/cred_def.py, used by the ./manage cli scripts to generate a new cred-def, hard codes the tag for all generated cred-defs. Add support for passing the tag as a variable, the same as ./manage indy-cli create-signed-cred-def.

Rustup?

https://github.com/bcgov/von-network/blob/master/Dockerfile#L56-L59 has us grabbing rust using rustup. I understand this is a typical way of installing Rust, but I was curious if it doesn't make more sense in this context to install it as an apt package, since that's how everything else is being installed?

https://packages.ubuntu.com/en/xenial/rustc works as well as https://packages.ubuntu.com/en/xenial/cargo in my testing, but they are indeed older versions of the language. I will PR if it seems like a worthwhile consideration. Otherwise just close me.

Building troubles

Hi
First of all, many thanks for providing this and its elaborate documentation!
I am running into an issue however. I am fairly sure it is not directly related to the setup, but perhaps you could provide some guidance on resolving it.

I want to run a 'VPS' instance (https://github.com/bcgov/von-network#running-the-network-on-a-vps) of the von network on Ubuntu 18.04 LTS using docker ce 19.03.13.

When building using ./manage build I get the following error:

Unable to find image 'eclipse/che-ip:latest' locally
latest: Pulling from eclipse/che-ip
d6a5679aa3cf: Already exists
4498fa6d0d1b: Already exists
Digest: sha256:2ac584b1bd6e6ec2379760dd90ae63b61b67f40cc6331c6bfc46e5e747b767b5
Status: Downloaded newer image for eclipse/che-ip:latest
free(): invalid pointer
SIGABRT: abort
PC=0x7f12a7330f47 m=0 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x4afd50, 0xc420049cc0, 0xc420049ce8)
	/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0xe2 fp=0xc420049c90 sp=0xc420049c50
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0x2101270)
	github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x41 fp=0xc420049cc0 sp=0xc420049c90
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0x2101270)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x60 fp=0xc420049cf8 sp=0xc420049cc0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x756060, 0xc4200180c0)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x217 fp=0xc420049da0 sp=0xc420049cf8
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x77e548, 0xc420049e88, 0x410022, 0xc420060220)
	<autogenerated>:4 +0x46 fp=0xc420049de0 sp=0xc420049da0
github.com/docker/docker-credential-helpers/credentials.List(0x756ba0, 0x77e548, 0x7560e0, 0xc420080008, 0x0, 0x10)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x3e fp=0xc420049e68 sp=0xc420049de0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x756ba0, 0x77e548, 0x7ffee9a0b66e, 0x4, 0x7560a0, 0xc420080000, 0x7560e0, 0xc420080008, 0x40e398, 0x4d35c0)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x16d fp=0xc420049ed8 sp=0xc420049e68
github.com/docker/docker-credential-helpers/credentials.Serve(0x756ba0, 0x77e548)
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/obj-x86_64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1cb fp=0xc420049f58 sp=0xc420049ed8
main.main()
	/build/golang-github-docker-docker-credential-helpers-cMhSy1/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x4f fp=0xc420049f88 sp=0xc420049f58
runtime.main()
	/usr/lib/go-1.8/src/runtime/proc.go:185 +0x20a fp=0xc420049fe0 sp=0xc420049f88
runtime.goexit()
	/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc420049fe8 sp=0xc420049fe0

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
	/usr/lib/go-1.8/src/runtime/asm_amd64.s:2197 +0x1

rax    0x0
rbx    0x7ffee9a09a80
rcx    0x7f12a7330f47
rdx    0x0
rdi    0x2
rsi    0x7ffee9a09810
rbp    0x7ffee9a09b80
rsp    0x7ffee9a09810
r8     0x0
r9     0x7ffee9a09810
r10    0x8
r11    0x246
r12    0x7ffee9a09a80
r13    0x1000
r14    0x0
r15    0x30
rip    0x7f12a7330f47
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
Sending build context to Docker daemon  316.4kB
Step 1/9 : FROM bcgovimages/von-image:node-1.12-3
...

Interestingly, it does not seem to prevent the eclipse/che-ip image from being built, and I can run that independently using e.g. "docker run eclipse/che-ip". The other images build just fine.

If I then want to start the network using "./manage start --logs"
I get a complaint from docker-compose:

Define and run multi-container applications with Docker.

Usage:
  docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
  docker-compose -h|--help

Clearly docker-compose is unhappy with one of the docker-compose commands being built in the manage script, but given there are several, it is a bit hard to diagnose which one is failing :(
I am also unsure if this is related to the above issue with eclipse/che-ip

Any suggestions on how to resolve this?
Many thanks!

./manage start should be able to run in background

The ./manage start script starts the network with docker compose in detached mode, but then it follows the logs. I think the start command should be runnable in the background. Two possible solutions I can think of:

  1. Remove log command from up/start command
    Users can still manually run the ./manage logs command to achieve the same result

  2. Allow passing a --no-logs or --logs false argument to ./manage start
    This will skip the log command if the --no-logs or --logs false argument is passed.

I would be happy to create a pr with one of the above solutions.
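
A minimal bash sketch of option 2, assuming the relevant part of the manage script simply follows the logs after bringing the containers up (the flag handling and variable names here are hypothetical):

# Hypothetical sketch: skip following the logs when --no-logs is passed
FOLLOW_LOGS=1
for arg in "$@"; do
  if [ "$arg" = "--no-logs" ]; then
    FOLLOW_LOGS=0
  fi
done

docker-compose up -d
if [ "$FOLLOW_LOGS" = "1" ]; then
  docker-compose logs -f
fi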

./manage build error in von-network

In step 28, I get the error below:

error[E0119]: conflicting implementations of trait `std::convert::From<&_>` for type `types::to_sql::ToSqlOutput<'_>`:
  --> /home/indy/.cargo/registry/src/github.com-1ecc6299db9ec823/rusqlcipher-0.14.6/src/types/to_sql.rs:26:1
   |
18 | / impl<'a, T: ?Sized> From<&'a T> for ToSqlOutput<'a>
19 | |     where &'a T: Into<ValueRef<'a>>
20 | | {
21 | |     fn from(t: &'a T) -> Self {
22 | |         ToSqlOutput::Borrowed(t.into())
23 | |     }
24 | | }
   | |_- first implementation here
25 |
26 |   impl<'a, T: Into<Value>> From<T> for ToSqlOutput<'a> {
   |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `types::to_sql::ToSqlOutput<'_>`
   |
   = note: downstream crates may implement trait `std::convert::From<&_>` for type `types::value::Value`

error: aborting due to previous error

For more information about this error, try rustc --explain E0119.
error: Could not compile rusqlcipher.
warning: build failed, waiting for other jobs to finish...
error: build failed
ERROR: Service 'client' failed to build: The command '/bin/sh -c /home/indy/.cargo/bin/cargo build' returned a non-zero code: 101

Webserver_1 error

I am having trouble getting the webserver to work. I get the following error message:

webserver_1 | 2020-05-06 18:22:13,399|ERROR|libindy.py| src/services/pool/state_proof/mod.rs:86 | Given signature is not for current root hash, aborting

When I try: docker ps, I see that the webserver is running.

But when I try to go to localhost:9000 I get:

This site can't provide a secure connection
localhost sent an invalid response.
ERR_SSL_PROTOCOL_ERROR

indy-cli and cli are not found from fresh pull

I've just pulled this repository and I've found the following issue. When trying to run some commands using the manage script, they do not work despite appearing in the help output:

i.e.

comp@devop von-network % ./manage
... 
  indy-cli - Run Indy-Cli commands in a Indy-Cli container environment.
 
        ./manage indy-cli -h
          - Display specific help documentation.

  cli - Run a command in an Indy-Cli container.
        
        ./manage cli -h
          - Display specific help documentation.
comp@devop von-network % ./manage indy-cli 
./manage: line 239: realpath: command not found
comp@devop von-network % ./manage cli
./manage: line 239: realpath: command not found

I need to fix this so I can follow Aries edX Course. Thanks in advance.

Create ledger browser for the IIWBook Demo

Create a ledger browser instance and point it at the ledger being used for the IIWBook Demo (genesis file below). Create a URL - https://iiwbook.ledger-browser.vonx.io. Let me know if any of that is a pain (e.g. the https, etc.).

Thanks!

Genesis File (from @nrempel):

{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","client_ip":"52.224.127.162","client_port":9702,"node_ip":"52.224.127.162","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","client_ip":"52.224.127.162","client_port":9704,"node_ip":"52.224.127.162","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","client_ip":"52.224.127.162","client_port":9706,"node_ip":"52.224.127.162","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","client_ip":"52.224.127.162","client_port":9708,"node_ip":"52.224.127.162","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}

von-network with Raspberry Pi 4

Hi,

I want to use the aca-py Cloud Agent on a Raspberry Pi 4 together with von-network.

Is there a way to run von-network on the Pi? I think it won't be possible with Docker because of the different architectures.

Alternatively, is there a way to run von-network on my PC and connect it with the agent on the Pi?

Thanks in advance!

Error with libindystrgpostgres when create credential definition

I have another problem with the python file when trying to create the credential definition.

➜  von-network git:(master) ✗ ./manage \
  -v ${PWD}/cli-scripts/ \
  cli \
  walletName=myorg_issuer \
  storageType=default
  storageConfig={} storageCredentials={} walletKey=${key} \
  poolName=localpool \
  authorDid=NFP8kaWvCupbDQHQhErwXb \
  schemaId=${schemaID} \
  schemaName=anh-permit.anh-co \
  schemaVersion=1.0.0 \
  schemaAttributes=name \
  tag=tag \
  python cli-scripts/cred_def.py
Loading postgres
Traceback (most recent call last):
  File "cli-scripts/cred_def.py", line 145, in <module>
    stg_lib = CDLL("libindystrgpostgres.so")
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen(libindystrgpostgres.so, 6): image not found

I didn't use postgres in my case, so I left storageType at its default here.

Dockerfile - SSL Error

Hi,
When trying to build the docker image (./manage build) it was giving an SSL error.
Easily resolved by adding this line to the Dockerfile, before installing requirements.txt:

RUN python -m pip install --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --trusted-host pypi.org --upgrade pip

VirtualBox with Ubuntu server 18 as guest
Host: Windows 10.

Cheers

Tag versions to allow stable point references

VON network is an extremely useful tool for developers before they are ready to brave the public test ledgers on Sovrin, Indicio etc.

It would be great if we could just tag the current version so that future development that might break HEAD does not affect people pulling the repo. I know they could just pull a specific commit ref, but a tag makes it more obvious which one to use.

Problem in running in the browser the Aries OpenAPI Demo

Hi All, I used the demo smoothly for a bit.

Then I closed the browser windows, and tried to use the docker playground again.

The problem I am finding now is that I cannot launch commands from the newly created instances.

I cannot see the text I write, and it really seems something is wrong when setting up the instances.
I don't know the reason, because it has been working smoothly before.

(By the way, I am already logged in into docker)

Here is a picture so you can better understand my problem:

[Screenshot: 2019-07-23 16:14:22]

Someone has any ideas on this?

Thanks

The network is not starting

I have followed all the steps to spin up the network locally using docker in a linux environment.
But when I run the command sudo ./manage start

It gives the error:
An Http request took too long to complete - consider setting COMPOSE_HTTP_TIMEOUT to a higher value.

Can someone please suggest a workaround for this?

"./manage cli <ip_address>" does not make use of user supplied ip

running ./manage cli 107.121.220.88 will produce:

WARNING: The GENESIS_URL variable is not set. Defaulting to a blank string.
WARNING: The ANONYMOUS variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_SEED variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_CACHE_PATH variable is not set. Defaulting to a blank string.
WARNING: The WEB_ANALYTICS_SCRIPT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_TEXT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_URL variable is not set. Defaulting to a blank string.
von_generate_transactions


================================================================================================
Generating genesis transaction file:
nodeArg:
ipAddresses: 172.17.0.1,172.17.0.1,172.17.0.1,172.17.0.1
genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
------------------------------------------------------------------------------------------------
generate_indy_pool_transactions --nodes 4 --clients 0 --ips 172.17.0.1,172.17.0.1,172.17.0.1,172.17.0.1

BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP

------------------------------------------------------------------------------------------------
Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis

{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"172.17.0.1","client_port":9702,"node_ip":"172.17.0.1","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"172.17.0.1","client_port":9704,"node_ip":"172.17.0.1","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"172.17.0.1","client_port":9706,"node_ip":"172.17.0.1","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"172.17.0.1","client_port":9708,"node_ip":"172.17.0.1","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
================================================================================================

Note that the line containing "von_generate_transactions" does not include the user-supplied IP, indicating that no custom IP was recognized, so the command continues with the default values.

The issue is that, in docker-compose.yml, the client service does not pass through the IP (or IPS) environment variables. It should be changed to:

version: '3'
services:
  #
  # Client
  #
  client:
    image: von-network-base
    command: 'bash -c ''./scripts/start_client.sh'''
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - RUST_LOG=${RUST_LOG}
    networks:
      - von
    volumes:
      - client-cli:/home/indy/.indy-cli
      - client-data:/var/lib/indy

  #
  # Webserver
  #
  webserver:
    image: von-network-base
    command: 'bash -c ''./scripts/start_webserver.sh'''
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
      - GENESIS_URL=${GENESIS_URL}
      - ANONYMOUS=${ANONYMOUS}
      - LEDGER_SEED=${LEDGER_SEED}
      - LEDGER_CACHE_PATH=${LEDGER_CACHE_PATH}
      - MAX_FETCH=${MAX_FETCH:-50000}
      - RESYNC_TIME=${RESYNC_TIME:-120}
      - REGISTER_NEW_DIDS=${REGISTER_NEW_DIDS:-True}
      - LEDGER_INSTANCE_NAME=${LEDGER_INSTANCE_NAME:-localhost}
      - WEB_ANALYTICS_SCRIPT=${WEB_ANALYTICS_SCRIPT}
      - INFO_SITE_TEXT=${INFO_SITE_TEXT}
      - INFO_SITE_URL=${INFO_SITE_URL}
    networks:
      - von
    ports:
      - ${WEB_SERVER_HOST_PORT:-9000}:8000
    volumes:
      - ./config:/home/indy/config
      - ./server:/home/indy/server
      - webserver-cli:/home/indy/.indy-cli
      - webserver-ledger:/home/indy/ledger

  #
  # Nodes
  #
  nodes:
    image: von-network-base
    command: 'bash -c ''./scripts/start_nodes.sh'''
    networks:
      - von
    ports:
      - 9701:9701
      - 9702:9702
      - 9703:9703
      - 9704:9704
      - 9705:9705
      - 9706:9706
      - 9707:9707
      - 9708:9708
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
    volumes:
      - nodes-data:/home/indy/ledger

  node1:
    image: von-network-base
    command: 'bash -c ''./scripts/start_node.sh 1'''
    networks:
      - von
    ports:
      - 9701:9701
      - 9702:9702
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
    volumes:
      - node1-data:/home/indy/ledger

  node2:
    image: von-network-base
    command: 'bash -c ''./scripts/start_node.sh 2'''
    networks:
      - von
    ports:
      - 9703:9703
      - 9704:9704
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
    volumes:
      - node2-data:/home/indy/ledger

  node3:
    image: von-network-base
    command: 'bash -c ''./scripts/start_node.sh 3'''
    networks:
      - von
    ports:
      - 9705:9705
      - 9706:9706
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
    volumes:
      - node3-data:/home/indy/ledger

  node4:
    image: von-network-base
    command: 'bash -c ''./scripts/start_node.sh 4'''
    networks:
      - von
    ports:
      - 9707:9707
      - 9708:9708
    environment:
      - IP=${IP}
      - IPS=${IPS}
      - DOCKERHOST=${DOCKERHOST}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUST_LOG=${RUST_LOG}
    volumes:
      - node4-data:/home/indy/ledger

networks:
  von:

volumes:
  client-cli:
  client-data:
  webserver-cli:
  webserver-ledger:
  node1-data:
  node2-data:
  node3-data:
  node4-data:
  nodes-data:

and now, running ./manage cli 107.121.220.88, we get:

WARNING: The GENESIS_URL variable is not set. Defaulting to a blank string.
WARNING: The ANONYMOUS variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_SEED variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_CACHE_PATH variable is not set. Defaulting to a blank string.
WARNING: The WEB_ANALYTICS_SCRIPT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_TEXT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_URL variable is not set. Defaulting to a blank string.
von_generate_transactions -i 107.121.220.88


================================================================================================
Generating genesis transaction file:
nodeArg:
ipAddresses: 107.121.220.88,107.121.220.88,107.121.220.88,107.121.220.88
genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
------------------------------------------------------------------------------------------------
generate_indy_pool_transactions --nodes 4 --clients 0 --ips 107.121.220.88,107.121.220.88,107.121.220.88,107.121.220.88

BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP

------------------------------------------------------------------------------------------------
Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis

{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"107.121.220.88","client_port":9702,"node_ip":"107.121.220.88","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"107.121.220.88","client_port":9704,"node_ip":"107.121.220.88","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"107.121.220.88","client_port":9706,"node_ip":"107.121.220.88","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"107.121.220.88","client_port":9708,"node_ip":"107.121.220.88","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
================================================================================================

Much better!

How to change static page content

Hi
Thank you for your support. I would like to change the static page content. For example, if I want to change "localhost" to another name, or remove "Contributed by the Province of British Columbia", how do I do that? I changed the static files and it does not seem to work.

I would be grateful if you could provide an accurate guideline.
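
Part of that text is driven by environment variables on the webserver service in the docker-compose.yml shown earlier rather than by the static files, so one thing to try (a sketch, not a complete answer; the "Contributed by the Province of British Columbia" line may still be hard-coded in the page templates) is to set those variables before starting:

# Sketch: override the Ledger Browser's display strings via the webserver's
# environment variables (see the webserver service definition above).
export LEDGER_INSTANCE_NAME="My Dev Ledger"   # shown in place of the default "localhost"
export INFO_SITE_TEXT="My Organization"       # optional info link text
export INFO_SITE_URL="https://example.org"    # optional info link target
./manage start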

Issues restarting existing von-network instance

PR 59 fixed some issues with restarting an existing instance; however, the following issue now occurs.

The containers start, but the web server encounters the following exception and the Ledger Browser hangs at the Connecting to server screen.


webserver_1  | WARNING:indy.libindy:_indy_loop_callback: Function returned error (<ErrorCode.DidAlreadyExistsError: 600>, {'message': 'Error: DID already exists\n  Caused by: V4SGRU86Z58d6TV7PBUe6f\n', 'backtrace': ''})
webserver_1  | ERROR:server.anchor:(<ErrorCode.DidAlreadyExistsError: 600>, {'message': 'Error: DID already exists\n  Caused by: V4SGRU86Z58d6TV7PBUe6f\n', 'backtrace': ''})
webserver_1  | Traceback (most recent call last):
webserver_1  |   File "/home/indy/server/anchor.py", line 213, in _open_wallet
webserver_1  |     {'seed': LEDGER_SEED}
webserver_1  |   File "/home/indy/.pyenv/versions/3.5.6/lib/python3.5/site-packages/indy/did.py", line 51, in create_and_store_my_did
webserver_1  |     create_and_store_my_did.cb)
webserver_1  |   File "/home/indy/.pyenv/versions/3.5.6/lib/python3.5/asyncio/futures.py", line 381, in __iter__
webserver_1  |     yield self  # This tells Task to wait for completion.
webserver_1  |   File "/home/indy/.pyenv/versions/3.5.6/lib/python3.5/asyncio/tasks.py", line 310, in _wakeup
webserver_1  |     future.result()
webserver_1  |   File "/home/indy/.pyenv/versions/3.5.6/lib/python3.5/asyncio/futures.py", line 294, in result
webserver_1  |     raise self._exception
webserver_1  | indy.error.IndyError: (<ErrorCode.DidAlreadyExistsError: 600>, {'message': 'Error: DID already exists\n  Caused by: V4SGRU86Z58d6TV7PBUe6f\n', 'backtrace': ''})
webserver_1  |
webserver_1  | During handling of the above exception, another exception occurred:
webserver_1  |
webserver_1  | Traceback (most recent call last):
webserver_1  |   File "/home/indy/server/anchor.py", line 226, in open
webserver_1  |     await self._open_wallet()
webserver_1  |   File "/home/indy/server/anchor.py", line 216, in _open_wallet
webserver_1  |     raise AnchorException(str(e))
webserver_1  | server.anchor.AnchorException: (<ErrorCode.DidAlreadyExistsError: 600>, {'message': 'Error: DID already exists\n  Caused by: V4SGRU86Z58d6TV7PBUe6f\n', 'backtrace': ''})
webserver_1  | INFO:__main__:--- Trust anchor initialized ---

Steps to reproduce (the same sequence is collected as a script below):

  1. ./manage down to clean up existing containers and volumes.
  2. ./manage build to build the latest version of master.
  3. ./manage up; the containers will start and the Ledger Browser will connect this first time.
  4. ./manage stop to stop the containers without deleting the volumes.
  5. ./manage up; the containers will start, the web server will eventually encounter the exception above, and the Ledger Browser will hang at the "Connecting to server" screen.
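
For convenience, the same reproduction sequence as a copy-pasteable script (run from the repository root):

# Reproduce the restart failure: clean build, first start, stop, second start.
./manage down     # remove existing containers and volumes
./manage build    # build the latest version of master
./manage up       # first start: the Ledger Browser connects fine
./manage stop     # stop the containers without deleting the volumes
./manage up       # second start: the webserver hits DidAlreadyExistsError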

Problem Launching Webserver, Windows 10

  1. run ./manage start in the von root folder
  2. All docker processes start up
    => after a short time the webserver process exits with the error:
    webserver_1 | /home/indy/.pyenv/versions/3.6.9/bin/python: No module named server.server

The other 4 node containers run fine on Windows 10.

I am running Ubuntu 16.04 Linux on Windows 10 with Docker Toolbox / Oracle VM VirtualBox.
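
A guess at the cause: the webserver runs its Python code out of the ./server directory, which is volume-mounted into the container (see the webserver service in the docker-compose.yml above), and under Docker Toolbox / VirtualBox that mount can show up empty if the repository lives outside the VirtualBox shared folder, which would produce exactly this "No module named server.server" error. A quick way to check (sketch only):

# Sketch: list the mounted server directory inside a throwaway webserver
# container; if it comes back empty, the VirtualBox shared folder does not
# cover the von-network checkout.
docker-compose run --rm webserver ls -la /home/indy/server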

Problem starting the service

Hi,
I have cloned the latest repo and ran ./manage build successfully, but when I then try to run ./manage start it shows the errors captured in the attached screenshot.

Please suggest what I can do in this situation.
Thanks in advance.

Indy SDK Raw Build?

https://github.com/bcgov/von-network/blob/master/Dockerfile#L61-L66, which is:

# Build libindy
RUN git clone https://github.com/hyperledger/indy-sdk.git
WORKDIR /home/indy/indy-sdk/libindy
RUN git fetch
RUN git checkout 778a38d92234080bb77c6dd469a8ff298d9b7154
RUN /home/indy/.cargo/bin/cargo build

This seems to take the most time during the build, other than maybe rustup. I'm curious whether it's absolutely necessary to build libindy to make this von-network image. I know that the upstream repo probably does it, but my thinking is that we could do better than that. I'm guessing that the Sovrin repo's offering of libindy is not at a recent enough build to be usable? Could we maybe change the indy_stream parameter to grab a later package instead of building indy ourselves?
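
For comparison, here is a sketch of what consuming a pre-built libindy might look like inside the Dockerfile, based on the indy-sdk installation instructions; the repository, key ID, and distribution name are assumptions taken from those docs and would need to be checked against this image's base OS:

# Sketch only: install libindy from the Sovrin apt repository instead of
# building it with cargo; adjust the distribution ("bionic" here) to match
# the base image.
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE7709D068DB5E88
echo "deb https://repo.sovrin.org/sdk/deb bionic stable" > /etc/apt/sources.list.d/sovrin.list
apt-get update && apt-get install -y libindy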

And then I have a few related questions to the above question. Assuming for a second that we don't need to build libindy (which is a big assumption, I know), do we need to install Rust?

I know that the SDK was written in Rust, but it seems that it has bindings into other languages (like Python) that we could capitalize on, by just installing the binding instead of building from scratch?

In general, I'm a believer that Docker containers can be useful for building software projects, but there's a difference between building a Docker image that is useful for building a software package and building a Docker image for running/serving/executing the package. In this case, what I'm saying, I guess, is that to me it doesn't make sense to build the thing in the same container in which you use it.

I'm asking about this and many other issues on this repo because I'm trying to determine how amenable the owners would be to PRs that change/shift things. If the feeling is that my ideas here are a bit too zany, or that they are good ideas but there are more important things to do than review PRs addressing them, then I won't bother. I'm not saying "this is something you need to fix"; I'm asking "is this something you would consider fixes on?"

/bin/sh -c pip install --no-cache-dir -r server/requirements.txt' returned a non-zero code: 1

While trying to start up the VON network with "von-network/manage build", the error listed below occurs.


Resulting Output
Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /simple/pyyaml/
Could not find a version that satisfies the requirement pyyaml~=5.1.1 (from -r server/requirements.txt (line 1)) (from versions: )
No matching distribution found for pyyaml~=5.1.1 (from -r server/requirements.txt (line 1))
The command '/bin/sh -c pip install --no-cache-dir -r server/requirements.txt' returned a non-zero code: 1


It seems like the docker command listed below, run while building the von-network-base image, is failing:

"Cmd":["/bin/sh","-c","pip install --no-cache-dir -r server/requirements.txt"]



Can you guys help me resolve this?
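
The repeated "Connection reset by peer" retries suggest the Docker build cannot reach PyPI (a network, DNS, or proxy issue) rather than a problem with the requirements file itself. One way to check (a sketch; the python:3.6-slim tag is just an example image) is to try the same download from a throwaway container:

# Sketch: confirm the Docker daemon's network can download the package that
# the build step is failing on.
docker run --rm python:3.6-slim pip download --no-cache-dir "pyyaml~=5.1.1" -d /tmp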

Question: Can I connect an Android App to VON network?

I tried to develop an Android app that uses the indy-sdk. I had problems setting up the SDK locally and found the von-network.
Is it possible to connect my Android app from Android Studio to the von-network and play around a bit with everything?

Getting trust anchor seed or adding a new steward

Hi,

Is it somehow possible to retrieve the trust anchor seed from a running instance of the von-network, or to add another steward/trustee that is able to authenticate new DIDs besides the "trust anchor" API/UI?
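
As far as I know the seed itself is not exposed by a running instance, but the Ledger Browser's trust anchor is configured through the LEDGER_SEED environment variable shown in the docker-compose.yml earlier on this page, so for a sandbox one option (a sketch, assuming you control the deployment) is to start the network with a seed you choose and keep a record of it:

# Sketch: start von-network with a known trust anchor seed (must be exactly
# 32 characters) so the corresponding DID and keys can be recreated elsewhere.
export LEDGER_SEED="000000000000000000000000MySeed01"
./manage start --logs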

Can't write a schema definition to test ledger at http://test.bcovrin.vonx.io/

Currently, I'm following the steps at: https://github.com/bcgov/von-network/blob/master/docs/Writing%20Transactions%20to%20a%20Ledger%20for%20an%20Un-privileged%20Author.md
except that:
#1 I init my pool with the test ledger instead:
./manage cli reset; ./manage cli init-pool localpool http://test.bcovrin.vonx.io/genesis
#2 I register a new DID for my agent and my endorser via the UI at http://test.bcovrin.vonx.io using the wallet seed, as instructed. (However, when I try to run did import in my local wallet, I notice that the associated verkey on that test ledger is different from the verkey in my wallet, so I'm not sure why that happens.)
#3 I write the transaction to the test ledger as:

ledger get-nym did=WXwizNAUxGMB9QrgSDZG9T
ledger get-nym did=NFP8kaWvCupbDQHQhErwXb
ledger load-transaction file=/tmp/anh-permit.anh-co_endorser_signed_schema.txn
ledger custom context

and I got the following error:

Following NYM has been received.
Metadata:
+------------------------+-----------------+---------------------+---------------------+
| Identifier             | Sequence Number | Request ID          | Transaction time    |
+------------------------+-----------------+---------------------+---------------------+
| LibindyDid111111111111 | 23684           | 1592333926246546200 | 2020-06-16 18:49:33 |
+------------------------+-----------------+---------------------+---------------------+
Data:
+------------------------+------------------------+----------------------------------------------+----------+
| Identifier             | Dest                   | Verkey                                       | Role     |
+------------------------+------------------------+----------------------------------------------+----------+
| V4SGRU86Z58d6TV7PBUe6f | WXwizNAUxGMB9QrgSDZG9T | H6cXF5VtUv8bgqT8wXa1XGZFqBDUJ8hjefAnebghz74j | ENDORSER |
+------------------------+------------------------+----------------------------------------------+----------+
pool(localpool):indy> ledger get-nym did=NFP8kaWvCupbDQHQhErwXb
Following NYM has been received.
Metadata:
+------------------------+-----------------+---------------------+---------------------+
| Identifier             | Sequence Number | Request ID          | Transaction time    |
+------------------------+-----------------+---------------------+---------------------+
| LibindyDid111111111111 | 23245           | 1592333936373336900 | 2020-06-12 15:09:48 |
+------------------------+-----------------+---------------------+---------------------+
Data:
+------------------------+------------------------+----------------------------------------------+------+
| Identifier             | Dest                   | Verkey                                       | Role |
+------------------------+------------------------+----------------------------------------------+------+
| V4SGRU86Z58d6TV7PBUe6f | NFP8kaWvCupbDQHQhErwXb | Cah1iVzdB6UF5HVCJ2ENUrqzAKQsoWgiWyUopcmN3WHd | -    |
+------------------------+------------------------+----------------------------------------------+------+
pool(localpool):indy> ledger load-transaction file=/tmp/anh-permit.anh-co_endorser_signed_schema.txn
Transaction has been loaded: {"endorser":"WXwizNAUxGMB9QrgSDZG9T","identifier":"NFP8kaWvCupbDQHQhErwXb","operation":{"data":{"attr_names":["test_value"],"name":"anh-permit.anh-co","version":"1.0.0"},"type":"101"},"protocolVersion":2,"reqId":1592333696665373500,"signatures":{"NFP8kaWvCupbDQHQhErwXb":"4oH8NqJBQyxYoYPq3NyUF1ZYVSQGxuSnsUUF3JWpd6PB4vBwuquwzwYieHynBnzADVu3FoM9YvQdL9UFeiTKqeAh","WXwizNAUxGMB9QrgSDZG9T":"5PLLUWjDtcfxDrPmRUcwzvxxML5qpFw7tkFmgr8rkVAG3wXLkAkiyr9xnBFcpgcNixENDFRXy9VfvB4nttAGEKRv"}}
pool(localpool):indy> ledger custom context
Transaction stored into context: "{\"endorser\":\"WXwizNAUxGMB9QrgSDZG9T\",\"identifier\":\"NFP8kaWvCupbDQHQhErwXb\",\"operation\":{\"data\":{\"attr_names\":[\"test_value\"],\"name\":\"anh-permit.anh-co\",\"version\":\"1.0.0\"},\"type\":\"101\"},\"protocolVersion\":2,\"reqId\":1592333696665373500,\"signatures\":{\"NFP8kaWvCupbDQHQhErwXb\":\"4oH8NqJBQyxYoYPq3NyUF1ZYVSQGxuSnsUUF3JWpd6PB4vBwuquwzwYieHynBnzADVu3FoM9YvQdL9UFeiTKqeAh\",\"WXwizNAUxGMB9QrgSDZG9T\":\"5PLLUWjDtcfxDrPmRUcwzvxxML5qpFw7tkFmgr8rkVAG3wXLkAkiyr9xnBFcpgcNixENDFRXy9VfvB4nttAGEKRv\"}}".
Would you like to send it? (y/n)
y
Transaction has been rejected: The action is forbidden

And if I try to do it again, I get another error (it looks like a timeout error):

Transaction response has not been received

When I checked the logs at ./manage logs, I found an error log:

webserver_1  | 2020-06-16 19:04:59,213|ERROR|libindy.py|	src/services/pool/state_proof/mod.rs:86 | Given signature is not for current root hash, aborting

but I'm not sure whether that is relevant.

I am currently stuck at this point; my goal at the moment is simply to write my own schema to the test ledger.

Error when running ./manage cli (Docker)

When attempting to run the ./manage cli command, the cp .indy-cli/networks/sandbox/pool_transactions_genesis .indy_client/pool/sandbox/sandbox.txn command fails. The .indy-cli/networks/sandbox/pool_transactions_genesis file does not exist.

cp .indy-cli/networks/sandbox/pool_transactions_genesis .indy_client/pool/sandbox/sandbox.txn

Changing the command to cp /home/indy/ledger/sandbox/pool_transactions_genesis .indy_client/pool/sandbox/sandbox.txn fixed the problem, but I'm not sure if that is the correct fix.

Time Out Issue in VON Network StartUp

I am getting a timeout error on VON Network startup despite having good internet connectivity. It worked fine the first time I installed and started it.

node3_1 | 2019-03-18 02:12:28,046 | INFO | zstack.py (584) | connect | CONNECTION: Node3 looking for Node2 at 172.20.10.12:9703
von-web_1 | ERROR|indy::errors::indy | src/errors/indy.rs:68 | Casting error to ErrorCode: Timeout
von-web_1 | ERROR|indy::services::pool | src/services/pool/mod.rs:426 | Pool worker thread finished with error Timeout
von-web_1 | _indy_loop_callback: Function returned error 307
von-web_1 | Traceback (most recent call last):
von-web_1 |   File "server.py", line 295, in <module>
von-web_1 |     loop.run_until_complete(boot())
von-web_1 |   File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
von-web_1 |     return future.result()
von-web_1 |   File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
von-web_1 |     raise self._exception
von-web_1 |   File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
von-web_1 |     result = coro.throw(exc)
von-web_1 |   File "server.py", line 112, in boot
von-web_1 |     await pool.open()
von-web_1 |   File "/home/indy/.local/share/virtualenvs/server-8XoupS0v/lib/python3.5/site-packages/von_agent/nodepool.py", line 125, in open
von-web_1 |     self._handle = await pool.open_pool_ledger(self.name, None)
von-web_1 |   File "/home/indy/.local/share/virtualenvs/server-8XoupS0v/lib/python3.5/site-packages/indy/pool.py", line 82, in open_pool_ledger
von-web_1 |     open_pool_ledger.cb)
von-web_1 |   File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
von-web_1 |     yield self  # This tells Task to wait for completion.
von-web_1 |   File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
von-web_1 |     future.result()
von-web_1 |   File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
von-web_1 |     raise self._exception
von-web_1 | indy.error.IndyError: ErrorCode.PoolLedgerTimeout
von_von-web_1 exited with code 1

Issue with Ledger Browser (UI) disconnecting from nodes

After running for an extended period of time the Ledger Browser (UI) can lose its connections to the nodes. Typically a restart fixes the issue (see the sketch after the list below); however, it would be nice to find a more permanent solution.

This causes a few issues:

  • The validator node status stops working. The node status indicators (node circles) do not appear on the home page.
    • Under the covers the detailed node status is not collected, and since this is typically used to monitor the health of a von-network instance the ledger presents as non-functional (down).
  • The DID registration controls stop functioning, since the connection to the ledger for writing transactions is lost.
  • Ledger browsing functionality is lost, since connection to the ledger for reading is lost.
  • From a client (agent) perspective the ledger typically continues to function without any issues, as it is only the Ledger Browser's (von-network UI's) connection to the ledger that is affected.
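
Until a more permanent fix exists, a lighter-weight workaround than restarting the whole network may be to restart only the webserver container, since the validator nodes themselves keep working (a sketch; the exact container name depends on your compose project name, so check docker ps for the *_webserver_1 container first):

# Sketch: bounce only the Ledger Browser container; the service name comes
# from the docker-compose.yml shown earlier, and the container name is an
# assumption to be confirmed with "docker ps".
docker restart von_webserver_1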

problems running cli

When running ./manage cli I get the following error, where it cannot find the pool transactions genesis file.

WARNING: The RUST_LOG variable is not set. Defaulting to a blank string.
WARNING: The IP variable is not set. Defaulting to a blank string.
WARNING: The IPS variable is not set. Defaulting to a blank string.
von_generate_transactions


================================================================================================
Generating genesis transaction file:
nodeArg: 
DOCKERHOST: 172.17.0.1
genesisFileTemplatePath: /home/indy/.indy-cli/networks/sandbox/pool_transactions_genesis
genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
------------------------------------------------------------------------------------------------
generate_indy_pool_transactions --nodes 4 --clients 0 --ips 172.17.0.1,172.17.0.1,172.17.0.1,172.17.0.1 

BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw

------------------------------------------------------------------------------------------------
Generated genesis transaction file; /home/indy/.indy-cli/networks/sandbox/pool_transactions_genesis

{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","client_ip":"172.17.0.1","client_port":9702,"node_ip":"172.17.0.1","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv","identifier":"Th7MpTaRZVRYnPiabds81Y","txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62","type":"0"}
{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","client_ip":"172.17.0.1","client_port":9704,"node_ip":"172.17.0.1","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb","identifier":"EbP4aYNeTHL6q385GuVpRV","txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc","type":"0"}
{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","client_ip":"172.17.0.1","client_port":9706,"node_ip":"172.17.0.1","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya","identifier":"4cU41vWW82ArfxJxHkzXPG","txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4","type":"0"}
{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","client_ip":"172.17.0.1","client_port":9708,"node_ip":"172.17.0.1","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA","identifier":"TWwCRQRZ2ZHMJFn9TzLp7W","txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008","type":"0"}

------------------------------------------------------------------------------------------------
Final genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis

cat: /home/indy/ledger/sandbox/pool_transactions_genesis: No such file or directory
================================================================================================
Loading module /home/indy/.pyenv/versions/3.5.5/lib/python3.5/site-packages/config/config-crypto-example1.py
libpbc.so.1: cannot open shared object file: No such file or directory

I was wondering how to fix this.

Running Nodes on different machines

I have von-network running on 2 different machines: the first one with default values, and the second one running with GENESIS_URL pointing to the first. With this setup, all 4 nodes are running only on the first machine.

Is there any command to run 2 nodes on each machine, like in the indy-node documentation
https://github.com/hyperledger/indy-node/blob/master/docs/source/start-nodes.md in the section "Remote Test Network Example"?

Also, will the UI reflect/update with info from the second node running on a different machine, and will the genesis file reflect these changes with the new IPs and ports?

There's a script https://github.com/bcgov/von-network/blob/master/scripts/start_node.sh but I'm failing to use it.
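
There does not appear to be a single documented command for this, but the docker-compose.yml shown earlier defines one service per node, so a rough, untested sketch of the idea is to give both machines the same IPS list and run a subset of the node services on each:

# Rough sketch only: both machines must use the same four-entry IPS list so
# that they generate identical genesis files. Replace the example IPs with
# the machines' public addresses.
export IPS="203.0.113.10,203.0.113.10,203.0.113.20,203.0.113.20"

# On machine A (203.0.113.10):
docker-compose up -d node1 node2

# On machine B (203.0.113.20):
docker-compose up -d node3 node4

# Point the webserver and clients at the resulting genesis file, e.g. by
# serving it from one machine and setting GENESIS_URL on the other.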

Running as a VPS on an EC2 instance requires ensuring security groups are correct.

Installing the von network on an AWS EC2 can result in the nodes failing to become fully active due to AWS security group configuration.

If, after starting the installed VON network with "./manage start", you see the following repeating pattern of messages in the "./manage logs" output, you need to check that your AWS security groups are set to allow communication between the EC2 instance's public addresses. This might be worth calling out in the "Running the Network on a VPS" section of the README.md. The reason is that the Indy nodes are configured to listen on 0.0.0.0 but send outgoing node traffic to the public addresses, not the private VPC addresses. The public addresses usually have security restrictions applied, especially when you are testing a deployment.

node2_1 | 2020-05-07 23:56:30,728|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2020-05-07 23:56:30,729|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2020-05-07 23:56:30,730|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2020-05-07 23:56:30,730|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2020-05-07 23:56:30,730|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.

The suggested steps for deploying the VON VPS on an AWS EC2 instance are listed below; a sketch of the equivalent AWS CLI calls for the security group rules follows the list:

  • Create a VPC and configure the Network ACL to allow inbound traffic on port 22 AND inbound and outbound traffic on ports 9701-9708.
  • Deploy an Ubuntu EC2 instance into a public subnet and configure a Security Group to allow inbound traffic on port 22 AND inbound and outbound traffic on ports 9701-9708.
  • For the outbound traffic rules on both the Network ACL and the Security Group, the destination IP will be the public IP address of the EC2 instance you deploy (this allows the nodes inside the docker container to call out to each other).
  • For the inbound traffic rules on both the Network ACL and the Security Group, the source IP addresses will be the public IP address of the EC2 instance AND any IP from which you will be running clients, e.g. an ssh client or the indy-cli.
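
For reference, a sketch of the security group side of this using the AWS CLI (all IDs and IP addresses below are placeholders):

SG_ID=sg-0123456789abcdef0       # placeholder: security group attached to the instance
PUBLIC_IP=203.0.113.10           # placeholder: public IP of the EC2 instance
CLIENT_IP=198.51.100.25          # placeholder: IP you run ssh / indy-cli from

# Node-to-node traffic between the containers via the instance's public address
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 9701-9708 --cidr "$PUBLIC_IP/32"

# Client access to the node ports and SSH
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 9701-9708 --cidr "$CLIENT_IP/32"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr "$CLIENT_IP/32"

# The default security group egress rule already allows all outbound traffic;
# the Network ACL rules still need to be added separately
# (e.g. with aws ec2 create-network-acl-entry).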

Running the Network in Docker

$ ./manage start-web GENESIS_URL=https://raw.githubusercontent.com/sovrin-foundation/sovrin/master/sovrin/pool_transactions_sandbox_genesis
Unable to find image 'eclipse/che-ip:latest' locally
latest: Pulling from eclipse/che-ip
d6a5679aa3cf: Pulling fs layer
4498fa6d0d1b: Pulling fs layer
4498fa6d0d1b: Verifying Checksum
4498fa6d0d1b: Download complete
d6a5679aa3cf: Verifying Checksum
d6a5679aa3cf: Download complete
d6a5679aa3cf: Pull complete
4498fa6d0d1b: Pull complete
Digest: sha256:2ac584b1bd6e6ec2379760dd90ae63b61b67f40cc6331c6bfc46e5e747b767b5
Status: Downloaded newer image for eclipse/che-ip:latest
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.

Continue with the new image? [yN] y
ERROR: pull access denied for von-network-base, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
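
The final error usually just means the von-network-base image does not yet exist locally; it is built by this repository rather than pulled from a registry, so building first should get past it:

# Build the local von-network-base image, then start the ledger browser
# against the Sovrin sandbox genesis file as in the original command.
./manage build
./manage start-web GENESIS_URL=https://raw.githubusercontent.com/sovrin-foundation/sovrin/master/sovrin/pool_transactions_sandbox_genesis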

Add project lifecycle badge

No Project Lifecycle Badge found in your readme!

Hello! I scanned your readme and could not find a project lifecycle badge. A project lifecycle badge will provide contributors to your project as well as other stakeholders (platform services, executive) insight into the lifecycle of your repository.

What is a Project Lifecycle Badge?

It is a simple image that neatly describes your project's stage in its lifecycle. More information can be found in the project lifecycle badges documentation.

What do I need to do?

I suggest you make a PR into your README.md and add a project lifecycle badge near the top where it is easy for your users to pick it up :). Once it is merged feel free to close this issue. I will not open up a new one :)

Cannot start webserver - python error - Ubuntu 18.04 in Vagrant Windows

Following https://github.com/bcgov/von-network/blob/master/docs/UsingVONNetwork.md
Ubuntu 18.04 in Vagrant on Windows

When running sudo ./manage build
in step "pip install --no-cache-dir -r server/requirements.txt"

Successfully installed MarkupSafe-1.1.1 aiohttp-jinja2-1.1.2 jinja2-2.11.2 meld3-2.0.1 pyyaml-5.1.2 supervisor-4.0.4
You are using pip version 9.0.3, however version 20.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Removing intermediate container 345757aaee0b

When running: sudo ./manage start --logs

webserver_1 | 2020-05-01 22:55:06,855|WARNING|libindy.py|_indy_loop_callback: Function returned error
webserver_1 | 2020-05-01 22:55:06,860|ERROR|anchor.py|Initialization error:
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/home/indy/server/anchor.py", line 221, in _open_pool
webserver_1 |     self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg))
webserver_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy/pool.py", line 83, in open_pool_ledger
webserver_1 |     open_pool_ledger.cb)
webserver_1 | indy.error.PoolLedgerTimeout
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/home/indy/server/anchor.py", line 314, in open
webserver_1 |     await self._open_pool()
webserver_1 |   File "/home/indy/server/anchor.py", line 223, in _open_pool
webserver_1 |     raise AnchorException("Error opening pool ledger connection") from e
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
webserver_1 | 2020-05-01 22:55:06,906|INFO|server.py|--- Trust anchor initialized ---

