
constellation's Introduction

Constellation

Constellation is a self-managing, peer-to-peer system in which each node:

  • Hosts a number of NaCl (Curve25519) public/private key pairs.

  • Automatically discovers other nodes on the network after synchronizing with as little as one other host.

  • Synchronizes a directory of public keys mapped to recipient hosts with other nodes on the network.

  • Exposes a public API which allows other nodes to send encrypted bytestrings to your node, and to synchronize, retrieving information about the nodes that your node knows about.

  • Exposes a private API which:

    • Allows you to send a bytestring to one or more public keys, returning a content-addressable identifier. This bytestring is encrypted transparently and efficiently (at symmetric encryption speeds) before being transmitted over the wire to the correct recipient nodes (and only those nodes.) The identifier is a hash digest of the encrypted payload that every recipient node receives. Each recipient node also receives a small blob encrypted for their public key which contains the Master Key for the encrypted payload.

    • Allows you to receive a decrypted bytestring based on an identifier. Payloads which your node has sent or received can be decrypted and retrieved in this way.

    • Exposes methods for deletion, resynchronization, and other management functions.

  • Supports a number of storage backends including LevelDB, BerkeleyDB, SQLite, and Directory/Maildir-style file storage suitable for use with any FUSE adapter, e.g. for AWS S3.

  • Uses mutually-authenticated TLS with modern settings and various trust models including hybrid CA/tofu (default), tofu (think OpenSSH), and whitelist (only some set of public keys can connect.)

  • Supports access controls like an IP whitelist.

Conceptually, one can think of Constellation as an amalgamation of a distributed key server, PGP encryption (using modern cryptography,) and Mail Transfer Agents (MTAs.)

Constellation's current primary application is to implement the "privacy engine" of Quorum, a fork of Ethereum with support for private transactions that function exactly as described in this README. Private transactions in Quorum contain only a flag indicating that they're private and the content-addressable identifier described here.

Constellation can be run stand-alone as a daemon via constellation-node, or imported as a Haskell library, which allows you to implement custom storage and encryption logic.

Installation

Prerequisites

  1. Install supporting libraries:
    • Ubuntu: apt-get install libdb-dev libleveldb-dev libsodium-dev zlib1g-dev libtinfo-dev
    • Red Hat: dnf install libdb-devel leveldb-devel libsodium-devel zlib-devel ncurses-devel
    • MacOS: brew install berkeley-db leveldb libsodium

Downloading precompiled binaries

Constellation binaries for most major platforms can be downloaded here.

Installation from source

  1. First time only: Install Stack:
    • Linux: curl -sSL https://get.haskellstack.org/ | sh
    • MacOS: brew install haskell-stack

  2. First time only: run stack setup to install GHC, the Glasgow Haskell Compiler

  3. Run stack install

Generating keys

  1. To generate a key pair named "node", run constellation-node --generatekeys=node

If you choose to lock the keys with a password, they will be encrypted using a master key derived from the password using Argon2id. This is designed to be a very expensive operation to deter password-cracking efforts. When Constellation encounters a locked key, it will prompt for the password, after which the decrypted key will live in memory until the process ends.
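
By way of illustration, here is a minimal Python sketch of this kind of password-based key locking, using PyNaCl's Argon2id KDF. The function names, salt handling, and opslimit/memlimit parameters below are illustrative assumptions, not Constellation's actual on-disk format or parameters:

import nacl.pwhash
import nacl.utils
from nacl.secret import SecretBox

def lock_private_key(private_key_bytes: bytes, password: bytes):
    # Derive a symmetric master key from the password; deliberately expensive.
    salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
    master_key = nacl.pwhash.argon2id.kdf(
        SecretBox.KEY_SIZE, password, salt,
        opslimit=nacl.pwhash.argon2id.OPSLIMIT_SENSITIVE,
        memlimit=nacl.pwhash.argon2id.MEMLIMIT_SENSITIVE)
    # Encrypt the private key under the derived master key (PyNaCl generates
    # and prepends a random nonce).
    locked = SecretBox(master_key).encrypt(private_key_bytes)
    return salt, locked

def unlock_private_key(salt: bytes, locked: bytes, password: bytes) -> bytes:
    master_key = nacl.pwhash.argon2id.kdf(
        SecretBox.KEY_SIZE, password, salt,
        opslimit=nacl.pwhash.argon2id.OPSLIMIT_SENSITIVE,
        memlimit=nacl.pwhash.argon2id.MEMLIMIT_SENSITIVE)
    return SecretBox(master_key).decrypt(locked)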

Running

  1. Run constellation-node <path to config file> or specify configuration variables as command-line options (see constellation-node --help)

For now, please refer to the Constellation client Go library for an example of how to use Constellation. More detailed documentation coming soon!
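
In the meantime, a minimal Python sketch of driving the private API over the IPC socket follows. It assumes the third-party requests-unixsocket package; the "/send" and "/receive" endpoint names, the field names, and the base64 encoding of payloads mirror the JSON shapes shown later in this README and the Go client, but treat them as assumptions rather than a formal API reference:

import base64
import urllib.parse
import requests_unixsocket

SOCKET_PATH = "constellation.ipc"  # hypothetical path to the node's IPC socket
BASE_URL = "http+unix://" + urllib.parse.quote(SOCKET_PATH, safe="")
session = requests_unixsocket.Session()

def send(payload: bytes, sender_pub_b64: str, recipient_pub_b64s: list) -> str:
    # Submit a payload for the given recipient public keys; returns the
    # content-addressable identifier (hash of the encrypted payload).
    resp = session.post(BASE_URL + "/send", json={
        "payload": base64.b64encode(payload).decode(),
        "from": sender_pub_b64,
        "to": recipient_pub_b64s,
    })
    resp.raise_for_status()
    return resp.json()["key"]

def receive(key: str) -> bytes:
    # Retrieve and decrypt a payload previously sent to or by this node.
    resp = session.post(BASE_URL + "/receive", json={"key": key})
    resp.raise_for_status()
    return base64.b64decode(resp.json()["payload"])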

Configuration File Format

See sample.conf.

How It Works

Each Constellation node hosts some number of key pairs, and advertises a publicly accessible FQDN/port for other hosts to connect to.

Nodes can be started with a reference to existing nodes on the network (with the othernodes configuration variable,) or without, in which case some other node must later be pointed to this node to achieve synchronization.

When a node starts up, it will reach out to each node in othernodes, and learn about the public keys they host, as well as other nodes in the network. In short order, the node's public key directory will be the same as that of all other nodes, and you can start addressing messages to any of the known public keys.
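
A minimal Python sketch of this directory merge, under the assumption that each node advertises a map of public keys to host URLs plus the set of party URLs it knows about (mirroring the partyinfo exchanges visible in the debug logs further down); this illustrates the idea only, not Constellation's actual synchronization protocol:

from dataclasses import dataclass, field

@dataclass
class PartyInfo:
    url: str                                        # this node's advertised URL
    recipients: dict = field(default_factory=dict)  # public key (base64) -> host URL
    parties: set = field(default_factory=set)       # all known node URLs

    def merge(self, other: "PartyInfo") -> None:
        # Learn every key and peer the other node advertises; repeated
        # exchanges converge all nodes on the same directory.
        self.recipients.update(other.recipients)
        self.parties |= other.parties | {other.url}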

This is what happens when you use the send function of the Private API to send the bytestring foo to the public key ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc=:

  1. You send a POST API request to the Private API socket like: {"payload": "foo", "from": "mypublickey", "to": "ROAZBWtSacxXQrOe3FGAqJDyJjFePR5ce4TSIzmJ0Bc="}

  2. The local node generates using /dev/urandom (or similar):

    • A random Master Key (MK) and nonce
    • A random recipient nonce
  3. The local node encrypts the payload using NaCl secretbox using the random MK and nonce.

  4. The local node generates an MK container for each recipient public key; in this case, simply one container for ROAZ..., using NaCl box and the recipient nonce.

    NaCl box works by deriving a shared key based on your private key and the recipient's public key. This is known as elliptic curve key agreement.

    Note that the sender public key and recipient public key we specified above aren't enough to perform the encryption. Therefore, the node will check to see that it is actually hosting the private key that corresponds to the given public key before generating an MK container for each recipient based on SharedKey(yourprivatekey, recipientpublickey) and the recipient nonce.

    We now have:

    • An encrypted payload which is foo encrypted with the random MK and a random nonce. This is the same for all recipients.

    • A random recipient nonce that also is the same for all recipients.

    • For each recipient, the MK encrypted with the shared key of your private key and their public key. This MK container is unique per recipient, and is only transmitted to that recipient.

  5. For each recipient, the local node looks up the recipient host, and transmits to it:

    • The sender's (your) public key

    • The encrypted payload and nonce

    • The MK container for that recipient and the recipient nonce

  6. The recipient node returns a SHA3-512 hash digest of the encrypted payload, which represents its storage address.

    (Note that it is not possible for the sender to dictate the storage address. Every node generates it independently by hashing the encrypted payload.)

  7. The local node stores the payload locally, generating the same hash digest.

  8. The API call returns successfully once all nodes have confirmed receipt and storage of the payload, and returned a hash digest.
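
A minimal Python sketch of the cryptographic parts of this send flow (steps 2-4 plus the hash digest from steps 6-7), using the PyNaCl bindings to NaCl/libsodium; this mirrors the scheme described above but is not Constellation's actual implementation or wire format:

import hashlib
import nacl.utils
from nacl.public import PrivateKey, PublicKey, Box
from nacl.secret import SecretBox

def encrypt_payload(payload: bytes, sender_private: PrivateKey,
                    recipient_publics: list):
    # Random Master Key (MK), payload nonce, and recipient nonce.
    mk = nacl.utils.random(SecretBox.KEY_SIZE)
    payload_nonce = nacl.utils.random(SecretBox.NONCE_SIZE)
    recipient_nonce = nacl.utils.random(Box.NONCE_SIZE)

    # Encrypt the payload once with NaCl secretbox under the MK.
    encrypted_payload = SecretBox(mk).encrypt(payload, payload_nonce).ciphertext

    # One MK container per recipient, via NaCl box (shared key derived from
    # the sender's private key and the recipient's public key).
    mk_containers = {
        bytes(pub): Box(sender_private, pub).encrypt(mk, recipient_nonce).ciphertext
        for pub in recipient_publics
    }

    # Content-addressable identifier: SHA3-512 digest of the encrypted payload.
    identifier = hashlib.sha3_512(encrypted_payload).digest()
    return identifier, encrypted_payload, payload_nonce, recipient_nonce, mk_containers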

Now, through some other mechanism, you'll inform the recipient that they have a payload waiting for them with the identifier owqkrokwr, and they will make a call to the receive method of their Private API:

  1. Make a call to the Private API socket receive method: {"key": "owqkrokwr"}

  2. The local node will look in its storage for the key owqkrokwr, and abort if it isn't found.

  3. When found, the node will use the information about the sender as well as its private key to derive SharedKey(senderpublickey, yourprivatekey) and decrypt the MK container using NaCl box with the recipient nonce.

  4. Using the decrypted MK, the local node will decrypt the encrypted payload using NaCl secretbox using the main nonce.

  5. The API call returns the decrypted data.
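
And the matching receive-side decryption (steps 3-4 above), with the same PyNaCl-based caveats as the send sketch earlier:

from nacl.public import PrivateKey, PublicKey, Box
from nacl.secret import SecretBox

def decrypt_payload(encrypted_payload: bytes, payload_nonce: bytes,
                    mk_container: bytes, recipient_nonce: bytes,
                    sender_public: PublicKey,
                    recipient_private: PrivateKey) -> bytes:
    # Derive SharedKey(senderpublickey, yourprivatekey) and open the MK container.
    mk = Box(recipient_private, sender_public).decrypt(mk_container, recipient_nonce)
    # Use the recovered MK to open the secretbox around the payload.
    return SecretBox(mk).decrypt(encrypted_payload, payload_nonce)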

Getting Help

Stuck at some step? Please join our Slack community for support.

constellation's People

Contributors

bts, cartazio, conor10, fixanoid, namtruong, patrickmn, rushikeshacharya, trung

constellation's Issues

Delete entry from database

I found pull request #26 on deleting entries from the database, but I can't fully figure out how to actually call the private API to do so. My Haskell knowledge is pretty much non-existent, so any pointers would be great.

I'm running Constellation with Quorum in a Docker container.

Sender cannot be the same as recipient even when addresses are different

I am running the 2-node setup. The default simple storage contract is made private between the sender and receiver. However, when sending the contract, I am getting this error:

2018 Feb-04 14:05:24152734 [WARN] Error performing API request: ApiSend (Send {sreqPayload = "`\`\`@R4\NAKa\NUL\SIW`\NUL\128\253[`@Q` \128a\SOHI\131\&9\129\SOH`@R\128\128Q\144` \SOH\144\145\144PP[\128`\NUL$
SOH\144\145\144PP`\195V[\NUL[4\NAK`\161W`\NUL\128\253[`\167`\206V[`@Q\128\130\129R` \SOH\145PP`@Q\128\145\ETX\144\243[`\NULT\129V[\128`\NUL\129\144UP[PV[`\NUL\128T\144P[\144V\NUL\161ebzzr0X \213\133\ESC\$
\NUL\NUL\NUL\NUL\NUL\NUL*", sreqFrom = "BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=",
sreqTo = ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]});
sendRequest: Errors while running sendPayload: [Left "safeBeforeNM: Sender cannot be a recipient\nCallStack (from HasCallStack):\n
error, called at ./Constellation/Enclave/Payload.hs:56:24 in constellation-0.1.0.0-bO6yomQcNv4mJID9Sq40H:Constellation.Enclave.Payload"]

I am running this script

a = eth.accounts[0]
web3.eth.defaultAccount = a;

// abi and bytecode generated from simplestorage.sol:
// > solcjs --bin --abi simplestorage.sol
var abi = [{"constant":true,"inputs":[],"name":"storedData","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"x","type":"uint256"}],"name":"set","outputs":[],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[{"name":"initVal","type":"uint256"}],"payable":false,"type":"constructor"}];

var bytecode = "0x6060604052341561000f57600080fd5b604051602080610149833981016040528080519060200190919050505b806000819055505b505b610104806100456000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632a1afcd914605157806360fe47b11460775780636d4ce63c146097575b600080fd5b3415605b57600080fd5b606160bd565b6040518082815260200191505060405180910390f35b3415608157600080fd5b6095600480803590602001909190505060c3565b005b341560a157600080fd5b60a760ce565b6040518082815260200191505060405180910390f35b60005481565b806000819055505b50565b6000805490505b905600a165627a7a72305820d5851baab720bba574474de3d09dbeaabc674a15f4dd93b974908476542c23f00029";

var simpleContract = web3.eth.contract(abi);
var simple = simpleContract.new(42, {from:web3.eth.accounts[0], data: bytecode, gas: 0x47b760, privateFor: ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]}, function(e, contract) {
	if (e) {
		console.log("err creating contract", e);
	} else {
		if (!contract.address) {
			console.log("Contract transaction send: TransactionHash: " + contract.transactionHash + " waiting to be mined...");
		} else {
			console.log("Contract mined! Address: " + contract.address);
			console.log(contract);
		}
	}
});

Any ETA on when documentation would be published?

For the benefit of us non-Haskell programmers, it would be helpful if some consistent documentation could be published.

We ran into some trouble with configuration-related issues in a non-vagrant/example environment.

Send password as argument

Is there a way to pass the password as an argument at startup? It would be very helpful; we are setting up 3+ Constellation nodes with Quorum, and this would make our start-up scripts easier.

Support for modifying configuration while running

We currently start constellation using an IP whitelist given as a command line argument, but it would be nice to be able to update this IP whitelist while the application is running, perhaps by reloading the configuration file if it is modified. Does this functionality currently exist?

Doesn't work with latest Homebrew version of berkeley-db

It looks like constellation-enclave-keygen expects berkeley-db v6.1, while Homebrew currently installs v6.2:

dyld: Library not loaded: /usr/local/opt/berkeley-db/lib/libdb-6.1.dylib
  Referenced from: /Users/jamie/Code/cons/./constellation-enclave-keygen
  Reason: image not found
Trace/BPT trap: 5

Document configuration file changes and conversion

The feature is documented as follows:

"Comma-separated list of paths to public keys that are always included as recipients (these must be advertised somewhere)"

However, using that syntax does not add the recipients as expected when privateKeyPath is also set.

An example config file to reproduce the problem is as follows:

url = "http://192.168.0.158:9000/"
port = 9000
socketPath = "qdata/tm1.ipc"
otherNodeUrls = ["http://192.168.0.194:9001/"]
publicKeyPath = "keys/tm1.pub"
privateKeyPath = "keys/tm1.key"
alwayssendto = ["keys/tm3.pub"]
storagePath = "qdata/constellation1"
verbosity = 3

The above results in the following debug print:

2017 May-31 08:23:16334502 [DEBUG] Configuration: Config {cfgUrl = "http://192.168.0.158:9000/", cfgPort = 9000, cfgSocket = Just "qdata/tm1.ipc", cfgOtherNodes = ["http://192.168.0.194:9001/"], cfgPublicKeys = ["keys/tm1.pub"], cfgPrivateKeys = ["keys/tm1.key"], cfgAlwaysSendTo = [], cfgPasswords = Nothing, cfgStorage = "qdata/constellation1", cfgIpWhitelist = [], cfgJustShowVersion = False, cfgJustGenerateKeys = [], cfgVerbosity = 3}

Observe above that cfgAlwaysSendTo is an empty array.

Discussion: Storage engine options and defaults

Constellation currently uses BerkeleyDB for all storage. It includes code for LevelDB and SQLite; however, one cannot currently choose either of them. The simple reason BerkeleyDB is the default is that it was faster than the other options in our testing.

Constellation was specifically designed to use stateless crypto--XSalsa20 allows for randomly generated nonces--in order to support hosting the same key pair on multiple Constellation nodes and using a shared underlying datastore like S3 without requiring contentious nonce management, imposing only the restriction that the data store must have read-after-creation consistency. With S3 and similar, thinking about redundancy and backups becomes a lot simpler, and since the payloads stored are encrypted, storing them with a cloud provider doesn't involve much risk.
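
A minimal Python sketch of this directory-style, content-addressed storage idea: encrypted payloads are written under their SHA3-512 digest, so any node (or any replica sharing the same backing store) derives the same key independently, and the store only needs read-after-creation consistency. Illustrative only; not Constellation's actual storage layout:

import base64
import hashlib
from pathlib import Path

class DirectoryStore:
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, encrypted_payload: bytes) -> str:
        # The storage key is derived from the (already encrypted) content.
        digest = hashlib.sha3_512(encrypted_payload).digest()
        key = base64.urlsafe_b64encode(digest).decode()
        path = self.root / key
        if not path.exists():
            path.write_bytes(encrypted_payload)
        return key

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()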

Ideally, the --storage option will work as follows:

  • constellation-node --storage=data -- use default engine (BerkeleyDB) in the folder 'data'
  • constellation-node --storage=bdb:data -- explicitly use BerkeleyDB in the folder 'data'
  • constellation-node --storage=s3:constellationstore -- use the 'constellationstore' bucket on S3 (credentials fetched from ~/.aws/credentials or env vars on startup)
  • ...

Now:

  • What should the out of the box default be? Should it continue to be BerkeleyDB? Other options include BoltDB, LevelDB, RocksDB, ...
  • What other options should be supported? For example:
    • S3
    • Google Cloud DataStore (10MiB object limit) or Google Blobstore
    • Azure Blob Storage
    • Redis
    • Tahoe-LAFS
    • seaweedfs
    • ...

(These would be the options supported out of the box in the standalone version, but you would still be able to import Constellation as a library and supply anything that satisfies the Storage datatype for exotic requirements.)

Getting "certificate rejected: Certificate validation failed" Error in Constellation 0.3.2

While running more than one Quorum node on the same server, with the public IP specified in the 'constellation-start.sh' script, we are getting a "certificate rejected: Certificate validation failed" error whenever we try to start the second node.
We observed that, for the second node running on the same box, the TLS certificate entries are not populated in the "tls-known-clients" or "tls-known-servers" file under the datadir folder.
Only one entry is present against the server IP in the "tls-known-clients" and "tls-known-servers" files, which is for the first Quorum node:
{
  "hosts": {
    "X.X.X.142": [
      {
        "data": "F8:16:53:9A:88:BB:EC:68:7F:30:AB:70:E4:0B:F4:29:48:D8:0C:D3:E5:9C:99:C1:B3:D7:86:A7:1F:4B:89:32:F1:D5:BB:B0:88:E5:21:D5:35:FA:B3:8D:59:16:A6:77:8C:1D:60:35:EA:2D:25:B5:EF:B3:03:9C:ED:9E:CB:52",
        "type": "rsa-sha512"
      }
    ]
  }
}

Why are the second node's TLS certificate entries not populated during node startup?

Argon2 error attempting to generate constellation keys with password

Hi,

I am trying to generate a constellation key pair on an EC2 instance running ubuntu 16.04, and if I specify a password I get an error from Argon2. I installed constellation with the binary provided for release v0.2.0, but I also encountered this problem with release v0.1.0. If I leave the password blank my key pair is generated just fine, but if I specify a password I get the following error:

ubuntu@ip-<my-private-ip>:~$ constellation-node --generatekeys=node
Lock key pair node with password [none]: ****
constellation-node: argon2: hash: internal error
CallStack (from HasCallStack):
  error, called at ./Crypto/KDF/Argon2.hs:124:27 in cryptonite-0.22-4Pi0d9NUdk4EM6H9Pr9xLG:Crypto.KDF.Argon2

Any advice on what might be wrong or how I might get this to work would be appreciated.

How to generate node keys

Hi,

Can you please explain how I can generate a private/public key pair for my Constellation node?
And the same for the archival public/private key pair?

Many thanks

How To compile and install from source on CentOS 7

Everybody who plans to build Constellation binaries from source on CentOS must install the supporting libraries:

  • libdb-devel, leveldb-devel, libsodium-devel, zlib-devel, ncurses-devel

but also:

  • glibc-static, libstdc++-static (otherwise you will see a "Missing C library: stdc++" error when compiling "double-conversion")
  • gcc-c++ (in order to avoid the "gcc error trying to exec 'cc1plus' execvp no such file or directory" error)

Add peer management APIs

  • Ability to trigger immediate resynchronization
  • Ability to add a peer (which for whatever reason isn't reaching out to us, e.g. empty initial config) at runtime

Constellation stops working if transaction executed with self key

If a private transaction is executed with the node's own public key, then Constellation stops working.

Step 1 : Get a docker container of Ubuntu with latest version of Quorum and constellation.

Step 2: Get the 7 nodes example from GitHub. Start the nodes with raft script.

Step 3: Consider the 1st and 2nd nodes for the test. Execute a private contract from the 1st node with the correct public key of the 2nd node. The transaction is executed successfully and the contract is mined.

Step 4: Execute the contract with the 1st node's own public key. The transaction fails.

Step 5: Repeat step 3; it fails every time until node 1's Constellation is restarted.

I am not sure why Constellation stops working after the step 4 execution. I remember that an earlier version did not have this issue.

Invalid responses from other nodes' /partyinfo endpoints are fatal

We're trying to get Constellation running on RHEL. Sadly, Red Hat is not one of the major platforms for which Constellation has precompiled binaries :)

We compiled it on a machine with internet access (which was not trivial given a weird bunch of dependencies), but when trying to run it on another machine, we get the following error:

2017 Jun-21 13:51:38115940 [WARN] Failed to run an AtExit hook: : Data.Binary.Get.runGet at position 8: not enough bytes
CallStack (from HasCallStack):
  error, called at libraries/binary/src/Data/Binary/Get.hs:342:5 in binary-0.8.3.0:Data.Binary.Get
constellation-node: Data.Binary.Get.runGet at position 8: not enough bytes
CallStack (from HasCallStack):
  error, called at libraries/binary/src/Data/Binary/Get.hs:342:5 in binary-0.8.3.0:Data.Binary.Get

Constellation Node config

Hi,

I am doing a setup from scratch based on the 7-nodes example. I don't understand what the relationship is between a geth node and a Constellation node, and where it is defined.

I can't see where it is defined that a given Constellation node is "bound" to a given geth node.

That's a constellation node config:

url = "http://127.0.0.1:9000/"
port = 9000
socketPath = "qdata/tm1.ipc"
otherNodeUrls = []
publicKeyPath = "keys/tm1.pub"
privateKeyPath = "keys/tm1.key"
archivalPublicKeyPath = "keys/tm1a.pub"
archivalPrivateKeyPath = "keys/tm1a.key"
storagePath = "qdata/constellation1"

Constellation nodes fail to sync/recognize each other when new configuration format is used

Constellation Version : 0.1.0
Quorum Version : 1.2.0

When using the new configuration format from sample.conf, nodes fail to sync with each other. No error/warning is shown in the log (verbosity=3).

Nodes are able to sync when the old format is used. Below are working and failing samples for the Constellation [boot] node.

Working

url = "http://172.24.0.2:30300/" 
port = 30300 
socketPath = "/data/quorum/constellation/Node1_constellation.ipc" 
otherNodeUrls = [] 
publicKeyPath = "/data/quorum/constellation/keystore/Node1_constellation.pub"
privateKeyPath = "/data/quorum/constellation/keystore/Node1_constellation.key" 
archivalPublicKeyPath = "/data/quorum/constellation/keystore/Node1_archival.pub" 
archivalPrivateKeyPath = "/data/quorum/constellation/keystore/Node1_archival.key" 
storagePath = "/data/quorum/constellation/storage" 
verbosity = 3

Failing

url = "http://172.24.0.2:30300/" 
port = 30300 
socket = "/data/quorum/constellation/Node1_constellation.ipc" 
otherNodeUrls = [] 
publickeys = ["/data/quorum/constellation/keystore/Node1_constellation.pub"] 
privatekeys = ["/data/quorum/constellation/keystore/Node1_constellation.key"] 
storage = "/data/quorum/constellation/storage" 
verbosity = 3

Sender cannot be a recipient check prevents private transaction recovery

Although the following issue touches on this, I thought it useful to create a more specific issue to discuss potential solutions to this problem.

The safeBeforeNM check is used both in processing new transactions and in retrieving existing ones in Constellation, to ensure that the sender public key is not the same as the intended recipient(s) public key (i.e. the privateFor address on transactions).

During the processing of receive requests from geth nodes (i.e. new private transactions), this has the knock-on effect of preventing private transactions from being loaded into the private database when a node has to be run up fresh (this transaction retrieval happens here).

For example, we have two geth instances (geth1 & geth2) and their corresponding Constellation instances (tm1 & tm2), configured as follows:

tm1PubKey = "BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo="
tm2PubKey = "QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="

We then send a transaction from geth1, with privateFor set to tm2PubKey

ApiSend request goes from geth1 to tm1:

2017 Aug-07 15:36:58931107 [DEBUG] Request from : ApiSend
 (Send {sreqPayload = "\t^\167\179\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\202\132\&5i\227BqD\206\173^
MY\153\163\208\204\249+\142\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NULd", sreqFrom = "BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=", sreqTo = ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]}); Response: ApiSendR (SendResponse {sresKey = "mcQktPbaCa+xFUPMP0rGRzROmP/4zWxfpUWkXeET0Tj/b9D5wuqTmYXQIj/KYFdgsZSAE+7WQws0Sayczwz8FQ==“})

Corresponding ApiPush and ApiReceive received by tm2:

2017 Aug-07 15:36:58927546 [DEBUG] Request from 127.0.0.1:58378: ApiPush ("\ENQB\222G\194rQhb\186\224\140S\241\203\ETXD9\167\&9\CANO\231\a \141\217(\ETB\178\220\SUB","\NAKs2~\131\197\&4O\176\&9>C\177\227\231e\254\174k\223\243I\DC1\201\n\225\193\216#\162Q\139c\236\143\183\b5y\227\193=\235f\247\208\255\SOa`\247p\129\238\199\212)5\168Yt\EM*D\209w\242-\170\147K\148\CAN(\192\181\238\198\210\"\SO\170\&6\158","\244\147G\234\129g2\135\233\DLEP\188\252)\202\STX@\157\196U\147\196I0",["\209\142\212\186\169tq*|p\194\142\237\164j\154~\151\202\230\142\ENQ\152\186\a+\DC4\ESC\146]\232d\a'\174\158\193\232\191\a\167\DC29?\165\195\EM\b"],"C#a\141fE\tuK\181\a\253\210L?\140\146\213s9\190'\147\181"); 
Response: ApiPushR "mcQktPbaCa+xFUPMP0rGRzROmP/4zWxfpUWkXeET0Tj/b9D5wuqTmYXQIj/KYFdgsZSAE+7WQws0Sayczwz8FQ=="

2017 Aug-07 15:36:58950493 [DEBUG] Request from : ApiReceive (Receive {rreqKey = "
mcQktPbaCa+xFUPMP0rGRzROmP/4zWxfpUWkXeET0Tj/b9D5wuqTmYXQIj/KYFdgsZSAE+7WQws0Sayczwz8FQ==", rreqTo = "QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="});
Response: ApiReceiveR (ReceiveResponse {rresPayload = "\t^\167\179\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\202\132\&5i\227BqD\206\173^MY\153\163\208\204\249+\142\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NULd”})

So far, so good.

Although the transaction is present in tm1, it’s not actually possible to access it due to the "sender cannot be a recipient" check. But that isn’t an issue normally, as the state of our transaction is stored in geth1’s private chain. Hence if tm1 dies, geth1 has the private chain available locally.

However, if we delete geth1’s private state DB, it’s not possible to recover this private state. The reason is that when the node resyncs and receives a private transaction, it requests that transaction from tm1. The "sender cannot be a recipient" check then kicks in, which prevents geth1 from being able to recover the private transactions from tm1. This results in the following error:

2017 Aug-07 15:50:52847745 [WARN] Error performing API request: ApiReceive (Receive {rreqKey = "mcQktPbaCa+xFUPMP0rGRzROmP/4zWxfpUWkXeET0Tj/b9D5wuqTmYXQIj/KYFdgsZSAE+7WQws0Sayczwz8FQ==", rreqTo = "BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo="}); encrypt: Sender cannot be a recipient
CallStack (from HasCallStack):
  error, called at ./Constellation/Enclave/Payload.hs:56:24 in constellation-0.1.0.0-6e8lqFuBPtzCz3iEft1Zk0:Constellation.Enclave.Payload

This node can no longer transact with this contract which it earlier created.

The same problem manifests itself for backup instances too.

`invalid padding` error

I checked for the obvious, namely tampering with the pub key's base64 by adding some characters at the start; the error then changes to "invalid base64 encoding near offset 44". I also verified that I was using the same version of constellation-enclave-keygen, and finally tried to use keys from quorum-examples just in case.

DISCLAIMER: Pretty sure this is a layer 8 error, but I cannot see it after two hours of fiddling. Kindly requesting help.

  • Commit is
commit 4b4ba6003baf01705ec9f7d843baeae3b37f9a94
Author: Patrick Mylund Nielsen <[email protected]>
Date:   Fri Mar 24 12:09:11 2017 -0400
  • Log obtained
./constellation-node ethereum/constellation-conf

2017 Mar-28 10:36:12663291 [INFO] Log level is LevelDebug
2017 Mar-28 10:36:12663469 [DEBUG] Configuration: Config {cfgUrl = "http://10.24.0.4:22000/", cfgPort = 22000, cfgSocket = Just "/home/banketh/ethereum/constellation.ipc", cfgOtherNodes = ["http://10.24.0.5:22000/, http://10.24.0.8:22000/, http://10.24.0.7:22000/"], cfgPublicKeys = ["/home/banketh/ethereum/constellation-pub"], cfgPrivateKeys = ["/home/banketh/ethereum/constellation-prv"], cfgPasswords = Nothing, cfgStorage = "/home/banketh/ethereum/constellation", cfgIpWhitelist = [], cfgJustShowVersion = False, cfgVerbosity = 3}
2017 Mar-28 10:36:12663554 [INFO] Utilizing 1 core(s)
2017 Mar-28 10:36:12663722 [INFO] Constructing Enclave using keypairs [("/home/banketh/ethereum/constellation-pub","/home/banketh/ethereum/constellation-prv")]
constellation-node: fromShowRight: Got Left: "invalid padding"
CallStack (from HasCallStack):
  error, called at ./Constellation/Util/Either.hs:14:28 in constellation-0.1.0.0-IsiykKpPlVs3yKQYw1hA4n:Constellation.Util.Either
  • Command
./constellation-node ethereum/constellation-conf
  • constellation-conf
# Externally accessible URL for this node (this is what's advertised)
url = "http://10.24.0.4:22000/"

# Port to listen on for the public API
port = 22000

# Socket file to use for the private API / IPC
socket = "/home/banketh/ethereum/constellation.ipc"

# Initial (not necessarily complete) list of other nodes in the network.
# Constellation will automatically connect to other nodes not in this list
# that are advertised by the nodes below, thus these can be considered the
# "boot nodes."
othernodes = ["http://10.24.0.5:22000/", "http://10.24.0.8:22000/", "http://10.24.0.7:22000/"]

# The set of public keys this node will host
publickeys = ["/home/banketh/ethereum/constellation-pub"]

# The corresponding set of private keys
privatekeys = ["/home/banketh/ethereum/constellation-prv"]

# Optional file containing the passwords to unlock the given privatekeys
# (one password per line -- add an empty line if one key isn't locked.)
# passwords = "passwords"

# Where to store payloads and related information
storage = "/home/banketh/ethereum/constellation"

# Optional IP whitelist for the external API. If unspecified/empty,
# connections from all sources will be allowed (but the private API remains
# accessible only via the IPC socket above.) To allow connections from
# localhost when a whitelist is defined, e.g. when running multiple
# Constellation nodes on the same machine, add "127.0.0.1" and "::1" to
# this list.
# ipwhitelist = ["10.0.0.1", "2001:0db8:85a3:0000:0000:8a2e:0370:7334"]

# Verbosity level (each level includes all prior levels)
#   - 0: Only fatal errors
#   - 1: Warnings
#   - 2: Informational messages
#   - 3: Debug messages
verbosity = 3
  • constellation-pub
UVSeNgQA2WJSZVoUAi5v2Ebu0aSc18QW2juW/yBSDzI=
  • constellation-prv
{"data":{"bytes":"PQv0IkX0HR/qxv7v/3bkqLgqAb3EuNYl3LLDwHWyCm0="},"type":"unlocked"}

remove non-sqlite storage backends

@jpmsam : a number of the issues will be resolved if we drop all non-SQLite backends, because that backend is both easy to statically link into the application on all platforms and friendly to query outside of the application.

I think the right near-term fix is to make that internal change, still accept the backend flag but emit a warning when any non-SQLite backend is selected, and also add a mutually exclusive CLI flag for setting the directory of the storage file. (This is so that current scripts won't totally break as things get migrated to the new version.)

constellation-node: command not found

I've followed the instructions to install Constellation and executed the following commands on my Ubuntu machine:
apt-get install libdb-dev libleveldb-dev libsodium-dev zlib1g-dev libtinfo-dev
curl -sSL https://get.haskellstack.org/ | sh
stack setup
stack install (this gives me an error telling me to use stack init instead?)

Running constellation-node then gives me "command not found". Can you please help me resolve this issue?

Allow configuration of constellation, geth and bootnode using FQDN vs. IP address

I've been working on standing up the 7-node example on Mesosphere DC/OS. Currently, the configuration used to launch Constellation and geth requires addresses in the form of IP addresses. In cloud orchestration environments like DC/OS and Kubernetes, the containers launched can have short lives and move around the cluster. IP addresses have less relevance, and the containers are usually given names as part of the service discovery mechanism.

For example on DC/OS using IP per Container the containers are given names in the format of myapp-subgroup-outergroup.marathon.containerip.dcos.thisdcos.directory (documentation)

tm2.conf requires an IP address:

url = "http://127.0.0.1:9001/"
port = 9001
socketPath = "qdata/tm2.ipc"
otherNodeUrls = ["http://127.0.0.1:9000/"]
publicKeyPath = "keys/tm2.pub"
privateKeyPath = "keys/tm2.key"
archivalPublicKeyPath = "keys/tm2a.pub"
archivalPrivateKeyPath = "keys/tm2a.key"
storagePath = "qdata/constellation2"

Ideally you should be able to specify by the FQDN understood by the container orchestrator DNS:

url = "http://node2-quorum.marathon.containerip.dcos.thisdcos.directory:9001/"
port = 9001
socketPath = "qdata/tm2.ipc"
otherNodeUrls = ["http://node1-quorum.marathon.containerip.dcos.thisdcos.directory:9000/"]
publicKeyPath = "keys/tm2.pub"
privateKeyPath = "keys/tm2.key"
archivalPublicKeyPath = "keys/tm2a.pub"
archivalPrivateKeyPath = "keys/tm2a.key"
storagePath = "qdata/constellation2"

Similarly, the start.sh script fails when using an FQDN instead of an IP address:

BOOTNODE_ENODE=enode://61077a284f5ba7607ab04f33cfde2750d659ad9af962516e159cf6ce708646066cd927a900944ce393b98b95c914e4d6c54b099f568342647a1cd4a262cc0423@[127.0.0.1]:33445

GLOBAL_ARGS="--bootnodes $BOOTNODE_ENODE --networkid $NETID --rpc --rpcaddr 0.0.0.0 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum"

Should be

BOOTNODE_ENODE=enode://61077a284f5ba7607ab04f33cfde2750d659ad9af962516e159cf6ce708646066cd927a900944ce393b98b95c914e4d6c54b099f568342647a1cd4a262cc0423@[bootnode-quorum.marathon.containerip.dcos.thisdcos.directory]:33445

GLOBAL_ARGS="--bootnodes $BOOTNODE_ENODE --networkid $NETID --rpc --rpcaddr node1-quorum.marathon.containerip.dcos.thisdcos.directory --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum"

(This example would need to change to set a node-specific rpcaddr based on IP-per-container deployments.)

Constellation peer discovery questions

I'm bringing back the question about how Constellation peers discover and connect to each other at runtime, since we (eventually) want to run Quorum in production and automate the provisioning of private Quorum networks. Every network should allow new nodes to join.

We basically want a new node to be able to join a network without having to tell other nodes to update a local configuration file (tm.conf, otherNodes = [...]) and restart them.

In Slack, @patrickmn mentioned that a Constellation node behaves like a Geth boot node (https://go-quorum.slack.com/archives/C68NY0QQZ/p1517276479000180?thread_ts=1517258552.000240&cid=C68NY0QQZ)

I have the following questions:

  • Very basic question to confirm: Is it correct that if Constellation node A connects to node B, and C connects to B, in the end, A and C will be connected together as well?
  • In an end-to-end setup (automating the provisioning of Quorum networks), would we then at minimum run a Geth boot node along with a Constellation "boot node"?
  • Is there any form of documentation how Constellation's p2p discovery protocol works? (wish I had more time to learn haskell 😃 )

Data sync not happening between nodes deployed on different machines

I have the boot node and node 1 deployed on one machine as block maker and voter, and the relevant tm2.conf file looks like this:


url = "http://10.10.10.5:9000/"
port = 9000
socketPath = "qdata/tm2.ipc"
otherNodeUrls = []
publicKeyPath = "keys/tm2.pub"
privateKeyPath = "keys/tm2.key"
archivalPublicKeyPath = "keys/tm2a.pub"
archivalPrivateKeyPath = "keys/tm2a.key"
storagePath = "qdata/constellation2"

and node 3 deployed on another machine, which is ONLY a voter and uses tm5.conf, whose config looks like this:

url = "http://10.11.11.4:9000/"
port = 9000
socketPath = "qdata/tm5.ipc"
otherNodeUrls = ["http://10.10.10.5:9000/"]
publicKeyPath = "keys/tm5.pub"
privateKeyPath = "keys/tm5.key"
archivalPublicKeyPath = "keys/tm5a.pub"
archivalPrivateKeyPath = "keys/tm5a.key"
storagePath = "qdata/constellation5"

Both nodes' Constellation logs show no errors. The Constellation log for node 1 is displayed as:


nohup: appending output to 'nohup.out'
2017 Sep-03 05:24:32216663 [INFO] Constellation initializing using config file tm2.conf
2017 Sep-03 05:24:32217548 [INFO] Log level is LevelDebug
2017 Sep-03 05:24:32217620 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:24:32220587 [INFO] Constructing Enclave using keypairs (keys/tm2.pub, keys/tm2.key) (keys/tm2a.pub, keys/tm2a.key)
2017 Sep-03 05:24:32220802 [INFO] Initializing storage qdata/constellation2
2017 Sep-03 05:24:32276282 [INFO] Internal API listening on qdata/tm2.ipc
2017 Sep-03 05:24:32276271 [INFO] External API listening on port 9000
2017 Sep-03 05:24:32276219 [INFO] Node started
2017 Sep-03 05:24:38344272 [DEBUG] Request from : ApiUpcheck; Response: ApiUpcheckR
2017 Sep-03 05:25:09751069 [DEBUG] Request from 10.11.11.4:49940: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:29:35051830 [DEBUG] Request from 10.10.10.5:48254: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:30:09760130 [DEBUG] Request from 10.11.11.4:50490: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:34:37725427 [DEBUG] Request from 10.10.10.5:49104: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:35:09798877 [DEBUG] Request from 10.11.11.4:51044: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:39:40398721 [DEBUG] Request from 10.10.10.5:49964: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:40:09837242 [DEBUG] Request from 10.11.11.4:51600: ApiPartyInfo (PartyInfo {piUrl = "http://10.11.11.4:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})
2017 Sep-03 05:44:42984722 [DEBUG] Request from 10.10.10.5:50820: ApiPartyInfo (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]}); Response: ApiPartyInfoR (PartyInfo {piUrl = "http://10.10.10.5:9000/", piRcpts = fromList [("QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=","http://10.10.10.5:9000/"),("R56gy4dn24YOjwyesTczYa8m5xhP6hF2uTMCju/1xkY=","http://10.11.11.4:9000/")], piParties = fromList ["http://10.10.10.5:9000/"]})

and node 3's Constellation log is shown as:

nohup: appending output to 'nohup.out'
2017 Sep-03 05:25:09524816 [INFO] Constellation initializing using config file tm5.conf
2017 Sep-03 05:25:09528401 [INFO] Log level is LevelDebug
2017 Sep-03 05:25:09528491 [INFO] Utilizing 2 core(s)
2017 Sep-03 05:25:09528924 [INFO] Constructing Enclave using keypairs (keys/tm5.pub, keys/tm5.key) (keys/tm5a.pub, keys/tm5a.key)
2017 Sep-03 05:25:09529209 [INFO] Initializing storage qdata/constellation5
2017 Sep-03 05:25:09606851 [INFO] External API listening on port 9000
2017 Sep-03 05:25:09606862 [INFO] Internal API listening on qdata/tm5.ipc
2017 Sep-03 05:25:09606804 [INFO] Node started

So the problem here is: I am able to deploy a contract and add data using the contract's functions on node 1; however, pointing to node 3 and calling the same function on node 1's contract address gives me back null. Are transactions not synced between the nodes? Here I am performing a public transaction, so all participating nodes should be able to see and read the data, but that's not happening in node 3's case. Can somebody please help me resolve this?

cute pic -
https://www.instagram.com/p/BYGgz26BuZA/?hl=en&taken-by=utamaruru

Stack build not working on OS X if not using Homebrew; docs need updating

For many reasons I choose not to use Homebrew, so I installed Berkeley DB manually. Because of this, the stack command is stack install --extra-include-dirs=/usr/local/BerkeleyDB.5.3/include --extra-lib-dirs=/usr/local/BerkeleyDB.5.3/lib. I believe this should be noted in the README or BUILD file. I am more than happy to put together a PR when I can.

Reduce some DoS vectors

Right now, any node with network access can join a Constellation network and advertise potentially bogus information, resulting in different kinds of DoS (even though they can't decrypt messages not intended for them.)

  • TOFU model?
  • Validatable proofs for advertised pubkeys

Configure Constellation

I am very new to Quorum and want to have a production setup with 5 nodes. Please guide me through, or help me understand, the step-by-step process.

What kind of metadata is collectible?

It is not obvious from the README, so I am posting a question here.

How much communication metadata does Constellation expose? For example, is it possible for others to learn the payload sender and receiver?

"Unknown recipient" after adding a new constellation and geth node

To reproduce:

  • start a network with just one Quorum node (geth+constellation), let's call it node_1.
  • then I call node_1 raft.addPeer() and pass in the enode URI of node_2
  • I get back a number 2 which is the raft node ID for the raft cluster
  • I then start up node_2 constellation (with tm.conf containing othernodes: ["node_1_ip:port"])
  • I finally start up node_2 geth with --raftjoinexisting 2

With the resulting 2-node network, I can successfully send public transactions. But when I try to send a private transaction from node_1 to node_2, with privateFor containing node_2's Constellation public key, I get an "Unknown recipient" error, as though node_2's Constellation was not joined to the transaction manager network.

Duplicate symbol _blake2b_init_key causes build failure on OS X Sierra

Constellation will not build on OS X Sierra. This is due to both Argon2 and cryptonite exporting the same blake2b.c library symbols.

Details are available in the following threads:

haskell-crypto/cryptonite#109
haskell-hvr/argon2#10

Details of my setup are below.

  • haskell-stack 1.3.0 (which installed ghc-8.0.1)
  • berkeley-db-6.1.26
  • libsodium-1.0.11
$ stack install
constellation-0.1.0.0: build (lib + exe)
Preprocessing library constellation-0.1.0.0...
Preprocessing executable 'constellation-enclave-keygen' for
constellation-0.1.0.0...
Linking .stack-work/dist/x86_64-osx/Cabal-1.24.0.0/build/constellation-enclave-keygen/constellation-enclave-keygen ...
duplicate symbol _blake2b_init_key in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
duplicate symbol _blake2b_init in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
duplicate symbol _blake2b_init_param in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
duplicate symbol _blake2b_final in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
duplicate symbol _blake2b_update in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
duplicate symbol _blake2b in:
    /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
    /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
ld: 6 duplicate symbols for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
`gcc' failed in phase `Linker'. (Exit code: 1)

--  While building package constellation-0.1.0.0 using:
      /Users/xxx/.stack/setup-exe-cache/x86_64-osx/Cabal-simple_mPHDZzAJ_1.24.0.0_ghc-8.0.1 --builddir=.stack-work/dist/x86_64-osx/Cabal-1.24.0.0 build lib:constellation exe:constellation-enclave-keygen exe:constellation-node --ghc-options " -ddump-hi -ddump-to-file"
    Process exited with code: ExitFailure 1

XCode GCC version:

Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.1.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Verbose output:

$ stack install -v
Version 1.3.0 x86_64 hpack-0.15.0
2016-12-21 11:41:17.745595: [debug] Checking for project config at: /Users/xxx/code/quorum/constellation/stack.yaml
@(Stack/Config.hs:863:9)
2016-12-21 11:41:17.747020: [debug] Loading project config file stack.yaml
@(Stack/Config.hs:881:13)
2016-12-21 11:41:17.748878: [debug] Trying to decode /Users/xxx/.stack/build-plan-cache/x86_64-osx/lts-7.5.cache
@(Data/Store/VersionTagged.hs:68:5)
2016-12-21 11:41:17.762898: [debug] Success decoding /Users/xxx/.stack/build-plan-cache/x86_64-osx/lts-7.5.cache
@(Data/Store/VersionTagged.hs:72:13)
2016-12-21 11:41:17.767066: [debug] Using standard GHC build
@(Stack/Setup.hs:597:9)
2016-12-21 11:41:17.767507: [debug] Getting global package database location
@(Stack/GhcPkg.hs:55:5)
2016-12-21 11:41:17.767911: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db list --global
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.768039: [debug] Asking GHC for its version
@(Stack/Setup/Installed.hs:103:13)
2016-12-21 11:41:17.769079: [debug] Getting Cabal package version
@(Stack/GhcPkg.hs:170:5)
2016-12-21 11:41:17.769220: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc --numeric-version
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.769342: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db field --simple-output Cabal version
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.842879: [debug] Process finished in 74ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db list --global
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.848030: [debug] Process finished in 77ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db field --simple-output Cabal version
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.888978: [debug] Process finished in 119ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc --numeric-version
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:17.889178: [debug] Resolving package entries
@(Stack/Setup.hs:252:5)
2016-12-21 11:41:17.889710: [debug] Starting to execute command inside EnvConfig
@(Stack/Runners.hs:163:18)
2016-12-21 11:41:17.889807: [debug] Parsing the cabal files of the local packages
@(Stack/Build/Source.hs:298:5)
2016-12-21 11:41:17.910327: [debug] Parsing the targets
@(Stack/Build/Source.hs:235:5)
2016-12-21 11:41:17.911035: [debug] Start: getPackageFiles /Users/xxx/code/quorum/constellation/constellation.cabal
@(Stack/Package.hs:250:21)
2016-12-21 11:41:17.970499: [debug] Finished in 57ms: getPackageFiles /Users/xxx/code/quorum/constellation/constellation.cabal
@(Stack/Package.hs:250:21)
2016-12-21 11:41:17.972337: [debug] Finding out which packages are already installed
@(Stack/Build/Installed.hs:68:5)
2016-12-21 11:41:17.972640: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --global --no-user-package-db dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.020985: [debug] Process finished in 48ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --global --no-user-package-db dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.038834: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --user --no-user-package-db --package-db /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/pkgdb dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.186934: [debug] Process finished in 147ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --user --no-user-package-db --package-db /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/pkgdb dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.188323: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --user --no-user-package-db --package-db /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/pkgdb dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.229435: [debug] Process finished in 40ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --user --no-user-package-db --package-db /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/pkgdb dump --expand-pkgroot
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.238547: [debug] Constructing the build plan
@(Stack/Build/ConstructPlan.hs:159:5)
2016-12-21 11:41:18.239401: [debug] Trying to decode /Users/xxx/.stack/indices/Hackage/00-index.cache
@(Data/Store/VersionTagged.hs:68:5)
2016-12-21 11:41:18.431973: [debug] Success decoding /Users/xxx/.stack/indices/Hackage/00-index.cache
@(Data/Store/VersionTagged.hs:72:13)
2016-12-21 11:41:18.587241: [debug] Checking if we are going to build multiple executables with the same name
@(Stack/Build.hs:196:5)
2016-12-21 11:41:18.587626: [debug] Executing the build plan
@(Stack/Build/Execute.hs:454:5)
2016-12-21 11:41:18.588661: [debug] Getting global package database location
@(Stack/GhcPkg.hs:55:5)
2016-12-21 11:41:18.588813: [debug] Run process: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db list --global
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.627094: [debug] Process finished in 38ms: /Users/xxx/.stack/programs/x86_64-osx/ghc-8.0.1/bin/ghc-pkg --no-user-package-db list --global
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.629225: [debug] Encoding /Users/xxx/code/quorum/constellation/.stack-work/dist/x86_64-osx/Cabal-1.24.0.0/stack-build-cache
@(Data/Store/VersionTagged.hs:51:5)
2016-12-21 11:41:18.629967: [debug] Finished writing /Users/xxx/code/quorum/constellation/.stack-work/dist/x86_64-osx/Cabal-1.24.0.0/stack-build-cache
@(Data/Store/VersionTagged.hs:55:5)
2016-12-21 11:41:18.630108: [info] constellation-0.1.0.0: build (lib + exe)
@(Stack/Build/Execute.hs:826:23)
2016-12-21 11:41:18.630569: [debug] Run process: /Users/xxx/.stack/setup-exe-cache/x86_64-osx/Cabal-simple_mPHDZzAJ_1.24.0.0_ghc-8.0.1 --builddir=.stack-work/dist/x86_64-osx/Cabal-1.24.0.0 build lib:constellation exe:constellation-enclave-keygen exe:constellation-node --ghc-options " -ddump-hi -ddump-to-file"
@(System/Process/Read.hs:306:3)
2016-12-21 11:41:18.812676: [info] Preprocessing library constellation-0.1.0.0...
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:21.881973: [info] Preprocessing executable 'constellation-enclave-keygen' for
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:21.882177: [info] constellation-0.1.0.0...
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:22.304909: [info] Linking .stack-work/dist/x86_64-osx/Cabal-1.24.0.0/build/constellation-enclave-keygen/constellation-enclave-keygen ...
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673075: [warn] duplicate symbol _blake2b_init_key in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673226: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673321: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673444: [warn] duplicate symbol _blake2b_init in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673518: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673584: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673667: [warn] duplicate symbol _blake2b_init_param in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673731: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673794: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673930: [warn] duplicate symbol _blake2b_final in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.673995: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674057: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674138: [warn] duplicate symbol _blake2b_update in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674199: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674261: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674339: [warn] duplicate symbol _blake2b in:
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.674469: [warn]     /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.675459: [warn]     /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a(blake2b.o)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.762342: [warn] ld: 6 duplicate symbols for architecture x86_64
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.777064: [warn] clang: error: linker command failed with exit code 1 (use -v to see invocation)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.778531: [warn] `gcc' failed in phase `Linker'. (Exit code: 1)
@(Stack/Build/Execute.hs:1013:67)
2016-12-21 11:41:23.814913: [debug] Start: getPackageFiles /Users/xxx/code/quorum/constellation/constellation.cabal
@(Stack/Package.hs:250:21)
2016-12-21 11:41:23.885131: [debug] Finished in 70ms: getPackageFiles /Users/xxx/code/quorum/constellation/constellation.cabal
@(Stack/Package.hs:250:21)
2016-12-21 11:41:23.893998: [debug] Encoding /Users/xxx/code/quorum/constellation/.stack-work/dist/x86_64-osx/Cabal-1.24.0.0/stack-build-cache
@(Data/Store/VersionTagged.hs:51:5)
2016-12-21 11:41:23.896432: [debug] Finished writing /Users/xxx/code/quorum/constellation/.stack-work/dist/x86_64-osx/Cabal-1.24.0.0/stack-build-cache
@(Data/Store/VersionTagged.hs:55:5)

--  While building package constellation-0.1.0.0 using:
      /Users/xxx/.stack/setup-exe-cache/x86_64-osx/Cabal-simple_mPHDZzAJ_1.24.0.0_ghc-8.0.1 --builddir=.stack-work/dist/x86_64-osx/Cabal-1.24.0.0 build lib:constellation exe:constellation-enclave-keygen exe:constellation-node --ghc-options " -ddump-hi -ddump-to-file"
    Process exited with code: ExitFailure 1
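
The log shows the macOS linker rejecting six _blake2b_* symbols that are defined in both the cryptonite and argon2 static archives, i.e. both packages bundle their own C BLAKE2b object file (blake2b.o). A minimal, hedged way to confirm the overlap — archive paths copied verbatim from the log above and sure to differ on other machines — is:

    # List the globally-defined BLAKE2b symbols in each archive named by the linker.
    nm /Users/xxx/.stack/snapshots/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/cryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU/libHScryptonite-0.19-G9PYO4oOEqhDTta2u9rAaU.a | grep ' T _blake2b'
    nm /Users/xxx/code/quorum/constellation/.stack-work/install/x86_64-osx/lts-7.5/8.0.1/lib/x86_64-osx-ghc-8.0.1/argon2-1.2.0-GzhheWYu2saHa29BPiZeRb/libHSargon2-1.2.0-GzhheWYu2saHa29BPiZeRb.a | grep ' T _blake2b'

If both commands print the same symbol names, the conflict lives in the cryptonite/argon2 dependencies rather than in Constellation's own code, and pinning or patching one of the two packages so that it no longer exports the unprefixed blake2b symbols is one possible way out.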

Package for ARM

Can someone provide a Constellation package for RPi / ARM? Unfortunately I cannot install GHC due to the error "couldn't figure out LLVM version! Make sure you have installed LLVM 3.9", even though LLVM 3.9 is installed ...
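
For what it's worth, GHC reports "couldn't figure out LLVM version" when it cannot run the opt and llc binaries it shells out to, even if an LLVM 3.9 package is installed; on Debian/Raspbian those tools are usually installed only under versioned names such as opt-3.9 and llc-3.9. A hedged sketch of the usual checks (paths and the symlink workaround are assumptions, not taken from this report):

    # GHC shells out to 'opt' and 'llc'; check that the plain names resolve to LLVM 3.9.
    command -v llc opt
    llc --version
    opt --version
    # On Debian/Raspbian only llc-3.9/opt-3.9 may exist; exposing them under the
    # plain names is one (unverified) workaround:
    sudo ln -s "$(command -v llc-3.9)" /usr/local/bin/llc
    sudo ln -s "$(command -v opt-3.9)" /usr/local/bin/opt

Alternatively, GHC accepts -pgmlc and -pgmlo to point at specific llc and opt executables, which can be passed through stack's --ghc-options.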

constellation-node not running

Hi,
My constellation-node is not running. Running 'constellation-node --version' prints: 'constellation-node: /lib/x86_64-linux-gnu/libtinfo.so.5: no version information available (required by constellation-node)'.
My operating system is Ubuntu 14.04 LTS (GNU/Linux 3.13.0-123-generic x86_64), and I have already run 'apt-get install libdb-dev libleveldb-dev libsodium-dev zlib1g-dev libtinfo-dev'.
Running ldd on the binary prints:
/home/quorum/constellation-0.1.0-ubuntu1604/constellation-node: /lib/x86_64-linux-gnu/libtinfo.so.5: no version information available (required by /home/quorum/constellation-0.1.0-ubuntu1604/constellation-node)
linux-vdso.so.1 => (0x00007ffee9fa2000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f0af6f97000)
libsodium.so.18 => /usr/local/libsodium-1.0.12/src/libsodium/.libs/libsodium.so.18 (0x00007f0af6d2b000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f0af6b12000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f0af68e9000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f0af66e1000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f0af64de000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f0af62da000)
libdb-5.3.so => /usr/lib/x86_64-linux-gnu/libdb-5.3.so (0x00007f0af5f38000)
libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f0af5cc4000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f0af59be000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0af57a0000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f0af558a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0af51c2000)
/lib64/ld-linux-x86-64.so.2 (0x00007f0af729b000)
Please help, thanks.
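
Note that the ldd output shows the ubuntu1604 build of constellation-node running on Ubuntu 14.04, so the message most likely means the binary was linked against a libtinfo that carries symbol-version information while 14.04's libtinfo.so.5 does not. That message is often only a warning rather than a hard failure. A few hedged checks (standard binutils commands, not taken from the issue):

    # Does the installed libtinfo define any symbol versions?
    readelf -V /lib/x86_64-linux-gnu/libtinfo.so.5 | head -n 20
    # Which versioned symbols does the binary expect from its libraries?
    objdump -p /home/quorum/constellation-0.1.0-ubuntu1604/constellation-node | grep -A 20 'Version References'

If constellation-node actually exits rather than just printing the warning, using a build made for your distribution (or building from source with stack install, as in the installation instructions) is the safer route.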

Too many open files -- Unix IPC

Hello!

I'm working with JPMorgan's Quorum, where I wrote a simple private smart contract and am hitting it with as many transactions as possible to stress test it. The configuration I'm using is the 7nodes example (https://github.com/jpmorganchase/quorum-examples).

I ended up at around 200 Tx/sec. When I checked the logs, I found that constellation-node is logging "Too Many Open Files" while trying to connect to other Constellation nodes over Unix IPC. I raised the soft and hard file-descriptor limits to 1 million with no effect; constellation-node does not seem to honor the new limit. Listing the file descriptors under /proc/<pid>/fd always shows fewer than 1024 entries. I'm not sure what I'm doing wrong here and I hope someone can help.

Cheers!
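
A side note on the descriptor limit: entries in /etc/security/limits.conf only apply to new login sessions, and a process inherits its limits from whatever started it, so a constellation-node launched from an existing shell or from the 7nodes scripts may still be running with the default soft limit of 1024. A hedged way to verify and adjust the limit of the running process (PID and values are placeholders):

    # Check the limit the running process actually has (replace <pid>):
    grep 'open files' /proc/<pid>/limits
    # Check the limit of the shell that will launch the node:
    ulimit -n
    # Raise the limit of an already-running process (util-linux prlimit):
    sudo prlimit --pid <pid> --nofile=65536:65536
    # Or raise it in the launching shell before starting constellation-node:
    ulimit -n 65536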
