hyperledger-labs / minbft

Implementation of MinBFT consensus protocol.

License: Apache License 2.0

Languages: Go 89.47%, C 5.41%, Makefile 3.25%, Shell 1.87%
Topics: consensus-protocol, intel-sgx, minbft, bft, state-machine-replication, trusted-execution, blockchain

minbft's Introduction

NOTE: This lab has been archived and is no longer being maintained.

MinBFT

Status

This project is in an experimental development stage. It is not suitable for any kind of production use. Interfaces and implementation may change significantly during the life cycle of the project.

What is MinBFT

MinBFT is a pluggable software component that makes it possible to achieve Byzantine fault-tolerant consensus with fewer consenting nodes and fewer communication rounds compared to conventional BFT protocols. The component is based on the Efficient Byzantine Fault-Tolerance paper, which describes a BFT protocol that leverages the secure hardware capabilities of participant nodes.

This project is an implementation of the MinBFT consensus protocol. The code is written in Go, except for the TEE part, which is written in C as an Intel® SGX enclave.

Why MinBFT

Byzantine fault-tolerant (BFT) protocols are able to achieve high transaction throughput in permissioned consensus networks with a static configuration of connected nodes. However, existing components such as the Practical Byzantine Fault Tolerance (PBFT) consensus plugin still incur high communication costs. Given that Trusted Execution Environments (TEEs) are now pervasive on commodity platforms, there is strong motivation to deploy more efficient BFT protocols that leverage TEE applications as trust anchors, even on faulty nodes.

TEEs are pervasive nowadays and supported by many commodity platforms. For instance, Intel® SGX is deployed on many PCs and servers, while most mobile platforms support Arm TrustZone. TEEs rely on secure hardware to provide data protection and code isolation from a compromised hosting system. This consensus component implements MinBFT, a BFT protocol that leverages TEEs to prevent message equivocation, thereby reducing both the required number of consenting nodes and the number of communication rounds for the same fault-tolerance threshold. More specifically, it requires only 2f+1 consenting nodes in order to tolerate f faulty nodes (i.e., up to about half of the nodes may be faulty); meanwhile, committing a message requires only 2 rounds of communication among the nodes instead of 3 rounds as in PBFT.
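The node-count difference for a given fault threshold f can be made concrete with a tiny Go snippet (the function names are ours, purely for illustration):

```go
package main

import "fmt"

// minbftNodes returns the number of consenting nodes MinBFT needs to
// tolerate f faulty nodes (2f+1); pbftNodes returns the corresponding
// requirement for classical PBFT (3f+1).
func minbftNodes(f int) int { return 2*f + 1 }
func pbftNodes(f int) int   { return 3*f + 1 }

func main() {
	for f := 1; f <= 3; f++ {
		fmt.Printf("f=%d: MinBFT needs %d nodes, PBFT needs %d\n",
			f, minbftNodes(f), pbftNodes(f))
	}
}
```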

We hope that, by evaluating this consensus component with existing blockchain frameworks, the community will be able to leverage it in a variety of practical use cases.

Concepts

The consensus process in MinBFT is very similar to that of PBFT. Consenting nodes (i.e., the nodes that vote on the order of transactions) form a fully connected network. Among the nodes is a leader (often referred to as the primary node) that prepares an incoming request message for the other nodes by suggesting a sequence number for the request in broadcast PREPARE messages. The other nodes verify the PREPARE messages and subsequently broadcast COMMIT messages in the network. Finally, nodes that have received f+1 consistent COMMIT messages (where f is the fault tolerance, i.e., the number of tolerated faulty nodes) execute the request locally and update the underlying service state. If the leader is perceived as faulty, a view change procedure follows to replace the leader node.

Note that the signatures for the PREPARE and COMMIT messages are generated by the USIG (Unique Sequential Identifier Generator) service, the tamper-proof part of each consenting node. The USIG also assigns sequence numbers to the messages, using a unique, monotonic, and sequential counter protected by the TEE. The signature, also known as a USIG certificate, certifies the counter value assigned to a particular message. The USIG certificate combined with the counter value comprises the UI (unique identifier) of the message. Since the monotonic counter prevents a faulty node from sending conflicting messages to different nodes and provides correct information during view changes, MinBFT requires fewer communication rounds and fewer consenting nodes in total than PBFT.
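The UI described above can be pictured as a small Go struct, together with the sequentiality check a receiver applies. The names here are illustrative; see the usig/ package for the actual types:

```go
package main

import "fmt"

// UI is an illustrative rendering of the "unique identifier": the monotonic
// counter value assigned inside the TEE, plus the USIG certificate binding
// that counter value to a particular message.
type UI struct {
	Counter uint64
	Cert    []byte
}

// checkSequential sketches why the monotonic counter prevents equivocation:
// a receiver only accepts a UI whose counter directly follows the last one
// seen from that replica, so a faulty node can neither assign the same
// counter value to two different messages nor skip values unnoticed.
func checkSequential(last uint64, ui UI) bool {
	return ui.Counter == last+1
}

func main() {
	fmt.Println(checkSequential(4, UI{Counter: 5})) // true: directly follows
	fmt.Println(checkSequential(4, UI{Counter: 7})) // false: gap detected
}
```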

For a more detailed description of the protocol, refer to the Efficient Byzantine Fault-Tolerance paper.

Quick Start

This quick start shows how to run the project using Docker containers. This is the easiest way to try the project with minimal setup. If you want to build and run it without Docker, skip this section.

Prerequisites

To run the containers, Docker and Docker Compose must be installed on your system.

On Ubuntu 20.04, they can be installed as follows:

sudo apt-get install docker.io docker-compose

Note that an SGX-enabled CPU is not required to run the containers; they will run in the simulation mode provided by the Intel SGX SDK. We plan to provide another container image that runs in HW mode (i.e. using "real" hardware features) in a future release.

Building a Container Image

Build an image of the containers as follows:

sudo docker build -f sample/docker/Dockerfile -t minbft .

Note that, by default, the docker command needs to be executed as root. Refer to Docker's documentation for details.

If your system has the make command, the following command can also be used:

sudo make docker

Running Replicas

To start up an example consensus network of replicas, invoke the following commands:

sudo UID=$UID docker-compose -f sample/docker/docker-compose.yml up -d

This will start the replica nodes as 3 separate containers in the background.

Submitting Requests

Requests can be submitted for ordering and execution to the example consensus network as follows:

sudo docker-compose -f sample/docker/docker-compose.yml \
  run client request "First request" "Second request" "Another request"

This command should produce the following output showing the result of ordering and execution of the submitted requests:

Reply: {"Height":1,"PrevBlockHash":null,"Payload":"Rmlyc3QgcmVxdWVzdA=="}
Reply: {"Height":2,"PrevBlockHash":"DuAGbE1hVQCvgi+R0E5zWaKSlVYFEo3CjlRj9Eik5h4=","Payload":"U2Vjb25kIHJlcXVlc3Q="}
Reply: {"Height":3,"PrevBlockHash":"963Kn659GbtX35MZYzguEwSH1UvF2cRYo6lNpIyuCUE=","Payload":"QW5vdGhlciByZXF1ZXN0"}

The output shows the submitted requests being ordered and executed by a sample blockchain service. The service executes each request by simply appending a new block per request to the trivial blockchain it maintains.
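The chaining behavior behind those replies can be sketched in Go. How the sample service actually encodes and hashes blocks is not shown here; hashing the JSON encoding of the previous block is our assumption for illustration, not necessarily what sample/requestconsumer does:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Block mirrors the fields visible in the reply messages above.
type Block struct {
	Height        uint64
	PrevBlockHash []byte
	Payload       []byte
}

// appendBlock appends a new block for an executed request, chained to the
// previous block by a SHA-256 hash (hash scheme assumed for this sketch).
func appendBlock(chain []Block, payload []byte) []Block {
	var prev []byte
	if n := len(chain); n > 0 {
		enc, _ := json.Marshal(chain[n-1])
		h := sha256.Sum256(enc)
		prev = h[:]
	}
	return append(chain, Block{
		Height:        uint64(len(chain)) + 1,
		PrevBlockHash: prev,
		Payload:       payload,
	})
}

func main() {
	chain := appendBlock(nil, []byte("First request"))
	chain = appendBlock(chain, []byte("Second request"))
	for _, b := range chain {
		out, _ := json.Marshal(b)
		fmt.Println("Reply:", string(out)) // []byte fields render as base64
	}
}
```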

Tear Down

The following command can be used to terminate running containers:

sudo docker-compose -f sample/docker/docker-compose.yml down

The containers create some files while running. These files can be deleted as follows:

rm -f sample/docker/keys.yaml sample/docker/.keys.yaml.lock

The Docker image can be deleted as follows:

sudo docker rmi minbft

If your system has the make command, the following command can also be used:

sudo make docker-clean

Requirements

Operating System

The project has been tested on Ubuntu 18.04 and 20.04 LTS. Additional required packages can be installed as follows:

sudo apt-get install build-essential pkg-config

Golang

Go 1.13 or later is required to build this project (tested against 1.13, 1.15 and 1.16). Official installation instructions can be found here.

Intel® SGX SDK

The Intel® SGX enclave implementation has been tested with Intel® SGX SDK for Linux version 2.12. For installation instructions, please visit the download page. Please note that Intel SGX has two operation modes, and the required software components depend on the operation mode.

  • If you run in HW mode, you have to install all three components: SGX driver, PSW, and SGX SDK.
  • If you run in simulation mode, only SGX SDK is required.

A conventional directory in which to install the SDK is /opt/intel/. Do not forget to source the /opt/intel/sgxsdk/environment file in your shell. Alternatively, add the following line to ~/.profile:

. /opt/intel/sgxsdk/environment

If you run in simulation mode, you need to create or update the link to the additional directory of shared libraries with the following commands:

sudo bash -c "echo /opt/intel/sgxsdk/sdk_libs > /etc/ld.so.conf.d/sgx-sdk.conf"
sudo ldconfig

When using a machine without SGX support, only SGX simulation mode is available. In that case, be sure to export the following environment variable, e.g. by modifying the ~/.profile file:

export SGX_MODE=SIM

Getting Started

This is a Go module and can be placed anywhere; it does not need to be in GOPATH. If it is placed in GOPATH and you are using Go 1.11 or 1.12, make sure the environment variable GO111MODULE=on is set to activate module mode.

All of the following commands are supposed to be run from the root of the module's source tree.

Building

The project can be built by issuing the following command. At the moment, the binaries are installed in the sample/bin/ directory; no root privileges are needed:

make install

Running Example

Running the example requires some setup. Make sure the project has been built successfully and the sample/bin/keytool and sample/bin/peer binaries were produced. These binaries can be supplied with options through a configuration file, environment variables, or command-line arguments. More information about the available options can be obtained by invoking the binaries with the help argument. Sample configuration files can be found in the sample/authentication/keytool/ and sample/peer/ directories, respectively.

Before running the example, the LD_LIBRARY_PATH environment variable needs to include sample/lib, where libusig_shim.so is installed by make install:

export LD_LIBRARY_PATH="${PWD}/sample/lib:${LD_LIBRARY_PATH}"

Generating Keys

The following commands are to be run from the sample directory:

cd sample

A sample key set file for testing can be generated using the keytool command. The following invocation produces a key set file suitable for running the example on a local machine:

bin/keytool generate -u lib/libusig.signed.so

This will create a sample key set file named keys.yaml containing, by default, 3 key pairs for replicas and 1 key pair for a client.

Consensus Options Configuration

Consensus options can be set up by means of a configuration file. A sample consensus configuration file can be used as an example:

cp config/consensus.yaml ./

Peer Configuration

Peer configuration can be supplied in a configuration file. Selected options can be modified through command-line arguments of the peer binary. A sample configuration can be used as an example:

cp peer/peer.yaml ./

Running Replicas

To start up an example consensus network of replicas on a local machine, invoke the following commands:

bin/peer run 0 &
bin/peer run 1 &
bin/peer run 2 &

This will start the replica nodes as 3 separate OS processes in the background, using the configuration files prepared in the previous steps.

Submitting Requests

Requests can be submitted for ordering and execution to the example consensus network using the same peer binary and configuration files for convenience. It is best to issue the following commands in another terminal so that the output messages do not intermix:

bin/peer request "First request" "Second request" "Another request"

This command should produce the following output showing the result of ordering and execution of the submitted requests:

Reply: {"Height":1,"PrevBlockHash":null,"Payload":"Rmlyc3QgcmVxdWVzdA=="}
Reply: {"Height":2,"PrevBlockHash":"DuAGbE1hVQCvgi+R0E5zWaKSlVYFEo3CjlRj9Eik5h4=","Payload":"U2Vjb25kIHJlcXVlc3Q="}
Reply: {"Height":3,"PrevBlockHash":"963Kn659GbtX35MZYzguEwSH1UvF2cRYo6lNpIyuCUE=","Payload":"QW5vdGhlciByZXF1ZXN0"}

The output shows the submitted requests being ordered and executed by a sample blockchain service. The service executes each request by simply appending a new block per request to the trivial blockchain it maintains.

Tear Down

The following command can be used to terminate running replica processes and release the occupied TCP ports:

killall peer

Fault Tolerance

The above example shows a simple normal case of the consensus network. Our next interest is how the system behaves when some replicas are faulty.

Crash Fault on Backup

The simplest faulty case is a crash fault on a backup replica. Note that no fault of any type on the primary replica can be tolerated until the view change operation is implemented.

Let's restart the network and note the process ID of each replica process.

$ bin/peer run 0 &
[1] 16899
$ bin/peer run 1 &
[2] 16916
$ bin/peer run 2 &
[3] 16923

Make sure that all replicas are properly working by sending a request:

$ bin/peer request First request
Reply: {"Height":1,"PrevBlockHash":null,"Payload":"Rmlyc3QgcmVxdWVzdA=="}

Now kill replica 1 and send another request:

$ kill 16916
$ bin/peer request Second request
Reply: {"Height":2,"PrevBlockHash":"DuAGbE1hVQCvgi+R0E5zWaKSlVYFEo3CjlRj9Eik5h4=","Payload":"U2Vjb25kIHJlcXVlc3Q="}

OK, we still get reply messages with a successfully agreed response. Next, kill another backup replica and send one more request:

$ kill 16923
$ bin/peer request Another request
(no response)

We fail to reach consensus and get no response because more than f replicas are faulty.

Code Structure

The code is divided into the core consensus protocol implementation and a sample implementation of the external components required to interact with the core. The following directories contain the code:

  • api/ - definition of API between core and external components
  • client/ - implementation of client-side part of the protocol
  • core/ - implementation of core consensus protocol
  • usig/ - implementation of USIG, the tamper-proof component
  • messages/ - definition of the protocol messages
  • sample/ - sample implementation of external interfaces
    • authentication/ - generation and verification of authentication tags
      • keytool/ - tool to generate sample key set file
    • conn/ - network connectivity
    • config/ - consensus configuration provider
    • requestconsumer/ - service executing ordered requests
    • peer/ - CLI application to run a replica/client instance

Roadmap

The following features of the MinBFT protocol have been implemented:

  • Normal case operation: minimal ordering and execution of requests as long as primary replica is not faulty
  • SGX USIG: implementation of USIG service as Intel® SGX enclave

The following features are being considered for implementation:

  • View change operation: provide liveness in case of faulty primary replica
  • Garbage collection and checkpoints: generation and handling of CHECKPOINT messages, log pruning, high and low water marks
  • USIG enclave attestation: support to remotely attest USIG Intel® SGX enclave
  • Faulty node recovery: support to retrieve missing log entries and synchronize service state from other replicas
  • Request batching: reducing latency and increasing throughput by combining outstanding requests for later processing
  • Asynchronous requests: enabling parallel processing of requests
  • MAC authentication: using MAC in place of digital signature in USIG to reduce message size
  • Read-only requests: optimized processing of read-only requests
  • Speculative request execution: reducing processing delay by tentatively executing requests
  • Documentation improvement: comprehensive documentation
  • Testing improvement: comprehensive unit- and integration tests
  • Benchmarks: measuring performance

Contributing

Everyone is welcome to contribute! There are many ways to make a useful contribution. Please see the Contribution Guideline for more details.

License

Source code files are licensed under the Apache License, Version 2.0.

Documentation files are licensed under the Creative Commons Attribution 4.0 International License.

minbft's People

Contributors

nhoriguchi, romeo5929, sergefdrv, tkuhrt, ynamiki


minbft's Issues

Include message type to signed payload

The message type should be covered by the signature/USIG certificate to guarantee that a signed payload cannot be misinterpreted. NB: Protobuf does not include the message type in the binary encoding.

Mandatory parameters for timeout values

It might be better to make the parameters for clientstate or its timers mandatory, as pointed out in #127.

I thought of passing api.Configer to clientstate.NewProvider in defaultIncomingMessageHandler, but unit tests like request-timeout_test.go are affected by this change (which seems complicated to me). So I'm not sure whether it's a good direction or worth doing at this point.

Facing the error below; kindly tell how to troubleshoot it?

shalikram@shalikram-HP-Pavilion-g4-Notebook-PC:~/minbft$ make install
make -C usig/sgx build
make[1]: Entering directory '/home/shalikram/minbft/usig/sgx'
make[1]: Nothing to be done for 'build'.
make[1]: Leaving directory '/home/shalikram/minbft/usig/sgx'
go build -o sample/build/keytool ./sample/authentication/keytool
sample/authentication/keytool/main.go:19:8: cannot find package "github.com/hyperledger-labs/minbft/sample/authentication/keytool/cmd" in any of:
/usr/lib/go-1.10/src/github.com/hyperledger-labs/minbft/sample/authentication/keytool/cmd (from $GOROOT)
/home/shalikram/go/src/github.com/hyperledger-labs/minbft/sample/authentication/keytool/cmd (from $GOPATH)
Makefile:55: recipe for target 'build' failed
make: *** [build] Error 1

Test client

Add unit tests for client implementation modules.

Environment checker in build process

We have non-trivial prerequisite steps before Running Example, which can be a barrier for early users. So an environment check and guidance messages might be helpful. For example, if we try to build on a non-SGX system with SGX_MODE=HW, a warning like the one below would be helpful:

$ make install
Platform doesn't support Intel SGX. Please retry with environment variable SGX_MODE=SIM.

Similarly, if we try to build without meeting a requirement, an error like the one below would work well:

$ make install
Intel SGX SDK is not available on current platform, see README for requirements.

Make some logging parameters configurable

I learned from the code review for #41 that peer/client could have more flexible logging by making some parameters (like the log level, log file path, and/or prefix format) configurable.

settle explicit dependency on GOPATH

minbft is now treated as a Go module as of #82. Go modules are generally free from GOPATH, but we still have a dependency on it. Removing the GOPATH dependency is not mandatory or urgent, but it should be done at some point.

Make usig/sgx package retrievable from the repository

Now the main code base is converted to a Go module. However, we cannot fully utilize the flexibility and convenience of the Go module system because usig/sgx is still a special package: usig/sgx/usig-enclave.go has to be generated with make. This means a module user cannot retrieve the package directly from the repository but needs to build it separately. We should change the way we build the enclave so that this becomes an ordinary Go package.

To achieve this, we need to break the dependency of the cgo code on the SGX SDK dynamic libraries. In other words, building the enclave-related C code needs to be split from the main Go code base.

Making the enclave wrapper (shim) library a dynamically linked module (using dlopen() in the cgo code) seems to be a good option for achieving this.

For more details, please refer to the original discussion starting here.

Fix Errors from GolangCI-Lint

By switching to golangci-lint (#71, #78), we get some errors from the linter.
The CI currently ignores these errors; once they are fixed, the allowed-failure invocation make lint || : in .circleci/config.yml should be changed to make lint so the errors are no longer ignored.

sample/requestconsumer/simpleledger.go:158:5: SA4003: no value of type uint64 is less than 0 (staticcheck)
	if l.length <= 0 {
	   ^
usig/sgx/usig-enclave.go:66:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
	if sealedKey != nil && len(sealedKey) != 0 {
	   ^

https://circleci.com/gh/hyperledger-labs/minbft/115
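The fixes the linter is asking for can be sketched as follows, with simplified stand-in types rather than the actual repository code:

```go
package main

import "fmt"

type ledger struct{ length uint64 }

// SA4003 fix sketch: a uint64 can never be below zero, so the emptiness
// test should compare for equality with zero rather than use `<= 0`.
func (l *ledger) isEmpty() bool { return l.length == 0 }

// S1009 fix sketch: len() of a nil slice is defined as zero, so the explicit
// nil check in `sealedKey != nil && len(sealedKey) != 0` is redundant.
func haveSealedKey(sealedKey []byte) bool { return len(sealedKey) != 0 }

func main() {
	fmt.Println((&ledger{}).isEmpty()) // true
	fmt.Println(haveSealedKey(nil))    // false
}
```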

Use Go Modules

Go 1.11 introduced "modules". The feature is currently experimental and will be finalized in Go 1.12.

The feature may be useful for managing dependencies. After Go 1.12 is released, we will consider whether it suits our project.

Simplify messageString() output for large message payload

I noticed in the "MinBFT on Fabric" trial that the orderer's log output was filled with message payloads, which was a bit cumbersome to handle. Showing the raw payload might be fine for some simple workloads, but a digest might be preferable for messages with large payloads. One possible approach is to introduce a threshold and show a digest value if the size of the message payload exceeds it.

Change sample replica connector to tolerate connecting to faulty replicas

Currently, the sample gRPC replica connector has to establish connections to all the required replicas before it can be used by the core/client. Thus it cannot handle the case where a replica is already faulty before the connection is established. This also implies that replicas have to start listening for incoming connections before trying to establish outgoing connections. Changing how the replica connector establishes outgoing connections could help overcome this limitation.

Race condition for request execution

Message streams from different replicas are processed concurrently. Even though PREPARE/COMMIT messages from the same replica are always processed sequentially, there is still a possible race condition in request execution.

The race can happen because there is no synchronization in makeCommitmentCollector between invocations of countCommitment and executeRequest. It can happen that countCommitment returns true for two subsequent prepared requests in two goroutines. That is because one goroutine may first supply the last required commitment for one prepared request, whereas another goroutine can supply a commitment for the same prepared request (which is ignored) followed by the last required commitment of the subsequent prepared request. Then two goroutines will race to execute those requests.
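One way to close this race is to make the commitment check and the request execution a single critical section; the sketch below is simplified and illustrative, not the actual makeCommitmentCollector code:

```go
package main

import (
	"fmt"
	"sync"
)

// collector serializes "the commitment count reaches the threshold" and
// "the request is executed" under one mutex, so no two goroutines can both
// decide to execute the same request, and executions cannot race out of
// order. All names are illustrative.
type collector struct {
	mu       sync.Mutex
	needed   int
	counts   map[uint64]int
	executed []uint64 // stands in for the effects of executeRequest
}

func (c *collector) addCommitment(seq uint64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[seq]++
	if c.counts[seq] == c.needed {
		c.executed = append(c.executed, seq)
	}
}

func main() {
	c := &collector{needed: 2, counts: make(map[uint64]int)}
	var wg sync.WaitGroup
	// Two "replica stream" goroutines deliver commitments concurrently.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.addCommitment(1)
			c.addCommitment(2)
		}()
	}
	wg.Wait()
	fmt.Println(c.executed) // each request executed exactly once
}
```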

Support submitting multiple requests in sample CLI

Beyond a simple demonstration, we might want to allow creating some workload. We can extend the sample peer request command to support submitting a series of requests back-to-back. This could then be used to imitate a dense workload.

Definition of USIG

USIG seems to be a core component of MinBFT, but the term is not defined anywhere. A Google search shows that USIG means Unique Sequential Identifier Generator, so spelling this out somewhere in the README would be helpful.

CircleCI: Create and Use Custom Docker Image

CircleCI, which we set up in #13, supports custom Docker images, so it would be better to build a custom image with the Intel SGX SDK and Go on Ubuntu 16.04 than to do everything in .circleci/config.yml.

A problem with using a custom image is that we need to have a Docker Registry or manage an account on Docker Hub.

Consider faulty client

Faulty client might:

  • not send a Request message to the primary
  • send conflicting Request messages to different replicas

This could lead to expiration of the request timer and a view change even if the primary is not faulty. In this way, liveness may be compromised.

Enable CI in GitHub

GitLab CI was used for this project internally. In order to see CI results on GitHub, CI should be re-enabled using whatever GitHub supports.

Detect configuration typos in the sample

Even though this is just a sample application, this kind of error detection might help avoid surprising results caused by a simple typo in the configuration. Maybe address this in a separate issue? The same also applies to the sample/config package.

Originally posted by @sergefdrv in #134

Make Rule of Merging Pull Request

We need to make a rule for merging pull requests. I think a simple rule is sufficient for now.
@sergefdrv, @Naoya-Horiguchi, please give me your comments and suggestions.

  • A pull request must be reviewed by one of the nec-blockchain members (@sergefdrv, @Naoya-Horiguchi and @ynamiki).
  • The reviewer merges the request if it is fine.
  • If the contributor (i.e. the creator of a pull request) is a member of nec-blockchain, the request must be reviewed and merged by another member.

Error: Failed to generate keys

After running make install and cd sample, the following issue comes up while running this command:
bin/keytool generate -u lib/libusig.signed.so

The following message is shown in the terminal:

Using output file: keys.yaml
Error: Failed to generate keys: failed to create USIG enclave: libsgx_urts.so: cannot open shared object file: No such file or directory
Usage:
keytool generate [numberReplicas [numberClients [usigEnclaveFile]]] [flags]

Flags:
--client-key-spec string keyspec for client (default "ECDSA")
--client-sec-param int client security param (default 256)
-h, --help help for generate
-c, --num-clients int number of clients (default 1)
-r, --num-replicas int number of replicas (default 3)
--replica-key-spec string keyspec for replica (default "ECDSA")
--replica-sec-param int replica security param (default 256)
-u, --usig-enclave-file string USIG enclave file (default "libusig.signed.so")

Global Flags:
--config string config file (default "keytool.yaml")
-o, --output string output file (default "keys.yaml")

Failed to generate keys: failed to create USIG enclave: libsgx_urts.so: cannot open shared object file: No such file or directory

Can you please help me resolve it?

Allow force-push to address pull request comments

When a PR branch is force-pushed, GitHub provides a link ("force-pushed") to check the changes:

We'd better change our contribution guideline: review comments could be addressed not by opening another PR but by force-pushing. The links would allow checking the changes even more conveniently.

Originally posted by @sergefdrv in #112 (comment)

Remove 1s delay workaround for SGX SDK simulation mode

Now that PR intel/linux-sgx#304 has been merged, wait for the next release of the SGX SDK, switch to it, and remove the workarounds.

Requires libsgx_urts.so to run in SGX hardware mode

The current installation procedure seems to cover only the case of running in SGX simulation mode. I saw the following error when running keytool generate with SGX_MODE=HW:

bin/keytool: error while loading shared libraries: libsgx_urts.so: cannot open shared object file: No such file or directory

libsgx_urts_sim.so for simulation mode is installed by the SGX SDK installer sgx_linux_x64_sdk_2.3.101.46683.bin, while libsgx_urts.so for hardware mode is delivered by libsgx-enclave-common_2.3.101.46683-1_amd64.deb. So it might be better to add some instructions for users who want to run MinBFT in hardware mode.

Forward request messages to primary replica

The sample client currently sends REQUEST messages to all replicas, but in general that is not required. REQUEST messages are embedded in the corresponding PREPARE messages and distributed to backup replicas in the prepare phase, so by allowing backup replicas to forward REQUEST messages to the current primary, clients could initiate requests by sending a single REQUEST message to any one of the replicas. This issue also covers changing the sample client to flexibly choose the target replicas of a request for demonstration purposes.

Contributing: Git hook to fix commit order in GitHub

It is a known issue that GitHub shows commits according to commit author date rather than topologically. This creates confusion when commits are reordered with rebase/cherry-pick.

I found a simple script that updates the author and committer dates of a commit. This script can be used as a git hook to automatically reset the commit dates whenever a commit gets updated. I wonder if we should mention this in our contribution guideline?

Failed to generate keys: failed to create USIG enclave:

After running make install, I am getting the following error while executing the command:
bin/keytool generate -u lib/libusig.signed.so

shalikram@shalikram-HP-Pavilion-g4-Notebook-PC:~/minbft$ cd sample
shalikram@shalikram-HP-Pavilion-g4-Notebook-PC:~/minbft/sample$ bin/keytool generate -u lib/libusig.signed.so
Using output file: keys.yaml
Error: Failed to generate keys: failed to create USIG enclave: libsgx_urts.so: cannot open shared object file: No such file or directory
Usage:
keytool generate [numberReplicas [numberClients [usigEnclaveFile]]] [flags]

Flags:
--client-key-spec string keyspec for client (default "ECDSA")
--client-sec-param int client security param (default 256)
-h, --help help for generate
-c, --num-clients int number of clients (default 1)
-r, --num-replicas int number of replicas (default 3)
--replica-key-spec string keyspec for replica (default "ECDSA")
--replica-sec-param int replica security param (default 256)
-u, --usig-enclave-file string USIG enclave file (default "libusig.signed.so")

Global Flags:
--config string config file (default "keytool.yaml")
-o, --output string output file (default "keys.yaml")

Failed to generate keys: failed to create USIG enclave: libsgx_urts.so: cannot open shared object file: No such file or directory

Check updates to CI Dockerfile

Since #50 was merged, we use a custom Docker image in CI, maintained on Docker Hub. The image is built from a Dockerfile managed in the source tree. As long as this Dockerfile stays unchanged, it is safe to keep using the same image from Docker Hub for CI checks.

However, we would like to update the image from time to time. In that case, we should ensure the update does not break our CI. This comes down to running the CI checks on the updated image before changing our default CI image on Docker Hub.

#52 attempts to update the Dockerfile without running CI checks on the updated image. #58 suggests changing the Dockerfile to always fetch the latest release of the gometalinter tool, but it is not clear how to trigger the image build and verify that the resulting image does not break CI. #52 (comment) suggests using a separate repository for the Dockerfile.

We need to define a scheme for updating the CI Docker image, preferably without the risk of suddenly breaking our CI.
