hyperledger / fabric-test

A collection of utilities used to test the core Hyperledger Fabric projects

License: Apache License 2.0

Go 35.69% Shell 24.61% JavaScript 26.47% Makefile 0.27% Java 0.38% Gherkin 1.65% TypeScript 10.04% Dockerfile 0.88%

fabric-test's Introduction


Getting Started

Fabric-Test provides two tools for testing Fabric itself: the Operator tool and PTE.

  • The Operator tool is used to deploy Fabric networks. It can deploy Fabric to your local machine using Docker, or to Kubernetes.
  • PTE, or the Performance Traffic Engine, is used to invoke and query chaincode through a network deployed with the Operator tool.

Prerequisites

While Fabric-Test provides a utility for installing most of its dependencies, you do need a few basic tools to get started:

  • Go 1.18 or later
  • Node 16 or later
  • Java 8 or later (if using Java chaincode)
  • Docker
  • Docker-Compose
  • Curl and Make

Once you've installed these dependencies, execute make pre-reqs from the root of the repo and Fabric-Test will bootstrap the rest of the dependencies and install the required NPM packages.

Environment variables

Make sure $GOPATH/bin (if GOPATH is set) or $HOME/go/bin is in your $PATH so that the go tools can be found.

If you run the make targets from the project root directory, fabric-test/bin (containing the Fabric binaries) is added to PATH and fabric-test/config (containing the Fabric node config files) is added to FABRIC_CFG_PATH. If you run the tests directly (outside of make), you will need to set these variables yourself.

Running Test Suites with Make

You can run the automated test suites with the Makefile targets shown below. These handle all the steps for you: each target installs the prerequisites and executes the test suite in the targeted directory. Simply call make with one of the test suites in the regression directory:


  make regression/smoke     # Cleans environment, updates submodules, clones & builds
                            # fabric & fabric-ca images, executes Smoke tests from
                            # regression/smoke folder.

Tools Used to Execute Tests

Operator

Please see the README in the tools/operator directory for more detailed information on using the Operator to launch Fabric networks, administer them, and execute actions from test-input files to reconfigure the network, disrupt the network, or use PTE to send transactions.

Performance Traffic Engine

Please see the README in the tools/PTE directory for more detailed information on using the Performance Traffic Engine to drive transactions through a Fabric network.

Licensed under the Creative Commons Attribution 4.0 International License: https://creativecommons.org/licenses/by/4.0/

fabric-test's People

Contributors

2016nishi, adnan-c, asararatnakar, awjh-ibm, bharadwajambati95, chandrachavva, denyeart, dependabot[bot], gennadylaventman, johndsheehan, jt-nti, lhaskins, lindluni, luomin, mastersingh24, mbwhite, nklincoln, rameshthoomu, rennman, ricjhill, ryjones, sambhavdutt, scottz64, shimos, shw8927, suryalnvs, sykesm, vijaypunugubati, vramakrishna, yuki-kon


fabric-test's Issues

Create Test Scenario for Chaos Testing

Some of the fundamental properties of a blockchain network are its high availability, fault tolerance, and ability to recover from disaster. To that end, one of the chief goals of system testing should be to fully exercise the ability of a Fabric network to recover from partial and total failure. Our goal should be to assert the expected functionality of the network as we kill off components and allow them to recover, delete the underlying artifacts of these components, and take them offline for extended periods of time.

This testing will be performed using ChaosKube on a Kubernetes cluster. We can make use of local clusters using microk8s.

Convert PTE To Latest High-Level Node.js SDK

PTE is written using the fabric-common package, which is the low-level SDK. Now that the high-level SDK exists and that is the area of focus for future development, PTE should be modified to use the high-level SDK.

Instead of attempting to modify the existing PTE, the rewrite should be a copy that sits in a folder next to PTE (maybe PTEv2) for now. Development will be incremental; code should be pushed, reviewed, and merged frequently, rather than landing an entire rewrite of the codebase in a single commit series.

We should open issues around each component of PTE we need to implement.

Trying to connect to a Kubernetes cluster created with tools/operator using the Java SDK

I am super new to Fabric. Can someone please point me to how I would go about connecting a Java app using the Fabric Java SDK? I can contribute a sample app as a PR if that would be something people can use.

I am trying to make sense of the fabric-samples Java project. I noticed the operator generates connection profiles and CA certs for all orgs. However, it's not clear to me how my client can talk to the cluster stood up by the operator using those generated files.

Can anyone help?

Improve PTE logging when channel is not yet initialized on orderer

Concerning errors in the PTE logs at the join-channel step:

info: [2022-03-30T21:47:47.541Z PTE 0 main]: [joinChannel:org1] Successfully enrolled orderer 'admin'
info: [2022-03-30T21:47:47.541Z PTE 0 main]: [joinChannel:org2] Successfully enrolled orderer 'admin'
info: [2022-03-30T21:47:47.547Z PTE 0 util]: [assignChannelOrderer] channel orderers: [ [Orderer] ]
info: [2022-03-30T21:47:47.547Z PTE 0 util]: [assignChannelOrderer] channel orderers: [ [Orderer] ]
2022-03-30T21:47:47.577Z - error: [Orderer.js]: sendDeliver - rejecting - status:SERVICE_UNAVAILABLE
2022-03-30T21:47:47.577Z - error: [Orderer.js]: sendDeliver - rejecting - status:SERVICE_UNAVAILABLE
info: [2022-03-30T21:47:48.594Z PTE 0 main]: [joinChannel:org=testorgschannel0:org2] Successfully got the genesis block
info: [2022-03-30T21:47:48.594Z PTE 0 main]: [joinChannel:org=testorgschannel0:org1] Successfully got the genesis block

The error is a non-issue. PTE creates the channel on the orderer, and then tries to get the genesis block from the orderer to prove the channel is created and ready before moving on to the join-peers step. Since it takes a few seconds for the channel to initialize on the orderer, PTE retries the attempt every second. The first attempt fails with an error reported in the PTE log and in the orderer log, and the next attempt succeeds.

I'd suggest updating PTE's pte-main.js to write a warning log message stating that the channel is not yet initialized and that it will retry retrieving the genesis block in one second.

The change can be made at this line:
https://github.com/hyperledger/fabric-test/blob/main/tools/PTE/pte-main.js#L913

The change will make readers of the log aware that the error is expected and benign.
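The suggested behavior amounts to a small retry loop that logs a warning, not an error, while the channel is still initializing. A minimal sketch in Go for illustration (PTE itself is JavaScript, and `fetchGenesisBlock` here is a hypothetical stand-in for the real orderer call):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// attempts simulates an orderer that needs a couple of seconds before the
// channel is initialized; fetchGenesisBlock is a hypothetical stand-in for
// the call that retrieves the genesis block from the orderer.
var attempts = 0

func fetchGenesisBlock() error {
	attempts++
	if attempts < 3 {
		return errors.New("SERVICE_UNAVAILABLE")
	}
	return nil
}

// getGenesisBlockWithRetry logs a warning instead of an error while the
// channel is still initializing, then retries every second.
func getGenesisBlockWithRetry(maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if err := fetchGenesisBlock(); err == nil {
			fmt.Println("info: successfully got the genesis block")
			return nil
		} else {
			// Warning, not error: the channel is likely not yet initialized.
			fmt.Printf("warn: channel not yet initialized (%v); retrying genesis block retrieval in 1s\n", err)
			time.Sleep(time.Second)
		}
	}
	return errors.New("gave up retrieving genesis block")
}

func main() {
	if err := getGenesisBlockWithRetry(5); err != nil {
		fmt.Println("error:", err)
	}
}
```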

Create Full Interoperability Suite

Fabric has three SDKs: Go, Node.js/TypeScript, and Java. In order to maintain and verify functionality, we should create a full-featured interoperability definition. This definition should specify all of the functionality we will test, and then we should implement the interoperability suite by replicating these test definitions in each SDK. This will allow us to verify that functionality remains valid and consistent across SDKs.

Create T-Shirt Size Networks for Performance Regression Testing

In order to run performance regression tests, we need some representative networks for Fabric. Not sure if these already exist, but if not, it would be nice to start with three network setups:

Small (maybe like the one in FAB-17614):

  • 2 orgs
  • 2 peers with CAs
  • 2 orderer nodes
  • 1 channel
  • 1 chaincode type

Medium:

  • 5 orgs
  • 2 peers with CAs
  • 5 orderer nodes, all belonging to the same org
  • 5 channels
  • 1 chaincode type

Large:

  • 30 orgs
  • 2 peers with CAs
  • 10 orderer nodes, from different orgs
  • 1 channel
  • 1 chaincode type

Create Test Scenarios for the External Chaincode Launcher

The external chaincode launcher was added as part of the v2.0.0 release. This change allows users to break free of the peer's dependency on the Docker daemon.

In particular, users can launch chaincode as standalone servers, launch Docker containers running chaincode out of band and connect to them, or launch separate Kubernetes pods running the chaincode.

This encapsulates two test scenarios:

  • Launching the chaincode in Docker out of band and connecting to it
  • Launching the chaincode as a separate pod and connecting to it

We can make use of some tool such as microk8s to run the tests in CI: https://microk8s.io/
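For the out-of-band scenarios, the peer locates the already-running chaincode via a connection.json supplied in the chaincode package. A minimal sketch of that file (the hostname and port are placeholders, not values from this repo):

```json
{
  "address": "chaincode.example.com:9999",
  "dial_timeout": "10s",
  "tls_required": false
}
```

Both test scenarios above would package a file like this and point the address at the Docker container or Kubernetes pod respectively.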

Document Test Scenarios in `regression` Directory

In ./tools/PTE/CITests we have a README that lists our test tasks. We should replicate this in the root of the regression directory; it should include a list of each of our tests, a very brief description of each test, and a link to the README for each test.

With that being said, each individual test directory should also include a README describing the test scenario and the topology of the test network.

This will help developers identify tests they want to target when they want to test something related to changes they've made.

All future tests will need to include an update to the global README and to the README specific to the scenario.

Rename Operator Tool

As we try to standardize the Operator, it is highly confusing that the Kubernetes application lifecycle tooling is itself called the "Operator Framework", with the Kubernetes resource itself called an Operator. To prevent confusion we should rename the Operator to something more idiomatic of Fabric.

Document Chaincode to Provide Functional Context

Add a README to the chaincodes directory to document what each of the chaincodes is testing. Today it is impossible for anyone who didn't write the chaincode to figure out the differences between chaincodes.

Add chaos testing for peer gateway into the fabric-test builds

Code base

  • add operator based network
  • add chaos chaincode from fabric-chaos-testing into chaincodes/chaos/node
  • add chaos client (don't forget the lint dot file) from fabric-chaos-testing into tools/chaos/client/node
  • add chaos engine (don't forget the lint dot file and sample scenarios) from fabric-chaos-testing into tools/chaos/engine

Integration

  • build task to run chaos killing non gateway peers (this could fail occasionally) - see requirements later for this : PR AVAILABLE
  • build task to run complete chaos and ensure client can still recover - see requirements later for this
  • tune the tasks to the build environment they will run in when adding to the CI pipeline

Misc

  • task to do a smoke test of chaos engine when PR submitted
  • provide a readme to document the chaos environment and how to use standalone

For each build task you will need to

  • update Makefile to build the chaos chaincode, chaos client, and chaos engine (one time only)
  • create a directory in regression
  • add in an appropriate suite_test.go + test.go files (+go.mod, go.sum)
  • create a chaos launcher script (based on coord.sh) to manage syncing the client and chaos engine
  • add a scenarios directory with the required scenarios for that build task
  • add a .env file which configures the client and is appropriate for that task (probably require tuning)

Privacy leakage in mapkeys.go

The "Invoke" function in chaincodes/mapkeys/go/mapkeys.go receives the private data directly via function parameters, which means the data is included in the transaction and accessible to all peers.
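Private data intended for a collection should instead be passed via the transient field, which is not recorded in the transaction. A self-contained sketch of that pattern (using a tiny stub interface rather than the real `shim.ChaincodeStubInterface`, whose `GetTransient` method has the same shape):

```go
package main

import (
	"errors"
	"fmt"
)

// transientGetter mimics the GetTransient method of the Fabric chaincode
// stub (shim.ChaincodeStubInterface in fabric-chaincode-go).
type transientGetter interface {
	GetTransient() (map[string][]byte, error)
}

// fakeStub is a local test double standing in for the real stub.
type fakeStub struct{ transient map[string][]byte }

func (s *fakeStub) GetTransient() (map[string][]byte, error) { return s.transient, nil }

// readPrivateValue pulls the private value from the transient map instead of
// from ordinary invoke arguments, so it never appears in the transaction.
func readPrivateValue(stub transientGetter, key string) ([]byte, error) {
	tm, err := stub.GetTransient()
	if err != nil {
		return nil, err
	}
	val, ok := tm[key]
	if !ok {
		return nil, errors.New("missing transient key: " + key)
	}
	return val, nil
}

func main() {
	stub := &fakeStub{transient: map[string][]byte{"secret": []byte("s3cret")}}
	v, _ := readPrivateValue(stub, "secret")
	fmt.Println(string(v))
}
```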

Recreate PTE Using the Fabric-Go-SDK

The Go SDK has officially released support for the new high level programming model. Now that this is available we should consider creating a Go-based tool for performing network operation and testing purposes.

The Operator tooling is written in Go, as is Fabric itself. To make use of the larger Fabric ecosystem, it now makes sense to write a tool to drive traffic and operations in Go. This tooling will be part of the larger Operator ecosystem.

We should open additional issues to encapsulate all the work needed to create a PTE-Go

Replace PTE with supported alternative for the Operator Test Tool

PTE relies on an old, unsupported version of the Node SDK. The Operator Test Tool relies on PTE to perform admin tasks (such as channel creation) as well as to drive business logic for test purposes.

Now that the new gateway API is available and will be the recommended API for client business applications, we should instead drive transactions with the new thin Go SDK.

For operational requirements we only have the osnadmin and peer CLI commands to rely on. The Operator Test Tool already has some support for using the CLI to perform operational activities; we need to look at extending this.

Self Contain the Operator Tooling

The Operator tooling today reads in a config file, parses it, then writes it to a common location on disk. This should instead be changed such that the config file is parsed in a single location and the config object is instead passed through the call stack.

This will enable support for #265.
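The intended shape, parsing the config once at the entry point and handing the resulting object down the call stack rather than re-reading it from disk, might look like the following sketch (the type and function names are illustrative, not the actual Operator API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetworkConfig is an illustrative stand-in for the Operator's network spec.
type NetworkConfig struct {
	Orgs     int `json:"orgs"`
	Orderers int `json:"orderers"`
}

// parseConfig is called exactly once, at the entry point.
func parseConfig(raw []byte) (*NetworkConfig, error) {
	var cfg NetworkConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}

// deployNetwork receives the parsed config object instead of reading the
// file again from a common location on disk.
func deployNetwork(cfg *NetworkConfig) string {
	return fmt.Sprintf("deploying %d orgs with %d orderers", cfg.Orgs, cfg.Orderers)
}

func main() {
	cfg, err := parseConfig([]byte(`{"orgs": 2, "orderers": 3}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(deployNetwork(cfg))
}
```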

externalIP returning localhost for orderer (and other) services

externalIP is currently returning localhost in my deployment. As a result the health checks are failing and the deployment is not successful. I am wondering if I missed a configuration step?

Do I need to set something in the network spec to make sure the deployed HL cluster is exposed to the external world? Do I have to expose the service externally myself? I was hoping the operator would do that for me.

Do I need to port forward all of the services instead?

Automate configuration of a performance testbed for a Fabric Developer

Automate configuration of a provisioned environment to enable a developer provided fabric to be driven by a caliper workload

  • investigate the suitability for the environment to be all provisioned VMs such that the results are consistent and comparable within the lifetime of those provisioned VMs
  • provide an environment to be able to stand up a fabric network with local dev changes
  • provide capability to be able to extend that environment to include prometheus and grafana servers with a starter set of queries and graphs
  • reference Caliper and caliper-benchmarks with respect to defining and running a Caliper-based benchmark to perform comparisons

At this time there are no facilities available to provision systems for this activity. It will therefore be the responsibility of contributors to source their own appropriate systems in order to run benchmarks to test their code changes.

Further, this will initially be a BYOF (Bring Your Own Fabric) environment, where the Fabric processes are either run natively or in a lightweight container environment such as Docker, such that the processes are still cleanly exposed as though they were native and network access is trivial via a single SUT IP address. It will be up to the developer to start/stop the Fabric processes and to deploy their own chaincode.

Update fabric-test Go version to v1.15.x to match Fabric

Found these references to Go v1.14.x in fabric-test:

ci/azure-pipelines-daily.yml:24: GO_VER: 1.14.2
ci/azure-pipelines.yml:26: GO_VER: 1.14.2
ci/azure-pipelines-interop.yml:32: value: 1.14.2
images/ca/Dockerfile:8:ARG GO_VER=1.14.4
images/proxy-tools/Dockerfile:9:ARG GO_VER=1.14.4
images/orderer/Dockerfile:8:ARG GO_VER=1.14.4
images/peer/Dockerfile:8:ARG GO_VER=1.14.4

Refactor Operator Tooling to be Self-Contained and Written Idiomatically

The first thing I'll note is that this is going into a feature branch, not directly into master, as getting it right is more important to me than getting it in. I am opening the PRs here so the work is visible and can be commented on, but it will not change the way the Operator works today, as I won't be merging anything to master until all of the work is done.

The second thing is that I am focusing on one part at a time (Docker to start). This means I am okay with breaking things related to Kubernetes up front. Once I am done with the Docker portion, I will come back through each PR and refactor it to work with Kubernetes. So the missing NodePortIP would be corrected in a refactor of this PR at a later time.

Today the Operator calls out to several command line tools: Docker, configtxgen, configtxlator, YTT, and cryptogen, even performing tasks like moving a file by calling bash's mv instead of Go's os.Rename, and until recently we also called out to the kubectl command.

This is highly problematic in that it requires the user to put tools on their path before ever using our tool. It also prevents us from ensuring version compatibility; i.e., I don't know whether they have configtxgen 2.0 or 1.1.

Because of YTT we also write a custom config to disk over and over again: we read the config, append data to it, write it to disk, then read it again, append more data, and write it again, all before ever generating the YAML files we want.

Instead, these changes will bring about proper config injection and dependency management: creating in-memory config objects where necessary, replacing tools like configtxgen and configtxlator with fabric-config to manage config operations in code, replacing the calls to Docker with the Docker SDK, and ultimately removing YTT by replacing cryptogen with calls to the Fabric CA using the Go SDK (since we aren't actually testing the functionality of cryptogen, where we get the crypto material from is irrelevant).

Ultimately this will allow us to remove external dependencies, and make the Operator tool a truly standalone tool.

The change to the Go SDK will allow us to embed everything we need to perform system tests in the Operator tooling. Getting the Operator into a state where it can be extended and maintained is important; today it is very hard to wrap your head around what is happening, as a lot of things are managed on disk rather than in code.
