hyperledger-archives / avalon

Hyperledger Avalon enables privacy in blockchain transactions, moving intensive processing from a main blockchain to improve scalability and latency, and to support attested Oracles

Home Page: https://wiki.hyperledger.org/display/avalon/Hyperledger+Avalon

License: Apache License 2.0

Makefile 1.13% Shell 2.56% Python 45.02% CMake 2.74% C++ 39.59% C 1.53% Dockerfile 1.95% SWIG 1.12% Go 2.61% JavaScript 0.24% Groovy 0.44% Solidity 1.06%

avalon's People

Contributors

amxx, bvavala, byron-marohn, cmickeyb, danintel, dcmiddle, divyataori, eugeneyyy, g2flyer, ikegawa-koshi, jimthematrix, jucchan, karthikamurthy, m3ngyang, manju956, manojgop, manojsalunke85, markyqj, pankajgoyal2, pegartillo95, ram-srini, reckeyzhang, rranjan3, ryjones, snmackenzie, sriharshasubbakrishna


avalon's Issues

Optimize fabric dockerfile

The final build stage of the Fabric Dockerfile needs to be optimized to copy only the required artifacts from the build image.

Avalon Fabric SSL handshake failed


Reproduce

Bring up the minifabric network:
cd $avalon_dir
sudo /home/ubuntu/.local/bin/minifab up -i 1.4.4

Bring up Avalon Fabric:
sudo docker-compose -f docker-compose-fabric.yaml up --build

Call Avalon:
docker exec -it avalon-shell bash
cd examples/apps/generic_client/
./fabric_generic_client.py -b fabric --workload_id "echo-result" --in_data "Hello"


More Information

Avalon Version
1f84bfc7e413505732aa0eb11cc76ddec1c5fcd4

OS Information

$ uname -a
Linux dc01 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

minifabric Up logs

ubuntu@dc01:~/avalon$ sudo /home/ubuntu/.local/bin/minifab up -i 1.4.4 
Minifab Execution Context:
    FABRIC_RELEASE=1.4.4
    CHANNEL_NAME=mychannel
    PEER_DATABASE_TYPE=golevel
    CHAINCODE_LANGUAGE=go
    CHAINCODE_NAME=simple
    CHAINCODE_VERSION=1.0
    CHAINCODE_PARAMETERS="init","a","200","b","300"
    CHAINCODE_PRIVATE=false
    CHAINCODE_POLICY=
    TRANSIENT_DATA=
    BLOCK_NUMBER=newest
    EXPOSE_ENDPOINTS=false
    CURRENT_ORG=org0.example.com
    HOST_ADDRESSES=172.20.46.136
    WORKING_DIRECTORY: /home/ubuntu/avalon
# Fabric Network **************************************************************************************
  * minifab                                                                             
    setting up
............................................................

# STATS ***********************************************************************************************
minifab: ok=65  failed=0
...
# Fabric operations ***********************************************************************************
  * minifab                                                                             
    channel create, channel join, anchor update, profile generation, cc install, cc instantiate, discover
.............................................................................................................................................................

# STATS ***********************************************************************************************
minifab: ok=178 failed=0
Running Nodes:
dev-peer2.org0.example.com-simple-1.0:
cli:
ca1.org1.example.com:7054/tcp
ca1.org0.example.com:7054/tcp
orderer3.example.com:7050/tcp
orderer2.example.com:7050/tcp
orderer1.example.com:7050/tcp
peer2.org1.example.com:
peer1.org1.example.com:
peer2.org0.example.com:
peer1.org0.example.com:
minifab:

real    5m20.072s
user    1m42.661s
sys     0m31.098s

Avalon Error Information

root@0d42f9ce0ed0:/project/avalon# cd examples/apps/generic_client/
root@0d42f9ce0ed0:/project/avalon/examples/apps/generic_client# ./fabric_generic_client.py -b fabric --workload_id "echo-result" --in_data "Hello"
[10:44:01 INFO    __main__] ******* Hyperledger Avalon Generic client *******
[10:44:01 INFO    root] Org name choose: org0.example.com
Init client with profile=/project/avalon/sdk/avalon_sdk/connector/blockchains/fabric/network.json
[10:44:01 DEBUG   hfc.fabric.client] Init client with profile=/project/avalon/sdk/avalon_sdk/connector/blockchains/fabric/network.json
create org with name=example.com
[10:44:01 DEBUG   hfc.fabric.client] create org with name=example.com
create org with name=org0.example.com
[10:44:01 DEBUG   hfc.fabric.client] create org with name=org0.example.com
create org with name=org1.example.com
[10:44:01 DEBUG   hfc.fabric.client] create org with name=org1.example.com
create ca with name=ca1.org0.example.com
[10:44:01 DEBUG   hfc.fabric.client] create ca with name=ca1.org0.example.com
create ca with name=ca1.org1.example.com
[10:44:01 DEBUG   hfc.fabric.client] create ca with name=ca1.org1.example.com
Import orderers = dict_keys(['orderer1.example.com', 'orderer2.example.com', 'orderer3.example.com'])
[10:44:01 DEBUG   hfc.fabric.client] Import orderers = dict_keys(['orderer1.example.com', 'orderer2.example.com', 'orderer3.example.com'])
Import peers = dict_keys(['peer1.org0.example.com', 'peer2.org0.example.com', 'peer1.org1.example.com', 'peer2.org1.example.com'])
[10:44:01 DEBUG   hfc.fabric.client] Import peers = dict_keys(['peer1.org0.example.com', 'peer2.org0.example.com', 'peer1.org1.example.com', 'peer2.org1.example.com'])
New channel with name = mychannel
[10:44:01 DEBUG   hfc.fabric.client] New channel with name = mychannel
[10:44:01 INFO    hfc.fabric.channel.channel] DISCOVERY: adding channel peers query
[10:44:01 INFO    hfc.fabric.channel.channel] DISCOVERY: adding config query
E0505 10:44:01.514951561      59 ssl_transport_security.cc:1379] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
[10:44:01 WARNING STDERR] Traceback (most recent call last):
[10:44:01 WARNING STDERR]   File "./fabric_generic_client.py", line 621, in <module>
[10:44:01 WARNING STDERR] Main()
[10:44:01 WARNING STDERR]   File "./fabric_generic_client.py", line 508, in Main
[10:44:01 WARNING STDERR] blockchain, config)
[10:44:01 WARNING STDERR]   File "./fabric_generic_client.py", line 333, in create_worker_registry_instance
[10:44:01 WARNING STDERR] return FabricWorkerRegistryImpl(config)
[10:44:01 WARNING STDERR]   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/fabric_worker_registry.py", line 52, in __init__
[10:44:01 WARNING STDERR] self.__fabric_wrapper = FabricWrapper(config)
[10:44:01 WARNING STDERR]   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/fabric_wrapper.py", line 61, in __init__
[10:44:01 WARNING STDERR] self.__peername, 'Admin')
[10:44:01 WARNING STDERR]   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/base.py", line 75, in __init__
[10:44:01 WARNING STDERR] self._user, peer, self._channel_name))
[10:44:01 WARNING STDERR]   File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
[10:44:01 WARNING STDERR] return future.result()
[10:44:01 WARNING STDERR]   File "/usr/local/lib/python3.6/dist-packages/hfc/fabric/client.py", line 175, in init_with_discovery
[10:44:01 WARNING STDERR] local=False)
[10:44:01 WARNING STDERR]   File "/usr/local/lib/python3.6/dist-packages/aiogrpc/channel.py", line 40, in __call__
[10:44:01 WARNING STDERR] return await fut
[10:44:01 WARNING STDERR] grpc._channel._MultiThreadedRendezvous:
[10:44:01 WARNING STDERR] <_MultiThreadedRendezvous of RPC that terminated with:
[10:44:01 WARNING STDERR]       status = StatusCode.UNAVAILABLE
[10:44:01 WARNING STDERR]       details = "failed to connect to all addresses"
[10:44:01 WARNING STDERR]       debug_error_string = "{"created":"@1588675441.515290099","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3981,"referenced_errors":[{"created":"@1588675441.515279834","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":394,"grpc_status":14}]}"
[10:44:01 WARNING STDERR] >
avalon-blockchain-connector    | 2020-05-05 10:42:57,827 - DEBUG - Import orderers = dict_keys(['orderer1.example.com', 'orderer2.example.com', 'orderer3.example.com'])
avalon-blockchain-connector    | Import peers = dict_keys(['peer1.org0.example.com', 'peer2.org0.example.com', 'peer1.org1.example.com', 'peer2.org1.example.com'])
avalon-blockchain-connector    | 2020-05-05 10:42:57,831 - DEBUG - Import peers = dict_keys(['peer1.org0.example.com', 'peer2.org0.example.com', 'peer1.org1.example.com', 'peer2.org1.example.com'])
avalon-blockchain-connector    | New channel with name = mychannel
avalon-blockchain-connector    | 2020-05-05 10:42:57,833 - DEBUG - New channel with name = mychannel
avalon-blockchain-connector    | 2020-05-05 10:42:57,834 - INFO - DISCOVERY: adding channel peers query
avalon-blockchain-connector    | 2020-05-05 10:42:57,834 - INFO - DISCOVERY: adding config query
avalon-blockchain-connector    | E0505 10:42:57.840321291      50 ssl_transport_security.cc:1379] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
avalon-blockchain-connector    | Traceback (most recent call last):
avalon-blockchain-connector    |   File "/usr/local/bin/avalon_blockchain_connector", line 7, in <module>
avalon-blockchain-connector    |     from avalon_blockchain_connector.connector_service import main
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_blockchain_connector/connector_service.py", line 114, in <module>
avalon-blockchain-connector    |     main()
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_blockchain_connector/connector_service.py", line 104, in main
avalon-blockchain-connector    |     fabric_connector_svc = FabricConnector(uri)
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_blockchain_connector/fabric/fabric_connector.py", line 61, in __init__
avalon-blockchain-connector    |     self.__fabric_worker = FabricWorkerRegistryImpl(self.__config)
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/fabric_worker_registry.py", line 52, in __init__
avalon-blockchain-connector    |     self.__fabric_wrapper = FabricWrapper(config)
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/fabric_wrapper.py", line 61, in __init__
avalon-blockchain-connector    |     self.__peername, 'Admin')
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/connector/blockchains/fabric/base.py", line 75, in __init__
avalon-blockchain-connector    |     self._user, peer, self._channel_name))
avalon-blockchain-connector    |   File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
avalon-blockchain-connector    |     return future.result()
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/hfc/fabric/client.py", line 177, in init_with_discovery
avalon-blockchain-connector    |     local=False)
avalon-blockchain-connector    |   File "/usr/local/lib/python3.6/dist-packages/aiogrpc/channel.py", line 40, in __call__
avalon-blockchain-connector    |     return await fut
avalon-blockchain-connector    | grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
avalon-blockchain-connector    |        status = StatusCode.UNAVAILABLE
avalon-blockchain-connector    |        details = "failed to connect to all addresses"
avalon-blockchain-connector    |        debug_error_string = "{"created":"@1588675377.840694607","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3981,"referenced_errors":[{"created":"@1588675377.840682527","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":394,"grpc_status":14}]}"

Cannot build the project from the Dockerfile

OS version: Ubuntu 18.04 (Used in a Virtual Box)
Docker version: 19.03.6
docker-compose version: 1.17.1

I tried to build the Avalon project using the docker-compose command for SGX simulator mode, but it gives the following error. I can't find a solution for this. Is something wrong with my Docker versions?

[screenshot of the docker-compose build error]

Optimize enclave manager docker image

Avalon workflow is broken in SGX HW mode.
The following SGX error is observed during enclave initialization:
failed to initialize enclave; Failed to get SGX quote size.: UNKNOWN SGX ERROR:<error_value>

This issue has been observed since the changes containerizing the enclave manager were introduced.
The issue is due to some discrepancy in the final image of the enclave manager Dockerfile.
As a temporary workaround, the final image has been removed from the source code so that all binaries and the environment are available in the build image of the Dockerfile itself.

AI:

  1. Identify the discrepancy in the final stage which is causing the SGX error during initialization and fix it.
  2. Bring back the optimizations that were done in the final stage of the Docker image.

The signing algorithm hashes twice

We have been having some mysterious difficulties in preparing an acceptable WorkOrderSubmit request in pure Python or Go (i.e., without using the SWIG library). It turns out the digest of the request is hashed TWICE:

first in signature.py:
262 final_hash = crypto.compute_message_hash(concat_hash)
[...]
270 status, signature = self.generate_signature(final_hash, private_key)

and then down the frame stack from generate_signature, in sig_private_key.cpp:
212 ByteArray pcrypto::sig::PrivateKey::SignMessage(const ByteArray& message) const {
213 unsigned char hash[SHA256_DIGEST_LENGTH];
214 // Hash
215 SHA256((const unsigned char*)message.data(), message.size(), hash);
[...]
A compensating hash is done in the OpenSSL wrapper during signature verification, so if one uses that library both on the server and in the client, the issue does not show.
But when using other libraries/languages, it becomes awkward (and is not documented); one has to:

  • concatenate the relevant parts of a request
  • calculate sha256 of this extract --> h1
  • encrypt the resulting hash and include it in the request
  • calculate sha256 of the hash h1 --> h2
  • sign the hashed hash h2 with some private key
  • include the signature and the public key in the request

This is counter-intuitive and not documented.
The same happens during verification: one needs to decrypt the hash, recover the signature, and then verify the signature against the hash of the hash.
If this is intentional, it should at the very least be well documented.
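For reference, here is a minimal sketch of what a non-SWIG client currently has to do, assuming a SECP256K1 signing key and the pure-Python ecdsa package (function and variable names are illustrative, not part of the Avalon SDK):

import hashlib
from ecdsa import SigningKey, SECP256k1

def sign_work_order(concatenated_fields: bytes, signing_key: SigningKey) -> bytes:
    # h1: the request digest that is encrypted and embedded in the request
    h1 = hashlib.sha256(concatenated_fields).digest()
    # h2: the extra hash added inside SignMessage(); the signature covers the hash of the hash
    h2 = hashlib.sha256(h1).digest()
    # sign_digest() signs the digest as-is, without hashing it again
    return signing_key.sign_digest(h2)

# Example key: signing_key = SigningKey.generate(curve=SECP256k1)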

Integrate burrow

It should be possible to integrate Hyperledger Burrow with Avalon with relative ease, considering we expose many of the same interfaces as Besu.

Crypto module uses AES-128-GCM instead of recommended AES-GCM-256

The “encryption algorithm” tags are set to AES-256-GCM, while the implementation is currently hardcoded to AES-128-GCM.

tc/sgx/common/crypto/README.md mentions AES-GCM-256 for data encryption, but the header file tc/sgx/common/crypto/skenc.h defines a 16-byte (128-bit) key length.
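For illustration only, a sketch using Python's cryptography package (not the Avalon C++ crypto module) showing how the key length alone determines the AES-GCM variant:

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_128 = AESGCM.generate_key(bit_length=128)  # 16-byte key -> AES-128-GCM (what skenc.h currently hardcodes)
key_256 = AESGCM.generate_key(bit_length=256)  # 32-byte key -> AES-256-GCM (what the README and the tags promise)

nonce = b"\x00" * 12  # illustrative 96-bit IV
ciphertext = AESGCM(key_256).encrypt(nonce, b"work order payload", None)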

Make the worker sync up and event handling logic similar for ethereum and fabric

Currently the Fabric and Ethereum worker sync-up and event-handling implementations in the blockchain connector and SDK are different. Ideally, most of the logic and function implementations would be similar and could be reused; the main differences should only relate to the way events are obtained from the corresponding blockchain.

Running a basic test case doesn't work

Hi,

Kris, a Gardener dev here. We would like to start work on adapting our monitor frontend to TCF. To do that, I tried to set up TCF on my local machine to figure out how to integrate with it. However, it hasn't worked for me, as described below:

  1. I start TCF HW mode with sudo docker-compose -f docker-compose-sgx.yaml
  2. I open TCF virtual environment inside docker container with source _dev/bin/activate
  3. I run a test scenario with python3 Demo.py --input_dir ./json_requests/ --connect_uri "http://localhost:1947" work_orders/output.json
  4. Expected: A test passes and some output is shown in docker logs
  5. Actual: The test fails with the following error:
(_dev) root@77d67368567a:/project/TrustedComputeFramework/tests# python3 Demo.py --input_dir ./json_requests/ \
>         --connect_uri "http://localhost:1947" work_orders/output.json
[11:09:54 INFO    __main__] ***************** INTEL TRUSTED COMPUTE FRAMEWORK (TCF)*****************
[11:09:54 INFO    __main__] Load Json Directory from ./json_requests/
[11:09:54 INFO    __main__] Execute work order
[11:09:54 INFO    __main__] ------------------Input file name: json_01.json ---------------

[11:09:54 INFO    __main__] *********Request Json********* 
{"jsonrpc": "2.0", "method": "WorkerLookUp", "id": 1, "params": {"workerType": 1, "workOrderId": null}}

[11:09:54 INFO    __main__] **********Received Response*********
{'jsonrpc': '2.0', 'id': 1, 'result': {'totalCount': 0, 'lookupTag': '', 'ids': []}}

[11:09:54 INFO    __main__] ------------------Input file name: json_02.json ---------------

[11:09:54 WARNING STDERR] Traceback (most recent call last):
[11:09:54 WARNING STDERR]   File "Demo.py", line 256, in <module>
[11:09:54 WARNING STDERR] Main()
[11:09:54 WARNING STDERR]   File "Demo.py", line 253, in Main
[11:09:54 WARNING STDERR] LocalMain(config)
[11:09:54 WARNING STDERR]   File "Demo.py", line 92, in LocalMain
[11:09:54 WARNING STDERR] worker_id = response["result"]["ids"][0]
[11:09:54 WARNING STDERR] IndexError: list index out of range

and following output in docker logs:

tcf    | [10:24:04 INFO    __main__] TCS Listener started on port 1947
tcf    | [11:09:54 INFO    __main__] Received a new request from the client
tcf    | [11:09:54 INFO    tcs_worker_registry_handler] Received Worker request : WorkerLookUp
tcf    | [11:09:54 INFO    __main__] response[application/json]: {"jsonrpc": "2.0", "id": 1, "result": {"totalCount": 0, "lookupTag": "", "ids": []}}

Looks like a bug, though do let me know if I set something up incorrectly, please!

Thanks,
KSS

Docker build broken

Following the instructions in the BUILD.md file for a Docker build in simulation mode fails silently:
docker-compose finishes with BUILD SUCCESS, but when entering the container to run the tests, the work folder doesn't contain the expected virtual environment:

ubuntu@ip-XX-XX-XX-XX:~$ docker exec -it tcf bash
root@193258dac9df:/project/TrustedComputeFramework/tools/build# source _dev/bin/activate
bash: _dev/bin/activate: No such file or directory
root@193258dac9df:/project/TrustedComputeFramework/tools/build# ls
Makefile  _dev
root@193258dac9df:/project/TrustedComputeFramework/tools/build# ls _dev/
opt

This was introduced after commit 6eea60e since I could run the whole procedure successfully at that point in time.

Reproduced on several AWS EC2 instances (t2.XX) on Ubuntu 18.04 by @bertmiller and myself.

Synchronization bug in receipt flow in direct model

In synchronous work order execution, the receipt gets updated first, because the enclave manager updates the receipt after the work order execution has completed, while on the client side the work order request is submitted first and the receipt is created afterwards. Create one table for receipts and remove the condition in the update-receipt function that updates only if the receipt has already been created.

Optimize blockchain_connector/Dockerfile

In blockchain_connector/Dockerfile, /usr/local is copied from build_image to final_image. See if you can selectively copy only the needed folders/files into final_image. One probable solution is to copy these folders:

COPY --from=build_image /usr/local/bin /usr/local/bin
COPY --from=build_image /usr/local/lib/python3.6 /usr/local/lib/python3.6

Avalon should not require both solc and solcx

Recently Avalon was modified to require the solcx compiler.
However, it still requires the solc compiler as well (applications will not run unless both are installed).

Modify Avalon to require just solcx and not both solc and solcx.
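A sketch, assuming the py-solc-x package, of compiling contracts through solcx alone; solcx downloads and manages its own solc binaries, so the system solc package should not be needed (the contract path and compiler version below are illustrative):

import solcx

solcx.install_solc("0.5.15")                 # fetches a private copy of the compiler
compiled = solcx.compile_files(
    ["WorkerRegistry.sol"],                  # illustrative contract path
    output_values=["abi", "bin"],
    solc_version="0.5.15",
)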

RSA encryption key size inconsistent (code/docs)

This is in common/cpp/crypto:
In the README.md:

Asymmetric encryption | RSA-OAEP | 3072 | (1)

In the pkenc.h:

// *** RSA is not quantum resistant ***
// *** USE 3072 for long term security ***
const int RSA_KEY_SIZE = 2048;

I understand this is a transition issue, but it is still confusing...

Starting the AESM service fails, or is the documented command wrong?

Run aesm service on host machine

If you are behind a corporate proxy, uncomment and update the proxy type and aesm proxy lines in /etc/aesmd.conf:

proxy type = manual
aesm proxy = http://your-proxy:your-port

Start the AESM service on the host machine

sudo source /opt/intel/libsgx-enclave-common/aesm/aesm_service

The last step, "start the AESM service on the machine", is sudo source /opt/intel/libsgx-enclave-common/aesm/aesm_service?

If that is always wrong, should we use a command like the one below instead?

sudo service aesmd start

build failed

Setting up dirmngr (2.2.4-1ubuntu1.2) ...
Setting up build-essential (12.4ubuntu1) ...
Setting up gpg-wks-client (2.2.4-1ubuntu1.2) ...
Setting up gnupg (2.2.4-1ubuntu1.2) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Collecting json-rpc
Downloading https://files.pythonhosted.org/packages/93/a4/c7da3efc17da87042e077f4587ad018ff20ada0de32f0189d1d7ccf403f8/json_rpc-1.13.0-py2.py3-none-any.whl (41kB)
Collecting pyzmq
Downloading https://files.pythonhosted.org/packages/c9/11/bb28199dd8f186a4053b7dd94a33abf0c1162d99203e7ab32a6b71fa045b/pyzmq-19.0.1-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
Exception:
Traceback (most recent call last):
File "/usr/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/response.py", line 302, in _error_catcher
yield
File "/usr/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/response.py", line 384, in read
data = self._fp.read(amt)
File "/usr/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/filewrapper.py", line 60, in read
data = self.__fp.read(amt)
File "/usr/lib/python3.6/http/client.py", line 459, in read
n = self.readinto(b)
File "/usr/lib/python3.6/http/client.py", line 503, in readinto
n = self.fp.readinto(b)
File "/usr/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.6/ssl.py", line 1012, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.6/ssl.py", line 874, in read
return self._sslobj.read(len, buffer)
File "/usr/lib/python3.6/ssl.py", line 631, in read
v = self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 353, in run
wb.build(autobuilding=True)
File "/usr/lib/python3/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "/usr/lib/python3/dist-packages/pip/download.py", line 821, in unpack_url
hashes=hashes
File "/usr/lib/python3/dist-packages/pip/download.py", line 659, in unpack_http_url
hashes)
File "/usr/lib/python3/dist-packages/pip/download.py", line 882, in _download_http_url
_download_url(resp, link, content_file, hashes)
File "/usr/lib/python3/dist-packages/pip/download.py", line 603, in _download_url
hashes.check_against_chunks(downloaded_chunks)
File "/usr/lib/python3/dist-packages/pip/utils/hashes.py", line 46, in check_against_chunks
for chunk in chunks:
File "/usr/lib/python3/dist-packages/pip/download.py", line 571, in written_chunks
for chunk in chunks:
File "/usr/lib/python3/dist-packages/pip/utils/ui.py", line 139, in iter
for x in it:
File "/usr/lib/python3/dist-packages/pip/download.py", line 560, in resp_read
decode_content=False):
File "/usr/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/response.py", line 436, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/response.py", line 401, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/usr/lib/python3.6/contextlib.py", line 99, in exit
self.gen.throw(type, value, traceback)
File "/usr/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/response.py", line 307, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
ERROR: Service 'avalon-listener' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y -q python3-pip && pip3 install json-rpc pyzmq && echo "Install Common Python, Crypto, Listener and SDK packages\n" && pip3 install dist/.whl && echo "Remove unused packages from image\n" && apt-get autoremove --purge -y -q python3-pip && apt-get clean && rm -rf /var/lib/apt/lists/' returned a non-zero code: 2

Ethereum and Fabric worker and registry implementations differ

While documenting the Ethereum and Fabric SDKs, I noticed the implementations differ.

For these methods, the Ethereum return value is:
Transaction receipt on success or None on error.
For Fabric, the return value is:
ContractResponse.SUCCESS on success or ContractResponse.ERROR on error.

The methods where the implementation differs are:

Methods registry_add(), registry_update(), registry_set_status()
File sdk/avalon_sdk/ethereum/ethereum_worker_registry_list.py
File sdk/avalon_sdk/fabric/fabric_worker_registry_list.py

Methods worker_register(), worker_update(), worker_set_status()
File sdk/avalon_sdk/ethereum/ethereum_worker_registry.py
File sdk/avalon_sdk/fabric/fabric_worker_registry.py

Problems initializing Avalon

We have an Azure machine running Ubuntu 18.04 with Confidential Computing turned on.
We have also followed all the steps from the prerequisites guide.
Now we are trying to build in hardware mode, but inside Docker.
We had the problem of having to change /dev/sgx to /dev/isgx, and we have already fixed it.
After that we were missing the registration in IAS, but we have now been registered and have both our SPID and our its_api_key correctly set in the tcs_config.toml file.
And we are still getting the following error.

avalon-sgx-enclave-manager    | Initializing Avalon Intel SGX Enclave
avalon-sgx-enclave-manager    | 
avalon-sgx-enclave-manager    | Enclave path: /project/avalon/tc/sgx/trusted_worker_manager/enclave/build/lib/libtcf-enclave.signed.so
avalon-sgx-enclave-manager    | 
avalon-sgx-enclave-manager    | SPID: F8D5BD4E90E4121B34C2B2012BF9F3BF
avalon-sgx-enclave-manager    | 
avalon-lmdb                   | 2020-04-08 13:04:53,258 - INFO - Received a new request from the client
avalon-sgx-enclave-manager    | [13:04:54 ERROR   avalon_enclave_manager.enclave_manager] failed to initialize enclave; Failed to initialize quote in enclave constructor: SGX ERROR: SGX_ERROR_SERVICE_UNAVAILABLE
avalon-sgx-enclave-manager    | Traceback (most recent call last):
avalon-sgx-enclave-manager    |   File "/usr/local/lib/python3.6/dist-packages/avalon_enclave_manager/enclave_manager.py", line 447, in start_enclave_manager
avalon-sgx-enclave-manager    |     enclave_helper.initialize_enclave(config.get("EnclaveModule"))
avalon-sgx-enclave-manager    |   File "/usr/local/lib/python3.6/dist-packages/avalon_enclave_manager/avalon_enclave_helper.py", line 36, in initialize_enclave
avalon-sgx-enclave-manager    |     return avalon_enclave.initialize_with_configuration(enclave_config)
avalon-sgx-enclave-manager    |   File "/usr/local/lib/python3.6/dist-packages/avalon_enclave_manager/avalon_enclave_bridge.py", line 135, in initialize_with_configuration

Are we doing anything wrong?

The use of '0x' prefix for hex values is inconsistent

This concerns the JSON formats.
Sample values:
"workOrderId": "0xb152345d18e8a0f6"
"requesterId": "0x3456"

"workloadId": "68656172742d646973656173652d6576616c"
"sessionKeyIv": "88a301385631ee67a2924ea1",
etc.

It is a small thing, but easy to fix.
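Until the formats are made consistent, a trivial helper like the sketch below (purely illustrative, not existing Avalon code) lets a client compare the two styles:

def normalize_hex(value: str) -> str:
    # Strip an optional "0x" prefix so "0x3456" and "3456" compare equal
    return value[2:] if value.lower().startswith("0x") else value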

Inside Out File I/O functionality is broken

The Inside Out File I/O functionality is broken.
This is shown by the Workload Tutorial Phase 3 File I/O, which used to work:
https://github.com/hyperledger/avalon/tree/master/docs/workload-tutorial

In Phase 3 after modifying the "hello world" program, it is supposed to create a file and return a key.
It returns a key, but the file is never created.

After typing:
The decrypted response is supposed to be:
Decrypted response:
[{'index': 0, 'dataHash':
'D040AFA0D78276BAFD1360A6170D7EB53446731F25E0F77343A07EEE3628731A',
'data': 'Hello jack [1]', 'encryptedDataEncryptionKey': '', 'iv': ''}]

Instead it is:
Decrypted response:
[{'index': 0, 'dataHash':
'D040AFA0D78276BAFD1360A6170D7EB53446731F25E0F77343A07EEE3628731A',
'data': 'Hello jack []', 'encryptedDataEncryptionKey': '', 'iv': ''}]

And NO file is created (that is, no file /tmp/tutorial/jack and no file in directory /tmp/tutorial/ ).

I was debugging the program and found that iohandler_enclave.cpp TcfExecuteIoCommand() returns TCF_ERR_UNKOWN (-1), which is converted to 0xFFFFFFFF. Looking at TcfExecuteIoCommand, that appears to occur when ocall_Process() throws an unknown exception.

Encryption RSA key is "compact"

The encryption RSA key is generated and marshalled to a "pem" format at the TCS start:
JSON serialized worker info is {"workerType": [...], "encryptionKey": "-----BEGIN RSA PUBLIC KEY-----\nMIIBCgKCAQEAsq7Bbdkb/O7Z05zAqONVteRZC/G1WgwiTpNuscUjnlLs9yqvux3x\n6GJESSeQ7kEtz/lPcdqkWHXPAjpFh1HPqoqklkQr7tjCzwYgC9hA8L6byJPaErdC\nDJXmJkHwiBMDuRPTMGOBQuBIp74kf6w/1EhfYjgpgyhpZVHCk8aPfaydG8PgNwGu\niJCLvLBEXRfNSmDBTGgFSdMMp+owGF0xDbd8bgG3Mk3tEpbQMaDXjsJkQ1mn3P0s\nw1Znx8S9u9r0nSX6xFl2mKqAODEBdAf14pjmbOe9U0DEWHE/ZjaeNsgHXfVB/st2\nXeTYgiW2iheqnZ0SYmqGbqasB++3Sq0IiwIDAQAB\n-----END RSA PUBLIC KEY-----\n", "encryptionKeySignature": ""}}, "status": 1}
This is the format in which it is being subsequently propagated.
Alas, one cannot easily parse this key (try pasting it into a *.pem file and using it as input to openssl). That is because it only carries the internal part of the key (270 bytes). It misses the RSA OID, which, together with ASN.1/DER formatting, would make it 294 bytes long.
Now I may not know exactly what I am writing about, but it took me a hack to get interoperability with other standard crypto libraries (go/crypto, go/spacemonkeys, ...). A hack that is not available on StackExchange! So how about our young audience?
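For anyone hitting the same problem, a sketch using Python's cryptography package (assuming a version whose load_pem_public_key accepts PKCS#1 "BEGIN RSA PUBLIC KEY" blocks; the function name is illustrative) that re-wraps the key into the standard SubjectPublicKeyInfo PEM most libraries expect:

from cryptography.hazmat.primitives import serialization

def to_spki_pem(pkcs1_pem: bytes) -> bytes:
    # Parse the PKCS#1 key published in "encryptionKey" ...
    key = serialization.load_pem_public_key(pkcs1_pem)
    # ... and re-serialize it with the RSA OID and ASN.1/DER wrapping included
    return key.public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )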

Persistent response format needs to be consistent internally

The observation made here, #439 (comment), needs to be handled at the interface communicating via JRPC.

At this point the JRPC context is not available, but the response is persisted in the database. So there needs to be uniformity in such responses wherever they are handled in a similar manner. This will ease things when a component reads from the database to form a JRPC response.

The JRPC-facing component needs to take care of the format specified in the spec:
{
    "jsonrpc": "2.0",   // as per JSON RPC spec
    "id": ,             // the same as in input
    "error": {          // as per JSON RPC spec
        "code": ,
        "message": ,
        "data":
    }
}
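As a sketch of the target shape (the function and parameter names are illustrative, not existing Avalon code), the JRPC-facing component could build the error envelope like this:

def make_jrpc_error(request_id, code, message, data=None):
    # Build the error envelope in the exact format the spec requires
    return {
        "jsonrpc": "2.0",
        "id": request_id,   # the same id as in the input request
        "error": {
            "code": code,
            "message": message,
            "data": data,
        },
    }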

Fix pycodestyle issues reported by lint for common module

When pycodestyle is enabled by the run_lint script, the following issues are observed:
---- pycodestyle in common
common/python/crypto_utils/crypto_utility.py:23:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:33:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:41:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:53:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:60:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:68:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:88:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:116:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:165:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/crypto_utility.py:184:1: E302 expected 2 blank lines, found 1
common/python/crypto_utils/signature.py:114:80: E501 line too long (85 > 79 characters)
common/python/utility/jrpc_utility.py:19:1: E302 expected 2 blank lines, found 1
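Most of these are E302, which only asks for two blank lines before each top-level definition; a minimal illustration of the fix (file contents are hypothetical):

import logging


# Two blank lines above each top-level definition satisfy E302
def first_helper():
    pass


def second_helper():
    pass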

tests/Demo.py request times out

| 2020-05-20 08:07:57,488 - INFO - b'L\nwo-timestamps'
avalon-sgx-enclave-manager | File /tmp/account_ledger.json WRITE has been completed
avalon-sgx-enclave-manager |
avalon-lmdb | 2020-05-20 08:07:57,488 - INFO - ['L', 'wo-timestamps']
avalon-sgx-enclave-manager | Avalon Intel SGX Enclave initialized.

[08:11:06 INFO main] Request Json
{"jsonrpc": "2.0", "method": "WorkerLookUp", "id": 1, "params": {"workerType": 1, "workOrderId": null}}

[08:11:16 ERROR avalon_sdk.http_client.http_jrpc_client] no response from server: timed out
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/http_client/http_jrpc_client.py", line 62, in _postmsg
response = opener.open(request, timeout=10)
File "/usr/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/usr/lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/lib/python3.6/urllib/request.py", line 1353, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.6/urllib/request.py", line 1328, in do_open
r = h.getresponse()
File "/usr/lib/python3.6/http/client.py", line 1356, in getresponse
response.begin()
File "/usr/lib/python3.6/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.6/http/client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
[08:11:16 WARNING STDERR] Traceback (most recent call last):
[08:11:16 WARNING STDERR] File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/http_client/http_jrpc_client.py", line 62, in _postmsg
[08:11:16 WARNING STDERR] response = opener.open(request, timeout=10)
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/urllib/request.py", line 526, in open
[08:11:16 WARNING STDERR] response = self._open(req, data)
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/urllib/request.py", line 544, in _open
[08:11:16 WARNING STDERR] '_open', req)
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
[08:11:16 WARNING STDERR] result = func(*args)
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/urllib/request.py", line 1353, in http_open
[08:11:16 WARNING STDERR] return self.do_open(http.client.HTTPConnection, req)
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/urllib/request.py", line 1328, in do_open
[08:11:16 WARNING STDERR] r = h.getresponse()
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/http/client.py", line 1356, in getresponse
[08:11:16 WARNING STDERR] response.begin()
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/http/client.py", line 307, in begin
[08:11:16 WARNING STDERR] version, status, reason = self._read_status()
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/http/client.py", line 268, in _read_status
[08:11:16 WARNING STDERR] line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
[08:11:16 WARNING STDERR] File "/usr/lib/python3.6/socket.py", line 586, in readinto
[08:11:16 WARNING STDERR] return self._sock.recv_into(b)
[08:11:16 WARNING STDERR] socket.timeout: timed out
[08:11:16 WARNING STDERR]
[08:11:16 WARNING STDERR] During handling of the above exception, another exception occurred:
[08:11:16 WARNING STDERR] Traceback (most recent call last):
[08:11:16 WARNING STDERR] File "Demo.py", line 295, in
[08:11:16 WARNING STDERR] Main()
[08:11:16 WARNING STDERR] File "Demo.py", line 291, in Main
[08:11:16 WARNING STDERR] local_main(config)
[08:11:16 WARNING STDERR] File "Demo.py", line 121, in local_main
[08:11:16 WARNING STDERR] response = uri_client._postmsg(input_json_str1)
[08:11:16 WARNING STDERR] File "/usr/local/lib/python3.6/dist-packages/avalon_sdk/http_client/http_jrpc_client.py", line 75, in _postmsg
[08:11:16 WARNING STDERR] raise MessageException('no response from server: {0}'.format(err))
[08:11:16 WARNING STDERR] avalon_sdk.http_client.http_jrpc_client.MessageException: no response from server: timed out
