DCL is a public permissioned ledger framework for certification of device models. The ledger is based on Cosmos SDK and Tendermint.

License: Apache License 2.0


distributed-compliance-ledger's Introduction

Distributed Compliance Ledger


If you are interested in how to build and run the project locally, please look at README-DEV.

Please note that the only officially supported platform is currently Linux. It is recommended to develop and deploy the App on Ubuntu 18.04 or Ubuntu 20.04.

Overview

DC Ledger is a public permissioned ledger which can be used for two main use cases:

  • ZB/Matter compliance certification of device models
  • Public key infrastructure (PKI)

More information about use cases can be found in DC Ledger Overview and Use Case Diagrams.

DC Ledger is based on Tendermint and Cosmos SDK.

DC Ledger is a public permissioned ledger in the following sense:

In order to send write transactions to the ledger, you need to:

  • have a private/public key pair;
  • have an Account created on the ledger via an ACCOUNT transaction (see Use Case Txn Auth):
    • the Account stores the public part of the key,
    • the Account has an associated role; the role is used for authorization policies;
  • sign every transaction with the private key.
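As an illustration of the account model above, here is a minimal Go sketch (hypothetical, simplified types; not the actual dcld implementation) of an Account that stores a public key and roles used by authorization policies:

package auth

// Hypothetical, simplified types illustrating the Account model described above.
type Role string

const (
	Vendor  Role = "Vendor"
	Trustee Role = "Trustee"
)

// Account stores the public part of the key pair and the associated roles.
type Account struct {
	Address string // bech32-encoded account address
	PubKey  string // public key announced via the ACCOUNT transaction
	Roles   []Role // roles used by authorization policies
}

// HasRole is used by authorization policies to check whether the signer may
// perform a given action.
func (a Account) HasRole(r Role) bool {
	for _, role := range a.Roles {
		if role == r {
			return true
		}
	}
	return false
}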

Main Components

Pool of Nodes

  • A network of Tendermint-based validator nodes (Validators and Observers) maintaining the ledger.
  • Every validator node (dcld binary) runs DC Ledger application code (based on Cosmos SDK) implementing the use cases.
  • See the proposed deployment in deployment and deployment-detailed.
  • See the recommended design for DCL MainNet deployment on AWS in aws deployment.

Node Types

  • Full Node: contains a full replication of data (ledger, state, etc.):
    • Validator Node (VN): a full node participating in consensus protocol (ordering transactions).
    • Sentry Node: a full node that doesn't participate in consensus and shields the Validator node, representing it to the rest of the network, as one of the ways of DDoS protection.
      • Private Sentry Node: a full node used to connect only other Validator or Sentry nodes; it should not be accessed by clients.
      • Public Sentry Node: a full node used to connect other external full nodes (possibly Observer nodes).
    • Observer Node (ON): a full node that doesn't participate in consensus. It should be used to receive read/write requests from clients.
  • Light Client Proxy Node: doesn't contain a full replication of data. Can be used as a proxy to untrusted Full nodes for single-value query requests sent via CLI or Tendermint RPC. It will verify all state proofs automatically.
  • Seed Node: provides a list of peers which a node can connect to.


Clients

For interactions with the pool of nodes (sending write and read requests).

Every client must be connected to a Node (either Observer or Validator).

If there is no trusted node for connection, a Light Client Proxy can be used. A Light Client Proxy can be connected to multiple nodes and will verify the state proofs for every single value query request.

Please note that multi-value queries don't support state proofs and should be sent to trusted nodes only.

Please make sure that TLS is enabled in gRPC, REST or Light Client Proxy for secure communication with a Node.
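As an illustration of the TLS note above, here is a minimal Go sketch (the endpoint is a placeholder) of opening a TLS-secured gRPC connection to a node:

package main

import (
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Placeholder endpoint; use the gRPC address of a trusted Observer/Validator node.
	const nodeAddr = "observer.example.com:9090"

	creds := credentials.NewTLS(&tls.Config{MinVersion: tls.VersionTLS12})
	conn, err := grpc.Dial(nodeAddr, grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()
	// The connection can now be used with the generated query/tx gRPC clients.
}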

How To: Node Operators

Add an Observer node to existing network

See Running Node. There are two options to add an Observer node:

Please take into account running-node-in-existing-network.md.

Add a Validator node to existing network

A recommended way for deployment and client connection: diagram, diagram-detailed and diagram-aws.

See Running Node for possible patterns and instructions.

Please take into account running-node-in-existing-network.md.

Upgrade all nodes in a pool to a new version of DCL application

DCL application can be simultaneously updated on all nodes in the pool without breaking consensus. See Pool Upgrade and Pool Upgrade How To for details.

Run a local pool of nodes in Docker

This is for development purposes only.

See Run local pool section in README-DEV.md.

How To: Users

CLI

  • The same dcld binary as a Node.
  • A full list of all CLI commands can be found in transactions.md.
  • CLI can be used for write and read requests.
  • Please configure the CLI before using it (see how-to.md).
  • If there is no trusted Observer or Validator node to connect the CLI to, then a Light Client Proxy can be used.

Light Client Proxy

Should be used if there are no trusted Observer or Validator nodes to connect to.

It can act as a proxy for the CLI or for direct requests made from code via Tendermint RPC.

Please note that the CLI can use a Light Client Proxy only for single-value query requests. A Full Node (Validator or Observer) should be used for multi-value query requests and write requests.

Please note that multi-value queries don't support state proofs and should be sent to trusted nodes only.

See Run Light Client Proxy for details on how to run it.

REST

gRPC

Tendermint RPC and Light Client

  • Tendermint RPC is exposed by every running node at port 26657. See https://docs.cosmos.network/v0.45/core/grpc_rest.html#tendermint-rpc.
  • Tendermint RPC supports state proofs. Tendermint's Light Client library can be used to verify them, so if the Light Client API is used, it's possible to communicate with non-trusted nodes.
  • Please note that multi-value queries don't support state proofs and should be sent to trusted nodes only.
  • There are currently no DC Ledger specific API libraries for various platforms and languages, but they may be provided in the future.
  • The following libraries can be used as light clients:
  • Refer to this doc to see how to subscribe to Tendermint WebSocket-based events and/or query application components.
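As an illustration of the Tendermint RPC and WebSocket notes above, here is a minimal Go sketch (endpoint and query are placeholders; exact client APIs depend on the Tendermint version) of subscribing to NewBlock events:

package main

import (
	"context"
	"log"
	"time"

	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

func main() {
	// Placeholder endpoint; Tendermint RPC is exposed on port 26657 by default.
	client, err := rpchttp.New("tcp://localhost:26657", "/websocket")
	if err != nil {
		log.Fatal(err)
	}
	// The client must be started before subscribing to WebSocket events.
	if err := client.Start(); err != nil {
		log.Fatal(err)
	}
	defer client.Stop()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Subscribe to new block events over WebSocket.
	events, err := client.Subscribe(ctx, "example-subscriber", "tm.event = 'NewBlock'")
	if err != nil {
		log.Fatal(err)
	}
	for e := range events {
		log.Printf("received event for query: %s", e.Query)
	}
}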

Instructions

After the CLI or REST API is configured and an Account with an appropriate role is created, the following instructions from how-to.md can be used for every role (see Use Case Diagrams):

  • Trustee
    • propose new accounts
    • approve/reject new accounts
    • propose revocation of accounts
    • approve revocation of accounts
    • propose X509 root certificates
    • approve/reject X509 root certificates
    • propose revocation of X509 root certificates
    • approve revocation of X509 root certificates
    • propose pool upgrade
    • approve/reject pool upgrade
    • propose disabling of a validator node
    • approve/reject disabling of a validator node
  • Vendor
    • publish/update vendor info
    • publish/update/delete device model info
    • publish/update/delete device model version
    • publish/update/delete PKI Revocation Distribution Point
    • publish/remove X509 certificates
  • Certification Center
    • certify or revoke certification of device models
    • update/delete compliance info
  • Vendor Admin
    • publish/update vendor info for any vendor
  • Node Admin
    • add a new Validator node
    • disable a Validator node
    • enable a Validator node

Useful Links

distributed-compliance-ledger's People

Contributors

abdulbois, abdulla-ashurov, adenishchenko, akarabashov, andkononykhin, ankur325, artemkaaas, ashcherbakov, askolesov, clapre, denisrybas, dependabot[bot], electrocucaracha, happyhq, hawk248, jcps07, kgoncharov, lazizcodesatdsr, spivachuk, toktar


distributed-compliance-ledger's Issues

State Proofs: Light client for lists

Make sure that we can trust a result in cases where multiple results are returned (for example get-all-models).

Options:
O1: Do not use Cosmos queriers to return multiple entries. Have an additional index in the store (<all-items-key>:<list of individual item keys>), so that get_all_xxx can be queried from the store (and hence with common state proofs) and returns a list of individual item keys instead of the full items. Individual items can then be queried by get_xxx with the given item key as input (again with a common state proof).
Note: need to think about pagination support.
O2: Support multi-proof at the Cosmos SDK level (consider a contribution).
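A minimal Go sketch of option O1 (hypothetical store keys and helper names; the real implementation would use the module's KVStore and a proper encoding) showing the additional index of individual item keys kept under a single store key:

package keeper

// Hypothetical sketch of option O1. A KVStore-like interface is assumed.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

// allModelKeysKey plays the role of <all-items-key> from the description above.
var allModelKeysKey = []byte("all-model-keys")

// appendModelKey adds an individual item key to the index whenever an item is stored.
// get_all_xxx then returns this index (covered by a single-value state proof), and
// clients fetch each item by its key via get_xxx, each with its own state proof.
func appendModelKey(store KVStore, itemKey []byte) {
	index := store.Get(allModelKeysKey)
	index = append(append(index, itemKey...), '\n') // naive newline-separated index
	store.Set(allModelKeysKey, index)
}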

O2: Multi-Proof support at cosmos-sdk
The current Design: https://github.com/zigbee-alliance/distributed-compliance-ledger/blob/master/docs/design/multiproofs-design.md

Some notes (can be a bit outdated):

  1. https://github.com/cosmos/cosmos-sdk/blob/master/client/context/query.go#L102 - light client is used when QueryState is used for querying.
  2. If QueryWithData is called (as in all examples such as https://github.com/cosmos/sdk-tutorials/blob/master/nameservice/x/nameservice/client/rest/query.go#L19), then the light client is not used and the result is obtained directly from the corresponding business logic (keeper):
    https://github.com/cosmos/sdk-tutorials/blob/master/nameservice/x/nameservice/keeper/keeper.go#L35 or https://github.com/cosmos/sdk-tutorials/blob/master/nameservice/x/nameservice/keeper/keeper.go#L105
    https://github.com/cosmos/cosmos-sdk/blob/master/store/iavl/store.go#L193
    https://github.com/tendermint/iavl/blob/master/immutable_tree.go#L153
  3. There is a related issue: cosmos/cosmos-sdk#5241
  4. It can be fixed if

[Create key for account] Need to add the length limitation for the name field and the password in the keys creation process.

Severity: enhancement

Build info: 1b99283

Overview:
Need to add a length limitation for the name and password fields in the key creation process.
Currently, the name and password values are not limited.
The following limitations are suggested (to be discussed):

  • the maximum allowed length for the name field is around 50-255 characters;
  • the maximum allowed length for the password field is around 255 characters.
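A minimal Go sketch of such a length validation (the limits are the hypothetical values suggested above and still need to be discussed):

package validation

import "fmt"

// Hypothetical limits; the exact values are still to be discussed.
const (
	maxNameLen     = 255
	maxPasswordLen = 255
)

// validateKeyParams rejects overly long name/password values during key creation.
func validateKeyParams(name, password string) error {
	if len(name) > maxNameLen {
		return fmt.Errorf("name must not exceed %d characters", maxNameLen)
	}
	if len(password) > maxPasswordLen {
		return fmt.Errorf("password must not exceed %d characters", maxPasswordLen)
	}
	return nil
}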

By @julia-aksyanova

Certify Compliance: Incremental Updates

[Add model info] The error `error decoding transaction` is a result of the Add model info request.

Severity: critical

Build info: e5bf2a4

Steps to reproduce:

  1. Enter "add a new model info". It does not matter the role of the user: vendor (valid for adding a model info) or not (invalid - Trustee, TestHouse, ZBCertificationCenter, NodeAdmin). Example command: dclcli tx modelinfo add-model --vid=1 --pid=1 --name="Device #1" --description="Device Description" --sku="SKU12FS" --firmware-version="1.0" --hardware-version="2.0" --tis-or-trp-testing-completed=true --from=jack
  2. Send the command.
  3. Look at the received result.

Actual results:
The error is returned:

{
  "height": "0",
  "txhash": "5617069476F284751BD4D74701BA2AF844AD1A3D8E5F988286CFD82A7CB918DF",
  "code": 2,
  "raw_log": "{\"codespace\":\"sdk\",\"code\":2,\"message\":\"error decoding transaction\"}"
}


Expected results:
If the user role is valid (vendor) for adding model info, the request is executed successfully and the info is added to the system.
If the user role is invalid (Trustee, TestHouse, ZBCertificationCenter, NodeAdmin) for adding model info, the request finishes with an error about the invalid user role.

Finalize Device Type mapping for CID field

Right now, the CID attribute in the deviceModel indicates the type of device model; however, it looks like the mapping for CID is not finalized.

So, for example, would this be the Device ID for endpoint 0 of the device? These are defined in doc 10-6050 and can be found here: https://groups.zigbee.org/wg/all_members/document/15013.

Please confirm with DCL folks, thanks.

Originally posted by @chrisdecenzo in https://github.com/CHIP-Specifications/connectedhomeip-spec/pull/400#discussion_r526133636

Create DCL Overview document for CHIP specification

Right now, the CHIP specification links to https://github.com/zigbee-alliance/distributed-compliance-ledger/blob/master/docs/DCL-Overview.pdf. However, the CHIP spec should not point at the master branch of a repo but rather at a specific version tag that CHIP would be pinned to (or, even better, at a spec document hosted by the Zigbee Alliance).

Originally posted by @BroderickCarlin in https://github.com/CHIP-Specifications/connectedhomeip-spec/pull/400#r521741889

[Get account with address] The error is returned if the `get-account-with-address` command contains the address of any proposed account.

Severity: major

Build info: 1b99283

Steps to reproduce:

  1. Enter get-account-with-address command which contains the address of any proposed account.
    Example of this command: zblcli query auth account --address=<value of the proposed account>
  2. Send the command.
  3. Look at the received result.

Actual results:
The error is returned.
Please see the attachment for more details about the received error.

Expected results:
Info about the proposed account should be returned.

Additional info:
There are no problems when the address of any approved account is used in the get-account-with-address request.

By @julia-aksyanova

Upgrade: Design and experiments for Upgrade and Migration support

It should be based on https://github.com/cosmos/cosmos-sdk/tree/master/cosmovisor and https://docs.cosmos.network/master/modules/upgrade/

The main idea:

  1. New releases (packages) are hosted on GitHub (https://docs.github.com/en/actions/publishing-packages-with-github-actions/about-packaging-with-github-actions).
  2. The validator process is run under the cosmovisor shim, which can perform updates.
  3. Trustees send and approve (as usual) an Upgrade transaction specifying:
  • the update time (time or block height),
  • a link to the package and its checksum.
  4. If state migration is needed (there are breaking changes), then developers create and register a special Handler.
  5. When cosmovisor sees that it's update time (it will be the same "time" for all nodes, for example the same block height), it performs the update (potentially downloading the binary). The Migration Handler is also executed.

What needs to be done:

  1. Update transaction implementation (PROPOSE_UPDATE and APPROVE_UPDATE). The current approach in https://docs.cosmos.network/master/modules/upgrade/ is based on https://docs.cosmos.network/master/modules/gov/, and it looks like we cannot use the gov module as it depends on staking and tokens.
  2. We may have to make some minor changes in the https://docs.cosmos.network/master/modules/upgrade/ module for the same reason (need to get rid of the gov dependency?).
  3. Integrate cosmovisor, update instructions.
  4. Test that everything works as expected.
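A minimal Go sketch (the upgrade name is a placeholder, and the handler signature differs across Cosmos SDK versions) of registering an upgrade handler as described in https://docs.cosmos.network/master/modules/upgrade/:

package app

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper"
	upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"
)

// registerUpgradeHandlers wires a named upgrade to its migration logic.
// "vNEXT" is a placeholder upgrade name that would be agreed in the
// PROPOSE_UPDATE/APPROVE_UPDATE transactions; newer SDK versions use a
// different handler signature (with a module VersionMap).
func registerUpgradeHandlers(k upgradekeeper.Keeper) {
	k.SetUpgradeHandler("vNEXT", func(ctx sdk.Context, plan upgradetypes.Plan) {
		// Run state migrations here if the release contains breaking changes.
	})
}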

Notes

WBS and estimates: https://cloud.dsr-corporation.com/index.php/s/6dkyjZwnLcGMMwG, "update" tab.

[Foreign keys] Issue with compliance

Steps to reproduce:

  • Create model info
  • Create compliance for the model info
  • Remove the model info
  • Go to compliance page, click on the compliance

Result:

  • Page not found error in console

Expected result:

  • Good solution: implement logical deletion for model infos
  • Really bad solution: show the corresponding error message

Support multiple Models Info with the same VID/PID but different versions

Acceptance criteria:

  • Vendors need to be able to publish new versions of the same model (same VID/PID combination).
  • New versions are identified by a combination of firmware_version and hardware_version.
  • Every new version is published in a non-certified state, so Test Results publishing and compliance certification need to specify firmware_version and hardware_version in addition to pid and vid.
  • hardware_version can become an optional field in the future.

Technical tasks:

  • vid/pid/hw_ver/fw_ver needs to be the key for ModelInfos, compliance tests and certification results.
  • There needs to be an index from vid/pid to a list of versions.
  • GET_MODEL_INFO by VID/PID returns a list of all versions of the model.
  • There needs to be a way to get a particular version of the model info by specifying vid, pid, firmware_version and hardware_version. It can be either a new command or a query parameter of the existing GET_MODEL_INFO command.
  • GET_ALL_MODEL_INFO and GET_VENDOR_MODEL_INFO need to return the firmware_version and hardware_version fields.
  • The firmware_version and hardware_version fields need to be included in ADD_TEST_RESULT and GET_TEST_RESULT.
  • The firmware_version and hardware_version fields need to be included in CERTIFY_MODEL, REVOKE_MODEL_CERTIFICATION, GET_CERTIFIED_MODEL, GET_REVOKED_MODEL.
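A minimal Go sketch (hypothetical helpers; the real store key format may differ) of the composite vid/pid/fw_ver/hw_ver key and the vid/pid index described in the tasks above:

package types

import "fmt"

// ModelVersionKey builds the composite store key for a particular model version.
// Hypothetical format; the actual application may encode keys differently.
func ModelVersionKey(vid, pid uint16, fwVer, hwVer string) []byte {
	return []byte(fmt.Sprintf("ModelVersion:%d:%d:%s:%s", vid, pid, fwVer, hwVer))
}

// ModelVersionsIndexKey maps a vid/pid pair to the list of its known versions,
// so GET_MODEL_INFO by VID/PID can return all versions of the model.
func ModelVersionsIndexKey(vid, pid uint16) []byte {
	return []byte(fmt.Sprintf("ModelVersions:%d:%d", vid, pid))
}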

[Multiproofs] Strange limit flag behaviour

Actual behaviour:
A multiproof might contain additional service records that are used internally for validation but are not returned to the user. Those records are counted when the limit flag is specified.
Expected behaviour:
Service records shouldn't be counted when the limit flag is specified.

Extend the Model Info Schema to match the Spec

Acceptance Criteria:
Make sure that the DCL ModelInfo fields match the spec from https://github.com/CHIP-Specifications/connectedhomeip-spec/blob/master/src/service_device_management/DistributedComplianceLedger.adoc
In particular,

  • Allow editing of versions for the same VID/PID (will be done in #75)
  • Add the following fields:
    • ProductName
    • OtaBlob
    • CommissioningCustomflow
    • CommissioningCustomflowUrl
    • CommissioningModeInitialStepsHint
    • CommissioningModeSecondaryStepsHint
    • ReleaseNotesUrl
    • UserManualUrl
    • SupportUrl
    • ChipBlob
  • Rename the following fields:
    • FirmwareVersion -> SoftwareVersionNumber + SoftwareVersionString
    • HardwareVersion -> HardwareVersionNumber + HardwareVersionString
    • Custom -> VendorBlob
  • Make sure all fields use the same naming convention as in the Spec (camel case)

[Certify model] Need confirmation message before override certificate

Steps to reproduce:

  1. Add to the system model-info with valid vid and pid (dclcli tx modelinfo add-model...).
  2. Add test-result for this model: use the same vid and pid parameters (dclcli tx compliancetest add-test-result...).
  3. Certify this model using one account with the ZBCertificationCenter role: use the same vid and pid parameters (dclcli tx compliance certify-model...).
  4. Try to certify the same pair of vid and pid using the same account with the ZBCertificationCenter role (from step 3).

Actual result:
The existing certification info is overridden without a confirmation message.

Expected result:
A confirmation message should be shown before overriding the certification info. The confirmation message should contain: "override the existing certification info [y/N]: " (the exact wording should be discussed).

[General] Need to correct typos in several places.

Build info:
1b99283

Overview:
Need to correct typos in the following documents:

By @julia-aksyanova

[PKI] Organize storage-related logic in the x/pki app module better

Currently, in the x/pki app module, the Keeper handles the addition of unique certificate keys separately from the addition of certificates. However, unique certificate keys are registered at the same moment as certificates are added / proposed; unique certificate keys are effectively just additional indexes of certificates. So it makes sense to combine the addition / proposal of a certificate and the addition of its unique certificate key into one operation (the addition / proposal of the certificate, correspondingly).

It also makes sense to move the logic currently located in the addChildCertificateEntry and removeChildCertificateEntry methods of the Handler to the Keeper.
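A minimal Go sketch (hypothetical, simplified types; not the actual x/pki code) of combining certificate addition and unique-key registration into one Keeper operation:

package keeper

// Hypothetical, simplified types illustrating the refactoring described above:
// the unique certificate key is registered inside the same Keeper operation
// that stores the certificate.
type Certificate struct {
	Issuer       string
	SerialNumber string
	PemCert      string
}

type Keeper struct {
	certs      map[string]Certificate
	uniqueKeys map[string]bool
}

func NewKeeper() *Keeper {
	return &Keeper{certs: map[string]Certificate{}, uniqueKeys: map[string]bool{}}
}

func uniqueCertificateKey(c Certificate) string {
	return c.Issuer + "/" + c.SerialNumber
}

// AddCertificate stores the certificate and registers its unique key in one operation.
func (k *Keeper) AddCertificate(storeKey string, c Certificate) {
	k.certs[storeKey] = c
	k.uniqueKeys[uniqueCertificateKey(c)] = true
}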

[Certify model] Need to add the length limitation for the `reason` field in the `certify-model` command.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a length limitation for the reason field in the certify-model command (zblcli tx compliance certify-model --vid=<uint16> --pid=<uint16> --certification-type=<zb> --certification-date=<rfc3339 encoded date> --reason=<reason_info> --from=<account>).
Currently, the reason field is not limited.
The following limitation is suggested (to be discussed):

  • the maximum allowed length for the reason field is about 255-15000 characters.

If the mentioned parameter contains 20000 characters, you get an error as a result of the command execution.

[Certify model] Need to remove the possibility to certify model several times from different accounts.

Severity:
major

Build info:
0505970

Precondition:
The system should have two or more users with the ZBCertificationCenter role.

Steps to reproduce:

  1. Add to the system model-info with valid vid and pid (zblcli tx modelinfo add-model...).
  2. Add test-result for this model: use the same vid and pid parameters (zblcli tx compliancetest add-test-result...).
  3. Certify this model using one account with the ZBCertificationCenter role: use the same vid and pid parameters (zblcli tx compliance certify-model...).
  4. Try to certify the same pair of vid and pid:
    a) using the same account with the ZBCertificationCenter role (from step 3);
    b) using another account with the ZBCertificationCenter role (not the same as in step 3).
  5. Look at the result of the command execution.

Actual results:
The second certification command for the same vid and pid pair is successfully executed.

Expected results:
a) When the same account with the ZBCertificationCenter role (from step 3) is used for the second request, a confirmation message should appear: "override the existing certification info [y/N]: " (the exact wording should be discussed).
b) When another account with the ZBCertificationCenter role (not the same as in step 3) is used for the second request, an error message should appear in the results: "Model with vid= and pid= is already certified on the ledger." (the exact wording should be discussed).

Validators: Revocation

[Add test result] Need to add the length limitation for the `test-result` field in the `add-test-result` command.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a length limitation for the test-result field in the add-test-result command (zblcli tx compliancetest add-test-result --vid= --pid= --test-result= --test-date= --from=).
Currently, the test-result field is not limited.
The following limitation is suggested (to be discussed):

  • the maximum allowed length for the test-result field is around 255-2500 characters.

If the mentioned parameter contains 5000 characters, you get an error as a result of the command execution.

As a user, I need to be able to get the information about the created account even if the account is revoked

Currently, revoked accounts are removed from the state, but all the entities created by such an account are still considered valid, now with an unknown owner/creator.
As a user, I need to be able to get the information about the creator account even if the account is revoked.

Options:

  • O1: Add a new entity RevokedAccount. Create a new Revoked Account when deleting an Account. Support queries for RevokedAccount(s).
  • O2: Add a revoked flag for the current Account entity. Toggle the flag on revocation instead of removing the account.

Notes

  1. We need to make sure that it's not possible to send a transaction signed by a revoked account.
  2. We need to support re-adding of a revoked account (via a usual propose/approve account requests). So, the chosen option must support it.
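A minimal Go sketch of option O2 (hypothetical types; not the actual auth module code) with a revoked flag that blocks transactions signed by a revoked account while keeping the account queryable:

package auth

import "errors"

// Hypothetical sketch of option O2: keep the Account entity and toggle a flag
// on revocation instead of deleting the record.
type Account struct {
	Address string
	Revoked bool
}

var ErrAccountRevoked = errors.New("transaction signed by a revoked account")

// checkSigner rejects transactions signed by revoked accounts (note 1 above)
// while still letting queries resolve the creator of existing entities.
func checkSigner(a Account) error {
	if a.Revoked {
		return ErrAccountRevoked
	}
	return nil
}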

[Certify model] Change validation for available min and max values for the `certification-date` field.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a limitation for the certification-date field in the certify-model command (zblcli tx compliance certify-model --vid=<uint16> --pid=<uint16> --certification-type=<zb> --certification-date=<rfc3339 encoded date> --from=<account>).
Currently, the certification-date field has the following limits:

  • the minimum allowed value is 1000-01-01T00:00:00Z;
  • the maximum allowed value is 9999-12-31T23:59:59Z.

It is suggested to change these limits to the following values (to be discussed):

  • the minimum allowed value is 1900-01-01T00:00:00Z;
  • the maximum allowed value is 1999-12-31T23:59:59Z.

Also, it may be better to add validation that the entered value for the certification-date parameter is not in the future: the certification-date parameter should hold a date in the past, so the user should not be able to enter a date from the future (tomorrow, next week/year/century, etc.).
This remark can affect the suggestion above (the maximum allowed value of 1999-12-31T23:59:59Z).
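A minimal Go sketch of the suggested validation (the minimum value and the not-in-the-future rule are the suggestions above and still need to be discussed):

package validation

import (
	"errors"
	"time"
)

// Hypothetical minimum value, matching the suggestion above.
var minCertificationDate = time.Date(1900, 1, 1, 0, 0, 0, 0, time.UTC)

// validateCertificationDate checks that the value is a valid RFC 3339 timestamp,
// not earlier than the minimum, and not in the future.
func validateCertificationDate(value string) error {
	t, err := time.Parse(time.RFC3339, value)
	if err != nil {
		return err
	}
	if t.Before(minCertificationDate) {
		return errors.New("certification-date is earlier than the minimum allowed value")
	}
	if t.After(time.Now().UTC()) {
		return errors.New("certification-date must not be in the future")
	}
	return nil
}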

[Add model info] Need to add the length limitation for the `name`, `description`, `sku`, `firmware-version`, `hardware-version`, `custom` fields in the add model info command.

Severity:
enhancement

Build info:
1b99283

Overview:
Need to add length limitations for the name, description, sku, firmware-version, hardware-version and custom fields in the add model info command (zblcli tx modelinfo add-model --vid=<uint16> --pid=<uint16> --name=<string> --description=<string or path> --sku=<string> --firmware-version=<string> --hardware-version=<string> --tis-or-trp-testing-completed=<bool> --from=<account>).
Currently, the name and description values are not limited.

The following limitations are suggested (to be discussed):

  • the maximum allowed length for the name field is around 50-255 characters;
  • the maximum allowed length for the description field is around 255-1000 characters;
  • the maximum allowed length for the sku field is around 50-255 characters;
  • the maximum allowed length for the firmware-version field is around 50-255 characters;
  • the maximum allowed length for the hardware-version field is around 50-255 characters;
  • the maximum allowed length for the custom field is around 255 characters.

If each of the mentioned parameters contains 2500 characters, you get an error as a result of the command execution.

By @julia-aksyanova

[Revoke model] Need to add the length limitation for the `reason` field in the `revoke-model` command.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a length limitation for the reason field in the revoke-model command (zblcli tx compliance revoke-model --vid= --pid= --certification-type= --revocation-date= --reason=<reason_info> --from=).
Currently, the reason field is not limited.
The following limitation is suggested (to be discussed):

  • the maximum allowed length for the reason field is about 255-15000 characters.

Fix TestPkiDemo

This test periodically hangs. There is also this comment in the code:

// FIXME: GetX509CertChain calls within this test may fail with EOF on an attempt to read the response
// from REST API server and so leave the resulting Certificates.Items slice empty.
// The issue seems to be caused by the connection being closed due to timeout while REST API server
// is gathering the reply which consists of multiple replies from the pool.
// However, net/http.Client does not report the request as timed out while actually it seems to be so.

Correct handling of errors from CLIContext.queryStore

Currently, the client implementation treats an error returned by CLIContext.QueryStore as entry absence, the same way it treats a nil result of CLIContext.QueryStore. However, only a nil result of CLIContext.QueryStore actually means the entry is absent. An error is returned in case of a query failure (e.g. a proof verification failure).

In each place where QueryStore is used, separate the handling of an error from the handling of a nil result and handle the error properly.
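A minimal Go sketch of the intended handling (the querier interface and error are hypothetical; the exact CLIContext.QueryStore signature depends on the Cosmos SDK version):

package client

import "errors"

// storeQuerier abstracts CLIContext.QueryStore; in the real client it returns the raw
// value, the block height, and an error (exact signature depends on the SDK version).
type storeQuerier interface {
	QueryStore(key []byte, storeName string) ([]byte, int64, error)
}

var errEntryNotFound = errors.New("entry not found")

// getEntry separates query failures (e.g. proof verification errors) from entry absence.
func getEntry(q storeQuerier, key []byte, storeName string) ([]byte, error) {
	res, _, err := q.QueryStore(key, storeName)
	if err != nil {
		// A query failure is an error, not an absent entry.
		return nil, err
	}
	if res == nil {
		// Only a nil result means the entry is absent.
		return nil, errEntryNotFound
	}
	return res, nil
}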

By @spivachuk

Add OTA URL to Model Info

Add the following fields to MODEL_INFO:

  • Version - not related to OTA URL, but a general field in ModelInfo identifying the version and possible changes in the Model Info format. Optional field.
  • OTA_URL - URL of the OTA
  • OTA_checksum - checksum of the OTA
  • OTA_checksum_type - type of the OTA checksum

All the fields are optional.
If one of them is set, then the other two must also be set.

The only editable field here is OTA_URL. The other fields can be set only during the creation of the Model Info.
OTA_URL can be edited only if OTA_checksum and OTA_checksum_type are already set.

There is no need to check the content of the fields in the scope of this task (assume these are just strings).
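A minimal Go sketch (hypothetical helper names) of the all-or-none rule and the OTA_URL editability rule described above:

package validation

import "errors"

// validateOTAFields enforces the all-or-none rule: if any of the OTA fields is set,
// all of them must be set.
func validateOTAFields(otaURL, otaChecksum, otaChecksumType string) error {
	anySet := otaURL != "" || otaChecksum != "" || otaChecksumType != ""
	allSet := otaURL != "" && otaChecksum != "" && otaChecksumType != ""
	if anySet && !allSet {
		return errors.New("OTA_URL, OTA_checksum and OTA_checksum_type must be set together")
	}
	return nil
}

// canEditOTAURL reflects the rule that OTA_URL is editable only when the checksum
// fields were already set during Model Info creation.
func canEditOTAURL(existingChecksum, existingChecksumType string) bool {
	return existingChecksum != "" && existingChecksumType != ""
}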

[Revoke model] Change validation for available min and max values for the `revocation-date` field.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a limitation for the revocation-date field in the revoke-model command (zblcli tx compliance revoke-model --vid=<uint16> --pid=<uint16> --certification-type=<zb> --revocation-date=<rfc3339 encoded date> --from=<account>).
Currently, the revocation-date field has the following limits:

  • the minimum allowed value is 1000-01-01T00:00:00Z;
  • the maximum allowed value is 9999-12-31T23:59:59Z.

It is suggested to change these limits to the following values (to be discussed):

  • the minimum allowed value is 1900-01-01T00:00:00Z;
  • the maximum allowed value is 1999-12-31T23:59:59Z.

Also, it may be better to add validation that the entered value for the revocation-date parameter is not in the future: the revocation-date parameter should hold a date in the past, so the user should not be able to enter a date from the future (tomorrow, next week/year/century, etc.).
This remark can affect the suggestion above (the maximum allowed value of 1999-12-31T23:59:59Z).

[Revoke model] Revoke command is successfully executed for invalid pair of `vid` and `pid` parameters.

Severity:
major

Build info:
0505970

Steps to reproduce:

  1. Add a model-info with valid vid and pid to the system (zblcli tx modelinfo add-model...).
  2. Add a test-result for this model, using the same vid and pid parameters (zblcli tx compliancetest add-test-result...).
  3. Certify this model, using the same vid and pid parameters (zblcli tx compliance certify-model...).
  4. Revoke the model (zblcli tx compliance revoke-model...), but:
  • make a typo in the vid and/or pid parameters;
  • or enter any non-existing vid and/or pid parameters.
  5. Look at the results of the revoke command execution.

Actual results:
The revoke command is successfully executed.

Expected results:
The error should appear if the revoke command contains non-existing vid and pid parameters: "No model info associated with the vid= and pid= on the ledger.".

Additional info:
If the user performs revocation with an invalid pair of vid and pid parameters and then decides to certify the same invalid pair of vid and pid, the certification is successfully executed.
But if you check the info about the model (zblcli query modelinfo model --vid=<value> --pid=<value>), there is no info about this model.

[PKI] Revise REST API design of PKI

In the current design of PKI REST API there are a number of routes that potentially conflict with each other. These are as follows:

  • GET /pki/certs/revoked and GET /pki/certs/{subject},
  • GET /pki/certs/root and GET /pki/certs/{subject},
  • GET /pki/certs/proposed/root and GET /pki/certs/{subject}/{subjectKeyID},
  • GET /pki/certs/revoked/root and GET /pki/certs/{subject}/{subjectKeyID}.

These conflicts are currently avoided in the implementation due to a properly chosen order of route registration in x/pki/client/rest/rest.go. (See the comments in this file for details.)

Still, the current design of the PKI REST API does not follow commonly used REST API design principles and should be revised.
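A minimal sketch in Go (placeholder handlers; gorilla/mux is assumed, which matches routes in registration order) of the registration order that currently avoids the conflicts listed above:

package rest

import (
	"net/http"

	"github.com/gorilla/mux"
)

// Placeholder handler; the real handlers live in x/pki/client/rest.
func notImplemented(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusNotImplemented)
}

// registerRoutes registers the static paths before the parameterized catch-all paths,
// because gorilla/mux matches routes in the order they were registered.
func registerRoutes(r *mux.Router) {
	r.HandleFunc("/pki/certs/proposed/root", notImplemented).Methods("GET")
	r.HandleFunc("/pki/certs/revoked/root", notImplemented).Methods("GET")
	r.HandleFunc("/pki/certs/revoked", notImplemented).Methods("GET")
	r.HandleFunc("/pki/certs/root", notImplemented).Methods("GET")
	r.HandleFunc("/pki/certs/{subject}/{subjectKeyID}", notImplemented).Methods("GET")
	r.HandleFunc("/pki/certs/{subject}", notImplemented).Methods("GET")
}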

Support building of packages in CI/CD workflow

  • Packages can be stored in GitHub.
  • DCL should be built for Ubuntu 18.04.
    - DCL CLI should be built for Ubuntu, Mac and Windows.
  • Have the latest master package built once a PR is merged, and store it in the GitHub package registry.
  • Support building of packages for releases.

Need to separate list of accounts in ledger and list of keys in local wallet

We have one list of users. This list is an intersection of the key list in the CLI wallet and the account list in the ledger. It was done this way for the first demo. To use the UI in production we need to:

  • Split the user list into an account list and a key list. We already have sketches in code.
  • Each account record should indicate whether the corresponding key is in the wallet or not.
  • Each key record may indicate whether the corresponding account is in the ledger or not.

[Propose add account] It should be impossible to send the `propose-add-account` that does not contain the `role` parameter.

Severity: major

Build info: 1b99283

Steps to reproduce:

  1. Enter a propose-add-account command which contains:
  • valid address and pubkey parameters;
  • a valid from parameter;
  • no role parameter.

Example of this command: zblcli tx auth propose-add-account --address=<valid_value> --pubkey=<valid_value> --from=<valid_value>

  2. Send the command.
  3. Look at the received result.

Actual results:
The account is successfully added and its status is "proposed".
Also, other Trustee accounts can approve it without any problems.

Expected results:
It should be impossible to send a propose-add-account command that does not contain the role parameter.

By @julia-aksyanova

[General] Need to remove the possibility to send commands with duplicate parameters.

Severity:
enhancement

Build info:
1b99283

Overview:
Need to remove the possibility to send commands with duplicated parameters.
For example, you can send any of the following commands and, as a result, you do not get an error (the command is sent successfully):

  • zblcli tx auth propose-add-account --address=cosmos1k3dyy5pg7xqw3xevz68tzr2l3ahl2y4jce67cc --pubkey=cosmospub1addwnpepq2cj7rnfv3mnwqapmu9hejpfz68e7v73j8fkuumy2ly0pryg7nxqxyk3fgq --pubkey=cosmospub1addwnpepqf5rrfasymwua6dcm69cwvpdgtuffteyl8qrhz7te7mejmecryydu26hnlm --roles=Vendor --from=alice;
  • zblcli tx auth propose-add-account --address=cosmos1k3dyy5pg7xqw3xevz68tzr2l3ahl2y4jce67cc --address=cosmos12u9laj032j49n05vkem5hkhpq0353mj7c8hax0 --pubkey=cosmospub1addwnpepq2cj7rnfv3mnwqapmu9hejpfz68e7v73j8fkuumy2ly0pryg7nxqxyk3fgq --roles=Vendor --from=alice;
  • zblcli tx auth propose-add-account --address=cosmos1k3dyy5pg7xqw3xevz68tzr2l3ahl2y4jce67cc --pubkey=cosmospub1addwnpepq2cj7rnfv3mnwqapmu9hejpfz68e7v73j8fkuumy2ly0pryg7nxqxyk3fgq --roles=Vendor --from=alice --roles=TestHouse;
  • zblcli tx modelinfo add-model --vid=3499 --pid=35634 --cid=34 --name="1" --description="1" --sku=1 --firmware-version=1 --hardware-version=1 --tis-or-trp-testing-completed=false --from=julia-vendor --cid=432;
  • zblcli tx modelinfo add-model --vid=3499 --pid=35634 --cid=34 --name="1" --description="1" --sku=1 --firmware-version=1 --hardware-version=1 --tis-or-trp-testing-completed=false --from=julia-vendor --cid=432 --vid=234 --pid=9909 --vid=203;
  • etc.

By @julia-aksyanova

Get rid of boilerplate types defined in integration_tests/utils/types.go

integration_tests/utils/types.go contains a number of struct types repeating the shapes of struct types from application modules but with some distinctions. These types are used for unmarshalling by encoding/json of data previously marshalled by tendermint/go-amino. The distinctions are as follows:

  • types in integration_tests/utils use string fields where the corresponding types from application modules use 64-bit integer fields (because Amino codec intentionally marshals 64-bit integers to JSON as strings, while encoding/json does not perform the backward conversion),
  • types in integration_tests/utils omit fields of interface types (because Amino codec marshals such fields as interface values, i.e. value-type tuples, but encoding/json does not handle such representation).

Consider ways to get rid of these boilerplate types in the integration tests. As an option, integration tests might use the Amino codec (as the node/client implementation does), registering the necessary interface / concrete types in the codec. The code from the implementation can be re-used for registering types in the codec.
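One possible direction (an assumption, not what the tests currently do, and it only addresses the 64-bit integer distinction, not the interface-field one) is to rely on encoding/json's ",string" tag option so that integers marshalled as JSON strings by Amino can be read back into the original integer-typed structs:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Hypothetical example: the ",string" option lets encoding/json read an int64 that
// Amino marshalled to JSON as a string, avoiding a parallel string-typed test struct.
type ExampleRecord struct {
	ID   int64  `json:"id,string"`
	Name string `json:"name"`
}

func main() {
	var rec ExampleRecord
	// Amino intentionally encodes 64-bit integers as JSON strings.
	data := []byte(`{"id":"42","name":"example"}`)
	if err := json.Unmarshal(data, &rec); err != nil {
		log.Fatal(err)
	}
	fmt.Println(rec.ID) // 42
}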

By @spivachuk

[General] The new input line should not be presented as the continuation of the error when the `from` parameter contains a non-existing account.

Severity: minor

Build info: 1b99283

Steps to reproduce:

  1. Enter any command in the CLI which contains the from parameter: for example, add a new account, approve an account, add model info, add test info, etc.
  2. Enter an incorrect (non-existing) account in the from parameter.
  3. Send the command.
  4. Look at the received result.

Actual results:
The error is returned. The new input line is presented as a continuation of the error.

Expected results:
The error is returned, and the new input prompt should begin on a new line.

By @julia-aksyanova

[Add test result] Need to change min and max available value for the `test-date` field.

Severity:
enhancement

Build info:
0505970

Overview:
Need to add a limitation for the test-date field in the add-test-result command (zblcli tx compliancetest add-test-result --vid= --pid= --test-result= --test-date= --from=).
Currently, the test-date field has the following limits:

  • the minimum allowed value is 1000-01-01T00:00:00Z;
  • the maximum allowed value is 9999-12-31T23:59:59Z.

It is suggested to change these limits to the following values (to be discussed):

  • the minimum allowed value is 1900-01-01T00:00:00Z;
  • the maximum allowed value is 1999-12-31T23:59:59Z.

Configurable auth rules

The current rules for authentication are hard-coded. We need to be able to set permissions for every action:

  • which roles can perform the action;
  • how many approvals are needed (currently it's 2/3 of the total number).
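A minimal Go sketch (hypothetical types and names) of what a configurable rule could look like, with the approval threshold expressed as a fraction such as 2/3:

package auth

// ActionRule is a hypothetical configurable authorization rule: which roles may
// perform an action and how many approvals it requires.
type ActionRule struct {
	AllowedRoles         []string
	ApprovalsNumerator   int
	ApprovalsDenominator int
}

// RequiredApprovals computes the approval threshold for a given total number of
// approvers, rounding up (e.g. 2/3 of the total).
func (r ActionRule) RequiredApprovals(total int) int {
	return (total*r.ApprovalsNumerator + r.ApprovalsDenominator - 1) / r.ApprovalsDenominator
}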
