
hashgraph / hedera-services


Crypto, token, consensus, file, and smart contract services for the Hedera public ledger

License: Apache License 2.0

Dockerfile 0.09% Shell 0.20% Python 0.10% Java 83.11% HTML 1.31% Batchfile 0.02% Solidity 1.31% Kotlin 0.03% PureBasic 13.78% ANTLR 0.06%

hedera-services's Introduction


Hedera Services

Implementation of the Platform and the services offered by nodes in the Hedera public network.

Overview of child modules

  • platform-sdk/ - the basic Platform – documentation
  • hedera-node/ - implementation of Hedera services on the Platform – documentation

Getting Started

Refer to the Quickstart Guide for how to work with this project.

Solidity

Hedera Contracts support pragma solidity <=0.8.9.

Support

If you have a question on how to use the product, please see our support guide.

Contributing

Contributions are welcome. Please see the contributing guide to see how you can get involved.

Code of Conduct

This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.

License

Apache License 2.0

hedera-services's People

Contributors

anighanta, artemananiev, cesaragv, cody-littley, daniel-k-ivanov, dependabot[bot], edward-swirldslabs, failfmi, georgiyazovaliiski, hendrikebbers, imalygin, iwsimon, jeffreydallas, jjohannes, kimbor, litt3, ljianghedera, lpetrovic05, lukelee-sl, mhess-swl, nathanklick, neeharika-sompalli, netopyr, povolev15, qianswirlds, qnswirlds, rbair23, shemnon, timo0, tinker-michaelj


hedera-services's Issues

It is possible to create an account without keys

Summary of the defect
It is possible to create an account with, for example, an empty key list.

Suggested resolution
Unlike with immutable files/topics/contracts, there is no reasonable use case for an account without keys. Make it invalid to submit a CryptoCreate whose key does not contain at least one Ed25519 key.
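
A minimal sketch of the check this resolution calls for. The HKey, Ed25519, and HKeyList types below are hypothetical stand-ins for the protobuf Key/KeyList types; the idea is simply to reject a CryptoCreate unless at least one Ed25519 primitive key appears somewhere in the (possibly nested) key structure.

import java.util.List;

// Illustrative only: a hypothetical, simplified key model standing in for the
// protobuf Key/KeyList types that the real Services code validates.
public class CryptoCreateKeyCheck {

    interface HKey {}
    record Ed25519(byte[] publicKey) implements HKey {}
    record HKeyList(List<HKey> keys) implements HKey {}

    // A CryptoCreate would be rejected unless its key contains at least one
    // Ed25519 primitive key somewhere in the (possibly nested) structure.
    static boolean hasAtLeastOneEd25519(HKey key) {
        if (key instanceof Ed25519) {
            return true;
        }
        if (key instanceof HKeyList list) {
            return list.keys().stream().anyMatch(CryptoCreateKeyCheck::hasAtLeastOneEd25519);
        }
        return false;
    }

    public static void main(String[] args) {
        HKey emptyKeyList = new HKeyList(List.of());
        HKey validKey = new HKeyList(List.of(new Ed25519(new byte[32])));
        System.out.println(hasAtLeastOneEd25519(emptyKeyList)); // false -> reject
        System.out.println(hasAtLeastOneEd25519(validKey));     // true  -> accept
    }
}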

HAPI: Add query to provide the Services Version number

Summary
Add a query to return the versions of both the Hedera gRPC API (HAPI) and the Services codebase itself, where these versions respect the semver standard of <major>.<minor>.<patch>. (When the HAPI version advances, the Services version must also advance; but the converse need not hold.)

Suggested resolution

  • Add a protobuf SemanticVersion type.
  • Implement a GetVersionInfo query under a new NetworkService scoped to operations targeting the network/nodes.
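
A rough sketch, using hypothetical types, of the shape the answer could take: one semver triple for the HAPI protobufs and one for the Services build. The real work is a protobuf SemanticVersion message plus a query handler in the new NetworkService.

// Hypothetical value types for illustration only; not the protobuf definitions.
public class VersionInfoSketch {

    record SemanticVersion(int major, int minor, int patch) {
        @Override public String toString() {
            return major + "." + minor + "." + patch;
        }
    }

    record VersionInfo(SemanticVersion hapiProtoVersion, SemanticVersion servicesVersion) {}

    public static void main(String[] args) {
        // Illustrative values only.
        var info = new VersionInfo(new SemanticVersion(0, 5, 0), new SemanticVersion(0, 5, 1));
        System.out.println("HAPI proto: " + info.hapiProtoVersion());
        System.out.println("Services:   " + info.servicesVersion());
    }
}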

Impact

  • The FeeSchedule.json needs to be updated with correct usage prices for the GetVersionInfo query.

Update Address Books to include NodeID & NodeAccountID & NodeCertHash

Summary
Add metadata for node ids, node account ids, and TLS certificate hashes to the address book and node details system files.

Suggested resolution
Update the NodeAddressBook protobuf type to support enhancing the address book file (0.0.101) with

  1. A nodeId for correlating information when multiple address entries refer to the same node; and,
  2. A strongly-typed nodeAccountId for the node's account id (so the memo is freed up); and,
  3. A certHash so clients can validate the cert they receive from a node during TLS negotiation.

Include the nodeId in node details (0.0.102) as well.
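
Purely as an illustration (the field and type names below are placeholders, not the final protobuf definitions), an enriched address-book entry would carry the three new pieces of metadata alongside the existing endpoint information:

import java.util.List;

// Hypothetical sketch of the enriched address-book entry described above.
public class AddressBookSketch {

    record AccountId(long shard, long realm, long num) {
        @Override public String toString() { return shard + "." + realm + "." + num; }
    }

    record NodeAddressEntry(
            long nodeId,             // correlates multiple entries for the same node
            AccountId nodeAccountId, // strongly typed, so the memo field is freed up
            byte[] nodeCertHash,     // lets clients validate the cert seen during TLS negotiation
            String ipAddress,
            int port) {}

    record NodeAddressBook(List<NodeAddressEntry> entries) {}

    public static void main(String[] args) {
        var entry = new NodeAddressEntry(3L, new AccountId(0, 0, 3),
                new byte[48] /* placeholder cert hash */, "35.0.0.1", 50211);
        System.out.println("node" + entry.nodeId() + " -> " + entry.nodeAccountId());
    }
}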

Impact

  • When these files are updated and S3 bucket names are changed to use the nodeId rather than the account number, we must coordinate the change with Mirror Node.
  • DevOps tools need to support the new NodeAddressBook field types.

Doc Requirements

  • HAPI doc should explain the use of the new fields.

Disable SC ability to transfer money into account protected by receiverSigRequired

Summary of the defect
The receiverSigRequired setting on an account is ignored for transfers triggered by a Solidity transaction.

Suggested resolution
Track the non-contract addresses that receive hbar during a Solidity execution; if any have receiverSigRequired, and the governing transaction is missing an active signature from the associated Key, revert the Solidity transaction.
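
A minimal sketch of that tracking, using hypothetical types rather than the real EVM integration: record every non-contract address credited during the Solidity execution, then revert if any credited account requires a receiver signature that the governing transaction does not supply.

import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative only; not the actual Services/EVM integration code.
public class ReceiverSigCheck {

    private final Set<String> creditedAccounts = new HashSet<>();

    // Called for each non-contract address that receives hbar during execution.
    void trackCredit(String accountAddress, long amountTinybars) {
        if (amountTinybars > 0) {
            creditedAccounts.add(accountAddress);
        }
    }

    // receiverSigRequired: does the account demand a signature to receive funds?
    // hasActiveSignature: did the governing transaction carry a valid signature
    // from that account's key?
    boolean shouldRevert(Predicate<String> receiverSigRequired,
                         Predicate<String> hasActiveSignature) {
        return creditedAccounts.stream()
                .anyMatch(a -> receiverSigRequired.test(a) && !hasActiveSignature.test(a));
    }

    public static void main(String[] args) {
        var check = new ReceiverSigCheck();
        check.trackCredit("0.0.1234", 100L);
        // 0.0.1234 requires a receiver signature but did not sign -> revert.
        System.out.println(check.shouldRevert(a -> true, a -> false)); // true
    }
}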

COST_ANSWER query for getInfo on invalid account id returns 0

Summary of the defect
When the target account does not exist, a CryptoGetInfo query with response type COST_ANSWER has status OK (and cost 0) instead of INVALID_ACCOUNT_ID.

Suggested resolution
Make CryptoGetInfo response statuses consistent with the contract, file, and topic Get*Info queries.

Sbh estimate for FileUpdate does not reflect actual delta in storage used

Summary of the defect
Given a file X with b0 bytes and a remaining lifetime of h0 hours, the storage usage is:

sbh(X) = b0 * h0

If I submit a FileUpdate that changes X to have b1 bytes and a lifetime of h1 hours, the cost of this txn should be based on the delta in storage usage:

Δsbh(X) = max(0, b1 * h1 - b0 * h0)

However, the current FileFeeBuilder usage estimate for a FileUpdate ignores the baseline usage of b0 * h0, and effectively computes:

Δsbh(X) = b1 * h1 

These semantics only make sense for a FileCreate; they are clearly wrong for a FileUpdate.

Suggested resolution
Fix by refactoring related parts of the FeeBuilder library under com.hedera.services.fees.
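
A worked example of the intended fee basis, following the formulas above; this is illustrative code, not the FeeBuilder implementation.

// Only the positive delta in storage usage should be charged, not the full
// post-update usage.
public class SbhDelta {

    static long sbhDelta(long oldBytes, long oldLifetimeHours,
                         long newBytes, long newLifetimeHours) {
        return Math.max(0, newBytes * newLifetimeHours - oldBytes * oldLifetimeHours);
    }

    public static void main(String[] args) {
        // A file grows from 1000 bytes to 1500 bytes over an unchanged 720h lifetime:
        // the charge should cover the 360,000 new byte-hours, not the full 1,080,000.
        System.out.println(sbhDelta(1000, 720, 1500, 720)); // 360000
        // Shrinking the file yields no chargeable delta; the estimate is clamped at 0.
        System.out.println(sbhDelta(1000, 720, 500, 720));  // 0
    }
}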

Change all instances of claims to liveHashes

Documentation requirements
Our usage of "claim" is inconsistent with the ubiquitous grammar of decentralized identity standards and architectures. We should replace all uses of "claim" with "livehash" in order to:

  1. Avoid confusion.
  2. Make the "liveness" semantics explicit.
  3. Emphasize that the information referenced is a hash, not itself a credential.

Use swirlds Throttle class for API throttling control

Summary
Replace the legacy throttle components with the Swirlds Platform Throttle class. Make its "leaky bucket" model explicit in the properties that are used to configure throttles, which should:

  • be NETWORK-scoped and not node-scoped; and,
  • support "overflow" buckets; and,
  • allow arbitrary assignment of HAPI operations to buckets.

Suggested resolution
Create a new property family with the prefixes hapi.throttling.{op,config,defaults} to define buckets, each with a capacity in ops-per-second. Then map operations to buckets using hapi.throttling.op.{bucket,capacityRequired} properties. Support backwards compatibility with a hapi.throttling.config.useLegacyProps property which, if true, creates a set of buckets and op-mappings equivalent to whatever legacy throttles are configured. (A leaky-bucket sketch follows below.)
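
For illustration, a minimal leaky-bucket sketch of the model these properties would configure; the real implementation is the Swirlds Platform Throttle class, and the numbers below are placeholders.

// Each bucket has a network-wide capacity in ops-per-second; each HAPI
// operation is mapped to a bucket with a required capacity.
public class LeakyBucketSketch {

    private final double capacityOpsPerSec;
    private double used;
    private long lastDecisionNanos;

    LeakyBucketSketch(double capacityOpsPerSec) {
        this.capacityOpsPerSec = capacityOpsPerSec;
        this.lastDecisionNanos = System.nanoTime();
    }

    // Returns true if the bucket has room for an op needing capacityRequired units.
    synchronized boolean allow(double capacityRequired) {
        long now = System.nanoTime();
        double elapsedSec = (now - lastDecisionNanos) / 1_000_000_000.0;
        lastDecisionNanos = now;
        // The bucket "leaks" at its configured rate.
        used = Math.max(0.0, used - elapsedSec * capacityOpsPerSec);
        if (used + capacityRequired <= capacityOpsPerSec) {
            used += capacityRequired;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        var cryptoBucket = new LeakyBucketSketch(100.0); // e.g. 100 ops/sec network-wide
        System.out.println(cryptoBucket.allow(1.0));     // true until the bucket fills
    }
}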

Impact
When the throttles in file 0.0.121 are updated, hapi.throttling.config.useLegacyProps=false should be set along with the new properties.

Need to fix hedera services' nightly regression failure for DUPLICATE_TRANSACTION

Occasionally we see a DUPLICATE_TRANSACTION failure during nightly regression; the error messages are below. This needs to be fixed in the test client's transaction generation logic.


2020-05-12 06:42:54.121 INFO   102  ProviderRun - Finished initializion for provider run...
2020-05-12 06:43:04.328 INFO   125  ProviderRun - 17 minutes left in test - 220 ops submitted so far (11 pending).
java.lang.AssertionError: Wrong precheck status! expected:<OK> but was:<DUPLICATE_TRANSACTION>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:834)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at com.hedera.services.bdd.spec.transactions.HapiTxnOp.submitOp(HapiTxnOp.java:176)
    at com.hedera.services.bdd.spec.HapiSpecOperation.execFor(HapiSpecOperation.java:165)
    at com.hedera.services.bdd.spec.utilops.grouping.ParallelSpecOps.lambda$submitOp$1(ParallelSpecOps.java:51)
    at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1807)
    at java.base/java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1799)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
2020-05-12 06:43:50.153 WARN   181  HapiSpecOperation - 'UmbrellaRedux' - ParallelSpecOps{numSubOps=1} failed!
java.lang.AssertionError: Problem(s) with sub-operation(s): HapiMessageSubmit{sigs=7, node=0.0.3, topic=RandomTopicCreation-topic10, message=Optional[Hello Hedera]} :: Wrong precheck status! expected:<OK> but was:<DUPLICATE_TRANSACTION>
    at org.junit.Assert.fail(Assert.java:88) ~[junit-4.12.jar:4.12]
    at com.hedera.services.bdd.spec.utilops.grouping.ParallelSpecOps.submitOp(ParallelSpecOps.java:64) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiSpecOperation.execFor(HapiSpecOperation.java:165) ~[classes/:?]
    at com.hedera.services.bdd.spec.utilops.CustomSpecAssert.allRunFor(CustomSpecAssert.java:38) ~[classes/:?]
    at com.hedera.services.bdd.spec.utilops.CustomSpecAssert.allRunFor(CustomSpecAssert.java:47) ~[classes/:?]
    at com.hedera.services.bdd.spec.utilops.ProviderRun.submitOp(ProviderRun.java:144) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiSpecOperation.execFor(HapiSpecOperation.java:165) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiApiSpec.exec(HapiApiSpec.java:189) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiApiSpec.run(HapiApiSpec.java:149) ~[classes/:?]
    at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720) ~[?:?]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSync(HapiApiSuite.java:231) ~[classes/:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSuite(HapiApiSuite.java:123) ~[classes/:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSuiteSync(HapiApiSuite.java:116) ~[classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.lambda$runSuitesSync$10(SuiteRunner.java:265) ~[classes/:?]
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176) ~[?:?]
    at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) ~[?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) ~[?:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runSuitesSync(SuiteRunner.java:266) ~[classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.lambda$runTargetCategories$6(SuiteRunner.java:240) ~[classes/:?]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) [?:?]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654) [?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runTargetCategories(SuiteRunner.java:240) [classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runCategories(SuiteRunner.java:209) [classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.main(SuiteRunner.java:182) [classes/:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:567) ~[?:?]
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282) [exec-maven-plugin-1.6.0.jar:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
2020-05-12 06:43:50.159 WARN   181  HapiSpecOperation - 'UmbrellaRedux' - ProviderRun{} failed!
java.lang.AssertionError: Operation 'ParallelSpecOps{numSubOps=1}' :: Problem(s) with sub-operation(s): HapiMessageSubmit{sigs=7, node=0.0.3, topic=RandomTopicCreation-topic10, message=Optional[Hello Hedera]} :: Wrong precheck status! expected:<OK> but was:<DUPLICATE_TRANSACTION>
    at org.junit.Assert.fail(Assert.java:88) ~[junit-4.12.jar:4.12]
    at com.hedera.services.bdd.spec.utilops.CustomSpecAssert.allRunFor(CustomSpecAssert.java:40) ~[classes/:?]
    at com.hedera.services.bdd.spec.utilops.CustomSpecAssert.allRunFor(CustomSpecAssert.java:47) ~[classes/:?]
    at com.hedera.services.bdd.spec.utilops.ProviderRun.submitOp(ProviderRun.java:144) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiSpecOperation.execFor(HapiSpecOperation.java:165) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiApiSpec.exec(HapiApiSpec.java:189) ~[classes/:?]
    at com.hedera.services.bdd.spec.HapiApiSpec.run(HapiApiSpec.java:149) ~[classes/:?]
    at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:720) ~[?:?]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSync(HapiApiSuite.java:231) ~[classes/:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSuite(HapiApiSuite.java:123) ~[classes/:?]
    at com.hedera.services.bdd.suites.HapiApiSuite.runSuiteSync(HapiApiSuite.java:116) ~[classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.lambda$runSuitesSync$10(SuiteRunner.java:265) ~[classes/:?]
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176) ~[?:?]
    at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) ~[?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) ~[?:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runSuitesSync(SuiteRunner.java:266) ~[classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.lambda$runTargetCategories$6(SuiteRunner.java:240) ~[classes/:?]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) [?:?]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654) [?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) [?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) [?:?]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) [?:?]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [?:?]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) [?:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runTargetCategories(SuiteRunner.java:240) [classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.runCategories(SuiteRunner.java:209) [classes/:?]
    at com.hedera.services.bdd.suites.SuiteRunner.main(SuiteRunner.java:182) [classes/:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:567) ~[?:?]
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282) [exec-maven-plugin-1.6.0.jar:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
2020-05-12 06:43:50.163 INFO   214  HapiApiSpec - 'UmbrellaRedux' - final status: FAILED!
2020-05-12 06:43:50.163 INFO   150  UmbrellaRedux - -------------- RESULTS OF UmbrellaRedux SUITE --------------
2020-05-12 06:43:50.164 INFO   152  UmbrellaRedux - Spec{name=UmbrellaRedux, status=FAILED}
2020-05-12 06:43:50.171 INFO   214  SuiteRunner - ============== sync run results ==============
2020-05-12 06:43:50.171 INFO   217  SuiteRunner - UmbrellaRedux                    :: 0/1 suites ran OK
2020-05-12 06:43:50.172 INFO   223  SuiteRunner -   --> Problems in suite 'UmbrellaRedux' :: Spec{name=UmbrellaRedux, status=FAILED}
Environment: Ubuntu 18.04.2 LTS

TLS on GRPC

Summary
Enable server authentication and encryption if and only if requested by the client, per the reference. Continue to allow plaintext connections. (Note that Council members will ultimately be required to apply for and own the TLS certificates.)

Suggested resolution
Configure Services to optionally bind a second gRPC server with TLS enabled on (default) port 50212. Generate and deploy self-signed certificates for mainnet, testnet, and our internal networks (perf, staging, etc) subject to:

  • Unique certificates for each node
  • The CN in each cert should have the form Hedera-<nodeId> where nodeId matches the id logged by the platform at startup (and the nodeId in the Address Book, etc).
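
A hedged sketch (not the actual Services bootstrap code) of running a plaintext gRPC endpoint alongside a second TLS-enabled one on port 50212, using grpc-java's ServerBuilder.useTransportSecurity; the certificate and key paths, and the plaintext port, are placeholders.

import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.io.File;
import java.io.IOException;

public class DualGrpcServers {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Plaintext endpoint stays available for clients that do not request TLS.
        Server plaintext = ServerBuilder.forPort(50211)
                // .addService(...) -- register the HAPI services here
                .build()
                .start();

        // Second endpoint with server authentication and encryption; the cert's
        // CN would follow the Hedera-<nodeId> convention described above.
        Server tls = ServerBuilder.forPort(50212)
                .useTransportSecurity(new File("hedera.crt"), new File("hedera.key"))
                // .addService(...) -- same services, TLS transport
                .build()
                .start();

        plaintext.awaitTermination();
        tls.awaitTermination();
    }
}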

Could not delete a file with only one of access keys

Summary of the defect
FileDelete should have the semantics of a "revocation service": that is, a storage service that requires one or more keys to sign when creating content, but permits removal with any single key's signature.

Suggested resolution
Update HederaKeyActivation#isActive to accept a KeyActivationCharacteristics which customizes this signing requirement for the top-level WACL of an active FileDelete transaction.
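
For illustration only (the types below are hypothetical, not the Services classes), the difference between the default activation rule and the proposed FileDelete characteristics looks like this:

import java.util.List;
import java.util.function.Predicate;

// The top-level WACL is modeled here as a flat list of keys: active for
// FileDelete if ANY one key has a valid signature, unlike creation, which
// requires all of them.
public class WaclActivationSketch {

    enum Characteristics { ALL_KEYS_MUST_SIGN, ANY_ONE_KEY_SUFFICES }

    static boolean isActive(List<String> waclKeys,
                            Predicate<String> hasValidSig,
                            Characteristics characteristics) {
        return switch (characteristics) {
            case ALL_KEYS_MUST_SIGN -> waclKeys.stream().allMatch(hasValidSig);
            case ANY_ONE_KEY_SUFFICES -> waclKeys.stream().anyMatch(hasValidSig);
        };
    }

    public static void main(String[] args) {
        List<String> wacl = List.of("keyA", "keyB", "keyC");
        Predicate<String> onlyKeyBSigned = "keyB"::equals;
        System.out.println(isActive(wacl, onlyKeyBSigned, Characteristics.ALL_KEYS_MUST_SIGN));   // false
        System.out.println(isActive(wacl, onlyKeyBSigned, Characteristics.ANY_ONE_KEY_SUFFICES)); // true
    }
}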

Doc requirements
Update the HAPI documentation to reflect the above semantics.

Review what smart contract properties should be inherited when one contract is creating another contract.

Summary

First, when a "parent" contract creates a "child" contract using the Solidity new keyword, propagate the parent contract properties to the child, including the:

  • expiration
  • autorenew period
  • admin key
  • memo field
  • proxy account id

Second, extend the TransactionRecord protobuf to support a list of created contract ids.

Suggested resolution

  • Scan the map of creations (creating address → list of created addresses) in the EVM ProgramResult upon successful execution of a Solidity transaction, and customize all newly created contract accounts with the parent properties.
  • Add the creations to the TransactionContext for later storage in the record.
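
An illustrative sketch with hypothetical types: after a successful execution, copy the parent's properties onto each newly created child and keep the created ids for the record.

import java.util.List;

public class ChildContractCustomizer {

    record ContractProps(long expiry, long autoRenewSecs, String adminKey,
                         String memo, String proxyAccountId) {}

    // The child inherits expiration, autorenew period, admin key, memo,
    // and proxy account id from its creating contract.
    static ContractProps inheritFromParent(ContractProps parent) {
        return new ContractProps(parent.expiry(), parent.autoRenewSecs(),
                parent.adminKey(), parent.memo(), parent.proxyAccountId());
    }

    public static void main(String[] args) {
        var parent = new ContractProps(1_600_000_000L, 7_776_000L, "adminKey", "memo", "0.0.1001");
        List<String> createdAddresses = List.of("childA", "childB"); // from the EVM result
        for (String child : createdAddresses) {
            System.out.println(child + " -> " + inheritFromParent(parent));
        }
    }
}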

HCS running hash should concatenate message hash & not message

Summary of the defect
The new topicRunningHash of a topic receiving a ConsensusSubmitMessage should be the SHA-384 digest of, in order:

  1. The previous topicRunningHash of the topic (48 bytes)
  2. The topicRunningHashVersion (8 bytes)
  3. The topic's shard (8 bytes)
  4. The topic's realm (8 bytes)
  5. The topic's number (8 bytes)
  6. The number of seconds since the epoch before the ConsensusSubmitMessage reached consensus (8 bytes)
  7. The number of nanoseconds past the whole second in item 6 at which the ConsensusSubmitMessage reached consensus (4 bytes)
  8. The topicSequenceNumber (8 bytes)
  9. The output of the SHA-384 digest of the message bytes from the ConsensusSubmitMessage (48 bytes)

But in fact, we have not versioned the topic running hash, and we are updating the digest with the bytes of the submitted message rather than their SHA-384 hash.

Suggested resolution

  • Add a topicRunningHashVersion to the protobuf TransactionReceipt type.
  • Correct the hash calculation.
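
A sketch of the corrected calculation, using only the JDK's SHA-384; the field order and widths follow the list above, and the values in main are illustrative.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TopicRunningHashSketch {

    static byte[] nextRunningHash(byte[] previousRunningHash,   // 48 bytes
                                  long runningHashVersion,
                                  long shard, long realm, long num,
                                  long consensusSeconds, int consensusNanos,
                                  long sequenceNumber,
                                  byte[] message) throws NoSuchAlgorithmException {
        // Key correction: digest the SHA-384 hash of the message, not the raw message.
        byte[] messageHash = MessageDigest.getInstance("SHA-384").digest(message);

        ByteBuffer buf = ByteBuffer.allocate(48 + 8 + 8 + 8 + 8 + 8 + 4 + 8 + 48);
        buf.put(previousRunningHash)
           .putLong(runningHashVersion)
           .putLong(shard).putLong(realm).putLong(num)
           .putLong(consensusSeconds)
           .putInt(consensusNanos)
           .putLong(sequenceNumber)
           .put(messageHash);

        return MessageDigest.getInstance("SHA-384").digest(buf.array());
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] genesisHash = new byte[48]; // all zeros before the first message
        byte[] next = nextRunningHash(genesisHash,
                2L,                        // illustrative version value
                0, 0, 1001,                // illustrative topic 0.0.1001
                1_589_280_000L, 0, 1L,     // illustrative consensus time and sequence number
                "Hello Hedera".getBytes(StandardCharsets.UTF_8));
        System.out.println("New running hash is " + next.length + " bytes");
    }
}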

Impact
Mirror nodes should store, for each message, the version number of its record; that way, when checking the running hash, they know how to calculate it for both old and new messages.

Allow immutable files to be created with empty keys (as with immutable topics and contracts).

Summary of the defect
Immutable files (that is, files with an empty WACL) are not supported; this is inconsistent with our support for immutable contracts and topics.

Suggested resolution
Permit immutable files, with the following semantics for other operations on immutable files:

  • FileDelete is UNAUTHORIZED.
  • FileAppend is UNAUTHORIZED.
  • FileUpdate with any attribute change other than expiry extension is UNAUTHORIZED.
  • FileGetInfo leaves the FileGetInfoResponse.FileInfo#keys field unset.
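
A small sketch of these semantics as a dispatch table; the response codes are HAPI's, but the operation names and dispatch below are illustrative, not the actual Services code.

public class ImmutableFileSemantics {

    enum Op { FILE_DELETE, FILE_APPEND, FILE_UPDATE_CONTENTS, FILE_UPDATE_EXPIRY_ONLY, FILE_GET_INFO }
    enum Status { OK, UNAUTHORIZED }

    // Proposed outcome when the target file was created with an empty WACL.
    static Status onImmutableFile(Op op) {
        return switch (op) {
            case FILE_DELETE, FILE_APPEND, FILE_UPDATE_CONTENTS -> Status.UNAUTHORIZED;
            // Extending the expiry is still allowed; FileGetInfo simply leaves keys unset.
            case FILE_UPDATE_EXPIRY_ONLY, FILE_GET_INFO -> Status.OK;
        };
    }

    public static void main(String[] args) {
        System.out.println(onImmutableFile(Op.FILE_DELETE));             // UNAUTHORIZED
        System.out.println(onImmutableFile(Op.FILE_UPDATE_EXPIRY_ONLY)); // OK
    }
}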

Doc requirements
Update the HAPI documentation to reflect the above semantics.

FileGetInfo on deleted file returns FILE_DELETED error

Summary of the defect
A FileGetInfo query should still return the metadata (expiry, WACL, deletion status) of a deleted file until it has expired; however, Services is currently rejecting such queries with a FILE_DELETED response status.

Suggested resolution
Update the relevant AnswerService with relaxed validation logic.

Dedicated network for nightly regression

  • Create a new dedicated network to run nightly regression on commits of this repo, different from the current dedicated network used for the old swirlds repo.
  • Create a new branch in infrastructure repo (https://github.com/swirlds/infrastructure/issues/387)
  • Start using the new infra branch so that both the old swirlds repo and this new hashgraph repo can run nightly regression in parallel against different dedicated networks.
