
panacea-oracle's Introduction

Panacea Oracle

An oracle which validates off-chain data to be transacted in the data exchange protocol of the Panacea chain while preserving privacy.

Features

  • Validating that data meets the requirements of a specific deal
    • utilizing a TEE (Trusted Execution Environment) to preserve privacy
  • Providing encrypted data to buyers

Hardware Requirements

The oracle only works in an SGX-FLC environment with a quote provider installed. You can check whether your hardware supports SGX and whether it is enabled in the BIOS by following the EGo guide.

Installation

Usage

Recommended configurations

Recommended settings depend on your application:

  • If the request body (received message) is 500 KB or less in size
    • max-connections: 100
  • If the request body (received message) is between 500 KB and 1 MB in size
    • max-connections: 50

The default settings are as follows:

  • enclave
    • heap-size: 1024 MB
  • gRPC
    • max-connections: 50
    • max-rcv-msg-size: 1024 KB
  • API
    • max-connections: 50
    • max-request-body-size: 1024 KB
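
As a sketch, the defaults above might be expressed in config.toml like this. The section and key names are illustrative assumptions based on the list above; check the actual config schema shipped with the oracle:

```toml
[enclave]
heap-size = 1024             # MB

[grpc]
max-connections = 50
max-rcv-msg-size = 1024      # KB

[api]
max-connections = 50
max-request-body-size = 1024 # KB
```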

panacea-oracle's People

Contributors

gyuguen, 0xhanslee, inchori, audtlr24, youngjoon-lee


panacea-oracle's Issues

Apply gRPC and gRPC-Gateway

Background

Our oracle currently provides a REST API. That is not a bad choice, but for future maintenance, switching to gRPC would be better.
With gRPC, anyone with the proto files can easily call our API. gRPC also makes it easy to generate API documentation via Swagger.
And with gRPC-Gateway, we can provide a REST API as well.

Implementation

Default Implementation

We implement gRPC by referring to the document below, and implement gRPC-Gateway as well.

Set a proto file

datadeal.proto

syntax = "proto3";
package panacea.datadeal.v0;

option go_package = "github.com/medibloc/panacea-oracle/server/types/datadeal/v0";

import "google/api/annotations.proto";

service DataDealService {
  rpc ValidateData(ValidateDataRequest) returns (ValidateDataResponse) {
    option (google.api.http) = {
      post: "/v0/data-deal/deals/{deal_id}/data"
      body: "*"
    };
  }
}

message ValidateDataRequest {
  uint64 deal_id = 1;
  string provider_address = 2;
  bytes encrypted_data = 3;
  bytes data_hash = 4;
}

message ValidateDataResponse {
  Certificate certificate = 1;
}

// Certificate defines a certificate signed by an oracle who issued the certificate.
message Certificate {
  UnsignedCertificate unsigned_certificate = 1;
  bytes signature = 2;
}

// UnsignedCertificate defines a certificate issued by an oracle as a result of data validation.
message UnsignedCertificate {
  string cid = 1;
  string unique_id = 2;
  string oracle_address = 3;
  uint64 deal_id = 4;
  string provider_address = 5;
  string data_hash = 6;
}

key.proto

syntax = "proto3";
package panacea.key.v0;

option go_package = "github.com/medibloc/panacea-oracle/server/types/key/v0";

import "google/api/annotations.proto";

service KeyService {
  rpc GetSecretKey(GetSecretKeyRequest) returns (GetSecretKeyResponse) {
    option (google.api.http) = {
      get: "/v0/secret-key"
    };
  }
}

message GetSecretKeyRequest {
  uint64 deal_id = 1;
  bytes data_hash = 2;
}

message GetSecretKeyResponse {
  bytes encrypted_secret_key = 1;
}

status.proto

syntax = "proto3";
package panacea.status.v0;

option go_package = "github.com/medibloc/panacea-oracle/server/types/status/v0";

import "google/api/annotations.proto";

service StatusService {
  rpc GetStatus(GetStatusRequest) returns (GetStatusResponse) {
    option (google.api.http) = {
      get: "/v0/status"
    };
  }
}

message GetStatusRequest {

}

message GetStatusResponse {
  string oracle_account_address = 1;
  StatusAPI api = 2;
  StatusEnclaveInfo enclave_info = 3;
}

message StatusAPI {
  string listen_addr = 1;
}

message StatusEnclaveInfo {
  bytes product_id = 1;
  string unique_id = 2;
}

Makefile

MODULE=github.com/medibloc/panacea-oracle

PROTO_DIR=proto
PROTO_OUT_DIR=./

proto-gen:
	protoc --proto_path=$(PROTO_DIR) \
		--proto_path=third_party/proto \
		--go_out=$(PROTO_OUT_DIR) \
			--go_opt=paths=import \
			--go_opt=module=$(MODULE) \
		--go-grpc_out=$(PROTO_OUT_DIR) \
			--go-grpc_opt=paths=import \
			--go-grpc_opt=module=$(MODULE) \
		--grpc-gateway_out=$(PROTO_OUT_DIR) \
			--grpc-gateway_opt logtostderr=true \
			--grpc-gateway_opt=paths=import \
			--grpc-gateway_opt=module=$(MODULE) \
		$(PROTO_DIR)/panacea/*/*/*.proto

Improved block height specification for query

Background

We ran performance tests and found the bottlenecks.

https://github.com/medibloc/panacea-oracle/blob/v0.0.1-alpha.1/panacea/query_client.go#L198
In this function, there is a delay each time we fetch the latest height via the light client, and requests are serialized by the mutex. This slows down every request that goes through this function.

Implementation

  • Delete the QueryMiddleware.
  • Utilize the already-working light client refresh schedule.
  • Modify the schedule so that it synchronizes the latest block at short intervals and caches that block's height.
  • In QueryClient, read the cached block height and use it in queries.
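
The cached-height idea above can be sketched in Go as follows. The type and method names are hypothetical; in the real oracle, the light client's refresh schedule would call set() and QueryClient would call get():

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// heightCache holds the last synchronized block height so that
// queries can read it without hitting the light client every time.
type heightCache struct {
	height atomic.Int64
}

// set would be called by the periodic light-client refresh schedule.
func (c *heightCache) set(h int64) { c.height.Store(h) }

// get would be called by QueryClient when it builds an ABCI query.
func (c *heightCache) get() int64 { return c.height.Load() }

func main() {
	var c heightCache
	c.set(1234) // the refresh schedule observed block 1234
	fmt.Println(c.get())
}
```

Reads and writes go through sync/atomic, so no mutex serializes the request path.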

Fix panic from auth middleware when account pubkey not found

[email protected]@vmoracledev002:~/scripts$ ego run $(which oracled) start
EGo v1.1.0 (4625a610928f4f4b1ea49262c363376b1e574b6c)
[erthost] loading enclave ...
[erthost] entering enclave ...
[ego] starting application ...
time="2022-12-15T06:11:59Z" level=info msg="dialing to Panacea gRPC endpoint: http://127.0.0.1:9090"
time="2022-12-15T06:11:59Z" level=info msg="successfully connect to IPFS node"
time="2022-12-15T06:11:59Z" level=info msg="Panacea event subscriber is started"
time="2022-12-15T06:11:59Z" level=info msg="HTTP server is started: 127.0.0.1:8080"
Dragonberry Active
ERROR: Segmentation fault [openenclave-src/enclave/core/sgx/exception.c:oe_real_exception_dispatcher:469]
ERROR: Backtrace:
ERROR: github.com/medibloc/panacea-oracle/server/middleware.(*jwtAuthMiddleware).queryAccountPubKey(): 0x7fdfc217d031
ERROR: github.com/medibloc/panacea-oracle/server/middleware.(*jwtAuthMiddleware).Middleware.func1(): 0x7fdfc217c9d7
ERROR: net/http.HandlerFunc.ServeHTTP(): 0x7fdfc1c190ef
ERROR: github.com/gorilla/mux.(*Router).ServeHTTP(): 0x7fdfc203d4cf
ERROR: net/http.serverHandler.ServeHTTP(): 0x7fdfc1c1c6db
ERROR: net/http.(*conn).serve(): 0x7fdfc1c17b97
ERROR: net/http.(*Server).Serve.func3(): 0x7fdfc1c1d02e
ERROR: runtime.goexit.abi0(): 0x7fdfc1796b61
ERROR: Backtrace:
ERROR: oe_abort_with_td(): 0x7fdfc0c1dd03
ERROR: oe_abort(): 0x7fdfc0c1cae8
ERROR: oe_real_exception_dispatcher(): 0x7fdfc0c2021e
ERROR: github.com/medibloc/panacea-oracle/server/middleware.(*jwtAuthMiddleware).queryAccountPubKey(): 0x7fdfc217d031
ERROR: github.com/medibloc/panacea-oracle/server/middleware.(*jwtAuthMiddleware).Middleware.func1(): 0x7fdfc217c9d7
ERROR: net/http.HandlerFunc.ServeHTTP(): 0x7fdfc1c190ef
ERROR: github.com/gorilla/mux.(*Router).ServeHTTP(): 0x7fdfc203d4cf
ERROR: net/http.serverHandler.ServeHTTP(): 0x7fdfc1c1c6db
ERROR: net/http.(*conn).serve(): 0x7fdfc1c17b97
ERROR: net/http.(*Server).Serve.func3(): 0x7fdfc1c1d02e
ERROR: runtime.goexit.abi0(): 0x7fdfc1796b61
ERROR: :OE_ENCLAVE_ABORTING [openenclave-src/host/calls.c:_call_enclave_function_impl:56]
ERROR: signal: aborted (core dumped)

Subscribe `UpgradeOracle` event

  • UpgradeOracle event detection
  • verification
    • uniqueID (if it is the next version of uniqueID)
    • oracle address (if it is registered already or not)
    • node key remote report
    • trusted block info (height, hash)
  • send tx for approval of oracle upgrade

Testing gRPC and HTTP performance

Background

panacea-oracle has completed its first round of feature development, and functional validation is more or less complete.
The next question is how much performance can be achieved.
In addition to finding optimal values for the configurable options in panacea-oracle, we need to know what the limits are.

Performance tools

I looked at a few performance testing tools and settled on GHZ, which can test both gRPC and HTTP.

Here is why I want to use GHZ:

  • It can be tested neatly using the CLI based on a proto file.
  • It can measure the response time and throughput of our service.
  • It can export the test results in various formats, including JSON, YAML, and HTML.
  • It is in beta, but it is also available as a web UI.

Notes

Of course, I can try this tool and replace it with another if it doesn't work for me.

I'll add the results of my performance tests in comments as I go.

TBD: WASM for validation logics

A basic strategy for implementing complex data validation is defining static validation rules in JSON or another descriptive language.
Then, the oracle works only with the agreed static rules. That's also what we've intended.
But I am wondering if there is any chance to make our oracle dynamic, accepting arbitrary WASM code as validation rules. One of the pros is that we could minimize upgrades of the oracle.
Of course, we would need a security strategy, such as issuing an oracle key for each WASM module using the hash of the WASM code. There are tens of strategies we could think about.

This topic is quite challenging, and it would be over-engineering if there is not much need to use our oracle in various ways. So, let's just keep this in mind and do a PoC someday.

Add validation of `VerifiablePresentation`

Background

The oracle validates against a JSON schema for now, but we are planning to validate a VP (Verifiable Presentation).
So, we need to implement validation of a VP, which contains a VC (Verifiable Credential) and proofs.

Implementation

When the provider requests validation of data that includes a VP, the oracle will decrypt the data and validate it.
The VP and its proof will be validated; after validation succeeds, the data will be encrypted with the secretKey and returned as a DataCertificate. In the near future, the oracle will validate not only the VP and its proof but also the PresentationDefinition. The VP validation will use medibloc/vc-sdk, which applies aries-framework-go v0.1.8.

Add `GET /v0/status` API

Similar to Tendermint RPC /status and Cosmos gRPC /node_info.

Necessary for health checks by Azure.

Change to storing data via consumer service instead of IPFS

Background

In DEP, we identified the possibility that, when using IPFS as the data transfer channel, a provider using a malicious oracle could be rewarded without providing any data. To overcome this, we change the specification so that the consumer provides the data transfer channel themselves (e.g. via dep-consumer) and registers this information in the deal.

Therefore, the oracle should be changed to upload encrypted data to the consumer service instead of IPFS, and to generate certificates based on that endpoint.

Implementation

  • Remove IPFS-related functions.
  • Change the data upload step to use the consumer service registered in the deal.

Execute ApproveOracleRegistration tx

  • Send an approval to Panacea
    • The approval should contain:
      • oracle priv key (encrypted with node key)
      • target oracle address
      • signature (signed by oracle priv key)

Change json return type after validate a data

We decided to add a Consent data type in panacea-core. After validating data, the oracle needs to return a Certificate, which includes an UnsignedCertificate and a Signature, so that the provider can submit a consent with the JSON file returned from the validation request.

I'll change this after the PR is merged.

Speed up the verifiedQueryClient using goroutines

Currently, the verifiedQueryClient makes an ABCI query based on the latest block and waits for the next block (= latest + 1) for light client verification.
But there are cases where we need to query multiple values from the chain. For example, we need to fetch a deal, a certificate, and an account to handle GET /v0/.../secret-key.

To speed this up, use goroutines.

There are two ways, in general. We should decide about it.

  1. Spawning goroutines whenever you want.
  2. Limiting the max number of goroutines for a verifiedQueryClient (similar to a thread pool).

It depends on the rate limiting of the chain node.
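
Option 2 above (a bounded goroutine pool) could look like the following sketch. fetchAll and the query closures are illustrative stand-ins for fetching a deal, a certificate, and an account:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchAll runs the given query functions concurrently, but never more
// than maxWorkers at a time (option 2: a bounded goroutine "pool").
func fetchAll(maxWorkers int, queries []func() (string, error)) ([]string, error) {
	sem := make(chan struct{}, maxWorkers) // one slot per allowed worker
	results := make([]string, len(queries))
	errs := make([]error, len(queries))
	var wg sync.WaitGroup

	for i, q := range queries {
		wg.Add(1)
		go func(i int, q func() (string, error)) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a worker slot
			defer func() { <-sem }() // release it when done
			results[i], errs[i] = q()
		}(i, q)
	}
	wg.Wait()

	for _, err := range errs {
		if err != nil {
			return nil, err
		}
	}
	return results, nil
}

func main() {
	// Hypothetical stand-ins for the three chain queries mentioned above.
	out, err := fetchAll(2, []func() (string, error){
		func() (string, error) { return "deal", nil },
		func() (string, error) { return "certificate", nil },
		func() (string, error) { return "account", nil },
	})
	fmt.Println(out, err)
}
```

The buffered channel acts as the pool limit, which maps directly to the chain node's rate limiting: maxWorkers should not exceed what the node tolerates.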

Test `register-oracle` manually with Testnet

How to test

Connect to vmoracletestnet001.

You can find an oracle docker container running:

ubuntu@vmoracletestnet001:~$ docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED      STATUS      PORTS                                       NAMES
a93cb1a23848   ghcr.io/medibloc/panacea-oracle:main   "ego run /usr/bin/or…"   2 days ago   Up 2 days   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   oracle1

You can see the details of that docker image:

ubuntu@vmoracletestnet001:~$ docker images ghcr.io/medibloc/panacea-oracle
REPOSITORY                        TAG       IMAGE ID       CREATED      SIZE
ghcr.io/medibloc/panacea-oracle   main      7b7ff5fae539   2 days ago   533MB

That image was published as https://github.com/medibloc/panacea-oracle/pkgs/container/panacea-oracle/59177424?tag=main. Do not execute docker pull ghcr.io/medibloc/panacea-oracle:main until it is really needed.

That oracle was started by the following command:

docker run --rm \
    --device /dev/sgx_enclave \
    --device /dev/sgx_provision \
    -v /home/ubuntu/.oracle-docker-1:/oracle \
    -p 8080:8080 \
    ghcr.io/medibloc/panacea-oracle:main \
    ego run /usr/bin/oracled start

Similar to the above, you can start another docker container.

Then, you can execute the register-oracle CLI and see if the 1st oracle approves the registration.

Add maxConnections and maxRequestBodySize limit settings

Background

If we don't limit the maximum number of connections on our server, a flood of requests will produce a lot of errors. In particular, our project is likely to hit OOM (out of memory) due to SGX's memory limitations. Similarly, if we don't limit the size of the request body, we're likely to run into memory issues.

Implementation

Add maxConnections and maxRequestBodySize to config.toml.

Implement rate limit in gRPC and gRPC-Gateway

Background

No API server can accept requests indefinitely. Without a constraint, our servers may not be able to handle a high volume of requests, and the service may stop.
A rate limit accepts as many API requests as the configured value and fails the rest. As a result, the service stays stable.

Implementation

We can use this package for rate limiting:
https://pkg.go.dev/golang.org/x/time/rate

gRPC middleware can make use of this package:
https://github.com/grpc-ecosystem/go-grpc-middleware

gRPC-Gateway uses mux, so we just need to create a mux handler.

Set the maximum number of open connections in config.toml

[grpc]
listen-addr="127.0.0.1:9090" # The gRPC listen address.
max-open-connections="300" # This is the maximum connection of gRPC and gRPC-Gateway combined.
write-timeout="" # I'm not sure about this configuration, but it seems likely.
read-timeout="" # I'm not sure about this configuration, but it seems likely.

gateway-listen-addr="127.0.0.1:8080" # The gRPC-Gateway address

Improve rewind errors in IPFS

Background

When many requests are made to IPFS, the following error sometimes occurs:

failed to store data to IPFS: Post \"https://ipfs-test-rpc.gopanacea.org:443/api/v0/add?\": net/http: cannot rewind body after connection loss

This error occurs because the shell.Shell used to send data to IPFS is not thread-safe.
Although we won't be using IPFS in the next version, this error should be fixed because the current version can still run as a service.

Implementation

  • Create a new shell.Shell every time we add data to or get data from IPFS.

Consider making `verifiedQueryClient.GetStoreData()` accept a `queryHeight`

Related to #46, but with a different approach. We can probably use both, or just one of them.

Currently, verifiedQueryClient.GetStoreData() always executes an ABCI query based on the latest block and waits for the next block (= latest + 1) for light block verification. That logic is basically correct, but I think we don't always need to query based on the latest block. If there are cases where it's okay to query based on a previous block, we can sometimes skip waiting for a new block. This strategy would speed up the verifiedQueryClient.

Case 1: When handling events subscribed

We can get the block height of the event as shown below. Then, we can make an ABCI query based on that height in order to get the chain states related to that event, by calling GetStoreData() with that height (if we make it accept a height).

func (e RegisterOracleEvent) EventHandler(event ctypes.ResultEvent) error {
	eventDataTx := event.Data.(types.EventDataTx)
	queryHeight := eventDataTx.Height
	...
	... := queryClient.GetStoreData(..., queryHeight)
	...
}

If the event handler is triggered right after the event is emitted, GetStoreData() will wait for the next block (= queryHeight + 1). But if the event subscriber is delayed for some reason, GetStoreData() can skip the wait. The possibility of this case is low, though; the subscriber won't be delayed in most cases.

Case 2: When handling end-user requests

Currently, it seems that all end-user requests should be handled based on the latest block. There could also be a delay between the time the end-user decides to send a request and the time the verifiedQueryClient starts its logic. However, in general, end-users send requests without checking whether certain states are finalized on the chain. In other words, end-users usually don't have any desired block height when they send requests to oracles. So, I think this approach is not that helpful for this case.

Discussion

So, it seems that the approach in #46 is more useful in most cases. Still, this approach is more lightweight. We need to keep thinking about whether it is needed.

Subscribe ApproveOracleRegistration tx (instant)

  • After executing a RegisterOracle tx, instantly subscribe to ApproveOracleRegistration txs.
    • If one is received,
      • get the oracle priv key from the tx, decrypt it, and seal/store it.
      • unsubscribe from the event.
      • subscribe to RegisterOracle txs, as implemented by #3.

Add CLI: get-oracle-key

Get the encrypted_oracle_priv_key from the MsgApproveOracleRegistration tx, then decrypt and store the oracle private key.

Add `upgrade-oracle` CLI

CLI for upgrade of oracle

  • send MsgUpgradeOracle to Panacea
    • UniqueId
    • OracleAddress
    • NodePubKey
    • NodePubKeyRemoteReport
    • TrustedBlockHeight
    • TrustedBlockHash

Add CLI: start

  • subscribe and handling relevant events (#3)
  • handling data validation requests for data certificate (#6)

Uniform data formats included in remote reports

The data format included in the remote report used when generating the genesis oracle differs from the one used when registering a new oracle.

Genesis Oracle

oraclePubKey := oraclePrivKey.PubKey().SerializeCompressed()
oracleKeyRemoteReport, err := sgx.GenerateRemoteReport(oraclePubKey)

Register New Oracle

nodePubKey := nodePrivKey.PubKey().SerializeCompressed()
nodePubKeyHash := sha256.Sum256(nodePubKey)
nodeKeyRemoteReport, err := sgx.GenerateRemoteReport(nodePubKeyHash[:])

When generating the remote report for the genesis oracle, the included data must also be hashed with SHA-256.

Likewise, during remote report verification, the public key must be hashed with SHA-256.
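
A uniform format could route both key types through one helper, so the genesis oracle and newly registered oracles embed the same SHA-256 digest. reportData is a hypothetical helper; sgx.GenerateRemoteReport is the real call shown above:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// reportData returns the value to embed in a remote report: always the
// SHA-256 hash of the serialized public key, regardless of whether it is
// an oracle key (genesis) or a node key (registration).
func reportData(pubKey []byte) []byte {
	h := sha256.Sum256(pubKey)
	return h[:]
}

func main() {
	oraclePubKey := []byte{0x02, 0xaa, 0xbb} // stand-in for SerializeCompressed()
	// Both flows would then call sgx.GenerateRemoteReport(reportData(pubKey)),
	// and verification would compare against the same 32-byte digest.
	fmt.Printf("%d bytes\n", len(reportData(oraclePubKey))) // prints "32 bytes"
}
```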

We don't need to use `verifiedQueryClient` for `TxBuilder`

The TxBuilder requires QueryClient only for GetAccount, and we're using a verifiedQueryClient for NewTxBuilder. But I don't see any risk yet even if we use a normal queryClient instead of verifiedQueryClient (which contains light client). If my thought is true, let's use a normal query client for TxBuilder since it's way faster than verifiedQueryClient.

The reason why we adopted the light client is that any malicious guy can trick the oracle with false information, such as fake deal info or fake oracle private key. But, the account info which is required by TxBuilder is not that crucial, I think. Oracle does some off-chain operations using oracle private key or deal info (which contains the consumer info), such as issuing certificates or encrypting data for consumers. Because there is no safety checker for these off-chain operations, we need the light client in order to decide whether we should start an off-chain operation based on the info fetched from the chain, or not.

However, the TxBuilder doesn't do any off-chain operation. It performs only one operation, broadcasting txs, which is definitely performed on-chain. On-chain operations are validated by chain validators. If someone tricks the TxBuilder with fake account info, the txs executed by the TxBuilder won't be accepted by the chain validators. Of course, a malicious actor could run her own chain node, but she wouldn't be able to beat the voting power of the other validators.

Let's think about whether my theory is correct.

Add get-oracle-key CLI at OracleUpgrade store

Currently, the get-oracle-key CLI gets the EncryptedOraclePrivKey only from OracleRegistration.
But during an oracle upgrade, the new oracle can't use this CLI (because the nodePrivKey is different). Therefore, if storing the OraclePrivKey fails for some reason during the upgrade, there is no way to obtain the key again.
So, it is necessary to improve get-oracle-key to cover the upgrade scenario.

DID Authentication

Background

When a provider requests data validation with a VC/VP, the oracle needs to verify that the requester is the holder of the VP via DID authentication.

Implementation

As a simple approach, JWT can be used for DID authentication, as is done for account authentication.
I'm going to investigate whether there is a more general or standard way of doing DID authentication.

Subscribe RegisterOracle tx

  • Subscribe all RegisterOracle txs
    • verify report via SGX
    • verify unique ID in the report
    • verify trusted block

Next: #3

Add PathEscape to MerklePath key

The oracle gets data from Panacea via GetStoreData, which verifies the queried data with a Merkle proof.

GetStoreData uses the VerifyMembership function from cosmos/ibc-go.
In VerifyMembership, the MerklePath applies URL path unescaping, as shown below:

// GetKey will return a byte representation of the key
// after URL escaping the key element
func (mp MerklePath) GetKey(i uint64) ([]byte, error) {
	if i >= uint64(len(mp.KeyPath)) {
		return nil, fmt.Errorf("index out of range. %d (index) >= %d (len)", i, len(mp.KeyPath))
	}
	key, err := url.PathUnescape(mp.KeyPath[i])
	if err != nil {
		return nil, err
	}
	return []byte(key), nil
}

Path unescaping converts each 3-byte encoded substring of the form "%AB" into the hex-decoded byte 0xAB. It returns an error if any % is not followed by two hexadecimal digits.

Therefore, if we simply do string(key) when creating the MerklePath, as we do now, the character % can end up in the string, and errors may occur.

To solve this, we can use PathEscape, which performs the inverse transformation of PathUnescape.

Add `getSecretKeys` feature and change `getSecretKey` uri

Background

We currently have consumers look up the deal_id and data_hash from the deal and fetch secret keys one by one.
However, a deal can contain multiple data entries, and it is inconvenient for consumers to fetch them one at a time.
Therefore, we will develop an API that can return multiple secretKeys.

Also, I want to change the URI of the API for getting the secretKey.
Currently, the URI is /v0/secretKey?deal_id={dealID}&data_hash={dataHash}.
However, since the secretKey belongs to the deal, I think it should be changed as follows.

  • multiple search: /v0/datadeal/deals/{deal_id}/secret-keys
  • one search: /v0/datadeal/deals/{deal_id}/secret-keys/{data_hash}

Implementation

GetSecretKeys

Request

GET /v0/datadeal/deals/{deal_id}/secret-keys?offset={offset}&limit={limit}
# Authorization: Bearer {jwtToken}
# Content-Type: application/json

Response

{
  "secret_keys":[
    {
      "deal_id": 0,
      "data_hash": "",
      "cid": "",
      "encrypted_secret_key": ""
    }
  ],
  "page": {
    "offset": 0,
    "limit": 100
  }
}

GetSecretKey

Request

GET /v0/datadeal/deals/{deal_id}/secret-keys/{data_hash}
# Authorization: Bearer {jwtToken}
# Content-Type: application/json

Response

{
  "deal_id": 0,
  "data_hash": "",
  "cid": "",
  "encrypted_secret_key": ""
}

Handle data validation requests

  • POST /v0/data-deal/deals/{dealId}/data
    • Read the data from the request body
    • Decrypt data
    • Validate data
    • Re-encrypt data using a combined key
    • Put data into IPFS
    • Issue a certificate to the client

Do we need `--trusted-block-*` flags for the CLI `gen-oracle-key`?

Related with medibloc/panacea-core#557

It seems that we don't need trusted block info to generate an oracle key, seal it, and store it to disk. If I understand correctly, the only reason we take the --trusted-block-* flags is to initialize the verifiedQueryClient in advance, so that it can be used when we run oracled start. If so, what do you think about specifying the trusted block info when we run oracled start, via flags or a TOML file?
Because we take trusted block info for gen-oracle-key, we always need a running chain whenever we run gen-oracle-key. That's not a big issue on the mainnet, but when we test the entire system locally, it complicates bootstrapping.
Also, it's not intuitive that gen-oracle-key needs trusted block info, because the gen-oracle-key operation doesn't require any communication with the chain.

Improve caching of JSON schema

Background

Data validation typically uses the same JSON schema.
By caching the JSON schema, it can be fetched from memory instead of making a network call on every validation.
We can expect a performance improvement.

Implementation

  • Cache each JSON schema fetched from a URI individually.
  • Once fetched, a schema is held in memory, keyed by its URI, and reused until the server stops.
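
A minimal sketch of such a cache, with the fetch function injected so the behavior is easy to verify. The names are illustrative; the real implementation would fetch the schema over HTTP:

```go
package main

import (
	"fmt"
	"sync"
)

// schemaCache memoizes JSON schemas by URI so repeated validations
// skip the network fetch.
type schemaCache struct {
	mu      sync.RWMutex
	schemas map[string][]byte
	fetch   func(uri string) ([]byte, error)
}

func newSchemaCache(fetch func(string) ([]byte, error)) *schemaCache {
	return &schemaCache{schemas: make(map[string][]byte), fetch: fetch}
}

// Get returns the cached schema for uri, fetching it on first use.
func (c *schemaCache) Get(uri string) ([]byte, error) {
	c.mu.RLock()
	s, ok := c.schemas[uri]
	c.mu.RUnlock()
	if ok {
		return s, nil
	}
	s, err := c.fetch(uri)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.schemas[uri] = s
	c.mu.Unlock()
	return s, nil
}

func main() {
	calls := 0
	c := newSchemaCache(func(uri string) ([]byte, error) {
		calls++ // a real fetch would be an HTTP GET of the schema URI
		return []byte(`{"type":"object"}`), nil
	})
	c.Get("https://example.com/schema.json")
	c.Get("https://example.com/schema.json") // served from memory
	fmt.Println("network fetches:", calls)   // prints "network fetches: 1"
}
```

Entries live until the process stops, matching the plan above; an eviction policy could be added later if schemas churn.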
