smartcontractkit / external-adapters-js


Monorepo containing JavaScript implementation of external adapters

License: MIT License

JavaScript 0.50% Dockerfile 0.02% TypeScript 98.71% Shell 0.27% Jsonnet 0.50% Smarty 0.01%

external-adapters-js's Introduction

Chainlink External Adapters (TypeScript)

This repository contains the source code for Chainlink external adapters. If you would like to contribute, please see the Contributing page for more details.

Table of Contents

  1. Getting Started
  2. How to Run
  3. Testing
  4. Deployment
  5. EA Versioning
  6. Advanced Features

Getting Started

Requirements

  • Node.js v22
  • Yarn

Install

yarn

Installs the dependencies for all workspaces.

Setup

yarn setup

Runs the setup step for all adapters. Typically this step just compiles TypeScript, but may involve other tasks.

Clean

yarn clean

Clears all build files/directories. Useful in case of issues when installing dependencies or running setup.

Folder Structure

╠═.github - scripts automatically run by the CI/CD workflow
║
╠═.husky - git hooks
║
╠═.vscode - code editor specific configuration
║
╠═.yarn - yarn 2 dependencies
║
╠═grafana - utilities and configurations related to Grafana
║
╚═packages
    ║
    ╠══ composites - adapters composed of multiple other adapters for complex functionality
    ║
    ╠══ core - the internal framework used across all external adapters
    ║
    ╠══ k6 - performance testing scripts and configurations using k6
    ║
    ╠══ non-deployable - adapters that are not meant to be run independently
    ║
    ╠══ scripts - additional Node.js scripts for mono-repository management
    ║
    ╠══ sources - adapters that read data from a data provider's API.
    ║
    ╚══ targets - adapters that write data to a location, often a blockchain.

Adapters

The MASTERLIST.md file in the root directory contains the list of all the external adapters available in this repository, together with their version, type, supported environment variables, endpoints, and other helpful information. Check each adapter's linked README to learn more about it.

How to Run

External adapters should be run as long-lived processes, either directly as an HTTP server, as a Docker container, or via the single-command Docker app. Each adapter may have configuration that is required to be supplied through environment variables.

Configuration

Some External Adapters require environment variables in order to run. Please see the respective adapter's README for more specific information on the External Adapter that you would like to run.

Every External Adapter has some optional environment variables for customizing behavior and turning on advanced features. The list of all available options can be seen here.

Run as HTTP server

Use the start command while in the directory of the adapter that you would like to run. For example:

cd packages/sources/coingecko
yarn start

Run as Docker Container

  1. All of the external adapters have a service that is created when the repo's docker-compose file is generated. This can be done by running the following command in the root of the repository (after yarn && yarn setup):

yarn generate:docker-compose

  2. Next, build a container image using the generated docker-compose.generated.yaml file along with docker-compose build:

docker-compose -f docker-compose.generated.yaml build [adapter-name]

Where [adapter-name] is replaced as follows:

Parameter: adapter-name
Description: the name of the external adapter package, usually the folder name with -adapter as a suffix
Options: see docker-compose.generated.yaml for the list of services that can be used

For example, the bravenewcoin external adapter uses bravenewcoin-adapter:

docker-compose -f docker-compose.generated.yaml build bravenewcoin-adapter

  3. Then run it with:

docker-compose -f docker-compose.generated.yaml run -p 8080:8080 -e API_KEY='YOUR_API_KEY' bravenewcoin-adapter

Environment variables can also be passed through a file:

docker run -p 8080:8080 --env-file="~/PATH_TO_ENV" -it proof-of-reserves-adapter:latest

Single-Command Docker App

This command will start all of your external adapters on a single server, with performance features enabled and pre-defined metrics charts for each EA.

The first step is to load all of the environment variables that are needed across all of the External Adapters that will be run. These can either already be loaded into the environment or supplied to the startup script as a text file. Also, make sure that the Grafana dependencies are installed.

Starting from the root of the repository:

  1. Ensure that the project is set up and that the docker-compose file has been generated:

yarn && yarn setup && yarn generate:docker-compose

  2. Use the startup script, supplying every External Adapter that you would like to run and monitor. Adapter names have the format [[ADAPTER NAME]]-adapter. For example:

cd grafana && ./scripts/compose.sh coingecko-adapter coinmarketcap-adapter

  3. The running services can be found at the following ports:
  • External Adapters - search docker-compose.generated.yaml for the name of your EA. The port it is running on is the first number before the colon under ports.
coincodex-adapter:
  image: coincodex-adapter:0.0.4
  ports:
    - 8112:8080 <----------- The first number before the colon here
  build:
    context: ..
    dockerfile: ./Dockerfile
    args:
      location: packages/sources/coincodex
      package: '@chainlink/coincodex-adapter'
    labels:
      com.chainlinklabs.external-adapter-type: sources
  environment:
    - EA_PORT=${EA_PORT}

Testing

In order to e2e test adapters locally, you may need to set environment variables such as $API_KEY. These can be found in the README.md for every adapter. Integration and unit tests use mocks, so there is no need to set environment variables.

Make sure you run these commands from the ROOT of this monorepo.

# Build all packages
yarn setup

# Run all unit tests
yarn test:unit

# Run all integration tests
yarn test:integration

export adapter=myadapter # Your adapter name, coinmarketcap, coingecko, etc

# Run integration tests for that adapter
yarn test $adapter/test/integration

# Run unit tests for that adapter
yarn test $adapter/test/unit

# Run a specific test for that adapter
yarn test $adapter/test/unit/my-specific-test.test.ts

# Run tests in watch mode, re-running tests whose code or dependencies have changed
yarn test --watch $adapter/test/unit

Deployment

Container Images

Images are published to Chainlink's public AWS ECR repositories: public.ecr.aws/chainlink/adapters

They can also be found in the public gallery, the registry name is chainlink (e.g. https://gallery.ecr.aws/chainlink/adapters/1forge-adapter).

The External Adapters are tagged with semantic version releases to allow for automated upgrades.

Running from ECR

The EA container image can be downloaded using the docker pull command. For example:

docker pull public.ecr.aws/chainlink/adapters/1forge-adapter:latest

To run the image use the docker run command. For example:

docker run -p 8080:8080 -e API_KEY='YOUR_API_KEY' public.ecr.aws/chainlink/adapters/1forge-adapter:latest

It can be helpful to pass a text file to the container to handle giving multiple environment variables:

docker run -p 8080:8080 --env-file=[[path to your env file]] public.ecr.aws/chainlink/adapters/1forge-adapter:latest

External Adapters Versioning

For a full rundown on how versioning works, see semver.

What you need to know is that we make releases with the following versioning scheme: major.minor.patch.

  • A patch version change usually has small changes/bug fixes. Upgrading/downgrading the patch version number should never break compatibility.
  • A minor version change usually adds functionality. Upgrading should never break compatibility, but you might not be able to downgrade the minor version.
  • A major version change usually introduces a breaking change. Both upgrading and downgrading the major version number might require additional work. Proceed with caution!

Best practice is to always keep adapters up to date!

Why do the versions jump up sometimes?

Sometimes when looking at the releases for an EA you might see that it jumped a version number; for example, the previous release might be 1.2.3 and the next release 1.2.5. The reason is that we publish a new release each week. Every change includes a changeset, which uses the versioning semantics above (major, minor, and patch). Over the course of a given week, more than one change may land in an adapter, so more than one changeset gets ingested into the release, causing the version number to jump. So if a version went from 1.2.3 to 1.2.5, two patches were published that week.

Advanced Features

Please refer to ea-framework-js docs for topics like performance, rate limiting, caching, overrides and other advanced features.

external-adapters-js's People

Contributors

ad0ll, alecgard, alejoberardino, amit-momin, austinborn, bogos, boxhock, cds95, cl-ea, dependabot[bot], ebarakos, eddiejhong, emmick4, github-actions[bot], henrynguyen5, justinkaseman, karen-stepanyan, krebernisak, kuphjr, mjk90, mmcallister-cll, momentmaker, nzleet, renovate-bot, renovate[bot], rodrigoad, silaslenihan, subarna-singh, tateexon, thodges-gh


external-adapters-js's Issues

Error: Chromium revision is not downloaded.

Steps to reproduce

  • Switch on Chainlink VPN.
  • curl https://lambda.staging.devnet.tools/googlefinance/call -d '{"data": {"from": "FTSE", "to": "GBP" }}'.

Expected outcome

Status code 200 with a response object containing result field.

Actual outcome

{"jobRunID":"1","status":"errored","error":{"error":{"name":"AdapterError","message":"Chromium revision is not downloaded. Run \"npm install\" or \"yarn install\""}},"statusCode":500}%

Migrate @chainlink/ea-bootstrap to 100% async

Right now there is a divide in the codebase between an older style of adapters that use synchronous callbacks and a newer style that awaits promises.

This can be seen here.

All code should be migrated away from using ExecuteWrappedResponse.
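
For illustration, here is a rough sketch of the two styles. The callback shape is an assumption, not the exact ExecuteWrappedResponse signature; the async style mirrors the Requester usage shown elsewhere in this document:

// Older callback style (shape is an assumption)
const executeWithCallback = (jobRunID: string, callback: (statusCode: number, data: unknown) => void): void => {
  Requester.request(reqConfig, customError).then((response) => {
    const result = Requester.validateResultNumber(response.data, ['price'])
    callback(200, Requester.success(jobRunID, { data: { result }, result, status: 200 }))
  })
}

// Newer async/await style that all adapters should converge on
const executeAsync = async (jobRunID: string) => {
  const response = await Requester.request(reqConfig, customError)
  const result = Requester.validateResultNumber(response.data, ['price'])
  return Requester.success(jobRunID, { data: { result }, result, status: 200 })
}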

Blockcypher request rate limiting

Description

The Blockcypher balance adapter is blocked because Blockcypher's per-second rate limit, even on their highest-tier plan, cannot handle wbtc's ~30 custodial wallet requests.

One solution could be to introduce a new environment variable, e.g. RATE_LIMIT=10, where the number given is the maximum number of requests sent per second.

This would unlock the wbtc proof-of-reserves adapter and allow node operators to use a cheaper plan.

The above has been given a naive solution, but a more reusable function could be added to utils.
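
A minimal sketch of such a limiter, assuming RATE_LIMIT is counted in requests per second (names are illustrative, not the final utility):

// Simple throttle: spaces outgoing requests so no more than RATE_LIMIT go out per second
const rateLimit = Number(process.env.RATE_LIMIT) || 10
let lastScheduled = 0

const throttle = async (): Promise<void> => {
  const minIntervalMs = 1000 / rateLimit
  const wait = Math.max(0, lastScheduled + minIntervalMs - Date.now())
  lastScheduled = Date.now() + wait
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait))
}

// Usage: await throttle() before each outgoing call to the Blockcypher API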

References

#276

Unable to publish NPM packages using Github Actions

I've been following GitHub Docs | Publishing Node.js packages and various open issues on actions/setup-node, but I can't seem to get it to work for the life of me.

The most common error for NPM publish:

Run yarn workspace @chainlink/external-adapter publish --non-interactive --access public
yarn workspace v1.22.5
yarn publish v1.22.5
[1/4] Bumping version...
info Current version: 0.2.4
[2/4] Logging in...
[3/4] Publishing...
error Couldn't publish package: "https://registry.npmjs.org/@chainlink%2fexternal-adapter: Not found"
info Visit https://yarnpkg.com/en/docs/cli/publish for documentation about this command.
error Command failed.
Exit code: 1
Command: /opt/hostedtoolcache/node/12.18.3/x64/bin/node
Arguments: /usr/share/yarn/lib/cli.js publish --non-interactive --access public
Directory: /home/runner/work/external-adapters-js/external-adapters-js/external-adapter
Output:

info Visit https://yarnpkg.com/en/docs/cli/workspace for documentation about this command.
##[error]Process completed with exit code 1.

The most common error for GitHub Packages publish:

[1/4] Bumping version...
info Current version: 0.2.14
[2/4] Logging in...
verbose 0.295776152 Error: No token found and can't prompt for login when running with --non-interactive.
    at /home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:35203:13
    at Generator.next (<anonymous>)
    at step (/home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:310:30)
    at /home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:328:14
    at new Promise (<anonymous>)
    at new F (/home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:25783:28)
    at /home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:307:12
    at getToken (/home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:35270:18)
    at Object.<anonymous> (/home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs:98537:65)
    at Generator.next (<anonymous>)
error No token found and can't prompt for login when running with --non-interactive.
info Visit https://yarnpkg.com/en/docs/cli/publish for documentation about this command.
error Command failed.
Exit code: 1
Command: /opt/hostedtoolcache/node/12.18.3/x64/bin/node
Arguments: /home/runner/work/external-adapters-js/external-adapters-js/.yarn/releases/yarn-1.22.5.cjs publish --verbose --non-interactive --access public
Directory: /home/runner/work/external-adapters-js/external-adapters-js/external-adapter
Output:

info Visit https://yarnpkg.com/en/docs/cli/workspace for documentation about this command.
##[error]Process completed with exit code 1.

Settle on a single Big number library

It seems we currently use all the big number libraries out there: ethers.BigNumber, decimal.js, bignumber.js, bn.js, and big.js.

We should settle on a single library and use that globally throughout the repo.

TS export = prevents us from exporting types

We should migrate from the way we currently export from adapters:

// only values can be exported here
export = { NAME, /* types */, makeExecute, makeConfig, ...expose(util.wrapExecute(makeExecute())) }

... to something like this:

const handlers = expose(util.wrapExecute(makeExecute()))

export { NAME, types, makeExecute, makeConfig, handlers }

Improve support for multiple keys in API_KEY env variable

There are still some places in the codebase that use util.getRequiredEnv('API_KEY') instead of the getRandomRequiredEnv function.

We'll need to update those and also add the ability to specifically require the API key when using the Requester.getDefaultConfig function, as some adapters use it without requiring an API key to be set, for example the dydx-stark adapter.
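
As a rough sketch of the intended behavior, assuming getRandomRequiredEnv picks one key at random from a delimited list (the delimiter shown is an assumption):

// Pick one key at random from a delimited API_KEY value, e.g. API_KEY=key1,key2,key3
const getRandomRequiredEnv = (name: string, delimiter = ','): string => {
  const value = process.env[name]
  if (!value) throw new Error(`Required env var ${name} is missing`)
  const keys = value.split(delimiter)
  return keys[Math.floor(Math.random() * keys.length)]
}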

Non-price feed adapters throw an error when queried due to metrics checking for price feed parameters

A non-blocking error is thrown each time a non-price feed adapter is queried.

{"error":{"jobRunID":"1","status":"errored","statusCode":400,"name":"AdapterError"},"level":"error","message":"Error validating input.","instanceId":"bd50ff51-dcc2-459b-924c-41a7e8f8783a","timestamp":"2021-04-07T14:20:40.029Z"}
{"message":"Unable to validate feed name","level":"debug","instanceId":"bd50ff51-dcc2-459b-924c-41a7e8f8783a","timestamp":"2021-04-07T14:20:40.029Z"}

The parameters are validated as part of the middleware/metrics system in the /bootstrap/src/lib/metrics/util.ts file, which runs the Validator that throws the first line of the error above.

/**
 * Get feed id name based on input params
 * @param input The adapter input request
 * @returns {string}
 */
export const getFeedId = (input: AdapterRequest): string => {
  const commonFeedParams = {
    base: ['base', 'from', 'coin', 'symbol', 'asset'],
    quote: ['quote', 'to', 'convert'],
  }
  try {
    const validator = new Validator(input, commonFeedParams)
    if (validator.error) {
      logger.debug('Unable to validate feed name')
      return JSON.stringify(input)
    }
    return `${validator.validated.data.base.toUpperCase()}/${validator.validated.data.quote.toUpperCase()}`
  } catch (e) {
    return JSON.stringify(input)
  }
}

It can be confusing for operators to see an error when running non-price-feed EAs.

Improve EA error handling

My thoughts are that we should handle most or all errors, and output them with correct status codes and appropriate messages. We have work to do here.

Internal errors should be caught, mapped to an HTTP 500 status code, and given an appropriate message, something like Internal Server Error: ${error.name}. Then, if we are in a dev env (NODE_ENV=development) or some other verbose/debug flag is set, we dump the whole internal error to the output as the underlying cause to be investigated.

Originally posted by @krebernisak in #212 (comment)
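
A minimal sketch of the 500 mapping described above (names are illustrative; the actual middleware shape in ea-bootstrap may differ):

// Map internal errors to an HTTP 500 body with a concise message; expose details only in development
const toErrorResponse = (error: Error) => {
  const body: Record<string, unknown> = {
    status: 'errored',
    statusCode: 500,
    message: `Internal Server Error: ${error.name}`,
  }
  if (process.env.NODE_ENV === 'development') {
    body.cause = { message: error.message, stack: error.stack }
  }
  return body
}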

Refactor getting environment variables

Maybe we could avoid some duplication with the other method? For example, by passing a boolean value, e.g. (name: string, required = true, delimiter = ',', prefix = ''), and using util.getRandomEnv('API_KEY') in all places.

Then we can call util.getRandomEnv('API_KEY', false) in places where we don't want the var to be required.

Or something similar.

Originally posted by @ebarakos in #226 (comment)
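
A sketch of the proposed signature (parameter names are taken from the comment above; the body and the prefix joining are illustrative assumptions):

// One helper for both cases: required vars throw when missing, optional vars return undefined
const getRandomEnv = (name: string, required = true, delimiter = ',', prefix = ''): string | undefined => {
  const value = process.env[prefix ? `${prefix}_${name}` : name]
  if (!value) {
    if (required) throw new Error(`Required env var ${name} is missing`)
    return undefined
  }
  const values = value.split(delimiter)
  return values[Math.floor(Math.random() * values.length)]
}

// util.getRandomEnv('API_KEY')        // required (default)
// util.getRandomEnv('API_KEY', false) // optional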

Investigate test timeouts

I believe we need a timeout to give error tests time to retry. This does need more investigation though.

The problem I have noticed is that different APIs take different amounts of time to time out. In the example of 1forge I had to increase it this much to pass, whereas other adapters could use lower values (usually 3000 to allow three 1-second retries).

Originally posted by @justinkaseman in #226 (comment)

Improve adapter input validation and documentation

This idea started here by @yosriady.

As validation is also a big part of external-adapters-js, which has a simple Validator we currently use, it would be great if we could combine our efforts here and define a simple validation utility that we would use for both.

I like the idea of defining a declarative validation strategy, one that we could also use to add extra documentation to external adapters. Either printing it to logs or adding a special endpoint that would return the input schema for documentation and debug purposes. But just reading it from source code as-is would be beneficial for someone trying to deploy and use those adapters. Better and more specific error messages would also bring a much improved UX.

If we make it easy to write custom validation functions, more of them will be written, which would also further improve code safety.
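
For example, a declarative input schema could look something like this (names and shape are illustrative, not an existing API in this repo):

const inputParameters = {
  base: { aliases: ['from', 'coin'], required: true, description: 'The symbol of the asset to query' },
  quote: { aliases: ['to', 'convert'], required: true, description: 'The symbol of the currency to convert to' },
}

// A Validator could walk this schema to produce specific error messages, and the same object
// could be printed to logs or served from a documentation endpoint for debugging purposes.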

Add API's response.data on the data field of adapter's response

I have noticed that on some of the JS adapters, we were returning the actual API's response.data field.

We might need to alter the example TS adapter so that, in the simple/standard cases, this field is appended to the data object to maintain backwards compatibility.

Current:

  const response = await Requester.request(reqConfig, customError)
  const result = Requester.validateResultNumber(response.data, ['price'])
  return Requester.success(jobRunID, {
    data: { result },
    result,
    status: 200,
  })

Suggestion:

  const response = await Requester.request(reqConfig, customError)
  const result = Requester.validateResultNumber(response.data, ['price'])
  return Requester.success(jobRunID, {
    data: { ...response.data, result },
    result,
    status: 200,
  })

GCP nodejs v10 or 12?

The docs say that for GCP you should use Node.js v10, but for AWS Lambda v12. Should they both be 12?

API keys from Config

There are a couple of places in the codebase where API keys are accessed from within an endpoint's code. These instead need to be handled in makeConfig.

This change will interfere with validation tests. These tests currently do not have any API keys. They could be passed into tests through the Config.

Originally posted by @justinkaseman in #226 (comment)

Support batching for PoR adapters where available

  • Blockcypher

Unfortunately, this is only useful to improve latency:

With regard to rate limits, each individual batch call counts as a request; for example, if you request 3 addresses in a batch, you're still using 3 API calls of resources on our end. The big advantage to batching is that you avoid 3 separate round-trip calls/reduce latency. Since the default, non-registered rate limit per second is 3, larger batches require a paid API token. To use larger batches please register.

[FEATURE] Serverless Framework and OpenWhisk Integration

OpenWhisk is an open-source FaaS solution that can easily be deployed on Kubernetes. I'd like to make it possible to deploy all of the adapters on OpenWhisk with the help of the Serverless Framework.

This way, deployment of new adapters can be as easy as sls deploy.

Config improvements

Description

Re-usable Configs were recently added to reduce shared code between EAs. This was a simple first-step and should be improved upon.

There was concern about them being too static. A generic type was added to ExecuteFactory to allow many Config types. This concern should be given more thought for the future.

These configurations should be extendable and be able to come from a variety of sources (URL params, ENV variables, arguments to the factory, etc.).

The composability of these configurations in their current form should be verified.

Make e2e integration tests more usable by injecting secrets

Integration tests, as they are currently set up, are not very usable. We would like to start integration tests for all adapters at once, but how do we then inject a different API key into each one?

One possible solution would be to have a secrets.json configured with all secrets required for integration tests. Then every test can read from there.

For CI runs we could have an encrypted secrets.json.gpg file that gets decrypted and provided to the test suite. Documentation on this workflow can be found here.

Originally posted by @krebernisak in #218 (comment)
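
A sketch of what that might look like (the file name and shape follow the comment above; the loader itself is illustrative):

import { readFileSync } from 'fs'

// secrets.json (git-ignored locally; committed encrypted as secrets.json.gpg for CI) might look like:
// { "coingecko": { "API_KEY": "<key>" }, "coinmarketcap": { "API_KEY": "<key>" } }
const loadSecrets = (adapter: string): Record<string, string> => {
  const secrets = JSON.parse(readFileSync('secrets.json', 'utf8'))
  return secrets[adapter] ?? {}
}

// In an adapter's integration test setup:
// Object.assign(process.env, loadSecrets('coingecko'))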

Validation error in "unknown endpoint" of behaviors

In test-helpers' balance behavior, an unknown endpoint should not result in a validation error, since an endpoint that is invalid for the balance behavior might make sense in another adapter's behavior, as happens with cryptoapis.

Balance endpoints incorrectly parse units

Description

Balance endpoints were originally created for Bitcoin Proof of Reserves.
The External Adapters were engineered to be able to handle more than just Bitcoin.
In some cases the balance units need to be converted.

A recent PR hard-coded conversion to BTC, when it could be any of multiple currencies.
A better utility function should be created, or support for other currencies should be disabled.

This issue can be re-created by running yarn test in the cryptoapis adapter.
Ethereum chains will throw this error trying to convert the units:

Error: fractional component exceeds decimals (fault="underflow", operation="parseFixed", code=NUMERIC_FAULT, version=bignumber/5.0.14)
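
A sketch of a more general utility, assuming ethers v5 and a per-currency decimals map (the decimal values shown are standard, but the function name and map are illustrative):

import { utils } from 'ethers'

// Decimals per asset instead of hard-coding BTC's 8
const DECIMALS: Record<string, number> = { BTC: 8, ETH: 18 }

const toBaseUnits = (balance: string, currency: string) => {
  const decimals = DECIMALS[currency]
  if (decimals === undefined) throw new Error(`Unsupported currency: ${currency}`)
  // Using the correct decimals avoids the parseFixed underflow seen when 8 is assumed for ETH values
  return utils.parseUnits(balance, decimals)
}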

Improve CI jobs runtime

Currently, CI jobs for a PR branch can run in excess of 45 minutes. This is too much waiting time and slows down the normal development process.

Some ideas on how to speed up CI runs:

  • build all adapters using one command (a lot of time is spent waiting for CI runners to boot up)

Circular dependency between bootstrap and external-adapter

We currently use shorthand ambient modules in our @chainlink/types:

declare module '@chainlink/external-adapter'
declare module '@chainlink/ea-bootstrap'

These stifle types by setting every export to be any, defeating our type safety.
We are unable to remove them because of a circular dependency between these two packages.

Commit b847ff2 is a band-aid that allows CI to run despite this issue. It should be removed.

Could not load typescript compiler with NPM package name

On zipping files for serverless consumption:

make zip adapter=coinpaprika

I get the following error:

Error: Could not load TypeScript compiler with NPM package name `/home/patrick/.npm/_npx/3340692/lib/node_modules/@vercel/ncc/dist/ncc/typescript.js`. Are you sure it is correctly installed?

I've tried changing devDependencies and running npm link typescript, as suggested in this issue: vercel/ncc#622

I've attempted this on:
OS: Ubuntu 20.04.1 LTS & macOS Big Sur 11.1
Node: 10.19.0, 12.18.4, 14.15.2 (and the associated npm versions)

Any thoughts?

Improve documentation on PoR & composite adapters in general

Currently, it's not straightforward for node operators to quickly understand how to build and run composite adapters like Proof of Reserves. Composite adapters have underlying adapters, each of which has its own configuration and/or defaults. All of this info is currently obscure and hard to find.

As suggested by @tyrion70:

Ideally the README has everything needed for a node operator to get it started, together with the jobspecs of course.

Log adapter errors

Currently, if there is some internal error, the client is just notified with a 500 and nothing is logged. Logging these errors would be really helpful for debugging and for recognizing problems in production adapters faster.

Docker run cannot read from a file

While acceptance-testing adapters, the command
docker run -p 8080:8080 --env-file=PATH -it proof-of-reserves-adapter:latest
could not be used to start a Docker container with the correct environment variables.

Having this work would make things a lot easier for node operators who need to start 5+ containers to run composite adapters.

Make integration tests more usable by mocking HTTP requests

Instead of using Requester static functions to make HTTP requests, we would like to rewrite this functionality as an HTTP module object that gets injected into the adapter, and as such can be easily mocked in tests.

This will enable easier tests, better test coverage, and running more tests in the CI environment, increasing the overall codebase safety.
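
A rough sketch of the idea (the interface, URL, and names are illustrative):

// The adapter depends on a small HTTP interface rather than calling Requester statics directly
interface HttpClient {
  get<T>(url: string, params?: Record<string, string>): Promise<T>
}

const makeExecute = (http: HttpClient) => async (request: { id: string }) => {
  const data = await http.get<{ price: number }>('https://api.example.com/price')
  return { jobRunID: request.id, result: data.price, statusCode: 200, data: { result: data.price } }
}

// In tests, inject a stub instead of hitting the network:
// const mockHttp: HttpClient = { get: async <T>() => ({ price: 42 } as unknown as T) }
// const execute = makeExecute(mockHttp)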

Make build failing

After installing using

yarn

at the top level, I ran make build coingecko and am getting

make build coingecko
yarn ncc build  -o /dist
yarn run v1.22.4
$ /Users/kjd/go/src/github.com/smartcontractkit/external-adapters-js/node_modules/.bin/ncc build -o /dist
{ Error: Cannot find module '/Users/kjd/go/src/github.com/smartcontractkit/external-adapters-js'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at resolve (internal/modules/cjs/helpers.js:33:19)
    at runCmd (evalmachine.<anonymous>:1:47259)
    at Object.819 (evalmachine.<anonymous>:1:44241)
    at __webpack_require__ (evalmachine.<anonymous>:1:216)
    at startup (evalmachine.<anonymous>:1:353)
    at module.exports.8 (evalmachine.<anonymous>:1:385)
    at evalmachine.<anonymous>:1:395
    at Object.<anonymous> (/Users/kjd/go/src/github.com/smartcontractkit/external-adapters-js/node_modules/@zeit/ncc/dist/ncc/cli.js:6:28)
    at Module._compile (internal/modules/cjs/loader.js:778:30) code: 'MODULE_NOT_FOUND' }
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
make: *** [build] Error 1

Seems to be happening for all the adapters I tried.

Secrets module

We can't commit API keys to the public repository.

But you are right that integration tests set up like this are not very usable. We want to somehow start integration tests for all adapters at once, but then how do we inject a different API key into each?

One possible solution would be to have a secrets.json configured with all secrets required for integration tests. Then every test can read from there.

Originally posted by @krebernisak in #218 (comment)

Sequences processing semantics [Draft]

Some of our adapters like the PoR bitcoin indexer adapters process sequenced data.

Adapters that process sequences should only change items that they know how to process. If they don't have the ability to process an item, they should add a warning. Maybe the next adapter in the chain will have the know-how to process those items and possibly remove the warning.

If they have the ability to process the item, but the item is found to be invalid, the error should be thrown as there is no way to continue processing this sequence. For example, if we have an adapter that processes bitcoin/mainnet addresses. That adapter should add warnings for all bitcoin/testnet and ethereum/mainnet addresses. But if it finds an invalid bitcoin/mainnet address, then it throws an error!

We can additionally add a strict configuration flag, which we can use to configure the sequence processing adapter with: “No warnings, all errors.” Or in other words, you need to be able to process everything or nothing.
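
As a sketch of these semantics (types, names, and the network label are illustrative; isValidBitcoinAddress stands in for whatever validation the adapter actually uses):

declare function isValidBitcoinAddress(address: string): boolean

// Warn on items we don't know how to process; throw on items we do know how to process but that are invalid
const processSequence = (items: { network: string; address: string }[], strict = false) =>
  items.map((item) => {
    if (item.network !== 'bitcoin-mainnet') {
      if (strict) throw new Error(`Cannot process ${item.network} item in strict mode`)
      return { ...item, warning: 'unsupported network, left unprocessed' }
    }
    if (!isValidBitcoinAddress(item.address)) throw new Error(`Invalid address: ${item.address}`)
    return { ...item, processed: true }
  })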

TODO:

  • How sequence processing semantic should work
  • How we should be able to chain different adapters with different processing abilities
  • How error handling should work
  • How can we extract this in a framework to easily add sequence processing abilities to an adapter

401 coinapi won't work

Test:

cd https://github.com/smartcontractkit/external-adapters-js/tree/master/coinapi
npm i
API_KEY=xyz npm test

Result:

{"message":"Could not reach endpoint: \"Request failed with status code 401\"","level":"error","instanceId":"1bc8f1c1-592c-42af-b217-ac3ff7c1d0c9","timestamp":"2020-10-27T05:37:12.259Z"}

Adapters should automatically export their name

The discussion about this started here.

This was first used as an ENV prefix to avoid naming collisions when using two or more adapters in the same env. So for now it was a simple hardcoded uppercase string.

What we should do now to make this more robust is this:

  • Import the name from package.json, remove the -adapter suffix, and export that as NAME.
  • Create a toEnvName util function that will map it to a safe string to use as ENV prefix to avoid naming collisions.

Environment variable names used by the utilities in the Shell and Utilities volume of IEEE Std 1003.1-2001 consist solely of uppercase letters, digits, and the '_' (underscore) from the characters defined in Portable Character Set and do not begin with a digit.
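
A sketch of such a toEnvName helper, following the POSIX rule quoted above (the implementation is illustrative only):

// '@chainlink/coingecko-adapter' -> 'COINGECKO': strip the scope and '-adapter', keep [A-Z0-9_], no leading digit
const toEnvName = (packageName: string): string => {
  const name = packageName.replace(/^@\w+\//, '').replace(/-adapter$/, '')
  const envName = name.toUpperCase().replace(/[^A-Z0-9_]/g, '_')
  return /^\d/.test(envName) ? `_${envName}` : envName
}

// toEnvName('@chainlink/coingecko-adapter') === 'COINGECKO'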

Refactor CMC price endpoint & add tests

Seems good, left only one potential change.

The price endpoint though is quite complicated. We might need to add various test cases regarding slug/cid + to do some good acceptance testing. There is probably room for refactoring, either now or at some point, once we cover all cases with tests.

Agreed that there is probably room for refactoring and additional tests, but for the sake of staying in the scope of the TS refactor and hitting this release I think we should instead open a new ticket for that.

Opening one now for CMC price endpoint refactor.

Originally posted by @justinkaseman in #218 (comment)

Coingecko EA not working properly with the cache

Typically, when a job hits the api directly, the response looks like this:

{"jobRunID":"1","result":0.02570336,"statusCode":200,"data":{"result":0.02570336}}

And when it hits the cache, a successful response looks something like this:

{"result":0.025649,"maxAge":60000,"statusCode":200,"data":{"status":200,"title":"OK","description":"Successful request","payload":{"timestamp":1615461840000,"pair":"bal_eth","price":"0.025649","volume":"0.95"},"result":0.025649}}

For coingecko, we get this:

{"maxAge":60000,"statusCode":200,"data":{"result":0.025649}}

The jobspecs are looking for a result field at the top level. Can we please add it so that all caches send the result to the top level?
