interledger / rafiki

A comprehensive open-source Interledger service for wallet providers, enabling them to offer Interledger functionality to their users.

Home Page: https://rafiki.dev/

License: Apache License 2.0

Languages: Shell 0.11%, JavaScript 2.68%, TypeScript 96.87%, Dockerfile 0.13%, Smarty 0.08%, CSS 0.01%, Go 0.13%
Topics: interledger, rafiki, ilp, webmonetization, hacktoberfest, hacktoberfest-accepted, hacktoberfest2022

rafiki's Introduction

Rafiki

What is Rafiki?

Rafiki is open source software that provides an efficient solution for an Account Servicing Entity to enable Interledger functionality on its users' accounts.

This includes:

  • sending and receiving payments (via SPSP and Open Payments)
  • allowing third-party access to initiate payments and view transaction data (via Open Payments)

❗ Rafiki is intended to be run by Account Servicing Entities only and should not be used in production by non-regulated entities.

Rafiki is made up of several components, including an Interledger connector, a high-throughput accounting database called TigerBeetle, and several APIs.

Additionally, Rafiki includes a reference implementation of a GNAP authorization server, which handles access control for the Open Payments API. For more information on the architecture, check out the Architecture documentation.

New to Interledger?

Never heard of Interledger before, or would you like to learn more? The Interledger website, https://interledger.org, is an excellent place to start.

Contributing

Please read the contribution guidelines before submitting contributions. All contributions must adhere to our code of conduct.

Community Calls

Our Rafiki community calls are open to our community members. We have them every Tuesday at 14:30 GMT, via Google Meet.

Google Meet joining info

Video call link: https://meet.google.com/sms-fwny-ezc

Or dial: (GB) +44 20 3956 0467 PIN: 140 111 239#

More phone numbers: https://tel.meet/sms-fwny-ezc?pin=5321780226087

Add to Google Calendar

Local Development Environment

Prerequisites

Environment Setup

# install node from `./.nvmrc`
nvm install
nvm use

# install pnpm
corepack enable

# if moving from yarn, first run
pnpm clean

# install dependencies
pnpm i

Local Development

The local environment is the best way to explore Rafiki on your machine. The localenv directory contains instructions for setting up a local playground. Please refer to the README of each individual package for more details.

Useful commands

# build all the packages in the repo:
pnpm -r build

# build specific package (e.g. backend):
pnpm --filter backend build

# generate types from specific package GraphQL schema:
pnpm --filter backend generate

# run individual tests (e.g. backend)
pnpm --filter backend test

# run all tests
pnpm -r --workspace-concurrency=1 test

# format and lint code:
pnpm format

# check lint and formatting
pnpm checks

# verify code formatting:
pnpm check:prettier

# verify lint
pnpm check:lint

rafiki's People

Contributors

alexlakatos, barnarddt, beniaminmunteanu, blaircurrey, cairin, cioraz, dclipp, donchangfoot, dragosp1011, golobitch, huijing, joblerstune, jpsousa78, kit-t, koekiebox, lengyel-arpad85, matdehaast, melissahenderson, mkurapov, njlie, omertoast, raducristianpopa, renovate[bot], rico191013, rluckom-coil, sabineschaller, sentientwaffle, sublimator, traviscrist, wilsonianb


rafiki's Issues

Webmonetization Architecture

The goal of this issue is to discuss and reach a consensus on how to architect Web Monetization within Rafiki. I want us to begin by determining and agreeing on what the end user (operator) will want, and then, based on that, work out how best to implement it.

As a starting point, I propose a few user stories:

  1. As an End User, I want to be able to see my Web Monetization earnings segmented by time.
  2. As an End User, I want to be able to see my Web Monetization earnings per session.
  3. As an End User, I want to be able to see every single Web Monetization packet.

Transaction History for Web Monetization

Public SPSP Endpoint

GET /pay/:id

This endpoint is described in the SPSP-STREAM project but should be upgraded in the following way (a sketch of the resulting handler flow follows the list):

  • Instead of using the user's root ILP account for the connection tag, check the postgres DB to see whether there is a currently active WM invoice
    • Invoices should be stored with metadata, an optional expiry, and a reference to the account service item representing the resource
  • If there is no currently active WM invoice, make a call to the account service and a call to postgres to create one
    • If the current WM invoice has expired, it should be disabled via the account service
  • Use the ILP account ID of the active invoice as the connection tag when generating SPSP credentials
  • Return the destinationAccount and sharedSecret
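
A rough sketch of that flow, using a Koa-style route handler; the invoice and credential service interfaces below are hypothetical stand-ins for illustration, not Rafiki's actual services:

// Sketch only: the service interfaces and field names are assumptions that
// mirror the flow described above, not Rafiki's actual code.
import Router from '@koa/router'

interface Invoice {
  id: string
  ilpAccountId: string
  expiresAt?: Date
}

interface InvoiceService {
  getActive(userId: string): Promise<Invoice | undefined>
  create(userId: string): Promise<Invoice>
  disable(invoiceId: string): Promise<void>
}

interface StreamCredentialService {
  // Generates SPSP credentials, using the given string as the connection tag.
  generate(connectionTag: string): { destinationAccount: string; sharedSecret: string }
}

export function spspRoutes(
  invoices: InvoiceService,
  stream: StreamCredentialService
): Router {
  const router = new Router()

  router.get('/pay/:id', async (ctx) => {
    // Look for a currently active WM invoice instead of using the root ILP account.
    let invoice = await invoices.getActive(ctx.params.id)

    // Disable an expired invoice via the invoice service and fall through to creating a new one.
    if (invoice?.expiresAt && invoice.expiresAt < new Date()) {
      await invoices.disable(invoice.id)
      invoice = undefined
    }
    if (!invoice) {
      invoice = await invoices.create(ctx.params.id)
    }

    // The active invoice's ILP account ID becomes the connection tag.
    const { destinationAccount, sharedSecret } = stream.generate(invoice.ilpAccountId)
    ctx.body = { destination_account: destinationAccount, shared_secret: sharedSecret }
  })

  return router
}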

Balance & Transaction History

This should be implemented as part of a graphQL API on the API backend. For now we should make it accessible to the wallet operator, and in future we will make it available to the user and to authorized third parties

scalar Date

type Query {
	user: User
}

type Account {
	id: ID!
	balance: Amount!
}

type User {
	balance: Amount

	# in the future it should be possible to fetch a transaction list; for now we'll start with just a list of invoices
	invoices(count: Int, skip: Int): [Invoice]
}

type Amount {
	amount: String!
	currency: String!
	scale: Int
}

type Invoice {
	receivedAmount: Amount!
	maximumAmount: Amount
	active: Boolean!
	createdAt: Date!
	expiresAt: Date
	description: String
}
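
For illustration, a client query against this proposed schema could look like the following; the endpoint URL is hypothetical and the schema itself is still a proposal, not an implemented API:

// Illustrative only: assumes the proposed schema above is served at a
// hypothetical local GraphQL endpoint.
async function listInvoices(): Promise<void> {
  const query = `
    query {
      user {
        balance { amount currency scale }
        invoices(count: 10, skip: 0) {
          receivedAmount { amount currency }
          active
          createdAt
        }
      }
    }
  `
  const res = await fetch('http://localhost:3001/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query })
  })
  const { data, errors } = await res.json()
  if (errors) throw new Error(JSON.stringify(errors))
  console.log(data.user.invoices)
}

listInvoices()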

Implement send by source amount API

  • Send an amount in the sender's units to a payment pointer
  • We'd also add functionality to allow one API key per account. A wallet can generate this key and give it to the user, allowing the user to initiate payments programmatically

Application account abstraction

On the Rafiki call today, @matdehaast brought up questions around the higher-layer account abstraction, which at the moment is very ambiguous. At various times, we've referred to these as "entities," "users," and "identities." Here, I'll just refer to them as "application accounts."

I'd like to clarify the functionality of this abstraction, how it's distinct from Interledger accounts, and thoughts on naming.

Interledger accounts track a financial accounting balance with a counterparty affected by ILP packets. (I use "counterparty" to include end users whose funds are custodied at the provider and ILP peers.) Interledger accounts expose routing and peering functionality, and permissioning features to delegate their balance to Interledger sub-accounts.

Application accounts initiate, request, execute, and track payments sent or received via one or more Interledger accounts. To support this, application accounts expose an Open Payments account, invoices, and mandates; STREAM functionality such as sender and receiver; and various payment aggregation features. If in the future we support identity attestations, for verifying the recipient of payments initiated by third parties, they may also represent some kind of identity (or multiple identities).

To summarize, I see the distinction as: Interledger accounts deal in packets; application accounts deal in payments.

The challenge is, these application accounts may be held by end users, merchants, businesses, or other entities. We don't want to put too many constraints on how the operator of Rafiki models these account holders in their own system, or how they're permissioned.

For these reasons, I don't think we should use "user," "identity," or "entity." So, spitballing a few ideas on what we could call these application accounts for common terminology:

  • Bearer -- I like this because it both represents a holder of funds, and an entity that invoices or receives payment. But, this could be confusing or not necessarily map to its existing financial meaning.
  • PaymentAccount -- simple and descriptive. Will it apply to future use cases?
  • PaymentParty -- too vague? too fun?

transfer Mutation Resolvers

TODO

  • Scaffold resolver to match generated types (a rough scaffold sketch follows the Blockers list)
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation
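
As a rough illustration of the scaffolding step, assuming graphql-codegen-style generated resolver types and a hypothetical TransferService; none of these names are taken from the repo:

// Illustrative scaffold only: the MutationResolvers type would normally come
// from the generated GraphQL types; the service interface is a placeholder.
interface TransferInput {
  sourceAccountId: string
  destinationAccountId: string
  amount: string
}

interface TransferService {
  create(input: TransferInput): Promise<{ id: string; success: boolean }>
}

interface ApolloContext {
  transferService: TransferService
}

// Shape of the generated resolver type this scaffold would conform to.
type MutationResolvers = {
  transfer: (
    parent: unknown,
    args: { input: TransferInput },
    ctx: ApolloContext
  ) => Promise<{ id: string; success: boolean }>
}

export const transferResolvers: MutationResolvers = {
  transfer: async (_parent, args, ctx) => {
    // Blocked on the service implementation: delegate to it once it exists.
    return ctx.transferService.create(args.input)
  }
}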

createWebhook Mutation Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

deleteWebhook Mutation Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

webhook Query Resolver

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

Expose account API to backend via graphQL

Instead of using REST, we can expose the account service API to its consumers (the API backend and the operator) over GraphQL. This has the advantage of fetching only the necessary data, especially in situations where the data comes from only one of Postgres or TigerBeetle.

webhooks IlpAccount Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

createDeposit Mutation Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

Merge accounts package into connector package

Problem

The current split between the accounts and connector packages is inconsistent. They are implemented as separate packages, yet the accounts package is imported into the connector package as if it were a module, even though it is implemented as a standalone service.

Constraints

  • The connector logic and the ilp accounts logic should be separate.
    • They should be isolated services.
  • The connector service imports types from the accounts service.
  • The connector and accounts service should communicate in-process for performance.

Proposal

This proposal follows from #52, to be implemented alongside/after.

The connector package should be renamed the interledger package, and the accounts logic (currently in the accounts package) should be implemented as a service within the new interledger package.

updateWebhook Mutation Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

withdrawal Query Resolver

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

Implement Accounts Service

The responsibility of the accounts service is to manage Interledger account configuration and balances (a rough sketch of this surface follows the list below).

It's the only service that speaks to TigerBeetle directly.

  • It's used by the application-layer API in order to implement the Open Payments APIs
  • It's accessed by the administrator to manage liquidity and deposits/withdrawals
  • The connector relies on its ILP config and balances to process packets.
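
A rough sketch of the kind of surface this implies; all method and type names here are assumptions for illustration, not the actual interface of the accounts package:

// Illustrative only: method and type names are assumptions, not the
// actual accounts service interface in this repo.
interface IlpAccount {
  id: string
  asset: { code: string; scale: number }
  disabled: boolean
}

interface AccountsService {
  // Used by the application-layer API (Open Payments) and the connector.
  getAccount(id: string): Promise<IlpAccount | undefined>
  getBalance(id: string): Promise<bigint | undefined>

  // Used by the administrator to manage liquidity.
  deposit(accountId: string, amount: bigint): Promise<void>
  withdraw(accountId: string, amount: bigint): Promise<void>

  // Used when processing packets to move value between accounts.
  transferFunds(
    sourceAccountId: string,
    destinationAccountId: string,
    amount: bigint
  ): Promise<void>
}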

Implement Rate Backend API

  • The basic version of this should be baked into the connector, but we should expose a configurable option where the wallet can specify an endpoint, with a well-defined interface, to retrieve rates from
    • We should respect Cache-Control headers on the returned information so that the wallet can configure how long its rates are good for (a sketch follows this list)
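
A minimal sketch of the pull side of this, assuming a hypothetical JSON rates endpoint and the global fetch API; only the Cache-Control handling is the point here:

// Illustrative sketch only: the endpoint shape and response format are
// assumptions, not an interface Rafiki actually defines.
interface Rates {
  base: string
  rates: Record<string, number>
}

export class RateClient {
  private cached?: { rates: Rates; expiresAt: number }

  constructor(private readonly url: string) {}

  async getRates(): Promise<Rates> {
    // Serve from cache while the previously fetched rates are still valid.
    if (this.cached && Date.now() < this.cached.expiresAt) {
      return this.cached.rates
    }
    const res = await fetch(this.url)
    if (!res.ok) throw new Error(`rate backend returned ${res.status}`)
    const rates = (await res.json()) as Rates

    // Respect the Cache-Control max-age header to decide how long the
    // wallet considers these rates good for.
    const cacheControl = res.headers.get('cache-control') ?? ''
    const match = /max-age=(\d+)/.exec(cacheControl)
    const maxAgeMs = match ? parseInt(match[1], 10) * 1000 : 0
    this.cached = { rates, expiresAt: Date.now() + maxAgeMs }
    return rates
  }
}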

Re: Redis restart

If Redis restarts, shouldn't the old rates still be usable when it comes back up? Any downtime less than a few minutes (or less than whatever the invalidation period is) shouldn't cause additional downtime in the push model.

Re: latency

I don't think it's acceptable for polling for rates to delay packet processing. In the pull model, rates should be polled more frequently than they're invalidated. (The poll interval could also be configurable).

Re: authentication

I think we're in agreement? I see the interaction as:

RAIO ←→ Rate microservice ←→ Existing rate service

RAIO and the rate microservice would be in the same cluster and not require auth.

The existing rate service could be anything: ECB, CoinMarketCap, Uphold's internal rate service, etc., and may have its own auth.

I was just noting that, as long as the intermediary rate microservice needs to exist and lives in the same cluster, it restarts whenever the cluster restarts.

withdrawals IlpAccount Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

deposits IlpAccount Resolvers

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

ilpAccounts Query Resolver

  • Complete implementation by ensuring all the defined data is returned, not just the account id.
  • Complete tests

Migration error running backend tests

I get the following when running yarn test or yarn workspace backend test locally on main:

Determining test suites to run...migration file "20210422134422_create_events_table.js" failed
migration failed with error: create table "events" ("id" uuid not null, "name" varchar(255) not null, "type" varchar(255) not null, "typeId" varchar(255) not null, "payload" jsonb not null, "createdAt" timestamptz default CURRENT_TIMESTAMP, "updatedAt" timestamptz default CURRENT_TIMESTAMP) - relation "events" already exists
error: create table "events" ("id" uuid not null, "name" varchar(255) not null, "type" varchar(255) not null, "typeId" varchar(255) not null, "payload" jsonb not null, "createdAt" timestamptz default CURRENT_TIMESTAMP, "updatedAt" timestamptz default CURRENT_TIMESTAMP) - relation "events" already exists
    at Parser.parseErrorMessage (/home/brandon/raio/.yarn/cache/pg-protocol-npm-1.5.0-390f8d9ed8-270e72d38a.zip/node_modules/pg-protocol/dist/parser.js:287:98)
    at Parser.handlePacket (/home/brandon/raio/.yarn/cache/pg-protocol-npm-1.5.0-390f8d9ed8-270e72d38a.zip/node_modules/pg-protocol/dist/parser.js:126:29)
    at Parser.parse (/home/brandon/raio/.yarn/cache/pg-protocol-npm-1.5.0-390f8d9ed8-270e72d38a.zip/node_modules/pg-protocol/dist/parser.js:39:38)
    at Socket.<anonymous> (/home/brandon/raio/.yarn/cache/pg-protocol-npm-1.5.0-390f8d9ed8-270e72d38a.zip/node_modules/pg-protocol/dist/index.js:11:42)
    at Socket.emit (node:events:376:20)
    at addChunk (node:internal/streams/readable:304:12)
    at readableAddChunk (node:internal/streams/readable:279:9)
    at Socket.Readable.push (node:internal/streams/readable:218:10)
    at TCP.onStreamRead (node:internal/stream_base_commons:192:23)

knex rollback says "Already at the base migration"

ILPAccount Payment Pointer mapping

Currently the assumption has been made that there is a 1:1 mapping between an ILPAccount and a payment pointer. This issue is to determine whether that is what we want.

Occasional accounts tests failures

The accounts service tests are occasionally failing in GitHub Actions with Error: Process completed with exit code 129.

This has happened in a number of pull requests, in which case re-running succeeded.

This appears to happen with yarn workspace accounts test, but not yarn test.

The error occurs after Jest globalSetup completes but before beforeAll in the account service's lone test file starts (there regularly appears to be a ~10 second delay between the two 😕).

syslog shows

kernel: [  110.144682] traps: node[1827] trap invalid opcode ip:7f5448bc12fd sp:7ffd26aafb60 error:0 in client.node[7f5448bc0000+98000]

Core file (uploaded as artifact)

ilpAccount Query Resolver

  • Complete implementation by ensuring all the defined data is returned, not just the account id.
  • Complete tests

deposit Query Resolver

TODO

  • Scaffold resolver to match generated types
  • Implement resolver
  • Test resolver

Blockers

  • No corresponding service implementation

Implement send by destination API

  • Send a payment in destination units to a payment pointer
  • This will require a quoting step so that the sender can figure out how much the payment will cost in their units and establish a maximum (see the sketch below)
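
A toy sketch of the two-step flow, with entirely hypothetical function names and rates; it only illustrates quoting the cost in sender units before committing to a maximum:

// Hypothetical API surface: these function names are not Rafiki's actual API;
// the stubs only illustrate the quote-then-send flow.
interface Quote {
  id: string
  destinationAmount: bigint // fixed amount in the receiver's units
  sourceAmount: bigint // estimated cost in the sender's units
}

// Stub quote step: a real implementation would probe the path to the
// payment pointer and estimate the source amount.
async function quote(paymentPointer: string, destinationAmount: bigint): Promise<Quote> {
  const estimatedRate = 2n // pretend 2 source units per destination unit
  console.log(`quoting payment to ${paymentPointer}`)
  return { id: 'quote-1', destinationAmount, sourceAmount: destinationAmount * estimatedRate }
}

// Stub send step: accepts the quote plus a sender-established maximum.
async function pay(quoteId: string, maxSourceAmount: bigint): Promise<'COMPLETED' | 'FAILED'> {
  return maxSourceAmount > 0n ? 'COMPLETED' : 'FAILED'
}

async function sendByDestinationAmount(): Promise<void> {
  // 1. Quote: learn what delivering 1000 destination units should cost.
  const q = await quote('$wallet.example/alice', 1000n)

  // 2. Send: cap the cost in sender units (quote plus 1% slippage here).
  const maxSourceAmount = q.sourceAmount + q.sourceAmount / 100n
  console.log(await pay(q.id, maxSourceAmount))
}

sendByDestinationAmount()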

Design first-party send APIs

  • These APIs would be exposed to the user of an account, but not baked into the standard of what could be delegated to third parties yet. This allows us to stay flexible and alter the APIs while Rafiki is still new.

Add deposit and withdrawal lookup

In order to support getDeposit and getWithdrawal GraphQL queries, the accounts service needs to be able to look up deposits and withdrawals by id.
Currently the id is the TigerBeetle transfer id.
Lookup would require either:

  • adding transaction and/or deposit and/or withdrawal postgres tables (a sketch of this option follows the list)
  • a TigerBeetle transfer lookup plus a postgres account table index by balance id
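
As a hedged sketch of the first option, a knex migration along these lines could add a withdrawals table keyed by the TigerBeetle transfer id; the table and column names are assumptions, not the actual schema:

// Sketch only: table and column names are assumptions for illustration.
import { Knex } from 'knex'

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable('withdrawals', (table) => {
    // The id doubles as the TigerBeetle transfer id, so getWithdrawal can
    // look the record up directly by the id the client already has.
    table.uuid('id').notNullable().primary()
    table.uuid('accountId').notNullable().index()
    table.bigInteger('amount').notNullable()
    table.timestamp('createdAt').defaultTo(knex.fn.now())
  })
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable('withdrawals')
}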

Standardize services implementation across packages

There is currently quite a large difference between the implementations of the services in the backend and connector packages.

The accounts package is not mentioned here because of #53, but it would need to conform to this too.

We need to:

  • Highlight and discuss the differences in implementations.
  • Decide on a single structure all packages should conform to.
  • Migrate packages that don't conform.
  • Document the structure for future collaborators.
