formancehq / ledger
A Programmable Core Ledger
License: MIT License
Hi there,
I tried displaying the UI just now and it comes up with this:
there is data in the quickstart ledger:
{"cursor":{"page_size":15,"has_more":false,"total":3,"remaning_results":0,"data":[{"address":"world","contract":"default"},{"address":"users:001","contract":"default"},{"address":"central_bank","contract":"default"}]},"ok":true}
Is your feature request related to a problem? Please describe.
In some scenarios we need to create transactions that happened in the past, not at the moment they were committed.
Summary
Solution proposal
Make the API accept a backdate property:
{
"postings": [
{
"source": "alice",
"destination": "teller",
"amount": 100,
"asset": "COIN"
},
{
"source": "teller",
"destination": "alice",
"amount": 5,
"asset": "GEM"
}
],
"backdate": "$timestamp"
}
This should update the balances accordingly, or throw an error if subsequent transactions cannot be applied, to avoid leaving the system in an invalid state.
Summary
We should be able to filter transactions by metadata.
Solution proposal
We should implement the same API as the one exposed on the accounts endpoint.
After "go build" with CGO_ENABLED=0 or CGO_ENABLED=1, pgsql storage is not working.
But with "go run" all is OK.
For PGSQL the error is {"err":"initializing driver: sql: unknown driver \"pgx\" (forgotten import?)","ok":false}
Summary
We should be able to filter transactions by metadata
Solution proposal
GET /:ledger/transactions?meta:key=value
GET /:ledger/accounts?meta:key=value
The match should be done on an exact value.
Because the metadata object on transactions and accounts is always stored key by key, with each value stored as a JSON string, we should be able to pass any type as the value, scalar or compound.
Describe alternatives you've considered
Nothing yet, happy to discuss alternative ideas
Additional context
Follow-up of a discord discussion
This kind of code is duplicated 3 times.
Maybe we could have a utility method to encode/decode the pagination token.
With generics, this becomes an easy task.
Originally posted by @gfyrag in #243 (comment)
Is your feature request related to a problem? Please describe.
Currently metadata records are not assigned an ID or timestamp when saved to the ledger's DB.
Solution proposal
Similar to other methods on the ledger, add logic to calculate the next metadata ID and set it on the ID field of the record. Get the current system time and add the timestamp to the record as well.
To do this we would either need to update the storage.Store interface's SaveMeta method to accept an ID and timestamp, or change the core.Metadata type to be a struct which has an ID and timestamp as fields in addition to the map[string]json.RawMessage.
After I downloaded and installed the pre-compiled binary "numary_x.y.z_Windows-64bit.zip",
I followed the tutorials:
numary config init
numary storage init
numary version
Version: 1.7.1
Date: 2022-08-02T10:37:19Z
Commit: 22b2d2c
numary server start --debug
ok
but when I start testing
intro.num:
send [COIN 100] (
source = @world
destination = @centralbank
)
numary exec dunshire intro.num
I get:
time="2022-08-10T16:07:33+01:00" level=fatal msg=EOF
in the client side :
numary exec quickstart example.num
time="2022-08-10T05:44:44+01:00" level=fatal msg=EOF
and this error in the server side:
time="2022-08-10T16:18:02+01:00" level=debug msg="QueryRowContext: SELECT ledger FROM ledgers WHERE ledger = ? [dunshire]"
time="2022-08-10T16:18:02+01:00" level=debug msg="ExecContext: INSERT INTO ledgers (ledger, addedAt) VALUES (?, ?) ON CONFLICT DO NOTHING [dunshire 2022-08-10 16:18:02.8947982 +0100 CET m=+7.719215001]"
time="2022-08-10T16:18:02+01:00" level=debug msg="Initialize store"
time="2022-08-10T16:18:02+01:00" level=info msg=Request ip=127.0.0.1 latency=2.3679ms method=POST path=/dunshire/script status=500 user_agent=Go-http-client/1.1
time="2022-08-10T16:18:02+01:00" level=error msg="initializing ledger store: open migrates\0-init-schema: file does not exist"
PS: I use sqlite as the driver
...
server:
  http:
    basic_auth: ""
    bind_address: localhost:3068
storage:
  cache: true
  dir: c:/.numary/data
  driver: sqlite
  postgres:
    conn_string: postgresql://localhost/postgres
  sqlite:
    db_name: numary
ui:
  http:
    bind_address: localhost:3068
version: 1.7.1
I think numary is the right fit for a project I'm planning, and I'm wondering if adding descriptions (and perhaps filtering by them in GET /{ledger}/transactions, see #30) would be a planned feature? In my case, it would be useful to correlate the initial set of transactions from a public blockchain to the internal numary ledger.
For both this and #30, I could try a PR when I get to coding the accounting part of my project. (that is, if you consider these features would be useful as part of numary)
Describe the bug
I noticed when I benchmarked ledger, I sometimes would get a 200 response with no content in the response body, but the ledger logs would indicate: internal errors executing script: conflict error on reference.
After doing some digging, the true cause is duplicate key value violates unique constraint "transactions_id_key" when inserting into the transactions table (possibly the same issue is present with other tables too).
The issue happens because you seem to be incrementing the ID in code, not by using a DB auto-increment feature. Since there is no locking around this, this kind of conflict is going to arise sooner or later when handling concurrent requests.
To Reproduce
Steps to reproduce the behavior:
Just have a script/benchmarking tool do a few requests to ledger at the same time.
For instance, I use Bombardier:
bombardier -c 50 -m POST -f bench.json -d 10s http://localhost:3068/example/script
Where bench.json can be any script posting, but for instance:
{"plain":"vars {\n monetary $money_authorized\n account $destination\n}\n\nsend $money_authorized (\n source = @world\n // @user:c36a3385-d94e-485d-9ccd-3682507d0fc2:authorized:b394c68d-0e4b-4e09-a634-3449d054edd5\n destination = $destination\n)","vars":{"money_authorized":{"amount":2,"asset":"EUR/2"},"destination":"user:bf11c59beaca47cda9d477cd62181e55:authorized:47aa2ce0db61498bb02d9f0ca2728425"}}
Expected behavior
I expect a bunch of things:
Environment:
This would be useful - for example - to show a user their transaction history, or to see a specific transaction.
Numary version: 1.0.0-beta.3
Actual behavior:
The reference field is not populated on GET /transactions (affects both SQLite & Postgres storages).
Expected behavior:
The field should be populated (the data is persisted but not retrieved)
Steps to reproduce:
curl -X POST -H "Content-Type: application/json" http://localhost:3068/test-01/transactions -d \
'{
"postings": [
{
"source": "world",
"destination": "test:001",
"amount": 100,
"asset": "USD/2"
}
],
"reference": "foobar"
}'
curl -X GET http://localhost:3068/test-01/transactions | jq .
Logs: N/A
Adding OpenTracing traces in Ledger: https://github.com/opentracing/opentracing-go
The list of ledgers is stored in the config file, which causes a persistence issue for cloud-native environments or multi-ledger setups.
Hi there,
I am currently trying to "install" numary on my Ubuntu machine and I cannot untar the file.
Full log included:
philip@NB:~$ curl -O https://github.com/numary/ledger/releases/download/1.0.0-alpha.19/ledger_1.0.0-alpha.19_Linux_x86_64.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 649 100 649 0 0 7131 0 --:--:-- --:--:-- --:--:-- 7131
philip@NB:~$ tar -zxvf ledger_1.0.0-alpha.19_Linux_x86_64.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
philip@NB:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: focal
Describe the bug
When following the tutorial with latest master (and SQLite), I've found that the transactions endpoint response contains empty values. From a quick investigation this occurs when a NULL reference field is inserted.
To Reproduce
Steps to reproduce the behavior:
{
"postings": [
{
"source": "world",
"destination": "central_bank",
"asset": "GEM",
"amount": 100
},
{
"source": "central_bank",
"destination": "users:001",
"asset": "GEM",
"amount": 100
}
]
}
transactions endpoint response:
{
"cursor": {
"page_size": 15,
"has_more": false,
"total": 1,
"remaining_results": 0,
"data": [
{
"txid": 0,
"postings": [
{
"source": "",
"destination": "",
"amount": 0,
"asset": ""
},
{
"source": "",
"destination": "",
"amount": 0,
"asset": ""
}
],
"reference": "",
"timestamp": "2021-10-21T18:04:14-03:00",
"hash": "a8bd561eead0383290ef6a8181a4068adcac6eadd7c143a080fafccbc4fd6f44",
"metadata": {}
}
]
},
"err": null,
"ok": true
}
SELECT t.id, t.timestamp, t.hash, t.reference, p.source, p.destination, p.amount, p.asset FROM transactions AS t LEFT JOIN postings AS p ON p.txid = t.id WHERE t.id IN (SELECT txid FROM postings GROUP BY txid ORDER BY txid desc LIMIT 16) ORDER BY t.id
The query output looks as follows:
Expected behavior
The transactions endpoint response should contain the posting field contents (source, destination, amount, asset).
Potential solution
If we reference the DB transaction value just like the other values, then we ensure that an empty string is used when no reference is specified:
diff --git a/storage/sqlite/transactions.go b/storage/sqlite/transactions.go
index fa09dfe..6533947 100644
--- a/storage/sqlite/transactions.go
+++ b/storage/sqlite/transactions.go
@@ -133,16 +133,10 @@ func (s *SQLiteStore) SaveTransactions(ts []core.Transaction) error {
tx, _ := s.db.Begin()
for _, t := range ts {
- var ref *string
-
- if t.Reference != "" {
- ref = &t.Reference
- }
-
ib := sqlbuilder.NewInsertBuilder()
ib.InsertInto("transactions")
ib.Cols("id", "reference", "timestamp", "hash")
- ib.Values(t.ID, ref, t.Timestamp, t.Hash)
+ ib.Values(t.ID, t.Reference, t.Timestamp, t.Hash)
sqlq, args := ib.BuildWithFlavor(sqlbuilder.SQLite)
With the above change, the output looks as expected:
{
"cursor": {
"page_size": 15,
"has_more": false,
"total": 1,
"remaining_results": 0,
"data": [
{
"txid": 0,
"postings": [
{
"source": "world",
"destination": "central_bank",
"amount": 100,
"asset": "GEM"
},
{
"source": "central_bank",
"destination": "users:001",
"amount": 100,
"asset": "GEM"
}
],
"reference": "",
"timestamp": "2021-10-21T18:16:42-03:00",
"hash": "4bc57a65d20c512497e590ba17830693d601f720b6b36a72b28e401fe24c1d18",
"metadata": {}
}
]
},
"err": null,
"ok": true
}
As an alternative, we could modify the field at the database level to be non-NULL or use a default (empty) value. I would prefer this small code change; the initial database structure could be improved later.
Any opinions?
We should be able to revert a transaction without having to build the reverse transaction
POST /:ledger/transactions/:txid/revert
Given the transaction:
{
"txid": 0,
"postings": [
{
"source": "world",
"destination": "users:001",
"amount": 100,
"asset": "COIN"
},
{
"source": "users:001",
"destination": "payments:001",
"amount": 100,
"asset": "COIN"
}
]
}
The revert endpoint should attempt to commit the exact reverse transaction:
{
"txid": 1,
"postings": [
{
"source": "payments:001",
"destination": "users:001",
"amount": 100,
"asset": "COIN"
},
{
"source": "users:001",
"destination": "world",
"amount": 100,
"asset": "COIN"
}
]
}
Currently numscript supports only the monetary type for dynamic assets. But in some cases we might need to use asset and amount independently, like this, in one transaction:
vars {
  asset $asset
  number $amount
}
send [$asset $amount] (
source = @world
destination = @users:001
)
send [$asset *] (
source = @users:001
destination = @platform:profit
)
Hello everyone,
Here we use NATS JetStream as our core streaming and messaging layer and we want to keep that.
What would be the best way or starting point for us to add support for NATS to ledger?
Thanks!
Is your feature request related to a problem? Please describe.
Regarding the fact that accounts can't have negative balances, is it possible to overcome this in some way?
(From the Formance documentation: Accounts in Formance Ledger cannot go negative! (Except for the special @world account).)
Is database connection pooling supported?
I see some commits in the past that seem to indicate this was supported, but these days I can't find any reference to it anymore in the codebase.
If it was removed, why? Seems like a very useful feature to keep latency down.
Some script postings to the ledger take almost 1 second, which is quite slow for our use case where performance is really important.
Description
Summary
We should be able to filter transactions by reference
Solution proposal
GET /:ledger/transactions?reference=1234
Expected behavior
curl -X POST -H "Content-Type: application/json" http://localhost:3068/test-01/transactions -d \
'{
"postings": [
{
"source": "world",
"destination": "test:001",
"amount": 100,
"asset": "USD/2"
}
],
"reference": "foobar"
}'
curl -X GET http://localhost:3068/test-01/transactions?reference=foobar | jq .
# ^ should return the transaction
Describe the bug
Ledger is making a connection to Segment even when the Segment integration is disabled. I use Pi-Hole and this results in errors being printed in the log.
To Reproduce
Block api.segment.io DNS resolution via some means like Pi-Hole etc.
Expected behavior
Since the integration is disabled, there should be no client created / connection attempted.
Logs
segment 2022/10/17 21:34:23 ERROR: sending request - Post "https://api.segment.io/v1/batch": dial tcp [::]:443: connect: connection refused
segment 2022/10/17 21:34:23 ERROR: sending request - Post "https://api.segment.io/v1/batch": dial tcp [::]:443: connect: connection refused
segment 2022/10/17 21:34:23 ERROR: sending request - Post "https://api.segment.io/v1/batch": dial tcp [::]:443: connect: connection refused
Environment (please complete the following information):
Additional context
The problem goes away if I disable Pi-Hole blocking.
Referring to #22 - we need to add another config param for the UI bind address
Describe the bug
When I query GET /transactions?account=some-account-here, it ignores the account query parameter.
To Reproduce
curl --request GET \
  --url 'http://localhost:3068/quickstart/transactions?account=211212' \
  --header 'Accept: application/json' | jq
Expected behavior
Return the transactions that match the desired account.
Logs
If applicable, add logs to help explain your problem.
Environment (please complete the following information):
Version: 1.0.0-beta.11
Date: 2021-11-23T11:54:22Z
Commit: 48ca56b
Describe the bug
I tried to run the very first example from Getting Started but I got an error.
To Reproduce
go run main.go server start
echo "
send [USD/2 599] (
source = @world
destination = @payments:001
)
send [USD/2 599] (
source = @payments:001
destination = @rides:0234
)
send [USD/2 599] (
source = @rides:0234
destination = {
85/100 to @drivers:042
15/100 to @platform:fees
}
)
" > example.num
go run main.go exec quickstart example.num
Expected behavior
The transactions should be committed successfully.
Logs
Got error:
ERRO[0032] statement: INSERT INTO log(type, date, data, hash) SELECT v.type, v.timestamp, v.data, '' FROM ( SELECT id as ord, 'NEW_TRANSACTION' as type, timestamp, '{"txid": ' || id || ', "postings": ' || postings || ', "metadata": {}, "timestamp": "' || timestamp || '", "reference": "' || CASE WHEN reference IS NOT NULL THEN reference ELSE '' END || '"}' as data FROM transactions UNION ALL SELECT 10000000000 + meta_id as ord, 'SET_METADATA' as type, timestamp, '{"targetType": "' || UPPER(meta_target_type) || '", "targetId": ' || CASE WHEN meta_target_type = 'transaction' THEN meta_target_id ELSE ('"' || meta_target_id || '"') END || ', "metadata": {"' || meta_key || '": ' || CASE WHEN json_valid(meta_value) THEN meta_value ELSE '"' || meta_value || '"' END || '}}' as data FROM metadata ) v ORDER BY v.timestamp ASC, v.ord ASC;: failed to run statement 7: no such function: json_valid [UNKNOWN]
INFO[0032] Request ip=127.0.0.1 latency=14.504834ms method=POST path=/quickstart/script status=503 user_agent=Go-http-client/1.1
ERRO[0032] initializing ledger store: statement: INSERT INTO log(type, date, data, hash) SELECT v.type, v.timestamp, v.data, '' FROM ( SELECT id as ord, 'NEW_TRANSACTION' as type, timestamp, '{"txid": ' || id || ', "postings": ' || postings || ', "metadata": {}, "timestamp": "' || timestamp || '", "reference": "' || CASE WHEN reference IS NOT NULL THEN reference ELSE '' END || '"}' as data FROM transactions UNION ALL SELECT 10000000000 + meta_id as ord, 'SET_METADATA' as type, timestamp, '{"targetType": "' || UPPER(meta_target_type) || '", "targetId": ' || CASE WHEN meta_target_type = 'transaction' THEN meta_target_id ELSE ('"' || meta_target_id || '"') END || ', "metadata": {"' || meta_key || '": ' || CASE WHEN json_valid(meta_value) THEN meta_value ELSE '"' || meta_value || '"' END || '}}' as data FROM metadata ) v ORDER BY v.timestamp ASC, v.ord ASC;: failed to run statement 7: no such function: json_valid [UNKNOWN] [UNKNOWN]
Environment (please complete the following information):
Additional context
I found out the root cause is that the project uses the package github.com/mattn/go-sqlite3 v1.14.9, which bundles SQLite version 3.36.0. SQLite 3.36 does not enable JSON support by default, so we get that error when using the json_valid function.
I know sqlite3 is not a production storage backend, but this error will block new developers.
I propose changing the version of go-sqlite3 from v1.14.9 to v1.14.14, which bundles SQLite version 3.38.5.
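In go.mod, the proposed bump would amount to updating the require line (exact module path as used by the project; version per the proposal above):

```
require github.com/mattn/go-sqlite3 v1.14.14
```

Running `go mod tidy` afterwards updates go.sum to match.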
We should add a console command to verify the hash chain of transactions. This will allow us to see whether the database has been tampered with.
Summary
Provide structured, leveled logging to improve the dev experience of numary, first suggested by Alex K in this comment. This also allows for additional control over what is logged in the future.
Solution proposal
Implement leveled logging with zap. Also, clean up the places around the app that check viper.GetBool("debug"), and replace them with debug-level logs.
An example of providing an Fx app with a logger (specifically zap) is shown in the Fx docs.
Describe alternatives you've considered
Other popular logging packages, such as logrus, could be used instead. Zap seems to be a natural fit for an Fx app but I'm not attached to the idea.
Summary
Some of the CLI commands actually try to connect to the ledger HTTP API, assuming localhost. This causes an issue when working with remote instances
Solution proposal
We should add a host flag to override the default localhost.
To Reproduce
The following script can be executed, passing the checkout_payment_reference variable:
vars {
monetary $money_captured
account $source
account $destination
string $checkout_payment_reference
}
set_tx_meta("checkout_payment_reference", $checkout_payment_reference)
Expected behavior
The transaction metadata key checkout_payment_reference should be set.
Logs
errorCode: COMPILATION_FAILED
errorMessage: could not set variables: invalid json for variable of checkout_payment_reference of type string: invalid type
transaction: null
Environment (please complete the following information):
Hi there,
I tried opening the Dashboard via the SSH link i have, but it just came up with the following error:
2021/07/22 12:57:08 exec: "xdg-open": executable file not found in $PATH
I have already changed the HTTP bind address to the external IP.
Is there some URL I can visit to open the Dashboard or does it have to be started manually?
thanks,
Philip