dipdup-io / metadata

Tezos TZIP-16/TZIP-12 metadata indexer
Home Page: https://ide.dipdup.io/?resource=https://metadata.dipdup.net/v1/graphql
Seems a bit odd to me, but maybe there's a specific reason? Decimal turns out to be numeric instead of bigint in the GraphQL schema, which makes for some odd queries. Maybe uint64 makes more sense here.
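For illustration, the mapping I believe is in play; the field name and tags here are hypothetical, not the indexer's actual struct:

    package models

    import "github.com/shopspring/decimal"

    // Hypothetical struct; only the type mapping matters.
    type Example struct {
        // A decimal field becomes Postgres numeric, which Hasura then
        // exposes as numeric in the GraphQL schema.
        TokenID decimal.Decimal `gorm:"column:token_id"`

        // A uint64 would become Postgres bigint, exposed as bigint in
        // GraphQL, which is simpler to filter and sort on:
        // TokenID uint64 `gorm:"column:token_id"`
    }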
In dipdup, it's currently possible to limit the block scope via config using the following settings:
    indexes:
      my_index:
        first_level: 1000000
        last_level: 2000000
For testing, it would be useful to be able to do the same thing in the metadata plugin. This would also limit the tester's S3 footprint, which is nice for cost savings!
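As a purely hypothetical sketch of what that could look like (these keys don't exist today; the naming just mirrors dipdup's):

    # hypothetical keys mirroring dipdup's first_level/last_level
    metadata:
      first_level: 1000000
      last_level: 2000000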
Some IPFS links won't resolve because the URL isn't encoded properly. For example, ipfs://bafybeid5xl7cwdaelgh2rrxeicul6v7xbuwvtgiywklui43todymiyxx3y/TZLAND#247_FINALE.glb (just imagine this could be a JSON file for the sake of this ticket) will fail to fetch because the # isn't URL-encoded. Filenames containing spaces appear to work fine, though.
I tried fixing this but can't quite figure out where it goes wrong, and adding additional URL encoding causes spaces to be double-encoded (%2520).
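A minimal sketch of one possible fix: escape each path segment exactly once, assuming the incoming path is not already percent-encoded (if it were, the % would be escaped again, which is exactly where a %2520 comes from):

    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    // escapeIPFSPath percent-encodes each path segment of an ipfs:// URI
    // so that characters like '#' survive the fetch. Hypothetical helper:
    // it assumes the input is NOT already encoded.
    func escapeIPFSPath(raw string) string {
        rest := strings.TrimPrefix(raw, "ipfs://")
        cid, path, found := strings.Cut(rest, "/")
        if !found {
            return raw // bare CID, nothing to escape
        }
        segments := strings.Split(path, "/")
        for i, s := range segments {
            segments[i] = url.PathEscape(s)
        }
        return "ipfs://" + cid + "/" + strings.Join(segments, "/")
    }

    func main() {
        fmt.Println(escapeIPFSPath("ipfs://bafybeid5xl7cwdaelgh2rrxeicul6v7xbuwvtgiywklui43todymiyxx3y/TZLAND#247_FINALE.glb"))
        // prints ...TZLAND%23247_FINALE.glb
    }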
Thanks!
The primary key and index specifications made using gorm tags in the structs are not being processed when the tables are generated in Postgres.
Reference:
https://github.com/dipdup-net/metadata/blob/a4d18d7b27f576d6460a0c6d47aae1a2e8b555f9/cmd/metadata/models/token_metadata.go#L11-L27
According to the token_metadata struct there should be a composite primary key on token_id, contract, and network, but when I describe the table it shows id as the only PK. I am using PostgreSQL v14.1.
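For reference, gorm v2 builds a composite primary key when several fields carry the primaryKey tag; a minimal sketch using the field names from the struct above, with everything else elided and types illustrative:

    // Sketch in gorm v2 syntax; with tags like these, AutoMigrate emits
    // PRIMARY KEY (network, contract, token_id).
    type TokenMetadata struct {
        Network  string `gorm:"primaryKey"`
        Contract string `gorm:"primaryKey"`
        TokenID  uint64 `gorm:"primaryKey"`
        // ... remaining columns elided
    }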
Heya,
another issue I found is that some files on IPFS that are large enough to be chunked in the response fail to read.
The issue is in pool.go, in request:

    return ioutil.ReadAll(io.LimitReader(resp.Body, pool.limit))

I assume using LimitReader instead of resp.Body.Read bypasses the chunked reading done by the http module internally. Using:

    buf := make([]byte, resp.ContentLength)
    _, err := resp.Body.Read(buf)
    return buf, err

makes that work.
But my question: I'm not quite sure what the limit is meant to achieve. To filter out large files? Are partial responses desired?
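For what it's worth, ioutil.ReadAll itself handles chunked bodies (it loops Read until EOF, and net/http de-chunks transparently); what the LimitReader changes is that anything past pool.limit is silently cut off, which would also explain why only large files fail. Note too that a single resp.Body.Read call isn't guaranteed to fill the buffer, so the workaround above can under-read (io.ReadFull would be the safe variant). If the limit is meant to cap memory rather than to return partial data, here is a sketch of an alternative, with readBody as a hypothetical stand-in for the read in request:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // readBody drains chunked bodies correctly, caps memory at limit+1
    // bytes, and fails loudly on oversized files instead of returning a
    // silently truncated body.
    func readBody(resp *http.Response, limit int64) ([]byte, error) {
        data, err := io.ReadAll(io.LimitReader(resp.Body, limit+1))
        if err != nil {
            return nil, err
        }
        if int64(len(data)) > limit {
            return nil, fmt.Errorf("response body exceeds limit of %d bytes", limit)
        }
        return data, nil
    }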
There seems to be a logic bug in the code; ref: https://github.com/dipdup-net/metadata/blob/d2ec49730fa5a32039eb493ac64aadb90546daea/cmd/metadata/service/token.go#L161
This line forces the saver function to only save the metadata once there are at least 10 pending tokens, which causes the metadata not to be stored at all until that condition is met. This is problematic in many ways, especially for contracts that don't have a high frequency of token mints. It also affects any previously unresolved/failed token mints, which can be blocked on this condition as well.
An easy fix would be to go back to the previous code, where there was a separate case with a ticker that would autosave the data after some amount of time, irrespective of the token count.
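A sketch of that pattern, with names, types, and the interval purely illustrative (the actual service code differs):

    package main

    import (
        "context"
        "time"
    )

    // Token is a stand-in for the service's pending-token type.
    type Token struct{}

    // saver flushes the batch either when it reaches the size threshold
    // or when the ticker fires, whichever comes first, so low-frequency
    // contracts are not starved.
    func saver(ctx context.Context, tokens <-chan Token, save func([]Token)) {
        const threshold = 10
        ticker := time.NewTicker(15 * time.Second) // interval is illustrative
        defer ticker.Stop()

        var pending []Token
        flush := func() {
            if len(pending) > 0 {
                save(pending)
                pending = nil
            }
        }

        for {
            select {
            case t := <-tokens:
                pending = append(pending, t)
                if len(pending) >= threshold {
                    flush()
                }
            case <-ticker.C:
                flush()
            case <-ctx.Done():
                flush()
                return
            }
        }
    }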
Heya, this is looking amazing. Currently I'm indexing metadata in dipdup-py. It's a little tedious that way.
I suppose I drop my dipdup-py config into the build dir and run the container build? Or is there some way to reference an external config that I missed?
Thanks.