adamritter / nostr-relaypool-ts

A Nostr RelayPool implementation in TypeScript using only nostr-tools library as a dependency

Home Page: https://www.npmjs.com/package/nostr-relaypool

License: MIT License

Languages: TypeScript 98.97%, JavaScript 1.03%
Topics: javascript, nostr, typescript

nostr-relaypool-ts's People

Contributors

0xtlt, adamritter, alexgleason, coyotte508, dolu89, grmkris, npub1zenn0, twtaylor


nostr-relaypool-ts's Issues

Think about removing relay property from filter after delay is implemented

Once delay is implemented, filtering for a single relay won't be costly, so the relay property isn't really needed and only complicates the API. It can probably be removed, but it's worth evaluating on a reference implementation first. Then again, it's probably better to keep it, as different implementations may use different methods for subscribing.

InMemoryRelay

Only for Node.js testing; it can use the ws dev-dependency package.

Process events from all running subscriptions

It shouldn't matter which subscription batch an event comes from: since every event is verified, events from all subscriptions should be processed (and added to the cache), as we already have an event demultiplexer anyway.

This feature also makes it possible to avoid resending a filter that is already running.

Suggestion: Return 2 separate parameters in `onnotice()` or `onerror()`

Looking at the API in the README and the code, it looks like the relay (presumably the relay URL) is concatenated with the error or notice parameter using a colon.

I imagine I'll want to know which relay generated which notice, so I can tell the user "this relay sent X". Of course I could extract it from the string, but it's harder to get just the error out of a string like this: wss://foo:error_with_colon:second_message.

I'd imagine a signature like (relay: string, message: string) => void.
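A minimal sketch of the suggested change (the class and method names here are illustrative, not the library's actual internals):

```typescript
// Sketch of the suggested API: pass relay URL and message as two
// parameters instead of one colon-joined string.
type NoticeCallback = (relayUrl: string, message: string) => void;

class NoticeDispatcher {
  private callbacks: NoticeCallback[] = [];

  onnotice(cb: NoticeCallback) {
    this.callbacks.push(cb);
  }

  // Callers never have to re-parse a string whose relay URL itself
  // contains colons (wss://foo:...).
  dispatch(relayUrl: string, message: string) {
    for (const cb of this.callbacks) cb(relayUrl, message);
  }
}
```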

Allow subscription batching by delaying subscriptions

Adding a maxDelay option delays sending subscription requests, batching them and matching events back to the original subscriptions when they arrive.

It's called maxDelay instead of delay because subscribing with a maxDelay of 100ms and later subscribing with an infinite delay still keeps the timer at the 100ms deadline.

The first implementation will probably just use matchFilter (O(n²)) for redistributing events, which can easily be optimized if the abstraction proves successful.
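The batching rule above can be sketched as follows (illustrative names; a later subscribe can only tighten the pending deadline, never extend it, which gives the maxDelay semantics):

```typescript
// Sketch of maxDelay batching: queued filters are flushed together once
// the smallest pending maxDelay elapses.
type Filter = { kinds?: number[]; authors?: string[] };

class SubscriptionBatcher {
  private pending: Filter[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;
  private deadline = Infinity;

  constructor(private send: (filters: Filter[]) => void) {}

  subscribe(filter: Filter, maxDelay: number) {
    this.pending.push(filter);
    const newDeadline = Date.now() + maxDelay; // Infinity stays Infinity
    // Subscribing with maxDelay=100 and then maxDelay=Infinity still
    // flushes within 100ms of the first call.
    if (newDeadline < this.deadline) {
      this.deadline = newDeadline;
      if (this.timer) clearTimeout(this.timer);
      this.timer = setTimeout(() => this.flush(), maxDelay);
    }
  }

  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    const batch = this.pending;
    this.pending = [];
    this.deadline = Infinity;
    if (batch.length) this.send(batch);
  }
}
```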

NIP-42

NIP-42 was added to nostr-tools. It would be nice to have it here too.

It works about as follows:

import { nip42 } from 'nostr-tools';

relay.on('auth', challenge => nip42.authenticate({ relay, challenge, sign }))

Thoughts on the best approach here? Probably .onauth, since .onnotice etc. already exist? I haven't checked any code here yet; I thought I'd open an issue first.

Option for rolling filter subscription

The simplest way to implement this is to subscribe to a few relays, then, if no data arrives for some time (or EOSE arrives), try the other relays, always keeping only 2-3 subscriptions in progress with a specific timeout.

Merge filters automatically

Find filters in the subscriptions that differ in only one key and merge them, both at the relay-pool level and at the relay level.

Make sure the algorithm is linear in input size (not accidentally O(n²)).
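A sketch of the merge rule for a single pair (not the library's actual algorithm; a production version would hash filters by their "all keys but one" signature to stay linear rather than comparing pairs):

```typescript
// Merge two filters that are identical except for one array-valued key,
// e.g. same kinds but different authors.
type Filter = Record<string, (string | number)[]>;

function differingKey(a: Filter, b: Filter): string | null {
  const keys = Object.keys(a);
  if (keys.length !== Object.keys(b).length) return null;
  let differing: string | null = null;
  for (const k of keys) {
    if (!(k in b)) return null;
    if (JSON.stringify(a[k]) !== JSON.stringify(b[k])) {
      if (differing !== null) return null; // more than one key differs
      differing = k;
    }
  }
  return differing; // null also when the filters are identical
}

function mergeFilters(a: Filter, b: Filter): Filter | null {
  const key = differingKey(a, b);
  if (key === null) return null;
  // Union the values of the single differing key.
  return { ...a, [key]: [...new Set([...a[key], ...b[key]])] };
}
```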

FakeRelay

A FakeRelay object should be used to avoid connecting to real relays: it just stores data in memory and returns it when asked. Custom response times should be settable.
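A minimal sketch of such a test double (the class shape, the local matchFilter helper, and the responseDelayMs parameter are illustrative assumptions):

```typescript
// FakeRelay sketch: in-memory event store answering subscriptions,
// with an optional simulated response delay (0 = respond synchronously).
type Event = { id: string; kind: number; pubkey: string };
type Filter = { ids?: string[]; kinds?: number[]; authors?: string[] };

function matchFilter(filter: Filter, event: Event): boolean {
  if (filter.ids && !filter.ids.includes(event.id)) return false;
  if (filter.kinds && !filter.kinds.includes(event.kind)) return false;
  if (filter.authors && !filter.authors.includes(event.pubkey)) return false;
  return true;
}

class FakeRelay {
  private events: Event[] = [];
  constructor(public responseDelayMs = 0) {}

  publish(event: Event) {
    this.events.push(event);
  }

  subscribe(filter: Filter, onevent: (e: Event) => void) {
    const respond = () => {
      for (const e of this.events) if (matchFilter(filter, e)) onevent(e);
    };
    if (this.responseDelayMs === 0) respond();
    else setTimeout(respond, this.responseDelayMs); // simulate a slow relay
  }
}
```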

Use EventDemultiplexer

A filter-matcher class should be able to accept new filters, preprocess them by their most selective attribute (linear in the number of filter entries), match events efficiently in roughly constant time, and call the onevent handler specified for each filter.
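The indexing idea can be sketched like this (illustrative; only id and author indexes are shown, and a full version would also keep a fallback list for unindexed filters):

```typescript
// Demultiplexer sketch: index each filter by its most selective attribute
// so matching an event is a few map lookups, not a scan of every filter.
type Event = { id: string; kind: number; pubkey: string };
type Filter = { ids?: string[]; authors?: string[]; kinds?: number[] };
type OnEvent = (e: Event) => void;

class EventDemultiplexer {
  private byId = new Map<string, OnEvent[]>();
  private byAuthor = new Map<string, OnEvent[]>();

  addFilter(filter: Filter, onevent: OnEvent) {
    const add = (map: Map<string, OnEvent[]>, keys: string[]) =>
      keys.forEach(k => map.set(k, [...(map.get(k) ?? []), onevent]));
    // Prefer the most selective attribute: id first, then author.
    if (filter.ids) add(this.byId, filter.ids);
    else if (filter.authors) add(this.byAuthor, filter.authors);
  }

  onEvent(event: Event) {
    for (const cb of this.byId.get(event.id) ?? []) cb(event);
    for (const cb of this.byAuthor.get(event.pubkey) ?? []) cb(event);
  }
}
```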

Add reducer for batching filters with limit

Multiple filters with the same limit are batched into a common filter.

There should be a way to specify what should be the limit of the common merged filter.

Filters should probably be matched by creating an additional originalFilterCount property and using it when sending a subscription.

It may also be necessary to allow turning limit merging off.
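One possible reducer, sketched under the assumptions of this issue (the originalFilterCount property and the limit-scaling policy are taken from the text above, not confirmed library behavior; merging here unions only authors for brevity):

```typescript
// Batch same-limit filters into one filter, recording how many originals
// it covers so the merged limit can be scaled.
type Filter = { authors?: string[]; limit?: number; originalFilterCount?: number };

function batchByLimit(filters: Filter[]): Filter[] {
  const byLimit = new Map<number, { merged: Filter; count: number }>();
  const out: Filter[] = [];
  for (const f of filters) {
    if (f.limit === undefined) { out.push(f); continue; }
    const entry = byLimit.get(f.limit);
    if (!entry) {
      byLimit.set(f.limit, { merged: { ...f }, count: 1 });
    } else {
      entry.merged.authors =
        [...new Set([...(entry.merged.authors ?? []), ...(f.authors ?? [])])];
      entry.count += 1;
    }
  }
  for (const [limit, { merged, count }] of byLimit) {
    merged.originalFilterCount = count;
    // One possible policy: request limit * originalFilterCount events so
    // every original filter can still receive up to its own limit.
    merged.limit = limit * count;
    out.push(merged);
  }
  return out;
}
```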

Caching events by id and profile/contacts requests

Events fetched by id should always be cached, but profile/contacts data may change over time, so the filters should support an option (notFromCache) that skips the cache when returning results (while still updating it).
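The caching rule can be sketched as follows (names illustrative; kind 0 is used for metadata as in NIP-01):

```typescript
// Id lookups are always safe to serve from cache (ids are immutable);
// metadata consults the cache only when notFromCache is false.
type Event = { id: string; kind: number; pubkey: string };

class EventCache {
  private byId = new Map<string, Event>();
  private metadataByPubkey = new Map<string, Event>();

  addEvent(event: Event) {
    this.byId.set(event.id, event); // cache is always updated
    if (event.kind === 0) this.metadataByPubkey.set(event.pubkey, event);
  }

  getById(id: string): Event | undefined {
    return this.byId.get(id);
  }

  getMetadata(pubkey: string, notFromCache = false): Event | undefined {
    if (notFromCache) return undefined; // force a relay round-trip
    return this.metadataByPubkey.get(pubkey); // may be stale
  }
}
```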

Hint for storing data in localStorage

While an LRU cache is a great idea, the most important queries run permanently. Events generated by these queries need to be stored in localStorage even if they are not recent.

A simple API would be RelayPool::store(event), Event::store(), and store(cb: OnEvent): OnEvent.

localStorage should be updated at least before the window closes, but a debounce of a few seconds is probably also a good idea.
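A sketch of the debounced store (the store()/flush() names follow the API suggested above but are assumptions; the storage backend is injected so it works outside the browser, and a real version would also flush on beforeunload):

```typescript
// Debounced permanent store: writes are batched and flushed after a
// quiet period instead of on every event.
type Event = { id: string };
type StorageLike = { setItem(key: string, value: string): void };

class PermanentStore {
  private pending = new Map<string, Event>();
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private storage: StorageLike, private debounceMs = 2000) {}

  store(event: Event) {
    this.pending.set(event.id, event); // dedupes by id
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.debounceMs);
  }

  // Call this from a beforeunload handler too, so nothing is lost.
  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    for (const [id, event] of this.pending)
      this.storage.setItem("event:" + id, JSON.stringify(event));
    this.pending.clear();
  }
}
```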

Why do Authors use a callback syntax?

Is it safe to call it like this? I might be calling fetchUser hundreds of times; does the subscription ever get closed? I need something like getEventById, but for an author.


Thanks! This library has helped make pooling so much easier.

ReferenceError: You are trying to `import` a file after the Jest environment has been torn down.

Although the tests pass, the run could be cleaner:

PASS ./relay-pool.test.ts (9.965 s)

ReferenceError: You are trying to import a file after the Jest environment has been torn down. From relay-pool.test.ts.

  at Object.createBufferingLogger [as BufferingLogger] (node_modules/websocket/lib/utils.js:24:23)
  at new WebSocketConnection (node_modules/websocket/lib/WebSocketConnection.js:42:25)
  at WebSocketClient.Object.<anonymous>.WebSocketClient.succeedHandshake (node_modules/websocket/lib/WebSocketClient.js:343:22)
  at WebSocketClient.Object.<anonymous>.WebSocketClient.validateHandshake (node_modules/websocket/lib/WebSocketClient.js:332:10)
  at ClientRequest.handleRequestUpgrade (node_modules/websocket/lib/WebSocketClient.js:261:14)

/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/utils.js:24
var logFunction = require('debug')(identifier);
^

TypeError: require(...) is not a function
at Object.createBufferingLogger [as BufferingLogger] (/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/utils.js:24:39)
at new WebSocketConnection (/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/WebSocketConnection.js:42:25)
at WebSocketClient.Object..WebSocketClient.succeedHandshake (/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/WebSocketClient.js:343:22)
at WebSocketClient.Object..WebSocketClient.validateHandshake (/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/WebSocketClient.js:332:10)
at ClientRequest.handleRequestUpgrade (/Users/adamritter/Documents/GitHub/nostr-relaypool-ts/node_modules/websocket/lib/WebSocketClient.js:261:14)
at ClientRequest.emit (node:events:513:28)
at TLSSocket.socketOnData (node:_http_client:574:11)
at TLSSocket.emit (node:events:513:28)
at addChunk (node:internal/streams/readable:324:12)
at readableAddChunk (node:internal/streams/readable:297:9)
at TLSSocket.Readable.push (node:internal/streams/readable:234:10)
at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23)

Node.js v18.12.1
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.

Test Suites: 3 passed, 3 total
Tests: 13 passed, 13 total
Snapshots: 0 total
Time: 10.597 s, estimated 11 s
Ran all test suites.

Auto unsubscribe

Auto-unsubscribe each relay via a virtual subscription on the EOSE event. This feature interacts in a complex way with the normal unsubscribe feature, which works on whole subscriptions. It also probably needs a timeout. It's better to leave this feature until a bit later.

Support for event reactions

I see you have implemented the functionality to collect event reactions and references in event.ts; is anything holding you back from adding it to the latest release? If so, I would like to help.

Cache by any type of filter

Filtering by author (and similar fields) should return cached results immediately, even if it doesn't change the subscriptions that are sent.

Not giving back the same event twice in a subscription

While I prefer the default behaviour of delivering the same event multiple times (so that higher layers built on RelayPool can track which events are stored on which relays), I'm not against an option for clients that use the data returned by RelayPool directly.
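The opt-in behaviour can be sketched as a small wrapper around the event callback (names illustrative):

```typescript
// Opt-in dedup: keep the default (every relay's copy is delivered) but
// let clients wrap their callback to see each event id only once.
type Event = { id: string };
type OnEvent = (event: Event, relayUrl: string) => void;

function dedupeById(onevent: OnEvent): OnEvent {
  const seen = new Set<string>();
  return (event, relayUrl) => {
    if (seen.has(event.id)) return; // already delivered from another relay
    seen.add(event.id);
    onevent(event, relayUrl);
  };
}
```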

Gather generic information about subscription execution times

Subscription slots are the scarcest resource in Nostr. To prioritize subscriptions at specific relays, we should model the resource constraints. There are two main constraints:

  • is the data available at the relay
  • does the relay respond to the subscription, how long does it take, and is a limit being reached

It's simpler to model these two constraints separately before deciding which relays to connect to.

I believe that the EOSE event already contains enough data about data availability on different nodes for different subscriptions. Response times depend on several things: whether we can connect to the relay, whether it starts sending data, when it sends the EOSE event, and how much this depends on the query. We should report these times in the subscription and store them on the server for modelling.

Execution times depend heavily on the filters and server speed, but I believe the most important thing is to model the execution delays first (rate limiting, and the limited total number of SQL queries the server will run). We can learn more after looking at the data.

Malformed events

No idea where it comes from; I will dig in and try to fix it, but events seem to have a tags problem. Look at the event tags at the following link:

https://nostr-gateway.vercel.app/e/bf67a0eb6e8a69e60f4af4909c3dde0e32b7e746fd32cdde7abf826e10abe4e0?relays=wss://relay.damus.io

And here is what I get with Nostr RelayPool:

{
  "id": "bf67a0eb6e8a69e60f4af4909c3dde0e32b7e746fd32cdde7abf826e10abe4e0",
  "kind": 1,
  "pubkey": "884704bd421721e292edbff42eb77547fe115c6ff9825b08fc366be4cd69e9f6",
  "tags": [
    [
      "",
      "fd5ac66f8379039427ebeddb60d2ef91397914d683bd46a044e0fd0c5be7c2a7",
      "e"
    ],
    [
      "e",
      "71319b99ef3fc0bc2174b0152861c82ca4c59bd26ef805201a21cd8ef1a0a6e6"
    ],
    [
      "p",
      "45c41f21e1cf715fa6d9ca20b8e002a574db7bb49e96ee89834c66dac5446b7a"
    ],
    [
      "p",
      "9989500413fb756d8437912cc32be0730dbe1bfc6b5d2eef759e1456c239f905"
    ]
  ],
  "created_at": 1675004570,
  "content": "Is the API public or CORS protected?",
  "relayPool": {
    "relayByUrl": {},
    "noticecbs": [],
    "eventCache": {
      "eventsById": {},
      "metadataByPubKey": {},
      "contactsByPubKey": {}
    },
    "minMaxDelayms": null,
    "filtersToSubscribe": []
  },
  "relays": [
    "wss://relay.damus.io",
    "wss://relay.snort.social",
    "wss://nostr-01.bolt.observer",
    "wss://relay.nostr.bg",
    "wss://nostr-pub.wellorder.net",
    "wss://nostr.bitcoiner.social",
    "wss://relay.nostr.info"
  ]
}

I'll let you know if I find something.

Return updated metadata / contacts for cached results

If NoCache is not set, cached results are returned for metadata/contacts kind requests, and those filters are deleted before onevent matching is allowed on them.

As relays can still return new results, new matches should be allowed even if no specific filters were sent by that subscription.

Of course, it would help to have a good O(n log n) matching library implemented for this.

Get 1 event

Implementing subscription merging, delaying and caching gives us an amazing abstraction that components can use without extra data-management logic: getting one event. After the other features are implemented, this function is trivial to implement and a joy to work with.
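The shape of that helper can be sketched as a Promise wrapper over a subscription (the subscribe/unsubscribe signatures here are illustrative, not the library's actual API):

```typescript
// "Get one event" sketch: resolve on the first matching event, then
// close the subscription.
type Event = { id: string };
type Unsubscribe = () => void;
type Subscribe = (filter: { ids: string[] }, onevent: (e: Event) => void) => Unsubscribe;

function getEventById(subscribe: Subscribe, id: string): Promise<Event> {
  return new Promise(resolve => {
    let unsub: Unsubscribe | undefined;
    let done = false;
    unsub = subscribe({ ids: [id] }, event => {
      if (done) return; // ignore duplicate copies from other relays
      done = true;
      resolve(event);
      if (unsub) unsub(); // one event is enough; close the subscription
    });
    // Handle a subscribe that delivered the event synchronously,
    // before unsub was assigned.
    if (done) unsub();
  });
}
```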
