cfc-servers / gm_express

An unlimited, out-of-band, bi-directional networking library for Garry's Mod

Home Page: https://gmod.express

License: GNU General Public License v3.0

Lua 100.00%
garrysmod garrysmod-addon gmod gmod-addon gmod-lua gmodlua lua network networking

gm_express's Introduction

Express πŸš„

A lightning-fast networking library for Garry's Mod that allows you to quickly send large amounts of data between server/client with ease.

Seriously, it's really easy! Take a look:

-- Server
local data = file.Read( "huge_data_file.json" )
express.Broadcast( "stored_data", { data } )

-- Client
express.Receive( "stored_data", function( data )
    file.Write( "stored_data.json", data[1] )
end )
Compared to doing it yourself...
-- Server
-- This is just an example!
-- It doesn't handle errors or clients joining, and it doesn't support multiple streams

util.AddNetworkString( "myaddon_datachunks" )
local buffer = ""

local function broadcastChunk()
    if #buffer == 0 then return end

    local chunkSize = math.min( 63000, #buffer )
    local chunk = string.sub( buffer, 1, chunkSize )

    buffer = string.sub( buffer, chunkSize + 1 )
    local isLast = #buffer == 0

    net.Start( "myaddon_datachunks" )
    net.WriteUInt( chunkSize, 16 )
    net.WriteData( chunk, chunkSize )
    net.WriteBool( isLast )
    net.Broadcast()
end

function BroadcastFile( filePath )
    local fileData = file.Read( filePath, "DATA" )
    buffer = util.Compress( fileData )
end

local interval = engine.TickInterval() * 8
timer.Create( "MyAddon_DataSender", interval, 0, broadcastChunk )

BroadcastFile( "huge_data_file.json" )
-- Client
local buffer = ""
net.Receive( "myaddon_datachunks", function()
    buffer = buffer .. net.ReadData( net.ReadUInt( 16 ) )
    if not net.ReadBool() then return end

    local data = util.Decompress( buffer )
    buffer = ""
    processData( data )
end )

In this example, huge_data_file.json could be as large as 25mb post-compression (with support for ~100mb coming soon) without Express even breaking a sweat. The client would receive the contents of the file as fast as their internet connection can carry it.


Details

Instead of using Garry's Mod's throttled (<1mb/s!) and already-polluted networking system, Express uses unthrottled HTTP requests to transmit data between the client and server.

Doing it this way comes with a number of practical benefits:

  • πŸ“¬ These messages don't run on the main thread, meaning it won't block networking/physics/lua
  • πŸ’ͺ A dramatic increase to maximum message size (~100mb, compared to the net library's <64kb limit)
  • 🏎️ Big improvements to speed in many circumstances
  • πŸ€™ It's simple! You don't have to worry about serializing, compressing, and splitting your table up. Just send the table!

Express works by storing the data you send on Cloudflare's Edge servers. Using Cloudflare workers, KV, and D1, Express can cheaply serve millions of requests and store hundreds of gigabytes per month. Cloudflare's Edge servers offer extremely low-latency requests and data access to every corner of the globe.
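Conceptually, a send boils down to a short pipeline. Here is a simplified sketch of that flow (the `uploadToAPI` helper and the net message name are illustrative stand-ins, not Express' actual internals; `pon` is the serializer Express uses):

```lua
-- Simplified sketch of what happens under the hood on a send (names illustrative)
local function sketchSend( name, data, recipient )
    local serialized = pon.encode( data )          -- serialize the table
    local compressed = util.Compress( serialized ) -- compress it

    -- Upload the blob to the Express API over HTTP...
    uploadToAPI( compressed, function( id )
        -- ...then tell the recipient the ID via one tiny net message
        net.Start( "express" )
        net.WriteString( name )
        net.WriteString( id )
        net.Send( recipient )
    end )
end
```

The recipient then makes its own HTTP GET for that ID, so the payload itself never touches the net library.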

By default, Express uses gmod.express, the public and free API provided by CFC Servers, but anyone can easily host their own! Check out the Express Service README for more information.
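If you do self-host, pointing the addon at your instance should just be a matter of changing the API domain. The convar name below is an assumption; check the Express Service README for the exact setting:

```lua
-- Assumed convar for selecting the API host; verify against the Service README
RunConsoleCommand( "express_domain", "express.example.com" )
```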

Usage

Examples

Broadcast a message from Server

-- Server
-- `data` can be a table of (nearly) any size, and may contain (almost) any values!
-- the recipient will get it exactly like you sent it
local data = ents.GetAll()
express.Broadcast( "all_ents", data )

-- Client
express.Receive( "all_ents", function( data )
    print( "Got " .. #data .. " ents!" )
end )

Client -> Server

-- Client
local data = ents.GetAll()
express.Send( "all_ents", data )

-- Server
-- Note that .Receive has `ply` before `data` when called from server
express.Receive( "all_ents", function( ply, data )
    print( "Got " .. #data .. " ents from " .. ply:Nick() )
end )

Server -> Multiple clients with confirmation callback

-- Server
local meshData = prop:GetPhysicsObject():GetMesh()
local data = { data = meshData, entIndex = prop:EntIndex() }

-- Will be called after the player successfully downloads the data
local confirmCallback = function( ply )
    receivedMesh[ply] = true
end

express.Send( "prop_mesh", data, { ply1, ply2, ply3 }, confirmCallback )


-- Client
express.Receive( "prop_mesh", function( data )
    entMeshes[data.entIndex] = data.data
end )

πŸ“– Documentation

express.Receive( string name, function callback )

Description

This function is very similar to net.Receive. It attaches a callback function to a given message name.

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd
  2. function callback
    • The function to call when data comes through for this message.
    • On CLIENT, this callback receives a single parameter:
      • table data: The data table sent by server
    • On SERVER, this callback receives two parameters:
      • Player ply: The player who sent the data
      • table data: The data table sent by the player

Example

Set up a serverside receiver for the "balls" message:

express.Receive( "balls", function( ply, data )
    myTable.playpin = data

    if not IsValid( ply ) then return end
    ply:ChatPrint( "Thanks for the balls!" )
end )

express.ReceivePreDl( string name, function callback )

Description

Very much like express.Receive, except this callback runs before the data has actually been downloaded from the Express API.

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd
  2. function callback
    • The function to call just before downloading the data.
    • On CLIENT, this callback receives:
      • string name: The name of the message
      • string id: The ID of the download (used to retrieve the data from the API)
      • int size: The size (in bytes) of the data
      • boolean needsProof: A boolean indicating whether or not the sender has requested proof-of-download
    • On SERVER, this callback receives:
      • string name: The name of the message
      • Player ply: The player that is sending the data
      • string id: The ID of the download (used to retrieve the data from the API)
      • int size: The size (in bytes) of the data
      • boolean needsProof: A boolean indicating whether or not the sender has requested proof-of-download

Returns

  1. boolean:
    • Return false to halt the transaction. The data will not be downloaded, and the regular receiver callback will not be called.

Example

Adds a normal message receiver and a pre-download receiver to prevent the server from downloading too much data:

express.Receive( "preferences", function( ply, data )
    ply.preferences = data
end )

express.ReceivePreDl( "preferences", function( name, ply, _, size, _ )
    local maxSize = maxMessageSizes[name]
    if size <= maxSize then return end

    print( ply, "tried to send a", size, "byte", name, "message! Rejecting!" )
    return false
end )

express.ClearReceiver( string name )

Description

Removes the callback associated with the given message name. Much like net.Receive( message, nil ).

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd

Example

Create a new Receiver when the module is enabled, and remove the receiver when it's disabled

local function enable()
    express.Receive( "example", processData )
end

local function disable()
    express.ClearReceiver( "example" )
end

express.Send( string name, table data, function onProof )

Description

The CLIENT version of express.Send. Sends an arbitrary table of data to the server, and runs the given callback when the server has downloaded the data.

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd
  2. table data
    • The table to send
    • This table can be of any size, in any order, with nearly any data type. The only exception you might care about is Color objects not being fully supported (WIP).
  3. function onProof() = nil
    • If provided, the server will send a token of proof after downloading the data, which will then call this callback
    • This callback takes no parameters

Example

Sends a table of queued actions (perhaps from a UI) and then allows the client to proceed once the server confirms receipt. A timer is created to handle the case where the server doesn't respond for some reason.

local queuedActions = {
    { "remove_ban", steamID1 },
    { "add_ban", steamID2, 60 },
    { "change_rank", steamID3, "developer" }
}

myPanel:StartSpinner()
myPanel:SetInteractable( false )
express.Send( "bulk_admin_actions", queuedActions, function()
    myPanel:StopSpinner()
    myPanel:SetInteractable( true )
    timer.Remove( "bulk_actions_timeout" )
end )

timer.Create( "bulk_actions_timeout", 5, 1, function()
    myPanel:SendError( "The server didn't respond!" )
    myPanel:StopSpinner()
    myPanel:SetInteractable( true )
end )

express.Send( string name, table data, table/Player recipient, function onProof )

Description

The SERVER version of express.Send. Sends an arbitrary table of data to the recipient(s), and runs the given callback when the server has downloaded the data.

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd
  2. table data
    • The table to send
    • This table can be of any size, in any order, with nearly any data type. The only exception you might care about is Color objects not being fully supported (WIP).
  3. table/Player recipient
    • If given a table, it will be treated as a table of valid Players
    • If given a single Player, it will send only to that Player
  4. function onProof( Player ply ) = nil
    • If provided, the client(s) will send a token of proof after downloading the data, which will then call this callback
    • This callback takes one parameter:
      • Player ply: The player who provided the proof

Example

Sends a table of all players' current packet loss to a single player. Note that this example does not use the optional onProof callback.

local loss = {}
for _, ply in ipairs( player.GetAll() ) do
    loss[ply] = ply:PacketLoss()
end

express.Send( "current_packet_loss", loss, targetPly )

express.Broadcast( string name, table data, function onProof )

Description

Operates exactly like express.Send, except it sends a message to all players.

Arguments

  1. string name
    • The name of the message. Think of this just like the name given to net.Receive
    • This parameter is case-insensitive, it will be string.lower'd
  2. table data
    • The table to send
    • This table can be of any size, in any order, with nearly any data type. The only exception you might care about is Color objects not being fully supported (WIP).
  3. function onProof( Player ply ) = nil
    • If provided, each player will send a token of proof after downloading the data, which will then call this callback
    • This callback takes a single parameter:
      • Player ply: The player who provided the proof

Example

Sends the updated RP rules to all players

function RP.UpdateRules( newRules )
    RP.Rules = newRules
    express.Broadcast( "rp_rules", newRules )
end

🎣 Hooks

GM:ExpressLoaded()

Description

This hook runs when all Express code has loaded. All express methods are available. Runs exactly once on both realms.

This is a good time to make your Receivers (express.Receive).

Example

Creates the Express Receivers when Express is available

-- cl_init.lua

hook.Add( "ExpressLoaded", "MyAddon_SetupExpress", function()
    express.Receive( "MyAddon_ObjectData", function( data )
        processData( data )
    end )
end )

GM:ExpressPlayerReceiver( Player ply, string message )

Description

Called when ply creates a new receiver for message (and, by extension, is ready for both net and express messages)

Once this hook is called, it is guaranteed to be safe to express.Send to the player.

Arguments

  1. Player ply
    • The player that registered a new Express Receiver
  2. string message
    • The name of the message that a Receiver was registered for
    • (Note: This will be string.lower'd before calling this hook, so expect it to always be lowercase)

Example

Sends an initial dataset to the client as soon as they're ready

-- sv_init.lua

hook.Add( "ExpressPlayerReceiver", "MyAddon_InitData", function( ply, message )
    if message ~= "myaddon_initdata" then return end
    express.Send( "myaddon_initdata", MyAddon.CurrentData, ply )
end )
-- cl_init.lua

hook.Add( "ExpressLoaded", "MyAddon_SetupExpress", function()
    express.Receive( "MyAddon_InitData", function( data )
        processData( data )
    end )
end )

Performance

We tested Express' performance against two other options:

  • Manual Chunking:
    • This is a bare-minimum example script that serializes, compresses, and splits the data up across as few net messages as possible. (This is typically what people do in smaller addons.)
    • Source
  • NetStream:
    • This library is very popular. It's the go-to choice for sending large chunks of data. It's currently used by Starfall, PAC3, AdvDupe2, etc.
    • Source

Test Details

Test Setup

Our findings are based on a series of tests where we generated data sets filled with random elements across a range of data types. (string, int, float, bool, Vector, Angle, Color, Entity, table)

We sent this data using each of the options, one at a time.

These tests were performed on a moderately-specced laptop. The server was a dedicated base-branch server running in WSL2; the client was a clean-install base-branch client running on Windows.

For each test, we collected two key metrics:

  • Duration: The total time (in seconds) it took to complete each test. This includes compression, serialization, sending, and acknowledgement.
  • Message Count: The number of net messages sent during the transfer. Fewer is usually better.
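A minimal timing harness along these lines can reproduce the Duration metric (an illustrative sketch only; `dataset`, the message name, and `targetPly` are stand-ins, not the actual test rig):

```lua
-- Hypothetical sketch of how a single Duration sample could be taken
local startTime = SysTime()

express.Send( "perf_test", dataset, targetPly, function()
    -- onProof fires once the recipient confirms the download
    print( string.format( "Transfer completed in %.3fs", SysTime() - startTime ) )
end )
```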

References:

  • This is an example of the data sets that we use during the test runs.
  • You can view the raw test setup here.
Detailed Test Results
Test 1 (74.75 KB):

Summary: This data can fit in only two net messages. In this situation, Express loses out to just sending net messages (by almost a full second).

Data Size    Compressed Size
194.97 KB    74.75 KB

Method             Duration (s)    Messages Sent
Manual Chunking    1.265           2
NetStream          2.273           11
Express            1.909           1
Test 2 (374.78 KB):

Summary: Requiring at least six net messages when sent normally, Express sends the data about 3x faster.

Data Size    Compressed Size
988.2 KB     374.78 KB

Method             Duration (s)    Messages Sent
Manual Chunking    6.160           6
NetStream          10.303          51
Express            2.151           1
Test 3 (1.5 MB):

Summary: After passing the "1 megabyte" mark, Express' advantages really begin to shine through, beating the next-fastest option by 21 seconds (8x faster!)

Data Size    Compressed Size
3.97 MB      1.5 MB

Method             Duration (s)    Messages Sent
Manual Chunking    24.325          24
NetStream          40.849          200
Express            2.897           1
Test 4 (11.22 MB):

Summary: With a much larger payload, it becomes abundantly clear how slow and prohibitive the built-in net library can be. Express sends this 11mb payload in under 20 seconds, while the net library is nearing 200 seconds.

Data Size    Compressed Size
29.67 MB     11.22 MB

Method             Duration (s)    Messages Sent
Manual Chunking    181.491         180
NetStream          304.552         1,485
Express            18.993          1
Test 5 (11.96 KB):

Summary: Because this payload only requires a single net message, Express falls behind the pack in terms of transfer speed.
Data Size    Compressed Size
29.79 KB     11.96 KB

Method             Duration (s)    Messages Sent
Manual Chunking    0.306           1
NetStream          0.833           3
Express            1.333           1

Test Result Takeaways

  • Express sends data significantly faster than both Manual Chunking and NetStream when the data size exceeds a certain threshold (Roughly whenever 3 or more net messages would be required).
  • Express only sends up to 2 net messages per transfer, no matter the size of the data.
  • Despite its impressive performance with large data sizes, Express is less efficient than other methods for smaller data sizes.
  • (NetStream is surprisingly slow, regardless of data size)

Extra Notes

  • These results will depend heavily on networking conditions. For some people, lots of smaller messages may actually perform better than one large Express download.
  • Anything that uses the built-in net library (like NetStream) will be more reliable than a library like Express, even if they may be slower overall.
  • Express caches sends. This means that if you needed to send a dataset to more than one player, Express would only need to upload the data once, saving a significant amount of time and bandwidth. These savings aren't reflected in this test run.

These tests illustrate how Express can significantly improve data transfer speed and efficiency for large or even intermediate-scale data, but may underperform when handling smaller data sizes.

Understanding the trade-offs of Express can help you determine if it's a good fit for your project.

Case Studies

Intricate ACF-3 Tank dupe πŸ”«

Here's a clip of me spawning a particularly detailed and Prop2Mesh-heavy ACF-3 dupe (both Prop2Mesh and Adv2 use Netstream to transmit their data).
gmod_cL5uWh9hTu.mp4

A few things to note:

  • It took ~20 seconds for the dupe to be transferred to the server via Netstream
  • It took an additional ~20 seconds for the Prop2Mesh data to be Netstreamed back to me
  • On the netgraph, you can see the in and out metrics (and the associated green horizontal progress bar) that shows Netstream sending each chunk
  • Netstream only processes one request at a time. This is important, because it means while Adv2 or Prop2Mesh are transmitting data, no other player can use any Netstream-based addon until it completes.

Using some custom backport code, I converted Prop2Mesh and Advanced Duplicator 2 to use Express instead of Netstream. Here's me spawning the same tank in the exact same conditions, but using Express instead:

gmod_5RiCPGLfFA.mp4

The entire process took under 15 seconds - that's over 60% faster! My PC actually lagged for a moment because of how quickly all of the meshes downloaded and were available to render.

Even better? This doesn't block any other player from spawning their dupes! Because this is using Express instead of Netstream, other players can freely spawn their dupes, Prop2Mesh, Starfalls, etc. without being blocked and without blocking others.

Prop2Mesh + Adv2 stress test πŸ§ͺ

I had someone who knew more about Prop2Mesh than me create a highly complex controller. Here are the stats:


Nearly 1M triangles across 162 models! If you've ever worked with meshes before, you'll know those are crazy high numbers.

When spawning this dupe in a stock server with Adv2 and Prop2Mesh, it takes nearly 4 minutes! All the while, blocking other players from using any Netstream-based addon. I can't even upload the video here because it's too big. Hopefully this screenshot is informative enough:

image

Some metrics:

  • It took 1 minute and 50 seconds before the dupe was even spawnable (it had to send the full dupe over to the server first)
  • After an additional 3 minutes, the meshes were finally downloaded and rendered
  • Again, while this was happening, no other player could use Adv2, Prop2Mesh, or Starfall

With that same backport code, forcing Adv2 and Prop2Mesh to use Express, the entire process takes under 30 seconds! That's almost a 90% speed increase.

gmod_3CairQAogv.mp4

Credits

A big thanks to @thelastpenguin for his super fast pON encoder that lets Express quickly serialize almost every GMod object into a compact message.

gm_express's People

Contributors

brandonsturgeon, starlight-oliver


gm_express's Issues

Investigate concerns of timeouts for clients with slow internet connections

As reported by another developer who implemented a similar system on their server, clients with very slow upload speeds (~1mb/s) could experience timeouts when uploading data to Express.

They suggested that splitting the data into 500kb chunks worked for them. I want to experiment to see if we can find another solution that won't require too much data manipulation.

First, we could try setting the default timeout value on all of our HTTP requests to 2 or 3 minutes (it's 60s by default). We'd have to see what the ramifications of that would be. For example, how many pending HTTP requests can be active at the same time?
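Raising the timeout is likely just a field on the request table: in stock GMod, the HTTPRequest structure accepts a `timeout` field that defaults to 60 seconds. A sketch of the change (other fields abbreviated; the URL and callbacks are stand-ins):

```lua
-- Sketch: a longer timeout on the upload request
HTTP( {
    method = "POST",
    url = uploadUrl, -- stand-in for the Express API endpoint
    body = payload,
    type = "application/octet-stream",
    timeout = 180,   -- 3 minutes instead of the 60s default
    success = onSuccess,
    failed = onFailure
} )
```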

Default GMod uses HTTP/1.1, so something fancy like HTTP/2 streaming is out of the question.

How to make the addon more robust to fatal API issues?

The web is the web. Stuff happens.

Sometimes numbskulls like me accidentally push breaking changes, sometimes Cloudflare has outages, sometimes players have firewalls... etc.

So what is Express supposed to do when it can't reach the API? Should it fallback to regular net messages, effectively acting like (or maybe directly using) Netstream? This would complicate the code a bit, and would probably create some annoying regression issues, but it would make messages sent with Express much more reliable.

And what if only one party has issues connecting to the API? Would the Netstream solution work in this case too?

Give the server more control over how clients interact with the API

Right now, the server retrieves two tokens on startup; one for itself, and one for all clients.
This client token can be used to write or retrieve data for as long as it's in the KV store.

This creates an avenue for attack wherein a malicious client could use their key to upload any amount of data at any rate.

One low-hanging improvement: Have the server issue clients short-term (or even one-time) tokens. The client code would passively manage the tokens, asking for a new one when it needs one.

Another improvement: force clients to inform the server of any would-be uploads before they happen. Clients would send the server the hash and size of their data, and the server would respond with a specially crafted token (maybe a JWT?) to use for that specific upload. The API could then validate that the upload is exactly what was promised (size and hash match), responding with an error (and maybe a short-term API timeout) if it isn't.

Doing transactions in this way would require more overhead, but the server could, for any reason, refuse to grant a token for the proposed upload. This gives a lot of flexibility to server owners for, realistically, a minor performance hit.
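Client-side, the proposed handshake could look roughly like this (all net message names here are hypothetical):

```lua
-- Hypothetical pre-upload handshake: announce the upload, wait for a token
local function requestUploadToken( data, callback )
    local hash = util.SHA256( data ) -- hash of the compressed payload

    net.Start( "express_upload_request" )
    net.WriteString( hash )
    net.WriteUInt( #data, 32 )
    net.SendToServer()

    -- Server replies with a single-use token scoped to this exact payload
    net.Receive( "express_upload_token", function()
        callback( net.ReadString() )
    end )
end
```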

Add a backup for K/V

Cloudflare's K/V has a major issue for Express: it doesn't guarantee that the value you store will be accessible in every region immediately.

For Express, this means the server could upload a payload that some clients simply couldn't see. The issue happens in reverse, too: a client could upload a payload that the server couldn't see if they were in different regions.

K/V is super fast, faster than the alternatives, so I'd like to keep using it. I think the best plan is to also use Cloudflare's D1 when it reaches general availability as a backup.

So for every request, we'd now store the data in both KV and D1 and for every retrieval, check KV first and then check D1 if the ID wasn't found.

Or, if D1 latency gets low enough, we could just use D1 entirely 🀷

Lots of measurements and testing to do.
A few open questions:

  • How does D1 latency compare to KV?
  • How much extra latency is added by storing in both KV and D1?
  • What does D1 pricing look like?

Allow another argument to .Send that handles timeouts or errors

To do this, we have to make proof sending mandatory, with optional callbacks for proof receiving.

Then, we can send a success bool on the proof message to indicate errors or timeouts.

This will let the sender handle cases where the recipient never received their message.
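The resulting signature might end up looking something like this (purely speculative API shape, not the current behavior):

```lua
-- Speculative: an onError callback alongside onProof
express.Send( "my_message", data, recipients,
    function( ply )      -- onProof: recipient confirmed the download
        print( ply, "got the message" )
    end,
    function( ply, err ) -- onError: timeout or failure reported via the proof message
        print( ply, "failed to receive:", err )
    end
)
```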

Disable Express' automatic registration/version checking in a testing environment

When running our GLuaTest suite in Github's Actions, Express still reaches out to the default domain and does a revision check and registration.

This is unnecessary for our use case, so we should tell Express not to bother when it's in such an environment.

There may be a convar we can check (or create) that would indicate it's in a test environment. A little bit of discovery is necessary for this.

How to handle proofs when sending many of the same messages?

Here's the bit of code in question:
https://github.com/CFC-Servers/gm_express/blob/main/lua/gm_express/sh_init.lua#L131-L141

When a message is sent, it creates a new entry in the express._awaitingProof table, using the hash of the data (prefixed with the recipient's Steam ID, if called serverside) and then removes the entry from the table when proof is received.

But what should Express do if the same message with the same data is sent multiple times in a short timespan?
I suppose the expected behavior would be to get a callback for each message sent, but right now it'd only run the callback once (the first run would remove it from the callbacks table).

Perhaps we could make an incrementing transactionID that would get automatically sent and incremented with each message, and then use that number in the key for express._awaitingProof. Then, the recipient would reply with same transactionID we sent them, and we'd use that to run the correct callback.
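Sketched out, that transaction-ID keying might look like this (names hypothetical):

```lua
-- Hypothetical: unique proof keys via an incrementing transaction ID
local transactionID = 0

local function makeProofKey( steamID, dataHash )
    transactionID = transactionID + 1
    return steamID .. "-" .. dataHash .. "-" .. transactionID, transactionID
end

-- The recipient echoes the same transactionID back, letting the sender
-- look up (and run) exactly the callback for that send.
```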

We could implement this transparently so the user doesn't have to worry about it, but I worry it could create a possible exploit where a malicious actor replies with a different transactionID, potentially running the wrong callback. Granted, it would still be prefixed with their SteamID, so they'd only be running a callback we already expected them to run... I dunno.

Just a braindump for now, will revisit when some of the more pressing tasks have been completed.

Recipients randomly receiving 404's on GET

This is a really annoying issue that, according to Cloudflare's docs, shouldn't be happening.

The Problem

Cloudflare KV docs state:

When you write to KV, your data is written to central data stores. It is not sent automatically to every location’s cache

Initial reads from a location do not have a cached value. The data must be read from the nearest central data store, resulting in a slower response.

So what they're saying here is that KV achieves low latency by caching KV lookups. On the first read from a given location, it'll get a cache MISS and have to traverse the Cloudflare nodes to find the actual value, and then it'll cache that value to the location where the lookup occurred.

This is fine. Good, even - we can deal with that added latency for a single client.

A quick refresher on how Express works:
Our current flow is something like this:

  1. Sender uploads their data to Cloudflare
  2. Express Service generates a UUID and uses it as the key for the Data while saving to KV
  3. Express Service replies with 200 and the UUID
  4. Sender sends a net message to the recipient containing the UUID
  5. Recipient receives the net message, reads the UUID, and makes a GET to Express Service for that UUID
  6. Express Service asks KV for that UUID, returns the Data to the Recipient

However, what we're actually seeing, is the Recipient asks Express Service for the UUID, and gets a 404!

How does that work? It's not possible for it to actually be a 404 at this point because the data definitely exists in KV. We wouldn't have the UUID unless it were already stored!

The worst part about this bug is that this 404 is actually cached for that location! Meaning anyone else trying to read the Data for that UUID just gets a cached 404 πŸ€¦β€β™‚οΈ


The Solutions

Assuming this isn't some obscure bug with the Express Service (I suppose it's possible), there are a few different ways to tackle this.

1. Have clients retry for the data
(Proposed in #34)
We could just make the client ask for the data over and over until the cache busts? πŸ˜…
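A retry with backoff could be as simple as the sketch below, where `fetchData` stands in for whatever Express' internal GET helper is (the `(onData, onFail)` callback shape is assumed):

```lua
-- Sketch of option 1: retry the GET with exponential backoff
local function fetchWithRetry( id, onData, attempt )
    attempt = attempt or 1

    fetchData( id, onData, function()
        if attempt >= 5 then return end -- give up after 5 tries

        timer.Simple( 2 ^ attempt, function()
            fetchWithRetry( id, onData, attempt + 1 )
        end )
    end )
end
```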

2. Have the sender wait a brief moment before sending the net message with the data's UUID
(Proposed in #34)
If it really was a timing issue, perhaps delaying the net message by even a few fractions of a second could increase the chance of a successful first GET. This would be a very minor delay, but it would increase the minimum time of every message by that much.

3. Create a comprehensive cross-region, bare-bones example demonstrating this as a Cloudflare bug and ask Cloudflare to fix it
This should probably be done, if at least just to confirm that the bug is Cloudflare's like I'm describing it... But man, that's a lot of work.

Async for pon encode/decode and util compress/decompress?

With especially large data structures, just the process of preparing the data to send or read can be massively taxing on the server.

Because we already operate on callbacks, we could run these processes asynchronously to spread the work out over multiple ticks, easing any massive spikes that might occur.
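One way to do that is a coroutine that yields between slices of work, pumped from a Think hook. A rough sketch (real slicing of pon/util.Compress would need cooperation from the encoder itself; `processChunk` is a stand-in):

```lua
-- Rough sketch: spreading per-chunk work across ticks with a coroutine
local function asyncProcess( chunks, processChunk, onDone )
    local co = coroutine.create( function()
        local results = {}

        for i, chunk in ipairs( chunks ) do
            results[i] = processChunk( chunk )
            coroutine.yield() -- hand control back until the next tick
        end

        onDone( results )
    end )

    hook.Add( "Think", "Express_AsyncProcess", function()
        if coroutine.status( co ) == "dead" then
            hook.Remove( "Think", "Express_AsyncProcess" )
            return
        end

        coroutine.resume( co )
    end )
end
```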

Back-end

Hello. Do you have plans to implement a JS back-end beyond Cloudflare? For me personally, and I think for many users of your library, it would be much more convenient to host these files on the same VDS.
