Trying to make the Rust game networking ecosystem easier.
- Renet: Server/client network library for games written in Rust.
- Bit Serializer: Serialize data with control of every bit.
- Shipyard Rapier: A Rapier physics engine plugin for the Shipyard ECS.
Server/client network library for multiplayer games with authentication and connection management, made with Rust.
License: Apache License 2.0
This is an issue when using `bevy_renet`.
Assuming a standard plugin setup:
app.add_plugins(RenetServerPlugin);
app.add_plugins(NetcodeServerPlugin);
`renet` will already broadcast messages to a newly connected client for a single tick before sending the `ServerEvent::ClientConnected` bevy event.
This means the game doesn't have the opportunity to react to the client joining first.
The flow is as follows:
### Sometime during Tick 0 ###
- there are already connected clients
- new client connects
### Tick 1 ###
PreUpdate:
- RenetServerPlugin: fn update_system()
- NetcodeServerPlugin: fn update_system() // new client is added to the list of clients
// ServerEvent::ClientConnected event is queued in RenetServer.events
Update:
- app: game logic queues new messages
PostUpdate:
- transport: fn send_packets() // messages are sent to everyone including new client
### Tick 2 ###
PreUpdate:
- RenetServerPlugin: fn update_system() // sends queued ServerEvent::ClientConnected event to bevy
- NetcodeServerPlugin: fn update_system()
Update:
- app: game logic queues new messages // also reacts to ClientConnected, queues client setup messages
PostUpdate:
- transport: fn send_packets() // new client receives setup messages, but after regular messages from Tick 1
The issue comes from the fact that in `NetcodeServerPlugin`, the update method is set to run after the one from `RenetServerPlugin`:
// transport.rs
impl Plugin for NetcodeServerPlugin {
    fn build(&self, app: &mut App) {
        app.add_event::<NetcodeTransportError>();
        app.add_systems(
            PreUpdate,
            Self::update_system
                .run_if(resource_exists::<NetcodeServerTransport>())
                .run_if(resource_exists::<RenetServer>())
                .after(RenetServerPlugin::update_system),
        );
        // ...
So during the first tick, RenetServerPlugin does not have any events to send.
After a new client connects, `ServerEvent::ClientConnected` should be sent before `send_packets()` is called. This would allow the game to react to the new arrival before sending them any messages.
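As a workaround until the ordering changes, game logic can drain `RenetServer`'s own event queue directly instead of waiting for the forwarded bevy event; `renet`'s non-bevy examples read events with `get_event()` this way. A sketch, assuming a `renet` version where `RenetServer::get_event` exists, and noting that draining here competes with `RenetServerPlugin`'s own event forwarding, so it would replace rather than complement the plugin's `ServerEvent` handling:

```rust
// Workaround sketch: read connection events straight from the RenetServer
// resource in Update, before any system broadcasts messages this tick.
fn handle_connections(mut server: ResMut<RenetServer>) {
    while let Some(_event) = server.get_event() {
        // React to ServerEvent::ClientConnected here and queue the new
        // client's setup messages before the broadcast systems run.
    }
}
```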
Heyo! I'm still digging into this, but I think there might be an issue with how the UDP socket receives/sends if the server is not bound to `127.0.0.1`.
Just wanted to make this for some visibility, but I'm going to look more into it. Will update this as I figure out more.
I made this repo to just test a raw udp socket: https://github.com/Aceeri/udp-screech and that seems to work fine, but copy pasting the udp socket code from that to the bevy_renet example doesn't seem to.
I'm transferring data via `DefaultChannel::Chunk` and it's drowning me in info messages. I assume I can disable them, but I don't know how.
2023-01-22T20:53:27.857094Z INFO rechannel::channel::block: Generated SliceMessage 22 from chunk_id 92. (23/42)
2023-01-22T20:53:27.857337Z INFO rechannel::channel::block: Generated SliceMessage 23 from chunk_id 92. (24/42)
2023-01-22T20:53:27.857558Z INFO rechannel::channel::block: Generated SliceMessage 24 from chunk_id 92. (25/42)
2023-01-22T20:53:27.857791Z INFO rechannel::channel::block: Generated SliceMessage 25 from chunk_id 92. (26/42)
2023-01-22T20:53:27.858026Z INFO rechannel::channel::block: Generated SliceMessage 26 from chunk_id 92. (27/42)
2023-01-22T20:53:27.858247Z INFO rechannel::channel::block: Generated SliceMessage 27 from chunk_id 92. (28/42)
2023-01-22T20:53:27.858487Z INFO rechannel::channel::block: Generated SliceMessage 28 from chunk_id 92. (29/42)
2023-01-22T20:53:27.858725Z INFO rechannel::channel::block: Generated SliceMessage 29 from chunk_id 92. (30/42)
2023-01-22T20:53:27.858974Z INFO rechannel::channel::block: Generated SliceMessage 30 from chunk_id 92. (31/42)
2023-01-22T20:53:27.859220Z INFO rechannel::channel::block: Generated SliceMessage 31 from chunk_id 92. (32/42)
2023-01-22T20:53:27.859464Z INFO rechannel::channel::block: Generated SliceMessage 32 from chunk_id 92. (33/42)
2023-01-22T20:53:27.859709Z INFO rechannel::channel::block: Generated SliceMessage 33 from chunk_id 92. (34/42)
2023-01-22T20:53:27.859958Z INFO rechannel::channel::block: Generated SliceMessage 34 from chunk_id 92. (35/42)
2023-01-22T20:53:27.860215Z INFO rechannel::channel::block: Generated SliceMessage 35 from chunk_id 92. (36/42)
2023-01-22T20:53:27.860455Z INFO rechannel::channel::block: Generated SliceMessage 36 from chunk_id 92. (37/42)
2023-01-22T20:53:27.860717Z INFO rechannel::channel::block: Generated SliceMessage 37 from chunk_id 92. (38/42)
2023-01-22T20:53:27.860960Z INFO rechannel::channel::block: Generated SliceMessage 38 from chunk_id 92. (39/42)
2023-01-22T20:53:27.861191Z INFO rechannel::channel::block: Generated SliceMessage 39 from chunk_id 92. (40/42)
2023-01-22T20:53:27.888031Z INFO rechannel::channel::block: Acked SliceMessage 0 from chunk_id 92. (1/42)
2023-01-22T20:53:27.888293Z INFO rechannel::channel::block: Acked SliceMessage 1 from chunk_id 92. (2/42)
2023-01-22T20:53:27.888576Z INFO rechannel::channel::block: Acked SliceMessage 2 from chunk_id 92. (3/42)
2023-01-22T20:53:27.888846Z INFO rechannel::channel::block: Acked SliceMessage 3 from chunk_id 92. (4/42)
2023-01-22T20:53:27.889063Z INFO rechannel::channel::block: Acked SliceMessage 4 from chunk_id 92. (5/42)
2023-01-22T20:53:27.889286Z INFO rechannel::channel::block: Acked SliceMessage 5 from chunk_id 92. (6/42)
2023-01-22T20:53:27.889516Z INFO rechannel::channel::block: Acked SliceMessage 6 from chunk_id 92. (7/42)
2023-01-22T20:53:27.889747Z INFO rechannel::channel::block: Acked SliceMessage 7 from chunk_id 92. (8/42)
2023-01-22T20:53:27.889980Z INFO rechannel::channel::block: Acked SliceMessage 8 from chunk_id 92. (9/42)
2023-01-22T20:53:27.890223Z INFO rechannel::channel::block: Acked SliceMessage 9 from chunk_id 92. (10/42)
2023-01-22T20:53:27.890445Z INFO rechannel::channel::block: Acked SliceMessage 10 from chunk_id 92. (11/42)
2023-01-22T20:53:27.890711Z INFO rechannel::channel::block: Acked SliceMessage 11 from chunk_id 92. (12/42)
2023-01-22T20:53:27.890939Z INFO rechannel::channel::block: Acked SliceMessage 12 from chunk_id 92. (13/42)
2023-01-22T20:53:27.891216Z INFO rechannel::channel::block: Acked SliceMessage 13 from chunk_id 92. (14/42)
2023-01-22T20:53:27.891468Z INFO rechannel::channel::block: Acked SliceMessage 14 from chunk_id 92. (15/42)
2023-01-22T20:53:27.891709Z INFO rechannel::channel::block: Acked SliceMessage 15 from chunk_id 92. (16/42)
2023-01-22T20:53:27.891942Z INFO rechannel::channel::block: Acked SliceMessage 16 from chunk_id 92. (17/42)
2023-01-22T20:53:27.892160Z INFO rechannel::channel::block: Acked SliceMessage 17 from chunk_id 92. (18/42)
2023-01-22T20:53:27.892365Z INFO rechannel::channel::block: Acked SliceMessage 18 from chunk_id 92. (19/42)
2023-01-22T20:53:27.892582Z INFO rechannel::channel::block: Acked SliceMessage 19 from chunk_id 92. (20/42)
2023-01-22T20:53:27.893722Z INFO rechannel::channel::block: Generated SliceMessage 40 from chunk_id 92. (41/42)
2023-01-22T20:53:27.893992Z INFO rechannel::channel::block: Generated SliceMessage 41 from chunk_id 92. (42/42)
2023-01-22T20:53:27.951005Z INFO rechannel::channel::block: Acked SliceMessage 20 from chunk_id 92. (21/42)
2023-01-22T20:53:27.951341Z INFO rechannel::channel::block: Acked SliceMessage 21 from chunk_id 92. (22/42)
2023-01-22T20:53:27.951805Z INFO rechannel::channel::block: Acked SliceMessage 22 from chunk_id 92. (23/42)
2023-01-22T20:53:27.952095Z INFO rechannel::channel::block: Acked SliceMessage 23 from chunk_id 92. (24/42)
2023-01-22T20:53:27.952335Z INFO rechannel::channel::block: Acked SliceMessage 24 from chunk_id 92. (25/42)
2023-01-22T20:53:27.952679Z INFO rechannel::channel::block: Acked SliceMessage 25 from chunk_id 92. (26/42)
2023-01-22T20:53:27.953007Z INFO rechannel::channel::block: Acked SliceMessage 26 from chunk_id 92. (27/42)
2023-01-22T20:53:27.953307Z INFO rechannel::channel::block: Acked SliceMessage 27 from chunk_id 92. (28/42)
2023-01-22T20:53:27.953603Z INFO rechannel::channel::block: Acked SliceMessage 28 from chunk_id 92. (29/42)
2023-01-22T20:53:27.953899Z INFO rechannel::channel::block: Acked SliceMessage 29 from chunk_id 92. (30/42)
2023-01-22T20:53:27.954130Z INFO rechannel::channel::block: Acked SliceMessage 30 from chunk_id 92. (31/42)
2023-01-22T20:53:27.954374Z INFO rechannel::channel::block: Acked SliceMessage 31 from chunk_id 92. (32/42)
2023-01-22T20:53:27.954607Z INFO rechannel::channel::block: Acked SliceMessage 32 from chunk_id 92. (33/42)
2023-01-22T20:53:27.954840Z INFO rechannel::channel::block: Acked SliceMessage 33 from chunk_id 92. (34/42)
2023-01-22T20:53:27.955061Z INFO rechannel::channel::block: Acked SliceMessage 34 from chunk_id 92. (35/42)
2023-01-22T20:53:27.955301Z INFO rechannel::channel::block: Acked SliceMessage 35 from chunk_id 92. (36/42)
2023-01-22T20:53:27.955587Z INFO rechannel::channel::block: Acked SliceMessage 36 from chunk_id 92. (37/42)
2023-01-22T20:53:27.955817Z INFO rechannel::channel::block: Acked SliceMessage 37 from chunk_id 92. (38/42)
2023-01-22T20:53:27.956032Z INFO rechannel::channel::block: Acked SliceMessage 38 from chunk_id 92. (39/42)
2023-01-22T20:53:27.956235Z INFO rechannel::channel::block: Acked SliceMessage 39 from chunk_id 92. (40/42)
2023-01-22T20:53:27.956488Z INFO rechannel::channel::block: Acked SliceMessage 40 from chunk_id 92. (41/42)
2023-01-22T20:53:27.956668Z INFO rechannel::channel::block: Acked SliceMessage 41 from chunk_id 92. (42/42)
2023-01-22T20:53:27.956927Z INFO rechannel::channel::block: Finished sending chunk message 92.
2023-01-22T20:53:27.958047Z INFO rechannel::channel::block: Generated SliceMessage 0 from chunk_id 93. (1/42)
2023-01-22T20:53:27.958298Z INFO rechannel::channel::block: Generated SliceMessage 1 from chunk_id 93. (2/42)
2023-01-22T20:53:27.958547Z INFO rechannel::channel::block: Generated SliceMessage 2 from chunk_id 93. (3/42)
2023-01-22T20:53:27.958822Z INFO rechannel::channel::block: Generated SliceMessage 3 from chunk_id 93. (4/42)
2023-01-22T20:53:27.959060Z INFO rechannel::channel::block: Generated SliceMessage 4 from chunk_id 93. (5/42)
2023-01-22T20:53:27.959277Z INFO rechannel::channel::block: Generated SliceMessage 5 from chunk_id 93. (6/42)
2023-01-22T20:53:27.959515Z INFO rechannel::channel::block: Generated SliceMessage 6 from chunk_id 93. (7/42)
2023-01-22T20:53:27.959727Z INFO rechannel::channel::block: Generated SliceMessage 7 from chunk_id 93. (8/42)
2023-01-22T20:53:27.959948Z INFO rechannel::channel::block: Generated SliceMessage 8 from chunk_id 93. (9/42)
2023-01-22T20:53:27.960184Z INFO rechannel::channel::block: Generated SliceMessage 9 from chunk_id 93. (10/42)
2023-01-22T20:53:27.960417Z INFO rechannel::channel::block: Generated SliceMessage 10 from chunk_id 93. (11/42)
2023-01-22T20:53:27.960650Z INFO rechannel::channel::block: Generated SliceMessage 11 from chunk_id 93. (12/42)
2023-01-22T20:53:27.960889Z INFO rechannel::channel::block: Generated SliceMessage 12 from chunk_id 93. (13/42)
2023-01-22T20:53:27.961115Z INFO rechannel::channel::block: Generated SliceMessage 13 from chunk_id 93. (14/42)
2023-01-22T20:53:27.961352Z INFO rechannel::channel::block: Generated SliceMessage 14 from chunk_id 93. (15/42)
2023-01-22T20:53:27.961561Z INFO rechannel::channel::block: Generated SliceMessage 15 from chunk_id 93. (16/42)
2023-01-22T20:53:27.961803Z INFO rechannel::channel::block: Generated SliceMessage 16 from chunk_id 93. (17/42)
2023-01-22T20:53:27.962035Z INFO rechannel::channel::block: Generated SliceMessage 17 from chunk_id 93. (18/42)
2023-01-22T20:53:27.962284Z INFO rechannel::channel::block: Generated SliceMessage 18 from chunk_id 93. (19/42)
2023-01-22T20:53:27.962554Z INFO rechannel::channel::block: Generated SliceMessage 19 from chunk_id 93. (20/42)
2023-01-22T20:53:27.970496Z INFO rechannel::channel::block: Generated SliceMessage 20 from chunk_id 93. (21/42)
2023-01-22T20:53:27.970885Z INFO rechannel::channel::block: Generated SliceMessage 21 from chunk_id 93. (22/42)
2023-01-22T20:53:27.971123Z INFO rechannel::channel::block: Generated SliceMessage 22 from chunk_id 93. (23/42)
2023-01-22T20:53:27.971413Z INFO rechannel::channel::block: Generated SliceMessage 23 from chunk_id 93. (24/42)
2023-01-22T20:53:27.971627Z INFO rechannel::channel::block: Generated SliceMessage 24 from chunk_id 93. (25/42)
2023-01-22T20:53:27.971865Z INFO rechannel::channel::block: Generated SliceMessage 25 from chunk_id 93. (26/42)
2023-01-22T20:53:27.972133Z INFO rechannel::channel::block: Generated SliceMessage 26 from chunk_id 93. (27/42)
2023-01-22T20:53:27.972368Z INFO rechannel::channel::block: Generated SliceMessage 27 from chunk_id 93. (28/42)
2023-01-22T20:53:27.972592Z INFO rechannel::channel::block: Generated SliceMessage 28 from chunk_id 93. (29/42)
2023-01-22T20:53:27.972853Z INFO rechannel::channel::block: Generated SliceMessage 29 from chunk_id 93. (30/42)
2023-01-22T20:53:27.973086Z INFO rechannel::channel::block: Generated SliceMessage 30 from chunk_id 93. (31/42)
2023-01-22T20:53:27.973328Z INFO rechannel::channel::block: Generated SliceMessage 31 from chunk_id 93. (32/42)
2023-01-22T20:53:27.973588Z INFO rechannel::channel::block: Generated SliceMessage 32 from chunk_id 93. (33/42)
2023-01-22T20:53:27.973824Z INFO rechannel::channel::block: Generated SliceMessage 33 from chunk_id 93. (34/42)
2023-01-22T20:53:27.974076Z INFO rechannel::channel::block: Generated SliceMessage 34 from chunk_id 93. (35/42)
2023-01-22T20:53:27.974349Z INFO rechannel::channel::block: Generated SliceMessage 35 from chunk_id 93. (36/42)
2023-01-22T20:53:27.974607Z INFO rechannel::channel::block: Generated SliceMessage 36 from chunk_id 93. (37/42)
2023-01-22T20:53:27.974859Z INFO rechannel::channel::block: Generated SliceMessage 37 from chunk_id 93. (38/42)
2023-01-22T20:53:27.975092Z INFO rechannel::channel::block: Generated SliceMessage 38 from chunk_id 93. (39/42)
2023-01-22T20:53:27.975328Z INFO rechannel::channel::block: Generated SliceMessage 39 from chunk_id 93. (40/42)
2023-01-22T20:53:27.999538Z INFO rechannel::channel::block: Generated SliceMessage 40 from chunk_id 93. (41/42)
2023-01-22T20:53:27.999851Z INFO rechannel::channel::block: Generated SliceMessage 41 from chunk_id 93. (42/42)
2023-01-22T20:53:28.030023Z INFO rechannel::channel::block: Acked SliceMessage 0 from chunk_id 93. (1/42)
2023-01-22T20:53:28.030349Z INFO rechannel::channel::block: Acked SliceMessage 1 from chunk_id 93. (2/42)
2023-01-22T20:53:28.030572Z INFO rechannel::channel::block: Acked SliceMessage 2 from chunk_id 93. (3/42)
2023-01-22T20:53:28.030895Z INFO rechannel::channel::block: Acked SliceMessage 3 from chunk_id 93. (4/42)
2023-01-22T20:53:28.031173Z INFO rechannel::channel::block: Acked SliceMessage 4 from chunk_id 93. (5/42)
2023-01-22T20:53:28.031449Z INFO rechannel::channel::block: Acked SliceMessage 5 from chunk_id 93. (6/42)
2023-01-22T20:53:28.031691Z INFO rechannel::channel::block: Acked SliceMessage 6 from chunk_id 93. (7/42)
2023-01-22T20:53:28.031923Z INFO rechannel::channel::block: Acked SliceMessage 7 from chunk_id 93. (8/42)
2023-01-22T20:53:28.032148Z INFO rechannel::channel::block: Acked SliceMessage 8 from chunk_id 93. (9/42)
2023-01-22T20:53:28.032393Z INFO rechannel::channel::block: Acked SliceMessage 9 from chunk_id 93. (10/42)
2023-01-22T20:53:28.032595Z INFO rechannel::channel::block: Acked SliceMessage 10 from chunk_id 93. (11/42)
2023-01-22T20:53:28.032814Z INFO rechannel::channel::block: Acked SliceMessage 11 from chunk_id 93. (12/42)
2023-01-22T20:53:28.033063Z INFO rechannel::channel::block: Acked SliceMessage 12 from chunk_id 93. (13/42)
2023-01-22T20:53:28.033298Z INFO rechannel::channel::block: Acked SliceMessage 13 from chunk_id 93. (14/42)
2023-01-22T20:53:28.033488Z INFO rechannel::channel::block: Acked SliceMessage 14 from chunk_id 93. (15/42)
2023-01-22T20:53:28.033673Z INFO rechannel::channel::block: Acked SliceMessage 15 from chunk_id 93. (16/42)
2023-01-22T20:53:28.033886Z INFO rechannel::channel::block: Acked SliceMessage 16 from chunk_id 93. (17/42)
2023-01-22T20:53:28.034139Z INFO rechannel::channel::block: Acked SliceMessage 17 from chunk_id 93. (18/42)
2023-01-22T20:53:28.034425Z INFO rechannel::channel::block: Acked SliceMessage 18 from chunk_id 93. (19/42)
2023-01-22T20:53:28.034661Z INFO rechannel::channel::block: Acked SliceMessage 19 from chunk_id 93. (20/42)
2023-01-22T20:53:28.062003Z INFO rechannel::channel::block: Acked SliceMessage 40 from chunk_id 93. (21/42)
2023-01-22T20:53:28.062358Z INFO rechannel::channel::block: Acked SliceMessage 41 from chunk_id 93. (22/42)
2023-01-22T20:53:28.062605Z INFO rechannel::channel::block: Acked SliceMessage 20 from chunk_id 93. (23/42)
2023-01-22T20:53:28.062842Z INFO rechannel::channel::block: Acked SliceMessage 21 from chunk_id 93. (24/42)
2023-01-22T20:53:28.063065Z INFO rechannel::channel::block: Acked SliceMessage 22 from chunk_id 93. (25/42)
2023-01-22T20:53:28.063310Z INFO rechannel::channel::block: Acked SliceMessage 23 from chunk_id 93. (26/42)
2023-01-22T20:53:28.063530Z INFO rechannel::channel::block: Acked SliceMessage 24 from chunk_id 93. (27/42)
2023-01-22T20:53:28.063765Z INFO rechannel::channel::block: Acked SliceMessage 25 from chunk_id 93. (28/42)
2023-01-22T20:53:28.063974Z INFO rechannel::channel::block: Acked SliceMessage 26 from chunk_id 93. (29/42)
2023-01-22T20:53:28.064184Z INFO rechannel::channel::block: Acked SliceMessage 27 from chunk_id 93. (30/42)
2023-01-22T20:53:28.064401Z INFO rechannel::channel::block: Acked SliceMessage 28 from chunk_id 93. (31/42)
2023-01-22T20:53:28.064635Z INFO rechannel::channel::block: Acked SliceMessage 29 from chunk_id 93. (32/42)
2023-01-22T20:53:28.064861Z INFO rechannel::channel::block: Acked SliceMessage 30 from chunk_id 93. (33/42)
2023-01-22T20:53:28.065090Z INFO rechannel::channel::block: Acked SliceMessage 31 from chunk_id 93. (34/42)
2023-01-22T20:53:28.065322Z INFO rechannel::channel::block: Acked SliceMessage 32 from chunk_id 93. (35/42)
2023-01-22T20:53:28.065557Z INFO rechannel::channel::block: Acked SliceMessage 33 from chunk_id 93. (36/42)
2023-01-22T20:53:28.065803Z INFO rechannel::channel::block: Acked SliceMessage 34 from chunk_id 93. (37/42)
2023-01-22T20:53:28.066036Z INFO rechannel::channel::block: Acked SliceMessage 35 from chunk_id 93. (38/42)
2023-01-22T20:53:28.066255Z INFO rechannel::channel::block: Acked SliceMessage 36 from chunk_id 93. (39/42)
2023-01-22T20:53:28.066471Z INFO rechannel::channel::block: Acked SliceMessage 37 from chunk_id 93. (40/42)
2023-01-22T20:53:28.066708Z INFO rechannel::channel::block: Acked SliceMessage 38 from chunk_id 93. (41/42)
2023-01-22T20:53:28.066928Z INFO rechannel::channel::block: Acked SliceMessage 39 from chunk_id 93. (42/42)
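These messages come from the `tracing` ecosystem, so a per-module filter directive should silence them without touching your own logs. A config sketch assuming bevy's `LogPlugin` (with plain `tracing_subscriber` or `env_logger`, the equivalent is running with `RUST_LOG=info,rechannel::channel::block=warn`):

```rust
// Config fragment: keep the global level at INFO but raise the noisy
// rechannel module (the module path shown in the log lines) to WARN.
app.add_plugins(DefaultPlugins.set(bevy::log::LogPlugin {
    filter: "info,rechannel::channel::block=warn".into(),
    ..Default::default()
}));
```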
Hello,
I've been using Renet for about 3 weeks now. Overall, great usability, but one thing is a bit frustrating.
I think the client and server APIs should use specific messages instead of generic vectors of bytes.
One way around it could be to implement something like `Framed` from Tokio, though maybe that is overkill.
This way, one would pass a codec at creation and get nicely typed frames!
Is there a better way to handle switching between server and client? I mean hosting a server, then disconnecting and joining another server, without needing to restart the app. I'm not sure where the network gets closed so that the data is cleaned up.
I looked into the code a bit; there is no shutdown or disconnect on the server or client. Should there be cleanup for those configs?
Or I could be wrong. With a single plugin, maybe you just need to `insert_resource` the server or client to alternate between them for testing. Is there a `remove_resource` to handle closing the connection?
The Bevy Renet example doesn't compile! I would love it if you showed what the Cargo.toml should look like, and fixed the part where you try to import `renet::ClientId`, which doesn't exist!
All examples / usages use `127.0.0.1`, which makes the server listen only on the loopback interface. Using `0.0.0.0` doesn't work either, for some weird reason related to the UDP protocol itself (something to do with it being unable to tell which interface to reply on). So is there a solution to that problem without hardcoding / passing the actual WAN IP address directly?
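For reference, binding to `0.0.0.0` itself works fine with a plain `std::net::UdpSocket`; the catch is that the *public* address you put in the server config / connect token must be one clients can actually reach, never `0.0.0.0`. A self-contained sketch (the connect-to-discover trick is a common workaround, not renet API; `203.0.113.1` is a documentation-range placeholder):

```rust
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    // Listen on all interfaces; port 0 lets the OS pick an ephemeral port.
    let socket = UdpSocket::bind("0.0.0.0:0")?;

    // "Connecting" a UDP socket sends no packets; it only fixes the route,
    // which lets us read back the local IP the OS would use for a peer.
    let advertised = match socket.connect("203.0.113.1:9") {
        Ok(()) => socket.local_addr()?.ip(),
        Err(_) => "127.0.0.1".parse().unwrap(), // no route (offline): fall back
    };

    println!("bind on 0.0.0.0, advertise {advertised}");
    assert_ne!(socket.local_addr()?.port(), 0); // OS assigned a real port
    Ok(())
}
```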
Hey guys, I was trying to use `bevy_renet` from crates.io and I realised that the examples on the `master` branch are really far ahead and include many breaking changes, such as the `serde` feature. Can we push those changes? What's holding back a new release?
When sending small packets, they get grouped into one big packet if possible; however, the overhead is not taken into consideration. This means that sending many small packets, for example 255 packets of 4 bytes, goes over SLICE_LENGTH. Sending even more, even smaller packets, like 1000 packets of 1 byte, also goes over the buffer size (1400) and even the maximum MTU (1500). These messages then either get dropped by the transport's max packet size, or get truncated to the buffer length, resulting in a corrupt packet.
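A back-of-the-envelope sketch of the problem (the 3-byte per-message header is a made-up number for illustration; renet's real overhead differs):

```rust
// Packing N small messages costs N * (header + payload), not N * payload,
// so the per-message overhead dominates when payloads are tiny.
fn packed_size(message_count: usize, payload_len: usize, header_len: usize) -> usize {
    message_count * (header_len + payload_len)
}

fn main() {
    const BUFFER_LEN: usize = 1400; // the buffer size mentioned above

    // 1000 one-byte messages with an assumed 3-byte header per message:
    let total = packed_size(1000, 1, 3);
    assert_eq!(total, 4000);
    // Payloads only sum to 1000 bytes, but the packed packet overflows anyway.
    assert!(total > BUFFER_LEN);
    println!("{total} bytes > buffer of {BUFFER_LEN}");
}
```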
This may be a dumb issue (I'm quite new to networking), but I noticed that there is a seemingly arbitrary limit on the number of concurrent clients supported by `renetcode`. I was wondering why there is such a limit, whether there are any good workarounds that don't involve setting up a separate web server for matchmaking (perhaps something to spin up multiple instances of the server), and whether there's anything else I'm missing here.
After calling `RenetClient::disconnect()`, the method `RenetClient::is_connecting()` returns true.
Currently, if I send an empty message, it will not be received.
But it's an issue for non-self-describing formats such as `bincode`.
For example, the following struct serializes into an empty `Vec`:
#[derive(Deserialize, Serialize)]
struct MyEvent;
It would be great if we could send empty messages. In such formats the following deserializes successfully:
let event: MyEvent = bincode::deserialize(&[]).unwrap();
I discovered this issue after an attempt to switch from `rmp_serde` to `bincode`. With `rmp_serde` it works because it is a self-describing format and never serializes into an empty `Vec`.
We have a connection event for the server; it would be very convenient to have a similar feature for the client. We currently have run conditions for checking states, but there is no way to check if the client has "just" connected. We could provide additional conditions for that too, but events would be more idiomatic for this case.
When working with an unreliable sequenced channel you sometimes want to receive only the last message. Currently I have to call `receive_message` manually until it returns `false`, which is not very ergonomic.
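A generic sketch of the drain-to-last pattern being described, using pure std with no renet API (`latest` is a hypothetical helper name):

```rust
/// Drain a stream of received messages and keep only the newest one,
/// mimicking "call receive_message until nothing is left".
fn latest<T>(messages: impl Iterator<Item = T>) -> Option<T> {
    messages.last()
}

fn main() {
    // Three snapshots arrived since the last frame; only the newest matters
    // on an unreliable sequenced channel.
    let snapshots = vec![b"tick 1".to_vec(), b"tick 2".to_vec(), b"tick 3".to_vec()];
    assert_eq!(latest(snapshots.into_iter()), Some(b"tick 3".to_vec()));
}
```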
There are many cases where I want to send the same message to multiple clients.
Currently, I need to clone my `Vec<u8>` for every receiver, which causes a lot of allocations.
Renet internally already converts this into a `Bytes` object, which can use reference counting to avoid cloning the data, which is exactly what I need.
It would be nice to have methods to send messages with a `Bytes` object instead of a `Vec<u8>`.
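The allocation saving here is the standard reference-counting one; a sketch with `std::sync::Arc` standing in for `Bytes` (which works the same way for shared, immutable buffers):

```rust
use std::sync::Arc;

fn main() {
    // One allocation for the serialized message...
    let message: Arc<[u8]> = Arc::from(vec![1u8, 2, 3, 4].into_boxed_slice());

    // ...then a cheap pointer copy per receiver instead of a deep clone.
    let receivers = 100;
    let per_client: Vec<Arc<[u8]>> = (0..receivers).map(|_| Arc::clone(&message)).collect();

    // All 100 "copies" share the original allocation.
    assert_eq!(Arc::strong_count(&message), receivers + 1);
    assert!(per_client.iter().all(|m| m[..] == message[..]));
}
```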
It would be great to wrap the client ID (currently `u64`) in a newtype. This would improve code readability and eliminate possible mistakes from using it in the wrong place.
Something like `Entity` in Bevy.
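A minimal sketch of such a newtype (all names here are illustrative, not renet's actual API):

```rust
/// Opaque client identifier wrapping the raw u64.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct ClientId(u64);

impl ClientId {
    pub const fn from_raw(id: u64) -> Self {
        Self(id)
    }
    pub const fn raw(self) -> u64 {
        self.0
    }
}

impl std::fmt::Display for ClientId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "ClientId({})", self.0)
    }
}

fn main() {
    let id = ClientId::from_raw(42);
    assert_eq!(id.raw(), 42);
    assert_eq!(id.to_string(), "ClientId(42)");
    // A function taking `ClientId` can no longer be called with some
    // unrelated u64 (a tick count, an entity index) by mistake.
}
```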
Currently `RenetServer::clients_id()` returns a `Vec`, but that's not very flexible, and users usually want to iterate over the IDs anyway. So I would suggest returning `impl Iterator` instead.
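What the suggested signature could look like, sketched with a stand-in struct (`Server` and its `connections` map are illustrative, not renet's internals):

```rust
use std::collections::HashMap;

/// Stand-in for a server that tracks connected clients.
struct Server {
    connections: HashMap<u64, String>, // client id -> connection state (illustrative)
}

impl Server {
    /// Lazy iteration instead of allocating a fresh Vec on every call.
    fn client_ids(&self) -> impl Iterator<Item = u64> + '_ {
        self.connections.keys().copied()
    }
}

fn main() {
    let server = Server {
        connections: HashMap::from([(1, "alice".into()), (2, "bob".into())]),
    };
    // Callers iterate directly, or still collect when they need a Vec.
    let mut ids: Vec<u64> = server.client_ids().collect();
    ids.sort();
    assert_eq!(ids, vec![1, 2]);
}
```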
You could also make it cross-platform with native apps by using something like `webrtc-rs`.
Are there any plans for updating this crate to support bevy 0.10?
Hi! Some parts of the `renet` library design are not clear to me.
`error::DisconnectionReason` is public but not accessible: while trying to implement client-side disconnection handling, I found that `Client::disconnected` returns a type that can't be accessed.
In my case I'm trying to implement:
impl From<renet::error::DisconnectionReason> for MyType { .. } // error: type is not accessible
// some code
let Some(reason) = client.disconnected() else { .. };
return reason.into();
Possible solutions:
- Make the `error` module `pub` (the `rechannel` way).
- Re-export `error::DisconnectionReason` instead of `rechannel::DisconnectionReason` (which can already be accessed) (the `renetcode` way).

It's unclear to me how client reconnection works. `Client::disconnect` makes the `Client` useless after disconnection, because there is no way to reconnect it again (and as far as I can see, there is no auto-reconnect when new messages are sent).
Possible solutions:
- Take `self` instead of `&mut self` in `Client::disconnect`, to make `Client` RAII-idiomatic.
- Add a `Client::connect` method to make the `Client` instance reusable.

Currently, in order to send a message, I need to pass anything that converts into `Bytes`.
But this prevents users from re-using the memory. `Bytes` is cheaply clonable, but can't grow, so users can't use it as a buffer for snapshots. `Vec<u8>` can be used as a buffer for snapshots, but it needs to be cloned to send (because the conversion from `Vec<u8>` to `Bytes` is implemented only from an owning value).
Maybe provide a streaming API instead where you write messages directly into Renet's message buffer?
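One shape such a streaming API could take, sketched with plain std types (`MessageWriter` is hypothetical): let the caller serialize straight into an internal `Vec<u8>` and hand ownership over per message, so the transport can freeze it into a shared buffer without an extra copy.

```rust
use std::io::Write;

/// Hypothetical sketch: write a snapshot directly into the channel's buffer.
struct MessageWriter {
    buffer: Vec<u8>,
}

impl MessageWriter {
    fn new() -> Self {
        Self { buffer: Vec::new() }
    }

    /// Borrow the internal buffer as an io::Write sink for serializers.
    fn writer(&mut self) -> &mut Vec<u8> {
        &mut self.buffer
    }

    /// "Send": take the filled buffer without copying, leaving an empty
    /// Vec behind so the next message can reuse or regrow it.
    fn finish(&mut self) -> Vec<u8> {
        std::mem::take(&mut self.buffer)
    }
}

fn main() {
    let mut writer = MessageWriter::new();
    writer.writer().write_all(b"snapshot").unwrap();
    let message = writer.finish();
    assert_eq!(message, b"snapshot");
    assert!(writer.buffer.is_empty()); // ready for the next message, no clone needed
}
```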
Thanks for this awesome crate!
I have encountered an issue with `renet` which first surfaced here: projectharmonia/bevy_replicon#107
In a `SendType::Unreliable` renet channel, I can't reliably send payloads of hundreds of kilobytes.
From tests:
- 20 311 bytes: sends consistently
- 40 693 bytes: sends consistently
- 73 083 bytes: never sends
- 329 359 bytes: never sends
If this is intentional, can it be documented somewhere? I don't want this to be the case, however. I'm not sure if 'packet fragmentation' is the usual solution to this sort of issue, and would appreciate pointers on how to sync a large amount of data through Unreliable channels, well, reliably.
Since the connect token contents are `pub(crate)`, it isn't possible to validate the protocol id of a connect token on the client side, nor to use the token's server addresses to tailor your client socket to IPv4/IPv6.
Make the contents of `ConnectToken` `pub`.
I just learned about renet via the Bevy discord. It would be helpful to see `bevy_renet` on https://bevyengine.org/assets/#networking.
This can be done in a PR to https://github.com/bevyengine/bevy-assets.
Consider the following test case:
#[cfg(test)]
mod tests {
    use std::{net::UdpSocket, time::SystemTime};

    use super::*;
    use renet::{RenetConnectionConfig, ServerConfig, NETCODE_KEY_BYTES};

    const PRIVATE_KEY: &[u8; NETCODE_KEY_BYTES] = b"an example very very secret key."; // 32 bytes
    const PROTOCOL_ID: u64 = 7;

    fn new_renet_server() -> Option<RenetServer> {
        let server_addr = "127.0.0.1:5000".parse().unwrap();
        let socket = UdpSocket::bind(server_addr).unwrap();
        let connection_config = RenetConnectionConfig::default();
        let server_config = ServerConfig::new(64, PROTOCOL_ID, server_addr, *PRIVATE_KEY);
        let current_time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap();
        Some(RenetServer::new(current_time, server_config, connection_config, socket).unwrap())
    }

    #[test]
    fn initializes_from_host() {
        let mut app = App::new();
        app.world.insert_resource(new_renet_server().unwrap());
        assert!(app.world.get_resource::<RenetServer>().is_some(), "Server should be created");
    }
}
(just add it to `bevy_renet/src/lib.rs`)
And run:
cargo test initializes_from_host
It results in the following error:
thread 'tests::initializes_from_host' has overflowed its stack
fatal runtime error: stack overflow
error: test failed, to rerun pass '-p bevy_renet --lib'
Caused by:
process didn't exit successfully: `renet/target/debug/deps/bevy_renet-0e5c8c0664b0ee4f initializes_from_host` (signal: 6, SIGABRT: process abort signal)
If you remove the `Option`, it won't crash.
The simple example doesn't work in the bevy example file. The renet client should be created like this, with 5 arguments instead of 4:
RenetClient::new(current_time, socket, client_id, connection_config, authentication).unwrap()
It might make more sense to do `.insert_resource(Events::<ServerEvent>::default());` and have custom clearing/updating systems set up, but this might be excessively custom and more of a fault of the `Plugin` API not being super flexible.
Most serde implementations provide a method (e.g. `serde_json::from_reader(_)`) to deserialize any type that implements the `std::io::Read` trait. The block channel could probably return a thin wrapper over `Vec<Payload>` that both implements the `std::io::Read` trait and provides access to the wrapped `Vec`? This would avoid an unnecessary memory copy and, if the user wants, they could still use `copy_from_slice` to concatenate the payloads.
I have implemented this in my crate `fe2o3-amqp` and saw a 30% decrease in total time in an "unscientific" large-content test (a couple of 10 MB messages; the total time dropped from ~1.2 secs to ~0.8 secs). I can make a PR if this sounds like a good idea to you :)
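The idea sketched with std types only (`PayloadReader` is a hypothetical name standing in for the proposed wrapper over `Vec<Payload>`):

```rust
use std::io::Read;

/// A reader that walks a list of payload chunks without concatenating them.
struct PayloadReader {
    payloads: Vec<Vec<u8>>,
    index: usize,  // which payload we're in
    offset: usize, // how far into that payload
}

impl Read for PayloadReader {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        while self.index < self.payloads.len() {
            let chunk = &self.payloads[self.index][self.offset..];
            if chunk.is_empty() {
                // Exhausted this payload; move on to the next one.
                self.index += 1;
                self.offset = 0;
                continue;
            }
            let n = chunk.len().min(buf.len());
            buf[..n].copy_from_slice(&chunk[..n]);
            self.offset += n;
            return Ok(n);
        }
        Ok(0) // EOF: every payload consumed
    }
}

fn main() {
    let mut reader = PayloadReader {
        payloads: vec![b"hello ".to_vec(), b"world".to_vec()],
        index: 0,
        offset: 0,
    };
    // Any serde `from_reader`-style API could consume this directly.
    let mut out = String::new();
    reader.read_to_string(&mut out).unwrap();
    assert_eq!(out, "hello world");
}
```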
Renet breaks when two clients disconnect without using the proper `client.disconnect()` function and a new client subsequently connects.
I have included a video demonstrating this while running the demo_bevy example.
One potential cause of this issue could be that all of the clients and the host have the same IP address.
Important to note: I'm exiting the client with Ctrl+C, which causes the client to exit without sending `client.disconnect()`.
While testing my game, I don't care about security configurations, and just want other people to be easily able to join.
I've also noticed the case in #12, and would like to be able to bypass it in development.
Thanks for this awesome ecosystem of crates!
Unfortunately, `bevy_renet` is blocking `bevy_replicon` from upgrading to bevy 0.12. I'll try to upgrade `bevy_renet`; is there anything I should know?
I've been trying to get `demo_bevy` to compile to WASM and work. In my JS console I get a terse error that points me to the `UdpSocket::bind` call in the client code. So I'm assuming this just isn't a thing in browser WASM land (no way to simply bind to the network interface; obvious in retrospect).
I don't really know what my options are. WebSockets is TCP and probably not appropriate. I believe there's some hack to let you relay network data from JavaScript land to WASM land via JavaScript function calls.
Good advice also counts as resolution in my case. Thanks for the really cool crate!
What should the server address be when renet is inside a docker container? Everything I've tried doesn't work.
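Not a renet-specific answer, but the usual Docker UDP checklist covers the common failure modes (all names and ports below are placeholders): publish the port as UDP, bind inside the container to `0.0.0.0`, and advertise an address clients can actually reach (the host's IP), never the container-internal one.

```shell
# -p maps TCP by default; UDP must be requested explicitly.
docker run -p 5000:5000/udp my-game-server

# Inside the container: bind the socket to 0.0.0.0:5000.
# In the server config / connect token: advertise the Docker *host's* IP
# (or your LAN/WAN address), not the container's internal 172.17.x.x address.
```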
For a complete game netcode, we need Unreliable Sequenced and Reliable Unordered channels.
An Unreliable Sequenced channel can be used for inputs from the client, state updates, and other packets that don't require reliability but need to be received sequentially.
A Reliable Unordered channel can be used for particle effects and other packets that don't need to be ordered but require reliability.
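The "sequenced" half of this can be sketched in a few lines: each message carries a sequence number, and the receiver drops anything at or below the newest it has seen (a generic illustration of the semantics, not renet's wire format):

```rust
/// Receiver side of an unreliable *sequenced* channel: late packets are dropped.
struct Sequenced {
    newest: Option<u64>,
}

impl Sequenced {
    /// Returns true if the message should be delivered to the game.
    fn accept(&mut self, seq: u64) -> bool {
        match self.newest {
            Some(n) if seq <= n => false, // stale or duplicate: drop
            _ => {
                self.newest = Some(seq);
                true
            }
        }
    }
}

fn main() {
    let mut channel = Sequenced { newest: None };
    // Packets arrive out of order over UDP: 1, 3, then a late 2.
    assert!(channel.accept(1));
    assert!(channel.accept(3));
    assert!(!channel.accept(2)); // older than 3: dropped, never delivered
}
```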
Two features would allow for custom scheduling inside a Bevy loop:
- Add `Schedule` parameters to `RenetServerPlugin` and `RenetClientPlugin` and pass them in the plugin configurations, to replace the hardcoded `PreUpdate` and `PostUpdate` schedules. `bevy_xpbd` implemented support for custom schedules the same way.
- Create `SystemSet`s and add them to important renet systems such as `send_packets`, so user code can use and configure the execution order of these sets within their own app.

A user may also desire to run the systems of the currently hardcoded `PreUpdate` and `PostUpdate` schedules in one and the same schedule. In this case the ordering of the plugin systems must strictly rely on the `SystemSet`s, by configuring the `App` with `configure_sets` to chain the elements of the `SystemSet`s in the desired order.
Copying the `sending_and_receiving` test to my project, I see it failing most of the time. When I clone the repo, I can see an occasional failure if I just keep re-running the tests. Is there any insight into what might help in debugging this, or in stepping through where there might be a race condition or system issue that could lead to this?
I added renet for networking to my project and it worked like a charm on my Linux machine; however, when tested on Windows, networking gives me a stack overflow.
The stack overflow happens on the host side.
Please help me determine the cause of this issue.
My network code can be found here
thread 'main' overflowed its stack
Hosting behind a NAT router requires passing a different public IP to the server config than to the socket. This is not visible in the examples or anything else I had read, and doing it incorrectly causes no connections to go through, with no warnings.
Maybe there could be a comment in the examples; above the line that builds the server config would be a good place, in my opinion.
I originally wanted to write a feature request, then realized it was already possible, just not very clear.
Hosting with port forwarding from home is a common thing when playing games with friends. I personally need it for testing my game from my personal computer.
Might have been missed in this PR #103
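The distinction can be sketched with plain `std` types (the exact `ServerConfig` field that carries the public address varies across renet versions, so treat the renet-side wiring described in the comments as an assumption):

```rust
use std::net::SocketAddr;

fn main() {
    // Address the server binds its UdpSocket to, inside the NAT:
    // all interfaces, on the port that is forwarded by the router.
    let bind_addr: SocketAddr = "0.0.0.0:5000".parse().unwrap();

    // Address clients actually dial: the router's public IP with the
    // forwarded port (203.0.113.7 is a documentation-range example).
    let public_addr: SocketAddr = "203.0.113.7:5000".parse().unwrap();

    // With renet's netcode transport, the UdpSocket gets `bind_addr`,
    // while the server config (and thus the connect tokens) must carry
    // `public_addr`. Advertising `bind_addr` instead makes clients fail
    // to connect with no warning, as described above.
    assert_ne!(bind_addr, public_addr);
    println!("bind on {bind_addr}, advertise {public_addr}");
}
```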
This is the tracking issue for the 0.1 release of the library:
- `renetcode`
- Host `chat_demo::matcher` on a remote server and verify how it behaves
- `rechannel` and `renetcode`
When creating a connect token, the server timeout is passed in via `ConnectToken::generate()`, while the client timeout is implicitly set to `NETCODE_TIMEOUT_SECONDS`. From what I can tell in the netcode spec, the client timeout is meant to be configurable.
`ConnectToken::generate()` should also take in the client timeout. One workaround is to manually edit the connect token struct, since the fields are now `pub`.
Hey,
I'm currently just trying out your library and it is amazing, thank you - but I've run into an issue:
I'm sending a transform to the server over the default `UnreliableChannel`; the server then sends it back to every player. Within seconds, even with one player, messages get dropped. Could you point me in the direction of why that is?
Heyhey. I'm currently trying to use renet in my Bevy project to communicate with a simple Elixir UDP server that just logs to the console while running.
Since the messages are sent in bincode, I already implemented some simple logic to deserialize the bincode into an Elixir struct.
Unfortunately, the actual test text message I try to send is drowned out by plenty of messages that I believe come from the `RenetClient`.
This is, I believe, a message that is sent internally by `RenetClient`. I tried to convert the binary back into a term, but a simple `:erlang.binary_to_term` won't work, since the binary seems to have a different structure.
Maybe you have an idea or a recommendation for how I can implement this on the Elixir side. Maybe this is even worth a separate library, to enable a different server language for renet.
best regards
Denny
When the server shuts down for whatever reason, how can a client detect that?
If this isn't implemented, could you give me some pointers and I'll try to create a PR.
Hi, I've run into this error when trying out some demo code. Note: both the client and server code produced this error.
It's a pretty nasty error to encounter for the first time, so I tried a few things:
- switching to the `lld` linker
- `cargo clean` followed by `cargo run`
I am quite certain that `bevy_renet` is involved in the error, since compiling a file without it works fine.
If you need any more details let me know.
Is this a bug or am I doing something wrong? I'm making a headless server.
In the demo_bevy project, players can connect but cannot disconnect. I'm not sure what causes this, as the disconnect messages are handled by `ServerEvent::ClientDisconnected`. I've read that there's a roughly 15-second disconnect timeout, but it doesn't seem to fire in that example: the print immediately following that event match is never logged to the console. Has anyone run into a similar problem?
On Windows, `update()` will throw `Os { code: 10054, kind: ConnectionReset, message: "An existing connection was forcibly closed by the remote host." }` if the client forcibly closed the connection (e.g., the client crashed without sending a disconnect packet).
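One common mitigation (a sketch, not renet's actual code; the helper name is invented here) is to treat `ConnectionReset` from a UDP receive as non-fatal. On Windows, os error 10054 (WSAECONNRESET) on a UDP socket only means an earlier datagram we sent bounced off a closed port, not that our own socket is broken:

```rust
use std::io;

// Sketch: classify UDP recv errors so a server loop can skip the
// spurious Windows ConnectionReset instead of treating it as fatal.
fn is_ignorable_udp_recv_error(err: &io::Error) -> bool {
    err.kind() == io::ErrorKind::ConnectionReset
}

fn main() {
    let reset = io::Error::new(
        io::ErrorKind::ConnectionReset,
        "An existing connection was forcibly closed by the remote host.",
    );
    assert!(is_ignorable_udp_recv_error(&reset));

    let fatal = io::Error::new(io::ErrorKind::PermissionDenied, "denied");
    assert!(!is_ignorable_udp_recv_error(&fatal));
    println!("reset ignored, other errors propagate");
}
```

A receive loop would then `continue` on ignorable errors and only propagate the rest.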
I'm working on https://github.com/lifescapegame/bevy_replicon. It is currently coupled with Netcode. Users have started asking me to decouple it (projectharmonia/bevy_replicon#61), but there are two problems:
I think this could be solved on the Renet side:
`RenetClient`. This will reduce duplicated condition code inside the transports.