quinn-rs / quinn
Async-friendly QUIC implementation in Rust
License: Apache License 2.0
The test started failing because it depends on a fixed number of handshake packets being exchanged before the handshake terminates. We can probably fix it by continuing to exchange messages until a short-header packet is sent.
I'm trying to make a unit test for a thing that runs a server, starts a client to talk to it, and then shuts down. For some reason the client panics on an `Option::unwrap()` deep in quinn when it is dropped and tries to close the connection. When I run basically the same thing as a standalone program it appears to work? It might just not even attempt to shut down correctly when I ctrl-C it.
This code was ported from quicr, so I may be doing something wrong now that quicr allowed. I haven't cut it down to a minimal reproduction yet but hope to soon; maybe you can suggest something while I do. The code is here: https://github.com/icefoxen/WorldDat/blob/7ee3babc58ea6e8584b7d1095c7c084a98140fb3/src/peer.rs#L276 , just run `cargo test` to reproduce.
Edit: oh yeah, adding the backtrace might help.
---- peer::tests::test_client_connection stdout ----
thread 'peer::tests::test_client_connection' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:221
4: std::panicking::rust_panic_with_hook
at libstd/panicking.rs:475
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:390
6: rust_begin_unwind
at libstd/panicking.rs:325
7: core::panicking::panic_fmt
at libcore/panicking.rs:77
8: core::panicking::panic
at libcore/panicking.rs:52
9: <core::option::Option<T>>::unwrap
at /checkout/src/libcore/macros.rs:20
10: quinn_proto::connection::Connection::make_close
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/connection.rs:2056
11: quinn_proto::connection::Connection::close
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/connection.rs:2082
12: quinn_proto::endpoint::Endpoint::close
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/endpoint.rs:934
13: <quinn::ConnectionInner as core::ops::drop::Drop>::drop
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-0.1.0/src/lib.rs:887
14: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
15: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
16: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
17: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
18: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
19: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
20: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
21: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
22: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
23: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
24: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
25: core::mem::drop
at /checkout/src/libcore/mem.rs:795
26: tokio_current_thread::scheduler::release_node
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.3/src/scheduler.rs:386
27: <tokio_current_thread::scheduler::Scheduler<U> as core::ops::drop::Drop>::drop
at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.3/src/scheduler.rs:419
28: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
29: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
30: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
31: core::ptr::drop_in_place
at /checkout/src/libcore/ptr.rs:59
32: worlddat::peer::tests::test_client_connection
at src/peer.rs:290
33: worlddat::__test::TESTS::{{closure}}
at src/peer.rs:276
34: core::ops::function::FnOnce::call_once
at /checkout/src/libcore/ops/function.rs:223
35: <F as alloc::boxed::FnBox<A>>::call_box
at libtest/lib.rs:1451
at /checkout/src/libcore/ops/function.rs:223
at /checkout/src/liballoc/boxed.rs:642
36: __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:105
We should implement packetization-layer path MTU discovery per the draft's instructions.
This likely has to do with `localhost` defaulting to `127.0.0.1` on Linux, unlike on macOS (`::1`).
This is a straightforward trait rename.
Right now, the server does not keep track of the random destination connection ID (DCID) contained in the `Initial` packet. This means that retransmitted `Initial` packets are perceived as new `Initial` packets attempting to start a new connection. It probably makes sense to have a separate map from these handshake DCIDs to the related `ConnectionState`s, which could then be cleaned up once it's confirmed that the client knows about the server-chosen connection ID for that `Initial` packet.
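A minimal sketch of that bookkeeping, with made-up names (`Endpoint`, `handle_initial`, and `ConnHandle` here are illustrative, not Quinn's actual types): a second map keyed by the client-chosen initial DCID routes a retransmitted `Initial` to the existing connection instead of starting a new one.

```rust
use std::collections::HashMap;

type ConnHandle = usize;

struct Endpoint {
    // Maps the client-chosen initial DCID to its connection, so a
    // retransmitted Initial finds the existing connection.
    connection_ids_initial: HashMap<Vec<u8>, ConnHandle>,
    next_handle: ConnHandle,
}

impl Endpoint {
    fn new() -> Self {
        Endpoint { connection_ids_initial: HashMap::new(), next_handle: 0 }
    }

    /// Returns the connection handle and whether this Initial was fresh.
    fn handle_initial(&mut self, dcid: &[u8]) -> (ConnHandle, bool) {
        if let Some(&h) = self.connection_ids_initial.get(dcid) {
            return (h, false); // retransmission of a known Initial
        }
        let h = self.next_handle;
        self.next_handle += 1;
        self.connection_ids_initial.insert(dcid.to_vec(), h);
        (h, true) // genuinely new connection attempt
    }
}

fn main() {
    let mut ep = Endpoint::new();
    let (h1, fresh1) = ep.handle_initial(&[1, 2, 3, 4, 5, 6, 7, 8]);
    let (h2, fresh2) = ep.handle_initial(&[1, 2, 3, 4, 5, 6, 7, 8]);
    assert!(fresh1 && !fresh2);
    assert_eq!(h1, h2);
}
```

Entries in this map would be removed once the client demonstrably uses the server-chosen CID, matching the cleanup step described above.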
The host xavamedia.nl is given as an example Quinn server at https://github.com/quicwg/base-drafts/wiki/Implementations but:
$ cargo run --example client xavamedia.nl
Finished dev [unoptimized + debuginfo] target(s) in 0.16s
Running `target\debug\examples\client.exe xavamedia.nl`
RESULT: Err(Tls(WebPKIError(CertExpired)))
This is what I had originally in quinn. This would get rid of the extra dependency, and it would let `ConnectionId` implement `Copy`, which would obviate the need for cloning.
AEAD cryptographic algorithms have a limit on the volume of data they can protect under a single key. The limits can be found in the TLS 1.3 spec, RFC 8446, Section 5.5.
Based on these values, connections should initiate key updates when necessary: a configurable threshold should be checked against the amount of data transferred since the last key update to trigger it.
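A rough sketch of such a threshold check; `KeyUpdateTracker` and its methods are invented names for illustration, not Quinn's API, and the 1 MiB threshold is just an example value well below the real AEAD limits.

```rust
// Hypothetical tracker for bytes protected under the current key.
struct KeyUpdateTracker {
    bytes_since_update: u64,
    threshold: u64, // configurable; must stay well below the AEAD limit
}

impl KeyUpdateTracker {
    fn new(threshold: u64) -> Self {
        KeyUpdateTracker { bytes_since_update: 0, threshold }
    }

    /// Record `len` bytes sealed under the current key; returns true
    /// once a key update should be initiated.
    fn record_transmit(&mut self, len: u64) -> bool {
        self.bytes_since_update += len;
        self.bytes_since_update >= self.threshold
    }

    /// Called after the key update has been performed.
    fn reset(&mut self) {
        self.bytes_since_update = 0;
    }
}

fn main() {
    let mut t = KeyUpdateTracker::new(1 << 20); // e.g. update every 1 MiB
    assert!(!t.record_transmit(512 * 1024));
    assert!(t.record_transmit(512 * 1024)); // crossed the threshold
    t.reset();
    assert!(!t.record_transmit(1024));
}
```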
When the KEY_PHASE bit is flipped twice across consecutively received packets, without our having sent anything under the updated keys in between, the connection must be aborted. From draft-ietf-quic-tls-16:
An endpoint does not always need to send packets when it detects that
its peer has updated keys. The next packet that it sends will simply
use the new keys. If an endpoint detects a second update before it
has sent any packets with updated keys, it indicates that its peer
has updated keys twice without awaiting a reciprocal update. An
endpoint MUST treat consecutive key updates as a fatal error and
abort the connection.
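The quoted rule can be sketched as a small state machine. All names here (`KeyPhaseState`, `KeyPhaseResult`) are illustrative, not Quinn's actual implementation.

```rust
#[derive(Debug, PartialEq)]
enum KeyPhaseResult {
    NoChange,
    Updated,
    FatalError, // peer updated twice without awaiting a reciprocal update
}

struct KeyPhaseState {
    current_phase: bool,
    // Have we sent at least one packet under the current keys?
    sent_with_current_keys: bool,
}

impl KeyPhaseState {
    fn on_packet_received(&mut self, phase: bool) -> KeyPhaseResult {
        if phase == self.current_phase {
            return KeyPhaseResult::NoChange;
        }
        if !self.sent_with_current_keys {
            // Second flip before we sent anything with the updated keys:
            // per the draft, this MUST be treated as a fatal error.
            return KeyPhaseResult::FatalError;
        }
        self.current_phase = phase;
        self.sent_with_current_keys = false;
        KeyPhaseResult::Updated
    }

    fn on_packet_sent(&mut self) {
        self.sent_with_current_keys = true;
    }
}

fn main() {
    let mut s = KeyPhaseState { current_phase: false, sent_with_current_keys: true };
    assert_eq!(s.on_packet_received(true), KeyPhaseResult::Updated);
    // Peer flips again before we sent anything under the new keys:
    assert_eq!(s.on_packet_received(false), KeyPhaseResult::FatalError);
}
```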
The `crypto` module currently has a `PacketKey` type that reconstructs the relevant `SealingKey` and `OpeningKey` for every `seal` or `open` operation. There should be a smarter way to structure the types in the `crypto` module so that a `Secret` has direct access to long-lived `Key` instances.
This is explained in section 6.2 of the draft 11 transport spec:
https://tools.ietf.org/html/draft-ietf-quic-transport-11#section-6.2
The code processing incoming handshake packets in `ConnectionState::handle_packet()` should check that the `version` in the `Initial` packet matches what's supported per `QUIC_VERSION`. If it doesn't match, it should queue an appropriate response using `ConnectionState::build_long_packet()` (or something similar if version negotiation has slightly different needs).
Quantitative, reproducible benchmarks are needed to judge the performance impact of changes, and to guide us towards supporting efficient high-bandwidth communications. A good start would be a criterion benchmark that uses the high-level quinn API to pass a blob of data end-to-end through two quinn endpoints and the host UDP stack.
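As a stopgap before a criterion harness exists, a std-only baseline that pushes a blob through the host UDP loopback stack (no quinn involved yet) gives a throughput ceiling to compare against. Everything here uses only the standard library; `udp_roundtrip` is a made-up helper name.

```rust
use std::net::UdpSocket;
use std::time::Instant;

/// Send `chunks` datagrams of `chunk_len` bytes over loopback and receive
/// them all; returns the total number of bytes received.
fn udp_roundtrip(chunk_len: usize, chunks: usize) -> std::io::Result<usize> {
    let rx = UdpSocket::bind("127.0.0.1:0")?;
    let tx = UdpSocket::bind("127.0.0.1:0")?;
    let dst = rx.local_addr()?;
    let chunk = vec![0u8; chunk_len];
    let mut buf = vec![0u8; chunk_len];
    let mut received = 0;
    for _ in 0..chunks {
        tx.send_to(&chunk, dst)?;
        let (n, _) = rx.recv_from(&mut buf)?;
        received += n;
    }
    Ok(received)
}

fn main() -> std::io::Result<()> {
    let start = Instant::now();
    // ~1200 bytes approximates one QUIC packet per datagram.
    let received = udp_roundtrip(1200, 1000)?;
    let secs = start.elapsed().as_secs_f64();
    assert_eq!(received, 1200 * 1000);
    println!("moved {} bytes in {:.3}s ({:.1} MB/s)", received, secs,
             received as f64 / secs / 1e6);
    Ok(())
}
```

A criterion benchmark wrapping two quinn endpoints would replace the raw sockets here with the high-level quinn API while keeping the same blob-in, blob-out shape.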
My code is actually almost well modularized now, so it's easier to post issue reports. Huzzah!
This panics with "thread 'main' panicked at 'unknown stream', libcore/option.rs:989:5". This appears to be happening in the `tokio::io::shutdown(stream)` line.
```rust
fn receive_message(stream: quinn::Stream) -> impl Future<Item = (), Error = ()> {
    quinn::read_to_end(stream, 1024 * 64)
        .map_err(|e| warn!("failed to read response: {}", e))
        .and_then(move |(stream, req)| {
            let msg: ::std::result::Result<Message, rmp_serde::decode::Error> =
                rmp_serde::from_slice(&req);
            debug!("Got message: {:?}", msg);
            let to_do_next: Box<dyn Future<Item = quinn::Stream, Error = ()>> = match msg {
                Ok(Message::Ping { id }) => {
                    info!("Got ping, trying to send pong");
                    let message = Message::Pong { id };
                    let to_send = rmp_serde::to_vec(&message)
                        .expect("Could not serialize message; should never happen!");
                    Box::new(
                        tokio::io::write_all(stream, to_send)
                            .map_err(|e| warn!("Failed to send request: {}", e))
                            .map(|(stream, _vec)| stream),
                    )
                }
                Ok(val) => {
                    info!("Got message: {:X?}, not doing anything with it", val);
                    Box::new(future::ok(stream))
                }
                Err(e) => {
                    info!("Got unknown message: {:X?}, error {:?}", &req, e);
                    Box::new(future::ok(stream))
                }
            };
            to_do_next
                .and_then(|stream| {
                    trace!("Trying to shut down stream");
                    tokio::io::shutdown(stream)
                        .and_then(|v| {
                            trace!("Done!");
                            future::ok(v)
                        })
                        .map_err(|e| warn!("Failed to shut down stream: {}", e))
                })
                .map(move |_| info!("request complete"))
        })
}
```
This appears to correspond to this code in `quinn-proto/src/connection.rs`:
```rust
pub fn finish(&mut self, id: StreamId) {
    let ss = self
        .streams
        .get_mut(&id)
        .expect("unknown stream")
        .send_mut()
        .expect("recv-only stream");
    ....
}
```
Hello, thanks for quinn!
Is there a way to use the new async/await facilities in quinn? I was trying to make a basic proof-of-concept, but it seems quinn's types are `!Send`:
```rust
#![feature(await_macro, async_await, futures_api, pin)]

extern crate quinn;
extern crate tokio;

use tokio::prelude::*;

fn main() {
    tokio::run_async(async move {
        let mut builder = quinn::Endpoint::new();
        let (endpoint, driver, incoming) = builder.bind("0.0.0.0:9393").unwrap();
        while let Some(Ok(conn)) = await!(incoming.next()) {
            while let Some(byte_stream) = await!(conn.incoming.next()) {
                match byte_stream {
                    Ok(quinn::NewStream::Bi(byte_stream)) => {
                        println!("byte stream!");
                    },
                    Ok(quinn::NewStream::Uni(_)) => {
                        // config.max_remote_uni_streams is defaulted to 0
                        unreachable!();
                    }
                    Err(err) => {
                        eprintln!("error: {}", err);
                    }
                }
            }
        }
    });
}
```
Fails with:
Compiling quinn-minimal v0.1.0 (/Users/yusuf/projects/quinn-minimal)
error[E0277]: `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>` cannot be sent between threads safely
--> src/main.rs:9:5
|
9 | tokio::run_async(async move {
| ^^^^^^^^^^^^^^^^ `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>` cannot be sent between threads safely
|
= help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>`
= note: required because it appears within the type `quinn::Endpoint`
= note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
= note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
= note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
= note: required because it appears within the type `impl std::future::Future`
= note: required by `tokio::run_async`
error[E0277]: `std::rc::Rc<quinn::ConnectionInner>` cannot be sent between threads safely
--> src/main.rs:9:5
|
9 | tokio::run_async(async move {
| ^^^^^^^^^^^^^^^^ `std::rc::Rc<quinn::ConnectionInner>` cannot be sent between threads safely
|
= help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<quinn::ConnectionInner>`
= note: required because it appears within the type `quinn::Connection`
= note: required because it appears within the type `quinn::NewConnection`
= note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
= note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
= note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
= note: required because it appears within the type `impl std::future::Future`
= note: required by `tokio::run_async`
error[E0277]: `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>` cannot be sent between threads safely
--> src/main.rs:9:5
|
9 | tokio::run_async(async move {
| ^^^^^^^^^^^^^^^^ `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>` cannot be sent between threads safely
|
= help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>`
= note: required because it appears within the type `futures::unsync::mpsc::State<quinn::NewConnection>`
= note: required because it appears within the type `futures::unsync::mpsc::Receiver<quinn::NewConnection>`
= note: required because it appears within the type `futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>`
= note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
= note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
= note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
= note: required because it appears within the type `impl std::future::Future`
= note: required by `tokio::run_async`
error: aborting due to 3 previous errors
I checked if I could just swap out `Rc`'s for `Arc`'s, but there's of course more non-`Send` types than just that...
`transmit_handshake()` is currently used in a number of places that go like this:

```rust
let mut outgoing = Vec::new();
self.tls.write_tls(&mut outgoing).unwrap();
self.transmit_handshake(&outgoing);
```

We should reevaluate the API to see if `transmit_handshake()` can take the to-be-transmitted handshake bytes directly out of the `TlsSession`.
Quinn performs stream assembly lazily: incoming data is not flattened into a single linear buffer until read. Additionally, flow control credit is issued on read. This leads to problems: `read_unordered` skips stream assembly, and hence may issue flow control credit for the same data multiple times. This can lead to arbitrarily large flow control windows, which can lead to arbitrarily large buffer resource consumption and impair performance.

These issues can be fixed by eagerly tracking which bytes of a stream have already been received, and discarding duplicate bytes immediately upon receipt.
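A minimal sketch of that eager tracking, assuming a simple sorted range list (Quinn's real data structure may well differ): `insert` returns only the newly seen byte count, so flow control credit can be issued at most once per byte, and duplicates can be discarded immediately.

```rust
/// Tracks which byte ranges of a stream have been received.
/// Ranges are half-open [start, end), kept sorted and disjoint.
struct ReceivedRanges {
    ranges: Vec<(u64, u64)>,
}

impl ReceivedRanges {
    fn new() -> Self {
        ReceivedRanges { ranges: Vec::new() }
    }

    /// Insert [start, end); returns the number of bytes not seen before.
    fn insert(&mut self, start: u64, end: u64) -> u64 {
        let mut new_bytes = end.saturating_sub(start);
        let mut merged = (start, end);
        let mut out = Vec::new();
        for &(s, e) in &self.ranges {
            if e < merged.0 || s > merged.1 {
                out.push((s, e)); // disjoint: keep as-is
            } else {
                // Overlapping or adjacent: subtract the already-seen
                // overlap from the credit, then merge the ranges.
                let overlap = e.min(merged.1).saturating_sub(s.max(merged.0));
                new_bytes -= overlap;
                merged = (merged.0.min(s), merged.1.max(e));
            }
        }
        out.push(merged);
        out.sort_unstable();
        self.ranges = out;
        new_bytes
    }
}

fn main() {
    let mut r = ReceivedRanges::new();
    assert_eq!(r.insert(0, 100), 100);
    assert_eq!(r.insert(50, 150), 50); // bytes 50..100 already counted
    assert_eq!(r.insert(0, 150), 0);   // pure duplicate: no new credit
}
```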
Currently we first match on the connection ID, and only then consider that a packet might want to start a new connection. We probably need to be smarter about this.
Leave this to higher API levels to decide.
Currently, `ConnectionState::process_tls()` handles stream 0 content directly, both reading it from the incoming stream frame and queueing new outgoing messages. This should be changed to be more generic:

- the `Streams` object has a stream 0
- a `received` buffer for that stream
- `process_tls()` to read and write from/into that stream object

(Writing into the stream object depends on also having #3 implemented.)
Quinn should provide an option to automatically send `PING` frames whenever the connection has been idle for more than some large fraction of the negotiated idle timeout, allowing time for retransmission if lost. Per the guidance in draft 15 §7.9, this should be supported for both incoming and outgoing connections.
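The timing rule could be sketched like this; the 7/8 fraction and the function name are assumptions for illustration, not values mandated by the draft or taken from Quinn.

```rust
use std::time::Duration;

/// How long a connection may stay idle before a keepalive PING should be
/// scheduled: a large fraction of the negotiated idle timeout, leaving
/// headroom to retransmit the PING if it is lost.
fn keepalive_after(idle_timeout: Duration) -> Duration {
    idle_timeout * 7 / 8
}

fn main() {
    let idle = Duration::from_secs(30);
    // 7/8 of 30s leaves 3.75s of headroom before the idle timer fires.
    assert_eq!(keepalive_after(idle), Duration::from_millis(26_250));
}
```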
Currently, the application must specify a maximum number of concurrently operating remotely-initiated streams at endpoint construction time. This is more strict than necessary. A connection's streams can be handled in the same manner as an endpoint's connections are: a small, configurable window of new streams is buffered by the implementation, advancing whenever the application consumes an entry to begin performing I/O on it. This would allow applications to dynamically and implicitly control the amount of stream-level concurrency they want, for example using `FuturesUnordered`.
Currently, we check every QUIC packet for the stateless reset token at the start of `Connection::handle_packet`. The draft dictates that this is premature:
An endpoint detects a potential stateless reset when a packet with a short header either cannot be decrypted or is marked as a duplicate packet.
@Ralith made the argument that this should be an argument to `connect()` instead of a global configuration setting in `Config`.
I'm actually not quite convinced. We currently change the following things:
I would argue that it makes sense to set these things on a per-`Endpoint` basis.
Decodes can obviously fail on invalid input. Currently some of them don't handle this at all; others sometimes panic. The explicit panics should definitely be converted to returning `QuicError`; there should also be some minimal checking that the input is sane (for example, the length of the input slice).
We currently use `FnvHashMap` in a number of places, presumably because it is faster than the std `HashMap`. However, the reason `FnvHashMap` is not the default, as I understand it, is because it is possible for attackers to generate input data that will cause denial of service through worst-case performance of hash map algorithms.

In `Endpoint`, we currently use `FnvHashMap` for (1) `connection_ids_initial`, (2) `connection_ids`, and (3) `connection_remotes`. (1) seems definitely vulnerable to this style of attack for server endpoints, since clients can randomly pick initial CIDs for their packets, thus triggering collisions. (2) seems safe, since local CIDs are generated by our own code. For (3), I'm not sure how easy it is to spoof IP addresses these days.
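The resulting policy can be sketched with the standard library alone: keep the keyed SipHash hasher (`RandomState`) for maps whose keys an attacker can influence, and reserve FNV-style hashing for locally generated keys. `ConnHandle` is a stand-in for Quinn's real value type.

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;

type ConnHandle = usize;

fn main() {
    // Keys here are client-chosen initial CIDs, so use the DoS-resistant
    // std hasher: RandomState seeds SipHash with per-process randomness,
    // making collision-triggering inputs impractical to construct.
    let mut connection_ids_initial: HashMap<Vec<u8>, ConnHandle, RandomState> =
        HashMap::with_hasher(RandomState::new());
    connection_ids_initial.insert(vec![0xff; 8], 1);
    assert_eq!(connection_ids_initial.get(&vec![0xff; 8]), Some(&1));
    // A map keyed by locally generated CIDs could keep FnvHashMap, since
    // our own code controls the key distribution.
}
```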
Initially, can use this to test decoding routines. There's an example at https://github.com/djc/tokio-imap/tree/master/imap-proto/fuzz that might be useful in getting this off the ground.
We're currently using an awkward heavyweight hack to schedule timers; see the `quinn` crate's `EndpointInner` struct's `timer` member for details. `DelayQueue` was literally purpose-built for our requirements, so let's put it to work.
Hi,
do you already have congestion control implemented?
In general, how far along would you say the implementation is?
This means changing how packets make it into the `ConnectionState` send queue. There needs to be a control `Frame` `VecDeque` both in `ConnectionState` (for connection-global control frames) and `Streams` (for stream-related control frames). `ConnectionState::queued()` should then be changed to pull frames from these as well as from all sending streams in the `Streams` instance, filling up a packet to maximum capacity. That should be 1232 bytes minus the size of the header, which will differ especially based on whether we're sending long or short headers.
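The budget arithmetic could look roughly like this; the header-size breakdowns are illustrative approximations for a draft-era encoding, not Quinn's exact wire format.

```rust
// Datagram budget from the text above: 1232 bytes.
const DATAGRAM_BUDGET: usize = 1232;

/// Frame payload available in one packet, given the header form.
/// Header layouts are approximations: real lengths vary with CID sizes
/// and varint encodings.
fn payload_budget(long_header: bool, dcid_len: usize, scid_len: usize) -> usize {
    let header = if long_header {
        // flags + version + CID-lengths byte + DCID + SCID + length + packet number
        1 + 4 + 1 + dcid_len + scid_len + 2 + 4
    } else {
        // flags + DCID + packet number
        1 + dcid_len + 4
    };
    DATAGRAM_BUDGET - header
}

fn main() {
    // Short headers leave more room for frames than long headers.
    assert!(payload_budget(false, 8, 0) > payload_budget(true, 8, 8));
}
```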
0-RTT data can be replayed by an attacker, and 0.5-RTT data (data sent by the server using 1-RTT keys before the client's TLS FIN is received) can be intercepted by a MITM. For some applications (e.g. fetching public data), these possibilities are harmless, and the reduction in latency versus 1-RTT is desirable. For others (e.g. performing non-idempotent operations or fetching private data), these are dangerous security vulnerabilities.
A good API should support applications like the former, while making it difficult for applications like the latter to inadvertently be insecure. The simplest solution would be to not support 0/0.5-RTT data, but hopefully we can do better. Perhaps separate feature-gated APIs?
ring currently does not implement Blake2, so we have to pull in an extra dependency for this. @Ralith what was your original reason for selecting Blake2 for this? If it's good enough we might ask for an update in the original ring issue.
Implement HTTP framing, as defined in the spec, here:
https://tools.ietf.org/html/draft-ietf-quic-http-11#section-4
I've started the process with the `SETTINGS` frame, here:
https://github.com/djc/quinn/blob/pre-quicr-quinn/src/http/frame.rs
Currently, it is only possible to connect to a server whose certificate is signed by a trusted authority. While this is ideal for web services, for some applications there is no expectation of or even feasible mechanism for this level of trust. Examples include tests, P2P applications with transient peers, and systems where trust is managed externally. Quinn should support these gracefully, without undermining security for applications where useful certificate authorities exist.
To accomplish this, we need:
Your dependency file specified a branch or reference for https://github.com/Ralith/rustls.git, but Dependabot couldn't find it at the project's source. Has it been removed?
You can mention @dependabot in the comments below to contact the Dependabot team.
error: incompatible bit mask: `_ | 128` can never be equal to `0`
--> quinn-proto/src/tests.rs:407:17
|
407 | assert!(packet[0] | 0x80 != 0);
| ^^^^^^^^^^^^^^^^^^^^^
|
= note: #[deny(clippy::bad_bit_mask)] on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#bad_bit_mask
So, this one is the only clippy lint that is flagged as an error. This assert appears to be effectively `assert!(true)`. If the intention here was to check for the long header, perhaps it should instead be:

```rust
assert_ne!(packet[0] & 0x80, 0);
```
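A quick check confirms why the original assert is vacuous and the `&` form is the meaningful one:

```rust
fn main() {
    // For every possible byte, OR-ing in 0x80 can never yield 0,
    // so the original assert always passes regardless of the input.
    for b in 0u8..=255 {
        assert!((b | 0x80) != 0);
    }
    // AND-ing with 0x80 actually tests the long-header bit:
    assert_eq!(0x40u8 & 0x80, 0); // short-header-style byte fails the check
    assert_ne!(0xc0u8 & 0x80, 0); // long-header byte passes it
}
```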
Hi. Thanks for quinn. I want to call quinn from C/C++. How can I create C/C++ bindings?
The spec is here:
https://tools.ietf.org/html/draft-ietf-quic-qpack-00
There really isn't any code around this yet, so at first this could just be a stand-alone implementation with unit test coverage to make sure it can round-trip properly.
Dependabot couldn't find a Cargo.toml for this project.
Dependabot requires a Cargo.toml to evaluate your project's current Rust dependencies. It had expected to find one at the path: `/rustls/Cargo.toml`.
If this isn't a Rust project, or if it is a library, you may wish to disable updates for it from within Dependabot.
You can mention @dependabot in the comments below to contact the Dependabot team.
During handshakes, it's common for multiple small QUIC packets to be transmitted in rapid succession. These can be concatenated into a single UDP packet to reduce overhead. This could be done by deferring the actual UDP transmit until at least a full MTU of QUIC packets is available to send or no more need to be immediately sent.
Once 0-RTT support is implemented, this could further reduce overhead by allowing outgoing connections with 0-RTT data to use that data in place of the bulk of the `Initial` packet's padding.
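A minimal sketch of such a coalescer (all names illustrative): small QUIC packets accumulate in a pending buffer and are flushed as one UDP datagram once the MTU budget would be exceeded, or on an explicit flush.

```rust
const MTU: usize = 1232; // per-datagram budget

struct Coalescer {
    pending: Vec<u8>,       // packets accumulated for the next datagram
    datagrams: Vec<Vec<u8>>, // completed datagrams ready to transmit
}

impl Coalescer {
    fn new() -> Self {
        Coalescer { pending: Vec::new(), datagrams: Vec::new() }
    }

    /// Queue one QUIC packet, flushing first if it would not fit.
    fn queue(&mut self, packet: &[u8]) {
        if self.pending.len() + packet.len() > MTU {
            self.flush();
        }
        self.pending.extend_from_slice(packet);
    }

    /// Emit whatever is pending as a single UDP datagram.
    fn flush(&mut self) {
        if !self.pending.is_empty() {
            self.datagrams.push(std::mem::take(&mut self.pending));
        }
    }
}

fn main() {
    let mut c = Coalescer::new();
    c.queue(&[0u8; 600]); // e.g. an Initial
    c.queue(&[0u8; 600]); // plus a Handshake: still fits in one datagram
    c.queue(&[0u8; 600]); // would exceed MTU, so the first two are flushed
    c.flush();
    assert_eq!(c.datagrams.len(), 2);
    assert_eq!(c.datagrams[0].len(), 1200);
}
```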
Random notes on things I find in the examples that are kind of hard to parse out, as one only moderately familiar with QUIC and tokio/futures:

- The server example is broken up into `handle_connection()` and `handle_request()`, which is nice, but the client example shoves everything into one big long gnarly future, which is hard to figure out. When trying to understand how futures work, I often find breaking them up so you can actually see the type signatures to be really useful.
- The examples sometimes use `tokio_current_thread::spawn()` and sometimes `runtime.spawn()`, which, since you use the single-threaded runtime, I think are equivalent? It's not entirely clear though.
- The examples use a `current_thread` tokio `Runtime`, which sort of makes some sense I think, but again isn't obvious to someone who doesn't know this. Maybe it's documented somewhere, but I didn't find it.

These are all just the sorts of things that are invisible to someone who knows what they're doing and really opaque to someone who doesn't. I'll semi-happily update the examples to try to improve some of these things, if you guys want.
Blocked on rustls/rustls#151
edit: On review, it looks like the involvement of the TLS stack in stateless retries has been factored out of recent drafts, so this may no longer be blocked.
Currently, Quinn only sends packets synchronously in response to received packets. This has been shown to fail on the server side, where it should send multiple `Handshake` packets in response to an `Initial` packet. I think the solution here should be, instead of storing a `ConnectionState` directly in the `Server::connections` map, there should be a `Connection` type (in `server`) that holds the `ConnectionState` and is connected to the server by means of some channels (`futures::sync::mpsc`?). In particular, the channel from `Connection` to `Server` should share the receiver across all the connections to prevent having to poll.
@dyxushuai and @cssivision own the `quic` crate and have created the `quic-rs` org. I think it would be nice if you three worked together. What do you think?
The current API is pretty confusing because it requires passing in a `Side`, but it's not obvious whether that is the API user's side or the message originator's side. My original implementation had explicit `ClientTransportParameters` and `ServerTransportParameters` types, which is more obvious, and can also directly implement the `Value`/`Codec` trait.
There is a lot of repetition in error handling code in methods of the `Connection` impl. There are multiple ways this could be improved, in increasing order of complexity:

- move the handling into `Endpoint` so it does the proper handling in one place
- `Connection` no longer has to know its `ConnectionHandle`

@Ralith does that make sense to you? @twilco this is a little more open-ended, but I think you can ramp up by taking it one step at a time to gain some familiarity with the issues.
My test server has been crashing from this panic:
thread 'main' panicked at 'invalid transport parameter tag 65280', src/parameters.rs:187:22
The byte sequence leading to this problem (from printing the contents of `sub`) is this:

```
[0, 0, 0, 4, 0, 0, 64, 0, 0, 1, 0, 4, 0, 0, 128, 0, 0, 2, 0, 2, 0, 1, 0, 8, 0, 2,
0, 1, 0, 3, 0, 2, 0, 10, 255, 0, 0, 2, 255, 0, 255, 1, 0, 2, 255, 1, 255, 2, 0, 2, 255, 2, 255, 3, 0, 2, 255, 3, 255, 4, 0, 2, 255, 4, 255, 5, 0, 2, 255, 5, 255, 6, 0, 2, 255, 6, 255, 7, 0, 2, 255, 7, 255, 8, 0, 2, 255, 8, 255, 9, 0, 2, 255, 9, 255, 10, 0, 2, 255, 10, 255, 11, 0, 2, 255, 11, 255, 12, 0, 2, 255, 12, 255, 13, 0, 2, 255, 13, 255, 14, 0, 2, 255, 14, 255, 15, 0, 2, 255, 15]
```

Make sure any bugs in the `TransportParameters::decode()` routine are fixed.
As recommended in quic-tls, Section 6 (Key Update):
Keys and their corresponding secrets SHOULD be discarded when an
endpoint has received all packets with packet numbers lower than the
lowest packet number used for the new key. An endpoint might discard
keys if it determines that the length of the delay to affected
packets is excessive.