yoshuawuyts / futures-concurrency

Structured concurrency operations for async Rust

Home Page: https://docs.rs/futures-concurrency

License: Apache License 2.0

Language: Rust (100%)
Topics: async, rust, futures, structured-concurrency

futures-concurrency's Introduction

futures-concurrency

Structured concurrency operations for async Rust

Performant, portable, structured concurrency operations for async Rust. It works with any runtime, does not erase lifetimes, always handles cancellation, and always returns output to the caller.

futures-concurrency provides concurrency operations for groups of both futures and streams, for bounded as well as unbounded sets. In both cases performance should be on par with, if not exceed, conventional executor implementations.

Examples

Await multiple futures of different types

use futures_concurrency::prelude::*;
use std::future;

let a = future::ready(1u8);
let b = future::ready("hello");
let c = future::ready(3u16);
assert_eq!((a, b, c).join().await, (1, "hello", 3));

Concurrently process items in a stream

use futures_concurrency::prelude::*;

let v: Vec<_> = vec!["chashu", "nori"]
    .into_co_stream()
    .map(|msg| async move { format!("hello {msg}") })
    .collect()
    .await;

assert_eq!(v, &["hello chashu", "hello nori"]);

Access stack data outside the futures' scope

Adapted from std::thread::scope.

use futures_concurrency::prelude::*;

let mut container = vec![1, 2, 3];
let mut num = 0;

let a = async {
    println!("hello from the first future");
    dbg!(&container);
};

let b = async {
    println!("hello from the second future");
    num += container[0] + container[2];
};

println!("hello from the main future");
let _ = (a, b).join().await;
container.push(4);
assert_eq!(num, container.len());

Installation

$ cargo add futures-concurrency

Contributing

Want to join us? Check out our "Contributing" guide and take a look at some of these issues:

License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

futures-concurrency's People

Contributors

alastair-smith2, alexmoon, cactter, conradludgate, dtolnay, eholk, jmintb, kianmeng, matheus-consoli, michaelwoerister, miguelraz, phil-opp, poliorcetics, soooch, swatinem, wishawa, yoshuawuyts


futures-concurrency's Issues

Control number of wakes in benchmark streams

As mentioned in #80 (comment) our benchmark streams basically just yield once and then stop. Instead we should be able to control how often they yield before halting. In order to do this we'll need to rework our types a bit to support it.

This should make it easier to plot the differences using criterion. Right now our measurements of the difference between select! and stream::merge are fairly static. But if we can control the number of iterations we should get a better picture of what it's like when working with longer-lived streams.
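A minimal sketch of what such a configurable benchmark stream could look like (names are illustrative, not the crate's actual bench types):

use futures_core::Stream;
use std::pin::Pin;
use std::task::{Context, Poll};

/// A stream that yields a configurable number of times before halting.
struct CountdownStream {
    remaining: usize,
}

impl Stream for CountdownStream {
    type Item = ();

    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        if self.remaining == 0 {
            Poll::Ready(None)
        } else {
            self.remaining -= 1;
            Poll::Ready(Some(()))
        }
    }
}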

Implement more robust

When debugging tuple Merge with mw we found something which hints at a potential bug in tuple Merge. The test case was as follows:

let a = stream::once(async { () });
let b = stream::once(pending());
let mut c = (a, b).merge();
while let Some(item) = c.next().await {
    dbg!(item);
}
println!("done");

"done" should never be able to print in this case. When doing a cargo run it never triggered, but when running it through the debugger we could see it trigger. This may hint at a race condition!

stream tuple merge is broken

Not sure what the issue is, but #2 flagged broken CI. Tested it locally, and it only happens with tuples - not with arrays.

Test for wakers being swapped

As @eholk pointed out in #57 (comment), we're not guaranteed to keep the same waker between calls to poll:

(It'd probably be good to have test coverage for this scenario, but that probably makes sense to add as a later PR.)

That said though: even though it's not guaranteed, a runtime will run into trouble if it polls a future with a waker once and then no longer associates that waker with the future. A future may be moved in between polls and polled again, but then take a "return fast" path in order to not do the expensive thing.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct MyFuture {
    in_progress: bool,
}

impl Future for MyFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.in_progress {
            // "return fast" path: report pending without redoing the
            // expensive work (and without re-registering the waker).
            Poll::Pending
        } else {
            // actually do IO here
            Poll::Pending
        }
    }
}

This is mostly a check to guard against error-prone runtime implementations. The actual right thing to do here would be to document that runtimes must associate a future with a waker after it's been passed once, so the complexity from this lives in the runtime - and we don't have to account for it in every future.

But we're not there right now, and we may not be able to mandate this. Meaning we should just test for this first.
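A rough sketch of the kind of harness such a test could use (names and the exact assertion are illustrative, not an existing test in this crate):

use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Context, Wake, Waker};

/// A waker that counts how often it gets woken.
struct CountingWaker(AtomicUsize);

impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

/// Poll once with waker `a`, then again with waker `b`. A well-behaved
/// combinator should arrange for `b` (the most recent waker) to be the one
/// that eventually gets woken.
fn poll_with_swapped_wakers<F: Future + Unpin>(mut fut: F) -> (Arc<CountingWaker>, Arc<CountingWaker>) {
    let a = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let b = Arc::new(CountingWaker(AtomicUsize::new(0)));

    let waker_a = Waker::from(Arc::clone(&a));
    let _ = Pin::new(&mut fut).poll(&mut Context::from_waker(&waker_a));

    let waker_b = Waker::from(Arc::clone(&b));
    let _ = Pin::new(&mut fut).poll(&mut Context::from_waker(&waker_b));

    (a, b)
}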

create "pre-optimizations branch"

We've been doing a fair number of performance improvements. At some point we're going to have to cut a release, and may even want to write about how much faster things are now. This requires establishing a baseline.

Perhaps 533a14c can be forked, with the benchmarks backported on top? The only downside is that tuple merge only worked for up to length 3, so we'll need to add that to the current suite.

Improve future docs

We should start each future's description with: "A future that...", just like std::iter starts all of their types with: "An iterator that...".

When looking at the docs overview this should make it more clear that something is a future rather than a type which needs to be manually constructed.

Stream `merge` function calls `poll_next` again after returning `Poll::Ready(None)`

Stream::poll_next should not be called again after it has returned Ready(None); otherwise it "may panic, block forever, or cause other kinds of problems". The merge implementation does not seem to respect this.

For example, the following code will panic:

use futures_concurrency::stream::Merge;
use futures::StreamExt;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let create_stream = |count| futures::stream::unfold(count, |n| async move {
        if n > 1 {
            Some((n, n - 1))
        } else {
            None
        }
    });
    let stream_1 = create_stream(10);
    let stream_2 = create_stream(10);
    
    let merged = (stream_1, stream_2).merge();
    println!("{:?}", merged.collect::<Vec<_>>().await);
}

The panic message is:

thread 'main' panicked at 'Unfold must not be polled after it returned Poll::Ready(None)'

Fusing the streams (i.e. create_stream(10).fuse()) fixes this, but that should either not be needed or be enforced by the API.

Implement traits for unit type

The unit type can also be understood as a zero-length tuple. The stdlib implements traits for ZSTs, so we should probably follow suit.
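A rough sketch of what that could look like for Join, assuming a trait shape that mirrors the Chain/Zip designs elsewhere in this document (the real trait may differ):

use std::future::{ready, Future, Ready};

// Hypothetical trait shape, mirroring the Chain/Zip designs below.
pub trait Join {
    type Output;
    type Future: Future<Output = Self::Output>;
    fn join(self) -> Self::Future;
}

// Joining zero futures completes immediately with `()`.
impl Join for () {
    type Output = ();
    type Future = Ready<()>;

    fn join(self) -> Self::Future {
        ready(())
    }
}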

Add `stream::Chain` trait

This is a core iteration "order of execution" operation, enabling people to author:

before

// Manual sequential iteration.
for await n in 0..10 { dbg!(n); }
for await n in 11..20 { dbg!(n); }

after

// The same semantics using `chain`.
for await n in ((0..10), (11..20)).chain() {
    dbg!(n)
}

This is particularly useful to do things like "send one last message after EOF".

Design

pub trait Chain {
    /// What's the return type of our stream?
    type Item;

    /// What stream do we return?
    type Stream: Stream<Item = Self::Item>;

    /// Combine multiple streams into a single stream.
    fn chain(self) -> Self::Stream;
}

Tasks

  • Chain trait #73
  • impl Chain for vec #73
  • impl Chain for array #73
  • impl Chain for tuple
  • benchmarks

inline `utils::random`

Instead of using a thread-local to access randomness, we should inline the RNG in the output struct. The size of it is basically just a u32, which shouldn't add much overhead.

We don't need cryptographic randomness, but instead only need to pick a semi-random starting point for fairness purposes. Meaning: doing what we're doing without accessing system calls is just fine here, and relying on a thread-local would be overkill.

The seed for the RNG is also guaranteed to be random since we use a memory address as a seed. Meaning it's all random-enough, but also hyper cheap to perform. Removing thread-local access should strictly be faster.
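A minimal sketch of the idea (the exact algorithm and names are illustrative, not necessarily what we'd ship):

/// A small xorshift32 generator kept inline in the combinator's state.
/// Non-cryptographic, but plenty to pick a fair starting index.
struct InlineRng(u32);

impl InlineRng {
    fn new(addr: *const ()) -> Self {
        // Use a memory address as the seed; `| 1` avoids an all-zero state.
        Self(addr as usize as u32 | 1)
    }

    fn next(&mut self) -> u32 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        self.0 = x;
        x
    }
}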

Governance while yosh is out of office

Hey all,

Just wanted to share that I'll be out of office starting tomorrow until December 19th. In the interim: please feel free to keep filing issues, make PRs, and merge them too. I trust folks here to exercise their judgement on which things to merge, and make sure changes are reviewed, etc. So far things have been going pretty steady, so please feel free to keep things going the way they are.

Both @matheus-consoli and @eholk have commit rights and can merge PRs. And @eholk also has the ability to publish new releases in a pinch. I don't expect a new release to have to be made, but should there be some critical issue that needs a hot fix: it's at least possible if I'm not around.

For anyone new to the project: welcome! I ask you to please be respectful of people's time. Other than me, folks here are really just volunteering their time to help out on this. While they may be kind enough to help in a pinch, they are under absolutely no obligation to do so. Should there be larger issues or design questions: I'll be back in a few weeks and I'll be happy to discuss them then.

Anyway, I figured I'd let people know so nobody wonders why I'm suddenly radio-silent. See y'all in a few weeks!

Implement `Join`, `Race`, etc for `SmallVec`

So far, Join, Race, etc are implemented for tuples, vecs, and arrays. It might make sense to also provide an implementation for SmallVec (possibly behind a feature-gate).

Although, upon writing this, I'm starting to think that fixed size arrays probably cover the same scenarios pretty well.

Inline `PollState` in future when possible

JoinState is 2 bits of state per future, which means with a u64 we should be able to inline the state for a Vec<impl Future> of up to 32 items. We suspect that should cover the majority of cases.

That would bring vec::Join down from 2 allocations (metadata + output) to just a single allocation (output) in the majority of cases.
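A rough sketch of the bit-packing this describes (not the crate's actual layout):

/// 2 bits of per-future state packed into a single u64, covering up to
/// 32 futures without a separate metadata allocation.
struct InlineStates(u64);

impl InlineStates {
    fn get(&self, index: usize) -> u64 {
        debug_assert!(index < 32);
        (self.0 >> (index * 2)) & 0b11
    }

    fn set(&mut self, index: usize, state: u64) {
        debug_assert!(index < 32 && state <= 0b11);
        let shift = index * 2;
        self.0 = (self.0 & !(0b11 << shift)) | (state << shift);
    }
}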

Fair chaining APIs

Now that #104 exists to close #85, there is a real question about fairness and chaining. I've made the case before that in order to guarantee fairness, the scheduling algorithm needs to know about all types it operates on. When we were still using permutations and even just rng-based starting points, I believed this to be true. But I'm slowly coming around to @eholk's idea that this may not be the case.

Benefits

If we resolve this, I think we may be able to improve our ergonomics. Take for example the following code, which I believe to be quite representative of futures-concurrency's ergonomics:

let streams = (
    socket_stream.map(Either::Response),
    route_rx.stream().map(Either::Request),
    pinger.map(|_| Either::Ping),
);
let mut merged = streams.merge();
while let Some(either) = merged.next().await { ... }

The tuple instantiation imo looks quite foreign. In this repo's style, we'd probably instead choose to name the futures, flattening the operation somewhat:

let a = socket_stream.map(Either::Response);
let b = route_rx.stream().map(Either::Request);
let c = pinger.map(|_| Either::Ping);

let mut merged = (a, b, c).merge();
while let Some(either) = merged.next().await { ... }

But while I'd argue this is more pleasant to read, we can't expect people to always do this. The earlier example is often easier to write, and thus will be written as such. But a chaining API could probably be even easier to author:

let mut merged = socket_stream
    .map(Either::Response)
    .merge(route_rx.stream().map(Either::Request))
    .merge(pinger.map(|_| Either::Ping));

while let Some(either) = merged.next().await { ... }

We used to have this API, but we ended up removing it. And I think there's definitely a case to be made for adding it back. Just like we'd be expected to have both async_iter::AsyncIterator::chain and impl async_iter::Chain for tuple, we could have both variants of merge as well.

Implementation

I'd love to hear more from @eholk here. But my initial hunch is that perhaps something like ExactSizeIterator could help us. But rather than return how many items are contained in an iterator, it'd return the number of iterators contained within. That way outer iterators can track how often they should call inner iterators before moving on. I think this may need specialization to work though?
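A rough sketch of that hunch (the trait name and default are hypothetical):

/// Reports how many leaf streams a stream wraps, so an outer `merge` can
/// weight its polling across nested merges to stay fair.
trait StreamCount {
    /// A plain stream counts as one; a merge of three streams would report
    /// the sum of its children.
    fn stream_count(&self) -> usize {
        1
    }
}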

I think even if we can't make the API strictly fair, it might still be worth adding the chaining API - and we can possibly resolve the fairness issues in the stdlib? Or maybe we can add a nightly flag with the specializations on it as part of this lib? Do folks have thoughts on this?

Author baseline benchmark

For #21 we're planning to implement what's essentially an optimization. We shouldn't do anything without at least having some form of benchmark, so we should start there.

Maybe we can borrow something from the futures-heavy rustc perf suite?

Add a benchmark comparing task spawning vs in-line joining

We should anticipate that when we put out the futures-concurrency RFC, members of the community may ask about which benefits e.g. join or merge provide over abstractions like task::spawn or select! {}. There are several benefits, including integration with debuggers, ease of use, documentation, structured concurrency, and overall correctness. But also, and this is relevant to the recent work we've been doing: performance!

We should author a benchmark comparing e.g. tokio::spawn and tokio::spawn_local against {tuple,array,vec}::join. We may want to implement #82 for futures as part of this, so we can control the number of wakes and don't exclusively perform synchronous work. But it'll be interesting to see the performance delta between the two.
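As a starting point, a rough sketch of what such a comparison could look like with criterion (the workload and sizes are placeholders, not a tuned benchmark):

use criterion::{criterion_group, criterion_main, Criterion};
use futures_concurrency::prelude::*;

async fn work(n: u64) -> u64 {
    // Stand-in workload; ideally this would yield a controlled number of
    // times (see the "control number of wakes in benchmark streams" issue).
    n * 2
}

fn bench(c: &mut Criterion) {
    let rt = tokio::runtime::Builder::new_current_thread()
        .build()
        .unwrap();

    c.bench_function("tokio::spawn + await handles", |b| {
        b.iter(|| {
            rt.block_on(async {
                let handles: Vec<_> = (0..64u64).map(|n| tokio::spawn(work(n))).collect();
                for handle in handles {
                    handle.await.unwrap();
                }
            })
        })
    });

    c.bench_function("vec::join", |b| {
        b.iter(|| {
            rt.block_on(async {
                let futs: Vec<_> = (0..64u64).map(work).collect();
                futs.join().await
            })
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);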

Add tests for `no_std`

The traits themselves are compatible with no_std, and the implementations for array and tuple should be compatible with no_std as well. But as we're adding optimizations, we're using things like bitvec to store metadata in, speeding things up - but that may require allocations.

We should author tests to ensure that the traits keep working on no_std environments, even as we add optimizations. On the implementation side we may even want to swap the implementations we provide for no_std environments if it turns out our more optimized approaches are incompatible - but I suspect we may not need to.

Either way: the first step is to author tests and put them in CI so we can ensure that no_std works going forward.

Make `impl Race for tuple` fair

Right now we poll it in linear order. Just like with impl Merge for tuple we should make it so the poll order is randomized to preserve fairness.

Move back to `futures_core::Stream`

Even if we expect the functionality in this library to be integrated into the stdlib, this library should work with the existing futures ecosystem, which means: no custom base trait definitions.

The main bit of weirdness I foresee is that we want future "join" semantics to go through IntoFuture, but we can't use the std::future::IntoFuture trait. So we'll probably need to either keep calling it Join, or define our own IntoFuture trait. That doesn't matter too much, and either should be fine probably.

Move back to associated types

This has a clear benefit in that it unambiguously works around the ?Send Future problem in #[async_trait]. It's probably easier for the stdlib versions too.

Replace `MaybeDone`/`Fuse` with separate input/output/state fields

Right now we're using MaybeDone internally to track the state of each future. It can either be: "pending", "containing data", or "done". This is nice from an implementation perspective, but bad for performance. Namely:

  1. It creates unnecessary copies, and sometimes also allocations. Once to store the futures in the MaybeDone structure. And once to copy the data to the output structure.
  2. It creates unnecessarily large memory needs. At any point we may be using up to 2x the amount of memory needed, since we're doing redundant copies.

The solution is to move the container of futures in-line. Create the container of outputs containing MaybeUninit. And finally create a separate container to track the state. Once all futures have completed, we can mark the output container as "initialized" and immediately yield that.
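A simplified sketch of that layout (field names assumed, not the crate's actual types):

use std::future::Future;
use std::mem::MaybeUninit;

/// Per-future progress marker.
enum State {
    Pending,
    Ready,
    Consumed,
}

/// Futures, their (initially uninitialized) outputs, and per-future state
/// stored side by side, instead of wrapping each future in `MaybeDone`.
struct Join<Fut: Future, const N: usize> {
    futures: [Fut; N],
    outputs: [MaybeUninit<Fut::Output>; N],
    states: [State; N],
}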

Tasks

Priority

  • Join for Vec #29
  • Join for Array #72
  • Join for tuple #74
  • Race for Vec (not needed)
  • Race for Array (not needed)
  • Race for tuple (not needed)

Secondary

  • Merge for Vec #79
  • Merge for Array #70
  • Merge for tuple

Eventually

  • TryJoin for Vec
  • TryJoin for Array
  • RaceOk for Vec
  • RaceOk for Array

Consider redesigning `AggregateError`

RaceOk returns a Result<T, AggregateError> where the error type matches the input type: an array of futures in means an array of errors out, etc.

Right now the API for it is rather clumsy; it's usually a thin wrapper around the underlying type plus an Error impl so you can pass it through as-is. But what we really want is to expose something akin to Error::sources, except iterating over the sibling errors contained within the aggregate rather than down a cause chain. Not entirely sure yet how to best approach this though.
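One possible shape, purely illustrative and Vec-backed (unlike the shape-matching type described above):

use std::error::Error;
use std::fmt;

/// Holds every error produced by the raced futures.
#[derive(Debug)]
struct AggregateError<E> {
    errors: Vec<E>,
}

impl<E> AggregateError<E> {
    /// Iterate over the sibling errors contained in the aggregate, similar
    /// in spirit to `Error::sources` but walking siblings rather than a
    /// cause chain.
    fn errors(&self) -> impl Iterator<Item = &E> {
        self.errors.iter()
    }
}

impl<E: fmt::Debug> fmt::Display for AggregateError<E> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} futures failed: {:?}", self.errors.len(), self.errors)
    }
}

impl<E: fmt::Debug> Error for AggregateError<E> {}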

Improve future debug output

We have list-like futures which don't provide list-like outputs. We should fix that, and author tests for it.

Possibly unnecessary use of `Mutex` in `Merge` implementation?

This is more of a question than an issue right now, but it may warrant some action depending on the answer.

I noticed in the implementation impl<S, const N: usize> Stream for Merge<S, N> that WakerArray is being used, which requires locking a Mutex for read/write. However, since poll_next borrows Self mutably, it's guaranteed that we won't get any re-entrant calls to poll_next.

Just wondering if there is a reason this is needed - something I'm missing - or whether there is room for improvement here, say a WakerArray implementation that didn't require Mutex?

Thanks!

implement `(a, b, c).race()` in this crate, or somewhere else?

In https://blog.yoshuawuyts.com/futures-concurrency-3/ there is mention of a Future::race extension method in async_std.

There is also a suggestion that there could be an extension trait that's implemented for tuples, similar to how Merge works currently in this crate: https://docs.rs/futures-concurrency/latest/futures_concurrency/trait.Merge.html#impl-Merge-for-(S0%2C%20S1%2C%20S2%2C%20S3)

The post also says:

In this post we're going to take a look at how this mode of concurrency works, take a closer look at the issues select! {} has, discuss Stream::merge as an alternative, and finally we'll look at what the ergonomics might look like in the future. This post is part of the "Futures Concurrency" series. You can find all library code mentioned in this post as part of the futures-concurrency crate.

I find the idea of a race extension method really compelling, and have told all of my workmates about the blog post, but there doesn't seem to be an implementation of it yet. If someone wanted to implement this trait, would this repo be a reasonable place to put it?

Add `stream::Zip` trait

This is a core iteration "order of execution" operation, enabling people to author:

after

let s1 = async_iter::repeat(0).take(5);
let s2 = async_iter::repeat(1).take(5);
let s3 = async_iter::repeat(2).take(5);

for await (n1, n2, n3) in (s1, s2, s3).zip() {
    assert_eq!((n1, n2, n3), (0, 1, 2));
}

This is better than the nested tuple approach of non-trait based zip.

Design

pub trait Zip {
    /// What's the return type of our stream?
    type Item;

    /// What stream do we return?
    type Stream: Stream<Item = Self::Item>;

    /// Combine multiple streams into a single stream.
    fn zip(self) -> Self::Stream;
}

Tasks

  • Zip trait #73
  • impl Zip for vec #73
  • impl Zip for array #73
  • impl Zip for tuple
  • benchmarks

run clippy on CI

Clippy has a lot of complaints; we should run it on CI and validate what we're doing.

Split `utils::WakerList` into `WakerVec` and `WakerArray`

This works towards #68, but we should have a const-variant of the existing WakerList. Probably one way to do it would be to separate it by name: e.g. WakerVec (dynamic size) vs WakerArray (static size). This would make PRs like #75 faster and not rely on allocations.

implement Join and Merge for iterators?

Hi, huge fan of what this crate has to offer. I'm wondering if anyone's thought about implementing these traits for iterators? Is there a technical reason why that would be difficult or is it just that no one has done it yet? I could try my hand at it.

I'm thinking something with a signature like:

impl<T, F, O> Join for T
where
    T: Iterator<Item = F>,
    F: Future<Output = O>,
{
    type Output = std::vec::IntoIter<O>;
    ...
}

Although already several questions are raised just writing this out. Would we want T: Iterator or T: IntoIterator? In the second case we would conflict with implementations for Vec and arrays, so probably Iterator. It would be nice to cover more types with one generic implementation, but I don't think we could give one implementation that is satisfying for iterators, vecs, and arrays.

The next question is the output type. On the one hand it would be nice to have the output be the same kind of thing as the input, so <T as Join>::Output: Iterator<Item = O>. On the other hand, joining all the futures would probably require allocating the results into a vec, and the IntoIter struct contains a vec, meaning that if the user just wants a vec they have to do a .collect() (which I think would get optimized away), making the IntoIter's existence pointless. It would be nice to give the user back the same kind of thing they put in, but if it's secretly a vec behind the scenes then it may better to give them that and leave them the option to use .into_iter() to get an iterator.

Personally I would want to return an iterator. In my code I'm using futures::future::join_all and tap, and I'm writing over and over again:

.pipe(join_all)
.await
.into_iter()

it would be slightly better if I could write:

.join()
.await
.into_iter()

and slightly better still:

.join()
.await

But maybe in someone else's use case they want a Vec more often than they want an iterator.

Anyways, I like this crate and I hope that one day it goes mainstream :).

Implement "perfect" waking through intermediate wakers

Instead of each poll call being O(N) in time, it should be O(1), by tracking which futures have been woken on each iteration and only polling those.

This acts as a guard against faulty manually implemented futures which don't have a fast return path in the case the wake is a no-op. #8 is a first attempt at such a patch, but needs more work to finish up.
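A hedged sketch of the intermediate-waker idea (names illustrative; #8 is the real starting point):

use std::sync::{Arc, Mutex};
use std::task::{Wake, Waker};

/// An intermediate waker handed to child future `index`; waking it records
/// which child is ready so the parent only re-polls that one.
struct InlineWaker {
    index: usize,
    readiness: Arc<Mutex<Vec<bool>>>,
    parent: Waker,
}

impl Wake for InlineWaker {
    fn wake(self: Arc<Self>) {
        // Mark this child as ready, then wake the parent task.
        self.readiness.lock().unwrap()[self.index] = true;
        self.parent.wake_by_ref();
    }
}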

Tasks

Priority

  • Merge for Vec #50, #57
  • Merge for Array #75
  • Merge for tuple #96

Secondary

  • Join for Vec
  • Join for Array
  • Join for tuple
  • Race for Vec
  • Race for Array
  • Race for tuple

Eventually

  • TryJoin for Vec
  • TryJoin for Array
  • TryRace for Vec
  • TryRace for Array

Should futures in `Join` be dropped at completion of each?

Consider the following code:

use std::time::Duration;
use tokio::time::sleep; // `sleep` source assumed; tokio is the runtime compared below

struct Foo(u64);

impl Drop for Foo {
    fn drop(&mut self) {
        println!("dropping {}", self.0)
    }
}

async fn do_something(a: Foo) -> Foo {
    sleep(Duration::from_millis(a.0 * 100)).await;
    println!("completed: {}", a.0);
    a
}

let a = do_something(Foo(1));
let b = do_something(Foo(2));
let c = do_something(Foo(3));
let d = do_something(Foo(4));

let (_x, _y, _z) = ((a, b).race(), c, d).join().await;

The output of this code is:

completed: 1
completed: 3
completed: 4
dropping 2
dropping 4
dropping 3
dropping 1

But if tokio::join!, futures::join!, or futures::future::join3 is used instead of Join::join, the output will be:

completed: 1
dropping 2
completed: 3
completed: 4
dropping 4
dropping 3
dropping 1

Author benchmark comparing `merge` to `select!` and `race`

We should author a comparative benchmark comparing merge to loop { select! {} } to loop { race() }. merge should be significantly faster, especially for higher concurrency, or more iterations. But it's worth measuring and reporting exactly how much faster.

Deterministically pick the next future to poll

This issue is partly about implementing it, but more importantly to have the discussion about whether we want this. There are tradeoffs that I'll try to explain in this issue.

Fairness and Determinism

This crate strives to be fair in polling futures when there is a choice between which future or stream to poll next. The exact details of the guarantee that we want aren't specified yet (#45), but roughly we want it so that no future is unfairly more likely to complete than another. So if we always polled 0..n, that would be unfair because it would be biased towards futures that are earlier in the list.

The way we guarantee fairness currently is to pick a random index and start polling from there. This keeps any particular entry in the list from getting an unfair advantage. This version does not give us determinism though.

From a discussion with @conradludgate, it seems like it might be enough to just remember the last index we polled and then go from there. This version would give us a deterministic scheduler, but in the long run we don't give an advantage to any index (modulo some special cases I'll get into later).
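A minimal sketch of that deterministic alternative (names illustrative):

/// Remember where polling stopped last time and resume from the next index,
/// instead of picking a random starting point on every poll.
struct RotatingIndex {
    last: usize,
    len: usize,
}

impl RotatingIndex {
    /// The order in which child futures should be polled this round.
    fn order(&mut self) -> impl Iterator<Item = usize> {
        self.last = (self.last + 1) % self.len;
        let (start, len) = (self.last, self.len);
        (0..len).map(move |i| (start + i) % len)
    }
}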

Why we want determinism?

Concurrency bugs can be extremely difficult to debug and create test cases for. While there will likely always be nondeterminism coming from external sources (e.g. network requests can take a variable amount of time to complete), we can make it so the scheduler does not introduce any additional nondeterminism. This also makes it possible to carefully construct futures for test cases that can exhibit a particular tricky interleaving that may expose bugs. For debugging, when trying to figure out why something went wrong it can be helpful to replay the exact same sequence of events.

Determinism is also potentially faster, since we can save a few instructions from calculating a random number each time around. I doubt this will matter much in practice though.

Why do we not want determinism?

As kind of a dual to the testing/debugging argument I made in the last section, nondeterminism can also help with development and testing. It might be that there are weird bugs that only happen in weird interleavings of futures that would be unlikely to come up during development but could happen in production. In that case, introducing nondeterminism can help uncover these. It's basically fuzz testing for the scheduler.

Also, determinism is somewhat in conflict with fairness. If we did something like [ready(1), ready(2), ready(3)].race() then by our intuitive definition of fairness we should expect to get either 1, 2, or 3 out. But any deterministic scheduler will always return the same answer. So a deterministic scheduler can only be fair in the long run, but not for short-lived processes.

What are some other options?

If we decide to have a deterministic scheduler, it might be worth adding a fuzz testing mode that randomizes the scheduling order, which would give us one of the main benefits of a nondeterministic scheduler.

If we decide not to have determinism, we could add an option to fix the random seed which would make things deterministic when needed, such as when writing a regression test for a nasty bug.

We could add a feature flag and let library users decide which mode they want, although in my opinion it'd be better to not have the complexity of maintaining two versions.

What should we do?

I'm partial to a deterministic scheduler (which is why I opened the issue). For real-world futures, things will be inherently nondeterministic, but I think it makes sense to be predictable where we can. I'm curious to hear other people's opinions though, since I think there are good arguments either way.

Implement PinnedDrop for `impl RaceOk for Tuple`

#109 implemented RaceOk for tuples, but it is missing a PinnedDrop implementation. This is needed since we have a MaybeUninit structure which holds the errors. If we get an error, store it, and then cancel the RaceOk call, we've now leaked data, since MaybeUninit values need to be manually de-initialized on drop.

Document fairness properties

Traits such as race and merge produce a single output from multiple inputs. These traits provide fairness properties which should be documented at the trait implementation level, but also on the Race and Merge traits themselves.

Run Miri on CI

If we're going to implement #22, we're going to be using unsafe. And more unsafe means more chances to get UB. We should run our test suite through Miri to validate against accidental UB.
