async-std's Issues

Build link-checking for book

Land #61 and make sure it doesn't break netlify ;).

  • Turn book building into a script
  • Make sure link checking only triggers during testing
  • On deploy environments, download the mdbook and linkcheck binaries

Make a final decision for the prelude

Right now I feel we should go with the following prelude:

pub use crate::future::Future;

pub use crate::io::BufRead as _;
pub use crate::io::Read as _;
pub use crate::io::Seek as _;
pub use crate::io::Write as _;

pub use crate::stream::Stream;

pub use crate::time::Timeout as _;

I don't think we should add or remove anything from this list, but am unsure about which traits should be anonymously imported (as _) and which shouldn't. I feel like fully importing traits like Read could be a potential source of conflicts if the user uses std::io::Read in their code at the same time.

What about the Stream trait? Should it be anonymous or not? In a way, it is as fundamental as Iterator, which is even in the std prelude. So maybe we should import it fully.

But I'm still not 100% decided...
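The tradeoff can be sketched with std alone (a minimal illustration, not async-std code): an anonymous import brings a trait's methods into scope without claiming its name, so a same-named trait from elsewhere cannot conflict.

```rust
// Anonymous trait import: Write's methods become callable, but the name
// `Write` stays free, so e.g. another crate's `Write` trait could be
// imported alongside it without a conflict.
use std::io::Write as _;

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // write_all is provided by std::io::Write, in scope via `as _`.
    buf.write_all(b"hello").unwrap();
    assert_eq!(buf, b"hello");
}
```

The downside is that the trait cannot be named in bounds or type annotations without a second, named import, which is exactly the tension here.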

[tracking] io

Similar to #129 for streams, this issue tracks what's left to port from std::io to async_std::io.

Top-level exports

  • prelude

Free functions

  • copy
  • empty
  • repeat
  • sink
  • stderr
  • stdin
  • stdout

Structs

  • BufReader
  • BufWriter
  • Bytes
  • Chain
  • Cursor
  • Empty
  • Error
  • IntoInnerError
  • IoSlice
  • IoSliceMut
  • LineWriter
  • Lines
  • Repeat
  • Sink
  • Split
  • Stderr
  • StderrLock
  • Stdin
  • StdinLock
  • Stdout
  • StdoutLock
  • Take

Read methods

  • Read::by_ref
  • Read::bytes
  • Read::chain
  • Read::read_exact
  • Read::read_to_end
  • Read::read_to_string
  • Read::read_vectored
  • Read::take

Write methods

  • Write::by_ref
  • Write::write_all
  • Write::write_fmt
  • Write::write_vectored

BufRead methods

  • BufRead::buffer
  • BufRead::consume
  • BufRead::lines
  • BufRead::read_line
  • BufRead::read_until
  • BufRead::split

BufWriter methods

  • BufWriter::buffer
  • BufWriter::get_mut
  • BufWriter::get_ref
  • BufWriter::into_inner
  • BufWriter::new
  • BufWriter::with_capacity

BufReader methods

  • BufReader::fill_buf
  • BufReader::get_mut
  • BufReader::get_ref
  • BufReader::into_inner
  • BufReader::new
  • BufReader::with_capacity

More flexible reactor/executor API

While this is something that IMHO shouldn't be considered for 1.0, it would be good to start discussing what such an API could look like and what requirements different folks have here.

Also, while this is somewhat related to #60, my main point here is having control over the lifetime of the reactor/executor, being able to run multiple of them, and deciding which one is used when and where. See also rustasync/runtime#42 for a similar issue of mine on the runtime crate, on which everything that follows is based.


Currently the executor, reactor, and thread pools are all global and lazily started when first needed, and there is no way to e.g. start them earlier, stop them at some point, or run multiple separate ones.

This simplifies the implementation a lot at this point (the code is extremely clean and easy to follow right now!) and is also potentially more performant than passing state around via thread-local storage (as e.g. tokio does).

It does, however, limit usability in at least two scenarios where I'd like to make use of async-std.

Anyway, here are the reasons why this would be useful (I'll call the reactor/executor/thread-pool combination a runtime in the following):

  1. Usage in library crates without interfering with any other futures code that other library crates or the application might use. This could also come with per-thread configuration inside the library crate, e.g. setting thread priorities of the runtime in a way that is meaningful for what that specific library is doing. (See also rustasync/runtime#8)
  2. Similar to the above, but with more requirements: plugins. A plugin might want to use a runtime internally, but at some point it should be possible to unload the plugin again. As Rust generally links statically at this point, each plugin would include its own copy of async-std, so unloading a plugin also requires being able to shut down the runtime at a specific point and to ensure that none of the plugin's code is still running.
  3. Error isolation. While separate processes probably do this even better, being able to compartmentalize the application into parts that don't implicitly share any memory with each other could be useful, also for debuggability.
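To make the lifetime question concrete, here is a std-only toy sketch (all names hypothetical, no relation to async-std's internals) of a worker pool that is started and shut down explicitly instead of lazily via a global:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Toy "runtime": a worker pool with an explicit lifetime. Unlike a lazily
// started global, it can be created, used, and shut down at a chosen point,
// and several instances can coexist (e.g. one per plugin).
struct Runtime {
    sender: Option<mpsc::Sender<Box<dyn FnOnce() + Send>>>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl Runtime {
    fn new(threads: usize) -> Self {
        let (sender, receiver) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..threads)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Take one job at a time; the lock is dropped before running it.
                    let job = receiver.lock().unwrap().recv();
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed by shutdown()
                    }
                })
            })
            .collect();
        Runtime { sender: Some(sender), workers }
    }

    fn spawn(&self, job: impl FnOnce() + Send + 'static) {
        self.sender.as_ref().unwrap().send(Box::new(job)).unwrap();
    }

    fn shutdown(mut self) {
        drop(self.sender.take()); // close the channel; workers drain and exit
        for w in self.workers.drain(..) {
            w.join().unwrap();
        }
    }
}

fn main() {
    let rt = Runtime::new(2);
    let (tx, rx) = mpsc::channel();
    for i in 0..4 {
        let tx = tx.clone();
        rt.spawn(move || tx.send(i).unwrap());
    }
    let mut got: Vec<i32> = rx.iter().take(4).collect();
    got.sort();
    assert_eq!(got, vec![0, 1, 2, 3]);
    rt.shutdown(); // explicit end of the runtime's lifetime
}
```

A real design would additionally cover the reactor and timer threads, but the shape of the API question (explicit construction, explicit shutdown, multiple instances) is the same.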

Book: Tasks example out of date

Here is the tasks example:

use async_std::fs::File;
use async_std::task;

async fn read_file(path: &str) -> Result<String, io::Error> {
    let mut file = File::open(path).await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    contents
}

fn main() {
    let reader_task = task::spawn(async {
        let result = read_file("data.csv").await;
        match result {
            Ok(s) => println!("{}", s),
            Err(e) => println!("Error reading file: {:?}", e)
        }
    });
    println!("Started task!");
    task::block_on(reader_task);
    println!("Stopped task!");
}

But it does not work: File does not seem to have a read_to_string method.

Here is what I had to do to get it to work:


use async_std::{fs, io, task};

async fn read_file(path: &str) -> io::Result<String> {
    //let mut file = File::open(path).await?;
    fs::read_to_string(path).await
}

fn main() {
    let reader_task = task::spawn(async {
        let result = read_file("data.csv").await;
        match result {
            Ok(s) => println!("{}", s),
            Err(e) => println!("Error reading file: {:?}", e),
        }
    });
    println!("Started task!");
    task::block_on(reader_task);
    println!("Stopped task!");
}

Misleading future documentation?

Hi,

the async-std documentation hints that there is a join futures combinator, but according to issue #14 it is still missing.

I recognize that this may be an issue of lower importance, and might not be trivial to fix. Feel free to close this issue if this won't be fixed.

Best,
ambiso

Compatibility with tokio?

Hi,

And kudos for this very promising project.

I'm currently trying to replace all instances of futures.rs and tokio with async-std.

However, hyper requires streams that implement the tokio::io::AsyncRead and AsyncWrite traits.

Given a stream obtained from async-std, such as a TcpStream, how can I get something that implements tokio's traits?

Thanks again for async-std!

Book: unresolved import `futures::select`

In the current version of the book, the final code of the Handling Disconnection tutorial doesn't work:

use futures::{
    channel::mpsc,
    SinkExt,
    FutureExt,
    select,
};

---

error[E0432]: unresolved import `futures::select`
  --> src/main.rs:13:5
   |
13 |     select,
   |     ^^^^^^ no `select` in the root

futures-preview 0.3.0-alpha.17 actually defines two select functions, futures::future::select and futures::stream::select (see its lib.rs).

Changing the import to stream::select (I hope I understood correctly, given that we are working with streams; either way, importing future::select has the same issue) makes the import work, but the select! macro still cannot be resolved:

async fn client_writer(
    messages: &mut Receiver<String>,
    stream: Arc<TcpStream>,
    mut shutdown: Receiver<Void>,
) -> Result<()> {
    let mut stream = &*stream;
    loop {
        select! {
            msg = messages.next().fuse() => match msg {
                Some(msg) => stream.write_all(msg.as_bytes()).await?,
                None => break,
            },
            void = shutdown.next().fuse() => match void {
                Some(void) => match void {},
                None => break,
            }
        }
    }
    Ok(())
}

---

error: cannot find macro `select!` in this scope
  --> src/main.rs:92:9
   |
92 |         select! {
   |         ^^^^^^

error: cannot find macro `select!` in this scope
   --> src/main.rs:126:21
    |
126 |         let event = select! {
    |                     ^^^^^^

warning: unused import: `future::select`
  --> src/main.rs:13:5
   |
13 |     future::select,
   |     ^^^^^^^^^^^^^^
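For what it's worth, the select! macro in futures-preview was gated behind feature flags at the time; one likely fix (the feature names below are an assumption about that release, not verified here) was enabling them in Cargo.toml and importing the macro from the crate root:

```toml
# Hypothetical Cargo.toml entry for futures-preview 0.3.0-alpha.17; the
# "async-await" / "nightly" feature names are assumptions about that release.
[dependencies]
futures-preview = { version = "0.3.0-alpha.17", features = ["async-await", "nightly"] }
```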

Expose IoHandle or some way to register mio Evented

Currently this is hidden as an implementation detail of the network driver. Exposing it would make it possible to hook up arbitrary Evented implementations, e.g. for other kernel event sources.

Perhaps this doesn't belong in async-std... in that case, maybe it could be extracted to another crate?

Create our own types Empty, Sink, and Cursor

Instead of re-exporting types Empty, Sink, and Cursor from std::io into async_std::io, I believe we should create our own equivalents of those types.

The problem with these types is that they implement synchronous and asynchronous traits at the same time, which I think is a mistake: a type should be either synchronous or asynchronous, never both.

As a more concrete example, consider the fact that Sink implements two methods write_all, one coming from std::io::Write and the other coming from async_std::io::Write. Here's an attempt at calling write_all while both traits are imported:

use std::io::prelude::*;

use async_std::io;
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    task::block_on(async {
        let s = io::sink();
        s.write_all(b"hello world").await?;
        Ok(())
    })
}

This errors out with:

error[E0034]: multiple applicable items in scope
  --> examples/foo.rs:10:11
   |
10 |         s.write_all(b"hello world").await?;
   |           ^^^^^^^^^ multiple `write_all` found
   |
   = note: candidate #1 is defined in an impl of the trait `std::io::Write` for the type `std::io::Sink`
   = help: to disambiguate the method call, write `std::io::Write::write_all(s, b"hello world")` instead
   = note: candidate #2 is defined in an impl of the trait `async_std::io::write::Write` for the type `_`
   = help: to disambiguate the method call, write `async_std::io::write::Write::write_all(s, b"hello world")` instead

Having types that implement both synchronous and asynchronous traits at the same time is never a convenience, I think, and can only be a nuisance like in this example.

cc @yoshuawuyts
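Until the types are split, the ambiguity can be worked around with fully qualified syntax, shown here with std alone (the async-std call would need the analogous path):

```rust
use std::io::Write; // brings flush() for std::io::Sink into scope

fn main() -> std::io::Result<()> {
    let mut s = std::io::sink();
    // Fully qualified call: naming the trait explicitly means a second
    // `write_all` from another trait in scope cannot make it ambiguous.
    std::io::Write::write_all(&mut s, b"hello world")?;
    s.flush()
}
```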

Add BufWriter

There is already a BufReader, so people would expect to have a BufWriter too. A lazy implementation is to simply wrap BufWriter from futures-0.3.

Verify documentation links on Travis

Let's use cargo-deadlinks with the following command on Travis:

cargo deadlinks --check-http

This currently fails with some errors which are due to re-exports from std:

Found invalid urls in /home/stjepang/work/async-std/target/doc/async_std/io/type.Result.html:
        Linked file at path /home/stjepang/work/async-std/target/doc/async_std/result/enum.Result.html does not exist!
Found invalid urls in /home/stjepang/work/async-std/target/doc/async_std/io/struct.Error.html:
        Linked file at path /home/stjepang/work/async-std/target/doc/std/io/struct.Error.html does not exist!
        Linked file at path /home/stjepang/work/async-std/target/doc/std/io/enum.ErrorKind.html does not exist!
        Linked file at path /home/stjepang/work/async-std/target/doc/async_std/ffi/struct.NulError.html does not exist!

The way we resolve these errors is by writing shim docs for re-exports from std, similarly to how we did that here:

if #[cfg(feature = "docs")] {

The idea is that under the docs feature flag we generate "fake" docs linking to async-std's types, but otherwise re-export real types from std.
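The shape of such a shim, sketched with plain cfg attributes (the docs feature name follows the issue; the stand-in body is illustrative only):

```rust
// Under the `docs` feature, define a stand-in type so rustdoc generates
// local pages (and thus local links); otherwise re-export the real std type.
#[cfg(feature = "docs")]
pub struct Error {
    _private: (),
}

#[cfg(not(feature = "docs"))]
pub use std::io::Error;

fn main() {
    // In a normal (non-docs) build, Error is exactly std::io::Error.
    let e = Error::new(std::io::ErrorKind::Other, "example");
    assert_eq!(e.kind(), std::io::ErrorKind::Other);
}
```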

lines example in docs doesn't work

In https://docs.rs/async-std/0.99.3/async_std/io/trait.BufRead.html#examples-2, this example is provided:

use async_std::fs::File;
use async_std::io::BufReader;
use async_std::prelude::*;

let file = File::open("a.txt").await?;
let mut lines = BufReader::new(file).lines();
let mut count = 0;

for line in lines.next().await {
    line?;
    count += 1;
}

I used it in this full program:

#![feature(async_await)]

use std::env::args;

use async_std::fs::File;
use async_std::io::{self, BufReader};
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    let path = args().nth(1).expect("missing path argument");

    let mut count = 0u64;
    task::block_on(async {
        //jlet file = File::open(&path).await?;
        //let mut lines = BufReader::new(file).lines();
        
        let file = File::open(&path).await?;
        let mut lines = BufReader::new(file).lines();
        let mut count = 0;

        for line in lines.next().await {
            line?;
            count += 1;
        }
        println!("The file contains {} lines.", count);
        Ok(())
    })
}

However, running that counts 1 line for any file with >= 1 line that I run it on (the for loop iterates over the single Option returned by lines.next().await, so its body runs at most once). In contrast, this full program works correctly:

#![feature(async_await)]

use std::env::args;

use async_std::fs::File;
use async_std::io::{self, BufReader};
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    let path = args().nth(1).expect("missing path argument");

    let mut count = 0u64;
    task::block_on(async {
        let file = File::open(&path).await?;
        let mut lines = BufReader::new(file).lines();

        while let Some(line) = lines.next().await {
            line?;
            count += 1;
        }

        println!("The file contains {} lines.", count);
        Ok(())
    })
}

Add Flow API

Currently, we have a Stream API implementation. For async processing of incoming data, we could add a Flow API.

What is Flow API?

Use case 1

Imagine I have a stream like this:

let mut k = stream::cycle(vec![1, 2, 3]);

I want to dispatch events to different processing stages. Like I want to add all 1s, multiply all 2s etc.

There is no convenient way to do that except initially creating different streams for them.
This is what I call partition.

Use case 2

Then I want to merge these and create a unified stream.

Process things in this merged stream and continue doing it.
This is what I call merge and priority selection based merge.

Use case 3

I want to create fully replicated streams from a single stream.
That I call broadcast.

Benefits

  • This API would ease the pain of continuous async processing of events.
  • It would allow stream combinators without any extra overhead, backed by the built-in trust in this library.
  • It would allow putting further abstractions on top of network programming blocks (which is going to be super nice).
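As a synchronous analogy (plain std iterators; the stream versions and all naming here are hypothetical), the three use cases look roughly like this:

```rust
fn main() {
    let events = vec![1, 2, 3, 1, 2, 3];

    // Use case 1: partition. Route items to different processing stages,
    // e.g. "add all 1s" in one stage and "multiply the rest" in another.
    let (ones, rest): (Vec<i32>, Vec<i32>) =
        events.iter().copied().partition(|&x| x == 1);
    let summed: i32 = ones.iter().sum();
    let doubled: Vec<i32> = rest.iter().map(|x| x * 2).collect();

    // Use case 2: merge. Recombine the stages into one unified stream.
    let merged: Vec<i32> = std::iter::once(summed).chain(doubled).collect();

    // Use case 3: broadcast. Fully replicate one stream to many consumers.
    let copy_a = merged.clone();
    let copy_b = merged.clone();

    assert_eq!(summed, 2);
    assert_eq!(copy_a, copy_b);
}
```

The async versions are harder because a Stream is pulled lazily by one consumer, so partition/broadcast need buffering and wake-up coordination rather than eager collection.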

build.rs fails in dependencies for examples

Hey!
Thanks for this awesome initiative :)

I can build and run separate binaries that link to async-std as an extern crate (like the examples in the readme), but I can't run the examples from within async-std.

johannes@jm:~/dev/async-test % rustc -V
rustc 1.39.0-nightly (53df91a9b 2019-08-27)
johannes@jm:~/dev/async-test % cargo -V
cargo 1.39.0-nightly (3f700ec43 2019-08-19)
johannes@jm:~/dev/async-test % uname -a
FreeBSD jm 13.0-CURRENT FreeBSD 13.0-CURRENT r349834+a82ad980c917(dell-fix_iichid-evdev) DELL-NODEBUG  amd64

This is what I get

johannes@jm:~/dev/async-std % cargo run --example hello-world
   Compiling libnghttp2-sys v0.1.2
   Compiling openssl-sys v0.9.49
   Compiling backtrace-sys v0.1.31
   Compiling mime_guess v2.0.1
   Compiling mime v0.3.13
   Compiling tempdir v0.3.7
   Compiling proc-macro-hack v0.5.9
   Compiling rand_chacha v0.2.1
error: failed to run custom build command for `libnghttp2-sys v0.1.2`

Caused by:
  process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/libnghttp2-sys-4c6f3caedee97f80/build-script-build` (signal: 11, SIGSEGV: invalid memory reference)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `backtrace-sys v0.1.31`

Caused by:
  process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/backtrace-sys-78dbde0feafa0d65/build-script-build` (signal: 11, SIGSEGV: invalid memory reference)
--- stdout
cargo:rustc-cfg=rbt

warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `openssl-sys v0.9.49`

Caused by:
  process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/openssl-sys-7d3ff8c9464a6e09/build-script-main` (signal: 11, SIGSEGV: invalid memory reference)
warning: build failed, waiting for other jobs to finish...
error: build failed

Any idea what this might depend on?

Add possibility to configure the way tasks are spawned

The async_std::task::Builder and async_std::task::spawn methods assume that some kind of environment (i.e. a background thread pool) is present where tasks can be spawned.

async-std/src/task/pool.rs

Lines 172 to 192 in 532c73c

static ref QUEUE: Sender<Job> = {
    let (sender, receiver) = unbounded::<Job>();
    for _ in 0..num_cpus::get().max(1) {
        let receiver = receiver.clone();
        thread::Builder::new()
            .name("async-task-driver".to_string())
            .spawn(|| {
                TAG.with(|tag| {
                    for job in receiver {
                        tag.set(job.tag());
                        abort_on_panic(|| job.run());
                        tag.set(ptr::null());
                    }
                });
            })
            .expect("cannot start a thread driving tasks");
    }
    sender
};

Crates that provide some sort of hidden environment generally provide a way to configure how it works. Example of what I mean:

Similarly, I think async_std should provide some sort of set_task_spawner function that allows configuring how that works.

The use-case I have in mind is the browser environment, where you want to drive tasks by using spawn_local (which is implemented using setTimeout).
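A minimal std-only sketch of what such a set_task_spawner hook could look like (all names and signatures here are hypothetical, not an existing async-std API):

```rust
use std::sync::{Mutex, OnceLock};

type Task = Box<dyn FnOnce() + Send>;
type Spawner = Box<dyn Fn(Task) + Send + Sync>;

// Global, configurable spawner slot. If none is set, fall back to a
// default (here: a fresh thread; a real runtime would use its pool).
static SPAWNER: OnceLock<Mutex<Option<Spawner>>> = OnceLock::new();

fn set_task_spawner(s: Spawner) {
    *SPAWNER.get_or_init(|| Mutex::new(None)).lock().unwrap() = Some(s);
}

fn spawn(task: Task) {
    let slot = SPAWNER.get_or_init(|| Mutex::new(None)).lock().unwrap();
    match &*slot {
        Some(s) => s(task),
        None => {
            std::thread::spawn(task);
        }
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel::<&'static str>();
    // Install a custom spawner, here one that runs tasks inline; a browser
    // environment would instead hand the task to spawn_local/setTimeout.
    set_task_spawner(Box::new(|task| task()));
    spawn(Box::new(move || tx.send("ran").unwrap()));
    assert_eq!(rx.recv().unwrap(), "ran");
}
```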

Book: explain statically checked unreachable

The current version of the Handling Disconnects section of the book states:

In the shutdown case we use match void {} as a statically-checked unreachable!().

Please explain the significance of this statement. In what way is this a statically checked version of unreachable!?

This is the first time I am encountering this pattern and I am confused by the statement.
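For context, a self-contained illustration of the pattern: Void is an enum with no variants, so no value of it can ever exist, and a match on it needs no arms; the compiler, rather than a runtime panic, guarantees that the branch is unreachable:

```rust
// An empty enum: zero variants means no value can ever be constructed.
enum Void {}

fn handle(msg: Result<&str, Void>) -> &str {
    match msg {
        Ok(s) => s,
        // `match void {}` compiles only because Void has zero variants.
        // If Void ever gained a variant, this would become a compile error,
        // whereas unreachable!() would silently remain and panic at runtime.
        Err(void) => match void {},
    }
}

fn main() {
    assert_eq!(handle(Ok("hello")), "hello");
}
```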

Book: naming convention for loop functions

There are quite a few functions in the book that run a while loop inside and are meant to be passed to task::spawn, e.g. server, client, client_writer. I think it makes sense to extend those names to explicitly set the expectation of a loop inside, like server_loop, client_loop, client_writer_loop. I'd be happy to provide a PR if that sounds like a helpful change.

Another thought about naming: the client function is technically not about a "client", it's about a "connection". Maybe it should be connection_loop? That would make disconnect handling easier to read.

Change the timeout API?

Timeouts are confusing. @spacejam recently wrote an example that contains the following piece of code:

stream
    .read_to_end(&mut buf)
    .timeout(Duration::from_secs(5))
    .await?;

The problem here is that we need two ?s after .await and it's easy to forget that.

I think the confusing part is in that the .timeout() combinator looks like it just transforms the future in a similar vein to .map() or .and_then(), but it really does not!

Instead, .timeout() bubbles the result of the future so that its type becomes Result<Result<_, io::Error>, TimeoutError>.

Perhaps it would be less confusing if timeout() was a free-standing function in the future module rather than a method on the time::Timeout extension trait?

future::timeout(
    stream.read_to_end(&mut buf),
    Duration::from_secs(5),
)
.await??;

This timeout() function would stand alongside ready(), pending(), and maybe some other convenience functions in the future module.

Here's another idea. What if we had an io::timeout() function that resolves to a Result<_, io::Error> instead of bubbling the results? Then we could write the following with a single ?:

io::timeout(
    stream.read_to_end(&mut buf),
    Duration::from_secs(5),
)
.await?;

Now it's also more obvious that we're setting a timeout for an I/O operation and not for an arbitrary future.

In addition to that, perhaps we could delete the whole time module? I'm not really a fan of it because it looks nothing like std::time and I generally dislike extension traits like Timeout.

Note that we already have a time-related function, task::sleep(), which is not placed in the time module so we probably shouldn't worry about grouping everything time-related into the time module. I think it's okay if we have io::timeout() and future::timeout().

Finally, here's a really conservative proposal. Let's remove the whole time module and only have io::timeout(). A more generic function for timeouts can then be left for later design work. I think I prefer this option the most.
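The ergonomic difference boils down to flattening the nested result. A synchronous sketch of what the proposed io::timeout() would do to its result type (TimeoutError stands in for the real error type):

```rust
use std::io;

// future::timeout style yields Result<Result<T, io::Error>, TimeoutError>,
// hence two `?`s; io::timeout style folds the timeout case into an
// io::Error of kind TimedOut, so one `?` suffices.
struct TimeoutError;

fn flatten<T>(nested: Result<io::Result<T>, TimeoutError>) -> io::Result<T> {
    match nested {
        Ok(inner) => inner,
        Err(TimeoutError) => {
            Err(io::Error::new(io::ErrorKind::TimedOut, "future has timed out"))
        }
    }
}

fn main() {
    let ok: io::Result<u32> = flatten(Ok(Ok(42)));
    assert_eq!(ok.unwrap(), 42);

    let timed_out = flatten::<u32>(Err(TimeoutError));
    assert_eq!(timed_out.unwrap_err().kind(), io::ErrorKind::TimedOut);
}
```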

Provide a cargo-generate template

cargo-generate is great, we should ship a template for an async-std app with:

  • a small example in src/main.rs
  • an example for a test

Joining futures

Hello, sorry for the ignorance, but I would like to know if this crate has available some
macro / mechanism to join futures so they can ran concurrently like futures crate has.

#![feature(pin, async_await, futures_api)]
use async_std::io;
use async_std::task;
use serde_derive::Deserialize;

#[macro_use]
extern crate futures;

#[derive(Deserialize, Debug)]
struct Post {
    #[serde(rename = "userId")]
    user_id : usize,
    id: usize,
    title: String,
    completed: bool
}

fn main() {
    task::block_on(async {
        let post_fut  = surf::get("https://jsonplaceholder.typicode.com/todos/1").recv_json::<Post>();
        let post2_fut = surf::get("https://jsonplaceholder.typicode.com/todos/2").recv_json::<Post>();
        let (result1, result2 ) = join!(post_fut, post2_fut);
        println!("{:?}", result1.unwrap());
        println!("{:?}", result2.unwrap());
    });
}

Book chapter 1.2 futures::future::Future description

First, thanks a lot for this great library and it's accompanying documentation!

This description of std::future::Future form the book sounds not quite correct:

In some sense, the std::future::Future can be seen as a minimal subset of futures::future::Future

https://book.async.rs/overview/std-and-library-futures.html

Actually both traits are the same. It's just a reexport:

https://github.com/rust-lang-nursery/futures-rs/blob/cde791c00b8b9c4fd14a594855038a1bc4b6323e/futures-core/src/future/mod.rs#L7

TcpListener::bind, etc. may block

TCP/UDP listeners and streams take A: std::net::ToSocketAddrs. Some ToSocketAddrs impls will resolve domain names. However, ToSocketAddrs is not futures-aware, so DNS lookups are synchronous.

Possible solutions:

  • Spawn a blocking task a la filesystem operations (simpler)
  • Replace ToSocketAddrs with an async version (probably more efficient)
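The first option can be sketched with plain std threads (a real implementation would use the runtime's blocking pool and return a future; names here are hypothetical):

```rust
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::thread;

// Run the potentially blocking resolution off the caller's thread, so an
// async caller could await completion instead of blocking on DNS.
fn resolve_in_background(addr: String) -> thread::JoinHandle<io::Result<Vec<SocketAddr>>> {
    thread::spawn(move || addr.to_socket_addrs().map(|iter| iter.collect()))
}

fn main() {
    // "127.0.0.1:8080" resolves without a DNS lookup; a hostname like
    // "example.com:80" would exercise the blocking resolver path.
    let handle = resolve_in_background("127.0.0.1:8080".to_string());
    let addrs = handle.join().unwrap().unwrap();
    assert_eq!(addrs[0].port(), 8080);
}
```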

Make `blocking` public?

Hi! I was using async-std in my cacache library. One of the things that I'm trying to do is implement AsyncWrite for it, but it turns out I'm using a tmpfile library that does sync i/o. Because of that, I pretty much copy-pasted the AsyncWrite impl for async-std's File, and it turns out with the latest version, task::blocking is now private, so I can't just... do that. (To clarify, I was using async-std pre-release, when I needed to use async-pool for this, and I just started porting the code over tonight when I ran into this).

For the sake of compatibility, it would be nice to have this available. My code that's doing this is over here, in case there turns out to be a Better Way™ to do what I'm trying to do that hopefully doesn't involve reimplementing tmpfile logic: https://github.com/zkat/cacache-rs/blob/zkat/async/src/content/write.rs#L147-L256

Cheers!

Blocking pool doesn't have backpressure on OSX

Long-running blocking requests panic the blocking thread pool, because the maximum thread count on OSX is not 10_000 but 4096. The current method panics.

Solution: make the maximum thread count variable, based on errors coming up from the thread pool while spawning dynamic threads.

Can't use async-std on beta channel

When I add async-std as a dependency and try to compile with the beta channel I get:

error[E0554]: `#![feature]` may not be used on the beta release channel
  --> /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/async-std-0.99.3/src/lib.rs:30:1
   |
30 | #![feature(async_await)]
   | ^^^^^^^^^^^^^^^^^^^^^^^^

Any plan to port std::sync::mpsc or other channel types?

Hi,

I'm trying to replace tokio with async-std in my own project and it's truly amazing. However, there are no channel equivalents like std::sync::mpsc in async-std, so I have to use the futures crate's version.

I think it would be very nice to have mpsc, oneshot, etc. from futures crate re-exported in this crate's namespace for consistency and convenience. Any plan for that?

Fix doc warnings

We should make a pass over the docs soon, making sure they are free of warnings.

Some typos and improvements for the book

Collecting a few of them here before making a single commit with all fixes:

  • In general, Title Case is not followed consistently for titles.

  • std::future and futures-rs

    […] you link those in.. Both uses […]

    There are two periods when there should be either one or three.

  • Stability and SemVer

    […] we introducece functionality […]

    Typo for "introduce".

    […] in which case we give at least 3 month of ahead notice.

    This sounds a bit off to me. Maybe "we will give a notice at least 3 months ahead" is better.

  • Futures

    […] a very simplified view suffices for us:

    The list that follows starts its items with a lowercase letter. However, the list immediately below starts them with an uppercase letter. This is a bit distracting and not consistent. Perhaps a full sentence is more appropriate, such as "Computation is a sequence of composable operations which can branch based on a decision, and either run to succession and yield a result, or they can yield an error".

    […] and how to react on potential events the... well... Future

    Probably something like "and how to react on potential events in the… well, Future" is better.

    I noticed here that code blocks are not syntax-highlighted. Is there a reason for this?

    When this function is called, it will produce a Future<Output=String>

    That's not the case though, is it? The function is async fn ... -> Result<String, io::Error>, not async fn ... -> String.

    […] a value available sometime later

    Should that be "available some time later" or "some later time"?

    *we will introduce you to tasks, which we need to actually run Futures

    A bit earlier it was said that calling poll repeatedly was enough to drive a future to completion. So is "need" the right word here?

  • Tasks

    Now that we know what Futures are, we now want to run them!

    "Now" is repeated too soon. Maybe "Now that we know what Futures are, we want to run them!" works better.

    […] task can also has a name and an ID, just like a thread

    Task can also have a name.

    The carry desirable metadata for debugging

    They carry.

    […] task api handles […]

    task API.

    […] mix well with they concurrent execution […]

    with the concurrent.

    Result<T,E>

    Missing space after the comma.

Add Future::join

In the 2016 futures announcement post, the join combinator was shown as something that would make choosing between several futures easy.

In rust-lang/futures-rs#1215 and beyond this was changed to several methods: join, join1, join2, join3, and join!, try_join macros (proc macros since 0.3.0-alpha.18 so statements can be written inline).

Future::join

It still seems incredibly useful to be able to join multiple futures together, and having a single combinator to do that seems like the simplest API, even if resulting code might not look completely symmetrical. I propose we add Future::join:

use async_std::future;

let a = future::ready(1);
let b = future::ready(2);
let pair = a.join(b);

assert_eq!(pair.await, (1, 2));

Future::try_join

The futures-preview library also exposes a try_join method. This is useful when you want to unwrap two results. Internally it uses TryFuture as a reference, which means this method should only exist on futures where Output = Result<T, E>, and I'm not entirely sure if that's feasible. However if it is it might be convenient to also expose:

use async_std::future;

let a = future::ready(Ok::<i32, i32>(1));
let b = future::ready(Ok::<i32, i32>(2));
let pair = a.try_join(b);

assert_eq!(pair.await, Ok((1, 2)));

Future::join_all

The third join combinator present is Future::join_all. The docs don't make a big sell on them (inefficient, set can't be modified after polling started, prefer futures_unordered), but it's probably still worth mentioning. I don't think we should add this combinator, but instead point people to use fold instead:

don't do this

use async_std::future::join_all;

async fn foo(i: u32) -> u32 { i }
let futures = vec![foo(1), foo(2), foo(3)];

assert_eq!(join_all(futures).await, [1, 2, 3]);

do this instead

let futures = vec![foo(1), foo(2), foo(3)];
let futures = futures.fold(|p, n| p.join(n));
assert_eq!(futures.await, [1, 2, 3]);

note: not tested this, but in general I don't think we need to worry about this case too much as handling the unordered case seems much more important and would cover this too.

[tracking] streams

With #125 out, it's probably worth looking at which other parts of std::iter we can port to async_std::stream. This issue is intended to track what's left for us to port.

Missing free functions

  • from_fn
  • repeat_with
  • successors

Missing traits

  • DoubleEndedStream
  • ExactSizeStream
  • Extend
  • FusedStream
  • Product
  • Sum

Missing stream methods

  • Stream::all
  • Stream::any
  • Stream::by_ref
  • Stream::chain
  • Stream::cloned
  • Stream::cmp
  • Stream::collect
  • Stream::copied
  • Stream::count
  • Stream::cycle
  • Stream::enumerate
  • Stream::eq
  • Stream::filter
  • Stream::filter_map
  • Stream::find
  • Stream::find_map
  • Stream::flat_map
  • Stream::flatten
  • Stream::fold
  • Stream::for_each
  • Stream::fuse
  • Stream::ge
  • Stream::gt
  • Stream::inspect
  • Stream::last
  • Stream::le
  • Stream::lt
  • Stream::map
  • Stream::max
  • Stream::max_by
  • Stream::max_by_key
  • Stream::min
  • Stream::min_by
  • Stream::min_by_key
  • Stream::ne
  • Stream::nth
  • Stream::partial_cmp
  • Stream::partition
  • Stream::peekable -> wip #366
  • Stream::position
  • Stream::product
  • Stream::rev
  • Stream::rposition
  • Stream::scan
  • Stream::size_hint
  • Stream::skip
  • Stream::skip_while
  • Stream::step_by
  • Stream::sum
  • Stream::take
  • Stream::take_while
  • Stream::try_fold
  • Stream::try_for_each
  • Stream::unzip
  • Stream::zip

Missing IntoStream impls

Currently not possible. See #129 (comment)

Missing FromStream impls

  • FromStream<()> for ()
  • FromStream<char> for String
  • FromStream<String> for String
  • FromStream<&'a char> for String
  • FromStream<&'a str> for String
  • FromStream<T> for Cow<'a, [T]> where T: Clone
  • FromStream<A> for Box<[A]>
  • FromStream<A> for VecDeque<A>
  • FromStream<Result<A, E>> for Result<V, E> where V: FromStream<A>
  • FromStream<Option<A>> for Option<V> where V: FromStream<A>
  • FromStream<(K, V)> for BTreeMap<K, V> where K: Ord
  • FromStream<(K, V)> for HashMap<K, V, S> where K: Eq + Hash, S: BuildHasher + Default
  • FromStream<T> for BinaryHeap<T> where T: Ord
  • FromStream<T> for BTreeSet<T> where T: Ord
  • FromStream<T> for LinkedList<T>
  • FromStream<T> for Vec<T>
  • FromStream<T> for HashSet<T, S> where T: Eq + Hash, S: BuildHasher + Default

DoubleEndedStream

  • DoubleEndedStream::poll_next_back
  • DoubleEndedStream::next_back
  • DoubleEndedStream::nth_back
  • DoubleEndedStream::rfind
  • DoubleEndedStream::rfold
  • DoubleEndedStream::try_rfold
