mirror of
https://github.com/rust-lang/book.git
synced 2026-05-16 06:50:41 -04:00
Ch. 17: integrate a number of the outstanding review comments
Bonus: fix some style guide issues, too!

Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@gmail.com>
Co-authored-by: James Munns <james@onevariable.com>
Co-authored-by: Tim McNamara <paperless@timmcnamara.co.nz>
@@ -104,7 +104,7 @@
 - [Async and Await](ch17-00-async-await.md)
 - [Futures and the Async Syntax](ch17-01-futures-and-syntax.md)
 - [Concurrency With Async](ch17-02-concurrency-with-async.md)
-- [Working With More Than Two Futures](ch17-03-more-futures.md)
+- [Working With Any Number of Futures](ch17-03-more-futures.md)
 - [Streams](ch17-04-streams.md)
 - [Digging Into the Traits for Async](ch17-05-traits-for-async.md)
 - [Futures, Tasks, and Threads](ch17-06-futures-tasks-threads.md)
@@ -78,9 +78,9 @@ returned a `Future<Output = ()>`.
 Then Rust warned us that we did not do anything with the future. This is because
 futures are *lazy*: they don’t do anything until you ask them to with `await`.
 This should remind you of our discussion of iterators [back in Chapter
-13][iterators-lazy]. Iterators do nothing unless you call their `.next()`
-method—whether directly, or using `for` loops or methods like `.map()` which use
-`.next()` under the hood.
+13][iterators-lazy]. Iterators do nothing unless you call their `next`
+method—whether directly, or using `for` loops or methods like `map` which use
+`next` under the hood.
 
 With futures, the same basic idea applies: they do nothing unless you explicitly
 ask them to. This laziness allows Rust to avoid running async code until it is
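Both versions of the paragraph above make the same point: iterators do no work until `next` is called. A quick standalone sketch (not from the book) that makes the laziness visible:

```rust
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);
    // `map` does no work yet; it just builds a new lazy iterator.
    let iter = [1, 2, 3].iter().map(|x| {
        calls.set(calls.get() + 1); // runs only when `next` is called
        x * 2
    });

    // The closure has not run at all so far.
    assert_eq!(calls.get(), 0);

    // Driving the iterator (here via `collect`, which calls `next` under the
    // hood) is what actually does the work.
    let doubled: Vec<i32> = iter.collect();
    assert_eq!(doubled, vec![2, 4, 6]);
    assert_eq!(calls.get(), 3);
    println!("map closure ran {} times", calls.get());
}
```

Futures behave analogously: the "work" only happens when something drives them with `await`.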
@@ -237,27 +237,27 @@ enum MyAsyncStateMachine {
 
 Writing that out by hand would be tedious and error-prone, especially when
 making changes to code later. Instead, the Rust compiler creates and manages the
-state machine data structures for async code automatically.
+state machine data structures for async code automatically. If you’re wondering:
+yep, the normal borrowing and ownership rules around data structures all apply.
+Happily, the compiler also handles checking those for us, and has good error
+messages. We will work through a few of those later in the chapter!
 
-If you’re wondering: yep, the normal borrowing and ownership rules around data
-structures all apply. Happily, the compiler also handles checking those for us,
-and has good error messages. We will work through a few of those later in the
-chapter!
-
 <!--
 TODO: this part needs to be rewritten to account for moving the content out to
 a later part of the book.
 -->
 Ultimately, something has to execute that state machine. That something is a
 runtime. This is why you may sometimes come across references to *executors*
 when looking into runtimes: an executor is the part of a runtime responsible for
 executing the async code.
 
 Now we can understand why the compiler stopped us from making `main` itself an
 async function in Listing 17-3. If `main` were an async function, something else
-would need to call `poll()` on whatever `main` returned, but main is the
-starting point for the program! Instead, we use the `trpl::run` function, which
-sets up a runtime and polls the `Future` returned by `hello` until it returns
-`Ready`.
+would need to manage the state machine for whatever future `main` returned, but
+main is the starting point for the program! Instead, we use the `trpl::run`
+function, which sets up a runtime and polls the `Future` returned by `hello`
+until it returns `Ready`.
 
-> Note: We skipped over most of the details of how the `Future` trait works so
-> far. We will come back to some of those later in the chapter!
+> Note: some runtimes provide macros to make it so you *can* write an async main
+> function. Those macros rewrite `async fn main() { ... }` to be a normal `fn
+> main` which does the same thing we did by hand in Listing 17-TODO: call a
+> function which runs a future to completion the way `trpl::run` does.
 
 Now that you know the basics of working with futures, we can dig into more of
 the things we can *do* with async.
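The runtime/executor idea above can be sketched with a toy `run` function built only on the standard library. This is an illustrative sketch, not `trpl::run` itself: it busy-polls with a no-op waker, where a real runtime would park the thread until the waker fires.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing; good enough for a toy executor that polls in a loop.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { raw() }
    fn no_op(_: *const ()) {}
    fn raw() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

// Poll the future to completion, the way a `run`/`block_on` style function does.
fn run<F: Future>(future: F) -> F::Output {
    let mut future = pin!(future);
    let waker = noop_waker();
    let mut context = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(output) = future.as_mut().poll(&mut context) {
            return output;
        }
    }
}

async fn hello() -> String {
    String::from("Hello, async!")
}

fn main() {
    // `fn main` cannot be `async`; it hands the future to the executor instead.
    println!("{}", run(hello()));
}
```

The `hello` function and the spin loop are assumptions made for the sketch; the point is only that *something* outside the future must call `poll` repeatedly.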
@@ -103,14 +103,19 @@ with different syntax: using `.await` instead of calling `join` on the join
 handle, and awaiting the `sleep` calls.
 
 The bigger difference is that we did not need to spawn another operating system
-thread to do this. In fact, we do not even a task here. Given that async blocks
-compile to anonymous futures, we can put each loop in an async block and have
-the runtime run them both to completion using `trpl::join`.
+thread to do this. In fact, we do not even need to spawn a task here. Because
+async blocks compile to anonymous futures, we can put each loop in an async
+block and have the runtime run them both to completion using the `trpl::join`
+function.
+
+<!--
+We were able to get concurrency for just the cost of a task.
+Tasks have much faster startup time and use much less memory than an OS thread.
+-->
 
 In Chapter 16, we showed how to use the `join` method on the `JoinHandle` type
 returned when you call `std::thread::spawn`. The `trpl::join` function is
 similar, but for futures. When you give it two futures, it produces a single new
 future whose output is a tuple with the output of each of the futures you passed
 in once *both* complete. Thus, in Listing 17-7, we use `trpl::join` to wait for
 both `fut1` and `fut2` to finish. We do *not* await `fut1` and `fut2`, but
 instead the new future produced by `trpl::join`. We ignore the output, because
 it is just a tuple with two unit values in it.

<Listing number="17-7" caption="Using `trpl::join` to await two anonymous futures" file-name="src/main.rs">
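The Chapter 16 pattern referred to above can be sketched with plain threads: the tuple of results that `trpl::join` would hand back is assembled by hand from the two `JoinHandle`s. The closures and values here are invented for illustration.

```rust
use std::thread;

fn main() {
    let handle1 = thread::spawn(|| {
        (1..=3).sum::<i32>() // some work on one thread
    });
    let handle2 = thread::spawn(|| {
        String::from("done") // different work (and output type) on another
    });

    // `trpl::join` produces a tuple of both outputs once *both* futures
    // finish; with threads we build that tuple ourselves from the handles.
    let results = (handle1.join().unwrap(), handle2.join().unwrap());
    assert_eq!(results, (6, String::from("done")));
    println!("{results:?}");
}
```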
@@ -147,7 +152,7 @@ what we saw with threads. That is because the `trpl::join` function is *fair*,
 meaning it checks each future equally often, alternating between them, and never
 lets one race ahead if the other is ready. With threads, the operating system
 decides which thread to check and how long to let it run. With async Rust, the
-runtime decides which future to check. (In practice, the details get complicated
+runtime decides which task to check. (In practice, the details get complicated
 because an async runtime might use operating system threads under the hood as
 part of how it manages concurrency, so guaranteeing fairness can be more work
 for a runtime—but it is still possible!) Runtimes do not have to guarantee
@@ -184,11 +189,11 @@ version of the API is only a little different from the thread-based version: it
 uses a mutable rather than an immutable receiver `rx`, and its `recv` method
 produces a future we need to await rather than producing the value directly. Now
 we can send messages from the sender to the receiver. Notice that we do not have
-to spawn a separate thread or even a task; we merely need to await the
-`rx.recv()` call.
+to spawn a separate thread or even a task; we merely need to await the `rx.recv`
+call.
 
-The synchronous `Receiver::recv()` method in `std::mpsc::channel` blocks until
-it receives a message. The `trpl::Receiver::recv()` method does not, because it
+The synchronous `Receiver::recv` method in `std::mpsc::channel` blocks until
+it receives a message. The `trpl::Receiver::recv` method does not, because it
 is async. Instead of blocking, it hands control back to the runtime until either
 a message is received or the send side of the channel closes. By contrast, we do
 not await the `send` call, because it does not block. It does not need to,
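The blocking behavior of the synchronous `Receiver::recv` can be seen with `std::sync::mpsc`; this sketch (not from the book) parks the receiving thread until a message arrives, which is exactly what an async `recv` avoids doing.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        tx.send(String::from("hi")).unwrap(); // `send` does not block here
    });

    // The synchronous `recv` blocks this whole thread until a message
    // arrives; an async `recv` would instead yield control to the runtime.
    let msg = rx.recv().unwrap();
    assert_eq!(msg, "hi");
    println!("got: {msg}");
}
```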
@@ -221,17 +226,17 @@ know how many messages are coming in. In the real world, though, we will
 generally be waiting on some *unknown* number of messages. In that case, we need
 to keep waiting until we determine that there are no more messages.
 
-In synchronous code, we might use a `for` loop to process a sequence of items
-like this, regardless of how many items are in the loop. However, Rust does not
-yet have a way to write a `for` loop over an *asynchronous* series of items.
-Instead, we need to use a new kind of loop we haven’t seen before, the `while
-let` conditional loop. A `while let` loop is the loop version of the `if let`
-construct we saw back in Chapter 6. The loop while continue executing as long as
-the pattern matches.
+In Listing 16-10, we used a `for` loop to process all the items received from a
+synchronous channel. However, Rust does not yet have a way to write a `for` loop
+over an *asynchronous* series of items. Instead, we need to use a new kind of
+loop we haven’t seen before, the `while let` conditional loop. A `while let`
+loop is the loop version of the `if let` construct we saw back in Chapter 6. The
+loop will continue executing as long as the pattern it specifies continues to
+match the value.
 
 <!-- TODO: update text in ch. 19 to account for our having introduced this. -->
 
-The `rx.recv()` call produces a `Future`, which we await. The runtime will pause
+The `rx.recv` call produces a `Future`, which we await. The runtime will pause
 the `Future` until it is ready. Once a message arrives, the future will resolve
 to `Some(message)`, as many times as a message arrives. When the channel closes,
 regardless of whether *any* messages have arrived, the future will instead
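For readers new to `while let`, a small synchronous sketch of the construct (invented for illustration): the body runs as long as the pattern keeps matching, and the loop ends on the first mismatch.

```rust
fn main() {
    let mut stack = vec![1, 2, 3];
    let mut seen = Vec::new();

    // The body keeps running as long as `pop` produces `Some(value)`;
    // the loop ends the first time the pattern fails to match (`None`).
    while let Some(value) = stack.pop() {
        seen.push(value);
    }

    assert_eq!(seen, vec![3, 2, 1]);
    assert!(stack.is_empty());
    println!("{seen:?}");
}
```

The async version works the same way, except that each `rx.recv().await` yields to the runtime instead of blocking.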
@@ -247,8 +252,9 @@ again, so the runtime pauses it again until another message arrives.
 The code now successfully sends and receives all of the messages. Unfortunately,
 there are still a couple problems. For one thing, the messages do not arrive at
 half-second intervals. They arrive all at once, two seconds (2,000 milliseconds)
-after we start the program. For another, this program also never stops! You will
-need to shut it down using <span class="keystroke">ctrl-c</span>.
+after we start the program. For another, this program also never exits! Instead,
+it waits forever for new messages. You will need to shut it down using <span
+class="keystroke">ctrl-c</span>.
 
 Let’s start by understanding why the messages all come in at once after the full
 delay, rather than coming in with delays in between each one. Within a given
@@ -263,7 +269,10 @@ let` loop get to go through any of the `.await` points on the `recv` calls.
 To get the behavior we want, where the sleep delay happens between receiving
 each message, we need to put the `tx` and `rx` operations in their own async
 blocks. Then the runtime can execute each of them separately using `trpl::join`,
-just like in the counting example.
+just like in the counting example. Once again, we await the result of calling
+`trpl::join`, not the individual futures. If we awaited the individual futures
+in sequence, we would just end up back in a sequential flow—exactly what we are
+trying *not* to do.
 
 <!-- We cannot test this one because it never stops! -->
@@ -278,28 +287,28 @@ just like in the counting example.
 With the updated code in Listing 17-10, the messages get printed at
 500-millisecond intervals, rather than all in a rush after two seconds.
 
-The program still never stops, because of the way the `while let` loop interacts
-with `trpl::join`:
+The program still never exits, though, because of the way the `while let` loop
+interacts with `trpl::join`:
 
 * The future returned from `trpl::join` only completes once *both* futures
   passed to it have completed.
 * The `tx` future completes once it finishes sleeping after sending the last
   message in `vals`.
 * The `rx` future will not complete until the `while let` loop ends.
-* The `while let` loop will not end until `rx.recv().await` produces `None`.
-* The `rx.recv().await` will only return `None` once the other end of the
-  channel is closed.
-* The channel will only close if we call `rx.close()` or when the sender side,
+* The `while let` loop will not end until awaiting `rx.recv` produces `None`.
+* Awaiting `rx.recv` will only return `None` once the other end of the channel
+  is closed.
+* The channel will only close if we call `rx.close` or when the sender side,
   `tx`, is dropped.
-* We do not call `rx.close()` anywhere, and `tx` will not be dropped until the
-  async block ends.
-* The block cannot end because it is blocked on `trpl::join` completing,
-  which takes us back to the top of this list!
+* We do not call `rx.close` anywhere, and `tx` will not be dropped until the
+  outermost async block passed to `trpl::run` ends.
+* The block cannot end because it is blocked on `trpl::join` completing, which
+  takes us back to the top of this list!
 
-We could manually close `rx` by calling `rx.close()` somewhere, but that does
-not make much sense. Stopping after handling some arbitrary number of messages
-would make the program shut down, but we could miss messages. We need some other
-way to make sure that `tx` gets dropped *before* the end of the function.
+We could manually close `rx` by calling `rx.close` somewhere, but that does not
+make much sense. Stopping after handling some arbitrary number of messages would
+make the program shut down, but we could miss messages. We need some other way
+to make sure that `tx` gets dropped *before* the end of the function.
 
 Right now, the async block where we send the messages only borrows `tx`, but if
 we could move `tx` into that async block, it would be dropped once that block
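The drop-the-sender idea transfers directly to the synchronous channels of Chapter 16. In this illustrative sketch, moving `tx` into the spawned thread means it is dropped when the thread's closure finishes, which closes the channel and lets the receive loop end instead of waiting forever.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // `move` transfers ownership of `tx` into the thread. When the closure
    // finishes, `tx` is dropped and the channel closes.
    thread::spawn(move || {
        for text in ["hi", "from", "the", "thread"] {
            tx.send(String::from(text)).unwrap();
        }
        // `tx` is dropped here.
    });

    // Because the sender is eventually dropped, this loop terminates;
    // if `tx` outlived the loop, `rx.iter()` would block forever.
    let received: Vec<String> = rx.iter().collect();
    assert_eq!(received.len(), 4);
    println!("{received:?}");
}
```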
@@ -324,9 +333,10 @@ This async channel is also a multiple-producer channel, so we can call `clone`
 on `tx` if we want to send messages from multiple futures. In Listing 17-12, we
 clone `tx`, creating `tx1` outside the first async block. We move `tx1` into
 that block just as we did before with `tx`. Then, later, we move the original
-`tx` into a *new* async block, where we send more messages on a slightly
-slower delay. (We happen to put this new async block after the async block
-for receiving messages, but it could go before it just as well.)
+`tx` into a *new* async block, where we send more messages on a slightly slower
+delay. We happen to put this new async block after the async block for receiving
+messages, but it could go before it just as well. The key is the order the
+futures are awaited in, not the order they are created in.
 
 Both of the async blocks for sending messages need to be `async move` blocks, so
 that both `tx` and `tx1` get dropped when those blocks finish. Otherwise we will
@@ -1,11 +1,12 @@
-## Working With More Than Two Futures
+## Working With Any Number of Futures
 
 When we switched from using two futures to three in the previous section, we
 also had to switch from using `join` to using `join3`. It would be annoying to
-do this every time we changed our code. Happily, we have a macro form of `join`
-to which we can pass an arbitrary number of arguments. It also handles awaiting
-the futures itself. Thus, we could rewrite the code from Listing 17-12 to use
-`join!` instead of `join3`, as in Listing 17-13:
+have to call a different function every time we changed the number of futures we
+wanted to join. Happily, we have a macro form of `join` to which we can pass an
+arbitrary number of arguments. It also handles awaiting the futures itself.
+Thus, we could rewrite the code from Listing 17-12 to use `join!` instead of
+`join3`, as in Listing 17-13:

<Listing number="17-13" caption="Using `join!` to wait for multiple futures" file-name="src/main.rs">
@@ -96,8 +97,8 @@ implement the `Future` trait.
 > able to work with a dynamic collection of futures where we do not know what
 > they will all be until runtime.
 
-We start by wrapping each of the futures in the `vec!` in a `Box::new()`, as
-shown in Listing 17-15.
+We start by wrapping each of the futures in the `vec!` in a `Box::new`, as shown
+in Listing 17-15.

<Listing number="17-15" caption="Trying to use `Box::new` to align the types of the futures in a `Vec`" file-name="src/main.rs">
@@ -241,7 +242,7 @@ There is a bit more we can explore here. For one thing, using `Pin<Box<T>>`
 comes with a small amount of extra overhead from putting these futures on the
 heap with `Box`—and we are only doing that to get the types to line up. We don’t
 actually *need* the heap allocation, after all: these futures are local to this
-particular function. As noted above, `Pin` is itself a smart pointer, so we can
+particular function. As noted above, `Pin` is itself a wrapper type, so we can
 get the benefit of having a single type in the `Vec`—the original reason we
 reached for `Box`—without doing a heap allocation. We can use `Pin` directly
 with each future, using the `std::pin::pin` macro.
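A hedged sketch of the `std::pin::pin!` approach, using made-up trivial futures: each future is pinned on the stack, and the pinned mutable references share the single type `Pin<&mut dyn Future<Output = u32>>`, so no `Box` is needed. Since these futures have no await points, one poll with a no-op waker completes each; a real program would hand them to something like `join_all` instead.

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing, for polling by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { raw() }
    fn no_op(_: *const ()) {}
    fn raw() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    // Pin each future on the stack: no `Box`, so no heap allocation.
    let fut1 = pin!(async { 1u32 });
    let fut2 = pin!(async { 2u32 });
    let fut3 = pin!(async { 3u32 });

    // The pinned references all coerce to one trait-object type,
    // so they fit in a single `Vec`.
    let futures: Vec<Pin<&mut dyn Future<Output = u32>>> = vec![fut1, fut2, fut3];

    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut outputs = Vec::new();
    for mut future in futures {
        // These trivial futures never return `Pending`, so one poll suffices.
        if let Poll::Ready(value) = future.as_mut().poll(&mut cx) {
            outputs.push(value);
        }
    }
    assert_eq!(outputs, vec![1, 2, 3]);
    println!("{outputs:?}");
}
```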
@@ -260,11 +261,10 @@ references to the dynamic `Future` type, as in Listing 17-18.
 
 </Listing>
 
-There is one last issue to fix. We got this far by ignoring the fact that we
-might have different `Output` types. For example, in Listing 17-19, the
-anonymous future for `a` implements `Future<Output = u32>`, the anonymous future
-for `b` implements `Future<Output = &str>`, and the anonymous future for `c`
-implements `Future<Output = bool>`.
+We got this far by ignoring the fact that we might have different `Output`
+types. For example, in Listing 17-19, the anonymous future for `a` implements
+`Future<Output = u32>`, the anonymous future for `b` implements `Future<Output =
+&str>`, and the anonymous future for `c` implements `Future<Output = bool>`.

<Listing number="17-19" caption="Three futures with distinct types" file-name="src/main.rs">
@@ -277,8 +277,7 @@ implements `Future<Output = bool>`.
 We can use `trpl::join!` to await them, because it allows you to pass in
 multiple future types and produces a tuple of those types. We *cannot* use
 `trpl::join_all`, because it requires the futures passed in all to have the same
-type. (Remember, that error is what got us started on this adventure with
-`Pin`!)
+type. Remember, that error is what got us started on this adventure with `Pin`!
 
 This is a fundamental tradeoff: we can either deal with a dynamic number of
 futures with `join_all`, as long as they all have the same type, or we can deal
@@ -287,13 +286,6 @@ even if they have different types. This is the same as working with any other
 types in Rust, though. Futures are not special, even though we have some nice
 syntax for working with them, and that is a good thing.
 
-In practice, you will usually work directly with `async` and `.await`, and
-secondarily with functions and macros like `join` or `join_all`. You will only
-need to reach for `pin` now and again to use them with those APIs. `Pin` and
-`Unpin` are mostly important for building lower-level libraries, or when you are
-building a runtime itself, rather than for day to day Rust code. When you see
-them, though, now you will know what to do!
-
 ### Racing futures
 
 When we “join” futures with the `join` family of functions and macros, we
@@ -302,11 +294,16 @@ need *some* future from a set to finish before we move on—kind of like racing
 one future against another. This operation is often named `race` for exactly
 that reason.
 
-In Listing 17-20, we use `race` to run two futures, `slow` and `fast`, against
-each other. Each one prints a message when it starts running, pauses for some
-amount of time by calling and awaiting `sleep`, and then prints another message
-when it finishes. Then we pass both to `trpl::race` and wait for one of them to
-finish. (The outcome here won’t be too surprising: `fast` wins!)
+> Note: Under the hood, `race` is built on a more general function, `select`,
+> which you will encounter more often in real-world Rust code. A `select`
+> function can do a lot of things that the `trpl::race` function cannot, but it
+> also has some additional complexity that we can skip over for now.
+
+In Listing 17-20, we use `trpl::race` to run two futures, `slow` and `fast`,
+against each other. Each one prints a message when it starts running, pauses for
+some amount of time by calling and awaiting `sleep`, and then prints another
+message when it finishes. Then we pass both to `trpl::race` and wait for one of
+them to finish. (The outcome here won’t be too surprising: `fast` wins!)

<Listing number="17-20" caption="Using `race` to get the result of whichever future finishes first" file-name="src/main.rs">
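The race idea can be sketched with threads and a channel by taking only the first message that arrives. The names and delays here are invented, and the sleeps are spaced far apart so the outcome is predictable.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    for (name, delay_ms) in [("slow", 400u64), ("fast", 10u64)] {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(delay_ms));
            let _ = tx.send(name); // the loser's send is simply ignored
        });
    }

    // Taking only the first message is the essence of racing: whichever
    // worker finishes first wins, and we move on without waiting for the rest.
    let winner = rx.recv().unwrap();
    assert_eq!(winner, "fast");
    println!("winner: {winner}");
}
```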
@@ -507,12 +504,12 @@ future.
 
 Let’s implement this! To begin, let’s think about the API for `timeout`:
 
-- It needs to be an async function itself so we can await it.
-- Its first parameter should be a future to run. We can make it generic to allow
+* It needs to be an async function itself so we can await it.
+* Its first parameter should be a future to run. We can make it generic to allow
   it to work with any future.
-- Its second parameter will be the maximum time to wait. If we use a `Duration`,
+* Its second parameter will be the maximum time to wait. If we use a `Duration`,
   that will make it easy to pass along to `trpl::sleep`.
-- It should return a `Result`. If the future completes successfully, the
+* It should return a `Result`. If the future completes successfully, the
   `Result` will be `Ok` with the value produced by the future. If the timeout
   elapses first, the `Result` will be `Err` with the duration that the timeout
   waited for.
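The same API shape can be sketched synchronously with threads and `recv_timeout`. This is an analogue of the bulleted design (generic over the work, a `Duration` limit, a `Result` return), not the chapter's actual async `timeout` implementation.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Thread-based analogue of the `timeout` API described above: run `work` on a
// helper thread and wait at most `max_time` for its result.
fn timeout<T: Send + 'static>(
    work: impl FnOnce() -> T + Send + 'static,
    max_time: Duration,
) -> Result<T, Duration> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work());
    });
    // `recv_timeout` gives up after `max_time`, mirroring `Err(duration)`.
    rx.recv_timeout(max_time).map_err(|_| max_time)
}

fn main() {
    let quick = timeout(|| 42, Duration::from_secs(1));
    assert_eq!(quick, Ok(42));

    let slow = timeout(
        || {
            thread::sleep(Duration::from_millis(400));
            42
        },
        Duration::from_millis(10),
    );
    assert_eq!(slow, Err(Duration::from_millis(10)));
    println!("quick: {quick:?}, slow: {slow:?}");
}
```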
@@ -588,9 +585,25 @@ using smaller async building blocks. For example, you can use this same approach
 to combine timeouts with retries, and in turn use those with things like network
 calls—one of the examples from the beginning of the chapter!
 
-Over the last two sections, we have seen how to work with multiple futures at
-the same time. Up next, let’s look at how we can work with multiple futures in a
-sequence over time, with *streams*.
+In practice, you will usually work directly with `async` and `.await`, and
+secondarily with functions and macros like `join`, `join_all`, `race`, and so
+on. You will only need to reach for `pin` now and again to use them with those
+APIs.
+
+We have now seen a number of ways to work with multiple futures at the same
+time. Up next, we will look at how we can work with multiple futures in a
+sequence over time, with *streams*. Here are a couple more things you might want
+to consider first, though:
+
+* We used a `Vec` with `join_all` to wait for all of the futures in some group
+  to finish. How could you use a `Vec` to process a group of futures in
+  sequence, instead? What are the tradeoffs of doing that?
+
+* Take a look at the `futures::stream::FuturesUnordered` type from the `futures`
+  crate. How would using it be different from using a `Vec`? (Don’t worry about
+  the fact that it is from the `stream` part of the crate; it works just fine
+  with any collection of futures.)
 
 [collections]: ch08-01-vectors.html#using-an-enum-to-store-multiple-types
 [dyn]: ch12-03-improving-error-handling-and-modularity.html
@@ -129,7 +129,7 @@ avoid doing needless work.
 
 Let’s start by building a little stream of messages, similar to what we might
 see from a WebSocket or other real-time communication protocols. In Listing
-17-32, we create a function `get_messages()` which returns `impl Stream<Item =
+17-32, we create a function `get_messages` which returns `impl Stream<Item =
 String>`. For its implementation, we create an async channel, loop over the
 first ten letters of the English alphabet, and send them across the channel.
@@ -310,7 +310,7 @@ Finally, we loop over that combined stream instead of over `messages` (Listing
 
 At this point, neither `messages` nor `intervals` needs to be pinned or mutable,
 because both will be combined into the single `merged` stream. However, this
-call to `merge` does not type check! (Neither does the `next` call in the `while
+call to `merge` does not compile! (Neither does the `next` call in the `while
 let` loop, but we will come back to that after fixing this.) The two streams
 have different types. The `messages` stream has the type `Timeout<impl
 Stream<Item = String>>`, where `Timeout` is the type which implements `Stream`
@@ -51,10 +51,10 @@ work to do, so the caller will need to check again later. The `Ready` variant
 indicates that the `Future` has finished its work and the `T` value is
 available.
 
-> Note: With most futures, the caller should not call `poll()` again after the
+> Note: With most futures, the caller should not call `poll` again after the
 > future has returned `Ready`. Many futures will panic if polled again after
 > becoming ready! Futures which are safe to poll again will say so explicitly in
-> their documentation.
+> their documentation. This is similar to how `Iterator::next` behaves!
 
 Under the hood, when you call `.await`, Rust compiles that to code which calls
 `poll`, kind of (although not exactly <!-- TODO: describe `IntoFuture`? -->)
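A hand-written future makes the `Pending`/`Ready` contract concrete. This sketch (not from the book) polls a made-up `Countdown` future manually with a no-op waker, watching it report `Pending` twice before it finishes.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that reports `Pending` twice before finishing.
struct Countdown(u32);

impl Future for Countdown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            // A real future would arrange for `cx.waker()` to fire when
            // progress is possible; here we just ask to be polled again.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// A waker that does nothing, for polling by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { raw() }
    fn no_op(_: *const ()) {}
    fn raw() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut future = std::pin::pin!(Countdown(2));

    assert_eq!(future.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(future.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(future.as_mut().poll(&mut cx), Poll::Ready("done"));
    println!("polled to completion");
}
```

Per the note above, polling `Countdown` again after `Ready` would be a contract violation for many real futures, even though this toy one would not panic.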
@@ -97,9 +97,9 @@ on this future and work on other futures and check this one again later. That
 is one of the main jobs for a runtime.
 
 Recall our description (in the [Counting][counting] section) of waiting on
-`rx.recv()`. The `recv()` call returns a `Future`, and awaiting it polls it. In
-our initial discussion, we noted that a runtime will pause the future until it
-is ready with either `Some(message)` or `None` when the channel closes. With our
+`rx.recv`. The `recv` call returns a `Future`, and awaiting it polls it. In our
+initial discussion, we noted that a runtime will pause the future until it is
+ready with either `Some(message)` or `None` when the channel closes. With our
 deeper understanding of `Future` in place, and specifically `Future::poll`, we
 can see how that works. The runtime knows the future is not ready when it
 returns `Poll::Pending`. Conversely, the runtime knows the future is ready and
@@ -176,20 +176,25 @@ like this. When we specify the type of `self` like this, we are telling Rust
 what type `self` must be to call this method. These kinds of type annotations
 for `self` are similar to those for other function parameters, but with the
 restriction that the type annotation has to be the type on which the method is
-implemented, or a reference or smart pointer to that type. We will see more on
-this syntax in Chapter 18. For now, it is enough to know that if we want to poll
-a future (to check whether it is `Pending` or `Ready(Output)`), we need a
-mutable reference to the type, which is wrapped in a `Pin`.
+implemented, or a reference or smart pointer to that type, or a `Pin` wrapping a
+reference to that type. We will see more on this syntax in Chapter 18. For now,
+it is enough to know that if we want to poll a future (to check whether it is
+`Pending` or `Ready(Output)`), we need a mutable reference to the type, which is
+wrapped in a `Pin`.
 
-`Pin` is a wrapper type, much like the `Box`, `Rc`, and other smart pointer
-types we saw in Chapter 15. Unlike those, however, `Pin` only works with *other
-pointer types* like reference (`&` and `&mut`) and smart pointers (`Box`, `Rc`,
-and so on). To be precise, `Pin` works with types which implement the `Deref` or
-`DerefMut` traits, which we covered in Chapter 15. You can think of this
-restriction as equivalent to only working with pointers, though, since
-implementing `Deref` or `DerefMut` means your type behaves like a pointer type.
+`Pin` is a wrapper type. In some ways, it is like the `Box`, `Rc`, and other
+smart pointer types we saw in Chapter 15, which also wrap other types. Unlike
+those, however, `Pin` only works with *other pointer types* like references (`&`
+and `&mut`) and smart pointers (`Box`, `Rc`, and so on). To be precise, `Pin`
+works with types which implement the `Deref` or `DerefMut` traits, which we
+covered in Chapter 15. You can think of this restriction as equivalent to only
+working with pointers, though, since implementing `Deref` or `DerefMut` means
+your type behaves like a pointer type. `Pin` is also not a pointer itself, and
+it does not have any behavior of its own like the ref counting of `Rc` or `Arc`.
+It is purely a tool the compiler can use to uphold the relevant guarantees, by
+wrapping pointers in the type.
 
-Recalling that `.await` is implemented in terms of calls to `poll()`, this
+Recalling that `.await` is implemented in terms of calls to `poll`, this
 starts to explain the error message we saw above—but that was in terms of
 `Unpin`, not `Pin`. So what exactly are `Pin` and `Unpin`, how do they relate,
 and why does `Future` need `self` to be in a `Pin` type to call `poll`?
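The `Pin`-wraps-a-pointer idea in miniature (an illustrative sketch): `Pin::new` is only available when the pointee is `Unpin`, meaning pinning imposes no real restriction, while `Box::pin` is the usual way to pin heap data.

```rust
use std::pin::Pin;

fn main() {
    // `Pin` wraps a pointer type; `Pin::new` works because `i32: Unpin`.
    let mut n = 41_i32;
    let pinned: Pin<&mut i32> = Pin::new(&mut n);
    // For `Unpin` pointees we can even get the mutable reference back out.
    *pinned.get_mut() += 1;
    assert_eq!(n, 42);

    // `Box::pin` pins data on the heap; for `!Unpin` types (like many
    // compiler-generated futures) this is how they stay pinned.
    let boxed: Pin<Box<String>> = Box::pin(String::from("pinned"));
    assert_eq!(boxed.as_str(), "pinned");
    println!("ok");
}
```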
@@ -267,6 +272,10 @@ internal references, so they do not implement `Unpin`. They need to be pinned,
 and then we can pass the `Pin` type into the `Vec`, confident that the
 underlying data in the futures will *not* be moved.
 
+`Pin` and `Unpin` are mostly important for building lower-level libraries, or
+when you are building a runtime itself, rather than for day to day Rust code.
+When you see them, though, now you will know what to do!
+
 > Note: This combination of `Pin` and `Unpin` allows a whole class of complex
 > types to be safe in Rust which are otherwise difficult to implement because
 > they are self-referential. Types which require `Pin` show up *most* commonly
@@ -283,16 +292,14 @@ underlying data in the futures will *not* be moved.
 > - [Chapter 2: Under the Hood: Executing Futures and Tasks][under-the-hood]
 > - [Chapter 4: Pinning][pinning].
 
-Now that we have a deeper grasp on the `Future`, `Pin`, and `Unpin` traits, we
-can turn our attention to the `Stream` trait.
-
-### The Stream Trait
-
-As described in the section introducing streams, streams are like asynchronous
-iterators. Unlike `Iterator` and `Future`, there is no definition of a `Stream`
-trait in the standard library as of the time of writing,<!-- TODO: verify before
-press time! --> but there *is* a very common definition used throughout the
-ecosystem.
+Now that we have a deeper grasp on the `Future`, `Pin`, and `Unpin` traits, we
+can turn our attention to the `Stream` trait. As described in the section
+introducing streams, streams are like asynchronous iterators. Unlike `Iterator`
+and `Future`, there is no definition of a `Stream` trait in the standard library
+as of the time of writing,<!-- TODO: verify before press time! --> but there
+*is* a very common definition used throughout the ecosystem.
 
 Let’s review the definitions of the `Iterator` and `Future` traits, so we can
 build up to how a `Stream` trait that merges them together might look. From
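Ahead of that comparison, here is a sketch of the ecosystem's common `Stream` shape: the trait below mirrors the widely used `poll_next` definition (as in the `futures` crate), while the iterator-backed impl and the manual polling are invented for illustration.

```rust
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The common ecosystem definition: `Iterator`-style items, `Future`-style polling.
trait Stream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

// A toy stream backed by an iterator, so every item is ready immediately.
struct IterStream<I>(I);

impl<I: Iterator + Unpin> Stream for IterStream<I> {
    type Item = I::Item;
    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<I::Item>> {
        // `Ready(Some(item))` yields an item; `Ready(None)` ends the stream.
        Poll::Ready(self.0.next())
    }
}

// A waker that does nothing, for polling by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { raw() }
    fn no_op(_: *const ()) {}
    fn raw() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut stream = IterStream(1..=3);
    let mut stream = Pin::new(&mut stream);

    let mut items = Vec::new();
    while let Poll::Ready(Some(item)) = stream.as_mut().poll_next(&mut cx) {
        items.push(item);
    }
    assert_eq!(items, vec![1, 2, 3]);
    println!("{items:?}");
}
```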