tokio/runtime/task/mod.rs
//! The task module.
//!
//! The task module contains the code that manages spawned tasks and provides a
//! safe API for the rest of the runtime to use. Each task in a runtime is
//! stored in an `OwnedTasks` or `LocalOwnedTasks` object.
//!
//! # Task reference types
//!
//! A task is usually referenced by multiple handles, and there are several
//! types of handles.
//!
//! * `OwnedTask` - tasks stored in an `OwnedTasks` or `LocalOwnedTasks` are of this
//! reference type.
//!
//! * `JoinHandle` - each task has a `JoinHandle` that allows access to the output
//! of the task.
//!
//! * `Waker` - every waker for a task has this reference type. There can be any
//! number of waker references.
//!
//! * `Notified` - tracks whether the task is notified.
//!
//! * `Unowned` - this task reference type is used for tasks not stored in any
//! runtime. Mainly used for blocking tasks, but also in tests.
//!
//! The task uses a reference count to keep track of how many active references
//! exist. The `Unowned` reference type takes up two ref-counts. All other
//! reference types take up a single ref-count.
//!
//! Besides the waker type, each task has at most one of each reference type.
//!
//! # State
//!
//! The task stores its state in an atomic `usize` with various bitfields for the
//! necessary information. The state has the following bitfields:
//!
//! * `RUNNING` - Tracks whether the task is currently being polled or cancelled.
//! This bit functions as a lock around the task.
//!
//! * `COMPLETE` - Is one once the future has fully completed and has been
//! dropped. Never unset once set. Never set together with RUNNING.
//!
//! * `NOTIFIED` - Tracks whether a Notified object currently exists.
//!
//! * `CANCELLED` - Is set to one for tasks that should be cancelled as soon as
//! possible. May take any value for completed tasks.
//!
//! * `JOIN_INTEREST` - Is set to one if there exists a `JoinHandle`.
//!
//! * `JOIN_WAKER` - Acts as an access control bit for the join handle waker. The
//! protocol for its usage is described below.
//!
//! The rest of the bits are used for the ref-count.
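//!
//! As a rough illustration, the flags could be laid out like this (a sketch
//! only; the real constants live in `state.rs` and may differ):
//!
//! ```ignore
//! // Illustrative layout only; see `state.rs` for the real definitions.
//! const RUNNING: usize       = 0b00_0001;
//! const COMPLETE: usize      = 0b00_0010;
//! const NOTIFIED: usize      = 0b00_0100;
//! const CANCELLED: usize     = 0b00_1000;
//! const JOIN_INTEREST: usize = 0b01_0000;
//! const JOIN_WAKER: usize    = 0b10_0000;
//! // Every bit above the flags is part of the reference count.
//! const REF_ONE: usize = 0b100_0000; // adding this increments the ref-count by one
//! ```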
//!
//! # Fields in the task
//!
//! The task has various fields. This section describes how and when it is safe
//! to access a field.
//!
//! * The state field is accessed with atomic instructions.
//!
//! * The `OwnedTask` reference has exclusive access to the `owned` field.
//!
//! * The Notified reference has exclusive access to the `queue_next` field.
//!
//! * The `owner_id` field can be set as part of construction of the task, but
//! is otherwise immutable and anyone can access the field immutably without
//! synchronization.
//!
//! * If COMPLETE is one, then the `JoinHandle` has exclusive access to the
//! stage field. If COMPLETE is zero, then the RUNNING bitfield functions as
//! a lock for the stage field, and it can be accessed only by the thread
//! that set RUNNING to one.
//!
//! * The waker field may be concurrently accessed by different threads: in one
//! thread the runtime may complete a task and *read* the waker field to
//! invoke the waker, and in another thread the task's `JoinHandle` may be
//! polled, and if the task hasn't yet completed, the `JoinHandle` may *write*
//! a waker to the waker field. The `JOIN_WAKER` bit ensures safe access by
//! multiple threads to the waker field using the following rules:
//!
//! 1. `JOIN_WAKER` is initialized to zero.
//!
//! 2. If `JOIN_WAKER` is zero, then the `JoinHandle` has exclusive (mutable)
//! access to the waker field.
//!
//! 3. If `JOIN_WAKER` is one, then the `JoinHandle` has shared (read-only)
//! access to the waker field.
//!
//! 4. If `JOIN_WAKER` is one and COMPLETE is one, then the runtime has shared
//! (read-only) access to the waker field.
//!
//! 5. If the `JoinHandle` needs to write to the waker field, then the
//! `JoinHandle` needs to (i) successfully set `JOIN_WAKER` to zero if it is
//! not already zero to gain exclusive access to the waker field per rule
//! 2, (ii) write a waker, and (iii) successfully set `JOIN_WAKER` to one.
//! If the `JoinHandle` unsets `JOIN_WAKER` in the process of being dropped
//! to clear the waker field, only steps (i) and (ii) are relevant.
//!
//! 6. The `JoinHandle` can change `JOIN_WAKER` only if COMPLETE is zero (i.e.
//! the task hasn't yet completed). The runtime can change `JOIN_WAKER` only
//! if COMPLETE is one.
//!
//! 7. If `JOIN_INTEREST` is zero and COMPLETE is one, then the runtime has
//! exclusive (mutable) access to the waker field. This might happen if the
//! `JoinHandle` gets dropped right after the task completes and the runtime
//! sets the `COMPLETE` bit. In this case the runtime needs the mutable access
//! to the waker field to drop it.
//!
//! Rule 6 implies that steps (i) or (iii) of rule 5 may fail due to a
//! race. If step (i) fails, then the attempt to write a waker is aborted. If
//! step (iii) fails because COMPLETE is set to one by another thread after
//! step (i), then the waker field is cleared. Once COMPLETE is one (i.e. the
//! task has completed), the `JoinHandle` will not modify `JOIN_WAKER`. After
//! the runtime sets COMPLETE to one, it invokes the waker if there is one, so
//! in this case the `JOIN_WAKER` bit tells the runtime whether it should
//! invoke the waker when the task completes. Once the runtime is done using
//! the waker during task completion, it unsets the `JOIN_WAKER` bit to give
//! the `JoinHandle` exclusive access again so that it is able to drop the
//! waker at a later point.
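//!
//! A sketch of the write path from rule 5, as the `JoinHandle` might carry it
//! out (pseudocode; the helper names below are placeholders, not the real
//! `state.rs` API):
//!
//! ```ignore
//! fn store_join_waker(task: &Header, waker: &Waker) {
//!     // (i) Clear JOIN_WAKER to gain exclusive access per rule 2. This
//!     // fails if COMPLETE was set concurrently (rule 6); abort then.
//!     if !try_unset_join_waker(task) {
//!         return;
//!     }
//!     // (ii) Exclusive access: write the waker field.
//!     write_waker_field(task, waker.clone());
//!     // (iii) Publish the waker by setting JOIN_WAKER back to one. This
//!     // fails if the task completed in the meantime; in that case clear
//!     // the waker field again.
//!     if !try_set_join_waker(task) {
//!         clear_waker_field(task);
//!     }
//! }
//! ```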
//!
//! All other fields are immutable and can be accessed immutably without
//! synchronization by anyone.
//!
//! # Safety
//!
//! This section goes through various situations and explains why the API is
//! safe in that situation.
//!
//! ## Polling or dropping the future
//!
//! Any mutable access to the future happens after obtaining a lock by modifying
//! the RUNNING field, so exclusive access is ensured.
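//!
//! Conceptually the poll path uses RUNNING as a try-lock, roughly like the
//! pseudocode below (the real transitions in `state.rs` also deal with
//! NOTIFIED, CANCELLED and the ref-count; the helper names here are
//! placeholders):
//!
//! ```ignore
//! fn poll_task(state: &State) {
//!     // Atomically set RUNNING; if it is already set, another thread is
//!     // polling or cancelling the task and we must not touch the future.
//!     if !try_set_running(state) {
//!         return;
//!     }
//!     // RUNNING is set: this thread has exclusive access to the future.
//!     // ... poll or drop the future here ...
//!     // Unset RUNNING. If CANCELLED was set while the lock was held, the
//!     // task is cancelled at this point.
//!     clear_running(state);
//! }
//! ```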
//!
//! When the task completes, exclusive access to the output is transferred to
//! the `JoinHandle`. If the `JoinHandle` is already dropped when the transition to
//! complete happens, the thread performing that transition retains exclusive
//! access to the output and should immediately drop it.
//!
//! ## Non-Send futures
//!
//! If a future is not Send, then it is bound to a `LocalOwnedTasks`. The future
//! will only ever be polled or dropped given a `LocalNotified` or inside a call
//! to `LocalOwnedTasks::shutdown_all`. In either case, it is guaranteed that the
//! future is on the right thread.
//!
//! If the task is never removed from the `LocalOwnedTasks`, then it is leaked, so
//! there is no risk that the task is dropped on some other thread when the last
//! ref-count drops.
//!
//! ## Non-Send output
//!
//! When a task completes, the output is placed in the stage of the task. Then,
//! a transition that sets COMPLETE to true is performed, and the value of
//! `JOIN_INTEREST` when this transition happens is read.
//!
//! If `JOIN_INTEREST` is zero when the transition to COMPLETE happens, then the
//! output is immediately dropped.
//!
//! If `JOIN_INTEREST` is one when the transition to COMPLETE happens, then the
//! `JoinHandle` is responsible for cleaning up the output. If the output is not
//! Send, then this happens:
//!
//! 1. The output is created on the thread that the future was polled on. Since
//! only non-Send futures can have non-Send output, the future was polled on
//! the thread that the future was spawned from.
//! 2. Since `JoinHandle<Output>` is not Send if Output is not Send, the
//! `JoinHandle` is also on the thread that the future was spawned from.
//! 3. Thus, the `JoinHandle` will not move the output across threads when it
//! takes or drops the output.
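//!
//! For example, using the public API (an illustration of the argument above,
//! not code from this module), a task whose future and output are both
//! `!Send` stays on one thread from spawn to join:
//!
//! ```ignore
//! use std::rc::Rc;
//! use tokio::task;
//!
//! #[tokio::main(flavor = "current_thread")]
//! async fn main() {
//!     let local = task::LocalSet::new();
//!     local
//!         .run_until(async {
//!             // Holding an `Rc` across an `.await` makes the future `!Send`,
//!             // and its output is `!Send` too, so it must be spawned with
//!             // `spawn_local` on the current thread.
//!             let handle = task::spawn_local(async {
//!                 let value = Rc::new(1u32);
//!                 task::yield_now().await;
//!                 value
//!             });
//!             // `JoinHandle<Rc<u32>>` is `!Send` as well, so the output is
//!             // taken on the same thread that created it.
//!             assert_eq!(*handle.await.unwrap(), 1);
//!         })
//!         .await;
//! }
//! ```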
//!
//! ## Recursive poll/shutdown
//!
//! Calling poll from inside a shutdown call or vice-versa is not prevented by
//! the API exposed by the task module, so this has to be safe. In either case,
//! the lock in the RUNNING bitfield makes the inner call return immediately. If
//! the inner call is a `shutdown` call, then the CANCELLED bit is set, and the
//! poll call will notice it when the poll finishes, and the task is cancelled
//! at that point.

// Some task infrastructure is here to support `JoinSet`, which is currently
// unstable. This should be removed once `JoinSet` is stabilized.
#![cfg_attr(not(tokio_unstable), allow(dead_code))]

mod core;
use self::core::Cell;
use self::core::Header;

mod error;
pub use self::error::JoinError;

mod harness;
use self::harness::Harness;

mod id;
#[cfg_attr(not(tokio_unstable), allow(unreachable_pub, unused_imports))]
pub use id::{id, try_id, Id};

#[cfg(feature = "rt")]
mod abort;
mod join;

#[cfg(feature = "rt")]
pub use self::abort::AbortHandle;

pub use self::join::JoinHandle;

mod list;
pub(crate) use self::list::{LocalOwnedTasks, OwnedTasks};

mod raw;
pub(crate) use self::raw::RawTask;

mod state;
use self::state::State;

mod waker;

cfg_taskdump! {
    pub(crate) mod trace;
}

use crate::future::Future;
use crate::util::linked_list;
use crate::util::sharded_list;

use crate::runtime::TaskCallback;
use std::marker::PhantomData;
use std::ptr::NonNull;
use std::{fmt, mem};

/// An owned handle to the task, tracked by ref count.
#[repr(transparent)]
pub(crate) struct Task<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

unsafe impl<S> Send for Task<S> {}
unsafe impl<S> Sync for Task<S> {}

/// A task was notified.
#[repr(transparent)]
pub(crate) struct Notified<S: 'static>(Task<S>);

// safety: This type cannot be used to touch the task without first verifying
// that the value is on a thread where it is safe to poll the task.
unsafe impl<S: Schedule> Send for Notified<S> {}
unsafe impl<S: Schedule> Sync for Notified<S> {}

/// A non-Send variant of Notified with the invariant that it is on a thread
/// where it is safe to poll it.
#[repr(transparent)]
pub(crate) struct LocalNotified<S: 'static> {
    task: Task<S>,
    _not_send: PhantomData<*const ()>,
}

impl<S> LocalNotified<S> {
    #[cfg(tokio_unstable)]
    pub(crate) fn task_id(&self) -> Id {
        self.task.id()
    }
}

/// A task that is not owned by any `OwnedTasks`. Used for blocking tasks.
/// This type holds two ref-counts.
pub(crate) struct UnownedTask<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

// safety: This type can only be created given a Send task.
unsafe impl<S> Send for UnownedTask<S> {}
unsafe impl<S> Sync for UnownedTask<S> {}

/// Task result sent back.
pub(crate) type Result<T> = std::result::Result<T, JoinError>;

/// Hooks for scheduling tasks which are needed in the task harness.
#[derive(Clone)]
pub(crate) struct TaskHarnessScheduleHooks {
    pub(crate) task_terminate_callback: Option<TaskCallback>,
}

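/// A minimal sketch of a `Schedule` implementation, assuming a toy scheduler
/// that parks notified tasks in a queue until some driver loop runs them. The
/// type `QueueScheduler` and its field are illustrative only, not part of
/// Tokio:
///
/// ```ignore
/// use std::collections::VecDeque;
/// use std::sync::Mutex;
///
/// struct QueueScheduler {
///     run_queue: Mutex<VecDeque<Notified<QueueScheduler>>>,
/// }
///
/// impl Schedule for QueueScheduler {
///     fn release(&self, _task: &Task<Self>) -> Option<Task<Self>> {
///         // This toy scheduler never stores tasks in an `OwnedTasks` list,
///         // so there is no owned reference to hand back.
///         None
///     }
///
///     fn schedule(&self, task: Notified<Self>) {
///         // Queue the notification; a driver loop would pop and poll it.
///         self.run_queue.lock().unwrap().push_back(task);
///     }
///
///     fn hooks(&self) -> TaskHarnessScheduleHooks {
///         TaskHarnessScheduleHooks {
///             task_terminate_callback: None,
///         }
///     }
/// }
/// ```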
pub(crate) trait Schedule: Sync + Sized + 'static {
    /// The task has completed work and is ready to be released. The scheduler
    /// should release it immediately and return it. The task module will batch
    /// the ref-dec with setting other options.
    ///
    /// If the scheduler has already released the task, then None is returned.
    fn release(&self, task: &Task<Self>) -> Option<Task<Self>>;

    /// Schedule the task
    fn schedule(&self, task: Notified<Self>);

    fn hooks(&self) -> TaskHarnessScheduleHooks;

    /// Schedule the task to run in the near future, yielding the thread to
    /// other tasks.
    fn yield_now(&self, task: Notified<Self>) {
        self.schedule(task);
    }

    /// Polling the task resulted in a panic. Should the runtime shutdown?
    fn unhandled_panic(&self) {
        // By default, do nothing. This maintains the 1.0 behavior.
    }
}

cfg_rt! {
    /// This is the constructor for a new task. Three references to the task are
    /// created. The first task reference is usually put into an `OwnedTasks`
    /// immediately. The Notified is sent to the scheduler as an ordinary
    /// notification.
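    ///
    /// For illustration, a hypothetical caller might wire the three references
    /// up roughly as follows (a sketch only; the real spawn path goes through
    /// the scheduler and its `OwnedTasks` list, and the helper below is made
    /// up):
    ///
    /// ```ignore
    /// let (task, notified, join) = new_task(future, scheduler.clone(), id);
    /// store_in_owned_list(task);    // hypothetical: keep the owned reference
    /// scheduler.schedule(notified); // hand the task to the scheduler
    /// join                          // returned to the user as the JoinHandle
    /// ```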
    fn new_task<T, S>(
        task: T,
        scheduler: S,
        id: Id,
    ) -> (Task<S>, Notified<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Future + 'static,
        T::Output: 'static,
    {
        let raw = RawTask::new::<T, S>(task, scheduler, id);
        let task = Task {
            raw,
            _p: PhantomData,
        };
        let notified = Notified(Task {
            raw,
            _p: PhantomData,
        });
        let join = JoinHandle::new(raw);

        (task, notified, join)
    }

    /// Creates a new task with an associated join handle. This method is used
    /// only when the task is not going to be stored in an `OwnedTasks` list.
    ///
    /// Currently only blocking tasks use this method.
    pub(crate) fn unowned<T, S>(task: T, scheduler: S, id: Id) -> (UnownedTask<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Send + Future + 'static,
        T::Output: Send + 'static,
    {
        let (task, notified, join) = new_task(task, scheduler, id);

        // This transfers the ref-count of task and notified into an UnownedTask.
        // This is valid because an UnownedTask holds two ref-counts.
        let unowned = UnownedTask {
            raw: task.raw,
            _p: PhantomData,
        };
        std::mem::forget(task);
        std::mem::forget(notified);

        (unowned, join)
    }
}

impl<S: 'static> Task<S> {
    unsafe fn new(raw: RawTask) -> Task<S> {
        Task {
            raw,
            _p: PhantomData,
        }
    }

    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        Task::new(RawTask::from_raw(ptr))
    }

    #[cfg(all(
        tokio_unstable,
        tokio_taskdump,
        feature = "rt",
        target_os = "linux",
        any(target_arch = "aarch64", target_arch = "x86", target_arch = "x86_64")
    ))]
    pub(super) fn as_raw(&self) -> RawTask {
        self.raw
    }

    fn header(&self) -> &Header {
        self.raw.header()
    }

    fn header_ptr(&self) -> NonNull<Header> {
        self.raw.header_ptr()
    }

    /// Returns a [task ID] that uniquely identifies this task relative to other
    /// currently spawned tasks.
    ///
    /// [task ID]: crate::task::Id
    #[cfg(tokio_unstable)]
    pub(crate) fn id(&self) -> crate::task::Id {
        // Safety: The header pointer is valid.
        unsafe { Header::get_id(self.raw.header_ptr()) }
    }

    cfg_taskdump! {
        /// Notify the task for task dumping.
        ///
        /// Returns `None` if the task has already been notified.
        pub(super) fn notify_for_tracing(&self) -> Option<Notified<S>> {
            if self.as_raw().state().transition_to_notified_for_tracing() {
                // SAFETY: `transition_to_notified_for_tracing` increments the
                // refcount.
                Some(unsafe { Notified(Task::new(self.raw)) })
            } else {
                None
            }
        }
    }
}

impl<S: 'static> Notified<S> {
    fn header(&self) -> &Header {
        self.0.header()
    }

    #[cfg(tokio_unstable)]
    #[allow(dead_code)]
    pub(crate) fn task_id(&self) -> crate::task::Id {
        self.0.id()
    }
}

impl<S: 'static> Notified<S> {
    pub(crate) unsafe fn from_raw(ptr: RawTask) -> Notified<S> {
        Notified(Task::new(ptr))
    }
}

impl<S: 'static> Notified<S> {
    pub(crate) fn into_raw(self) -> RawTask {
        let raw = self.0.raw;
        mem::forget(self);
        raw
    }
}

impl<S: Schedule> Task<S> {
    /// Preemptively cancels the task as part of the shutdown process.
    pub(crate) fn shutdown(self) {
        let raw = self.raw;
        mem::forget(self);
        raw.shutdown();
    }
}

impl<S: Schedule> LocalNotified<S> {
    /// Runs the task.
    pub(crate) fn run(self) {
        let raw = self.task.raw;
        mem::forget(self);
        raw.poll();
    }
}

impl<S: Schedule> UnownedTask<S> {
    // Used in test of the inject queue.
    #[cfg(test)]
    #[cfg_attr(target_family = "wasm", allow(dead_code))]
    pub(super) fn into_notified(self) -> Notified<S> {
        Notified(self.into_task())
    }

    fn into_task(self) -> Task<S> {
        // Convert into a task.
        let task = Task {
            raw: self.raw,
            _p: PhantomData,
        };
        mem::forget(self);

        // Drop a ref-count since an UnownedTask holds two.
        task.header().state.ref_dec();

        task
    }

    pub(crate) fn run(self) {
        let raw = self.raw;
        mem::forget(self);

        // Transfer one ref-count to a Task object.
        let task = Task::<S> {
            raw,
            _p: PhantomData,
        };

        // Use the other ref-count to poll the task.
        raw.poll();
        // Decrement our extra ref-count
        drop(task);
    }

    pub(crate) fn shutdown(self) {
        self.into_task().shutdown();
    }
}

impl<S: 'static> Drop for Task<S> {
    fn drop(&mut self) {
        // Decrement the ref count
        if self.header().state.ref_dec() {
            // Deallocate if this is the final ref count
            self.raw.dealloc();
        }
    }
}

impl<S: 'static> Drop for UnownedTask<S> {
    fn drop(&mut self) {
        // Decrement the ref count
        if self.raw.header().state.ref_dec_twice() {
            // Deallocate if this is the final ref count
            self.raw.dealloc();
        }
    }
}

impl<S> fmt::Debug for Task<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "Task({:p})", self.header())
    }
}

impl<S> fmt::Debug for Notified<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "task::Notified({:p})", self.0.header())
    }
}

/// # Safety
///
/// Tasks are pinned.
unsafe impl<S> linked_list::Link for Task<S> {
    type Handle = Task<S>;
    type Target = Header;

    fn as_raw(handle: &Task<S>) -> NonNull<Header> {
        handle.raw.header_ptr()
    }

    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        Task::from_raw(ptr)
    }

    unsafe fn pointers(target: NonNull<Header>) -> NonNull<linked_list::Pointers<Header>> {
        self::core::Trailer::addr_of_owned(Header::get_trailer(target))
    }
}

/// # Safety
///
/// The id of a task is never changed after creation of the task, so the return value of
/// `get_shard_id` will not change. (The cast may throw away the upper 32 bits of the task id, but
/// the shard id still won't change from call to call.)
unsafe impl<S> sharded_list::ShardedListItem for Task<S> {
    unsafe fn get_shard_id(target: NonNull<Self::Target>) -> usize {
        // SAFETY: The caller guarantees that `target` points at a valid task.
        let task_id = unsafe { Header::get_id(target) };
        task_id.0.get() as usize
    }
}