Should I Task.Wait() or await Task?

The CLR offers an excellent construct for parallel programming: the task. Instead of dealing with threads, developers only have to break their code into small parallelizable units of work, and magic happens behind the scenes. Tasks run on the default thread pool under the careful watch of the default task scheduler. The C# compiler provides convenient syntactic sugar, "async" and "await", that breaks monolithic methods into resumable state machines. Life is great until the two worlds collide and a poor developer is staring down the barrel of a choice. When synchronous code transitions into asynchronous, it is very tempting to just type "Task.Result" or "Task.Wait()". This split-second, almost unconscious decision may carry drastic consequences for your app. In this article we will see why "await Task" is almost always the right choice, even if highly disruptive.

Task.Wait() shifts the bottleneck upstream to the thread pool

Thread pool contention is a well-known challenge to seasoned developers. Services should process work at the same rate as they receive it. If they don't, they accumulate a backlog of additional work that slows down the system:

  • Slow services need request throttling in front of them to keep them from grinding to a halt.
  • Services with internal contention increase hardware demands: they need beefier machines, more of them, or both.

In this section, we will look at a sample application that makes a network IO request in each task. This is obviously a gross oversimplification that rests on a number of assumptions (about the relative cost and latency of network IO compared to the rest of the code).

Every 2 milliseconds this application schedules a task to fetch the home page of an arbitrarily selected website. It takes approximately 900 milliseconds to establish TLS and fetch the home page. As such, the bottleneck in this workflow is the network, not the application. We will conduct experiments on powerful hardware: a 6-core hyperthreaded Core i7 CPU, 32 GB of RAM, and a 1 Gigabit NIC. We will use the dotnet-counters tool to capture basic CLR performance counters.
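The application's source is not reproduced in this copy of the article; the following is a minimal sketch of such a workload, assuming a shared HttpClient and an illustrative URL (the loop bounds approximate the described setup):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static readonly HttpClient Client = new HttpClient();
    static int _succeeded;

    static async Task Main()
    {
        var tasks = new List<Task>();
        // Schedule a fetch roughly every 2 ms for 60 seconds.
        for (int i = 0; i < 30_000; i++)
        {
            tasks.Add(FetchAsync());
            await Task.Delay(2);
        }
        await Task.WhenAll(tasks);
        Console.WriteLine($"Completed {_succeeded} successful requests");
    }

    static async Task FetchAsync()
    {
        try
        {
            // Fully asynchronous: no thread is held while the request is in flight.
            await Client.GetStringAsync("https://example.com/");
            Interlocked.Increment(ref _succeeded);
        }
        catch (HttpRequestException)
        {
            // Only successes are counted.
        }
    }
}
```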

Asynchronous mode

When we run the application as written above we see constant scaling. The application primes the thread pool within a second, and after that point has no queue wait time and a continuously declining thread pool size. Peak thread pool allocation is 28 threads. This application was able to complete over 3,200 successful requests in 60 seconds.

Performance counters of constant scaling application

Synchronous bottleneck

Now we are going to introduce a bottleneck in the thread pool by replacing "await" with ".Wait()". This will increase the demand for threads because each top-level thread will now be blocked on a synchronous event, depriving the thread pool of threads to process HttpClient continuations.
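A sketch of the synchronous variant (names and URL are illustrative):

```csharp
static readonly HttpClient Client = new HttpClient();

static void FetchBlocking()
{
    // Blocks a thread-pool thread for the ~900 ms the request is in flight.
    // The HttpClient continuation now needs another thread to complete,
    // so each pending request consumes a thread instead of freeing one.
    Client.GetStringAsync("https://example.com/").Wait();
}
```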

The thread pool scheduler has to allocate additional threads to keep up with demand, so we see the thread count rising significantly above the previous levels. We also see the queue length increasing significantly; this metric represents the number of units of work waiting for a thread to run.

Linear scaling of the application, with queue depth increasing in proportion to the tasks scheduled

This is clearly a resource-consuming application design with low throughput. In total, in 60 seconds this application completed only 12 successful requests: roughly 250 times slower than the fully asynchronous application.

Thread pool manager based on the Hill Climb algorithm is ineffective for bursty workloads

There is a lot of "magic" behind the scenes in the CLR (native) and the managed foundational libraries to create an illusion of workload elasticity. In plain English: knowing how many threads to allocate at any moment in time to handle the work items queued up in the thread pool is not easy. One has to strike a balance between two extremes:

  • Too few threads cause thread pool contention, as items sit waiting for a thread to become available.
  • Too many threads cause an increase in context switches and reduce data locality. CPU caches become less effective and your program has to make round-trips to RAM more often. There's a great breakdown of latencies between caches and RAM from Jonas Boner.

The CLR uses the Hill Climb heuristic to find the global throughput maximum for the thread pool. It is worth taking a minute to study the following theoretical material:

  • Concurrency – Throttling Concurrency in the CLR 4.0 ThreadPool
  • Optimizing Concurrency Levels in the .NET ThreadPool: A Case Study of Controller Design and Implementation

The implementation of this algorithm inside Core CLR is available in the public GitHub repository. Matt Warren published by far the best explanation of how the Hill Climb works in his The CLR Thread Pool 'Thread Injection' Algorithm post, in the section "Working with the Hill Climbing code".

In the previous section, we established that Task.Wait() causes contention. In this section, we have learned that Hill Climb adjusts the thread count iteratively until throughput cannot be increased. Task.Wait() compounds any contention that exists in the thread pool. Let's see how Hill Climb reacts to sudden spikes in load, simulated with a simple Thread.Sleep().
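The burst generator is not shown in this copy of the article; a minimal sketch of that kind of load (10,000 blocking work items queued per burst, six bursts, a 10-second pause between them, matching the figures quoted in the measurements) might look like this:

```csharp
using System.Threading;

class BurstyLoad
{
    static void Main()
    {
        for (int burst = 0; burst < 6; burst++)
        {
            // Queue 10,000 blocking work items as fast as possible.
            for (int i = 0; i < 10_000; i++)
            {
                ThreadPool.QueueUserWorkItem(_ => Thread.Sleep(100));
            }
            // Idle long enough for Hill Climb to start retiring threads.
            Thread.Sleep(10_000);
        }
    }
}
```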

This application induces the desired effect on the thread pool: we see 6 spikes of load expressed as Queue Length (a leading indicator), each followed by a spike in Completed Work Item Count per second (a trailing indicator). Two interesting characteristics are worth paying attention to:

  • How high the Queue Length spikes on each iteration, which indicates whether the thread pool is adequately primed for the workload
  • The amount of time between the spike in Queue Length and the spike in Completed Work Item Count, which indicates how quickly Hill Climb reacts to the workload

Hill Climb response time to bursty workload

Here's the length of time the Queue Length stayed above zero during each burst: 8 seconds, 6 seconds, 5 seconds, 3 seconds, 3 seconds, and 3 seconds respectively. The best the algorithm was able to do was 3 seconds of processing time. Exactly 10,000 tasks were queued within 1-2 seconds during each burst, yet at no point did the algorithm match the demand exactly. We can also see that after each burst, a 10-second delay was enough for Hill Climb to start deallocating threads.

Locks and deadlocks

Synchronization mechanisms that exist in the OS are designed for threads. They have no concept of delegating a lock from one thread to another when a workflow transitions between them. These mechanisms are exposed in .NET as ManualResetEvent, Monitor, Semaphore, Mutex, etc.

Tasks are CLR constructs that the underlying OS doesn't see. It sees a bunch of threads doing a bunch of work. Everything that a task encapsulates is done by the foundational libraries: installation of the impersonation and synchronization contexts, async locals, etc. A decent amount of glue code was written by the .NET runtime developers to bring thread-wide concepts to tasks, which can jump between threads continuation after continuation; see ExecutionContext.RunInternal(), for example. Clearly, the thread-based synchronization approach doesn't work for tasks.

SemaphoreSlim is a popular async synchronization primitive. It doesn't look at the thread on which it runs; it simply counts the number of tasks that gained access under the semaphore, which makes it portable across continuations. The same behavior makes the semaphore non-reentrant, which can lead to deadlocks.
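The article's code sample is missing from this copy; the following sketch shows one way such a deadlock arises, via nested acquisition of a non-reentrant SemaphoreSlim (method names are illustrative):

```csharp
using System.Threading;
using System.Threading.Tasks;

class Example
{
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    static async Task OuterAsync()
    {
        await Gate.WaitAsync();      // count drops to 0
        try
        {
            await InnerAsync();      // never returns: see below
        }
        finally { Gate.Release(); }
    }

    static async Task InnerAsync()
    {
        // The semaphore counts acquisitions, not threads, so it cannot
        // tell this is the same logical workflow. The count is 0, so this
        // wait can only be satisfied by OuterAsync releasing, and
        // OuterAsync is itself waiting for InnerAsync. Deadlock.
        await Gate.WaitAsync();
        try { /* ... */ }
        finally { Gate.Release(); }
    }
}
```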

To reconcile tasks and threads, the .NET runtime has the concept of a SynchronizationContext. It was introduced to deal with things like the UI thread in Windows and the ASP.NET worker threads managed by IIS that handle the request pipeline. The idea is that only certain threads can perform certain actions. Every time a task chain completes, it must return control to that special thread. Stephen Cleary did a great job explaining how deadlocks occur when those special threads are blocked with Task.Wait() in his Don't Block on Async Code post.

There are two general approaches to break the synchronization context boundary and avoid a deadlock. See the ConfigureAwait FAQ for a deep dive into the topic.

  • Task.ConfigureAwait(continueOnCapturedContext: false) is recommended within general-purpose library code. However, UI applications must continue on the original synchronization context, or UI updates will fail.
  • Task.Run(() => {…}) will also break the synchronization context, but it will also break the task scheduler boundary. If your application uses a custom task scheduler this will force the next task to run on the default task scheduler.
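A sketch of the first approach in general-purpose library code (the method name and URL are illustrative):

```csharp
public static async Task<string> ReadPageAsync(HttpClient client, string url)
{
    // Library code: do not resume on the caller's synchronization context.
    // The continuations may run on thread-pool threads, so a caller that
    // blocks its special thread with Wait()/Result will not deadlock here.
    var response = await client.GetAsync(url).ConfigureAwait(false);
    return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
}
```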

Awaiting a Task unwraps AggregateException

As developers, we are very careful about our API contracts. We want to control which exceptions get to fly out of our methods; we test and document them. Depending on whether we Task.Wait() or await Task, we end up throwing a different exception.

Awaited code rethrows the original exception type, so the API contract is preserved.

A seemingly benign change that switches the code to be synchronous will impact our API contract if not handled appropriately: the code no longer unwraps the AggregateException.
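The article's listings are not reproduced here; a minimal sketch of the difference (the thrown exception type is illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task FailAsync()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    static async Task Main()
    {
        // await rethrows the original exception type.
        try { await FailAsync(); }
        catch (InvalidOperationException) { }

        // Wait()/Result wrap it in an AggregateException instead.
        try { FailAsync().Wait(); }
        catch (AggregateException ex)
        {
            Console.WriteLine(ex.InnerException?.GetType().Name);
        }
    }
}
```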

Too much async will hurt application performance

We've been preaching the async pattern as a universal panacea for all development problems in this article. Yet there are a few cases where too much async is not healthy and you're better off going synchronous. You should aim to strike a balance between the amount of compute spent within each task and the cost of task switching and handling in the thread pool and scheduler.

Avoid await unless it is necessary

Let's look at a contrived example of chained await calls that don't add value on their own, but exist only to chain async calls. For each method, the compiler generates an IAsyncStateMachine class. Each state machine has a MoveNext() method that links to a TaskAwaiter for the next method.
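The original listing is missing from this copy; a sketch of the pattern in question, with illustrative names, where each pass-through method gets its own compiler-generated state machine:

```csharp
static async Task<string> Level1Async() => await Level2Async();
static async Task<string> Level2Async() => await Level3Async();

static async Task<string> Level3Async()
{
    await Task.Delay(10);   // the only method doing real asynchronous work
    return "done";
}
```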

Each method is resumable, with all the result and exception handling code prepared for it. For a method that doesn't contain any useful code, that's an awful lot of compiler-generated machinery consuming CPU at runtime. A better approach is to collapse the pass-through asynchronous methods into synchronous ones that return the final task to the top.
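A sketch of the collapsed version of such a chain: the pass-through methods return the task directly, so the compiler generates no state machine for them:

```csharp
static Task<string> Level1Async() => Level2Async();   // no async keyword,
static Task<string> Level2Async() => Level3Async();   // no state machine

static async Task<string> Level3Async()
{
    await Task.Delay(10);
    return "done";
}
```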

The modified version of the code doesn't have any async syntactic sugar but is functionally equivalent to the original. It has better runtime performance due to reduced CPU and memory consumption. Even non-async methods can be awaited, as long as they return a Task.

Exception handling in Task is expensive

Tasks have reasonable performance on the happy path, when every continuation does its thing and hands control to the next continuation. Tasks have a layer of glue that runs on every continuation to configure the synchronization and impersonation contexts, and it is reasonably optimized. However, when a task fails, it goes into berserk mode trying to create that masterpiece of an exception we all love and expect from .NET.

Inside the execution context, Task calls ExceptionDispatchInfo.Capture(), which in turn calls Exception.CaptureDispatchState(), which calls GetStackTracesDeepCopy(). This method attempts to construct a meaningful stack trace for the exception across all task continuations, including all the glue code injected by the .NET core library. That's a very compute- and memory-intensive operation. If exceptions are rare in your app, this is no big deal; you really want that specific stack trace to debug the problem. However, if exceptions occur often in your code, this will become a significant drag on application performance.

It is best to design your application in an exception-free manner and replace calls on the happy path that can throw with the TrySomething() pattern.
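A sketch of the idea, using the framework's own int.TryParse in place of a throwing Parse call:

```csharp
// Throwing version: every malformed input pays the full cost of
// exception construction and stack trace capture.
static int ParseOrZero(string s)
{
    try { return int.Parse(s); }
    catch (FormatException) { return 0; }
}

// Try pattern: the failure path is a cheap boolean check instead.
static int TryParseOrZero(string s) =>
    int.TryParse(s, out int value) ? value : 0;
```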

Thanks to dotnetCoreLogoPack for .NET Core logo.


Await, and UI, and deadlocks! Oh my!


Stephen Toub - MSFT

January 13th, 2011

It’s been awesome seeing the level of interest developers have had for the Async CTP and how much usage it’s getting.  Of course, with any new technology there are bound to be some hiccups.  One issue I’ve seen arise now multiple times is developers accidentally deadlocking their application by blocking their UI thread, so I thought it would be worthwhile to take a few moments to explore the common cause of this and how to avoid such predicaments.

At its core, the new async language functionality aims to restore the ability for developers to write the sequential, imperative code they’re used to writing, but to have it be asynchronous in nature rather than synchronous.  That means that when operations would otherwise tie up the current thread of execution, they’re instead offloaded elsewhere, allowing the current thread to make forward progress and do other useful work while, in effect, asynchronously waiting for the spawned operation to complete.  In both server and client applications, this can be crucial for application scalability, and in client applications in particular it’s also really useful for responsiveness.

Most UI frameworks, such as Windows Forms and WPF, utilize a message loop to receive and process incoming messages.  These messages include things like notifications of keys being typed on a keyboard, or buttons being clicked on a mouse, or controls in the user interface being manipulated, or the need to refresh an area of the window, or even the application sending itself a message dictating some code to be executed.  In response to these messages, the UI performs some action, such as redrawing its surface, or changing the text being displayed, or adding items to one of its controls, or running the code that was posted to it.  The “message loop” is typically literally a loop in code, where a thread continually waits for the next message to arrive, processes it, goes back to get the next message, processes it, and so on.  As long as that thread is able to quickly process messages as soon as they arrive, the application remains responsive, and the application’s users remain happy.  If, however, processing a particular message takes too long, the thread running the message loop code will be unable to pick up the next message in a timely fashion, and responsiveness will decrease.  This could take the form of pauses in responding to user input, and if the thread’s delays get bad enough (e.g. an infinite delay), the application will appear to “hang”.

In a framework like Windows Forms or WPF, when a user clicks a button, that typically ends up sending a message to the message loop, which translates the message into a call to a handler of some kind, such as a method on the class representing the user interface, e.g.:

private void button1_Click(object sender, RoutedEventArgs e)
{
    string s = LoadString();
    textBox1.Text = s;
}

Here, when I click the button1 control, the message will inform WPF to invoke the button1_Click method, which will in turn run a method LoadString to get a string value, and store that string value into the textBox1 control’s Text property.  As long as LoadString is quick to execute, all is well, but the longer LoadString takes, the more time the UI thread is delayed inside button1_Click, unable to return to the message loop to pick up and process the next message.

To address that, we can choose to load the string asynchronously, meaning that rather than blocking the thread calling button1_Click from returning to the message loop until the string loading has completed, we’ll instead just have that thread launch the loading operation and then go back to the message loop.  Only when the loading operation completes will we then send another message to the message loop to say “hey, that loading operation you previously started is done, and you can pick up where you left off and continue executing.”  Imagine we had a method:

public Task<string> LoadStringAsync();

This method will return very quickly to its caller, handing back a .NET Task<string> object that represents the future completion of the asynchronous operation and its future result.  At some point in the future when the operation completes, the task object will be able to hand out the operation’s result, which could be the string in the case of successful loading, or an exception in the case of failure.  Either way, the task object provides several mechanisms to notify the holder of the object that the loading operation has completed.  One way is to synchronously block waiting for the task to complete, which can be accomplished by calling the task’s Wait method, or by accessing its Result property, which will implicitly wait until the operation has completed… in both of these cases, a call to these members will not complete until the operation has completed.  An alternative way is to receive an asynchronous callback, where you register with the task a delegate that will be invoked when the task completes.  That can be accomplished using one of the Task’s ContinueWith methods.  With ContinueWith, we can now rewrite our previous button1_Click method to not block the UI thread while we’re asynchronously waiting for the loading operation to complete:

private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate { textBox1.Text = s.Result; }); // warning: buggy
}

This does in fact asynchronously launch the loading operation, and then asynchronously run the code to store the result into the UI when the operation completes.  However, we now have a new problem.  UI frameworks like Windows Forms, WPF, and Silverlight all place a restriction on which threads are able to access UI controls, namely that a control can only be accessed from the thread that created it.  Here, however, we’re running the callback to update the Text of textBox1 on some arbitrary thread, wherever the Task Parallel Library (TPL) implementation of ContinueWith happened to put it.  To address this, we need some way to get back to the UI thread.  Different UI frameworks provide different mechanisms for doing this, but in .NET they all take basically the same shape, a BeginInvoke method you can use to pass some code as a message to the UI thread to be processed:

private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate
    {
        Dispatcher.BeginInvoke(new Action(delegate
        {
            textBox1.Text = s.Result;
        }));
    });
}

The .NET Framework further abstracts over these mechanisms for getting back to the UI thread, and in general a mechanism for posting some code to a particular context, through the SynchronizationContext class.  A framework can establish a current context, available through the SynchronizationContext.Current property, which provides a SynchronizationContext instance representing the current environment.  This instance’s Post method will marshal a delegate back to this environment to be invoked: in a WPF app, that means bringing you back to the dispatcher, or UI thread, you were previously on.  So, we can rewrite the previous code as follows:

private void button1_Click(object sender, RoutedEventArgs e)
{
    var sc = SynchronizationContext.Current;
    Task<string> s = LoadStringAsync();
    s.ContinueWith(delegate
    {
        sc.Post(delegate { textBox1.Text = s.Result; }, null);
    });
}

and in fact this pattern is so common, TPL in .NET 4 provides the TaskScheduler.FromCurrentSynchronizationContext() method, which allows you to do the same thing with code like:

private void button1_Click(object sender, RoutedEventArgs e)
{
    LoadStringAsync().ContinueWith(s => textBox1.Text = s.Result,
        TaskScheduler.FromCurrentSynchronizationContext());
}

As mentioned, this works by “posting” the delegate back to the UI thread to be executed.  That posting is a message like any other, and it requires the UI thread to go through its message loop, pick up the message, and process it (which will result in invoking the posted delegate).  In order for the delegate to be invoked then, the thread first needs to return to the message loop, which means it must leave the button1_Click method.

Now, there’s still a fair amount of boilerplate code to write above, and it gets orders of magnitude worse when you start introducing more complicated flow control constructs, like conditionals and loops.  To address this, the new async language feature allows you to write this same code as:

private async void button1_Click(object sender, RoutedEventArgs e)
{
    string s = await LoadStringAsync();
    textBox1.Text = s;
}

For all intents and purposes, this is the same as the previous code shown, and you can see how much cleaner it is… in fact, it’s nearly identical in the code required to our original synchronous implementation.  But, of course, this one is asynchronous: after calling LoadStringAsync and getting back the Task<string> object, the remainder of the function is hooked up as a callback that will be posted to the current SynchronizationContext in order to continue execution on the right thread when the loading is complete.  The compiler is layering on some really helpful syntactic sugar here.

Now things get interesting. Let’s imagine LoadStringAsync is implemented as follows:

static async Task<string> LoadStringAsync()
{
    string firstName = await GetFirstNameAsync();
    string lastName = await GetLastNameAsync();
    return firstName + " " + lastName;
}

LoadStringAsync is implemented to first asynchronously retrieve a first name, then asynchronously retrieve a last name, and then return the concatenation of the two.  Notice that it’s using “await”, which, as pointed out previously, is similar to the aforementioned TPL code that uses a continuation to post back to the synchronization context that was current when the await was issued.  So, here’s the crucial point: for LoadStringAsync to complete (i.e. for it to have loaded all of its data and returned its concatenated string, completing the task it returned with that concatenated result), the delegates it posted to the UI thread must have completed.  If the UI thread is unable to get back to the message loop to process messages, it will be unable to pick up the posted delegates that resulted from the asynchronous operations in LoadStringAsync completing, which means the remainder of LoadStringAsync will not run, which means the Task<string> returned from LoadStringAsync will not complete.  It won’t complete until the relevant messages are processed by the message loop.

With that in mind, consider this (faulty) reimplementation of button1_Click:

private void button1_Click(object sender, RoutedEventArgs e)
{
    Task<string> s = LoadStringAsync();
    textBox1.Text = s.Result; // warning: buggy
}

There’s an exceedingly good chance that this code will hang your application.  The Task<string>.Result property is strongly typed as a String, and thus it can’t return until it has the valid result string to hand back; in other words, it blocks until the result is available.  We’re inside of button1_Click then blocking for LoadStringAsync to complete, but LoadStringAsync’s implementation depends on being able to post code asynchronously back to the UI to be executed, and the task returned from LoadStringAsync won’t complete until it does.  LoadStringAsync is waiting for button1_Click to complete, and button1_Click is waiting for LoadStringAsync to complete. Deadlock!

This problem can be exemplified easily without using any of this complicated machinery, e.g.:

private void button1_Click(object sender, RoutedEventArgs e)
{
    var mre = new ManualResetEvent(false);
    SynchronizationContext.Current.Post(_ => mre.Set(), null);
    mre.WaitOne(); // warning: buggy
}

Here, we’re creating a ManualResetEvent, a synchronization primitive that allows us to synchronously wait (block) until the primitive is set.  After creating it, we post back to the UI thread to set the event, and then we wait for it to be set.  But we’re waiting on the very thread that would go back to the message loop to pick up the posted message to do the set operation.  Deadlock.

The moral of this (longer than intended) story is that you should not block the UI thread.  Contrary to Nike’s recommendations, just don’t do it.  The new async language functionality makes it easy to asynchronously wait for your work to complete.  So, on your UI thread, instead of writing:

Task<string> s = LoadStringAsync();
textBox1.Text = s.Result; // BAD ON UI

you can write:

Task<string> s = LoadStringAsync();
textBox1.Text = await s; // GOOD ON UI

Or instead of writing:

Task t = DoWork();
t.Wait(); // BAD ON UI

you can write:

Task t = DoWork();
await t; // GOOD ON UI

This isn’t to say you should never block.  To the contrary, synchronously waiting for a task to complete can be a very effective mechanism, and can exhibit less overhead in many situations than the asynchronous counterpart.  There are also some contexts where asynchronously waiting can be dangerous. For these reasons and others, Task and Task<TResult> support both approaches, so you can have your cake and eat it too.  Just be cognizant of what you’re doing and when, and don’t block your UI thread.

(One final note: the Async CTP includes the TaskEx.ConfigureAwait method.  You can use this method to suppress the default behavior of marshaling back to the original synchronization context.  This could have been used, for example, in the LoadStringAsync method to prevent those awaits from needing to return to the UI thread.  This would not only have prevented the deadlock, it would have also resulted in better performance, because we now no longer need to force execution back to the UI thread, when nothing in that method actually needed to run on the UI thread.)

Stephen Toub - MSFT Partner Software Engineer, .NET


I must admit that this article (and the series of articles you have written on TPL) is one of the most comprehensive blogs I have ever read regarding TPL.

I have one quick question regarding this statement that you have made:

There’s an exceedingly good chance that this code will hang your application. The Task.Result property is strongly typed as a String, and thus it can’t return until it has the valid result string to hand back; in other words, it blocks until the result is available. We’re inside of button1_Click then blocking for LoadStringAsync to complete, but LoadStringAsync’s implementation depends on being able to post code asynchronously back to the UI to be executed, and the task returned from LoadStringAsync won’t complete until it does. LoadStringAsync is waiting for button1_Click to complete, and button1_Click is waiting for LoadStringAsync to complete. Deadlock!

So the main problem as to why Deadlock occurs is because the UI Thread cannot process the message posted to the message pump right? Or is it because LoadStringAsync() method cannot even post message to the message pump because the UI thread is blocked by the caller waiting for LoadStringAsync() to complete. If my understanding is correct, the DeadLock happens because the UI thread cannot process the message posted to the message pump which means that LoadStringAsync did post to the message pump but that message cannot be picked up by the UI thread(as its waiting) and thus LoadStringAsync cannot mark itself as complete?


> So the main problem as to why Deadlock occurs is because the UI Thread cannot process the message posted to the message pump right?

Correct. The UI thread is blocked waiting for the task to complete, and the task won’t complete until the UI thread pumps messages, which won’t happen because the UI thread is blocked.



Andrew Lock | .NET Escapades


A deep-dive into the new Task.WaitAsync() API in .NET 6

In this post I look at how the new Task.WaitAsync() API is implemented in .NET 6, looking at the internal types used to implement it.

Adding a timeout or cancellation support to await Task

In my previous post , I showed how you could "cancel" an await Task call for a Task that didn't directly support cancellation by using the new WaitAsync() API in .NET 6.

I used WaitAsync() in that post to improve the code that waits for the IHostApplicationLifetime.ApplicationStarted event to fire.
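The final listing from that post is not included in this copy; a sketch of the general pattern (converting the ApplicationStarted token into a task and bounding the wait with the new API; the class name and timeout are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public static class StartupWaiter
{
    public static async Task WaitForStartupAsync(
        IHostApplicationLifetime lifetime, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource(
            TaskCreationOptions.RunContinuationsAsynchronously);

        // ApplicationStarted is a CancellationToken that fires on startup;
        // complete the TaskCompletionSource when it does.
        using var registration = lifetime.ApplicationStarted.Register(
            () => tcs.TrySetResult());

        // .NET 6: bound the wait with a timeout and external cancellation.
        await tcs.Task.WaitAsync(TimeSpan.FromSeconds(30), cancellationToken);
    }
}
```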

In this post, I look at how the .NET 6 API Task.WaitAsync() is actually implemented.

Diving into the Task.WaitAsync implementation

For the rest of the post I'm going to walk through the implementation behind the API. There's not anything very surprising there, but I haven't looked much at the code behind Task and its kin, so it was interesting to see some of the details.

Task.WaitAsync() was introduced in this PR by Stephen Toub .

We'll start with the Task.WaitAsync methods :
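The listing is omitted in this copy; the three public overloads in question have the following shape (as documented for .NET 6; Task<TResult> exposes matching generic versions):

```csharp
public Task WaitAsync(CancellationToken cancellationToken);
public Task WaitAsync(TimeSpan timeout);
public Task WaitAsync(TimeSpan timeout, CancellationToken cancellationToken);
```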

These three methods all ultimately delegate to a different, private WaitAsync overload (shown shortly) that takes a timeout in milliseconds. This timeout is calculated and validated in the ValidateTimeout method, which asserts that the timeout is in the allowed range and converts it to a uint of milliseconds.

Now we come to the WaitAsync method that all the public APIs delegate to.

Most of this method is checking whether we can take a fast-path and avoid the extra work involved in creating a CancellationPromise<T> , but if not, then we need to dive into it. Before we do, it's worth addressing the VoidTaskResult generic parameter used with the returned CancellationPromise<T> .

VoidTaskResult is an internal nested type of Task , which is used a little like the unit type in functional programming ; it indicates that you can ignore the T .

Using VoidTaskResult means more of the implementation of Task and Task<T> can be shared. In this case, the CancellationPromise<T> implementation is the same in both the Task.WaitAsync() implementation (shown above) and the generic versions of those methods exposed by Task<TResult>.

So with that out of the way, let's look at the implementation of CancellationPromise<T> to see how the magic happens.

Under the hood of CancellationPromise<T>

There are quite a few types involved in CancellationPromise<T> that you probably won't be familiar with unless you regularly browse the .NET source code, so we'll take this one slowly.

First of all, we have the type signature for the nested type CancellationPromise<T> :

There's a few things to note in the signature alone:

  • private protected —this modifier means that the CancellationPromise<T> type can only be accessed from classes that derive from Task , and are in the same assembly . Which means you can't use it directly in your user code.
  • Task<TResult> —the CancellationPromise<T> derives from Task<TResult> . For the most part it's a "normal" task, that can be cancelled, completed, or faulted just like any other Task .
  • ITaskCompletionAction —this is an internal interface that essentially allows you to register a lightweight action to take when a Task completes. This is similar to a standard continuation created with ContinueWith , except it is lower overhead . Again, this is internal , so you can't use it in your types. We'll look in more depth at this shortly.

We've looked at the signature, now let's look at its private fields. The descriptions for these in the source cover it pretty well, I think:

So we have 3 fields:

  • The original Task on which we called WaitAsync()
  • The cancellation token registration received when we registered with the CancellationToken . If the default cancellation token was used, this will be a "dummy" default instance.
  • The timer used to implement the timeout behaviour (if required).

Note that the _timer field is of type TimerQueueTimer . This is another internal type, part of the overall Timer implementation . We're going deep enough as it is in this post, so I'll only touch briefly on how it is used below. For now it's enough to know that it behaves similarly to a regular System.Threading.Timer .

So, the CancellationPromise<T> is a class that derives from Task<T> , maintains a reference to the original Task , a CancellationTokenRegistration , and a TimerQueueTimer .

The CancellationPromise constructor

Let's look at the constructor now. We'll take it in four bite-size chunks. First off, the arguments passed in from Task.WaitAsync() have some debug assertions applied, and then the original Task is stored in _task . Finally, the CancellationPromise<T> instance is registered as a completion action for the source Task (we'll come back to what this means shortly).
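That first chunk can be sketched like this (simplified from the source; AddCompletionAction is internal, so this is illustrative only):

```csharp
// Simplified sketch of the first part of the constructor
internal CancellationPromise(Task source, uint millisecondsDelay, CancellationToken token)
{
    Debug.Assert(source != null);
    Debug.Assert(millisecondsDelay != 0);

    // Store the original task, and hook this promise up as a
    // completion action so Invoke() runs when the source completes
    _task = source;
    source.AddCompletionAction(this);

    // (timeout and cancellation registration follow)
}
```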

Next we have the timeout configuration. This creates a TimerQueueTimer and passes in a callback to be executed after millisecondsDelay (and does not execute periodically). A static lambda is used to avoid capturing state, which instead is passed as the second argument to the TimerQueueTimer . The callback tries to mark the CancellationPromise<T> as faulted by setting a TimeoutException() (remember that CancellationPromise<T> itself is a Task ), and then does some cleanup we'll see later.
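The timeout chunk looks approximately like this (simplified; TimerQueueTimer is internal, so this won't compile in user code):

```csharp
// Simplified sketch of the timeout configuration
if (millisecondsDelay != Timeout.UnsignedInfinite)
{
    _timer = new TimerQueueTimer(static state =>
    {
        // state is the CancellationPromise<TResult> itself - the static
        // lambda avoids capturing it as a closure
        var thisRef = (CancellationPromise<TResult>)state!;

        // Try to fault the promise with a TimeoutException, then clean up
        if (thisRef.TrySetException(new TimeoutException()))
        {
            thisRef.Cleanup();
        }
    },
    state: this,
    dueTime: millisecondsDelay,
    period: Timeout.UnsignedInfinite,   // fire once, not periodically
    flowExecutionContext: false);       // skip ExecutionContext capture
}
```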

Note also that flowExecutionContext is false , which avoids capturing and restoring the execution context for performance reasons. For more about execution context, see this post by Stephen Toub .

After configuring the timeout, the constructor configures the CancellationToken support. This similarly registers a callback to fire when the provided CancellationToken is cancelled. Note that again this uses UnsafeRegister() (instead of the normal Register() ) to avoid flowing the execution context into the callback.

Finally, the constructor does some housekeeping. This accounts for the situation where the source Task completes while the constructor is executing, before the timeout and cancellation have been registered, or where the timeout fires before the cancellation is registered. Without the following block, you could end up with leaked resources that are never cleaned up.

That's all the code in the constructor. Once constructed, the CancellationPromise<T> is returned from the WaitAsync() method as a Task (or a Task<T> ), and can be awaited just as any other Task . In the next section we'll see what happens when the source Task completes.

Implementing ITaskCompletionAction

In the constructor of CancellationPromise<T> we registered a completion action with the source Task (the one we called WaitAsync() on):

The object passed to AddCompletionAction() must implement ITaskCompletionAction (as CancellationPromise<T> does). The ITaskCompletionAction interface is simple, consisting of a single method (which is invoked when the source Task completes) and a single property:
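Sketched from the .NET source, the interface is roughly:

```csharp
// Internal interface - not usable from user code
internal interface ITaskCompletionAction
{
    // Whether Invoke may run arbitrary (user) code
    bool InvokeMayRunArbitraryCode { get; }

    // Called when the task this action was registered with completes
    void Invoke(Task completingTask);
}
```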

CancellationPromise<T> implements this method as shown below. It sets InvokeMayRunArbitraryCode to true (as all non-specialised scenarios do) and implements the Invoke() method, receiving the completed source Task as an argument.

The implementation essentially "copies" the status of the completed source Task into the CancellationPromise<T> task:

  • If the source Task was cancelled, it calls TrySetCancelled , re-using the exception dispatch information to "hide" the details of CancellationPromise<T>
  • If the source task was faulted, it calls TrySetException()
  • If the task completed, it calls TrySetResult

Note that whatever the status of the source Task , the TrySet* method may fail if cancellation was requested or the timeout expired in the meantime. In those cases the bool variable is set to false , and we can skip calling Cleanup() (as the successful path will call it instead).
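Putting that together, the completion action can be sketched like this (simplified; the real source re-uses the original exception dispatch info rather than creating a new cancellation token):

```csharp
// Simplified sketch of the ITaskCompletionAction implementation
bool ITaskCompletionAction.InvokeMayRunArbitraryCode => true;

void ITaskCompletionAction.Invoke(Task completingTask)
{
    bool set;
    switch (completingTask.Status)
    {
        case TaskStatus.Canceled:
            set = TrySetCanceled(new CancellationToken(true));
            break;
        case TaskStatus.Faulted:
            set = TrySetException(completingTask.Exception!.InnerExceptions);
            break;
        default:
            // Copy the result across (or default, for non-generic tasks)
            set = completingTask is Task<TResult> typed
                ? TrySetResult(typed.Result)
                : TrySetResult(default!);
            break;
    }

    // If a timeout or cancellation already completed this promise,
    // set is false and the winning path calls Cleanup() instead
    if (set)
    {
        Cleanup();
    }
}
```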

Now you've seen all three callbacks for the three possible outcomes of WaitAsync() . In each case, whether the task, timeout, or cancellation completes first, we have some cleanup to do.

Cleaning up

One of the things it's easy to forget when working with CancellationToken s and timers is to clean up after yourself. CancellationPromise<T> makes sure to do this by always calling Cleanup() . This does three things:

  • Dispose the CancellationTokenRegistration returned from CancellationToken.UnsafeRegister()
  • Close the TimerQueueTimer (if it exists), which cleans up the underlying resources
  • Remove the callback from the source Task , so the ITaskCompletionAction.Invoke() method on CancellationPromise<T> won't be called.
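A sketch of Cleanup() tying those three steps together (simplified; RemoveContinuation is internal to Task ):

```csharp
// Simplified sketch - each call is idempotent and thread-safe
private void Cleanup()
{
    _registration.Dispose();        // unhook from the CancellationToken
    _timer?.Close();                // release the underlying timer
    _task.RemoveContinuation(this); // stop Invoke() from being called
}
```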

Each of these methods is idempotent and thread-safe, so it's safe to call the Cleanup() method from multiple callbacks, which might happen if something fires when we're still running the CancellationPromise<T> constructor, for example.

One point to bear in mind: even if a timeout occurs or the cancellation token fires and the CancellationPromise<T> completes, the source Task will continue to execute in the background. The caller who executed source.WaitAsync() won't ever see the result of the Task , but if that Task has side effects, they will still occur.

And that's it! It took a while to go through it, but there's not actually much code involved in the implementation of WaitAsync() , and it's somewhat comparable to the "naive" approach you might have used in previous versions of .NET, but using some of .NET's internal types for performance reasons. I hope it was interesting!

In this post I took an in-depth look at the new Task.WaitAsync() method in .NET 6, exploring how it is implemented using internal types of the BCL. I showed that the Task returned from WaitAsync() is actually a CancellationPromise<T> instance, which derives from Task<T> , but which supports cancellation and timeouts directly. Finally, I walked through the implementation of CancellationPromise<T> , showing how it wraps the source Task .


Using Task.CompletedTask, Task.FromResult and Return in C# Async Methods

Posted by Code Maze | Sep 2, 2023


Developers often use async methods in C# to improve the throughput of their applications. In this article, we will learn about the usage of three different constructs in C# async methods. When working with asynchronous programming in C#, developers may encounter these constructs, which serve distinct purposes. Understanding their use cases is vital to writing clear and efficient asynchronous code.

Let’s start.

Using Task.CompletedTask in Async Methods

Assuming we want to adhere to the asynchronous programming pattern, let's consider a method that only returns a task without actually performing any asynchronous work.

In this case, we can use Task.CompletedTask . It’s similar to saying, “We have completed the task, and here’s a task to indicate the completion.”

Let’s create the TaskCompletedHandler class to understand this:


By using Task.CompletedTask in the UseTaskCompletedAsync() method, we avoid unnecessary overhead and resource consumption, as we’re only writing to the console and returning an already completed task without additional processing .

Although Task.CompletedTask is often used in asynchronous methods, it can be useful in synchronous methods as well to maintain consistency in method signatures, especially when we have both asynchronous and synchronous versions of a method.

Let’s see what a synchronous counterpart of our method looks like:

We are able to ensure consistency in the UseTaskCompletedSync() method by using a void return type and performing synchronous work as well.

We use Task.CompletedTask.Wait() in our method; however, using Wait() in a synchronous method isn't a recommended practice , as it can block the current thread, defeating the purpose of asynchronous programming.

Recognizing that blocking threads in synchronous contexts can lead to real-world performance and scalability issues is critical.

Using Task.FromResult in Async Methods

Task.FromResult is a sibling of Task.CompletedTask . It is a useful method in async programming for creating a completed Task with a specific result . It also allows us to quickly return a completed Task in situations where we don't have asynchronous operations.

Let’s create the UseTaskFromResultAsync() method to understand this:

Using Task.FromResult in the UseTaskFromResultAsync() method allows us to return a string result with the Task . Additionally, we use Task<string> in our method signature rather than Task .

In synchronous methods, we could also potentially use Task.FromResult , however, its usage might not align with the typical intention of synchronous methods:
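A synchronous version matching the description might be sketched as follows (hypothetical names, not the article's listing):

```csharp
using System;
using System.Threading.Tasks;

public class TaskFromResultHandler
{
    public string UseTaskFromResultSync()
    {
        Console.WriteLine("Producing a result");

        // Unwrapping with Result is safe here only because the task is
        // already completed; in general, blocking on Result is not advised.
        return Task.FromResult("Hello from Task.FromResult").Result;
    }
}
```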

When we remove the async and Task keywords that we had in the last code block, we need to remove the await keyword also so that we don’t get a compiler error.

In the UseTaskFromResultSync() method, we get the result value of our task by using the Result property; the method uses Task.FromResult to return a completed Task<string> and unwraps it immediately. Note that using Result is not advised, for the same reasons as the Wait() method.

Using Return Statements in Async Methods

On the other hand, if our asynchronous method does perform actual asynchronous work, we’ll use the return statement. This will signify an ongoing asynchronous operation. Therefore, the caller is notified that we are working asynchronously, with a task representing progress and completion.

Let’s create the UseReturnAsync() method and see how we can achieve this using a return statement:

In the UseReturnAsync() method, we write a message to the console, carry out an asynchronous operation, and then return a task representing the completion of that operation. The caller can await this task to wait for the result.

Note that it is the await, not the return, that suspends the method: execution pauses at each awaited operation until it completes, and the return statement then supplies the value with which the method's task is completed.

The return keyword is used in a similar way in both synchronous and asynchronous methods to provide a value as the result of the method. Whether the method is synchronous or asynchronous, the basic purpose of the return keyword remains the same – to send a value back from the method to the calling code.

In this article, we learned the usage of Task.CompletedTask , Task.FromResult and return in C# asynchronous methods. We also delved a little into their usage in a synchronous method.

In a nutshell, we employ Task.CompletedTask when our method doesn't perform actual asynchronous work but still needs to provide a completed task. We use Task.FromResult in a manner similar to Task.CompletedTask but include a result.

On the other hand, the return statement is used when our method performs asynchronous operations and returns a task representing their completion.

By grasping the usage of and distinctions between these constructs, we can write concise and efficient asynchronous code.




Task<TResult>.Result Property


Gets the result value of this Task<TResult> .

Property Value

The result value of this Task<TResult> , which is of the same type as the task's type parameter.

Exceptions

AggregateException , in either of two cases:

  • The task was canceled. The InnerExceptions collection contains a TaskCanceledException object.
  • An exception was thrown during the execution of the task. The InnerExceptions collection contains information about the exception or exceptions.

The following example is a command-line utility that calculates the number of bytes in the files in each directory whose name is passed as a command-line argument. If the directory contains files, it executes a lambda expression that instantiates a FileStream object for each file in the directory and retrieves the value of its FileStream.Length property. If a directory contains no files, it simply calls the FromResult method to create a task whose Task<TResult>.Result property is zero (0). When the tasks finish, the total number of bytes in all of a directory's files is available from the Result property.

Accessing the property's get accessor blocks the calling thread until the asynchronous operation is complete; it is equivalent to calling the Wait method.

Once the result of an operation is available, it is stored and is returned immediately on subsequent calls to the Result property. Note that, if an exception occurred during the operation of the task, or if the task has been cancelled, the Result property does not return a value. Instead, attempting to access the property value throws an AggregateException exception.
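A minimal illustration of that behaviour (our own example, not from the documentation):

```csharp
using System;
using System.Threading.Tasks;

// A task that has already faulted
Task<int> faulted = Task.FromException<int>(new InvalidOperationException("boom"));

try
{
    // Accessing Result on a faulted task throws AggregateException...
    int value = faulted.Result;
}
catch (AggregateException ex)
{
    // ...with the original exception available via InnerExceptions
    Console.WriteLine(ex.InnerExceptions[0].Message); // prints "boom"
}
```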

  • Task Parallel Library (TPL)
  • Task-based Asynchronous Programming
  • How to: Return a Value from a Task


Jaliya's Blog

Monday, July 1, 2019: Task.Wait() vs Task.GetAwaiter().GetResult()

When you need to block synchronously on a task, there are two common options:

  • Task.Wait() (or Task.Result to get the return value if it returns something)
  • Task.GetAwaiter().GetResult()

The practical difference shows up when the task faults: Task.Wait() and Task.Result wrap the task's exception in an AggregateException , whereas Task.GetAwaiter().GetResult() throws the original exception directly, which makes things like debugging and logging easier.

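The difference can be sketched with a task that always faults (a minimal example of our own):

```csharp
using System;
using System.Threading.Tasks;

Task failing = Task.FromException(new InvalidOperationException("boom"));

try
{
    failing.Wait();
}
catch (AggregateException ex)
{
    // Wait() wraps the task's exception in an AggregateException
    Console.WriteLine(ex.InnerException!.GetType().Name); // InvalidOperationException
}

try
{
    failing.GetAwaiter().GetResult();
}
catch (InvalidOperationException)
{
    // GetAwaiter().GetResult() rethrows the original exception directly
    Console.WriteLine("Caught InvalidOperationException directly");
}
```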

