.Net Core Central


How to run background tasks in ASP.NET Core Application

In this blog post, I am going to walk through how to run background tasks in ASP.NET Core web applications using the infrastructure and APIs provided by the ASP.NET Core framework.

There are a lot of scenarios where you want to run a task continuously in the background. Oftentimes, we create our own interfaces and classes and wire them up in the Startup class to achieve this functionality.

In this blog post, I am going to cover two main ways of running background tasks in ASP.NET Core web applications. Both are provided out of the box by the ASP.NET Core framework.

We do not need any external NuGet package for implementing these background tasks.

Background tasks in ASP.NET Core

Application for background tasks

To start this example, first of all I will create a new ASP.NET Core Web API Application.

Firstly, to do that, I will open up Visual Studio 2019. Once Visual Studio opens up, I will select the menu File -> New -> Project. This will open the Create a new Project popup window.

Secondly, in the Create a new Project popup window, I will select ASP.NET Core Web Application from the project templates and click on the Next button.

Thirdly, on the next page, I will provide the name of the application as BackgroundTask.Demo and click on the Create button.

Finally, on the final page, I will select the API template option, keep the other values at their defaults (ASP.NET Core 3.1), and click on the Create button.

Two ways of running Background tasks

As I mentioned earlier, we can run background tasks in ASP.NET Core using two different constructs provided by the ASP.NET Core framework.

They are as follows:

  • Firstly, we can implement the IHostedService interface
  • Secondly, we can derive from the BackgroundService abstract base class

Background tasks using IHostedService

Now that the project is ready, it is time to create our first background tasks.

For creating a continuously running background task, let us consider we have a printing process. It will print an incrementing integer number.

To achieve this, I will create a new class, BackgroundPrinter, which will implement the IHostedService interface.

The IHostedService interface provides two methods, StartAsync and StopAsync. The StartAsync method is where the task should be started, whereas the StopAsync method is where we implement any logic that should run when the task is stopped.

A couple of important points to remember about the StartAsync method:

  • Firstly, the StartAsync method is called by the framework before the Configure method of the Startup class is called
  • Secondly, the StartAsync method is called before the server is started

Finally, if the class implementing IHostedService uses any unmanaged objects, it has to implement the IDisposable interface for disposing of those objects.

IHostedService implementation with Timer object

For the first version of the code, I will implement a Timer inside the BackgroundPrinter class. And on a timer interval, I will print out the incremented number.

Firstly, I will declare a Timer object and an integer variable number in the class.

Secondly, inside the StartAsync I will create a new instance of the Timer .

Thirdly, for the Timer delegate, I will define an anonymous function, where I will increment and print the integer number using ILogger defined as a class-level variable.

Fourthly, I will set the timer to run in a 5-second interval.

Finally, I will implement the IDisposable interface and implement the Dispose method to call the Timer object’s Dispose method.
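Putting those steps together, a minimal sketch of the BackgroundPrinter class could look like the following (the member names are illustrative, not the post's exact listing):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace BackgroundTask.Demo
{
    public class BackgroundPrinter : IHostedService, IDisposable
    {
        private readonly ILogger<BackgroundPrinter> logger;
        private Timer timer;
        private int number;

        public BackgroundPrinter(ILogger<BackgroundPrinter> logger)
        {
            this.logger = logger;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            // Fire the callback immediately, then every 5 seconds.
            timer = new Timer(_ =>
            {
                Interlocked.Increment(ref number);
                logger.LogInformation("Printing the number {Number}", number);
            }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));

            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            // Stop the timer from firing again when the host shuts down.
            timer?.Change(Timeout.Infinite, 0);
            return Task.CompletedTask;
        }

        public void Dispose() => timer?.Dispose();
    }
}
```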

Configuring the BackgroundPrinter class

Once the BackgroundPrinter class is ready, we will register it with the dependency injection container using the AddHostedService extension method on the IServiceCollection interface.

Now, we can do this either in the Startup class or in the Program class. I prefer doing it in the Program class, just for clear separation.

Hence I am going to update the Program class to achieve this. In the CreateHostBuilder method, I will use the ConfigureServices extension method on the IHostBuilder interface to do this.
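One possible shape of the updated Program class (targeting the ASP.NET Core 3.1 generic host):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace BackgroundTask.Demo
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices(services =>
                {
                    // Register the hosted service so the host starts it with the app.
                    services.AddHostedService<BackgroundPrinter>();
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}
```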

Running the application

Now I am going to run the application to see the output in the console.

Once I run the application, I will be able to access the API endpoints. At the same time, the console will print the auto-incremented number every 5 seconds in the background.


Dependency Injection with Background task

In a production-scale application, we will probably not put the business logic inside the class that implements IHostedService. We avoid doing so in order to maintain a separation of concerns and keep responsibilities properly defined.

In most cases, the logic of what happens on the task will probably be encapsulated into its own class. Hence, it is very important that the dependency injection works with background tasks.

Now the good news is that since the background tasks are configured into the IServiceCollection , any types added to the dependency injection container will be available to the background task.

Worker Class

To demonstrate that I will create a class Worker and interface IWorker , which will have the logic of incrementing and printing the number. And the BackgroundPrinter class will just use the interface IWorker to execute the logic.

The IWorker interface will have a single method, DoWork. The DoWork method will take a single parameter, a CancellationToken.

For the Worker class, inside the DoWork method, instead of running a timer, I will create a while loop. The loop will keep running until cancellation is requested on the CancellationToken, which will be passed in from the StartAsync method of the BackgroundPrinter class.

Inside the while loop, I will increment the class level integer and print it out to the console using the ILogger . And at the end of the while loop, I will do a Task.Delay waiting 5 seconds before the loop executes again.
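A sketch of the interface and the Worker class described above (names are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public interface IWorker
{
    Task DoWork(CancellationToken cancellationToken);
}

public class Worker : IWorker
{
    private readonly ILogger<Worker> logger;
    private int number;

    public Worker(ILogger<Worker> logger)
    {
        this.logger = logger;
    }

    public async Task DoWork(CancellationToken cancellationToken)
    {
        // Keep looping until cancellation is requested.
        while (!cancellationToken.IsCancellationRequested)
        {
            number++;
            logger.LogInformation("Worker printing the number {Number}", number);

            // Wait 5 seconds before the next iteration; the token cuts the
            // delay short when cancellation is requested.
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}
```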

Once the Worker class is ready, I will update the BackgroundPrinter class to use the IWorker interface for the business logic.

Apart from injecting ILogger, I will now also inject IWorker in the constructor of the BackgroundPrinter class.
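Reworked along those lines, the BackgroundPrinter might look roughly like this (note that DoWork is deliberately not awaited inside StartAsync, so the endless loop does not block application startup):

```csharp
public class BackgroundPrinter : IHostedService
{
    private readonly ILogger<BackgroundPrinter> logger;
    private readonly IWorker worker;

    public BackgroundPrinter(ILogger<BackgroundPrinter> logger, IWorker worker)
    {
        this.logger = logger;
        this.worker = worker;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off the worker loop and return immediately.
        _ = worker.DoWork(cancellationToken);
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        logger.LogInformation("Stopping the background printer");
        return Task.CompletedTask;
    }
}
```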

Finally, I will register the Worker class in the dependency injection container inside of the Startup class. I will update the ConfigureServices method to add the Worker class as a singleton instance to the dependency injection container.
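The registration in Startup.ConfigureServices could be as simple as this:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // The hosted service resolves IWorker from the container.
    services.AddSingleton<IWorker, Worker>();
}
```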

I will not make any changes to the Program class, since we are still using BackgroundPrinter class for the background task. Now if I run the application, I will see the exact same response in the console as before.

Background tasks using BackgroundService

Using the BackgroundService abstract base class is the second way of running background tasks. The implementation of BackgroundService is relatively simpler compared to IHostedService, but at the same time you have less control over how to start and stop the task.

To demonstrate how the BackgroundService abstract base class works, I will create a new class, DerivedBackgroundPrinter. This new class will derive from the BackgroundService class.

And this time, I will just use the IWorker interface for the business logic. Hence, I will create a constructor dependency on IWorker interface. Since the IWorker interface is already configured in the dependency injection container, it will automatically be passed along to this class.

The BackgroundService abstract base class provides a single abstract method ExecuteAsync . We will have our implementation to call the DoWork of the IWorker interface inside of this ExecuteAsync method.
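A minimal sketch of the derived class:

```csharp
public class DerivedBackgroundPrinter : BackgroundService
{
    private readonly IWorker worker;

    public DerivedBackgroundPrinter(IWorker worker)
    {
        this.worker = worker;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // The stopping token is signaled when the host shuts down.
        await worker.DoWork(stoppingToken);
    }
}
```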

Once this class is ready, I will update the Program class, to use DerivedBackgroundPrinter class instead of BackgroundPrinter as a background task runner.

Hence, inside of the CreateHostBuilder method of the Program class, I will replace the BackgroundPrinter with DerivedBackgroundPrinter for the AddHostedService generic extension method call.

Now if I run the application, I will see the exact same response as before.

As you can see it is extremely simple to create background tasks using ASP.NET Core provided IHostedService interface as well as BackgroundService abstract base class. The best part is that there is no external NuGet package dependency.

The source code for this blog is available in my GitHub repository here .

The entire coding session is also available in my YouTube channel here .

MAKOLYTE


How to use BackgroundService in ASP.NET Core

You can use a hosted BackgroundService in ASP.NET Core for two purposes:

  • Run a long-running background task.
  • Run a periodic task in the background.

In this article, I’ll show how to create and register a hosted BackgroundService. In this example, it periodically pings a URL and logs the results.

1 – Subclass BackgroundService

The first step is to subclass BackgroundService:

  • Add the constructor with parameters for the dependencies.
  • Add async to the method signature.

In this example, we’ll create a background pinger service. So here’s an example of how to subclass BackgroundService (we’ll implement ExecuteAsync() in the next step):
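A sketch of the subclass, assuming .NET 6 implicit usings (the class and field names here are assumptions, not the article's exact listing):

```csharp
using System.Net.NetworkInformation;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class PingerService : BackgroundService
{
    private readonly PingerSettings settings;
    private readonly ILogger<PingerService> logger;

    // Dependencies are injected through the constructor.
    public PingerService(PingerSettings settings, ILogger<PingerService> logger)
    {
        this.settings = settings;
        this.logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Implemented in the next step.
        await Task.CompletedTask;
    }
}
```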

For reference, here’s the PingerSettings class:
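A plausible shape for the settings class (the exact properties are assumptions):

```csharp
public class PingerSettings
{
    // URL or host name to ping.
    public string Target { get; set; } = "google.com";

    // How long to wait between pings.
    public TimeSpan PingInterval { get; set; } = TimeSpan.FromSeconds(30);

    // How long to give a single ping before treating it as timed out.
    public TimeSpan PingTimeout { get; set; } = TimeSpan.FromSeconds(5);
}
```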

2 – Implement ExecuteAsync()

Here’s an example of implementing ExecuteAsync(). This is pinging a URL every X seconds in a loop forever (until the app shuts down):
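A rough sketch of that loop, using the property names assumed above:

```csharp
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    using var ping = new Ping();

    while (!stoppingToken.IsCancellationRequested)
    {
        try
        {
            // Ping.SendPingAsync() doesn't accept a CancellationToken, so race it
            // against a cancellable Task.Delay() and await whichever finishes first.
            var pingTask = ping.SendPingAsync(settings.Target);
            var timeoutTask = Task.Delay(settings.PingTimeout, stoppingToken);

            var completed = await Task.WhenAny(pingTask, timeoutTask);

            if (completed == pingTask)
            {
                var reply = await pingTask;
                logger.LogInformation("Pinged {Target}: {Status} in {Time} ms",
                    settings.Target, reply.Status, reply.RoundtripTime);
            }
            else
            {
                logger.LogWarning("Ping to {Target} timed out", settings.Target);
            }
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            logger.LogError(ex, "Ping to {Target} failed", settings.Target);
        }

        try
        {
            // Wait for the next interval; exits early on shutdown.
            await Task.Delay(settings.PingInterval, stoppingToken);
        }
        catch (OperationCanceledException)
        {
            // Graceful shutdown requested.
        }
    }
}
```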

Note: Ping.SendPingAsync() doesn’t accept a CancellationToken. So I used the same technique I use to set a timeout for TcpClient : use Task.Delay() with the CancellationToken and use Task.WhenAny() to await both tasks.

Make sure to pay attention to the CancellationToken. This is canceled by the framework when the app is shutting down. This gives your BackgroundService a chance to perform a graceful shutdown .

3 – Register the BackgroundService and dependencies

In the ASP.NET Core initialization code, register the BackgroundService with AddHostedService(). Also register its dependencies. Here’s an example:
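For example, with .NET 6 minimal hosting (the settings values are placeholders):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Register the background service and the settings it depends on.
builder.Services.AddSingleton(new PingerSettings
{
    Target = "google.com",
    PingInterval = TimeSpan.FromSeconds(30),
    PingTimeout = TimeSpan.FromSeconds(5)
});
builder.Services.AddHostedService<PingerService>();

var app = builder.Build();
app.MapControllers();
app.Run();
```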

Note: Before .NET 6, do this in Startup.ConfigureServices().

It has two dependencies – PingerSettings and ILogger<PingerService>. You don’t need to explicitly register ILogger<PingerService> (at least not in ASP.NET Core 6+).

When the app starts, it loads all hosted services and keeps them running in the background. It calls ExecuteAsync() right away. Be sure to await right away (with Task.Yield() if necessary) in order to not block startup.

Launch the ASP.NET Core app. The background service starts running immediately. In this example, it’s pinging every 30 seconds and logging the results to the console log:

Related Articles

  • ASP.NET Core – Control the graceful shutdown time for background services
  • C# – Dependency inject BackgroundService into controllers
  • Control ASP.NET behavior when background service crashes
  • Logging to the database with ASP.NET Core
  • C# – Handle a faulted Task’s exception

9 thoughts on “How to use BackgroundService in ASP.NET Core”

Background service lifetimes depend on the App Pool’s “idle time out” setting. If your app pool has an “idle timeout” of 5 mins your service will stop after 5 mins and won’t restart. Therefore a request to the api in this example must be made within every 5 mins.

Thanks David!

When using IIS as the web server, it’s typically recommended to set your idle timeout to 0. This is especially true if you have a background service. In addition, be aware of app pool recycles. By default, IIS recycles your app pool every 29 hours. This attempts to do a graceful shutdown, which calls StopAsync() on all registered background services.

So if you want your background service to try to do a graceful shutdown, you can override BackgroundService.StopAsync().

Thanks David and Mak. Could not figure out why my background service kept stopping. Do not see any mention of this in the Hosted Service docs… But anyway, I set my idle time to 0 and my job is now running every 24hrs.

How did you set up the idle time to 0? Where is this setting?

1. Open IIS Manager
2. Click on Application Pools
3. Right-click the relevant app pool > click Advanced Settings
4. The setting is called Idle Time-out (minutes)

Will BackgroundService be suitable for process that takes let’s say 30min?

Yes, it’s suitable for long-running processes. It will stay running as long as the web app process is running.

Note: Watch out for the web server recycling your web app. That will stop your background service.

A nice and simple but thorough enough explanation of how to use background services in ASP.NET. I have been reading quite a few blog posts over the last few days in search of an answer on the lifetime of such background services and, while it was not in your blog post, I found it in the comments section.

Thanks for your time and effort and for sharing this.

You’re welcome! It’s great you were able to find the answer in the comments.


Async processing of long-running tasks in ASP.NET Core

Sometimes, invoking an API endpoint needs to trigger a long-running task. Examples of this could be invoking an external and slow API or sending an email, which you don't want the caller of your API to wait for. There are multiple ways of implementing this: using a message broker, a fire-and-forget API request, or something else entirely. In this post, I'll show you how to implement async processing in ASP.NET Core, using a queue and the background worker feature.

Async processing of long-running tasks in ASP.NET Core

Before we begin let me make something clear. Background workers in ASP.NET Core are fine for handling minor workloads. There are some drawbacks, like jobs not being persisted on server restart. The solution proposed in this post can be a good v1 and when you feel ready for it, consider moving to a more robust approach like putting messages on a service bus or implementing a third-party library (check out Hangfire and Quartz.NET ).

To understand how to process long-running tasks, let's start by creating an example and making it sloooooow. Create a new ASP.NET Core application through Visual Studio, dotnet , or your preferred scaffolding engine. For this example, I have chosen the MVC template, but it could just as well be one of the other options.

Next, in the HomeController.cs file, create a new method to simulate a call to a slow running task:
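Something along these lines, assuming the _logger field that the MVC template injects into HomeController:

```csharp
private async Task CallSlowApi()
{
    _logger.LogInformation("Starting slow API call");

    // Simulate a slow external dependency.
    await Task.Delay(TimeSpan.FromSeconds(10));

    _logger.LogInformation("Finished slow API call");
}
```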

For the demo, I'm waiting 10 seconds to simulate some work. The Task.Delay line would be replaced with some integration code in a real-life example. I have wrapped the code in information messages, which I can later inspect in Visual Studio or through the configured logger ( maybe elmah.io? ).

Then, invoke the CallSlowApi method from the Index method:

Let's run the application and inspect the performance in Developer Tools:

Performance in developer tools

As expected, loading the frontpage takes just above 10 seconds (10 seconds for the Task.Delay and 30 milliseconds to load the page).

Refactoring time! To process the message asynchronously, we'll implement a background worker in ASP.NET Core with an async queue in front. Let's start with the queue. Add a new class named BackgroundWorkerQueue and implementation like shown here:
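A sketch of such a queue; the ConcurrentQueue holds the work items, and a SemaphoreSlim (an assumption here) lets the consumer wait asynchronously until something is available:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class BackgroundWorkerQueue
{
    private readonly ConcurrentQueue<Func<CancellationToken, Task>> _workItems =
        new ConcurrentQueue<Func<CancellationToken, Task>>();
    private readonly SemaphoreSlim _signal = new SemaphoreSlim(0);

    public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
    {
        if (workItem == null) throw new ArgumentNullException(nameof(workItem));

        _workItems.Enqueue(workItem);
        // Signal the consumer that an item is available.
        _signal.Release();
    }

    public async Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken)
    {
        // Wait until something has been queued (or the app is shutting down).
        await _signal.WaitAsync(cancellationToken);
        _workItems.TryDequeue(out var workItem);
        return workItem;
    }
}
```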

It's a pretty simple implementation of a C#-based queue, using the ConcurrentQueue class from the System.Collections.Concurrent namespace. The QueueBackgroundWorkItem method will put a Func in the queue for later processing and the DequeueAsync method will pull a Func from the queue and return it.

Next, we need someone to execute the Func work items put on the queue. Add a new class named LongRunningService with the following implementation:
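Roughly like this:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class LongRunningService : BackgroundService
{
    private readonly BackgroundWorkerQueue queue;

    public LongRunningService(BackgroundWorkerQueue queue)
    {
        this.queue = queue;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Pull the next work item off the queue and run it.
            var workItem = await queue.DequeueAsync(stoppingToken);
            await workItem(stoppingToken);
        }
    }
}
```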

This is an implementation of an ASP.NET Core background service, which is indicated by extending BackgroundService. The service accepts the queue that we implemented in the last step and automatically dequeues and executes work items.

Both the BackgroundWorkerQueue and LongRunningService classes need to be registered with ASP.NET Core. Include the following code in the ConfigureServices method in the Startup.cs file:
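For example:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    services.AddHostedService<LongRunningService>();
    services.AddSingleton<BackgroundWorkerQueue>();
}
```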

That's it. All we need now is to refactor the CallSlowApi method. The HomeController need an instance of the BackgroundWorkerQueue class injected in its constructor:

The Task.Delay call can be moved inside a Func and handed off to the queue like this:
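Using the field injected above, the refactored method could look roughly like this:

```csharp
private void CallSlowApi()
{
    _logger.LogInformation("Starting slow API call");

    // Hand the slow work to the queue and return immediately;
    // LongRunningService will execute it in the background.
    _backgroundWorkerQueue.QueueBackgroundWorkItem(async token =>
    {
        await Task.Delay(TimeSpan.FromSeconds(10), token);
        _logger.LogInformation("Finished slow API call");
    });
}
```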

I simply moved the last two lines of the existing method inside the Func provided for the QueueBackgroundWorkItem method.

Launching the website is now snappy as ever:

Optimized performance

To prove that the long-running task is still actually running, you can put a breakpoint after the call to Task.Delay or you can simply inspect the log output in Visual Studio:

Log output

For a real-life sample of implementing this, check out our integration with ASP.NET Core here: https://github.com/elmahio/Elmah.Io.AspNetCore .


Shady Nagy

Efficient Background Task Processing in ASP.NET Core: Techniques and Best Practices

Introduction

In web applications, you may encounter situations where you need to execute long-running tasks, such as processing large files, sending emails, or calling external APIs. If these tasks are executed synchronously within an HTTP request, it could lead to a poor user experience and even timeouts.

In this post, we’ll discuss how to address this problem by implementing a background task queue in ASP.NET Core. This approach enables the execution of long-running tasks without blocking the main request processing pipeline.

The Problem

Let’s say we have an API endpoint that receives a video file and needs to process it. The processing includes uploading the video to a third-party storage service, updating video metadata, and adding the video to an archive. This process can take a considerable amount of time and should not be executed synchronously within the HTTP request.

Here’s the initial implementation of the endpoint:
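The original listing is not reproduced here, but it presumably looked something like the sketch below; IVideoService and its methods are placeholders for the storage, metadata, and archive calls described above:

```csharp
[ApiController]
[Route("api/videos")]
public class VideosController : ControllerBase
{
    private readonly IVideoService _videoService;

    public VideosController(IVideoService videoService)
    {
        _videoService = videoService;
    }

    [HttpPost("upload")]
    public async Task<IActionResult> UploadVideo(IFormFile file)
    {
        // All of this runs inside the HTTP request, so the caller
        // waits for every step to finish.
        await _videoService.UploadToStorageAsync(file);
        await _videoService.UpdateMetadataAsync(file.FileName);
        await _videoService.AddToArchiveAsync(file.FileName);

        return Ok("Video processed.");
    }
}
```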

The above implementation suffers from the mentioned problem: it executes the long-running tasks synchronously within the HTTP request, which could lead to a poor user experience and even timeouts.

The Solution: Background Task Queue

IBackgroundTaskQueue

First, we define the IBackgroundTaskQueue interface:
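A minimal version of the interface could be:

```csharp
public interface IBackgroundTaskQueue
{
    void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);

    Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken);
}
```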

BackgroundTaskQueue

Next, we implement the BackgroundTaskQueue class:
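A simple implementation, again using a ConcurrentQueue plus a SemaphoreSlim for the asynchronous wait (an assumption, since the original listing is not shown):

```csharp
using System.Collections.Concurrent;

public class BackgroundTaskQueue : IBackgroundTaskQueue
{
    private readonly ConcurrentQueue<Func<CancellationToken, Task>> _workItems = new();
    private readonly SemaphoreSlim _signal = new(0);

    public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
    {
        if (workItem == null) throw new ArgumentNullException(nameof(workItem));

        _workItems.Enqueue(workItem);
        _signal.Release();
    }

    public async Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken)
    {
        await _signal.WaitAsync(cancellationToken);
        _workItems.TryDequeue(out var workItem);
        return workItem;
    }
}
```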

BackgroundTaskService

We also need to implement the BackgroundTaskService class:
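A hosted service that drains the queue might look like this:

```csharp
public class BackgroundTaskService : BackgroundService
{
    private readonly IBackgroundTaskQueue _taskQueue;
    private readonly ILogger<BackgroundTaskService> _logger;

    public BackgroundTaskService(IBackgroundTaskQueue taskQueue, ILogger<BackgroundTaskService> logger)
    {
        _taskQueue = taskQueue;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var workItem = await _taskQueue.DequeueAsync(stoppingToken);

            try
            {
                await workItem(stoppingToken);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error executing background work item.");
            }
        }
    }
}
```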

Updating the Endpoint

Now, we can update the API endpoint to use the background task queue:
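For example, assuming the controller now also receives an IBackgroundTaskQueue in its constructor (and that the upload is persisted somewhere the background task can reach before the request ends):

```csharp
[HttpPost("upload")]
public IActionResult UploadVideo(IFormFile file)
{
    // Capture what the background work needs before the request ends;
    // the IFormFile stream is not safe to use after the response is sent.
    var fileName = file.FileName;

    _taskQueue.QueueBackgroundWorkItem(async token =>
    {
        await _videoService.UploadToStorageAsync(fileName);
        await _videoService.UpdateMetadataAsync(fileName);
        await _videoService.AddToArchiveAsync(fileName);
    });

    // Respond immediately; the work continues in the background.
    return Accepted("Video queued for processing.");
}
```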

By using the background task queue, we’ve moved the long-running tasks out of the main request processing pipeline. This ensures that the user receives a response promptly, while the tasks are executed in the background.

Registering the Background Task Queue and Service

Finally, we need to register the background task queue and service in the Startup.cs :
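For example:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddSingleton<IBackgroundTaskQueue, BackgroundTaskQueue>();
    services.AddHostedService<BackgroundTaskService>();
}
```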

Now, when the UploadVideo endpoint is called, the long-running tasks will be executed in the background, improving the user experience and avoiding potential timeouts.

Task Progress and Monitoring

To add task progress and monitoring, we can introduce an ITaskProgressService interface and its implementation to keep track of task progress. This will allow us to provide real-time updates on the progress of the long-running tasks.

ITaskProgressService

First, we define the ITaskProgressService interface:
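A minimal version might expose just a setter and a getter keyed by task id (the exact members are assumptions):

```csharp
public interface ITaskProgressService
{
    // Record the current progress (0-100) for a task.
    void ReportProgress(Guid taskId, int percentComplete);

    // Read back the last reported progress for a task.
    int GetProgress(Guid taskId);
}
```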

TaskProgressService

Next, we implement the TaskProgressService class:

Updating the Background Task

Now, we can update the background task queue to report progress updates:

Monitoring Task Progress

We can add a new API endpoint to get the progress of a task by its ID:

Registering the TaskProgressService

Finally, we need to register the TaskProgressService in the Startup.cs :

With this implementation, the user can now monitor the progress of their tasks using the GetTaskProgress endpoint. This provides a way to keep the user informed about the status of their long-running tasks in the background.

Error Handling and Retry

To handle errors and retry failed tasks, we can introduce a retry mechanism in the BackgroundTaskService . This will allow us to automatically retry tasks that have encountered an error, reducing the likelihood of incomplete tasks.

Retry Policy

First, let’s define a simple retry policy that allows us to specify the maximum number of retries and the delay between attempts:
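A simple sketch of such a policy, including a helper that runs a delegate with retries (the helper is an illustrative addition, not necessarily the article's exact design):

```csharp
public class RetryPolicy
{
    public int MaxRetries { get; }
    public TimeSpan Delay { get; }

    public RetryPolicy(int maxRetries, TimeSpan delay)
    {
        MaxRetries = maxRetries;
        Delay = delay;
    }

    public async Task ExecuteAsync(Func<CancellationToken, Task> action, CancellationToken cancellationToken)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                await action(cancellationToken);
                return;
            }
            catch (Exception) when (attempt < MaxRetries)
            {
                // Swallow the failure and wait before the next attempt.
                await Task.Delay(Delay, cancellationToken);
            }
        }
    }
}
```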

Updating the BackgroundTaskService

Next, we can update the BackgroundTaskService to include error handling and retry logic:

Registering the RetryPolicy

Finally, we need to register the RetryPolicy in the Startup.cs :

With this implementation, the BackgroundTaskService will now retry failed tasks according to the configured retry policy. This helps ensure that tasks have a higher chance of being completed successfully, even in the face of transient errors or other issues.

You can further enhance this implementation by adding more sophisticated error handling, such as different retry strategies, error logging, or notification systems to alert you when tasks fail repeatedly.

Scaling Background Task Processing

As the number of long-running tasks increases, it may become necessary to scale the background task processing to handle the additional workload. One approach to achieve this is by introducing multiple instances of the BackgroundTaskService and distributing tasks among them.

Worker Configuration

First, let’s define a worker configuration class that allows us to specify the number of worker instances:

Modifying the BackgroundTaskService Registration

Next, update the BackgroundTaskService registration in the Startup.cs to create multiple instances of the service based on the worker configuration:

By using multiple instances of the BackgroundTaskService , we distribute the workload among them, allowing for better utilization of system resources and improved overall throughput.

Task Distribution Strategies

To further optimize task distribution among worker instances, you can implement various task distribution strategies. For example, you could use a round-robin approach, where each task is assigned to the next available worker. Alternatively, you could assign tasks based on the current workload of each worker or even implement more advanced algorithms that consider factors such as task priority or estimated completion time.

Monitoring and Scaling

To ensure that the system scales effectively, it’s essential to monitor the background task processing performance. By collecting metrics such as task completion times, resource usage, and queue lengths, you can gain insights into the system’s behavior and identify potential bottlenecks or areas for improvement.

In addition to scaling horizontally (by adding more worker instances), you can also consider scaling vertically by adjusting the resources allocated to each worker (e.g., CPU, memory). Furthermore, you can explore other scaling options such as containerization and orchestration (e.g., Docker, Kubernetes) or using managed services like Azure Functions or AWS Lambda to handle background tasks.

With these approaches in place, you can efficiently scale your background task processing infrastructure to handle increasing workloads and maintain high performance under varying conditions.

Task Prioritization

In some cases, you may need to execute more critical tasks before less important ones. To implement task prioritization, we can extend the IBackgroundTaskQueue interface and its implementation to support task priorities.

PriorityBackgroundTask

First, let’s define a new class PriorityBackgroundTask that will hold the task function and its priority:

Updating IBackgroundTaskQueue

Next, update the IBackgroundTaskQueue interface to support task priorities:

Updating BackgroundTaskQueue

Now, modify the BackgroundTaskQueue class to use a priority queue instead of a concurrent queue:

You can use an existing priority queue implementation or create your own. In this example, we assume a ConcurrentPriorityQueue<TKey, TValue> class is available, which internally manages the priority order.

Updating BackgroundTaskService

Update the BackgroundTaskService class to use the PriorityBackgroundTask :

Update the API endpoint to use the background task queue with task priorities:

In this example, we added an int priority parameter to the UploadVideo method, which is then passed to the QueueBackgroundWorkItem method. You can adjust the priority parameter source as needed (e.g., from the request object, a query parameter, or based on user roles).

By introducing task prioritization, you can ensure that critical tasks are executed before less important ones, enabling a more efficient and responsive background task processing system.

Load Balancing and Task Distribution

Efficient load balancing and task distribution are essential for maximizing resource utilization and system throughput. In this section, we’ll explore different techniques for distributing tasks among multiple worker instances.

Round-Robin Distribution

One simple approach to task distribution is the round-robin technique, where tasks are assigned to worker instances in a circular order. To implement this, we need to modify the BackgroundTaskQueue class to maintain a separate task queue for each worker instance and enqueue tasks in a round-robin fashion:

This implementation assumes that you have registered the WorkerConfiguration class in the dependency injection container and that it is injected into the BackgroundTaskQueue constructor.

Other Distribution Strategies

There are various other task distribution strategies that you can consider, such as:

Task Affinity : Assign tasks to worker instances based on specific criteria, such as data locality or resource requirements (e.g., CPU, memory). This can help minimize data movement and improve overall performance.

Consistent Hashing : Use a consistent hashing algorithm to distribute tasks among worker instances. This approach can minimize task redistribution when worker instances are added or removed, ensuring a more stable distribution.

Sharding : Divide tasks into shards based on specific attributes (e.g., task type, data partition) and assign each shard to a dedicated worker instance. This can help reduce contention and improve resource utilization.

By implementing advanced load balancing and task distribution techniques, you can further optimize the background task processing system, ensuring efficient resource utilization and improved performance under varying workloads.

In this post, we have covered various aspects of implementing background task processing in ASP.NET Core, including task progress and monitoring, error handling and retry, scaling, task prioritization, and load balancing and task distribution. By adopting these techniques, you can build a more robust, efficient, and scalable background task processing system that meets the demands of modern web applications.

Further Reading

To dive deeper into the topics covered in this post, consider checking out the following resources:

  • Implementing a simple priority queue in C#
  • Polly - a .NET resilience and transient-fault-handling library


MarketSplash

How To Implement Background Jobs In ASP.NET Core

Explore ASP.NET Core's background job implementation and supercharge your development skills!

💡 KEY INSIGHTS

  • Implementing background jobs in ASP.NET Core effectively requires using the `IHostedService` interface , which provides critical methods for job lifecycle management.
  • For long-running operations, it's essential to utilize `BackgroundService` for efficient resource management and minimal strain on system resources.
  • When scheduling and managing recurring tasks, leveraging tools like Hangfire and Quartz.NET offers robust functionality and monitoring capabilities.
  • Ensuring robust error handling and retry mechanisms in background jobs is crucial to maintain resilience and reliability in applications.

Are you a developer looking to enhance your skills in ASP.NET Core ? If you've ever wondered how to efficiently handle background jobs within your applications, you're in the right place.


Understanding Background Jobs In ASP.NET Core


Background jobs in ASP.NET Core are operations that run Asynchronously and are usually long-running tasks, separate from the main application flow.

Background Jobs are tasks executed on a separate thread, not interrupting the main application process. They enhance performance by ensuring the main thread remains responsive.

In ASP.NET Core, these jobs can be implemented using several approaches.

One common way to implement background jobs in ASP.NET Core is through the IHostedService interface. This interface provides two methods: StartAsync and StopAsync .

Different scenarios might require different approaches :

  • IHostedService : Ideal for tasks that need to run throughout the app's lifetime.
  • BackgroundService : A base class for implementing a long-running IHostedService .

Required NuGet Packages


To set up an environment for background tasks in ASP.NET Core, it is essential to Configure Services in the Startup.cs file. This setup is crucial for registering background tasks as services.

Begin by adding necessary NuGet Packages . For a basic setup, Microsoft.Extensions.Hosting is required. For more advanced scenarios, packages like Hangfire or Quartz.NET might be needed.

In the Startup.cs file, configure your services in the ConfigureServices method. Here, you register your background tasks as services.

A background task is a class that implements IHostedService or inherits from BackgroundService . This class contains the logic for the background operation.

After setting up, the ASP.NET Core application will automatically start the background tasks when it runs. Monitor The Output for any logs or exceptions to ensure proper task execution.

Remember, background tasks should be Lightweight and non-blocking to keep the application responsive.

The IHostedService Interface


Using IHostedService in ASP.NET Core allows for the implementation of Background Services that can start and stop with the application.

The IHostedService interface includes two methods: StartAsync and StopAsync . These methods control the Lifecycle of the background service .

To use your background service, register it in the ConfigureServices method in Startup.cs . This Registration is crucial for dependency injection.

Inside StartAsync , implement the task logic. It's important to handle Cancellations effectively to stop the task gracefully when the application shuts down.

Understanding BackgroundWorker


The BackgroundWorker component is used for running operations Asynchronously in the background.

BackgroundWorker creates a separate thread for executing background tasks, preventing the main thread from being blocked. This component is ideal for tasks that require Progress Reporting and completion notifications.

While BackgroundWorker is not a native ASP.NET Core feature, it can be integrated into an ASP.NET Core application. Care must be taken to handle Thread Safety and resource management.

BackgroundWorker supports events like DoWork , ProgressChanged , and RunWorkerCompleted . These events are crucial for managing the task's Lifecycle and user interface updates.

In web contexts, modern ASP.NET Core features like IHostedService are recommended for greater efficiency and control.

Setting Up Hangfire


Hangfire is a powerful library for Background Job Processing in .NET applications. It's particularly effective for managing long-running tasks in ASP.NET Core.

To use Hangfire, first, add it to your project via NuGet. It's essential to Configure Hangfire in the Startup.cs file to ensure its proper functioning.

Hangfire allows you to Schedule Tasks with ease. You can enqueue jobs for immediate execution or delay them for a specified time.

For tasks that need to run periodically, Hangfire provides functionality to Create Recurring Jobs . This feature is highly beneficial for regular data processing or maintenance tasks.
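For instance, assuming Hangfire has already been configured with AddHangfire and AddHangfireServer, enqueuing, delaying, and recurring jobs can look like this:

```csharp
using Hangfire;

// Fire-and-forget: runs once, as soon as a Hangfire worker picks it up.
BackgroundJob.Enqueue(() => Console.WriteLine("Processing queued job..."));

// Delayed: runs once after the given delay.
BackgroundJob.Schedule(() => Console.WriteLine("Delayed job..."), TimeSpan.FromMinutes(10));

// Recurring: runs on a schedule (here, once a day).
RecurringJob.AddOrUpdate("nightly-cleanup",
    () => Console.WriteLine("Nightly cleanup..."), Cron.Daily());
```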

Hangfire comes with an integrated Dashboard that allows monitoring and managing background tasks. This feature provides a user-friendly interface for tracking job progress and statuses.

Best Practices suggest securing the Hangfire Dashboard, especially in production, to prevent unauthorized access.

Integrating Quartz.NET


Quartz.NET is a Scheduling Library for .NET, widely used for planning and executing tasks.

Start by adding Quartz.NET to your project via NuGet. Configuration is key and involves setting up the scheduler, job, and trigger in your Startup.cs file.

A job in Quartz.NET is a C# class implementing the IJob interface. This class contains the Task Logic that will be executed.
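For example, a job class might look like this (the class name is illustrative):

```csharp
using Quartz;

public class SendReminderJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // The task logic that the scheduler will run.
        Console.WriteLine("Sending reminder emails...");
        await Task.CompletedTask;
    }
}
```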

Scheduling involves creating a Job Detail and a Trigger . Triggers define when the job should be executed.

Quartz.NET offers Cron Expressions for complex scheduling scenarios, like running a job every Monday at noon.

Implementing Logging


Effective Monitoring and Debugging are critical for maintaining the reliability and efficiency of background jobs in ASP.NET Core applications.

Incorporate logging within your background jobs to track their activities. Logging is essential for debugging and understanding the job's behavior over time.

Utilize monitoring tools like Application Insights or ELK (Elasticsearch, Logstash, Kibana) stack. These tools help in visualizing logs and tracking the performance of background jobs.

Proper Exception Handling in background jobs is crucial. It ensures that the job doesn't fail silently and allows for corrective actions.

For debugging, use Breakpoints in your IDE or write extensive logs to understand the flow and state of your background job.

Always ensure that your logging level is set appropriately to capture essential information without overloading the log files.

Efficient Resource Management


As an ASP.NET Developer, I've learned that implementing background jobs in ASP.NET Core is like orchestrating a symphony; every instrument plays a crucial part. By harmonizing IHostedService with efficient resource management and robust error handling, we create a seamless, asynchronous performance that enhances the application's overall functionality without missing a beat.

Dejan Bogatinovski

ASP.NET Developer

Source: Stack Overflow

Adhering to Best Practices ensures that background jobs in ASP.NET Core are reliable, maintainable, and efficient.

Background jobs should use Resources Efficiently to avoid straining the server. This includes optimizing database connections and managing memory usage.

Design background jobs with Scalability in mind. As the application grows, the jobs should still perform effectively without affecting the overall application performance.

Implement robust Error Handling and retry mechanisms. This ensures that the jobs can recover from failures and continue processing.

Break down complex jobs into smaller, manageable tasks. Task Segmentation improves maintainability and makes debugging easier.

Regular Monitoring and Logging are crucial. They provide insights into the performance of background jobs and help in identifying issues early.

Avoid long-running database transactions in background jobs. This reduces the risk of Locks and Deadlocks and improves overall system performance.

What is a Background Job?

A background job is a process or task that runs Asynchronously in the background, separate from the main application thread. It's used for operations that don't require immediate user interaction.

How to Handle Failures in Background Jobs?

For handling failures, implement Retry Mechanisms and error logging. This approach ensures that jobs can recover from interruptions and continue processing.

Can Background Jobs Affect Application Performance?

Yes, if not managed properly, background jobs can Impact Performance. It's important to optimize resource usage and manage the number of concurrent jobs.

What Tools are Recommended for Background Jobs in ASP.NET Core?

Tools like Hangfire, Quartz.NET, and the built-in IHostedService interface are commonly used. Each has its own advantages depending on the requirements.

How to Monitor Background Jobs?

Use monitoring tools like Application Insights or the ELK stack. Logging and monitoring are essential for maintaining the health and efficiency of background jobs.

Let's see what you learned!

What interface in ASP.NET Core is commonly used for implementing background services?

Continue learning with these asp.net guides.

  • Building With ASP.NET Core Web API: Key Concepts
  • Integrating ASP.NET Core Hangfire For Task Scheduling
  • Implementing ASP.NET Core Background Processing
  • How To Perform Distributed Caching In ASP.NET Core
  • How To Optimize Performance In ASP.NET Core


ASP.NET Core Basics: Data Structures—Part 2


Data structures play a key role in computer science and software engineering, providing efficient solutions for various computational challenges. In this second part, we’ll cover advanced topics in data structures, but in an easy way to understand.

In Part 1 of Data Structures , we saw some examples of basic structures in the ASP.NET Core context, what each one means and how it can be implemented. In this second part, we’ll cover the main advanced topics in data structures.

Let’s see the meaning of each of them and understand how they work through practical examples.

What are Advanced Data Structures?

They are complex, specialized data organizations that provide efficient methods for storing and manipulating data across a variety of computational tasks. These frameworks are designed to optimize specific operations such as retrieval, insertion and deletion, and generally have applications in a wide variety of scenarios.

Below are some examples of advanced data structures:

  • Tree data structures: binary trees, AVL trees, red-black trees, B-trees and others
  • Heap data structures: binary heaps, Fibonacci heaps and binomial heaps
  • Hashing: hash tables, hash functions

These advanced data structures are essential in diverse computer science and software engineering applications where data management, efficient search and algorithm optimization are key concerns.

They provide the foundation for solving complex problems and improving the performance of software systems in areas such as databases, operating systems, networks and many others. Understanding when and how to apply these frameworks is crucial to designing efficient algorithms and data management systems.

Tree Data Structures

Trees are a type of data structure used to represent hierarchical relationships between elements. Trees are widely used in programming for various purposes, and C# offers the flexibility to work with different types of trees.

Next, let’s check out the main types of trees in C# and implement an example of each.

Binary Trees

Binary trees are data structures where each node has at most two child nodes, typically referred to as the left and right child. Binary trees can be used in various applications, including binary search and expression trees.

Below is a representation of a binary tree structure:

Binary Tree Representation. 1 - Data branches to children 2 and 3. 2 branches to 4 and 5. 3 branches to 6.

To practice the post examples, let’s create a new application in ASP.NET Core. So, execute in the terminal the following command:
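The exact command isn't shown above; based on the folder name and the Minimal API template, it was presumably something like:

```bash
dotnet new web -o PracticingDataStructurePartTwo
```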

This command will create a folder called “PracticingDataStructurePartTwo” and inside it will be a basic web project using the Minimal API template. You can open the project with the IDE of your choice, in this example Visual Studio Code will be used.

You can access all code examples here: Sample source code .

Defining the Binary Tree Node Class

Next, let’s create a class that represents a binary tree node. Each node must have data and references to its left and right children. So, in the root of the project, create a new folder called “Models” and inside it create the class below:
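A minimal node class consistent with that description could be:

```csharp
namespace PracticingDataStructurePartTwo.Models
{
    public class TreeNode
    {
        public int Data { get; set; }

        // References to the left and right children.
        public TreeNode Left { get; set; }
        public TreeNode Right { get; set; }

        public TreeNode(int data)
        {
            Data = data;
            Left = null;
            Right = null;
        }
    }
}
```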

Note that in the class above we defined a class to represent each node in the tree, which has data and its left and right pointers, as represented in the previous image.

We can perform the following operations on binary trees:

  • Insert an element
  • Remove (delete) an element
  • Search for an element
  • Traverse the tree

Binary Tree Traversals

Tree traversal algorithms can be classified into two main categories:

  • Depth-first search (DFS) algorithms
  • Breadth-first search (BFS) algorithms

Traversing a binary tree means visiting each tree node in a specific order. There are different ways to traverse a binary tree, and the choice of traversal method depends on the specific task you want to perform. The three most common binary tree traversal methods are:

  • In-Order Traversal: visit the left subtree, then the current node, then the right subtree. In a binary search tree (BST), an in-order traversal visits the nodes in ascending order, which is useful for tasks like retrieving elements in sorted order.
  • Pre-Order Traversal: visit the current node, then the left subtree, then the right subtree. Pre-order traversal is useful for creating a copy of the tree or serializing it into a format that can be easily reconstructed.
  • Post-Order Traversal: visit the left subtree, then the right subtree, then the current node. Post-order traversal is commonly used for deleting all nodes in the tree or for evaluating expressions in a mathematical expression tree.

Implementing an In-Order Traversal Binary Tree

To implement an in-order traversal, let’s create a binary tree class. Inside the Models folder, create the class below:
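A sketch of the class matching the explanation that follows:

```csharp
namespace PracticingDataStructurePartTwo.Models
{
    public class BinaryTree
    {
        public TreeNode Root { get; set; }

        public BinaryTree()
        {
            // The tree starts out empty.
            Root = null;
        }

        public void Insert(int data)
        {
            Root = InsertRecursive(Root, data);
        }

        private TreeNode InsertRecursive(TreeNode node, int data)
        {
            // An empty spot was found, so place the new node here.
            if (node == null)
            {
                return new TreeNode(data);
            }

            // Smaller values go to the left subtree, larger ones to the right.
            if (data < node.Data)
            {
                node.Left = InsertRecursive(node.Left, data);
            }
            else
            {
                node.Right = InsertRecursive(node.Right, data);
            }

            return node;
        }

        public void InorderTraversal(TreeNode node)
        {
            if (node == null)
            {
                return;
            }

            // Left subtree, current node, right subtree.
            InorderTraversal(node.Left);
            System.Console.Write(node.Data + " ");
            InorderTraversal(node.Right);
        }
    }
}
```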

In the code above we defined the BinaryTree class, which is responsible for representing a binary tree data structure. It manages the root node of the tree and provides methods for inserting nodes into the tree and performing in-order traversal. Below is a detailed explanation of each element of the code:

Root property

  • Root is a property of the BinaryTree class. It contains a reference to the root node of the binary tree.

Constructor

  • The constructor of the BinaryTree class initializes the Root property to null, indicating that the tree is initially empty.

Insert Method

  • The Insert method is used to insert a new node with a specific integer value into the binary tree.
  • It takes the value to be inserted as a parameter.
  • The method delegates the actual insertion to the InsertRecursive method, a private helper method that handles the recursive insertion process.

InsertRecursive Method

  • The InsertRecursive method is a private recursive method used to insert nodes into the binary tree.
  • It takes two parameters: the current node (subtree) being considered and the value to be inserted.
  • It checks whether the current node is null (indicating an empty location in the tree). If it is null, it creates a new node with the given data and returns it.
  • If the current node is not null, the method calls itself recursively on the left or right child, depending on whether the data is smaller or larger than the current node's data, ensuring that the new node is inserted in the correct position.
  • The method returns the updated node to maintain the tree structure.

InorderTraversal Method

  • The InorderTraversal method is used to perform an in-order traversal of the binary tree.
  • It takes a TreeNode as a parameter (typically the root node from which the traversal starts).
  • In-order traversal involves visiting the left subtree, then the current node, and finally the right subtree.
  • This method uses a recursive approach to perform the traversal, printing the data of each node visited to the console in sorted order.

Overall, the “BinaryTree” class encapsulates the core functionality of a binary tree, including creating the tree, inserting nodes and traversing the tree in order.

Now, in the Program.cs file, add the code below just before the app.Run(); line:
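For example (the unsorted values 1 through 6 match the output shown below):

```csharp
using PracticingDataStructurePartTwo.Models;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var tree = new BinaryTree();

// Insert the values in an unsorted order.
foreach (var value in new[] { 4, 2, 6, 1, 3, 5 })
{
    tree.Insert(value);
}

Console.Write("Inorder traversal - ");
tree.InorderTraversal(tree.Root);
Console.WriteLine();

app.Run();
```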

Then, execute the application with the command dotnet run , and you will have the following output in the console:

Inorder traversal - 1 2 3 4 5 6

Note that even though the list of numbers is unordered, it was sorted in increasing order in the InorderTraversal() method.

Binary Search Tree

Binary search is a search algorithm that can be applied to a sorted array or a binary search tree (BST). To demonstrate binary search in the binary tree created above, simply add the method below to the “BinaryTree” class:
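One way to write that method, matching the explanation that follows:

```csharp
public bool BinarySearch(TreeNode node, int target)
{
    // Reached an empty branch: the target is not in the tree.
    if (node == null)
    {
        return false;
    }

    if (node.Data == target)
    {
        return true;
    }

    // Search the left subtree for smaller values, the right subtree for larger ones.
    return target < node.Data
        ? BinarySearch(node.Left, target)
        : BinarySearch(node.Right, target);
}
```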

Then, in the Program.cs file, replace the in-order traversal snippet from the previous section with code that builds a tree and calls the new BinarySearch method:
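A possible version of that snippet, using the values and target described below (50, 30, 70, 20, 40, 60, 80 and a search for 40):

```csharp
var tree = new BinaryTree();

foreach (var value in new[] { 50, 30, 70, 20, 40, 60, 80 })
{
    tree.Insert(value);
}

// Look for the value 40 starting from the root.
var found = tree.BinarySearch(tree.Root, 40);

Console.WriteLine(found
    ? "Value 40 found in the tree"
    : "Value 40 not found in the tree");
```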

Then, execute in the terminal the command dotnet run and you’ll have the following result:

Binary Search Tree - Value 40 found in the tree

In the code above, we defined the BinarySearch() method that searches for a target value in the binary search tree recursively.

The base case checks whether the current node is null, which means the target value was not found, and returns false.

If the current node’s data matches the target value, it returns true , indicating that the target value has been found. If the target is smaller than the current node’s data, it searches the left subtree. If the target is larger, it searches in the right subtree.

When calling the method, we passed the value 40 as the target, which is part of our tree containing the values 50, 30, 70, 20, 40, 60 and 80. So, as expected, the console output was: “Value 40 found in the tree,” as shown in the image below:

Binary Search Tree Representation - From 50 at the top, we go left to 30 (not right to 70), and from 30 we go right to 40 not left to 20

Heap Data Structures

A heap in data structures refers to a specialized data structure that is used to store and manage elements in a way that allows the highest priority element to be easily accessed and removed.

There are two main types of heaps: the “binary heap” and the “Fibonacci heap.” In this post, we will talk about the binary heap, which is the most common.

In C#, a binary heap is often implemented using the PriorityQueue class from the System.Collections.Generic namespace.

Next, let’s see how it works and how to implement a binary heap:

Binary Heap

A binary heap is a special binary tree that meets two main properties:

  • Partial order property: For each node in the tree, the key (value) stored in the node is greater (or smaller, depending on the heap type) than the keys stored in its children. This means that the highest priority element will be at the root of the tree.
  • Complete tree structure: The tree is filled from left to right at all levels, except possibly the last level, which is filled from left to right.

In binary heaps, we have two main types:

  • Max-heap: In a max-heap, the element with the highest value (key) has the highest priority. This means that the root of the tree is the highest valued element and all elements are smaller than the parent. Therefore, as you remove elements from the max-heap, the largest elements are processed first.
  • Min-heap: In a min-heap, the element with the lowest value (key) has the highest priority. This means that the root of the tree is the element with the lowest value and all elements are greater than the parent. Therefore, as you remove elements from the min-heap, the smaller elements are processed first.

The choice between using a max-heap or a min-heap depends on the needs of your algorithm or application. Here are some typical scenarios for each of the two types:

  • Max-heap: it is used when you need to find the highest-value element quickly, such as when implementing priority queues for task scheduling in an operating system. It is also useful in sorting algorithms like Heapsort, where you need to sort in descending order.
  • Min-heap: it is used when you need to find the lowest-value element quickly, such as when implementing priority queues for shortest-path algorithms such as Dijkstra’s algorithm.

Implementing a max-heap or min-heap in C# can be accomplished using a class like PriorityQueue from the System.Collections.Generic library, as mentioned previously. However, note that by default PriorityQueue creates a min-heap. If you want a max-heap, you can provide a custom comparison that reverses the order of priorities.

Next, let’s implement a min-heap using the C# native PriorityQueue class. So, in the Program.cs file, add the code below:
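For example (the colors and priorities match the output discussed below):

```csharp
// PriorityQueue<TElement, TPriority> dequeues the lowest priority value first (a min-heap).
var queue = new PriorityQueue<string, int>();

queue.Enqueue("Red", 0);
queue.Enqueue("Blue", 4);
queue.Enqueue("Green", 2);
queue.Enqueue("Gray", 1);

// Dequeue until the queue is empty.
while (queue.TryDequeue(out var color, out var priority))
{
    Console.WriteLine($"Color: {color} - Priority: {priority}");
}
```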

The code above is an example of how to use a priority queue in C# to store elements with associated priorities. First, we create an instance of a priority queue (or PriorityQueue ) that is capable of storing string values (colors) with integer-valued priorities.

Then we add some elements to the priority queue. Each element represents a color, such as Red, Blue, Green and Gray, and each element has an associated priority, which is an integer, such as 0, 4, 2 and 1.

We then enter a while loop that will continue until the priority queue is empty. Inside the loop, we dequeue elements from the priority queue using queue.TryDequeue(), which returns the value and its priority.

Something important to note is that, in the PriorityQueue class, elements with the lowest priority value are removed from the queue first.

Finally, we print the removed element to the screen. If you run the application you will have the following result:

Min Heap - color red priority 0, color gray priority 1, color green priority 2, color blue priority 4

Below you can see a representation of a binary min-heap

Binary Min Heap Representation: color red priority 0, color gray priority 1, color green priority 2, color blue priority 4. By definition in the PriorityQueue class, the dequeuing order will automatically go from smallest to largest

Hashing Data Structures

Hashing is a fundamental process in computer science, which involves transforming input data into a fixed-size value, often called a “hash” or “hash code,” using a mathematical function known as a hash function.

The main feature of hash functions is that they produce a fixed-size output regardless of the size of the input. This hash is used to represent the original input in a compact way, facilitating efficient searching and storing of data in data structures such as hash tables.

The biggest advantage of using hashing data structures is that they allow you to store data and search it in constant time, that is, in O(1) time.

Hash functions must meet some important properties:

  • Deterministic: For the same input, a hash function must always produce the same hash.
  • Efficient: The hash function must be computationally efficient to calculate the hash of an input.
  • Uniform distribution: The hash function must distribute the hash values evenly, to minimize collisions (when two different inputs have the same hash).
  • Irreversible: It must be difficult or impossible to regenerate the original input from the hash (this is important for cryptographic hash functions).

The hash is basically made up of three components:

  • Key: The key is the value you want to store or fetch from the hash data structure. It is used as input to the hash function to calculate the index where the associated value will be stored or looked up in the hash table.
  • Hash function: The hash function is the mathematical formula or algorithm that transforms the key into a hash value (index) in the hash table. This function is designed to be deterministic and to distribute keys evenly in order to minimize collisions.
  • Hash table: The hash table is the data structure that stores the values associated with the keys. It consists of an array of lists, where each list is associated with an index generated by the hash function. When you want to store or look up a value, the hash function is used to determine which list of the hash table the value should be stored or looked up in.

To implement a hash table in C#, we can use the Hashtable class, which is part of the System.Collections namespace.

So, in the Program.cs file add the code below:
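A sketch consistent with the explanation that follows:

```csharp
using System.Collections;
using System.Security.Cryptography;
using System.Text;

var hashTable = new Hashtable();

// Hash the UTF-8 bytes of the key with SHA-256 and use the first 4 bytes as an int index.
static int HashFunction(string key)
{
    using var sha256 = SHA256.Create();
    var hashBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(key));
    return BitConverter.ToInt32(hashBytes, 0);
}

// Store the values under the hashed keys.
hashTable[HashFunction("2023001")] = "Bob";
hashTable[HashFunction("2023002")] = "Alice";
hashTable[HashFunction("2023003")] = "John";

// Retrieve the values using the same hash function.
var value1 = hashTable[HashFunction("2023001")];
var value2 = hashTable[HashFunction("2023002")];
var value3 = hashTable[HashFunction("2023003")];

Console.WriteLine($"{HashFunction("2023001")} -> {value1}");
Console.WriteLine($"{HashFunction("2023002")} -> {value2}");
Console.WriteLine($"{HashFunction("2023003")} -> {value3}");
```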

Then, if you run the application, the following result will be displayed in the console:

Hash Table Result

In the code above we create an instance of a hash table (Hashtable) then create a method HashFunction(string key) that accepts a key (string) as input and returns an integer value. This method uses the SHA-256 hash function to calculate the hash value of the key.

Inside the HashFunction method, an instance of SHA256, a cryptographic hash algorithm, is created.

The key (string) is then converted to a byte array using UTF-8 encoding.

The ComputeHash function of the SHA256 object is used to calculate the hash of the key bytes.

The first 4 bytes of the hash are converted to an integer value using BitConverter.ToInt32 , which is then returned.

We then add key-value pairs to the hash table, where the keys are the outputs of the HashFunction applied to the strings “2023001”, “2023002” and “2023003”, and the associated values are “Bob”, “Alice” and “John”.

The code then retrieves the values from the hash table using the keys calculated with the HashFunction function. The values associated with the keys are stored in the variables value1, value2 and value3.

Finally, we print to the console the indices associated with the keys and the values associated with these keys in the hash table.

In this way, the code demonstrates how hash tables are used to map keys to values using a hash function, illustrating the concept of hashing, which is one of the main subjects when we talk about complex data structures.

Hashing Representation - key values map to index and buckets

In this post, we learned about three types of complex data structures: binary trees, heaps and hashing. These concepts are very common in web applications, where there is a need to deal with a large amount of data, and in scenarios like these it is common to run into optimization problems that data structures can easily solve.

In addition to these, there are several other types of data structures that you may know, but these three are already a starting point for you to familiarize yourself with the subject.

Something important to remember is that C# has many built-in features for working with data structures, such as the “Hashtable” class, so consider using these native .NET structures whenever possible.


Assis Zang is a software developer from Brazil, developing in the .NET platform since 2017. In his free time, he enjoys playing video games and reading good books. You can follow him at: LinkedIn  and Github .



Rick Strahl's Weblog  


Reading Raw ASP.NET Request.Body Multiple Times


Some time ago I wrote about retrieving raw HTTP request content from an incoming request and - surprisingly - it's one of the most popular posts on this blog. The post discusses a few ways how you can capture raw request content in ASP.NET Core applications in situations when traditional model binding/mapping doesn't work and you need to access the underlying raw request data.

You can directly read the Request.Body stream to access content in raw form, which means reading the request data explicitly, or if you're using model binding you implicitly get the data bound to objects via [FromBody], either from JSON or HTML form content.

This all works fine if you're doing typical request operations, but in that post I left out one bit of important information: the Request.Body request stream is read-once by default, meaning you can read the request data only once. If you repeatedly access the stream you'll get an empty stream and, by extension, empty data.

While it's not common that you need to read request data multiple times it comes up occasionally, especially in scenarios where you need to track the request data for auditing or custom logging.

In this post I look at ways how you can read from Request.Body multiple times using the newish Request.EnableBuffering feature in ASP.NET, as well as discussing some of the pitfalls you have to watch out for in the process.


Reading Request Body Multiple Times

The previous post has a lot of detail on the mechanics of the actual process of retrieving request data. That post shows how to capture raw request content and also how to create a custom InputFormatter that can automatically return raw data to your controller code using standard parameters.

The gist of the raw request retrieval - with a few adjustments since the last post - is summed up in the code at the end of this post via an HttpRequest extension method helper to retrieve raw request content. The very short TLDR version is that you can use the newish Request.EnableBuffering() feature along with resetting the Request.Body stream after reading to allow multiple reads of the request buffer.

What's up with reading Request.Body multiple times?

So, the problem with reading Request.Body by default is that it's a read-forward-only data stream, meaning you can only read it once. This applies both to your own code that's doing something custom with the Request.Body explicitly, and to the internal use of ASP.NET - typically for endpoint model binding that implicitly reads Request.Body. In either case, once the Request.Body gets read, you or ASP.NET itself can't read the request content again.

To give you a real-world scenario, I ran into this recently with a customer who requested that we log incoming request POST/PUT data for audit purposes. We need custom logging that captures both part of the incoming request, which includes the incoming POST/PUT data, and some application-specific data. In that scenario the first body read occurs for our custom logging, and then again when ASP.NET performs the [FromBody] binding to provide the model data to our HttpPost/HttpPut endpoints.

Of course, I initially forgot that Request.Body can't be read more than once, and when I first wrote the log code I was happy to see the captured data written to the custom logging, but didn't immediately notice that the endpoint requests were not receiving their request data. 😄

In my case the first request data capture happens in a small custom middleware implemented via app.Use() in startup code:

The key bits are in this block:
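Rick's exact middleware isn't reproduced here; the following is only a rough sketch of the shape described, assuming minimal hosting in Program.cs and an imported System.Text namespace. The logging call itself is illustrative:

```csharp
// Sketch only: middleware that captures the request body for logging.
// Without EnableBuffering(), this first read consumes the one-shot stream.
app.Use(async (context, next) =>
{
    if (context.Request.Method is "POST" or "PUT")
    {
        using var reader = new StreamReader(context.Request.Body,
                                            Encoding.UTF8, leaveOpen: true);
        var rawBody = await reader.ReadToEndAsync();    // first (and only) read

        app.Logger.LogInformation("Captured request body: {Body}", rawBody);
    }

    await next();   // model binding downstream now sees an empty stream
});
```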

The problem with this code is that it reads the request content before ASP.NET gets to it. My logging code gets the request content just fine, but because the stream is read-once by default, ASP.NET's model binding now fails to retrieve the data on the second read. End result: the controller API endpoints get nada for data!

To illustrate this scenario, this endpoint:
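For illustration, an endpoint shaped like this one (the controller, route and TodoItem model are made up for this sketch, not taken from the original post):

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class TodoController : ControllerBase
{
    [HttpPost]
    public IActionResult Post([FromBody] TodoItem? item)
    {
        // If the body was already consumed upstream, item arrives as null here
        if (item is null) return BadRequest("No data received.");
        return Ok(item);
    }
}

public record TodoItem(string Title, bool Done);
```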

ends up with a null input, because there's no data in the Request.Body stream on the second read.

Null Post Data

Request.Body Stream

Under normal operation the Read-Once default behavior of the Request.Body stream is what you want: A request comes in and the framework reads the body stream and parses the data into your Model for an API or ViewModel in an MVC or Razor page request.

Reading the stream once is very efficient as the stream is read only as needed and doesn't need to be buffered in memory or on disk. The incoming stream can be directly read and immediately bound to the data from the stream.

But... if for any reason you need to read the stream multiple times, as in my example of the pre-flight logging, that's not going to work, as only the first read operation captures the data. The second operation, which is ASP.NET's model binding, gets an empty stream.

Use EnableBuffering to Re-Read the Stream

Luckily there's a workaround: You can use Request.EnableBuffering() which essentially changes the Request.Body stream's behavior so that it CanSeek and therefore can be reset to the 0 position and then be re-read.

Note that this is usually a 3 step process:

  • Enable Buffering
  • Read the stream
  • Reset the Stream pointer to 0

So, you can now do this before the stream is read for the first time:
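In code that roughly means (here Request stands for the current HttpRequest, e.g. context.Request inside middleware):

```csharp
// Switch the body to a buffered stream before anything else reads it
Request.EnableBuffering();
```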

Note that EnableBuffering() supports an optional bufferThreshold parameter that indicates the size of the maximum memory buffer used before buffering spills over to disk. The value is in bytes and the default, if not specified, is 30kb, which seems reasonable, but if you know the size of your inbound data you can fine-tune the buffer to match. See the docs for more info.

Then you can read the stream:
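For example, with a StreamReader that leaves the underlying stream open (a sketch, not the post's exact code; assumes System.Text is imported):

```csharp
using var reader = new StreamReader(Request.Body, Encoding.UTF8, leaveOpen: true);
string body = await reader.ReadToEndAsync();   // read the buffered body as text
```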

Finally, if you expect the stream to be read again, make sure you reset the stream pointer to 0:
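Which is just a matter of seeking back to the start:

```csharp
Request.Body.Position = 0;   // let the next consumer read from the beginning
```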

This leaves the stream as you found it before your read, and it can then be read again. If you read the stream after ASP.NET has processed the body, you'll need to reset the position yourself before you read the stream.


How Stream Buffering works

Stream buffering works by enabling buffering before any read operation occurs. When you call the Request.EnableBuffering() method, ASP.NET swaps out the stream used by Request.Body transparently:

Before EnableBuffering() - uses HttpRequestStream


After EnableBuffering() - uses FileBufferingReadStream


Make sure you Enable Buffering Before the First Read

The key is:

You have to call Request.EnableBuffering() before any other operation that reads the body.

Seems straightforward, but there are scenarios where this is not obvious. If you want to capture request content in the context of an API controller action, for example, you can't call Request.EnableBuffering() there, because the request has likely already been read by ASP.NET for model binding. If you need to capture request data after ASP.NET has captured it, you will have to find a way to call Request.EnableBuffering() as part of your middleware pipeline to ensure you can read request data multiple times.

In my request data capture logging scenario shown earlier this works fine, because I can plug into the ASP.NET middleware pipeline early enough to guarantee that buffering is enabled before the controller model binding gets hold of the request buffer. The downside is that you basically end up buffering every request, and with that comes a little bit of extra overhead in both performance and resource usage (especially if you have large POST buffers for things like uploads).

To put it all together:
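A sketch of the combined middleware, with buffering enabled before the read and the stream reset afterwards. Again, this is an approximation of what the post describes rather than the original code:

```csharp
app.Use(async (context, next) =>
{
    if (context.Request.Method is "POST" or "PUT")
    {
        context.Request.EnableBuffering();                // 1. make the body re-readable

        using var reader = new StreamReader(context.Request.Body,
                                            Encoding.UTF8, leaveOpen: true);
        var rawBody = await reader.ReadToEndAsync();      // 2. read it for logging

        context.Request.Body.Position = 0;                // 3. reset for model binding

        app.Logger.LogInformation("Captured request body: {Body}", rawBody);
    }

    await next();
});
```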

If you use a stream reader of some sort, make sure you set it to leave the stream open after you're done, or else the reader will close it and that will also break subsequent access to the stream.

I can now read the request data for logging and get the request data to show up in the controller:

Working  Post Data

Read Request Data Helper

I already mentioned the request helper to read the content above, but there are some improvements over the version from the last post, including support for enabling buffering as part of the HttpRequest extension method.

Here's the updated version from Westwind.AspNetCore :
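The library code itself isn't reproduced here. The following is only a sketch of the general shape a GetRawBodyStringAsync() extension could take, based on the behavior described in this post; it is not the Westwind.AspNetCore implementation:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public static class HttpRequestExtensions
{
    public static async Task<string> GetRawBodyStringAsync(this HttpRequest request,
        bool enableBuffering = false, Encoding? encoding = null)
    {
        encoding ??= Encoding.UTF8;

        if (enableBuffering && !request.Body.CanSeek)
            request.EnableBuffering();            // allow the body to be read again

        if (request.Body.CanSeek)
            request.Body.Position = 0;            // read from the start

        using var reader = new StreamReader(request.Body, encoding, leaveOpen: true);
        var body = await reader.ReadToEndAsync();

        if (request.Body.CanSeek)
            request.Body.Position = 0;            // reset for the next reader

        return body;
    }
}
```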

By using this you can simplify the middleware code from before to:
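Using a helper like the sketch above, the middleware body can shrink to something like this:

```csharp
app.Use(async (context, next) =>
{
    // Reads the body with buffering enabled, so ASP.NET can still bind it later
    var rawBody = await context.Request.GetRawBodyStringAsync(enableBuffering: true);
    app.Logger.LogInformation("Captured request body: {Body}", rawBody);

    await next();
});
```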

With the enableBuffering flag on, buffering is enabled if it's not already on, and the stream is automatically reset at the end of the read operation so that a potential secondary read can be done.

The same caveat as before applies: enableBuffering may have no effect if you do it too late in the pipeline, if ASP.NET has already processed the request data.

For example, I couldn't do this:
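Something along these lines inside the controller action, reusing the illustrative TodoItem and helper sketch from earlier:

```csharp
[HttpPost]
public async Task<IActionResult> Post([FromBody] TodoItem? item)
{
    // Too late: model binding has already consumed the body, and buffering
    // wasn't enabled before that first read, so this comes back empty.
    var rawBody = await Request.GetRawBodyStringAsync(enableBuffering: true);

    return Ok(new { item, rawBody });
}
```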

because ASP.NET has already read the request. If I want this to work I have to add some explicit middleware prior to the controller processing to enable buffering separately:
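For example, a tiny piece of middleware that does nothing but enable buffering early in the pipeline:

```csharp
// Must run before endpoint execution, so buffering is on
// before model binding reads the body.
app.Use(async (context, next) =>
{
    context.Request.EnableBuffering();
    await next();
});
```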

With that in place, the previous code works and retrieves both the model data and the body string on a second read.

Reading raw request content is not the most straightforward operation in ASP.NET Core, as it's hidden away behind the more approachable model binding approaches. And rightly so, as reading raw request content tends to be relatively infrequent and usually related to custom auditing or logging requirements more than to actual request semantics.

In this post I've covered one of the caveats with raw Request.Body access related to reading the request content multiple times. Thankfully recent versions have made this easier via Request.EnableBuffering(), but even so you have to understand how to take advantage of this functionality and where to apply it for multiple request access to work. This post provides what you need to know and some practical code you can use or build on to be on your way for multiple request access...

  • HttpRequest.EnableBuffering() Documentation
  • Accepting Raw Body Content in ASP.NET Core Controllers
  • Westwind.AspNetCore library
  • GetRawBodyStringAsync()
  • GetRawBodyBytesAsync()



The Voices of Reason


# re: Reading Raw ASP.NET Request.Body Multiple Times

EnableBuffering has an overload to specify the memory buffer size. Would prefer that over using a file-backed buffer if the size and RPS fit.


@kapsiR - good catch. Might be useful to add that as another optional parameter to the helper.

Code Maze


Parallel.ForEachAsync() and Task.Run() With When.All in C#

Posted by Aneta Muslic | Feb 20, 2024

Parallel programming is a common and broad concept in the .NET world. In this article, we compare two well-known methods we use to achieve it when running a repetitive asynchronous task. We take a look at how they behave under the hood and compare the pros and cons of each one.

Let’s make a start.

Parallel Programming

In general, parallel programming involves using multiple threads or processors to execute tasks concurrently. It aims to improve performance and responsiveness by dividing tasks into smaller parts that can be processed simultaneously. 

Apart from improving performance and responsiveness, there are additional advantages when using parallel programming. Firstly, by breaking tasks into concurrent subtasks, we can effectively reduce overall execution time. One additional benefit is throughput enhancement as a result of handling multiple tasks simultaneously. Also, running tasks in parallel helps us ensure scalability since it efficiently distributes tasks across processors. This allows performance to scale seamlessly when adding resources.

One more thing we should take into consideration when working with parallel programming is which kind of processes we are trying to parallelize. In this article, we will mention I/O-bound and CPU-bound ones.  


I/O-bound processes are those whose duration is determined by the time spent awaiting input/output operations; a database call is a typical example. CPU-bound processes, on the other hand, are those whose duration is determined by the performance of the CPU; a method that does some heavy numerical calculations is a typical example.

Now that we have a quick primer about parallel programming and different types of processes, let’s quickly set everything up and see it in action.

Setting up Async Methods

Since we already have a great article going more in-depth on How to Execute Multiple Tasks Asynchronously , here we will only create a baseline for the Task.WhenAll() method which we will modify when comparing the two approaches.

We start with the default web-API project and expand the WeatherForecastController method with an asynchronous method that runs multiple times:
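The article's listing isn't shown here; a minimal sketch of such a method could look like the following (the return type, contents and delay value are assumptions, not the article's exact code):

```csharp
// Emulates an I/O-bound call, e.g. waiting on a database or HTTP response
private async Task<IEnumerable<string>> AsyncMethod()
{
    await Task.Delay(1000);   // stand-in for sub-system latency

    return Enumerable.Range(1, 3).Select(i => $"forecast {i}");
}
```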

In the context of types of processes, AsyncMethod() emulates the I/O-bound process and the task delay represents the waiting time of a sub-system response.

After we set everything up, let’s see how to execute these tasks in parallel.

Use Task.WhenAll

First, we need to refactor the GetWeatherForecastWhenAll() method to use the Task.WhenAll() method. It takes an enumerable of tasks and returns a new completed task once all the individual tasks in the collection finish running:
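A sketch of what that refactoring might look like inside the controller, following the description below (details such as the return type are assumptions):

```csharp
[HttpGet(Name = "GetWeatherForecastWhenAll")]
public async Task<IEnumerable<string>> GetWeatherForecastWhenAll()
{
    var tasks = new List<Task<IEnumerable<string>>>();

    // Start the work three times without awaiting the individual calls
    tasks.Add(AsyncMethod());
    tasks.Add(AsyncMethod());
    tasks.Add(AsyncMethod());

    // Wait until every task has finished
    var combinedResults = await Task.WhenAll(tasks);

    // Flatten the per-task results into a single sequence
    return combinedResults.SelectMany(cr => cr);
}
```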

We define an empty list of tasks. Next, we call AsyncMethod() three times without the await keyword. This starts executing these tasks one after another without waiting for them to complete. This is exactly what we want, since we add those tasks to our tasks list and use Task.WhenAll() to wait for all of them to complete.

Lastly, when all the tasks are completed, we flatten the combinedResults variable that holds the results and return the result to the user.

We need to keep thread usage in mind when we use parallel execution of tasks. Starting too many threads at once increases context-switching overhead and may impact overall application efficiency. Also, we don’t want to block the main thread. So let’s see how we can get a better understanding of how this method works under the hood regarding threads.

Thread Processing

We start by adding logging to the threads:
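Roughly like this, added to the start and end of AsyncMethod() (a sketch, matching the description below):

```csharp
private async Task<IEnumerable<string>> AsyncMethod()
{
    Console.WriteLine($"Task started on thread: {Environment.CurrentManagedThreadId}");

    await Task.Delay(1000);

    Console.WriteLine($"Task finished on thread: {Environment.CurrentManagedThreadId}");

    return Enumerable.Range(1, 3).Select(i => $"forecast {i}");
}
```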

Here, we add a Console.WriteLine() statement at the beginning and end of each method. There, we print on which thread methods start and end by using Environment.CurrentManagedThreadId .

Now, if we execute our request, in the output window we can see how threads behave:

Let’s break this down to understand what happens.

When we send an HTTP request, a thread from the thread pool gets assigned to handle it. In our case, it is thread number 16. Then, when we invoke our async methods and we don’t use the await keyword, tasks will usually start executing on the same thread, i.e., 16.

However, when an asynchronous operation encounters the await keyword, in our case await on Task.WhenAll() , it releases the current thread to the thread pool during the waiting period for the task to be completed. When the awaiting operation completes and we want to return the result, the continuation might not necessarily resume on the original thread. That is why we see some of the tasks finish on different threads than they start on.

Besides creating a task by not using the await keyword we can also use Task.Run() method, so let’s take a look at it.

Use Task.Run With Task.WhenAll

By using the Task.Run() method to execute tasks, we make sure that each new task executes on a separate thread:
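A sketch of that variant, under the same assumptions as the earlier snippets:

```csharp
var tasks = new List<Task<IEnumerable<string>>>();

// Task.Run() explicitly offloads each call to a thread-pool thread
tasks.Add(Task.Run(() => AsyncMethod()));
tasks.Add(Task.Run(() => AsyncMethod()));
tasks.Add(Task.Run(() => AsyncMethod()));

var combinedResults = await Task.WhenAll(tasks);
return combinedResults.SelectMany(cr => cr);
```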

Here, we use the Task.Run() method to execute AsyncMethod() three times in a row. Again, by skipping the await keyword we are not awaiting any method to complete, but we run them in parallel and on Task.WhenAll() await their results.

Now, let’s take another look at the output logs when executing the request:

This time, we see that each new task starts its execution on a new thread. We expect this behavior when using Task.Run() since its purpose is to offload work from the current thread. Same as in the previous example due to the async/await nature and thread pool assigning threads, tasks finish on different threads than they originally start on.

Using Task.Run() requires caution as it might have some drawbacks. Since it offloads work to a new thread, any time it deals with a large number of tasks it can create a large number of threads, each consuming resources and possibly causing thread pool starvation.

Now that we have seen how we can explicitly offload each task to a new thread, let’s look at how we can use another method to perform these tasks in parallel.

Using Parallel.ForEachAsync

Another way we parallelize this work is to use the Parallel.ForEachAsync() method:
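A sketch of the setup described below; the source collection and degree of parallelism are assumptions, and ConcurrentBag lives in System.Collections.Concurrent:

```csharp
var options = new ParallelOptions { MaxDegreeOfParallelism = 3 };
var resultBag = new ConcurrentBag<IEnumerable<string>>();   // thread-safe results

// Run the asynchronous work once per element of a small source collection
await Parallel.ForEachAsync(Enumerable.Range(1, 3), options, async (_, _) =>
{
    var result = await AsyncMethod();
    resultBag.Add(result);
});

return resultBag.SelectMany(r => r);
```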

First, we set the MaxDegreeOfParallelism value. With this setting, we define how many concurrent operations run. If not set, it uses as many threads as the underlying scheduler provides. For a CPU-bound process, a good starting point for this value is Environment.ProcessorCount. For I/O-bound processes, this value is harder to determine since it depends on the I/O subsystem, which includes network latency, database responsiveness, etc. So when working with I/O-bound processes, we need to test different values to determine the best one for maximum parallelization.

Next, we define a ConcurrentBag for our results, a thread-safe collection, since we execute tasks in parallel and handle results inside a loop; this lets us safely modify the collection without worrying about concurrent-modification exceptions. Lastly, we set up the Parallel.ForEachAsync() method to run three times with the configured options, and inside the loop we await each result and add it to the resultBag.

One thing to mention when using the Parallel.ForEachAsync() method is that it has its underlying partitioning. This partitioning divides the input data into manageable batches and assigns each batch to a different thread for parallel processing. The exact size of the batches is determined dynamically by the framework based on factors such as the number of available processors and the characteristics of the input data. So by defining the MaxDegreeOfParallelism , we define the number of batched tasks that execute concurrently.

Regarding thread usage, since we are not explicitly altering thread assignments, threads get assigned as they usually do in the classic async/await process. One difference from the Task.WhenAll() thread usage is that most likely every task starts on its own thread, since we use the await keyword for each call inside the loop.

Now, let’s take a look at how the Task.Run() method behaves in this case.

Using Task.Run With Parallel.ForEachAsync

Let’s modify our method to use Task.Run() for generating tasks:
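Something like this, wrapping each call in Task.Run() inside the loop from the previous sketch:

```csharp
await Parallel.ForEachAsync(Enumerable.Range(1, 3), options, async (_, _) =>
{
    // Offloads each already-batched item to yet another thread-pool thread
    var result = await Task.Run(() => AsyncMethod());
    resultBag.Add(result);
});
```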

However, this may not be the best approach in this case. As we already saw, Parallel.ForEachAsync() has a built-in partitioner that creates batches of tasks and processes each batch on a single thread. But by using Task.Run() we offload each task onto its own thread. So using Task.Run() in this case undermines the benefit of using Parallel.ForEachAsync() for chunking tasks and using fewer threads.

One more thing we may encounter when trying to parallelize the tasks is the usage of the Parallel.ForEach() method.

Pitfalls to Avoid With Parallel.ForEach

The Parallel.ForEach() method, while similar to Parallel.ForEachAsync(), was not designed to handle asynchronous work. However, we can still encounter some examples of its usage with asynchronous tasks.

So let’s quickly check on why these approaches may not be the best workarounds and see their drawbacks.

One common workaround we can see is forcing the result to be awaited in synchronous code by using GetAwaiter().GetResult():
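For instance (a sketch of the pattern to avoid, reusing the names from the earlier snippets):

```csharp
Parallel.ForEach(Enumerable.Range(1, 3), _ =>
{
    // Blocks the worker thread until the async call completes
    var result = AsyncMethod().GetAwaiter().GetResult();
    resultBag.Add(result);
});
```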

We should avoid this approach, since by using GetAwaiter().GetResult() we block the calling thread, which is an anti-pattern of async/await. This may cause deadlocks, decreased performance, and loss of context-switching benefits.

Another approach involves using async void:
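For example (again, only to illustrate the anti-pattern):

```csharp
Parallel.ForEach(Enumerable.Range(1, 3), async _ =>
{
    // The async lambda is converted to async void here, so exceptions
    // can't be observed by the caller and completion is never awaited.
    var result = await AsyncMethod();
    resultBag.Add(result);
});
```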

In this approach, we have another anti-pattern, and that is the usage of async void. This is a known bad practice with several reasons to avoid it. One such reason is that we cannot catch exceptions in the catch block.

As we can see, both of these approaches involve the use of anti-patterns to make Parallel.ForEach() compatible with asynchronous methods. Since neither of them is a recommended way to implement parallelization, with the introduction of Parallel.ForEachAsync() in .NET 6 we have a preferable method for working with async tasks in a for-each loop.

Now that we took a look at what not to do, let’s sum up everything we’ve learned so far!

When to Use Which Approach?

As with everything in programming, how we use the knowledge from this article depends on the application’s specific requirements. Nevertheless, when choosing the right method, we should consider several factors.

When talking about CPU-bound tasks that can benefit from parallelization, the use of Parallel.ForEachAsync() stands out. Its main benefit is that it efficiently distributes the workload across multiple processor cores. Also, by setting the MaxDegreeOfParallelism we control the concurrency level we want to impose. And as we saw we can easily determine that value.

On the other hand, when dealing with I/O-bound tasks, where operations involve waiting for external resources, Task.WhenAll() becomes a preferable choice. It allows us to execute multiple asynchronous tasks concurrently, without blocking the calling thread. This makes it an efficient option for scenarios like database queries or network requests. Another benefit is that we don’t need to process results inside the loop, but we can wait on all of them and manipulate the results when they are complete.

However, it’s important to note that Task.WhenAll() lacks a built-in partitioner, and its use in a loop without a proper throttling mechanism may result in the initiation of an unbounded number of tasks. So depending on the number of tasks we are executing, it may be necessary to create our own partitioning strategy or opt for Parallel.ForEachAsync() as a solution.

One more thing we mentioned is initializing tasks using Task.Run() . We can use this approach when we want to have explicit control over threading but keep in mind that it can potentially lead to thread pool starvation if too many threads start at once. 

In this article, we looked at two methods we can use to execute repetitive tasks in parallel. We saw how both methods use threads under the hood and how they partition the given tasks. Also, we saw the differences when using Task.Run() and how it behaves with both options. Lastly, we provided guidance on which approach is most suitable in different scenarios.


Azure Pipelines deprecated tasks retirement schedule


Eric van Wijk

February 16th, 2024

Azure Pipelines includes around 150 build & release tasks, as well as many more task extensions. Various included tasks have multiple (major) versions, bringing the total to over 200 included tasks.

Some of these tasks have been deprecated for some time, as newer tasks have replaced them. Deprecation means the task is still supported until it is retired. In this blog post we’ll communicate what will happen as deprecated tasks retire.

What tasks can I no longer use?

Image task deprecation message

Here is the list of tasks that are deprecated and will be retired:

What will happen after the retirement date?

To help pipeline authors identify pipelines in Azure DevOps Service that use one of the deprecated tasks listed above, we will temporarily fail tasks according to the following schedule:

  • Tuesday February 20
  • Thursday February 22
  • Monday February 26
  • Friday March 1 to Monday March 4
  • Wednesday March 6
  • Friday March 8
  • Tuesday March 12 to Wednesday March 13
  • Friday March 15 onwards

During this schedule, tasks will execute their normal functionality but report an error:


The recommended action is to follow the guidance shown in the error message or otherwise update the pipeline to no longer use the deprecated task. To prevent the error temporarily without replacing the task, set the continueOnError step property to true:

This will execute the task’s functionality without failing and continue the pipeline. Note that any other errors will also be suppressed. To prevent ignoring legitimate issues, replace the task with its recommended alternative listed above instead.


In a future update to Azure DevOps Server we will also retire the tasks listed above.

Frequently Asked Questions

  • Q: What will happen if I don’t do anything? A: Tasks will permanently fail in Azure DevOps Service after March 15, and in a future update to Azure DevOps Server.
  • Q: I’m using Azure DevOps Server. A: We will announce retirement from Azure DevOps Server separately.
  • Q: We have many pipelines. How can pipeline owners be made aware? A: During the brownout schedule above, failing tasks and pipelines let pipeline owners pinpoint pipelines that use soon-to-be-retired tasks.
  • Q: I’m using the DownloadPackage@0/NuGetInstaller@0/NuGetRestore@1 task and it is failing. A: These tasks follow an accelerated retirement schedule; see the announcement.
  • Task Reference
  • Release Notes


Eric van Wijk Product Manager, Azure DevOps



Is there not a better way to inform users than to selectively fail these tasks during a specific window? Not everyone runs all their pipelines particularly frequently. Likewise, not everyone reads these blogs regularly – I just came across this by chance this morning, for example.

It would be great if there were an in-app notification or similar, informing users of this change when we’re actually using DevOps, for example.


We did get e-mail about this on Monday 2/19. Although the above suggests that these tasks would fail on Tuesday 2/20, they did not? I also see there are deprecation warnings in the logs.


Task.WhenAll Method


Creates a task that will complete when all of the supplied tasks have completed.

WhenAll(IEnumerable<Task>)

Creates a task that will complete when all of the Task objects in an enumerable collection have completed.

Parameters

  • tasks: The tasks to wait on for completion.

Returns

  • A task that represents the completion of all of the supplied tasks.

Exceptions

  • ArgumentNullException: The tasks argument was null.
  • ArgumentException: The tasks collection contained a null task.

The following example creates a set of tasks that ping the URLs in an array. The tasks are stored in a List<Task> collection that is passed to the WhenAll(IEnumerable<Task>) method. After the call to the Wait method ensures that all threads have completed, the example examines the Task.Status property to determine whether any tasks have faulted.

The overloads of the WhenAll method that return a Task object are typically called when you are interested in the status of a set of tasks or in the exceptions thrown by a set of tasks.

The call to WhenAll(IEnumerable<Task>) method does not block the calling thread.

If any of the supplied tasks completes in a faulted state, the returned task will also complete in a Faulted state, where its exceptions will contain the aggregation of the set of unwrapped exceptions from each of the supplied tasks.

If none of the supplied tasks faulted but at least one of them was canceled, the returned task will end in the Canceled state.

If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state.

If the supplied array/enumerable contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller.

WhenAll(Task[])

Creates a task that will complete when all of the Task objects in an array have completed.

Exceptions

  • ArgumentException: The tasks array contained a null task.

The following example creates a set of tasks that ping the URLs in an array. The tasks are stored in a List<Task> collection that is converted to an array and passed to the WhenAll(IEnumerable<Task>) method. After the call to the Wait method ensures that all threads have completed, the example examines the Task.Status property to determine whether any tasks have faulted.

The call to WhenAll(Task[]) method does not block the calling thread.

WhenAll<TResult>(IEnumerable<Task<TResult>>)

Creates a task that will complete when all of the Task<TResult> objects in an enumerable collection have completed.

Type Parameters

  • TResult: The type of the completed task.

The following example creates ten tasks, each of which instantiates a random number generator that creates 1,000 random numbers between 1 and 1,000 and computes their mean. The Delay(Int32) method is used to delay instantiation of the random number generators so that they are not created with identical seed values. The call to the WhenAll method then returns an Int64 array that contains the mean computed by each task. These are then used to calculate the overall mean.

In this case, the ten individual tasks are stored in a List<T> object. List<T> implements the IEnumerable<T> interface.

The call to WhenAll<TResult>(IEnumerable<Task<TResult>>) method does not block the calling thread. However, a call to the returned Result property does block the calling thread.

If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state. The Task<TResult>.Result property of the returned task will be set to an array containing all of the results of the supplied tasks in the same order as they were provided (e.g. if the input tasks array contained t1, t2, t3, the output task's Task<TResult>.Result property will return an TResult[] where arr[0] == t1.Result, arr[1] == t2.Result, and arr[2] == t3.Result) .

If the tasks argument contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller. The returned TResult[] will be an array of 0 elements.

WhenAll<TResult>(Task<TResult>[])

Creates a task that will complete when all of the Task<TResult> objects in an array have completed.

The following example creates ten tasks, each of which instantiates a random number generator that creates 1,000 random numbers between 1 and 1,000 and computes their mean. In this case, the ten individual tasks are stored in a Task<Int64> array. The Delay(Int32) method is used to delay instantiation of the random number generators so that they are not created with identical seed values. The call to the WhenAll method then returns an Int64 array that contains the mean computed by each task. These are then used to calculate the overall mean.

The call to WhenAll<TResult>(Task<TResult>[]) method does not block the calling thread. However, a call to the returned Result property does block the calling thread.

If none of the tasks faulted and none of the tasks were canceled, the resulting task will end in the RanToCompletion state. The Result of the returned task will be set to an array containing all of the results of the supplied tasks in the same order as they were provided (e.g. if the input tasks array contained t1, t2, t3, the output task's Result will return an TResult[] where arr[0] == t1.Result, arr[1] == t2.Result, and arr[2] == t3.Result) .

If the supplied array/enumerable contains no tasks, the returned task will immediately transition to a RanToCompletion state before it's returned to the caller. The returned TResult[] will be an array of 0 elements.


