Table of Contents
- Streaming Massive Data in C# Web Applications
- Is the Solution IAsyncEnumerable in C#?
- Demo Project: Using IAsyncEnumerable
- Download Large Datasets for Testing
- Front-end: Testing Streaming and Non-Streaming Endpoints
- Loading the Large CSV Data File
- The Regular Non-Streaming Endpoint
- Streaming Endpoint Using IAsyncEnumerable
- Synchronous vs Asynchronous: Which One Wins?
- Task of IEnumerable vs IAsyncEnumerable in C#
- Real-World Applications
- Final Thoughts
Streaming Massive Data in C# Web Applications
If you’ve ever worked with large datasets, you’ll be well aware of the challenges when serving that content to front-end clients. Large datasets can cause issues when returned via APIs, such as:
- Slow response times (time-to-first-byte) – the client has to wait longer than desired before seeing any data arrive at all.
- High memory usage – caused by the code on your server instructing your application to hold entire datasets in RAM.
- User frustration – Google research shows that bounce rates climb as page load times increase. And that’s a big problem for engagement.
Is the Solution IAsyncEnumerable in C#?
Using IAsyncEnumerable in C# is an effective solution to this problem and applies to a wide range of common use cases. With it, you can begin streaming substantial data immediately, reduce memory usage, and even build “real-time-ish” APIs – all without needing WebSockets.
What are WebSockets?
WebSockets are a communication protocol that allows persistent, two-way connections between clients and a server over a single TCP connection. Connections are kept open so data can be sent either way at any time, reducing overheads. But they can come with stability problems if conditions aren’t right (e.g., interference from firewalls, proxies, and network changes).
Unlike regular IEnumerable, IAsyncEnumerable<T> lets you yield results asynchronously without blocking threads, so it can produce partial results almost immediately. It’s great for reading from slow I/O sources like disks, network resources, and databases without buffering. Combining IAsyncEnumerable in C# with NDJSON (newline-delimited JSON – one complete JSON object per line) makes it even easier for browsers and APIs to consume data incrementally.
Read more about IAsyncEnumerable<T> (Microsoft Docs).
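Before diving into the demo, here’s a minimal, self-contained sketch of the pattern (illustrative only – the names here aren’t part of the demo project). The producer yields each item as it becomes available, and the consumer receives it straight away with await foreach:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class AsyncStreamSketch
{
    // An async iterator: each value is handed to the caller as soon as it's
    // produced, instead of building the whole sequence in memory first.
    static async IAsyncEnumerable<int> GetNumbersAsync()
    {
        for (var i = 1; i <= 3; i++)
        {
            await Task.Delay(100); // simulate slow I/O (disk, network, database)
            yield return i;        // available to the caller immediately
        }
    }

    static async Task Main()
    {
        // The caller consumes items as they arrive, without blocking a thread.
        await foreach (var n in GetNumbersAsync())
            Console.WriteLine(n);
    }
}
```

The same shape scales from three integers to a million CSV rows – the consumer never has to wait for the full set to materialise.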
Demo Project: Using IAsyncEnumerable
Let’s explore a real-world example using IAsyncEnumerable to stream a large dataset. You can view the full code on the GitHub IAsyncEnumerable Streaming Demo project page if you’d like to follow along in your IDE.

await foreach in StreamNdjson.
Included in the demo project are both streaming and non-streaming examples, along with a simple front-end webpage that displays the returned data in the client browser. It’ll give you a really good feel for the impact IAsyncEnumerable in C# can make when dealing with large datasets.
Download Large Datasets for Testing
For the example, I’ve used the large product datasets at Datablist. The demo repository only includes a 100,000-row CSV in the App_Data folder because of GitHub’s file size limits, but I recommend using one of the larger 1,000,000+ row datasets – download one and place it alongside the existing file.

Next, modify the FilePath property of DemoOptions.cs in the Models folder to use the 1m record CSV:
public sealed class DemoOptions
{
    public string FilePath { get; init; } = "App_Data\\products-1000000.csv";
}
Front-end: Testing Streaming and Non-Streaming Endpoints
Launching the project via your IDE will serve the wwwroot\index.html page. The Program.cs file uses app.UseDefaultFiles() to serve static files from that folder by default. You’ll see two buttons there – “Start Streaming” and “Load All at Once” – which demonstrate the streaming and non-streaming endpoints, respectively. Results from both requests are shown in the black pane so you can clearly observe what data the browser receives.
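For reference, the static-file setup described above boils down to something like the following minimal hosting sketch (the demo’s actual Program.cs may differ – this assumes a .NET 6+ project with implicit usings enabled):

```csharp
// Minimal ASP.NET Core setup that serves wwwroot\index.html by default.
// A sketch of the relevant pieces only.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

app.UseDefaultFiles();  // rewrites "/" to "/index.html"
app.UseStaticFiles();   // serves files from wwwroot
app.MapControllers();   // maps the api/products/* endpoints

app.Run();
```

Note that UseDefaultFiles() only rewrites the URL; UseStaticFiles() must follow it to actually serve the file.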

wwwroot\index.html default page of the IAsyncEnumerable in C# demo project.
Loading the Large CSV Data File
In the ProductsController.cs file I’ve added an asynchronous method to load the CSV file from Datablist and return its contents. This method supplies both the non-streaming api/products/all and streaming api/products/stream endpoints in much the same way – when the client requests either one, results are either served incrementally using IAsyncEnumerable, or all at once as an IActionResult containing a List<Product>.
private static async IAsyncEnumerable<Product> StreamProductsFromFileAsync(string path, [EnumeratorCancellation] CancellationToken ct = default)
{
    await using var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize: 1 << 20, useAsync: true);
    using var reader = new StreamReader(fs);
    using var csv = new CsvReader(reader, new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        DetectDelimiter = true,
        TrimOptions = TrimOptions.Trim
    });
    csv.Context.RegisterClassMap(ProductMap.Instance);
    await foreach (var record in csv.GetRecordsAsync<Product>(ct))
        yield return record;
}
You’ll spot a reference to CsvReader in the above code. It’s part of the CsvHelper NuGet package, which makes working with comma (and other) delimited files super simple. I added the latest version (at the time of writing) to the DataStreamingDemo.Web.csproj file:
<PackageReference Include="CsvHelper" Version="33.1.0" />
The library requires mappings to tell it which data fields in the CSV to load into the Product model. I’ve placed mine in the Mappings folder of the solution:
using System.Globalization;
using CsvHelper.Configuration;
using DataStreamingDemo.Models;

namespace DataStreamingDemo.Web.Mappings
{
    public class ProductMap : ClassMap<Product>
    {
        public static readonly ProductMap Instance = new();

        public ProductMap()
        {
            AutoMap(CultureInfo.InvariantCulture);
            Map(m => m.Index).Name("Index");
            Map(m => m.Name).Name("Name");
            Map(m => m.Description).Name("Description");
            Map(m => m.Brand).Name("Brand");
            Map(m => m.Category).Name("Category");
            Map(m => m.Price).Name("Price");
            Map(m => m.Currency).Name("Currency");
            Map(m => m.Stock).Name("Stock");
            Map(m => m.EAN).Name("EAN");
            Map(m => m.Colour).Name("Color");
            Map(m => m.Size).Name("Size");
            Map(m => m.Availability).Name("Availability");
            Map(m => m.InternalId).Name("Internal ID");
        }
    }
}
products-100000.csv data file.
I’ve mapped all of the fields in the CSV file to the Product.cs model class, but you can use as many or as few as you need to meet your use case.
The Regular Non-Streaming Endpoint
The GetAll method, which is mapped to the /api/products/all endpoint, loads the entire products CSV file into memory before sending the response to the browser.
[HttpGet("all")]
public async Task<IActionResult> GetAll([FromServices] IWebHostEnvironment env, CancellationToken ct, [FromQuery] int? take = null)
{
    var filePath = Path.Combine(env.ContentRootPath, _options.Value.FilePath);
    if (filePath is null || !Path.Exists(filePath)) return NoContent();

    var sw = Stopwatch.StartNew();
    var results = new List<Product>(take ?? 64_000);

    await foreach (var p in StreamProductsFromFileAsync(filePath, ct))
    {
        results.Add(p);
        if (take.HasValue && results.Count >= take.Value) break;
    }

    sw.Stop();
    Response.Headers["X-Elapsed-Milliseconds"] = sw.ElapsedMilliseconds.ToString();
    return Ok(results);
}
Processing the data and returning it in one request is perfectly acceptable for scenarios where you’re working with small datasets, or serving up paged lists of 20 products at a time. In those situations, the data payload from server to client is relatively manageable. With caching implemented, it’d be even faster.
But as your data grows, requests become increasingly sluggish, and memory usage spikes sharply with large datasets. If you’re interested in how common C# collection types perform against each other for in-memory operations, check out my other article on using the HashSet in C#.
Streaming Endpoint Using IAsyncEnumerable
The StreamAll method below is mapped to the /api/products/stream endpoint and calls the same underlying StreamProductsFromFileAsync method to retrieve data asynchronously as IAsyncEnumerable<Product>, with each entity written to the response serialised as NDJSON:
[HttpGet("stream")]
public async Task<IActionResult> StreamAll([FromServices] IWebHostEnvironment env, CancellationToken ct)
{
    var filePath = Path.Combine(env.ContentRootPath, _options.Value.FilePath);
    if (filePath is null || !Path.Exists(filePath)) return NoContent();

    Response.StatusCode = StatusCodes.Status200OK;
    Response.ContentType = "application/x-ndjson; charset=utf-8";
    Response.Headers.CacheControl = "no-store";
    Response.Headers["X-Accel-Buffering"] = "no";
    await Response.StartAsync(ct);

    try
    {
        await foreach (var product in StreamProductsFromFileAsync(filePath, ct))
        {
            var json = JsonSerializer.Serialize(product);
            await Response.WriteAsync(json, ct);
            await Response.WriteAsync("\n", ct);
            await Response.Body.FlushAsync(ct); // push each record to the client immediately
        }
    }
    catch
    {
        // The client disconnected or cancelled mid-stream; nothing more to send.
    }
    return new EmptyResult();
}
The line await Response.Body.FlushAsync(ct) plays a particularly important role here in that it flushes each record to the client immediately. Also present are two headers which attempt to stop proxies, CDNs, and browsers from caching or buffering the stream:
Response.Headers.CacheControl = "no-store";
Response.Headers["X-Accel-Buffering"] = "no";
Custom header X-Accel-Buffering is used by nginx to disable response buffering. Without it, the stream may get buffered and only flush after it’s complete, which defeats the whole point of streaming in the first place. I’ve added it for reference, but it shouldn’t be an issue running this demo locally using Kestrel.
Synchronous vs Asynchronous: Which One Wins?
The only thing left is to launch the demo project and compare how the two endpoints behave when called by a client browser. I’ll use Chrome for this test, but you can replicate the same behaviour in any browser. Let’s begin with the 100k CSV file.
Synchronous Response
Clicking the button to load all data at once, you can see there was a 265ms delay in waiting for a response from the server. Content finally finished loading at 628ms and the overall response time was 895ms.

Asynchronous Response Using IAsyncEnumerable in C#
This time, by selecting to stream the data using IAsyncEnumerable in C#, the server response time before starting to return data was quicker at 181ms. The content download time was slightly longer this time at 1.30s (streaming) versus the 628ms (non-streaming).

Testing both of these methods multiple times in the same browser yielded the following results:

A clear theme emerges, and it tells exactly the story you’d expect to see:
- Synchronous calls return the full dataset faster overall, but with longer waits before the user sees anything.
- Asynchronous calls using IAsyncEnumerable in C# return first results to the client far sooner, but the per-record overheads increase both the response size and the total time.
It could be argued that the trade-off in responsiveness for the client is worthwhile, but this should be weighed up based on use case. And it’s worth noting that with even larger datasets – this time using the 1-million-product CSV – proper use of IAsyncEnumerable in C# can really save the day. In my testing, the streaming method returned meaningful results almost immediately, while loading the entire dataset failed entirely:

Task of IEnumerable vs IAsyncEnumerable in C#
It’s important to note a common confusion between returning a Task<IEnumerable<T>> and an IAsyncEnumerable<T> in C#. They behave quite differently, and for good reason.
When you return a Task<IEnumerable<T>> you’re really saying “I’ll do some work asynchronously, then deliver you the entire collection once it’s finished”. That means the whole result set has to be built in memory before the caller can use it. On the other hand, IAsyncEnumerable is about streaming the data, meaning the caller can start consuming items as soon as they’re available, without waiting for the full set to be materialised.
The difference is important: the first is about deferred execution of a batch, while the second is about asynchronous iteration over a sequence, which is far better suited to large data or scenarios where you want results to flow as they’re produced.
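To make the contrast concrete, here’s a minimal illustrative sketch (the names here are hypothetical, not from the demo project). Both methods simulate 100ms of I/O per item, but only the second hands anything to the caller before all the work is done:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class BatchVsStream
{
    // Batch: nothing is available until the whole list has been built.
    static async Task<IEnumerable<int>> GetAllAsync()
    {
        var results = new List<int>();
        for (var i = 1; i <= 3; i++)
        {
            await Task.Delay(100); // simulate slow I/O per item
            results.Add(i);
        }
        return results;            // caller waits ~300ms, then gets everything
    }

    // Stream: each item reaches the caller as soon as it's produced.
    static async IAsyncEnumerable<int> StreamAllAsync()
    {
        for (var i = 1; i <= 3; i++)
        {
            await Task.Delay(100); // simulate slow I/O per item
            yield return i;        // caller sees items at ~100ms, ~200ms, ~300ms
        }
    }

    static async Task Main()
    {
        foreach (var n in await GetAllAsync()) Console.WriteLine(n);       // all at once
        await foreach (var n in StreamAllAsync()) Console.WriteLine(n);    // one by one
    }
}
```

The batch version also holds the entire list in memory before returning it – exactly the behaviour that caused the memory spikes seen earlier.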
Real-World Applications
There are tons of real-world uses for IAsyncEnumerable in your projects – and some you’ll be really familiar with already:
- Log monitoring – tail logs in real time from the browser, such as those you see in Azure DevOps pipelines or the Azure Portal.
- Finance – deliver market data feeds without buffering, such as stock market indexes.
- E-commerce – stream product catalogues between back-end systems and to partners.
Avoiding high memory usage is a key performance benefit of using IAsyncEnumerable in C#. If creating lean, performant applications with maximum client responsiveness is an imperative, then streaming data should be something you bear in mind over traditional methods.
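For the back-end-to-back-end scenarios above, the consuming side can stream too. Here’s a hedged sketch of a C# client reading the NDJSON feed line by line with HttpClient – the endpoint URL and the simplified Product shape are assumptions for illustration:

```csharp
// Sketch: consuming an NDJSON stream incrementally from another service.
using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public record Product(string Name, decimal Price); // simplified, hypothetical shape

class StreamingClient
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // ResponseHeadersRead starts processing as soon as headers arrive,
        // instead of buffering the entire response body first.
        using var response = await http.GetAsync(
            "https://localhost:5001/api/products/stream",
            HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        await using var stream = await response.Content.ReadAsStreamAsync();
        using var reader = new StreamReader(stream);

        // Each NDJSON line is a complete JSON object, so it can be
        // deserialised and processed the moment it arrives.
        string? line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            var product = JsonSerializer.Deserialize<Product>(line);
            Console.WriteLine(product?.Name);
        }
    }
}
```

This mirrors what the demo’s front-end does in the browser: process each record as it lands rather than waiting for the response to complete.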
Final Thoughts
That’s it – I hope you enjoyed reading about streaming massive data with IAsyncEnumerable in C#. Next time you’re building an application or API that involves large datasets, have a go at implementing IAsyncEnumerable with NDJSON. It’ll make them so much faster, lighter, and more responsive for your end users and system integrations.
Full code for this IAsyncEnumerable in C# demo on GitHub.

