Thinking Through Async Overhead in Dart
I’ve been programming a lot in Dart recently, and Dart code is often asynchronous by default. In fact, many of the standard examples I come across assume an async model right away.
As an example, let’s look at the output of an LLM when I asked it for code to read a file.
    import 'dart:convert';
    import 'dart:io';

    Future<void> main() async {
      final file = File('input.txt');
      if (!await file.exists()) {
        stderr.writeln('File not found: ${file.path}');
        exitCode = 1;
        return;
      }

      final lines = file
          .openRead()
          .transform(utf8.decoder)
          .transform(const LineSplitter());

      await for (final line in lines) {
        print(line);
      }
    }
At first glance, this code looks fine. We mark main as async, check that the file exists, write an error to stderr if it does not, open the file as a stream, decode the bytes as UTF-8, split the text into lines, and print each line.
An interesting side note: Dart strings are sequences of UTF-16 code units, similar to Windows wide strings. This is a legacy of Dart's browser/JavaScript past.
This version also streams the file line by line, which means it does not need to read the entire file into memory before it starts printing.
The code does exactly what you would want it to do.
But if you think about it some more, you start to notice that the control flow is still basically sequential. We check whether the file exists. Then we read the file. Then we print the lines. Nothing else is happening in this program while we wait.
For a small CLI tool, it is not that different from this synchronous version, minus the streaming behavior:
    import 'dart:io';

    void main() {
      final file = File('input.txt');
      if (!file.existsSync()) {
        stderr.writeln('File not found: ${file.path}');
        exitCode = 1;
        return;
      }

      final lines = file.readAsLinesSync();
      for (final line in lines) {
        print(line);
      }
    }
Now, in a different context, the asynchronous version might still be the right choice. But in the small CLI context I was working in, it got me wondering:
How much overhead am I paying by leaving the code asynchronous when the program is still being used sequentially?
Before getting into the benchmark, it is worth being clear about what async actually does.
An asynchronous function in Dart returns a Future<T>. A Future<T> represents a value that will be available later, or an error. When you write await, you are saying: start this operation, then pause this async function until the result is ready.
That is the important part. await pauses the current async function. It does not block the entire isolate. Other already-scheduled work can still run on that isolate while the operation is pending.
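A quick way to see this is a small sketch: we schedule a timer callback, then await a longer delay. The delays and messages here are my own, arbitrary choices.

```dart
import 'dart:async';

Future<void> main() async {
  // Schedule some other work on the event loop first.
  Timer(Duration(milliseconds: 10), () => print('timer fired'));

  print('before await');
  // Pausing here suspends main, but the isolate keeps servicing
  // the event loop, so the timer callback above still runs
  // while we wait.
  await Future.delayed(Duration(milliseconds: 50));
  print('after await');
}
```

Because the await only suspends main, "timer fired" prints in between "before await" and "after await" rather than after the program's other work is done.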
So this:
    await File('output/a.txt').writeAsString('A');
    await File('output/b.txt').writeAsString('B');
    await File('output/c.txt').writeAsString('C');
does not start all three writes at once. It starts a.txt, waits for that write to finish, then starts b.txt, waits for that write to finish, then starts c.txt.
That means this code is asynchronous, but it is still sequential.
Benchmarking Sequential Async Code Against Sync
I made a small Dart benchmark that contained two tests:
bin/write_sync.dart
bin/write_async_sequential.dart

The sync version does this:

    File('${outputDir.path}/file_$i.txt').writeAsStringSync('hello world!\n');

The async sequential version does this:

    await File('${outputDir.path}/file_$i.txt').writeAsString('hello world!\n');

This intentionally tests async used in a sequential context. It does not use Future.wait. That comes in a moment.
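For reference, the core of each benchmark loop might look roughly like this. This is my sketch, not the exact harness; the directory name, file names, and iteration count are assumptions.

```dart
import 'dart:io';

Future<void> main() async {
  const n = 1000;
  final outputDir = Directory('output')..createSync(recursive: true);

  // Sync version: each write blocks until the OS call returns.
  final syncWatch = Stopwatch()..start();
  for (var i = 0; i < n; i++) {
    File('${outputDir.path}/sync_$i.txt').writeAsStringSync('hello world!\n');
  }
  syncWatch.stop();
  print('sync:  ${syncWatch.elapsedMicroseconds / n} µs/write');

  // Async sequential version: each write is awaited before the
  // next one starts, so nothing overlaps.
  final asyncWatch = Stopwatch()..start();
  for (var i = 0; i < n; i++) {
    await File('${outputDir.path}/async_$i.txt').writeAsString('hello world!\n');
  }
  asyncWatch.stop();
  print('async: ${asyncWatch.elapsedMicroseconds / n} µs/write');
}
```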
The results were pretty telling. For tiny sequential file writes, async was roughly 2x slower on my machine:
10,000 writes:
sync: 57.444 µs/write
async: 119.898 µs/write
1,000 writes:
sync: 64.635 µs/write
async: 147.404 µs/write
50,000 writes:
sync: 53.124 µs/write
async: 108.618 µs/write

This does not mean async is bad. It just means async is not a zero-cost abstraction, and this benchmark intentionally exploited that fact.
The async version pays for things like Future creation, event-loop scheduling, async state-machine resumption, and async I/O bookkeeping. For large I/O operations, that overhead may not matter much. For tiny file writes, the actual payload is only 13 bytes, so the overhead around the write dominates the benchmark.
In other words, this was close to a worst-case scenario for async: many tiny sequential operations where the program has nothing else useful to do while waiting.
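One way to see that machinery cost in isolation, with no file I/O at all, is a toy micro-benchmark comparing a plain function call against an awaited async one. The function names and iteration count here are mine.

```dart
// A trivial operation, in sync and async form, so the only
// difference is the Future/await machinery.
int addSync(int a, int b) => a + b;
Future<int> addAsync(int a, int b) async => a + b;

Future<void> main() async {
  const n = 1000000;
  var sum = 0;

  final sw = Stopwatch()..start();
  for (var i = 0; i < n; i++) {
    sum += addSync(i, 1);
  }
  print('sync:  ${sw.elapsedMicroseconds} µs');

  sw.reset();
  for (var i = 0; i < n; i++) {
    // Each iteration creates a Future, schedules a continuation,
    // and resumes the async state machine.
    sum += await addAsync(i, 1);
  }
  print('async: ${sw.elapsedMicroseconds} µs');

  // Keep the loop results live so the work is not optimized away.
  print('checksum: $sum');
}
```

The absolute numbers will vary by machine, but the gap between the two loops is pure async overhead.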
Adding Future.wait
I then added another test using Future.wait.
Future.wait lets you start multiple asynchronous operations before waiting for all of them to complete. So instead of this:
    final aFuture = File('output/a.txt').writeAsString('A');
    final bFuture = File('output/b.txt').writeAsString('B');
    final cFuture = File('output/c.txt').writeAsString('C');

    await aFuture;
    await bFuture;
    await cFuture;

you can write this:

    await Future.wait([
      File('output/a.txt').writeAsString('A'),
      File('output/b.txt').writeAsString('B'),
      File('output/c.txt').writeAsString('C'),
    ]);
This isn’t just about syntactic sugar. The important point is that all three writes are started before we wait for them to finish.
That means this version can overlap independent I/O operations.
To test this I wrote a second benchmark, in which each iteration wrote three files. I tested three versions:

- synchronous sequential writes
- asynchronous sequential writes
- asynchronous writes grouped with Future.wait
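The Future.wait variant starts all three writes in an iteration before awaiting any of them, roughly like this (again a sketch under my own naming and iteration-count assumptions, not the exact harness):

```dart
import 'dart:io';

Future<void> main() async {
  const iterations = 1000;
  final outputDir = Directory('output')..createSync(recursive: true);

  final sw = Stopwatch()..start();
  for (var i = 0; i < iterations; i++) {
    // All three writes are started before we wait for any of them,
    // so the I/O can overlap within each iteration.
    await Future.wait([
      File('${outputDir.path}/a_$i.txt').writeAsString('hello world!\n'),
      File('${outputDir.path}/b_$i.txt').writeAsString('hello world!\n'),
      File('${outputDir.path}/c_$i.txt').writeAsString('hello world!\n'),
    ]);
  }
  sw.stop();
  print('${sw.elapsedMicroseconds / (iterations * 3)} µs/write');
}
```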
The results looked like this:
30,000 total tiny writes:

sync sequential: 61.316 µs/write
async sequential: 111.916 µs/write
async Future.wait, groups of 3: 64.254 µs/write

That result makes sense.
Sequential async was much slower than sync because it paid the async overhead while still doing the work one operation at a time.
But the Future.wait version nearly caught the synchronous version. It still had async overhead, but it was able to overlap three independent writes per iteration. Even with tiny files, that was enough to hide most of the cost.
I would not overstate the result though. This does not prove that async is always faster when grouped with Future.wait. It only shows that when async work is genuinely independent, scheduling it as independent work can matter.
It also shows the opposite point: blindly adding await everywhere can accidentally serialize work that could have overlapped, causing a major slowdown. As someone who doesn't write async code often, I find it easy to get tunnel vision and accidentally do this.
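A common shape of that mistake is awaiting inside a loop over independent operations, when collecting the futures and awaiting once would let them overlap. A sketch using a hypothetical fetch function (the name, the 100 ms delay, and the ids are all mine):

```dart
// Stand-in for an independent I/O operation, e.g. a network call.
Future<String> fetch(String id) async {
  await Future.delayed(const Duration(milliseconds: 100));
  return 'result for $id';
}

Future<void> main() async {
  final ids = ['a', 'b', 'c'];

  // Accidentally serialized: each fetch waits for the previous
  // one, so this takes roughly 300 ms in total.
  final serial = <String>[];
  for (final id in ids) {
    serial.add(await fetch(id));
  }

  // Overlapped: all three fetches start immediately, so this
  // takes roughly 100 ms in total.
  final overlapped = await Future.wait(ids.map(fetch));
  print(overlapped);
}
```

The two versions produce the same results; only the scheduling differs.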
Async, Sync, and Context
So, like most things in programming, async is not a silver bullet. Just as context matters when writing parallel code because of its overhead, context matters when working with async code. This is made somewhat more challenging by the fact that many modern languages make the canonical version of a function async, with the synchronous version either living in a different library or having 'Sync' tacked onto the function name.
But with a little planning, and some diligence, async can be a great tool for writing programs that respect the time and resources of users. And it doesn’t take much for the problem to become big enough that the benefits outweigh the overhead. So I’m grateful that it exists, and it is just one more thing that I can add to my programming toolkit.
Call To Action 📣
Hi 👋 my name is Diego Crespo and I like to talk about technology, niche programming languages, and AI. I have a [Twitter](https://twitter.com/deusinmach) [Mastodon](https://mastodon.social/deck/@DiegoCrespo), and [Threads](https://www.threads.net/@deusinmachinablog) if you’d like to follow me on other social media platforms. If you liked the article, consider liking and subscribing. And if you haven’t why not check out another article of mine listed below! Thank you for reading and giving me a little of your valuable time. A.M.D.G

