Parallel & Sequential Execution
Mastering the timing of asynchronous operations is the difference between a sluggish UI and a high-performance engine. Learn to orchestrate complex data flows using concurrency and parallelism.
The Async Waterfall Problem
A common mistake in modern JavaScript is creating an "Async Waterfall." This happens when you await multiple independent operations in series. If each takes 2 seconds, three operations will take 6 seconds, even though they don't depend on each other.
// ❌ Sequential (Linear Execution) - ~6 seconds
async function loadSequential() {
  const user = await fetchUser();   // Wait 2s
  const posts = await fetchPosts(); // Wait 2s
  const meta = await fetchMeta();   // Wait 2s
  return { user, posts, meta };
}

// ✅ Parallel (Concurrent Execution) - ~2 seconds
async function loadParallel() {
  // Start all requests simultaneously
  const [user, posts, meta] = await Promise.all([
    fetchUser(),
    fetchPosts(),
    fetchMeta()
  ]);
  return { user, posts, meta };
}

Resilience vs. Speed
Choosing the right orchestration method depends on your failure model. If you need all the data to render (like a bank transaction), Promise.all is correct. If partial data is acceptable (like a dashboard with multiple independent widgets), Promise.allSettled is the standard.
// 1. All-or-Nothing (Promise.all)
// Rejects immediately if ANY request fails
const results = await Promise.all([p1, p2, p3]);

// 2. Resilient (Promise.allSettled)
// Returns a status object for every promise, never throws
const statuses = await Promise.allSettled([p1, p2, p3]);
const successful = statuses
  .filter(r => r.status === 'fulfilled')
  .map(r => r.value);

Controlled Concurrency (Pooling)
While Promise.all is fast, running 1,000 requests simultaneously can crash a server or exhaust browser memory. Concurrency pooling allows you to process large datasets in controlled "batches," maintaining speed without overwhelming resources.
// --- Controlled Concurrency Pattern ---
async function processInBatches(items, limit = 5) {
  const results = [];
  for (let i = 0; i < items.length; i += limit) {
    const batch = items.slice(i, i + limit);
    const batchResults = await Promise.all(
      batch.map(item => processItem(item))
    );
    results.push(...batchResults);
  }
  return results;
}

// Process 100 images, but only 5 at a time
const allResults = await processInBatches(imageUrls, 5);

Technical Insight: CPU vs I/O Bound
JavaScript is single-threaded for CPU work, but I/O (network, disk) is handled by the underlying platform (libuv in Node.js, the networking stack in the browser). When you run "parallel" requests, you aren't multi-threading your code; you are leveraging the platform's ability to keep multiple network sockets in flight simultaneously.
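To make the distinction concrete, here is a minimal sketch; heavyHash is a hypothetical stand-in for any synchronous, CPU-heavy function:

// Hypothetical CPU-bound workload (synchronous, blocks the thread)
function heavyHash() {
  let x = 0;
  for (let i = 0; i < 1e9; i++) x += i % 7;
  return x;
}

// I/O-bound: both timers wait concurrently -> ~1 second total
await Promise.all([
  new Promise(resolve => setTimeout(resolve, 1000)),
  new Promise(resolve => setTimeout(resolve, 1000))
]);

// CPU-bound: Promise.all does NOT help here; each call blocks
// the single thread, so the two hashes still run back to back
await Promise.all([
  Promise.resolve().then(heavyHash),
  Promise.resolve().then(heavyHash)
]);

For genuine CPU parallelism you would need Worker threads; Promise.all only overlaps waiting, not computing.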
Efficiency Checklist:
- ✅ **Context:** Only use sequential `await` if Task B depends on Task A.
- ✅ **Batching:** Use pooling for operations involving more than 10-20 concurrent requests.
- ✅ **Resilience:** Favor `allSettled` for non-critical fallback data.
- ✅ **Racing:** Use `Promise.race` for implementing timeouts or finding the fastest server (see the sketch after this list).
- ❌ **Anti-Pattern:** Never start a promise without awaiting or catching it (a "floating" promise).
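As referenced in the Racing item above, here is a minimal timeout sketch built on Promise.race; the withTimeout helper and the 3-second budget are illustrative, not part of any library:

// Settle with the original promise, or reject after `ms` milliseconds
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; clear the timer either way
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: give fetchUser() a 3-second budget
const user = await withTimeout(fetchUser(), 3000);

Note that the losing promise is not cancelled; it keeps running in the background, so pair this with an AbortController when the underlying operation supports cancellation.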