
Java CompletableFuture, Part 2: Composition
The critical difference between thenApply and thenCompose, how to run independent operations in parallel, and why ForkJoinPool.commonPool() is not safe for production.
The Power of Composition
Part 1 of this series covered the basics. The real value of CompletableFuture comes from composing multiple async operations together.
Consider a typical dashboard load:
- Fetch user data (100ms)
- Fetch recent orders (150ms)
- Fetch payment history (120ms)
Sequential: 100 + 150 + 120 = 370ms. Parallel: max(100, 150, 120) = 150ms. The same operations, 2.5x faster, with smarter orchestration.
The Critical Difference: thenApply vs thenCompose
This is the most common source of confusion with CompletableFuture. The rule is simple once you see it.
thenApply: Synchronous transformation
Use when your transformation does not involve another async call:
CompletableFuture<String> emailFuture = fetchUserAsync(userId)
.thenApply(user -> user.getEmail()); // User -> String (sync)
Analogous to Stream.map(). The function runs synchronously after the previous stage completes.
fetchUserAsync(userId)
.thenApply(user -> user.getUsername()) // User -> String
.thenApply(name -> name.toUpperCase()) // String -> String
.thenApply(name -> "Hello, " + name); // String -> String
thenCompose: Asynchronous chaining
Use when your transformation returns another CompletableFuture:
CompletableFuture<List<Order>> ordersFuture = fetchUserAsync(userId)
.thenCompose(user -> fetchOrdersAsync(user.getId())); // User -> CF<List<Order>>
Analogous to Stream.flatMap(). It flattens the nested future so you do not end up with a CompletableFuture<CompletableFuture<T>>.
The rule:
- If your function returns T (a regular value), use thenApply
- If your function returns CompletableFuture<T>, use thenCompose
The Classic Mistake: Nested Futures
This is the bug that appears most often in code reviews:
// Bad: using thenApply when thenCompose is needed
CompletableFuture<CompletableFuture<List<Order>>> nested = fetchUserAsync(userId)
.thenApply(user -> fetchOrdersAsync(user.getId()));
// Result: CompletableFuture<CompletableFuture<List<Order>>>
You have wrapped a future inside another future. Unwrapping requires two join() calls:
List<Order> orders = nested.join().join(); // Avoid this
The fix
// Good: thenCompose flattens automatically
CompletableFuture<List<Order>> flat = fetchUserAsync(userId)
.thenCompose(user -> fetchOrdersAsync(user.getId()));
// Result: CompletableFuture<List<Order>>
Real-World Example: Profile Enrichment
A concrete case: fetch a user, then fetch their preferences and loyalty points, and combine everything into a profile object.
Sequential approach
public CompletableFuture<UserProfile> enrichProfile(Long userId) {
return fetchUser(userId)
.thenCompose(user ->
fetchPreferences(user.getId())
.thenCompose(prefs ->
fetchLoyaltyPoints(user.getId())
.thenApply(points ->
new UserProfile(user, prefs, points)
)
)
);
}
This works, but it is sequential. Each call waits for the previous one, even though the preferences and loyalty-points fetches do not depend on each other.
Parallel approach
public CompletableFuture<UserProfile> enrichProfileParallel(Long userId) {
return fetchUser(userId)
.thenCompose(user -> {
// Both launch in parallel — neither depends on the other
CompletableFuture<List<String>> prefsFuture = fetchPreferences(user.getId());
CompletableFuture<Integer> pointsFuture = fetchLoyaltyPoints(user.getId());
return prefsFuture.thenCombine(pointsFuture, (prefs, points) ->
new UserProfile(user, prefs, points)
);
});
}
Same result, but preferences and loyalty points fetch concurrently.
Parallel Execution: allOf, anyOf, thenCombine
For multiple independent operations, these methods give you direct control over parallel coordination.
thenCombine: Combine exactly two futures
CompletableFuture<User> userFuture = fetchUser(userId);
CompletableFuture<List<Order>> ordersFuture = fetchOrders(userId);
CompletableFuture<String> summary = userFuture.thenCombine(ordersFuture,
(user, orders) -> user.getName() + " has " + orders.size() + " orders"
);
Both futures run in parallel. The combiner function runs when both complete.
allOf: Wait for three or more futures
CompletableFuture<User> userFuture = fetchUser(userId);
CompletableFuture<List<Order>> ordersFuture = fetchOrders(userId);
CompletableFuture<List<Payment>> paymentsFuture = fetchPayments(userId);
CompletableFuture<DashboardData> dashboard =
CompletableFuture.allOf(userFuture, ordersFuture, paymentsFuture)
.thenApply(ignored -> new DashboardData(
userFuture.join(),
ordersFuture.join(),
paymentsFuture.join()
));
allOf returns CompletableFuture<Void>. You extract results from the original futures inside thenApply. By the time thenApply runs, all three futures are already complete, so join() does not block.
anyOf: First result wins
CompletableFuture<User> primaryService = fetchFromPrimary(userId);
CompletableFuture<User> backupService = fetchFromBackup(userId);
CompletableFuture<User> fastest = CompletableFuture.anyOf(primaryService, backupService)
.thenApply(result -> (User) result); // anyOf returns Object
Useful for redundant requests, cache-aside patterns (racing the cache against the database), or hedging against slow responders.
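The cache-racing pattern mentioned above can be sketched as follows. Note that fetchFromCache and fetchFromDatabase are hypothetical stand-ins with simulated latencies, not real APIs:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class CacheRace {
    // Hypothetical lookup: the cache answers in ~10ms.
    static CompletableFuture<String> fetchFromCache(long id) {
        return CompletableFuture.supplyAsync(() -> {
            sleep(10);
            return "user-" + id + " (cache)";
        });
    }

    // Hypothetical lookup: the database answers in ~200ms.
    static CompletableFuture<String> fetchFromDatabase(long id) {
        return CompletableFuture.supplyAsync(() -> {
            sleep(200);
            return "user-" + id + " (db)";
        });
    }

    static void sleep(long ms) {
        try {
            TimeUnit.MILLISECONDS.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // anyOf completes as soon as the first source answers; the cast is
        // needed because anyOf returns CompletableFuture<Object>.
        String fastest = (String) CompletableFuture
                .anyOf(fetchFromCache(42L), fetchFromDatabase(42L))
                .join();
        System.out.println(fastest);
    }
}
```

With these simulated latencies the cache wins the race, but nothing cancels the slower database call; in a real system you would still pay for it unless you cancel it explicitly.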
Why Not ForkJoinPool.commonPool()
By default, CompletableFuture.supplyAsync() uses ForkJoinPool.commonPool(). This is convenient for quick examples but problematic in production.
The problems:
- Shared across the entire JVM. Every library that uses CompletableFuture shares this pool. One misbehaving dependency can starve your application.
- Sized for CPU-bound work. Pool size is CPU cores - 1. For I/O-bound work (database calls, HTTP requests), this is far too small.
- No visibility. Threads are named ForkJoinPool.commonPool-worker-N, which makes debugging stuck or slow operations much harder.
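A quick way to see the naming problem for yourself is to ask which thread actually ran a default supplyAsync task:

```java
import java.util.concurrent.CompletableFuture;

public class CommonPoolName {
    public static void main(String[] args) {
        // On a typical multi-core JVM this prints something like
        // "ForkJoinPool.commonPool-worker-1" — an anonymous shared thread.
        // (On a single-core JVM, CompletableFuture falls back to a
        // thread-per-task executor instead of the common pool.)
        String worker = CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName())
                .join();
        System.out.println(worker);
    }
}
```

That generic name is all a thread dump gives you when a task is stuck, which is why the dedicated executors below use explicit thread names.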
Custom executors
ExecutorService ioExecutor = Executors.newFixedThreadPool(
Runtime.getRuntime().availableProcessors() * 2,
new ThreadFactory() {
private final AtomicInteger counter = new AtomicInteger(0);
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r, "io-pool-" + counter.incrementAndGet());
t.setDaemon(true);
return t;
}
}
);
CompletableFuture.supplyAsync(() -> database.query(), ioExecutor);
Thread pool sizing
- I/O-bound work: CPU × 2-4 threads (e.g. 16-32 on an 8-core machine)
- CPU-bound work: CPU threads (e.g. 8 on an 8-core machine)
- Mixed workloads: use separate pools (I/O pool at 2-4× cores, CPU pool at core count)
For I/O-bound work, threads spend most of their time waiting. More threads means more concurrent operations can be in flight.
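The mixed-workload advice above can be sketched like this — two pools sized from the core count, each running the kind of work it is sized for. The sleep stands in for blocking I/O, and 3× cores is an arbitrary point in the 2-4× range:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MixedPools {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Separate pools: CPU pool at core count, I/O pool at 3x cores.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);
        ExecutorService ioPool = Executors.newFixedThreadPool(cores * 3);

        // Simulated blocking fetch runs on the I/O pool.
        CompletableFuture<String> fetched = CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(50); // stands in for a database/HTTP call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "payload";
        }, ioPool);

        // Simulated computation runs on the CPU pool.
        CompletableFuture<Integer> computed = CompletableFuture.supplyAsync(
                () -> Integer.bitCount(123_456_789), cpuPool);

        String result = fetched.thenCombine(computed, (p, c) -> p + ":" + c).join();
        System.out.println(result); // prints "payload:16"

        cpuPool.shutdown();
        ioPool.shutdown();
    }
}
```

Keeping the pools separate means a burst of slow I/O cannot steal threads from CPU-bound work, and vice versa.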
Virtual Threads (Java 21+)
Java 21 introduced virtual threads, which change the equation for I/O-bound concurrency:
Executor virtualExecutor = Executors.newVirtualThreadPerTaskExecutor();
CompletableFuture.supplyAsync(() -> database.query(), virtualExecutor);
Virtual threads are lightweight enough that you can create millions of them. When a virtual thread blocks on I/O, the underlying OS thread is released to do other work. This makes blocking calls cheap enough that pool sizing for I/O-bound work largely stops being a problem.
// Fine with virtual threads — blocking does not waste an OS thread
CompletableFuture.supplyAsync(() -> {
User user = database.query();
return user;
}, virtualExecutor);
Virtual threads are well-suited for I/O-bound and high-concurrency scenarios. For CPU-bound work, platform threads remain the right choice.
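To make the scale claim concrete, here is a sketch (Java 21+) that launches 10,000 concurrent blocking tasks on virtual threads — far more than any fixed platform-thread pool would comfortably allow, with Thread.sleep standing in for blocking I/O:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadScale {
    public static void main(String[] args) {
        // ExecutorService is AutoCloseable since Java 19; close() waits
        // for submitted tasks to finish.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            List<CompletableFuture<Integer>> futures = IntStream.range(0, 10_000)
                    .mapToObj(i -> CompletableFuture.supplyAsync(() -> {
                        try {
                            Thread.sleep(100); // simulated blocking I/O
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                        return i;
                    }, exec))
                    .toList();

            // All 10,000 sleeps overlap, so the whole batch finishes in
            // roughly 100ms plus overhead, not 10,000 x 100ms.
            int sum = futures.stream().mapToInt(CompletableFuture::join).sum();
            System.out.println("sum=" + sum);
        }
    }
}
```

Each task gets its own virtual thread, so blocking in one does not tie up an OS thread that the others need.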
Sequential vs Parallel: A Direct Comparison
// Bad: sequential — each operation waits for the previous
public CompletableFuture<Dashboard> loadSequential(Long userId) {
return fetchUser(userId) // 100ms
.thenCompose(user ->
fetchOrders(userId) // +150ms
.thenCompose(orders ->
fetchPayments(userId) // +120ms
.thenApply(payments ->
new Dashboard(user, orders, payments))));
}
// Total: ~370ms
// Good: parallel — independent operations run concurrently
public CompletableFuture<Dashboard> loadParallel(Long userId) {
CompletableFuture<User> userF = fetchUser(userId);
CompletableFuture<List<Order>> ordersF = fetchOrders(userId);
CompletableFuture<List<Payment>> paymentsF = fetchPayments(userId);
return CompletableFuture.allOf(userF, ordersF, paymentsF)
.thenApply(v -> new Dashboard(userF.join(), ordersF.join(), paymentsF.join()));
}
// Total: ~150ms
The same data, the same operations, 2.5x faster.
Quick Reference
- thenApply(fn): synchronous transformation; use when your function returns T
- thenCompose(fn): async chaining; use when your function returns CompletableFuture<T>
- thenCombine(cf, fn): combine exactly two futures when both complete
- allOf(cf...): wait for all futures to complete; returns CompletableFuture<Void>
- anyOf(cf...): first completed future wins; returns CompletableFuture<Object>
Key Takeaways
- thenApply for synchronous transformations, thenCompose for async chaining. This distinction prevents the most common class of composition bugs.
- Run independent operations in parallel. Use allOf or thenCombine to coordinate results.
- Do not use ForkJoinPool.commonPool() in production. Create dedicated executors with meaningful thread names.
- Size your pools for the workload: I/O-bound at 2-4× CPU cores, CPU-bound at core count.
- On Java 21+, virtual threads largely eliminate pool-sizing concerns for I/O-bound work.
Up Next: Error Handling and Spring Boot Integration
Async code can fail in non-obvious ways. Exceptions do not propagate the way you might expect, and partial failures in parallel operations need deliberate handling. Part 3 covers the error-handling trio (exceptionally, handle, whenComplete), how to deal with partial failures in allOf scenarios, Spring Boot executor configuration, and when @Async causes more problems than it solves.