Software Engineering · February 10, 2026

Java CompletableFuture, Part 2: Composition

The critical difference between thenApply and thenCompose, how to run independent operations in parallel, and why ForkJoinPool.commonPool() is not safe for production.

java · async · concurrency · completablefuture · threading

The Power of Composition

Part 1 of this series covered the basics. The real value of CompletableFuture comes from composing multiple async operations together.

Consider a typical dashboard load:

  • Fetch user data (100ms)
  • Fetch recent orders (150ms)
  • Fetch payment history (120ms)

Sequential: 100 + 150 + 120 = 370ms. Parallel: max(100, 150, 120) = 150ms. The same operations, 2.5x faster, with smarter orchestration.
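The arithmetic above can be checked directly. This is a minimal sketch using sleeps as hypothetical stand-ins for the three dashboard calls; it launches all three on a small dedicated pool and measures wall-clock time:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelVsSequential {
    // Hypothetical stand-in for a remote call that takes `millis` to respond
    static String slowCall(String name, long millis) {
        try { TimeUnit.MILLISECONDS.sleep(millis); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return name;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        long start = System.nanoTime();
        CompletableFuture<String> user =
            CompletableFuture.supplyAsync(() -> slowCall("user", 100), pool);
        CompletableFuture<String> orders =
            CompletableFuture.supplyAsync(() -> slowCall("orders", 150), pool);
        CompletableFuture<String> payments =
            CompletableFuture.supplyAsync(() -> slowCall("payments", 120), pool);
        CompletableFuture.allOf(user, orders, payments).join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        pool.shutdown();
        // Bounded by the slowest call (~150ms), not the sum (~370ms)
        System.out.println("faster than sequential: " + (elapsedMs < 370));
    }
}
```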

The Critical Difference: thenApply vs thenCompose

This is the most common source of confusion with CompletableFuture. The rule is simple once you see it.

thenApply: Synchronous transformation

Use when your transformation does not involve another async call:

java
CompletableFuture<String> emailFuture = fetchUserAsync(userId)
    .thenApply(user -> user.getEmail());  // User -> String (sync)

Analogous to Stream.map(). The function runs synchronously after the previous stage completes.

java
fetchUserAsync(userId)
    .thenApply(user -> user.getUsername())     // User -> String
    .thenApply(name -> name.toUpperCase())     // String -> String
    .thenApply(name -> "Hello, " + name);      // String -> String

thenCompose: Asynchronous chaining

Use when your transformation returns another CompletableFuture:

java
CompletableFuture<List<Order>> ordersFuture = fetchUserAsync(userId)
    .thenCompose(user -> fetchOrdersAsync(user.getId()));  // User -> CF<List<Order>>

Analogous to Stream.flatMap(). It flattens the nested future so you do not end up with a CompletableFuture<CompletableFuture<T>>.
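The Stream analogy can be made concrete. A rough side-by-side, using `completedFuture` so the snippet is self-contained: `map`/`thenApply` take a function returning a plain value, while `flatMap`/`thenCompose` take a function returning another container and flatten it.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

public class MapFlatMapAnalogy {
    public static void main(String[] args) {
        // Stream.map: function returns a plain value
        List<Integer> mapped = Stream.of("a", "bb").map(String::length).toList();
        // Stream.flatMap: function returns a Stream; flatMap flattens it
        List<Character> flat = Stream.of("ab", "cd")
            .flatMap(s -> s.chars().mapToObj(c -> (char) c))
            .toList();

        // Same shape with CompletableFuture
        CompletableFuture<Integer> applied =
            CompletableFuture.completedFuture("hello").thenApply(String::length);
        CompletableFuture<Integer> composed =
            CompletableFuture.completedFuture("hello")
                .thenCompose(s -> CompletableFuture.supplyAsync(s::length));

        System.out.println(mapped + " " + flat);
        System.out.println(applied.join() + " " + composed.join());
    }
}
```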

The rule:

  • If your function returns T (a regular value), use thenApply
  • If your function returns CompletableFuture<T>, use thenCompose

The Classic Mistake: Nested Futures

This is the bug that appears most often in code reviews:

java
// Bad: using thenApply when thenCompose is needed
CompletableFuture<CompletableFuture<List<Order>>> nested = fetchUserAsync(userId)
    .thenApply(user -> fetchOrdersAsync(user.getId()));
// Result: CompletableFuture<CompletableFuture<List<Order>>>

You have wrapped a future inside another future. Unwrapping requires two join() calls:

java
List<Order> orders = nested.join().join();  // Avoid this

The fix

java
// Good: thenCompose flattens automatically
CompletableFuture<List<Order>> flat = fetchUserAsync(userId)
    .thenCompose(user -> fetchOrdersAsync(user.getId()));
// Result: CompletableFuture<List<Order>>

Real-World Example: Profile Enrichment

A concrete case: fetch a user, then fetch their preferences and loyalty points, and combine everything into a profile object.

Sequential approach

java
public CompletableFuture<UserProfile> enrichProfile(Long userId) {
    return fetchUser(userId)
        .thenCompose(user ->
            fetchPreferences(user.getId())
                .thenCompose(prefs ->
                    fetchLoyaltyPoints(user.getId())
                        .thenApply(points ->
                            new UserProfile(user, prefs, points)
                        )
                )
        );
}

This works, but it is sequential. Each call waits for the previous one, even though the preferences and loyalty-points fetches do not depend on each other.

Parallel approach

java
public CompletableFuture<UserProfile> enrichProfileParallel(Long userId) {
    return fetchUser(userId)
        .thenCompose(user -> {
            // Both launch in parallel — neither depends on the other
            CompletableFuture<List<String>> prefsFuture = fetchPreferences(user.getId());
            CompletableFuture<Integer> pointsFuture = fetchLoyaltyPoints(user.getId());
 
            return prefsFuture.thenCombine(pointsFuture, (prefs, points) ->
                new UserProfile(user, prefs, points)
            );
        });
}

Same result, but the preferences and loyalty-points calls now run concurrently.

Parallel Execution: allOf, anyOf, thenCombine

For multiple independent operations, these methods give you direct control over parallel coordination.

thenCombine: Combine exactly two futures

java
CompletableFuture<User> userFuture = fetchUser(userId);
CompletableFuture<List<Order>> ordersFuture = fetchOrders(userId);
 
CompletableFuture<String> summary = userFuture.thenCombine(ordersFuture,
    (user, orders) -> user.getName() + " has " + orders.size() + " orders"
);

Both futures run in parallel. The combiner function runs when both complete.

allOf: Wait for three or more futures

java
CompletableFuture<User> userFuture = fetchUser(userId);
CompletableFuture<List<Order>> ordersFuture = fetchOrders(userId);
CompletableFuture<List<Payment>> paymentsFuture = fetchPayments(userId);
 
CompletableFuture<DashboardData> dashboard =
    CompletableFuture.allOf(userFuture, ordersFuture, paymentsFuture)
        .thenApply(ignored -> new DashboardData(
            userFuture.join(),
            ordersFuture.join(),
            paymentsFuture.join()
        ));

allOf returns CompletableFuture<Void>. You extract results from the original futures inside thenApply. By the time thenApply runs, all three futures are already complete, so join() does not block.
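When all the futures have the same type, this extract-with-join pattern is often wrapped in a small helper that turns a `List<CompletableFuture<T>>` into a `CompletableFuture<List<T>>`. This is a common community idiom, not a JDK method; a minimal sketch:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Sequence {
    // List<CompletableFuture<T>> -> CompletableFuture<List<T>>
    static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> futures) {
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            // All futures are complete here, so join() does not block
            .thenApply(v -> futures.stream().map(CompletableFuture::join).toList());
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> fs = List.of(
            CompletableFuture.completedFuture(1),
            CompletableFuture.supplyAsync(() -> 2),
            CompletableFuture.completedFuture(3));
        System.out.println(sequence(fs).join());
    }
}
```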

anyOf: First result wins

java
CompletableFuture<User> primaryService = fetchFromPrimary(userId);
CompletableFuture<User> backupService = fetchFromBackup(userId);
 
CompletableFuture<User> fastest = CompletableFuture.anyOf(primaryService, backupService)
    .thenApply(result -> (User) result);  // anyOf returns Object

Useful for redundant requests, cache-aside patterns (racing cache against database), or hedging against slow responders.
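A runnable sketch of the race, with sleeps standing in for a fast cache and a slow database (the names and delays are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AnyOfRace {
    // Hypothetical data source that takes `delayMs` to respond
    static String fetch(String source, long delayMs) {
        try { TimeUnit.MILLISECONDS.sleep(delayMs); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return source;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> cache =
            CompletableFuture.supplyAsync(() -> fetch("cache", 10), pool);
        CompletableFuture<String> db =
            CompletableFuture.supplyAsync(() -> fetch("database", 200), pool);

        // anyOf returns CompletableFuture<Object>, so a cast is needed
        String winner = (String) CompletableFuture.anyOf(cache, db).join();
        System.out.println("winner: " + winner);
        pool.shutdown();
    }
}
```

Note that anyOf does not cancel the loser: the slower future keeps running to completion in the background, which matters if the losing call holds connections or other resources.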

Why Not ForkJoinPool.commonPool()

By default, CompletableFuture.supplyAsync() uses ForkJoinPool.commonPool(). This is convenient for quick examples but problematic in production.

The problems:

  1. Shared across the entire JVM. Every library that uses CompletableFuture shares this pool. One misbehaving dependency can starve your application.
  2. Sized for CPU-bound work. Pool parallelism is CPU cores - 1 (minimum 1). For I/O-bound work (database calls, HTTP requests), this is far too small.
  3. No visibility. Threads are named ForkJoinPool.commonPool-worker-N, which makes debugging stuck or slow operations much harder.
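You can inspect the sizing problem on your own machine. A small sketch that reports the common pool's parallelism next to the core count (the value can be overridden with the `java.util.concurrent.ForkJoinPool.common.parallelism` system property, so treat the cores-minus-one figure as the default, not a guarantee):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        System.out.println(cores + " cores, common pool parallelism " + parallelism);
        // At most `parallelism` blocking I/O calls can run at once on the default pool
        System.out.println("parallelism at least 1: " + (parallelism >= 1));
    }
}
```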

Custom executors

java
ExecutorService ioExecutor = Executors.newFixedThreadPool(
    Runtime.getRuntime().availableProcessors() * 2,
    new ThreadFactory() {
        private final AtomicInteger counter = new AtomicInteger(0);
 
        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, "io-pool-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        }
    }
);
 
CompletableFuture.supplyAsync(() -> database.query(), ioExecutor);

Thread pool sizing

  • I/O-bound work: CPU × 2-4 threads (e.g. 16-32 on an 8-core machine)
  • CPU-bound work: CPU threads (e.g. 8 on an 8-core machine)
  • Mixed workloads: use separate pools (I/O pool at 2-4× cores, CPU pool at core count)

For I/O-bound work, threads spend most of their time waiting. More threads means more concurrent operations can be in flight.
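The effect of undersizing is easy to demonstrate. A sketch with sleeps standing in for blocking I/O: eight 100ms calls on a 2-thread pool run in roughly four waves (~400ms), while the same calls on an 8-thread pool finish in one wave (~100ms):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class PoolSizingDemo {
    // Stand-in for a blocking I/O call (~100ms)
    static int blockingCall() {
        try { TimeUnit.MILLISECONDS.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return 1;
    }

    static long timeWith(int threads, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        List<CompletableFuture<Integer>> fs = IntStream.range(0, tasks)
            .mapToObj(i -> CompletableFuture.supplyAsync(PoolSizingDemo::blockingCall, pool))
            .toList();
        fs.forEach(CompletableFuture::join);
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long small = timeWith(2, 8);   // 8 tasks, 2 threads: ~4 waves
        long large = timeWith(8, 8);   // 8 tasks, 8 threads: ~1 wave
        System.out.println("2 threads: ~" + small + "ms, 8 threads: ~" + large + "ms");
        System.out.println("bigger pool faster: " + (large < small));
    }
}
```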

Virtual Threads (Java 21+)

Java 21 introduced virtual threads, which change the equation for I/O-bound concurrency:

java
Executor virtualExecutor = Executors.newVirtualThreadPerTaskExecutor();
 
CompletableFuture.supplyAsync(() -> database.query(), virtualExecutor);

Virtual threads are lightweight enough that you can create millions of them. When a virtual thread blocks on I/O, the underlying OS thread is released to do other work. This makes blocking calls cheap enough that pool sizing for I/O-bound work largely stops being a problem.

java
// Fine with virtual threads — blocking does not waste an OS thread
CompletableFuture.supplyAsync(() -> {
    User user = database.query();
    return user;
}, virtualExecutor);

Virtual threads are well-suited for I/O-bound and high-concurrency scenarios. For CPU-bound work, platform threads remain the right choice.

Sequential vs Parallel: A Direct Comparison

java
// Bad: sequential — each operation waits for the previous
public CompletableFuture<Dashboard> loadSequential(Long userId) {
    return fetchUser(userId)              // 100ms
        .thenCompose(user ->
            fetchOrders(userId)           // +150ms
                .thenCompose(orders ->
                    fetchPayments(userId) // +120ms
                        .thenApply(payments ->
                            new Dashboard(user, orders, payments))));
}
// Total: ~370ms
 
// Good: parallel — independent operations run concurrently
public CompletableFuture<Dashboard> loadParallel(Long userId) {
    CompletableFuture<User> userF = fetchUser(userId);
    CompletableFuture<List<Order>> ordersF = fetchOrders(userId);
    CompletableFuture<List<Payment>> paymentsF = fetchPayments(userId);
 
    return CompletableFuture.allOf(userF, ordersF, paymentsF)
        .thenApply(v -> new Dashboard(userF.join(), ordersF.join(), paymentsF.join()));
}
// Total: ~150ms

The same data, the same operations, 2.5x faster.

Quick Reference

  • thenApply(fn): synchronous transformation; use when your function returns T
  • thenCompose(fn): async chaining; use when your function returns CompletableFuture<T>
  • thenCombine(cf, fn): combine exactly two futures when both complete
  • allOf(cf...): wait for all futures to complete; returns CompletableFuture<Void>
  • anyOf(cf...): first completed future wins; returns CompletableFuture<Object>

Key Takeaways

  1. thenApply for synchronous transformations. thenCompose for async chaining. This distinction prevents the most common class of composition bugs.
  2. Run independent operations in parallel. Use allOf or thenCombine to coordinate results.
  3. Do not use ForkJoinPool.commonPool() in production. Create dedicated executors with meaningful thread names.
  4. Size your pools for the workload: I/O-bound at 2-4× CPU cores, CPU-bound at core count.
  5. On Java 21+, virtual threads largely eliminate pool-sizing concerns for I/O-bound work.

Up Next: Error Handling and Spring Boot Integration

Async code can fail in non-obvious ways. Exceptions do not propagate the way you might expect, and partial failures in parallel operations need deliberate handling. Part 3 covers the error-handling trio (exceptionally, handle, whenComplete), how to deal with partial failures in allOf scenarios, Spring Boot executor configuration, and when @Async causes more problems than it solves.