Proposal: Parallel Execution Guarantees

Status: Approved
Approved: 2026-01-30
Author: Eric (with AI assistance)
Created: 2026-01-29
Affects: Compiler, runtime, concurrency model


Summary

This proposal specifies the execution guarantees for the parallel pattern, addressing ambiguity around ordering, concurrency limits, resource exhaustion, and partial completion semantics.


Problem Statement

The spec states that parallel “may execute tasks in parallel” but leaves critical questions unanswered:

  1. Execution order: Are tasks started in list order? Completed in any order?
  2. Concurrency limits: What happens when max_concurrent is exceeded?
  3. Resource exhaustion: What if the system cannot spawn more tasks?
  4. Result ordering: How are results ordered in the output?
  5. Partial completion: What happens if some tasks fail?

Parallel Pattern Specification

Syntax Recap

parallel(
    tasks: [() -> T uses Suspend],
    max_concurrent: Option<int> = None,
    timeout: Option<Duration> = None,
) -> [Result<T, E>]

When max_concurrent is None, there is no limit. When timeout is None, there is no timeout.
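Assuming omitted named arguments take their declared defaults, these calls are equivalent:

parallel(tasks: my_tasks)
parallel(tasks: my_tasks, max_concurrent: None, timeout: None)
// Both: unlimited concurrency, no timeout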

Execution Order Guarantees

Start Order: Tasks are started in list order.

parallel(tasks: [task_a, task_b, task_c])
// task_a starts first, then task_b, then task_c
// (subject to max_concurrent constraint)

Completion Order: Tasks may complete in any order.

// If task_b is faster than task_a:
// task_b may complete before task_a
// This is expected concurrent behavior

Result Order: Results are returned in original task order, not completion order.

let results = parallel(tasks: [slow, fast, medium])
// results[0] = result of slow  (first task)
// results[1] = result of fast  (second task)
// results[2] = result of medium (third task)
// Even though fast completed first

Concurrency Limits

The max_concurrent parameter limits simultaneous execution:

parallel(
    tasks: hundred_tasks,
    max_concurrent: Some(10),
)
// At most 10 tasks run simultaneously
// When one completes, the next pending task starts

Semantics:

  • Tasks are queued in list order
  • When a slot opens (task completes), the next queued task starts
  • Tasks wait in the queue, not in a busy loop

Default: When max_concurrent is None (or not specified), there is no limit (all tasks may run simultaneously).
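As a limiting case, max_concurrent: Some(1) serializes execution: tasks run one at a time, in list order (the task names here are illustrative):

parallel(
    tasks: [task_a, task_b, task_c],
    max_concurrent: Some(1),
)
// task_b starts only after task_a completes, then task_c
// Results are still ordered [result_a, result_b, result_c]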

Resource Exhaustion

If the runtime cannot allocate resources for a task (memory, task handles, etc.):

  1. The specific task fails with Err(CancellationError { reason: ResourceExhausted, task_id: n })
  2. Other tasks continue executing
  3. The pattern does NOT panic
  4. Result array contains the error for that task

let results = parallel(tasks: thousand_heavy_tasks)
// If task 500 can't be allocated:
// results[500] = Err(CancellationError { reason: ResourceExhausted, task_id: 500 })
// Other tasks still run

See the nursery-cancellation-proposal for the CancellationError and CancellationReason types.

Timeout Behavior

When timeout expires:

  1. Incomplete tasks are cancelled (see nursery-cancellation-proposal)
  2. Results for cancelled tasks are Err(CancellationError { reason: Timeout, task_id: n })
  3. Completed results are preserved

let results = parallel(
    tasks: [fast_task, slow_task, medium_task],
    timeout: Some(1s),
)
// If slow_task takes 5s:
// results[0] = Ok(fast_result)     // completed
// results[1] = Err(CancellationError { reason: Timeout, task_id: 1 })  // cancelled
// results[2] = Ok(medium_result)   // completed

Cancellation Checking

Tasks in parallel can use is_cancelled() to check for timeout-triggered cancellation:

parallel(
    tasks: [
        () -> {
            for item in large_list do {
                if is_cancelled() then break
                process(item)
            }
        },
    ],
    timeout: Some(5s),
)

This enables cooperative cancellation for long-running tasks. See the nursery-cancellation-proposal for full cancellation semantics.

Error Handling

Default behavior: Errors do not stop other tasks (equivalent to CollectAll for nurseries).

let results = parallel(tasks: [success, failure, success])
// results[0] = Ok(...)
// results[1] = Err(...)
// results[2] = Ok(...)
// All three tasks run

For early termination on error, use nursery with appropriate error mode.

No Early Termination

The parallel pattern does NOT support early termination on first error. All tasks always run to completion (or timeout). If you need to cancel remaining tasks when one fails, use nursery with on_error: FailFast:

// parallel: all tasks run regardless of errors
parallel(tasks: [...])  // → [Result<T, E>] with some Err values

// nursery with FailFast: cancel all on first error
nursery(
    body: n -> for task in tasks do n.spawn(task: task),
    on_error: FailFast,
)

Empty Task List

parallel(tasks: [])
// Returns: []
// No tasks spawned, returns immediately

Execution Model

Task Scheduling

The runtime schedules tasks according to these rules:

  1. Fair scheduling: No task is starved; all eventually get CPU time
  2. No priorities: All tasks have equal priority, so priority inversion cannot occur
  3. Work stealing: The runtime may move tasks between execution contexts for load balancing

Progress Guarantee

Every non-blocked task makes progress. A task is blocked only when:

  • Waiting on a channel operation
  • Waiting on another async operation
  • Explicitly yielding

Compute-bound tasks do not block other tasks indefinitely; the runtime ensures fair interleaving at suspension points.
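For illustration, a compute-heavy task can introduce its own suspension points so that its neighbors interleave. The yield() call below stands in for whatever explicit yield operation the runtime provides; crunch and quick_lookup are illustrative names:

parallel(
    tasks: [
        () -> for chunk in chunks do {
            crunch(chunk)     // compute-bound work
            yield()           // explicit suspension point
        },
        () -> quick_lookup(),
    ],
)
// quick_lookup is not starved while the compute loop runs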

Memory Model

Tasks in parallel observe the same memory model as nursery tasks:

  • No shared mutable state
  • Values are moved into tasks (ownership transfer)
  • Captured bindings must be Sendable
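A sketch of the ownership rule (load_report and analyze are illustrative names):

let data = load_report()
parallel(
    tasks: [
        () -> analyze(data),  // data is moved into the task
    ],
)
// data is no longer accessible here; captured bindings must be Sendable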

Examples

Basic Parallel Execution

@fetch_all (urls: [str]) -> [Result<Response, Error>] uses Suspend =
    parallel(
        tasks: urls.map(url -> () -> fetch(url)),
        max_concurrent: Some(10),
        timeout: Some(30s),
    )

Parallel with Index Tracking

@process_with_index (items: [Item]) -> [Result<Output, Error>] uses Suspend =
    parallel(
        tasks: items
            .iter()
            .enumerate()
            .map((i, item) -> () -> process(index: i, item: item))
            .collect(),
    )
// Results maintain original order

Aggregating Results

@parallel_sum (batches: [[int]]) -> int uses Suspend = {
    let results = parallel(
        tasks: batches.map(batch -> () -> batch.fold(0, (a, b) -> a + b))
    )
    results
        .filter(r -> r.is_ok())
        .map(r -> r.unwrap())
        .fold(0, (a, b) -> a + b)
}

Handling Partial Failures

@best_effort_fetch (urls: [str]) -> [Response] uses Suspend = {
    let results = parallel(
        tasks: urls.map(url -> () -> fetch(url)),
        timeout: Some(10s),
    )
    // Keep only successful responses
    results.filter(r -> r.is_ok()).map(r -> r.unwrap()).collect()
}

Comparison with Other Patterns

Pattern    Error Handling     Return Type       Use Case
parallel   Collect all        [Result<T, E>]    Independent tasks, want all results
spawn      Fire and forget    void              Side effects, no results needed
nursery    Configurable       [Result<T, E>]    Complex control over cancellation

When to Use Each

Use parallel when:

  • You have independent tasks
  • You want all results (successes and failures)
  • Simple fan-out/fan-in pattern

Use nursery when:

  • You need early termination on error
  • You need explicit cancellation control
  • Tasks have dependencies or need coordination

Use spawn when:

  • You don’t need results
  • Fire-and-forget side effects
  • Background logging, metrics, etc.

Spec Changes Required

Update 10-patterns.md

Add detailed section on parallel:

  • Execution order guarantees
  • Result ordering specification
  • Concurrency limit behavior
  • Resource exhaustion handling
  • Timeout interaction

Add Examples

Add examples showing:

  • Result ordering
  • max_concurrent usage
  • Timeout behavior
  • Partial failure handling

Summary

Aspect                   Guarantee
Start order              Tasks start in list order
Completion order         Any order (concurrent)
Result order             Same as task list order
max_concurrent           Option<int>; None = unlimited; queued FIFO
timeout                  Option<Duration>; None = no timeout
Resource exhaustion      CancellationError { reason: ResourceExhausted }; other tasks continue
Timeout                  Incomplete tasks get CancellationError { reason: Timeout }
Error handling           All tasks run (CollectAll behavior, no early termination)
Cancellation checking    is_cancelled() available for cooperative cancellation
Empty list               Returns [] immediately
Memory                   Same as nursery (no shared mutable state)