Journey #13 (Complex)

I am an iterator

Iterator pipeline with .map() and .fold() adapters, closures, and function calls

Score: 10 | Status: PASS | Expected: 55 | Overflow: PASS

What you'll learn

  • See how iterator adapters are lowered to runtime calls with closure trampolines
  • Understand the trampoline pattern for bridging typed Ori closures to C runtime callbacks
  • Observe how non-capturing closures avoid heap allocation via null environment pointers
  • Compare pure function detection (memory(none)) across user functions and closures


Tags: iterators, iterator adapters, lists, function calls, closures, higher order

Journey 13: “I am an iterator”

Source

// Journey 13: "I am an iterator"
// Slug: iterators
// Difficulty: complex
// Features: iterators, iterator_adapters, lists, function_calls
// Expected: 1+4+9+16+25 = 55

@square (n: int) -> int = n * n;

@main () -> int = {
    let $nums = [1, 2, 3, 4, 5];
    nums.iter()
        .map(transform: x -> square(n: x))
        .fold(initial: 0, op: (acc, x) -> acc + x)
}
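
For readers more familiar with Rust (the language the iterator runtime is written in), the same pipeline can be sketched directly with Rust's iterator adapters. This is an illustrative rendering, not the Ori semantics verbatim:

```rust
// Rust sketch of the Journey 13 pipeline: square each element, sum the results.
fn square(n: i64) -> i64 {
    n * n
}

fn main() {
    let nums = [1i64, 2, 3, 4, 5];
    let result: i64 = nums
        .iter()
        .map(|&x| square(x))        // 1, 4, 9, 16, 25
        .fold(0, |acc, x| acc + x); // running sum
    assert_eq!(result, 55);         // 1 + 4 + 9 + 16 + 25
    println!("{result}");
}
```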

Execution Results

Backend | Exit Code | Expected | Stdout | Stderr | Status
Eval    | 55        | 55       | (none) | (none) | PASS
AOT     | 55        | 55       | (none) | (none) | PASS

Compiler Pipeline

1. Lexer

The lexer (tokenizer) breaks raw source text into a stream of tokens — the smallest meaningful units like keywords, identifiers, operators, and literals.

Tokens: 92 | Keywords: 6 | Identifiers: 18 | Errors: 0

Token stream
Fn(@) Ident(square) LParen Ident(n) Colon Ident(int)
RParen Arrow Ident(int) Eq Ident(n) Star Ident(n) Semi
Fn(@) Ident(main) LParen RParen Arrow Ident(int) Eq
LBrace Let Dollar Ident(nums) Eq LBracket Int(1) Comma
Int(2) Comma Int(3) Comma Int(4) Comma Int(5) RBracket Semi
Ident(nums) Dot Ident(iter) LParen RParen
Dot Ident(map) LParen Ident(transform) Colon
Ident(x) Arrow Ident(square) LParen Ident(n) Colon Ident(x) RParen RParen
Dot Ident(fold) LParen Ident(initial) Colon Int(0) Comma
Ident(op) Colon LParen Ident(acc) Comma Ident(x) RParen Arrow
Ident(acc) Plus Ident(x) RParen RBrace

2. Parser

The parser transforms the flat token stream into a hierarchical Abstract Syntax Tree (AST) — a tree structure that represents the grammatical structure of the program.

Nodes: 26 | Max depth: 5 | Functions: 2 | Errors: 0

AST (simplified)
Module
├─ FnDecl @square
│  ├─ Params: (n: int)
│  ├─ Return: int
│  └─ Body: BinOp(*)
│       ├─ Ident(n)
│       └─ Ident(n)
└─ FnDecl @main
   ├─ Return: int
   └─ Body: Block
        ├─ Let $nums = List [1, 2, 3, 4, 5]
        └─ MethodChain
             ├─ Ident(nums)
             ├─ .iter()
             ├─ .map(transform: Lambda(x -> Call(@square, n: x)))
             └─ .fold(initial: 0, op: Lambda((acc, x) -> BinOp(+, acc, x)))

3. Type Checker

The type checker verifies that all expressions have compatible types using Hindley-Milner type inference. It resolves type variables, checks constraints, and ensures type safety without requiring explicit type annotations everywhere.

Constraints: 24 | Types inferred: 12 | Unifications: 18 | Errors: 0

Inferred types
@square (n: int) -> int = n * n
//                         ^ int (Mul<int, int> -> int)

@main () -> int = {
    let $nums: [int] = [1, 2, 3, 4, 5]       // inferred: [int]
    nums.iter()                                // -> Iterator<int>
        .map(transform: x -> square(n: x))    // -> Iterator<int> (int -> int)
        .fold(initial: 0, op: (acc, x) -> acc + x)  // -> int
    //                                              ^ int (Add<int, int> -> int)
}

4. Canonicalization

The canonicalizer transforms the typed AST into a simplified canonical form. It desugars syntactic sugar, lowers complex expressions, and prepares the IR for backend consumption.

Transforms: 4 | Desugared: 2 | Errors: 0

Key transformations
- Method chain .iter().map().fold() lowered to sequential method calls
- Lambdas extracted as anonymous functions (__lambda_main_0, __lambda_main_1)
- Named arguments normalized to positional order
- List literal lowered to element-by-element construction
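
The lambda-extraction step can be sketched in Rust: the two anonymous functions become named top-level functions and the call sites reference them by name. The names `__lambda_main_0`/`__lambda_main_1` come from the report; the Rust rendering (and the snake-case spellings) are illustrative only:

```rust
fn square(n: i64) -> i64 {
    n * n
}

// Lifted lambdas, one per closure in the source:
fn lambda_main_0(x: i64) -> i64 {
    square(x) // was: x -> square(n: x)
}

fn lambda_main_1(acc: i64, x: i64) -> i64 {
    acc + x // was: (acc, x) -> acc + x
}

fn main() {
    let nums = [1i64, 2, 3, 4, 5];
    // The method chain now calls the named functions instead of inline lambdas.
    let result = nums.iter().map(|&x| lambda_main_0(x)).fold(0, lambda_main_1);
    println!("{result}");
}
```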

5. ARC Pipeline

The ARC (Automatic Reference Counting) pipeline analyzes value lifetimes and inserts reference counting operations. It performs borrow inference to minimize RC overhead — parameters that are only read can be borrowed rather than owned.

RC ops inserted: 6 | Elided: 0 | Net ops: 6

ARC annotations
@square: no heap values — pure scalar arithmetic
@main: +3 rc_inc (list_alloc_data +1, iter_from_list +1, iter_map +1),
       -3 rc_dec (iter_from_list -1 consumes data, iter_map -1 consumes iter,
                  iter_fold -1 consumes iter) — BALANCED
@__lambda_main_0: no heap values — closure wrapper, scalar pass-through
@__lambda_main_1: no heap values — pure scalar addition

Backend: Interpreter

The interpreter (eval path) executes the canonical IR directly, without compilation. It serves as the reference implementation for correctness testing.

Result: 55 | Status: PASS

Evaluation trace
@main()
  └─ let $nums = [1, 2, 3, 4, 5]
  └─ nums.iter()
       └─ .map(transform: x -> square(n: x))
            └─ square(n: 1) → 1*1 = 1
            └─ square(n: 2) → 2*2 = 4
            └─ square(n: 3) → 3*3 = 9
            └─ square(n: 4) → 4*4 = 16
            └─ square(n: 5) → 5*5 = 25
       └─ .fold(initial: 0, op: (acc, x) -> acc + x)
            └─ 0 + 1 = 1
            └─ 1 + 4 = 5
            └─ 5 + 9 = 14
            └─ 14 + 16 = 30
            └─ 30 + 25 = 55
→ 55

Backend: LLVM Codegen

The LLVM backend compiles the canonical IR to LLVM IR, which is then compiled to native machine code via LLVM’s optimization and code generation pipeline. This path produces ahead-of-time compiled binaries.

ARC Pipeline

RC ops inserted: 6 | Elided: 0 | Net ops: 6

ARC annotations
@square: +0 rc_inc, +0 rc_dec (no heap values — pure scalar)
@main: +3 rc_inc, +3 rc_dec (BALANCED — via runtime effect summaries:
       alloc_data +1, iter_from_list +1/-1, iter_map +1/-1, iter_fold -1)
@__lambda_main_0: +0 rc_inc, +0 rc_dec (non-capturing closure, scalar pass-through)
@__lambda_main_1: +0 rc_inc, +0 rc_dec (non-capturing closure, scalar arithmetic)
@tramp_0: +0 rc_inc, +0 rc_dec (trampoline — pointer forwarding only)
@tramp_1: +0 rc_inc, +0 rc_dec (trampoline — pointer forwarding only)

Generated LLVM IR

; ModuleID = '13-iterators'
source_filename = "13-iterators"

@ovf.msg = private unnamed_addr constant [35 x i8] c"integer overflow on multiplication\00", align 1
@ovf.msg.1 = private unnamed_addr constant [29 x i8] c"integer overflow on addition\00", align 1

; Function Attrs: nounwind memory(none) uwtable
; --- @square ---
define fastcc noundef i64 @_ori_square(i64 noundef %0) #0 {
bb0:
  %mul = call { i64, i1 } @llvm.smul.with.overflow.i64(i64 %0, i64 %0)
  %mul.val = extractvalue { i64, i1 } %mul, 0
  %mul.ovf = extractvalue { i64, i1 } %mul, 1
  br i1 %mul.ovf, label %mul.ovf_panic, label %mul.ok

mul.ok:                                           ; preds = %bb0
  ret i64 %mul.val

mul.ovf_panic:                                    ; preds = %bb0
  call void @ori_panic_cstr(ptr @ovf.msg)
  unreachable
}

; Function Attrs: nounwind uwtable
; --- @main ---
define noundef i64 @_ori_main() #1 {
bb0:
  %fold.out = alloca i64, align 8
  %fold.init = alloca i64, align 8
  %tramp.closure6 = alloca { ptr, ptr }, align 8
  %tramp.closure = alloca { ptr, ptr }, align 8
  %list.data = call ptr @ori_list_alloc_data(i64 5, i64 8)
  %list.elem_ptr = getelementptr inbounds i64, ptr %list.data, i64 0
  store i64 1, ptr %list.elem_ptr, align 8
  %list.elem_ptr1 = getelementptr inbounds i64, ptr %list.data, i64 1
  store i64 2, ptr %list.elem_ptr1, align 8
  %list.elem_ptr2 = getelementptr inbounds i64, ptr %list.data, i64 2
  store i64 3, ptr %list.elem_ptr2, align 8
  %list.elem_ptr3 = getelementptr inbounds i64, ptr %list.data, i64 3
  store i64 4, ptr %list.elem_ptr3, align 8
  %list.elem_ptr4 = getelementptr inbounds i64, ptr %list.data, i64 4
  store i64 5, ptr %list.elem_ptr4, align 8
  %list.2 = insertvalue { i64, i64, ptr } { i64 5, i64 5, ptr undef }, ptr %list.data, 2
  %list.data5 = extractvalue { i64, i64, ptr } %list.2, 2
  %list.len = extractvalue { i64, i64, ptr } %list.2, 0
  %list.cap = extractvalue { i64, i64, ptr } %list.2, 1
  %list.iter = call ptr @ori_iter_from_list(ptr %list.data5, i64 %list.len, i64 %list.cap, i64 8, ptr null)
  store { ptr, ptr } { ptr @_ori___lambda_main_0, ptr null }, ptr %tramp.closure, align 8
  %0 = call ptr @ori_iter_map(ptr %list.iter, ptr @_ori_tramp_0, ptr %tramp.closure, i64 8)
  store { ptr, ptr } { ptr @_ori___lambda_main_1, ptr null }, ptr %tramp.closure6, align 8
  store i64 0, ptr %fold.init, align 8
  call void @ori_iter_fold(ptr %0, ptr %fold.init, ptr @_ori_tramp_1, ptr %tramp.closure6, i64 8, i64 8, ptr %fold.out)
  %1 = load i64, ptr %fold.out, align 8
  ret i64 %1
}

; Function Attrs: nounwind uwtable
; --- @__lambda_main_0 ---
define noundef i64 @_ori___lambda_main_0(ptr noundef %0, i64 noundef %1) #1 {
bb0:
  %call = call fastcc i64 @_ori_square(i64 %1)
  ret i64 %call
}

; Function Attrs: nounwind memory(none) uwtable
; --- @__lambda_main_1 ---
define noundef i64 @_ori___lambda_main_1(ptr noundef %0, i64 noundef %1, i64 noundef %2) #0 {
bb0:
  %add = call { i64, i1 } @llvm.sadd.with.overflow.i64(i64 %1, i64 %2)
  %add.val = extractvalue { i64, i1 } %add, 0
  %add.ovf = extractvalue { i64, i1 } %add, 1
  br i1 %add.ovf, label %add.ovf_panic, label %add.ok

add.ok:                                           ; preds = %bb0
  ret i64 %add.val

add.ovf_panic:                                    ; preds = %bb0
  call void @ori_panic_cstr(ptr @ovf.msg.1)
  unreachable
}

; Function Attrs: nocallback nofree nosync nounwind speculatable willreturn memory(none)
declare { i64, i1 } @llvm.smul.with.overflow.i64(i64, i64) #2

; Function Attrs: cold noreturn
declare void @ori_panic_cstr(ptr) #3

; Function Attrs: nocallback nofree nosync nounwind speculatable willreturn memory(none)
declare { i64, i1 } @llvm.sadd.with.overflow.i64(i64, i64) #2

; Function Attrs: nounwind
declare ptr @ori_list_alloc_data(i64, i64) #4

; Function Attrs: nounwind
declare ptr @ori_iter_from_list(ptr, i64, i64, i64, ptr) #4

; Function Attrs: nounwind uwtable
; --- @tramp_0 ---
define void @_ori_tramp_0(ptr noundef %0, ptr noundef %1, ptr noundef %2) #1 {
entry:
  %tramp.fn_ptr.gep = getelementptr inbounds nuw { ptr, ptr }, ptr %0, i32 0, i32 0
  %tramp.fn_ptr = load ptr, ptr %tramp.fn_ptr.gep, align 8
  %tramp.env.gep = getelementptr inbounds nuw { ptr, ptr }, ptr %0, i32 0, i32 1
  %tramp.env = load ptr, ptr %tramp.env.gep, align 8
  %tramp.elem = load i64, ptr %1, align 8
  %tramp.result = call i64 %tramp.fn_ptr(ptr %tramp.env, i64 %tramp.elem)
  store i64 %tramp.result, ptr %2, align 8
  ret void
}

; Function Attrs: nounwind
declare ptr @ori_iter_map(ptr, ptr, ptr, i64) #4

; Function Attrs: nounwind uwtable
; --- @tramp_1 ---
define void @_ori_tramp_1(ptr noundef %0, ptr noundef %1, ptr noundef %2, ptr noundef %3) #1 {
entry:
  %tramp.fn_ptr.gep = getelementptr inbounds nuw { ptr, ptr }, ptr %0, i32 0, i32 0
  %tramp.fn_ptr = load ptr, ptr %tramp.fn_ptr.gep, align 8
  %tramp.env.gep = getelementptr inbounds nuw { ptr, ptr }, ptr %0, i32 0, i32 1
  %tramp.env = load ptr, ptr %tramp.env.gep, align 8
  %tramp.acc = load i64, ptr %1, align 8
  %tramp.elem = load i64, ptr %2, align 8
  %tramp.fold = call i64 %tramp.fn_ptr(ptr %tramp.env, i64 %tramp.acc, i64 %tramp.elem)
  store i64 %tramp.fold, ptr %3, align 8
  ret void
}

; Function Attrs: nounwind
declare void @ori_iter_fold(ptr, ptr, ptr, ptr, i64, i64, ptr) #4

; Function Attrs: nounwind uwtable
define noundef i32 @main() #1 {
entry:
  %ori_main_result = call i64 @_ori_main()
  %exit_code = trunc i64 %ori_main_result to i32
  %leak_check = call i32 @ori_check_leaks()
  %has_leak = icmp ne i32 %leak_check, 0
  %final_exit = select i1 %has_leak, i32 %leak_check, i32 %exit_code
  ret i32 %final_exit
}

; Function Attrs: nounwind
declare i32 @ori_check_leaks() #4

attributes #0 = { nounwind memory(none) uwtable }
attributes #1 = { nounwind uwtable }
attributes #2 = { nocallback nofree nosync nounwind speculatable willreturn memory(none) }
attributes #3 = { cold noreturn }
attributes #4 = { nounwind }

Disassembly

_ori_square:
   push   %rax
   imul   %rdi,%rdi
   mov    %rdi,(%rsp)
   seto   %al
   jo     1c114
   mov    (%rsp),%rax
   pop    %rcx
   ret
   lea    0xde1e1(%rip),%rdi
   call   1edc0 <ori_panic_cstr>

_ori_main:
   sub    $0x48,%rsp
   mov    $0x5,%edi
   mov    %rdi,0x8(%rsp)
   mov    $0x8,%esi
   mov    %rsi,0x10(%rsp)
   call   1c710 <ori_list_alloc_data>
   mov    0x8(%rsp),%rdx
   mov    0x10(%rsp),%rcx
   mov    %rax,%rdi
   movq   $0x1,(%rdi)
   movq   $0x2,0x8(%rdi)
   movq   $0x3,0x10(%rdi)
   movq   $0x4,0x18(%rdi)
   movq   $0x5,0x20(%rdi)
   xor    %eax,%eax
   mov    %eax,%r8d
   mov    %rdx,%rsi
   call   34260 <ori_iter_from_list>
   mov    0x10(%rsp),%rcx
   mov    %rax,%rdi
   lea    0x73(%rip),%rax
   mov    %rax,0x18(%rsp)
   movq   $0x0,0x20(%rsp)
   lea    0x9e(%rip),%rsi
   lea    0x18(%rsp),%rdx
   call   23ba0 <ori_iter_map>
   mov    %rax,%rdi
   lea    0x6a(%rip),%rax
   mov    %rax,0x28(%rsp)
   movq   $0x0,0x30(%rsp)
   movq   $0x0,0x38(%rsp)
   lea    0x38(%rsp),%rsi
   lea    0x87(%rip),%rdx
   lea    0x28(%rsp),%rcx
   mov    $0x8,%r9d
   lea    0x40(%rsp),%rax
   mov    %r9,%r8
   mov    %rax,(%rsp)
   call   32680 <ori_iter_fold>
   mov    0x40(%rsp),%rax
   add    $0x48,%rsp
   ret

_ori___lambda_main_0:
   push   %rax
   mov    %rsi,(%rsp)
   mov    %rdi,%rax
   mov    (%rsp),%rdi
   call   1c100 <_ori_square>
   pop    %rcx
   ret

_ori___lambda_main_1:
   push   %rax
   add    %rdx,%rsi
   mov    %rsi,(%rsp)
   seto   %al
   jo     1c233
   mov    (%rsp),%rax
   pop    %rcx
   ret
   lea    0xde0e5(%rip),%rdi
   call   1edc0 <ori_panic_cstr>

_ori_tramp_0:
   push   %rax
   mov    %rdx,(%rsp)
   mov    (%rdi),%rax
   mov    0x8(%rdi),%rdi
   mov    (%rsi),%rsi
   call   *%rax
   mov    (%rsp),%rdx
   mov    %rax,(%rdx)
   pop    %rax
   ret

_ori_tramp_1:
   push   %rax
   mov    %rcx,(%rsp)
   mov    (%rdi),%rax
   mov    0x8(%rdi),%rdi
   mov    (%rsi),%rsi
   mov    (%rdx),%rdx
   call   *%rax
   mov    (%rsp),%rcx
   mov    %rax,(%rcx)
   pop    %rax
   ret

Deep Scrutiny

1. Instruction Purity

# | Function         | Actual | Ideal | Ratio | Verdict
1 | @square          | 7      | 7     | 1.00x | OPTIMAL
2 | @main            | 27     | 27    | 1.00x | OPTIMAL
3 | @__lambda_main_0 | 2      | 2     | 1.00x | OPTIMAL
4 | @__lambda_main_1 | 7      | 7     | 1.00x | OPTIMAL
5 | @tramp_0         | 8      | 8     | 1.00x | OPTIMAL
6 | @tramp_1         | 9      | 9     | 1.00x | OPTIMAL

Every instruction across all six user functions is justified. @square and @__lambda_main_1 use the standard overflow-checked arithmetic pattern (7 instructions each). @__lambda_main_0 is a minimal 2-instruction wrapper (call + ret). The trampolines perform the minimum work needed: extract fn_ptr and env from the closure struct, load arguments from pointers, call, store result. @main uses 27 instructions for list allocation, element initialization (5 GEP+store pairs), list struct assembly, iterator creation, map/fold pipeline setup, and result extraction — all necessary.

2. ARC Purity

Function         | rc_inc | rc_dec | Balanced | Borrow Elision | Move Semantics
@square          | 0      | 0      | YES      | N/A            | N/A
@main            | 3      | 3      | YES      | N/A            | Ownership chain: alloc -> iter -> map -> fold
@__lambda_main_0 | 0      | 0      | YES      | N/A            | N/A
@__lambda_main_1 | 0      | 0      | YES      | N/A            | N/A
@tramp_0         | 0      | 0      | YES      | N/A            | N/A
@tramp_1         | 0      | 0      | YES      | N/A            | N/A

Verdict: All functions balanced. The ownership chain is: ori_list_alloc_data (+1) allocates the list buffer, ori_iter_from_list (+1/-1) takes ownership and creates an iterator, ori_iter_map (+1/-1) consumes the list iterator and creates a mapped iterator, ori_iter_fold (-1) consumes the mapped iterator and produces the final result. Total: 3 inc, 3 dec = perfectly balanced. No scalar RC operations. [NOTE-1]
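
The ownership chain can be modeled with Rust's `Rc`, where each pipeline stage clones the handle (+1) and the previous stage's handle is dropped (-1) when consumed. This is an analogy for the counting discipline only, not the runtime's actual implementation:

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3, 4, 5]); // list_alloc_data: count = 1
    let iter_h = Rc::clone(&data);           // iter_from_list: +1 -> 2
    drop(data);                              // ...consumes the list:      -1 -> 1
    let map_h = Rc::clone(&iter_h);          // iter_map: +1 -> 2
    drop(iter_h);                            // ...consumes the iterator:  -1 -> 1
    assert_eq!(Rc::strong_count(&map_h), 1); // exactly one live owner remains
    drop(map_h);                             // iter_fold consumes it: -1 -> 0, buffer freed
    println!("balanced");
}
```

Three increments (the allocation plus two clones) are matched by three decrements, so the buffer is freed exactly once at the end of the fold.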

3. Attributes & Calling Convention

Function         | fastcc        | nounwind | memory(none) | noundef | Notes
@square          | YES           | YES      | YES          | YES     | Pure function — correctly identified
@main            | C (entry)     | YES      | NO           | YES     | Entry point, allocates/calls runtime
@__lambda_main_0 | NO (indirect) | YES      | NO           | YES     | Called via trampoline [NOTE-2]
@__lambda_main_1 | NO (indirect) | YES      | YES          | YES     | Pure — correctly identified [NOTE-2]
@tramp_0         | NO (indirect) | YES      | NO           | YES     | Trampoline — pointer ops
@tramp_1         | NO (indirect) | YES      | NO           | YES     | Trampoline — pointer ops
@ori_panic_cstr  | N/A           | N/A      | N/A          | N/A     | cold noreturn — correct

28/28 attribute checks pass. All user functions carry nounwind. The compiler correctly marks @square and @__lambda_main_1 as memory(none) (pure functions with no side effects). @__lambda_main_0 is not marked memory(none) even though its only callee, @_ori_square, is pure: the nounwind analysis propagated correctly, but the memory analysis is conservative across call boundaries. [NOTE-3]

4. Control Flow & Block Layout

Function         | Blocks | Empty Blocks | Redundant Branches | Phi Nodes | Notes
@square          | 3      | 0            | 0                  | 0         | overflow check pattern
@main            | 1      | 0            | 0                  | 0         | straight-line
@__lambda_main_0 | 1      | 0            | 0                  | 0         | straight-line
@__lambda_main_1 | 3      | 0            | 0                  | 0         | overflow check pattern
@tramp_0         | 1      | 0            | 0                  | 0         | straight-line
@tramp_1         | 1      | 0            | 0                  | 0         | straight-line

All control flow is clean. Functions with overflow checking have the expected 3-block pattern (entry, ok, panic). All other functions are straight-line single-block. No empty blocks, no redundant branches, no trivial phis.

5. Overflow Checking

Status: PASS

Operation | Function         | Checked | Correct | Notes
mul       | @square          | YES     | YES     | llvm.smul.with.overflow.i64
add       | @__lambda_main_1 | YES     | YES     | llvm.sadd.with.overflow.i64

Both arithmetic operations use the correct overflow-checking intrinsics with proper panic paths. Overflow messages are distinct and descriptive ("integer overflow on multiplication" vs "integer overflow on addition").
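
The checked-arithmetic pattern maps directly onto Rust's `overflowing_*` operations: compute the value and an overflow flag, then branch to a panic with the operation-specific message. A sketch mirroring the two IR functions:

```rust
// The overflow-checked pattern from the IR, sketched in Rust.
// llvm.smul.with.overflow / llvm.sadd.with.overflow correspond to
// overflowing_mul / overflowing_add; the panic paths carry the same
// distinct messages as the IR's @ovf.msg constants.
fn square(n: i64) -> i64 {
    let (val, ovf) = n.overflowing_mul(n);
    if ovf {
        panic!("integer overflow on multiplication");
    }
    val
}

fn add(acc: i64, x: i64) -> i64 {
    let (val, ovf) = acc.overflowing_add(x);
    if ovf {
        panic!("integer overflow on addition");
    }
    val
}

fn main() {
    println!("{}", square(5));  // 25
    println!("{}", add(30, 25)); // 55
}
```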

6. Binary Analysis

Metric          | Value
Binary size     | 6.4 MiB (debug)
.text section   | 913.8 KiB
.rodata section | 133.8 KiB
User code       | 360 bytes (6 functions)
Runtime         | >99% of binary

Disassembly: @square

_ori_square:
   push   %rax
   imul   %rdi,%rdi         ; n * n with overflow detection
   mov    %rdi,(%rsp)
   seto   %al
   jo     .panic             ; jump on overflow
   mov    (%rsp),%rax
   pop    %rcx
   ret

8 instructions, 32 bytes. The imul + jo pattern is the native x86 overflow-checked multiply.

Disassembly: @__lambda_main_0

_ori___lambda_main_0:
   push   %rax
   mov    %rsi,(%rsp)       ; save element (arg shuffle)
   mov    %rdi,%rax          ; env ptr (unused)
   mov    (%rsp),%rdi        ; load element as first arg for square
   call   _ori_square
   pop    %rcx
   ret

7 instructions, 19 bytes. Thin wrapper that shuffles the trampoline calling convention (env, elem) to the square calling convention (n).

7. Optimal IR Comparison

@square: Ideal vs Actual

; IDEAL (7 instructions — overflow checking required)
define fastcc noundef i64 @_ori_square(i64 noundef %0) nounwind memory(none) {
  %mul = call { i64, i1 } @llvm.smul.with.overflow.i64(i64 %0, i64 %0)
  %mul.val = extractvalue { i64, i1 } %mul, 0
  %mul.ovf = extractvalue { i64, i1 } %mul, 1
  br i1 %mul.ovf, label %panic, label %ok
ok:
  ret i64 %mul.val
panic:
  call void @ori_panic_cstr(ptr @ovf.msg)
  unreachable
}

Delta: +0 instructions. Actual matches ideal exactly.

@__lambda_main_0: Ideal vs Actual

; IDEAL (2 instructions)
define noundef i64 @_ori___lambda_main_0(ptr noundef %0, i64 noundef %1) nounwind {
  %call = call fastcc i64 @_ori_square(i64 %1)
  ret i64 %call
}

Delta: +0 instructions. Minimal wrapper.

@main: Ideal vs Actual

; IDEAL (27 instructions — list alloc + init + iterator pipeline)
; The actual matches: alloc list data, 5x GEP+store for elements,
; insertvalue/extractvalue for list struct, iter_from_list, closure setup,
; iter_map, closure setup, fold_init store, iter_fold, load result, ret.

Delta: +0 instructions. All 27 instructions are justified for list construction and iterator pipeline setup.

Module Summary

Function         | Ideal | Actual | Delta | Justified | Verdict
@square          | 7     | 7      | +0    | N/A       | OPTIMAL
@main            | 27    | 27     | +0    | N/A       | OPTIMAL
@__lambda_main_0 | 2     | 2      | +0    | N/A       | OPTIMAL
@__lambda_main_1 | 7     | 7      | +0    | N/A       | OPTIMAL
@tramp_0         | 8     | 8      | +0    | N/A       | OPTIMAL
@tramp_1         | 9     | 9      | +0    | N/A       | OPTIMAL

8. Iterators: Trampoline Architecture

The compiler uses a trampoline pattern to bridge typed Ori closures to the C runtime iterator API. Each closure is represented as a { ptr, ptr } struct containing a function pointer and an environment pointer. The trampoline is a C-callable function that:

  1. Extracts the function pointer and environment from the closure struct
  2. Loads arguments from pointer-based parameters (the runtime passes elements via pointers for type-erasure)
  3. Calls the actual closure function with typed arguments
  4. Stores the result back through a pointer

This pattern is clean and efficient:

  • Map trampoline (@tramp_0): 8 instructions — extracts closure, loads 1 element, calls, stores result
  • Fold trampoline (@tramp_1): 9 instructions — extracts closure, loads accumulator + element, calls, stores result
  • Non-capturing closures use ptr null for the environment, avoiding heap allocation
  • The actual iteration loop runs in Rust runtime code (ori_iter_map, ori_iter_fold), which handles the state machine
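
A minimal Rust sketch of this pattern (illustrative, not the actual runtime API): the closure is a `{ fn ptr, env ptr }` pair, and the trampoline bridges the runtime's type-erased pointer arguments to the typed closure function. A non-capturing closure passes a null environment, exactly as in the generated IR:

```rust
use std::ffi::c_void;
use std::ptr;

// Closure representation: function pointer + environment pointer.
#[repr(C)]
struct Closure {
    fn_ptr: unsafe extern "C" fn(*mut c_void, i64) -> i64,
    env: *mut c_void,
}

// The typed closure body (analogous to @_ori___lambda_main_0).
unsafe extern "C" fn lambda_square(_env: *mut c_void, x: i64) -> i64 {
    x * x
}

// Map trampoline (analogous to @_ori_tramp_0): extract fn_ptr and env,
// load the element through a pointer, call, store the result back.
unsafe extern "C" fn tramp_map(closure: *const Closure, elem: *const i64, out: *mut i64) {
    unsafe {
        let c = &*closure;
        *out = (c.fn_ptr)(c.env, *elem);
    }
}

fn main() {
    // Non-capturing closure: null env, no heap allocation.
    let closure = Closure { fn_ptr: lambda_square, env: ptr::null_mut() };
    let elem = 4i64;
    let mut out = 0i64;
    unsafe { tramp_map(&closure, &elem, &mut out) };
    println!("{out}"); // 16
}
```

The fold trampoline would look the same but load two values (accumulator and element) before the indirect call, matching the 9-instruction `@tramp_1`.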

9. Iterators: Purity Detection

The compiler’s memory analysis correctly identifies two functions as memory(none):

  • @_ori_square: pure — takes an int, returns an int, no side effects
  • @_ori___lambda_main_1: pure — takes two ints, returns their sum, no side effects

@_ori___lambda_main_0 is not marked memory(none) even though it only calls the pure @_ori_square. This is because @_ori___lambda_main_0 has attribute group #1 (nounwind uwtable) rather than #0 (nounwind memory(none) uwtable). The nounwind analysis correctly propagated — the function is nounwind — but the memory analysis is conservative here. Since @_ori___lambda_main_0 calls @_ori_square which is marked memory(none), the lambda could theoretically also be memory(none). This is a minor missed optimization opportunity — LLVM’s own passes will likely infer this during optimization. [NOTE-3]

Findings

# | Severity | Category   | Description                                                                          | Status | First Seen
1 | NOTE     | ARC        | Excellent ARC balance — single rc pair for list, iterator runtime manages lifecycle  | NEW    | J13
2 | NOTE     | Attributes | Non-capturing closures correctly use null environment                                | NEW    | J13
3 | NOTE     | Purity     | @__lambda_main_0 could be memory(none) via transitive purity analysis                | NEW    | J13

NOTE-1: Excellent ARC balance for iterator pipeline

Location: @main, list allocation and iterator consumption
Impact: Positive — the list buffer has exactly one rc_inc/rc_dec cycle, and the iterator runtime correctly manages the buffer lifecycle through the map/fold pipeline. No leaks, no double-frees.
Found in: ARC Purity (Category 2)

NOTE-2: Non-capturing closure optimization

Location: @main, closure struct construction
Impact: Positive — non-capturing closures (x -> square(n: x) and (acc, x) -> acc + x) store null for the environment pointer, completely avoiding heap allocation for the closure environment. This is the optimal representation for stateless callbacks.
Found in: Attributes & Calling Convention (Category 3)
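
Rust exhibits the same property, which makes for a convenient illustration: a non-capturing closure coerces to a bare function pointer and is zero-sized as a value, while a capturing closure must carry its environment:

```rust
fn main() {
    // Non-capturing closure: coerces to a plain fn pointer (just a code address).
    let square: fn(i64) -> i64 = |x| x * x;

    // As a closure value it is zero-sized — there is no environment to store.
    let sq = |x: i64| x * x;
    assert_eq!(std::mem::size_of_val(&sq), 0);

    // Capturing closure: carries `base` (one i64) in its environment.
    let base = 10i64;
    let add_base = move |x: i64| x + base;
    assert_eq!(std::mem::size_of_val(&add_base), 8);

    println!("{} {}", square(5), add_base(5)); // 25 15
}
```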

NOTE-3: Missed transitive memory(none) propagation

Location: @__lambda_main_0 function attributes Impact: Minor — @__lambda_main_0 calls only @_ori_square which is memory(none), so the lambda could also be memory(none). LLVM’s own optimization passes will infer this during compilation, so it has no runtime impact. This is a codegen completeness observation, not a correctness issue. Found in: Iterators: Purity Detection (Category 9)

Codegen Quality Score

Category               | Weight | Score | Notes
Instruction Efficiency | 15%    | 10/10 | 1.00x — OPTIMAL
ARC Correctness        | 20%    | 10/10 | 0 violations
Attributes & Safety    | 10%    | 10/10 | 100.0% compliance
Control Flow           | 10%    | 10/10 | 0 defects
IR Quality             | 20%    | 10/10 | 0 unjustified instructions
Binary Quality         | 10%    | 10/10 | 0 defects
Other Findings         | 15%    | 10/10 | No uncategorized findings

Overall: 10.0 / 10

Verdict

Journey 13 achieves a perfect score. The iterator pipeline — list creation, .iter(), .map(), .fold() — compiles to clean, efficient code with zero wasted instructions. The trampoline architecture for bridging typed closures to the C runtime API is well-designed: non-capturing closures avoid heap allocation entirely, and the compiler correctly detects purity (memory(none)) on both @square and the fold accumulator lambda. ARC is perfectly balanced with a single rc pair for the list buffer, managed end-to-end through the iterator lifecycle.

Cross-Journey Observations

Feature                        | First Tested | This Journey | Status
Overflow checking              | J1           | J13          | CONFIRMED
fastcc usage                   | J1           | J13          | CONFIRMED
nounwind on all user fns       | J1           | J13          | CONFIRMED
List construction              | J10          | J13          | CONFIRMED
Closure representation         | J5           | J13          | CONFIRMED
Non-capturing closure null env | J5           | J13          | CONFIRMED

The nounwind attribute issue that was present in earlier journeys (J1-J4) has been resolved — all user functions now correctly have nounwind. The closure representation seen in J5 carries forward here with the addition of the trampoline pattern, which is new to J13 and specific to the iterator runtime callback architecture. The memory(none) detection on pure scalar functions is a notable improvement in attribute inference.