Section 09: Algebraic Normalization Pass
Status: Not Started
Goal: Build the compiler’s first algebraic optimization pass — a step-3b ARC IR normalization pass that rewrites expressions using exported algebraic law metadata. This is the core of Tang’s thesis: using algebraic properties of types (identity, associativity, commutativity, inverse) to perform optimizations that LLVM cannot do because the operations are opaque function calls at the LLVM level.
Why step 3b ARC IR, not LLVM: At the ARC IR level, primitive operations are represented as ArcInstr::Let { value: ArcValue::PrimOp { op: PrimOp::Binary(Add), args: [x, zero] } } — the operation type and arguments are directly visible. User-defined operator syntax also remains PrimOp; the exact impl identity is exported separately by Section 07.5 because the instruction itself does not carry it. After LLVM emission, these become opaque instructions or calls that LLVM cannot optimize algebraically. The normalization pass MUST fire before the ARC analysis (step 4) because it changes the variable liveness graph (removing Let/PrimOp instructions removes use-def chains).
Critical IR distinction: Most binary/unary operators (both built-in and user-defined) lower to Let { value: PrimOp { op, args } } in ARC IR. The type checker resolves user-defined operators via trait dispatch, but the AST retains ExprKind::Binary which canonicalizes to CanExpr::Binary and lowers to PrimOp::Binary(op). The Apply path (ArcInstr::Apply { func, args }) is for explicit method calls (x.add(rhs: y)), not operator syntax (x + y).
Exceptions — NOT PrimOps: BinaryOp::And (&&), BinaryOp::Or (||), and BinaryOp::Coalesce (??) are intercepted by lower_binary() in compiler/ori_arc/src/lower/expr/mod.rs BEFORE PrimOp emission and lowered to control-flow IR (branches via lower_short_circuit_and(), lower_short_circuit_or(), lower_coalesce()). They NEVER appear as PrimOp::Binary(And) / PrimOp::Binary(Or) / PrimOp::Binary(Coalesce) in ARC IR.
This means the normalization pass has ONE dispatch path:
- PrimOp path: match on `PrimOp::Binary(op)` and `PrimOp::Unary(op)` — covers ALL overloadable operators (arithmetic, comparison, bitwise, user-defined). Does NOT cover `&&`/`||`/`??` (control-flow IR).
The type of the operands (func.var_types[dst]) determines which laws apply. For built-in overloadable operators (int/str/list/bitwise/not), laws are installed in Section 08 on the stdlib operator impls. For user types (Matrix, Vector), laws come from user-declared laws [] on their operator impls. The AlgebraLawIndex maps (type_idx, trait_name) to law sets — the normalization pass queries this with the operand type and the trait corresponding to the BinaryOp/UnaryOp when such a trait exists.
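As a concrete illustration of the dispatch, the BinaryOp-to-trait mapping can be sketched as a total function that returns `None` for operators with no algebra trait. The enum below is a simplified stand-in for the compiler's real `BinaryOp`, not its actual definition:

```rust
// Hypothetical mirror of the compiler's BinaryOp — illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BinaryOp { Add, Sub, Mul, BitAnd, BitOr, BitXor, Eq, And, Or }

// Map an overloadable operator to the trait whose algebra laws apply.
// `And`/`Or` return None: they are lowered to control-flow IR and never
// appear as PrimOps, so the normalization pass never sees them.
fn trait_name_for(op: BinaryOp) -> Option<&'static str> {
    match op {
        BinaryOp::Add => Some("Add"),
        BinaryOp::Sub => Some("Sub"),
        BinaryOp::Mul => Some("Mul"),
        BinaryOp::BitAnd => Some("BitAnd"),
        BinaryOp::BitOr => Some("BitOr"),
        BinaryOp::BitXor => Some("BitXor"),
        BinaryOp::Eq => Some("Eq"),
        BinaryOp::And | BinaryOp::Or => None, // control-flow IR, unreachable here
    }
}

fn main() {
    assert_eq!(trait_name_for(BinaryOp::Add), Some("Add"));
    assert_eq!(trait_name_for(BinaryOp::Or), None);
    println!("ok");
}
```

The `Option` return makes the "when such a trait exists" condition explicit: a `None` result means the pass skips the instruction without querying the law index.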
Pipeline position: After compute_var_reprs() (step 3) and normalize_function() (step 3a, TRMC), before analyze_function() (step 4).
Purity gate: For user-defined operator implementations, rewrites only fire if Section 07.5 exported an exact impl/provenance entry and the corresponding MemoryContract proves the operator is pure enough for the rewrite. For stdlib built-in operators, purity is encoded directly in the exported summary (BuiltinPure). Section 09 must not reconstruct callee names from PrimOp.
Success Criteria:
- `ORI_DUMP_AFTER_ARC=1` shows normalized ARC IR (identity ops eliminated, operands reordered)
- `x + 0` compiles to `x` for int (no Add call in ARC IR)
- `s + ""` compiles to `s` for str (no concat call, after Section 06 runtime fix)
- `x & -1` compiles to `x` for int (BitAnd identity elimination)
- `x | 0` compiles to `x` for int (BitOr identity elimination)
- `-(-x)` compiles to `x` for int
- `!!b` compiles to `b` for bool
- Commutative operand ordering: `3 + x` normalizes to `x + 3` (constant on right)
- `(a + b) + c` and `a + (b + c)` produce identical ARC IR
- `x + (-x)` compiles to `0` for int
Depends on: Section 07 (exported law/provenance index) and Section 08 (stdlib laws must be installed).
Subsumes: Section 05.2 (reflexive peepholes) and Section 05.3 (boolean/bitwise simplifications) — both are implemented within this section’s normalization pass rather than as standalone peepholes. Section 09.2 covers bitwise identity cases (x & -1 → x, x | 0 → x, x ^ 0 → x) and bitwise annihilators (x & 0 → 0, x | -1 → -1). Section 09.3 covers double negation (!!x → x, ~~x → x). The reflexive peephole (x == x → true) is an additional rewrite added during 09.2 for completeness.
CRITICAL IR FACT: && / || are NEVER PrimOps. lower_binary() in compiler/ori_arc/src/lower/expr/mod.rs intercepts BinaryOp::And/Or and routes them to lower_short_circuit_and()/lower_short_circuit_or(), producing control-flow IR (branches). Only bitwise &/| (BitAnd/BitOr) reach PrimOp. Therefore x && true → x and x || false → x are impossible to express as ARC IR peepholes. All boolean &&/|| simplification items have been removed.
09.1 Normalization Pass Framework
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs (NEW), compiler/ori_arc/src/pipeline/aims_pipeline.rs
Build the pass infrastructure: a function that walks ARC IR, queries the exported algebra summary, and applies rewrites.
- Create `compiler/ori_arc/src/aims/normalize/algebra.rs`
- Define pass entry point:

```rust
pub fn normalize_algebra(
    func: &mut ArcFunction,
    pool: &Pool,
    interner: &StringInterner,
    contracts: &FxHashMap<Name, MemoryContract>,
    algebra_laws: &AlgebraLawIndex, // law queries (built by Section 07.5)
) -> AlgebraNormResult {
    // Walk all blocks, all instructions.
    // For each instruction:
    //   Let { value: PrimOp { op, args } } → ALL operators (built-in AND user-defined)
    //   - Identify operation from PrimOp::Binary(op)/Unary(op)
    //   - Map BinaryOp → trait name (Add, Mul, etc.) when one exists
    //   - Identify type from func.var_types[dst]
    //   - Query algebra_laws for this (type, trait_name) pair
    //   - NOTE: &&/||/?? are NEVER PrimOps — they are control-flow IR and will not appear here
    //   - Attempt rewrites in priority order
    // Return: count of rewrites applied.
}
```

- Add tracing: `ORI_LOG=ori_arc::aims::normalize::algebra=debug` for rewrite logging
- Add phase dump integration: `ORI_DUMP_AFTER_ALGEBRA=1` for pre/post normalization IR dump
- Insert call at step 3b in `aims_pipeline.rs`:

```rust
// Step 3a: normalize_with_trmc (existing)
// Step 3b: normalize_algebra (NEW)
let algebra_result = normalize_algebra(
    &mut func,
    config.pool,
    config.interner,
    config.contracts,
    &algebra_laws, // AlgebraLawIndex — see plumbing note below
);
// Step 3c: If algebra normalization created fresh variables (associativity
// rewrites do this), re-run compute_var_reprs() to avoid stale ValueRepr map.
if algebra_result.created_fresh_vars {
    compute_var_reprs(&func, config.pool, config.classifier);
}
// Step 4: analyze_function (existing)
```

- var_reprs staleness: Associativity rewrites (09.5) create fresh `ArcVarId`s for intermediate results. These won’t have entries in the `var_reprs` map computed at step 3. The normalization pass MUST either (a) populate var_reprs for new variables itself, or (b) signal that var_reprs must be recomputed. Option (b) is simpler — follow the TRMC pattern in `normalize_with_trmc()`, which re-runs step 3 when it transforms the IR. Identity elimination and commutative reordering do NOT create fresh vars and don’t trigger recomputation.
- Early exit: if `algebra_laws.impls.is_empty()`, return immediately with zero rewrites — the pass is a no-op when no laws are installed. This ensures the pass can be safely inserted into the pipeline even before Section 08 installs stdlib laws.
- Define rewrite priority order: identity > involutive > commutative ordering > associativity > inverse
- Purity gate: read purity/provenance from `AlgebraLawIndex`. Builtin/stdlib operator entries may be marked `BuiltinPure`; user-defined operator entries must carry an exact `contract_key` exported by Section 07.5. If the entry is missing or the contract shows effects, skip the rewrite. Do NOT reconstruct callee names inside `ori_arc`.
- No builtin `&&`/`||` path needed: `BinaryOp::And`/`Or` are intercepted by `lower_binary()` before PrimOp emission and lowered to control-flow IR (branches). They NEVER appear as `PrimOp::Binary(And)`/`PrimOp::Binary(Or)` in ARC IR. Bitwise `BitAnd`/`BitOr` DO appear as PrimOps — their identity/annihilator rewrites are handled via the algebra law schema (Section 08 installs laws on the BitAnd/BitOr traits)
Law-export plumbing (PREREQUISITE — implemented in Section 07.5):
The normalization pass needs to query algebraic laws (e.g., “does impl Add for int declare associative?”). These laws are stored in ori_types’s TraitRegistry (in TraitEntry.algebra_laws and ImplEntry.declared_laws, added by Section 07). However, ori_arc cannot access TraitRegistry — there is no path from ori_types to ori_arc for trait metadata.
Section 07.5 owns the solution: it creates AlgebraLawIndex, builds it from TraitRegistry, enriches it with exact operator provenance, and threads it through FunctionCompiler → run_arc_pipeline() → AimsPipelineConfig. This section (09) is the consumer — it receives &AlgebraLawIndex via AimsPipelineConfig and queries it during normalization. If Section 07.5 is not complete, this section is blocked.
- Verify `AlgebraLawIndex` is available on `AimsPipelineConfig` (implemented by Section 07.5)
- Write smoke test: `ArcFunction` with `Let { value: PrimOp { op: Binary(Add), args: [x, zero_lit] } }` → normalization replaces with `Let { value: Var(x) }`
- Verify: `timeout 150 cargo t -p ori_arc` passes
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.1) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.1 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.1: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.2 Identity Elimination
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs
When an operator call has one argument that is the identity element (as determined by the law’s provider_trait::provider_method), replace the call with the other argument.
- Detect identity patterns (all operators go through PrimOp — see IR distinction note above):
  - `Let { dst, ty, value: PrimOp { op: Binary(Add), args: [x, identity] } }` — check if the `identity` ArcVarId is defined as `Let { value: Literal(Int(0)) }` (or the appropriate identity for the operation)
  - Both directions: right identity `op(x, e) → x` and left identity `op(e, x) → x`
- Identifying the identity element:
  - Trace the ArcVarId back to its defining `Let` instruction. Match against known identity constants:
    - `PrimOp::Binary(Add)` + `LitValue::Int(0)` → additive identity for int
    - `PrimOp::Binary(Mul)` + `LitValue::Int(1)` → multiplicative identity for int
    - `PrimOp::Binary(BitAnd)` + `LitValue::Int(-1)` → BitAnd identity (all bits set)
    - `PrimOp::Binary(BitOr)` + `LitValue::Int(0)` → BitOr identity
    - `PrimOp::Binary(BitXor)` + `LitValue::Int(0)` → BitXor identity
    - String concat (`PrimOp::Binary(Add)` on `Tag::Str`) + empty string literal → str identity
  - Also detect annihilators (absorbing elements):
    - `PrimOp::Binary(BitAnd)` + `LitValue::Int(0)` → 0 (BitAnd annihilator)
    - `PrimOp::Binary(BitOr)` + `LitValue::Int(-1)` → -1 (BitOr annihilator)
  - NOTE: `BinaryOp::And`/`Or` (logical `&&`/`||`) are NEVER PrimOps — they are lowered to control-flow IR (branches) by `lower_short_circuit_and()`/`lower_short_circuit_or()`. Do NOT try to match `PrimOp::Binary(And)` or `PrimOp::Binary(Or)` — they do not exist in ARC IR.
  - For user types (same PrimOp path, different type tag): look up the type + trait in `algebra_laws` for the `Identity` law → check if the operand is a call to the provider method (e.g., `Zero::zero()`)
- Replacement: replace the instruction with `Let { dst, ty, value: Var(other_operand) }`
- Handle RC implications: since we’re running BEFORE ARC analysis (step 3b < step 4), RC ops haven’t been inserted yet — the replacement is just variable substitution. Dead code from the eliminated identity value will be cleaned up by subsequent passes.
- Reflexive peephole (from Section 05.2): in the same pass walker, detect `PrimOp::Binary(Eq)` where both args are the same `ArcVarId` AND the type is a builtin non-float primitive (int, bool, char, byte, str) → replace with `LitValue::Bool(true)`. Similarly `NotEq` same-var → `false`. And comparison reflexives: `x < x` → `false`, `x <= x` → `true`, `x > x` → `false`, `x >= x` → `true`. Soundness guard: do NOT fire for float or user-defined/custom Eq; `PrimOp` alone does not prove reflexivity there.
- Write tests:
  - `x + 0 → x` (int add identity)
  - `0 + x → x` (int add left identity)
  - `x * 1 → x` (int mul identity)
  - `x + "" → x` (str concat identity — only valid after Section 06)
  - `"" + x → x` (str concat left identity)
  - `x & -1 → x` (BitAnd identity, all bits set)
  - `x | 0 → x` (BitOr identity)
  - `x ^ 0 → x` (BitXor identity)
  - `x & 0 → 0` (BitAnd annihilator)
  - `x | -1 → -1` (BitOr annihilator)
  - same-var builtin comparison (`s == s`) rewrites to constant
  - same-var user-defined/custom Eq does NOT rewrite without extra provenance
- Verify: `timeout 150 cargo t -p ori_arc` passes
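The trace-to-defining-`Let` check can be sketched with simplified stand-ins for ARC IR values. This models the def table as an indexed vector; the real `ArcInstr`/`ArcValue` types live in `ori_arc` and differ in detail:

```rust
// Simplified stand-ins — vars are indices into `defs`, each slot is a def.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Value { IntLit(i64), Add(usize, usize) }

// Return Some(other_operand) if defs[v] is Add(a, b) and one side is the
// int additive identity 0; the caller rewrites the def to Var(other_operand).
fn eliminate_add_identity(defs: &[Value], v: usize) -> Option<usize> {
    let Value::Add(a, b) = defs[v] else { return None };
    let is_zero = |x: usize| defs[x] == Value::IntLit(0);
    if is_zero(b) {
        Some(a) // right identity: op(x, e) → x
    } else if is_zero(a) {
        Some(b) // left identity: op(e, x) → x
    } else {
        None
    }
}

fn main() {
    // v0 = x (modelled as a literal), v1 = 0, v2 = v0 + v1
    let defs = vec![Value::IntLit(7), Value::IntLit(0), Value::Add(0, 1)];
    assert_eq!(eliminate_add_identity(&defs, 2), Some(0)); // x + 0 → x
    // left identity: v2 = v0 + v1 with v0 = 0
    let left = vec![Value::IntLit(0), Value::IntLit(7), Value::Add(0, 1)];
    assert_eq!(eliminate_add_identity(&left, 2), Some(1)); // 0 + x → x
    // no identity operand → no rewrite
    let none = vec![Value::IntLit(7), Value::IntLit(3), Value::Add(0, 1)];
    assert_eq!(eliminate_add_identity(&none, 2), None);
    println!("ok");
}
```

The same skeleton generalizes to Mul/1, BitAnd/-1, BitOr/0, and BitXor/0 by swapping the `is_zero` predicate for the operation's identity constant.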
Matrix dimensions:
- Types: int (+, *, &, |, ^), float (excluded — no identity law), str (+), [T] (+), builtin same-var comparison vs user-defined same-var comparison
- Patterns: right identity, left identity, both identity (0 + 0 → 0), nested identity, annihilator
- Semantic pin: `x + 0` produces ARC IR with NO PrimOp::Binary(Add) instruction (only passes with normalization)
- Negative pins: `float_x + 0.0` is NOT eliminated (float has no identity law); user-defined `x == x` is NOT rewritten by the builtin reflexive peephole; `x && true` is NOT matchable (not a PrimOp — it’s control-flow IR)
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.2) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.2 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.2: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.3 Involutive Rewrites
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs
When a unary operator is applied twice, and the operator has the Involutive law, eliminate both applications.
- Detect pattern (all unary operators go through PrimOp):
Let { dst, value: PrimOp { op: Unary(Neg), args: [y] } }whereyis defined byLet { value: PrimOp { op: Unary(Neg), args: [x] } }→ replace withLet { dst, value: Var(x) }
- Query: look up the unary trait (Neg, Not) → check for
Involutivelaw on the impl viaalgebra_laws - Replacement: replace outer instruction with
Let { dst, ty, value: Var(x) } - Write tests:
-(-x) → xfor int!!b → bfor bool~~n → nfor int bitwise not
- Verify:
timeout 150 cargo t -p ori_arcpasses
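The double-application detection follows the same def-chasing shape as identity elimination. A minimal sketch, again using an indexed def table rather than the real ARC IR types:

```rust
// Simplified defs table: each var is Var (opaque) or Neg of another var.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Value { Var, Neg(usize) }

// If defs[v] = Neg(y) and defs[y] = Neg(x), the outer instruction can be
// rewritten to forward x directly (both Neg applications drop out).
fn eliminate_double_neg(defs: &[Value], v: usize) -> Option<usize> {
    let Value::Neg(y) = defs[v] else { return None };
    let Value::Neg(x) = defs[y] else { return None };
    Some(x)
}

fn main() {
    // v0 = x, v1 = -v0, v2 = -v1, v3 = -v2
    let defs = vec![Value::Var, Value::Neg(0), Value::Neg(1), Value::Neg(2)];
    assert_eq!(eliminate_double_neg(&defs, 2), Some(0)); // -(-x) → x
    assert_eq!(eliminate_double_neg(&defs, 3), Some(1)); // -(-(-x)) → -x
    assert_eq!(eliminate_double_neg(&defs, 1), None);    // single negation untouched
    println!("ok");
}
```

In the real pass this rewrite fires only after the `Involutive` law check against `algebra_laws`, which the sketch omits.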
Matrix dimensions:
- Types: int (Neg, BitNot), bool (Not)
- Patterns: double negation, triple negation (`-(-(-x)) → -x`), mixed (not relevant for fixed types)
- Semantic pin: `-(-x)` produces ARC IR with NO PrimOp::Unary(Neg) instructions
- Negative pin: user-defined type with Neg but WITHOUT the `Involutive` law → `-(-x)` is NOT eliminated (the law must be declared)
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.3) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.3 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.3: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.4 Commutative Canonical Ordering
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs
For commutative operators, sort operands into a canonical order so CSE can match equivalent expressions. Example: 3 + x and x + 3 should produce the same ARC IR.
- Define canonical ordering for ARC operands:
  - Constants sort after variables (constants on right: `x + 3`, not `3 + x`)
  - Among variables, sort by `ArcVarId` (lower ID first)
  - Among constants, sort by value
- Detect pattern (all binary ops are PrimOp): `Let { value: PrimOp { op: Binary(op), args: [a, b] } }` where op has the `Commutative` law for this type AND `a` sorts after `b` → swap args to `[b, a]`
- Write tests:
  - `3 + x → x + 3` (constant moves to right)
  - `y + x → x + y` if x.id < y.id (canonical variable ordering)
  - `x + 3` unchanged (already canonical)
- Verify: `timeout 150 cargo t -p ori_arc` passes
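The three ordering rules above reduce to a single comparator plus a conditional swap. A sketch with a simplified operand type (the real pass works on `ArcVarId`s and literal defs, not this enum):

```rust
use std::cmp::Ordering;

// Simplified stand-in for an ARC operand: a variable id or an int constant.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Operand { Var(u32), Const(i64) }

// Canonical order: variables before constants; variables by id, constants by value.
fn canonical_cmp(a: &Operand, b: &Operand) -> Ordering {
    match (a, b) {
        (Operand::Var(x), Operand::Var(y)) => x.cmp(y),
        (Operand::Var(_), Operand::Const(_)) => Ordering::Less,
        (Operand::Const(_), Operand::Var(_)) => Ordering::Greater,
        (Operand::Const(x), Operand::Const(y)) => x.cmp(y),
    }
}

// For a commutative op, swap args when out of canonical order; report whether we did.
fn canonicalize(args: &mut [Operand; 2]) -> bool {
    if canonical_cmp(&args[0], &args[1]) == Ordering::Greater {
        args.swap(0, 1);
        true
    } else {
        false
    }
}

fn main() {
    // 3 + x → x + 3 (constant moves right)
    let mut args = [Operand::Const(3), Operand::Var(0)];
    assert!(canonicalize(&mut args));
    assert_eq!(args, [Operand::Var(0), Operand::Const(3)]);
    // x + 3 is already canonical → no-op
    let mut args2 = [Operand::Var(0), Operand::Const(3)];
    assert!(!canonicalize(&mut args2));
    println!("ok");
}
```

Making the comparator total (every pair of operands is ordered) is what lets CSE treat `3 + x` and `x + 3` as the same expression after this pass.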
Matrix dimensions:
- Types: int (Add, Mul — commutative), str (NOT commutative — concat is ordered), float (commutative)
- Patterns: constant-variable swap, variable-variable ordering, already-canonical no-op
- Semantic pin: `3 + x` produces ARC IR with operands in `[x, 3]` order (only passes with canonicalization)
- Negative pin: str concat `"a" + s` is NOT reordered to `s + "a"` (no commutative law for str)
Note: This interacts with LLVM’s own canonicalization — LLVM also moves constants to the right for int/float. But for user types (matrix, vector), LLVM sees opaque calls and can’t reorder. This pass handles those cases.
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.4) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.4 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.4: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.5 Associativity Normalization
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs
For associative operators, flatten nested applications into a canonical right-leaning (or left-leaning) tree. This enables CSE to match (a + b) + c and a + (b + c).
- Define canonical form: right-leaning tree (a + (b + (c + d))) — matches LLVM’s convention
- Detect nested application patterns. In ARC IR, nesting is indirect — the outer instruction references a dst variable whose defining instruction is the inner operation:
  - `Let { dst: d1, value: PrimOp { op, args: [inner_dst, c] } }` where `inner_dst` is defined by `Let { dst: inner_dst, value: PrimOp { op: same_op, args: [a, b] } }` AND `op` has the `Associative` law for this type
  - Create new intermediate: `Let { dst: new_var, value: PrimOp { op, args: [b, c] } }`, then rewrite the outer instruction: `Let { dst: d1, value: PrimOp { op, args: [a, new_var] } }`
- Note on fresh defs: associativity rewrites CREATE new intermediate variables (`new_var` above). This means `compute_var_reprs()` (step 3) will be stale after this pass. See the step 3b var_reprs staleness note in 09.1.
- Chain: apply repeatedly until no more left-nesting exists (fixed point, at most O(depth) iterations)
- Combine with commutative ordering (09.4): after flattening, sort the flat list, then rebuild the tree
- Soundness guard: only flatten if BOTH inner and outer applications use the same operator AND the type has the associative law. Mixed operators (e.g., `(a + b) * c`) are NOT flattened.
- Write tests:
  - `(a + b) + c → a + (b + c)` for int
  - `((a + b) + c) + d → a + (b + (c + d))` (deep flattening)
  - `(a + b) * c` NOT flattened (different operators)
  - `(a +_float b) + c` NOT flattened (float has no associative law)
- Verify: `timeout 150 cargo t -p ori_arc` passes
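The flatten-then-rebuild strategy can be sketched on an explicit expression tree. The real pass operates on def chains in ARC IR rather than a tree type, but the invariant is the same: any two same-operator nestings with the same leaf sequence normalize to the identical right-leaning form:

```rust
// Expression-tree stand-in for nested same-operator PrimOp applications.
#[derive(Debug, Clone, PartialEq)]
enum Expr { Leaf(u32), Op(Box<Expr>, Box<Expr>) }

// Collect leaves of a same-operator tree, left to right.
fn flatten(e: &Expr, out: &mut Vec<u32>) {
    match e {
        Expr::Leaf(v) => out.push(*v),
        Expr::Op(l, r) => {
            flatten(l, out);
            flatten(r, out);
        }
    }
}

// Rebuild as a right-leaning tree: a + (b + (c + d)).
fn rebuild_right(leaves: &[u32]) -> Expr {
    match leaves {
        [x] => Expr::Leaf(*x),
        [x, rest @ ..] => Expr::Op(Box::new(Expr::Leaf(*x)), Box::new(rebuild_right(rest))),
        [] => unreachable!("associative ops have at least one operand"),
    }
}

fn main() {
    use Expr::*;
    // (a + b) + c  vs  a + (b + c), with a=1, b=2, c=3
    let left = Op(Box::new(Op(Box::new(Leaf(1)), Box::new(Leaf(2)))), Box::new(Leaf(3)));
    let right = Op(Box::new(Leaf(1)), Box::new(Op(Box::new(Leaf(2)), Box::new(Leaf(3)))));
    let normalize = |e: &Expr| {
        let mut leaves = Vec::new();
        flatten(e, &mut leaves);
        rebuild_right(&leaves)
    };
    assert_eq!(normalize(&left), normalize(&right)); // identical canonical form
    println!("ok");
}
```

Sorting the leaf list before `rebuild_right` is where the 09.4 commutative ordering composes with this pass.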
Matrix dimensions:
- Types: int (Add, Mul — both associative), str (Add — associative), float (NOT associative)
- Patterns: depth 2, depth 3, depth 4, mixed operators, mixed types
- Semantic pin: `(a + b) + c` and `a + (b + c)` produce identical ARC IR post-normalization
- Negative pin: float `(a + b) + c` is NOT reassociated
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.5) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.5 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.5: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.6 Inverse Elimination
File(s): compiler/ori_arc/src/aims/normalize/algebra.rs
When an operator is applied to a value and its inverse, replace with the identity element.
- Detect pattern (indirect through ArcVarId defs — all operators are PrimOp):
Let { dst, value: PrimOp { op: Binary(Add), args: [x, neg_x] } }whereneg_xis defined byLet { value: PrimOp { op: Unary(Neg), args: [x_same] } }ANDx == x_same(same ArcVarId) AND Add hasInverselaw withNegas inverse → replace withLet { dst, value: Literal(Int(0)) }(or call toZero::zero()for user types)- Also detect commuted form:
args: [neg_x, x]→ same result
- Replacement: replace with a call to the identity provider (
Zero::zero()for additive inverse,One::one()for multiplicative) - Restriction: only for types where both the operator and the inverse are declared with laws.
int: Addhas bothInverse(Neg) andIdentity(Zero) → fires.str: AddhasIdentitybut NOTInverse→ does not fire. - Write tests:
x + (-x) → 0for int(-x) + x → 0for int (commuted)x + (-y)NOT eliminated (different variables)
- Verify:
timeout 150 cargo t -p ori_arcpasses
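The same-variable check is the subtle part: the pattern must compare ArcVarIds, not shapes. A minimal sketch with an indexed def table (stand-in types as in the earlier subsections, not the real ARC IR):

```rust
// Simplified stand-ins for ARC IR defs; vars are indices into `defs`.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Value { Var, Neg(usize), Add(usize, usize) }

// True when defs[v] is `x + (-x)` or the commuted `(-x) + x` for the SAME var.
// The caller then rewrites the def to the identity element (0 / Zero::zero()).
fn is_inverse_pair(defs: &[Value], v: usize) -> bool {
    let Value::Add(a, b) = defs[v] else { return false };
    matches!(defs[b], Value::Neg(x) if x == a) || matches!(defs[a], Value::Neg(x) if x == b)
}

fn main() {
    // v0 = x, v1 = -v0, v2 = v0 + v1 → inverse pair
    let defs = vec![Value::Var, Value::Neg(0), Value::Add(0, 1)];
    assert!(is_inverse_pair(&defs, 2));
    // commuted: v2 = v1 + v0
    let commuted = vec![Value::Var, Value::Neg(0), Value::Add(1, 0)];
    assert!(is_inverse_pair(&commuted, 2));
    // different variables: x + (-y) must NOT fire
    let diff = vec![Value::Var, Value::Var, Value::Neg(1), Value::Add(0, 2)];
    assert!(!is_inverse_pair(&diff, 3));
    println!("ok");
}
```

The `if x == a` guard is what makes `x + (-y)` fail the match even though its shape is identical to `x + (-x)`.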
Matrix dimensions:
- Types: int (Add with Neg inverse)
- Patterns: `x + (-x)`, `(-x) + x`, `x + (-y)` (should NOT fire), nested `(x + (-x)) + y → y` (chain with identity)
- Semantic pin: `x + (-x)` produces a `0` constant in ARC IR
- TPR checkpoint — `/tpr-review` covering 09.1–09.6 implementation work
- `/tpr-review` passed — independent review found no critical or major issues (or all findings triaged)
- `/impl-hygiene-review` passed — hygiene review clean. MUST run AFTER `/tpr-review` is clean.
- Subsection close-out (09.6) — MANDATORY before starting the next subsection. Run `/improve-tooling` retrospectively on THIS subsection’s debugging journey (per `.claude/skills/improve-tooling/SKILL.md` “Per-Subsection Workflow”): which `diagnostics/` scripts you ran, where you added `dbg!`/tracing calls, where output was hard to interpret, where test failures gave unhelpful messages, where you ran the same command sequence repeatedly. Forward-look: what tool/log/diagnostic would shorten the next regression in this code path by 10 minutes? Implement improvements NOW (zero deferral) and commit each via SEPARATE `/commit-push` using a valid conventional-commit type (`build(diagnostics): ... — surfaced by section-09.6 retrospective` — `build`/`test`/`chore`/`ci`/`docs` are valid; `tools(...)` is rejected by the lefthook commit-msg hook). Mandatory even when nothing felt painful. If genuinely no gaps, document briefly: “Retrospective 09.6: no tooling gaps”. Update this subsection’s `status` in section frontmatter to `complete`.
- `/sync-claude` section-close doc sync — verify Claude artifacts across all section commits. Map changed crates to rules files, check CLAUDE.md, canon.md. Fix drift NOW.
- Repo hygiene check — run `diagnostics/repo-hygiene.sh --check` and clean any detected temp files.
09.R Third Party Review Findings
- None.
09.N Completion Checklist
- Normalization pass framework exists at step 3b in AIMS pipeline
- Identity elimination fires for all stdlib types with identity laws
- Involutive rewrites fire for Neg and Not
- Commutative canonical ordering sorts operands for CSE
- Associativity normalization flattens nested applications
- Inverse elimination replaces `x + (-x)` with zero
- Float correctly excluded from all rewrites except commutativity
- Exported provenance + purity gate prevents rewrites on user operators without exact pure-contract evidence
- `ORI_DUMP_AFTER_ALGEBRA=1` shows normalized IR
- `timeout 150 ./test-all.sh` green (debug AND release)
- Dual-exec parity: `diagnostics/dual-exec-verify.sh tests/spec/algebra/` passes — normalization changes ARC IR, which changes LLVM codegen; the eval path is unchanged; verify both produce identical observable output
- `ORI_CHECK_LEAKS=1` clean on all algebraic test programs (normalization removes PrimOp instructions, changing liveness; verify RC ops remain balanced)
- Plan annotation cleanup
- Plan sync
- `/tpr-review` passed
- `/impl-hygiene-review` passed
- `/improve-tooling` retrospective completed — MANDATORY at section close, after both reviews are clean. Reflect on the section’s debugging journey (which `diagnostics/` scripts you ran, which command sequences you repeated, where you added ad-hoc `dbg!`/tracing calls, where output was hard to interpret) and identify any tool/log/diagnostic improvement that would have made this section materially easier OR that would help the next section touching this area. Implement every accepted improvement NOW (zero deferral) and commit each via SEPARATE `/commit-push`. The retrospective is mandatory even when nothing felt painful — that is exactly when blind spots accumulate. See `.claude/skills/improve-tooling/SKILL.md` “Retrospective Mode” for the full protocol.
Exit Criteria: The normalization pass transforms ARC IR using exported algebraic law metadata. ORI_DUMP_AFTER_ALGEBRA=1 demonstrates identity elimination, canonical ordering, and associativity normalization. Float is correctly excluded, and user-defined operator rewrites only fire when exact provenance + purity data exists. All tests pass in debug and release. Interpreter and LLVM produce identical results for all normalized programs (dual-execution parity).