Interpreter Performance Engineering

7 sections

Overview

Goal: bring Ori's interpreter as close to native execution speed as practical. Current function-call overhead is ~63µs/call (measured via an Ackermann benchmark), roughly 100-600x slower than a register-based bytecode VM such as Lua's. This plan transforms the tree-walking interpreter into a high-performance bytecode VM through incremental, independently testable phases. Tree-walker hot-path work (Sections 02-03) pays off only while benchmarks show allocation and clone churn still dominating; the critical path is moving execution to bytecode while preserving the evaluator's semantics.
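Ori's actual benchmark harness is not shown here, but a minimal sketch of the measurement technique follows: run Ackermann in native Rust with a call counter, then divide elapsed time by the number of calls to get a per-call baseline. Comparing that figure against the interpreter's time for the same workload yields a per-call overhead like the ~63µs/call above. All names in this sketch (`ackermann`, the counter threading) are illustrative, not Ori's code.

```rust
use std::time::Instant;

// Ackermann with an explicit call counter, so per-call cost can be
// derived as total_time / calls. ackermann(3, 3) = 61 and makes
// 2432 recursive calls, enough to amortize timer overhead.
fn ackermann(m: u64, n: u64, calls: &mut u64) -> u64 {
    *calls += 1;
    match (m, n) {
        (0, n) => n + 1,
        (m, 0) => ackermann(m - 1, 1, calls),
        (m, n) => {
            let inner = ackermann(m, n - 1, calls);
            ackermann(m - 1, inner, calls)
        }
    }
}

fn main() {
    let mut calls = 0u64;
    let start = Instant::now();
    let result = ackermann(3, 3, &mut calls);
    let elapsed = start.elapsed();
    println!("result={result} calls={calls}");
    // Native per-call cost; the interpreter's figure for the same
    // workload, divided the same way, gives the slowdown factor.
    println!("ns/call={}", elapsed.as_nanos() / u128::from(calls));
}
```

The same division applied to the tree-walker's runtime is what produces the 100-600x range quoted above: identical call count, vastly different total time.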
