Interpreter Performance Engineering
7 sections
0%
Overview
Make Ori's interpreter as close to native execution speed as possible. Current function call overhead is ~63µs/call (measured via an Ackermann benchmark), roughly 100-600x slower than a register-based bytecode VM such as Lua's. This plan transforms the tree-walking interpreter into a high-performance bytecode VM through incremental, independently testable phases. Tree-walker hot-path work (Sections 2-3) pays off only while benchmarks show allocation and clone churn still dominating; the critical path is preserving evaluator semantics while moving execution to bytecode.
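The ~63µs/call figure is derived by dividing total Ackermann runtime by the number of recursive calls made. A minimal sketch of that measurement, here timing native Rust as the baseline the interpreter is compared against (the `ack_counted` helper and the chosen arguments are illustrative, not taken from the plan):

```rust
use std::time::Instant;

// Ackermann with an explicit call counter, so elapsed time can be
// divided into a per-call cost. ack(3, n) = 2^(n+3) - 3.
fn ack_counted(m: u64, n: u64, calls: &mut u64) -> u64 {
    *calls += 1;
    match (m, n) {
        (0, n) => n + 1,
        (m, 0) => ack_counted(m - 1, 1, calls),
        (m, n) => ack_counted(m - 1, ack_counted(m, n - 1, calls), calls),
    }
}

fn main() {
    let mut calls = 0u64;
    let start = Instant::now();
    let result = ack_counted(3, 6, &mut calls);
    let elapsed = start.elapsed();
    assert_eq!(result, 509); // 2^9 - 3
    println!(
        "ack(3,6) = {result}, {calls} calls, {:.1} ns/call",
        elapsed.as_nanos() as f64 / calls as f64
    );
}
```

Running the same workload through the interpreter and dividing by the same call count yields the µs/call figure; the benchmark is call-dominated, so it isolates function call overhead rather than arithmetic.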
Planned
Section 1 Not Started
Benchmark Infrastructure
0/28 tasks
Section 2 Not Started
Zero-Allocation Call Path
0/43 tasks
Section 3 Not Started
Value Passing Optimization
0/30 tasks
Section 4 Not Started
Bytecode Compilation
0/51 tasks
Section 5 Not Started
Register-Based VM
0/60 tasks
Section 6 Not Started
Verification
0/69 tasks
Section 7 Not Started
Salsa Integration & Transition
0/58 tasks