Cliff Hacks Things.

Tuesday, February 14, 2006

It seems the Mongoose interpreter wasn't half bad.

I worked on the interpreter for my Smalltalk-like language, Mongoose, through 2004 and the first bit of 2005. I eventually abandoned it, mostly because work became too stressful for me to sustain a major side project, but also because I had always looked at the interpreter as throwaway code on the way to JIT compilation.

(I actually had basic JIT working on PowerPC, using a dastardly hack involving memcpy and GCC's address-of-label operator. Never got it working on x86, where all non-academic languages must run, and I didn't know enough x86 at the time to debug it.)
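
For the curious, the trick looks roughly like this. It's only a structural sketch (the fragment bodies are made up, it assumes GCC on Linux at -O0 so the blocks stay in source order, and it stops short of actually jumping into the buffer, which is where the per-architecture pain about position-independence and icache flushing lives), not the real Mongoose code:

    /* Structural sketch of the memcpy + &&label hack: bracket each op's
     * compiled body with labels, then "compile" a sequence of ops by
     * catenating copies of those bodies into an executable buffer. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* GCC's labels-as-values gives us the fragment boundaries. */
        void *add_start = &&op_add_start, *add_end = &&op_add_end;
        void *mul_start = &&op_mul_start, *mul_end = &&op_mul_end;
        size_t add_len = (size_t)((char *)add_end - (char *)add_start);
        size_t mul_len = (size_t)((char *)mul_end - (char *)mul_start);

        /* Executable scratch buffer to string the fragments together in. */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        /* "Compile" the two-op sequence ADD, MUL by catenation. */
        memcpy(buf, add_start, add_len);
        memcpy(buf + add_len, mul_start, mul_len);
        printf("catenated %zu + %zu bytes of template code\n", add_len, mul_len);

        /* A real version would append a return sequence, flush the icache,
         * and jump into buf; the copied code only survives that if the
         * fragments are position-independent, which is the fragile part. */
        return 0;

        /* Template fragments.  volatile keeps the bodies from being
         * optimized away; the labels mark their extents. */
    op_add_start: {
            volatile long a = 1, b = 2;
            a = a + b;
        }
    op_add_end:
    op_mul_start: {
            volatile long a = 3, b = 4;
            a = a * b;
        }
    op_mul_end:
        return 0;
    }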

However, I've come 'round on it since then, and have realized three things.
1. There's a place in the world for interpreted languages. Even the large enterprise systems, which I was explicitly targeting with Mongoose, aren't CPU-bound in most cases, and interpreters can often run in a fraction of the RAM of native code — look at Forth.
2. Even if I had a working runtime compiler, putting the JIT in "JIT compiler" requires a good interpreter. You don't want to compile all your code, only the code that needs it — the vast majority of the system will probably remain interpreted; there's a rough sketch of the hot-code-counting idea after this list. (My Mongoose "JIT" compiled at class load time, which is less than optimal.)
3. My interpreter wasn't half bad.
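
To make point 2 concrete, here's the usual counter-based approach sketched in C. The types, names, and threshold are invented for illustration; this is not how Mongoose decided what to compile. The interpreter bumps a per-method counter, and only a method that crosses the threshold ever sees the compiler:

    /* Minimal hot-code-counting sketch: interpret everything, and only
     * hand a method to the compiler once it has proven itself hot. */
    #include <stdbool.h>
    #include <stdio.h>

    #define COMPILE_THRESHOLD 1000   /* arbitrary; real VMs tune this */

    typedef struct {
        const char *name;
        long call_count;
        bool compiled;   /* a real VM would store a native-code pointer */
    } Method;

    static void interpret(Method *m) { (void)m; /* walk the bytecode */ }
    static void compile(Method *m)   { m->compiled = true; }

    static void invoke(Method *m) {
        if (!m->compiled && ++m->call_count >= COMPILE_THRESHOLD)
            compile(m);          /* promote the hot method... */
        if (m->compiled) {
            /* ...and jump to its native code here... */
        } else {
            interpret(m);        /* ...while everything else stays interpreted */
        }
    }

    int main(void) {
        Method m = { "fib", 0, false };
        for (int i = 0; i < 2000; i++)
            invoke(&m);
        printf("%s compiled after %ld calls\n", m.name, m.call_count);
        return 0;
    }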

More on that last point, because it surprised the hell out of me.

Lacking any formal CS education, I tend to come across concepts in the field in a different order than most. For example, SSA form is intuitively obvious to me, and was how I implemented my prototype type-inference system for Mongoose — so I was pretty confused when I finally bought a book on compilers a few months ago. All the initial techniques they were describing seemed unnecessarily complex, dealing with changing values for names and the like. (They were building up to SSA, which they covered in chapter 9.) On the other hand, some of the really fundamental stuff — like basic blocks and optimizing out common cases on the control-flow graph — I had approximated, but hadn't nailed in the way they described.
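
If you haven't run into SSA, the whole idea fits in a few lines. Here it is hand-done in C (illustrative names only, with a ternary standing in for the phi node a compiler would insert at the join):

    #include <stdio.h>

    /* The same computation twice: once with a variable that gets
     * reassigned, once in single-assignment style, which is roughly what
     * SSA renaming does to the compiler's intermediate representation. */
    static int with_mutation(int a, int b, int p) {
        int x = a + b;
        x = x * 2;
        if (p)
            x = 0;
        return x;
    }

    static int in_ssa_style(int a, int b, int p) {
        int x1 = a + b;
        int x2 = x1 * 2;
        int x3 = p ? 0 : x2;   /* phi(0, x2) at the control-flow join */
        return x3;
    }

    int main(void) {
        printf("%d %d\n", with_mutation(2, 3, 0), in_ssa_style(2, 3, 0));
        return 0;
    }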

So, I grade my performance by tracking down interesting papers online, and figuring out how far behind them I am. I do okay in some areas (Bracha and Ungar's 2004 paper on OO metamodels pretty much describes Mongoose) and really poorly in others — as is expected for us liberal arts types.

In the past week or so I've been collecting papers on interpreter design, and in some areas I seem to be close to the state-of-the-art.
- The technique I call instruction coalescing — converting common sequences of VM ops into larger meta-ops that can be more easily optimized — seems to be called superinstructions and specialization in the literature; there's a toy sketch of the idea after this list. (Strangely, I haven't found anyone using my technique for reducing indirect branch misprediction — but I'm sure somebody beat me to it.)
- There are a few interpreters floating around that use my dastardly hack for compilation; most call it catenation, which makes sense: I'm using GCC's optimizer at build time, and simply stringing sections of its output together.
- The techniques I used for stack-allocating objects when context permits will be in Java 6; there's a hand-done illustration after this list as well. (This may have been around for years, though.)
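
To make the superinstruction bullet concrete, here's a toy stack-VM fragment in C. The opcodes and the little programs are invented (this isn't Mongoose's instruction set), but it shows the shape of the thing: a fused PUSH-then-ADD op does the work of two instructions with a single dispatch, and the computed-goto dispatch gives each opcode its own indirect branch.

    /* Toy stack VM with one superinstruction: OP_PUSH_ADD fuses the
     * common PUSH <imm>; ADD pair into a single dispatch.  Dispatch uses
     * GCC's computed goto (labels-as-values). */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PUSH_ADD, OP_HALT };

    static long run(const long *code) {
        static void *dispatch[] = { &&do_push, &&do_add, &&do_push_add, &&do_halt };
        long stack[64];
        long *sp = stack;            /* points at the next free slot */
        const long *ip = code;

    #define NEXT() goto *dispatch[*ip++]
        NEXT();

    do_push:                         /* PUSH <imm> */
        *sp++ = *ip++;
        NEXT();
    do_add:                          /* ADD: pop two, push the sum */
        sp--;
        sp[-1] += sp[0];
        NEXT();
    do_push_add:                     /* PUSH <imm>; ADD in one dispatch */
        sp[-1] += *ip++;
        NEXT();
    do_halt:
        return sp[-1];
    #undef NEXT
    }

    int main(void) {
        /* 2 + 3 + 4, written without and with the superinstruction. */
        const long plain[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                               OP_PUSH, 4, OP_ADD, OP_HALT };
        const long fused[] = { OP_PUSH, 2, OP_PUSH_ADD, 3,
                               OP_PUSH_ADD, 4, OP_HALT };
        printf("%ld %ld\n", run(plain), run(fused));
        return 0;
    }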
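
And for the stack-allocation bullet, which sounds like what the literature calls escape analysis: the optimization notices when an object can't outlive the frame that created it and gives it a stack slot instead of a heap cell. Here's the before/after done by hand in C, with made-up types (link with -lm):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y; } Point;

    /* The pointer escapes to the caller, so the object must outlive this
     * frame: it has to be heap-allocated. */
    static Point *make_point(double x, double y) {
        Point *p = malloc(sizeof *p);
        p->x = x;
        p->y = y;
        return p;
    }

    /* The Point never escapes, so a stack slot is enough.  Doing this
     * rewrite automatically for non-escaping allocations is the whole
     * optimization. */
    static double distance_from_origin(double x, double y) {
        Point p = { x, y };          /* stack-allocated */
        return sqrt(p.x * p.x + p.y * p.y);
    }

    int main(void) {
        Point *q = make_point(3.0, 4.0);
        printf("%f %f\n", distance_from_origin(3.0, 4.0), q->x + q->y);
        free(q);
        return 0;
    }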

There's a great report called "Optimizations for a Java Interpreter Using Instruction Set Enhancement" that covers the techniques I've been using in Mongoose, so I'm glad I'm not totally off-base. They don't use my technique for optimizing indirect branch prediction, which may very well mean it sucks — I haven't done enough profiling.

I'm working on the interpreter again, having found some neat techniques I hadn't derived. We'll see where it goes; the old Mongoose interpreter beat the pants off Ruby 1.8 in most benchmarks, but my error handling wasn't nearly as well-developed, and I didn't test any of the higher-performance Ruby VMs that are available.

(...yes, this is what I do on Valentine's Day.)
