Cliff Hacks Things.

Sunday, February 26, 2006

Excellent GC papers

As some of my past blog entries have implied, I dislike needlessly long, convoluted CS papers.

Now, my experience in formal writing is primarily in psychology, and I know that APA style dictates a certain wordiness — extensive citations of prior related work, etc. At ASU, at least, APA papers were actually prohibited from using quotations, so any referenced material had to be restated or paraphrased. So, I imagine some of the wordiness in CS papers is purely stylistic.

However, every so often I find really to-the-point papers.

My favorite example is Hölzle's 1993 paper on write barriers in Self. I came across this a couple years ago, when I was studying Self. It explains its purpose, summarizes prior work (including concise code snippets), and presents its results in just 6 pages — and the idea it's presenting is non-trivial. Marvelous. (Dr. Hölzle is not in my line of direct supervisors at work, either, so this isn't brown-nosing.)

I'm currently reading a 2001 paper by Fridtjof Siebert discussing root scanning in garbage collectors. It's also delightfully punchy, at 15 pages. I came across it during my research today, and it made me both happy and sad.

Happy, because it lays out (rather more formally) some of the techniques I'm using in M2VM's collector. (Because of M2VM's Smalltalk origins, all trace roots are in heap at well-defined points during execution.)

Sad, because I thought I was all cool for coming up with this idea. According to the paper, I'm at $StateOfTheArt - 4 years. :-)

The good news: Siebert's scheme is tailored to uniprocessor systems, where having a single active thread is acceptable — whereas my variation is explicitly designed for multiprocessor machines (like mine). I'm sure this leap has already been made in the literature, but it's nice to know I'm not entirely insane.

For any interested parties: my approach has threads check a global condition variable at certain synchronization points. If any thread wants a collection to occur, it sets the variable and waits for the other threads to reach a synchronization point; each thread flushes its registers to the heap and decrements a countdown semaphore when it reaches one. Once all threads have synchronized, the collector traces the roots and marks reachable objects, and all threads resume while the collection completes.

Because each thread's context information is accessible through a reference in global scope (actually from a pthread TLS slot), finding and tracing the roots is O(n) for n threads. (The full trace phase is still linear in the number of reachable objects, as usual.)
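Here's a minimal sketch of that rendezvous in C with pthreads, assuming a dedicated collector thread. All the names (gc_requested, safepoint_check, and so on) are hypothetical stand-ins, and the two VM hooks are stubbed; this is the shape of the scheme, not M2VM's actual code.

#include <pthread.h>

static pthread_mutex_t gc_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  all_synced = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  resume     = PTHREAD_COND_INITIALIZER;
static int gc_requested      = 0;   /* set when a collection is wanted */
static int threads_remaining = 0;   /* countdown: threads yet to sync  */

static void flush_registers_to_heap(void) { /* VM-specific: spill roots */ }
static void trace_and_mark_roots(void)    { /* collector's root scan    */ }

/* Called by each mutator thread at its synchronization points. */
void safepoint_check(void) {
    if (!gc_requested)        /* fast path; a real VM would make this an
                                 atomic (or guard-page) check */
        return;
    pthread_mutex_lock(&gc_lock);
    if (gc_requested) {
        flush_registers_to_heap();
        if (--threads_remaining == 0)
            pthread_cond_signal(&all_synced); /* last one in wakes the collector */
        while (gc_requested)
            pthread_cond_wait(&resume, &gc_lock); /* park until roots are traced */
    }
    pthread_mutex_unlock(&gc_lock);
}

/* Run on the collector thread. */
void collect(int mutator_count) {
    pthread_mutex_lock(&gc_lock);
    gc_requested = 1;
    threads_remaining = mutator_count;
    while (threads_remaining > 0)
        pthread_cond_wait(&all_synced, &gc_lock);
    trace_and_mark_roots();          /* safe: every mutator is parked */
    gc_requested = 0;
    pthread_cond_broadcast(&resume); /* mutators resume; the rest of the
                                        collection proceeds concurrently */
    pthread_mutex_unlock(&gc_lock);
}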

I'm looking at a couple enhancements for parallel tracing and scavenging, and trying to find a good way to apply this to full collections (currently, it's limited to minor, new-gen collections). We'll see.

Developer: 1; Garbage Collector: 0

Garbage collector bugs are some of the hardest I've ever debugged (with the possible exception of massive concurrency or distributed systems). "Y'know that memory address you were just PEEKing at? Yeah, it moved. See if you can find it now, sucker."

However, I've beaten this one, and the M2VM garbage collector is working again. Found a number of oversights in my implementation; I knew the initial mark-compact collector was a stopgap measure, and boy, did I write it like a stopgap measure. :-)

I've also gained a real appreciation for gdb. At some point in the last few years, it's learned to reload single functions from a modified binary on the fly, as well as dynamically reloading and relinking shared libraries. I can switch windows, recompile my GC, rebind the entry point, and resume execution. Pretty sweet. (These might be Apple-specific enhancements, not sure.)

So, the GC is creeping right along — emphasis on the creeping. Like any naïve mark-compact collector, it causes nasty pauses — as much as 1s on a 40MB heap. Hand it a heavily fragmented 1GB heap, and not even my 2.5GHz G5 can make the experience pleasant.

I've found some creative ways to optimize it, but it raises the question of how much to tune throwaway code.

Update: corrected "mark-sweep" to say "mark-compact." I had actually forgotten that non-moving tracing collectors existed.

Update 2: in answer to my final question, I modified the mark-compact collector to use a simple generational strategy. Work required: 2 lines of code. Speedup: 100x.

Saturday, February 25, 2006

Night and Day

The Day Programmer vs. The Night Programmer

There are those who program to make money, and there are those who program because it captivates them. This is a point I've come across many times through the years, but this fellow elucidates it particularly well.

My interviewing style, for example, is specifically designed to weed out the former type. (Which, I suspect, is why I didn't do more interviews at Choice. They were frightened and confused by the latter type.)

Not sure which I am. *goes back to spending Saturday tuning his virtual machine*

Wednesday, February 22, 2006

IT'S ALIIIIIVE!

Since Saturday, I've been hacking on M2VM, my next-generation Mongoose VM. It uses the VM-generation utility I described in my last post.

As of tonight, it works. I have:

1. Designed a class-file format, similar to Java's but better suited to M2VM. (This is a stopgap measure; M2VM will load classes a module-at-a-time using a yet-to-be-defined format based on the class-file format.)
2. Written a set of utilities (in Java) for building up M2VM code, in preparation for a compiler. (At this point, it generates class files from a tree-based IR, so the compiler layer will be very thin and probably entirely ANTLR.)
3. Written the VM generator (in Ruby) that transforms 151 lines of very manageable domain-specific code into 334 lines of VM inner loop.
4. Written a rudimentary memory manager and garbage collector (in C) with the fundamentals of my eventual zone-based scheme.
5. Written a stopgap class loader (in C) that works with the object memory API to build classes in RAM.
6. Written a test framework that creates objects and sends messages to them.

So, having fed it the equivalent (in pseudocode) of:

sample() {
    return self.
}


...I now get a correct return value!

Nested and recursive calls also work.

I'm using my generator in switched mode, so I need to kick it into direct-threading and test that. (It works for some limited test input, and most of the code is boilerplate in the Ruby.) I also need to test some fundamentals — argument passing comes to mind — and flesh out some error handling, but...

... it works!

Sunday, February 19, 2006

Virtual machine generators are a fun way to spend Saturday night!

Last night, I started work on a new version of the Mongoose VM. The previous implementation (M1) was a testbed for a bunch of competing ideas, many of questionable value — so the code was pretty hairy.

The new implementation (M2) uses a simplified bytecode, has a Java-like classfile format for separate compilation (M1 was image-based), and is starting life as a coalescing threaded interpreter.

Once I'd made it a bit into the implementation, I realized how much boilerplate code was involved in writing a VM. So, I assembled some tools (in Ruby, of all languages) to help me out. I now have a domain-specific language for describing VMs, which includes:
- Describing the logical register set;
- Describing bytecode functions and formats;
- Providing C implementations for operations;
- Describing the calling conventions for invoking the VM itself from C code.

Short example:

// Bring in some C headers for the operation bodies
include <stdio.h>;
include <stdint.h>;

// Declare us a register
register uint32_t $accumulator;

// Bytecode 0x00 with a single 8-bit argument
op increment(00, amount:8) {
    $accumulator += amount;
}

// Bytecode 0x01 with no arguments
op print(01) {
    printf("%u\n", $accumulator);
}

// Current implementations can stop execution with return
op halt(FF) {
    return;
}


The initial tools generated a dead-simple switched VM from my bytecode descriptions, as well as some header files for manipulating the instruction set.
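For flavor, here's roughly the shape of the thing: a hand-written sketch of a switched inner loop for the three ops above, not the generator's actual output.

#include <stdio.h>
#include <stdint.h>

enum { OP_INCREMENT = 0x00, OP_PRINT = 0x01, OP_HALT = 0xFF };

void run(const uint8_t *pc) {
    uint32_t accumulator = 0;        /* the declared register */
    for (;;) {
        switch (*pc++) {
        case OP_INCREMENT:
            accumulator += *pc++;    /* fetch the 8-bit inline argument */
            break;
        case OP_PRINT:
            printf("%u\n", accumulator);
            break;
        case OP_HALT:
            return;
        }
    }
}

Feeding it (const uint8_t[]){ 0x00, 5, 0x01, 0xFF } increments by 5, prints 5, and halts.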

Newer revisions generate direct-threaded code, including functions to translate the bytecode format to threaded form on load. I'm working on very simple stream-rewrite capabilities, for use in instruction coalescing (combining several sequential ops into one) and specialization (replacing an op with a variant depending on context).

A few examples of each:

- Self sends: by coalescing send-to-self sequences into a virtual self-send instruction, we can take advantage of the lack of polymorphism. In particular, self-send has an inline parameter that points to the body of the method to call — reducing the method dispatch process to a GOTO.
- Tail sends: by coalescing a send-return sequence into a special tail-send instruction — unavailable in the raw bytecode — we can perform tailcall elimination.
- Tail recursion: by further specializing the instruction when (a) the receiver is self and (b) the message is the same one that invoked the current method, we can perform tail recursion elimination using a particularly lightweight sequence. (M1 optimized both tail sends and tail recursion through dedicated bytecodes.)
- Hot sends: by specializing sends into a virtual cached-send instruction — which has a 3-deep polymorphic inline cache (sketched below) — we can drastically reduce method lookups in the general case.
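Here's a rough sketch of what a 3-deep cache at a send site could look like, with hypothetical structures and a stubbed slow path; M2VM's actual layout isn't shown here.

typedef struct class_s class_t;     /* receiver's class          */
typedef void (*method_fn)(void);    /* method body, simplified   */
typedef int selector_t;             /* interned message selector */

/* Slow path, stubbed for the sketch; the real thing walks the class
   hierarchy. */
static method_fn full_method_lookup(class_t *klass, selector_t sel) {
    (void)klass; (void)sel;
    return 0;
}

typedef struct {
    class_t  *klass;                /* class seen at this send site   */
    method_fn target;               /* method resolved for that class */
} pic_entry;

typedef struct {
    pic_entry entries[3];           /* the 3-deep cache, checked in order */
    int       filled;
} pic;

method_fn cached_send_lookup(pic *cache, class_t *klass, selector_t sel) {
    for (int i = 0; i < cache->filled; i++)
        if (cache->entries[i].klass == klass)
            return cache->entries[i].target;  /* hit: no full lookup */
    method_fn m = full_method_lookup(klass, sel);
    if (cache->filled < 3) {                  /* remember up to 3 classes */
        cache->entries[cache->filled].klass  = klass;
        cache->entries[cache->filled].target = m;
        cache->filled++;
    }
    return m;
}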

You've probably noticed that those examples all involve message sends, and there's a reason for that: Smalltalk-style languages spend most of their time sending messages around, and it tends to be a slow process. It's low-hanging fruit for optimization.

One more:
- Opening closures (ha ha): the relatively expensive block instruction, which creates a literal Block, can be specialized to a contained-block instruction. This requires that the block be entirely self-contained: no references to the enclosing context's variables, and no non-local return instructions. In this case, the enclosing activation records need not be reified onto the heap.

Some of these virtual-instruction transformations are working now; I hope to have the rest in place shortly.

(Incidentally, the entire system is under 1000 lines of Ruby, plus a 134-line description of the virtual machine.)

Tuesday, February 14, 2006

Reinventing Forth

There's a saying to the effect that any sufficiently complex program or runtime will wind up reimplementing most of Common Lisp. (I mostly hear that from Lispers, so add salt to taste.)

Well, you can imagine how surprised I was to see someone reinventing Forth.

In an indirect-threaded interpreter, each op — let's call them words, for the hell of it — is represented by an index into a table holding the address of its code. (Or, in Forth, the address of a word definition, which contains the address of the code to run.)

In a direct-threaded interpreter, each word is represented by the address of the code itself, so the interpreter — which we'll call NEXT — is basically a repeated indirect jump.
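To make that concrete, here's a tiny direct-threaded loop using GCC's labels-as-values extension (so it assumes GCC or a compatible compiler). The program is built by hand here rather than translated from bytecode.

#include <stdio.h>
#include <stdint.h>

void run_threaded(void) {
    uint32_t accumulator = 0;
    void *program[4];
    void **ip;

    /* Threaded code: each cell holds a handler address (or an inline
       argument). This program means: increment 5; print; halt. */
    program[0] = &&do_increment;
    program[1] = (void *)(uintptr_t)5;
    program[2] = &&do_print;
    program[3] = &&do_halt;

    ip = program;
    goto **ip++;                               /* NEXT: one indirect jump */

do_increment:
    accumulator += (uint32_t)(uintptr_t)*ip++; /* fetch inline argument */
    goto **ip++;                               /* NEXT again */
do_print:
    printf("%u\n", accumulator);
    goto **ip++;
do_halt:
    return;
}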

At least one Forth saw the obvious next step in performance: replace each reference to a compiled word with a CALL instruction to its address, instead of the bare address. (This isn't a huge leap for Forth. Direct-threaded Forths usually have a CALL instruction in the word definition's code slot anyway.) On the machine in question (I don't remember which), they used a within-segment jump instruction, and were able to squeeze it into the same space that would have been required for the direct-threaded address.

Evidently, this approach has been rediscovered, under the name "context threading."

Context threading does bring a few significant advancements to the table: it generates the calls at runtime from the original bytecode, and inlines basic flow control among all those CALL instructions. This isn't possible in Forth without a lot of trickery, for the same reason it's difficult in Smalltalk: there are no built-in flow control mechanisms. However, high-performance subroutine-threaded Forths will inline short, common words — and you don't get much shorter or more common than IF/ELSE/THEN.

Speaking of which, context threading seems to rely on the original bytecode to resolve flow control operations, where Forth would simply modify the return address.

(Incidentally: yes, this is one of the two tricks I used in the Mongoose interpreter to improve indirect branch prediction. Forth has a lot of good ideas to steal.)

It seems the Mongoose interpreter wasn't half bad.

I worked on the interpreter for my Smalltalk-like language, Mongoose, through 2004 and the first bit of 2005. I eventually abandoned it, mostly because work became too stressful for me to sustain a major side project, but also because I had always looked at the interpreter as throwaway code on the way to JIT compilation.

(I actually had basic JIT working on PowerPC, using a dastardly hack involving memcpy and GCC's address-of-label operator. Never got it working on x86, where all non-academic languages must run, and I didn't know enough x86 at the time to debug it.)

However, I've come 'round on it since then, and have realized three things.
1. There's a place in the world for interpreted languages. Even the large enterprise systems, which I was explicitly targeting with Mongoose, aren't CPU-bound in most cases, and interpreters can often run in a fraction of the RAM of native code — look at Forth.
2. Even if I had a working runtime compiler, putting the JIT in "JIT compiler" requires a good interpreter. You don't want to compile all your code, only the code that needs it — the vast majority of the system will probably remain interpreted. (My Mongoose "JIT" compiled at class load time, which is less than optimal.)
3. My interpreter wasn't half bad.

More on that last point, because it surprised the hell out of me.

Lacking any formal CS education, I tend to come across concepts in the field in a different order than most. For example, SSA form is intuitively obvious to me, and was how I implemented my prototype type-inference system for Mongoose — so I was pretty confused when I finally bought a book on compilers a few months ago. All the initial techniques they were describing seemed unnecessarily complex, dealing with changing values for names and the like. (They were building up to SSA, which they covered in chapter 9.) On the other hand, some of the really fundamental stuff — like basic blocks and optimizing out common cases on the control-flow graph — I had approximated, but hadn't nailed in the way they described.

So, I grade my performance by tracking down interesting papers online, and figuring out how far behind them I am. I do okay in some areas (Bracha and Ungar's 2004 paper on OO metamodels pretty much describes Mongoose) and really poorly in others — as is expected, for us liberal arts types.

In the past week or so I've been collecting papers on interpreter design, and in some areas I seem to be close to the state-of-the-art.
- The technique I call instruction coalescing — converting common sequences of VM ops into larger meta-ops that can be more easily optimized — seems to be called superinstructions and specialization in the literature; a toy coalescing pass is sketched after this list. (Strangely, I haven't found anyone using my technique for reducing indirect branch misprediction — but I'm sure somebody beat me to it.)
- There are a few interpreters floating around that use my dastardly hack for compilation; most call it catenation, which makes sense: I'm using GCC's optimizer at build time, and simply stringing sections of its output together.
- The techniques I used for stack-allocating objects when context permits will be in Java 6. (This may have been around for years, though.)
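For flavor, here's a toy version of such a coalescing pass: hypothetical opcodes, and it assumes every other instruction is a single byte, which a real rewriter couldn't.

#include <stddef.h>
#include <stdint.h>

enum { OP_PUSH_SELF = 0x10, OP_SEND = 0x11, OP_SELF_SEND = 0x80 };

/* Rewrite the stream in place, fusing PUSH_SELF + SEND (with its one-byte
   selector index) into a single SELF_SEND superinstruction. Returns the
   new length. */
size_t coalesce(uint8_t *code, size_t len) {
    size_t in = 0, out = 0;
    while (in < len) {
        if (code[in] == OP_PUSH_SELF && in + 2 < len && code[in + 1] == OP_SEND) {
            code[out++] = OP_SELF_SEND;
            code[out++] = code[in + 2];  /* carry the selector index over */
            in += 3;
        } else {
            code[out++] = code[in++];    /* toy assumption: 1-byte ops */
        }
    }
    return out;
}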

There's a great report called Optimizations for a Java Interpreter Using Instruction Set Enhancement that covers the techniques I've been using in Mongoose, so I'm glad I'm not totally off-base. They don't use my technique for optimizing indirect branch prediction, which may very well mean it sucks — I haven't done enough profiling.

I'm working on the interpreter again, having found some neat techniques I hadn't derived. We'll see where it goes; the old Mongoose interpreter beat the pants off Ruby 1.8 in most benchmarks, but my error handling wasn't nearly as well-developed, and I didn't test any of the higher-performance Ruby VMs that are available.

(...yes, this is what I do on Valentine's Day.)

Sunday, February 12, 2006

Variable-arity parameterized types

Language geek that I am, I've been reading up on all sorts of type systems in the past few years.

Mostly, I've been interested in ways to statically typecheck dynamic OO languages, like Self — but I've also been interested in how different OO languages implement parameterized types.

I've noticed a common problem: most languages cannot even describe the types of their own methods. (Strongtalk makes an exception for typing Smalltalk-style blocks, but it's not a general solution.)

Before tackling methods, let's take a reasonably common case. We want a class called Tuple, instances of which contain a fixed number of objects of different, specific types. So, the base Tuple class could be specialized to represent key-value pairs in a map, or (*gag*) multiple return values from a function. (The astute reader sees another obvious use, but we'll get to that shortly.)

How could we define this Tuple in a generic way? In C++, Java, Eiffel, and most other languages (possibly including Strongtalk, though I'd love to be proven wrong), you can't. The closest you can come is to define a different Tuple class for each length — say, Tuple1<A>, Tuple2<A, B>, Tuple3<A, B, C>, etc. I don't think I'm the only one who looks at this solution and says, "Ewww."

I learned and rejected Python shortly before I started work on Mongoose, and tuples struck me as a useful abstraction — one I wanted Mongoose's type system to describe. Here's a pseudocode Tuple in pseudo-Mongoose. (Brief syntax note: a generic class is applied to parameters as Class(Parameters), just as a function is applied as function(parameters).)

class Tuple(T+) {
    method get(index : Integer) : T[index] { ... }
    method set(index : Integer, element : T[index]) { ... }
    method positionType(index : Integer) : Class(T[index]) { T[index] }
}


Tuple's formal parameter T+ indicates that the class can be specialized on 1..n classes. The value of T is available as an ordered list, both for typechecking at development time, and for access by specialized subclasses at runtime (as shown in the body of positionType). (Implementation of a variable number of slots to store the elements is hairier, and I've omitted it.)

I went to all this nasty work so that Mongoose could statically type its closures and methods. We wind up with something like the following for the Block object (equivalent to a lambda function with environment closure):

class Block(A*, R) {
    method argumentType(position : Integer) : Class(A[position]) { A[position] }
    method returnType() : Class(R) { R }
    method invoke(parameters : A) : R { ... }
}


The formal parameter A* specifies zero or more classes. These variable-arity parameters are bound to Tuples at runtime (and handled through trickery at compile time). For example, for the type application Block(Integer, Integer, String), A within an instance would have the type Tuple(Integer, Integer).

So, say you have an anonymous block like the following, which takes an Integer and tells you if it's bigger than some constant.


{ x : Integer | x > 42 }


Blocks like this are frequently used in Mongoose and Smalltalk to filter collections and the like. I've typed the block's parameter explicitly here; normally, this can be inferred from context. Integer's published contract — sorry, Mr. Meyer — specification for > says it returns a Boolean, so the return type of the block can be inferred.

So, the type of this expression is Block(Integer, Boolean). If anyone were to inspect its class, they'd find that the return type is Boolean and the parameter list is «Integer» (using the Tuple literal syntax that Americans can't type).

The real key here is that this isn't a special feature for standard library classes like Block — this is a generic mechanism (pun intended) applicable to user classes.

Now, after I designed this section of Mongoose, I learned Dylan. Dylan's ludicrously expressive type system — particularly the notion of restricted types, like "Integers between 2 and 12" — really impressed me. I'm looking at ways to integrate this flexibility into Mongoose's parameterized types, preferably in an elegant fashion. (Fortunately, like C++, the parameter bindings are not necessarily restricted to classes, and are available at runtime.)

Edit in response to comments: yes, restricted types would help me constrain the type-array references in that Mongoose code up there at compile time. This is why I want them (among many other reasons).

Saturday, February 11, 2006

Commutative arguments

Had a sick thought today.

In considering the binary operator problem (mentioned in my post on multimethods a few days back), I thought, "What if certain functions could define themselves as order-insensitive?"

The canonical example seems to be a function for testing if two objects are equal, which we'll call equals(a, b). Equality, in my mind, is commutative, like addition: equals(a, b) should always have the same result as equals(b, a). (Thus, equals(equals(a, b), equals(b, a)).)

In a lot of situations, you don't want objects of two different classes to ever be equal. I would argue that numeric types are an exception: if I have the number 5 represented as a base-2 integer, and as an arbitrary-precision decimal type, they can be interconverted without a loss of data, and thus should be equal.

Now, people wanting to compare Integers and Decimals might write the comparison either way — or even not know which value is which type when the comparison is written. Traditionally, if you wanted to define variants of equals to cover this case, you'd have to write two functions — if your language even allows it:
equals(Integer, Decimal)
equals(Decimal, Integer)

What if, instead, the equals function would reorder its arguments as necessary? (Assume the runtime order of evaluation is fixed, in case you have side effects in your argument expressions.) Only one of these two declarations would be necessary — and, in fact, declaring both would be an error.
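A toy sketch of the idea in C (all names hypothetical): a dispatcher that, on a miss, falls back to the swapped signature and swaps the arguments to match.

#include <stddef.h>

typedef int type_id;
typedef int (*equals_fn)(void *a, void *b);

/* Hypothetical method registry: one entry per *declared* (left, right)
   signature. */
static struct { type_id left, right; equals_fn fn; } registry[16];
static size_t registry_len;

static equals_fn find_method(type_id ta, type_id tb) {
    for (size_t i = 0; i < registry_len; i++)
        if (registry[i].left == ta && registry[i].right == tb)
            return registry[i].fn;
    return NULL;
}

/* Only one of equals(Integer, Decimal) / equals(Decimal, Integer) is ever
   declared; the dispatcher supplies the other ordering. */
int dispatch_equals(type_id ta, void *a, type_id tb, void *b) {
    equals_fn m = find_method(ta, tb);
    if (m) return m(a, b);
    m = find_method(tb, ta);       /* commutative fallback */
    if (m) return m(b, a);         /* note: arguments swapped to match */
    return a == b;                 /* default: identity */
}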

Addition and any other commutative operation would also benefit.

Now, of course, this doesn't solve the binary-operator problem for non-commutative operations like division, but it might be food for thought.

More on multimethods

Ran across Millstein and Chambers' 2002 paper, Modular Statically Typed Multimethods. Based on Chambers' work on Cecil, it presents a much cleaner, logically consistent way of doing multimethods.

As a result, I am far less suspicious of them than I was a few days ago.

<aside>
Computer science papers have an amazing knack for being opaque. Perhaps that's why good programming books are so prized. The entire meat of this paper could have been covered in a couple pages of source examples and discussion, with an appendix for the formal definitions — as is, it's 47 pages long.

(Literally. I use "forty-seven" like the Hebrew scriptures use 40, as an unspecified but ludicrous length, like "enter your forty-seven digit extension after the beep." This paper, however, is actually 47 pages long.)

But of course, a nicely tech-written summary wouldn't have made ECOOP.

Anyway.
</aside>

Their system, System E, still strikes me as iffy in the face of runtime-loaded code. Java's ability to load and unload code on the fly is the source of a lot of its power — it's the feature that allows application servers and really-well-integrated IDEs — and I intend to preserve it in Mongoose.

However, I think System E can be easily adapted for runtime loading. It does some link-time typechecking to verify consistency, and if linking happens to occur during execution, no biggie.

It still bothers me that calling a function foo might get me two different code paths before and after loading a module, depending on the (surprise) types of the arguments. This is mostly a security issue for me: if a module is able to stub-out some hasPrivilegedAccess function on load, all hell breaks loose.

System E defines inter-module restrictions that could be used to control some of this; the rest could be left to the language security model, perhaps marking some functions as non-overrideable (Java's final).

Alternatively, if the language can partition loaded code off in its own little sandbox (a la Java's pending Partition feature), perhaps function overloads could be prevented at the partition level.

I'm tinkering on a design for Mongoose that leverages these ideas. More to come.

Tuesday, February 07, 2006

Prototypes and Classes

Class- and Prototype-based OO languages are equivalent. I don't just mean equivalent like any two Turing-complete systems — they're equivalent through a simple transformation, like tail recursion and iteration.

Prototype-based languages are distinguished by the fact that objects can have their own unique behavior and structure. You can add a method or slot to an individual object instance, rather than having to modify some class that defined it. (The benefits in UI programming are obvious — hint: no more action listeners, just redefine a method on that particular button.)

Under the hood, however, prototype-based runtimes usually have behavior objects that back the object instances. An instance will contain some number of data slots, and a reference to its behavior, where the actual methods are stored, along with references to its delegates (similar to parent-types in class-based languages). Multiple instances can share a single behavior object, which dramatically reduces memory requirements.

Now, the methods are stored on the behavior, which may be shared by many instances, but you can still add a method to a particular instance — because the behavior is copy-on-write. When you add a slot or a method to an instance with a shared behavior, it creates a new behavior containing your modification, which delegates back to (read: inherits from) the original. The modified instance is switched to use the new behavior, and voila.
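Here's a little sketch of that mechanism, with hypothetical structs; each interposed behavior holds a single added method to keep the sketch small, where a real system would hold a table.

#include <stdlib.h>

typedef const char *selector;       /* interned, so == comparison works */
typedef void (*method)(void *self); /* method body, simplified */

typedef struct behavior {
    struct behavior *delegate;      /* parent behavior (inheritance link) */
    selector sel;                   /* the one method this level adds */
    method fn;
} behavior;

typedef struct object {
    behavior *beh;                  /* shared until this instance is specialized */
    /* data slots would follow */
} object;

/* Lookup walks the delegation chain, most-specific behavior first. */
method lookup(object *obj, selector s) {
    for (behavior *b = obj->beh; b; b = b->delegate)
        if (b->sel == s)
            return b->fn;
    return NULL;                    /* doesNotUnderstand, in Smalltalk terms */
}

/* Adding a method to ONE instance: interpose a new behavior that holds
   the addition and delegates everything else to the old, shared one. */
void add_method(object *obj, selector s, method fn) {
    behavior *b = malloc(sizeof *b);
    b->delegate = obj->beh;         /* copy-on-write: old behavior untouched */
    b->sel = s;
    b->fn = fn;
    obj->beh = b;                   /* only this instance is re-pointed */
}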

This preserves the prototype semantics of the language, while reducing memory footprint and improving cache performance.

It also looks a lot like a class-based object system — and, in fact, a sufficiently flexible class-based object system can achieve exactly the same behavior.

Translating from the prototype world to the class world, adding a method or slot to an instance creates a new class (behavior) containing the new method/slot, inheriting from (delegating to) the original class, and changes the instance's class to the new subtype.

Very few class-based object systems are this flexible — Java, for example, cannot change the class of an object on the fly. (In fact, only Smalltalk and Ruby spring to mind, though I'm sure someone will pipe up and insist it can be done in CLOS.) I'd like to see this sort of thing become widely available; I'm not a fan of prototype-based languages in general, but this would be a very powerful feature in a class-based language.

Multi-method dispatch, operators, and typecasting

I am suspicious of multi-method dispatch.

In a language like Smalltalk, when you send a message like

foo process: bar with: baz


...you know that foo will respond to the message, generally executing some method identified with process:with:. bar and baz have no direct say.

In a language with multi-method dispatch, bar and baz — the arguments to the message — can also provide methods for handling the message. Generally, the most specific one is chosen at runtime: if foo has a method for dealing with arguments of any two types, but baz offers one that accepts the precise runtime types of all three objects, it is not foo's code which will execute, but baz's.

As I said, I'm suspicious of this.

I like the ease of reasoning that single-dispatch gives me, knowing that a given object's code will execute when I call a method. Particularly if the code for baz was loaded at runtime, the method may be effectively hijacked with a result I didn't intend.

However, I can think of two cases that are dramatically more elegant with MMD: operators, and casting (which is effectively an operator).

Smalltalk lets you name a method '+', to be called in the normal infix fashion, a + b. In that fragment, it is of course a's + method that will execute.

But let's say you define a new type to represent, say, money — we'll name it Money. You teach it to add integers, floats, or other numbers using a + method. This is all well and good, so long as you write it like this:

myMoneyAmount + someInteger

Great. The + method you defined for all objects of type Money runs, adds the integer, returns a Money. All is well.

However, if you write it

someInteger + myMoneyAmount

(which is, thanks to the commutative property, mathematically equivalent), what happens? The + method on the built-in class Integer fires up, and it may not know how to act on your Money object — and almost certainly will not return a new Money in response!

The traditional Smalltalk solution to this is, in my opinion, a hack. It's called double-dispatch. The code in Integer would notice that the other addend isn't a known type, so it passes control into the addend using some other method. Effectively, it converts that fragment into something like

myMoneyAmount addedWithInteger: someInteger


(Why doesn't it flip it around and call the + method on the argument? Because that could quickly become an infinite loop of back-and-forth message sends. addedWithInteger:, or whatever it's called, is defined as not performing double-dispatch.)

That's a simple solution for the built-in types, but if you've got two third-party types you want to add, each must sport an addedWithWhatever: method for the other — when the third-party types are probably not aware of one another's existence. The situation can spiral out of control.

In Smalltalk, specifically, this problem is mitigated by the fact that you can add methods to other people's classes once they're loaded into the system. You can construct this web of added methods to ensure your types play nice.

This is not, to me, a solution, but a workaround.

You'll run into similar problems if you try to implement a generic cast method. Smalltalks frequently have asString or asArray methods to convert objects to some other type. If you wanted a generic method, like

foo as: Bar

you'd wind up having to resort to double-dispatch again. Your Money class can easily handle "as: Integer", but having Integer handle "as: Money" is more difficult.

Multi-method dispatch eliminates this problem entirely, by allowing your Money class to introduce an as: method that is used when a Money object is the argument, not just the receiver. (Yes, pedants, in MMD there is no receiver per se.)

This is one situation where I would like the dispatch of the method to be determined at runtime.

C++ solves it a little differently, by having both member functions for operators, and free-floating functions that overload operator+ depending on the argument types, but the result is the same as MMD.


So, for this sort of problem, MMD is appealing. I've been thinking about this for a while, and there's an obvious place in the Mongoose language to hook MMD if necessary (namely, by providing implementations directly for the Class-independent Method Contract Identifier — effectively, C++'s approach).

I suspect there's a compromise, or perhaps an alternative technique.

Sunday, February 05, 2006

Update: no updates.

In case anyone's actually reading this, yes, I will post stuff soon.

Since I created this blog, I haven't been able to discuss most of what I'm doing. I'm starting to get settled, so I'll be doing more personal hacking soon.

Currently, I'm playing with Facelets, and I'm awfully tempted to write up a JSF tutorial that eschews JSP. So much easier!