September 12, 2011

Thinking of dynamic language code as a template

You can think of dynamic language code as a template. This code has generality because of duck typing; the idea is roughly, "If the code can work on a value, it will, with no muss or fuss."

Some programmers find this to be an empowering form of "do what I say" — you don't need to explain everything you want your code to do in terms that even an optimizing compiler can understand.

In traditional dynamic language VMs, this template is instantiated with concrete value types at runtime. When a value flows into a bit of code, the interpreter inspects the type of the value and then runs the appropriate "instantiation of the template".

Example

Consider the function:

function bitAndAdd(a, b, c) {
    return a + (b & c);
}

In its maximally generic, template-looking form it reads like this pseudo bytecode:

bitAndAdd<T0, T1, T2> =
    GETARG0 -> T0
    GETARG1 -> T1
    GETARG2 -> T2
    BITAND<T1, T2> -> T3
    ADD<T0, T3> -> T4

In the JS engine, the T-prefixed types can be any type in the set {null, undefined, boolean, int32, double, object, string}; these are the engine's internal value tags.

In a bytecode-based interpreter, each bytecode typically asks, "Is this thing that I'm acting on an int32? Or double? Or string? Or ...? Or user-defined type?" [*] Once that's determined, it runs the appropriate sub-section of the bytecode implementation for that type. The question-asking part of the bytecode is, in effect, directing you to the appropriate template instantiation. (Note that bytecodes are kind of like mini-templates composed into the larger function template.)

When you call bitAndAdd<int32, int32, int32>, you get an instantiated code path at runtime that looks like:

bitAndAdd<int32, int32, int32> =
    GETARG0 -> int32
    GETARG1 -> int32
    GETARG2 -> int32
    BITAND<int32, int32> -> int32
    ADD<int32, int32> -> ...

This path represents all of the machine code executed in the interpreter that does useful work for this function. Imagine the bytecodes that make up the function as a bunch of enum values in an array. For each enum value in the array, the interpreter dispatches to a type-unspecialized bytecode implementation. Then, the unspecialized bytecode implementation inspects the input types and dispatches to the specialized implementation.
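
To make that structure concrete, here's a toy dispatch loop written in JavaScript itself. It's a sketch of the shape described above, not any real engine's code; the opcode constants, the tagged-value representation, and the bitAndOtherTypes slow path are all invented for illustration:

var OP_BITAND = 0, OP_ADD = 1; // the bytecode "enum values in an array"

// Hypothetical slow path covering the non-int32 type combinations
// (ToInt32 coercions and so on).
function bitAndOtherTypes(lhs, rhs) { /* ... */ }

function interpret(bytecodes, stack) {
    for (var pc = 0; pc < bytecodes.length; pc++) {
        switch (bytecodes[pc]) { // dispatch to the unspecialized implementation
        case OP_BITAND:
            var rhs = stack.pop(), lhs = stack.pop();
            // The unspecialized implementation inspects the input types
            // and runs the specialized one:
            if (lhs.type === 'int32' && rhs.type === 'int32')
                stack.push({type: 'int32', payload: lhs.payload & rhs.payload});
            else
                stack.push(bitAndOtherTypes(lhs, rhs));
            break;
        // ... a case per opcode ...
        }
    }
}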

A decent example is the ADD opcode in JavaScript. The unspecialized version of ADD looks something like:

ADD<T0, T1>(T0 lhs, T1 rhs) {
    if (T0 is int32 and T1 is int32)
        result = ADD<int32, int32>(lhs.getInt32(), rhs.getInt32())
    else if (T0 is string and T1 is string)
        result = ADD<string, string>(lhs.getString(), rhs.getString())
    else if (...)
        ...
}

You can see the unspecialized ADD delegating to the type-specialized bytecode sub-implementations. A specialization like ADD<int32, int32> is fairly simple and reads something like:

ADD<int32, int32>(int32 lhs, int32 rhs) {
    result = lhs + rhs
    if (overflow)
        result = (double) lhs + (double) rhs;
}
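
The same overflow check can be sketched in JavaScript terms, since JS numbers represent int32-range sums exactly; addInt32 and the tagged result shape are invented for illustration:

function addInt32(lhs, rhs) {
    var sum = lhs + rhs;                      // exact as a double
    if ((sum | 0) === sum)
        return {type: 'int32', payload: sum}; // stayed in int32 range
    return {type: 'double', payload: sum};    // overflowed; keep the double result
}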

Specializing the template

Of course, with JIT technology, we try to be a lot smarter about making the instantiation efficient. There are a few key mechanisms:

Avoiding bytecode dispatch

Composing mini-templates (bytecodes) into the larger program template incurs a "jump to the next bytecode in the program" cost in an interpreter loop. By creating a machine-code implementation for the sequence of bytecodes that makes up a function, we naturally cause machine-code duplication, but avoid that interpretation overhead. This is a natural consequence of choosing to JIT compile a function.
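
Roughly speaking, compiling bitAndAdd this way replaces "fetch the next opcode, then dispatch" with one straight-line sequence. A sketch of what the int32 instantiation computes (real JIT output is machine code, and the overflow check is elided here):

function bitAndAdd_int32(a, b, c) {
    var t3 = b & c;  // BITAND<int32, int32>
    var t4 = a + t3; // ADD<int32, int32>
    return t4;
}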

Constant propagation / local type inference (prime example: JägerMonkey)

If the JIT sees that you're performing a bitwise-and, and the spec says that the result of a bitwise-and is always an int32, the JIT can safely assume that fact downstream in the code; for example, in:

function byteSquare(x) {
    var a = x & 0xff;
    var b = a * a;
    return b;
}

The inputs to the a * a multiply are int32s, without question. There's no reason to instantiate the machine code for other types when you know it's going to be an int32.
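
In the pseudo bytecode notation from earlier, the instantiation the JIT can create without ever observing a runtime value reads something like:

byteSquare<T0> =
    GETARG0 -> T0
    BITAND<T0, int32> -> int32
    MUL<int32, int32> -> ...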

Whole program type inference (prime examples: JägerMonkey + Type Inference, IonMonkey)

Flowing conservative type information through all the code in the system tells you key interprocedural facts. For example, just by looking at your program it may be easy to see that the divmod(dividend, divisor) function is only ever called with two constant integers as arguments.

With that as the basic idea, the analysis can get a lot fancier; for example, analyses can see that you're only storing new Car() objects into the vehicle property of your Person records, and that new Car() will always have an int32 value in its mufflerCount attribute.
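
A sketch of the kind of program that makes this possible; the constructors just mirror the prose above:

function Car() {
    this.mufflerCount = 1;    // always holds an int32
}

function Person() {
    this.vehicle = new Car(); // vehicle only ever stores Car objects
}

// Without running anything, the analysis can conclude that, for any
// Person p, p.vehicle.mufflerCount is an int32.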

By considering everything [†] that the template could possibly do, we can still narrow down the possibilities in worthwhile cases. This lets us instantiate less code and know more facts for use in optimization.

Trace recording (prime example: TraceMonkey)

When a loop is determined to be important (i.e., it hits a certain execution count), a recorder notes which bytecodes execute in the interpreter and what types flow into them. This avoids both bytecode dispatch overhead and type ambiguity, creating a very highly specialized template.

However, if the types ever stop matching up with the observed bytecode execution — say, myObject.quotient used to be an int32, but when we ran the trace a subsequent time it turned out to be a double — the execution of that trace cannot continue, because it's instantiated for the wrong types downstream. Because of this possibility, a bunch of checks for type compatibility have to be inserted in the template instantiation.

It's important to note that the generated template is more than just type dependent: it can also be value dependent! A trace records only a single execution path, and questions like "Do I execute the if body or the else body?" are decided by the particular values flowing through the code, not just their types.
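
A sketch of the shape of an instantiated trace with both kinds of guards; myObject.limit and bailToInterpreter are invented stand-ins for a loop condition and the engine's side-exit machinery:

function traceBody(myObject) {
    var q = myObject.quotient;
    if ((q | 0) !== q)
        bailToInterpreter(); // type guard: we recorded q as an int32
    if (!(q < myObject.limit))
        bailToInterpreter(); // value guard: the recording took the if body
    // ... straight-line code specialized for the recorded path ...
}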

Some folks argue that single-execution recording tends to create overly instantiated templates for typical dynamic language code.

Profiling (prime example: IonMonkey)

If you think of trace recording as observing bytecode for a single run of execution, profiling is observing bytecode for n runs of execution [‡] and accumulating the notes. This eases the pain of compiling "flip floppy" code, because you don't have to catch the primarily-taken path on the first try; you have n tries to get a feel for it.

This process lets you instantiate a template that's general enough to handle all the things you've seen over the n iterations you've observed.
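
For instance, here's the sort of "flip floppy" code a profile copes with; the function and call sites are invented for illustration:

function scale(v, k) {
    return v * k;
}

scale(3, 4);      // one run: MUL<int32, int32>
scale(3.5, 0.25); // another run: MUL<double, double>

// A single recording would see only one of these; a profile over n runs
// sees both and instantiates a template that handles either multiply.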

While trace compilation culls out all but one path of control flow, profiling data can be used to cull out all unobserved (and presumably cold) control flow as well. This may increase the effectiveness and decrease the runtime of dataflow-analysis-based optimizations, because there's less unimportant code to consider. I'm not aware of existing profiling JITs that do this, however, which indicates to me that it may not be an overall win.

Further discussion

Dynamic language compilers try to use runtime feedback, as well as things they can infer about the code, to specialize the template in the best way possible and make the generated machine code blazingly fast. We want to be able to mix "things we definitely know" (from analysis) with "things we are pretty sure we know" (from runtime feedback) in order to get the maximum speedup possible.

One exciting thing about Just-In-Time compilation is that this runtime feedback makes it at least theoretically possible to exceed the speed of a corresponding static program; however, there are significant challenges involved. Whenever we start leaning too heavily on "things we are pretty sure we know," we need to be able to keep the world from falling apart when we're wrong.

It's actually a fairly emotional topic in compiler engineering: how do you hold the world together when everything you thought you knew is wrong?

Be strong. Until next time.

Footnotes

[*]

Of course, bytecodes that don't need to know about the types of their operands don't ask the questions. Note that GETARG0 takes no parameters, and so wouldn't need to ask any questions.

[†]

Nearly everything. The type inference algorithm actually makes some conservative simplifications to cut down on analysis time.

[‡]

To some approximation. As an alternative, you can also observe execution for some period of time t, which has some correlation to the number of runs of execution.