September 27, 2010

PICing on JavaScript for fun and profit

Inline caching is a critical ingredient in the delicious pie that is dynamic language performance optimization. What follows is a gentle-albeit-quirky introduction to what polymorphic inline caches (PICs) are and why they're useful to JavaScript Just-In-Time compilers like JaegerMonkey.

But first, the ceremonial giving of the props: the initial barrage of PIC research and implementation in JaegerMonkey was performed by Dave Mandelin and our current inline cache implementations are largely the work of David Anderson. As always, the performance improvements of Firefox's JavaScript engine can be monitored via the Are We Fast Yet? website.

C is for speed, and that's good enough for me

C is fast.

Boring people (like me) argue about astoundingly interesting boring things like, "Can hand-tuned assembly be generally faster than an equivalent C program on modern processor architectures?" and "Do languages really have speeds?", but you needn't worry — just accept that C is fast, and we've always been at war with Eurasia.

So, as we've established, when you write a program in C, it executes quickly. If you rewrite that program in your favorite dynamic language and want to know if it still executes quickly, then you naturally compare it to the original C program.

C is awesome in that it has very few language features. For any given snippet of C code, there's a fairly direct translation to the corresponding assembly instructions. [*] You can almost think of C as portable assembly code. Notably, there are (almost) zero language features that require support during the program's execution — compiling a C program is generally a non-additive translation to machine code.

Dynamic languages like JavaScript have a massive number of features by comparison. The language, as specified, performs all kinds of safety checks, offers you fancy-n-flexible data record constructs, and even takes out the garbage. These things are wonderful, but generally require runtime support, which is supplied by the language engine. [†] This runtime support comes at a price, but, as you'll soon see, we've got a coupon for 93 percent off on select items! [‡]

You now understand the basic, heart-wrenching plight of the performance-oriented dynamic language compiler engineer: implement all the fancy features of the language, but do it at no observable cost.

Interpreters, virtual machines, and bears

"Virtual machine" sounds way cooler than "interpreter". Other than that, you'll find that the distinction is fairly meaningless in relevant literature.

An interpreter takes your program and executes it. Generally, the term "virtual machine" (AKA "VM") refers to a sub-category of interpreter where the source program is first turned into fake "instructions" called bytecodes. [§]

A bear moving quickly

I call these instructions fake because they do things that hardware processing units are unlikely to ever do: for example, an ADD bytecode in JavaScript will try to add two arbitrary objects together. [¶] The point that language implementors make by calling it a "virtual machine" is that there is conceptually a device, whether in hardware or software, that could execute this set of instructions to run the program.

These bytecodes are then executed in sequence. A program instruction counter is kept in the VM as it executes, analogous to a program counter register in microprocessor hardware, and control flow bytecodes (branches) change the typical sequence by indicating the next bytecode instruction to be executed.
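
To make that concrete, here's a minimal sketch of such a dispatch loop in C. The opcode names, the toy value representation, and the ADD semantics are all invented for illustration — a real engine like SpiderMonkey is enormously more involved — but the fetch/dispatch/execute shape is the essential part.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical bytecodes -- not SpiderMonkey's actual opcode set. */
enum Opcode { OP_PUSH_CONST, OP_ADD, OP_JUMP_IF_FALSE, OP_RETURN };

typedef struct {
    const uint8_t *code;    /* the bytecode stream */
    const double *consts;   /* constant pool */
    double stack[64];       /* toy operand stack (doubles only!) */
    size_t sp;              /* stack pointer */
} ToyVM;

/* Fetch the bytecode at the program counter, dispatch on it, and let
   control-flow bytecodes rewrite the counter. */
double run(ToyVM *vm) {
    size_t pc = 0;          /* the VM's program instruction counter */
    for (;;) {
        switch (vm->code[pc++]) {
        case OP_PUSH_CONST:
            vm->stack[vm->sp++] = vm->consts[vm->code[pc++]];
            break;
        case OP_ADD: {      /* the real JS ADD also handles strings, objects, ... */
            double rhs = vm->stack[--vm->sp];
            vm->stack[vm->sp - 1] += rhs;
            break;
        }
        case OP_JUMP_IF_FALSE: {
            uint8_t target = vm->code[pc++];
            if (vm->stack[--vm->sp] == 0.0)
                pc = target;            /* branch: redirect the program counter */
            break;
        }
        case OP_RETURN:
            return vm->stack[--vm->sp];
        }
    }
}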

Virtual (machine) reality

Languages implemented in "pure" VMs are slower than C. Fundamentally, your VM is a program that executes instructions, whereas compiled C code runs on the bare metal. Executing the VM code is overhead!

To narrow the speed gap between dynamic languages and C, VM implementers are forced to eliminate this overhead. They do so by extending the VM to emit real machine instructions — bytecodes are effectively lowered into machine-codes in a process called Just-In-Time (JIT) compilation. Performance-oriented VMs, like Firefox's SpiderMonkey engine, have the ability to JIT compile their programs.

The term "Just-In-Time" is annoyingly vague — just in time for what, exactly? Dinner? The heat death of the universe? The time it takes me to get to the point already?

In today's JavaScript engines, the lowering from bytecodes to machine instructions occurs as the program executes. With the new JaegerMonkey JIT compiler, the lowering occurs for a single function that the engine sees you are about to execute. This has less overhead than compiling the program as a whole when the web browser receives it. The JaegerMonkey JIT compiler is also known as the method JIT, because it JIT compiles a method at a time.

For most readers, this means a few blobs of x86 or x86-64 assembly are generated as you load a web page. The JavaScript engine in your web browser probably spewed a few nice chunks of assembly as you loaded this blog entry.

Aside: TraceMonkey

In SpiderMonkey we have some special sauce: a second JIT, called TraceMonkey, that kicks in under special circumstances: when the engine detects that you're running loopy code (for example, a for loop with a lot of iterations), it records a stream of bytecodes that corresponds to a trip around the loop. This stream is called a trace and it's interesting because a) it can record bytecodes across function calls and b) the trace optimizer works harder than the method JIT to make the resulting machine code fast.

There's lots more to be said about TraceMonkey, but the inline caching optimization that we're about to discuss is only implemented in JaegerMonkey nowadays, so I'll cut that discussion short.

The need for inline caching

In C, accessing a member of a structure is a single "load" machine instruction:

struct Nose {
    int howManyNostrils;
    bool isPointy;
};

bool isNosePointy(struct Nose *nose) {
    return nose->isPointy;
}

The way that the members of struct Nose are laid out in memory is known to the C compiler because it can see the struct definition — getting the attribute nose->isPointy translates directly into a load from the address addressof(nose) + offsetof(Nose, isPointy).
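
If you want to see that address arithmetic spelled out, here's the same access written by hand with offsetof — purely illustrative, since the compiler does this for you:

#include <stdbool.h>
#include <stddef.h>   /* offsetof */

struct Nose {
    int howManyNostrils;
    bool isPointy;
};

bool isNosePointyByHand(struct Nose *nose) {
    /* addressof(nose) + offsetof(Nose, isPointy), then a single load */
    char *base = (char *) nose;
    return *(bool *) (base + offsetof(struct Nose, isPointy));
}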

Note: Just to normalize all the terminology, let's call the data contained within a structure the properties (instead of members) and the way that you name them the identifiers. For example, isPointy is an identifier and the boolean data contained within nose->isPointy is the property. The act of looking up a property through an identifier is a property access.

On the other hand, objects in JavaScript are flexible — you can add and delete arbitrary properties from objects at runtime. There is also no language-level support for specifying the types that an identifier can take on. As a result, there's no simple way to know what memory address to load from in an arbitrary JavaScript property access.

Consider the following snippet:

function isNosePointy(nose) {
    return nose.isPointy;
}

To get at the isPointy property, the JavaScript VM emits a single bytecode, called GETPROP, which says "pull out the property with the identifier isPointy". [#] Conceptually, this operation performs a hash-map lookup (using the identifier as a key), which takes around 45 cycles in my microbenchmark. [♠]

Uncached property access data
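
Conceptually, the uncached path looks something like the following sketch. To be clear, these aren't SpiderMonkey's actual data structures — the Property/PropertyTable types and the hash function are invented for illustration:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Invented runtime structures -- illustration only. */
typedef struct Property {
    const char *identifier;     /* e.g. "isPointy" */
    uint32_t slot;              /* which slot of the object holds the value */
    struct Property *next;      /* collision chain */
} Property;

typedef struct {
    Property **buckets;
    size_t nbuckets;
} PropertyTable;

typedef struct {
    PropertyTable *table;       /* describes the object's layout */
    uint64_t *slots;            /* the property values themselves */
} ToyObject;

static size_t hash_identifier(const char *s) {
    size_t h = 5381;
    while (*s)
        h = h * 33 + (unsigned char) *s++;
    return h;
}

/* What GETPROP conceptually does when nothing is cached: hash the
   identifier, walk the bucket chain, compare strings, load the slot. */
uint64_t get_prop_uncached(ToyObject *obj, const char *identifier) {
    size_t bucket = hash_identifier(identifier) % obj->table->nbuckets;
    for (Property *p = obj->table->buckets[bucket]; p != NULL; p = p->next) {
        if (strcmp(p->identifier, identifier) == 0)
            return obj->slots[p->slot];
    }
    return 0; /* not found -- a real engine would produce `undefined` */
}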

The process of "looking up a property at runtime because you don't know the exact type of the object" falls into a general category of runtime support called dynamic dispatch. Unsurprisingly, there is execution time overhead associated with dynamic dispatch, because the lookup must be performed at runtime.

To avoid performing a hash-map lookup on every property access, dynamic language interpreters sometimes employ a small cache for (all) property accesses. You index into this cache with the runtime type of the object and the desired identifier. [♥] Resolving a property access against this cache under ideal circumstances takes about 8.5 cycles.

Cached property access data
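
A sketch of that kind of cache is below. I'm assuming, for illustration, that a "type" can be represented as a pointer and that identifiers are interned (so pointer comparison suffices); this isn't SpiderMonkey's actual property cache:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_SIZE 1024   /* power of two, so masking works as a modulus */

typedef struct {
    const void *type;        /* runtime type of the object */
    const char *identifier;  /* assumed interned: pointer comparison suffices */
    uint32_t slot;           /* the cached answer */
} PropCacheEntry;

static PropCacheEntry prop_cache[CACHE_SIZE];

/* A cheap hash of (type, identifier) -- a stand-in for the
   shift/xor/add/and sequence mentioned in the footnote. */
static size_t prop_cache_index(const void *type, const char *identifier) {
    uintptr_t h = ((uintptr_t) type >> 3) ^ (uintptr_t) identifier;
    h += h >> 16;
    return (size_t) (h & (CACHE_SIZE - 1));
}

/* On a hit, fill *slot and skip the full hash-map lookup; on a miss,
   the caller does the slow lookup and then records the result here. */
bool prop_cache_lookup(const void *type, const char *identifier, uint32_t *slot) {
    PropCacheEntry *entry = &prop_cache[prop_cache_index(type, identifier)];
    if (entry->type == type && entry->identifier == identifier) {
        *slot = entry->slot;
        return true;
    }
    return false;
}

void prop_cache_fill(const void *type, const char *identifier, uint32_t slot) {
    PropCacheEntry *entry = &prop_cache[prop_cache_index(type, identifier)];
    entry->type = type;
    entry->identifier = identifier;
    entry->slot = slot;
}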

WTF is inline caching already!?

So we've established that, with good locality, JS property accesses are at least 8.5x slower than C struct property accesses. We've bridged the gap quite a bit from 45x slower. But how do we bridge the gap even bridgier?

Bridge fail!

The answer is, surprisingly, self-modifying code: code that modifies code-that-currently-exists-in-memory. When we JIT compile a property access bytecode, we emit machine-code that looks like this:

type            <- load addressof(object) + offsetof(JSObject, type)
shapeIsKnown    <- type equals IMPOSSIBLE_TYPE
None            <- goto slowLookupCode if shapeIsKnown is False
property        <- load addressof(object) + IMPOSSIBLE_SLOT

Now, if you ask Joe Programmer what he thinks of that code snippet, he would correctly deduce, "The slow lookup code will always be executed!" However, we've got the self-modifying code trick up our sleeves. Imagine that the type matched, so we didn't have to go to the slow lookup code — what's our new property access time?

One type load, one comparison, an untaken branch, and a property value load. Assuming good locality/predictability and that the object's type happened to already be in the register (because you tend to use it a lot), that's 0+1+1+1 == 3 cycles! Much better.

But how do we get the types to match? Joe Programmer is still looking pretty smug over there.

The trick is to have the slowLookupCode actually modify this snippet of machine code! After slowLookupCode resolves the property in the traditional ways mentioned in previous sections, it fills in a reasonable value for IMPOSSIBLE_TYPE and IMPOSSIBLE_SLOT like they were blank fields in a form. This way, the next time you run this machine code, there's a reasonable chance you won't need to go to slowLookupCode — the types might compare equal, in which case you can perform a simple load instruction to get the property that you're looking for!
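
In C-flavored pseudocode, the patching step boils down to overwriting the immediates that were baked into the emitted instructions. The InlineCacheSite record and offsets below are made up for illustration — the real thing goes through the JIT's assembler and has to worry about memory protection and instruction-cache flushes:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical record the JIT keeps per inline cache: where the emitted
   snippet lives and where its two "blank form fields" sit inside it. */
typedef struct {
    uint8_t *code;              /* start of the emitted machine code */
    size_t type_imm_offset;     /* byte offset of the IMPOSSIBLE_TYPE immediate */
    size_t slot_imm_offset;     /* byte offset of the IMPOSSIBLE_SLOT immediate */
} InlineCacheSite;

/* Called from slowLookupCode after the traditional lookup succeeds:
   fill in the form so the fast path works for this type next time. */
void ic_patch(InlineCacheSite *site, uint32_t observed_type, uint32_t slot_offset) {
    /* In a real engine the code page must be made writable here (mprotect
       or equivalent) and the instruction cache may need flushing. */
    memcpy(site->code + site->type_imm_offset, &observed_type, sizeof observed_type);
    memcpy(site->code + site->slot_imm_offset, &slot_offset, sizeof slot_offset);
}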

This technique of modifying the JIT-compiled code to reflect a probable value is called inline caching: inline, as in "in the emitted code"; caching, as in "cache a probable value in there". This is the basic idea behind inline caches, AKA ICs.

Also, because we emit this snippet for every property-retrieving bytecode, we don't rely on global property access patterns the way the global property cache does. We mechanical mariners are less at the mercy of the gods of locality.

Code generation

Where does "P" come from?

Er, right, we're still missing a letter. The "P" in "PIC" stands for polymorphic, which is a fancy sounding word that means "more than one type".

The inline cache demonstrated above can only remember information for a single type — any other type will result in a shapeIsKnown of False and you'll end up going to the slowLookupCode.

Surveys have shown that the degree of polymorphism (number of different types that actually pass through a snippet during program execution) in real-world code tends to be low, in JavaScript [♦] as well as related languages. However, polymorphism happens, and when it does, we like to be fast at it, too.

So, if our inline cache only supports a single type, what can we do to handle polymorphism? The answer may still be surprising: self-modify the machine code some more!

Before we talk about handling the polymorphic case, let's recap the PIC lifecycle.

The PIC lifecycle

The evolution of the PIC is managed through slowLookupCode, which keeps track of the state of the inline cache in addition to performing a traditional lookup. Once the slow lookup is performed and the PIC evolves, the slowLookupCode jumps back (to the instruction after the slot load) to do the next thing in the method.

When a PIC is born, it has that useless-looking structure you saw in the previous section — it's like a form waiting to be filled out. The industry terminology for this state is pre-monomorphic, meaning that it hasn't even seen one (mono) type pass through it yet.

The first time that inline cache is executed and we reach slowLookupCode we, shockingly, just ignore it. We do this because there is actually a hidden overhead associated with modifying machine code in-place — we want to make sure that you don't incur any of that overhead unless there's an indication you might be running that code a bunch of times. [♣]

The second time we reach the slowLookupCode, the inline cache is modified and the PIC reaches the state called monomorphic. Let's say we saw a type named ElephantTrunk — the PIC can now recognize ElephantTrunk objects and perform the fast slot lookup.

When the PIC is monomorphic and another type, named GiraffeSnout, flows through, we have a problem. There are no more places to put cache entries — we've filled out the whole form. This is where we get tricky: we create a new piece of code memory that contains the new filled-out form, and we modify the original form's jump to go to the new piece of code memory instead of slowLookupCode.

Recognize the pattern? We're making a chain of cache entries: if it's not an ElephantTrunk, jump to the GiraffeSnout test. If the GiraffeSnout fails, then jump to the slowLookupCode. An inline cache that can hit on more than one type is said to be in the polymorphic state.

PIC lifecycle

There's one last stage that PICs can reach, which is the coolest sounding of all: megamorphic. Once we detect that there are a lot of types flowing through a property access site, slowLookupCode stops creating cache entries. The assumption is that you might be passing an insane number of types through this code, in which case additional caching would only slow things down. For a prime example of megamorphism, the 280slides code has an invocation site with 1,437 effective types! [**]
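
The bookkeeping that slowLookupCode performs can be sketched as a little state machine. The state names match the ones above, but the threshold, the helpers, and the structure itself are invented for illustration:

#include <stdint.h>

enum PICState { PIC_PREMONOMORPHIC, PIC_MONOMORPHIC, PIC_POLYMORPHIC, PIC_MEGAMORPHIC };

#define PIC_MAX_STUBS 16   /* invented cap before giving up and going megamorphic */

typedef struct {
    enum PICState state;
    unsigned slow_hits;    /* how many times slowLookupCode has run for this site */
    unsigned nstubs;       /* out-of-line cache entries chained after the inline one */
} PICInfo;

/* Hypothetical helpers: fill in the inline form / chain a new filled-out form. */
void patch_inline_entry(PICInfo *pic, uint32_t type, uint32_t slot);
void append_stub(PICInfo *pic, uint32_t type, uint32_t slot);

/* Called from slowLookupCode every time the fast path misses. */
void pic_on_slow_lookup(PICInfo *pic, uint32_t observed_type, uint32_t slot) {
    pic->slow_hits++;

    switch (pic->state) {
    case PIC_PREMONOMORPHIC:
        /* First miss: ignore it, so we don't pay the code-patching
           overhead for code that may only ever run once. */
        if (pic->slow_hits >= 2) {
            patch_inline_entry(pic, observed_type, slot);
            pic->state = PIC_MONOMORPHIC;
        }
        break;
    case PIC_MONOMORPHIC:
    case PIC_POLYMORPHIC:
        /* A new type showed up: chain another cache entry onto the miss
           path, unless this site has already seen too many types. */
        if (pic->nstubs < PIC_MAX_STUBS) {
            append_stub(pic, observed_type, slot);
            pic->nstubs++;
            pic->state = PIC_POLYMORPHIC;
        } else {
            pic->state = PIC_MEGAMORPHIC;   /* stop caching; always take the slow path */
        }
        break;
    case PIC_MEGAMORPHIC:
        break;   /* additional caching would only slow things down */
    }
}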

Conclusion

There's a lot more to discuss, but this introduction is rambling enough as-is — if people express interest, we can dig into further topics in a follow-up entry.

Suffice it to say that JavaScript gets a nice speed boost by enabling PICs: x86 JaegerMonkey with PICs enabled is 25% faster on SunSpider than with them disabled on my machine. [††] If something makes a dynamic language fast, then it is awesome. Therefore, inline caches are awesome. (Modus ponens says so.)

Footnotes

[*]

This is as opposed to, say, C++, where in any given snippet of code the == operator could be overloaded.

[†]

"Engine" is a sexy term, but it's just a library of support code that you use when language constructs don't easily fall into the translate-it-directly-to-machine-code model used by C.

[‡]

Coupon only applies to idealized property access latencies. Competitor coupons gladly accepted. Additional terms and restrictions may apply. See store for details.

[§]

Alternative interpreter designs tend to walk over something that looks more like the source text — either an abstract syntax tree or the program tokens themselves. These designs are less common in modern dynamic languages.

[¶]

There have historically been implementations that do things like this; notably, the Lisp machines and Jazelle DBX. The JavaScript semantics for ADD are particularly hairy compared to these hosted languages, because getting the value-for-adding out of an object can potentially invoke arbitrary functions, causing re-entrance into JavaScript interpretation.

[#]

In the bytecode stream the value isPointy is encoded as an immediate.

[♠]

Note that there is actually further overhead in turning the looked-up property into an appropriate JavaScript value. For example, there are additional checks to see whether the looked-up value represents a "getter" function that should be invoked.

[♥]

This is, in itself, a small hash-map lookup, but the hash function is quite fast. At the moment it's four dependent ALU operations: a right shift, an xor, an add, and a bitwise and.

[♦]

Gregor Richards published a paper in PLDI 2010 that analyzed a set of popular web-based JS applications. The results demonstrated that more than eighty percent of all call sites were monomorphic (had the same function body). I'm speculating that this correlates well with the property accesses we're discussing, though that wasn't explicitly established by the research — in JS, property access PICs are easier to discuss than function invocation PICs. In related languages, like Self, there is no distinction between method invocation and property access.

[♣]

"Hidden overhead my foot! Where does it come from?" Today's processors get a little scared when you write to the parts of memory that contain code. Modern processor architecture assumes that the memory you're executing code out of will not be written to frequently, so they don't optimize for it. [‡‡]

[**]

I'm citing Gregor Richards yet again.

[††]

MICs give a nice percentage boost as well, but they're harder to disable at the moment, or I'd have numbers for that too.

[‡‡]

The annoying part is that the instruction prefetcher may have buffered up the modified instructions, so you have to check whether the modified cache line is in there. Older cache coherency protocols I've read about flush lines past unified caches if they detect a hit in both the instruction and data caches — maybe it's better nowadays.

Two postfix operations redux: sequence points

Get ready for some serious language lawyering.

I was going back and converting my old entries to reStructuredText when I found an entry in which I was wrong! (Shocking, I know.)

C

Stupid old me didn't know about sequence points back in 2007: the effects of the ++ operator in the C expression i++ * i++ are in an indeterminate state of side-effect completion until one of the language-defined sequence points is encountered (e.g. the end of the full expression at the semicolon, or a function invocation).

From the C99 standard, §6.5.2.4 item 2, regarding the postfix increment and decrement operators:

The result of the postfix ++ operator is the value of the operand. After the result is obtained, the value of the operand is incremented. The side effect of updating the stored value of the operand shall occur between the previous and the next sequence point.

Therefore, the compiler is totally at liberty to interpret that expression as:

mov lhs_result, i     ; Copy the values of the postincrement evaluation.
mov rhs_result, i     ; (Which is the original value of i.)
mul result, lhs_result, rhs_result
add i, lhs_result, 1
add i, rhs_result, 1  ; Second increment clobbers with the same value!

This yields the same result as the GCC compilation in the referenced entry: i is 12 and the product is 121.

As I mentioned before, the reason this can occur is that nothing in the syntax forces the first postincrement to be evaluated before the second one. To give an analogy to concurrency constructs: you have a kind of compile-time "race condition" in your syntax between the two postincrements that could be solved with a sequence point "barrier". [*]

In this assembly, those adds can float anywhere they like after their corresponding mov instruction, and they can operate directly on i instead of the temporary if they'd prefer. Here's a possible sequence that results in a value of 132 with i ending up as 13.

mov lhs_result, i ; Gets the original 11.
inc i             ; Increment in-place after the start value is copied.
mov rhs_result, i ; Gets the new value 12.
inc i             ; Increment occurs in-place again, making 13.
mul result, lhs_result, rhs_result

Even if you know what you're doing, mixing two postfix operations (or any side effects) while relying on the less obvious sequence points (like function invocation) is dangerous and easy to get wrong. Clearly it is not a best practice. [†]

Java

Experimentation suggests that the postincrement operation has sequence-point-like semantics in the Java language, and the specification confirms it! From the Java language specification (page 416):

The Java programming language also guarantees that every operand of an operator (except the conditional operators &&, ||, and ? :) appears to be fully evaluated before any part of the operation itself is performed.

Which combines with the definition of the postfix increment expression (page 485):

A postfix expression followed by a ++ operator is a postfix increment expression.

As well as left-to-right expression evaluation (page 415):

The left-hand operand of a binary operator appears to be fully evaluated before any part of the right-hand operand is evaluated.

Together, these lead to the definitive conclusion that i++ * i++ will always result in 132 == 11 * 12, with i == 13, when i starts at 11.

Python

Python has no increment operators at all, so you don't have to deal with this kind of nonsense.

>>> count = 0
>>> count++
  File "<stdin>", line 1
    count++
          ^
SyntaxError: invalid syntax

Annoyingly for newbies, though, ++count is a valid expression that happens to look like a preincrement.

>>> count = 0
>>> ++count
0
>>> --count
0

They're actually two applications of the unary plus and unary minus operators, respectively. Just one of the hazards of a context-free grammar, I suppose.

Footnotes

[*]

I threw this in because the ordeal reminds me of the classic bank account concurrency problem. If it's more confusing than descriptive, please ignore it. :-)

[†]

Since function invocation defines sequence points, I thought this code sequence guaranteed those results:

#include <stdio.h>

int identity(int value) { return value; }

int main() {
        int i = 11;
        printf("%d\n", identity(i++) * identity(i++));
        printf("%d\n", i);
        return 0;
}

As Dan points out, the order of evaluation is totally unspecified — the left-hand and right-hand subexpressions can potentially be evaluated concurrently.

Bit twiddling: Simple O(1) membership test

Disclaimer

Bit twiddling is fun, and it has its practical advantages.

You have to understand, though, that clever tricks without appropriate documentation will make people want to break your face. [*] Always bit bash responsibly: appoint a designated code-reader to make sure you're clear enough, and leave your keys at the door.

The Problem

Let's say you wanted to know whether a number was a valid PCI Express link width in terms of number of lanes. We know that valid widths are x1, x2, x4, x8, x12, x16, or x32, and want to construct a function of the following form:

#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/**
 * :return: Whether the lane count is valid.
 */
bool is_valid_link_width(uint8_t lane_count);

/**
 * Unit test for ``is_valid_link_width``.
 */
int main(int argc, char **argv) {
    assert(!is_valid_link_width(0));
    assert(is_valid_link_width(1));
    assert(is_valid_link_width(2));
    assert(!is_valid_link_width(3));
    assert(is_valid_link_width(32));
    assert(!is_valid_link_width(33));
    assert(!is_valid_link_width(0xff));
    return 0;
}

Note that the uint8_t has a width of exactly 8 bits. [†]

How would you write it?

Less Interesting Solution

If you were thinking of a switch statement, that will work. You could use a switch statement with intentional fall-throughs and hope that the compiler optimizes it into a branch table for you. (For values this small and dense it probably will, as mentioned in the referenced article.) If the compiler doesn't build the branch table for you, but instead generates the equivalent of a big if/else if ladder, your solution doesn't satisfy the O(1) constraint: in that case, the worst-case control flow hits every rung of the ladder (the else if guards), making it O(n).

bool is_valid_link_width(uint8_t lane_count) {
    switch (lane_count) {
    case 1:
    case 2:
    case 4:
    case 8:
    case 12:
    case 16:
    case 32:
        return true;
    }
    return false;
}

An implementation that I like better, which doesn't put as much faith in the compiler, is as follows:

bool is_valid_link_width(uint8_t lane_count) {
    return 0x100011116ULL & (1ULL << lane_count);
}

How cool is that?

The Neat Trick

The clever insight here is that we can encode all of our target "true" values in binary form, like so:

       32                      16    12    8     4    1
0b__0001__0000__0000__0000__0001__0001__0001__0001__0110

Now, if we take a 1 bit and shift it left by a number of binary slots equal to the lane count, it lines up with a 1 in this long binary number exactly when the lane count is one of the valid widths. Take the bitwise-AND of those two values, and we wind up with a non-zero ("true") result when they line up and a zero ("false") result when they don't.

This is exactly what we were looking for.

This long binary number must be converted into hexadecimal so that we can write it as an integer literal in our C program. Encoding each group of four bits into a hex digit from right to left, we get the value 0x100011116.

There's an issue with this value, however. Unless we specify a suffix for our integer literal, the compiler is allowed to truncate the value to its native word size, [‡] which would cause serious problems. For x86 systems with 16-bit words, our value could be truncated to 0x1116, which would only allow lane sizes of 1, 2, 4, 8, and 12 — the allowed values of 16 and 32 would be cut off!

To solve this, as you can see in the function definition, we add the ULL integer suffix, which explicitly marks the integer literal as an unsigned long long. (The long long integer data type was added to the C language in the C99 standard.) This data type is required to be at least 64 bits wide, so it can definitely hold our 33 relevant bits (bit positions 0 through 32, where position 0 exists for the 1ULL << 0 case). The long data type is too small to rely on here, as long is only required to be 32 bits wide by the C standard (and it is exactly 32 bits wide on many common platforms).
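
If you want to convince yourself the suffix matters, here's a tiny standalone check (my own demo, not part of the original function):

#include <stdio.h>

int main(void) {
    /* With the ULL suffix the constant has type unsigned long long, which
       the standard requires to be at least 64 bits wide, so bit 32 survives. */
    unsigned long long mask = 0x100011116ULL;

    /* Note also that 1ULL << 32 is well defined, while a plain (int) 1 << 32
       is undefined behavior on the common 32-bit-int platforms -- the same
       reason the function shifts with 1ULL << lane_count. */
    printf("bit 32 set: %d\n", (int) ((mask >> 32) & 1));
    printf("sizeof(unsigned long long): %zu bytes\n", sizeof mask);
    return 0;
}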

Readability Counts

Note that there's a more readable version of the same trick in the following:

bool is_valid_link_width(uint8_t lane_count) {
    const uint64_t set =
        1ULL << 1
        | 1ULL << 2
        | 1ULL << 4
        | 1ULL << 8
        | 1ULL << 12
        | 1ULL << 16
        | 1ULL << 32;
    return set & (1ULL << lane_count);
}

Here we make the construction of the big integer more explicit and make the code less prone to errors in hand-encoding the binary value into hex. Any compiler worth its salt will fold the const calculation out at compile time, so no overhead is incurred by writing it this way.

I demonstrated the other way of doing it first to a) blow your mind a little bit, and b) demonstrate an idiom you might see in other people's (overly) clever code. Now there's a chance you can recognize and decipher it without adequate documentation. Huzzah!

Footnotes

[*]

A great rift in the universe known as "Other People's Perl" has been the cause of 80% of the computer-engineering face breakage since 1987. Don't let this tragedy happen to you or your beloved programmers.

[†]

I like using fixed-width integer types, especially in describing problems, because they helpfully constrain the possibilities on the input domain. This is even more important for newcomers who are just wrapping their heads around bit twiddling.

[‡]

I can't find standards documents/discussions to support this claim, but it's definitely what I was taught. Can anybody provide evidence to confirm/deny?

procfs and preload

Two of the cool utilities that I've checked out lately have centered around /proc. /proc is a virtual filesystem mountpoint — its filesystem entities are generated on the fly by the kernel, and they provide information about kernel state and, consequently, the currently running processes. [*]

The utilities are preload and powertop. Both are written in C, though I think that either of them could be written more clearly in Python.

preload

Preload's premise is fascinating. Each shared library that a running process has memory-mapped (via mmap) shows up in /proc/[pid]/maps, which contains entries of the form:

[vm_start_addr]-[vm_end_addr] [perms] [file_offset] [device_major_id]:[device_minor_id] [inode_num] [file_path]

Preload uses a Markov chain to decide which shared library pages to "pre-load" into the page cache by reading and analyzing these maps over time. Preload's primary goal was to reduce login times by pre-emptively warming up a cold page cache, which it was successful in doing. The catch is that running preload was shown to decrease performance once the cache was warmed up, indicating that it may have just gotten in the way of the native Linux page cache prefetch algorithm. [†]
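
Reading those map entries yourself is straightforward. Here's a minimal C sketch — mine, not preload's code — that dumps the file-backed mappings of the current process; none of preload's Markov-chain bookkeeping is shown:

#include <stdio.h>

/* Dump the file-backed mappings of the current process -- roughly the raw
   data preload gathers per pid (using /proc/self here for brevity). */
int main(void) {
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps == NULL) {
        perror("fopen /proc/self/maps");
        return 1;
    }

    char line[4096];
    while (fgets(line, sizeof line, maps) != NULL) {
        unsigned long start, end, offset, inode;
        unsigned major, minor;
        char perms[5], path[4096];

        /* [vm_start]-[vm_end] [perms] [offset] [major]:[minor] [inode] [path] */
        int fields = sscanf(line, "%lx-%lx %4s %lx %x:%x %lu %4095s",
                            &start, &end, perms, &offset, &major, &minor,
                            &inode, path);
        if (fields == 8 && path[0] == '/')   /* skip anonymous and special mappings */
            printf("%s (%lu bytes, %s)\n", path, end - start, perms);
    }

    fclose(maps);
    return 0;
}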

There are a few other things in /proc that preload uses, like /proc/meminfo, but querying the maps is the meat and potatoes. I was thinking of porting it to Python so that I could understand the structure of the program better, but the fact that the daemon caused a performance decrease over a warm cache turned me off the idea.

Footnotes

[*]

A cool side note — all files in /proc have a file size of 0 except kcore and self.

[†]

The page_cache_readahead() function in the Linux kernel.

Two postfix operations in a single statement in GCC

#include <stdio.h>

int z = 11;

int main()
{
    printf("%d\n", ((z++) * (z++)));
    printf("%d\n", z);
    return 0;
}
$ gcc -o postfix_test.o postfix_test.c; ./postfix_test.o
121
12

Surprised? I sure was. It looks like gcc interprets two postfix operations in a single statement as a single postfix increment request. I guess this makes sense if you consider the postfix operator to mean, "Wait for this statement to complete, then have the variable increment." Assuming this specification, the second time that you postfix-increment the compiler says, "Yeah, I’m already going to have the variable increment when the statement completes — no need to tell me again."

On the other hand, prefix increment does work twice in the same statement. Maybe this is a decision that’s left up to the compiler? It’s not specified in K&R as far as I can see, but I haven’t checked any of the ANSI specifications.

Updates

2007/09/26 Here’s what Java has to say!

class DoublePostfixTester
{
    public static void main(String[] args)
    {
        int z = 11;
        System.out.println(((z++) * (z++)));
        System.out.println(z);
    }
}
$ javac DoublePostfixTester.java
$ java DoublePostfixTester
132
13

Which is what I would have expected in the first place. Bravo, Java — we’re more alike than I thought.