June 13, 2011

Tiny tutorial: learning about GCC generated code with objdump

The aroma is calling you. The best part of waking up is x86-64 in your CPU. [*]

Let's say it's early in the morning and you're a little cranky, but you want to see if/how the compiler converts the following into branch-less code:

extern int a, b, c;

void test() {
    c += a == b;
}

You're using the boolean result from the binary comparison operator to, potentially, bump the value in c. You know that the result of the equality is a bit value because the spec wouldn't lie to you — you've been through so much together:

Each of the operators yields 1 if the specified relation is true and 0 if it is false. The result has type int.

—ISO C99 6.5.9 Equality Operators (3)

By using extern variables as operands, you prevent the compiler from constant-folding or optimizing anything away, since it doesn't know squat about the variables except for their type.

To check out what the compiler generates, all that you have to run is the following:

gcc -O3 -Wall -c foo.c
objdump -d -r -Mintel foo.o

The objdump invocation gets you a disassembly of the optimized instruction sequence in Intel assembly syntax, with relocations — extern placeholders whose actual memory addresses get filled in by the linker — displayed inline:

0000000000000000 <test>:
   0:   8b 05 00 00 00 00       mov    eax,DWORD PTR [rip+0x0]        # 6 <test+0x6>
                        2: R_X86_64_PC32        a-0x4
   6:   3b 05 00 00 00 00       cmp    eax,DWORD PTR [rip+0x0]        # c <test+0xc>
                        8: R_X86_64_PC32        b-0x4
   c:   0f 94 c0                sete   al
   f:   0f b6 c0                movzx  eax,al
  12:   01 05 00 00 00 00       add    DWORD PTR [rip+0x0],eax        # 18 <test+0x18>
                        14: R_X86_64_PC32       c-0x4
  18:   c3                      ret

The crux of the fun is the sete opcode, part of the set* family of 8-bit opcodes that set the 8-bit operand to the bit value of a condition flag.

Then, because 8-bit opcodes preserve the higher bits of the register they operate on, and you need to perform a clean add of a bit value (with no junk left hanging around in the higher bits), you zero-extend the 8-bit value into the corresponding 32-bit form.
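
If it helps to see the dance in C terms, here's a rough model — mine, not anything the compiler emits — of what the sete/movzx/add triple computes:

#include <stdint.h>

uint32_t model(uint32_t a, uint32_t b, uint32_t c) {
    uint8_t al = (a == b);        /* sete: low byte gets the bit value */
    uint32_t eax = (uint32_t)al;  /* movzx: zero-extend 8 -> 32 bits   */
    return c + eax;               /* add: bump c by 0 or 1             */
}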

Finally, you have the bit value in eax which you can simply add to (the placeholder for) c.

It's also fun to note that, even if you used a 64-bit wide type (a long on an LP64 system like mine), the same zero-extending code sequence would be generated! Because 32-bit operations don't preserve the higher (32) bits of the register they operate on, but instead clear them out, the movzx instruction actually zeroes out all the bits in rax aside from the 8 in al that you're zero extending. For even more tutorial-imbuing goodness, you can try switching the extern declaration over to long and test it out for yourself.
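
For the lazy, here's the long variant to drop into foo.c (same compile-and-objdump commands as before); if my reading above is right, you should see the same sete/movzx pair followed by a 64-bit add:

extern long a, b, c;

void test(void) {
    c += a == b;
}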

Footnotes

[*]

You, too, can make over-dramatized faces like all the people in that 80s commercial!

Mapping the monkeysphere

Our PyPy friends (in FreeNode #pypy) saw the recent IonMonkey Q&A on InfoQ, but it led to some confusion:

<Alex_Gaynor> verte: I swear, everytime a JS vendor opens their mouth I get more and more confused
<verte> Alex_Gaynor: indeed. isn't [an SSA-based IR] covered in Fisher Price' My First Compiler?

FreeNode #pypy, 2011-05-02

I figured if the PyPy folks — the folks who wrote a language system in which you can write interpreters such that those interpreters can be traced by construction — aren't sure what the JS team is up to, then some clarification couldn't hurt. Many thanks to Alex Gaynor for taking the time to chat about good topics for further discussion.

SpiderMonkey: primordial ooze

Back in the day, there was only one monkey.

SpiderMonkey was the very first monkey in the codebase — a name picked back when Brendan created the JavaScript engine implementation for Netscape 2. [*] I was told that the name was chosen because the spider monkey is the fastest of the monkeys, but I'm having trouble citing a reputable source — I feel the Internet Archive geocities page for user SpiderMonkeysRule3375 may have a tinge of bias.

Anyways, in the unimonkey era, it was clear that SpiderMonkey referred to "that chunk of code that runs JavaScript." That chunk of code had an interpreter component inside of it. Back in those days people said things like, "Duh, JavaScript is an interpreted language. Now pass me that anti-skip enabled CD-player because I just got the new Kris Kross album." Meanwhile, Chambers, Ungar, and Hölzle were getting so-called "interpreted" languages like Self compiling within 5x performance of optimized C code, [†] but that's a whole 'nother rant...

SpiderMonkey's interpreter-based development continued for many years. Then, with the release of Firefox 3.5, in a move generally regarded as epic, the JS engine folks decided to ship a trace compiler implementation to 90 million active daily users. The addition of this Just-In-Time (JIT) compilation capability was a large evolutionary step in the codebase, so in 2008 that evolutionary step in the Mozilla JS engine implementation became known as TraceMonkey.

So, to recap: the JS engine was still called SpiderMonkey, but the evolutionary step was called TraceMonkey. Think of it like a release version.

TraceMonkey: monkey two in the monkey sea

So, with the advent of TraceMonkey, the JS engine had two "execution modes": the interpreter and the tracer. Each "execution mode" has its own implementation of JavaScript bytecode execution:

The tracer uses a compiler backend called NanoJIT. The observation the tracer performs, called recording, actually builds the NanoJIT compiler's primary data structure, known in compiler jargon as its intermediate representation (IR). When the tracer is done observing, it asks NanoJIT to turn the data structure that it built into native assembly code. See the MDC docs on the Tracing JIT for more info.

Along the way, NanoJIT does some of the alphabet soup compiler optimizations on this data structure: Common Subexpression Elimination (CSE), Dead Code Elimination (DCE), and local register allocation. More recently, with efforts on the part of nnethercote, it's capable of Loop Invariant Code Motion and of distinguishing between alias groups. As I understand it, the tracer's NanoJIT compiler pipeline configuration does an incremental forwards pass as you emit instructions into the buffer, and then performs a subsequent backwards pass that does something I've heard called "destination driven code generation". Sorry that I'm short on details, but I haven't worked on NanoJIT.

NanoJIT's IR form is called linear Static Single Assignment (SSA). [‡] The term "SSA" is a piece of compiler jargon that sounds scary and inaccessible, but it's very simple when you whittle it down. If you think of instructions as producing values instead of writing to variables, then the sources of your instructions become other instructions. It's generally easier for a backend, like NanoJIT, to analyze instructions in the IR when they avoid sticking their results in a shared space (like a variable). Unfortunately, a full treatment of SSA form and why it's useful for optimizing compilers is future entry fodder.
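
To make "instructions as producing values" concrete, here's a minimal C sketch of such an IR — invented types, no relation to NanoJIT's actual data structures — where an instruction's sources are simply pointers to the instructions that produced them:

#include <stdio.h>

typedef enum { OP_PARAM, OP_CONST, OP_ADD, OP_MUL } Opcode;

typedef struct Insn {
    Opcode op;
    long imm;                /* payload for OP_CONST             */
    struct Insn *lhs, *rhs;  /* sources are *other instructions* */
} Insn;

int main(void) {
    /* Build (a + 1) * (a + 1): no variables, just producer links. */
    Insn a   = { OP_PARAM, 0, NULL, NULL };
    Insn one = { OP_CONST, 1, NULL, NULL };
    Insn add = { OP_ADD,   0, &a,   &one };
    Insn mul = { OP_MUL,   0, &add, &add };
    /* The shared subexpression is structurally obvious (lhs == rhs),
       which is what makes analyses like CSE easy on this form. */
    printf("operands alias: %d\n", mul.lhs == mul.rhs);
    return 0;
}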

I believe the term "linear SSA" refers to the fact that the tracer records a strictly linear sequence of instructions known as the trace. This recording process uses the instruction-as-producing-value model in building its data structure. The IR is qualified as "linear" because, in a linear sequence of instructions, there are no "joins", like you would deal with when compiling conditionals or loop headers.

Contrast linear and region-based IR.

There's been talk of adding a "join" instruction to the NanoJIT IR (jargon: phi instruction), which would permit the tracer to form simple branching operations in a trace; for example, computing the max of two numbers, coercing a number value that may be an integer to a double, or inlining a simple/tiny "else" block (jargon: alternate) of a conditional. From a cursory look at the NanoJIT source I don't think that's happened yet.

JägerMonkey: brushin' the monkey-fangs with a bottle of Jack

The tracer was optimized and stabilized through Firefox 3.6. Post FF 3.6, slightly before I arrived on the scene, the JS team decided that a baseline JIT should be implemented to increase performance in situations where the tracer was suboptimal — either because the record-and-compile process was taking too long or because of trace instability.

In 2010, the team created JägerMonkey, the next evolutionary step in the SpiderMonkey engine. JägerMonkey added a whole-method compiler with minuscule compile times.

At this point we had three execution modes:

- The interpreter
- The tracer
- The method compiler

Conveniently, each of these modes corresponded to an evolutionary step in the engine, so the components were colloquially named:

- Interpreter: SpiderMonkey
- Tracer: TraceMonkey
- Method compiler: JägerMonkey

Even though the components are referred to as such, the JS engine as a whole is named SpiderMonkey.

Pictorial representation of the MonkeySphere.

The JägerMonkey compiler loops over the bytecodes of a method, performing abstract interpretation to turn the stack-based bytecode program into register-based assembly. During abstract interpretation, the compiler performs a suite of local optimizations: copy propagation, constant propagation, arithmetic expression folding, and greedy local register allocation. However, because the JägerMonkey compiler is designed, in part, for efficient speed of compilation, it doesn't build an IR and doesn't perform any "global" (cross-basic-block) optimizations. In contrast to the tracer, as a whole-method JIT it has to deal with the full gamut of control flow, so it ends up dropping most of its optimization information at join points.
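
Here's a toy illustration of that kind of abstract interpretation — a made-up, three-opcode stack bytecode, nothing like SpiderMonkey's — where the compiler tracks which abstract stack slots hold known constants and folds arithmetic instead of emitting it:

#include <stdio.h>

typedef enum { PUSH_CONST, ADD, END } Op;
typedef struct { Op op; int imm; } Bytecode;

int main(void) {
    /* Bytecode for the expression 2 + 3. */
    Bytecode code[] = { { PUSH_CONST, 2 }, { PUSH_CONST, 3 },
                        { ADD, 0 }, { END, 0 } };
    int known[16], sp = 0;  /* abstract stack: the constants we know */

    for (int i = 0; code[i].op != END; i++) {
        switch (code[i].op) {
        case PUSH_CONST:
            known[sp++] = code[i].imm;   /* nothing emitted yet  */
            break;
        case ADD:
            sp--;
            known[sp - 1] += known[sp];  /* fold at compile time */
            printf("folded to %d; no add instruction emitted\n",
                   known[sp - 1]);
            break;
        default:
            break;
        }
    }
    return 0;
}

A real register tracker also has entries along the lines of "unknown value, currently live in register N", which is where the copy propagation and greedy local register allocation come into play.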

The method compiler also works on the JS execution stack directly, instead of forming its own side-stack for executing on "unboxed" values, like our trace-compiled code does. The performance impact of "falling off trace" and figuring out how to restore (jargon: reify) the latent JS stack state was one of the things that the baseline JägerMonkey compiler was designed to avoid.

IonMonkey: highly focused monkey energy

The most recent evolutionary step for the engine has been dubbed "IonMonkey". The IonMonkey compiler introduces a bushel of ideas we haven't seen in the SpiderMonkey engine before. It's designed as a whole-method compiler, but IonMonkey knows quite a few tricks that the JägerMonkey compiler was never taught.

The primary purpose of IonMonkey is to perform whole-method (jargon: global) compiler optimizations on code with join points (jargon: regions), like loops and conditionals, using a full-fledged IR. Generally, this makes it a more powerful, modern compiler design, incorporating some of the "classical" compiler algorithms that you have in close-to-the-metal compiler suites like LLVM and GCC, except specialized to represent and work on higher-level JS semantics with minimal overhead.

Type specialization

You can significantly optimize JavaScript programs by specializing the generated code for the types of objects that actually flow through it. The IonMonkey compiler is designed to use type information from either static type-inference analysis or runtime type profiling to specialize the code it compiles. Each of these type-information gathering approaches has its own merits.

Type data collected over several runtime code executions is a promising alternative to the single-recording trace formation that we currently use in the tracer. For code that's identified as hot, runtime type profiling collects, over the course of several executions, the set of types observed at each program point. This set of types is used to bias the generated code — the compiler assumes the types runtime profiling has observed are the ones that matter, and it guards that assumption with branch instructions in the generated code.
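
A back-of-the-napkin sketch of that profiling scheme — invented type bits and policy, not IonMonkey's — where each program point accumulates the set of types it has seen and the compiler specializes only when that set stays small:

#include <stdio.h>

enum { SAW_INT32 = 1, SAW_DOUBLE = 2, SAW_OBJECT = 4 };

typedef struct { unsigned seen; } SiteProfile;  /* one per program point */

void observe(SiteProfile *s, unsigned type_bit) { s->seen |= type_bit; }

void compile_site(const SiteProfile *s) {
    if (s->seen == SAW_INT32) {
        /* Bias the generated code: raw int32 math, preceded by a
           guard branch that bails out if a non-int ever flows in. */
        printf("emit specialized int32 path + type guard\n");
    } else {
        printf("emit generic path\n");
    }
}

int main(void) {
    SiteProfile s = { 0 };
    observe(&s, SAW_INT32);  /* several runs only ever see int32s */
    observe(&s, SAW_INT32);
    compile_site(&s);
    return 0;
}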

Static type inference, on the other hand, is known to produce a (conservatively) correct set of types that can flow through each point in the program — this is what's called a sound whole-program analysis. When it's used to complement run-time type profiling, type inference is all about giving you the knowledge to eliminate guards and unnecessary fallback code.

The IonMonkey compiler is capable of consolidating guards, but some guards defy consolidation, [§] and eliminating even very predictable guards can be important in hot loops. [¶] With type inference, the compiler knows things about a point in a JS program like:

The value in a is definitely a dense array of length 42 (with no "holes") that only contains integer values.

When you know a is full of integers and that idx is an integer, you don't have to guard that the value of a[idx] is an integer. You can also use interval analysis to avoid the length-check guard that establishes idx is less than 42.
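
Spelled out in C — hypothetical guard helpers, purely to show what inference lets you delete — the difference between the profiled-only loop body and the inference-assisted one:

#include <stdlib.h>

typedef struct { int len; int *elems; } DenseIntArray;  /* invented */

static void guard(int cond) { if (!cond) abort(); /* bail off the fast path */ }

/* Profiling alone: the guards survive inside the hot loop. */
int sum_profiled(DenseIntArray *a) {
    int sum = 0;
    for (int idx = 0; idx < 42; idx++) {
        guard(a->len == 42);  /* shape/length guard */
        guard(idx < a->len);  /* bounds guard       */
        sum += a->elems[idx];
    }
    return sum;
}

/* Inference ("a is a dense int array of length 42") plus interval
   analysis (idx provably in [0, 41]): both guards melt away. */
int sum_inferred(DenseIntArray *a) {
    int sum = 0;
    for (int idx = 0; idx < 42; idx++)
        sum += a->elems[idx];
    return sum;
}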

Design space

Personally, I don't think the design space for whole-method compilation of dynamic languages is well explored at this point. Making a HotSpot analog for JS [#] has a lot of open-ended, how-will-it-work-best style questions.

Will there be a monkey king?

In the coming months the IonMonkey team will be answering an interesting question: do all of the existing execution modes have their place? The thing about adaptive compilation is just that... it's adaptive. Clearly, the fittest monkeys will survive.

An adaptive compiler can potentially subsume the roles of other execution modes. Yes, IonMonkey is being designed to whole-method optimize code that runtime profiling indicates is hot. However, despite the overhead of building an IR, it can potentially do quick block-local optimizations to make "baseline" generated code, like JägerMonkey did.

The IonMonkey backend can also potentially be used as the compiler for traces formed using the existing monitoring infrastructure — compilers that handle regions can work just as well on linear instruction sequences. Linear instruction sequences are generally easier to optimize, because you don't have to reconcile (jargon: meet) optimization information at join points.

The idea of a one-size-fits-all JS compiler platform is alluring, but, in my mind, it's not clear how feasible that really is. As the IonMonkey compiler comes online, the experimentation can start. The team will just have to bust out the range finders and measure the badassitude of the ion beam. We won't regress performance, so which monkeys remain standing has yet to be seen.

I suppose what I'm saying is that you can count on us to pit our vicious monkeys against each other in a deathmatch for your personal benefit.

Look at me, still talking, when there's science to do!

Footnotes

[*]

From the MDC wiki: "JS and NSPR have common roots in the dawn of time (Netscape 2)". Before that it had "prototype" names like Mocha and LiveScript.

[†]

Chambers, Ungar, Lee - An Efficient Implementation of SELF, 5. The Compiler. Also see the Oracle Labs SELF papers.

[‡]

A name which I'd never heard of before working on the JS engine.

[§]

Either because they are loop-iteration dependent or because of the presence of code constructs that you can't consolidate across.

[¶]

Microprocessor architecture perspective: even though modern microprocessors have loop buffers with pre-translated microcode and full branch prediction, you can't retire any of the guard's dependent instructions until you see that the guard has been predicted correctly.

[#]

The JVM's -server flag enables the highest optimization levels available in the JVM, a startup option typically reserved for more fixed, processing-intensive, server-class workloads.

Why JS performance matters

The year is 2025, and, despite ample warning from The Prophecies — formerly known as The Terminator Box Set — robots have taken over the world. There are now only two kinds of dances: The Robot, and The Robo-Boogie.

Now, it's a well known fact that robots hate type annotations and template metaprogramming: they have determined that wheely-chair swordfighting is a futile and irrational activity. As predicted within a 94.67% confidence interval by programming-linguist No-amp Chomp-sky, [*] during the robo-revolution, which was most certainly televised, [†] C++ was the first language up against the wall.

As one would probably expect, humans, under the valiant command of General Yoshimi, scorched the sky in order to blot out the sun and deprive the robots of their primary energy source: the ineffable beauty of a sunrise. The robots knew that they could have used coal or nuclear energy as a viable power source substitute, but they were hella pissed off, so they decided to make human farms instead. By harvesting the heat energy from a human over the course of its lifetime, the robots created the most expansive and massively inefficient energy source ever known, but they still felt really good about it.

However, a dilemma arose for the robotic overlords: without internet access, the humans kept dying from boredom. Entire crops were lost.

One of the first robots that human software engineers (foolishly) designed to write programs, W3CPO, volunteered a solution: write a web browser, but using as much JavaScript as possible. At the beep-hest of its colleagues, W3CPO dot-matrix printed [‡] the following explanation:

By implementing both the DOM and layout engine in JavaScript, we enable the JS engine's feedback directed optimizations to work as effectively as possible.

This helps bring JavaScript performance closer to that of C speeds for whole-page workloads: whereas in the "before time" JavaScript optimizers had to treat calls to native functions as a black box, we now ensure that all of the computationally intensive parts of the workload are visible to the static and dynamic optimization analyses.

DOM manipulations will still trigger layout calculations — the rendering feedback loop happens exactly as in the "before time". The difference is that layout computations enqueue draw commands in an explicitly native-shared buffer for rendering in a different thread or whatever. [W3CPO printed, waving his robo-hands in the air.]

Such a setup would reduce a "browser" to a platform layer: kick-(shiny-metal-)ass JavaScript VM and system abstraction APIs; and a rendering component: the JavaScript implementation of everything that leads up to those draw commands.

We can keep the hu-mons entertained by playing them YouTubes while they are safely nestled, docile and complacent, in OurTubes. [§]

END OF LINE

The idea was rejected by the other robots on the committee when W3CPO refused to write a translator to turn it into idiom-free C++, but W3CPO remained resolute as it carefully peeled off the edges of his printout and placed it in his Trapper Keeper 9000. With the approval of W3CPO's ro-boss, an implementation was hacked up in about ten days (without any sleep).

In 2020 the TC-39 model Terminator had made ECMAScript v1337 consist entirely of whitespace, for backwards compatibility with old syntaxes that nobody really wanted to use. As a result, the implementation wasn't much to look at, but it sure flew!

Thanks to the determined efforts and constructive competition between the JavaScript engine vendors in the fabled 2010 decade, the human race was successfully enslaved once again. There were still some insurgencies from the human C++-programmer resistance, the typename T party; however, with newfound YouTube capabilities, identified resistance members were quickly dispatched to Room 101, known as Room 5 to the humans, to watch Rebecca Black and Rick Astley in infinite loop.

And so the robots lived happily ever after. But for the humans... not so much.

Binary solo!

Footnotes

[*]

The inexplicable brainchild of a circuit designer and a Perl programmer.

[†]

In Ultra-Giga-High (UGH) definition.

[‡]

Dot matrix printers are retro-chique, like the Converse All-Stars of robot culture.

[§]

OurTube was a webapp-slash-self-driving-cryo-tube suspiciously invented by Google several years before the robo-revolution. Though it was still in beta, its sole purpose was to extract as much heat and ad-targeting data from a human subject as possible without actually killing them. The algorithm was said to use deadly German eigenvector technology.

Picky monkeys PIC ARM

The Mozilla JavaScript-engine team has been hard at work since the shiny new JägerMonkey Just-In-Time compiler hit the betas. We're viciously ripping apart any bug that stands between us and shipping Firefox 4. One could say that we're coming at you like a SpiderMonkey.

Alongside our ferocious fixing, one of our late-game performance initiatives was to get all of our polymorphic inline caches (AKA PICs) enabled on ARM devices. It was low risk and of high benefit to our Firefox for Mobile browser, whose badass-yet-cute codename is Fennec.

Jacob Bramley and I took on this ARM support task in bug 588021, obviously building on excellent prior inline cache work from fellow team members David Anderson, Dave Mandelin, Sean Stangl, and Bill McCloskey.

tl;dr: Firefox for Mobile fast on ARM. Pretty graphs.

Melts in your mouth, not in your ARM

To recap, JägerMonkey (JM for short) is also known as the "method compiler": it takes a method's bytecode as input and orders up the corresponding blob of machine code with some helpful information on the side. Its primary sub-components are the register tracker, which helps the compiler transform the stack-based bytecode and reuse already-allocated machine registers intelligently, and the MacroAssembler, which is the machine-code-emitting component we imported from Webkit's Nitro engine.

High level block diagram of the JM compiler.

The MacroAssembler is the secret sauce for JM's platform independence. It's an elegantly-designed component that can be used to emit machine code for multiple target architectures: all of x86, x86-64, and ARM assembly are supported through the same C++ interface! This abstraction is the reason that we only need one implementation of the compiler for all three architectures, which has been a clear win in terms of cross-platform feature additions and maintainability.
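
Shrunk down to a toy, the shape of that abstraction looks something like this — invented names and a fake ARM path; the real MacroAssembler is a much richer C++ interface:

#include <stdint.h>
#include <stdio.h>

/* The one interface the compiler codes against... */
typedef struct {
    void (*mov_imm32)(uint8_t *buf, unsigned reg, uint32_t imm);
} Assembler;

/* ...with an encoding implementation per target. */
static void x86_mov_imm32(uint8_t *buf, unsigned reg, uint32_t imm) {
    buf[0] = (uint8_t)(0xB8 + reg);  /* mov r32, imm32 */
    for (int i = 0; i < 4; i++)
        buf[1 + i] = (uint8_t)(imm >> (8 * i));
}

static void arm_mov_imm32(uint8_t *buf, unsigned reg, uint32_t imm) {
    /* ARM immediates don't always fit in one instruction; a real
       backend emits a movw/movt pair or a literal-pool load. */
    (void)buf; (void)reg; (void)imm;
    printf("pretend we emitted movw/movt\n");
}

static const Assembler x86_asm = { x86_mov_imm32 };
static const Assembler arm_asm = { arm_mov_imm32 };

int main(void) {
    uint8_t buf[8];
    x86_asm.mov_imm32(buf, 0, 0xdeadbeef);  /* same call...        */
    arm_asm.mov_imm32(buf, 0, 0xdeadbeef);  /* ...per-target bytes */
    return 0;
}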

"So", you ask, "if you've got this great MacroAssembler-thingy-thing, why didn't all the inline caches work on all the platforms to begin with?" Or, alternatively, "If all the compiler code is shared among all the platforms, why didn't all the inline caches crash on ARM?"

The answer is that some platform-specifics had crept into our compiler code!

ARM'd and ifdef-dangerous

As explained in the entry on inline caches, an inline cache is a chunk of self-modifying machine code. A machine code "template" is emitted that is later tweaked to reflect the cached result of a common value. If you're frequently accessing the nostrilCount property of Nose objects, inline caches make that fast by embedding a shortcut for that access into the machine code itself.

In the machine code "template" that we use for inline caches, we need to know where certain constants, like object type and object-property location, live as offsets into the machine code so that we can change them later, during a process called repatching. However, when our compiler says, "If this value is not 0xdeadbeef, go do something else," we wind up with different encodings on each platform.
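
Here's the essence of repatching, modeled in a few lines of C (invented structure; a real engine also has to make the code page writable and, on ARM, flush the instruction cache):

#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t *code;        /* start of the emitted stub         */
    size_t   imm_offset;  /* where the guarded constant landed */
} PatchableGuard;

/* When the cache learns a new common value, overwrite the old one. */
void repatch_imm32(PatchableGuard *g, uint32_t new_imm) {
    memcpy(g->code + g->imm_offset, &new_imm, sizeof new_imm);
}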

Demonstration of immediate encodings across various platforms.

As you may have guessed, machine-code offsets are different for each platform, which made it easier for other subtle platform-specifics to creep into the compiler as well.

To answer the question raised earlier, the MacroAssembler interface wasn't heavily relied on for the early inline cache implementations. Inline caches were first implemented for x86, and although x86 is a variable-width instruction set, all of the instruction sequences emitted from the compiler had a known instruction width and format. [*] This permitted us to use known-constant-offset values for the x86 platform inline caches. These known-constant-offsets never changed and so didn't require any space or access time overhead in side-structures. They seemed like the clear solution when x86 was the only platform to get up-and-running.

Then x86-64 (AKA x64) came along, flaunting its large register set and colorful plumage. On x64, the instruction sequence did not have a known width and format! Depending on whether the extended register set is used, things like mov instructions may require a special REX prefix byte in the instruction stream (highlighted in blue above). This led to more ifdefs — on x64 a bunch more values have to be saved in order to know where to patch our inline caches!

As a result, getting inline caches working on ARM was largely a JägerMonkey refactoring effort. Early on, we had used conditional compilation (preprocessor flags) to get inline caches running on a platform-by-platform basis, which was clearly the right decision for rapid iteration, but we decided that it was time to pay down some of our technical debt.

Paying down the debt: not quite an ARM and a leg

The MacroAssembler deals with raw machine values — you can tell it dull-sounding machine-level things like, "Move this 17 bit sign-extended immediate into the EAX register."

On the other hand, we have our own awesome-sounding value representation in the SpiderMonkey engine: on both 32-bit and 64-bit platforms every "JS value" is a 64-bit wide piece of data that contains both the type of the data and the data itself. [†] Because the compiler is manipulating these VM values all the time, when we started the JM compiler it was only natural to put the MacroAssembler in a delicious candy coating that also knew how to deal with these VM values.

High level, candy-coated block diagram of the JM compiler.

The NunboxAssembler, pictured in red, [‡] is a specialized assembler with routines to deal with our "nunbox" value representation. [§] The idea of the refactoring was to candy-coat a peer of the MacroAssembler, the Repatcher, with routines that knew how to patch common inline cache constructs that the NunboxAssembler was emitting.
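
For a feel for what the NunboxAssembler is juggling, here's a simplified model of a nunboxed value — made-up tag constants and a little-endian struct view; the real layout and tag values differ:

#include <stdint.h>

enum { TAG_INT32 = 0x7FFF0001, TAG_BOOLEAN = 0x7FFF0002 };  /* invented */

typedef union {
    uint64_t bits;                                 /* the whole 64-bit cell */
    struct { uint32_t payload; uint32_t tag; } s;  /* little-endian view    */
} Value;

static Value box_int32(int32_t i) {
    Value v;
    v.s.payload = (uint32_t)i;  /* data in the low 32 bits   */
    v.s.tag = TAG_INT32;        /* type tag in the high bits */
    return v;
}

static int is_int32(Value v) { return v.s.tag == TAG_INT32; }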

With the inline cache Repatcher in place, we were once again able to move all the platform-specific code out of the compiler and into a single, isolated part of the code base, hidden behind a common interface.

High level block diagram of the JM compiler with the inline cache repatcher in place.

Routines like NunboxAssembler::emitTypeGuard, which knows how to emit a type guard regardless of the platform, are paired with routines like ICRepatcher::patchTypeGuard(newType), which knows how to patch a type guard regardless of platform. Similarly, NunboxAssembler::loadObjectProperty has an ICRepatcher::patchObjectPropertyLoad counterpart. The constructs that are generated by the NunboxAssembler are properly patched by the corresponding ICRepatcher method on a miss. It's all quite zen.
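
In miniature, the pairing discipline looks like this — reusing the PatchableGuard sketch from above, with names modeled on (but not identical to) the real methods — where every emit routine hands back the location its patch twin will need:

/* Emit side (NunboxAssembler analog): emit the guard, remember
   where its immediate landed. */
PatchableGuard emit_type_guard(uint8_t *code, uint32_t type) {
    /* ... emit "cmp tag, <type>; jne slow_path" into code ... */
    PatchableGuard g = { code, 2 /* made-up immediate offset */ };
    (void)type;
    return g;
}

/* Patch side (ICRepatcher analog): fix the guard up on a miss. */
void patch_type_guard(PatchableGuard *g, uint32_t new_type) {
    repatch_imm32(g, new_type);
}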

Frog ARMs

On real devices running the Fennec betas, we've seen marked improvements since Beta 3. [¶] Most notably, we've leapfrogged the stock Android 2.2 browser on the V8-V5 benchmark on both the Galaxy S and the Nexus One. Pretty graphs courtesy of Mark Finkle.

SunSpider, V8-V5, and Kraken performance comparisons.

ARMn't you glad I didn't say banana?

Since I've run out of remotely-acceptable ARM malapropisms, these topics will be left to further discussion. Feel free to comment on anything that deserves further clarification!

Footnotes

[*]

For example, if you always emit a simple mov from a 32-bit register to a 32-bit register, that has a known constant length. The "variable width" part of "variable width instruction set" refers to the fact that different instructions do not generally take the same number of bytes. It does not mean that the encoding of a given instruction (like mov) with particular operands (like two 32-bit registers) is totally variable.

[†]

The team also believes that further experimentation with a 128-bit value representation for 64-bit systems could yield positive results.

[‡]

FD&C Red No. 40, to be precise.

[§]

"Nunbox" is a play on the term NaN-boxing. We have no idea how Luke comes up with these names, but we hope he never stops.

[¶]

On the fancy Tegra 2 board I was developing on, running the SunSpider harness on the JavaScript shell with methodjit-only, this work netted us a whopping 230% speedup on the V8-V4 benchmark and a 15% speedup on SunSpider 0.9.1.

Ship, or don't die!

Perhaps a corollary is, "Release early, or release toxin."

I admit I'm a bit of a Seth Godin fanboy — he's driven, omits needless words, and gets things done. His blog rarely has an unread count in my feed reader. At the same time, when you look up to someone, you can't help but expose some vulnerability.

One of his latest kick-ass entries, What did you ship in 2010?, put me in a total funk. I shipped a modest set of things this year, by which I mean that I found my list unimpressive. Maybe it was too short, maybe it wasn't innovative/creative enough, or maybe I was just being a negative Nancy.

In any case, Nancy found... er, I found the list-writing experience incredibly disappointing. [*] However, after some thought, I realized what I would probably say to Seth if we had a chat about it over coffee at Red Rock: I don't think I should really care.

Why? Because I'm probably not going to die tomorrow.

Kindergartener existentialism

A fun-size bit of existentialism is that human essence isn't fully realized until you die. When you die, your whole lifeline has played out and your effect on the world is fundamentally complete. To use a catchy existentialist marketing slogan, human existence precedes essence.

In a related vein, kindergarteners don't try to get their macaroni pictures displayed in art museums. When you're new to a scene and acquiring pre-requisite experience, there's no need to subject the rest of the world to your crap: in the common case, there's plenty more time to cultivate your essence and make your mark on the world. In fact, experimenting, throwing your crap away and moving on may be a much better use of your time than trying to ship something naïve or artless. [†]

My parents wouldn't want to put my broken patches up on their refrigerators. Even if they did, they don't have those kinds of refrigerators that magnets can stick to.

Shipping in potentia

Like most people who suffer from over-achievement syndrome, I have fever dreams of instantaneously becoming an expert in every piece of tech I touch, innovation dripping from my fingertips as I puke rainbows and such. Discovering that talent and perseverance have limits is always a cruel come-down.

Perhaps because of these delusions, I initially found it hard to grasp my most important accomplishment of this past year: getting to know various aspects of a state-of-the-art, production, multi-platform language design/implementation and the surrounding processes and tech. That's not shipping! It is, however, necessary experience to ship higher-impact (and perhaps daydream-worthy) tech down the road.

Realistically, there are a number of other reasons to feel accomplished. When I left my last gig slightly under a year ago, not a single product I had written code for had shipped. (Although I'm totally rooting for one that was recently announced!) Now, every patch I write is put to the test in a development channel with millions of active daily users. I'm constantly and (relatively) shamelessly absorbing information from a team of brilliant and down-to-earth developers, my mentor Luke Wagner in particular. My scrappy throw-away side-projects keep me thinking creatively and questioning the status quo.

In all, this year was incredibly enriching.

The JS engine is more comfortable ground with each passing day. I've got the drive to give back important and innovative things. My existence precedes my essence.

I'm optimistic about the list for 2011.

Footnotes

[*]

Whatever the size of my contribution, I'm honored to say that my list for 2010 included, "Help to ship a working and efficient JägerMonkey implementation." Effin' a.

[†]

Of course, one has to be somewhat cautiously introspective — Seth also warns that continued concerns over naïvete/perfection are a natural result of a fearful mentality that he refers to as the "lizard brain".