February 27, 2012

Paradox of the generalist

Classic management advice is to build a republic: each team member specializes in what they're good at. It just makes sense.

You nurture existing talents in an attempt to ensure personal growth; simultaneously, you fill niches that need filling, constructively combine strengths, and orchestrate sufficient overlap in order to wind up with a functioning, durable, kick-ass machine of a team. A place for everyone, everyone in their place, and badassery ensues! (So the old saying goes...)

But what if, instead, you could simultaneously fork off N teams — one for every team member — and make that team member simultaneously responsible for everything? What would happen to the personal knowledge, growth rate, and impact of each member?

Let's take it one step further: imagine you're that team member. All of a sudden it sounds terrifying, right? If you don't know it, nobody does. If you don't do it, nobody will. If you don't research it, you'll have no idea what it's about. If you don't network, no contacts are made. If you don't ship it, you know it will never change the firm/industry/world.

So, you think like you've been trained to think: you enumerate the possible results. What could happen? Maybe you'd crumble under the pressure. Maybe you wouldn't be able to find your calling because you're glossing over the details that make you an artisan. Maybe you'd look like a fool. Maybe you would ship totally uninteresting crap that's all been done before.

But, then again, maybe you would grow like you've never grown before, learn things that you never had the rational imperative to learn, talk to interesting people you would have never talked to, ship a product that moves an industry, and blow the fucking lid off of a whole can of worms.

And so we arrive at one tautological cliché that I actually agree with: you never know until you try. And, if you choose wisely, you'll probably have a damn good time doing it.

At the least, by definition, you'll learn something you couldn't have learned by specializing.

Accomplish your new year's resolution of being more badass

I know what you're going through — I've done it all before.

The market is teeming with products that purport to help you meet your badassery quota.

First you do the shakes. Then, you go with the bars that say they're infused with the good stuff, but just seem to leave a slightly corrugated taste in your mouth. Finally, you're droppin' hard-earned dinero on a facility that you don't need or a professional badassery trainer whose appointments you desperately wish you could cancel.

But I'm here to tell you that you don't need to shell out cash-money to become more badass, my friends. Not anymore, thanks to the beauty of open source, the ES6 plans wiki page, and our delightful SpiderMonkey technical staff who are standing by to receive your calls for mentorship.

Allow me to explain.

Badass begets badass

You may have seen Badass JavaScript: self-described as "A showcase of awesome JavaScript code that pushes the boundaries of what's possible on the web." Check out their badass year in review if you haven't. (Some of the stuff that the interwebs has done with JavaScript even has @NeckbeardHacker envious.)

It probably won't surprise you, but do you know what those folks love? JavaScript evolution. There's nothing quite so refreshing as a fresh-squeezed JS language feature that removes that irritating itching-burning sensation. Sets that do what you mean! String repetition that you can use without copy/pasting from your last project! Inverse hyperbolic sine that you can use for... your... math! (Again, all of this is in the ES6 plans.)

I, for example, have wanted String.prototype.startsWith very badly, to the point that I've started washing people's window panes against their will as they exit highway 101. Around here, odds are that a programmer sees my sign and implements the thing just to stop me from bothering them again. (A little tactic that I call SpiderGuerilla warfare.)

Me, holding my SpiderGuerilla sign.

So what are you waiting for?

I know, you're probably already quite beefcake, but here's my three step plan:

  1. Watch the SpiderMonkey hacking intro.

  2. Pick out a bug from the ES6 plans.

  3. Come talk to great people on irc.mozilla.org in channel #jsapi (for example, cdleary, jorendorff, luke, or whoever else is in there) or comment in the bug — tell them that you're on a quest to become even more badass, describe a bug that you're interested in taking, and give a quick note on what you've done with the engine so far — for example, walking through the video in step 1! We'll find you a mentor who will get you started on the right track.

Don't miss out on this exclusive offer — SpiderMonkey contribution is not sold in stores.

In fact, if you act now, we'll throw in an IonMonkey shirt (or another Firefox shirt of equivalent awesomeness) and publish a blurb about your feature on Mozilla Hacks. Of course, you can also have yourself added to about:credits, provided that's what you're into.

IonMonkey shirt.

This one-of-a-kind offer is too ridonk to pass up. Just listen to this testimonial from one of our badass contributors:

I started contributing to SpiderMonkey and now I can write a JIT compiler from scratch in a matter of days. BEEFCAKE!

@evilpies [Liberally paraphrased]

See you in the tubes!

C++, generic wrappers, and CRTP, oh MI!

Reading some of the original traits papers, I got to the part where they mention the conceptual difficulties inherent in multiple inheritance (MI), one of which is "factoring out generic wrappers". [*]

There's a footnote clarifying that, in practice, languages with MI actually do have other ways of accomplishing the factoring, and in reading that I remembered that the first time I actually understood CRTP (Curiously Recurring Template Pattern) was because I needed some generic wrappers.

Some folks were asking about CRTP on our IRC channel this past week, so I figured I'd share a quick walk-through.

Sometimes you want to be able to shove a method implementation onto a class, given that it has a few building blocks for you to work with. Let's say that there's some generic and formulaic way of making a delicious cake.

class CakeMaker
{
  public:
    Cake makeCake() {
        Ingredients ingredients = fetchIngredients();
        if (ingredients.spoiled())
            return Cake::Fail;

        BatterAndWhatnot batter = mixAndStuff(ingredients);
        Cake result = bake(batter);
        if (result.burned())
            return Cake::Fail;

        return result;
    }
};

This is supposed to be a reusable component for shoving a makeCake method onto another class that already has the necessary methods: fetchIngredients, mixAndStuff, and bake.

Great. So now let's say that we have two different cake makers, CakeFactory and PersonalChef — we want to just implement the necessary methods for CakeMaker in those and somehow shove the makeCake method onto their class definition as well. Maybe we can inherit from CakeMaker or something?

But here's the rub: CakeMaker can't exist. It is an invalid class definition that will not compile, because it refers to methods that it does not have.

cdleary@stretch:~$ g++ -c crtp.cpp
crtp.cpp: In member function ‘Cake CakeMaker::makeCake()’:
crtp.cpp:17:56: error: ‘fetchIngredients’ was not declared in this scope
crtp.cpp:21:62: error: ‘mixAndStuff’ was not declared in this scope
crtp.cpp:22:38: error: ‘bake’ was not declared in this scope

Luckily, C++ templates have this nice lazy instantiation property, where the code goes mostly unchecked by the compiler until you actually try to use it. So, if we just change our definition to:

template <typename T>
class CakeMaker
{
    // ...
};

GCC will accept it if we ask it to shut up a little bit (with -fpermissive), because we're thinking.

So now we take a look at our close friend, PersonalChef:

class PersonalChef
{
    Ingredients fetchIngredients();
    BatterAndWhatnot mixAndStuff(Ingredients);
    Cake bake(BatterAndWhatnot);
};

We want to shove CakeMaker's makeCake method onto his/her class definition. We could inherit from CakeMaker and just pass it an arbitrary type T, like so:

class PersonalChef : public CakeMaker<int>
{
    // ...
};

But we need a way to wire up the methods that CakeMaker needs to the methods that PersonalChef actually has. And this is where we take the final step — via a stroke of intuition, let's pass in the type that actually has the methods on it, and use that type to refer to the method implementations within CakeMaker:

template <class Wrapped>
class CakeMaker
{
  public:
    Cake makeCake() {
        Wrapped *self = static_cast<Wrapped *>(this);
        Ingredients ingredients = self->fetchIngredients();
        if (ingredients.spoiled())
            return Cake::Fail;

        BatterAndWhatnot batter = self->mixAndStuff(ingredients);
        Cake result = self->bake(batter);
        if (result.burned())
            return Cake::Fail;

        return result;
    }
};

class PersonalChef : public CakeMaker<PersonalChef>
{
    Ingredients fetchIngredients();
    BatterAndWhatnot mixAndStuff(Ingredients);
    Cake bake(BatterAndWhatnot);

    friend class CakeMaker<PersonalChef>;
};

int main()
{
    PersonalChef chef;
    chef.makeCake();
    return 0;
}

Bam! Now it compiles normally. CakeMaker is given PersonalChef as its template type argument, and it converts its this pointer to the PersonalChef type (which is valid in this case, since PersonalChef is a CakeMaker<PersonalChef>), a type that really does implement the required methods!

This can also be used to enforce minimum interface requirements at compile time (as in the cross-platform macro assembler) without the use of virtual functions, which have a tendency to thwart inlining optimization.
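
For instance, here's a hypothetical second client of the mixin (CakeFactory is made up for illustration, reusing the types from the walk-through above). If it forgot to provide one of the three building blocks, the compiler would complain the moment makeCake got instantiated for it, no vtables required:

// Hypothetical second client of the CakeMaker mixin: it supplies the three
// building-block methods and inherits makeCake() for free. If one of them
// (say, bake) were missing, the compiler would error out as soon as
// CakeMaker<CakeFactory>::makeCake() got instantiated, with no virtual
// functions involved.
class CakeFactory : public CakeMaker<CakeFactory>
{
    Ingredients fetchIngredients();
    BatterAndWhatnot mixAndStuff(Ingredients);
    Cake bake(BatterAndWhatnot);

    friend class CakeMaker<CakeFactory>;
};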

Fun fact: it looks like we have about 90 virtual function declarations in the 190k lines of engine-related C/C++ code that cloc tells me are in the js/src directory.

Footnotes

[*]

Fun concept from the papers: the conceptual issue with MI is that classes are overloaded in their purpose: they are intended to serve both as units of code (implementation) reuse and for instantiation of actual objects.

Lively assertions

Recently, "another" discussion about fatal assertions has cropped up in the Mozilla community. Luckily for me, I've missed all of the other discussions, so this is the one where I get to throw in my two bits.

Effectively, I only work on the JS engine, and the JS engine only has fatal assertions. This approach works for the JS team, and I can take an insider's guess as to why.

What's a fatal assertion?

In Mozilla, we have two relevant build modes: debug and no-debug.

A fatal assertion means that, when I write JS_ASSERT(someCondition), if someCondition doesn't hold, we call abort in debug build mode. As a result, the code which follows the assertion may legitimately assume that someCondition holds. You will never see something like this in the JS engine:

{
    JS_ASSERT(0 <= offset && offset < size);
    if (0 <= offset && offset < size) // Bad! Already enforced!
        offset_ = offset;
}

The interesting thing is that, in no-debug mode, we will not call abort. We eliminate the assertion condition test entirely. This means that, in production, the code which follows the assertion assumes that someCondition holds, and there's nothing checking that to be the case. [*]
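
To make that concrete, here's a minimal sketch of the general shape of such a macro. This is not SpiderMonkey's actual JS_ASSERT definition, just the idea:

#include <cstdio>
#include <cstdlib>

// Sketch only: in debug builds a failed condition reports and aborts; in
// non-debug builds the check vanishes and the condition is never evaluated.
#ifdef DEBUG
#  define MY_FATAL_ASSERT(cond)                                           \
      do {                                                                \
          if (!(cond)) {                                                  \
              std::fprintf(stderr, "Assertion failure: %s, at %s:%d\n",   \
                           #cond, __FILE__, __LINE__);                    \
              std::abort();                                               \
          }                                                               \
      } while (0)
#else
#  define MY_FATAL_ASSERT(cond) ((void) 0)
#endif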

Exploding early and often

If a JS-engine hacker assumes someCondition during development, and it turns out that someCondition isn't the case, we'd like to know about it, and we'd like to know about it LOUDLY.

Our valiant security team runs fuzz testing against the JS engine continuously, and hitting any one of these fatal assertions causes an abort. When you know that there is input that causes an abort in debug mode, you have a few potential resolutions.

But I think the real key to this whole process is simple: if things are exploding, a member of the bomb squad will show up and come to some resolution. Fatal assertions force action in a way that logs will not. You must (at least cursorily) investigate any one of these assertions as though it were in the most severe category, and some form of resolution must be timely in order to unblock fuzzers and other developers.

Everything that the hacker feels can and should be asserted is being asserted in a way that's impossible to ignore. Invariants present in the code base are reflected by the fatal assertions and, once they've proven themselves by running the regression/fuzzer/web gamut, can be depended upon — they certainly come to reinforce and strengthen each other over time.

Footnotes

[*]

We do have mechanisms that hackers can use for further checking, however. If crash reports indicate that some assertions may be suspect in production environments, we have a JS_OPT_ASSERT for doing diagnostics in our pre-release distribution channels. Since the most reliable information in a crash report tends to be the line number that you crashed on, fatal non-debug assertions are a very useful capability.

String representation in SpiderMonkey

I'm back from holiday break and I need to limber up my tech writing a bit. Reason 1337 of my ever-growing compendium, Nerdy Reasons to Love the Internet, is that there are always interesting discussions going on. [*] I came across Never create Ruby strings longer than 23 characters the other day, and despite the link-bait title, there's a nice discussion of string representation within MRI (a Ruby VM).

My recap will be somewhat abbreviated, since I've only given myself a chunk of the morning to write this, so feel free to ask for clarification / follow up in the comments.

Basic language overview

At the language level JavaScript strings are pretty easy to understand. They are immutable, same as in Python:

>>> foo = 'abc'
>>> foo[2] = 'd'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
js> options('strict')
""
js> foo = 'abc'
"abc"
js> foo[2] = 'd'
typein:3: strict warning: foo[2] is read-only
"d"

But you can (without mutating any of the original values) compare them for equality, concat them, slice them, regexp replace/split/match on them, trim whitespace from them, slap them to chop vegetables, and so forth. (See the MDN docs for String.prototype methods.) In the VM, we need to make those operations fast, with an emphasis on the operations that the web uses heavily, which are ideally [†] the ones reflected in benchmarks.

Abstractly

In an abstract sense, a primitive string in SpiderMonkey is a GC cell (i.e. small header that is managed by the garbage collector) that has a length and refers to an array of UCS2 (uniformly 16-bit) characters. [‡]

Recall that, in many dynamic language implementations, type tagging is used in order to represent the actual type of a statically-unknown-typed value at runtime. This generally allows you to work on integers (and, in SpiderMonkey, doubles) without allocating any space on the heap. Primitive strings are very important to distinguish quickly, and they are subtly distinct from (non-primitive) objects, so they have their own type tag in our value representation, as you can see in the following VM function:

/*
 * Convert the given value to a string.  This method includes an inline
 * fast-path for the case where the value is already a string; if the value is
 * known not to be a string, use ToStringSlow instead.
 */
static JS_ALWAYS_INLINE JSString *
ToString(JSContext *cx, const js::Value &v)
{
    if (v.isString())
        return v.toString();
    return ToStringSlow(cx, v);
}
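
For intuition only, a toy tagged-value layout might look something like the following. The real js::Value packs its tag and payload far more cleverly, so treat this purely as a sketch of the idea:

#include <cstdint>

class JSString;
class JSObject;

// Toy tagged value (illustrative, not the engine's actual representation):
// the tag records what the payload means, so "is this a string?" is a cheap
// tag check that never touches the heap.
struct ToyValue
{
    enum Tag { Int32Tag, DoubleTag, StringTag, ObjectTag } tag;
    union {
        int32_t   i32;
        double    dbl;
        JSString *str;
        JSObject *obj;
    } payload;

    bool isString() const { return tag == StringTag; }
    JSString *toString() const { return payload.str; }
};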

Aside

In JavaScript there's an annoying distinction between primitive strings and string objects that you may have seen:

js> foo = new String('abc')
(new String("abc"))
js> foo.substr(0, 2)
"ab"
js> foo[2]
"c"
js> foo.toString()
"abc"

For simplicity and because they're uninteresting, let's pretend those new String things don't exist.

Atoms

The simplest string form to describe is called an "atom", which is somewhat similar to an interned string in Python. When you write a literal string or identifier in your JavaScript code, SpiderMonkey's parser turns it into one of these atoms.

(function() {
    // Both 'someObject' and 'twenty' are atomized at parse time!
    return someObject['twenty'];
})()

Note that the user has no overt control over which strings get atomized (i.e. there is no intern builtin). Also, there are a bunch of "primordial" atoms that the engine creates when it starts up: things like the empty string, prototype, apply, and so on.

The interesting property of atoms is that any two atoms can be compared in O(1) time (via pointer comparison). Some work is required on behalf of the runtime to guarantee that property.
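
A toy sketch of that work, assuming a simple get-or-create lookup keyed on the characters (nothing like the engine's real atom tables), might look like:

#include <memory>
#include <string>
#include <unordered_map>

// Illustrative atomization table: equal character sequences always map to
// the same canonical string object, so atom equality is pointer equality.
class ToyAtomTable
{
  public:
    const std::string *atomize(const std::string &chars) {
        auto it = atoms_.find(chars);
        if (it != atoms_.end())
            return it->second.get();                      // Existing atom.
        auto atom = std::make_unique<std::string>(chars); // Create a new one.
        const std::string *result = atom.get();
        atoms_.emplace(chars, std::move(atom));
        return result;
    }

  private:
    std::unordered_map<std::string, std::unique_ptr<std::string>> atoms_;
};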

To get an atom within the VM, you have to say, "Hey SpiderMonkey runtime, atomize these characters for me!" In the general case the runtime then does a classic "get or create" via a hash table lookup: it determines whether or not those characters have an existing atom and, if not, creates one. The atomized primitive string that you get back always has its characters contiguous in memory — a property which is interesting by contrast to...

Ropes

Let's say that you had immutable strings, like in JavaScript, and you had three large books already in string form: let's call them AoCP I, II, and III. Then, some jerk thinks it would be funny to take the first third of the first book, the second third of the second book, and the third third of the third book, and splice them together into a single string.

What's the simplest thing that could possibly work? Let's say that each book is 8MiB long. You could allocate a new, 8MiB array of characters and memcpy the appropriate characters from each string into the resulting buffer, but now you've added 33% memory overhead and wasted quite a few cycles.

A related issue is efficient appending and prepending. Let's say you have a program that does something like:

var resultString = '';

function onMoreText(text) {
    // If the text starts with a letter in the lower part of the alphabet,
    // put it at the beginning; otherwise, put it at the end.
    if (text[0] < 'l')
        resultString = text + resultString;
    else
        resultString = resultString + text;
};

If you did the naive "new string and memcpy" for all of the appends and prepends, you'd end up creating a lot of useless garbage inside the VM. The Python community has the conventional wisdom that you should build up a container (like a deque) and join on it, but it's difficult to hold the entire ecosystem of web programmers to such standards.

In the SpiderMonkey VM, the general solution to problems like these is to build up a tree-like data structure that represents the sequence of immutable substrings and collapse that data structure only when necessary. Say that you write this:

(function weirdJoin(a, b, c, d) {
    var lhs = a + b;
    var rhs = c + d;
    var result = lhs + rhs;
    return result;
})('I', 'love', 'SpiderMonkey', '!');

The concatenation is performed lazily by using a tree-like data structure (actually a DAG, since the same string cell can appear in the structure at multiple points) that we call a rope. Say that all of the arguments are different atoms — the resulting rope would look like:

SpiderMonkey rope concatenation example.

Since strings are immutable at the language level, cycles can never form. When the character array is requested within the engine, a binary tree traversal is performed to flatten the constituent strings' characters into a single, newly-allocated buffer. Note that, when the rope is not flattened, the characters which constitute the overall string are not in a single contiguous region of memory — they're spread across several buffers!
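
If you want to picture it in code, here's a toy rope node with an on-demand flatten. This isn't the engine's actual rope representation, just the shape of the idea:

#include <string>

// Toy rope (illustrative only): interior nodes remember their two halves,
// leaves hold characters. Concatenation allocates a node but copies nothing;
// flatten() does the single big copy on demand.
struct ToyRopeNode
{
    const ToyRopeNode *left = nullptr;   // Non-null only for interior nodes.
    const ToyRopeNode *right = nullptr;
    std::string leafChars;               // Used only by leaf nodes.

    void appendTo(std::string &out) const {
        if (left) {                      // Interior node: visit both halves.
            left->appendTo(out);
            right->appendTo(out);
        } else {
            out += leafChars;            // Leaf node: copy characters once.
        }
    }

    std::string flatten() const {
        std::string out;
        appendTo(out);
        return out;
    }
};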

Dependent strings

How about when you do superHugeString.substr(4096, 16384)? Naively, you need to copy the characters in that range into a new string.

However, in SpiderMonkey there's also a concept of dependent strings, which simply reuse part of an existing string's character array. In a very simple fashion, the dependent string keeps the referred-to string alive so that it can keep using the characters in that string's buffer.
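
Roughly, a dependent string is just a (base, pointer, length) triple. The sketch below is illustrative, not the engine's real layout:

#include <cstddef>

struct ToyLinearString;              // Some string with contiguous characters.

// Toy dependent string (illustrative only): rather than copying the requested
// range, it points into the base string's buffer and keeps the base alive so
// those characters cannot be collected out from under it.
struct ToyDependentString
{
    const ToyLinearString *base;     // Referred-to string; owns the characters.
    const char16_t *chars;           // Points into the base's character buffer.
    size_t length;                   // Number of characters in the substring.
};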

Fancier and fancier!

Really small strings are a fairly common case: they are used for things like array property indices and single characters of strings (recall that, in JavaScript, all object properties are named by strings, unlike languages such as Python, which use arbitrary hashables). To optimize for this case, we have strings with characters embedded directly in their GC cell header, avoiding a heap-allocated character buffer. [§] We also pre-initialize many of these atoms (strings of length less than three, and integers up to 256) when the runtime starts up to bypass the typical hash table lookup overhead.
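
As a sketch (with made-up sizes, not the real cell layout), the inline-characters trick looks something like this:

#include <cstdint>

// Toy string cell (illustrative only): short strings keep their characters
// inside the fixed-size cell itself, so no separate heap buffer is needed;
// longer strings fall back to pointing at an external character buffer.
struct ToyStringCell
{
    static const uint32_t INLINE_CAPACITY = 7;   // Made-up number.

    uint32_t length;
    union {
        const char16_t *external;                // Long string: heap buffer.
        char16_t inlineChars[INLINE_CAPACITY];   // Short string: in the cell.
    };

    bool hasInlineChars() const { return length <= INLINE_CAPACITY; }
};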

I'm out of time, but I hope this gives some insight into the good number of tricks that get played to make common uses of JavaScript strings fast within the SpiderMonkey VM. For you curious types, there's lots more information in the code!

Footnotes

[*]

Of course, reason 1336 is that there are so many lame 1337 references that it's actually still funny.

[†]

Note the emphasis on ideally. Ideally, it should make you chuckle.

[‡]

Fun aside: the maximum length of a string in our engine is currently bounded to 28 bits.

[§]

Our garbage collector is not currently capable of creating variable-sized allocations — we only have fixed-size header classes.