January 6, 2012

String representation in SpiderMonkey

I'm back from holiday break and I need to limber up my tech writing a bit. Reason 1337 of my ever-growing compendium, Nerdy Reasons to Love the Internet, is that there are always interesting discussions going on. [*] I came across Never create Ruby strings longer than 23 characters the other day, and despite the link-bait title, there's a nice discussion of string representation within MRI (a Ruby VM).

My recap will be somewhat abbreviated, since I've only given myself a chunk of the morning to write this, so feel free to ask for clarification / follow up in the comments.

Basic language overview

At the language level JavaScript strings are pretty easy to understand. They are immutable, same as in Python:

>>> foo = 'abc'
>>> foo[2] = 'd'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
js> options('strict')
""
js> foo = 'abc'
"abc"
js> foo[2] = 'd'
typein:3: strict warning: foo[2] is read-only
"d"

But you can (without mutating any of the original values) compare them for equality, concat them, slice them, regexp replace/split/match on them, trim whitespace from them, slap them to chop vegetables, and so forth. (See the MDN docs for String.prototype methods.) In the VM, we need to make those operations fast, with an emphasis on the operations that the web uses heavily, which are ideally [†] the ones reflected in benchmarks.

Abstractly

In an abstract sense, a primitive string in SpiderMonkey is a GC cell (i.e. small header that is managed by the garbage collector) that has a length and refers to an array of UCS2 (uniformly 16-bit) characters. [‡]
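As a minimal sketch (hypothetical types, not SpiderMonkey's actual class layout), the conceptual shape is:

#include <cstdint>

// Conceptual sketch of a primitive string cell -- not SpiderMonkey's
// real layout: a small GC-managed header holding a length and a
// pointer to UCS2 (uniformly 16-bit) character data.
struct AbstractString {
    uint32_t length;        // number of 16-bit code units
    const uint16_t *chars;  // UCS2 character data
};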

Recall that, in many dynamic language implementations, type tagging is used in order to represent the actual type of a statically-unknown-typed value at runtime. This generally allows you to work on integers (and, in SpiderMonkey, doubles) without allocating any space on the heap. Primitive strings are very important to distinguish quickly, and they are subtly distinct from (non-primitive) objects, so they have their own type tag in our value representation, as you can see in the following VM function:

/*
 * Convert the given value to a string.  This method includes an inline
 * fast-path for the case where the value is already a string; if the value is
 * known not to be a string, use ToStringSlow instead.
 */
static JS_ALWAYS_INLINE JSString *
ToString(JSContext *cx, const js::Value &v)
{
    if (v.isString())
        return v.toString();
    return ToStringSlow(cx, v);
}

Aside

In JavaScript there's an annoying distinction between primitive strings and string objects that you may have seen:

js> foo = new String('abc')
(new String("abc"))
js> foo.substr(0, 2)
"ab"
js> foo[2]
"c"
js> foo.toString()
"abc"

For simplicity and because they're uninteresting, let's pretend those new String things don't exist.

Atoms

The simplest string form to describe is called an "atom", which is somewhat similar to an interned string in Python. When you write a literal string or identifier in your JavaScript code, SpiderMonkey's parser turns it into one of these atoms.

(function() {
    // Both 'someObject' and 'twenty' are atomized at parse time!
    return someObject['twenty'];
})()

Note that the user has no overt control over which strings get atomized (i.e. there is no intern builtin). Also, there are a bunch of "primordial" atoms that the engine creates when it starts up: things like the empty string, prototype, apply, and so on.

The interesting property of atoms is that any two atoms can be compared in O(1) time (via pointer comparison). Some work is required on the part of the runtime to guarantee that property.

To get an atom within the VM, you have to say, "Hey SpiderMonkey runtime, atomize these characters for me!" In the general case the runtime then does a classic "get or create" via a hash table lookup: it determines whether or not those characters have an existing atom and, if not, creates one. The atomized primitive string that you get back always has its characters contiguous in memory — a property which is interesting by contrast to...
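In sketch form (all names here are hypothetical, and the real atom table is more involved), the get-or-create and the O(1) comparison it enables look something like:

#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical sketch of atomization: a runtime-wide table does the
// classic get-or-create. Because each distinct character sequence
// maps to exactly one atom, atom equality reduces to pointer equality.
struct Atom {
    std::u16string chars;  // atom characters, contiguous in memory
};

static std::unordered_map<std::u16string, std::unique_ptr<Atom>> atomTable;

// "Hey SpiderMonkey runtime, atomize these characters for me!"
Atom *Atomize(const std::u16string &chars) {
    std::unique_ptr<Atom> &slot = atomTable[chars];
    if (!slot)
        slot.reset(new Atom{chars});  // create only on a miss
    return slot.get();
}

// O(1) comparison: two atoms are equal iff they are the same cell.
bool AtomsEqual(const Atom *a, const Atom *b) {
    return a == b;
}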

Ropes

Let's say that you had immutable strings, like in JavaScript, and you had three large books already in string form: let's call them AoCP I, II, and III. Then, some jerk thinks it would be funny to take the first third of the first book, the second third of the second book, and the third third of the third book, and splice them together into a single string.

What's the simplest thing that could possibly work? Let's say that each book is 8MiB long. You could allocate a new, 8MiB array of characters and memcpy the appropriate characters from each string into the resulting buffer, but now you've added 33% memory overhead and wasted quite a few cycles.

A related issue is efficient appending and prepending. Let's say you have a program that does something like:

var resultString = '';

function onMoreText(text) {
    // If the text starts with a letter in the lower part of the alphabet,
    // put it at the beginning; otherwise, put it at the end.
    if (text[0] < 'l')
        resultString = text + resultString;
    else
        resultString = resultString + text;
}

If you did the naive "new string and memcpy" for all of the appends and prepends, you'd end up creating a lot of useless garbage inside the VM. The Python community has the conventional wisdom that you should build up a container (like a deque) and join on it, but it's difficult to hold the entire ecosystem of web programmers to such standards.

In the SpiderMonkey VM, the general solution to problems like these is to build up a tree-like data structure that represents the sequence of immutable substrings, and to collapse that data structure only when necessary. Say that you write this:

(function weirdJoin(a, b, c, d) {
    var lhs = a + b;
    var rhs = c + d;
    var result = lhs + rhs;
    return result;
})('I', 'love', 'SpiderMonkey', '!');

The concatenation is performed lazily by using a tree-like data structure (actually a DAG, since the same string cell can appear in the structure at multiple points) that we call a rope. Say that all of the arguments are different atoms — the resulting rope would look like:

[Figure: SpiderMonkey rope concatenation example.]

Since strings are immutable at the language level, cycles can never form. When the character array is requested within the engine, a binary tree traversal is performed to flatten the constituent strings' characters into a single, newly-allocated buffer. Note that, when the rope is not flattened, the characters which constitute the overall string are not in a single contiguous region of memory — they're spread across several buffers!
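A rough sketch of the idea (hypothetical types, not the engine's actual rope class):

#include <string>

// Rough sketch of a rope: a node is either a leaf owning real
// characters or a concatenation node referring to two children.
// Sharing children between nodes is what makes the structure a DAG.
struct Rope {
    bool isLeaf;
    std::u16string leafChars;  // used when isLeaf
    const Rope *left;          // used when !isLeaf
    const Rope *right;         // used when !isLeaf
};

// Flattening: a depth-first traversal copies every leaf's characters,
// left to right, into one newly allocated contiguous buffer.
void Flatten(const Rope *rope, std::u16string &out) {
    if (rope->isLeaf) {
        out += rope->leafChars;
        return;
    }
    Flatten(rope->left, out);
    Flatten(rope->right, out);
}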

Dependent strings

How about when you do superHugeString.substr(4096, 16384)? Naively, you need to copy the characters in that range into a new string.

However, in SpiderMonkey there's also a concept of dependent strings, which simply reuse part of an existing string's character buffer. The dependent string keeps the referred-to string alive in order to reuse the characters in that string's buffer.
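Sketched out (hypothetical types again), a dependent string is just a base reference plus a view into its buffer:

#include <cstdint>

// Sketch: a dependent string owns no character buffer of its own --
// it aliases a slice of the base string's buffer, and its reference
// to the base keeps that buffer alive across GC.
struct BaseString {
    uint32_t length;
    const uint16_t *chars;
};

struct DependentString {
    const BaseString *base;  // referred-to string, kept alive
    const uint16_t *chars;   // == base->chars + offset
    uint32_t length;
};

// superHugeString.substr(4096, 16384) can then alias, not copy:
DependentString MakeDependent(const BaseString *base,
                              uint32_t offset, uint32_t length) {
    DependentString s = { base, base->chars + offset, length };
    return s;
}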

Fancier and fancier!

Really small strings are a fairly common case: they are used for things like array property indices and single characters of strings — recall that, in JavaScript, all object properties are named by strings, unlike in languages like Python, which use arbitrary hashables. To optimize for this case, we have strings whose characters are embedded directly in their GC cell header, avoiding a heap-allocated character buffer. [§] We also pre-initialize many of these atoms (strings shorter than three characters, and integers up to 256) when the runtime starts up to bypass the typical hash table lookup overhead.
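Conceptually (the capacity below is hypothetical — the real limit depends on the fixed cell size), an inline string looks like:

#include <cstdint>

// Sketch of an inline string: for tiny strings the characters live
// directly in the fixed-size GC cell, so no separate character
// buffer needs to be heap-allocated at all.
struct InlineString {
    uint32_t length;
    uint16_t inlineChars[8];  // capacity chosen to fit the cell
};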

I'm out of time, but I hope this gives some insight into the good number of tricks that are played to make common uses of JavaScript strings fast within the SpiderMonkey VM. For you curious types, there's lots more information in the code!

Footnotes

[*]

Of course, reason 1336 is that there are so many lame 1337 references that it's actually still funny.

[†]

Note the emphasis on ideally. Ideally, it should make you chuckle.

[‡]

Fun aside: the maximum length of a string in our engine is currently bounded to 28 bits.

[§]

Our garbage collector is not currently capable of creating variable-sized allocations — we only have fixed-size header classes.

Coding style as a feature of language design

roc recently posted a thought-provoking entry titled, "Coding Style as a Failure of Language Design", in which he states:

Languages already make rules about syntax that are somewhat arbitrary. Projects imposing additional syntax restrictions indicate that the language did not constrain the syntax enough; if the language syntax was sufficiently constrained, projects would not feel the need to do it. Syntax would be uniform within and across projects, and developers would not need to learn multiple variants of the same language.

I totally agree with roc's point that there is overhead in learning-and-conforming-to local style guidelines. I also agree that this overhead is unnecessary and that language implementers should find ways to eliminate it; however, I think that imposing additional arbitrary constraints on the syntax is heading in the wrong direction.

Your language's execution engine [*] already has a method of normalizing crazy styles: it forms an abstract syntax tree. Before the abstract syntax tree (AST) is mutated [†] it is in perfect correspondence with the original source text, modulo the infinite number of possible formatting preferences. This is the necessary set of constraints on the syntax that can actually result in your program being executed as it is written. [‡]

So, why don't we just lug that thing around instead of the source text itself?

The dream

The feature that languages should offer is a mux/demux service: mux an infinite number of formatting preferences into an AST (via a traditional parser); demux the AST into source text via an AST-decompiler, parameterized by an arbitrarily large set of formatting options. Language implementations could ship with a pair of standalone binaries. Seriously, the reference language implementation should understand its own formatting parameters at least as well as Eclipse does. [§]
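The demux half is conceptually just a pretty-printer over the AST, parameterized by formatting options. Here's a toy sketch (the AST node and option set are hypothetical, of course):

#include <iostream>
#include <string>
#include <vector>

// Toy sketch of the "demux" direction: AST -> source text,
// parameterized by formatting preferences.
struct FormatOptions {
    bool spaceBeforeParen;
};

struct CallNode {
    std::string callee;
    std::vector<std::string> args;
};

std::string Demux(const CallNode &call, const FormatOptions &opts) {
    std::string out = call.callee;
    if (opts.spaceBeforeParen)
        out += ' ';
    out += '(';
    for (size_t i = 0; i < call.args.size(); ++i) {
        if (i != 0)
            out += ", ";
        out += call.args[i];
    }
    out += ')';
    return out;
}

int main() {
    CallNode call = { "weirdJoin", { "a", "b" } };
    // Same AST, two personalized renderings:
    std::cout << Demux(call, FormatOptions{ false }) << "\n";  // weirdJoin(a, b)
    std::cout << Demux(call, FormatOptions{ true }) << "\n";   // weirdJoin (a, b)
    return 0;
}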

Once you have the demux tool, you run it on your AST files as a post-checkout hook in your revision control system for instant style personalization. If the execution engine accepted an AST directly as input in lieu of source text, you would only need to demux the files you planned to work on — that could even be an optimization.

Different execution engines are likely to use different ASTs, but there should be little problem with composability: checked-in AST goes through standalone demux with an arbitrary set of preferences, then through the alternate compiler's mux. So long as the engines have the same language grammar for the source text, everybody's happy, and you don't have to waste time writing silly AST-to-AST-prime transforms.

In this model, linters are just composable AST observers/transforms that have no ordering dependencies. You could even offer a service for simple grammatical extensions without going so far as language level support. Want a block-end delimiter in the Python code you look at? [¶] Why not, just use a transform to rip it out before it leaves the front-end of the execution engine.

Reality

Of course, the set of languages we know and love has some overlap with the set of languages that totally suck to parse, whether due to preprocessors or context sensitivity or the desire to parse poems, but I would bet good money that there are solutions for such languages. In any case, the symmetric difference between those two sets could get with it, and new languages would be kind to follow suit. It would certainly be an interesting post-FF4 experiment for SpiderMonkey, as we've got a plan on file to clean up the parser interfaces for an intriguing ECMAScript strawman proposal anywho.

Footnotes

[*]

Interpreter, compiler, translator, whatever.

[†]

To do constant folding or what have you.

[‡]

Oh yeah, and comments. We would have to keep those around too. They're easy enough to throw away during the first pass over the AST.

[§]

Even more ideally, you'd move all of that formatting and autocompletion code out of IDEs and into a language service API.

[¶]

Presumably because you despise all that is good and righteous in the world? ;-)

Eliminating web service dependencies with a language-specific abstraction barrier

Hyperbolic analogy: Saying, "You shouldn't need to wrap the web service interface, because it already provides an API," is like saying, "You shouldn't need different programming languages, because they're all Turing complete."

Web services tend to deliver raw data payloads from a flat interface and thus lack the usability of native language APIs. Inevitably, when you program RPC-like interfaces for no language in particular, you incur incompatibilities with every particular language's best practices, idioms, and data models. [*] The issue of appropriately representing exceptions and/or error codes in RPC-like services is a notorious example of this.

There are additional specification mechanisms like WSDL [†] that allow us to make the payloads more object-like. Additional structure is indicated through the use of user-defined "complex types," but this only gets you part of the way to a usable API for any given language. In Python, it's a lot more sensible to perform an operation through an abstraction like the following:

import getpass

from internal_tracker.service import Comment, InternalTracker

bug_service = InternalTracker(username=getpass.getuser(),
    password=getpass.getpass())
bug = bug_service.get_bug(123456)
bug.actionable.add('Chris Leary')  # may raise ReadOnlyException
comment = Comment(text='Adding self to actionable')
bug.add_comment(comment)
bug.save()  # may raise ServiceWriteException

Than to use an external web service API solution directly (despite using the excellent Suds library):

# Boilerplate
import getpass

import suds.client
import suds.wsse
from suds.wsse import UsernameToken

client = suds.client.Client(wsdl_uri)
security = suds.wsse.Security()
security.tokens.append(UsernameToken(getpass.getuser(),
    getpass.getpass()))
client.set_options(wsse=security)
internal_tracker_service = client.service

# Usage
service_bug = internal_tracker_service.GetBug(123456)
service_bug.actionable += ', Chris Leary'
# Do we check the response for all WebFault exceptions?
# (Do we check for and handle all the possible transport issues?)
internal_tracker_service.UpdateBug(service_bug)
Comment = internal_tracker_service.factory['Comment']
comment = Comment()
comment.BugId = service_bug.Id
comment.Text = 'Adding self to actionable'
# Again, what should we check?
internal_tracker_service.AddComment(comment)

Why is it good to have the layer of indirection?

Lemma 1: The former example actually reads like Python code. It raises problem-domain-relevant exceptions, uses keyword arguments appropriately, follows language naming conventions, and uses sensible language-specific data types that may be poorly represented in the web service. For example, actionable may be a big comma-delimited string according to the service, whereas it should clearly be modeled as a set of (unique) names, using Python's set data type. Another example is BigIntegers being poorly represented as strings in order to keep the API language-neutral.

Lemma 2: The layer represents an extremely maintainable abstraction barrier between the client and the backing service. Should a team using the abstraction decide it's prudent to switch to, say, Bugzilla, I would have no trouble writing a port for the backing service in which all client code would continue to work. Another example is a scenario in which we determine that the transport is unreliable for some reason, so decide all requests should be retried three times instead of one. [‡] How many places will I need to make changes? How many client code bases do I potentially need to keep track of?

Why is it risky to use the web service interface?

If the web service API represents the problem domain correctly with constructs that make sense for your language, it's fine to use directly. (As long as you're confident you won't have transport-layer issues.) If you're near-certain that the backing service will not change, and/or you're willing to risk all the client code that will depend on that API directly being instantaneously broken, it's fine. The trouble occurs when one of these is not the case.

Let's say that the backing service does change to Bugzilla. Chances are that hacking in adapter classes for the new service would be a horrible upgrade experience that entails:

  1. Repeated discovery of leaky abstractions,

  2. Greater propensity to bugs, [§] and

  3. More difficult maintenance going forward.

Client code that is tightly coupled to the service API would force a rewrite in order to avoid these issues.

Pragmatic Programming says to rely on reliable things, which is a rule that any reasonable person will agree with. [¶] The abstraction barrier is reliable in its loose coupling (direct modeling of the problem domain), whereas direct use of the web service API could force a reliance on quirky external service facts, perhaps deep into client code.

Is there room for compromise?

This is the point in the discussion where we think something along the lines of, "Well, I can just fix the quirky things with a bunch of shims between my code and the service itself." At that point, I contend, you're really just implementing a half-baked version of the language-specific API. It's better to make the abstractions appropriate for the target language and problem domain the first time around than to add shims incrementally and hope that client code didn't come to depend on the underlying quirks before you got to them. Heck, if the web service is extremely well suited to your language, you'll end up proxying most of the time anyway, and the development will be relatively effortless. [#]

What about speed of deployment?

If we have language-specific APIs, won't there be additional delay waiting for them to be updated when new capabilities are added to the backing service?

First of all, if the new capability is not within the problem domain of the library, it should be a separate API. This is the single responsibility principle applied to interfaces — you should be programming to an interface abstraction. Just because a backing service has a hodgepodge of responsibilities doesn't mean that our language-specific API should as well. In fact, it probably shouldn't. Let's assume it is in the problem domain.

If the functionality is sane and ready for use in the target language, it should be really simple for the library owner to extend the language-specific API. In fact, if you're using the proxy pattern, you may not have to do anything at all. Let's assume that the functionality is quirky and you're blocked waiting for the library owner to update with the language-specific shim, because it's non-trivial.

Now our solution tends to vary based on the language. Languages like Python have what's known as "gentlemen's privacy", based on the notion of a gentlemen's agreement: privacy constraints are not enforced at compile time or run time, so you can just reach through the abstraction barrier if you believe you know what you're doing. Yes, you're making an informed decision to violate encapsulation. Cases like this are exactly when it comes in handy.

assert not hasattr(bug_service, 'super_new_method_we_need')
# HACK: Violate abstraction -- we need this new capability right now
# and Billy-Bo, the library owner, is swamped!
suds_client = bug_service._suds_client
result = suds_client.service.SuperNewMethodWeNeed()
target_result = de_quirkify(result)

As you can see, we end up implementing the method de_quirkify to de-quirk the quirky web service result into a more language-specific data model — it's bad form to make the code dependent on the web service's quirky output form. We then submit our code for this method to the library owner and suggest that they use it as a basis for their implementation, so that a) they can get it done faster, and b) we can seamlessly factor the hack out.

For privacy-enforcing languages, you would need to expose a public API for getting at the private service, then tell people not to use it unless they know what they're doing. As you can tell, you pretty much wind up with gentlemen's privacy on that interface, anyway.

Footnotes

[*]

In EE we call this kind of phenomenon an impedance mismatch, which results in power loss.

[†]

And many others, with few successful ones.

[‡]

Or maybe we want to switch from HTTP transport to SMTP. Yes, this is actually possible. :-)

[§]

In duplicating exact web service behaviors — you have to be backwards compatible with whichever behaviors the existing client code relies on.

[¶]

It's a syntactic tautology. The caveat is that reasonable people will almost certainly quibble over the classification of what's reliable.

[#]

At least in Python or other languages with a good degree of dynamism, it will.

Bit twiddling: Simple O(1) membership test

Disclaimer

Bit twiddling is fun, and it can make for fast, compact code.

You have to understand, though, that clever tricks without appropriate documentation will make people want to break your face. [*] Always bit bash responsibly: appoint a designated code-reader to make sure you're clear enough, and leave your keys at the door.

The Problem

Let's say you wanted to know whether a number was a valid PCI Express link width in terms of number of lanes. We know that valid widths are x1, x2, x4, x8, x12, x16, or x32, and want to construct a function of the following form:

#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/**
 * :return: Whether the lane count is valid.
 */
bool is_valid_link_width(uint8_t lane_count);

/**
 * Unit test for ``is_valid_link_width``.
 */
int main(int argc, char **argv) {
    assert(!is_valid_link_width(0));
    assert(is_valid_link_width(1));
    assert(is_valid_link_width(2));
    assert(!is_valid_link_width(3));
    assert(is_valid_link_width(32));
    assert(!is_valid_link_width(33));
    assert(!is_valid_link_width(0xff));
    return 0;
}

Note that the uint8_t has a width of exactly 8 bits. [†]

How would you write it?

Less Interesting Solution

If you were thinking switch statement, that will work. You could use a switch statement with intentional fall-throughs and hope that the compiler optimizes a branch table for you. (For values this small and dense it probably will, as mentioned in the referenced article.) If the compiler doesn't write the branch table for you, but instead generates the equivalent of a big if/else if ladder, your solution doesn't satisfy the O(1) constraint: in that case, the worst case control flow hits every rung of the ladder (the else if guards), making it O(n).

bool is_valid_link_width(uint8_t lane_count) {
    switch (lane_count) {
    case 1:
    case 2:
    case 4:
    case 8:
    case 12:
    case 16:
    case 32:
        return true;
    }
    return false;
}

An implementation that I like better, which doesn't put as much faith in the compiler, is as follows:

bool is_valid_link_width(uint8_t lane_count) {
    return 0x100011116ULL & (1ULL << lane_count);
}

How cool is that?

The Neat Trick

The clever insight here is that we can encode all of our target "true" values in binary form, like so:

       32                      16    12    8     4    1
0b__0001__0000__0000__0000__0001__0001__0001__0001__0110

Now, if we take a 1 bit and shift it left by a number of binary slots equal to the lane count, it lines up with the corresponding slot in this long binary number we've constructed. Taking the bitwise-AND of those two values therefore yields a nonzero result exactly when the lane count's slot holds a 1 — that is, exactly when the lane count is valid.

This is exactly what we were looking for.
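Tracing a couple of sample inputs through the expression makes this concrete (values shown in hex):

/*
 * lane_count = 12 (valid):
 *     0x100011116 & (1ULL << 12) == 0x1000  -> nonzero -> true
 * lane_count = 3 (invalid):
 *     0x100011116 & (1ULL << 3)  == 0x0     -> zero    -> false
 */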

This long binary number we've created must be converted from binary into a hexadecimal value, so that we can represent it as an integer literal in our C program. Encoding each binary 4-tuple into hex from right to left, we get the value 0x100011116.

There's an issue with this value, however. Unless we specify a suffix for our integer literal, the compiler may treat the value as having its native word size, [‡] which would cause serious problems. On a system with 16-bit words (think real-mode x86), our value could be truncated to 0x1116, which would only allow lane sizes of 1, 2, 4, 8, and 12 — the allowed values of 16 and 32 would be cut off!

To solve this, as you can see in the function definition, we add the ULL integer suffix, which explicitly marks the integer literal as an unsigned long long. (The long long integer data type was added to the C language in the C99 standard.) This data type is required to be at least 64 bits wide, so it can definitely hold our 33 relevant bits (32 plus the zero bit which is there for the 1ULL << 0 case). The long data type is too small to hold this value, as long can potentially be only 32 bits wide per the C standard (and it is 32 bits wide on most modern machines).

Readability Counts

Note that there's a more readable version of the same trick in the following:

bool is_valid_link_width(uint8_t lane_count) {
    const uint64_t set =
        1ULL << 1
        | 1ULL << 2
        | 1ULL << 4
        | 1ULL << 8
        | 1ULL << 12
        | 1ULL << 16
        | 1ULL << 32;
    return set & (1ULL << lane_count);
}

Here we make the construction of the big integer more explicit and make the code less prone to errors in hand-encoding the binary value into hex. Any compiler worth its salt will constant-fold the const calculation at compile time, so no overhead is incurred by writing it this way.

I demonstrated the other way of doing it first to: a) blow your mind a little bit, and b) demonstrate an idiom you might see in other people's (overly) clever code. Now there's a chance you can recognize and decipher it without adequate documentation. Huzzah!

Footnotes

[*]

A great rift in the universe known as "Other People's Perl" has been the cause of 80% of the computer-engineering face breakage since 1987. Don't let this tragedy happen to you or your beloved programmers.

[†]

I like using fixed-width integer types, especially in describing problems, because they helpfully constrain the possibilities on the input domain. This is even more important for newcomers who are just wrapping their heads around bit twiddling.

[‡]

I can't find standards documents/discussions to support this claim, but it's definitely what I was taught. Can anybody provide evidence to confirm/deny?

Experimenting with C++ inheritance diamonds and method resolution order

It seems like g++ uses the order indicated in the class's base specifier list (the derivation list) over the order indicated in the constructor's member initializer list, as in the example below:

#include <iostream>
#include <string>

using namespace std;

class Animal {
public:
    Animal() {
        cout << "Animal!" << endl;
    }
};

class Man : public virtual Animal {
public:
    Man(string exclamation) {
        cout << "Man " << exclamation << "!" << endl;
    }
};

class Bear : public virtual Animal {
public:
    Bear(string exclamation) {
        cout << "Bear " << exclamation << "!" << endl;
    }
};

class Pig : public virtual Animal {
public:
    Pig(string exclamation) {
        cout << "Pig " << exclamation << "!" << endl;
    }
};

class ManBearPig : public Man, public Bear, public Pig {
public:
    ManBearPig(string exclamation)
        : Pig(exclamation), Bear(exclamation), Man(exclamation)
    {
        cout << "ManBearPig " << exclamation << "!" << endl;
    }
};

int main() {
    ManBearPig mbp("away");
    return 0;
}

cdleary@gamma:~/projects/sandbox/sandbox_cpp$ g++ diamond.cpp && ./a.out
Animal!
Man away!
Bear away!
Pig away!
ManBearPig away!

Note that this experiment is a pretty (very) weak basis for the conclusion — it could be using lexicographic order, order based on the day of the month, or any number of other unlikely heuristics :) A lot more experimentation is necessary before a discernible pattern emerges, but I just felt like messing around.

Edit (07/27/08): Using correct "base specifier list" terminology instead of my made-up "class initializer list" terminology.