December 13, 2012

Systems programming at my alma mater

Bryan also asked me this at NodeConf last year, where I was chatting with him about the then-in-development IonMonkey:

An old e-mail to the Cornell CS faculty:  Have things changed in the decade since?

I remembered my talk with Bryan when I went to recruit there last year and asked the same interview question that he references — except with the pointer uninitialized so candidates would have to enumerate the possibilities — to see what evidence I could collect. My thoughts on the issue haven't really changed since that chat, so I'll just repeat them here.

(And, although I do not speak for my employer, for any programmers back in Ithaca who think systems programming and stuff like Birman's class is cool beans, my team is hiring both full time and interns in the valley, and I would be delighted if you decided to apply.)

My overarching thought: bring the passion

Many of the people I'm really proud that my teams have hired out of undergrad are just "in love" with systems programming, just as a skilled artisan "cares" about their craft. They work on personal projects and steer their trajectory towards it somewhat independent of the curriculum.

Passion seems to be pretty key, along with follow-through and the ability to work well with others, in the people I've thumbs-up'd over the years. Of course I always want people who do well in their more systems-oriented curriculum and live in a solid part of the current-ability curve, but I always have an eye out for the passionately interested ones.

So, I tend to wonder: if an org has a "can systems program" distribution among the candidates, can you predict the existence of the outliers at the career fair from the position of the fat part of that curve?

Anecdotally, two other systems hackers on the JavaScript engine and I came from the same undergrad program, modulo a few years, although we took radically different paths to get to the team. They are among the best and most passionate systems programmers I've ever known, which also pushes me to think passionate interest may be a high-order bit.

Regardless, it's obviously in systems companies' best interest to try to get the most bang per buck on recruiting trips, so you can see how Bryan's point of order is relevant.

My biased take-away from my time there

I graduated less than a decade ago, so I have my own point of reference. From my time there several years ago, I got the feeling that the mentality was:

This didn't come from any kind of authority; it's just putting into words the "this is how things are done around here" understanding I had at the time. All of them seemed reasonable in context, though I didn't think I wanted to head down the path alluded to by those rules of thumb. Of course these were, in the end, just rules of thumb: we still had things like a Linux farm used by some courses.

I feel that the "horrible for teaching" problem extends to other important real-world systems considerations as well: I learned MIPS and Alpha [*], presumably due to their clean RISC heritage, but golly do I ever wish I was taught more about specifics of x86 systems. And POSIX systems. [†]

Of course that kind of thing — picking a "real-world" ISA or compute platform — can be a tricky play for a curriculum: what do you do about the to-be SUN folks? Perhaps you've taught them all this x86-specific nonsense when they only care about SPARC. How many of the "there-be-dragons" lessons from x86 would cross-apply?

There's a balance between trade and fundamentals, and I feel I was often reminded that I was there to cultivate excellent fundamentals which could later be applied appropriately to the trends of industry and academia.

But seriously, it's just writing C...

For my graduating class, CS undergrad didn't really require writing C. The closest you were forced to get was translating C constructs (like loops and function calls) to MIPS and filling in blanks in existing programs. You note the bijection-looking relationship between C and assembly and can pretty much move on.

I tried to steer my coursework to hit as much interesting systems-level programming as possible. To summarize, here's a path to learning a workable amount of systems programming at my school of yore, in hopes it translates to something helpful today:

I'm not a good alum in failing to keep up with the goings-on but, if I had a recommendation based on personal experience, it'd be to do stuff like that. Unfortunately, I've also been at companies where the most basic interview questions are "how does a vtable actually work" or nuances of C++ exceptions, so for some jobs you may want to take an advanced C++ class as well.

Understanding a NULL pointer deref isn't writing C

Eh, it kind of is. On my recruiting trip, if people didn't get my uninitialized pointer dereference question, I would ask them questions about MMUs if they had taken the computer organization class. Some knew how an MMU worked (of course, some more roughly than others), but didn't realize that OSes had a policy of keeping the null page mapping invalid.

So if you understand an MMU, why don't you know what's going to happen in the NULL pointer deref? Because you've never actually written a C program and screwed it up. Or you haven't written enough assembly with pointer manipulation. If you've actually written a Java program and screwed it up you might say NullPointerException, but then you remember there are no exceptions in C, so you have to quickly come up with an answer that fits and say zero.

I think another example might help to illustrate the disconnect: the difference between protected mode and user mode is well understood among people who complete an operating systems course, but the conventions associated with them (something like "tell me about init"), or what a "traditional" physical memory space actually looks like, seem to be out of scope without outside interest.

This kind of interview scenario is usually time-to-fluency sensitive — wrapping your head around modern C and sane manual memory management isn't trivial, so it does require some time and experience. Plus, when you're working regularly with footguns, team members want a basic level of trust in coding capability. It's not that you think the person can't do the job; it's just not the right timing if you need to find somebody who can hit the ground running. Bryan also mentions this in his email.

Thankfully for those of us concerned with the placement of the fat part of the distribution, it sounds like Professor Sirer is saying it's been moving even more in the right direction in the time since I've departed. And, for the big reveal, I did find good systems candidates on my trip, and at the same time avoided freezing to death despite going soft in California all these years.

Brain teaser

I'll round this entry off with a little brain teaser for you systems-minded folks: I contend that the following might not segfault.

// ...

int main() {
    A *a = NULL;
    printf("%d\n", a->integer_member);
    return EXIT_SUCCESS;
}

How many reasons can you enumerate as to why? What if we eliminate the call to the mysterious function?



In an advanced course we had an Alpha 21264 that I came to love deeply.


I'm hoping there's more emphasis on POSIX these days with the mobile growth and Linux/OS X dominance in that space.

ARM chars are unsigned by default

[Latest from the "I can't believe I'm writing a blog entry about this" department, but the context and surrounding discussion is interesting. --Ed]

If you're like me, or one of the other thousands of concerned parents who has borne C code into this cruel, topsy-turvy, and oftentimes undefined world, you read the C standard aloud to your programs each night. It's comforting to know that K&R are out there, somewhere, watching over them, as visions of Duff's Devices dance in their wee little heads.

The shocking truth

In all probability, you're one of today's lucky bunch who find out that the signedness of the char datatype in C is implementation-defined. The implication being, when you write char, the compiler is implicitly (but consistently) giving it either the signed or unsigned modifier. From the spec: [*]

The three types char, signed char, and unsigned char are collectively called the character types. The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char.


Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.

—ISO 9899:1999, section "6.2.5 Types"

Why is char distinct from the explicitly-signed variants to begin with? A great discussion of historical portability questions is given here:

Fast forward [to 1993] and you'll find no single "load character from memory and sign extend" in the ARM instruction set. That's why, for performance reasons, every compiler I'm aware of makes the default char type signed on x86, but unsigned on ARM. (A workaround for the GNU GCC compiler is the -fsigned-char parameter, which forces all chars to become signed.)

Portability and the ARM Processor, Trevor Harmon, 2003

It's worth noting, though, that in modern times there are both LDRB (Load Register Byte) and LDRSB (Load Register Signed Byte) instructions available in the ISA that do sign extension after the load operation in a single instruction. [†]

So what does this mean in practice? Conventional wisdom is that you use unsigned values when you're bit bashing (although you have to be extra careful bit-bashing types smaller than int due to promotion rules) and signed values when you're doing math, [‡] but now we have this third type, the implicit-signedness char. What's the conventional wisdom on that?

Signedness-un-decorated char is for ASCII text

If you find yourself writing:

char some_char = NUMERIC_VALUE;

You should probably reconsider. In that case, when you're clearly doing something numeric, spring for a signed char so the effect of arithmetic expressions across platforms is more clear. But the more typical usage is still good:

char some_char = 'a';

For numeric uses, also consider adopting a fixed-width or minimum-width datatype from <stdint.h>. You really don't want to hold the additional complexity of char signedness in your head, as integer promotion rules are already quite tricky.

Examples to consider

Some of the following mistakes will trigger warnings, but you should realize there's something to be aware of in the warning spew (or a compiler option to consider changing) when you're cross-compiling for ARM.

Example of badness: testing the high bit

Let's say you wanted to see if the high bit were set on a char. If you assume signed chars, this easy-to-write comparison seems legit:

if (some_char < 0)

But if your char type is unsigned that test will never pass.

Example of badness: comparison to negative numeric literals

You could also make the classic mistake:

char c = getchar(); // Should actually be placed in an int!
while (c != EOF)

This comparison would never return true with an 8-bit unsigned char datatype and a 32-bit int datatype. Here's the breakdown:

When getchar() returns ((signed int) -1) to represent EOF, you'll truncate that value into 0xFFu (because chars are an unsigned 8-bit datatype). Then, when you compare against EOF, you'll promote that unsigned value to a signed integer without sign extension (preserving the bit pattern of the original, unsigned char value), and get comparison between 0xFF (255 in decimal) and 0xFFFFFFFF (-1 in decimal). For all the values in the unsigned char range, I hope it's clear that this test will never pass. [§]

To make the example a little more obvious, we can replace the call to getchar() and the EOF with a numeric -1 literal, and the same thing will happen.

char c = -1;
assert(c == -1); // Fails when char is unsigned. Yikes.

That last snippet can be tested by compiling in GCC with -fsigned-char and -funsigned-char if you'd like to see the difference in action.



The spec goes on to say that you can figure out the underlying signedness by checking whether CHAR_MIN from <limits.h> is 0 or SCHAR_MIN. In C++ you could do the <limits>-based std::numeric_limits<char>::is_signed dance.


Although the same encodings exist in the Thumb sub-ISA, the ARM sub-ISA encoding for LDRSB lacks a shift capability on the load output as a result of this historical artifact.


Although sometimes the tradeoffs can be more subtle. Scott Meyers discusses more issues quite well, per usual.


Notably, if you make the same mistake in the signed char case you can breathe easier, because you'll sign extend for the comparison, making the test passable.

Using C89 in 2012 isn't crazy

The first group I worked with in industry wrote the compiler in C and made fun of C++ on a regular basis. The second group I worked with in industry wrote the compiler in C++ and made fun of C on occasion. Like most systems programmers I've met, they were a loveable, but snarky bunch!

In any case, I've seen life on both sides of the fence, and there's really simple reasoning that dictates what you choose from C89, GNU C, C99, C++98, or C++11 in the year 2012 AD:

If this sounds simple, you're lucky!

Life gets a little bit more interesting when the match is fuzzy: you could make a strategic gamble and (at least initially) ignore parts of your "maximal" target market to gain some productivity. If you're under the gun, that may be the right way to go.

But then again, keeping your options open is also important. The wider the target market the more people you can give an immediate "yes" to. I have to imagine that phone calls like this can be important:

[A sunny afternoon in a South Bay office park. Just outside, a white Prius merges three lanes without activating a blinker. Suddenly, the phone rings.]

Nice to hear from you, Bigbucks McWindfall! What's that? You say you want my code to run as an exokernel on an in-house embedded platform with an in-house C89 toolchain? No problem! We'll send a guy to your office to compile our product and run tests tomorrow morning.

Suffice it to say that there are legitimate considerations. Consider that GCC isn't everywhere (though I love how prevalent it is these days!) and it certainly doesn't generate the best code on every platform for every workload. Consider that MSVC can only compile C89 as "real" C (as opposed to a C++ subset). Consider that the folks out there who have custom toolchains probably have them because they can afford them.

There are benefits to taking a dependency on a lowest common denominator.

C++, generic wrappers, and CRTP, oh MI!

Reading some of the original traits papers, I got to the part where they mention the conceptual difficulties inherent in multiple inheritance (MI), one of which is "factoring out generic wrappers". [*]

There's a footnote clarifying that, in practice, languages with MI actually do have other ways of accomplishing the factoring, and in reading that I remembered that the first time I actually understood CRTP (Curiously Recurring Template Pattern) was because I needed some generic wrappers.

Some folks were asking about CRTP on our IRC channel this past week, so I figured I'd share a quick walk-though.

Sometimes you want to be able to shove a method implementation onto a class, given that it has a few building blocks for you to work with. Let's say that there's some generic and formulaic way of making a delicious cake.

class CakeMaker {
  public:
    Cake makeCake() {
        Ingredients ingredients = fetchIngredients();
        if (ingredients.spoiled())
            return Cake::Fail;

        BatterAndWhatnot batter = mixAndStuff(ingredients);
        Cake result = bake(batter);
        if (result.burned())
            return Cake::Fail;

        return result;
    }
};
This is supposed to be a reusable component for shoving a makeCake method onto another class that already has the necessary methods, fetchIngredients, mixAndStuff, and bake.

Great. So now let's say that we have two different cake makers, CakeFactory and PersonalChef — we want to just implement the necessary methods for CakeMaker in those and somehow shove the makeCake method onto their class definition as well. Maybe we can inherit from CakeMaker or something?

But here's the rub: CakeMaker can't exist. It is an invalid class definition that will not compile, because it refers to methods that it does not have.

cdleary@stretch:~$ g++ -c crtp.cpp
crtp.cpp: In member function ‘Cake CakeMaker::makeCake()’:
crtp.cpp:17:56: error: ‘fetchIngredients’ was not declared in this scope
crtp.cpp:21:62: error: ‘mixAndStuff’ was not declared in this scope
crtp.cpp:22:38: error: ‘bake’ was not declared in this scope

Luckily, C++ templates have this nice lazy instantiation property, where the code goes mostly unchecked by the compiler until you actually try to use it. So, if we just change our definition to:

template <typename T>
class CakeMaker {
    // ...
};

GCC will accept it if we ask it to shut up a little bit (with -fpermissive), because we're thinking.

So now we take a look at our close friend, PersonalChef:

class PersonalChef {
  public:
    Ingredients fetchIngredients();
    BatterAndWhatnot mixAndStuff(Ingredients);
    Cake bake(BatterAndWhatnot);
};

We want to shove the CakeMaker method onto his/her class definition. We could inherit from the CakeMaker and just pass it an arbitrary type T, like so:

class PersonalChef : public CakeMaker<int> {
    // ...
};

But we need a way to wire up the methods that CakeMaker needs to the methods that PersonalChef actually has. And this is where we take the final step — via a stroke of intuition, let's pass in the type that actually has the methods on it, and use that type to refer to the method implementations within CakeMaker:

template <class Wrapped>
class CakeMaker {
  public:
    Cake makeCake() {
        Wrapped *self = static_cast<Wrapped *>(this);
        Ingredients ingredients = self->fetchIngredients();
        if (ingredients.spoiled())
            return Cake::Fail;

        BatterAndWhatnot batter = self->mixAndStuff(ingredients);
        Cake result = self->bake(batter);
        if (result.burned())
            return Cake::Fail;

        return result;
    }
};

class PersonalChef : public CakeMaker<PersonalChef> {
    Ingredients fetchIngredients();
    BatterAndWhatnot mixAndStuff(Ingredients);
    Cake bake(BatterAndWhatnot);

    friend class CakeMaker<PersonalChef>;
};

int main() {
    PersonalChef chef;
    return 0;
}

Bam! Now it compiles normally. The CakeMaker is given PersonalChef as the template type argument, and the CakeMaker converts its this pointer for use as the PersonalChef type (which is valid in this case, since PersonalChef is a CakeMaker), which does implement the required methods!

This can also be used to enforce minimum interface requirements at compile time (as in the cross-platform macro assembler) without the use of virtual functions, which have a tendency to thwart inlining optimization.

Fun fact: it looks like we have about 90 virtual function declarations in the 190k lines of engine-related C/C++ code that cloc tells me are in the js/src directory.



Fun concept from the papers: the conceptual issue with MI is that classes are overloaded in their purpose: they are intended to serve both as units of code (implementation) reuse and for instantiation of actual objects.

Lively assertions

Recently, "another" discussion about fatal assertions has cropped up in the Mozilla community. Luckily for me, I've missed all of the other discussions, so this is the one where I get to throw in my two bits.

Effectively, I only work on the JS engine, and the JS engine only has fatal assertions. This approach works for the JS team, and I can take an insider's guess as to why.

What's a fatal assertion?

In Mozilla, we have two relevant build modes: debug and no-debug.

A fatal assertion means that, when I write JS_ASSERT(someCondition), if someCondition doesn't hold, we call abort in debug build mode. As a result, the code which follows the assertion may legitimately assume that someCondition holds. You will never see something like this in the JS engine:

    JS_ASSERT(0 <= offset && offset < size);
    if (0 <= offset && offset < size) // Bad! Already enforced!
        offset_ = offset;

The interesting thing is that, in no-debug mode, we will not call abort. We eliminate the assertion condition test entirely. This means that, in production, the code which follows the assertion assumes that someCondition holds, and there's nothing checking that to be the case. [*]

Exploding early and often

If a JS-engine hacker assumes someCondition during development, and it turns out that someCondition isn't the case, we'd like to know about it, and we'd like to know about it LOUDLY.

Our valiant security team runs fuzz testing against the JS engine continuously, and hitting any one of these fatal assertions causes an abort. When you know that there is input that causes an abort in debug mode, you have a few potential resolutions:

But I think the real key to this whole process is simple: if things are exploding, a member of the bomb squad will show up and come to some resolution. Fatal assertions force action in a way that logs will not. You must (at least cursorily) investigate any one of these assertions as though it were in the most severe category, and some form of resolution must be timely in order to unblock fuzzers and other developers.

Everything that the hacker feels can and should be asserted is being asserted in a way that's impossible to ignore. Invariants present in the code base are reflected by the fatal assertions and, once they've proven themselves by running the regression/fuzzer/web gamut, can be depended upon — they certainly come to reinforce and strengthen each other over time.



We do have mechanisms that hackers can use for further checking, however. If crash reports indicate that some assertions may be suspect in production environments, we have a JS_OPT_ASSERT for doing diagnostics in our pre-release distribution channels. Since the most reliable information in a crash report tends to be the line number that you crashed on, fatal non-debug assertions are a very useful capability.