December 13, 2012

Systems programming at my alma mater

Bryan also asked me this at NodeConf last year, where I was chatting with him about the then-in-development IonMonkey:

An old e-mail to the Cornell CS faculty: https://gist.github.com/4278516  Have things changed in the decade since?

I remembered my talk with Bryan when I went to recruit there last year and asked the same interview question that he references — except with the pointer uninitialized so candidates would have to enumerate the possibilities — to see what evidence I could collect. My thoughts on the issue haven't really changed since that chat, so I'll just repeat them here.

(And, although I do not speak for my employer, for any programmers back in Ithaca who think systems programming and stuff like Birman's class is cool beans, my team is hiring both full time and interns in the valley, and I would be delighted if you decided to apply.)

My overarching thought: bring the passion

Many of the people I'm proudest my teams have hired out of undergrad are simply "in love" with systems programming, the way a skilled artisan "cares" about their craft. They work on personal projects and steer their trajectory towards it somewhat independently of the curriculum.

Passion seems to be pretty key in the people I've thumbs-up'd over the years, along with follow-through and the ability to work well with others. Of course I always want people who do well in their more systems-oriented curriculum and live in a solid part of the current-ability curve, but I always have an eye out for the passionately interested ones.

So, I tend to wonder: if an org has a "can systems program" distribution among the candidates, can you predict the existence of the outliers at the career fair from the position of the fat part of that curve?

Anecdotally, two other systems hackers on the JavaScript engine and I came from the same undergrad program, modulo a few years, although we took radically different paths to get to the team. They are among the best and most passionate systems programmers I've ever known, which also pushes me to think passionate interest may be a high-order bit.

Regardless, it's obviously in systems companies' best interest to try to get the most bang per buck on recruiting trips, so you can see how Bryan's point of order is relevant.

My biased take-away from my time there

I graduated less than a decade ago, so I have my own point of reference. From my time there several years ago, I got the feeling that the mentality was:

This didn't come from any kind of authority; it's just my attempt to put into words the "this is how things are done around here" understanding I had at the time. All of them seemed reasonable in context, though I didn't think I wanted to head down the path those rules of thumb alluded to. Of course these were, in the end, just rules of thumb: we still had things like a Linux farm used by some courses.

I feel that the "horrible for teaching" problem extends to other important real-world systems considerations as well: I learned MIPS and Alpha [*], presumably due to their clean RISC heritage, but golly do I ever wish I'd been taught more about the specifics of x86 systems. And POSIX systems. [†]

Of course that kind of thing — picking a "real-world" ISA or compute platform — can be a tricky play for a curriculum: what do you do about the to-be SUN folks? Perhaps you've taught them all this x86-specific nonsense when they only care about SPARC. How many of the "there-be-dragons" lessons from x86 would cross-apply?

There's a balance between trade and fundamentals, and I feel I was often reminded that I was there to cultivate excellent fundamentals which could later be applied appropriately to the trends of industry and academia.

But seriously, it's just writing C...

For my graduating class, CS undergrad didn't really require writing C. The closest you were forced to get was translating C constructs (like loops and function calls) to MIPS and filling in blanks in existing programs. You note the bijection-looking relationship between C and assembly and can pretty much move on.
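
For flavor, here's a rough sketch of the kind of exercise I mean (the function name and the exact MIPS spelling are mine, for illustration, with branch delay slots ignored): a trivial C loop and a hand translation of it.

/* Sum the integers 0..n-1, to see the C/assembly correspondence. */
int sum_below(int n) {
    int total = 0;
    int i;
    for (i = 0; i < n; i++)
        total += i;
    return total;
}

/* A hand translation to MIPS (argument in $a0, result in $v0):
 *         move  $v0, $zero        # total = 0
 *         move  $t0, $zero        # i = 0
 * loop:   slt   $t1, $t0, $a0     # t1 = (i < n) ? 1 : 0
 *         beq   $t1, $zero, done  # exit the loop once i >= n
 *         addu  $v0, $v0, $t0     # total += i
 *         addiu $t0, $t0, 1       # i++
 *         j     loop
 * done:   jr    $ra               # return total
 */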

I tried to steer my coursework to hit as much interesting systems-level programming as possible. To summarize a path to learning a workable amount of systems programming at my school of yore, in hopes it translates to something helpful that still exists today:

I'm not a good alum in failing to keep up with the goings-ons but, if I had a recommendation based on personal experience, it'd be to do stuff like that. Unfortunately, I've also been at companies where the most basic interview question is "how does a vtable actually work" or on nuances of C++ exceptions, so for some jobs you may want to take an advanced C++ class as well.
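
If the vtable question is the kind of thing you're up against, the mental model is just a per-class table of function pointers plus a hidden pointer in each object. Here's a minimal sketch in C (the names are mine and hypothetical, and this glosses over plenty of real C++ details like inheritance layout and RTTI):

#include <stdio.h>

struct animal; /* forward declaration so the "methods" can take a self pointer */

/* The "vtable": one shared table of function pointers per "class". */
struct animal_vtable {
    void (*speak)(const struct animal *self);
};

/* Each object carries a hidden pointer to its class's table. */
struct animal {
    const struct animal_vtable *vtable;
    const char *name;
};

static void dog_speak(const struct animal *self) { printf("%s says woof\n", self->name); }
static void cat_speak(const struct animal *self) { printf("%s says meow\n", self->name); }

static const struct animal_vtable dog_vtable = { dog_speak };
static const struct animal_vtable cat_vtable = { cat_speak };

int main(void) {
    struct animal rex = { &dog_vtable, "Rex" };
    struct animal tom = { &cat_vtable, "Tom" };
    const struct animal *animals[] = { &rex, &tom };
    int i;
    for (i = 0; i < 2; i++)
        animals[i]->vtable->speak(animals[i]); /* the "virtual call": an indirect call through the table */
    return 0;
}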

Understanding a NULL pointer deref isn't writing C

Eh, it kind of is. On my recruiting trip, if people didn't get my uninitialized pointer dereference question and had taken the computer organization class, I would ask them questions about MMUs. Some knew how an MMU worked (of course, some more roughly than others), but didn't realize that OSes have a policy of keeping the null page mapping invalid.

So if you understand an MMU, why don't you know what's going to happen in the NULL pointer deref? Because you've never actually written a C program and screwed it up. Or you haven't written enough assembly with pointer manipulation. If you've actually written a Java program and screwed it up you might say NullPointerException, but then you remember there are no exceptions in C, so you have to quickly come up with an answer that fits and say zero.

I think another example might help to illustrate the disconnect: the difference between protected mode and user mode is well understood among people who complete an operating systems course, but the conventions associated with them (something like "tell me about init"), or what a "traditional" physical memory space actually looks like, seem to be out of scope without outside interest.

This kind of interview scenario is usually time-to-fluency sensitive: wrapping your head around modern C and sane manual memory management isn't trivial, so it requires some time and experience. Plus, when you're working regularly with footguns, team members want a basic level of trust in coding capability. It's not that you think the person can't do the job; it's just not the right timing if you need to find somebody who can hit the ground running. Bryan also mentions this in his email.

Thankfully for those of us concerned with the placement of the fat part of the distribution, it sounds like Professor Sirer is saying it's been moving even more in the right direction in the time since I've departed. And, for the big reveal, I did find good systems candidates on my trip, and at the same time avoided freezing to death despite going soft in California all these years.

Brain teaser

I'll round this entry off with a little brain teaser for you systems-minded folks: I contend that the following might not segfault.

// ...

int main() {
    mysterious_function();
    A *a = NULL;
    printf("%d\n", a->integer_member);
    return EXIT_SUCCESS;
}

How many reasons can you enumerate as to why? What if we eliminate the call to the mysterious function?

Footnotes

[*]

In an advanced course we had an Alpha 21264 that I came to love deeply.

[†]

I'm hoping there's more emphasis on POSIX these days with the mobile growth and Linux/OS X dominance in that space.

ARM chars are unsigned by default

[Latest from the "I can't believe I'm writing a blog entry about this" department, but the context and surrounding discussion is interesting. --Ed]

If you're like me, or one of the other thousands of concerned parents who has borne C code into this cruel, topsy-turvy, and oftentimes undefined world, you read the C standard aloud to your programs each night. It's comforting to know that K&R are out there, somewhere, watching over them, as visions of Duff's Devices dance in their wee little heads.

The shocking truth

In all probability, you're one of today's lucky bunch who find out that the signedness of the char datatype in C is implementation-defined. The implication being: when you write char, the compiler is implicitly (but consistently) giving it either the signed or unsigned modifier. From the spec: [*]

The three types char, signed char, and unsigned char are collectively called the character types. The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char.

...

Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.

—ISO/IEC 9899:1999, section "6.2.5 Types"

Why is char distinct from the explicitly-signed variants to begin with? A great discussion of historical portability questions is given here:

Fast forward [to 1993] and you'll find no single "load character from memory and sign extend" in the ARM instruction set. That's why, for performance reasons, every compiler I'm aware of makes the default char type signed on x86, but unsigned on ARM. (A workaround for the GNU GCC compiler is the -fsigned-char parameter, which forces all chars to become signed.)

Portability and the ARM Processor, Trevor Harmon, 2003

It's worth noting, though, that in modern times the ISA has both an LDRB (Load Register Byte) and an LDRSB (Load Register Signed Byte) instruction, so a byte load with sign extension is now a single instruction. [†]

So what does this mean in practice? Conventional wisdom is that you use unsigned values when you're bit bashing (although you have to be extra careful bit-bashing types smaller than int due to promotion rules) and signed values when you're doing math, [‡] but now we have this third type, the implicit-signedness char. What's the conventional wisdom on that?

Signedness-un-decorated char is for ASCII text

If you find yourself writing:

char some_char = NUMERIC_VALUE;

You should probably reconsider. In that case, when you're clearly doing something numeric, spring for a signed char so the effect of arithmetic expressions across platforms is more clear. But the more typical usage is still good:

char some_char = 'a';

For numeric uses, also consider adopting a fixed-width or minimum-width datatype from <stdint.h>. You really don't want to hold the additional complexity of char signedness in your head, as integer promotion rules are already quite tricky.
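
As a rule of thumb (my own shorthand, not gospel), the roles shake out something like this:

#include <stdint.h>

char    letter  = 'a';   /* plain char: text only, signedness unknown */
int8_t  delta   = -4;    /* explicitly signed 8-bit arithmetic */
uint8_t bitmask = 0x80;  /* explicitly unsigned 8-bit bit bashing */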

Examples to consider

Some of the following mistakes will trigger warnings, but the point is that there's something to watch for in the warning spew (or a compiler option to consider changing) when you're cross-compiling for ARM.

Example of badness: testing the high bit

Let's say you wanted to see if the high bit were set on a char. If you assume signed chars, this easy-to-write comparison seems legit:

if (some_char < 0)

But if your char type is unsigned that test will never pass.
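
A test that works regardless of which way the implementation went is to force the unsigned interpretation and mask the high bit (a small sketch, assuming 8-bit chars):

if ((unsigned char)some_char & 0x80) {
    /* the high bit is set, whether char is signed or unsigned */
}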

Example of badness: comparison to negative numeric literals

You could also make the classic mistake:

char c = getchar(); // Should actually be placed in an int!
while (c != EOF)

With an 8-bit unsigned char datatype and a 32-bit int datatype, this loop never terminates, because the comparison against EOF can never detect it. Here's the breakdown:

When getchar() returns ((signed int) -1) to represent EOF, you truncate that value into 0xFFu (because chars are an unsigned 8-bit datatype). Then, when you compare against EOF, you promote that unsigned value to a signed integer without sign extension (preserving the bit pattern of the original, unsigned char value), and get a comparison between 0xFF (255 in decimal) and 0xFFFFFFFF (-1 in decimal). For all the values in the unsigned char range, I hope it's clear that the two can never compare equal. [§]
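
The fix the comment alludes to is the standard idiom: keep the result of getchar() in an int until you've checked it against EOF, and only then narrow it. A minimal sketch:

#include <stdio.h>

int main(void) {
    int c; /* int, not char, so EOF survives the comparison */
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}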

To make the original mistake a little more obvious, we can replace the call to getchar() and the EOF constant with a -1 literal, and the same truncation-and-promotion mismatch happens:

char c = -1;
assert(c == -1); // This assertion fails. Yikes.

That last snippet can be tested by compiling in GCC with -fsigned-char and -funsigned-char if you'd like to see the difference in action.
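
If you want a complete program to play with (the assertion framing is mine), something like this compiled once with each flag shows the split:

#include <assert.h>

int main(void) {
    char c = -1;
    /* Passes with gcc -fsigned-char; aborts with gcc -funsigned-char
       (or with the default char on a typical ARM toolchain). */
    assert(c == -1);
    return 0;
}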

Footnotes

[*]

The spec goes on to say that you can figure out the underlying signedness by checking whether CHAR_MIN from <limits.h> is 0 or SCHAR_MIN. In C++ you could do the <limits>-based std::numeric_limits<char>::is_signed dance.
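
In code form, the <limits.h> check looks something like:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_MIN is 0 exactly when plain char is unsigned. */
    printf("char is %s by default\n", CHAR_MIN == 0 ? "unsigned" : "signed");
    return 0;
}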

[†]

Although the same encodings exist in the Thumb sub-ISA, the ARM sub-ISA encoding of LDRSB lacks the shifted-register offset form that LDRB has, as a result of this historical artifact.

[‡]

Although sometimes the tradeoffs can be more subtle. Scott Meyers discusses the issues quite well, per usual.

[§]

Notably, if you make the same mistake in the signed char case you can breathe easier, because you'll sign extend for the comparison, making the test passable.

Using C89 in 2012 isn't crazy

The first group I worked with in industry wrote the compiler in C and made fun of C++ on a regular basis. The second group I worked with in industry wrote the compiler in C++ and made fun of C on occasion. Like most systems programmers I've met, they were a loveable, but snarky bunch!

In any case, I've seen life on both sides of the fence, and there's really simple reasoning that dictates what you choose from C89, GNU C, C99, C++98, or C++11 in the year 2012 AD:

If this sounds simple, you're lucky!

Life gets a little bit more interesting when the match is fuzzy: you could make a strategic gamble and (at least initially) ignore parts of your "maximal" target market to gain some productivity. If you're under the gun, that may be the right way to go.

But then again, keeping your options open is also important. The wider the target market, the more people you can give an immediate "yes" to. I have to imagine that phone calls like this can be important:

[A sunny afternoon in a South Bay office park. Just outside, a white Prius merges three lanes without activating a blinker. Suddenly, the phone rings.]

Nice to hear from you, Bigbucks McWindfall! What's that? You say you want my code to run as an exokernel on an in-house embedded platform with an in-house C89 toolchain? No problem! We'll send a guy to your office to compile our product and run tests tomorrow morning.

Suffice it to say that there are legitimate considerations. Consider that GCC isn't everywhere (though I love how prevalent it is these days!) and it certainly doesn't generate the best code on every platform for every workload. Consider that MSVC can only compile C89 as "real" C (as opposed to a C++ subset). Consider that the folks out there who have custom toolchains probably have them because they can afford them.

There are benefits to taking a dependency on a lowest common denominator.

Lively assertions

Recently, "another" discussion about fatal assertions has cropped up in the Mozilla community. Luckily for me, I've missed all of the other discussions, so this is the one where I get to throw in my two bits.

Effectively, I only work on the JS engine, and the JS engine only has fatal assertions. This approach works for the JS team, and I can take an insider's guess as to why.

What's a fatal assertion?

In Mozilla, we have two relevant build modes: debug and no-debug.

A fatal assertion means that, when I write JS_ASSERT(someCondition), if someCondition doesn't hold, we call abort in debug build mode. As a result, the code which follows the assertion may legitimately assume that someCondition holds. You will never see something like this in the JS engine:

{
    JS_ASSERT(0 <= offset && offset < size);
    if (0 <= offset && offset < size) // Bad! Already enforced!
        offset_ = offset;
}

The interesting thing is that, in no-debug mode, we will not call abort. We eliminate the assertion condition test entirely. This means that, in production, the code which follows the assertion assumes that someCondition holds, and there's nothing checking that to be the case. [*]
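
For the shape of the thing, here's a minimal sketch of an assertion macro along these lines; the macro name and the DEBUG guard are illustrative, not the actual SpiderMonkey definitions:

#include <stdio.h>
#include <stdlib.h>

#ifdef DEBUG
/* Debug builds: evaluate the condition and abort loudly when it fails. */
#define FATAL_ASSERT(cond)                                              \
    do {                                                                \
        if (!(cond)) {                                                  \
            fprintf(stderr, "Assertion failure: %s, at %s:%d\n",        \
                    #cond, __FILE__, __LINE__);                         \
            abort();                                                    \
        }                                                               \
    } while (0)
#else
/* No-debug builds: the condition is not even evaluated. */
#define FATAL_ASSERT(cond) ((void)0)
#endif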

Exploding early and often

If a JS-engine hacker assumes someCondition during development, and it turns out that someCondition isn't the case, we'd like to know about it, and we'd like to know about it LOUDLY.

Our valiant security team runs fuzz testing against the JS engine continuously, and hitting any one of these fatal assertions causes an abort. When you know that there is input that causes an abort in debug mode, you have a few potential resolutions:

But I think the real key to this whole process is simple: if things are exploding, a member of the bomb squad will show up and come to some resolution. Fatal assertions force action in a way that logs will not. You must (at least cursorily) investigate any one of these assertions as though it were in the most severe category, and some form of resolution must be timely in order to unblock fuzzers and other developers.

Everything that the hacker feels can and should be asserted is being asserted in a way that's impossible to ignore. Invariants present in the code base are reflected by the fatal assertions and, once they've proven themselves by running the regression/fuzzer/web gamut, can be depended upon — they certainly come to reinforce and strengthen each other over time.

Footnotes

[*]

We do have mechanisms that hackers can use for further checking, however. If crash reports indicate that some assertions may be suspect in production environments, we have a JS_OPT_ASSERT for doing diagnostics in our pre-release distribution channels. Since the most reliable information in a crash report tends to be the line number that you crashed on, fatal non-debug assertions are a very useful capability.

Tiny tutorial: learning about GCC generated code with objdump

The aroma is calling you. The best part of waking up is x86-64 in your CPU. [*]

Let's say it's early in the morning and you're a little cranky, but you want to see if (and how) the compiler converts the following into branch-less code:

extern int a, b, c;

void test() {
    c += a == b;
}

You're using the boolean result from the binary comparison operator to, potentially, bump the value in c. You know that the result of the equality is a bit value because the spec wouldn't lie to you — you've been through so much together:

Each of the operators yields 1 if the specified relation is true and 0 if it is false. The result has type int.

—ISO C99 6.5.9 Equality Operators (3)

By using extern variables as operands, you prevent the compiler from constant-folding or optimizing anything away, since it doesn't know squat about the variables except for their type.

To check out what the compiler generates, all that you have to run is the following:

gcc -O3 -Wall -c foo.c
objdump -d -r -Mintel foo.o

That gets you a disassembly of the optimized instruction sequence in Intel assembly syntax, with relocations (extern placeholders whose actual memory addresses get filled in by the linker) displayed inline:

0000000000000000 <test>:
   0:   8b 05 00 00 00 00       mov    eax,DWORD PTR [rip+0x0]        # 6 <test+0x6>
                        2: R_X86_64_PC32        a-0x4
   6:   3b 05 00 00 00 00       cmp    eax,DWORD PTR [rip+0x0]        # c <test+0xc>
                        8: R_X86_64_PC32        b-0x4
   c:   0f 94 c0                sete   al
   f:   0f b6 c0                movzx  eax,al
  12:   01 05 00 00 00 00       add    DWORD PTR [rip+0x0],eax        # 18 <test+0x18>
                        14: R_X86_64_PC32       c-0x4
  18:   c3                      ret

The crux of the fun is the sete opcode, part of the set* family of 8-bit opcodes that set the 8-bit operand to the bit value of a condition flag.

Then, because 8-bit opcodes preserve the higher bits of the register they operate on, and you need to perform a clean add of a bit value (with no junk left hanging around in the higher bits), you zero-extend the 8-bit value into the corresponding 32-bit form.

Finally, you have the bit value in eax which you can simply add to (the placeholder for) c.

It's also fun to note that, even if you used a 64-bit wide type (a long on an LP64 system like mine), the same zero-extending code sequence would be generated! Because 32-bit operations don't preserve the higher (32) bits of the register they operate on, but instead clear them out, the movzx instruction actually zeroes out all the bits in rax aside from the 8 in al that you're zero extending. For even more tutorial-imbuing goodness, you can try switching the extern declaration over to long and test it out for yourself.
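
In case you want to save a few keystrokes, the tweaked source is just this (LP64 assumed, as above):

extern long a, b, c;

void test(void) {
    c += a == b;
}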

Footnotes

[*]

You, too, can make over-dramatized faces like all the people in that 80s commercial!