What couldn't you ship?
Great excerpt from Jason Hong's article in this month's Communications of the
ACM:
The most impressive story I have ever heard about owning your research is
from Ron Azuma's retrospective "So Long, and Thanks for the Ph.D." Azuma
tells the story of how one graduate student needed a piece of equipment for
his research, but the shipment was delayed due to a strike. The graduate
student flew out to where the hardware was, rented a truck, and drove it
back, just to get his work done.
Stories like that pluck at my heartstrings. The best part of Back to Work,
Episode 1 was this bit, around 19 minutes in, when Merlin Mann said:
I was drinking, which I don't usually do, but I was with a guy who likes to
drink, who is a friend of mine, and actually happens to be a client. And, we
were talking about what we're both really interested in and fascinated by,
which is culture. What is it that makes some environments such a petri dish
for great stuff, and what is it about that that makes people wanna run away
from the petri dish stealing office supplies and peeing in someone's desk?
What is it, what makes that difference, and can you change it?
In time, I found myself moving more towards this position — as we
had more drinks — that it kind of doesn't really matter what people do,
given that ultimately you're the one who's gotta be the animus. You're the
one who's actually going to have to go ship, right?
And, my sense was — great guy — he kept moving further toward, "Yeah,
but...". "This person does this", and "that person does that", and "I need
this to do that". And I found myself saying, "Well, okay, but what?" What
are you gonna do as a result of that? Do you just give up? Do you spend all
of your time trying to fix these things that these other people are doing
wrong?
And, to get to the nut of the nut: apparently — I'm told by the security
guards who removed me from the room — it ended with me basically yelling
over and over, "What couldn't you ship?!" "What couldn't you ship?!" "What
couldn't you ship?!"
... If we really, really are honest with ourselves, there's really not that
much stuff we can't ship because of other people...
... When are you ever gonna get enough change in other people to satisfy
you? When are you ever gonna get enough of exactly how you need it to be to
make one thing?
Well, you know, that is always gonna be there. You're
always gonna find some reason to not run today. You're always gonna find
some reason to eat crap from a machine today. You're always gonna find a
reason for everything.
To quote that wonderful Renoir film, Rules of the
Game, something along the lines of, "The trouble in life is that every man
has his reasons." Everybody's got their reasons. And the thing that
separates the people who make cool stuff from the people who don't make
cool stuff is not whether they live in San Francisco. And it's not whether
they have a cool system. It's whether they made it. That's it, end of
story. Did you make it or didn't you make it?
The way I see it, you should never stop asking yourself:
What's really going to be different about tomorrow that you couldn't go make
happen today? Why isn't past inaction indicative of what's going to happen
today, or tomorrow?
What reason do you have to believe that appropriate steps to deliver on your
vision are in flight, and what would it take for you to go drive them harder?
What losses might you have to cut in order to get some thing done,
rather than a theoretically more perfect no thing? For some outcomes, it
really does take a village. I wouldn't expect anybody to single-handedly ship
the Great Pyramid.
Of course, sunk costs are a powerful siren, so you have to be very careful to
evaluate whether compromises still allow you to hit the marks you care about as
true goals. But, at the end of the day, all those trade-offs roll up into one
deceptively simple question:
What couldn't you ship?
Big design vs simple solutions
The distinction between essential complexity and accidental complexity is a
useful one — it allows you to identify the parts of your design where you're
stumbling over yourself instead of working against something truly reflected
in the problem domain.
The simplest-solution-that-could-possibly-work (SSTCPW) concept is inherently
appealing in that, by design, you're trying to minimize these pieces that you
may come to stumble over. Typically, when you take this approach, you
acknowledge that an unanticipated change in requirements will entail major
rework, and accept that fact in light of the perceived benefits.
Benefits cited typically include:
Less design to validate.
Less implementation to perform.
Less surface area to debug.
Increased confidence that the resulting product executes properly (even if its
scope is modest).
As a more quantifiable example: if an SSTCPW solution contains comparatively
fewer code paths than an alternative, you can see how some of the above merits
fall out of it.
This also demonstrates some of the appeal of fail-fast and crash-only
approaches to software implementation, in that cutting out unanticipated
program inputs and states, via an acceptance of "failure" as a concept, tends
to home in on the SSTCPW.
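As a minimal sketch of what that looks like (the scenario and names here are
hypothetical, not from any particular codebase): instead of building machinery
to limp along with inputs the design never anticipated, you check the invariant
up front and crash loudly, so the happy path is the only path the rest of the
code has to reason about.

#include <stdio.h>
#include <stdlib.h>

static void die(const char *msg) {
    fprintf(stderr, "fatal: %s\n", msg);
    abort();   /* crash-only: leave no partial-failure state to untangle */
}

/* Parse a port number; anything unexpected is bad config or a caller bug,
 * so refuse to guess and fail immediately. */
static int parse_port(const char *text) {
    char *end = NULL;
    long value = strtol(text, &end, 10);
    if (end == text || *end != '\0')
        die("port is not a number");
    if (value <= 0 || value > 65535)
        die("port out of range");
    return (int)value;
}

int main(void) {
    printf("port: %d\n", parse_port("8080"));
    return EXIT_SUCCESS;
}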
Contrast
In my head, this approach is contrasted most starkly against an approach called
big-design-up-front (BDUF). The essence of BDUF is that, in the design process,
one attempts to consider the whole set of possible requirements (typically
both currently-known and projected) and build into the initial design and
implementation the flexibility and structure to accommodate large swaths of
them in the future, if not in the current version.
In essence, this approach acknowledges that the target is likely moving, tries
to anticipate the target's movement, and takes steps to remain one step ahead
of the game by building in flexibility, genericity, and a more 1:1-looking
mapping between the problem domain and the code constructs.
Benefits cited usually relate to ongoing maintenance in some sense and
typically include:
Less rework when requirements shift.
Components generic enough to reuse elsewhere.
A structure that keeps mirroring the problem domain as it evolves.
Head to head
In a lot of software engineering doctrine that I've read, been taught, and
toyed with throughout the years, the prevalence of unknown and ever-changing
business requirements for application software has lent a lot of credence to
BDUF, especially in that space.
There have also been enabling trends for this mentality; for example,
indirection through abstractions costs monumentally less on today's JVM than it
did on the Java interpreter of yore. In the same vein, C++ has attempted to
satisfy an interesting niche in the middle ground with its design concept of
"zero-cost abstractions", which are intended to be reducible, at compile time,
to more easily understood and more predictable underlying code forms.
On the hardware side, the steady provisioning of single-thread performance and
memory capacity throughout the years has also played an enabling role.
By contrast, system-software implementation doctrine and conventional wisdom
skew heavily towards SSTCPW, in that any "additional" design reflected in the
implementation tends to come under higher levels of duress from a
{performance, code-size, debuggability, correctness} perspective. Ideas like
"depending on concretions" — a phrase I use specifically because it's denounced
by the D in SOLID — are wholly accepted under SSTCPW, given that doing so (a)
makes the resulting artifact simpler to understand in some sense and (b)
doesn't sacrifice the ability to meet necessary requirements.
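To make the trade-off concrete, here's a hypothetical sketch (not drawn from
any real system): the first logger below depends on a concretion and is
trivially easy to read and debug; the second hides the sink behind an
abstraction so callers never change when the destination does, at the price of
indirection you have to hold in your head.

#include <stdio.h>

/* SSTCPW flavor: the logging sink is a concrete, known thing (stderr).
 * Dead simple to follow; swapping the sink means editing this function. */
static void log_simple(const char *msg) {
    fprintf(stderr, "log: %s\n", msg);
}

/* BDUF flavor: callers depend on an abstract "sink", so the destination can
 * change without touching them. */
typedef struct {
    void (*write)(void *state, const char *msg);
    void *state;
} LogSink;

static void log_flexible(const LogSink *sink, const char *msg) {
    sink->write(sink->state, msg);
}

static void stderr_write(void *state, const char *msg) {
    (void)state;
    fprintf(stderr, "log: %s\n", msg);
}

int main(void) {
    log_simple("concrete dependency");

    LogSink sink = { stderr_write, NULL };
    log_flexible(&sink, "abstract dependency");
    return 0;
}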
So what's the underlying trick to acting on an SSTCPW philosophy? You have to do
enough design work (and detailed engineering legwork) to distinguish between
what is necessary and what is merely wanted, and have some good-taste
arbitration process for when there's disagreement about the classification. As
part of that process, you have to make the most difficult
decisions: what you definitely will not do and what the design will not
accommodate without major rework.
Systems programming at my alma mater
Bryan also asked me about this at NodeConf last year, where I was chatting with him about the then-in-development IonMonkey.
I remembered my talk with Bryan when I went to recruit there last year and asked the same interview question that he references — except with the pointer uninitialized so candidates would have to enumerate the possibilities — to see what evidence I could collect. My thoughts on the issue haven't really changed since that chat, so I'll just repeat them here.
(And, although I do not speak for my employer, for any programmers back in Ithaca who think systems programming and stuff like Birman's class is cool beans: my team is hiring both full-timers and interns in the valley, and I would be delighted if you decided to apply.)
My overarching thought: bring the passion
Many of the people I'm proudest my teams have hired out of undergrad are simply "in love" with systems programming, the way a skilled artisan "cares" about their craft. They work on personal projects and steer their trajectory towards it somewhat independently of the curriculum.
Passion seems to be pretty key, along with follow-through and the ability to work well with others, in the people I've thumbs-up'd over the years. Of course I always want people who do well in their more systems-oriented curriculum and live in a solid part of the current-ability curve, but I always have an eye out for the passionately interested ones.
So, I tend to wonder: if an org has a "can systems program" distribution among the candidates, can you predict the existence of the outliers at the career fair from the position of the fat part of that curve?
Anecdotally, two other systems hackers on the JavaScript engine and I came from the same undergrad program, modulo a few years, although we took radically different paths to get to the team. They are among the best and most passionate systems programmers I've ever known, which also pushes me to think passionate interest may be a high-order bit.
Regardless, it's obviously in systems companies' best interest to try to get the most bang per buck on recruiting trips, so you can see how Bryan's point of order is relevant.
My biased take-away from my time there
I graduated less than a decade ago, so I have my own point of reference. From my time there several years ago, I got the feeling that the mentality was:
C/C++ are horrible teaching languages, so they shouldn't really be taught in general curricula in circumstances where they can be avoided.
Java and applications-level programming is where most of the well-paying industry jobs are. (Not sure how true this is or was, but it seemed to be the conventional wisdom at the time.)
It's a Windows world. And, if it's not a Windows world, you've probably got a VM under you.
This didn't come from any kind of authority; it's just putting into words the "this is how things are done around here" understanding I had at the time. All of them seemed reasonable in context, though I didn't think I wanted to head down the path alluded to by those rules of thumb. Of course these were, in the end, just rules of thumb: we still had things like a Linux farm used by some courses.
I feel that the "horrible for teaching" problem extends to other important real-world systems considerations as well: I learned MIPS and Alpha, presumably due to their clean RISC heritage, but golly do I ever wish I'd been taught more about the specifics of x86 systems. And POSIX systems.
Of course that kind of thing — picking a "real-world" ISA or compute platform — can be a tricky play for a curriculum: what do you do about the to-be SUN folks? Perhaps you've taught them all this x86-specific nonsense when they only care about SPARC. How many of the "there-be-dragons" lessons from x86 would cross-apply?
There's a balance between trade and fundamentals, and I feel I was often reminded that I was there to cultivate excellent fundamentals which could later be applied appropriately to the trends of industry and academia.
But seriously, it's just writing C...
For my graduating class, CS undergrad didn't really require writing C. The closest you were forced to get was translating C constructs (like loops and function calls) to MIPS and filling in blanks in existing programs. You note the bijection-looking relationship between C and assembly and can pretty much move on.
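For a flavor of what those exercises looked like, here's a hypothetical example
of the correspondence; the MIPS in the comments is hand-written and
approximate, not compiler output.

#include <stdio.h>

/* Each C construct maps onto a small, predictable pattern of instructions. */
static int sum_to(int n) {          /* n arrives in $a0                      */
    int total = 0;                  /*       move $t1, $zero                 */
    for (int i = 0; i < n; i++) {   /*       move $t0, $zero                 */
        total += i;                 /* loop: slt  $t2, $t0, $a0   # i < n ?  */
    }                               /*       beq  $t2, $zero, done           */
                                    /*       add  $t1, $t1, $t0   # total+=i */
                                    /*       addi $t0, $t0, 1     # i++      */
                                    /*       j    loop                       */
    return total;                   /* done: move $v0, $t1                   */
}                                   /*       jr   $ra                        */

int main(void) {
    printf("%d\n", sum_to(10));     /* prints 45 */
    return 0;
}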
I tried to steer towards as much interesting systems-level programming as possible. To summarize a path to learning a workable amount of systems programming in my school of yore, in hopes it translates to something helpful today:
You may have read K&R, but as a newbie it makes sense to beef up on fundamentals, so CS 116: Introduction to C Programming doesn't hurt (and you meet other passionate systems programming people in the process).
CS 415: Operating Systems Practicum made you write C. Sadly, we were given a library for context switching userspace threads on top of the Win32 API in MSVC that we didn't really have to dig into. We had to write things like concurrency primitives, a scheduler, and a rudimentary filesystem that operated in terms of a soft (i.e. fake) disk model. I think there may have been some networking in there as well. The course was being revamped at the time, so I hope it's more bare-metal now with something practical like qemu.
ECE 476: Designing with Microcontrollers was an amazing class for integrating whatever you were most passionate about from the CS and ECE curricula. At the time we were using 8-bit Atmels on a proprietary compiler that had no dynamic allocation support, and you had to write both assembly and C code and talk to your system board via I/O ports. Plus, I got to be a little sneaky and use avr-gcc.
ECE 473: Optimizing Compilers targeted Alpha at the time, but was a great big systems project that taught a lot about machine specifics and code generation (interfacing to syscalls, executable and linkable formats).
ECE 575: High-Performance Microprocessor Architecture made you write real and well-performing C applications for things like cache modeling with static binary translation. This was a very formative course for me.
I did a bunch of independent projects to mess around and better understand areas where I was lacking knowledge.
I did work with systems researchers at the university. Some were unwilling to take any undergrads as a matter of policy, but other groups were more amenable.
I'm not a good alum, in that I fail to keep up with the goings-on, but if I had a recommendation based on personal experience, it'd be to do stuff like that. Unfortunately, I've also been at companies where the most basic interview question is "how does a vtable actually work" or probes the nuances of C++ exceptions, so for some jobs you may want to take an advanced C++ class as well.
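For the curious, the vtable question is less mysterious than it sounds: a
vtable is essentially a per-class table of function pointers, and each instance
carries a hidden pointer to its class's table. Here's a minimal hand-rolled
sketch in C (the Shape and Circle names are purely illustrative):

#include <stdio.h>

typedef struct Shape Shape;

typedef struct {
    double (*area)(const Shape *self);      /* one slot per virtual method */
    const char *(*name)(const Shape *self);
} ShapeVTable;

struct Shape {
    const ShapeVTable *vtable;   /* hidden pointer a C++ compiler would emit */
};

typedef struct {
    Shape base;                  /* "inherits" by embedding the base first */
    double radius;
} Circle;

static double circle_area(const Shape *self) {
    const Circle *c = (const Circle *)self;
    return 3.14159265358979 * c->radius * c->radius;
}

static const char *circle_name(const Shape *self) {
    (void)self;
    return "circle";
}

/* One table shared by every Circle instance, like a C++ vtable. */
static const ShapeVTable circle_vtable = { circle_area, circle_name };

int main(void) {
    Circle c = { { &circle_vtable }, 2.0 };
    Shape *s = &c.base;
    /* "Virtual call": load the vtable pointer, index the slot, call through it. */
    printf("%s area: %f\n", s->vtable->name(s), s->vtable->area(s));
    return 0;
}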
Understanding a NULL pointer deref isn't writing C
Eh, it kind of is. On my recruiting trip, if people didn't get my uninitialized pointer dereference question, I would ask them questions about MMUs if they had taken the computer organization class. Some knew how an MMU worked (of course, some more roughly than others), but didn't realize that OSes had a policy of keeping the null page mapping invalid.
So if you understand an MMU, why don't you know what's going to happen in the NULL pointer deref? Because you've never actually written a C program and screwed it up. Or you haven't written enough assembly with pointer manipulation. If you've actually written a Java program and screwed it up you might say NullPointerException, but then you remember there are no exceptions in C, so you have to quickly come up with an answer that fits and say zero.
I think another example might help to illustrate the disconnect: the difference between protected mode and user mode is well understood among people who complete an operating systems course, but the conventions associated with them (something like "tell me about init"), or what a "traditional" physical memory space actually looks like, seem to be out of scope without outside interest.
This kind of interview scenario is usually time-to-fluency sensitive — wrapping your head around modern C and sane manual memory management isn't trivial, so it does require some time and experience. Plus, when you're working regularly with footguns, team members want a basic level of trust in coding capability. It's not that you think the person can't do the job; it's just not the right timing if you need somebody who can hit the ground running. Bryan also mentions this in his email.
Thankfully for those of us concerned with the placement of the fat part of the distribution, it sounds like Professor Sirer is saying it's been moving even more in the right direction in the time since I've departed. And, for the big reveal, I did find good systems candidates on my trip, and at the same time avoided freezing to death despite going soft in California all these years.
Brain teaser
I'll round this entry off with a little brain teaser for you systems-minded folks: I contend that the following might not segfault.
// ...
int main() {
    mysterious_function();
    A *a = NULL;
    printf("%d\n", a->integer_member);
    return EXIT_SUCCESS;
}
How many reasons can you enumerate as to why? What if we eliminate the call to the mysterious function?
Committers beware
Toiling away with hand swept clocks
Meticulously combed-through kilo-SLOCs
More and more features borne to bear, but
For all continents, a continent unaware.
Streams of commits slake developer thirst
Screams from sales pitches, ever averse
Product with no need but a product indeed, as
People with Real Problems want and bleed.
Words on a page, referred to as "plan", but
Equivocate: business, science fair, fighting the man?
Wanton tech fails on bang per buck
Without users, committer, your work doth suck.
Paradox of the generalist
Classic management advice is to build a republic: each team member specializes in what they're good at. It just makes sense.
You nurture existing talents in an attempt to ensure personal growth; simultaneously, you fill niches that need filling, constructively combine strengths, and orchestrate sufficient overlap in order to wind up with a functioning, durable, kick-ass machine of a team. A place for everyone, everyone in their place, and badassery ensues! (So the old saying goes...)
But what if, instead, you could simultaneously fork off N teams — one for every team member — and make that team member simultaneously responsible for everything? What would happen to the personal knowledge, growth rate, and impact of each member?
Let's take it one step farther: imagine you're that team member. All of a sudden it sounds terrifying, right? If you don't know it, nobody does. If you don't do it, nobody will. If you don't research it, you'll have no idea what it's about. If you don't network, no contacts are made. If you don't ship it, you know it will never change the firm/industry/world.
So, you think like you've been trained to think: you disambiguate the possible results. What could happen? Maybe you'd crumble under the pressure. Maybe you wouldn't be able to find your calling because you're glossing over the details that make you an artisan. Maybe you'd look like a fool. Maybe you would ship totally uninteresting crap that's all been done before.
But, then again, maybe you would grow like you've never grown before, learn things that you never had the rational imperative to learn, talk to interesting people you would have never talked to, ship a product that moves an industry, and blow the fucking lid off of a whole can of worms.
And so we arrive at one tautological cliché that I actually agree with: you never know until you try. And, if you choose wisely, you'll probably have a damn good time doing it.
At the least, by definition, you'll learn something you couldn't have learned by specializing.