What couldn't you ship?
Great excerpt from Jason Hong's article in this month's Communications of the ACM:
The most impressive story I have ever heard about owning your research is
from Ron Azuma's retrospective "So Long, and Thanks for the Ph.D." Azuma
tells the story of how one graduate student needed a piece of equipment for
his research, but the shipment was delayed due to a strike. The graduate
student flew out to where the hardware was, rented a truck, and drove it
back, just to get his work done.
Stories like that pluck at my heartstrings. The best part of Back to Work,
Episode 1 was this moment, around 19 minutes in, when Merlin Mann said:
I was drinking, which I don't usually do, but I was with a guy who likes to
drink, who is a friend of mine, and actually happens to be a client. And, we
were talking about what we're both really interested in and fascinated by,
which is culture. What is it that makes some environments such a petri dish
for great stuff, and what is it that makes people wanna run away
from the petri dish stealing office supplies and peeing in someone's desk?
What is it, what makes that difference, and can you change it?
In time, I found myself moving more towards this position — as we
had more drinks — that it kind of doesn't really matter what people do,
given that ultimately you're the one who's gotta be the animus. You're the
one who's actually going to have to go ship, right?
And, my sense was — great guy — he kept moving further toward, "Yeah,
but...". "This person does this", and "that person does that", and "I need
this to do that". And I found myself saying, "Well, okay, but what?" What
are you gonna do as a result of that? Do you just give up? Do you spend all
of your time trying to fix these things that these other people are doing?
And, to get to the nut of the nut: apparently — I'm told by the security
guards who removed me from the room — that it ended with me basically yelling
over and over, "What couldn't you ship?!" "What couldn't you ship?!" "What
couldn't you ship?!"
... If we really, really are honest with ourselves, there's really not that
much stuff we can't ship because of other people...
... When are you ever gonna get enough change in other people to satisfy
you? When are you ever gonna get enough of exactly how you need it to be to
make one thing?
Well, you know, that is always gonna be there. You're
always gonna find some reason to not run today. You're always gonna find
some reason to eat crap from a machine today. You're always gonna find a
reason for everything.
To quote that wonderful Renoir film, Rules of the
Game, something along the lines of, "The trouble in life is that every man
has his reasons." Everybody's got their reasons. And the thing that
separates the people who make cool stuff from the people who don't make
cool stuff is not whether they live in San Francisco. And it's not whether
they have a cool system. It's whether they made it. That's it, end of
story. Did you make it or didn't you make it?
The way I see it, you should never stop asking yourself:
What's really going to be different about tomorrow that you couldn't go make
happen today? Why isn't past inaction indicative of what's going to happen
today, or tomorrow?
What reason do you have to believe that appropriate steps to deliver on your
vision are in flight, and what would it take for you to go drive them harder?
What losses might you have to cut in order to get some thing done,
rather than a theoretically more perfect no thing? For some outcomes, it
really does take a village. I wouldn't expect anybody to single-handedly ship
the Great Pyramid.
Of course, sunk costs are a powerful siren, so you have to be very careful to
evaluate whether compromises still allow you to hit the marks you care about as
true goals. But, at the end of the day, all those trade-offs roll up into one
subtly simple question:
What couldn't you ship?
Paradox of the generalist
Classic management advice is to build a republic: each team member specializes in what they're good at. It just makes sense.
You nurture existing talents in an attempt to ensure personal growth; simultaneously, you fill niches that need filling, constructively combine strengths, and orchestrate sufficient overlap in order to wind up with a functioning, durable, kick-ass machine of a team. A place for everyone, everyone in their place, and badassery ensues! (So the old saying goes...)
But what if, instead, you could fork off N teams — one for every team member — and make that member simultaneously responsible for everything? What would happen to the personal knowledge, growth rate, and impact of each member?
Let's take it one step further: imagine you're that team member. All of a sudden it sounds terrifying, right? If you don't know it, nobody does. If you don't do it, nobody will. If you don't research it, you'll have no idea what it's about. If you don't network, no contacts are made. If you don't ship it, you know it will never change the firm/industry/world.
So, you think like you've been trained to think: you disambiguate the possible results. What could happen? Maybe you'd crumble under the pressure. Maybe you wouldn't be able to find your calling because you're glossing over the details that make you an artisan. Maybe you'd look like a fool. Maybe you would ship totally uninteresting crap that's all been done before.
But, then again, maybe you would grow like you've never grown before, learn things that you never had the rational imperative to learn, talk to interesting people you would have never talked to, ship a product that moves an industry, and blow the fucking lid off of a whole can of worms.
And so we arrive at one tautological cliché that I actually agree with: you never know until you try. And, if you choose wisely, you'll probably have a damn good time doing it.
At the least, by definition, you'll learn something you couldn't have learned by specializing.
Accomplish your new year's resolution of being more badass
I know what you're going through — I've done it all before.
The market is teeming with products that purport to help you meet your badassery quota.
First you do the shakes. Then, you go with the bars that say they're infused with the good stuff, but just seem to leave a slightly corrugated taste in your mouth. Finally, you're droppin' hard-earned dinero on a facility that you don't need or a professional badassery trainer whose appointments you desperately wish you could cancel.
But I'm here to tell you that you don't need to shell out cash-money to become more badass, my friends. Not anymore, thanks to the beauty of open source, the ES6 plans wiki page, and our delightful SpiderMonkey technical staff who are standing by to receive your calls for mentorship.
Allow me to explain.
Badass begets badass
I, for example, have wanted String.prototype.startsWith very badly, to the point that I've started washing people's window panes against their will as they exit highway 101. Around here, odds are that a programmer sees my sign and implements the thing just to stop me from bothering them again. (A little tactic that I call SpiderGuerilla warfare.)
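For reference, the behavior I'm pining for can be sketched in a few lines of plain JavaScript. This is a hypothetical userland stand-in to illustrate the semantics, not SpiderMonkey's actual implementation:

```javascript
// Sketch of String.prototype.startsWith semantics: does `str` begin
// with `search` at offset `position` (default 0)? Userland stand-in
// for illustration only, not the engine's self-hosted code.
function startsWith(str, search, position) {
  var start = position || 0;
  return str.substring(start, start + search.length) === search;
}

startsWith("SpiderMonkey", "Spider");    // true
startsWith("SpiderMonkey", "Monkey");    // false
startsWith("SpiderMonkey", "Monkey", 6); // true
```

Small as it looks, having it in the engine beats pasting a helper like this into every project.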
So what are you waiting for?
I know, you're probably already quite beefcake, but here's my three step plan:
1. Watch the SpiderMonkey hacking intro.
2. Pick out a bug from the ES6 plans.
3. Come talk to great people on irc.mozilla.org in channel #jsapi (for example, cdleary, jorendorff, luke, or whoever else is in there) or comment in the bug — tell them that you're on a quest to become even more badass, describe a bug that you're interested in taking, and give a quick note on what you've done with the engine so far — for example, walking through the video in step 1! We'll find you a mentor who will get you started on the right track.
Don't miss out on this exclusive offer — SpiderMonkey contribution is not sold in stores.
In fact, if you act now, we'll throw in an IonMonkey shirt (or another Firefox shirt of equivalent awesomeness) and publish a blurb about your feature in Mozilla hacks. Of course, you can also have yourself added to about:credits, provided that's what you're into.
This one-of-a-kind offer is too ridonk to pass up. Just listen to this testimonial from one of our badass contributors:
I started contributing to SpiderMonkey and now I can write a JIT compiler from scratch in a matter of days. BEEFCAKE!
—@evilpies [Liberally paraphrased]
See you in the tubes!
"Whoa, Billy reviewed a one-meg patch to the hairiest part of the codebase in just two hours!" [*]
It's pretty easy to identify what's wrong with that sentence. The speed of a review is not an achievement. Billy could have literally just checked the "yes, I reviewed it" button without looking at the patch.
... but an empty review looks pretty bad, especially as the size of the patch grows. So maybe Billy padded it out by identifying two hours worth of style nits and asking for a few comments here and there. In any case, the code quality is no more assured after the review than before it.
Conventional wisdom is that it's economically prudent to do good code reviews: finding defects early incurs the lowest cost, review has a 'peer pressure' based motivation towards quality improvement, and review creates knowledge redundancy that mitigates the bus effect. In research literature on code review, effectiveness is typically measured as "defects found per KLoC". [†] However, this dimension ignores the element of "time per review": I'm going to argue that the time to give a good review varies with the complexity and size of the modifications.
Now, one can argue that, if Billy does anything more than ignorantly checking the little "I've reviewed this" box, he has the potential to add value. After all, code isn't going to be defect-free when it comes out of review, so we're just talking about a difference in degree. If we assume that truly obscure or systematic bugs won't jump out from a diff, what additional value is Billy really providing by taking a long time?
This is where it gets tricky. I think the reason that folks can have trouble deciding how long reviews should take is that we don't know what a review really entails. When I request that somebody review my patch, what will they try to suss out? What kind of code quality (in terms of functional correctness and safety) is actually being assured at the component level, across all reviewed code?
If you can't say that your reviews ensure some generally understood level of code quality (i.e. certain issues have definitively been considered), it's hard to say that you're using reviews as an effective tool.
Aside: even with clear expectations for the code review process, each party has to exercise some discipline and avoid the temptation to lean on the other party. For mental framing purposes, it's a defect-finding game in which you're adversaries: the developer wants to post a patch with as few defects as possible and the reviewer wants to find as many defects as they possibly can within a reasonable window of time.
A few best practices
From the research I've read on code review, these are two simple things that are supposed to increase defect-finding effectiveness:
- Scan, then dig.
Do a preliminary pass to get the gist of how it's structured and what it's doing. Note down anything that looks fishy at a glance. Once you finish your scan, then do another pass that digs into all the corner cases you can think of and inspects each line thoroughly.
- Keep checklists.
One checklist for self-reviews and one checklist for reviews of everybody else's stuff. I've seen it recommended that you scan through the code once for every checklist item to do a truly thorough review.
The self-review checklist is important because you tend to repeat the same mistakes until you've learned them cold. When you introduce a defect and it gets caught, figure out where it fits into your list and make a mental and/or physical note of the example, or add it as a new category.
Having a communal checklist can also be helpful for identifying group pain points. "Everybody screws up GC-rooting JSString-derived chars sometimes," is easily codified in a communal checklist document that the whole team can reference. In addition, this document helps newcomers avoid potential pitfalls and points out areas of the code that could generally be more usable / less error prone.
Here's another nice summary of more effective practices.
I'm personally of the opinion that, if you find something that you think is defective, you should try to write a test to demonstrate it. The beneficial outcomes of this are:
You end up with a test that can be added to the suite, even if no defect is found.
You gain a greater system-level understanding of how to trigger behaviors in the questionable area, giving you an even better understanding of the context for the patch you're reviewing.
If it was unclear to you while reading the patch, you know it requires clarification, either via more expressive code or an appropriate comment.
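As a concrete sketch of that habit — with all names invented for illustration — suppose a patch under review adds a small clamp helper and you suspect a boundary case is mishandled. Writing the probe as a runnable test settles the question either way:

```javascript
// Hypothetical helper from a patch under review (names invented for
// illustration): clamp `value` into the inclusive range [lo, hi].
function clamp(value, lo, hi) {
  return Math.min(Math.max(value, lo), hi);
}

// The reviewer's probe: exercise the boundaries you suspect are wrong.
// Even if no defect turns up, this test can be added to the suite.
function testClampBoundaries() {
  if (clamp(5, 0, 10) !== 5) throw new Error("in-range value changed");
  if (clamp(-1, 0, 10) !== 0) throw new Error("below lo not clamped");
  if (clamp(11, 0, 10) !== 10) throw new Error("above hi not clamped");
  if (clamp(0, 0, 10) !== 0) throw new Error("lo boundary mishandled");
  if (clamp(10, 0, 10) !== 10) throw new Error("hi boundary mishandled");
}

testClampBoundaries();
```

Here the probe passes, so you keep the test and move on; had it thrown, you'd have a minimal, reproducible defect report instead of a hunch.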
I think in an ideal situation there are also linter tools in place to avoid style nits altogether: aside from nits masquerading as legitimate review comments, automatically enforced stylistic consistency is nice.
Chemistry and compatibility
There's a spectrum for the working compatibility between two people.
On the far left of the spectrum, there's negativity. You hate the other person's guts, and can't work with them at all. There's some personality conflict (which could simply be, "That person is an asshole") or some impasse that would require psychotherapy to bridge.
On the far right of the spectrum, there's chemistry. Effectively, you want to have their technological babies. You finish each other's... that's right, sandwiches. Or sentences. Or parser combinator libraries. When you stumble with a task or concept, that person is there to pick you up with a how's-it-going or a whiteboard marker, and that's a two-way street. You work together like the badass components of an emergently more badass machine. Bio-digital jazz, man.
And smack dab in the middle, there's plain ol' compatible. This is like the "friend zone" of the working world. It's fine, and you can go on that way indefinitely, getting things done at a reasonable clip, but it probably doesn't get the creative juices flowing. You're scheduled to meet at a waypoint instead of bushwhacking away at the thicket together.
It takes time, effort, and luck to find people that you have working chemistry with — they're understandably rare. The effort has to start somewhere, though. Maybe it's a good exercise to imagine a person that you're just working-compatible with: if you bared your technological soul to them, might you get something going on?