February 9, 2010

The source is the thing wherein we'll catch the weirdness of the CSS bling

I've always believed in the separation of content and style. Unfortunately, my faith in CSS is being shaken beyond the normal repetition and cross-browser normalization woes. This little snippet just threw me for a loop:

.entry ul li:before, #sidebar ul ul li:before {
    content: "\00BB \0020";
}

The little double-arrow bullet dealy-boppers (glyphs) that my blog theme uses, which I do think are nice looking, apparently aren't cool enough to be in the default CSS list-style-type glyph set, like disc and double-circled-decimal. The result on the part of the theme designer was the above incantation. Maybe the CSS working group decided to exclude right-pointing double angle quotation mark because it's slightly more to type than disc? I'm not sure.

Now to the crux of the entry: this all wouldn't be so bad if there were some indication in either Firebug or the WebKit inspector that the value was present on my li elements. Seeing those two little magical greater-than signs in an HTML entity explorer would have saved precious time. I suppose it's bug-filing time.

My futile attempt to use one of the entity explorer panes.

After the fact, I found that A List Apart is teaching this voodoo magic! Those rascals! ;-)

Moral of the story: don't trust that everything relevant to the rendering is being displayed in your web developer tools. If something is styled in a strange way, don't hesitate to go straight to the source.

Two postfix operations redux: sequence points

Get ready for some serious language lawyering.

I was going back and converting my old entries to reStructuredText when I found an entry in which I was wrong! (Shocking, I know.)

C

Stupid old me didn't know about sequence points back in 2007: the side effects of the ++ operators in the C expression i++ * i++ are in an indeterminate state of completion until one of the language-defined sequence points is reached (e.g. the semicolon ending the statement or a function invocation).

From the C99 standard, section 6.5.2.4 paragraph 2, regarding the postfix increment and decrement operators:

The result of the postfix ++ operator is the value of the operand. After the result is obtained, the value of the operand is incremented. The side effect of updating the stored value of the operand shall occur between the previous and the next sequence point.

Therefore, the compiler is totally at liberty to interpret that expression as:

mov lhs_result, i     ; Copy the values of the postincrement evaluation.
mov rhs_result, i     ; (Which is the original value of i.)
mul result, lhs_result, rhs_result
add i, lhs_result, 1
add i, rhs_result, 1  ; Second increment clobbers with the same value!

This produces the same outcome as the GCC compilation in the referenced entry: i is 12 and the result is 121.

As I mentioned before, the reason this can occur is that nothing in the syntax forces the first postincrement to be evaluated before the second one. To give an analogy to concurrency constructs: you have a kind of compile-time "race condition" in your syntax between the two postincrements that could be solved with a sequence point "barrier". [*]

In this assembly, those adds can float anywhere they like after their corresponding mov instruction, and can operate directly on i instead of the temporary if they'd prefer. Here's a possible sequence that results in a value of 132 and i ending up as 13.

mov lhs_result, i ; Gets the original 11.
inc i             ; Increment in-place after the start value is copied.
mov rhs_result, i ; Gets the new value 12.
inc i             ; Increment occurs in-place again, making 13.
mul result, lhs_result, rhs_result

Even if you know what you're doing, mixing two postfix operations (or any side effects) and relying on the less obvious sequence points (like function invocation) is dangerous and easy to get wrong. Clearly it is not a best practice. [†]

Java

Experimentation suggests that the postincrement operation has sequence-point-like semantics in the Java language, and the specification confirms it does! From the Java Language Specification (page 416):

The Java programming language also guarantees that every operand of an operator (except the conditional operators &&, ||, and ? :) appears to be fully evaluated before any part of the operation itself is performed.

Which combines with the definition of the postfix increment expression (page 485):

A postfix expression followed by a ++ operator is a postfix increment expression.

As well as left-to-right expression evaluation (page 415):

The left-hand operand of a binary operator appears to be fully evaluated before any part of the right-hand operand is evaluated.

Leading to the definitive conclusion that i++ * i++ will always result in 132 == 11 * 12 and i == 13 when i starts at 11.
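
If you want to check it yourself, here's a minimal sketch (the class name and prints are mine) that exercises the guarantee:

public class PostfixDemo {
    public static void main(String[] args) {
        int i = 11;
        // The left operand is fully evaluated, increment and all, before the right one.
        int result = i++ * i++;
        System.out.println(result); // 132
        System.out.println(i);      // 13
    }
}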

Python

Python specifically has no increment operators, so you don't have to deal with this kind of nonsense.

>>> count = 0
>>> count++
  File "<stdin>", line 1
    count++
          ^
SyntaxError: invalid syntax

Annoyingly for newbies, though, ++count is a valid expression that happens to look like preincrement.

>>> count = 0
>>> ++count
0
>>> --count
0

They're actually two applications of the unary plus and unary minus operators, respectively. Just one of the hazards of a context-free grammar, I suppose.

Footnotes

[*]

I threw this in because the ordeal reminds me of the classic bank account concurrency problem. If it's more confusing than descriptive, please ignore it. :-)

[†]

Since function invocation defines sequence points, I thought this code sequence guaranteed those results:

#include <stdio.h>

int identity(int value) { return value; }

int main() {
        int i = 11;
        printf("%d\n", identity(i++) * identity(i++));
        printf("%d\n", i);
        return 0;
}

As Dan points out, the order of evaluation is totally unspecified — the left-hand and right-hand subexpressions can be evaluated in either order.

Inedible vectors of spam: learning non-reified generics by example

I've been playing with the new Java features, having done only minor projects in Java since 1.4, and there have been a lot of nice improvements! One thing that made me do a double take, however, was a run-in with non-reified types in Java generics. Luckily, one of my Java-head friends was online and beat me with a stick of enlightenment until I understood what was going on.

In the Java generics system, collections are represented by two separate, yet equally important concepts — the compile-time generic parameters and the run-time casts that check the collection members. These are their stories.

Dun, dun!

An example: worth a thousand lame intros

The following code is a distilled representation of the situation I encountered:

import java.util.Arrays;
import java.util.List;

public class BadCast {

    static interface Edible {}

    static class Spam implements Edible {}

    List<Spam> canSomeSpam() {
        return Arrays.asList(new Spam(), new Spam(), new Spam());
    }

    /**
     * @note Return type *must* be List<Edible> (because we intend to
     *       implement an interface that requires it).
     */
    List<Edible> castSomeSpam() {
        return (List<Edible>) canSomeSpam();
    }

}

It produced the following error in my IDE:

Cannot cast from List<BadCast.Spam> to List<BadCast.Edible>

At which point I scratched my head and thought, "If all Spams are Edible, [*] why won't it let me cast List<Spam> to List<Edible>? This seems silly."

Potential for error

A slightly expanded example points out where that simple view goes wrong: [†]

import java.util.Arrays;
import java.util.List;
import java.util.Vector;

public class GenericFun implements Runnable {

    static interface Edible {}

    static class Spam implements Edible {

        void decompose() {}
    }

    List<Spam> canSomeSpam() {
        return Arrays.asList(new Spam(), new Spam(), new Spam());
    }

    /**
     * Loves to stick his apples into things.
     */
    static class JohnnyAppleseed {

        static class Apple implements Edible {}

        JohnnyAppleseed(List<Edible> edibles) {
            edibles.add(new Apple());
        }

    }

    @Override
    public void run() {
        List<Spam> spams = new Vector<Spam>(canSomeSpam());
        List<Edible> edibles = (List<Edible>) spams;
        new JohnnyAppleseed(edibles); // He puts his apple in our spams!
        for (Spam s : spams) {
            s.decompose(); // What does this do when it gets to the apple!?
        }
    }
}

We make a (mutable) collection of spams, but this time, unlike in the previous example, we keep a reference to that collection. Then, when we give it to JohnnyAppleseed, he sticks a damn Apple in there, invalidating the supposed type of spams! (If you still don't see it, note that the object referenced by spams is aliased to edibles.) Then, when we invoke the decompose method on the Apple that is confused with a Spam, what could possibly happen?!

The red pill: there is no runtime-generic-type-parameterization!

Though the above code won't compile, this kind of thing actually is possible, and it's where the implementation of generics starts to leak through the abstraction. To quote Neal Gafter:

Many people are unsatisfied with the restrictions caused by the way generics are implemented in Java. Specifically, they are unhappy that generic type parameters are not reified: they are not available at runtime. Generics are implemented using erasure, in which generic type parameters are simply removed at runtime. That doesn't render generics useless, because you get typechecking at compile-time based on the generic type parameters, and also because the compiler inserts casts in the code (so that you don't have to) based on the type parameters.

...

The implementation of generics using erasure also causes Java to have unchecked operations, which are operations that would normally check something at runtime but can't do so because not enough information is available. For example, a cast to the type List<String> is an unchecked cast, because the generated code checks that the object is a List but doesn't check whether it is the right kind of list.

At runtime, List<Edible> is no different from List. At compile-time, however, a List<Spam> cannot be cast to List<Edible>, because the compiler knows what evil things you could then do (like sticking Apples in there).
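
A quick way to convince yourself of the erasure (a little sketch of mine, not from the quoted article):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        // The type parameters are gone at runtime: both lists share one class object.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}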

But if you did stick an Apple in there (which, as I said, you actually can do; evidence to follow shortly), you wouldn't know anything was wrong until you tried to use it like a Spam. This is a clear violation of the "error out early" policy that allows you to localize your debugging. [‡]

In what way does the program error out when you try to use the masquerading Apple like a Spam? Well, when you write:

for (Spam s : spams) {
    s.decompose(); // What does this do when it gets to the apple!?
}

The code the compiler actually generates is:

for (Object s : spams) {
    ((Spam)s).decompose();
}

At which point it's clear what will happen to the Apple instance — a ClassCastException, because it's not a Spam!

Exception in thread "main" java.lang.ClassCastException: GenericFun$JohnnyAppleseed$Apple cannot be cast to GenericFun$Spam
        at GenericFun.run(GenericFun.java:36)

Backpedaling

Okay, so in the first example we didn't keep a reference to the List around, making it acceptable (but bad style) to perform an unchecked cast.
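
My reconstruction of that version looks something like this (the cast through the raw List type is the operative trick):

List<Edible> castSomeSpam() {
    // The raw List discards the element type, so the compiler can only warn.
    return (List) canSomeSpam();
}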

Since, under the hood, the generic type parameters are erased, there's no runtime difference between List<Edible> and plain ol' List. If we just cast to List, it will give us a warning:

Type safety: The expression of type List needs unchecked conversion to conform to List<BadCast.Edible>

The real solution, though, is to just make an unnecessary "defensive copy" when you cross this function boundary; i.e.

List<Edible> castSomeSpam() {
    return new Vector<Edible>(canSomeSpam());
}

Footnotes

[*]

Obviously a point of contention among ham connoisseurs.

[†]

This doesn't compile, because we're imagining that the cast were possible. Compilers don't respond well when you ask them to imagine things:

$ javac 'Imagine you could cast List<Spam> to List<Edible>!'
javac: invalid flag: Imagine you could cast List<Spam> to List<Edible>!
Usage: javac <options> <source files>
use -help for a list of possible options
[‡]

Note that if you must do something like this, you can use a Collections.checkedList to get the early detection. Still, the client is going to be pissed that they tried to put their delicious Ham in there and got an unexpected ClassCastException — probably best to use Collections.unmodifiableList if the reference ownership isn't fully transferred.
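
A sketch of the checkedList approach, written as a drop-in replacement for the run() body in GenericFun above (the wiring and variable names are mine; java.util.Collections is the only extra import needed):

public void run() {
    List<Spam> spams = new Vector<Spam>(canSomeSpam());
    // The checked view rejects anything that isn't a Spam at insertion time.
    List<Spam> checkedSpams = Collections.checkedList(spams, Spam.class);
    @SuppressWarnings("unchecked")
    List<Edible> edibles = (List<Edible>) (List<?>) checkedSpams;
    new JohnnyAppleseed(edibles); // ClassCastException here, not later during iteration.
}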

Thoughts on programming language fluency

I noticed that Effective Java's foreword is written by Guy Steele, so I actually bothered to read it. Here's the bit I found particularly intriguing:

If you have ever studied a second language yourself and then tried to use it outside the classroom, you know that there are three things you must master: how the language is structured (grammar), how to name things you want to talk about (vocabulary), and the customary and effective ways to say everyday things (usage).

When programmers enter the job market, the idea that "we have the capability to learn any programming language" gets thrown around a lot. I now realize that this sentiment is irrelevant in many cases, because the deciding factor in the hiring process is more often time to fluency.

Time to fluency as a hiring factor

Let's say that there are two candidates, Fry and Laurie, interviewing for a programming position using Haskell. [*] Fry comes off as very intelligent during the interview process, but has only used OCaml, and it sounds like he used mutation for all of the stuff that would make your head explode if done with monads. Laurie, on the other hand, couldn't figure out how many ping pong balls fit into Air Force One or why manhole covers are round, [†] but is clearly fluent in Haskell. Which one gets hired?

The answer to this question is another question: When are they required to be pumping out production-quality code?

Even working all hours of the day, the time to fluency for a language is on the order of weeks, independent of other scary new-workplace factors. Although books like Effective * can get you on the right track, fluency is ultimately attained through experience. Insofar as programming is a perpetual decision of what to make flexible and what to hard-code, you must spend time in the hot seat to gain necessary intuition — each language's unique characteristics change the nature of the game.

Everybody wants to hire Fry; however, Laurie will end up with the job due to time constraints on the part of the hiring manager. I'm pretty sure that Joel's interview notions are over-idealized in the general case:

Anyway, software teams want to hire people with aptitude, not a particular skill set. Any skill set that people can bring to the job will be technologically obsolete in a couple of years, anyway, so it’s better to hire people that are going to be able to learn any new technology rather than people who happen to know how to make JDBC talk to a MySQL database right this minute.

Reqs have to be filled so that the trains run on time — it's hard to let real, here-and-now schedules slip to avoid a hypothetical, three-years-later slip.

Extreme Programming as catalyst

You remember that scene from The Matrix where Neo gets all the Kung Fu downloaded into his brain in a matter of seconds? That whole process is nearly as awesome as code reviews.

Pair programming and code reviews: this is totally speculative, but from my experience I'd be willing to believe you can reduce the minimum time to fluency by an order of magnitude with the right (read: friendly and supportive) Extreme Programming environment.

What I learned: when you create interfaces for everything (instead of base classes), it's almost less work to just make a factory.

Footnotes

[*]

You know it's a hypothetical because it's a Haskell position. Bzinga!

[†]

The point is that Fry has the high ground in terms of perceived aptitude. I actually think most of the Mount Fuji questions are nearly useless in determining aptitude, though I do enjoy them. The referenced sentence is a poor attempt at a joke. ;-)

Why you should bother!

I write this entry in response to Why Should I Bother?. The answer, in short, is that I find Python to be a great language for getting things done, and you shouldn't let stupid interviewers deter you from learning a language that allows you to get more done.

I think I'm a pretty tough interviewer, so I also describe the things I'd recommend a Python coder know before applying to a Java position, based on my own technical-interview tactics.

A spectrum of formality

As my friend pointed out to me long ago, many computer scientists don't care about effective programming methods, because they prefer theory. Understandably, we computer scientists qua programmers (AKA software engineers) find ourselves rent in twain.

As a computer science degree candidate, you are inevitably enamored with complex formalities, terminology, [*] and a robust knowledge of mathematical models that you'll use a few times per programming project (if you're lucky). Pragmatic programming during a course of study often takes a back seat to familiar computer science concepts and conformance to industry desires.

As a programmer, I enjoy Python because I find it minimizes boilerplate, maximizes my time thinking about the problem domain, and permits me to use whichever paradigm works best. I find that I write programs more quickly and spend less time working around language deficiencies. Importantly, the execution model fits in my brain.

Real Computer Scientists (TM) tend to love pure-functional programming languages because they fit into mathematical models nicely — founded on recursion, Curry-Howard isomorphism, and what have you — whereas Python is strongly imperative and, in its dynamism, lacks the same sort of formality.

Languages like Java sit somewhere in the middle. They're still strongly imperative (there are no higher-order functions in Java), but there are more formalities. As the most notable example, compile-time type checking eliminates a large class of type errors before the program ever runs, which gives some programmers a sense of safety. [†] Such languages still let scientists chew on some computer sciencey problems; for example, places where values clash with the type system, such as provably eliminating NullPointerExceptions, which is fun but difficult!

The cost of that increased formality is that this class of languages is more syntax-heavy and leans on design patterns to recover some of the flexibility that dynamic typing gives you up front.

It's debatable which category of languages is easiest to learn, but Java-like languages have footholds in the industry from historical C++ developer bases, Sun's successful marketing of Java off of C++, and the more recent successes of the C# .NET platform.

It makes sense that we're predominantly taught this category of languages in school: as a result, we can play the percentages and apply for most available developer jobs. Given that we have to learn them, you might as well do some throw-away programming in them now and again to keep yourself from forgetting everything; however, I'd recommend, as a programmer, that you save the fun projects for whichever language(s) you find most intriguing.

I picture ease and rapidity of development and maintenance on a spectrum from low to high friction — the other languages I've worked in all fall somewhere higher on that spectrum than Python. Though many computer scientists much smarter than I seem to conflate formality and safety, I'm fairly convinced I attain code completion and maintainability goals more readily with the imperative and flexible Python language. Plus, perhaps most importantly, I have fun.

My technical-interview protocol

The protocol I use to interview candidates is pretty simple. [‡]

Potential Java interview weaknesses

Interviewing a candidate whose background is primarily Python-based for a generic Java developer position (as in Sayamindu's entry), I would immediately flag the following areas as potential weaknesses:

Primitive data types

A programmer can pretty much get away with never knowing how numbers are represented in Python, since integer overflow automatically promotes to an appropriately sized type.

The candidate needs to know what all the Java primitives are when the names are provided to them, and must be able to describe why you would choose to use one over another. Knowing pass-by-value versus pass-by-reference is a plus. In Python there is a somewhat similar distinction between mutable and immutable types — if they understand the subtleties of identifier binding, learning by-ref versus by-value will be a cinch. If they don't know either, I'll be worried.
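
As a toy illustration of why the choice of primitive matters (my own example, not an interview script), Java's fixed-width integers silently wrap around where Python would just keep growing the number:

public class OverflowDemo {
    public static void main(String[] args) {
        int small = Integer.MAX_VALUE;
        long big = Integer.MAX_VALUE;
        System.out.println(small + 1); // -2147483648: int arithmetic wraps around.
        System.out.println(big + 1);   // 2147483648: long has room to spare.
    }
}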

Object oriented design

The candidate's Python background must not be entirely procedural, or they won't fare well in a Java environment (which forces object orientation). Additionally, this would indicate that they probably haven't done much design work: even if they're an object-orientation iconoclast, they have to know what they're rebelling against and why.

They need to know:

  • When polymorphism is appropriate.

  • What should be exposed from an encapsulation perspective.

  • What the benefits of interfaces are (in a statically typed, single-inheritance language).

Basically, if they don't know the fundamentals of object oriented design, I'll assume they've only ever written "scripts," by which I mean, "Small, unimportant code that glues the I/O of several real applications together." I don't use the term lightly.

Unit testing

If they've been writing real-world Python without a single unit test or doctest, they've been Doing it Wrong (TM).

unittest is purposefully modeled on xUnit. They may have to learn the new JUnit 4 annotation syntax when they start work, but they should be able to claim they've worked with a JUnit 3-like API.
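
Roughly the difference, sketched from memory (the class and method names are mine). JUnit 3 finds tests by naming convention on a TestCase subclass:

import junit.framework.TestCase;

public class ArithmeticTest extends TestCase {
    // Methods named test* are picked up automatically.
    public void testAddition() {
        assertEquals(4, 2 + 2);
    }
}

JUnit 4 drops the required base class and marks tests with annotations instead:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ArithmeticAnnotatedTest {
    @Test
    public void addition() {
        assertEquals(4, 2 + 2);
    }
}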

Abstract data structures

Python has tuples, lists, and dictionaries (all polymorphic containers), and they'll do everything that your programmer heart desires. [§] Some other languages don't have such nice abstractions.

It'd be awesome if they knew:

  • The difference between injective and bijective and how those terms are important to hash function design. If they can tell me this, I'll let them write my high-performance hash functions.

They must know (in ascending importance):

  • The difference between a HashMap and a TreeMap (see the sketch after this list).

  • The difference between a vector and a linked list, or when one should be preferred over the other. The names are unimportant — I'd clarify that a vector was a dynamically growing array.

  • The "difference" between a tree and a graph.
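
As a concrete illustration of the HashMap versus TreeMap point, here's a small sketch (mine, not an interview question):

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashed = new HashMap<String, Integer>();
        Map<String, Integer> sorted = new TreeMap<String, Integer>();
        for (String key : new String[] {"spam", "apple", "ham"}) {
            hashed.put(key, key.length());
            sorted.put(key, key.length());
        }
        // HashMap: O(1) expected lookups, but the iteration order is unspecified.
        System.out.println(hashed.keySet());
        // TreeMap: O(log n) lookups, and iteration follows the keys' natural order.
        System.out.println(sorted.keySet()); // [apple, ham, spam]
    }
}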

Turning attention to you, the reader: if you're lacking in data structures knowledge, I recommend you read a data structures book and actually implement the data structures. Then, take a few minutes to figure out where you'd actually use them in an application. They stick in your head fairly well after you've implemented them once.

Some interviewers will ask stupid questions like how to implement sorting algorithms. Again, just pick up a data structures book and implement them once, and you'll get the gist. Refresh yourself before the interview, because these are a silly favorite — very few people have to implement sorting functions anymore.

Design patterns

Design patterns serve several purposes:

  • They establish a common language for communicating proposed solutions to commonly found problems.

  • They prevent developers from inventing stupid solutions to a solved class of problems.

  • They contain a suite of workarounds for inflexibilities in statically typed languages.

I would want to assure myself that you had an appropriate knowledge of relevant design patterns. More important than the names: if I describe them to you, will you recognize them and their useful applications?

For example, have you ever used the observer pattern? Adapter pattern? Proxying? Facade? You almost certainly had to use all of those if you've done major design work in Python.

Background concepts

These are some things that I would feel extra good about if the candidate knew and could accurately describe how they relate to their Python analogs:

  • The importance of string builders (the Python list-joining idiom; see the sketch after this list)

  • Basic idea of how I/O streams work (Python files under the hood)

  • Basic knowledge of typecasting (Python has implicit polymorphism)
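
For the first item, here's roughly the Java counterpart of Python's "".join(parts) idiom (a sketch of mine):

public class JoinDemo {
    public static void main(String[] args) {
        String[] parts = {"spam", "ham", "eggs"};
        // Appending to a StringBuilder avoids the quadratic cost of repeated String +.
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                builder.append(", ");
            }
            builder.append(parts[i]);
        }
        System.out.println(builder.toString()); // spam, ham, eggs
    }
}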

Practical advice

Some (bad) interviewers just won't like you because you don't know their favorite language. If you're interviewing for a position that's likely to be Java oriented, find the easiest IDE out there and write an application in it for fun. Try porting a Python application you wrote and see how the concepts translate — that's often an eye-opener. Or katas!

If you unexpectedly find yourself in an interview with these "language crusaders," there's nothing you can do but show that you have the capacity to learn their language in the few weeks of vacation you have before you start. If it makes you feel better, keep a mental map from languages to the number of jerks you've encountered — even after normalizing by developer-base size, the results can be surprising. ;-)

Footnotes

[*]

Frequently unnecessary terminology, often trending towards hot enterprise jargon, since that's what nets the most jobs and grant money.

[†]

Dynamic typing proponents are quick to point out that this doesn't prevent flaws in reasoning, which are the more difficult class of errors, and that you'll end up writing tests for these anyway.

[‡]

Clearly candidates could exploit a vulnerability in my interview protocol by mentioning only the things they know particularly well and leaving off the areas they expect me to probe; however, I generally ask them to stop after I'm satisfied they know something. Plus, the less I know about their other weaknesses, the more unsure I am about them, and thus the less likely I am to recommend them.

[§]

Though not necessarily in a performant way; e.g. note the existence of collections.deque and bisect. Since I know Python, I'd quiz the candidate to see if they knew of the more performant datatypes.