December 18, 2012

Quick tips for getting into systems programming

In reply

Andrew (@ndrwdn) asked a great follow-up question to the last entry on systems programming at my alma mater:

@cdleary Just read your blog post. Are there any resources you would recommend for a Java guy interested in doing systems programming?

What follows are a few quick and general pointers on "I want to start doing lower-level stuff, but need a motivating direction for a starter project." They're somewhat untested because I haven't mentored any apps-to-systems transitions, but, as somebody who plays on both sides of that fence, I think they all sound pretty fun.

A word of warning: systems programming may feel crude at first compared to the managed languages and application-level design you're used to. However, even among experts, the prevalence of footguns motivates simple designs and APIs, which can be a beautiful thing. As a heuristic, when starting out, just code it the simple, ungeneralized way. If you're doing something interesting, hard problems are likely to present themselves anyhow!

Microcontrollers rock

Check out sites like hackaday.com to see the incredible feats that people accomplish through microcontrollers and hobby time. When starting out, it's great to get the tactile feedback of lighting up a bright blue LED or successfully sending that first UDP packet to your desktop at four in the morning.

Microcontroller-based development is also nice because you can build up your understanding of C code, if you're feeling rusty, from basic usage — say, keeping everything you need to store as a global variable or array — to fancier techniques as you improve and gain experience with what works well.

Although I haven't played with them specifically, I understand that Arduino boards are all the rage these days — there are great tutorials and support communities out on the web that love to help newbies get started with microcontrollers. AVR Freaks was around even when I was programming on my STK500. I would recommend reading some forums to figure out which board looks right for you and your intended projects.

At school, people really took to Bruce Land's microcontroller class, because you can't help but feel the fiero as you work towards more and more ambitious project goals. Since that class is still being taught, look to the exercises and projects (link above) as good examples of what's possible with bright students and four credits' worth of time. [*]

Start fixing bugs on low-level open source projects

Many open source projects love to see willing new contributors. Especially check out projects a) that are known for having good/friendly mentoring and b) that you think are cool (which will help you stay motivated).

One amazing person I worked with at Mozilla got into the project by taking the time to figure out how to properly patch some open bugs. If you take that route, either compare your patch to what a project member has already posted, or request that somebody give you feedback on your patch. This is another good way to pick up mentor-like connections.

Check out open courseware for conceptual background

I personally love the rapid evolution of open courseware we're seeing. If you're feeling confident, pick a random low-level thing you've heard of but never quite understood, type it into a search engine, and do a deep dive on a lecture or series. If you want a more structured approach, a simple search for systems programming open courseware turns up quite educational-looking results.

General specifics: OSes and reversing

@cdleary Some general but also OS implementation and perhaps malware analysis/RE.

OSes

If you're really into OSes, I think you should just dive in and try writing a little kernel on top of your hardware of choice in qemu (a hardware emulator). Quick searches turn up some seemingly excellent tutorials on writing simple OS kernels on qemu, and writing simple OSes for microcontrollers is often a student project topic in courses like the one I mention above. [†]

With some confidence, patience, maybe a programming guide, and recall of some low-level background from school, I think this should be doable. Some research will be required on effective methods of debugging, though — that's always the trick with bare metal coding.

Or, for something less audacious sounding: build your own Linux kernel with some modifications to figure out what's going on. There are plenty of guides on how to do this for your Linux distribution of choice, and you can learn a great deal just by fiddling around with code paths and using printk. Try doing something on the system (in userspace) that's simple to isolate in the kernel source using grep — like mmapping /dev/mem or accessing an entry in /proc — to figure out how it works, and leave no stone unturned.

I recommend taking copious notes, because I find that's the best way to trace out any complex system. Taking notes makes it easy to refer back to previous realizations and backtrack at will.

Read everything that interests you on Linux Kernel Newbies, and subscribe to kernel changelog summaries. Attempt to understand things that interest you in the source tree's /Documentation. Write a really simple Linux Kernel Module. Then, refer to freely available texts for help in making it do progressively more interesting things. Another favorite read of mine was Understanding the Linux Kernel, if you have a hobby budget or a local library that carries it.

Reversing

This I know less about — pretty much everybody I know that has done significant reversing is an IDA wizard, and I, at this point, am not. They are also typically Win32 experts, which I am not. Understanding obfuscated assembly is probably a lot easier with powerful and scriptable tools of that sort, which ideally also have a good understanding of the OS. [‡]

However, one of the things that struck me when I was doing background research for attack mitigation patches was how great the security community was at sharing information through papers, blog entries, and proof of concept code. Also, I found that there are a good number of videos online where security researchers share their insights and methods in the exploit analysis process. Video searches may turn up useful conference proceedings, or it may be more effective to work from the other direction: find conferences that deal with your topic of interest, and see which of those offer video recordings.

During my research on security-related things, a blog entry by Chris Rohlf caused Practical Malware Analysis to end up on my wishlist as an introductory text. Seems to have good reviews all around. Something else to check out on a trip to the library or online forums, perhaps.

Footnotes

[*]

At the end of the page somebody notes: "This page is transmitted using 100% recycled electrons." ;-)

[†]

Also, don't pass up a chance to browse through the qemu source. Want to know how to emulate a bunch of different hardware efficiently? Use the source, Luke! (Hint: it's a JIT. :-)

[‡]

One other neat thing we occasionally used for debugging at Mozilla was a VMWare-based time-traveling virtual machine instance. It sounded like they were deprecating it a few years back, so I'm not sure of its current status, but if it's still around it would literally allow you to play programs backwards!

Two postfix operations redux: sequence points

Get ready for some serious language lawyering.

I was going back and converting my old entries to reStructuredText when I found an entry in which I was wrong! (Shocking, I know.)

C

Stupid old me didn't know about sequence points back in 2007: the effects of the ++ operator in the C expression i++ * i++ are in an indeterminate state of side-effect completion until one of the language-defined sequence points is encountered (e.g. a semicolon or function invocation).

From the C99 standard, section 6.5.2.4 item 2, regarding the postfix increment and decrement operators:

The result of the postfix ++ operator is the value of the operand. After the result is obtained, the value of the operand is incremented. The side effect of updating the stored value of the operand shall occur between the previous and the next sequence point.

Therefore, the compiler is totally at liberty to interpret that expression as:

mov lhs_result, i     ; Copy the values of the postincrement evaluation.
mov rhs_result, i     ; (Which is the original value of i.)
mul result, lhs_result, rhs_result
add i, lhs_result, 1
add i, rhs_result, 1  ; Second increment clobbers with the same value!

This produces the same result as the GCC compilation in the referenced entry: i is 12 and the result is 121.

As I mentioned before, the reason this can occur is that nothing in the syntax forces the first postincrement to be evaluated before the second one. To give an analogy to concurrency constructs: you have a kind of compile-time "race condition" in your syntax between the two postincrements that could be solved with a sequence point "barrier". [*]

In this assembly, those adds can float anywhere they like after their corresponding mov instruction, and they can operate directly on i instead of the temporary if they'd prefer. Here's a possible sequence that results in a value of 132 and i as 13:

mov lhs_result, i ; Gets the original 11.
inc i             ; Increment in-place after the start value is copied.
mov rhs_result, i ; Gets the new value 12.
inc i             ; Increment occurs in-place again, making 13.
mul result, lhs_result, rhs_result

Even if you know what you're doing, mixing two postfix operations (or any side effects) while relying on the less obvious sequence points (like function invocation) is dangerous and easy to get wrong. Clearly it is not a best practice. [†]

Java

Experimentation suggests that the postincrement operation has sequence-point-like semantics in the Java language, and it does! From the Java language specification (page 416):

The Java programming language also guarantees that every operand of an operator (except the conditional operators &&, ||, and ? :) appears to be fully evaluated before any part of the operation itself is performed.

Which combines with the definition of the postfix increment expression (page 485):

A postfix expression followed by a ++ operator is a postfix increment expression.

As well as left-to-right expression evaluation (page 415):

The left-hand operand of a binary operator appears to be fully evaluated before any part of the right-hand operand is evaluated.

All of which leads to the definitive conclusion that i++ * i++ will always result in 132 == 11 * 12 and i == 13 when i starts as 11.
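
Since the language nails down the evaluation order, you can verify this yourself. Here's a tiny demo class (the class name is mine, not from the spec):

public class PostfixDemo {

    public static void main(String[] args) {
        int i = 11;
        // Left operand: yields 11, then i becomes 12.
        // Right operand: yields 12, then i becomes 13.
        int result = i++ * i++;
        System.out.println(result); // Always prints 132.
        System.out.println(i);      // Always prints 13.
    }
}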

Python

Python has no increment operators, specifically so that you don't have to deal with this kind of nonsense.

>>> count = 0
>>> count++
  File "<stdin>", line 1
    count++
          ^
SyntaxError: invalid syntax

Annoyingly for newbies, though, ++count is a valid expression that merely happens to look like preincrement.

>>> count = 0
>>> ++count
0
>>> --count
0

They're actually doubled unary plus and unary minus operators, respectively. Just one of the hazards of a context-free grammar, I suppose.

Footnotes

[*]

I threw this in because the ordeal reminds me of the classic bank account concurrency problem. If it's more confusing than descriptive, please ignore it. :-)

[†]

Since function invocation defines sequence points, I thought this code sequence guaranteed those results:

#include <stdio.h>

int identity(int value) { return value; }

int main() {
        int i = 11;
        printf("%d\n", identity(i++) * identity(i++));
        printf("%d\n", i);
        return 0;
}

As Dan points out, the order of evaluation is totally unspecified — the left-hand and right-hand subexpressions can potentially be evaluated concurrently.

Inedible vectors of spam: learning non-reified generics by example

I've been playing with the new Java features, only having done minor projects in it since 1.4, and there have been a lot of nice improvements! One thing that made me do a double take, however, was a run-in with non-reified types in Java generics. Luckily, one of my Java-head friends was online and beat me with a stick of enlightenment until I understood what was going on.

In the Java generic system, the collections are represented by two separate, yet equally important concepts — the compile-time generic parameters and the run-time casts who check the collection members. These are their stories.

Dun, dun!

An example: worth a thousand lame intros

The following code is a distilled representation of the situation I encountered:

import java.util.Arrays;
import java.util.List;

public class BadCast {

    static interface Edible {}

    static class Spam implements Edible {}

    List<Spam> canSomeSpam() {
        return Arrays.asList(new Spam(), new Spam(), new Spam());
    }

    /**
     * @note Return type *must* be List<Edible> (because we intend to
     *       implement an interface that requires it).
     */
    List<Edible> castSomeSpam() {
        return (List<Edible>) canSomeSpam();
    }

}

It produced the following error in my IDE:

Cannot cast from List<BadCast.Spam> to List<BadCast.Edible>

At which point I scratched my head and thought, "If all Spams are Edible, [*] why won't it let me cast List<Spam> to List<Edible>? This seems silly."

Potential for error

A slightly expanded example points out where that simple view goes wrong: [†]

import java.util.Arrays;
import java.util.List;
import java.util.Vector;

public class GenericFun implements Runnable {

    static interface Edible {}

    static class Spam implements Edible {

        void decompose() {}
    }

    List<Spam> canSomeSpam() {
        return Arrays.asList(new Spam(), new Spam(), new Spam());
    }

    /**
     * Loves to stick his apples into things.
     */
    static class JohnnyAppleseed {

        static class Apple implements Edible {}

        JohnnyAppleseed(List<Edible> edibles) {
            edibles.add(new Apple());
        }

    }

    @Override
    public void run() {
        List<Spam> spams = new Vector<Spam>(canSomeSpam());
        List<Edible> edibles = (List<Edible>) spams;
        new JohnnyAppleseed(edibles); // He puts his apple in our spams!
        for (Spam s : spams) {
            s.decompose(); // What does this do when it gets to the apple!?
        }
    }
}

We make a (mutable) collection of spams, but this time, unlike in the previous example, we keep a reference to that collection. Then, when we give it to JohnnyAppleseed, he sticks a damn Apple in there, invalidating the supposed type of spams! (If you still don't see it, note that the object referenced by spams is aliased to edibles.) Then, when we invoke the decompose method on the Apple that is confused with a Spam, what could possibly happen?!

The red pill: there is no runtime-generic-type-parameterization!

Though the above code won't compile, this kind of thing actually is possible, and it's where the implementation of generics starts to leak through the abstraction. To quote Neal Gafter:

Many people are unsatisfied with the restrictions caused by the way generics are implemented in Java. Specifically, they are unhappy that generic type parameters are not reified: they are not available at runtime. Generics are implemented using erasure, in which generic type parameters are simply removed at runtime. That doesn't render generics useless, because you get typechecking at compile-time based on the generic type parameters, and also because the compiler inserts casts in the code (so that you don't have to) based on the type parameters.

...

The implementation of generics using erasure also causes Java to have unchecked operations, which are operations that would normally check something at runtime but can't do so because not enough information is available. For example, a cast to the type List<String> is an unchecked cast, because the generated code checks that the object is a List but doesn't check whether it is the right kind of list.

At runtime, List<Edible> is no different from List. At compile time, however, a List<Spam> cannot be cast to a List<Edible>, because the compiler knows what evil things you could then do (like sticking Apples in there).
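
In fact, you can observe the erasure directly: two differently parameterized collections share a single runtime class. A quick sketch (class name is made up):

import java.util.List;
import java.util.Vector;

public class ErasureDemo {

    public static void main(String[] args) {
        List<String> strings = new Vector<String>();
        List<Integer> ints = new Vector<Integer>();
        // Both erase to plain java.util.Vector at runtime.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}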

But if you did stick an Apple in there (like I told you that you can actually do, with evidence to follow shortly), you wouldn't know anything was wrong until you tried to use it like a Spam. This is a clear violation of the "error out early" policy that allows you to localize your debugging. [‡]

In what way does the program error out when you try to use the masquerading Apple like a Spam? Well, when you write:

for (Spam s : spams) {
    s.decompose(); // What does this do when it gets to the apple!?
}

The code the compiler actually generates is:

for (Object s : spams) {
    ((Spam)s).decompose();
}

At which point it's clear what will happen to the Apple instance — a ClassCastException, because it's not a Spam!

Exception in thread "main" java.lang.ClassCastException: GenericFun$JohnnyAppleseed$Apple cannot be cast to GenericFun$Spam
        at GenericFun.run(GenericFun.java:36)

Backpedaling

Okay, so in the first example we didn't keep a reference to the List around, making it acceptable (but bad style) to perform an unchecked cast.

Since, under the hood, the generic type parameters are erased, there's no runtime difference between List<Edible> and plain ol' List. If we just cast to List, it will give us a warning:

Type safety: The expression of type List needs unchecked conversion to conform to List<BadCast.Edible>

The real solution, though, is to just make an (otherwise unnecessary) "defensive copy" when you cross this function boundary; i.e.

List<Edible> castSomeSpam() {
    return new Vector<Edible>(canSomeSpam());
}

Footnotes

[*]

Obviously a point of contention among ham connoisseurs.

[†]

This doesn't compile, because we're imagining that the cast is possible. Compilers don't respond well when you ask them to imagine things:

$ javac 'Imagine you could cast List<Spam> to List<Edible>!'
javac: invalid flag: Imagine you could cast List<Spam> to List<Edible>!
Usage: javac <options> <source files>
use -help for a list of possible options

[‡]

Note that if you must do something like this, you can use a Collections.checkedList to get the early detection. Still, the client is going to be pissed that they tried to put their delicious Ham in there and got an unexpected ClassCastException — probably best to use Collections.unmodifiableList if the reference ownership isn't fully transferred.
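
For concreteness, here's a rough sketch of that early detection, reusing the Spam/Edible/JohnnyAppleseed names from the examples above (and requiring import java.util.Collections):

List<Spam> spams = new Vector<Spam>(canSomeSpam());
@SuppressWarnings("unchecked")
List<Edible> edibles =
    (List<Edible>) (List<?>) Collections.checkedList(spams, Spam.class);
// The wrapper checks every insertion, so this throws ClassCastException
// here, at the add(), instead of later in the decompose() loop.
new JohnnyAppleseed(edibles);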

Why you should bother!

I write this entry in response to Why Should I Bother?. The answer, in short, is that I find Python to be a great language for getting things done, and you shouldn't let stupid interviewers deter you from learning a language that allows you to get more done.

I think I'm a pretty tough interviewer, so I also describe the things I'd recommend that a Python coder knows before applying to a Java position, based on my own technical-interview tactics.

A spectrum of formality

As my friend pointed out to me long ago, many computer scientists don't care about effective programming methods, because they prefer theory. Understandably, we computer scientists qua programmers (AKA software engineers) find ourselves rent in twain.

As a computer science degree candidate, you are inevitably enamored with complex formalities, terminology, [*] and a robust knowledge of mathematical models that you'll use a few times per programming project (if you're lucky). Pragmatic programming during a course of study often takes a back seat to familiar computer science concepts and conformance to industry desires.

As a programmer, I enjoy Python because I find it minimizes boilerplate, maximizes my time thinking about the problem domain, and permits me to use whichever paradigm works best. I find that I write programs more quickly and spend less time working around language deficiencies. Importantly, the execution model fits in my brain.

Real Computer Scientists (TM) tend to love pure-functional programming languages because they fit into mathematical models nicely — founded on recursion, Curry-Howard isomorphism, and what have you — whereas Python is strongly imperative and, in its dynamism, lacks the same sort of formality.

Languages like Java sit somewhere in the middle. They're still strongly imperative (there are no higher-order functions in Java), but there are more formalities. As the most notable example, compile-time type checking rules out a large class of type errors before the program ever runs, which gives some programmers a sense of safety. [†] Such languages still let scientists chew on some computer-sciencey problems where values clash with the type system; for example, provably eliminating NullPointerExceptions, which is fun, but difficult!

The cost of this increased formality is that this class of languages is more syntax-heavy and leans on design patterns to recover some of the flexibility dynamic typing gives you up front.

It's debatable which category of languages is easiest to learn, but Java-like languages have footholds in the industry from historical C++ developer bases, Sun's successful marketing of Java off of C++, and the more recent successes of the C# .NET platform.

It makes sense that we're predominantly taught this category of languages in school: as a result, we can play the percentages and apply for most available developer jobs. Given that we have to learn it, you might as well do some throw-away programming in it now and again to keep yourself from forgetting everything; however, I'd recommend, as a programmer, that you save the fun projects for whichever language(s) you find most intriguing.

I picture ease-and-rapidity of development-and-maintenance on a spectrum from low to high friction — other languages I've worked in fall somewhere on that spectrum as higher friction than Python. Though many computer scientists much smarter than I seem to conflate formality and safety, I'm fairly convinced I attain code completion and maintainability goals more readily with the imperative and flexible Python language. Plus, perhaps most importantly, I have fun.

My technical-interview protocol

The protocol I use to interview candidates is pretty simple. [‡]

Potential Java interview weaknesses

Interviewing a candidate whose background is primarily Python-based for a generic Java developer position (as in Sayamindu's entry), I would immediately flag the following areas as potential weaknesses:

Primitive data types

A programmer can pretty much get away with never knowing how a number works in Python, since values that would overflow are automatically promoted to appropriately sized types.

The candidate needs to know what all the Java primitives are when the names are provided to them, and must be able to describe why you would choose to use one over another. Knowing pass-by-value versus pass-by-reference is a plus. In Python there is a somewhat similar distinction between mutable and immutable types — if they understand the subtleties of identifier binding, learning by-ref versus by-value will be a cinch. If they don't know either, I'll be worried.
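
To illustrate the by-value point, here's the kind of hypothetical snippet I might ask a candidate to predict the output of. Java always copies the argument; for objects, what gets copied is the reference:

public class PassByValue {

    static void bump(int n) {
        n++; // Mutates a copy; the caller never sees this.
    }

    static void bump(int[] a) {
        a[0]++; // The reference was copied, but it aliases the caller's array.
    }

    public static void main(String[] args) {
        int x = 1;
        bump(x);
        System.out.println(x); // 1: primitives are copied outright.

        int[] xs = { 1 };
        bump(xs);
        System.out.println(xs[0]); // 2: the copied reference shares the array.
    }
}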

Object oriented design

The candidate's Python background must not be entirely procedural, or they won't fare well in a Java environment (which forces object orientation). Additionally, this would indicate that they probably haven't done much design work: even if they're an object-orientation iconoclast, they have to know what they're rebelling against and why.

They need to know:

  • When polymorphism is appropriate.

  • What should be exposed from an encapsulation perspective.

  • What the benefits of interfaces are (in a statically typed, single-inheritance language).

Basically, if they don't know the fundamentals of object oriented design, I'll assume they've only ever written "scripts," by which I mean, "Small, unimportant code that glues the I/O of several real applications together." I don't use the term lightly.

Unit testing

If they've been writing real-world Python without a single unit test or doctest, they've been Doing it Wrong (TM).

unittest is purposefully modeled on xUnit. They may have to learn the new JUnit 4 annotation syntax when they start work, but they should be able to claim they've worked with a JUnit 3-style API.
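
If they've only seen Python's unittest, the translation is nearly mechanical. A minimal JUnit 4 test looks something like this (names are invented for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ConcatTest {

    @Test
    public void concatenationWorks() {
        // Roughly the same shape as a test_* method on a
        // unittest.TestCase subclass using self.assertEqual(...).
        assertEquals("spam and eggs", "spam" + " and " + "eggs");
    }
}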

Abstract data structures

Python has tuples, lists, and dictionaries — all polymorphic containers — and they'll do everything that your programmer heart desires. [§] Some other languages don't have such nice abstractions.

It'd be awesome if they knew:

  • The difference between injective and bijective and how those terms are important to hash function design. If they can tell me this, I'll let them write my high-performance hash functions.

They must know (in ascending importance):

  • The difference between a HashMap and a TreeMap (see the sketch after this list).

  • The difference between a vector and a linked list, and when one should be preferred over the other. The names are unimportant — I'd clarify that a vector is a dynamically growing array.

  • The "difference" between a tree and a graph.

Turning attention to you, the reader: if you're lacking in data structures knowledge, I recommend you read a data structures book and actually implement the data structures. Then take a few minutes to figure out where you'd actually use them in an application. They stick in your head fairly well after you've implemented them once.

Some interviewers will ask stupid questions like how to implement sorting algorithms. Again, just pick up a data structures book and implement them once, and you'll get the gist. Refresh yourself before the interview, because these are a silly favorite — very few people have to implement sorting functions anymore.

Design patterns

Design patterns serve several purposes:

  • They establish a common language for communicating proposed solutions to commonly found problems.

  • They prevent developers from inventing stupid solutions to a solved class of problems.

  • They contain a suite of workarounds for inflexibilities in statically typed languages.

I would want to assure myself that you had an appropriate knowledge of relevant design patterns. More important than the names: if I describe them to you, will you recognize them and their useful applications?

For example, have you ever used the observer pattern? Adapter pattern? Proxying? Facade? You almost certainly had to use all of those if you've done major design work in Python.
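
To make the recognition test concrete, here's a hypothetical bare-bones observer: nothing but a listener interface and a subject that notifies whoever registered.

import java.util.ArrayList;
import java.util.List;

public class ObserverSketch {

    interface Listener {
        void onEvent(String event);
    }

    static class Subject {
        private final List<Listener> listeners = new ArrayList<Listener>();

        void addListener(Listener listener) {
            listeners.add(listener);
        }

        void fire(String event) {
            for (Listener listener : listeners) {
                listener.onEvent(event); // Notify everyone who registered.
            }
        }
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.addListener(new Listener() {
            public void onEvent(String event) {
                System.out.println("Got: " + event);
            }
        });
        subject.fire("spam arrived");
    }
}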

Background concepts

These are some things that I would feel extra good about if the candidate knew and could accurately describe how they relate to their Python analogs:

  • The importance of string builders (the Python list-joining idiom; see the sketch after this list)

  • Basic idea of how I/O streams work (Python files under the hood)

  • Basic knowledge of typecasting (Python has implicit polymorphism)
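
As a sketch of the first item: repeated String concatenation copies the accumulated string on every pass, while a StringBuilder plays the role of Python's "".join(parts). (Class and variable names are mine.)

public class JoinDemo {

    public static void main(String[] args) {
        String[] words = { "spam", "spam", "eggs", "spam" };

        // Quadratic: each += allocates and copies the accumulated string.
        String slow = "";
        for (String word : words) {
            slow += word;
        }

        // Linear (amortized): append into one growing buffer and convert
        // once, the moral equivalent of Python's "".join(words).
        StringBuilder builder = new StringBuilder();
        for (String word : words) {
            builder.append(word);
        }
        String fast = builder.toString();

        System.out.println(slow.equals(fast)); // true
    }
}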

Practical advice

Some (bad) interviewers just won't like you because you don't know their favorite language. If you're interviewing for a position that's likely to be Java oriented, find the easiest IDE out there and write an application in it for fun. Try porting a Python application you wrote and see how the concepts translate — that's often an eye-opener. Or katas!

If you unexpectedly find yourself in an interview with these "language crusaders," there's nothing you can do but show that you have the capacity to learn their language in the few weeks of vacation you have before you start. If it makes you feel better, keep a mental map from languages to the number of jerks you've encountered — even normalizing by developer-base size, the results can be surprising. ;-)

Footnotes

[*]

Frequently unnecessary terminology, often trending towards hot enterprise jargon, since that's what nets the most jobs and grant money.

[†]

Dynamic typing proponents are quick to point out that this doesn't prevent flaws in reasoning, which are the more difficult class of errors, and that you'll end up writing tests for these anyway.

[‡]

Clearly candidates could exploit a vulnerability in my interview protocol by claiming only the things they know particularly well, since I'm likely to test what they claim; however, I generally ask them to stop after I'm satisfied they know something. Plus, the less I know about their other weaknesses, the more unsure I am about them, and thus the less likely I am to recommend them.

[§]

Though not necessarily in a performant way; e.g. note the existence of collections.deque and bisect. Since I know Python, I'd quiz the candidate to see if they knew of the performant datatypes.

Java GridLayout can't center extra space

Java's GridLayout design appears to lack some forethought. You can't center the elements that are laid out by Java's GridLayout class if the space cannot be evenly distributed among the number of columns (or rows). You can do left-to-right, right-to-left, top-to-bottom, and/or bottom-to-top, but you cannot center. This seems quite silly.

In the end, I wound up subclassing GridLayout and fixing the mistake. GridBagLayout was an inappropriate alternative, since it relies on the preferredWidth of its constituent elements to lay out a container, and I just wanted a grid! A simple grid layout that centers elements in the container when it can't use up all the space follows inline:

import java.awt.*;

/**
 * Centers the lopsided extra pixel distribution from GridLayout to within
 * one-half pixel margin of error.
 *
 * @author Chris Leary
 */
public class CenteredGridLayout extends GridLayout {

    public CenteredGridLayout() {}

    public CenteredGridLayout(int rows, int cols) {
        super(rows, cols);
    }

    public CenteredGridLayout(int rows, int cols, int hgap, int vgap) {
        super(rows, cols, hgap, vgap);
    }

    /**
     * @return  3-tuple (starts, perObj, realDims).
     *          starts are starting pixel deltas in relation to the
     *              parent container
     *          perObj = (pixelsPerObjX, pixelsPerObjY)
     *          realDims = (rowsX, colsY); automatically calculated if not
     *              set to 0 on the container
     */
    private int[][] lcHelper(Container parent, int componentCount) {
        /*
         * The available space for actual objects is:
         *  parent.width - leftInset - rightInset - hgap * (cols - 1);
         *
         * Note that the (cols - 1) is because the last item needs no hgap.
         *
         * If the available space isn't evenly divisible into the number of
         * columns, we have to distribute the remainder evenly across the
         * insets.
         *
         * If the remainder isn't evenly divisible into the two insets, the
         * right/bottom inset is given the extra pixel.
         */
        int rows = getRows();
        int cols = getColumns();
        Insets insets = parent.getInsets();
        /* Calculate dimensions if not explicitly given. */
        int realCols = (cols == 0)
            ? (int) Math.ceil(((double) componentCount) / rows)
            : cols;
        int realRows = (rows == 0)
            ? (int) Math.ceil(((double) componentCount) / cols)
            : rows;

        /* Helper values. */
        int hInset = insets.left + insets.right;
        int vInset = insets.bottom + insets.top;
        int parentHeight = parent.getHeight();
        int parentWidth = parent.getWidth();

        /* Distribution calculations. */
        int hGapTotal = (realCols - 1) * getHgap();
        int vGapTotal = (realRows - 1) * getVgap();
        int widthPerItem = (parentWidth - hInset - hGapTotal) / realCols;
        int heightPerItem = (parentHeight - vInset - vGapTotal) / realRows;
        int extraWidth = parentWidth
            - (widthPerItem * realCols + hGapTotal);
        int extraHeight = parentHeight
            - (heightPerItem * realRows + vGapTotal);

        /* Package values in containers for return. */
        int[] starts = { /* x, y */
            insets.left + extraWidth / 2,
            insets.top + extraHeight / 2};
        int[] perObj = { widthPerItem, heightPerItem };
        int[] realDims = { realCols, realRows };
        return new int[][] { starts, perObj, realDims };
    }

    /**
     * Set bounds for objects within parent.
     * @param parent    Container being laid out.
     */
    @Override
    public void layoutContainer(Container parent) {
        synchronized (parent.getTreeLock()) {
            int componentCount = parent.getComponentCount();
            if (componentCount == 0) {
                return; /* Nothing to lay out. */
            }
            /* Unpack data calculated by helper. */
            int[][] params = lcHelper(parent, componentCount);
            int[] starts = params[0];
            int[] perObj = params[1];
            int[] realDims = params[2];
            int realCols = realDims[0];
            int realRows = realDims[1];

            /* Move down the height per object plus vertical gap
             * per row. */
            for (
                int y = starts[1], row = 0;
                row < realRows;
                y += perObj[1] + getVgap(), row++
            ) {
                /* Move over the width per object plus horizontal gap per
                 * row. */
                for (
                    int x = starts[0], col = 0;
                    col < realCols;
                    x += perObj[0] + getHgap(), col++
                ) {
                    int arrayIndex = row * realCols + col;
                    if (arrayIndex >= componentCount) {
                        return; /* Grid isn't full; no more components. */
                    }
                    parent.getComponent(arrayIndex)
                        .setBounds(x, y, perObj[0], perObj[1]);
                }
            }
        }
    }
}

Here's an illustration of the normal GridLayout failing to allocate my space evenly, using a left-to-right and top-to-bottom setting:

There are at most 8 pixels of extra space, since anything more would divide evenly among the nine cells.