January 20, 2013

Big design vs simple solutions

The distinction between essential complexity and accidental complexity is a useful one: it lets you identify the parts of your design where you're stumbling over yourself rather than grappling with something truly reflected in the problem domain.

The simplest-solution-that-could-possibly-work (SSTCPW) concept is inherently appealing in that, by design, you're trying to minimize the pieces you may come to stumble over. Typically, when you take this approach, you acknowledge that an unanticipated change in requirements will entail major rework, and you accept that fact in light of the perceived benefits.

Benefits cited typically include:

As a more quantifiable example: if a SSTCPW contains comparatively fewer code paths than an alternative solution, you can see how some of the above merits could fall out of it.

This also demonstrates some of the appeal of fail-fast and crash-only approaches to software implementation: by accepting "failure" as a concept, you cut out unanticipated program inputs and states, which tends to home in on the SSTCPW.
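To make the fail-fast flavor of this idea concrete, here is a minimal C++ sketch; the record format and function name are hypothetical. Rather than trying to recover from malformed input, the parser asserts and crashes early, which keeps the set of reachable program states small.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical parser for a fixed-width record. Any record that fails
// the length check reflects a bug or corruption upstream, so rather
// than carrying a recovery path, we assert and crash early.
uint32_t parse_record_id(const char* buf, size_t len) {
    assert(len >= 4 && "record too short: refuse to guess");
    // Big-endian decode of the first four bytes.
    return (uint32_t(uint8_t(buf[0])) << 24) |
           (uint32_t(uint8_t(buf[1])) << 16) |
           (uint32_t(uint8_t(buf[2])) << 8) |
            uint32_t(uint8_t(buf[3]));
}
```

Every recovery path deleted here is one fewer code path to test, debug, and keep correct, which is exactly the quantifiable benefit described above.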


In my head, this approach contrasts most starkly with big-design-up-front (BDUF). The essence of BDUF is that, in the design process, one attempts to consider the whole set of possible requirements (typically both currently known and projected) and to build into the initial design and implementation the flexibility and structure to accommodate large swaths of them in the future, if not in the current version.

In essence, this approach acknowledges that the target is likely moving, tries to anticipate the target's movement, and builds in flexibility, genericity, and a more 1:1-looking mapping between the problem domain and the code constructs in order to stay ahead of the game.

Benefits cited usually relate to ongoing maintenance in some sense and typically include:

Head to head

In a lot of software engineering doctrine that I've read, been taught, and toyed with throughout the years, the prevalence of unknown and ever-changing business requirements for application software has lent a lot of credence to BDUF, especially in that space.

There have also been enabling trends for this mentality; for example, the introduction of indirection through abstractions costs monumentally less on today's JVM than on the Java interpreter of yore. In the same vein, C++ has attempted to satisfy an interesting niche in the middle ground with its design concept of "zero cost abstractions", which are intended to be known-reducible at compile time to more easily understood and more predictable underlying code forms. On the hardware side, the steady provisioning of single-thread performance and memory capacity throughout the years has also played an enabling role.
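As a hedged illustration of the "zero cost abstractions" idea (the shape types and function names here are invented for the example): the template version below abstracts over the element type just as a virtual interface would, but because the concrete type is visible at compile time, the compiler can resolve and inline the call, so no indirection need survive into the generated code.

```cpp
#include <cassert>

// A concrete type with no vtable: just data and an inlinable method.
struct Square {
    double side;
    double area() const { return side * side; }
};

// Generic code over any type providing area(). The abstraction is
// resolved at compile time; with optimization, the loop body reduces
// to direct multiplications rather than indirect calls.
template <typename Shape>
double total_area(const Shape* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i].area();
    return sum;
}
```

The same abstraction expressed with a virtual `area()` on an abstract base would pay a vtable pointer per object and an indirect call per element, which is the cost the C++ design philosophy is trying to avoid.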

By contrast, system-software implementation doctrine and conventional wisdom skew heavily towards SSTCPW, in that any "additional" design reflected in the implementation tends to come under higher levels of duress from a {performance, code-size, debuggability, correctness} perspective. Ideas like "depending on concretions" — which I cite specifically because it's denounced by the D in SOLID — are wholly accepted under SSTCPW, given that doing so (a) makes the resulting artifact simpler to understand in some sense and (b) doesn't sacrifice the ability to meet necessary requirements.
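A small C++ sketch of what "depending on concretions" can look like in practice (the allocator and buffer types are hypothetical): the consumer depends directly on the one concrete allocator the system actually has, rather than on an abstract allocator interface.

```cpp
#include <cassert>
#include <cstddef>

// The one concrete allocator this hypothetical system uses.
struct BumpAllocator {
    char buf[1024];
    size_t used = 0;
    void* alloc(size_t n) {
        if (used + n > sizeof(buf)) return nullptr;  // fail fast, no fallback
        void* p = buf + used;
        used += n;
        return p;
    }
};

// Depends on the concretion directly: the call is trivially inlined,
// the behavior is predictable, and there is no abstract interface to
// maintain for a second allocator that may never exist.
struct LineBuffer {
    BumpAllocator* heap;
    char* reserve(size_t n) { return static_cast<char*>(heap->alloc(n)); }
};
```

Inverting this dependency, as the D in SOLID recommends, would buy flexibility for a hypothetical future allocator at the cost of an extra layer to read through, debug, and keep correct today.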

So what's the underlying trick in acting on a SSTCPW philosophy? You have to do enough design work (and detailed engineering legwork) to distinguish between what is necessary and what is merely wanted, and you need some good-taste arbitration process for when there's disagreement about the classification. As part of that process, you have to make the most difficult decisions: what you definitely will not do, and what the design will not accommodate without major rework.

Thoughts on blogging quantity vs quality

I'm very hesitant to post things to my real blog. [*] I often have complex ideas that I want to convey via blog entries, and the complexity mandates that I perform a certain level of research before making any real claim. As a result, I'm constantly facing a quantity vs. quality problem. My queue of things to post is ever-increasing and the research/writing process is agonizingly slow. [†]

Just saying "screw it" and posting whatever unvalidated crap spills out of my head-holes seems promising, but irresponsible. The question I really have to ask myself is whether the world would be a better place if I posted more frequently, given that doing so comes at the cost of some accuracy and/or completeness.

I'm starting to think that the best solution is to relate a daily occurrence [‡] to a larger scheme of things. Trying to piece together some personal mysteries and detailing my thought processes may be both cathartic and productive — in the literal sense of productive.

A preliminary idea is to prefix the titles of these entries with "Thoughts on [topic]" and prefix more definitive and researched articles with a simple "On [topic]". Readers may take what I'm saying more at face value if I explicitly announce that my post relates to notions and not more concrete theories. [§]

[*] As opposed to my Tumblr account, which I use like an unencumbered Twitter.

[†] My great grammatical aspirations certainly don't speed up the process.

[‡] Perhaps one that seems inconsequential! That would make for a bit of a fun challenge.

[§] Theory: a well-substantiated explanation of some aspect of the natural world.