September 18, 2008

Thoughts on self-modifying code and Futurist Programmers

Around 8th grade I read an article about a faction of programmers — the Futurist Programmers — whose rallying cry is paraphrased in the following quotation:

Why does computer science reject self modifying programs? Why have some departments stopped teaching assembly language programming? On what scientific basis has this been done? Where is the experimental evidence to support these actions?

As far as I remember, this movement attempted to emphasize the purity of computer programming, which they believed was a form of artistry. It was posed as a throwback, in the context of computer programming, to the tenets of Italian Futurism, which opposed tradition and commoditization. A Wikipedia excerpt will probably be helpful:

The Futurists admired speed, technology, youth and violence, the car, the plane and the industrial city, all that represented the technological triumph of humanity over nature, and they were passionate nationalists.

Thinking about JavaScript Just In Time compilers (JITs) today — like TraceMonkey — reminded me of this philosophy. I believe that their line of questioning was insightful, but the formulation was misdirected. Technological triumph stems primarily from computers doing what humans want them to do. It's additionally awesome if the computers can do these things extra quickly; however, if they do things incorrectly very quickly, humanity comes out much less triumphant. Perhaps we even come out worse for the experience.

Secondly, we note that humanity strives for the ability to make further progress based on the success of past experiences. This is the concept of extensibility and reusability. Standing on the shoulders of giants, if you will. Self modifying code that I have encountered is often very clever; however, programming cleverness tends to be at odds with readability. [*] This is not to say that all self-modifying code is unreadable: in languages with dynamic method dispatch, swapping an object's methods out (with some kind of locking mechanism) is a recognized idiom that can lead to beneficial efficiency/complexity trade-offs. [†]

Ultimately, you'd have trouble finding computer enthusiasts who find speed unimportant. Everybody loves it when their computers are more efficient! The caveat is that most computer enthusiasts will, in many situations, rank speed below correctness and extensibility. As a testament to this, there is the continuing emergence and acceptance of Very High Level Languages (VHLLs) over low-level programming languages in non-academic contexts.

So how did the futurists have the right idea? "Introspective" programs are important. There's lots of information at runtime that we can use to more efficiently execute programs. [‡] Hotspot JITs, such as the aforementioned TraceMonkey, know this well: the basic premise is that they dynamically rewrite the code they're executing or, in recent developments with Google's V8, rewrite it before executing. The key here is that we can now:

  1. Write correct, extensible programs.

  2. Write correct, extensible programs to optimize the programs from 1.

  3. Run the more efficient result of combining 2 and 1.
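As a toy illustration of that three-step split (my own sketch, not how TraceMonkey or V8 are actually built), here is a correct program, a second program that optimizes the first via constant folding with the `ast` module, and an execution of the combined result:

```python
import ast
import operator

source = "result = 6 * 7 + 1"  # step 1: a correct, extensible program

# Step 2: a correct, extensible program that optimizes other programs --
# here, a tiny constant folder over binary arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

class ConstantFolder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold the children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            folded = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(folded), node)
        return node

optimized = ast.fix_missing_locations(ConstantFolder().visit(ast.parse(source)))

# Step 3: run the more efficient combination of 1 and 2.
namespace = {}
exec(compile(optimized, "<optimized>", "exec"), namespace)
```

A real JIT folds in runtime feedback rather than static constants, but the division of labor between the program and its optimizer is the same.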

Self-hosting platforms such as PyPy and intermediate-representation JITs such as LLVM also show astonishing insight into introspective techniques. These platforms can be used to a number of ends, including, but not limited to, the increases in speed that the Futurist Programmers seem to be longing for.

In the end, I only have one rebuttal question for the Futurist Programmers: What kind of science disregards the accuracy and reproducibility of results for the sake of fast "experiments"? [§] We don't reject self-modifying programs without consideration — there are very important maintainability and extensibility concerns that have to be taken into account before making a decision. It's not always a choice between making something artistically beautiful or performing a feat of engineering: if most computer enthusiasts are like me, they're searching for a way to produce an appropriate mix of the two.



[*] This is generally recognized within the Python community.


[†] As an example of this, think of the singleton access pattern in a multithreaded application. After Singleton.get_instance() has instantiated the class on the first call, you could swap get_instance() with a method that simply returns the created reference. This avoids subsequent locking and singleton-instantiation checking that you would incur from the old get_instance() method.
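A minimal sketch of that idiom (class and method names hypothetical, locking via threading.Lock):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # Slow path: only ever taken until the first instantiation completes.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
                # Self-modification: swap in a lock-free fast path so that
                # later callers skip both the lock and the None check.
                cls.get_instance = classmethod(lambda c: c._instance)
        return cls._instance
```

After the first call, `Singleton.get_instance()` resolves to the swapped-in fast path, trading a little cleverness for the per-call locking overhead.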


[‡] I recommend the Steve Yegge talk on dynamic languages for some more background on this topic.


[§] What is an application if not a software engineer's big, scary experiment?

Thoughts on blogging quantity vs quality

I'm very hesitant to post things to my real blog. [*] I often have complex ideas that I want to convey via blog entries, and the complexity mandates that I perform a certain level of research before making any real claim. As a result, I'm constantly facing a quantity vs. quality problem. My queue of things to post is ever-increasing and the research/writing process is agonizingly slow. [†]

Just saying "screw it" and posting whatever un-validated crap spills out of my head-holes seems promising, but irresponsible. The question that I really have to ask myself is whether or not the world would be a better place if I posted more frequently, given that it is at the cost of some accuracy and/or completeness.

I'm starting to think that the best solution is to relate a daily occurrence [‡] to a larger scheme of things. Trying to piece together some personal mysteries and detailing my thought processes may be both cathartic and productive — in the literal sense of productive.

A preliminary idea is to prefix the titles of these entries with "Thoughts on [topic]" and prefix more definitive and researched articles with a simple "On [topic]". Readers may take what I'm saying more at face value if I explicitly announce that my post relates to notions and not to more concrete theories. [§]



[*] As opposed to my Tumblr account, which I use like an unencumbered Twitter.


[†] My great grammatical aspirations certainly don't speed up the process.


[‡] Perhaps one that seems inconsequential! That would make for a bit of a fun challenge.


[§] Theory: a well-substantiated explanation of some aspect of the natural world.

Using Python identifiers to helpfully indicate protocols


The Stroop Effect suggests that misleading identifiers are more prone to improper use and introduce subtler bugs. Because of this phenomenon, I try to make my identifiers' intended usage as clear as possible without over-specifying and/or repeating myself. Additionally, I prefer programming languages which allow for latent typing, and this has an interesting result: I end up encoding protocol indicators into identifiers. [*]

An Example

If you're (still) reading this, you're most likely a Python programmer. When you find that there exists an identifier chunks, what "kind of thing" do you most expect chunks to be bound to? Since this is a very hand-wavy question, I'll provide some options to clarify:

  1. A sequence (iterable) of chunk-like objects.

  2. A callable that returns chunks.

  3. A mapping with chunk-like values (presumably not chunk-like keys).

  4. A number (which represents a count of chunk-like objects somewhere in the problem domain).

If you've got a number picked out, then know that I'm the bachelor behind door number one. Since I would identify a lone chunk by the identifier chunk, the identifier says to me, "I'm identifying some (a collection of) chunks." By iterating, I'm asking to hold the chunks one at a time. (Yuck. :)
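Sketched out with hypothetical values, that reading looks like:

```python
chunk = 'Spam'             # one chunk-like object: singular identifier
chunks = ['Spam', 'Eggs']  # a collection of them: plural identifier

for chunk in chunks:       # asking to hold the chunks one at a time
    pass
```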

Callables and Action Words

If you chose door number two and think that it's a callable, then your bachelor is this Django project API, which I am using in this particular (chunky) example. This practice is not at all uncommon, however, and another good example present within the Python standard library is BaseHTTPServer, with its version_string and date_time_string methods. I might be missing something major; however, I'm going to claim that callables should be identified with action word (verb) prefixes.

To me, it seems well worth it to put these prefixes onto identifiers that are intended to be callables to make their use more readily apparent. To my associative capacities, action words and callables go together like punch and pie. Since it helps clarify usage while writing code, it seems bound to help clarify potential errors while reading code, as in the following contrast:

for chunk in uploaded_file.get_chunks:
    # Looks wrong and feels wrong writing it: an action word with no
    # invocation is a red flag.
    ...

for chunk in uploaded_file.chunks:
    # Looks fine and feels okay writing it, but uploaded_file.chunks is
    # really a method, so the mistake only surfaces at runtime.
    ...

Mappings and Bylines

If you chose number three and think that it's a mapping, I'm surprised. There's nothing about the identifier to indicate that there is a key-value relation. Additionally, attempting to iterate over chunks, if it is a mapping with non-chunk keys, will end up iterating over the (non-chunk) keys, like so:

>>> chunks = {1: 'Spam', 2: 'Eggs'}
>>> i = iter(chunks)
>>> repr(i)
'<dictionary-keyiterator object at 0xb7da4ea0>'
>>> list(i)
[1, 2]

chunks being a mapping makes the code incompatible with the people who interpret the identifier as an iterable of chunks (number one), since the iterator method (__iter__) for a mapping iterates over the keys rather than the chunks. This is the kind of mistake that I dislike the most: a potentially silent one! [†]

To solve this potential ambiguity in my code I use "bylines", as in the following:

>>> chunk_by_healthiness = {1: 'Spam', 2: 'Eggs'}
>>> healthiness_values = iter(chunk_by_healthiness)

Seeing that the identifier has a _by_healthiness postfix tells me that I'm dealing with a mapping rather than a simple sequence, and the code tends to read in a more straightforward manner: if it has a _by_* postfix, that's what the default iterator will iterate over. In a similar fashion, if you had a mapping of healthiness to sequences of chunks, I would name the identifier chunks_by_healthiness. [‡]
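Putting the convention together (identifiers and values hypothetical), the byline tells you exactly what falls out of the default iterator:

```python
chunk_by_healthiness = {1: 'Spam', 2: 'Eggs'}    # one chunk per key
chunks_by_healthiness = {1: ['Spam', 'Spamlet'], # a sequence of chunks per key
                         2: ['Eggs']}

# The _by_* postfix says: iterating yields healthiness keys, not chunks.
healthiness_values = list(chunk_by_healthiness)

# To actually reach the chunks, ask for the values explicitly.
flattened = [chunk
             for chunks in chunks_by_healthiness.values()
             for chunk in chunks]
```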

Identifying Numbers

If you chose number four, I see where you're coming from but don't think the same way. Every identifier whose purpose is to reference a numerical count I either prefix with num_ or postfix with _count. This leaves identifiers like chunks free for sequences that I can call len() on, and indicates that chunk_count has a number-like interface.
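For instance (identifiers hypothetical):

```python
chunks = ['Spam', 'Eggs', 'Bacon']  # a sequence: len() and iteration apply
chunk_count = len(chunks)           # _count postfix: a number-like interface
num_retries = 3                     # the num_ prefix carries the same signal
```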

Compare/Contrast with Hungarian Notation

Through my day-to-day usage I find that this approach doesn't really suffer from the Wikipedia-listed criticisms of Hungarian notation.

Unless I sorely misunderstood the distinction, you could classify this system as a broadly applicable Apps Hungarian, since protocols are really all about semantic information (being file-like indicates the purpose of being used like a file!). Really, this guideline just developed from a desire to use identifiers that conform to some general notions that we have of language and what language describes; I don't tend to think of chunks as something that I can invoke. (Invoke the chunks!)

For objects that span multiple protocols or aren't "inherently" tied to any given protocol, I just use the singular.

Potential Inconsistencies

Strings can be seen as an inconsistency in this schema. Strings really fall into an ordered sequence protocol, but their identifiers are in the singular; e.g. "message". One could argue that strings are really more like sequences of symbols in Python and that the identifiers would be more consistent if we used something like "message_letters". These sorts of identifiers just seem impractical, so I'm going to reply that you're really identifying a message object adapted to the string protocol, so it's still a message. Feel free to tear this argument apart. ;)



[*] A protocol in Python is roughly a "well understood" interface. For example, the Python file protocol starts with an open() call that returns a file-like handle: this handle has a read() method, a write() method, and usually a seek() method. Anything that acts in a file-like way by using these same "well understood" methods can usually be used in place of a file, so the term protocol is used due to the lack of a de jure interface specification. For a really cool PEP on protocols and adapters, read Alex Martelli and Clark Evans' PEP 246. (Note: This PEP wasn't accepted because function annotations are coming along in Python 3, but it's still a really cool idea. :)
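A quick sketch of that duck typing (count_lines is a name of my own invention): anything offering the protocol's methods can stand in for a real file:

```python
import io

def count_lines(handle):
    # Relies only on the file protocol -- iteration over lines -- rather than
    # a concrete file type, so an in-memory StringIO qualifies just as well
    # as a handle returned by open().
    return sum(1 for _line in handle)

in_memory = io.StringIO("spam\neggs\n")
count_lines(in_memory)
```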


[†] Perl has lots of silently conforming behavior that drives me nuts.


[‡] I still haven't figured out a methodical way to scale this appropriately in extreme cases; i.e. a map whose values are also maps becomes something like chunk_by_healthiness_by_importance, which gets ugly real fast.

Thoughts on C++ in small memory footprint embedded development


During my senior year I took on an ECE491 Independent Project course to follow up on an ECE476 Microcontrollers project. By the completion of ECE491 we had created a Low Speed USB 2.0 stack library for the Atmel Mega32 Microcontroller using ~$6 worth of hardware and ~6000 standard lines of C.

Everybody else in ECE476 used the CodeVisionAVR IDE; our group was unique in using avr-gcc. Though most students were okay with CodeVision, there were some minor features missing from its compiler at the time, such as the ability to allocate objects on the heap. ;)

We rewrote the ECE476 code base in ECE491, again using avr-gcc, because we realized that the USB protocol was a lot more complex than the stack we had originally written. One of my main gripes in ECE491 was that I was writing highly object oriented code in a language with no supporting syntax for it. I'm starting to hack on the code base again, and a port to C++ seems like a good idea (since avr-g++ is also available), but there are some significant trade-offs running through my mind.

The Trade-offs

Things I want from C++ in the project:

Things I don't want from C++ in the project:

The First Google Hit Says...

I've read through Reducing C++ Code Bloat and found it thought provoking. Though the article targets gcc 3.4 and I'm using gcc 4.2, I can't imagine that the underlying code-bloat concepts have changed much. I'm betting a lot of the compiler directive advice is taken care of by gcc's -Os, but I'll make a note to check it out.

It seems sensible to give up on exceptions ahead of time, but there seems to be some hope that the compiler can figure out good code reuse for the templates. I'm compiling to ELF, then performing an objcopy to turn it into Intel Hex object format — I'm hoping that the conversion is trivial and the good ELF compilation referenced in the article will stick.

In the end it seems like I'm just gambling on how much template reuse will occur. I sure hope that if I do all the porting-to-C++ work it optimizes well — template hoisting looks like one of those idioms I'd prefer to leave alone. :(

From Blogger to Wordpress

I've decided to move my blog from my Blogger account to a Wordpress install on my personal domain. Once I took a gander at all the new features and capabilities of Wordpress, the choice wasn't very difficult.

Issues with Blogger

Posting mixed text and code over the course of my blogging history presented interesting problems. From what I could tell, each blog on Blogger seems to have a global "interpret newline as <br />" setting, which prevented me from switching styles (to use <br /> explicitly) without editing all of my previous posts. When you mix this with the fact that I was using Vim's "generate highlighted syntax as HTML" feature in lieu of searching for a proper way to post source code in Blogger (which I found far too late in the game :), the GeSHi Syntax Highlighter plugin for Wordpress was looking mighty fine. I haven't pinpointed any exact reasons, but the line breaks and HTML equivalents feel a lot more natural in Wordpress than they did in Blogger.

The Blogger backlinks ("Links to this post") capability wasn't cutting it for me. Due either to the infrequency of my posting, the irrelevance of my posts, or my (heretofore) unwillingness to advertise my blog, I couldn't find backlinks via the Blogger service that I knew to exist. The ping system that Wordpress employs seems a lot more enabling for a low-profile blogger like myself. It's possible that my previous blog never received a ping and that Blogger actually has this feature as well; however, I knew of a few other blogs that linked to my Blogger blog that didn't show up (comments were disabled — maybe that was a problem?).

Easy Feed Migration, Evil URI Migration

Thanks to FeedBurner decoupling my feed URI from my blog URI, the feed migration process was easy as pie. It makes me think that everybody should use FeedBurner, if only for an extra level of indirection between the blog hosting and the RSS pollers.

I was evil, however, and totally dropped my old URIs. My Blogger blog wasn't very highly read or recognized, so I figured rather than go through some painful URI redirection process via <meta> tag manipulation in the blogger template, I'd just delete my old blog. Slightly evil, but significantly productive. I'll just cross my fingers and hope that the people who cared were subscribed to my RSS feed as well. :/

Trying out Comments

I did not enable comments on my Blogger account. In theory, I don't like blog comments — they provide inadequate space for a conversation and proper synthesis of the ideas surrounding it. I'm fairly convinced that commenting systems are flawed for low-traffic blogs like my own, and that blog-entry-to-blog-entry responses are much more maintainable, scalable, and helpful for bloggers without thousands of readers; however, I'm willing to give comments another short test period before turning them off.

Categories and Tags?

One of the most foreign things in the Wordpress installation is the category/tag duality. It seems that these two things are distinct, as explained in the Wordpress Glossary:

Think of it like a Category, but smaller in scope. A post may have several tags, many of which relate to it only peripherally.

For the time being, I've only promoted a few of the most-used labels from tags to categories, which I figure I'll continue to do once the tags cross some arbitrary threshold of posts. It sounds kind of neat to have two tiers of categorization — you can go a little wild in the lower tier while keeping the upper tier simple and clean.