ARM chars are unsigned by default

Posted by cdleary on 2012-11-14

[Latest from the "I can't believe I'm writing a blog entry about this" department, but the context and surrounding discussion is interesting. --Ed]

If you're like me, or one of the other thousands of concerned parents who has borne C code into this cruel, topsy-turvy, and oftentimes undefined world, you read the C standard aloud to your programs each night. It's comforting to know that K&R are out there, somewhere, watching over them, as visions of Duff's Devices dance in their wee little heads.

The shocking truth

In all probability, you're one of today's lucky bunch finding out that the signedness of the char datatype in C is implementation-defined. The implication being that when you write char, the compiler is implicitly (but consistently) giving it the range and behavior of either signed char or unsigned char. From the spec: [*]

The three types char, signed char, and unsigned char are collectively called the character types. The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char.

...

Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.

—ISO 9899:1999, section "6.2.5 Types"

[*]The spec goes on to say that you can figure out the underlying signedness by checking whether CHAR_MIN from <limits.h> is 0 or SCHAR_MIN. In C++ you could do the <limits>-based std::numeric_limits<char>::is_signed dance.
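For instance, a minimal sketch of the <limits.h> check might look like this:

#include <limits.h>
#include <stdio.h>

int main(void) {
    // CHAR_MIN is 0 when plain char behaves like unsigned char,
    // and SCHAR_MIN when it behaves like signed char.
    if (CHAR_MIN == 0)
        printf("char is unsigned on this platform\n");
    else
        printf("char is signed on this platform\n");
    return 0;
}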

Why is char distinct from the explicitly signed and unsigned variants to begin with? A great discussion of the historical portability questions is given here:

Fast forward [to 1993] and you'll find no single "load character from memory and sign extend" in the ARM instruction set. That's why, for performance reasons, every compiler I'm aware of makes the default char type signed on x86, but unsigned on ARM. (A workaround for the GNU GCC compiler is the -fsigned-char parameter, which forces all chars to become signed.)

Portability and the ARM Processor, Trevor Harmon, 2003

It's worth noting, though, that in modern times the ISA offers both LDRB (Load Register Byte) and LDRSB (Load Register Signed Byte) instructions, the latter of which performs the load and the sign extension in a single instruction. [†]

[†]Although the same encodings exist in the Thumb sub-ISA, the ARM sub-ISA encoding for LDRSB lacks a shift capability on the load output as a result of this historical artifact.

So what does this mean in practice? Conventional wisdom is that you use unsigned values when you're bit bashing (although you have to be extra careful bit-bashing types smaller than int due to promotion rules; a quick example follows the footnote below) and signed values when you're doing math, [‡] but now we have this third type, the implicit-signedness char. What's the conventional wisdom on that?

[‡]Although sometimes the tradeoffs can be more subtle. Scott Meyers discusses these issues quite well, as usual.
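As a quick illustration of the promotion gotcha mentioned above, here's a minimal sketch (the variable names are just for illustration):

unsigned char flags = 0x01u;
// The ~ operator promotes flags to int first, so on a platform with
// 32-bit int the result is 0xFFFFFFFE, not the 8-bit value 0xFE.
unsigned int surprising = ~flags;                 // 0xFFFFFFFE
unsigned char expected = (unsigned char) ~flags;  // truncated back to 0xFE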

Signedness-un-decorated char is for ASCII text

If you find yourself writing:

char some_char = NUMERIC_VALUE;

You should probably reconsider. In that case, when you're clearly doing something numeric, spring for a signed char so the effect of arithmetic expressions is clear across platforms. But the more typical usage is still good:

char some_char = 'a';

For numeric uses, also consider adopting a fixed-width or minimum-width datatype from <stdint.h>. You really don't want to hold the additional complexity of char signedness in your head, as integer promotion rules are already quite tricky.
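For example, a sketch using fixed-width types (the names delta and mask are just for illustration):

#include <stdint.h>

int8_t delta = -3;     // explicitly signed 8-bit value, same behavior everywhere
uint8_t mask = 0xA5u;  // explicitly unsigned 8-bit value, same behavior everywhere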

Examples to consider

Some of the following mistakes will trigger compiler warnings, but the point is that there's something worth noticing in the warning spew (or a compiler option worth reconsidering) when you're cross-compiling for ARM.

Example of badness: testing the high bit

Let's say you wanted to see if the high bit were set on a char. If you assume signed chars, this easy-to-write comparison seems legit:

if (some_char < 0)

But if your char type is unsigned, that test will never pass.
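If you really do want to inspect the high bit in a way that works regardless of the platform's choice, one option is to convert to unsigned char and mask the bit explicitly. A sketch, reusing some_char from above:

if ((unsigned char) some_char & 0x80u) {
    // High bit is set, whether char is signed or unsigned.
}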

Example of badness: comparison to negative numeric literals

You could also make the classic mistake:

char c = getchar(); // Should actually be placed in an int!
while (c != EOF)

With an 8-bit unsigned char datatype and a 32-bit int datatype, c can never compare equal to EOF, so this loop never terminates. Here's the breakdown:

When getchar() returns ((signed int) -1) to represent EOF, you'll truncate that value into 0xFFu (because chars are an unsigned 8-bit datatype). Then, when you compare against EOF, you'll promote that unsigned value to a signed integer without sign extension (preserving the value of the original unsigned char), and get a comparison between 0x000000FF (255 in decimal) and 0xFFFFFFFF (-1 in decimal). For all the values in the unsigned char range, I hope it's clear that this test will never pass. [§]

[§]Notably, if you make the same mistake in the signed char case you can breathe easier, because you'll sign extend for the comparison, making the test passable.
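The fix that the comment in the snippet alludes to is to keep the result of getchar() in an int, which can represent every unsigned char value plus EOF distinctly. A self-contained sketch:

#include <stdio.h>

int main(void) {
    int c;  // int, not char: it has to hold every byte value plus EOF
    while ((c = getchar()) != EOF) {
        putchar(c);  // process the byte; here we just echo it
    }
    return 0;
}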

To make the example a little more obvious, we can replace the call to getchar() and the EOF constant with a numeric -1 literal, and the same thing happens:

char c = -1;
assert(c == -1); // This assertion fails when char is unsigned. Yikes.

That last snippet can be tested by compiling it with GCC under -fsigned-char and then -funsigned-char if you'd like to see the difference in action.
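A self-contained version of that experiment might look like the following sketch; build it twice, once with each flag, and watch the assertion's behavior flip:

#include <assert.h>

int main(void) {
    char c = -1;
    // Passes under -fsigned-char; aborts under -funsigned-char,
    // because c holds 255 there and 255 != -1 after promotion to int.
    assert(c == -1);
    return 0;
}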