[En-Nut-Discussion] Code size reduction

Harald Kipp harald.kipp at egnite.de
Thu Jun 23 11:54:50 CEST 2005


Hello Johan,

Unlike Steffen's question about transfer speed, this
one is not easily answered.

The null pointer optimization of the latest GCC that you are
talking about, which causes Nut/OS to fail, doesn't save much
code size anyway. Thus, disabling it with
   -Os -fno-delete-null-pointer-checks
still produces very compact code.

From my experience, a lot of space can be saved by optimizing
the data space. Note that a copy of all initialization data is
kept in flash memory. For example, explicitly initializing
globals to zero is a bad idea.

Using
   long a;
instead of
   long a = 0;
will save four bytes in flash.
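
To illustrate (a rough sketch, not taken from Nut/OS, with made-up
variable names): uninitialized globals end up in the zero-filled
.bss section, while initialized globals end up in .data, whose
initial values the startup code copies from flash to RAM.

   /* Goes to .bss: zero-filled at startup, no flash copy needed. */
   long counter;

   /* Goes to .data: the value 1234 is stored in flash and copied
      to RAM before main() runs. */
   long baud_factor = 1234;

   /* Depending on the compiler version, an explicit zero may also
      end up in .data; these are the four bytes mentioned above. */
   long ticks = 0;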

Strings are a good target, too. I have no idea whether AVR-GCC
supports string folding (merging identical or overlapping string
constants). If it does, it doesn't seem to work well.

Using
   prog_char s1[] = "Test th";
   prog_char s2[] = "is";
   prog_char s3[] = "at";
instead of
   prog_char s1[] = "Test this";
   prog_char s2[] = "Test that";
saves you 6 bytes, because the common prefix is stored only once:
14 bytes instead of 20, counting the terminating zeros.
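
As a sketch of how the split pieces would then be used (this assumes
avr-libc's printf_P and its %S conversion for strings located in
program space; the function name is made up):

   #include <avr/pgmspace.h>
   #include <stdio.h>

   prog_char s1[] = "Test th";
   prog_char s2[] = "is";
   prog_char s3[] = "at";

   void PrintBoth(void)
   {
       printf_P(PSTR("%S%S\n"), s1, s2);    /* "Test this" */
       printf_P(PSTR("%S%S\n"), s1, s3);    /* "Test that" */
   }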

Besides that, hand-crafted optimization of the executable code
beats every compiler optimization. Of course, that requires some
in-depth knowledge of AVR assembly language; otherwise you
won't be able to properly read the compiler's assembly listing.
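
For reference, the listing can be produced with something like the
following (example invocations only; file names are placeholders and
the exact flags depend on your Makefile):

   avr-gcc -mmcu=atmega128 -Os -S -fverbose-asm uart.c
   avr-objdump -d app.elf > app.lst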

One thing you can do to help the compiler's optimizer is
to break large routines into smaller ones.

ImageCraft's Code Compressor is impressive. If the code isn't
written well, it may save 20% or more. However, if the code has
already been optimized by an experienced programmer, the result
may be disappointing. If you have already optimized the code
towards GCC, then ICCAVR may give a poor result. For example,
ICCAVR doesn't support the 'const' modifier for pointer
parameters of functions.
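
That is, a GCC-style prototype like the following (a made-up example
of the kind of declaration meant here) is what ICCAVR stumbles over:

   int PutString(const char *str);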

IMHO, in complex applications there is no such thing as "the
optimal code". The more time you spend, the better the results
you get.

Harald

P.S. If the next project isn't urgent, you may consider the
ATmega256.



