[En-Nut-Discussion] Using boost::preprocessor (preprocessor metaprogramming)
Thiago A. Corrêa
thiago.correa at gmail.com
Wed Feb 11 16:12:09 CET 2009
Hi,
On Wed, Feb 11, 2009 at 10:20 AM, duane ellis <ethernut at duaneellis.com> wrote:
> thiago> [earlier] [ use of boost macros ]
> thiago> It also follows the DRY principle
>
> Hmm... I'd disagree, macro expansion on this scale is repeating your self.
>
> I see value in the BOOST macros, *WHEN* things need to be independent
> functions.
>
Well, not quite. Its value is when you find yourself writing the same
template over and over with minor changes. As I said, it should be
used with care, as a last resort, not as a panacea to cure the world's
problems.
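For the sake of illustration, here is roughly what such a repetition looks
like with BOOST_PP_REPEAT (the handler name is made up; only the repetition
mechanism is the point):

#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/repetition/repeat.hpp>

/* Declares MyIrqHandler0() .. MyIrqHandler3() without writing each one out. */
#define DECLARE_HANDLER(z, n, unused) \
    void BOOST_PP_CAT(MyIrqHandler, n)(void);

BOOST_PP_REPEAT(4, DECLARE_HANDLER, ~)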
Look at the AVR32 interrupts, for example. It only has 3 vectors: INT0,
INT1 and INT2. Each of those represents a priority. An interrupt group can
be on any one of those priorities. The chip handles the higher-priority
requests first, and can even preempt lower-priority handlers
to handle higher-priority requests. When you get an INT0, for
instance, you need to check the Interrupt Cause Register to figure out the
real interrupt source in order to forward it to the proper handler.
That's done with an in-memory table.
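Roughly, the forwarding looks like this (the names below are just a sketch
of the idea, not the actual Nut/OS or ASF API):

typedef void (*irq_handler_t)(void);

#define NUM_INT0_GROUPS  8  /* placeholder; the real count comes from the part header */

extern unsigned ReadInterruptCauseRegister(int level); /* hypothetical accessor */

static irq_handler_t int0_handlers[NUM_INT0_GROUPS];

void Int0Dispatch(void)
{
    /* Ask the INTC which group raised the INT0 request, then jump
     * through the in-memory table to the registered handler. */
    unsigned group = ReadInterruptCauseRegister(0);

    if (group < NUM_INT0_GROUPS && int0_handlers[group]) {
        int0_handlers[group]();
    }
}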
Now, the problem: each device has a different number of interrupt
causes, since the devices vary in the number of peripherals and pins on the
chip, and there is a GPIO interrupt cause for each group of 8 pins.
If I make the table large enough, I can meet the UC30512's needs, but then
it's oversized for the UC31512, wasting memory. The same goes for the UC0256
or UC0128, with the added aggravation that the wasted memory is even more
precious on those parts with less internal RAM.
Because the toolchain headers (<avr32/io.h>) give me the number of groups
and causes as macro constants, I can generate tables that are a perfect
fit for the device I'm compiling for. Atmel followed that approach in
their Software Framework, except that they used their own home-made
version of boost::preprocessor. I'm using Atmel's macros for now, and
that's OK for my porting needs so far.
That is set up as a private API of the arch, but I guess other parts of
the tree, as well as user code, might face similar dilemmas, so I thought
including boost::preprocessor in the include folder of ethernut could
benefit both the library and its users.
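To make the sizing point concrete, here is a minimal sketch, assuming the
part header provides a group count macro along the lines of
AVR32_INTC_NUM_INT_GRPS (treat the exact name, and the helper below, as
assumptions):

#include <avr32/io.h>

typedef void (*irq_handler_t)(void);

/* Sized from the part header, so each device gets exactly the table it
 * needs and nothing more. */
static irq_handler_t irq_group_handlers[AVR32_INTC_NUM_INT_GRPS];

/* Hypothetical registration helper. */
int RegisterIrqGroupHandler(unsigned group, irq_handler_t handler)
{
    if (group >= AVR32_INTC_NUM_INT_GRPS) {
        return -1;
    }
    irq_group_handlers[group] = handler;
    return 0;
}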
> There is an old saying a fellow developer I worked with had - it is a
> great and wonderful way of looking at it.
>
> Do not show me your code.
> Show me your data structures
> The data structures will explain your code.
> I don't know where it came from.
I believe that is a quote from Donald Knuth, the guy who wrote "The
Art of Computer Programming", but it also seems to appear in "The
Mythical Man-Month" (very good reading). :)
> Instead of 5 functions: uart1_write() through uart5_write(),
>
> int
> uartX_write( int who, int c )
> {
> UART_REGS *p_uart = uart_pointer_table[ who ];
> ..... code .....
> }
>
> That "uartX_write()" function is functionally identical to: fputc(), by
> doing it that way, one can support any number of "open files" or in this
> example, uarts, you can make legacy code work - create uart0_write(),
> that calls uartX_write() passing the correct parameter for "X".
>
> Please don't mis-understand me - having the "uart_write()" [which calls
> uartX_write()] makes sense if you have a very common UART that is used
> for everything. [See parameter passing code size reduction below]
One can use the device structure and its base address field for that.
That's what I'm doing for the AVR32 u(s)arts.
That's a different problem, and I wouldn't apply macro expansion to it.
Perhaps this could be used on ARM as well: as long as the registers have a
fixed offset from a base peripheral memory address, this should work.
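Something along these lines, assuming NUTDEVICE's base address field is
dev_base (the register layout and status bit below are made up; only the
idea of keying everything off the base address is the point):

#include <sys/device.h>

typedef struct {
    volatile unsigned long thr;  /* transmit holding register (illustrative) */
    volatile unsigned long csr;  /* status register (illustrative) */
} FAKE_USART_REGS;

static void UsartPutChar(NUTDEVICE *dev, char c)
{
    FAKE_USART_REGS *usart = (FAKE_USART_REGS *) dev->dev_base;

    /* One routine serves every USART instance, because only the base
     * address differs from one to the next. */
    while ((usart->csr & 0x02) == 0)  /* hypothetical TXRDY bit */
        ;
    usart->thr = (unsigned long) c;
}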
> In your example, instead of "boiler plate code" inside a switch
> statement, why is it not parameterized into a table like this:
>
> GPioSPi0ChipSelect( uint8_t cs, int hi )
> {
> if( cs < SOME_VALUE_N_PINS ){
> GpioPinSet( foo[cs].pin, foo[cs].bit, hi );
> GpioPinConfigSet( foo[cs].port, foo[cs].bit, GPIO_CFG_OUTPUT );
> return foo[cs].gspi_reg_ptr;
> } else {
> errno = EIO;
> return NULL;
> }
> }
>
>
> More importantly, these are embedded devices. The above TABLE driven
> approach I believe results in a smaller code footprint than the BOOST
> macros (which run on huge machines that can manage bloat).
That's a trade-off we use in ethernut in several places: we are
trading RAM space for ROM/FLASH space here.
> Lastly, when ever I see code that calls several functions in a row, it
> *screams* refactor to me, for example the two GPIO functions *SCREAM*
> for a function that perhaps looks like this:
>
> GpioSetOutput( int PIN_ID, int VALUE )
>
> Which - from the PIN_ID calculates the "port" and "bit" value, and sets
> the direction - thus resulting in even smaller code, for example:
> "PIN_ID div 8" = port number, (1 << (PIN_ID mod 8)) = bitnumber. Or is
> table driven. That function would of course - both *SET* and *CONFIG*
> the pin in the proper way.
On the AVR32 port, I had to create a macro to split the pin number
that comes from the toolchain headers into bank and pin, to fit the
API. I don't know whether your proposal could be used on all platforms,
though.
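For what it's worth, the split itself is simple, since the AVR32 GPIO
banks are 32 pins wide (the pin macro in the comment is the kind of name
the toolchain headers use; treat it as an assumption):

#define GPIO_BANK_OF(flat_pin)  ((flat_pin) / 32)
#define GPIO_PIN_OF(flat_pin)   ((flat_pin) % 32)

/* e.g. GpioPinConfigSet(GPIO_BANK_OF(AVR32_PIN_PB05),
 *                       GPIO_PIN_OF(AVR32_PIN_PB05), GPIO_CFG_OUTPUT); */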
> I'll bet those two functions are used over and over again in pairs in
> other places in the code. This type of refactoring would
> also reduce overall code size even more.
Not necessarily. My application, for instance, will only call
GpioSetConfig during initialization, and never again.
> Another small item - by passing less parameters to a function, the
> overall code size shrinks. There is less 'setup' required before each
> function call. And less "messing with parameters" in the function entry
> code. There is also less run time stack usage. In this case, I believe
> you are porting to an avr32 platform, perhaps passing a larger parameter
> (which costs nothing extra in code space) is another approach. The
> underlying machine here is not 8-bit like an 8051.
GCC usually prefers to pass parameters in registers whenever possible.
In this case, all 3 parameters can be passed in registers. You still have
the moves in the caller, but that's much less code than going through the
stack.
> Lastly: maybe there is something about the chip I don't know. Is there
> a specific reason why the pin must be reconfigured every time you set it
> as an output? That seems odd, and a waste of time. If true, that change
> would improve overall speed/performance - yes, only slightly.
AFAIK, 8-bit AVR, ARM and AVR32 don't need the pin to be reconfigured on
each call. That could probably be moved to an initialization routine and
be called only once, unless the pin doubles as an input in other parts of
the code.
For AVR32 I've expanded GpioSetConfig to allow me to change the
peripheral pin function. Unlike other MCUs I've used, just enabling
the peripheral won't change the pin functions; you have to do so
explicitly. Each pin can have up to 4 functions according to the
architecture manual, and a peripheral might have different pin choices
for a line. One example of that is the MACB interface on the AP7000, which
can be enabled on different sets of pins, so you don't have to clobber the
LCD pins, for instance, if you want both.
But then, that's also initialization, and I only place those calls in
my device initialization routine.
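As a rough sketch, that init code ends up looking something like this
(the flag and pin constants are made up for illustration, and the
GpioPinConfigSet prototype is assumed; none of these are the real Nut/OS
identifiers):

extern int GpioPinConfigSet(int bank, int bit, unsigned int flags); /* assumed prototype */

#define GPIO_CFG_PERIPHERAL_B  0x0200  /* made-up flag selecting function B */
#define MACB_TXD0_BANK         1       /* placeholder bank */
#define MACB_TXD0_PIN          3       /* placeholder pin  */

void BoardInitMacbPins(void)
{
    /* Route the MACB TXD0 line to its alternate pin set, leaving the
     * LCD pins untouched. */
    GpioPinConfigSet(MACB_TXD0_BANK, MACB_TXD0_PIN, GPIO_CFG_PERIPHERAL_B);
}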
> These are many little things that add up - here in the US - I'd use the
> phrase: "The straw that broke the camel's back"
I have a solid C++ background, and I like what Stroustrup said in
his book about C++: you should only pay for what you use.
That, and the part about being multi-paradigm, not forcing its
programming philosophy on you as other languages do :D
Kind Regards,
Thiago A. Correa