[En-Nut-Discussion] NutGetMillis overflow documentation
Ulrich Prinz
uprinz2 at netscape.net
Sun Feb 7 22:04:35 CET 2010
Hi!
I see your point for using NutGetMillis(). For a precise timestamp this
is correct, but I needed to follow the complete calculation done there.
If I see it correctly, there are more problems inside the function. The
milliseconds are derived with a calculation that multiplies by 1000 in
two places. That is almost a bit shift by 10 bits (a factor of 1024),
so it eats a lot of bits at the upper end of the counter variable.
But let's see:
uint32_t NutGetMillis(void)
{
First we get the number of ticks since the system was turned on. There
are 1024 ticks/second on an AT91, so this counter overflows every
2^32 / 1024 / 60 / 60 / 24 = 48.55 days.
uint32_t ticks = NutGetTickCount();
The next line calculates the seconds since the system was turned on,
as NutGetTickClock() returns the number of ticks the system timer
produces per second.
uint32_t seconds = ticks / NutGetTickClock();
Now here it comes... Next we subtract the seconds, converted back into
ticks, from the tick count, so what remains in the ticks variable is
the number of ticks since the last full second began.
ticks -= seconds * NutGetTickClock();
The last line is a bit complex. The reason for all this is to get a
number of milliseconds from a counter that does not count in any nice
fraction of a second: the counter counts to 256 or 1024, but we need
1000 ms/s. Additionally, we have to avoid an overflow of the registers
during the calculation.
But... the part seconds * 1000 would overflow every 2^32 / 1000 seconds
= 49.71 days. That doesn't matter here, though: seconds is derived from
ticks, and ticks comes from a system clock running at 1024 ticks per
second, which itself overflows after 48.55 days and thereby resets the
seconds calculation first.
The part (ticks * 1000) / NutGetTickClock() cannot overflow, as it is
derived only from the remainder of ticks after subtracting the full
seconds: that remainder is below 1024, so ticks * 1000 stays below
1,024,000 and easily fits into 32 bits.
return seconds * 1000 + (ticks * 1000 ) / NutGetTickClock();
}
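
To make the wrap-around concrete, here is a minimal sketch that replays
the same arithmetic outside of Nut/OS (the 1024 Hz tick clock is stubbed
as a constant; millis_from_ticks() is my name, not an API):

#include <stdint.h>
#include <stdio.h>

#define TICK_CLOCK 1024UL   /* assumed AT91 tick rate, as discussed above */

/* Replays the NutGetMillis() arithmetic for a given raw tick count. */
static uint32_t millis_from_ticks(uint32_t ticks)
{
    uint32_t seconds = ticks / TICK_CLOCK;
    ticks -= seconds * TICK_CLOCK;
    return seconds * 1000 + (ticks * 1000) / TICK_CLOCK;
}

int main(void)
{
    /* One tick before the 32-bit counter wraps (~48.55 days of uptime)... */
    printf("%lu\n", (unsigned long)millis_from_ticks(0xFFFFFFFFUL));
    /* ...and one tick later the timestamp jumps back to zero. */
    printf("%lu\n", (unsigned long)millis_from_ticks(0x00000000UL));
    return 0;
}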
To have a precise timestamp, one must count the milliseconds oneself
and reset them to 0 whenever the seconds are incremented. Then use the
complete time hh:mm:ss:ms as a stamp, or combine it into a single value.
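
A minimal sketch of that idea, assuming a tick source that can invoke a
callback once per millisecond (timer_tick() and wall_time are my names,
not Nut/OS API):

#include <stdint.h>

/* Hand-maintained wall-clock timestamp, as suggested above. */
struct timestamp {
    uint8_t  hh, mm, ss;
    uint16_t ms;
};

static volatile struct timestamp wall_time;

/* Assumed to be called once per millisecond by the system timer.
 * Milliseconds roll into seconds, seconds into minutes, and so on. */
void timer_tick(void)
{
    if (++wall_time.ms < 1000)
        return;
    wall_time.ms = 0;
    if (++wall_time.ss < 60)
        return;
    wall_time.ss = 0;
    if (++wall_time.mm < 60)
        return;
    wall_time.mm = 0;
    if (++wall_time.hh == 24)
        wall_time.hh = 0;
}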
But it is a bit difficult to derive a precise 1000 ms from a timer that
delivers 1024 ticks per second. Jitter is unavoidable, unless you
change the crystal to a frequency that can be divided down to exactly
1000 ticks/second.
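
The jitter is easy to see: converting each of the 1024 ticks within one
second via (tick * 1000) / 1024 makes 24 of them not advance the
millisecond value at all. A small sketch (mine, same arithmetic as in
NutGetMillis()) that reports those ticks:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t prev = 0;

    /* Walk through one full second of 1024 ticks and report each tick
     * where the derived millisecond value fails to advance. */
    for (uint32_t tick = 1; tick < 1024; tick++) {
        uint32_t ms = (tick * 1000) / 1024;
        if (ms == prev)
            printf("tick %4lu: ms stays at %lu\n",
                   (unsigned long)tick, (unsigned long)ms);
        prev = ms;
    }
    return 0;
}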
Another simple way might be to add another byte/word/long to the
calculation. If the tick counter is extended by one byte to 40 bits,
the overflow happens only after 2^40 / 1024 = 2^30 seconds, roughly
34 years. Most devices will not live that long, and the additional
calculation will not consume too much time and memory on an AVR.
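
A common way to get such an extended counter is to widen the 32-bit
hardware count in software by watching for the wrap-around. A sketch
under my own naming (get_ticks64() is not an existing Nut/OS call):

#include <stdint.h>

/* 32-bit tick source as in Nut/OS. */
extern uint32_t NutGetTickCount(void);

/* Widens the 32-bit tick counter to 64 bits by detecting wrap-around.
 * Must be called from a single context at least once per ~48.55 days;
 * a real implementation would protect the statics against concurrency. */
uint64_t get_ticks64(void)
{
    static uint32_t last;
    static uint64_t high;            /* accumulated full 2^32-tick periods */
    uint32_t now = NutGetTickCount();

    if (now < last)                  /* the 32-bit counter wrapped */
        high += (uint64_t)1 << 32;
    last = now;

    return high + now;
}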
On an ARM a calculation with 32 bits is faster than anything else. And
with 64 bits the overflow happens only after 571,233,829 years...
This is, ah... a good tenth of the remaining lifetime they expect for
the sun?
Best regards, Ulrich