[En-Nut-Discussion] systime limit reduces valid uptime to 49 days

Alain M. alainm at pobox.com
Thu Nov 8 14:57:58 CET 2007

Hi Harald,

(I am top-posting because I will not comment item by item.)

I agree 100% with you. I have used this arrangement for many years:
- one long variable with a millisecond counter
- one time_t with the correct time in seconds since 1970.

Let me add an extra explanation:
The millisecond overflow works because of the way "C" represents a 
long int: if you make a *subtraction* between two numbers taken before and 
after the overflow, or if you add some other long int number, the result 
is still correct, because the arithmetic simply wraps around modulo 2^32.

BUT FOR THIS TO WORK you must never compare the numbers directly; 
instead, calculate the time difference and test whether it is positive or 
negative. Example:
if ( (now_ms - end_time_ms) >= 0 )  // this works
if ( now_ms >= end_time_ms )        // this will fail at overflow


Harald Kipp wrote:
> Michael Müller wrote:
>> Hi,
>> I was quite shocked when looking at the code parts calculating the 
>> system time.
> Me too, at least a bit. But, as other people here already pointed out, 
> this is not a general problem. Timeout calculations should still work 
> during overflows.
>> The comment at the head of NutGetMillis(void) 
>> function tells about a maximum systime of 8 years. It seems to refer to 
>> the old systick of 62ms instead of the current default value of 1ms.
> Indeed, this info is outdated. Thanks for bringing this to our 
> attention. (Anyone out there to add a bug report at SourceForge or fix 
> it immediately?)
>> Are there any suggestions how to handle this?
>> - Change the systick to 62ms again (was the reason just more precision 
>> for the "user application" or was it useful / necessary for the OS, too?) ?
>> - Increase the tick variable of NutOS to a 64bit type
>> (unsigned long long)?
> Initially Nut/OS ran on 3.68 MHz systems and the timer interrupt 
> handled a lot of things, so the tick was set to 62.5 ms. AVRs became 
> faster, and the change to 1 ms was mainly done to provide finer 
> granularity for timeout values.
> My preference would be a solution which
> 1. avoids the 64-bit type long long, because it is not supported by all 
> compilers and may result in a porting nightmare;
> 2. avoids any additional code running in interrupt context, since such 
> code increases interrupt latency. See also Bernard's posting.
> Actually the problem is with the calendar functions on boards without an 
> RTC chip. Thus, the ideal solution I can think of would be an additional 
> time_t variable, which holds the number of seconds since the epoch. This 
> variable may be updated when calling NutGetSeconds() or similar 
> functions, or in the idle thread. The latter has the disadvantage that 
> it may run the update too often. Doing the update in a timer query 
> routine seems to be the most economical solution, but requires that the 
> application calls NutGetSeconds(), time() or similar at least once 
> within 24 days.
> Btw., I do not think that it is a good idea to reset nut_ticks, because 
> it would interfere with running timeouts.
> Harald
> _______________________________________________
> http://lists.egnite.de/mailman/listinfo/en-nut-discussion
