[En-Nut-Discussion] Ideas about possible banked memory usage on Ethernut 2.1 boards

Ernst Stippl ernst at stippl.org
Tue Mar 11 21:43:43 CET 2008


Hi!

Sorry for this lengthy mail summing up my ideas:
 
I have been thinking about how banked memory on Ethernut 2.1 boards could be
used in situations when it is not needed for segmented memory buffers.

I came up with the following possibilities:

1) Increase system heap size

Use the memory area 0x8000 - 0xBFFF to increase the Nut/OS heap. This can be
achieved by changing NUTXMEM_SIZE from "28416" to "44800", which raises the
upper limit of the heap from 0x7FFF to 0xBFFF. On the positive side, there is
still only one heap in the system, and no changes need to be made to the
software (see the note below). On the negative side, only one memory bank
(0x4000 bytes in size) is utilized, while 29 more banks remain unused.
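A quick arithmetic check of the figures above. Both sizes assume the same external RAM heap base, which the quoted numbers imply is 0x1100 (an inference from the arithmetic, not from any Nut/OS documentation):

```c
/* Base address inferred from the quoted sizes, not from Nut/OS docs. */
#define XMEM_BASE      0x1100u
#define XMEM_SIZE_OLD  28416u    /* heap ends at 0x7FFF */
#define XMEM_SIZE_NEW  44800u    /* heap ends at 0xBFFF */

/* Last address covered by a heap of the given size. */
unsigned heap_end(unsigned size)
{
    return XMEM_BASE + size - 1u;
}
```

The difference between the two sizes is exactly 0x4000, i.e. one bank.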

2) Introduce "private" heaps

Alternatively, use the 30 memory banks for 30 additional "private" heaps on
top of the global system heap. Each of the 30 private heaps would be located
within its own memory bank. This could be achieved by adding a second set of
NutHeapXxxx functions (NutHeapXxxxBanked) to Nut/OS, which set up and
maintain a "private" heap in each of the 30 memory banks. Additional
functions could be introduced to keep track of used/unused banks and to
switch between them. This would require the application to initiate the bank
switch: applications need to explicitly select an active bank to gain access
to the respective heap and the variables stored there.
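The bank bookkeeping could look roughly like this. All names here (NutBankSelect, NutBankReserve, NutBankRelease) are hypothetical, not existing Nut/OS API, and the bank-select hardware write is stubbed out so the sketch runs anywhere:

```c
#include <stdint.h>

#define NUTBANK_COUNT 30

static uint32_t bank_used;      /* bit i set: bank i hosts a private heap */
static int current_bank = -1;   /* bank currently mapped at 0x8000-0xBFFF */

static void NutBankSelect(int bank)
{
    /* On real hardware: write the bank number to the bank-select
       register; stubbed here for illustration. */
    current_bank = bank;
}

/* Reserve and map a free bank; a private heap would then be initialized
   in the mapped window. Returns the bank number, or -1 if all are used. */
int NutBankReserve(void)
{
    int i;
    for (i = 0; i < NUTBANK_COUNT; i++) {
        if (!(bank_used & (1UL << i))) {
            bank_used |= 1UL << i;
            NutBankSelect(i);
            return i;
        }
    }
    return -1;
}

void NutBankRelease(int bank)
{
    bank_used &= ~(1UL << bank);
}
```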

3) Relieve applications from selecting memory banks 

If we assume that the "private" heaps are best treated as thread-private,
an additional variable could be added to NUTTHREADINFO holding the bank
number of the private heap associated with the thread. This variable is
initially set to a "no_private_heap" value during thread creation, and gets
set by NutHeapAllocBanked to the memory bank containing the newly created
private heap. During NutThreadSwitch, the memory bank must be switched from
the "old" thread (the one losing control of the CPU) to the "new" thread
(the one gaining control of the CPU), in case the "new" thread has a private
heap allocated. This way, the application does not need to explicitly switch
memory banks to access "its" private heap.
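A sketch of that per-thread bookkeeping: a hypothetical td_bank member (the real NUTTHREADINFO has many more fields) plus the restore hook for the context switch, with the bank-select write again stubbed for illustration:

```c
#define NO_PRIVATE_HEAP (-1)

typedef struct {
    const char *td_name;
    int td_bank;    /* bank holding this thread's private heap, if any */
} THREADINFO;

static int current_bank = NO_PRIVATE_HEAP;

static void NutBankSelect(int bank)
{
    current_bank = bank;    /* hardware write stubbed out */
}

/* Called from NutThreadSwitch when the "new" thread gains the CPU:
   map its private-heap bank, if it has one. */
void NutThreadBankRestore(const THREADINFO *incoming)
{
    if (incoming->td_bank != NO_PRIVATE_HEAP)
        NutBankSelect(incoming->td_bank);
}
```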

4) Enhance utilization of the "private" heaps

A final step could be to allocate the threads' stacks within their
"private" heaps. This could be done by a modified "NutThreadCreateBanked"
function, which uses a currently unassigned memory bank to create a private
heap right during thread creation. The stack is then allocated within the
private heap, further relieving the main system heap from holding these
stacks. Switching between private heaps becomes a mandatory part of
NutThreadSwitch, because otherwise the threads could no longer access their
"own" stacks.
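The proposed creation sequence might look like this. Every name is hypothetical, and malloc stands in for both bank reservation and the private-heap allocator so the order of steps can be run anywhere:

```c
#include <stdlib.h>

typedef struct {
    int   td_bank;      /* bank hosting this thread's private heap */
    void *td_stack;     /* stack allocated from that private heap  */
} THREADINFO;

static int next_free_bank;      /* trivial stand-in for bank tracking */

THREADINFO *NutThreadCreateBanked(size_t stack_size)
{
    THREADINFO *td = malloc(sizeof *td);
    if (td == NULL)
        return NULL;
    td->td_bank = next_free_bank++;     /* 1: claim an unused bank       */
    /* 2: a private heap would be set up inside that bank here           */
    td->td_stack = malloc(stack_size);  /* 3: stack from the private heap */
    if (td->td_stack == NULL) {
        free(td);
        return NULL;
    }
    return td;
}
```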



Note:

Due to these changes, memory pointers may contain values greater than
0x7FFF. All code handling them must therefore be checked to use unsigned
types; otherwise signed pointer arithmetic may compromise the system.
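This can be illustrated with 16-bit address values: stored in a signed 16-bit type, addresses from 0x8000 upward become negative and comparisons misfire. (The conversion shown is the usual two's complement result, as produced by avr-gcc; the 0x1100 heap base is the value implied by the size figures in 1.)

```c
#include <stdint.h>

/* Same range check, once with a signed and once with an unsigned
   16-bit "address": only the unsigned version is correct above 0x7FFF. */
int in_heap_signed(int16_t addr)    { return addr >= 0x1100; }
int in_heap_unsigned(uint16_t addr) { return addr >= 0x1100u; }
```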

These changes (apart from 1) leave current applications unchanged. If the
additional functions described above exist in Nut/OS but are not called,
running applications will not experience any changes in their environment
besides:
- maybe two additional variables holding used/unused memory bank information
and a current bank number in internal RAM 
- an additional variable within the NUTTHREADINFO structure. 

These changes could be controlled by conditional compilation to exist only
within "bank enabled" systems.
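Such a guard could take roughly this shape; NUTMEM_BANKED_HEAPS is an assumed option name, not an existing Nut/OS configuration symbol. With banking disabled, the banked call simply falls back to the ordinary heap, so existing applications compile unchanged:

```c
#ifdef NUTMEM_BANKED_HEAPS
void *NutHeapAllocBanked(size_t size);
#else
#define NutHeapAllocBanked(size) NutHeapAlloc(size)
#endif
```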

I have looked through the email history of the Ethernut forum to verify
that I am not proposing something which has already been discussed at
length there. Besides some discussion of questions similar to 1), I have
not found messages dealing with these subjects in greater detail. I hope I
did not overlook something essential.

I realize that memory access speed differs between address ranges on the
Ethernut systems. I have not conducted any conclusive tests yet, but I
think there may be situations where the gain in heap area offsets the
longer access times to these heap areas.

What do you think? Could these additions be worthwhile?

Regards

Ernst



