[En-Nut-Discussion] Off-topic - Building Nut/OS fast...very fast

Nathan Moore nategoose at gmail.com
Tue Jul 19 19:08:40 CEST 2011


> On 7/19/2011 5:54 PM, Nathan Moore wrote:
> > Something that I do sometimes is essentially:
> >
> >    if ! make -j all
> >    then
> >       make all
> >    fi
>
>
> No idea why _I_ didn't think about this solution. :-) Right, that should
> solve it.
>

Great!  I never actually scripted it before; I just did it by hand.
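For reference, here is the idea as a standalone script -- a sketch assuming
GNU make and that nproc is available to pick the job count:

```shell
#!/bin/sh
# Try a parallel build first; if it fails, rebuild serially so the
# error messages come out in order instead of interleaved.
if ! make -j"$(nproc)" all
then
    make all
fi
```

The serial rerun does little extra work, since make only rebuilds whatever
the failed parallel pass did not finish.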


>
>
> > I'm thinking about how output from this could be better.  The first thing
> > that comes to mind might cause  more file IO and make things worse,
> > though.   Hmmm...
>
> Another idea is to run several concurrent scripts. Except for the final
> packing, the target builds are completely independent.
>
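That could look something like this -- a sketch with made-up target
directory names, writing each build's output to its own log so the
messages don't interleave:

```shell
#!/bin/sh
# Launch each independent target build as a background job, collect
# the PIDs, then wait for all of them before the final packing step.
pids=""
for target in build-avr build-arm; do
    make -C "$target" all > "$target.log" 2>&1 &
    pids="$pids $!"
done
fail=0
for pid in $pids; do
    wait "$pid" || fail=1
done
# Only pack when every target built successfully.
[ "$fail" -eq 0 ] && make pack
```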
> > Do all object files get created there initially or do some get created in
> > the source directory and then moved?
>
> Only the final libs and binaries are copied at the end of each
> successful build. The objects are never moved.
>

While moving a file from one location to another on the same disk should
only involve directory updates, those updates can be more expensive than
you might think due to contention on the directory structures inside the
OS.  Also, on an SSD, treating the source directories as read-only during
the build should wear out the flash more slowly, since moving object files
out of them causes large directory lists to be rewritten multiple times.
Having all object files created somewhere other than the source
directories in the first place might help, though it would make the
makefiles much more complicated.


Regarding multiple cores running at 20% instead of one core at 90% when
running make with only a single job, keep in mind that you're essentially
running:

   make &  gcc & ( cc1 < file.c ) | gas -o file.o

Of these, cc1 and gas should take the most time and CPU.  There are also
OS tasks handling the disk IO and screen updates, and each of these may be
scheduled on a separate core.  I think that under heavy load Linux may
recognize that cc1 and gas are using the same data (in the pipe) and, for
CPU cache reasons, put them on the same core, but I don't know about
Windows.  Under light load, though, the scheduler is trying to keep the
CPUs busy as long as there is work for them, so running cc1 and gas on
separate cores instead of switching between them on one core might win you
something.  Even if it doesn't, it most likely doesn't hurt much, and the
same scheduling algorithms have to handle other processes (and process
behavior) as well.


Nathan


