From: Andrew Suffield
Subject: Re: [Gnu-arch-users] programming in the large (Re: On configs and huge source trees)
Date: Thu, 20 Oct 2005 12:10:26 +0100
User-agent: Mutt/1.5.11

On Wed, Oct 19, 2005 at 09:26:31PM +0200, Alfred M. Szmidt wrote:
>      Alfred> Then there is the major deficiency of tla using static
>      Alfred> libraries for hackerlab.  Assume that you have a dozen
>      Alfred> programs using hackerlab, and you find some security
>      Alfred> issue or what not in some function, you will end up
>      Alfred> recompiling everything.  Simply out of the question when
>      Alfred> you have a few hundred programs.
> 
>    I think dynamic libraries are overrated and widely abused but, yes,
>    they are also sometimes very valuable.
> 
> There are cases where static libraries are a better choice, but for
> the example I gave with hackerlab, a shared library would be far more
> suitable.  But yeah, I can't disagree that shared libraries are
> sometimes abused; but then so is everything.  Internal libraries, for
> example, are the kind that should be static.

Not if they're 'internal' libraries like libbfd from binutils, which
are used by significantly more than one binary. 'Shared library' means
'shared memory', and that is the primary reason they are absolutely
critical on modern systems. Something like hackerlab as a static
library is simply a non-starter: it would chew through system memory
at a huge pace, purely from the text image being duplicated in every
process. From some brief calculations I estimate 30-100MB wasted in
this fashion on most of my boxes (which peak at 50-200 unique binaries
running), if hackerlab replaced libc. Multiply that by half a dozen
libraries and you're *fucked*. And modern boxes have hundreds of
libraries.
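
As a rough sketch of that arithmetic (the per-binary text size below
is an assumed round number, purely for illustration, not a measured
figure), the waste is one duplicated text image per unique binary,
minus the single copy a shared library would need:

    # Assumption: a statically linked, libc-sized library adds roughly
    # 0.5-1.5 MB of text to each binary that links it.
    for text_mb in (0.5, 1.5):
        for nbins in (50, 200):              # unique running binaries
            wasted = text_mb * (nbins - 1)   # shared needs only one copy
            print("%.1f MB x %3d binaries -> ~%.0f MB duplicated text"
                  % (text_mb, nbins, wasted))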

You can only get away with using static libraries for very small
libraries or very small numbers of binaries. Anything else, forget it
- it's completely impractical. Modern workstations only have 512MB-2GB
of physical memory; you can't go wasting half of it on text. And you
*can't* just say 'increase the memory', because these boxes are
typically running at the *limit* of the memory they can carry. It is
not merely a matter of putting more chips into them. Neither is it an
addressing boundary (that's 64GB on i386 with PAE). It's a stack of
limits
related to the hardware design and economies of scale. Physical memory
is not like disk storage: even today, it is a finite and scarce
resource, and it is not increasing as rapidly as the demands on it.

So, not 'overrated' - absolutely *critical* and *not optional*. On a
unix platform, where the philosophy is lots of binaries and lots of
processes, the majority of the program code is going to have to be in
shared libraries. A unix system built on static libraries is like a
chocolate teapot.
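
One quick way to see that sharing on a Linux box (a sketch, assuming
/proc is mounted and using libc as the example library) is to count
how many running processes map the same library file executable; that
r-xp mapping is backed by one file, so its pages exist once in RAM no
matter how many processes use them:

    import glob, re

    total = sharing = 0
    for maps in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps) as f:
                contents = f.read()
        except OSError:        # process exited, or we lack permission
            continue
        total += 1
        # the read-only executable mapping of libc is its text segment
        if re.search(r"r-xp.*/libc[.-]", contents):
            sharing += 1
    print("%d of %d readable processes map libc's text" % (sharing, total))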

-- 
  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' :  http://www.debian.org/ |
 `. `'                          |
   `-             -><-          |
