Re: Bug in g++ 2.96


From: Momchil Velikov
Subject: Re: Bug in g++ 2.96
Date: 12 Oct 2003 12:07:49 +0300
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3.50

>>>>> "Michael" == Michael Mwangi <address@hidden> writes:

    Michael> #include <set>
    Michael> #include <vector>
    Michael> using namespace std;
    Michael> int main()
    Michael> {
    Michael>      set<unsigned long> *s1 = new set<unsigned long>();
    Michael>      vector<unsigned long> *v1 = new vector<unsigned long>();
    Michael>      unsigned long i;

    Michael>      for(i = 1; i <= 1000000; i++)
    Michael>      {
    Michael>           (*s1).insert(i);
    Michael>      }
    Michael>      (*s1).clear();
    Michael>      delete s1;
    Michael>      for(i = 1; i <= 6000000; i++)
    Michael>      {
    Michael>           (*v1).push_back(i);
    Michael>      }
    Michael>      while(1);
    Michael> }

    Michael> The second program, however, uses 48 MB in total. Hence,
    Michael> the 24 MB allocated for the set is not made available
    Michael> by delete for the subsequent vector. You stated above:

    Michael> "That simply means that the memory is not returned to the kernel.
    Michael> Nevertheless, it is still available for allocation by the same
    Michael> process."

  Fragmentation can limit the reusability of freed memory blocks.  If,
for example, the heap contained 1 GB worth of free 12-byte blocks
interleaved with in-use 4-byte blocks, an attempt to allocate 16 bytes
would either fail or resort to obtaining more memory from the kernel,
because no two free blocks are adjacent and can be coalesced, even
though 1 GB of memory sits unused.

  This is exactly what happens with the second program -- the memory
freed by deleting the set is available only as small, node-sized
chunks, whereas the vector needs one considerably bigger contiguous
chunk.
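
  For what it's worth, here is a small standalone sketch (not part of
the original report) that makes the effect visible with glibc's
mallinfo().  It assumes a glibc-based system and a libstdc++ whose
operator new forwards to malloc, so the exact numbers will differ
under 2.96's pooling allocator:

    #include <malloc.h>    /* mallinfo() -- glibc specific */
    #include <cstdio>
    #include <set>
    #include <vector>

    static void report(const char *label)
    {
        struct mallinfo mi = mallinfo();
        /* fordblks: free bytes retained inside the heap (not returned
           to the kernel)
           hblkhd:   bytes obtained with mmap for large requests */
        std::printf("%-14s heap free: %d   mmap'd: %d\n",
                    label, mi.fordblks, mi.hblkhd);
    }

    int main()
    {
        report("start:");

        std::set<unsigned long> *s1 = new std::set<unsigned long>();
        for (unsigned long i = 1; i <= 1000000; ++i)
            s1->insert(i);
        report("set filled:");

        delete s1;                 /* frees ~1M small node-sized chunks */
        report("set deleted:");    /* ...which usually stay in the process */

        std::vector<unsigned long> *v1 = new std::vector<unsigned long>();
        for (unsigned long i = 1; i <= 6000000; ++i)
            v1->push_back(i);      /* needs a few large contiguous blocks */
        report("vector filled:");

        delete v1;
        return 0;
    }

After the delete, the "heap free" figure typically stays large -- the
node memory is retained by the process -- while the vector's big
reallocations show up as mmap'd bytes obtained separately from the
kernel.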

  You can try the plain malloc/new based allocator by setting
GLIBCXX_FORCE_NEW in the environment (for gcc 2.96 the equivalent
might be the __USE_MALLOC compile-time macro or something similar).
It is slower this way, though.
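
  Concretely (a hedged sketch, assuming a POSIX shell and a libstdc++
that honours the variable), you would run the test program as

    GLIBCXX_FORCE_NEW=1 ./a.out

so that freed set nodes are returned to malloc directly, where they
can be coalesced, reused, or possibly given back to the kernel, at the
price of slower insertions.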

~velco



