Re: [lmi] Limits on pastable census size


From: Vadim Zeitlin
Subject: Re: [lmi] Limits on pastable census size
Date: Sun, 11 Feb 2018 00:15:35 +0100

On Sat, 10 Feb 2018 21:47:46 +0000 Greg Chicares <address@hidden> wrote:

GC> What OS are you running on that machine? GNU/Linux with wine? native msw?

 Sorry for not being clear, this is under native MSW 7. My VMs don't have
much RAM allocated to them (most, including the lmi one, have just 4GiB)
and I was too lazy to reconfigure them. I could do it to test, but I think
it would be better to just fix/avoid the problem in the first place.

GC> gnome-system-monitor says that lmi uses about 900 MB by the time it has
GC> almost finished loading the ten thousand cells. Then that measurement
GC> increases, reaching a high point of 1.6 GB before 'bad_alloc'.

 This is more or less consistent with what I'm seeing here.

GC> IIRC, there's some trickery that lets an msw app use 3 GB instead of 2
GC> GB,

 This isn't done by default and needs to be enabled explicitly at system
boot, i.e. it won't be the case in practice.
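 (For reference, the "trickery" presumably refers to the large-address-aware
mechanism: on this Windows version the user address space limit is raised
with "bcdedit /set IncreaseUserVa 3072" plus a reboot, and the 32-bit
executable additionally has to be linked with the large-address-aware flag,
so an ordinary 32-bit build remains capped at 2GiB in practice.)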

GC> I'm satisfied that changing the way class Input uses memory would
GC> be a perilous mistake.

 Sorry, but where does this conclusion come from?

GC> > but I do think that the copy at the end should be
GC> > replaced by move because there just doesn't seem to be any need whatsoever
GC> > to pessimize this by allocating so much memory at the end, after already
GC> > doing all the work. In fact, I wonder why the "cells" temporary vector is
GC> > needed at all and why we couldn't just append cells to cell_parms()
GC> > directly?
GC> 
GC> It was written for simplicity, not economy, and I hesitate to change
GC> it much because I suspect a 64-bit build will move the census-size
GC> goalpost to the end of the RAM zone.

 Allocating 1.5GiB (and likely more in 64-bit builds) on a machine with
8GiB of RAM is still not going to be a very pleasant user experience.

GC> In particular, in the loop that does this:
GC>     z = YmdToJdn(ymd_t(z)).value();
GC>     values[j] = value_cast<std::string>(z);
GC> it seems that we insert normally-unacceptable values into class Input
GC> and then fix them up before its internal validation has a chance to
GC> notice...so this code might be fragile. Then, after that's done, the
GC> current census is conditionally wiped clean, but not if any of the
GC> preceding operations caused an early exit. I really think that just
GC> appending to cell_parms() is a one-way ticket to a world of hurt.

 Sorry but I just don't see why. It would be trivial to remember the
original size of the vector and roll cell_parms() back to it on any error.
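
 To make this concrete, here is a minimal self-contained sketch (not actual
lmi code) of that rollback idea; the Input struct is just a stand-in for
lmi's real Input class, and parse_row stands in for the existing per-row
parsing done in UponPasteCensus():

    #include <vector>

    struct Input {}; // stand-in for lmi's Input class (input.hpp)

    // Append freshly parsed cells directly to the destination vector,
    // restoring its original size if anything throws, so that a failed
    // paste leaves the existing census untouched.
    template<typename ParseRow>
    void append_with_rollback
        (std::vector<Input>& destination // would be cell_parms()
        ,int                 n_rows
        ,ParseRow            parse_row
        )
    {
        auto const original_size = destination.size();
        try
            {
            for(int i = 0; i < n_rows; ++i)
                {
                destination.push_back(parse_row(i));
                }
            }
        catch(...)
            {
            destination.resize(original_size); // roll back the partial append
            throw;                             // let the caller report the error
            }
    }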

GC> >  Please let me know if you'd like me to optimize this and, if you would,
GC> > how far should I go on the scale of "1 - do the absolute minimal local
GC> > changes in CensusView::UponPasteCensus() helping with memory usage" to
GC> > "10 - rewrite Input class entirely to be more memory-efficient".
GC> 
GC> On that scale of 1..10, I'd say...about 0.333 or so. If it's
GC> possible to improve the lines
GC>     std::back_insert_iterator<std::vector<Input>> iip(cell_parms());
GC>     std::copy(cells.begin(), cells.end(), iip);
GC> only, without changing anything else, then that's probably
GC> worthwhile. It would be okay to optimize only for the case
GC> where this if-statement's condition is true:
GC> 
GC>     if(!document().IsModified() && !document().GetDocumentSaved())
GC>         {
GC>         case_parms ().clear();
GC>         case_parms ().push_back(exemplar);
GC>         class_parms().clear();
GC>         class_parms().push_back(exemplar);
GC>         cell_parms ().clear();
GC> + do something like swap() here instead of copy() below...

 Yes, this is indeed a completely trivial and "free" optimization. Another
one that just cries out to be done here is to call reserve() so that all the
RAM we need is allocated only once, instead of the vector reallocating
possibly many times as it grows.

 I've created a PR with both of these changes, split into 2 commits for
convenience, please see https://github.com/vadz/lmi/pull/67
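
 To make the intent concrete, here is a minimal self-contained sketch (not
the PR itself) of what those two changes amount to at the end of
UponPasteCensus(); the Input struct and the replace_existing_census flag are
stand-ins for lmi's actual names:

    #include <algorithm>
    #include <iterator>
    #include <vector>

    struct Input {}; // stand-in for lmi's Input class

    // Hand the freshly parsed temporary over to the document. When the old
    // census is being discarded anyway (new, unmodified document), swap()
    // is O(1) and avoids copying every Input a second time; otherwise,
    // reserve the final size once and move-append instead of copy-appending.
    void adopt_pasted_cells
        (std::vector<Input>& cell_parms // the document's cells
        ,std::vector<Input>& cells      // the temporary built while parsing
        ,bool                replace_existing_census
        )
    {
        if(replace_existing_census)
            {
            cell_parms.clear();
            cell_parms.swap(cells);
            }
        else
            {
            cell_parms.reserve(cell_parms.size() + cells.size());
            std::move(cells.begin(), cells.end(), std::back_inserter(cell_parms));
            }
    }

 The same reserve() idea applies while building "cells" itself, i.e. calling
cells.reserve() once the pasted text has been split into rows and their count
is known.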

 I've also used the patch below, which I didn't include in the PR, but which
you might find useful for testing:
---------------------------------- >8 --------------------------------------
diff --git a/census_view.cpp b/census_view.cpp
index de0ef544c..3cb6b6ff6 100644
--- a/census_view.cpp
+++ b/census_view.cpp
@@ -1609,6 +1609,7 @@ void CensusView::UponRunCaseToGroupRoster(wxCommandEvent&)
 
 void CensusView::UponPasteCensus(wxCommandEvent&)
 {
+    Timer timer;
     std::string const census_data = ClipboardEx::GetText();
 
     std::vector<std::string> headers;
@@ -1791,7 +1792,8 @@ void CensusView::UponPasteCensus(wxCommandEvent&)
     list_window_->Select(z);
     list_window_->EnsureVisible(z);
 
-    status() << std::flush;
+    double total_seconds = timer.stop().elapsed_seconds();
+    status() << Timer::elapsed_msec_str(total_seconds) << std::flush;
 
     LMI_ASSERT(1 == case_parms().size());
     LMI_ASSERT(!cell_parms ().empty());
---------------------------------- >8 --------------------------------------

Using this patch and Process Explorer, I've benchmarked the changes by
pasting 1000 rows (and not 10000, because that just took too long) into a
new census; the results are:

Version        Time (s)         Peak RAM (MiB)
----------------------------------------------
Original           15.9                    124
With reserve()     13.0                    102
With swap()        11.7                     77
----------------------------------------------

The gains are not huge, but not negligible either, especially for such a
simple change, so it definitely looks worth applying. I still think we
should be able to do much better, though...

 Regards,
VZ

