
Re: [lmi] progress display


From: Greg Chicares
Subject: Re: [lmi] progress display
Date: Thu, 30 Oct 2014 12:05:34 +0000
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:24.0) Gecko/20100101 Thunderbird/24.6.0

On 2014-10-30 01:02Z, Vadim Zeitlin wrote:
[...]
>  Here is an experimental patch showing the gauge in the status bar; please
> let me know what you think. The main problem I see with it is that it's
> impossible to cancel it any more. But this might not be a real problem in
> practice if it's really always as fast as you measured. And if it is, we
> can always add a small "x" button near the gauge, also in the status bar,
> to allow cancelling it.
> 
>  The other problem is that it's really just too fast for this progress
> meter to even be useful. There is not much I can do here other than hope
> that some of your users have slower machines or maybe bigger, more complex
> censuses.
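
[For anyone reading along without the patch: embedding a gauge in a frame's
status bar presumably looks something like the minimal sketch below. The
class names and layout are illustrative only, not the patch itself; real
code would also reposition the gauge in an EVT_SIZE handler.]

    #include <wx/wx.h>
    #include <wx/gauge.h>

    class GaugeFrame : public wxFrame
    {
      public:
        GaugeFrame() : wxFrame(nullptr, wxID_ANY, "status-bar gauge sketch")
        {
            CreateStatusBar(2); // field 0 for text, field 1 for the gauge
            wxRect r;
            GetStatusBar()->GetFieldRect(1, r);
            gauge_ = new wxGauge
                (GetStatusBar(), wxID_ANY, 100, r.GetPosition(), r.GetSize());
        }
        void SetProgress(int percent) {gauge_->SetValue(percent);}
      private:
        wxGauge* gauge_;
    };

    class GaugeApp : public wxApp
    {
      public:
        bool OnInit() override {(new GaugeFrame)->Show(); return true;}
    };
    wxIMPLEMENT_APP(GaugeApp)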

Yes, Skeleton::UpdateViews() really is too fast. But lmi uses progress_meter
for some other operations that may take many seconds or even minutes, and in
those cases the extra information that wxProgressDialog displays can be very
useful.
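
Concretely, creating such a dialog might look like this minimal sketch; the
function name, message text, and loop body are placeholders, not lmi's
actual census-running code:

    #include <wx/progdlg.h>

    bool run_cells(int n_cells)
    {
        // Shows elapsed, estimated, and remaining time, plus a Cancel button.
        wxProgressDialog dialog
            ("Census | Run"
            ,"Running census..."
            ,n_cells
            ,nullptr // parent window
            ,  wxPD_APP_MODAL
             | wxPD_CAN_ABORT
             | wxPD_ELAPSED_TIME
             | wxPD_ESTIMATED_TIME
             | wxPD_REMAINING_TIME
            );
        for(int i = 0; i < n_cells; ++i)
            {
            // ...process cell i here...
            if(!dialog.Update(i + 1)) // false means Cancel was pressed
                {
                return false;
                }
            }
        return true;
    }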

For example, an end user sent me an actual production case with over four
thousand cells, which I used for testing this progress gauge. "Census | Run"
is expected to take three and a half minutes according to wxProgressDialog;
that's useful information. The reason this case was shared with me is that
this operation ends prematurely, after processing exactly 509 cells--and
seeing that number on the screen helped me to diagnose the problem quickly:
due to an exotic option, one file is kept open for each cell, so, with the
three standard streams already open, the 510th cell's file would have been
the 513th open stream, and we ran into this limit:

http://stackoverflow.com/questions/870173/is-there-a-limit-on-number-of-open-files-in-windows
| The C run-time libraries have a 512 limit for the number of files that can
| be open at any one time.
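
For what it's worth, the limit is easy to demonstrate directly with a few
lines of C++ (my own sketch, not lmi code; "nul" stands in for the per-cell
files):

    #include <cstdio>

    int main()
    {
        // Count how many FILE* streams the CRT allows us to open at once.
        // On Windows, with stdin, stdout, and stderr already open, this
        // typically prints 509 before fopen() starts failing.
        int count = 0;
        while(std::fopen("nul", "r")) // "nul" is the Windows null device
            {
            ++count; // streams are deliberately leaked, never closed
            }
        std::printf("opened %d streams before failure\n", count);
        // The MSVC CRT's _setmaxstdio() can raise the FILE* limit.
    }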

In that example, the progress gauge isn't as informative as I'd like. Thanks
for writing it, though. Without a concrete implementation to experiment with,
I wouldn't have come to this realization. I'll save it against the day when
we find a use case for which it's well suited.



