
Re: [lmi] wx_test cleanup


From: Vadim Zeitlin
Subject: Re: [lmi] wx_test cleanup
Date: Sun, 2 Nov 2014 02:31:37 +0100

On Wed, 29 Oct 2014 21:23:52 +0000 Greg Chicares <address@hidden> wrote:

GC> On 2014-10-29 15:01Z, Vadim Zeitlin wrote:
GC> > On Wed, 29 Oct 2014 12:05:15 +0000 Greg Chicares <address@hidden> wrote:
GC> > 
GC> > GC> I anticipate running 'wx_test' frequently--perhaps before every
GC> > GC> commit--which makes the following side effects inconvenient; can
GC> > GC> they be suppressed?
GC> > 
GC> >  They can, of course. I'm just not sure when this should be done:
GC> > 
GC> > (a) Always?
GC> > (b) Only if all tests pass?
GC> > (c) If some specific command line option is [not] given?
GC> 
GC> Or:
GC>   (d) If the test that created the file passes.

 Yes, this is another possibility, which I thought was similar enough
to (b) not to warrant listing separately; but, to spoil the answer to
your question below, I'm perfectly fine with it.

GC> Of course, that doesn't work for files that are used by many tests
GC> (if there are any).

 No, there are no such files. Generally speaking, all the tests should
be independent of each other, to allow running each of them individually
(useful for debugging) and also, perhaps, to allow running them in
random order in the future.

GC> Is (d) actually feasible, at least most of the time?

 It definitely is, but I do have a few questions about the exact behaviour
we want:

1. What to do with the {New,Calc}Ill{Full,Summary}.txt files created by
   the "calculation_summary" test? This is related to one of the
   questions in my email from 2014-09-08T11:07:20Z (just sort your email
   by size to find that one). Currently they are created because the
   original testing specification mentioned "paste the output for
   inspection" (item 8), so there is no sense in creating them only to
   delete them at the end. Either we shouldn't create them at all,
   perhaps interpreting "paste for inspection" in some other way, or
   they should be left in place (maybe in some other directory, as these
   files are test-specific).


2. Some PDF viewers, notably the default/official one, lock the PDF file
   while it's opened in them, meaning that deleting the PDF file created
   by the "pdf_illustration" test case will usually fail (it would work
   for me when using the previously mentioned SumatraPDF, because it is
   nice and does not lock the file, but this doesn't really help you).
   This file is created in another directory ("print_directory" from the
   configuration file), so maybe it's less annoying than the other ones.
   But if we still want to get rid of it, I don't see any choice but to
   use an external "monitor" process which would launch LMI as its child
   in a job (meaning a Windows process control object), ensuring that
   the PDF viewer would be part of the same job and so would be closed
   when LMI exits, allowing the monitor process to clean up the file.
   But this is relatively complicated for just one file.
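
   For reference, a rough sketch of the Win32 calls such a monitor
   process would make (the binary name here is hypothetical and all
   error handling is omitted):

       #include <windows.h>

       HANDLE job = ::CreateJobObjectW(NULL, NULL);

       // Kill everything still in the job when its last handle is
       // closed, i.e. when the monitor process exits.
       JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {0};
       info.BasicLimitInformation.LimitFlags =
           JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
       ::SetInformationJobObject
           (job, JobObjectExtendedLimitInformation, &info, sizeof(info));

       // Start LMI suspended, add it to the job, then let it run, so
       // that the PDF viewer it spawns inherits the job membership.
       wchar_t cmd[] = L"lmi_wx_shared.exe";
       STARTUPINFOW si = {sizeof(si)};
       PROCESS_INFORMATION pi;
       ::CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
           CREATE_SUSPENDED, NULL, NULL, &si, &pi);
       ::AssignProcessToJobObject(job, pi.hProcess);
       ::ResumeThread(pi.hThread);

       ::WaitForSingleObject(pi.hProcess, INFINITE);
       ::CloseHandle(job); // closes the viewer if it's still running
       // ...and now the PDF file can be deleted.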

   Alternatively, we could change file_command_wx.cpp to stash the PID
   returned by wxExecute() somewhere so that the PDF viewer process could
   be terminated explicitly by the testing code. But this would require
   changes to the main program code which are not needed outside of
   testing.
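
   A minimal sketch of this alternative, with invented names (nothing
   like last_viewer_pid exists in file_command_wx.cpp today):

       // In file_command_wx.cpp: remember the viewer just launched.
       static long last_viewer_pid = 0;
       ...
       last_viewer_pid = wxExecute(command); // async, returns the PID

       // In the testing code, at the end of "pdf_illustration":
       if(last_viewer_pid && wxProcess::Exists(last_viewer_pid))
           {
           wxProcess::Kill(last_viewer_pid, wxSIGTERM);
           }
       // The PDF file is no longer locked and can be removed.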


3. Implementation-wise, tracking all the files created by the test is
   difficult. I looked around a little but couldn't find any simple way
   to collect all the files created by a process, even from the process
   itself (it's possible to hook the Windows CreateFile() API, but this
   is hardly simple).

   We could take a snapshot of the directories where the output files
   are created and delete all new files appearing there after running
   the tests. This seems appealing, but automatically deleting files is
   always dangerous, so I'm not sure whether we really want to do this.
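
   Just to illustrate the idea (output_dir is a stand-in for whatever
   directories we'd actually watch):

       // Snapshot the output directory before running the tests.
       wxArrayString before;
       wxDir::GetAllFiles(output_dir, &before);

       // ...run the tests...

       // Then delete any file that appeared in the meanwhile.
       wxArrayString after;
       wxDir::GetAllFiles(output_dir, &after);
       for(size_t i = 0; i < after.size(); ++i)
           {
           if(before.Index(after[i]) == wxNOT_FOUND)
               {
               wxRemoveFile(after[i]);
               }
           }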


4. Continuing with the implementation questions: if we decide not to do
   anything "smart" like automatically deleting all the new files, and
   just handle them on a one-by-one basis, it would be very nice if I
   could override wxDocManager::MakeNewDocumentName() in the testing
   code, as I could then handle almost all new-file creation in a single
   place.

   The trouble is that to do this, I need to use a custom class deriving
   from DocManagerEx instead of DocManagerEx itself, i.e., again, modify
   some code outside of the tests themselves. This modification would be
   pretty minimal, though, and would consist of adding a new
   CreateDocManager() virtual method to Skeleton and calling it instead
   of doing

    doc_manager_ = new DocManagerEx;

   in Skeleton::InitDocManager(). It would also make dealing with the
   file history much simpler, so I'd really like to be able to do this,
   assuming we don't use some blunt automatic way of dealing with the
   files created by the test. Would you object to it?
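
   To make this concrete, here is roughly what I have in mind (the
   TestDocManager name and its behaviour are just for illustration):

       // skeleton.hpp: allow replacing the document manager.
       virtual DocManagerEx* CreateDocManager()
           {
           return new DocManagerEx;
           }

       // skeleton.cpp, in Skeleton::InitDocManager():
       doc_manager_ = CreateDocManager();

       // wx_test code:
       class TestDocManager
           :public DocManagerEx
       {
         public:
           // Put all new documents into a disposable directory, so
           // that cleaning up after the test becomes trivial.
           virtual wxString MakeNewDocumentName();
       };

       // SkeletonTest::CreateDocManager() then simply returns a new
       // TestDocManager.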


GC> If so, can I persuade you that it's better than the others?

 I think it is better than the options I listed, but all this thinking
about the monitor process for the test made me wonder if there could be
a much simpler (and hence faster to implement) solution; let me call it

(e) Create a shell script that would run the test itself and then delete
    all the files known to be created by it.

This is as trivial as it gets and, of course, it can't deal with
individual test failures. And this script would need to be updated
whenever a new file starts being created by the test. But it's dead easy
to implement, and the advantage is that such a script could always be
run just to clean up the files, e.g. after you finish examining them
following a test failure, whereas all the other solutions would require
you to do it manually or fix the test and rerun it.
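
 To show just how trivial, something along these lines (the file names
are the ones mentioned earlier in this thread; the list is not
exhaustive):

    #!/bin/sh
    # Run the GUI test suite, then remove its known side effects.
    ./wx_test "$@"
    status=$?
    rm -f NewIllFull.txt NewIllSummary.txt \
          CalcIllFull.txt CalcIllSummary.txt
    # ...plus the PDF from "pdf_illustration", which is created in
    # the "print_directory" given in the configuration file.
    exit $status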

 So I have a suspicion that in practice this could be a very simple "good
enough" solution to the problem at hand. What do you think?


GC> > GC> (2) Apparently 'configurable_settings.xml' is modified.
GC> > 
GC> >  This is unexpected, I don't think we intentionally modify it anywhere.
GC> > Could you please tell me what exactly is changed if you already know it?
GC> 
GC> -  <calculation_summary_columns>PolicyYear AttainedAge Outlay
GC>    CSVNet_Current AcctVal_Current CSVNet_Guaranteed AcctVal_Guaranteed
GC>    EOYDeathBft_Current EOYDeathBft_Guaranteed NetWD NewCashLoan
GC>    LoanIntAccrued_Current </calculation_summary_columns>
GC> +  <calculation_summary_columns/>
GC> -  <use_builtin_calculation_summary>0</use_builtin_calculation_summary>
GC> +  <use_builtin_calculation_summary>1</use_builtin_calculation_summary>

 I see, thanks. I thought the test returned the dialog to its initial
state, but now I remember that I decided there was no need to do this
(relatively) complex operation -- and then completely forgot about it
when writing the previous reply; sorry about that.

GC> Is "OK" triggered by ExpectMvcDialog::OnInvoked() here?

 Yes, OnInvoked() is what allows us to intercept modal dialogs instead of
showing them to the user and waiting for interactive entry.
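
 Schematically, it works like this (a stripped-down sketch, not the
actual ExpectMvcDialog code):

     // When the dialog would be shown modally, don't show it and
     // instead pretend that the user just pressed "OK".
     struct ExpectOkDialog
         :public wxExpectModalBase<MvcController>
     {
         virtual int OnInvoked(MvcController*) const
             {
             return wxID_OK;
             }
     };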

GC> > GC> Would it make sense for 'wx_test' to rename it at the start, then
GC> > GC> rename it back at the end?
GC> > 
GC> >  I'll do this if the modifications should be done in the first place.
GC> 
GC> I don't understand this test's implementation thoroughly, but
GC> I suspect that the best way to test the dialog is to simulate
GC> "OK" and verify the outcome...so yes, I think the config file
GC> really should be modified by the test...and therefore, it
GC> would be good, once the test has finished, to restore the file
GC> to its former state.

 Do you think it would be better to do this by simulating the UI
actions needed to restore the settings, or...

GC> It might be a good idea to make a backup copy of this file at
GC> the beginning of the test, in case the test fails.

... by just preserving a backup copy of the file? I.e., would there be
any benefit in testing that, by performing the correct actions in the
dialog, we can prevent the file from being modified?


GC> > GC> (3) File history also changes. Is there any way to save the
GC> > GC> original values before the test, and restore them afterward?
GC> > 
GC> >  We could do it like this, but a simpler solution would be to just avoid
GC> > saving the history into wxConfig when testing. This can be done entirely
GC> > outside of the normal program code by clearing the history in
GC> > SkeletonTest::OnExit() before calling the base class version, as the call
GC> > to FileHistorySave() in it then wouldn't have any effect.

 Just for the record: if my request for adding CreateDocManager() is
granted, this could be done even more simply, by overriding
FileHistorySave() to do nothing in the class of the document manager
returned by the test's override of that method.
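
 I.e., in terms of the TestDocManager sketch from point 4 above, just:

     // Discard the file history instead of saving it to wxConfig.
     virtual void FileHistorySave(wxConfigBase&) {}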

 Thanks in advance for your help with the remaining questions!
VZ
