From: Greg Chicares
Subject: Re: [lmi] PDF unit tests [Was: Integrate wxPdfDocument into lmi build system]
Date: Thu, 27 Aug 2015 12:40:12 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Icedove/31.3.0

On 2015-08-18 21:48, Vadim Zeitlin wrote:
> On Tue, 18 Aug 2015 14:26:35 +0000 Greg Chicares <address@hidden> wrote:
>
> GC> On 2015-08-07 16:33, Greg Chicares wrote:
> GC> [...]
> GC> > Probably I'll move 'wx_pdfdoc_test' out of $(unit_test_targets), which
> GC> > otherwise don't depend on wx in any way. For now, it can just be a
> GC> > standalone program that's built and run only upon explicit demand.
> GC> > Soon, it may go away, because we can just create a premium-quote PDF
> GC> > in the GUI-test suite and this tiny test will no longer serve any
> GC> > purpose (I think).
> GC>
> GC> For the same reasons, and some additional ones, I plan not to commit
> GC> 'group_premium_pdf_gen_test' at all. It's in the code you sent to my
> GC> personal email. I've built it and run it, and it succeeds--so it was
> GC> useful to have this as temporary scaffolding. But it requires an input
> GC> file that I usually keep in /lmi/src/lmi/ and some mortality tables
> GC> that live in /opt/lmi/data/ on my machine, whereas I'm running it from
> GC> a different directory altogether, so it's really more convenient for
> GC> me just to build lmi and run the same steps manually.
I'm in favor of regression-testing the group premium quote PDF.
I'm opposed to adding such tests to lmi's $(unit_test_targets). They don't
fit there: the existing unit tests don't require big libraries or real
product files, and I want to preserve that minimalist nature.
It may make the most sense to do this within lmi's 'system_test' target,
which runs the production system and has access to product data files.
PDFs are not machine-comparable, so we'd want some sort of text output:
probably TSV, because it's simple, flexible, and already used for other
tests. For example:
Summary\n
Date prepared:\t2015-09-01\n
Number of participants:\t97\n
...
You might ask how we can do regression testing with a date that defaults
to today(), but that's a problem we've already solved elsewhere, e.g.,
in 'ledger_text_formats.cpp':
// For regression tests, use EffDate as date prepared,
// in order to avoid gratuitous failures.
os << "DatePrepared\t\t'" << Invar.EffDate << "'\n";
Right now, the 'system_test' target uses this rule:
$(testdecks):
@-$(bin_dir)/lmi_cli_shared$(EXEEXT) \ [...flags...]
and the CLI doesn't create PDFs. We could overcome that obstacle by
making the command a target-specific variable, e.g.:
%.ill: executable_with_flags := $(bin_dir)/lmi_cli_shared$(EXEEXT) [...flags...]
%.something_else: executable_with_flags := $(bin_dir)/lmi_wx_shared [...]
but it would be much nicer to find a good way to separate report
creation from PDF generation, so that the CLI binary could create a
TSV group premium quote, and both the CLI and GUI generators would
derive from a common base class containing everything we want to test.
For example, add_ledger() has only a few accidental dependencies
on the wx string and date classes that can easily be removed, and
then it can be moved into the base class.
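A rough sketch of that separation, with class and member names that are purely illustrative (lmi's actual classes differ):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative only: toolkit-independent quote logic that both the CLI
// and GUI generators could share, using only standard types, no wx.
class group_quote_report
{
  public:
    // add_ledger() analogue: accumulate one participant's data.
    void add_row(std::string const& name, double premium)
    {
        rows_.push_back({name, premium});
        total_ += premium;
    }

    double      total()             const { return total_; }
    std::size_t participant_count() const { return rows_.size(); }

  protected:
    struct row { std::string name; double premium; };
    std::vector<row> rows_;
    double           total_ = 0.0;
};

// Derived generators would then differ only in output format, e.g.:
//   class group_quote_pdf_gen_wx : public group_quote_report {...}; // PDF
//   class group_quote_tsv_gen    : public group_quote_report {...}; // TSV
```

With that split, 'system_test' could exercise the shared base through the TSV generator without linking wx at all.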
However, that sounds like a lot of work, so, as an alternative, we
could just add one more module to wx_test$(EXEEXT) to drive a single
group-quote test, and add code to 'group_quote_pdf_gen_wx.cpp' to
generate an optional TSV file.
> The files location could be changed, the problem is rather that it's
> really not very useful to just verify that the PDF file exists: what if it
> does but is empty? Or is truncated? Or has the wrong number of eligibles
> (to take a completely random example)?
The ideas above would ensure that the number of participants is correct.
I don't think any automated test can easily tell us whether a PDF file
has been truncated.
> I'd really like to have some way of testing that it contains roughly the
> correct information automatically, I think this would be genuinely useful
> as it's easy to miss some problems during manual inspection (the proof is
> that I did miss the one above even though I did check it).
Exhaustive comparison of TSV output that contains all the data, combined
with occasional manual inspection of PDF files--that's how we test PDF
illustration output (created by XSL-FO) today, and it works pretty well.
> It would also be useful, IMHO, to test reports with different numbers of
> lines to check that all possible pagination cases, i.e. a single page
> report, a multi-page report with the footer on the same page and on its own
> page, work as expected as this is probably one of the most fragile parts.
A new module added to 'wx_test' could conceivably test that. I can
see how it could run several different pagination scenarios, but I don't
know how it would count actual PDF pages under program control.
> So while I agree that it's not useful to have this test right now, I'd
> like to work on making it more useful later by adding stronger checks to
> it.
>
> Do you think it's not worth to do this?
I'm not opposed to automated testing. I'm only opposed to performing it
in $(unit_test_targets).