From: Greg Chicares
Subject: [lmi-commits] [lmi] master 5c417498 3/5: Make regression test output identical for all platforms
Date: Wed, 1 Jun 2022 17:06:25 -0400 (EDT)

branch: master
commit 5c4174988406f4e19dcc11ae2b7bc59ad7921636
Author: Gregory W. Chicares <gchicares@sbcglobal.net>
Commit: Gregory W. Chicares <gchicares@sbcglobal.net>

    Make regression test output identical for all platforms
    
    [x86_64-pc-linux-gnu]
    All 1485 files match.
    make $coefficiency system_test \
             51.56s user 8.81s system 475% cpu 12.683 total
    
    [x86_64-w64-mingw32]
    All 1485 files match.
    make $coefficiency system_test \
             137.15s user 81.06s system 500% cpu 43.641 total
    
    43.641 / 12.683 ≈ 3.44
    
    I ran this test
      ls -l /opt/lmi/test/analysis-202205* |wc -l
      162
    times last month: eight times per weekday, or once per nominal hour
    for a 40-hour week. I'm more productive when an hourly interruption
    takes thirteen seconds rather than forty-four.
---
 custom_io_0.cpp | 17 +++++++++++------
 custom_io_1.cpp | 17 +++++++++++------
 2 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/custom_io_0.cpp b/custom_io_0.cpp
index ca901a84..b2c4b1b5 100644
--- a/custom_io_0.cpp
+++ b/custom_io_0.cpp
@@ -42,6 +42,7 @@
 #include "yare_input.hpp"
 
 #include <fstream>
+#include <ios>                          // ios_base
 #include <vector>
 
 bool custom_io_0_file_exists()
@@ -486,12 +487,16 @@ void custom_io_0_write(Ledger const& ledger_values, std::string const& filename)
         ? filename
         : configurable_settings::instance().custom_output_0_filename()
         ;
-    // Don't specify 'binary' here: the file is to be read by another
-    // program that probably expects platform-specific behavior.
-    std::ofstream os
-        (actual_filename.c_str()
-        ,std::ios_base::out | std::ios_base::trunc
-        );
+    // Specify 'binary' here iff regression testing, so that all
+    // architectures use '\n' line endings--but not for production,
+    // because the file is to be read by a third-party program that
+    // probably expects platform-specific line endings.
+    std::ios_base::openmode m =
+          global_settings::instance().regression_testing()
+        ? std::ios_base::out | std::ios_base::trunc | std::ios_base::binary
+        : std::ios_base::out | std::ios_base::trunc
+        ;
+    std::ofstream os(actual_filename.c_str(), m);
     if(!os.good())
         {
         alarum()
diff --git a/custom_io_1.cpp b/custom_io_1.cpp
index 9206026d..e81b6929 100644
--- a/custom_io_1.cpp
+++ b/custom_io_1.cpp
@@ -39,6 +39,7 @@
 #include "xml_lmi.hpp"
 
 #include <fstream>
+#include <ios>                          // ios_base
 #include <stdexcept>
 #include <vector>
 
@@ -356,12 +357,16 @@ void custom_io_1_write(Ledger const& ledger_values, std::string const& filename)
         ? filename
         : configurable_settings::instance().custom_output_1_filename()
         ;
-    // Don't specify 'binary' here: the file is to be read by another
-    // program that probably expects platform-specific behavior.
-    std::ofstream os
-        (actual_filename.c_str()
-        ,std::ios_base::out | std::ios_base::trunc
-        );
+    // Specify 'binary' here iff regression testing, so that all
+    // architectures use '\n' line endings--but not for production,
+    // because the file is to be read by a third-party program that
+    // probably expects platform-specific line endings.
+    std::ios_base::openmode m =
+          global_settings::instance().regression_testing()
+        ? std::ios_base::out | std::ios_base::trunc | std::ios_base::binary
+        : std::ios_base::out | std::ios_base::trunc
+        ;
+    std::ofstream os(actual_filename.c_str(), m);
     if(!os.good())
         {
         alarum()


