Checksum regression tests
WarpX has checksum regression tests: as part of CI testing, when running a given test, the checksum module computes one aggregated number per field (e.g., Ex_checksum = np.sum(np.abs(Ex))) and compares it to a reference (the benchmark). This should be sensitive enough to make the test fail if your PR causes a significant difference, print meaningful error messages, and give you a chance to fix a bug or reset the benchmark if needed.
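As a minimal sketch of the idea (the array names, shapes, and values here are hypothetical, not the checksum module's actual code):

import numpy as np

# Hypothetical field data, e.g. the x-component of the electric field on a 2D grid
Ex = np.random.default_rng(0).random((64, 64))

# One aggregated number per field, as described above
Ex_checksum = np.sum(np.abs(Ex))

# The test then asserts closeness to the value stored in the benchmark, e.g.:
# assert np.isclose(Ex_checksum, benchmark["Ex"], rtol=1e-9, atol=1e-40)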
The checksum module is located in Regression/Checksum/, and the benchmarks are stored as human-readable JSON files in Regression/Checksum/benchmarks_json/, with one file per benchmark (for instance, the test Langmuir_2d has a corresponding benchmark Regression/Checksum/benchmarks_json/Langmuir_2d.json).
For more details on the implementation, see the Python files in Regression/Checksum/, which should be well documented.
From a user point of view, you should only need to use checksumAPI.py. It contains Python functions that can be imported and used from an analysis Python script, and it can also be executed directly as a Python script. Here are recipes for the main tasks related to checksum regression tests in WarpX CI.
Include a checksum regression test in an analysis Python script
This relies on the function evaluate_checksum:
- Checksum.checksumAPI.evaluate_checksum(test_name, plotfile, rtol=1e-09, atol=1e-40, do_fields=True, do_particles=True)

Compare a plotfile checksum with a benchmark: read the checksum from the input plotfile, read the benchmark corresponding to test_name, and assert their equality.

Parameters:
- test_name: name of the test, as found between [] in the .ini file.
- plotfile: plotfile from which the checksum is computed.
- rtol: relative tolerance for the comparison.
- atol: absolute tolerance for the comparison.
- do_fields: whether to compare fields in the checksum.
- do_particles: whether to compare particles in the checksum.
For example:
#! /usr/bin/env python
import sys
sys.path.insert(1, '../../../../warpx/Regression/Checksum/')
import checksumAPI
# this will be the name of the plot file
fn = sys.argv[1]
# Get the name of the test by stripping the 9-character '_plt?????' suffix
test_name = fn[:-9]  # Could also be os.path.split(os.getcwd())[1]
# Run checksum regression test
checksumAPI.evaluate_checksum(test_name, fn)
This can also be included in an existing analysis script. Note that the plotfile must be named <test name>_plt?????, as generated by the CI framework (which is what allows fn[:-9] above to recover the test name).
Evaluate a checksum regression test from a bash terminal
You can execute checksumAPI.py as a Python script for this, passing the plotfile that you want to evaluate as well as the test name (so the script knows which benchmark to compare it to).
./checksumAPI.py --evaluate --plotfile <path/to/plotfile> --test-name <test name>
See the additional options:

- --skip-fields: if you don't want the fields to be compared (in that case, the benchmark must not have fields)
- --skip-particles: same thing for particles
- --rtol: relative tolerance for the comparison
- --atol: absolute tolerance for the comparison (a sum of both is used by numpy.isclose())
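For instance, a hypothetical invocation for the Langmuir_2d test with a looser relative tolerance might look like (the plotfile path is illustrative):

./checksumAPI.py --evaluate --plotfile diags/Langmuir_2d_plt00100 --test-name Langmuir_2d --rtol 1e-8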
Reset a benchmark from a plotfile you know is correct
This also uses checksumAPI.py as a Python script.
./checksumAPI.py --reset-benchmark --plotfile <path/to/plotfile> --test-name <test name>
See the additional options:

- --skip-fields: if you don't want the benchmark to have fields
- --skip-particles: same thing for particles
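For instance, a hypothetical invocation resetting a fields-only benchmark for the Langmuir_2d test from a trusted plotfile (path illustrative):

./checksumAPI.py --reset-benchmark --plotfile diags/Langmuir_2d_plt00100 --test-name Langmuir_2d --skip-particles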
Since this will automatically change the JSON file stored in the repository, make a separate commit just for this file, and if possible commit it under the Tools name:
git add <test name>.json
git commit -m "reset benchmark for <test name> because ..." --author="Tools <warpx@lbl.gov>"