WarpX

Functions

    def evaluate_checksum(test_name, output_file, output_format='plotfile', rtol=1.e-9, atol=1.e-40, do_fields=True, do_particles=True)
    def reset_benchmark(test_name, output_file, output_format='plotfile', do_fields=True, do_particles=True)
    def reset_all_benchmarks(path_to_all_output_files, output_format='plotfile')

Variables

    parser = argparse.ArgumentParser()
    args = parser.parse_args()
    rtol
    atol
    do_fields
    do_particles

Copyright 2020. This file is part of WarpX. License: BSD-3-Clause-LBNL
def Checksum.checksumAPI.evaluate_checksum(test_name, output_file, output_format='plotfile', rtol=1.e-9, atol=1.e-40, do_fields=True, do_particles=True)

Compare output file checksum with benchmark. Read the checksum from the output file, read the benchmark corresponding to test_name, and assert their equality.

Parameters
----------
test_name : string
    Name of the test, as found between [] in the .ini file.
output_file : string
    Output file from which the checksum is computed.
output_format : string
    Format of the output file ('plotfile' or 'openpmd').
rtol : float, default=1.e-9
    Relative tolerance for the comparison.
atol : float, default=1.e-40
    Absolute tolerance for the comparison.
do_fields : bool, default=True
    Whether to compare fields in the checksum.
do_particles : bool, default=True
    Whether to compare particles in the checksum.
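The comparison logic behind evaluate_checksum can be sketched as follows. This is a minimal illustration, not WarpX's actual implementation: the nested {group: {name: value}} layout and the function name compare_checksums are assumptions for this sketch, and the tolerance test shown is the numpy.isclose-style criterion.

```python
def compare_checksums(checksum, benchmark, rtol=1.e-9, atol=1.e-40):
    """Assert that two nested {group: {name: value}} checksum dicts agree
    within the given relative and absolute tolerances."""
    for group, fields in benchmark.items():
        for name, ref in fields.items():
            val = checksum[group][name]
            # numpy.isclose-style criterion: |a - b| <= atol + rtol * |b|
            assert abs(val - ref) <= atol + rtol * abs(ref), (
                f"{group}/{name}: {val} differs from benchmark {ref}")

# Illustrative values standing in for checksums computed from a plotfile
benchmark = {"lev=0": {"Ex": 1.0e-3, "Ey": 2.0e-3}}
checksum = {"lev=0": {"Ex": 1.0e-3 * (1 + 1e-12), "Ey": 2.0e-3}}
compare_checksums(checksum, benchmark)  # within rtol/atol: no assertion raised
```

The very small default atol only matters for values near zero, where the relative term vanishes.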
def Checksum.checksumAPI.reset_all_benchmarks(path_to_all_output_files, output_format='plotfile')

Update all benchmarks (overwrites the reference json files) found in path_to_all_output_files.

Parameters
----------
path_to_all_output_files : string
    Path to all output files for which the benchmarks are to be reset. The output files should be named <test_name>_plt, which is what regression_testing.regtests.py does, provided we're careful enough.
output_format : string
    Format of the output files ('plotfile' or 'openpmd').
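Since the output files are named <test_name>_plt, resetting all benchmarks requires mapping each file back to its test name. A minimal sketch of that discovery step, assuming only the naming convention stated above (find_tests is an illustrative name, not part of the WarpX API):

```python
import os
import re

def find_tests(path_to_all_output_files):
    """Map each output file named <test_name>_plt back to its test name,
    so each benchmark can then be reset individually."""
    tests = {}
    for entry in sorted(os.listdir(path_to_all_output_files)):
        # Strip the trailing _plt suffix to recover the test name
        match = re.match(r"(?P<test_name>.+)_plt$", entry)
        if match:
            tests[match.group("test_name")] = os.path.join(
                path_to_all_output_files, entry)
    return tests
```

Entries that do not match the <test_name>_plt pattern are simply skipped.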
def Checksum.checksumAPI.reset_benchmark(test_name, output_file, output_format='plotfile', do_fields=True, do_particles=True)

Update the benchmark (overwrites the reference json file). Overwrite the value of the benchmark corresponding to test_name with the checksum read from the output file.

Parameters
----------
test_name : string
    Name of the test, as found between [] in the .ini file.
output_file : string
    Output file from which the checksum is computed.
output_format : string
    Format of the output file ('plotfile' or 'openpmd').
do_fields : bool, default=True
    Whether to write field checksums in the benchmark.
do_particles : bool, default=True
    Whether to write particle checksums in the benchmark.
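The overwrite step can be sketched as serializing the freshly computed checksum dict to the reference json file. The <test_name>.json naming, the benchmark_dir parameter, and the write_benchmark name are assumptions for this illustration, not WarpX's actual layout:

```python
import json
import os

def write_benchmark(test_name, checksum, benchmark_dir):
    """Overwrite the reference file <benchmark_dir>/<test_name>.json
    with a freshly computed checksum dict (hypothetical layout)."""
    path = os.path.join(benchmark_dir, test_name + ".json")
    # Sorted keys keep the reference file diffs stable across resets
    with open(path, "w") as f:
        json.dump(checksum, f, indent=2, sort_keys=True)
    return path
```

Writing with sorted keys and fixed indentation keeps version-control diffs of the reference files readable when benchmarks are reset.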
Checksum.checksumAPI.args = parser.parse_args()
Checksum.checksumAPI.atol
Checksum.checksumAPI.do_fields
Checksum.checksumAPI.do_particles
Checksum.checksumAPI.parser = argparse.ArgumentParser()
Checksum.checksumAPI.rtol
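The parser and args variables show that the module doubles as a command-line tool. A hypothetical reconstruction of what such a CLI could look like, built from the function parameters documented above; every flag name here is an assumption, not WarpX's actual interface:

```python
import argparse

# Hypothetical CLI mirroring the documented function parameters;
# the real WarpX flag names may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--test-name", dest="test_name", type=str, required=True,
                    help="Name of the test, as found between [] in the .ini file")
parser.add_argument("--output-file", dest="output_file", type=str, required=True,
                    help="Output file from which the checksum is computed")
parser.add_argument("--rtol", dest="rtol", type=float, default=1.e-9,
                    help="Relative tolerance for the comparison")
parser.add_argument("--atol", dest="atol", type=float, default=1.e-40,
                    help="Absolute tolerance for the comparison")
parser.add_argument("--reset-benchmark", dest="reset_benchmark",
                    action="store_true", default=False,
                    help="Reset the benchmark instead of evaluating it")
parser.add_argument("--skip-fields", dest="do_fields", action="store_false",
                    default=True, help="Do not checksum the fields")
parser.add_argument("--skip-particles", dest="do_particles", action="store_false",
                    default=True, help="Do not checksum the particles")

# Parse an explicit argument list so the example is self-contained
args = parser.parse_args(["--test-name", "Langmuir_x",
                          "--output-file", "Langmuir_x_plt"])
```

With only the two required flags given, args carries the same defaults as the functions: rtol=1.e-9, atol=1.e-40, do_fields=True, do_particles=True.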