WarpX
Checksum.checksumAPI Namespace Reference

Functions

def evaluate_checksum (test_name, output_file, output_format='plotfile', rtol=1.e-9, atol=1.e-40, do_fields=True, do_particles=True)
 
def reset_benchmark (test_name, output_file, output_format='plotfile', do_fields=True, do_particles=True)
 
def reset_all_benchmarks (path_to_all_output_files, output_format='plotfile')
 

Variables

 parser = argparse.ArgumentParser()
 
 args = parser.parse_args()
 
 do_fields
 
 do_particles
 
 rtol
 
 atol
 

Detailed Description

 Copyright 2020

 This file is part of WarpX.

 License: BSD-3-Clause-LBNL

Function Documentation

◆ evaluate_checksum()

def Checksum.checksumAPI.evaluate_checksum (   test_name,
  output_file,
  output_format = 'plotfile',
  rtol = 1.e-9,
  atol = 1.e-40,
  do_fields = True,
  do_particles = True 
)
Compare the checksum of an output file with its benchmark.
Reads the checksum from the output file, reads the benchmark
corresponding to test_name, and asserts their equality.

Parameters
----------
test_name : string
    Name of the test, as found between square brackets in the .ini file.

output_file : string
    Output file from which the checksum is computed.

output_format : string
    Format of the output file ('plotfile' or 'openpmd').

rtol : float, default=1.e-9
    Relative tolerance for the comparison.

atol : float, default=1.e-40
    Absolute tolerance for the comparison.

do_fields : bool, default=True
    Whether to compare fields in the checksum.

do_particles : bool, default=True
    Whether to compare particles in the checksum.
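
For illustration, a minimal usage sketch; the sys.path entry, test name,
and plotfile path are assumptions to adapt to your own checkout:

    import sys
    sys.path.insert(1, 'path/to/warpx/Regression/Checksum')  # assumed path
    import checksumAPI

    # Compare the plotfile checksum against the stored benchmark;
    # an AssertionError is raised if they differ beyond the tolerances.
    checksumAPI.evaluate_checksum(
        test_name='Langmuir_2d',                   # hypothetical test name
        output_file='diags/Langmuir_2d_plt00100',  # hypothetical plotfile
        rtol=1.e-9,
        atol=1.e-40,
    )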

◆ reset_all_benchmarks()

def Checksum.checksumAPI.reset_all_benchmarks (   path_to_all_output_files,
  output_format = 'plotfile' 
)
Update all benchmarks found in path_to_all_output_files
(overwrites the reference JSON files).

Parameters
----------
path_to_all_output_files : string
    Path to the directory containing all output files for which the
    benchmarks are to be reset. The output files should be named
    <test_name>_plt, which is the naming convention used by
    regression_testing.regtests.py.

output_format : string
    Format of the output files ('plotfile' or 'openpmd').
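
For illustration, a minimal usage sketch; the paths are assumptions to
adapt to your own checkout:

    import sys
    sys.path.insert(1, 'path/to/warpx/Regression/Checksum')  # assumed path
    import checksumAPI

    # Regenerate every benchmark JSON file from the output files found in
    # the given directory (files named with the <test_name>_plt convention).
    checksumAPI.reset_all_benchmarks('path/to/all/output/files')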

◆ reset_benchmark()

def Checksum.checksumAPI.reset_benchmark (   test_name,
  output_file,
  output_format = 'plotfile',
  do_fields = True,
  do_particles = True 
)
Update the benchmark (overwrites the reference JSON file).
Overwrites the benchmark corresponding to test_name with
the checksum read from the output file.

Parameters
----------
test_name : string
    Name of the test, as found between square brackets in the .ini file.

output_file : string
    Output file from which the checksum is computed.

output_format : string
    Format of the output file ('plotfile' or 'openpmd').

do_fields : bool, default=True
    Whether to write field checksums in the benchmark.

do_particles : bool, default=True
    Whether to write particle checksums in the benchmark.
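
For illustration, a minimal usage sketch; the sys.path entry, test name,
and plotfile path are assumptions to adapt to your own checkout:

    import sys
    sys.path.insert(1, 'path/to/warpx/Regression/Checksum')  # assumed path
    import checksumAPI

    # Overwrite the stored benchmark for this test with the checksum
    # computed from the given plotfile (fields only, skipping particles).
    checksumAPI.reset_benchmark(
        test_name='Langmuir_2d',                   # hypothetical test name
        output_file='diags/Langmuir_2d_plt00100',  # hypothetical plotfile
        do_fields=True,
        do_particles=False,
    )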

Variable Documentation

◆ args

Checksum.checksumAPI.args = parser.parse_args()

◆ atol

Checksum.checksumAPI.atol

◆ do_fields

Checksum.checksumAPI.do_fields

◆ do_particles

Checksum.checksumAPI.do_particles

◆ parser

Checksum.checksumAPI.parser = argparse.ArgumentParser()

◆ rtol

Checksum.checksumAPI.rtol
