Extend a Simulation with Python

When running WarpX directly from Python, it is possible to interact with the simulation.

For instance, with the step() method of the simulation class, one could run sim.step(nsteps=1) in a loop:

# Preparation: set up the simulation
#   sim = picmi.Simulation(...)
#   ...

steps = 1000
for _ in range(steps):
    sim.step(nsteps=1)

    # do something custom with the sim object

As a more flexible alternative, one can install callback functions, which will execute a given Python function at a specific location in the WarpX simulation loop.

Callback Locations

The following three functions allow the user to install, uninstall, and verify user-created callback functions, which are then executed at specific locations in the time step.

These functions all take a callback location name (a string) and a function or instance method as arguments. Note that if an instance method is used, an extra reference to the method’s object is saved.

Functions can be called at the following times:

  • loadExternalFields: during WarpX::LoadExternalFields to write B/Efield_fp_external values

  • beforeInitEsolve: before the initial solve for the E fields (i.e. before the PIC loop starts)

  • afterinit: immediately after the init is complete

  • beforeEsolve: before the solve for E fields

  • poissonsolver: in place of the computePhi call, but only in an electrostatic simulation

  • afterEsolve: after the solve for E fields

  • afterBpush: after the B field advance for electromagnetic solvers

  • afterEpush: after the E field advance for electromagnetic solvers

  • beforedeposition: before the particle deposition (for charge and/or current)

  • afterdeposition: after particle deposition (for charge and/or current)

  • beforestep: before the time step

  • afterstep: after the time step

  • afterdiagnostics: after diagnostic output

  • oncheckpointsignal: on a checkpoint signal

  • onbreaksignal: on a break signal. These callbacks will be the last ones executed before the simulation ends.

  • particlescraper: just after the particle boundary conditions are applied but before lost particles are processed

  • particleloader: at the time that the standard particle loader is called

  • particleinjection: called when particle injection happens, after the position advance and before deposition is called, allowing a user defined particle distribution to be injected each time step

Example that calls the Python function myplots after each step:

from pywarpx.callbacks import installcallback

def myplots():
    # do something here, e.g., make a plot of simulation data
    pass

installcallback('afterstep', myplots)

# run simulation
sim.step(nsteps=100)

The install can also be done using a Python decorator with the prefix callfrom. The following is equivalent to the example above and installs myplots to be called after each step:

from pywarpx.callbacks import callfromafterstep

@callfromafterstep
def myplots():
    # do something here, e.g., make a plot of simulation data
    pass

# run simulation
sim.step(nsteps=100)

pywarpx.callbacks.installcallback(name, f)[source]

Installs a function to be called at the specified callback location.

Adds a function to the list of functions called by this callback.

pywarpx.callbacks.isinstalled(name, f)[source]

Checks if a function is installed for this callback.

pywarpx.callbacks.uninstallcallback(name, f)[source]

Uninstalls the function (so it won’t be called anymore).

Removes the function from the list of functions called by this callback.
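
For example, a callback can be verified and removed again at runtime (a minimal sketch using the three functions above, assuming a prepared sim object as in the earlier examples):

from pywarpx.callbacks import installcallback, isinstalled, uninstallcallback

def my_diag():
    print("step finished")

installcallback('afterstep', my_diag)
assert isinstalled('afterstep', my_diag)

sim.step(nsteps=10)   # my_diag is called after each of these steps

uninstallcallback('afterstep', my_diag)
sim.step(nsteps=10)   # my_diag is no longer called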

pyAMReX

Many of the following classes are provided through pyAMReX. After the simulation is initialized, the pyAMReX module can be accessed via

from pywarpx import picmi, libwarpx

# ... simulation definition ...

# equivalent to
#   import amrex.space3d as amr
# for a 3D simulation
amr = libwarpx.amr  # picks the right 1d, 2d or 3d variant
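
As a quick sanity check, one can query basic build and parallelism information through this handle (a minimal sketch; Config and ParallelDescriptor are standard pyAMReX attributes, assumed here to be available after initialization):

# query the pyAMReX build and parallel context (after initialization)
print(amr.Config.spacedim)              # 1, 2, or 3
print(amr.Config.have_gpu)              # True for GPU builds
print(amr.ParallelDescriptor.NProcs())  # number of MPI ranks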

Full details of the pyAMReX APIs are available in the pyAMReX documentation.

Data Access

While the simulation is running, callbacks have read and write access to the WarpX simulation data in situ.

An important object in the pywarpx.picmi module for data access is Simulation.extension.warpx, which is available only during the simulation run. This object is the Python equivalent to the C++ WarpX simulation class.

class pywarpx.callbacks.WarpX
getistep(lev: int)

Get the current step on mesh-refinement level lev.

gett_new(lev: int)

Get the current physical time on mesh-refinement level lev.

getdt(lev: int)

Get the current physical time step size on mesh-refinement level lev.

multifab(multifab_name: str)

Return MultiFabs by name, e.g., "Efield_aux[x][level=0]", "Efield_cp[x][level=0]", …

The physical fields in WarpX have the following naming:

  • _fp are the “fine” patches, the regular resolution of a current mesh-refinement level

  • _aux are temporary (auxiliary) patches at the same resolution as _fp. They usually include contributions from other levels and can be interpolated for the field gather of particles.

  • _cp are “coarse” patches, at the same resolution (but not necessarily the same values) as the _fp of level - 1 (only for level 1 and higher).

multi_particle_container()

Return the MultiParticleContainer that holds all particle species.

get_particle_boundary_buffer()

Return the buffer of particles scraped at the boundaries (see ParticleBoundaryBufferWrapper below).
set_potential_on_domain_boundary(potential_[lo/hi]_[x/y/z]: str)

The potential on the domain boundaries can be modified when using the electrostatic solver. This function updates the strings and function parsers which set the domain boundary potentials during the Poisson solve.

set_potential_on_eb(potential: str)

The embedded boundary (EB) conditions can be modified when using the electrostatic solver. This sets the EB potential string and updates the function parser.
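
For example (a sketch; the keyword names follow the signature notation above, and the strings may use the usual parser variables such as t):

warpx = sim.extension.warpx

# time-dependent potential on the lower x boundary, grounded upper x boundary
warpx.set_potential_on_domain_boundary(
    potential_lo_x="150.0*sin(2*pi*6.78e6*t)",
    potential_hi_x="0.0",
)

# constant potential on the embedded boundary
warpx.set_potential_on_eb("-2.5")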

evolve(numsteps=-1)

Evolve the simulation for the specified number of steps.

finalize(finalize_mpi=1)

Call finalize for WarpX and AMReX. Registered to run at program exit.
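
For example, the current iteration, time, and step size can be queried from within a callback (a minimal sketch using the methods listed above):

warpx = sim.extension.warpx

step = warpx.getistep(0)   # current step on mesh-refinement level 0
t_new = warpx.gett_new(0)  # current physical time (s)
dt = warpx.getdt(0)        # current time step size (s)
print(f"step {step}: t = {t_new:.3e} s, dt = {dt:.3e} s")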

The WarpX object also provides read and write access to field MultiFab and ParticleContainer data, as shown in the following examples.

Fields

This example accesses the \(E_x(x,y,z)\) field at level 0 after every time step and sets all of its values (including the guard regions) to 42.

from pywarpx import picmi
from pywarpx.callbacks import callfromafterstep

# Preparation: set up the simulation
#   sim = picmi.Simulation(...)
#   ...


@callfromafterstep
def set_E_x():
    warpx = sim.extension.warpx

    # compute on E_x; iterate over mesh-refinement levels
    for lev in range(warpx.finest_level + 1):
        # data access
        #   vector field E, component x, on the fine patch of MR level lev
        E_x_mf = warpx.multifab("Efield_fp", dir=0, level=lev)
        #   scalar field rho, on the fine patch of MR level lev
        rho_mf = warpx.multifab("rho_fp", level=lev)

        # grow (aka guard/ghost/halo) regions
        ngv = E_x_mf.n_grow_vect

        # get every local block of the field
        for mfi in E_x_mf:
            # global index space box, including guards
            bx = mfi.tilebox().grow(ngv)
            print(bx)  # note: global index space of this block

            # numpy representation: non-copying view, including the
            # guard/ghost region;     .to_cupy() for GPU!
            E_x_np = E_x_mf.array(mfi).to_numpy()

            # notes on indexing in E_x_np:
            # - numpy uses locally zero-based indexing
            # - layout is F_CONTIGUOUS by default, just like AMReX

            # notes:
            # Only the next lines are the "HOT LOOP" of the computation.
            # For efficiency, use numpy array operations for speed on CPUs.
            # For GPUs use .to_cupy() above and compute with cupy or numba.
            E_x_np[()] = 42.0


sim.step(nsteps=100)

For further details on how to access GPU data or compute on E_x, please see the pyAMReX documentation.

High-Level Field Wrapper

Note

TODO

Note

TODO: What are the benefits of using the high-level wrapper? TODO: What are the limitations (e.g., in memory usage or compute scalability) of using the high-level wrapper?
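
In the meantime, a minimal sketch of the intended usage, assuming the pywarpx.fields wrappers (such as ExFPWrapper, and the RhoFPWrapper referenced under deposit_charge_density() below) that support global, numpy-like indexing:

from pywarpx import fields

# wrap E_x on the fine patch of mesh-refinement level 0 (assumed API)
Ex = fields.ExFPWrapper()

vals = Ex[...]  # gather the full field into a (globally indexed) numpy array
Ex[...] = 0.0   # broadcast-assign to every element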

Particles

from pywarpx import picmi
from pywarpx.callbacks import callfromafterstep

# Preparation: set up the simulation
#   sim = picmi.Simulation(...)
#   ...

@callfromafterstep
def my_after_step_callback():
    warpx = sim.extension.warpx
    Config = sim.extension.Config

    # data access
    multi_pc = warpx.multi_particle_container()
    pc = multi_pc.get_particle_container_from_name("electrons")

    # compute
    # iterate over mesh-refinement levels
    for lvl in range(pc.finest_level + 1):
        # get every local chunk of particles
        for pti in pc.iterator(pc, level=lvl):
            # compile-time and runtime attributes in SoA format
            soa = pti.soa().to_cupy() if Config.have_gpu else \
                  pti.soa().to_numpy()

            # notes:
            # Only the next lines are the "HOT LOOP" of the computation.
            # For speed, use array operations.

            # write to all particles in the chunk
            # note: careful, if you change particle positions, you might need to
            #       redistribute particles before continuing the simulation step
            soa.real[0][()] = 0.30  # x
            soa.real[1][()] = 0.35  # y
            soa.real[2][()] = 0.40  # z

            # all other attributes: weight, momentum x, y, z, ...
            for soa_real in soa.real[3:]:
                soa_real[()] = 42.0

            # by default empty unless ionization or QED physics is used
            # or other runtime attributes were added manually
            for soa_int in soa.int:
                soa_int[()] = 12


sim.step(nsteps=100)

For further details on how to access GPU data or compute on electrons, please see the pyAMReX documentation.

High-Level Particle Wrapper

Note

TODO: What are the benefits of using the high-level wrapper? TODO: What are the limitations (e.g., in memory usage or compute scalability) of using the high-level wrapper?

Particles can be added to the simulation at specific positions and with specific attribute values:

from pywarpx import particle_containers, picmi

# ...

electron_wrapper = particle_containers.ParticleContainerWrapper("electrons")

class pywarpx.particle_containers.ParticleContainerWrapper(species_name)[source]

Wrapper around particle containers. This provides a convenient way to query and set data in the particle containers.

Parameters:

species_name (string) – The name of the species to be accessed.

add_particles(x=None, y=None, z=None, ux=None, uy=None, uz=None, w=None, unique_particles=True, **kwargs)[source]

A function for adding particles to the WarpX simulation.

Parameters:
  • x (arrays or scalars) – The particle positions (m) (default = 0.)

  • y (arrays or scalars) – The particle positions (m) (default = 0.)

  • z (arrays or scalars) – The particle positions (m) (default = 0.)

  • ux (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)

  • uy (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)

  • uz (arrays or scalars) – The particle proper velocities (m/s) (default = 0.)

  • w (array or scalars) – Particle weights (default = 0.)

  • unique_particles (bool) – If True (default), every process adds all of the particles it is given, so identical arrays passed on every process would be duplicated; if False, the passed particles are split across processes so that the total number added is independent of the number of processes

  • kwargs (dict) – Containing an entry for all the extra particle attribute arrays. If an attribute is not given it will be set to 0.
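
For example (a sketch; the numbers are placeholders):

import numpy as np

# add ten electrons along z with a uniform proper velocity
electron_wrapper.add_particles(
    x=0.0,
    y=0.0,
    z=np.linspace(-10.0e-6, 10.0e-6, 10),
    uz=np.full(10, 1.0e7),  # proper velocity (m/s)
    w=1.0e6,                # macroparticle weight
)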

add_real_comp(pid_name, comm=True)[source]

Add a real component to the particle data array.

Parameters:
  • pid_name (str) – Name that can be used to identify the new component

  • comm (bool) – Should the component be communicated

deposit_charge_density(level, clear_rho=True, sync_rho=True)[source]

Deposit this species’ charge density in rho_fp in order to access that data via pywarpx.fields.RhoFPWrapper().

Parameters:
  • level (int) – The AMR level on which to deposit the charge density.

  • clear_rho (bool) – If True, zero out rho_fp before deposition.

  • sync_rho (bool) – If True, perform MPI exchange and properly set boundary cells for rho_fp.
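
A typical use deposits the density and then reads it back through the field wrapper (a sketch; the RhoFPWrapper constructor argument is an assumption):

from pywarpx import fields

electron_wrapper.deposit_charge_density(level=0)
rho = fields.RhoFPWrapper(level=0)[...]
print(rho.min(), rho.max())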

get_particle_count(local=False)[source]

Get the number of particles of this species in the simulation.

Parameters:

local (bool) – If True the particle count on this processor will be returned. Default False.

Returns:

An integer count of the number of particles

Return type:

int

get_particle_cpu(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘cpu’ numbers on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle cpus

Return type:

List of arrays

get_particle_id(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘id’ numbers on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle ids

Return type:

List of arrays

get_particle_idcpu(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘idcpu’ numbers on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle idcpu

Return type:

List of arrays

get_particle_idcpu_arrays(level, copy_to_host=False)[source]

This returns a list of numpy or cupy arrays containing the particle idcpu data on each tile for this process.

Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle array data

Return type:

List of arrays

get_particle_int_arrays(comp_name, level, copy_to_host=False)[source]

This returns a list of numpy or cupy arrays containing the particle int array data on each tile for this process.

Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.

Parameters:
  • comp_name (str) – The component of the array data that will be returned

  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle array data

Return type:

List of arrays

get_particle_r(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘r’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle r position

Return type:

List of arrays

get_particle_real_arrays(comp_name, level, copy_to_host=False)[source]

This returns a list of numpy or cupy arrays containing the particle real array data on each tile for this process.

Unless copy_to_host is specified, the data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.

Parameters:
  • comp_name (str) – The component of the array data that will be returned

  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle array data

Return type:

List of arrays

get_particle_theta(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle theta on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle theta position

Return type:

List of arrays

get_particle_ux(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle x momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle x momentum

Return type:

List of arrays

get_particle_uy(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle y momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle y momentum

Return type:

List of arrays

get_particle_uz(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle z momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle z momentum

Return type:

List of arrays

get_particle_weight(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle weight on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle weight

Return type:

List of arrays

get_particle_x(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘x’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle x position

Return type:

List of arrays

get_particle_y(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘y’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle y position

Return type:

List of arrays

get_particle_z(level=0, copy_to_host=False)[source]

Return a list of numpy or cupy arrays containing the particle ‘z’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle z position

Return type:

List of arrays

get_species_charge_sum(local=False)[source]

Returns the total charge in the simulation due to the given species.

Parameters:

local (bool) – If True return total charge per processor

property idcpu

Return a list of numpy or cupy arrays containing the particle ‘idcpu’ numbers on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle idcpu

Return type:

List of arrays

property nps

Get the number of particles of this species in the simulation.

Parameters:

local (bool) – If True the particle count on this processor will be returned. Default False.

Returns:

An integer count of the number of particles

Return type:

int

property rp

Return a list of numpy or cupy arrays containing the particle ‘r’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle r position

Return type:

List of arrays

property thetap

Return a list of numpy or cupy arrays containing the particle theta on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle theta position

Return type:

List of arrays

property uxp

Return a list of numpy or cupy arrays containing the particle x momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle x momentum

Return type:

List of arrays

property uyp

Return a list of numpy or cupy arrays containing the particle y momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle y momentum

Return type:

List of arrays

property uzp

Return a list of numpy or cupy arrays containing the particle z momentum on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle z momentum

Return type:

List of arrays

property wp

Return a list of numpy or cupy arrays containing the particle weight on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle weight

Return type:

List of arrays

property xp

Return a list of numpy or cupy arrays containing the particle ‘x’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle x position

Return type:

List of arrays

property yp

Return a list of numpy or cupy arrays containing the particle ‘y’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle y position

Return type:

List of arrays

property zp

Return a list of numpy or cupy arrays containing the particle ‘z’ positions on each tile.

Parameters:
  • level (int) – The refinement level to reference (default=0)

  • copy_to_host (bool) – For GPU-enabled runs, one can either return the GPU arrays (the default) or force a device-to-host copy.

Returns:

The requested particle z position

Return type:

List of arrays

The get_particle_real_arrays(), get_particle_int_arrays() and get_particle_idcpu_arrays() functions are called by several utility functions of the form get_particle_{comp_name}, where comp_name is one of x, y, z, r, theta, id, cpu, idcpu, weight, ux, uy or uz. The corresponding properties (xp, yp, zp, rp, thetap, uxp, uyp, uzp, wp, idcpu and nps) call these getters with their default arguments.
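
For example, the per-tile arrays can be combined into a single array of this process’ particles (a short sketch):

import numpy as np

x_tiles = electron_wrapper.get_particle_x()  # list of per-tile arrays
x_tiles = electron_wrapper.xp                # equivalent shorthand property

x_local = np.concatenate(x_tiles)            # one array, local particles only
print(x_local.shape)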

Diagnostics

Various diagnostics are also accessible from Python. This includes getting the deposited or total charge density from a given species as well as accessing the scraped particle buffer. See the example in Examples/Tests/ParticleBoundaryScrape for a reference on how to interact with scraped particle data.

class pywarpx.particle_containers.ParticleBoundaryBufferWrapper[source]

Wrapper around particle boundary buffer containers. This provides a convenient way to query data in the particle boundary buffer containers.

clear_buffer()[source]

Clear the buffer that holds the particles lost at the boundaries.

get_particle_boundary_buffer(species_name, boundary, comp_name, level)[source]

This returns a list of numpy or cupy arrays containing the particle array data for a species that has been scraped by a specific simulation boundary.

The data for the arrays are not copied, but share the underlying memory buffer with WarpX. The arrays are fully writeable.

An example of a simple case of particle-boundary interaction (reflection) can be found at https://github.com/ECP-WarpX/WarpX/blob/319e55b10ad4f7c71b84a4fb21afbafe1f5b65c2/Examples/Tests/particle_boundary_interaction/PICMI_inputs_rz.py.

Parameters:
  • species_name (str) – The species name that the data will be returned for.

  • boundary (str) – The boundary from which to get the scraped particle data in the form x/y/z_hi/lo or eb.

  • comp_name (str) – The component of the array data that will be returned: “x”, “y”, “z”, “ux”, “uy”, “uz”, “w”, “stepScraped”, “deltaTimeScraped”, and, if boundary=“eb”, also “nx”, “ny”, “nz”

  • level (int) – Which AMR level to retrieve scraped particle data from.

get_particle_boundary_buffer_size(species_name, boundary, local=False)[source]

This returns the number of particles that have been scraped so far in the simulation from the specified boundary and of the specified species.

Parameters:
  • species_name (str) – Return the number of scraped particles of this species

  • boundary (str) – The boundary from which to get the scraped particle data in the form x/y/z_hi/lo

  • local (bool) – Whether to only return the number of particles in the current processor’s buffer
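
For example (a sketch; the species and boundary names are placeholders):

from pywarpx import particle_containers

buffer_wrapper = particle_containers.ParticleBoundaryBufferWrapper()

# number of electrons scraped at the embedded boundary so far
n_lost = buffer_wrapper.get_particle_boundary_buffer_size("electrons", "eb")

# z positions of the scraped particles, per tile, on level 0
z_tiles = buffer_wrapper.get_particle_boundary_buffer("electrons", "eb", "z", 0)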

Modify Solvers

From Python, one can also replace numerical solvers in the PIC loop or add new physical processes into the time step loop. Examples:

  • Capacitive Discharge: replaces the Poisson solver of an electrostatic simulation (default: MLMG) with a Python function that uses SuperLU to directly solve the Poisson equation.
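
A minimal sketch of such a replacement, using the poissonsolver callback location listed above (my_poisson_solve is a hypothetical user-supplied direct solver; the field-wrapper constructor arguments are assumptions):

from pywarpx import fields
from pywarpx.callbacks import installcallback

def solve():
    # called in place of WarpX's computePhi (electrostatic simulations only)
    rho = fields.RhoFPWrapper(level=0)[...]
    phi = my_poisson_solve(rho)  # hypothetical direct solve, e.g., via SuperLU
    fields.PhiFPWrapper(level=0)[...] = phi

installcallback('poissonsolver', solve)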