Commit 1c1e3af

Remove old Fortran style from particle-mesh documentation (AMReX-Codes#4761)
The proposed changes:
- [ ] fix a bug or incorrect behavior in AMReX
- [ ] add new capabilities to AMReX
- [ ] changes answers in the test suite to more than roundoff level
- [ ] are likely to significantly affect the results of downstream AMReX users
- [x] include documentation in the code and/or rst files, if appropriate
1 parent 8af3d94 · commit 1c1e3af

File tree: 1 file changed (+74, -55 lines)


Docs/sphinx_documentation/source/Particle.rst

Lines changed: 74 additions & 55 deletions
@@ -471,47 +471,43 @@ Interacting with Mesh Data
 It is common to want to have the mesh communicate information to the particles
 and vice versa. For example, in Particle-in-Cell calculations, the particles
 deposit their charges onto the mesh, and later, the electric fields computed on
-the mesh are interpolated back to the particles. Below, we show examples of
-both these sorts of operations.
+the mesh are interpolated back to the particles.
+
+To help perform these sorts of operations, we provide the :cpp:`ParticleToMesh`
+and :cpp:`MeshToParticle` functions. These functions operate on an entire
+:cpp:`ParticleContainer` at once, interpolating data between the particles and
+an input :cpp:`MultiFab`. A user-provided lambda function is passed in that
+specifies the kind of interpolation to perform. Any needed parallel communication
+(from particles that contribute some of their weight to guard cells, for example)
+is performed internally. Additionally, these methods support both a single-grid
+(the particles and the mesh use the same boxes and distribution mappings) and a dual-grid
+(the particles and the mesh have different layouts) formalism. In the latter case,
+the needed parallel communication is also performed internally.
+
+We show examples of these types of operations below. The first snippet shows
+how to deposit a particle quantity from the first real component of the particle
+data to the first component of a :cpp:`MultiFab` using linear interpolation.
 
 .. highlight:: c++
 
 ::
 
 
-    Ex.FillBoundary(gm.periodicity());
-    Ey.FillBoundary(gm.periodicity());
-    Ez.FillBoundary(gm.periodicity());
-    for (MyParIter pti(MyPC, lev); pti.isValid(); ++pti) {
-        const Box& box = pti.validbox();
+    const auto plo = geom.ProbLoArray();
+    const auto dxi = geom.InvCellSizeArray();
+    amrex::ParticleToMesh(myPC, partMF, 0,
+        [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleTileType::ConstParticleTileDataType& ptd, int i,
+                              amrex::Array4<amrex::Real> const& rho)
+        {
+            auto p = ptd[i];
+            ParticleInterpolator::Linear interp(p, plo, dxi);
 
-        const auto& particles = pti.GetArrayOfStructs();
-        int nstride = particles.dataShape().first;
-        const long np = pti.numParticles();
-
-        const FArrayBox& exfab = Ex[pti];
-        const FArrayBox& eyfab = Ey[pti];
-        const FArrayBox& ezfab = Ex[pti];
-
-        interpolate_cic(particles.data(), nstride, np,
-                        exfab.dataPtr(), eyfab.dataPtr(), ezfab.dataPtr(),
-                        box.loVect(), box.hiVect(), plo, dx, &ng);
-    }
-
-Here, :fortran:`interpolate_cic` is a Fortran subroutine that actually performs
-the interpolation on a single box. :cpp:`Ex`, :cpp:`Ey`, and :cpp:`Ez` are
-MultiFabs that contain the electric field data. These MultiFabs must be defined
-with the correct number of ghost cells to perform the desired type of
-interpolation, and we call :cpp:`FillBoundary` prior to the Fortran call so
-that those ghost cells will be up-to-date.
-
-In this example, we have assumed that the :cpp:`ParticleContainer MyPC` has
-been defined on the same grids as the electric field MultiFabs, so that we use
-the :cpp:`ParIter` to index into the MultiFabs to get the data associated with
-current tile. If this is not the case, then an additional copy will need to be
-performed. However, if the particles are distributed in an extremely uneven
-fashion, it is possible that the load balancing improvements associated with
-the two-grid approach are worth the cost of the extra copy.
+            interp.ParticleToMesh(p, rho, 0, 0, 1,
+                [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& part, int comp)
+                {
+                    return part.rdata(comp);
+                });
+        });
 
 The inverse operation, in which the mesh communicates data *to* the particles,
 is quite similar:
@@ -521,33 +517,56 @@ is quite similar:
 ::
 
 
-    rho.setVal(0.0, ng);
-    for (MyParIter pti(*this, lev); pti.isValid(); ++pti) {
-        const Box& box = pti.validbox();
+    amrex::MeshToParticle(myPC, acceleration, 0,
+        [=] AMREX_GPU_DEVICE (MyParticleContainer::ParticleType& p,
+                              amrex::Array4<const amrex::Real> const& acc)
+        {
+            ParticleInterpolator::Linear interp(p, plo, dxi);
+
+            interp.MeshToParticle(p, acc, 0, 1+AMREX_SPACEDIM, AMREX_SPACEDIM,
+                [=] AMREX_GPU_DEVICE (amrex::Array4<const amrex::Real> const& arr,
+                                      int i, int j, int k, int comp)
+                {
+                    return arr(i, j, k, comp); // no weighting
+                },
+                [=] AMREX_GPU_DEVICE (MyParticleContainer::ParticleType& part,
+                                      int comp, amrex::Real val)
+                {
+                    part.rdata(comp) += ParticleReal(val);
+                });
+        });
+
+In this case, we linearly interpolate :cpp:`AMREX_SPACEDIM` values starting from the 0th
+component of the input :cpp:`MultiFab` to the particles, writing them starting at
+particle component :cpp:`1+AMREX_SPACEDIM`. Note that :cpp:`ParticleInterpolator::MeshToParticle`
+takes *two* lambda functions, one that generates the mesh quantity to interpolate and
+another that specifies how to update the particle value.
+
+Finally, the snippet below shows how to use this function to simply count the number
+of particles in each cell (i.e. to deposit using "nearest neighbor" interpolation).
 
-        const auto& particles = pti.GetArrayOfStructs();
-        int nstride = particles.dataShape().first;
-        const long np = pti.numParticles();
-
-        FArrayBox& rhofab = (*rho[lev])[pti];
+.. highlight:: c++
 
-        deposit_cic(particles.data(), nstride, np, rhofab.dataPtr(),
-                    box.loVect(), box.hiVect(), plo, dx);
-    }
+::
 
-    rho.SumBoundary(gm.periodicity());
+    amrex::ParticleToMesh(myPC, partiMF, 0,
+        [=] AMREX_GPU_DEVICE (const MyParticleContainer::SuperParticleType& p,
+                              amrex::Array4<int> const& count)
+        {
+            ParticleInterpolator::Nearest interp(p, plo, dxi);
 
-As before, we loop over all our particles, calling a Fortran routine that
-deposits them on to the appropriate :cpp:`FArrayBox rhofab`. The :cpp:`rhofab`
-must have enough ghost cells to cover the support of all the particles
-associated with them. Note that we call :cpp:`SumBoundary` instead of
-:cpp:`FillBoundary` after performing the deposition, to add up the charge in
-the ghost cells surrounding each Fab into the corresponding valid cells.
+            interp.ParticleToMesh(p, count, 0, 0, 1,
+                [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& /*p*/, int /*comp*/) -> int
+                {
+                    return 1; // just count the particles per cell
+                });
+        });
 
-For a complete example of an electrostatic PIC calculation that includes static
-mesh refinement, please see the `Electrostatic PIC tutorial`.
+For more complex examples of interacting with mesh data, we refer readers to
+our `Electromagnetic PIC tutorial <https://github.com/AMReX-Codes/amrex-tutorials/tree/main/ExampleCodes/Particles/ElectromagneticPIC>`_.
 
-.. _`Electrostatic PIC tutorial`: https://amrex-codes.github.io/amrex/tutorials_html/Particles_Tutorial.html#electrostaticpic
+Alternatively, for a complete example of an electrostatic PIC calculation that includes static
+mesh refinement, please see the `Electrostatic PIC tutorial <https://github.com/AMReX-Codes/amrex-tutorials/tree/main/ExampleCodes/Particles/ElectrostaticPIC>`_.
 
 .. _sec:Particles:ShortRange:
 
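A note on the snippets above: they assume a particle container type with enough real components to hold both the deposited quantity and the gathered mesh data. A minimal sketch of a matching type definition, under the assumption (illustrative, not part of this commit) that real component 0 holds the deposited quantity and the later components receive the values written by :cpp:`MeshToParticle`::

    #include <AMReX_Particles.H>

    // Hypothetical layout matching the component indices used in the
    // snippets: real component 0 is the deposited quantity (e.g. charge),
    // components 1..AMREX_SPACEDIM hold a vector quantity, and components
    // 1+AMREX_SPACEDIM..2*AMREX_SPACEDIM receive the gathered mesh data.
    using MyParticleContainer = amrex::ParticleContainer<1 + 2*AMREX_SPACEDIM, 0>;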
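The new text also mentions a dual-grid mode, in which the deposition target is laid out differently from the particle grids. A minimal sketch of that case, assuming a hypothetical :cpp:`BoxArray ba` and :cpp:`DistributionMapping dm` that differ from the particle container's layout (the component and ghost-cell counts are likewise assumed); the deposition call itself is unchanged from the single-grid snippet::

    // A mesh layout that need not match the particle container's grids;
    // ba, dm, and the counts below are illustrative assumptions.
    amrex::MultiFab partMF(ba, dm, 1, 1);

    // Same call as in the single-grid example; ParticleToMesh handles the
    // parallel communication between the two layouts internally.
    amrex::ParticleToMesh(myPC, partMF, 0,
        [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleTileType::ConstParticleTileDataType& ptd, int i,
                              amrex::Array4<amrex::Real> const& rho)
        {
            auto p = ptd[i];
            ParticleInterpolator::Linear interp(p, plo, dxi);
            interp.ParticleToMesh(p, rho, 0, 0, 1,
                [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& part, int comp)
                {
                    return part.rdata(comp);
                });
        });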
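Similarly, the particle-counting snippet deposits into an :cpp:`amrex::Array4<int>`, so its target :cpp:`partiMF` is integer-valued. A sketch of a matching definition, with the layout and ghost-cell count again assumed::

    // Integer-valued target for the particle-count deposition; one
    // component and one ghost cell are assumed for illustration.
    amrex::iMultiFab partiMF(ba, dm, 1, 1);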