Welcome to the PySPH documentation!

PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. Users can implement an SPH formulation in pure Python and still obtain excellent performance. PySPH can make use of multiple cores via OpenMP or be run seamlessly in parallel using MPI.

Here are some videos of simulations made with PySPH.

PySPH is hosted on Bitbucket. Please see the Bitbucket site for development details.

Overview

PySPH: a Python-based SPH framework

PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. It is implemented in Python and the performance critical parts are implemented in Cython.

PySPH is implemented in a way that allows a user to specify the entire SPH simulation in pure Python. High-performance code is generated from this high-level Python code, compiled on the fly and executed. PySPH can use OpenMP to utilize multi-core CPUs effectively. PySPH also features optional automatic parallelization (multi-CPU) using mpi4py and Zoltan. If you wish to use the parallel capabilities you will need to have these installed.


Features

  • User scripts and equations are written in pure Python.
  • Flexibility to define arbitrary SPH equations operating on particles.
  • Ability to define your own multi-step integrators in pure Python.
  • High-performance: our performance is comparable to hand-written solvers implemented in FORTRAN.
  • Seamless multi-core support with OpenMP.
  • Seamless parallel integration using Zoltan.
  • BSD license.

SPH formulations

PySPH currently includes numerous examples that solve the viscous, incompressible Navier-Stokes equations using the weakly compressible (WCSPH) approach. Other implemented formulations include the transport velocity formulation and equations for solid mechanics (see the examples described later in this document). Some representative results:

[Figure _images/db3d.png: 3D dam-break past an obstacle (SPHERIC benchmark Test 2)]

[Figure _images/ldc-streamlines.png: Streamlines for a driven cavity]

[Figure _images/rings-collision.png: Collision of two elastic rings]

Credits

PySPH is primarily developed at the Department of Aerospace Engineering, IIT Bombay. We are grateful to IIT Bombay for the support. Our primary goal is to build a powerful SPH-based tool for both application and research. We hope that this makes it easy to perform reproducible computational research.

Lead developers:

  • Prabhu Ramachandran
  • Kunal Puri

Earlier developers:

  • Pankaj Pandey (stress solver and improved load balancing, 2011)
  • Chandrashekhar Kaushik (original parallel and serial implementation in 2009)

The following have contributed bug-fixes, features, documentation etc.

  • Arkopal Dutt, Arpit Agarwal, Sarang Minhas, S Saravanan, Vishnu Sivadasan

Citing PySPH

We are in the process of writing an article on the new PySPH framework as it stands today. In the meantime, if you use PySPH and wish to cite it, you may use this:

  • Prabhu Ramachandran and Kunal Puri, PySPH: A framework for parallel particle simulations, In proceedings of the 3rd International Conference on Particle-Based Methods (Particles 2013), Stuttgart, Germany, 18th September 2013.

History

  • 2009: PySPH started with a simple Cython based 1D implementation written by Prabhu.
  • 2009-2010: Chandrashekhar Kaushik worked on a full 3D SPH implementation with a more general purpose design. The implementation was in a mix of Cython and Python.
  • 2010-2012: The previous implementation was a little too complex and was largely overhauled by Kunal and Pankaj. This became the PySPH 0.9beta release. The difficulty with this version was that it was almost entirely written in Cython, making it hard to extend or add new formulations without writing more Cython code. Doing this was difficult and not too pleasant. In addition, it was not as fast as we would have liked, and it ended up feeling like we might as well have implemented it all in C++ and exposed a Python interface to that.
  • 2011-2012: Kunal also implemented SPH2D and another internal version called ZSPH in Cython, which included Zoltan-based parallelization using PyZoltan. This was specific to his PhD research and again required writing Cython, making it difficult for the average user to extend.
  • 2013-present: In early 2013, Prabhu reimplemented the core of PySPH to be almost entirely auto-generated from pure Python. The resulting code was faster than previous implementations and very easy to extend entirely from pure Python. Kunal and Prabhu integrated PyZoltan into PySPH and the current version of PySPH was born. Subsequently, OpenMP support was added in 2015.

Support

If you have any questions or are running into any difficulties with PySPH, please email or post your questions on the pysph-users mailing list here: https://groups.google.com/d/forum/pysph-users

Please also take a look at the PySPH issue tracker.

Installation and getting started

Installation and getting started

To install PySPH, you need a working Python environment with the required dependencies installed. You may use any of the available Python distributions. Note that PySPH is currently tested with Python 2.x and not (yet) with 3.x. If you are new to Python, we recommend Enthought Canopy. PySPH will also work fine with Anaconda or other environments like WinPython. The following instructions should help you get started.

Since there is a lot of information here, we suggest that you skim the section on Dependencies and then jump directly to one of the “Installing the dependencies on xxx” sections below, depending on your operating system. Then simply follow the instructions and links given there for your chosen Python distribution.

Dependencies

Core dependencies

The core dependencies are:

  • NumPy
  • Cython
  • Mako
  • nose (for running the tests)

These packages can be installed from your Python distribution’s package manager, or using pip or easy_install. For more detailed instructions on how to do this for different distributions, see below.

Running PySPH requires a working C/C++ compiler on your machine. On Linux/OS X the gcc toolchain will work well. On Windows, you will need to have Microsoft Visual C++ Compiler for Python 2.7 or an equivalent compiler. More details are available below.

Note

PySPH generates high-performance code and compiles it on the fly. This requires a working C/C++ compiler even after installing PySPH.

Optional dependencies

The optional dependencies are:

  • OpenMP: PySPH can use OpenMP if it is available. Installation instructions are available below.
  • Mayavi: PySPH provides a convenient viewer to visualize the output of simulations. This viewer is called pysph_viewer and requires Mayavi to be installed. Since this is only a viewer, it is optional; however, we highly recommend installing it as the viewer is very convenient.
  • mpi4py and Zoltan: If you want to use PySPH in parallel, you will need mpi4py and the Zoltan data management library. PySPH will work in serial without mpi4py or Zoltan. Simple build instructions for Zoltan are included below.

Mayavi is packaged with all the major distributions and is easy to install. Zoltan is very unlikely to be already packaged and will need to be compiled.

Building and linking PyZoltan on OSX/Linux

We’ve provided a simple Zoltan build script in the repository. This works on Linux and OS X but not on Windows. It can be used as:

$ ./build_zoltan.sh INSTALL_PREFIX

where the INSTALL_PREFIX is where the library and includes will be installed. You may edit and tweak the build to suit your installation. However, this script is what we use to build Zoltan on our continuous integration servers on Drone and Shippable.

After Zoltan is built, set the environment variable ZOLTAN to point to the INSTALL_PREFIX that you used above:

$ export ZOLTAN=$INSTALL_PREFIX

Replace $INSTALL_PREFIX with the directory you specified above. After this, follow the instructions to build PySPH. The PyZoltan wrappers will be compiled and made available.

Note

The installation will use $ZOLTAN/include and $ZOLTAN/lib to find the actual directories. If these do not work for your particular installation for whatever reason, set the environment variables ZOLTAN_INCLUDE and ZOLTAN_LIBRARY explicitly without setting ZOLTAN. If you used the above script, this would be:

$ export ZOLTAN_INCLUDE=$INSTALL_PREFIX/include
$ export ZOLTAN_LIBRARY=$INSTALL_PREFIX/lib

Installing the dependencies on GNU/Linux

GNU/Linux is probably the easiest platform on which to install PySPH. On Ubuntu, the dependencies may be installed using:

$ sudo apt-get install build-essential python-dev python-numpy \
    python-mako cython python-nose mayavi2 python-qt4 python-virtualenv

OpenMP is typically available but can be installed with:

$ sudo apt-get install libgomp1

If you need parallel support:

$ sudo apt-get install libopenmpi-dev python-mpi4py
$ ./build_zoltan.sh ~/zoltan # Replace ~/zoltan with what you want
$ export ZOLTAN=~/zoltan

On Linux it is probably best to install PySPH into its own virtual environment. This will allow you to install PySPH as a user without any superuser privileges. See the section below on Using a virtualenv for PySPH. In short, do the following:

$ virtualenv --system-site-packages pysph_env
$ source pysph_env/bin/activate
$ pip install cython --upgrade # if you have an old version.

You should be set now and should skip to Downloading PySPH and Building and Installing PySPH.

If you are using Enthought Canopy or Anaconda the instructions in the section Installing the dependencies on Mac OS X will be useful as the instructions are similar.

Note

If you wish to see a working build/test script, please see our shippable.yml or the build script hosted at Drone.io.

Installing the dependencies on Mac OS X

On OS X, your best bet is to install Enthought Canopy or Anaconda or some other Python distribution. Ensure that you have gcc or clang installed by installing XCode. See this if you installed XCode but can’t find clang or gcc.

OpenMP on OSX

If you need to use OpenMP, note that the default clang compiler on OS X does not support it. There are some experimental versions available. One easy-to-install option is to use brew to install gcc. For example you can try:

$ brew install gcc

The build may not find omp.h; you can work around this by manually linking to it as follows (modify this to suit your installation):

$ cd /usr/local/include
$ sudo ln -s ../Cellar/gcc/4.9.2_1/lib/gcc/4.9/gcc/x86_64-apple-darwin12.6.0/4.9.2/include/omp.h .

Once this is done, you need to use this as your default compiler. You can tell Python to use it by setting:

$ export CC=gcc-4.9
$ export CXX=g++-4.9

Using Canopy

Download the Canopy Express installer for your platform (the full installer is also fine). Launch Canopy after you install it so that it initializes your user environment. If you have made Canopy your default Python, all should be well; otherwise, launch the Canopy terminal from the Tools menu of the Canopy editor before typing the commands below.

NumPy ships by default but Cython does not. Mako and Cython can be installed with pip easily (pip will be available in your Canopy environment):

$ pip install cython mako

Mayavi is best installed with the Canopy package manager:

$ enpkg mayavi

Note

If you are a subscriber you can also enpkg cython to install Enthought’s build.

If you need parallel support, please see Installing mpi4py and Zoltan on OS X, otherwise, skip to Downloading PySPH and Building and Installing PySPH.

Using Anaconda

After installing Anaconda, you will need to make sure the dependencies are installed:

$ conda install cython mayavi
$ pip install mako

If you need parallel support, please see Installing mpi4py and Zoltan on OS X, otherwise, skip to Downloading PySPH and Building and Installing PySPH.

Installing mpi4py and Zoltan on OS X

In order to build/install mpi4py one first has to install the MPI library. This is easily done with Homebrew as follows (you need to have brew installed for this but that is relatively easy to do):

$ brew install open-mpi

After this is done, one can install mpi4py by hand. First download mpi4py from here. Then run the following (modify these to suit your XCode installation and version of mpi4py):

$ cd /tmp
$ tar xvzf ~/Downloads/mpi4py-1.3.1.tar.gz
$ cd mpi4py-1.3.1
$ export MACOSX_DEPLOYMENT_TARGET=10.7
$ export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/
$ python setup.py install

Change the above environment variables to suit your SDK version. If this installs correctly, mpi4py should be available. You can now build Zoltan (the script to do this is in the pysph sources; see Downloading PySPH):

$ cd pysph
$ ./build_zoltan.sh ~/zoltan # Replace ~/zoltan with what you want
$ export ZOLTAN=~/zoltan

You should be set now and should move to Building and Installing PySPH.

Installing the dependencies on Windows

While it should be possible to use mpi4py and Zoltan on Windows, we do not at this point have much experience with this. Feel free to experiment and let us know if you’d like to share your instructions. The following instructions are all without parallel support.

Using Canopy

Download and install Canopy Express for your Windows machine (32 or 64 bit). Launch the Canopy editor at least once so it sets up your user environment. Make the Canopy Python the default Python when it prompts you. If you have already skipped that option, you may enable it in the Edit->Preferences menu. With that done, you may install the required dependencies. You can either use the Canopy package manager or use the command line; we will use the command line for the rest of the instructions. To start a command line, click on “Start” and navigate to the All Programs/Enthought Canopy menu. Select the “Canopy command prompt”. If you made Canopy your default Python, starting an ordinary command prompt (via cmd.exe) will also work.

On the command prompt, Mako and Cython can be installed with pip easily (pip should be available in your Canopy environment):

> pip install cython mako

Mayavi is best installed with the Canopy package manager:

> enpkg mayavi

Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python 2.7.

Note

If you are a subscriber you can also enpkg cython to install Enthought’s build.

Using WinPython

Instead of Canopy or Anaconda you could try WinPython 2.7.x.x. To obtain the core dependencies, download the corresponding binaries from Christoph Gohlke’s Unofficial Windows Binaries for Python Extension Packages. Mayavi is available through the binary ETS.

You can now add these binaries to your WinPython installation by going to the WinPython Control Panel. The option to add packages is available under the Install/upgrade packages section.

Make sure to set your system PATH variable to point to the location of the scripts as required. If you have installed WinPython 2.7.6 64-bit, set your system PATH to include <path to installation folder>/python-2.7.6.amd64 and <path to installation folder>/python-2.7.6.amd64/Scripts/.

Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python 2.7.

Using Anaconda

Install Anaconda for your platform, make it the default and then install the required dependencies:

$ conda install cython mayavi
$ pip install mako

Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python 2.7.

Installing Visual C++ Compiler for Python 2.7

For all of the above Python distributions, it is highly recommended that you build PySPH with Microsoft’s Visual C++ for Python 2.7. We recommend that you download and install the VCForPython27.msi available from the link. Make sure you install the system requirements specified on that page. For example, on Windows 7 you will need to install the Microsoft Visual C++ 2008 SP1 Redistributable Package for your platform (x86 for 32 bit or x64 for 64 bit), and on Windows 8 you will need to install the .NET framework 3.5. Please look at the link given above; it should be fairly straightforward. Note that doing this will also get OpenMP working for you.

After you do this, you will find a “Microsoft Visual C++ Compiler Package for Python 2.7” in your Start menu. Choose a suitable command prompt from this menu for your architecture and start it (we will call this the MSVC command prompt). You may make a shortcut to it, as you will need to use this command prompt to build PySPH and also to run any of the examples.

See section Downloading PySPH and get a copy of PySPH. On the MSVC command prompt, do the following (replace pysph with the path to where you have downloaded and unzipped the PySPH sources):

> cd pysph
> .\windows_env.bat

Once this is done, you may follow section Building and Installing PySPH. You may notice that the windows_env.bat convenience batch file merely sets two environment variables:

> SET DISTUTILS_USE_SDK=1
> SET MSSdk=1

If you are using pip to directly install PySPH you will need to set these manually.

Warning

On 64 bit Windows, do not build PySPH with mingw64 as it does not work reliably at all and frequently crashes. YMMV with mingw32 but it is safer and just as easy to use the MS VC++ compiler.

Using a virtualenv for PySPH

A virtualenv allows you to create an isolated environment for PySPH and its related packages. This is useful in a variety of situations.

  • Your OS does not provide a recent enough Cython version (say you are running Debian stable).
  • You do not have root access to install any packages PySPH requires.
  • You do not want to mess up your system files and wish to localize any installations inside directories you control.
  • You wish to use other packages with conflicting requirements.
  • You want PySPH and its related packages to be in an “isolated” environment.

You can either install virtualenv (or ask your system administrator to) or just download the virtualenv.py script and use it (run python virtualenv.py after you download the script).

Create a virtualenv like so:

$ virtualenv --system-site-packages pysph_env

This creates a directory called pysph_env which contains all the relevant files for your virtualenv, including any new packages you install into it. You can simply delete this directory if you no longer need it. This virtualenv will also “inherit” packages from your system. Hence, if your system administrator has already installed NumPy, it may be imported from your virtual environment and you do not need to install it. This is very useful for large packages like Mayavi, Qt etc.

Note

If your version of virtualenv does not support the --system-site-packages option, please use the virtualenv.py script mentioned above.

Once you create a virtualenv you can activate it as follows (on a bash shell):

$ source pysph_env/bin/activate

On Windows, you run a batch file as follows:

> pysph_env\Scripts\activate

This sets up the PATH to point to your virtualenv’s Python. You may now run any normal Python commands and it will use your virtualenv’s Python. For example you can do the following:

$ virtualenv myenv
$ source myenv/bin/activate
(myenv) $ pip install Cython mako nose
(myenv) $ cd pysph
(myenv) $ python setup.py install

Now PySPH will be installed into myenv. You may deactivate your virtualenv using the deactivate command:

(myenv) $ deactivate
$

On Windows, use myenv\Scripts\activate.bat and myenv\Scripts\deactivate.bat.

If for whatever reason you wish to delete myenv just remove the entire directory:

$ rm -rf myenv

Note

With a virtualenv, one should be careful while running things like ipython or nosetests as these are sometimes also installed on the system in /usr/bin. If you suspect that you are not running the correct Python, you could simply run (on Linux/OS X):

$ python `which ipython`

to be absolutely sure.

Using Virtualenv on Canopy

If you are using Enthought Canopy, it already bundles virtualenv for you but you should use the venv script. For example:

$ venv --help
$ venv --system-site-packages myenv
$ source myenv/bin/activate

The rest of the steps are the same as above.

Downloading PySPH

One way to install PySPH is to use pip:

$ pip install PySPH

This installs PySPH, and you should be able to import it and use its modules in your own Python scripts. However, it will not give you access to the examples. The best way to get those is to get PySPH from git or to download a tarball or ZIP as described below.

To get PySPH using git, type the following:

$ git clone https://bitbucket.org/pysph/pysph.git

If you do not have git or do not wish to bother with it, you can get a ZIP or tarball from the pysph site. You can unzip/untar this and use the sources.

In the instructions, we assume that you have the pysph sources in the directory pysph and are inside the root of this directory. For example:

$ unzip pysph-pysph-*.zip
$ cd pysph-pysph-1ce*

or if you cloned the repository:

$ git clone https://bitbucket.org/pysph/pysph.git
$ cd pysph

Once you have downloaded PySPH you should be ready to build and install it, see Building and Installing PySPH.

Building and Installing PySPH

Once you have the dependencies installed you can install PySPH with:

$ pip install PySPH

If you downloaded PySPH using git or used a tarball you can do:

$ python setup.py install

You could also do:

$ python setup.py develop

This is useful if you are tracking the latest version of PySPH via git. With git you can update the sources and rebuild using:

$ git pull
$ python setup.py develop

You should be all set now and should next consider Running the tests.

Running the tests

If you installed PySPH using python setup.py develop you can run the tests as:

$ python -m nose.core pysph

If you installed PySPH using python setup.py install you should run the tests as:

$ python -m nose.core -w docs pysph

This is because nosetests would otherwise incorrectly pick up the local pysph packages instead of the installed version. The -w option simply changes the active directory to docs. Alternatively, change to some other directory that does not contain the pysph directory and run the first command above.

If you see errors you might want more verbose reporting which you can get with:

$ python -m nose.core -v pysph

This should run all the tests that do not take a long while to complete. If this fails, please contact the pysph-users mailing list or send us email.

Once you run the tests, you should see the section on Running the examples.

Note

We use python -m nose.core instead of nosetests as this ensures that the right Python executable is used. nosetests is sometimes installed in the system in /usr/bin/nosetests and running that would pick up the system Python instead of the one in a virtualenv. This results in spurious test errors, leading to confusion.

Running the examples

You can verify the installation by exploring some examples:

$ cd examples
$ python elliptical_drop.py

Try this:

$ python elliptical_drop.py -h

to see the different options supported by each example. You can view the data generated by the simulation (after the simulation is complete or during the simulation) by running the pysph_viewer application. To view the simulated data you may do:

$ pysph_viewer elliptical_drop_output/*.npz

If you have Mayavi installed this should show a UI that looks like:

PySPH viewer

There are other examples like those in the transport_velocity directory:

$ cd transport_velocity
$ python cavity.py

This runs the driven cavity problem using the transport velocity formulation of Adami et al. You can verify the results for this problem using the helper script examples/transport_velocity/ldcavity_results.py to plot, for example, the streamlines:

_images/ldc-streamlines.png

If you want to use PySPH for elastic dynamics, you can try some of the examples from Gray et al., Comput. Methods Appl. Mech. Engrg. 190 (2001), 6641-6662:

$ cd examples/solid_mech
$ python rings.py

This runs the problem of the collision of two elastic rings:

_images/rings-collision.png

The auto-generated code for the example resides in the directory ~/.pysph/source. A note of caution, however: it is not for the faint-hearted.

Running the examples with OpenMP

If you have OpenMP available run any of the examples as follows:

$ python elliptical_drop.py --openmp

This should run faster if you have multiple cores on your machine. If you wish to change the number of threads to run simultaneously, you can try the following:

$ OMP_NUM_THREADS=8 python elliptical_drop.py --openmp

You may need to set the number of threads to about 4 times the number of physical cores on your machine to obtain the most scale-up. If you wish to time the actual scale up of the code with and without OpenMP you may want to disable any output (which will be serial), you can do this like:

$ python elliptical_drop.py --disable-output --openmp

Running the examples with MPI

If you compiled PySPH with Zoltan and have mpi4py installed you may run any of the examples with MPI as follows:

$ mpirun -np 4 python dam_break3D.py

This may not give you significant speedup if the problem is too small. You can also combine OpenMP and MPI if you wish. You should take care to setup the MPI host information suitably to utilize the processors effectively.

Organization of the pysph package

PySPH is organized into several sub-packages. These are:

  • pysph.base: This subpackage defines the pysph.base.particle_array.ParticleArray, pysph.base.carray.CArray (which are used by the particle arrays), the various SPH Kernels, the nearest neighbor particle search (NNPS) code, and the Cython code generation utilities.
  • pysph.sph: Contains the various SPH equations, the Integrator module and associated integration steppers, and the code generation for the SPH looping. pysph.sph.wc contains the equations for the weakly compressible formulation. pysph.sph.solid_mech contains the equations for solid mechanics and pysph.sph.misc has miscellaneous equations.
  • pysph.solver: Provides the pysph.solver.solver.Solver, the pysph.solver.application.Application and a convenient way to interact with the solver as it is running.
  • pysph.parallel: Provides the parallel functionality.
  • pysph.tools: Provides some useful tools including the pysph_viewer which is based on Mayavi.

Learning the ropes

In the tutorials, we introduce the PySPH framework in the context of the examples provided. Read this if you are a casual user and want to use the framework as is. If you want to add new functions and capabilities to PySPH, you should read The PySPH framework. If you are new to PySPH, however, we highly recommend that you go through this document first.

Recall that PySPH is a framework for parallel SPH-like simulations in Python. The idea, therefore, is to provide a user-friendly mechanism to set up problems while leaving the internal details to the framework. All examples follow these steps:

_images/pysph-examples-common-steps.png

The tutorials address each of the steps in this flowchart for problems with increasing complexity.

The first example we consider is a “patch” test for SPH formulations for incompressible fluids in examples/elliptical_drop.py. This problem simulates the evolution of a 2D circular patch of fluid under the influence of an initial velocity field given by:

\[\begin{split}u &= -100 x \\ v &= 100 y\end{split}\]

The kinematical constraint of incompressibility causes the initially circular patch of fluid to deform into an ellipse such that the volume (area) is conserved. An expression can be derived for this deformation which makes it an ideal test to verify codes.

Imports

Taking a look at the example, the first several lines are imports of various modules:

from numpy import ones_like, mgrid, sqrt, array, savez
from time import time

# PySPH base and carray imports
from pysph.base.utils import get_particle_array_wcsph
from pysph.base.kernels import CubicSpline
from pyzoltan.core.carray import LongArray

# PySPH solver and integrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import PECIntegrator

# PySPH sph imports
from pysph.sph.basic_equations import ContinuityEquation, XSPHCorrection
from pysph.sph.wc.basic import TaitEOS, MomentumEquation

Note

This is common for all examples and it is worth noting the pattern of the PySPH imports. Fundamental SPH constructs like the kernel and particle containers are imported from the base subpackage. The framework-related objects like the solver and integrator are imported from the solver subpackage. Finally, from the sph subpackage we import the physics-related parts for this problem.

Functions for loading/generating the particles

Next in the code are two functions called exact_solution and get_circular_patch. The former produces an exact solution for comparison; the latter looks like:

def get_circular_patch(dx=0.025, **kwargs):
    """Create the circular patch of fluid."""
    name = 'fluid'
    x,y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx]
    x = x.ravel()
    y = y.ravel()

    m = ones_like(x)*dx*dx
    h = ones_like(x)*hdx*dx
    rho = ones_like(x) * ro

    p = ones_like(x) * 1./7.0 * co**2
    cs = ones_like(x) * co

    u = -100*x
    v = 100*y

    # remove particles outside the circle
    indices = []
    for i in range(len(x)):
        if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10:
            indices.append(i)

    pa = get_particle_array_wcsph(x=x, y=y, m=m, rho=rho, h=h, p=p, u=u, v=v,
                                  cs=cs, name=name)

    la = LongArray(len(indices))
    la.set_data(array(indices))

    pa.remove_particles(la)

    print "Elliptical drop :: %d particles"%(pa.get_number_of_particles())

    # add requisite variables needed for this formulation
    for name in ('arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'rho0', 'u0',
                 'v0', 'w0', 'x0', 'y0', 'z0'):
        pa.add_property(name)

    return [pa,]

and is used to initialize the particles in Python. In PySPH, we use a ParticleArray object as a container for particles of a given species. You can think of a particle species as any homogeneous entity in a simulation. For example, in a two-phase air-water flow, a species could be used to represent each phase. A ParticleArray can be conveniently created from the interpreter prompt using NumPy arrays. For example:

>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.01, 0:1:0.01]
>>> x = x.ravel(); y = y.ravel()
>>> pa = get_particle_array(x=x, y=y)

would create a ParticleArray, representing a uniform distribution of particles on a Cartesian lattice in 2D using the helper function get_particle_array() in the base subpackage.

Note

ParticleArrays in PySPH use flattened or one-dimensional arrays.

The ParticleArray is highly convenient, supporting methods for insertions, deletions and concatenations. In the get_circular_patch function, we use this convenience to remove a list of particles that fall outside a circular region:

pa.remove_particles(la)

where a list of indices is provided in the form of a LongArray, which, as the name suggests, is an array of 64-bit integers.
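For instance, here is a minimal sketch (mirroring the get_circular_patch code above) that removes three hypothetical particles by index, given a ParticleArray pa as created earlier:

import numpy
from pyzoltan.core.carray import LongArray

indices = numpy.array([0, 5, 9])  # indices of the particles to remove
la = LongArray(len(indices))
la.set_data(indices)
pa.remove_particles(la)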

Note

Any one-dimensional (NumPy) array is valid input for PySPH. You can generate this from an external program for solid modelling and load it.

Note

PySPH works with multiple ParticleArrays. This is why we actually return a list in the last line of the get_circular_patch function above.
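For instance, a particle factory for a problem with a boundary might (hypothetically) return two arrays, one per species; the coordinates below are purely illustrative:

import numpy
from pysph.base.utils import get_particle_array_wcsph

def make_particles():
    # A fluid block and a boundary wall, each as its own ParticleArray.
    xf, yf = numpy.mgrid[0:0.5:0.01, 0:0.5:0.01]
    fluid = get_particle_array_wcsph(name='fluid', x=xf.ravel(),
                                     y=yf.ravel())
    xb = numpy.arange(0.0, 1.0, 0.01)
    boundary = get_particle_array_wcsph(name='boundary', x=xb,
                                        y=numpy.zeros_like(xb))
    return [fluid, boundary]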

Setting up the PySPH framework

As we move on, we encounter instantiations of the PySPH framework objects. These are the pysph.solver.application.Application, pysph.sph.integrator.PECIntegrator and pysph.solver.solver.Solver objects:

# Create the application.
app = Application()

kernel = CubicSpline(dim=2)

integrator = PECIntegrator(fluid=WCSPHStep())

# Create and setup a solver.
solver = Solver(kernel=kernel, dim=2, integrator=integrator)

# Setup default parameters.
solver.set_time_step(1e-5)
solver.set_final_time(0.0075)

The Application makes it easy to pass command line arguments to the solver. It is also important for the seamless parallel execution of the same example. To appreciate the role of the Application, consider for a moment how we might write a parallel version of the same example. At some point, we would need some MPI imports and the particles would have to be created in a distributed fashion. All this (and more) is handled through the abstraction of the Application, which hides these details from the user.

Intuitively, in an SPH simulation, the role of the PECIntegrator should be obvious. In the code, we see that we ask for the “fluid” to be stepped using a WCSPHStep object. Taking a look at the get_circular_patch function once more, we notice that the ParticleArray representing the circular patch was named fluid. So we are essentially asking the PySPH framework to step or integrate the properties of the ParticleArray fluid using WCSPHStep. It is safe to assume that the framework takes the responsibility to call this stepper at the appropriate time during a time-step.

The Solver is the main driver for the problem. It marshals a simulation and takes the responsibility (through appropriate calls to the integrator) to update the solution to the next time step. It also handles input/output and computing global quantities (such as minimum time step) in parallel.

Specifying the interactions

At this stage, we have the particles (represented by the fluid ParticleArray) and the framework to integrate the solution and marshall the simulation. What remains is to define how to actually go about updating properties within a time step. That is, for each particle we must “do something”. This is where the physics for the particular problem comes in.

For SPH, this would be the pairwise interactions between particles. In PySPH, we provide a specific way to define the sequence of interactions which is a list of Equation objects (see SPH equations). For the circular patch test, the sequence of interactions is relatively straightforward:

  • Compute pressure from the EOS: \(p = f(\rho)\)
  • Compute the rate of change of density: \(\frac{d\rho}{dt}\)
  • Compute the rate of change of velocity (accelerations): \(\frac{d\boldsymbol{v}}{dt}\)
  • Compute corrections for the velocity (XSPH): \(\frac{d\boldsymbol{x}}{dt}\)

We request this in PySPH like so:

# The equations of motion.
equations = [
    # Equation of state: p = f(rho)
    TaitEOS(dest='fluid', sources=None, rho0=ro, c0=co, gamma=7.0),

    # Density rate: drho/dt
    ContinuityEquation(dest='fluid',  sources=['fluid',]),

    # Acceleration: du,v/dt
    MomentumEquation(dest='fluid', sources=['fluid'], alpha=1.0, beta=1.0),

    # XSPH velocity correction
    XSPHCorrection(dest='fluid', sources=['fluid']),

    ]

Each interaction is specified through an Equation object, which is instantiated with the general syntax:

Equation(dest='array_name', sources=[...], **kwargs)

The dest argument specifies the target or destination ParticleArray on which this interaction operates. Similarly, the sources argument specifies a list of ParticleArrays from which the contributions are sought. For some equations, like the EOS, it doesn’t make sense to define a list of sources and None suffices. The specification basically tells PySPH that for one time step of the calculation:

  • Use the Tait’s EOS to update the properties of the fluid array
  • Compute \(\frac{d\rho}{dt}\) for the fluid from the fluid
  • Compute accelerations for the fluid from the fluid
  • Compute the XSPH corrections for the fluid, using fluid as the source

Note

Notice the use of the ParticleArray name “fluid”. It is the responsibility of the user to ensure that the equation specification is done in a manner consistent with the creation of the particles.

With the list of equations, our problem is completely defined. PySPH now knows what to do with the particles within a time step. More importantly, this information is enough to generate code to carry out a complete SPH simulation.

Running the example

In the last two lines of the example, we use the Application to run the problem:

# Setup the application and solver.  This also generates the particles.
app.setup(solver=solver, equations=equations,
          particle_factory=get_circular_patch)

app.run()

We can see that the Application.setup() method is where we tell PySPH what we want it to do. We pass in the function to create the particles, the list of equations defining the problem and the solver that will be used to marshal the problem.

Many parameters can be configured via the command line, and these will override any parameters set up before the app.setup call. For example, one may do the following to find out the various options:

$ python elliptical_drop.py -h

If we run the example without any arguments, it will run until a final time of 0.0075 seconds. We can change this, for example to 0.005, as follows:

$ python elliptical_drop.py --tf=0.005

When this is run, PySPH will generate Cython code from the equations and integrators that have been provided, compile that code and run the simulation. This provides a great deal of convenience for the user without sacrificing performance. The generated code is available in ~/.pysph/source. If the code/equations have not changed, the code will not be recompiled. This is all handled automatically, without user intervention.

If we wish to run the code in parallel (and have compiled PySPH with Zoltan and mpi4py) we can do:

$ mpirun -np 4 /path/to/python elliptical_drop.py

This will automatically parallelize the run. In this example doing this will only slow it down as the number of particles is extremely small.

Visualizing and post-processing

You can view the data generated by the simulation (after the simulation is complete or during the simulation) by running the pysph_viewer application. To view the simulated data you may do:

$ pysph_viewer elliptical_drop_output/*.npz

If you have Mayavi installed this should show a UI that looks like:

PySPH viewer

On the user interface, the right side shows the visualized data. On top of it there are several toolbar icons. The leftmost is the Mayavi logo; clicking on it will present the full Mayavi user interface that can be used to configure any additional details of the visualization.

On the bottom left of the main visualization UI there is a button which has the text “Launch Python Shell”. If one clicks on this, one obtains a full Python interpreter with a few useful objects available. These are:

>>> dir()
['__builtins__', '__doc__', '__name__', 'interpolator', 'mlab',
 'particle_arrays', 'scene', 'self', 'viewer']
>>> len(particle_arrays)
1
>>> particle_arrays[0].name
'fluid'

The particle_arrays object is a list of ParticleArrays. The interpolator is an instance of pysph.tools.interpolator.Interpolator that is used by the viewer. The other objects can be used to script the user interface if desired.

Loading output data files

The simulation data is dumped out in *.npz files. You may use the pysph.solver.utils.load() function to access the raw data:

from pysph.solver.utils import load
data = load('elliptical_drop_100.npz')

When opening the saved .npz file with load, a dictionary object is returned. The particle arrays and other information can be obtained from this dictionary:

particle_arrays = data['arrays']
solver_data = data['solver_data']

particle_arrays is a dictionary of all the PySPH particle arrays. You may obtain the PySPH particle array, fluid, like so:

fluid = particle_arrays['fluid']
p = fluid.p

p is a numpy array containing the pressure values. All the saved particle array properties can thus be obtained and used for any post-processing task. The solver_data provides information about the iteration count, timestep and the current time.
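For example, the following sketch prints this information for a dump (assuming the solver_data keys 't', 'dt' and 'count' for the time, timestep and iteration count):

from pysph.solver.utils import load

data = load('elliptical_drop_100.npz')
solver_data = data['solver_data']
# Keys assumed: 't' (time), 'dt' (timestep), 'count' (iteration count).
print solver_data['t'], solver_data['dt'], solver_data['count']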

Interpolating properties

Data from the solver can also be interpolated using the pysph.tools.interpolator.Interpolator class. Here is the simplest example of interpolating data from the results of a simulation onto a fixed grid that is automatically computed from the known particle arrays:

from pysph.solver.utils import load
data = load('elliptical_drop_output/elliptical_drop_100.npz')
from pysph.tools.interpolator import Interpolator
parrays = data['arrays']
interp = Interpolator(parrays.values(), num_points=10000)
p = interp.interpolate('p')

p is now a numpy array with 10000 elements, shaped such that it interpolates all the data in the particle arrays loaded. interp.x and interp.y are numpy arrays of the chosen x and y coordinates corresponding to p. To visualize this we may simply do:

from matplotlib import pyplot as plt
plt.contourf(interp.x, interp.y, p)

It is easy to interpolate any other property too. If one wishes to explicitly set the domain on which the interpolation is required one may do:

xmin, xmax, ymin, ymax, zmin, zmax = 0., 1., -1., 1., 0, 1
interp.set_domain((xmin, xmax, ymin, ymax, zmin, zmax), (40, 50, 1))
p = interp.interpolate('p')

This will create a meshgrid in the specified region with the specified number of points.

One could also explicitly set the points on which one wishes to interpolate the data as:

interp.set_interpolation_points(x, y, z)

Where x, y, z are numpy arrays of the coordinates of the points on which the interpolation is desired. This can also be done with the constructor as:

interp = Interpolator(parrays.values(), x=x, y=y, z=z)

For more details on the class and the available methods, see pysph.tools.interpolator.Interpolator.

In addition to this there are other useful pre and post-processing utilities described in Miscellaneous Tools for PySPH.

The framework and library

The PySPH framework

This document is an introduction to the design of PySPH. It provides additional high-level details on the functionality that the PySPH framework provides. It should allow you to use PySPH effectively and extend the framework to solve problems other than those provided in the main distribution.

To elucidate some of the internal details of PySPH, we will consider a typical SPH problem and proceed to write the code that implements it. Thereafter, we will look at how this is implemented using the PySPH framework.

The dam-break problem

The problem that is used for the illustration is the Weakly Compressible SPH (WCSPH) formulation for free surface flows, applied to a breaking dam problem:

_images/dam-break-schematic.png

A column of water is initially at rest (presumably held in place by some membrane). The problem simulates a breaking dam in that the membrane is instantly removed and the column is free to fall under its own weight and the effect of gravity. This and other variants of the dam break problem can be found in the examples directory of PySPH.

Equations

The discrete equations for this formulation are given as

(1)\[p_a = B\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} - 1 \right)\]

(2)\[\frac{d\rho_a}{dt} = \sum_{b=1}^{N} m_b\,(\vec{v_b} - \vec{v_a})\cdot\nabla_a W_{ab}\]

(3)\[\frac{d\vec{v_a}}{dt} = -\sum_{b=1}^{N} m_b\left(\frac{p_a}{\rho_a^2} + \frac{p_b}{\rho_b^2}\right)\nabla_a W_{ab}\]

(4)\[\frac{d\vec{x_a}}{dt} = \vec{v_a}\]

Boundary conditions

The dam break problem involves two types of particles, namely the fluid (water column) and the solid (tank). The basic boundary condition enforced on a solid wall is the no-penetration boundary condition, which can be stated as

\[\vec{v_f}\cdot \vec{n_b} = 0\]

Where \(\vec{n_b}\) is the local normal vector for the boundary. For this example, we use the dynamic boundary conditions. For this boundary condition, the boundary particles are treated as fixed fluid particles that evolve with the continuity equation ((2)) and the equation of state ((1)). In addition, they contribute to the fluid acceleration via the momentum equation ((3)). When fluid particles approach a solid wall, the density of the fluid and boundary particles increases via the continuity equation. With the increased density, and consequently increased pressure, the boundary particles exert a repulsive force on the fluid particles, thereby enforcing the no-penetration condition.

Time integration

For the time integration, we use a second order predictor-corrector integrator. For the predictor stage, the following operations are carried out:

(5)\[\begin{split}\rho^{n + \frac{1}{2}} &= \rho^n + \frac{\Delta t}{2}(a_\rho)^{n-\frac{1}{2}} \\ \boldsymbol{v}^{n + \frac{1}{2}} &= \boldsymbol{v}^n + \frac{\Delta t}{2}(\boldsymbol{a_v})^{n-\frac{1}{2}} \\ \boldsymbol{x}^{n + \frac{1}{2}} &= \boldsymbol{x}^n + \frac{\Delta t}{2}(\boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}})^{n-\frac{1}{2}}\end{split}\]

Once the variables are predicted to their half time step values, the pairwise interactions are carried out to compute the accelerations. Subsequently, the corrector is used to update the particle positions:

(6)\[\begin{split}\rho^{n + 1} &= \rho^n + \Delta t\,(a_\rho)^{n+\frac{1}{2}} \\ \boldsymbol{v}^{n + 1} &= \boldsymbol{v}^n + \Delta t\,(\boldsymbol{a_v})^{n+\frac{1}{2}} \\ \boldsymbol{x}^{n + 1} &= \boldsymbol{x}^n + \Delta t\,(\boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}})^{n+\frac{1}{2}}\end{split}\]

Note

The acceleration variables are prefixed with a_. The boldface symbols in the above equations indicate vector quantities. Thus \(a_{\boldsymbol{v}}\) represents \(a_u,\, a_v,\, \text{and}\, a_w\), the vector components of the acceleration.

Required arrays and properties

We will be using two ParticleArrays (see pysph.base.particle_array.ParticleArray), one for the fluid and another for the solid. Recall that for the dynamic boundary conditions, the solid is treated like a fluid with the only difference being that the velocity (\(a_\boldsymbol{v}\)) and position accelerations (\(a_\boldsymbol{x} = \boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}}\)) are never calculated. The solid particles therefore remain fixed for the duration of the simulation.

To carry out the integrations for the particles, we require the following variables (a sketch showing how to set them up follows the list):

  • SPH properties: x, y, z, u, v, w, h, m, rho, p, cs
  • Acceleration variables: au, av, aw, ax, ay, az, arho
  • Properties at the beginning of a time step: x0, y0, z0, u0, v0, w0, rho0
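A minimal sketch of setting these up in pure Python, following the same add_property pattern used in get_circular_patch earlier (the coordinates are illustrative; get_particle_array_wcsph provides the basic SPH properties):

import numpy
from pysph.base.utils import get_particle_array_wcsph

x, y = numpy.mgrid[0:1:0.1, 0:1:0.1]
fluid = get_particle_array_wcsph(name='fluid', x=x.ravel(), y=y.ravel())

# Acceleration variables and start-of-step copies needed by the integrator.
for prop in ('arho', 'au', 'av', 'aw', 'ax', 'ay', 'az',
             'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0'):
    fluid.add_property(prop)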

A non-PySPH implementation

We first consider pseudo-code for a non-PySPH implementation. We assume we have been given two ParticleArrays, fluid and solid, corresponding to the dam-break problem. We also assume that a pysph.base.nnps.NNPS object nps is available and can be used for neighbor queries:

from pysph.base import nnps
from pyzoltan.core.carray import UIntArray
fluid = get_particle_array_fluid(...)
solid = get_particle_array_solid(...)
particles = [fluid, solid]
nps = nnps.LinkedListNNPS(dim=2, particles=particles, radius_scale=2.0)

The part of the code responsible for the interactions can be defined as

class SPHCalc:
    def __init__(self, nnps, particles):
        self.nnps = nnps
        self.particles = particles

    def compute(self):
        self.eos()
        self.accelerations()

    def eos(self):
        for array in self.particles:
            num_particles = array.get_number_of_particles()
            for i in range(num_particles):
                array.p[i] =  # TAIT EOS function for pressure
                array.cs[i] = # TAIT EOS function for sound speed

    def accelerations(self):
        fluid, solid = self.particles[0], self.particles[1]
        nps = self.nnps
        nbrs = UIntArray()

        # continuity equation for the fluid
        dst = fluid; dst_index = 0

        # source is fluid
        src = fluid; src_index = 0
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):

            # get nearest fluid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)

            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...

                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient

                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term

        # source is solid
        src = solid; src_index = 1
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):

            # get nearest solid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)

            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...

                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient

                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term

        # Destination is solid
        dst = solid; dst_index = 1

        # source is fluid
        src = fluid; src_index = 0

        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):

            # get nearest fluid neighbors
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)

            for j in nbrs:
                # pairwise quantities
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...

                # kernel interaction terms
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient

                # compute the interaction and store the contribution
                dst.arho[i] += # interaction term

We see that the use of multiple particle arrays has forced us to write a fairly long piece of code for the accelerations. In fact, we have only shown the part of the main loop that computes \(a_\rho\) for the continuity equation. Recall that our problem states that the continuity equation should be evaluated for all particles, taking influences from all other particles into account. For two particle arrays (fluid, solid), we have four such pairings (fluid-fluid, fluid-solid, solid-fluid, solid-solid). The last one can be eliminated when we consider that the boundary has zero velocity and hence the contribution will always be trivially zero.

The apparent complexity of the SPHCalc.accelerations method notwithstanding, we notice that similar pieces of code are being repeated. In general, we can break down the computation for a general source-destination pair like so:

# consider first destination particle array

for all dst particles:
    get_neighbors_from_source()
    for all neighbors:
        compute_pairwise_terms()
        compute_interactions_for_dst_particle()

# consider next source for this destination particle array
...

# consider the next destination particle array

Note

The SPHCalc.compute method first calls the EOS before calling the main loop to compute the accelerations. This is because the EOS (which updates the pressure) must logically be completed for all particles before the accelerations (which use the pressure) are computed.

The predictor-corrector integrator for this problem can be defined as

class Integrator:
    def __init__(self, particles, nps, calc):
        self.particles = particles
        self.nps = nps
        self.calc = calc

    def initialize(self):
        for array in self.particles:
            array.rho0[:] = array.rho[:]
            ...
            array.w0[:] = array.w[:]

    def stage1(self, dt):
        dtb2 = 0.5 * dt
        for array in self.particles:
            array.rho = array.rho0[:] + dtb2*array.arho[:]

            array.u = array.u0[:] + dtb2*array.au[:]
            array.v = array.v0[:] + dtb2*array.av[:]
            ...
            array.z = array.z0[:] + dtb2*array.az[:]

    def stage2(self, dt):
        for array in self.particles:
            array.rho = array.rho0[:] + dt*array.arho[:]

            array.u = array.u0[:] + dt*array.au[:]
            array.v = array.v0[:] + dt*array.av[:]
            ...
            array.z = array.z0[:] + dt*array.az[:]

    def integrate(self, dt):
        self.initialize()
        self.stage1(dt)      # predictor step
        self.nps.update()    # update NNPS structure
        self.calc.compute()  # compute the accelerations
        self.stage2(dt)      # corrector step

The Integrator.integrate method is responsible for updating the solution to the next time level. Before the predictor stage, the Integrator.initialize method is called to store the values x0, y0, ... at the beginning of a time-step. Given the positions of the particles at the half time-step, the NNPS data structure is updated before calling the SPHCalc.compute method. Finally, the corrector step is called once we have the updated accelerations.

This hypothetical implementation can be integrated to the final time by calling the Integrator.integrate method repeatedly, as sketched below. In the next section, we will see how PySPH does this automatically.
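A minimal sketch of such a driver, assuming the SPHCalc and Integrator classes above; the timestep and final time values here are purely illustrative:

calc = SPHCalc(nps, particles)
integrator = Integrator(particles, nps, calc)

t, tf, dt = 0.0, 0.0075, 1e-5
while t < tf:
    integrator.integrate(dt)
    t += dt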

PySPH implementation

Now that we have a hypothetical implementation outlined, we can proceed to describe the abstractions that PySPH introduces, enabling a highly user friendly and flexible way to define pairwise particle interactions.

We assume that we have the same ParticleArrays (fluid and solid) and NNPS objects as before.

Specifying the equations

Given the particle arrays, we ask for a given set of operations to be performed on the particles by passing a list of Equation objects (see SPH equations) to the Solver (see pysph.solver.solver.Solver)

equations = [

    # Equation of state
    Group(equations=[

            TaitEOS(dest='fluid', sources=None, rho0=ro, c0=co, gamma=gamma),
            TaitEOS(dest='boundary', sources=None, rho0=ro, c0=co, gamma=gamma),

            ]),

    Group(equations=[

            # Continuity equation
            ContinuityEquation(dest='fluid', sources=['fluid', 'boundary']),
            ContinuityEquation(dest='boundary', sources=['fluid']),

            # Momentum equation
            MomentumEquation(dest='fluid', sources=['fluid', 'boundary'],
                     alpha=alpha, beta=beta, gy=-9.81, c0=co),

            # Position step with XSPH
            XSPHCorrection(dest='fluid', sources=['fluid'])
            ]),
    ]

We see that we have used two Group objects (see pysph.sph.equation.Group), segregating two parts of the evaluation that are logically dependent. The second group, where the accelerations are computed, must be evaluated after the first group, where the pressure is updated. Recall that we had to do a similar segregation for the SPHCalc.compute method in our hypothetical implementation:

class SPHCalc:
    def __init__(nnps, particles):
        ...

    def compute(self):
        self.eos()
        self.accelerations()

Note

PySPH will respect the order of the Equation and equation Groups as provided by the user. This flexibility also means it is quite easy to make subtle errors.

Writing the equations

It is important for users to be able to easily write out new SPH equations of motion. PySPH provides a very convenient way to do this: the framework allows the user to write the equations in pure Python. These pure Python equations are then used to generate high-performance code, which is called appropriately to perform the simulations.

There are two types of particle computations in SPH simulations:

  1. The most common type of interaction is to change the property of one particle (the destination) using the properties of a source particle.
  2. A less common type of interaction is to calculate say a sum (or product or maximum or minimum) of values of a particular property. This is commonly called a “reduce” operation in the context of Map-reduce programming models.

Computations of the first kind are inherently parallel and easy to perform correctly both in serial and parallel. Computations of the second kind (reductions) can be tricky in parallel. As a result, in PySPH we distinguish between the two. This will be elaborated in more detail in the following.

In general an SPH algorithm proceeds as the following pseudo-code illustrates:

for destination in particles:
    for equation in equations:
        equation.initialize(destination)

# This is where bulk of the computation happens.
for destination in particles:
    for source in destination.neighbors:
        for equation in equations:
            equation.loop(source, destination)

for destination in particles:
    for equation in equations:
        equation.post_loop(destination)

# Reduce any properties if needed.
total_mass = reduce_array(particles.m, 'sum')
max_u = reduce_array(particles.u, 'max')

The neighbors of a given particle are identified using a nearest neighbor algorithm. PySPH does this automatically for the user and internally uses a linked-list based algorithm to identify neighbors.

In PySPH we follow some simple conventions when writing equations. Let us look at a few equations first. In keeping with the analogy to our hypothetical implementation and the SPHCalc.accelerations method above, we consider the implementations of the PySPH pysph.sph.wc.basic.TaitEOS and pysph.sph.basic_equations.ContinuityEquation objects. The former looks like:

class TaitEOS(Equation):
    def __init__(self, dest, sources=None,
                 rho0=1000.0, c0=1.0, gamma=7.0):
        self.rho0 = rho0
        self.rho01 = 1.0/rho0
        self.c0 = c0
        self.gamma = gamma
        self.gamma1 = 0.5*(gamma - 1.0)
        self.B = rho0*c0*c0/gamma
        super(TaitEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_rho, d_p, d_cs):
        ratio = d_rho[d_idx] * self.rho01
        tmp = pow(ratio, self.gamma)

        d_p[d_idx] = self.B * (tmp - 1.0)
        d_cs[d_idx] = self.c0 * pow( ratio, self.gamma1 )

Notice that it has only one loop method and this loop is applied for all particles. Since there are no sources, there is no need for us to find the neighbors. There are a few important conventions that are to be followed when writing the equations.

  • d_* indicates a destination array.
  • s_* indicates a source array.
  • d_idx and s_idx represent the destination and source index respectively.
  • Each function can take any number of arguments as required; these are automatically supplied internally when the application runs.
  • All the standard math symbols from math.h are also available.
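
Putting these conventions together, here is a minimal sketch in the spirit of PySPH's summation-density equation (not a verbatim copy of the library's version); WIJ is a precomputed kernel value, described below:

from pysph.sph.equation import Equation

class SimpleSummationDensity(Equation):
    def initialize(self, d_idx, d_rho):
        # Zero the destination density before accumulating over sources.
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, d_rho, s_idx, s_m, WIJ):
        # rho_a += m_b * W_ab, accumulated over the source neighbors.
        d_rho[d_idx] += s_m[s_idx]*WIJ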

Let us look at the ContinuityEquation as another simple example. It is implemented as:

class ContinuityEquation(Equation):
    def initialize(self, d_idx, d_arho):
        d_arho[d_idx] = 0.0

    def loop(self, d_idx, d_arho, s_idx, s_m, DWIJ, VIJ):
        vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2]
        d_arho[d_idx] += s_m[s_idx]*vijdotdwij

Notice that the initialize method merely sets the value to zero. The loop method also accepts a few new quantities like DWIJ, VIJ etc. These are precomputed quantities that are automatically provided, depending on the equations used for a particular source/destination pair. The following precomputed quantities are available and may be passed into any equation:

  • HIJ = 0.5*(d_h[d_idx] + s_h[s_idx]).
  • XIJ[0] = d_x[d_idx] - s_x[s_idx], XIJ[1] = d_y[d_idx] - s_y[s_idx], XIJ[2] = d_z[d_idx] - s_z[s_idx]
  • R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
  • RIJ = sqrt(R2IJ)
  • WIJ = KERNEL(XIJ, RIJ, HIJ)
  • WJ = KERNEL(XIJ, RIJ, s_h[s_idx])
  • RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])
  • WI = KERNEL(XIJ, RIJ, d_h[d_idx])
  • RHOIJ1 = 1.0/RHOIJ
  • DWIJ: GRADIENT(XIJ, RIJ, HIJ, DWIJ)
  • DWJ: GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)
  • DWI: GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)
  • VIJ[0] = d_u[d_idx] - s_u[s_idx], VIJ[1] = d_v[d_idx] - s_v[s_idx], VIJ[2] = d_w[d_idx] - s_w[s_idx]
  • DT_ADAPT: an array of three doubles that stores adaptive time-step limits; the first element is the CFL-based limit, the second the force-based limit and the third a viscosity-based limit. See pysph.sph.wc.basic.MomentumEquation for an example of how this is used.

In addition if one requires the current time or the timestep in an equation, the following may be passed into any of the methods of an equation:

  • t: is the current time.
  • dt: the current time step.
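
For example, a hypothetical equation might use t to ramp a body force up over the first second of simulated time (the class name, the one-second ramp and the value 9.81 are purely illustrative):

from pysph.sph.equation import Equation

class RampedGravity(Equation):
    def loop(self, d_idx, d_av, t):
        # Scale gravity from zero to full strength as t goes from 0 to 1.
        fac = t
        if fac > 1.0:
            fac = 1.0
        d_av[d_idx] -= 9.81*fac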

In an equation, any undeclared variables are automatically declared to be doubles in the high-performance Cython code that is generated. In addition one may declare a temporary variable to be a matrix or a cPoint by writing:

mat = declare("matrix((3,3))")
point = declare("cPoint")

When the Cython code is generated, this gets translated to:

cdef double[3][3] mat
cdef cPoint point
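
As a sketch of how such a declaration might be used, the following hypothetical equation employs a declared matrix as scratch space (d_trace is an illustrative property name, not a standard PySPH property):

from pysph.sph.equation import Equation

class VelocityGradientTrace(Equation):
    def initialize(self, d_idx, d_trace):
        d_trace[d_idx] = 0.0

    def loop(self, d_idx, d_trace, s_idx, s_m, s_rho, VIJ, DWIJ):
        grad = declare('matrix((3,3))')   # generated as: cdef double[3][3] grad
        fac = -s_m[s_idx]/s_rho[s_idx]
        grad[0][0] = fac*VIJ[0]*DWIJ[0]
        grad[1][1] = fac*VIJ[1]*DWIJ[1]
        grad[2][2] = fac*VIJ[2]*DWIJ[2]
        d_trace[d_idx] += grad[0][0] + grad[1][1] + grad[2][2]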

One may also perform any reductions on properties. Consider a trivial example of calculating the total mass and the maximum u velocity in the following equation:

class FindMaxU(Equation):
    def reduce(self, dst):
        m = serial_reduce_array(dst.array.m, 'sum')
        max_u = serial_reduce_array(dst.array.u, 'max')
        dst.total_mass[0] = parallel_reduce_array(m, 'sum')
        dst.max_u[0] = parallel_reduce_array(max_u, 'max')

where:

  • dst: refers to a destination ParticleArrayWrapper.
  • src: refers to the source ParticleArrayWrapper.
  • serial_reduce_array: is a special function provided that performs reductions correctly in serial. It currently supports sum, prod, max and min operations. See pysph.base.reduce_array.serial_reduce_array(). There is also a pysph.base.reduce_array.parallel_reduce_array() which is to be used to reduce an array across processors. Using parallel_reduce_array is expensive as it is an all-to-all communication. One can reduce this cost by batching several values into a single array and reducing that array once.

The ParticleArrayWrapper wraps a ParticleArray into a high-performance Cython object. It has an array attribute which is a reference to the underlying ParticleArray, and also attributes, corresponding to each property, that are DoubleArrays. For example, in the Cython code one may access dst.x to get the raw arrays used by the particle array. This is mainly done for performance reasons.

Note that in the above example, pysph.base.reduce_array.serial_reduce_array() is passed dst.array.m. This is important because, in parallel, dst.m will contain values for all particles, including ghosts, whereas dst.array.m will be a numpy array of only the real particles.

We recommend that for any kind of reduction one always use the serial_reduce_array function and parallel_reduce_array inside a reduce method. One need not worry about parallel/serial modes in this case, as this is automatically taken care of by the code generator. In serial, the parallel reduction does nothing.

With this machinery, we are able to write complex equations to solve almost any SPH problem. A user can easily define a new equation and instantiate the equation in the list of equations to be passed to the application. It is often easiest to look at the many existing equations in PySPH and learn the general patterns.

Writing the Integrator

The integrator and stepper code is similar to the equations in that it is all written in pure Python, and Cython code is automatically generated from it. The simplest integrator is the Euler integrator, which looks like this:

class EulerIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        self.compute_accelerations()
        self.stage1()
        self.do_post_stage(dt, 1)

Note that in this case the integrator only needs to implement one timestep using the one_timestep method above. The initialize and stage methods need to be implemented in stepper classes which perform the actual stepping of the values. Here is the stepper for the Euler integrator:

class EulerStep(IntegratorStep):
    def initialize(self):
        pass
    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
                  d_z, d_rho, d_arho, dt=0.0):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]

        d_x[d_idx] += dt*d_u[d_idx]
        d_y[d_idx] += dt*d_v[d_idx]
        d_z[d_idx] += dt*d_w[d_idx]

        d_rho[d_idx] += dt*d_arho[d_idx]

As can be seen, the general structure is very similar to how equations are written: the functions take an arbitrary number of arguments, which are automatically supplied when the methods are called. The value of dt is also provided automatically.

Note that if there are additional variables to be stepped beyond these standard ones, you must write your own stepper, along the lines of the sketch below. Currently, only certain steppers are supported by the framework. Take a look at the Integrator module for more examples.
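
As an illustration, a hypothetical two-stage (predictor-corrector style) scheme could be written along the following lines, with stage1 and stage2 implemented by a matching IntegratorStep subclass; this is a sketch of the pattern, not one of the integrators shipped with PySPH:

class TwoStageIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()

        # Predictor: advance the stepped variables to the half step.
        self.compute_accelerations()
        self.stage1()
        self.do_post_stage(0.5*dt, 1)

        # Corrector: complete the step with the updated accelerations.
        self.compute_accelerations()
        self.stage2()
        self.do_post_stage(dt, 2)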

Using the PySPH library

In this document, we describe the fundamental data structures for working with particles in PySPH. Take a look at Learning the ropes for a tutorial introduction to some of the examples. For the experienced user, take a look at The PySPH framework for some of the internal code-generation details and if you want to extend PySPH for your application.

Working With Particles

As an object oriented framework for particle methods, PySPH provides convenient data structures to store and manipulate collections of particles. These can be constructed from within Python and are fully compatible with NumPy arrays. We begin with a brief description of the basic data structures for arrays.

C-arrays

The BaseArray class provides typed array data structures known as CArrays. These are used throughout PySPH and are fundamentally very similar to NumPy arrays. Named subclasses exist for each supported type, for example DoubleArray, FloatArray, IntArray, LongArray and UIntArray.

Some simple commands to work with these arrays from the interactive shell are given below:

>>> import numpy
>>> from pyzoltan.core.carray import DoubleArray
>>> array = DoubleArray(10)                      # array of doubles of length 10
>>> array.set_data( numpy.arange(10) )           # set the data from a NumPy array
>>> array.get(3)                                 # get the value at a given index
>>> array.set(5, -1.0)                           # set the value at an index to a value
>>> array[3]                                     # standard indexing
>>> array[5] = -1.0                              # standard indexing

ParticleArray

In PySPH, a collection of BaseArrays makes up what is called a ParticleArray. This is the main data structure used to represent particles, and it can be created from NumPy arrays like so:

>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.1, 0:1:0.1]             # create some data
>>> x = x.ravel(); y = y.ravel()                     # flatten the arrays
>>> pa = get_particle_array(name='array', x=x, y=y)  # create the particle array

In the above, the helper function pysph.base.utils.get_particle_array() will instantiate and return a ParticleArray with properties x and y set from given NumPy arrays. In general, a ParticleArray can be instantiated with an arbitrary number of properties. Each property is stored internally as a pyzoltan.core.carray.BaseArray of the appropriate type.
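
Additional properties can be created simply by passing extra keyword arguments to the helper; here temp is an arbitrary, illustrative property name:

>>> temp = numpy.zeros_like(x)
>>> pa = get_particle_array(name='array', x=x, y=y, temp=temp)
>>> pa.temp      # every property is also accessible as an attribute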

By default, every ParticleArray returned using the helper function will have the following properties:

  • x, y, z : Position coordinates (doubles)
  • u, v, w : Velocity (doubles)
  • h, m, rho : Smoothing length, mass and density (doubles)
  • au, av, aw: Accelerations (doubles)
  • p : Pressure (doubles)
  • gid : Unique global index (unsigned int)
  • pid : Processor id (int)
  • tag : Tag (int)

The role of the particle properties like positions, velocities and other variables should be clear. These define either the kinematic or dynamic properties associated with SPH particles in a simulation.

PySPH introduces a global identifier for a particle which is required to be unique for that particle. This is represented with the property gid which is of type unsigned int. This property is used in the parallel load balancing algorithm with Zoltan.

The property pid for a particle is an integer that is used to identify the processor to which the particle is currently assigned.

The property tag is an integer that is used for any other identification. For example, we might want to mark all boundary particles with the tag 100. Using this property, we can delete all such particles as

>>> pa.remove_tagged_particles(tag=100)

This gives us a very flexible way to work with particles. Another way of deleting/extracting particles is by providing the indices (as a list, NumPy array or a LongArray) of the particles to be removed:

>>> indices = [1,3,5,7]
>>> pa.remove_particles( indices )
>>> extracted = pa.extract_particles(indices, props=['rho', 'x', 'y'])

A ParticleArray can be concatenated with another array to result in a larger array:

>>> pa.append_parray(another_array)

To set a given list of properties to zero:

>>> props = ['au', 'av', 'aw']
>>> pa.set_to_zero(props)

Properties in a particle array are automatically sized depending on the number of particles. There are times when fixed size properties are required. For example if the total mass or total force on a particle array needs to be calculated, a fixed size constant can be added. This can be done by adding a constant to the array as illustrated below:

>>> pa.add_constant('total_mass', 0.0)
>>> pa.add_constant('total_force', [0.0, 0.0, 0.0])
>>> print(pa.total_mass, pa.total_force)

In the above, the total_mass is a fixed DoubleArray of length 1 and the total_force is a fixed DoubleArray of length 3. These constants will never be resized as one adds or removes particles to/from the particle array. The constants may be used inside of SPH equations just like any other property.

The constants can also be set in the constructor of the ParticleArray by passing a dictionary of constants as the constants keyword argument. For example:

>>> pa = ParticleArray(
...     name='test', x=x,
...     constants=dict(total_mass=0.0, total_force=[0.0, 0.0, 0.0])
... )

Take a look at ParticleArray reference documentation for some of the other methods and their uses.

Nearest Neighbour Particle Searching (NNPS)

To carry out pairwise interactions for SPH, we need to find the nearest neighbours for a given particle within a specified interaction radius. The NNPS object is responsible for handling these nearest neighbour queries for a list of particle arrays:

>>> from pysph.base import nnps
>>> pa1 = get_particle_array(...)                    # create one particle array
>>> pa2 = get_particle_array(...)                    # create another particle array
>>> particles = [pa1, pa2]
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, radius_scale=3)

The above will create an NNPS object that uses the classical linked-list algorithm for nearest neighbour searches. The radius of interaction is determined by the argument radius_scale. The book-keeping cells have a length of \(\text{radius_scale} \times h_{\text{max}}\), where \(h_{\text{max}}\) is the maximum smoothing length of all particles assigned to the local processor.

Note that the NNPS classes also support caching the neighbors computed. This is useful if one needs to reuse the same set of neighbors. To enable this, simply pass cache=True to the constructor:

>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, cache=True)

Since we allow a list of particle arrays, we need to distinguish between source and destination particle arrays in the neighbor queries.

Note

A destination particle is a particle belonging to that species for which the neighbors are sought.

A source particle is a particle belonging to that species which contributes to a given destination particle.

With these definitions, we can query for nearest neighbors like so:

>>> nbrs = UIntArray()
>>> nps.get_nearest_particles(src_index, dst_index, d_idx, nbrs)

where src_index, dst_index and d_idx are integers. This will return, for the d_idx particle of the dst_index particle array (species), nearest neighbors from the src_index particle array (species). Passing the src_index and dst_index every time is repetitive so an alternative API is to call set_context as done below:

>>> nps.set_context(src_index=0, dst_index=0)

If the NNPS instance is configured to use caching, then it will also pre-compute the neighbors very efficiently. Once the context is set one can get the neighbors as:

>>> nps.get_nearest_neighbors(d_idx, nbrs)

Where d_idx and nbrs are as discussed above.

If we want to re-compute the data structure for a new distribution of particles, we can call the NNPS.update() method:

>>> nps.update()

Periodic domains

The constructor for the NNPS accepts an optional DomainManager argument that is used to delimit the maximum spatial extent of the simulation domain. This argument is also used to indicate the extents of a periodic domain. We construct a DomainManager object like so:

>>> from pysph.base.nnps import DomainManager
>>> from pysph.base.point import Point
>>> domain = DomainManager(xmin, xmax, ymin, ymax, zmin, zmax,
                           periodic_in_x=True, periodic_in_y=True,
                           periodic_in_z=False)

where xmin ... zmax are floating point arguments delimiting the simulation domain and periodic_in_x,y,z are bools defining the periodic axes.

When the NNPS object is constructed with this DomainManager, care is taken to create periodic ghosts for particles in the vicinity of the periodic boundaries. These ghost particles are given a special tag defined by ParticleTAGS:

class ParticleTAGS:
    Local = 0
    Remote = 1
    Ghost = 2

Note

The Local tag is used for ordinary particles assigned to and owned by a given processor. This is the default tag for all particles.

Note

The Remote tag is used for ordinary particles assigned to but not owned by a given processor. Particles with this tag are typically used to satisfy neighbor queries across processor boundaries in a parallel simulation.

Note

The Ghost tag is used for particles that are created to satisfy boundary conditions locally.

Particle aligning

In PySPH, the ParticleArray aligns all particles upon a call to the ParticleArray.align_particles() method. The aligning is done so that all particles with the Local tag are placed first, followed by particles with other tags.

There is no preference given to the tags other than the fact that a particle with a non-zero tag is placed after all particles with a zero (Local) tag. Intuitively, the local particles represent real particles or particles that we want to do active computation on (destination particles).

The data attribute ParticleArray.num_real_particles returns the number of real or Local particles. The total number of particles in a given ParticleArray can be obtained by a call to the ParticleArray.get_number_of_particles() method.

The following is a simple example demonstrating this default behaviour of PySPH:

>>> x = numpy.array( [0, 1, 2, 3], dtype=numpy.float64 )
>>> tag = numpy.array( [0, 2, 0, 1], dtype=numpy.int32 )

>>> pa = utils.get_particle_array(x=x, tag=tag)

>>> print(pa.get_number_of_particles())                    # total number of particles
4
>>> print(pa.num_real_particles)                           # no. of particles with tag 0
2

>>> x, tag = pa.get('x', 'tag', only_real_particles=True)  # get only real particles (tag == 0)
>>> print(x)
[0. 2.]
>>> print(tag)
[0 0]

>>> x, tag = pa.get('x', 'tag', only_real_particles=False) # get all particles
>>> print(x)
[0. 2. 1. 3.]
>>> print(tag)
[0 0 2 1]

We are now in a position to put all these ideas together and write our first SPH application.

Parallel NNPS with PyZoltan

PySPH uses the Zoltan data management library for dynamic load balancing through a Python wrapper PyZoltan, which provides functionality for parallel neighbor queries in a manner completely analogous to NNPS.

Particle data is managed and exchanged in parallel via a concrete subclass of the abstract ParallelManager base class. Continuing with our example, we can instantiate a ZoltanParallelManagerGeometric object as:

>>> ... # create particles
>>> from pysph.parallel import ZoltanParallelManagerGeometric
>>> pm = ZoltanParallelManagerGeometric(dim, particles, comm, radius_scale, lb_method)

The constructor for the parallel manager is quite similar to the NNPS constructor, with two additional parameters, comm and lb_method. The former is the MPI communicator object and the latter is the partitioning algorithm requested. The following geometric load balancing algorithms are supported (selecting one is sketched after the list):

  • Recursive Coordinate Bisection (RCB)
  • Recursive Inertial Bisection (RIB)
  • Hilbert Space Filling Curves (HSFC)
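
Selecting one of these is a matter of the lb_method string; for instance (a sketch, reusing the constructor arguments from above and assuming the keyword name matches):

>>> pm = ZoltanParallelManagerGeometric(dim, particles, comm,
...                                     radius_scale, lb_method='HSFC')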

The particle distribution can be updated in parallel by a call to the ParallelManager.update() method. Particles across processor boundaries that are needed for neighbor queries are assigned the tag Remote as shown in the figure:

_images/local-remote-particles.png

Local and remote particles in the vicinity of a processor boundary (dashed line)

Putting it together: A simple example

Now that we know how to work with particles, we will use the data structures to carry out the simplest SPH operation, namely, the estimation of particle density from a given distribution of particles.

We consider particles distributed on a uniform Cartesian lattice ( \(\Delta x = \Delta y = \Delta\)) in a doubly periodic domain \([0,1]\times[0,1]\).

The particle mass is set equal to the “volume” \(\Delta^2\) associated with each particle and the smoothing length is taken as \(1.3\times \Delta\). With this initialization, we have for the estimate of the particle density

\[\langle \rho \rangle_a = \sum_{b\in\mathcal{N}(a)} m_b W_{ab} \approx 1\]

We will use the CubicSpline kernel, defined in pysph.base.kernels module. The code to set-up the particle distribution is given below

# PySPH imports
from pyzoltan.core.carray import UIntArray
from pysph.base import utils
from pysph.base.kernels import CubicSpline
from pysph.base.nnps import DomainManager, LinkedListNNPS

# NumPy
import numpy

# Create a particle distribution
dx = 0.01; dxb2 = 0.5 * dx
x, y = numpy.mgrid[dxb2:1:dx, dxb2:1:dx]

x = x.ravel(); y = y.ravel()
h = numpy.ones_like(x) * 1.3*dx
m = numpy.ones_like(x) * dx*dx

# Create the particle array
pa = utils.get_particle_array(x=x,y=y,h=h,m=m)

# Create the periodic DomainManager object and NNPS
domain = DomainManager(xmin=0., xmax=1., ymin=0., ymax=1., periodic_in_x=True, periodic_in_y=True)
nps = LinkedListNNPS(dim=2, particles=[pa,], radius_scale=2.0, domain=domain)

# The SPH kernel. The dimension argument is needed for the correct normalization constant
k = CubicSpline(dim=2)

Note

Notice that the particles were created with an offset of \(\frac{\Delta}{2}\). This is required since the NNPS object will box-wrap particles near periodic boundaries.

The NNPS object will create periodic ghosts for the particles along each periodic axis.

_images/periodic-domain-ghost-particle-tags.png

The ghost particles are assigned the tag value 2. For this example, periodic ghosts are created along each coordinate direction as shown in the figure.

SPH Kernels

Pairwise interactions in SPH are weighted by the kernel \(W_{ab}\). In PySPH, the pysph.base.kernels module provides a Python interface for these terms. The general definition for an SPH kernel is of the form:

class Kernel(object):
    def __init__(self, dim=1):
        self.radius_scale = 2.0
        self.dim = dim

    def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0):
        ...
        return wij

    def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]):
        ...
        grad[0] = dwij_x
        grad[1] = dwij_y
        grad[2] = dwij_z

The kernel is an object with two methods kernel and gradient. \(\text{xij}\) is the difference vector between the destination and source particle \(\boldsymbol{x}_{\text{i}} - \boldsymbol{x}_{\text{j}}\) with \(\text{rij} = \sqrt{ \boldsymbol{x}_{ij}^2}\). The gradient method accepts an additional argument that upon exit is populated with the kernel gradient values.
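
A quick interactive check of this interface, assuming CubicSpline follows the form just shown (the numbers are illustrative):

>>> from pysph.base.kernels import CubicSpline
>>> k = CubicSpline(dim=2)
>>> wij = k.kernel(xij=[0.1, 0.0, 0.0], rij=0.1, h=0.1)    # kernel value
>>> grad = [0.0, 0.0, 0.0]
>>> k.gradient(xij=[0.1, 0.0, 0.0], rij=0.1, h=0.1, grad=grad)
>>> grad        # now holds the kernel gradient components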

Density summation

In the final part of the code, we iterate over all target or destination particles and compute the density contributions from neighboring particles:

nbrs = UIntArray()                                                      # array for neighbors
x, y, h, m  = pa.get('x', 'y', 'h', 'm', only_real_particles=False)     # source particles will include ghosts

for i in range( pa.num_real_particles ):                                # iterate over all local particles
    xi = x[i]; yi = y[i]; hi = h[i]

    nps.get_nearest_particles(0, 0, i, nbrs)                            # get neighbors
    neighbors = nbrs.get_npy_array()                                    # numpy array of neighbors

    rho = 0.0
    for j in neighbors:                                                 # iterate over each neighbor

        xij = xi - x[j]                                                 # interaction terms
        yij = yi - y[j]
        rij = numpy.sqrt( xij**2 + yij**2 )
        hij = 0.5 * (h[i] + h[j])

        wij = k.kernel( [xij, yij, 0.0], rij, hij)                      # kernel interaction

        rho += m[j] * wij

    pa.rho[i] = rho                                                    # contribution for this destination

The average density computed in this manner can be verified as \(\rho_{\text{avg}} = 0.99994676895585222\).
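
This number can be reproduced directly from the particle array (a sketch); since real particles are stored first (see Particle aligning above), a simple slice suffices:

rho_avg = pa.rho[:pa.num_real_particles].mean()
print(rho_avg)            # approximately 0.99995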

Summary

In this document, we introduced the most fundamental data structures in PySPH for working with particles. With these data structures, PySPH can be used as a library for managing particles for your application.

If you are interested in the PySPH framework and want to try out some examples, check out the tutorials: Learning the ropes.

Introduction to PyZoltan

PyZoltan is, as the name suggests, a Python wrapper for the Zoltan data management library. Although its primary purpose is dynamic load balancing for PySPH, the features are general enough to warrant a separate discussion. In Using the PySPH library, we touched upon how to perform nearest neighbor queries in a distributed environment. In this document, we introduce the PyZoltan interface in its native, non-SPH form.

In PyZoltan, we wrap the specific routines and objects that we wish to use. The following features of Zoltan are currently supported:

  • Dynamic load balancing using geometric algorithms
  • Unstructured point-to-point communication
  • Distributed data directories

PyZoltan interfaces with Zoltan, which is itself a library that is called to perform specific tasks on the application data. Information about the application data is provided to Zoltan through callbacks, whereby query functions are registered with Zoltan. These query functions are called internally by Zoltan and are responsible for providing the correct information about the application data to the library. The user is responsible for making this data available as the application requires.

A simple example: Partitioning points

The following simple example demonstrates the partitioning of a random collection of points in the unit cube \([0,1]^3\). The objects to be partitioned in this case are therefore the points themselves, which can be thought of as particles in an SPH simulation. We begin with some imports:

# Imports
import mpi4py.MPI as mpi
from pyzoltan.core.carray import UIntArray, DoubleArray
from pyzoltan.core import zoltan

from numpy import random
import numpy as np

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

comm = mpi.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

colors = ['r', 'g', 'b', 'y', 'm', 'k', 'c', 'burlywood']

Creating the data

After the initial imports, we define the local data on each processor and broadcast this to the root node for plotting the initial assignment:

numPoints = 1<<12

x = random.random( numPoints )
y = random.random( numPoints )
z = random.random( numPoints )
gid = np.arange( numPoints*size, dtype=np.uint32 )[rank*numPoints:(rank+1)*numPoints]

X = np.zeros( size * numPoints )
Y = np.zeros( size * numPoints )
Z = np.zeros( size * numPoints )
GID = np.arange( numPoints*size, dtype=np.uint32 )

comm.Gather( sendbuf=x, recvbuf=X, root=0 )
comm.Gather( sendbuf=y, recvbuf=Y, root=0 )
comm.Gather( sendbuf=z, recvbuf=Z, root=0 )

Note

Each object (point) is assigned a unique global identifier (the gid array). The identifiers must be unique for a load balancing cycle.

Note

The data type of the global identifiers is of type ZOLTAN_ID_TYPE (default uint32). This is set at the time of building the Zoltan library.

ZoltanGeometricPartitioner

The ZoltanGeometricPartitioner is a concrete sub-class of PyZoltan. This class defines all helper methods needed for a domain decomposition using a geometric algorithm. After the data has been initialized, we instantiate the ZoltanGeometricPartitioner object and set some parameters:

xa = DoubleArray(numPoints); xa.set_data(x)
ya = DoubleArray(numPoints); ya.set_data(y)
za = DoubleArray(numPoints); za.set_data(z)
gida = UIntArray(numPoints); gida.set_data(gid)

pz = zoltan.ZoltanGeometricPartitioner(
    dim=3, comm=comm, x=xa, y=ya, z=za, gid=gida)

pz.set_lb_method('RCB')
pz.Zoltan_Set_Param('DEBUG_LEVEL', '1')

Note

We use CArrays internally to represent the data in PyZoltan. This is done mainly for compatibility with the PySPH particle data structure.

The complete list of parameters can be found in the Zoltan reference manual. All parameters are supported through the wrapper PyZoltan.Zoltan_Set_Param() method. In this example, we set the desired load balancing algorithm (Recursive Coordinate Bisection) and the output debug level.

Calling the load balance function

Once all the parameters are appropriately set up, we can ask Zoltan to provide new assignments for the particles:

pz.Zoltan_LB_Balance()

This will call the chosen load balancing function internally and upon return, set a number of lists (arrays) indicating which objects need to be exported and which objects need to be imported. The data attributes for the export lists are:

  • numExport : Number of objects to be exported
  • exportLocalids : Local indices of the objects to be exported
  • exportGlobalids : Global indices of the objects to be exported
  • exportProcs : A list of size numExport indicating to which processor each object is exported

And similar arrays for the import lists. The import/export lists returned by Zoltan give an application all the information required to initiate the data transfer.
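
For instance, the export lists can be inspected as NumPy arrays via get_npy_array (a sketch):

print('%d objects to export' % pz.numExport)
print(pz.exportGlobalids.get_npy_array())   # which objects leave this rank
print(pz.exportProcs.get_npy_array())       # and where each one goes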

Note

Zoltan does not perform the data transfer. The data transfer must be done by the application or using the Unstructured communication utilities provided by Zoltan.

Given the new assignments, we once again broadcast this to the root to plot the final partition. The partition generated by this approach is shown below.

_images/point-partition.png

Point assignment to 4 processors where color indicates assignment.

We can see that the RCB method has resulted in cuts orthogonal to the domain axes. Each processor has exactly one fourth of the total number of particles.

The code for this example can be found in pyzoltan/core/tests/3d_partition.py.

Inverting the Import/Export lists

In the example above, Zoltan returned a list of objects that were to be imported and exported. There arise situations in applications however, when only one set of arrays is available. For example, a common scenario is that we might know which objects need to be exported to remote processors but do not know in advance which objects need to be imported. The matter is complicated for dynamic applications because without a knowledge of the number of objects to be imported, we cannot allocate buffers of appropriate size on the receiving end.

For these scenarios when only one set (either import or export) of arrays is available, we use the PyZoltan.Zoltan_Invert_Lists() method to get the other set.

PyZoltan exposes this important utility function from Zoltan by assuming that the export lists are known to the application. Upon return from this method, the relevant import lists are also known. Note that the behaviour of import and export lists can be interchanged from the application.

A simple example demonstrating this is given below:

from pyzoltan.core import carray
from pyzoltan.core import zoltan

import numpy
import mpi4py.MPI as mpi

comm = mpi.COMM_WORLD; rank = comm.Get_rank(); size = comm.Get_size()

if rank == 0:
    proclist = numpy.array( [1, 1, 2, 1], dtype=numpy.int32 )
    locids = numpy.array( [1, 3, 5, 7], dtype=numpy.uint32 )
    glbids = numpy.array( [1, 3, 5, 7], dtype=numpy.uint32 )

if rank == 1:
    proclist = numpy.array( [0, 2, 0], dtype=numpy.int32 )
    locids = numpy.array( [1, 3, 5], dtype=numpy.uint32 )
    glbids = numpy.array( [11, 33, 55], dtype=numpy.uint32 )

if rank == 2:
    proclist = numpy.array( [1, 1], dtype=numpy.int32 )
    locids = numpy.array( [1, 3], dtype=numpy.uint32 )
    glbids = numpy.array( [111, 333], dtype=numpy.uint32 )

# create the Zoltan object
zz = zoltan.PyZoltan(comm)

# set the export lists
numExport = proclist.size; zz.numExport = numExport
zz.exportLocalids.resize(numExport); zz.exportLocalids.set_data(locids)
zz.exportGlobalids.resize(numExport); zz.exportGlobalids.set_data(glbids)
zz.exportProcs.resize(numExport); zz.exportProcs.set_data(proclist)

print('Proc %d to send %s to %s' % (rank, glbids, proclist))

# Invert the lists
zz.Zoltan_Invert_Lists()

# get the import lists
numImport = zz.numImport
importlocids = zz.importLocalids.get_npy_array()
importglbids = zz.importGlobalids.get_npy_array()
importprocs = zz.importProcs.get_npy_array()

print('Proc %d to recv %s from %s' % (rank, importglbids, importprocs))

In this example (which is hard-coded for three processors), each processor artificially creates a list of objects it knows it must send to remote processors; this is set up as the export lists for the PyZoltan object. Thereafter, PyZoltan.Zoltan_Invert_Lists() is called to get the lists that must be imported by each processor. The output from this example is shown below:

$ mpirun -n 3 python invert_lists.py
Proc 2 to send [111 333] to [1 1]
Proc 1 to send [11 33 55] to [0 2 0]
Proc 0 to send [1 3 5 7] to [1 1 2 1]
Proc 2 to recv [ 5 33] from [0 1]
Proc 0 to recv [11 55] from [1 1]
Proc 1 to recv [  1   3   7 111 333] from [0 0 0 2 2]

We can see that after a call to this method, each processor knows of remote data that must be received. In an application, this information can be used to effect the data transfer.

Another option is to use the unstructured communication utilities offered by Zoltan. This is described next.

Unstructured point to point communication

In the previous section, we saw how to use the Zoltan library function to invert a set of export indices to get the corresponding import indices. Naturally, with a little bit of work, we can structure the requisite communication (MPI sends and receives) to exchange the data.

This set of operations is fairly common and Zoltan (PyZoltan) provides a very convenient utility called ZComm to perform such communication. The typical use case for ZComm is when we know the list of local objects to send to remote processors but have no information about the objects to be received. An example (pyzoltan/core/tests/zcomm.py) demonstrating the use of the ZComm object is outlined below.

The example can be run with an arbitrary number of processors. Each processor allocates some data locally and randomly picks a few (between 1 and 10 in the code below) of these objects to be sent to remote processors. The remote processors are also picked randomly:

import mpi4py.MPI as mpi
import numpy as np
from numpy import random

# import the unstructured communication package
from pyzoltan.core import zoltan_comm
from pyzoltan.core import zoltan

# MPI comm, rank and size
comm = mpi.COMM_WORLD; rank = comm.Get_rank(); size = comm.Get_size()

# each processor creates some random data
numObjectsTotal = 1<<10

x = random.random(numObjectsTotal)
gids = np.array( np.arange(size * numObjectsTotal) )[rank*numObjectsTotal:(rank+1)*numObjectsTotal]
gids = gids.astype(np.uint32)

# arbitrarily assign some objects to be sent to some other processor
nsend = np.int32( random.random_integers(low=1, high=10) )
object_ids = random.random_integers( low=0, high=numObjectsTotal-1, size=nsend )  # valid indices into x
proclist = random.random_integers(low=0, high=size-1, size=nsend).astype(np.int32)

my_indices = np.where(proclist == rank)[0]
proclist[my_indices] = (rank+1)%size

This information is sufficient to instantiate the ZComm object, which will be used as the communication plan to exchange this data. Once the communication plan is set up, each processor knows how much data it must receive through the ZComm.nreturn attribute. This is used to allocate receive buffers:

# create the ZComm object
tag = np.int32(0)
zcomm = zoltan_comm.ZComm(comm, tag=tag, nsend=nsend, proclist=proclist)

# the data to send and receive
senddata = x[ object_ids ]
recvdata = np.ones( zcomm.nreturn )

With the send buffer and the allocated receive buffer, we can perform the communication using the ZComm.Comm_Do() method:

# use zoltan to exchange doubles
print("Proc %d, Sending %s to %s" % (rank, senddata, proclist))
zcomm.Comm_Do(senddata, recvdata)
print("Proc %d, Received %s" % (rank, recvdata))

Note that the user does not need to explicitly write the MPI send and receive calls. All that is required from the user is to correctly allocate the data on the receive side. The output from this example is (it will vary given the use of random numbers):

$ mpirun  -n 3 python zcomm.py
Proc 2, Sending [ 0.83476393  0.07041833  0.20059537  0.7722934   0.4529769 ] to [0 1 0 0 1]
Proc 2, Received [ 0.50391764  0.40028207]
Proc 0, Sending [ 0.50391764] to [2]
Proc 1, Sending [ 0.29755463  0.40028207  0.69433472] to [0 2 0]
Proc 1, Received [ 0.07041833  0.4529769 ]
Proc 0, Received [ 0.29755463  0.69433472  0.83476393  0.20059537  0.7722934 ]

Using the plan for similar communication

The ZComm object was used to send values of type float64 in this example. If the number of objects to be sent and their destinations are the same, we can modify the ZComm to send other objects. For example, the same object can be used to exchange the global indices (uint32) like so:

senddata = gids[ object_ids ]
recvdata = np.ones(zcomm.nreturn, dtype=np.uint32)
zcomm.set_nbytes(recvdata.itemsize)

print "Proc %d, Sending %s to %s"%(rank, senddata, proclist)
zcomm.Comm_Do(senddata, recvdata)
print "Proc %d, Received %s"%(rank, recvdata)

Note

The ZComm.set_nbytes() method is used to indicate the size of the individual objects that are communicated via ZComm.Comm_Do().

The output with this change is:

$ mpirun  -n 3  python zcomm.py
Proc 1, Sending [1054 1692 2034] to [0 2 0]
Proc 0, Sending [214] to [2]
Proc 2, Sending [2720 3034 2511 2412 2975] to [0 1 0 0 1]
Proc 2, Received [ 214 1692]
Proc 1, Received [3034 2975]
Proc 0, Received [1054 2034 2720 2511 2412]

Reversing the communication plan

It is often the case for dynamic applications that objects initially shared with remote processors have their values updated on remote processors. Subsequently, these updated values are required on the originating processor, necessitating them to be communicated back.

For such scenarios, the communication plan represented by ZComm can be used to reverse the communication. That is, the data that was originally sent is now treated as received, and vice versa.

To illustrate the use of this feature, we continue with our example. The received data (array of unsigned ints) is modified on the remote processor and communicated back using the ZComm.Comm_Do_Reverse() method:

recvdata[:] = rank

updated_info = np.zeros(zcomm.nsend, dtype=senddata.dtype)
print('Proc %d, sending updated data %s' % (rank, recvdata))
zcomm.Comm_Do_Reverse(recvdata, updated_info)
print('Proc %d, received updated data %s' % (rank, updated_info))

Note

The size of the buffer each processor will receive in the reverse communication phase is equal to the number of objects initially sent. This is available through the ZComm.nsend attribute.

The output from this when run on 3 processors is:

$ mpirun  -n 3  python zcomm.py
Proc 1, Sending [1054 1692 2034] to [0 2 0]
Proc 0, Sending [214] to [2]
Proc 2, Sending [2720 3034 2511 2412 2975] to [0 1 0 0 1]
Proc 2, Received [ 214 1692]
Proc 1, Received [3034 2975]
Proc 0, Received [1054 2034 2720 2511 2412]
Proc 0, sending updated data [0 0 0 0 0]
Proc 2, sending updated data [2 2]
Proc 1, sending updated data [1 1]
Proc 0, received updated data [2]
Proc 1, received updated data [0 2 0]
Proc 2, received updated data [0 1 0 0 1]

Distributed data directories

The Zoltan Distributed Data Directory utility is a convenient way for a processor to locate remote data. It is implemented as a parallel hash map, keyed on the object identifiers (global indices) and with arbitrary user data associated with each entry.

The use of this feature is highly problem dependent, since the user-defined data will necessarily change for different applications. We demonstrate its use with a simple example in which each processor stores the ownership of its objects in the distributed directory, without any user data associated with each entry.

We begin with the standard set of imports and create some data on each processor and assign each object a unique global identifier:

import numpy
import pyzoltan.api as pz
import mpi4py.MPI as mpi

comm = mpi.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# every processor owns some data
numObjectsTotal = 5
my_indices = numpy.array( range( rank*numObjectsTotal,(rank+1)*numObjectsTotal ), dtype=numpy.uint32 )

gid = pz.UIntArray(my_indices.size); gid.set_data( my_indices )

Additionally, each processor has an IntArray which denotes object assignment:

part_assignment = numpy.array( [rank]*numObjectsTotal, dtype=numpy.int32 )
part = pz.IntArray( part_assignment.size ); part.set_data( part_assignment )

This is sufficient data to create the distributed directory:

# create a zoltan dd and store the object assignments
dd = pz.Zoltan_DD(comm)

# update the dd with the data
dd.Zoltan_DD_Update(gid, part)

Note that after instantiation of the Zoltan_DD object, we call the Zoltan_DD.Zoltan_DD_Update() method to update the data associated with this directory. Now, given the shared data available with each processor, we can query for object assignments. In the example below, each processor queries for the objects with global indices numObjectsTotal + rank and numObjectsTotal - rank:

# now we can query the dd
owner_data = pz.IntArray()   # output array for the object data assignment
owner_parts = pz.IntArray()  # output array for the object assignment

# every processor requests for information about some data
query_gids = pz.UIntArray(2); query_gids.set_data( numpy.array([numObjectsTotal+rank,
                                                                numObjectsTotal-rank], dtype=numpy.uint32) )

# use Zoltan_DD_Find to query the data
dd.Zoltan_DD_Find(query_gids, owner_parts, owner_data)

The result from this query with \(3\) processors is shown below:

$ mpirun  -n 3 python zoltan_dd.py
Processor 0, query_gids = [5 5], owner_parts = [1 1], owner_data = [1 1]
Processor 1, query_gids = [6 4], owner_parts = [1 0], owner_data = [1 0]
Processor 2, query_gids = [7 3], owner_parts = [1 0], owner_data = [1 0]

Reference documentation

PySPH Reference Documentation

Autogenerated from doc strings using sphinx’s autodoc feature.

Module application

class pysph.solver.application.Application(fname=None, domain=None)

Bases: object

Class used by any SPH application.

Constructor

Parameters:
  • fname (str) – file name to use.
  • domain (pysph.nnps.DomainManager) – A domain manager to use. This is used for periodic domains etc.
add_option(opt)

Add an Option/OptionGroup or their list to OptionParser

dump_code(file)

Dump the generated code to given file.

run()

Run the application.

setup(solver, equations, nnps=None, inlet_outlet_factory=None, particle_factory=None, *args, **kwargs)

Setup the application’s solver.

This will parse the command line arguments (if this is not called from within an IPython notebook or shell) and then, using those parameters and any additional ones supplied, call the solver’s setup method.

Parameters:
  • solver (pysph.solver.solver.Solver) – The solver instance.
  • equations (list) – A list of Groups/Equations.
  • nnps (pysph.base.nnps.NNPS) – Optional NNPS instance. If None is given a default NNPS is created.
  • inlet_outlet_factory (callable or None) – The inlet_outlet_factory is passed a dictionary of the particle arrays. The factory should return a list of inlets and outlets.
  • particle_factory (callable or None) – If supplied, particles will be created for the solver using the particle arrays returned by the callable. Else, particles for the solver need to be set before calling this method.
  • args – extra positional arguments passed on to the particle_factory.
  • kwargs – extra keyword arguments passed to the particle_factory.

Examples

>>> def create_particles():
...    ...
...
>>> solver = Solver(...)
>>> equations = [...]
>>> app = Application()
>>> app.setup(solver=solver, equations=equations,
...           particle_factory=create_particles)
>>> app.run()

Module carray

Implementation of resizeable arrays of different types in Cython.

All arrays provide for the following operations:

  • access by indexing.
  • access through get/set function.
  • appending values at the end of the array.
  • reserving space for future appends.
  • access to internal data through a numpy array.

Each array also provides an interface to its data through a numpy array. This is done through the get_npy_array function. The returned numpy array can be used just like any other numpy array, except for the following restrictions:

  • the array may not be resized.
  • references of this array should not be kept.
  • slices of this array may not be made.

The numpy array may however be copied and used in any manner.
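
The following short session sketches the intended pattern:

>>> import numpy as np
>>> from pyzoltan.core.carray import DoubleArray
>>> a = DoubleArray(3)
>>> view = a.get_npy_array()          # a view on the internal buffer
>>> view[:] = np.array([1.0, 2.0, 3.0])
>>> a.get(2)
3.0
>>> a.append(4.0)                     # may reallocate: discard old views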

class pyzoltan.core.carray.BaseArray

Bases: object

Base class for managed C-arrays.

__iter__

Support the iteration protocol

__len__

align_array(self, LongArray new_indices)

Rearrange the array contents according to the new indices.

alloc

alloc: ‘long’

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy subset of values from source to self.

copy_values(self, LongArray indices, BaseArray dest)

Copy values of indexed particles from self to dest.

extend(self, ndarray in_array)

Extend the array with data from in_array.

get_c_type(self) → str

Return the c data type of this array.

get_npy_array(self) → ndarray

Returns a numpy array of the data: do not keep its reference.

length

length: ‘long’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

reserve(self, long size)

Resizes the internal data to required size.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes the array to the new size.

set_data(self, ndarray nparr)

Set data from the given numpy array.

If the numpy array is a reference to the numpy array maintained internally by this class, nothing is done. Otherwise, if the size of nparr matches this array, values are copied into the array maintained.

squeeze(self)

Release any unused memory.

update_min_max(self)

Update the min and max values of the array.

class pyzoltan.core.carray.BaseArrayIter(BaseArray arr)

Bases: object

Iteration object to support iteration over BaseArray.

__iter__
next

class pyzoltan.core.carray.DoubleArray

Bases: pyzoltan.core.carray.BaseArray

Represents an array of doubles

Mallocs a memory buffer of size (n*sizeof(double)) and sets up the numpy array. The memory is aligned to 64 byte boundaries.

Parameters:n (long) – Length of the array.
data

pointer

Pointer to the underlying double array.

length

long

Size of the array itself.

alloc

long

Size of the data buffer allocated.

Examples

>>> x = DoubleArray()
>>> x.resize(5)
>>> x.set_data(np.arange(5))
>>> x[0]
0
>>> x = DoubleArray(5)
>>> xnp = x.get_npy_array()
>>> xnp[:] = np.arange(5)
>>> x[0], x[4]
(0.0, 4.0)

__contains__

Returns True if value is in self.

__delitem__

x.__delitem__(y) <==> del x[y]

__getitem__

Get item at position idx.

__reduce__(self)

Implemented to facilitate pickling.

__setitem__

Set location idx to value.

__setstate__(self, d)

Load the carray from the dictionary d.

append(self, double value)

Appends value to the end of the array.

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy a subset of values from src to self.

Parameters:
  • start_index (long) – the first index in dest that corresponds to the 0th index in source
  • end_index (long) – the first index in dest from start_index that is not copied
copy_values(self, LongArray indices, BaseArray dest)

Copies values of indices in indices from self to dest.

No size check is performed; we assume dest is of proper size, i.e. at least as long as indices.

extend(self, ndarray in_array)

Extend the array with data from in_array.

Parameters:in_array (ndarray) – a numpy array with data to be added to the current array.

Notes

  • accessing the in_array using the indexing operation seems to be costly. Look at the annotated cython html file.
get(self, long idx) → double

Gets value stored at position idx.

get_c_type(self) → str

Return the c data type for this array as a string.

index(self, double value) → long

Returns the index at which value is in self, else -1.

maximum

maximum: ‘double’

minimum

minimum: ‘double’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

Parameters:
  • index_list (ndarray) – a list of indices which should be removed.
  • input_sorted (bool) – indicates if the input is sorted in ascending order. if not, the array will be sorted internally.

Notes

If the input indices are not sorted, sort them in ascending order. Starting with the last element in the index list, start replacing the element at the said index with the last element in the data and update the length of the array.

reserve(self, long size)

Resizes the internal data to size*sizeof(double) bytes.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes internal data to size*sizeof(double) bytes and sets the length to the new size.

set(self, long idx, double value)

Sets location idx to value.

set_view(self, DoubleArray parent, long start, long end)

Create a view of a given parent array from start to end.

Note that this excludes the end index.

Parameters:
  • parent (DoubleArray) – The parent array of which this is a view.
  • start (long) – The starting index to start the view from.
  • end (long) – The ending index to end the view at, excludes the end itself.
squeeze(self)

Release any unused memory.

update_min_max(self)

Updates the min and max values of the array.

class pyzoltan.core.carray.FloatArray

Bases: pyzoltan.core.carray.BaseArray

Represents an array of floats

Mallocs a memory buffer of size (n*sizeof(float)) and sets up the numpy array. The memory is aligned to 64 byte boundaries.

Parameters:n (long) – Length of the array.
data

pointer

Pointer to the underlying float array.

length

long

Size of the array itself.

alloc

long

Size of the data buffer allocated.

Examples

>>> x = FloatArray()
>>> x.resize(5)
>>> x.set_data(np.arange(5))
>>> x[0]
0
>>> x = FloatArray(5)
>>> xnp = x.get_npy_array()
>>> xnp[:] = np.arange(5)
>>> x[0], x[4]
(0.0, 4.0)

__contains__

Returns True if value is in self.

__delitem__

x.__delitem__(y) <==> del x[y]

__getitem__

Get item at position idx.

__reduce__(self)

Implemented to facilitate pickling.

__setitem__

Set location idx to value.

__setstate__(self, d)

Load the carray from the dictionary d.

append(self, float value)

Appends value to the end of the array.

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy a subset of values from src to self.

Parameters:
  • start_index (long) – the first index in dest that corresponds to the 0th index in source
  • end_index (long) – the first index in dest from start_index that is not copied
copy_values(self, LongArray indices, BaseArray dest)

Copies values of indices in indices from self to dest.

No size check is performed; we assume dest is of proper size, i.e. at least as long as indices.

extend(self, ndarray in_array)

Extend the array with data from in_array.

Parameters:in_array (ndarray) – a numpy array with data to be added to the current array.

Notes

  • accessing the in_array using the indexing operation seems to be costly. Look at the annotated cython html file.
get(self, long idx) → float

Gets value stored at position idx.

get_c_type(self) → str

Return the c data type for this array as a string.

index(self, float value) → long

Returns the index at which value is in self, else -1.

maximum

maximum: ‘float’

minimum

minimum: ‘float’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

Parameters:
  • index_list (ndarray) – a list of indices which should be removed.
  • input_sorted (bool) – indicates if the input is sorted in ascending order. if not, the array will be sorted internally.

Notes

If the input indices are not sorted, sort them in ascending order. Starting with the last element in the index list, start replacing the element at the said index with the last element in the data and update the length of the array.

reserve(self, long size)

Resizes the internal data to size*sizeof(float) bytes.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes internal data to size*sizeof(float) bytes and sets the length to the new size.

set(self, long idx, float value)

Sets location idx to value.

set_view(self, FloatArray parent, long start, long end)

Create a view of a given parent array from start to end.

Note that this excludes the end index.

Parameters:
  • parent (FloatArray) – The parent array of which this is a view.
  • start (long) – The starting index to start the view from.
  • end (long) – The ending index to end the view at, excludes the end itself.
squeeze(self)

Release any unused memory.

update_min_max(self)

Updates the min and max values of the array.

class pyzoltan.core.carray.IntArray

Bases: pyzoltan.core.carray.BaseArray

Represents an array of ints

Mallocs a memory buffer of size (n*sizeof(int)) and sets up the numpy array. The memory is aligned to 64 byte boundaries.

Parameters:n (long) – Length of the array.
data

pointer

Pointer to an integer array.

length

long

Size of the array itself.

alloc

long

Size of the data buffer allocated.

Examples

>>> x = IntArray()
>>> x.resize(5)
>>> x.set_data(np.arange(5))
>>> x[0]
0
>>> x = IntArray(5)
>>> xnp = x.get_npy_array()
>>> xnp[:] = np.arange(5)
>>> x[0], x[4]
(0, 4)

__contains__

Returns True if value is in self.

__delitem__

x.__delitem__(y) <==> del x[y]

__getitem__

Get item at position idx.

__reduce__(self)

Implemented to facilitate pickling.

__setitem__

Set location idx to value.

__setstate__(self, d)

Load the carray from the dictionary d.

append(self, int value)

Appends value to the end of the array.

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy a subset of values from src to self.

Parameters:
  • start_index (long) – the first index in dest that corresponds to the 0th index in source
  • end_index (long) – the first index in dest from start_index that is not copied
copy_values(self, LongArray indices, BaseArray dest)

Copies values of indices in indices from self to dest.

No size check is performed; we assume dest is of proper size, i.e. at least as long as indices.

extend(self, ndarray in_array)

Extend the array with data from in_array.

Parameters:in_array (ndarray) – a numpy array with data to be added to the current array.

Notes

  • accessing the in_array using the indexing operation seems to be costly. Look at the annotated cython html file.
get(self, long idx) → int

Gets value stored at position idx.

get_c_type(self) → str

Return the c data type for this array as a string.

index(self, int value) → long

Returns the index at which value is in self, else -1.

maximum

maximum: ‘int’

minimum

minimum: ‘int’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

Parameters:
  • index_list (ndarray) – a list of indices which should be removed.
  • input_sorted (bool) – indicates if the input is sorted in ascending order. if not, the array will be sorted internally.

Notes

If the input indices are not sorted, sort them in ascending order. Starting with the last element in the index list, start replacing the element at the said index with the last element in the data and update the length of the array.

reserve(self, long size)

Resizes the internal data to size*sizeof(int) bytes.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes internal data to size*sizeof(int) bytes and sets the length to the new size.

set(self, long idx, int value)

Sets location idx to value.

set_view(self, IntArray parent, long start, long end)

Create a view into the given parent array from start to end.

Note that this excludes the end index.

Parameters:
  • parent (IntArray) – The parent array of which this is a view.
  • start (long) – The starting index to start the view from.
  • end (long) – The ending index to end the view at, excludes the end itself.
squeeze(self)

Release any unused memory.

update_min_max(self)

Updates the min and max values of the array.

class pyzoltan.core.carray.LongArray

Bases: pyzoltan.core.carray.BaseArray

Represents an array of longs

Mallocs a memory buffer of size (n*sizeof(long)) and sets up the numpy array. The memory is aligned to 64 byte boundaries.

Parameters: n (long) – Length of the array.
data

pointer

Pointer to a long array.

length

long

Size of the array itself.

alloc

long

Size of the data buffer allocated.

Examples

>>> import numpy as np
>>> x = LongArray()
>>> x.resize(5)
>>> x.set_data(np.arange(5))
>>> x[0]
0
>>> x = LongArray(5)
>>> xnp = x.get_npy_array()
>>> xnp[:] = np.arange(5)
>>> x[0], x[4]
(0, 4)
__contains__

Returns True if value is in self.

__delitem__

x.__delitem__(y) <==> del x[y]

__getitem__

Get item at position idx.

__reduce__(self)

Implemented to facilitate pickling.

__setitem__

Set location idx to value.

__setstate__(self, d)

Load the carray from the dictionary d.

append(self, long value)

Appends value to the end of the array.

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy a subset of values from source into self.

Parameters:
  • start_index (long) – the first index in self that corresponds to index 0 of source
  • end_index (long) – the first index in self, counting from start_index, that is not copied
copy_values(self, LongArray indices, BaseArray dest)

Copies the values at the positions given in indices from self to dest.

No size check is performed; dest is assumed to be of the proper size, i.e. at least as long as indices.

extend(self, ndarray in_array)

Extend the array with data from in_array.

Parameters:in_array (ndarray) – a numpy array with data to be added to the current array.

Notes

  • Accessing in_array using the indexing operation seems to be costly; see the annotated Cython HTML file.
get(self, long idx) → long

Gets value stored at position idx.

get_c_type(self) → str

Return the c data type for this array as a string.

index(self, long value) → long

Returns the index at which value is in self, else -1.

maximum

maximum: ‘long’

minimum

minimum: ‘long’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

Parameters:
  • index_list (ndarray) – a list of indices which should be removed.
  • input_sorted (bool) – indicates if the input is sorted in ascending order. If not, the array is sorted internally.

Notes

If the input indices are not sorted, they are first sorted in ascending order. Then, starting from the last index in the list, the element at that index is overwritten with the last element of the data and the length of the array is decremented.

reserve(self, long size)

Resizes the internal data to size*sizeof(long) bytes.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes internal data to size*sizeof(long) bytes and sets the length to the new size.

set(self, long idx, long value)

Sets location idx to value.

set_view(self, LongArray parent, long start, long end)

Create a view into the given parent array from start to end.

Note that this excludes the end index.

Parameters:
  • parent (LongArray) – The parent array of which this is a view.
  • start (long) – The starting index to start the view from.
  • end (long) – The ending index to end the view at, excludes the end itself.
squeeze(self)

Release any unused memory.

update_min_max(self)

Updates the min and max values of the array.

class pyzoltan.core.carray.UIntArray

Bases: pyzoltan.core.carray.BaseArray

Represents an array of unsigned ints

Mallocs a memory buffer of size (n*sizeof(unsigned int)) and sets up the numpy array. The memory is aligned to 64 byte boundaries.

Parameters: n (long) – Length of the array.
data

pointer

Pointer to an unsigned int array.

length

long

Size of the array itself.

alloc

long

Size of the data buffer allocated.

Examples

>>> import numpy as np
>>> x = UIntArray()
>>> x.resize(5)
>>> x.set_data(np.arange(5))
>>> x[0]
0
>>> x = UIntArray(5)
>>> xnp = x.get_npy_array()
>>> xnp[:] = np.arange(5)
>>> x[0], x[4]
(0, 4)
__contains__

Returns True if value is in self.

__delitem__

x.__delitem__(y) <==> del x[y]

__getitem__

Get item at position idx.

__reduce__(self)

Implemented to facilitate pickling.

__setitem__

Set location idx to value.

__setstate__(self, d)

Load the carray from the dictionary d.

append(self, unsigned int value)

Appends value to the end of the array.

copy_subset(self, BaseArray source, long start_index=-1, long end_index=-1)

Copy a subset of values from source into self.

Parameters:
  • start_index (long) – the first index in self that corresponds to index 0 of source
  • end_index (long) – the first index in self, counting from start_index, that is not copied
copy_values(self, LongArray indices, BaseArray dest)

Copies the values at the positions given in indices from self to dest.

No size check is performed; dest is assumed to be of the proper size, i.e. at least as long as indices.

extend(self, ndarray in_array)

Extend the array with data from in_array.

Parameters:in_array (ndarray) – a numpy array with data to be added to the current array.

Notes

  • Accessing in_array using the indexing operation seems to be costly; see the annotated Cython HTML file.
get(self, long idx) → unsigned int

Gets value stored at position idx.

get_c_type(self) → str

Return the c data type for this array as a string.

index(self, unsigned int value) → long

Returns the index at which value is in self, else -1.

maximum

maximum: ‘unsigned int’

minimum

minimum: ‘unsigned int’

remove(self, ndarray index_list, bool input_sorted=0)

Remove the particles with indices in index_list.

Parameters:
  • index_list (ndarray) – a list of indices which should be removed.
  • input_sorted (bool) – indicates if the input is sorted in ascending order. If not, the array is sorted internally.

Notes

If the input indices are not sorted, they are first sorted in ascending order. Then, starting from the last index in the list, the element at that index is overwritten with the last element of the data and the length of the array is decremented.

reserve(self, long size)

Resizes the internal data to size*sizeof(unsigned int) bytes.

reset(self)

Reset the length of the array to 0.

resize(self, long size)

Resizes internal data to size*sizeof(unsigned int) bytes and sets the length to the new size.

set(self, long idx, unsigned int value)

Sets location idx to value.

set_view(self, UIntArray parent, long start, long end)

Create a view into the given parent array from start to end.

Note that this excludes the end index.

Parameters:
  • parent (UIntArray) – The parent array of which this is a view.
  • start (long) – The starting index to start the view from.
  • end (long) – The ending index to end the view at, excludes the end itself.
squeeze(self)

Release any unused memory.

update_min_max(self)

Updates the min and max values of the array.

pyzoltan.core.carray.py_aligned(long n, int item_size) → long

Align n items, each of size item_size bytes, to a 64-byte boundary: return the smallest number of items, at least n, whose total size in bytes is a multiple of 64.
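A rough pure-Python rendering of that rule, assuming item_size divides 64 evenly (true for the numeric types above); this is an illustration, not the actual Cython implementation:

def py_aligned_sketch(n, item_size):
    # Round the buffer size up to the next 64-byte boundary and
    # return the corresponding number of items.
    nbytes = n * item_size
    padded = ((nbytes + 63) // 64) * 64
    return padded // item_size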

Module controller

Implement infrastructure for the solver to add various interfaces

class pysph.solver.controller.CommandManager(solver, comm=None)[source]

Bases: object

Class to manage and synchronize commands from various Controllers

add_function(callable, interval=1)[source]

add a function to be called every interval iterations

add_interface(*args, **kwds)[source]

Add a callable interface to the controller

The callable must accept a Controller instance as its argument. The callable is called in a new thread of its own and it can perform various actions using the methods defined on the Controller instance passed to it. The newly created thread is set to daemon mode and returned.
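A minimal sketch of such an interface (the function name is hypothetical; command_manager stands for a CommandManager attached to a running solver, and this assumes get_status is among the dispatched methods):

def my_interface(controller):
    # Runs in its own daemon thread; `controller` is a Controller proxy.
    print(controller.get_status())

# command_manager.add_interface(my_interface)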

cont()[source]

continue after a pause command

dispatch(*args, **kwargs)[source]

execute/queue a command with specified arguments

execute_commands(solver)[source]

called by the solver after each timestep

get_particle_array_combined(idx, procs=None)[source]

get a single particle array with combined data from all procs

specifying processes is currently not implemented

get_particle_array_from_procs(idx, procs=None)[source]

get particle array at index from all processes

specifying processes is currently not implemented

get_particle_array_index(name)[source]

get the index of the named particle array

get_particle_array_names()[source]

get the names of the particle arrays

get_prop(name)[source]

get a solver property

get_result(lock_id)[source]

get the result of a previously queued command

get_status()[source]

get the status of the controller

get_task_lock(lock_id)[source]

get the Lock instance associated with a command

pause_on_next()[source]

pause and wait for command on the next control interval

set_log_level(level)[source]

set the logging level

set_prop(name, value)[source]

set a solver property

solver_method(name, *args, **kwargs)[source]

execute a method on the solver

sync_commands()[source]

send the pending commands to all the procs in parallel run

wait_for_cmd()[source]

wait for command from any interface

class pysph.solver.controller.Controller(command_manager, block=True)[source]

Bases: object

Controller class acts as a proxy to control the solver

This is passed as an argument to the interface

Methods available:

  • get – get the value of a solver parameter
  • set – set the value of a solver parameter
  • get_result – return result of a queued command
  • pause_on_next – pause solver thread on next iteration
  • wait – block the calling thread until the solver is paused (call after pause_on_next)
  • cont – continue solver thread (call after pause_on_next)

Various other methods are also available, as listed in CommandManager.dispatch_dict, which perform different functions.

  • The methods in CommandManager.active_methods perform their operation and return the result (if any) immediately.
  • The methods in CommandManager.lazy_methods perform their operation later, when the solver thread is available, and return a task-id. The result of the task can be obtained later using the blocking call get_result(), which waits until the result is available and returns it. The availability of the result can be checked using the lock returned by the get_task_lock() method.

FIXME: wait/cont currently do not work in parallel

cont()[source]

continue solver thread after it has been paused by pause_on_next

call this only after calling the pause_on_next method

get(name)[source]

get a solver property; returns immediately

get_blocking()[source]

get the blocking mode ( True/False )

get_result(task_id)[source]

get the result of a previously queued command

pause_on_next()[source]

pause the solver thread on next iteration

set(name, value)[source]

set a solver property; returns immediately

set_blocking(block)[source]

set the blocking mode to True/False

In blocking mode (block=True), all methods other than getting solver properties block until the command is executed by the solver, and return the result. The blocking time can vary depending on the time taken by the solver per iteration and on the command_interval. In non-blocking mode, these methods queue the command for later execution and return a string corresponding to the task_id of the operation. The result can later be obtained by a (blocking) call to get_result with the task_id as argument.

wait()[source]

block the calling thread until the solver thread pauses

call this only after calling the pause_on_next method to tell the controller to pause the solver thread
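Put together, the typical pause/resume sequence from an interface thread looks like this (a sketch, with controller a Controller instance):

controller.pause_on_next()   # request a pause at the next iteration
controller.wait()            # block until the solver thread has actually paused
# ... inspect or modify solver state while it is paused ...
controller.cont()            # let the solver thread continue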

class pysph.solver.controller.DummyComm[source]

Bases: object

A dummy MPI.Comm implementation as a placeholder for serial runs

Get_rank()[source]

return the rank of the process (0)

Get_size()[source]

return the size of the comm (1)

bcast(data)[source]

bcast (broadcast) implementation for serial run

gather(data)[source]

gather implementation for serial run

recv(pid)[source]

dummy recv implementation

send(data, pid)[source]

dummy send implementation

pysph.solver.controller.in_parallel(f)[source]

return a list of results of running the decorated function on all procs

pysph.solver.controller.on_root_proc(f)[source]

run the decorated function only on the root proc

pysph.solver.controller.synchronized(lock_or_func)[source]

decorator for a synchronized (thread-safe) function

Usage:

  • sync_func = synchronized(lock)(func) # sync with an existing lock
  • sync_func = synchronized(func) # sync with a new private lock
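For example (a minimal sketch; increment and its counter argument are hypothetical):

import threading
from pysph.solver.controller import synchronized

def increment(counter):
    counter['n'] += 1

inc_private = synchronized(increment)        # sync with a new private lock
lock = threading.Lock()
inc_shared = synchronized(lock)(increment)   # sync with an existing lock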

SPH equations

class pysph.sph.equation.Equation(dest, sources=None)[source]

Bases: object

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
converged()[source]

Return > 0 to indicate converged iterations and < 0 otherwise.
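For reference, a user-defined equation is simply a subclass whose initialize/loop argument names request the particle arrays and precomputed symbols. A minimal sketch mirroring the SummationDensity equation documented below (the class name is hypothetical):

from pysph.sph.equation import Equation

class MySummationDensity(Equation):
    def initialize(self, d_idx, d_rho):
        # zero the destination accumulator for particle d_idx
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, d_rho, s_idx, s_m, WIJ):
        # accumulate rho_a = sum_b m_b W_ab over the neighbors
        d_rho[d_idx] += s_m[s_idx]*WIJ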

Basic SPH Equations
class pysph.sph.basic_equations.BodyForce(dest, sources, fx=0.0, fy=0.0, fz=0.0)[source]

Bases: pysph.sph.equation.Equation

Add a body force to the particles:

\(\boldsymbol{f} = f_x, f_y, f_z\)

Parameters:
  • fx (float) – Body force per unit mass along the x-axis
  • fy (float) – Body force per unit mass along the y-axis
  • fz (float) – Body force per unit mass along the z-axis
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, d_au, d_av, d_aw)[source]
class pysph.sph.basic_equations.ContinuityEquation(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Density rate:

\(\frac{d\rho_a}{dt} = \sum_b m_b \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab}\)

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_arho)[source]
loop(d_idx, d_arho, s_idx, s_m, DWIJ, VIJ)[source]
class pysph.sph.basic_equations.IsothermalEOS(dest, sources=None, rho0=1000.0, c0=1.0, p0=0.0)[source]

Bases: pysph.sph.equation.Equation

Compute the pressure using the Isothermal equation of state:

\(p = p_0 + c_0^2(\rho_0 - \rho)\)

Parameters:
  • rho0 (float) – Reference density of the fluid (\(\rho_0\))
  • c0 (float) – Maximum speed of sound expected in the system (\(c0\))
  • p0 (float) – Reference pressure in the system (\(p0\))
loop(d_idx, d_rho, d_p)[source]
class pysph.sph.basic_equations.MonaghanArtificialViscosity(dest, sources=None, alpha=1.0, beta=1.0)[source]

Bases: pysph.sph.equation.Equation

Classical Monaghan style artificial viscosity [Monaghan2005]

\[\begin{split}\frac{d\mathbf{v}_{a}}{dt}&=&-\sum_{b}m_{b}\Pi_{ab}\nabla_{a}W_{ab}\end{split}\]

where

\[\begin{split}\Pi_{ab}=\begin{cases}\frac{-\alpha_{\pi}\bar{c}_{ab}\phi_{ab}+ \beta_{\pi}\phi_{ab}^{2}}{\bar{\rho}_{ab}}, & \mathbf{v}_{ab}\cdot \mathbf{r}_{ab}<0\\0, & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0 \end{cases}\end{split}\]

with

\[\begin{split}\phi_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {|\mathbf{r}_{ab}|^{2}+\epsilon^{2}}\\\end{split}\]\[\begin{split}\bar{c}_{ab}&=&\frac{c_{a}+c_{b}}{2}\\\end{split}\]\[\begin{split}\bar{\rho}_{ab}&=&\frac{\rho_{a}+\rho_{b}}{2}\end{split}\]

References

[Monaghan2005]J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759.
Parameters:
  • alpha (float) – produces a shear and bulk viscosity
  • beta (float) – used to handle high Mach number shocks
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_rho, d_cs, d_au, d_av, d_aw, s_m, s_rho, s_cs, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS, DWIJ, DT_ADAPT)[source]
class pysph.sph.basic_equations.SummationDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Good old Summation density:

\(\rho_a = \sum_b m_b W_{ab}\)

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_rho)[source]
loop(d_idx, d_rho, s_idx, s_m, WIJ)[source]
class pysph.sph.basic_equations.VelocityGradient2D(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Compute the SPH evaluation for the velocity gradient tensor in 2D.

The expression for the velocity gradient is:

\(\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}\)

Notes

The tensor properties are stored in the variables v_ij where ‘i’ refers to the velocity component and ‘j’ refers to the spatial component. Thus v_10 is \(\frac{\partial v}{\partial x}\)

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_v00, d_v01, d_v10, d_v11)[source]
loop(d_idx, s_idx, s_m, s_rho, d_v00, d_v01, d_v10, d_v11, DWIJ, VIJ)[source]
class pysph.sph.basic_equations.XSPHCorrection(dest, sources=None, eps=0.5)[source]

Bases: pysph.sph.equation.Equation

Position stepping with XSPH correction [Monaghan1992]

\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}=\mathbf{v}_{a}- \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]

References

[Monaghan1992]J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574.
Parameters: eps (float) – \(\epsilon\) as in the above equation

Notes

This equation must be used to advect the particles. XSPH can be turned off by setting the parameter eps = 0.

initialize(d_idx, d_ax, d_ay, d_az)[source]
loop(s_idx, d_idx, s_m, d_ax, d_ay, d_az, WIJ, RHOIJ1, VIJ)[source]
post_loop(d_idx, d_ax, d_ay, d_az, d_u, d_v, d_w)[source]
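For instance, instantiating XSPH advection for a fluid interacting with itself (the array name is hypothetical):

from pysph.sph.basic_equations import XSPHCorrection

xsph = XSPHCorrection(dest='fluid', sources=['fluid'], eps=0.5)
# setting eps=0.0 turns the correction off, as noted above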
class pysph.sph.basic_equations.XSPHCorrectionForLeapFrog(dest, sources=None, eps=0.5)[source]

Bases: pysph.sph.equation.Equation

The XSPH correction [Monaghan1992] alone. This is meant to be used with a leap-frog integrator which already considers the velocity of the particles. It simply computes the correction term and adds that to ax, ay, az.

\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}= - \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]

References

[Monaghan1992]J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574.
Parameters: eps (float) – \(\epsilon\) as in the above equation

Notes

This equation must be used to advect the particles. XSPH can be turned off by setting the parameter eps = 0.

initialize(d_idx, d_ax, d_ay, d_az)[source]
loop(s_idx, d_idx, s_m, d_ax, d_ay, d_az, WIJ, RHOIJ1, VIJ)[source]
Basic WCSPH Equations
class pysph.sph.wc.basic.ContinuityEquationDeltaSPH(dest, sources, c0, delta=0.1)[source]

Bases: pysph.sph.equation.Equation

Continuity equation with dissipative terms

\(\frac{d\rho_a}{dt} = \sum_b \rho_a \frac{m_b}{\rho_b} \left( \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab} + \delta \eta_{ab} \cdot \nabla_{a} W_{ab} (h_{ab}\frac{c_{ab}}{\rho_a}(\rho_b - \rho_a)) \right)\)

References

[Marrone2011]S. Marrone et al., “delta-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526–1542.
Parameters:
  • c0 (float) – reference speed of sound
  • delta (float) – coefficient used to control the intensity of diffusion of density
initialize(d_idx, d_arho)[source]
loop(d_idx, d_arho, s_idx, s_m, d_cs, s_cs, d_rho, s_rho, DWIJ, VIJ, XIJ, RIJ, HIJ, EPS)[source]
class pysph.sph.wc.basic.MomentumEquation(dest, sources=None, alpha=1.0, beta=1.0, gx=0.0, gy=0.0, gz=0.0, c0=1.0, tensile_correction=False)[source]

Bases: pysph.sph.equation.Equation

Classic Monaghan Style Momentum Equation with Artificial Viscosity

\[\frac{d\mathbf{v}_{a}}{dt}=-\sum_{b}m_{b}\left(\frac{p_{b}} {\rho_{b}^{2}}+\frac{p_{a}}{\rho_{a}^{2}}+\Pi_{ab}\right) \nabla_{a}W_{ab}\]

where

\[\begin{split}\Pi_{ab}=\begin{cases} \frac{-\alpha\bar{c}_{ab}\mu_{ab}+\beta\mu_{ab}^{2}}{\bar{\rho}_{ab}} & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}<0;\\ 0 & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0; \end{cases}\end{split}\]

with

\[\begin{split}\mu_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {\mathbf{r}_{ab}^{2}+\eta^{2}}\\\end{split}\]\[\begin{split}\bar{c}_{ab} = \frac{c_a + c_b}{2}\\\end{split}\]\[\bar{\rho}_{ab} = \frac{\rho_a + \rho_b}{2}\]

References

[Monaghan1992]J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574.
Parameters:
  • alpha (float) – produces a shear and bulk viscosity
  • beta (float) – used to handle high Mach number shocks
  • gx (float) – body force per unit mass along the x-axis
  • gy (float) – body force per unit mass along the y-axis
  • gz (float) – body force per unit mass along the z-axis
  • c0 (float) – reference speed of sound
  • tensile_correction (bool) – switch for tensile instability correction (Default: False)
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_rho, d_cs, d_p, d_au, d_av, d_aw, s_m, s_rho, s_cs, s_p, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS, DWIJ, DT_ADAPT, WIJ, WDP)[source]
post_loop(d_idx, d_au, d_av, d_aw, DT_ADAPT)[source]
class pysph.sph.wc.basic.MomentumEquationDeltaSPH(dest, sources=None, alpha=1.0, gx=0.0, gy=0.0, gz=0.0, rho0=1000.0, c0=1.0)[source]

Bases: pysph.sph.equation.Equation

Momentum equation defined in JOSEPHINE and the delta-SPH model

\[\frac{du_{i}}{dt}=-\frac{1}{\rho_{i}}\sum_{j}\left(p_{j}+p_{i}\right) \nabla_{i}W_{ij}V_{j}+\mathbf{g}_{i}+\alpha hc_{0}\rho_{0}\sum_{j} \pi_{ij}\nabla_{i}W_{ij}V_{j}\]

where

\[\pi_{ij}=\frac{\mathbf{u}_{ij}\cdot\mathbf{r}_{ij}} {|\mathbf{r}_{ij}|^{2}}\]

References

[Marrone2011]S. Marrone et al., “delta-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526–1542.
[Cherfils2012]J. M. Cherfils et al., “JOSEPHINE: A parallel SPH code for free-surface flows”, Computer Physics Communications, 183 (2012), pp 1468–1480.
Parameters:
  • alpha (float) – coefficient used to control the intensity of the diffusion of velocity
  • gx (float) – body force per unit mass along the x-axis
  • gy (float) – body force per unit mass along the y-axis
  • gz (float) – body force per unit mass along the z-axis
  • rho0 (float) – reference density
  • c0 (float) – reference speed of sound

Notes

Artificial viscosity is used in this momentum equation and is controlled by the parameter \(\alpha\). This form of the artificial viscosity is similar but not identical to the Monaghan-style artificial viscosity.

initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_rho, d_cs, d_p, d_au, d_av, d_aw, s_m, s_rho, s_cs, s_p, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS, WIJ, DWIJ)[source]
post_loop(d_idx, d_au, d_av, d_aw, DT_ADAPT)[source]
class pysph.sph.wc.basic.PressureGradientUsingNumberDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Pressure gradient discretized using number density:

\[\frac{d \boldsymbol{v}_a}{dt} = -\frac{1}{m_a}\sum_b (\frac{p_a}{V_a^2} + \frac{p_b}{V_b^2})\nabla_a W_{ab}\]
Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_m, d_rho, s_rho, d_au, d_av, d_aw, d_p, s_p, d_V, s_V, DWIJ)[source]
class pysph.sph.wc.basic.TaitEOS(dest, sources=None, rho0=1000.0, c0=1.0, gamma=7.0, p0=0.0)[source]

Bases: pysph.sph.equation.Equation

Tait equation of state for water-like fluids

\(p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)\)

References

[Cole1948]H. R. Cole, “Underwater Explosions”, Princeton University Press, 1948.
[Batchelor2002]G. Batchelor, “An Introduction to Fluid Dynamics”, Cambridge University Press, 2002.
[Monaghan2005]J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759.
Parameters:
  • rho0 (float) – reference density of fluid particles
  • c0 (float) – maximum speed of sound expected in the system
  • gamma (float) – constant
  • p0 (float) – reference pressure in the system

Notes

The reference speed of sound, c0, is to be taken approximately as 10 times the maximum expected velocity in the system. The particle sound speed is given by the usual expression:

\(c_a = \sqrt{\frac{\partial p}{\partial \rho}}\)

loop(d_idx, d_rho, d_p, d_cs)[source]
class pysph.sph.wc.basic.TaitEOSHGCorrection(dest, sources=None, rho0=1000.0, c0=1.0, gamma=7.0)[source]

Bases: pysph.sph.equation.Equation

Tait Equation of State with Hughes and Graham Correction

\[p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)\]

where

\[\begin{split}\rho_{a}=\begin{cases}\rho_{a} & \rho_{a}\geq\rho_{0}\\ \rho_{0} & \rho_{a}<\rho_{0}\end{cases}\end{split}\]

References

[Hughes2010]J. P. Hughes and D. I. Graham, “Comparison of incompressible and weakly-compressible SPH models for free-surface water flows”, Journal of Hydraulic Research, 48 (2010), pp. 105-117.
Parameters:
  • rho0 (float) – reference density
  • c0 (float) – reference speed of sound
  • gamma (float) – constant

Notes

The correction is to be applied on boundary particles and imposes a minimum value of the density (rho0) which is set upon instantiation. This correction avoids particle sticking behaviour at walls.

loop(d_idx, d_rho, d_p, d_cs)[source]
class pysph.sph.wc.basic.UpdateSmoothingLengthFerrari(dest, dim, hdx=1.0, sources=None)[source]

Bases: pysph.sph.equation.Equation

Update the particle smoothing lengths

\(h_a = hdx \left(\frac{m_a}{\rho_a}\right)^{\frac{1}{d}}\)

References

[Ferrari2009]A. Ferrari et al., “A new 3D parallel SPH scheme for free surface flows”, Computers and Fluids, 38 (2009), pp. 1203–1217.
Parameters:
  • dim (float) – number of dimensions
  • hdx (float) – scaling factor

Notes

Ideally, the kernel scaling factor should be determined from the kernel used based on a linear stability analysis. The default value of (hdx=1) reduces to the formulation suggested by Ferrari et al. who used a Cubic Spline kernel.

Typically, a change in the smoothing length should mean the neighbors are re-computed which in PySPH means the NNPS must be updated. This equation should therefore be placed as the last equation so that after the final corrector stage, the smoothing lengths are updated and the new NNPS data structure is computed.

Note however that since this is to be used with incompressible flow equations, the density variations are small and hence the smoothing lengths should also not vary too much.

loop(d_idx, d_rho, d_h, d_m)[source]
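Following the ordering note above, a sketch of placing this equation in the last group of an equation list (Group is from pysph.sph.equation; the array name is hypothetical):

from pysph.sph.equation import Group
from pysph.sph.wc.basic import UpdateSmoothingLengthFerrari

equations = [
    # ... groups with the EOS, continuity and momentum equations ...
    Group(equations=[
        UpdateSmoothingLengthFerrari(dest='fluid', dim=2, hdx=1.0,
                                     sources=None),
    ]),
]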

Viscosity functions

class pysph.sph.wc.viscosity.ClearyArtificialViscosity(dest, sources, alpha=1.0, dim=2)[source]

Bases: pysph.sph.equation.Equation

Artificial viscosity proposed by P. Cleary:

\[\Pi_{ab} = -\frac{16 \mu_a \mu_b}{\rho_a \rho_b (\mu_a + \mu_b)} \left(\frac{\boldsymbol{v}_{ab} \cdot \boldsymbol{r}_{ab}}{\boldsymbol{r}_{ab}^2 + \epsilon}\right),\]

where the viscosity is determined from the parameter \(\alpha\) as

\[\mu_a = \frac{1}{8}\alpha h_a c_a \rho_a\]

This equation is described in the 2005 review paper by Monaghan

  • J. J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_m, s_m, d_rho, s_rho, d_h, s_h, d_cs, s_cs, d_au, d_av, d_aw, XIJ, VIJ, R2IJ, EPS, DWIJ)[source]
class pysph.sph.wc.viscosity.LaminarViscosity(dest, sources=None, nu=1e-06, eta=0.01)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, s_idx, s_m, d_rho, s_rho, d_au, d_av, d_aw, DWIJ, XIJ, VIJ, R2IJ, HIJ)[source]
class pysph.sph.wc.viscosity.MonaghanSignalViscosityFluids(dest, sources=None, alpha=0.5, h=None)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, s_idx, d_rho, s_rho, s_m, d_au, d_av, d_aw, d_cs, s_cs, RIJ, HIJ, VIJ, XIJ, DWIJ)[source]
Transport Velocity Formulation

References

[Adami2012](1, 2) S. Adami et. al “A generalized wall boundary condition for smoothed particle hydrodynamics”, Journal of Computational Physics (2012), pp. 7057–7075.
[Adami2013]S. Adami et. al “A transport-velocity formulation for smoothed particle hydrodynamics”, Journal of Computational Physics (2013), pp. 292–307.
class pysph.sph.wc.transport_velocity.ContinuityEquation(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Conservation of mass equation

Eq (6) in [Adami2012]:

\[\frac{d\rho_a}{dt} = \rho_a \sum_b \frac{m_b}{\rho_b} \boldsymbol{v}_{ab} \cdot \nabla_a W_{ab}\]
Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_arho)[source]
loop(d_idx, s_idx, d_arho, s_m, d_rho, s_rho, VIJ, DWIJ)[source]
class pysph.sph.wc.transport_velocity.MomentumEquationArtificialStress(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Artificial stress contribution to the Momentum Equation

\[\frac{d\boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \frac{1}{2}(\boldsymbol{A}_a + \boldsymbol{A}_b) : \nabla_a W_{ab}\right]\]

where the artificial stress terms are given by:

\[ \boldsymbol{A} = \rho \boldsymbol{v} (\tilde{\boldsymbol{v}} - \boldsymbol{v})\]
Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_rho, d_u, d_v, d_V, d_uhat, d_vhat, d_au, d_av, d_aw, d_m, s_rho, s_u, s_v, s_V, s_uhat, s_vhat, DWIJ)[source]
class pysph.sph.wc.transport_velocity.MomentumEquationArtificialViscosity(dest, sources=None, alpha=0.1, c0=1.0)[source]

Bases: pysph.sph.equation.Equation

Artificial viscosity for the momentum equation

Eq. (11) in [Adami2012]:

\[\frac{d \boldsymbol{v}_a}{dt} = -\sum_b m_b \alpha h_{ab} c_{ab} \frac{\boldsymbol{v}_{ab}\cdot \boldsymbol{r}_{ab}}{\rho_{ab}\left(|r_{ab}|^2 + \epsilon \right)}\nabla_a W_{ab}\]

where

\[\begin{split}\rho_{ab} = \frac{\rho_a + \rho_b}{2}\\\end{split}\]\[\begin{split}c_{ab} = \frac{c_a + c_b}{2}\\\end{split}\]\[h_{ab} = \frac{h_a + h_b}{2}\]
Parameters:
  • alpha (float) – constant
  • c0 (float) – speed of sound
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, s_m, d_au, d_av, d_aw, RHOIJ1, R2IJ, EPS, DWIJ, VIJ, XIJ, HIJ)[source]
class pysph.sph.wc.transport_velocity.MomentumEquationPressureGradient(dest, sources=None, pb=0.0, gx=0.0, gy=0.0, gz=0.0, tdamp=0.0)[source]

Bases: pysph.sph.equation.Equation

Momentum equation for the Transport Velocity Formulation: Pressure

Eq. (8) in [Adami2013]:

\[\frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[-\bar{p}_{ab}\nabla_a W_{ab} \right]\]

where

\[\bar{p}_{ab} = \frac{\rho_b p_a + \rho_a p_b}{\rho_a + \rho_b}\]
Parameters:
  • pb (float) – background pressure
  • gx (float) – Body force per unit mass along the x-axis
  • gy (float) – Body force per unit mass along the y-axis
  • gz (float) – Body force per unit mass along the z-axis
  • tdamp (float) – damping time

Notes

This equation should have the destination as fluid and sources as fluid and boundary particles.

This function also computes the contribution to the background pressure and accelerations due to a body force or gravity.

The body forces are damped according to Eq. (13) in [Adami2012] to avoid instantaneous accelerations. By default, damping is neglected.

initialize(d_idx, d_au, d_av, d_aw, d_auhat, d_avhat, d_awhat)[source]
loop(d_idx, s_idx, d_m, d_rho, s_rho, d_au, d_av, d_aw, d_p, s_p, d_auhat, d_avhat, d_awhat, d_V, s_V, DWIJ)[source]
post_loop(d_idx, d_au, d_av, d_aw, t=0.0)[source]
class pysph.sph.wc.transport_velocity.MomentumEquationViscosity(dest, sources=None, nu=0.01)[source]

Bases: pysph.sph.equation.Equation

Momentum equation for the Transport Velocity Formulation: Viscosity

Eq. (8) in [Adami2013]:

\[\frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \bar{\eta}_{ab}\hat{r}_{ab}\cdot \nabla_a W_{ab} \frac{\boldsymbol{v}_{ab}}{|\boldsymbol{r}_{ab}|}\right]\]

where

\[\bar{\eta}_{ab} = \frac{2\eta_a \eta_b}{\eta_a + \eta_b}\]
Parameters: nu (float) – kinematic viscosity
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_rho, s_rho, d_m, d_V, s_V, d_au, d_av, d_aw, R2IJ, EPS, DWIJ, VIJ, XIJ)[source]
class pysph.sph.wc.transport_velocity.SetWallVelocity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Extrapolating the fluid velocity on to the wall

Eq. (22) in [Adami2012]:

\[\tilde{\boldsymbol{v}}_a = \frac{\sum_b\boldsymbol{v}_b W_{ab}} {\sum_b W_{ab}}\]

Notes

The destination particle array for this equation should define the filtered velocity variables \(uf, vf, wf\).

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_uf, d_vf, d_wf, d_wij)[source]
loop(d_idx, s_idx, d_uf, d_vf, d_wf, s_u, s_v, s_w, d_wij, WIJ)[source]
post_loop(d_uf, d_vf, d_wf, d_wij, d_idx, d_ug, d_vg, d_wg, d_u, d_v, d_w)[source]
class pysph.sph.wc.transport_velocity.SolidWallNoSlipBC(dest, sources=None, nu=0.01)[source]

Bases: pysph.sph.equation.Equation

Solid wall boundary condition [Adami2012]

This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.

The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.

No-penetration:

Ghost particles participate in the continuity and state equations with fluid particles. This means as fluid particles approach the wall, the pressure of the ghost particles increases to generate a repulsion force that prevents particle penetration.

No-slip:

Extrapolation is used to set the dummy velocity of the ghost particles for viscous interaction. First, the smoothed velocity field of the fluid phase is extrapolated to the wall particles:

\[\tilde{v}_a = \frac{\sum_b v_b W_{ab}}{\sum_b W_{ab}}\]

In the second step, for the viscous interaction in Eqs. (10) in [Adami2012] and Eq. (8) in [Adami2013], the velocity of the ghost particles is assigned as:

\[v_b = 2v_w -\tilde{v}_a,\]

where \(v_w\) is the prescribed wall velocity and \(v_b\) is the velocity assigned to the ghost particle in the interaction.

Parameters: nu (float) – kinematic viscosity

Notes

For this equation the destination particle array should be the fluid and the source should be ghost or boundary particles. The boundary particles must define a prescribed velocity \(u_0, v_0, w_0\)

initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, s_idx, d_m, d_rho, s_rho, d_V, s_V, d_u, d_v, d_w, s_uf, s_vf, s_wf, s_u, s_v, s_w, d_au, d_av, d_aw, s_ug, s_vg, s_wg, DWIJ, R2IJ, EPS, XIJ)[source]
class pysph.sph.wc.transport_velocity.SolidWallPressureBC(dest, sources=None, rho0=1.0, p0=100.0, gx=0.0, gy=0.0, gz=0.0, b=1.0)[source]

Bases: pysph.sph.equation.Equation

Solid wall pressure boundary condition [Adami2012]

This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.

The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.

Pressure boundary condition:

The pressure of the ghost particle is also calculated from the fluid particle by interpolation using:

\[p_g = \frac{\sum_f p_f W_{gf} + \boldsymbol{g - a_g} \cdot \sum_f \rho_f \boldsymbol{r}_{gf}W_{gf}}{\sum_f W_{gf}},\]

where the subscripts g and f relate to the ghost and fluid particles respectively.

Density of the wall particle is then set using this pressure

\[\rho_w=\rho_0\left(\frac{p_w - \mathcal{X}}{p_0} + 1\right)^{\frac{1}{\gamma}}\]
Parameters:
  • rho0 (float) – reference density
  • p0 (float) – reference pressure
  • b (float) – constant
  • gx (float) – Body force per unit mass along the x-axis
  • gy (float) – Body force per unit mass along the y-axis
  • gz (float) – Body force per unit mass along the z-axis

Notes

For a two fluid system (boundary, fluid), this equation must be instantiated with boundary as the destination and fluid as the source.

The boundary particle array must additionally define a property \(wij\) for the denominator in Eq. (27) from [Adami2012]. This array sums the kernel terms from the ghost particle to the fluid particle.

initialize(d_idx, d_p, d_wij)[source]
loop(d_idx, s_idx, d_p, s_p, d_wij, s_rho, d_ax, d_ay, d_az, WIJ, XIJ)[source]
post_loop(d_idx, d_wij, d_p, d_rho)[source]
class pysph.sph.wc.transport_velocity.StateEquation(dest, sources=None, p0=1.0, rho0=1.0, b=1.0)[source]

Bases: pysph.sph.equation.Equation

Generalized Weakly Compressible Equation of State

\[p_a = p_0\left[ \left(\frac{\rho}{\rho_0}\right)^\gamma - b \right] + \mathcal{X}\]

Notes

This is the generalized Tait’s equation of state and the suggested values in [Adami2013] are \(\mathcal{X} = 0\), \(\gamma=1\) and \(b = 1\).

The reference pressure \(p_0\) is calculated from the artificial sound speed and reference density:

\[p_0 = \frac{c^2\rho_0}{\gamma}\]
Parameters:
  • p0 (float) – reference pressure
  • rho0 (float) – reference density
  • b (float) – constant
loop(d_idx, d_p, d_rho)[source]
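As a quick numerical sketch of the note above (values hypothetical; c0 is taken as roughly ten times the expected maximum velocity, as recommended for TaitEOS earlier):

rho0, gamma = 1.0, 1.0
vmax = 1.0
c0 = 10.0*vmax
p0 = c0*c0*rho0/gamma   # reference pressure from the artificial sound speed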
class pysph.sph.wc.transport_velocity.SummationDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Summation density with volume summation

In addition to the standard summation density, the number density for the particle is also computed. The number density is important for multi-phase flows to define a local particle volume independent of the material density.

\[\begin{split}\rho_a = \sum_b m_b W_{ab}\\\end{split}\]\[\mathcal{V}_a = \frac{1}{\sum_b W_{ab}}\]

Notes

For this equation, the destination particle array must define the variable V for particle volume.

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_V, d_rho)[source]
loop(d_idx, d_V, d_rho, d_m, WIJ)[source]
class pysph.sph.wc.transport_velocity.VolumeFromMassDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Set the inverse volume using mass density

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
loop(d_idx, d_V, d_rho, d_m)[source]
class pysph.sph.wc.transport_velocity.VolumeSummation(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Number density for volume computation

See SummationDensity

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_V)[source]
loop(d_idx, d_V, WIJ)[source]
SPH Boundary Equations
class pysph.sph.boundary_equations.MonaghanBoundaryForce(dest, sources=None, deltap=-1)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, s_idx, s_m, s_rho, d_m, d_cs, s_cs, d_h, s_tx, s_ty, s_tz, s_nx, s_ny, s_nz, d_au, d_av, d_aw, XIJ)[source]
class pysph.sph.boundary_equations.MonaghanKajtarBoundaryForce(dest, sources=None, K=None, beta=None, h=None)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, s_idx, d_m, s_m, d_au, d_av, d_aw, RIJ, R2IJ, XIJ)[source]
pysph.sph.boundary_equations.wendland_quintic(rij=1.0, h=1.0)[source]
Basic Equations for Solid Mechanics

References

[Gray2001]J. P. Gray et al., “SPH elastic dynamics”, Computer Methods in Applied Mechanics and Engineering, 190 (2001), pp 6641 - 6662.
class pysph.sph.solid_mech.basic.EnergyEquationWithStress2D(dest, sources=None, alpha=1.0, beta=1.0, eta=0.01)[source]

Bases: pysph.sph.equation.Equation

initialize(d_idx, d_ae)[source]
loop(d_idx, s_idx, s_m, d_rho, s_rho, d_p, s_p, d_cs, s_cs, d_ae, XIJ, VIJ, DWIJ, HIJ, R2IJ, RHOIJ1)[source]
post_loop(d_idx, d_rho, d_s00, d_s01, d_s11, s_s00, s_s01, s_s11, d_v00, d_v01, d_v10, d_v11, d_ae)[source]
class pysph.sph.solid_mech.basic.HookesDeviatoricStressRate2D(dest, sources=None, shear_mod=1.0)[source]

Bases: pysph.sph.equation.Equation

Rate of change of stress (2D)

\[\frac{dS^{ij}}{dt} = 2\mu\left(\epsilon^{ij} - \frac{1}{3}\delta^{ij} \epsilon^{ij}\right) + S^{ik}\Omega^{jk} + \Omega^{ik}S^{kj}\]

where

\[\begin{split}\epsilon^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} + \frac{\partial v^j}{\partial x^i}\right)\\\end{split}\]\[\Omega^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} - \frac{\partial v^j}{\partial x^i} \right)\]
Parameters: shear_mod (float) – shear modulus (\(\mu\))
initialize(d_idx, d_as00, d_as01, d_as11)[source]
loop(d_idx, d_s00, d_s01, d_s11, d_v00, d_v01, d_v10, d_v11, d_as00, d_as01, d_as11)[source]
class pysph.sph.solid_mech.basic.MomentumEquationWithStress2D(dest, sources=None, wdeltap=-1, n=1)[source]

Bases: pysph.sph.equation.Equation

Momentum Equation with Artificial Stress

\[\frac{D\vec{v_a}^i}{Dt} = \sum_b m_b\left(\frac{\sigma_a^{ij}}{\rho_a^2} +\frac{\sigma_b^{ij}}{\rho_b^2} + R_{ab}^{ij}f^n \right)\nabla_a W_{ab}\]

where

\[\begin{split}f_{ab} = \frac{W(r_{ab})}{W(\Delta p)}\\\end{split}\]\[R_{ab}^{ij} = R_{a}^{ij} + R_{b}^{ij}\]
Parameters:
  • wdeltap (float) – evaluated value of \(W(\Delta p)\)
  • n (float) – constant
  • with_correction (bool) – switch for using tensile instability correction
initialize(d_idx, d_au, d_av)[source]
loop(d_idx, s_idx, d_rho, s_rho, s_m, d_p, s_p, d_s00, d_s01, d_s11, s_s00, s_s01, s_s11, d_r00, d_r01, d_r11, s_r00, s_r01, s_r11, d_au, d_av, WIJ, DWIJ)[source]
class pysph.sph.solid_mech.basic.MonaghanArtificialStress(dest, sources=None, eps=0.3)[source]

Bases: pysph.sph.equation.Equation

Artificial stress to remove tensile instability

The dispersion relations in [Gray2001] are used to determine the different components of \(R\).

Angle of rotation for particle \(a\)

\[\tan{2 \theta_a} = \frac{2\sigma_a^{xy}}{\sigma_a^{xx} - \sigma_a^{yy}}\]

In rotated frame, the new components of the stress tensor are

\[\begin{split}\bar{\sigma}_a^{xx} = \cos^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \sin^2{\theta_a}\sigma_a^{yy}\\\end{split}\]\[\bar{\sigma}_a^{yy} = \sin^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \cos^2{\theta_a}\sigma_a^{yy}\]

Components of \(R\) in rotated frame:

\[\begin{split}\bar{R}_{a}^{xx}=\begin{cases}-\epsilon\frac{\bar{\sigma}_{a}^{xx}} {\rho^{2}} & \bar{\sigma}_{a}^{xx}>0\\0 & \bar{\sigma}_{a}^{xx}\leq0 \end{cases}\\\end{split}\]\[\begin{split}\bar{R}_{a}^{yy}=\begin{cases}-\epsilon\frac{\bar{\sigma}_{a}^{yy}} {\rho^{2}} & \bar{\sigma}_{a}^{yy}>0\\0 & \bar{\sigma}_{a}^{yy}\leq0 \end{cases}\end{split}\]

Components of \(R\) in original frame:

\[\begin{split}R_a^{xx} = \cos^2{\theta_a} \bar{R}_a^{xx} + \sin^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\]\[\begin{split}R_a^{yy} = \sin^2{\theta_a} \bar{R}_a^{xx} + \cos^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\]\[R_a^{xy} = \sin{\theta_a} \cos{\theta_a}\left(\bar{R}_a^{xx} - \bar{R}_a^{yy}\right)\]
Parameters: eps (float) – constant
loop(d_idx, d_rho, d_p, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_r00, d_r01, d_r02, d_r11, d_r12, d_r22)[source]

Compute the stress terms

Parameters:
  • d_sxx (DoubleArray) – Stress Tensor Deviatoric components (Symmetric)
  • d_rxx (DoubleArray) – Artificial stress components (Symmetric)
Equations for the High Velocity Impact Problems
class pysph.sph.solid_mech.hvi.MieGruneisenEOS(dest, sources=None, gamma=1.4, r0=-1, c0=-1, S=-1)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_p, d_rho, d_e)[source]
class pysph.sph.solid_mech.hvi.VonMisesPlasticity2D(dest, sources=None, flow_stress=-1)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_s00, d_s01, d_s11)[source]
Gas Dynamics

Basic equations for Gas-dynamics

class pysph.sph.gas_dynamics.basic.IdealGasEOS(dest, sources=None, gamma=1.4)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_p, d_rho, d_e, d_cs)[source]
class pysph.sph.gas_dynamics.basic.MPMAccelerations(dest, sources, beta=2.0, update_alapha1=False, update_alapha2=False, alpha1_min=0.1, alpha2_min=0.1, sigma=0.1)[source]

Bases: pysph.sph.equation.Equation

initialize(d_idx, d_au, d_av, d_aw, d_ae, d_am, d_aalpha1, d_aalpha2, d_del2e)[source]
loop(d_idx, s_idx, d_m, s_m, d_p, s_p, d_cs, s_cs, d_e, s_e, d_rho, s_rho, d_au, d_av, d_aw, d_ae, d_omega, s_omega, XIJ, VIJ, DWI, DWJ, DWIJ, HIJ, d_del2e, d_alpha1, s_alpha1, d_alpha2, s_alpha2, EPS, RIJ, R2IJ, RHOIJ, DT_ADAPT)[source]
post_loop(d_idx, d_h, d_cs, d_alpha1, d_aalpha1, d_div, d_del2e, d_e, d_alpha2, d_aalpha2)[source]
class pysph.sph.gas_dynamics.basic.Monaghan92Accelerations(dest, sources, alpha=1.0, beta=2.0)[source]

Bases: pysph.sph.equation.Equation

initialize(d_idx, d_au, d_av, d_aw, d_ae)[source]
loop(d_idx, s_idx, d_rho, s_rho, d_p, s_p, d_cs, s_cs, d_au, d_av, d_aw, d_ae, s_m, VIJ, DWIJ, XIJ, EPS, HIJ, R2IJ, RHOIJ1)[source]
class pysph.sph.gas_dynamics.basic.ScaleSmoothingLength(dest, sources=None, factor=2.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_h)[source]
class pysph.sph.gas_dynamics.basic.SummationDensity(dest, sources=None, dim=2, density_iterations=False, iterate_only_once=False, k=1.2, htol=1e-06)[source]

Bases: pysph.sph.equation.Equation

Summation density with iterative solution of the smoothing lengths.

Parameters:
  • density_iterations (bint) – Flag to indicate density iterations are required.
  • iterate_only_once (bint) – Flag to indicate if only one iteration is required.
  • k (double) – Kernel scaling factor.
  • htol (double) – Iteration tolerance.
converged()[source]
initialize(d_idx, d_rho, d_div, d_grhox, d_grhoy, d_arho, d_dwdh)[source]
loop(d_idx, s_idx, d_rho, d_grhox, d_grhoy, d_arho, d_dwdh, s_m, d_converged, VIJ, WI, DWI, GHI)[source]
post_loop(d_idx, d_arho, d_rho, d_div, d_omega, d_dwdh, d_h0, d_h, d_m, d_ah, d_converged)[source]
class pysph.sph.gas_dynamics.basic.UpdateSmoothingLengthFromVolume(dest, sources=None, k=1.2, dim=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_m, d_rho, d_h)[source]
Surface tension

Implementation of the equations used for surface tension modelling, for example in KHI simulations. The references are as under:

  • M. Shadloo, M. Yildiz, “Numerical modelling of Kelvin-Helmholtz instability using smoothed particle hydrodynamics”, IJNME, 2011, 87, pp 988–1006 [SY11]
  • Joseph P. Morris “Simulating surface tension with smoothed particle hydrodynamics”, JCP, 2000, 33, pp 333–353 [JM00]
  • Adami et al. “A new surface-tension formulation for multi-phase SPH using a reproducing divergence approximation”, JCP 2010, 229, pp 5011–5021 [A10]
class pysph.sph.surface_tension.AdamiColorGradient(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Gradient of color Eq. (14) in [A10]

\[\nabla c_a = \frac{1}{V_a}\sum_b \left[V_a^2 + V_b^2 \right]\tilde{c}_{ab}\nabla_a W_{ab}\,,\]

where, the average \(\tilde{c}_{ab}\) is defined as

\[\tilde{c}_{ab} = \frac{\rho_b}{\rho_a + \rho_b}c_a + \frac{\rho_a}{\rho_a + \rho_b}c_b\]
Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_ddelta, d_N)[source]
loop(d_idx, s_idx, d_V, s_V, d_rho, s_rho, d_cx, d_cy, d_cz, d_color, s_color, DWIJ)[source]
post_loop(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_ddelta, d_N)[source]
class pysph.sph.surface_tension.AdamiReproducingDivergence(dest, sources=None, dim=2)[source]

Bases: pysph.sph.equation.Equation

Reproducing divergence approximation Eq. (20) in [A10] to compute the curvature

\[\nabla \cdot \boldsymbol{\phi}_a = d\frac{\sum_b \boldsymbol{\phi}_{ab}\cdot \nabla_a W_{ab}V_b}{\sum_b\boldsymbol{x}_{ab}\cdot \nabla_a W_{ab} V_b}\]
initialize(d_idx, d_kappa, d_wij_sum)[source]
loop(d_idx, s_idx, d_kappa, d_wij_sum, d_nx, d_ny, d_nz, s_nx, s_ny, s_nz, d_V, s_V, DWIJ, XIJ, RIJ, EPS)[source]
post_loop(d_idx, d_kappa, d_wij_sum)[source]
class pysph.sph.surface_tension.CSFSurfaceTensionForce(dest, sources=None, sigma=0.1)[source]

Bases: pysph.sph.equation.Equation

Acceleration due to surface tension force Eq. (25) in [JM00]:

Note that as per Eq. (17) in [JM00], the un-normalized normal is basically the gradient of the color function. The acceleration term therefore depends on the gradient of the color field.

initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, d_au, d_av, d_aw, d_kappa, d_cx, d_cy, d_cz, d_rho)[source]
class pysph.sph.surface_tension.ColorGradientUsingNumberDensity(dest, sources=None, epsilon=1e-06)[source]

Bases: pysph.sph.equation.Equation

Gradient of the color function using Eq. (13) of [SY11]:

\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]

Using the gradient of the color function, the normal and the discretized Dirac delta are calculated in the post loop.

Singularities are avoided, as per the recommendation of [JM00] (see Eqs. 20 & 21), using the parameter \(\epsilon\)

initialize(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_ddelta, d_N)[source]
loop(d_idx, s_idx, d_scolor, s_scolor, d_cx, d_cy, d_cz, d_V, s_V, DWIJ)[source]
post_loop(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N, d_ddelta)[source]
class pysph.sph.surface_tension.InterfaceCurvatureFromNumberDensity(dest, sources=None, with_morris_correction=True)[source]

Bases: pysph.sph.equation.Equation

Interface curvature using number density. Eq. (15) in [SY11]:

\[\kappa_a = \sum_b \frac{2.0}{\psi_a + \psi_b} \left(\boldsymbol{n_a} - \boldsymbol{n_b}\right) \cdot \nabla_a W_{ab}\]
initialize(d_idx, d_kappa, d_wij_sum)[source]
loop(d_idx, s_idx, d_kappa, d_nx, d_ny, d_nz, s_nx, s_ny, s_nz, d_V, s_V, d_N, s_N, d_wij_sum, s_rho, s_m, WIJ, DWIJ)[source]
post_loop(d_idx, d_wij_sum, d_nx, d_kappa)[source]
class pysph.sph.surface_tension.MorrisColorGradient(dest, sources=None, epsilon=1e-06)[source]

Bases: pysph.sph.equation.Equation

Gradient of the color function using Eq. (17) of [JM00]:

\[\nabla c_a = \sum_b \frac{m_b}{\rho_b}(c_b - c_a) \nabla_{a} W_{ab}\,,\]

where a smoothed representation of the color is used in the equation. Using the gradient of the color function, the normal and the discretized Dirac delta are calculated in the post loop.

Singularities are avoided, as per the recommendation of [JM00] (see Eqs. 20 & 21), using the parameter \(\epsilon\)

initialize(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_ddelta, d_N)[source]
loop(d_idx, s_idx, d_scolor, s_scolor, d_cx, d_cy, d_cz, s_m, s_rho, DWIJ)[source]
post_loop(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N, d_ddelta)[source]
class pysph.sph.surface_tension.SY11ColorGradient(dest, sources=None, epsilon=1e-06)[source]

Bases: pysph.sph.equation.Equation

Gradient of the color function using Eq. (13) of [SY11]:

\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]

Using the gradient of the color function, the normal and the discretized Dirac delta are calculated in the post loop.

initialize(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_ddelta, d_N)[source]
loop(d_idx, s_idx, d_color, s_color, d_cx, d_cy, d_cz, d_V, s_V, DWIJ)[source]
post_loop(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N, d_ddelta)[source]
class pysph.sph.surface_tension.SY11DiracDelta(dest, sources=None, epsilon=1e-06)[source]

Bases: pysph.sph.equation.Equation

Discretized Dirac delta for the SY11 formulation, Eq. (14) in [SY11]

This is essentially the same as computing the color gradient, the only difference being that this might be called with a reduced smoothing length.

Note that the normals should be computed using the SY11ColorGradient equation. This function will effectively overwrite the color gradient.

initialize(d_idx, d_cx, d_cy, d_cz, d_ddelta)[source]
loop(d_idx, s_idx, d_color, s_color, d_cx, d_cy, d_cz, d_V, s_V, DWIJ)[source]
post_loop(d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N, d_ddelta)[source]
class pysph.sph.surface_tension.ShadlooYildizSurfaceTensionForce(dest, sources=None, sigma=0.1)[source]

Bases: pysph.sph.equation.Equation

Acceleration due to surface tension force Eq. (7,9) in [SY11]:

where, \(\delta^s\) is the discretized dirac delta function, \(\boldsymbol{n}\) is the interface normal, \(\kappa\) is the discretized interface curvature and \(\sigma\) is the surface tension force constant.

initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, d_au, d_av, d_aw, d_kappa, d_nx, d_ny, d_nz, d_m, d_rho, d_ddelta)[source]
class pysph.sph.surface_tension.SmoothedColor(dest, sources=None, smooth=False)[source]

Bases: pysph.sph.equation.Equation

Smoothed color function. Eq. (17) in [JM00]

\[c_a = \sum_b \frac{m_b}{\rho_b} c_b^i W_{ab}\,,\]

where, \(c_b^i\) is the color index associated with a particle.

initialize(d_idx, d_scolor)[source]
loop(d_idx, s_idx, d_rho, s_rho, s_m, s_color, d_scolor, WIJ)[source]
post_loop(d_idx, d_color, d_scolor)[source]
Implicit Incompressible SPH

The basic equations for the IISPH formulation of

M. Ihmsen, J. Cornelis, B. Solenthaler, C. Horvath, M. Teschner, “Implicit Incompressible SPH,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 426-435, March 2014. http://dx.doi.org/10.1109/TVCG.2013.105

class pysph.sph.iisph.AdvectionAcceleration(dest, sources=None, gx=0.0, gy=0.0, gz=0.0)[source]

Bases: pysph.sph.equation.Equation

initialize(d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv)[source]
loop()[source]
post_loop(d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv, d_u, d_v, d_w, dt=0.0)[source]
class pysph.sph.iisph.ComputeAII(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_aii)[source]
loop(d_idx, d_aii, d_dii0, d_dii1, d_dii2, d_m, d_rho, s_idx, s_m, s_rho, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeAIIBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

This is important and not really discussed in the original IISPH paper.

loop(d_idx, d_aii, d_dii0, d_dii1, d_dii2, d_rho, s_idx, s_m, s_V, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeDII(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_dii0, d_dii1, d_dii2)[source]
loop(d_idx, d_rho, d_dii0, d_dii1, d_dii2, s_idx, s_m, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeDIIBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_dii0, d_dii1, d_dii2, d_rho, s_idx, s_m, s_V, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeDIJPJ(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_dijpj0, d_dijpj1, d_dijpj2)[source]
loop(d_idx, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_m, s_rho, s_piter, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeRhoAdvection(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_rho_adv, d_rho, d_p0, d_p, d_piter, d_aii)[source]
loop(d_idx, d_rho, d_rho_adv, d_uadv, d_vadv, d_wadv, d_u, d_v, d_w, s_idx, s_m, s_uadv, s_vadv, s_wadv, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.ComputeRhoBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_rho, d_rho_adv, d_uadv, d_vadv, d_wadv, s_idx, s_u, s_v, s_w, s_V, WIJ, DWIJ, dt=0.0)[source]
class pysph.sph.iisph.IISPHStep[source]

Bases: pysph.sph.integrator_step.IntegratorStep

A straightforward and simple integrator to be used for IISPH.

initialize()[source]
stage1(d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_uadv, d_vadv, d_wadv, d_au, d_av, d_aw, d_ax, d_ay, d_az, dt)[source]
class pysph.sph.iisph.NormalizedSummationDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_rho, d_rho_adv, d_rho0, d_V)[source]
loop(d_idx, d_rho, d_rho_adv, d_V, s_idx, s_m, s_rho0, WIJ)[source]
post_loop(d_idx, d_rho, d_rho_adv)[source]
class pysph.sph.iisph.NumberDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_V)[source]
loop(d_idx, d_V, WIJ)[source]
class pysph.sph.iisph.PressureForce(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_au, d_av, d_aw)[source]
loop(d_idx, d_rho, d_p, d_au, d_av, d_aw, s_idx, s_m, s_rho, s_p, DWIJ)[source]
post_loop(d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv, DT_ADAPT)[source]
class pysph.sph.iisph.PressureForceBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, DWIJ)[source]
class pysph.sph.iisph.PressureSolve(dest, sources=None, rho0=1.0, omega=0.5, tolerance=0.01, debug=False)[source]

Bases: pysph.sph.equation.Equation

converged()[source]
initialize(d_idx, d_p, d_compression)[source]
loop(d_idx, d_p, d_piter, d_rho, d_m, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_m, s_dii0, s_dii1, s_dii2, s_piter, s_dijpj0, s_dijpj1, s_dijpj2, DWIJ, dt=0.0)[source]
post_loop(d_idx, d_piter, d_p0, d_p, d_aii, d_rho_adv, d_rho, d_compression, dt=0.0)[source]
reduce(dst)[source]
class pysph.sph.iisph.PressureSolveBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_p, d_rho, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_V, DWIJ)[source]
class pysph.sph.iisph.SummationDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_rho)[source]
loop(d_idx, d_rho, s_idx, s_m, WIJ)[source]
class pysph.sph.iisph.SummationDensityBoundary(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_rho, s_idx, s_V, WIJ)[source]
class pysph.sph.iisph.ViscosityAcceleration(dest, sources=None, nu=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_au, d_av, d_aw, s_idx, s_m, EPS, VIJ, XIJ, RHOIJ1, R2IJ, DWIJ)[source]
class pysph.sph.iisph.ViscosityAccelerationBoundary(dest, sources=None, rho0=1.0, nu=1.0)[source]

Bases: pysph.sph.equation.Equation

The acceleration on the fluid due to a boundary.

loop(d_idx, d_au, d_av, d_aw, d_rho, s_idx, s_V, EPS, VIJ, XIJ, R2IJ, DWIJ)[source]
Rigid body motion

Rigid body related equations.

class pysph.sph.rigid_body.BodyForce(dest, sources=None, gx=0.0, gy=0.0, gz=0.0)[source]

Bases: pysph.sph.equation.Equation

initialize(d_idx, d_m, d_fx, d_fy, d_fz)[source]
class pysph.sph.rigid_body.EulerStepRigidBody[source]

Bases: pysph.sph.integrator_step.IntegratorStep

Fast but inaccurate integrator. Use this for testing.

initialize()[source]
stage1(d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_omega, d_omega_dot, d_vc, d_ac, dt=0.0)[source]
class pysph.sph.rigid_body.NumberDensity(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_V)[source]
loop(d_idx, d_V, WIJ)[source]
class pysph.sph.rigid_body.PressureRigidBody(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

The pressure acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558

Use this with the fluid as a destination and body as source.

loop(d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, s_fx, s_fy, s_fz, DWIJ)[source]
class pysph.sph.rigid_body.RK2StepRigidBody[source]

Bases: pysph.sph.integrator_step.IntegratorStep

initialize(d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega0, d_vc, d_vc0)[source]
stage1(d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega_dot, d_vc, d_ac, d_omega0, d_vc0, dt=0.0)[source]
stage2(d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega_dot, d_vc, d_ac, d_omega0, d_vc0, dt=0.0)[source]
class pysph.sph.rigid_body.RigidBodyCollision(dest, sources=None, k=1.0, d=1.0, eta=1.0, kt=1.0)[source]

Bases: pysph.sph.equation.Equation

This is inspired by the rigid-body solver described in GPU Gems 3, http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html, and by B. K. Mishra's review article on DEM, "A review of computer simulation of tumbling mills by the discrete element method: Part I - contact mechanics", http://dx.doi.org/10.1016/S0301-7516(03)00032-2.

Note that d is a factor multiplied with the "h" of the particle.

loop(d_idx, d_fx, d_fy, d_fz, d_h, d_total_mass, XIJ, RIJ, R2IJ, VIJ)[source]
class pysph.sph.rigid_body.RigidBodyMoments(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
reduce(dst)[source]
class pysph.sph.rigid_body.RigidBodyMotion(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_cm, d_vc, d_ac, d_omega)[source]
class pysph.sph.rigid_body.SummationDensityRigidBody(dest, sources=None, rho0=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_rho, s_idx, s_V, WIJ)[source]
class pysph.sph.rigid_body.ViscosityRigidBody(dest, sources=None, rho0=1.0, nu=1.0)[source]

Bases: pysph.sph.equation.Equation

The viscous acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558

Use this with the fluid as a destination and body as source.

loop(d_idx, d_m, d_au, d_av, d_aw, d_rho, s_idx, s_V, s_fx, s_fy, s_fz, EPS, VIJ, XIJ, R2IJ, DWIJ)[source]
pysph.sph.rigid_body.get_alpha_dot()[source]

Use sympy to perform most of the math and use the resulting formulae to calculate:

\[I^{-1}\left(\tau - \omega \times (I\omega)\right)\]
pysph.sph.rigid_body.get_torque()[source]

Use sympy to perform some simple math: \(R \times F\), \(C_m \times F\), \(\omega \times r\).

pysph.sph.rigid_body.skew(vec)[source]
Miscellaneous
Functions for advection
class pysph.sph.misc.advection.Advect(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
loop(d_idx, d_ax, d_ay, d_u, d_v)[source]
class pysph.sph.misc.advection.MixingVelocityUpdate(dest, sources=None, T=1.0)[source]

Bases: pysph.sph.equation.Equation

loop(d_idx, d_u, d_v, d_u0, d_v0, t=0.1)[source]

Functions to reduce array data in serial or parallel.

pysph.base.reduce_array.dummy_reduce_array(array, op='sum')[source]

Simply returns the array for the serial case.

pysph.base.reduce_array.mpi_reduce_array(array, op='sum')[source]

Reduce an array given an array and a suitable reduction operation.

Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.

Parameters

  • array: numpy.ndarray: Any numpy array (1D).
  • op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
pysph.base.reduce_array.parallel_reduce_array(array, op='sum')

Reduce an array given an array and a suitable reduction operation.

Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.

Parameters

  • array: numpy.ndarray: Any numpy array (1D).
  • op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
pysph.base.reduce_array.serial_reduce_array(array, op='sum')[source]

Reduce an array given an array and a suitable reduction operation.

Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.

Parameters

  • array: numpy.ndarray: Any numpy array (1D).
  • op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
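
For example, a minimal sketch of reducing a small 1D array in serial (the return value here is assumed to be the reduced scalar):

>>> import numpy as np
>>> from pysph.base.reduce_array import serial_reduce_array
>>> a = np.array([1.0, 2.0, 3.0])
>>> serial_reduce_array(a, op='sum')
6.0
>>> serial_reduce_array(a, op='max')
3.0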
Group of equations
class pysph.sph.equation.Group(equations, real=True, update_nnps=False, iterate=False, max_iterations=1, min_iterations=0)[source]

Bases: object

A group of equations.

This class provides some support for the code generation for the collection of equations.

Constructor.

Parameters:
  • equations (list) – a list of equation objects.
  • real (bool) – specifies if only non-remote/non-ghost particles should be operated on.
  • update_nnps (bool) – specifies if the neighbors should be re-computed locally after this group is evaluated
  • iterate (bool) – specifies if the group should continue iterating until each equation’s “converged()” method returns with a positive value.
  • max_iterations (int) – specifies the maximum number of times this group should be iterated.
  • min_iterations (int) – specifies the minimum number of times this group should be iterated.

Notes

When running simulations in parallel, one should typically run the summation density over all particles (both local and remote) in each processor, because the pressure/density of the remote neighbors in the current processor must also be updated. Otherwise the results can be incorrect, with the remote particles having an older density. This is also the case for the TaitEOS. In these cases the group that computes the equation should set real to False, as in the sketch below.
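
For example, a minimal sketch using the SummationDensity equation (here assumed to be available from pysph.sph.basic_equations) in a group that also operates on remote particles:

>>> from pysph.sph.equation import Group
>>> from pysph.sph.basic_equations import SummationDensity
>>> equations = [
...     Group(
...         equations=[SummationDensity(dest='fluid', sources=['fluid'])],
...         real=False,  # operate on remote/ghost particles too
...     ),
... ]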

Integrator module

Basic code for the templated integrators.

Currently we only support two-step integrators.

These classes are used to generate the code for the actual integrators from the sph_eval module.

class pysph.sph.integrator.EPECIntegrator(**kw)[source]

Bases: pysph.sph.integrator.Integrator

Predictor-corrector integrators can have two modes of operation.

In the Evaluate-Predict-Evaluate-Correct (EPEC) mode, the system is advanced using:

\[\begin{split}
F(y^n) &\quad\rightarrow\quad \text{Evaluate}\\
y^{n+\frac{1}{2}} &= y^n + \frac{\Delta t}{2} F(y^n) \quad\rightarrow\quad \text{Predict}\\
F(y^{n+\frac{1}{2}}) &\quad\rightarrow\quad \text{Evaluate}\\
y^{n+1} &= y^n + \Delta t\, F(y^{n+\frac{1}{2}}) \quad\rightarrow\quad \text{Correct}
\end{split}\]

Notes:

The Evaluate stage of the integrator forces a function evaluation. Therefore, the PEC mode is much faster, but it relies on old accelerations for the Prediction stage.

In the EPEC mode, the final corrector can be modified to:

\(y^{n+1} = y^n + \frac{\Delta t}{2}\left( F(y^n) + F(y^{n+\frac{1}{2}}) \right)\)

This would require additional storage for the accelerations.

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]
class pysph.sph.integrator.EulerIntegrator(**kw)[source]

Bases: pysph.sph.integrator.Integrator

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]
class pysph.sph.integrator.Integrator(**kw)[source]

Bases: object

Generic class for multi-step integrators in PySPH for a system of ODES of the form \(\frac{dy}{dt} = F(y)\).

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

compute_h_minimum()[source]
compute_time_step(dt, cfl)[source]

If there are any adaptive timestep constraints, the appropriate timestep is returned, else None is returned.

one_timestep(t, dt)[source]

User written function that actually does one timestep.

This function is used in the high-performance Cython implementation. The assumptions one may make are the following:

  • t and dt are passed.

  • the following methods are available:

    • self.initialize()
    • self.stage1(), self.stage2() etc. depending on the number of stages available.
    • self.compute_accelerations(t, dt)
    • self.do_post_stage(stage_dt, stage_count_from_1)

Please see any of the concrete implementations of the Integrator class for examples. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator; a sketch of a custom one_timestep follows.
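
For illustration, a hypothetical two-stage one_timestep written only in terms of the methods listed above (a sketch, not a verbatim copy of any PySPH integrator):

from pysph.sph.integrator import Integrator

class MyPECIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        # Predict the state at the half step.
        self.stage1()
        self.do_post_stage(0.5*dt, 1)
        # Evaluate the accelerations at the half step.
        self.compute_accelerations(t, dt)
        # Correct the state at the full step.
        self.stage2()
        self.do_post_stage(dt, 2)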

set_compiled_object(c_integrator)[source]

Set the high-performance compiled object to call internally.

set_fixed_h(fixed_h)[source]
set_nnps(nnps)[source]
set_parallel_manager(pm)[source]
set_post_stage_callback(callback)[source]

This callback is called when the particles are moved, i.e. one stage of the integration is done.

This callback is passed the current time value, the timestep and the stage.

The current time value is t + stage_dt; for example, this would be 0.5*dt for a two-stage predictor-corrector integrator.
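
For example (save_state here is a hypothetical user-defined callback):

>>> def save_state(t, dt, stage):
...     print t, dt, stage  # current time, timestep and stage
>>> integrator.set_post_stage_callback(save_state)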

step(time, dt)[source]

This function is called by the solver.

To implement the integration step please override the one_timestep method.

class pysph.sph.integrator.LeapFrogIntegrator(**kw)[source]

Bases: pysph.sph.integrator.PECIntegrator

A leap-frog integrator.

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]
class pysph.sph.integrator.PECIntegrator(**kw)[source]

Bases: pysph.sph.integrator.Integrator

In the Predict-Evaluate-Correct (PEC) mode, the system is advanced using:

\[\begin{split}
y^{n+\frac{1}{2}} &= y^n + \frac{\Delta t}{2} F(y^{n-\frac{1}{2}}) \quad\rightarrow\quad \text{Predict}\\
F(y^{n+\frac{1}{2}}) &\quad\rightarrow\quad \text{Evaluate}\\
y^{n+1} &= y^n + \Delta t\, F(y^{n+\frac{1}{2}}) \quad\rightarrow\quad \text{Correct}
\end{split}\]

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]
class pysph.sph.integrator.PEFRLIntegrator(**kw)[source]

Bases: pysph.sph.integrator.Integrator

A Position-Extended Forest-Ruth-Like integrator [Omelyan2002].

References

[Omelyan2002] I.M. Omelyan, I.M. Mryglod and R. Folk, "Optimized Forest-Ruth- and Suzuki-like algorithms for integration of motion in many-body systems", Computer Physics Communications 146, 188 (2002). http://arxiv.org/abs/cond-mat/0110585

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]
class pysph.sph.integrator.TVDRK3Integrator(**kw)[source]

Bases: pysph.sph.integrator.Integrator

In the TVD-RK3 integrator, the system is advanced using:

\[\begin{split}
y^{n+\frac{1}{3}} &= y^n + \Delta t\, F(y^n)\\
y^{n+\frac{2}{3}} &= \frac{3}{4} y^n + \frac{1}{4}\left(y^{n+\frac{1}{3}} + \Delta t\, F(y^{n+\frac{1}{3}})\right)\\
y^{n+1} &= \frac{1}{3} y^n + \frac{2}{3}\left(y^{n+\frac{2}{3}} + \Delta t\, F(y^{n+\frac{2}{3}})\right)
\end{split}\]

Pass fluid names and suitable IntegratorStep instances.

For example:

>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())

where “fluid” and “solid” are the names of the particle arrays.

one_timestep(t, dt)[source]

SPH Kernels

Definition of some SPH kernel functions

class pysph.base.kernels.CubicSpline(dim=1)[source]

Bases: object

Cubic Spline Kernel: [Monaghan1992]

\[\begin{split}
W(q) = \sigma_3 \begin{cases}
(2-q)^3 - 4(1-q)^3, & 0 \leq q \leq 1,\\
(2-q)^3, & 1 \leq q \leq 2,\\
0, & q > 2,
\end{cases}
\end{split}\]

where \(\sigma_3\) is a dimensional normalizing factor for the cubic spline function given by:

\[\begin{split}
\sigma_3 = \begin{cases}
\dfrac{2}{3h}, & \text{for dim}=1,\\
\dfrac{10}{7\pi h^2}, & \text{for dim}=2,\\
\dfrac{1}{\pi h^3}, & \text{for dim}=3.
\end{cases}
\end{split}\]

References

[Monaghan1992] J. Monaghan, "Smoothed Particle Hydrodynamics", Annual Review of Astronomy and Astrophysics, 30 (1992), pp. 543-574.
get_deltap()[source]
gradient(xij=[0.0, 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0])[source]
gradient_h(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
kernel(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
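
For example, a minimal sketch of evaluating the kernel and its gradient at a given separation:

>>> from pysph.base.kernels import CubicSpline
>>> k = CubicSpline(dim=2)
>>> xij = [0.5, 0.0, 0.0]  # relative position vector between two particles
>>> w = k.kernel(xij=xij, rij=0.5, h=1.0)
>>> grad = [0.0, 0.0, 0.0]
>>> k.gradient(xij=xij, rij=0.5, h=1.0, grad=grad)  # result stored in grad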
class pysph.base.kernels.Gaussian(dim=2)[source]

Bases: object

Gaussian Kernel: [Liu2010]

\[\begin{split}
W(q) = \begin{cases}
\sigma_g\, e^{-q^2}, & 0 \leq q \leq 3,\\
0, & q > 3,
\end{cases}
\end{split}\]

where \(\sigma_g\) is a dimensional normalizing factor for the gaussian function given by:

\[\begin{split}
\sigma_g = \begin{cases}
\dfrac{1}{\pi^{1/2} h}, & \text{for dim}=1,\\
\dfrac{1}{\pi h^2}, & \text{for dim}=2,\\
\dfrac{1}{\pi^{3/2} h^3}, & \text{for dim}=3.
\end{cases}
\end{split}\]

References

[Liu2010] M. Liu and G. Liu, "Smoothed particle hydrodynamics (SPH): an overview and recent developments", Archives of Computational Methods in Engineering, 17.1 (2010), pp. 25-76.
get_deltap()[source]
gradient(xij=[0.0, 0, 0], rij=1.0, h=1.0, grad=[0.0, 0, 0])[source]
gradient_h(xij=[0.0, 0.0, 0.0], rij=1.0, h=1.0)[source]
kernel(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
class pysph.base.kernels.QuinticSpline(dim=2)[source]

Bases: object

Quintic Spline SPH kernel: [Liu2010]

\[\begin{split}
W(q) = \sigma_5 \begin{cases}
(3-q)^5 - 6(2-q)^5 + 15(1-q)^5, & 0 \leq q \leq 1,\\
(3-q)^5 - 6(2-q)^5, & 1 \leq q \leq 2,\\
(3-q)^5, & 2 \leq q \leq 3,\\
0, & q > 3,
\end{cases}
\end{split}\]

where \(\sigma_5\) is a dimensional normalizing factor for the quintic spline function given by:

\[\begin{split}
\sigma_5 = \begin{cases}
\dfrac{1}{120 h}, & \text{for dim}=1,\\
\dfrac{7}{478\pi h^2}, & \text{for dim}=2,\\
\dfrac{3}{359\pi h^3}, & \text{for dim}=3.
\end{cases}
\end{split}\]
Raises:NotImplementedError – Quintic spline currently only supports 2D kernels.
get_deltap()[source]
gradient(xij=[0.0, 0, 0], rij=1.0, h=1.0, grad=[0.0, 0, 0])[source]
gradient_h(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
kernel(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
class pysph.base.kernels.WendlandQuintic(dim=2)[source]

Bases: object

get_deltap()[source]
gradient(xij=[0.0, 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0])[source]
gradient_h(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
kernel(xij=[0.0, 0, 0], rij=1.0, h=1.0)[source]
pysph.base.kernels.get_compiled_kernel(kernel)[source]

Given a kernel, return a high performance wrapper kernel.

pysph.base.kernels.get_correction(kernel, h0)[source]

Module nnps: Nearest Neighbor Particle Search

class pysph.base.nnps.BoxSortNNPS

Bases: pysph.base.nnps.LinkedListNNPS

Nearest neighbor query class using the box-sort method, but implemented with the linked-list algorithm. This makes it very fast, though perhaps not as safe as the DictBoxSortNNPS. All this class does is use a std::map to obtain a linear cell index from the actual flattened cell index.

Constructor for NNPS

Parameters:
  • dim (int) – Number of dimensions.
  • particles (list) – The list of particle arrays we are working on
  • radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
  • ghost_layers (int) – Optional number of layers to share in parallel
  • domain (DomainManager, default (None)) – Optional limits for the domain
  • fixed_h (bint) – Optional flag to use constant cell sizes throughout.
  • cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
  • sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
cell_to_index

cell_to_index: ‘map[long,int]’

class pysph.base.nnps.Cell(IntPoint cid, double cell_size, int narrays, int layers=2)

Bases: object

Basic indexing structure for the box-sort NNPS.

For a spatial indexing based on the box-sort algorithm, this class defines the spatial data structure used to hold particle indices (local and global) that are within this cell.

Constructor

Parameters:
  • cid (IntPoint) – Spatial index (unflattened) for the cell
  • cell_size (double) – Spatial extent of the cell in each dimension
  • narrays (int) – Number of arrays being binned
  • layers (int) – Factor to compute the bounding box
centroid

centroid: ‘cPoint’

get_bounding_box(self, Point boxmin, Point boxmax, int layers=1, cell_size=None)

Compute the bounding box for the cell.

Parameters:
  • boxmin (Point (output)) – The bounding box min coordinates are stored here
  • boxmax (Point (output)) – The bounding box max coordinates are stored here
  • layers (int (input) default (1)) – Number of offset layers to define the bounding box
  • cell_size (double (input) default (None)) – Optional cell size to use to compute the bounding box. If not provided, the cell’s size will be used.
get_centroid(self, Point pnt)

Utility function to get the centroid of the cell.

Parameters:
  • pnt (Point (input/output)) – The centroid is computed and stored in this object. The centroid is defined as the origin plus half the cell size in each dimension.
gindices

gindices: list

is_boundary

is_boundary: ‘bool’

lindices

lindices: list

set_indices(self, int index, UIntArray lindices, UIntArray gindices)

Set the global and local indices for the cell

size

size: ‘int’

class pysph.base.nnps.DictBoxSortNNPS(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, cache=False, sort_gids=False)

Bases: pysph.base.nnps.NNPS

Nearest neighbor query class using the box-sort algorithm using a dictionary.

NNPS bins all local particles using the box sort algorithm in Cells. The cells are stored in a dictionary ‘cells’ which is keyed on the spatial index (IntPoint) of the cell.

Constructor for NNPS

Parameters:
  • dim (int) – Number of dimensions.
  • particles (list) – The list of particle arrays we are working on.
  • radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
  • domain (DomainManager, default (None)) – Optional limits for the domain
  • cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
  • sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
cells

cells: dict

get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bool prealloc)

Utility function to get near-neighbors for a particle.

Parameters:
  • src_index (int) – Index of the source particle array in the particles list
  • dst_index (int) – Index of the destination particle array in the particles list
  • d_idx (int (input)) – Destination particle for which neighbors are sought.
  • nbrs (UIntArray (output)) – Neighbors for the requested particle are stored here.
  • prealloc (bool) – Specifies if the neighbor array already has pre-allocated space for the neighbor list. In this case the neighbors are directly set in the given array without resetting or appending to the array. This improves performance when the neighbors are cached.
class pysph.base.nnps.DomainManager(double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False)

Bases: object

This class determines the limits of the solution domain.

We expect all simulations to have well defined domain limits beyond which we are either not interested or the solution is invalid to begin with. Thus, if a particle leaves the domain, the solution should be considered invalid (at least locally).

The initial domain limits could be given explicitly or asked to be computed from the particle arrays. The domain could be periodic.
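
For example, a sketch of a domain that is periodic in x:

>>> from pysph.base.nnps import DomainManager
>>> domain = DomainManager(xmin=0.0, xmax=1.0, periodic_in_x=True)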

Constructor

cell_size

cell_size: ‘double’

compute_cell_size_for_binning(self)
dim

dim: ‘int’

is_periodic

is_periodic: ‘bool’

narrays

narrays: ‘int’

pa_wrappers

pa_wrappers: list

periodic_in_x

periodic_in_x: ‘bool’

periodic_in_y

periodic_in_y: ‘bool’

periodic_in_z

periodic_in_z: ‘bool’

radius_scale

radius_scale: ‘double’

set_cell_size(self, cell_size)
set_in_parallel(self, bool in_parallel)
set_pa_wrappers(self, wrappers)
set_radius_scale(self, double radius_scale)
update(self, *args, **kwargs)

General method that is called before NNPS can bin particles.

This method is responsible for the computation of cell sizes and creation of any ghost particles for periodic or wall boundary conditions.

xmax

xmax: ‘double’

xmin

xmin: ‘double’

xtranslate

xtranslate: ‘double’

ymax

ymax: ‘double’

ymin

ymin: ‘double’

ytranslate

ytranslate: ‘double’

zmax

zmax: ‘double’

zmin

zmin: ‘double’

ztranslate

ztranslate: ‘double’

class pysph.base.nnps.LinkedListNNPS(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool fixed_h=False, bool cache=False, bool sort_gids=False)

Bases: pysph.base.nnps.NNPS

Nearest neighbor query class using the linked list method.

Constructor for NNPS

Parameters:
  • dim (int) – Number of dimensions.
  • particles (list) – The list of particle arrays we are working on
  • radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
  • ghost_layers (int) – Optional number of layers to share in parallel
  • domain (DomainManager, default (None)) – Optional limits for the domain
  • fixed_h (bint) – Optional flag to use constant cell sizes throughout.
  • cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
  • sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
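
For example, a minimal sketch of binning particles and querying the neighbors of a particle (assuming scalar property values such as h are broadcast by get_particle_array):

>>> from pysph.base.utils import get_particle_array
>>> from pysph.base.nnps import LinkedListNNPS
>>> from pyzoltan.core.carray import UIntArray
>>> fluid = get_particle_array(name='fluid', x=[0.0, 0.1, 0.2], h=0.1)
>>> nnps = LinkedListNNPS(dim=1, particles=[fluid])
>>> nnps.update()  # bin the particles
>>> nbrs = UIntArray()
>>> nnps.set_context(src_index=0, dst_index=0)
>>> nnps.get_nearest_particles(0, 0, 0, nbrs)  # neighbors of particle 0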
fixed_h

fixed_h: ‘bool’

get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bool prealloc)

Utility function to get near-neighbors for a particle.

Parameters:
  • src_index (int) – Index of the source particle array in the particles list
  • dst_index (int) – Index of the destination particle array in the particles list
  • d_idx (int (input)) – Destination particle for which neighbors are sought.
  • nbrs (UIntArray (output)) – Neighbors for the requested particle are stored here.
  • prealloc (bool) – Specifies if the neighbor array already has pre-allocated space for the neighbor list. In this case the neighbors are directly set in the given array without resetting or appending to the array. This improves performance when the neighbors are cached.
get_spatially_ordered_indices(self, int pa_index, LongArray indices)
heads

heads: list

ncells_per_dim

ncells_per_dim: pyzoltan.core.carray.IntArray

ncells_tot

ncells_tot: ‘int’

nexts

nexts: list

set_context(self, int src_index, int dst_index)

Setup the context before asking for neighbors. The dst_index represents the particles for whom the neighbors are to be determined from the particle array with index src_index.

Parameters:
  • src_index (int: the source index of the particle array.) –
  • dst_index (int: the destination index of the particle array.) –
class pysph.base.nnps.NNPS(int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bool cache=False, bool sort_gids=False)

Bases: object

Nearest neighbor query class using the box-sort algorithm.

NNPS bins all local particles using the box sort algorithm in Cells. The cells are stored in a dictionary ‘cells’ which is keyed on the spatial index (IntPoint) of the cell.

Constructor for NNPS

Parameters:
  • dim (int) – Dimension (fixme: Not sure if this is really needed)
  • particles (list) – The list of particle arrays we are working on.
  • radius_scale (double, default (2)) – Optional kernel radius scale. Defaults to 2
  • domain (DomainManager, default (None)) – Optional limits for the domain
  • cache (bint) – Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations.
  • sort_gids (bint, default (False)) – Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run.
brute_force_neighbors(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs)
cell_size

cell_size: ‘double’

current_cache

current_cache: pysph.base.nnps.NeighborCache

dim

dim: ‘int’

domain

domain: pysph.base.nnps.DomainManager

get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs)
get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bool prealloc)
get_spatially_ordered_indices(self, int pa_index, LongArray indices)
is_periodic

is_periodic: ‘bool’

n_cells

n_cells: ‘int’

narrays

narrays: ‘int’

pa_wrappers

pa_wrappers: list

particles

particles: list

radius_scale

radius_scale: ‘double’

set_context(self, int src_index, int dst_index)

Setup the context before asking for neighbors. The dst_index represents the particles for whom the neighbors are to be determined from the particle array with index src_index.

Parameters:
  • src_index (int: the source index of the particle array.) –
  • dst_index (int: the destination index of the particle array.) –
set_in_parallel(self, bool in_parallel)
sort_gids

sort_gids: ‘bool’

spatially_order_particles(self, int pa_index)

Spatially order particles such that nearby particles have indices nearer each other. This may improve pre-fetching on the CPU.

update(self)

Update the local data after particles have moved.

For parallel runs, we want the NNPS to be independent of the ParallelManager which is solely responsible for distributing particles across available processors. We assume therefore that after a parallel update, each processor has all the local particle information it needs and this operation is carried out locally.

For serial runs, this method should be called when the particles have moved.

update_domain(self, *args, **kwargs)
use_cache

use_cache: ‘bool’

xmax

xmax: pyzoltan.core.carray.DoubleArray

xmin

xmin: pyzoltan.core.carray.DoubleArray

class pysph.base.nnps.NNPSParticleArrayWrapper(ParticleArray pa)

Bases: object

gid

gid: pyzoltan.core.carray.UIntArray

h

h: pyzoltan.core.carray.DoubleArray

pa

pa: pysph.base.particle_array.ParticleArray

remove_tagged_particles(self, int tag)
tag

tag: pyzoltan.core.carray.IntArray

x

x: pyzoltan.core.carray.DoubleArray

y

y: pyzoltan.core.carray.DoubleArray

z

z: pyzoltan.core.carray.DoubleArray

class pysph.base.nnps.NeighborCache(NNPS nnps, int dst_index, int src_index)

Bases: object

find_all_neighbors(self)
get_neighbors(self, int src_index, size_t d_idx, UIntArray nbrs)
update(self)
pysph.base.nnps.arange_uint(int start, int stop=-1) → UIntArray

Utility function to return a numpy.arange for a UIntArray

pysph.base.nnps.get_centroid(double cell_size, IntPoint cid)

Get the centroid of the cell.

Parameters:
  • cell_size (double (input)) – Cell size used for binning
  • cid (IntPoint (input)) – Spatial index for a cell
Returns:

centroid

Return type:

Point

Notes

The centroid in any coordinate direction is defined to be the origin plus half the cell size in that direction
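
For example (assuming IntPoint is importable from pysph.base.point):

>>> from pysph.base.nnps import get_centroid
>>> from pysph.base.point import IntPoint
>>> centroid = get_centroid(1.0, IntPoint(0, 0, 0))  # Point(0.5, 0.5, 0.5)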

pysph.base.nnps.get_number_of_threads() → int
pysph.base.nnps.py_flatten(IntPoint cid, IntArray ncells_per_dim, int dim)

Python wrapper

pysph.base.nnps.py_get_valid_cell_index(IntPoint cid, IntArray ncells_per_dim, int dim, int n_cells)

Return the flattened cell index for a valid cell

pysph.base.nnps.py_unflatten(long cell_index, IntArray ncells_per_dim, int dim)

Python wrapper

pysph.base.nnps.set_number_of_threads(int n)

Module parallel_manager

Module particle_array

The ParticleArray class itself is documented as below.

A ParticleArray represents a collection of particles.

class pysph.base.particle_array.ParticleArray(str name='', default_particle_tag=Local, constants=None, **props)

Bases: object

Class to represent a collection of particles.

name

str

name of this particle array.

properties

dict

dictionary of {prop_name:carray}.

constants

dict

dictionary of {const_name: carray}

Examples

There are many ways to create a ParticleArray:

>>> p = ParticleArray(name='fluid', x=[1.,2., 3., 4.])
>>> p.name
'fluid'
>>> p.x, p.tag, p.pid, p.gid

For a full specification of properties with their type etc.:

>>> p = ParticleArray(name='fluid',
...                   x=dict(data=[1,2], type='int', default=1))
>>> p.get_carray('x').get_c_type()
'int'

The default value is what is used to set the default value when a new particle is added and the arrays auto-resized.

To create constants that are not resized with added/removed particles:

>>> p = ParticleArray(name='f', x=[1,2], constants={'c':[0.,0.,0.]})

Constructor

Parameters:
  • name (str) – name of this particle array.
  • default_particle_tag (int) – one of Local, Remote or Ghost
  • constants (dict) – dictionary of constant arrays for the entire particle array. These must be arrays and are not resized when particles are added or removed. These are stored as CArrays internally.
  • props – any additional keyword arguments are taken to be properties, one for each property.
Receive(self, int send_proc)
Send(self, int recv_proc, props=None)
__delattr__

x.__delattr__(‘name’) <==> del x.name

__getattr__()

Convenience, to access particle property arrays as an attribute

A numpy array is returned. Look at the get() function's documentation for care on using this numpy array.

__getattribute__

x.__getattribute__(‘name’) <==> x.name

__reduce__(self)

Implemented to facilitate pickling of extension types

__setattr__

Convenience, to set particle property arrays as an attribute

__setstate__(self, d)

Load the particle array object from the saved dictionary

add_constant(self, str name, data)

Add a constant property to the particle array.

A constant property is an array but has a fixed size in that it is never resized as particles are added or removed. These properties are always stored internally as CArrays.

An example of where this is useful is if you need to calculate the center of mass of a solid body or the net force on the body.

Parameters:
  • name (str) – name of the constant
  • data (array-like) – the value for the data.
add_output_arrays(self, list props)

Append props to the existing list of output arrays

Parameters:props (list) – The additional list of property arrays to save
add_particles(self, **particle_props)

Add particles in particle_array to self.

Parameters:particle_props (dict) – a dictionary containing numpy arrays for various particle properties.

Notes

  • all properties should have arrays of the same length.
  • all properties should already be present in this particle array; if new properties are seen, an exception will be raised.
add_property(self, str name, str type='double', default=None, data=None)

Add a new property to the particle array.

If a default is not supplied, 0 is assumed.

Parameters:
  • name (str) – compulsory name of property.
  • type (str) – specifying the data type of this property (‘double’, ‘int’ etc.)
  • default (value) – specifying the default value of this property.
  • data (ndarray) – specifying the data associated with each particle.

Notes

If there are no particles currently in the particle array, and a new property with some particles is added, all the remaining properties will be resized to the size of the newly added array.

If there are some particles in the particle array, and a new property is added without any particles, then this new property will be resized according to the current size.

If there are some particles in the particle array and a new property is added with a different number of particles, then an error will be raised.

Warning

  • it is best not to add properties with data when you already have particles in the particle array. The reason is that the particles are stored with the ‘Real’ particles at the top of the list followed by the dummy ones, so the data in your property array must be matched to the particles appropriately. This may not always be possible when there are particles of different types in the particle array.
  • Properties without any values can be added anytime.
  • While initializing particle arrays only using the add_property function, you will have to call align_particles manually to make sure particles are aligned properly.
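
For example, a small sketch of adding a property with a default value and a constant to an existing array:

>>> pa = ParticleArray(name='fluid', x=[0.0, 1.0, 2.0])
>>> pa.add_property('temp', type='double', default=300.0)
>>> pa.add_constant('total_mass', [0.0])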
align_particles(self) → int

Moves all ‘Local’ particles to the beginning of the array

This makes retrieving numpy slices of properties of ‘Local’ particles possible. This facility will be required frequently.

Notes

Pseudo-code:

index_arr = LongArray(n)
next_insert = 0
for i in range(n):
    if particle i is Local:
        if i != next_insert:
            # swap the index assignments
            index_arr[i], index_arr[next_insert] = index_arr[next_insert], i
        else:
            index_arr[i] = i
        next_insert += 1
    else:
        index_arr[i] = i

# we now have the new index assignment;
# swap the required values as needed.
for prop in property_arrays:
    for i in range(n):
        if index_arr[i] != i:
            prop[i], prop[index_arr[i]] = prop[index_arr[i]], prop[i]
append_parray(self, ParticleArray parray) → int

Add particles from a particle array

Properties that are not present in self will be added.

cl_properties

cl_properties: dict

cl_setup_done

cl_setup_done: ‘bool’

clear(self)

Clear all data held by this array

constants

constants: dict

copy_over_properties(self, dict props)

Copy the properties from one set to another.

Parameters:props (dict) – A mapping between the properties to be copied.

Examples

To save the properties ‘x’ and ‘y’ to say ‘x0’ and ‘y0’:

>>> pa.copy_over_properties(props={'x': 'x0', 'y': 'y0'})
copy_properties(self, ParticleArray source, long start_index=-1, long end_index=-1)

Copy properties from source to self

Parameters:
  • source (ParticleArray) – the particle array from where to copy.
  • start_index (long) – the first particle in self which maps to the 0th particle in source
  • end_index (long) – the index of first particle from start_index that is not copied
default_values

default_values: dict

extend(self, int num_particles)

Increase the total number of particles by the requested amount

New particles are added at the end of the list; you may have to manually call align_particles later.

extract_particles(self, indices, list props=None) → ParticleArray

Create a new particle array for the particles with the given indices.

Parameters:
  • indices (list/array/LongArray) – indices of particles to be extracted (can be a LongArray or list/numpy array).
  • props (list) – the list of properties to extract, if None all properties are extracted.

Notes

The algorithm is as follows:

  • create a new particle array with the required properties.
  • resize the new array to the desired length (index_array.length)
  • copy the properties from the existing array to the new array.
get(self, *args, only_real_particles=True)

Return the numpy array/constant for the property names in the arguments.

Parameters:
  • only_real_particles (bool) – indicates if properties of only real particles are to be returned, or of all particles. By default only real particles are returned.
  • args (additional args) – a list of property names.

Notes

The returned numpy array does NOT own its data; treat it as a view that may be invalidated by operations that resize the particle array.

Returns:
Return type: Numpy array.
get_carray(self, str prop) → BaseArray

Return the c-array for the property or constant.

get_lb_props(self)

Return the properties that are to be load balanced. If none are explicitly set by the user, return all of the properties.

get_number_of_particles(self, bool real=False) → int

Return the number of particles

get_property_arrays(self, all=True, only_real=True)

Return a dictionary of arrays held by the ParticleArray container.

This does not include the constants.

Parameters:
  • all (bint) – Flag to select all arrays
  • only_real (bint) – Flag to select Local/Remote particles

Notes

The dictionary returned is keyed on the property name and the value is the NumPy array representing the data. If all is set to False, the list of arrays is determined by the output_property_arrays data attribute.

get_property_index(self, prop_name)

Get the index of the property in the property array

get_time(self) → double
has_array(self, str arr_name)

Returns true if the array arr_name is present

indices_invalid

indices_invalid: ‘bool’

is_dirty

is_dirty: ‘bool’

name

name: str

num_real_particles

num_real_particles: ‘long’

output_property_arrays

output_property_arrays: list

properties

properties: dict

property_arrays

property_arrays: list

remove_particles(self, indices)

Remove particles whose indices are given in index_list.

We repeatedly interchange the values of the last element and values from the index_list and reduce the size of the array by one. This is done for every property that is being maintained.

Parameters:indices (array) – an array of indices, this array can be a list, numpy array or a LongArray.

Notes

Pseudo-code for the implementation:

if index_list.length > number of particles
    raise ValueError

sorted_indices <- index_list sorted in ascending order.

for every array in property_array
    array.remove(sorted_indices)
remove_property(self, str prop_name)

Removes property prop_name from the particle array

remove_tagged_particles(self, int tag)

Remove particles that have the given tag.

Parameters:tag (int) – the type of particles that need to be removed.
resize(self, long size)

Resize all arrays to the new size

set(self, **props)

Set properties from numpy arrays like objects

Parameters:props (dict) – a dictionary of properties containing the arrays to be set.

Notes

  • the properties being set must already be present in the properties dict.
  • the size of the data should match the array already present.
set_dirty(self, bool value)

Set the is_dirty variable to given value

set_indices_invalid(self, bool value)

Set the indices_invalid to the given value

set_lb_props(self, list lb_props)
set_name(self, str name)
set_output_arrays(self, list props)

Set the list of output arrays for this ParticleArray

Parameters:props (list) – The list of property arrays

Notes

In PySPH, the solver obtains the list of property arrays to output by calling the ParticleArray.get_property_arrays method. If detailed output is not requested, the output_property_arrays attribute is used to determine the arrays that will be written to file.

set_pid(self, int pid)

Set the processor id for all particles

set_tag(self, long tag_value, LongArray indices)

Set value of tag to tag_value for the particles in indices

set_time(self, double time)
set_to_zero(self, list props)
time

time: ‘double’

update_min_max(self, props=None)

Update the min,max values of all properties

update_property(self, BaseArray a, BaseArray a0, BaseArray acc, double dt)

Update an array from another array and a scalar.

This situation arises often when we want to update an array from computed accelerations. The scalar in this case will be the time step.

pysph.base.particle_array.get_ghost_tag() → int
pysph.base.particle_array.get_local_tag() → int
pysph.base.particle_array.get_remote_tag() → int
pysph.base.particle_array.is_ghost(int tag) → bool
pysph.base.particle_array.is_local(int tag) → bool
pysph.base.particle_array.is_remote(int tag) → bool
Convenience functions to create particle arrays

There are several convenience functions that provide a particle array with a requisite set of particle properties that are documented below.

pysph.base.utils.arange_long(start, stop=-1)[source]

Creates a LongArray that works like the builtin range with up to two arguments, both expected to be positive.

pysph.base.utils.create_dummy_particles(info)[source]

Returns a replica (empty) of a list of particles

pysph.base.utils.get_particle_array(additional_props=None, constants=None, **props)[source]

Create and return a particle array with default properties.

The default properties are [‘x’, ‘y’, ‘z’, ‘u’, ‘v’, ‘w’, ‘m’, ‘h’, ‘rho’, ‘p’, ‘au’, ‘av’, ‘aw’, ‘gid’, ‘pid’, ‘tag’]

Parameters:
  • additional_props (list) – If specified, add these properties.
  • constants (dict) – Any constants to be added to the particle array.
Other Parameters:
 

props (dict) – Additional keywords passed are set as the property arrays.

Examples

>>> x = linspace(0,1,10)
>>> pa = get_particle_array(name='fluid', x=x)
>>> pa.properties.keys()
['x', 'z', 'rho', 'pid', 'v', 'tag', 'm', 'p', 'gid', 'au',
 'aw', 'av', 'y', 'u', 'w', 'h']
>>> pa1 = get_particle_array(name='fluid', additional_props=['xx', 'yy'])
>>> pa = get_particle_array(name='fluid', x=x, constants={'alpha': 1.0})
>>> pa.constants.keys()
['alpha']
pysph.base.utils.get_particle_array_gasd(constants=None, **props)[source]

Return a particle array for a Gas Dynamics problem.

Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
pysph.base.utils.get_particle_array_iisph(constants=None, **props)[source]

Get a particle array for the IISPH formulation.

The default properties are:

['x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'au', 'av', 'aw',
 'gid', 'pid', 'tag', 'uadv', 'vadv', 'wadv', 'rho_adv', 'ax', 'ay',
 'az', 'dii0', 'dii1', 'dii2', 'V', 'aii', 'dijpj0', 'dijpj1',
 'dijpj2', 'p0', 'piter', 'compression']
Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
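
For example (assuming scalar property values are broadcast to all particles):

>>> from numpy import linspace
>>> from pysph.base.utils import get_particle_array_iisph
>>> x = linspace(0, 1, 10)
>>> pa = get_particle_array_iisph(name='fluid', x=x, m=0.1, h=0.15)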
pysph.base.utils.get_particle_array_rigid_body(constants=None, **props)[source]

Return a particle array for a rigid body motion.

Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
pysph.base.utils.get_particle_array_tvf_fluid(constants=None, **props)[source]

Return a particle array for the TVF formulation for a fluid.

Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
pysph.base.utils.get_particle_array_tvf_solid(constants=None, **props)[source]

Return a particle array for the TVF formulation for a solid.

Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
pysph.base.utils.get_particle_array_wcsph(constants=None, **props)[source]

Return a particle array for the WCSPH formulation.

This sets the default properties to be:

['x', 'y', 'z', 'u', 'v', 'w', 'h', 'rho', 'm', 'p', 'cs', 'ax', 'ay',
'az', 'au', 'av', 'aw', 'x0','y0', 'z0','u0', 'v0','w0', 'arho',
'rho0', 'div', 'gid','pid', 'tag']
Parameters:constants (dict) – Dictionary of constants
Other Parameters:
 props (dict) – Additional keywords passed are set as the property arrays.
pysph.base.utils.get_particles_info(particles)[source]

Return the array information for a list of particles.

Returns: A dictionary containing the property information for a list of particles. This dict can be used, for example, to set up dummy/empty particle arrays.

Module solver

An implementation of a general solver base class

class pysph.solver.solver.Solver(dim=2, integrator=None, kernel=None, n_damp=0, tf=1.0, dt=0.001, adaptive_timestep=False, cfl=0.3, output_at_times=[], fixed_h=False, **kwargs)[source]

Bases: object

Base class for all PySPH Solvers

Constructor

Any additional keyword args are used to set the values of any of the attributes.

Parameters:
  • dim (int) – Dimension of the problem
  • integrator (pysph.sph.integrator.Integrator) – Integrator to use
  • kernel (pysph.base.kernels.Kernel) – SPH kernel to use
  • n_damp (int) – Number of timesteps for which the initial damping is required. This is used to improve stability for problems with strong discontinuity in initial condition. Setting it to zero will disable damping of the timesteps.
  • dt (double) – Suggested initial time step for integration
  • tf (double) – Final time for integration
  • adaptive_timestep (bint) – Flag to use adaptive time steps
  • cfl (double) – CFL number for adaptive time stepping
  • pfreq (int) – Output files dumping frequency.
  • output_at_times (list/array) – Optional list of output times to force dump the output file
  • fixed_h (bint) – Flag for constant smoothing lengths h

Example

>>> integrator = PECIntegrator(fluid=WCSPHStep())
>>> kernel = CubicSpline(dim=2)
>>> solver = Solver(dim=2, integrator=integrator, kernel=kernel,
...                 n_damp=50, tf=1.0, dt=1e-3, adaptive_timestep=True,
...                 pfreq=100, cfl=0.5, output_at_times=[1e-1, 1.0])
add_post_stage_callback(callback)[source]

These callbacks are called after each integrator stage.

The callbacks are passed (current_time, dt, stage). See the Integrator.one_timestep methods for examples of how this is called.

Example

>>> def post_stage_callback_function(t, dt, stage):
...     # This function is called after every stage of the integrator.
...     print t, dt, stage
...     # Do something
>>> solver.add_post_stage_callback(post_stage_callback_function)
add_post_step_callback(callback)[source]

These callbacks are called after each timestep is performed.

The callbacks are passed the solver instance (i.e. self).

Example

>>> def post_step_callback_function(solver):
...     # This function is called after every time step.
...     print solver.t, solver.dt
...     # Do something
>>> solver.add_post_step_callback(post_step_callback_function)
add_pre_step_callback(callback)[source]

These callbacks are called before each timestep is performed.

The callbacks are passed the solver instance (i.e. self).

Example

>>> def pre_step_callback_function(solver):
...     # This function is called before every time step.
...     print solver.t, solver.dt
...     # Do something
>>> solver.add_pre_step_callback(pre_step_callback_function)
append_particle_arrrays(arrays)[source]

Append the particle arrays to the existing particle arrays

dump_output()[source]

Dump the simulation results to file

The arrays used for printing are determined by the particle array’s output_property_arrays data attribute. For debugging it is sometimes nice to have all the arrays (including accelerations) saved. This can be enabled with the command line option --detailed-output.

Output data Format:

A single file named as: <fname>_<rank>_<iteration_count>.npz

The data is saved as a Python dictionary with two keys:

solver_data : Solver meta data like time, dt and iteration number

arrays : A dictionary keyed on particle array names and with
particle properties as value.

Example:

You can load the data output by PySPH like so:

>>> from pysph.solver.utils import load
>>> data = load('output_directory/filename_x_xxx.npz')
>>> solver_data = data['solver_data']
>>> arrays = data['arrays']
>>> fluid = arrays['fluid']
>>> ...

In the above example, it is assumed that the output file contained an array named fluid.

get_options(opt_parser)[source]

Implement this to add additional options for the application

load_output(count)[source]

Load particle data from dumped output file.

Parameters:count (str) – The iteration count from which to load the data. If count is ‘?’, the list of available data files is returned; otherwise the latest available data file is used.

Notes

Data is loaded from the output_directory using the same format as stored by the dump_output() method. Proper functioning requires that all the relevant properties of arrays be dumped.

set_adaptive_timestep(value)[source]

Set it to True to use adaptive timestepping based on the CFL, viscous, and force factors.

Look at pysph.sph.integrator.compute_time_step for more details.

set_arrays_to_print(array_names=None)[source]

Only print the arrays with the given names.

set_cfl(value)[source]

Set the CFL number for adaptive time stepping

set_command_handler(callable, command_interval=1)[source]

Set the callable to be called every command_interval iterations.

The callable is called with the solver instance as an argument.

set_disable_output(value)[source]

Disable file output.

set_final_time(tf)[source]

Set the final time for the simulation

set_output_at_times(output_at_times)[source]

Set a list of output times

set_output_directory(path)[source]

Set the output directory

set_output_fname(fname)[source]

Set the output file name

set_output_only_real(output_only_real)[source]

Set the flag to save out only real particles

set_output_printing_level(detailed_output)[source]

Set the output printing level

set_parallel_output_mode(mode='collected')[source]

Set the default solver dump mode in parallel.

The available modes are:

collected : Collect array data from all processors on root and
dump a single file.

distributed : Each processor dumps a file locally.

set_print_freq(n)[source]

Set the output print frequency

set_time_step(dt)[source]

Set the time step to use

setup(particles, equations, nnps, kernel=None, fixed_h=False)[source]

Setup the solver.

The solver’s processor id is set if the in_parallel flag is set to true.

The order of the integrating calcs is determined by the solver’s order attribute.

This is usually called at the start of a PySPH simulation.

setup_solver(options=None)[source]

Implement the basic solvers here

All subclasses of Solver may implement this function to add the necessary operations for the problem at hand.

Look at solver/fluid_solver.py for an example.

Parameters:options (dict) – options set by the user on the command line (there is no guarantee that any particular key exists)
solve(show_progress=True)[source]

Solve the system

Notes

Pre-stepping functions are those that need to be called before the integrator is called.

Similarly, post step functions are those that are called after the stepping within the integrator.

Module solver_interfaces

class pysph.solver.solver_interfaces.CommandlineInterface[source]

Bases: object

command-line interface to the solver controller

class pysph.solver.solver_interfaces.CrossDomainXMLRPCRequestHandler(request, client_address, server)[source]

Bases: SimpleXMLRPCServer.SimpleXMLRPCRequestHandler, SimpleHTTPServer.SimpleHTTPRequestHandler

SimpleXMLRPCRequestHandler subclass which attempts to do CORS

CORS is Cross-Origin-Resource-Sharing (http://www.w3.org/TR/cors/) which enables xml-rpc calls from a different domain than the xml-rpc server (such requests are otherwise denied)

do_GET()[source]

Handle http requests to serve html/image files only

do_OPTIONS()[source]

Implement the CORS pre-flighted access for resources

end_headers()[source]

End response header with adding Access-Control-Allow-Origin

This is done to enable CORS request from all clients

class pysph.solver.solver_interfaces.MultiprocessingClient(address=None, authkey=None, serializer='pickle', start=True)[source]

Bases: multiprocessing.managers.BaseManager

A client for the multiprocessing interface

Override the run() method to perform appropriate actions on the proxy instance of the controller object, or add an interface using the add_interface method, similar to Controller.add_interface.

add_interface(callable)[source]

This makes it act as substitute for the actual command_manager

class pysph.solver.solver_interfaces.MultiprocessingInterface(address=None, authkey=None, try_next_port=False)[source]

Bases: multiprocessing.managers.BaseManager

A multiprocessing interface to the solver controller

This object exports a controller instance proxy over the multiprocessing interface. Control actions can be performed by connecting to the interface and calling methods on the controller proxy instance

class pysph.solver.solver_interfaces.XMLRPCInterface(addr, requestHandler=<class pysph.solver.solver_interfaces.CrossDomainXMLRPCRequestHandler>, logRequests=True, allow_none=True, encoding=None, bind_and_activate=True)[source]

Bases: SimpleXMLRPCServer.SimpleXMLRPCServer

An XML-RPC interface to the solver controller

Currently cannot work with objects which cannot be marshalled (which is basically most custom classes, most importantly ParticleArray and numpy arrays)

Miscellaneous Tools for PySPH

Input/Output of data files

The following functions are handy functions when processing output generated by PySPH or to generate new files.

pysph.solver.utils.dump(filename, particles, solver_data, detailed_output=False, only_real=True, mpi_comm=None)[source]

Dump the given particles and solver data to the given filename.

Parameters:
  • filename (str) – Filename to dump to.
  • particles (sequence(ParticleArray)) – Sequence of particle arrays to dump.
  • solver_data (dict) – Additional information to dump about solver state.
  • detailed_output (bool) – Specifies if all arrays should be dumped.
  • only_real (bool) – Only dump the real particles.
  • mpi_comm (mpi4py.MPI.Intracomm) – An MPI communicator to use for parallel communication. If mpi_comm is not passed or is set to None, the local particles alone are dumped; otherwise only rank 0 dumps the output.
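For example, a minimal dump of a single fluid array might look like this; the ParticleArray fluid and the solver_data values are illustrative, with the keys mirroring those the solver writes (t, dt, count):

from pysph.solver.utils import dump

# 'fluid' is assumed to be an existing ParticleArray
solver_data = {'t': 0.25, 'dt': 1e-4, 'count': 2500}
dump('fluid_002500.npz', [fluid], solver_data)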
pysph.solver.utils.get_files(dirname=None, fname=None, endswith='.npz')[source]

Get all solution files in a given directory, dirname.

Parameters:
  • dirname (str) – Name of directory.
  • fname (str) – An initial part of the filename, if not specified use the first part of the dirname.
  • endswith (str) – The extension of the file to load.
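A sketch of typical use, where the directory and prefix names are illustrative:

from pysph.solver.utils import get_files, load

files = get_files(dirname='elliptical_drop_output',
                  fname='elliptical_drop')
for f in files:
    data = load(f)
    # print the simulation time of each saved output
    print(data['solver_data']['t'])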
pysph.solver.utils.load(fname)[source]

Load and return data from an output (.npz) file dumped by PySPH.

For output file version 1, the function returns a dictionary with the keys:

"solver_data" : Solver constants at the time of output like time, time step and iteration count.

"arrays" : ParticleArrays keyed on names with the ParticleArray object as value.

Parameters:fname (str) – Name of the file.

Examples

>>> data = load('elliptical_drop_100.npz')
>>> data.keys()
['arrays', 'solver_data']
>>> arrays = data['arrays']
>>> arrays.keys()
['fluid']
>>> fluid = arrays['fluid']
>>> type(fluid)
pysph.base.particle_array.ParticleArray
>>> data['solver_data']
{'count': 100, 'dt': 4.6416394784204199e-05, 't': 0.0039955855395528766}
pysph.solver.utils.load_and_concatenate(prefix, nprocs=1, directory='.', count=None)[source]

Load the results from multiple files.

Given a filename prefix and the number of processors, return a concatenated version of the dictionary returned via load.

Parameters:
  • prefix (str) – A filename prefix for the output file.
  • nprocs (int) – The number of processors (files) to read
  • directory (str) – The directory for the files
  • count (int) – The file iteration count to read. If None, the last available one is read
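A usage sketch (the prefix and directory names are illustrative) reading the last available iteration written by a 4-processor run:

from pysph.solver.utils import load_and_concatenate

data = load_and_concatenate('elliptical_drop', nprocs=4,
                            directory='elliptical_drop_output')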
Interpolator

This module provides a convenient class called interpolator.Interpolator which can be used to interpolate scalar values from the particles onto either a uniform mesh or a collection of arbitrary points. SPH interpolation is performed with a simple Shepard filtering.
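For example, the following sketch interpolates the pressure from a saved output onto a uniform set of points; the file name is illustrative, and the arrays are assumed to carry a pressure property p:

from pysph.solver.utils import load
from pysph.tools.interpolator import Interpolator

# load a saved output file (name illustrative)
data = load('elliptical_drop_100.npz')
parrays = list(data['arrays'].values())

# interpolate the pressure onto 10000 uniformly placed points
interp = Interpolator(parrays, num_points=10000)
p = interp.interpolate('p')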

class pysph.tools.interpolator.InterpolateFunction(dest, sources=None)[source]

Bases: pysph.sph.equation.Equation

Parameters:
  • dest (str) – name of the destination particle array
  • sources (list of str) – names of the source particle arrays
initialize(d_idx, d_prop, d_number_density)[source]
loop(s_idx, d_idx, s_temp_prop, d_prop, d_number_density, WIJ)[source]
post_loop(d_idx, d_prop, d_number_density)[source]
class pysph.tools.interpolator.Interpolator(particle_arrays, num_points=125000, kernel=None, x=None, y=None, z=None, domain_manager=None)[source]

Bases: object

Convenient class to interpolate particle properties onto a uniform grid. This is particularly handy for visualization.

The x, y, z coordinates need not be specified; if they are not, the bounds of the interpolated domain are computed automatically and num_points points are placed uniformly within this domain.

Parameters:
  • particle_arrays (list) – A list of particle arrays.
  • num_points (int) – the number of points to interpolate on to.
  • kernel (Kernel) – the kernel to use for interpolation.
  • x (ndarray) – the x-coordinate of points on which to interpolate.
  • y (ndarray) – the y-coordinate of points on which to interpolate.
  • z (ndarray) – the z-coordinate of points on which to interpolate.
  • domain_manager (DomainManager) – An optional Domain manager for periodic domains.
interpolate(prop, gradient=False)[source]

Interpolate given property.

Parameters:
  • prop (str) – The name of the property to interpolate.
  • gradient (bool) – Evaluate gradient and not function.
Returns: A suitably shaped numpy array with the interpolated property.

set_domain(bounds, shape)[source]

Set the domain to interpolate into.

Parameters:
  • bounds (tuple) – (xmin, xmax, ymin, ymax, zmin, zmax)
  • shape (tuple) – (nx, ny, nz)
set_interpolation_points(x=None, y=None, z=None)[source]

Set the points on which we must interpolate the arrays.

If any of x, y, z is not passed it is assumed to be 0.0 and shaped like the other non-None arrays.

Parameters:
  • x (ndarray) – the x-coordinate of points on which to interpolate.
  • y (ndarray) – the y-coordinate of points on which to interpolate.
  • z (ndarray) – the z-coordinate of points on which to interpolate.
update_particle_arrays(particle_arrays)[source]

Call this for a new set of particle arrays which have the same properties as before.

For example, if you are reading the particle array data from files, each time you load a new file a new particle array is read with the same properties. Call this function to reset the arrays.
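A sketch of this pattern, reusing a single interpolator over a sequence of output files (the directory name is illustrative):

from pysph.solver.utils import get_files, load
from pysph.tools.interpolator import Interpolator

interp = None
for f in get_files('elliptical_drop_output'):
    data = load(f)
    parrays = list(data['arrays'].values())
    if interp is None:
        interp = Interpolator(parrays, num_points=10000)
    else:
        # same properties as before, so just reset the arrays
        interp.update_particle_arrays(parrays)
    p = interp.interpolate('p')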

pysph.tools.interpolator.get_bounding_box(particle_arrays, tight=False, stretch=0.05)[source]

Find the size of the domain given a sequence of particle arrays.

If tight is True, the bounds are tight; otherwise the domain is stretched along each dimension by the fraction stretch of the length along that dimension.

pysph.tools.interpolator.get_nx_ny_nz(num_points, bounds)[source]

Given a number of points to use and the bounds, return a triplet of integers for a uniform mesh with approximately that many points.
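For instance, a hedged sketch for roughly a million points in a unit cube:

from pysph.tools.interpolator import get_nx_ny_nz

# bounds are (xmin, xmax, ymin, ymax, zmin, zmax)
nx, ny, nz = get_nx_ny_nz(1000000, (0.0, 1.0, 0.0, 1.0, 0.0, 1.0))
# each of nx, ny, nz should be roughly 100 for this cubical domain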

pysph.tools.interpolator.main(fname, prop, npoint)[source]

Module zoltan: A Python wrapper for Zoltan

Solver Interfaces

Interfaces are a way to control a running solver instance, gather data from it and execute commands on it. This can be useful, for example, to pause/continue the solver, get the iteration count, get/set the dt or final time, or simply to monitor the running of the solver.

CommandManager

The CommandManager class provides functionality to control the solver in a restricted way, which makes it simple to add multiple interfaces to the solver.

The figure Overview of the Solver Interfaces shows an overview of the classes and objects involved in adding an interface to the solver.

_images/controller.png

Overview of the Solver Interfaces

The basic design of the controller is as follows:

  1. Solver has a method set_command_handler() which takes a callable and a command_interval, and calls the callable with self as an argument every command_interval iterations.
  2. The method CommandManager.execute_commands() of the CommandManager object is set as the command_handler for the solver. The CommandManager can then perform any operation on the solver.
  3. Interfaces are added to the CommandManager by the CommandManager.add_interface() method, which takes a callable (Interface) as an argument and calls it in a separate thread with a new Controller instance as an argument.
  4. A Controller instance is a proxy for the CommandManager: its methods redirect to CommandManager.dispatch(), which is synchronized in the CommandManager class so that only one thread (Interface) can call it at a time. The CommandManager queues the commands, sends them to all processors in a parallel run, and executes them when the solver calls its execute_commands() method.
  5. Writing a new Interface is simply writing a function/method which calls appropriate methods on the Controller instance passed to it.

Controller

The Controller class is a convenience class whose various methods redirect to the Controller.dispatch() method, which does the actual work of queuing the commands. This method is synchronized so that multiple controllers can operate in a thread-safe manner. It also restricts the operations which are possible on the solver through the various interfaces. This makes adding multiple interfaces to the solver convenient and safe. Each interface gets a separate Controller instance so that the various interfaces are isolated from one another.

Blocking and Non-Blocking mode

The Controller object has a notion of Blocking and Non-Blocking mode.

  • In Blocking mode, operations wait until the command is actually executed on the solver and then return the result. This means execution stops until the execute_commands method of the CommandManager is executed by the solver, which happens every command_interval iterations. This mode is the default.
  • In Non-Blocking mode, the Controller queues the command for execution and returns a task_id for the command. The result of the command can then be obtained at any later time using the get_result method of the Controller, passing the task_id as argument. The get_result call blocks until the result is available (a short sketch follows below).

Switching between modes

The blocking/non-blocking mode can be queried and set using the Controller.get_blocking() and Controller.set_blocking() methods.

NOTE: The blocking/non-blocking mode does not apply to getting/setting solver properties. These methods always return immediately, even though a setter actually takes effect only when the CommandManager.execute_commands() function is called by the solver.
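A small sketch of the two modes; the method names follow the descriptions above, and get_count stands in for any controller query:

# blocking mode (the default): the call waits for execute_commands()
# to run on the solver and returns the actual result
count = controller.get_count()

# non-blocking mode: the call returns a task_id immediately
controller.set_blocking(False)
task_id = controller.get_count()

# get_result blocks until the queued command has been executed
count = controller.get_result(task_id)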

Interfaces

Interfaces are functions which are called in a separate thread and receive a Controller instance so that they can query the solver, get/set various properties and execute commands on the solver in a safe manner.

Here’s an example of a simple interface which prints out the iteration count every second to monitor the solver:

import time

def simple_interface(controller):
    # poll the running solver for its iteration count once a second
    while True:
        print controller.get_count()
        time.sleep(1)

You can use dir(controller) to find out what methods are available on the controller instance.

A few simple interfaces are implemented in the solver_interfaces module, namely CommandlineInterface, XMLRPCInterface and MultiprocessingInterface, and also in examples/controller_elliptical_drop_client.py. You can check the code to see how to implement various kinds of interfaces.

Adding Interface to Solver

To add interfaces to a plain solver (not created using Application), the following steps need to be taken:

  • Set a CommandManager for the solver (it is not set up by default)
  • Add the interface to the CommandManager

The following code demonstrates how the Simple Interface created above can be added to a solver:

from pysph.solver.controller import CommandManager

# set up the CommandManager for the solver (not set up by default)
command_manager = CommandManager(solver)
solver.set_command_handler(command_manager.execute_commands)

# add the interface
command_manager.add_interface(simple_interface)

For code which uses Application, you simply need to add the interface to the application’s command_manager:

app = Application()
app.set_solver(s)
...
app.command_manager.add_interface(simple_interface)

Commandline Interface

The CommandlineInterface enables you to control the solver from the command line even as it is running. Here’s a sample session of the command-line interface from the controller_elliptical_drop.py example:

$ python controller_elliptical_drop.py
pysph[0]>>>
Invalid command
Valid commands are:
    p | pause
    c | cont
    g | get <name>
    s | set <name> <value>
    q | quit -- quit commandline interface (solver keeps running)
pysph[9]>>> g dt
1e-05
pysph[64]>>> g tf
0.1
pysph[114]>>> s tf 0.01
None
pysph[141]>>> g tf
0.01
pysph[159]>>> get_particle_array_names
['fluid']

The number inside the square brackets indicates the iteration count.

Note that not all operations can be performed using the command-line interface, notably those which use complex python objects.

XML-RPC Interface

The XMLRPCInterface interface exports the controller object’s methods over an XML-RPC interface. An example html file controller_elliptical_drop_client.html uses this XML-RPC interface to control the solver from a web page.

The following code snippet shows the use of XML-RPC interface, which is not much different from any other interface, as they all export the interface of the Controller object:

import xmlrpclib

# the server address is a URL, e.g. 'http://localhost:8900' for an
# XMLRPCInterface started on ('localhost', 8900)
client = xmlrpclib.ServerProxy('http://localhost:8900', allow_none=True)

# the client has all the methods of the controller
print client.system.listMethods()

print client.get_t()
print client.get('count')

The XML-RPC interface also implements a simple HTTP server which serves html, javascript and image files from the directory it is started in. This makes it possible to use the file controller_elliptical_drop_client.html directly to get an html interface without needing a dedicated HTTP server.

The figure PySPH html client using XML-RPC interface shows a screenshot of the html client in action.

_images/html_client.png

PySPH html client using XML-RPC interface

One limitation of the XML-RPC interface is that arbitrary Python objects cannot be sent across; the XML-RPC standard predefines a limited set of types which can be transferred.

Multiprocessing Interface

The MultiprocessingInterface also exports the controller object, similar to the XML-RPC interface, but it is more fully featured: it can use authentication keys and can send arbitrary picklable objects. Usage of the multiprocessing client is also similar to the XML-RPC client:

from pysph.solver.solver_interfaces import MultiprocessingClient

# address is a (hostname, port) tuple; authkey is the authentication
# key set on the server, which defaults to 'pysph'
address = ('localhost', 8900)
client = MultiprocessingClient(address, authkey='pysph')

# controller proxy
controller = client.controller

pa_names = controller.get_particle_array_names()

# arbitrary picklable Python objects can be transferred (e.g. a ParticleArray)
pa = controller.get_named_particle_array(pa_names[0])

Example

Here’s an example (adapted from controller_elliptical_drop_client.py) showing how the controller can be used to create useful interfaces for the solver. The code below plots the particle positions as a scatter plot with color-mapped velocities, and updates the plot every second while maintaining user interactivity:

import time

import numpy
import gobject
from matplotlib import pyplot as plt

from pysph.solver.solver_interfaces import MultiprocessingClient

# connection details; use whatever the server was started with
address = ('localhost', 8900)
client = MultiprocessingClient(address, authkey='pysph')
controller = client.controller

pa_name = controller.get_particle_array_names()[0]
pa = controller.get_named_particle_array(pa_name)

# scatter plot of particle positions, colored by velocity magnitude
fig = plt.figure()
ax = fig.add_subplot(111)
line = ax.scatter(pa.x, pa.y, c=numpy.hypot(pa.u, pa.v))

t = time.time()

def update():
    global t
    t2 = time.time()
    dt = t2 - t
    t = t2
    print 'count:', controller.get_count(), '\ttimer time:', dt,

    # fetch the latest particle data and refresh the plot
    pa = controller.get_named_particle_array(pa_name)
    line.set_offsets(zip(pa.x, pa.y))
    line.set_array(numpy.hypot(pa.u, pa.v))
    fig.canvas.draw()

    print '\tresult & draw time:', time.time() - t

    # returning True keeps the gobject timer running
    return True

update()
# assumes a GTK-based matplotlib backend, so the gobject timer fires
# while plt.show() runs the GTK main loop
gobject.timeout_add_seconds(1, update)
plt.show()
