skuschel / postpic
The open-source particle-in-cell post-processor.
License: GNU General Public License v3.0
I'm plotting the following for a TNSA simulation:
ekin = pa.createField(MS.X, MS.Y, weights=MS.Ekin_MeV, optargsh=optargsh, simextent=True)
plotter.plotField(ekin, xlim=[-0.5*1e-5, 1.0*1e-5], log10plot=False)
In the final plot, the colorbar shows a range from -2e28 to 2e28. From another plot,
EkinvsX = pa.createField(MS.X, MS.Ekin_MeV, optargsh=optargsh, simextent=True)
plotter.plotField(EkinvsX, xlim=[-0.5*1e-5, 1.0*1e-5])
I know that the maximum kinetic energy in this simulation is about 15 MeV, which is much more reasonable. How do I get a correct color scale?
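One likely cause: with `weights=MS.Ekin_MeV` the histogram sums (macro-particle weight times) energy per cell, so the colorbar shows a huge weighted sum rather than a typical particle energy. A plain-numpy sketch of recovering a per-cell mean energy instead (this is not postpic's API, just the idea):

```python
import numpy as np

# Synthetic per-particle data standing in for MS.X, MS.Y, MS.Ekin_MeV
rng = np.random.default_rng(0)
x = rng.uniform(-0.5e-5, 1.0e-5, 10000)
y = rng.uniform(-1e-5, 1e-5, 10000)
ekin = rng.exponential(2.0, 10000)       # kinetic energies in MeV (synthetic)

bins = (64, 64)
esum, xe, ye = np.histogram2d(x, y, bins=bins, weights=ekin)  # summed energy per cell
count, _, _ = np.histogram2d(x, y, bins=bins)                 # particles per cell
# Mean energy per cell; empty cells are left at 0 instead of dividing by 0
mean_ekin = np.divide(esum, count, out=np.zeros_like(esum), where=count > 0)
print(mean_ekin.max() <= ekin.max())  # the mean is bounded by the real maximum energy
```

Dividing the energy-weighted field by the plain number histogram turns the sum into an average, which stays on the physically expected MeV scale.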
will be VERY useful for #16
Currently all fields dumped need to live on the same grid. This is an unnecessary limitation.
The `Field` object is basically a numpy array plus axes and labels. Therefore it may be more intuitive to be able to use it as a numpy array: `np.abs` can work with a `Field` object and return a `Field` object. `postpic.MultiSpecies` should probably also behave like that. Check first if this has major performance implications.

At low particle-per-cell numbers the current approach of creating a histogram produces a very noisy image. Instead there should be a routine to transfer particle properties to the grid by assigning a particle shape to each particle, as is done in PIC simulations.
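A minimal 1D cloud-in-cell sketch of such shape-based deposition (plain numpy; the grid layout and normalization here are assumptions, not postpic's actual routine):

```python
import numpy as np

def deposit_cic(x, weights, grid_min, grid_max, n_cells):
    """1D cloud-in-cell deposition: each particle is a tent of one cell
    width, so its weight is split linearly between the two nearest
    cell centers (centers assumed at grid_min + (i + 0.5) * dx)."""
    dx = (grid_max - grid_min) / n_cells
    field = np.zeros(n_cells)
    s = (np.asarray(x, dtype=float) - grid_min) / dx - 0.5  # fractional cell coordinate
    i0 = np.floor(s).astype(int)
    frac = s - i0
    for i, f, w in zip(i0, frac, np.asarray(weights, dtype=float)):
        if 0 <= i < n_cells:
            field[i] += w * (1.0 - f)
        if 0 <= i + 1 < n_cells:
            field[i + 1] += w * f
    return field

# One particle halfway between the centers of cells 4 and 5:
field = deposit_cic([0.5], [1.0], 0.0, 1.0, 10)
print(field[4], field[5])  # 0.5 0.5
```

Unlike a nearest-grid-point histogram, each particle contributes to two cells here, which is what smooths the image at low particle counts.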
Argh, I really wish methods in classes like `plotting.plottercls` would have doc-strings ;)
No seriously, we need more doc strings on every object and member function ✨
There are two ways to do this: the inner loop (`for i in xrange(n)`) should be parallelized within cython. This change is to be made in https://github.com/skuschel/postpic/blob/master/postpic/particles/_particlestogrid.pyx
Option 2 seems a little better to me, but benchmarking is needed. Benchmarks are already carried out as part of the tests.
this might help
http://www.dlr.de/sc/Portaldata/15/Resources/dokumente/PyHPC2013/submissions/pyhpc2013_submission_2.pdf
http://cs.nyu.edu/courses/spring15/CSCI-UA.0480-003/lecture13.pdf
As for OpenPMD, the example data is very useful for better testing. Build a similar thing for the sdf file format (as used by EPOCH).
The dummyreader is a mess. It needs to be nicely rewritten. The returned data should be much better structured such that nice figures are created and no blobs of particles.
Implement equality tests for dumpreader and SingleSpeciesAnalyzer. This allows ParticleAnalyzer objects to be added together without adding particles twice in case they overlap. It will also make it easy to check whether one particle analyzer contains particles of another.
There are routines needed to reconstruct the k-space from the E and B fields. Since both E and B fields are available, it is possible to distinguish between a forward (+k) and a backward (-k) propagating wave, so this is not just the Fourier transform of the individual field components!
In full 3D the 6 real scalar fields (Ex, Ey, Ez, Bx, By, Bz) should be converted to 3 complex scalar k-spaces (for example k_x, k_y, k_z), one for each of the 3 possible polarizations in 3D. So k_x would mean that the EM wave is polarized along the x axis. I don't know if there are better naming conventions; this is just the first that came to my mind.
These routines should be placed inside the Field Analyzer class.
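A 1D sketch of the separation idea in vacuum, where a forward wave has Ey = c·Bz and a backward wave Ey = −c·Bz, so (Ey ± c·Bz)/2 isolates each direction before transforming (the sign convention and field layout here are assumptions for illustration):

```python
import numpy as np

c = 299792458.0
n, L = 512, 1e-5
x = np.linspace(0.0, L, n, endpoint=False)
k0 = 2 * np.pi * 20 / L                 # 20 wavelengths in the box

# A purely forward-propagating vacuum plane wave has Ey = c * Bz
Ey = np.cos(k0 * x)
Bz = Ey / c

# (Ey + c*Bz)/2 carries the forward part, (Ey - c*Bz)/2 the backward part
fwd = np.fft.fft((Ey + c * Bz) / 2)
bwd = np.fft.fft((Ey - c * Bz) / 2)

print(np.abs(fwd).max() > 1.0)          # forward content is present
print(np.abs(bwd).max() < 1e-8)         # essentially no backward content
```

This is exactly why having both E and B matters: a Fourier transform of Ey alone cannot tell +k from −k.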
Rescaling of axis currently requires something like this:
ax = kspace.axes[1]
ax.setextent(np.array(ax.extent)*800e-9/6.3, len(ax))
Maybe replace this by overloading operators on the axis object?
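One possible shape for the operator-overloading idea, as a toy axis class (the names `extent`, `setextent`, and `gridpoints` follow the snippet above; everything else is an assumption):

```python
import numpy as np

class Axis:
    """Toy stand-in for postpic's axis object."""
    def __init__(self, extent, gridpoints):
        self.extent = list(extent)
        self.gridpoints = gridpoints

    def setextent(self, extent, gridpoints):
        self.extent = list(extent)
        self.gridpoints = gridpoints

    def __mul__(self, factor):
        # return a rescaled copy instead of mutating in place
        return Axis(np.asarray(self.extent) * factor, self.gridpoints)

    __rmul__ = __mul__

ax = Axis([0.0, 6.3], 100)
scaled = ax * (800e-9 / 6.3)            # replaces the setextent boilerplate
print(scaled.extent[1])                 # ~8e-07
```

Returning a new axis keeps the original untouched, which matches how numpy arrays behave under arithmetic.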
If there are multiple species defined in the epoch input.deck file with names like 'C6-1', 'C6-2', 'C6-3' (in my case for different types of C6), then the name gets truncated by postpic's listSpecies() function to 'C6'. In turn, it is not possible to use the output of listSpecies() to plot e.g. the number density of all species:
pas = [MS(dr, s) for s in dr.listSpecies()]
for pa in pas:
nd = pa.createField(MS.X, MS.Y, optargsh=optargsh,simextent=True)
raises the error
KeyError: 'Grid/Particles/C6'
as the keys are (correctly) named Grid/Particles/C6-1, Grid/Particles/C6-2, etc.
I think the error might be in this line
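For illustration, a species-name pattern that keeps a trailing `-<number>` tag instead of truncating it (this grouping is a guess at the fix, not postpic's actual regex):

```python
import re

# Keep an optional '-<number>' suffix as part of the species name, so
# 'C6-1' and 'C6-2' stay distinct instead of both collapsing to 'C6'.
pattern = re.compile(r'[A-Za-z]+\d*(?:-\d+)?')

keys = ['Grid/Particles/C6-1', 'Grid/Particles/C6-2', 'Grid/Particles/electron']
species = [pattern.fullmatch(k.split('/')[-1]).group(0) for k in keys]
print(species)  # ['C6-1', 'C6-2', 'electron']
```

With the suffix preserved, the output of listSpecies() would again match the dataset keys `Grid/Particles/C6-1` etc.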
Using higher order particle shapes causes the autoscaling of the ylimits or clim to break, because the minimal value is chosen to match the minimal value of the data, which can then be almost nothing. Any idea how to compute good limits for autoscaling?
Since EPOCH 4.7.0 the code outputs charge and mass as a per-species value. Make the EPOCH reader read it.
see
https://cfsa-pmw.warwick.ac.uk/EPOCH/epoch/issues/1221
https://cfsa-pmw.warwick.ac.uk/EPOCH/epoch/commit/5345706f26cd152d2b65428bdfe82dff5f1b1bf6
When I try to use the `contourlevels` option for a 2D plot, I get the error that a numpy array is not callable.
Change this line from `extent=field.extent())` to `extent=field.extent)`
The current plotter Class works, but could be much closer to the object oriented interface of matplotlib. This would give more freedom to the user such as arranging plots in subplots or choosing custom color scales.
This might become a PR to https://github.com/skuschel/postpic-examples instead.
The Analyzer submodule is a mess.
Numpy's histogram functions take longer than needed with a high number of bins. This takes a significant portion of the total processing time when dealing with more than a few 10^6 particles. There is already a numpy discussion on that topic:
numpy/numpy#2656
As well as a second possible workaround:
http://stackoverflow.com/questions/8805601/efficiently-create-2d-histograms-from-large-datasets
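The bincount workaround from the Stack Overflow link, sketched for fixed-width 1D bins (edge handling is simplified relative to np.histogram):

```python
import numpy as np

def fast_hist(data, bins, range_):
    """Fixed-width 1D histogram via np.bincount, which is often much
    faster than np.histogram for a large number of bins."""
    lo, hi = range_
    data = np.asarray(data)
    inside = (data >= lo) & (data <= hi)      # np.histogram counts [lo, hi]
    idx = ((data[inside] - lo) * (bins / (hi - lo))).astype(np.intp)
    idx = np.clip(idx, 0, bins - 1)           # puts hi itself into the last bin
    return np.bincount(idx, minlength=bins)

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
h = fast_hist(x, 1000, (-5.0, 5.0))
ref, _ = np.histogram(x, bins=1000, range=(-5.0, 5.0))
print(h.sum() == ref.sum())                   # same total count
```

Because the bins are equal-width, the bin index is a single multiply instead of a binary search over edges, which is where the speedup comes from.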
Add calculation of beam parameters for a particle analyzer that represents the particles of a bunch. Since the particle analyzer returns per-particle properties, this should probably be placed within a new class. Properties may be (to be completed):
Charge, emittance, normalized emittance,
average P, Px, Py, Pz, Energy, Gamma, divergence, ...
spread of P, Px, Py, Pz, Energy, Gamma, divergence, ...
Having a good and robust measure for the spread is essential. Therefore multiple methods should be implemented in the long run. There are two easy ways to go:
This issue focusses on the first. The algorithm to do this probably requires some discussion.
postpic needs at least numpy 1.7.0.
It seems logical to have the information available in how many dimensions the simulation was running, but this information is barely used and not even necessary to display the data. Remove it.
It should be easy to view any Field object in reciprocal space -- no matter what property it contains. Add `Field.fft(self, ...)` to transform, and ensure axis labels and ticks are correct afterwards.
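A toy sketch of what such a `Field.fft` could do for a 1D field: transform the data and build the matching reciprocal-space axis with `np.fft.fftfreq` (the signature and axis conventions below are assumptions):

```python
import numpy as np

def field_fft(data, extent):
    """Transform 1D field data and return the matching k axis
    (angular wavenumber), standing in for a Field.fft method."""
    n = len(data)
    dx = (extent[1] - extent[0]) / n
    kdata = np.fft.fftshift(np.fft.fft(data))
    k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    return kdata, k

x = np.linspace(0.0, 1.0, 256, endpoint=False)
data = np.sin(2 * np.pi * 8 * x)          # 8 oscillations over the box
kdata, k = field_fft(data, (0.0, 1.0))
peak_k = abs(k[np.argmax(np.abs(kdata))])
print(peak_k / (2 * np.pi))               # spatial frequency of the peak: 8.0
```

The key point for the issue is the second return value: the reciprocal axis (extent and ticks) has to be derived from the original grid spacing, not carried over unchanged.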
Slice the Field object the same way an array can be sliced.
Particle properties should all be handled the same way, and each of them should be allowed to be in one of 3 possible states:
The current installation routine does not (yet) seem to provide compatibility with MS Windows operating systems. Neither installation using setup.py nor pip was successful.
Some unfortunate users might nevertheless be bound to Windows, e.g. if one wants to install it on a machine maintained by DESY IT. Although PostPIC should be compatible with, e.g., python(x,y), I could track the errors down to some unsatisfying inabilities of Windows: creating the `_postpic_version.py` link in the main directory fails, so `import _postpic_version` breaks; `from postpic import _postpic_version` could work instead, provided `__init__.py` in the postpic subfolder is properly set up.
Finally, I feel the need to appreciate that the installation on my personal Ubuntu machine ran as smoothly as it could ever be. However, providing support for Windows might open postpic up to a broader range of users.
edit: I seem to be unable to add labels to this issue, my apologies
Seems like this attribute is never really used. It should be removed.
The bash scripts run smoothly on a linux machine, but not under Windows. They have to be rewritten, or Windows pendants have to be added, to make `run-tests` work under Windows. (Maybe include all tests in setup.py, so that `./setup.py test` runs everything? Having a single command will also make it very easy to set up code coverage statistics, #18.)
OpenPMD standard output based on hdf5 files needs to be added.
Running the tests should also include running the examples.
The run-tests script should autodetect whether nosetests2 or nosetests is available. On some systems, like Arch Linux, nosetests defaults to the python3 version, causing tests to fail.
In older postpic versions, it could detect the properties of particle species by inferring them from the name (like "electron" or "proton"). Therefore, I dumped my particles with
charge = never
mass = never
in EPOCH's input.deck. Now I want to plot the following field:
import postpic as pp
pp.chooseCode('EPOCH')
from postpic import MultiSpecies as MS
dump = pp.readDump('data/0002.sdf')
plotter = pp.plotting.plottercls(project='Test', autosave=False, reader=dump)
pas = [MS(dump, s) for s in dump.listSpecies()]
for pa in pas:
numberdensity = pa.createField('x', 'y', weights='Ekin_MeV')
p = plotter.plotField(numberdensity)
but I get the following error:
https://gist.github.com/stetie/6c0931def740815236ab0a8330598247
It seems to me that the new scalar properties are directly using the data from the EPOCH dump without the auto-detection used in older postpic versions. Is it possible to fix this?
In `ParticleAnalyzer.__init__` particle properties (extent and gridpoints) are always requested, even if they are not used anymore. This causes issues for simulations that didn't dump these properties, for example because people want to look at the particles only, not at the grid.
I have a species called 'Proton', which contains - obviously - protons. When I try to create a new field like
ekin = pa.createField(MS.X, MS.Y, weights=MS.Ekin_MeV, optargsh=optargsh, simextent=True)
it invokes the `identifyspecies()` function, which fails with the following error:
[...]/postpic/postpic/helper.py in identifyspecies(cls, species)
180 if regexdict['elem']:
181 ret['mass'] = float(cls._masslistelement[regexdict['elem']]) * \
--> 182 1836.2 * cls.me
183 if regexdict['elem_c'] == '':
184 ret['charge'] = 0
KeyError: 'Pr'
It seems like `identifyspecies()` doesn't know protons, and it also doesn't give a meaningful error message.
A command line interface should provide an easy way to inspect dumps, show information, and at some point in the future also to plot data.
is not possible yet. Find something to start with for easy plots. Paraview? mayavi?
An interactive jupyter interface to navigate through the data
If I set the option `savecsv=True` when calling `plotField()`, either of these happens:
postpic/postpic/plotting/plotter_matplotlib.py in plotFields1d(self, *fields, **kwargs)
325 ax = fig.add_subplot(1, 1, 1)
326 name = kwargs.pop('name', fields[0].name)
--> 327 MatplotlibPlotter.addFields1d(ax, *fields, **kwargs)
328 self._plotfinalize(fig)
329 self.annotate(fig, project=self.project)
/home_4TB/stietze/postpic/postpic/plotting/plotter_matplotlib.py in addFields1d(ax, *fields, **kwargs)
234 if clearinfos:
235 field.infos = []
--> 236 MatplotlibPlotter.addField1d(ax, field, **kwargs)
237 MatplotlibPlotter.annotate_fromfield(ax, field)
238 MatplotlibPlotter.annotate(ax, infostring=str(infostrings))
TypeError: addField1d() got an unexpected keyword argument 'savecsv'
In either case, no csv file is written.
Some lower-level functions currently do not start with an underscore `_`. This is problematic for the design and also confusing for users if they find 7 methods to create a histogram...
However, if someone is already using one of them, this change will not be backwards compatible. It should therefore include all changes at once. Some functions might need to be renamed for consistency. This issue should serve as a list to collect and discuss the changes:
old name | new name | reason
---|---|---
`createHistgramField1d` | `_createHistgramField1d` | use `createField` instead
`createHistgramField2d` | `_createHistgramField2d` | use `createField` instead
`createHistgramField3d` | `_createHistgramField3d` | use `createField` instead

Update `createField`'s docstring accordingly. Also rename `cythonfunctions.pyx` to `_cythonfunctions.pyx`; those functions should only be used via `postpic.helper`.
`postpic.MultiSpecies.compress` alters the current object, as opposed to `numpy.compress`, which returns the compressed object. Changing this might make things more coherent, and would also allow removing the `uncompress` function.
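A toy sketch of the numpy-style behaviour, where compress returns a new object and leaves the original untouched (the class and attribute names are stand-ins, not postpic's):

```python
import numpy as np

class Species:
    """Toy MultiSpecies stand-in to contrast in-place vs. returning compress."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def compress(self, condition):
        # numpy-style: build and return a NEW compressed object; self is
        # untouched, so no uncompress() is ever needed
        return Species(np.compress(condition, self.weights))

s = Species([1.0, 2.0, 3.0, 4.0])
heavy = s.compress(s.weights > 2)
print(len(heavy.weights), len(s.weights))  # 2 4
```

Keeping the original intact means a filtered view can simply be dropped when done, which is the coherence gain the issue is after.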
Particle subsets using the `subset` block are currently not recognized by postpic. We need a good description of the key pattern under which the subset can be found in the .sdf file. @hollatz can you give one, please?
Add `mean(self, func)` and `var(self, func)` to https://github.com/skuschel/postpic/blob/master/postpic/analyzer/particles.py#L559 such that mean and variance can be calculated from any per-particle property, taking the weight into account. What else to add here? `median` and `quantile` maybe?
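The requested `mean` and `var` reduce to weighted averages over the macro-particle weights; a numpy sketch (the method names follow the issue, the bodies and the values/weights interface are assumptions):

```python
import numpy as np

def mean(values, weights):
    """Weighted mean of a per-particle property."""
    return np.average(values, weights=weights)

def var(values, weights):
    """Weighted variance: average squared deviation from the weighted mean."""
    m = np.average(values, weights=weights)
    return np.average((np.asarray(values) - m) ** 2, weights=weights)

v = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0, 2.0])
print(mean(v, w))  # (1 + 2 + 2*3) / 4 = 2.25
print(var(v, w))   # (1.25^2 + 0.25^2 + 2*0.75^2) / 4 = 0.6875
```

A weighted `quantile` would need a sort plus a cumulative-weight search, so it is a bit more work than these two one-liners.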
When plotting, postpic always uses the predefined colormaps `jet` and `MatplotlibPlotter.symmap`. To manually change the colormap, I have to use a command like
p = plotter.plotField(nd, log10plot=False)
p.axes[0].images[0].set_cmap('viridis')
I would like to have a `cmap` keyword for the `plotField()` function. If I understood the code correctly, the only change necessary would be that `addField2d()` takes a `cmap=` argument and passes it to `imshow()`. If no `cmap=` keyword is given, the default colormaps should be used.
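A minimal sketch of the proposed pass-through, with stand-in functions (the real `plotField`/`addField2d` presumably wrap matplotlib's `imshow`; everything below is illustrative only):

```python
def add_field_2d(field, cmap=None):
    """Stand-in for addField2d: accept an optional cmap and fall back to
    the current hard-coded default when none is given."""
    if cmap is None:
        cmap = 'jet'                      # current default colormap
    # stands in for ax.imshow(field, cmap=cmap)
    return {'field': field, 'cmap': cmap}

def plot_field(field, cmap=None, **kwargs):
    """Stand-in for plotField: just thread the keyword through."""
    return add_field_2d(field, cmap=cmap)

print(plot_field([[0, 1]], cmap='viridis')['cmap'])  # viridis
print(plot_field([[0, 1]])['cmap'])                  # jet
```

Threading the keyword through keeps the default behaviour unchanged while removing the need to reach into `p.axes[0].images[0]` after the fact.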