magritte-code / magritte
A modern software library for simulating radiation transport.
Home Page: https://magritte.readthedocs.io
License: GNU General Public License v3.0
If Anaconda is properly installed, the magritte conda environment can be installed from the environment file, named conda_env.yml, which can be found here. To download it, run:
wget https://github.com/Magritte-code/Magritte/raw/stable/dependencies/conda_env.yml
Now conda can be used to create an environment from this file by running:
conda env create -f conda_env.yml
More information about the Magritte installation process can be found here.
In solvers.tpp, measures are taken to ensure that we do not step over peaked line centers (due to Doppler shifts). This is done by computing the line width and limiting the maximal allowed Doppler shift to a fraction of it. However, that limiting factor is currently computed for only a single point on every ray, so it might be inaccurate for the rest of the ray.
Therefore, this limiting factor should be computed for every distance increment individually.
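The proposed per-increment limit could look something like the following minimal sketch. The function name, the endpoint-width inputs, and the fraction value are illustrative assumptions, not Magritte's actual API:

```python
def max_doppler_step(local_widths, fraction=0.35):
    """For each distance increment along a ray, limit the allowed Doppler
    shift to a fraction of the local line width, instead of using a single
    factor computed once per ray. `local_widths` holds the (thermal +
    turbulent) line width at each point on the ray; the narrower of the
    two endpoint widths bounds the shift for that increment."""
    limits = []
    for w_start, w_end in zip(local_widths[:-1], local_widths[1:]):
        # the smaller endpoint width is the safe bound for this segment
        limits.append(fraction * min(w_start, w_end))
    return limits
```

Taking the minimum of the two endpoint widths guarantees the limit is safe even when the line width varies strongly along the increment.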
This issue is related to openjournals/joss-reviews#3905.
This repo lacks a contributing guide.
The FITS file is fine but some information could be added to its header. The program I am using to analyse the FITS files (GILDAS) uses more parameters to perform the plotting correctly.
I will attach the header produced by Magritte versus a header produced by Lime, which contains more of the header parameters that GILDAS uses. They are not files created from the same model, but I think this can show some of the missing fields that I would need (in case it's helpful for you to see the two files created from the same model, I could try that and send them to you). I am not sending you the whole FITS files because they are too large; the Magritte example I am using is the analytic spiral, and for Lime, the sample model given on GitHub (and I can try to send them to you another way if needed).
Most of the fields missing are important for GILDAS to work correctly. For instance, CUNITs, CTYPEs, CRPIXs, CRVALs, CDELTs, EPOCH, RESTFREQ... I understand that this information is stored in the HDF5 model file, right? Would it be possible to add those parameters to the FITS header? Even just adding some of them might work, but I would need to test it (I am not that familiar with GILDAS). I also attach the comparison of the headers created by GILDAS for both of the files with the important information missing marked with ** at the end of the line (I don't know if this can help you see which parameters make a change, but just in case).
For determining the collisional species when reading a LAMDA file, one only has to consider the leading index (see the LAMDA database paper):
Page 9:
This is followed by NPART blocks of collisional data:
Lines 13+NLEV+NLIN and 14+NLEV+NLIN give the collision partner ID and reference. Valid
identifications are: 1 = H2, 2 = para-H2, 3 = ortho-H2, 4 = electrons, 5 = H, 6 = He, 7 = H+.
We are currently doing some serious regex work (see magritte/setup.py) to figure out which collisional species is meant, instead of merely reading the leading number.
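Reading the leading integer directly is a one-liner. A minimal sketch, with the ID-to-species mapping taken from the LAMDA table quoted above (function name and the exact species labels are illustrative):

```python
# Map from LAMDA collision-partner ID (the leading integer on the line)
# to the partner species, per the LAMDA database paper, page 9.
LAMDA_PARTNERS = {1: "H2", 2: "p-H2", 3: "o-H2", 4: "e-", 5: "H", 6: "He", 7: "H+"}

def collision_partner(line):
    """Identify the collision partner from the leading ID on the
    'collision partner ID and reference' line, instead of regexing
    the free-form reference text."""
    partner_id = int(line.split()[0])
    return LAMDA_PARTNERS[partner_id]
```

This sidesteps the regex entirely and is robust to whatever free-form text follows the ID.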
Plotly plots will probably not work "out of the box" in Jupyter notebooks or JupyterLab when installing Magritte and the Magritte conda environment.
Plotly requires additional extensions to be able to render plots in a Jupyter notebook or JupyterLab.
These extensions can be found in the plotly documentation.
When running the analytical benchmark all_constant_single_ray.py and increasing the velocity field by a factor of 10 (as I was curious whether the boundary condition still works in high-velocity fields, see #95 ), I encountered segfaults (more specifically, munmap_chunk(): invalid pointer). After some brief investigation, it seems that the allocated size of the arrays for the Feautrier solver is insufficient. My hunch is that the nonzero velocity at the inner boundary could have something to do with it.
However, it is possible that #93 will fix this issue anyway, as we then no longer need to take into account all the frequency interpolations.
By default, the opacity and emissivity are computed using all lines. However, as the influence of a line drops off like a Gaussian, much redundant computation is done in the case of many lines. An attempt to alleviate this is made using the single line approximation.
However, this approximation only takes into account a single line for each ray (ignoring that Doppler shifts may cause another line to lie closer). Therefore, another solution must be found.
If we were still using the general approach, one would need to keep track of which lines lie close to the given frequency. This can be done by checking the Doppler shift: checking one side to add new lines (if they lie close enough) and the other side to remove old lines (if they lie too far). For an efficient implementation, one needs to add this checking during set_data (for a data-sparse implementation) in solver.tpp, and one needs to precompute the sorted line frequencies (for efficiently computing which lines to add/remove).
In the single line approximation, similar things can be done; however, only a single line should be taken into account.
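With precomputed sorted line frequencies, the set of nearby lines can be found with two binary searches, or maintained incrementally with two pointers as one advances along the ray. A minimal sketch (names and the interface are assumptions, not Magritte's code):

```python
import bisect

def active_lines(sorted_line_freqs, nu, max_shift):
    """Return the indices (into the sorted array) of lines whose center
    frequency lies within `max_shift` of `nu`. Because the frequencies
    are sorted, this takes two binary searches; as `nu` increases along
    a ray, both window ends only move forward, so the window can also be
    maintained incrementally with two pointers."""
    lo = bisect.bisect_left(sorted_line_freqs, nu - max_shift)
    hi = bisect.bisect_right(sorted_line_freqs, nu + max_shift)
    return range(lo, hi)
```

For the single line approximation, the same search reduces to picking the one index whose frequency is closest to `nu`.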
Find an alternative for Travis-CI.
Here we should collect ideas on how to make Magritte more user friendly.
Jupyter notebooks as basis?
This page is currently empty.
https://magritte.readthedocs.io/en/stable/2_benchmarks/0_analytic_benchmarks.html
This is related to the review in openjournals/joss-reviews#3905
Currently, we use long doubles for most of our solver computations (typedef long double Real; in tools/types.hpp). However, it might be that the solvers do not require such high-precision arithmetic. Therefore, we should try out lower-precision datatypes and see whether the obtained results are still reasonably accurate (and, of course, faster).
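A quick way to gauge the sensitivity before touching the C++ is to rerun a representative reduction in both precisions and compare. A minimal sketch (the summation stands in for a solver computation; all names here are illustrative):

```python
import numpy as np

def compare_precisions(n=100000, seed=0):
    """Sum the same random data in float32 and float64 and return the
    relative difference, as a rough proxy for whether lower-precision
    arithmetic would still be reasonably accurate."""
    rng = np.random.default_rng(seed)
    data = rng.random(n)
    s64 = np.sum(data.astype(np.float64))
    s32 = np.sum(data.astype(np.float32), dtype=np.float32)
    return abs(float(s32) - s64) / s64
```

The real test would of course be rerunning the analytic benchmarks with `Real` redefined and checking the error norms there.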
In the single line approximation, one needs to be able to estimate the frequency of the closest line. However, due to sorting the frequencies (and not the line frequencies), the indices get mismatched...
Make all parameters equal.
Warning: currently, the parameters that are not Meyers singletons are not available everywhere!
When writing out the species in the model, dummy behavior without comment is used. It seems that we only care about the number of species, but species.hpp also has a data field with species names available. I propose changing the current behavior to also incorporate reading/writing of species names.
Let the user explicitly provide units (using astropy) and do conversions internally during setup.
Create a python class that constructs Magritte models and can convert between different versions.
One of Magritte's key points is its flexibility regarding data structures etc... exploit it!
The currently implemented short-characteristics method is only first-order accurate, and thus slightly inaccurate in regions where the source function can change significantly (i.e., transition regimes in high optical depth NLTE models). Using a second-order accurate interpolation method for the source can result in negative interpolated values, so we might need to consider using Bézier interpolants, as in Rodriguez and Piskunov 2013.
This analytic python test does not run. This is due to a bug in line 147 (tau() has too few arguments).
It is not actually used for much (which explains why it has been broken for at least two years according to the git history), but any test on the stable branch should work properly.
Currently, the most expensive step of solving the radiative transfer equation is computing the opacities and emissivities at every frequency and point. This is due to needing to sum over the opacities/emissivities of all individual lines.
However, most lines are always out of range of the frequency under consideration, wasting a lot of computation time (O(N^2) instead of O(N) scaling in the number of lines). An easy fix would be to predefine a frequency range (accounting for Doppler shifts) for each ray(/point on ray) computation and only use the lines within that range.
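The easy fix above can be sketched as follows: sort the line frequencies once, then restrict the opacity sum to lines within a few line widths of the evaluation frequency. The Gaussian profile, the cutoff of five line widths, and all names are illustrative assumptions:

```python
import bisect
import math

def line_opacity(nu, line_freqs_sorted, strengths, width, n_widths=5.0):
    """Sum Gaussian line profiles at frequency `nu`, but only over lines
    within `n_widths` line widths of `nu`; more distant lines contribute
    negligibly. `line_freqs_sorted` must be sorted and aligned with
    `strengths`. The window lookup costs O(log N) instead of O(N)."""
    lo = bisect.bisect_left(line_freqs_sorted, nu - n_widths * width)
    hi = bisect.bisect_right(line_freqs_sorted, nu + n_widths * width)
    chi = 0.0
    for i in range(lo, hi):
        x = (nu - line_freqs_sorted[i]) / width
        chi += strengths[i] * math.exp(-x * x)
    return chi
```

The predefined range per ray would additionally have to be widened by the maximal Doppler shift along that ray.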
Currently, incrementing the version number happens after testing the build. This can lead to some time lag between updating the code in the repository and incrementing the version number. In some cases, this can lead to people having a wrong (old) version number. Thus it is better to switch this ordering.
In magritte, we currently have no way of adding maser-like conditions to any model, as the optical depth/opacity is always bounded from below (see model/parameters/parameters.hpp) by a very small value to avoid numerical issues.
However, if we were to merely take the sign of the optical depth/opacity into consideration and bound appropriately, the Feautrier solver would compute wrong results.
A solution would be to switch to another solver (e.g. short-characteristics) in the case of masers, although this needs to be clearly documented.
In solver.tpp, two different multithreading functions from Paracabs are currently used: threaded_for only uses the CPU, while accelerated_for uses the GPU if available.
The exact performance of both implementations should be tested (and, if possible, documented). In the end, I estimate that only one of the two should remain in solver.tpp. Otherwise, documentation stating why this mix of paradigms is used should be added.
I installed Magritte according to the documentation and then opened the above-mentioned example notebook.
Line 14 returns
FileNotFoundError: [Errno 2] No such file or directory: 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'
/bin/sh: 1: gmsh: not found
I suspected this came up because gmsh is installed via conda, so subprocess.Popen, not being aware of the conda env, cannot find it. Indeed, installing gmsh system-wide gets me one step further.
Rerunning the same cell now produces:
Info : Running 'gmsh -0 Analytic_disk/analytic_disk_convert_to_pos.geo' [Gmsh 4.1.3, 1 node, max. 1 thread]
Info : Started on Mon Nov 29 14:05:54 2021 Info : Reading 'Analytic_disk/analytic_disk_convert_to_pos.geo'...
Info : Done reading 'Analytic_disk/analytic_disk_convert_to_pos.geo'
Info : Writing 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'...
Info : Done writing 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'
Info : Stopped on Mon Nov 29 14:05:54 2021
Error : Unknown number option 'General.NativeFileChooser'
Warning : Unable to open file 'Analytic_disk/Analytic_disk/analytic_disk.msh'
Error : 'Analytic_disk/analytic_disk_convert_to_pos.geo', line 2 : Unknown view 0
For problems involving polarization, it is necessary to evaluate higher-order generalizations of the mean intensity. Although you might want to let Magritte solve the generalized non-LTE problem including polarization in the future, a first, much simpler step is to let Magritte compute the simplest higher-order tensor (the anisotropy) and use it externally to compute the atomic polarization from the mean intensity and the anisotropy. The only change would be to add a new quantity, J20, that is computed alongside the mean intensity and contains the geometric weight 3*cos(theta)^2 - 1, where theta is defined with respect to a pre-defined suitable reference system. This would be pretty straightforward to add in the code. However, directional boundary conditions (for instance, to define a nearby star) are also needed. I don't know whether your plans include allowing for these directional BCs, because the Solver::boundary_intensity function doesn't seem to add information about the specific ray. I'll try to implement these changes in a fork and, if successful, open a pull request in case you're interested.
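The proposed J20 accumulation can be sketched alongside the mean intensity as below. The angular quadrature interface and function name are assumptions, and any normalization convention for J20 is deliberately omitted; only the geometric weight from the comment above is used:

```python
import math

def mean_intensity_and_J20(intensities, thetas, weights):
    """Accumulate the mean intensity J and the anisotropy J20 over a set
    of ray directions. J20 carries the geometric weight
    (3*cos(theta)**2 - 1), with theta measured w.r.t. a pre-defined
    reference axis. `weights` are the angular quadrature weights."""
    J = 0.0
    J20 = 0.0
    for I, theta, w in zip(intensities, thetas, weights):
        J += w * I
        J20 += w * I * (3.0 * math.cos(theta) ** 2 - 1.0)
    return J, J20
```

Since the weight only depends on the ray direction, it can be precomputed per ray, making the extra cost of J20 a single multiply-add per frequency.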
I'd add a bit more information to the README.md regarding the field(s) of research in which Magritte can be applied. Is it limited to the astrophysical domain, or is it more broadly applicable (e.g. plasma physics)?
Rudimentary. Only very few C++ objects and no Python objects are documented.
turbulence(rr): rr is not used in the function body.
wdir = os.path.abspath(os.path.join(os.getcwd(), 'Analytic_disk'))
Cell [6] then produces filenames such as "Analytic_diskco.txt" (without the "/" dir separator). I suggest making use of os.path.join() to construct paths (here and everywhere else in the code if applicable).
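The difference between the two path constructions can be seen in a few lines; the 'co.txt' filename is taken from the report above:

```python
import os

# os.path.join inserts the platform's separator between components,
# while plain string concatenation silently drops it.
wdir = os.path.abspath(os.path.join(os.getcwd(), 'Analytic_disk'))
good = os.path.join(wdir, 'co.txt')  # .../Analytic_disk/co.txt
bad = wdir + 'co.txt'                # .../Analytic_diskco.txt
```

Using os.path.join (or pathlib's `/` operator) everywhere avoids this class of bug entirely.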
%%capture
mesher.convert_msh_to_pos (f'{bmesh_name}.msh', replace=True)
Error : Unknown number option 'General.NativeFileChooser'
Error : Unknown entity 1 of dimension 0
Error : Could not read vertices
Error : Error loading '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_disk/analytic_disk.msh'
Error : '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_disk/analytic_disk_convert_to_pos.geo', line 2 : Unknown view 0
mesher.convert_msh_to_pos (f'{bmesh_name}.msh', replace=True)
Error : Unknown number option 'General.NativeFileChooser'
Error : Unknown entity 1 of dimension 0
Error : Could not read vertices
Error : Error loading '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_spiral/analytic_spiral.msh'
Error : '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_spiral/analytic_spiral_convert_to_pos.geo', line 2 : Unknown view 0
I'll be happy to test the other examples but this should probably be sorted out first. Maybe worth a separate issue.
As this project is maturing, a style guide should be made.
This would enable people to more easily start contributing to this code and should make the code in general more readable by introducing an enforced consistent style.
A non-expert reader may find the summary too technical and dominated by domain-specific jargon. I'd move the first three sentences of the "Statement of need" section to the "Summary" section to make the latter more accessible. Otherwise, the section (like the remainder of the paper) is extremely clearly written and transparent.
There is a typo on page 3, line 43: Furtherore → Furthermore.
As mentioned elsewhere, please delineate precisely the realm of applicability of the code, i.e. whether it can be applied to objects other than astrophysical objects.
Figure 1 is referenced in the text but it remains unclear how to connect it to the statement "Their intricate morpho-kinematics, moreover, makes their appearance in observations far from evident (e.g., Figure 1)."
The authors claim that their software can be used to characterize complex structures. The inclined reader will find tests and benchmarks in the referenced work (in particular De Ceuster 2019 and 2020). This could be emphasized again in the software paper. Are these benchmarks obtained from other radiative transport codes or did you also compare to other methods such as e.g. particle-in-cell simulations?
Out of personal curiosity, is there a fundamental difficulty in modifying the method to simulate continuum radiative transfer? Would this have to be considered in the mentioned treatment of non-linear coupling?
Some preliminary investigation into optically thick water models shows that some issues can arise when applying accelerated lambda iteration: negative level populations can appear. I think this is due to masing, in which the intensity contributes to the source function with a factor greater than one, leading to negative values in the level populations computation.
When using pybindings, one has the option to pass variables as read_write or readonly (see pybinding.cpp). However, it seems that more variables can be written than makes sense. Examples include the lambda operator, the various references to subclasses, and things that are results of computations.
When accidentally setting a model parameter twice, magritte throws a DoubleSetException, which is as intended. However, the resulting error message does not tell us which parameter has accidentally been set twice, making debugging unnecessarily difficult. Therefore, I suggest adding the variable name to the debug message (one way to do this is by adding another field, std::string name, to the SetOnce class).
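The idea can be illustrated with a small Python sketch of a set-once parameter carrying its own name (mirroring the proposed std::string name field; the class details are assumptions, not the actual C++ implementation):

```python
class DoubleSetException(Exception):
    pass

class SetOnce:
    """A parameter that may be set only once and that knows its own name,
    so a double set produces an error message naming the culprit."""
    def __init__(self, name):
        self.name = name
        self._is_set = False
        self._value = None

    def set(self, value):
        # re-setting the same value is harmless; a different value is an error
        if self._is_set and value != self._value:
            raise DoubleSetException(f"parameter '{self.name}' was already set")
        self._value = value
        self._is_set = True

    def get(self):
        return self._value
```

The error message then directly points at the offending parameter instead of leaving the user to bisect their setup script.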
For setting up magritte, the script get_gsmh.sh installs a hardcoded version of gmsh. This should be replaced with the stable version available at https://gmsh.info/bin/Linux/gmsh-stable-"OS"-sdk.tgz if no issues arise from this substitution.
In order to properly handle negative optical depths, some changes should be made to the way we handle the minimal optical depth increment. Currently, we just add some very small number; this does not take negative optical depths into account.
Recently, automated builds have been implemented. However, we have not taken into account that HTTP errors might ruin these builds. As HTTP errors are erratic, we might want to retry the HTTP step automatically a few times before concluding that the build fails.
Thus it would be helpful to implement some way to retry steps. This would remove the need to manually rerun the entire build.
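A generic retry wrapper along these lines would do; the function name, attempt count, and delay are illustrative, and in CI the same pattern is often available as a ready-made action or shell loop:

```python
import time

def with_retries(step, attempts=3, delay=1.0):
    """Run a build step that may fail with transient (e.g. HTTP) errors,
    retrying up to `attempts` times before giving up. `step` is any
    zero-argument callable; its return value is passed through."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise  # exhausted: surface the real error
            time.sleep(delay)
```

Only the flaky network step is wrapped, so genuine build failures still fail fast after the retries are exhausted.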
In masers, I have observed that Ng acceleration might lead to negative values of the level populations. This is quite unsettling; we might need to check whether Ng acceleration can still be used, or at least check the computed level populations for negative values...
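The minimal safeguard mentioned above is a post-step check that rejects an accelerated update when it produces negative populations. A sketch (names are illustrative; the fallback-to-unaccelerated policy is one possible choice):

```python
def check_ng_step(pops_new, pops_old):
    """After an Ng-accelerated update, fall back to the previous
    (unaccelerated) level populations if the extrapolation produced
    any negative values; otherwise accept the accelerated step."""
    if any(n < 0.0 for n in pops_new):
        return pops_old
    return pops_new
```

Alternatives would be to clip negatives and renormalize, or to skip acceleration entirely for maser transitions; which policy converges best would need testing.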
Do not store the square of the turbulence (w.r.t. the speed of light),
model.thermodynamics.turbulence.vturb2
as this has caused several mistakes!
Just store the turbulence:
model.thermodynamics.turbulence.vturb
In light of an upcoming internal change in datatypes, we should think about what actual minimal distance is allowed in models. This theoretical minimal distance will probably exist due to rounding errors when dividing by almost-zero optical depth increments. For most practical models, this minimum will probably lie below any real use case of the code, but it might be practical to be able to query what minimal distance the code allows.
The current dependency management (downloading everything in the dependency directory) is not very elegant. Any suggestions to improve this are very welcome.
In the recent implementation of automated testing c9122a8, a new option was added to specify which compilers to use. By setting the environment variables CC and CXX, one can define the C and C++ compilers, respectively. This might be useful for macOS users, as gcc and g++ by default resolve to clang (which doesn't support OpenMP) on macOS.
Since the new method for computing optical depths 1b6d6e3, we no longer compute the opacity (nor the inverse opacity). Unfortunately, the lambda operator computation did depend on this. Somehow, our automated tests let this regression pass. Thus, I will need to do some refactoring to get it working again.
Because Magritte model parameters are only allowed to be set once, changing the available parameters might mean that old (benchmark) models can no longer be run on current magritte versions. Either we add this behavior to the docs, or we at least fix the benchmark files.
If any of the examples gives an error on importing VTK, this most likely means VTK did not get installed properly.
(This seems to happen in particular with the latest version of conda: 4.10.3.)
This can be resolved by removing and reinstalling VTK, i.e.
conda remove vtk
conda install -c conda-forge vtk
In solver.tpp, get_Lambda requires us to know which exact quadrature point a given frequency belongs to. However, due to line overlap, the frequency order might change. As the frequency vector is sorted at every point, the indices can change from point to point. In the current implementation, the index determination is static (the same for all points), so a bug could be encountered in the presence of overlapping lines.
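One way to keep the mapping correct is to record, per point, the permutation produced by the sort, so each sorted frequency can be traced back to its quadrature index. A sketch of the idea (names and the list-based interface are assumptions):

```python
def sorted_to_quadrature(original_freqs):
    """Sort the frequency vector at one point and return both the sorted
    frequencies and, for each entry, the index of the original
    (quadrature) frequency it came from. Computing this per point keeps
    the mapping valid when line overlap reorders the frequencies."""
    order = sorted(range(len(original_freqs)), key=original_freqs.__getitem__)
    sorted_freqs = [original_freqs[i] for i in order]
    return sorted_freqs, order
```

Storing `order` alongside the sorted vector lets get_Lambda look up the correct quadrature point in O(1) at each position along the ray.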
After every statistical equilibrium step, the previous level populations (and residuals) get stored.
However, we do not put a limit on how many previous results we store; thus the memory size may increase arbitrarily.
Either we need to remove this functionality or put a limit on the number of previous iterations we store.
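If the history is kept, bounding it is a one-line change with a fixed-size buffer; a sketch (the history length of 8 is an arbitrary illustrative choice):

```python
from collections import deque

# Keep at most `max_history` previous level populations (and residuals);
# older entries are discarded automatically, bounding memory use.
max_history = 8
previous_pops = deque(maxlen=max_history)
previous_residuals = deque(maxlen=max_history)

for iteration in range(20):
    pops = [iteration]  # stand-in for this step's level populations
    previous_pops.append(pops)
    previous_residuals.append(0.0)  # stand-in for this step's residual
```

The oldest entries are dropped silently once the limit is reached, which is exactly the behavior needed for schemes (like Ng acceleration) that only ever use the last few iterates.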
As the Magritte library is still quite new, the documentation is not yet on point everywhere.
This issue will contain references to parts of the library which do not yet have a clear documentation. It will be updated when parts are noted where the documentation may be improved.
Current documentation issues:
Unused stuff that probably no longer needs to be there:
In the benchmarks folders, there are some plots that do not belong there.
Maybe move them to documentation or somewhere else.