magritte-code / magritte

A modern software library for simulating radiation transport.

Home Page: https://magritte.readthedocs.io

License: GNU General Public License v3.0

CMake 0.92% Shell 0.12% C++ 23.19% Python 16.57% Jupyter Notebook 59.21%
astrophysics physics-simulation radiative-transfer

magritte's People

Contributors

freddeceuster, github-actions[bot], lint-action, matsesseldeurs, owenvm, thomasceulemans


magritte's Issues

Magritte Anaconda environment

If Anaconda is properly installed, the magritte conda environment can be installed from the environment file, named conda_env.yml, which can be found here. To download it, run:

wget https://github.com/Magritte-code/Magritte/raw/stable/dependencies/conda_env.yml

Now conda can be used to create an environment from this file by running:

conda env create -f conda_env.yml

More information about the Magritte installation process can be found here.

Solvers: determine maximal doppler shift for every step individually

In solvers.tpp, measures are taken to ensure that we do not step over peaked line centers (due to Doppler shifts). This is done by computing the line width and limiting the maximal allowed Doppler shift to a fraction of it. However, that limiting factor is currently computed only for a single point on every ray, so it might be inaccurate for the rest of the ray.
Therefore, this limiting factor should instead be computed for every distance increment individually.

Extend FITS header.

The FITS file is fine but some information could be added to its header. The program I am using to analyse the FITS files (GILDAS) uses more parameters to perform the plotting correctly.

I will attach the header produced by Magritte and a header produced by Lime, which contains more of the header parameters that GILDAS uses. They were not created from the same model, but they should show some of the missing fields I need (if it would help to see the two files created from the same model, I could generate and send them). I am not sending the whole FITS files because they are too large; the Magritte example I am using is the analytic spiral, and for Lime the sample model given on GitHub (I can try to send them another way if needed).

Most of the missing fields are important for GILDAS to work correctly, for instance CUNITs, CTYPEs, CRPIXs, CRVALs, CDELTs, EPOCH, RESTFREQ... I understand this information is stored in the HDF5 model file, right? Would it be possible to add those parameters to the FITS header? Even adding just some of them might work, but I would need to test it (I am not that familiar with GILDAS). I also attach a comparison of the headers created by GILDAS for both files, with the important missing information marked with ** at the end of the line, in case that helps show which parameters make a difference.

lime_header.txt
GILDAS_header_comp.txt
magritte_header.txt

LAMDA file read. Slight non-compliance with data format

For determining the collisional species when reading a LAMDA file, one only has to consider the index in front (see the LAMDA database paper):
Page 9:

This is followed by NPART blocks of collisional data:
Lines 13+NLEV+NLIN and 14+NLEV+NLIN give the collision partner ID and reference. Valid
identifications are: 1 = H2, 2 = para-H2, 3 = ortho-H2, 4 = electrons, 5 = H, 6 = He, 7 = H+.

We are currently doing some serious regex work (see magritte/setup.py) to figure out which collisional species it is, instead of merely reading the first number.
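As a sketch of the simpler approach (illustrative code, not Magritte's actual setup.py), the partner can be identified by the leading integer ID alone, since the rest of the line is a free-form reference/comment:

```python
# Sketch: identify the collision partner in a LAMDA file by reading the
# leading integer ID, as the format specifies, instead of pattern-matching
# the free-text comment that follows it.
PARTNER_IDS = {
    1: "H2", 2: "para-H2", 3: "ortho-H2",
    4: "electrons", 5: "H", 6: "He", 7: "H+",
}

def collision_partner(line: str) -> str:
    """Return the partner name from a LAMDA collision-partner line.

    The format guarantees the line starts with the integer ID;
    everything after it is a free-form reference/comment.
    """
    partner_id = int(line.split()[0])
    return PARTNER_IDS[partner_id]

# Example lines as they might appear in a LAMDA file (content after the
# ID is illustrative):
print(collision_partner("2 CO-pH2 from Yang et al. (2010)"))  # para-H2
print(collision_partner("3 CO-oH2 from Yang et al. (2010)"))  # ortho-H2
```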

Possible segfault when running 1D spherically symmetric models

When running the analytical benchmark all_constant_single_ray.py with the velocity field increased by a factor of 10 (as I was curious whether the boundary condition still works in high-velocity fields, see #95), I encountered segfaults (more specifically, munmap_chunk(): invalid pointer). After some brief investigation, it seems that the allocated size of the arrays for the Feautrier solver is insufficient. My hunch is that the nonzero velocity at the inner boundary could have something to do with it.

However, it is possible that #93 will fix this issue anyway, as we then no longer need to take into account all the frequency interpolations.

Treatment of distant lines during opacity/emissivity computation

By default, the opacity and emissivity are computed using all lines. However, as the influence of a line drops like a Gaussian, a lot of redundant computation is done when there are many lines. An attempt to alleviate this is made using the single-line approximation.
However, this approximation only takes a single line into account for each ray (ignoring that Doppler shifts may cause another line to lie closer). Therefore another solution must be found.
If we were to keep the general approach, one needs to keep track of which lines lie close to the given frequency. This can be done by checking the Doppler shift: on one side adding new lines (if they come close enough), and on the other side removing old lines (if they drift too far). For an efficient implementation, this bookkeeping needs to happen during set_data (for a data-sparse implementation) in solver.tpp, and the sorted line frequencies need to be precomputed (to efficiently determine which lines to add or remove).
In the single-line approximation, similar bookkeeping can be done; however, only a single line should be taken into account.
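The add/remove bookkeeping described above can be sketched as a two-pointer sliding window (names and data here are hypothetical, not Magritte's internals). Because both the line centers and the frequencies we visit are sorted, each pointer only ever moves forward, so the total cost is O(nlines + nfreqs) rather than O(nlines * nfreqs):

```python
# Sketch: maintain a window [lo, hi) of line indices lying within
# `cutoff` of the current comoving-frame frequency. Assumes both
# sorted_line_freqs and query_freqs are sorted ascending.
def active_lines(sorted_line_freqs, query_freqs, cutoff):
    lo = hi = 0
    n = len(sorted_line_freqs)
    windows = []
    for nu in query_freqs:
        while lo < n and sorted_line_freqs[lo] < nu - cutoff:
            lo += 1            # drop lines that fell too far behind
        while hi < n and sorted_line_freqs[hi] <= nu + cutoff:
            hi += 1            # admit lines that came into range
        windows.append((lo, hi))   # lines [lo, hi) are "close" to nu
    return windows

lines = [100.0, 101.0, 105.0, 110.0]   # sorted line centres (arbitrary units)
print(active_lines(lines, [99.5, 104.0, 109.0], 1.5))
# [(0, 2), (2, 3), (3, 4)]
```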

Possible internal change of datatypes

Currently, we use long doubles for most of our solver computations (typedef long double Real; in tools/types.hpp). However, the solvers may not require such high-precision arithmetic. Therefore we should try out lower-precision datatypes and check whether the obtained results are still reasonably accurate (and, of course, obtained faster).
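One cheap way to gauge the precision question, sketched in Python (Magritte's Real is a C++ typedef; this only illustrates the accuracy check), is to round-trip a computed quantity through 32-bit floats and inspect the relative error:

```python
# Sketch: struct round-trips a Python float through IEEE 754 single
# precision, which bounds the error a float32 Real could introduce per value.
import struct

def to_float32(x):
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 / 3.0
rel_err = abs(to_float32(x) - x) / x   # roughly 2^-25, i.e. ~3e-8
print(rel_err < 1e-6)  # True
```

If the quantities of interest survive this with acceptable relative error, trying a float32 Real in the solvers is worth a benchmark.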

OneLine approximation issue

In the oneline approximation, one needs to be able to estimate the frequency of the closest line. However, due to sorting the frequencies (and not the line frequencies), the indices get mismatched...

Dummy behavior when writing species

When writing out the species in the model, dummy behavior is used without any explanatory comment. It seems that we only care about the number of species, but species.hpp also has a data field with the species names available. I propose changing the current behavior to also incorporate reading/writing of the species names.

Units

Let the user explicitly provide units (using astropy) and do the conversions internally during setup.
Create a Python class that constructs Magritte models and can convert between different versions.
One of Magritte's key strengths is its flexibility regarding data structures etc... exploit it!
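A minimal sketch of the idea (in practice astropy.units would supply the conversion factors and dimensional checks; the hand-rolled table below is only a stand-in):

```python
# Sketch: accept user input in any supported length unit and store SI
# metres internally during setup. astropy.units would replace TO_METRE
# and also catch dimensionally invalid input.
TO_METRE = {
    "m":  1.0,
    "cm": 1e-2,
    "au": 1.495978707e11,          # astronomical unit (IAU 2012)
    "pc": 3.0856775814913673e16,   # parsec
}

def to_si_length(value, unit):
    """Convert a length in the given unit to metres."""
    return value * TO_METRE[unit]

print(to_si_length(1.0, "au"))  # 149597870700.0
```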

Short-characteristics with Bézier interpolants

The currently implemented short-characteristics method is only first-order accurate, and thus slightly inaccurate in regions where the source function can change significantly (i.e. transition regimes in high-optical-depth NLTE models). Using a second-order accurate interpolation method for the source can result in negative interpolated values, so we might need to consider using Bézier interpolants, as in Rodriguez and Piskunov 2013.

Test constant_velocity_gradient_1D_image not working

This analytic Python test does not run, due to a bug in line 147 (tau() has too few arguments).
It is not actually used for much (which explains why it has been broken for at least two years, according to the git history), but any test on the stable branch should work properly.

Scaling behavior with number of lines

Currently, the most expensive step in solving the radiative transfer equation is computing the opacities and emissivities at every frequency and point. This is due to the need to sum over the opacities/emissivities of all individual lines.

However, most lines are always out of range of the frequency under consideration, wasting a lot of computation time: O(N^2) scaling instead of O(N) in the number of lines. An easy fix would be to predefine a frequency range (accounting for Doppler shifts) for each ray (or point on a ray) and only use the lines within that range.
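The proposed fix can be sketched with the standard-library bisect module (illustrative only; the line frequencies below are made up). The sorted line centers are computed once; each ray then restricts the opacity sum to lines inside [nu_min, nu_max], where the bounds already include the maximal Doppler shift along the ray:

```python
# Sketch: binary-search the sorted line centres for the contributing
# sub-range, turning the per-frequency line sum from O(N) lines into
# O(log N + k), with k the number of lines actually in range.
import bisect

def lines_in_range(sorted_line_freqs, nu_min, nu_max):
    lo = bisect.bisect_left(sorted_line_freqs, nu_min)
    hi = bisect.bisect_right(sorted_line_freqs, nu_max)
    return range(lo, hi)   # only these lines contribute appreciably

lines = [88.6, 115.3, 230.5, 345.8, 461.0]   # GHz, for illustration
print(list(lines_in_range(lines, 100.0, 350.0)))  # [1, 2, 3]
```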

Version number increment delayed

Currently, incrementing the version number happens after testing the build. This can lead to some time lag between updating the code in the repository and incrementing the version number. In some cases, this can lead to people having a wrong (old) version number. Thus it is better to switch this ordering.

Addition of masers

In Magritte, we currently have no way of adding maser-like conditions to any model, as the optical depth/opacity is always bounded from below (see model/parameters/parameters.hpp) by a very small value to avoid numerical issues.
However, if we were to merely take the sign of the optical depth/opacity into consideration and bound it appropriately, the Feautrier solver would compute wrong results.

A solution would be to switch to another solver (e.g. short-characteristics) in the case of masers, although this needs to be clearly documented.

Inconsistent use of different threading functions

In solver.tpp, two different multithreading functions from Paracabs are currently used:
threaded_for only uses the CPU, while accelerated_for uses the GPU if available.
The exact performance of both implementations should be tested (and, if possible, documented). In the end, I expect that only one of the two should remain in solver.tpp; otherwise, documentation should be added stating why this mix of paradigms is used.

Example notebook `0_creating_models.ipynb` not working (JOSS Review)

I installed Magritte according to the documentation and then opened the above-mentioned example notebook.

Line 14 returns

FileNotFoundError: [Errno 2] No such file or directory: 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'
/bin/sh: 1: gmsh: not found

I suspect this comes up because gmsh is installed via conda, so subprocess.Popen not being aware of the conda environment might raise this error. Indeed, installing gmsh system-wide gets me one step further.

Rerunning the same cell now produces:

Info    : Running 'gmsh -0 Analytic_disk/analytic_disk_convert_to_pos.geo' [Gmsh 4.1.3, 1 node, max. 1 thread]
Info    : Started on Mon Nov 29 14:05:54 2021 Info    : Reading 'Analytic_disk/analytic_disk_convert_to_pos.geo'...
Info    : Done reading 'Analytic_disk/analytic_disk_convert_to_pos.geo'
Info    : Writing 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'...
Info    : Done writing 'Analytic_disk/analytic_disk_convert_to_pos.geo_unrolled'
Info    : Stopped on Mon Nov 29 14:05:54 2021

      Error   : Unknown number option 'General.NativeFileChooser'
      Warning : Unable to open file 'Analytic_disk/Analytic_disk/analytic_disk.msh'
      Error   : 'Analytic_disk/analytic_disk_convert_to_pos.geo', line 2 : Unknown view 0

Computing anisotropy alongside the mean intensity

For problems involving polarization, it is necessary to evaluate higher-order generalizations of the mean intensity. Although you might want Magritte to solve the generalized non-LTE problem including polarization in the future, a much simpler first step is to let Magritte compute the simplest higher-order tensor (the anisotropy) and use it externally to compute the atomic polarization from the mean intensity and the anisotropy. The only change would be to add a new quantity, J20, computed alongside the mean intensity, which contains the geometric weight 3*cos(theta)^2 - 1, where theta is defined with respect to a suitable pre-defined reference system. This would be fairly straightforward to add to the code. However, directional boundary conditions (for instance, to define a nearby star) are also needed. I don't know whether your plans include allowing these directional BCs, because the Solver::boundary_intensity function doesn't seem to receive information about the specific ray. I'll try to implement these changes in a fork and, if successful, open a pull request in case you're interested.
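A sketch of the proposed J20 accumulation (normalisation conventions vary, so this keeps only the geometric weight quoted above; all names here are hypothetical, not Magritte's API):

```python
# Sketch: accumulate J20 with weight (3 cos^2(theta) - 1) in the same
# angular-quadrature loop that accumulates the mean intensity J.
import math

def mean_intensity_and_J20(intensities, thetas, weights):
    """intensities[i] along direction thetas[i], with quadrature weights[i]."""
    J = J20 = 0.0
    for I, theta, w in zip(intensities, thetas, weights):
        J   += w * I
        J20 += w * I * (3.0 * math.cos(theta) ** 2 - 1.0)
    return J, J20

# Sanity check: for an isotropic radiation field, J20 should vanish.
# Two-point Gauss-Legendre in mu = cos(theta) integrates 3*mu^2 - 1 exactly.
mu = 1.0 / math.sqrt(3.0)
thetas  = [math.acos(mu), math.acos(-mu)]
weights = [0.5, 0.5]
J, J20 = mean_intensity_and_J20([1.0, 1.0], thetas, weights)
print(J)  # 1.0  (J20 is zero up to rounding)
```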

JOSS Review (Documentation and Examples)

Documentation

README

I'd add a bit more information to the README.md regarding the field(s) of research in which Magritte can be applied. Is it limited to the astrophysical domain, or is it more broadly applicable (e.g. plasma physics)?

API docs

Rudimentary. Only very few C++ objects and no Python objects are documented.

Community guidelines

  • Add guidelines for contributors (e.g. point to issue labels, recommend forking vs. becoming a repo member, etc.)

Examples

docs/src/1_examples/0_creating_models/0_create_analytic_disk.ipynb

  • Typo in cell [1]: "We create a Magritte model form..." should read "from..."
  • Function turbulence(rr): rr is not used in the function body
  • wdir issue: I modified cell [3] to
wdir = os.path.abspath(os.path.join(os.getcwd(), 'Analytic_disk'))

Cell [6] then produces filenames such as "Analytic_diskco.txt" (missing the "/" directory separator). I suggest using os.path.join() to construct paths (here and everywhere else in the code where applicable).

  • Error in cell [15]:
%%capture
mesher.convert_msh_to_pos (f'{bmesh_name}.msh', replace=True)
Error   : Unknown number option 'General.NativeFileChooser'
Error   : Unknown entity 1 of dimension 0
Error   : Could not read vertices
Error   : Error loading '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_disk/analytic_disk.msh'
Error   : '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_disk/analytic_disk_convert_to_pos.geo', line 2 : Unknown view 0

1_create_analytic_spiral.ipynb

  • Same typo as above
  • Same path issue as above
  • Error:
mesher.convert_msh_to_pos (f'{bmesh_name}.msh', replace=True)
Error   : Unknown number option 'General.NativeFileChooser'
Error   : Unknown entity 1 of dimension 0
Error   : Could not read vertices
Error   : Error loading '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_spiral/analytic_spiral.msh'
Error   : '/home/grotec/Repositories/JOSSReviews/Magritte/docs/src/1_examples/0_creating_models/Analytic_spiral/analytic_spiral_convert_to_pos.geo', line 2 : Unknown view 0

I'll be happy to test the other examples but this should probably be sorted out first. Maybe worth a separate issue.

Style guide missing

As this project is maturing, a style guide should be made.
This would enable people to more easily start contributing to this code and should make the code in general more readable by introducing an enforced consistent style.

JOSS Review (Software paper)

Summary

A non-expert reader may find the summary too technical and dominated by domain-specific jargon. I'd move the first three sentences of the "Statement of need" section to the "Summary" section to make the latter more accessible. Otherwise, the section (like the remainder of the paper) is extremely clearly written and transparent.
There is a typo on page 3, line 43: "Furtherore" should read "Furthermore".
As mentioned elsewhere, please delineate precisely the realm of applicability of the code, i.e. whether it can be applied to objects other than astrophysical ones.

Statement of need

Figure 1 is referenced in the text but it remains unclear how to connect it to the statement "Their intricate morpho-kinematics, moreover, makes their appearance in observations far from evident (e.g., Figure 1)."

The authors claim that their software can be used to characterize complex structures. The interested reader will find tests and benchmarks in the referenced work (in particular De Ceuster 2019 and 2020); this could be emphasized again in the software paper. Are these benchmarks obtained by comparison with other radiative transport codes, or did you also compare to other methods, e.g. particle-in-cell simulations?

Future work

Out of personal curiosity, is there a principal difficulty in modifying the method to simulate continuum radiative transfer? Would this have to be considered in the mentioned treatment of non-linear coupling?

Masers: ALI

Some preliminary investigation into optically thick water models shows that issues can arise when applying accelerated lambda iteration: negative level populations can appear. I think this is because, for a maser, the intensity contributes to the source function with a factor greater than one, leading to negative values in the level-population computation.

Read-write privileges of pybind passed variables

When using pybindings, one has the option to pass variables as read_write or readonly (see pybinding.cpp). However, it seems that more variables than makes sense can currently be written, including the lambda operator, the various references to subclasses, and quantities that are the results of computations.

Parameters: improve SetOnce error message

When accidentally setting a model parameter twice, Magritte throws a DoubleSetException, which is as intended. However, the resulting error message does not tell us which parameter was accidentally set twice, making debugging unnecessarily difficult. Therefore I suggest adding a variable name to the debug message (one way to do this is by adding another field, std::string name, to the SetOnce class).
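A Python analogue of the suggested fix (Magritte's SetOnce is C++; this only illustrates carrying the name into the error message, and the exact re-set semantics of the real class may differ):

```python
# Sketch: a set-once parameter that remembers its own name, so the
# double-set error says *which* parameter was set twice and to what.
class DoubleSetException(Exception):
    pass

class SetOnce:
    def __init__(self, name):
        self.name = name
        self._set = False
        self._value = None

    def set(self, value):
        # Assumption: re-setting to the identical value is tolerated.
        if self._set and value != self._value:
            raise DoubleSetException(
                f"Parameter '{self.name}' was already set to {self._value!r}; "
                f"refusing to overwrite with {value!r}.")
        self._value = value
        self._set = True

    def get(self):
        return self._value

npoints = SetOnce("npoints")
npoints.set(100)
try:
    npoints.set(200)
except DoubleSetException as e:
    print(e)   # message now names 'npoints'
```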

Masers: minimal optical depth

In order to properly handle negative optical depths, some changes should be made to how we handle the minimal optical depth increment. Currently, we just add some very small number; this does not take negative optical depths into account.
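One possible convention, sketched in Python (not Magritte's current code): clamp the magnitude of the increment away from zero while preserving its sign, so maser (negative) increments survive the floor:

```python
# Sketch: sign-preserving floor on the optical-depth increment.
# dtau_min is an arbitrary illustrative value.
import math

def clamp_dtau(dtau, dtau_min=1e-10):
    if dtau == 0.0:
        return dtau_min            # arbitrary sign choice at exactly zero
    return math.copysign(max(abs(dtau), dtau_min), dtau)

print(clamp_dtau( 1e-15))  # 1e-10
print(clamp_dtau(-1e-15))  # -1e-10
print(clamp_dtau(-0.5))    # -0.5  (large increments pass through)
```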

Automated builds failing on http-errors

Recently, automated builds have been implemented. However, we have not taken into account that HTTP errors might ruin them. As HTTP errors are erratic, we might want to automatically retry the affected step a few times before concluding that the build fails.

Thus it would be helpful to implement some way to retry steps; this would remove the need to manually rerun the entire build.
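A generic retry helper along these lines might look as follows (a sketch; in GitHub Actions the same effect can also be achieved with a shell loop or a retry action):

```python
# Sketch: rerun a flaky step a few times with exponential backoff before
# declaring the build failed.
import time

def retry(step, attempts=3, base_delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise              # out of attempts: let the build fail
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient HTTP error")
    return "ok"

print(retry(flaky_download, attempts=3, base_delay=0.0))  # ok
```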

Masers: Ng acceleration

In masers, I have observed that Ng acceleration might lead to negative values of level populations. This is quite unsettling; we might need to check if Ng acceleration can still be used. We might just need to check the computed level populations for negative values...

Numerical accuracy: minimal distance allowed in models

In light of an upcoming internal change of datatypes, we should think about what minimal distance is actually allowed in models. This theoretical minimal distance will probably exist due to rounding errors when dividing by almost-zero optical depth increments. For most practical models, this minimal range will probably be below any real use case of the code, but it might be practical to be able to query the minimal distance the code allows.

Proper dependency management

The current dependency management (downloading everything in the dependency directory) is not very elegant. Any suggestions to improve this are very welcome.

Update install docs

In the recent implementation of automated testing, c9122a8, a new option was added to specify which compilers to use. By setting the environment variables CC and CXX, one can define the C and C++ compilers, respectively. This might be useful for macOS users, as gcc and g++ on macOS resolve to clang (which doesn't support OpenMP) by default.
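For example (the compiler names below are illustrative; any non-Apple toolchain installed e.g. via Homebrew would do):

```shell
# Point the build at a real GCC toolchain instead of the clang that
# gcc/g++ alias to on macOS (gcc-11 is a hypothetical Homebrew install).
export CC=gcc-11
export CXX=g++-11
# ...then configure and build as usual.
```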

Approximate lambda operator no longer working

Since the new method for computing optical depths, 1b6d6e3, we no longer compute the opacity (nor the inverse opacity). Unfortunately, the lambda operator computation did depend on this. Somehow, our automated tests let this regression pass. Thus I will need to do some refactoring to get it working again.

Incompatibility between old magritte models and new magritte versions

Because Magritte model parameters may only be set once, changing the available parameters might result in old (benchmark) models no longer being runnable on current Magritte versions. Either we should document this behavior, or we should at least fix the benchmark files.

VTK-related errors

If any of the examples gives an error on importing VTK, this most likely means VTK did not get installed properly.
(This seems to happen in particular with the latest version of conda: 4.10.3.)
This can be resolved by removing and reinstalling VTK, i.e.

conda remove vtk
conda install -c conda-forge vtk

Incorrect determination of indices during Lambda computation

In solver.tpp, get_Lambda requires us to know exactly which quadrature point a given frequency belongs to. However, due to line overlap, the frequency order might change. As the frequency vector is sorted at every point, the indices can change from point to point. In the current implementation, the index determination is static (the same for all points), so a bug can be encountered in the presence of overlapping lines.
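The per-point index bookkeeping could look roughly like this (hypothetical helper, not Magritte's code): tag each frequency with its (line, quadrature) origin before sorting, so the mapping survives any reordering caused by overlapping lines:

```python
# Sketch: sort the comoving frequencies at one point while remembering,
# for every sorted position, which (line, quadrature) pair it came from.
def sorted_with_origin(freqs_by_line):
    """freqs_by_line[l][q] -> frequency of quadrature point q of line l."""
    tagged = [(nu, l, q)
              for l, quad in enumerate(freqs_by_line)
              for q, nu in enumerate(quad)]
    tagged.sort()                          # sort by frequency
    freqs  = [nu     for nu, _, _ in tagged]
    origin = [(l, q) for _, l, q in tagged]
    return freqs, origin

# Two overlapping lines: their quadrature points interleave, so a static
# (global) index mapping would be wrong here.
freqs, origin = sorted_with_origin([[1.0, 2.0, 4.0], [1.5, 2.5, 3.0]])
print(origin)  # [(0, 0), (1, 0), (0, 1), (1, 1), (1, 2), (0, 2)]
```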

Possible memory leak in lineProducingSpecies.hpp

After every statistical equilibrium step, the previous level populations (and residuals) get stored.
However, we do not put a limit on how many previous results we store; thus the memory size may increase arbitrarily.
Either we need to remove this functionality or put a limit on the number of previous iterations we store.
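A bounded container makes such a cap almost free; in Python terms (illustrative only, the limit of 10 is arbitrary):

```python
# Sketch: a bounded deque silently discards the oldest stored iteration
# once the limit is reached, so memory use stays constant over long runs.
from collections import deque

MAX_STORED_ITERATIONS = 10
previous_populations = deque(maxlen=MAX_STORED_ITERATIONS)

for iteration in range(25):
    previous_populations.append(f"populations_{iteration}")

print(len(previous_populations))  # 10
print(previous_populations[0])    # populations_15 (oldest survivor)
```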

Documentation issues

As the Magritte library is still quite new, the documentation is not yet up to standard everywhere.
This issue will collect references to parts of the library that do not yet have clear documentation. It will be updated whenever parts are found where the documentation could be improved.

Current documentation issues:

  • In solver.tpp: solve_feautrier_order2 is a giant computation with many constants, so it is not very clear what we are calculating or how. Either add a reference to a paper for this calculation, or add some more comments. The same holds for solve_shortchar_order_0 and update_Lambda.
  • In LineProducingSpecies: update_using_Ng_acceleration: same comment, too many constants without knowing what they are. It is probably best to just add a reference to a paper on Ng acceleration (although it should be well known in the astrophysics community) or, optionally, add some pseudocode outlining the calculation in a few lines.
  • In a lot of .hpp files that define a vector: also give some indication of what the indices represent (e.g. in species.hpp: Double2 abundance; ///< (current) abundance in every cell (point, species)). Also, please use a bit more than a single letter to represent a quantity (e.g. in radiation.hpp, change Tensor I; ///< intensity (r, p, f) to (ray, point, frequency)).

Unused stuff that probably no longer needs to be there:

  • In species.hpp/tpp: Is abundance_init still used in anything?
  • In multigrid branch: delete the Neighbors struct, superseded by Multiscale
  • In both branches: far too many commented-out functions (which are no longer used) are lying around
  • Scattering struct: no longer seems to do anything

Plots in stable branch

In the benchmarks folders, there are some plots that do not belong there.
Maybe move them to documentation or somewhere else.
