danielkoll / pyrads
A line-by-line longwave radiation code for planetary atmospheres.
License: MIT License
Awesome repository! I suggest you add a license so it's clear how others can use or modify the code.
PyRADS would be a lot more accessible to users if it were packaged with a standard setup.py file so it could be installed and imported like any other library. It should also be possible to build and wrap the MTCKD code with f2py so it is callable from Python without reading and writing to disk.
I'll submit some PRs when I have time.
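A minimal setup.py along these lines might look like the sketch below (version number and dependency list are placeholders, not taken from the repository):

```python
from setuptools import setup, find_packages

setup(
    name="pyrads",
    version="0.1",  # placeholder version
    description="A line-by-line longwave radiation code for planetary atmospheres",
    license="MIT",
    packages=find_packages(),  # picks up the pyrads/ package directory
    install_requires=["numpy", "scipy"],  # assumed runtime dependencies
)
```

With this in place, `pip install .` from the repository root would make `import pyrads` work from anywhere.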
This doesn't affect calculations at normal temperatures, but it will matter at high temperatures, once the mass contribution from H2O becomes significant.
On line 63, in compute_tau_H2ON2_CO2dilute, kappaCO2 = getKappa_HITRAN is called with "H2O". Shouldn't it be called with "CO2"?
Old calculation in verticalstructure.py:

```python
# v1: pick highest pressure level in strat, assume q is uniform at that value
# mask = T_adiabat < Tstrat
# T = T_adiabat
# T[mask] = Tstrat
# q = q_adiabat
# q[mask] = q_adiabat[p==(p[mask].max())]
```
New calculation in verticalstructure.py:

```python
# v2, more accurate: use interpolation to find p_trop, where T=Tstrat.
# Then compute q at that level.
# Note: at insufficient res, interpolation will produce p_trop=min(p)!
p_trop = np.interp(Tstrat, T_adiabat, p)
q_trop = get_q(Tstrat, p_trop, params, RH=RH)  # analytically ..
mask = p <= p_trop
T = T_adiabat
T[mask] = Tstrat
q = q_adiabat
q[mask] = q_trop
```
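The v2 interpolation step can be illustrated with a self-contained toy profile (the pressure grid and dry-adiabat profile below are invented for illustration, not PyRADS's actual inputs):

```python
import numpy as np

# Hypothetical inputs: pressure grid (Pa, increasing toward the surface)
# and an adiabatic temperature profile that warms with pressure.
p = np.linspace(1e2, 1e5, 50)
T_adiabat = 300.0 * (p / p[-1]) ** (2.0 / 7.0)  # dry adiabat, R/cp = 2/7
Tstrat = 200.0

# v2: interpolate to find the tropopause pressure where T = Tstrat.
# np.interp needs its x-coordinate (T_adiabat here) to be increasing,
# which holds because T rises monotonically with p along the adiabat.
p_trop = np.interp(Tstrat, T_adiabat, p)

# Cap the profile at Tstrat above the tropopause. Note the .copy():
# without it, T[mask] = Tstrat would also mutate T_adiabat in place.
mask = p <= p_trop
T = T_adiabat.copy()
T[mask] = Tstrat
```

One caveat worth flagging: as in the snippet above, assigning `T = T_adiabat` without a copy makes `T[mask] = Tstrat` modify `T_adiabat` as well, since both names refer to the same array.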
As noted in the examples/tests, most of the time taken to calculate the spectra comes from the OpticalThickness.compute_tau calculation. I've done a little digging and it seems to me that the functions computeAbsorption_fixedCutoff() / computeAbsorption() (called by getKappa_HITRAN()) in pyrads/Absorption_Crosssections_HITRAN2016.py are the ones causing this slowdown.
This is because both functions contain a very large loop over the wavenumbers:

```python
for i in range(len(waveList)):
    n = waveList[i]  # Wavenumber of the line
    # gam = gamList[i]*(p/1.013e5)*(296./T)**TExpList[i]  # DKOLL: old
    gam = getGamma(i)*(296./T)**TExpList[i]  # DKOLL: new. getGamma includes p-scaling
    # Temperature scaling of line strength
```
Given that there isn't anything particularly complex happening inside this loop, I'm wondering whether it makes sense to try to accelerate this function using numba. I'm playing around with this idea now and I think it should be possible, but the code in Absorption_Crosssections_HITRAN2016.py will have to be tidied up a bit.
Specifically:

- remove the global statements / make all function inputs explicit
- stop passing the params object into these functions -- numba doesn't like classes at the moment

There might be more issues here, and I've only just started playing with the idea, but I'll try to keep a log here of what does/doesn't work.
Also tagging @hdrake and @brian-rose here too, in case anyone's interested / has any thoughts on this / has already tried it?!
See @brian-rose's issue running into the same problem and the workaround he implemented in his build script for the climlab package.
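To make the idea concrete, here is a toy version of such a line loop with an `@njit` decorator applied (a simplified Lorentz line shape stands in for PyRADS's actual line-shape math; names and data are invented, and the decorator falls back to a no-op if numba isn't installed):

```python
import numpy as np

try:
    from numba import njit  # assumes numba is available
except ImportError:
    def njit(f):  # fall back to plain Python so the sketch still runs
        return f

@njit
def absorption_loop(waveList, sList, gamList, nGrid):
    # Toy analogue of the computeAbsorption() inner loop: accumulate a
    # Lorentz line shape from every spectral line onto a wavenumber grid.
    kappa = np.zeros(nGrid.shape)
    for i in range(len(waveList)):
        n0 = waveList[i]   # line-center wavenumber
        S = sList[i]       # line strength
        gam = gamList[i]   # pressure-broadened half-width
        kappa += S * gam / (np.pi * ((nGrid - n0)**2 + gam**2))
    return kappa
```

Because the loop body only uses NumPy arrays and scalars (no classes, no globals), numba can compile it in nopython mode, which is exactly why the tidy-up steps above are needed first.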
Python 2.7 is very close to end-of-life, so it's important to make PyRADS work on Python 3 if it's going to have any future. I had a brief glance through the code; converting to Python 3 may be as simple as replacing print statements with print() functions.
The new numba capability reproduces pyrads results identically when I only use HITRAN data from a single species. However, numba appears to fail silently when using line data from multiple HITRAN species, which requires repeated calls to Absorption_Crosssections_HITRAN2016_numba. Test03.runaway_with_co2 contains an example calculation that uses both H2O and CO2 data; in this case the optical thicknesses computed with numba no longer match the non-numba results.
By default, PyRADS only implements the dominant isotopologue of each gas. However, minor isotopologues have lines and bands in parts of the spectrum where the main isotopologue doesn't absorb much.
This could potentially affect CO2 forcing at intermediate/high CO2 levels -- currently unclear how big the effect is.
It seems that, by default, HITRAN data is already weighted by a 'representative' isotope ratio (https://hitran.org/docs/definitions-and-units). So necessary changes would be to update the HITRAN line data, plus the mean molecular mass of each gas assumed in the code (e.g., Absorption_Crosssections_HITRAN2016.py assumes a molecular weight of 44 g/mol for CO2).
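As a sanity check on that molecular-weight assumption, the abundance-weighted mean molar mass of CO2 can be computed from the HITRAN isotopologue table (the abundances and masses below are rounded values transcribed from hitran.org; treat them as illustrative):

```python
# Approximate HITRAN natural abundances and molar masses (g/mol) for the
# four most abundant CO2 isotopologues (rounded; verify against hitran.org).
co2_isotopologues = {
    "12C16O2":   (0.98420, 43.98983),
    "13C16O2":   (0.01106, 44.99318),
    "16O12C18O": (0.00395, 45.99408),
    "16O12C17O": (0.00073, 44.99404),
}

total = sum(a for a, _ in co2_isotopologues.values())
M_mean = sum(a * m for a, m in co2_isotopologues.values()) / total
print(round(M_mean, 3))  # close to, but not exactly, the 44 g/mol assumed in the code
```

So the 44 g/mol hard-coded in Absorption_Crosssections_HITRAN2016.py is a very small approximation error compared with the abundance-weighted value; the bigger issue remains the missing minor-isotopologue line data itself.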
At the moment, compute_tau_H2ON2_CO2dilute assumes that CO2 is a trace gas and so doesn't include the partial pressure of CO2 in the calculation. I'm currently running PyRADS with CO2 values up to ~50,000 ppmv, so this is a problem. To fix this, is it sufficient to add the partial pressure of CO2 and then just change the broadening to "mixed"? I.e.:
```python
p_CO2 = pres * ppv_CO2
p_H2O = RH * params.esat(temp)  # ...
q_CO2 = convert_molar_to_mass_ratio(ppv_CO2, params.R_CO2, R_mean)
R_mean = q_H2O*params.Rv + q_CO2*params.R_CO2 + (1.-q_H2O-q_CO2)*params.R

kappaH2O = getKappa_HITRAN(grid.n, grid.n0, grid.n1, grid.dn,
                           "H2O", press=pres, press_self=p_H2O,
                           temp=temp, broadening="mixed", lineWid=25.,
                           cutoff_option="fixed", remove_plinth=True)
kappaCO2 = getKappa_HITRAN(grid.n, grid.n0, grid.n1, grid.dn,
                           "CO2", press=pres, press_self=p_CO2,
                           temp=temp, broadening="mixed", lineWid=25.,
                           cutoff_option="fixed", remove_plinth=False)
```
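One wrinkle in the sketch above is circular: q_CO2 is computed from R_mean, but R_mean is defined in terms of q_CO2. If molar mixing ratios are known for every species, the mass ratios can instead be computed directly from molar masses, avoiding the circularity entirely (the helper name below is hypothetical, not part of PyRADS):

```python
def mass_ratios_from_molar(ppv, molar_masses):
    """Convert molar mixing ratios to mass mixing ratios.

    ppv and molar_masses are dicts keyed by species name; the ppv values
    must cover the whole atmosphere and sum to 1.
    """
    # Molar-weighted mean molecular mass of the mixture.
    M_mean = sum(ppv[s] * molar_masses[s] for s in ppv)
    # Mass ratio of each species: q_i = ppv_i * M_i / M_mean.
    return {s: ppv[s] * molar_masses[s] / M_mean for s in ppv}
```

For example, with ppv = {"H2O": 0.01, "CO2": 0.05, "N2": 0.94}, CO2's mass ratio comes out larger than its molar ratio (it is heavier than the mean), while H2O's comes out smaller. R_mean can then be formed from these q values in a single pass.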