geoschem / geos-chem


GEOS-Chem "Science Codebase" repository. Contains GEOS-Chem science routines, run directory generation scripts, and interface code. This repository is used as a submodule within the GCClassic and GCHP wrappers, as well as in other modeling contexts (external ESMs).

Home Page: http://geos-chem.org

License: Other

Fortran 98.09% Perl 0.02% Shell 1.42% Python 0.14% CMake 0.10% C 0.14% Makefile 0.08%
cloud-computing atmospheric-modelling aws scientific-computing bash-script configuration integration-tests greenhouse-gases aerosols atmospheric-chemistry atmospheric-composition carbon-cycle climate mercury particulate-matter atmospheric-chemistry-modeling fortran run-directory earth-system-modeling methane

geos-chem's Introduction

Release DOI License

Description

This repository contains the GEOS-Chem science codebase. Included in this repository are:

  • The source code for GEOS-Chem science routines;
  • Scripts to create GEOS-Chem run directories;
  • Template configuration files that specify run-time options;
  • Scripts to run GEOS-Chem tests;
  • Driver routines (e.g. main.F90) that enable GEOS-Chem to be run in several different implementations (as GEOS-Chem "Classic", as GCHP, etc.)

Version 12.9.3 and prior

GEOS-Chem 12.9.3 was the last version in which this "Science Codebase" repository was used in a standalone manner.

Version 13.0.0 and later

GEOS-Chem 13.0.0 and later versions use this "Science Codebase" repository as a submodule within the GCClassic and GCHP repositories.

Releases for GEOS-Chem 13.0.0 and later versions will be issued at the GCClassic and GCHP GitHub repositories. We will also tag and release the corresponding versions at this repository for the sake of completeness.

User Manuals

Each implementation of GEOS-Chem has its own manual page. For more information, please see:

About GEOS-Chem

GEOS-Chem is a global 3-D model of atmospheric chemistry driven by meteorological input from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office. It is applied by research groups around the world to a wide range of atmospheric composition problems. Scientific direction of the model is provided by the international GEOS-Chem Steering Committee and by User Working Groups. The model is managed by the GEOS-Chem Support Team, based at Harvard University and Washington University, with support from the US NASA Earth Science Division, the Natural Sciences and Engineering Research Council of Canada, and the Nanjing University of Information Science and Technology.

GEOS-Chem is a grass-roots open-access model owned by its users, and ownership implies some responsibilities as listed in our welcome page for new users.

geos-chem's People

Contributors

bettycroft, cdholmes, christophkeller, cpthackray, daridley, emily-ramnarine, fritzt, ganluoasrc, jdshutter, jennyfisher, jiaweizhuang, jimmielin, jourdan-he, kelvinhb, laestrada, liambindle, lizziel, ltmurray, michael-s-long, msl3v, msulprizio, nicholasbalasus, noelleselin, sdeastham, sfarina, spacemouse, tsherwen, williamdowns, xin-chen-github, yantosca


geos-chem's Issues

[BUG/ISSUE] Chemistry gives slightly different results if ND65 bpch diagnostic is turned off vs. on

Describe the bug

We obtain slightly different results when the ND65 bpch prod/loss diagnostics are turned off as opposed to when they are turned on.

The ND65 diagnostic is turned on (T) or off (F) in this menu of input.geos:

%% PROD & LOSS MENU %%%:
Turn on P/L (ND65) diag?: F

To Reproduce

  1. Clone the GEOS-Chem code (any branch)
  2. Create the geosfp_4x5_standard folder from the unit tester (any branch)
  3. In input.geos, set Turn on P/L (ND65) diag?: F
  4. Run GEOS-Chem for a short simulation (1 hour)
  5. Move the output and log files to a folder where they won't get overwritten
  6. In input.geos, set Turn on P/L (ND65) diag?: T
  7. Run GEOS-Chem again for a short simulation (1 hour)
  8. Compare the log files and output files. You should see small differences.

Example output

A 1-hour simulation using GEOS-Chem in the dev/12.7.0 branch shows the following differences in Mean OH (printed at the end of the log file). This indicates that the gas-phase chemistry has differences:

With P/L off: Mean OH =    11.551091212834173       [1e5 molec/cm3]
With P/L on:  Mean OH =    11.551113072867304       [1e5 molec/cm3]

If you look at the species concentrations, you will notice very small differences on the order of numerical noise, e.g.:
[Plot: ACET concentration differences]
This can be enough to cause difference tests to fail!

Solution

We have traced this issue to the following code in GeosCore/flexchem_mod.F90:

       IF ( Input_Opt%DO_SAVE_PL ) THEN

          ! Loop over # prod/loss families
          DO F = 1, NFAM

             ! Determine dummy species index in KPP
             KppID =  ND65_Kpp_Id(F)

             ! Initialize prod/loss rates
             IF ( KppID > 0 ) C(KppID) = 0.0_dp

          ENDDO

       ENDIF

This code resets the concentration for each "dummy species" (which is used to accumulate the prod/loss for each defined KPP prod/loss family) to zero before the KPP rates are updated. If these dummy species are not reset to zero, then this can cause the results of the chemistry to change slightly.

The problem is that Input_Opt%DO_SAVE_PL only applies to the bpch diagnostics. It is set from the value of the Turn on P/L (ND65) diag?: F line in input.geos.

Our recommended solution is to remove the IF and ENDIF lines so that we always zero out the "dummy species" on every timestep, regardless of whether the prod/loss diagnostics are used or not. When we do this, we get identical results regardless of the setting of the Turn on P/L (ND65) diag? line in input.geos:

PL_Off: Mean OH = 11.551113072867304 [1e5 molec/cm3]
PL_On: Mean OH = 11.551113072867304 [1e5 molec/cm3]

[Plot: ACET concentration comparison with the fix applied]

Implementation

We plan to add this fix into GEOS-Chem 12.7.0.

[FEATURE REQUEST] Store CEDS emission data in one file per year?

The newly-added CEDS data are 140 GB in total. This increases the size of the default HEMCO data directory from ~70 GB to ~210 GB.

The problem is that 65 years (1950–2014) of data are stored in a single file, but most users probably only need recent years. Would it be more reasonable to use one file per year? Other large datasets such as QFED are also stored on a per-year basis. This would save download time and also reduce the size of the tutorial AMI on AWS from 200+ GB to ~80 GB.

Breaking the CEDS data into 65 files (one per year) can be done with a few lines of Python:

import xarray as xr

# MAINDIR is the directory that holds the original multi-year CEDS files
ds = xr.open_mfdataset(MAINDIR + '*-em-anthro_CMIP_CEDS_195001-201412.nc')
for year in range(1950, 2015):
    ds.sel(time=str(year)).to_netcdf('~/output/CEDS_{}.nc'.format(year))

See this notebook for a walk through.

[DISCUSSION] Soil wetness in 2x2.5 resolution is wrong in coastal regions?

Regridding soil wetness directly yields scientifically incorrect results in coastal regions

Overview

Originally, root zone soil wetness (GWETROOT) is biased low and surface soil wetness (GWETTOP) is biased high in coastal regions, because they default to 0 and 1 respectively over the ocean at the MERRA-2 native resolution, and regridding directly carries those ocean values into coastal grid cells. However, values over the ocean should not have any effect when regridding soil wetness.

Therefore, the soil wetness fields require pre-processing before regridding. Otherwise, the regridded fields no longer have the same meaning in grid cells where the land fraction is not 1.

This issue is not caused by GEOS-Chem, but it might affect simulation results, so I decided to put it here.

Detailed description

As far as I understand, regridding of MERRA-2 from its native resolution uses an area-preserving mapping scheme and some interpolation/optimization steps which I did not dive into. Some fields like U-wind and V-wind require special pre-processing to be regridded correctly. I think the soil wetness fields (GWETROOT and GWETTOP) should also require pre-processing to take porosity and land fraction into consideration.

The reason is that soil wetness refers to the volume of water per unit volume of soil pore space, so it should not be multiplied directly by the area of the grid box. To convert it into a quantity to which the above mapping scheme applies, it should be multiplied by the porosity (volume of soil pore space per unit volume of land) and the land fraction (area of land per area of grid cell). Therefore, assuming a fixed column height, the product is essentially an equivalent area of water per area of grid cell. Regridding this quantity and dividing it by the product of porosity and land fraction at the coarse resolution would ideally give the correct soil wetness.

A practical complication arises because the interpolation/optimization process may cause the resulting soil wetness to be larger than 1, which is unphysical. So far, I have encountered maximum values of 1.2 and 1.8 when regridding GWETROOT and GWETTOP, respectively, for Jan 2012. I wonder if there is a better option, e.g. some special setting during interpolation, other than directly adjusting the results.
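For illustration, here is a minimal Python sketch of the weighting described above, assuming porosity and land-fraction fields are available on the native grid and that regrid() is any conservative (area-preserving) fine-to-coarse regridding function; all variable names are illustrative only:

import numpy as np

def regrid_soil_wetness(gwet, porosity, land_frac, regrid):
    """Conservatively regrid soil wetness with porosity/land-fraction weighting."""
    # (water / pore volume) x (pore volume / land volume) x (land area / cell area)
    # = equivalent water "area" per unit grid-cell area
    water_equiv = gwet * porosity * land_frac

    # Regrid both the water-equivalent field and the weighting field
    coarse_water  = regrid(water_equiv)
    coarse_weight = regrid(porosity * land_frac)

    # Recover coarse-grid wetness; avoid dividing by zero over open ocean
    wetness = np.where(coarse_weight > 0.0,
                       coarse_water / np.maximum(coarse_weight, 1.0e-12),
                       0.0)

    # Clip unphysical values (> 1) introduced by interpolation artifacts
    return np.clip(wetness, 0.0, 1.0)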

Implication

GWETROOT will increase and GWETTOP will decrease in coastal regions. GWETROOT is currently not used but GWETTOP is used in CH4 simulation and dust mobilization, according to the GEOS-Chem Wiki. I have no idea how this may change the simulation results, but any changes are expected to be slight at most.

Comparison

Attached below is the comparison of both soil wetness fields between the MERRA-2 dataset at native 0.5x0.625 resolution, the default MERRA-2 dataset at 2x2.5 resolution and my modified dataset at 2x2.5 resolution on 1 Jan 2012. As seen from the graphs, the default MERRA-2 dataset at 2x2.5 resolution deviates from both the native one and the modified one at grid cells with land fraction < 1.

[Figure: GWETROOT and GWETTOP at native 0.5x0.625 resolution, default 2x2.5 resolution, and modified 2x2.5 resolution]

How to reproduce

You may use f2py to convert this Fortran code, which I took and briefly modified from https://github.com/geoschem/MERRA2/blob/master/Code/Merra2_RegridModule.F90, to obtain a Python module, and then use this Python script to regrid the native MERRA-2 dataset.

[QUESTION] Does GEOS-Chem v12.1.1 need the netCDF-4 library?

In Ncop_Rd, cannot open: /public/cmaq/standard/geoschem/Code.12.1.1/Extdata/HEMCO/EDGARv43/v2016-11/EDGAR_v43.CO.POW.0.1x0.1.nc

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Code stopped from DO_ERR_OUT (in module NcdfUtil/m_do_err_out.F90)

This is an error that was encountered in one of the netCDF I/O modules,
which indicates an error in writing to or reading from a netCDF file!

[FEATURE REQUEST] Replace HEMCO built-in unit conversions with scale factors in config file

HEMCO checks the unit string in HEMCO_Config.rc for each emission field and converts to the HEMCO standard units (kg/m2/s) upon file read if the input units are different. For VOCs, it further converts to kgC/m2/s. The code for the unit conversions is in HEMCO/Core/hco_unit_mod.F90.

Since the automatic unit conversion is performed during the HEMCO file read, it does not get executed in GCHP, which uses MAPL/ESMF for I/O. GCHP users must instead manually include the conversion information as scale factors in HEMCO_Config.rc. To increase clarity and to make GCHP and GCC better match, we plan to remove the HEMCO built-in unit conversions and replace them with scale factors in HEMCO_Config.rc. This change may result in small numerical differences due to order of operations and the precision of the scale factors.
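As an illustration of the kind of scale factor that would replace the built-in conversion, the kg to kgC factor for a VOC is just its carbon mass fraction. A sketch for propane (C3H8), with illustrative molar masses:

# kg -> kgC scale factor for propane (C3H8); a factor of this form would be
# entered in HEMCO_Config.rc in place of the built-in unit conversion.
N_CARBON  = 3        # carbon atoms per molecule of C3H8
MW_CARBON = 12.01    # g/mol
MW_C3H8   = 44.10    # g/mol (illustrative value)

scale_kg_to_kgC = N_CARBON * MW_CARBON / MW_C3H8
print(round(scale_kg_to_kgC, 3))   # ~0.817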

[BUG/ISSUE] HEMCO_Config files in GC v12.5.0 point to BIOFUEL v2014-07 rather than the "default" v2019-08

Bug description

The online documentation states that v2019-08 of the BIOFUEL inventory is used as the default BIOFUEL inventory in GEOS-Chem v12.5.0, but all of the HEMCO_Config template files that come with this version still point to v2014-07.

GEOS-Chem compiles and runs as normal, but points to an older version of the BIOFUEL inventory. The only documented change in v2019-08 from v2014-07 is that the inventory has been regridded from 4x5 to 2x2.5 degrees horizontal resolution, so this bug should not result in an error message and may not be easily identifiable in model output.

This bug can easily be fixed by updating the HEMCO_Config templates, then re-running the unit tester and re-compiling. The only known instance occurs at or near line 1694. The suggested change (the updated version directory) is shown below:

BIOFUEL_C3H8 $ROOT/BIOFUEL/v2019-08/biofuel.geos.2x25.nc BIOFUEL_C3H8 1985/1/1/0 C xy kgC/m2/s C3H8 - 1 10

Required information

  • GEOS-Chem version you are using: 12.5.0
  • Are you using "out of the box" code, or have you made modifications? Out of the box

[BUG/ISSUE] GEOS-Chem 12.6.0 with CMake encounters seg fault when compiling ocean_mercury_mod.F

Describe the bug

GEOS-Chem in the dev/12.6.0 branch (commit 8ab5a66) dies with an internal compiler error when trying to compile ocean_mercury_mod.F.

This is known behavior and also occurs when you compile GEOS-Chem with GNU Make. In the GNU Makefiles, we have to add a special rule to compile ocean_mercury_mod.F with -O1 in order to avoid the error:

ocean_mercury_mod.o         : ocean_mercury_mod.F                            \
                              dao_mod.o               depo_mercury_mod.o     \
                              diag03_mod.o            toms_mod.o             \
                              hco_interface_mod.o
##############################################################################
# NOTE: For some reason gfortran 8.x.x throws an internal compiler error
# in this routine.  The error does not happen when optimization is turned
# off.  For now, lower the optimization level to get around this issue.
# The ocean mercury module might eventually be replaced later on.
#   -- Bob Yantosca, 17 Aug 2018
ifeq ($(IS_GNU_8),1)
	$(F90) -c -O1 $<
endif
##############################################################################

This appears to be present in gfortran 8 and higher. I think it is because gfortran 8+ cannot handle and/or optimize some old-timey legacy code in ocean_mercury_mod.F.

To Reproduce

Steps to reproduce the behavior:

  1. Create a rundir geosfp_4x5_TransportTracers from the UT branch dev/12.6.0
  2. cd geosfp_4x5_TransportTracers
  3. mkdir build
  4. cd build
  5. cmake ../CodeDir
  6. make -j8

Expected behavior

The executable should be built.

Error message:

[ 88%] Building Fortran object GeosCore/CMakeFiles/GeosCore.dir/ocean_mercury_mod.F.o
[ 88%] Building Fortran object GeosCore/CMakeFiles/GeosCore.dir/land_mercury_mod.F.o
[ 89%] Building Fortran object GeosCore/CMakeFiles/GeosCore.dir/wetscav_mod.F.o
during GIMPLE pass: ccp
/local/ryantosca/GC/rundirs/12.6.0/geosfp_4x5_TransportTracers/CodeDir/GeosCore/ocean_mercury_mod.F:404:0:

       USE CMN_SIZE_MOD
 
internal compiler error: Segmentation fault
0xad755f crash_signal
        ../.././gcc/toplev.c:325
0xbaeb0c gimple_code
        ../.././gcc/gimple.h:1679
0xbaeb0c gimple_nop_p
        ../.././gcc/gimple.h:6346
0xbaeb0c get_default_value
        ../.././gcc/tree-ssa-ccp.c:279
0xbb04bc get_value
        ../.././gcc/tree-ssa-ccp.c:354
0xbb04bc ccp_finalize
        ../.././gcc/tree-ssa-ccp.c:962
0xbb04bc do_ssa_ccp
        ../.././gcc/tree-ssa-ccp.c:2475
0xbb04bc execute
        ../.././gcc/tree-ssa-ccp.c:2518
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
make[2]: *** [GeosCore/CMakeFiles/GeosCore.dir/ocean_mercury_mod.F.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [GeosCore/CMakeFiles/GeosCore.dir/all] Error 2
make: *** [all] Error 2

Required information

Out of the box" GEOS-Chem dev/12.6.0 running on CentOS7:

Linux holyjacob01.rc.fas.harvard.edu 3.10.0-957.12.1.el7.x86_64 #1 SMP Mon Apr 29 14:59:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

with these modules

Currently Loaded Modules:
  1) git/2.17.0-fasrc01    7) mpc/1.0.3-fasrc06      13) netcdf/4.1.3-fasrc02
  2) perl/5.26.1-fasrc01   8) gcc/8.2.0-fasrc01      14) libtiff/4.0.9-fasrc01
  3) IDL/8.4.1-fasrc01     9) openmpi/3.1.1-fasrc01  15) emacs/26.1-fasrc01
  4) flex/2.6.4-fasrc01   10) zlib/1.2.8-fasrc07     16) cmake/3.12.1-fasrc01
  5) gmp/6.1.2-fasrc01    11) szip/2.1-fasrc02       17) jdk/1.8.0_172-fasrc01
  6) mpfr/3.1.5-fasrc01   12) hdf5/1.8.12-fasrc12    18) tau-2.28.2-gcc-8.2.0-ofp23hs

Other info:

The gfortran 8.2.0 compiler was built by our Research Computing staff. There might be some issues with it. If time allows I will try to build a fresh version of gfortran 8.2.0 (and maybe also gfortran 9.3) cleanly with Spack and see if we still get the error.

The ocean_mercury_mod.F module is not used for fullchem simulations, but it is compiled into the executable. It is a bit of a mess, so there must be some kind of construct that the newer gfortran (which skews to the newer Fortran standards like F2003/F2008) cannot parse. Or perhaps could parse but doesn't want to.

[FEATURE REQUEST] Removal of binary punch diagnostics

Overview

We are removing the legacy binary punch (aka bpch) diagnostics, because they cannot be used by GCHP, nor by GEOS-Chem when connected to other ESMs (GEOS, WRF, CESM, etc.).

Validation

Prior to removing the bpch diagnostics, we performed several validations:

Comparison of bpch and netCDF diagnostic output

Please see our Validation of netCDF diagnostics section on the GEOS-Chem wiki.

Difference tests

Ref = bpch code present
Dev = bpch code removed (feature/RemoveBpch branch)

Compiled with BPCH_DIAG=n (this will be the new default)
---------------------------------------------------------
DiffTest_geosfp_2x25_CO2   	        PASS
DiffTest_geosfp_4x5_aerosol/            PASS	
DiffTest_geosfp_4x5_benchmark/          PASS
DiffTest_geosfp_4x5_CH4/                PASS	
DiffTest_geosfp_4x5_complexSOA/         PASS
DiffTest_geosfp_4x5_standard/           PASS
DiffTest_geosfp_4x5_tagCO/              PASS
DiffTest_geosfp_4x5_tagO3/              PASS			
DiffTest_geosfp_4x5_TransportTracers/   PASS
DiffTest_geosfp_4x5_tropchem/           PASS

Compiled with BPCH_DIAG=y
---------------------------------------------------------
DiffTest_geosfp_2x25_CO2/               PASS
DiffTest_geosfp_4x5_aerosol/		PASS
DiffTest_geosfp_4x5_benchmark/          PASS**
DiffTest_geosfp_4x5_CH4/                PASS
DiffTest_geosfp_4x5_Hg/                 PASS          
DiffTest_geosfp_4x5_standard/           PASS*
DiffTest_geosfp_4x5_tagCO/              PASS
DiffTest_geosfp_4x5_tagO3/	        PASS			
DiffTest_geosfp_4x5_TransportTracers/   PASS
DiffTest_geosfp_4x5_tropchem/           PASS**

*: The perl script validate.pl, which is used to check for differences, 
   notes that certain diagnostic outputs are different.
   But when examined in Python, we get identical results.  
   This might denote some very small differences caused by optimization.   
   For all intents and purposes this denotes identical results.

**: Differences are within the limits of numerical noise.

The Mean OH value for Dev in geosfp_4x5_standard is identical with bpch turned off or on:

Bpch off : Mean OH =    11.551113072639739       [1e5 molec/cm3]
Bpch on  : Mean OH =    11.551113072639739       [1e5 molec/cm3]

as is the mean OH value for Dev in geosfp_4x5_tropchem:

Bpch off : Mean OH =    12.979569056826467       [1e5 molec/cm3]
Bpch on  : Mean OH =    12.979569056826467       [1e5 molec/cm3]

and as is the mean OH value for Dev in geosfp_4x5_benchmark:

Bpch off : Mean OH =    11.544115278980730       [1e5 molec/cm3]
Bpch on  : Mean OH =    11.544115278980730       [1e5 molec/cm3]

Unit Tests

[Image: unit test results]

Remaining bpch diagnostics

Due to legacy code in certain "specialty" simulations, we are unable to remove every single bpch diagnostic. For the time being we will preserve the following bpch diagnostics:

  1. ND03 (for Hg simulations)
  2. ND06 (for TOMAS simulations)
  3. ND44 (for TOMAS simulations)
  4. ND51 and ND51b (Satellite timeseries)
  5. ND53 (for POPs simulations)
  6. ND59 (for TOMAS simulations)
  7. ND60 (for TOMAS simulations)
  8. ND61 (for TOMAS simulations)
  9. ND72 (for RRTMG simulations)

[QUESTION] Is the name of growth factor for PM2.5 calculation correct in Wiki?

Hi,

I have just found that you give WetMass2DryMassRatio = 1 + [{(radiusAtRH_wet / radiusAtRH_dry)^3 - 1} x (Density_Water / Density_DrySpecies)] on this wiki page. I also found that you calculate SIA_GROWTH, ORG_GROWTH, and SSA_GROWTH in aerosol_mod.F using exactly the same formula. However, I found that you calculate PM2.5 by multiplying by these growth factors instead of dividing by them. Therefore, should WetMass2DryMassRatio be renamed to DryMass2WetMassRatio on the wiki page? @msulprizio

Yours faithfully,
Fei
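
A quick numeric check (with assumed, illustrative values for the radius ratio and densities) shows that the formula above equals the wet-to-dry mass ratio, i.e. multiplying a dry aerosol mass by it yields the wet mass, which is consistent with the multiplication in aerosol_mod.F:

# Illustrative values only
r_ratio   = 1.2    # radiusAtRH_wet / radiusAtRH_dry
rho_water = 1.0    # g/cm3
rho_dry   = 1.7    # g/cm3, assumed dry-species density

growth = 1.0 + (r_ratio**3 - 1.0) * (rho_water / rho_dry)

# Direct mass ratio: water fills the extra (wet minus dry) volume
dry_volume = 1.0
dry_mass   = rho_dry * dry_volume
water_mass = rho_water * dry_volume * (r_ratio**3 - 1.0)

print(growth, (dry_mass + water_mass) / dry_mass)   # both ~1.43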

[QUESTION] What is the easiest way to output diagnostics of components of PM2.5 and AOD?

Hi All,

I am running GC 12.2.1 to collect PM2.5, AOD, and their components, but I found that I can only define some of them in HISTORY.rc. I have searched the aerosol_mod.F file and found the following code, which I believe is most relevant to the PM2.5 components.

#if defined( MODEL_GEOS )
         ! PM2.5 sulfates
         IF ( State_Diag%Archive_PM25su ) THEN
            State_Diag%PM25su(I,J,L) = ( SO4(I,J,L) * SIA_GROWTH  )
     &                               * ( 1013.25_fp / PMID(I,J,L) )
     &                               * ( T(I,J,L)   / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF

         ! PM2.5 nitrates
         IF ( State_Diag%Archive_PM25ni ) THEN
            State_Diag%PM25ni(I,J,L) = ( NH4(I,J,L) * SIA_GROWTH
     &                               +   NIT(I,J,L) * SIA_GROWTH  )
     &                               * ( 1013.25_fp / PMID(I,J,L) )
     &                               * ( T(I,J,L)   / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF
         ! PM2.5 BC
         IF ( State_Diag%Archive_PM25bc  ) THEN
            State_Diag%PM25bc(I,J,L) = ( BCPI(I,J,L) + BCPO(I,J,L) )
     &                               * ( 1013.25_fp  / PMID(I,J,L) )
     &                               * ( T(I,J,L)    / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF
         ! PM2.5 OC
         IF ( State_Diag%Archive_PM25oc  ) THEN
            State_Diag%PM25oc(I,J,L) = ( OCPO(I,J,L)
     &                               +   OCPI(I,J,L) * ORG_GROWTH  )
     &                               * ( 1013.25_fp  / PMID(I,J,L) )
     &                               * ( T(I,J,L)    / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF
         ! PM2.5 dust
         IF ( State_Diag%Archive_PM25du  ) THEN
            State_Diag%PM25du(I,J,L) = ( SOILDUST(I,J,L,1)
     &                               +   SOILDUST(I,J,L,2)
     &                               +   SOILDUST(I,J,L,3)
     &                               +   SOILDUST(I,J,L,4)
     &                               +   SOILDUST(I,J,L,5) * 0.38  )
     &                               * ( 1013.25_fp  / PMID(I,J,L) )
     &                               * ( T(I,J,L)    / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF
         ! PM2.5 sea salt
         IF ( State_Diag%Archive_PM25ss  ) THEN
            State_Diag%PM25ss(I,J,L) = ( SALA(I,J,L) * ORG_GROWTH  )
     &                               * ( 1013.25_fp  / PMID(I,J,L) )
     &                               * ( T(I,J,L)    / 298.0_fp    )
     &                               * 1.0e+9_fp
         ENDIF
         ! PM2.5 SOA
         IF ( State_Diag%Archive_PM25soa ) THEN
            State_Diag%PM25soa(I,J,L) = ( TSOA(I,J,L)   * ORG_GROWTH
     &                                +   ISOA(I,J,L)   * ORG_GROWTH
     &                                +   ASOA(I,J,L)   * ORG_GROWTH
     &                                +   SOAS(I,J,L)   * ORG_GROWTH
     &                                +   ISOAAQ(I,J,L) * ORG_GROWTH  )
     &                                * ( 1013.25_fp    / PMID(I,J,L) )
     &                                * ( T(I,J,L)      / 298.0_fp    )
     &                                * 1.0e+9_fp
         ENDIF
#endif

I first tried to add PM25su, PM25ni, etc. to HISTORY.rc, but the model soon complained that these quantities have not been registered in history_mod.F90. Could you suggest the easiest way to output them?

I also wonder whether the fact that State_Diag%PM25soa(I,J,L) includes both SOAS and ISOAAQ would cause the double-counting problem described on this page?

For the PM2.5 components, I found that the aforementioned code contains sulfate, nitrate, BC, OC, dust, sea salt, and SOA. What about POA? I searched for AerMassPOA in aerosol_mod.F and found the following.

           !--------------------------------------
         ! AerMassPOA [ug/m3], OA:OC=2.1
         !--------------------------------------
         IF ( State_Diag%Archive_AerMassPOA ) THEN
            IF ( Is_POA ) THEN
               State_Diag%AerMassPOA(I,J,L)   = OCPO(I,J,L)
     &                                        * kgm3_to_ugm3
            ELSE
               State_Diag%AerMassPOA(I,J,L)   = ( OCPI(I,J,L)
     &                                          + OCPO(I,J,L) )
     &                                          * kgm3_to_ugm3
            ENDIF
         ENDIF

Hence POA = OCPO + OCPI = OC? I almost got lost in these classifications...

Regarding AOD, I believe I can only collect them using BPCH diagnostics such as ND49? Nevertheless, that list is somewhat incomplete. How could I collect nitrate AOD, POA AOD, SOA AOD, etc., as is done for PM2.5? Does OC = POA + SOA?

BTW, when specifying Tracers to include:, I believe I can find the number in the NDxx # field here. I just want to confirm that the number for an advected species is exactly the order in which it appears in the ADVECTED SPECIES MENU of input.geos? Besides, what's the main purpose of that menu?

[QUESTION] Some questions about the running CO2 simulation

Hi everyone,

I have a questions that don't quite understand.

TMPU1    not found in restart, keep as value at t=0
 SPHU1    not found in restart, keep as value at t=0
 PS1_WET  not found in restart, keep as value at t=0
 PS1_DRY  not found in restart, keep as value at t=0
 DELP_DRY not found in restart, set to zero

In my understanding, there is no need for a meteorological restart file at the initial time, so why do these state variables need to be initialized? Does not providing them in the restart file have an impact on the results? Could anyone help me? Thank you very much!

huoxiao

[QUESTION] Compiler error when building with PRECISION=4

Hi everyone,

In yesterday's MDS meeting I mentioned that I was having trouble building GEOS-Chem with PRECISION=4. This issue is following up on that.

When I try to build GEOS-Chem Classic with PRECISION=4, I get a compiler error while state_chm_mod.F90 is compiling. The compiler error is the following:

$ ifort -cpp -w -auto -noalign -convert big_endian -O2 -vec-report0 -fp-model source -openmp -mcmodel=medium -shared-intel -traceback -DLINUX_IFORT -DBPCH_DIAG -DBPCH_TIMESER -DBPCH_TPBC -DNC_HAS_COMPRESSION -module ../mod -I/software/include -c -free state_chm_mod.F90
state_chm_mod.F90(2982): error #5286: Ambiguous generic interface REGISTER_CHMFIELD: previously declared specific procedure REGISTER_CHMFIELD_R4_3D is not distinguishable from this declaration. [REGISTER_CHMFIELD_RFP_3D]
  SUBROUTINE Register_ChmField_Rfp_3D( am_I_Root, metadataID, Ptr2Data,  &
-------------^
compilation aborted for state_chm_mod.F90 (code 1)
../Makefile_header.mk:1277: recipe for target 'state_chm_mod.o' failed
make[5]: *** [state_chm_mod.o] Error 1

Info

  • GEOS-Chem version: 12.1.1, 12.2.1, 12.4.0 (the only versions I've checked)
  • No modifications to the source

To reproduce

  1. Create a new geosfp_4x5_standard run directory and cd into it
  2. Build with PRECISION=4
    make PRECISION=4 mpbuild
    

Attachments


Is there something I'm missing?

[FEATURE REQUEST] Only read necessary met fields to speed up simulations

I notice that CH4 simulations spend ~50% of their time on HEMCO I/O, for both global and nested settings.

Here's 1-month global 4x5 timing:

  Timer name                       DD-hh:mm:ss.SSS     Total Seconds
-------------------------------------------------------------------------------
  GEOS-Chem                     :  00-00:16:30.940           990.941
  Initialization                :  00-00:00:12.158            12.159
  Timesteps                     :  00-00:16:18.717           978.717
  HEMCO                         :  00-00:10:16.362           616.363
  All chemistry                 :  00-00:00:09.649             9.649
  => Gas-phase chem             :  00-00:00:08.747             8.747
  => FAST-JX photolysis         :  >>>>> THE TIMER DID NOT RUN <<<<<
  => All aerosol chem           :  00-00:00:00.004             0.004
  => Strat chem                 :  >>>>> THE TIMER DID NOT RUN <<<<<
  => Unit conversions           :  00-00:00:06.406             6.407
  Transport                     :  00-00:03:07.912           187.912
  Convection                    :  00-00:00:05.086             5.086
  Boundary layer mixing         :  00-00:01:51.981           111.981
  Dry deposition                :  >>>>> THE TIMER DID NOT RUN <<<<<
  Wet deposition                :  >>>>> THE TIMER DID NOT RUN <<<<<
  All diagnostics               :  00-00:00:05.363             5.363
  => HEMCO diagnostics          :  00-00:00:00.014             0.015
  => Binary punch diagnostics   :  00-00:00:00.003             0.004
  => ObsPack diagnostics        :  >>>>> THE TIMER DID NOT RUN <<<<<
  => History (netCDF diags)     :  00-00:00:05.379             5.380
  Input                         :  00-00:07:26.930           446.930
  Output                        :  00-00:00:05.371             5.372
  Finalization                  :  00-00:00:00.063             0.064

Here's 1-month nested NA 0.25x0.3125 timing:

  Timer name                       DD-hh:mm:ss.SSS     Total Seconds
-------------------------------------------------------------------------------
  GEOS-Chem                     :  00-06:13:29.189         22409.189
  Initialization                :  00-00:00:24.189            24.190
  Timesteps                     :  00-06:13:04.380         22384.380
  HEMCO                         :  00-03:09:20.634         11360.634
  All chemistry                 :  00-00:04:44.697           284.698
  => Gas-phase chem             :  00-00:04:11.651           251.651
  => FAST-JX photolysis         :  >>>>> THE TIMER DID NOT RUN <<<<<
  => All aerosol chem           :  00-00:00:00.002             0.003
  => Strat chem                 :  >>>>> THE TIMER DID NOT RUN <<<<<
  => Unit conversions           :  00-00:03:24.439           204.439
  Transport                     :  00-01:32:07.885          5527.886
  Convection                    :  00-00:02:30.030           150.031
  Boundary layer mixing         :  00-00:58:20.541          3500.542
  Dry deposition                :  >>>>> THE TIMER DID NOT RUN <<<<<
  Wet deposition                :  >>>>> THE TIMER DID NOT RUN <<<<<
  All diagnostics               :  00-00:01:14.885            74.885
  => HEMCO diagnostics          :  00-00:00:00.022             0.023
  => Binary punch diagnostics   :  00-00:00:00.010             0.011
  => ObsPack diagnostics        :  >>>>> THE TIMER DID NOT RUN <<<<<
  => History (netCDF diags)     :  00-00:01:15.064            75.064
  Input                         :  00-00:55:22.954          3322.954
  Output                        :  00-00:01:14.901            74.902
  Finalization                  :  00-00:00:00.597             0.598

The transport & PBL mixing calculation is so fast, so most time is just waiting for the slow I/O. From the List of GEOS-FP met fields, it seems to me that most of metfields are not needed by CH4-only simulation. Just need to keep the met variables associated with "CH4 simulation", "Advection" and "PBL mixing".

Is it possible to skip the reading of unused met variables via FlexGrid? How much speed-up can we expect? If the real bottleneck is opening & closing files, I can also merge the actually-used variables into a single file.
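
If merging is the way to go, here is a minimal xarray sketch (file names and the variable list are illustrative only; the actual list depends on which met fields the CH4 simulation reads for advection, PBL mixing, and the CH4-specific extensions):

import xarray as xr

# Illustrative subset of met variables; adjust to what the simulation actually reads
KEEP = ["PS", "U", "V", "OMEGA", "PBLH", "T", "QV"]

ds = xr.open_mfdataset("GEOSFP.20160701.*.025x03125.NA.nc")
ds[KEEP].to_netcdf("GEOSFP.20160701.CH4subset.025x03125.NA.nc")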

[QUESTION] Why isn't all speciated AOD output from GC 12.2.1?

It might be a stupid question, but it has really been bothering me. I have run GC 12.2.1 and would like to collect the different components of AOD. However, I found that I can only collect an incomplete list of speciated AOD; see the following BPCH diagnostics for example. I wonder why nitrate AOD and SOA AOD are missing? Do they contribute so little that we can ignore them? Similar to my previous question, does OC mean POA when using complexSOA_SVPOA, or does OC include all POA and SOA? I feel it would be great to let the AOD have the same (or more, due to the PM diameter issue) number of components as PM. BTW, I do not think you provide a TotalAOD diagnostic? I think I need to sum all AOD components to obtain that, which necessitates a complete list of speciated AOD including nitrate AOD and SOA AOD.

Data variables:
    OD_MAP_S_OPSO4550  (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPBC550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPOC550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPSSa550  (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPSSc550  (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD       (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD1550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD2550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD3550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD4550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD5550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD6550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>
    OD_MAP_S_OPD7550   (lon, lat, lev) float32 dask.array<shape=(144, 91, 47), chunksize=(144, 91, 47)>

[BUG/ISSUE] Restart file generated for unexpected date

Describe the bug
In a simulation starting 20130101 and ending 20131231, a restart file was produced for 20131201, although the restart file was requested for output at the end of the simulation. However, a HEMCO restart file was produced for the correct date. I can pick up my next simulation starting from 20131201, but this was an unexpected result and I'm not sure what's caused it.

To Reproduce
Steps to reproduce the behavior:

  1. In input.geos file:
    Start YYYYMMDD, hhmmss : 20130101 000000
    End YYYYMMDD, hhmmss : 20131231 000000
  2. In HISTORY.rc file:
    Restart.filename: './GEOSChem.Restart.%y4%m2%d2_%h2%n2z.nc4',
    Restart.format: 'CFIO',
    Restart.frequency: 'End',
    Restart.duration: 'End',
    Restart.mode: 'instantaneous'
    Restart.fields: 'SpeciesRst_?ALL? ', 'GIGCchem',

Expected behavior
A restart file was expected to be produced at the end of the simulation (20131231), rather than at 20131201.

Required information

  • GEOS-Chem version you are using: 12.5.0
  • Compiler version that you are using: gfortran 4.8.5
  • netCDF and netCDF-Fortran library version: 4.3.3.1
  • Computational environment: local computer
  • Are you using "out of the box" code, or have you made modifications? out of the box

Input and log files to attach
For more info, see: http://wiki.geos-chem.org/Submitting_GEOS-Chem_support_requests

  • The GEOS-Chem "Classic" log file
    run.log

[BUG/ISSUE] RRTMG does not compile with GNU Fortran

The GEOS-Chem Support Team discovered that the RRTMG code in GEOS-Chem does not compile with the gfortran compiler. The compile error is included below for reference. While users still have the option to use the IFORT compilers, this is somewhat problematic for our movement towards using open-source software. For example, if users run GEOS-Chem on the Amazon Web Services cloud computing environment, we recommend that they use gfortran. We’ve also been using gfortran 7.1.0 for our unit tests because that compiler tends to be more strict.

We wanted to alert you to this unresolved issue in case anyone is interested in pursuing a fix with the folks at AER and/or updating RRTMG in GEOS-Chem. The GCST typically tries to remain hands-off with third party code (e.g. RRTMG, ISORROPIA) in GEOS-Chem. It appears that RRTMG v3.0 was implemented in GEOS-Chem. The latest RRTMG version is 3.3, so it may be that the issue has been resolved on AER’s end but that’s not clear.

rrtmg_lw_k_g.F90:4298:0:

       subroutine lw_kgb03

note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without

ar crs librad.a rrsw_cld.o rrlw_kg01.o mcica_random_numbers.o rrlw_kg09.o mcica_subcol_gen_sw.o rrlw_wvn.o rrsw_kg28.o rrtmg_sw_setcoef.o rrsw_kg23.o parkind.o rrlw_cld.o rrlw_ncpar.o rrsw_kg29.o rrlw_tbl.o rrsw_kg17.o rrlw_kg11.o rrtmg_sw_spcvmc.o rrsw_tbl.o rrtmg_sw_init.o parrrtm.o rrtmg_sw_k_g.o rrlw_kg04.o rrsw_kg18.o rrlw_kg02.o rrtmg_lw_k_g.o rrtmg_lw_init.o rrtmg_lw_taumol.o rrsw_con.o rrtmg_lw_rtrnmc.o rrsw_kg19.o rrtmg_sw_rad.o rrsw_kg21.o rrsw_kg27.o rrlw_kg16.o rrsw_kg26.o rrsw_ref.o rrtmg_lw_setcoef.o rrtmg_sw_reftra.o rrsw_aer.o rrsw_vsn.o rrlw_kg10.o rrlw_ref.o test_arr_mult.o rrlw_kg12.o rrlw_con.o parrrsw.o rrsw_kg22.o rrsw_kg20.o rrlw_vsn.o rrlw_kg15.o rrtmg_sw_taumol.o rrsw_kg25.o rrsw_kg16.o rrlw_kg03.o rrtmg_lw_cldprmc.o rrlw_kg05.o rrlw_kg14.o rrtmg_sw_cldprmc.o rrlw_kg06.o rrtmg_sw_vrtqdr.o rrlw_kg13.o rrlw_kg07.o rrlw_kg08.o rrsw_wvn.o rrtmg_lw_rad.o rrsw_kg24.o mcica_subcol_gen_lw.o
ar: rrsw_cld.o: No such file or directory
make[4]: *** [lib] Error 1
make[4]: Leaving directory `/n/home05/msulprizio/GC/Code.Dev/GeosRad'
make[3]: *** [librad] Error 2
make[3]: Leaving directory `/n/home05/msulprizio/GC/Code.Dev/GeosCore'
make[2]: *** [lib] Error 2
make[2]: Leaving directory `/n/home05/msulprizio/GC/Code.Dev/GeosCore'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/n/home05/msulprizio/GC/Code.Dev/GeosCore'
make: *** [all] Error 2

[FEATURE REQUEST] Enable "make fileclean" to remove netCDF diagnostics saved in a subdirectory?

I found that GEOS-Chem 12.2.1 can archive netCDF diagnostics in a subdirectory (e.g. OutputDir) under the run directory, which is a great feature for keeping the run directory clean and tidy! Nevertheless, I noted that make fileclean cannot remove those netCDF diagnostics. While this issue does not bother me too much, would it be possible to improve make fileclean (or essentially make dataclean) so as to keep it consistent with the wiki description?

[BUG/ISSUE] Number of ND64 tracers gets set to zero in input_mod.F

Describe the bug
The ND64 bpch radiative flux diagnostics are not printed out, even when ND64 is turned on in input.geos.

To Reproduce

  1. Create a geosfp_4x5_standard run dir.
  2. make -j8 all
  3. make -j8 run

Expected behavior
The ND64 tracers should be saved to the bpch file, and metadata printed to tracerinfo.dat.

Required information

  • GEOS-Chem 12.6.0, in bugfix/RRTMG branch
  • gfortran 9.2.0 via Spack
  • netcdf 4.7.0
  • netcdf-fortran 4.4.5

Input and log files to attach

---------------
Diag    L   Tracers being saved to disk
ND05   72   1 - 20
ND06    1   1 -  4
ND07   72   1 - 15
ND08    1   1 -  4
ND11    1   1 -  5
ND13   72   1 -  1
ND21   72   1 - 60
ND22   72   1 - 78
ND31   73   1 -  1
ND32   72   1 -  1
ND38   72   1 -162
ND39   72   1 -162
ND42   72   1 - 25
ND43   72   1 -  4
ND45   72   1 -162
ND54   72   1 -  1
ND55   72   1 -  3
ND64   72   1  - 0  **NOTE: Should be 1 - 3 ***
ND66   72   1 -  7
ND67   72   1 - 23
ND68   72   1 -  8

[BUG/ISSUE] Typo in julday_mod.F which may lead to failure when a simulation spans two centuries

Hi all,

TL;DR

A typo in the Julian date calculation leads to an incorrect simulation length and raises an error for simulations that span two centuries, e.g. from 2099 to 2100. Fortunately, the case from 1999 to 2000 is unaffected.

I discovered the typo in julday_mod.F upon checking with the original algorithm in the literature.

Description

My colleague encounters this error when he runs a simulation from 01-Dec-2099 to 01-Jan-2100:

GEOS-Chem ERROR: No diagnostic output will be created for collection: 
"Restart"!  Make sure that the length of the simulation as specified in 
"input.geos" (check the start and end times) is not shorter than the frequency 
setting in HISTORY.rc!  For example, if the frequency is 010000 (1 hour) but 
the simulation is set up to run for only 20 minutes, then this error will occur.
 -> at History_ReadCollectionData (in module History/history_mod.F90)

 -> ERROR occurred at (or near) line     67 of the HISTORY.rc file

Julian date algorithm

Upon investigation, I checked the book "Practical Astronomy With Your Calculator", Third Edition, by Peter Duffett-Smith (1992). Here is the relevant algorithm used in calculating Julian date:
[Screenshot: Julian date algorithm from Duffett-Smith (1992)]

Typo in calculating Julian date

The bug is in line 110 of GeosUtil/julday_mod.F:

      ! Compute YEAR and MONTH1
      IF ( ( MM == 1 ) .OR. ( MM == 2 ) ) THEN
         YEAR1  = YYYY - 1
         MONTH1 = MM   + 12
      ELSE
         YEAR1  = YYYY
         MONTH1 = MM
      ENDIF

      ! Compute the "A" term.
      X1 = DBLE( YYYY ) / 100.0d0
      A  = MINT( X1 )

YYYY (equivalent to y in the book) in Line 110 should instead be YEAR1 (equivalent to y' in the book), i.e.

      X1 = DBLE( YEAR1 ) / 100.0d0

This affects the calculation of SimLengthSec in History/history_mod.F90:

    ! Compute the Astronomical Julian Date corresponding to the yyyymmdd
    ! and hhmmss values at the start and end of the simulation, which are
    ! needed below. This can be done outside of the DO loop below.
    CALL Compute_Julian_Date( yyyymmdd,     hhmmss,     JulianDate    )
    CALL Compute_Julian_Date( yyyymmdd_end, hhmmss_end, JulianDateEnd )

    ! Compute the length of the simulation, in elapsed seconds
    SimLengthSec = NINT( ( JulianDateEnd - JulianDate ) * SECONDS_PER_DAY )

and, in our case, gives 2592000 seconds (=30 days) instead of 2678400 seconds (= 31 days). Meanwhile the FileWriteAlarm remains correct (31 days), so it gives the following error message:
    !-----------------------------------------------------------------
    ! ERROR CHECK: Make sure that the length of the simulation is
    ! not shorter than the requested "File Write" interval. This
    ! will prevent simulations without diagnostic output.
    !-----------------------------------------------------------------
    IF ( SimLengthSec < Container%FileWriteAlarm ) THEN

       ! Construct error message
       ErrMsg =                                                        &
          'No diagnostic output will be created for collection: "' //  &
          TRIM( CollectionName(C) ) // '"! Make sure that the ' //     &
          'length of the simulation as specified in "input.geos" ' //  &
          '(check the start and end times) is not shorter than ' //    &
          'the frequency setting in HISTORY.rc! For example, if ' //   &
          'the frequency is 010000 (1 hour) but the simulation ' //    &
          'is set up to run for only 20 minutes, then this error ' //  &
          'will occur.'

When will this bug lead to simulation failure?

I think this bug only affects simulations that span different centuries, e.g. from 1899 to 1900, 2099 to 2100, or 2199 to 2200. Fortunately, the case from 1999 to 2000 is unaffected, because the calculation of the "B" term, B = 2 - A + INT( A/4 ), is exactly the same as if the bug were not there. Therefore, most simulations would not encounter this issue.
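
A small Python transcription of the algorithm above (simplified to Gregorian-calendar dates only) reproduces the 30-day vs. 31-day discrepancy for this case:

def julian_date(yyyy, mm, dd, buggy_a_term=False):
    """Duffett-Smith Julian date (Gregorian dates only).

    buggy_a_term=True reproduces the typo in julday_mod.F, where the "A"
    term is computed from YYYY instead of the month-adjusted YEAR1.
    """
    if mm in (1, 2):
        year1, month1 = yyyy - 1, mm + 12
    else:
        year1, month1 = yyyy, mm
    a = (yyyy if buggy_a_term else year1) // 100
    b = 2 - a + a // 4
    c = int(365.25 * year1)
    d = int(30.6001 * (month1 + 1))
    return b + c + d + dd + 1720994.5

start = julian_date(2099, 12, 1)
print(julian_date(2100, 1, 1) - start)                     # 31.0 (correct)
print(julian_date(2100, 1, 1, buggy_a_term=True) - start)  # 30.0 (with the typo)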

Relevant configurations

GEOS-Chem version 12.2.0
Excerpt from HISTORY.rc:

#==============================================================================
# %%%%% THE Restart COLLECTION %%%%%
#
# GEOS-Chem restart file fields
#
# Available for all simulations
#==============================================================================
  Restart.template:           '%y4%m2%d2_%h2%n2z.nc4',
  Restart.format:             'CFIO',
  Restart.frequency:          'End',
  Restart.duration:           'End',
  Restart.mode:               'instantaneous'
  Restart.fields:             'SpeciesRst_?ALL?               ', 'GIGCchem',
                              'Chem_H2O2AfterChem             ', 'GIGCchem',
                              'Chem_SO2AfterChem              ', 'GIGCchem',
                              'Chem_DryDepNitrogen            ', 'GIGCchem',
                              'Chem_WetDepNitrogen            ', 'GIGCchem',
                              'Chem_KPPHvalue                 ', 'GIGCchem',
                              'Met_DELPDRY                    ', 'GIGCchem',
                              'Met_PS1WET                     ', 'GIGCchem',
                              'Met_PS1DRY                     ', 'GIGCchem',
                              'Met_SPHU1                      ', 'GIGCchem',
                              'Met_TMPU1                      ', 'GIGCchem',
::

Excerpt from input.geos:

GEOS-CHEM UNIT TEST SIMULATION: merra2_2x25_tropchem
------------------------+------------------------------------------------------
%%% SIMULATION MENU %%% :
Start YYYYMMDD, hhmmss  : 20991201 000000
End   YYYYMMDD, hhmmss  : 21000101 000000
Run directory           : ./
Root data directory     : /project/TGABI/GEOS-Chem/ExtData
Global offsets I0, J0   : 0 0
------------------------+------------------------------------------------------

Best regards,
Joey
02 Sep 2019

[QUESTION] Where are variables like `IS_POA` defined?

Hi,

I have a small question about some code in aerosol_mod.F, in which I found that you define id_POA1 and id_POA2 at the top of the module and then use them to set the value of IS_POA as IS_POA = ( id_POA1 > 0 .AND. id_POA2 > 0 ). However, I did not find any initialization of id_POA1 or id_POA2 before they are used to define IS_POA. I wonder whether this gives IS_POA a value of .FALSE.?

Then again, id_POA1 and id_POA2 may be defined in other modules before entering subroutine aerosol_conc, and I am simply missing it? Is there a simple rule for average users to know the values of IS_xxxx? For instance, if I have included POA1 in the Advected Species menu of input.geos, will IS_POA then hold the value .TRUE.?

Thanks in advance!

Yours faithfully,
Fei

[QUESTION] How does GEOS-Chem apply scale factors to CO2 emissions?

Ask your question here
I am running GEOS-Chem v12.4.0 with only the ODIAC emission inventory. I have some questions about it.

  1. The ODIAC data are monthly and there is no scale factor entry in the HEMCO_Config.rc file. I want to know how HEMCO scales the monthly ODIAC data to emissions per emission timestep.
  2. In which routine are the ODIAC emissions added to the CO2 concentration?
  3. If I use the adjoint method to solve for the optimal emission scale factors, how can I multiply the optimal scale factor by the emissions on the model grid, given that the lat/lon grid of the model simulation (2x2.5) is inconsistent with the lat/lon grid of the ODIAC emissions (1x1)?

Thanks!
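
Regarding question 1: HEMCO provides emissions as fluxes in kg/m2/s, so, assuming the monthly flux is applied unchanged throughout the month, the mass added to a grid cell per emission timestep follows from flux x area x timestep. A sketch with illustrative numbers:

# Illustrative numbers only
flux_kg_m2_s = 2.0e-9     # monthly-mean CO2 flux read from ODIAC for this cell
cell_area_m2 = 9.5e7      # area of a 0.1 x 0.1 degree cell at ~40N
dt_s         = 20 * 60    # emission timestep of 20 minutes

mass_kg = flux_kg_m2_s * cell_area_m2 * dt_s
print(mass_kg)            # kg of CO2 emitted into this cell during one timestep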

[FEATURE REQUEST] Minor polishing changes to CMakeLists

Here is a list of minor changes to the CMakeLists that should be done prior to 12.6. These are mostly polish and shouldn't change any behaviour.

  • rename targets (e.g. GeosUtil2 -> GeosUtil)
  • move environment variables from find package modules to CMAKE_PREFIX_PATH
  • add mechanism to specify build directory settings without a build directory (to help with things like geoschem/GCHP#36)
  • add RelWithDebInfo build type
  • use cached variables in CMakeLists
  • remove PRECISION=R8 flag from CMakeLists
  • simplify CMakeLists and remove old things from CMakeScripts

I'll follow up with a PR in a day or two.

[DISCUSSION] Allow `conda install geoschem` ?

Just learned that making a new conda package is not too difficult (just made one for xesmf). For example, see the config file and build script for ESMF. It shows how to handle MPI/NetCDF dependencies. It is possible to follow a similar pattern to allow:

conda install -c conda-forge geoschem
conda install -c conda-forge gchp

ESMF is already on conda which makes things a lot simpler.

How does it work?
Conda ships its own compilers: gcc/gfortran by default on Linux and clang on Mac. The package is pre-compiled and stored on Anaconda Cloud. Users then run conda install to download the compiled binary.

Limitations?

No compile-time configurations allowed. However, with FlexGrid I believe there are almost no compile-time flags.

Versioning is not a big concern as you can compile & upload all major versions, just like other packages.

Compare to container?

Pros: Conda is more lightweight and doesn't require Docker/Singularity installed.
Cons: Container environment also allows you to re-compile from source.

Compare to Spack?

Frankly, the two approaches are quite similar... They can both install in user space without root permission.
Pros: More people know how to use conda (for Python env) than Spack; conda install is faster, as the package is pre-compiled.
Cons: Spack allows fine-tuning compile flags and using other compilers (e.g. ifort).
To get optimal performance, especially on HPC systems, Spack is still the best choice. But if you just want a reasonable performance with gfortran -O3, conda is good enough.

Does this sound useful for users?

@LiamBindle you might be interested in this. Seems a fun exercise.

[BUG/ISSUE] netCDF installed, but not properly detected

Hi,

somehow I do not get netCDF detected by the makefile system, although it is installed. I have Linux labs 3.16.0-4-amd64 #1 SMP Debian 3.16.51-3 (2017-12-13) x86_64 GNU/Linux and the geos-chem master branch.

Any help, please ?

[email protected]:~/work/air-pollution-modelling/GEOS-Chem/GEOS_master/.make ncdfcheck
make[1]: Entering directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/GeosCore'
/bin/bash: /nc-config: No such file or directory
grep: /netcdf.inc: No such file or directory
make[2]: Entering directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/GeosCore'
/bin/bash: /nc-config: No such file or directory
grep: /netcdf.inc: No such file or directory
make[3]: Entering directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/NcdfUtil'
/bin/bash: /nc-config: No such file or directory
grep: /netcdf.inc: No such file or directory
gfortran -cpp -w -std=legacy -fautomatic -fno-align-commons -fconvert=big-endian -fno-range-check -O3 -funroll-loops -fopenmp -mcmodel=medium -fbacktrace -g -DLINUX_GFORTRAN -DBPCH_TIMESER -DBPCH_TPBC -DUSE_REAL8 -J../mod -I -c -ffree-form -ffree-line-length-none charpak_mod.F90
Warning: Nonexistent include directory "-c"
/usr/lib/gcc/x86_64-linux-gnu/4.9/../../../x86_64-linux-gnu/crt1.o: In function `_start':
/build/glibc-6V9RKT/glibc-2.19/csu/../sysdeps/x86_64/start.S:118: undefined reference to `main'
collect2: error: ld returned 1 exit status
../Makefile_header.mk:1277: recipe for target 'charpak_mod.o' failed
make[3]: *** [charpak_mod.o] Error 1
make[3]: Leaving directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/NcdfUtil'
Makefile:322: recipe for target 'libnc' failed
make[2]: *** [libnc] Error 2
make[2]: Leaving directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/GeosCore'
Makefile:334: recipe for target 'ncdfcheck' failed
make[1]: *** [ncdfcheck] Error 2
make[1]: Leaving directory '/home/milias/work/air-pollution-modelling/GEOS-Chem/GEOS_master/GeosCore'
Makefile:88: recipe for target 'ncdfcheck' failed
make: *** [ncdfcheck] Error 2
[email protected]:~/work/air-pollution-modelling/GEOS-Chem/GEOS_master/.
.
.
[email protected]:~/work/air-pollution-modelling/GEOS-Chem/GEOS_master/.which nc-config
/usr/bin/nc-config
[email protected]:~/work/air-pollution-modelling/GEOS-Chem/GEOS_master/.nc-config --all

This netCDF 4.1.3 has been built with the following features:

  --cc        -> gcc
  --cflags    ->  -I/usr/include -DgFortran
  --libs      -> -L/usr/lib -lnetcdf

  --cxx       -> g++
  --has-c++   -> yes

  --fc        -> gfortran
  --fflags    -> -g -O2 -I/usr/include
  --flibs     -> -L/usr/lib -lnetcdff -lnetcdf
  --has-f77   -> yes
  --has-f90   -> yes

  --has-dap   -> yes
  --has-nc2   -> yes
  --has-nc4   -> yes
  --has-hdf5  -> yes
  --has-hdf4  -> no
  --has-pnetcdf-> no
  --has-szlib ->

  --prefix    -> /usr
  --includedir-> /usr/include
  --version   -> netCDF 4.1.3

[email protected]:~/work/air-pollution-modelling/GEOS-Chem/GEOS_master/.

[QUESTION] Error: make realclean

Hi there,

I am new to GEOS-Chem. When I tried to run make realclean in my run directory on Ubuntu, the errors below appeared. Do you guys have any experience with this?

make[4]: Entering directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/KPP/Standard'
ifort: warning #10315: specifying -lm before files may supersede the Intel(R) math library and affect performance
grep: path/to/NetCDF-Fortran/library/files/include/netcdf.inc: No such file or directory
rm -f *.o *.mod geos
make[4]: Leaving directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/KPP/Standard'
make[4]: Entering directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/KPP/SOA_SVPOA'
make[4]: *** No rule to make target 'clean'. Stop.
make[4]: Leaving directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/KPP/SOA_SVPOA'
Makefile:140: recipe for target 'realclean' failed
make[3]: *** [realclean] Error 2
make[3]: Leaving directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/KPP'
Makefile:359: recipe for target 'realclean' failed
make[2]: *** [realclean] Error 2
make[2]: Leaving directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0/GeosCore'
Makefile:109: recipe for target 'realclean' failed
make[1]: *** [realclean] Error 2
make[1]: Leaving directory '/glade/u/home/lixujin/GEOS-CHEM/Code.12.4.0'
Makefile:641: recipe for target 'realclean' failed
make: *** [realclean] Error 2

[FEATURE REQUEST] Can CMake put the GEOS-Chem version number into main.F (for log file printout)?

For GEOS-Chem "Classic", we currently use an include file with the version number (GeosCore/gc_classic_version.H). This is inlined into routine Display_Model_Info of main.F so that the version number can be printed to the log file. This prints out to the top of the log file, e.g.

*************   S T A R T I N G   G E O S - C H E M   *************

===> Mode of operation         : GEOS-Chem "Classic"
===> GEOS-Chem version         : 12.7.0
===> Compiler                  : GNU Fortran compiler (aka gfortran)
===> Parallelization w/ OpenMP : ON
===> Binary punch diagnostics  : ON
===> netCDF diagnostics        : ON
===> netCDF file compression   : SUPPORTED

But the version number that CMake uses is defined at the top of the CMakeList in the source code folder:

cmake_minimum_required(VERSION 3.5)
project(GEOS_Chem VERSION 12.7.0 LANGUAGES Fortran)

Is there a way for CMake to automatically add the version number from the CMakeList into main.F so that GEOS-Chem can print it out to the log file? We probably don't want to keep defining the version number in the CMakeList and in gc_classic_version.H, as that could lead to mismatches.

[BUG/ISSUE] Error building GEOS-Chem with TOMAS in 12.3

Hi everyone,

I've been having some trouble building GEOS-Chem with TOMAS ever since the NC_DIAG option was removed. What make command should I use to build GEOS-Chem Classic with TOMAS in 12.3?

Context

I've been trying to build from the geosfp_4x5_TOMAS15 and geosfp_4x5_TOMAS40 run directories with

make mpbuild

and

make TOMAS15=yes mpbuild                                         # for TOMAS15

Note: Before 12.3 I used to build these run directories with

make TOMAS15=yes NC_DIAG=n mpbuild                     # for TOMAS15

Problem

With both of those commands, I get the following error:

ifort -cpp -w -auto -noalign -convert big_endian -O2 -vec-report0 -fp-model source -openmp -mcmodel=medium -shared-intel -traceback -DLINUX_IFORT -DBPCH_DIAG -DBPCH_TIMESER -DBPCH_TPBC -DGEOS_FP -DGRIDREDUCED -DGRID4x5 -DTOMAS -DTOMAS15 -DNC_HAS_COMPRESSION -DUSE_REAL8 -module ../../mod -I/software/include -c -free hcox_tomas_jeagle_mod.F90
hcox_tomas_jeagle_mod.F90(160): error #6404: This name does not have a type, and must have an explicit type.   [INST] 
    Inst   => NULL()
----^
hcox_tomas_jeagle_mod.F90(161): error #6691: A pointer dummy argument may only be argument associated with a pointer.   [INST]
    CALL InstGet ( ExtState%TOMAS_Jeagle, Inst, RC )
------------------------------------------^
hcox_tomas_jeagle_mod.F90(169): error #6460: This is not a field name that is defined in the encompassing structure.   [TC1]
    Inst%TC1 = 0.0_hp
---------^
hcox_tomas_jeagle_mod.F90(170): error #6460: This is not a field name that is defined in the encompassing structure.   [TC2]
    Inst%TC2 = 0.0_hp
---------^
hcox_tomas_jeagle_mod.F90(221): error #6404: This name does not have a type, and must have an explicit type.   [TOMAS_DBIN] 
             rwet=TOMAS_DBIN(k)*1.0E6*BETHA/2. ! convert from dry diameter [m] to wet (80% RH) radius [um]
------------------^
hcox_tomas_jeagle_mod.F90(235): error #6404: This name does not have a type, and must have an explicit type.   [DRFAC]
             dfo=dfo*DRFAC(k)*BETHA  !hemco units???? jkodros
---------------------^
hcox_tomas_jeagle_mod.F90(277): error #6911: The syntax of this substring is invalid.   [TC2]
       CALL HCO_EmisAdd( am_I_Root, HcoState, Inst%TC2(:,:,:,K), &

I get a similar problem when I try to compile with gfortran. Is there something I'm missing?

Thanks in advance,

Liam

[FEATURE REQUEST] Can GEOS-Chem get the area on the input grid?

Hi there,

I am trying to compute the total CO fire emission over one year from GFAS. The GFAS fire emissions have units of kg/m2/s at 0.1° x 0.1° resolution. I was wondering whether GEOS-Chem provides anything to obtain the grid-cell area for a given grid and latitude.

Lixu
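
For reference, GEOS-Chem and HEMCO do carry grid-box areas internally, but if all that is needed is the area of a regular latitude-longitude cell (e.g. 0.1° x 0.1°), it can also be computed directly from spherical geometry. Below is a minimal standalone sketch (not GEOS-Chem code; the Earth radius and cell edges are illustrative values only):

      ! Minimal sketch: area of a regular lat-lon grid cell on a sphere,
      ! A = R^2 * dLon * ( sin(latN) - sin(latS) ), with dLon in radians.
      PROGRAM Cell_Area_Sketch
        IMPLICIT NONE
        REAL*8, PARAMETER :: PI      = 3.141592653589793d0
        REAL*8, PARAMETER :: D2R     = PI / 180d0
        REAL*8, PARAMETER :: R_EARTH = 6.375d6     ! [m], assumed value
        REAL*8 :: latS, latN, dLon, area

        latS = 30.0d0 * D2R        ! southern cell edge
        latN = 30.1d0 * D2R        ! northern cell edge
        dLon =  0.1d0 * D2R        ! cell width in longitude

        area = R_EARTH**2 * dLon * ( SIN( latN ) - SIN( latS ) )

        ! Multiplying a flux in kg/m2/s by this area [m2] and by the
        ! number of seconds in the averaging period gives a mass in kg.
        PRINT *, 'Cell area [m2]: ', area
      END PROGRAM Cell_Area_Sketch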

[FEATURE REQUEST] NetCDF diagnostics for UV Fluxes from FAST-JX

New Feature:

GEOS-Chem 12.7.0 will now contain the following additional diagnostics:

  1. UVFluxDiffuse -- diffuse UV fluxes from FAST-JX (W/m2)
  2. UVFluxDirect -- direct UV fluxes from FAST-JX (W/m2)
  3. UVFluxNet -- net UV fluxes from FAST-JX (W/m2)

Fluxes will be computed for each of the FAST-JX wavelength bins:

187nm, 191nm, 193nm, 196nm, 202nm, 208nm,
211nm, 214nm, 261nm, 267nm, 277nm, 295nm,
303nm, 310nm, 316nm, 333nm, 380nm, 574nm

These netCDF diagnostics correspond to the ND64 bpch diagnostics.

Validation:

Comparing the UVFlux* diagnostics to the ND64 diagnostics results in differences at the level of numerical noise.

Here is a plot of the diffuse UV flux at 303nm from a short (1-hour) test run at the top of the atmosphere (level 72):
[figure: uvflux1]

And the corresponding zonal mean flux:
[figure: uvflux2]

The other variables show similar differences to the plots shown above.

References:

  1. Issue #67
  2. http://wiki.seas.harvard.edu/geos-chem/index.php/Benchmark/GEOS-Chem_12.5.0#Validation_of_netCDF_diagnostics

[BUG/ISSUE] GEOS-Chem cannot find netcdf.inc

Dear developers,

I'm trying to build the current 12.5.0 dev master branch with gcc-fortran 9.1.0 (on Arch Linux). Common scientific libraries such as netCDF (including nc-config) are present.

FC=gfortran make

fails miserably:

make[4]: Entering directory './geos-chem/NcdfUtil'
/bin/bash: /nc-config: No such file or directory
grep: /netcdf.inc: No such file or directory
gfortran -cpp -w -std=legacy -fautomatic -fno-align-commons -fconvert=big-endian -fno-range-check -O3 -funroll-loops -fopenmp -mcmodel=medium -fbacktrace -g -DLINUX_GFORTRAN -DBPCH_TIMESER -DBPCH_TPBC -DUSE_REAL8 -J../mod -I -c -ffree-form -ffree-line-length-none charpak_mod.F90
f951: Warning: Nonexistent include directory '-c' [-Wmissing-include-dirs]
/usr/bin/ld: /usr/lib/gcc/x86_64-pc-linux-gnu/9.1.0/../../../../lib/Scrt1.o: in function `_start':
(.text+0x24): undefined reference to `main'
collect2: error: ld returned 1 exit status
make[4]: *** [../Makefile_header.mk:1008: charpak_mod.o] Error 1

I have probably missed an install guide with some obvious advice. Could you please help me further?

[BUG/ISSUE] HEMCO interpolation error when reading Yuan-processed MODIS LAI

Bob Yantosca discovered an error when using the Yuan-processed MODIS LAI data in GEOS-Chem "Classic" simulations. HEMCO cannot locate the proper bounding timestamps for the interpolation.

Example: HEMCO should have detected that simulation date 20160701 was bounded by data timestamps 20160624 and 20160704, and should have interpolated accordingly. Instead, HEMCO was detecting that simulation date 20160701 was bounded by data timestamps 20160602 and 20160610. This is clearly wrong.
Christoph Keller traced this error to routine GET_TIMEIDX in module HEMCO/Core/hcoio_read_std_mod.F90, which is only used for GEOS-Chem "Classic" simulations. The search algorithm was continuing to look for timestamps further back in time than was necessary.

If you need to run GEOS-Chem simulations with the Yuan-processed MODIS LAI data, Christoph Keller has submitted a quick fix: add the IF statement shown at the end of the code below to hcoio_read_std_mod.F90. This will force the timestamp search algorithm to terminate before it goes too far back in time.

       ! Check if we need to continue search. Even if the call above
       ! returned a time slice, it may be possible to continue looking
       ! for a better suited time stamp. This is only the case if
       ! there are discontinuities in the time stamps, e.g. if a file
       ! contains monthly data for 2005 and 2020. In that case, the
       ! call above would return the index for Dec 2005 for any 
       ! simulation date between 2005 and 2020 (e.g. July 2010),
       ! whereas it makes more sense to use July 2005 (and eventually
       ! interpolate between the July 2005 and July 2020 data).
       ! The IsClosest command checks if there are any netCDF time
       ! stamps (prior to the selected one) that are closer to each
       ! other than the difference between the preferred time stamp
       ! prefYMDhm and the currently selected time stamp 
       ! availYMDhm(tidx1a). In that case, it continues the search by
       ! updating prefYMDhm so that it falls within the range of the
       ! 'high-frequency' interval.
       ! -------------------------------------------------------------
       ExitSearch = .FALSE.
       IF ( Lct%Dct%Dta%CycleFlag == HCO_CFLAG_EXACT ) THEN
          ExitSearch = .TRUE.
       ELSE IF ( tidx1a > 0 ) THEN 
          ExitSearch = IsClosest( prefYMDhm, availYMDhm, nTime, tidx1a )
       ENDIF 
       !### for testing, only apply to containers with 'XLAI' in the name
       IF ( INDEX( Lct%Dct%Cname, 'XLAI' ) > 0 ) THEN
          ExitSearch = .true.
       ENDIF

We will try to add a more robust fix for this issue in the near future. We did not want to add a fix that was contingent on a HEMCO container name to the standard code.

NOTE: GCHP is unaffected by this issue, as it uses the MAPL ExtData functionality to read data from disk. We have verified that GCHP reads in the Yuan-processed MODIS LAI data properly.
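
For context, once the two bounding time slices have been identified correctly, the interpolation applied between them can be thought of as a simple linear weighting in time. A minimal standalone sketch of that idea (not HEMCO's actual code; all names and values are illustrative) is:

      ! Minimal sketch: linear interpolation in time between two bounding
      ! data slices.  Times are expressed as days relative to the first slice.
      PROGRAM Time_Interp_Sketch
        IMPLICIT NONE
        REAL*8 :: t1, t2, t, w, field1, field2, fieldInterp

        t1     =  0d0     ! 20160624, first bounding timestamp
        t2     = 10d0     ! 20160704, second bounding timestamp
        t      =  7d0     ! 20160701, simulation date
        field1 =  1d0     ! data value at t1
        field2 =  2d0     ! data value at t2

        w           = ( t - t1 ) / ( t2 - t1 )            ! weight toward t2
        fieldInterp = ( 1d0 - w ) * field1 + w * field2

        PRINT *, 'Interpolated value: ', fieldInterp      ! 1.7 for this example
      END PROGRAM Time_Interp_Sketch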

[BUG/ISSUE] RRTMG builds failing on dev/12.4.0

Hi everyone,

I was testing feature/CMake rebased on dev/12.4.0 and noticed that RRTMG builds were failing for both Make and CMake build systems.

With the Make build system, I'm trying to compile geos.mp from the RRTMG run directories with

$ make mpbuild

When I do this, I get the following compiler errors:

ifort -cpp -w -auto -noalign -convert big_endian -O2 -vec-report0 -fp-model source -openmp -mcmodel=medium -shared-intel -traceback -DLINUX_IFORT -DBPCH_DIAG -DBPCH_TIMESER -DBPCH_TPBC -DRRTMG -DNC_HAS_COMPRESSION -DUSE_REAL8 -module ../mod -I/software/include -c rrtmg_rad_transfer_mod.F
rrtmg_rad_transfer_mod.F(788): error #6351: The number of subscripts is incorrect.   [YMID]
       YLAT            = State_Grid%YMid( I, J, 1 )
------------------------------------^
rrtmg_rad_transfer_mod.F(2146): error #6631: A non-optional actual argument must be present when invoking a procedure with an explicit interface.   [STATE_GRID]
      CALL Init_Surface_Rad()
-----------^
rrtmg_rad_transfer_mod.F(2635): error #6404: This name does not have a type, and must have an explicit type.   [STATE_GRID]
      ALLOCATE( CLDFMCL_LW( NGPTLW, State_Grid%NX, State_Grid%NY,
------------------------------------^
rrtmg_rad_transfer_mod.F(2635): error #6460: This is not a field name that is defined in the encompassing structure.   [NX]
      ALLOCATE( CLDFMCL_LW( NGPTLW, State_Grid%NX, State_Grid%NY,
-----------------------------------------------^
rrtmg_rad_transfer_mod.F(2635): error #6385: The highest data type rank permitted is INTEGER(KIND=8).   [NX]
      ALLOCATE( CLDFMCL_LW( NGPTLW, State_Grid%NX, State_Grid%NY,
-----------------------------------------------^
rrtmg_rad_transfer_mod.F(2635): error #6460: This is not a field name that is defined in the encompassing structure.   [NY]
      ALLOCATE( CLDFMCL_LW( NGPTLW, State_Grid%NX, State_Grid%NY,
--------------------------------------------------------------^
rrtmg_rad_transfer_mod.F(2635): error #6385: The highest data type rank permitted is INTEGER(KIND=8).   [NY]
      ALLOCATE( CLDFMCL_LW( NGPTLW, State_Grid%NX, State_Grid%NY,
--------------------------------------------------------------^
rrtmg_rad_transfer_mod.F(2636): error #6460: This is not a field name that is defined in the encompassing structure.   [NZ]
     &                      State_Grid%NZ ), STAT=AS )
---------------------------------------^
rrtmg_rad_transfer_mod.F(2636): error #6385: The highest data type rank permitted is INTEGER(KIND=8).   [NZ]
     &                      State_Grid%NZ ), STAT=AS )
---------------------------------------^

.
.
.

/tmp/ifortbySTCx.i(2802): catastrophic error: Too many errors, exiting
compilation aborted for rrtmg_rad_transfer_mod.F (code 1)

And I get this for all 4 RRTMG run directories.

Are there any changes to how I should be compiling GC+RRTMG in 12.4.0?

Thanks,

Liam

[QUESTION] IO error reading merra2 meteorological file

Hi everyone,

I am simulating CO2 from 2005-01-01 00:00 to 2005-01-01 01:00, and I downloaded the MERRA-2 data, which is placed in the ~/GEOS_CHEM/data/GEOS_2x2.5/MERRA2/2005/01 directory. The error when I run the ./geos.mp command is the following:

HEMCO: Opening /home/huoxiao/GEOS_CHEM/data/GEOS_2x2.5/MERRA2/2015/01/MERRA2.20150101.CN.2x25.nc4

In Ncop_Rd, cannot open:  /home/huoxiao/GEOS_CHEM/data/GEOS_2x2.5/MERRA2/2005/01/MERRA2.20050101.A3cld.2x25.nc4
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Code stopped from DO_ERR_OUT (in module NcdfUtil/m_do_err_out.F90) 
This is an error that was encountered in one of the netCDF I/O modules,
which indicates an error in writing to or reading from a netCDF file!

I have two problems:
1) My simulation time is 2005-01-01. Why does the model open a meteorological file for 2015:

Opening /home/huoxiao/GEOS_CHEM/data/GEOS_2x2.5/MERRA2/2015/01/MERRA2.20150101.CN.2x25.nc4

2)In "A question about GEO-Chem v 12.1.1 need netcdf4 library?", the answer is the data file maybe not exsit, but in my directory :

/home/huoxiao/GEOS_CHEM/data/GEOS_2x2.5/MERRA2/2005/01

the file exists, so why can't the program read it?

MERRA2.20150101.CN.2x25.nc4

Is there something I'm missing?

[QUESTION] KPP does not know the families

I am trying to run GEOS-Chem v12.3.2 with the standard chemistry. When I build the standard chemistry mechanism in KPP, the following problem occurs:

rm: cannot remove `*o': No such file or directory

This is KPP-2.2.

KPP is parsing the equation file.
Warning :Standard.eqn:1188: Duplicate equation: (eqn<559> = eqn<560> )
Error :gckpp.kpp:10: 'FAMILIES': Unknown command (ignored)
Error :gckpp.kpp:11: Extra parameter on command line 'POx'
Error :gckpp.kpp:11: Extra parameter on command line ':'
........
Error :gckpp.kpp:17: Extra parameter on command line ':'
Error :gckpp.kpp:17: Extra parameter on command line 'H2O2'
;
Fatal error : 222 errors and 1 warnings encountered.
Program aborted
KPP failure! Aborting.

KPP was downloaded from https://bitbucket.org/gcst/kpp/downloads/?tab=branches; I downloaded the GC_updates version.

How can I solve this problem? Thank you very much!

[QUESTION] Adding new file to the makefile to automatically compile

I am sorry to ask a question again, but I have been thinking about this one for a long time and I really don't know how to fix it.
I created a new directory, adjoint, in the GEOS-Chem root directory and put the files adj_arrays_mod.f90 and logical_adj_mod.f90 in it. I modified hco_calc_mod.F90 to use variables from adj_arrays_mod.f90. The makefile changes I made are as follows:
1. I added a new makefile in the adjoint directory with the following content:

ROOT    := ..
LIB     := $(ROOT)/lib
MOD     := $(ROOT)/mod
SOURCES := $(wildcard *.f90)
OBJECTS := $(SOURCES:.f90=.o)
MODULES := $(OBJECTS:.o=.mod)
MODULES := $(shell echo $(MODULES) | tr A-Z a-z)
MODULES := $(foreach I,$(MODULES),$(MOD)/$(I))
$(warning huoxiao$(MODULES))
LIBRARY := libADJOINT.a

include $(ROOT)/Makefile_header.mk

all: lib

lib: $(OBJECTS)
	$(AR) crs $(LIBRARY) $(OBJECTS)
	mv $(LIBRARY) $(LIB)

adj_arrays_mod.o: adj_arrays_mod.f90 logical_adj_mod.o

logical_adj_mod.o: logical_adj_mod.f90
2. I added the new library name to Makefile_header.mk in the GEOS-Chem root directory:

LINK := $(LINK) -lADJOINT
3. I think the dependency rule for hco_calc_mod.o in HEMCO/Core/Makefile,

hco_calc_mod.o : hco_calc_mod.F90   \
                 hco_arr_mod.o      \
                 hco_datacont_mod.o \
                 hco_diagn_mod.o    \
                 hco_extlist_mod.o  \
                 hco_error_mod.o    \
                 hco_filedata_mod.o \
                 hco_scale_mod.o    \
                 hco_state_mod.o    \
                 hco_types_mod.o    \
                 hco_tidx_mod.o

works the same way as the rule for emissions_mod.o in GeosCore/Makefile:

emissions_mod.o : emissions_mod.F90  \
                  bromocarb_mod.o     carbon_mod.o     \
                  co2_mod.o           global_ch4_mod.o \
                  hcoi_gc_main_mod.o  mercury_mod.o    \
                  sulfate_mod.o       tomas_mod.o      \
                  ucx_mod.o           sfcvmr_mod.o     \
                  diagnostics_mod.o   pops_mod.o

emissions_mod.o depends on files in HEMCO/Core/, but those files are not listed explicitly in its prerequisites, so I think the dependency is satisfied through the static library in the lib directory. I therefore created the libADJOINT.a static library in the same way. Finally, when I run make in rundir/merra2_2x25_co2, I get this error:
/home/huoxiao/temp/Code.12.4.0/HEMCO/Core/hco_calc_mod.F90:945: Undefined reference to ‘__adj_arrays_mod_MOD_mmscl’
Could anyone help me? Sincerely, thank you very much!

[BUG/ISSUE] GEOS-Chem compilation takes too long?

Hi there,

Sorry to bother you. I am new to GEOS-Chem. I am trying to compile for a one-year tropchem simulation by typing 'make -j4 MET=geosfp GRID=4x5 COMPILER=ifort CHEM=Tropchem' in my run directory. However, it seems to be stuck at copying ./geos.sp to ./GC_***.log.sp. It is so slow that it has been compiling for one hour and is still running. I was wondering whether this is normal, and whether there is any way to speed it up?

Cheers,
Lixu

[QUESTION] Complete steps to run a nested version? Is it possible to run a nested simulation with netCDF output?

I am trying to run a nested GEOS-Chem over China using GEOS-Chem 12.2.0 on AWS. I guess the main links I should follow are http://wiki.seas.harvard.edu/geos-chem/index.php/Setting_up_GEOS-Chem_nested_grid_simulations#How_to_run_the_0.25x0.3125_nested-grid_for_GEOS-FP and http://wiki.seas.harvard.edu/geos-chem/index.php/Creating_GEOS-Chem_run_directories#Tips_and_tricks_for_creating_run_directories. I have run a global 2x25 simulation to save BC files, and I have also regridded the restart files using xESMF. I am sorry, but I did not understand what the next steps should be, because the links above point to each other when describing how to run nested GEOS-Chem, which is somewhat confusing for me. Therefore, I tried the following:

  • Regenerate a run directory from UT
  • Change the simulation period in input.geos
  • Specify collections in HISTORY.rc
  • Copy BC files into the generated directory
  • Compile the model: make -j4 mpbuild NC_DIAG=y BPCH_DIAG=n TIMERS=1 NEST=CH (it seems BPCH_DIAG has to equal y if NEST is specified?)
  • Run the model: ./geos.mp (It runs smoothly, but I see HEMCO opening 2x25 met data…)

I have also checked the output files and found that they are at 2x25 resolution, so there must be some important step that I missed, such as setting GRID=0.25x0.3125 or some other configuration in input.geos?

[BUG/ISSUE] TOMAS40 simulations exit with a floating-point exception

We were unit testing the TOMAS40 code in the GEOS-Chem v11-02-release-candidate (soon to be finalized as 12.0.0) with the debugging flags:

make -j8 DEBUG=y BOUNDS=y FPEX=y NO_ISO=y ut &

which revealed a NaN error. This was thrown at line 903 of GeosCore/tomas_mod.F:

            ERR_MSG = 'After COND_NUC'
            ! check for NaN and Inf (win, 10/4/08)
            do jc = 1, icomp-1
               ERR_IND(1) = I
               ERR_IND(2) = J
               ERR_IND(3) = L
               ERR_IND(4) = 0
               call check_value( Gcout(jc), ERR_IND, ERR_VAR, ERR_MSG )

!               if( IT_IS_FINITE(Gcout(jc))) then
!                  print *,'xxxxxxxxx Found Inf in Gcout xxxxxxxxxxxxxx'
!                  print *,'Location ',I,J,L, 'comp',jc
!                  call debugprint( Nkout, Mkout, i,j,l,'After COND_NUC')
!                  stop
!               endif
            enddo

This error might be caused by one of the several chemistry and emissions updates that have been added to GEOS-Chem. The GEOS-Chem Support Team has referred the error to the TOMAS development team, who will be investigating further.
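
As an aside, a NaN/Inf check like the one performed by check_value above can also be written with the standard ieee_arithmetic intrinsic module (Fortran 2003). Below is a minimal standalone sketch, not GEOS-Chem code, with all names and values purely illustrative:

      ! Minimal sketch: flag NaN or Inf values with ieee_arithmetic.
      PROGRAM Check_Finite_Sketch
        USE, INTRINSIC :: ieee_arithmetic, ONLY : ieee_is_finite, ieee_is_nan
        IMPLICIT NONE
        REAL*8 :: val

        val = 0d0
        val = val / val        ! deliberately produce a NaN at run time

        IF ( ieee_is_nan( val ) .OR. .NOT. ieee_is_finite( val ) ) THEN
           PRINT *, 'Found NaN or Inf value: ', val
        ENDIF
      END PROGRAM Check_Finite_Sketch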

[QUESTION] How to run lightning NOx online in GEOS-Chem v12.5.0?

Is there a (relatively easy) way to turn off the offline lightning NOx emissions, but still run LNOx online, in GC v12.5.0? I would like to run GC at 0.25x0.3125 resolution for 2014, with spin-up at 4x5 for 2013, but it doesn't look like the offline LNOx emissions are available for 2013 from GEOS-FP. I would still like to include lightning NOx in the spin-up run, but I am unsure how to do so. Thanks!
