
fv3atm's Introduction

fv3atm

This repository contains a driver and key subcomponents of the atmospheric component of NOAA's Unified Forecast System (UFS) weather model.

The subcomponents include:

Prerequisites

This package requires the following NCEPLIBS packages:

If the INLINE_POST cmake variable is set, the upp library will be needed:

This package also requires the following external packages:

Obtaining fv3atm

To obtain fv3atm, clone the git repository, and update the submodules:

git clone https://github.com/NOAA-EMC/fv3atm.git
cd fv3atm
git submodule update --init --recursive

Disclaimer

The United States Department of Commerce (DOC) GitHub project code is provided on an "as is" basis and the user assumes responsibility for its use. DOC has relinquished control of the information and no longer has responsibility to protect the integrity, confidentiality, or availability of the information. Any claims against the Department of Commerce stemming from the use of its GitHub project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce. The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.

fv3atm's People

Contributors

anningcheng-noaa, bensonr, binli2337, binliu-noaa, briancurtis-noaa, chunxizhang-noaa, climbfuji, deniseworthen, domheinzeller, dusanjovic-noaa, dustinswales, edwardhartnett, ericaligo-noaa, grantfirl, haiqinli, helinwei-noaa, jessicameixner-noaa, jilidong-noaa, junwang-noaa, lisa-bengtsson, mark-a-potts, mdtoynoaa, pjpegion, rmontuoro, samueltrahannoaa, shansunnoaa, smoorthi-emc, uturuncoglu, wenmeng-noaa, xiaqiongzhou-noaa


fv3atm's Issues

ice fraction needs to be truncated to values < 1.0

When running in coupled mode, the ice fraction imported from the mediator can be greater than 1. This is non-physical and results in a segmentation fault when (1-fice) is used to calculate the surface stress.

In atmos_model, the imported ice fraction needs to be bounded between zero and one:

IPD_Data(nb)%Coupling%ficein_cpl(ix) = max(zero,min(datar8(i,j),one))

This is a bugfix and is expected to change answers in the coupled model, but to have no impact on the standalone model.

parallel netcdf writes

Parallel write capability is needed in module_write_netcdf.F90. The current version of the netcdf library does not support parallel writing of compressed files, but uncompressed parallel writes should work. Here are some steps needed to implement this (a sketch follows the list):

  1. add a flag to model_configure to indicate parallel IO is desired (for now make sure this flag is set to false if compression is enabled).
  2. if parallel IO is enabled, open the file using nf90_create on all tasks and pass the optional mpi_comm and mpi_info arguments.
  3. the nf90_put_var calls need to be modified to write independent slices (defined by istart,jstart,iend,jend,kstart,kend). The ESMF_Gather call should be skipped.
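
As a rough illustration of steps 2 and 3, here is a minimal, self-contained sketch of an uncompressed parallel write, assuming a netCDF-Fortran build with MPI-IO support; the file name, dimension names/sizes, and decomposition are placeholders, not those used in module_write_netcdf.F90.

    ! Hypothetical sketch: each MPI task writes its own latitude band of one
    ! 2-D field into a single shared, uncompressed netCDF-4 file.
    program parallel_write_sketch
      use mpi
      use netcdf
      implicit none
      integer, parameter :: nx = 360, ny = 180        ! placeholder grid size
      integer :: ierr, nprocs, myrank, ncid, dimids(2), varid
      integer :: jstart, jcount
      real, allocatable :: slab(:,:)

      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)

      ! Step 2: create the file on all tasks, passing the communicator and info.
      call check( nf90_create('out.nc', ior(NF90_NETCDF4, NF90_MPIIO), ncid, &
                              comm=MPI_COMM_WORLD, info=MPI_INFO_NULL) )
      call check( nf90_def_dim(ncid, 'grid_xt', nx, dimids(1)) )
      call check( nf90_def_dim(ncid, 'grid_yt', ny, dimids(2)) )
      call check( nf90_def_var(ncid, 'tsfc', NF90_FLOAT, dimids, varid) )
      call check( nf90_var_par_access(ncid, varid, NF90_COLLECTIVE) )
      call check( nf90_enddef(ncid) )

      ! Step 3: each task writes only its own slice; no ESMF_Gather is needed.
      jcount = ny / nprocs                  ! assumes ny divides evenly among tasks
      jstart = myrank * jcount + 1
      allocate(slab(nx, jcount))
      slab = real(myrank)
      call check( nf90_put_var(ncid, varid, slab, start=[1, jstart], &
                               count=[nx, jcount]) )

      call check( nf90_close(ncid) )
      call MPI_Finalize(ierr)
    contains
      subroutine check(status)
        integer, intent(in) :: status
        integer :: ierr2
        if (status /= nf90_noerr) then
          print *, trim(nf90_strerror(status))
          call MPI_Abort(MPI_COMM_WORLD, 1, ierr2)
        end if
      end subroutine check
    end program parallel_write_sketch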

Model uses constant SST, different from SST forcing file

The output surface temperature (tsfc) over the ocean is constant in time, despite using RTGSST.1982.2012.monthly.clim.grb, which varies by month. Is this expected behavior? When doing the same run with the previous public release source code, the surface temperature over the ocean tracks the SST forcing file much more closely, as I would expect.

The figure below plots the monthly-mean tsfc (left panel) and t2m (right panel). Output is shown from a run with the source code of this repo (fv3atm) and the previous public release (fv3gfs). Data from the SST forcing file is also shown for the surface temperature plot.

[Figure tsfc_t2m_2016: monthly-mean tsfc (left) and t2m (right) from fv3atm, fv3gfs, and the SST forcing file]

Possibly related: I notice in the code, tsfc is described as the surface air temperature in a comment. Is that accurate?

Thanks for any guidance you can provide!

field name changes for cmeps integration

Two field names for the coupled model need to be changed for final cmeps integration. The field names are:

mean_zonal_moment_flx => mean_zonal_moment_flx_atm
mean_merid_moment_flx => mean_merid_moment_flx_atm

This change needs to be implemented together with a corresponding change in NEMS issue #48.

These changes will break baselines for the coupled model only, because of the name changes in the mediator restart files. Otherwise no changes are expected.

snow depth over sea-ice

This issue addresses a bug in the snow depth treatment within GFS when GFS is coupled to a sea-ice model. The sea-ice model exports snow volume, but radiation in GFS expects snow depth in mm.
Also, currently weasd (water equivalent snow depth) is set to zero over sea ice.
While weasd is simply a diagnostic, the snow albedo over sea ice could have a significant role in polar climate. The update will be in sfc_cice.f. It appears that snow volume in CICE is defined as ice_fraction * snow depth; if this is correct, then the snow depth passed to radiation should be snow volume / ice_fraction (this is what I am assuming for now unless ice experts can convince me otherwise).
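
A minimal sketch of that conversion, under the stated assumption that the exported snow volume is the grid-cell mean (ice_fraction * snow depth); the function and variable names are illustrative, not the actual sfc_cice.f code:

    ! Hypothetical sketch: recover the per-ice-area snow depth seen by radiation
    ! from the grid-cell mean snow volume, guarding against near-zero ice fraction.
    pure function snow_depth_over_ice(snow_volume, fice) result(depth)
      integer, parameter :: kp = selected_real_kind(15)   ! stand-in for kind_phys
      real(kp), intent(in) :: snow_volume   ! grid-cell mean snow volume from the ice model
      real(kp), intent(in) :: fice          ! ice fraction
      real(kp)             :: depth         ! snow depth over the ice-covered part of the cell
      if (fice > 1.0e-6_kp) then
        depth = snow_volume / fice
      else
        depth = 0.0_kp
      end if
    end function snow_depth_over_ice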

inline post does not reproduce with threads

It is found that the inline post results are not reproducible when running with threads. This is fixed by compiling post without the "-qopenmp" option when generating the post library. The POST group will have a project to make POST thread-safe.

Restarting model changes final result

We are using the "ufs_public_release" branches of fv3atm, FMS, and stochastic_physics, and ESMF 8.0.0. We are using the GFS physics option. @oliverwm1 has also been working on this issue.

Our build configuration is in configure.fv3.gnu_docker.txt
Most of the namelist we're using is contained in default_config.yml.txt, though I toggle flags to enable restart mode. The issue also occurs when the initial model state is read from a restart file.

Long story short, when we run the model for 4 hours, the result is different than if we run the model for 2 hours, write to a restart file, restart the model from that restart file, and run for 2 more hours. The result is the same when running with the dycore only, but different when physics is enabled.

However, the restart files are correctly read in and their values are correctly set in the state variables of the model. We've checked this by probing the model state variables during runtime. Despite this, the model progression changes. This implies something important is not being written out during a restart.

Earlier, when using the fv3gfs repo, we had the same issue, but we were able to correct the restart by first running the initialization in cold-start mode and then reading in the restart files at that point using Python wrapper code, followed by stepping the model forward the final 2 hours. On the newer UFS public release, this does not work.

I'd be glad to send the forcing/initial conditions/namelist files we are using if someone would like to reproduce this issue on their own machine with their own build environment.

add FMS option -DENABLE_QUAD_PRECISION in FV3

In FMS tag 2019.01, the default is no quad precision. In FMS tag 2019.01.01 and later versions, the default is to use double precision. When we update FMS to the latest version, 2020.02, the -DENABLE_QUAD_PRECISION compile option will be used so that the FV3 calculations related to grid metric terms still use quad precision.

Update in scale-aware TKE-based moist eddy-diffusion mass-flux scheme satmedmfvdifq

The following code changes are made in satmedmfvdifq.f to reduce the cold bias in the lower atmosphere:

  1. The background diffusivity (K0), which was previously reduced with increasing grid resolution and surface-layer stability, goes back to the current operational one.
  2. The minimum TKE is deduced from K0, which results in an increased minimum TKE and a consequent increase of TKE dissipative heating, especially in stable atmospheric layers.

using different POST control file for FH00 for inline post

Inline POST needs to output different products at FH00 than at other forecast hours. Currently, POST has the control file name hardcoded. The POST group is working on updating the code to pass the POST control file name into the subroutine that reads the file. Once that is done, post_gfs.F90 in fv3atm needs to be updated to pass a different POST control file at FH00. This capability is required for the GFSv16 implementation.

Optimize netcdf write component

Some modifications to the netcdf write component to improve IO speed and compression.

IO speed for uncompressed files has been improved by

  1. making sure all attributes are written before nf90_enddef is called (rather than switching in and out of define mode with nf90_enddef/nf90_redef).
  2. creating the dataset with NF90_SHARE

C384 tests on hera show that these modifications improve the write speed by 4-5x (with compression turned off by setting ideflate=0 in model_configure). Write speeds with uncompressed netcdf are now very close to nemsio.

Unfortunately, this has no effect when compression is enabled. To improve write performance with compression I have tried tuning the chunksizes and chunk cache. The write speed seems very insensitive to these parameters, however the files compress better with a larger chunk size. Changing the chunk size to be the size of the full 3d state results in about a 25% reduction in the 3d file size with ideflate=1, nbits=14.
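
For reference, here is a hedged sketch of what whole-field chunking plus deflate looks like with the netCDF-Fortran API; the file, dimension names, and sizes are placeholders, not the write component's actual grid, and ideflate=1 corresponds to deflate_level=1 below:

    ! Hypothetical sketch: define a compressed 3-D field whose chunk is the
    ! full 3-D state, as discussed above.
    program chunking_sketch
      use netcdf
      implicit none
      integer, parameter :: nx = 384, ny = 384, nz = 64   ! placeholder sizes
      integer :: ncid, dimids(3), varid

      call check( nf90_create('dyn_sketch.nc', NF90_NETCDF4, ncid) )
      call check( nf90_def_dim(ncid, 'grid_xt', nx, dimids(1)) )
      call check( nf90_def_dim(ncid, 'grid_yt', ny, dimids(2)) )
      call check( nf90_def_dim(ncid, 'pfull',   nz, dimids(3)) )

      ! Chunk size = the whole 3-D field; shuffle filter plus deflate level 1.
      call check( nf90_def_var(ncid, 'temp', NF90_FLOAT, dimids, varid,   &
                               chunksizes=[nx, ny, nz], shuffle=.true.,   &
                               deflate_level=1) )
      call check( nf90_enddef(ncid) )
      call check( nf90_close(ncid) )
    contains
      subroutine check(status)
        integer, intent(in) :: status
        if (status /= nf90_noerr) then
          print *, trim(nf90_strerror(status))
          stop 1
        end if
      end subroutine check
    end program chunking_sketch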

These changes are in PR #19

update post lib to 8.0.6

Post lib 8.0.6 is now available on Dell, Cray, and Hera. The library contains two updates:

  1. Upgrade crtm to 2.3.0
  2. Use g2tmpl 1.6.0 from operational library side

The model needs to be updated to use the latest post lib. It is expected that there will be no impact on current ufs-weather-model results.

_FillValue type and attribute mismatches in phyf*nc and dynf*nc files

_FillValue in phyf*nc and dynf*nc does not appear to be defined as the same type as the variable. This is causing "NetCDF: Not a valid data type or _FillValue type mismatch" errors when I try to combine the files using fregrid, and appears to be the same kind of issue described here:
beatrixparis/connectivity-modeling-system#38

The variable definitions are consistent in the atmos_4x files, and I was able to use fregrid to combine them without errors.

The dynf, phyf, and atmos_4x missing value attributes are also different, with dynf using missing_value, phyf using _FillValue, and atmos_4x using both.
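
One way to avoid the mismatch, sketched below for an NF90_FLOAT variable, is to write _FillValue with a value of the variable's own type before leaving define mode; the file and variable names here are placeholders, not the write component's:

    ! Hypothetical sketch: give an NF90_FLOAT variable a _FillValue of the same
    ! type, so tools such as fregrid do not see a type mismatch.
    program fillvalue_sketch
      use netcdf
      implicit none
      integer :: ncid, dimid, varid
      real    :: fill = 9.99e20     ! placeholder missing value; default real matches NF90_FLOAT

      call check( nf90_create('phyf_sketch.nc', NF90_NETCDF4, ncid) )
      call check( nf90_def_dim(ncid, 'grid_xt', 10, dimid) )
      call check( nf90_def_var(ncid, 'tsfc', NF90_FLOAT, [dimid], varid) )
      call check( nf90_put_att(ncid, varid, '_FillValue', fill) )   ! same type as the variable
      call check( nf90_enddef(ncid) )
      call check( nf90_close(ncid) )
    contains
      subroutine check(status)
        integer, intent(in) :: status
        if (status /= nf90_noerr) then
          print *, trim(nf90_strerror(status))
          stop 1
        end if
      end subroutine check
    end program fillvalue_sketch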

Can this repo be built stand-alone?

Can this repo be built stand-alone?

That will be required for CI to be set up with Travis.

CMake has the capability to build this as part of ufs_weather_model, and also in stand-alone mode.

bug in gfsphysics/physics/moninedmf_hafs.f

When the namelist option dspheat = True, the energy from turbulence dissipation is added to the heating rate from the hybrid EDMF PBL scheme. In the version modified for the HAFS application (gfsphysics/physics/moninedmf_hafs.f), a different hard-coded value is used to apply this heating (0.7 vs 0.5). This change is supposed to apply to all model levels, but as coded, it is only applied to the lowest model level. See line 1363. It should be done like lines 1349-1353 in module_bl_gfsedmf.F of the HWRF code.

(Line numbers correspond to commit 8a567812866a80d8320ecb8d946c894a91c8047c.)

Note that this error has been addressed in the merged moninedmf.f + moninedmf_hafs.f in the CCPP (see NCAR/ccpp-physics#370).

enabling szlib compression

This issue is an offshoot of #23

@junwang-noaa here are instructions for trying szlib compression.

1 - Build HDF5 (1.10.6 for best performance) with szip support, using the --with-szlib option to configure. For example:
./configure --with-szlib=/usr/local/szip-2.1.1

At the end of the configure, information will be printed about the build. You should see:
I/O filters (external): deflate(zlib),szip(encoder)

2 - Rebuild netcdf-c with that HDF5 build. NetCDF will detect that szip has been included in HDF5, and you will see this in the information at the end of the configure step:

SZIP Support:           yes
SZIP Write Support:     yes
Parallel Filters:       yes

3 - In your Fortran code, do not set the deflate settings, and instead call code like this:

      integer, parameter :: H5_SZIP_NN_OPTION_MASK = 32
      integer, parameter :: H5_SZIP_MAX_PIXELS_PER_BLOCK_IN = 32
      integer, parameter :: HDF5_FILTER_SZIP = 4
      integer :: params(2), retval

      ! Set the szip filter on the variable
      params(1) = H5_SZIP_NN_OPTION_MASK
      params(2) = H5_SZIP_MAX_PIXELS_PER_BLOCK_IN
      retval = nf90_def_var_filter(ncid, varid, HDF5_FILTER_SZIP, 2, params)
      if (retval /= nf90_noerr) stop 1

Just as with nc_def_var_deflate(), this must be called for each variable you want to be compressed.

4 - When done, you can detect filtered data with ncdump -h -s; the variable will have a special attribute like this:
datasetF32:_Filter = "4,169,32,32,2500" ;

update the time stamp at fh00 in inline post

It was found that the time stamp at fh00 in inline POST incorrectly includes minutes. The following change in io/post_gfs.F90 fixes the problem:

@@ -133,6 +133,7 @@ module post_gfs
 !
       ifhr = mynfhr
       ifmin = mynfmin
+      if (ifhr == 0) ifmin = 0
       if(mype==0) print *,'bf set_postvars,ifmin=',ifmin,'ifhr=',ifhr
       call set_postvars_gfs(wrt_int_state,mpicomp,setvar_atmfile,   &
            setvar_sfcfile)

fix the real(8) lat/lon in netcdf file

It is reported that the lat/lon values in the netcdf history file are not exactly real(8), as they are defined, in some FV3 runs. The issue was identified in fv3 32-bit runs: the lat/lon coordinates of the write component grid are converted from degrees to radians when computing the vector interpolation, and that computation is done with real(4) numbers when compiled with 32BIT=Y, so the lat/lon values are not purely real(8). The problem is fixed by using two local arrays to hold the lat/lon in radians.
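
A minimal sketch of that fix, assuming the coordinates are carried as real(8): the degree-valued coordinate arrays are left untouched and the radian values needed for the vector interpolation live in separate real(8) local arrays (the subroutine and array names are illustrative):

    ! Hypothetical sketch: keep the real(8) lat/lon intact and do the
    ! degrees-to-radians conversion in separate real(8) local arrays.
    subroutine lonlat_to_radians(lon_deg, lat_deg, lon_rad, lat_rad)
      use iso_fortran_env, only: real64
      implicit none
      real(real64), intent(in)  :: lon_deg(:), lat_deg(:)   ! history-file coordinates, degrees
      real(real64), intent(out) :: lon_rad(:), lat_rad(:)   ! local copies in radians
      real(real64), parameter   :: deg2rad = 3.14159265358979323846_real64 / 180.0_real64

      lon_rad = lon_deg * deg2rad
      lat_rad = lat_deg * deg2rad
    end subroutine lonlat_to_radians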

negative precipitation with stochastic physics when coupled

Currently, there may be negative rain or snow being passed to the ocean and ice when SPPT is enabled. This is because of the inconsistency between the partitioning of the rain/snow and the rain/snow tendency.

The fix requires modifications to ccpp_physics: FV3/ccpp/physics/physics/GFS_MP_generic.F90. There will also be a separate pull request for the IPD side. This is related to
ufs-community/ufs-s2s-model#58
Phil

possible memory leak in fv3

In the coupled s2s model, it is found that the fv3 memory use increases every 6 hours for the first 2 days, then increases slightly for the rest of the forecast. This may indicate a memory leak in the fv3 component; further investigation is needed.

tprcp unit in GFS_diagnostics.F90

Eric Aligo noticed that tprcp has unit kg/m^2 in GFS_diagnostics.F90. But from the computation in GFS_physics_driver.F90, the unit of rain should be m per physics time step (with con_p001 = 0.001 and con_day = 86400, tem = dtp * 0.001 / 86400 converts a rate in mm/day into metres accumulated over one time step of dtp seconds):

tem = dtp * con_p001 / con_day
rain1(i) = (rain0(i,1)+snow0(i,1)+ice0(i,1)+graupel0(i,1)) * tem

Diag%rain(:) = Diag%rainc(:) + frain * rain1(:) ! total rain per timestep

do i=1,im
Sfcprop%tprcp(i) = max(zero, Diag%rain(i) )


merge code changes in public release back to develop branch

When the public release branch is finalized, the hot fixes in the public release branch need to be merged back into the develop branch.

The code changes include:

fv3atm github pull request:

  1. Add no_nsst CCPP suites (PR #69)
  2. release/public-v1: bug fix for threading issue in stochastic physics (PR #62)
  3. ufs_public_release: remove libxml2 (PR #55)
  4. Fix dependencies for fv3cap library (PR #45)
  5. make variables using filename_base consistant in length (PR #43)
  6. Set default calendar to 'gregorian' (PR #34)
  7. Remove unused and unsupported code (PR #31)

Add 3D reflectivity to restart file

The 3D reflectivity is written to the restart file only when the flag lrefres is set to true. Changes are made to GFS_restart.F90 and GFS_typedefs.F90. This works for the GFDL, Thompson, and Ferrier-Aligo microphysics.

check fdiag consistency with fhmax/FHOUT/FHOUT_HF

In the current fv3, fdiag is defined in the namelist while FHMAX and FHOUT/FHOUT_HF are defined in the model configuration. A mismatch between fdiag and fhout/fhout_hf can cause problems such as model output that does not change. This ticket is to fix the issue by making sure fdiag and fhout/fhout_hf are consistent.

change nbit for delp compression in netcdf output

Currently fv3 uses a single variable, nbit, for the compression of all 3D model output fields. It turns out that this nbit (14 in the fv3 parallel) can cause the delp field to lose required precision, especially where delp is small at high altitude. Fanglin made a quick fix to use a different nbit for the delp field. This fix is planned to be used in the GFSv16 implementation.

add capability to write out restart files at specified forecast hours

It is requested by GDAS and GEFS that the model be able to write out restart files at forecast hours specified in the model configuration file. Currently the model can only output restart files at a fixed frequency, and the restart files at the end of the forecast time are always written out.

Code changes will be made to use the restart_interval variable in model_configure to write out restart files at specified forecast hours while keeping all the current restart output frequency capability. This will also allow users to skip writing the last restart file when the forecast ends.

land mask and tsfco issue when cplflx=true

Using the current (10/16) develop branch of fv3atm, if you run with cplflx=true: the land mask on tile 4 shows differences from the oro_data, and the surface temperature in FV3ATM does not match the composite temperature in CICE (where the composite is ai*ice surface temperature + (1-ai)*sst). (See page 1 of the attached.) The final set of fixes uses Moorthi's fixes (for the tsfco) and Denise's fix of going back to the June 10 develop branch and using that code to create slmsk and landfrac in io/FV3GFS_io.F90. The results are shown on pages 4 and 5 of the attached: mask_tsfc.pdf

The two sets of code differences are:

$ git diff develop
diff --git a/gfsphysics/GFS_layer/GFS_physics_driver.F90 b/gfsphysics/GFS_layer/GFS_physics_driver.F90
index a547169..313c1d0 100644
--- a/gfsphysics/GFS_layer/GFS_physics_driver.F90
+++ b/gfsphysics/GFS_layer/GFS_physics_driver.F90
@@ -1154,7 +1154,7 @@ module module_physics_driver
             if (fice(i) < one) then
               wet(i) = .true.
 !             Sfcprop%tsfco(i) = tgice
-              Sfcprop%tsfco(i) = max(Sfcprop%tisfc(i), tgice)
+              if (.not. Model%cplflx) Sfcprop%tsfco(i) = max(Sfcprop%tisfc(i), tgice)
 !             Sfcprop%tsfco(i) = max((Sfcprop%tsfc(i) - fice(i)*sfcprop%tisfc(i)) &
 !                                     / (one - fice(i)), tgice)
             endif
diff --git a/io/FV3GFS_io.F90 b/io/FV3GFS_io.F90
index 6017f5f..0f4e01b 100644
--- a/io/FV3GFS_io.F90
+++ b/io/FV3GFS_io.F90
@@ -1114,16 +1114,11 @@ module FV3GFS_io_mod
           Sfcprop(nb)%zorll(ix) = Sfcprop(nb)%zorlo(ix)
           Sfcprop(nb)%zorl(ix)  = Sfcprop(nb)%zorlo(ix)
           Sfcprop(nb)%tsfc(ix)  = Sfcprop(nb)%tsfco(ix)
-          if (Sfcprop(nb)%slmsk(ix) < 0.1 .or. Sfcprop(nb)%slmsk(ix) > 1.9) then
+          if (Sfcprop(nb)%slmsk(ix) > 1.9) then
             Sfcprop(nb)%landfrac(ix) = 0.0
-            if (Sfcprop(nb)%oro_uf(ix) > 0.01) then
-              Sfcprop(nb)%lakefrac(ix) = 1.0        ! lake
-            else
-              Sfcprop(nb)%lakefrac(ix) = 0.0        ! ocean
-            endif
           else
-            Sfcprop(nb)%landfrac(ix) = 1.0          ! land
-          endif
+            Sfcprop(nb)%landfrac(ix) = Sfcprop(nb)%slmsk(ix)
+          end if
         enddo
       enddo
     endif ! if (Model%frac_grid)

Bug fixes in support of FV3GFS-AQM

The following bugs were discovered during development work for the coupled FV3GFS-CMAQ system (FV3GFS-AQM):

  • Floating invalid errors are generated due to uninitialized optical depth diagnostic arrays (0.55 and 10 um channels). The code needs to be refactored.

  • The canopy resistance output variable needs to be initialized to zero in the Noah/OSU land-surface model sub driver. This is required for FV3GFS-AQM as this quantity is passed to AQM (CMAQ) during coupling.

update coupled field name in fv3

The coupled field names in fv3 are different from those used in CICE5. In this ticket, the names will be changed so that both components use the same field names, and the NEMS name-matching code can be cleaned up.

test io layout for restart

It is found that writing restart files is very slow in C768L127 runs. So far we have been using io_layout = 1,1 in the namelist input.nml. Tests have been done with different layouts to reduce the write time. The FV3 GFS restart test will be updated to use a non-1,1 io_layout.

reducing the sending time for forecast tasks to write tasks

George/Jim found that the time to send data from forecast tasks to write tasks can reach 4.5 s for each data transfer in high-resolution global and regional runs. Gerhard was working on ESMF snapshot 8.0.1 to fix the problem; the improvements include:

  1. write all messages from sending PEs going to the same dst PET into a single buffer and then send the whole message using a single MPI_Isend
  2. optimization with an option to drop the ESMF buffer for memory relief
  3. optimization of memory copies on the send side
    Also, using a large value of the srcTermProcessing argument can reduce the data volume and further reduce time, but for the global fv3 Gaussian grid we will still keep srcTermProcessing=0; for regional FV3, a different value can be used. srcTermProcessing will be added to model_configure with a default value of 0, and fv3_cap will be updated with the ESMF_VMEpochStart feature.

With this feature, the time for sending data is reduced from 4.5 s to 0.5 s.

add land_surface_model attribute in model sfc history file

Since the model can now run with several different land surface models, POST needs some information in the model output file to specify which land surface model is used. A land_surface_model attribute will be added to the model sfc history files.
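
As a rough illustration, a small netCDF-Fortran sketch of adding such a global attribute is shown below; the attribute value convention ('noah' here) and the subroutine name are placeholders:

    ! Hypothetical sketch: stamp an already-open sfc history file with the
    ! name of the land surface model so POST can identify it.
    subroutine add_lsm_attribute(ncid, lsm_name)
      use netcdf
      implicit none
      integer,          intent(in) :: ncid       ! id of an open netCDF history file
      character(len=*), intent(in) :: lsm_name   ! e.g. 'noah' (placeholder value)
      integer :: ierr
      ierr = nf90_redef(ncid)                                                ! re-enter define mode
      ierr = nf90_put_att(ncid, NF90_GLOBAL, 'land_surface_model', lsm_name) ! add the attribute
      ierr = nf90_enddef(ncid)                                               ! leave define mode
    end subroutine add_lsm_attribute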

add butterfly effect in fv3

There is a request to conduct a butterfly-effect test for ufs-weather-model and ufs-s2s-model. It will be used as a reference to verify whether model results change due to small perturbations caused by porting (platform change, compiler version change, ...), code structure changes (no scientific change), etc.

Currently FV3 already has an option, "add_noise", to add random noise in (-1,1)*scale to the full 3D temperature field. To implement a real butterfly effect, the last digit of the temperature field at a single point in the lowest model layer on tile 1 is flipped. Dusan made the code changes and conducted a test using ufs-weather-model; results show that a change of up to 3 K in the lowest-model-layer temperature field shows up at 24 hours.
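
One common way to realize such a minimal perturbation is to flip the least significant bit of the floating-point value at that point. Below is a small, self-contained sketch assuming 8-byte reals; it is illustrative only, not the actual FV3 change:

    ! Hypothetical sketch: flip the least significant bit of one temperature value.
    program flip_lsb
      use iso_fortran_env, only: real64, int64
      implicit none
      real(real64)   :: t
      integer(int64) :: bits

      t    = 273.15_real64
      bits = transfer(t, bits)      ! reinterpret the real as an integer bit pattern
      bits = ieor(bits, 1_int64)    ! flip the least significant bit
      t    = transfer(bits, t)      ! reinterpret back as a real
      print *, t
    end program flip_lsb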

Add support for GEFS-Aerosols restart capability

The following minor changes to FV3's coupling infrastructure are required to support bit-for-bit restart capability for GEFS-Aerosols:

  • GSDCHEM, the coupled chemistry component in GEFS-Aerosols, uses precipitation tendencies computed as differences between FV3 accumulated precipitation at consecutive coupling steps.
    Accumulation always begins at startup, whether the model is restarting or cold starting, which introduces roundoff errors that make the precipitation tendencies differ between a cold-start run and a restart run. The coupling accumulation arrays for large-scale and convective rain, as well as snow, need to be reset to zero at every coupling step.

  • FV3's instantaneous total moisture tendency coupling array will also need to be reset to zero at the beginning of each coupling step when coupling with GSDCHEM.

Update cellular automata code and coupling

The cellular automata (CA) code is updated so that several CA's can run on the FV3 grid and the sub-grid simultaneously. The coupling to the SASAS convection scheme is updated to address uncertainty associated with entrainment when convection becomes more organized. The capability of running SPPT with a pattern generated from cellular automata (instead of AR(1)) is added.

The updates concern the submodules: fv3atm and stochastic_physics

add air density and potential temperature to export state

When cplflx = true, the air density and potential temperature should be added to the FV3 export state. They should be "height_lowest" values.

These variables are needed by CICE5/CICE6; currently they are being calculated either in the NEMS mediator or in the CICE cap using temp_height_lowest, pres_height_lowest and spec_humid_height_lowest.
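
For reference, here is a hedged sketch of how these lowest-level quantities can be computed from fields the atmosphere already carries, using the standard virtual-temperature and Exner-function relations; the constants, module, and function names are illustrative, not necessarily what the FV3 cap will use:

    ! Hypothetical sketch: lowest-model-level air density and potential temperature
    ! from temperature (K), pressure (Pa) and specific humidity (kg/kg).
    module lowest_level_diag
      implicit none
      integer,  parameter :: kp    = selected_real_kind(15)
      real(kp), parameter :: rd    = 287.05_kp     ! dry-air gas constant (J/kg/K)
      real(kp), parameter :: cp    = 1004.6_kp     ! specific heat at constant pressure (J/kg/K)
      real(kp), parameter :: fvirt = 0.6078_kp     ! Rv/Rd - 1
      real(kp), parameter :: p0    = 100000.0_kp   ! reference pressure (Pa)
    contains
      elemental real(kp) function air_density(t, p, q)
        real(kp), intent(in) :: t, p, q
        air_density = p / (rd * t * (1.0_kp + fvirt*q))    ! rho = p / (Rd * Tv)
      end function air_density

      elemental real(kp) function potential_temperature(t, p)
        real(kp), intent(in) :: t, p
        potential_temperature = t * (p0 / p)**(rd/cp)      ! theta = T * (p0/p)^(Rd/cp)
      end function potential_temperature
    end module lowest_level_diag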

fix bug in ugwd

Valery and his team reported two bugs to Fanglin:

  1. [Jun.Wang@m72a2 FV3]$ git diff 8a56781 gfsphysics/GFS_layer/GFS_driver.F90

     diff --git a/gfsphysics/GFS_layer/GFS_driver.F90 b/gfsphysics/GFS_layer/GFS_driver.F90
     index 3b6a943..05f97bd 100644
     --- a/gfsphysics/GFS_layer/GFS_driver.F90
     +++ b/gfsphysics/GFS_layer/GFS_driver.F90
     @@ -422,7 +422,7 @@ module GFS_driver
            call cires_ugwp_init(Model%me, Model%master, Model%nlunit, Init_parm%logunit, &
                                 Model%fn_nml, Model%lonr, Model%latr, Model%levs,        &
                                 Init_parm%ak, Init_parm%bk, p_ref, Model%dtp,            &
     -                           Model%cdmbgwd,      Model%cgwf, Model%prslrd0, Model%ral_ts)
     +                           Model%cdmbgwd(1:2), Model%cgwf, Model%prslrd0, Model%ral_ts)
          endif
      #endif

  2. [Jun.Wang@m72a2 FV3]$ git diff 8a56781 gfsphysics/physics/ugwp_driver_v0.f

     diff --git a/gfsphysics/physics/ugwp_driver_v0.f b/gfsphysics/physics/ugwp_driver_v0.f
     index 804bbac..0ca37e8 100644
     --- a/gfsphysics/physics/ugwp_driver_v0.f
     +++ b/gfsphysics/physics/ugwp_driver_v0.f
     @@ -46,7 +46,9 @@
          &, rain

           real(kind=kind_phys), intent(in), dimension(im,levs) :: ugrs
     -    &,        vgrs, tgrs, qgrs, prsi, prsl, prslk, phii, phil, del
     +    &,        vgrs, tgrs, qgrs, prsl, prslk, phil, del
     +     real(kind=kind_phys), intent(in), dimension(im,levs+1) :: prsi
     +    &,        phii
    
