
SHiELD_build's Issues

Please provide an FV3-202305-public release that aligns with FV3 and SHiELD_physics

Is your feature request related to a problem? Please describe.
There is no public release of SHiELD_build that matches the FV3-202305-public releases of the GFDL_atmos_cubed_sphere and SHiELD_physics repos. It is not immediately obvious which FMS version to use, where to find the coupler_main.F90 file, and so on, to match the build tested at GFDL.

Describe the solution you'd like
Please tag the SHiELD_build commit used to test the FV3-202305-public releases of GFDL_atmos_cubed_sphere and SHiELD_physics, and publish it as a public release.
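
For concreteness, a hedged sketch of what is being asked for (the commit hash is a placeholder):

# Hypothetical example: tag the tested commit and publish it as a release
git tag FV3-202305-public <tested-commit-hash>   # placeholder hash
git push origin FV3-202305-public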

mk_scripts/mk_make is not general

building in ubuntu docker container, gnu, arm64...

In mk_scripts/mk_make there are a few design choices that make the build more challenging:
(1) It uses the 'module' command
(2) It depends on "${BUILD_ROOT}/site/environment.${COMPILER}.sh"
(3) Paths to FMSlib and nceplibs are relative
(4) The build artifacts, including the exec and bin directories, are placed inside the repo directory structure.

The file ${BUILD_ROOT}/site/environment.${COMPILER}.sh only covers a few options, and they seem designed specifically for NOAA HPC systems (gaea, orion, jet, hera, lsc).

It would be helpful if the 'default' (i.e. the else clause) accepted a user-supplied set of parameters, and failed if one of the required parameters is not supplied (e.g. as an environment variable), along the lines of the sketch below.
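
A rough sketch of such a fallback (the variable names FMS_DIR and NCEPLIBS_DIR are illustrative, not part of the current scripts):

# Hypothetical else clause for site/environment.${COMPILER}.sh:
# fail fast unless the user supplies the required locations.
: "${FMS_DIR:?FMS_DIR must point to an FMS install built with the same compiler}"
: "${NCEPLIBS_DIR:?NCEPLIBS_DIR must point to an nceplibs install}"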

My preference would be to have the paths to FMSlib and nceplibs specified as environment variables, since these libraries will generally either already be installed or need to be installed by the user with the same compiler. I also think it is generally bad practice to build the model and libraries inside the directory structure of the GitHub repository. This is not a huge issue for me since I am building in docker, but if I were doing direct development of SHiELD I would find it harder to deal with.

At the moment I'm fixing these issues with:
(1) ignoring the module command, since it does not seem to stop execution of the script;
(2) removing the line referencing the environment.${COMPILER}.sh file (see below).

# Remove the template call in mk_scripts/mk_make
# and fix the library references in mk_scripts/mk_make
RUN cd ${BUILD_ROOT}/Build \
  && sed 's|. ${BUILD_ROOT}/site/environment.${COMPILER}.sh||' < mk_scripts/mk_make > OUT \
  && mv OUT mk_scripts/mk_make \
  && sed 's|../../libFMS/${COMPILER}/${bit}|${FMS_DIR}/lib|' < mk_scripts/mk_make > OUT \
  && mv OUT mk_scripts/mk_make \
  && sed "s|../../nceplibs/\${COMPILER}/|${NCEPLIBS_DIR}/lib|" < mk_scripts/mk_make > OUT \
# && sed "s|../../nceplibs|${BUILD_ROOT}/Build/nceplibs|" < mk_scripts/mk_make > OUT \    #STEVE: this is what gfdl wants but I don't think this is possible
  && mv OUT mk_scripts/mk_make \
  && sed "s/-lbacio_4  -lsp_v2.0.2_d  -lw3emc_d  -lw3nco_d/-lbacio_4  -lsp_d  -lw3emc_d  -lw3nco_d/" < mk_scripts/mk_make > OUT \
  && mv OUT mk_scripts/mk_make \
  && chmod 755 mk_scripts/mk_make

(3) is handled by the sed replacements shown above.
(4) I move the executable to an external bin directory that still exists after reducing the docker build to a more minimal image:

  && mv exec/${FV3_config}_${FV3_compiler}/test.x ${SHiELD_BIN}/${FV3_config}_${FV3_hydro}.${FV3_comp}.${FV3_bit}.${FV3_compiler}.x

pushd in /bin/sh does not work

The script Build/mk_scripts/MAKE_libFMS has the shebang #!/bin/sh and fails because it uses pushd, which is not part of the POSIX shell. The other scripts in this directory use #!/bin/bash, so the shebang in Build/mk_scripts/MAKE_libFMS should be updated to match.
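
A minimal fix, assuming in-place editing of the checked-out script is acceptable:

sed -i '1s|^#!/bin/sh|#!/bin/bash|' Build/mk_scripts/MAKE_libFMS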

porting build from gnu to intel, runtime error: "Subscript #1 of the array DES has value 0 which is less than the lower bound of 1"

Short issue:

We are getting the runtime error:

forrtl: severe (408): fort: (3): Subscript #1 of the array DES has value 0 which is less than the lower bound of 1

In more detail:

We are using the latest version of SHiELD, i.e. SHiELD_BUILD_VERSION="FV3-202204-public", FV3_VERSION="FV3-202210-public", FMS_VERSION="2022.04". We're running on an Ubuntu 22.04 AWS EC2 instance, and have built and run SHiELD successfully for many months using OpenMPI/gfortran.

We are now switching our build over from OpenMPI/gfortran (MKMF_TEMPLATE=linux-ubuntu-trusty-gnu.mk) to IntelMPI/ifort (MKMF_TEMPLATE="intel.mk"). We are using intel version:

mpiifort for the Intel(R) MPI Library 2021.10 for Linux*
Copyright Intel Corporation.
ifort version 2021.10.0

Our build is based as closely as possible on this SHiELD_build repo. We're testing a 1-hour C96 simulation with our original OpenMPI/gfortran build, and it completes successfully (~300 seconds on 24 cores). With IntelMPI/ifort, the model builds successfully, but from the same experiment directory where the GNU build runs without error, the intel build gives the following error at runtime:

 ---------------------------------------------
NOTE from PE     0: READING FROM SST_restart DISABLED
 Before adi: W max =    1.573370      min =   -1.371867    
NOTE from PE     0: Performing adiabatic init   1 times
forrtl: severe (408): fort: (3): Subscript #1 of the array DES has value 0 which is less than the lower bound of 1

Image              PC                Routine            Line        Source             
shield_nh.prod.32  00000000015BCBBE  gfdl_mp_mod_mp_qs        7233  gfdl_mp.F90
shield_nh.prod.32  00000000015BD7D4  gfdl_mp_mod_mp_iq        7369  gfdl_mp.F90
shield_nh.prod.32  00000000015174C5  gfdl_mp_mod_mp_cl        4621  gfdl_mp.F90
shield_nh.prod.32  0000000001428F26  gfdl_mp_mod_mp_mp        1429  gfdl_mp.F90
shield_nh.prod.32  00000000015589F5  gfdl_mp_mod_mp_fa        5648  gfdl_mp.F90
shield_nh.prod.32  00000000018EB123  intermediate_phys         257  intermediate_phys.F90
libiomp5.so        000014B302363493  __kmp_invoke_micr     Unknown  Unknown
libiomp5.so        000014B3022D1CA4  __kmp_fork_call       Unknown  Unknown
libiomp5.so        000014B302289D23  __kmpc_fork_call      Unknown  Unknown
shield_nh.prod.32  00000000018C8D09  intermediate_phys         186  intermediate_phys.F90
shield_nh.prod.32  0000000000BE6BC0  fv_mapz_mod_mp_la         841  fv_mapz.F90
shield_nh.prod.32  00000000019FA0A1  fv_dynamics_mod_m         590  fv_dynamics.F90
shield_nh.prod.32  0000000002D31F23  atmosphere_mod_mp        1553  atmosphere.F90
shield_nh.prod.32  0000000002C61BFA  atmosphere_mod_mp         431  atmosphere.F90
shield_nh.prod.32  0000000002280A56  atmos_model_mod_m         395  atmos_model.F90
shield_nh.prod.32  0000000000EDE999  coupler_main_IP_c         417  coupler_main.F90
shield_nh.prod.32  0000000000ED93FF  MAIN__                    146  coupler_main.F90
shield_nh.prod.32  000000000041504D  Unknown               Unknown  Unknown
libc.so.6          000014B301E29D90  Unknown               Unknown  Unknown
libc.so.6          000014B301E29E40  __libc_start_main     Unknown  Unknown
shield_nh.prod.32  0000000000414F65  Unknown               Unknown  Unknown

For reference the traceback is pointing to intermediate_phys.F90, line 257:
https://github.com/NOAA-GFDL/GFDL_atmos_cubed_sphere/blob/d2e5bef344b64d6a10524479b3288717239fb2a2/model/intermediate_phys.F90#L257

! fast saturation adjustment
            call fast_sat_adj (abs (mdt), is, ie, kmp, km, hydrostatic, consv .gt. consv_min, &
                     adj_vmr (is:ie, kmp:km), te (is:ie, j, kmp:km), dte (is:ie), q (is:ie, j, kmp:km, sphum), &
                     q (is:ie, j, kmp:km, liq_wat), q (is:ie, j, kmp:km, rainwat), &
                     q (is:ie, j, kmp:km, ice_wat), q (is:ie, j, kmp:km, snowwat), &
                     q (is:ie, j, kmp:km, graupel), q (is:ie, j, kmp:km, cld_amt), &
                     q2 (is:ie, kmp:km), q3 (is:ie, kmp:km), hs (is:ie, j), &
                     dz (is:ie, kmp:km), pt (is:ie, j, kmp:km), delp (is:ie, j, kmp:km), &
#ifdef USE_COND
                     q_con (is:ie, j, kmp:km), &
#else
                     q_con (isd:, jsd, 1:), &
#endif
#ifdef MOIST_CAPPA
                     cappa (is:ie, j, kmp:km), &
#else
                     cappa (isd:, jsd, 1:), &
#endif
                     gsize, last_step, inline_mp%cond (is:ie, j), inline_mp%reevap (is:ie, j), &
                     inline_mp%dep (is:ie, j), inline_mp%sub (is:ie, j), do_sat_adj)

I checked our build logs, and we are using both USE_COND and MOIST_CAPPA, which are activated due to the 'nh' setting.

I noticed this is called from:
https://github.com/NOAA-GFDL/SHiELD_physics/blob/2882fdeb429abc2349a8e881803ac67b154532c3/simple_coupler/coupler_main.F90#L146C19-L146C19

 call fms_init()
 call mpp_init()
 initClock = mpp_clock_id( 'Initialization' )
 call mpp_clock_begin (initClock) !nesting problem

 call fms_init
 call constants_init
 call fms_affinity_init
 call sat_vapor_pres_init

 call coupler_init

As an additional piece of information, we have also generated our own control/coupler file, and with it the intel build does not produce this runtime error. In our version we comment out fms_init and fms_affinity_init, since fms_init is called twice here and fms_affinity_init was later removed in https://github.com/NOAA-GFDL/FMScoupler/blob/main/SHiELD/coupler_main.F90:

!   call fms_init(mpi_comm_fv3)
    if (dodebug) print *, "fv3_shield_cap:: calling constants_init..."
    call constants_init
!   if (dodebug) print *, "fv3_shield_cap:: calling fms_affinity_init..."
!   call fms_affinity_init

I've tried building the IntelMPI/ifort executable both in a docker container and with a bash script directly on the EC2 instance, and I've tried building in both 'prod' and 'debug' mode, but all combinations give the same error above.

I've tried removing "export FMS_CPPDEFS=-DHAVE_GETTID" from the build options; in that case the FMS build fails.

I found a similar issue report in E3SM caused by an upgrade of the Intel compiler. In their case it was related to a compiler bug, but I'm not sure whether that is true here:
E3SM-Project/E3SM#2051

Have you seen this error before, and do you have any idea what might be causing it? I recall getting a similar error in December 2022; I believe the FMS version was part of the problem then, and it was resolved by upgrading FMS. However, the FMS versions are the same between the two builds in this case.

Features required for building a Python-wrapped version of SHiELD

Is your feature request related to a problem? Please describe.

At AI2 we have developed a Python-wrapped version of NOAA's FV3GFS model, a close relative of SHiELD. The code is currently maintained here, and described in McGibbon et al. (2021). It has been a powerful tool that has facilitated several published hybrid machine learning research projects. A drawback of it, however, is that while similar to SHiELD, our now aging fork of FV3GFS is not the same, and a number of features lag behind what has been developed and made public in SHiELD. We would like to transition from wrapping FV3GFS to wrapping SHiELD, both for our work at AI2 and for the fact that it would make it more attractive (and intuitive) for users of SHiELD to use our Python-wrapped model. It has been something on the roadmap of @lharris4 and our group for a while.

I have set up a repository for such a wrapper, which includes a PR introducing a basic initial working version: ai2cm/SHiELD-wrapper#1. This is not totally feature-complete (as described in the PR description), but it represents a major first step towards our eventual goal. In addition, as part of this goal, we would like to move away from forking the entire fortran codebase and instead depend directly on the evolving public repositories of the NOAA-GFDL organization, which will keep the Python wrapper up to date. What this means, however, is that any changes to the fortran code and build system required to implement wrapper features must be made in these repositories. This issue is meant to introduce this overall effort, and to discuss how best to incorporate the initial required changes for wrapping SHiELD in Python in a similar way to how we have wrapped FV3GFS.

Initial required changes

In this initial phase, the main changes required were to the SHiELD_build repository (I took a stab at this in this branch, but there may be another preferred way of going about it):

  • We need to be able to build a static library containing the dynamical core and atmos_drivers code, which we can link to when compiling the wrapper.
    • I did this by adding another config option for shield_wrapper, which splits off all but the coupler_main.F90 file into static libraries, before building the executable.
  • We need a way to compile all pieces of the code (FMS, nceplibs, and SHiELD code) as "position independent code," i.e. with the -fPIC flag.
    • I did this by threading the config option through the FMS, nceplibs, and SHiELD build scripts, and adding the -fPIC flag when the config equals "shield_wrapper" (see the sketch after this list).
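
A hedged sketch of the flag-threading approach (the shield_wrapper config name comes from the branch described above; the variable names are illustrative, not the actual script contents):

# Hypothetical: inside the build scripts, append -fPIC for the wrapper config
if [ ${CONFIG} = 'shield_wrapper' ] ; then
  FFLAGS="${FFLAGS} -fPIC"
  CFLAGS="${CFLAGS} -fPIC"
fi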

There was also a very minor change needed to declare some variables as public in the atmos_drivers repository (see this small diff).

Ask

Please let me know from the SHiELD / SHiELD_build perspective if things look reasonable enough for me to initiate pull requests from these branches for review. There is no need to review ai2cm/SHiELD-wrapper#1, though you are of course welcome to take a look there for more context (I will wait to merge that until I am no longer pointing to personal forks of the SHiELD_build and atmos_drivers repositories).

I am happy to answer any questions about / discuss this project that come up now or later. Thanks!

Disclaimer

I totally recognize I am posting this issue on the eve of a possible government shutdown, and so I understand that you will likely not be able to look deeply into this issue today (and possibly not for several days into the future).

Build nceplibs bug

Describe the bug
When building SHiELD (especially with the cleanall argument), you get an error that one of the nceplibs libraries is not found.

To Reproduce
build with ./COMPILE cleanall

Expected behavior
No errors

System Environment
Describe the system environment, include:
Gaea C4 with an Intel 19 build, and Gaea C5 with Intel 21

Additional context
I have a fix for this that I will make a PR for shortly. We need to check out a tag, and not the latest development branches, for the nceplibs GitHub repositories from which we build nceplibs.
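
For illustration, the fix amounts to pinning a tag in the checkout, along these lines (the repository is one of the nceplibs built here; the tag shown is illustrative, not necessarily the one used in the PR):

# Hypothetical: check out a fixed tag instead of the default development branch
git clone https://github.com/NOAA-EMC/NCEPLIBS-bacio bacio
(cd bacio && git checkout v2.4.1)   # illustrative tag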

Checkout script fixes and mk_paths fixes

Describe the bug

In CHECKOUT_code, we set the release to "main", so the latest main branch code is always checked out. It would be safest to pin the release to the latest one (currently 202305); users can manually change this when they want to check out the latest main branch.

When compiling SHiELD, there is a warning in the build output file:

find: ‘GFDL_atmos_cubed_sphere/model/gfdl_cld_mp.F90’: No such file or directory
find: ‘GFDL_atmos_cubed_sphere/model/cld_eff_rad.F90’: No such file or directory
find: ‘GFDL_atmos_cubed_sphere/GFDL_tools/fv_diag_column.F90’: No such file or directory

These files were removed in the FV3-202210 release; gfdl_cld_mp.F90 and cld_eff_rad.F90 were replaced by SHiELD_physics/gsmphys/gfdl_cld_mp.F90.

To Reproduce
Check out the code via the CHECKOUT_code script and build via ./COMPILE (the default is a SHiELD configuration, which produces the "No such file" warnings).

Expected behavior
No warnings, and the latest release is checked out.

System Environment
N/A


gnu.mk file OPENMP, Undefined reference to `omp_get_ ...

The mk_make script builds with:

(cd exec/${CONFIG}_${COMPILER} ; make -j 8 OPENMP=Y NETCDF=3 ${COMP} AVX=${AVX} ${BIT} NCEPLIBS="${NCEPLIBS}" -f Makefile_fv3)

which indicates the use of OPENMP=Y. However, the gnu.mk template file
https://github.com/NOAA-GFDL/SHiELD_build/blob/main/site/gnu.mk
has the OpenMP flags commented out:

FFLAGS_OPENMP = #-fopenmp

CFLAGS_OPENMP = #-fopenmp

LDFLAGS_OPENMP := #-fopenmp

This causes a series of link-time errors of the form "undefined reference to omp_get_...".

At the moment I'm changing this using:

sed "s/#-fopenmp/-fopenmp/g" < ${BUILD_ROOT}/${TEMPLATE} > OUT \
  && mv OUT ${BUILD_ROOT}/${TEMPLATE}

and the model seems to compile successfully.

Is there a reason for removing the openmp flags?

portability bug - pushd is bash specific

building in ubuntu docker container, gnu, arm64...

I get this error when building incrementally:

root@bd9eb8ef3929:/SHiELD_SRC/SHiELD_build/Build# mk_scripts/mk_paths shield gnu
/opt/mkmf/bin:/opt/netcdf/bin:/opt/openmpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
mk_scripts/mk_paths: 50: pushd: not found

I found some guidance saying that pushd is a bash builtin, but the script is declared with "#!/bin/sh" as its shebang:
https://stackoverflow.com/questions/5193048/bin-sh-pushd-not-found

I think this can be quick-fixed by changing the shebang of the affected script(s) to "#!/bin/bash".

At the moment, I'm resolving this with:
RUN cd ${SHiELD_SRC}/SHiELD_build/Build \
  && sed -i '1s|^#! */bin/sh|#!/bin/bash|' mk_scripts/mk_paths \
  && sed -i '1s|^#! */bin/sh|#!/bin/bash|' mk_scripts/mk_makefile \
  && sed -i '1s|^#! */bin/sh|#!/bin/bash|' mk_scripts/mk_make

Independent builds do not necessarily produce consistent results

Is your question related to a problem? Please describe.

As part of a more involved development project, I am building and running SHiELD in a docker image using GNU compilers. A test I am running depends on the model consistently producing bitwise-identical results for a given configuration. Puzzlingly, I am finding that the answers the model produces change depending on the build; specifically, they seem to flip randomly between two states.

This repository minimally illustrates my setup. It contains a Dockerfile which is used to build SHiELD using the COMPILE script in this repository, submodules for the relevant SHiELD source code, and some infrastructure for testing the model within the docker image. The README should contain all the information necessary for reproducing the issue locally (at least on a system that supports docker). The upshot is that the regression tests, which check for bit-for-bit reproducibility across builds, do not always pass.
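
For reference, the bit-for-bit check amounts to comparing checksums of output from two independently built executables run on the same configuration, along these lines (paths are illustrative):

# Hypothetical check: output from two builds of identical source should match bit-for-bit
md5sum build1_run/RESTART/*.nc build2_run/RESTART/*.nc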

Describe what you have tried

I am a bit stumped at this point, so my idea here was to try and distill things to a minimal reproducible example, and reach out to see if there was something obvious I am doing wrong. Is there an issue in my environment or how I am configuring the build that is leading to this problem? I am happy to provide more information where needed. I appreciate your help!

SHiELD_physics/COSP is expected but doesn't exist

In mk_scripts/mk_paths, there is:

elif [ ${CONFIG} = 'shield' ] ; then
  list_paths -o ${BUILD_ROOT}/Build/exec/${CONFIG}_${COMPILER}/pathnames_gfs \
      GFDL_atmos_cubed_sphere/model/gfdl_cld_mp.F90 \
      GFDL_atmos_cubed_sphere/model/cld_eff_rad.F90 \
      SHiELD_physics/gsmphys/  \
      SHiELD_physics/GFS_layer/ \
      SHiELD_physics/IPD_layer/ \
      SHiELD_physics/COSP

However I get the error "No such file or directory":

root@4a6714b59553:/SHiELD_SRC/SHiELD_build/Build# mk_scripts/mk_paths shield gnu
/opt/mkmf/bin:/opt/netcdf/bin:/opt/openmpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/SHiELD_SRC /SHiELD_SRC/SHiELD_build/Build
find: 'SHiELD_physics/COSP': No such file or directory
A list of the files you checked out is in the file SHiELD_build/Build/exec/shield_gnu/pathnames_gfs.
A list of the files you checked out is in the file SHiELD_build/Build/exec/shield_gnu/pathnames_fv3.
/SHiELD_SRC/SHiELD_build/Build

I do not see COSP in the SHiELD_physics directory:
https://github.com/NOAA-GFDL/SHiELD_physics

root@4a6714b59553:/SHiELD_SRC/SHiELD_build/Build# ls /SHiELD_SRC/SHiELD_physics/
FV3GFS  GFS_layer  IPD_layer  LICENSE.md  README.md  atmos_drivers  gsmphys  simple_coupler

This doesn't seem to break anything, but as far as I can tell it doesn't belong there. A possible quick fix is sketched below.
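
A hedged sketch of dropping the stale entry from mk_scripts/mk_paths (note that COSP is the last argument of the list_paths call, so the trailing backslash on the preceding IPD_layer line must also be removed):

# Hypothetical: remove the stale COSP reference and the now-dangling continuation
sed -i -e '\|SHiELD_physics/COSP|d' \
       -e 's|SHiELD_physics/IPD_layer/ *\\|SHiELD_physics/IPD_layer/|' mk_scripts/mk_paths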

distributed libraries not general

As far as I know, the nceplibs libraries must be compiled with the exact same compiler used to build the full executable, so I'm not sure it is possible to distribute them prebuilt with the build package.

/usr/bin/ld: skipping incompatible /SHiELD_SRC/SHiELD_build/Build/nceplibs/gnu//libbacio_4.a when searching for -lbacio_4
/usr/bin/ld: cannot find -lbacio_4
/usr/bin/ld: skipping incompatible /SHiELD_SRC/SHiELD_build/Build/nceplibs/gnu//libsp_v2.0.2_d.a when searching for -lsp_v2.0.2_d
/usr/bin/ld: cannot find -lsp_v2.0.2_d
/usr/bin/ld: skipping incompatible /SHiELD_SRC/SHiELD_build/Build/nceplibs/gnu//libw3emc_d.a when searching for -lw3emc_d
/usr/bin/ld: cannot find -lw3emc_d
/usr/bin/ld: skipping incompatible /SHiELD_SRC/SHiELD_build/Build/nceplibs/gnu//libw3nco_d.a when searching for -lw3nco_d
/usr/bin/ld: cannot find -lw3nco_d

To fix this, I build nceplibs myself and then replace the library names:

#-----------------------------------------------------------------------------------
# Build NCEPLIBS
#-----------------------------------------------------------------------------------

# Only make those NCEP LIBS used by SHiELD:
RUN cd ${NCEPLIBS_BUILD_DIR} && \
    make -j${MAKEJOBS} bacio && \
    make -j${MAKEJOBS} sp && \
    make -j${MAKEJOBS} w3emc && \
    make -j${MAKEJOBS} w3nco

# Move libs and include files to a single location
RUN mkdir -p ${NCEPLIBS_DIR}/include && mkdir -p ${NCEPLIBS_DIR}/lib \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/bacio/src/bacio-build/src/lib* ${NCEPLIBS_DIR}/lib \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/bacio/src/bacio-build/src/include_4/* ${NCEPLIBS_DIR}/include \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/sp/src/sp-build/src/lib* ${NCEPLIBS_DIR}/lib \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/sp/src/sp-build/src/include_4/* ${NCEPLIBS_DIR}/include \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/w3emc/src/w3emc-build/src/lib* ${NCEPLIBS_DIR}/lib \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/w3emc/src/w3emc-build/src/include_4/* ${NCEPLIBS_DIR}/include \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/w3nco/src/w3nco-build/src/lib* ${NCEPLIBS_DIR}/lib \
    && ln -fs ${NCEPLIBS_BUILD_DIR}/w3nco/src/w3nco-build/src/include_4/* ${NCEPLIBS_DIR}/include

then:

&& sed "s/-lbacio_4  -lsp_v2.0.2_d  -lw3emc_d  -lw3nco_d/-lbacio_4  -lsp_d  -lw3emc_d  -lw3nco_d/" < mk_scripts/mk_make > OUT \
  && mv OUT mk_scripts/mk_make
