lanl / hippynn


python library for atomistic machine learning

Home Page: https://lanl.github.io/hippynn/

License: Other

Python 100.00%
atomistic-models interatomic-potentials machine-learning physics-informed-neural-networks graph-neural-networks library atomistic-machine-learning

hippynn's Introduction

The hippynn python package - a modular library for atomistic machine learning with pytorch.

We aim to provide a powerful library for training atomistic (or, more generally, physical point-cloud) machine learning models. We want entry-level users to be able to efficiently train models on millions of datapoints, and we provide a modular structure for extensions and contributions.

While hippynn's development so far has centered around the HIP-NN architecture, don't let that discourage you if you are doing research with another model. Get in touch, and let's work together to provide a high-quality implementation of your work, either as a contribution to hippynn or as an interface extension to your own package.

Features:

Modular set of pytorch layers for atomistic operations

  • Atomistic operations can be tricky to write in native pytorch (see the sketch after this list). Most operations provided here support linear-scaling models.
  • Model energies, forces, charges & charge moments, bond orders, and more!
  • nn.Modules are written with minimal reference to the rest of the library; if you want to use them in your scripts without the rest of the features provided here -- no problem!
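
For instance, summing per-atom energies into per-molecule energies in plain pytorch requires the kind of index bookkeeping shown below (a generic illustration of the problem, not hippynn code):

import torch

# Per-atom energies for 7 atoms spread across 3 molecules.
atom_energies = torch.randn(7)
mol_index = torch.tensor([0, 0, 0, 1, 1, 2, 2])  # molecule id for each atom
n_molecules = 3

# Scatter-add the atomic contributions into molecular totals.
mol_energies = torch.zeros(n_molecules)
mol_energies.index_add_(0, mol_index, atom_energies)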

Graph level API for simple and flexible construction of models from pytorch components.

  • Build models based on the abstract physics/mathematics of the problem, without having to think about implementation details.
  • Graph nodes support native python syntax; for example, different forms of loss can be directly added.
  • Link predicted values in the model with a database entry to compare predicted and true values.
  • IndexType logic records metadata about tensor structure and provides automatic conversion to compatible structures when possible.
  • The graph API is independent of the module implementation; a sketch of model construction follows.
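
A minimal sketch of graph-level model construction, loosely following the examples in the documentation (node names, db_name keys, and hyperparameter values here are assumptions and may differ between versions):

from hippynn.graphs import inputs, networks, targets, loss

# Input nodes tied to database columns ("Z" and "R" are assumed dataset keys).
species = inputs.SpeciesNode(db_name="Z")
positions = inputs.PositionsNode(db_name="R")

# A HIP-NN network node consuming the inputs; hyperparameters are illustrative.
network = networks.Hipnn("hipnn_model", (species, positions),
                         module_kwargs={"possible_species": [0, 1, 6, 7, 8],
                                        "n_features": 20,
                                        "n_sensitivities": 20,
                                        "dist_soft_min": 0.75,
                                        "dist_soft_max": 5.0,
                                        "dist_hard_max": 6.0})

# An energy target node linked to the "T" database entry for comparison.
henergy = targets.HEnergyNode("HEnergy", network, db_name="T")

# Native python syntax on graph nodes: losses can be combined arithmetically.
rmse_energy = loss.MSELoss.of_node(henergy) ** 0.5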

Plot level API for tracking your training.

  • Using the graph API, define quantities to evaluate before, during, or after training as matplotlib figures (sketched below).
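
For example, one might attach a predicted-versus-true histogram that regenerates during training (a sketch assuming the plotting names from the documentation; henergy is the energy node from the graph sketch above):

from hippynn import plotting

plot_maker = plotting.PlotMaker(
    plotting.Hist2D.compare(henergy, saved=True),  # predicted vs. true values
    plot_every=10,                                 # regenerate every 10 epochs
)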

Training & Experiment API

  • Integrated with the graph-level API (see the sketch after this list)
  • Pretty-printing of loss metrics, with plots generated periodically
  • Callbacks and checkpointing
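
A sketch of the training setup, with names assumed from the documentation's examples (training_modules and database are placeholders that would come from assembling the graph and loading your dataset):

import torch
from hippynn.experiment import SetupParams, setup_and_train

setup_params = SetupParams(
    stopping_key="RMSE-energy",   # validation metric for early stopping (assumed name)
    batch_size=256,
    optimizer=torch.optim.Adam,
    max_epochs=500,
    learning_rate=1e-3,
)
# training_modules and database are produced earlier in a real script.
setup_and_train(training_modules=training_modules,
                database=database,
                setup_params=setup_params)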

Custom Kernels for fast execution

  • Certain operations are not written efficiently in pure pytorch; we provide alternative implementations using numba and cupy (see the sketch after this list).
  • These are directly linked in with pytorch autograd -- use them like native pytorch functions.
  • They provide advantages in both memory footprint and speed.
  • Custom kernels support both CPU and GPU execution.
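
Enabling the custom kernels can be sketched as follows (the function name and accepted values are assumptions based on the documentation; the kernels can also be controlled through library settings):

import hippynn

# Select a backend explicitly ("numba" or "cupy"); passing True is assumed to
# auto-detect an available implementation, and False falls back to pure pytorch.
hippynn.custom_kernels.set_custom_kernels("numba")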

Interfaces to other codes

  • ASE: Define ASE calculators based on the graph-level API (sketched after this list).
  • PYSEQM: Use PYSEQM calculations as nodes in a graph.
  • LAMMPS: Use models from hippynn in LAMMPS via the MLIAP Unified interface.
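
For example, attaching a trained model to ASE might look like the following (a sketch assuming the HippynnCalculator name from the ASE-interface documentation; energy_node is an energy node from a trained graph, and atoms is an ase.Atoms object):

import torch
from hippynn.interfaces.ase_interface import HippynnCalculator

calc = HippynnCalculator(energy_node)
calc.to(torch.float64)    # match ASE's double-precision convention
atoms.calc = calc
print(atoms.get_potential_energy())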

Installation

  • Clone this repository and navigate into it.

Dependencies using conda:

  • Run conda install -c pytorch -c conda-forge --file conda_requirements.txt

Dependencies using pip:

  • Run pip install .
  • If you feel like tinkering, do an editable install: pip install -e .
  • You can install all optional dependencies from pip with: pip install -e .[full]

Notes

  • Install dependencies with pip from requirements.txt.
  • Install dependencies with conda from conda_requirements.txt.
  • If you don't want pip to install them, conda install from file before installing hippynn. You may want to use -c pytorch for the pytorch channel. For ase and cupy, you probably want to use -c conda-forge.
  • Optional dependencies are listed in optional_dependencies.txt.

Documentation

Please see https://lanl.github.io/hippynn/ for the latest documentation. You can also build the documentation locally; see /docs/README.txt.

Other things

We are currently under development. At the moment, you should be prepared for breaking changes -- keep track of which version you are using if you need to maintain consistency.

As we clean up the rough edges, we are preparing a manuscript. If, in the meantime, you are using hippynn in your work, please cite this repository and the HIP-NN paper:

Lubbers, N., Smith, J. S., & Barros, K. (2018). Hierarchical modeling of molecular energies using a deep neural network. The Journal of Chemical Physics, 148(24), 241715.

See AUTHORS.txt for information on authors.

See LICENSE.txt for licensing information. hippynn is licensed under the BSD-3 license.

Triad National Security, LLC (Triad) owns the copyright to hippynn, which it identifies as project number LA-CC-19-093.

Copyright 2019. Triad National Security, LLC. All rights reserved. This program was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department of Energy/National Nuclear Security Administration. All rights in the program are reserved by Triad National Security, LLC, and the U.S. Department of Energy/National Nuclear Security Administration. The Government is granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this material to reproduce, prepare derivative works, distribute copies to the public, perform publicly and display publicly, and to permit others to do so.

hippynn's People

Contributors

bnebgen-lanl, boogie3d, eshinkle1, jan-janssen, lubbersnick, mchigaev, mgt16-lanl, peterli3819, sakibmatin, tautomer


hippynn's Issues

Using hippynn via mliap interface on CPU causes it to fail

Using hippynn via the mliap interface causes the script to fail:

Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.001
Traceback (most recent call last):
  File "mliap_unified_couple_kokkos.pyx", line 412, in mliap_unified_couple_kokkos.compute_forces_python_kokkos
  File "mliap_unified_couple_kokkos.pyx", line 394, in mliap_unified_couple_kokkos.MLIAPUnifiedInterfaceKokkos.compute_forces
  File "/home/bradwu/hippynn/hippynn/interfaces/lammps_interface/mliap_interface.py", line 86, in compute_forces
    data.eatoms = atom_energy.numpy().astype(np.double)
AttributeError: attribute 'eatoms' of 'mliap_unified_couple_kokkos.MLIAPDataPy' objects is not writable
ERROR: Running mliappy unified compute_forces failure. (src/KOKKOS/mliap_unified_kokkos.cpp:82)
Last command: run             10000

Which points to this if-else branch:

if return_device == "cpu":
    fij = fij.numpy()
    data.eatoms = atom_energy.numpy().astype(np.double)
else:
    eatoms = torch.as_tensor(data.eatoms, device=return_device)
    eatoms.copy_(atom_energy)

Known possible fixes (which we are unsure are correct) are:

  1. Update line 86 to eatoms = atom_energy.numpy().astype(np.double)
  2. Remove the if return_device=='cpu' branch.

Request for group weights

Sometimes we want to have groups of structures, where we weight some configurations more highly than others in the loss function.

Another step further when training to forces is to give weights that are inversely proportional to force magnitude. Sometimes the zero-force configurations are very important for stabilizing weird structures; a generic sketch of such a weighting follows.
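
In plain pytorch, this kind of weighting could look like the following (a generic sketch, not current hippynn API):

import torch

def weighted_mse(pred, true, weights):
    # weights: one scalar per configuration, larger for more important groups
    return (weights * (pred - true) ** 2).mean()

def force_weights(forces, eps=1e-2):
    # forces: (n_configs, n_atoms, 3); weight inversely to mean force magnitude,
    # so near-zero-force configurations count most (eps avoids division by zero)
    magnitudes = forces.norm(dim=-1).mean(dim=-1)
    return 1.0 / (magnitudes + eps)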

Happy to help and test things out!

Easy training restarts

@yingwaili requests that we do the training in such a way that restarting the model is basically automated. Maybe we can have some function to write a restart script in the model directory, or a function that restarts training and takes as input only the directory of the checkpoint.

Items of difficulty

  • restarting databases is a bit fragile - may need to be customized somehow.
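
A restart entry point might look like the following, using the serialization tools in the library (function names here are assumptions based on the restart documentation):

from hippynn.experiment.serialization import load_checkpoint_from_cwd
from hippynn.experiment import train_model

# Load everything saved in the checkpoint directory and resume training.
check = load_checkpoint_from_cwd()
train_model(**check, callbacks=None, batch_callbacks=None)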

Variable inner loops for different optimizers

If we pull out the step loop from the epoch loop as a variable function, this will make the library compatible with closure-based optimizers and other optimizers (such as SAM).

To do this, we should add three versions

  • (default) regular steps
  • closure-based steps
  • two-step (like SAM)

Which version is used should be controlled by the Controller. We can create an enum or use a string.
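
For reference, a closure-based step in plain pytorch looks like this (generic example using torch.optim.LBFGS, not hippynn code):

import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.LBFGS(model.parameters())
x, y = torch.randn(32, 10), torch.randn(32, 1)

def closure():
    # LBFGS may call this several times per step to re-evaluate the loss.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)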

Installation issues for python version 3.12

hippynn is not compatible with Python 3.12 due to versioneer.py.

Installation works with:
conda create -n test_env311 python=3.11 numpy scipy matplotlib pytorch cudatoolkit ase numba python-graphviz tqdm cupy cython -c conda-forge

Installation fails with:
conda create -n test_env312 python=3.12 numpy scipy matplotlib pytorch cudatoolkit ase numba python-graphviz tqdm cupy cython -c conda-forge

Possible fix to hippynn via mliap interface CPU failure

For reference:

# Assuming fij and atom_energy are tensors and data is an object with an eatoms attribute
if return_device == "cpu":
    # Move tensors to the CPU before converting them to numpy arrays
    fij = fij.cpu().numpy()
    data.eatoms = atom_energy.cpu().numpy().astype(np.double)
else:
    # Move atom_energy to the target device and convert to double precision
    data.eatoms = atom_energy.to(device=return_device, dtype=torch.double)

    # Alternatively, to preserve the original eatoms tensor and only copy
    # values from atom_energy:
    # data.eatoms = torch.empty_like(atom_energy, device=return_device, dtype=torch.double)
    # data.eatoms.copy_(atom_energy.to(device=return_device, dtype=torch.double))

I've also tried to run it without the kokkos package; you can adjust the command by simply omitting the options related to kokkos. Options like -sf kk, -k on, and -pk kokkos are specific to enabling and configuring the kokkos package for parallel execution on various architectures.

If your original command line call to LAMMPS includes these options, such as:

lmp -sf kk -k on -pk kokkos

To run LAMMPS without kokkos, you would remove the -sf kk, -k on, and -pk kokkos options, leaving you with:

lmp 
