i207m / pinnacle

Codebase for PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs.

Home Page: https://arxiv.org/abs/2306.08827

Languages: Python 99.90%, Shell 0.10%
Topics: pde-solver, physics-informed-ml, pinn

pinnacle's Introduction

PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs

This repository is our codebase for PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs. Our paper is currently under review; we will provide a more detailed guide soon.

Implemented Methods

This benchmark implements the following variants and introduces a new, challenging dataset on which to compare them:

Method                Type
------                ----
PINN                  Vanilla PINNs
PINNs (Adam+L-BFGS)   Vanilla PINNs
PINN-LRA              Loss reweighting
PINN-NTK              Loss reweighting
RAR                   Collocation point resampling
MultiAdam             New optimizer
gPINN                 New loss function (regularization terms)
hp-VPINN              New loss function (variational formulation)
LAAF                  New architecture (activation)
GAAF                  New architecture (activation)
FBPINN                New architecture (domain decomposition)

See our paper for more details on each of these methods.
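All of these variants modify the same baseline objective: the mean squared PDE residual at collocation points plus the mean squared boundary misfit. For orientation, here is a minimal sketch in plain PyTorch for the 1D Burgers' equation; this is illustrative only, not PINNacle's actual code, and all names (net, x_f, t_f, x_b, t_b, u_b) are hypothetical.

import torch

# Illustrative vanilla-PINN loss for Burgers' equation u_t + u*u_x = nu*u_xx.
# Not PINNacle's code: net is any network, x_f/t_f are collocation points,
# x_b/t_b/u_b are boundary/initial data; all tensors of shape (N, 1).
def pinn_loss(net, x_f, t_f, x_b, t_b, u_b, nu=0.01):
    x_f = x_f.clone().requires_grad_(True)
    t_f = t_f.clone().requires_grad_(True)
    u = net(torch.cat([x_f, t_f], dim=1))
    grad = lambda y, x: torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t_f), grad(u, x_f)
    u_xx = grad(u_x, x_f)
    residual = u_t + u * u_x - nu * u_xx           # PDE residual
    loss_pde = (residual ** 2).mean()              # collocation term
    loss_bc = ((net(torch.cat([x_b, t_b], dim=1)) - u_b) ** 2).mean()
    return loss_pde + loss_bc                      # variants reweight/extend this sum

Loss-reweighting methods (PINN-LRA, PINN-NTK) adjust the balance between these terms, RAR changes how the collocation points are chosen, and gPINN/hp-VPINN add or replace loss terms.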

Installation

git clone https://github.com/i207M/PINNacle.git --depth 1
cd PINNacle
pip install -r requirements.txt

Usage

📄 Full Documentation

Run all 20 cases with default settings:

python benchmark.py [--name EXP_NAME] [--seed SEED] [--device DEVICE]
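For example, a single run with a fixed seed could look like the following (assuming EXP_NAME is any label you choose for the run and DEVICE takes a GPU index, as is typical for PyTorch code; check the full documentation for the exact accepted values):

python benchmark.py --name my-first-run --seed 42 --device 0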

Citation

If you find our work useful, please cite our paper:

@article{hao2023pinnacle,
  title={PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs},
  author={Hao, Zhongkai and Yao, Jiachen and Su, Chang and Su, Hang and Wang, Ziao and Lu, Fanzhi and Xia, Zeyu and Zhang, Yichi and Liu, Songming and Lu, Lu and others},
  journal={arXiv preprint arXiv:2306.08827},
  year={2023}
}

We also suggest you have a look at the survey paper (Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications) about PINNs, neural operators, and other paradigms of PIML.

@article{hao2022physics,
  title={Physics-informed machine learning: A survey on problems, methods and applications},
  author={Hao, Zhongkai and Liu, Songming and Zhang, Yichi and Ying, Chengyang and Feng, Yao and Su, Hang and Zhu, Jun},
  journal={arXiv preprint arXiv:2211.08064},
  year={2022}
}

pinnacle's People

Contributors

edwardix, haozhongkai, i207m, melvoyager, wangziao9


pinnacle's Issues

Problem in quick test

Hi, thanks for your great work. I'm trying to quickly test the code, and the following problem came up. Can you tell me what's wrong with my code? Thank you very much.
(error screenshot attached; not reproduced here)

Problem using the 'laaf' and 'gaaf' methods

Hi, we ran into a problem when running the cloned code with the default settings, except for changing the method from 'adam' to 'laaf' or 'gaaf'. Here is the content of logerr.txt:

Traceback (most recent call last):
  File "/scratch/tpang/yuanzhe_hu/TBv2-PINNacle/benchmark.py", line 153, in <module>
    trainer.train_all()
  File "/scratch/tpang/yuanzhe_hu/TBv2-PINNacle/trainer.py", line 98, in train_all
    model = get_model()
  File "/scratch/tpang/yuanzhe_hu/TBv2-PINNacle/benchmark.py", line 104, in get_model_dde
    net = DNN_LAAF(len(parse_hidden_layers(command_args))-1, parse_hidden_layers[0], pde.input_dim, pde.output_dim)
TypeError: 'function' object is not subscriptable
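From the traceback, the TypeError seems to come from parse_hidden_layers[0], which subscripts the function object itself instead of its return value. A minimal sketch of what was presumably intended (assuming parse_hidden_layers(command_args) returns the list of hidden-layer widths):

# Hypothetical fix sketch, not the repository's actual code: call
# parse_hidden_layers first, then index the returned list.
hidden = parse_hidden_layers(command_args)  # e.g. [100, 100, 100, 100, 100]
net = DNN_LAAF(len(hidden) - 1, hidden[0], pde.input_dim, pde.output_dim)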

Besides, how can we run the 'gpinn' and 'hp-vpinn' methods? I cannot find specific settings for these two methods in the benchmark.py file.

How do you benchmark FBPINN and use MultiAdam

Hello Authors,

Thank you for the hard work on the benchmarks.

The demo code does not include any benchmark that uses FBPINN on the different cases, nor one that uses MultiAdam. Could you provide more documentation, please?

Thank you.

Good luck with NeurIPS!

RAR interval argument vs log-every argument

Hello,
I am currently working on new sampling/resampling methods for PINNs. I decided to use your benchmark because it offers a great number of PDEs and methods to compare!
However, I have already encountered several minor and major flaws in your code, which is natural for very new work, and I am happy to help :) (I also hope that your work will be accepted at ICLR.)

Issue description

The main issue I am currently encountering concerns the RAR wrapper. I cannot find its "root" or its inter-dependencies myself in order to correct the code. In benchmark.py I change the RAR parameters as follows:

line 120: model.train = rar_wrapper(pde, model, {"interval": 10, "count": 1})

When I change the interval argument to a number lower than log-every, the following error occurs:

$ python benchmark_fast.py --method rar --log-every 100 --device cpu
Using backend: pytorch

Set the default float type to float32
***** Begin #0-0 *****
Compiling model...
'compile' took 0.000140 s

PDE Class Name: Burgers1D
Training model...

Step      Train loss                        Test loss                         Test metric
0         [1.45e-02, 6.48e-01, 4.15e-02]    [1.58e-02, 6.48e-01, 4.15e-02]    []  
10        [4.48e-04, 4.10e-01, 1.98e-02]    [4.47e-04, 4.10e-01, 1.98e-02]    []  
Traceback (most recent call last):
  File "/home/dymchens-ext/PINNacle/benchmark_fast.py", line 149, in <module>
    trainer.train_all()
  File "/home/dymchens-ext/PINNacle/trainer.py", line 99, in train_all
    model.train(**train_args, model_save_path=save_path)
  File "/home/dymchens-ext/PINNacle/src/utils/rar.py", line 19, in wrapper
    train(*args, **kwargs)
  File "/home/dymchens-ext/PINNacle/deepxde/utils/internal.py", line 22, in wrapper
    result = f(*args, **kwargs)
  File "/home/dymchens-ext/PINNacle/deepxde/model.py", line 603, in train
    self.callbacks.on_train_end()
  File "/home/dymchens-ext/PINNacle/deepxde/callbacks.py", line 94, in on_train_end
    callback.on_train_end()
  File "/home/dymchens-ext/PINNacle/src/utils/callbacks.py", line 208, in on_train_end
    self.frmses[:, 0], self.frmses[:, 1], self.frmses[:, 2]]).T,
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed

I guess this has to do with the fact that the wrapper changes the model attribute iterations to interval, so the callback on_train_end is called before any logging has happened and no metrics have been computed yet. I don't know whether it is better to change the wrapper behaviour or the callbacks, to simply not log anything, or to restrict which default metrics are calculated (for example, I don't care about any except MSE and the residual). Currently I just log every 10 iterations, which of course eats a lot of storage and makes the logs messy. For the method I want to implement (a new wrapper, similar to RAR), I actually have to update the training data every iteration, so my --interval 1 argument is impossible to use without correcting the code.
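For example, one defensive option on the callback side could be a guard like this (a minimal sketch; self.frmses is taken from the traceback above, everything else is hypothetical):

import numpy as np

# Hypothetical guard for the summary callback's on_train_end: if RAR
# restarts training before the first logging step, self.frmses is still
# empty, so skip the summary instead of indexing a 1-D/empty array.
def on_train_end(self):
    frmses = np.asarray(self.frmses)
    if frmses.ndim < 2 or frmses.shape[0] == 0:
        return  # no metrics recorded yet
    summary = np.stack([frmses[:, 0], frmses[:, 1], frmses[:, 2]], axis=1)
    self.write_summary(summary)  # stand-in for the real logging code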

Can you either point me toward how to resolve this issue (I could then open a merge request for you), or perhaps resolve it yourself so that I can continue my work? Thanks in advance.


Additional comments/questions (not connected to the issue).

Apart from this issue, I wanted to raise a few other points (if needed, I can open separate issues, just tell me):

  1. Argparse:

    • Add a help comment for each argument to the benchmark.py argparser; their usage and intended values are not obvious without looking inside the code.
    • I think it would be better if the RAR parameters interval and count were included in the argparser. I also noticed that config.json doesn't save anything except the iterations and log-every values (but maybe it is designed to be like this).
  2. the "full documentation" website is lacking many descriptions, even when there is a subsection "header" in a content tree, but content item is not clickable (for example, RAR), but I gues you are aware of that.

  3. A weird one: if I try to use the summary() function from src.utils.summary standalone, deepxde uses the tensorflow backend (at least the logger says so), and I then get the error "no module tensorflow"; yet there is no tensorflow in the requirements.txt file... (I work in an isolated environment with Nix, so this was easy to catch, but maybe the plot/summary functions are not intended for standalone use.)

  4. RAR implementation and choice of RAR:

    • I am not sure that the algorithm is correctly implemented, as it lacks the sampling step (see the screenshot from [Wu et al., 2023] below), since the added points are taken from X_train:
      (screenshot of the RAR algorithm from [Wu et al., 2023] attached; not reproduced here)
    • I am confused by the choice of RAR as the representative resampling method, since in [Wu et al., 2023] it shows worse quality than RAD or Random-R; can you explain this choice to me, please? (A sketch of what I mean by RAD follows this list.)
    • I am nevertheless planning to implement several resampling algorithms as DeepXDE wrappers for PINNacle, since I want to compare different resampling methods. The only thing I cannot find: is there some script or option to run several methods and, in the end, produce a plot comparing the losses of the different methods, or a table similar to the ones in your paper? Or a script that collects errors.txt across runs/? I think this kind of benchmark script would be amazing.
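To make the comparison concrete, here is a minimal sketch of the RAD sampling rule from [Wu et al., 2023] that I have in mind (residual() and sample_domain() are hypothetical placeholders, not PINNacle API):

import numpy as np

# RAD-style resampling sketch [Wu et al., 2023]: draw new collocation
# points with probability proportional to err^k / mean(err^k) + c, where
# err is the absolute PDE residual at random candidate points.
def rad_resample(residual, sample_domain, n_candidates=10000, n_new=100, k=1.0, c=1.0):
    x = sample_domain(n_candidates)      # random candidates over the domain
    err = np.abs(residual(x)) ** k
    p = err / err.mean() + c             # unnormalized RAD density
    p = p / p.sum()
    idx = np.random.choice(n_candidates, size=n_new, replace=False, p=p)
    return x[idx]                        # points to add to the training set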

Thanks a lot in advance for your attention and a possible answer. You can also reply to me by email: sofya (dot) dymchenko -at- gmail com.
