
mp-neural-pde-solvers's Introduction

Message Passing Neural PDE Solvers

Johannes Brandstetter*, Daniel Worrall*, Max Welling

Link to the paper: https://arxiv.org/abs/2202.03376

ICLR 2022 Spotlight Paper

If you find our work and/or our code useful, please cite us via:

@article{brandstetter2022message,
  title={Message Passing Neural PDE Solvers},
  author={Brandstetter, Johannes and Worrall, Daniel and Welling, Max},
  journal={arXiv preprint arXiv:2202.03376},
  year={2022}
}

Set up conda environment

source environment.sh

Produce datasets for tasks E1, E2, E3, WE1, WE2, WE3

python generate/generate_data.py --experiment={E1, E2, E3, WE1, WE2, WE3} --train_samples=2048 --valid_samples=128 --test_samples=128 --log=True --device=cuda:0

Train MP-PDE solvers for tasks E1, E2, E3

python experiments/train.py --device=cuda:0 --experiment={E1, E2, E3} --model={GNN, ResCNN, Res1DCNN} --base_resolution=250,{100,50,40} --time_window=25 --log=True

Train MP-PDE solvers for tasks WE1, WE2

python experiments/train.py --device=cuda:0 --experiment={WE1, WE2} --base_resolution=250,{100,50,40} --neighbors=6 --time_window=25 --log=True

Train MP-PDE solvers for task WE3

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,100 --neighbors=20 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,50 --neighbors=12 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,40 --neighbors=10 --time_window=25 --log=True

python experiments/train.py --device=cuda:0 --experiment=WE3 --base_resolution=250,40 --neighbors=6 --time_window=25 --log=True

mp-neural-pde-solvers's People

Contributors

brandstetter-johannes, nickmcgreivy, yoeripoels, johbrandstetter, pkhudov


mp-neural-pde-solvers's Issues

Incompatibility of F.conv1d with 4D tensors in __getitem__ of HDF5Dataset in utils.py

Hello,

Thank you for an amazing paper and great code!

I came across an error that occurs when processing the dataset, which has dimensions [batch_size, channels, spatial_res, temp_res], with F.conv1d in these parts of utils.py:

weights = torch.tensor([[[[0.2]*5]]])
u_super = F.conv1d(u_super_padded, weights, stride=(1, self.ratio_nx)).squeeze().numpy()

u_super = F.conv1d(torch.tensor(u_super), weights, stride=(1, self.ratio_nx)).squeeze().numpy()

x = F.conv1d(x_super, weights, stride=(1, self.ratio_nx)).squeeze().numpy()

F.conv1d expects either a 2D unbatched or a 3D batched input, while the data here is 4D. The weight shape [1, 1, 1, 5] indicates a 1D-style convolution expressed in a 2D context, which F.conv1d cannot perform.

I suggest replacing it with F.conv2d, which fixes the error while preserving the intended 1D convolution, since the kernel has height 1.
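For illustration, a minimal sketch of the suggested change (the shapes and downsampling ratio below are made up for the example; only the F.conv2d pattern is the point):

import torch
import torch.nn.functional as F

# Hypothetical 4D input [batch_size, channels, dim1, dim2], with the axis to be
# downsampled last, mirroring the situation described above.
u_super_padded = torch.randn(16, 1, 250, 204)
ratio_nx = 2  # example downsampling ratio

# Box-filter weights of shape [1, 1, 1, 5]: an averaging kernel along the last
# axis, written as a 2D kernel of height 1 so that F.conv2d accepts the 4D input.
weights = torch.tensor([[[[0.2] * 5]]])

# F.conv1d rejects the 4D input; F.conv2d performs the same 1D averaging
# because the kernel has height 1.
u_super = F.conv2d(u_super_padded, weights, stride=(1, ratio_nx)).squeeze().numpy()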

Small issue in decoder 'dt' scaling

Hi,
Thank you for the beautiful paper & code!

I noticed a small issue in the construction of dt in the decoders. The PDE time/grid parameters are loaded from the train dataset in train.py as follows:

# Equation specific parameters
pde.tmin = train_dataset.tmin
pde.tmax = train_dataset.tmax
pde.grid_size = base_resolution

However, in the decoder pde.dt is also used:
dt = (torch.ones(1, self.time_window) * self.pde.dt).to(h.device)

Since pde.dt is never set, it keeps the default value from the PDE constructor rather than the dt used to generate the dataset. Because dt is constant throughout the dataset, this only amounts to a constant scaling and causes no real issues, but it would be nice to have it correct :) A simple fix is to add pde.dt = train_dataset.dt to train.py.
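For clarity, the suggested one-line fix in context (assuming train_dataset exposes a dt attribute, analogous to tmin and tmax above):

# Equation specific parameters in train.py, with the proposed addition
pde.tmin = train_dataset.tmin
pde.tmax = train_dataset.tmax
pde.grid_size = base_resolution
pde.dt = train_dataset.dt  # proposed fix: use the dt the dataset was generated with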

Cheers,
Yoeri

Code for FNO-PF model

Hi, I am having some issues reproducing the FNO-PF results. I am working on experiment E1, using Burgers' equation and the 1D FNO model from the authors' GitHub repo. With only the push-forward trick applied I get reasonable results, but I can't quite figure out how the temporal bundling is implemented.
One could, for instance, simply change the output size of the last layer to match the time window; another option is to follow the approach of Equation 10 in your paper, possibly including the extra convolutional decoder layers.
Which approach did you use in the experiments?
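(For concreteness, a minimal sketch of the first option; the class and function names below are hypothetical and not taken from either repository.)

import torch
import torch.nn as nn

class BundledHead(nn.Module):
    # Hypothetical output head: projects hidden features to a bundle of
    # `time_window` future time steps in a single forward pass.
    def __init__(self, hidden_channels, time_window=25):
        super().__init__()
        self.proj = nn.Conv1d(hidden_channels, time_window, kernel_size=1)

    def forward(self, h):    # h: [batch, hidden_channels, n_x]
        return self.proj(h)  # [batch, time_window, n_x]

def rollout(model, u0, n_bundles):
    # Autoregressive rollout with temporal bundling: each call predicts
    # `time_window` steps, which then become the next input window.
    u, outputs = u0, []
    for _ in range(n_bundles):
        u = model(u)         # [batch, time_window, n_x]
        outputs.append(u)
    return torch.cat(outputs, dim=1)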

Thanks in advance,
Winfried

Question related to the test process

Hello, I have read your paper. It is excellent research.
I have one question.

When examining the calculation of the unrolled loss during testing, it appears that the starting time step is 50 (in the case of the E1 dataset).

Does the starting value use the ground truth obtained from the PDE? In the other tests mentioned in the paper, is the ground-truth value included for the initial time steps in addition to the initial condition?
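(For reference, a generic sketch of how an unrolled test loss with a ground-truth starting window could look; the variable names, shapes, and default values are illustrative, not taken from the repository's test code.)

import torch

def unrolled_test_loss(model, u_true, start=50, time_window=25, n_unroll=4):
    # u_true: ground-truth trajectory of shape [batch, n_t, n_x].
    # The first input bundle is taken from ground truth around `start`; after
    # that, only the model's own predictions are fed back in.
    u = u_true[:, start - time_window:start]
    losses = []
    for k in range(n_unroll):
        u = model(u)  # predict the next `time_window` steps
        target = u_true[:, start + k * time_window : start + (k + 1) * time_window]
        losses.append(torch.mean((u - target) ** 2))
    return torch.stack(losses).mean()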

I appreciate you taking the time to read.
Thank you very much, and have a great day!

Experiments and results on Fourier Neural Operator

Hello,

Thank you for the excellent paper.

I am very interested in the experimental results of applying some of these techniques to the FNO. However, I failed to find the corresponding code in this repository. Maybe I missed it, or is the code not yet released?

Error on the WE3 generation?

Hi,

I was going over the generation script for WE3 in generate/generate_data.py and noticed that bc_right is never modified in this mixed setting with Neumann or Dirichlet conditions. I suspect line 197 is at fault: bc_left = np.random.randint(0, 2, size=1) should probably be bc_right = np.random.randint(0, 2, size=1), but maybe this is intended. Should we always keep the right boundary condition as Dirichlet in this case?
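(The suggested change in context; this is a sketch based on the description above, not a copy of the repository code, and the 0/1 encoding of boundary types is an assumption.)

import numpy as np

# WE3: sample the boundary type for each side independently
# (e.g. 0 = Dirichlet, 1 = Neumann; the exact encoding is assumed here).
bc_left = np.random.randint(0, 2, size=1)
bc_right = np.random.randint(0, 2, size=1)  # fix: sample bc_right instead of re-assigning bc_left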
Best,
Louis

A typo in Appendix F

(screenshot of the pseudocode from Appendix F)

I believe the third-to-last line here should read

target <- data(t+NK : t+(N+1)K)

It's easy to see this if we let N=1.
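For concreteness, instantiating the corrected line at small N (pure arithmetic on the expression above, with K the bundle / time-window size):

target <- data(t+K : t+2K)    for N = 1
target <- data(t+2K : t+3K)   for N = 2

i.e. consecutive, non-overlapping windows of K steps.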

About Section 4.4 (2D Experiment)

Dear authors,
I am very interested in the 2D experiment, but I have not found its code or data. Could you please tell me where to find them?
Thank you for your help.

Euler Sod Shock Tube Problem

Dear Johannes,

Thanks for making this public!!

I would like to apply your method in astrophysics, in a context where one has to solve hyperbolic PDEs with shocks, similar, for instance, to the 1D Euler equations (a coupled system of equations).

My question is whether it is possible to use your method on this type of PDE.

Thanks,

Roberto

Code for 2D MP-Neural-PDE-Solver

Hi, another code request ;)
Is there by any chance code available for the 2D variant of the message-passing neural network used? Or any information regarding the architecture/hyperparameter choices? It would help me a lot!

Kind regards,

Winfried
