Comments (12)
from deepwave.
This is my code. Since it is too long to paste, I have attached it as a file. Thank you for your corrections.
elatasic.txt
Thanks for your guidance. After testing, I finally found the cause: I was running the program on four GPUs at the same time. On a single GPU it runs and the gradient propagates correctly, but across multiple GPUs the gradient does not propagate well, which degrades the inversion result. Attached is my latest run; the gradient clearly propagates well now, but two lines still appear in the result. I suspect gradient propagation across multiple GPUs is not as correct as on a single GPU. What do you think?
test.pdf
This is my code. The main change is that it now runs on GPUs 0, 1, and 3. Corrections are welcome.
elatasic.txt
I think I see the problem. In Deepwave's DataParallel example (https://github.com/ar4/deepwave/blob/master/docs/example_distributed_dp.py) the models are passed to the constructor of the propagator object, not when the propagator is being applied. This is because PyTorch's DataParallel divides input Tensors between GPUs on the specified dimension (0 by default), so if you pass the models when the propagator is applied the models will also be divided among the GPUs on this dimension. By passing them to the constructor instead, you avoid this.
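To illustrate the splitting behaviour described above, here is a minimal sketch. It uses torch.chunk to stand in for the scatter that DataParallel performs on forward() inputs; the shapes (a four-device split, an 8 x 100 model, 8 shots) are hypothetical:

```python
import torch

# A 2D velocity model of shape (ny, nx). If passed as a forward() input,
# DataParallel's scatter would split it along dim 0 (the ny axis), so each
# GPU would see only a horizontal slice of the model -- which is wrong.
vp = torch.full((8, 100), 1500.0)
model_chunks = torch.chunk(vp, 4, dim=0)
print([c.shape for c in model_chunks])  # each device would get a (2, 100) slice

# Source amplitudes, by contrast, have shape (n_shots, n_sources, nt),
# so splitting along dim 0 correctly divides the shots among the devices.
source_amplitudes = torch.zeros(8, 1, 200)
shot_chunks = torch.chunk(source_amplitudes, 4, dim=0)
print([c.shape for c in shot_chunks])  # each device gets 2 shots
```

This is why per-shot Tensors belong in forward() while the model Tensors belong in the constructor.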
So, I suggest that you modify your Prop class to take vp, vs, and rho as inputs in __init__
rather than passing them when applying the propagator:
class Prop(torch.nn.Module):
    def __init__(self, dx, dt, freq, vp, vs, rho):
        super().__init__()
        self.dx = dx
        self.dt = dt
        self.freq = freq
        self.vp = vp
        self.vs = vs
        self.rho = rho

    def forward(self, source_amplitudes, source_locations,
                receiver_locations):
        out = elastic(
            *deepwave.common.vpvsrho_to_lambmubuoyancy(self.vp, self.vs,
                                                       self.rho),
            self.dx,
            self.dt,
            source_amplitudes_y=source_amplitudes,
            source_locations_y=source_locations,
            receiver_locations_y=receiver_locations,
            pml_freq=self.freq,
        )
        return out[-2]
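A runnable sketch of the same pattern, with a toy module standing in for Prop (ToyProp and its shapes are hypothetical; the real forward() would call elastic() instead of the placeholder arithmetic here):

```python
import torch

class ToyProp(torch.nn.Module):
    """Stand-in for Prop: model Tensors are stored in __init__, so
    DataParallel will replicate them whole rather than scattering them."""

    def __init__(self, model):
        super().__init__()
        self.model = model  # every replica sees the full model

    def forward(self, source_amplitudes):
        # Placeholder for the elastic() call: reduce over time samples
        # and scale by a model statistic, giving one value per source.
        return source_amplitudes.sum(dim=-1) * self.model.mean()

model = torch.full((8, 100), 1500.0)        # full (ny, nx) model
prop = torch.nn.DataParallel(ToyProp(model))
# Only per-shot Tensors go through forward(), so only the shot
# dimension (dim 0) is divided among the devices.
out = prop(torch.ones(4, 1, 200))           # (n_shots, n_sources, nt)
print(out.shape)  # torch.Size([4, 1])
```

With no CUDA devices available, DataParallel simply calls the wrapped module directly, so this sketch also runs on CPU.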
I hope that will resolve the problem you encountered, but note that PyTorch's documentation recommends using DistributedDataParallel rather than DataParallel for better performance. Here is an example of using it with Deepwave: https://github.com/ar4/deepwave/blob/master/docs/example_distributed_ddp.py
Thank you for your guidance. My multi-GPU program now runs correctly, and the gradient propagates correctly.
I hope this Issue is resolved, so I am going to close it. Please feel free to reopen it if you have further questions about this.