prstrive / EPCDepth
[ICCV 2021] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation
License: MIT License
Hi there, thanks for the excellent work! I am using a large backbone with your EPCDepth network, which takes a long time to train. I am wondering whether training can be accelerated with multiple GPUs. I tried torch.distributed but it failed for some reason. Have you tried training on multiple GPUs? I would appreciate any help you can provide.
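Not the author's setup, but a simpler single-process alternative to torch.distributed is nn.DataParallel, which only requires wrapping the model. A minimal sketch (the small Sequential model here is a hypothetical stand-in for the EPCDepth network):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the EPCDepth network; any nn.Module works the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# nn.DataParallel splits each batch across all visible GPUs during forward();
# on a machine without GPUs it falls through to the wrapped module unchanged.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(4, 3, 32, 32).to(device)
out = model(x)
print(out.shape)  # torch.Size([4, 8, 32, 32])
```

DataParallel is slower than DistributedDataParallel because of single-process GIL contention, but it usually works out of the box, so it is a reasonable first step before debugging torch.distributed.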
Hi, dear author, I really appreciate your awesome work! It is more stable and performs better than depth estimation with monocular video.
However, I met a problem when I trained EPCNet on my own dataset.
The model performs well when trained for only 3 epochs. However, when I train for more epochs (e.g. 20), artifacts appear on the predicted disparity map, as shown in the figures below.
What could cause this? Could you give me some advice?
THANK YOU!
Hi! I used the pretrained model and got the predicted map. The values range from 0 to 1.3; what do they mean?
Hi, congratulations on publishing such an excellent article.
I just wonder: can I evaluate my own .jpg images with the pretrained model?
Hello,
This is great work!
I am facing an issue with random color augmentation.
out = self.color_aug(x)
TypeError: 'tuple' object is not callable
Could you please take a look?
Hi @prstrive,
Thank you for your great work. I have one question regarding the warping process, specifically about the variable 'stereo_T', in this segment of code:
EPCDepth/dataset/kitti_dataset.py
Line 164 in 84119c8
Thank you in advance.
Best,
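For readers with the same question: EPCDepth's dataset code appears to follow the monodepth2 convention (an assumption worth checking against the referenced line), where `stereo_T` is the 4x4 relative pose between the two stereo cameras and only its x-translation is non-zero. A sketch of that convention:

```python
import numpy as np

def make_stereo_T(side, do_flip, baseline=0.1):
    # stereo_T is the 4x4 pose from the input camera to its stereo counterpart.
    # Only the x-translation is set; its sign depends on which camera is the
    # source ("l" or "r") and on whether the image pair was horizontally flipped
    # (a flip mirrors the stereo geometry, so the sign flips too).
    # baseline=0.1 is the normalized value used in monodepth2-style code,
    # not the metric KITTI baseline (~0.54 m).
    stereo_T = np.eye(4, dtype=np.float32)
    baseline_sign = -1 if do_flip else 1
    side_sign = -1 if side == "l" else 1
    stereo_T[0, 3] = side_sign * baseline_sign * baseline
    return stereo_T

print(make_stereo_T("l", False)[0, 3])  # -0.1
```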
Traceback (most recent call last):
File "main.py", line 55, in
model.main()
File "/hpcfiles/users/hx/EPCDepth-main/model.py", line 89, in main
train_loss = self.train_epoch(epoch)
File "/hpcfiles/users/hx/EPCDepth-main/model.py", line 189, in train_epoch
progressbar.Timer(), ",", progressbar.ETA(), ",", progressbar.Variable('LR', width=1), ",",
AttributeError: module 'progressbar' has no attribute 'Variable'
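The `Variable` widget comes from the progressbar2 package; the older `progressbar` package shares the same import name but does not provide it, so this error usually means the wrong package is installed. A likely fix (assuming a pip-managed environment):

```shell
# 'progressbar' and 'progressbar2' both install as the module 'progressbar',
# but only progressbar2 has the Variable widget used by this code.
pip uninstall -y progressbar
pip install progressbar2
```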
Epoch 0/20: N/A% 00/5650 || Elapsed Time: 0:00:00,ETA: --:--:--,LR: -,Loss: ------
Traceback (most recent call last):
File "main.py", line 55, in
model.main()
File "/home/ji322906/EPCDepth/model.py", line 90, in main
train_loss = self.train_epoch(epoch)
File "/home/ji322906/EPCDepth/model.py", line 197, in train_epoch
for batch, data in enumerate(self.train_loader):
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in next
data = self._next_data()
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/ji322906/EPCDepth/dataset/kitti_dataset.py", line 151, in getitem
data["curr"] = self.transform(self.get_img(folder, frame_idx, side), is_flip, False, color_aug)
File "/home/ji322906/EPCDepth/dataset/kitti_dataset.py", line 88, in get_img
img_path = os.path.join(self.data_path, folder, "image_0{}/data".format(self.side_map[side]), "{:010d}{}".format(frame_idx, ".png"))
File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
How do I fix it??
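The last frame shows `os.path.join` receiving `None` as its first argument, i.e. `self.data_path` was never set, which usually means the `--data_path` flag was omitted on the command line. A minimal guard, assuming the option is named `--data_path` as in this codebase:

```python
import argparse

# os.path.join(self.data_path, ...) crashed because data_path was None.
# Marking the flag as required makes argparse fail fast with a clear message
# instead of a NoneType error deep inside a DataLoader worker.
parser = argparse.ArgumentParser()
parser.add_argument("--data_path", required=True,
                    help="root directory of the KITTI raw data")

# Simulated command line for illustration; the path is hypothetical.
args = parser.parse_args(["--data_path", "/data/kitti_raw"])
print(args.data_path)  # /data/kitti_raw
```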
I downloaded your pre-trained model named "model18_lr" from: https://drive.google.com/file/d/1Z60MI_UdTHfoSFSFwLI39yfe8njEN6Kp/view?usp=sharing .
I saved the estimated disparity map by your script:
python main.py
--val --data_path --resume /model18_192x640.pth.tar
--use_full_scale --post_process --output_scale 0 --disps_path
The result is:
Mono evaluation - using median scaling
Scaling ratios | med: 6.675 | std: 0.085
abs_rel | sq_rel | rmse | rmse_log | a1 | a2 | a3
0.169 | 0.981 | 5.269 | 0.241 | 0.745 | 0.943 | 0.978
It is not good. Is there anything I have missed?
Thank you!
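For context on the "Scaling ratios | med: 6.675" line: in monocular depth evaluation the predictions are commonly rescaled per image by the ratio of ground-truth to predicted medians (monodepth2-style; assuming this repository follows the same scheme). A median ratio of ~6.7 therefore means the raw predictions are about 6.7x too small, which for a stereo-trained, scale-aware model can itself be a sign that something in the setup is off. A sketch of the computation:

```python
import numpy as np

# Per-image median scaling as commonly used in monocular depth evaluation:
# rescale predictions so their median matches the ground-truth median.
def median_scale(pred_depth, gt_depth):
    ratio = np.median(gt_depth) / np.median(pred_depth)
    return pred_depth * ratio, ratio

gt = np.array([5.0, 10.0, 20.0])
pred = np.array([1.0, 2.0, 4.0])       # off by a global factor of 5
scaled, ratio = median_scale(pred, gt)
print(ratio)    # 5.0
print(scaled)   # [ 5. 10. 20.]
```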
Thanks authors for the interesting idea in the paper.
In my test, the region of the reconstructed point cloud containing nearby cars is distorted; that is, the disparity between obvious nearby cars and the background environment is not predicted distinctly.
I can guess three possible causes. First, the encoder part of the network may not be deep enough, so semantics are not learned well and the difference between the environment and the vehicles may not be judged correctly. Second, the disparity decoder contains down-sampled stages, so a car and its adjacent background may fall into the same grid cell of the output feature map. Third, the photometric loss covers large surrounding regions of the image, such as the sky, so the fine-grained loss is submerged.
Please tell me if you ever encountered this situation.
Hi, first of all, thanks for your excellent work! When I tried to reproduce your results on the KITTI test set with your code and pretrained weights, I got different results from those reported in this repository. Specifically, I tested model50
with:
python main.py --val --data_path <kitti path> --resume <model path>/model50.tar --use_full_scale --post_process --output_scale 0 --disps_path <disparity save path> --num_layer 50 --batch_size 4
And the results are:
From | Abs Rel | Sq Rel | RMSE | δ < 1.25 |
---|---|---|---|---|
This Repository | 0.091 | 0.646 | 4.207 | 0.901 |
My Reproduction | 0.096 | 0.669 | 4.254 | 0.888 |
Note that the images in my KITTI dataset have the .png extension, and you mentioned that 'Our pre-trained model is trained in jpg, and the test performance on png will slightly decrease.' Are the differences caused only by the image extension, or have I misunderstood something else?
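For anyone hitting the same gap: the monodepth2 repository, whose data pipeline this codebase follows, suggests converting the KITTI pngs to jpgs with ImageMagick to match the training distribution (check your disk space first, since this deletes the pngs; the exact flags below are taken from that README and may need adapting):

```shell
# Convert KITTI .png images to .jpg in place (monodepth2-style), removing
# each .png after a successful conversion. Requires ImageMagick and GNU parallel.
find kitti_data/ -name '*.png' | parallel 'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
```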
Thanks for your code. I have some questions. Can I train this on other datasets? And can I use the pretrained model to predict on images that are not from KITTI?