
prstrive / epcdepth

[ICCV 2021] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation

License: MIT License

Python 100.00%
data-augmentation deep-learning depth-estimation monocular monocular-depth-estimation monodepth self-supervised stereo unsupervised

epcdepth's People

Contributors: prstrive

epcdepth's Issues

Multiple GPU training

Hi there, thanks for the excellent work! I am training a larger backbone on your EPCDepth network, which takes a long time to train. I am wondering if we can accelerate the training with multiple GPUs. I tried torch.distributed but it failed for some reason. Have you tried training with multiple GPUs? I really appreciate any help you can provide.
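For reference, the released code appears to be single-GPU. Below is a generic PyTorch DistributedDataParallel skeleton showing the pieces a multi-GPU port usually needs (process group init, a DistributedSampler, and the DDP wrapper). The model, dataset, and hyperparameters are placeholders, not EPCDepth's actual API:

```python
# Minimal DistributedDataParallel (DDP) training skeleton. This is a generic
# sketch, not EPCDepth's code: the linear model and random dataset below are
# placeholders. Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each worker process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(10, 1).cuda(local_rank)       # placeholder network
    model = DDP(model, device_ids=[local_rank])     # syncs gradients across ranks

    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    sampler = DistributedSampler(dataset)           # shards the data across ranks
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)                    # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()         # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A common cause of torch.distributed failures is forgetting the DistributedSampler (each GPU then sees the full dataset) or not calling set_epoch, so shuffling repeats identically every epoch.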

Artifacts appear as training goes on

Hi, dear author, I really appreciate your awesome work! It is more stable and performs better than depth estimation from monocular video.

However, I ran into a problem when training EPCNet on my own dataset.
When the model is trained for only 3 epochs, the performance is good. However, when I train for more epochs (such as 20), artifacts appear on the predicted disparity map, as shown in the following figures.
[figures: predicted disparity maps showing artifacts]

What could be causing this result? Could you give me some advice?
THANK YOU!

I sincerely congratulate you on publishing such an excellent article. After reading it, I encountered a problem when running the code; I hope you can help take a look.

Traceback (most recent call last):
  File "main.py", line 55, in <module>
    model.main()
  File "/hpcfiles/users/hx/EPCDepth-main/model.py", line 89, in main
    train_loss = self.train_epoch(epoch)
  File "/hpcfiles/users/hx/EPCDepth-main/model.py", line 189, in train_epoch
    progressbar.Timer(), ",", progressbar.ETA(), ",", progressbar.Variable('LR', width=1), ",",
AttributeError: module 'progressbar' has no attribute 'Variable'
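This AttributeError usually means the import resolved to the legacy progressbar package rather than progressbar2, which provides the Variable widget under the same import name. A possible fix, assuming a pip-managed environment:

```shell
# The Variable widget ships with the progressbar2 package; the older
# "progressbar" package shadows the same import name and lacks it.
# Assumption: the environment currently has the legacy package installed.
pip uninstall -y progressbar
pip install --upgrade progressbar2
# Verify the widget is now available:
python -c "import progressbar; print(hasattr(progressbar, 'Variable'))"
```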

TypeError: expected str, bytes or os.PathLike object, not NoneType

Epoch 0/20: N/A% 00/5650 || Elapsed Time: 0:00:00,ETA: --:--:--,LR: -,Loss: ------
Traceback (most recent call last):
  File "main.py", line 55, in <module>
    model.main()
  File "/home/ji322906/EPCDepth/model.py", line 90, in main
    train_loss = self.train_epoch(epoch)
  File "/home/ji322906/EPCDepth/model.py", line 197, in train_epoch
    for batch, data in enumerate(self.train_loader):
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ji322906/EPCDepth/dataset/kitti_dataset.py", line 151, in __getitem__
    data["curr"] = self.transform(self.get_img(folder, frame_idx, side), is_flip, False, color_aug)
  File "/home/ji322906/EPCDepth/dataset/kitti_dataset.py", line 88, in get_img
    img_path = os.path.join(self.data_path, folder, "image_0{}/data".format(self.side_map[side]), "{:010d}{}".format(frame_idx, ".png"))
  File "/home/ji322906/.conda/envs/jihyungkim94/lib/python3.8/posixpath.py", line 76, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType

How do I fix it?
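The traceback bottoms out in os.path.join with a None first argument, which points at self.data_path being unset, most likely because --data_path was not passed on the command line. A small reproduction plus a hypothetical early guard (the helper below is illustrative, not EPCDepth's actual code):

```python
# os.path.join raises this exact TypeError when its first argument is None,
# which matches self.data_path never being set. The helper below is a
# hypothetical guard that fails early with an actionable message instead of
# deep inside a DataLoader worker process.
import os

def build_img_path(data_path, folder, side_dir, frame_idx):
    if data_path is None:
        raise ValueError("data_path is None; pass --data_path <kitti_root> "
                         "when launching main.py")
    return os.path.join(data_path, folder, side_dir, "{:010d}.png".format(frame_idx))

# Reproducing the error from the traceback:
try:
    os.path.join(None, "2011_09_26")
except TypeError as err:
    print(err)  # expected str, bytes or os.PathLike object, not NoneType
```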

Getting different test results on the KITTI

  1. I downloaded your pre-trained model named "model18_lr" from: https://drive.google.com/file/d/1Z60MI_UdTHfoSFSFwLI39yfe8njEN6Kp/view?usp=sharing .

  2. I saved the estimated disparity map by your script:

python main.py --val --data_path <dataset_dir> --resume <model_path>/model18_192x640.pth.tar --use_full_scale --post_process --output_scale 0 --disps_path <disps_save_path>

  3. I tested the depth map using the evaluation script provided by monodepth2
     ( https://github.com/nianticlabs/monodepth2/blob/master/evaluate_depth.py ).
     The command is: python evaluate_depth.py --data_path <dataset_dir> --eval_mono --ext_disp_to_eval <saved_depth_map> --post_process.

The result is:
Mono evaluation - using median scaling
Scaling ratios | med: 6.675 | std: 0.085

| abs_rel | sq_rel | rmse  | rmse_log | a1    | a2    | a3    |
| 0.169   | 0.981  | 5.269 | 0.241    | 0.745 | 0.943 | 0.978 |

These results are far from those reported. Is there anything I have missed?
Thank you!
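One thing worth checking: a median scaling ratio of about 6.7 means the predictions are on a very different scale than the mono evaluation protocol assumes. Since EPCDepth is trained with stereo supervision, monodepth2's script is normally run with --eval_stereo, which skips per-image median scaling and applies its fixed stereo scale factor. This suggestion is based on monodepth2's documented conventions and is not confirmed for this repository:

```shell
python evaluate_depth.py \
    --data_path <dataset_dir> \
    --eval_stereo \
    --ext_disp_to_eval <saved_depth_map> \
    --post_process
```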

Distortion between nearby cars and the adjacent environment

Thanks to the authors for the interesting idea in the paper.
In my test, the portion of the reconstructed point cloud containing nearby cars is distorted: the disparity between clearly visible nearby cars and the background environment is not predicted distinctly.
I can guess three possible causes. First, the encoder of the network is not deep enough, so the semantics are not learned well and the network may fail to distinguish vehicles from their surroundings. Second, the disparity decoder contains down-sampled stages, so a car and its adjacent environment may fall into the same grid cell of the output feature map. Third, the photometric loss covers large homogeneous regions of the image such as the sky, so the fine-grained loss on object boundaries is submerged.
Please tell me if you have ever encountered this situation.

Getting different test results on the KITTI

Hi, first of all, thanks for your excellent work! When I tried to reproduce your results on the KITTI test set with your code and pretrained weights, I got different results from those reported in this repository. Specifically, I tested model50 with:
python main.py --val --data_path <kitti path> --resume <model path>/model50.tar --use_full_scale --post_process --output_scale 0 --disps_path <disparity save path> --num_layer 50 --batch_size 4
And the results are:

| From            | Abs Rel | Sq Rel | RMSE  | δ < 1.25 |
| This Repository | 0.091   | 0.646  | 4.207 | 0.901    |
| My Reproduction | 0.096   | 0.669  | 4.254 | 0.888    |

Note that the images in my KITTI dataset have the .png extension, and you mentioned that 'Our pre-trained model is trained in jpg, and the test performance on png will slightly decrease.' Are the differences caused solely by the image format, or am I misunderstanding something else?
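If the png/jpg mismatch is the suspect, monodepth2's README includes a conversion command that could be adapted here. Assumptions: ImageMagick and GNU parallel are installed, and kitti_data/ is the dataset root. Note that it deletes the original pngs, so back them up first:

```shell
# Convert every KITTI png to jpg in place (command from the monodepth2 README).
# WARNING: the trailing "rm {}" removes each original png after conversion.
find kitti_data/ -name '*.png' | parallel \
  'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
```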

Dataset

Thanks for your code. I have some questions. Can I train this model on other datasets? And can I use the pretrained model to predict on images that are not from KITTI?
