
nqgn's Introduction

NQGN

N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks

Citation

@inproceedings{braun2022n,
  title={N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating Networks},
  author={Braun, Daniel and Morell, Olivier and Vasseur, Pascal and Demonceaux, C{\'e}dric},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  pages={2381--2387},
  year={2022},
  organization={IEEE}
}

nqgn's People

Contributors

dbraun-ub

Stargazers

Aamir Bader Shah

Watchers

Aamir Bader Shah

nqgn's Issues

Missing Decoder Architectures Related to Resnet18

Sorry to keep asking questions. I just noticed that options.py lists multiple architectures for --decoder_arch. The default, QGN_resnet18, differs from the decoder architecture in the networks folder. Could you share the other ResNet18-related decoder architectures listed in options.py?
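
For context, such an option is presumably declared with argparse choices; here is a minimal sketch of what options.py might look like. Only QGN_resnet18 is confirmed by the issue; the other choice name is a hypothetical placeholder.

```python
import argparse

# Hedged sketch of a --decoder_arch option as options.py might declare it.
# "QGN_resnet50" is a hypothetical placeholder, not a confirmed architecture.
parser = argparse.ArgumentParser()
parser.add_argument("--decoder_arch", type=str, default="QGN_resnet18",
                    choices=["QGN_resnet18", "QGN_resnet50"])
opts = parser.parse_args([])  # empty argv -> defaults
print(opts.decoder_arch)
```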

AttributeError: module 'sparseconvnet.SCN' has no attribute 'Metadata_2'

I am trying to train the model myself. I have a conda environment with the following packages:

(QGN) xubuntu@08d6013cfe2d:/homelocal/NQGN$ conda list
# packages in environment at /home/xubuntu/.conda/envs/QGN:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             4.5                       1_gnu  
attrs                     21.2.0             pyhd3eb1b0_0  
backcall                  0.2.0              pyhd3eb1b0_0    anaconda
blas                      1.0                         mkl  
brotlipy                  0.7.0           py36h27cfd23_1003  
bzip2                     1.0.8                h7b6447c_0  
c-ares                    1.17.1               h27cfd23_0  
ca-certificates           2022.07.19           h06a4308_0    anaconda
certifi                   2021.5.30        py36h06a4308_0    anaconda
cffi                      1.14.6           py36h400218f_0  
charset-normalizer        2.0.4              pyhd3eb1b0_0  
cloudpickle               2.0.0              pyhd3eb1b0_0    anaconda
cryptography              3.4.7            py36hd23ed53_0  
curl                      7.78.0               h1ccaba5_0  
cycler                    0.11.0             pyhd3eb1b0_0    anaconda
cytoolz                   0.11.0           py36h7b6447c_0    anaconda
daiquiri                  3.0.0              pyhd3deb0d_0    conda-forge
dask-core                 2021.3.1           pyhd3eb1b0_0    anaconda
dataclasses               0.8                      pypi_0    pypi
decorator                 5.1.1              pyhd3eb1b0_0    anaconda
deprecated                1.2.12             pyhd3eb1b0_0  
expat                     2.4.1                h2531618_2  
freetype                  2.11.0               h70c0345_0  
gettext                   0.21.0               hf68c758_0  
git                       2.32.0          pl5262hc120c5b_1  
git-pull-request          6.0.1              pyh9f0ad1d_1    conda-forge
hdf5                      1.10.2               hba1933b_1  
icu                       58.2                 he6710b0_3  
idna                      3.2                pyhd3eb1b0_0  
imageio                   2.9.0              pyhd3eb1b0_0    anaconda
intel-openmp              2021.4.0          h06a4308_3561  
ipython                   7.16.1           py36h5ca1d4c_0    anaconda
ipython_genutils          0.2.0              pyhd3eb1b0_1    anaconda
jedi                      0.18.0           py36h06a4308_1    anaconda
jpeg                      9d                   h7f8727e_0  
kiwisolver                1.3.1            py36h2531618_0    anaconda
krb5                      1.19.2               hac12032_0  
lcms2                     2.12                 h3be6417_0  
ld_impl_linux-64          2.35.1               h7274673_9  
libcurl                   7.78.0               h0b77cf5_0  
libedit                   3.1.20210910         h7f8727e_0  
libev                     4.33                 h7f8727e_1  
libffi                    3.3                  he6710b0_2  
libgcc-ng                 9.3.0               h5101ec6_17  
libgfortran-ng            7.5.0               ha8ba4b0_17  
libgfortran4              7.5.0               ha8ba4b0_17  
libgomp                   9.3.0               h5101ec6_17  
libiconv                  1.15                 h63c8f33_5  
libnghttp2                1.41.0               hf8bcb03_2  
libpng                    1.6.37               hbc83047_0  
libsodium                 1.0.18               h7b6447c_0  
libssh2                   1.9.0                h1ba5d50_1  
libstdcxx-ng              9.3.0               hd4cf53a_17  
libtiff                   4.2.0                h85742a9_0  
libwebp-base              1.2.0                h27cfd23_0  
libxml2                   2.9.12               h03d6c58_0  
lz4-c                     1.9.3                h295c915_1  
matplotlib-base           3.3.4            py36h62a2d02_0    anaconda
mkl                       2020.2                      256  
mkl-service               2.3.0            py36he8ac12f_0  
mkl_fft                   1.3.0            py36h54f3939_0  
mkl_random                1.1.1            py36h0573a6f_0  
ncurses                   6.3                  heee7806_1  
networkx                  2.5                        py_0    anaconda
numpy                     1.19.2           py36h54aff64_0  
numpy-base                1.19.2           py36hfa32c7d_0  
olefile                   0.46                     py36_0  
opencv                    3.4.1            py36h6fd60c2_1  
opencv-python             4.5.4.58                 pypi_0    pypi
openssl                   1.1.1q               h7f8727e_0    anaconda
parso                     0.8.3              pyhd3eb1b0_0    anaconda
pcre2                     10.35                h14c3975_1  
perl                      5.26.2               h14c3975_0  
pexpect                   4.8.0              pyhd3eb1b0_3    anaconda
pickleshare               0.7.5           pyhd3eb1b0_1003    anaconda
pillow                    8.2.0            py36he98fc37_0  
pip                       21.2.2           py36h06a4308_0  
prompt-toolkit            3.0.20             pyhd3eb1b0_0    anaconda
protobuf                  3.19.4                   pypi_0    pypi
ptyprocess                0.7.0              pyhd3eb1b0_2    anaconda
pycparser                 2.20                       py_2  
pygithub                  1.55               pyhd3eb1b0_1  
pygments                  2.11.2             pyhd3eb1b0_0    anaconda
pyjwt                     2.1.0            py36h06a4308_0  
pynacl                    1.4.0            py36h7b6447c_1  
pyopenssl                 21.0.0             pyhd3eb1b0_1  
pyparsing                 3.0.4              pyhd3eb1b0_0    anaconda
pysocks                   1.7.1            py36h06a4308_0  
python                    3.6.13               h12debd9_1  
python-dateutil           2.8.2              pyhd3eb1b0_0    anaconda
python-json-logger        2.0.1                      py_0  
pywavelets                1.1.1            py36h7b6447c_2    anaconda
pyyaml                    5.4.1            py36h27cfd23_1    anaconda
readline                  8.1                  h27cfd23_0  
requests                  2.26.0             pyhd3eb1b0_0  
scikit-image              0.17.2           py36hdf5156a_0    anaconda
scipy                     1.2.1            py36h7c811a0_0  
setuptools                58.0.4           py36h06a4308_0  
six                       1.16.0             pyhd3eb1b0_0  
sparseconvnet             0.2                      pypi_0    pypi
sqlite                    3.36.0               hc218d9a_0  
tensorboardx              2.5.1                    pypi_0    pypi
tifffile                  2020.10.1        py36hdd07704_2    anaconda
tk                        8.6.11               h1ccaba5_0  
toolz                     0.11.2             pyhd3eb1b0_0    anaconda
torch                     1.10.0+cu113             pypi_0    pypi
torchaudio                0.10.0+cu113             pypi_0    pypi
torchvision               0.11.1+cu113             pypi_0    pypi
tornado                   6.1              py36h27cfd23_0    anaconda
traitlets                 4.3.3            py36h06a4308_0    anaconda
typing-extensions         3.10.0.2                 pypi_0    pypi
unrar                     0.4                      pypi_0    pypi
urllib3                   1.26.7             pyhd3eb1b0_0  
wcwidth                   0.2.5              pyhd3eb1b0_0    anaconda
wheel                     0.37.0             pyhd3eb1b0_1  
wrapt                     1.12.1           py36h7b6447c_1  
xz                        5.2.5                h7b6447c_0  
yaml                      0.2.5                h7b6447c_0    anaconda
zlib                      1.2.11               h7b6447c_3  
zstd                      1.4.9                haebb681_0

But when I try to train the model on the KITTI dataset, it gives the following error. I have installed SparseConvNet, but it seems to be an issue with some dependencies. Could you share which environment and packages are required to run training? Maybe a short README file would be helpful. I would also suggest uploading the SparseConvNet folder, as several changes to the SparseConvNet files may be needed to match the latest dependencies. Thanks.

(QGN) xubuntu@08d6013cfe2d:/homelocal/NQGN$ CUDA_VISIBLE_DEVICES=1,2,3 python train.py --png
Training model named:
   mdp
Models and tensorboard events files are saved to:
   /home/xubuntu/tmp
Training is using:
   cuda
/home/xubuntu/.conda/envs/QGN/lib/python3.6/site-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  "Argument interpolation should be of type InterpolationMode instead of int. "
Using split:
   eigen_zhou
There are 39810 training items and 4424 validation items

range(0, 20)
/home/xubuntu/.conda/envs/QGN/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Training
Traceback (most recent call last):
  File "train.py", line 20, in <module>
    trainer.train()
  File "/homelocal/NQGN/trainer.py", line 158, in train
    self.run_epoch()
  File "/homelocal/NQGN/trainer.py", line 176, in run_epoch
    outputs, losses = self.process_batch(inputs)
  File "/homelocal/NQGN/trainer.py", line 209, in process_batch
    quad, mask = self.models["depth"](features, None, crit=self.opt.crit)
  File "/home/xubuntu/.conda/envs/QGN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/homelocal/NQGN/networks/depth_decoder.py", line 180, in forward
    in4 = self.dense_to_sparse(in4)
  File "/home/xubuntu/.conda/envs/QGN/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/homelocal/QGN/SparseConvNet/sparseconvnet/denseToSparse.py", line 27, in forward
    output.metadata = Metadata(self.dimension)
  File "/homelocal/QGN/SparseConvNet/sparseconvnet/metadata.py", line 17, in Metadata
    return getattr(sparseconvnet.SCN, 'Metadata_%d'%dim)()
AttributeError: module 'sparseconvnet.SCN' has no attribute 'Metadata_2'
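
For what it's worth, the lookup that fails is `getattr(sparseconvnet.SCN, 'Metadata_%d' % dim)` in metadata.py, which raises exactly this error when the compiled extension does not export 2D bindings. The conda list above shows `sparseconvnet 0.2` installed from PyPI; that wheel may not match the SparseConvNet sources the repo expects, and building SparseConvNet from source is a common remedy, though this is only a guess. A minimal stand-in (FakeSCN is hypothetical, not the real extension module) reproducing the failure mode:

```python
# Hedged sketch: emulate metadata.py's getattr lookup against a stand-in
# module that, like a mismatched SCN build, lacks the 2D Metadata binding.
class FakeSCN:
    # Suppose the compiled extension only exported 3D and 4D bindings.
    Metadata_3 = object
    Metadata_4 = object

def metadata(dim):
    name = "Metadata_%d" % dim
    if not hasattr(FakeSCN, name):
        raise AttributeError(
            "module 'sparseconvnet.SCN' has no attribute '%s'" % name)
    return getattr(FakeSCN, name)()

meta3 = metadata(3)   # 3D lookup succeeds
try:
    metadata(2)       # 2D lookup reproduces the reported error
    err = None
except AttributeError as e:
    err = str(e)
print(err)
```

Running `print([n for n in dir(sparseconvnet.SCN) if n.startswith('Metadata')])` in your environment would show which dimensions your installed build actually exposes.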

Pretrained Model and test.py

Nice work. Do you have a pretrained model to run? Also, I do not see a test file for running the trained model. Could you give an update on these issues?

KeyError: ('cam_T_cam', 0, -1)

I have tried almost everything I could to get rid of this error, but it stays. Any help on how to address it?

(QGNResnet18) [ashah29@compute-0-3 NQGN]$ python train.py --png
Training model named:
   mdp
Models and tensorboard events files are saved to:
   /home/ashah29/tmp
Training is using:
   cuda
Using split:
   eigen_zhou
There are 39810 training items and 4424 validation items

/project/xfu/aamir/anaconda3/envs/QGNResnet18/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:122: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Training
Traceback (most recent call last):
  File "train.py", line 18, in <module>
    trainer.train()
  File "/project/xfu/aamir/NQGN/trainer.py", line 156, in train
    self.run_epoch()
  File "/project/xfu/aamir/NQGN/trainer.py", line 173, in run_epoch
    outputs, losses = self.process_batch(inputs)
  File "/project/xfu/aamir/NQGN/trainer.py", line 222, in process_batch
    self.generate_images_pred(inputs, outputs)
  File "/project/xfu/aamir/NQGN/trainer.py", line 371, in generate_images_pred
    T = outputs[("cam_T_cam", 0, frame_id)]
KeyError: ('cam_T_cam', 0, -1)
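
In Monodepth2-style trainers, the `("cam_T_cam", 0, frame_id)` entries are produced by the pose branch, so this KeyError usually means no pose was predicted for frame_id -1 (for example, the pose network was skipped, or the configured frame_ids do not include -1). That is an inference from the traceback, not something confirmed by the authors. A toy sketch of the access pattern, with placeholder values standing in for the real pose tensors:

```python
# Hedged sketch: 'outputs' mimics the dict trainer.py reads in
# generate_images_pred; the string value is a placeholder for a pose matrix.
outputs = {("cam_T_cam", 0, 1): "T_0_to_1"}  # pose exists for frame +1 only

missing = []
for frame_id in (-1, 1):
    key = ("cam_T_cam", 0, frame_id)
    if key not in outputs:
        # This is the case that raises KeyError: ('cam_T_cam', 0, -1)
        missing.append(frame_id)
    else:
        T = outputs[key]
print("missing pose outputs for frame_ids:", missing)
```

Checking which keys `process_batch` actually writes into `outputs` before `generate_images_pred` runs should reveal which frame_id the pose step skipped.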

Measuring the inference time of the model on a single image

I am able to train and reproduce the results successfully. However, I was wondering whether there is a way in your code to measure the inference time. Or, if I wanted to measure it using the following code (given as an example), where exactly should I insert it?

import torch  # needed for torch.profiler.schedule below
from torch.profiler import profile, record_function, ProfilerActivity

x, y = next(train_data)
x = x.chunk(config.world_size, 0)[global_rank].cuda(model_engine.local_rank)
y = y.chunk(config.world_size, 0)[global_rank].cuda(model_engine.local_rank)
with profile(
    activities= [ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule = torch.profiler.schedule(wait=1, warmup=10, active=4)
) as prof:
    for _ in range(15):
        with record_function("forward"):
            loss = model_engine(x, y)
        with record_function("backward"):
            model_engine.backward(loss)
            # torch.cuda.synchronize()
        with record_function("update"):
            model_engine.step()
        # dist.barrier()
        prof.step()
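
One simple alternative, independent of the profiler: time the forward pass alone, with a warm-up run and an average over repetitions (on GPU you would also call torch.cuda.synchronize() before reading the clock, so queued kernels are counted). The sketch below uses a hypothetical stand-in `infer` function in place of the actual depth model:

```python
import time

def infer(image):
    # Stand-in for the real forward pass, e.g.
    # self.models["depth"](features, None, crit=...); hypothetical.
    time.sleep(0.005)
    return "depth_map"

infer("img")  # warm-up run; excluded from timing
n = 5
t0 = time.perf_counter()
for _ in range(n):
    infer("img")
mean_s = (time.perf_counter() - t0) / n
print(f"mean inference time: {mean_s * 1000:.1f} ms")
```

In the trainer, the equivalent span to time would be just the depth-model call inside process_batch, not the whole training step, since backward and optimizer updates are not part of inference.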
