Zheng Shi, Ethan Tseng, Mario Bijelic, Werner Ritter, Felix Heide
If you find our work useful in your research, please cite:
@article{shi2021zeroscatter,
  title={ZeroScatter: Domain Transfer for Long Distance Imaging and Vision through Scattering Media},
  author={Shi, Zheng and Tseng, Ethan and Bijelic, Mario and Ritter, Werner and Heide, Felix},
  journal={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
This code was developed using TensorFlow 2.2.0 on a Linux machine. The full frozen environment can be found in 'environment.yml'; note that some of these libraries are not necessary to run this code.
This code requires data in TFRecord format. If you are not familiar with it, you may find this TensorFlow tutorial helpful. Please refer to the paper and the data-loading functions in 'utils.py' for the detailed requirements of each training step.
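As a minimal sketch of the TFRecord round trip, the snippet below writes one record and parses it back with `tf.data`. The feature name (`image`) and schema here are hypothetical placeholders; the actual per-step schemas are defined by the data-loading functions in 'utils.py'.

```python
import tensorflow as tf

def write_example(path, image_bytes):
    # Serialize one record: a single bytes feature holding encoded image data.
    feature = {
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])
        ),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    with tf.io.TFRecordWriter(path) as writer:
        writer.write(example.SerializeToString())

def parse_example(serialized):
    # Declare the expected features, then parse a single serialized record.
    schema = {"image": tf.io.FixedLenFeature([], tf.string)}
    return tf.io.parse_single_example(serialized, schema)

# Usage: write one record, then read it back through a tf.data pipeline.
write_example("/tmp/demo.tfrecord", b"\x00\x01\x02")
dataset = tf.data.TFRecordDataset("/tmp/demo.tfrecord").map(parse_example)
for record in dataset:
    print(record["image"].numpy())
```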
To perform inference on real-world captures, first download the pre-trained model from here into the 'ckpts/' folder, then run the 'inference.ipynb' notebook in Jupyter Notebook. The notebook loads the checkpoint, processes the captured sensor measurements located in 'tfrecord_example/', and displays the reconstructed images inline.
We include three bash scripts for training:
- train_0.sh: performs training of the RGB2Gated model, which is later used for indirect supervision based on gated captures.
- train_1.sh: performs training of the ZeroScatter translation block.
- train_2.sh: performs training of the ZeroScatter consistency block.
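Since train_0.sh produces the RGB2Gated model that the later stages rely on for supervision, the three scripts are meant to run in order. The script names are from this repo, but the wrapper loop itself is only a sketch of one way to chain them:

```shell
#!/usr/bin/env bash
# Sketch: run the three training stages in order, stopping on the first failure.
for stage in train_0.sh train_1.sh train_2.sh; do
  if [ -f "$stage" ]; then
    echo "running $stage"
    bash "$stage" || exit 1
  else
    echo "skipping $stage (not found)"
  fi
done
```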
Our code is licensed under BSL-1. By downloading the software, you agree to the terms of this license. The data in the 'tfrecord_example/' folder is based on the DENSE Dataset.
If anything is unclear, please feel free to reach out to me at zhengshi[at]princeton[dot]edu.