princeton-computational-imaging / nsf

License: MIT License

nsf's Introduction

Neural Spline Fields for Burst Image Fusion and Layer Separation


This is the official code repository for the work: Neural Spline Fields for Burst Image Fusion and Layer Separation. If you use parts of this work, or otherwise take inspiration from it, please consider citing our paper:

@article{chugunov2023neural,
  title={Neural Spline Fields for Burst Image Fusion and Layer Separation},
  author={Chugunov, Ilya and Shustin, David and Yan, Ruyu and Lei, Chenyang and Heide, Felix},
  journal={arXiv preprint arXiv:2312.14235},
  year={2023}
}

Requirements:

  • Code was written in PyTorch 2.0 on an Ubuntu 22.04 machine.
  • Condensed package requirements are in requirements.txt. Note that this file pins the exact package versions at the time of publishing; the code will most likely work with newer versions of these libraries, but you will need to watch out for changes in class/function calls.
  • The non-standard packages you may need are pytorch_lightning, commentjson, rawpy, and tinycudann. See NVlabs/tiny-cuda-nn for installation instructions. Depending on your system, you might just be able to run pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch, or you might have to build it from source with CMake.
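
As a minimal setup sketch (assuming a CUDA-capable machine with a matching PyTorch build; the from-source fallback follows the instructions in the tiny-cuda-nn README):

pip install -r requirements.txt
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch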

Project Structure:

NSF
  ├── checkpoints  
  │   └── // folder for network checkpoints
  ├── config
  │   └── // network and encoding configurations for different sizes of MLPs
  ├── data  
  │   └── // folder for long-burst data
  ├── lightning_logs  
  │   └── // folder for tensorboard logs
  ├── outputs  
  │   └── // folder for model outputs (e.g., final reconstructions) 
  ├── scripts  
  │   └── // training scripts for different tasks (e.g., occlusion/reflection/shadow separation)
  ├── utils  
  │   └── utils.py  // network helper functions (e.g., RAW demosaicing, spline interpolation)
  ├── LICENSE  // legal stuff
  ├── README.md  // <- you are here
  ├── requirements.txt  // frozen package requirements
  ├── train.py  // dataloader, network, visualization, and trainer code
  └── tutorial.ipynb // interactive tutorial for training the model
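
Since the spline interpolation helper in utils.py is central to how neural spline fields are evaluated over a burst, here is a minimal piecewise-linear sketch of the idea. This is an illustration only, with hypothetical names and values; the repository's actual interpolation scheme lives in utils.py and may differ:

import numpy as np

# Interpolate between N control points ("knots") at a query time t in [0, 1].
# A piecewise-linear stand-in for the spline interpolation used in the paper.
def interpolate(control_points, t):
    n = len(control_points) - 1          # number of segments
    x = t * n                            # position in knot-index space
    i = min(int(np.floor(x)), n - 1)     # segment index
    frac = x - i                         # fractional offset within the segment
    return (1 - frac) * control_points[i] + frac * control_points[i + 1]

knots = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])  # hypothetical 2D offsets
print(interpolate(knots, 0.25))          # -> [0.5, 0.25], a quarter through the burst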

Getting Started:

We highly recommend you start by going through tutorial.ipynb, either on your own machine or with this Google Colab link.

TLDR: models can be trained with:

bash scripts/{application}.sh --bundle_path {path_to_data} --name {checkpoint_name}

And reconstruction outputs will be saved to outputs/{checkpoint_name}-final.

For a full list of training arguments, we recommend looking through the argument parser section at the bottom of train.py.
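
For example, a hypothetical invocation (the scene folder name is made up; it assumes data has been extracted into data/ as described in the Data section below):

bash scripts/occlusion.sh --bundle_path data/occlusion-main/plant --name plant-test

This would write the final reconstructions to outputs/plant-test-final.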

Data:

You can download the long-burst data used in the paper (and extra bonus scenes) via the following links:

  1. Main occlusion scenes: occlusion-main.zip (use scripts/occlusion.sh to train)

  2. Supplementary occlusion scenes: occlusion-supp.zip (use scripts/occlusion.sh to train)

  3. In-the-wild occlusion scenes: occlusion-wild.zip (use scripts/occlusion-wild.sh to train)

  4. Main reflection scenes: reflection-main.zip (use scripts/reflection.sh to train)

  5. Supplementary reflection scenes: reflection-supp.zip (use scripts/reflection.sh to train)

  6. In-the-wild reflection scenes: reflection-wild.zip (use scripts/reflection-wild.sh to train)

  7. Extra scenes: extras.zip (use scripts/dehaze.sh, segmentation.sh, or shadow.sh to train)

  8. Synthetic validation scenes: synthetic-validation.zip (use scripts/reflection.sh or occlusion.sh with the --rgb flag)

We recommend you download and extract these into the data/ folder.
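
For example (archive names from the links above; the extracted folder layout is an assumption):

mkdir -p data
unzip occlusion-main.zip -d data/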

App:

Want to record your own long-burst data? Check out our Android RAW capture app Pani!

Good luck have fun, Ilya

nsf's People

Contributors: ilya-muromets

nsf's Issues

Pani can't be installed

I am using an Honor phone; when installing the app, it shows "解析错误" ("parse error"). The same issue occurs on other phones. What should I do? Thank you!

Inference on a video

Looking forward to the code release. Your work sounds very interesting!

I'm wondering if it could be adapted to work with videos / image sequences instead of burst images (given they have enough parallax), so that the transmission and occlusion are computed for each frame of the sequence.
Extracting or recreating transmission and occlusion is a tedious manual task in traditional compositing, so this could be a game changer for those tasks. Example use cases are screen inserts, label changes on products, removal of signs, etc.

I assume temporal consistency could be an issue. I'm also thinking about moving occluders (e.g., a person moving and visible in the reflection / occlusion).

Thanks in advance!
Cheers,
Claus

Updates on the code release?

Hi,

I'd be interested in having a look at the code... Is there an updated timeframe for its release?

Thanks :)

Testing using section 3 in the Colab tutorial

I tried running section 3 on this image:

[image: 2017_Train_00010]

I duplicated it 5 times to match the number of examples in the tutorial and trained for 50 epochs. This is the final output:

[image: NFS-output]

All the pixels equal 1 in the transmission, reference, and obstruction outputs:

[image]
Cannot install tinycudann unfortunately (W11)

Thanks for the amazing work.
Unfortunately, I cannot install tinycudann due to the error below.
If someone on W11 comes across the same error and/or has a solution, please let me know.

CUDA 11.8 is installed; Ninja is the latest version.
/usr/bin/link: extra operand '/LTCG'
Try '/usr/bin/link --help' for more information.
error: command 'C:\msys64\usr\bin\link.exe' failed with exit code 1

Thanks in advance.

Synthetic data

Excellent work!
Is it possible to also release the synthetic data used in the paper for validation?

Why are ray directions normalized by z?

Hi,

In the code I noticed the following line:

ray_directions = ray_directions / ray_directions[:,2:3] # normalize by z

This is also mentioned in the paper, but I was wondering why that's the case.

Thanks :)
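
For what it's worth, here is a minimal numpy sketch of the usual geometric motivation (an illustration with hypothetical values, not the repository's code): after dividing by z, each ray's z-component is exactly 1, so scaling a ray by a scalar depth d yields a 3D point lying exactly on the plane z = d. That makes converting a predicted depth into 3D points a simple multiplication.

import numpy as np

# Two hypothetical camera rays, normalized to unit length.
ray_directions = np.array([[0.10, -0.05, 0.99],
                           [0.30,  0.20, 0.93]])
ray_directions = ray_directions / np.linalg.norm(ray_directions, axis=1, keepdims=True)

# Normalizing by z makes each ray's z-component exactly 1 ...
ray_directions = ray_directions / ray_directions[:, 2:3]  # normalize by z

# ... so scaling a ray by a depth d gives a point whose z-coordinate is d:
depth = 2.5
point = depth * ray_directions[0]
assert np.isclose(point[2], depth)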
