This is the code for our paper "GRAINS: Generative Recursive Autoencoders for INdoor Scenes".
Project website here.
The code has been tested with, and we recommend, the following software/tools:
(a) Python 2.7 and PyTorch 0.3.1, OR
(b) Python 3.6/3.7 and PyTorch >= 1.0, and
(c) MATLAB (>= 2017a)
The best way to install the latest Python and PyTorch versions is via Anaconda.
- Download the Anaconda version for your OS from here.
- Make sure your conda is set up properly. This is how you do it:
export PATH="............./anaconda3/bin:$PATH"
- The following command at the terminal prompt should not throw any error
conda
- Create a virtual environment called "GRAINS".
conda create --name GRAINS
- Activate your virtual env:
source activate GRAINS
You are now set up with the working environment.
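As a quick sanity check of the tested pairings listed above, here is a minimal sketch (the `compatible` helper is ours for illustration, not part of the repo):

```python
import sys

def compatible(py_version, torch_version):
    """Return True if (Python, PyTorch) matches a tested pairing:
    (a) Python 2.7 with PyTorch 0.3.1, or
    (b) Python 3.6/3.7 with PyTorch >= 1.0."""
    if py_version[:2] == (2, 7):
        return torch_version[:3] == (0, 3, 1)
    if py_version[:2] in ((3, 6), (3, 7)):
        return torch_version[0] >= 1
    return False

if __name__ == "__main__":
    # Check the interpreter you are running right now against option (b).
    print(compatible(sys.version_info[:2], (1, 0, 0)))
```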
Make a local copy of this repository using
git clone https://github.com/ManyiLi12345/GRAINS.git
We use indoor scenes represented as hierarchies for training. To create the training data, first download the original SUNCG dataset and extract the house, object, and room_wcf folders under the path ./0-data/SUNCG/. The room_wcf data is here.
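Before running the data-generation scripts, it may help to confirm the dataset was extracted into the expected layout. A small stdlib-only sketch (the function name, and the assumption that exactly these three folders are required, are ours):

```python
import os

REQUIRED = ("house", "object", "room_wcf")

def suncg_ready(root="./0-data/SUNCG"):
    """Return the list of required SUNCG folders missing under root.

    An empty list means the layout described above looks complete."""
    return [d for d in REQUIRED if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = suncg_ready()
    if missing:
        print("missing folders:", missing)
    else:
        print("SUNCG layout looks complete")
```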
Run ./1-genSuncgDataset/main_gendata.m. The output is saved in ./0-data/1-graphs.
Run ./2-genHierarchies/main_buildhierarchies.m. The output is saved in ./0-data/2-hierarchies.
Run ./3-datapreparation/main_genSUNCGdataset.m. The output is saved in ./0-data/3-offsetrep.
Run ./4-genPytorchData/main_genprelpos_pydata.m. The output is saved in ./0-data/4-pydata.
Run ./4-training/train.py. It loads the training set from ./0-data/4-pydata. The trained model will be saved in ./0-data/models/. You can download the pre-trained model here.
Run ./4-training/test.py. It loads the trained model in ./0-data/models/ and randomly generates 1000 scenes. The output is a set of scenes represented as hierarchies, saved as ./0-data/4-pydata/generated_scenes.mat.
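The generated hierarchies can also be inspected from Python before the MATLAB reconstruction step. A sketch using scipy.io (the variable names inside the .mat file are not documented here, so this simply lists whatever keys are present):

```python
import scipy.io

def list_mat_contents(path="./0-data/4-pydata/generated_scenes.mat"):
    """Load a MATLAB .mat file and report its variables and their shapes."""
    data = scipy.io.loadmat(path)
    # Skip the bookkeeping entries __header__, __version__, __globals__.
    return {k: getattr(v, "shape", None) for k, v in data.items()
            if not k.startswith("__")}

if __name__ == "__main__":
    for name, shape in list_mat_contents().items():
        print(name, shape)
```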
Run ./5-reconVAE/main_recon.m. It reconstructs the object OBBs in each scene from the generated hierarchy. The top-view images are saved in ./0-data/5-generated_scenes/images/.
The training part of our code is built upon GRASS.
Please cite the paper if you use this code for research:
@article{li2019grains,
title={Grains: Generative recursive autoencoders for indoor scenes},
author={Li, Manyi and Patil, Akshay Gadi and Xu, Kai and Chaudhuri, Siddhartha and Khan, Owais and Shamir, Ariel and Tu, Changhe and Chen, Baoquan and Cohen-Or, Daniel and Zhang, Hao},
journal={ACM Transactions on Graphics (TOG)},
volume={38},
number={2},
pages={12},
year={2019},
publisher={ACM}
}