peterzs / volumetric_opaque_solids

This project is forked from cmu-ci-lab/volumetric_opaque_solids.


Proof-of-concept surface reconstruction experiments to explore the design space for volumetric opaque solids.

Home Page: https://imaging.cs.cmu.edu/volumetric_opaque_solids

License: MIT License



A theory of volumetric representations for opaque solids

We explore the design space of attenuation coefficients for volumetric representations of opaque solids and demonstrate improvements from our theory in proof-of-concept surface reconstruction experiments. Primarily, we compare the parameters controlling the attenuation coefficient, such as the implicit function distribution and the normal distribution. When the normal distribution admits one, we allow an anisotropy parameter to be either annealed on a fixed schedule or learned spatially. For the implementation of these distributions, see the attenuation coefficient model.
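
As a rough illustration of what the implicit_distribution option controls, here is a minimal sketch (not the repository's attenuation coefficient model) that maps SDF values through the CDF of a zero-mean Laplace, logistic, or Gaussian distribution; the inv_s sharpness parameter and the function name are assumptions.

import torch

def implicit_cdf(sdf, inv_s, kind='gaussian'):
    # CDF of a zero-mean distribution evaluated at -inv_s * sdf, so points deep
    # inside the solid (sdf < 0) map towards 1 and points far outside map to 0.
    x = -inv_s * sdf
    if kind == 'gaussian':
        return 0.5 * (1.0 + torch.erf(x / 2.0 ** 0.5))
    if kind == 'laplace':
        return torch.where(x < 0, 0.5 * torch.exp(x), 1.0 - 0.5 * torch.exp(-x))
    if kind == 'logistic':
        return torch.sigmoid(x)
    raise ValueError(f'unknown implicit distribution: {kind}')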

Configuration Options

The main parameters for our experiments are defined below. Across methods, we mainly vary the implicit distribution and the normal distribution. Refer to an example configuration for the full set of training options.

point_sampler {
  n_sdf_pts = 1024         # num evaluation points to find 0-level set intersection
  n_fg_samples = 21        # num samples prior to intersection interval
  n_surf_samples = 22      # num samples inside intersection interval
  n_bg_samples = 21        # num samples behind intersection interval
}
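
As a rough sketch of how these counts partition a ray around a detected zero-crossing interval (the variable names near, far, t0, and t1 are assumptions, not the repository's sampler):

import numpy as np

def partition_ray(near, far, t0, t1, n_fg=21, n_surf=22, n_bg=21):
    # Place n_fg samples before the intersection interval [t0, t1],
    # n_surf samples inside it, and n_bg samples behind it.
    fg = np.linspace(near, t0, n_fg, endpoint=False)
    surf = np.linspace(t0, t1, n_surf, endpoint=False)
    bg = np.linspace(t1, far, n_bg)
    return np.concatenate([fg, surf, bg])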

attenuation_coefficient {
  implicit_distribution = gaussian     # e.g. Laplace, logistic, or Gaussian
  normal_distribution = linearmixture  # e.g. delta, uniform, or linear mixture
}
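
These options live in HOCON configuration files and can be read with pyhocon (a listed dependency); the file path below is a placeholder.

from pyhocon import ConfigFactory

conf = ConfigFactory.parse_file('./confs/example.conf')  # placeholder path
n_sdf_pts = conf.get_int('point_sampler.n_sdf_pts')
implicit_dist = conf.get_string('attenuation_coefficient.implicit_distribution')
normal_dist = conf.get_string('attenuation_coefficient.normal_distribution')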

For normal distributions that take an anisotropy or mixture parameter, we provide additional control over whether the parameter is annealed or learned spatially.

train {
  anneal_end = 0          # anisotropy decays to 0 by anneal_end iters
}

# overrides annealed anisotropy
anisotropy_network {
  d_feature = 256         # dim of feature from SDF net
}
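
For reference, a fixed annealing schedule could look like the sketch below; the exact schedule and the treatment of anneal_end = 0 are assumptions, not the repository's implementation.

def annealed_anisotropy(iter_step, anneal_end):
    # Linearly decay the anisotropy parameter from 1 to 0 over anneal_end
    # iterations; this sketch treats anneal_end <= 0 as "no annealing".
    if anneal_end <= 0:
        return 0.0
    return max(0.0, 1.0 - iter_step / anneal_end)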

Finally, the background model depends on the dataset: the DTU dataset uses a black background, the NeRF dataset uses a white background, and BlendedMVS learns a background color using NeRF++.

# controls constant background color (black or white)
train {
  use_white_bkgd = False
}

# if n_outside > 0, the learned background overrides the constant background color
sampler {
  n_outside = 32          # num samples used for background net
}
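
A sketch of how these options typically combine in NeRF-style compositing; the function and variable names are assumptions.

def composite_background(rgb, acc, use_white_bkgd, learned_bg=None):
    # rgb: accumulated foreground color per ray; acc: accumulated opacity.
    if learned_bg is not None:             # n_outside > 0: learned background
        return rgb + (1.0 - acc) * learned_bg
    constant = 1.0 if use_white_bkgd else 0.0
    return rgb + (1.0 - acc) * constant    # constant black or white background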

Datasets

The datasets used in our experiments (DTU, NeRF, and BlendedMVS) can be readily ingested by our training pipeline. (See below for the data convention.)

Data Convention

The data is organized as follows:

<case_name>
|-- cameras_xxx.npz    # camera parameters
|-- image
    |-- 000.png        # target image for each view
    |-- 001.png
    ...
|-- mask
    |-- 000.png        # target mask for each view (for the unmasked setting, set all pixels to 255)
    |-- 001.png
    ...

Here, cameras_xxx.npz follows the data format of IDR, where world_mat_xx denotes the world-to-image projection matrix and scale_mat_xx denotes the normalization matrix.
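
A sketch of reading the camera parameters for view 0 under this convention; the file name keeps the cameras_xxx.npz placeholder, and decomposing the projection matrix with OpenCV is one common choice, not necessarily what this codebase does.

import numpy as np
import cv2

cams = np.load('<case_name>/cameras_xxx.npz')   # placeholder path
world_mat = cams['world_mat_0']                 # world-to-image projection
scale_mat = cams['scale_mat_0']                 # normalization matrix
P = (world_mat @ scale_mat)[:3, :4]
K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
K = K / K[2, 2]                                 # normalized intrinsics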

Usage

Setup

cd volumetric_opaque_solids
pip install -r requirements.txt

Dependencies:

trimesh==3.9.8
numpy==1.26.2
pyhocon==0.3.57
opencv_python==4.8.1.78
tqdm==4.50.2
torch==1.13.0
scipy==1.11.3
PyMCubes==0.1.2
tensorboard

Running

  • Training
python exp_runner.py \
  --conf ./confs/<config name>.conf \
  --case <case_name> \
  --mode train
  • Extract surface from trained model
python exp_runner.py \
  --conf ./confs/<config name>.conf \
  --case <case_name> \
  --mode validate_mesh \
  --is_continue # use latest checkpoint

The corresponding mesh can be found in exp/<case_name>/<exp_name>/meshes/<iter_steps>.ply.
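
The extracted mesh can then be inspected with trimesh (a listed dependency); the placeholders in the path are kept from above.

import trimesh

mesh = trimesh.load('exp/<case_name>/<exp_name>/meshes/<iter_steps>.ply')  # placeholder path
print(mesh.vertices.shape, mesh.faces.shape)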

  • Render Image
python exp_runner.py \
  --conf ./confs/<config name>.conf \
  --case <case_name> \
  --mode render \
  --image_idx 0 \
  --is_continue   # --image_idx selects the image index; --is_continue uses the latest checkpoint

Acknowledgement

This codebase is a simplified adaptation of NeuS; the original codebase makes use of code snippets borrowed from IDR and NeRF-pytorch. Thanks to all of these great projects.
