
cmu-ci-lab / inverseTransportNetworks

28 stars, 6 forks, 73.02 MB

Towards Learning-based Inverse Subsurface Scattering

Home Page: http://imaging.cs.cmu.edu/inverse_transport_networks/

License: GNU General Public License v3.0

Shell 0.16% Python 1.78% CMake 1.81% C 10.18% Makefile 0.02% C++ 82.75% XSLT 0.15% Batchfile 0.01% PowerShell 0.01% TeX 2.38% Objective-C 0.08% Objective-C++ 0.35% GLSL 0.32% CSS 0.01% MATLAB 0.01%

inverseTransportNetworks's People

Contributors

brucect2, igkiou


inverseTransportNetworks's Issues

Function "extract_patches" has been deprecated

Hi, it's me again.
Sorry for bothering you so many times.

from sklearn.feature_extraction.image import extract_patches

This function has been deprecated in the current scikit-learn package, and I couldn't find any documentation about it.

patches_luminance = extract_patches(luminance[:,:,0], patch_shape=(4, 4), extraction_step=(4, 4))

If my understanding is correct, the purpose of using this function here is to resize the image from 512x512 to 128x128.
Can I just use some linear interpolation technique to resize the image instead?
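For reference, a minimal replacement sketch for the removed helper, assuming NumPy >= 1.20 (for sliding_window_view). Note that the call does not resize the image: on a 512x512 input it returns a 128x128 grid of non-overlapping 4x4 patches, so linear interpolation would not be an equivalent substitute.

import numpy as np

def extract_patches(image, patch_shape=(4, 4), extraction_step=(4, 4)):
    # sliding_window_view (NumPy >= 1.20) yields every 4x4 window; slicing
    # by the extraction step keeps only the non-overlapping ones, matching
    # the removed sklearn helper for 2D inputs.
    windows = np.lib.stride_tricks.sliding_window_view(image, patch_shape)
    return windows[::extraction_step[0], ::extraction_step[1]]

luminance = np.random.rand(512, 512, 3)  # stand-in for the 512x512 input image
patches_luminance = extract_patches(luminance[:, :, 0], patch_shape=(4, 4), extraction_step=(4, 4))
# patches_luminance.shape == (128, 128, 4, 4): a 128x128 grid of 4x4 patches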

Why is normalization needed?

Hi, great work, thank you for your contribution!

I'm kind of new to computer graphics. While reading your code, I found that you multiply the reconstructed images by a constant (28.2486) when computing the loss against the ground-truth images.

diff = self.normalization * tmp_img - gt_img

I'm curious why this normalization is needed.

Thank you!
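For context, a minimal PyTorch sketch of the loss computation quoted above, assuming a plain L2 objective; the constant is taken from the repository code, but the reconstruction_loss wrapper and the L2 choice are illustrative, not the repository's exact implementation.

import torch

# Illustrative only: an L2 loss with the fixed scale constant from the code
# above. The value 28.2486 is assumed to bring the rendered output to the
# same brightness scale as the ground-truth images.
NORMALIZATION = 28.2486

def reconstruction_loss(tmp_img: torch.Tensor, gt_img: torch.Tensor) -> torch.Tensor:
    diff = NORMALIZATION * tmp_img - gt_img
    return (diff ** 2).mean()

tmp_img = torch.rand(3, 512, 512)  # stand-in reconstructed image
gt_img = torch.rand(3, 512, 512)   # stand-in ground-truth image
loss = reconstruction_loss(tmp_img, gt_img)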

Segfault in example scene

I used the following command to test the renderer.

mitsuba_AD -L trace scenes/cube_sunsky.xml -Dmeshmodel=cube -DsigmaT=100 -Dalbedo=0.8 -Dg=0.2 -DnumSamples=4096 -Dx=0.433 -Dy=0.866 -Dz=0.25 -o cube_e30_d100_a0.8_g0.2_q4096.exr

I got the following segfault. Can you kindly let me know if I am missing something in my setup?

Reading symbols from mitsuba_AD...
(gdb) r
Starting program: /home/arpit/projects/inverseTransportNetworks/renderer/dist_debug/mitsuba_AD -L trace scenes/cube_sunsky.xml -Dmeshmodel=cube -DsigmaT=100 -Dalbedo=0.8 -Dg=0.2 -DnumSamples=4096 -Dx=0.433 -Dy=0.866 -Dz=0.25 -o cube_e30_d100_a0.8_g0.2_q4096.exr
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff37d5700 (LWP 581482)]
[New Thread 0x7ffff2fd4700 (LWP 581483)]
[New Thread 0x7ffff27d3700 (LWP 581484)]
[New Thread 0x7ffff1fd2700 (LWP 581485)]
[New Thread 0x7ffff17d1700 (LWP 581486)]
[New Thread 0x7ffff0fd0700 (LWP 581487)]
[New Thread 0x7ffff07cf700 (LWP 581488)]
2021-06-07 20:58:49 INFO  main [mitsuba.cpp:275] Mitsuba version 0.5.0 (Linux, 64 bit), Copyright (c) 2014 Wenzel Jakob
2021-06-07 20:58:49 DEBUG main [Thread] Spawning thread "wrk0"
[New Thread 0x7fffeffce700 (LWP 581489)]
2021-06-07 20:58:49 DEBUG main [Thread] Spawning thread "wrk1"
[New Thread 0x7fffef7cd700 (LWP 581490)]
2021-06-07 20:58:49 DEBUG main [Thread] Spawning thread "wrk2"
[New Thread 0x7fffeefcc700 (LWP 581491)]
2021-06-07 20:58:49 DEBUG main [Thread] Spawning thread "wrk3"
[New Thread 0x7fffee7cb700 (LWP 581492)]
2021-06-07 20:58:49 INFO  main [mitsuba.cpp:379] Parsing scene description from "scenes/cube_sunsky.xml" ..
2021-06-07 20:58:49 INFO  main [PluginManager] Loading plugin "plugins/volpath.so" ..
2021-06-07 20:58:49 INFO  main [PluginManager] Loading plugin "plugins/sunsky.so" ..
2021-06-07 20:58:49 WARN  main [properties.cpp:77] Property "sunDirection" was specified multiple times!
2021-06-07 20:58:49 INFO  main [PluginManager] Loading plugin "plugins/sky.so" ..
2021-06-07 20:58:49 DEBUG main [SunSkyEmitter] Rasterizing sun & skylight emitter to an 512x256 environment map ..
2021-06-07 20:58:50 DEBUG main [SunSkyEmitter] Done (took 868 ms)
2021-06-07 20:58:50 INFO  main [PluginManager] Loading plugin "plugins/envmap.so" ..
2021-06-07 20:58:51 INFO  main [PluginManager] Loading plugin "plugins/lanczos.so" ..
2021-06-07 20:58:51 DEBUG main [MIPMap] Created 341.4 KiB of MIP maps in 10 ms
2021-06-07 20:58:51 INFO  main [EnvironmentMap] Precomputing data structures for environment map sampling (515.0 KiB)
2021-06-07 20:58:51 INFO  main [EnvironmentMap] Done (took 3 ms)
2021-06-07 20:58:51 INFO  main [PluginManager] Loading plugin "plugins/hg.so" ..

Thread 1 "mitsuba_AD" received signal SIGSEGV, Segmentation fault.
0x00007fffede2012a in std::vector<stan::math::var, std::allocator<stan::math::var> >::size (this=0x0) at /usr/include/c++/10/bits/stl_vector.h:919
919	      { return size_type(this->_M_impl._M_finish - this->_M_impl._M_start); }
(gdb) bt
#1  0x00007fffede20743 in std::vector<stan::math::var, std::allocator<stan::math::var> >::operator= (this=0x5555556860d0, __x=<error reading variable: Cannot access memory at address 0x8>)
    at /usr/include/c++/10/bits/vector.tcc:223
#2  0x00007fffede1e381 in FloatAD_shared::operator= (this=0x5555556860d0) at include/mitsuba/core/FloatADshared.h:11
#3  0x00007fffede1e95c in mitsuba::TSpectrum_shared<FloatAD_shared, 1>::operator= (this=0x5555556860d0) at include/mitsuba/core/spectrumshared.h:39
#4  0x00007fffede1e99d in mitsuba::Spectrum_shared::operator= (this=0x5555556860d0) at include/mitsuba/core/spectrumshared.h:375
#5  0x00007fffede1ea8e in mitsuba::HGPhaseFunction::HGPhaseFunction (this=0x555555685f00, props=...) at src/phase/hg.cpp:61
#6  0x00007fffede1ba01 in mitsuba::CreateInstance (props=...) at src/phase/hg.cpp:193
#7  0x00007ffff75df6f3 in mitsuba::Plugin::createInstance (this=0x5555558301f0, props=...) at src/libcore/plugin.cpp:132
#8  0x00007ffff75dfb64 in mitsuba::PluginManager::createObject (this=0x555555682610, classType=0x5555556825a0, props=...) at src/libcore/plugin.cpp:189
#9  0x00007ffff721626f in mitsuba::SceneHandler::endElement (this=0x5555557251e0, xmlName=0x555555784c10 u"phase") at src/librender/scenehandler.cpp:879
#10 0x00007ffff7e1c77d in xercesc_3_2::SAXParser::endElement(xercesc_3_2::XMLElementDecl const&, unsigned int, bool, char16_t const*) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#11 0x00007ffff7dc4599 in xercesc_3_2::IGXMLScanner::scanEndTag(bool&) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#12 0x00007ffff7dc8bb9 in xercesc_3_2::IGXMLScanner::scanContent() () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#13 0x00007ffff7dc8d40 in xercesc_3_2::IGXMLScanner::scanDocument(xercesc_3_2::InputSource const&) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#14 0x00007ffff7df27a5 in xercesc_3_2::XMLScanner::scanDocument(char16_t const*) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#15 0x00007ffff7df2b22 in xercesc_3_2::XMLScanner::scanDocument(char const*) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#16 0x00007ffff7e1d5a2 in xercesc_3_2::SAXParser::parse(char const*) () at /usr/lib/x86_64-linux-gnu/libxerces-c-3.2.so
#17 0x0000555555575486 in mitsuba_app (argc=14, argv=0x7fffffffd8e8) at src/mitsuba/mitsuba.cpp:380
#18 0x00005555555765d5 in mts_main (argc=14, argv=0x7fffffffd8e8) at src/mitsuba/mitsuba.cpp:449
#19 0x0000555555576640 in main (argc=14, argv=0x7fffffffd8e8) at src/mitsuba/mitsuba.cpp:474

Explanation of the Encoder Part

Hi, thank you for the code!
I wanted to ask if you could explain how the encoder part works. As I understand it, MatNet is the encoder that predicts the material parameters (like sigmaT, sigmaS, and g), and then Mitsuba does the rendering. Is that right? Also, which script should I use for training: train.py or the inverseRendering one, and what is the difference between them?

What is this mask?

Hi! It's me again.

I found that you multiply the luminance image by a "mask" during evaluation.

luminance = 28.2486 * luminance * mask

Is this a binary mask that separates the foreground from the background? I cannot find it in your original paper.

Also, I noticed that during training, this kind of preprocessing does not seem to be applied to the luminance image:
https://github.com/cmu-ci-lab/inverseTransportNetworks/blob/master/learning/train.py
Is there a special reason that the "mask" is only needed during evaluation?

I would be grateful for your reply!
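For illustration, a minimal NumPy sketch of the evaluation-time masking quoted above, assuming (hypothetically) that mask is a binary foreground/background mask with the same spatial size as the luminance image; the rectangular foreground region below is a made-up placeholder.

import numpy as np

# Hypothetical illustration: "mask" is assumed to be a binary (0/1)
# foreground mask; the rectangle below is a placeholder region.
luminance = np.random.rand(512, 512).astype(np.float32)
mask = np.zeros((512, 512), dtype=np.float32)
mask[128:384, 128:384] = 1.0
luminance = 28.2486 * luminance * mask  # background pixels are zeroed out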

Pre-trained Models are deleted

The Dropbox link provided in the README does not work; it says "Files have been deleted". Kindly re-upload the pre-trained models.

Thanks.
