Comments (3)
Hi,
In terms of using a depth map to generate novel views, I followed the idea from https://arxiv.org/pdf/2004.01294.pdf: warp the contents of static regions from neighboring frames, and the contents of the dynamic region from the reference view, into the novel view via point-cloud splatting, using the code from https://github.com/sniklaus/3d-ken-burns.
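To make the splatting idea concrete, here is a minimal forward-warp sketch (names, signatures, and the hard z-buffer are my own simplifications; the actual repos use soft, differentiable splatting). It assumes a depth map, 3x3 intrinsics `K`, and a 4x4 relative pose `T` from the source view to the novel view:

```python
import numpy as np

def splat_warp(image, depth, K, T):
    """Forward-warp `image` into a novel view: unproject each pixel to a 3D
    point, transform it by T, reproject, and splat with a hard z-buffer.
    (Toy sketch; real implementations use soft/differentiable splatting.)"""
    H, W = depth.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Unproject pixels to camera-space 3D points.
    pts = np.linalg.inv(K) @ np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    pts = pts * depth.reshape(1, -1)
    pts = T[:3, :3] @ pts + T[:3, 3:4]          # move into the novel view
    uvw = K @ pts                                # reproject
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    z = uvw[2]
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    src = image.reshape(-1, image.shape[-1])
    for i in np.flatnonzero(valid):              # nearest point wins per pixel
        if z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = src[i]
    return out
```

Static and dynamic regions can each be warped this way from their respective source frames and then composited in the novel view.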
Another way to achieve this, which usually gives me better rendering results, is to build a textured mesh from the point cloud and input pixels, then rasterize it to render novel views. I used the implementation from https://github.com/vt-vl-lab/3d-photo-inpainting as the 3D-photo baseline. This approach usually produces much better results from RGB-D images, especially around disocclusions.
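For the mesh route, a rough sketch of triangulating a depth map into a grid mesh (my own simplified version; the actual repo's mesh construction is considerably more sophisticated). Each pixel becomes a vertex, each pixel quad becomes two triangles, and quads spanning a large depth jump are dropped so disoccluded regions stay open rather than being covered by stretched "rubber-sheet" triangles:

```python
import numpy as np

def depth_to_mesh(depth, K, max_edge=0.1):
    """Build a grid mesh from a depth map: vertices are unprojected pixels,
    faces are two triangles per pixel quad, skipping depth discontinuities."""
    H, W = depth.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    verts = np.linalg.inv(K) @ np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    verts = (verts * depth.reshape(1, -1)).T     # (H*W, 3) camera-space points
    idx = lambda y, x: y * W + x                 # vertex index of pixel (y, x)
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            quad = depth[y:y + 2, x:x + 2]
            if quad.max() - quad.min() > max_edge:
                continue                         # skip depth discontinuities
            faces.append([idx(y, x), idx(y + 1, x), idx(y, x + 1)])
            faces.append([idx(y + 1, x), idx(y + 1, x + 1), idx(y, x + 1)])
    return verts, np.array(faces)
```

The resulting mesh, textured with the input pixels, can then be rendered from novel viewpoints with any standard rasterizer.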
from neural-scene-flow-fields.
Thanks for the details, I will look into them. Let me clarify
- I am trying to test this hypothesis within your framework: how the depth estimate affects the final rendering quality. For the comparison to CVD, I assume you swapped your depth maps for theirs (though, if I interpret the videos correctly, you also do not perform inpainting).
- Maybe your comparison with CVD was done differently, but I can't tell from the project page. If you are willing to share the code that took the CVD depth and generated the video, it would save me from guessing how the comparison was done.
from neural-scene-flow-fields.
Hi,
For the comparison to CVD, I am not using CVD depth in our framework; instead, I use CVD depth to perform traditional depth-based image-based rendering, following the method described in Sec. 3.2 of https://arxiv.org/pdf/2004.01294.pdf. (Unfortunately, that implementation lives in a private Adobe Research repo, and I am no longer at Adobe.)
I don't think CVD would work for a general dynamic scene: it minimizes epipolar consistency over the entire scene without accounting for object motion or masking out moving objects, which, in my previous experiments, produced incorrect depth for moving objects. So I think single-view depth is still the best initialization strategy in our case.
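To illustrate what I mean by masking out moving objects, here is a toy sketch (function name, inputs, and threshold are all my own, not from CVD or NSFF): pixels whose observed optical flow deviates from the camera-induced rigid flow (computed from depth and the relative pose) are flagged as dynamic, so an epipolar or reprojection loss could be restricted to the static mask:

```python
import numpy as np

def motion_mask(flow, rigid_flow, thresh=1.0):
    """Return a boolean mask that is True where the observed flow (H, W, 2)
    is consistent with camera motion alone, i.e. likely static pixels."""
    err = np.linalg.norm(flow - rigid_flow, axis=-1)  # per-pixel deviation (px)
    return err < thresh
```

Without some mask like this, a geometric consistency loss tries to explain object motion as parallax, which is exactly what corrupts the depth of moving objects.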
from neural-scene-flow-fields.
Related Issues (20)
- Was NSFF tested with a 360-captured scene?
- Coordinate System Operations
- Error when running evaluation.py with a trained model
- Number of training images on the Nvidia dynamic dataset
- Does the default config match the implementation details of the paper?
- Evaluation on broom and curls datasets
- MiDaS depth prediction -- inverse depth?
- RuntimeError: stack expects each tensor to be equal size & AttributeError: 'NoneType' object has no attribute 'shape'
- urlopen error [Errno 111]
- Scene flow visualization
- What are the minimum system requirements for running inference?
- How would you recommend adapting NSFF to non-forward-facing scenes?
- Evaluation Set
- Singularity in NDC2Euclidean
- Faster Training
- Question about Least Kinetic Motion Prior
- Implementation different from paper?
- Question about softsplat
- How to get the motion mask?
- poses_bounds.npy only contains poses from one camera