leonidk / fuzzy-metaballs
Home Page: https://leonidk.com/fuzzy-metaballs/
License: Apache License 2.0
The camera positions and poses calibrated by COLMAP lead to no convergence on a customized dataset.
Hey there, thank you for this exciting project!
I have one question regarding the silhouette loss for pose estimation: how would I use this in a real-world case, where I don't know the true silhouette and can't infer it from the depth data? Currently, the only way I see is to estimate it with a neural network.
Mitsuba's pose estimation demo just uses a mean squared pixel-wise error, but I don't think that is directly applicable here. Did you experiment with other measures?
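For reference, when only an estimated mask is available (e.g. from a segmentation network), a common alternative to per-pixel MSE is a soft IoU loss, which is less sensitive to how sharp the mask boundary is. A minimal sketch with NumPy; the function name and shapes are assumptions, and in a JAX pipeline like this one you would swap `np` for `jax.numpy` to make it differentiable:

```python
import numpy as np

def soft_silhouette_loss(pred_alpha, target_mask, eps=1e-8):
    """Soft-IoU loss between a rendered alpha map and an estimated mask.

    pred_alpha:  (H, W) rendered occupancy in [0, 1]
    target_mask: (H, W) silhouette in [0, 1], e.g. predicted by a
                 segmentation network when no ground truth is available
    """
    inter = np.sum(pred_alpha * target_mask)
    # Soft union: |A| + |B| - |A âˆ© B|, all computed on soft values.
    union = np.sum(pred_alpha + target_mask - pred_alpha * target_mask)
    return 1.0 - inter / (union + eps)
```

A per-pixel binary cross-entropy on the soft masks is another common choice and behaves more like Mitsuba's pixel-wise error.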
Hi! Thank you for your work.
When I run it, I ran into some problems:
First, can it run directly in JupyterLab? When I run pose_estimation.ipynb and shape_from_silhouette.ipynb in JupyterLab, it shows:
---------------------------------------------------------------------------
NoSuchDisplayException Traceback (most recent call last)
Cell In[12], line 41
38 camera = pyrender.IntrinsicsCamera(focal_length,focal_length,cx,cy,znear=0.1*shape_scale,zfar=100*shape_scale)
39 scene.add(camera,pose=pose)
---> 41 r = pyrender.OffscreenRenderer(image_size[1],image_size[0])
42 color, target_depth = r.render(scene)
43 target_depth[target_depth ==0] = np.nan
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/offscreen.py:31, in OffscreenRenderer.__init__(self, viewport_width, viewport_height, point_size)
29 self._platform = None
30 self._renderer = None
---> 31 self._create()
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/offscreen.py:149, in OffscreenRenderer._create(self)
145 else:
146 raise ValueError('Unsupported PyOpenGL platform: {}'.format(
147 os.environ['PYOPENGL_PLATFORM']
148 ))
--> 149 self._platform.init_context()
150 self._platform.make_current()
151 self._renderer = Renderer(self.viewport_width, self.viewport_height)
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/platforms/pyglet_platform.py:50, in PygletPlatform.init_context(self)
48 for conf in confs:
49 try:
---> 50 self._window = pyglet.window.Window(config=conf, visible=False,
51 resizable=False,
52 width=1, height=1)
53 break
54 except pyglet.window.NoSuchConfigException as e:
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/window/xlib/__init__.py:133, in XlibWindow.__init__(self, *args, **kwargs)
130 else:
131 self._event_handlers[message] = func
--> 133 super(XlibWindow, self).__init__(*args, **kwargs)
135 global _can_detect_autorepeat
136 if _can_detect_autorepeat is None:
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/window/__init__.py:513, in BaseWindow.__init__(self, width, height, caption, resizable, style, fullscreen, visible, vsync, file_drops, display, screen, config, context, mode)
510 self._event_queue = []
512 if not display:
--> 513 display = pyglet.canvas.get_display()
515 if not screen:
516 screen = display.get_default_screen()
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/canvas/__init__.py:59, in get_display()
56 return display
58 # Otherwise, create a new display and return it.
---> 59 return Display()
File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/canvas/xlib.py:88, in XlibDisplay.__init__(self, name, x_screen)
86 self._display = xlib.XOpenDisplay(name)
87 if not self._display:
---> 88 raise NoSuchDisplayException(f'Cannot connect to "{name}"')
90 screen_count = xlib.XScreenCount(self._display)
91 if x_screen >= screen_count:
NoSuchDisplayException: Cannot connect to "None"
I know this is not a problem with your code but with pyglet and pyrender, but I would still like to know how you run it.
I run the code on a server, forward it over SSH, and open JupyterLab in my local browser. Some people on the internet say a graphical interface needs to be installed on the server for this to work. I would like to know how you do it.
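For what it's worth, the NoSuchDisplayException usually goes away on a headless server without installing any graphical interface, because pyrender can render offscreen via EGL or OSMesa instead of an X window. A minimal sketch, assuming the server has EGL-capable GPU drivers ('osmesa' is the pure-software alternative and needs the osmesa system library):

```python
import os

# pyrender picks its windowing backend from this variable at import
# time, so it must be set *before* `import pyrender` runs anywhere.
# 'egl' = GPU-accelerated headless rendering; 'osmesa' = software.
os.environ["PYOPENGL_PLATFORM"] = "egl"

# After this, OffscreenRenderer no longer tries to open an X display:
# import pyrender
# r = pyrender.OffscreenRenderer(640, 480)
```

The variable can also be exported in the shell before launching JupyterLab, which avoids editing the notebooks at all.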
Also, when I run run_co3d.ipynb, it shows:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[7], line 3
1 # do it at some canonical size
2 in_files = sorted(glob.glob(os.path.join(input_folder,'*.jpg')) + glob.glob(os.path.join(input_folder,'*.png')))
----> 3 PYo,PXo = sio.imread(in_files[0]).shape[:2]
4 init_scale = np.prod([PYo,PXo])
5 scales = {}
IndexError: list index out of range
What's wrong here? I don't know if it's because I didn't provide the images that need to be rendered.
These two questions may be quite basic, but I still hope you can help me solve them. I'd really appreciate it!
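Regarding the second error: the IndexError at `in_files[0]` means the glob found no files, i.e. `input_folder` contains no .jpg/.png images (or points to the wrong place), so the notebook does need the dataset images to be present. A small guard makes the failure explicit; `list_input_images` is a hypothetical helper name, not part of the repository:

```python
import glob
import os

def list_input_images(input_folder):
    """Collect frames the way run_co3d.ipynb does, but raise a clear
    error instead of an IndexError when the folder is empty or missing."""
    in_files = sorted(glob.glob(os.path.join(input_folder, "*.jpg")) +
                      glob.glob(os.path.join(input_folder, "*.png")))
    if not in_files:
        raise FileNotFoundError(
            f"no .jpg/.png images found in {input_folder!r}; "
            "check that the dataset is downloaded and the path is correct")
    return in_files
```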
First of all, very exciting project!
However, I'm having trouble setting up a proper environment to reproduce the code. The jax and pytorch3d libraries have many conflicts due to their different requirements on pytorch and cudatoolkit versions.
Could you share which versions of jax, pytorch3d, pytorch, and CUDA you used?
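When reporting version conflicts like this, it helps to attach the exact versions importable in the failing environment. A small sketch that only reads package metadata; the package list assumes the usual import names:

```python
import importlib

# Report which of the relevant packages are importable and at
# which version, for inclusion in an environment bug report.
for pkg in ["jax", "jaxlib", "torch", "pytorch3d", "numpy"]:
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(pkg, "not installed")
```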
Hello, I really appreciate your exceptional work! It has been highly useful for me. I noticed that in your co3d code (without optical flow), the dataset comprises a sequence of teddy-bear images. I'd like to apply the code to my own dataset, which contains images of a 3D model from various views, almost the same as the 'cow' data in the shape_from_silhouette code except for the particular model.
However, when I simply replace the bear images and masks with renders of my 3D model, the metaballs struggle to estimate depth and color. Although the loss decreases, it remains significantly high after optimization, and the model fails to predict shape, depth, or color.
Could you kindly share whether you have explored using 3D models as a dataset for the co3d code, or whether you have encountered a similar problem?