
fuzzy-metaballs's People

Contributors

leonidk

fuzzy-metaballs's Issues

Silhouette Loss Question

Hey there, thank you for this exciting project!
I have one question regarding the silhouette loss for pose estimation: how would I use this in a real-world case, where I don't know the true silhouette and can't infer it from the depth data? Currently, the only way I see is to estimate it with a NN.

Mitsuba's pose estimation demo just uses a mean squared pixel-wise error, but I don't think that is directly applicable here. Did you experiment with other measures?
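
For context, two measures commonly used for silhouette supervision are a pixel-wise mean squared error between the rendered and reference masks (what the Mitsuba demo effectively does) and a soft IoU. Below is a minimal, generic sketch in jax.numpy, assuming both silhouettes are arrays with values in [0, 1]; this is only an illustration, not the loss used in this repository.

import jax.numpy as jnp

def silhouette_mse(rendered, reference):
    # Pixel-wise mean squared error between two [0, 1] silhouettes.
    return jnp.mean((rendered - reference) ** 2)

def soft_iou_loss(rendered, reference, eps=1e-6):
    # 1 - soft intersection-over-union; less sensitive to how much of
    # the image is background than a plain MSE.
    inter = jnp.sum(rendered * reference)
    union = jnp.sum(rendered + reference - rendered * reference)
    return 1.0 - inter / (union + eps)

In a real-world setting the reference silhouette would typically come from an off-the-shelf segmentation model, as the question already suggests.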

Some issues with the software environment when running the code

Hi! Thank you for your work.
When I run it, I run into some problems:
First, can it run directly in JupyterLab? When I run pose_estimation.ipynb and shape_from_silhouette.ipynb in JupyterLab, I get

---------------------------------------------------------------------------
NoSuchDisplayException                    Traceback (most recent call last)
Cell In[12], line 41
     38 camera = pyrender.IntrinsicsCamera(focal_length,focal_length,cx,cy,znear=0.1*shape_scale,zfar=100*shape_scale)
     39 scene.add(camera,pose=pose)
---> 41 r = pyrender.OffscreenRenderer(image_size[1],image_size[0])
     42 color, target_depth = r.render(scene)
     43 target_depth[target_depth ==0] = np.nan

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/offscreen.py:31, in OffscreenRenderer.__init__(self, viewport_width, viewport_height, point_size)
     29 self._platform = None
     30 self._renderer = None
---> 31 self._create()

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/offscreen.py:149, in OffscreenRenderer._create(self)
    145 else:
    146     raise ValueError('Unsupported PyOpenGL platform: {}'.format(
    147         os.environ['PYOPENGL_PLATFORM']
    148     ))
--> 149 self._platform.init_context()
    150 self._platform.make_current()
    151 self._renderer = Renderer(self.viewport_width, self.viewport_height)

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyrender/platforms/pyglet_platform.py:50, in PygletPlatform.init_context(self)
     48 for conf in confs:
     49     try:
---> 50         self._window = pyglet.window.Window(config=conf, visible=False,
     51                                             resizable=False,
     52                                             width=1, height=1)
     53         break
     54     except pyglet.window.NoSuchConfigException as e:

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/window/xlib/__init__.py:133, in XlibWindow.__init__(self, *args, **kwargs)
    130         else:
    131             self._event_handlers[message] = func
--> 133 super(XlibWindow, self).__init__(*args, **kwargs)
    135 global _can_detect_autorepeat
    136 if _can_detect_autorepeat is None:

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/window/__init__.py:513, in BaseWindow.__init__(self, width, height, caption, resizable, style, fullscreen, visible, vsync, file_drops, display, screen, config, context, mode)
    510 self._event_queue = []
    512 if not display:
--> 513     display = pyglet.canvas.get_display()
    515 if not screen:
    516     screen = display.get_default_screen()

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/canvas/__init__.py:59, in get_display()
     56     return display
     58 # Otherwise, create a new display and return it.
---> 59 return Display()

File ~/anaconda3/envs/lfd/lib/python3.8/site-packages/pyglet/canvas/xlib.py:88, in XlibDisplay.__init__(self, name, x_screen)
     86 self._display = xlib.XOpenDisplay(name)
     87 if not self._display:
---> 88     raise NoSuchDisplayException(f'Cannot connect to "{name}"')
     90 screen_count = xlib.XScreenCount(self._display)
     91 if x_screen >= screen_count:

NoSuchDisplayException: Cannot connect to "None"

I know this is not a problem with your code itself but with pyglet and pyrender; I would still like to know how you run the code.
I run the code on a server, forward it over SSH, and open JupyterLab in my local browser. Some people online say that a graphical interface needs to be installed on the server for this to work. I would like to know how you do it.
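
A common workaround on headless servers is to switch pyrender from its default pyglet/X11 backend to an offscreen one (EGL for GPU, OSMesa for pure software) via the PYOPENGL_PLATFORM environment variable, set before pyrender is imported; another option is to run JupyterLab under xvfb-run. A minimal sketch, assuming an EGL-capable driver is installed (the sphere scene is just a placeholder for the notebook's own scene setup):

# Must be set before pyrender (and anything that imports PyOpenGL) is loaded.
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'   # or 'osmesa' for a software renderer

import numpy as np
import trimesh
import pyrender

# Placeholder scene: a unit sphere viewed from z = 3.
mesh = pyrender.Mesh.from_trimesh(trimesh.creation.icosphere())
scene = pyrender.Scene()
scene.add(mesh)
pose = np.eye(4)
pose[2, 3] = 3.0
scene.add(pyrender.IntrinsicsCamera(500.0, 500.0, 128.0, 128.0), pose=pose)

r = pyrender.OffscreenRenderer(256, 256)
color, depth = r.render(scene)   # no X display required
r.delete()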

Besides, when I run run_co3d.ipynb, I get

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[7], line 3
      1 # do it at some canonical size
      2 in_files = sorted(glob.glob(os.path.join(input_folder,'*.jpg')) + glob.glob(os.path.join(input_folder,'*.png')))
----> 3 PYo,PXo = sio.imread(in_files[0]).shape[:2]
      4 init_scale = np.prod([PYo,PXo])
      5 scales = {}

IndexError: list index out of range

What is going wrong here? I don't know whether it's because I didn't provide the images that need to be processed.
These two problems may be quite basic, but I still hope you can help me solve them. I would really appreciate it!
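
For the second error: the IndexError means the glob found no files, i.e. input_folder does not contain any .jpg or .png images (presumably the frames of the CO3D sequence used by the notebook, or your own image sequence). A small sanity check of this kind, placed before the failing cell, makes the problem explicit (input_folder is the notebook's variable; the message text is only illustrative):

import glob, os

in_files = sorted(glob.glob(os.path.join(input_folder, '*.jpg')) +
                  glob.glob(os.path.join(input_folder, '*.png')))
if not in_files:
    raise FileNotFoundError(
        f"No .jpg/.png images found in '{input_folder}'; "
        "point input_folder at a directory of input frames.")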

Package requirements with versions

First of all, very exciting project!
However, I'm having trouble setting up a proper environment to reproduce the code. The jax and pytorch3d libraries have lots of conflicts due to their different requirements for the pytorch and cudatoolkit versions.
Can you provide the versions of jax, pytorch3d, pytorch and CUDA that you used?
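
While waiting for the exact versions, one way to make environments comparable is to record what is actually installed on each side; a small sketch that prints the relevant package versions (assuming the packages import at all):

import jax, jaxlib, torch, pytorch3d

print('jax      ', jax.__version__)
print('jaxlib   ', jaxlib.__version__)
print('torch    ', torch.__version__, '(CUDA', torch.version.cuda, ')')
print('pytorch3d', pytorch3d.__version__)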

Issues with using 3D Models in co3d Code

Hello, I really appreciate your exceptional work! It has been highly useful for me. I observed that in your co3d code (without optical flow), the dataset comprises a sequence of teddy bear images. I'm looking to apply the code to my own dataset, which contains images of a 3D model from various views, much like the 'cow' data in the shape_from_silhouette code except for the particular model.

However, when attempting to just replace the bear images and masks with pictures of the 3D model, I encountered an issue where the metaball struggles to estimate depth and color for the 3D model data. Despite the loss decreasing, it remains significantly high after optimization, and the model fails to predict shape, depth, or color.

Could you kindly share if you've had the opportunity to explore the use of 3D models as a dataset for the co3d code, or if you've come across a similar challenge?
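
Without seeing the data it is hard to diagnose, but when swapping in a new image/mask sequence the usual culprits are mismatched file counts or resolutions between images and masks, and mask values outside the expected range. A quick sanity check along these lines can rule those out (image_folder and mask_folder are placeholder names for however the data is laid out):

import glob, os
import skimage.io as sio

imgs  = sorted(glob.glob(os.path.join(image_folder, '*.jpg')))
masks = sorted(glob.glob(os.path.join(mask_folder, '*.png')))
assert len(imgs) == len(masks) and len(imgs) > 0, (len(imgs), len(masks))

img, mask = sio.imread(imgs[0]), sio.imread(masks[0])
assert img.shape[:2] == mask.shape[:2], (img.shape, mask.shape)
# Expect a clean foreground/background split, e.g. values 0 and 255 (or 0 and 1).
print('mask dtype/min/max:', mask.dtype, mask.min(), mask.max())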
