
diyer22 / bpycv

Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)

License: MIT License

Python 98.86% Dockerfile 0.91% Shell 0.22%
blender blender-cv computer-vision data-synthesis deep-learning instance-segmentation ycb depth synthetic-datasets dataset-generation

bpycv's People

Contributors

diyer22, ethnhe, jc211, lucasew, salingo


bpycv's Issues

6D poses matrix?

Hello, thanks for your great work.
I have a question about the generated 6D-pose .mat file. It contains a '6ds' entry, which I interpret as the object's pose matrix relative to the camera. It has the form [ [r11,r12,r13,t1], [r21,r22,r23,t2], [r31,r32,r33,t3], [0,0,0,1] ].
May I ask why it is like this? (not the form of a transformation matrix)
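A 4x4 matrix of the form above is the standard homogeneous rigid transform, so the rotation R and translation t can be read off directly. A minimal numpy sketch with made-up values:

```python
import numpy as np

# A hypothetical 4x4 object-to-camera pose in homogeneous form,
# [[R | t], [0 0 0 1]], matching the layout described above.
pose = np.array([
    [0.0, -1.0, 0.0, 0.1],
    [1.0,  0.0, 0.0, 0.2],
    [0.0,  0.0, 1.0, 0.5],
    [0.0,  0.0, 0.0, 1.0],
])

R = pose[:3, :3]   # rotation (object frame -> camera frame)
t = pose[:3, 3]    # translation (object origin in camera coordinates)

# Transform a point from object coordinates to camera coordinates:
p_obj = np.array([1.0, 0.0, 0.0])
p_cam = R @ p_obj + t
# Equivalent to the homogeneous product pose @ [x, y, z, 1]:
p_cam_h = (pose @ np.append(p_obj, 1.0))[:3]
assert np.allclose(p_cam, p_cam_h)
```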

Global variables break undo functionality

I want to use the render_data function within an undo() context, because otherwise the annotation rendering modifies the materials of several background objects in my scene. I want to run it in a loop over several modifications of some objects. The script runs only the first time; after that I get "ReferenceError: StructRNA of type Scene has been removed" in

self.set_attr(scene.eevee, "taa_render_samples", 1)

As I saw in your code, in

from .select_utils import scene, render

you use some global variables, e.g. for the scene, while in other places you don't:

scene = bpy.context.scene

The error above was due to the global scene variable referring to deleted data after the undo operation. This behavior is described in the Blender documentation: https://docs.blender.org/api/blender2.8/info_gotcha.html#undo-redo

A working modification is to override the global scene variable and re-obtain it from the context within

def __init__(self):

I don't know, however, where else these global variables are used.

Multiple materials on an object do not get reversed

This was referenced in another issue, but that issue was closed without a resolution. If an object has multiple materials, the materials are not restored after the instance material is applied. The issue is in the replace_collection code. I don't know how Blender associates materials with different vertices, but that information doesn't seem to live in the material itself. So even though bpycv restores the list of materials, the data about where they are applied is lost.

An error occurred in the demo

I've tried to run the demo from the main page, but it failed to finish.
I just installed your library along with boxx, opencv-python, bs4 (I'm not sure why bs4 is needed) and OpenEXR, then simply copied your demo source into the Text Editor and ran the script.

The error message is below:
/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py:58: RuntimeWarning: divide by zero encountered in floor_divide
numerator_odd = numerator // low_bit
/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py:60: RuntimeWarning: divide by zero encountered in log2
up = np.int32(np.log2(low_bit, dtype=np.float32))
Traceback (most recent call last):
File "/Text", line 22, in
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 96, in render_data
result = ImageWithAnnotation(**render_result)
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/exr_image_parser.py", line 66, in init
self["inst"] = exr.get_inst()
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/exr_image_parser.py", line 55, in get_inst
inst = encode_inst_id.rgb_to_id(rgb)
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py", line 62, in rgb_to_id
int_part = 2 ** (depth - 1) - 1 + idx_in_level
ValueError: Integers to negative integer powers are not allowed.
Error: Python script failed, check the message in the system console

I also tried this in Blender 2.82, with the same result.
Any suggestions would be helpful. Thanks.
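The RuntimeWarnings above hint at the root cause: a low_bit of 0 flows through np.log2, producing a nonsense negative depth, and NumPy then refuses the negative integer power. A small sketch of that failure mode (illustrative values only, not bpycv's actual data):

```python
import numpy as np

# np.log2(0) is -inf; casting that to an int gives a garbage negative `depth`:
with np.errstate(divide="ignore"):
    assert np.log2(np.float32(0.0)) == -np.inf

# NumPy rejects raising an integer to a negative integer power, which is
# exactly the ValueError in the traceback above:
try:
    np.int32(2) ** np.int32(-1)
    msg = "no error"
except ValueError as e:
    msg = str(e)
assert "negative integer powers" in msg
```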

Rendering both semantic & instance segmentation maps

Hi All,
I saw in another issue how to render semantic segmentation maps by basically changing obj["inst_id"] to an object class id instead of an object id. Is it possible to render both semantic segmentation and instance segmentation in one rendering pass?

As a workaround we can obtain the maps with 2 rendering steps:

  • Assign obj["inst_id"] with unique value per object, render the scene and save on the side the rgb image and instance segmentation map.
  • re-assign obj["inst_id"] with unique value per object class and render to obtain the semantic segmentation map.

The procedure above is inefficient, as we need to render the scene twice. How can we solve this? (Maybe use another field in obj for class_id?)
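If instance ids are assigned as class_id * 1000 + index (the Cityscapes-style convention mentioned elsewhere in this project), a single instance render already contains both maps, since the class id can be recovered by integer division. A sketch assuming that convention:

```python
import numpy as np

# Hypothetical instance map following the inst_id = class_id * 1000 + index
# convention (values are made up for illustration; 0 = background).
inst = np.array([
    [0,    1001, 1001],
    [2001, 2002, 0],
], dtype=np.uint16)

semantic = inst // 1000          # class id per pixel
index_in_class = inst % 1000     # instance index within its class

assert semantic.tolist() == [[0, 1, 1], [2, 2, 0]]
assert index_in_class.tolist() == [[0, 1, 1], [1, 2, 0]]
```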

Easy way to get unoccluded segmentation maps?

I'm trying to get segmentation maps for each object in the scene with and without occlusions. Is there an easy built-in way to do this? I'm currently using

bpy.ops.object.hide_view_set(unselected=True)

inside the loop that loads the objects, segmenting each one once and then segmenting the entire image, but this seems really inefficient.

why is orthographic camera not supported?

Hi! Just a quick question:
In camera_utils.py a check is performed to ensure the camera is perspective.

if camd.type != "PERSP":
      raise ValueError("Non-perspective cameras not supported")

Why is that? It seems to work fine if I disable this check.

Does something happen that I'm not aware of, or is this just an untested feature?

Thanks!

Installation with Blender 4.0.2 failed

I followed the installation steps.
./blender -b -E CYCLES --python-expr "import bpycv,cv2;d=bpycv.render_data();bpycv.tree(d);cv2.imwrite('/tmp/try_bpycv_vis(inst-rgb-depth).jpg', d.vis()[...,::-1])" resulted in the following traceback:

Traceback (most recent call last):
File "", line 1, in
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/render_utils.py", line 118, in render_data
with set_inst_material(), set_annotation_render(), withattr(
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/render_utils.py", line 72, in init
self.set_attrs(render.image_settings, attrs)
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/statu_recover.py", line 50, in set_attrs
self.set_attr(obj, attr, value)
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/statu_recover.py", line 53, in set_attr
self.obj_to_old_attr_value.append([(obj, attr), getattr(obj, attr)])
AttributeError: 'ImageFormatSettings' object has no attribute 'use_zbuffer'

color_depth left changed by bpycv.render_data

It's easy enough to work around by manually resetting it after the call, but I can see you go to some effort in the code to change and then restore parameters like this.

Blender 2.80

To reproduce, from Blender launch:

In the Scripting Console:

>>> import bpycv
/opt/blender/blender-2.80/2.80/python/lib/python3.7/site-packages/bs4/element.py:16: UserWarning: The soupsieve package is not installed. CSS selectors cannot be used.
  'The soupsieve package is not installed. CSS selectors cannot be used.'
>>> bpy.context.scene.render.image_settings.color_depth
'8'
>>> bpycv.render_data(render_image=False)
# results...
>>> bpy.context.scene.render.image_settings.color_depth
'16'
>>> bpycv.__version__
'0.2.9'
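The change-then-restore pattern described above can be captured with an attribute-restoring context manager; here is a generic, hypothetical sketch (not bpycv's actual StatuRecover code), using a dummy object in place of bpy's image_settings:

```python
from contextlib import contextmanager

@contextmanager
def restore_attrs(obj, **temp_values):
    """Temporarily set attributes on obj, restoring the originals on exit."""
    old = {name: getattr(obj, name) for name in temp_values}
    try:
        for name, value in temp_values.items():
            setattr(obj, name, value)
        yield obj
    finally:
        # Restored even if the body raises, unlike a plain reset after the call.
        for name, value in old.items():
            setattr(obj, name, value)

class Settings:  # stand-in for bpy.context.scene.render.image_settings
    color_depth = "8"

s = Settings()
with restore_attrs(s, color_depth="16"):
    assert s.color_depth == "16"
assert s.color_depth == "8"  # original value is back after the block
```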

bpycv.render_data error

There is an error with the random file name saved in /tmp/. It seems it was not able to generate the random filename.
I'm using Blender 2.93 and Python 3.9.


Render image using: BLENDER_EEVEE
Saved: '/tmp/.png'
 Time: 00:00.18 (Saving: 00:00.00)

Traceback (most recent call last):
  File "/home/liuyanqi/spot/digit_new.blend/Text.002", line 225, in <module>
  File "/home/liuyanqi/spot/digit_new.blend/Text.002", line 144, in render_scene
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/bpycv/render_utils.py", line 110, in render_data
    render_result["image"] = _render_image()
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/bpycv/render_utils.py", line 90, in render_image
    image = imread(png_path)[..., :3]
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/boxx/ylimg/ylimgTool.py", line 46, in imread
    return imread(fname, as_grey, plugin, **plugin_args)
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/_io.py", line 48, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/manage_plugins.py", line 207, in call_plugin
    return func(*args, **kwargs)
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/_plugins/imageio_plugin.py", line 10, in imread
    return np.asarray(imageio_imread(*args, **kwargs))
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/functions.py", line 159, in imread
    with imopen(uri, "ri", plugin=format) as file:
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/imopen.py", line 137, in imopen
    request = Request(uri, io_mode)
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/request.py", line 245, in __init__
    self._parse_uri(uri)
  File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/request.py", line 383, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/tmp/tmp93g8ifud.png'
Error: Python script failed, check the message in the system console

no module named 'cycler'

Hi, I'm trying to run the demo script on Ubuntu 18.04 with Python 3.7.7 and Blender 2.92.0. However, during saving I get the following errors:

...
Saved: '/tmp/tmp3xramr2h.png'
 Time: 00:00.93 (Saving: 00:00.32)

.../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py:69: BoxxWarning: warning from boxx
    os.environ["DISPLAY"] is not found
    plt.show() are redirect to plt.savefig(tmp)
    function: show, loga, crun, heatmap, plot will be affected
  def __setDisplayEnv():
Traceback (most recent call last):
  File ".../test_script.py", line 28, in <module>
    result = bpycv.render_data()
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 107, in render_data
    render_result["image"] = _render_image()
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 87, in render_image
    image = imread(png_path)[..., :3]
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylimg/ylimgTool.py", line 43, in imread
    beforImportPlt()
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py", line 110, in beforImportPlt
    __noDisplayEnv()
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py", line 85, in __noDisplayEnv
    import matplotlib.pyplot as plt
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/matplotlib/__init__.py", line 107, in <module>
    from . import cbook, rcsetup
  File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/matplotlib/rcsetup.py", line 32, in <module>
    from cycler import Cycler, cycler as ccycler
ModuleNotFoundError: No module named 'cycler'
Blender quit

I have tried ./python3.7m -m pip install -U cycler, and it reports "Requirement already satisfied". Do you have any suggestions for how I can overcome this issue?

Thanks in advance.

Changing exposure in HDR?

Hello, thanks for your great work. I've been using this for 6D pose estimation of industrial assets.
For the HDRI manager, is there a method to change the exposure so that we can simulate different lighting conditions?

how to control the rotation (x-y plane) of an object

Dear authors,

In the ycb_demo.py file I set all the random location and rotation variables to fixed values, but the objects in the generated images still rotate in the x-y plane. Could you please tell me if I am missing something?

E.g. lines 44-46 and line 48; meanwhile, I set the physics engine to run for 0 frames in lines 69-70.

Thanks a lot!

HDRI Haven Replaced by Polyhaven

Hi there,
HDRI Haven seems to have been replaced by a site called Polyhaven, and the API endpoints bpycv uses no longer seem to work (I just got a redirect and a 404 from the link https://hdrihaven.com/files/hdris/tv_studio_4k.hdr, which I found in the source).

I'm curious whether there are plans to update to Polyhaven? It appears to be a paid service now, which is fine, at least for me.

Some question about the blender's RGB

Thanks for your great work; it really helps me a lot.
I have a problem: I don't know how to convert an exact RGB value such as (0, 0, 1) to Blender's RGB, which ranges over (0, 1). I read your code and found a transform for this, so I would like to know what I should do to get the exact RGB in Blender.
Hoping for your reply!
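Most likely the transform in question is the standard sRGB-to-linear conversion, since Blender stores colors in linear space while typical 8-bit RGB values are sRGB-encoded. A sketch of that standard formula (an assumption about what bpycv does internally):

```python
def srgb_to_linear(c):
    """Standard sRGB electro-optical transfer function; input in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# An 8-bit sRGB color (e.g. picked in an image editor) to linear [0, 1]:
rgb_8bit = (0, 0, 255)
linear = tuple(srgb_to_linear(v / 255.0) for v in rgb_8bit)
assert linear[0] == 0.0
assert abs(linear[2] - 1.0) < 1e-9  # pure channels map to the endpoints
```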

Missing License

Hi,

I'd like to use your module in one of my projects. Could you please add a license, or confirm that the missing license is intentional if others are not allowed to use your code?

Thanks,
pilk

Adding other 3D model failed

Dear authors,

I tried to add another 3D model (in .obj format) as one of my objects, but the generated images don't contain the object, even though there is no error when bpycv.load_obj loads it. Here is what I did:

  1. download the .obj file from the internet, say this one: https://free3d.com/3d-model/carboard-box-with-holders-for-fruit-v2--710406.html
  2. use meshlab to load it and export it as .obj file (suggested by the authors)
  3. use the code to load it.

Could you please help me with it? Thanks!

Installation gets stuck on error and retrying pip install bpycv

Hi, thanks for the great initiative of creating this package.

I am having trouble with the installation. I have Windows 10, Blender 2.82, Python 3.7.4, pip 21.3.1. There is no ensurepip module, and I seem unable to install it. When I execute blender -b --python-expr "__import__('pip._internal')._internal.main(['install', '-U', 'bpycv'])", the terminal gets stuck in a loop of erroring and retrying the install. The error output is shown in the attached screenshots, with the second screenshot looping.

Any clue on how to solve this?

Thanks!

ss1
ss2

Does this tool support semantic segmentation?

Hello,
Thank you for the great tool. I would be really interested to know:

a. Does this tool support semantic segmentation?
b. Can we use our own predefined labels, like the Cityscapes label info?

Question about asserting the upper bound of inst_id

Hello there 👋

As described here in #38 one should specify the instance id as follows: object["inst_id"] = categories_id * 1000 + index. The following example demonstrates that uint16 should be used to store the instance segmentation information in an image:

cv2.imwrite("demo-inst.png", np.uint16(result["inst"]))

I am assuming this is done to comply with Cityscape dataset, which is perfectly fine.

Since 2^16 = 65536 and the instance id is categories_id * 1000 + index, we cannot have more than 65 categories (classes) and cannot count more than roughly 500 instances per class with this approach, if my math is right.

But I would like to go way beyond this, and I have noticed that 32-bit integers are used internally, so I could use

cv2.imwrite('/out/put/image.tiff', np.float32(result["inst"]))

to save a 32-bit floating-point image, which obviously gives me many more possibilities for the number of categories and instances that can be annotated.

However I noticed following assertion in the code:

assert inst_id <= 100e4, f"inst_id '{inst_id}' should <= 100e4"

which again limits the instance id, but I could not find an explanation, so here is finally my question:

Why must the inst_id be <= 100e4 here?

Many thanks in advance 😺
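The uint16 limit described above can be checked directly; a small numpy sketch (illustrative ids only):

```python
import numpy as np

inst_id = 70 * 1000 + 123  # class 70, instance 123 -> id 70123 > 65535
# Casting to uint16 wraps modulo 2**16, silently corrupting the id:
as_u16 = np.array([inst_id], dtype=np.int64).astype(np.uint16)[0]
assert int(as_u16) == 70123 % 65536  # 4587, not 70123

# float32 represents integers exactly up to 2**24, so a float32 TIFF
# preserves ids in this range:
assert int(np.float32(inst_id)) == 70123
```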

Instance map incorrect when objects are converted from duplicates

Hi, thanks for the awesome blender utils!

I am trying to convert particle emissions to objects and then segment them, but separate objects are considered part of the same instance. Here's minimal code that reproduces the issue:

import bpy
import bpycv
import cv2
from bpy import context
import numpy as np

# Scene setup
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()
bpy.ops.mesh.primitive_cube_add(size=1)
bpy.ops.mesh.primitive_plane_add(size=20,)
bpy.ops.object.camera_add(location=(0, 0, 80), rotation=(0, 0, 0))
bpy.context.scene.camera = context.object
bpy.ops.object.light_add(type='SUN')
cube = bpy.data.objects["Cube"]
plane = bpy.data.objects["Plane"]

ps = plane.modifiers.new("part", 'PARTICLE_SYSTEM')
psys = plane.particle_systems[ps.name]
psys.settings.type = "HAIR"
psys.settings.use_advanced_hair = True
psys.settings.count = 2 # Creating two hair emissions of cube
psys.settings.instance_object = cube # Creating two hair emissions of cube
psys.settings.render_type = "OBJECT"
psys.settings.particle_size = 1

plane.select_set(True)
bpy.ops.object.duplicates_make_real() # Make instanced objects attached to this object real


# Render using bpycv
bpy.data.objects["Cube.001"]["inst_id"] = 1001
bpy.data.objects["Cube.002"]["inst_id"] = 1002
result = bpycv.render_data()
cv2.imwrite("demo.jpg", result["image"][..., ::-1])
cv2.imwrite("demo-seg.png", np.uint16(result["inst"]))

The inst_ids are correctly shown in the UI (one cube has inst_id = 1001 and the second has inst_id = 1002), but all cubes appear to be the same instance in demo-seg.png (all cubes have inst_id = 1002).

Screenshot from 2022-09-05 15-40-59
Screenshot from 2022-09-05 15-59-17

Objects that share the same mesh do not allow the emission material to be reversed

If you have objects in your scene that share the same mesh, set_inst_material will not recover the original material after it's done running. This causes the objects to permanently turn green. The cause is the replace_collection code: it doesn't recognize that it has changed the same mesh twice, so it ends up overwriting the original materials stored in the StatuRecover object with the emission material, thereby losing all reference to the originals.

self.replace_collection(obj.data.materials, [material])

def replace_collection(self, bpy_path, bpy_prop_collection):

Adding PNG background

Dear authors,

Very nice work! I wonder how to add a PNG image as the background, such as an empty box/bin, for training robotic arms? Thanks for your answer!

is it possible to use bpy to control third-party addons in blender?

Hi Lei,

I am currently also working on using blender to render some synthetic images for deep learning training, though I am new to blender.
I need to use an addon/plugin named retarget-bvh to load motion-capture data. I wonder if bpy can be used to call third-party addons.

Thanks,
Zhe

How to replace primitives with my own objs in demo.py?

I have a bunch of .obj files and, as the title says, I want to replace bpy.ops.mesh.primitive_cube_add with my own .obj files.

I didn't refer to ycb_demo.py because my .obj files are very simple, just like the primitives in demo_vis: they only contain vertices and faces, with no materials.

How can I achieve this?

Blender 3.0 support?

Hi,

I get the following errors if I try the example script with Blender 3.0:

Eevee:

AttributeError: 'RenderSettings' object has no attribute 'tile_x'

Cycles:

    scene.view_layers["View Layer"].cycles, "use_denoising", False
KeyError: 'bpy_prop_collection[key]: key "View Layer" not found'

Are there solutions for this? Will Blender 3.0 get support?

Segmentation mask of shadow

Hello,

I'm working on a computer vision project in which I try to use synthetic data for image segmentation.
Your library can already generate a segmentation map of the objects in a scene. However, I also need the mask of the shadows cast by the objects. Is it possible to do that with bpycv?

For example, the left image is the one rendered by Blender (with Render / Render Image) and the right image is what I want. Currently, when I generate the segmentation mask with bpycv, I only get the red part (wind turbines), but I would also like the green part (shadow). Note that the shadow is not rendered by bpycv when I save the result["image"] array (just like in the given examples).

end_goal

Thanks

getting a view from multiple cameras

Thank you for providing a great library.

I would like to know how to do this when there are multiple cameras.

So far (if I'm not mistaken) I can only get data from the camera viewpoint that exists by default in blender. Please let me know how to get the viewpoints from the cameras I have specified.

I am waiting for your reply.

Compile bug on Windows

Hello,
Thanks for providing this useful tool!

An error occurred when I was installing bpycv on Windows:

ERROR: Command errored out with exit status 1:
command: 'C:\Program Files\Blender Foundation\Blender 2.90\2.90\python\bin\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Salingo\\AppData\\Local\\Temp\\pip-install-o3rqu4a6\\bpycv\\setup.py'"'"'; __file__='"'"'C:\\Users\\Salingo\\AppData\\Local\\Temp\\pip-install-o3rqu4a6\\bpycv\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Salingo\AppData\Local\Temp\pip-pip-egg-info-nduuwed0'
        cwd: C:\Users\Salingo\AppData\Local\Temp\pip-install-o3rqu4a6\bpycv\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Salingo\AppData\Local\Temp\pip-install-o3rqu4a6\bpycv\setup.py", line 13, in <module>
        long_description = f.read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xae in position 318: illegal multibyte sequence
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

I think this is an issue related to string encoding; maybe you can modify line 12 of setup.py to:
with open("README.md", encoding='utf-8') as f:

Thanks!

Wrong instance segmentation map with many objects

Hi! Thanks for the awesome blender util!

I use your util to create a segmentation map for a bunch of clumped objects created with instances and geometry nodes. I make the instances real before creating the instance map and reassign their mesh data with data.copy().
The problem is that I have more than 100 objects in the area, but some objects still share the same color in the instance map, limited to 20-30 distinct values.
I have tried many ways of changing inst_id, but the situation stayed the same. How can I fix it?

Here is the code (some hacks were used to work with materials, so the code is not perfect):

for img_num in range(1):
    # remove all MESH objects
    bpy.ops.wm.read_homefile()
    update_camera(bpy.data.objects['Camera'])
    update_light(bpy.data.objects['Light'])
    with bpy.data.libraries.load('rocks.blend') as (data_from, data_to):
        data_to.collections = data_from.collections
    with bpy.data.libraries.load('froth_gm.blend') as (data_from, data_to):
        data_to.materials = data_from.materials
    

    # bpy.ops.mesh.primitive_cube_add(size=1, location=(0,0,0))
    cube =  bpy.data.objects.get('Cube')
    
    modifier=cube.modifiers.new("Bubbles", "NODES")
    modifier.node_group = geometry_nodes_node_group1(seed=(2,4, 3), dists=(70.0, 50.0, 50))
    
    for scene in bpy.data.scenes:
      scene.cycles.device = 'GPU'

    prefs = bpy.context.preferences
    cprefs = prefs.addons['cycles'].preferences
    bpy.context.scene.render.engine = 'CYCLES'
    bpy.context.preferences.addons[
        "cycles"
    ].preferences.compute_device_type = "CUDA" # or "OPENCL"
    # bpy.context.scene.cycles.samples = 32
    # Set the device and feature set
    bpy.context.scene.cycles.device = "GPU"
    
    result = bpycv.render_data()
    # save result
    cv2.imwrite(
        f"img{img_num}.jpg", result["image"][..., ::-1]
    )  # transfer RGB image to opencv's BGR
    
    # modifier=cube.modifiers.new("Bubbles", "NODES")
    modifier.node_group = geometry_nodes_node_group2(seed=(2, 4, 3),  dists=(70.0, 50.0, 50))
    
    cube.select_set(True)
    bpy.context.view_layer.objects.active = cube
    bpy.ops.object.mode_set(mode='OBJECT')
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.object.mode_set(mode='OBJECT')
    bpy.ops.object.duplicates_make_real()
    
    for i, o in enumerate(bpy.data.objects):
        if o.type in ("MESH", "CURVE"):
            o.data=o.data.copy()
            o["inst_id"] = randint(1, 8)*1000+i
   
    bpy.context.scene.render.engine = 'BLENDER_EEVEE'

    bpy.context.scene.render.resolution_y = 512
    bpy.context.scene.render.resolution_x = 512
    
    bpy.context.view_layer.cycles.use_denoising = True
    bpy.context.view_layer.cycles.denoising_store_passes = True
    bpy.context.scene.cycles.samples = 200
    # Set the device and feature set
    bpy.context.scene.cycles.device = "GPU"
    
    result = bpycv.render_data(render_image=False)
    objs = {
                obj.name: obj for obj in bpy.data.objects if obj.type in ("MESH", "CURVE")
            }
    
    print(len(objs))
    cv2.imwrite(f"mask{img_num}.png", np.uint16(result["inst"]))
    cv2.imwrite(f"demo-vis{img_num}.jpg", result.vis()[..., ::-1])

Here are the examples of render and instance map.
image
image

render_data() bug when a single mesh has multiple materials

First of all, this tool is absolutely perfect for what I need and I love it! It was working fine for me on a simpler scene, but now I am using it to create an image segmentation dataset for a more complex scene with a different material setup, and I've encountered a bug when multiple materials are used on a single mesh.

Here is the simple script I have set up which triggers this behavior:

import sys
sys.path.append("/home/mitchell/anaconda3/envs/blender/lib/python3.7/site-packages")

import bpycv
    
result = bpycv.render_data()

Before running:

before

After running:

image

And here you can see vaguely what my materials look like for the city object beforehand:

image

The first material (default.001) is applied to all parts of the city after calling render_data(), whereas before there were 4 different materials for different parts of the city.

I can imagine this would be difficult to support; however, one quick fix for me would be the ability to ignore the city object entirely when generating annotations. I'm not sure if that is possible, though.

I'm not a blender expert by any means, but if there's anything else you'd like to know I'd be happy to help. Thanks in advance!

Can't find "bpycv"

First, thanks for your work. I have a problem with bpycv: it has been installed, as shown in Figure 1, but I still get the problem shown in Figure 2. How should I deal with this? Thanks!

1

2

Depth map accuracy

Thanks for open-sourcing this handy package.
I'm wondering whether you have suggestions for controlling depth-map accuracy in Blender 2.9?

In my test, the reprojection errors of the depth points (on the input mesh) are around 1e-3, although I already use OpenCV's EXR format to export the depth image in float32 precision.
This suggests the errors come from Blender's depth map itself,
or perhaps from Blender's rendering logic, e.g.
https://blender.stackexchange.com/questions/87504/blender-cycles-z-coordinate-vs-camera-centered-z-coordinate-orthogonal-distan
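If the linked thread applies, part of the error may come from mixing two depth conventions: Cycles' Z pass stores distance along the camera ray, while reprojection usually assumes axial z. A hedged pinhole-model conversion sketch (the function name and parameters are illustrative, not part of bpycv):

```python
import numpy as np

def ray_depth_to_z(d, x, y, f):
    """Convert ray-length depth d at sensor offset (x, y) to axial z,
    for a pinhole camera with focal length f (all in the same units)."""
    return d * f / np.sqrt(x * x + y * y + f * f)

# On the optical axis the two conventions coincide:
assert np.isclose(ray_depth_to_z(2.0, 0.0, 0.0, 1.0), 2.0)
# Off-axis the ray is longer than the axial distance, so z < d:
assert np.isclose(ray_depth_to_z(2.0, 1.0, 0.0, 1.0), 2.0 / np.sqrt(2.0))
```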

Render results are not consistent

Hello,

Recently I ran into a weird bug while trying to build a virtual scanner.

If I run bpycv.render_data() directly after Blender is initialized, the rendered color image looks like this:
0-color

However, the result becomes correct if I run bpycv.render_data() twice:
0-color

The same issue occurs with other models, so maybe something is wrong inside the render function?

Segmentation with transparent background

Hi,
I am trying to do semantic segmentation for a dataset containing plants.
My models contain a lot of leaves, which are basically built from supporting squares with an image of a leaf on them.
In these images the background is transparent.
In the RGB render the leaves appear normal and the transparency works: I can only see the leaves, not their supports.
However, in the depth and semantic outputs the leaves appear square (I tried both .obj and .fbx formats).
Is there a setting I can use so that the semantic mask is created using only the visible parts, without the transparent support squares? Or do I have to fix the plant models first? (In NVIDIA Omniverse the segmentation is correct for these models.)

Thanks.
0

To cityscape format?

Thanks for this work. I noticed that bpycv supports the Cityscapes format; I want to ask whether there is a demo or doc demonstrating how to save the results in Cityscapes format?

Unexpected result

I've copy-pasted your code and run it, but the images I get are different from yours (namely, all the meshes are the same color). What could be the reason?

demo-depth.png
demo-depth

demo-inst.png
demo-inst

demo-rgb.png
demo-rgb

demo-vis(inst|rgb|depth).png
demo-vis(inst|rgb|depth)
