diyer22 / bpycv
Computer vision utils for Blender (generate instance annotation, depth and 6D pose with one line of code)
License: MIT License
Hello, thanks for your great work.
I have a question about the generated 6D-pose .mat file. It contains a field '6ds', which I interpret as the object's pose matrix relative to the camera. I expected it to have the homogeneous form [[r11, r12, r13, t1], [r21, r22, r23, t2], [r31, r32, r33, t3], [0, 0, 0, 1]].
May I ask why it is stored the way it is (not as a homogeneous transformation matrix)?
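For reference, if the field really is a 4x4 matrix in the [R|t; 0 0 0 1] layout described above, the rotation and translation can be split off like this (a minimal numpy sketch; the matrix values here are made up for illustration, not taken from bpycv output):

```python
import numpy as np

# Hypothetical 4x4 object-to-camera pose in the [R|t; 0 0 0 1] layout
# described above (values are made up for illustration).
pose = np.array([
    [0.0, -1.0, 0.0, 0.1],
    [1.0,  0.0, 0.0, 0.2],
    [0.0,  0.0, 1.0, 0.5],
    [0.0,  0.0, 0.0, 1.0],
])

R = pose[:3, :3]   # 3x3 rotation
t = pose[:3, 3]    # translation vector

# Sanity checks for a rigid transform: R orthonormal, det(R) == +1
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# A point in object coordinates mapped into camera coordinates:
p_obj = np.array([1.0, 0.0, 0.0])
p_cam = R @ p_obj + t
print(p_cam)  # [0.1 1.2 0.5]
```

The same two sanity checks are also a quick way to verify which convention a loaded .mat file actually uses.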
Dear author,
Thanks for implementing this function:
#24 (comment)
But how should I set this variable as the background? Something like:
Lines 51 to 52 in c576e01
Thanks!
I want to use the render_data function within an undo() context, because otherwise the annotation rendering modifies the materials of several background objects in my scene. I want to run it in a loop over several modifications of some objects. The script runs only the first time; after that I get the error "ReferenceError: StructRNA of type Scene has been removed" in
Line 35 in 0ff7644
As I saw from your code in
Line 17 in 0ff7644
Line 24 in 0ff7644
The error I got above was due to the global scene variable, which refers to deleted data after the undo operation. This behavior is described in the Blender documentation: https://docs.blender.org/api/blender2.8/info_gotcha.html#undo-redo
A working modification is to override the global scene variable and re-obtain it from the context here:
Line 25 in 0ff7644
I don't know however where else these global variables are used.
This was referenced in another issue, but that issue was closed without a resolution. If an object has multiple materials, the materials are not restored after the instance material is applied. The issue is in the replace_collection code. I don't know how Blender associates materials with different vertices, but that association doesn't seem to live in the material itself. So even though bpycv restores the list of materials, the data about where they are applied is lost.
I've tried to run the demo from the main page, but it failed to finish.
I just installed your library along with boxx, opencv-python, bs4 (I'm not sure why bs4 is needed) and OpenEXR, then simply copied the demo source into the Text Editor and ran the script.
Error message is below:
/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py:58: RuntimeWarning: divide by zero encountered in floor_divide
numerator_odd = numerator // low_bit
/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py:60: RuntimeWarning: divide by zero encountered in log2
up = np.int32(np.log2(low_bit, dtype=np.float32))
Traceback (most recent call last):
File "/Text", line 22, in
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 96, in render_data
result = ImageWithAnnotation(**render_result)
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/exr_image_parser.py", line 66, in init
self["inst"] = exr.get_inst()
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/exr_image_parser.py", line 55, in get_inst
inst = encode_inst_id.rgb_to_id(rgb)
File "/home/noname/blender-2.90.0-linux64/2.90/python/lib/python3.7/site-packages/bpycv/utils.py", line 62, in rgb_to_id
int_part = 2 ** (depth - 1) - 1 + idx_in_level
ValueError: Integers to negative integer powers are not allowed.
Error: Python script failed, check the message in the system console
I also tried this in Blender 2.82, but got the same error.
Any suggestions would be helpful. Thanks.
Hi All,
I saw in another issue how to render semantic segmentation maps by basically changing obj["inst_id"] from an object id to an object class id. Is it possible to render both semantic segmentation and instance segmentation in one rendering action?
As a workaround, we can obtain both maps with two rendering steps:
1. Set obj["inst_id"] to a unique value per object, render the scene, and save the RGB image and the instance segmentation map.
2. Set obj["inst_id"] to a unique value per object class and render again to obtain the semantic segmentation map.
The procedure above is inefficient because we render the scene twice. How can we solve this? (Maybe use another field in obj for class_id?)
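One render can actually be enough if class and per-object index are packed into a single inst_id, using the inst_id = categories_id * 1000 + index convention mentioned in #38; both maps then fall out of one instance map. A numpy sketch of the decode step (the fake map below stands in for result["inst"]; the 1000 factor is the convention from the issues, not something enforced by bpycv itself):

```python
import numpy as np

# Fake instance map in the inst_id = class_id * 1000 + index packing
# (values made up; in practice this would be result["inst"]).
inst = np.array([
    [1001, 1001, 2001],
    [1002, 2002, 2001],
], dtype=np.int32)

semantic = inst // 1000   # class id per pixel
instance = inst % 1000    # per-class object index per pixel

print(semantic)
# [[1 1 2]
#  [1 2 2]]
print(instance)
# [[1 1 1]
#  [2 2 1]]
```

Background pixels (inst_id 0) decode to class 0 under this scheme, so no special casing is needed for them.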
any plan to add stereo process
I'm trying to get segmentation maps for each object in the scene with and without occlusions, is there an easy way to do this built in? I'm currently using
bpy.ops.object.hide_view_set(unselected=True)
inside the loop that loads up the objects and segmenting them once, and then segmenting the entire image, but this seems to be really inefficient.
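If the occluded (visible-only) masks are what matter, they can all be sliced out of a single rendered instance map without any hide/re-render loop; only the occlusion-free (amodal) masks genuinely need a render per object. A sketch of the slicing, using a fake map in place of result["inst"]:

```python
import numpy as np

# Fake instance map (0 = background); in practice this would be result["inst"].
inst = np.array([
    [0,    1001, 1001],
    [1002, 1002, 1001],
], dtype=np.int32)

# One boolean visible-pixel mask per instance id, from a single render:
masks = {i: inst == i for i in np.unique(inst) if i != 0}

assert masks[1001].sum() == 3  # visible pixels of object 1001
assert masks[1002].sum() == 2  # visible pixels of object 1002
```

For the amodal masks, hiding everything but one object per render (as you are doing) is, as far as I know, unavoidable, since the renderer only ever sees the frontmost surface.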
Hi! Just a quick question:
In camera_utils.py a check is performed so that the camera is perspective.
if camd.type != "PERSP":
raise ValueError("Non-perspective cameras not supported")
Why is that? It seems to work fine if I disable this check.
Does something happen that I'm not aware of, or is this just an untested feature?
Thanks!
Followed installation steps.
./blender -b -E CYCLES --python-expr "import bpycv,cv2;d=bpycv.render_data();bpycv.tree(d);cv2.imwrite('/tmp/try_bpycv_vis(inst-rgb-depth).jpg', d.vis()[...,::-1])" resulted with the following traceback;
Traceback (most recent call last):
File "", line 1, in
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/render_utils.py", line 118, in render_data
with set_inst_material(), set_annotation_render(), withattr(
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/render_utils.py", line 72, in init
self.set_attrs(render.image_settings, attrs)
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/statu_recover.py", line 50, in set_attrs
self.set_attr(obj, attr, value)
File "/home/art/Applications/blender-4.0.2-linux-x64/4.0/python/lib/python3.10/site-packages/bpycv/statu_recover.py", line 53, in set_attr
self.obj_to_old_attr_value.append([(obj, attr), getattr(obj, attr)])
AttributeError: 'ImageFormatSettings' object has no attribute 'use_zbuffer'
This is easy enough to work around by manually fixing the settings after the call, but I can see you go to some effort in the code to change and then reset parameters like this.
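A defensive pattern for this kind of save-and-restore code (just a sketch, not bpycv's actual implementation) is to skip attributes that no longer exist in newer Blender versions, such as use_zbuffer, which is gone from ImageFormatSettings in Blender 4.x:

```python
def set_attrs_safely(obj, attrs):
    """Set each attribute on obj, returning the old values so they can be
    restored later; silently skip attributes the object lacks."""
    old = {}
    for name, value in attrs.items():
        if not hasattr(obj, name):
            continue  # e.g. 'use_zbuffer' no longer exists in Blender 4.x
        old[name] = getattr(obj, name)
        setattr(obj, name, value)
    return old

# Demo on a plain object standing in for render.image_settings:
class Settings:
    color_depth = "8"

s = Settings()
old = set_attrs_safely(s, {"color_depth": "16", "use_zbuffer": True})
assert s.color_depth == "16"
assert "use_zbuffer" not in old  # skipped instead of raising AttributeError

# Restore the snapshot:
for name, value in old.items():
    setattr(s, name, value)
assert s.color_depth == "8"
```

Silently skipping does change behavior (the Z pass would simply not be enabled on 4.x), so logging a warning for skipped names may be preferable in practice.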
Blender 2.80
To reproduce, from Blender launch:
In the Scripting Console:
>>> import bpycv
/opt/blender/blender-2.80/2.80/python/lib/python3.7/site-packages/bs4/element.py:16: UserWarning: The soupsieve package is not installed. CSS selectors cannot be used.
'The soupsieve package is not installed. CSS selectors cannot be used.'
>>> bpy.context.scene.render.image_settings.color_depth
'8'
>>> bpycv.render_data(render_image=False)
# results...
>>> bpy.context.scene.render.image_settings.color_depth
'16'
>>> bpycv.__version__
'0.2.9'
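Until render_data restores color_depth itself, a wrap-around context manager is an easy user-side workaround. A sketch with a dummy settings object (inside Blender the target would be bpy.context.scene.render.image_settings):

```python
from contextlib import contextmanager

@contextmanager
def preserve_attrs(obj, *names):
    """Snapshot the named attributes and restore them on exit,
    even if the wrapped code raises."""
    saved = {n: getattr(obj, n) for n in names}
    try:
        yield obj
    finally:
        for n, v in saved.items():
            setattr(obj, n, v)

# Dummy stand-in for bpy.context.scene.render.image_settings:
class ImageSettings:
    color_depth = "8"

settings = ImageSettings()
with preserve_attrs(settings, "color_depth"):
    settings.color_depth = "16"   # what render_data effectively does internally
assert settings.color_depth == "8"  # restored after the block
```

Inside Blender the usage would be `with preserve_attrs(bpy.context.scene.render.image_settings, "color_depth"): bpycv.render_data(render_image=False)`.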
There is an error with the random file name saved in /tmp/. It seems the random filename was not generated.
I'm using blender 2.93 and python 3.9
Render image using: BLENDER_EEVEE
Saved: '/tmp/.png'
Time: 00:00.18 (Saving: 00:00.00)
Traceback (most recent call last):
File "/home/liuyanqi/spot/digit_new.blend/Text.002", line 225, in <module>
File "/home/liuyanqi/spot/digit_new.blend/Text.002", line 144, in render_scene
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/bpycv/render_utils.py", line 110, in render_data
render_result["image"] = _render_image()
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/bpycv/render_utils.py", line 90, in render_image
image = imread(png_path)[..., :3]
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/boxx/ylimg/ylimgTool.py", line 46, in imread
return imread(fname, as_grey, plugin, **plugin_args)
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/_io.py", line 48, in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/manage_plugins.py", line 207, in call_plugin
return func(*args, **kwargs)
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/skimage/io/_plugins/imageio_plugin.py", line 10, in imread
return np.asarray(imageio_imread(*args, **kwargs))
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/functions.py", line 159, in imread
with imopen(uri, "ri", plugin=format) as file:
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/imopen.py", line 137, in imopen
request = Request(uri, io_mode)
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/request.py", line 245, in __init__
self._parse_uri(uri)
File "/home/liuyanqi/Downloads/blender-2.93.6-linux-x64/2.93/python/lib/python3.9/site-packages/imageio/core/request.py", line 383, in _parse_uri
raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/tmp/tmp93g8ifud.png'
Error: Python script failed, check the message in the system console
Hi, I'm trying to run the demo script on ubuntu 18.04 with python3.7.7 and blender 2.92.0. However, during saving I get the following errors:
...
Saved: '/tmp/tmp3xramr2h.png'
Time: 00:00.93 (Saving: 00:00.32)
.../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py:69: BoxxWarning: warning from boxx
os.environ["DISPLAY"] is not found
plt.show() are redirect to plt.savefig(tmp)
function: show, loga, crun, heatmap, plot will be affected
def __setDisplayEnv():
Traceback (most recent call last):
File ".../test_script.py", line 28, in <module>
result = bpycv.render_data()
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 107, in render_data
render_result["image"] = _render_image()
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/bpycv/render_utils.py", line 87, in render_image
image = imread(png_path)[..., :3]
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylimg/ylimgTool.py", line 43, in imread
beforImportPlt()
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py", line 110, in beforImportPlt
__noDisplayEnv()
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/boxx/ylcompat.py", line 85, in __noDisplayEnv
import matplotlib.pyplot as plt
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/matplotlib/__init__.py", line 107, in <module>
from . import cbook, rcsetup
File ".../blender-2.92.0-linux64/2.92/python/lib/python3.7/site-packages/matplotlib/rcsetup.py", line 32, in <module>
from cycler import Cycler, cycler as ccycler
ModuleNotFoundError: No module named 'cycler'
Blender quit
I have tried ./python3.7m -m pip install -U cycler, and it shows "Requirement already satisfied". Do you have any suggestions on how I can overcome this issue?
Thanks in advance.
Hello, thanks for your great work. I've been using this for 6D pose estimation of industrial assets.
For the HDRI manager, is there a method to change the exposure so that we can simulate different lighting conditions?
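One simple knob (my assumption about a workable approach, not an existing bpycv API) is to scale the strength of the world's Background node by an exposure value in stops, so each +1 EV doubles the light:

```python
def strength_for_ev(base_strength, ev):
    """Scale a world Background strength by an exposure in stops:
    +1 EV doubles the light, -1 EV halves it."""
    return base_strength * 2.0 ** ev

assert strength_for_ev(1.0, 0) == 1.0
assert strength_for_ev(1.0, 1) == 2.0
assert strength_for_ev(2.0, -1) == 1.0

# Inside Blender this would be applied to the world's Background node, e.g.:
# bpy.context.scene.world.node_tree.nodes["Background"].inputs[
#     "Strength"].default_value = strength_for_ev(1.0, ev)
```

Randomizing ev over a small range per rendered image gives a cheap simulation of varying lighting conditions without switching HDRIs.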
Dear authors,
In the ycb_demo.py file, I set all the random location and rotation variables to fixed values, but the objects in the generated images still rotate in the x-y plane. Could you please tell me if I missed something?
E.g. lines 44-46 and line 48; meanwhile, I set the physics engine to run for 0 frames in lines 69-70.
Thanks a lot!
Hi there,
HDRI Haven seems to have been replaced by a site called Poly Haven, and the API endpoints bpycv uses don't seem to work any more (I just got a redirect and a 404 from the link https://hdrihaven.com/files/hdris/tv_studio_4k.hdr, which I found in the source).
I'm curious whether there are plans to update to Poly Haven? It appears to be a paid service now, which is actually fine, at least for me.
Thanks for your great work, it really helps me a lot.
My problem is that I don't know how to convert an exact RGB value such as (0, 0, 1) into Blender's RGB, which ranges over (0, 1).
I read your code and found that you apply a transform for this, so I would really like to know how to get the exact RGB values in Blender.
I hope for your reply!
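If the transform in question is the usual sRGB-to-linear conversion (Blender stores colors linearly, so an 8-bit sRGB value has to be linearized first — this is my assumption about what the code does, not a confirmed reading of bpycv), it looks like this:

```python
import numpy as np

def srgb_to_linear(srgb_0_255):
    """Convert 8-bit sRGB values to linear 0-1 floats
    using the standard sRGB transfer function."""
    c = np.asarray(srgb_0_255, dtype=np.float64) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    """Inverse transform, back to 8-bit sRGB."""
    lin = np.asarray(lin, dtype=np.float64)
    c = np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.round(c * 255.0).astype(np.uint8)

# The pair round-trips exactly for every 8-bit value:
v = np.arange(256)
assert (linear_to_srgb(srgb_to_linear(v)) == v).all()
```

A linearized value plugged into a Blender color socket should then reproduce the intended sRGB color on screen once Blender's color management re-applies the transfer function.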
Hi,
I'd like to use your module in one of my projects. Could you please add a license or confirm that the missing license is intended, if one is not allowed to use your code?
Thanks,
pilk
Hi authors,
I wonder if there is a way to add or change the color or texture of the box generated by the add_environment_box method? Currently it looks milky white. I didn't find any information in the source code regarding its color/appearance.
Thanks!
Dear authors,
I tried to add some other 3D models (in .obj format) as my objects, but the generated images don't contain them, even though there is no error when bpycv.load_obj loads them. Here is what I have done:
Could you please help me with it? Thanks!
Hi, thanks for the great initiative of creating this package.
I am having trouble with the installation. I have Windows 10, Blender 2.82, Python 3.7.4, pip 21.3.1. There is no ensurepip module, and I seem to be unable to install it. When I execute blender -b --python-expr "__import__('pip._internal')._internal.main(['install', '-U', 'bpycv'])", the terminal gets stuck in a loop of erroring and retrying the install. The error output is shown in the attached screenshots, with the second screenshot looping.
Any clue on how to solve this?
Thanks!
Thanks for the amazing repository. I was wondering how to use it for instance segmentation in videos?
Just like the title, will keypoint function be added later on in this project?
Hello,
Thank you for the great tool. I would be really interested to know if this tool :
a. supports semantic segmentation?
b. can we use our predefined labels like cityscapes label info?
Hello there 👋
As described in #38, one should specify the instance id as follows: object["inst_id"] = categories_id * 1000 + index. The following line demonstrates that uint16 should be used to store the instance segmentation information in an image:
Line 44 in 074f49b
I am assuming this is done to comply with Cityscape dataset, which is perfectly fine.
Since 2^16 = 65536 and the instance id is categories_id * 1000 + index, we cannot have more than 65 categories (classes) and cannot count more than roughly 500 instances per class with this approach, if my math is right.
But I would like to go way beyond this. I noticed that 32-bit integers are used internally, so I could use cv2.imwrite('/out/put/image.tiff', np.float32(result["inst"])) to save a 32-bit floating-point image, which obviously gives me many more possibilities for the number of categories and instances that can be annotated.
However I noticed following assertion in the code:
Line 147 in 074f49b
which again limits the instance id. I could not find an explanation for it, which brings me to my actual question:
Why must the inst_id be <= 100e4 here?
Many thanks in advance 😺
Hi, thanks for the awesome blender utils!
I am trying to convert particle emissions to objects and then segment them, but separate objects are considered part of the same instance. Here's the minimal code that reproduces the issue:
import bpy
import bpycv
import cv2
from bpy import context
import numpy as np
# Scene setup
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()
bpy.ops.mesh.primitive_cube_add(size=1)
bpy.ops.mesh.primitive_plane_add(size=20,)
bpy.ops.object.camera_add(location=(0, 0, 80), rotation=(0, 0, 0))
bpy.context.scene.camera = context.object
bpy.ops.object.light_add(type='SUN')
cube = bpy.data.objects["Cube"]
plane = bpy.data.objects["Plane"]
ps = plane.modifiers.new("part", 'PARTICLE_SYSTEM')
psys = plane.particle_systems[ps.name]
psys.settings.type = "HAIR"
psys.settings.use_advanced_hair = True
psys.settings.count = 2 # Creating two hair emissions of cube
psys.settings.instance_object = cube # Creating two hair emissions of cube
psys.settings.render_type = "OBJECT"
psys.settings.particle_size = 1
plane.select_set(True)
bpy.ops.object.duplicates_make_real() # Make instanced objects attached to this object real
# Render using bpycv
bpy.data.objects["Cube.001"]["inst_id"] = 1001
bpy.data.objects["Cube.002"]["inst_id"] = 1002
result = bpycv.render_data()
cv2.imwrite("demo.jpg", result["image"][..., ::-1])
cv2.imwrite("demo-seg.png", np.uint16(result["inst"]))
The inst_id values are correctly shown in the UI (one cube has inst_id = 1001 and the second has inst_id = 1002), but all cubes end up in the same instance in demo-seg.png (all cubes have inst_id = 1002).
If you have objects in your scene that share the same mesh, set_inst_material will not recover the original material after it's done running. This causes the objects to permanently turn green. This is because of the replace_collection code: it doesn't recognize that it has changed the same object twice, so it overwrites the original materials stored in the StatuRecover object with the emission material and thereby loses all reference to the originals.
Line 41 in 7a6c7a1
Line 56 in 7a6c7a1
Dear authors,
Very nice work! I wonder how to add a PNG image as a background, such as an empty box/bin, for training robotic arms? Thanks for your answer!
Hi Lei,
I am also working on using Blender to render synthetic images for deep-learning training, though I am new to Blender.
I need to use an add-on named retarget-bvh to load motion-capture data. I wonder if bpy can be used to call third-party add-ons.
Thanks,
Zhe
I have a bunch of obj files and, as the title says, I want to replace bpy.ops.mesh.primitive_cube_add with my own obj files.
I didn't refer to ycb_demo.py because my obj files are very simple, just like the primitives in demo_vis: they only contain vertices and faces, with no materials.
How can I achieve this?
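One way to swap the cube for imported meshes is a small import loop (a sketch, assuming Blender 2.8x-3.x, where the importer operator is bpy.ops.import_scene.obj; in 4.x it is bpy.ops.wm.obj_import). The importer assigns object names on its own, so the new objects are easiest to grab from the selection; the /path/to/objs pattern is a placeholder:

```python
import glob
import bpy

for inst_index, path in enumerate(sorted(glob.glob("/path/to/objs/*.obj")), start=1):
    bpy.ops.import_scene.obj(filepath=path)      # 2.8x-3.x importer
    # bpy.ops.wm.obj_import(filepath=path)       # Blender 4.x equivalent
    for obj in bpy.context.selected_objects:     # importer selects what it added
        obj["inst_id"] = 1000 + inst_index       # same id scheme as the cube demo
```

After that, bpycv.render_data() should annotate the imported meshes just like the primitives; meshes without materials are fine, since the instance pass replaces materials anyway.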
Hi,
I get the following errors if I try the example script with Blender 3.0:
Eevee:
AttributeError: 'RenderSettings' object has no attribute 'tile_x'
Cycles:
scene.view_layers["View Layer"].cycles, "use_denoising", False
KeyError: 'bpy_prop_collection[key]: key "View Layer" not found'
Are there solutions for this? Will Blender 3.0 get support?
Hello,
I'm working on a computer vision project in which I try to use synthetic data for image segmentation.
Your library can already generate a segmentation map of the objects in a scene. However, I also need the mask of the shadows cast by the objects. Is it possible to do that with bpycv?
For example, the left image is the image rendered by Blender (via Render / Render Image) and the right image is what I want. Currently, when I generate the segmentation mask with bpycv, I only get the red part (wind turbines), but I would also like the green part (shadow). Note that the shadow is not rendered by bpycv when I save the result["image"] array (just like in the given examples).
Thanks
Thank you for providing a great library.
I would like to know how to do this when there are multiple cameras.
So far (if I'm not mistaken) I can only get data from the camera that exists by default in Blender. Please let me know how to render from the cameras I have specified.
I look forward to your reply.
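As far as I can tell, render_data renders through the scene's active camera, so switching bpy.context.scene.camera before each call should be enough — a sketch (not a confirmed bpycv feature, just standard Blender camera switching):

```python
import bpy
import bpycv

results = {}
for cam in [o for o in bpy.data.objects if o.type == "CAMERA"]:
    bpy.context.scene.camera = cam            # make this camera the active one
    results[cam.name] = bpycv.render_data()   # one annotation set per viewpoint
```

Each entry in results then holds the image, depth, and instance map for one camera's viewpoint.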
Hello,
Thanks for providing this useful tool!
An error occurred when I installed bpycv on Windows:
ERROR: Command errored out with exit status 1:
command: 'C:\Program Files\Blender Foundation\Blender 2.90\2.90\python\bin\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Salingo\\AppData\\Local\\Temp\\pip-install-o3rqu4a6\\bpycv\\setup.py'"'"'; __file__='"'"'C:\\Users\\Salingo\\AppData\\Local\\Temp\\pip-install-o3rqu4a6\\bpycv\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Salingo\AppData\Local\Temp\pip-pip-egg-info-nduuwed0'
cwd: C:\Users\Salingo\AppData\Local\Temp\pip-install-o3rqu4a6\bpycv\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Salingo\AppData\Local\Temp\pip-install-o3rqu4a6\bpycv\setup.py", line 13, in <module>
long_description = f.read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xae in position 318: illegal multibyte sequence
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I think this is an issue related to string encoding; maybe you can modify line 12 in setup.py to:
with open("README.md", encoding='utf-8') as f:
Thanks!
Hi! Thanks for the awesome Blender util!
I use it to create a segmentation map for a bunch of clumped objects created with instances and geometry nodes. I make the instances real before creating the instance map and reassign their meshes with data.copy().
The problem is that I have more than 100 objects in the area, but some objects still share the same color on the instance map, which is limited to 20-30 distinct values.
I have made many attempts at changing inst_id, but the situation stays the same. How can I fix it?
Here is the code (some hacks were used to work with the materials, so the code is not perfect):
for img_num in range(1):
    # remove all MESH objects
    bpy.ops.wm.read_homefile()
    update_camera(bpy.data.objects['Camera'])
    update_light(bpy.data.objects['Light'])
    with bpy.data.libraries.load('rocks.blend') as (data_from, data_to):
        data_to.collections = data_from.collections
    with bpy.data.libraries.load('froth_gm.blend') as (data_from, data_to):
        data_to.materials = data_from.materials
    # bpy.ops.mesh.primitive_cube_add(size=1, location=(0,0,0))
    cube = bpy.data.objects.get('Cube')
    modifier = cube.modifiers.new("Bubbles", "NODES")
    modifier.node_group = geometry_nodes_node_group1(seed=(2, 4, 3), dists=(70.0, 50.0, 50))
    for scene in bpy.data.scenes:
        scene.cycles.device = 'GPU'
    prefs = bpy.context.preferences
    cprefs = prefs.addons['cycles'].preferences
    bpy.context.scene.render.engine = 'CYCLES'
    bpy.context.preferences.addons[
        "cycles"
    ].preferences.compute_device_type = "CUDA"  # or "OPENCL"
    # bpy.context.scene.cycles.samples = 32
    # Set the device and feature set
    bpy.context.scene.cycles.device = "GPU"
    result = bpycv.render_data()
    # save result
    cv2.imwrite(
        f"img{img_num}.jpg", result["image"][..., ::-1]
    )  # transfer RGB image to opencv's BGR
    # modifier = cube.modifiers.new("Bubbles", "NODES")
    modifier.node_group = geometry_nodes_node_group2(seed=(2, 4, 3), dists=(70.0, 50.0, 50))
    cube.select_set(True)
    bpy.context.view_layer.objects.active = cube
    bpy.ops.object.mode_set(mode='OBJECT')
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.object.mode_set(mode='OBJECT')
    bpy.ops.object.duplicates_make_real()
    for i, o in enumerate(bpy.data.objects):
        if o.type in ("MESH", "CURVE"):
            o.data = o.data.copy()
            o["inst_id"] = randint(1, 8) * 1000 + i
    bpy.context.scene.render.engine = 'BLENDER_EEVEE'
    bpy.context.scene.render.resolution_y = 512
    bpy.context.scene.render.resolution_x = 512
    bpy.context.view_layer.cycles.use_denoising = True
    bpy.context.view_layer.cycles.denoising_store_passes = True
    bpy.context.scene.cycles.samples = 200
    # Set the device and feature set
    bpy.context.scene.cycles.device = "GPU"
    result = bpycv.render_data(render_image=False)
    objs = {
        obj.name: obj for obj in bpy.data.objects if obj.type in ("MESH", "CURVE")
    }
    print(len(objs))
    cv2.imwrite(f"mask{img_num}.png", np.uint16(result["inst"]))
    cv2.imwrite(f"demo-vis{img_num}.jpg", result.vis()[..., ::-1])
First of all, this tool is absolutely perfect for what I need and I love it! It was working fine for me on a simpler scene, but now I am using it to create an image segmentation dataset on a more complex scene with a different material setup. I've now encountered a bug when multiple materials are used for a single mesh.
Here is the simple script I have setup which causes this behavior:
import sys
sys.path.append("/home/mitchell/anaconda3/envs/blender/lib/python3.7/site-packages")
import bpycv
result = bpycv.render_data()
Before running:
After running:
And here you can see vaguely what my materials look like for the city object beforehand:
The first material (default.001
) is applied to all parts of the city after calling render_data()
, whereas before there were 4 different materials for different parts of the city.
I can imagine this would be difficult to support, however one quick way to solve this for me would be the ability to ignore the city object entirely when generating annotations. I'm not sure if this is possible though.
I'm not a blender expert by any means, but if there's anything else you'd like to know I'd be happy to help. Thanks in advance!
Thanks for open-sourcing this handy package.
I'm wondering whether you have suggestions for controlling depth-map accuracy in Blender 2.9?
In my tests, the reprojection errors of depth points (on the input mesh) are around 1e-3, even though I already use OpenCV's EXR format to export the depth image in float32 precision.
This suggests the errors come from Blender's depth map.
Or they may come from Blender's rendering logic, e.g.:
https://blender.stackexchange.com/questions/87504/blender-cycles-z-coordinate-vs-camera-centered-z-coordinate-orthogonal-distan
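If the error stems from Cycles storing distance along the camera ray rather than planar Z (the distinction discussed in the linked question), the map can be converted with the pinhole intrinsics — a sketch assuming fx, fy, cx, cy are known and the depth map holds ray lengths (whether that matches your export path is an assumption to verify):

```python
import numpy as np

def ray_depth_to_planar_z(depth, fx, fy, cx, cy):
    """Convert a per-pixel distance-along-camera-ray map into planar Z
    (distance to the image plane) for a pinhole camera."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Direction of each pixel's ray in camera coordinates, z component = 1:
    x = (u - cx) / fx
    y = (v - cy) / fy
    ray_len = np.sqrt(x**2 + y**2 + 1.0)
    return depth / ray_len

# At the principal point the ray is the optical axis, so nothing changes:
d = np.full((5, 5), 2.0)
z = ray_depth_to_planar_z(d, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
assert np.isclose(z[2, 2], 2.0)
assert (z <= 2.0 + 1e-9).all()  # off-axis pixels get a smaller planar Z
```

If applying this correction shrinks the 1e-3 reprojection error, the depth pass was ray distance; if not, the residual is likely float precision in the render itself.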
OCRTOC2020 Website:
http://www.ocrtoc.org/ocrtoc2020
Details of the whole solution (in Chinese): "Summary of the IROS 2020 OCRTOC Competition" - Team PHAI Robotics
Hello,
Recently I met a weird bug when trying to build a virtual scanner.
If I directly run bpycv.render_data()
after blender is initialized, the rendered color image looks like this:
However, the result becomes correct if I run bpycv.render_data()
twice:
The same issue occurs with other models, so maybe something is wrong inside the render function?
Hi,
I am trying to do a semantic segmentation for a dataset containing plants.
My models contain a lot of leaves, which are basically built using support squares, with an image of a leaf on them.
In these images the background is transparent.
In the RGB render, the leaves appear normal, the transparency works - I can only see the leaves, not their supports.
However, in the depth and semantic windows, the leaves appear square (I tried .obj and .fbx formats).
Is there a setting I can use so that the semantic mask is created using only the visible parts, without the transparent support squares? Or do I have to fix the plant models first? (In NVIDIA Omniverse the segmentation is correct for these models.)
Thanks for this work. I notice that bpycv supports the Cityscapes format; I want to ask whether there is a demo or doc demonstrating how to save the results in Cityscapes format.