Comments (19)
OK. I will look forward to it.
from scenerf.
Great! You can watch the repo, so you will be notified when I update it.
from scenerf.
The reconstructed results from the updated code are looking much better, and I can clearly see a more reasonable FOV. Some minor details may still need improvement due to insufficient training epochs. I will try evaluating all reconstruction metrics once I finish my training. Thanks for the update!
from scenerf.
Awesome! I'm so glad it's working for you. Let me know if there's anything else I can do to help. 😊
from scenerf.
Hi, which TSDF threshold did you use?
Also, when drawing, you need to remove the voxels with label 255.
from scenerf.
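As a minimal sketch of the point above (pure numpy; the function name is hypothetical): mask out the unknown voxels before scoring, so that label 255 counts neither as occupied nor as free.

```python
import numpy as np

def occupancy_iou(pred, gt, unknown_label=255):
    """IoU over known voxels only: GT voxels labeled 255 (never hit by
    any lidar ray) are excluded from both intersection and union."""
    known = gt != unknown_label        # voxels actually observed in the sequence
    pred_occ = pred[known] > 0         # predicted occupancy on known voxels
    gt_occ = gt[known] > 0             # GT occupancy (any semantic label > 0)
    inter = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    return inter / union if union > 0 else 0.0

gt = np.array([0, 1, 1, 255, 0, 255])
pred = np.array([0, 1, 0, 1, 1, 0])
print(occupancy_iou(pred, gt))  # 1 intersection / 3 union over the 4 known voxels
```

Without the mask, the two 255 voxels would be counted as free space and silently penalize (or inflate) the score.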
The truncation margin was set to 10. I used the default settings in depth2tsdf.py and fusion.py. Thanks for the kind reminder. I wanted to see the overall result, so I put all the labels together :)
from scenerf.
The 255 denotes unknown voxels, i.e. voxels that no lidar ray passes through in the whole sequence.
Do you use this function to generate the occupancy?
https://github.com/astra-vision/SceneRF/blob/main/scenerf/scripts/evaluation/eval_sr.py#L11
The idea is to use an increasing threshold, since the depth error increases with the distance to the vehicle.
Otherwise, if you consider all voxels within ±10 m around each depth value as occupied, then around 20/0.2 = 100 voxels are set to occupied for each depth value.
from scenerf.
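The distance-dependent threshold described above can be sketched as follows (pure numpy; `base_th` and `growth` are illustrative constants, not the values used in eval_sr.py):

```python
import numpy as np

def tsdf_to_occupancy(tsdf, voxel_centers, base_th=0.25, growth=0.01):
    """Mark a voxel occupied when |TSDF| is below a threshold that grows
    with the voxel's distance to the vehicle: th(d) = base_th + growth * d.
    A fixed large margin (e.g. +-10 m) would instead flag ~20/0.2 = 100
    voxels per depth value at a 0.2 m voxel size."""
    d = np.linalg.norm(voxel_centers, axis=-1)  # distance to the sensor origin
    th = base_th + growth * d                   # error budget grows with range
    return np.abs(tsdf) < th

# Example: the same TSDF value of 0.3 is rejected near the vehicle but
# accepted at range, where depth errors are larger.
centers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
print(tsdf_to_occupancy(np.array([0.3, 0.3, 0.3]), centers))
```

The per-voxel threshold keeps near-field geometry tight while still accepting noisier far-field surfaces.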
Yes, I used this function with the default settings in eval_sr.py. I tried ignoring the 255 label and the corresponding no-lidar area in the GT disappeared, but my prediction result still looks the same.
from scenerf.
Is it the same for other frames? Did you try to draw the mesh? Do the rendered depth images look fine?
from scenerf.
I drew some frames and they all looked the same... The corresponding depth and RGB images look fine. I attached them below.
from scenerf.
Thanks, let me check it!
What if you change the variable th here to a smaller number? Is it still the same?
https://github.com/astra-vision/SceneRF/blob/main/scenerf/scripts/evaluation/eval_sr.py#L12
from scenerf.
It should look like this.
This is what I drew months ago; I will need to check.
from scenerf.
You can obtain the mesh from this line
https://github.com/astra-vision/SceneRF/blob/main/scenerf/scripts/reconstruction/depth2tsdf.py#L107
from scenerf.
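For context, the line above extracts the mesh from the fused TSDF; conceptually this is marching cubes over the TSDF zero level set. A standalone sketch with scikit-image on a toy sphere TSDF (not the repo's fusion code):

```python
import numpy as np
from skimage import measure

# Toy TSDF: signed distance to a sphere of radius 8 voxels inside a 32^3 grid.
dim = 32
x, y, z = np.meshgrid(np.arange(dim), np.arange(dim), np.arange(dim), indexing="ij")
center = dim / 2
tsdf = np.sqrt((x - center) ** 2 + (y - center) ** 2 + (z - center) ** 2) - 8.0

# The reconstructed surface is the zero level set of the TSDF.
verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0)

# Vertices lie (approximately) on the sphere of radius 8 around the center.
radii = np.linalg.norm(verts - center, axis=1)
print(round(float(radii.mean()), 2))
```

If the TSDF itself is broken, the mesh will show it immediately, which makes this a quick sanity check on the reconstruction.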
I tried the default th=0.25 and th=0.15. They still look the same :(
Many thanks for your prompt response! Great work. I will check it as well.
from scenerf.
Sorry for this, I probably changed something when cleaning the code.
I will update it in a few weeks.
from scenerf.
Seems like the threshold is not applied at all.
from scenerf.
Hi @Luciferbobo,
Sorry, I am swamped. I just tried to draw the occupancy prediction using the following code:
```python
import os

import numpy as np
import open3d as o3d


def get_grid_coords(dims, resolution):
    """
    :param dims: the dimensions of the grid [x, y, z] (i.e. [256, 256, 32])
    :return coords_grid: the center coords of voxels in the grid
    """
    # The sensor is centered in X (we go to dims/2 + 1 for the histogramdd)
    g_xx = np.arange(0, dims[0] + 1)
    # The sensor is in Y=0 (we go to dims + 1 for the histogramdd)
    g_yy = np.arange(0, dims[1] + 1)
    # The sensor is in Z=1.73. I observed that the ground was two voxel levels
    # above the grid bottom, so the Z pose is at 10 if the bottom voxel is 0.
    # If we want the sensor to be at (0, 0, 0), then the bottom in z is -10,
    # top is 22 (we go to 22 + 1 for the histogramdd).
    # ATTENTION: it is 11 for old grids, 10 for new grids (v1.1)
    # (https://github.com/PRBonn/semantic-kitti-api/issues/49)
    sensor_pose = 10
    g_zz = np.arange(0, dims[2] + 1)

    # Obtaining the grid with coords...
    xx, yy, zz = np.meshgrid(g_xx[:-1], g_yy[:-1], g_zz[:-1])
    coords_grid = np.array([xx.flatten(), yy.flatten(), zz.flatten()]).T
    coords_grid = coords_grid.astype(float)  # np.float is removed in recent numpy
    coords_grid = (coords_grid * resolution) + resolution / 2

    # Swap the X and Y columns
    temp = np.copy(coords_grid)
    temp[:, 0] = coords_grid[:, 1]
    temp[:, 1] = coords_grid[:, 0]
    coords_grid = np.copy(temp)

    return coords_grid, g_xx, g_yy, g_zz


def draw(voxels, cam_param_path="", voxel_size=0.04):
    voxels[voxels == 255] = 0  # discard unknown voxels
    grid_coords, _, _, _ = get_grid_coords(
        [voxels.shape[0], voxels.shape[1], voxels.shape[2]], voxel_size)
    points = np.vstack([grid_coords.T, voxels.reshape(-1)]).T

    # Keeping only voxels with a semantic class
    points = points[points[:, 3] != 0]

    vis = o3d.visualization.Visualizer()
    vis.create_window(width=1200, height=600)
    ctr = vis.get_view_control()

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points[:, :3])
    pcd.estimate_normals()
    vis.add_geometry(pcd)

    if cam_param_path:  # restore a saved viewpoint if one is given
        param = o3d.io.read_pinhole_camera_parameters(cam_param_path)
        ctr.convert_from_pinhole_camera_parameters(param)

    vis.run()  # user changes the view and presses "q" to terminate

    if cam_param_path:  # save the viewpoint for next time
        param = vis.get_view_control().convert_to_pinhole_camera_parameters()
        o3d.io.write_pinhole_camera_parameters(cam_param_path, param)


path = "Your path to stored TSDF output"
frame_id = "001385.npy"
tsdf_path = os.path.join(path, frame_id)
tsdf = np.load(tsdf_path)

occ = np.zeros_like(tsdf)
occ[tsdf > 0.2] = 0            # free space (already zero from initialization)
occ[np.abs(tsdf) < 0.2] = 1    # occupied: TSDF within the truncation band
draw(occ)
```
from scenerf.
Hi. Apologies for also being swamped with other things lately. Thank you very much for the update! I tried the function you shared, but the results still seem to have some issues... Here's a comparison between the GT (left) and my prediction (right).
I guess the visualization results produced by Open3D and Matplotlib Axes3D should be similar, so the issue might be in the code of depth2tsdf.py. I must admit I'm not very familiar with the parameter settings of the TSDFVolume function. Could the visualization issue be due to the parameter settings in this section of the code? Thx! =v=
from scenerf.
Hi @Luciferbobo,
Thank you for the information! I just found a bug related to the reconstruction and have updated the code. Could you please try cloning again?
from scenerf.