Comments (7)
Sure, looks good. Can you make a pull request?
from multiview_calib.
It's very convenient to store a calibrated model as a dict in a pickle file.
In [11]: pkldata['ba_poses']
Out[11]:
{0: {'R': array([[ 0.52789776, -0.30372616, 0.79314209],
[ 0.16299775, 0.95274017, 0.25635503],
[-0.83352006, -0.00604887, 0.55245608]]),
't': array([-1.25329266, -0.38610411, 0.76373023]),
'K': array([[435.00008899, 0. , 332.99999789],
[ 0. , 582.00003338, 235.99999701],
[ 0. , 0. , 1. ]]),
'dist': array([-3.74936993e-01, 1.26073421e-01, 3.88545638e-05, -9.97291997e-05,
8.18460227e-05]),
'image_shape': [480, 640]},
...
5: {'R': array([[-0.91910905, 0.09263789, -0.38295794],
[-0.05424301, 0.93296044, 0.35586866],
[ 0.39025153, 0.3478549 , -0.85246743]]),
't': array([ 0.77451431, -0.51324259, 1.97152702]),
'K': array([[675.00008628, 0. , 315.99999885],
[ 0. , 900.99990012, 236.00001239],
[ 0. , 0. , 1. ]]),
'dist': array([-2.47093563e-01, 1.73903032e-01, 9.99120080e-05, -9.36428281e-05,
-9.81393826e-05]),
'image_shape': [480, 640]}}
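For reference, a minimal sketch of saving and reloading a dict like the one above with pickle (the placeholder values and the filename `calib.pkl` are hypothetical):

```python
import pickle
import numpy as np

# A calibrated-model dict in the same shape as pkldata above;
# the values are placeholders and 'calib.pkl' is a hypothetical filename.
pkldata = {'ba_poses': {0: {'R': np.eye(3),
                            't': np.zeros(3),
                            'K': np.eye(3),
                            'dist': np.zeros(5),
                            'image_shape': [480, 640]}}}

# Write the dict to disk
with open('calib.pkl', 'wb') as f:
    pickle.dump(pkldata, f)

# Reload it later for prediction
with open('calib.pkl', 'rb') as f:
    loaded = pickle.load(f)
```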
Then, I can load the camera model to do 2D <-> 3D conversion and camera position calculation.
https://github.com/chenxinfeng4/multiview_calib/blob/master/multiview_calib/calibpkl_predict.py
Would you accept this kind of feature? If so, I will create a PR this weekend.
The library already returns this data in ba_poses.json
and ba_points.json,
so there is no need to create additional files. Regarding the CalibPredict object, I would not add it. The only functions you need are get_cam_direct_p3d
and get_cam_pos_p3d,
which are very simple and can each be expressed in a single line of code. So I suggest simply adding them directly to your visualisation function.
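As a rough sketch of those one-liners (R and t here are placeholder extrinsics; real values come from ba_poses.json): the camera center in world coordinates is -R^T t, and the camera's world-frame axes are the columns of R^T.

```python
import numpy as np

# Placeholder extrinsics; in practice these come from ba_poses.json.
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
t = np.array([0.0, 1.0, 2.0])

cam_pos = -R.T @ t  # camera center in world coordinates (one line)
cam_dirs = R.T      # column i is the camera's i-th axis in world coordinates (one line)
```

Sanity check: projecting the camera center back through the extrinsics, R @ cam_pos + t, gives the origin of the camera frame.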
Fine, I'll try to make it simple.
Here is the self contained file:
# draw_camera_model.py
import numpy as np
import plotly
import plotly.graph_objects as go
plotly.offline.init_notebook_mode()
ba_poses = {0:{'K':[[874.9999351 , 0. , 640.00001674],
[ 0. , 875.00007767, 400.00003827],
[ 0. , 0. , 1. ]],
'dist':[-4.92013108e-03, -7.91381985e-03, 0, 0, 0, 0, 0, 0],
'R':[[ 0.78378342, 0.61977314, -0.03955757],
[ 0.08759756, -0.17338846, -0.9809501 ],
[-0.61482536, 0.76538728, -0.19018962]],
't':[18.09726146, 50.29056549, 593.49752867]},
1:{'K':[[1.136e+03 , 0. , 640.00001674],
[ 0. , 1.136e+03 , 400.00003827],
[ 0. , 0. , 1. ]],
'dist':[5.00793716e-02, -9.49329055e-02, 0, 0, 0, 0, 0, 0],
'R':[[-0.48882095, 0.8718914 , 0.02931664],
[ 0.57271159, 0.34607445, -0.74312442],
[-0.65806953, -0.34646481, -0.66851076]],
't':[-24.83672613, 19.63115592, 907.02641429]},
2:{'K':[[875.00007091, 0. , 640.00007958],
[ 0. , 874.99990097, 399.99991745],
[ 0. , 0. , 1. ]],
'dist':[4.99024296e-02, -9.50997289e-02, 0, 0, 0, 0, 0, 0],
'R':[[ 0.18673236, -0.98208336, -0.02536347],
[-0.22417059, -0.0174587 , -0.97439352],
[ 0.95649285, 0.18763655, -0.22341431]],
't':[-9.99416695, 42.44887449, 570.09911752]},
3:{'K':[[1.136e+03 , 0. , 640.00001674],
[ 0. , 1.136e+03 , 400.00003827],
[ 0. , 0. , 1. ]],
'dist':[-4.92013108e-03, -7.91381985e-03, 0, 0, 0, 0, 0, 0],
'R':[[-0.45859272, -0.888345 , 0.02314877],
[-0.60672637, 0.29396643, -0.73855727],
[ 0.6492887 , -0.35274196, -0.67379321]],
't':[-43.52338928, 18.84446863, 917.22939038]},
}
nview = len(ba_poses)
# 1. create a class to calculate camera poses
class CalibPredict:
    def __init__(self, poses):
        self.poses = poses
        self.views = sorted(list(poses.keys()))
        for view in self.views:
            for item in ['K', 'R', 't', 'dist']:
                self.poses[view][item] = np.array(self.poses[view][item])

    def get_cam_pos_p3d(self) -> np.ndarray:
        """Get the camera positions in 3D world coordinates."""
        cam_pos = np.zeros((len(self.views), 3), dtype=float)
        for i, view in enumerate(self.views):
            param = self.poses[view]
            R, t = param['R'], param['t']
            cam_pos[i] = (-np.linalg.inv(R) @ t.reshape(-1, 1)).ravel()
        return cam_pos

    def get_cam_direct_p3d(self) -> np.ndarray:
        """Get the X, Y, Z axis directions of each camera in world coordinates."""
        cam_oxyz = np.zeros((len(self.views), 4, 3), dtype=float)
        oxyz = np.eye(4)[1:, :]  # (3, 4): columns are origin and unit X, Y, Z points
        for i, view in enumerate(self.views):
            param = self.poses[view]
            R, t = param['R'], param['t']
            cam_oxyz[i] = (np.linalg.inv(R) @ (oxyz - t.reshape(3, 1))).T
        cam_xyz = cam_oxyz[:, 1:] - cam_oxyz[:, [0]]
        return cam_xyz
# 2. camera plot
def get_cam_pose_vert(center: np.ndarray, rotate: np.ndarray, scale: float = 20):
    rotation_matrix = rotate.T
    # Frustum vertices: near square, far square, and image-plane rectangle
    vertices = [
        [-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0],  # near square
        [-1, -1, 3], [1, -1, 3], [1, 1, 3], [-1, 1, 3],  # far square
        [-5, -3, 5], [5, -3, 5], [5, 3, 5], [-5, 3, 5],  # image-plane rectangle
    ]
    vertices = np.array(vertices) * scale
    edges = [
        (0, 1), (1, 2), (2, 3), (0, 3),      # near square edges
        (4, 5), (5, 6), (6, 7), (4, 7),      # far square edges
        (0, 4), (1, 5), (2, 6), (3, 7),      # near square to far square
        (8, 9), (9, 10), (10, 11), (8, 11),  # rectangle edges
        (4, 8), (5, 9), (6, 10), (7, 11),    # far square to rectangle
    ]
    x, y, z = zip(*vertices)
    x, y, z = rotation_matrix @ np.array([x, y, z]) + center[:, None]
    # Extract the coordinates of the edges
    edge_x, edge_y, edge_z = [], [], []
    for s, e in edges:
        edge_x += [x[s], x[e], None]  # None between edge points draws separate lines
        edge_y += [y[s], y[e], None]
        edge_z += [z[s], z[e], None]
    return edge_x, edge_y, edge_z
# 3. calculate camera positions and directions
calibobj = CalibPredict(ba_poses)
nview_direct = calibobj.get_cam_direct_p3d()  # X, Y, Z directions for the camera skeleton
nview_direct /= np.linalg.norm(nview_direct, axis=-1, keepdims=True)
camp3d = calibobj.get_cam_pos_p3d()

# 4. Plot the camera models
edge_x, edge_y, edge_z = [], [], []
for i in range(nview):
    edge_x_, edge_y_, edge_z_ = get_cam_pose_vert(camp3d[i], nview_direct[i])
    edge_x.extend(edge_x_)
    edge_y.extend(edge_y_)
    edge_z.extend(edge_z_)
camerasModel = go.Scatter3d(
    x=edge_x,
    y=edge_y,
    z=edge_z,
    mode='lines',
    line=dict(color='blue', width=4)
)
# 5. Plot the camera height lines
edge_x, edge_y, edge_z = [], [], []
for i in range(nview):
    x, y, z = camp3d[i]
    edge_x.extend([x, x, None])
    edge_y.extend([y, y, None])
    edge_z.extend([z, 0, None])
camerasHeight = go.Scatter3d(
    x=edge_x,
    y=edge_y,
    z=edge_z,
    mode='lines',
    line=dict(color='black', width=2)
)
# 6. Plot ball objects (a spiral of markers)
theta = np.linspace(0, 4*np.pi, 100)
ball_x = 200 * np.cos(theta)
ball_y = 200 * np.sin(theta)
ball_z = np.linspace(0, 300, 100)
ballObject = go.Scatter3d(
    x=ball_x,
    y=ball_y,
    z=ball_z,
    mode='markers',
    marker=dict(
        size=4,
        color='#e87518',
        opacity=1
    )
)
# 7. Set figure style
fig = go.Figure(data=[camerasModel, camerasHeight, ballObject])
fig.update_layout(
    width=1000,
    height=800,
    scene=dict(
        xaxis_title='X',
        yaxis_title='Y',
        zaxis_title='Z',
        xaxis=dict(
            tickvals=list(range(-600, 600, 200)) + [600],
            showticklabels=True  # show tick labels at the values above
        ),
        yaxis=dict(
            tickvals=list(range(-600, 600, 200)) + [600],
            showticklabels=True  # show tick labels at the values above
        ),
        zaxis=dict(
            tickvals=list(range(0, 600, 200)) + [600],
            showticklabels=True  # show tick labels at the values above
        )
    ))
plotly.offline.iplot(fig)
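For the 2D <-> 3D conversion mentioned earlier, here is a minimal sketch of the forward projection using K, R, t in the same shape as ba_poses (the values below are placeholders, and lens distortion is omitted for brevity):

```python
import numpy as np

# Placeholder intrinsics and extrinsics; real values come from ba_poses.
K = np.array([[875.0,   0.0, 640.0],
              [  0.0, 875.0, 400.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])

X = np.array([100.0, 50.0, 0.0])  # a 3D world point
x_cam = R @ X + t                 # world frame -> camera frame
uvw = K @ x_cam                   # pinhole projection (homogeneous)
u, v = uvw[:2] / uvw[2]           # pixel coordinates
```

To also model distortion, the same K, R, t, and dist arrays can be passed to OpenCV's cv2.projectPoints instead of doing the matrix products by hand.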
I have reorganized your code and created a script here: visualisation
I just realized that I no longer have the rights to modify this repo. I'll try to figure this out if possible.
It looks better. Thanks for the quick reply.