
robotcar-dataset-sdk's People

Contributors

chrisros avatar danieledema avatar dbarnes avatar mrgransky avatar mttgdd avatar podgib avatar samarth-robo avatar


robotcar-dataset-sdk's Issues

calibration

I'm confused about the stereo calibration. I downloaded the camera models file, but the files confuse me. Could you tell me the stereo camera calibrations (fx, fy, cx, cy) and the extrinsics?

About the extrinsic parameters

I'm currently working on multi-camera cooperation in SLAM systems and I'm really interested in your dataset, but I'm a little confused about the extrinsic parameters in the provided SDK. I don't know the exact meaning of the parameters and I can't find any description either.
According to "SE3MatrixFromComponents.m", the parameters should be a translation vector followed by three Euler angles. And judging from the parameters you provide, I guess the transformation matrices obtained by passing the extrinsic parameters through "SE3MatrixFromComponents.m" are the transformations from the sensors to the stereo camera, which faces the x direction. So the following should hold:

Trans_4by4 = SE3MatrixFromComponents(sensor_parameters_1by6)
Corresponding_Point_in_stereo_frame = Trans_4by4 * Point_in_sensor_frame
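For what it's worth, the equivalent check with the Python SDK would look roughly like this (a minimal sketch; the extrinsic values and the test point are placeholders, not taken from a real file):

import numpy as np
from transform import build_se3_transform  # Python counterpart of SE3MatrixFromComponents.m

xyzrpy = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]             # placeholder x, y, z, roll, pitch, yaw from an extrinsics file
trans = build_se3_transform(xyzrpy)                  # 4x4 SE(3) matrix
p_sensor = np.matrix([[1.0], [0.0], [0.0], [1.0]])   # homogeneous point in the sensor frame
p_stereo = trans * p_sensor                          # point in the stereo-camera frame, if the reading above is correct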

So am I right or not? Could you help me?

Another question is about the extrinsic calibration of the cameras. Could you share some experience about the extrinsic calibration of the cameras, since your paper doesn't say much about it? Could you also tell me the accuracy of the camera extrinsic calibrations? And are there any camera extrinsic calibration sequences in the dataset? Thank you very much!

The camera parameters of the undistorted images

Hi,

Thank you very much for sharing and publishing the dataset.

I have one problem. I want to use the stereo images, but I notice that only the original camera parameters and the distortion LUT (bin file) are provided. I tried to undistort the images using your SDK, but how can I get the new camera parameters? Do you have a camera parameter file for the undistorted images?
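For reference, this is roughly how an image is undistorted with the Python SDK (a minimal sketch; the paths are hypothetical):

from camera_model import CameraModel
from image import load_image

model = CameraModel('models', '2014-05-14-13-59-05/stereo/centre')  # picks the LUT matching the camera
undistorted = load_image('2014-05-14-13-59-05/stereo/centre/1400076055044636.png', model)

The model also exposes model.focal_length and model.principal_point, read from the corresponding models/*.txt file.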

Thank you very much for your help

Kind Regards,

Ruihao

How can I align the ground truth across different trajectories?

How should I perform this step:

As each ground truth odometry file does not begin at the same location in the urban environment, we manually selected (for each of the 32 trajectories) a moment during which the vehicle was stationary at a common point and manually aligned the ground traces. (Kidnapped radar, 2020)
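One possible reading of that step, as a minimal sketch (assuming poses is a list of 4x4 absolute poses for one trajectory and ref_idx is the manually chosen index at which the vehicle is stationary at the common point):

import numpy as np

def align_to_reference(poses, ref_idx):
    # Express every pose relative to the pose at the common stationary moment,
    # so all trajectories share the same origin after this step.
    T_ref_inv = np.linalg.inv(poses[ref_idx])
    return [T_ref_inv @ T for T in poses]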

The RTK extrinsics are not consistent with the figure in the IJRR paper

Hello, I plotted the position and orientation from the RTK data with Pangolin, but the x-axis points to the left of the direction of travel.
[figure: original orientation]
If I rotate the coordinate frame clockwise about the z-axis by 90 degrees, it seems more reasonable and is consistent with the INS extrinsics in the IJRR paper.
[figure: orientation after the 90-degree rotation]
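A minimal sketch of the rotation described above, using the SDK's transform helper (whether it should be applied on the left or the right of the RTK/INS pose depends on the frame convention, which is exactly what this issue is asking about):

import numpy as np
from transform import build_se3_transform

R_z = build_se3_transform([0, 0, 0, 0, 0, -np.pi / 2])  # 90 degrees clockwise about the z-axis
# e.g. corrected_pose = pose * R_z  (or R_z * pose), depending on which frame the rotation acts in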

Left and right camera extrinsics seem to be wrong

The mono_left camera extrinsics:
-0.0905 1.6375 0.2803 0.2079 -0.2339 1.2321
From the car sensor location image, it seems the translation should be close to:
-1.55 -0.59 -0.16
There also seems to be a mistake with the mono_right camera.

Possible bug using ProjectLaserIntoCamera.m

Hello,

I ran the ProjectLaserIntoCamera.m function but got some erroneous results. For example, these are the results I get for timestamp 1400076055044636 (2014-05-14-13-59-05) with the centre camera:
[figure: laser points projected into the centre image]

And after zooming in:
[figure: zoomed-in view]

It is clear that the points projected onto the pedestrians are not coherent, and neither are those on the background.

Could you please check whether you get the same output, and if not, tell me what I did wrong?

Thanks,
Adam

How to remove the noisy across-and-down stripes

I know that "LoadImage" can do this for both perspective and fisheye images, but after that the fisheye image is rectified. Since what I want is the raw fisheye image, how can I get a raw fisheye image with the noisy stripes removed?
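If the stripes come from the raw Bayer pattern, then calling the Python SDK's load_image without a camera model should be enough: it demosaics the image but does not rectify it (a minimal sketch; the path is hypothetical):

from image import load_image

# No CameraModel passed, so the image is only demosaiced, not undistorted
raw_fisheye = load_image('2014-05-14-13-59-05/mono_left/1400076055044636.png')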

Timestamps between INS, stereo and VO seem misaligned

I've recently started using this dataset and I've noticed some strange misalignments between the various timestamps.
The images below show the global pose, the relative pose (obtained from VO) and the stereo frame at the same (or closest) timestamp. As seen, the INS and VO poses are completely different, with the stereo frame following the VO.
Do the timestamps need to be modified according to the start time of each sensor?

[figures: INS pose, VO pose and stereo frame for two example timestamps]

"pose_timestamps" not bounded in RelativeToAbsolutePoses.m

This is just a boundary error, but it is annoying since it happens for the last few hundred frames of many drives when I try to generate a point cloud for each frame. I thought many people might use your SDK, so it may be useful to point it out.


Error:

The error message is "Matrix dimensions must agree" at line 79 of RelativeToAbsolutePoses.m:

  fractions = (pose_timestamps - vo_timestamps(lower_indices)')./...
    (vo_timestamps(lower_indices+1)'-vo_timestamps(lower_indices)');

I figured out that it is caused by pose_timestamps not being bounded by vo_timestamps. If the maximum pose_timestamp is greater than the maximum vo_timestamp, upper_index in line 43 will be empty and upper_index in line 46 will be 0.

Solution:

We could add a function that bounds the pose timestamps by the VO timestamps, avoiding any changes to your original code.

Add the following line before line 98 of BuildPointcloud.m:

laser_timestamps = boundLaserTimestamps(laser_timestamps, ins_file);

Added function is as follows:

function [ bounded_timestamps ] = boundLaserTimestamps( laser_timestamps, vo_file )
% Bound the laser timestamps by the maximum VO timestamp
% Usage:  [ bounded_timestamps ] = boundLaserTimestamps( laser_timestamps, vo_file )
%
% INPUTS:
%   laser_timestamps: array of UNIX timestamps from the laser
%   vo_file: csv file containing relative pose data
%
% OUTPUTS:
%   bounded_timestamps: laser timestamps bounded by the maximum VO timestamp
%
% Author: Junsheng Fu ([email protected])


vo_file_id = fopen(vo_file);
headers = textscan(vo_file_id, '%s', 8, 'Delimiter',',');
vo_data = textscan(vo_file_id, '%u64 %u64 %f %f %f %f %f %f','Delimiter',',');
fclose(vo_file_id);

vo_timestamps = vo_data{1};

% only the upper bound needs to be handled
end_timestamp = min(vo_timestamps(end), laser_timestamps(end,1));
end_index = find(laser_timestamps(:,1) <= end_timestamp, 1, 'last');

bounded_timestamps = laser_timestamps(1:end_index, :);

end

build_pointcloud divide error

Hello,

Similar to transform.py, in line 91 of build_pointcloud.py the division operator '/' has to be replaced by '//'.

Best

Marc

Can't Download Dataset

Hi, I tried using scrape_mrgdatashare.py to download data as shown in the README; however, it kept failing.

The command that I used
python scrape_mrgdatashare.py --choice_sensors stereo_centre,vo --choice_runs_file example_list.txt --downloads_dir /Downloads --datasets_file datasets.csv --username <my-username> --password <my-password>

My example_list.txt

2014-12-17-18-18-43

The error message

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mrgdatashare.robots.ox.ac.uk', port=443): 
Max retries exceeded with url: / 
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fae0c22b470>: 
Failed to establish a new connection: [Errno 113] No route to host',))

It looked like a network error so I tried to download the GPS data from sequence 2015-02-03-19-43-11 from the website directly, using the link below:
https://mrgdatashare.robots.ox.ac.uk/download/?filename=datasets/2015-02-03-19-43-11/2015-02-03-19-43-11_gps.tar
but the address was unreachable.

Can you please help? Thanks!


play_radar.py error

Hi, I'm not sure if it's related to the OpenCV version, but I'm receiving a really strange error when I run play_radar.py. Any help would be greatly appreciated, thanks.
[screenshot: the error output]

Possible bug in ProjectLaserIntoCamera.m (line 98)

Hi!
I've tried using ProjectLaserIntoCamera.m on the 2014-12-16-09-14-09 run from the dataset but get erroneous results when trying to use the visual odometry data.

I'm running the following code:
root_data_path = 'some_path/oxford/2014-12-16-09-14-09';
image_dir = fullfile(root_data_path,'/mono_left');
laser_dir = fullfile(root_data_path,'/lms_rear');

ins_file = fullfile(root_data_path,'vo/vo.csv');

models_dir = 'some_path/robotcar-dataset-sdk/models';
extrinsics_dir = 'some_path/robotcar-dataset-sdk/extrinsics';
timestamp_file = 'some_path/oxford/2014-12-16-09-14-09/mono_left.timestamps';

timestamps = dlmread(timestamp_file);
timestamp_index = 1800;
image_timestamp = timestamps(timestamp_index,1);

ProjectLaserIntoCamera(image_dir, laser_dir, ins_file, models_dir, extrinsics_dir, image_timestamp)

which plots this image:
[figure: projection result with the unmodified code]

Changing line 98 in ProjectLaserIntoCamera.m instead gives the following result:
[figure: projection result after the change]

Have I done something wrong when calling the function or is there an error in the code?

Thanks,
maunzzz

[Suggestion] Add the format of the extrinsics data to a README.md

Could you add the format of the extrinsics data to a README.md in the extrinsics folder?
After downloading the data I was confused about what each number in the extrinsics files means.
I eventually found how to use the extrinsics data in transform.py.

The parameters in each extrinsics file represent (x, y, z, roll, pitch, yaw).
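For reference, a minimal sketch of reading one of the extrinsics files and turning it into a 4x4 pose with the SDK (the file name here is just an example):

from transform import build_se3_transform

with open('extrinsics/ins.txt') as f:
    xyzrpy = [float(x) for x in f.readline().split(' ')]  # x, y, z, roll, pitch, yaw

G = build_se3_transform(xyzrpy)  # 4x4 SE(3) transform built from those six numbers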

Errors from running scripts with python3

OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /io/opencv/modules/highgui/src/window.cpp, line 583
Traceback (most recent call last):
  File "play_radar.py", line 59, in <module>
    cv2.imshow(title, vis * 2.)  # The data is doubled to improve visualisation
cv2.error: /io/opencv/modules/highgui/src/window.cpp:583: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

I use Python 3.5.2, and libgtk2.0-dev and pkg-config are already installed. Thanks!

Poses from interpolate_ins_poses using RTK have broken sections

Hi,
I have found broken sections when using the rtk.csv file with the interpolate_ins_poses function.

[screenshots: discontinuities in the interpolated RTK trajectory]

I used this code snippet to generate the plots:

ins_path = os.path.join(dataset_dir, "images", dataset, "gps/rtk.csv")
timestamp_path = os.path.join(dataset_dir, "images", dataset, f"mono_{view}.timestamps")
    
pose_timestamps = np.loadtxt(timestamp_path)[:, 0].tolist()

origin_timestamp = pose_timestamps[0]
    
print('Interpolating poses...')
insext = np.loadtxt(os.path.join('robotcar_helper/extrinsics', f'mono_{view}.txt')).tolist()
mtx = transform.build_se3_transform(insext)

T_I_Cx = pose.fromMatrix(np.asarray(mtx)).inverse()
T_Cx_C = pose.yRotationDeg(90) * pose.zRotationDeg(90)
T_I_C = T_I_Cx * T_Cx_C

T_W_I0 = pose.xRotationDeg(180)

T_I0_I = interpolate_poses.interpolate_ins_poses(
    ins_path, pose_timestamps[1:], origin_timestamp, use_rtk=True)

T_I0_I = [pose.fromMatrix(np.asarray(i)) for i in T_I0_I]

T_W_Cs = [T_W_I0 * i * T_I_C for i in T_I0_I]
t_W_Cs = np.array([T_W_C.t.ravel() for T_W_C in T_W_Cs])

plt.scatter(t_W_Cs[:, 1], t_W_Cs[:, 0], marker='.', s=4)

plt.axes().set_aspect('equal')
plt.grid()
plt.show()

The traversal used in this example is 2014-11-28-12-07-13.

I have also directly plotted the poses using the northing/easting data provided in the rtk.csv file, and I observed similar issues at similar sections of the traversal.

[screenshots: similar discontinuities when plotting the raw northing/easting values]

Grasshopper2 images do not fit the GPS/INS file

In this dataset, the mono images (left | rear | right) contain only 164 samples (after extraction), which correspond to a very short straight path, and they do not fit the ground truth provided:
[figure: mono trajectory overlaid on the GPS/INS ground truth]

However, the stereo images do fit the path!

Are there some chunks missing for the mono cameras?

Cheers,

Timestamp resolution/conversion (ROS)

Hi,

I want to convert the provided timestamps into the ROS timestamp format (seconds, nanoseconds).

But a standard conversion using rospy's from_seconds() throws an error with these timestamps.

If I try to convert a timestamp to a human-readable date with Python's time.ctime, I get strange dates:
Thu Jan 9 23:25:05 44368617 Fri Jan 10 04:58:15 44368617 Fri Jan 10 10:32:04 44368617 Fri Jan 10 16:06:19 44368617 Fri Jan 10 21:40:35 44368617 Sat Jan 11 03:14:47 44368617

Has anyone else had issues with this?
I would like to know the resolution of the timestamps so I can split each provided UNIX timestamp into seconds and microseconds.
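Assuming the timestamps are UNIX microseconds (which is consistent with the values seen elsewhere in these issues, e.g. 1400076055044636 falling in May 2014), a minimal conversion sketch is:

timestamp_us = 1400076055044636            # example timestamp from the dataset
secs = timestamp_us // 1000000             # whole seconds since the UNIX epoch
nsecs = (timestamp_us % 1000000) * 1000    # remaining microseconds, expressed as nanoseconds
# e.g. with rospy: stamp = rospy.Time(secs, nsecs)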

build_pointcloud.py and interpolate_poses.py errors

Hello,

In line 84 of build_pointcloud.py, timestamps[i] should be timestamps[i+1], because len(poses) is not equal to len(timestamps): the requested timestamps are shifted by the call to interpolate_ins_poses (line 72) or interpolate_vo_poses (line 75), due to the requested_timestamps.insert(0, origin_timestamp) operation in line 102 of interpolate_poses.py.

Besides, in line 142 of interpolate_poses.py, the floor-division operator '//' in the computation of fractions should be replaced by '/'.

Best,
Bing

Reflectance measurement unit

Hello,

Thank you for providing this dataset and the associated toolbox.

I only have one question, about the measurement unit used for the reflectance associated with the point clouds acquired with the rear and front LIDARs (for example, as returned by the build_pointcloud function). What does the returned reflectance value represent?

Usually reflectance is a percentage of reflected light for a given material, but in your function the reflectance values do not look like percentages. After a little inspection I find that the values lie between 101 and ~2000, with a mean value around 400.

Thank you in advance for your answer.

Why does the 2D LIDAR data provide more information along the vertical axis than the 3D LIDAR?

The dataset page says:
LIDAR:
2 x SICK LMS-151 2D LIDAR, 270° FoV, 50Hz, 50m range, 0.5° resolution
1 x SICK LD-MRS 3D LIDAR, 85° HFoV, 3.2° VFoV, 4 planes, 12.5Hz, 50m range, 0.125° resolution

When I run the build_pointcloud script with the same parameters on the /lms_front and /ldmrs laser scan folders, I find that the 2D LIDAR data provides more information along the vertical axis than the 3D LIDAR. It looks like the images below: the first is the /lms_front result and the second is the /ldmrs result.
[figures: /lms_front point cloud (top) and /ldmrs point cloud (bottom)]

Can't find vo.csv

Hi

I can't find the vo.csv files in the data. Can you please provide the path to these files?

Thanks

transform.py, xrange

In line 140 of transform.py, in the function so3_to_quaternion, xrange is used; xrange no longer exists in Python 3, it's just range.

Best Marc

Error in the cvtColor() function

Hi Robotcar Dataset Team,

I am using the RobotCar dataset with OpenCV in Visual Studio. I have a problem converting the raw Bayer stereo image (pattern 'gbrg') to an RGB Mat image with cvtColor().

Mat image(height, width, CV_8UC1, Scalar(0, 0, 255));
image = imread("C:\\Users\\u20w21\\Desktop\\bayer.png");
Mat image2;
cvtColor(image, image2, COLOR_BayerRG2RGB);
...

[screenshot: the demosaic error]

But I get an error from the cv::demosaic function. Could you please help me with this problem?

“SyntaxError: invalid syntax” from running the python scripts

I am using python2.7, and the typing-3.7.4.1 package has been installed successfully.

Traceback (most recent call last):
  File "play_radar.py", line 17, in <module>
    from radar import load_radar, radar_polar_to_cartesian
  File "/home/scopus/Downloads/robotcar-dataset-sdk-3.0/python/radar.py", line 20
    def load_radar(example_path: AnyStr) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, float]:
                               ^
SyntaxError: invalid syntax

Should I use python3? Thanks!

MATLAB BuildPointcloud doesn't work with RTK data

The script produces pretty good point clouds with the INS data, but if I switch to the provided RTK ground truth something goes wrong: I get an almost planar result along the ground-truth trajectory.
In the InterpolatePoses file it seems weird that you assume a header with a length of 24, because the RTK header is shorter than the previously used INS header. I changed this bit; with the change the script produces results, but they are still wrong.
Please provide a short example of how the script should be used, in case I am using it incorrectly.

Errors from running build_pointcloud.py

File "robotcar-dataset-sdk-3.1\robotcar-dataset-sdk-3.1\python\build_pointcloud.py", line 21, in
from velodyne import load_velodyne_raw, load_velodyne_binary, velodyne_raw_to_pointcloud
File "..\robotcar-dataset-sdk-3.1\robotcar-dataset-sdk-3.1\python\velodyne.py", line 33
def load_velodyne_binary(velodyne_bin_path: AnyStr):
^
SyntaxError: invalid syntax

When running build_pointcloud.py I am using Python 2.7. Should I be using Python 3? Thanks!

camera_model.py has an issue with Python 3

Hello,

It seems that line 147 in camera_model.py no longer works with Python 3:
lut = lut.reshape([2, lut.size / 2])

It can easily be fixed by using
lut = lut.reshape([2, lut.size // 2])

This happens because in Python 3 the '/' operator performs true division and returns a float, rather than automatically converting to an integer.

Best,

Marc

Inquiry about the orientation in the UTM zone 30U

Dear Authors,

Thanks for your public dataset. We are trying to understand how the roll/pitch/yaw Euler angles are defined relative to UTM zone 30U.

For instance, in /oxford/rtk/2014-11-11-11-06-25/rtk.csv, we find the first row:

timestamp,latitude,longitude,altitude,northing,easting,down,utm_zone,velocity_north,velocity_east,velocity_down,roll,pitch,yaw
1415704028184063,51.7605920064,-1.2612119076,111.316208,5735841.4578097975,620000.1836618409,-111.316208,30U,0.004,0.004,0.001,-0.01779099999969897,-0.006712000000142968,-0.09384140718030262

In the MATLAB and Python code, we find that the pose of the body frame (B) relative to the world frame (W) is computed as
W_T_B = SE3MatrixFromComponents(northing, easting, down, roll, pitch, yaw).
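For reference, the equivalent with the Python SDK, using the values from the first row quoted above (a minimal sketch):

from transform import build_se3_transform

# northing, easting, down, roll, pitch, yaw taken from the rtk.csv row above
xyzrpy = [5735841.4578097975, 620000.1836618409, -111.316208,
          -0.01779099999969897, -0.006712000000142968, -0.09384140718030262]
W_T_B = build_se3_transform(xyzrpy)  # 4x4 pose of the body frame in the world frame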

From this website tool, I have confirmed that the northing and easting coordinates are relative to a coordinate frame (F) whose origin is at the equator on the -3 degree meridian. To conform to the NED convention, the F frame can have its x axis along north, its y axis along east, and its z axis down. The transverse Mercator projection is understandable for the positions.

But for the orientation I am baffled. As I understand it, the F frame and the local-level NED frame (N) differ in their north directions by about 51.76 degrees in space.

Are these Euler angles, computed by Waypoint Inertial Explorer (IE), relative to the local-level NED frame (N) (which moves with the car) or relative to the F frame?
From my experience with IE, it outputs roll/pitch/heading in the local-level ENU frame by default.
But the SE3MatrixFromComponents() function implies that the Euler angles are relative to the F frame.

If the Euler angles are relative to the F frame, how do you transform the orientation from the N frame to the F frame?

Errors from running matlab scripts

Undefined function or variable 'yticks'.

Error in PlayRadar (line 73)
        yticklabels(azimuths(yticks));

I use MATLAB R2015a on Windows 10.

Pinhole intrinsic calibration for stereo_centre

Regarding the Bumblebee, I noticed that https://github.com/ori-mrg/robotcar-dataset-sdk/tree/master/models only has pinhole intrinsic calibrations for stereo_narrow_left, stereo_narrow_right, stereo_wide_left and stereo_wide_right.
However, the camera images are stored as stereo/left, stereo/right and stereo/centre.

According to the paper:
stereo_narrow_<left/right>.txt are for the stereo/centre and stereo/right images.
stereo_wide_<left/right>.txt are for the stereo/left and stereo/right images.

So are there two intrinsic calibrations for stereo/right (stereo_narrow_right.txt and stereo_wide_right.txt)? What about intrinsic info for treating stereo/right as a monocular camera?

If for instance I feed images from one of these lenses to a visual odometry algorithm that works on monocular images, which Pinhole fx fy cx cy values would be correct?
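For what it's worth, the SDK's CameraModel loads whichever intrinsics file matches the image directory you point it at, so one way to see which fx, fy, cx, cy would be used for a given camera is (a minimal sketch; the paths are hypothetical):

from camera_model import CameraModel

model = CameraModel('models', '2014-12-16-09-14-09/stereo/centre')  # directory name selects the model
fx, fy = model.focal_length
cx, cy = model.principal_point
print(fx, fy, cx, cy)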

build Global Point cloud and absolute cameras poses

Hi @mttgdd

I am wondering whether it is possible to build the global point cloud for each scan in the world coordinate system, and whether it is possible to obtain the absolute pose of each camera.

The key point is that we would like to use your dataset in a way where the ground-truth pose of the cameras (or, practically, of the vehicle body or of some sensor to which we have a precise relative pose from each camera) is available relative to the point cloud.

So, practically, we need to be able to merge all the point clouds (the individual scans) into one large cloud, and to project it precisely onto any selected image frame, whether from a perspective stereo camera or an omnidirectional camera.

Can you please clarify whether these details can be obtained from your dataset?
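Not an official answer, but a minimal sketch of one way to get vehicle poses at the camera timestamps with the SDK's interpolation helper (the path and timestamps are hypothetical); camera poses would then follow by composing these with the camera extrinsics:

from interpolate_poses import interpolate_ins_poses

ins_path = '2014-06-26-09-24-58/gps/ins.csv'
image_timestamps = [1403774700000000, 1403774700062500]   # hypothetical camera timestamps
origin = image_timestamps[0]

# 4x4 poses of the vehicle at each timestamp, expressed relative to the pose at `origin`
poses = interpolate_ins_poses(ins_path, image_timestamps, origin)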

Thank you in advance for your kind help,
Best regards,
Hichem

Pointcloud Projection Into Images

Hi, RobotCar Team,

Firstly, thank you for your great dataset!

I followed https://robotcar-dataset.robots.ox.ac.uk/documentation/#pointcloud-projection-into-images to project a point cloud into images using the Python API provided by the SDK. However, the result seems incorrect, as shown in the attached image. The code is included below. Do you have any suggestions?

[screenshot: the incorrect projection result]

The code is:

################################################################################
#
# Copyright (c) 2017 University of Oxford
# Authors:
#  Geoff Pascoe ([email protected])
#
# This work is licensed under the Creative Commons
# Attribution-NonCommercial-ShareAlike 4.0 International License.
# To view a copy of this license, visit
# http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to
# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
#
################################################################################

import os
import re
import sys
import numpy as np
import matplotlib.pyplot as plt
import argparse

sys.path.append('../dataset_loaders')
from robotcar_sdk.build_pointcloud import build_pointcloud
from robotcar_sdk.transform import build_se3_transform
from robotcar_sdk.image import load_image
from robotcar_sdk.camera_model import CameraModel

image_dir='../data/deepslam_data/RobotCar/loop_ldmrs/2014-06-26-09-24-58/stereo/centre/'
laser_dir='../data/deepslam_data/RobotCar/loop_ldmrs/2014-06-26-09-24-58/ldmrs'
poses_file='../data/deepslam_data/RobotCar/loop_ldmrs/2014-06-26-09-24-58/vo/vo.csv'  # or  '../data/deepslam_data/RobotCar/loop_ldmrs/2014-06-26-09-24-58/gps/ins.csv'  # 
models_dir='../data/robotcar_camera_models'
extrinsics_dir='/mnt/lustre/sunjiankai/App/robotcar-dataset-sdk/extrinsics/'
save_dir='../data/deepslam_data/RobotCar/loop_ldmrs/2014-06-26-09-24-58/depth/'

# args = parser.parse_args()

model = CameraModel(models_dir, image_dir)

extrinsics_path = os.path.join(extrinsics_dir, model.camera + '.txt')
with open(extrinsics_path) as extrinsics_file:
    extrinsics = [float(x) for x in next(extrinsics_file).split(' ')]

G_camera_vehicle = build_se3_transform(extrinsics)
G_camera_posesource = None

poses_type = re.search('(vo|ins)\.csv', poses_file).group(1)
if poses_type == 'ins':
    with open(os.path.join(extrinsics_dir, 'ins.txt')) as extrinsics_file:
        extrinsics = next(extrinsics_file)
        G_camera_posesource = G_camera_vehicle * build_se3_transform([float(x) for x in extrinsics.split(' ')])
else:
    # VO frame and vehicle frame are the same
    G_camera_posesource = G_camera_vehicle

print('G_camera_posesource: ', G_camera_posesource)

timestamps_path = os.path.join(image_dir, os.pardir, model.camera + '.timestamps')
if not os.path.isfile(timestamps_path):
    timestamps_path = os.path.join(image_dir, os.pardir, os.pardir, model.camera + '.timestamps')

timestamp_list = []
with open(timestamps_path) as timestamps_file:
    for i, line in enumerate(timestamps_file):
        timestamp_list.append(int(line.split(' ')[0]))

for timestamp in timestamp_list[1000:]:
    pointcloud, reflectance = build_pointcloud(laser_dir, poses_file, extrinsics_dir,
                                               timestamp - 1e7, timestamp + 1e7, timestamp)
    
    pointcloud = np.dot(G_camera_posesource, pointcloud)
    print('pointcloud.shape: ', pointcloud[:, 0].max(), pointcloud[:, 1].max(), pointcloud[:, 2].max(), pointcloud[:, 0].min(), pointcloud[:, 1].min(), pointcloud[:, 2].min())
    
    image_path = os.path.join(image_dir, str(timestamp) + '.png')
    image = load_image(image_path, model)

    uv, depth = model.project(pointcloud, image.shape)

    plt.imshow(image)
    plt.hold(True)
    plt.scatter(np.ravel(uv[0, :]), np.ravel(uv[1, :]), s=2, c=depth, edgecolors='none', cmap='jet')
    plt.xlim(0, image.shape[1])
    plt.ylim(image.shape[0], 0)
    plt.xticks([])
    plt.yticks([])
    plt.show()

Thank you!
