
UnitreeCameraSDK

Introduction

Unitree Robotics is an energetic start-up company that focuses on the development, production, and sale of high-performance quadruped robots. It has been featured by the BBC and CCTV, and is one of the earliest companies to publicly sell quadruped robots.

The company has outstanding expertise in robot core components, motion control, robot perception, and related fields.

We attach great importance to research and development, and have independently developed the motors, reducers, controllers, and even some of the sensors of our quadruped robots.

1. Overview

UnitreeCameraSDK 1.1.0 is a cross-platform library for Unitree stereo cameras.

The SDK provides depth and color streaming, and exposes intrinsic calibration information. The library can also produce point clouds and depth images aligned to the color image.

2. Dependencies

OpenCV, version 4.0 or higher (built with GStreamer support)

CMake, version 2.8 or higher

[OpenGL], for the point cloud GUI

[GLUT], for the point cloud GUI

[X11], for the point cloud GUI

3. Build

cd UnitreeCameraSDK
mkdir build && cd build
cmake ..
make

4. Run Examples

Get Camera Raw Frame:

cd UnitreeCameraSDK; 
./bin/example_getRawFrame 

Get Calibration Parameters File:

cd UnitreeCameraSDK;
./bin/example_getCalibParamsFile 

Get Rectified Frame:

cd UnitreeCameraSDK;
./bin/example_getRectFrame

Get Depth Frame:

cd UnitreeCameraSDK;
./bin/example_getDepthFrame

Get Point Cloud:

cd UnitreeCameraSDK; 
./bin/example_getPointCloud

5. Image Transmission

Sender: send images to another device:

cd UnitreeCameraSDK; 
./bin/example_putImagetrans

Listener: receive images from another device:

cd UnitreeCameraSDK; 
./bin/example_getimagetrans
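The sender and listener examples are wired together with GStreamer pipelines on both ends. As a rough sketch of how the receiving side's pipeline string is typically assembled for OpenCV (the address, port, and decoder element used here are assumptions -- the actual pipeline is printed by the examples at startup, so compare against that):

```python
# Sketch: build the GStreamer pipeline string the listener side would hand
# to cv2.VideoCapture. Address, port, and decoder are illustrative values.

def build_receive_pipeline(address: str, port: int, decoder: str = "avdec_h264") -> str:
    """Build a GStreamer pipeline string for receiving an H.264 RTP/UDP stream."""
    return (
        f"udpsrc address={address} port={port} "
        "! application/x-rtp,media=video,encoding-name=H264 "
        "! rtph264depay ! h264parse "
        f"! {decoder} ! videoconvert ! appsink"
    )

if __name__ == "__main__":
    # The resulting string would be passed to
    # cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER).
    print(build_receive_pipeline("192.168.123.15", 9201))
```

On Jetson boards the hardware decoder element (e.g. omxh264dec) is often used instead of avdec_h264; the rest of the pipeline stays the same.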

unitreecamerasdk's People

Contributors

burgerbank, dnaglucose


unitreecamerasdk's Issues

Running the examples fails

Hi!

I've installed this SDK as described on my computer, which is connected to the Go1 robot.
I am able to ping the Nvidia boards inside.

But none of the example files work out of the box:

$ ls
example_getCalibParamsFile  example_getDepthFrame  example_getimagetrans  example_getPointCloud  example_getRawFrame  example_getRectFrame  example_putImagetrans

$ ./example_getPointCloud 
Invalid deviceNode!
Segmentation fault (core dumped)

$ ./example_getDepthFrame 
Invalid deviceNode!
Segmentation fault (core dumped)

$ ./example_getimagetrans 
udpSendIntegratedPipe:udpsrc address=192.168.123.15 port=9201 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (713) open OpenCV | GStreamer warning: Error opening bin: no element "h264parse"
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

$ ./example_getPointCloud 
Invalid deviceNode!
Segmentation fault (core dumped)

$ ./example_getRawFrame 
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Cannot detect any unitree camera!
[UnitreeCameraSDK][ERROR] read to tmp file failed, maybe mkstemp file error!
[UnitreeCameraSDK][ERROR] This camera cannot get internal parameters!
[UnitreeCameraSDK][ERROR] You should flash the calibration parameters or load it from disk.
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1758) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (515) startPipeline OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1057) setProperty OpenCV | GStreamer warning: no pipeline
Device Position Number:0
[StereoCamera][INFO] Initialize parameters OK!
[StereoCamera][INFO] Start capture ...
^C

$ ./example_getRectFrame 
Invalid deviceNode!
Segmentation fault (core dumped)

I expected at least a window to open showing me the output of one camera.

Does anybody know how to solve these issues?

Inconsistent image transmission rate to Ubuntu PC

Hi,

I have connected my Ubuntu laptop to the Go1 using an Ethernet cable. When I run putImagetrans on the Go1 and getimagetrans on my PC, I receive camera frames at only around 5 fps, and the rate is not consistent, which makes the video feed laggy.

Is this a known issue? Is there any way to get the image data at 30 fps?
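One way to narrow a problem like this down is to measure the raw receive rate separately from any display or resize overhead. A minimal sketch (the frame source here is hypothetical; any callable behaving like cv2.VideoCapture.read, returning an (ok, frame) tuple, works):

```python
import time

def measure_fps(read_frame, n_frames: int = 100) -> float:
    """Average frames per second over n_frames successful calls to read_frame().

    read_frame is expected to behave like cv2.VideoCapture.read,
    returning an (ok, frame) tuple.
    """
    got = 0
    start = time.perf_counter()
    while got < n_frames:
        ok, _ = read_frame()
        if ok:
            got += 1
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

if __name__ == "__main__":
    # Hypothetical stand-in source that always "delivers" a frame.
    print(f"{measure_fps(lambda: (True, None), n_frames=1000):.1f} fps")
```

If the bare read loop already runs at ~5 fps, the bottleneck is in the stream or decoder rather than in the display code.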

Unable to execute example_getDepthFrame with Go1

Hello,

I have been using the Unitree Go1 and am interested in acquiring depth images from the camera. Upon executing example_getRawFrame, I have confirmed that I am able to retrieve the RGB images. However, I am unable to execute other functions such as example_getDepthFrame and example_getPointCloud. The specific errors are as follows:

unitree@unitree-desktop:~/Unitree/sdk/UnitreecameraSDK/bins$ ./example_getDepthFrame
Invalid deviceNode!
Segmentation fault (core dumped)

I would greatly appreciate any assistance or guidance in resolving this issue.

Camera Stream for A1

Is there something similar for streaming the Unitree A1 camera data? I couldn't find any documentation.

Can't access the camera on the Unitree Go1

I am following this link "https://docs.trossenrobotics.com/unitree_go1_docs/getting_started/camera_sdk.html" to set up the camera SDK on the Unitree Go1.

I have successfully built the UnitreecameraSDK, but after running the given command "./bin/example_getRawFrame" I am getting the following errors.

[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (1758) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Cannot identify device '/dev/video0'.
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (888) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ WARN:0] global ../modules/videoio/src/cap_v4l.cpp (887) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
Cannot detect any unitree camera!
[UnitreeCameraSDK][ERROR] read to tmp file failed, maybe mkstemp file error!
[UnitreeCameraSDK][ERROR] This camera cannot get internal parameters!
[UnitreeCameraSDK][ERROR] You should flash the calibration parameters or load it from disk.

I could not figure out the actual problem. Please help me solve this issue.

Get depth as uint16 or float32

Hi,
I'm trying to use the UnitreecameraSDK to retrieve the depth image. I'd expect the depth image to be either uint16 or float32 so I can read distance values from it. However, when I use getDepthFrame(), the image comes back as a 3-channel uint8 image, and even if I set the parameter color=false I still get the same image type.

Therefore, is there a way to get the depth image with pixel values in mm or meters?
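A colormapped 8-bit depth image cannot be converted back to metric depth. If the raw 16-bit depth is available on the sender side, one common workaround for moving it through 8-bit channels is to split each value into high and low bytes and reassemble them on the receiver. A minimal sketch with plain Python lists (the function names are illustrative, not SDK API):

```python
def split_depth16(values):
    """Split 16-bit depth values (e.g. millimeters) into (high, low) byte planes."""
    hi = [(v >> 8) & 0xFF for v in values]
    lo = [v & 0xFF for v in values]
    return hi, lo

def merge_depth16(hi, lo):
    """Reassemble 16-bit depth values from the two byte planes."""
    return [(h << 8) | l for h, l in zip(hi, lo)]

if __name__ == "__main__":
    depth_mm = [0, 500, 1234, 65535]   # sample depths in millimeters
    hi, lo = split_depth16(depth_mm)
    assert merge_depth16(hi, lo) == depth_mm
```

Note that this only survives a lossless transport; a lossy H.264 stream would corrupt the byte planes.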

Thanks

Create camera nodes

So I need the camera data (/camera_face/color/image_raw, etc.). When I connect to the 13 board ([email protected]) and run 'rostopic list', it just returns ERROR: Unable to communicate with master!
I tried with all the boards (13, 14, and 15).

dynamic version of the libraries in lib/amd64

Hi!

I'm trying to make a pybind11 module that exposes the C++ camera SDK to Python, so I can feed images directly into a neural-network pipeline. I ran into a problem when compiling everything with CMake: the amd64 versions of the libraries are static, and in my experiments they do not work as expected after being linked into my own dynamic library containing the pybind11 module (more specifically, I get undefined symbols such as "stereo 12 start ebb"), which I believe is due to linking .a with .so libraries. Could an .so build of the amd64 library be made available?

I can provide more details of the problem if needed.

thanks!

get camera info

I'm trying to get that info from the camera and create a topic with it. Does anybody know how to get that data?


Go1 pointcloud

Is there a way to obtain the pointcloud data from depth measurement and publish it as a ros topic?

thanks!

Rotation reference frame system in output_camCalibParams.yaml

Hi, I see there are a RightRotation matrix and a LeftRotation matrix in the calibration parameter file, and only one translation matrix. Does anyone know how the frames are defined? Which frame does each rotation matrix transform from, and to which frame? Which frame does the translation matrix translate from, and to which frame? Thank you!

Wireless transferring images

Hi, can the captured images (especially raw and rectified) be transferred to an external computer wirelessly (without Ethernet)? I think it would be possible if the Jetson Nano board could be connected to Wi-Fi, but it seems we cannot do that...
I also thought of using the Raspberry Pi as an intermediary, but that would be complex and might introduce latency. So I am wondering: is there any way to transfer images wirelessly in real time directly from the Nano?

Running "example_putImagetrans.cc" fails

Dear Unitree,

When I run "example_putImagetrans.cc" according to the tutorial on bilibili, it doesn't print the prompt [UnitreeCameraSDK][INFO] Load camera flash parameters OK!

unitree@unitree-desktop:~/UnitreecameraSDK$ ./bins/example_putImagetrans
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

And I don't get the image from the camera...

Does anybody know how to fix the issue?

Cannot detect the camera

I connected my laptop to the Go1's LAN and can control the robot's movement, but when I run ./bins/example_getRawFrame I get the following error:
[UnitreeCameraSDK][ERROR] read to tmp file failed, maybe mkstemp file error! [UnitreeCameraSDK][ERROR] This camera cannot get internal parameters! [UnitreeCameraSDK][ERROR] You should flash the calibration parameters or load it from disk. Segmentation fault
How should I set this up? Thanks.

Trouble Transferring Image Data to Onboard Raspberry Pi on Unitree Go1 Edu

I'm currently working on transferring image data from multiple Jetson units to the onboard Raspberry Pi of the Unitree Go1 Edu robot. I've been following the guide available here, specifically Step 7, using ./bins/example_putImagetrans for sending the image data.

The data transfer works well between the Jetson units (with IPs: 192.168.123.13, 192.168.123.14, and 192.168.123.15). However, I've encountered issues when attempting to transfer this image data to the Go1 Edu's onboard Raspberry Pi (IP: 192.168.123.161) using the following Python script to receive the data.

import cv2

class camera:

    def __init__(self, cam_id=None, width=640, height=480):
        self.cam_id = cam_id
        self.width = width
        self.height = height

    def get_img(self):
        IpLastSegment = "161"
        cam = self.cam_id
        udpstrPrevData = "udpsrc address=192.168.123." + IpLastSegment + " port="
        udpPORT = [9201, 9202, 9203, 9204, 9205]
        udpstrBehindData = " ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink"
        udpSendIntegratedPipe_0 = udpstrPrevData + str(udpPORT[cam - 1]) + udpstrBehindData
        print(udpSendIntegratedPipe_0)
        self.cap = cv2.VideoCapture(udpSendIntegratedPipe_0, cv2.CAP_GSTREAMER)

    def demo(self):
        self.get_img()
        while True:
            self.ret, self.frame = self.cap.read()
            if self.frame is None:
                continue
            self.frame = cv2.resize(self.frame, (self.width, self.height))
            if self.cam_id == 1:
                self.frame = cv2.flip(self.frame, -1)
            cv2.imshow("video0", self.frame)
            if cv2.waitKey(2) & 0xFF == ord('q'):
                break
        self.cap.release()
        cv2.destroyAllWindows()
Despite adjusting the destination IP to the onboard Raspberry Pi's, the image data doesn't seem to get through as expected.

Any guidance or troubleshooting steps would be highly appreciated.

Can I send the depth or point cloud data via UDP?

Hello,

[Question]
I would like to know whether I can send the depth or point cloud data via UDP.

(I couldn't find anything about it in the documents.)


[Supplement]
I tried the examples putImagetrans.cc & getimagetrans.cc.
I succeeded in sending the video via UDP,
but the depth did not come through on the server side.


(Did ~/Unitree/autostart/camerarosnode/cameraRosnode/kill.sh )

Pattern A (video streaming):

>> cam.startCapture(true, false); ///< disable shared-memory sharing and enable h264 image encoding

I tried this.

Pattern B (attempting depth streaming):

    cam.startCapture();                 // disable both
    cam.startStereoCompute();
        --------------
        if(!cam.getDepthFrame(depth, true, t)){
        --------------

But the server side could not receive the depth.


If the stream starts with video, the IP address and port number appear. But with depth (pattern B), the console output stops; the IP address and port number never appear.

I guess that I should disable the h264 encoding, but I can't find how to do that in the documents.

Could you answer this question?
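Since the built-in transmission path is H.264 (lossy, 8-bit), depth is more safely sent over a plain UDP socket with explicit serialization. A rough sketch of one possible framing, using only the standard library (the message layout, field names, and sizes are all assumptions of this sketch, not SDK behavior):

```python
import struct

# Header: rows, cols (uint16 each) and a frame index (uint32), network byte order.
HEADER = struct.Struct("!HHI")

def encode_depth(frame_index, rows, cols, depth_mm):
    """Serialize a row-major list of uint16 depth values into one datagram payload."""
    assert len(depth_mm) == rows * cols
    return HEADER.pack(rows, cols, frame_index) + struct.pack(
        f"!{rows * cols}H", *depth_mm
    )

def decode_depth(payload):
    """Inverse of encode_depth: returns (frame_index, rows, cols, depth_mm)."""
    rows, cols, frame_index = HEADER.unpack_from(payload)
    depth_mm = list(struct.unpack_from(f"!{rows * cols}H", payload, HEADER.size))
    return frame_index, rows, cols, depth_mm

if __name__ == "__main__":
    payload = encode_depth(7, 2, 3, [10, 20, 30, 40, 50, 60])
    print(decode_depth(payload))
```

A real implementation would also have to chunk full-resolution frames below the UDP MTU and tolerate packet loss; this only shows the serialization step.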

How to access camera data on the Pi on Go1?

Hi. Is there a way to use example_getimagetrans, example_putImagetrans, or example_getRectFrame from the head Nano to the Pi? I followed the instructions in the documentation, and I am struggling to install OpenCV 4 from source on the Raspberry Pi on the Go1 because the compilation with make takes too much time.

The only setup the documentation and video provide is from the head Nano to a PC over a wired connection.
