This repository implements 3D human posture skeleton sensing with an mmWave radar, along with some reusable utilities ("wheels"). Many thanks to TI, the TI E2E forum, the PreSense Radar team, PyKinect2, and TensorFlow Keras for their help along the way. Please STAR this repository if you find it helpful.
The repository has the following parts:
- Posture sensing
- Radar signal processing on raw mmWave radar signal
- Sync between mmWave radar frames and Kinect skeleton frames
- Demonstration
- Wheel codes
The ultimate goal is to reconstruct full-body 3D human postures from wireless signals. We use a directional mmWave RF signal as the sensing source and a depth camera as the control group. The wireless signal is transmitted from the antenna and, after reaching an obstacle, is partially absorbed and reflected back. The received signal therefore carries spatial information that can be recovered with radar signal processing or a neural network.
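As a rough illustration of how spatial information is recovered from the raw signal, the sketch below runs the standard FMCW range and Doppler FFTs over one frame of raw ADC data. The frame dimensions and the use of random data are assumptions for the demo; real frames come from the DCA1000EVM capture.

```python
import numpy as np

# Hypothetical frame of raw ADC data: (num_chirps, num_rx, num_samples).
num_chirps, num_rx, num_samples = 128, 4, 256
rng = np.random.default_rng(0)
adc_frame = (rng.standard_normal((num_chirps, num_rx, num_samples))
             + 1j * rng.standard_normal((num_chirps, num_rx, num_samples)))

# Range FFT: each chirp's windowed beat-frequency spectrum maps to range bins.
range_fft = np.fft.fft(adc_frame * np.hanning(num_samples), axis=-1)

# Doppler FFT across chirps resolves radial velocity per range bin.
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Magnitude range-Doppler map, averaged over the receive antennas.
range_doppler = np.abs(doppler_fft).mean(axis=1)
print(range_doppler.shape)  # (128, 256): Doppler bins x range bins
```

A third FFT across the virtual antenna axis would produce the azimuth/elevation heatmaps used as model input.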
Advantages compared with other posture tracking solutions:
- Privacy-preserving. No camera is involved, so no visual privacy information is ever captured.
- Real-time. Has the potential to achieve 60 FPS real-time tracking.
- Easy to set up. The radar antenna is compact.
This section demonstrates the 3D posture reconstruction results compared against ground truth captured with a depth camera. A model consisting of CNN, RNN, and FCN layers is trained to reconstruct 3D postures from processed mmWave signal features.
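A minimal Keras sketch of such a CNN + RNN + FCN pipeline is shown below. The input shape (10-frame sequences of 2-channel 32x32 azimuth/elevation heatmaps), the 25-joint output, and all layer sizes are assumptions for illustration, not the trained model's actual configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: 10-frame sequences of 2-channel 32x32 heatmaps
# (azimuth + elevation); output: 25 Kinect joints x 3 coordinates.
seq_len, h, w, ch, n_joints = 10, 32, 32, 2, 25

model = keras.Sequential([
    keras.Input((seq_len, h, w, ch)),
    # CNN extracts spatial features from each heatmap frame.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    # RNN models temporal dependence across frames.
    layers.LSTM(64),
    # FCN regresses the 3D joint coordinates.
    layers.Dense(128, activation="relu"),
    layers.Dense(n_joints * 3),
    layers.Reshape((n_joints, 3)),
])
model.compile(optimizer="adam", loss="mse")

x = np.zeros((1, seq_len, h, w, ch), dtype="float32")
print(model.predict(x, verbose=0).shape)  # (1, 25, 3)
```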
---
Prerequisites
- Hardware
- TI AWR1642EVM-ODS mmWave radar
- TI DCA1000EVM Data capture card
- Xbox Kinect V2
- Software
- TI mmWave Studio v2.01
- TI mmWave SDK
- Download data and save to `./data/`
- Python 3.6 environment
---
Codes
- Capture 3D Skeleton Points from Xbox Kinect
- Extract Spatial Heatmaps (azimuth and elevation heatmaps)
- Sync Radar Frames with Skeleton Frames from Kinect
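Since the radar and Kinect run at different frame rates and on different clocks, one simple way to sync them is to pair each radar frame with the Kinect skeleton frame whose capture timestamp is nearest. The sketch below assumes hypothetical timestamps (radar at 20 FPS, Kinect at 30 FPS with a small clock offset), not the repository's actual sync code.

```python
import numpy as np

# Hypothetical capture timestamps in seconds.
radar_ts = np.arange(0.0, 5.0, 1 / 20)           # radar at ~20 FPS
kinect_ts = np.arange(0.0, 5.0, 1 / 30) + 0.007  # Kinect at ~30 FPS, offset

# For each radar frame, find the Kinect frame with the nearest timestamp.
idx = np.searchsorted(kinect_ts, radar_ts)
idx = np.clip(idx, 1, len(kinect_ts) - 1)
left, right = kinect_ts[idx - 1], kinect_ts[idx]
nearest = np.where(radar_ts - left < right - radar_ts, idx - 1, idx)

# Each radar frame index is now paired with one skeleton frame index.
pairs = list(zip(range(len(radar_ts)), nearest.tolist()))
print(pairs[:3])  # [(0, 0), (1, 1), (2, 3)]
```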
- OpenRadar
- PyKinect2
- Keras
- PreSense Radar
Please STAR this project to make me feel that I helped you beat "some" competitors :)