lysdexic-audio / jweb-pose-landmarker

A self-contained example demonstrating how to use MediaPipe Pose Landmarker with Max's `jweb`

License: GNU General Public License v3.0

Languages: CSS 0.08%, JavaScript 1.74%, HTML 0.26%, Max 97.93%
Topics: jweb, max, mediapipe, mediapipe-pose, mediapipe-pose-estimation, body-tracking, skeleton-tracking

jweb-pose-landmarker

A self-contained example demonstrating how to use MediaPipe Pose Landmarker with Max's `jweb`

Max example screenshot
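
For orientation, here is a minimal sketch of the kind of JavaScript a jweb-hosted page can run to load the Pose Landmarker and forward results to Max. The bare `@mediapipe/tasks-vision` import (assumed bundled or served alongside the page), the `webcam` element id, the `pose_landmarker_lite.task` model path, the wasm asset path, and the `landmarks` message name are illustrative assumptions, not necessarily what this repository uses.

```js
// Sketch: run MediaPipe Pose Landmarker inside a page loaded by [jweb] and
// forward detections to Max through window.max.outlet (provided by jweb).
// Assumes @mediapipe/tasks-vision is bundled or served alongside the page.
import { FilesetResolver, PoseLandmarker } from "@mediapipe/tasks-vision";

const video = document.getElementById("webcam"); // assumed <video> element id

async function init() {
  // Start the webcam feed that the landmarker will analyse.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const vision = await FilesetResolver.forVisionTasks("wasm"); // assumed path to the wasm assets
  const landmarker = await PoseLandmarker.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "pose_landmarker_lite.task" }, // assumed model file
    runningMode: "VIDEO",
    numPoses: 1
  });

  const loop = () => {
    if (video.readyState >= 2) {
      const result = landmarker.detectForVideo(video, performance.now());
      if (result.landmarks.length > 0 && window.max) {
        // Flatten the first detected pose to [index, x, y, z, ...] for Max.
        const out = [];
        result.landmarks[0].forEach((lm, i) => out.push(i, lm.x, lm.y, lm.z));
        window.max.outlet("landmarks", ...out);
      }
    }
    requestAnimationFrame(loop);
  };
  loop();
}

init();
```

In the Max patch, one way to unpack such a flat list is [route landmarks] followed by [zl group 4], giving one index/x/y/z group per landmark.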

Pose landmarker model

The pose landmarker model tracks 33 body landmarks, representing the approximate locations of the following body parts (a JavaScript transcription of this index table follows the list):

Pose landmarks image

0 - nose
1 - left eye (inner)
2 - left eye
3 - left eye (outer)
4 - right eye (inner)
5 - right eye
6 - right eye (outer)
7 - left ear
8 - right ear
9 - mouth (left)
10 - mouth (right)
11 - left shoulder
12 - right shoulder
13 - left elbow
14 - right elbow
15 - left wrist
16 - right wrist
17 - left pinky
18 - right pinky
19 - left index
20 - right index
21 - left thumb
22 - right thumb
23 - left hip
24 - right hip
25 - left knee
26 - right knee
27 - left ankle
28 - right ankle
29 - left heel
30 - right heel
31 - left foot index
32 - right foot index
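
The index-to-name table below is a direct transcription of the list above into JavaScript. The `sendPose` helper and the per-landmark message format are hypothetical, shown only as one way a jweb-hosted page could label what it sends to Max through `window.max.outlet`.

```js
// Index-to-name table for the 33 pose landmarks listed above.
const POSE_LANDMARK_NAMES = [
  "nose",
  "left_eye_inner", "left_eye", "left_eye_outer",
  "right_eye_inner", "right_eye", "right_eye_outer",
  "left_ear", "right_ear",
  "mouth_left", "mouth_right",
  "left_shoulder", "right_shoulder",
  "left_elbow", "right_elbow",
  "left_wrist", "right_wrist",
  "left_pinky", "right_pinky",
  "left_index", "right_index",
  "left_thumb", "right_thumb",
  "left_hip", "right_hip",
  "left_knee", "right_knee",
  "left_ankle", "right_ankle",
  "left_heel", "right_heel",
  "left_foot_index", "right_foot_index"
];

// Hypothetical helper: send each landmark as "<name> x y z visibility" so it
// can be routed in the patch with [route nose left_wrist ...] or similar.
function sendPose(landmarks) {
  landmarks.forEach((lm, i) => {
    window.max.outlet(POSE_LANDMARK_NAMES[i], lm.x, lm.y, lm.z, lm.visibility ?? 0);
  });
}
```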

Resources

This example is inspired by an example by Rob Ramirez, which is in turn inspired by MediaPipe in JavaScript.


jweb-pose-landmarker's Issues

Input from other jit objects?

[Sorry for the cross-post with the Cycling '74 forum.]
Thanks, Lysdexic. This is fantastic work!

I'm interested in feeding in an image/matrix from a Kinect or other IR camera (the goal is to be able to send a live image of a person in a dark room that is hopefully readable enough by Pose to make out a body and limbs). For matrix data from the Kinect, I'm using jit.freenect.grab.
Is there a way to feed another jit object directly into the jweb-pose-landmarker object to be analyzed, rather than an image from a media device/camera?

I'm guessing the JS code for jweb-pose-landmarker would have to be changed for a different input, but I don't know where to start.

Thanks again!
Jacob
