This project is a fork of creativeikep/blazeposebarracuda.


License: Apache License 2.0


BlazePoseBarracuda

(Demo GIFs: fitness and dance pose estimation.)

BlazePoseBarracuda is a human 2D/3D pose estimation neural network that works with a monocular color camera.

BlazePoseBarracuda is a Unity package that runs the Mediapipe Pose (BlazePose) pipeline on Unity Barracuda.

BlazePoseBarracuda provides two neural network models (lite and full) that can be switched at runtime (see the Mediapipe Pose (BlazePose) page for model details).

The BlazePoseBarracuda implementation is inspired by keijiro's HandPoseBarracuda, whose source code I referenced. (Thanks, keijiro!)

Dependencies

BlazePoseBarracuda uses the following sub-packages:

  • PoseDetectionBarracuda
  • PoseLandmarkBarracuda

Install

BlazePoseBarracuda can be installed from npm or GitHub URL.

Install from npm (Recommended)

BlazePoseBarracuda can be installed by adding the following sections to the manifest file (Packages/manifest.json).

To the scopedRegistries section:

{
  "name": "creativeikep",
  "url": "https://registry.npmjs.com",
  "scopes": [ "jp.ikep" ]
}

To the dependencies section:

"jp.ikep.mediapipe.blazepose": "1.1.1"

Finally, the manifest file should look like this:

{
    "scopedRegistries": [
        {
            "name": "creativeikep",
            "url": "https://registry.npmjs.com",
            "scopes": [ "jp.ikep" ]
        }
    ],
    "dependencies": {
        "jp.ikep.mediapipe.blazepose": "1.1.1",
        ...
    }
}

Install from GitHub URL

BlazePoseBarracuda can be installed by adding the URLs below from the Unity Package Manager window:

https://github.com/creativeIKEP/PoseDetectionBarracuda.git?path=Packages/PoseDetectionBarracuda#v1.0.0
https://github.com/creativeIKEP/PoseLandmarkBarracuda.git?path=Packages/PoseLandmarkBarracuda#v1.1.0
https://github.com/creativeIKEP/BlazePoseBarracuda.git?path=Packages/BlazePoseBarracuda#v1.1.1

or by appending lines to the dependencies block of your manifest file (Packages/manifest.json), as in the example below.

{
  "dependencies": {
    "jp.ikep.mediapipe.posedetection": "https://github.com/creativeIKEP/PoseDetectionBarracuda.git?path=Packages/PoseDetectionBarracuda#v1.0.0",
    "jp.ikep.mediapipe.poselandmark": "https://github.com/creativeIKEP/PoseLandmarkBarracuda.git?path=Packages/PoseLandmarkBarracuda#v1.1.0",
    "jp.ikep.mediapipe.blazepose": "https://github.com/creativeIKEP/BlazePoseBarracuda.git?path=Packages/BlazePoseBarracuda#v1.1.1",
    ...
  }
}

Usage Demo

The code below demonstrates estimating a human pose from an image and retrieving the pose landmarks. Check "/Assets/Script/PoseVisuallizer.cs" and "/Assets/Scenes/2DSampleScene.unity" for a 2D pose estimation usage demo, and "/Assets/Script/PoseVisuallizer3D.cs" and "/Assets/Scenes/3DSampleScene.unity" for a 3D pose estimation usage demo.

using UnityEngine;
using Mediapipe.BlazePose;

public class <YourClassName>: MonoBehaviour
{
  // Set "Packages/BlazePoseBarracuda/ResourceSet/BlazePose.asset" on the Unity Editor.
  [SerializeField] BlazePoseResource blazePoseResource;
  // Select neural network models with pull down on the Unity Editor.
  [SerializeField] BlazePoseModel poseLandmarkModel;

  BlazePoseDetecter detecter;

  void Start(){
      detecter = new BlazePoseDetecter(blazePoseResource, poseLandmarkModel);
  }

  void Update(){
      Texture input = ...; // Your input image texture

      // Predict the pose with the neural network model.
      // The model can be switched at any time via the 2nd argument.
      detecter.ProcessImage(input, poseLandmarkModel);

      /*
      `detecter.outputBuffer` is the pose landmark result: a ComputeBuffer of float4 elements.
      Indices 0-32 hold the pose landmarks.
          See the Mediapipe document below for the mapping between index and landmark position:
          https://google.github.io/mediapipe/solutions/pose#pose-landmark-model-blazepose-ghum-3d
          Each element's components are:
          x: x coordinate of the pose landmark ([0, 1]).
          y: y coordinate of the pose landmark ([0, 1]).
          z: Landmark depth, with the depth at the midpoint of the hips as the origin.
             The smaller the value, the closer the landmark is to the camera. ([0, 1]).
             **Using this value is not recommended. Use `worldLandmarkBuffer` if a z value is needed.**
          w: The score of whether the landmark position is visible ([0, 1]).

      Index 33 holds the score of whether a human pose is visible ([0, 1]),
      stored as (score, 0, 0, 0).
      */
      ComputeBuffer result = detecter.outputBuffer;

      /*
      `detecter.worldLandmarkBuffer` is the pose world landmark result: a ComputeBuffer of float4 elements.
      Indices 0-32 hold the pose world landmarks.
          See the Mediapipe document below for the mapping between index and landmark position:
          https://google.github.io/mediapipe/solutions/pose#pose-landmark-model-blazepose-ghum-3d
          Each element's components are:
          x, y and z: Real-world 3D coordinates in meters, with the origin at the center between the hips.
          w: The score of whether the world landmark position is visible ([0, 1]).

      Index 33 holds the score of whether a human pose is visible ([0, 1]),
      stored as (score, 0, 0, 0).
      */
      ComputeBuffer worldLandmarkResult = detecter.worldLandmarkBuffer;

      // `detecter.vertexCount` is the number of pose landmark vertices; it returns 33.
      int count = detecter.vertexCount;

      // Your custom processing (e.g. rendering) goes here.
      // As an example, the code below logs the data on the CPU.
      var data = new Vector4[count];
      result.GetData(data);
      Debug.Log("---");
      foreach(var d in data){
        Debug.Log(d);
      }

      worldLandmarkResult.GetData(data);
      Debug.Log("---");
      foreach(var d in data){
        Debug.Log(d);
      }
  }
}
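The demo above reads both buffers back to the CPU only for logging. As a further sketch, assuming the buffer layout described in the comments (indices 0-32 are landmarks, index 33 is the pose score), the snippet below gates on the pose score and maps a landmark from normalized coordinates to pixels. The `screenWidth`/`screenHeight` values are illustrative assumptions, not part of the package API.

```csharp
// Sketch only: `detecter` is the BlazePoseDetecter from the demo above.
// `screenWidth`/`screenHeight` are hypothetical values for your render target.
var data = new Vector4[detecter.vertexCount + 1]; // 33 landmarks + 1 score entry
detecter.outputBuffer.GetData(data);

// Index 33 is (score, 0, 0, 0): confidence that a human pose is visible.
float poseScore = data[33].x;
if (poseScore > 0.5f) {
    // Landmark index 0 is the nose in the Mediapipe landmark table.
    Vector4 nose = data[0];
    // x and y are normalized to [0, 1]; scale them to pixel coordinates.
    // (Whether y needs flipping depends on how you render the input texture.)
    float px = nose.x * screenWidth;
    float py = nose.y * screenHeight;
    Debug.Log($"Nose: ({px}, {py}), visibility {nose.w}");
}
```

Note that `GetData` stalls the pipeline by reading the GPU buffer back every frame; for rendering it is cheaper to bind `outputBuffer` directly to a material or compute shader.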

Demo Image

The videos for the demo scenes ("/Assets/Scenes/2DSampleScene.unity" and "/Assets/Scenes/3DSampleScene.unity") were downloaded from Pixabay.

Author

IKEP

LICENSE

Copyright (c) 2021 IKEP

Apache-2.0

