
face-api.js

JavaScript API for face detection and face recognition in the browser implemented on top of the tensorflow.js core API (tensorflow/tfjs-core)

Examples

Face Recognition

(preview images: face detection and recognition, face recognition)

Face Similarity

(preview image: face similarity)

Face Landmarks

(preview images: face landmarks with boxes, face landmarks)

Live Video Face Detection

(preview image: live video face detection)

Face Alignment

(preview image: face alignment)

Running the Examples

cd examples
npm i
npm start

Browse to http://localhost:3000/.

About the Package

Face Detection

For face detection, this project implements an SSD (Single Shot Multibox Detector) based on MobileNetV1. The neural net computes the locations of each face in an image and returns the bounding boxes together with a probability for each face.

The face detection model has been trained on the WIDERFACE dataset and the weights are provided by yeephycho in this repo.

Face Recognition

For face recognition, a ResNet-34-like architecture is implemented to compute a face descriptor (a feature vector with 128 values) from any given face image, which is used to describe the characteristics of a person's face. The model is not limited to the set of faces used for training, meaning you can use it for face recognition of any person, for example yourself. You can determine the similarity of two arbitrary faces by comparing their face descriptors, for example by computing the Euclidean distance, or by using any other classifier of your choice.
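
For illustration, comparing two 128-value descriptors by Euclidean distance boils down to the following plain JavaScript sketch (the library also ships a helper for this, shown in the Usage section below):

// sketch: Euclidean distance between two face descriptors
function euclideanDistance(d1, d2) {
  let sum = 0
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i]
    sum += diff * diff
  }
  return Math.sqrt(sum)
}

// descriptors of the same person typically yield a small distance,
// a threshold of around 0.6 is a common starting point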

The neural net is equivalent to the FaceRecognizerNet used in face-recognition.js and the net used in the dlib face recognition example. The weights have been trained by davisking and the model achieves a prediction accuracy of 99.38% on the LFW (Labeled Faces in the Wild) benchmark for face recognition.

Face Landmark Detection

This package implements a CNN to detect the 68-point face landmarks for a given face image.

The model has been trained on a variety of public datasets and the model weights are provided by yinguobing in this repo.

Usage

Get the latest build from dist/face-api.js or dist/face-api.min.js and include the script:

<script src="face-api.js"></script>

Or install the package:

npm i face-api.js
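
When consuming the npm package with a bundler, you would typically import the faceapi namespace (a minimal sketch):

import * as faceapi from 'face-api.js'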

Face Detection

Download the weights file from your server and initialize the net (note that your server has to host the face_detection_model.weights file).

// initialize the face detector
const res = await axios.get('face_detection_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
const detectionNet = faceapi.faceDetectionNet(weights)
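
If you prefer not to depend on axios, the same weights can be loaded with the browser's fetch API (a sketch, assuming the weights file is served from the same origin):

// sketch: load a .weights file with fetch instead of axios
async function loadWeights(uri) {
  const res = await fetch(uri)
  return new Float32Array(await res.arrayBuffer())
}

const detectionNet = faceapi.faceDetectionNet(await loadWeights('face_detection_model.weights'))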

Detect faces and get the bounding boxes and scores:

// optional arguments
const minConfidence = 0.8
const maxResults = 10

// inputs can be an html canvas, img or video element, or their ids ...
const myImg = document.getElementById('myImg')
const detections = await detectionNet.locateFaces(myImg, minConfidence, maxResults)

Draw the detected faces to a canvas:

// resize the detected boxes in case your displayed image has a different size than the original
const detectionsForSize = detections.map(det => det.forSize(myImg.width, myImg.height))
const canvas = document.getElementById('overlay')
canvas.width = myImg.width
canvas.height = myImg.height
faceapi.drawDetection(canvas, detectionsForSize, { withScore: false })

You can also obtain the tensors of the unfiltered bounding boxes and scores for each image in the batch (tensors have to be disposed manually):

const { boxes, scores } = detectionNet.forward('myImg')
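
Since these are raw tensors, remember to free them once you are done (a sketch, assuming boxes and scores are single tf.Tensor instances):

// free the tensors once you are done with them
boxes.dispose()
scores.dispose()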

Face Recognition

Download the weights file from your server and initialize the net (note that your server has to host the face_recognition_model.weights file).

// initialize the face recognizer
const res = await axios.get('face_recognition_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
const recognitionNet = faceapi.faceRecognitionNet(weights)

Compute and compare the descriptors of two face images:

// inputs can be an html canvas, img or video element, or their ids ...
const descriptor1 = await recognitionNet.computeFaceDescriptor('myImg')
const descriptor2 = await recognitionNet.computeFaceDescriptor(document.getElementById('myCanvas'))
const distance = faceapi.euclideanDistance(descriptor1, descriptor2)

if (distance < 0.6)
  console.log('match')
else
  console.log('no match')
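
Beyond a pairwise comparison, a common pattern is to match a query descriptor against a set of labeled reference descriptors and pick the closest one (a sketch; the reference descriptors, labels and the 0.6 threshold are assumptions for illustration):

// sketch: nearest-neighbour matching against labeled reference descriptors
const references = [
  { label: 'alice', descriptor: aliceDescriptor },
  { label: 'bob', descriptor: bobDescriptor }
]

function findBestMatch(queryDescriptor, maxDistance = 0.6) {
  let best = { label: 'unknown', distance: Infinity }
  for (const ref of references) {
    const distance = faceapi.euclideanDistance(queryDescriptor, ref.descriptor)
    if (distance < best.distance) {
      best = { label: ref.label, distance }
    }
  }
  // treat anything above the threshold as an unknown face
  return best.distance <= maxDistance ? best : { label: 'unknown', distance: best.distance }
}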

You can also get the face descriptor data synchronously:

const desc = recognitionNet.computeFaceDescriptorSync('myImg')

Or simply obtain the tensor (tensor has to be disposed manually):

const t = recognitionNet.forward('myImg')

Face Landmark Detection

Download the weights file from your server and initialize the net (note that your server has to host the face_landmark_68_model.weights file).

// initialize the face landmark net
const res = await axios.get('face_landmark_68_model.weights', { responseType: 'arraybuffer' })
const weights = new Float32Array(res.data)
const faceLandmarkNet = faceapi.faceLandmarkNet(weights)

Detect face landmarks:

// inputs can be an html canvas, img or video element, or their ids ...
const myImg = document.getElementById('myImg')
const landmarks = await faceLandmarkNet.detectLandmarks(myImg)

Draw the detected face landmarks to a canvas:

// adjust the landmark positions in case your displayed image has a different size than the original
const landmarksForSize = landmarks.forSize(myImg.width, myImg.height)
const canvas = document.getElementById('overlay')
canvas.width = myImg.width
canvas.height = myImg.height
faceapi.drawLandmarks(canvas, landmarksForSize, { drawLines: true })

Retrieve the face landmark positions:

const landmarkPositions = landmarks.getPositions()

// or get the positions of individual contours
const jawOutline = landmarks.getJawOutline()
const nose = landmarks.getNose()
const mouth = landmarks.getMouth()
const leftEye = landmarks.getLeftEye()
const rightEye = landmarks.getRightEye()
const leftEyeBrow = landmarks.getLeftEyeBrow()
const rightEyeBrow = landmarks.getRightEyeBrow()
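
The returned positions are simple points, so you can derive your own measurements from them. For example, you could estimate the roll angle of the face from the two eye centers (a sketch, assuming each position exposes x and y properties):

// sketch: compute the centroid of a set of landmark points
function centroid(points) {
  const sum = points.reduce((acc, pt) => ({ x: acc.x + pt.x, y: acc.y + pt.y }), { x: 0, y: 0 })
  return { x: sum.x / points.length, y: sum.y / points.length }
}

const leftEyeCenter = centroid(landmarks.getLeftEye())
const rightEyeCenter = centroid(landmarks.getRightEye())
// roll angle in radians, derived from the line connecting the eye centers
const rollAngle = Math.atan2(rightEyeCenter.y - leftEyeCenter.y, rightEyeCenter.x - leftEyeCenter.x)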

Compute the Face Landmarks for Detected Faces:

const detections = await detectionNet.locateFaces(input)

// get the face tensors from the image (have to be disposed manually)
const faceTensors = await faceapi.extractFaceTensors(input, detections)
const landmarksByFace = await Promise.all(faceTensors.map(t => faceLandmarkNet.detectLandmarks(t)))

// free memory for face image tensors after we computed their landmarks
faceTensors.forEach(t => t.dispose())

Full Face Detection and Recognition Pipeline

After face detection has been performed, I would recommend aligning the bounding boxes of the detected faces before passing them to the face recognition net, which will make the computed face descriptors much more accurate. You can easily align the faces from their face landmark positions, as shown in the following example:

// first detect the face locations
const detections = await detectionNet.locateFaces(input)

// get the face tensors from the image (have to be disposed manually)
const faceTensors = (await faceapi.extractFaceTensors(input, detections))

// detect landmarks and get the aligned face image bounding boxes
const alignedFaceBoxes = await Promise.all(faceTensors.map(
  async (faceTensor, i) => {
    const faceLandmarks = await faceLandmarkNet.detectLandmarks(faceTensor)
    return faceLandmarks.align(detections[i])
  }
))

// free memory for face image tensors after we detected the face landmarks
faceTensors.forEach(t => t.dispose())

// get the face tensors for the aligned face images from the image (have to be disposed manually)
const alignedFaceTensors = (await faceapi.extractFaceTensors(input, alignedFaceBoxes))

// compute the face descriptors from the aligned face images
const descriptors = await Promise.all(alignedFaceTensors.map(
  faceTensor => recognitionNet.computeFaceDescriptor(faceTensor)
))

// free memory for face image tensors after we computed their descriptors
alignedFaceTensors.forEach(t => t.dispose())
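
From here, the computed descriptors can be matched against previously stored reference descriptors to put a label on each detected face (a sketch, reusing the illustrative findBestMatch helper from the Face Recognition section above):

// sketch: label each detected face via its descriptor
const labels = descriptors.map(descriptor => findBestMatch(descriptor).label)
labels.forEach((label, i) => console.log(`face ${i}: ${label}`))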
