richmondu / libfaceid

libfaceid is a research framework for prototyping face recognition solutions. It seamlessly integrates multiple detection, recognition and liveness models with speech synthesis and speech recognition.

License: MIT License

Languages: Python 98.34%, Batchfile 1.56%, HTML 0.10%
Topics: face-detection, face-recognition, facenet, openface, dlib, pose-estimation, age-detection, gender-detection, emotion-detection, deep-learning

libfaceid's Introduction

libfaceid, a Face Recognition library for everybody

Face Recognition made easy. libfaceid is a Python library for facial recognition that seamlessly integrates multiple face detection and face recognition models.

From Zero to Hero. Learn the basics of Face Recognition and experiment with different models. libfaceid enables beginners to learn various models and simplifies prototyping of facial recognition solutions by providing a comprehensive list of models to choose from. Multiple models for detection and encoding/embedding, including classification models, are supported, from the basic (Haar Cascades + LBPH) to the more advanced (MTCNN + FaceNet). The models are seamlessly integrated so that users can mix and match them: each detector model has been made compatible with each embedding model to abstract away their differences. Each model differs in speed, accuracy, memory requirements and third-party library dependencies, so users can easily experiment with the solutions appropriate for their specific use cases and system requirements. In addition, face liveness detection models are provided to counter face spoofing attacks (photo-based, video-based and 3D-mask-based attacks).

Awesome Design. The library is designed to be easy to use, modular and robust. Model selection is done via the constructors, while the exposed functions are simply detect() or estimate(), making usage very easy. The files are organized into modules, so the code is intuitive to understand and debug. The modular design also makes supporting new models in the future straightforward.
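
To make the mix-and-match design concrete, below is a minimal sketch; the model choices (MTCNN, FaceNet) and paths are illustrative, taken from the examples later in this README.

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder import FaceEncoderModels, FaceEncoder

    # Swap MTCNN for HAARCASCADE, or FACENET for LBPH, without touching the rest.
    face_detector = FaceDetector(model=FaceDetectorModels.MTCNN, path="models/detection/")
    face_encoder = FaceEncoder(model=FaceEncoderModels.FACENET, path="models/encoding/",
                               path_training="models/training/", training=False)

    frame = cv2.imread("datasets/Person1/1.jpg")
    faces = face_detector.detect(frame)        # same detect() call for every detector model
    for face in faces:
        face_id, confidence = face_encoder.identify(frame, face)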

Extra Cool Features. The library contains models for predicting your age, gender, emotion and facial landmarks. It also contains TTS (text-to-speech, speech synthesis) and STT (speech-to-text, speech recognition) models for voice-enabled and voice-activated capabilities. The voice-enabled feature allows the system to speak your name after recognizing your face. The voice-activated feature allows the system to listen for a specified word or phrase that triggers it to do something (wake-word/trigger-word/hotword detection). A web app is also supported for some of the test applications using Flask, so you can view the video capture remotely on another computer in the same network via a web browser.

News:

Date Milestones
2018, Dec 29 Integrated Colorspace histogram concatenation for anti-face spoofing (face liveness detection)
2018, Dec 26 Integrated Google Cloud's STT speech-to-text (speech recognition) for voice-activated capability
2018, Dec 19 Integrated Google's Tacotron TTS text-to-speech (speech synthesis) for voice-enabled capability
2018, Dec 13 Integrated Google's FaceNet face embedding
2018, Nov 30 Committed libfaceid to Github

Background:

With Apple incorporating face recognition technology in the iPhone X in 2017, and with China implementing widespread nationwide surveillance for its social credit system on a grand scale, face recognition has become one of the most popular technologies where Deep Learning is applied. Face recognition is used for identity authentication, access control, passport verification in airports, law enforcement, forensic investigations, social media platforms, disease diagnosis, police surveillance, casino watchlists and many more.

Modern state-of-the-art face recognition solutions leverage graphics processing units (GPUs), which have improved dramatically over the decades. (In particular, Nvidia released the CUDA framework, which allows C and C++ applications to utilize the GPU for massively parallel computing.) These solutions use Deep Learning (aka Neural Networks), which requires GPU power to perform massive compute operations in parallel. Deep Learning is one approach to Artificial Intelligence that simulates how the brain functions by teaching software through many examples (big data), instead of hardcoding logic rules and decision trees in the software. (One important contribution to Deep Learning was the creation of the ImageNet dataset, a big data collection of millions of labelled and classified images for teaching computers image classification.) Neural networks are basically layers of nodes, where each node is connected to nodes in the next layer, feeding information forward. Deep nets are very deep neural networks with many layers, made possible by GPU compute power. Many neural network topologies exist, such as the Convolutional Neural Network (CNN) architecture, which particularly applies to Computer Vision, from image classification to face recognition.

Introduction:

A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source. At a minimum, a simple real-time facial recognition system is composed of the following pipeline:

  1. Face Enrollment. Registering faces to a database which includes pre-computing the face embeddings and training a classifier on top of the face embeddings of registered individuals.
  2. Face Capture. Reading a frame image from a camera source.
  3. Face Detection. Detecting faces in a frame image.
  4. Face Encoding/Embedding. Generating a mathematical representation of each face (coined as embedding) in the frame image.
  5. Face Identification. Comparing each face embedding in an image against the face embeddings of known people in a database.

More complex systems include features such as Face Liveness Detection (to counter spoofing attacks via photo, video or 3D mask), face alignment, face augmentation (to increase the number of images in the dataset) and face verification (to confirm a prediction by comparing the cosine similarity or Euclidean distance with each database embedding).
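
For the face verification step, here is a minimal sketch of the two distance measures mentioned above; the 0.6 Euclidean threshold is dlib's published value for its ResNet embeddings and is illustrative, so tune it per embedding model.

    import numpy as np

    emb1 = np.random.rand(128)   # stand-ins for two real 128-dimensional face embeddings
    emb2 = np.random.rand(128)

    cosine_similarity = np.dot(emb1, emb2) / (np.linalg.norm(emb1) * np.linalg.norm(emb2))
    euclidean_distance = np.linalg.norm(emb1 - emb2)

    same_person = euclidean_distance < 0.6   # illustrative threshold; tune per embedding model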

Problem:

libfaceid democratizes learning Face Recognition. Popular models such as FaceNet and OpenFace are not straightforward to use and don't provide easy-to-follow guidelines on installation and setup. So far, dlib has been the best in terms of documentation and usage, but installation is not straightforward, it is slow on CPUs, and it is highly abstracted (it abstracts OpenCV as well). Simple solutions such as OpenCV are good but too basic, and documentation is lacking on parameter settings, classification algorithms and the end-to-end pipeline. Pyimagesearch has been great, with several tutorials and easy-to-understand explanations, but there is not much emphasis on model comparisons, and it seems to aim to sell books, so the intentions to help the community are not so pure after all (I hate that you need to wait for two marketing emails to arrive just to download the source code for the tutorials, but I love that he replies to all questions in the threads). With all this said, I've learned a lot from all these resources, so I'm sure you will learn a lot too.

libfaceid was created to address these problems and fill in the gaps left by these resources. It seamlessly integrates multiple models for each step of the pipeline, enabling anybody, especially beginners in Computer Vision and Deep Learning, to easily learn and experiment with a comprehensive end-to-end face recognition pipeline. No strings attached. Once you have experimented with all the models and have chosen specific models for your specific use case and system requirements, you can explore the more advanced models like FaceNet.

Design:

libfaceid is designed to be easy to use, modular and robust. Model selection is done via the constructors, while the exposed functions are simply detect() or estimate(), making usage very easy. The files are organized into modules, so the code is intuitive to understand and debug. The modular design also makes supporting new models in the future straightforward.

Only pretrained models are supported. Transfer learning is the practice of applying a model pretrained on a very large dataset to a new dataset: a model trained on enough data becomes 'experienced' enough to generalize what it has learned to new environments and new datasets. Transfer learning is one of the major factors in the explosion of popularity of Computer Vision, not only for face recognition but especially for object detection. Just recently (mid-2018), transfer learning has also been making good advances in Natural Language Processing (BERT by Google and ELMo by the Allen Institute). Transfer learning is really useful, and it is a main goal that the community working on Reinforcement Learning wants to achieve for robotics.

Features:

Having several images per person is not possible for some use cases of Face Recognition, so finding the appropriate model that balances accuracy and speed on the target hardware platform (CPU, GPU or embedded system) is necessary. The trinity of AI is Data, Algorithms and Compute. libfaceid allows selecting each model/algorithm in the pipeline.

libfaceid library supports several models for each step of the Face Recognition pipeline. Some models are faster while some models are more accurate. You can mix and match the models for your specific use-case, hardware platform and system requirements.

Face Detection models for detecting face locations

Face Encoding models for generating face embeddings on detected faces

Classification algorithms for Face Identification using face embeddings (a scikit-learn sketch follows the list below)

  • Naïve Bayes
  • Linear SVM
  • RBF SVM
  • Nearest Neighbors
  • Decision Tree
  • Random Forest
  • Neural Net
  • Adaboost
  • QDA
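
As a hedged sketch of what happens under the hood, the snippet below trains the Linear SVM from the list above directly with scikit-learn on stand-in embeddings; libfaceid wraps this behind FaceEncoder.train(), so the wrapper API differs.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import LabelEncoder

    embeddings = np.random.rand(20, 128)           # stand-ins for real 128-d face embeddings
    names = ["Person1"] * 10 + ["Person2"] * 10    # one label per embedding

    le = LabelEncoder()
    labels = le.fit_transform(names)

    clf = SVC(kernel="linear", probability=True)   # Linear SVM; swap in any classifier listed above
    clf.fit(embeddings, labels)

    probabilities = clf.predict_proba(embeddings[:1])[0]
    best = np.argmax(probabilities)
    print(le.classes_[best], probabilities[best])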

Face Liveness Detection models for preventing spoofing attacks

Additional models (bonus features):

  • TTS Text-To-Speech (speech synthesis) models for voice-enabled capability
  • STT Speech-To-Text (speech recognition) models for voice-activated capability
  • Face Pose estimator models for predicting face landmarks (face landmark detection)
  • Face Age estimator models for predicting age (age detection)
  • Face Gender estimator models for predicting gender (gender detection)
  • Face Emotion estimator models for predicting facial expression (emotion detection)

Compatibility:

The library and example applications have been tested on Raspberry Pi 3B+ (Python 3.5.3) and Windows 7 (Python 3.6.6) using OpenCV 3.4.3.18, Tensorflow 1.8.0 and Keras 2.0.8. For complete dependencies, refer to requirements.txt. Tested with built-in laptop camera and with a Logitech C922 Full-HD USB webcam.

I encountered a DLL issue with OpenCV 3.4.3.18 on my Windows 7 laptop. If you encounter this issue, use OpenCV 3.4.1.15 or 3.3.1.11 instead. Also note that opencv-python and opencv-contrib-python must always have the same version.
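
For example, to switch to a matching pair of older builds:

    pip uninstall opencv-python opencv-contrib-python
    pip install opencv-python==3.4.1.15 opencv-contrib-python==3.4.1.15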

Usage:

Installation:

    1. Install Python 3 and Python PIP
       Use Python 3.5.3 for Raspberry Pi 3B+ and Python 3.6.6 for Windows
    2. Install the required Python PIP package dependencies using requirements.txt
       pip install -r requirements.txt

       This installs the following dependencies:
       opencv-python==3.4.3.18
       opencv-contrib-python==3.4.3.18
       numpy==1.15.4
       imutils==0.5.1
       scipy==1.1.0
       scikit-learn==0.20.0
       mtcnn==0.0.8
       tensorflow==1.8.0
       keras==2.0.8
       h5py==2.8.0
       facenet==1.0.3
       flask==1.0.2
       dlib==19.16.0 # requires CMake
       
        # Installing dlib (requires CMake):
        1. Install CMake from https://cmake.org/download/ and then pip install dlib, OR
        2. Install a prebuilt wheel directly:
           pip install https://files.pythonhosted.org/packages/0e/ce/f8a3cff33ac03a8219768f0694c5d703c8e037e6aba2e865f9bae22ed63c/dlib-19.8.1-cp36-cp36m-win_amd64.whl#sha256=794994fa2c54e7776659fddb148363a5556468a6d5d46be8dad311722d54bfcf


    3. Optional: Install the required Python PIP package dependencies for the speech synthesizer and speech recognizer (voice capability)
       pip install -r requirements_with_voicecapability.txt

       This installs the following additional dependencies:
       playsound==1.2.2
       inflect==0.2.5
       librosa==0.4.2
       unidecode==0.4.20
       pyttsx3==2.7
       gtts==2.0.3
       speechrecognition==3.8.1

       Additional items to install: 
       On Windows, install pypiwin32 using "pip install pypiwin32==223"
       On RPI, 
           sudo apt-get install espeak
           sudo apt-get install python-espeak
           sudo apt-get install portaudio19-dev
           pip3 install pyaudio
           [Microphone Setup on RPI](https://iotbytes.wordpress.com/connect-configure-and-test-usb-microphone-and-speaker-with-raspberry-pi/)

Quickstart (Dummy Guide):

    1. Add your dataset
       ex. datasets/person1/1.jpg, datasets/person2/1.jpg
    2. Train your model with your dataset
       Update training.bat to specify your chosen models
       Run training.bat
    3. Test your model
       Update testing_image.bat to specify your chosen models
       Run testing_image.bat

Folder structure:

    libfaceid
    |
    |   agegenderemotion_webcam.py
    |   testing_image.py
    |   testing_webcam.py
    |   testing_webcam_livenessdetection.py
    |   testing_webcam_voiceenabled.py
    |   testing_webcam_voiceenabled_voiceactivated.py
    |   training.py
    |   requirements.txt
    |   requirements_with_voicecapability.txt
    |   
    +---libfaceid
    |   |   age.py
    |   |   classifier.py
    |   |   detector.py
    |   |   emotion.py
    |   |   encoder.py
    |   |   gender.py
    |   |   liveness.py
    |   |   pose.py
    |   |   speech_synthesizer.py
    |   |   speech_recognizer.py
    |   |   __init__.py
    |   |   
    |   \---tacotron
    |           
    +---models
    |   +---detection
    |   |       deploy.prototxt
    |   |       haarcascade_frontalface_default.xml
    |   |       mmod_human_face_detector.dat
    |   |       res10_300x300_ssd_iter_140000.caffemodel
    |   |       
    |   +---encoding
    |   |       dlib_face_recognition_resnet_model_v1.dat
    |   |       facenet_20180402-114759.pb
    |   |       openface_nn4.small2.v1.t7
    |   |       shape_predictor_5_face_landmarks.dat
    |   |           
    |   +---estimation
    |   |       age_deploy.prototxt
    |   |       age_net.caffemodel
    |   |       emotion_deploy.json
    |   |       emotion_net.h5
    |   |       gender_deploy.prototxt
    |   |       gender_net.caffemodel
    |   |       shape_predictor_68_face_landmarks.dat
    |   |       shape_predictor_68_face_landmarks.jpg
    |   |               
    |   +---liveness
    |   |       colorspace_ycrcbluv_print.pkl
    |   |       colorspace_ycrcbluv_replay.pkl
    |   |       shape_predictor_68_face_landmarks.dat
    |   |               
    |   +---synthesis
    |   |   \---tacotron-20180906
    |   |           model.ckpt.data-00000-of-00001
    |   |           model.ckpt.index
    |   |           
    |   \---training // This is generated during training (ex. facial_recognition_training.py)
    |           dlib_le.pickle
    |           dlib_re.pickle
    |           facenet_le.pickle
    |           facenet_re.pickle
    |           lbph.yml
    |           lbph_le.pickle
    |           openface_le.pickle
    |           openface_re.pickle
    |
    +---audiosets // This is generated during training (ex. facial_recognition_training.py)
    |       Person1.wav or Person1.mp3
    |       Person2.wav or Person2.mp3
    |       Person3.wav or Person3.mp3
    |       
    +---datasets // This is generated by user
    |   +---Person1
    |   |       1.jpg
    |   |       2.jpg
    |   |       ...
    |   |       X.jpg
    |   |       
    |   +---Person2
    |   |       1.jpg
    |   |       2.jpg
    |   |       ...
    |   |       X.jpg
    |   |       
    |   \---Person3
    |           1.jpg
    |           2.jpg
    |           ...
    |           X.jpg
    |           
    \---templates

Pre-requisites:

    1. Add the dataset of images under the datasets directory.
       The datasets folder should be in the same location as the test applications.
       Having more images per person makes accuracy much better.
       If only 1 image is possible per person, then do data augmentation (see the sketch after this list).
         Example:
         datasets/Person1 - contains images of a person named Person1
         datasets/Person2 - contains images of a person named Person2
         ...
         datasets/PersonX - contains images of a person named PersonX
    2. Train the model using the datasets.
       You can use training.py.
       Make sure the models used for training are the same as those used for actual testing, for better accuracy.
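
A minimal data augmentation sketch using plain OpenCV (an assumed approach, independent of libfaceid): generate extra images from a single photo by flipping and slightly rotating it.

    import cv2

    image = cv2.imread("datasets/Person1/1.jpg")
    h, w = image.shape[:2]

    cv2.imwrite("datasets/Person1/2.jpg", cv2.flip(image, 1))     # horizontal flip

    for index, angle in enumerate((-10, 10), start=3):            # slight rotations
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        cv2.imwrite("datasets/Person1/%d.jpg" % index, cv2.warpAffine(image, M, (w, h)))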

Examples:

    detector models:           0-HAARCASCADE, 1-DLIBHOG, 2-DLIBCNN, 3-SSDRESNET, 4-MTCNN, 5-FACENET
    encoder models:            0-LBPH, 1-OPENFACE, 2-DLIBRESNET, 3-FACENET
    classifier algorithms:     0-NAIVE_BAYES, 1-LINEAR_SVM, 2-RBF_SVM, 3-NEAREST_NEIGHBORS, 4-DECISION_TREE, 5-RANDOM_FOREST, 6-NEURAL_NET, 7-ADABOOST, 8-QDA
    liveness models:           0-EYESBLINK_MOUTHOPEN, 1-COLORSPACE_YCRCBLUV
    speech synthesizer models: 0-TTSX3, 1-TACOTRON, 2-GOOGLECLOUD
    speech recognition models: 0-GOOGLECLOUD, 1-WITAI, 2-HOUNDIFY
    camera resolution:         0-QVGA, 1-VGA, 2-HD, 3-FULLHD

    1. Training with datasets
        Usage: python training.py --detector 0 --encoder 0 --classifier 0
        Usage: python training.py --detector 0 --encoder 0 --classifier 0 --setsynthesizer True --synthesizer 0

    2. Testing with images
        Usage: python testing_image.py --detector 0 --encoder 0 --image datasets/rico/1.jpg

    3. Testing with a webcam
        Usage: python testing_webcam.py --detector 0 --encoder 0 --webcam 0 --resolution 0
        Usage: python testing_webcam_flask.py
               Then open a browser and go to http://127.0.0.1:5000 or http://ip_address:5000
            
    4. Testing with a webcam with anti-spoofing attacks
        Usage: python testing_webcam_livenessdetection.py --detector 0 --encoder 0 --liveness 0 --webcam 0 --resolution 0

    5. Testing with voice-control
        Usage: python testing_webcam_voiceenabled.py --detector 0 --encoder 0 --speech_synthesizer 0 --webcam 0 
        Usage: python testing_webcam_voiceenabled_voiceactivated.py --detector 0 --encoder 0 --speech_synthesizer 0 --speech_recognition 0 --webcam 0 --resolution 0

    6. Testing age/gender/emotion detection
        Usage: python agegenderemotion_webcam.py --detector 0 --webcam 0 --resolution 0
        Usage: python agegenderemotion_webcam_flask.py
               Then open a browser and go to http://127.0.0.1:5000 or http://ip_address:5000 (a minimal Flask streaming sketch follows these examples)
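
For reference, here is a minimal Flask MJPEG streaming sketch showing how the web-app examples can serve frames to a browser; this is an illustrative reduction, not the repo's testing_webcam_flask.py verbatim.

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    camera = cv2.VideoCapture(0)

    def generate_frames():
        while True:
            ret, frame = camera.read()
            if not ret:
                break
            # Run face detection/recognition on 'frame' here, as in the examples below.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

    @app.route("/")
    def index():
        return Response(generate_frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)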

Training models with dataset of images:

    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
    from libfaceid.classifier  import FaceClassifierModels

    INPUT_DIR_DATASET         = "datasets"
    INPUT_DIR_MODEL_DETECTION = "models/detection/"
    INPUT_DIR_MODEL_ENCODING  = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING  = "models/training/"

    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=True)
    verify = False  # assumed flag: set True to verify predictions during training
    face_encoder.train(face_detector, path_dataset=INPUT_DIR_DATASET, verify=verify, classifier=FaceClassifierModels.NAIVE_BAYES)

    # Generate audio samples for the image datasets using the text-to-speech synthesizer
    OUTPUT_DIR_AUDIOSET       = "audiosets/"
    INPUT_DIR_MODEL_SYNTHESIS = "models/synthesis/"
    from libfaceid.speech_synthesizer import SpeechSynthesizerModels, SpeechSynthesizer
    speech_synthesizer = SpeechSynthesizer(model=SpeechSynthesizerModels.DEFAULT, path=INPUT_DIR_MODEL_SYNTHESIS, path_output=OUTPUT_DIR_AUDIOSET)
    speech_synthesizer.synthesize_datasets(INPUT_DIR_DATASET)
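
After training completes, the generated classifier and label encoder files (for example facenet_re.pickle and facenet_le.pickle when using the FaceNet encoder) are written under models/training/, matching the folder structure shown earlier.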

Face Recognition on images:

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder

    INPUT_DIR_MODEL_DETECTION = "models/detection/"
    INPUT_DIR_MODEL_ENCODING  = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING  = "models/training/"

    image = cv2.VideoCapture(imagePath)   # imagePath: path to the input image file
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)

    ret, frame = image.read()
    faces = face_detector.detect(frame)
    for (index, face) in enumerate(faces):
        face_id, confidence = face_encoder.identify(frame, face)
        label_face(frame, face, face_id, confidence)   # label_face() is an application-defined drawing helper
    cv2.imshow(window_name, frame)   # window_name is likewise defined by the application
    cv2.waitKey(5000)

    image.release()
    cv2.destroyAllWindows()

Basic Real-Time Face Recognition (w/a webcam):

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder

    INPUT_DIR_MODEL_DETECTION = "models/detection/"
    INPUT_DIR_MODEL_ENCODING  = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING  = "models/training/"

    camera = cv2.VideoCapture(webcam_index)   # webcam_index is typically 0 for the built-in camera
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)

    while True:
        ret, frame = camera.read()
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):
            face_id, confidence = face_encoder.identify(frame, face)
            label_face(frame, face, face_id, confidence)
        cv2.imshow(window_name, frame)
        cv2.waitKey(1)

    camera.release()
    cv2.destroyAllWindows()

Real-Time Face Recognition With Liveness Detection (w/a webcam):

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
    from libfaceid.liveness import FaceLivenessModels, FaceLiveness

    INPUT_DIR_MODEL_DETECTION  = "models/detection/"
    INPUT_DIR_MODEL_ENCODING   = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING   = "models/training/"
    INPUT_DIR_MODEL_ESTIMATION = "models/estimation/"
    INPUT_DIR_MODEL_LIVENESS   = "models/liveness/"

    camera = cv2.VideoCapture(webcam_index)
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)
    face_liveness = FaceLiveness(model=FaceLivenessModels.EYESBLINK_MOUTHOPEN, path=INPUT_DIR_MODEL_ESTIMATION)
    face_liveness2 = FaceLiveness(model=FaceLivenessModels.COLORSPACE_YCRCBLUV, path=INPUT_DIR_MODEL_LIVENESS)

    while True:
        ret, frame = camera.read()
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):

            # Check if the eyes are closed and if the mouth is open
            eyes_close, eyes_ratio = face_liveness.is_eyes_close(frame, face)
            mouth_open, mouth_ratio = face_liveness.is_mouth_open(frame, face)

            # Detect if the frame is a print attack or a replay attack based on colorspace
            is_fake_print  = face_liveness2.is_fake(frame, face)
            is_fake_replay = face_liveness2.is_fake(frame, face, flag=1)

            # Identify the face only if it is not fake, the eyes are open and the mouth is closed
            if is_fake_print or is_fake_replay:
                face_id, confidence = ("Fake", None)
            elif not eyes_close and not mouth_open:
                face_id, confidence = face_encoder.identify(frame, face)
            else:
                face_id, confidence = (None, None)  # skip identification for this frame

            label_face(frame, face, face_id, confidence)

        # Monitor eye blinking and mouth opening for liveness detection
        # (the counters and thresholds below are maintained by the application, as in testing_webcam_livenessdetection.py)
        total_eye_blinks, eye_counter = monitor_eye_blinking(eyes_close, eyes_ratio, total_eye_blinks, eye_counter, eye_continuous_close)
        total_mouth_opens, mouth_counter = monitor_mouth_opening(mouth_open, mouth_ratio, total_mouth_opens, mouth_counter, mouth_continuous_open)

        cv2.imshow(window_name, frame)
        cv2.waitKey(1)

    camera.release()
    cv2.destroyAllWindows()

Voice-Enabled Real-Time Face Recognition (w/a webcam):

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
    from libfaceid.speech_synthesizer import SpeechSynthesizerModels, SpeechSynthesizer

    INPUT_DIR_MODEL_DETECTION = "models/detection/"
    INPUT_DIR_MODEL_ENCODING  = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING  = "models/training/"
    INPUT_DIR_AUDIOSET        = "audiosets"

    camera = cv2.VideoCapture(webcam_index)
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)
    speech_synthesizer = SpeechSynthesizer(model=SpeechSynthesizerModels.DEFAULT, path=None, path_output=None, training=False)

    frame_count = 0
    while True:
        ret, frame = camera.read()
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):
            face_id, confidence = face_encoder.identify(frame, face)
            label_face(frame, face, face_id, confidence)
            if (frame_count % 120 == 0):
                # Speak the person's name (play the audio generated during enrollment)
                speech_synthesizer.playaudio(INPUT_DIR_AUDIOSET, face_id, block=False)
        cv2.imshow(window_name, frame)
        cv2.waitKey(1)
        frame_count += 1

    camera.release()
    cv2.destroyAllWindows()

Voice-Activated and Voice-Enabled Real-Time Face Recognition (w/a webcam):

    import time
    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
    from libfaceid.speech_synthesizer import SpeechSynthesizerModels, SpeechSynthesizer
    from libfaceid.speech_recognizer  import SpeechRecognizerModels,  SpeechRecognizer

    trigger_word_detected = False
    def speech_recognizer_callback(word):
        global trigger_word_detected  # required so the assignment below updates the flag polled in the loop
        print("Trigger word detected! '{}'".format(word))
        trigger_word_detected = True

    INPUT_DIR_MODEL_DETECTION = "models/detection/"
    INPUT_DIR_MODEL_ENCODING  = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING  = "models/training/"
    INPUT_DIR_AUDIOSET        = "audiosets"

    camera = cv2.VideoCapture(webcam_index)
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_encoder  = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)
    speech_synthesizer = SpeechSynthesizer(model=SpeechSynthesizerModels.DEFAULT, path=None, path_output=None, training=False)
    speech_recognizer  = SpeechRecognizer(model=SpeechRecognizerModels.DEFAULT, path=None)

    # Wait for a trigger word/wake word/hotword before starting face recognition
    TRIGGER_WORDS = ["Hey Google", "Alexa", "Activate", "Open Sesame"]
    print("\nWaiting for a trigger word: {}".format(TRIGGER_WORDS))
    speech_recognizer.start(TRIGGER_WORDS, speech_recognizer_callback)
    while not trigger_word_detected:
        time.sleep(1)
    speech_recognizer.stop()

    # Start face recognition
    frame_count = 0
    while True:
        ret, frame = camera.read()
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):
            face_id, confidence = face_encoder.identify(frame, face)
            label_face(frame, face, face_id, confidence)
            if (frame_count % 120 == 0):
                # Speak the person's name
                speech_synthesizer.playaudio(INPUT_DIR_AUDIOSET, face_id, block=False)
        cv2.imshow(window_name, frame)
        cv2.waitKey(1)
        frame_count += 1

    camera.release()
    cv2.destroyAllWindows()

Real-Time Face Pose/Age/Gender/Emotion Estimation (w/a webcam):

    import cv2
    from libfaceid.detector import FaceDetectorModels, FaceDetector
    from libfaceid.pose import FacePoseEstimatorModels, FacePoseEstimator
    from libfaceid.age import FaceAgeEstimatorModels, FaceAgeEstimator
    from libfaceid.gender import FaceGenderEstimatorModels, FaceGenderEstimator
    from libfaceid.emotion import FaceEmotionEstimatorModels, FaceEmotionEstimator

    INPUT_DIR_MODEL_DETECTION       = "models/detection/"
    INPUT_DIR_MODEL_ENCODING        = "models/encoding/"
    INPUT_DIR_MODEL_TRAINING        = "models/training/"
    INPUT_DIR_MODEL_ESTIMATION      = "models/estimation/"

    camera = cv2.VideoCapture(webcam_index)
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path=INPUT_DIR_MODEL_DETECTION)
    face_pose_estimator = FacePoseEstimator(model=FacePoseEstimatorModels.DEFAULT, path=INPUT_DIR_MODEL_ESTIMATION)
    face_age_estimator = FaceAgeEstimator(model=FaceAgeEstimatorModels.DEFAULT, path=INPUT_DIR_MODEL_ESTIMATION)
    face_gender_estimator = FaceGenderEstimator(model=FaceGenderEstimatorModels.DEFAULT, path=INPUT_DIR_MODEL_ESTIMATION)
    face_emotion_estimator = FaceEmotionEstimator(model=FaceEmotionEstimatorModels.DEFAULT, path=INPUT_DIR_MODEL_ESTIMATION)

    while True:
        ret, frame = camera.read()
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):
            (x, y, w, h) = face
            face_image = frame[y:y+h, x:x+w]   # crop the detected face for the estimators
            age = face_age_estimator.estimate(frame, face_image)
            gender = face_gender_estimator.estimate(frame, face_image)
            emotion = face_emotion_estimator.estimate(frame, face_image)
            shape = face_pose_estimator.detect(frame, face)
            face_pose_estimator.add_overlay(frame, shape)
            label_face(frame, face, "{} {} {}".format(age, gender, emotion), None)   # assumes the label_face() helper from the earlier examples
        cv2.imshow(window_name, frame)
        cv2.waitKey(1)

    camera.release()
    cv2.destroyAllWindows()

Case Study - Face Recognition for Identity Authentication:

One of the use cases of face recognition is security identity authentication. This is a convenience feature for authenticating with a system using one's face instead of inputting a passcode or scanning a fingerprint. Passcodes are often limited by the maximum number of digits allowed, while fingerprint scanning often has problems with wet fingers or dry skin. Face authentication offers a more reliable and secure way to authenticate.

When used for identity authentication, face recognition specifications will differ a lot from general face recognition systems like Facebook's automated tagging and Google's image search; it will be more like Apple's Face ID in the iPhone X. Below are guidelines for drafting specifications for your face recognition solution. Note that Apple's Face ID technology is used as the primary baseline in this case study of the identity authentication use case. Refer to Apple's Face ID white paper for more information.

Face Enrollment

  • Should support dynamic enrollment of faces, tied to the maximum number of users the existing system supports.
  • Should ask the user to move/rotate the face (in a circular motion) in order to capture different angles of the face. This gives the system enough flexibility to recognize you at different face angles.
  • iPhone X Face ID face enrollment is done twice for some reason. It is possible that the first scan is for liveness detection only.
  • How many images should be captured? We can store as many images as possible for better accuracy, but memory footprint is the limiting factor. Estimate based on the size of one picture and the maximum number of users.
  • For security and memory efficiency, images used during enrollment should not be saved. Only the mathematical representations (128-dimensional vectors) of the faces should be stored.

Face Capture

  • The camera will be about 1 foot away from the user (Apple Face ID: 10-20 inches).
  • Camera resolution will depend on the display panel size and display resolution. QVGA size is acceptable for embedded solutions.
  • Take into consideration bad lighting and extremely dark situations. Should the camera have a good flash/LED to emit some light? The iPhone X has an infrared light to perform better in dark settings.

Face Detection

  • Only 1 face per frame is detected.
  • The face is expected to be within a certain location (inside a fixed box or circular region).
  • Detection of faces will be triggered by a user action, such as clicking a button (not automatic detection).
  • Face alignment may not be helpful, as users can be directed to put their face inside a fixed box or circular region, so the face is already expected to be aligned in most cases. But if adding this feature does not affect speed performance, then face alignment should be added if possible (see the alignment sketch after this list).
  • Should verify that the face is alive via anti-spoofing techniques against picture-based attacks, video-based attacks and 3D mask attacks. Two popular examples of liveness detection are detecting eye blinking and mouth opening.
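
A hedged face alignment sketch using dlib's get_face_chip() and the 5-point landmark model bundled under models/encoding/; this is one common approach, not a libfaceid API.

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("models/encoding/shape_predictor_5_face_landmarks.dat")

    image = cv2.imread("datasets/Person1/1.jpg")
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    for rect in detector(rgb, 1):
        shape = predictor(rgb, rect)
        aligned = dlib.get_face_chip(rgb, shape, size=150)   # rotated, cropped, resized face
        cv2.imwrite("aligned.jpg", cv2.cvtColor(aligned, cv2.COLOR_RGB2BGR))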

Face Encoding/Embedding

  • Speed is not a big factor. Face embedding and face identification can take 3-5 seconds.
  • Accuracy is critically important. The false match rate should be as low as possible.
  • Can do multiple predictions and take the highest count, or apply different models for predictions as a double check.

Face Identification

  • Recognize only when the eyes are not closed and the mouth is not open.
  • There should be at least 50 images per person. Increase the number of images per person by cropping images with different face background margins, slight rotations, flipping and scaling.
  • The classification model should consider the maximum number of users to support. For example, SVM is known to be good for less than 100k classes/persons only.
  • Should support unknown identification by setting a threshold on the best prediction. If the best prediction is too low, then consider it Unknown (see the sketch after this list).
  • Set the number of consecutive failed attempts allowed before disabling the face recognition feature. Should fall back to passcode authentication if identification has trouble recognizing people.
  • Images from successful scans should be added to the existing enrollment dataset, making it adaptive and up to date, so that a person can be recognized with better accuracy in the future even with natural changes in facial appearance (hairstyle, mustache, pimples, etc.).
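
A minimal sketch of unknown-face rejection by thresholding the classifier confidence, continuing from the webcam examples above; the 50% threshold is illustrative and must be tuned per encoder/classifier.

    def identify_with_threshold(face_encoder, frame, face, threshold=50.0):
        # Reject the best prediction as "Unknown" when its confidence is too low.
        face_id, confidence = face_encoder.identify(frame, face)
        if confidence is None or confidence < threshold:
            return "Unknown", confidence
        return face_id, confidence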

In addition to these guidelines, the face recognition solution should provide a way to disable/enable this feature, as well as a way to reset the datasets stored during face enrollment.

Case Study - Face Recognition for Home/Office/Hotel Greeting System:

One of the use cases of face recognition is a greeting system for smart homes, offices and hotels. To enable the voice capability feature, we use text-to-speech synthesis to dynamically create audio files from input text.

Speech Synthesis

Speech synthesis is the artificial simulation of human speech by a computer device. It is mostly used for translating text into audio to make a system voice-enabled. Products such as Apple's Siri, Microsoft's Cortana, Amazon Echo and Google Assistant use speech synthesis. A good speech synthesizer is one that produces accurate output that sounds like a real human in near real-time. State-of-the-art speech synthesis includes DeepMind's WaveNet and Google's Tacotron.

Speech synthesis can be used in some use cases of face recognition to enable the voice capability feature. One example is to greet users as they approach a terminal or kiosk. Given some input text, the speech synthesizer can generate audio that is played upon recognizing a face. For example, upon detecting a person's arrival, it can be set to say 'Hello PersonX, welcome back...'; upon departure, 'Goodbye PersonX, see you again soon...'. It can be used in smart homes, office lobbies, luxury hotel rooms and modern airports.

Face Enrollment

  • For each person who registers/enrolls in the system, create an audio file "PersonX.wav" from some input text such as "Hello PersonX" (see the gTTS sketch below).
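
A minimal enrollment-time sketch using gTTS (one of the TTS options in requirements_with_voicecapability.txt) to pre-generate the greeting audio; libfaceid's SpeechSynthesizer.synthesize_datasets() wraps this step, as shown in the training example earlier.

    from gtts import gTTS

    for person in ("Person1", "Person2", "Person3"):
        tts = gTTS(text="Hello " + person, lang="en")
        tts.save("audiosets/" + person + ".mp3")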

Face Identification

  • When a person is identified as part of the database, play the corresponding audio file "PersonX.wav".

Performance Optimizations:

Speed and accuracy are often a trade-off. Performance can be optimized depending on your specific use case and system requirements. Some models are optimized for speed, while others are optimized for accuracy. Be sure to test all the provided models to determine the most appropriate one for your specific use case, target platform (CPU, GPU or embedded) and specific requirements. Below are additional suggestions to optimize performance.

Speed

  • Reduce the frame size for face detection.
  • Perform face recognition only every X frames.
  • Use threading for reading camera source frames or for processing camera frames (see the sketch after this list).
  • Update the library and configure the parameters directly.
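
A hedged sketch combining two of the tips above: threaded camera capture via imutils (already in requirements.txt) and running detection only every Nth frame. The detector construction mirrors the earlier examples; the skip interval is illustrative.

    from imutils.video import VideoStream
    from libfaceid.detector import FaceDetectorModels, FaceDetector

    PROCESS_EVERY_N_FRAMES = 5
    face_detector = FaceDetector(model=FaceDetectorModels.DEFAULT, path="models/detection/")
    stream = VideoStream(src=0).start()       # frames are grabbed in a background thread

    frame_count = 0
    faces = []
    while True:
        frame = stream.read()
        if frame is None:
            break
        if frame_count % PROCESS_EVERY_N_FRAMES == 0:
            faces = face_detector.detect(frame)   # reuse the last detections in between
        # ...label and display 'frame' as in the earlier examples...
        frame_count += 1

    stream.stop()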

Accuracy

  • Add more dataset images if possible (e.g. do data augmentation). More images per person will often result in higher accuracy.
  • Add face alignment if faces in the datasets are not aligned or when faces may be unaligned in actual deployment.
  • Update the library and configure the parameters directly.

References:

Below are links to valuable resources. Special thanks to all of these guys for sharing their work on Face Recognition; without them, learning Face Recognition would be much more difficult.

Codes

Google and Facebook have access to large databases of pictures, being the best search engine and social media platform, respectively. Below are the face recognition models they designed for their own systems. Be sure to take time to read these papers for a better understanding of high-quality face recognition models.

Papers

Contribute:

Have a good idea for improving libfaceid? Please message me on Twitter. If libfaceid has helped you in learning or prototyping a face recognition system, please be kind enough to give this repository a 'Star'.

libfaceid's People

Contributors

richmondu

libfaceid's Issues

Some import errors

(cv) pi@gateway:~/libfaceid $ python3 agegenderemotion_webcam.py --detector 4 --webcam 0 --resolution 0
ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
  File "agegenderemotion_webcam.py", line 3, in <module>
    import cv2
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/cv2/__init__.py", line 3, in <module>
    from .cv2 import *
**ImportError: numpy.core.multiarray failed to import**
(cv) pi@gateway:~/libfaceid $ pip3 install -U numpy
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting numpy
Installing collected packages: numpy
  Found existing installation: numpy 1.15.4
    Uninstalling numpy-1.15.4:
      Successfully uninstalled numpy-1.15.4
Successfully installed numpy-1.16.2
(cv) pi@gateway:~/libfaceid $ python3 agegenderemotion_webcam.py --detector 4 --webcam 0 --resolution 0
Traceback (most recent call last):
  File "agegenderemotion_webcam.py", line 6, in <module>
    from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
  File "/home/pi/libfaceid/libfaceid/encoder.py", line 8, in <module>
    from libfaceid.classifier import FaceClassifierModels, FaceClassifier
  File "/home/pi/libfaceid/libfaceid/classifier.py", line 2, in <module>
    from sklearn.svm import SVC
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/svm/__init__.py", line 13, in <module>
    from .classes import SVC, NuSVC, SVR, NuSVR, OneClassSVM, LinearSVC, \
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/svm/classes.py", line 4, in <module>
    from .base import _fit_liblinear, BaseSVC, BaseLibSVM
  File "/home/pi/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/svm/base.py", line 8, in <module>
    from . import libsvm, liblinear
**ImportError: /home/pi/.virtualenvs/cv/lib/python3.5/site-packages/sklearn/svm/liblinear.cpython-35m-arm-linux-gnueabihf.so: undefined symbol: cblas_dnrm2**

Import errors

I cloned your code and installed all packages from your requirements.txt.
Tried "testing_webcam_livenessdetection" from your samples and am getting the following error:

F:\libfaceid-master\libfaceid-master>python testing_webcam_livenessdetection.py --detector 0 --encoder 0 --liveness 0 --webcam 0 --resolution 0
Traceback (most recent call last):
  File "testing_webcam_livenessdetection.py", line 6, in <module>
    from libfaceid.encoder     import FaceEncoderModels, FaceEncoder
  File "F:\libfaceid-master\libfaceid-master\libfaceid\encoder.py", line 6, in <module>
    from imutils import paths      # for FaceEncoderModels.LBPH
ImportError: No module named 'imutils'

'imutils' is installed, but this error still occurs when I run your sample code.

Training File

When I set the detector to MTCNN and the encoder to FaceNet and run the training.py file, I don't get anything in the training folder (no embeddings or label encodings), but when I use the default encoder and detector I get those files. Why?

classifier

If I want to use a deep CNN as the classifier, how do I use it? Can you help me with it?

Unknown Person

Hi, thanks for the repo. I am not sure what happens when an untrained person is before the camera; the model starts predicting someone already trained. It should ideally predict an unknown person.

Also, how can I reset a trained model in the folder so that it starts training from scratch? Do I need to download a new copy from the repo for that?

Thanks

bad performance for age and gender detection

The photo below is of a female, while the result is Male with age [38,43].
[attached image: 00004]

My detection code is

import cv2
from libfaceid.detector import FaceDetectorModels, FaceDetector
from libfaceid.encoder  import FaceEncoderModels, FaceEncoder
from libfaceid.pose import FacePoseEstimatorModels, FacePoseEstimator
from libfaceid.age import FaceAgeEstimatorModels, FaceAgeEstimator
from libfaceid.gender import FaceGenderEstimatorModels, FaceGenderEstimator
from libfaceid.emotion import FaceEmotionEstimatorModels, FaceEmotionEstimator

# Set the input directories
INPUT_DIR_MODEL_DETECTION  = "models/detection/"
INPUT_DIR_MODEL_ENCODING= "models/encoding/"
INPUT_DIR_MODEL_ESTIMATION = "models/estimation/"



def label_face(frame, face_rect, face_id, confidence):
    (x, y, w, h) = face_rect
    cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 255, 255), 1)
    if face_id is not None:
        cv2.putText(frame, "{} {:.2f}%".format(face_id, confidence),
            (x+5,y+h-5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1, cv2.LINE_AA)


def detect_face_age_gender(img_path):
    frame = cv2.imread(img_path)
    detector = FaceDetectorModels.MTCNN
    encoder = FaceEncoderModels.LBPH
    ageestimator = FaceAgeEstimatorModels.CV2CAFFE
    genderestimator = FaceGenderEstimatorModels.CV2CAFFE
    ages = []
    genders = []
    try:
        # Initialize face detection
        face_detector = FaceDetector(model=detector, path=INPUT_DIR_MODEL_DETECTION, minfacesize=120)
        # Initialize face pose/age/gender estimation
        face_age_estimator = FaceAgeEstimator(model=ageestimator, path=INPUT_DIR_MODEL_ESTIMATION)
        face_gender_estimator = FaceGenderEstimator(model=genderestimator, path=INPUT_DIR_MODEL_ESTIMATION)
        faces = face_detector.detect(frame)
        for (index, face) in enumerate(faces):
            (x, y, w, h) = face
            # Detect age, gender
            face_image = frame[y:y + h, h:h + w]
            age = face_age_estimator.estimate(frame, face_image)
            gender = face_gender_estimator.estimate(frame, face_image)
            ages.append(age)
            genders.append(gender)
    except:
        print("Warning, check if models and trained dataset models exists!")
    return ages,genders

test_img = "./00004.jpg"
ages,genders = detect_face_age_gender(test_img)
for i in enumerate(ages):
    print("face id %d ,age: %d ,gender: %s"%(i,int(ages[i]),str(genders[i])))

Invalid parameter - ERROR during training

Hi, hope you are doing well.
I am getting an "Invalid parameter" error while executing training.py. Could you please shed some light here?
Thanks in advance.

python training.py --detector 4 --encoder 1 --classifier 1
Parameters: FaceDetectorModels.MTCNN FaceEncoderModels.OPENFACE FaceClassifierModels.LINEAR_SVM

Names ['Irfan', 'Sourav']
Irfan: ['1.jpg', '2.jpg', '3.jpg']
SO333572: ['1.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg', '6.jpg']

Using TensorFlow backend.
2020-05-09 10:55:37.133718: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-05-09 10:55:37.158365: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Invalid parameter

Recognizing a face by capturing an image from the webcam without OpenCV; detection issue

Hi,

I have tried capturing an image using HTML and passing that captured image to the frame for recognition, and I am facing an issue with that.

code:
if flask.request.method == "POST":
    image = request.files["image"].read()
    npimg = np.fromstring(image, np.uint8)
    file = cv2.imdecode(npimg, cv2.IMREAD_COLOR)
    frame = file

Issue: I am not able to get the frame and face id for the captured image. Is libfaceid only able to detect images captured via OpenCV? Please suggest how to solve this.

Thanks and Regards

Training data

(Not really an issue, more of a question; I couldn't find the answer in the readme.)
Hi,

I am currently testing your package for emotion recognition. I was wondering where you found the model architecture for the emotion estimator and on which data it was trained (I don't want to validate the model on data it was trained with; I am currently testing it on the FER+ and Cohn-Kanade datasets).

Cheers

Issue fixed. File updated to use FaceLivenessModels.

Issue fixed. File updated to use FaceLivenessModels.

Originally posted by @richmondu in #20 (comment)

Thanks sir,
By changing this in the code we got other errors. The code in liveness.py has changed, I guess, and
face_liveness = FaceLiveness(model=FaceLivenessModels.EYESBLINK, path=INPUT_DIR_MODEL_ESTIMATION), line 383
gives an error. Replaced the model with "FaceLivenessModels.EYESBLINK_MOUTHOPEN".
Then it gives an error at face_liveness.initialize(), line 384:
Initialize() not found.

issue regarding face id and rectangular box for recognized face

I have followed this for real-time face recognition using Flask. I am capturing the image from the frontend and accessing it in the backend for recognition. It is not displaying the face id for the captured image. Please help me solve this issue.

Code:
if flask.request.method == "POST":
    image = request.files["image"]
    npimg = np.fromfile(image, np.uint8)
    file = cv2.imdecode(npimg, cv2.IMREAD_COLOR)

I have written the above code to get the captured image.

testing_webcam.py always returns first person name

testing_webcam.py works partially. It always identifies the person/employee as the first person's name in the dataset folder.

python testing_webcam.py --detector 2 --encoder 2 --webcam 0 --resolution 0

Any fix for this?

File can't be opened for reading! in function 'cv::face::FaceRecognizer::read'

I have been trying the basic example provided in the readme and I got the error below. Please help me.

(libfaceid_env) E:\assignment\training\Image_recognition\libfaceid-master>python TRY1.PY
Traceback (most recent call last):
  File "TRY1.PY", line 10, in <module>
    face_encoder = FaceEncoder(model=FaceEncoderModels.DEFAULT, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)
  File "E:\assignment\training\Image_recognition\libfaceid-master\libfaceid\encoder.py", line 48, in __init__
    self._base = FaceEncoder_LBPH(path, path_training, training)
  File "E:\assignment\training\Image_recognition\libfaceid-master\libfaceid\encoder.py", line 98, in __init__
    self._clf.read(self._path_training + OUTPUT_LBPH_CLASSIFIER)
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv_contrib\modules\face\src\facerec.cpp:61: error: (-2:Unspecified error) File can't be opened for reading! in function 'cv::face::FaceRecognizer::read'

face recognition with liveness detection on pretrained model

https://github.com/richmondu/libfaceid#real-time-face-recognition-with-liveness-detection-wa-webcam

I got this error.
Where can I find lbph_le.pickle or any data to train the model?

models/training/lbph.yml
Traceback (most recent call last):
  File "E:/colab/pr_test/test.py", line 269, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "E:/colab/pr_test/test.py", line 250, in main
    run(cam_index, cam_resolution)
  File "E:/colab/pr_test/test.py", line 225, in run
    process_livenessdetection(detector, encoder, liveness, cam_index, cam_resolution)
  File "E:/colab/pr_test/test.py", line 89, in process_livenessdetection
    path_training=INPUT_DIR_MODEL_TRAINING, training=False)
  File "E:\colab\pr_test\libfaceid\libfaceid\encoder.py", line 48, in __init__
    self._base = FaceEncoder_LBPH(path, path_training, training)
  File "E:\colab\pr_test\libfaceid\libfaceid\encoder.py", line 100, in __init__
    self._clf.read(self._path_training + OUTPUT_LBPH_CLASSIFIER)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv_contrib\modules\face\src\facerec.cpp:61: error: (-2:Unspecified error) File can't be opened for reading! in function 'cv::face::FaceRecognizer::read'

[ WARN:0] terminating async callback

training options

If you could add a training option without alignment, we could try the other options quickly, because every training run currently requires redoing alignment and we lose time.

Another request: the program should capture faces in real time from any camera; that would be a good property for practical tests.
Thanks for your interest.

Errors trying to run certain examples

I'm getting an error trying to run certain programs, like facial_recognition or agegenderemotion_webcam.

Trying to run facial_recognition returns this error:

$ python facial_recognition.py 
/usr/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Note: Make sure you use the same models for training and testing
Traceback (most recent call last):
  File "facial_recognition.py", line 633, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "facial_recognition.py", line 619, in main
    run()
  File "facial_recognition.py", line 599, in run
    fps = process_facerecognition_livenessdetection_poseagegenderemotion( RESOLUTION_QVGA, None, 0, model_detector=detector, model_recognizer=encoder)
  File "facial_recognition.py", line 349, in process_facerecognition_livenessdetection_poseagegenderemotion
    from libfaceid.pose    import FacePoseEstimatorModels,    FacePoseEstimator
  File "/home/jesus/Documents/tnp/libfaceid/libfaceid/pose.py", line 4, in <module>
    import dlib # for FacePoseEstimatorModels.DLIB68
ImportError: /usr/lib/python3.7/site-packages/dlib.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cblas_dtrsm

Liveness.py is not working on RPi 4B

Liveness.py throws the following error:

Error, check if models and trained dataset models exists!

Training.py throws the error "Invalid parameter".
Raspberry Pi 4B 4GB RAM - unable to install TensorFlow 1.x. However, tf 2.0 installs using tensorflow-2.0.0-cp37-none-linux_armv7l.whl.

I think training is not working in tf 2.0; it works in tf 1.14. It seems the RPi 4B is not compatible with tf 1.x.

Would appreciate a solution for RPi 4B.

ImportError: cannot import name 'FaceLivenessDetectorModels'

Whenever I try to run the facial_recognition.py script, it gives an ImportError: cannot import name 'FaceLivenessDetectorModels'
on line 348: from libfaceid.liveness import FaceLivenessDetectorModels, FaceLiveness
I checked the liveness.py file and there was no class named "FaceLivenessDetectorModels", but I found another class named "FaceLivenessModels". Can anyone suggest the proper solution?
Regards

testing on images

I am receiving the following error:

NameError: name 'label_face' is not defined

Unknown face

I am using MTCNN as the detector, FaceNet as the encoder, and a Linear SVM classifier (also tried a neural network), with a training dataset of 100 images per class.
When an unknown face is standing in front of the camera, it is classified as one of the known people, with a confidence score of more than 90%. What is wrong with the implementation?

anti-spoofing models

Hi, I tried your colorspace_ycrcbluv_replay.pkl model but it always gives me false results. Can you give me advice on how to fix it?

When I run testing_webcam_livenessdetection.py, it throws "Error, check if models and trained dataset models exists!"

First I created a virtual environment using Python 3.6.8 on CentOS 7,
then I activated the virtual environment and installed the requirements using the requirements.txt file; for installing dlib I installed cmake like this:
pip install cmake
and then cloned the repo.
But when I want to run testing_webcam_livenessdetection.py like this:
python testing_webcam_livenessdetection.py
this line from the testing_webcam_livenessdetection.py file throws an exception:
face_encoder = FaceEncoder(model=model_recognizer, path=INPUT_DIR_MODEL_ENCODING, path_training=INPUT_DIR_MODEL_TRAINING, training=False)

and says: Error, check if models and trained dataset models exists!

Import error

from libfaceid.detector import FaceDetectorModels, FaceDetector
ModuleNotFoundError: No module named 'libfaceid'

I didn't change any folders, just cloned it.

cmake error

Thanks for sharing. I have a cmake error; which version should be used? I am on Win7.
Thanks for your interest.

Collecting tensorboard<1.9.0,>=1.8.0 (from tensorflow==1.8.0->-r requirements.txt (line 9))
Collecting gast>=0.2.0 (from tensorflow==1.8.0->-r requirements.txt (line 9))
...
Building wheels for collected packages: dlib
  Running setup.py bdist_wheel for dlib ... error
  ...
  package init file 'dlib\__init__.py' not found (or not a regular file)
  running build_ext
  Building extension for Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)]
  Invoking CMake setup: 'cmake C:\Users\eud\AppData\Local\Temp\pip-install-f6j37fi9\dlib\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=... -DPYTHON_EXECUTABLE=... -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=... -A x64'
  -- Building for: NMake Makefiles
  CMake Error in CMakeLists.txt:
    Generator

      NMake Makefiles

    does not support platform specification, but platform

      x64

    was specified.

  CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
  CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
  -- Configuring incomplete, errors occurred!
  See also "C:/Users/eud/AppData/Local/Temp/pip-install-f6j37fi9/dlib/build/temp.win-amd64-3.6/Release/CMakeFiles/CMakeOutput.log".
  ...
  subprocess.CalledProcessError: Command '['cmake', ...]' returned non-zero exit status 1.

Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib, scipy, scikit-learn, mtcnn, six, grpcio, absl-py, termcolor, astor, html5lib, markdown, Werkzeug, bleach, protobuf, tensorboard, gast, tensorflow, pyyaml, keras, h5py, psutil, urllib3, chardet, idna, certifi, requests, pyparsing, kiwisolver, cycler, python-dateutil, matplotlib, Pillow, facenet, MarkupSafe, Jinja2, click, itsdangerous, flask
  Running setup.py install for dlib ... error
  ...
  CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
  CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
  ...
Command "c:\users\eud\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize; ..." install --record ... --single-version-externally-managed --compile" failed with error code 1 in C:\Users\eud\AppData\Local\Temp\pip-install-f6j37fi9\dlib\

C:\Users\eud\Downloads\libfaceid-master>
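The decisive lines are the CMake errors: no C++ compiler was found (CMAKE_C_COMPILER / CMAKE_CXX_COMPILER not set), so CMake fell back to the NMake generator, which cannot accept the -A x64 platform flag. Since pip found no prebuilt dlib wheel for this setup, it has to compile dlib from source, and that requires a C++ toolchain. A hedged remedy for Windows: install the Visual Studio Build Tools with the C++ workload (Visual Studio 2015 or later should work on Windows 7), then retry in the same environment:

    pip install cmake
    pip install dlib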

webcam liveness detection

Hi,

Thank you very much for posting the various types of face recognition solutions.

Is liveness detection in Flask possible with the code you have provided in testing_webcam_livenessdetection.py? And can we pass the arguments directly in the code instead of on the command line? Please advise. Thank you.
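It should be: the README notes Flask-based web apps are supported for some test applications, and since the test scripts parse command-line arguments, you can call their functions directly with hardcoded model choices instead. A hedged sketch of the standard Flask MJPEG-streaming pattern (not the repo's exact web app code); the detection/liveness processing from testing_webcam_livenessdetection.py would go where the comment indicates:

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    camera = cv2.VideoCapture(0)  # webcam index; adjust as needed

    def gen_frames():
        # Grab frames, run processing, and stream them as an MJPEG response.
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # ... run face detection / liveness checks on `frame` here ...
            ok, buf = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + buf.tobytes() + b"\r\n")

    @app.route("/video")
    def video():
        return Response(gen_frames(),
                        mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

With this pattern the processed stream can be viewed from another machine on the same network via a browser, as described for the repo's web app examples.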

UnboundLocalError: local variable 'face_detector' referenced before assignment

I executed agegenderemotion_webcam.py and I got the following error message:

Using TensorFlow backend.
Warning, check if models and trained dataset models exists!
Traceback (most recent call last):
  File "/home/omar/eclipse-workspace/libfaceid/agegenderemotion_webcam.py", line 192, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "/home/omar/eclipse-workspace/libfaceid/agegenderemotion_webcam.py", line 177, in main
    run(cam_index, cam_resolution)
  File "/home/omar/eclipse-workspace/libfaceid/agegenderemotion_webcam.py", line 147, in run
    cam_index)
  File "/home/omar/eclipse-workspace/libfaceid/agegenderemotion_webcam.py", line 81, in process_facedetection
    faces = face_detector.detect(frame)
UnboundLocalError: local variable 'face_detector' referenced before assignment
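The root cause is visible in the warning line above the traceback: the script appears to wrap the detector construction in a try/except that only prints the warning, so when the model files are missing, face_detector is never assigned and the later detect() call raises UnboundLocalError. A hedged reconstruction of the pattern (names mirror the script's conventions but are assumptions, not quoted code):

    # Hedged reconstruction of the likely failure pattern:
    try:
        face_detector = FaceDetector(model=model_detector, path=INPUT_DIR_MODEL_DETECTION)
    except:
        print("Warning, check if models and trained dataset models exists!")
    # Execution continues despite the failed construction, so this line then
    # raises UnboundLocalError: 'face_detector' referenced before assignment.
    faces = face_detector.detect(frame)

So the fix is to make sure the detection model files are present under the expected models directory, not to change the detection code itself.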

Examples are not working

It says "Error, check if models and trained dataset models exists!" for liveness detection. I changed the exception handler to print the actual error and it says:

OpenCV(3.4.3) /io/opencv_contrib/modules/face/src/facerec.cpp:61: error: (-2:Unspecified error) File can't be opened for reading! in function 'read'

I am using OpenCV 3.4.3, with the exact requirements as in the file.
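That OpenCV message comes from the face module's FaceRecognizer.read() failing to open a trained model file, i.e. the classifier was never trained or the path is wrong, rather than an OpenCV version problem. A hedged reproduction of the failing call (the model path is a placeholder, not the repo's confirmed filename):

    import cv2

    # Hedged reproduction: cv2.face's read() raises the facerec.cpp:61 error
    # when the trained model file does not exist. Path is a placeholder.
    model_path = "models/training/facial_recognition_model.yml"
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read(model_path)  # fails with "File can't be opened for reading!"

Running the training step on your dataset first, so the trained file actually exists under the training directory, resolves it.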
