
jeeliz / jeelizweboji

1.1K stars · 43 watchers · 150 forks · 48.08 MB

JavaScript/WebGL real-time face tracking and expression detection library. Build your own emoticons animated in real time in the browser! SVG and THREE.js integration demos are provided.

Home Page: https://jeeliz.com

License: Apache License 2.0

Languages: JavaScript 99.68%, Python 0.32%
Topics: javascript, webgl, threejs, weboji, webcam, deep-learning, face, face-expression, computer-vision, augmented-reality

jeelizweboji's People

Contributors: bjlaa, compscikai, davemee, timsun28, xavierjs


jeelizweboji's Issues

Is it possible to control the webojiCanvas (weboji) externally?

Hi,

We are trying to control the webojiCanvas (weboji) externally by separating the JEEFACETRANSFERAPI face-detection logic from the control of the weboji.

Our idea is to send the face-detection data remotely over the network and control the weboji on the other side.

However, we found that the control of the weboji is buried deep inside JEEFACETRANSFERAPI.

Is there any way we can achieve this through this library?

If not, can you suggest what kind of tools, libraries, or tutorials we should look at?

Thanks for your help!
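For the splitting idea above, the stabilized outputs that the demos read each frame (get_rotationStabilized() and get_morphTargetInfluencesStabilized(), both mentioned elsewhere in these issues) could be serialized and shipped over any transport. A minimal sketch of the pack/unpack step, assuming those getters return a rotation array and an array-like of morph influences; the payload shape and the WebSocket transport are assumptions, not library features:

```javascript
// Pack the per-frame face state into a JSON string so it can be sent
// over a WebSocket (or any other transport) and replayed remotely.
function packFaceState(rotation, morphInfluences) {
  return JSON.stringify({
    r: Array.from(rotation),        // head rotation, e.g. [rx, ry, rz]
    m: Array.from(morphInfluences)  // morph target influence coefficients
  });
}

// Unpack on the receiving side before driving the remote weboji.
function unpackFaceState(json) {
  const state = JSON.parse(json);
  return { rotation: state.r, morphInfluences: state.m };
}

// Sender side (browser, sketch only):
//   const rot = JEEFACETRANSFERAPI.get_rotationStabilized();
//   const morphs = JEEFACETRANSFERAPI.get_morphTargetInfluencesStabilized();
//   ws.send(packFaceState(rot, morphs));
```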

Only head tracking (orientation)?

So far, only the jeelizWeboji demos worked for me (the other demos were not able to detect my head). All I need is head tracking, with only its orientation (no face expression, emotion, etc.). Could a simpler (and maybe more robust) model be developed, one that would require even less CPU resources?

Expected Performance on Mobile?

Reading Issue #26:

"It should work nicely with a Samsung S7..."

When I run the Raccoon demo on a Samsung Note 10 I am only getting around 5-12 FPS with Chrome, and there aren't any errors in the console. I'm assuming I should be getting better performance if it works nicely on a phone three generations older?

Error on iOS / Cordova builds

Hi,

We're using this library with great success on Android and macOS Safari, but when we bundle it as part of a Cordova application on iOS we get the following error:

WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL (k - jeelizFaceTransfer.js:86:267)

This is when testing on an iPad 3 running iOS 9.3.5 (13G36).

Does this error ring any bells?

Safari Version 13.0 (15608.1.24.40.4) Error

Hi @xavierjs!

I'm experimenting with JEELIZFACETRANSFER today and received this error when previewing in Safari:
[Log] ERROR in ContextFeedForward : – "Your configuration cannot process color buffer float" (jeelizFaceTransfer.js, line 45)

[Log] ERROR in JeelizSVGHelper - CANNOT INITIALIZE JEEFACETRANSFERAPI : errCode = – "GL_INCOMPATIBLE" (JeelizSVGHelper.js, line 34)

I tested Safari on both https://webglreport.com/?v=2 and https://webglreport.com/?v=1.

Any ideas what the issue might be? Let me know if I can send you anything.

Thanks!

Can I make a bounding box a circle?

Hello Jeeliz Team,

Thank you for the great job.

I want to make the bounding box of the canvas a circle. How can I do that?

Regards
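If the goal is just to display the weboji canvas as a circle, a generic approach (not specific to this library) is to clip the element with CSS. A sketch; the element id 'webojiCanvas' is an assumption, use whatever canvas your page renders into:

```javascript
// Clip a square canvas element into a circle using CSS border-radius.
function makeCanvasCircular(canvasElement) {
  canvasElement.style.borderRadius = '50%'; // a square clipped at 50% radius is a circle
  canvasElement.style.overflow = 'hidden';
  return canvasElement;
}

// Browser usage (id is hypothetical):
//   makeCanvasCircular(document.getElementById('webojiCanvas'));
```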

Error when not running on localhost

When I try to run a basic script on a server I get this error.

(screenshot attached: Screenshot_20201005_202620)

But it works fine on localhost. I don't understand what the difference is.

Tested on Firefox 81.0 (Linux) and Chrome 83.0.4103.116 (Linux)

Can I use an image as a source?

Hi, first of all, thanks for this amazing library!

I'm trying to make an app that tracks the face in an image source. What I did was draw the image to an HTMLCanvasElement, turn it into a MediaStream using .captureStream(), and update the source of the <video> linked to your lib with it.
It barely works: it only works for a very limited set of images, and it mostly fails to even begin tracking.

Is there a better way you would recommend to track the face from a single image? Please let me know.

Thanks!

Separate Texture files and new emotions

I would first like to thank you for this library! I have been trying to implement different kinds of models, but I'm not sure how I would go about adding multiple texture files to a model.

In my case this would be for example a skin.png and some teeth.png and an eye.png.

Would this be possible to add to the ThreeJeelizHelper.js?

I was also wondering if it would be possible to add different face 'emotions', for example sticking out your tongue or moving your eyeballs left/right/up/down. I'm not very experienced with training a model for this, but could this be added to the library, or could you suggest some way for me to add this functionality to my app / your library?

Thank you in advance!

Error in device with WebGL2

Hi @xavierjs, me again :)
I am doing tests and I have encountered a special case. The Samsung A5 from 2016 ... which has WebGL2, throws the following error:
ERROR in ContextFeedForward: Not enough capabilities
Why does this fail? Is there any way to catch that error with a browser or device test?
Thanks!!

Getting error while using file jeelizFaceTransfer.js

I am using the file jeelizFaceTransfer.js in my project to call a function,
but I am getting the error below in the console:
Error:
(screenshot attached)

Steps to reproduce:

  1. Download the project from https://github.com/abhilash26/sit-straight
  2. Run index.html in Chrome

It will ask for camera permission; click OK.
The camera does not open, a black screen with the loading logo is displayed continuously, and the console shows an error.

I also tried to run this project in Internet Explorer; there the camera opens and works fine.

NOTE: The issue is browser compatibility; it does not work with Chrome/Firefox.

WebGL1 screenshot: (attached)

WebGL2 screenshot: (attached)

Console output: (screenshot attached)

How to destroy?

jeelizFaceFilter has a 'native' destroy method; what is the correct way to do this for jeelizWeboji?

running a function if face detected

Hi, how can I run a function when a face is detected / not detected? I can't seem to find any existing functions in the source file controlling the detection. Is there anything I should be targeting?

Can I control source media stream?

Hi. It looks like a very useful app.

When I use it, I want to control the internals of jeelizFaceTransfer.js: for example, controlling the timing of getUserMedia(), using a remote media stream, or using a recorded video of my face.
But I can't judge whether this is possible because its source code isn't public.
How should I control it? Or, as another solution, could I see the source code of jeelizFaceTransfer.js?

Thanks

A question about turtle example

Hello, first of all thank you very much for sharing this amazing project; it's incredible to see what you did with JavaScript 😊
I was looking at the other examples and saw a turtle with a body, and of course I tried to see how a turtle with a body looks, but I couldn't make it work. I'm sorry for my dumb question, but do you have instructions for converting corps_tortue.obj and its reference into JavaScript?
Thank you very much for your attention.

Have a question about how you handle changing emotion in real time.

I have a question related to this demo (jeelizWeboji/demos/threejs/raccoon).

I think the 2 important roles of JEEFACETRANSFERAPI in this demo are:

  1. Calculating the direction of the head in real time, as in
    const rotation = JEEFACETRANSFERAPI.get_rotationStabilized(); // in animate()
  2. Also calculating morph coefficients in real time for morph blending.

I tried to find where those morph coefficients are calculated, but I could only find const morphTargetInfluencesDst = JEEFACETRANSFERAPI.get_morphTargetInfluencesStabilized(); // in successCallBack() of ThreeMorphAnimGeomBuilder().
In contrast to calculating the rotation in animate(), this does not seem to calculate the coefficients in real time.
If you don't calculate those morph coefficients in real time, how do you track the client's face expression as it changes in real time?

I guess most of the calculations for the animation are done on the GPU using GLSL, so I want to know where your API extracts the information (morph coefficients?) required for morph animation from the video stream.

This demo works pretty well, so I must be misunderstanding something about morph animation or your API.

If you could help me untangle this problem, I really appreciate it!

Thank you for reading and sharing this awesome project!
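For readers with the same question: in THREE.js, morph blending is driven by the mesh.morphTargetInfluences array, so a per-frame loop only needs to copy freshly computed coefficients into it. A minimal sketch of that copy step; the surrounding animate() loop and the mesh variable are assumptions, not the demo's exact code:

```javascript
// Copy stabilized morph coefficients into a THREE.js-style
// morphTargetInfluences array. Returns the destination for convenience.
function copyInfluences(src, dst) {
  const n = Math.min(src.length, dst.length);
  for (let i = 0; i < n; ++i) {
    dst[i] = src[i];
  }
  return dst;
}

// Per-frame usage (browser sketch, variable names are assumptions):
//   const influences = JEEFACETRANSFERAPI.get_morphTargetInfluencesStabilized();
//   copyInfluences(influences, mesh.morphTargetInfluences);
//   requestAnimationFrame(animate);
```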

Possible to get two emoji on the screen?

I'm playing around with an idea, and trying to get two different raccoons on the screen (from the THREE.js demo) parroting expressions back to each other. I've been looking through the code, but haven't found a good spot to instantiate the second mesh. It always seems like there's some code referring to the first mesh in a global way.

Any suggestions? Places you can recommend I look?

Is jeelizFaceTransfer open source ?

Hi,

First, congrats on this project. It's amazing!
I'd like to learn more about how jeelizFaceTransfer works, but the file is minified. Is this an open-source project?
I am trying to make weboji work smoothly on mobile devices. The SVG demo is smooth on a high-end laptop but lags on mobile devices (really slow on a 2-year-old Samsung A5). It seems that the face detection is the main cause of those lags. Maybe something like Dlib (landmark detection) + FACS (Facial Action Coding System) could improve expression detection?

Thanks

Are there plans for NPM package?

Hi! Are there any plans to set this up as an NPM package, like how you have it for FaceFilter? If not I'd be happy to help with a Pull Request and also help maintain it, as I intend to use this for a few different side projects (by the way, you did an amazing job with these libraries and keeping file size small!)

Hide blue box in JEEFACETRANSFERAPI

Hi guys,

I congratulate you on the project, it's amazing!

I have a question: is it possible to remove the blue box that appears on the face in jeelizFaceTransfer.js?
I need to take a screenshot (canvasScreen.getContext('2d').drawImage()...) but I do not want that frame to show.

The quick solution I came up with is to create another canvas context with the video and hide the original with JEEFACETRANSFERAPI.switch_displayVideo(false);
but I do not know if it is the best solution.

Thanks!!

How to reduce CPU-GPU consumption

Hello @xavierjs ,
Thanks for the great lib. I'm using it for a personal project, but it consumes quite a lot of CPU and GPU, so my question is: are there any options to optimize this, even by reducing the frame rate or accuracy?

Get the capacities of devices

Hi @xavierjs,
A question: do you have any helpers with which I can check whether a device has the capabilities to work with the libraries?
The goal is to avoid loading Jeeliz when the device lacks the capabilities, and to look for an alternative instead.
Thanks!
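A generic pre-flight check, not a Jeeliz helper: based on the "color buffer float" and HALF_FLOAT_OES errors reported in other issues here, probing the WebGL context for float-texture extensions before loading the library is a reasonable guess at what to test. The exact requirements of jeelizFaceTransfer are an assumption:

```javascript
// Probe a WebGL context for the float/half-float texture extensions that
// deep-learning-in-WebGL libraries typically need.
function hasFloatTextureSupport(gl) {
  if (!gl) return false; // no WebGL context at all
  return Boolean(
    gl.getExtension('OES_texture_float') ||
    gl.getExtension('OES_texture_half_float')
  );
}

// Browser usage sketch:
//   const gl = document.createElement('canvas').getContext('webgl');
//   if (hasFloatTextureSupport(gl)) { /* load Jeeliz */ } else { /* fallback */ }
```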

New demos from abhilash26

First of all thank you Jeeliz for this amazing piece of code.

I have created a minified reusable version of this repository, Face Primer, for my own use, since I want to use your technology in various applications. I hope this will help others who just want to use your AI model for face detection.
I have previously contacted you with my work account and got great feedback from you on many issues. I wanted to repay you by sending you a demo of what I was building, but it was a non-disclosed client project. This is my home account. I will surely make something wonderful and share a demo with you.

I have a few questions:

  1. Did you use TensorFlow to make the model?
  2. How can we improve the model with more sample images?

Some suggestions for improving the animation, which I used while working on the client project:

  1. I noticed that the influence (expression) values have very high precision. This works against the animation because the model seems to move more than we want. I solved the issue by using the .toFixed() function.
  2. Since the eyes generally close and open at the same time (except for winking), we can average the values and use the same value for both eyes; the same goes for the eyebrows. So a function that calculates a mean value for them would be great.
  3. I created a mean-displacement function to calculate the change between the last frame and the current frame using the change in rotation and influence values. If the mean displacement is below a certain threshold, we do not update the weboji. This reduces stutter.
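The mean-displacement idea in suggestion 3 can be sketched as a small pure function. Concatenating the rotation and influence values into one array, and the threshold value itself, are assumptions about the original implementation:

```javascript
// Mean absolute displacement between the previous and current frame's
// values (e.g. rotation plus morph influences in one flat array).
function meanDisplacement(prev, curr) {
  let sum = 0;
  for (let i = 0; i < curr.length; ++i) {
    sum += Math.abs(curr[i] - (prev[i] || 0));
  }
  return sum / curr.length;
}

// Skip the weboji update when the motion is below a small threshold.
function shouldUpdate(prev, curr, threshold) {
  return meanDisplacement(prev, curr) >= threshold;
}
```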

is there any way to set a fixed position while _isDetected == false?

Hello,

Is there a way to set the values while the face is not detected? (e.g. morph_X = -0.5486651)

I have been able to modify the values and freeze the rotation in a waiting stance, but during that stance I am trying to make it show random expressions (smiling, getting angry, etc.) while no face is detected. However, the values are reset 1 ms after I change them...

How can I set these values and keep the default values from overriding them?

Thanks in advance

detach and reattach camera/canvas

Hello @xavierjs ,
hope you can help us again.
We need to be able to use the camera in other parts of our project. Is there a way to detach and reattach the camera to the JFT?
We hope you can tell us how we could do this.

Many Thanks ,
Kei

Typo in Readme.md

Hello Jeeliz,

Just wanted to inform you that there is a typo in the readme.md file:
it should be "indeed", not "indead".

Indead, Apple© owns the intellectual property of 3D animated foxes (but not on raccoons yet). Thank you for your understanding.

Nothing major.

Regards,
Abhilash

Can I save this 3D animation video locally?

Hi, this looks cool and interesting. I can use any external video I'm interested in and convert it into a 3D animation. I want to know whether I can save the converted animation locally instead of just watching it in the browser.
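Nothing in these issues suggests a built-in export, but a generic browser approach is to record the rendering canvas with the standard captureStream() and MediaRecorder APIs. A sketch; the canvas variable and file name are assumptions, and the recorder class is injected so the wiring can be unit-tested:

```javascript
// Record a canvas to WebM chunks using the standard MediaRecorder API.
// MediaRecorderImpl is passed in so the function can be tested with a fake.
function startRecording(canvas, MediaRecorderImpl) {
  const stream = canvas.captureStream(30); // capture the canvas at 30 fps
  const recorder = new MediaRecorderImpl(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => { if (e.data) chunks.push(e.data); };
  recorder.start();
  return { recorder, chunks };
}

// Browser usage sketch: after recorder.stop(), build a Blob and download it:
//   const blob = new Blob(chunks, { type: 'video/webm' });
//   const a = document.createElement('a');
//   a.href = URL.createObjectURL(blob);
//   a.download = 'weboji.webm'; // file name is arbitrary
//   a.click();
```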

How to set the dimension and position of bounding box?

Hello Jeeliz Team,
Thanks again for creating this wonderful library!

I am building a project where the user generally sits in the middle with very little side rotation. I want to restrict the bounding box to a limited area rather than the full canvas. This would speed up the search for the face when tracking fails.
How can I accomplish this?

Regards,
Andy

Capture frame

Is there a way to capture a frame? For example, if there were a button on the page, clicking it would capture a frame as a JPEG and display it on the page so that it can be shared.
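A generic way to do this with any canvas is the standard toDataURL() API. Note that for WebGL canvases the capture usually has to happen right after a render (or the context must be created with preserveDrawingBuffer: true), otherwise the buffer may read back blank. The canvas id below is an assumption:

```javascript
// Capture the current canvas contents as a JPEG data URL.
function captureFrame(canvas, quality) {
  return canvas.toDataURL('image/jpeg', quality);
}

// Browser usage sketch (element id is hypothetical):
//   const img = document.createElement('img');
//   img.src = captureFrame(document.getElementById('webojiCanvas'), 0.9);
//   document.body.appendChild(img);
```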

How to get translation values?

Sir,
I want to get the value of how much the user has moved from the center of the screen (i.e. the translation (X, Y, Z) of the face). I want to move the model just as the user moves. Even the relative change of the blue square would work.

I feel that the jeelizFaceTransfer.js file is responsible for this, but it is too complex for me to understand; it looks machine-generated. I have found no functions that may help me.
Please tell me how to get the translation values.

Get coordinates of the face

Hi guys, me again.
In the app that I am working on, I need to get a picture of the face for each gesture that the user makes. For that I need the x, y, width and height coordinates, like jeelizFaceFilter provides with detectState.
I saw that the method get_positionScale() was added (#5), but I cannot get the data correctly and I do not know whether it gives me what I need. Could you tell me if that is the way to get that data (x, y, w, h) with JEEFACETRANSFERAPI?

Thank you!

Intellectual claim on Webojis.com

Hey,

First of all, I am a big fan of your work.

I have some questions related to Apple's claim of intellectual property rights on webojis.com.

I want to know whether Apple has claimed full intellectual property rights on the implementation (i.e. no similar application, irrespective of the technology used, can be developed) or just on a few characters (e.g. foxes, raccoons, etc.). I want to develop an open-source application but with different emojis, so I thought it best to consult you first.

I am really pissed off that they have claimed intellectual property rights on something that was a concept before their "implementation". I really can't believe that.

Looking forward to your reply.

Thank you in advance!

Jeeliz Exposure controller in Jeeliz Weboji for improving lighting

Hi,

Sorry for bothering you again.

I have checked your repo https://github.com/jeeliz/jeelizExposureController and I think it is really good. I have a suggestion: since Weboji needs good lighting conditions to perform well, have you tried adding the exposure controller to Weboji to improve expression results?

This may not work for the face filter because it requires a webcam stream canvas/video behind the filter, but for Weboji it would improve the results under bad lighting conditions (backlight, low or uneven light). The tracking effectiveness of the face should remain constant.

Have you already tried to integrate them? If not, is there a reason for not doing so?

I would love to try the same and share my experiences with you. Please guide.

Thanks and Regards,
Andy

detecting live person rather than Mobile video / static image

Is there any possibility that face detection could reject detections from mobile videos / static printed pictures?

I am trying to use this awesome lib for liveness detection, and these are the hurdles I am facing.

One idea is to look for rectangular shapes around images through an edge-detection algorithm.

Thanks in advance

Noise in AU's movement

Good day!

I successfully transferred jeeliz weboji to my project and my 3D models. But the movement of the morph targets in the model comes with strong twitching (for example, the mouth and eyelids go wild), which means I get the AU signals with heavy noise. I tried to fix it on my side by writing some filters. But maybe there is a way, or a config on Jeeliz's side, to tune the AU noise?
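For anyone writing their own filters for this, a common generic choice is a one-pole exponential smoother. This is not a Jeeliz config; the alpha value is a tuning assumption:

```javascript
// One-pole (exponential) low-pass filter for an array of AU values.
// alpha in (0, 1]: smaller alpha = smoother but laggier output.
function makeSmoother(alpha) {
  let prev = null;
  return function smooth(values) {
    if (prev === null) {
      prev = values.slice(); // first frame passes through unchanged
    } else {
      for (let i = 0; i < values.length; ++i) {
        prev[i] = alpha * values[i] + (1 - alpha) * prev[i];
      }
    }
    return prev.slice();
  };
}

// Usage sketch: const smooth = makeSmoother(0.3);
// each frame:   const filtered = smooth(rawAUValues);
```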

Change position after change model

Hello,

thanks for sharing this project. I'm developing a function to let the user change the model (fox and girl).

I use load_weboji:

```javascript
'load_weboji': function(modelURL, matParams, callback){
  return load_model(modelURL, false, function(){
    that.set_materialParameters(matParams);
    if (callback) callback();
  });
},
```

The issue is that I set the initial position to [0, -80, 0] and it works well, but when I change the model,
_ThreeMorphAnimMesh is removed from the scene and the position is reset to [0, 0, 0].

I tried set_positionScale([0, -80, 0], 1) but I get Cannot read property 'fromArray' of undefined on this line:

```javascript
set_positionScale: function(vector3Array, scale){
  if (vector3Array){
    _ThreeMorphAnimMesh.position.fromArray(vector3Array);
  }
  if (typeof(scale) !== 'undefined'){
    _ThreeMorphAnimMesh.scale.set(-scale, scale, scale);
  }
},
```

Am I doing anything wrong? Did I miss something?

Camera box not appearing

Totally cool app!

I am running Chrome Version 67.0.3396.99 (Official Build) (64-bit) and have allowed access to cam.

Nothing happens on the screen with either of your example demos. With the cute fox, there is nothing but a background image with rounded lines on screen. In the other one the SVG appears, but nothing else. There is no camera box in the lower corner as shown in your example screenshot.

Is this for Mac/Apple only, or will it run on Windows?

Thanks.

How to control springiness effect for custom 3d models?

In the ThreeJSHelper file, my assumption is that the rotationSpringCoeff and rotationAmortizationCoeff parameters are responsible for the "springiness" effect of the 3D model.

It works great for the fox model, but it doesn't look good for my custom 3D models, which I created using this tutorial. For my model I have set these parameters as follows to lower the springiness:
rotationSpringCoeff = 0.0005 and rotationAmortizationCoeff = 0.5

However, I want to use this effect on my 3D model while also controlling the location and intensity of the effect, for example on the trunk and ears of an elephant. Are there any specific requirements, or code I have to change, to achieve this? I have found no information to continue on. Please guide me onto the correct path.

Regards,
Andy
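For intuition about the two coefficients discussed above, a spring coefficient plus an amortization (damping) coefficient typically drive a value toward a target like the sketch below. This is a guess at the general technique, not the helper's actual code:

```javascript
// One step of a damped spring: the value is pulled toward the target with
// strength springCoeff, while amortizationCoeff (< 1) bleeds off velocity.
function springStep(current, target, velocity, springCoeff, amortizationCoeff) {
  const accel = (target - current) * springCoeff;
  const newVelocity = (velocity + accel) * amortizationCoeff;
  return { value: current + newVelocity, velocity: newVelocity };
}

// Lowering springCoeff weakens the pull toward the target; lowering
// amortizationCoeff damps oscillation faster, which matches the intent
// of the lowered values quoted in the issue above.
```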

Hardware requirements

Hello, thank you for the work you do on this library.
I would like to know the minimum hardware requirements.
Could a PC with 4 GB of RAM and an Intel Atom x5-Z8350 processor
running Windows 10 work?
thanks
