Google Coral support (compreface issue, open, 47 comments)

exadel-inc commented on May 12, 2024
Google Coral support

from compreface.

Comments (47)

gtxaspec commented on May 12, 2024

Incorporating Google Coral TPU support into CompreFace would be a game-changing move. Not only has this feature demonstrated notable effectiveness in Frigate, but it is also recognized for its substantial performance gains.

Its integration with CompreFace would supercharge face recognition capabilities by accelerating processing times. This upgrade could broaden CompreFace's appeal, making it more accessible and valuable to a diverse user base. Enthusiastically anticipating the implementation of this feature! +1

meks007 commented on May 12, 2024

Hi @pospielov,

I have made progress with EFRS-1114 and managed to get detection working using my Coral USB Accelerator.

There are a couple of things:

  • I've changed uWSGI to run as root. Running as www-data would require usermod -a -G plugdev www-data for www-data to get permission to access the Edge TPU
  • The commit bd8b682 needs to be reverted
  • The container needs to be privileged and have /dev/bus/usb mapped (originally I thought either one was enough; EDIT: it needs both to work)

I've managed to get it working that way. However, I only started using CompreFace today and don't fully understand the architecture behind it yet. I'm using the single-container approach, and I think I can't just replace the calculator?
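The access requirements in the checklist above can be turned into a quick diagnostic. A hypothetical helper (not part of CompreFace) that mirrors the checklist:

```python
import grp
import os


def edge_tpu_access_hints(user="www-data"):
    """Return likely reasons the Edge TPU is unreachable.

    Mirrors the checklist above; the helper name and messages are
    illustrative, not CompreFace code.
    """
    hints = []
    # The USB bus must be visible inside the container
    # (--privileged plus -v /dev/bus/usb:/dev/bus/usb).
    if not os.path.isdir("/dev/bus/usb"):
        hints.append("map /dev/bus/usb into the container and run it privileged")
    # A non-root service user needs plugdev membership to open the device.
    try:
        members = grp.getgrnam("plugdev").gr_mem
    except KeyError:  # group does not exist on this system
        members = []
    if user != "root" and user not in members:
        hints.append(f"usermod -a -G plugdev {user}")
    return hints
```

Running it inside the container as the uWSGI user would print exactly the fixes listed above when something is missing.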

bropat commented on May 12, 2024

Check out here.

pospielov commented on May 12, 2024

https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile.full
here is the new Dockerfile. As I understand it, in Irina's environment it didn't see the Google Coral.
The problem is probably with macOS.
If you have Linux, you can try it out; it may already work.
Here is a doc with an example of using Google Coral with Docker:
https://medium.com/star-gazers/running-tensorflow-lite-at-the-edge-with-raspberry-pi-google-coral-and-docker-83d01357469c
I think you can build it with this command:
docker build -t embedding-calculator --build-arg SKIP_TESTS=true .
and run it with this:
docker run -p 3000:3000 --privileged -v /dev/bus/usb:/dev/bus/usb embedding-calculator
Then, if everything looks OK in the logs, you can open Swagger in the browser:
http://localhost:3000/apidocs
and invoke a /find_faces request. If everything is OK, we can try to put it into docker-compose and check the whole system
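Once the container is listening, the /find_faces call can also be issued from a short script instead of Swagger. A standard-library sketch that only builds the request (the multipart field name "file" is an assumption; adjust host/port to your port mapping):

```python
import mimetypes
import urllib.request
import uuid


def build_find_faces_request(host="localhost", port=3000,
                             filename="photo.jpg", image_bytes=b"", limit=0):
    """Build (but do not send) a multipart POST for /find_faces.

    The form field name "file" is an assumption for illustration.
    """
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    # Assemble a minimal multipart/form-data body by hand.
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"http://{host}:{port}/find_faces?limit={limit}",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )
```

Sending it is then just urllib.request.urlopen(req) once the container is up.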

pospielov commented on May 12, 2024

fe08d90#diff-462581714c2d689beb979af2fa29b0c9122382efafbe9133b4319d79c1c8d6e8
This build problem can be fixed by adding PyYAML==5.4.1 to the requirements.txt file.
So yes, merging master into this branch should fix the problem, but other problems could appear, who knows. So I would recommend fixing it directly in requirements.txt, and merging master only after making sure that everything works
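The build problem being pinned around here is PyYAML >= 6 removing YAMLLoadWarning, which the code imports. Besides pinning, the import could be made tolerant; a minimal sketch (the fallback class is my assumption, not CompreFace code):

```python
# Guarded import: PyYAML >= 6 removed YAMLLoadWarning, which is what
# breaks without the PyYAML==5.4.1 pin suggested above.
try:
    from yaml import YAMLLoadWarning
except ImportError:  # PyYAML >= 6 (or PyYAML absent) no longer exposes it
    class YAMLLoadWarning(UserWarning):
        """Stand-in so downstream warning filters keep working."""
```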

iamacarpet commented on May 12, 2024

CUDA is NVIDIA's GPGPU platform, and from the file names it looks like it is using TensorFlow rather than TensorFlow Lite (which is required for Coral).

In the PR's main comment block, she referenced a function where you have to pass in the parameter "TPU", but I didn't understand what that referred to.

pospielov commented on May 12, 2024

First, this would be great!
Second, it's not so simple

There are two problems to solve:

  1. Add Google Coral support
  2. Build it for arm devices

What I suggest is to first try to build CompreFace with Google Coral support, and then try to build it for ARM devices.

We have a branch https://github.com/exadel-inc/CompreFace/tree/EFRS-1114, which is quite old, but it's a good place to start from. Could you check whether it builds and works with Google Coral?

There are two docker files:
https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile
https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile.full
I'm not sure what the difference is, but they're a good starting point for research

jmorille commented on May 12, 2024

Very nice, a killer feature two years in the making.

Do you have a working TPU Docker image for beta testing?

pospielov commented on May 12, 2024

I totally agree with you that support of Google Coral accelerator would be great.
But I described all the problems here:
#519 (comment)

iamacarpet commented on May 12, 2024

With PR #580 for Coral support, is this looking any more possible?

I tried looking through the contents of the PR but couldn't tell whether it's possible to start using it without further code changes.

@pospielov or @iamrinni , can you comment?

pospielov commented on May 12, 2024

Irina did a great job and managed to run the compreface-core functionality on Google Coral.
The problem she faced was putting this functionality into a Docker container. There are instructions on how to use Coral on Linux, but she has macOS.
Unfortunately, she left the project, and this task is stuck. I don't have time for it for now. I'll probably return to it next year, or sooner if we find another contributor with a Google Coral

iamacarpet commented on May 12, 2024

Thanks @pospielov ,

Can you offer any advice on how to get moving with it?

My python skills are pretty much nonexistent, but if it’s just wrapping everything up in Docker, I might be able to help.

The PR that is pending seems to have Docker images included… Did these not work in testing?

I haven't properly gotten to grips with the architecture to know how everything hangs together yet, but is the idea that the only thing that needs replacing is the container(s) running TensorFlow, and then it should "just work"?

I understand that might be a time-consuming question to answer; only if you have time, and any hints in the right direction would be appreciated.

patatman commented on May 12, 2024

Sorry for jumping aboard this issue, but I wanted to share my progress:

So I tried running this in my current test env and ran into some issues (Unraid OS, which is a stripped-down Linux version).
I fixed most of them, but couldn't fix some (mainly Python-related, and because of the stripped-down OS).
So I'm currently spinning up an Ubuntu 20.04 VM in my lab to see if I can get this working. I have a spare Coral that I'm going to use for testing.
If I get a successful build and the Swagger stuff running, I'll let you know.

patatman commented on May 12, 2024

Small update:
The container build succeeded, but I can't seem to run it.

[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.19 (64bit) on [Mon Nov 22 16:42:25 2021] ***
compiled with version: 8.3.0 on 22 November 2021 16:30:18
os: Linux-5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 20:00:55 UTC 2021
nodename: 16e81a2d3def
machine: x86_64
clock source: unix
detected number of CPU cores: 3
current working directory: /app/ml
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
setgid() to 33
setuid() to 33
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:3000 fd 3
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44)  [GCC 8.3.0]
Python main interpreter initialized at 0x55f50b2fd050
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 218712 bytes (213 KB) for 2 cores
*** Operational MODE: preforking ***
Traceback (most recent call last):
  File "./src/app.py", line 25, in <module>
    from src.init_runtime import init_runtime
  File "./src/init_runtime.py", line 21, in <module>
    from src._logging import init_logging
  File "./src/_logging.py", line 22, in <module>
    from yaml import YAMLLoadWarning
ImportError: cannot import name 'YAMLLoadWarning' from 'yaml' (/usr/local/lib/python3.7/dist-packages/yaml/__init__.py)
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. GAME OVER ***

I noticed the branch from Irina was behind master, so I merged master in, and now it's building again. Unfortunately, the build takes quite a long time, so I'm just waiting on that to see if it fixes the issue.

patatman commented on May 12, 2024

Last update:
Got the container running; the merge from master seems to have done the trick.
I'm currently trying to scan an image using the Swagger UI, but I'm getting the following error:

{"severity": "CRITICAL", "message": "RuntimeError: Node number 3 (CONV_2D) failed to invoke.\n", "request": {"method": "POST", "path": "/find_faces?limit=0", "filename": "download.jpeg", "api_key": "", "remote_addr": "192.168.1.174"}, "logger": "src.services.flask_.error_handling", "module": "error_handling", "traceback": "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.7/dist-packages/flask/app.py\", line 1950, in full_dispatch_request\n    rv = self.dispatch_request()\n  File \"/usr/local/lib/python3.7/dist-packages/flask/app.py\", line 1936, in dispatch_request\n    return self.view_functions[rule.endpoint](**req.view_args)\n  File \"./src/services/flask_/needs_attached_file.py\", line 32, in wrapper\n    return f(*args, **kwargs)\n  File \"./src/_endpoints.py\", line 72, in find_faces_post\n    face_plugins=face_plugins\n  File \"./src/services/facescan/plugins/mixins.py\", line 44, in __call__\n    faces = self._fetch_faces(img, det_prob_threshold)\n  File \"./src/services/facescan/plugins/mixins.py\", line 51, in _fetch_faces\n    boxes = self.find_faces(img, det_prob_threshold)\n  File \"./src/services/facescan/plugins/facenet/coralmtcnn/coralmtcnn.py\", line 78, in find_faces\n    detect_face_result = fdn.detect_faces(img)\n  File \"/usr/local/lib/python3.7/dist-packages/mtcnn_tflite/MTCNN.py\", line 308, in detect_faces\n    result = stage(img, result[0], result[1])\n  File \"/usr/local/lib/python3.7/dist-packages/mtcnn_tflite/MTCNN.py\", line 357, in __stage1\n    pnetlite.invoke()\n  File \"/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/interpreter.py\", line 493, in invoke\n    self._interpreter.Invoke()\n  File \"/usr/local/lib/python3.7/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py\", line 113, in Invoke\n    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)\nRuntimeError: Node number 3 (CONV_2D) failed to invoke.\n\n", 
"build_version": "dev"}
2021-11-22 17:10:35.050104: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)

I don't have time to debug any further right now; I will continue a bit later. Maybe someone can continue based on what I provided.
I pushed my changes to my own fork here: https://github.com/patatman/CompreFace/tree/EFRS-1114

patatman commented on May 12, 2024

I continued a bit today, and I ruled out some issues:

  • The USB is properly passed to the container
  • The Google Coral is working; I tested this with the Google example image

It really looks like this part is throwing the error:

2021-11-23 20:09:37.524992: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)

I'm not sure why, though, and I'm not a programmer. If someone with a better understanding of Python or this package is willing to help, that would be awesome! I'm open to doing a pair-programming session to debug/solve this issue.

patatman commented on May 12, 2024

I had a closer look at the PR, but even after changing the code to:

    def _calculate_embeddings(self, cropped_images, mode='CPU'):
        """Run forward pass to calculate embeddings"""
        if mode == 'TPU':
            calc_model = self._embedding_calculator_tpu
        else:
            calc_model = self._embedding_calculator_tpu
            # cropped_images = [prewhiten(img).astype(np.float32) for img in cropped_images]

Basically forcing it to always use the _embedding_calculator_tpu path (if I'm correct). It still gives the same error.
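For reference, a dispatch like the one being forced above can be written so the override is explicit rather than editing both branches of the if to the same attribute. A hypothetical sketch (the names only loosely mirror the PR's code):

```python
from typing import Dict, Optional


def select_embedding_calculator(calculators: Dict[str, object],
                                mode: str = "CPU",
                                force_mode: Optional[str] = None):
    """Pick the embedding calculator for the requested mode.

    calculators maps mode names ('CPU', 'TPU') to model objects.
    force_mode, when set, overrides mode, which makes a "force TPU"
    experiment visible and easy to revert.
    """
    effective = force_mode or mode
    if effective not in calculators:
        raise ValueError(f"no calculator for mode {effective!r}")
    return calculators[effective]
```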

patatman commented on May 12, 2024

The error below doesn't seem to be Google Coral-related; it is also generated if I use the CPU.

2021-11-23 20:09:37.524992: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)

I've debugged a ton, and currently I'm stuck on this error:

{
  "message": "RuntimeError: Node number 3 (CONV_2D) failed to invoke.\n"
}

As soon as I try to invoke the find_faces API, it generates this error.
The status API works as expected, and the program doesn't crash; it's just that this specific request generates an error.
Looking into that error, some people say it's related to the model used. I'm not familiar enough to debug this any further.

Does anyone have any ideas on how to continue? Currently stuck

pospielov commented on May 12, 2024

Kind of a strange error.
I tried to build it myself. First, I took the clean branch and fixed only requirements.txt, as I mentioned.
Then I built with:
docker build -t embedding-calculator --build-arg SKIP_TESTS=true -f tpu.Dockerfile .
Then I ran it, with and without the Google Coral:
docker run -it --name=test -p 3000:3000 embedding-calculator
It worked normally. I mean, according to the code, by default it will use the CPU, not the TPU.

Then I changed the file coralmtcnn.py the same way you did.
I built and ran it with this command (and with the Google Coral):
docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb --name=test -p 3000:3000 embedding-calculator
When I tried to call the find_faces endpoint, I got this error:

Traceback (most recent call last):
    File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
    File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
    File "./src/services/flask_/needs_attached_file.py", line 32, in wrapper
    return f(*args, **kwargs)
    File "./src/_endpoints.py", line 72, in find_faces_post
    face_plugins=face_plugins
    File "./src/services/facescan/plugins/mixins.py", line 46, in __call__
    self._apply_face_plugins(face, face_plugins)
    File "./src/services/facescan/plugins/mixins.py", line 67, in _apply_face_plugins
    raise exceptions.PluginError(f'{plugin} error - {e}')
    src.services.facescan.plugins.exceptions.PluginError: coralmtcnn.Calculator error - Failed to load delegate from libedgetpu.so.1

Looks like a problem with a library.
I also got this warning:

{"severity": "DEBUG", "message": "Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.", 

I can't say where the problem is; I need to dig deeper to fix it.
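The "Failed to load delegate from libedgetpu.so.1" failure above usually means the libedgetpu runtime cannot reach the accelerator. A minimal sketch of how TFLite code typically loads the Edge TPU delegate with a CPU fallback (illustrative only, not CompreFace's actual plugin code; it assumes tflite_runtime or TensorFlow is installed):

```python
def load_coral_interpreter(model_path, delegate_lib="libedgetpu.so.1"):
    """Create a TFLite interpreter, preferring the Edge TPU delegate.

    load_delegate raises ValueError ("Failed to load delegate from
    libedgetpu.so.1") when the accelerator is unreachable; this sketch
    falls back to CPU in that case and reports which path was taken.
    """
    try:
        from tflite_runtime.interpreter import Interpreter, load_delegate
    except ImportError:
        # Fall back to the full TensorFlow package if tflite_runtime is absent.
        from tensorflow.lite.python.interpreter import Interpreter, load_delegate
    try:
        delegates = [load_delegate(delegate_lib)]
    except (OSError, ValueError):
        delegates = []  # Coral unreachable: inference silently runs on CPU
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter, bool(delegates)
```

Logging the boolean it returns would distinguish "running on the Coral" from the silent CPU fallback that makes unplugging the Coral appear to change nothing.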

patatman commented on May 12, 2024

So I tested again with the clean branch, as you mentioned above.

  1. Adjust requirements.txt
  2. Build and run: successful
  3. Adjust embedding-calculator/src/services/facescan/plugins/facenet/coralmtcnn/coralmtcnn.py to ONLY use the Coral
  4. Build and run: successful!
  5. Remove the Coral from USB (while the container is running)
  6. Still works? So it wasn't using the Coral after all.
  7. Check the code inside the container to make sure it's running the version where both branches use self._embedding_calculator_tpu
  8. What is happening? haha

I'm doing some more tests tomorrow.

Your error Failed to load delegate from libedgetpu.so.1 suggests it can't find the Coral. Make sure you have the Coral plugged in before you run the container. I was trying to replicate this when I was running step 5, haha.

pospielov commented on May 12, 2024

Do you get the same error as me if you unplug the Coral?

patatman commented on May 12, 2024

No, I can't seem to get the code to run on the Coral. It doesn't matter whether I have it plugged in, even with the code adjustment to force it; it always uses the CPU instead of the Coral.

beetlezap commented on May 12, 2024

I can gladly be part of testing on my RPi4 + Google Coral running Frigate today

isaacolsen94 commented on May 12, 2024

Any updates on support for Google Coral?

pospielov commented on May 12, 2024

Unfortunately no, we don't have contributors with a Google Coral at the moment

archef2000 commented on May 12, 2024

I know a bit of Python and have a Raspberry Pi 4 (8 GB, 64-bit) with a Coral AI accelerator. What is the latest code with Coral support?
I would be happy to help.

archef2000 commented on May 12, 2024

Are these Dockerfiles ARM64-compatible?

pospielov commented on May 12, 2024

No, we don't have ARM-compatible Dockerfiles. That is another challenge

LordNex commented on May 12, 2024

I have all of that, plus a Jetson Nano, along with a test VMware environment running on a Dell PowerEdge R620. Although I'm no programmer, I do have 26 years of IT background, mainly networking

pospielov commented on May 12, 2024

#610 (comment)
I described here what we need to do to support ARM devices.
Will you be able to help with it?

archef2000 commented on May 12, 2024

#610 (comment) I described here what we need to do to support ARM devices. Will you be able to help with it?

I would love to.

LordNex commented on May 12, 2024

#610 (comment)

I described here what we need to do to support ARM devices.

Will you be able to help with it?

So basically, run through what you did on the Jetson, and that should compile an image?

leccelecce commented on May 12, 2024

I would like to help with this. Java programming background, but happy with a bit of Python, TypeScript, etc. The only problem is my Coral USB order says delivery 05/2023 :/ hoping they get some earlier stock!

LordNex commented on May 12, 2024

I'd like to see it run off the Nano. I have Frigate using the Coral for object detection; it would be nice to use all those CUDA cores for facial recognition. For now I just have it running with Double Take in a VM on a PowerEdge R620, so it's only using Xeon processor cores. Although I'm thinking about finding an old NVIDIA card to throw in there and seeing if that helps.

ninjasstudio commented on May 12, 2024

I have two PCIe M.2 Corals, and I'd be happy to help test/debug this weekend.

Corals: one dual-edge M.2 TPU and one single M.2 TPU, using two M.2-to-PCIe adapter cards.

Setup: running Frigate (GPU inference, NVIDIA GTX 1080), Double Take, and CompreFace (CPU inference, Intel Xeon E3 1280), hosted in Docker using Home Assistant OS add-ons as a QEMU client on an Ubuntu 22.04 host.

I started working on adding CompreFace GPU builds to the single-container builds for use as a Home Assistant add-on last weekend, but I would much rather do face detection on the Corals.

I don't have the USB Corals, but hopefully it's the same Python API with config differences.

pospielov commented on May 12, 2024

I'm not sure what you mean by "adding CompreFace GPU builds to the single container builds".
https://hub.docker.com/r/exadel/compreface/tags
There are already tags with GPU support.
Regarding USB Corals: we don't have contributors with Google Corals, so adding support for them was frozen.

archef2000 commented on May 12, 2024

I have one and would be happy to help, but I only have an RPi

muebau commented on May 12, 2024

I was going to make the same offer 😄

just5ky commented on May 12, 2024

Hi, I have two Mini PCIe Coral TPUs, currently used with Frigate only.
I really want to use CompreFace with them. I will be happy to help develop this feature by testing it and providing logs for troubleshooting.
Running it on an i3-12100F, 32 GB RAM, Quadro T600.
I also have an RPi4, but no USB Coral to test an ARM build.

P.S. I don't know much about coding, but I have built and maintained multi-arch Docker containers

just5ky commented on May 12, 2024

First, this would be great! Second, it's not so simple

There are two problems to solve:

  1. Add Google Coral support
  2. Build it for arm devices

What I suggest is first try to build CompreFace with Google Coral support and then try to build it for arm devices.

We have a branch https://github.com/exadel-inc/CompreFace/tree/EFRS-1114, which is quite old, but it's a good idea to start from it. Could you try if it builds and works with Google Coral?

There are two docker files: https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile.full Not sure what is the difference, but it's good to start researching from them

I do not see any error in the tpu.Dockerfile build logs; not sure why it's crashing.

❯ docker buildx build -t justsky/compreface:tpu . -f tpu.Dockerfile
[+] Building 668.6s (16/23)
 => [internal] load .dockerignore                                                                                                                                                                                                  0.1s
 => => transferring context: 48B                                                                                                                                                                                                   0.0s
 => [internal] load build definition from tpu.Dockerfile                                                                                                                                                                           0.1s
 => => transferring dockerfile: 2.12kB                                                                                                                                                                                             0.0s
 => [internal] load metadata for docker.io/library/python:3.7-slim                                                                                                                                                                 2.2s
 => [ 1/19] FROM docker.io/library/python:3.7-slim@sha256:85ddc7b500c5e33a6eec44adbde8347a8a07714f03fc638f2cf4b13837bac601                                                                                                         0.0s
 => [internal] load build context                                                                                                                                                                                                  0.0s
 => => transferring context: 9.88kB                                                                                                                                                                                                0.0s
 => CACHED [ 2/19] RUN apt-get update && apt-get install -y build-essential cmake git wget unzip         curl yasm pkg-config libswscale-dev libtbb2 libtbb-dev libjpeg-dev         libpng-dev libtiff-dev libavformat-dev libpq-  0.0s
 => [ 3/19] RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list                                                                                        0.3s
 => [ 4/19] RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -                                                                                                                                         7.8s
 => [ 5/19] RUN apt-get update && apt-get install -y libedgetpu1-std                                                                                                                                                              32.0s
 => [ 6/19] WORKDIR /app/ml                                                                                                                                                                                                        0.0s
 => [ 7/19] COPY requirements.txt .                                                                                                                                                                                                0.1s
 => [ 8/19] RUN pip --no-cache-dir install -r requirements.txt                                                                                                                                                                   226.9s
 => [ 9/19] COPY src src                                                                                                                                                                                                           0.1s
 => [10/19] COPY srcext srcext                                                                                                                                                                                                     0.1s
 => [11/19] RUN pip --no-cache-dir install srcext/mtcnn_tflite/                                                                                                                                                                  165.4s
 => ERROR [12/19] RUN python -m src.services.facescan.plugins.setup                                                                                                                                                              233.4s
------
 > [12/19] RUN python -m src.services.facescan.plugins.setup:
#0 8.995 Collecting tensorflow~=2.5.0
#0 14.27   Downloading tensorflow-2.5.3-cp37-cp37m-manylinux2010_x86_64.whl (460.3 MB)
#0 63.55      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 460.3/460.3 MB 10.9 MB/s eta 0:00:00
#0 65.49   Downloading tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl (454.3 MB)
#0 112.3      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 454.3/454.3 MB 10.4 MB/s eta 0:00:00
#0 114.5 Collecting tf-slim~=1.1.0
#0 114.5   Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
#0 114.5      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 352.1/352.1 KB 14.6 MB/s eta 0:00:00
#0 114.7 Collecting typing-extensions~=3.7.4
#0 114.7   Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
#0 114.8 Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.2.0)
#0 116.0 Collecting numpy~=1.19.2
#0 116.0   Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB)
#0 117.4      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 10.8 MB/s eta 0:00:00
#0 118.4 Collecting keras-nightly~=2.5.0.dev
#0 118.5   Downloading keras_nightly-2.5.0.dev2021032900-py2.py3-none-any.whl (1.2 MB)
#0 118.7      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 9.9 MB/s eta 0:00:00
#0 119.2 Collecting wrapt~=1.12.1
#0 119.3   Downloading wrapt-1.12.1.tar.gz (27 kB)
#0 119.3   Preparing metadata (setup.py): started
#0 120.1   Preparing metadata (setup.py): finished with status 'done'
#0 123.1 Collecting grpcio~=1.34.0
#0 123.3   Downloading grpcio-1.34.1-cp37-cp37m-manylinux2014_x86_64.whl (4.0 MB)
#0 123.7      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 10.2 MB/s eta 0:00:00
#0 123.7 Requirement already satisfied: tensorboard~=2.5 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (2.11.2)
#0 123.8 Collecting six~=1.15.0
#0 123.9   Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
#0 124.2 Collecting h5py~=3.1.0
#0 124.2   Downloading h5py-3.1.0-cp37-cp37m-manylinux1_x86_64.whl (4.0 MB)
#0 124.6      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 10.9 MB/s eta 0:00:00
#0 124.6 Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.4.0)
#0 124.6 Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (3.19.6)
#0 124.6 Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (3.3.0)
#0 124.7 Collecting termcolor~=1.1.0
#0 124.7   Downloading termcolor-1.1.0.tar.gz (3.9 kB)
#0 124.8   Preparing metadata (setup.py): started
#0 125.5   Preparing metadata (setup.py): finished with status 'done'
#0 125.6 Collecting flatbuffers~=1.12.0
#0 125.7   Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
#0 125.7 Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (1.6.3)
#0 125.8 Collecting absl-py~=0.10
#0 125.8   Downloading absl_py-0.15.0-py3-none-any.whl (132 kB)
#0 125.8      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.0/132.0 KB 23.6 MB/s eta 0:00:00
#0 125.9 Collecting tensorflow-estimator<2.6.0,>=2.5.0rc0
#0 126.1   Downloading tensorflow_estimator-2.5.0-py2.py3-none-any.whl (462 kB)
#0 126.1      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 462.4/462.4 KB 11.2 MB/s eta 0:00:00
#0 126.4 Collecting keras-preprocessing~=1.1.2
#0 126.5   Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
#0 126.5      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.6/42.6 KB 33.7 MB/s eta 0:00:00
#0 126.5 Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.40.0)
#0 126.7 Requirement already satisfied: cached-property in /usr/local/lib/python3.7/site-packages (from h5py~=3.1.0->tensorflow~=2.5.0) (1.5.2)
#0 127.0 Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (1.0.1)
#0 127.1 Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (1.8.1)
#0 127.1 Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (57.5.0)
#0 127.1 Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (2.19.0)
#0 127.1 Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (2.24.0)
#0 127.1 Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (0.4.6)
#0 127.2 Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (3.4.3)
#0 127.2 Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (0.6.1)
#0 127.3 Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (0.3.0)
#0 127.3 Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (5.3.0)
#0 127.3 Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (4.9)
#0 127.3 Requirement already satisfied: urllib3<2.0 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (1.25.11)
#0 127.4 Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow~=2.5.0) (1.3.1)
#0 127.4 Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/site-packages (from markdown>=2.6.8->tensorboard~=2.5->tensorflow~=2.5.0) (4.13.0)
#0 127.5 Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (3.0.4)
#0 127.5 Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (2.10)
#0 127.5 Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (2023.5.7)
#0 127.7 Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.5->tensorflow~=2.5.0) (3.15.0)
#0 127.8 Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (0.5.0)
#0 127.8 Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow~=2.5.0) (3.2.2)
#0 128.3 Building wheels for collected packages: termcolor, wrapt
#0 128.3   Building wheel for termcolor (setup.py): started
#0 129.3   Building wheel for termcolor (setup.py): finished with status 'done'
#0 129.3   Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=b90b359c0c8bf7061e2b0fbe2e533803da6eca3a8889741f0fcfabf841586335
#0 129.3   Stored in directory: /tmp/pip-ephem-wheel-cache-k5vfzimf/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
#0 129.3   Building wheel for wrapt (setup.py): started
#0 131.9   Building wheel for wrapt (setup.py): finished with status 'done'
#0 131.9   Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_x86_64.whl size=70783 sha256=d4694c2c27357bde86f0622e1a5f2386225f0f568058633179659b30acd5102c
#0 131.9   Stored in directory: /tmp/pip-ephem-wheel-cache-k5vfzimf/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
#0 131.9 Successfully built termcolor wrapt
#0 133.7 Installing collected packages: wrapt, typing-extensions, termcolor, tensorflow-estimator, keras-nightly, flatbuffers, six, numpy, keras-preprocessing, h5py, grpcio, absl-py, tf-slim, tensorflow
#0 133.7   Attempting uninstall: wrapt
#0 133.7     Found existing installation: wrapt 1.15.0
#0 133.8     Uninstalling wrapt-1.15.0:
#0 133.8       Successfully uninstalled wrapt-1.15.0
#0 133.9   Attempting uninstall: typing-extensions
#0 133.9     Found existing installation: typing_extensions 4.6.2
#0 133.9     Uninstalling typing_extensions-4.6.2:
#0 134.6       Successfully uninstalled typing_extensions-4.6.2
#0 134.7   Attempting uninstall: termcolor
#0 134.7     Found existing installation: termcolor 2.3.0
#0 134.7     Uninstalling termcolor-2.3.0:
#0 134.7       Successfully uninstalled termcolor-2.3.0
#0 134.8   Attempting uninstall: tensorflow-estimator
#0 134.8     Found existing installation: tensorflow-estimator 2.11.0
#0 134.9     Uninstalling tensorflow-estimator-2.11.0:
#0 135.1       Successfully uninstalled tensorflow-estimator-2.11.0
#0 138.2   Attempting uninstall: flatbuffers
#0 138.2     Found existing installation: flatbuffers 23.5.26
#0 138.3     Uninstalling flatbuffers-23.5.26:
#0 138.3       Successfully uninstalled flatbuffers-23.5.26
#0 138.4   Attempting uninstall: six
#0 138.4     Found existing installation: six 1.16.0
#0 138.4     Uninstalling six-1.16.0:
#0 139.1       Successfully uninstalled six-1.16.0
#0 139.2   Attempting uninstall: numpy
#0 139.2     Found existing installation: numpy 1.21.6
#0 139.8     Uninstalling numpy-1.21.6:
#0 140.8       Successfully uninstalled numpy-1.21.6
#0 145.4   Attempting uninstall: h5py
#0 145.4     Found existing installation: h5py 3.8.0
#0 145.5     Uninstalling h5py-3.8.0:
#0 145.7       Successfully uninstalled h5py-3.8.0
#0 146.2   Attempting uninstall: grpcio
#0 146.3     Found existing installation: grpcio 1.54.2
#0 146.3     Uninstalling grpcio-1.54.2:
#0 146.5       Successfully uninstalled grpcio-1.54.2
#0 146.9   Attempting uninstall: absl-py
#0 146.9     Found existing installation: absl-py 1.4.0
#0 147.0     Uninstalling absl-py-1.4.0:
#0 147.0       Successfully uninstalled absl-py-1.4.0
#0 148.2   Attempting uninstall: tensorflow
#0 148.2     Found existing installation: tensorflow 2.11.0
#0 153.2     Uninstalling tensorflow-2.11.0:
#0 172.9       Successfully uninstalled tensorflow-2.11.0
#0 219.5 Successfully installed absl-py-0.15.0 flatbuffers-1.12 grpcio-1.34.1 h5py-3.1.0 keras-nightly-2.5.0.dev2021032900 keras-preprocessing-1.1.2 numpy-1.19.5 six-1.15.0 tensorflow-2.5.0 tensorflow-estimator-2.5.0 termcolor-1.1.0 tf-slim-1.1.0 typing-extensions-3.7.4.3 wrapt-1.12.1
#0 219.5 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#0 224.9 WARNING: You are using pip version 22.0.4; however, version 23.1.2 is available.
#0 224.9 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
------
tpu.Dockerfile:42
--------------------
  40 |     COPY srcext srcext
  41 |     RUN pip --no-cache-dir install srcext/mtcnn_tflite/
  42 | >>> RUN python -m src.services.facescan.plugins.setup
  43 |
  44 |     # copy rest of the code
--------------------
ERROR: failed to solve: process "/bin/bash -c python -m src.services.facescan.plugins.setup" did not complete successfully: exit code: 132
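Exit code 132 here is worth decoding: it is 128 + 4, i.e. the Python process was killed by signal 4 (SIGILL, illegal instruction). With TensorFlow 2.5 wheels this typically means the build host's CPU lacks an instruction set the wheel was compiled for (official TF wheels since 1.6 assume AVX). That diagnosis is an inference, not something stated in the thread; a quick sketch of the signal arithmetic and an AVX check:

```shell
# A process terminated by SIGILL (signal 4) reports exit status 128 + 4 = 132
sh -c 'kill -s ILL $$'
echo "exit: $?"

# Does this CPU advertise AVX? (official TF wheels >= 1.6 are built with it)
grep -q avx /proc/cpuinfo && echo "AVX present" || echo "no AVX"
```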
─❯ docker buildx build -t justsky/compreface:tpu . -f tpu.Dockerfile.full
[+] Building 87.1s (9/28)
 => [internal] load .dockerignore                                                                                                                                                                                                  0.1s
 => => transferring context: 48B                                                                                                                                                                                                   0.0s
 => [internal] load build definition from tpu.Dockerfile.full                                                                                                                                                                      0.1s
 => => transferring dockerfile: 2.63kB                                                                                                                                                                                             0.0s
 => [internal] load metadata for docker.io/library/debian:buster-slim                                                                                                                                                              3.9s
 => [ 1/24] FROM docker.io/library/debian:buster-slim@sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2                                                                                                      8.7s
 => => resolve docker.io/library/debian:buster-slim@sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2                                                                                                        0.0s
 => => sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2 984B / 984B                                                                                                                                         0.0s
 => => sha256:9d0fb5b9d5318bf507d4507fc846e36a55de7a1198bfc63cf12a2f7c99011efa 529B / 529B                                                                                                                                         0.0s
 => => sha256:4b589adf4404fb807a7225748174f3655b8e1b516e5ff5552f2d13083bafb4c1 1.46kB / 1.46kB                                                                                                                                     0.0s
 => => sha256:99bf4787315b60d97d860ac6d006b7835b2241a601e93c2da4af6ca554be8704 27.14MB / 27.14MB                                                                                                                                   3.6s
 => => extracting sha256:99bf4787315b60d97d860ac6d006b7835b2241a601e93c2da4af6ca554be8704                                                                                                                                          4.6s
 => [internal] load build context                                                                                                                                                                                                  0.0s
 => => transferring context: 9.88kB                                                                                                                                                                                                0.0s
 => [ 2/24] RUN apt-get update                                                                                                                                                                                                    12.3s
 => [ 3/24] RUN apt-get install --no-install-recommends -y     gnupg     ca-certificates     curl     apt-utils     apt-transport-https                                                                                           22.7s
 => [ 4/24] RUN apt-get install -y curl python3.7 python3.7-dev python3.7-distutils                                                                                                                                               38.8s
 => ERROR [ 5/24] RUN update-alternatives --set python /usr/bin/python3.7                                                                                                                                                          0.6s
------
 > [ 5/24] RUN update-alternatives --set python /usr/bin/python3.7:
#0 0.475 update-alternatives: error: no alternatives for python
------
tpu.Dockerfile.full:16
--------------------
  14 |
  15 |     # Set python 3 as the default python
  16 | >>> RUN update-alternatives --set python /usr/bin/python3.7
  17 |
  18 |     # Upgrade pip to latest version
--------------------
ERROR: failed to solve: process "/bin/sh -c update-alternatives --set python /usr/bin/python3.7" did not complete successfully: exit code: 2
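The `--set` failure above happens because no `python` alternative has been registered yet; `update-alternatives --set` can only select among alternatives previously added with `--install`. A likely fix for the Dockerfile (an assumption, not verified against the branch):

```dockerfile
# Register python3.7 as an alternative first, then select it
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1 \
 && update-alternatives --set python /usr/bin/python3.7
```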

from compreface.

pospielov avatar pospielov commented on May 12, 2024

The single-container build uses the compreface-core image as a base and then adds everything else inside.
So the task is still to produce a proper compreface-core image, so that you can build the single image later.

from compreface.

meks007 avatar meks007 commented on May 12, 2024

The single-container build uses the compreface-core image as a base and then adds everything else inside. So the task is still to produce a proper compreface-core image, so that you can build the single image later.

Right, that much I understand. How do I make a "proper compreface-core" image? I've built and run the embedding-calculator with Coral support and have a working container for that, but I'm afraid that without further guidance I can't go any further.

from compreface.

pospielov avatar pospielov commented on May 12, 2024

compreface-core is the name of the embedding-calculator image.
Have you managed to build the embedding-calculator docker image?
The easiest way to build it:

  1. Go to ./dev folder
  2. Run docker compose build compreface-core
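The two steps above, as shell commands (the checkout path is an assumption):

```shell
cd CompreFace/dev
docker compose build compreface-core
```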

from compreface.

meks007 avatar meks007 commented on May 12, 2024

Not yet, since I hadn't received a reply. I'll try your instructions.
EDIT: I do have a local image; can I upload it somewhere?

from compreface.

pospielov avatar pospielov commented on May 12, 2024

To upload the image, you can use:
docker login - to log in to your Docker account
docker push <the name of your image>
The name of your image should contain the name of your Docker account.
You can see examples here:
https://docs.docker.com/engine/reference/commandline/push/#examples
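The commands above, sketched end to end (the account and tag names are placeholders, not from the thread):

```shell
docker login
# the image name must be prefixed with your Docker Hub account
docker tag compreface-core <your-account>/compreface-core:tpu
docker push <your-account>/compreface-core:tpu
```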

from compreface.

Dvalin21 avatar Dvalin21 commented on May 12, 2024

Just wanted to confirm: is the Google Coral TPU M.2 working? To clarify, since I'm using the M.2, it would expose /dev/apex_0 to the compreface-core image, right?

from compreface.

cliffkujala avatar cliffkujala commented on May 12, 2024

Just wanted to confirm: is the Google Coral TPU M.2 working? To clarify, since I'm using the M.2, it would expose /dev/apex_0 to the compreface-core image, right?

Did this work for you with CompreFace?

Currently, I'm using a Coral TPU passed through via LXC > Docker > Frigate, and I also have Intel HWAccel working in Frigate; my Coral is connected via USB. Since the Coral works seamlessly with Frigate, is there any reason, given what you've demonstrated here, that I couldn't use the same device callouts in my docker-compose so that CompreFace, Deepstack, and CodeProject can also access the Coral?
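For reference, the device callouts in question would look roughly like this in docker-compose (the service name and the privileged flag are taken from reports earlier in this thread; treat this as a sketch, not a verified config):

```yaml
services:
  compreface-core:
    devices:
      - /dev/apex_0:/dev/apex_0       # M.2 / PCIe Coral
      # - /dev/bus/usb:/dev/bus/usb   # USB Accelerator variant
    privileged: true                  # reported necessary for the USB Coral
```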

from compreface.
