
Google Colab error about celeb_recognition (12 comments, closed)

adbmdp commented on May 26, 2024
Google Colab error


Comments (12)

shobhit9618 commented on May 26, 2024

The confidence value is not actually scaled to 1.0; a good threshold would be 0.2.

For training, anything more than 20 pictures works well. On CPU it would be very slow; you can use a Colab GPU itself to train this.

Just FYI, the model is not trained in the traditional sense. The script creates embeddings for the celeb faces and saves them in a vector space (using the annoy library), and the name-to-id mappings are stored in a json file.
During inference, we compute the embedding for the face to be recognized and compare it with the vector space to find the closest match.
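For reference, a minimal sketch of that pattern with the annoy library (illustrative only; the 2048-dimensional 'angular' index matches the create_celeb_model code further down in this thread, everything else is a stand-in rather than the repo's exact code):

from annoy import AnnoyIndex
import numpy as np

embedding_size = 2048                                 # same size/metric as AnnoyIndex(2048, 'angular') below
index = AnnoyIndex(embedding_size, "angular")

# "Training": one item per detected face embedding; ids map back to celeb names in a json file
celeb_mapping = {"Some Celeb": [0]}                   # hypothetical entry
index.add_item(0, np.random.rand(embedding_size))     # stand-in for a real face embedding
index.build(10)
index.save("celeb_index.ann")

# Inference: embed the query face and look up the nearest stored item
query = np.random.rand(embedding_size)                # stand-in for the query face embedding
ids, dists = index.get_nns_by_vector(query, 1, include_distances=True)
print(ids, dists)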


shobhit9618 commented on May 26, 2024

Hey, thanks for the PR! I have merged it; feel free to use or modify the code for your application.


shobhit9618 commented on May 26, 2024

Thanks for trying out the repo.
I have uploaded the model files to my huggingface repo: https://huggingface.co/resnet151/celeb_detector.

You can install everything using the following commands:

git lfs install
git clone https://huggingface.co/resnet151/celeb_detector
cd celeb_detector
pip install -e .

You should be able to use the Colab notebook after that by pointing it to the correct model paths, as sketched below.
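A minimal sketch (assuming the notebook exposes the same model-path variables that are quoted later in this thread), pointing them at the files from the cloned huggingface repo:

# Hypothetical notebook cell; adjust the prefix to wherever you cloned celeb_detector
celeb_mapping_filepath = "celeb_detector/models/celeb_mapping_117_18012022.json"
celeb_index_annpath = "celeb_detector/models/celeb_index_117_18012022.ann"
vggface_modelpath = "celeb_detector/models/vggface_resnet50.pt"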

The best way to run this would be in Python by directly importing the package:

import cv2
import celeb_detector
out = celeb_detector.get_celebrity(cv2.imread("/path/to/image"))
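If useful, a small follow-up sketch that keeps only detections above the ~0.2 confidence threshold mentioned above (the output format is assumed from the example further down in this thread):

import cv2
import celeb_detector

# Assumed output format: [{'bbox': (...), 'celeb_name': '...', 'confidence': 0.25}]
out = celeb_detector.get_celebrity(cv2.imread("/path/to/image"))
matches = [d for d in out if d["confidence"] >= 0.2]
for d in matches:
    print(d["celeb_name"], d["confidence"], d["bbox"])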

I will be updating the readme soon.


adbmdp commented on May 26, 2024

Thank you for your reply!
I'll try this.
If it works I'll also try it with my own dataset.

How many celebs are there in your dataset?

Also, your space on huggingface is not working properly: https://huggingface.co/spaces/resnet151/celeb-detector
It just displays the same picture as uploaded, but doesn't display the celeb name.


shobhit9618 commented on May 26, 2024

There are 117 celebs. I just used a scraping script to download the images, so you can easily scale this to many more celebs. The json file has the names of all the celebrities.

For training your own model, you can try to follow the readme. It might give you some issues here and there, as that script has not been tested for different scenarios, but the code is pretty straightforward and you should be able to debug it.

Regarding the huggingface space, if the model does not detect any celebrity, it just returns the same image. Check the json file to see the list of supported celebrities.

[screenshot]
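A quick way to list them (a sketch only; the mapping filename is taken from the model paths quoted later in this thread):

import json

# The mapping json maps each celeb name to the ids of its stored embeddings
with open("celeb_detector/models/celeb_mapping_117_18012022.json") as f:
    celeb_mapping = json.load(f)
print(len(celeb_mapping), "celebrities")
print(sorted(celeb_mapping)[:10])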

If you still face any issues, feel free to ping here.


adbmdp commented on May 26, 2024

Thank you. It works!

The confidence is often very low. Is that normal?
Example:
https://www.lfm.ch/wp-content/uploads/2022/10/will-smith-european-premiere-of-aladdin-may-2019-famous.jpg

[{'bbox': (803, 204, 385, 386),
  'celeb_name': 'Will Smith',
  'confidence': 0.25}]

Now I'm going to try to train my own dataset.
How many pictures per celebrity do you recommend?
I won't have access to a GPU right now; do you think I can try a small training run on my CPU?


adbmdp commented on May 26, 2024

I've prepared a dataset with four people (about 30 pictures for each person) in a /celeb_images folder.
In create_celeb_model.py I've changed:

if __name__ == "__main__":
	create_model = create_celeb_model("")

to

create_celeb_model("/content/celeb_detector/celeb_detector/celeb_images")

Result is:

Starting face detection and encoding creation
 16%|█▌        | 3/19 [00:00<00:00, 27.00it/s]join() argument must be str, bytes, or os.PathLike object, not 'ndarray'
join() argument must be str, bytes, or os.PathLike object, not 'ndarray'
join() argument must be str, bytes, or os.PathLike object, not 'ndarray'
[... the same "join() argument must be str, bytes, or os.PathLike object, not 'ndarray'" message repeats for nearly every image across the four folders (19, 23, 22 and 30 images), with an occasional "not 'NoneType'" variant ...]
100%|██████████| 30/30 [00:00<00:00, 54.54it/s]join() argument must be str, bytes, or os.PathLike object, not 'ndarray'
Encoding and mapping files saved successfully
Building ann index...
Ann index saved successfully

Some json files have been created:
[screenshot]
The mapping file looks empty, for example:
{"JoWilfried_Tsonga": [], "Guy_Forget": [], "Amelie_Mauresmo": []}

And also a /celeb_encodings folder:
[screenshot]

I don't understand the errors (join() argument must be str, bytes, or os.PathLike object, not 'ndarray').
Can you tell me if I've missed something?
Once again, thanks a lot for your help.


adbmdp commented on May 26, 2024

Also, another issue. When testing your model on my local computer I got this:


python3 celeb_detector/celeb_recognition.py --image-path "Users/toto/Downloads/Vin_diesel.jpeg"

Please install `face_recognition_models` with this command before using `face_recognition`:
pip install git+https://github.com/ageitgey/face_recognition_models

I've launched the command but still get the same message... Also, face_recognition is in requirements.txt, so I don't know why I get this error.

Same with this code:

#test.py
import cv2
import celeb_detector
out = celeb_detector.get_celebrity(cv2.imread("Users/toto/Downloads/Vin_diesel.jpeg"))

python3 test.py
Please install `face_recognition_models` with this command before using `face_recognition`:

pip install git+https://github.com/ageitgey/face_recognition_models


shobhit9618 commented on May 26, 2024

I just found a bug in the create_celeb_model.py file; can you pull the latest changes and try again? Or you can just copy the latest code from the file and try.

Regarding the face_recognition issue, did you run this command?
pip install git+https://github.com/ageitgey/face_recognition_models

If not, please run this once and check.


adbmdp commented on May 26, 2024

Thank you.
About the missing face_recognition_models: it does not work even after a pip install.
After a long search on the web, I found a solution: pip install setuptools

But now I have this error (still on my local computer):

python3 test.py

import cv2
import celeb_detector
out = celeb_detector.get_celebrity(cv2.imread("Users/toto/Downloads/Vin_diesel.jpeg"))
[ WARN:[email protected]] global loadsave.cpp:248 findDecoder imread_('Users/toto/Downloads/Vin_diesel.jpeg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "/Users/toto/Documents/projets/celeb_detector/test.py", line 3, in <module>
    out = celeb_detector.get_celebrity(cv2.imread("Users/toto/Downloads/Vin_diesel.jpeg"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/toto/Documents/projets/celeb_detector/celeb_detector/celeb_recognition.py", line 11, in get_celebrity
    celeb_recog = CelebRecognition()
                  ^^^^^^^^^^^^^^^^^^
  File "/Users/toto/Documents/projets/celeb_detector/celeb_detector/celeb_prediction_main.py", line 24, in __init__
    self.encoder_model = torch.load(vggface_modelpath).to(self.device).eval()
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/toto/Documents/projets/celeb_detector/mon_env/lib/python3.12/site-packages/torch/serialization.py", line 1040, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/toto/Documents/projets/celeb_detector/mon_env/lib/python3.12/site-packages/torch/serialization.py", line 1262, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.UnpicklingError: invalid load key, 'v'.

About my Google Colab to train a dataset, I now get this error:

Starting face detection and encoding creation
  0%|          | 0/22 [00:01<?, ?it/s]
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-11-77ce613d24ad> in <cell line: 62>()
     60 #       create_model = create_celeb_model("")
     61 
---> 62 create_celeb_model("/content/celeb_detector/celeb_images")

<ipython-input-11-77ce613d24ad> in create_celeb_model(base_path)
     41                                 celeb_encoding[c] = encoding[0]
     42                                 celeb_mapping[folder].append(c)
---> 43                                 ann_index.add_item(c, encoding[0])
     44                 save_json(celeb_mapping)
     45                 pickle.dump(celeb_encoding, open(f"celeb_encodings/{folder}_encoding.pkl", "wb" ))

IndexError: Vector has wrong length (expected 2048, got 2)

I know it's a lot of errors. 😅
But I'm learning a lot, so I'll keep trying.

I've shared a Kaggle with you.


adbmdp commented on May 26, 2024

I made some changes in /content/celeb_detector/celeb_detector/celeb_prediction_main.py
and I got no errors:

def create_celeb_model(base_path):
    ann_index = AnnoyIndex(2048, 'angular')
    os.makedirs('celeb_encodings', exist_ok=True)
    celeb_mapping = {}
    celeb_encoding = {}
    c = 0
    print("Starting face detection and encoding creation")
    for folder in os.listdir(base_path):
        celeb_mapping[folder] = []
        for image_name in tqdm(os.listdir(os.path.join(base_path, folder))):
            image = cv2.imread(os.path.join(base_path, folder, image_name))
            try:
                encodings, bboxes = celeb_recog.get_encoding(image)
            except Exception as e:
                print(e)
                continue
            if encodings is not None:
                for encoding, bbox in zip(encodings, bboxes):
                    c += 1
                    celeb_encoding[c] = encoding.squeeze()
                    celeb_mapping[folder].append(c)
                    ann_index.add_item(c, encoding.squeeze())
            save_json(celeb_mapping)
            with open(f"celeb_encodings/{folder}_encoding.pkl", "wb") as f:
                pickle.dump(celeb_encoding, f)
            celeb_encoding.clear()
    save_json(celeb_mapping)
    print("Encoding and mapping files saved successfully")
    print("Building ann index...")
    ann_index.build(1000)
    x = ann_index.save("celeb_index.ann")
    if x:
        print("Ann index saved successfully")
    else:
        print("Error in saving ann index")

Starting face detection and encoding creation
  4%|▍         | 1/23 [00:00<00:07,  3.03it/s]Shape of encoding tensor: torch.Size([1, 2048])
  9%|▊         | 2/23 [00:00<00:05,  3.65it/s]Shape of encoding tensor: torch.Size([1, 2048])
[... "Shape of encoding tensor: torch.Size([1, 2048])" is printed for every image across the four folders; three images fail with the dlib error below ...]
__call__(): incompatible function arguments. The following argument types are supported:
    1. (self: _dlib_pybind11.fhog_object_detector, image: numpy.ndarray, upsample_num_times: int = 0) -> _dlib_pybind11.rectangles

Invoked with: <_dlib_pybind11.fhog_object_detector object at 0x7e241fae0870>, None, 1
100%|██████████| 19/19 [00:17<00:00,  1.10it/s]Shape of encoding tensor: torch.Size([1, 2048])
Encoding and mapping files saved successfully
Building ann index...
Ann index saved successfully

Now the next step to make predictions is to set these files:

celeb_mapping_filepath = "celeb_detector/models/celeb_mapping_117_18012022.json"
celeb_index_annpath = "celeb_detector/models/celeb_index_117_18012022.ann"
vggface_modelpath = "celeb_detector/models/vggface_resnet50.pt"
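A hedged guess at the corresponding values for the newly trained model, using the filenames produced by the create_celeb_model code above (the exact name written by save_json is an assumption; the encoder model itself is unchanged, since it is not retrained):

celeb_mapping_filepath = "celeb_mapping.json"    # assumed output name of save_json(celeb_mapping)
celeb_index_annpath = "celeb_index.ann"          # saved by ann_index.save("celeb_index.ann")
vggface_modelpath = "celeb_detector/models/vggface_resnet50.pt"  # same encoder as before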


adbmdp commented on May 26, 2024

And I think it works!
[screenshot]

Thank you so much for your help @shobhit9618.
I have more tests to do and things to understand, but it's a huge step for me.
I made 2 PRs.

