
hysts / anime-face-detector


Anime Face Detector using mmdet and mmpose

License: MIT License

Python 69.64% Jupyter Notebook 30.36%
anime computer-vision face-detection face-landmark-detection pytorch

anime-face-detector's Introduction

Hi there 👋


anime-face-detector's People

Contributors

hysts, koke2c95


anime-face-detector's Issues

Dependency Problems

numpy is missing from requirements.txt, and installing xtcocotools from requirements.txt fails because xtcocotools apparently also does not declare its build dependency on numpy. Adding numpy to the top of requirements.txt would fix the issue.

Another compatibility issue:
AssertionError: MMCV==1.7.1 is used but incompatible. Please install mmcv>=2.0.0rc4, <2.1.0.
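Both problems appear to come down to install order and version ranges. A sketch of a requirements.txt following the reporter's suggestion (the pins are illustrative; the ranges come from the error messages and the OpenMMLab 2.0 table later on this page, not from the repository itself):

    numpy                       # listed first so it is available when xtcocotools builds
    xtcocotools
    mmcv-full>=1.3.8,<=1.7.0    # range required by mmpose 0.x (see the mmcv 1.7.1 issue below)
    mmdet<3.0                   # mmdet 3.x switches to mmcv 2.x (see the OpenMMLab 2.0 table)
    mmpose<1.0                  # this package imports the mmpose 0.x API

Note that ordering only helps when pip installs the requirements sequentially; a build-isolated resolver may still need numpy installed up front.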

Re-thinking anime (illustration/drawing/manga) character face detection

Awesome work!

The face clustering in particular is very neat.

This work reminds me of a few questions:

How can illustrations be aligned, and what can be done with these 2D landmarks?

Scaling, rotating, and cropping images: see the FFHQ alignment code and a webtoon result.

Artstation-Artistic-face-HQ, which counts as illustration, uses the FFHQ alignment,

and there is a newer FFHQ alignment: https://arxiv.org/abs/2109.09378

But anime illustration is not the same as real FFHQ photos: perspective (pose) variation destroys the assumption of a stable face centre, and exaggeration of local parts destroys the global structure.
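As a concrete reference for the alignment question above, here is a minimal sketch of an FFHQ-style rotate-and-crop step driven by the detected 2D landmarks; the eye-index parameters are placeholders, since the detector's actual landmark layout has to be looked up:

    import cv2
    import numpy as np

    def align_face(image, landmarks, left_eye_idx, right_eye_idx, out_size=256):
        """FFHQ-style alignment sketch: rotate so the eyes are level, then crop.

        landmarks: (N, 2) array of (x, y) points from the landmark detector.
        left_eye_idx / right_eye_idx: index lists for the eye points; these
        depend on the detector's landmark layout and are left as parameters.
        """
        pts = np.asarray(landmarks, dtype=np.float32)
        left_eye = pts[left_eye_idx].mean(axis=0)
        right_eye = pts[right_eye_idx].mean(axis=0)
        centre = (left_eye + right_eye) / 2
        dx, dy = right_eye - left_eye
        angle = float(np.degrees(np.arctan2(dy, dx)))    # roll angle of the eye line
        scale = (out_size * 0.4) / max(float(np.hypot(dx, dy)), 1e-6)

        # Rotate/scale about the eye centre, then translate it to the crop centre.
        M = cv2.getRotationMatrix2D((float(centre[0]), float(centre[1])), angle, scale)
        M[0, 2] += out_size / 2 - centre[0]
        M[1, 2] += out_size / 2 - centre[1]
        return cv2.warpAffine(image, M, (out_size, out_size), flags=cv2.INTER_LINEAR)

As noted above, strong pose variation and exaggerated local parts will still break any fixed recipe like this on illustrations.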

[DO.1] Directly build a k-means dictionary (run it over a dataset) and align by proximity.

See the analysis mentioned above.

[DO.2] Because there are not many usable features, add continuous 2D spatial features (predictions), more points, and possibly beyond.

This requires hacking the model (something that might be proposed).

[DO.3] Use the landmarks directly as a filter to assist edge extraction (preserving as many features as possible),

to guide VAE or SGF generation, or cross-image anime synthesis.

If the purpose is not to train a generative model, the likely use is to extend the dataset.
If the purpose is to train a generative model, eye+chin centre alignment will greatly affect the generated results;
the visual guide lines do not keep real image features, they are just polylines.

[DO.4] Or use more key points in clustering: detection-box points (easy), and beyond that, the whole image.

Thanks for reading.

colab notebook encounters a problem while installing dependencies

Hi, the colab notebook looks broken. I used it about 2 weeks ago without any problem. Basically, in the dependency-installation phase, when executing "mim install mmcv-full", colab asks whether I want to replace the pre-installed newer version with an older one. I had to choose the older version to make the detector work.

I retried the colab notebook yesterday. This time, if I chose to replace the pre-installed v1.5.0 with v1.4.2, it got stuck at "building wheel for mmcv-full" for 20 minutes and failed. If I chose not to replace the pre-installed version and skipped mmcv-full, the dependency-installation phase completed without error, but when I ran the detector I got the error "KeyError: 'center'".

Please help.

KeyError                                  Traceback (most recent call last)
<ipython-input-8-2cb6d21c10b9> in <module>()
     12 image = cv2.imread(input)
     13 
---> 14 preds = detector(image)

6 frames
/content/anime-face-detector/anime_face_detector/detector.py in __call__(self, image_or_path, boxes)
    145                 boxes = [np.array([0, 0, w - 1, h - 1, 1])]
    146         box_list = [{'bbox': box} for box in boxes]
--> 147         return self._detect_landmarks(image, box_list)

/content/anime-face-detector/anime_face_detector/detector.py in _detect_landmarks(self, image, boxes)
    101             format='xyxy',
    102             dataset_info=self.dataset_info,
--> 103             return_heatmap=False)
    104         return preds
    105 

/usr/local/lib/python3.7/dist-packages/mmcv/utils/misc.py in new_func(*args, **kwargs)
    338 
    339             # apply converted arguments to the decorated method
--> 340             output = old_func(*args, **kwargs)
    341             return output
    342 

/usr/local/lib/python3.7/dist-packages/mmpose/apis/inference.py in inference_top_down_pose_model(model, imgs_or_paths, person_results, bbox_thr, format, dataset, dataset_info, return_heatmap, outputs)
    385             dataset_info=dataset_info,
    386             return_heatmap=return_heatmap,
--> 387             use_multi_frames=use_multi_frames)
    388 
    389         if return_heatmap:

/usr/local/lib/python3.7/dist-packages/mmpose/apis/inference.py in _inference_single_pose_model(model, imgs_or_paths, bboxes, dataset, dataset_info, return_heatmap, use_multi_frames)
    245                 data['image_file'] = imgs_or_paths
    246 
--> 247         data = test_pipeline(data)
    248         batch_data.append(data)
    249 

/usr/local/lib/python3.7/dist-packages/mmpose/datasets/pipelines/shared_transform.py in __call__(self, data)
    105         """
    106         for t in self.transforms:
--> 107             data = t(data)
    108             if data is None:
    109                 return None

/usr/local/lib/python3.7/dist-packages/mmpose/datasets/pipelines/top_down_transform.py in __call__(self, results)
    287         joints_3d = results['joints_3d']
    288         joints_3d_visible = results['joints_3d_visible']
--> 289         c = results['center']
    290         s = results['scale']
    291         r = results['rotation']

KeyError: 'center'

ONNX support?

Is it possible to export the models to ONNX format? Are there any suggestions if exporting is required?
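The detector wraps separate mmdet and mmpose models, so there is no single export path documented here; OpenMMLab's own route for ONNX export is mmdeploy, since some custom mmcv ops do not trace cleanly. As a rough illustration only, exporting a plain torch module with torch.onnx.export (the model handle, input shape, and opset below are assumptions) looks like:

    import torch

    def export_to_onnx(torch_module, out_path='model.onnx', input_shape=(1, 3, 256, 256)):
        """Generic ONNX export sketch; custom mmcv ops may need mmdeploy instead."""
        torch_module.eval()
        dummy = torch.randn(*input_shape)          # dummy input used to trace the graph
        torch.onnx.export(
            torch_module, dummy, out_path,
            input_names=['input'], output_names=['output'],
            opset_version=11,
            dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
        )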

mmpose library has been upgraded to 1.0.0, causing incompatibility

As the title...

Here is the error on Github Action: https://github.com/narugo1992/games_character_ranking/actions/runs/4632168314/jobs/8195918863#step:9:129

Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/__main__.py", line [100](https://github.com/narugo1992/games_character_ranking/actions/runs/4632168314/jobs/8195918863#step:9:101), in <module>
    cli()
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/click/core.py", line [105](https://github.com/narugo1992/games_character_ranking/actions/runs/4632168314/jobs/8195918863#step:9:106)5, in main
    rv = self.invoke(ctx)
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/__main__.py", line 56, in update
    create_ranking_project(game, output_dir, number, icon_size, mode, min_recent_count, f'{recent_days} days')
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/project.py", line 97, in create_ranking_project
    all_chars, images_dir, count, icon_size, mode, _existing_image_filenames)
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/project.py", line 36, in create_ranking_table
    logo_image = get_logo(ch, min_size=icon_size)
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/games.py", line 88, in get_logo
    from .image import find_heads
  File "/home/runner/work/games_character_ranking/games_character_ranking/ranking/image.py", line 7, in <module>
    from anime_face_detector import create_detector
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/anime_face_detector/__init__.py", line 5, in <module>
    from .detector import LandmarkDetector
  File "/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/anime_face_detector/detector.py", line 12, in <module>
    from mmpose.apis import inference_top_down_pose_model, init_pose_model
ImportError: cannot import name 'inference_top_down_pose_model' from 'mmpose.apis' (/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/site-packages/mmpose/apis/__init__.py)

When I set mmpose<1, it's okay.
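The practical fix reported here is pinning mmpose below 1.0. A small guard along these lines (an illustration, not part of the package) makes the failure mode clearer than the ImportError above:

    # Hypothetical pre-import check: anime_face_detector (at the time of this issue)
    # uses the mmpose 0.x API (inference_top_down_pose_model), removed in mmpose 1.0.
    from packaging.version import Version

    import mmpose

    if Version(mmpose.__version__) >= Version('1.0.0'):
        raise ImportError(
            f'mmpose {mmpose.__version__} found, but the 0.x API is required; '
            'install with: pip install "mmpose<1"')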

How do you implement clustering of face landmarks?

Thank you for sharing this wonderful project. I am curious about how you implemented the clustering of face landmarks. Can you describe it in detail, or share some related papers or projects? Thanks in advance.
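The project does not spell out its clustering method here, so the following is only one common recipe, not necessarily the author's: normalize each face's landmarks for translation and scale, then run k-means on the flattened vectors.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_landmarks(all_landmarks, n_clusters=10):
        """Cluster faces by landmark shape (one possible approach).

        all_landmarks: list of (N, 2) arrays, one per detected face.
        """
        feats = []
        for pts in all_landmarks:
            pts = np.asarray(pts, dtype=np.float32)
            pts = pts - pts.mean(axis=0)              # remove translation
            pts = pts / (np.linalg.norm(pts) + 1e-6)  # remove scale
            feats.append(pts.ravel())
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.stack(feats))

A Procrustes alignment or PCA step before k-means would make this more robust to rotation; without the author's description this is only a guess at the general idea.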

Please take a look (a request from OpenMMLab)

I'm from OpenMMLab. I came across this repo and found it very interesting, and I'd like to invite you to write an article about it. Would that be okay? (Contact me on WeChat: OpenMMLabwx)

Installation problem

When I install mmcv==1.7.0, it says mmcv>=2.0.0rc4, <2.1.0 is required; when I install mmcv==2.0.0, it says mmcv>=1.3.17, <=1.8.0 is required. Which exact version am I supposed to install?

how to implement anime face identification with this detector

Thanks for sharing such nice work! I was wondering whether it is possible to implement anime face identification based on this detector. Do you have any plans for this? Could we expect good identification accuracy using this detector? Many thanks!
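The detector itself only localizes faces and landmarks, so identification would need a separate embedding or classification model on top. A sketch of the first step, cropping detected faces with the create_detector API (the confidence threshold and the downstream embed() call are placeholders, not part of this package):

    import cv2

    from anime_face_detector import create_detector

    detector = create_detector('yolov3')    # 'faster-rcnn' is the other detector option
    image = cv2.imread('illustration.jpg')  # BGR image

    crops = []
    for pred in detector(image):
        x0, y0, x1, y1, score = pred['bbox']
        if score < 0.5:                     # hypothetical confidence threshold
            continue
        crops.append(image[int(y0):int(y1), int(x0):int(x1)])

    # embeddings = [embed(crop) for crop in crops]  # embed() = user-supplied recognition model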

Question About Training Dataset

Thanks for your work! It’s very interesting!!
May I ask you some questions?
Did you manually annotate landmarks for the images generated by the TADNE model? And how many images does your training dataset include?

Installation problem

Following the installation instructions you provided, the installation does not succeed.

mmcv is updated to 1.7.1

Hi,

mmcv is updated to 1.7.1 as of 2023.

Here are the errors:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-10-53e3c28fb569> in <module>
      5 import numpy as np
      6 
----> 7 import anime_face_detector

2 frames

/usr/local/lib/python3.8/dist-packages/mmpose/__init__.py in <module>
     22 
     23 
---> 24 assert (mmcv_version >= digit_version(mmcv_minimum_version)
     25         and mmcv_version <= digit_version(mmcv_maximum_version)), \
     26     f'MMCV=={mmcv.__version__} is used but incompatible. ' \
AssertionError: MMCV==1.7.1 is used but incompatible. Please install mmcv>=1.3.8, <=1.7.0.


Installation problem

Getting the right versions installed has always been a problem with this project: installation following the provided documentation frequently fails with incompatibilities. The installation documentation should be kept up to date.

Question about the annotation tool for landmarks

Thanks for your great work!
May I ask which tool you used to annotate the landmarks?
I find that the detector does not seem to perform very well on manga images, so I want to manually annotate some manga images.
Also, when you trained the landmark detector, did you train the model from scratch or fine-tune a pretrained mmpose model?

Citation Issue

Hi, @hysts

First of all, thank you so much for the great work!

I'm a graduate student and have used your pretrained model to generate landmark points as ground truth.
I'm currently finishing up my thesis and want to cite your GitHub repo.

I don't know if I overlooked something, but I couldn't find citation information in the README.
Is there any way to cite this repo?

Thank you.
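In the absence of an official citation entry, a generic BibTeX template for citing a GitHub repository looks like the following; the fields are a sketch (the year should be verified against the repository's release history), and the author may prefer different wording or a CITATION file:

    @misc{hysts2021animefacedetector,
      author       = {hysts},
      title        = {Anime Face Detector},
      howpublished = {\url{https://github.com/hysts/anime-face-detector}},
      year         = {2021}
    }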

Welcome update to OpenMMLab 2.0


I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repos branches:

Project           OpenMMLab 1.0 branch    OpenMMLab 2.0 branch
MMEngine          -                       0.x
MMCV              1.x                     2.x
MMDetection       0.x, 1.x, 2.x           3.x
MMAction2         0.x                     1.x
MMClassification  0.x                     1.x
MMSegmentation    0.x                     1.x
MMDetection3D     0.x                     1.x
MMEditing         0.x                     1.x
MMPose            0.x                     1.x
MMDeploy          0.x                     1.x
MMTracking        0.x                     1.x
MMOCR             0.x                     1.x
MMRazor           0.x                     1.x
MMSelfSup         0.x                     1.x
MMRotate          1.x                     1.x
MMYOLO            -                       0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

There is an error in demo.ipynb

First of all, thank you for sharing your program.

Today I tried to run the program in Google Colab and got the following error in the import anime_face_detector section.
Do you know of any solutions?

Thank you.

ImportError Traceback (most recent call last)
in <module>()
5 import numpy as np
6
----> 7 import anime_face_detector

7 frames
/usr/lib/python3.7/importlib/__init__.py in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129

ImportError: /usr/local/lib/python3.7/dist-packages/mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE
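This undefined-symbol error usually indicates that the prebuilt mmcv-full wheel was compiled against a different torch/CUDA build than the one installed in the Colab runtime (an inference from the symbol name, not a confirmed diagnosis). Printing both versions helps pick a matching wheel:

    # The compiled ops in mmcv._ext must match the installed torch/CUDA build,
    # otherwise importing mmcv fails with undefined C++ symbols like the one above.
    import mmcv
    import torch

    print('torch:', torch.__version__, '| CUDA:', torch.version.cuda)
    print('mmcv :', mmcv.__version__)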
