chenglongma / skintoneclassifier

An easy-to-use library for skin tone classification

Home Page: https://chenglongma.com/SkinToneClassifier/

License: GNU General Public License v3.0

Python 100.00%
face-detection image-processing image-recognition image-segmentation skin-detection

skintoneclassifier's Introduction

stone logo model illustration

PyPI - Python Version PyPI PyPI - Downloads GitHub release (latest by date including pre-releases) GitHub License youtube Open In Colab Discord GitHub Repo stars

An easy-to-use library for skin tone classification.

It can be used to detect the face or skin area in the specified images. The detected skin tones are then classified into the specified color categories. Finally, the library generates results reporting the detected faces (if any), their dominant skin tones, and the matching color categories.

Check out the Changelog for the latest updates.

If you find this project helpful, please consider giving it a star ⭐. It would be a great encouragement for me!



Video tutorials

youtube

Please watch the following video tutorials if you have no programming background or are unfamiliar with Python and this library 💖

Playlist

playlist


1. How to install Python and stone

YouTube Video Views

installation

2. Use stone in GUI mode

YouTube Video Views

use gui mode

3. Use stone in CLI mode

YouTube Video Views

use cli mode

4. Use stone in Python scripts

Please refer to this notebook Open In Colab for more information.

More videos are coming soon...

Installation

Tip

Since v1.2.3, we have made the GUI mode optional.

Install from pip

Install the CLI mode only

pip install skin-tone-classifier --upgrade

It is useful for users who want to use this library in non-GUI environments, e.g., servers or Open In Colab.

Install the CLI mode and the GUI mode

pip install skin-tone-classifier[all] --upgrade

It is useful for users who are not familiar with the command line interface and want to use the GUI mode.

Install from source

git clone git@github.com:ChenglongMa/SkinToneClassifier.git
cd SkinToneClassifier
pip install -e . --verbose

Tip

If you encounter the following problem:

ImportError: DLL load failed while importing _core: The specified module could not be found

Please download and install the Visual C++ Redistributable from here.

The error should then be resolved.

HOW TO USE

Tip

You can combine the following documentation, the video tutorials above, and the running examples Open In Colab to get a more intuitive understanding of how to use this library.

Quick Start

Use stone in a GUI

✨ Since v1.2.0, we have provided a GUI version of stone for users who are not familiar with the command line interface.

stone GUI

Instead of typing commands in the terminal, you can use the config GUI of stone to process the images.

Steps:

  1. Open a terminal that can run stone (e.g., PowerShell on Windows or Terminal on macOS).
  2. Type stone (without any parameters) or stone --gui and press Enter to open the GUI.
  3. Specify the parameters in each tab.
  4. Click the Start button to start processing the images.

Hopefully, this can make it easier for you to use stone 🍻!

Tip

  1. It is recommended to install v1.2.3+, which supports Python 3.9+.

    If you have installed v1.2.0, please upgrade to v1.2.3+ by running

    pip install skin-tone-classifier[all] --upgrade

  2. If you encounter the following problem:

    This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display of your Mac.

    Please launch the GUI by running pythonw -m stone in the terminal. References:

Use stone in command line interface (CLI)

To detect the skin tone in a portrait, e.g.,

Demo picture

Just run:

stone -i /path/to/demo.png --debug

Then you can find the processed image in the ./debug/color/faces_1 folder, e.g.,

processed demo picture

In this image, from left to right, you can find the following information:

  1. the detected face, enclosed by a rectangle and labeled (Face 1).
  2. the dominant colors.
    1. The number of colors depends on the settings (default is 2), and their sizes depend on their proportions.
  3. the specified color palette, with the matched tone label enclosed by a rectangle.
  4. a summary text at the bottom.

Furthermore, there will be a report file named result.csv which contains more detailed information, e.g.,

file     | image type | face id | dominant 1 | percent 1 | dominant 2 | percent 2 | skin tone | tone label | accuracy(0-100)
demo.png | color      | 1       | #C99676    | 0.67      | #805341    | 0.33      | #9D7A54   | CF         | 86.27

Interpretation of the table (a code sketch for reading this file programmatically follows the list):

  1. file: the filename of the processed image.
    • NB: The filename pattern of the report image is <file>-<face id>.<extension>.
  2. image type: the type of the processed image, i.e., color or bw (black/white).
  3. face id: the ID of the detected face, which matches the report image. NA means no face was detected.
  4. dominant n: the n-th dominant color of the detected face.
  5. percent n: the percentage of the n-th dominant color (0 to 1.0).
  6. skin tone: the skin tone category of the detected face.
  7. tone label: the label of the skin tone category of the detected face.
  8. accuracy: the accuracy of the skin tone classification for the detected face (0 to 100); higher is better.
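
As a minimal sketch (assuming the column names shown above and a result.csv in the current working directory), the report can be read programmatically with Python's standard csv module:

import csv

# Read the report generated by `stone`; the column names follow the table above.
with open("result.csv", newline="") as f:
    for row in csv.DictReader(f):
        # "NA" in "face id" means no face was detected in that image.
        print(row["file"], row["face id"], row["skin tone"], row["tone label"], row["accuracy(0-100)"])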

Detailed Usage

To see the usage and parameters, run:

stone -h (or --help)

Output in console:

usage: stone [-h] [-i IMAGE FILENAME [IMAGE FILENAME ...]] [-r] [-t IMAGE TYPE] [-p PALETTE [PALETTE ...]]
             [-l LABELS [LABELS ...]] [-d] [-bw] [-o DIRECTORY] [--n_workers WORKERS] [--n_colors COLORS]
             [--new_width WIDTH] [--scale SCALE] [--min_nbrs NEIGHBORS] [--min_size WIDTH [HEIGHT ...]]
             [--threshold THRESHOLD] [-v]

Skin Tone Classifier

options:
  -h, --help            show this help message and exit
  -i IMAGE FILENAME [IMAGE FILENAME ...], --images IMAGE FILENAME [IMAGE FILENAME ...]
                        Image filename(s) or URLs to process;
                        Supports multiple values separated by space, e.g., "a.jpg b.png";
                        Supports directory or file name(s), e.g., "./path/to/images/ a.jpg";
                        Supports URL(s), e.g., "https://example.com/images/pic.jpg" since v1.1.0+.
                        The app will search all images in current directory in default.
  -r, --recursive       Whether to search images recursively in the specified directory.
  -t IMAGE TYPE, --image_type IMAGE TYPE
                        Specify whether the input image(s) is/are colored or black/white.
                        Valid choices are: "auto", "color" or "bw",
                        Defaults to "auto", which will be detected automatically.
  -p PALETTE [PALETTE ...], --palette PALETTE [PALETTE ...]
                        Skin tone palette;
                        Supports RGB hex value leading by "#" or RGB values separated by comma(,),
                        E.g., "-p #373028 #422811" or "-p 255,255,255 100,100,100"
  -l LABELS [LABELS ...], --labels LABELS [LABELS ...]
                        Skin tone labels; default values are the uppercase alphabet list leading by the image type ('C' for 'color'; 'B' for 'Black&White'), e.g., ['CA', 'CB', ..., 'CZ'] or ['BA', 'BB', ..., 'BZ'].
  -d, --debug           Whether to generate report images, used for debugging and verification.The report images will be saved in the './debug' directory.
  -bw, --black_white    Whether to convert the input to black/white image(s).
                        If true, the app will use the black/white palette to classify the image.
  -o DIRECTORY, --output DIRECTORY
                        The path of output file, defaults to current directory.
  --n_workers WORKERS   The number of workers to process the images, defaults to the number of CPUs in the system.
  --n_colors COLORS     CONFIG: the number of dominant colors to be extracted, defaults to 2.
  --new_width WIDTH     CONFIG: resize the images with the specified width. Negative value will be ignored, defaults to 250.
  --scale SCALE         CONFIG: how much the image size is reduced at each image scale, defaults to 1.1
  --min_nbrs NEIGHBORS  CONFIG: how many neighbors each candidate rectangle should have to retain it.
                        Higher value results in less detections but with higher quality, defaults to 5.
  --min_size WIDTH [HEIGHT ...]
                        CONFIG: minimum possible face size. Faces smaller than that are ignored, defaults to "90 90".
  --threshold THRESHOLD
                        CONFIG: what percentage of the skin area is required to identify the face, defaults to 0.15.
  -v, --version         Show the version number and exit.

Use Cases

1. Process multiple images

1.1 Multiple filenames

stone -i (or --images) a.jpg b.png https://example.com/images/pic.jpg

1.2 Images in some folder(s)

stone -i ./path/to/images/

NB: Supported image formats: .jpg, .gif, .png, .jpeg, .webp, .tif.

By default (i.e., stone without the -i option), the app will search for images in the current folder.
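
If you prefer to do this from a Python script instead of the CLI, the following is a rough sketch (assuming the stone.process API shown in use case 10 below, with an assumed folder ./path/to/images/):

import stone
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".gif", ".png", ".jpeg", ".webp", ".tif"}

for image_path in sorted(Path("./path/to/images/").iterdir()):
    if image_path.suffix.lower() in SUPPORTED_FORMATS:
        # image_type="auto" is assumed to mirror the CLI default.
        result = stone.process(str(image_path), image_type="auto")
        print(image_path.name, result)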

2. Specify color categories

2.1 Use HEX values

stone -p (or --palette) #373028 #422811 #513B2E

NB: Values start with '#' and are separated by space.

2.2 Use RGB tuple values

stone -p 55,48,40 66,40,17 251,242,243

NB: Values within one color are separated by commas (','), and multiple colors are still separated by spaces.
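
The two notations are interchangeable. As a small illustrative sketch (plain Python, not part of the library), converting the HEX values to RGB tuples shows the equivalence:

def hex_to_rgb(hex_color):
    """Convert '#RRGGBB' to an (R, G, B) tuple."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#373028"))  # (55, 48, 40)  -> "55,48,40" on the command line
print(hex_to_rgb("#422811"))  # (66, 40, 17)  -> "66,40,17" on the command line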

3. Specify category labels

You can assign labels to the skin tone categories, for example:

"CA": "#373028",
"CB": "#422811",
"CC": "#513B2E",
...

To achieve this, you can use the -l (or --labels) option:

3.1 Specify the labels directly using spaces as delimiters, e.g.,

stone -l A B C D E F G H

3.2 Specify the range of labels based on this pattern: <start><sep><end><sep><step>.

Specifically,

  • <start>: the start label, can be a letter (e.g., A) or a number (e.g., 1);
  • <end>: the end label, can be a letter (e.g., H) or a number (e.g., 8);
  • <step>: the step to generate the label sequence, can be a number (e.g., 2 or -1), defaults to 1.
  • <sep>: the separator between <start> and <end>, can be one of these symbols: -, ,, ~, :, ;, _.

Examples:

stone -l A-H-1

which is equivalent to stone -l A-H and stone -l A B C D E F G H.

stone -l A-H-2

which is equivalent to stone -l A C E G.

stone -l 1-8

which is equivalent to stone -l 1 2 3 4 5 6 7 8.

stone -l 1-8-3

which is equivalent to stone -l 1 4 7.
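
To make the pattern concrete, here is an illustrative Python sketch of how such a range could be expanded (a hypothetical helper, not the library's own implementation; negative steps are not handled here):

import re
import string

def expand_labels(pattern):
    """Expand '<start><sep><end><sep><step>', e.g. 'A-H-2' -> ['A', 'C', 'E', 'G']."""
    parts = re.split(r"[-,~:;_]", pattern)
    start, end = parts[0], parts[1]
    step = int(parts[2]) if len(parts) > 2 else 1
    if start.isdigit():
        return [str(i) for i in range(int(start), int(end) + 1, step)]
    letters = string.ascii_uppercase
    return list(letters[letters.index(start.upper()):letters.index(end.upper()) + 1:step])

print(expand_labels("A-H-2"))  # ['A', 'C', 'E', 'G']
print(expand_labels("1-8-3"))  # ['1', '4', '7']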

Important

Please make sure the number of labels is equal to the number of colors in the palette.

4. Specify output folder

By default, the app puts the final report (result.csv) in the current folder.

To change the output folder:

stone -o (or --output) ./path/to/output/

The output folder will be created if it does not exist.

In result.csv, each row shows the color information for one detected face. If more than one face is detected in an image, there will be multiple rows for that image.

5. Store report images for debugging

stone -d (or --debug)

This option will store the report image (like the demo portrait above) in ./path/to/output/debug/<image type>/faces_<n> folder, where <image type> indicates if the image is color or bw (black/white); <n> is the number of faces detected in the image.

By default, to save storage space, the app does not store report images.

As in the result.csv file, there will be more than one report image if two or more faces are detected.
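
As a small sketch (assuming the folder layout described above, with ./path/to/output/ as the output folder), the generated report images can be collected like this:

from pathlib import Path

# Report images are stored under <output>/debug/<image type>/faces_<n>/.
debug_dir = Path("./path/to/output/debug")
for report_image in sorted(debug_dir.glob("*/faces_*/*")):
    print(report_image)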

6. Specify the types of the input image(s)

6.1 The inputs are color images

stone -t (or --image_type) color

6.2 The inputs are black/white images

stone -t (or --image_type) bw

6.3 By default, the app will detect the image type automatically, i.e.,

stone -t (or --image_type) auto

For color images, we use the color palette to detect faces:

#373028 #422811 #513B2E #6F503C #81654F #9D7A54 #BEA07E #E5C8A6 #E7C1B8 #F3DAD6 #FBF2F3

(Please refer to our paper above for more details.)

For bw images, we use the bw palette to detect faces:

#FFFFFF #F0F0F0 #E0E0E0 #D0D0D0 #C0C0C0 #B0B0B0 #A0A0A0 #909090 #808080 #707070 #606060 #505050 #404040 #303030 #202020 #101010 #000000

(Please refer to Leigh, A., & Susilo, T. (2009). Is voting skin-deep? Estimating the effect of candidate ballot photographs on election outcomes. Journal of Economic Psychology, 30(1), 61-70. for more details.)

7. Convert the color images to black/white images

and then do the classification using the bw palette

stone -bw (or --black_white)

For example:

Demo picture

1. Input

Black/white Demo picture

2. Convert to black/white image

Report image

3. The final report image

NB: We do not do the opposite (i.e., convert black/white images to color images), because current AI models cannot accurately "guess" skin color from a black/white image, and doing so could further bias the analysis results.

8. Tune parameters of face detection

The remaining CONFIG parameters are used for face detection. Please refer to https://stackoverflow.com/a/20805153/8860079 for detailed information.
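
These options appear to map onto the parameters of OpenCV's cascade-based face detector discussed in that answer. The following is an illustrative sketch of the underlying OpenCV call (an assumed correspondence, not the library's internal code):

import cv2

# Illustrative mapping (assumed): --scale -> scaleFactor, --min_nbrs -> minNeighbors, --min_size -> minSize.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("demo.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,   # --scale
    minNeighbors=5,    # --min_nbrs
    minSize=(90, 90),  # --min_size
)
print(f"Detected {len(faces)} face(s)")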

9. Multiprocessing settings

stone --n_workers <Any Positive Integer>

Use --n_workers to specify the number of workers to process images in parallel, defaults to the number of CPUs in your system.

10. Use as a library by importing it into other projects

You can refer to the following code snippet:

import stone
from json import dumps

# process the image
result = stone.process(image_path, image_type, palette, *other_args, return_report_image=True)
# show the report image
report_images = result.pop("report_images")  # obtain and remove the report image from the `result`
face_id = 1
stone.show(report_images[face_id])

# convert the result to json
result_json = dumps(result)

stone.process is the main function for processing an image. It accepts the same parameters as the command-line version.

It returns a dict containing the processing result and the report image(s) (if requested, i.e., return_report_image=True).

You can then use stone.show to display the report image(s) and convert the result to JSON format.

The result_json will be like:

{
  "basename": "demo",
  "extension": ".png",
  "image_type": "color",
  "faces": [
    {
      "face_id": 1,
      "dominant_colors": [
        {
          "color": "#C99676",
          "percent": "0.67"
        },
        {
          "color": "#805341",
          "percent": "0.33"
        }
      ],
      "skin_tone": "#9D7A54",
      "tone_label": "CF",
      "accuracy": 86.27
    }
  ]
}
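
As a minimal follow-up sketch (assuming the result_json structure shown above), the per-face fields can be extracted like this:

from json import loads

data = loads(result_json)  # result_json from the snippet above
for face in data["faces"]:
    print(face["face_id"], face["skin_tone"], face["tone_label"], face["accuracy"])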

Citation

If you are interested in our work, please cite:

@article{https://doi.org/10.1111/ssqu.13242,
    author = {Rej\'{o}n Pi\~{n}a, Ren\'{e} Alejandro and Ma, Chenglong},
    title = {Classification Algorithm for Skin Color (CASCo): A new tool to measure skin color in social science research},
    journal = {Social Science Quarterly},
    volume = {n/a},
    number = {n/a},
    pages = {},
    keywords = {colorism, measurement, photo elicitation, racism, skin color, spectrometers},
    doi = {https://doi.org/10.1111/ssqu.13242},
    url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/ssqu.13242},
    eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/ssqu.13242},
    abstract = {Abstract Objective A growing body of literature reveals that skin color has significant effects on people's income, health, education, and employment. However, the ways in which skin color has been measured in empirical research have been criticized for being inaccurate, if not subjective and biased. Objective Introduce an objective, automatic, accessible and customizable Classification Algorithm for Skin Color (CASCo). Methods We review the methods traditionally used to measure skin color (verbal scales, visual aids or color palettes, photo elicitation, spectrometers and image-based algorithms), noting their shortcomings. We highlight the need for a different tool to measure skin color. Results We present CASCo, a (social researcher-friendly) Python library that uses face detection, skin segmentation and k-means clustering algorithms to determine the skin tone category of portraits. Conclusion After assessing the merits and shortcomings of all the methods available, we argue CASCo is well equipped to overcome most challenges and objections posed against its alternatives. While acknowledging its limitations, we contend that CASCo should complement researchers' toolkit in this area.}
}

Contributing

👋 Welcome to SkinToneClassifier! We're excited to have your contributions. Here's how you can get involved:

  1. 💡 Discuss New Ideas: Have a creative idea or suggestion? Start a discussion in the Discussions tab to share your thoughts and gather feedback from the community.

  2. ❓ Ask Questions: Got questions or need clarification on something in the repository? Feel free to open an Issue labeled as a "question" or participate in Discussions.

  3. 🐛 Issue a Bug: If you've identified a bug or an issue with the code, please open a new Issue with a clear description of the problem, steps to reproduce it, and your environment details.

  4. ✨ Introduce New Features: Want to add a new feature or enhancement to the project? Fork the repository, create a new branch, and submit a Pull Request with your changes. Make sure to follow our contribution guidelines.

  5. 💖 Funding: If you'd like to financially support the project, you can do so by sponsoring the repository on GitHub. Your contributions help us maintain and improve the project.

Thank you for considering contributing to SkinToneClassifier. We value your input and look forward to collaborating with you!

Disclaimer

The images used in this project are from the Flickr-Faces-HQ Dataset (FFHQ), which is licensed under the Creative Commons BY-NC-SA 4.0 license.


skintoneclassifier's Issues

Issue with processing .jpg images in folder using GUI

Using GUI Mode and having a lot of trouble with the processing of images. All images are .jpg and getting the following error when selecting an entire folder with 22 images:

The program is processing your images...
Please wait for the program to finish.

Processing images: 0%| | 0/1 [00:00<?, ?images/s]Kenneth is not found or is not a valid image.

Processing images: 100%|██████████| 1/1 [00:00<00:00, 1.77images/s]
Processing images: 100%|██████████| 1/1 [00:00<00:00, 1.77images/s]

Any ideas or troubleshooting?

Skin Tone Classifier Execution in Google Colab

Issue Description:

I encountered an issue when using the Skin Tone Classifier library in a Google Colab environment. The problem appears to be related to Qt platform plugins, and I received the following error messages:
qt.qpa.plugin: Could not find the Qt platform plugin "offscreen" in "/usr/local/lib/python3.10/dist-packages/cv2/qt/plugins"
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb.

Steps to Reproduce:
Install the Skin Tone Classifier library using pip install skin-tone-classifier --upgrade.
Run the following command in a Jupyter Notebook or Google Colab environment:
stone -i /path/to/image.jpg --debug

Expected Behavior:
I expected the Skin Tone Classifier to process the image and provide results as described in the library's documentation.

Actual Behavior:
Instead, I encountered the error messages mentioned above, which prevented the library from functioning correctly.

System Information:
Operating System: Google Colab (Colab's default environment)
Python Version: 3.10
Skin Tone Classifier Version: 1.0.0

My attempts
I attempted to resolve this issue by following the suggestions in the error message, but it did not resolve the problem. The issue seems to be related to the Qt platform plugins, specifically "offscreen."

Please let me know if there are any specific logs or additional information needed to diagnose and address this problem.
Thank you for your assistance.

Images using the black and white flag without specifying bw

If I run the command, stone -i ~/<path to image> --debug or explicitly give it the color option, stone -i ~/<path to image> color --debug, my image always appears in the bw folder and has the black and white color palette show up in debug mode. The only colored image I was able to get to work was the lena image from here http://www.lenna.org/. Any thoughts on what I'm doing wrong? Aside from that, great project!

AttributeError: module 'stone' has no attribute 'process'

The command line Version works, but:

I followed the instructions for using the library.

pip install skin-tone-classifier --upgrade

import stone

result = stone.process('xxx.jpg', image_type='color')

But then:

AttributeError: module 'stone' has no attribute 'process'

Python 3.10.11

Setting color palette throws TypeError: unhashable type: 'list'

When trying to set a custom palette, e.g. in code:

stone.process(
        tmp_path, image_type='color', n_dominant_colors=5, tone_palette=["#6f503c", "#81654f", "#9d7a54", "#bea07e", "#e5c8a62"], tone_labels=[1, 2, 3, 4, 5])

I get TypeError: unhashable type: 'list' in File "/Python/Python310/lib/site-packages/stone/api.py", line 73, in process skin_tone_palette = normalize_palette(tone_palette)

I think this is the correct way, but I might be wrong, otherwise this is a bug.

best regards

Ernst-Georg

Not able to use stone command

I'm trying to test the library, but following the README steps I was unable to use the stone command. Am I missing something?
When I run the installation again, that cmd returns:

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: skin-tone-classifier in c:\users\appdata\roaming\python\python39\site-packages (0.2.1)
Requirement already satisfied: opencv-python>=4.6.0.66 in c:\users\appdata\roaming\python\python39\site-packages (from skin-tone-classifier) (4.8.0.74)
Requirement already satisfied: numpy>=1.21.5 in c:\users\appdata\roaming\python\python39\site-packages (from skin-tone-classifier) (1.24.3)
Requirement already satisfied: colormath>=3.0.0 in c:\users\appdata\roaming\python\python39\site-packages (from skin-tone-classifier) (3.0.0)
Requirement already satisfied: tqdm>=4.64.0 in c:\users\appdata\roaming\python\python39\site-packages (from skin-tone-classifier) (4.65.0)
Requirement already satisfied: networkx>=2.0 in c:\users\appdata\roaming\python\python39\site-packages (from colormath>=3.0.0->skin-tone-classifier) (3.1)
Requirement already satisfied: colorama in c:\users\\appdata\roaming\python\python39\site-packages (from tqdm>=4.64.0->skin-tone-classifier) (0.4.6)

I'm using python 3.9.13 and pip 23.1.2.
Windows 11.
Also, when I try to use the stone command, it does not exist.

Argument to disable face detection

I had to clone the repository and manually edit the code to disable face detection. I want to run the SkinToneClassifier over images for which the face is already cropped. Running the present code (version 0.1.11) results in either 'NA' (which is what I want) or some small subregions of the face (which I want to avoid right now).

Some examples:
2_2_0_1_1_10_2-1
2_2_0_1_1_12_2-1

Error processing image

If I don't use the "--debug" command, I get a result.csv file with:
file,image type,face id,dominant 1,props 1,dominant 2,props 2,skin tone,PERLA,accuracy(0-100)
1,Error processing image 1: not enough values to unpack (expected 2, got 1)

With "--debug" I get for same image:
file,image type,face id,dominant 1,props 1,dominant 2,props 2,skin tone,PERLA,accuracy(0-100)
1-1,bw,1,#BD9F99,0.64,#696156,0.36,#909090,BH,84.18
