Welcome to the car plate detection project using YOLOv8! This repository provides a step-by-step guide to preparing data, training an object detection model with YOLOv8, and running inference with the trained model.
The basic usage is based on the YOLOv8 tutorial, customized for the current dataset. It covers everything from installation to training a YOLOv8 object detection model on a custom dataset, and then exporting the model for inference.
You can find the dataset used in this project here.
The Kaggle notebook for this project is also available here.
This package is tested on Ubuntu 20.04 with Python 3.9.12. First, create your virtual environment:
python -m venv venv
source venv/bin/activate
Next, install all dependencies:
pip install -r requirements.txt
To use YOLOv8 for your object detection task, structure your data as follows:
- In the root directory of your dataset, create two folders named images and labels. For this tutorial, we consider data/ in the root of our project as the dataset root.
- Images can be in jpg or png format.
- Create a config file in yaml format specifying the paths to the root and images directories.
- Separating train, validation, and test partitions is optional. If you do this, create subdirectories within both images and labels folders. Specify these paths in the config file.
- Labels must be in txt format. For each bounding box in an image, add a row to the corresponding label file with the following space-separated structure (no commas): class_label bbx_x_center bbx_y_center bbx_width bbx_height. All coordinates are normalized to the image width and height, so each value lies between 0 and 1.
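As an illustration of the label format (this helper is not part of the repository; the function name and box values are hypothetical), the snippet below converts a pixel-space bounding box given as (xmin, ymin, xmax, ymax) into a YOLO label row:

```python
def to_yolo_row(class_label, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-space box to a normalized YOLO label row.

    Hypothetical helper for illustration only.
    """
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_label} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a 100x50 plate box at (200, 100) in a 400x200 image
print(to_yolo_row(0, 200, 100, 300, 150, 400, 200))
# -> "0 0.625000 0.625000 0.250000 0.250000"
```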
The data directory should be structured like this:
data
├── images
│   ├── test
│   │   ├── Cars27.png
│   │   └── ...
│   ├── train
│   │   ├── Cars0.png
│   │   └── ...
│   └── validation
│       ├── Cars10.png
│       └── ...
└── labels
    ├── test
    │   ├── Cars27.txt
    │   └── ...
    ├── train
    │   ├── Cars0.txt
    │   └── ...
    └── validation
        ├── Cars10.txt
        └── ...
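A config file matching this layout might look like the following sketch. The file name (data.yaml), the class name, and the exact paths are assumptions for illustration, not taken from the repository:

```yaml
# Hypothetical data.yaml matching the layout above
path: data            # dataset root
train: images/train
val: images/validation
test: images/test
names:
  0: license_plate    # assumed class name
```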
To train your own object detection model, you can run:
python main.py
You can customize the following arguments:
- -rpr or --remove_prev_runs: Remove the results of previous runs.
- -p or --prepare: Run data preparation.
- -t or --train: Run training.
- -e or --export: Export a saved model.
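The argument parsing in main.py is not shown here, but a minimal sketch of how these flags could be wired up with argparse (all behavior and defaults assumed) is:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the CLI described above; flag behavior is assumed.
    parser = argparse.ArgumentParser(description="Train a YOLOv8 plate detector")
    parser.add_argument("-rpr", "--remove_prev_runs", action="store_true",
                        help="Remove the results of previous runs")
    parser.add_argument("-p", "--prepare", action="store_true",
                        help="Run data preparation")
    parser.add_argument("-t", "--train", action="store_true",
                        help="Run training")
    parser.add_argument("-e", "--export", action="store_true",
                        help="Export a saved model")
    return parser

# Example invocation: prepare the data and train, but skip export
args = build_parser().parse_args(["-p", "-t"])
print(args.prepare, args.train, args.export)  # -> True True False
```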
To get predictions from a YOLO saved model, run:
python inference.py --model_path 'path/to/model' --image_path 'path/to/test_image' --output_name 'output.png'
The default path for the saved model is runs/detect/train/weights/best.pt. The test image can be in jpg or png format. The model's predicted bounding boxes will be saved in the runs directory as a png file.
Note: You might need to change datasets_dir, weights_dir, or runs_dir in .config/Ultralytics/settings.yaml based on the root of your project.
You can also get predictions from a saved YOLO model on test videos by running:
python inference.py --model_path 'path/to/model' --video_path 'path/to/test_video' --output_name 'output.avi'