- The main steps involved:
- <> Data logistics
- <> Training the model
- <> Optimizing the model
- <> Integrating with the Raspberry Pi (RPi)
- There are two datasets that you can use:
- Open the notebook file and run the cells sequentially.
- Connect to a T4 GPU under Resources on the right-hand side.
- Under this section, edit the API key fetched from Roboflow:
```python
!pip install roboflow

from roboflow import Roboflow

rf = Roboflow(api_key="apikeyxxx")
project = rf.workspace("mochoye").project("license-plate-detector-ogxxg")
version = project.version(1)
dataset = version.download("yolov8")
```
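With the dataset downloaded, the next step is training. The exact training command is not shown above, so here is a minimal sketch that assembles an Ultralytics `yolo detect train` invocation; the dataset path, checkpoint, and hyperparameters (`epochs`, `imgsz`) are illustrative assumptions, not values from this repo:

```python
# Sketch: assemble a YOLOv8 training command for the Roboflow dataset.
# `dataset_location` stands in for `dataset.location` from the download step;
# the checkpoint, epoch count, and image size are illustrative assumptions.
dataset_location = "/content/License-Plate-Detector-1"  # hypothetical path

train_args = {
    "model": "yolov8n.pt",                    # pretrained checkpoint to fine-tune
    "data": f"{dataset_location}/data.yaml",  # Roboflow exports a data.yaml
    "epochs": 50,
    "imgsz": 640,
}
cmd = "yolo detect train " + " ".join(f"{k}={v}" for k, v in train_args.items())
print(cmd)
```

In the notebook this command would be run with a leading `!`; training produces the `best.pt` weights used in the steps below.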
- You can test your model's performance after training:
```python
!python /content/Licence-Plate-Detection-and-Recognition-using-YOLO-V8-EasyOCR/ultralytics/yolo/v8/detect/predict.py model='/content/Licence-Plate-Detection-and-Recognition-using-YOLO-V8-EasyOCR/best.pt' source='directory'
```
- The `source` argument can be:
- Live camera:
```python
source=0
```
- Image (use whatever images you have to test the model):
```python
source='directory.png'
```
- Video:
```python
source='directory'
```
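The three `source` forms above can be told apart by type and file extension. Here is a small illustrative helper for dispatching between them; the function name and extension sets are assumptions, not part of the repo:

```python
import os

# Illustrative extension sets; YOLOv8 accepts more formats than listed here.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp"}
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def source_kind(source):
    """Return 'camera', 'image', 'video', or 'other' for a predict source."""
    if isinstance(source, int):
        return "camera"        # e.g. source=0 selects the default webcam
    ext = os.path.splitext(str(source))[1].lower()
    if ext in IMAGE_EXTS:
        return "image"
    if ext in VIDEO_EXTS:
        return "video"
    return "other"             # e.g. a directory of images
```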
- After training is done, follow the output directory to see the test results:
```
Speed: 0.2ms pre-process, 2.8ms inference, 0.0ms loss, 3.6ms post-process per image
Saving runs/detect/train/predictions.json...
Results saved to runs/detect/train
```
- <> Now that we have a quantized, trained, and tested model, let's run it on the Raspberry Pi.
- See how to work with Ultralytics on the Raspberry Pi.
- Test the model `best.pt`.
- This can be done with a live camera, an image, or a video.
- Integrate OCR to extract the detected data.
- Control GPIO pins once the data is detected.
- Further studies.
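The OCR and GPIO steps above can be sketched as a post-processing stage: clean the raw EasyOCR string, then decide whether to drive a pin. Everything here (the allowlist, the pin number, and the helper names) is a hypothetical illustration; the actual `RPi.GPIO` calls are left as comments since they only run on the Pi:

```python
import re

ALLOWED_PLATES = {"KA01AB1234", "MH12CD5678"}  # hypothetical allowlist
GATE_PIN = 17                                  # hypothetical BCM pin number

def normalize_plate(raw):
    """Uppercase and strip the spaces/punctuation EasyOCR often inserts."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def gate_should_open(raw_ocr_text):
    """Return True when the recognized plate is on the allowlist."""
    return normalize_plate(raw_ocr_text) in ALLOWED_PLATES

# On the Pi, a positive match could then drive the pin, e.g.:
# import RPi.GPIO as GPIO
# GPIO.setmode(GPIO.BCM)
# GPIO.setup(GATE_PIN, GPIO.OUT)
# if gate_should_open(text):
#     GPIO.output(GATE_PIN, GPIO.HIGH)
```

Normalizing before the allowlist check matters because EasyOCR frequently returns plates with internal spaces or hyphens that would otherwise fail an exact-match comparison.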