Fast and Accurate Thai License Plate Detection powered by YOLOv8
Try Demo • Report Issue • Request Feature
HurricaneOD (Hurricane Object Detector) is a high-performance YOLOv8-based object detection model specifically fine-tuned for detecting Thai license plates in vehicle images. This model is optimized for:
| Feature | Description |
|---|---|
| Accuracy | Optimized for Thai license plates |
| Speed | Real-time detection (~50ms/image) |
| Size | Lightweight (6 MB) |
| Platform | CPU & GPU support |
| Easy to Use | Simple API with Ultralytics |
Install the required packages:

```bash
pip install ultralytics huggingface_hub pillow
```
```python
from huggingface_hub import hf_hub_download

# Download model weights
model_path = hf_hub_download(
    repo_id="Rattatammanoon/hurricaneod-thai-plate-detector",
    filename="HurricaneOD_beta.pt"
)
print(f"Model downloaded to: {model_path}")
```
```python
from ultralytics import YOLO
from PIL import Image

# Load model
model = YOLO(model_path)

# Detect license plates in image
results = model.predict(
    "car_image.jpg",
    conf=0.25,     # Confidence threshold
    iou=0.45,      # IoU threshold for NMS
    verbose=False
)

# Process results
for result in results:
    boxes = result.boxes
    for box in boxes:
        # Get bounding box coordinates
        coords = box.xyxy[0].tolist()
        x1, y1, x2, y2 = coords
        confidence = box.conf[0].item()
        class_id = box.cls[0].item()

        print(f"Detected plate: [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}] (conf: {confidence:.2f})")

        # Crop license plate region
        img = Image.open("car_image.jpg")
        plate_crop = img.crop((x1, y1, x2, y2))
        plate_crop.save("detected_plate.jpg")
```
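For a quick visual check of the detections, Ultralytics result objects expose a `plot()` helper that draws the boxes onto the image. A minimal sketch, assuming the `results` from the snippet above and that `opencv-python` is installed:

```python
import cv2

# plot() returns the original image as a BGR numpy array with the
# detected boxes and confidence scores drawn on top
annotated = results[0].plot()

# Save the annotated image for inspection
cv2.imwrite("car_image_annotated.jpg", annotated)
```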
Combine with Hurricane OCR for complete plate reading:
```python
from ultralytics import YOLO
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import PeftModel
from PIL import Image
import torch

# 1. Load plate detector (HurricaneOD)
detector = YOLO(model_path)

# 2. Load OCR model (Hurricane OCR)
ocr_processor = AutoProcessor.from_pretrained("scb10x/typhoon-ocr1.5-2b")
ocr_base = AutoModelForVision2Seq.from_pretrained(
    "scb10x/typhoon-ocr1.5-2b",
    torch_dtype=torch.float16,
    device_map="auto"
)
ocr_model = PeftModel.from_pretrained(ocr_base, "Rattatammanoon/hurricane-ocr-v1")
ocr_model.eval()

# 3. Complete pipeline: Detection → OCR
img = Image.open("car_image.jpg")

# Detect plate
results = detector.predict(img, conf=0.25)

if results and len(results[0].boxes) > 0:
    box = results[0].boxes[0]
    coords = box.xyxy[0].tolist()
    x1, y1, x2, y2 = coords

    # Crop plate
    plate_crop = img.crop((x1, y1, x2, y2))

    # Run OCR (move inputs to the model's device and dtype)
    pixel_values = ocr_processor(images=plate_crop, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(ocr_model.device, dtype=torch.float16)
    with torch.no_grad():
        generated_ids = ocr_model.generate(pixel_values, max_length=512)
    text = ocr_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

    print(f"Detected plate at: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
    print(f"OCR Result:\n{text}")
```
| Parameter | Value |
|---|---|
| Base Model | YOLOv8n (Nano) |
| Training Epochs | 100 |
| Batch Size | 16 |
| Image Size | 640x640 |
| Learning Rate | 0.01 |
| Training Date | 2025-12-18 |
| Framework | Ultralytics YOLOv8 |
| Tip | Description |
|---|---|
| Confidence Threshold | Use 0.25-0.5 for a good precision/recall balance |
| Image Size | Input 640x640 for optimal speed/accuracy |
| Batch Processing | Process multiple images for efficiency |
| GPU Acceleration | Use CUDA for up to 10x faster inference (see the sketch below) |
| Image Quality | Better lighting = better detection |
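The GPU and confidence tips above map directly onto `predict()` arguments. A brief sketch; the `device` value depends on your hardware:

```python
# Run on a specific device; half-precision applies only on CUDA GPUs
results = model.predict(
    "car_image.jpg",
    conf=0.4,     # tighter threshold for fewer false positives
    device=0,     # GPU index, or "cpu" / "mps" depending on hardware
    half=True,    # FP16 inference on supported GPUs
    imgsz=640     # matches the training resolution
)
```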
```python
# Process multiple images efficiently
images = ["car1.jpg", "car2.jpg", "car3.jpg"]
results = model.predict(images, conf=0.25, batch=8)

for i, result in enumerate(results):
    print(f"Image {i+1}: {len(result.boxes)} plates detected")
```
```python
# Process video stream
results = model.predict(
    source="traffic_video.mp4",
    conf=0.25,
    stream=True,  # Stream results for memory efficiency
    save=True     # Save annotated video
)

for result in results:
    # Process each frame
    boxes = result.boxes
    print(f"Frame: {len(boxes)} plates detected")
```
| Device | Inference Time | FPS |
|---|---|---|
| NVIDIA RTX 3060 | ~10ms | ~100 |
| CPU (Intel i7) | ~50ms | ~20 |
| CPU (Apple M1) | ~30ms | ~33 |
Measured on 640x640 images
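Numbers like these vary with hardware and image content, so it is worth timing the model on your own machine. A rough sketch that does a warm-up run before measuring:

```python
import time

model.predict("car_image.jpg", verbose=False)  # warm-up (model load, CUDA init)

n_runs = 20
start = time.perf_counter()
for _ in range(n_runs):
    model.predict("car_image.jpg", verbose=False)
elapsed = (time.perf_counter() - start) / n_runs
print(f"Average inference time: {elapsed * 1000:.1f} ms ({1 / elapsed:.1f} FPS)")
```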
```python
# Adjust confidence for your use case
results = model.predict(
    "image.jpg",
    conf=0.5,    # Higher = fewer false positives
    iou=0.45,    # NMS threshold
    max_det=10   # Maximum detections per image
)
```
```python
# Export to ONNX for deployment
model.export(format="onnx")

# Export to TensorRT for NVIDIA GPUs
model.export(format="engine")

# Export to CoreML for iOS
model.export(format="coreml")

# Export to TFLite for mobile
model.export(format="tflite")
```
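`export()` returns the path of the exported file, so the exported model can be loaded back through the same `YOLO` class. A sketch for the ONNX case (assumes the `onnxruntime` package is installed):

```python
# export() returns the path of the exported file
onnx_path = model.export(format="onnx")

# The exported model runs through the same predict API
onnx_model = YOLO(onnx_path)
results = onnx_model.predict("car_image.jpg", conf=0.25)
print(f"{len(results[0].boxes)} plates detected with the ONNX model")
```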
```python
from ultralytics import YOLO

# Load pretrained model (weights downloaded with hf_hub_download above)
model = YOLO(model_path)

# Fine-tune on your dataset
results = model.train(
    data="your_data.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    device=0  # GPU ID
)
```
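After training, Ultralytics saves the best checkpoint under its run directory (by default something like `runs/detect/train/weights/best.pt`; the exact folder depends on your run). A sketch for loading it back and checking validation metrics:

```python
# Path below is the default Ultralytics output location; adjust to your run
best = YOLO("runs/detect/train/weights/best.pt")

# Evaluate on the validation split defined in your_data.yaml
metrics = best.val(data="your_data.yaml")
print(f"mAP50: {metrics.box.map50:.3f}, mAP50-95: {metrics.box.map:.3f}")
```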
This model is licensed under Apache 2.0. See LICENSE for details.
If you use HurricaneOD in your research or project, please cite:
```bibtex
@misc{hurricaneod-beta-2025,
  author       = {HurricaneOD Team},
  title        = {HurricaneOD - Thai License Plate Detector},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/Rattatammanoon/hurricaneod-thai-plate-detector}}
}
```
This project builds upon Ultralytics YOLOv8. Special thanks to the Ultralytics team for developing it!