Fire and Smoke Detection with 3D Position Estimation Using YOLO

Authors

  • Yuwono Dinata, Universitas Ciputra Surabaya
  • Wan, Universitas Ciputra Surabaya
  • Richard Sutanto, Universitas Ciputra Surabaya

DOI:

https://doi.org/10.33022/ijcs.v13i6.4541

Keywords:

computer vision, fire and smoke, fire detection, machine learning, 3D localization

Abstract

Fire and smoke detection systems are essential for safety, but traditional methods often suffer from false alarms and poor localization accuracy. This study integrates advanced object detection models, a confidence-based voting mechanism, and 3D localization to address these challenges. Using three cameras, the system detects fire or smoke and estimates its 3D position (x, y, z) from bounding-box-based depth estimation and the known camera placement. A voting mechanism improves reliability by requiring validation from at least two cameras, each with a confidence score of at least 0.5. YOLOv5s achieved 92% accuracy, 96% precision, 95% recall, a mAP50 of 98%, and 87.02 FPS, making it suitable for real-time use. YOLOFM-NADH+C3 offered comparable accuracy (92%) and better localization precision, with a 0.22 cm error versus 1.53 cm for YOLOv5s, albeit at a lower frame rate (54.82 FPS). Experiments confirm the system’s ability to reduce false positives and localize fire and smoke accurately under challenging conditions.
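
The paper itself does not publish code; the sketch below is only an illustration of the voting rule described in the abstract (at least two cameras above a 0.5 confidence threshold). The names Detection, vote_fire_event, and the averaging of per-camera 3D estimates are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical container for one camera's YOLO detection of fire/smoke.
@dataclass
class Detection:
    camera_id: int
    confidence: float                          # YOLO box confidence score
    position_3d: Tuple[float, float, float]    # estimated (x, y, z), e.g. in cm

CONF_THRESHOLD = 0.5   # per-camera confidence threshold stated in the abstract
MIN_CAMERAS = 2        # event must be confirmed by at least two cameras

def vote_fire_event(detections: List[Detection]) -> Optional[Tuple[float, float, float]]:
    """Confidence-based voting: accept a fire/smoke event only if at least
    two distinct cameras report it above the confidence threshold.
    Returns an averaged 3D position estimate, or None if rejected."""
    valid = [d for d in detections if d.confidence >= CONF_THRESHOLD]
    if len({d.camera_id for d in valid}) < MIN_CAMERAS:
        return None  # treated as a likely false positive
    xs, ys, zs = zip(*(d.position_3d for d in valid))
    n = len(valid)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)
```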


Published

30-12-2024