Utilizing YOLO for Robot and Goal Detection in Wheeled Soccer Robots

Authors

  • Muhammad Surya, Universitas Dinamika Bangsa
  • Afrizal Toscany, Universitas Dinamika Bangsa
  • Chindra Saputra, Universitas Dinamika Bangsa
  • Yovi Pratama, Universitas Dinamika Bangsa
  • M Irwan Bustami, Universitas Dinamika Bangsa

DOI:

https://doi.org/10.33022/ijcs.v14i1.4575

Keywords:

YOLOv11, Object Detection, Robo Soccer, Omnidirectional Camera, Deep Learning

Abstract

The ability to detect objects in real time is a crucial factor in enhancing a robot's capacity to understand and adapt to dynamic environments. This research aims to develop and implement an object detection system on a wheeled soccer robot using the YOLOv11 algorithm, applied to images captured by omnidirectional and front-facing cameras. The system leverages deep learning for data labeling, model training, and performance evaluation. Testing was conducted by comparing the object detection results from both camera types and analyzing performance metrics such as precision, recall, F1-score, and accuracy. The results show that the YOLOv11 model detects objects effectively in real time, with a detection accuracy of 95.91% for the front camera and 96.7% for the omnidirectional camera. The highest precision and recall were recorded for the robot class: 99.12% precision and 97.40% recall with the front camera, and 96.5% precision and 97.8% recall with the omnidirectional camera. Combining the two cameras widened the robot's field of view and improved detection accuracy in dynamic environments. This research contributes to the implementation of object detection systems in robotics, particularly in the context of robot soccer competitions.
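As a minimal sketch of the kind of pipeline the abstract describes, the snippet below loads YOLOv11 weights via the Ultralytics API, runs detection on a single camera frame, and derives the F1-score from the reported robot-class precision and recall for the front camera. The file names "best.pt" and "frame.jpg" are hypothetical placeholders; the paper's actual training data, weights, and camera interface are not part of this example.

```python
# Minimal YOLOv11 inference sketch using the Ultralytics API.
# "best.pt" and "frame.jpg" are assumed placeholder paths, not artifacts from the paper.
from ultralytics import YOLO

# Load trained detection weights (hypothetical path; the paper trains its own
# model on robot/goal images from the front and omnidirectional cameras).
model = YOLO("best.pt")

# Run detection on a single camera frame and print the detected classes.
results = model("frame.jpg")
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    print(f"{cls_name}: confidence={float(box.conf):.2f}")

# F1-score derived from the precision/recall reported for the robot class
# on the front camera (P = 99.12%, R = 97.40%).
precision, recall = 0.9912, 0.9740
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # ≈ 0.9825
```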

Published

07-02-2025