Implementation of the Deep Q-Network (DQN) Algorithm for Adaptive Traffic Lights Based on Waiting Time and Vehicle Flow
DOI: https://doi.org/10.33022/ijcs.v13i5.4372

Keywords: Reinforcement Learning, Deep Q-Network, SUMO, Adaptive Traffic Light

Abstract
Traffic congestion often occurs under alternating road-closure schemes, where repairs on one side of the road force vehicles to take turns passing on the open side. Temporary traffic lights with fixed timing are often ineffective in this situation because they do not account for the imbalance in traffic flow between the two directions. To address this issue, this study implements the Deep Q-Network (DQN) algorithm to optimize traffic light durations based on vehicle waiting times and traffic flow. Testing was conducted in the SUMO simulator, focusing on the number of epochs, the exploration rate, and the discount factor, the parameters that affect the performance of the DQN agent. The results show that DQN achieves optimal performance with an exploration rate of 1 and a discount factor of 0.9, after training for 50 epochs and testing for 10 epochs. In this configuration, DQN manages the traffic light more adaptively than the conventional method, which uses fixed green and red durations. Although DQN yields a lower fairness value than the fixed-timing approach, it reduces congestion and improves overall traffic efficiency.
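To illustrate the configuration described in the abstract, the following is a minimal sketch (not the authors' code) of a DQN agent using the reported hyperparameters: an exploration rate starting at 1, a discount factor of 0.9, and 50 training epochs. The state features (per-approach waiting time and vehicle count) and the two actions (keep or switch the green phase) are assumptions for illustration only; the paper's actual state, action, and reward definitions are not given in the abstract.

```python
# Hedged sketch of the DQN setup summarized in the abstract.
# All structural details (state_dim, n_actions, network size) are assumptions.
import random
import torch
import torch.nn as nn

GAMMA = 0.9          # discount factor reported in the abstract
EPSILON_START = 1.0  # initial exploration rate reported in the abstract
TRAIN_EPOCHS = 50    # training epochs reported in the abstract

class QNetwork(nn.Module):
    """Small MLP mapping a traffic state to a Q-value per action."""
    def __init__(self, state_dim: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice: explore with probability epsilon, else act greedily."""
    if random.random() < epsilon:
        return random.randrange(2)  # random phase decision (keep/switch green)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_target(reward: float, next_state: torch.Tensor, done: bool,
              target_net: QNetwork) -> torch.Tensor:
    """One-step Bellman target: r + gamma * max_a' Q_target(s', a')."""
    with torch.no_grad():
        bootstrap = 0.0 if done else target_net(next_state).max().item()
    return torch.tensor(reward + GAMMA * bootstrap)
```

In a SUMO-based setup such as the one described, the reward would plausibly be derived from changes in cumulative waiting time and vehicle throughput measured during the simulation, with epsilon decayed from 1 toward a small value over the 50 training epochs.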
License
Copyright (c) 2024 Ridho Amanda Putra, Yoanda Alim Syahbana, Ananda
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.