
Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks

dc.contributor.author: Ukpong, Udeme C. (en_US)
dc.contributor.author: Idowu-Bismark, Olabode (en_US)
dc.contributor.author: Adetiba, Emmanuel (en_US)
dc.contributor.author: Kala, Jules R. (en_US)
dc.contributor.author: Owolabi, Emmanuel (en_US)
dc.contributor.author: Oshin, Oluwadamilola (en_US)
dc.contributor.author: Abayomi, Abdultaofeek (en_US)
dc.contributor.author: Dare, Oluwatobi E. (en_US)
dc.date.accessioned: 2025-03-08T18:15:38Z
dc.date.available: 2025-03-08T18:15:38Z
dc.date.issued: 2024-12
dc.date.updated: 2025-03-06T12:51:43Z
dc.description.abstract: Businesses, security agencies, institutions, and individuals depend on wireless communication to run their day-to-day activities successfully. The ever-increasing demand for wireless communication services, coupled with the scarcity of available radio frequency spectrum, necessitates innovative approaches to spectrum management. Cognitive Radio (CR) technology has emerged as a pivotal solution, enabling dynamic spectrum sharing among secondary users while respecting the rights of primary users. However, the basic CR setup is insufficient to manage spectrum congestion because it cannot predict future spectrum holes, which leads to interference. With predictive intelligence and Dynamic Spectrum Access (DSA), a CR can anticipate when and where other users will occupy the radio frequency spectrum, allowing it to overcome this limitation. Reinforcement Learning (RL) in CRs helps predict spectral changes and identify optimal transmission frequencies. This work presents the development of Deep RL (DRL) models for enhanced DSA in TV Whitespace (TVWS) cognitive radio networks using Deep Q-Network (DQN) and Quantile-Regression DQN (QR-DQN) algorithms. The implementation was done in the Radio Frequency Reinforcement Learning (RFRL) Gym, a training environment for the RF spectrum designed to provide comprehensive functionality. Evaluations show that the DQN model achieves a 96.34% interference avoidance rate compared to 95.97% for QR-DQN, with average latency estimated at 1 millisecond and 3.33 milliseconds per packet, respectively. DRL therefore proves to be a more flexible, scalable, and adaptive approach to dynamic spectrum access, making it particularly effective in the complex and constantly evolving wireless spectrum environment. (en_US)
dc.format.extent: 16 p. (en_US)
dc.identifier.citation: Ukpong, U.C. et al. 2025. Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks. Scientific African. 27: 1-16. doi:10.1016/j.sciaf.2024.e02523 (en_US)
dc.identifier.doi: 10.1016/j.sciaf.2024.e02523
dc.identifier.issn: 2468-2276
dc.identifier.uri: https://hdl.handle.net/10321/5839
dc.language.iso: en (en_US)
dc.publisher: Elsevier BV (en_US)
dc.publisher.uri: https://doi.org/10.1016/j.sciaf.2024.e02523 (en_US)
dc.relation.ispartof: Scientific African; Vol. 27 (en_US)
dc.subject: Cognitive radio networks (en_US)
dc.subject: Deep reinforcement learning (en_US)
dc.subject: DQN (en_US)
dc.subject: Dynamic spectrum access (en_US)
dc.subject: QR-DQN (en_US)
dc.subject: Television whitespace (en_US)
dc.subject: RFRL gym (en_US)
dc.title: Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks (en_US)
dc.type: Article (en_US)
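For readers unfamiliar with how a DQN agent is wired to a spectrum-access task like the one described in the abstract, the sketch below shows a minimal, hypothetical TVWS channel-selection environment trained with Stable-Baselines3's DQN. It is not the authors' RFRL Gym setup: the ToySpectrumEnv class, its primary-user occupancy model, reward shaping, and all hyperparameters are illustrative assumptions only.

# Hypothetical sketch, not the paper's RFRL Gym implementation.
# A secondary user picks one of n_channels TVWS channels each step;
# it is rewarded for choosing a vacant channel and penalised for
# interfering with a primary user.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DQN


class ToySpectrumEnv(gym.Env):
    """Toy dynamic spectrum access environment (assumed dynamics)."""

    def __init__(self, n_channels: int = 8, episode_len: int = 200):
        super().__init__()
        self.n_channels = n_channels
        self.episode_len = episode_len
        # Observation: occupancy of every channel (1.0 = primary user active).
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_channels,), dtype=np.float32)
        # Action: index of the channel to transmit on.
        self.action_space = spaces.Discrete(n_channels)
        self._t = 0
        self._occupancy = np.zeros(n_channels, dtype=np.float32)

    def _sample_occupancy(self):
        # Assumed primary-user model: each channel busy with probability 0.4.
        return (self.np_random.random(self.n_channels) < 0.4).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        self._occupancy = self._sample_occupancy()
        return self._occupancy.copy(), {}

    def step(self, action):
        # +1 for transmitting in a spectrum hole, -1 for interference.
        reward = 1.0 if self._occupancy[action] == 0 else -1.0
        self._t += 1
        self._occupancy = self._sample_occupancy()
        truncated = self._t >= self.episode_len
        return self._occupancy.copy(), reward, False, truncated, {}


if __name__ == "__main__":
    env = ToySpectrumEnv()
    # Placeholder hyperparameters, not those reported in the paper.
    model = DQN("MlpPolicy", env, learning_rate=1e-3, buffer_size=50_000, verbose=0)
    model.learn(total_timesteps=20_000)

    obs, _ = env.reset(seed=0)
    action, _ = model.predict(obs, deterministic=True)
    print(f"Chosen channel: {action}, occupied: {bool(obs[action])}")

A QR-DQN variant of the same sketch would swap the value head for quantile estimates of the return distribution (e.g. via the sb3-contrib QRDQN class); the environment wiring is unchanged.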

Files

Original bundle

Name: Ukpong et al_2024.pdf
Size: 4.09 MB
Format: Adobe Portable Document Format
Name: Scientific African Copyright clearance.docx
Size: 141.13 KB
Format: Microsoft Word XML