Title
Markov Decision Processes and Their Applications in Wireless Communication Networks
Author
Saad, Islam Shehata Abdelfattah.
Preparation Committee
Researcher / Islam Shehata Abdelfattah Saad
Supervisor / شريف ابراهيم محمود ربيع
shrfrabia@hotmail.com
Supervisor / عمرو محمد محمد عبد الرازق
Examiner / محمد أبوزهاد أبوزيد احمد
Examiner / ياسمين أبو السعود صالح متولى
Subject
Mathematics.
Publication Date
2021.
Number of Pages
111 p.
Language
English
Degree
Master's
Specialization
Engineering (Miscellaneous)
Award Date
5/9/2021
Place of Award
Alexandria University - Faculty of Engineering - Engineering Mathematics and Physics
Table of Contents
Only 14 pages out of 151 are available for public view.

Abstract

Cognitive radio networks (CRNs) aim to improve spectrum utilization and allow users to move freely. Cognitive radio technology enhances spectrum utilization by permitting the unlicensed user (secondary user, SU) to use the channel without affecting the licensed user (primary user, PU). A major issue is that the SU needs to sense the channel frequently, which wastes energy. This problem can be addressed through sensing energy optimization together with the concept of a sensing interval, defined as the number of time slots after which the SU should sense the channel again. To improve its performance, the SU has to decide at each decision epoch the proper action to take from the available set of actions. For example, it has to decide whether to sense the channel or stay idle to save energy; if it decides to sense the channel, it should choose the optimal sensing energy and the optimal sensing interval length. Such decisions are taken in order to maximize the total accumulated reward. Consequently, the SU has to solve a sequential decision problem. The environment is random due to the amount of harvested energy and the PU activity, which motivates using the Markov decision process (MDP) and its variations to help the SU make optimal decisions. Specifically, this thesis concentrates on the mixed observable Markov decision process (MOMDP), because the system state consists of the SU's available energy, which is observable, and the PU activity, which is only partially observable due to sensing errors. Firstly, a CRN with energy harvesting is considered while applying the sensing interval approach. The problem of determining the optimal sensing energy and the optimal sensing interval is addressed in order to maximize the SU throughput while minimizing both the consumed energy and the interference with the PU.
By formulating this optimization problem as an MOMDP, a dynamic policy for the SU is generated, taking into account the total accumulated SU reward. Numerical results show that the proposed policy outperforms existing approaches, especially when the SU prefers the transmission mode to the energy-saving mode. Additionally, the numerical results show that applying the sensing interval concept to both idle and busy sensing outcomes is better than applying it to the idle case only.
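The sequential decision problem sketched in the abstract can be illustrated with a heavily simplified, fully observable toy model: the SU's state is its discrete battery level, and in each slot it either stays idle (possibly harvesting energy) or spends energy to sense and transmit. This is a minimal value-iteration sketch, not the thesis's MOMDP model; all parameter values and the fully observable PU assumption are hypothetical simplifications for illustration only.

```python
# Illustrative value-iteration sketch for a simplified SU decision
# problem. All numbers below (battery size, probabilities, rewards)
# are hypothetical placeholders, and PU activity is treated as fully
# observable, unlike the partially observable MOMDP in the thesis.

B = 5                 # battery capacity (discrete energy units)
E_SENSE = 1           # energy consumed by sensing (and transmitting)
P_HARVEST = 0.6       # prob. of harvesting one energy unit per slot
P_PU_IDLE = 0.7       # prob. the channel is found idle when sensed
R_TX = 1.0            # reward for a successful SU transmission
GAMMA = 0.95          # discount factor

def action_values(V, b):
    """Return (value of 'idle', value of 'sense') in battery state b."""
    # Idle: no reward now, but one energy unit may be harvested.
    v_idle = GAMMA * (P_HARVEST * V[min(b + 1, B)]
                      + (1 - P_HARVEST) * V[b])
    if b < E_SENSE:
        return v_idle, None          # not enough energy to sense
    # Sense: always costs E_SENSE; reward earned only if PU is idle.
    v_sense = P_PU_IDLE * R_TX + GAMMA * V[b - E_SENSE]
    return v_idle, v_sense

# Value iteration over battery levels 0..B.
V = [0.0] * (B + 1)
for _ in range(500):
    V = [max(v for v in action_values(V, b) if v is not None)
         for b in range(B + 1)]

# Greedy policy extracted from the converged values.
policy = []
for b in range(B + 1):
    v_idle, v_sense = action_values(V, b)
    policy.append("sense" if v_sense is not None and v_sense > v_idle
                  else "idle")
```

Under these placeholder parameters, the resulting policy senses whenever the battery holds enough energy, since idling yields no immediate reward; the thesis's MOMDP setting additionally maintains a belief over the PU state and optimizes the sensing energy and sensing interval, which this sketch omits.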