Title
Enhancing Autonomous Systems Localization Accuracy in the Presence of Sensor Uncertainties and Environmental Impacts
Author
Osman, Hassan Khaled Hassan Wagih
Thesis Committee
Researcher / Hassan Wagih
Supervisor / Sherif Hammad
Examiner / Hossam El-Din Hassan
Examiner / Mohamed Gaber Mohamed Gaber
Publication Date
2023
Number of Pages
83 p.
Language
English
Degree
Master's
Specialization
Mechanical Engineering
Approval Date
1/1/2023
Place of Approval
Ain Shams University - Faculty of Engineering - Mechatronics
Table of Contents
Only 14 of 122 pages are available for public view.

Abstract

Autonomous systems have evolved rapidly over the last few decades. Research and development of autonomous system modules enables such systems to perform complex tasks accurately and efficiently. This development has contributed to the steady growth of the world economy through its direct impact on productivity and labour reduction, allowing people to focus on more important tasks.
Autonomous mobile systems pose a new challenge for modern robotics and automotive engineering. The work in this thesis addresses some of the challenges facing modern autonomous mobile systems. The localization module is crucial for any autonomous mobile system. Several onboard sensors, such as cameras, satellite global positioning system (GPS) receivers, and light detection and ranging (LiDAR) units, are used to determine the vehicle's location. Sensor fusion techniques combine data from these sensors to provide an accurate estimate of the vehicle's pose. This thesis focuses on enhancing camera-based localization.
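As background on how such fusion can combine complementary sensors, the following is a minimal sketch (not the fusion pipeline used in the thesis) of inverse-variance weighting of two independent position estimates, e.g. a GPS fix and an odometry estimate; the function name, example values, and noise variances are illustrative assumptions.

    import numpy as np

    def fuse_position(gps_xy, odom_xy, gps_var, odom_var):
        """Fuse two 2-D position estimates by inverse-variance weighting."""
        w_gps = 1.0 / gps_var
        w_odom = 1.0 / odom_var
        fused = (w_gps * np.asarray(gps_xy) + w_odom * np.asarray(odom_xy)) / (w_gps + w_odom)
        fused_var = 1.0 / (w_gps + w_odom)   # fused estimate is more certain than either input
        return fused, fused_var

    # Example: a noisy GPS fix fused with a more precise but drifting odometry estimate
    pos, var = fuse_position(gps_xy=(10.2, 4.9), odom_xy=(10.0, 5.1),
                             gps_var=4.0, odom_var=1.0)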
A neural-network-based machine learning model that refines the pose estimate calculated by visual odometry algorithms is proposed. Visual Odometry algorithms allow the vehicle localization module to estimate incremental changes in the vehicle's pose by detecting the variations between successive camera frames. This thesis addresses both monocular and stereo Visual Odometry algorithms using drift reduction machine learning models that correlate the errors in the Visual Odometry algorithms with the physical changes in the image and thereby reduce such errors. The work in this thesis proposes two different types of machine learning models, one dedicated to translation and one to orientation.
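For readers unfamiliar with the inner loop of a feature-based monocular Visual Odometry algorithm, the sketch below illustrates how the incremental rotation and up-to-scale translation between two consecutive frames can be recovered. It is an OpenCV-based illustration, not the implementation used in the thesis; the camera intrinsics matrix K, the ORB detector, and the parameter choices are assumptions.

    import cv2
    import numpy as np

    def relative_pose(prev_img, curr_img, K):
        """Estimate rotation R and unit-scale translation t between two
        consecutive grayscale frames (one monocular VO step)."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(prev_img, None)
        kp2, des2 = orb.detectAndCompute(curr_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Essential matrix with RANSAC to reject outlier matches
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t  # t is known only up to scale for a monocular camera

Small errors in each such step accumulate over the trajectory, which is the drift that the proposed models aim to reduce.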
The Drift Reduction Neural Networks (DRNNs) were designed to generalize beyond the training data and to avoid overfitting. The DRNNs were also able to adapt to different navigation environments and scenarios. The work in this thesis also proposes a hybrid Visual Odometry algorithm that combines the developed machine learning models, monocular Visual Odometry, and stereo Visual Odometry.
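Since only the abstract is publicly available, the exact DRNN architecture is not reproduced here; the following is a hypothetical PyTorch sketch of the general idea of two separate correction networks, one for translation and one for orientation, with dropout as one common guard against overfitting. All layer sizes, feature counts, and names are assumptions, not the thesis's design.

    import torch
    import torch.nn as nn

    class DriftCorrector(nn.Module):
        """Small MLP mapping VO-derived features to a pose-error correction.
        Separate instances can be trained for translation and orientation."""
        def __init__(self, n_features, n_outputs, dropout=0.3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(dropout),
                nn.Linear(64, 32), nn.ReLU(), nn.Dropout(dropout),
                nn.Linear(32, n_outputs),
            )

        def forward(self, x):
            return self.net(x)

    # Hypothetical setup: one model corrects translation (x, y, z),
    # the other corrects orientation (heading drift).
    translation_model = DriftCorrector(n_features=10, n_outputs=3)
    orientation_model = DriftCorrector(n_features=10, n_outputs=1)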
Results showed the efficacy and robustness of the proposed algorithms, which reduced the orientation error by up to 78% and the translation error by up to 89.9% compared with standard Visual Odometry algorithms.