Title
Deep Learning Techniques for Natural Language Processing
Author
El Hennawy, Mariam Essam
Supervisory Committee
Researcher / مريم عصام الدين سيد الحناوي
Supervisor / حازم محمود عباس
Examiner / نوال أحمد الفيشاوى
Examiner / هدى قرشى محمد
Publication Date
2019
Number of Pages
123 p.
Language
English
Degree
Master's
Specialization
Electrical and Electronic Engineering
Degree Award Date
1/1/2019
Awarding Institution
Ain Shams University - Faculty of Engineering - Computer Engineering

Abstract

Natural Language Processing (NLP) is concerned with the interaction between machines and humans. It aims to process large amounts of data in order to understand, interpret, and manipulate human (natural) language. The field lies at the intersection of artificial intelligence, linguistics, and computer science. NLP is challenging because understanding the meaning of natural language is required to perform difficult tasks such as sentiment analysis, text summarization, question answering, machine translation, speech recognition, textual entailment, dialogue agents, text-to-speech, and speech segmentation, among many others.
Transfer learning is one approach that can be used to train deep neural networks for NLP applications more effectively. It plays a key role in initializing networks for computer vision applications, since training a network from scratch can be time-consuming, and NLP applies the same idea of transferring knowledge learned from large-scale data. Recent studies show that pretrained language models can achieve state-of-the-art results on a wide range of NLP tasks such as sentiment analysis, question answering, text summarization, and textual entailment. Recent work has also demonstrated the effectiveness of models that rely on self-attention alone, using neither recurrent neural networks (RNNs) nor convolutional neural networks (CNNs), for tasks such as machine translation, text summarization, and sentiment analysis. In this thesis, we demonstrate that an RNN/CNN-free self-attention model for sentiment analysis can be improved by 2.53% by using contextualized word representations learned in a language modeling task.
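To make the described setup concrete, the following is a minimal sketch, in PyTorch (an assumption; the abstract does not state which framework the thesis uses), of a self-attention sentiment classifier that consumes contextualized word representations produced by a pretrained language model. The class name, dimensions, and the random tensors standing in for contextualized embeddings are illustrative assumptions and are not taken from the thesis.

```python
import torch
import torch.nn as nn

class SelfAttentionSentimentClassifier(nn.Module):
    """Sentiment classifier using self-attention only (no RNN/CNN layers).

    `embed_dim` is assumed to match the dimensionality of the contextualized
    word representations coming from a pretrained language model (the upstream
    encoder is not shown here).
    """

    def __init__(self, embed_dim=512, num_heads=8, num_classes=2):
        super().__init__()
        # Multi-head self-attention over the token sequence ([batch, seq, dim]).
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, contextual_embeddings, padding_mask=None):
        # contextual_embeddings: [batch, seq_len, embed_dim], assumed to be the
        # output of a language model pretrained on a large unlabeled corpus.
        attended, _ = self.attention(
            contextual_embeddings, contextual_embeddings, contextual_embeddings,
            key_padding_mask=padding_mask,
        )
        attended = self.norm(attended + contextual_embeddings)  # residual connection
        pooled = attended.mean(dim=1)  # mean-pool over tokens
        return self.classifier(pooled)  # sentiment logits


# Usage with random tensors standing in for contextualized word vectors.
model = SelfAttentionSentimentClassifier()
dummy_embeddings = torch.randn(4, 20, 512)  # 4 sentences, 20 tokens, 512-dim vectors
logits = model(dummy_embeddings)            # shape: [4, 2]
```

In this sketch, the gain reported in the thesis would correspond to feeding the classifier contextualized embeddings from a pretrained language model instead of static word vectors; the 2.53% figure and the exact architecture are documented in the restricted chapters and are not reproduced here.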