A Deep Learning Model for Answering Why-Questions in Arabic

Authors

DOI:

https://doi.org/10.33022/ijcs.v12i2.3183

Keywords:

Arabic, Question answering, Why question answering, Deep learning, Dynamic memory network

Abstract

Question answering (QA), a subfield of natural language processing (NLP), is concerned with providing answers to questions posed in natural language. Answering “why” questions has long been a challenging task for QA systems, given the complexity of the reasoning involved. Recent advances in neural network models, particularly those integrating attention and memory mechanisms, have yielded promising results across a range of NLP tasks. In this paper, we propose a deep learning model for answering “why” questions in Arabic. Our model is based on the dynamic memory network (DMN), an architecture that uses attention and memory mechanisms to locate and extract the information relevant to answering a question. We evaluate the DMN-based model on Arabic “why” questions using the LEMAZA dataset, achieving an F-score of 78.61%. Our findings suggest that DMN-based models hold promise for answering “why” questions in Arabic and other languages.
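For readers unfamiliar with the dynamic memory network, the short PyTorch sketch below illustrates the attention-and-memory mechanism the abstract refers to: a gate network scores each encoded sentence (“fact”) against the question and the current memory, an attention-weighted episode is formed, and the memory is updated from it. This is a minimal illustration of the standard DMN episodic-memory idea, not the authors' implementation; the class, feature choices, and dimensions are assumptions made for the example, and the model evaluated on LEMAZA may differ in detail.

import torch
import torch.nn as nn

class EpisodicMemory(nn.Module):
    # One DMN-style episodic memory pass (illustrative sketch only).
    def __init__(self, hidden_size: int):
        super().__init__()
        # Gate network: scores how relevant each fact is to the question
        # and to the current memory state.
        self.gate = nn.Sequential(
            nn.Linear(4 * hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )
        # Memory update: combines previous memory, new episode, and question.
        self.update = nn.Linear(3 * hidden_size, hidden_size)

    def forward(self, facts, question, memory):
        # facts: (batch, n_facts, hidden); question, memory: (batch, hidden)
        q = question.unsqueeze(1).expand_as(facts)
        m = memory.unsqueeze(1).expand_as(facts)
        # Interaction features between each fact, the question, and the memory.
        z = torch.cat([facts * q, facts * m,
                       (facts - q).abs(), (facts - m).abs()], dim=-1)
        attn = torch.softmax(self.gate(z).squeeze(-1), dim=1)      # (batch, n_facts)
        episode = torch.bmm(attn.unsqueeze(1), facts).squeeze(1)   # (batch, hidden)
        new_memory = torch.relu(
            self.update(torch.cat([memory, episode, question], dim=-1)))
        return new_memory, attn

# Example: 10 encoded sentences per passage, memory initialised with the question.
module = EpisodicMemory(hidden_size=128)
facts = torch.randn(2, 10, 128)     # encoder outputs for the passage sentences
question = torch.randn(2, 128)      # encoding of the Arabic "why" question
memory, attention = module(facts, question, question)

In a full DMN this pass is repeated over several episodes and the final memory is fed to an answer module; the attention weights indicate which sentences the model treats as the explanation sought by the “why” question.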

Author Biography

Tahani Alwaneen, King Saud University

A lecturer at Qassim University and currently a PhD student in the Department of Computer Science at King Saud University, Riyadh, Saudi Arabia.

Published

30-04-2023