QARR-FSQA: Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering

Citation

Tan, Siao Wah and Lee, Chin Poo and Lim, Kian Ming and Tee, Connie and Alqahtani, Ali (2024) QARR-FSQA: Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering. IEEE Access, 12. pp. 159280-159295. ISSN 2169-3536

QARR-FSQA_ Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering.pdf - Published Version
Restricted to Repository staff only


Abstract

In Natural Language Processing, creating training data for question answering (QA) systems typically requires significant effort and expertise. This challenge is amplified in few-shot scenarios, where only a limited number of training samples are available. This paper proposes a novel pretraining framework to enhance few-shot question answering (FSQA) capabilities. It begins with the selection of the Discrete Reasoning Over the Content of Paragraphs (DROP) dataset, designed for English reading comprehension tasks involving various reasoning types. Data preprocessing converts question-answer pairs into a predefined template: the input sequence concatenates the question, a mask token with a prefix, and the context, while the target sequence includes the question and answer. The Question-Answer Replacement and Removal (QARR) technique augments the dataset by integrating the answer into the question and selectively removing words. Various templates for question-answer pairs are introduced. Models such as BART, T5, and LED are then used to evaluate the framework's performance, undergoing further pretraining on the augmented dataset with their respective architectures and optimization objectives. The study also investigates the impact of different templates on model performance in few-shot QA tasks. Evaluated on three datasets in few-shot scenarios, the QARR-T5 method outperforms state-of-the-art FSQA techniques, achieving the highest F1 scores of 81.7% in the 16-shot and 32-shot settings, 82.7% in 64-shot, and 84.5% in 128-shot on the SQuAD dataset. This demonstrates the framework's effectiveness in improving models' generalization and performance on new datasets with limited samples, advancing few-shot QA.
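The preprocessing and augmentation described above can be sketched as follows. This is a minimal illustration based only on the abstract's description: the exact prefix string, mask token, template variants, and word-removal rate are assumptions, not details taken from the paper.

```python
import random

MASK = "<mask>"  # placeholder mask token; the actual token is model-specific (assumed)

def build_pair(question, answer, context, prefix="answer:"):
    """Form one training pair per the template described in the abstract:
    input = question + prefixed mask + context; target = question + answer."""
    source = f"{question} {prefix} {MASK} {context}"
    target = f"{question} {prefix} {answer}"
    return source, target

def qarr_augment(question, answer, removal_prob=0.15, rng=None):
    """Illustrative Question-Answer Replacement and Removal step:
    integrate the answer into the question, then selectively drop words.
    The 15% removal probability is a hypothetical choice."""
    rng = rng or random.Random(0)
    merged = f"{question} {answer}".split()
    kept = [w for w in merged if rng.random() > removal_prob]
    return " ".join(kept)

src, tgt = build_pair("Who won the race?", "Alice", "Alice won the race by a wide margin.")
aug = qarr_augment("Who won the race?", "Alice")
```

Here `src` contains the mask token where the answer will be generated, `tgt` pairs the question with the gold answer, and `aug` is a noised question-answer merge suitable for further pretraining.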

Item Type: Article
Uncontrolled Keywords: Natural language processing, few-shot question answering
Subjects: Q Science > QA Mathematics > QA71-90 Instruments and machines
Divisions: Faculty of Information Science and Technology (FIST)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 04 Dec 2024 05:56
Last Modified: 04 Dec 2024 05:56
URI: http://shdl.mmu.edu.my/id/eprint/13223
