Transformer-Based Sequence Modeling Short Answer Assessment Framework

Citation

Sharmila, P. and Sonai Muthu Anbananthen, Kalaiarasi and Chelliah, Deisy and Parthasarathy, S. and Balasubramaniam, Baarathi and Nathan Lurudusamy, Saravanan (2024) Transformer-Based Sequence Modeling Short Answer Assessment Framework. HighTech and Innovation Journal, 5 (3). pp. 627-639. ISSN 2723-9535

Abstract

Automated subjective assessment is challenging because of the complexity of human language and reasoning, characterized by semantic variability, subjectivity, ambiguity, and varying levels of judgment. Unlike objective exams, subjective assessments admit diverse answers, which makes automated scoring difficult. This paper proposes a novel approach that integrates advanced natural language processing (NLP) techniques with principled grading methods to address this challenge, combining Transformer-based sequence language modeling with a structured grading mechanism to build more accurate and efficient automatic grading systems for subjective assessments in education. The proposed approach consists of three main phases:

1. Content summarization: relevant sentences are extracted using self-attention mechanisms, allowing the system to summarize the content of each response effectively.
2. Key term identification and comparison: key terms are identified within the responses and treated as overt tags, which are then compared to reference keys using cross-attention mechanisms for a nuanced evaluation of the response content.
3. Grading: responses are graded with a weighted multi-criteria decision method that assesses several quality aspects and assigns partial scores accordingly.

Experimental results on the SQuAD dataset demonstrate the effectiveness of the approach, achieving an F-score of 86%. Significant improvements in ROUGE, BLEU, and METEOR scores further validate the efficacy of the proposed approach for automating subjective assessment tasks.
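The abstract does not include an implementation, so the following is only a minimal sketch of the two mechanisms it names: cross-attention alignment of response key terms against reference keys, and a weighted multi-criteria score that assigns partial credit. The embeddings, function names (align_terms, weighted_grade), criteria, and weights below are hypothetical placeholders, not the authors' code; in practice the embeddings would come from a Transformer encoder.

import numpy as np

def align_terms(response_emb: np.ndarray, reference_emb: np.ndarray) -> np.ndarray:
    """Scaled dot-product cross-attention: rows are response key terms (queries),
    columns are reference keys. Returns an alignment matrix of attention weights."""
    d = response_emb.shape[-1]
    scores = response_emb @ reference_emb.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)    # softmax over reference keys

def weighted_grade(criteria: dict, weights: dict) -> float:
    """Weighted multi-criteria score: each criterion is in [0, 1] and the weights
    sum to 1, so the result is a partial score in [0, 1]."""
    return sum(weights[name] * criteria[name] for name in weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    response_terms = rng.normal(size=(4, 16))    # 4 key terms from a student answer (placeholder embeddings)
    reference_keys = rng.normal(size=(3, 16))    # 3 reference keys from the answer key

    alignment = align_terms(response_terms, reference_keys)
    coverage = float(alignment.max(axis=0).mean())   # how strongly each reference key is covered

    # Hypothetical criteria and weights, for illustration only.
    score = weighted_grade(
        criteria={"key_term_coverage": coverage, "summary_relevance": 0.8, "fluency": 0.9},
        weights={"key_term_coverage": 0.5, "summary_relevance": 0.3, "fluency": 0.2},
    )
    print(f"partial score: {score:.2f}")

Under these assumptions, the alignment matrix plays the role of the cross-attention comparison between response key terms and reference keys, and the weighted sum stands in for the multi-criteria decision step that produces partial scores.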

Item Type: Article
Uncontrolled Keywords: Attention Model, Sequence Language Modeling, Subjective Assessment, Transformer
Subjects: Q Science > QA Mathematics > QA71-90 Instruments and machines
Divisions: Faculty of Information Science and Technology (FIST)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 04 Dec 2024 06:41
Last Modified: 04 Dec 2024 06:41
URI: http://shdl.mmu.edu.my/id/eprint/13236
