Citation
Al Hauna, Asyafa Ditra and Yunus, Andi Prademon and Fukui, Masanori and Khomsah, Siti (2025) Enhancing LLM Efficiency: A Literature Review of Emerging Prompt Optimization Strategies. International Journal on Robotics, Automation and Sciences, 7 (1). pp. 72-83. ISSN 2682-860X
Text
1311-Article Text-13279-3-10-20250516.pdf - Published Version (420kB). Restricted to Repository staff only.
Abstract
This study focuses on enhancing the performance of Large Language Models (LLMs) through innovative prompt engineering techniques aimed at optimizing outputs without the high computational costs of model fine-tuning or retraining. The primary objective is to investigate efficient alternatives, such as black-box prompt optimization and ontology-based prompt refinement, which improve LLM performance by refining prompts externally while leaving the model's internal parameters unchanged. The study explores various prompt optimization techniques, including instruction-based, role-based, question-answering, and contextual prompting, alongside advanced methods such as Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) prompting. Methodologically, the research involves a comprehensive literature review, benchmarking prompt optimization techniques against existing models using standard datasets such as Big-Bench Hard and GSM8K. The study evaluates the performance of approaches including Automatic Prompt Engineer (APE), PromptAgent, and self-consistency prompting, among others. The results demonstrate that these techniques significantly enhance LLM performance, particularly in tasks requiring complex reasoning, multi-step problem-solving, and domain-specific knowledge integration. The findings suggest that prompt engineering is crucial for improving LLM efficiency without excessive resource demands. However, challenges remain in ensuring prompt scalability, transferability, and generalization across different models and tasks. The study highlights the need for further research on integrating ontologies and automated prompt generation to refine LLM precision and adaptability, particularly in low-resource settings. These advancements will be vital for maximizing the utility of LLMs in increasingly complex and diverse applications.
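One of the techniques the abstract evaluates, self-consistency prompting, samples several chain-of-thought answers from a model and keeps the majority-vote result. The sketch below illustrates only that voting step; `sample_cot_answer` is a hypothetical stand-in for an LLM call (with canned toy answers), not an API from the reviewed paper.

```python
from collections import Counter

def sample_cot_answer(question: str, seed: int) -> str:
    """Hypothetical stub for one sampled chain-of-thought completion.
    A real system would query an LLM at non-zero temperature and
    extract the final answer from the generated reasoning chain."""
    canned = ["18", "18", "17", "18", "18"]  # toy: most samples agree
    return canned[seed % len(canned)]

def self_consistency(question: str, n_samples: int = 5) -> str:
    """Majority vote over independently sampled CoT answers."""
    answers = [sample_cot_answer(question, s) for s in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("Alice has 3 boxes of 6 eggs. How many eggs in total?"))
```

With the toy stub above, four of five samples answer "18", so the vote returns "18"; the same aggregation applies unchanged when the samples come from a real model.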
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Prompt Optimization, Prompt Engineering, Black-Box, Ontology, Large Language Models |
| Subjects: | Q Science > Q Science (General) |
| Depositing User: | Ms Suzilawati Abu Samah |
| Date Deposited: | 26 Jun 2025 01:22 |
| Last Modified: | 26 Jun 2025 01:22 |
| URI: | http://shdl.mmu.edu.my/id/eprint/14068 |