Hallucination Mitigation Techniques in Large Language Models
International Journal of Intelligent Computing and Information Sciences
Volume 24, Issue 4, December 2024, Pages 73-81
Document Type: Original Article
DOI: 10.21608/ijicis.2024.336135.1365
Authors
Mohamed Ali Mohamed Abdelghafour
1 Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt
2 Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt
3 Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt
Abstract
Large language models (LLMs) have demonstrated impressive natural language understanding and generation capabilities, enabling advances in diverse fields such as customer support, healthcare, and content creation. However, a significant challenge with LLMs is their tendency to produce factually inaccurate or nonsensical information, commonly known as hallucination. Hallucinations not only compromise the reliability of these models but can also lead to serious ethical and practical issues, particularly in high-stakes applications. This survey comprehensively reviews recent advances in hallucination mitigation strategies for LLMs. We explore retrieval-augmented models, which enhance factual grounding by integrating external knowledge sources; human feedback mechanisms, such as reinforcement learning from human feedback (RLHF), which improve accuracy by aligning model responses with human evaluations; knowledge augmentation techniques that embed structured knowledge bases for enhanced consistency; and controlled generation, which restricts outputs to ensure alignment with factual constraints. Additionally, we examine the challenges of integrating these techniques and the limitations of current methods, including scalability, resource intensity, and dependence on high-quality data. Finally, we discuss future research directions for improving factual reliability in LLMs and explore hybrid solutions toward accurate, adaptable models for a wider range of real-world applications.
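As an illustration of the retrieval-augmentation idea surveyed in the abstract, the minimal Python sketch below grounds a model's answer in documents fetched at query time. It is not taken from the paper: the `KNOWLEDGE_BASE` list, the token-overlap retriever, and the `call_llm` placeholder are illustrative assumptions standing in for a real document store, a dense retriever, and an actual LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: `call_llm` is a hypothetical stand-in for any LLM completion
# call, and the retriever is a toy bag-of-words scorer, not a vector store.

from collections import Counter

KNOWLEDGE_BASE = [
    "The Eiffel Tower was completed in 1889 and is located in Paris, France.",
    "Large language models can hallucinate facts unsupported by their training data.",
    "Retrieval augmentation grounds model outputs in documents fetched at query time.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping lowercase tokens between the query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest token overlap with the query."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion API call."""
    return f"[model answer conditioned on a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Build a grounded prompt: retrieved evidence first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    prompt = (
        "Answer using ONLY the evidence below; say 'I don't know' if it is insufficient.\n"
        f"Evidence:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("When was the Eiffel Tower completed?"))
```

In practice the same prompt-construction pattern is paired with dense retrievers and production LLM endpoints; the grounding comes from instructing the model to answer only from the retrieved evidence rather than from its parametric memory.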
Keywords
Large Language Models; Hallucinations; Retrieval-Augmentation; Knowledge-Augmentation; Human Feedback