In-Context Learning for Scalable and Online Hallucination Detection in RAGs

Authors

Nicolò Cosimo Albanese, Amazon Web Services (AWS), Italy

Abstract

Ensuring fidelity to source documents is crucial for the responsible use of Large Language Models (LLMs) in Retrieval Augmented Generation (RAG) systems. We propose a lightweight method for real-time hallucination detection, with potential to be deployed as a model-agnostic microservice to bolster reliability. Using in-context learning, our approach evaluates response factuality at the sentence level without annotated data, promoting transparency and user trust. Compared to other prompt-based and semantic similarity baselines from recent literature, our method improves hallucination detection F1 scores by at least 11%, with consistent performance across different models. This research offers a practical solution for real-time validation of response accuracy in RAG systems, fostering responsible adoption, especially in critical domains where document fidelity is paramount.
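To make the abstract's idea of sentence-level, prompt-based factuality checking concrete, the Python sketch below illustrates one plausible shape of such a detector. It is not the paper's implementation: the few-shot prompt, the generic llm(prompt) callable, and the naive sentence splitter are all placeholder assumptions introduced here for illustration only.

import re
from typing import Callable, Dict, List

# Hypothetical few-shot (in-context learning) prompt; the examples and wording
# are illustrative assumptions, not the prompt used in the paper.
FEW_SHOT_PROMPT = """You are verifying whether a sentence is supported by the given context.
Answer with a single word: SUPPORTED or HALLUCINATED.

Context: Paris is the capital of France.
Sentence: The capital of France is Paris.
Answer: SUPPORTED

Context: Paris is the capital of France.
Sentence: France's capital is Lyon.
Answer: HALLUCINATED

Context: {context}
Sentence: {sentence}
Answer:"""


def split_sentences(text: str) -> List[str]:
    # Naive splitter on sentence-ending punctuation; a production system
    # would likely use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def detect_hallucinations(
    llm: Callable[[str], str], context: str, response: str
) -> List[Dict[str, object]]:
    # Label each sentence of a RAG response as supported by the retrieved
    # context or hallucinated, using the few-shot prompt above. `llm` is a
    # placeholder for any model-agnostic completion call.
    results = []
    for sentence in split_sentences(response):
        verdict = llm(FEW_SHOT_PROMPT.format(context=context, sentence=sentence))
        results.append(
            {"sentence": sentence, "hallucinated": "HALLUCINATED" in verdict.upper()}
        )
    return results

Because the verdict is produced per sentence from a single prompt, a wrapper like this could in principle run as the model-agnostic microservice the abstract describes, flagging unsupported sentences in real time.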

Keywords

Large Language Models, Hallucinations, Prompt Engineering, Generative AI, Responsible AI.

Volume 14, Number 7