Factuality & Hallucination
Knowledge Base Adherence Test
When AI has RAG context, can you tell if it actually used the provided documents or fell back on its training data?
Retrieval-Augmented Generation (RAG) systems provide AI with specific documents to answer questions. But AI models don't always use the provided context correctly. They may ignore relevant context, answer from training data instead, misinterpret retrieved documents, or blend RAG results with hallucinations. Can you spot when the AI goes off-script?
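The failure modes above can be checked mechanically, at least in rough form. Below is a minimal sketch of a context-grounding heuristic (hypothetical names, not a production hallucination detector): it flags answer sentences whose content-word overlap with the retrieved documents falls below a threshold, which is one crude way to "spot when the AI goes off-script".

```python
import re

def grounding_score(answer_sentence: str, context: str) -> float:
    """Fraction of the sentence's words that also appear in the context."""
    words = set(re.findall(r"[a-z']+", answer_sentence.lower()))
    ctx_words = set(re.findall(r"[a-z']+", context.lower()))
    if not words:
        return 1.0  # an empty sentence has nothing ungrounded in it
    return len(words & ctx_words) / len(words)

def flag_ungrounded(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences that look like they came from outside the context."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [s for s in sentences if grounding_score(s, context) < threshold]

# Example: the second sentence invents a detail absent from the retrieved document.
context = "The warranty covers parts and labor for 12 months from purchase."
answer = ("The warranty covers parts and labor for 12 months. "
          "It also includes free international shipping on replacements.")
print(flag_ungrounded(answer, context))
```

Word overlap is a deliberately simple proxy; real RAG evaluators typically use entailment models or LLM-as-judge scoring, since a paraphrased but faithful answer would score poorly on raw overlap.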
10 questions · Expert · ~8 min