Large language models (LLMs) can generate plausible-sounding but false statements, a failure mode commonly referred to as "hallucination". The technique is often used for ...