Hallucinations
In AI systems, hallucination refers to instances in which the model generates responses that are factually incorrect, misleading, or simply made up. These hallucinations often appear plausible and fluent, yet they contain information that is fabricated or inaccurate.
Hallucination in Generative AI occurs when the model produces outputs that are not grounded in real data or facts. Most generative text systems are built on large language models, which generate responses by predicting the next word or token based on statistical patterns learned from large datasets of text. The model has no built-in mechanism to check facts or verify the correctness of what it produces, which is what leads to "hallucinated" outputs.
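A minimal sketch can make this concrete. The toy "language model" below (a simple bigram model over a tiny, hypothetical corpus, not any real system) only learns which words tend to follow which; because it samples from those patterns with no notion of truth, it can fluently assert something false.

```python
import random
from collections import defaultdict

# Tiny, hypothetical training corpus used only to illustrate the mechanism.
corpus = [
    "the capital of france is paris",
    "the capital of mars is olympus mons",
]

# "Train" by counting which word follows which (a bigram model).
next_words = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def generate(prompt, max_words=6):
    """Extend the prompt by repeatedly sampling a likely next word.

    The model only reproduces word patterns it has seen; it has no notion
    of whether the resulting sentence is factually true.
    """
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# In training, "is" was followed by both "paris" and "olympus", so the model
# can just as fluently claim the capital of France is "olympus mons" --
# a plausible-sounding but false ("hallucinated") statement.
print(generate("the capital of france"))
```

Real large language models are vastly more sophisticated, but the underlying issue is the same: the next word is chosen because it is statistically likely given the context, not because it has been verified against a source of truth.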