Hallucination (AI)

In the context of AI, a hallucination happens when a system like ChatGPT generates an answer that sounds confident but is false or made up. For example, it might give you a detailed explanation of a scientific study that does not exist or invent a fake historical fact. These mistakes are not intentional: the AI is not trying to lie, but the results can be misleading if you're not aware that they happen.

Hallucinations occur because language models do not actually know facts. They generate responses based on statistical patterns in the data they were trained on. If the training data was unclear or incomplete, or if the model has to guess in a situation it has not seen before, it may produce a wrong answer that still sounds very believable. Nothing in the generation process checks whether a claim is true, which is why it is important to double-check critical information from AI tools.
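The toy sketch below is a simplification, not a real language model: the prompt, the candidate continuations, and their probabilities are all invented for illustration. It shows the core point in miniature: generation is a weighted draw over what is statistically likely to come next, and nothing in that draw checks whether the resulting sentence is true.

```python
import random

# Toy illustration only, not a real language model. The "model" below just
# picks the next words from made-up probabilities; no step checks facts.
next_word_probs = {
    "The study was published in": {
        "Nature in 2014.": 0.40,                     # sounds plausible
        "Science in 2016.": 0.35,                    # sounds plausible
        "the Journal of Imaginary Results.": 0.25,   # just as easy for the model to say
    }
}

def generate(prompt: str) -> str:
    """Sample a continuation by probability alone; truth is never consulted."""
    options = next_word_probs[prompt]
    continuation = random.choices(
        list(options), weights=list(options.values()), k=1
    )[0]
    return f"{prompt} {continuation}"

print(generate("The study was published in"))
```

Run it a few times and it will sometimes produce the fluent but fictional citation, because a likely-sounding continuation and a true one look the same to a system that only models word patterns.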
