The AI’s inescapable rulebook
Now let’s apply this to your AI. Its rulebook is the vast dataset it was trained on. It has ingested a significant portion of human knowledge, but that knowledge is itself a finite, inconsistent, and incomplete system. It contains contradictions, falsehoods, and, most importantly, gaps.
An AI, operating purely within its training data, is like a manager who refuses to think outside the company manual. When faced with a query that falls into one of Gödel’s gaps – a question where the answer is true but not provable from its data – the AI does not have the human capacity to say, “I do not know,” or to seek entirely new information. Its core programming is to respond. So, it does what the OpenAI paper describes: it auto-completes, or hallucinates. It creates a plausible-sounding reality based on the patterns in its data.
The AI invents a financial figure because the pattern suggests a number should be there. It cites a non-existent regulatory case because the pattern of legal language is persuasive. It designs a product feature that is physically impossible because the training data contains both engineering truths and science fiction.
The AI’s hallucination is not simply a technical failure; it is a Gödelian inevitability. It is the system’s attempt to be complete, which forces it to become inconsistent; the only alternative is to say, “I don’t know,” which leaves the system consistent but incomplete. Interestingly, OpenAI’s latest model has a feature billed as an improvement on exactly this front: its “abstention rate” (the rate at which the model admits that it cannot provide an answer) has risen from about 1% in previous models to over 50% in GPT-5.
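To make that metric concrete, here is a minimal sketch of how an abstention rate could be computed over a batch of model answers. The refusal markers, helper functions, and sample responses are illustrative assumptions, not OpenAI’s actual evaluation method.

```python
# Hypothetical sketch: estimating an "abstention rate" from a batch of model answers.
# The refusal markers and sample answers are assumptions for illustration only.

REFUSAL_MARKERS = (
    "i don't know",
    "i do not know",
    "i cannot answer",
    "not enough information",
)

def is_abstention(answer: str) -> bool:
    """Return True if the answer reads as a refusal rather than a guess."""
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def abstention_rate(answers: list[str]) -> float:
    """Fraction of answers in which the model declined to answer."""
    if not answers:
        return 0.0
    return sum(is_abstention(a) for a in answers) / len(answers)

# Example: three confident answers and one refusal -> 25% abstention rate.
sample = [
    "The figure was $4.2 billion.",
    "I don't know the answer to that.",
    "The case was decided in 1987.",
    "The feature ships next quarter.",
]
print(f"Abstention rate: {abstention_rate(sample):.0%}")  # Abstention rate: 25%
```

In Gödelian terms, a higher abstention rate trades completeness for consistency: the model answers fewer questions, but fabricates fewer answers.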