All the “hallucinating” and “AI lies” chatter is based on one big misunderstanding.

The fundamental goal of GPT is to give machines language skills so that they can interact with you like a real person.

They have been trained on billions of web pages to generate fluent text, not to answer every possible question accurately (and there are technical and philosophical reasons for this).

If you are worried that something like ChatGPT will give incorrect answers to a question, you can point it to your organisation’s knowledge base and have it answer from there. The accuracy of the answers will only be as good as your knowledge base.

To give correct answers to factual questions, the model needs to be primed with the specific material you want it to draw on.
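To make this concrete, here is a minimal sketch of that priming step: retrieve the most relevant passages from your own knowledge base and put them in the prompt, instructing the model to answer only from that context. The sample documents, the naive word-overlap scoring, and the call_llm() helper are illustrative placeholders, not a specific vendor API.

```python
# Minimal sketch: ground the model's answer in your own knowledge base
# by retrieving relevant passages and "priming" the prompt with them.

KNOWLEDGE_BASE = [
    "Our support desk is open Monday to Friday, 9:00-17:00 CET.",
    "Refunds are processed within 14 days of receiving the returned item.",
    "Enterprise customers can reach a dedicated account manager by email.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank knowledge-base passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Prime the model: answer only from the retrieved passages."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_prompt("How long do refunds take?")
    print(prompt)
    # The prompt would then be sent to whichever completion API you use, e.g.
    # answer = call_llm(prompt)  # call_llm is a placeholder, not a real library call
```

In a production setup the word-overlap scoring would typically be replaced by embedding-based search, but the principle is the same: the model answers from the context you supply, so its accuracy tracks the quality of that context.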

This is bad news for those who think ChatGPT is a talking encyclopaedia. But it is good news for companies that can invest in priming the model with real, factual data for customer support, research, and more.

In the following gist, let’s take a step-by-step look at how and why GPT hallucinates, and how to prevent it.

Contact me to get your business started on this topic.