
AI Hallucination Might Not Go Away After All


Generative AI, capable of producing text, images, video, music, and code, could add $2.6 trillion to $4.4 trillion to the economy annually, according to the McKinsey Global Institute. Yet all of today's AI chatbots, including ChatGPT, Claude 2, and Bard, suffer from hallucination. Because other modes of AI generation, such as images and code, often involve a language component, hallucinated text can affect every mode of AI generation. On August 1, Fortune reported that some experts doubt whether hallucination will be fixed at all.

Daniela Amodei, president of Anthropic, the company behind Claude 2, indicated that current large language models are essentially built to predict the next word, so some level of inaccuracy is unavoidable. Large language models do not really understand the text they generate. When the generated text fits reality and people's expectations, it is powerful and impressive, yet there is no inherent mechanism to guarantee that it will. Not only might the training data contain false information, but the probabilistic nature of word prediction also makes it possible for the AI to make mistakes. Similarly, Emily Bender, a linguistics professor at the University of Washington, believes that hallucination cannot be fixed. Sam Altman, who leads OpenAI, said that he would not take AI output as completely true:

"I think we will get the hallucination problem to a much, much better place. I think it will take us a year and a half, two years. Something like that. But at that point we won't still talk about these. There's a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other."

While this remark from Altman sounds optimistic, it does not eliminate the suspicion that hallucination is here to stay. Shane Orlick, the president of Jasper, an AI marketing-content company, suggested that hallucination is an "added bonus" in that it enables new ideas and perspectives in content creation. Orlick further said that if customers are concerned about accuracy, his company might offer them Anthropic's model. But as noted above, Anthropic's president herself acknowledged that some level of inaccuracy will always be there.

Making things up may not be a big problem if the purpose is to generate marketing content that is unique and attractive. But if the purpose is to write news, as Google has pitched to news companies, or to do legal research or to treat a patient, inaccuracy may be unacceptable.

So what might we do about AI hallucination?

1. Basics
Set expectations: let all parties know that mistakes are possible.
Verify the output: manually check the results against a reliable source.
Use a system message to give context, such as "Act as a physics professor and provide factual answers".
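
As a rough illustration of the last point, here is a minimal sketch of passing a system message through the OpenAI Python library. The model name and the question are placeholders, and the client interface varies by provider and library version.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model is available to you
    messages=[
        # The system message sets the context before the user's question.
        {"role": "system", "content": "Act as a physics professor and provide factual answers."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```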

2. Low tech approaches
2.1 Use better prompts
a. Your prompts should be clear, concise, and specific, with no ambiguity.

For example, instead of asking “tell me about regression”, you should try something like “Explain what multiple linear regression is and how it is different from simple linear regression”.

b. Include context or examples
For instance, give the specific setting or conditions where you need to use multiple linear regression.

c. Give step-by-step instructions
ChatGPT is known to give wrong answers to complex math problems, but asking for a step-by-step solution tends to lead to correct answers. Also refer to prompt chaining; a sketch combining these ideas appears at the end of this list.

d. Ask again if in doubt
You may prompt the AI to reconsider, for example: "Are you sure that is correct? I heard that …"
If the task is coding, you can also feed the error message back to the AI.
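
To illustrate items c and d, here is a minimal prompt-chaining sketch: the model is first asked to reason step by step, and its answer is then sent back for a self-check. The model name and the helper function are assumptions, not a prescribed workflow.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's reply (model name is a placeholder)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: ask for a step-by-step solution instead of a one-shot answer.
draft = ask("Solve step by step: a train travels 180 km in 2.5 hours. What is its average speed?")

# Step 2: chain a follow-up prompt that asks the model to double-check its own work.
review = ask(
    f"Here is a proposed solution:\n{draft}\n\n"
    "Are you sure this is correct? Verify each step and state the final answer."
)
print(review)
```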

2.2 Adjust temperature
For example, with the GPT-3 model text-davinci-003, setting the temperature closer to 0 (instead of 1) makes the output more predictable (less random), which will likely reduce hallucination.
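
A minimal sketch of setting the temperature is shown below. The prompt is only an illustration, and text-davinci-003 has since been deprecated; the same temperature parameter applies to chat models as well.

```python
from openai import OpenAI

client = OpenAI()

# Legacy completions call for text-davinci-003; the `temperature` parameter
# also exists on chat models.
response = client.completions.create(
    model="text-davinci-003",
    prompt="List three documented causes of the 2008 financial crisis.",
    temperature=0,   # closer to 0 = more deterministic, less prone to wandering
    max_tokens=200,
)
print(response.choices[0].text)
```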

2.3 Experiment with different models
For example, you can try the same task with ChatGPT, Claude 2, and Bard and choose the one that gives the best output.

3. Advanced approaches
3.1 Provide further training for AI models

a. Use diverse, high-quality, and domain-specific data to further train your AI models so that the models are exposed to a wide range of scenarios (a small data-preparation sketch appears after item b).

b. Reinforcement Learning from Human Feedback
Have humans interact with the AI model to identify and correct any errors or false information. This human-in-the-loop training process helps refine the model.
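
As a rough sketch of item a, the snippet below prepares a small domain-specific dataset in the JSONL chat format commonly used for fine-tuning. The file name, the example Q&A pairs, and the exact record format are assumptions; consult your provider's fine-tuning documentation for the required schema.

```python
import json

# Hypothetical domain-specific Q&A pairs; in practice these would come from
# vetted, high-quality sources covering a wide range of scenarios.
examples = [
    {"question": "What is the normal resting heart rate for adults?",
     "answer": "Roughly 60 to 100 beats per minute."},
    {"question": "What does HDL cholesterol stand for?",
     "answer": "High-density lipoprotein cholesterol."},
]

# Write one chat-formatted training example per line (JSONL), a common
# input format for fine-tuning APIs.
with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        record = {"messages": [
            {"role": "system", "content": "Answer medical questions factually and concisely."},
            {"role": "user", "content": ex["question"]},
            {"role": "assistant", "content": ex["answer"]},
        ]}
        f.write(json.dumps(record) + "\n")
```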

3.2 Add tools
NLP techniques such as sentiment analysis or entity recognition can help the system better understand the context of a question; the extracted information can then be fed to the AI to generate more accurate answers.
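
For instance, here is a minimal sketch using spaCy for entity recognition; the model name en_core_web_sm is a common choice, and the way the entities are wired into the prompt is an assumption.

```python
import spacy

# Assumes the model has been installed with: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

question = "When did Apple release the first iPhone in Germany?"

# Extract named entities from the question to give the LLM explicit context.
doc = nlp(question)
entities = [(ent.text, ent.label_) for ent in doc.ents]

# Prepend the extracted entities so the model knows exactly what the question refers to.
augmented_prompt = (
    f"Question: {question}\n"
    f"Recognized entities: {entities}\n"
    "Answer factually, and say 'I don't know' if you are not sure."
)
print(augmented_prompt)  # this string would then be sent to the LLM
```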

3.3 Combine models
Using an additional LLM to evaluate the first model's output might improve truthfulness.
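
A minimal sketch of this idea is shown below, where a second call acts only as a fact-checker. The model names and the verdict format are assumptions.

```python
from openai import OpenAI

client = OpenAI()

def complete(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The first model drafts an answer.
answer = complete("gpt-3.5-turbo", "Who wrote the novel 'The Trial'?")

# The second model is asked only to judge the draft, not to rewrite it.
verdict = complete(
    "gpt-4",
    f"Question: Who wrote the novel 'The Trial'?\n"
    f"Proposed answer: {answer}\n"
    "Reply with 'SUPPORTED' if the answer is factually correct, "
    "otherwise 'UNSUPPORTED' with a brief explanation.",
)
print(answer, verdict, sep="\n")
```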

3.4 Add a reliable source
Grounding AI output in a search engine or an internal database may reduce hallucination. For example, a system can retrieve an internal document relevant to a question and have an LLM process that document to generate a concise answer. Refer to Retrieval-Augmented Generation (RAG), often implemented with vector databases.
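
Here is a minimal RAG sketch using embedding similarity over a tiny in-memory document store. The documents, model names, and the retrieval step are assumptions; a real system would use a vector database.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [  # tiny in-memory "knowledge base" standing in for an internal document store
    "Policy 12: Employees may work remotely up to three days per week.",
    "Policy 7: Travel expenses above $500 require manager approval.",
]

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

question = "How many remote days are allowed per week?"
doc_vectors = [embed(d) for d in docs]
q_vector = embed(question)

# Retrieve the most similar document by cosine similarity.
scores = [np.dot(q_vector, v) / (np.linalg.norm(q_vector) * np.linalg.norm(v))
          for v in doc_vectors]
best_doc = docs[int(np.argmax(scores))]

# Generate an answer grounded in the retrieved document.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content":
               f"Answer using only this document:\n{best_doc}\n\nQuestion: {question}"}],
).choices[0].message.content
print(answer)
```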

3.5 Other approaches
a. Check whether stochastically sampled responses diverge (a sketch appears after this list)
b. Train a classifier on an LLM's activation values
c. Go multimodal, with both text and images as prompts (such as with Microsoft Kosmos-1)
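
A minimal sketch of approach a: sample several answers at a nonzero temperature and flag disagreement. The model name, the number of samples, and the crude agreement check are assumptions.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answer(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt + " Answer in one short sentence."}],
        temperature=1.0,  # intentionally nonzero so samples can diverge
    )
    return response.choices[0].message.content.strip().lower()

prompt = "In what year was the Eiffel Tower completed?"
answers = [sample_answer(prompt) for _ in range(5)]

# If the sampled answers diverge, treat the response as possibly hallucinated.
most_common, count = Counter(answers).most_common(1)[0]
if count < 4:
    print("Low agreement across samples; verify before trusting:", answers)
else:
    print("Consistent answer:", most_common)
```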

4. Use AI in tasks that don’t require accuracy
Use AI for creative tasks, where inaccuracy is not only acceptable but possibly beneficial.

