What are hallucinations?
Large Language Models (LLMs) such as ChatGPT use vast amounts of text data to learn how to generate human-like text. You can think of this as a model studying for a big test. It crams in as much knowledge from books, articles, and web pages as it can. But, unlike us, it doesn't actually understand what it's learning. It doesn't have thoughts or experiences. Instead, it learns many complex patterns.
When you ask it a question, it uses all the information it has learned to come up with a response. It generates this response based on probability: the model selects the words and phrases that are most likely to follow the previous ones. Most of the time this gives incredibly accurate results.
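To make that concrete, here is a deliberately simplified sketch of the idea of "picking the next word by probability". It is not how any real model is implemented, and all the words and numbers are invented purely for illustration.

```python
# Toy illustration only: a real LLM scores thousands of possible next words,
# but the principle is the same. These probabilities are made up.
import random

# Hypothetical probabilities for the word that follows "The capital of France is"
next_word_probabilities = {
    "Paris": 0.92,   # very likely, and correct
    "Lyon": 0.05,    # sounds plausible, but wrong
    "Berlin": 0.03,  # unlikely, and wrong
}

words = list(next_word_probabilities)
weights = list(next_word_probabilities.values())

# The model samples a word according to these probabilities, so it almost
# always says "Paris" -- but it can occasionally pick a fluent-sounding
# wrong answer. That, in miniature, is how a hallucination happens.
print(random.choices(words, weights=weights, k=1)[0])
```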
Sometimes, models can generate information that seems plausible but isn't true; this is a hallucination. The LLM starts making up facts that sound believable but are not actually correct.
We can do a lot to make hallucinations a rare occurrence.
Why are hallucinations so problematic?
LLMs have become so good that we can let our guard down; we have come to rely on them too much. It becomes very easy to copy and paste without checking and miss a hallucination.
"LLMs have become so good we can let our guard down; we have come to rely on them too much."
When we use LLMs as part of our services, our customers expect our responses to be true. This is especially critical in education.
How do we manage the risk of hallucinations?
Although there is no watertight way to avoid hallucinations altogether, we can manage the risk. All technology and all employees get it wrong sometimes. If we put enough safeguards in place, the risk becomes so low that the benefits of our service outweigh the risk by a large factor.
What are Tutello's safeguards?