Hallucinating LLMs
I’ve spent a decent chunk of time this week researching hallucinating LLMs. If you’ve spent any time interacting with a large language model, you’ve surely encountered this phenomenon. It is, to put it bluntly, when the LLM doesn’t actually know the answer or solution and, for whatever reason, just decides to make shit up, confidently serving you something plausible-sounding but false.
This strange happening was something I’d been aware of prior to this week, but it was thrown into stark relief as I was digging into a speaking submission for GlueCon. The submission (“How to Prevent Catastrophic LLM Hallucinations”) dives into remediation procedures to make sure that an LLM isn’t seeing things that aren’t there.
Despite sounding like something out of a Philip K. Dick novel, LLM hallucination is actually a pretty significant problem, and one that goes well beyond the debate over “consciousness.” Just imagine the use of LLMs in healthcare systems. That use is not only already being rolled out, but sure to grow in both the number of use cases and the criticality of the decisions being made in them. Obviously, it’s less than optimal to have an LLM “hallucinate” a made-up response (or solution) in a situation where a human life (or even just human health and welfare) is directly on the line.
In my research, I found several “work-arounds” that people are using in an attempt to remediate possible hallucinations. There are even a couple of startups rolling out solutions, but in most cases those solutions seem either vertically oriented or aimed at a fairly narrow sliver of a horizontal market. In short, I have yet to find a technical solution that provides an overarching “scaffolding” for hallucination prevention and remediation. As always, if you’ve seen it (or are starting it), let me know.
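For the sake of something concrete, here’s a minimal sketch of one common shape those work-arounds take: retrieve trusted text, force the model to answer only from that text, and give it an explicit way to abstain. Everything here (the toy corpus, `retrieve_passages`, and the stubbed `call_llm`) is my own illustrative assumption, not a description of any particular vendor’s product.

```python
# Minimal sketch of a retrieval-grounded, "answer or abstain" pattern, one of
# the common hallucination work-arounds. `call_llm` is a hypothetical stand-in
# for whatever foundation-model API you actually use.

from typing import List

ABSTAIN = "NOT_IN_CONTEXT"

# Toy in-memory "corpus"; in practice this would be a search index or vector store.
CORPUS = [
    "GlueCon is a developer conference focused on APIs and AI.",
    "OpenAPI files describe the endpoints and schemas of an HTTP API.",
]

def retrieve_passages(question: str, k: int = 2) -> List[str]:
    """Crude keyword-overlap retrieval, just to illustrate the shape of the step."""
    words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda p: len(words & set(p.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to your model provider and return its text."""
    raise NotImplementedError("wire this up to the model API you use")

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve_passages(question))
    prompt = (
        "Answer using ONLY the context below. "
        f"If the answer is not in the context, reply exactly {ABSTAIN}.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = call_llm(prompt)
    # An abstention is a signal to escalate to a human, not something to paper over.
    return "No grounded answer; escalate to a human." if ABSTAIN in answer else answer
```

The interesting design choice is the abstain path: rather than letting the model improvise when retrieval comes up empty, you route the question to a person, which is exactly the behavior you want in something like a healthcare setting.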
One last bit: There is this open source package manager that recently launched. If you’re working with (or playing with) OpenAPI files and different foundational AI models, give it a look.
Until next time…