How to Live with AI Hallucinations: A Case Study Mapping Out AI Insect-Farm Technologies
Despite tremendous advances in reducing and mitigating AI hallucinations, the truth is that an LLM is still a language model, and in particular a statistical model: it was trained to make likely guesses about the text it generates, including the answers it gives you.
We have no choice but to live with the fact that LLMs sometimes provide answers we have to verify and double-check. However, if we want to do anything at scale with ChatGPT or other LLMs, it is counterproductive to then check every answer manually, one by one. For example, I was looking at about 20 AI insect-farm startups and using genAI to figure out their technologies and end-market industries/customers. That is a lot of double-checking to do by hand.
Eventually, with some fresh-from-the-oven tools from my fantastic colleagues at Mathlabs, we adopted the strategy of 'Always Cite Your Sources'. This sounds like a bibliography exercise or something out of the legal world, but there is no better safeguard than to always cite and check your sources.
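To make the idea concrete, here is a minimal sketch in Python of what 'Always Cite Your Sources' can look like in practice. This is not the Mathlabs tooling; the `ask_llm` placeholder, the 'claim | URL' answer format, and the keyword spot-check are all my own assumptions, just to illustrate the shape of the workflow: force the model to attach a URL to every claim, then mechanically check that each URL resolves and at least mentions the topic, and flag everything else for manual review.

```python
import requests

# Hypothetical placeholder: swap in whatever LLM client or tool you actually use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def cited_answer(question: str) -> list[dict]:
    """Ask the model to answer AND name its sources, one URL per claim."""
    prompt = (
        f"{question}\n"
        "Answer as lines of 'claim | source URL'. "
        "Only include claims you can back with a public URL."
    )
    rows = []
    for line in ask_llm(prompt).splitlines():
        if "|" in line:
            claim, url = (part.strip() for part in line.split("|", 1))
            rows.append({"claim": claim, "url": url})
    return rows

def spot_check(rows: list[dict], keyword: str) -> None:
    """Cheap sanity check: does each cited page exist and mention the topic?"""
    for row in rows:
        try:
            page = requests.get(row["url"], timeout=10)
            ok = page.ok and keyword.lower() in page.text.lower()
        except requests.RequestException:
            ok = False
        status = "looks plausible" if ok else "NEEDS MANUAL REVIEW"
        print(f"{status}: {row['claim']} ({row['url']})")

# Illustrative usage with a made-up startup name:
rows = cited_answer("What insect-farming technology does Acme Insect Farms use?")
spot_check(rows, keyword="insect")
```

A keyword match is obviously a very weak check, but even this crude filter turns "verify 20 startups by hand" into "skim the handful of rows that got flagged", which is the whole point of insisting on citations in the first place.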
That said, allow me to showcase some of the value I got from this strategy. Usually, finding out the technologies and end-market customers of a list of startups would take me hours if not days of searching and then reading through websites, articles, and often marketing content. Now, within minutes, I get a helpful summary of the different technologies and use cases/end-market industries. I can also ask for an overview of how a company's product and technology are perceived, with diverse sources provided in the output.
What are your thoughts on how to use LLMs while living with hallucinations?