LLMs are most useful for experts on a topic…

…because experts are more likely to know what they don’t know. When users don’t know what they don’t know, so-called “hallucinations” are less likely to be detected, and this seems to be a growing problem, likely exacerbated by the Dunning-Kruger effect. At least, that’s my take on it.

In the study cited in the article, several LLMs were asked to summarize news articles to measure how often they “hallucinated,” i.e., made up facts.
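Roughly, a setup like that comes down to counting how many summaries assert something the source article doesn’t support. Here’s a minimal sketch in Python of that accounting, not the study’s actual methodology; `generate_summary` and `contains_unsupported_claims` are hypothetical stand-ins for the model call and the fact-checking step.

```python
# Minimal sketch of estimating a hallucination rate (assumed setup,
# not the cited study's tooling): summarize each article, flag summaries
# that introduce unsupported facts, and report the flagged fraction.

from typing import Callable


def hallucination_rate(
    articles: list[str],
    generate_summary: Callable[[str], str],
    contains_unsupported_claims: Callable[[str, str], bool],
) -> float:
    """Fraction of summaries that assert facts the source article doesn't support."""
    if not articles:
        return 0.0
    flagged = 0
    for article in articles:
        summary = generate_summary(article)
        # A summary "hallucinates" if it contains at least one unsupported claim.
        if contains_unsupported_claims(article, summary):
            flagged += 1
    return flagged / len(articles)


if __name__ == "__main__":
    # Toy demo with trivial stand-ins, just to show the arithmetic.
    articles = ["The mayor opened a new library on Monday."] * 4
    copy_model = lambda text: text            # "model" that just copies the article
    never_flags = lambda article, summary: False
    print(hallucination_rate(articles, copy_model, never_flags))  # 0.0
```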

The models showed different rates of “hallucination”: OpenAI’s had the lowest (about 3%), followed by Meta’s (about 5%) and Anthropic’s Claude 2 (over 8%), with Google’s Palm chat the highest (27%).

Source