Using AI at work: Hype vs. reality
A recent New York Times story offers insights.
Using LLMs as a “second pair of eyes” or as a fallible assistant seems to work well. Automation also works when the instructions are clear and the objectives are unambiguous. In both cases, human agency remains central.
Use case #15 in the article, “Review medical literature,” reminded me of a study I shared earlier (How do LLMs report scientific text?). That study showed that LLMs systematically exaggerate the claims found in the original text. The user in this case, a medical imaging scientist, is aware of the danger. When a tool isn’t foolproof, the user’s expertise and awareness make all the difference.
The high-demand use cases are quickly scaling into independent businesses with more standardized output, often built as thin wrappers around an LLM core. I suspect some will be marketed as “magic,” and resisting that hype will take the same combination of expertise and awareness.