2026-04-08 10:16:09 UTC

Nate Gaylinn on Nostr: Why do you say "AI is operating in substantially different modes when it's ...

Why do you say "AI is operating in substantially different modes when it's hallucinating to when it's retrieving factual information"? My understanding is that LLMs and the tooling built around them have no notion of what is "factual"; they merely produce statistically probable text. The error rate has gone down, but I think the main innovation driving that is just running several hidden prompts for every user interaction, asking the LLM to self-correct before it says anything. That's not distinguishing factual information, though, just filtering out low-probability responses (which are more likely to be errors, but not necessarily).
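The hidden self-correction loop described above can be sketched as follows. This is a minimal illustration, not any vendor's actual pipeline: `call_model` is a hypothetical stand-in for a real LLM API call, and the critique prompt wording is invented for the example.

```python
from typing import Callable


def self_correcting_reply(
    user_prompt: str,
    call_model: Callable[[str], str],
    rounds: int = 2,
) -> str:
    """Ask the model for a draft, then run hidden critique prompts
    asking it to revise its own answer before replying to the user.
    Only the final revision is shown; the intermediate drafts are hidden."""
    draft = call_model(user_prompt)
    for _ in range(rounds):
        # Hidden prompt: the model never "checks facts" against a ground
        # truth here; it just re-samples conditioned on a critique request.
        critique = (
            "Review the following answer for errors and rewrite it "
            f"if needed.\nQuestion: {user_prompt}\nAnswer: {draft}"
        )
        draft = call_model(critique)
    return draft
```

Note that every step is still next-token prediction; the loop only biases output toward responses the model itself rates as probable, which correlates with (but does not guarantee) correctness.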