I'll check out that paper when I get the chance. Sounds interesting. But, still, these models *don't* model uncertainty, right? They don't know what they know or how they know it. They don't have any notion of authoritative sources, or even where the text they're reproducing comes from. They don't have a knowledge graph or relational database of facts. They don't have any notion of logical correctness, except for correct examples in their training corpus and "tools" if those are provided/used. Right?
That's my disconnect here. Yes, we're doing more elaborate training to reduce the error rate. But I think what we call "hallucination" is just a way of describing the fundamental operation that LLMs do (without necessarily implying anything about correctness). New techniques constrain and reinforce that hallucination to make it less error-prone, but... it's still hallucinating, all the time. There is no "factual mode" that I know of. Right?
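For what it's worth, here's roughly what I mean by "the fundamental operation", as a minimal sketch. The `model` function is hypothetical, just a stand-in for whatever returns next-token logits; the point is that every step is the same sample-from-a-distribution move, and nothing in the loop marks an output as factual or not:

```python
import numpy as np

def generate(model, prompt_tokens, max_new_tokens=50, temperature=1.0):
    # `model` is a placeholder: takes a token sequence, returns logits
    # (one score per vocabulary entry) for the next token.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)
        probs = np.exp(logits / temperature)
        probs /= probs.sum()              # softmax -> probability distribution
        next_token = np.random.choice(len(probs), p=probs)
        tokens.append(int(next_token))    # same sampling step whether the
                                          # continuation happens to be true or not
    return tokens
```

There's no branch in there for "retrieve a fact" vs. "make something up"; constraining the distribution (better training, RLHF, retrieval, tools) shifts the probabilities, but the mechanism stays the same.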