Roberto von Archimboldi on Nostr
There are probably some actual AI researchers on here, maybe they will be able to answer this:
LLMs are next word prediction machines. It is extraordinary to me that they have so much semantic accuracy. In other words, it is astonishing that they ever produce anything truthful in response to a query. In fact, they are so accurate that people have begun to talk about hallucinations. The machine never believes anything. It just churns out plausible sentences. It is striking that the subset of woefully misleading sentences is small enough to merit the label of a hallucination.
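(To make the claim concrete: "next word prediction" here means the model repeatedly turns the text so far into a probability distribution over possible next tokens and samples one. The toy vocabulary and probabilities in this sketch are invented purely for illustration; a real LLM learns that mapping from data, but the generation loop has the same shape.)

import random

def next_token_distribution(context):
    # Stand-in for the model: a real LLM learns this mapping from context
    # to next-token probabilities; here it is a hard-coded toy table.
    table = {
        (): {"The": 0.6, "A": 0.4},
        ("The",): {"capital": 0.5, "answer": 0.5},
        ("The", "capital"): {"of": 0.9, "city": 0.1},
    }
    return table.get(tuple(context), {"<end>": 1.0})

def generate(max_tokens=10):
    context = []
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        tokens, weights = zip(*dist.items())
        token = random.choices(tokens, weights=weights)[0]
        if token == "<end>":
            break
        context.append(token)
    return " ".join(context)

print(generate())  # e.g. "The capital of"

Nothing in the loop ever checks whether the emitted sentence is true; it only checks that each next token is likely given the ones before it.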
I get the impression that developers think the hallucination problem can be solved without changing the basic mechanism, i.e. without trying to give the thing a semantic model. Why is that a realistic assumption?
#LLM, #AI
Published at 2024-09-10 10:47:33 UTC

Event JSON
{
  "id": "be82624d6bb3715e3079b2f125cf737d9ea484791fdd2714e4bf418f13a71ee6",
  "pubkey": "14965443867d679e8aefaf4b76d6af788bf00e8b2dcff9cbe51a110f2ba4c6e7",
  "created_at": 1725965253,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://kolektiva.social/@RobertoArchimboldi/113112858864131758",
      "web"
    ],
    [
      "t",
      "llm"
    ],
    [
      "t",
      "ai"
    ],
    [
      "proxy",
      "https://kolektiva.social/users/RobertoArchimboldi/statuses/113112858864131758",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://kolektiva.social/users/RobertoArchimboldi/statuses/113112858864131758",
      "pink.momostr"
    ],
    [
      "-"
    ]
  ],
  "content": "There are probably some actual AI researchers on here, maybe they will be able to answer this:\nLLMs are next word prediction machines. It is extraordinary to me that they have so much semantic accuracy. In other words it is astonishing that they ever produce anything truthful in response to a query. It fact they are so accurate that people have begun to talk about hallucinations. The machine never believes anything. It just churns out plausible sentences. It is striking that the subset of woefully misleading sentences is small enough to merit the label of a hallucination. \n\nI get the impression that developers think that without changing the basic mechanism, ie without trying to give the thing a semantic model, you can solve the hallucination problem. Why is that a realistic assumption? \n\n#LLM, #AI",
  "sig": "349bb4b9b9a46f6adbc5fdec954553b913ff2dded7ec040ba51a5d1246fbda98c0b720404abde68808601011fd7a63a8b6c36ef655080eaa0979ee4c5cdfb626"
}
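For readers unfamiliar with the Event JSON above: in the Nostr protocol (NIP-01), the "id" field is the SHA-256 hash of a canonical serialization of the event's other fields, and "sig" is a signature over that id by "pubkey". The following is a minimal sketch of recomputing the id, assuming the NIP-01 serialization order and ignoring the character-escaping edge cases the spec defines.

import hashlib
import json

def nostr_event_id(event):
    # NIP-01: the id is the SHA-256 of the JSON array
    # [0, pubkey, created_at, kind, tags, content],
    # serialized without extra whitespace.
    payload = [0, event["pubkey"], event["created_at"], event["kind"],
               event["tags"], event["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: parse the event JSON above into a dict and compare
# nostr_event_id(event) against event["id"].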