Andrew Anglin on Nostr: LLM job replacement is almost as big of a hoax as "AGI." Hallucinations are ...
LLM job replacement is almost as big of a hoax as "AGI." Hallucinations are fundamental to the technology.
Mythos isn’t available for review, but according to Anthropic’s own data, it only hallucinates 3% less than Opus. This whole industry/media hype around it sure looks like a desperate attempt to prevent the bubble pop.
Wow, Firefox had a bunch of unfixed bugs? Who could have imagined such a thing. Can Mythos keep it from crashing with ten tabs open on a MacBook with 16 GB of RAM?
Can it write code that doesn’t take more time to fix than it would to write it manually?
It’s all, conveniently, a secret.
Regardless, no one doubted that more compute and development would lead to more “powerful” models, but the reality is that multipass didn’t fix hallucinations, can in fact make them worse via this telephone effect, and there is no other solution anyone can think of.
LLMs are efficient research assistants on a personal level, where they can be fact-checked. I don’t read think tank papers anymore; I have them summarized in normal language. But even on a fixed data set, all of these LLMs still hallucinate; it’s just that I can catch it because I understand the underlying concepts. This can’t scale. And it doesn’t even replace a professional research assistant, it just maybe makes their job easier.
I think these lunatics actually thought “a world war doesn’t matter because by the time shit gets real AI will fix it.” Instead, what we are likely to see is the economic fallout from this war being blamed on the AI bubble.
I’ll write this stuff up at essay length in the future.
Published at
2026-04-25 19:52:18 UTC