Join Nostr
2026-04-25 19:52:18 UTC

Andrew Anglin on Nostr: LLM job replacement is almost as big of a hoax as "AGI." Hallucinations are ...

LLM job replacement is almost as big of a hoax as "AGI." Hallucinations are fundamental to the technology.

Mythos isn’t available for review, but according to Anthropic’s own data, it only hallucinates 3% less than Opus. This whole industry/media hype around it sure looks like a desperate attempt to prevent the bubble pop.

Wow, Firefox had a bunch of unfixed bugs? Who could have imagined such a thing. Can Mythos keep it from crashing with ten tabs open on a MacBook with 16 GB of RAM?

Can it write code that doesn’t take more time to fix than it would take to write manually?

It’s all, conveniently, a secret.

Regardless, no one doubted that more compute and development would lead to more “powerful” models. But the reality is that multipass didn’t fix hallucinations; it can in fact make them worse through this telephone effect, and there is no other solution anyone can think of.

LLMs are efficient research assistants on a personal level, where they can be fact-checked. I don’t read think tank papers anymore; I have them summarized in normal language. But even on a fixed data set, all of these LLMs still hallucinate. It’s just that I can catch it, because I understand the underlying concepts. That can’t scale. And it doesn’t even replace a professional research assistant; at best, it makes their job easier.

I think these lunatics actually thought “a world war doesn’t matter because by the time shit gets real AI will fix it.” Instead, what we are likely to see is the economic fallout from this war being blamed on the AI bubble.

I’ll write this stuff up at essay length in the future.