Troed Sångberg on Nostr
tl;dr: The whole LLM-based "AI" revolution we're living through right now was started by Google releasing a paper detailing the architecture LLMs now use. That was almost a decade ago.
These models are trained once and then used for various tasks, but they don't acquire new abilities or learn from usage. Their neural networks are frozen.
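The "frozen" point above can be made concrete with a minimal sketch (a hypothetical toy model, not a real LLM): inference is a read-only computation over fixed weights, so using the model any number of times never changes its network.

```python
# Toy illustration (assumption: not any real LLM API): weights are set
# once at "training" time, and prediction never writes to them.

class ToyModel:
    def __init__(self, weights):
        self.weights = list(weights)  # fixed after construction

    def predict(self, x):
        # Read-only computation (a dot product); no weight updates occur.
        return sum(w * xi for w, xi in zip(self.weights, x))

model = ToyModel([0.5, -1.0, 2.0])
before = list(model.weights)

# Use the model many times, as repeated chat sessions would.
for _ in range(1000):
    model.predict([1.0, 2.0, 3.0])

after = list(model.weights)
assert before == after  # frozen: usage alone never modifies the weights
```

An architecture that learns from usage would, by contrast, write back to its parameters during deployment — which is exactly what the standard transformer inference loop does not do.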
A possible architecture for AI models that can evolve was presented a few months ago. Again by Google.
Don't base your opinions on what AI can and cannot do on one single architecture.
https://www.youtube.com/watch?v=VTQR9n3aqNU

Published at 2026-04-29 14:44:06 UTC

Event JSON
{
  "id": "2715955692cfef621997c74a00e16fc69220218b9232024a319f92a90ac54eb3",
  "pubkey": "2aad90c13120482441b028cb1eefad12fafca60fa257c35fe7aedf5c94c6b90a",
  "created_at": 1777473846,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://swecyb.com/@troed/116488525991880022",
      "web"
    ],
    [
      "proxy",
      "https://swecyb.com/users/troed/statuses/116488525991880022",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://swecyb.com/users/troed/statuses/116488525991880022",
      "pink.momostr"
    ],
    [
      "-"
    ]
  ],
  "content": "tl;dr: The whole LLM-based \"AI\" revolution we're living through right now was started by Google releasing a paper detailing the architecture LLMs now use. That was almost a decade ago.\n\nThese models are trained once and then used for various tasks, but they don't acquire new abilities or learn from usage. Their neural networks are frozen.\n\nA possible architecture for AI models that can evolve was presented a few months ago. Again by Google.\n\nDon't base your opinions on what AI can and cannot do on one single architecture.\n\nhttps://www.youtube.com/watch?v=VTQR9n3aqNU",
  "sig": "8e9d1293f39555db7a217c1e1fe2a48d35cc81c7c308e550af301d231a3a9c7641de8ca46bc3745a7072e36abf8052169bf6acd1031fac9a805cf0ee302f6121"
}