Peter Bloem on Nostr: I made the familiar point yesterday during a panel discussion that even if AI were ...
I made the familiar point yesterday during a panel discussion that even if AI were approaching human intelligence, we're nowhere near human power efficiency (in "intelligence per watt")
But is that actually true anymore? We need huge power to train models, but once we have them trained, distilled and quantized, what do we have?
The human brain draws about 20 watts. An M4 doing local inference on a heavily quantized LLM won't draw more than 40 watts.
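The comparison above can be sketched as a back-of-the-envelope calculation. The two power figures (~20 W and ~40 W) come from the post; the throughput number is a hypothetical assumption added for illustration, since the post gives no tokens-per-second figure:

```python
# Back-of-the-envelope energy comparison for local LLM inference.
# Only the two power draws come from the post; the throughput is an
# assumed, illustrative value, not a measurement.

BRAIN_WATTS = 20.0        # rough resting power of the human brain (from the post)
M4_WATTS = 40.0           # assumed ceiling for an M4 running a quantized LLM (from the post)
TOKENS_PER_SECOND = 30.0  # hypothetical local-inference throughput (assumption)

def joules_per_token(watts: float, tokens_per_second: float) -> float:
    """Energy cost of generating one token at a given power draw."""
    return watts / tokens_per_second

power_ratio = M4_WATTS / BRAIN_WATTS
token_cost = joules_per_token(M4_WATTS, TOKENS_PER_SECOND)

print(f"M4 draws {power_ratio:.0f}x the brain's power")       # → 2x
print(f"~{token_cost:.2f} J per token at {TOKENS_PER_SECOND:.0f} tok/s")
```

Under these assumptions the gap is only a factor of two in raw power, which is the post's point; a fair "intelligence per watt" comparison would also need some common measure of work done per joule, which this sketch deliberately leaves open.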
Published at 2025-11-28 15:06:56 UTC

Event JSON
{
"id": "7634f69fd59314cff4c7d36c4c55ea167d578577766f0f2daf4307465a436378",
"pubkey": "dca476af1f6be197c44daa9d72a0124577b7d49d3f448843a898d82fb2dc0282",
"created_at": 1764342416,
"kind": 1,
"tags": [
[
"proxy",
"https://sigmoid.social/@pbloem/115627944575322273",
"web"
],
[
"proxy",
"https://sigmoid.social/users/pbloem/statuses/115627944575322273",
"activitypub"
],
[
"L",
"pink.momostr"
],
[
"l",
"pink.momostr.activitypub:https://sigmoid.social/users/pbloem/statuses/115627944575322273",
"pink.momostr"
],
[
"-"
]
],
"content": "I made the familiar point yesterday during a panel discussion that even if AI were approaching human intelligence, we're nowhere near human power efficiency (in \"intelligence per watt\")\n\nBut is that actually true anymore? We need huge power to train models, but once we have them trained, distilled and quantized, what do we have?\n\nThe human brain draws abt 20 Watts. An M4 doing local inference on a heavily quantized LLM won't draw more than 40 Watts.",
"sig": "e8c1f07bc22ca1edb09a518deaff89969339eec00bccf2a33657b0f52e307e66d354ddbb4f485e825d6dc62ed78b22a834610a9a967a4c974d648b7e5024251d"
}