Jeremy Kahn on Nostr:
"AI" product development is in a strategic falsifiability crisis
The crisis is shaped like the No True Scotsman fallacy, Sunk Cost fallacy, and conspiracy theories ALL AT THE SAME TIME
there is no possible accumulated evidence that would lead boosters to conclude that "an LLM isn't an appropriate artifact to address this problem"
I see them throwing LLMs at *already beyond-adequately solved problems* (units conversion! chess! constraint fitting!), getting *worse* results, and carrying on
Published at 2025-06-17 16:43:05 UTC

Event JSON
{
  "id": "ad2e5f637967bb81a350f6c3c6ee08e203b4da24a7b00bc844a8e834850c19ff",
  "pubkey": "8a84428d4fe2db6a33893bd193716515f8260f2918053523f2dc7e550688a839",
  "created_at": 1750178585,
  "kind": 1,
  "tags": [
    ["proxy", "https://dair-community.social/@trochee/114699703770848155", "web"],
    ["proxy", "https://dair-community.social/users/trochee/statuses/114699703770848155", "activitypub"],
    ["L", "pink.momostr"],
    ["l", "pink.momostr.activitypub:https://dair-community.social/users/trochee/statuses/114699703770848155", "pink.momostr"],
    ["-"]
  ],
  "content": "\"AI\" product development is in a strategic falsifiability crisis\n\nThe crisis is shaped like the No True Scotsman fallacy, Sunk Cost fallacy, and conspiracy theories ALL AT THE SAME TIME\n\nthere is no possible accumulated evidence that would lead boosters to conclude that \"an LLM isn't an appropriate artifact to address this problem\"\n\nI see them throwing LLMs at *already beyond-adequately solved problems* (units conversion! chess! constraint fitting!), getting *worse* results, and carrying on",
  "sig": "50b91a78ab3f6ae51e66549d839ccbfefd6cac9a986a49e3213d8805501d8dc77b84546bbad6b9a91844fd85ba9592f0b5fc57a340a4c5071b079437a809696e"
}
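For context on the JSON above: the "id" field is not arbitrary. Under the Nostr protocol (NIP-01), an event's id is the SHA-256 hash of a canonical, whitespace-free JSON serialization of [0, pubkey, created_at, kind, tags, content]. A minimal sketch of that computation in Python (the function name and the toy inputs are illustrative, not taken from the event above):

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: the SHA-256 digest of the
    canonical JSON array [0, pubkey, created_at, kind, tags, content],
    serialized with no whitespace and UTF-8 encoded."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no spaces between items, per NIP-01
        ensure_ascii=False,     # serialize UTF-8 characters as-is
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Toy example (made-up content; does not reproduce the id above):
example_id = nostr_event_id(
    "8a84428d4fe2db6a33893bd193716515f8260f2918053523f2dc7e550688a839",
    1750178585, 1, [], "hello")
print(len(example_id))  # a 64-character lowercase hex string
```

Note this sketch omits the signature check: "sig" is a Schnorr signature over the id, which requires a secp256k1 library to verify.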