There's a decision-making method gaining traction in AI circles right now.
You spin up five AI agents, each with a distinct thinking role: first-principles thinker, contrarian, systems thinker, domain expert, pragmatist. Plus a chairman who runs the process.
Each agent drafts an answer independently. The chairman strips the names off and circulates them anonymously. Each agent critiques the others without knowing who wrote what. Then the chairman synthesises the best answer.
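The mechanics of that protocol can be sketched in a few lines. This is a minimal Python sketch with stubbed agents: the role names come from the description above, but every function body, string format, and the `chairman_synthesise` heuristic are illustrative assumptions, not any real framework's API. In practice each stub would be an LLM call.

```python
import random

ROLES = ["first-principles", "contrarian", "systems", "domain expert", "pragmatist"]

def draft(role, question):
    # Stub: a real implementation would prompt an LLM in this role.
    return f"[{role}] answer to: {question}"

def run_round(question):
    drafts = {role: draft(role, question) for role in ROLES}
    # Chairman strips names and shuffles, so critiques are anonymous.
    anonymous = list(drafts.values())
    random.shuffle(anonymous)
    critiques = {}
    for role in ROLES:
        own = drafts[role]
        # Each agent critiques every draft except its own, blind to authorship.
        critiques[role] = [f"[{role}] critique of: {d}"
                           for d in anonymous if d != own]
    return drafts, critiques

def chairman_synthesise(drafts, critiques):
    # Stub: the chairman weighs the anonymous critiques and picks/merges
    # a final answer. Here, a placeholder selection by length.
    return max(drafts.values(), key=len)
```

Note that the only loop here is critique: drafts go in once, get judged, and one comes out. Nothing in the structure lets a draft change.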
The anonymity is the genuinely good part. It forces evaluation on content, not on who said it. Ego gets removed from the room. That alone makes it better than most boardroom decisions.
But it's still modelled on a boardroom. And a boardroom is not where the best decisions get made.
Here's what's missing.
1. No real iteration. Ideas develop the same way people develop: through feedback and iteration. In this model, the iteration is perfunctory. Ideas get assessed, not evolved.
2. No brainstorming. Effective brainstorming deliberately separates "divergent thinking" (the process of imagining all possibilities, including crazy ones) from convergent thinking (the process of whittling them down to what's possible within the constraints of time, space and budgets).
The best discussions with the best ideas happen when you go wide first, then narrow. This protocol puts a convergent black-hat thinker in the room at the same time as the divergent thinkers. That collapses the exploration phase before it even starts.
3. Optimised for evaluation, not ideation. It's designed to select the best existing idea rather than generate one that's better than any individual agent could produce.
4. The chairman selects rather than facilitates. There's a difference between choosing the best idea in the room and facilitating the development of the best collective idea. A chairman does the first; a facilitator does the second. Different functions entirely. The first is optimised for speed and efficiency of process; the second for the quality and robustness of the decision.
5. Swimlane identities. Each agent operates within a narrow role. "Domain expert." "Contrarian." Knowledge seldom comes from people locked into narrow identities. It comes from the collision between them.
The net result: no ideation, a marginally better selection process, and a high risk that an underdeveloped idea gets chosen. Worse, because the process looks robust and complex, the output runs the risk of getting acted on with more confidence than it deserves.
I studied this exact problem during my thesis. I had groups communicating anonymously, analysing a poem (I was an English major), exchanging ideas across rounds. The quality of analysis improved dramatically in the second round - not because the evaluation got sharper, but because people were measurably building on each other's thinking. They were interchanging ideas, not just ranking them.
I've seen the same thing running mastermind groups since. When different minds get in a room and start sparring and introducing ideas, a network effect kicks in. Ideas emerge that no single person in the room could ever have produced. That process creates ideas through a principle that we call "advance and extend". And it's what this five-agent model doesn't have.
The fix isn't complicated. Let agents build on each other's drafts across multiple rounds instead of just critiquing them. Replace the chairman with a facilitator agent whose job is synthesis, not selection. Separate the divergent phase from the convergent phase. And stop assigning swimlane identities that constrain what each agent is allowed to think.
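Those four fixes can be sketched in the same stubbed style. Again, this is a hypothetical sketch, not a reference implementation: every name and function body is an assumption, and real agents would be LLM calls. The point is the shape of the process, not the stubs.

```python
def extend(draft, idea):
    # Stub: an agent advances `draft` by building on `idea`
    # ("advance and extend"), instead of critiquing it.
    return f"{draft} <- extended with ({idea})"

def divergent_phase(seeds, rounds=2):
    # Go wide first: no convergent/black-hat agent is in the room yet.
    pool = list(seeds)
    for _ in range(rounds):
        # Each idea is advanced by colliding it with a neighbouring idea,
        # so drafts evolve across rounds rather than being ranked once.
        pool = [extend(pool[i], pool[(i + 1) % len(pool)])
                for i in range(len(pool))]
    return pool

def convergent_phase(pool, budget=3):
    # Only now apply constraints and whittle the pool down.
    # Placeholder criterion: keep the `budget` shortest ideas.
    return sorted(pool, key=len)[:budget]

def facilitate(survivors):
    # Facilitator agent: synthesises one answer FROM all survivors,
    # rather than selecting a single winner from among them.
    return " | ".join(survivors)
```

Two structural differences from the chairman model stand out: ideas mutate across rounds instead of being scored once, and the divergent loop finishes before any convergent criterion is applied. No agent carries a fixed role identity between rounds.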
The five-agent idea is a step forward. But it's a step from a bad process to a less bad process.
The leap - to a process that produces genuinely emergent thinking - requires a different model entirely. And that model already exists. It just hasn't been applied to AI ... yet.