In 1973, the Yom Kippur War shocked the Israel Defense Forces, not because the intelligence wasn't there, but because no one questioned the prevailing assumptions. The attack had been foretold by data, intercepted messages, and human intuition alike. But groupthink, that seductive undertow of consensus, drowned the signal in noise. Years later, this failure inspired the doctrine now known in intelligence and military circles as the "10th Man Strategy": if nine people in a room agree, it becomes the duty of the tenth to disagree, regardless of how compelling the majority appears.

This isn’t just a historical anecdote. It’s a universal truth dressed in operational doctrine. Groupthink is not only an impediment to intelligence; it is an endemic flaw of human collectives: boardrooms, think tanks, startup teams, political war rooms. Wherever ego meets urgency, wherever performance incentives reward harmony over heresy, the consensus becomes a trap. The 10th Man isn’t a rebel. He’s not a contrarian for sport. He is the immune system of high-functioning decision-making.

Yet in an age where digital groupthink spreads faster than influenza, we must ask: who plays the 10th Man when the room is no longer physical? When your team lives in Slack threads and Zoom rooms, when executives repeat each other like algorithmic reflections, and when your biggest competitor is a consensus-driven committee powered by mood boards and KPIs, how do you safeguard against collective delusion?

Enter ChatGPT

It’s an odd proposition, perhaps even laughable at first: a large language model trained on the internet's archive of consensus, conspiracy, and contradiction, being deployed as a safeguard against consensus itself. But this irony is exactly what makes it useful. Because ChatGPT does not need social approval. It doesn't fear demotion. It cannot be bullied into silence or swept up in the charisma of a visionary CEO. It exists in the perfect posture for the 10th Man: eternally ready to entertain an opposing view, without ego, without loyalty, without fear.

And here lies the genius application of artificial intelligence in executive decision-making: not as a mirror, but as a blade. Not as a tool for efficiency, but as a tool for dissonance. In fact, when properly prompted, when given the role of The Disruptor, ChatGPT becomes something uncanny: a detached provocateur, an artificial devil’s advocate, a simulation of dissent designed not to be right, but to make you righter.

This isn’t a hypothetical exercise. It’s an operational tactic that could save companies, correct policy, and prevent catastrophe. Think about your last team decision, the one everyone agreed on. Maybe it was the decision to launch that feature ahead of schedule, to pivot the brand toward a trending narrative, to chase a Series B at a valuation that felt just slightly too optimistic. In the afterglow of excitement, it probably felt bulletproof. Consensus feels good. But it also makes you blind.

Now imagine this: before executing the plan, you copy the memo, the deck, the bullet-pointed strategy into a prompt window. You tell your assistant not to summarize, not to praise, not to speed it up, but to break it apart. To simulate the voice of the dissenter. To identify not what’s great about your idea, but what’s fragile, flawed, miscalculated. You instruct it to play the 10th Man.

What emerges is not a prophecy. But it is an inoculation. A way to stress-test logic before reality does. And unlike a human 10th Man, who might hold back out of respect, fear, exhaustion, or internal politics, your chat model doesn't care. It is algorithmically immune to hierarchy. It does not need to be liked to be valuable. It is the purest, most objective version of dissent you can inject into a boardroom.

But this raises a more profound truth about our time. We are entering an age where tools are only as good as the roles we assign to them. AI can be a calculator, a writer, a search engine. Or it can be a counterintelligence node for your internal consensus. It can model alternative realities, not just optimize the one you’re chasing. In this way, AI is not an assistant. It’s an internal statecraft mechanism. And you, the user, become something closer to a sovereign, a leader orchestrating opposition within your own camp, so that the real enemy doesn’t catch you by surprise.

Of course, the utility of the 10th Man doesn’t begin or end with ChatGPT. It’s a philosophy. It’s a discipline of thinking that refuses to settle for agreement. It demands friction. It hunts for blind spots. And it trains you to hear the signal within the silence of a nodding room. But in a world addicted to confirmation bias, where even the smartest teams suffer from collective tunnel vision, a synthetic dissenter might be the only one brave enough to say the thing you don’t want to hear.

And let’s be clear, ChatGPT isn’t always right. In fact, it often isn’t. But that’s not the point. The 10th Man isn’t there to offer better ideas. He’s there to challenge yours. His function is interruption. His allegiance is not to correctness but to the integrity of the process. And when wielded with the right intention, this AI becomes not a yes-man, but a no-machine. Not a servant, but a saboteur of your assumptions.

So how do we formalize this? How do we train executives, boards, and entrepreneurs to integrate this counterintelligence methodology into their process? It starts with a prompt. A ritual. A moment of discipline before every critical decision where someone, or something, is asked to disagree.

And here’s where it gets interesting: ChatGPT’s true strength isn’t in the answers it provides, but in the questions it helps you ask. If you prompt it like an intern, it behaves like one. But prompt it like a military analyst, a political saboteur, or a cynical board member, and it steps into character with uncanny precision. The AI becomes the role you assign it.

Which is why we must treat prompting not as command-giving, but as role-assignment. In the theater of your strategy, you don’t just feed the script to ChatGPT and wait for applause. You cast it in a role, and that role, today, is the 10th Man.

Let us formalize it. Let us build it into the decision cycle. Let us make it protocol, not novelty.

This is where the article becomes instruction. What follows is a living ritual, a prompt designed not to flatter, but to fracture. A prompt to activate ChatGPT’s value not as consensus builder, but as consensus destroyer. Use it before major decisions. Use it before product launches. Use it when your gut says “yes” and your team says “absolutely.” That’s when you need it most.

Prompt:

“Act as the 10th Man. A room of nine executives has reviewed the following plan and is unanimously convinced of its merit. Your job is to find every flaw, blind spot, risk, or dangerous assumption within this strategy, regardless of how compelling the consensus may be. Focus on structural vulnerabilities, historical parallels, political and cultural friction, and psychological or financial overconfidence. Do not summarize. Do not praise. Do not hedge. Your sole duty is to find what everyone else is missing.”

That’s it. One ritual. One line. But in that line is an entire discipline of intelligence.
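For teams that want to make the ritual part of their tooling rather than a copy-paste habit, the prompt can be captured once and reused before every major decision. The sketch below is one possible shape, not an official integration: the `TENTH_MAN_PROMPT` constant and `tenth_man_messages` helper are hypothetical names, and the message format follows the common system/user convention used by most chat-completion APIs. The helper only assembles the messages; the actual model call is left to whichever client your team already uses.

```python
# A reusable wrapper around the 10th Man ritual. The helper builds the
# message list only; pass the result to whatever chat-completion API
# your organization uses (the client call itself is omitted here).

TENTH_MAN_PROMPT = (
    "Act as the 10th Man. A room of nine executives has reviewed the "
    "following plan and is unanimously convinced of its merit. Your job "
    "is to find every flaw, blind spot, risk, or dangerous assumption "
    "within this strategy, regardless of how compelling the consensus "
    "may be. Focus on structural vulnerabilities, historical parallels, "
    "political and cultural friction, and psychological or financial "
    "overconfidence. Do not summarize. Do not praise. Do not hedge. "
    "Your sole duty is to find what everyone else is missing."
)

def tenth_man_messages(plan: str) -> list[dict]:
    """Cast the model in the dissenter's role (system message),
    then hand it the plan to attack (user message)."""
    return [
        {"role": "system", "content": TENTH_MAN_PROMPT},
        {"role": "user", "content": plan},
    ]

if __name__ == "__main__":
    memo = "Launch the feature two weeks early to beat the competitor."
    for msg in tenth_man_messages(memo):
        print(f"{msg['role']}: {msg['content'][:60]}...")
```

Putting the dissent prompt in the system message, rather than prepending it to the plan, is the "role-assignment" described above: the model is cast as the 10th Man before it ever sees the strategy, so the plan arrives as material to be attacked, not as a request to be fulfilled.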