There's a scene in The Godfather that everyone remembers: Michael Corleone says he wants to keep his hands clean, that the family business is going legit. Meanwhile, the bodies keep piling up.
Yeah. OpenAI just closed a deal with the Pentagon — the U.S. Department of Defense — and, in a coincidence that would be hilarious if it weren't so ominous, the head of the company's hardware division just quit.
Let's chew on this together.
What actually happened
OpenAI, the very same company that was born as a nonprofit organization with the stated mission of "ensuring artificial intelligence benefits all of humanity," has been morphing into a corporate war machine over the past few months. First came the conversion to a for-profit entity. Then the billions from Microsoft. Now, a direct contract with the Pentagon.
And right in the middle of this transition, the guy responsible for building the hardware that brings all of this to life — the physical heart of the operation, the chips, the servers, the infrastructure that makes ChatGPT run — just stood up from his desk and walked out.
Come on, tell me that's not a sign.
The elephant in the room nobody wants to see
Mainstream financial media will spin this departure as "personal decisions" or "pursuing new challenges." The official narrative will be polished, gift-wrapped in a press release dripping with HR-speak.
But anyone who's read two pages of Nassim Taleb knows that actions speak louder than words. When someone with skin in the game — someone who's on the inside, who knows what's being built, who sees what goes on behind the curtain — decides to leave right after a deal with the most powerful military machine on the planet, that's not a coincidence. That's information.
Remember Ilya Sutskever? The guy tried to stage an internal coup at OpenAI, got pushed out, and eventually left to start his own company. Remember the entire AI safety team that jumped ship? Every single one of these departures is a piece of the puzzle. And the picture taking shape isn't pretty.
Military AI isn't science fiction — it's the next business cycle
Let's be practical here, because at the end of the day, anyone reading this wants to know where the money's going.
The militarization of artificial intelligence is one of the biggest investment cycles that will define the next decade. Palantir already rode this wave — the stock multiplied several times over. Anduril, founded by Palmer Luckey (yes, the Oculus guy), turned into a defense-tech monster valued at tens of billions. And now OpenAI is stepping into the arena.
This means two things:
First, defense money is the closest thing to guaranteed money on the planet. The U.S. military budget doesn't shrink. Doesn't matter if the president is Republican or Democrat. The Pentagon always pays. So from a purely financial standpoint, OpenAI is making a rational and potentially very lucrative move.
Second, there's a reputational and talent cost. The best AI minds in the world — the engineers, the researchers, the people who actually make the magic happen — many of them joined OpenAI because of the original mission. "Benefit humanity." Building autonomous weapons for the world's largest military wasn't exactly on the onboarding slide deck.
And when you lose talent of that caliber, it's not like losing a junior analyst. It's like the Golden State Warriors losing Curry. It changes everything.
What this means if you invest
If you have exposure to AI companies — directly or indirectly through Microsoft, Nvidia, or any tech ETF — you need to understand that the sector is entering a completely different phase. The "cute" phase of AI, with chatbots and image generators, is giving way to the heavy phase: government contracts, military applications, surveillance at scale.
This isn't necessarily bad for your portfolio. It could be excellent, actually. Lockheed Martin and Raytheon made a lot of people rich.
But know where you're putting your money. Be clear-eyed about it. Don't hide behind narratives of "democratizing knowledge" when the end product might be an autonomous drone.
As Walter White would say: "I am not in danger. I am the danger."
The question is: does OpenAI know this — or is it still pretending to be the chemistry teacher?