There's a scene in The Godfather where Michael Corleone says: "It's not personal, it's business." Bullshit — it's always personal when it involves weapons, money, and conscience.
Caitlin Kalinowski, who led OpenAI's robotics division, resigned. The reason? The company's deal with the Pentagon. Plain and simple. The woman looked at the military contract, looked in the mirror, and decided she couldn't stay.
And before some LinkedIn guru shows up saying she "threw her career away," pay attention: this is called skin in the game. Nassim Taleb laid it out beautifully — if you're not willing to pay the price for your convictions, you don't have convictions. You have personal branding.
The Deal Nobody Wants to Talk About Properly
OpenAI, the very same company that was born as a nonprofit "for the benefit of humanity," is looking more and more like every other tech giant that sniffs out government money like a dog sniffs out a bone.
The Pentagon deal isn't exactly news in Silicon Valley. Google already went through this with Project Maven in 2018 — when employees revolted against the use of AI in military drones and the company backed down (or at least pretended to). Microsoft embraced Army contracts without blinking. Amazon supplies cloud infrastructure to the American security apparatus like it's selling coffee at the drive-thru.
But OpenAI was supposed to be different. At least that's what they sold us.
Sam Altman built a narrative that the company existed to ensure artificial intelligence was developed in a "safe and beneficial" way. Safe and beneficial. Write that phrase down. Now put it next to a contract with the United States Department of Defense and tell me you don't smell the contradiction.
Hypocrisy Has a Price — and It's Steep
Look, I'm not naive. I'm pro-market. I understand that companies need revenue, that government contracts are fat, and that the world isn't the Garden of Eden. War exists. National defense exists. Military technology has existed since the first human being tied a rock to a stick.
The problem isn't the contract itself. The problem is the lie.
You can't sell yourself as the ethical savior of artificial intelligence and then sign off on a military deal without at least having the decency to explain to your team — and to the public — how this fits into your original mission.
Kalinowski's departure is a signal. And signals matter more than pretty press releases.
When senior people, the ones actually building the product, start slamming the door on their way out, that tells you more about the internal state of a company than any quarterly report. Remember the exodus of Google Brain researchers? Remember when Anthropic was born precisely because a bunch of OpenAI people couldn't take the direction the company was heading anymore?
The Pattern Repeats Itself
It's always the same movie, just a different cast:
1. Company is born with an idealistic pitch
2. Company grows and needs real money
3. Company strikes deals that contradict the original pitch
4. People with principles leave
5. Company hires people who don't care
6. The pitch becomes an empty shell
OpenAI is already at step 4. Maybe step 5.
And this should matter to you — whether you're an investor, an entrepreneur, or just a regular citizen who uses ChatGPT every day. Because the company that's shaping how artificial intelligence will work on this planet is making decisions that affect a hell of a lot more than a stock price.
The Question That Lingers
Kalinowski put her skin in the game. She gave up a leadership position at one of the hottest companies on the planet because the moral price tag got too high.
What about you? If the company you work for did something that contradicts everything you believe in, would you have the guts to stand up and walk out — or would you sit there quietly counting the days until your next bonus?
Don't answer me. Answer yourself. In silence. Because that answer says everything about who you really are.