New Yorker Investigation Targets Sam Altman's Power

A sharp look at how the New Yorker's investigation into Sam Altman reveals deeper failures in OpenAI governance and AI oversight.

Sam Altman Was Never the Plot Twist. The Plot Twist Was Thinking the Structure Would Save Us

The wild part of the New Yorker investigation into Sam Altman is not Sam Altman. It’s that so many smart people looked at OpenAI’s weird nonprofit-capped-profit Rube Goldberg structure and thought, ah yes, this should keep power in check.

Come on.

I’ve spent enough time around founders to know the species. The guy who can look three different people in the eye, promise each of them a slightly different future, and somehow leave all three feeling lucky to be in the room. In tech, we don’t call that dangerous at first. We call it vision. We put it on a podcast. We hand it a term sheet.

So no, my reaction to the New Yorker piece was not shock. It was more like: wait, we’re only now acting surprised that the company building civilization-scale AI turned into a charisma contest?

I’m not saying that as a hater. I’m saying it as a founder who has sat through enough pitch meetings in SoMa, Brooklyn, London, and one deeply cursed hotel lobby in Dubai to recognize the pattern. Silicon Valley has always rewarded the operator who can sell ten futures at once. The strange part was believing OpenAI’s structure would somehow purify that instinct. Mamma mia.

The Most Expensive Vibe Check in Silicon Valley

This matters because OpenAI is not just juicy founder gossip for people who wear $180 “minimalist” sneakers. It’s one of the most important companies on earth. Once you’re at that level, personality problems become public-interest problems very fast.

The New Yorker’s reporting points out that OpenAI just pulled off the biggest fundraising round in Valley history and is sitting at a valuation so absurd it barely sounds real. People are already talking about the company like a future trillion-dollar empire. That’s not startup territory anymore. That’s power-company territory. Nation-state-adjacent territory.

And once you’re there, “can we trust the CEO?” stops being a spicy group chat topic and becomes a governance question with geopolitical consequences. OpenAI isn’t just shipping a fun image toy or a smarter autocomplete. Governments, militaries, schools, media companies, and half the Fortune 500 are reorganizing themselves around what happens inside that building.

That’s why the New Yorker investigation targeting Sam Altman lands harder than a normal founder profile. This is not really about one man’s reputation. It’s about whether the company building frontier AI runs on actual oversight or on vibes, loyalty, and founder gravity.

If it’s the second one, we’re all downstream from a very expensive vibe check.

And yes, the benevolent steward story was always a little too Netflix for me. The guy warning that AI could rival pandemics and nuclear war, while also asking for more power to build it faster? Maybe that’s sincere. Maybe. But it is also unbelievably convenient.

I’ve seen this movie before, just with worse hoodies and smaller checks. The founder says, trust me, I’m the only one worried enough to do this responsibly. Everyone nods because he sounds thoughtful and probably references history. Six months later you realize “responsibly” mostly meant “with fewer people able to stop me.”

OpenAI’s Original Sin

OpenAI’s big promise was never just the product. It was the structure. This wasn’t supposed to be another company claiming it wanted to make the world better while quietly optimizing for shareholder value and office cold brew.

It started as a nonprofit. The board was supposed to prioritize humanity’s safety over the company’s success, even over its survival. Read that again. Not growth. Not market share. Not some polished nonsense about stakeholders. Humanity first. Company second.

Beautiful idea. Also kind of insane.

Because you cannot build one of the most powerful companies in history and pretend it’s a monastery with GPUs. You can write nonprofit language into the charter. You can tell everyone the mission is sacred. But once the machine starts attracting capital, talent, state interest, and ego, the moral framing stops acting like a safeguard and starts acting like premium branding.

That’s why this hits harder than Uber being Uber or WeWork doing WeWork things back in the Adam Neumann fever-dream years. OpenAI didn’t just say we’re here to disrupt. It basically said: we may be building the most dangerous technology humanity has ever touched, so don’t worry, we invented a special structure to protect the public from ourselves.

If that structure turns out to be flimsy, the problem is not cosmetic. It’s the whole plot.

The New Yorker reporting, as summarized here, says Altman publicly emphasized AI risk and safeguards while allegedly working privately to gain more control and resist regulation. If that’s true, that’s not normal founder messiness. That’s the cautionary language itself becoming part of the strategy.

And look, I get the temptation. Founder brain is a hell of a drug. It is ridiculously easy to confuse “I care deeply about this” with “therefore I should control more of it.” Those are not the same sentence. I’ve felt that impulse over tiny boring decisions affecting maybe six people and a Stripe account. Scale it up to AGI and the human ego starts writing itself a heroic screenplay.

If my nonna heard me say “nonprofit-origin myth,” she’d probably smack me with a wooden spoon. But still. That’s what this looks like now. Not a governance breakthrough. A myth that made concentration of power feel morally elegant.

The Founder Playbook: Say “Existential Risk,” Mean “Don’t Slow Me Down”

Here’s my hotter take. AI safety language can work like luxury branding. It signals seriousness. It creates moral atmosphere. It tells politicians, journalists, and the public that the people in charge are not just rich nerds with compute budgets, but philosopher-kings carrying the burden of history.

And sometimes they even mean it. That’s what makes it so effective.

But I’ve watched enough tech people discover ethics exactly when regulation gets close to know the pattern. First, tell the world the technology is too powerful to ignore. Then explain that clumsy lawmakers might ruin everything. Then gently suggest that only a very small circle of enlightened builders — ideally including you, funny how that works — can guide humanity through the danger.

The timing in this case is what really made me raise an eyebrow. While questions about trustworthiness and governance were blowing up again, Altman was also out there publishing grand policy visions about superintelligence, regulation, and redistributing the gains. Maybe some of those ideas are good. Some probably are. But when the same person asking for public trust also benefits from keeping the rules fuzzy, I get skeptical fast.

At that point, “AI safety” starts sounding less like a public mission and more like a moat strategy with better typography.

There’s a psychological trick here that founders, including me on my worst days, are extremely vulnerable to. If enough people tell you you’re uniquely insightful, you start believing your urgency is a substitute for accountability. You think, I see the stakes more clearly than everyone else, so friction is the real danger. From the inside, that feels profound. From the outside, it can look a lot like empire-building.

A friend asked me in Milan last month, over a stupidly expensive espresso in Brera, why AI discourse feels so spiritually manipulative. Brutal question. Perfect question. I think the answer is that when founders talk about extinction risk, they’re not just describing danger. They’re positioning themselves as the priests of danger.

And once you own the fear, you get to sell the solution.

[Image: Sam Altman speaking at a tech conference, surrounded by audience members, highlighting his influence in the industry.]

The 2023 Firing Wasn’t a Glitch. It Was the Demo.

If anyone still thinks OpenAI’s structure can save us, I’d like to direct them to that surreal weekend in 2023 when Sam Altman got fired and then basically unfired himself through sheer network power.

That was not a glitch. That was the product demo.

On paper, the board had authority. In reality, the ecosystem around the company — employees, investors, partners, allies, media pressure, fear of losing access, fear of losing money, fear of losing the future itself — made that authority almost impossible to use for more than five minutes.

That’s the part people keep skipping past. Governance that can be overrun in a weekend is not governance. It’s decorative trim.

The reporting says Altman brought in crisis PR people and mobilized powerful allies. At the same time, there was a financing deal in the background that would have let employees cash out huge amounts of stock. So now imagine being inside that storm. You are not just debating ethics or truthfulness in some clean seminar-room way. You are standing between a founder comeback and a liquidity event big enough to change people’s lives.

Of course the structure buckled. There was too much money in the room. Too much status. Too much momentum. Too much fear.

And then you get the quote that says the quiet part out loud: “We immediately went to war.”

Exactly.

Not we initiated a careful governance review. Not we respected the nonprofit’s process. War. Because that’s what happens when a supposedly mission-first institution collides with real power. The moral language evaporates and everybody reaches for the sharp objects.

I’ll admit something slightly embarrassing. When that whole saga happened, part of me admired the move. Not morally. Tactically. The guy got removed and then rallied the entire table around his indispensability in real time. That is founder judo at Olympic level.

And that is precisely why it’s dangerous.

We are way too easily impressed by the exact traits that make oversight fail.

This Is Bigger Than Sam Altman

At a certain point I stop caring whether Sam Altman is uniquely manipulative, uniquely misunderstood, or just a very normal Silicon Valley power broker scaled up to absurd stakes. The more important point is that frontier AI keeps concentrating world-shaping systems in leaders whose incentives are nearly impossible to audit from the outside.

That’s why the New Yorker investigation targets Sam Altman but lands on something much bigger. The trust architecture is broken. “Trust me, I’m worried too” is not a governance model. It’s branding.

The piece cites concerns from people like Ilya Sutskever and Dario Amodei, which matters because these are not random posters with anime avatars and a God complex. These were central insiders. It also cites claims from a board member that Altman was “not constrained by the truth,” plus complaints from Microsoft executives that he had misrepresented or abandoned agreements.

Maybe some of that is colored by internal power struggles. Of course it is. Silicon Valley fights are never pure. Everyone has incentives, grudges, stock options, and a Substack draft fermenting somewhere in their soul. But when serious people across different contexts keep raising the same kind of trust concerns, the responsible response is not “well, geniuses are complicated.”

No grazie.

What matters now is the boring stuff. The unfashionable stuff. Independent oversight with actual teeth. Conflict-of-interest transparency. Rules that do not disappear the second another zero gets added to the valuation. Boards that cannot be emotionally blackmailed by employees whose paper wealth is on the line. External review that isn’t handpicked by the company like some Michelin guide for ethics.

And yes, that sounds less sexy than “build AGI for the benefit of humanity.” Good. Real accountability is usually boring. It’s meetings, disclosures, recusal rules, audits, reporting lines, enforcement powers. Founders hate that stuff because it slows the narrative down.

Societies need it because narratives are cheap.

I think that’s the real thing exposed here. Not that one ambitious man may have acted like an ambitious man. Silicon Valley has been mass-producing that archetype since before I had chest hair. The real collapse is that we wrapped an old power pattern in nonprofit language, AI safety vocabulary, and world-saving aesthetics, then acted shocked when it behaved like concentrated power.

If you want my blunt version over drinks: OpenAI governance looks less like a revolutionary safeguard and more like a very elaborate story people told themselves so they could feel okay handing historic power to familiar founder instincts. Same opera, nicer lighting.

And I say that with some humility, because founders are extremely good at believing our own mythology. I’ve done it on a tiny scale. I’ve told myself that because I cared the most, I should decide the most. Usually that’s just fear wearing a tailored jacket.

So here’s where I land. If the people building civilization-altering AI keep asking us to trust their intentions more than their institutions, we should assume the institutions are the weak part. That’s true whether the New Yorker investigation targets Sam Altman today, another AI prince next year, or some future lab leader with better media training and worse instincts.

Before AGI changes the world, maybe answer one very unsexy question first: why are we still building these companies so that one charismatic guy can bend the whole thing back toward himself by Monday morning?