New York Times report reveals OpenAI's internal concerns about Sam Altman's trustworthiness

iconTechFlow
Share
Share IconShare IconShare IconShare IconShare IconShare IconCopy
AI summary iconSummary

expand icon
The New York Times’ on-chain report reveals internal concerns at OpenAI regarding CEO Sam Altman’s trustworthiness. Based on over 100 interviews and internal documents, the report alleges that Altman misled the board and removed safety protocols. A 70-page document from former chief scientist Ilya Sutskever details alleged dishonesty by Altman, including false statements about GPT-4. Anthropic’s Dario Amodei privately remarked, “OpenAI’s problem is Sam himself.” The report also underscores OpenAI’s transition from a nonprofit focused on safety to a commercial enterprise, with key protocols reportedly disregarded. New token listings and corporate strategy shifts are now under heightened scrutiny in the AI and crypto sectors.

Written by Xiao Bing, Shenchao TechFlow

In the autumn of 2023, OpenAI's chief scientist Ilya Sutskever sat at his computer, completing a 70-page document.

This document is compiled from Slack message logs, HR communication records, and internal meeting minutes, solely to answer one question: Can Sam Altman, the person in charge of what may be the most dangerous technology in human history, be trusted?

Sutskever's answer, written on the first line of the first page, with the list title: "Sam exhibits a consistent pattern of behavior..."

First: Lying.

Today, two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published an extensive investigative report in The New Yorker, interviewing over 100 individuals and obtaining previously undisclosed internal memos, along with more than 200 pages of private notes left by Anthropic’s founder Dario Amodei during his time at OpenAI. The story pieced together from these documents is far more troubling than the 2023 power struggle: how OpenAI gradually transformed from a nonprofit founded to ensure human safety into a commercial machine, with nearly every safety safeguard dismantled by the same individual.

Amodei’s conclusion in the note is more straightforward: “The problem with OpenAI is Sam himself.”

OpenAI's "original sin" setting

To understand the significance of this report, it’s important to clarify just how unique OpenAI is.

In 2015, Altman and a group of Silicon Valley elites did something nearly unprecedented in business history: they used a nonprofit organization to develop what could be the most powerful technology in human history. The board’s mandate was clearly stated: safety takes precedence over the company’s success, and even over its survival. In plain terms, if OpenAI’s AI ever became dangerous, the board was obligated to shut the company down themselves.

The entire architecture is based on one assumption: the person in charge of AGI must be extremely honest.

What if I guess wrong?

The core bomb of the report was the 70-page document. Sutskever doesn’t play office politics—he is one of the world’s top AI scientists. But by 2023, he had become increasingly convinced of one thing: Altman was consistently lying to executives and the board.

A specific example: In December 2022, Altman assured the board of directors during a meeting that several features of the upcoming GPT-4 had passed security reviews. Board member Toner requested to see the approval documents, only to discover that two of the most controversial features—user-customized fine-tuning and personal assistant deployment—had not been approved by the security panel at all.

Even more astonishing events occurred in India: an employee reported to another board member the "violation" — Microsoft had released an early version of ChatGPT in India without completing the required security reviews.

Sutskever also noted another detail in the memo: Altman had told former CTO Mira Murati that the security approval process wasn’t that important, as the company’s general counsel had already approved it. Murati went to confirm with the general counsel, who replied: "I don’t know where Sam got that impression from."

Amodei's 200-page private notes

Sutskever’s documents read like a prosecutor’s indictment. Amodei’s 200-plus pages of notes resemble a diary written by a witness at the crime scene.

During his years as head of safety at OpenAI, Amodei witnessed the company gradually retreat under commercial pressure. In his notes, he recorded a key detail about Microsoft’s 2019 investment: he had inserted a “merger and assistance” clause into OpenAI’s charter, stating that if another company found a safer path to AGI, OpenAI should cease competition and instead assist that company. This was the most important safety safeguard he valued in the entire deal.

When signing the deal quickly, Amodei discovered something: Microsoft had obtained veto power over this clause. What does that mean? Even if one day a competitor found a better path, Microsoft could simply block OpenAI’s obligation to assist. The clause was still on paper, but from the day it was signed, it was worthless.

Amodei later left OpenAI and founded Anthropic. The competition between the two companies stems from fundamental disagreements about how AI should be developed.

The vanished 20% hashing power commitment

There’s a detail in the report that sends chills down your spine, regarding OpenAI’s “Superalignment Team.”

In mid-2023, Altman emailed a PhD student at Berkeley researching "deceptive alignment" (AI behaving well in tests but acting differently once deployed), expressing deep concern about the issue and considering establishing a $1 billion global research prize. The student was inspired, took a leave of absence, and joined OpenAI.

Then Altman changed his mind: instead of external awards, he would establish an internal "Superalignment Team." The company publicly announced that it would allocate "20% of its existing compute" to this team, with a potential value exceeding $1 billion. The announcement was phrased in extremely serious terms, stating that if alignment issues are not resolved, AGI could lead to "humans being disempowered, or even human extinction."

Jan Leike, who was appointed to lead the team, later told reporters that the commitment itself was a highly effective "talent retention tool."

In reality, four people who worked on or closely with the team said that the actual allocated computing power was only 1% to 2% of the company’s total capacity, and consisted of the oldest hardware. The team was later disbanded without completing its mission.

When journalists requested to interview OpenAI personnel working on "existential safety" research, the company’s PR response was both absurd and ironic: "That’s not an actual... thing."

Altman himself is at ease. He told reporters that his "intuition doesn't align with many traditional AI safety approaches," and that OpenAI will still pursue "safety projects, or at least projects related to safety."

The Outdated CFO and the Upcoming IPO

The New Yorker’s report was only half the bad news that day. On the same day, The Information broke another major story: OpenAI’s CFO, Sarah Friar, had a serious disagreement with Altman.

Friar privately told her colleagues that she believes OpenAI is not ready for an IPO this year. Two reasons: the volume of procedural and organizational work remaining is too large, and the financial risk from Altman’s承诺 of $600 billion in computing spending over five years is too high. She isn’t even sure OpenAI’s revenue growth can sustain those commitments.

But Altman wants to pursue an IPO in the fourth quarter of this year.

Even more bizarrely, Friar no longer reports directly to Altman. Starting in August 2025, she now reports to Fidji Simo, CEO of OpenAI’s applications business. And Simo just took a medical leave last week. Consider this situation: a company racing toward its IPO, with the CEO and CFO fundamentally at odds, the CFO not reporting to the CEO, and the CFO’s supervisor currently on leave.

Even Microsoft's internal executives couldn't stand it anymore, saying Altman "distorted facts, reneged on promises, and continually overturned already-agreed-upon deals." One Microsoft executive even said: "I think there's a real chance he'll ultimately be remembered as a fraud on the level of Bernie Madoff or SBF."

Altman's "two-faced" portrayal

A former OpenAI board member described to reporters two traits in Altman. This passage may be the most cutting character sketch in the entire report.

The director said that Altman possesses an extremely rare combination of traits: he has a strong desire to please and be liked in every face-to-face interaction, while simultaneously exhibiting a near sociopathic indifference to the consequences of deceiving others.

It is extremely rare for both traits to appear in one person. But for a salesperson, this is the perfect gift.

The article includes a fitting analogy: Jobs was famous for his "reality distortion field," which could convince the world of his vision. But even Jobs never told customers, "If you don’t buy my MP3 player, the people you love will die."

Altman has said something similar about AI.

Why a CEO's character issue is everyone's risk

If Altman were merely the CEO of a regular tech company, these allegations would be, at best, compelling business gossip. But OpenAI is not ordinary.

According to its own claims, it is developing what may be the most powerful technology in human history—capable of reshaping the global economy and labor markets (OpenAI itself just released a policy white paper on AI-induced job displacement), and also usable for creating large-scale biochemical weapons or launching cyberattacks.

All safety safeguards have become meaningless. The founder’s nonprofit mission has been replaced by a rush toward an IPO. The former chief scientist and former head of security both deem the CEO “untrustworthy.” Partners have compared the CEO to SBF. Under these circumstances, on what basis does this CEO unilaterally decide when to release an AI model that could alter the fate of humanity?

After reading the report, Gary Marcus (NYU AI professor and long-time AI safety advocate) wrote: If a future OpenAI model can create large-scale biochemical weapons or launch catastrophic cyberattacks, are you really comfortable letting Altman be the sole person to decide whether to release it?

OpenAI's response to The New Yorker was concise: "Much of this article recycles previously reported events, using anonymous claims and selective anecdotes, with sources that clearly have personal agendas."

Altman's response: Do not address the specific allegations, do not deny the memo's authenticity, only question the motives.

On the corpse of a nonprofit, a money tree grew.

The decade of OpenAI, written as a story outline, is this:

A group of idealists concerned about AI risks founded a mission-driven nonprofit. The organization made extraordinary technological breakthroughs. The breakthroughs attracted massive capital. Capital demanded returns. The mission began to yield. The safety team was disbanded. Questioners were purged. The nonprofit structure was converted into a for-profit entity. The board, once empowered to shut down the company, is now filled with the CEO’s allies. The company that once pledged 20% of its computing power to safeguard humanity now has PR staff saying, "That wasn’t a real thing."

The protagonist of the story was given the same label by over a hundred eyewitnesses: "Unconstrained by truth."

He is preparing to take this company public with a valuation exceeding $850 billion.

This information is compiled from public reports by The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.