Why Many in the U.S. Dislike Sam Altman Amid the OpenAI Legal Battle

Summary

A federal court in Oakland is hearing Elon Musk’s lawsuit against OpenAI, which now centers on two claims: unjust enrichment and breach of a charitable trust. Musk is seeking up to $134 billion in damages and demands the removal of Sam Altman and Greg Brockman, while opposing OpenAI’s transition to a for-profit model. OpenAI describes the case as driven by competitive jealousy. The trial, expected to last four weeks, will determine whether OpenAI violated its non-profit mission.

Yesterday, nine individuals were seated as an "advisory jury" in Federal Courtroom 9 in Oakland, California, for a trial expected to last four weeks, after which they will provide recommendations to Judge Rogers. Today, Tuesday, opening statements begin.

On the same day as jury selection, OpenAI announced a newly revised agreement with Microsoft. The agreement eliminates one thing: Microsoft’s exclusive license to OpenAI’s intellectual property, exactly the final lock OpenAI placed on itself when it transitioned to a "capped profit" structure in 2019.

What exactly is Musk suing for?

Two weeks before the trial, Reuters and CNBC’s trial coverage reviewed the list of claims. When Musk initially filed suit in 2024, he raised 26 claims, including securities fraud, racketeering (RICO), and antitrust violations. Today, only two remain for trial: unjust enrichment and breach of charitable trust.

The remaining 24 claims were either dismissed by the judge at the motion stage or withdrawn by Musk himself. Days before trial, he voluntarily dropped the allegations of fraud, narrowing the case to its core and simplest point: “OpenAI promised me it would always be non-profit—and now it isn’t.”

For this single claim, Musk is seeking up to $134 billion in damages. According to his complaint, all compensation would be returned to OpenAI’s nonprofit entity, while also demanding the removal of Altman and Brockman and the reversal of the entire for-profit conversion. This is the “true core” of the lawsuit. The issue is not about stock allocation—it’s about who ultimately owns OpenAI.

Judge Gonzalez Rogers divided the trial into two phases: first, determining liability, to be concluded by mid-May; if liability is established, the second phase will address damages. The jury participates only in the first phase and serves in an advisory capacity only. The final decision rests with the judge. This means that for Musk, winning the "narrative battle" is more critical than winning on damages—convincing the jury that "the company made promises to donors and then systematically dismantled those promises." If these nine individuals agree, the judge will complete the remaining puzzle.

OpenAI’s strategy is almost a mirror image: convince the jury that Musk’s real motive for filing the lawsuit was competitive jealousy, not any breach of trust. On the day of jury selection, OpenAI’s official account struck first: “We can’t wait to present our evidence in court—the truth and the law are on our side. This lawsuit has always been a baseless, jealousy-driven attempt at competitive sabotage… and now we finally have the opportunity to make Musk testify under oath before a California jury.”

Note the phrase "make Musk testify under oath." That is the strategy: what OpenAI truly wants is to publicly portray Musk, in the courtroom, as the founder of xAI who lost to OpenAI. Convincing the judge is secondary; what matters is that the ordinary California residents on the jury enter the courtroom wearing this lens.

How was OpenAI's "lock" removed?

To understand why Musk is so upset, you first need to understand the three locks OpenAI set for itself in 2019: a cap on investor profits, an AGI trigger clause that would terminate Microsoft’s commercial license, and Microsoft’s exclusive license to OpenAI’s intellectual property. Each had a clear design purpose.

You’ll notice something. In 2019, OpenAI was convincing donors that “even if we make money, our profits are limited—we must stop at some point.” On April 27, 2026, OpenAI is convincing investors that “we have no brakes.”

The explanation for removing the profit cap is straightforward. In his 2025 employee letter, Altman wrote, “A profit cap makes sense in a world with only one AGI company, but not when multiple competitors exist.” In plain terms: now that there are rivals, I need to be able to earn more.

The breakdown of the AGI trigger clause is the most subtle. Originally, “achieving AGI would terminate Microsoft’s commercial license” meant that AGI was a public good, belonging to humanity, and that OpenAI would not privatize it. The revised version places the determination of AGI in the hands of an “independent expert panel,” extends Microsoft’s license until 2032, explicitly includes “models beyond AGI,” and grants Microsoft permission to pursue AGI independently. This version does not just change the lock; it replaces the very key that defines what AGI is.

The final lock was the exclusive license, and it broke the moment Musk’s jury was seated. Decoupling Microsoft’s revenue share from “OpenAI’s technological progress” means that even if OpenAI were to publicly announce tomorrow that it had achieved AGI, no commercial terms would be triggered or changed as a result.

Elon Musk’s side will argue in court that this is an intentional dismantling of safeguards. OpenAI’s side will argue that this is a necessary adjustment in a competitive environment. But there is one thing both sides will not dispute: the 2019 “self-restraint checklist” now has not a single item remaining.

"Scam Altman"—why do so many people dislike Altman?

On the day of jury selection, X was far more lively than the courtroom. Two hours after the official OpenAI account launched its attack, Musk fired off seven consecutive tweets in retaliation: fast, forcefully worded, densely paced, a classic Musk-style barrage. He gave Altman a nickname: Scam Altman.

He also shared a video clip of former OpenAI board member Helen Toner stating plainly on a podcast, "Sam is a liar."

“Sam is a liar” was not first said by Musk. Mira Murati, former CTO of OpenAI, said it when she left; Ilya Sutskever said it during the “failed coup” that led to Altman’s firing; and Jan Leike publicly said it when he resigned along with the entire superalignment team.

People who dislike Sam Altman can be divided into three groups, each with different reasons.

The first group is the former OpenAI board. The defining event for this group was the five-day firing saga in November 2023, when the board cited Altman's "not being consistently candid in communications with the board" as the reason for his removal.

What exactly was he caught doing? In May 2024, Helen Toner publicly stated that the board learned from Twitter about their own company’s release of a product set to reshape the global AI industry. She also claimed that Altman concealed his ownership of the OpenAI Startup Fund, repeatedly asserting, “I have no financial interest in the company,” until he was forced to admit it in April 2024.

She further said that Altman provided inaccurate information to the board multiple times about safety processes; that two executives reported to the board that Altman engaged in "psychological abuse," providing screenshots as evidence of "lying and manipulation"; and that after Toner published a research paper OpenAI disliked, Altman attempted to remove her from the board.

The second group consists of the former OpenAI safety faction.

In May 2024, OpenAI’s Superalignment team nearly collapsed overnight. Leading the resignation was Jan Leike, one of OpenAI’s most senior AI safety researchers. His resignation post on X was among the sharpest exit letters in the English-speaking AI community that year, stating that “safety culture and processes had been sacrificed for shiny products.”

Next came Ilya Sutskever, co-founder and chief scientist of OpenAI and one of the key figures behind the failed coup. Shortly after, CTO Mira Murati, who had temporarily taken over during Altman’s dismissal, chief research officer Bob McGrew, and vice president of research Barret Zoph all resigned within the same week. The “non-disparagement agreement” scandal emerged afterward: departing employees were required to sign confidentiality agreements or forfeit their equity.

The third group is the "contract faction" of old Silicon Valley; this group is the hardest to define and the largest.

They include early donors like Musk from 2015, early OpenAI employees who genuinely believed in the nonprofit mission, many angel investors who bet on early-stage startups in Silicon Valley, and a significant number of neutral observers who view OpenAI as a common heritage of humanity.

What these people have in common is that they once paid non-monetary costs for OpenAI’s promise—reputation, time, trust, and social capital. What they find hardest to forgive about Altman is specific: every time OpenAI removed its own “locks,” Altman claimed it was “for the mission.”

When the profit cap was removed, he said, “To ensure OpenAI can continue investing in AGI research”; when the AGI trigger clause was rewritten, he said, “To enable OpenAI to fulfill its mission even after AGI is achieved”; when Microsoft’s exclusivity was lifted, he said, “To allow OpenAI to move toward a broader ecosystem of collaboration.”

This is also why some people in Silicon Valley find themselves reluctantly siding with Musk in this lawsuit.

The weight of a promise made in Silicon Valley will be revealed four weeks from now.

By now, you’ve probably realized: they’re not fighting over money.

Money is not an issue for either side. By 2026, Altman is the CEO of OpenAI, a private AI company valued at over $500 billion, and he won’t lack funds. By 2026, Musk has advanced xAI into the Grok-5 era, with Anthropic and OpenAI as the rivals he aims to surpass; he certainly won’t lack funds either.

They are fighting over something that almost no one outside a small circle of long-time Silicon Valley insiders cares about: Can a nonprofit organization that has raised funds from society under the banner of "the common good," accumulated moral capital, recruited talent, and obtained regulatory exemptions transform itself into an ordinary for-profit company jointly controlled by a CEO and venture capitalists over the course of a decade?

If this is allowed, then in the future, every AI startup could do the same. “Nonprofit” would become a cheap early narrative tool to get through media headlines, regulatory scrutiny, and employee recruitment—only to quietly dissolve once the valuation becomes large enough.

If Musk wins, Silicon Valley may experience a long-overdue sense of awkwardness: the things you said in 2015 will still be quoted verbatim in 2026, forcing you to testify under oath in a California federal court. If OpenAI wins, the world continues to operate as Silicon Valley has for the past decade—telling stories early, emphasizing scale later, and systematically dismantling the contracts between story and scale along the way.

The answer will come in four weeks. But the words “Scam Altman” have already been etched into social media and will remain regardless of the outcome. The reason Altman has made so many people angry is that he made those who trusted him feel deceived. How much money was made is secondary.

However, being scammed cannot be undone by a court ruling.

Source: BlockBeats (律动)

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.