Sam Altman: The Apocalyptic Capitalist and the Business of Fear

Summary: Sam Altman, a key figure in Silicon Valley, has built a personal investment empire worth roughly $2 billion while promoting narratives of an AI apocalypse. OpenAI, once a nonprofit, is now a commercial entity valued at over $100 billion. Critics call his approach hypocritical: warning of AI risks while accelerating AI's development. Altman strategically leverages regulatory pressure and maintains influence through charismatic leadership.
Article by Sleepy.txt


In 2016, The New Yorker published a feature on Sam Altman titled "Sam Altman's Manifest Destiny." At the time, he was 31 years old and already the president of Y Combinator, Silicon Valley's most powerful startup incubator.


One detail in the piece: Altman loves fast driving, owns five sports cars, and likes to rent planes and fly them himself. He told the reporter he keeps two bags packed, one of them an emergency go-bag ready at any moment.


He had also stockpiled firearms, gold, potassium iodide (for protection against nuclear radiation), antibiotics, batteries, water, and a military-grade gas mask from the Israel Defense Forces, along with a plot of land in Big Sur on the California coast to which he could fly for refuge at any time.


Ten years later, Altman has become the person most devoted to conjuring the apocalypse, and the most devoted to selling the ark. He warns the world that AI will destroy humanity while personally accelerating that process; he claims he isn't in it for the money while building a personal investment empire worth some $2 billion; he calls for regulation while removing everyone who tries to hit the brakes.


He is less a madman or a flawless con artist than the most standard, most successful product ever turned out by the vast machine of Silicon Valley. His "destiny" was to forge humanity's collective anxiety into his scepter and crown.


The end of the world is good business.


Altman's business model can be explained in one sentence: package a business as a holy war for the survival of humanity.


He has been refining this strategy since the YC era. He transformed YC from a small operation providing early-stage startups with tens of thousands of dollars into a vast entrepreneurial empire. He established a YC Research lab to fund projects that aren’t profitable but sound ambitious. He told reporters that YC’s goal is to support “all important fields.”


At OpenAI, he took this approach to its extreme, selling a packaged worldview: AI apocalypse plus a redemption solution.


He is better than anyone at dramatizing the "existential risk" posed by AI. He co-signed a statement with hundreds of scientists declaring that AI's risks are comparable to those of nuclear war. Testifying before the Senate, he said: "We feel a flicker of fear about [AI's potential]—and people should be glad about that," implying that the fear is itself a beneficial warning.


Each of these statements could make headlines, each giving OpenAI free advertising. This meticulously crafted fear is the most efficient lever for attention. Between a technology that “boosts efficiency” and one that “might destroy humanity,” which excites capital and the media more? The answer is obvious.


As for the redemption part, he already had a ready-made product: Worldcoin. Once fear has been implanted in the public consciousness, selling the solution becomes inevitable: a silver orb the size of a basketball scanning human irises around the world, claiming its purpose is to distribute money to everyone in the age of AI. The story sounds compelling, but the practice of exchanging biometric data for money quickly raised alarms among governments. Dozens of countries, including Kenya, Spain, Brazil, India, and Colombia, have halted Worldcoin or opened investigations into it over data privacy concerns.



But to Altman, this may not matter at all. What matters is that through this project he successfully positioned himself as the one and only person with a solution.


Selling fear and hope together is the most efficient business model of this era.


Regulation is my weapon, not my chain.


How does someone who constantly talks about the end of the world run a business? Altman's answer: turn regulation into a weapon.


In May 2023, he testified before the U.S. Congress for the first time. Instead of complaining about regulation like other tech CEOs, he proactively requested: “Please regulate us.” He proposed a licensing system for AI, whereby only companies with a license could develop large models. This projected an image of a highly responsible industry leader, but at that time, OpenAI was far ahead technologically, and a strict, high-barrier regulatory framework would primarily serve to keep all potential competitors out.


However, over time, as competitors like Google and Anthropic caught up technologically and the open-source community began to gain momentum, Altman’s stance on regulation underwent a subtle shift. He began emphasizing in various forums that overly strict regulation—particularly mandatory pre-release reviews for AI companies—could stifle innovation and be “catastrophic.”


At this point, regulation is no longer a moat, but a stumbling block.


When in absolute dominance, he calls for regulation to lock in advantage; when that advantage fades, he calls for freedom to seek breakthroughs. He even attempts to extend his reach to the very upstream of the supply chain. He proposed a $7 trillion chip initiative, seeking capital support from entities like the UAE’s sovereign wealth fund, aiming to reshape the global semiconductor landscape. This goes far beyond the scope of a CEO—it resembles the ambitions of someone seeking to influence global dynamics.



Behind all of this is OpenAI’s rapid transformation from a nonprofit organization into a commercial powerhouse. Founded in 2015 with the mission to “safely ensure AGI benefits all of humanity,” it established a “capped profit” subsidiary in 2019. By early 2024, outsiders discovered that the word “safely” had been quietly removed from OpenAI’s mission statement. Although its corporate structure remains “capped profit,” its commercialization has clearly accelerated. This is mirrored by explosive revenue growth—from tens of millions of dollars in 2022 to over $10 billion in annualized revenue by 2024, with its valuation surging from $29 billion to the $100 billion range.


When someone begins gazing at the stars and talking about humanity’s fate, it’s best to first check where their wallet has landed.


The charismatic leader's exemption


On November 17, 2023, Altman was removed by the board he had personally selected, on the grounds that he was "not consistently candid in his communications with the board."


What happened over the next five days was less a corporate power struggle than a referendum on faith. President Greg Brockman resigned; more than 700 employees, about 95% of the company, signed a letter demanding the board resign or face a mass defection to Microsoft; Microsoft CEO Satya Nadella publicly sided with Altman, saying he would welcome him back anytime. In the end, Altman returned in triumph, was reinstated to his former position, and purged nearly every board member who had opposed him.


Why can a CEO, officially labeled by the board as "dishonest," return unscathed and even gain greater power?


The ousted board member Helen Toner later disclosed details: Altman concealed from the board his actual control over OpenAI’s venture fund; repeatedly lied about critical safety protocols at the company; and even the launch of ChatGPT was something the board learned about from Twitter. Any one of these allegations would be enough to remove a CEO a hundred times over.


But Altman was fine, because he is not an ordinary CEO; he is a "charismatic leader."


This is a concept proposed by sociologist Max Weber a century ago, describing a form of authority that does not come from a position or law, but from the leader’s own “charismatic personality.” Followers believe in him not because he has done something right, but because he is who he is. This belief is irrational. When the leader makes a mistake or is challenged, followers’ first reaction is not to question the leader, but to attack the challenger.


So it was with OpenAI's employees. They did not believe in the board's procedural legitimacy; they believed only in the "destiny" Altman represented, and saw the board members as "obstacles to human progress."


After Altman's return, OpenAI's safety team was soon disbanded. Ilya Sutskever, the chief scientist who had led the effort to oust Altman, left the company. In May 2024, Jan Leike, head of the safety team, resigned, writing on Twitter that the company's "safety culture and processes have taken a backseat to shiny products."



In the presence of a charismatic leader, facts don't matter, process doesn't matter, and safety doesn't matter. The only thing that matters is faith.


The prophets on the assembly line


Sam Altman is simply the latest and most successful model to emerge from Silicon Valley’s "prophet" production line.


On this production line, there are still many people we know well.


Take Musk, for example. In 2014, he repeatedly claimed that "with AI we are summoning the demon." Yet his Tesla is, by his own description, the world's largest robotics company and one of the most complex AI applications in existence. After his fallout with Altman, he founded xAI in 2023, declaring open war. Within a year, xAI's valuation surpassed $20 billion. He warns of the demon's arrival while building another one himself. This self-contradictory dual narrative mirrors Altman's exactly.


Take Zuckerberg as another example. A few years ago, he bet the entire company’s future on the metaverse, burning nearly $90 billion, only to realize it was a dead end. He quickly pivoted, shifting the company’s core narrative from the metaverse to AGI. In 2025, he announced the creation of the "Superintelligence Lab" and personally recruited top talent. It’s the same grand vision for humanity’s future, the same astronomical capital investment, and the same messiah-like posture.



There’s also Peter Thiel. As Altman’s mentor, he is more like the chief architect of this entire pipeline. While investing in companies that promote the “technological singularity” and “immortality,” he simultaneously buys land in New Zealand and builds doomsday bunkers, obtaining citizenship after only 12 days there. His company, Palantir, is one of the world’s largest data surveillance firms, with clients primarily in government and military sectors. He prepares for the collapse of civilization while building the sharpest surveillance tools for those in power. During the early 2026 military operation against Iran, Palantir’s AI platform served as the brain, integrating massive amounts of data from spy satellites, communications intercepts, drones, and Claude model analyses—transforming chaotic information into actionable intelligence in real time to ultimately pinpoint and eliminate targets.


Each of them plays a dual role: sounding the alarm that the end is near while pushing for its arrival. This is not a split personality; it is a business model that capital markets have proven to be the most efficient. They capture attention, capital, and power by manufacturing and selling structural anxiety. They are at once the products of this system and its architects, the hands behind the grand narrative.


Silicon Valley has long been more than just a source of technology—it is a factory for manufacturing "modern myths."


Why does this trick always work?


Every few years, a new prophet emerges from Silicon Valley, sweeping up capital, media, and public attention with a grand narrative of apocalypse and redemption. This trick is repeated over and over again—and yet it never fails. Each step of the process is precisely engineered to exploit specific vulnerabilities in human cognition.


Step 1: Manage the rhythm of fear, not just create fear.


The potential risks of AI are real, and they could have been discussed calmly. But this group deliberately chose the most dramatic way to present them, and they control precisely when the fear is released.


When to instill fear in the public, when to offer hope, and when to raise the alarm again are all carefully designed. Fear is fuel, but the timing and method of ignition are the true expertise.


Step 2: Turn the incomprehensibility of technology into a source of authority.


AI is a complete black box to the vast majority of people. When something becomes too complex to be fully understood, people instinctively cede the right to explain it to “those who understand it best.” They deeply understand this dynamic and have turned it into a structural advantage—the more they portray AI as mysterious, dangerous, and beyond ordinary comprehension, the more indispensable they become.


The terrifying aspect of this logic is that it is self-reinforcing. Any external criticism is automatically dismissed because the critic is deemed “not knowledgeable enough.” Regulators don’t understand the technology, so their judgments are unreliable; academic critics haven’t built models on the front lines, so their concerns are merely theoretical. In the end, only they themselves are qualified to judge their own actions.


Step 3: Replace "benefit" with "meaning" to encourage followers to voluntarily abandon criticism.


This is the layer of the system most difficult to detect and its most enduring source of power. They never sell merely a job or a product; they sell a story of cosmic significance: you are deciding the fate of humanity. Once this narrative is accepted, followers willingly abandon independent judgment, because questioning the leader's motives in the face of a mission concerning "the survival of humanity" makes one feel insignificant, even like an obstacle to history. People surrender their critical thinking willingly, and come to view that surrender as a noble choice.


Put these three steps together, and you’ll understand why this system is so hard to shake. It doesn’t rely on lies—it relies on a precise understanding of human cognition. It first creates a fear you can’t ignore, then monopolizes the interpretation of that fear, and finally turns you into its most loyal advocate by giving you “meaning.”


In this system, Altman is by far the most smoothly running model.


Whose destiny?


Altman has consistently said that he owns no equity in OpenAI and takes only a symbolic salary, which was the foundation of his "powered by love" narrative.


But Bloomberg calculated in 2024 that his personal net worth is approximately $2 billion, stemming primarily from a decade of venture investments. His early investment in the payments company Stripe reportedly yielded returns in the hundreds of millions of dollars; his stake in Reddit's IPO also generated substantial profits. He also invested in the fusion company Helion, preaching that the future of AI depends on energy breakthroughs while making a major bet on fusion, right before OpenAI entered negotiations with Helion for a large-scale power purchase agreement. He claims he recused himself from the negotiations, but the chain of interests is obvious to anyone.



He does not hold direct equity in OpenAI, but he has built a vast, personally centered investment empire around it. Every grand sermon he delivers on the future of humanity adds value to the expansion of this empire.


Now, does his emergency go-bag stocked with firearms, gold, and antibiotics, and that plot of land in Big Sur he can fly to at any moment, read a little differently?


He never hides any of it. The go-bag is real, the bunker is real, and his obsession with the apocalypse is real. But he is also the one working hardest to bring about the end. These two things are not contradictory, because in his logic, the apocalypse doesn’t need to be stopped—it just needs to be positioned for in advance. He is obsessed with playing the role of the only one who sees the future clearly and prepares for it.


Whether preparing a physical emergency kit or building a financial empire around OpenAI, it is all the same at its core: securing for yourself the most certain winning position in an uncertain future that you are actively shaping.


In February 2026, he had barely finished declaring his red line—“AI must not be used for war”—when he signed a contract with the Pentagon. This isn’t hypocrisy; it’s an inherent requirement of his business model. Moral posturing is part of the product; commercial contracts are the source of profit. He must simultaneously play the compassionate savior and the cold, unyielding harbinger of doom, because only by embodying both roles can his story continue and his “destiny” be made clear.


The real danger has never been AI, but those who believe they have the right to define humanity's fate.




