Original | Odaily Planet Daily (@OdailyChina)
Author | Azuma (@azuma_eth)

On the evening of February 27, OpenAI announced the completion of a $110 billion funding round at a $730 billion pre-money valuation.
The funding for this round comes from three major players: Amazon has invested $50 billion (an initial investment of $15 billion, with the remaining $35 billion to be disbursed in installments over the coming months upon fulfillment of specific conditions), NVIDIA has invested $30 billion (through aggregate procurement of 5 GW of computing power), and SoftBank has also invested $30 billion.
After the funding round closed, OpenAI founder Sam Altman thanked his backers one by one on his personal X account. Notably, his order of thanks was Amazon, Microsoft, NVIDIA, and SoftBank: Microsoft, a longstanding shareholder and key partner that did not participate in this round, was mentioned immediately after Amazon, the investor with the largest commitment.

Longtime AI sector observer Aakash Gupta points out that while most people are focused on the astronomical figure of $110 billion, the most critical takeaway from Sam Altman’s remarks lies in two overlooked technical terms: “Stateless API” and “Stateful Runtime Environment,” the rights to which were secured by Microsoft and Amazon, respectively.
Behind the technical terminology lies the present and future of AI
The core difference between a Stateless API and a Stateful Runtime Environment lies in the words "Stateless" and "Stateful".
The "stateless" in Stateless API means the server does not retain persistent state across requests — each call performs one inference: you ask a question, the AI answers, and once the request lifecycle ends, the system does not retain context or continue running. In contrast, the "stateful" Runtime Environment means a continuously existing execution environment — the Agent has historical memory, can persist over time, collaborate across tasks, and execute long-term operations.
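The distinction can be sketched in a few lines of code. The snippet below is purely illustrative; the function and class names are hypothetical stand-ins, not OpenAI’s actual API.

```python
# Illustrative sketch of stateless vs. stateful interaction patterns.
# All names here are hypothetical; no real API is being called.

def stateless_call(model: str, prompt: str) -> str:
    """One request, one inference. Nothing survives the call."""
    response = f"[{model}] answer to: {prompt}"  # stand-in for an inference
    return response  # the server retains no context afterward

class StatefulAgent:
    """A long-lived runtime environment: memory persists across tasks."""
    def __init__(self, model: str):
        self.model = model
        self.memory = []  # task history lives inside the runtime

    def run_task(self, task: str) -> str:
        # Each new task sees everything the agent has done before.
        context = " | ".join(self.memory) or "none"
        result = f"[{self.model}] did '{task}' with context: {context}"
        self.memory.append(task)
        return result

# Stateless: two independent calls, no shared context.
stateless_call("llm", "summarize the Q3 report")
stateless_call("llm", "now email it")  # "it" is meaningless: no memory

# Stateful: the second task builds on the first.
agent = StatefulAgent("llm")
agent.run_task("summarize the Q3 report")
agent.run_task("email the summary")  # context carries over
```

The point of the sketch is structural: in the stateless pattern, every call must bring its own context; in the stateful pattern, context is a property of the runtime itself, which is what allows an agent to collaborate across tools and execute long-running work.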
Stateless API is currently the dominant form of LLM commercialization. Industries such as finance, retail, manufacturing, and healthcare integrate AI primarily through this approach, embedding it into existing systems (e.g., Q&A assistants, document summarization, and search enhancement). The advantage of this model is that enterprises can rapidly add AI capabilities within their existing architectures, without restructuring organizations or processes, achieving functional gains with minimal friction. However, as model capabilities converge, computing costs continue to decline, and price competition intensifies, token-based Stateless APIs are likely to become standardized and commoditized, facing sustained pressure on margins.
In contrast, the Stateful Runtime Environment currently has limited commercial adoption, but what it represents is not merely a functional improvement—it is a paradigm shift in business operations. It doesn’t just answer questions; it can be viewed as a digital workforce capable of actively executing tasks. This means the budgets it impacts extend beyond mere API call fees to include automation, process management, and even portions of human labor costs. For this reason, market expectations for the Stateful Runtime Environment far exceed its current scale.
Aakash Gupta also noted that by 2026 and 2027, nearly all companies’ roadmaps will revolve around “autonomous agent workflows” rather than one-time API calls, and companies making significant investments in AI will increasingly favor systems that operate sustainably, collaborate across tools, and maintain context over the long term.
In the simplest terms, Stateless API represents the present, while Stateful Runtime Environment represents the future.
What did Microsoft and Amazon each get?
On the day the financing was completed, Microsoft and Amazon each announced their latest partnership agreements with OpenAI.
In the announcement, Microsoft stated that the terms of the partnership jointly announced by Microsoft and OpenAI in October 2025 will remain unchanged (including OpenAI’s commitment to purchase $250 billion worth of Azure services). Azure remains the exclusive cloud provider for OpenAI’s Stateless API, and any Stateless API calls to OpenAI models resulting from collaborations between OpenAI and third parties (including Amazon) will be hosted on Azure; OpenAI’s first-party products, including Frontier, will also continue to be hosted on Azure.
Amazon stated in the announcement that AWS will collaborate with OpenAI to build a Stateful Runtime Environment powered by OpenAI models, available to AWS customers through Amazon Bedrock, enabling enterprises to build generative AI applications and agents at production scale; AWS will also become the exclusive third-party cloud distribution partner for OpenAI Frontier; the existing multi-year partnership between AWS and OpenAI, valued at $38 billion, will be expanded to $100 billion over eight years, with OpenAI consuming 2 GW of Trainium compute capacity on AWS infrastructure to support the Stateful Runtime Environment, Frontier, and other advanced workloads; OpenAI and Amazon will also co-develop customized models to support Amazon’s customer-facing applications.
Comparing the two announcements makes the current situation very clear.
Microsoft has locked in today’s traffic engine with a $250 billion agreement and exclusive service rights: whenever OpenAI’s Stateless API is invoked, Azure bills behind the scenes—regardless of the customer or channel, all traffic ultimately flows back to Azure. This generates highly predictable cash flow, but the issue lies in the declining profit margins of the Stateless API: while usage volume may continue to grow, actual profits may not remain stable in the long term.
On the other side, Amazon has secured the foundational hosting rights for the AI Agent era with $50 billion in real capital and $100 billion in expansion agreements for AWS. Once Agents become the core productivity vehicle for enterprises, the long-term resources—compute, storage, orchestration systems, workflow automation, and cross-tool collaboration—will all be anchored within AWS’s operating environment.
One controls the current cash flow, the other bets on the future productivity structure.
OpenAI's diversified bets
Before the future arrives, no one knows whether Microsoft’s or Amazon’s choices are right or wrong. But it is certain that, under these two clearly defined agreements with explicit separation of interests, OpenAI’s leverage is significantly increasing.
Over the past few years, OpenAI has relied heavily on Microsoft for cloud infrastructure. Microsoft is not only a major shareholder with a 27% stake but also controls the infrastructure OpenAI runs on. That partnership gave OpenAI valuable early resource advantages, but it also naturally tilted bargaining power in Microsoft’s favor. With Amazon’s forceful entry, direct competition between Microsoft and Amazon for OpenAI’s future business is now inevitable.
For OpenAI, this is a classic diversified betting strategy—avoiding deep dependence on any single cloud provider, ensuring future growth isn't entirely constrained by one party, and using future business as leverage to secure better terms.
Neither Microsoft nor Amazon can afford to walk away from OpenAI today. When neither side can leave the table, bargaining power naturally returns to OpenAI.
