NVIDIA invests $2 billion in CoreWeave and unveils the Vera CPU for AI data centers.

Summary: NVIDIA announced a $2 billion investment in CoreWeave, purchasing Class A shares at $87.20 per share. The funding supports CoreWeave's plan to build more than 5 gigawatts of AI capacity by 2030. The cloud provider plans to spend up to $6 billion on NVIDIA hardware, including the new Vera CPU, NVIDIA's first standalone CPU offering, aimed at competing with Intel and AMD in the data center market.
Jensen Huang's computing empire is launching a "supply-side revolution" armed with the Vera CPU, which could fundamentally transform the model and cost structure of AI data centers.

Article author and source: AI New Era

The headline numbers: $2 billion, at $87.20 per share. CoreWeave surged 12% on the day, pushing its market cap past $52 billion.

This is a precise strike against the industry.

Jensen Huang poured money into CoreWeave not simply to buy equity, but to secure a "pass" to the next five years of compute capacity: a plan for 5 gigawatts of AI factories by 2030 and up to $6 billion in NVIDIA hardware procurement.

Most importantly, the Vera CPU is entering the market as a standalone infrastructure option.

This means NVIDIA is no longer content with just selling GPUs—it aims to take control of the "heart" of data centers and directly challenge Intel and AMD in their core territories.

The Computing Power Ambition Behind $2 Billion in Investment

On January 26, 2026, NVIDIA CEO Jensen Huang announced that NVIDIA would purchase $2 billion worth of Class A common shares of CoreWeave at $87.20 per share. Following the announcement, CoreWeave’s stock price rose 12%, pushing its market capitalization above $52 billion.

Behind this transaction is NVIDIA's determination to enter the CPU market.

This investment is aimed at accelerating CoreWeave's ambitious plan to build over 5 gigawatts (1 gigawatt equals 1 billion watts) of AI infrastructure by 2030. CoreWeave will purchase up to $6 billion in NVIDIA hardware, including the new Vera CPUs.

CoreWeave, as a cloud service provider closely partnered with NVIDIA, will become the first customer to deploy NVIDIA’s Vera CPU as an independent infrastructure option.

This means NVIDIA is offering its CPU as a standalone product for the first time, rather than as part of an integrated system, directly challenging Intel and AMD in the data center processor market.

Under the cooperation agreement, CoreWeave will receive priority access to NVIDIA’s next-generation computing architecture, including the Rubin platform, Vera CPU, and BlueField storage systems.

Jensen Huang framed the investment this way: "This investment reflects our confidence in CoreWeave's growth, management team, and business model." However, he emphasized that the focus of the collaboration is on integrating the engineering capabilities of both companies to accelerate the deployment of computing infrastructure.

This transaction comes as NVIDIA faces competitive pressure from large tech companies developing their own AI processors, such as Google's TPUs, which are being adopted by companies like Anthropic.

OpenAI is simultaneously collaborating with chip design company Broadcom to develop its own AI accelerator and has reached an agreement with AMD, a major competitor to NVIDIA, to purchase GPUs.

Jensen Huang's move strengthens the alliance with CoreWeave, one of the most aggressive and most deeply NVIDIA-integrated cloud service providers to rise over the past three years to meet demand for NVIDIA chips from large tech companies and enterprise clients. Prior to this purchase, NVIDIA already held $3.3 billion worth of CoreWeave stock; the new share purchase raises NVIDIA's ownership stake to over 11%.

The Far-Reaching Strategy Behind the Vera CPU's Standalone Deployment

Previously, NVIDIA's CPUs were sold only as components of computing systems, bundled with GPU chips. The standalone release of the Vera CPU signifies NVIDIA's transition from a "component supplier" to a "platform ecosystem builder."

NVIDIA stated that as AI agent applications accelerate in adoption, server CPUs are increasingly becoming a key bottleneck to overall system performance. Relying solely on GPU performance gains is no longer sufficient to meet market demand; a CPU platform with matching performance is essential.

AI agents differ from traditional AI models in that they require continuous operation, state retention, and complex decision-making, placing higher demands on a system’s memory bandwidth and processor coordination.

The Vera CPU is specifically designed for such application scenarios, with the goal of handling the most demanding AI and computational workloads.

From a technical specification standpoint, the Vera CPU with 227 billion transistors represents a significant leap in NVIDIA's processor design.

This processor is built on a next-generation custom Arm architecture, featuring 88 custom Olympus Arm cores with 176 threads, and introduces what NVIDIA calls "Spatial Multi-Threading."

Unlike traditional hyper-threading, spatial multithreading physically partitions core resources, enabling the Vera CPU to handle 176 concurrent threads while maintaining deterministic performance.

In terms of memory performance, Vera is equipped with 1.5 TB of system memory, three times that of the previous-generation Grace CPU, and achieves 1.2 TB/s of memory bandwidth using SOCAMM LPDDR5X technology. Additionally, through NVLink-C2C interconnect technology, its coherent memory interconnect speed reaches 1.8 TB/s, double that of Grace.

Vera has also undergone significant upgrades in cache design. Each core features 2MB of L2 cache (twice that of Grace) and a 162MB shared L3 cache (a 42% improvement). This enables Vera to share data more rapidly with its paired GPU, significantly enhancing overall system efficiency.
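The generational multiples quoted above can be sanity-checked against NVIDIA's published Grace baselines (480 GB of LPDDR5X system memory, 1 MB of L2 per core, 114 MB of L3). These baseline figures are assumptions drawn from Grace's public specs, not from this article; a quick check of the ratios:

```python
# Sanity-check the article's Vera-vs-Grace multiples.
# Grace baseline figures (480 GB memory, 1 MB L2 per core, 114 MB L3)
# are assumptions from NVIDIA's published Grace specs, not from the article.
grace = {"mem_gb": 480, "l2_mb_per_core": 1, "l3_mb": 114}
vera = {"mem_gb": 1536, "l2_mb_per_core": 2, "l3_mb": 162}  # 1.5 TB ~= 1536 GB

for key in grace:
    print(f"{key}: {vera[key] / grace[key]:.2f}x")
# mem_gb: 3.20x            (roughly the quoted "three times")
# l2_mb_per_core: 2.00x    (the quoted doubling)
# l3_mb: 1.42x             (the quoted 42% improvement)
```

The L3 ratio of about 1.42 lines up with the quoted 42% improvement, and 1.5 TB against Grace's 480 GB is a little over three times, so the article's figures are internally consistent.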

Vera's stated power draw is just 50 W, remarkably low for an 88-core server CPU, significantly reducing energy consumption and cooling requirements in data centers.

For cloud providers like CoreWeave that specialize in AI workloads, the ability to purchase Vera CPUs independently offers greater architectural flexibility. They can freely combine computing resources based on the characteristics of their customers’ workloads, without being forced to buy entire rack-scale solutions. By offering standalone Vera CPUs, NVIDIA provides customers with a workaround to bypass the bottlenecks of traditional x86 architectures.

NVIDIA’s next-generation Vera CPU will be paired with the next-generation Rubin GPU. The Rubin GPU is expected to feature HBM4 memory with a bandwidth of up to 22 TB/s, 2.75 times that of the Blackwell GPU. In the consumer electronics space, NVIDIA is also preparing an ARM-based N1/N1X CPU chip for the next generation of AI PCs.
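The Rubin bandwidth claim is arithmetically consistent with Blackwell's roughly 8 TB/s of HBM3e bandwidth. The Blackwell baseline here is an assumption from public specs, not stated in the article:

```python
# Ratio of Rubin's quoted HBM4 bandwidth to Blackwell's HBM3e bandwidth.
# The 8 TB/s Blackwell figure is an assumed baseline from public specs.
rubin_tb_s = 22.0
blackwell_tb_s = 8.0
print(rubin_tb_s / blackwell_tb_s)  # prints 2.75, matching the article's "2.75 times"
```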

CoreWeave's stock has become a barometer for public market investors' enthusiasm for AI. The stock has more than doubled since its IPO last year, but it has also declined by about half from its peak in June of last year. Now, Jensen Huang's $2 billion bet is not only a vote of confidence in the company, but also a strong endorsement of the future of AI computing power it represents.
