Amid sustained growth in AI computing power demand and intensifying supply-demand imbalances in memory chips, CXL (Compute Express Link) is transitioning from a niche technology to an industry focal point. Samsung Electronics, SK Hynix, and Micron Technology have all ramped up their investments, while Google and NVIDIA have entered the space to validate its potential, making this sector the new frontier of competition in memory following HBM.
Samsung recently presented a paper at an IEEE conference detailing the latest advancements in its CXL memory system, "Pangea v2." According to the Korean outlet Korea Economic Daily, the system achieves 10.2 times the data transfer performance of traditional interconnect methods such as RDMA, while reducing long-standing bottlenecks in conventional memory architectures by up to 96%, a significant technological breakthrough in the CXL field.
On the demand side, tech giants are providing real-world validation for this technology. According to The Information, Google has begun deploying CXL in its data centers and is installing controllers to manage data traffic between CPUs and large external memory pools. NVIDIA plans to support the CXL 3.1 standard in its upcoming Vera CPU later this year, a move regarded by the industry as the largest real-world test of CXL to date.
Despite clear industry momentum, widespread commercial adoption of CXL faces a key constraint: the technology requires all CPUs, GPUs, memory, and networking devices within a data center to support the same standard. The complexity of this cross-industry ecosystem coordination remains the most difficult barrier to adoption.
Samsung "Pangea v2": Significant performance leap with expanded memory pool exceeding 5.5 TB
The "Pangea v2" system demonstrated by Samsung represents the company's latest technological advancement in the CXL field.
According to Korea Economic Daily, the system is based on the CXL 2.0 standard introduced in 2020 by companies such as Intel and NVIDIA, integrating 22 CXL DRAM modules (CMM-D) into a single shared memory pool capable of supporting up to 5.5 TB of memory capacity accessible by multiple servers. Samsung collaborated with global semiconductor design company Marvell and AI infrastructure company Liquid AI during development.
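A rough sanity check on the reported capacity is straightforward. The article states only the module count (22) and the total (5.5 TB); the 256 GB per-module figure below is our assumption, consistent with commercially available CMM-D module sizes:

```python
# Back-of-envelope check on the Pangea v2 pool size.
# Per-module capacity is an assumption, not stated in the article.
modules = 22
gb_per_module = 256                 # assumed CMM-D module size
total_gb = modules * gb_per_module
print(total_gb)                     # 5632 GB, i.e. ~5.5 TB
```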
In terms of performance, Pangea v2 delivers a 10.2-fold improvement in data transfer over traditional RDMA solutions and reduces bottlenecks by up to 96%.
Given that the CXL standard has since been updated to version 3.2, Samsung has announced plans to release "Pangea v3," based on the latest specification, in 2026.
The three major memory vendors have fully entered the market, accelerating the formation of the competitive landscape
SK Hynix is also accelerating its CXL development.
According to Korea Economic Daily, the company launched its first CXL DRAM in 2022, followed in 2023 by a product compatible with CXL 2.0, and in 2025 its 96 GB CMM-DDR5 memory solution received customer certification. Park Joon-deok, Head of DRAM Marketing at SK Hynix, stated that the company will maintain its technological leadership with a second-generation product supporting CXL 3.0.
Micron released its own CXL memory modules in 2024, officially entering the market. With the three major memory manufacturers having completed their deployments, the competitive landscape in the CXL sector has begun to take shape.
Google and NVIDIA validate demand; AI memory efficiency becomes the core driver
The core reason CXL has attracted market attention is its ability to effectively address the long-standing issue of low memory utilization in AI servers.
Under the current architecture, each GPU and CPU relies on its own dedicated memory, and memory utilization typically ranges from only 20% to 30%. CXL enables multiple GPUs and CPUs to dynamically share a unified memory pool, significantly improving resource utilization.
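The efficiency argument can be sketched with a toy model. All numbers below are invented for illustration, not measurements of any real CXL deployment: with dedicated memory, each server must be provisioned for its own peak demand, stranding capacity; with a shared pool, the pool only needs to cover the aggregate peak, since individual peaks rarely coincide.

```python
# Toy model of dedicated vs. pooled memory utilization.
# All figures are illustrative assumptions, not real measurements.

def dedicated_utilization(demands, per_node_capacity):
    """Each node owns a fixed slice; unused capacity is stranded."""
    used = sum(min(d, per_node_capacity) for d in demands)
    return used / (per_node_capacity * len(demands))

def pooled_utilization(demands, pool_capacity):
    """Nodes draw on one shared pool, so slack is shared too."""
    return min(sum(demands), pool_capacity) / pool_capacity

# Four servers, each sized for a 512 GB worst case, with bursty
# demand (GB in use right now) that rarely peaks simultaneously.
demands = [100, 450, 80, 150]
print(dedicated_utilization(demands, 512))   # ~0.38, in the 20-30% ballpark
# One pool sized for an assumed 1024 GB aggregate peak instead of
# 4 x 512 GB of per-node headroom:
print(pooled_utilization(demands, 1024))     # ~0.76
```

The same demand profile roughly doubles utilization once headroom is shared rather than replicated per node, which is the effect the article attributes to CXL memory pooling.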
According to The Information, citing two Google employees, Google has been the first to deploy CXL in production and is evaluating how to more deeply integrate external memory pools into its systems to accelerate processor access to external memory.
NVIDIA's Vera CPU will support the CXL 3.1 standard, and its large-scale market adoption will serve as a key indicator of whether CXL can evolve from experimental projects by a few companies into a reliable industry solution.
Jin Kim, CEO of South Korean CXL startup Xcena, said: "AI infrastructure requires massive amounts of memory, and rising memory prices are forcing our target customers to improve memory utilization efficiency—there is currently no other solution that can replace CXL to enhance memory efficiency."
High barriers to ecosystem collaboration; the timeline for widespread adoption remains uncertain
The large-scale adoption of CXL faces fundamental ecosystem challenges.
Bernstein Research semiconductor analyst Mark Li noted: "For CXL to work properly, you need compatibility across the CPU, GPU, memory, and software. Very few companies can simultaneously control all these products and drive coordinated innovation—NVIDIA is one, and Google is another."
Historically, AMD launched CXL-enabled server chips in 2022, followed by Intel in 2023, but commercial adoption of both products has been very limited. Even though Google has begun deploying CXL in production environments, industry engineers generally agree that current CXL technology has not yet fully met all the needs of large cloud providers.
After the CXL Consortium finalizes a new specification, chip designers must spend one to two years redesigning processors, component manufacturers then develop compatible controllers and switches, memory vendors modify memory modules, and server manufacturers conduct months of compatibility testing. This lengthy industry-wide collaboration is the practical barrier that CXL must overcome to achieve widespread adoption. The market's first real-world validation will come this year with NVIDIA's Vera CPU, offering the most valuable reference to date.
