CXL Meets AI's Need for an Open Industry-Standard Interconnect
By Debendra Das Sharma, EETimes (July 16, 2024)
We have been in a virtuous cycle of innovation for decades, in which dramatic improvements in compute capability, driven primarily by extraordinary advances in transistor and process technology, have enabled diverse applications. These applications, including generative AI, are driving an insatiable demand for heterogeneous computing, memory bandwidth, memory capacity, and interconnect bandwidth.
The need for a robust interconnect standard
While PCIe is a great interconnect, emerging data-centric applications pose a new set of challenges that call for capabilities beyond what PCIe provides:
- High-performance heterogeneous computing with shared coherent memory space.
- Overcoming the memory bandwidth bottleneck of the DDR parallel bus and providing tiered, cost-effective memory support (a sketch of tiered placement follows this list).
- Minimizing stranded resources in data centers by pooling memory and accelerators across multiple servers.
- Enabling distributed computing through low-latency, load-store-based message passing and shared memory across a large pool of servers, including coherent near-memory and in-memory compute.
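To make the tiered-memory point concrete: on Linux, a CXL memory expander is typically exposed as a CPU-less NUMA node, so software can steer capacity-oriented data onto the CXL tier while keeping latency-sensitive data in local DDR. The following is a minimal sketch using libnuma; the node number and buffer sizes are assumptions for illustration, and a real system would discover the CXL node through sysfs or cxl-cli rather than hard-coding it.

```c
/*
 * Hypothetical sketch: placing a large buffer on a CXL memory expander
 * that Linux exposes as a CPU-less NUMA node. Node number and sizes are
 * illustrative assumptions, not values from the article.
 * Build with: gcc tiering.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CXL_NUMA_NODE 1               /* assumption: CXL memory appears as node 1 */
#define BUF_SIZE (64UL * 1024 * 1024) /* 64 MiB capacity-oriented buffer */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return EXIT_FAILURE;
    }

    /* Capacity-oriented, colder data goes to the CXL-attached tier. */
    void *cold_buf = numa_alloc_onnode(BUF_SIZE, CXL_NUMA_NODE);
    if (!cold_buf) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    /* Latency-sensitive data stays in ordinary (local DDR) memory. */
    void *hot_buf = malloc(4096);
    if (!hot_buf) {
        numa_free(cold_buf, BUF_SIZE);
        return EXIT_FAILURE;
    }

    /* Touch the pages so they are actually backed on the chosen node. */
    memset(cold_buf, 0, BUF_SIZE);
    memset(hot_buf, 0, 4096);

    printf("placed %lu MiB on NUMA node %d (CXL tier)\n",
           BUF_SIZE >> 20, CXL_NUMA_NODE);

    free(hot_buf);
    numa_free(cold_buf, BUF_SIZE);
    return EXIT_SUCCESS;
}
```

The same placement can also be left to the kernel's memory-tiering and page-demotion policies; explicit node binding as above is just one way applications can exploit a cost-effective CXL capacity tier.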