Automotive, AI bandwidth demands push interconnect performance
By Gary Hilson, EETimes (March 14, 2022)
To meet the bandwidth demands of rapidly advancing automotive and artificial intelligence (AI) and machine learning (ML) workloads, companies such as Rambus are beginning to change where Peripheral Component Interconnect Express (PCIe) interconnects are used, effectively joining PCIe and CXL data planes to optimize interconnect performance.
There are probably few surprises to be found in the latest iteration of PCIe, as many industry players contributed to its development — the PCIe special interest group (SIG) now boasts 900 members. PCIe has become ubiquitous in computing over the past two decades, enabling other mature and emerging standards such as Non-Volatile Memory Express (NVMe) and Compute Express Link (CXL).
Similar to its predecessors, PCIe 6.0 is aimed at data-intensive environments such as data centers, high-performance computing (HPC), AI and ML. But as the modern vehicle continues its evolution as a server on wheels — nay, a data center on wheels — many storage technologies are making the trip to automotive applications, including solid-state drives (SSDs) that use both NVMe and PCIe.