Can the IBM Ecosystem Boost AI Chip Performance 1,000-fold?
By Sally Ward-Foxton, EETimes (December 3, 2020)
As the AI hardware landscape starts to become more clearly defined, three main paradigms are emerging. Some of the chip industry's big hitters (Intel, Nvidia) are adapting their existing compute architectures into AI accelerators. Then there are the big data center players (Amazon, Google), who are throwing money at the problem and developing their own accelerator architectures, but keeping them for their own use. And finally there are the startups: around 70 at last count, working on novel compute architectures for every AI niche from the data center to the IoT.
The running theme is the siloed approach: each company is battling it out on its own. Can any single company, even one as large as Intel or Google, achieve the phenomenal performance gains demanded by cutting-edge, rapidly evolving AI algorithms?
Enter IBM, with an interdisciplinary approach to advancing AI hardware unlike anything we have seen so far in this space. The company has set up an organization, the AI Hardware Center, based at IBM Research's lab in Albany, New York, and is building an ecosystem of partners to work together toward IBM's goal.