General Purpose Neural Processing Unit (NPU)
The QC Perform is the mid-sized member of the Chimera family of scalable, programmable NPUs. Delivering up to 8 TOPS of AI performance, the Chimera QC Perform processor is suitable for a wide range of SoCs, and the QC Perform GPNPU packs that processing power into less than 1 sq mm in leading-edge process nodes, making it an easy addition to even the most cost-sensitive designs.
Designed from the ground up to address the significant machine learning (ML) inference deployment challenges facing system-on-chip (SoC) developers, Quadric’s Chimera™ General Purpose Neural Processing Unit (GPNPU) family has a simple yet powerful architecture with demonstrated matrix-computation performance improvements over the traditional heterogeneous approach. Its crucial differentiation is its ability to execute diverse workloads with great flexibility in a single processor, scaling from 1 to 864 TOPS.
The Chimera GPNPU family provides a unified processor architecture that handles matrix and vector operations and scalar (control) code in one execution pipeline. These workloads are traditionally handled separately by an NPU, a DSP, and a real-time CPU. The Chimera GPNPU is entirely driven by code, empowering developers to continuously optimize the performance of their models and algorithms throughout the device’s lifecycle.
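To make the idea of a single code-driven pipeline concrete, the sketch below is plain standard C++ (it does not use Quadric’s SDK or any Chimera-specific intrinsics). It shows the kind of routine that mixes scalar control flow, vector-style elementwise math, and matrix arithmetic in one function; on a partitioned design those three pieces would typically be split across a real-time CPU, a DSP, and an NPU.

```cpp
// Conceptual sketch only: plain standard C++ illustrating how scalar control,
// vector (elementwise), and matrix work can live in a single code path.
// It does NOT use Quadric's SDK; on a Chimera GPNPU the same kind of fused
// routine would be compiled for the single unified execution pipeline.
#include <array>
#include <cstddef>

constexpr std::size_t kRows = 4, kCols = 4;

using Vec = std::array<float, kCols>;
using Mat = std::array<Vec, kRows>;

// Matrix-vector product followed by a data-dependent (scalar-controlled)
// activation: the kind of mixed workload traditionally split across an
// NPU (matrix), a DSP (vector), and a real-time CPU (control).
Vec fused_layer(const Mat& w, const Vec& x, float clip) {
    Vec y{};
    for (std::size_t r = 0; r < kRows; ++r) {        // scalar loop control
        float acc = 0.0f;
        for (std::size_t c = 0; c < kCols; ++c)      // matrix/vector math
            acc += w[r][c] * x[c];
        // Scalar, data-dependent branch (clipped ReLU).
        y[r] = (acc < 0.0f) ? 0.0f : (acc > clip ? clip : acc);
    }
    return y;
}

int main() {
    Mat w{};                                          // zero-initialized weights
    for (std::size_t i = 0; i < kRows; ++i) w[i][i] = 1.0f;  // identity for the demo
    Vec x{ -1.0f, 0.5f, 2.0f, 8.0f };
    Vec y = fused_layer(w, x, 4.0f);
    return static_cast<int>(y[0]);                    // keep the result live
}
```

The point of the sketch is simply that nothing forces these three kinds of work apart at the source-code level; the partitioning in conventional designs is imposed by the hardware, not by the algorithm.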
Modern System-on-Chip (SoC) architectures deploy complex algorithms that mix traditional C++ based code with newly emerging and fast-changing machine learning (ML) inference code. This combination of graph code commingled with C++ code is found in numerous chip subsystems, most prominently in vision and imaging subsystems, radar and lidar processing, communications baseband subsystems, and a variety of other data-rich processing pipelines. Only Quadric’s Chimera GPNPU architecture can deliver high ML inference performance and run complex, data-parallel C++ code on the same fully programmable processor.
Compared to other ML inference architectures that force the software developer to artificially partition an algorithm between two or three different kinds of processors, Quadric’s Chimera processors deliver a massive uplift in software developer productivity while also providing current-day graph-processing efficiency coupled with long-term, future-proof flexibility.
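As an illustration of the single-processor programming model described above, the following sketch keeps pre-processing, inference, and post-processing in one ordinary C++ program. The run_graph function is a hypothetical placeholder standing in for a compiled ML graph, not Quadric’s actual graph-compiler API.

```cpp
// Conceptual sketch, not Quadric's SDK: a single C++ program that owns the
// whole pipeline (pre-processing, inference, post-processing). In a
// partitioned NPU + DSP + CPU design these stages would be split across
// separate toolchains with hand-managed buffer transfers; on a single
// programmable processor they can stay in one code base.
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

// Placeholder for a compiled ML graph; a real entry point would come from
// the vendor's graph compiler and is assumed here purely for illustration.
std::vector<float> run_graph(const std::vector<float>& input) {
    std::vector<float> logits(10, 0.0f);
    if (!input.empty()) logits[0] = input[0];   // stand-in for real inference
    return logits;
}

// C++ pre-processing: normalize 8-bit pixels to floats.
std::vector<float> preprocess(const std::vector<std::uint8_t>& pixels) {
    std::vector<float> out(pixels.size());
    std::transform(pixels.begin(), pixels.end(), out.begin(),
                   [](std::uint8_t p) { return p / 255.0f; });
    return out;
}

// C++ post-processing: argmax over the logits.
std::size_t postprocess(const std::vector<float>& logits) {
    return static_cast<std::size_t>(std::distance(
        logits.begin(), std::max_element(logits.begin(), logits.end())));
}

int main() {
    std::vector<std::uint8_t> frame(224 * 224, 128);  // dummy camera frame
    std::size_t cls = postprocess(run_graph(preprocess(frame)));
    return static_cast<int>(cls);
}
```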
Quadric’s Chimera GPNPUs are licensable processor IP cores delivered in synthesizable source RTL form. Blending the best attributes of both neural processing units (NPUs) and digital signal processors (DSPs), Chimera GPNPUs are aimed at inference workloads in a variety of high-volume end markets, including mobile devices, digital home products, automotive, and network-edge compute systems.