DPU for Convolutional Neural Network
The DPU IP can be integrated as a block in the programmable logic (PL) of the selected Zynq®-7000 SoC and Zynq UltraScale™+ MPSoC devices, with direct connections to the processing system (PS). To use the DPU, the instruction stream and input image data must be placed at memory addresses the DPU can access. DPU operation also requires the application processing unit (APU) to service interrupts in order to coordinate data transfer.
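The host-side flow described above (place instructions and input data in DPU-visible memory, start the accelerator, wait for its completion interrupt) can be sketched as follows. This is a minimal simulation in plain C, not real driver code: the buffer layout, offsets, and the `dpu_done` flag are illustrative assumptions, and a real design would use physically contiguous, DPU-accessible memory and an actual interrupt handler on the APU.

```c
/* Sketch of the host-side DPU flow, simulated in ordinary memory.
   Offsets, sizes, and the "interrupt" flag are illustrative
   assumptions, not the real DPU register map or driver API. */
#include <stdint.h>
#include <string.h>

#define BUF_SIZE     4096   /* stands in for the DPU-accessible DDR region */
#define INSTR_OFFSET 0      /* hypothetical: instruction stream first      */
#define IMAGE_OFFSET 1024   /* hypothetical: input image data after it     */

/* Flag a real driver would set from the DPU interrupt handler on the APU. */
static volatile int dpu_done;

/* Place the instruction stream and input image where the DPU can read them. */
void dpu_load(uint8_t *buf, const uint8_t *instr, size_t ilen,
              const uint8_t *image, size_t imlen) {
    memcpy(buf + INSTR_OFFSET, instr, ilen);
    memcpy(buf + IMAGE_OFFSET, image, imlen);
}

/* Simulated "kick and wait": a real APU would start the DPU through a
   control register, then block until the interrupt handler sets dpu_done. */
int dpu_run(uint8_t *buf) {
    dpu_done = 0;
    (void)buf;        /* hardware would fetch instructions/data from buf */
    dpu_done = 1;     /* the interrupt service routine would do this     */
    return dpu_done ? 0 : -1;
}
```

In a real system the buffer would come from a CMA/DMA allocator so the PL side sees a stable physical address, and `dpu_run` would sleep on a completion event rather than spin.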
Deep Learning Processor IP
- Deep Learning Processor
- Scalable, future-proof NPU IP family for generative and classic AI with high power efficiency
- PPA-optimized flexible AI processor IP
- High-efficiency deep learning accelerator for edge and end-point inference
- Ultra Low Power Edge AI Processor
- Neural Network Processor IP