Ultra-low-power AI/ML processor and accelerator
NPU Clusters:
• Optimized for spatial neural networks (e.g. CNNs, ResNets, MobileNets)
• Sparsity exploitation (see the illustrative sketch after this list)
• Peak MAC performance: 160 GOPS
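Sparsity exploitation in an NPU generally means that multiply-accumulate (MAC) work is skipped whenever a weight or an activation is zero, which is common in pruned CNN layers and after ReLU. The Python sketch below is a minimal illustration of that idea only; it is not the vendor's implementation, and the vector size and sparsity levels are made-up assumptions.

```python
# Minimal sketch of zero-skipping (sparsity exploitation) in a MAC datapath.
# Illustrative only: the vector size and sparsity levels are assumptions,
# not characteristics of this IP.
import numpy as np

def sparse_dot(weights: np.ndarray, activations: np.ndarray):
    """Dot product that only performs (and counts) MACs with non-zero operands."""
    acc = 0.0
    macs = 0
    for w, a in zip(weights, activations):
        if w == 0.0 or a == 0.0:   # zero operand -> hardware can skip this MAC
            continue
        acc += w * a
        macs += 1
    return acc, macs

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)
w[rng.random(1024) < 0.7] = 0.0                  # ~70% weight sparsity (pruned layer)
a = np.maximum(rng.standard_normal(1024), 0.0)   # ReLU output: roughly half zeros
result, macs = sparse_dot(w, a)
print(f"result={result:.3f}, effective MACs={macs} of {w.size}")
```

The fewer effective MACs per inference, the less switching activity, which is the kind of saving a sparsity-aware MAC array can turn into lower energy per inference.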
FETA Cluster:
• Optimized for temporal neural networks (e.g. RNNs such as LSTMs and GRUs)
• Smart temporal feature extraction engine (see the illustrative sketch after this list)
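As a rough illustration of the workload class the FETA cluster targets, the sketch below computes simple per-frame temporal features from a streaming signal and feeds them through a GRU-style recurrent update. Everything here (the chosen features, the reduced GRU equations, and all dimensions and parameter values) is a generic, hypothetical example and says nothing about the actual FETA micro-architecture.

```python
# Minimal sketch: temporal feature extraction feeding a GRU-style recurrence.
# Hypothetical throughout -- features, dimensions, and weights are assumptions.
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 64) -> np.ndarray:
    """Per-frame energy and zero-crossing rate as simple temporal features."""
    n_frames = len(signal) // frame_len
    feats = np.empty((n_frames, 2))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        feats[i, 0] = np.mean(frame ** 2)                            # energy
        feats[i, 1] = np.mean(np.abs(np.diff(np.sign(frame))) > 0)   # zero-crossing rate
    return feats

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: gates decide how much past state to keep."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(1)
x_seq = frame_features(rng.standard_normal(4096))   # 64 frames x 2 features
hidden = 8
W = [0.1 * rng.standard_normal((hidden, 2)) for _ in range(3)]        # input weights
U = [0.1 * rng.standard_normal((hidden, hidden)) for _ in range(3)]   # recurrent weights
h = np.zeros(hidden)
for x in x_seq:                                      # recurrent pass over the feature stream
    h = gru_step(x, h, W[0], U[0], W[1], U[1], W[2], U[2])
print("final hidden state:", np.round(h, 3))
```

In a real deployment the recurrent weights would come from a trained LSTM/GRU model; the point here is only the streaming, state-carrying nature of the computation that distinguishes temporal networks from the spatial CNNs handled by the NPU clusters.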
Block Diagram of the Ultra-low-power AI/ML processor and accelerator
Video Demo of the Ultra-low-power AI/ML processor and accelerator
Emotion detection running from a coin cell battery