AI Inference IP for 2 to >100 TOPS at low power, low die area
Features
- modular from 2 to >100 TOPS,
- scalable: doubling the silicon area doubles the throughput in TOPS (throughput is what matters),
- low latency: nnMAX loads weights fast, so performance at batch = 1 is usually as good as at large batch sizes; this is critical for edge applications,
- low cost: nnMAX runs its MACs at 60-80% utilization, whereas existing solutions are often below 25%, so nnMAX delivers more throughput from less silicon area,
- low power: nnMAX uses on-chip SRAM efficiently to generate high bandwidth, so little DRAM is needed; data-center-class performance is achievable with one LPDDR4 DRAM for ResNet-50 and two for YOLOv3,
- able to run any kind of neural network, or several at once,
- programmed using TensorFlow or Caffe.
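The cost claim above comes down to simple arithmetic: delivered throughput is peak TOPS scaled by MAC utilization. The sketch below illustrates this with assumed numbers (the 10-TOPS peak figure and the 70% midpoint are illustrative, not vendor specifications):

```python
# Illustration of the utilization claim: effective throughput
# = peak TOPS x MAC utilization. All numbers here are assumptions
# chosen to match the ranges quoted in the feature list.

def effective_tops(peak_tops: float, utilization: float) -> float:
    """Throughput actually delivered at a given MAC utilization."""
    return peak_tops * utilization

# Same silicon area (same peak TOPS), different utilization:
nnmax_delivered = effective_tops(10.0, 0.70)    # 60-80% claimed; midpoint 70%
typical_delivered = effective_tops(10.0, 0.25)  # "often <25%" for existing solutions

print(nnmax_delivered)                    # 7.0 effective TOPS
print(typical_delivered)                  # 2.5 effective TOPS
print(nnmax_delivered / typical_delivered)  # 2.8x throughput per unit area
```

At the quoted utilization ranges, the same silicon delivers roughly 2.4x to 3.2x more useful throughput, which is the basis of the "more throughput out of less silicon area" claim.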
Block Diagram of the AI Inference IP for 2 to >100 TOPS at low power, low die area
