IP cores for ultra-low power AI-enabled devices
Each nearbAI core is an ultra-low-power neural processing unit (NPU) and ships with an optimizer / neural network compiler. It delivers immediate visual and spatial feedback from sensory inputs, a prerequisite for live augmentation of the human senses.
Ideal for battery-powered mobile, XR and IoT devices
Why nearbAI?
Highly computationally efficient and flexible NPUs
- Optimized neural network inferencing for visual, spatial and other applications
- Unparalleled flexibility: customized & optimized for the customer's use case
- Produces an NPU IP core optimized for the customer's use case, trading off power, area, latency and memories
- Minimized development & integration time
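The power / latency / energy trade-off above can be made concrete with back-of-the-envelope arithmetic. All numbers below are hypothetical for illustration, not nearbAI specifications:

```python
# Illustrative NPU energy-budget arithmetic (hypothetical numbers).

def energy_per_inference_mj(avg_power_mw: float, latency_ms: float) -> float:
    """Energy per inference in millijoules: E = P * t."""
    return avg_power_mw * (latency_ms / 1000.0)

def battery_hours(battery_mwh: float, avg_power_mw: float) -> float:
    """Runtime on a battery of a given capacity at constant average power."""
    return battery_mwh / avg_power_mw

# Suppose an NPU draws 20 mW while active and finishes one inference in 10 ms:
e = energy_per_inference_mj(20.0, 10.0)   # 0.2 mJ per inference

# At 30 fps the NPU is busy 30 * 10 ms = 300 ms per second (30% duty cycle),
# so the average compute power is 20 mW * 0.3 = 6 mW.
duty_cycle = 30 * 10.0 / 1000.0
avg_mw = 20.0 * duty_cycle

# A small 300 mWh wearable battery would then sustain the NPU for 50 hours
# (ignoring the rest of the system).
hours = battery_hours(300.0, avg_mw)
print(e, avg_mw, hours)
```

This is the kind of arithmetic that makes "average power" and "energy per inference" separate constraints: a faster core at higher peak power can still win on energy if its duty cycle shrinks enough.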
- Enable lightweight devices with long battery life ... run heavily optimized AI-based functions locally at ultra-low power
- Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
- Enable smart and flexible capabilities ... fill the gap between "Swiss-army-knife" XR / AI mobile processor chips and limited-capability edge IoT / AI chips
Let's do a custom benchmark together: provide us with your use case:
- Quantized or unquantized NN model(s): ONNX, TensorFlow (Lite), PyTorch, or Keras
- Constraints: average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node
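The quantized-vs-unquantized distinction above matters because integer arithmetic is far cheaper in silicon than floating point. A minimal sketch of symmetric per-tensor int8 post-training quantization, purely illustrative and not the nearbAI toolchain:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Toy weight tensor: each value maps to an 8-bit integer plus one shared scale.
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = float(np.abs(w - w_hat).max())
print(q, s, err)
```

The reconstruction error stays within half a quantization step, while storage drops 4x versus float32 and the multiply-accumulates become integer ops, which is where most of an NPU's power savings come from.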
Block Diagram of the IP cores for ultra-low power AI-enabled devices