AI processing engine for Wake Word, Voice Commands, Acoustic Event Detection, Speaker ID and Sensors
The AON1000™ IP is part of the AONVoice™ processor family, AON's application-specific processors for deep neural network inference at the edge. Unlike general-purpose processors, DSPs, and dedicated processors that rely on third-party AI algorithms, AON's processors optimize accuracy at ultra-low power by embedding proprietary, use-case-specific neural network architectures and integrating tuned inference algorithms. AON processors also support training with a unique data augmentation tool.
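AON's augmentation tool itself is proprietary, but a common way to train detectors that hold up under real-world, noisy conditions is to mix clean wake-word recordings with background noise at randomized signal-to-noise ratios. The sketch below illustrates only that general idea; the function name, parameters, and synthetic test signals are illustrative assumptions, not part of AON's tooling.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a clean utterance with background noise at the requested SNR (dB)."""
    # Loop the noise if it is shorter than the clean clip, then trim to length.
    if len(noise) < len(clean):
        reps = int(np.ceil(len(clean) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(clean)]

    clean_power = float(np.mean(clean ** 2))
    noise_power = float(np.mean(noise ** 2)) + 1e-12
    # Choose a gain so that clean_power / (gain**2 * noise_power) == 10**(snr_db / 10).
    gain = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Stand-ins for a 1 s wake-word recording and cafe/street background noise.
    t = np.arange(16000) / 16000.0
    clean = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
    noise = rng.normal(0.0, 0.1, 16000).astype(np.float32)
    for snr_db in (20.0, 10.0, 5.0, 0.0):
        noisy = mix_at_snr(clean, noise, snr_db)
        print(f"augmented copy at {snr_db:>4.1f} dB SNR, peak {np.abs(noisy).max():.3f}")
```

Sweeping the SNR from clean down to 0 dB in this way yields training data that spans quiet rooms to loud environments, which is the kind of coverage implied by the "real-world, noisy conditions" claim above.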
The AON1000™ compact AI processing engine delivers the industry's highest hit-rate accuracy per microwatt under real-world, noisy conditions.
AON1000™ Hardware IP can be integrated into a standalone chip or into a sensor such as a microphone, allowing the Application Processor to remain in an idle state during always-on listening.
AONDevices also offers the AON1000 as a software algorithm that can be ported to a third-party DSP for less power-sensitive applications.
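The split described above, with always-on detection running in the sensor or a companion chip while the Application Processor sleeps until needed, is a standard wake-on-event pattern. The minimal host-side sketch below illustrates that pattern only; `wait_for_aon_event`, the event codes, and the pipeline hand-off are hypothetical placeholders, not the actual AON1000 interface.

```python
import time
from enum import Enum, auto

class AonEvent(Enum):
    """Hypothetical event codes a detector like the AON1000 might report."""
    WAKE_WORD = auto()
    VOICE_COMMAND = auto()
    ACOUSTIC_EVENT = auto()
    SPEAKER_ID = auto()

def wait_for_aon_event() -> AonEvent:
    """Placeholder for blocking until the always-on engine raises its wake line.

    In a real integration this would be a GPIO interrupt or an I2C/SPI mailbox
    read; here it simply sleeps briefly and reports a wake-word detection.
    """
    time.sleep(0.5)
    return AonEvent.WAKE_WORD

def start_full_voice_pipeline() -> None:
    """Placeholder for powering up the heavyweight ASR/NLU stack on the host."""
    print("wake word detected -> waking application processor pipeline")

if __name__ == "__main__":
    # The application processor does no audio processing while idle; only the
    # always-on engine is listening.
    event = wait_for_aon_event()
    if event is AonEvent.WAKE_WORD:
        start_full_voice_pipeline()
    else:
        print(f"low-power event handled without full wake-up: {event.name}")
```

The point of the pattern is that the host's heavy compute stays powered down until the low-power engine has already decided an event is worth waking it for.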