
Chimera GPNPU

From Quadric

Description

The Chimera GPNPU is a neural processing unit designed for on-device AI computing. Its architecture provides a single, unified execution pipeline that handles matrix operations, vector operations, and the control code normally delegated to separate cores, which simplifies SoC design while improving developer productivity and performance. The Chimera GPNPU runs a broad range of AI models, including classical network backbones, vision transformers, and large language models, and its scalable design supports workloads of up to 864 TOPS, making it suitable for applications from edge devices to automotive-grade AI systems.
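As a conceptual illustration only (not Quadric's SDK, instruction set, or compiler output), the sketch below shows the kind of kernel a unified pipeline targets: scalar control decisions and an INT8 matrix multiply-accumulate loop expressed in one routine, rather than split between a host CPU and an offloaded accelerator. The tile size and function names are hypothetical.

```cpp
// Conceptual sketch: portable C++ modeling a "unified" kernel in which control
// code and INT8 matrix math live in one routine. This is NOT the Chimera
// instruction set; the 4x4 tile and all names are hypothetical.
#include <array>
#include <cstdint>
#include <cstdio>

constexpr int kTile = 4;  // hypothetical tile edge; a real NPU uses a 2D MAC array

using Tile8  = std::array<std::array<int8_t,  kTile>, kTile>;
using Tile32 = std::array<std::array<int32_t, kTile>, kTile>;

// One kernel mixes a per-row control decision (skip all-zero activation rows)
// with the multiply-accumulate work a matrix unit would execute.
Tile32 fused_tile_matmul(const Tile8& act, const Tile8& wgt) {
    Tile32 acc{};  // int32 accumulators, zero-initialized
    for (int i = 0; i < kTile; ++i) {
        bool row_is_zero = true;                       // control code ...
        for (int k = 0; k < kTile; ++k) row_is_zero = row_is_zero && (act[i][k] == 0);
        if (row_is_zero) continue;                     // ... steering the math
        for (int j = 0; j < kTile; ++j)
            for (int k = 0; k < kTile; ++k)            // matrix work (MACs)
                acc[i][j] += int32_t(act[i][k]) * int32_t(wgt[k][j]);
    }
    return acc;
}

int main() {
    Tile8 a{}, w{};
    a[0][0] = 3; w[0][0] = 5;                          // tiny example: acc[0][0] == 15
    Tile32 r = fused_tile_matmul(a, w);
    std::printf("acc[0][0] = %d\n", r[0][0]);
    return 0;
}
```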

This licensable processor core is built on a hybrid architecture that combines a Von Neumann scalar pipeline with 2D SIMD matrix instructions, enabling efficient execution of a wide range of data-processing tasks. The Chimera GPNPU is optimized for straightforward integration into modern SoC designs for high-speed, power-efficient computing. Key features include an instruction set tailored to ML workloads, effective memory optimization strategies, and systematic on-chip data handling, all of which minimize power consumption while maximizing throughput and computational accuracy.
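To make the on-chip data-handling point concrete, here is a minimal sketch of compiler-planned data movement, assuming an explicitly managed local buffer standing in for on-chip memory; this is a generic tiling pattern, not Quadric's toolchain, and the buffer size and names are hypothetical.

```cpp
// Conceptual sketch of statically planned on-chip data movement: a tensor is
// processed in fixed-size tiles staged through a small "local" buffer, rather
// than relying on a hardware data cache. Not Quadric's compiler or memory map.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

constexpr std::size_t kLocalBytes = 1024;        // stand-in for a slice of local memory

// Scale an INT8 feature map in place, one tile at a time.
void scale_in_tiles(std::vector<int8_t>& data, int shift) {
    int8_t local[kLocalBytes];                    // explicitly managed scratch buffer
    for (std::size_t base = 0; base < data.size(); base += kLocalBytes) {
        std::size_t n = std::min(kLocalBytes, data.size() - base);
        std::memcpy(local, data.data() + base, n);          // modeled DMA-in
        for (std::size_t i = 0; i < n; ++i)                 // compute on the local tile
            local[i] = static_cast<int8_t>(local[i] >> shift);
        std::memcpy(data.data() + base, local, n);          // modeled DMA-out
    }
}

int main() {
    std::vector<int8_t> fmap(4096, 64);
    scale_in_tiles(fmap, 2);
    std::printf("fmap[0] = %d\n", fmap[0]);       // 64 >> 2 == 16
    return 0;
}
```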

Furthermore, the Chimera GPNPU is built to accommodate future machine learning models as well as today's workloads. Its safety enhancements address stringent automotive safety requirements, supporting reliable operation in critical applications such as ADAS and in-cabin monitoring systems. This combination of performance, efficiency, and scalability makes the Chimera GPNPU well suited to industries that demand high reliability and long-term support.

Features
  • Hybrid Von Neumann + 2D SIMD architecture
  • Compiler-driven memory management
  • Scalable to 864 TOPS (see the peak-throughput arithmetic sketch below)
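For context on the headline throughput figure: peak TOPS for an NPU is conventionally quoted as 2 x (number of MAC units) x clock frequency, counting each multiply-accumulate as two operations. The snippet below works through that arithmetic with illustrative numbers chosen only to reproduce the 864 TOPS figure; they are not Quadric's published MAC count or clock rate.

```cpp
// Peak-TOPS arithmetic only; the MAC count and clock below are hypothetical
// values chosen to illustrate the formula, not the actual Chimera configuration.
#include <cstdio>

int main() {
    const double macs      = 216000.0;   // hypothetical INT8 MAC units
    const double clock_ghz = 2.0;        // hypothetical clock frequency, GHz
    // Each MAC counts as 2 ops (multiply + add); GHz -> ops/s via 1e9.
    const double tops = 2.0 * macs * clock_ghz * 1e9 / 1e12;
    std::printf("peak = %.0f TOPS\n", tops);   // 864 TOPS with these numbers
    return 0;
}
```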
Foundries & Process Nodes
  • Intel Foundry: 14nm
  • TSMC: 14nm FinFET
Tech Specs
  • Categories: Processor > AI Processor; Platform Level IP > Processor Core Independent; Processor > CPU; Processor > DSP Core; Graphic & Peripheral > GPU; Multimedia > VGA
  • Instruction Word: 64-bit, single instruction issue per clock
  • Pipeline: 7-stage, in-order
  • Instruction Cache: 128KB/256KB, configurable
  • Local L2 Memory: 1MB to 16MB, configurable
  • Machine Learning Inference: Optimized for INT8 (see the quantization sketch after this list)
  • Memory Interfaces: AXI
  • Availability: All countries and regions
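To illustrate why INT8-optimized inference matters, the sketch below applies the standard affine (scale + zero-point) quantization scheme used to map floating-point tensors onto INT8 and back. This is the generic textbook arithmetic, not a description of the Chimera quantization flow, and all values are illustrative.

```cpp
// Generic affine INT8 quantization (scale + zero-point), shown only to
// illustrate the arithmetic behind INT8-optimized inference; this is not
// Quadric's quantization flow.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

int8_t quantize(float x, float scale, int zero_point) {
    int q = static_cast<int>(std::lround(x / scale)) + zero_point;
    return static_cast<int8_t>(std::clamp(q, -128, 127));   // saturate to INT8 range
}

float dequantize(int8_t q, float scale, int zero_point) {
    return scale * static_cast<float>(q - zero_point);
}

int main() {
    const float scale = 0.05f;   // illustrative: covers roughly [-6.40, 6.35]
    const int   zp    = 0;
    float  x = 1.37f;
    int8_t q = quantize(x, scale, zp);   // 1.37 / 0.05 ~= 27
    std::printf("x=%.2f  q=%d  x'=%.2f\n", x, q, dequantize(q, scale, zp));
    return 0;
}
```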
Applications
  • Automotive
  • Digital Home
  • Network Edge