The Chimera GPNPU is a licensable neural processing unit designed for on-device AI computing. Its architecture changes the usual SoC partitioning by providing a single, unified execution pipeline that runs matrix and vector operations alongside the control code normally handled by separate cores, which raises developer productivity and improves performance. The Chimera GPNPU runs a broad range of AI models, including classical backbones, vision transformers, and large language models, and its scalable design extends to workloads of up to 864 TOPs, making it suitable for demanding applications such as automotive-grade AI.

The core is built on a hybrid architecture that combines Von Neumann and 2D SIMD matrix instructions, enabling efficient execution of a wide variety of data-processing tasks, and it is optimized for straightforward integration into modern SoC designs for high-speed, power-efficient computing. Key features include an instruction set tailored to ML workloads, memory-optimization strategies, and systematic on-chip data handling, all aimed at minimizing power consumption while maximizing throughput and computational accuracy. Beyond meeting today's AI processing demands, the Chimera GPNPU is designed to accommodate future advances in machine learning models. Comprehensive safety enhancements address stringent automotive safety requirements, supporting reliable operation in critical applications such as ADAS and in-cabin monitoring. This combination of performance, efficiency, and scalability makes the Chimera GPNPU well suited to AI-driven products in industries that demand high reliability and long-term support.
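As a rough, generic illustration of the kind of mixed workload described above, the NumPy sketch below interleaves a dense matrix multiply with data-dependent control logic. It is not Chimera code and assumes nothing about the GPNPU's instruction set; it only shows why keeping matrix math and control flow in one program, rather than splitting them across an accelerator and a separate scalar core, can simplify the software.

```python
# Generic illustration (not Chimera code): a workload that mixes dense matrix
# math with data-dependent control logic, the combination a unified
# matrix/vector/control pipeline is meant to keep in a single program.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((256, 512)).astype(np.float32)  # activations
weights = rng.standard_normal((512, 10)).astype(np.float32)    # classifier weights

# Matrix portion: a dense GEMM, the part a matrix engine accelerates.
logits = features @ weights

# Control portion: data-dependent filtering and branching, the part that is
# traditionally offloaded to a separate scalar core or DSP.
scores = np.max(logits, axis=1)
keep = scores > 2.0
selected = np.nonzero(keep)[0]
print(f"{selected.size} of {logits.shape[0]} rows passed the threshold")
```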
The Chimera SDK is a comprehensive toolkit for developing and optimizing AI applications that target the Chimera GPNPU. It streamlines the integration of machine learning and conventional C++ code, giving developers the flexibility to carry an application from concept to execution. Supporting a wide range of ML models and formats, the SDK compiles ML graph code into executable C++ code, preserving compatibility with existing system infrastructure. Developers can use the toolchain online through the DevStudio platform or offline via a standalone installation on private servers or clouds. The environment supports compiling, simulating, and profiling code, so performance can be tuned while keeping power consumption low; its integration with the Chimera LLVM C++ Compiler and Instruction Set Simulator provides detailed insight into application behavior, helping fine-tune code to specific design requirements.

The SDK also includes a graph compiler that accepts input graphs in ONNX format from leading ML frameworks such as TensorFlow and PyTorch and applies comprehensive optimizations to improve execution efficiency and reduce resource consumption. Finally, the SDK lets developers redefine ML operator implementations and memory allocation, giving fine-grained control over how AI applications are deployed.
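A minimal sketch of the front end of such a flow is shown below: exporting a PyTorch model to ONNX with the standard torch.onnx.export API and then handing the graph to the vendor compiler. The chimera-compile command name, its flags, and the output name are hypothetical placeholders, not the actual Chimera SDK interface.

```python
# Sketch of an ONNX-based hand-off to a graph compiler such as the Chimera
# SDK's. torch.onnx.export is standard PyTorch; the "chimera-compile" command
# and its flags below are hypothetical placeholders.
import subprocess
import torch
import torchvision

# Any ONNX-exportable model works; ResNet-50 is used here as a stand-in backbone.
model = torchvision.models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)

# Hypothetical command line: pass the ONNX graph to the vendor graph compiler,
# which (per the description above) emits C++ for the target core.
subprocess.run(
    ["chimera-compile", "resnet50.onnx", "--emit-cpp", "-o", "resnet50_chimera"],
    check=True,
)
```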