
AI Processor Semiconductor IPs

The AI Processor category within our semiconductor IP catalog is dedicated to state-of-the-art technologies that power artificial intelligence applications across industries. AI processors are specialized computing engines designed to accelerate machine learning tasks and execute complex algorithms efficiently. This category includes a diverse collection of semiconductor IPs built to improve both performance and power efficiency in AI-driven devices.

AI processors play a critical role in the world of AI and machine learning, where fast processing of vast datasets is essential. These processors appear in applications ranging from consumer electronics such as smartphones and smart home devices to advanced robotics and autonomous vehicles. By performing the rapid computations required for AI tasks such as neural network training and inference, these IP cores enable smarter, more responsive, and more capable systems.

In this category, developers and designers will find semiconductor IPs, including neural processing units (NPUs), tensor processing units (TPUs), and other AI accelerators, offering various levels of processing power and architectural designs to suit different AI applications. The availability of such highly specialized IPs lets developers integrate AI functionality into their products swiftly, reducing development time and cost.

As AI technology continues to evolve, the demand for robust and scalable AI processors increases. Our semiconductor IP offerings in this category are designed to meet the challenges of rapidly advancing AI technologies, ensuring that products are future-ready and equipped to handle the complexities of tomorrow’s intelligence-driven tasks. Explore this category to find cutting-edge solutions that drive innovation in artificial intelligence systems today.

All semiconductor IP: 16 IPs available

MetaTF

MetaTF is a comprehensive development environment for neural networks, aligned with BrainChip's vision of enhancing Edge AI. It simplifies the creation, training, and testing of neural networks for BrainChip's Akida neural processor by supporting automatic conversion of TensorFlow models. The tool is built on Python and popular libraries such as NumPy and Jupyter notebooks, letting developers work within a familiar ecosystem. A standout feature of MetaTF is its ability to transform Convolutional Neural Networks (CNNs) into Spiking Neural Networks (SNNs), optimizing them for low-latency, energy-efficient edge applications without requiring developers to learn a new framework. Beyond conversion, MetaTF offers a range of pre-built network models in its model zoo, a repository of quantized Keras models converted with CNN2SNN that gives developers a head start on efficient, performant solutions. Its integration with BrainChip's Akida enables true on-device intelligence, allowing developers to innovate without the limitations traditionally posed by less efficient networks.
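A minimal sketch of the conversion flow described above, assuming BrainChip's cnn2snn Python package is installed and a pre-quantized Keras model is available (the file name is hypothetical, and exact APIs may vary by release):

```python
# Minimal sketch of the MetaTF CNN-to-SNN flow; assumes BrainChip's
# cnn2snn package is installed. Exact APIs may differ between releases.
import tensorflow as tf
from cnn2snn import convert

# Load a quantized Keras CNN (hypothetical file; in practice this would come
# from the MetaTF model zoo or cnn2snn's quantization utilities).
quantized_model = tf.keras.models.load_model("quantized_cnn.h5")

# Convert the quantized Keras CNN into a Spiking Neural Network for Akida.
akida_model = convert(quantized_model)
akida_model.summary()
```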

Vendor: BrainChip | Category: AI Processor

NMP-750

The NMP-750 is engineered for edge computing, targeting markets such as automotive, augmented and virtual reality, smart infrastructure, and communications. It is designed to address advanced applications such as mobility management, autonomous control, multi-camera processing, and energy management. The IP provides up to 16 TOPS of performance with a memory capacity of up to 16 MB, with a choice of either a RISC-V or an Arm Cortex-R/A 32-bit CPU. It also includes 3 x AXI4 interfaces at 128 bits each, supporting the high-throughput, low-latency operation vital for real-time applications and large-scale data processing.

Vendor: AIM Future | Category: AI Processor

NMP-550

AIM Future's NMP-550 is tailored for tasks demanding greater computational capability. It is a performance-efficient accelerator aimed at markets such as automotive, mobile computing, AR/VR, drones, robotics, and medical devices, and excels in tasks like driver monitoring, image and video analytics, super resolution, and security compliance. The IP offers up to 6 TOPS of processing power and incorporates up to 6 MB of local memory. It can be equipped with either a RISC-V or an Arm Cortex-M/A 32-bit CPU, complemented by 3 x AXI4 interfaces of 128-bit width, supporting the fast, flexible data handling required in complex operational environments.

Vendor: AIM Future | Category: AI Processor

NMP-350

The NMP-350 is a highly efficient endpoint accelerator designed for power- and cost-sensitive applications. It serves markets such as automotive, AIoT, Industry 4.0, smart appliances, and wearables, handling tasks like driver authentication, digital mirrors, personalization, predictive maintenance, machine automation, and health monitoring. Its architecture delivers up to 1 TOPS with up to 1 MB of local memory, featuring a RISC-V or Arm Cortex-M 32-bit CPU and 3 x AXI4 interfaces, each 128 bits wide. These capabilities make the NMP-350 efficient in both energy usage and performance.

Vendor: AIM Future | Category: AI Processor

Origin E2

The Origin E2 offers a balanced AI inference solution designed for power and area efficiency, suitable for smartphones, edge nodes, and various consumer devices. Employing an efficient packet-based architecture, the E2 supports a range of neural networks, including RNNs, CNNs, LSTMs, and DNNs, with performance scaling from 1 to 20 TOPS. The core advantage of the E2 lies in its ability to handle extensive AI processing with minimal latency and reduced power consumption, bypassing typical hardware-specific constraints. This flexibility lets designers deploy trained AI models without compromising precision or accuracy, which is crucial for maintaining high-quality AI functions. With its scalable architecture, the E2 manages resources efficiently and sustains high utilization rates, delivering strong performance at a power efficiency of 18 TOPS/W, making it well suited to applications where space and power are limited.

Vendor: Expedera | Process: 40nm | Foundry: All Foundries | Category: AI Processor

Origin E8

The Origin E8 NPU is built for performance-intensive applications requiring exceptional throughput and low latency, such as data centers and advanced automotive systems. It delivers up to 128 TOPS per core and scales to PetaOps across multiple cores (eight cores, for example, reach roughly 1 PetaOps). Utilizing an innovative packet-based architecture, the E8 achieves parallel execution, significantly improving resource allocation and performance predictability without altering pre-trained neural network models. Its efficient design reduces the need for hardware-specific optimization, improving system scalability and reducing complexity. The E8 excels in scenarios requiring robust AI processing, such as autonomous driving and complex data center operations. Highly customizable, the IP core can be aligned with specific processing needs, ensuring that AI models execute with precision and minimal resource usage even under demanding conditions, as reflected in its leading efficiency of 18 TOPS/W.

Vendor: Expedera | Process: 40nm | Foundry: All Foundries | Category: AI Processor

Origin E1

The Origin E1 neural engines are designed for applications requiring consistent low-power AI processing, particularly in consumer electronics such as home appliances and smartphones. Leveraging a unique packet-based architecture, the E1 minimizes power consumption and system cost while delivering 1 TOPS of performance. This architecture allows parallel execution across neural network layers, optimizing resource utilization and performance predictability. The E1 is tailored for always-sensing technologies, such as cameras in smart devices that continuously process visual data to enhance user interaction; such capabilities require precise power management, which the E1 achieves through minimal external memory usage. Delivering up to 18 TOPS per watt, the E1 LittleNPU is optimized for executing the high-quality neural networks developed by leading OEMs. Because the E1 can run without external memory, it reduces both the power footprint and silicon area, and its deterministic performance ensures tasks execute without data loss or back pressure, making it a natural choice for power-sensitive applications.
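A quick back-of-envelope check of the quoted figures, assuming the peak 1 TOPS and 18 TOPS/W numbers apply simultaneously (actual power depends on workload, clock, and process node):

```python
# Back-of-envelope power estimate from the peak figures quoted above.
# Assumption: peak TOPS and TOPS/W hold at the same operating point;
# real-world power varies with workload, clock, and process node.
perf_tops = 1.0               # Origin E1 peak performance, TOPS
efficiency_tops_per_w = 18.0  # quoted power efficiency, TOPS/W

power_w = perf_tops / efficiency_tops_per_w
print(f"Estimated power at peak throughput: {power_w * 1e3:.0f} mW")  # ~56 mW
```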

Vendor: Expedera | Process: 40nm | Foundry: All Foundries | Category: AI Processor

Wormhole

The Wormhole processor series balances power and computational throughput for AI applications. The series includes the n150s and n300s models, with differing capacities to serve varying AI computing demands. The n150s, a standard-height, 3/4-length card, incorporates a single Wormhole processor and operates at up to 160 watts, making it well suited to scalable AI solutions that do not compromise on performance. The n300s features dual Wormhole processors, doubling the computational power, and operates at up to 300 watts; it is particularly suitable for high-output environments such as data centers and for users running intensive AI model computations. With its standard height and substantial power envelope, the n300s provides a comprehensive solution for large-scale AI operations. The Wormhole family is built for AI environments demanding high computational power and efficiency, delivering peak performance without excessive power draw.

Vendor: Tenstorrent | Foundry: All Foundries | Category: AI Processor

NPU

The OPENEDGES NPU is a neural processing unit developed to accelerate machine learning tasks in embedded systems. This specialized processor is designed for efficient execution of neural networks, making it suitable for applications ranging from artificial intelligence to edge computing. The NPU supports various neural network models and can be integrated into a wide array of systems, enhancing device capabilities without significantly increasing their power footprint. By offloading intricate computations from the main processor, it improves overall system throughput, allowing faster and more responsive AI applications. The NPU is also compatible with standard machine learning frameworks, facilitating easy integration and deployment, and its ability to process large volumes of data rapidly makes it ideal for real-time analytics and intelligent data processing across numerous sectors.

Vendor: OPENEDGES Technology | Category: AI Processor

Grayskull

Grayskull is a state-of-the-art AI graph processor designed for optimized artificial intelligence computation. It comes in two variants for different performance needs: the e75 and the e150. The e75 is a compact, low-profile, half-length PCIe Gen 4 board equipped with a single Grayskull processor. Operating at 75 watts, this entry-level board is well suited to personal use or small AI projects and integrates easily into smaller systems. The e150 is engineered for higher demands: a standard-height, three-quarter-length board that also carries a single Grayskull processor but operates at up to 200 watts. The increased power cap allows greater throughput, making it more suitable for intensive AI computations in professional or research settings. Grayskull processors are designed to maximize the use of SRAM for self-attention layers, optimizing speed and reducing latency compared with traditional DRAM implementations. This ensures the rapid data processing required by sophisticated AI models, supporting diverse AI applications and research projects.

Vendor: Tenstorrent | Foundry: All Foundries | Category: AI Processor

Low Power Neural Network Processor

The Low Power Neural Network Processor is designed for resource-efficient AI processing on IoT devices. With power and area optimization at its core, it supports integer inference and delivers substantial operations per second for neural network tasks. It is fully customizable and integrates smoothly into existing infrastructures, complete with a comprehensive firmware solution that bridges to current machine learning libraries. It is well suited to smart grid and industrial IoT applications, enabling efficient distributed learning without taxing power resources.
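To illustrate what integer inference means in practice, here is a generic INT8 sketch using symmetric quantization; this is a textbook illustration, not this vendor's firmware API:

```python
# Generic illustration of INT8 integer inference (symmetric quantization);
# not the vendor's actual firmware interface.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a symmetric (zero-point-free) scheme."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def int8_matmul(qa, qb, scale_a, scale_b):
    """Multiply int8 operands, accumulate in int32, rescale to float."""
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (scale_a * scale_b)

x = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 3).astype(np.float32)
qx, qw = quantize(x, 0.05), quantize(w, 0.02)
y = int8_matmul(qx, qw, 0.05, 0.02)  # approximates x @ w in integer arithmetic
```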

Vendor: Low Power Futures | Category: AI Processor

InferX AI

Flex Logix's InferX AI platform offers a highly efficient AI processing solution designed for seamless integration into systems-on-chips (SoCs). It provides exceptional batch processing for vision AI, supporting low-power, cost-effective operation through its highly efficient architecture. Leveraging INT8, INT16, and BF16 precision, InferX delivers the high accuracy modern AI workloads need. A standout feature of InferX is its reconfigurable architecture, which adapts to run new models efficiently, such as YOLO and even Transformers; new operators can be implemented accurately, and multiple models can run concurrently. InferX also streamlines integration into existing SoCs and software stacks, making it an attractive choice for developers seeking state-of-the-art AI capabilities without significant restructuring. It demonstrates proven reliability, having been incorporated into dozens of chips and production-qualified, and delivers the performance of high-end discrete AI processors within a smaller, more power-efficient footprint, making it ideal for applications needing rapid and flexible AI processing.

Vendor: Flex Logix | Foundry: TSMC | Category: AI Processor

RAPTOR

RAPTOR is a Neural Processing Unit (NPU) designed specifically for energy-efficient AI and machine learning applications. Capable of executing deep neural networks with impressive efficiency, RAPTOR suits far-edge AI systems that require high computational capability at low power. The IP features a flexible architecture accommodating up to 128 processing units, optimizing compute at minimal power consumption. RAPTOR includes a robust AI stack that facilitates deploying neural networks with minimal inference latency and power draw, catering particularly to AI-driven data processing tasks. It also ships with advanced SDK tools that speed the transition from models designed in data-science environments to functional applications on physical devices. Its well-engineered integration support ensures it meets diverse market demands while maintaining optimal energy parameters.

Vendor: Dolphin Design | Foundry: TSMC | Category: AI Processor

Chimera GPNPU

The Chimera General Purpose Neural Processing Unit, or GPNPU, offers a transformative approach to AI computing. By integrating high-performance machine learning inference with the execution of scalar and vector operations in a single pipeline, it simplifies SoC design while enhancing both flexibility and processing capability, handling diverse workloads across multiple application domains efficiently. Engineered for flexibility, the Chimera GPNPU combines the capabilities of NPUs and DSPs into a unified processor, allowing machine learning tasks and traditional signal processing functions to run on a single hardware entity. As a licensable IP core, it lets developers integrate advanced AI functions into products ranging from consumer electronics to automotive systems. Uniquely, the GPNPU architecture supports a broad spectrum of AI workloads, adapting to the continuously evolving landscape of machine learning models through an innovative pipeline that supports modeless handling of matrix, vector, and scalar instructions and responds dynamically to changing ML graph processing needs. The Chimera family is also characterized by its future-proof design: scalable performance lets users implement it across various process technologies and adapt it to specific application needs. This adaptability, combined with robust SDK support for developers, makes the GPNPU an optimal choice for organizations looking to future-proof their AI solutions.
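As a conceptual sketch of this single-pipeline idea, the following mixes matrix (ML), vector (DSP), and scalar control work in one kernel; plain NumPy stands in for the hardware stages, and this is not Quadric's actual Chimera SDK or programming model:

```python
# Conceptual sketch: one kernel interleaving matrix (ML), vector (DSP), and
# scalar control work, as a unified GPNPU would run in a single pipeline
# rather than splitting it across a separate NPU and DSP. NumPy illustration
# only; not Quadric's Chimera toolchain.
import numpy as np

def unified_kernel(frame, weights, fir_taps, threshold):
    # Matrix stage: a toy fully connected layer (ML inference work).
    features = frame @ weights
    # Vector stage: FIR filtering of the feature stream (classic DSP work).
    filtered = np.convolve(features.ravel(), fir_taps, mode="same")
    # Scalar stage: control-flow decision on the result.
    return "event" if filtered.max() > threshold else "idle"

frame = np.random.randn(16, 32)
weights = np.random.randn(32, 8)
print(unified_kernel(frame, weights, fir_taps=np.ones(4) / 4, threshold=2.0))
```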

Vendor: Quadric | Foundry: All Foundries | Category: AI Processor

Origin E6

Origin E6 is a cutting-edge NPU IP tailored for high-demand edge inference, offering performance from 16 to 32 TOPS. It supports significant AI models such as image transformers and stable diffusion, as well as complex traditional AI networks like CNNs and LSTMs. Its packet-based architecture facilitates efficient resource use, achieving parallel processing across neural network layers to enhance performance. Designed for versatility, the E6 covers a broad range of applications including smartphones, vehicles, and AR/VR systems, with support for both standard and custom neural networks. The architecture delivers maximum workload-processing efficiency, maintaining up to 90% processor utilization and thereby significantly reducing power and memory overheads. Customization is a major highlight of the E6: it adapts to specific customer PPA requirements to provide the most relevant configurations. This flexibility ensures the E6 meets the demands of both current and emerging AI networks while maintaining high power efficiency at 18 TOPS/W.

Vendor: Expedera | Process: 40nm | Foundry: All Foundries | Category: AI Processor

nearbAI

The nearbAI IP cores are cutting-edge solutions for ultra-low-power AI-enabled chips, designed to provide immediate visual and spatial feedback through efficient neural processing. Each core operates as a neural processing unit (NPU) and ships with a neural network compiler, making it ideal for live sensory augmentation. The cores suit real-time data processing in battery-powered mobile devices, XR, and IoT products, and are optimized for minimal power consumption while balancing area and latency across different use cases. They support seamless local processing, maintain high data security, and offer optional cloud interoperability; because the cores handle data processing independently, extensive signal transmission between sensors and processing units is unnecessary. The architecture supports a broad spectrum of neural networks with zero-latency switching between them, enhancing overall computational efficiency, and includes features such as real-time pattern recognition and multi-modal sensor fusion, which are crucial for immersive experiences and high-stakes applications like face recognition and 3D spatial detection. Flexible and scalable across silicon technologies from 40nm to 4nm, the nearbAI IP is complemented by an optimizing compiler that enables smooth integration and rapid prototyping, accelerating time-to-market.

Vendor: easics | Category: AI Processor