
AI Processor Semiconductor IPs

The AI Processor category within our semiconductor IP catalog is dedicated to state-of-the-art technologies that empower artificial intelligence applications across various industries. AI processors are specialized computing engines designed to accelerate machine learning tasks and perform complex algorithms efficiently. This category includes a diverse collection of semiconductor IPs that are built to enhance both performance and power efficiency in AI-driven devices.

AI processors play a critical role in the emerging world of AI and machine learning, where fast processing of vast datasets is crucial. These processors can be found in a range of applications from consumer electronics like smartphones and smart home devices to advanced robotics and autonomous vehicles. By facilitating rapid computations necessary for AI tasks such as neural network training and inference, these IP cores enable smarter, more responsive, and capable systems.

In this category, developers and designers will find semiconductor IPs that provide various levels of processing power and architectural designs to suit different AI applications, including neural processing units (NPUs), tensor processing units (TPUs), and other AI accelerators. The availability of such highly specialized IPs ensures that developers can integrate AI functionalities into their products swiftly and efficiently, reducing development time and costs.

As AI technology continues to evolve, the demand for robust and scalable AI processors increases. Our semiconductor IP offerings in this category are designed to meet the challenges of rapidly advancing AI technologies, ensuring that products are future-ready and equipped to handle the complexities of tomorrow’s intelligence-driven tasks. Explore this category to find cutting-edge solutions that drive innovation in artificial intelligence systems today.

All semiconductor IP: 142 IPs available

Akida 2nd Generation

The 2nd Generation Akida builds on BrainChip's neuromorphic legacy, broadening the range of supported complex network models with weight and activation precision up to 8 bits. This generation adds energy-efficiency and performance optimizations and greater accuracy, catering to a broader set of intelligent applications. Notably, it supports advanced features such as Temporal Event-Based Neural Networks (TENNs), Vision Transformers, and extensive use of skip connections, which elevate its capabilities in spatio-temporal and vision-based applications.

Designed for a variety of industrial, automotive, healthcare, and smart city applications, the 2nd Generation Akida offers on-chip learning, which preserves data privacy by eliminating the need to send sensitive information to the cloud. This reduces latency and secures data, both crucial for future autonomous and IoT applications. With its multi-pass processing capability, Akida works around limited hardware resources, processing complex models efficiently at the edge.

As a flexible and scalable IP platform, it is poised to enhance end-user experiences across industries by enabling efficient real-time AI processing on compact devices. The introduction of long-range skip connections further supports intricate neural networks such as ResNet and DenseNet, showcasing Akida's potential to run deeper models efficiently without excessive dependence on host-CPU computation.

BrainChip
AI Processor, Digital Video Broadcast, IoT Processor, Multiprocessor / DSP, Security Protocol Accelerators, Vision Processor

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited to the automotive, AR/VR, and telecommunications sectors. It offers up to 16 TOPS and 16 MB of local memory, is driven by a RISC-V or Arm Cortex-R/A 32-bit CPU, and provides three AXI4 interfaces for seamless data transfer and processing.

This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. Designed to cope with the rigorous demands of modern digital and autonomous systems, it offers substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it valuable for communications and smart infrastructure management, helping streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation so that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

KL730 AI SoC

The KL730 AI SoC is powered by Kneron's innovative third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This architecture offers enhanced efficiency for the latest CNN network architectures and serves transformer applications by significantly reducing DDR bandwidth requirements. The chip excels in video processing, supporting 4K 60FPS output, with strengths in areas such as noise reduction and low-light imaging. It is ideal for applications in intelligent security, autonomous driving, and video conferencing, among others.

Kneron
TSMC
28nm
A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, CPU, GPU, USB, Vision Processor

Origin E1

The Origin E1 is an optimized neural processing unit (NPU) targeting always-on applications in devices like home appliances, smartphones, and security cameras. It provides a compact, energy-efficient solution with performance tailored to 1 TOPS, making it ideal for systems needing low power and minimal area. The architecture is built on Expedera's unique packet-based approach, which enables enhanced resource utilization and deterministic performance, significantly boosting efficiency while avoiding the pitfalls of traditional layer-based architectures.

The architecture is fine-tuned to support standard and custom neural networks without requiring external memory, preserving privacy and ensuring fast processing. Its ability to process data in parallel across multiple layers results in predictable performance with low power and latency. Always-sensing cameras leveraging the Origin E1 can continuously analyze visual data, facilitating smoother and more intuitive user interactions.

Successful field deployment in over 10 million devices highlights the Origin E1's reliability and effectiveness. Its flexible design allows for adjustments to meet the specific PPA requirements of diverse applications. Offered as Soft IP (RTL) or GDS, this engine blends efficiency and capability, capitalizing on the full scope of Expedera's software tools and custom support features.

Expedera
13 Categories

ADAS and Autonomous Driving

The ADAS and Autonomous Driving solutions from KPIT focus on advancing autonomy in vehicles by addressing key challenges in technology, regulatory compliance, and consumer trust. The solutions include rigorous testing environments and simulation frameworks to explore corner cases and ensure safety. KPIT's autonomous driving solutions aim to enhance feature development, integrate AI-driven decision-making, and provide robust validation environments that empower automakers to deliver safer, more reliable autonomous vehicles.

KPIT Technologies
AI Processor, IoT Processor, Platform Security

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 accelerator module by Axelera AI is engineered for AI inference on edge devices with power and budget constraints. It leverages the quad-core Metis AIPU, delivering exceptional AI processing in a compact form factor. This solution is ideal for a range of applications, including computer vision in constrained environments, providing robust support for multiple camera feeds and parallel neural networks. With its easy integration and the comprehensive Voyager SDK, it simplifies the deployment of advanced AI models, ensuring high prediction accuracy and efficiency. This module is optimized for NGFF (Next Generation Form Factor) M.2 sockets, boosting the capability of any processing system with modest space and power requirements.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, CPU, Processor Core Dependent, Vision Processor, WMV

Metis AIPU PCIe AI Accelerator Card

The PCIe AI Accelerator Card powered by Metis AIPU offers unparalleled AI inference performance suitable for intensive vision applications. Incorporating a single quad-core Metis AIPU, it provides up to 214 TOPS, efficiently managing high-volume workloads with low latency. The card is further enhanced by the Voyager SDK, which streamlines application deployment, offering an intuitive development experience and ensuring simple integration across various platforms. Whether for real-time video analytics or other demanding AI tasks, the PCIe Accelerator Card is designed to deliver exceptional speed and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor, WMV

SCR9 Processor Core

Designed for high-demand applications in server and computing environments, the SCR9 Processor Core is a robust 64-bit RISC-V solution. It features a 12-stage superscalar, out-of-order pipeline to handle intensive processing tasks, further empowered by versatile floating-point and vector processing units. The core meets extensive computing needs with support for up to 16-core clustering and seamless integration with AOSP or Linux operating systems.

A powerful memory subsystem, including L1, L2, and shared L3 caches, enhances data handling, while memory coherency ensures fluid operation in multi-core settings. Extensions for cryptography and vector operations further diversify its application potential, establishing the SCR9 as an ideal candidate for cutting-edge data tasks.

From enterprise servers to personal computing devices, video processing, and high-performance computations for AI and machine learning, the SCR9 delivers across an array of demanding scenarios. Its design integrates advanced power and process technologies to cater to complex computing landscapes, embodying efficiency and innovation in processor core technology.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Cores

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs, making it ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for various industrial applications.

Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing without inflated energy consumption. It is crucial for mobile and battery-operated devices, where every watt conserved adds to the operational longevity of the product, and it aligns with modern demands for eco-friendly and cost-effective technologies in compact electronic devices.

Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

Origin E8

The Origin E8 NPU by Expedera is engineered for the most demanding AI deployments such as automotive systems and data centers. Capable of delivering up to 128 TOPS per core and scalable to PetaOps with multiple cores, the E8 stands out for its high performance and efficient processing. Expedera's packet-based architecture allows for parallel execution across varying layers, optimizing resource utilization and minimizing latency, even under strenuous conditions.

The E8 handles complex AI models, including large language models (LLMs) and standard machine learning frameworks, without requiring significant hardware-specific changes. Its support extends to 8K resolutions and beyond, ensuring coverage for advanced visualization and high-resolution tasks. With its low deterministic latency and minimized DRAM bandwidth needs, the Origin E8 is especially suitable for high-performance, real-time applications.

The high-speed processing and flexible deployment benefits make the Origin E8 a compelling choice for companies seeking robust and scalable AI infrastructure. Through customized architecture, it efficiently addresses the power, performance, and area considerations vital for next-generation AI technologies.
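The per-core-to-PetaOps scaling mentioned above is easy to sanity-check. The sketch below is illustrative arithmetic only, not Expedera's sizing methodology; the utilization parameter is an assumption to model real-world scaling loss:

```python
import math

def cores_needed(target_tops: float, tops_per_core: float = 128.0,
                 utilization: float = 1.0) -> int:
    """Minimum number of NPU cores to reach a target aggregate throughput.

    utilization < 1.0 models efficiency loss when scaling out to many cores.
    """
    effective_tops = tops_per_core * utilization
    return math.ceil(target_tops / effective_tops)

# 1 PetaOps = 1000 TOPS: eight 128-TOPS cores suffice at ideal utilization
print(cores_needed(1000))                    # -> 8
# At an assumed 80% scaling efficiency, ten cores would be needed
print(cores_needed(1000, utilization=0.8))   # -> 10
```

The ceiling division reflects that cores come in whole units; any utilization figure for a real deployment would have to come from the vendor.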

Expedera
12 Categories

Veyron V2 CPU

Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.

Ventana Micro Systems
AI Processor, CPU, Processor Core Dependent, Processor Cores

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. It integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external components like CPUs or internet connections. With support for complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. The solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Its adept memory usage also reduces dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that lets users balance performance against hardware cost. Compatibility with a wide range of transformer-based models, including proprietary modifications, ensures the GenAI v1's robust placement across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems.

RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, sustaining the computation speeds necessary for real-time applications without compromising accuracy. This capability underpins the company's strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing ease of integration and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
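The "tokens per unit of memory bandwidth" point can be made concrete: in single-batch LLM decoding, every generated token requires streaming the full set of weights from memory, so throughput is roughly bandwidth divided by model size. The sketch below is a back-of-the-envelope model; the parameter count, quantization levels, and bandwidth figure are illustrative assumptions, not RaiderChip specifications:

```python
def decode_tokens_per_sec(params_billions: float, bits_per_weight: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on single-batch decode rate for a memory-bound LLM.

    Each token reads every weight once, so rate = bandwidth / model size.
    """
    model_gb = params_billions * bits_per_weight / 8  # weight footprint in GB
    return bandwidth_gb_s / model_gb

# An 8B-parameter model at 4-bit quantization occupies ~4 GB of weights;
# on an assumed 25.6 GB/s LPDDR4 link that caps decoding near 6.4 tokens/s,
# versus ~1.6 tokens/s for the same model held at 16-bit.
print(round(decode_tokens_per_sec(8, 4, 25.6), 1))   # -> 6.4
print(round(decode_tokens_per_sec(8, 16, 25.6), 1))  # -> 1.6
```

This is why 4-bit quantization and bandwidth-efficient design matter more than raw compute for edge LLM inference: the same memory link yields four times the token rate.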

RaiderChip
GlobalFoundries, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

KL520 AI SoC

The KL520 AI SoC is a groundbreaking chip that initiated Kneron's journey in edge AI. It is characterized by its compact size, energy efficiency, and robust performance, suitable for a host or companion AI co-processor role. Compatible with multiple 3D sensor technologies, the KL520 excels in smart home applications like smart locks and cameras. Its small power footprint allows operations on simple power supplies like AA batteries, setting it apart in the market.

Kneron
AI Processor, Camera Interface, Clock Generator, CPU, GPU, Vision Processor

RISC-V Core-hub Generators

The RISC-V Core-hub Generators are sophisticated tools designed to empower developers with complete control over their processor configurations. These generators allow users to customize their core-hubs at both the Instruction Set Architecture (ISA) and microarchitecture levels, offering unparalleled flexibility and adaptability in design. Such capabilities enable fine-tuning of processor specifications to meet specific application needs, fostering innovation within the RISC-V ecosystem.

By leveraging the Core-hub Generators, developers can streamline their chip design process, ensuring efficient and seamless integration of custom features. This toolset not only simplifies the design process but also reduces time-to-silicon, making it ideal for industries seeking rapid advancements in their technological capabilities. The user-friendly interface and robust support of these generators make them a preferred choice for developing cutting-edge processors.

InCore Semiconductors’ RISC-V Core-hub Generators represent a significant leap forward in processor design technology, emphasizing ease of use, cost-effectiveness, and scalability. As demand for tailored and efficient processors grows, these generators are set to play a pivotal role in shaping the future of semiconductor design, driving innovation across multiple sectors.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores

Chimera GPNPU

The Chimera GPNPU is a general-purpose neural processing unit designed to address key challenges faced by system-on-chip (SoC) developers when deploying machine learning (ML) inference solutions. It employs a unified processor architecture capable of executing matrix, vector, and scalar operations within a single pipeline, integrating the functions of a neural processing unit (NPU), digital signal processor (DSP), and other processors, which significantly simplifies code development and hardware integration.

The Chimera GPNPU can run various ML networks, including classical frameworks, vision transformers, and large language models, all within a single processor framework. Its flexibility allows developers to optimize performance across different applications, from mobile devices to automotive systems. The GPNPU family is fully synthesizable, making it adaptable to a range of performance requirements and process technologies and ensuring long-term viability as ML workloads evolve.

The design combines a hybrid Von Neumann and 2D SIMD matrix architecture with predictive power management and sophisticated memory optimization techniques, including an L2 cache. These features reduce power usage and enhance performance by enabling the processor to handle complex neural network computations and DSP algorithms efficiently. By merging the best qualities of NPUs and DSPs, the Chimera GPNPU establishes a new benchmark for performance in AI processing.

Quadric
All Foundries
All Process Nodes
AI Processor, AMBA AHB / APB / AXI, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, VGA, Vision Processor

xcore.ai

xcore.ai is a versatile platform specifically crafted for the intelligent IoT market. Its architecture combines multi-threading and multi-core capabilities, ensuring low latency and highly deterministic performance in embedded AI applications. Each xcore.ai chip contains 16 logical cores organized in two multi-threaded processor 'tiles' equipped with 512 kB of SRAM and a vector unit for enhanced computation, enabling both integer and floating-point operations. The design accommodates extensive communication infrastructure within and across xcore.ai systems, providing scalability for complex deployments.

Integrated with embedded PHYs for MIPI, USB, and LPDDR, xcore.ai can handle a diverse range of application-specific interfaces. Leveraging its flexibility in software-defined I/O, xcore.ai offers robust support for AI, DSP, and control processing tasks, making it an ideal choice for enhancing IoT device functionality.

With support for FreeRTOS, a C/C++ development environment, and deterministic processing, xcore.ai guarantees precision in performance. Developers can partition xcore.ai threads optimally between I/O, control, DSP, and AI/ML tasks, aligning with the specific demands of each application. Additionally, the platform's power optimization through scalable tile clock-frequency adjustment ensures cost-effective and energy-efficient IoT solutions.

XMOS Semiconductor
TSMC
20nm
19 Categories

KL630 AI SoC

The KL630 AI SoC introduces a state-of-the-art NPU architecture, the first to support both Int4 precision and transformer neural networks. It offers notable energy efficiency and is built around an Arm Cortex-A5 CPU, providing up to 1 eTOPS at Int4. The KL630 supports various AI frameworks, making it suitable for a wide array of edge AI devices and applications that require advanced ISP capabilities and 5M@30FPS HDR imaging.

Kneron
ADPCM, AI Processor, Camera Interface, CPU, GPU, USB, Vision Processor

NaviSoC

The NaviSoC is a cutting-edge system-on-chip (SoC) that integrates a GNSS receiver and an application processor on a single silicon die. Known for high precision and reliability, it provides a compact, energy-efficient solution for a variety of applications. Supporting all GNSS bands and constellations, it offers fast time-to-first-fix, centimeter-level accuracy, and high sensitivity even in challenging environments.

The NaviSoC's flexible design allows it to be customized to meet specific user requirements, making it suitable for applications ranging from location-based services to asset tracking and smart agriculture. The incorporation of a RISC-V application microcontroller, along with an array of peripherals and interfaces, expands its functionality and optimizes it for advanced IoT and industrial applications.

Engineered for power efficiency, the NaviSoC supports a range of supply voltages, ensuring low power consumption across its operations. A comprehensive SDK and IDE support efficient integration into existing systems, allowing developers to tailor solutions to their precise needs in embedded systems and navigation infrastructure.

ChipCraft
TSMC
800nm
20 Categories

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB of local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU, and support for three AXI4 interfaces ensures robust interconnection and data flow.

This performance makes the NMP-550 exceptionally suited to devices requiring high-frequency AI computation. Typical use cases include industrial surveillance and smart robotics, where precise, fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management.

Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

Origin E2

The Origin E2 from Expedera is engineered to perform AI inference with a balanced approach, excelling under power and area constraints. This IP is designed for devices ranging from smartphones to edge nodes, providing up to 20 TOPS of performance. It features a packet-based architecture that enables parallel execution across layers, improving resource utilization and performance consistency. The engine supports a wide variety of neural networks, including transformers and custom networks, ensuring compatibility with the latest AI advancements.

Origin E2 caters to high-resolution video and audio processing up to 4K and is renowned for its low latency and enhanced performance. Its efficient structure keeps power consumption down, helping devices run demanding AI tasks more effectively than with conventional NPUs, and its TVM-based software support ensures a sustained reduction in the dark-silicon effect while maintaining high operating efficiency and accuracy.

Deployed successfully in numerous smart devices, the Origin E2 delivers power efficiency sustained at 18 TOPS/W. Its ability to deliver exceptional quality across diverse applications makes it a preferred choice for manufacturers seeking robust, energy-conscious solutions.

Expedera
12 Categories

Polar ID Biometric Security System

The Polar ID Biometric Security System is Metalenz's revolutionary solution for face authentication, delivering a new level of biometric security through advanced polarization imaging. Unlike conventional facial recognition systems, Polar ID captures the unique 'polarization signature' of each human face, offering enhanced security by effortlessly differentiating between real faces and potential spoofing attempts with sophisticated 3D masks. This innovative use of meta-optics not only enhances security but also reduces the need for complex optical modules traditionally required in consumer devices.

Polar ID stands out in its ability to operate efficiently under varied lighting conditions, from bright sunlight to complete darkness. It achieves this with more than 10 times the resolution of current structured light systems, ensuring reliable and secure facial recognition performance even when users wear glasses or face coverings. By operating in the near-infrared spectrum, Polar ID extends its utility to scenarios previously challenging for facial recognition technology, thus broadening its application range.

Designed for mass-market deployment, Polar ID minimizes the footprint and complexity of face unlock systems. By doing away with bulky modules, it offers a compact and cost-effective alternative while maintaining high security standards. This innovation enables widespread adoption in consumer electronics, allowing seamless integration into smartphones, tablets, and other mobile devices, potentially replacing less secure biometric methods like fingerprint recognition.

Metalenz Inc.
TSMC
16nm
10 Categories

SCR7 Application Core

The SCR7 Application Core epitomizes advanced data processing capability within the RISC-V framework, designed to handle the computational requirements of sophisticated applications. This Linux-capable 64-bit core incorporates a dual-issue, out-of-order 12-stage pipeline featuring vector and cryptography extensions, optimizing operations for heavy data-centric workloads.

Its memory subsystem includes robust L1, L2, and MMU configurations that equip the SCR7 with the prerequisites for expansive architectural frameworks. Support for dual-core multiprocessor configurations and efficient cache coherency ensures streamlined operation, catering to diverse processing needs across networked and AI applications.

Applications in high-performance computing, AI, and networking thrive on the SCR7 core's energy-efficient, data-oriented design. From video processing to computer vision, this core empowers developers to transcend traditional limitations, ushering in a new era of computational capability for enterprise and consumer technologies alike.

Syntacore
AI Processor, CPU, IoT Processor, Microcontroller, Processor Cores

Veyron V1 CPU

The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing the open RISC-V architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, matching their class-leading performance while adding flexibility for bespoke integrations.

Ventana Micro Systems
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores

Origin E6

Expedera's Origin E6 NPU is crafted to enhance AI processing capabilities in cutting-edge devices such as smartphones, AR/VR headsets, and automotive systems. It offers scalable performance from 16 to 32 TOPS, adaptable to various power and performance needs. The E6 leverages Expedera's packet-based architecture, known for its highly efficient execution of AI tasks, enabling parallel processing across multiple workloads. This results in better resource management and higher performance predictability.

Focusing on both traditional and new AI networks, Origin E6 supports large language models as well as complex data processing tasks without requiring additional hardware optimizations. Its comprehensive software stack, based on TVM, simplifies the integration of trained models into practical applications, providing seamless support for mainstream frameworks and quantization options.

Origin E6's deployment reflects meticulous engineering, optimizing memory usage and processing latency for optimal functionality. It is designed to tackle challenging AI applications in a variety of demanding environments, ensuring consistent high-performance outputs and maintaining superior energy efficiency for next-generation technologies.

Expedera
AI Processor, AMBA AHB / APB / AXI, Building Blocks, Coprocessor, CPU, DSP Core, GPU, IoT Processor, Processor Core Independent, Receiver/Transmitter, Vision Processor

NeuroMosAIc Studio

NeuroMosAIc Studio is a comprehensive software platform designed to maximize AI processor utilization through intuitive model conversion, mapping, simulation, and profiling. This advanced software suite supports Edge AI models by optimizing them for specific application needs. It offers precision analysis, network compression, and quantization tools to streamline the process of deploying AI models across diverse hardware setups.

The platform is notably adept at integrating multiple AI functions and facilitating edge training processes. With tools like the NMP Compiler and Simulator, it allows developers to optimize functions at different stages, from quantization to training. The Studio's versatility is crucial for developers seeking to enhance AI solutions through customized model adjustments and optimization, ensuring high performance across AI systems.

NeuroMosAIc Studio is particularly valuable for its edge training support and comprehensive optimization capabilities, paving the way for efficient AI deployment in various sectors. It offers a robust toolkit for AI model developers aiming to extract the maximum performance from hardware in dynamic environments.

AiM Future
AI Processor, CPU, IoT Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is T-Head's flagship ARM-based server processor, built on an architecture designed in-house. Utilizing advanced multi-core technology, the processor incorporates up to 128 high-performance ARMv9 CPU cores, each complete with its own substantial cache for enhanced data access speed. The processor is configured to handle intensive computing tasks, supported by a robust off-chip memory system with 8-channel DDR5, reaching a peak bandwidth of 281GB/s. An I/O subsystem featuring PCIe 5.0 interfaces facilitates extensive data throughput, making it highly suitable for high-demand applications. Compliant with modern energy efficiency standards, the processor employs innovative multi-die packaging to maintain optimal heat dissipation, ensuring uninterrupted performance in data centers. This processor excels in cloud services, big data computations, video processing, and AI inference operations, offering the speed and efficiency required for next-generation technological challenges.

T-Head
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) is an advanced IP core that elevates neural processing capabilities for edge AI applications. It is adaptable to various systems, exhibiting remarkable efficiency through its runtime reconfigurable interconnects, which aid in managing both transformer and convolutional neural networks. Designed for scalability, DNA-II supports numerous applications ranging from 1k MACs to extensive SoC implementations. DNA-II's architecture enables optimal parallelism by dynamically managing data paths between compute units, ensuring minimized on-chip memory bandwidth and maximizing operational efficiency. Paired with the MERA software stack, it provides seamless integration and optimization of neural network tasks, significantly enhancing computation ordering and resource distribution. Its applicability extends across various industry demands, massively increasing the operational efficiency of AI tasks at the edge. DNA-II, the pivotal force in the SAKURA-II Accelerator, brings innovative processing strength in compact formats, driving forward the development of edge-based generative AI and other demanding applications.

EdgeCortix Inc.
AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

Avispado

The Avispado is a sleek and efficient 64-bit RISC-V in-order processing core tailored for applications where energy efficiency is key. It supports a 2-wide in-order issue, emphasizing minimal area and power consumption, which makes it ideal for energy-conscious system-on-chip designs. The core is equipped with direct support for unaligned memory accesses and is multiprocessor-ready, providing a versatile solution for modern AI needs. With its small footprint, Avispado is perfect for machine learning systems requiring little energy per operation. This core is fully compatible with RISC-V Vector Specification 1.0, interfacing seamlessly with Semidynamics' vector units to support vector instructions that enhance computational efficiency. The integration with Gazzillion Misses™ technology allows support for extensive memory latency workloads, ideal for key applications in data center machine learning and recommendation systems. The Avispado also features a robust set of RISC-V instruction set extensions for added capability and operates smoothly within Linux environments due to comprehensive memory management unit support. Multiprocessor-ready design ensures flexibility in embedding many Avispado cores into high-bandwidth systems, facilitating powerful and efficient processing architectures.

Semidynamics
AI Processor, AMBA AHB / APB/ AXI, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA
View Details

CodaCache Last-Level Cache

CodaCache provides a comprehensive solution for enhancing SoC performance through advanced caching techniques, optimizing data access, and improving power efficiency. This last-level cache complements NoC applications by minimizing memory latency and power consumption. Its configurable design offers flexible memory organization, supporting diverse caching requirements and real-time processing. CodaCache is designed to seamlessly integrate with existing SoC infrastructures, accelerating development timelines and enhancing data reusability. It aids in reducing layout congestion and timing closure issues, resulting in better resource management and performance optimization across a range of electronic design applications.
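The latency-hiding role of a last-level cache can be illustrated with a toy software model — a hypothetical sketch using LRU replacement, not Arteris's actual CodaCache design:

```python
from collections import OrderedDict

class LRUCache:
    """Toy last-level cache model: fixed capacity, least-recently-used eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> cached data
        self.hits = 0
        self.misses = 0

    def access(self, addr, fetch):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)    # mark as most recently used
            return self.lines[addr]
        self.misses += 1
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the LRU line
        data = fetch(addr)                  # stands in for a slow DRAM access
        self.lines[addr] = data
        return data

cache = LRUCache(capacity=2)
for addr in [0, 1, 0, 2, 0, 1]:
    cache.access(addr, fetch=lambda a: a * 10)
print(cache.hits, cache.misses)  # → 2 4
```

Reuse of hot addresses turns repeated slow fetches into fast hits; the hit/miss split is what a real cache trades against capacity and associativity.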

Arteris
AI Processor, Embedded Memories, I/O Library, NAND Flash, ONFI Controller, SDRAM Controller, SRAM Controller, Standard cell
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core developed by Micro Magic, Inc. is a highly efficient processor designed to deliver robust performance while maintaining minimal power consumption. The core runs at speeds of up to 5GHz and consumes just 10mW when operating at 1GHz, making it an ideal solution for applications where energy efficiency is critical. The design leverages innovative techniques to sustain high performance with low voltage operation, ensuring that it can handle demanding processing tasks with reliability. This RISC-V core showcases Micro Magic's expertise in providing high-speed silicon solutions without compromising on power efficiency. It is particularly suited for applications that require both computational prowess and energy conservation, making it an optimal choice for modern SoC (System-on-Chip) designs. The core's architecture is crafted to support a wide range of high-performance computing requirements, offering flexibility and adaptability across various applications and industries. Its integration into larger systems can significantly enhance the overall energy efficiency and speed of electronic devices, contributing to advanced technological innovations.

Micro Magic, Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

EW6181 GPS and GNSS Silicon

The EW6181 is a cutting-edge multi-GNSS silicon solution offering the lowest power consumption and high sensitivity for exemplary accuracy across a wide range of navigation applications. This GNSS chip is adept at processing signals from numerous satellite systems including GPS L1, GLONASS, BeiDou, Galileo, and several augmentation systems like SBAS. The integrated chip comprises an RF frontend, a digital baseband processor, and an ARM microcontroller dedicated to operating the firmware, allowing for flexible integration across devices needing efficient power usage. Designed with a built-in DC-DC converter and LDOs, the EW6181 silicon streamlines its bill of materials, making it perfect for battery-powered devices, providing extended operational life without compromising on performance. By incorporating patent-protected algorithms, the EW6181 achieves a remarkably compact footprint while delivering superior performance characteristics. Especially suited for dynamic applications such as action cameras and wearables, its antenna diversity capabilities ensure exceptional connectivity and positioning fidelity. Moreover, by enabling cloud functionality, the EW6181 pushes boundaries in power efficiency and accuracy, catering to connected environments where greater precision is paramount.

etherWhere Corporation
TSMC
7nm
3GPP-5G, AI Processor, Bluetooth, CAN, CAN XL, CAN-FD, FlexRay, GPS, Optical/Telecom, Photonics, RF Modules, W-CDMA
View Details

Tyr Superchip

The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles vast amounts of sensor data necessary for complete autonomous driving and supports rapid AI computing at the edge. One of the key challenges it addresses is providing massive compute power combined with low-latency outputs, achieving what traditional architectures cannot in terms of energy efficiency and speed. Tyr chips are built to rigorous safety standards, being ISO 26262 compliant and ASIL-D ready, making them well suited to the critical requirements of automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50W for higher-end tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA’s Superchip is a testament to their innovative prowess, promising unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA’s position as a forward-thinking leader in semiconductor technology.

VSORA
AI Processor, Audio Processor, CAN XL, CPU, Interleaver/Deinterleaver, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
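The memory savings behind 4-bit formats like Q4_K come from storing each weight as a small integer plus a shared per-block scale. A generic blockwise 4-bit scheme can be sketched as follows — an illustration of the idea, not RaiderChip's actual implementation:

```python
def quantize_block_q4(block):
    """Quantize a float block to 4-bit codes (0..15) with a shared scale/offset."""
    lo, hi = min(block), max(block)
    scale = (hi - lo) / 15 if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in block]  # each code fits in 4 bits
    return codes, scale, lo

def dequantize_block_q4(codes, scale, lo):
    """Recover approximate float weights from codes and block metadata."""
    return [c * scale + lo for c in codes]

weights = [0.8, -1.2, 0.05, 2.0, -0.4, 1.1, -2.0, 0.3]
codes, scale, lo = quantize_block_q4(weights)
restored = dequantize_block_q4(codes, scale, lo)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# rounding error is bounded by half a quantization step
assert max_err <= scale / 2 + 1e-9
```

Each 32-bit float collapses to a 4-bit code plus amortized block metadata, which is where the roughly 75% footprint reduction for weights comes from; real Q4_K adds super-block structure and per-sub-block scales on top of this idea.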

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Akida IP

Akida IP is BrainChip's pioneering neuromorphic processor, crafted to mimic the human brain's analytic capabilities by processing only essential sensor inputs. This localized processing greatly enhances efficiency and privacy, as it significantly reduces the need for cloud data transactions. The processor offers scalable architecture supporting up to 256 nodes interconnected via a mesh network with each node composed of four configurable Neural Network Layer Engines. This event-based technology cuts down operations drastically compared to traditional methods, promoting lower power consumption. With robust support for on-chip learning and incremental learning capabilities, Akida IP is apt for a diverse range of applications and environments. The neural network processor adapts to real-time data seamlessly, creating new avenues for personalized and private on-device AI experiences. The architecture of Akida IP allows it to run complete neural networks, managing various neural network functions in hardware, thus optimizing resource utilization and power efficiency. Integrating Akida IP into systems is straightforward with BrainChip's development ecosystem, facilitating easy evaluation, design, and deployment processes. The Akida PCIe board and additional platform offerings, like the Raspberry Pi kit, promote seamless development and integration for intelligent AI endpoints, perfectly aligning with BrainChip's mission to streamline the implementation of edge AI solutions.
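The operation savings of event-based processing can be shown with a toy comparison — a hypothetical sketch of the principle, not BrainChip's hardware datapath: a conventional layer touches every weight, while an event-based layer does work only for inputs that actually fire.

```python
def dense_matvec(weights, x):
    """Conventional layer: every weight participates, regardless of input sparsity."""
    ops = 0
    out = [0.0] * len(weights)
    for i, row in enumerate(weights):
        for j, w in enumerate(row):
            out[i] += w * x[j]
            ops += 1
    return out, ops

def event_matvec(weights, x):
    """Event-based layer: only nonzero (spiking) inputs trigger computation."""
    ops = 0
    out = [0.0] * len(weights)
    for j, xj in enumerate(x):
        if xj == 0.0:
            continue  # silent input: no event, no work
        for i, row in enumerate(weights):
            out[i] += row[j] * xj
            ops += 1
    return out, ops

weights = [[0.5, -1.0, 2.0, 0.0], [1.0, 0.0, -0.5, 0.25]]
x = [0.0, 3.0, 0.0, 0.0]          # mostly-silent input, as in sparse sensor data
dense_out, dense_ops = dense_matvec(weights, x)
event_out, event_ops = event_matvec(weights, x)
assert dense_out == event_out     # identical result...
assert event_ops < dense_ops      # ...at a fraction of the operations
```

With one active input out of four, the event-based path does a quarter of the multiply-accumulates; real sensor streams are often far sparser, which is where the power savings come from.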

BrainChip
TSMC
28nm
AI Processor, Cryptography Cores, IoT Processor, Platform Security, Vision Processor
View Details

Hanguang 800 AI Accelerator

Hanguang 800 AI Accelerator is a revolutionary AI processing powerhouse designed by T-Head to maximize computational efficiency for artificial intelligence applications. By leveraging state-of-the-art chip design, it achieves unparalleled processing speeds, significantly reducing inference times for machine learning tasks. Its architecture handles intricate AI algorithms, allowing for swift execution of models with billions of parameters. Optimized for neural network operations, the accelerator caters to dense mathematical computations with minimal power consumption. Incorporating adaptive learning mechanisms, it autonomously refines its processing strategies in real-time, catalyzing efficiency gains. This is complemented by an advanced cooling system that manages thermal output without compromising computational power. Hanguang 800's application spectrum is broad, encompassing cloud-based AI services, edge computing, and embedded AI solutions in devices, positioning it as an ideal solution for industries demanding high-speed and scalable AI processing capabilities. Its integration into T-Head's ecosystem underscores the company’s commitment to delivering top-tier performance hardware for next-gen intelligent systems.

T-Head
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

RWM6050 Baseband Modem

The RWM6050 baseband modem from Blu Wireless underpins their mmWave solutions, providing a powerful platform for high-bandwidth, multi-gigabit connectivity. Co-developed with Renesas, this modem pairs seamlessly with mmWave RF chipsets to offer a configurable radio interface, capable of scaling data across sectors requiring both access and backhaul services. This modem features flexible channelization and modulation coding schemes, enabling it to handle diverse data transmission needs with remarkable efficacy. Integrated dual modems and a mixed-signal front-end allow for robust performance in varying deployment scenarios. The RWM6050 supports multiple frequency bands, and its modulation capabilities enable it to adapt dynamically to optimize throughput under different operational conditions. The modem includes advanced beamforming support and digital front-end processing, which facilitates enhanced data routing and network synchronization. These features are pivotal for managing shifting network loads and ensuring resilient performance amidst irregular traffic and environmental variances. A real-time scheduler further augments its capabilities, enabling dynamic response to complex connectivity challenges faced in modern communication landscapes.

Blu Wireless Technology Ltd
3GPP-5G, 3GPP-LTE, AI Processor, AMBA AHB / APB/ AXI, CPRI, Ethernet, HBM, Multi-Protocol PHY, Optical/Telecom, Receiver/Transmitter, W-CDMA, Wireless Processor
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator stands out as a high-performance, energy-efficient edge co-processor designed to handle advanced AI tasks. Tailored for real-time, batch-one AI inferencing, it supports multi-billion parameter models, such as Llama 2 and Stable Diffusion, while maintaining low power consumption. The core technology leverages a dynamic neural accelerator for runtime reconfigurability and exceptional parallel processing, making it ideal for edge-based generative AI applications. With its flexible architecture, SAKURA-II facilitates the seamless execution of diverse AI models concurrently, without compromising on efficiency or speed. Integrated with the MERA compiler framework, it ensures easy deployment across various hardware systems, supporting frameworks like PyTorch and TensorFlow Lite for seamless integration. This AI accelerator excels in AI models for vision, language, and audio, fostering innovative content creation across these domains. Moreover, SAKURA-II supports a robust DRAM bandwidth, far surpassing competitors, ensuring superior performance for large language and vision models. It offers support for significant neural network demands, making it a powerful asset for developers in the edge AI landscape.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

aiWare

aiWare stands out as a premier hardware IP for high-performance neural processing, tailored for complex automotive AI applications. By offering exceptional efficiency and scalability, aiWare empowers automotive systems to harness the full power of neural networks across a wide variety of functions, from Advanced Driver Assistance Systems (ADAS) to fully autonomous driving platforms. It boasts an innovative architecture optimized for both performance and energy efficiency, making it capable of handling the rigorous demands of next-generation AI workloads. The aiWare hardware features an NPU designed to achieve up to 256 Effective Tera Operations Per Second (TOPS), delivering high performance at significantly lower power. This is made possible through a thoughtfully engineered dataflow and memory architecture that minimizes the need for external memory bandwidth, thus enhancing processing speed and reducing energy consumption. The design ensures that aiWare can operate efficiently across a broad range of conditions, maintaining its edge in both small and large-scale applications. A key advantage of aiWare is its compatibility with aiMotive's aiDrive software, facilitating seamless integration and optimizing neural network configurations for automotive production environments. aiWare's development emphasizes strong support for AI algorithms, ensuring robust performance in diverse applications, from edge processing in sensor nodes to high central computational capacity. This makes aiWare a critical component in deploying advanced, scalable automotive AI solutions, designed specifically to meet the safety and performance standards required in modern vehicles.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

iModeler for PDK Model Generation

iModeler simplifies the creation of process design kits (PDKs) for passive devices with rapid model synthesis across various nodes. Leveraging full-band 3D electromagnetic simulation, iModeler enhances the speed and accuracy of PDK modeling. The tool integrates seamlessly with Cadence Virtuoso, allowing users to control model parameters and configure complex on-chip passive devices such as inductors and capacitors. With its rich library of templates and dedication to optimizing parametric models, iModeler stands as a vital resource for the efficient and precise design of on-chip passive devices within different process environments.

Xpeedic
AI Processor, Embedded Memories
View Details

KL530 AI SoC

Equipped with a new NPU architecture, the KL530 AI SoC stands out as a versatile chip supporting INT4 precision and transformers with remarkable efficiency. Designed for low-power consumption, it attains up to 1 TOPS@INT4 while maintaining high processing efficiency. Its smart ISP enhances image quality through AI inference. The KL530 is ideal for AIoT applications and multimedia processing, providing high-efficiency compression with less than 500 ms cold start time.

Kneron
AI Processor, Camera Interface, Clock Generator, CPU, GPU, Vision Processor
View Details

Azurite Core-hub

Azurite Core-hub is an innovative processor solution that excels in performance, catering to challenging computational tasks with efficiency and speed. Designed with the evolving needs of industries in mind, Azurite leverages cutting-edge RISC-V architecture to deliver high performance while maintaining scalability and flexibility in design. This processor core stands out for its ability to streamline tasks and simplify the complexities often associated with processor integration. The Azurite Core-hub's architecture is tailored to enhance computation-intensive applications, ensuring rapid execution and robust performance. Its open-source RISC-V base supports easy integration and freedom from vendor lock-in, providing users the liberty to customize their processors according to specific project needs. This adaptability makes Azurite an ideal choice for sectors like AI/ML where high performance is crucial. InCore Semiconductors has fine-tuned the Azurite Core-hub to serve as a powerhouse in the processor core market, ensuring that it meets rigorous performance benchmarks. It offers a seamless blend of high efficiency and user-friendly configurability, making it a versatile asset for any design environment.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

C100 IoT Control and Interconnection Chip

The Chipchain C100 is a sophisticated, single-chip solution designed for Internet of Things (IoT) applications. It is built around a 32-bit RISC-V CPU operating at speeds of up to 1.5GHz, supplemented by embedded RAM and ROM, ensuring exceptional computational efficiency. The C100 integrates several essential features for IoT use, including Wi-Fi capability, multiple data transmission interfaces, and built-in ADCs, LDOs, and temperature sensors. This integration is aimed at simplifying and expediting application development across various domains ranging from security to healthcare.

Shenzhen Chipchain Technologies Co., Ltd.
TSMC
7nm, 12nm, 16nm
16 Categories
View Details

RISC-V Core IP

The RISC-V Core IP from AheadComputing represents a pinnacle of modern processor architecture. Specializing in 64-bit application processors, this IP is designed to leverage the open-standard RISC-V architecture, ensuring flexibility while pushing the boundaries of performance. Its architecture is tailored to deliver outstanding per-core performance, making it ideal for applications requiring significant computational power combined with the benefits of open-source standards. Engineered with efficiency and compatibility in mind, the RISC-V Core IP by AheadComputing caters to a wide array of applications, from consumer electronics to advanced computing systems. It supports the development of highly efficient CPUs that not only excel in speed but also offer scalability across different computing environments. This makes it a highly versatile choice for developers aiming to adopt a powerful yet adaptable processing core. The AheadComputing RISC-V Core IP is also known for its configurability, making it suitable for various market needs and future technological developments. Built on the experience and expertise of its development team, this IP remains at the frontier of innovative processor design, enabling clients to harness cutting-edge computing solutions prepped for next-generation challenges.

AheadComputing Inc.
All Foundries
22nm, 28nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Cores
View Details

SiFive Intelligence X280

Tailored specifically for AI and machine learning requirements at the edge, the SiFive Intelligence X280 brings powerful capabilities to data-intensive applications. This processor line is part of the high-performance AI data flow processors from SiFive, designed to offer scalable vector computation capabilities. Key features include handling demanding AI workloads, efficient data flow management, and enhanced object detection and speech recognition processing. The X280 is equipped with vector processing capabilities that include a 512-bit vector length, single vector ALU VCIX (1024-bit), plus a host of new instructions optimized for machine learning operations. These features provide a robust platform for addressing energy-efficient inference tasks, driven by the need for high-performance yet low-power computing solutions. Key to the X280's appeal is its ability to interface seamlessly with popular machine learning frameworks, enabling developers to deploy models with ease and flexibility. Additionally, its compatibility with SiFive Intelligence Extensions and TensorFlow Lite enhances its utility in delivering consistent, high-quality AI processing in various applications, from automotive to consumer devices.
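The vector-length-agnostic style of RISC-V vector code, which lets the same binary run on hardware with different vector widths, can be sketched as a strip-mined loop — an illustrative software model, not SiFive's microarchitecture; `vlmax` stands in for the hardware's maximum vector length:

```python
def vector_add(a, b, vlmax=16):
    """Strip-mined loop in the style of RISC-V Vector (vsetvli idiom):
    each iteration processes up to vlmax elements, and the tail iteration
    naturally shrinks to whatever remains — no scalar cleanup loop needed."""
    out = []
    i = 0
    while i < len(a):
        vl = min(vlmax, len(a) - i)   # elements handled this iteration
        out.extend(x + y for x, y in zip(a[i:i + vl], b[i:i + vl]))
        i += vl
    return out

# 40 elements on a 16-wide machine: iterations of 16, 16, then a tail of 8
result = vector_add(list(range(40)), list(range(40)))
assert result == [2 * i for i in range(40)]
```

On a 512-bit datapath the hardware sets `vl` itself each iteration, so the identical loop runs unmodified on narrower or wider vector units.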

SiFive, Inc.
AI Processor, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an innovative ultra-low power microcontroller designed for always-on sensing applications, bringing intelligence directly to the sensor edge. This processor utilizes the processing power of spiking neural networks, combined with a nimble RISC-V processor core, to form a singular chip solution. Its design supports next-generation AI and signal processing capabilities, all while operating within a very narrow power envelope, crucial for battery-powered and latency-sensitive devices. This microcontroller's architecture supports advanced on-chip signal processing capabilities that include both Spiking Neural Networks (SNNs) and Deep Neural Networks (DNNs). These processing capabilities enable rapid pattern recognition and data processing similar to how the human brain functions. Notably, it operates efficiently under sub-milliwatt power consumption and offers fast response times, making it an ideal choice for devices such as wearables and other portable electronics that require continuous operation without significant energy draw. The T1 is also equipped with diverse interface options, such as QSPI, I2C, UART, JTAG, GPIO, and a front-end ADC, contained within a compact 2.16mm x 3mm, 35-pin WLCSP package. The device boosts applications by enabling them to execute with incredible efficiency and minimal power, allowing for direct connection and interaction with multiple sensor types, including audio and image sensors, radar, and inertial units for comprehensive data analysis and interaction.
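The spiking computation underlying SNNs can be illustrated with a minimal leaky integrate-and-fire neuron — a textbook sketch, not Innatera's proprietary neuron design: the membrane potential leaks each step, integrates input, and emits a spike (then resets) on crossing a threshold.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: returns a binary spike train."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = v * leak + i_t      # leak the membrane, then integrate input
        if v >= threshold:
            spikes.append(1)
            v = 0.0             # reset after firing
        else:
            spikes.append(0)
    return spikes

# A sustained burst drives the neuron over threshold; quiet input does not.
spike_train = lif_neuron([0.3, 0.3, 0.6, 0.0, 0.0, 1.2])
print(spike_train)  # → [0, 0, 1, 0, 0, 1]
```

Because the neuron only produces output events when inputs accumulate past threshold, downstream work scales with spike activity rather than with time steps — the property that keeps such processors in the sub-milliwatt regime.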

Innatera Nanosystems
TSMC
28nm, 65nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Vision Processor, Wireless Processor
View Details

Universal DSP Library

The Universal DSP Library is an adaptable collection of digital signal processing components, seamlessly integrated into the AMD Vivado ML Design Suite. This library supports a variety of common DSP tasks, including filtering, mixing, and approximations, all while providing the integral logic necessary for connecting DSP systems. By minimizing development time and enabling rapid assembly of signal processing chains, the library facilitates both rapid prototyping and sophisticated design within FPGA environments. It provides raw VHDL source code and IP blocks, paired with comprehensive documentation and bit-true software models for preliminary evaluation and development. Supporting a multitude of processing types such as continuous wave and pulse processing, the library delivers significant flexibility for developers. This ranges from real and complex signal processing to accommodating multiple independent data channels. All components are designed to operate within the standardized AXI4-Stream protocol, ensuring an easy integration process with other systems. The inclusion of out-of-the-box solutions for FIR, CIC filters, and CORDIC highlights the library's capability to cover repetitive DSP tasks, allowing developers to concentrate on more project-specific challenges. The Universal DSP Library not only streamlines design with its modularity and ease of use, but it also offers solutions for optimizing performance across different application areas. Its utility spans digital signal processing, communication systems, and even medical diagnostics, underscoring its versatility and essential role in modern FPGA-based development initiatives.
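The FIR filtering such a library provides can be sketched with a direct-form software reference model — plain Python for illustration, not Enclustra's VHDL implementation or its bit-true models:

```python
def fir_filter(coeffs, samples):
    """Direct-form FIR: each output is the dot product of the coefficient
    vector with the most recent input samples (delay line zero-initialized)."""
    taps = [0.0] * len(coeffs)
    out = []
    for x in samples:
        taps = [x] + taps[:-1]  # shift the new sample into the delay line
        out.append(sum(c * t for c, t in zip(coeffs, taps)))
    return out

# 4-tap moving-average filter smooths a step input over four samples
coeffs = [0.25, 0.25, 0.25, 0.25]
y = fir_filter(coeffs, [0, 0, 4, 4, 4, 4])
print(y)  # → [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

A hardware FIR maps the same multiply-accumulate structure onto DSP slices, with one sample entering the delay line per AXI4-Stream beat.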

Enclustra GmbH
2D / 3D, AI Processor, Building Blocks, DSP Core
View Details

CTAccel Image Processor on Alveo U200

The CTAccel Image Processor for Alveo U200 represents a pinnacle of image processing acceleration, catering to the massive data produced by the explosion of smartphone photography. Through the offloading of intensive image processing tasks from CPUs to FPGAs, it achieves notable gains in performance and efficiency for data centers. By using an FPGA as a heterogeneous coprocessor, the CTAccel Image Processor (CIP) speeds up typical workflows—such as image encoding and decoding—up to six times, while drastically cutting latency by fourfold. Its architecture allows for expanded compute density, meaning less rack space and reduced operational costs for managing data centers. This is crucial for handling the everyday influx of image data driven by social media and cloud storage. The solution maintains full software compatibility with popular tools like ImageMagick and OpenCV, meaning migration is seamless and straightforward. Moreover, the system's remote reconfiguration capabilities enable users to optimize processing for varying scenarios swiftly, ensuring peak performance without the need for server restarts.

CTAccel Ltd.
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Vision Processor
View Details

ZIA DV700 Series

The ZIA DV700 Series neural processing unit by Digital Media Professionals showcases superior proficiency in handling deep neural networks, tailored for high-reliability AI systems such as autonomous vehicles and robotics. This series excels in real-time image, video, and voice processing, emphasizing both efficiency and safety crucial for applications requiring accurate and speedy analysis. Leveraging FP16 floating-point precision, these units ensure robust AI model deployment without necessitating additional training, maintaining high inference precision for critical applications. Devised with versatility in mind, the DV700 supports a plethora of AI models, facilitating mobile, space-efficient integration across multiple platforms. Engineered to handle diverse DNN configurations, the ZIA DV700 stands out with hardware architectures optimized for inference processing. Its extensive application spread includes object detection, semantic segmentation, pose estimation, and more. By supporting standard AI development frameworks like Caffe, Keras, and TensorFlow, users can seamlessly develop AI applications with DV700's robust SDK and development tools. The IP core's design integrates a high-bandwidth on-chip RAM and weight compression, further boosting processing performance. Optimizing for enhanced AI inference tasks, the DV700 Series continues to be indispensable in high-stakes environments.

Digital Media Professionals Inc.
AI Processor, Interrupt Controller, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details
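The FP16 claim above (deploying trained weights without retraining) can be illustrated in isolation: casting FP32 weights to float16 usually perturbs layer outputs only slightly. A toy NumPy sketch; the layer shapes and tolerance are illustrative, not DV700 specifics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" FP32 layer: y = relu(W @ x + b)
W = rng.standard_normal((64, 128)).astype(np.float32)
b = rng.standard_normal(64).astype(np.float32)
x = rng.standard_normal(128).astype(np.float32)

def layer(W, b, x):
    return np.maximum(W @ x + b, 0.0)

y32 = layer(W, b, x)

# Deploy the same weights in FP16: no retraining, just a cast.
y16 = layer(W.astype(np.float16), b.astype(np.float16),
            x.astype(np.float16)).astype(np.float32)

# Relative deviation of the FP16 deployment from the FP32 reference.
rel_err = np.abs(y32 - y16).max() / np.abs(y32).max()
print(rel_err)  # small; FP16 keeps roughly 3 decimal digits of precision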

Jotunn8 AI Accelerator

The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Fully programmable and algorithm-agnostic, it supports any algorithm and any host processor, and can execute generative AI models such as GPT-4 or Llama3 with high efficiency. It delivers up to 3.2 petaflops (dense) of throughput without relying on CUDA, simplifying scalability and deployment. Optimized for cloud and on-premise configurations, the Jotunn8 integrates 16 cores with a high-level programming interface. Its architecture addresses conventional processing bottlenecks by keeping data constantly available at each processing unit, so large, complex models run at reduced query cost while consuming less power. The hardware also extends beyond AI-specific applications to general-purpose (GP) processing. By automatically selecting the most suitable processing path layer by layer, it optimizes both latency and power consumption, giving users a flexible platform for deploying very large AI models with efficient resource utilization. The configuration includes a peak power consumption of 180 W and an impressive 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease, and it aligns closely with the theoretical limits of implementation efficiency, reflecting VSORA's commitment to high-performance computing.

VSORA
AI Processor, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details
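The layer-by-layer path selection described above can be sketched as a simple per-layer cost minimization. The candidate paths, cost figures, and weights below are hypothetical illustrations, not VSORA's actual scheduler:

```python
# Toy per-layer path selector: for each layer, pick the candidate
# execution path with the lowest weighted latency/power cost.
# Path names and (latency_ms, power_w) costs are hypothetical.

LAYERS = {
    "attention": {"tensor_core": (1.0, 2.0), "gp_core": (3.0, 1.0)},
    "mlp":       {"tensor_core": (0.5, 1.5), "gp_core": (2.0, 0.8)},
    "softmax":   {"tensor_core": (0.9, 1.2), "gp_core": (0.4, 0.3)},
}

def select_paths(layers, latency_weight=1.0, power_weight=0.5):
    """Choose, per layer, the path minimizing a weighted cost."""
    plan = {}
    for name, paths in layers.items():
        plan[name] = min(
            paths,
            key=lambda p: latency_weight * paths[p][0]
                        + power_weight * paths[p][1],
        )
    return plan

plan = select_paths(LAYERS)
print(plan)  # e.g. softmax lands on the GP path, matmul-heavy layers don't
```

Adjusting the two weights trades latency against power, which is the knob the description alludes to when it says the selection optimizes both.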

KL720 AI SoC

The KL720 AI SoC offers an industry-leading performance-to-power ratio of 0.9 TOPS per watt, making it suitable for high-performance applications. It is two to four times more power-efficient than competing chips, making it ideal for devices that require significant processing capability. The chip handles real-time processing of 4K images and supports full HD video and 3D sensing, addressing applications in IP cameras, smart TVs, AI glasses, and more, including gesture control for gaming and voice recognition.

Kneron
AI Processor, Audio Interfaces, AV1, CPU, GPU, Image Conversion, Vision Processor
View Details
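The 0.9 TOPS/W figure above fixes the power budget for a given compute load, and the "two to four times" claim brackets competitor efficiency. A quick check; the 1.8 TOPS workload is a hypothetical example:

```python
KL720_TOPS_PER_WATT = 0.9   # figure quoted above

def power_for(tops, tops_per_watt=KL720_TOPS_PER_WATT):
    """Watts needed to sustain a given TOPS workload."""
    return tops / tops_per_watt

# A hypothetical 1.8 TOPS vision workload:
watts = power_for(1.8)

# "Two to four times more power-efficient" brackets competitors at:
competitor_low = KL720_TOPS_PER_WATT / 4
competitor_high = KL720_TOPS_PER_WATT / 2

print(watts, competitor_low, competitor_high)  # 2.0 0.225 0.45
```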