
Multiprocessor and DSP Semiconductor IP

In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.

Multiprocessor IPs are tailored to provide parallel processing capabilities, which significantly boost the computational power required for intensive applications. By employing multiple processing cores, these IPs allow for the concurrent execution of multiple tasks, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
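To make the parallel-execution idea concrete, here is a minimal software sketch of the same decomposition a multicore IP performs in silicon: independent data blocks dispatched to separate cores and results gathered in order. The kernel and data are purely illustrative, not vendor code.

```python
# Illustrative sketch of multicore parallelism: independent data blocks
# are dispatched to worker processes and results are gathered in order.
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    # Stand-in for a compute-intensive kernel (e.g. a filter or transform).
    return sum(x * x for x in block)

if __name__ == "__main__":
    blocks = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_block, blocks))
    print(results)
```

Because the blocks are independent, throughput scales with the number of cores, which is exactly the property multiprocessor IPs exploit.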

Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
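At the heart of the "mathematical operations on signals" that DSPs accelerate is the multiply-accumulate (MAC) loop. A minimal, purely illustrative direct-form FIR filter shows the pattern that dedicated DSP hardware pipelines and parallelizes:

```python
# Direct-form FIR filter: out[i] = sum_k coeffs[k] * samples[i - k].
# Each inner-loop step is one multiply-accumulate (MAC), the operation
# DSP cores implement in dedicated, fully pipelined hardware.
def fir_filter(samples, coeffs):
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if i - k >= 0:
                acc += c * samples[i - k]  # one MAC per tap
        out.append(acc)
    return out

# Example: a 3-tap moving average smoothing a step input.
print(fir_filter([0, 0, 3, 3, 3], [1/3, 1/3, 1/3]))
```

A general-purpose CPU executes each MAC as separate instructions; a DSP's hardware MAC units complete one (or several) per clock, which is where the performance and power advantage comes from.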

Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.

122 IPs available

Akida 2nd Generation

The 2nd Generation Akida builds upon BrainChip's neuromorphic legacy, broadening the range of supported complex network models with enhancements in weight and activation precision up to 8 bits. This generation introduces additional energy efficiency, performance optimizations, and greater accuracy, catering to a broader set of intelligent applications. Notably, it supports advanced features like Temporal Event-Based Neural Networks (TENNs), Vision Transformers, and extensive use of skip connections, which elevate its capabilities within spatio-temporal and vision-based applications. Designed for a variety of industrial, automotive, healthcare, and smart city applications, the 2nd Generation Akida boasts on-chip learning which maintains data privacy by eliminating the need to send sensitive information to the cloud. This reduces latency and secures data, crucial for future autonomous and IoT applications. With its multipass processing capabilities, Akida addresses the challenge of limited hardware resources smartly, processing complex models efficiently on the edge. Offering a flexible and scalable IP platform, it is poised to enhance end-user experiences across various industries by enabling efficient real-time AI processing on compact devices. The introduction of long-range skip connections further supports intricate neural networks like ResNet and DenseNet, showcasing Akida's potential to drive deeper model efficiencies without excessive host CPU calculation dependence.

BrainChip
AI Processor, Digital Video Broadcast, IoT Processor, Multiprocessor / DSP, Security Protocol Accelerators, Vision Processor

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent


Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

The Tianqiao-70 CPU core by StarFive is a low-power RISC-V processor, designed specifically to address the needs of commercial applications that prioritize energy efficiency. This 64-bit CPU core is versatile, catering to various sectors including mobile devices, IoT applications, and intelligent consumer electronics that demand performance without compromising on power. Designed with efficient power utilization at its core, the Tianqiao-70 is tailored to offer high computation capacity while keeping energy consumption minimal, thereby extending device battery life and reducing operational costs. The core supports a broad spectrum of computational tasks while keeping power draw low, an essential factor in mobile and embedded applications, allowing businesses to lower their energy usage while retaining powerful processing capabilities. It stands as an ideal solution for forward-thinking organizations that value sustainability and legacy support in their tech stack.

StarFive
TSMC
28nm
CPU, Multiprocessor / DSP, Processor Cores

CXL 3.1 Switch

Panmnesia's CXL 3.1 Switch is a pivotal component in networking a vast array of CXL-enabled devices, setting the bar with its exceptional scalability and diverse connectivity. The switch supports seamless integration of hundreds of devices including memory, CPUs, and accelerators, facilitating flexible, high-performance configurations suited to demanding applications in data centers and beyond. Panmnesia's design enables easy scalability and efficient memory node expansion, reflecting their dedication to resource-efficient memory management. The CXL 3.1 Switch features a robust architecture that supports a wide array of network topologies, allowing for multi-level switching and complex node configurations. Its design addresses the unique challenges of composable server architecture, enabling fine-grained resource allocation. The switch leverages Panmnesia's proprietary CXL technology, underpinning its ability to perform management tasks across integrated memory spaces with minimal overhead, crucial for achieving high-speed, low-latency data exchange. Incorporating CXL standards, it is fully compatible with both legacy and next-generation devices, ensuring broad interoperability. The architecture allows servers to tailor resource availability by employing type-specific CXL features, such as port-based routing and multi-level switching. These features empower operators with the tools to configure extensive networks of diverse devices efficiently, thereby maximizing data center performance while minimizing costs.

Panmnesia
All Foundries
All Process Nodes
CXL, D2D, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One

Metis AIPU PCIe AI Accelerator Card

The PCIe AI Accelerator Card powered by Metis AIPU offers unparalleled AI inference performance suitable for intensive vision applications. Incorporating a single quad-core Metis AIPU, it provides up to 214 TOPS, efficiently managing high-volume workloads with low latency. The card is further enhanced by the Voyager SDK, which streamlines application deployment, offering an intuitive development experience and ensuring simple integration across various platforms. Whether for real-time video analytics or other demanding AI tasks, the PCIe Accelerator Card is designed to deliver exceptional speed and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor, WMV

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

H.264 FPGA Encoder and CODEC Micro Footprint Cores

The H.264 FPGA Encoder and CODEC Micro Footprint Cores from A2e Technologies are industry-leading solutions optimized for high-speed video encoding with minimal latency. Specially tailored for FPGA applications, this core ensures compliance with the H.264 Baseline profile and offers configurations to suit varying performance needs, such as low-cost evaluation licenses for flexibility. These cores are noted for their exceptionally compact size and rapid processing capabilities, enabling them to achieve 1080p at 60 frames per second with remarkable efficiency. One of the cores' standout features is the 1ms latency at 1080p30, which is among the fastest in the industry. This core also supports custom configurations, allowing adjustments to pixel depth, resolution, and more, making it a versatile choice for developers looking to integrate video encoding in their systems. Moreover, these cores are ITAR compliant, offering a secure and adaptable solution for high-performance FPGA design. The scalability and customization options, including support for various pixel depths and resolutions, make these H.264 cores suitable for a wide array of applications, from real-time video streaming to embedded systems in industrial automation. By leveraging this advanced technology, A2e Technologies provides a robust solution that meets stringent industry standards and addresses specific customer needs effectively.

A2e Technologies
AMBA AHB / APB/ AXI, Arbiter, H.264, Multiprocessor / DSP, TICO, USB

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
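The "tokens per unit of memory bandwidth" framing can be made concrete with a back-of-the-envelope bound: during autoregressive decoding, each generated token streams every model weight from memory once, so memory bandwidth divided by model size caps the token rate. All figures below are illustrative placeholders, not RaiderChip benchmarks:

```python
# Rough upper bound on LLM decode rate when memory-bandwidth-bound:
# every token generated must read all weights from memory once.
def max_tokens_per_sec(mem_bandwidth_gb_s, model_size_gb):
    return mem_bandwidth_gb_s / model_size_gb

# Illustrative numbers only: ~25 GB/s of LPDDR4-class bandwidth and a
# 3B-parameter model at 4-bit quantization (~1.5 GB of weights).
print(f"{max_tokens_per_sec(25, 1.5):.1f} tokens/s upper bound")
```

This is why shrinking the model footprint through quantization raises the achievable token rate on the same memory, and why maximizing tokens per unit of bandwidth matters more than raw compute for this workload.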

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

RISC-V Core-hub Generators

The RISC-V Core-hub Generators are sophisticated tools designed to empower developers with complete control over their processor configurations. These generators allow users to customize their core-hubs at both the Instruction Set Architecture (ISA) and microarchitecture levels, offering unparalleled flexibility and adaptability in design. Such capabilities enable fine-tuning of processor specifications to meet specific application needs, fostering innovation within the RISC-V ecosystem. By leveraging the Core-hub Generators, developers can streamline their chip design process, ensuring efficient and seamless integration of custom features. This toolset not only simplifies the design process but also reduces time-to-silicon, making it ideal for industries seeking rapid advancements in their technological capabilities. The user-friendly interface and robust support of these generators make them a preferred choice for developing cutting-edge processors. InCore Semiconductors’ RISC-V Core-hub Generators represent a significant leap forward in processor design technology, emphasizing ease of use, cost-effectiveness, and scalability. As demand for tailored and efficient processors grows, these generators are set to play a pivotal role in shaping the future of semiconductor design, driving innovation across multiple sectors.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores

Chimera GPNPU

The Chimera GPNPU is a general-purpose neural processing unit designed to address key challenges faced by system on chip (SoC) developers when deploying machine learning (ML) inference solutions. It boasts a unified processor architecture capable of executing matrix, vector, and scalar operations within a single pipeline. This architecture integrates the functions of a neural processing unit (NPU), digital signal processor (DSP), and other processors, which significantly simplifies code development and hardware integration. The Chimera GPNPU can manage various ML networks, including classical frameworks, vision transformers, and large language models, all within a single processor framework. Its flexibility allows developers to optimize performance across different applications, from mobile devices to automotive systems. The GPNPU family is fully synthesizable, making it adaptable to a range of performance requirements and process technologies, ensuring long-term viability and adaptability to changing ML workloads. The Chimera's design includes a hybrid Von Neumann and 2D SIMD matrix architecture, predictive power management, and memory optimization techniques, including an L2 cache. These features help reduce power usage and enhance performance by enabling the processor to efficiently handle complex neural network computations and DSP algorithms. By merging the best qualities of NPUs and DSPs, the Chimera GPNPU establishes a new benchmark for performance in AI processing.

Quadric
All Foundries
All Process Nodes
AI Processor, AMBA AHB / APB / AXI, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, VGA, Vision Processor

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

eSi-3264

Standing at the pinnacle of eSi-RISC's processor cores, the eSi-3264 offers a powerful 32/64-bit architecture with DSP extensions designed for intensive computing tasks. Its unique ability to process both SIMD fixed and floating-point operations makes it ideal for advanced applications requiring complex digital signal processing with minimal hardware footprint. The eSi-3264 excels in applications needing DSP capabilities due to its fully pipelined MAC unit and the support for dual and quad 64-bit accumulations. The architecture supports a wide range of application-specific instructions and enhanced memory management via configurable caches and an optional MMU. Leveraging industry-standard interfaces, it allows seamless integration with existing chip architectures. These capabilities, coupled with high code density and efficient power management strategies, reinforce its suitability for next-generation multimedia, signal processing, and control systems looking to maximize performance and minimize power consumption.

eSi-RISC
TSMC
180nm
CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) is an advanced IP core that elevates neural processing capabilities for edge AI applications. It is adaptable to various systems, exhibiting remarkable efficiency through its runtime reconfigurable interconnects, which aid in managing both transformer and convolutional neural networks. Designed for scalability, DNA-II supports numerous applications ranging from 1k MACs to extensive SoC implementations. DNA-II's architecture enables optimal parallelism by dynamically managing data paths between compute units, ensuring minimized on-chip memory bandwidth and maximizing operational efficiency. Paired with the MERA software stack, it provides seamless integration and optimization of neural network tasks, significantly enhancing computation ordering and resource distribution. Its applicability extends across various industry demands, massively increasing the operational efficiency of AI tasks at the edge. DNA-II, the pivotal force in the SAKURA-II Accelerator, brings innovative processing strength in compact formats, driving forward the development of edge-based generative AI and other demanding applications.

EdgeCortix Inc.
AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor

Yitian 710 Processor

The Yitian 710 Processor stands as a flagship ARM-based server processor spearheaded by T-Head, featuring an intricate architecture designed by the company itself. Utilizing advanced multi-core technology, the processor incorporates up to 128 high-performance ARMv9 CPU cores, each complete with its own substantial cache for enhanced data access speed. The processor is adeptly configured to handle intensive computing tasks, supported by a robust off-chip memory system with 8-channel DDR5, reaching peak bandwidths up to 281GB/s. An impressive I/O subsystem featuring PCIe 5.0 interfaces facilitates extensive data throughput capabilities, making it highly suitable for high-demand applications. Compliant with modern energy efficiency standards, the processor boasts innovative multi-die packaging to maintain optimal heat dissipation, ensuring uninterrupted performance in data centers. This processor excels in cloud services, big data computations, video processing, and AI inference operations, offering the speed and efficiency required for next-generation technological challenges.

T-Head
AI Processor, AMBA AHB / APB / AXI, Audio Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor

Avispado

The Avispado is a sleek and efficient 64-bit RISC-V in-order processing core tailored for applications where energy efficiency is key. It supports a 2-wide in-order issue, emphasizing minimal area and power consumption, which makes it ideal for energy-conscious system-on-chip designs. The core is equipped with direct support for unaligned memory accesses and is multiprocessor-ready, providing a versatile solution for modern AI needs. With its small footprint, Avispado is perfect for machine learning systems requiring little energy per operation. This core is fully compatible with RISC-V Vector Specification 1.0, interfacing seamlessly with Semidynamics' vector units to support vector instructions that enhance computational efficiency. The integration with Gazzillion Misses™ technology allows support for extensive memory latency workloads, ideal for key applications in data center machine learning and recommendation systems. The Avispado also features a robust set of RISC-V instruction set extensions for added capability and operates smoothly within Linux environments due to comprehensive memory management unit support. Multiprocessor-ready design ensures flexibility in embedding many Avispado cores into high-bandwidth systems, facilitating powerful and efficient processing architectures.

Semidynamics
AI Processor, AMBA AHB / APB / AXI, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core developed by Micro Magic, Inc. is a highly efficient processor designed to deliver robust performance while maintaining minimal power consumption. The core can run at up to 5GHz and consumes only 10mW at 1GHz, making it an ideal solution for applications where energy efficiency is critical. The design leverages innovative techniques to sustain high performance with low voltage operation, ensuring that it can handle demanding processing tasks with reliability. This RISC-V core showcases Micro Magic's expertise in providing high-speed silicon solutions without compromising on power efficiency. It is particularly suited for applications that require both computational prowess and energy conservation, making it an optimal choice for modern SoC (System-on-Chip) designs. The core's architecture is crafted to support a wide range of high-performance computing requirements, offering flexibility and adaptability across various applications and industries. Its integration into larger systems can significantly enhance the overall energy efficiency and speed of electronic devices, contributing to advanced technological innovations.

Micro Magic, Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Independent, Processor Cores

Tyr Superchip

The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles vast amounts of sensor data necessary for complete autonomous driving and supports rapid AI computing at the edge. One of the key challenges it addresses is providing massive compute power combined with low-latency outputs, achieving what traditional architectures cannot in terms of energy efficiency and speed. Tyr chips incorporate robust safety mechanisms and are ISO 26262 and ASIL-D ready, making them ideally suited for the critical standards required in automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50W for higher-end tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA's Superchip is a testament to their innovative prowess, promising unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA's position as a forward-thinking leader in semiconductor technology.

VSORA
AI Processor, Audio Processor, CAN XL, CPU, Interleaver/Deinterleaver, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without reliance on external networks or cloud services. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
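A 75% memory reduction is what simple arithmetic predicts when moving from 16-bit to 4-bit weights. The sketch below illustrates this with an arbitrary 7B-parameter model; it deliberately ignores the per-block scales and zero-points that real schemes like Q4_K add on top, so actual footprints are slightly larger:

```python
# Back-of-the-envelope model-weight footprint at a given bit width.
def weight_bytes(n_params, bits_per_weight):
    # Ignores quantization metadata (per-block scales / zero-points).
    return n_params * bits_per_weight / 8

params = 7e9  # illustrative 7B-parameter model
fp16 = weight_bytes(params, 16)
q4 = weight_bytes(params, 4)
print(f"fp16: {fp16/1e9:.1f} GB -> 4-bit: {q4/1e9:.1f} GB "
      f"({1 - q4/fp16:.0%} smaller)")
# prints: fp16: 14.0 GB -> 4-bit: 3.5 GB (75% smaller)
```

The smaller footprint also means fewer bytes streamed per token, which is where part of the quoted speed gain in memory-bound inference comes from.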

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator stands out as a high-performance, energy-efficient edge co-processor designed to handle advanced AI tasks. Tailored for real-time, batch-one AI inferencing, it supports multi-billion parameter models, such as Llama 2 and Stable Diffusion, while maintaining low power consumption. The core technology leverages a dynamic neural accelerator for runtime reconfigurability and exceptional parallel processing, making it ideal for edge-based generative AI applications. With its flexible architecture, SAKURA-II facilitates the seamless execution of diverse AI models concurrently, without compromising on efficiency or speed. Integrated with the MERA compiler framework, it ensures easy deployment across various hardware systems, supporting frameworks like PyTorch and TensorFlow Lite for seamless integration. This AI accelerator excels in AI models for vision, language, and audio, fostering innovative content creation across these domains. Moreover, SAKURA-II supports a robust DRAM bandwidth, far surpassing competitors, ensuring superior performance for large language and vision models. It offers support for significant neural network demands, making it a powerful asset for developers in the edge AI landscape.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

M3000 Graphics Processor

The M3000 Graphics Processor from Digital Media Professionals is designed to deliver exceptional performance in 3D graphics rendering within compact, power-efficient packages. Optimized for high-performance visual computing, the M3000 provides state-of-the-art support for OpenGL ES 3.0, ensuring top-notch graphics output for embedded devices and edge computing. Exhibiting a scalable architecture, the M3000 allows precise customization to meet diverse performance and area efficiency requirements. It stands out in VR and AR applications, which demand intensive graphical calculations and rendering. Through the use of DMP's proprietary graphics architecture, Musashi, the M3000 achieves strong efficiency in power, performance, and area (PPA). Aligning with current industry needs, the processor supports a range of target applications, including IoT devices, smartphones, automotive systems, and more. Its versatility extends to customization of graphical throughput, making the M3000 a pivotal component in devices that require advanced graphics processing capabilities.

Digital Media Professionals Inc.
2D / 3D, ADPCM, GPU, Multiprocessor / DSP
View Details

XCM_64X64

Functioning as a comprehensive cross-correlator, the XCM_64X64 facilitates efficient and precise signal processing required in synthetic radar receivers and advanced spectrometers. Designed on IBM's 45nm SOI CMOS technology, it supports ultra-low power operation at about 1.5W for the entire array, with a sampling performance of 1GSps across a bandwidth of 10MHz to 500MHz. The ASIC is engineered to manage high-throughput data channels, a vital component for high-energy physics and space observation instruments.
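What a hardware cross-correlator array accumulates can be sketched in a few lines. This is a generic zero-lag correlation-matrix computation for illustration, assuming complex baseband channels; the channel count, noise model, and `cross_correlation_matrix` helper are assumptions, not the XCM_64X64's actual datapath.

```python
import numpy as np

def cross_correlation_matrix(samples):
    """Zero-lag cross-correlation of every channel pair.

    samples: (n_channels, n_samples) array of digitized signals.
    Returns an (n_channels, n_channels) Hermitian matrix whose (i, j)
    entry is the time-averaged product <x_i * conj(x_j)> -- the quantity
    a correlator array accumulates for each channel pair."""
    x = np.asarray(samples, dtype=np.complex128)
    return x @ x.conj().T / x.shape[1]

rng = np.random.default_rng(1)
common = rng.standard_normal(4096)           # signal seen by every channel
chans = np.stack([common + 0.5 * rng.standard_normal(4096) for _ in range(4)])
R = cross_correlation_matrix(chans)
# Channels sharing a signal show large off-diagonal terms relative to noise.
```

In radiometry and synthetic-aperture radar, these pairwise products are what let the instrument recover correlated signal buried in per-channel noise; the ASIC's value is computing all 64x64 pairs continuously at 1 GSps.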

Pacific MicroCHIP Corp.
X-Fab
40nm
3GPP-LTE, ADPCM, D/A Converter, Multiprocessor / DSP, Oversampling Modulator, Power Management, Receiver/Transmitter, RF Modules, VGA, Wireless Processor
View Details

Software-Defined High PHY

The Software-Defined High PHY from AccelerComm is designed for adaptability and high efficiency across ARM processor architectures. This product brings flexibility to software-defined radio applications by facilitating easy optimization for different platforms, considering power and capacity requirements. It allows integration without hardware acceleration based on the needs of specific deployments.

A key feature of the Software-Defined High PHY is its capability for customization. Users can tailor this IP to work optimally across various platforms, either independently or coupled with hardware-accelerated functionalities. This ensures the high performance needed for modern network demands is met without unnecessary resource consumption.

Well suited to scenarios needing O-RAN compliance, this PHY solution supports high adaptability and scalability for different use cases. It is ideal for developers who require robust communication solutions tuned for efficient execution in varying environmental conditions, contributing to lower latency and higher throughput in network infrastructures.

AccelerComm Limited
3GPP-5G, 3GPP-LTE, AMBA AHB / APB/ AXI, Multiprocessor / DSP, Processor Core Independent
View Details

SiFive Performance

The SiFive Performance family represents a new benchmark in computing efficiency and performance. These RISC-V processors are aimed at addressing the demands of modern workloads, including web servers, multimedia processing, networking, and storage in data centers. With its high throughput, out-of-order cores ranging from three-wide to six-wide configurations, and dedicated vector engines for AI tasks, the SiFive Performance family promises remarkable energy and area efficiency. This not only enables high compute density but also reduces costs and energy consumption, making it an optimal choice for contemporary data center applications. A hallmark of the Performance family is its scalability for various applications, including mobile, consumer, and edge infrastructure. The portfolio includes a range of models like the six-wide, out-of-order P870 core, capable of scaling up to a 256-core cluster, and the P650, known for its four-issue, out-of-order architecture supporting up to a 16-core cluster. Furthermore, the family includes the P550 series, which sets standards with its three-issue, out-of-order design, offering superior performance in an energy-efficient footprint. In addition to delivering exceptional computing power, the SiFive Performance processors excel in scenarios where power, footprint, and cost are crucial factors. With the potential for configurations up to 512 cores, these processors are designed to meet the growing demand for high-performance computing across multiple sectors.

SiFive, Inc.
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

RISC-V Core IP

The RISC-V Core IP from AheadComputing represents a pinnacle of modern processor architecture. Specializing in 64-bit application processors, this IP is designed to leverage the open-standard RISC-V architecture, ensuring flexibility while pushing the boundaries of performance. Its architecture is tailored to deliver outstanding per-core performance, making it ideal for applications requiring significant computational power combined with the benefits of open-source standards. Engineered with efficiency and compatibility in mind, the RISC-V Core IP by AheadComputing caters to a wide array of applications, from consumer electronics to advanced computing systems. It supports the development of highly efficient CPUs that not only excel in speed but also offer scalability across different computing environments. This makes it a highly versatile choice for developers aiming to adopt a powerful yet adaptable processing core. The AheadComputing RISC-V Core IP is also known for its configurability, making it suitable for various market needs and future technological developments. Built on the experience and expertise of its development team, this IP remains at the frontier of innovative processor design, enabling clients to harness cutting-edge computing solutions prepped for next-generation challenges.

AheadComputing Inc.
All Foundries
22nm, 28nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Cores
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an innovative ultra-low power microcontroller designed for always-on sensing applications, bringing intelligence directly to the sensor edge. This processor utilizes the processing power of spiking neural networks, combined with a nimble RISC-V processor core, to form a singular chip solution. Its design supports next-generation AI and signal processing capabilities, all while operating within a very narrow power envelope, crucial for battery-powered and latency-sensitive devices. This microcontroller's architecture supports advanced on-chip signal processing capabilities that include both Spiking Neural Networks (SNNs) and Deep Neural Networks (DNNs). These processing capabilities enable rapid pattern recognition and data processing similar to how the human brain functions. Notably, it operates efficiently under sub-milliwatt power consumption and offers fast response times, making it an ideal choice for devices such as wearables and other portable electronics that require continuous operation without significant energy draw. The T1 is also equipped with diverse interface options, such as QSPI, I2C, UART, JTAG, GPIO, and a front-end ADC, contained within a compact 2.16mm x 3mm, 35-pin WLCSP package. The device boosts applications by enabling them to execute with incredible efficiency and minimal power, allowing for direct connection and interaction with multiple sensor types, including audio and image sensors, radar, and inertial units for comprehensive data analysis and interaction.

Innatera Nanosystems
TSMC
28nm, 65nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Vision Processor, Wireless Processor
View Details

SiFive Intelligence X280

Tailored specifically for AI and machine learning requirements at the edge, the SiFive Intelligence X280 brings powerful capabilities to data-intensive applications. This processor line is part of the high-performance AI data flow processors from SiFive, designed to offer scalable vector computation capabilities. Key features include handling demanding AI workloads, efficient data flow management, and enhanced object detection and speech recognition processing. The X280 is equipped with vector processing capabilities that include a 512-bit vector length, single vector ALU VCIX (1024-bit), plus a host of new instructions optimized for machine learning operations. These features provide a robust platform for addressing energy-efficient inference tasks, driven by the need for high-performance yet low-power computing solutions. Key to the X280's appeal is its ability to interface seamlessly with popular machine learning frameworks, enabling developers to deploy models with ease and flexibility. Additionally, its compatibility with SiFive Intelligence Extensions and TensorFlow Lite enhances its utility in delivering consistent, high-quality AI processing in various applications, from automotive to consumer devices.

SiFive, Inc.
AI Processor, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

XCM_64X64_A

The XCM_64X64_A is a powerful array designed for cross-correlation operations, integrating 128 ADCs each capable of 1GSps. Targeted at high-precision synthetic radar and radiometer systems, this ASIC delivers ultra-low power consumption around 0.5W, ensuring efficient performance over a wide bandwidth range from 10MHz to 500MHz. Built on IBM's 45nm SOI CMOS technology, it forms a critical component in systems requiring rapid data sampling and intricate signal processing, all executed with high accuracy, making it ideal for airborne and space-based applications.

Pacific MicroCHIP Corp.
X-Fab
40nm
3GPP-LTE, ADPCM, D/A Converter, Multiprocessor / DSP, Oversampling Modulator, Power Management, Receiver/Transmitter, RF Modules, VGA, Wireless Processor
View Details

ZIA DV700 Series

The ZIA DV700 Series neural processing unit by Digital Media Professionals showcases superior proficiency in handling deep neural networks, tailored for high-reliability AI systems such as autonomous vehicles and robotics. This series excels in real-time image, video, and voice processing, emphasizing both efficiency and safety crucial for applications requiring accurate and speedy analysis. Leveraging FP16 floating-point precision, these units ensure robust AI model deployment without necessitating additional training, maintaining high inference precision for critical applications. Devised with versatility in mind, the DV700 supports a plethora of AI models, facilitating mobile, space-efficient integration across multiple platforms. Engineered to handle diverse DNN configurations, the ZIA DV700 stands out with hardware architectures optimized for inference processing. Its extensive application spread includes object detection, semantic segmentation, pose estimation, and more. By supporting standard AI development frameworks like Caffe, Keras, and TensorFlow, users can seamlessly develop AI applications with DV700's robust SDK and development tools. The IP core's design integrates a high-bandwidth on-chip RAM and weight compression, further boosting processing performance. Optimizing for enhanced AI inference tasks, the DV700 Series continues to be indispensable in high-stakes environments.

Digital Media Professionals Inc.
AI Processor, Interrupt Controller, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

Jotunn8 AI Accelerator

The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Fully programmable and algorithm-agnostic, it supports any algorithm and any host processor, and can execute generative AI models such as GPT-4 or Llama 3 with unparalleled efficiency. The system excels in delivering cost-effective solutions, offering throughput up to 3.2 petaflops (dense) without relying on CUDA, thus simplifying scalability and deployment. Optimized for cloud and on-premise configurations, Jotunn8 ensures maximum utility by integrating 16 cores and a high-level programming interface. Its architecture addresses conventional processing bottlenecks by keeping data constantly available at each processing unit. With the potential to operate large and complex models at reduced query costs, this accelerator maintains performance while consuming less power, making it a preferred choice for advanced AI tasks. The Jotunn8's hardware extends beyond AI-specific applications to general-purpose processing, showcasing its agility. By automatically selecting the most suitable processing path layer by layer, it optimizes both latency and power consumption. This provides users with a flexible platform that supports the deployment of vast AI models under efficient resource-utilization strategies. The configuration includes a peak power consumption of 180W and 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease. It aligns closely with theoretical limits for implementation efficiency, underscoring VSORA's commitment to high-performance computational capabilities.

VSORA
AI Processor, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Universal Chiplet Interconnect Express (UCIe)

The Universal Chiplet Interconnect Express (UCIe) by Extoll is a cutting-edge technology designed to meet the increasing demand for seamless integration of chiplets within a system. UCIe offers a highly efficient interconnect framework that underpins the foundational architecture of heterogeneous systems, enabling enhanced interoperability and performance across various chip components. UCIe distinguishes itself by offering an ultra-low power profile, making it a preferred option for power-sensitive applications. Its design focuses on facilitating high bandwidth data transfer, essential for modern computing environments that require the handling of vast amounts of data with speed and precision. Furthermore, UCIe supports a diverse range of process nodes, ensuring it integrates well with existing and emerging technologies. This innovation plays a pivotal role in accelerating the transition to advanced chiplet-based architectures, enabling developers to create systems that are both scalable and efficient. By providing a robust interconnect solution, UCIe helps reduce overall system complexity, lowers development costs, and improves design flexibility — making it an indispensable tool for forward-thinking semiconductor designs.

Extoll GmbH
All Foundries
28nm, 28nm SLP
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Processor Core Independent, V-by-One, VESA
View Details

eSi-ADAS

The eSi-ADAS suite is a high-performance radar processing solution primarily designed to enhance ADAS systems. It comprises a comprehensive set of radar accelerator IPs, such as FFT and CFAR engines, alongside tracking capabilities powered by Kalman filter technology. This setup facilitates real-time monitoring of diverse radar environments. Automotive and UAV sectors benefit significantly from eSi-ADAS, as it ensures precise situational awareness necessary for modern safety and collision avoidance systems. By offloading computationally intensive tasks from the central processing unit, it optimizes performance and power efficiency. This enables the handling of complex scenarios, from short-range radar operations to simultaneous tracking of numerous objects.
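The CFAR engines mentioned above implement constant false-alarm rate detection, which adapts the detection threshold to the local noise floor. The following is a minimal cell-averaging CFAR sketch for illustration; the guard/training window sizes, the `ca_cfar` helper, and the exponential noise model are assumptions, not details of the eSi-ADAS implementation.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR: estimate noise from training cells around
    each cell (skipping guard cells), then declare a detection when the
    cell exceeds a threshold scaled for the desired false-alarm rate."""
    n = len(power)
    n_train = 2 * train
    # Threshold multiplier for a given false-alarm probability
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - train - guard:i - guard]
        right = power[i + guard + 1:i + guard + train + 1]
        noise = (left.sum() + right.sum()) / n_train
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(4)
power = rng.exponential(1.0, 512)    # square-law-detected noise
power[200] += 50.0                   # a strong target return
hits = ca_cfar(power)                # target detected, false alarms rare
```

Offloading exactly this kind of sliding-window computation (together with the FFTs that produce the range-Doppler map) is what frees the host CPU in a radar pipeline.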

EnSilica
AI Processor, CAN XL, CAN-FD, Flash Controller, Multiprocessor / DSP, Processor Core Independent
View Details

Network Protocol Accelerator Platform

The Network Protocol Accelerator Platform (NPAP) is a high-performance solution that accelerates TCP/UDP/IP protocols within FPGA- and ASIC-based systems. Developed alongside the Fraunhofer Heinrich-Hertz-Institute, the platform offers customizable high-bandwidth, low-latency communication capabilities essential for Ethernet links ranging from 1G to 100G. Designed for a variety of hardware applications, it provides turnkey solutions and synthesizable HDL code that can be implemented directly in FPGAs. At its core, NPAP offloads TCP/UDP/IP processing into programmable logic, boosting network throughput while minimizing latency and freeing the host CPU. The platform's modular architecture supports full line-rate processing up to 70 Gbps in FPGAs and over 100 Gbps in ASICs. It features bi-directional data paths supporting multiple parallel TCP engines designed for scalable network processing. Its utility extends to FPGA-based SmartNICs, networked storage such as iSCSI, and high-speed video transmission. NPAP can be evaluated via a Remote Evaluation System, allowing potential users to conduct a hands-on assessment through a remote connection to MLE's lab, providing flexibility and saving integration time.

Missing Link Electronics
AMBA AHB / APB/ AXI, Ethernet, MIL-STD-1553, Multiprocessor / DSP, RapidIO, Safe Ethernet, SATA, USB, V-by-One
View Details

WiseEye2 AI Solution

WiseEye2 AI Solution by Himax revolutionizes edge computing in AI applications with its unique blend of an ultralow-power CMOS image sensor and the HX6538 AI microcontroller. This solution is specifically engineered for battery-powered applications that require continuous operation yet consume minimal power. The HX6538 microcontroller delivers substantial power-efficiency and performance gains, driven by its ARM-based architecture with a Cortex-M55 CPU and Ethos-U55 NPU. This enables highly complex and accurate AI computations directly at the endpoint without exorbitant power usage.

In terms of security and functionality, the WiseEye2 incorporates sophisticated cryptography engines and a layered power management system. These features ensure the solution not only processes data efficiently but also safeguards sensitive information. Its prowess in executing intricate AI models and seamless sensor fusion makes it an ideal player in the AIoT landscape, powering intelligent devices across verticals from smart home solutions to advanced security systems.

Himax's WiseEye2 thus extends its capabilities beyond typical AI solutions, facilitating continuous, real-time processing that is both resource-conservative and remarkably thorough. This blend of low-energy operation with high computational capability positions WiseEye2 as a frontline solution in the push towards smarter, more secure IoT ecosystems.

Himax Technologies, Inc.
AI Processor, Audio Processor, Cryptography Cores, Embedded Security Modules, Multiprocessor / DSP, Other, Platform Security, Processor Cores, Security Subsystems, Vision Processor
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

2D FFT

The 2D FFT from Dillon Engineering efficiently handles two-dimensional data transformation applications. It is particularly beneficial in scenarios where large-scale data analysis and image processing tasks require swift execution. The core exemplifies Dillon's expertise in enhancing processing speeds while maintaining high-quality output, making it indispensable for projects involving complex two-dimensional signal processing. Dillon's 2D FFT is designed to operate with internal or external memory configurations, supporting high throughput and flexibility in memory management. By utilizing dual FFT engines, it ensures efficient handling of horizontal and vertical data streams, making it suitable for tasks involving multidimensional data like images or video streams. This FFT Core is highly adaptable due to its flexible architecture enabled by the ParaCore Architect™ tool, which ensures that it can be easily customized to meet specific design and performance criteria. Thus, Dillon's 2D FFT stands as a crucial component for developers seeking to incorporate effective, reliable, and fast two-dimensional FFT processing into their systems.
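The dual-engine horizontal/vertical arrangement described above corresponds to the standard row-column decomposition of a 2D FFT, which can be sketched as follows (a generic illustration using NumPy, not Dillon's implementation):

```python
import numpy as np

def fft2_row_column(image):
    """2D FFT by the row-column method: 1D FFTs along every row, then
    1D FFTs along every column of the intermediate result -- the
    decomposition a dual-engine hardware 2D FFT exploits."""
    rows = np.fft.fft(image, axis=1)   # horizontal pass (first engine)
    return np.fft.fft(rows, axis=0)    # vertical pass (second engine)

img = np.random.default_rng(2).standard_normal((64, 64))
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))
```

Because the two passes are independent 1D transforms, a hardware design can stream rows through one FFT engine while a second engine consumes columns of the intermediate buffer, which is why memory configuration (internal versus external) is central to the core's throughput.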

Dillon Engineering, Inc.
Multiprocessor / DSP, PLL, Processor Core Independent
View Details

RISCV SoC - Quad Core Server Class

Dyumnin's RISCV SoC is built around a robust 64-bit quad-core server-class RISC-V CPU, offering subsystems that cater to AI/ML, automotive, multimedia, memory, and cryptographic needs. The SoC is notable for its AI accelerator, which pairs a custom CPU with a tensor flow unit designed to expedite AI tasks. Its communication subsystem supports a wide array of protocols, including PCIe, Ethernet, and USB, ensuring versatile connectivity. For automotive applications, it includes CAN and SafeSPI IPs.

Dyumnin Semiconductors
TSMC
14nm, 28nm, 32nm
2D / 3D, 3GPP-5G, 802.11, AI Processor, CPU, DDR, LCD Controller, LIN, Mobile DDR Controller, Multiprocessor / DSP, Other, Processor Core Dependent, SAS, USB, V-by-One, VGA
View Details

JPEG FPGA Cores

Designed for still image and video compression, the JPEG FPGA Cores by A2e Technologies deliver high-speed performance in processing JPEG Baseline with true grayscale capability. Engineered for efficiency, these cores can manage 140 Megapixels per second for 4:2:0 and 4:2:2 images, positioning them as some of the fastest in the sector. Their compact size, using less than 500 slices in a Xilinx Spartan 6 FPGA, sets a new standard for resource utilization. These cores feature a simple integration interface via FIFO for both input and output, making them adaptable to a wide range of projects. Designed with the capability for customization, they offer JPEG compliance (ISO/IEC 10918-1), programmable quantization tables, and are adaptable for any image size up to 16K x 16K. This flexibility is complemented by options for greyscale and YUV 4:2:0 compressions. Suitable for various applications, these cores can support all JPEG format scan configurations, providing a versatile tool for developers. The deliverables include a complete verification environment and bit-accurate software models, proving their readiness for immediate integration and testing. These comprehensive features make the JPEG FPGA Cores an ideal solution for developers seeking efficient, high-speed image processing capabilities.
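To put the quoted 140 Mpix/s figure in context, a quick back-of-the-envelope calculation shows the frame rate it implies at a common resolution (the 1080p frame size here is an assumption for illustration, not a vendor specification):

```python
# Frame rate implied by 140 Mpix/s at 1920x1080 (illustrative assumption)
pixels_per_frame = 1920 * 1080            # 2,073,600 pixels per frame
fps = 140e6 / pixels_per_frame
print(round(fps, 1))                      # 67.5 -> comfortably above 60 fps
```

That headroom above 60 fps is what makes the core viable for real-time video compression as well as still images.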

A2e Technologies
AMBA AHB / APB/ AXI, JPEG, Multiprocessor / DSP
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) from Siliconarts is engineered to provide high-performance acceleration for a wide array of computational tasks. As a versatile GPGPU, RAIV is designed to facilitate advancements in numerous sectors impacted by the fourth industrial revolution, including autonomous vehicles, the Internet of Things (IoT), virtual reality (VR), and data centers. Its primary function is to enhance the processing speed of complex data sets and enable intricate computations with improved efficiency. Particularly adept at managing intensive parallel processes, the RAIV GPGPU is instrumental in applications requiring substantial data throughput and computational power. Its architecture enables it to handle significant volumes of data with low latency while maintaining energy-efficient operations. This makes the RAIV an ideal choice for applications where computational precision and speed are paramount, such as in machine learning and data analytics. Through its integration with existing systems, the RAIV enhances performance and introduces new potentials in data computation and process automation. This GPGPU exemplifies Siliconarts' commitment to fostering innovation across diverse technology ecosystems, providing developers with tools to push the frontiers of what's possible in digital processing.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

Tianqiao-80 High-Efficiency 64-bit RISC-V CPU

StarFive's Tianqiao-80 CPU stands as a model of efficiency optimized for high-performance applications. With a design tailored for scenarios such as desktop computing and mobile technology, this 64-bit RISC-V CPU core integrates seamlessly into environments requiring swift data processing and the ability to manage complex workloads efficiently. The CPU offers comprehensive optimization to handle varying computational tasks with impressive energy efficiency. It integrates 64-bit architecture and is specially designed to leverage performance in AI and automotive applications, providing robust support for modern mobile and AI-driven ecosystems. The Tianqiao-80 is engineered to minimize power consumption while maximizing throughput, resulting in reduced operational costs and prolonged device lifespans, offering users a seamless balance of speed, energy consumption, and output efficiency.

StarFive
TSMC
28nm
CPU, Multiprocessor / DSP, Processor Cores
View Details

Trifecta-GPU

The Trifecta-GPU is a sophisticated family of COTS PXIe/CPCIe GPU Modules by RADX Technologies, designed for substantial computational acceleration and ease of use in PXIe/CPCIe platforms. Powered by the NVIDIA RTX A2000 Embedded GPU, it boasts up to 8.3 FP32 TFLOPS performance, becoming a preferred choice for modular Test & Measurement (T&M) and Electronic Warfare (EW) systems. It integrates seamlessly into systems, supporting MATLAB, Python, and C/C++ programming, making it versatile for signal processing, machine learning, and deep learning inference applications. A highlight of the Trifecta-GPU is its remarkable computing prowess coupled with its design that fits within power and thermal constraints of legacy and modern chassis. It is available in both single and dual-slot variants, with the capability to dissipate power effectively, allowing users to conduct fast signal analysis and execute machine learning algorithms directly where data is acquired within the system. With its peak performance setting new standards for cost-effective compute acceleration, the Trifecta-GPU also supports advanced computing frameworks, ensuring compatibility with a myriad of applications and enhancing signal classification and geolocation tasks. Its hardware capabilities are complemented by extensive software interoperability, supporting both Windows and Linux environments, further cementing its position as a top-tier solution for demanding applications.

RADX Technologies, Inc.
AI Processor, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

VisualSim Architect

VisualSim Architect offers a rich modeling and simulation environment used to explore the performance, power, and functionality of entire systems before they are built. By employing system-level design, users can simulate various scenarios and configurations to better understand potential limitations and opportunities. This platform assists in determining accurate assessments of system behavior under diverse conditions, therefore eliminating unforeseen complications during the development phase. It paves the way for rapid prototyping by providing virtual representations of hardware and software components, thus ensuring a more effective system design process. The tool supports comprehensive exploration across various domains like hardware-software partitioning, autonomous driver assistance, and more. It allows users to incorporate various scenarios, such as credit-based arbitration in data movement or mapping radar systems for aircraft interactions. By leveraging a graphical interface, the platform helps engineers visualize and manage system resources efficiently. Furthermore, the batch mode simulation feature facilitates large-scale system analysis for parameter optimization, ensuring robust design under varied operational conditions. One of the key strengths of VisualSim Architect lies in its adaptability, allowing it to run on multiple operating systems including Windows, Linux, and Mac OS/X. By being platform-independent, it ensures versatility across different hardware infrastructures and deployment requirements. This flexibility is further reinforced with its support for open-source XML libraries which help craft detailed simulations applicable to real-world contexts. It’s also complemented by a robust academic program that democratizes access to system engineering exploration, fostering a deeper understanding and capability in future engineers.

Mirabilis Design
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Cobra – Xilinx Kintex-7 Platform

The Cobra platform, based on the Xilinx Kintex-7 architecture, serves as a versatile development environment for testing and iterative designs of various IPs. This platform is tailored to optimize the prototyping and development processes by providing a high-performance FPGA foundation. The Kintex-7 series is known for its balance of high performance and logic capacity, making it suitable for a range of applications where precision and efficiency are crucial. Cobra facilitates seamless integration and testing of DisplayPort solutions, offering developers a convenient platform to validate designs prior to ASIC production. It supports comprehensive data management, encryption-verification processes, and multimedia operations all in one framework, reducing the design cycle and overall project costs. The platform is also equipped with advanced features for real-time signal processing and enhanced data throughput, catering to industries from automotive to broadcast technology. The Cobra platform, thus, accelerates the development timeline while continuing to maintain high fidelity and robust functionality necessary in modern electronic design workflows.

Trilinear Technologies
AMBA AHB / APB/ AXI, CPU, Multiprocessor / DSP, Processor Core Independent, USB
View Details

GateMate FPGA

Cologne Chip AG's GateMate FPGA series is designed for small to medium-sized applications, providing an optimal balance of performance and cost. This FPGA family boasts incredible logic capacity and power efficiency, making them a versatile choice for engineers. With a package size tailored for PCB compatibility, these FPGAs are suitable for a wide range of uses, from educational projects to industrial-scale productions. The GateMate FPGA employs an innovative architecture featuring CPE programmable elements, allowing for efficient multiplier construction and enhanced memory capabilities. Supporting a variety of applications, these FPGAs are designed to facilitate high-speed communications with built-in SerDes interfaces. Their synthesis process uses the Yosys framework, while chip programming is seamlessly managed by the open-source openFPGALoader. Produced using GlobalFoundries' 28 nm Super Low Power process, these devices ensure a sturdy supply chain and reliable performance. With features such as quad SPI interface for fast configuration, extensive GPIO support, and low power consumption modes, the GateMate FPGA stands out as a high-performance, cost-effective solution for modern digital designs.

Cologne Chip AG
GLOBALFOUNDRIES
28nm SLP
AMBA AHB / APB / AXI, CAN-FD, Cell / Packet, CPU, Ethernet, JPEG, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Core Independent, RapidIO, Receiver/Transmitter
View Details

Pipelined FFT

Dillon Engineering's Pipelined FFT core streamlines continuous data transformations with minimal memory usage. The architecture is optimal for low-latency applications that require consistent data flow and smooth signal processing, making it an essential solution for environments where real-time performance is critical. The core uses a single-butterfly-per-rank pipeline structure, allowing continuous data processing without significant delays. Support for both fixed- and floating-point operation broadens its applicability across industry needs, and its synchronization scheme ensures data consistency and accuracy throughout the operational cycle. Built with Dillon's ParaCore Architect™, the Pipelined FFT is highly customizable, adapting quickly to different architectures and design requirements. This makes it particularly suitable for real-time data analytics and signal processing applications where efficiency and precision are priorities. Its streamlined approach and minimal resource requirements make it a strong choice for sophisticated digital systems.
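The single-butterfly-per-rank idea can be illustrated in software: an iterative radix-2 decimation-in-time FFT performs one pass of butterflies per rank (log2 N ranks in total), and a pipelined core unrolls exactly these ranks into hardware stages. The sketch below is an illustrative Python model, not Dillon's implementation:

```python
import cmath

def fft_ranks(x):
    """Iterative radix-2 DIT FFT: one butterfly pass per rank,
    mirroring the stage structure a pipelined FFT unrolls in hardware."""
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    a = list(x)
    # Bit-reversal permutation of the input order.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # One rank per stage; each rank applies n/2 butterflies.
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                u = a[start + k]
                t = w * a[start + k + size // 2]
                a[start + k] = u + t               # butterfly top output
                a[start + k + size // 2] = u - t   # butterfly bottom output
                w *= w_step                        # advance twiddle factor
        size *= 2
    return a
```

A hardware pipeline maps each `while size <= n` iteration to a physical rank with its own butterfly unit, so samples stream through all ranks concurrently instead of looping in place.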

Dillon Engineering, Inc.
Multiprocessor / DSP, PLL, Processor Core Independent
View Details

Vega eFPGA

The Vega eFPGA is a flexible programmable solution designed to extend SoC designs with minimal integration effort. It offers increased performance, reduced costs, secure IP handling, and ease of integration. Its versatile architecture allows tailored configurations to suit varying application requirements. The IP includes configurable tiles such as CLBs (Configurable Logic Blocks), BRAM (Block RAM), and DSP (Digital Signal Processing) units. Each CLB contains eight 6-input lookup tables with dual outputs, plus an optional fast adder with a carry chain. The BRAM provides 36 Kb of dual-port memory in configurable arrangements, while the DSP tile targets complex arithmetic with 18x20 multipliers and a wide 64-bit accumulator. Focused on easy system design and acceleration, the Vega eFPGA integrates and verifies cleanly in any SoC design. It is backed by a robust EDA toolset and extensive customization options, and is adaptable to any semiconductor fabrication process. This flexibility and technological robustness make the Vega eFPGA a standout choice for developing innovative and complex programmable logic solutions.
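The DSP tile's 18x20 multiplier feeding a 64-bit accumulator can be modeled behaviorally in a few lines. This is a sketch under assumed semantics (two's-complement signed operands, wrap-around accumulation); the actual Vega eFPGA DSP behavior may differ:

```python
MASK64 = (1 << 64) - 1  # 64-bit accumulator width

def to_signed(v, bits):
    """Interpret the low `bits` of v as a two's-complement signed value."""
    v &= (1 << bits) - 1
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

def dsp_mac(acc, a, b):
    """One multiply-accumulate step: an 18-bit x 20-bit signed multiply
    summed into a 64-bit wrapping accumulator (illustrative model only)."""
    prod = to_signed(a, 18) * to_signed(b, 20)
    return (acc + prod) & MASK64
```

For example, `dsp_mac(0, 3, 4)` yields 12, while a negative product wraps modulo 2**64 rather than going below zero, matching typical fixed-width accumulator behavior.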

Rapid Silicon
CPU, Embedded Memories, Multiprocessor / DSP, Processor Core Independent, Vision Processor, WMV
View Details

Nerve IIoT Platform

The Nerve IIoT Platform by TTTech Industrial Automation is an edge computing solution that bridges industrial environments and digital business models. Designed for machine builders, it supports real-time data exchange, offering a robust infrastructure that connects physical machines directly with IT systems. The platform optimizes machine performance by enabling remote management and software deployment. Nerve's architecture is highly modular, making it adaptable to specific industrial needs. Its cloud-managed services enable seamless application deployment across multiple devices, from the cloud or from on-premises infrastructure. By supporting hardware ranging from simple gateways to industrial PCs, the platform scales with business demands. Security is a pivotal aspect of Nerve: the platform is IEC 62443 certified for safe deployment and undergoes regular penetration tests to ensure integrity and protection. Integration with protocols such as OPC UA and MQTT enables rich data collection and real-time analytics, promoting efficiency and reducing operational costs through predictive maintenance and system optimization.

TTTech Industrial Automation AG
15 Categories
View Details

SoC Platform

SEMIFIVE's SoC platform provides a rapid and cost-effective path to developing purpose-built custom silicon solutions. By leveraging silicon-proven IPs and optimized design methodologies, this platform aims to lower costs, reduce risks, and accelerate time-to-market. The platform is distinguished by its domain-specific architecture, which supports a pre-configured and verified IP pool, allowing for a swift hardware/software bring-up. This platform empowers companies to turn their ideas into silicon efficiently, minimizing development costs and engineering risks. The SoC platform's benefits include significant cost savings with up to 50% lower non-recurring engineering (NRE) costs compared to the industry average. It also ensures a rapid turnaround time, up to 50% faster than typical timelines, due to its pre-verified design components and reuse capabilities. The platform's pre-established IP pools, which are selected, configured, implemented, and verified, simplify the engagement model, promising a comprehensive suite of services from architecture to manufacturing. Technical highlights of the SoC platform include configurations for AI inference with a SiFive quad-core U74 RISC-V CPU, LPDDR4x memory interfaces, and PCIe Gen4 connectivity, optimized for applications like big data analytics and vision processing. The SoC platform also supports AIoT and HPC applications, featuring scalable configurations for various performance demands and integration flexibility for customer-specific IPs.

SEMIFIVE
Samsung
5nm, 8nm LPP, 14nm
11 Categories
View Details

Atrevido

The Atrevido is a 64-bit RISC-V core designed for out-of-order processing, providing exceptional performance for applications needing high bandwidth and low latency. It features a configurable 2/3/4-wide out-of-order issue and completion mechanism, ensuring seamless handling of complex, memory-intensive operations. The core is multiprocessor-ready, provides direct hardware support for unaligned memory accesses, and supports various RISC-V extensions for enhanced functionality. The IP is particularly adept at machine learning workloads, key-value stores, and recommendation systems, thanks to its integration with Semidynamics' Gazzillion Misses™ technology, which lets the Atrevido core sustain full memory bandwidth even with smaller processing cores, minimizing the required silicon footprint. With support for the RISC-V Vector Specification 1.0, Atrevido is vector-ready, allowing dense encoding of computational instructions and efficient handling of sparse tensor weights. The core is also Linux-ready, with full MMU support, and is compatible with cache-coherent multiprocessing environments, making it well suited to building SoCs that require numerous cores with scalability and performance tailored to extensive processing needs.

Semidynamics
AI Processor, AMBA AHB / APB / AXI, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA
View Details

Prodigy Universal Processor

The Prodigy Universal Processor by Tachyum is positioned as the world's first universal processor. It targets a broad spectrum of applications including high-performance computing, AI development, and deep machine learning. Prodigy stands out by integrating the capabilities of CPUs, GPGPUs, and TPUs into a single architecture, enabling high computational power at reduced energy consumption and improved server utilization. Tachyum claims up to 18 times better performance and 6 times greater efficiency per watt than conventional industry processors. The architecture combines general-purpose computing with high-performance workloads, simplifying programming and deployment for a wide range of applications. With 256 high-performance cores supported by DDR5 memory controllers and PCIe 5.0 lanes, Prodigy positions itself as a solution for scenarios requiring extensive data handling, such as cloud computing, data analytics, and large-scale databases. Its emulation systems allow smooth transitions from existing architectures, making it a versatile choice for engineers and developers.

Tachyum Inc.
AI Processor, Building Blocks, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details