
Coprocessor Semiconductor IP Solutions

In the realm of modern computing, coprocessor semiconductor IPs play a crucial role in augmenting system capabilities. A coprocessor is a supplementary processor that executes specific tasks more efficiently than the primary central processing unit (CPU). These coprocessors are specialized semiconductor IPs utilized in devices requiring enhanced computational power for particular functions such as graphics rendering, encryption, mathematical calculations, and artificial intelligence (AI) processing.

Coprocessors are integral in sectors where high performance and efficiency are paramount. For instance, in the gaming industry, a graphics processing unit (GPU) acts as a coprocessor to handle the high demand for rendering visuals, thus alleviating the burden from the CPU. Similarly, AI accelerators in smartphones and servers offload intensive AI computation tasks to speed up processing while conserving power.

You will find various coprocessor semiconductor IP products geared toward enhancing computational specialization. These include digital signal processors (DSPs) for processing real-time audio and video signals, and hardware encryption coprocessors for securing data transactions. With the rise in machine learning applications, tensor processing units (TPUs) have become invaluable, offering massively parallel computing to efficiently manage AI workloads.

By incorporating these coprocessor semiconductor IPs into a system design, manufacturers can achieve remarkable improvements in speed, power efficiency, and processing power. This enables the development of cutting-edge technology products across a range of fields from personal electronics to autonomous vehicles, ensuring optimal performance in specialized computing tasks.

All semiconductor IP
35 IPs available

Origin E1

The Origin E1 is an optimized neural processing unit (NPU) targeting always-on applications in devices like home appliances, smartphones, and security cameras. It provides a compact, energy-efficient solution with performance tailored to 1 TOPS, making it ideal for systems needing low power and minimal area. The architecture is built on Expedera's unique packet-based approach, which enables enhanced resource utilization and deterministic performance, significantly boosting efficiency while avoiding the pitfalls of traditional layer-based architectures. The architecture is fine-tuned to support standard and custom neural networks without requiring external memory, preserving privacy and ensuring fast processing. Its ability to process data in parallel across multiple layers results in predictable performance with low power and latency. Always-sensing cameras leveraging the Origin E1 can continuously analyze visual data, facilitating smoother and more intuitive user interactions. Successful field deployment in over 10 million devices highlights the Origin E1's reliability and effectiveness. Its flexible design allows for adjustments to meet the specific PPA requirements of diverse applications. Offered as Soft IP (RTL) or GDS, this engine is a blend of efficiency and capability, capitalizing on the full scope of Expedera's software tools and custom support features.

Expedera
13 Categories
View Details

SCR9 Processor Core

Designed for high-demand applications in server and computing environments, the SCR9 Processor Core is a robust 64-bit RISC-V solution. It features a 12-stage superscalar, out-of-order pipeline to handle intensive processing tasks, backed by versatile floating-point and vector processing units. The core meets extensive computing needs with support for up to 16-core clustering and seamless integration with AOSP or Linux operating systems.

A powerful memory subsystem comprising L1, L2, and shared L3 caches enhances data handling, while memory coherency ensures fluid operation in multi-core settings. Cryptography and vector extensions further broaden its application potential, establishing the SCR9 as an ideal candidate for cutting-edge data tasks.

From enterprise servers to personal computing devices, video processing, and high-performance computations for AI and machine learning, the SCR9 delivers across an array of demanding scenarios. Its design integrates advanced power and process technologies for complex computing landscapes, embodying efficiency and innovation in processor core technology.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Origin E8

The Origin E8 NPU by Expedera is engineered for the most demanding AI deployments such as automotive systems and data centers. Capable of delivering up to 128 TOPS per core and scalable to PetaOps with multiple cores, the E8 stands out for its high performance and efficient processing. Expedera's packet-based architecture allows for parallel execution across varying layers, optimizing resource utilization, and minimizing latency, even under strenuous conditions. The E8 handles complex AI models, including large language models (LLMs) and standard machine learning frameworks, without requiring significant hardware-specific changes. Its support extends to 8K resolutions and beyond, ensuring coverage for advanced visualization and high-resolution tasks. With its low deterministic latency and minimized DRAM bandwidth needs, the Origin E8 is especially suitable for high-performance, real-time applications. The high-speed processing and flexible deployment benefits make the Origin E8 a compelling choice for companies seeking robust and scalable AI infrastructure. Through customized architecture, it efficiently addresses the power, performance, and area considerations vital for next-generation AI technologies.

Expedera
12 Categories
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.

RaiderChip
GlobalFoundries, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

TimbreAI T3

The TimbreAI T3 is a premier AI inference engine from Expedera, designed specifically for audio noise reduction in power-sensitive devices like wireless headsets. Achieving up to 3.2 GOPS at an ultra-low power consumption of less than 300 µW, the T3 caters to devices with stringent power and area constraints. This product features Expedera's advanced packet-based architectural strategy, setting a new standard for efficiency in embedded AI applications. Ported easily across different foundries, the TimbreAI T3 ensures seamless performance without requiring alterations to pre-trained models, safeguarding accuracy and maintaining consistent results. Its comprehensive support for common audio neural networks and standard NN functions underscores its versatility and adaptability within varied use cases. Exemplified by successful integration in over 10 million devices globally, the T3 is proof of Expedera's commitment to delivering high-quality, energy-efficient semiconductor solutions. Coupled with expedited development and integration capabilities, this IP represents a vital component in enhancing audio processing capabilities across contemporary electronic devices.

Expedera
Audio Processor, Building Blocks, Coprocessor, IoT Processor, Vision Processor
View Details

xcore.ai

xcore.ai is a versatile platform specifically crafted for the intelligent IoT market. It hosts a unique architecture with multi-threading and multi-core capabilities, ensuring low latency and high deterministic performance in embedded AI applications. Each xcore.ai chip contains 16 logical cores organized in two multi-threaded processor 'tiles' equipped with 512kB of SRAM and a vector unit for enhanced computation, enabling both integer and floating-point operations. The design accommodates extensive communication infrastructure within and across xcore.ai systems, providing scalability for complex deployments. Integrated with embedded PHYs for MIPI, USB, and LPDDR, xcore.ai is capable of handling a diverse range of application-specific interfaces. Leveraging its flexibility in software-defined I/O, xcore.ai offers robust support for AI, DSP, and control processing tasks, making it an ideal choice for enhancing IoT device functionalities. With its support for FreeRTOS, C/C++ development environment, and capability for deterministic processing, xcore.ai guarantees precision in performance. This allows developers to partition xcore.ai threads optimally for handling I/O, control, DSP, and AI/ML tasks, aligning perfectly with the specific demands of various applications. Additionally, the platform's power optimization through scalable tile clock frequency adjustment ensures cost-effective and energy-efficient IoT solutions.

XMOS Semiconductor
TSMC
20nm
19 Categories
View Details

3D Imaging Chip

The 3D Imaging Chip developed by Altek is a remarkable example of cutting-edge imaging technology, tailored to meet the evolving needs of industries that require three-dimensional image capture. This chip is designed to provide high-resolution 3D images, making it ideal for applications in areas such as augmented reality, virtual reality, and industrial automation. The chip integrates advanced depth-sensing capabilities, ensuring that it can capture intricate details with high fidelity. This feature is particularly beneficial for tasks that require precise measurements and accurate representations of spatial relationships, such as in robotics and autonomous systems. The 3D Imaging Chip's robustness also contributes to its functionality in challenging environments, making it suitable for a variety of industrial applications. Additionally, this chip is engineered to operate efficiently, incorporating power management techniques that enhance its performance while minimizing energy consumption. Altek's focus on innovation is evident in the chip's ability to deliver enhanced depth perception and its integration with AI technologies, which significantly expand its application potential.

Altek Corporation
GlobalFoundries, UMC
28nm, 65nm
A/D Converter, Coprocessor, Oversampling Modulator, Photonics, Sensor
View Details

DisplayPort Transmitter

The DisplayPort Transmitter is a highly advanced solution designed to seamlessly transmit high-definition audio and video data between devices. It adheres to the latest VESA standards, ensuring it can handle DisplayPort 1.4 and 2.1 specifications with ease. The transmitter is engineered to support a plethora of audio interfaces including I2S, SPDIF, and DMA, making it highly adaptable to a wide range of consumer and professional audio-visual equipment. With features focused on AV sync and timing recovery, it ensures smooth and uninterrupted data flow even in the most demanding applications. This transmitter is particularly beneficial for those wishing to integrate top-of-the-line audio and video synchronization within their projects, offering customizable sound settings that can accommodate unique user requirements. It's robust enough to be used across industry sectors, from high-end consumer electronics like gaming consoles and home theater systems to professional equipment used in broadcast and video wall displays. Moreover, the DisplayPort Transmitter's architecture facilitates seamless integration into existing FPGA and ASIC systems without a hitch in performance. Comprehensive compliance testing ensures that it is compatible with a wide base of devices and technologies, making it a dependable choice for developers looking to provide comprehensive DisplayPort solutions. Whether it's enhancing consumer electronics or powering complex industry-specific systems, the DisplayPort Transmitter is built to deliver exemplary performance.

Trilinear Technologies
AMBA AHB / APB/ AXI, Coprocessor, HDMI, Input/Output Controller, PCI, PowerPC, RapidIO, SATA, USB, V-by-One
View Details

Origin E2

The Origin E2 from Expedera is engineered to perform AI inference with a balanced approach, excelling under power and area constraints. This IP is strategically designed for devices ranging from smartphones to edge nodes, providing up to 20 TOPS performance. It features a packet-based architecture that enables parallel execution across layers, improving resource utilization and performance consistency. The engine supports a wide variety of neural networks, including transformers and custom networks, ensuring compatibility with the latest AI advancements. Origin E2 caters to high-resolution video and audio processing up to 4K, and is renowned for its low latency and enhanced performance. Its efficient structure keeps power consumption down, helping devices run demanding AI tasks more effectively than with conventional NPUs. This architecture ensures a sustainable reduction in the dark silicon effect while maintaining high operating efficiencies and accuracy thanks to its TVM-based software support. Deployed successfully in numerous smart devices, the Origin E2 guarantees power efficiency sustained at 18 TOPS/W. Its ability to deliver exceptional quality across diverse applications makes it a preferred choice for manufacturers seeking robust, energy-conscious solutions.

Expedera
12 Categories
View Details

Veyron V1 CPU

The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.

Ventana Micro Systems
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores
View Details

Origin E6

Expedera's Origin E6 NPU is crafted to enhance AI processing capabilities in cutting-edge devices such as smartphones, AR/VR headsets, and automotive systems. It offers scalable performance from 16 to 32 TOPS, adaptable to various power and performance needs. The E6 leverages Expedera's packet-based architecture, known for its highly efficient execution of AI tasks, enabling parallel processing across multiple workloads. This results in better resource management and higher performance predictability. Focusing on both traditional and new AI networks, Origin E6 supports large language models as well as complex data processing tasks without requiring additional hardware optimizations. Its comprehensive software stack, based on TVM, simplifies the integration of trained models into practical applications, providing seamless support for mainstream frameworks and quantization options. Origin E6's deployment reflects meticulous engineering, optimizing memory usage and processing latency for optimal functionality. It is designed to tackle challenging AI applications in a variety of demanding environments, ensuring consistent high-performance outputs and maintaining superior energy efficiency for next-generation technologies.

Expedera
AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Coprocessor, CPU, DSP Core, GPU, IoT Processor, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

RISC-V Hardware-Assisted Verification

The RISC-V Hardware-Assisted Verification by Bluespec is designed to expedite the verification process for RISC-V cores. This platform supports both ISA and system-level testing, adding robust features such as verifying standard and custom ISA extensions along with accelerators. Moreover, it offers scalable access through the AWS cloud, making verification available anytime and anywhere. This tool aligns with the needs of modern developers, ensuring thorough testing within a flexible and accessible framework.

Bluespec
AMBA AHB / APB/ AXI, Coprocessor, CPU, Input/Output Controller, Peripheral Controller
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across variegated hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
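The memory arithmetic behind the quoted 75% footprint reduction is easy to sketch. Below is an illustrative model of 4-bit block quantization in the spirit of Q4_K-style formats — a simplified stand-in, not RaiderChip's actual kernel; the block size of 32 and the per-block FP16 scale are assumptions. Each block of weights is stored as 4-bit codes plus one scale, shrinking storage to roughly 28% of plain FP16.

```python
# Illustrative 4-bit block quantization (simplified; not RaiderChip's
# Q4_K kernel). Weights are split into blocks; each block stores 4-bit
# codes plus one scale, cutting memory roughly 4x versus FP16.
import numpy as np

def quantize_block_4bit(weights, block_size=32):
    """Quantize a 1-D float array to 4-bit codes with per-block scales."""
    assert len(weights) % block_size == 0
    blocks = weights.reshape(-1, block_size)
    # One scale per block maps the signed 4-bit range [-8, 7] onto the block.
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0
    codes = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
    return codes, scales

def dequantize_block_4bit(codes, scales):
    """Reconstruct approximate weights from codes and scales."""
    return (codes * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
codes, scales = quantize_block_4bit(w)
w_hat = dequantize_block_4bit(codes, scales)

# Footprint: 4 bits/weight + one FP16 scale per 32 weights, vs 16 bits/weight.
bits_q = 4 * w.size + 16 * scales.size
bits_fp16 = 16 * w.size
print(f"footprint ratio: {bits_q / bits_fp16:.3f}")  # 0.281, i.e. ~72% smaller
```

With a smaller block size or extra per-block metadata (as real Q4_K uses), the ratio shifts slightly, which is how figures around the advertised 75% reduction arise.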

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

VIDIO 12G SDI FMC Daughter Card

The VIDIO 12G SDI FMC Daughter Card is engineered to facilitate next-level broadcast video applications. Equipped with 12G SDI and 10G IP interfaces, this card supports 4K resolution at 60 frames per second, making it compatible with various AMD and Intel development boards. Manufactured with the latest chip technology, the card utilizes a single board design, incorporating full-size edge launch BNCs along with an SFP+ cage. This enables developers to integrate additional SFP-BNC inputs and outputs, extending versatility in signal configurations. The card is thoroughly tested for quality, fulfilling requirements for jitter performance over extended cable runs, ensuring signal reliability in demanding broadcast environments. With a focus on ease of use, it requires no software for initialization, making it ready-to-use for rapid deployment.

Nextera Video
TSMC, UMC
180nm, 250nm
Coprocessor, Fibre Channel, GPU, Graphics & Video Modules, Peripheral Controller, SATA, USB, V-by-One, VME Controller
View Details

DolphinWare IPs

DolphinWare IPs is a versatile portfolio of intellectual property solutions that enable efficient SoC design. This collection includes various control logic components such as FIFO, arbiter, and arithmetic components like math operators and converters. In addition, the logic components span counters, registers, and multiplexers, providing essential functionalities for diverse industrial applications. The IPs in this lineup are meticulously designed to ensure data integrity, supported by robust verification IPs for AXI4, APB, SD4.0, and more. This comprehensive suite meets the stringent demands of modern electronic designs, facilitating seamless integration into existing design paradigms. Beyond their broad functionality, DolphinWare’s offerings are fundamental to applications requiring specific control logic and data integrity solutions, making them indispensable for enterprises looking to modernize or expand their product offerings while ensuring compliance with industry standards.
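As a rough illustration of the control logic behind one of these building blocks, here is a behavioral Python model of a FIFO with full/empty flags. This is purely illustrative: DolphinWare delivers RTL IP, and this model is not derived from their design — it only mirrors how read/write pointers and an occupancy counter drive the status flags such an IP exposes.

```python
# Behavioral model of synchronous-FIFO control logic (illustrative only;
# the actual IP is RTL). Read/write pointers wrap around a fixed-depth
# buffer, and a counter drives the full/empty status flags.
class FifoModel:
    def __init__(self, depth):
        self.depth = depth
        self.buf = [None] * depth
        self.wr = self.rd = self.count = 0

    @property
    def full(self):
        return self.count == self.depth

    @property
    def empty(self):
        return self.count == 0

    def push(self, data):
        if self.full:
            return False              # hardware would deassert write-ready
        self.buf[self.wr] = data
        self.wr = (self.wr + 1) % self.depth
        self.count += 1
        return True

    def pop(self):
        if self.empty:
            return None               # hardware would gate the read strobe
        data = self.buf[self.rd]
        self.rd = (self.rd + 1) % self.depth
        self.count -= 1
        return data

fifo = FifoModel(depth=4)
for i in range(5):
    fifo.push(i)                      # fifth push is refused: FIFO is full
assert fifo.full and fifo.pop() == 0  # oldest entry comes out first
```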

Dolphin Technology
TSMC
28nm, 32/28nm
Building Blocks, Coprocessor, Cryptography Cores, Receiver/Transmitter
View Details

DisplayPort Receiver

The DisplayPort Receiver is an essential component for receiving and interpreting high-quality audio and video data streams from a DisplayPort source. Compatible with the latest VESA DisplayPort standards, this receiver is built to handle both screen and audio signals with precision and minimal latency. It integrates sophisticated timing recovery features and boasts compliance with I2S and SPDIF audio protocols, ensuring that it remains versatile across different devices and applications. This receiver is designed to serve industries such as consumer electronics and professional video production, where reliability in signal reception and minimal downtime are crucial. Its capability to work seamlessly with multiple interfaces makes it a versatile asset for developers aiming to build robust multimedia systems, whether it be digital televisions, gaming devices, or large-scale video walls. Equipped to sync efficiently with various compilers on architectures like x86 and ARM, it guarantees that integration is both smooth and effective, validating its potential as a component for high-performance SoCs and FPGAs. The DisplayPort Receiver stands out with its real-time performance capabilities and ensures that the final output maintains high fidelity, catering to sectors that require uncompromised audio-visual quality.

Trilinear Technologies
AMBA AHB / APB/ AXI, Coprocessor, HDMI, Input/Output Controller, PCI, PowerPC, RapidIO, SATA, USB, V-by-One
View Details

FortiPKA-RISC-V

The FortiPKA-RISC-V is a highly specialized Public Key Algorithm coprocessor designed to streamline cryptographic operations by integrating modular multiplication with protections against side-channel and fault injection threats. It operates without the need for Montgomery domain transformations, optimizing the coprocessor’s performance while reducing area requirements. Tailored for applications demanding high efficiency and security, FortiPKA-RISC-V ensures robust performance in public key operations, suitable for secure communications and data protection scenarios. With a focus on reduced latency and power efficiency, this coprocessor can be implemented in various platforms, enhancing protection in devices and systems. The coprocessor is part of FortifyIQ's line of advanced IP solutions, providing a technology-agnostic approach that is adaptable to diverse environments. The FortiPKA-RISC-V offers benefits in applications requiring stringent security and efficiency, making it an ideal solution for developers looking to enhance cryptographic functionalities.
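To make the workload concrete, the sketch below shows the kind of operation a PKA coprocessor accelerates: modular exponentiation decomposed into repeated modular multiplications. It uses plain Python reduction for clarity — this is not FortifyIQ's implementation, and real hardware runs each step on operands thousands of bits wide, with or without Montgomery-form arithmetic. The RSA parameters are toy values for illustration only.

```python
# Sketch of the core public-key primitive a PKA coprocessor accelerates:
# modular exponentiation built from repeated modular multiplications.
# Plain "schoolbook" reduction here; hardware replaces each mod_mul with
# a wide modular multiplier (illustrative only, not FortifyIQ's design).

def mod_mul(a, b, n):
    """One modular multiplication: the unit of work a PKA core speeds up."""
    return (a * b) % n

def mod_exp(base, exp, n):
    """Right-to-left binary square-and-multiply using mod_mul as primitive."""
    result = 1
    base %= n
    while exp:
        if exp & 1:
            result = mod_mul(result, base, n)
        base = mod_mul(base, base, n)
        exp >>= 1
    return result

# Tiny RSA-style round trip with toy (insecure) textbook parameters.
p, q, e = 61, 53, 17
n = p * q                           # 3233
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: 2753
msg = 65
cipher = mod_exp(msg, e, n)         # 2790
assert mod_exp(cipher, d, n) == msg
```

A 2048-bit exponentiation performs thousands of such multiplications, which is why offloading them to a dedicated, side-channel-hardened datapath pays off.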

FortifyIQ
AMBA AHB / APB/ AXI, Coprocessor, Cryptography Cores, Platform Security, Security Protocol Accelerators, Security Subsystems, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an innovative ultra-low power microcontroller designed for always-on sensing applications, bringing intelligence directly to the sensor edge. This processor utilizes the processing power of spiking neural networks, combined with a nimble RISC-V processor core, to form a singular chip solution. Its design supports next-generation AI and signal processing capabilities, all while operating within a very narrow power envelope, crucial for battery-powered and latency-sensitive devices. This microcontroller's architecture supports advanced on-chip signal processing capabilities that include both Spiking Neural Networks (SNNs) and Deep Neural Networks (DNNs). These processing capabilities enable rapid pattern recognition and data processing similar to how the human brain functions. Notably, it operates efficiently under sub-milliwatt power consumption and offers fast response times, making it an ideal choice for devices such as wearables and other portable electronics that require continuous operation without significant energy draw. The T1 is also equipped with diverse interface options, such as QSPI, I2C, UART, JTAG, GPIO, and a front-end ADC, contained within a compact 2.16mm x 3mm, 35-pin WLCSP package. The device boosts applications by enabling them to execute with incredible efficiency and minimal power, allowing for direct connection and interaction with multiple sensor types, including audio and image sensors, radar, and inertial units for comprehensive data analysis and interaction.

Innatera Nanosystems
TSMC
28nm, 65nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Vision Processor, Wireless Processor
View Details

Calibrator for AI-on-Chips

The ONNC Calibrator is engineered to ensure high precision in AI System-on-Chips using post-training quantization (PTQ) techniques. This tool enables architecture-aware quantization, which helps maintain 99.99% precision even with fixed-point architecture, such as INT8. Designed for diverse heterogeneous multicore setups, it supports multiple engines within a single chip architecture and employs rich entropy calculation techniques. A major advantage of the ONNC Calibrator is its efficiency; it significantly reduces the time required for quantization, taking only seconds to process standard computer vision models. Unlike re-training methods, PTQ is non-intrusive, maintains network topology, and adapts based on input distribution to provide quick and precise quantization suitable for modern neural network frameworks such as ONNX and TensorFlow. Furthermore, the Calibrator's internal precision simulator uses hardware control registers to maintain precision, demonstrating less than 1% precision drop in most computer vision models. It adapts flexibly to various hardware through its architecture-aware algorithms, making it a powerful tool for maintaining the high performance of AI systems.
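For readers unfamiliar with PTQ, the sketch below shows the basic shape of post-training INT8 calibration: observe activations on sample data, choose a clipping range, derive one scale, and quantize symmetrically. It uses a simplified percentile clipping rule rather than ONNC's entropy-based method, and the percentile value and symmetric scheme are assumptions for illustration.

```python
# Minimal post-training INT8 calibration sketch (simplified percentile
# clipping, not ONNC's entropy-based algorithm). A calibration pass picks
# a clipping range from observed activations; one scale then maps floats
# symmetrically onto the INT8 range.
import numpy as np

def calibrate_scale(activations, percentile=99.99):
    """Choose an INT8 scale from the observed activation distribution."""
    clip = np.percentile(np.abs(activations), percentile)
    return clip / 127.0

def quantize_int8(x, scale):
    """Symmetric quantization to signed 8-bit codes."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    """Map INT8 codes back to approximate float values."""
    return q.astype(np.float32) * scale

# Calibration pass over sample activations, then quantize fresh data.
rng = np.random.default_rng(42)
calib = rng.normal(0, 1, 10_000).astype(np.float32)
scale = calibrate_scale(calib)
x = rng.normal(0, 1, 1_000).astype(np.float32)
x_hat = dequantize(quantize_int8(x, scale), scale)
print(f"max abs error: {np.abs(x - x_hat).max():.4f}")
```

A hardware-aware calibrator refines this picture per engine: it accounts for accumulator widths and control registers so the simulated precision matches what the silicon actually computes.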

Skymizer
All Foundries
All Process Nodes
AI Processor, Coprocessor, Cryptography Cores, DDR, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

RV32EC_P2 Processor Core

The RV32EC_P2 Processor Core is a compact, high-efficiency RISC-V processor designed for low-power, small-scale embedded applications. Featuring a 2-stage pipeline architecture, it efficiently executes trusted firmware. It supports the RISC-V RV32E base instruction set, complemented by compression and optional integer multiplication instructions, greatly optimizing code size and runtime efficiency. This processor accommodates both ASIC and FPGA workflows, offering tightly-coupled memory interfaces for robust design flexibility. With a simple machine-mode architecture, the RV32EC_P2 ensures swift data access. It boasts extended compatibility with AHB-Lite and APB interfaces, allowing seamless interaction with memory and I/O peripherals. Designed for enhanced power management, it features an interrupt system and clock-gating abilities, effectively minimizing idle power consumption. Developers can benefit from its comprehensive toolchain support, ensuring smooth firmware and virtual prototype development through platforms such as the ASTC VLAB. Further distinguished by its vectored interrupt system and support for application-specific instruction sets, the RV32EC_P2 is adaptable to various embedded applications. Enhancements include wait-for-interrupt commands for reduced power usage during inactivity and multiple timer interfaces. This versatility, along with integrated GNU and Eclipse tools, makes the RV32EC_P2 a prime choice for efficient, low-power technology integrations.

IQonIC Works
Audio Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

iCEVision

iCEVision is an evaluation platform for the iCE40 UltraPlus FPGA featuring rapid prototyping capabilities for connectivity functions. It allows designers to test key connectivity features, facilitating quick solution implementation and confirmation of design integrity. iCEVision supports common camera interfaces such as ArduCam CSI and PMOD, aiding in seamless integration into existing workflows. Compatible with tools like Lattice Diamond Programmer and iCEcube2, which are available for free download, iCEVision supports customization by allowing easy reprogramming of onboard SPI Flash. This platform is equipped with practical user interfaces to ensure simple connectivity and programming. Designed with a streamlined user experience in mind, iCEVision includes preloaded RGB demo applications and a bootloader for straightforward USB programming. This makes it an excellent choice for developers aiming to maximize productivity and ensure robust device connections.

DPControl
14 Categories
View Details

Cobalt GNSS Receiver

Cobalt is a cutting-edge ultra-low power GNSS receiver that broadens the capability of IoT System-on-Chip (SoC) by integrating GNSS functionality efficiently. Cobalt is designed to address the needs of mass-market applications that are primarily constrained by size and cost, targeting sectors such as logistics, agriculture, and mobility services. This receiver incorporates sophisticated embedded processing with cloud assistance to enhance power efficiency while maintaining sensitivity. It supports multiple constellations like Galileo, GPS, and Beidou, ensuring reliable and precise positioning. Developed in collaboration with CEVA DSP and backed by the European Space Agency, Cobalt is optimized for use with modern SoCs, facilitating the integration of GNSS without excessive resource consumption. Cobalt offers shared resources between GNSS and modem functions, reducing the footprint and cost of implementation. Its innovative software-defined receiver can operate as both a standalone and cloud-assisted solution, making it versatile and adaptable to a variety of market needs. This positions Cobalt as an ideal solution for IoT devices requiring dependable localization features without compromising on battery life.

Ubiscale
All Foundries
All Process Nodes
Coprocessor, CPRI, Ethernet, GPS, JESD 204A / JESD 204B, PLL, Wireless USB
View Details

PACE - Photonic Arithmetic Computing Engine

PACE, the Photonic Arithmetic Computing Engine, is a revolutionary hardware platform designed to harness photonics for computation. Drawing on the low latency and energy efficiency of optical technologies, PACE aims to deliver significant gains in computing speed. The system aligns with cutting-edge standards, promoting increased efficiency and performance across computing tasks. Capitalizing on the photonic paradigm, PACE offers a glimpse of the future of high-speed computation, emphasizing reduced power consumption and increased operational throughput.

Lightelligence
2D / 3D, AI Processor, Building Blocks, Coprocessor, JESD 204A / JESD 204B, Processor Core Independent, Vision Processor
View Details

RV32IC_P5 Processor Core

The RV32IC_P5 Processor Core from IQonIC Works is designed for medium-scale embedded applications requiring higher performance and enhanced processing capabilities. This 5-stage-pipeline processor supports the RISC-V RV32I base instruction set along with standard extensions such as 'A' for atomic operations, boosting system efficiency. Its ability to run a mix of trusted firmware and user applications allows for diverse operational integration. Supporting both ASIC and FPGA design flows, the RV32IC_P5 incorporates advanced memory interfaces, offering optional instruction and write-back data caches for improved performance and adaptability. It features a tightly coupled scratchpad memory architecture, ensuring efficient data handling and reduced latency. The architecture also incorporates AHB-Lite interfaces and optional physical memory protection for robust security management. With a vectored interrupt system and support for platform-specific instruction extensions, the RV32IC_P5 provides tailored options for DSP operations. Toolchain support includes GNU and Eclipse environments, giving developers a streamlined path from concept to execution. With its focus on power efficiency and processing capability, the RV32IC_P5 is well suited to complex applications demanding reliable processing power and flexibility.

IQonIC Works
Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Specialty Microcontrollers

Advanced Silicon's Specialty Microcontrollers are architecturally innovative RISC-V-based solutions tailored for high-performance applications like image processing. Encompassing cutting-edge co-processing units, these microcontrollers redefine potential in embedded system functionalities. The controllers integrate both performance and practicality, enabling sophisticated algorithms crucial for modern image processing tasks. They come embedded with touch firmware featuring machine learning algorithms that handle user input recognition, enhancing interactive user interfaces. This positions Advanced Silicon at the forefront of the industry, propelling user-friendly and intuitive technology solutions. Their SoC Touch Controllers are structured on a 32-bit RISC architecture, featuring integrated capacitive AFE interfaces, power management units, and robust communication interfaces, suited for smaller touch screens ranging from 10 to 27 inches. For larger screens of up to 84 inches, their multi-chip solutions combine advanced 32-bit DSP architecture with powerful capacitive sensing AFEs, offering comprehensive performance for interactive applications.

Advanced Silicon
Coprocessor, CPU, DSP Core, IoT Processor, Microcontroller, Processor Cores
View Details

ADICSYS Soft eFPGA

ADICSYS, a subsidiary of EASii IC, offers customizable Field Programmable Gate Array (FPGA) technology. This venture bridges the gap between ASIC design and FPGA solutions, boasting over a decade of experience in eFPGA projects and cutting-edge semiconductor products. The soft eFPGA IP is designed for ASICs and SoCs, is technology-independent, and integrates seamlessly with standard RTL design processes. The main features of ADICSYS's offering include synthesizable IP that fits smoothly into standard design flows, allowing RTL-to-FPGA bitstream translation. These eFPGAs are known for their scalability and customizable nature, developed through a robust RTL-to-bitstream compilation flow. This design flexibility helps reduce the risk of errors, accelerates development, and enhances debugging capabilities in complex systems. The soft FPGA IP can be customized in terms of architecture parameters such as Look-Up Table (LUT) count and routing density, and it can be adapted for specific design constraints like power and area. By incorporating ADICSYS's solutions, users benefit from decreased time to market and increased reliability and adaptability in the field.

EASii IC
All Foundries
All Process Nodes
Coprocessor, CPU, Processor Cores
View Details

Neural Network Accelerator

Gyrus AI's Neural Network Accelerator is an advanced solution designed for edge computing applications. The IP combines high-performance processing with low power consumption, utilizing native graph processing to optimize neural network tasks. This technology allows for efficient handling of the computational workflows associated with deep learning models, reducing clock-cycle counts severalfold. Operating at 30 TOPS/W, the Neural Network Accelerator achieves remarkable performance metrics. It utilizes significantly less memory, providing a compact solution that fits into small die areas with high utilization rates. This makes it ideal for implementing demanding AI tasks in various edge devices, ensuring that performance is not sacrificed for power efficiency. Gyrus AI's accompanying software tools ensure seamless integration with existing AI models, aiding in the deployment of neural networks on this hardware. The accelerator's architecture supports various model structures with high adaptability, proving to be a versatile engine for AI tasks across multiple domains. This enhanced processing capability positions the IP as a pioneering solution for high-speed, efficient AI operations on edge devices.

Gyrus AI
AI Processor, Coprocessor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor
View Details

Origami Programmer

The Origami Programmer is a sophisticated eFPGA configuration tool, simplifying the design of custom architectures with an intuitive GUI. It automates synthesis, placement, and routing, and generates the bitstream optimized for Menta's eFPGA architecture. This tool eliminates the need for additional FPGA configuration utilities, facilitating an effortless design flow. Customers can seamlessly integrate the programmer into SoC designs, benefiting from end-to-end configuration capabilities. With comprehensive support for RTL written in VHDL, Verilog, and SystemVerilog, Origami Programmer ensures seamless adaptation across various design constraints and advanced static timing analysis. It showcases a streamlined interface and enhanced scripting capabilities in TCL, catering to both standalone operation and API integration. By supporting Linux platforms, it covers a broad range of applications from basic design trials to more complex field applications, promising rapid performance assessments and precise design configurations.

Menta
Coprocessor, IoT Processor, Multiprocessor / DSP, Processor Core Independent
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator (ICA) is a pioneering hardware platform that delivers a significant leap in high-performance computing. Utilizing a unique architecture that melds intelligent software-defined hardware with real-time configuration optimization, the Maverick-2 adapts dynamically to HPC and AI workloads. This means it can provide performance and efficiency tailored to specific demands while remaining ready for future workload requirements. Maverick-2 eases application development by eliminating the need for domain-specific languages and time-consuming porting processes. With native support for popular programming languages and models such as C, C++, FORTRAN, OpenMP, and Kokkos, developers can accelerate the innovation process effectively. The anticipated integration with CUDA, HIP/ROCm, and major AI frameworks adds further versatility to meet broad industrial requirements. The Maverick-2 ICA targets a variety of applications across diverse fields including energy, fluid dynamics, seismic analysis, fintech, and life sciences. Its design supports massively parallel computing applications, making it an ideal choice for sectors that rely on complex computational tasks. Industries benefit from its robust adaptability, efficiency, and ability to deliver optimal performance consistently across numerous applications, paving the way for innovative developments.

Next Silicon Ltd.
AI Processor, Coprocessor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor
View Details

Adaptive Digital Signal Processor

Menta's Adaptive Digital Signal Processor combines adaptability with efficient processing capabilities designed for stringent signal processing tasks. Incorporating reconfigurable DSP blocks within the FPGA fabric yields tailored solutions aligned with the demands of intricate applications in the telecommunications and automotive sectors. These processors bolster performance, offering real-time computation and elevated processing speed for operations that require precise, adaptable, and dynamic calculations. With a strong emphasis on flexibility, the processor scales efficiently to address varying computational loads, which is crucial across market segments. Menta ensures that these processors meet rigorous industry standards, providing robust solutions that integrate simply into diverse system architectures.

Menta
Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Standard cell, Vision Processor
View Details

RecAccel N3000 PCIe for AI Recommendation System

The RecAccel N3000 PCIe Card is engineered to revolutionize recommendation systems through extraordinary efficiency and precision. It harnesses the capabilities of AI-optimized hardware to streamline data processing, empowering enterprises to deploy robust recommendation services with unparalleled accuracy. This card is specifically tailored for data-rich AI environments, facilitating the high-speed computation necessary for real-time recommendations. Built to excel in demanding workloads, the RecAccel N3000 supports INT8 precision, augmenting DLRM (Deep Learning Recommendation Model) performance with superb power efficiency. Its architecture can handle millions of inferences per joule, making it a strong choice for businesses focused on cutting-edge AI applications. With an emphasis on compatibility, the RecAccel N3000 PCIe Card integrates smoothly into diverse computing setups, minimizing installation complexity while maximizing output. It significantly elevates the capability of AI systems to deliver prompt and highly accurate recommendations, reinforcing business intelligence and customer engagement across platforms.

Neuchips Corporation
TSMC
16nm, 28nm
AI Processor, AMBA AHB / APB/ AXI, Coprocessor, Embedded Security Modules, Ethernet, Vision Processor
View Details

ASIPs

Application Specific Instruction Set Processors (ASIPs) from Wasiela are designed for customizability and efficiency across a range of applications. These processors feature specialized instruction sets to maximize performance in targeted tasks, acting as hardware accelerators that optimize power usage and performance in specific operating contexts.

Wasiela
Audio Processor, Coprocessor, CPU, IoT Processor, Multiprocessor / DSP, Processor Cores
View Details

RecAccel AI Platform for High-Accuracy Computing

The RecAccel AI Platform is designed to deliver high precision in complex AI computations, catering specifically to domains that demand both accuracy and speed. It integrates AI-specific hardware with optimized frameworks, substantially boosting the performance of the machine learning tasks that form the backbone of modern AI applications. The platform supports full-stack AI deployment, marrying hardware efficiency with adaptable software environments. It is engineered to process vast datasets rapidly, contributing to quick and reliable insights that drive informed decision-making in enterprise settings. By enhancing the accuracy and speed of AI inference, it allows businesses to maintain a competitive edge in data analytics and artificial intelligence. Its architecture is compatible with standard AI frameworks and models, ensuring that enterprises can effortlessly adopt and deploy the technology. The RecAccel AI Platform exemplifies cutting-edge innovation by streamlining AI operations while minimizing energy consumption, setting a benchmark for high-precision computing in the AI domain.

Neuchips Corporation
TSMC
16nm, 28nm
AI Processor, AMBA AHB / APB/ AXI, Coprocessor, Cryptography Cores, Embedded Security Modules, Ethernet, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

Silhouse - Rapid Machine Vision Solutions

Silhouse offers an advanced acceleration platform engineered to enhance machine vision applications. By leveraging a collection of image processing IP cores optimized with FPGA technology, Silhouse provides high-performance solutions that significantly boost image processing speeds and efficiency. This technology suits various industries, including automotive, medical imaging, and robotics. Clients can modify solutions with pre-designed component blocks or additional custom developments to cater to specific needs. Silhouse is aligned with rapid development cycles, ensuring swift integration into existing infrastructures, thus reducing time-to-market for advanced vision-based applications.

Institute of Electronics and Computer Science
Coprocessor, Graphics & Video Modules
View Details

PanAccelerator

PanAccelerator from Panmnesia is designed as a versatile AI accelerator, equipped with cutting-edge connectivity options courtesy of CXL technology. It transforms traditional AI acceleration frameworks by disaggregating computing and memory resources to facilitate efficient large-scale AI service deployments. This accelerator exemplifies Panmnesia's dedication to crafting high-performance computing solutions that effectively balance cost and power efficiency without sacrificing processing speed. The accelerator embodies a harmonious integration of CXL's cache coherent interconnect, enhancing shared memory scalability and significantly improving computational throughput. This empowers AI models to execute with superior speed, fostering an environment that is conducive to efficient massive data processing. By leveraging CXL, the PanAccelerator addresses memory limitations often encountered with traditional AI models, enabling more flexible resource management adaptable to fluctuating processing demands. Through its robust design optimized for parallel vector and tensor processing, the accelerator not only increases process efficiencies but also reduces the infrastructure costs associated with large-scale AI deployments. PanAccelerator encapsulates Panmnesia's vision of advancing AI capabilities by making it possible to easily manage and deploy different computational processes, offering unparalleled advantages in AI strategy execution for enterprises pursuing transformative technological enhancements.

Panmnesia
All Foundries
All Process Nodes
802.11, AI Processor, Audio Processor, Building Blocks, Coprocessor, CPU, Processor Core Dependent, Vision Processor
View Details