The 'Processor' category in the Silicon Hub Semiconductor IP catalog covers a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing the complex computations essential to a multitude of applications. The category spans a wide variety of specific types, including CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.
In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.
Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.
Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.
Continuing the evolution of AI at the edge, BrainChip's 2nd Generation Akida provides enhanced capabilities for modern applications. This generation implements 8-bit quantization for increased precision and introduces support for Vision Transformers and temporal event-based neural networks. The platform handles advanced cognitive tasks with heightened accuracy and significantly reduced energy consumption. Designed for high-performance AI tasks, it supports complex network models and uses skip connections to improve speed and efficiency.
Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.
The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured using the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and an array of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-speed GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. The 2D network-on-chip connects these interfaces to upward of 80 access points in the FPGA fabric, delivering ASIC-like performance while retaining complete programmability. Users can get started with the VectorPath accelerator card, which houses a Speedster7t FPGA. The family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.
The NMP-750 serves as a high-performance accelerator IP for edge computing solutions across various sectors, including automotive, smart cities, and telecommunications. It supports sophisticated applications such as mobility control, factory automation, and energy management, making it a versatile choice for complex computational tasks. With a high throughput of up to 16 TOPS and a memory capacity scaling up to 16 MB, this IP ensures substantial computing power for edge devices. It is configured with a RISC-V or Arm Cortex-R/A 32-bit CPU and incorporates multiple AXI4 interfaces, optimizing data exchanges between Host, CPU, and peripherals. Optimized for edge environments, the NMP-750 enhances spectral efficiency and supports multi-camera stream processing, paving the way for innovation in smart infrastructure management. Its scalable architecture and energy-efficient design make it an ideal component for next-generation smart technologies.
KPIT offers a comprehensive solution for Autonomous Driving and Advanced Driver Assistance Systems. This suite facilitates the widespread adoption of Level 3 and above autonomy in vehicles, providing high safety standards through robust testing and validation frameworks. The integration of AI-driven decision-making extends beyond perception to enhance the intelligence of autonomous systems. With a commitment to addressing existing challenges such as localization issues, AI limitations, and validation fragmentation, KPIT empowers automakers to produce vehicles that are both highly autonomous and reliable.
The KL730 AI SoC is equipped with a state-of-the-art third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computational power. This innovative architecture enhances computational efficiency, particularly with the latest CNN networks and transformer applications, while reducing DDR bandwidth demands. The KL730 excels in video processing, offering support for 4K 60FPS output and boasts capabilities like noise reduction, wide dynamic range, and low-light imaging. It is ideal for applications such as intelligent security, autonomous driving, and video conferencing.
The NMP-350 is designed to offer exceptional efficiency in AI processing, specifically targeting endpoint accelerations. This IP is well-suited for markets that require minimal power consumption and cost-effectiveness, such as automotive, AIoT/Sensors, Industry 4.0, smart appliances, and wearables. It enables a wide variety of applications, including driver authentication, digital mirrors, machine automation, and health monitoring. Technically, it delivers up to 1 TOPS and supports up to 1 MB local memory. The architecture is based on the RISC-V or Arm Cortex-M 32-bit CPU, ensuring effective processing capabilities for diverse tasks. Communication is managed via three AXI4 interfaces, each 128 bits wide, to handle Host, CPU, and Data interactions efficiently. The NMP-350 provides a robust foundation for developing advanced AI applications at the edge. Designed for ultimate flexibility, it aids in predictive maintenance and personalization processes in smart environments. With its streamlined architecture, it provides unmatched performance for embedded solutions, enabling seamless integration into existing hardware ecosystems.
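As a rough illustration of how the interface description above could be captured in a configuration script, the sketch below models the NMP-350's three 128-bit AXI4 ports (Host, CPU, and Data). The class and field names are hypothetical and are not taken from any vendor configuration format.

```python
from dataclasses import dataclass

# Hypothetical description of the NMP-350 interface set as stated above:
# three AXI4 ports, each 128 bits wide, serving Host, CPU, and Data traffic.
# Names and structure are illustrative only, not a vendor configuration format.

@dataclass(frozen=True)
class Axi4Port:
    role: str        # "host", "cpu", or "data"
    data_width: int  # bus width in bits

NMP350_PORTS = (
    Axi4Port(role="host", data_width=128),
    Axi4Port(role="cpu", data_width=128),
    Axi4Port(role="data", data_width=128),
)

if __name__ == "__main__":
    for port in NMP350_PORTS:
        print(f"AXI4 {port.role} port: {port.data_width}-bit")
```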
KPIT's digital solutions harness cloud and edge analytics to modernize vehicle data management, optimizing efficiency and security in connected mobility. With a focus on overcoming data overload and ensuring compliance with regulatory standards, these solutions enable secure and scalable cloud environments for vehicle connectivity. The edge computing aspect enhances system responsiveness by processing data within vehicles, promoting innovation and dynamic feature development.
KPIT's propulsion technologies cover both traditional internal combustion engines and modern electric powertrains. By focusing on reducing the total cost of ownership for new energy vehicles, KPIT helps OEMs streamline development cycles and enhance vehicle quality. The company's platform supports agile software updates and sustains efforts on sustainable practices by increasing offerings in zero-emission vehicles (ZEVs) and exploring alternative fuels like hydrogen. With solutions spanning engine subsystems, transmission, and driveline optimization, KPIT addresses the intricate balance needed between legacy and emerging automotive platforms.
The Metis AIPU PCIe AI Accelerator Card provides an unparalleled performance boost for AI tasks by leveraging multiple Metis AIPUs within a single setup. This card is capable of delivering up to 856 TOPS, supporting complex AI workloads such as computer vision applications that require rapid and efficient data processing. Its design allows for handling both small-scale and extensive applications with ease, ensuring versatility across different scenarios. By utilizing a range of deep learning models, including YOLOv5 and ResNet-50, this AI accelerator card processes up to 12,800 FPS for ResNet-50 and an impressive 38,884 FPS for MobileNet V2-1.0. The card’s architecture enables high throughput, making it particularly suited for video analytics tasks where speed is crucial. The card also excels in scenarios that demand high energy efficiency, providing best-in-class performance at a significantly reduced operational cost. Coupled with the Voyager SDK, the Metis PCIe card integrates seamlessly into existing AI systems, enhancing development speed and deployment efficiency.
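To put the quoted throughput figures in perspective, the back-of-envelope calculation below converts them into an approximate number of live 30 FPS video streams. It deliberately ignores video decode, pre/post-processing, and host overhead, so real deployments will support fewer streams; treat the results as theoretical upper bounds, not vendor benchmarks.

```python
# Back-of-envelope stream capacity for the Metis PCIe card, using only the
# throughput figures quoted above. Real pipelines add decode and
# pre/post-processing costs, so these numbers are upper bounds.

QUOTED_FPS = {"ResNet-50": 12_800, "MobileNet V2-1.0": 38_884}
STREAM_RATE_FPS = 30  # a typical live camera stream

for model, fps in QUOTED_FPS.items():
    streams = fps / STREAM_RATE_FPS
    print(f"{model}: ~{streams:.0f} concurrent {STREAM_RATE_FPS} FPS streams (upper bound)")
```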
The ORC3990 is a sophisticated System on Chip (SoC) solution designed for low-power sensor-to-satellite communication within the LEO satellite spectrum. Utilizing Totum's DMSS technology, it achieves superior Doppler performance, facilitating robust connectivity for IoT devices. The integration of an RF transceiver, power amplifiers, ARM CPUs, and memory components makes it a highly versatile module. Leveraging advanced power management technology, this SoC supports a battery life exceeding ten years, even across the industrial temperature range of -40°C to +85°C. It is optimized for use with Totum's global LEO satellite network, ensuring substantial indoor signal coverage without the need for additional GNSS components. Efficiency is a key feature, with the chip operating in the 2.4 GHz ISM band and providing connectivity regardless of location. Compact in design, comparable in size to a business card, and designed for easy mounting, the ORC3990 offers sought-after versatility for IoT applications. Its favorable total cost of ownership compared to terrestrial IoT solutions makes it a valuable asset for any IoT deployment focused on sustainability and longevity.
KPIT's engineering and design solutions focus on accelerating vehicle development through new-age design and simulation techniques. This approach enables cost-efficient transformation and adherence to sustainability standards, offering integrated electrification solutions and cutting-edge design methodologies. KPIT's solutions in vehicle engineering support electric and hybrid vehicle innovation with advanced CAD tools, virtual prototyping, and AI augmentation.
Origin E1 neural engines are tuned for the networks typically employed in always-on applications, found in devices such as home appliances, smartphones, and edge nodes requiring around 1 TOPS of performance. This focused optimization makes the E1 LittleNPU processors particularly suitable for cost- and area-sensitive applications, making efficient use of energy and reducing processing latency to negligible levels. The design also incorporates a power-efficient architecture that maintains low power consumption while handling always-sensing data operations, enabling continuous sampling and analysis of visual information without compromising efficiency or user privacy. Additionally, the architecture is rooted in Expedera's packet-based design, which allows for parallel execution across layers, optimizing performance and resource utilization. Market-leading efficiency of up to 18 TOPS/W further underlines Origin E1's capacity to deliver outstanding AI performance with minimal resources. The processor supports standard and proprietary neural network operations, ensuring versatility in its applications. It is accompanied by a comprehensive software stack that includes compilers and quantizers to facilitate deployment in diverse use cases without requiring extensive redesigns. The E1 has already been deployed in over 10 million devices worldwide, across various consumer technology formats.
Spec-TRACER is an integrated requirements lifecycle management application, purpose-built for FPGA and ASIC design environments. It supports the comprehensive management of design specifications and facilitates traceability across the development process, from initial specification capture through verification. This tool is invaluable in projects requiring stringent accountability and regulatory compliance, as in the aerospace and automotive sectors. It enhances project consistency by ensuring that all design requirements are traceable, verifiable, and adhered to throughout the development phase. With features for detailed analysis, reporting, and change management, Spec-TRACER simplifies the complexity of managing design requirements, giving teams the coordination and transparency needed to verify that all specifications are met and documented appropriately.
Designed for high-performance environments such as data centers and automotive systems, the Origin E8 NPU cores push the limits of AI inference, achieving up to 128 TOPS on a single core. Its architecture supports concurrent running of multiple neural networks without context switching lag, making it a top choice for performance-intensive tasks like computer vision and large-scale model deployments. The E8's flexibility in deployment ensures that AI applications can be optimized post-silicon, bringing performance efficiencies previously unattainable in its category. The E8's architecture and sustained performance, alongside its ability to operate within strict power envelopes (18 TOPS/W), make it suitable for passive cooling environments, which is crucial for cutting-edge AI applications. It stands out by offering PetaOps performance scaling through its customizable design that avoids penalties typically faced by tiled architectures. The E8 maintains exemplary determinism and resource utilization, essential for running advanced neural models like LLMs and intricate ADAS tasks. Furthermore, this core integrates easily with existing development frameworks and supports a full TVM-based software stack, allowing for seamless deployment of trained models. The expansive support for both current and emerging AI workloads makes the Origin E8 a robust solution for the most demanding computational challenges in AI.
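The passive-cooling claim can be sanity-checked with simple arithmetic: dividing the quoted 128 TOPS peak by the quoted 18 TOPS/W efficiency yields an indicative power figure, as the short sketch below shows. It assumes both figures apply at the same operating point, which the description does not state, so the result is an estimate rather than a specification.

```python
# Indicative power envelope for the Origin E8, derived from the figures above.
# Assumes peak throughput and peak efficiency occur at the same operating
# point, which is not stated in the source; treat the result as an estimate.

peak_tops = 128.0              # quoted per-core peak throughput
efficiency_tops_per_w = 18.0   # quoted efficiency

estimated_power_w = peak_tops / efficiency_tops_per_w
print(f"Estimated power at peak: ~{estimated_power_w:.1f} W")  # ~7.1 W
```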
The Veyron V2 CPU extends Ventana Micro Systems' commitment to delivering top-tier performance capabilities within the RISC-V framework. Designed for data center-class workloads, the Veyron V2 enhances efficiencies across cloud and hyperscale ecosystems, making it a strategic choice for enterprise operations requiring unparalleled processing strength and adaptability. Building upon the foundation laid by its predecessor, the Veyron V2 improves both processing speed and efficiency, providing an edge in handling diverse applications in data-intensive contexts. Its enhanced architecture supports extensible instruction sets, bridging the computational needs between various enterprise, automotive, and artificial intelligence markets with precision and reliability. Integrated within the Veyron product line's broader ecosystem, the V2 CPU emphasizes Ventana’s dedication to fostering adaptable, forward-thinking computing environments. It assures businesses of scalable performance improvements, aligning seamlessly with existing systems to promote effortless adoption across complex and varied IT landscapes.
The D25F processor is built for high-frequency operation, offering a low gate count and high power efficiency. Known for its robust design, it suits applications where performance and energy consumption are critical considerations, serving industries that demand reliability and efficiency in their operations.
The Metis AIPU M.2 Accelerator Module is a powerful AI processing solution designed for edge devices. It offers a compact design tailored for applications requiring efficient AI computations with minimized power consumption. With a focus on video analytics and other high-demand tasks, this module transforms edge devices into AI-capable systems. Equipped with the Metis AIPU, the M.2 module can achieve up to 3,200 FPS for ResNet-50, providing remarkable performance metrics for its size. This makes it ideal for deployment in environments where space and power availability are limited but computational demands are high. It features an NGFF (Next Generation Form Factor) socket, ensuring it can be easily integrated into a variety of systems. The module leverages Axelera's Digital-In-Memory-Computing technology to enhance neural network inference speed while maintaining power efficiency. It's particularly well-suited for applications such as multi-channel video analytics, offering robust support for various machine learning frameworks, including PyTorch, ONNX, and TensorFlow.
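Because the module is described as supporting PyTorch, ONNX, and TensorFlow models, a common first step before handing a network to an accelerator toolchain is exporting it to ONNX. The sketch below shows that step for ResNet-50 using only standard PyTorch/torchvision APIs; it does not use the Voyager SDK, whose actual import and compile interface is not described in this catalog.

```python
# Export a ResNet-50 model to ONNX as a generic hand-off format for an
# accelerator toolchain. Uses only standard PyTorch/torchvision APIs; the
# Voyager SDK's own import/compile calls are not shown here.
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input expected by ResNet-50

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
print("Wrote resnet50.onnx")
```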
The eSi-3200 is a robust 32-bit processor focused on low-cost, low-power performance, ideal for embedded control systems. Its cacheless architecture ensures deterministic behavior suited to real-time applications. Leveraging a modified-Harvard memory architecture, it supports simultaneous instruction and data fetching. Its 5-stage pipeline enables high clock frequencies and a wide range of arithmetic operations. It offers a set of 104 basic instructions and supports IEEE-754 compliant floating-point operations alongside a diverse set of optional application-specific instructions tailored to optimize performance. Its capacity to perform complex operations makes it adaptable to various computational needs without excessive power use.
The RV12 RISC-V Processor is a versatile, highly configurable single-issue CPU designed for the embedded market, implementing the RV32I and RV64I RISC-V instruction sets. The processor uses a Harvard architecture, enabling simultaneous access to instruction and data memory to improve overall performance. The RV12 is part of Roa Logic's extensive CPU family, characterized by flexibility and efficient resource utilization for embedded systems.
BrainChip's Akida is an advanced neuromorphic processor that excels in efficiency and performance, processing data similar to the human brain by focusing on essential sensory inputs. This approach drastically reduces power consumption and latency compared to conventional methods by keeping AI local to the chip. Akida’s architecture, which scales to support up to 256 nodes, allows for high efficiency with a small footprint. Nodes in the Akida system integrate Neural Network Layer Engines configurable as either convolutional or fully connected, maximizing processing power by handling data sparsity through event-based operations.
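To illustrate the general principle of exploiting activation sparsity through event-driven computation, the sketch below compares a dense matrix-vector product with one that only visits non-zero inputs. It is a simplified conceptual example, not a description of Akida's internal dataflow or numerics.

```python
# Simplified illustration of event-based (sparsity-exploiting) computation:
# only non-zero inputs ("events") contribute work. This is a conceptual
# sketch, not a model of Akida's internal dataflow.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 256))
activations = rng.standard_normal(256)
activations[rng.random(256) < 0.9] = 0.0  # ~90% sparse input

# Dense evaluation: touches every column regardless of input value.
dense_out = weights @ activations

# Event-driven evaluation: accumulate only columns with non-zero input.
event_out = np.zeros(64)
for idx in np.nonzero(activations)[0]:
    event_out += weights[:, idx] * activations[idx]

assert np.allclose(dense_out, event_out)
print(f"Processed {np.count_nonzero(activations)} of {activations.size} inputs")
```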
The eSi-1600 is a compact 16-bit RISC CPU crafted for efficiency in both cost and power consumption. It offers performance characteristics akin to more costly 32-bit CPUs while maintaining a system cost competitive with 8-bit processors. It suits control applications implemented in mature mixed-signal processes that need less than 64 kB of memory. The architecture supports up to 16 general-purpose registers and offers 92 basic instructions and 10 addressing modes. The small gate count keeps silicon area to a minimum, yielding significant power savings, especially when operating at reduced frequencies, and thus extending battery life in embedded applications. Equipped with a 5-stage pipeline, it sustains high clock frequencies even in mature manufacturing processes, reducing power waste and supporting efficient performance management.
The NaviSoC by ChipCraft is a sophisticated GNSS receiver system integrated with an application processor on a single piece of silicon. Known for its compact design, the NaviSoC provides exceptional performance in terms of precision, reliability, and security, complemented with low power consumption. This well-rounded GNSS solution is customizable to meet diverse application needs, making it suitable for IoT, Lane-level Navigation, UAV, and more. Designed to handle a wide range of GNSS applications, the NaviSoC is well-suited for scenarios that demand high accuracy and efficiency. Its architecture supports applications such as asset tracking, smart agriculture, and time synchronization while maintaining stringent security protocols. The flexibility in its design allows for adaptation and scalability depending on specific user requirements. The NaviSoC continuously aims to advance GNSS technology by delivering a holistic integration of processing capabilities. It stands as a testament to ChipCraft's innovative strides in creating dynamic, high-performance semiconductor solutions that excel in global positioning and navigation. The module's efficiency and adaptability offer a robust foundation for future GNSS system developments.
SCR9 is tailored for entry-level server-class applications and high-performance computing. This 64-bit RISC-V core supports a range of extensions, including vector operations and scalar cryptography. Utilizing a dual-issue 12-stage pipeline, SCR9 excels in environments requiring Linux-based operations, enabling advanced data processing capabilities like those needed in AI and personal computing devices.
The iniCPU is a compact but highly flexible processing core developed by Inicore for a broad range of applications. Based on established RISC architectures, it ensures efficient instruction execution and multitasking in embedded systems. With its capability to integrate seamlessly with other system components, iniCPU is highly adaptable for both FPGA and ASIC technologies. Its lightweight design optimizes resource use without compromising performance quality, which is crucial for applications in consumer electronics, industrial control, and robotics. IniCPU's emphasis on low power consumption makes it particularly advantageous for battery-dependent devices and applications aiming for energy efficiency. The core’s comprehensive support for standard interfaces and peripheral modules further enhances its integration ease, allowing developers to leverage its full capacities when designing complex systems.
The Origin E2 family of NPU cores is tailored for power-sensitive devices like smartphones and edge nodes that seek to balance power, performance, and area efficiency. These cores are engineered to handle video resolutions up to 4K, as well as audio and text-based neural networks. Utilizing Expedera’s packet-based architecture, the Origin E2 ensures efficient parallel processing, reducing the need for device-specific optimizations, thus maintaining high model accuracy and adaptability. The E2 is flexible and can be customized to fit specific use cases, aiding in mitigating dark silicon and enhancing power efficiency. Its performance capacity ranges from 1 to 20 TOPS and supports an extensive array of neural network types including CNNs, RNNs, DNNs, and LSTMs. With impressive power efficiency rated at up to 18 TOPS/W, this NPU core keeps power consumption low while delivering high performance that suits a variety of applications. As part of a full TVM-based software stack, it provides developers with tools to efficiently implement their neural networks across different hardware configurations, supporting frameworks such as TensorFlow and ONNX. Successfully applied in smartphones and other consumer electronics, the E2 has proved its capabilities in real-world scenarios, significantly enhancing the functionality and feature set of devices.
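Since the Origin NPUs are described as shipping with a TVM-based software stack that accepts frameworks such as TensorFlow and ONNX, a generic TVM Relay front-end flow is sketched below for orientation. The input shape is a placeholder and the "llvm" target is a stand-in only; Expedera's actual TVM target and runtime integration are not documented in this catalog.

```python
# Generic TVM Relay flow for compiling an ONNX model, of the kind a
# TVM-based NPU stack would build on. The "llvm" target is a stand-in;
# Expedera's actual backend target and runtime are not documented here.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")  # hypothetical exported network
shape_dict = {"input": (1, 3, 224, 224)}  # placeholder input shape

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("model_compiled.so")
print("Compiled model for the placeholder 'llvm' target")
```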
Cortus's High Performance RISC-V Processor represents the pinnacle of processing capability, designed for demanding applications that require high-speed computing and efficient task handling. It features the world’s fastest RISC-V 64-bit instruction set architecture, implemented in an Out-of-Order (OoO) execution core, supporting both single-core and multi-core configurations for unparalleled processing throughput. This processor is particularly suited for high-end computing tasks in environments ranging from desktop computing to artificial intelligence workloads. With integrated features such as a multi-socket cache coherent system and an on-chip vector plus AI accelerator, it delivers exceptional computation power, essential for tasks such as bioinformatics and complex machine learning models. Moreover, the processor includes coherent off-chip accelerators, such as CNN accelerators, enhancing its utility in AI-driven applications. The design flexibility extends its application to consumer electronics like laptops and supercomputers, positioning the High Performance RISC-V Processor as an integral part of next-gen technology solutions across multiple domains.
The AndeShape platform supports AndesCore processor system development by providing a versatile infrastructure composed of Platform IP, hardware development platforms, and an ICE Debugger. This allows for efficient integration and rapid prototyping, offering flexibility in design and development across a comprehensive set of hardware options. It aims to reduce design risk and accelerate time-to-market.
The AX45MP processor is a multi-core, 64-bit CPU core designed for high-performance computing environments. It supports vector processing and includes features like a level-2 cache controller to enhance data handling and processing speeds. This makes it ideal for rigorous computational tasks including scientific computing and large-scale data processing environments.
The eSi-3250 is a high-performance 32-bit RISC core designed to integrate efficiently into ASIC and FPGA designs where slower or off-chip memories are employed. It addresses high-performance demands through separate instruction and data caches, with configurable size and associativity to balance performance and power efficiency. An optional memory management unit (MMU) supports physical or virtual memory deployments while ensuring secure and efficient memory management. The processor provides exceptional code density alongside a diverse set of arithmetic and application-specific instructions. Its capacity to handle multiple interrupts and its robustness at high clock frequencies make it suitable for demanding embedded applications.
The AI Camera Module from Altek is an innovative integration of image sensor technology and intelligent processing, designed to cater to the growing needs of AI in imaging. It combines rich optical design capabilities with software-hardware integration expertise, delivering multiple AI camera models that help clients meet differentiated AI + IoT requirements. This flexible camera module excels in edge computing by supporting high-resolution requirements such as 2K and 4K, making it an indispensable tool in environments demanding detailed image analysis. The module adapts readily to functions such as facial detection and edge computation, broadening its applicability across industries. Altek's collaboration with major global brands fortifies the AI Camera Module's position in the market, ensuring it meets diverse client specifications. Whether used in security, industrial, or home automation applications, this module integrates effectively into various systems to deliver enhanced visual processing capabilities.
The KL630 AI SoC embodies next-generation AI chip technology with a pioneering NPU architecture. It uniquely supports Int4 precision and transformer networks, offering superb computational efficiency combined with low power consumption. Built around an ARM Cortex A5 CPU, it supports a range of AI frameworks and handles scenarios from smart security to automotive applications, providing robust capability in both high- and low-light conditions.
Aimed at performance-driven environments, the NMP-550 is an efficient accelerator IP optimized for diverse markets, including automotive, mobile, AR/VR, drones, and medical devices. This IP is crucial for applications such as driver monitoring, fleet management, image and video analytics, and compliance in security systems. The NMP-550 boasts a processing power of up to 6 TOPS and integrates up to 6 MB of local memory, empowering it to handle complex tasks with ease. It runs on a RISC-V or Arm Cortex-M/A 32-bit CPU and supports multiple high-speed interfaces, specifically three AXI4, 128-bit connections that manage Host, CPU, and Data traffic. This IP is engineered for environments demanding high performance with efficient power use, addressing modern technological challenges in real-time analytics and surveillance. The NMP-550 is adept at improving system intelligence, allowing for enhanced decision-making processes in connected devices.
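Taking the three NMP family members described in this section together (NMP-350 at up to 1 TOPS and 1 MB, NMP-550 at up to 6 TOPS and 6 MB, NMP-750 at up to 16 TOPS and 16 MB), a simple sizing helper is sketched below. It is purely illustrative, using only the figures quoted above; a real selection would also weigh power, cost, and interface requirements.

```python
# Illustrative sizing helper over the NMP family figures quoted in this
# catalog (peak TOPS and local memory). Not a vendor tool; real selection
# would also consider power, cost, and interface requirements.

NMP_FAMILY = {
    "NMP-350": {"peak_tops": 1, "local_mem_mb": 1},
    "NMP-550": {"peak_tops": 6, "local_mem_mb": 6},
    "NMP-750": {"peak_tops": 16, "local_mem_mb": 16},
}

def pick_nmp(required_tops: float) -> str:
    """Return the smallest family member whose quoted peak meets the need."""
    for name, spec in sorted(NMP_FAMILY.items(), key=lambda kv: kv[1]["peak_tops"]):
        if spec["peak_tops"] >= required_tops:
            return name
    raise ValueError("Requirement exceeds the quoted range of the NMP family")

print(pick_nmp(4))   # -> NMP-550
print(pick_nmp(12))  # -> NMP-750
```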
TimbreAI T3 is engineered to serve as an ultra-low-power AI inference engine optimized for audio processing tasks, such as noise reduction in devices like wireless headsets. The core can execute 3.2 billion operations per second while consuming an exceptionally low power of 300 µW, making it an ideal choice for portable devices where battery efficiency is paramount. This core utilizes Expedera’s packet-based architecture to achieve significant power efficiency and performance within the stringent power and silicon area constraints typical of consumer audio devices. The T3's design precludes the need for external memory, further reducing system power and chip footprint while allowing for quick deployments across various platforms. Pre-configured to support commonly used audio neural networks, TimbreAI T3 ensures seamless integration into existing product architectures without necessitating hardware alterations or compromising model accuracy. Its user-friendly software stack further simplifies the deployment process, providing essential tools needed for successful AI integration in mass-market audio devices.
The A25 processor series, part of the AndesCore CPU portfolio, features a 32-bit high-performance core designed to handle diverse applications efficiently. It offers capabilities such as data prefetch, exceptional power efficiency, and flexible application support, making it suitable for varied market needs across numerous platforms.
The ABX Platform by Racyics utilizes Adaptive Body Biasing (ABB) technology to drive performance in ultra-low voltage scenarios. The platform is tailored for applications requiring ultra-low power as well as high performance. The ABB generator, together with the standard cells and SRAM IP, forms the core of the ABX Platform, providing efficient compensation for process variations, supply voltage fluctuations, and temperature changes.

For automotive applications, the ABX Platform delivers notable improvements in leakage power, achieving up to 76% reduction for automotive-grade applications with temperatures reaching 150°C. The platform's reverse body biasing (RBB) feature substantially enhances leakage control, making it ideal for automotive uses. Beyond automotive, the platform's forward body biasing (FBB) functionality significantly boosts performance, offering up to 10.3 times the output at 0.5V operation compared to non-biased implementations.

Extensively tested and silicon-proven, the ABX Platform ensures reliability and power efficiency with easy integration into standard design flows. The solution also provides tight cornering and ABB-aware implementations for improved Power-Performance-Area (PPA) metrics. As a turnkey solution, it is designed for seamless integration into existing systems and comes with a free evaluation kit so potential customers can explore its capabilities before committing.
The Automotive AI Inference SoC by Cortus is a cutting-edge chip designed to revolutionize image processing and artificial intelligence applications in advanced driver-assistance systems (ADAS). Leveraging RISC-V expertise, this SoC is engineered for low power and high performance, particularly suited to the rigorous demands of autonomous driving and smart city infrastructures. Built to support Level 2 to Level 4 autonomous driving standards, this AI Inference SoC features powerful processing capabilities, enabling complex image processing algorithms akin to those used in advanced visual recognition tasks. Designed for mid to high-end automotive markets, it offers adaptability and precision, key to enhancing the safety and efficiency of driver support systems. The chip's architecture allows it to handle a tremendous amount of data throughput, crucial for real-time decision-making required in dynamic automotive environments. With its advanced processing efficiency and low power consumption, the Automotive AI Inference SoC stands as a pivotal component in the evolution of intelligent transportation systems.
The MIPI V-NLM-01 is specialized for efficient image noise reduction using the non-local means (NLM) algorithm. This hard core supports parameterized search-window sizes and a customizable number of bits per pixel to markedly enhance visual output quality. Designed to drive HDMI outputs at resolutions up to 2048×1080 at frame rates from 30 to 60 fps, it delivers flexibility for numerous imaging applications. Its efficient implementation suits tasks demanding high-speed processing and precise noise reduction in video outputs. The V-NLM-01's patch-based averaging approach to noise reduction preserves image clarity and fidelity, making it well suited to high-definition video processing environments. Its adaptability to variable processing requirements makes it a robust solution for current and future video standards.
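For readers unfamiliar with non-local means, the software reference below uses scikit-image's denoise_nl_means to show the algorithm the hard core accelerates. The patch size and search distance loosely mirror the core's parameterized search-window option, but the values and the synthetic image are arbitrary; this is a behavioral illustration, not a model of the IP.

```python
# Software reference for the non-local means (NLM) algorithm that the
# V-NLM-01 core accelerates in hardware. Parameter values are arbitrary and
# stand in for the core's configurable search window; not a hardware model.
import numpy as np
from skimage.restoration import denoise_nl_means

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))   # synthetic gradient image
noisy = clean + rng.normal(scale=0.08, size=clean.shape)

denoised = denoise_nl_means(
    noisy,
    patch_size=7,        # size of the patches being compared
    patch_distance=11,   # half-width of the search window
    h=0.08,              # filtering strength, tied to the noise level
    fast_mode=True,
)
print("Residual noise std:", float(np.std(denoised - clean)))
```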
The RISC-V Hardware-Assisted Verification platform by Bluespec is engineered to offer an efficient and comprehensive approach to verifying RISC-V cores. It accelerates the verification process, allowing developers to confirm the functionality of their designs at both the core and system levels. The platform supports testing in diverse environments, including RTOS and Linux, which makes it versatile for a broad spectrum of applications. A distinguishing feature of this platform is its ability to verify standard ISA extensions as well as custom ISA extensions and accelerators. This capability is crucial for projects that require additional customization beyond the standard RISC-V instruction sets. Furthermore, by facilitating anytime, anywhere access through cloud-based solutions like AWS, it enhances the scalability and accessibility of verification processes. The platform is a valuable tool for developers who work on cutting-edge RISC-V applications, providing them with the confidence to validate their designs rigorously and efficiently. This verification tool is essential for developers aiming for high assurance in the correctness and performance of their systems.
The AndesCore range includes high-performance 32-bit and 64-bit CPU core families tailored for emerging market segments. These processors, adhering to the RISC-V technology and AndeStar V5 ISA, span across several series such as the Compact, 25-Series, 27-Series, 40-Series, and 60-Series. Each series is designed for specific applications, offering features like high per-MHz performance, vector processing units (VPUs), branch prediction, and memory management enhancements like MemBoost, which optimizes memory bandwidth and latency.
Cortus CIoT25 is an inventive solution aimed at enhancing IoT connectivity with its ultra-energy-efficient RISC-V architecture. Supporting Sub-1 GHz unlicensed ISM bands, the CIoT25 makes IoT devices smarter and more efficient, reducing both power consumption and operational costs significantly. The design is specifically crafted for smart home devices and low-power sensor networks, offering unparalleled integration in the IoT domain. The CIoT25's unique architecture ensures high adaptability to varying IoT environments, enabling tailored performance delivery for distinct applications. Its comprehensive support for diverse communication protocols makes it an ideal candidate for multi-platform IoT setups, leading to widespread adoption among IoT service providers seeking reliable communication hardware. With the increasing demands of connected environments, the CIoT25 meets the intricate requirements of modern applications by offering seamless functionality over extended periods, largely due to its low operational energy demand. This microcontroller is set to empower a new wave of IoT devices with its intelligent resource management and superior data handling capabilities.
eSi-Comms represents EnSilica's suite of communication IP blocks, designed to enhance modern communication systems through flexible, parameterized IP. These IPs are optimized for a range of air interface standards, including 4G, 5G, Wi-Fi, and DVB, providing a robust framework for both custom and standardized wireless designs.

The flexibility of eSi-Comms IP allows it to be configured for various interfacing standards, supporting high-level synchronization, equalization, and modulation techniques. The suite includes advanced DSP algorithms and control loops that ensure reliable communication links, vital for applications like wireless sensors and cellular networks.

EnSilica also supports software-defined radio (SDR) applications by offering hardware accelerators compatible with processor cores like ARM, enhancing processing power while maintaining flexibility. This adaptability makes eSi-Comms IP a valuable asset in developing efficient, high-performance communication solutions that can quickly adapt to changing technological demands.
Origin E6 NPU cores are cutting-edge solutions designed to handle the complex demands of modern AI models, specializing in generative and traditional networks such as RNN, CNN, and LSTM. Ranging from 16 to 32 TOPS, these cores offer an optimal balance of performance, power efficiency, and feature set, making them particularly suitable for premium edge inference applications. Utilizing Expedera’s innovative packet-based architecture, the Origin E6 allows for streamlined multi-layer parallel processing, ensuring sustained performance and reduced hardware load. This helps developers maintain network adaptability without incurring latency penalties or the need for hardware-specific optimizations. Additionally, the Origin E6 provides a fully scalable solution perfect for demanding environments like next-generation smartphones, automotive systems, and consumer electronics. Thanks to a comprehensive software suite based around TVM, the E6 supports a broad span of AI models, including transformers and large language models, offering unparalleled scalability and efficiency. Whether for use in AR/VR platforms or advanced driver assistance systems, the E6 NPU cores provide robust solutions for high-performance computing needs, facilitating numerous real-world applications.
The PolarFire FPGA Family is designed to deliver cost-efficient and ultra-low power solutions across a spectrum of mid-range applications. It is ideal for a variety of markets that include industrial automation, communications, and automotive sectors. These FPGAs are equipped with transceivers that range from 250 Mbps to 12.7 Gbps, which enables flexibility in handling diverse data throughput requirements efficiently. With capacities ranging from 100K to 500K Logic Elements (LEs) and up to 33 Mbits of RAM, the PolarFire FPGAs provide the perfect balance of power efficiency and performance. These characteristics make them suitable for use in applications that demand strong computational power and data processing while maintaining energy consumption at minimal levels. Additionally, the PolarFire FPGA Family is known for integrating best-in-class security features, offering exceptional reliability which is crucial for critical applications. The architecture is built to facilitate easy incorporation into various infrastructure setups, enhancing scalability and adaptability for future technological advancements. This flexibility ensures that the PolarFire FPGAs remain at the forefront of the semiconductor industry, providing solutions that meet the evolving needs of customers worldwide.
The RISC-V Core-hub Generators by InCore Semiconductors provide an advanced level of customization for developing processor cores. These generators are tailored to support configuration at both the Instruction Set Architecture (ISA) and the microarchitecture levels, enabling designers to create cores that meet specific functional and performance needs. By allowing detailed customization, these generators support a wide range of applications, from simple embedded devices to complex industrial systems. The Core-hub Generators are designed to streamline the SoC development process by integrating optimized SoC fabrics and UnCore components such as programming interfaces, debugging tools, and interrupt controllers. This comprehensive integration facilitates efficient communication between different processing units and peripheral devices, thereby enhancing overall system performance. InCore's generators leverage the flexibility and scalability of RISC-V technology, promoting innovation and accelerating the deployment of custom silicon solutions. This makes them an ideal choice for designers looking to build cutting-edge SoCs with enhanced capabilities and reduced development times.
The xcore.ai platform stands as an economical and high-performance solution for intelligent IoT applications. Designed with a unique multi-threaded micro-architecture, it supports applications requiring deterministic performance with low latency. The architecture features 16 logical cores, split between two multi-threaded processor tiles, which are equipped with 512 kB of SRAM and a vector unit for both integer and floating-point computations. This platform excels in enabling high-speed interprocessor communications, allowing tight integration among processors and across multiple xcore.ai SoCs. The xcore.ai offers scalable performance, adapting the tile clock frequency to meet specific application requirements, which optimizes power consumption. Its ability to handle DSP, AI/ML, and I/O processing within a singular development environment makes it a versatile choice for creating smart, connected products. The adaptability of the xcore.ai extends to various market applications such as voice and audio processing. It supports embedded PHYs for MIPI, USB, and LPDDR control processing, and utilizes FreeRTOS across multiple threads for robust multi-threading performance. On an AI and ML front, the platform includes a 256-bit vector processing unit that supports 8-bit to 32-bit operations, delivering exceptional AI performance with up to 51.2 GMACC/s. All these features are packaged within a development environment that simplifies the integration of multiple application-specific components. This makes xcore.ai an essential platform for developers aiming to leverage intelligent IoT solutions that scale with application needs.
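The quoted 51.2 GMACC/s figure can be reconciled with the architecture described above under an assumed clock: a 256-bit vector unit processing 8-bit operands gives 32 MACs per cycle per tile, and two tiles at an assumed 800 MHz tile clock reach the quoted number. The clock value in the sketch below is an assumption introduced solely to show the arithmetic; it is not a figure stated in this catalog.

```python
# Reconstructing the quoted 51.2 GMACC/s from the architecture description:
# 256-bit vector unit / 8-bit operands = 32 MACs per cycle per tile, two
# tiles per device. The 800 MHz tile clock is an ASSUMPTION used only to
# make the arithmetic line up; it is not stated in the catalog text.

vector_width_bits = 256
operand_bits = 8
tiles = 2
assumed_tile_clock_hz = 800e6  # assumption, not a published figure

macs_per_cycle_per_tile = vector_width_bits // operand_bits  # 32
gmacc_per_s = macs_per_cycle_per_tile * tiles * assumed_tile_clock_hz / 1e9
print(f"~{gmacc_per_s:.1f} GMACC/s")  # ~51.2
```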
The Chimera GPNPU series is a pivotal innovation in on-device artificial intelligence computing. These processors are engineered to address the challenges of machine learning inference deployment, offering a unified architecture that integrates matrix, vector, and scalar operations seamlessly. By consolidating what traditionally required multiple processors, such as NPUs, DSPs, and real-time CPUs, into a single processing core, the Chimera GPNPU reduces system complexity and optimizes performance. The series is designed to handle diverse, data-parallel workloads, including traditional C++ code and the latest machine learning models such as vision transformers and large language models. The fully programmable nature of Chimera GPNPUs allows developers to adapt and optimize model performance continuously, providing a significant uplift in productivity and flexibility and ensuring that new neural network models can be supported without hardware redesign. A remarkable feature of these processors is their scalability, accommodating intensive workloads up to 864 TOPS and making them particularly suited to high-demand applications like automotive safety systems. The availability of ASIL-ready cores allows them to meet stringent automotive safety standards, positioning the Chimera GPNPU as an ideal solution for ADAS and other automotive use cases. The architecture's emphasis on reducing memory bandwidth constraints and energy consumption further enhances its suitability for a wide range of high-performance, power-sensitive applications, making it a versatile solution for modern automotive and edge computing.
The Mixed-Signal CODEC offered by Archband Labs is engineered to enhance the performance of audio and voice devices, handling conversions between analog and digital signals efficiently. Designed to cater to various digital audio interfaces such as PWM, PDM, PCM conversions, I2S, and TDM, it ensures seamless integration into complex audio systems. Well-suited for low-power and high-performance applications, this CODEC is frequently deployed in audio systems across consumer electronics, automotive, and edge computing devices. Its robust design ensures reliable operation within wearables, smart home devices, and advanced home entertainment systems, handling pressing demands for clarity and efficiency in audio signal processing. Engineers benefit from its extensive interfacing capabilities, supporting a spectrum of audio inputs and outputs. The CODEC's compact architecture ensures ease of integration, allowing manufacturers to develop innovative and enhanced audio platforms that meet diverse market needs.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic, Inc. is a groundbreaking processor core designed for efficiency in both power consumption and performance. Operating at a mere 10mW at 1GHz, this core leverages advanced design techniques to run at reduced voltages without sacrificing performance, achieving clock speeds up to 5 GHz. This innovation is particularly valuable for applications requiring high-speed processing while maintaining low power usage, making it ideal for portable and battery-operated devices. Micro Magic's 64-bit RISC-V architecture embraces a streamlined design that minimizes energy consumption and maximizes processing throughput. The core's architecture is optimized for high performance under low-power conditions, which is essential for modern electronics that require prolonged battery life and environmental sustainability. This core supports a wide range of applications from consumer electronics to automotive systems where energy efficiency and computational power are paramount. The RISC-V core also benefits from Micro Magic's suite of integrated design tools, which streamline the development process and enable seamless integration into larger systems. With a focus on reducing total ownership costs and enhancing product life cycle, Micro Magic's RISC-V core stands out as a versatile and eco-friendly solution in the semiconductor market.
The Veyron V1 CPU by Ventana Micro Systems illustrates their commitment to advancing high-performance computing solutions within the RISC-V family of processors. This cutting-edge processor is tailored specifically for high-demand workloads in data centers, ensuring reliable and efficient operation across various performance metrics. The Veyron V1 embodies Ventana’s design philosophy of combining competitive efficiency levels with robust processing capabilities. Characterized by its versatility, the V1 CPU stands out for its adaptability across a wide range of data center applications. The Veyron V1 seamlessly integrates into existing infrastructures while offering scalable performance improvements without the burden of excessive power consumption. This makes it an ideal choice for businesses seeking to enhance their computing prowess without substantial energy overhead. Ventana’s focus on maintaining high standards of efficiency ensures that the Veyron V1 CPU meets the stringent demands of modern data-intensive environments. It offers enhanced instruction sets and high-speed processing, making it well-suited to serve dynamic and evolving enterprise needs. The presence of innovative technologies in the Veyron V1 advances RISC-V architecture, showcasing the practical adaptability of Ventana’s IP offerings.