The 'Processor' category in the Silicon Hub Semiconductor IP catalog is a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing complex computations essential for a multitude of applications. These IPs include a wide variety of specific types such as CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.
In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.
Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.
Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.
Brainchip's Akida Neural Processor IP represents a groundbreaking approach to edge AI processing by employing a neuromorphic design that mimics natural brain function for efficient and accurate data processing directly on the device. This IP stands out due to its event-based processing capability, which significantly reduces power consumption while providing high-speed inferencing and on-the-fly learning. Akida's architecture is designed to operate independently of traditional cloud services, thereby enhancing data privacy and security. This localized processing approach enables real-time systems to act on immediate sensor inputs, offering instantaneous reactions. Additionally, the architecture supports flexible neural network configurations, allowing it to adapt to various tasks by tailoring the processing nodes to specific application needs. The Akida Neural Processor IP is supported by Brainchip's MetaTF software, which simplifies the creation and deployment of AI models by providing tools for model conversion and optimization. Moreover, the platform's inherent scalability and customization features make it versatile for numerous industry applications, including smart home devices, automotive systems, and more.
The 2nd Generation Akida platform is a substantial advancement in Brainchip's neuromorphic processing technology, expanding its efficiency and applicability across more complex neural network models. This advanced platform introduces support for Temporal Event-Based Neural Nets and Vision Transformers, aiming to enhance AI performance for various spatio-temporal and sensory applications. It's designed to drastically cut model size and required computations while boosting accuracy. Akida 2nd Generation continues to enable Edge AI solutions by integrating features that improve energy efficiency and processing speed while keeping model storage requirements low. This makes it an ideal choice for applications that demand high-performance AI in Edge devices without needing cloud connectivity. Additionally, it incorporates on-chip learning, which eliminates the need to send sensitive data to the cloud, thus enhancing security and privacy. The platform is highly flexible and scalable, accommodating a wide array of sensory data types and applications, from real-time robotics to healthcare monitoring. It's specifically crafted to run independently of the host CPU, enabling efficient processing in compact hardware setups. With this generation, Brainchip sets a new standard for intelligent, power-efficient solutions at the edge.
Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers specify exactly the amount of logic, DSP, and memory resources their applications need, making it an ideal choice for areas like AI, ML, 5G wireless, and more. By embedding only the necessary features into production silicon, Speedcore eFPGA can significantly reduce system cost, power requirements, and board space while maintaining flexibility. The IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs, and the Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into a variety of semiconductor projects.
The Speedster7t FPGA family is crafted for high-bandwidth workloads, tackling the restrictions common to conventional FPGAs. Manufactured on the TSMC 7nm FinFET process, these FPGAs combine a pioneering 2D network-on-chip architecture with an array of machine learning processors for high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-speed GDDR6 memory, 400G Ethernet, and PCI Express Gen5. The 2D network-on-chip connects these interfaces to more than 80 access points in the FPGA fabric, enabling ASIC-like performance while retaining complete programmability. Users can get started with the VectorPath accelerator card, which houses a Speedster7t FPGA. The family offers robust tooling for applications such as 5G infrastructure, computational storage, and test and measurement.
The NMP-750 is a performance accelerator tailored for edge computing applications that demand robust processing power and versatility. It is ideally deployed in environments such as automotive, AMR and UAV systems, AR/VR applications, and smart infrastructure projects like smart buildings, factories, and cities, and it is designed to strengthen security and surveillance systems while supporting advanced telecommunications solutions. The IP delivers up to 16 TOPS, addressing the need for high throughput and efficiency in data processing tasks. The NMP-750 includes up to 16 MB of local memory and is controlled by a RISC-V or Arm Cortex-R or Cortex-A 32-bit CPU, with three 128-bit AXI4 interfaces dedicated to host, CPU, and data traffic. This infrastructure ensures rapid data handling and optimizes system-level operations for various emerging technologies. Ideal for managing multi-camera stream processing and enhancing spectral efficiency, it is equally suited to mobility and autonomous control systems that are key to future smart city and factory applications. The NMP-750's support for comprehensive automation and data analytics gives companies the potential to develop cutting-edge technologies, driving industry standards across domains.
ADAS and Autonomous Driving technology by KPIT focuses on advancing L3+ autonomy, providing scalable and safe autonomous mobility solutions. This technology addresses fundamental challenges such as consumer safety, localized infrastructure dependencies, and comprehensive validation approaches. With the ever-evolving landscape of autonomous driving, ensuring robust AI solutions beyond mere perception is crucial for elevating autonomy levels in vehicles. By integrating innovative technology and adhering to regulatory standards, KPIT empowers automakers to offer safe and reliable autonomous vehicles that meet consumer trust and performance expectations.
The Origin E1 is a highly efficient neural processing unit (NPU) designed for always-on applications across home appliances, smartphones, and edge nodes. It is engineered to deliver approximately 1 Tera Operations per Second (TOPS) and is tailored for cost- and area-sensitive deployment. Featuring the LittleNPU architecture, the Origin E1 excels in low-power environments, making it an ideal solution for devices where minimal power consumption and area are critical. This NPU capitalizes on Expedera's innovative packet-based execution strategy, which allows it to perform parallel layer execution for optimal resource use, cutting down on latency, power, and silicon area. The E1 supports a variety of network types commonly used in consumer electronics, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and more. A significant advantage of Origin E1 is its scalability and market-leading power efficiency, achieving 18 TOPS/W and supporting standard, custom, and proprietary networks. With a robust software stack and support for popular AI frameworks like TensorFlow and ONNX, it ensures seamless integration into a diverse range of AI applications.
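The Origin E1's software stack ingests models through standard frameworks and interchange formats such as TensorFlow and ONNX. As a hedged illustration of how a network typically reaches that kind of NPU toolchain, the sketch below exports a small PyTorch model to ONNX; it uses only public PyTorch APIs and makes no claims about Expedera's own SDK, whose interfaces are not described here.

```python
# Generic example: export a small network to ONNX, the kind of portable
# model file an NPU vendor toolchain typically ingests. Public PyTorch
# APIs only; this is not Expedera-specific code.
import torch
import torch.nn as nn

class TinyAlwaysOnNet(nn.Module):
    """Stand-in for a small always-on workload (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=5, padding=2)
        self.fc = nn.Linear(8 * 128, 4)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyAlwaysOnNet().eval()
dummy = torch.randn(1, 1, 128)  # (batch, channels, samples)

# The exported .onnx file is what a vendor compiler would consume next.
torch.onnx.export(model, dummy, "tiny_always_on.onnx",
                  input_names=["audio"], output_names=["logits"])
```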
The Origin E8 neural processing unit (NPU) stands out for its extreme performance capabilities, designed to serve demanding applications such as high-end automotive systems and data centers. Capable of delivering up to 128 TOPS per core, this NPU supports the most advanced AI workloads seamlessly, whether in autonomous vehicles or data-intensive environments. By employing Expedera's packet-based architecture, Origin E8 ensures efficient parallel processing across layers and achieves impressive scalability without the drawbacks of increased power and area penalties associated with tiled architectures. It allows running extensive AI models that cater to both standard and custom requirements without compromising on model accuracy. The NPU features a comprehensive software stack and full support for a variety of frameworks, ensuring ease of deployment across platforms. Scalability up to PetaOps and support for resolutions as high as 8K make the Origin E8 an excellent solution for industries that demand unrivaled performance and adaptability.
Vehicle Engineering & Design Solutions by KPIT revolve around transforming vehicle development through cutting-edge design and simulation technologies. By employing advanced Computer Aided Design (CAD) and virtual prototyping, KPIT enhances product development and market entry speed. The focus is on aligning vehicle aesthetics with functional performance, ensuring that vehicles not only appeal to modern consumers but also comply with modern sustainability mandates. KPIT’s holistic approach offers comprehensive solutions that simplify the design and validation processes, fostering innovation in both conventional and electric vehicle configurations.
The eSi-3200 is a 32-bit processor core focused on delivering low power and cost-efficient solutions. This core is well-suited to embedded control applications where deterministic performance is crucial. Its modified-Harvard memory architecture provides simultaneous instruction and data access, optimizing speed and performance. Incorporating an extensive instruction set, the eSi-3200 includes optional features such as single-precision floating point instructions. Its architecture caters to both high and low-power applications, ensuring efficient resource utilization. As a cacheless design, it offers predictable performance, beneficial for real-time control applications. This core supports a wide range of peripherals and interfaces, facilitated by its AMBA architecture compatibility. Its ease of integration into existing systems, along with comprehensive debugging support, makes it a reliable choice for achieving sophisticated control in embedded systems.
The NMP-350 is a low-power, cost-effective end-point accelerator designed for applications across various industries. It finds its niche in markets such as automotive, AIoT, and Industry 4.0, where efficiency and scalability are critical. With potential applications in driver authentication, digital mirrors, and personalized user experiences, it is also applicable to predictive maintenance, machine automation, and health monitoring. Technically, the NMP-350 delivers up to 1 TOPS (Tera Operations per Second), supported by up to 1 MB of local memory. The accelerator is built on a flexible architecture using either a RISC-V or Arm Cortex-M 32-bit CPU, with three 128-bit AXI4 interfaces dedicated to host, CPU, and data traffic. This composition lets it handle a multitude of tasks efficiently while maintaining a low power profile. Its integration into smart appliances and wearable technologies showcases its versatility, providing industry players with a robust solution for building smarter, more reliable products. As industries move toward more interconnected and intelligent systems, the NMP-350 provides the technology to drive that innovation forward.
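To put the 1 TOPS budget in perspective, the back-of-the-envelope sketch below estimates the inference rate such an accelerator could sustain for a hypothetical workload; the per-inference operation count and utilization factor are illustrative assumptions, not vendor benchmark figures.

```python
# Back-of-the-envelope throughput estimate for a 1 TOPS end-point accelerator.
# The model cost and utilization below are illustrative assumptions only.
peak_ops_per_s = 1e12    # 1 TOPS, as specified for the NMP-350
model_ops = 0.5e9        # assumed: ~0.5 GOPs per inference (small vision model)
utilization = 0.6        # assumed: 60% sustained utilization of peak

inferences_per_s = peak_ops_per_s * utilization / model_ops
print(f"~{inferences_per_s:.0f} inferences/s")  # ~1200 under these assumptions
```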
The D25F from Andes Technology is a feature-rich processor core built on a 32-bit, 5-stage pipeline architecture. It supports the DSP/SIMD P-extension, offering enhanced signal processing and computational capabilities, making it ideal for media processing, IoT applications, and other compute-intensive tasks. This core balances high throughput with power efficiency, leveraging a well-optimized pipeline to ensure reduced processing delays and improved execution times. Its compatibility with the RISC-V instruction set allows it to integrate into a variety of customizable systems.
The Origin E2 is a versatile, power- and area-optimized neural processing unit (NPU) designed to enhance AI performance in smartphones, edge nodes, and consumer devices. This NPU supports a broad range of AI networks such as RNNs, LSTMs, CNNs, DNNs, and others, ensuring minimal latency while optimizing for power and area efficiency. Origin E2 is notable for its adaptable architecture, which facilitates seamless parallel execution across multiple neural network layers, thus maximizing resource utilization and providing deterministic performance. With performance capabilities scalable from 1 to 20 TOPS, the Origin E2 maintains excellent efficiency up to 18 TOPS per Watt, reflecting its superior design strategy over traditional layer-based solutions. This NPU's software stack supports prevalent frameworks like TensorFlow and ONNX, equipped with features such as mixed precision quantization and multi-job APIs. It’s particularly suitable for applications that require efficient processing of video, audio, and text-based neural networks, offering leading-edge performance in power-constrained environments.
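Because the Origin E2 software stack advertises mixed-precision quantization on top of frameworks such as TensorFlow, the sketch below shows a generic TensorFlow Lite post-training quantization pass as a stand-in for that kind of step; it uses only public TensorFlow APIs and is not Expedera's toolchain.

```python
# Generic post-training quantization with TensorFlow Lite, shown as a
# stand-in for the quantization step an NPU toolchain performs.
# Public TensorFlow APIs only; not Expedera-specific.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```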
The eSi-1600 is a 16-bit processor core tailored for cost-sensitive and power-efficient applications. Setting itself apart, it delivers exceptional performance typical of 32-bit processors while maintaining the affordability and system cost of an 8-bit device. Its reduced footprint makes it an excellent choice for integration into ASICs or FPGA designs. With its RISC architecture, the eSi-1600 supports an extensive range of instructions optimized for high code density and low power consumption. Its pipeline design ensures efficient execution of operations, allowing for high-speed data processing even in energy-constrained environments. This core facilitates easy migration paths to more advanced versions while providing robust peripheral interfaces using the AMBA bus protocol. It's engineered for low-power applications, presenting an optimal solution for compact, high-performance embedded designs.
TimbreAI T3 is purpose-built to deliver ultra-low-power AI for audio processing, providing critical noise reduction capabilities in power-constrained devices such as headsets. With power consumption as low as 300 microwatts, it offers a specialized solution for enhancing audio experiences without compromising battery life. This AI inference engine supports up to 3.2 GOPS of performance without requiring external memory, optimizing power efficiency and reducing silicon area. Its packet-based architecture leverages Expedera's expertise in low-power design to maintain audio clarity while reducing the chip's size and power consumption. TimbreAI T3 is readily deployable across a wide array of devices, with support for popular audio processing neural networks and frameworks such as TensorFlow and ONNX. Its seamless integration and proven field performance make it a preferred choice for companies aiming to enhance the audio features of their wearables and other portable devices.
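The quoted figures already imply an energy budget per operation; the short calculation below derives it, assuming the 3.2 GOPS peak is sustained at the stated 300 µW (an idealized assumption used only for illustration).

```python
# Energy per operation implied by the quoted TimbreAI T3 figures, assuming
# 3.2 GOPS is sustained at 300 microwatts (idealized, for illustration).
power_w = 300e-6     # 300 µW
ops_per_s = 3.2e9    # 3.2 GOPS

joules_per_op = power_w / ops_per_s
print(f"{joules_per_op * 1e15:.0f} fJ per operation")  # ~94 fJ/op
```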
Cortus's High Performance RISC-V Processor represents the pinnacle of processing capability, designed for demanding applications that require high-speed computing and efficient task handling. It features the world’s fastest RISC-V 64-bit instruction set architecture, implemented in an Out-of-Order (OoO) execution core, supporting both single-core and multi-core configurations for unparalleled processing throughput. This processor is particularly suited for high-end computing tasks in environments ranging from desktop computing to artificial intelligence workloads. With integrated features such as a multi-socket cache coherent system and an on-chip vector plus AI accelerator, it delivers exceptional computation power, essential for tasks such as bioinformatics and complex machine learning models. Moreover, the processor includes coherent off-chip accelerators, such as CNN accelerators, enhancing its utility in AI-driven applications. The design flexibility extends its application to consumer electronics like laptops and supercomputers, positioning the High Performance RISC-V Processor as an integral part of next-gen technology solutions across multiple domains.
The NaviSoC by ChipCraft is a sophisticated GNSS receiver integrated with an application processor on a single die. Known for its compact design, the NaviSoC provides exceptional precision, reliability, and security, complemented by low power consumption. This well-rounded GNSS solution is customizable to meet diverse application needs, making it suitable for IoT, lane-level navigation, UAVs, and more. Designed to handle a wide range of GNSS applications, the NaviSoC is well suited to scenarios that demand high accuracy and efficiency. Its architecture supports applications such as asset tracking, smart agriculture, and time synchronization while maintaining stringent security protocols. The flexibility of its design allows adaptation and scaling to specific user requirements. By integrating navigation and application processing on one chip, the NaviSoC advances GNSS technology and reflects ChipCraft's strides in creating dynamic, high-performance semiconductor solutions for global positioning and navigation. Its efficiency and adaptability offer a robust foundation for future GNSS system developments.
The NMP-550 is a performance-efficiency accelerator, crafted for applications that demand high computational power combined with energy efficiency. This IP is especially suited to markets including automotive, mobile devices, AR/VR, and security-focused technologies. Its applicability spans a wide spectrum, fostering innovation in driver monitoring, fleet management, and advanced image and video analytics, and extending to intruder detection, compliance systems, and medical devices with enhanced diagnostic capabilities. Technologically, the NMP-550 delivers up to 6 TOPS, providing a significant boost in data processing capability, and features up to 6 MB of local memory for swift and effective data management. The design is underpinned by a choice of RISC-V or Arm Cortex-M or Cortex-A 32-bit CPUs, along with three 128-bit AXI4 interfaces allocated to host, CPU, and data traffic. This specification allows the accelerator to tackle tasks of varying computational demands with resilience and efficiency. Its design caters to cross-disciplinary needs, making it an excellent fit for drone operations, robotics, and security systems requiring real-time processing and decision-making. With the ability to process substantially more data at improved efficiency, this IP aligns well with the future of immersive and interactive application deployments.
The ORC3990 is a sophisticated System on Chip (SoC) designed for low-power sensor-to-satellite communication via LEO satellites. Utilizing Totum's DMSS technology, it achieves superior Doppler performance, enabling robust connectivity for IoT devices. The integration of an RF transceiver, power amplifiers, ARM CPUs, and memory makes it a highly versatile module. Leveraging advanced power management, the SoC supports a battery life exceeding ten years, even across the industrial temperature range of -40 to +85°C. It is optimized for use with Totum's global LEO satellite network and provides substantial indoor signal coverage without additional GNSS components. Efficiency is a key feature: the chip operates in the 2.4 GHz ISM band, providing connectivity regardless of location. Compact in design, comparable in size to a business card, and easy to mount, the ORC3990 offers sought-after versatility for IoT applications. Its favorable total cost of ownership compared with terrestrial IoT solutions makes it a valuable asset for any IoT deployment focused on sustainability and longevity.
The PolarFire FPGA Family by Microsemi is engineered to deliver cost-effectiveness alongside exceptional power efficiency, positioning itself as the optimal choice for mid-range FPGA applications. Crafted to offer transceivers ranging from 250 Mbps to a robust 12.7 Gbps, these FPGAs cater to diverse bandwidth requirements. With logic elements spanning 100K to 500K and incorporating up to 33 Mbits of RAM, the PolarFire series seamlessly addresses demanding processing needs while ensuring secure and reliable performance. At the heart of its design philosophy is a focus on best-in-class security features combined with high reliability, making it particularly relevant for industries like automotive, industrial, and communication infrastructures where failure is not an option. It supports applications that require low power consumption without sacrificing performance, which is increasingly important in today's energy-conscious environments. These FPGAs find their versatility in a range of applications, from driving advancements in ADAS in the automotive industry to supporting broadband and 5G mobile infrastructures in telecommunications. The family also extends its use cases to data center technologies, highlighting its adaptability and efficiency in both digital and analog processing fields. With such a broad spectrum of applicability, the PolarFire FPGA Family stands as a shining example in Microsemi's product arsenal, delivering solutions tuned for innovation and performance.
The Chimera GPNPU stands as a powerful neural processing unit tailor-made for on-device AI computing. This processor architecture reshapes SoC design by providing a unified execution pipeline that integrates matrix and vector operations with the control code typically handled by separate cores. Such integration boosts developer productivity and significantly enhances performance. The Chimera GPNPU's ability to run diverse AI models, including classical backbones, vision transformers, and large language models, demonstrates its adaptability to future AI developments. Its scalable design handles extensive computational workloads of up to 864 TOPS, making it suitable for a wide array of applications including automotive-grade AI solutions. This licensable processor core is built on a hybrid architecture that combines Von Neumann and 2D SIMD matrix instructions, facilitating efficient execution of a broad array of data processing tasks. The Chimera GPNPU is optimized for integration, allowing seamless incorporation into modern SoC designs for high-speed, power-efficient computing. Key features include a robust instruction set tailored for ML tasks, effective memory optimization strategies, and a systematic approach to on-chip data handling, all working to minimize power usage while maximizing throughput and computational accuracy. Furthermore, the Chimera GPNPU not only meets contemporary demands of AI processing but is forward-compatible with advances in machine learning models. Through comprehensive safety enhancements, it addresses stringent automotive safety requirements, ensuring reliable performance in critical applications such as ADAS and in-cabin monitoring systems. This combination of performance, efficiency, and scalability positions the Chimera GPNPU as a pivotal tool in the advancement of AI-driven technologies within industries demanding high reliability and long-term support.
The RV12 RISC-V Processor is a highly customizable, single-core processor that aligns with the RV32I and RV64I standards. Crafted for the embedded market, this processor is part of Roa Logic's robust family of 32-bit and 64-bit CPUs based on the RISC-V instruction set. The RV12 architecture employs a Harvard structure, enabling concurrent access to instruction and data memory, which boosts performance and efficiency. Designed to meet the demands of modern embedded applications, the RV12 employs a single-issue architecture, optimizing processing effectiveness without the complexity of multi-threading. This processing unit is compatible with a diverse set of applications, offering scalability and versatility while maintaining alignment with industry standard specifications. Additionally, the RV12 comes equipped with a full suite of supportive resources, including testbenches and in-depth documentation. Such features facilitate seamless integration and deployment in various applications, ranging from consumer electronics to more complex industrial uses.
The ABX Platform by Racyics utilizes Adaptive Body Biasing (ABB) technology to drive performance in ultra-low-voltage scenarios. The platform is tailored for applications requiring ultra-low power as well as high performance. The ABB generator, together with the accompanying standard cells and SRAM IP, forms the core of the ABX Platform, providing efficient compensation for process variations, supply voltage fluctuations, and temperature changes.

For automotive applications, the ABX Platform delivers notable improvements in leakage power, achieving up to a 76% reduction for automotive-grade designs operating at temperatures up to 150°C. The platform's reverse body biasing (RBB) feature substantially enhances leakage control, making it well suited to automotive use. Beyond automotive, the platform's forward body biasing (FBB) capability significantly boosts performance, offering up to 10.3 times higher performance at 0.5 V operation compared to implementations without body biasing.

Extensively tested and silicon-proven, the ABX Platform ensures reliability and power efficiency with easy integration into standard design flows. The solution also provides tight cornering and ABB-aware implementation for improved power-performance-area (PPA) metrics. As a turnkey solution, it is designed for seamless integration into existing systems and comes with a free evaluation kit so potential customers can explore its capabilities before committing.
The Automotive AI Inference SoC by Cortus is a cutting-edge chip designed to revolutionize image processing and artificial intelligence applications in advanced driver-assistance systems (ADAS). Leveraging RISC-V expertise, this SoC is engineered for low power and high performance, particularly suited to the rigorous demands of autonomous driving and smart city infrastructures. Built to support Level 2 to Level 4 autonomous driving standards, this AI Inference SoC features powerful processing capabilities, enabling complex image processing algorithms akin to those used in advanced visual recognition tasks. Designed for mid to high-end automotive markets, it offers adaptability and precision, key to enhancing the safety and efficiency of driver support systems. The chip's architecture allows it to handle a tremendous amount of data throughput, crucial for real-time decision-making required in dynamic automotive environments. With its advanced processing efficiency and low power consumption, the Automotive AI Inference SoC stands as a pivotal component in the evolution of intelligent transportation systems.
Cortus CIoT25 is an inventive solution aimed at enhancing IoT connectivity with its ultra-energy-efficient RISC-V architecture. Supporting Sub-1 GHz unlicensed ISM bands, the CIoT25 makes IoT devices smarter and more efficient, reducing both power consumption and operational costs significantly. The design is specifically crafted for smart home devices and low-power sensor networks, offering unparalleled integration in the IoT domain. The CIoT25's unique architecture ensures high adaptability to varying IoT environments, enabling tailored performance delivery for distinct applications. Its comprehensive support for diverse communication protocols makes it an ideal candidate for multi-platform IoT setups, leading to widespread adoption among IoT service providers seeking reliable communication hardware. With the increasing demands of connected environments, the CIoT25 meets the intricate requirements of modern applications by offering seamless functionality over extended periods, largely due to its low operational energy demand. This microcontroller is set to empower a new wave of IoT devices with its intelligent resource management and superior data handling capabilities.
The iniCPU is an 8-bit microprocessor core compatible with the M6809, providing an efficient computing solution for embedded applications within system-on-chip designs. The core is optimized for high-level programming and control tasks, enabling a reduction in component count and system cost while enhancing performance and system integration.

Key features include full 6809 software compatibility, address expansion via page mode, and multiple external interfaces designed to work seamlessly with S/D RAM, I/O, and other peripherals. This adaptability is paired with robust hardware debugging circuits, facilitating straightforward integration and speeding up development cycles.

Designed for operation at 40 MHz with peak performance of up to 10 MIPS, the iniCPU ensures swift and efficient data processing. Its fully technology-independent and fully synchronous architecture makes it a resilient and versatile choice for a variety of embedded system applications, providing the scalability and flexibility required in modern electronic designs.
Creonic offers a diverse array of miscellaneous FEC (Forward Error Correction) and DSP (Digital Signal Processing) IP cores, catering to various telecommunications and broadcast standards. This collection of IP cores includes highly specialized solutions like ultrafast BCH decoders and FFT/IFFT processors, which are critical for managing high-throughput data streams and maintaining signal fidelity. These IP cores embody the latest in processing technology, delivering precise error correction and signal transformation functions that are essential in complex communication networks. Their integration capabilities are made easy with detailed hardware specifications and software models, designed for flexibility across different platforms and applications. The rigorous development process guarantees that each core adheres to market standards, optimizing performance and ensuring operational reliability. Creonic's portfolio of miscellaneous FEC and DSP cores stands out for its innovative contributions to digital communications, providing unique solutions that meet the sophisticated requirements of modern connectivity.
The Mixed-Signal CODEC offered by Archband Labs stands out as a versatile solution integrating both analog and digital functionalities. This CODEC is designed to meet the demands of various audio and voice processing applications, ensuring high fidelity and low power consumption. Equipped with robust conversion capabilities, it's suitable for a range of environments from wearable tech to automotive systems, ensuring clear and precise sound reproduction. The CODEC forms a crucial part of devices like smart home appliances and AR/VR gadgets, where audio quality is paramount.
The Origin E6 NPU is engineered for high-performance on-device AI tasks in smartphones, AR/VR headsets, and other consumer electronics requiring cutting-edge AI models and technologies. This neural processing unit balances power and performance effectively, delivering 16 to 32 TOPS per core while catering to a range of AI workloads including image transformers and point cloud analysis. Utilizing Expedera's unique packet-based architecture, the Origin E6 offers superior resource utilization and ensures deterministic latency, avoiding the penalties typically associated with tiled architectures. It supports advanced AI models such as Stable Diffusion and transformers, providing optimal performance for both current and anticipated future AI workloads. The NPU integrates seamlessly into chip designs with a comprehensive software stack supporting popular AI frameworks. Its field-proven architecture, deployed in millions of devices, offers manufacturers the flexibility to design AI-enabled devices that maximize user experience while maintaining cost efficiency.
The AndeShape Platforms include a range of systems designed for developing with AndesCore processors. These platforms are split into categories such as microcontroller platforms and FPGA development kits. They offer integrated solutions with pre-configured IP blocks to simplify the design process for complex systems. Through its assortment of hardware development tools, AndeShape platforms cater to various stages of product development from inception to demonstration, making it easier for engineers to create efficient, scalable solutions.
The eSi-3250 is a high-performance 32-bit processor core designed for applications requiring robust caching solutions. Its architecture includes configurable instruction and data caches, optimizing the handling of slow on-chip and off-chip memories. With an optional memory management unit, it supports advanced virtual memory management. This processor core integrates high-performance features such as user and supervisor modes, multiple interrupts, and a configurable pipeline. The eSi-3250 also supports custom user-defined instructions, offering versatility for custom application needs. Its efficient design is suitable for power-sensitive systems needing high data throughput. eSi-3250's extensive compatibility with AMBA protocols makes it easy to integrate with diverse system architectures and third-party IPs. This enhances its utility in creating multi-core systems and sophisticated processing environments, ensuring efficient resource usage and high operational efficiency.
The AX45MP is a high-performance processor core from Andes Technology, designed for demanding computational tasks. It features a 64-bit architecture with dual-issue capability, enhancing its data throughput. This processor is suited for applications needing robust data and memory handling, including AI, machine learning, and signal processing. Its architecture includes instruction and data prefetch capabilities, alongside a sophisticated cache management system to improve execution speed and efficiency. The AX45MP operates on a multicore setup supporting up to 8 cores, providing exceptional parallel processing power for complex applications.
The KL730 AI SoC is equipped with a state-of-the-art third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computational power. This innovative architecture enhances computational efficiency, particularly with the latest CNN networks and transformer applications, while reducing DDR bandwidth demands. The KL730 excels in video processing, offering support for 4K 60FPS output and boasts capabilities like noise reduction, wide dynamic range, and low-light imaging. It is ideal for applications such as intelligent security, autonomous driving, and video conferencing.
Akida IP is Brainchip's pioneering neuromorphic processor technology that mimics the human brain's function to efficiently analyze sensor inputs directly at the acquisition point. This digital processor achieves outstanding performance while significantly lowering power consumption and maintaining high processing precision. Edge AI tasks are handled locally on the chip, minimizing latency and enhancing privacy through reduced cloud dependence. The scalable architecture supports up to 256 nodes interconnected via a mesh network, each featuring Neural Network Layer Engines that can be tailored for convolutional or fully connected operations. The event-based processing technology of Akida leverages the natural data sparsity found in activations and weights, cutting down the number of operations by significant margins, thus saving power and improving performance. As a highly adaptable platform, Akida supports on-chip learning and various quantization options, ensuring customized AI solutions without costly cloud retraining. This approach not only secures data privacy but also lowers operational costs, offering edge AI solutions with unprecedented speed and efficiency. Akida's breakthrough capabilities address core issues in AI processing, such as data movement, by instantiating neural networks directly in hardware. This leads to reduced power consumption and increased processing speed. Furthermore, the IP is supported by robust tools and an environment conducive to easy integration and deployment, making it highly attractive to industries seeking efficient, scalable AI solutions.
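The claim that event-based execution exploits activation and weight sparsity can be made concrete with a simple operation count; the sparsity levels below are illustrative assumptions rather than BrainChip measurements, and the model assumes zero-valued events are skipped entirely.

```python
# Illustrative effect of sparsity-aware, event-based execution on MAC count.
# Sparsity figures are assumptions for illustration, not BrainChip data.
dense_macs = 100e6           # MACs a dense engine would execute for one layer pass
activation_sparsity = 0.7    # assumed: 70% of activations are zero (no event fired)
weight_sparsity = 0.5        # assumed: 50% of weights are zero

# If zero activations fire no events and zero weights are skipped, only the
# nonzero-by-nonzero combinations cost work.
effective_macs = dense_macs * (1 - activation_sparsity) * (1 - weight_sparsity)
print(f"{effective_macs / dense_macs:.0%} of dense operations remain")  # 15% here
```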
The eSi-Floating Point component provides robust floating point capabilities to eSi-RISC embedded processor cores. This feature is crucial for applications requiring high precision and complex arithmetic processing, such as digital signal processing and scientific computations. The component supports both single and double-precision floating point operations, adhering to the IEEE-754 standard. Designed for efficiency, eSi-Floating Point optimizes resource use while maximizing computational performance, making it suitable for resource-constrained environments without sacrificing precision. This component's architecture enables significant performance improvements in data processing tasks, allowing for enhanced data throughput and reduced computational time. eSi-Floating Point integrates seamlessly with the eSi-RISC architecture, providing a unified system solution that elevates processing capabilities without extensive redesigns. Its use in applications demanding precision calculation and high-speed processing emphasizes its value in fields such as audio processing, high-accuracy sensor hubs, and control systems.
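A quick illustration of why the single/double-precision choice matters in the signal processing and sensor tasks mentioned above: single precision carries roughly 24 significand bits versus 53 for double, so fine increments to large values are lost sooner. The NumPy sketch below is a generic IEEE-754 demonstration and is independent of the eSi-Floating Point hardware itself.

```python
# Generic IEEE-754 illustration of single vs. double precision; independent
# of the eSi-Floating Point hardware (for intuition only).
import numpy as np

# With ~24 significand bits, float32 cannot represent 1e8 + 1 exactly,
# so the +1 is rounded away; float64 (~53 bits) keeps it.
single = np.float32(1e8) + np.float32(1.0) - np.float32(1e8)
double = np.float64(1e8) + np.float64(1.0) - np.float64(1e8)

print("single precision:", single)  # 0.0 (the +1 was lost to rounding)
print("double precision:", double)  # 1.0
```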
AndesCore processors are designed as high-performance CPU cores targeting a variety of market segments. These processors, built on the RISC-V compatible AndeStar™ architecture, provide scalable solutions suitable for applications in AI, IoT, and more. The V5 core family is renowned for its performance efficiency and flexibility, featuring 32-bit and 64-bit cores that support an array of computing tasks. Each core is engineered to meet diverse application requirements, ranging from streamlined low-power designs to high-throughput models.
The Neural Processing Unit (NPU) from OPENEDGES offers a state-of-the-art deep learning accelerator, optimized for edge computing with advanced mixed-precision computation. Featuring a powerful network compiler for efficient memory usage, it handles complex neural network operations while minimizing DRAM traffic. Its layered architecture supports modern algorithmic needs, including transformers, allowing for parallel processing of neural layers. The NPU provides significant improvements in compute density and energy efficiency, targeting applications from automotive to surveillance, where high-speed, low-power processing is critical.
Opus Encoder/Decoder is a high-performance audio codec designed for efficient and flexible sound processing. Known for its cross-platform compatibility, the Opus codec offers exceptional compression rates without sacrificing audio quality. It supports a wide range of audio frequencies and bitrates, rendering it versatile for applications ranging from internet telephony to streaming and audio archiving. This encoder/decoder ensures minimal latency, critical for seamless audio streaming, and adapts dynamically to varying network conditions to maintain optimal sound clarity. The Opus technology is pivotal for modern high-quality audio demands due to its adaptability and superior performance metrics.
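As a practical illustration of the codec's configurable bitrate and low-latency framing, the sketch below drives the widely available libopus encoder through ffmpeg from Python; it assumes an ffmpeg build with libopus enabled and a local input.wav, and it is independent of any particular hardware or IP implementation of the codec.

```python
# Encode a WAV file to Opus via ffmpeg's libopus encoder. Assumes an ffmpeg
# build with libopus and an existing input.wav; illustration only.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.wav",        # source audio
        "-c:a", "libopus",        # select the Opus encoder
        "-b:a", "64k",            # target bitrate; Opus stays usable at low rates
        "-frame_duration", "20",  # 20 ms frames, a common low-latency choice
        "output.opus",
    ],
    check=True,
)
```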
The RISC-V Hardware-Assisted Verification platform by Bluespec is engineered to offer an efficient and comprehensive approach to verifying RISC-V cores. It accelerates the verification process, allowing developers to confirm the functionality of their designs at both the core and system levels. The platform supports testing in diverse environments, including RTOS and Linux, which makes it versatile for a broad spectrum of applications. A distinguishing feature of this platform is its ability to verify standard ISA extensions as well as custom ISA extensions and accelerators. This capability is crucial for projects that require additional customization beyond the standard RISC-V instruction sets. Furthermore, by facilitating anytime, anywhere access through cloud-based solutions like AWS, it enhances the scalability and accessibility of verification processes. The platform is a valuable tool for developers who work on cutting-edge RISC-V applications, providing them with the confidence to validate their designs rigorously and efficiently. This verification tool is essential for developers aiming for high assurance in the correctness and performance of their systems.
The Arria 10 System on Module (SoM) is designed with an emphasis on embedded and automotive vision applications. This compact module leverages Altera's Arria 10 SoC devices in a sleek 29x29 mm package, offering a plethora of interfaces while maintaining a small, efficient form factor. It features an Altera Arria 10 SoC FPGA with a range from 160 to 480 KLEs, coupled with a Cortex A9 Dual-Core CPU. This enables robust integration and performance for demanding applications. The module's power management system ensures a seamless power-up and -down sequence, requiring only a 12V supply from the baseboard. Its dual DDR4 memory interfaces provide up to 2.4 Gbit/s per pin, offering a total bandwidth of up to 230 Gbit/s for both CPU and FPGA memory systems. This module supports a wide array of high-speed interfaces, including PCIe Gen3 x8, 10/40 Gbit/s Ethernet, DisplayPort, and 12G SDI, making it suitable for complex imaging and communication tasks. Additional features include up to 32 LVDS lanes for configurable RX or TX, two USB interfaces with OTG support, and ARM I²C, SPI, and GPIO interface signals. Furthermore, the Arria 10 SoM includes pre-configured IP for memory controllers and an Angstrom Linux distribution, facilitating rapid development and deployment of applications.
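The quoted memory bandwidth can be cross-checked with simple arithmetic; the interface widths used below (a 64-bit CPU-side and a 32-bit FPGA-side DDR4 interface) are assumptions chosen for illustration because they reproduce the stated total, not figures taken from the module's documentation.

```python
# Cross-check of the quoted DDR4 bandwidth. Interface widths are assumed
# for illustration (64-bit CPU side, 32-bit FPGA side), not datasheet values.
rate_per_pin_gbit = 2.4    # 2.4 Gbit/s per pin, as stated
cpu_width_bits = 64        # assumption
fpga_width_bits = 32       # assumption

total_gbit_s = rate_per_pin_gbit * (cpu_width_bits + fpga_width_bits)
print(f"{total_gbit_s:.1f} Gbit/s total")  # 230.4 Gbit/s, matching the ~230 Gbit/s figure
```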
NeuroMosAIc Studio is a comprehensive software platform designed to accelerate AI development and deployment across various domains. This platform serves as an essential toolkit for transforming neural network models into hardware-optimized formats specific for AiM Future's accelerators. With broad functionalities including conversion, quantization, compression, and optimization of neural networks, it empowers AI developers to enhance model performance and efficiency. The studio facilitates advanced precision analysis and adjustment, ensuring models are tuned to operate optimally within hardware constraints while maintaining accuracy. Its capability to generate C code and provide runtime libraries aids in seamless integration within target environments, enhancing the capability of developers to leverage AI accelerators fully. Through this suite, companies gain access to an array of tools including an NMP compiler, simulator, and support for NMP-aware training. These tools allow for optimized training stages and quantization of models, providing significant operational benefits in AI-powered solutions. NeuroMosAIc Studio, therefore, contributes to reducing development cycles and costs while ensuring top-notch performance of deployed AI applications.
Spec-TRACER is a powerful tool for managing the lifecycle of FPGA and ASIC requirements. It provides a unified platform for capturing, managing, and tracing requirements, making complex designs more manageable and traceable throughout their lifecycle. The tool is specifically tailored to comply with stringent industry standards for user and design requirements, aligning with hardware and software deliverables. By facilitating clear requirement management, Spec-TRACER ensures thorough traceability and accountability, reducing the risk of design deviations and improving communication across development teams. The result is a streamlined workflow in which requirements can be documented, tracked, and matched with design outputs effectively. Spec-TRACER excels in capturing detailed analyses and facilitating robust reporting, aligning closely with processes required in domains such as aerospace and defense. Its support for comprehensive requirements management protocols makes it indispensable for projects demanding high levels of compliance and verification rigor, ultimately enhancing the quality and reliability of final products.
The Veyron V1 CPU by Ventana Micro Systems illustrates their commitment to advancing high-performance computing solutions within the RISC-V family of processors. This cutting-edge processor is tailored specifically for high-demand workloads in data centers, ensuring reliable and efficient operation across various performance metrics. The Veyron V1 embodies Ventana’s design philosophy of combining competitive efficiency levels with robust processing capabilities. Characterized by its versatility, the V1 CPU stands out for its adaptability across a wide range of data center applications. The Veyron V1 seamlessly integrates into existing infrastructures while offering scalable performance improvements without the burden of excessive power consumption. This makes it an ideal choice for businesses seeking to enhance their computing prowess without substantial energy overhead. Ventana’s focus on maintaining high standards of efficiency ensures that the Veyron V1 CPU meets the stringent demands of modern data-intensive environments. It offers enhanced instruction sets and high-speed processing, making it well-suited to serve dynamic and evolving enterprise needs. The presence of innovative technologies in the Veyron V1 advances RISC-V architecture, showcasing the practical adaptability of Ventana’s IP offerings.
The Metis AIPU PCIe AI Accelerator Card represents a powerful computing solution for high-demand AI applications. This card, equipped with a single Metis AI Processing Unit, delivers extraordinary processing capabilities, reaching up to 214 Tera Operations Per Second (TOPS). Designed to handle intensive computing tasks, it is particularly suited for applications requiring substantial computational power and rapid data processing, such as real-time video analytics and AI-driven operations in various industrial and retail environments. This accelerator card integrates seamlessly into PCIe slots, providing developers with an easy-to-deploy solution enhanced by Axelera AI's Voyager Software Development Kit. The kit simplifies the deployment of neural networks, making it a practical tool for both seasoned developers and newcomers to AI technology. The card's power efficiency is a standout feature, aimed at reducing operational costs while ensuring optimal performance. With its innovative architecture, the Metis AIPU PCIe AI Accelerator Card not only meets but exceeds the needs of modern AI applications, ensuring users can harness significant processing power without the overheads associated with traditional systems.
The ULYSS MCU range from Cortus is a powerful suite of automotive microcontrollers designed to address the complex demands of modern automotive applications. These MCUs are anchored by a highly optimized 32/64-bit RISC-V architecture, delivering impressive performance levels from 120 MHz to 1.5 GHz, making them suitable for a variety of automotive functions such as body control, safety systems, and infotainment. ULYSS MCUs are engineered to accommodate extensive application domains, providing reliability and efficiency within harsh automotive environments. They feature advanced processing capabilities and are designed to integrate seamlessly into various automotive systems, offering developers a versatile platform for building next-generation automotive solutions. The ULYSS MCU family stands out for its scalability and adaptability, enabling manufacturers to design robust automotive electronics tailored to specific needs while ensuring cost-effectiveness. With their support for a wide range of automotive networking and control applications, ULYSS MCUs are pivotal in the development of reliable, state-of-the-art automotive systems.
The Veyron V2 CPU extends Ventana Micro Systems' commitment to delivering top-tier performance capabilities within the RISC-V framework. Designed for data center-class workloads, the Veyron V2 enhances efficiencies across cloud and hyperscale ecosystems, making it a strategic choice for enterprise operations requiring unparalleled processing strength and adaptability. Building upon the foundation laid by its predecessor, the Veyron V2 improves both processing speed and efficiency, providing an edge in handling diverse applications in data-intensive contexts. Its enhanced architecture supports extensible instruction sets, bridging the computational needs between various enterprise, automotive, and artificial intelligence markets with precision and reliability. Integrated within the Veyron product line's broader ecosystem, the V2 CPU emphasizes Ventana’s dedication to fostering adaptable, forward-thinking computing environments. It assures businesses of scalable performance improvements, aligning seamlessly with existing systems to promote effortless adoption across complex and varied IT landscapes.
The Cortus Lotus 1 is a multifaceted microcontroller that packs a robust set of features for a range of applications. This cost-effective, low-power SoC boasts RISC-V architecture, making it suitable for advanced control systems such as motor control, sensor interfacing, and battery-operated devices. Operating up to 40 MHz, its RV32IMAFC CPU architecture supports floating-point operations and hardware-accelerated integer processing, optimizing performance for computationally demanding applications. Designed to enhance code density and reduce memory footprint, Lotus 1 incorporates 256 KBytes of Flash memory and 24 KBytes of RAM, enabling the execution of complex applications without external memory components. Its six independent 16-bit timers with PWM capabilities are perfectly suited for controlling multi-phase motors, positioning it as an ideal choice for power-sensitive embedded systems. This microcontroller's connectivity options, including multiple UARTs, SPI, and TWI controllers, ensure seamless integration within a myriad of systems. Lotus 1 is thus equipped to serve a wide range of market needs, from personal electronics to industrial automation, ensuring flexibility and extended battery life across sectors.
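For the motor-control use case, the trade-off between PWM frequency and resolution on a 16-bit timer follows directly from the clock rate; the sketch below assumes the timers count at the stated 40 MHz core clock, which is an assumption for illustration since the actual timer clocking is not described here.

```python
# PWM frequency vs. resolution trade-off for a 16-bit timer, assuming the
# timer counts at the 40 MHz core clock (assumption for illustration).
timer_clock_hz = 40e6

for resolution_bits in (16, 12, 10, 8):
    period_counts = 2 ** resolution_bits
    pwm_freq_hz = timer_clock_hz / period_counts
    print(f"{resolution_bits}-bit resolution -> ~{pwm_freq_hz / 1000:.1f} kHz PWM")
# 16-bit gives ~0.6 kHz, 10-bit ~39.1 kHz, 8-bit ~156.2 kHz
```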
The eSi-1650 represents an upgrade with an integrated instruction cache, offering significant power and area efficiency improvements. Targeted at applications where memory speed is a constraint, this 16-bit RISC processor core optimizes power usage and performance. It is particularly efficient for mature process nodes using non-volatile memory such as OTP or Flash. This IP includes an expanded instruction set with versatile addressing modes and optional user-defined instructions. Its cache feature allows the CPU to achieve higher operating frequencies by overcoming limitations imposed by slower memory. As a result, the eSi-1650 is ideal for embedded systems operating at high performance levels while still managing power consumption effectively. With its hardware debug capabilities and excellent configurability, the eSi-1650 addresses complex application needs. It integrates effortlessly into designs utilizing AMBA peripheral buses, supporting a wide range of third-party IP cores, and enhancing overall system capability.
The Low Power RISC-V CPU IP from SkyeChip is crafted to deliver efficient computation with minimal power consumption. Featuring the RISC-V RV32 instruction set, it supports a range of functions with full standard compliance for instruction sets and partial support where necessary. Designed exclusively for machine mode, it incorporates multiple vectorized interrupts and includes comprehensive debugging capabilities. This CPU IP is well-suited for integration into embedded systems where power efficiency and processing capability are crucial.
Trimension SR200 is a single IC UWB chip designed for mobile applications, integrating both ranging and radar capabilities to facilitate enhanced device interactions. This chip is essential for mobile devices requiring high precision in location tracking and interaction detection. Benefiting from its compact integration, it supports seamless wireless communications, catering to the advancement of mobile technology through superior UWB features.