The CPU, or Central Processing Unit, is the central component of computer systems, acting as the brain that executes instructions and processes data. Our category of CPU semiconductor IPs offers a diverse selection of intellectual properties that enable the development of highly efficient and powerful processors for a wide array of applications, from consumer electronics to industrial systems. Semiconductor IPs in this category are designed to meet the needs of modern computing, offering adaptable and scalable solutions for different technology nodes and design requirements.
These CPU semiconductor IPs provide the core functionalities required for the development of processors capable of handling complex computations and multitasking operations. Whether you're developing systems for mobile devices, personal computers, or embedded systems, our IPs offer optimized solutions that cater to the varying demands of power consumption, processing speed, and operational efficiency. This ensures that you can deliver cutting-edge products that meet the market's evolving demands.
Within the CPU semiconductor IP category, you'll find a range of products including RISC (Reduced Instruction Set Computer) processors, multi-core processors, and customizable processor cores among others. Each product is designed to integrate seamlessly with other system components, offering enhanced compatibility and flexibility in system design. These IP solutions are developed with the latest architectural advancements and technological improvements to support next-generation computing needs.
Selecting the right CPU semiconductor IP is crucial for achieving target performance and efficiency in your applications. Our offerings are meticulously curated to provide comprehensive solutions that are robust, reliable, and capable of supporting diverse computing applications. Explore our CPU semiconductor IP portfolio to find the perfect components that will empower your innovative designs and propel your products into the forefront of technology.
Spec-TRACER is a robust requirements lifecycle management platform tailored for FPGA and ASIC projects. By focusing on seamless requirements capture, management, and traceability, it ensures that every stage of the design process stays aligned with the initial specifications. Its analytical features further enable a comprehensive evaluation of design progress, promoting efficiency and thoroughness throughout the development lifecycle.
The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI solutions. Specially crafted for edge devices such as home appliances and security cameras, these engines serve ultra-low-power applications that demand continuous sensing capabilities. By eliminating the need for external memory access, they keep data on-chip and secure while holding power consumption as low as 10-20 mW. The advanced packet-based architecture enhances performance by facilitating parallel layer execution, thereby optimizing resource utilization. Designed as a dedicated-function AI engine, Origin E1 is tailored to support specific neural networks efficiently while reducing silicon area and system cost. It supports a variety of neural networks, from CNNs to RNNs, making it versatile for numerous applications. The engine is also among the most power-efficient in the industry, delivering 18 TOPS per Watt. Origin E1 additionally offers a full TVM-based software stack for easy integration and performance optimization across customer platforms. It supports a wide array of data types and networks, sustaining power efficiency with utilization averaging 80%. This makes it a reliable choice for OEMs seeking high performance in always-sensing applications, with a competitive edge in both power efficiency and security.
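As a rough illustration of what those headline figures imply, the sketch below multiplies the quoted 18 TOPS/W efficiency by a 20 mW always-sensing budget and the 80% average utilization. This is back-of-the-envelope arithmetic on the vendor's own numbers, not a measured benchmark.

```python
# Illustrative arithmetic using the figures quoted above (18 TOPS/W efficiency,
# a 10-20 mW always-sensing power budget, ~80% average utilization).
# These are headline numbers from the description, not measured data.

TOPS_PER_WATT = 18.0      # quoted efficiency
POWER_BUDGET_W = 0.020    # 20 mW, upper end of the always-sensing budget
AVG_UTILIZATION = 0.80    # quoted average utilization

peak_tops = TOPS_PER_WATT * POWER_BUDGET_W        # 0.36 TOPS (360 GOPS)
sustained_tops = peak_tops * AVG_UTILIZATION      # ~0.29 TOPS sustained

print(f"Peak throughput at 20 mW:    {peak_tops * 1000:.0f} GOPS")
print(f"Sustained (80% utilization): {sustained_tops * 1000:.0f} GOPS")
```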
The Origin E8 NPUs represent Expedera's cutting-edge solution for environments demanding the utmost in processing power and efficiency. This high-performance core scales from 32 to 128 TOPS in single-core configurations, addressing complex AI tasks in automotive and data-centric operational settings. The E8's architecture stands apart due to its capability to handle multiple concurrent tasks without any compromise in performance. The unit adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the need for hardware-specific tweaks. The Origin E8 also supports input resolutions up to 8K and integrates well with both standard and custom neural networks, enhancing its utility in future-forward AI applications. Leveraging a flexible, scalable design, the E8 IP cores are backed by an extensive software suite that simplifies AI deployment. Field-proven and already deployed in a multitude of consumer vehicles, Expedera's Origin E8 provides a robust, reliable choice for developers needing optimized AI inference performance, ideally suited for data centers and high-performance automotive systems.
The Veyron V2 CPU takes the innovation witnessed in its predecessor and propels it further, offering unparalleled performance for AI and data center-class applications. This successor to the V1 CPU integrates seamlessly into environments requiring high computational power and efficiency, making it perfect for modern data challenges. Built upon RISC-V's architecture, it provides an open-standard alternative to traditional closed processor models. With a heavy emphasis on AI and machine learning workloads, Veyron V2 is designed to excel in handling complex data-centric tasks. This CPU can quickly adapt to multifaceted requirements, proving indispensable from enterprise servers to hyperscale data centers. Its superior design enables it to outperform many contemporary alternatives, positioning it as a leading component for next-generation computing solutions. The processor's adaptability allows for rapid and smooth integration into existing systems, facilitating quick upgrades and enhancements tailored to specific operational needs. As the Veyron V2 CPU is highly energy-efficient, it empowers data centers to achieve greater sustainability benchmarks without sacrificing performance.
The NaviSoC, a flagship product of ChipCraft, combines a GNSS receiver with an on-chip application processor, providing an all-in-one solution for high-precision navigation and timing applications. This product is designed to meet the rigorous demands of industries such as automotive, UAVs, and smart agriculture. One of its standout features is the ability to support all major global navigation satellite systems, offering versatile functionality for various professional uses. The NaviSoC is tailored for high efficiency, combining low power consumption with robust computational capability. Aimed at next-generation applications, the NaviSoC can be adapted to different tasks, making it a preferred choice across many industries. It integrates seamlessly into systems requiring precision and reliability, providing developers with a wide array of programmable peripherals and interfaces. The foundational design ethos of the NaviSoC revolves around minimizing power usage while ensuring high precision and accuracy, making it an ideal component for battery-powered and portable devices. Additionally, ChipCraft provides integrated software development tools and navigation firmware, ensuring that clients can capitalize on fast time-to-market for their products. The design of the NaviSoC takes a comprehensive approach, factoring in real-world application requirements such as temperature variation and environmental challenges, thus providing a resilient and adaptable product for diverse uses.
The SCR9 Processor Core is a cutting-edge processor designed for entry-level server-class and personal computing applications. Featuring a 12-stage dual-issue out-of-order pipeline, it supports robust RISC-V extensions including vector operations and a high-complexity memory system. This core is well-suited for high-performance computing, offering exceptional power efficiency with multicore coherence and the ability to integrate accelerators, making it suitable for areas like AI, ML, and enterprise computing.
The AX45MP is engineered as a high-performance processor that supports multicore architectures and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it is built on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The eSi-3200 is a compact 32-bit processor core created for low-power, high-efficiency scenarios. Designed for seamless integration into ASICs and FPGAs, it operates optimally in embedded control applications. The architecture supports up to 32 general-purpose registers and a broad instruction set aimed at minimizing power and resource usage. The processor features a continuous instruction pipeline and an optional floating-point unit, making it capable of handling complex arithmetic and signal processing tasks with ease. Its cacheless design provides the deterministic timing crucial for real-time control applications. The processor's capabilities are accentuated by robust debugging tools and AMBA-compliant connectivity, which facilitate straightforward system integration. This makes the eSi-3200 an ideal choice for engineers looking to design responsive and energy-efficient control systems.
Origin E2 NPUs focus on delivering power and area efficiency, making them ideal for on-device AI applications in smartphones and edge nodes. These processing units support a wide range of neural networks for video, audio, and text-based applications, all while maintaining impressive performance metrics. The unique packet-based architecture ensures effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series offers customization options allowing it to fit specific application needs, with configurations supporting up to 20 TOPS. This configurability increases processing efficiency without introducing latency penalties. Expedera's power-efficient design results in NPUs with industry-leading performance at 18 TOPS per Watt. Further augmenting the value of E2 NPUs is their ability to run multiple neural network types efficiently, including LLMs, CNNs, RNNs, and others. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability and effectiveness in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities while managing power and area constraints effectively.
The iniCPU IP core is a sophisticated processor module by Inicore designed to provide robust computational capabilities for system-on-chip developments. It integrates smoothly into a variety of applications, offering a balance of performance and low power consumption. This CPU core is adaptable, allowing for design scalability across multiple applications from consumer electronics to industrial automation. Engineered for efficiency, the iniCPU is structured to handle complex workloads with ease, serving as a pivotal component in integrated system solutions. Its architecture supports extensive interfacing capabilities, which ensures the core can be coupled with various peripherals, enhancing system-wide functionality and performance. Inicore's iniCPU stands out in its versatility, supporting seamless transition from FPGA prototyping to ASIC deployment. This flexibility shortens product development cycles and helps companies bring innovative products to market faster. The IP core’s robust design methodology ensures it meets stringent industry standards for reliability and performance.
The RV12 is a flexible RISC-V CPU designed for embedded applications. It stands as a single-core processor, compatible with RV32I and RV64I architectures, offering a configurable solution that adheres to the industry-standard RISC-V instruction set. The processor's Harvard architecture supports concurrent instruction and data memory accesses, optimizing its operation for a wide array of embedded tasks.
The eSi-1600 is a 16-bit central processing unit designed for cost-effective and low-power operation, making it ideal for integration into ASICs and FPGAs. It is engineered to deliver performance comparable to more advanced processors, with costs rivaling those of simpler CPUs. This processor is well-suited for control applications requiring under 64kB of memory in mixed-signal processes. With a RISC architecture comprising 16 or 32 general-purpose registers, the eSi-1600 supports optimal instruction density and execution efficiency. Its capabilities are leveraged through features like user-defined instructions and sophisticated interrupt handling, enhancing performance for various applications. The processor's efficient pipeline architecture allows for high-frequency operation, even in older process nodes, while minimizing power consumption. Comprehensive hardware debugging facilities, including JTAG support and an optional memory protection unit, accompany the eSi-1600. This makes it a flexible and efficient solution for applications needing robust control processing, serving as a bridge between 8-bit efficiency and higher performance 32-bit systems.
The High Performance RISC-V Processor from Cortus represents the forefront of high-end computing, designed for applications demanding exceptional processing speeds and throughput. It features an out-of-order execution core that supports both single-core and multi-core configurations for diverse computing environments. This processor specializes in handling complex tasks requiring multi-threading and cache coherency, making it suitable for applications ranging from desktops and laptops to high-end servers and supercomputers. It includes integrated vector and AI accelerators, enhancing its capability to manage intensive data-processing workloads efficiently. Furthermore, this RISC-V processor is adaptable for advanced embedded systems, including automotive central units and AI applications in ADAS, providing enormous potential for innovation and performance across various markets.
The Chimera GPNPU by Quadric is a versatile processor specifically designed to enhance machine learning inference tasks on a broad range of devices. It provides a seamless blend of traditional digital signal processing (DSP) and neural processing unit (NPU) capabilities, which allow it to handle complex ML networks alongside conventional C++ code. Designed with a focus on adaptability, the Chimera GPNPU architecture enables easy porting of various models and software application programming, making it a robust solution for rapidly evolving AI technologies. A key feature of the Chimera GPNPU is its scalable design, which extends from 1 to 864 TOPS, catering to applications from standard to advanced high-performance requirements. This scalability is coupled with its ability to support a broad range of ML networks, such as classic backbones, vision transformers, and large language models, fulfilling various computational needs across industries. The Chimera GPNPU also excels in automotive applications, including ADAS and ECU systems, due to its ASIL-ready design. The processor's hybrid architecture merges Von Neumann and 2D SIMD matrix capabilities, promoting efficient execution of scalar, vector, and matrix operations. It boasts a deterministic execution pipeline and extensive customization options, including configurable instruction caches and local register memories that optimize memory usage and power efficiency. This design effectively reduces off-chip memory accesses, ensuring high performance while minimizing power consumption.
Engineered for high-performance tasks, the eSi-3250 is a 32-bit processor core tailor-made for systems that demand significant computational power while working with slow memories. It is particularly well suited to designs where caching plays a pivotal role because high-latency memories, such as external flash, are involved. The core is equipped with configurable instruction and data caches, supports a wide range of interrupts, and efficiently accommodates user and supervisor modes. It supports the integration of a memory management unit for enhanced memory protection and virtual memory implementation. Delivering superior performance with a structured architectural design, the eSi-3250 is adept at managing both power and performance needs. It is widely applicable to areas needing enhanced processing capabilities within tightly controlled memory access environments.
The Origin E6 neural engines are built to push the boundaries of what's possible in edge AI applications. Supporting the latest in AI model innovations, such as generative AI and various traditional networks, the E6 scales from 16 to 32 TOPS, aimed at balancing performance, efficiency, and flexibility. This versatility is essential for high-demand applications in next-generation devices like smartphones, digital reality setups, and consumer electronics. Expedera’s E6 employs packet-based architecture, facilitating parallel execution that leads to optimal resource usage and eliminating the need for dedicated hardware optimizations. A standout feature of this IP is its ability to maintain up to 90% processor utilization even in complex multi-network environments, thus proving its robustness and adaptability. Crafted to fit various use cases precisely, E6 offers a comprehensive TVM-based software stack and is well-suited for tasks that require simultaneous running of numerous neural networks. This has been proven through its deployment in over 10 million consumer units. Its design effectively manages power and system resources, thus minimizing latency and maximizing throughput in demanding scenarios.
The RISC-V Hardware-Assisted Verification by Bluespec is designed to expedite the verification process for RISC-V cores. This platform supports both ISA and system-level testing, adding robust features such as verifying standard and custom ISA extensions along with accelerators. Moreover, it offers scalable access through the AWS cloud, making verification available anytime and anywhere. This tool aligns with the needs of modern developers, ensuring thorough testing within a flexible and accessible framework.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
InCore's RISC-V Core-hub Generators are a revolutionary tool designed to give developers unparalleled control over their SoC designs. These generators enable the customization of core-hub configurations down to the ISA and microarchitecture level, promoting a tailored approach to chip creation. Built around the robustness of the RISC-V architecture, the Core-hub Generators support versatile application needs, allowing designers to innovate without boundaries. The Core-hub concept is pivotal to speeding up SoC development by offering a framework of diverse cores and optimized fabric components, including essential RISC-V UnCore features like PLICs, Debug, and Trace components. This systemic flexibility ensures that each core hub aligns with specific customer requirements, providing a bespoke design experience that enhances adaptability and resource utilization. By integrating efficient communication protocols and optimized processing capabilities, InCore's Core-hub Generators foster seamless data exchange across modules. This is essential for developing next-gen semiconductor solutions that require both high performance and security. Whether used in embedded systems, high-performance industrial applications, or sophisticated consumer electronics, these generators stand as a testament to InCore's commitment to innovation and engineering excellence.
The xcore.ai platform by XMOS Semiconductor is a sophisticated and cost-effective solution aimed specifically at intelligent IoT applications. Harnessing a unique multi-threaded micro-architecture, xcore.ai provides superior low latency and highly predictable performance, tailored for diverse industrial needs. It is equipped with 16 logical cores divided across two multi-threaded processing tiles. The tiles are equipped with 512 kB of SRAM and a vector unit supporting both integer and floating-point operations, allowing them to handle both simple and complex computational demands efficiently. A key feature of the xcore.ai platform is its powerful interprocessor communication infrastructure, which enables seamless high-speed communication between processors and scalability across multiple systems on a chip. Within this homogeneous environment, developers can comfortably integrate DSP, AI/ML, control, and I/O functionalities, allowing the device to adapt to specific application requirements efficiently. Moreover, the software-defined architecture allows optimal configuration, reducing power consumption and achieving cost-effective intelligent solutions. The xcore.ai platform shows impressive DSP capabilities, thanks to a scalar pipeline that supports 32-bit floating-point operations with peak performance of up to 1600 MFLOPS. AI/ML capabilities are also robust, with support for vector operations at a range of bit depths, making the platform a strong contender for AI applications requiring homogeneous computing environments and exceptional operator integration.
The Titanium Ti375 FPGA from Efinix boasts a high-density, low-power configuration, ideal for numerous advanced computing applications. Built on the well-regarded Quantum compute fabric, this FPGA integrates a robust set of features including a hardened RISC-V block, SerDes transceiver, and LPDDR4 DRAM controller, enhancing its versatility in challenging environments. The Ti375 model is designed with an intuitive I/O interface, allowing seamless communication and data handling. Its innovative architecture ensures minimal power consumption without compromising on processing speed, making it highly suitable for portable and edge devices. The inclusion of MIPI D-PHY further expands its applications in image processing and high-speed data transmission tasks. This FPGA is aligned with current market demands, emphasizing efficiency and scalability. Its architecture allows for diverse design challenges, supporting applications that transcend traditional boundaries. Efinix’s commitment to delivering sophisticated yet energy-efficient solutions is embodied in the Titanium Ti375, enabling new possibilities in the realm of computing.
The Trion FPGA family by Efinix addresses the dynamic needs of edge computing and IoT applications. These devices range from 4K to 120K logic elements, balancing computational capability with efficient power usage for a wide range of general-purpose applications. Trion FPGAs are designed to empower edge devices with rapid processing capabilities and flexible interfacing. They support a diverse array of use-cases, from industrial automation systems to consumer electronics requiring enhanced connectivity and real-time data processing. Offering a pragmatic solution for designers, Trion FPGAs integrate seamlessly into existing systems, facilitating swift development and deployment. They provide unparalleled adaptability to meet the intricate demands of modern technological environments, thereby enabling innovative edge and IoT solutions to flourish.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic represents a significant advancement in energy-efficient computing. This core, operating at an astonishingly low 10 mW while running at 1 GHz, sets a new standard for low-power design in processors. Micro Magic's proprietary methods ensure that this core maintains high performance even at reduced voltages, making it a perfect fit for applications where power conservation is crucial. Micro Magic's RISC-V core is designed to deliver substantial computational power without the typical energy costs associated with traditional architectures. With capabilities that make it suitable for a wide array of high-demand tasks, this core leverages sophisticated design approaches to achieve unprecedented power efficiency. The core's impressive performance metrics are complemented by Micro Magic's specialized tools, which aid in integrating the core into larger systems. Whether for embedded applications or more demanding computational roles, the Ultra-Low-Power 64-Bit RISC-V Core offers a compelling combination of power and performance. The design's flexibility and power efficiency make it a standout among other processors, reaffirming Micro Magic's position as a leader in semiconductor innovation. This solution is poised to influence how future processors balance speed and energy usage significantly.
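To put the quoted operating point in perspective, the short sketch below converts 10 mW at 1 GHz into an energy-per-cycle figure. This is simple arithmetic on the two headline numbers above, not vendor-supplied characterization data.

```python
# Rough energy-per-cycle figure implied by the quoted operating point
# (10 mW at 1 GHz). Purely illustrative arithmetic.

power_w = 0.010     # 10 mW
clock_hz = 1e9      # 1 GHz

energy_per_cycle_j = power_w / clock_hz                    # joules per clock cycle
print(f"Energy per cycle: {energy_per_cycle_j * 1e12:.1f} pJ")   # -> 10.0 pJ
```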
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores are built on RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The Dynamic Neural Accelerator II by EdgeCortix is a pioneering neural network core that combines flexibility and efficiency to support a broad array of edge AI applications. Engineered with run-time reconfigurable interconnects, it facilitates exceptional parallelism and efficient data handling. The architecture supports both convolutional and transformer neural networks, offering optimal performance across varied AI use cases. This architecture vastly improves upon traditional IP cores by dynamically reconfiguring data paths, which significantly enhances parallel task execution and reduces memory bandwidth usage. By adopting this approach, the DNA-II boosts its processing capability while minimizing energy consumption, making it highly effective for edge AI applications that require high output with minimal power input. Furthermore, the DNA-II's adaptability enables it to tackle inefficiencies often seen in batching tasks across other IP ecosystems. The architecture ensures that high utilization and low power consumption are maintained across operations, profoundly impacting sectors relying on edge AI for real-time data processing and decision-making.
The Veyron V1 CPU represents an efficient, high-performance processor tailored to address a myriad of data center demands. As an advanced RISC-V architecture processor, it stands out by offering competitive performance compatible with the most current data center workloads. Designed to excel in efficiency, it marries performance with a sustainable energy profile, allowing for optimal deployment in various demanding environments. This processor brings flexibility to developers and data center operators by providing extensive customization options. Veyron V1's robust architecture is meant to enhance throughput and streamline operations, facilitating superior service provision across cloud infrastructures. Its compatibility with diverse integration requirements makes it ideal for a broad swath of industrial uses, encouraging scalability and robust data throughput. Adaptability is a key feature of Veyron V1 CPU, making it a preferred choice for enterprises looking to leverage RISC-V's open standards and extend the performance of their platforms. It aligns seamlessly with Ventana's broader ecosystem of products, creating excellence in workload delivery and resource management within hyperscale and enterprise environments.
Dream Chip Technologies' Arria 10 System on Module (SoM) targets embedded and automotive vision applications. Utilizing Altera's Arria 10 SoC devices, the SoM is compact yet packed with powerful capabilities. It features a dual-core Cortex-A9 CPU and up to 480K FPGA logic elements, providing ample space for customization and processing tasks. The module integrates robust power management features to ensure efficient energy usage, with interfaces for DDR4 memory, PCIe Gen3, Ethernet, and 12G SDI among others, housed in a form factor measuring just 8 cm by 6.5 cm. Engineered to support high-speed data processing, the Arria 10 SoM includes dual DDR4 memory interfaces and 12 transceivers at 12 Gbit/s and above. It provides comprehensive connectivity options, including two USB ports, Gigabit Ethernet, and multiple GPIOs with level-shifting capabilities. This level of integration makes it optimal for developing solutions for automotive systems, particularly in scenarios requiring high-speed data and image processing. Additionally, the SoM comes with a suite of reference designs, such as the Intel Arria 10 Golden System Reference Design, to expedite development cycles. This includes pre-configured HPS and memory controller IP, as well as customized U-Boot and Ångström Linux distributions, further enriching its utility in automotive and embedded domains.
The AON1020 expands AI processing capabilities to encompass not only voice and audio recognition but also a variety of sensor applications. It leverages the power of the AONSens Neural Network cores, offering a comprehensive solution that integrates Verilog RTL technology to support both ASIC and FPGA products. Key to the AON1020's appeal is its versatility in addressing various sensor data, such as human activity detection. This makes it indispensable in applications requiring nuanced responses to environmental inputs, from motion to gesture awareness. It deploys these capabilities while minimizing energy demands, aligning perfectly with the needs of battery-operated and wearable devices. By executing real-time analytics on device-stored data, the AON1020 ensures high accuracy in environments fraught with noise and user variability. Its architecture allows it to detect multiple commands simultaneously, enhancing device interaction while maintaining low power consumption. Thus, the AON1020 is not only an innovator in sensor data interaction but also a leader in ensuring extended device functionality without compromising energy efficiency or processing accuracy.
The Low Power RISC-V CPU IP from SkyeChip is crafted to deliver efficient computation with minimal power consumption. Featuring the RISC-V RV32 instruction set, it supports a range of functions with full standard compliance for instruction sets and partial support where necessary. Designed exclusively for machine mode, it incorporates multiple vectorized interrupts and includes comprehensive debugging capabilities. This CPU IP is well-suited for integration into embedded systems where power efficiency and processing capability are crucial.
The SiFive Essential family embodies a customizable range of processor cores, designed to fulfill various market-specific requirements. From small microcontroller units (MCUs) to more complex 64-bit processors capable of running operating systems, the Essential series provides flexibility in design and functionality. These processors support a diverse set of applications including IoT devices, real-time controls, and control plane processing. They offer scalable performance through sophisticated pipeline architectures, catering to both embedded and rich-OS environments. The Essential series offers advanced configurations which can be tailored to optimize for power and area footprint, making it suitable for devices where space and energy are limited. This aligns well with the needs of edge devices and other applications where efficiency and performance must meet in a balanced manner.
The Codasip RISC-V BK Core Series offers a versatile solution for a broad range of computing needs, from low-power embedded devices to high-performance applications. These cores leverage the open RISC-V instruction set architecture (ISA), enabling designers to take advantage of expansive customization options that optimize performance and efficiency. The BK Core Series supports high configurability, allowing users to adapt the microarchitecture and extend the instruction set based on specific application demands. Incorporating advanced features like zero-overhead loops and SIMD instructions, the BK Core Series is designed to handle computationally intensive tasks efficiently. This makes them ideal for applications in audio processing, AI, and other scenarios requiring high-speed data processing and computation. Additionally, these cores are rigorously verified to meet industry standards, ensuring robustness and reliability even in the most demanding environments. The BK Core Series also aligns with Codasip's focus on functional safety and security. These processors come equipped with features to bolster system reliability, helping prevent cyber threats and errors that could lead to system malfunctions. This makes the BK Core Series an excellent choice for industries that prioritize safety, such as automotive and industrial automation.
The Cortus ULYSS range of automotive microcontrollers is engineered to meet the demands of sophisticated automotive applications, extending from body control to ADAS and infotainment systems. Utilizing a RISC-V architecture, these microcontrollers provide high performance and efficiency suitable for automotive tasks. Each variant within the ULYSS family caters to specific automotive functions, with capabilities ranging from basic energy management to complex networking and ADAS processing. For instance, the ULYSS1 caters to body control applications with a single-core CPU, while the ULYSS3 provides robust networking capabilities with a quad-core, lockstep MPU operating up to 1.5 GHz. The ULYSS line is structured to offer scalability and flexibility, allowing automotive manufacturers to integrate these solutions seamlessly into various components of a vehicle's electronic system. This focus on adaptability helps Cortus provide both a cost-effective and high-performance solution for its automotive partners.
The eSi-1650 enhances low-power processing applications with integrated instruction caching capabilities. This 16-bit processor core is tailored to deliver high performance in mature process nodes where external memory technologies set clock speed limits. Utilizing OTP or Flash for program memory, the instruction cache minimizes power usage and reduces the need for large RAM shadows. Its architecture allows for an efficient system by supporting user-defined instructions and maintaining high code density, made possible through intermixed 16 and 32-bit instructions. The compact 5-stage pipelined core is designed to manage power consumption effectively, offering significant advantages over traditional larger bit-width processors. Hardware debugging and multiprocessor support are paired with configurable interfaces and peripherals to provide a comprehensive embedded system solution. The eSi-1650 is particularly adept in environments where reducing power consumption and increasing efficiency are crucial, serving applications from control systems to advanced computing tasks.
The eSi-3264 processor core provides advanced DSP functionality within a 32/64-bit architecture, enhanced by Single Instruction, Multiple Data (SIMD) operations. This high-performance CPU is crafted to excel in tasks demanding significant digital signal processing power, such as audio processing or motion control applications. It incorporates advanced SIMD DSP extensions and floating point support, optimizing the core for parallel data processing. The architecture supplies options for extensive custom configurations including instruction and data caches to tailor performance to the specific demands of high-speed and low-power operations. The eSi-3264's hardware debug capabilities combined with its versatile pipeline make it an ideal match for high-precision computing environments where performance and efficiency are crucial. Its ability to handle complex arithmetic operations efficiently with minimal silicon area further cements its position as a leading solution in DSP-focused applications.
The RISC-V CPU IP N Class from Nuclei System Technology is designed for microcontroller applications, integrating a 32-bit architecture ideal for AIoT environments. This product is noted for its flexibility, supporting various configurations to meet specific customization needs. It adheres to the RISC-V open standard and features a robust ecosystem, including tools like SDKs and RTOS/Linux, creating a full CPU-subsystem solution.
Avispado is a sophisticated 64-bit RISC-V core that emphasizes efficiency and adaptability within in-order execution frameworks. It's engineered to cater to energy-efficient SoC designs, making it an excellent choice for machine learning applications with its compact design and ability to seamlessly communicate with RISC-V Vector Units. By utilizing the Gazzillion Misses™ technology, the Avispado core effectively handles high sparsity in tensor weights, resulting in superior energy efficiency per operation. This core features a 2-wide in-order configuration and supports the RISC-V Vector Specification 1.0 as well as Semidynamics' Open Vector Interface. With support for large memory capacities, it includes complete MMU features and is Linux-ready, ensuring it's prepared for demanding computational tasks. The core's native CHI interface can be fine-tuned to AXI, promoting cache-coherent multiprocessing capabilities. Avispado is optimized for various demanding workloads, with optional extensions for specific needs such as bit manipulation and cryptography. The core's customizable configuration allows changes to its instruction and data cache sizes (I$ and D$ from 8KB to 32KB), ensuring it meets specific application demands while retaining operational efficiency.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating dependence on external host CPUs and on internet connectivity. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
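The tokens-per-unit-of-bandwidth claim can be made concrete with a standard rule of thumb for memory-bound LLM decoding: each generated token requires streaming roughly the full weight set from memory, so memory bandwidth divided by the quantized model footprint bounds the token rate. The sketch below applies that bound; the 3-billion-parameter model size and 12.8 GB/s LPDDR4 bandwidth are illustrative assumptions, not RaiderChip specifications.

```python
# Minimal sketch of the bandwidth-bound token-rate estimate described above.
# In autoregressive decoding, generating each token requires reading roughly
# the full weight set, so memory bandwidth caps the decode rate.
# The parameter count and LPDDR4 bandwidth are assumed example values.

def max_tokens_per_second(params: float, bits_per_weight: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on decode rate for a memory-bandwidth-bound accelerator."""
    weight_bytes = params * bits_per_weight / 8     # quantized model footprint
    return bandwidth_gb_s * 1e9 / weight_bytes      # tokens per second

# Assumed example: 3B-parameter model, 4-bit weights, ~12.8 GB/s LPDDR4 channel.
print(f"{max_tokens_per_second(3e9, 4, 12.8):.1f} tokens/s upper bound")
```

Under these assumptions the bound lands around 8-9 tokens/s per LPDDR4 channel, which is why tokens generated per unit of memory bandwidth is the figure of merit the paragraph highlights.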
Hanguang 800 is a sophisticated AI acceleration chip tailored for demanding neural network tasks. Developed with T-Head's cutting-edge technology, it excels in delivering high throughput for deep learning workloads. This chip employs a robust architecture optimized for AI computations, providing unprecedented performance improvements in neural network execution. It's particularly suited for scenarios requiring large-scale AI processing, such as image recognition and natural language processing. The chip's design facilitates the rapid conversion of highly complex AI models into real-time applications, enabling enterprises to harness the full potential of AI in their operations.
Optimized for high-performance tasks, the SCR7 Application Core is a 64-bit RISC-V processor with robust Linux capability. Tailored for powerful data-intensive applications, this core features a 12-stage out-of-order pipeline and supports vector operations, making it ideal for AI, ML, and high-performance computing applications. It integrates seamlessly with multicore environments, offering comprehensive memory management and high-level interrupt systems, facilitated by standard interfaces for broad compatibility.
TySOM boards are embedded system prototyping platforms that integrate high-performance FPGAs, including Xilinx Zynq and Microchip PolarFire SoCs. These boards are ideal for developing applications like automotive ADAS, AI, and industrial automation. By leveraging industry-standard interfaces, TySOM boards ensure versatility and wide compatibility, making them a cornerstone in rapid application development.
The Xinglian-500 Interconnect Fabric is a self-developed solution by StarFive that focuses on providing consistent memory coherence in multicore CPU and SoC implementations. This IP solution is pivotal in constructing multicore systems by connecting various CPU clusters, I/O devices, and DDR, ensuring efficient data management and communication within high-performance systems. It introduces a network-on-chip (NoC) mechanism that supports multiple CPU clusters, enhancing the overall system performance through streamlined communication paths. The Xinglian-500 is engineered to maintain memory coherence across the SoC environment, making it an invaluable component for developers looking to optimize multicore processing solutions. Due to its scalable architecture, the Xinglian-500 offers flexibility in configuration, readily adapting to the growing demands of computational efficiency. It is designed to support both consumer and enterprise-level applications, enabling lengthy and complex operations with enhanced bandwidth management and reduced latency.
Riviera-PRO caters to engineers striving to meet the demands of modern FPGA and SoC designs. This tool enhances testbench productivity while enabling extensive automation through its advanced simulation capabilities. Engineers can leverage its ability to simulate complex systems, ensuring optimal performance and minimizing errors in the design phase. Its robustness lies in its high-performance simulation engine, adept at handling varying levels of complexity.
aiWare represents a specialized hardware IP core designed for optimizing neural network performance in automotive AI applications. This neural processing unit (NPU) delivers exceptional efficiency for a spectrum of AI workloads, crucial for powering automated driving systems. Its design is focused on scalability and versatility, supporting applications ranging from L2 driver-assistance functions to complex multi-sensor L3+ systems, ensuring flexibility to accommodate evolving technological needs. The aiWare hardware is integrated with advanced features like industry-leading data bandwidth management and deterministic processing, ensuring high efficiency across diverse workloads. This makes it a reliable choice for automotive sectors striving for ASIL-B certification in safety-critical environments. aiWare's architecture utilizes patented dataflows to maximize performance while minimizing power consumption, critical in automotive scenarios where resource efficiency is paramount. Additionally, aiWare is supported by an innovative SDK that simplifies the development process through offline performance estimation and extensive integration tools. These capabilities reduce the dependency on low-level programming for neural network execution, streamlining development cycles and enhancing the adaptability of AI applications in automotive domains.
ALINT-PRO is designed for thorough analysis of RTL code, addressing common issues such as simulation mismatches and synthesis optimization. This tool ensures code portability and reuse by identifying potential problems early in the design phase, thus saving time and resources in subsequent stages. Its focus on optimizing design coding practices makes it indispensable for teams striving for efficient, error-free design workflows.
The FPGA-Modul Artix 7A100T-2C is built on the robust Xilinx Artix-7 platform, featuring the XC7A100T-2CSG324C FPGA. It incorporates 100k logic cells and supports DDR3 interfaces, making it adept for applications that require cost-effective performance, such as industrial automation and digital signal processing. The module's design emphasizes a low power consumption profile and a mid-range DSP capability, ensuring balanced processing power and energy efficiency. It is suitable for use in education sectors as well as commercial deployments, where moderate performance is needed without incurring the costs of high-end FPGAs.
Bluespec's Portable RISC-V Cores offer a versatile and adaptable solution for developers seeking cross-platform compatibility with support for FPGAs from Achronix, Xilinx, Lattice, and Microsemi. These cores come with support for operating systems like Linux and FreeRTOS, providing developers with a seamless and open-source toolset for application development. By leveraging Bluespec’s extensive compatibility and open-source frameworks, developers can benefit from efficient, versatile RISC-V application deployment.
SiFive Automotive E6-A reflects a significant step forward for RISC-V in the automotive domain, emphasizing safety and performance. The E6-A series is designed with automotive-grade robustness, adhering to modern functional safety standards such as ISO 26262 ASIL B. This series is tailored to meet the stringent demands of contemporary automotive applications, including advanced driver-assistance systems (ADAS) and infotainment. The 32-bit E6-A processors are particularly optimized for achieving balanced power and performance metrics, critical for real-time in-vehicle applications. These processors support functional safety requirements and are built to operate effectively within the constrained environments typical of automotive systems. With support for secure, deterministic processing, the E6-A series fosters enhanced security and performance in vehicles, ensuring that these processes can be executed reliably and efficiently. SiFive collaborates closely with automotive OEMs to ensure their solutions align well with the industry's future technology roadmaps.
Designed for applications requiring exceptional energy efficiency and computational effectiveness, the Tianqiao-80 High-Efficiency 64-bit RISC-V CPU provides a robust solution for modern computing needs. Tailored for high-performance scenarios, this CPU core offers considerable advantages in both mobile and desktop environments, meeting the increasing demands for intelligent and responsive technology. The Tianqiao-80 features an innovative design that enhances processing efficiency, making it an ideal fit for applications such as artificial intelligence, automotive systems, and desktop computing. With its 64-bit architecture, the core efficiently manages resource-intensive tasks while maintaining competitive power usage, thus delivering enhanced operational effectiveness. This processor is also characterized by its ability to integrate seamlessly into diverse computing ecosystems, supporting high-performance interfaces and rapid data processing. Its architectural enhancements ensure that it meets the needs of modern computing, providing a reliable and versatile option for developers working across a wide spectrum of digital technologies.
Designed with an 8051 instruction set architecture, the Y51 operates on a 2-clock machine cycle. This design optimizes processing efficiency and supports a wide range of applications that require precise computing performance combined with low power consumption.
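For context, the sketch below compares the Y51's 2-clock machine cycle with the classic 8051's 12-clock machine cycle at an assumed 24 MHz clock; the clock frequency is an example value for illustration, not a Y51 specification.

```python
# Illustrative comparison of machine-cycle throughput: the Y51's 2-clock
# machine cycle versus the original 8051's 12-clock machine cycle, at an
# assumed 24 MHz clock (example value, not a Y51 specification).

clock_hz = 24e6                          # assumed example clock

y51_cycles_per_sec = clock_hz / 2        # Y51: 2 clocks per machine cycle
classic_cycles_per_sec = clock_hz / 12   # original 8051: 12 clocks per cycle

print(f"Y51:          {y51_cycles_per_sec / 1e6:.0f} M machine cycles/s")
print(f"Classic 8051: {classic_cycles_per_sec / 1e6:.0f} M machine cycles/s")
```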