In the ever-evolving landscape of semiconductor technologies, processor core independent IPs play a crucial role in designing flexible and scalable digital systems. These IPs deliver functionality that does not depend on any specific processor core, making them invaluable for applications where flexibility and reusability are paramount.
Processor core independent semiconductor IPs are tailored to function across different processor architectures, avoiding the constraints tied to any one specific core. This characteristic is particularly beneficial in embedded systems, where designers aim to balance cost, performance, and power efficiency while ensuring seamless integration. These IPs provide solutions that accommodate diverse processing requirements, from small-scale embedded controllers to large-scale data centers, making them essential components in the toolkit of semiconductor design engineers.
Products in this category often include memory controllers, I/O interfaces, and various digital signal processing blocks, each designed to operate autonomously from the central processor's architecture. This independence allows manufacturers to leverage these IPs in a broad array of devices, from consumer electronics to automotive systems, without the need for extensive redesigns for different processor families. Moreover, the flexibility of processor core independent IPs significantly accelerates time-to-market, offering a competitive edge in fast-paced industry environments.
Furthermore, the adoption of processor core independent IPs supports the development of customized, application-specific integrated circuits (ASICs) and system-on-chips (SoCs) that require unique configurations, without the overhead of processor-specific dependencies. By embracing these advanced semiconductor IPs, businesses can ensure that their devices are future-proof, scalable, and capable of integrating new functionalities as technologies advance without being hindered by processor-specific limitations. This adaptability positions processor core independent IPs as a cornerstone of modern semiconductor design and innovation.
Panmnesia's CXL 3.1 Switch is a pivotal component in networking a vast array of CXL-enabled devices, setting the bar with its exceptional scalability and diverse connectivity. The switch supports seamless integration of hundreds of devices including memory, CPUs, and accelerators, facilitating flexible, high-performance configurations suited to demanding applications in data centers and beyond. Panmnesia's design enables easy scalability and efficient memory node expansion, reflecting their dedication to resource-efficient memory management. The CXL 3.1 Switch features a robust architecture that supports a wide array of network topologies, allowing for multi-level switching and complex node configurations. Its design addresses the unique challenges of composable server architecture, enabling fine-grained resource allocation. The switch leverages Panmnesia's proprietary CXL technology, underpinning its ability to perform management tasks across integrated memory spaces with minimal overhead, crucial for achieving high-speed, low-latency data exchange. Incorporating CXL standards, it is fully compatible with both legacy and next-generation devices, ensuring broad interoperability. The architecture allows servers to tailor resource availability by employing type-specific CXL features, such as port-based routing and multi-level switching. These features empower operators with the tools to configure extensive networks of diverse devices efficiently, thereby maximizing data center performance while minimizing costs.
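As a rough illustration of how port-based routing and multi-level switching compose, the following sketch models a two-level switch fabric in which each switch forwards a memory address to a downstream port by address range. All names, address ranges, and the data structures are hypothetical, illustrative choices; this is not Panmnesia's implementation.

```python
# Illustrative toy model of multi-level switching with port-based routing.

class ToySwitch:
    """A switch that forwards traffic to a downstream port by address range."""
    def __init__(self, name):
        self.name = name
        self.routes = []  # (base, limit, port) entries, analogous to a routing table

    def add_route(self, base, limit, port):
        self.routes.append((base, limit, port))

    def route(self, addr):
        for base, limit, port in self.routes:
            if base <= addr < limit:
                return port
        return None

# Two-level fabric: a root switch fans out to two leaf switches,
# each exposing a slice of a shared memory space.
root, leaf0, leaf1 = ToySwitch("root"), ToySwitch("leaf0"), ToySwitch("leaf1")
root.add_route(0x0000_0000, 0x4000_0000, leaf0)   # first 1 GiB -> leaf0
root.add_route(0x4000_0000, 0x8000_0000, leaf1)   # second 1 GiB -> leaf1
leaf0.add_route(0x0000_0000, 0x4000_0000, "mem-device-A")
leaf1.add_route(0x4000_0000, 0x8000_0000, "mem-device-B")

def resolve(addr):
    """Walk the switch hierarchy until a terminal device name is reached."""
    hop = root
    while isinstance(hop, ToySwitch):
        hop = hop.route(addr)
    return hop

# resolve(0x1000) reaches "mem-device-A" via root -> leaf0.
```

The point of the sketch is that adding a switch level only adds routing-table entries, not changes to the endpoints, which is what makes multi-level topologies scale.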
Exostiv offers comprehensive functionality for in-depth monitoring and capturing of internal FPGA signals at operational speeds. It enables engineers to conduct precise and cost-effective analysis of their FPGA designs in realistic environments, overcoming the limitations of traditional simulation approaches. By providing extensive data capture abilities, Exostiv is an essential tool for minimizing engineering costs and ensuring the highest levels of design integrity. Exostiv is broadly compatible, supporting a wide range of FPGA devices and ensuring adaptability to various prototyping boards. Its integration is bolstered by a range of connector options—including QSFP28 and Samtec ARF-6—providing small-footprint solutions ideal for space-constrained configurations. With impressive data rates and bandwidth options, Exostiv propels performance analysis to new heights by allowing accurate trace capture and design visualization at speed. Engineers benefit from Exostiv’s ability to perform real-time signal monitoring directly on FPGA prototypes. This leads to substantial reductions in potential bugs reaching production, as the tool highlights discrepancies that might not be visible during simulations. Whether used for debugging or for SoC pre-production testing, Exostiv plays a vital role in streamlining engineering workflows, offering a blend of ease-of-use and powerful capabilities to address the most demanding validation scenarios.
The Origin E1 is an optimized neural processing unit (NPU) targeting always-on applications in devices like home appliances, smartphones, and security cameras. It provides a compact, energy-efficient solution with performance tailored to 1 TOPS, making it ideal for systems needing low power and minimal area. The architecture is built on Expedera's unique packet-based approach, which enables enhanced resource utilization and deterministic performance, significantly boosting efficiency while avoiding the pitfalls of traditional layer-based architectures. The architecture is fine-tuned to support standard and custom neural networks without requiring external memory, preserving privacy and ensuring fast processing. Its ability to process data in parallel across multiple layers results in predictable performance with low power and latency. Always-sensing cameras leveraging the Origin E1 can continuously analyze visual data, facilitating smoother and more intuitive user interactions. Successful field deployment in over 10 million devices highlights the Origin E1's reliability and effectiveness. Its flexible design allows for adjustments to meet the specific PPA requirements of diverse applications. Offered as Soft IP (RTL) or GDS, this engine is a blend of efficiency and capability, capitalizing on the full scope of Expedera's software tools and custom support features.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The PCIe AI Accelerator Card powered by Metis AIPU offers unparalleled AI inference performance suitable for intensive vision applications. Incorporating a single quad-core Metis AIPU, it provides up to 214 TOPS, efficiently managing high-volume workloads with low latency. The card is further enhanced by the Voyager SDK, which streamlines application deployment, offering an intuitive development experience and ensuring simple integration across various platforms. Whether for real-time video analytics or other demanding AI tasks, the PCIe Accelerator Card is designed to deliver exceptional speed and precision.
The Origin E8 NPU by Expedera is engineered for the most demanding AI deployments such as automotive systems and data centers. Capable of delivering up to 128 TOPS per core and scalable to PetaOps with multiple cores, the E8 stands out for its high performance and efficient processing. Expedera's packet-based architecture allows for parallel execution across varying layers, optimizing resource utilization, and minimizing latency, even under strenuous conditions. The E8 handles complex AI models, including large language models (LLMs) and standard machine learning frameworks, without requiring significant hardware-specific changes. Its support extends to 8K resolutions and beyond, ensuring coverage for advanced visualization and high-resolution tasks. With its low deterministic latency and minimized DRAM bandwidth needs, the Origin E8 is especially suitable for high-performance, real-time applications. The high-speed processing and flexible deployment benefits make the Origin E8 a compelling choice for companies seeking robust and scalable AI infrastructure. Through customized architecture, it efficiently addresses the power, performance, and area considerations vital for next-generation AI technologies.
The Digital Connected Solutions offered by KPIT are designed to bridge the gap between in-vehicle systems and the connected world. Emphasizing personalization and safety, these solutions feature a variety of interactive and data-driven technologies including augmented reality head-up displays, gesture controls, and AI-based customization. KPIT's framework ensures seamless integration of these technologies, transforming vehicles into intelligent, user-centric marketplaces.
The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and a power amplifier (PA) to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The RISC-V Core-hub Generators are sophisticated tools designed to empower developers with complete control over their processor configurations. These generators allow users to customize their core-hubs at both the Instruction Set Architecture (ISA) and microarchitecture levels, offering unparalleled flexibility and adaptability in design. Such capabilities enable fine-tuning of processor specifications to meet specific application needs, fostering innovation within the RISC-V ecosystem. By leveraging the Core-hub Generators, developers can streamline their chip design process, ensuring efficient and seamless integration of custom features. This toolset not only simplifies the design process but also reduces time-to-silicon, making it ideal for industries seeking rapid advancements in their technological capabilities. The user-friendly interface and robust support of these generators make them a preferred choice for developing cutting-edge processors. InCore Semiconductors’ RISC-V Core-hub Generators represent a significant leap forward in processor design technology, emphasizing ease of use, cost-effectiveness, and scalability. As demand for tailored and efficient processors grows, these generators are set to play a pivotal role in shaping the future of semiconductor design, driving innovation across multiple sectors.
The Chimera GPNPU is a general-purpose neural processing unit designed to address key challenges faced by system on chip (SoC) developers when deploying machine learning (ML) inference solutions. It boasts a unified processor architecture capable of executing matrix, vector, and scalar operations within a single pipeline. This architecture integrates the functions of a neural processing unit (NPU), digital signal processor (DSP), and other processors, which significantly simplifies code development and hardware integration. The Chimera GPNPU can manage various ML networks, including classical frameworks, vision transformers, and large language models, all within a single processor framework. Its flexibility allows developers to optimize performance across different applications, from mobile devices to automotive systems. The GPNPU family is fully synthesizable, making it adaptable to a range of performance requirements and process technologies, ensuring long-term viability and adaptability to changing ML workloads. The Chimera GPNPU's sophisticated design includes a hybrid Von Neumann and 2D SIMD matrix architecture, predictive power management, and advanced memory optimization techniques, including an L2 cache. These features help reduce power usage and enhance performance by enabling the processor to efficiently handle complex neural network computations and DSP algorithms. By merging the best qualities of NPUs and DSPs, the Chimera GPNPU establishes a new benchmark for performance in AI processing.
xcore.ai is a versatile platform specifically crafted for the intelligent IoT market. It hosts a unique architecture with multi-threading and multi-core capabilities, ensuring low latency and high deterministic performance in embedded AI applications. Each xcore.ai chip contains 16 logical cores organized in two multi-threaded processor 'tiles' equipped with 512kB of SRAM and a vector unit for enhanced computation, enabling both integer and floating-point operations. The design accommodates extensive communication infrastructure within and across xcore.ai systems, providing scalability for complex deployments. Integrated with embedded PHYs for MIPI, USB, and LPDDR, xcore.ai is capable of handling a diverse range of application-specific interfaces. Leveraging its flexibility in software-defined I/O, xcore.ai offers robust support for AI, DSP, and control processing tasks, making it an ideal choice for enhancing IoT device functionalities. With its support for FreeRTOS, C/C++ development environment, and capability for deterministic processing, xcore.ai guarantees precision in performance. This allows developers to partition xcore.ai threads optimally for handling I/O, control, DSP, and AI/ML tasks, aligning perfectly with the specific demands of various applications. Additionally, the platform's power optimization through scalable tile clock frequency adjustment ensures cost-effective and energy-efficient IoT solutions.
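The thread-partitioning idea above can be sketched as a static assignment of the 16 logical cores (two tiles of eight hardware threads each, per the description) to workload classes. The role split below is a hypothetical example for illustration, not a vendor-recommended layout.

```python
# Illustrative sketch: statically partitioning 16 logical cores
# (2 tiles x 8 hardware threads) across I/O, control, DSP, and AI/ML roles.
# The tile/thread counts come from the product description; the role split
# is a made-up example.

TILES = 2
THREADS_PER_TILE = 8

def partition(roles):
    """Assign each (role, count) request to concrete (tile, thread) slots."""
    slots = [(t, th) for t in range(TILES) for th in range(THREADS_PER_TILE)]
    if sum(n for _, n in roles) > len(slots):
        raise ValueError("more threads requested than available")
    plan, i = {}, 0
    for role, n in roles:
        plan[role] = slots[i:i + n]
        i += n
    return plan

# A hypothetical split that consumes all 16 slots: 4 + 2 + 5 + 5 == 16.
plan = partition([("io", 4), ("control", 2), ("dsp", 5), ("ai", 5)])
```

Because the assignment is static and each hardware thread runs deterministically, a layout like this can be reasoned about at design time rather than profiled after the fact.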
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
The Origin E2 from Expedera is engineered to perform AI inference with a balanced approach, excelling under power and area constraints. This IP is strategically designed for devices ranging from smartphones to edge nodes, providing up to 20 TOPS performance. It features a packet-based architecture that enables parallel execution across layers, improving resource utilization and performance consistency. The engine supports a wide variety of neural networks, including transformers and custom networks, ensuring compatibility with the latest AI advancements. Origin E2 caters to high-resolution video and audio processing up to 4K, and is renowned for its low latency and enhanced performance. Its efficient structure keeps power consumption down, helping devices run demanding AI tasks more effectively than with conventional NPUs. This architecture ensures a sustainable reduction in the dark silicon effect while maintaining high operating efficiencies and accuracy thanks to its TVM-based software support. Deployed successfully in numerous smart devices, the Origin E2 guarantees power efficiency sustained at 18 TOPS/W. Its ability to deliver exceptional quality across diverse applications makes it a preferred choice for manufacturers seeking robust, energy-conscious solutions.
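A rough implication of the quoted numbers: at 18 TOPS/W, delivering the full 20 TOPS would draw on the order of 1.1 W. Both figures come from the description above; actual power depends on workload, clocks, and process node.

```python
# Back-of-the-envelope: power implied by the quoted throughput and efficiency.
tops = 20.0           # peak throughput quoted above
tops_per_watt = 18.0  # sustained efficiency quoted above

power_w = tops / tops_per_watt
# -> roughly 1.1 W at full throughput
```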
Time-Triggered Ethernet is a specialized communication protocol developed to incorporate the deterministic properties of traditional time-triggered systems within the robust and widely used Ethernet networking technology. It serves industries that require high precision and reliable data transmissions, like aerospace and automotive systems, where safety is paramount and timing is critical. This protocol extends conventional Ethernet by adding timestamping and scheduling features, enabling precise control over data transmission times. By doing so, it ensures that data packets are transmitted predictably within fixed timeslots, providing a network solution that combines the widespread adoption of Ethernet with high determinism demands. Time-Triggered Ethernet thus bridges the gap between standard Ethernet's flexibility and the strict timing requirements of critical systems. Applications of Time-Triggered Ethernet span from integrating advanced avionics systems to enabling reliable communication in autonomous vehicle networks. Its design supports modularity and scalability, allowing it to adapt as systems become more complex or requirements change, without sacrificing the precise timing and reliability essential for real-time communications in critical applications.
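The fixed-timeslot idea can be sketched as a simple TDMA-style schedule in which each sender owns a fixed slot within a repeating cluster cycle, so time-triggered transmissions never contend for the wire. Slot width, cycle length, and sender names below are made-up example values, not part of any TTEthernet specification.

```python
# Illustrative sketch of a static time-triggered schedule.
CYCLE_US = 1000   # cluster cycle length in microseconds (example value)
SLOT_US = 125     # fixed slot width (example value)

senders = ["flight-ctrl", "nav", "sensors", "actuators"]
# Each sender transmits at a fixed offset within every cycle.
schedule = {name: i * SLOT_US for i, name in enumerate(senders)}

def slot_owner(t_us):
    """Which sender owns the wire at time t (modulo one cluster cycle)?"""
    slot = (t_us % CYCLE_US) // SLOT_US
    # Unassigned tail slots are left free, e.g. for best-effort traffic.
    return senders[slot] if slot < len(senders) else None
```

Because ownership is a pure function of time, every node that shares the synchronized clock can independently agree on who transmits when, which is the source of the protocol's determinism.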
Expedera's Origin E6 NPU is crafted to enhance AI processing capabilities in cutting-edge devices such as smartphones, AR/VR headsets, and automotive systems. It offers scalable performance from 16 to 32 TOPS, adaptable to various power and performance needs. The E6 leverages Expedera's packet-based architecture, known for its highly efficient execution of AI tasks, enabling parallel processing across multiple workloads. This results in better resource management and higher performance predictability. Focusing on both traditional and new AI networks, Origin E6 supports large language models as well as complex data processing tasks without requiring additional hardware optimizations. Its comprehensive software stack, based on TVM, simplifies the integration of trained models into practical applications, providing seamless support for mainstream frameworks and quantization options. Origin E6's deployment reflects meticulous engineering, optimizing memory usage and processing latency for optimal functionality. It is designed to tackle challenging AI applications in a variety of demanding environments, ensuring consistent high-performance outputs and maintaining superior energy efficiency for next-generation technologies.
The Yitian 710 Processor stands as a flagship ARM-based server processor spearheaded by T-Head, featuring an intricate architecture designed by the company itself. Utilizing advanced multi-core technology, the processor incorporates up to 128 high-performance ARMv9 CPU cores, each complete with its own substantial cache for enhanced data access speed. The processor is adeptly configured to handle intensive computing tasks, supported by a robust off-chip memory system with 8-channel DDR5, reaching peak bandwidths up to 281GB/s. An impressive I/O subsystem featuring PCIe 5.0 interfaces facilitates extensive data throughput capabilities, making it highly suitable for high-demand applications. Compliant with modern energy efficiency standards, the processor boasts innovative multi-die packaging to maintain optimal heat dissipation, ensuring uninterrupted performance in data centers. This processor excels in cloud services, big data computations, video processing, and AI inference operations, offering the speed and efficiency required for next-generation technological challenges.
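As a back-of-the-envelope check of the quoted peak bandwidth, assume each DDR5 channel runs at 4400 MT/s over a 64-bit (8-byte) bus. The transfer rate is an assumption for illustration; only the channel count and the ~281 GB/s figure come from the description above.

```python
# Peak DRAM bandwidth = channels x transfer rate x bytes per transfer.
channels = 8                 # 8-channel DDR5, as described
transfers_per_sec = 4400e6   # DDR5-4400 (assumed transfer rate)
bytes_per_transfer = 8       # 64-bit channel width (assumed)

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
# -> 281.6 GB/s, consistent with the quoted ~281 GB/s
```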
The Dynamic Neural Accelerator II (DNA-II) is an advanced IP core that elevates neural processing capabilities for edge AI applications. It is adaptable to various systems, exhibiting remarkable efficiency through its runtime reconfigurable interconnects, which aid in managing both transformer and convolutional neural networks. Designed for scalability, DNA-II supports numerous applications ranging from 1k MACs to extensive SoC implementations. DNA-II's architecture enables optimal parallelism by dynamically managing data paths between compute units, ensuring minimized on-chip memory bandwidth and maximizing operational efficiency. Paired with the MERA software stack, it provides seamless integration and optimization of neural network tasks, significantly enhancing computation ordering and resource distribution. Its applicability extends across various industry demands, massively increasing the operational efficiency of AI tasks at the edge. DNA-II, the pivotal force in the SAKURA-II Accelerator, brings innovative processing strength in compact formats, driving forward the development of edge-based generative AI and other demanding applications.
GSHARK is a high-performance GPU IP designed to accelerate graphics on embedded devices. Known for its extreme power efficiency and seamless integration, this GPU IP significantly reduces CPU load, making it ideal for use in devices like digital cameras and automotive systems. Its remarkable track record of over one hundred million shipments underscores its reliability and performance. Engineered with TAKUMI's proprietary architecture, GSHARK integrates advanced rendering capabilities. This architecture supports real-time, on-the-fly graphics processing similar to that found in PCs, smartphones, and gaming consoles, ensuring a rich user experience and efficient graphics applications. This IP excels in environments where power consumption and performance balance are crucial. GSHARK is at the forefront of embedded graphics solutions, providing significant improvements in processing speed while maintaining low energy usage. Its architecture easily handles demanding graphics rendering tasks, adding considerable value to any embedded system it is integrated into.
The Talamo SDK is a comprehensive software development toolkit designed to facilitate the creation and deployment of advanced neuromorphic AI models on Innatera's Spiking Neural Processor (SNP). It integrates into the PyTorch environment, allowing developers to seamlessly build, train, and deploy neural network models, enhancing flexibility and accessibility in developing AI applications tailored to specific needs without requiring detailed expertise in neuromorphic computing. Talamo enhances the development workflow by offering standard and custom function integration, model zoo access, and application pipeline construction. The SDK provides profiling and optimization tools to ensure applications are both efficient and performant, allowing for quick design iterations. It also includes a hardware architecture simulator, enabling developers to validate and iterate their designs rapidly before implementation on actual hardware. With the Talamo SDK, developers can exploit the SNP's heterogeneous computing capabilities, utilizing its diverse architectural elements to optimize application performance. Additionally, its support for end-to-end application development, without the necessity of deep SNN knowledge, allows for broader reach and application, from research to industrial solutions.
The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles vast amounts of sensor data necessary for complete autonomous driving and supports rapid AI computing at the edge. One of the key challenges it addresses is providing massive compute power combined with low-latency outputs, achieving what traditional architectures cannot in terms of energy efficiency and speed. Tyr chips are built around robust safety mechanisms, being ISO 26262 and ASIL-D ready, making them ideally suited for the critical standards required in automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50W for higher-end tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA’s Superchip is a testament to their innovative prowess, promising unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA’s position as a forward-thinking leader in semiconductor technology.
The Ultra-Low-Power 64-Bit RISC-V Core developed by Micro Magic, Inc. is a highly efficient processor designed to deliver robust performance while maintaining minimal power consumption. This core can reach operating frequencies of 5GHz while consuming only 10mW when running at 1GHz, making it an ideal solution for applications where energy efficiency is critical. The design leverages innovative techniques to sustain high performance with low voltage operation, ensuring that it can handle demanding processing tasks with reliability. This RISC-V core showcases Micro Magic's expertise in providing high-speed silicon solutions without compromising on power efficiency. It is particularly suited for applications that require both computational prowess and energy conservation, making it an optimal choice for modern SoC (System-on-Chip) designs. The core's architecture is crafted to support a wide range of high-performance computing requirements, offering flexibility and adaptability across various applications and industries. Its integration into larger systems can significantly enhance the overall energy efficiency and speed of electronic devices, contributing to advanced technological innovations.
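The quoted figures imply a simple energy-per-cycle number: 10 mW at 1 GHz works out to 10 pJ per clock cycle (power divided by frequency).

```python
# Energy per cycle = power / frequency, from the figures quoted above.
power_w = 10e-3   # 10 mW while running at 1 GHz
freq_hz = 1e9     # 1 GHz

energy_per_cycle_pj = power_w / freq_hz * 1e12
# -> 10 pJ per clock cycle
```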
Digital Media Professionals offers ZIA Stereo Vision, a robust solution for achieving high-accuracy depth estimation through stereoscopic imaging. Suited for applications in robotics and automated systems, ZIA SV deploys a sophisticated semi-global matching algorithm to derive reliable distance measurements from stereo camera inputs. This IP core excels in pre-processing and post-processing steps to optimize depth map accuracy. By supporting grayscale images of up to 8-bit depth and providing detailed disparity maps, it forms the backbone for various machine vision tasks. Built with efficiency in mind, ZIA SV supports AMBA AXI4 interface, ensuring seamless integration within high throughput data environments. Ideal for autonomous navigation systems, ZIA SV facilitates accurate real-time depth perception. This capability, combined with its minimal resource footprint, makes it a preferred choice for compact, power-sensitive applications needing reliable stereo vision processing.
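As a simplified stand-in for the matching step (real semi-global matching additionally aggregates smoothness costs along multiple paths across the image), the sketch below picks a per-pixel disparity from two rectified scanlines by winner-take-all sum-of-absolute-differences block matching. All data and parameter values are illustrative.

```python
# Simplified 1-D disparity estimation via SAD block matching; a toy
# illustration of the matching cost inside stereo pipelines, not DMP's
# semi-global matching implementation.
import numpy as np

def disparity_row(left, right, max_disp, win=2):
    """Per-pixel disparity for one rectified row pair (8-bit grayscale)."""
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    n = len(left)
    disp = np.zeros(n, dtype=np.int32)
    for x in range(win, n - win):
        best, best_cost = 0, None
        for d in range(min(max_disp, x - win) + 1):
            # Sum of absolute differences over a small matching window.
            cost = np.abs(left[x - win:x + win + 1]
                          - right[x - d - win:x - d + win + 1]).sum()
            if best_cost is None or cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# A feature shifted right by 3 pixels between the views should yield
# disparity 3 at the feature's location.
row_r = np.array([0, 0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0], dtype=np.uint8)
row_l = np.roll(row_r, 3)
d = disparity_row(row_l, row_r, max_disp=5)
```

Semi-global matching replaces this purely local winner-take-all step with path-wise cost aggregation, which is what gives it robustness in textureless regions at modest extra compute.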
The SAKURA-II AI Accelerator stands out as a high-performance, energy-efficient edge co-processor designed to handle advanced AI tasks. Tailored for real-time, batch-one AI inferencing, it supports multi-billion parameter models, such as Llama 2 and Stable Diffusion, while maintaining low power consumption. The core technology leverages a dynamic neural accelerator for runtime reconfigurability and exceptional parallel processing, making it ideal for edge-based generative AI applications. With its flexible architecture, SAKURA-II facilitates the seamless execution of diverse AI models concurrently, without compromising on efficiency or speed. Integrated with the MERA compiler framework, it ensures easy deployment across various hardware systems, supporting frameworks like PyTorch and TensorFlow Lite for seamless integration. This AI accelerator excels in AI models for vision, language, and audio, fostering innovative content creation across these domains. Moreover, SAKURA-II supports a robust DRAM bandwidth, far surpassing competitors, ensuring superior performance for large language and vision models. It offers support for significant neural network demands, making it a powerful asset for developers in the edge AI landscape.
The SiFive Essential family of processors caters to a wide range of applications with its highly customizable IP cores. Designed to meet specific market needs, these processors range from small, power-efficient designs for microcontrollers to fully-featured, Linux-capable superscalar processors. Featuring configurable microarchitectures, the Essential family provides scalability in performance and power, making it an ideal choice for diverse applications from IoT to real-time control. The Essential series is highly adaptable, with series like the 2-Series offering power and area optimization through a 2-4 stage pipeline for embedded solutions. In contrast, the 6-Series and 7-Series provide extensive configurability with their 8-stage pipelines suited for more demanding applications requiring 32/64-bit capable systems. Emphasizing flexibility, the Essential family enables a mix-and-match capability, allowing designers to combine deterministic real-time processors with higher performance application cores. With built-in Trace + Debug capabilities and SoC security features, the Essential processors provide an integrated solution for efficient and secure processing.
aiWare stands out as a premier hardware IP for high-performance neural processing, tailored for complex automotive AI applications. By offering exceptional efficiency and scalability, aiWare empowers automotive systems to harness the full power of neural networks across a wide variety of functions, from Advanced Driver Assistance Systems (ADAS) to fully autonomous driving platforms. It boasts an innovative architecture optimized for both performance and energy efficiency, making it capable of handling the rigorous demands of next-generation AI workloads. The aiWare hardware features an NPU designed to achieve up to 256 Effective Tera Operations Per Second (TOPS), delivering high performance at significantly lower power. This is made possible through a thoughtfully engineered dataflow and memory architecture that minimizes the need for external memory bandwidth, thus enhancing processing speed and reducing energy consumption. The design ensures that aiWare can operate efficiently across a broad range of conditions, maintaining its edge in both small and large-scale applications. A key advantage of aiWare is its compatibility with aiMotive's aiDrive software, facilitating seamless integration and optimizing neural network configurations for automotive production environments. aiWare's development emphasizes strong support for AI algorithms, ensuring robust performance in diverse applications, from edge processing in sensor nodes to high central computational capacity. This makes aiWare a critical component in deploying advanced, scalable automotive AI solutions, designed specifically to meet the safety and performance standards required in modern vehicles.
The BlueLynx Chiplet Interconnect facilitates seamless communication between chiplets, vital for modern semiconductor designs that emphasize modularity and efficiency. This technology supports both physical and link layer interfaces, adhering to the Universal Chiplet Interconnect Express (UCIe) and Open Compute Project (OCP) Bunch of Wires (BoW) standards. BlueLynx ensures high-speed data transfer, offering customizable options to tailor designs for specific workloads and application needs. Optimized for AI, high-performance computing, and mobile markets, BlueLynx's die-to-die adaptability provides system architects with the leeway to integrate a variety of packaging types and process nodes, including 2D, advanced 2.5D, and innovative 3D packaging options. The solution is recognized for delivering a balance of bandwidth, energy efficiency, and latency, ensuring robust system performance while minimizing power consumption. This IP has been silicon-proven across multiple process nodes, including advanced technologies like 3nm, 4nm, and 5nm, and is supported by major semiconductor foundries. It offers valuable features such as low latency, improved PPA (Power, Performance, Area), and industry-standard compliance, positioning it as a reliable and high-performing interconnect solution within the semiconductor industry.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The Software-Defined High PHY from AccelerComm is designed for adaptability and high efficiency across ARM processor architectures. This product brings flexibility in software-defined radio applications by facilitating easy optimization for different platforms, considering power and capacity requirements. It allows integration without hardware acceleration based on the needs of specific deployments.

A key feature of the Software-Defined High PHY is its capability for customization. Users can tailor this IP to work optimally across various platforms, either independently or coupled with hardware-accelerated functionalities. This ensures the high performance needed for modern network demands is met without unnecessary resource consumption.

Perfect for scenarios needing O-RAN compliance, this PHY solution supports high adaptability and scalability for different use cases. It is ideal for developers who require robust communication solutions tuned for efficient execution in varying environmental conditions, contributing to lower latency and higher throughput in network infrastructures.
Azurite Core-hub is an innovative processor solution that excels in performance, catering to challenging computational tasks with efficiency and speed. Designed with the evolving needs of industries in mind, Azurite leverages cutting-edge RISC-V architecture to deliver high performance while maintaining scalability and flexibility in design. This processor core stands out for its ability to streamline tasks and simplify the complexities often associated with processor integration. The Azurite Core-hub's architecture is tailored to enhance computation-intensive applications, ensuring rapid execution and robust performance. Its open-source RISC-V base supports easy integration and freedom from vendor lock-in, providing users the liberty to customize their processors according to specific project needs. This adaptability makes Azurite an ideal choice for sectors like AI/ML where high performance is crucial. InCore Semiconductors has fine-tuned the Azurite Core-hub to serve as a powerhouse in the processor core market, ensuring that it meets rigorous performance benchmarks. It offers a seamless blend of high efficiency and user-friendly configurability, making it a versatile asset for any design environment.
The IFC_1410 Intelligent FMC Carrier in AMC form factor is an advanced modular platform accommodating a broad spectrum of functionalities within a compact framework. It is built on the powerful NXP QorIQ T Series processors alongside Xilinx Artix-7 and Kintex UltraScale devices, making it suitable for high-performance applications. This carrier board serves as a foundational component in system designs, promoting flexibility and ease of integration. Its multi-purpose architecture is tailored for various complex systems, enabling developers to extend the capabilities of their VME data acquisition and control systems far beyond traditional limits. By harnessing the synergistic potential of cutting-edge processor technologies and FPGA platforms, the IFC_1410 carrier board delivers exceptional processing power and scalability necessary for high-energy physics and many industrial applications requiring intense computational capacity.
The SiFive Performance family represents a new benchmark in computing efficiency and performance. These RISC-V processors are aimed at addressing the demands of modern workloads, including web servers, multimedia processing, networking, and storage in data centers. With high-throughput, out-of-order cores in three-wide to six-wide configurations and dedicated vector engines for AI tasks, the SiFive Performance family promises remarkable energy and area efficiency. This not only enables high compute density but also reduces costs and energy consumption, making it an optimal choice for contemporary data center applications. A hallmark of the Performance family is its scalability for various applications, including mobile, consumer, and edge infrastructure. The portfolio includes a range of models like the six-wide, out-of-order P870 core, capable of scaling up to a 256-core cluster, and the P650, known for its four-issue, out-of-order architecture supporting up to a 16-core cluster. Furthermore, the family includes the P550 series, which sets standards with its three-issue, out-of-order design, offering superior performance in an energy-efficient footprint. In addition to delivering exceptional computing power, the SiFive Performance processors excel in scenarios where power, footprint, and cost are crucial factors. With the potential for configurations up to 512 cores, these processors are designed to meet the growing demand for high-performance computing across multiple sectors.
The Codasip RISC-V BK Core Series represents a lineup of processor cores that leverage the open standard architecture of RISC-V to deliver highly customizable computational solutions. These cores provide a balance between power efficiency and performance, making them ideal for a broad range of applications, including IoT devices and embedded systems. The BK series cores are designed to be versatile, supporting a variety of operating systems while allowing full customization to meet specific workload demands. This flexibility empowers designers to implement custom instructions and optimize the cores for particular applications without compromising on power budgets. The series is compliant with RISC-V standards, ensuring seamless integration with other RISC-V based solutions.
The ZIA DV700 Series neural processing unit by Digital Media Professionals showcases superior proficiency in handling deep neural networks, tailored for high-reliability AI systems such as autonomous vehicles and robotics. This series excels in real-time image, video, and voice processing, emphasizing both efficiency and safety crucial for applications requiring accurate and speedy analysis. Leveraging FP16 floating-point precision, these units ensure robust AI model deployment without necessitating additional training, maintaining high inference precision for critical applications. Devised with versatility in mind, the DV700 supports a plethora of AI models, facilitating mobile, space-efficient integration across multiple platforms. Engineered to handle diverse DNN configurations, the ZIA DV700 stands out with hardware architectures optimized for inference processing. Its extensive application spread includes object detection, semantic segmentation, pose estimation, and more. By supporting standard AI development frameworks like Caffe, Keras, and TensorFlow, users can seamlessly develop AI applications with DV700's robust SDK and development tools. The IP core's design integrates a high-bandwidth on-chip RAM and weight compression, further boosting processing performance. Optimizing for enhanced AI inference tasks, the DV700 Series continues to be indispensable in high-stakes environments.
The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Standing as a fully programmable and algorithm-agnostic tool, it supports any algorithm, any host processor, and can execute generative AI like GPT-4 or Llama3 with unparalleled efficiency. The system excels in delivering cost-effective solutions, offering high throughput up to 3.2 petaflops (dense) without relying on CUDA, thus simplifying scalability and deployment. Optimized for cloud and on-premise configurations, Jotunn8 ensures maximum utility by integrating 16 cores and a high-level programming interface. Its innovative architecture addresses conventional processing bottlenecks, allowing constant data availability at each processing unit. With the potential to operate large and complex models at reduced query costs, this accelerator maintains performance while consuming less power, making it the preferred choice for advanced AI tasks. The Jotunn8's hardware extends beyond AI-specific applications to general processing (GP) functionalities, showcasing its agility. By automatically selecting the most suitable processing paths layer-by-layer, it optimizes both latency and power consumption. This provides its users with a flexible platform that supports the deployment of vast AI models under efficient resource utilization strategies. This product's configuration includes a peak power consumption of 180 W and an impressive 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease. It aligns closely with theoretical limits for implementation efficiency, accentuating VSORA's commitment to high-performance computational capabilities.
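The layer-by-layer path selection described above can be pictured as a greedy cost minimization over candidate execution paths. The sketch below is purely illustrative: the layer names, cost figures, and weighting are hypothetical, since VSORA's actual scheduler and cost model are not public.

```python
def pick_paths(layers, weight_latency=1.0, weight_power=0.5):
    """Greedy per-layer path selection.

    Each layer offers candidate paths as (name, latency_ms, power_w)
    tuples; the candidate with the lowest weighted cost wins. All
    numbers here are hypothetical illustrations.
    """
    plan = []
    for layer_name, candidates in layers:
        best = min(candidates,
                   key=lambda c: weight_latency * c[1] + weight_power * c[2])
        plan.append((layer_name, best[0]))
    return plan

# A compute-heavy layer favors the AI path; a bandwidth-bound reshape
# is cheaper on the general-processing path.
network = [
    ("conv1",   [("ai-core", 0.8, 3.0), ("gp-core", 2.0, 1.0)]),
    ("reshape", [("ai-core", 0.5, 2.5), ("gp-core", 0.6, 0.8)]),
]
print(pick_paths(network))  # → [('conv1', 'ai-core'), ('reshape', 'gp-core')]
```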
SystemBIST is an advanced product offering from Intellitech that provides a plug-and-play solution for flexible FPGA configuration and embedded JTAG testing. It stands out with its proprietary architecture that allows for efficient, codeless configuration of field-programmable gate arrays (FPGAs) as well as built-in system testing capabilities. SystemBIST is designed to be vendor-neutral, supporting any FPGA or CPLD compliant with the IEEE 1532 or IEEE 1149.1 standards. This design enables robust anti-tamper measures and enhances system reliability by embedding JTAG test patterns directly into PCBs.
Monolithic Microsystems from Imec are revolutionizing how electronic integration is perceived by offering a platform that seamlessly combines microelectronics and microsystems. These systems are engineered to provide high functionality while maintaining a compact footprint, making them ideal for applications in areas like sensing, actuation, and control across a variety of sectors including industrial automation, medical devices, and consumer electronics. The Monolithic Microsystems platform enables the integration of various subsystems onto a single semiconductor chip, thereby reducing the size, power consumption, and cost of complex electronic devices. This not only streamlines device architecture but also enhances reliability and performance by mitigating the interconnect challenges associated with multi-chip assemblies. Imec’s comprehensive resources and expertise in semiconductor manufacturing are harnessed to deliver solutions that meet the rigorous demands of cutting-edge applications. From design to production, the Monolithic Microsystems offer a leap in capability for next-generation devices, facilitating innovations that require robust, integrated microsystem technologies.
Network on Chip (NOC-X) from Extoll is designed to address the complex communication needs of multi-core processors and chiplet-based systems. This advanced interconnect solution facilitates high-speed data exchange between various functional blocks within a chip, ensuring seamless operation and enhanced performance. NOC-X offers a scalable architecture that can support increasing data traffic demands within integrated circuits. Its design focuses on reducing communication latency and optimizing throughput, crucial factors for efficient processor performance. This capability makes NOC-X ideal for high-performance computing scenarios and applications requiring rigorous data movement and processing with minimal latency. Incorporating the NOC-X into a system not only enhances the data handling capabilities but also contributes to overall power efficiency by minimizing energy expenditure per data bit transferred. By facilitating coherent and synchronized communication across multiple modules, NOC-X aids in the development of systems that are both responsive and reliable. Its flexibility and efficiency make it a key component in designing next-generation digital systems.
The ONNC Calibrator is engineered to ensure high precision in AI System-on-Chips using post-training quantization (PTQ) techniques. This tool enables architecture-aware quantization, which helps maintain 99.99% precision even with fixed-point architecture, such as INT8. Designed for diverse heterogeneous multicore setups, it supports multiple engines within a single chip architecture and employs rich entropy calculation techniques. A major advantage of the ONNC Calibrator is its efficiency; it significantly reduces the time required for quantization, taking only seconds to process standard computer vision models. Unlike re-training methods, PTQ is non-intrusive, maintains network topology, and adapts based on input distribution to provide quick and precise quantization suitable for modern neural network frameworks such as ONNX and TensorFlow. Furthermore, the Calibrator's internal precision simulator uses hardware control registers to maintain precision, demonstrating less than 1% precision drop in most computer vision models. It adapts flexibly to various hardware through its architecture-aware algorithms, making it a powerful tool for maintaining the high performance of AI systems.
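The quantize/dequantize round trip at the heart of PTQ can be sketched in a few lines. This uses simple max calibration to pick the scale, whereas the ONNC Calibrator employs richer entropy-based statistics; the activation values below are illustrative.

```python
def calibrate_scale(samples, num_bits=8):
    """Pick a symmetric quantization scale from calibration data.

    Simple max calibration: map the largest magnitude seen during
    calibration onto the top of the signed integer range.
    """
    max_abs = max(abs(x) for x in samples)
    qmax = 2 ** (num_bits - 1) - 1   # 127 for INT8
    return max_abs / qmax

def quantize(x, scale):
    """Round to the nearest INT8 code, clamping to the representable range."""
    return max(-128, min(127, round(x / scale)))

def dequantize(q, scale):
    return q * scale

activations = [0.02, -1.27, 0.64, 1.10, -0.33]
scale = calibrate_scale(activations)
roundtrip = [dequantize(quantize(x, scale), scale) for x in activations]
# Rounding error within the calibrated range is bounded by half a step.
worst = max(abs(a - b) for a, b in zip(activations, roundtrip))
print(worst <= scale / 2)  # → True
```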
The Camera ISP Core is designed to optimize image signal processing by integrating sophisticated algorithms that produce sharp, high-resolution images while requiring minimal logic. Compatible with RGB Bayer and monochrome image sensors, this core handles inputs from 8 to 14 bits and supports resolutions from 256x256 up to 8192x8192 pixels. Its multi-pixel processing capabilities per clock cycle allow it to achieve performance metrics like 4Kp60 and 4Kp120 on FPGA devices. It uses AXI4-Lite and AXI4-Stream interfaces to streamline defect correction, lens shading correction, and high-quality demosaicing processes. Advanced noise reduction features, both 2D and 3D, are incorporated to handle different lighting conditions effectively. The core also includes sophisticated color and gamma corrections, with HDR processing for combining multiple exposure images to improve dynamic range. Autofocus support, along with saturation, contrast, and brightness controls, is complemented by automatic white balance and exposure adjustment based on RGB histograms and window analyses. Beyond its core features, the Camera ISP Core is available in several configurations, including HDR, Pro, and AI variants, supporting different performance requirements and FPGA platforms. The versatility of the core makes it suitable for a range of applications where high-quality real-time image processing is essential.
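As one illustration of the automatic adjustments listed above, automatic white balance can be sketched with the textbook gray-world algorithm. This is an assumption for illustration only, not the core's actual implementation, which derives its gains from RGB histograms and window analyses.

```python
def gray_world_awb(pixels):
    """Gray-world automatic white balance.

    Assumes the average scene color should be neutral gray: each channel
    is rescaled so all three channel means match their common average.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A scene with a warm (reddish) cast: the red mean is twice the blue mean.
scene = [(200, 150, 100), (100, 75, 50)]
balanced = gray_world_awb(scene)
means = [sum(p[c] for p in balanced) / len(balanced) for c in range(3)]
print(means)  # all three channel means are now equal
```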
The Universal Chiplet Interconnect Express (UCIe) by Extoll is a cutting-edge technology designed to meet the increasing demand for seamless integration of chiplets within a system. UCIe offers a highly efficient interconnect framework that underpins the foundational architecture of heterogeneous systems, enabling enhanced interoperability and performance across various chip components. UCIe distinguishes itself by offering an ultra-low power profile, making it a preferred option for power-sensitive applications. Its design focuses on facilitating high bandwidth data transfer, essential for modern computing environments that require the handling of vast amounts of data with speed and precision. Furthermore, UCIe supports a diverse range of process nodes, ensuring it integrates well with existing and emerging technologies. This innovation plays a pivotal role in accelerating the transition to advanced chiplet-based architectures, enabling developers to create systems that are both scalable and efficient. By providing a robust interconnect solution, UCIe helps reduce overall system complexity, lowers development costs, and improves design flexibility — making it an indispensable tool for forward-thinking semiconductor designs.
Cyclic Design's G13 and G13X IPs are crafted for 512-byte correction blocks, suited for NAND devices with 2KB and 4KB pages. Transitioning from traditional single-bit correction using Hamming codes, these IPs support the higher-order bit corrections essential as NAND technologies advance. The G13 IP offers a modular, customizable drop-in upgrade enhancing existing controller architectures with minimal investment, ensuring compatibility with both existing hardware and software.
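As context for that upgrade path, the single-bit Hamming correction these IPs move beyond can be sketched in a few lines with a (7,4) code: the syndrome formed by XOR-ing the indices of set bits directly names the flipped position. This illustrates the classic baseline scheme, not the G13's multi-bit algorithm.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.

    Positions are 1-indexed; powers of two (1, 2, 4) hold parity bits
    chosen so the XOR of all set-bit positions is zero.
    """
    c = [0] * 8                       # index 0 unused
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(bits):
    """Locate and flip a single flipped bit via the position syndrome."""
    c = [0] + list(bits)
    syndrome = 0
    for pos in range(1, 8):
        if c[pos]:
            syndrome ^= pos
    if syndrome:                      # nonzero syndrome = error position
        c[syndrome] ^= 1
    return c[1:]

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[2] ^= 1                     # flip one bit (position 3)
print(hamming74_correct(corrupted) == word)  # → True
```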
The CC-100 Power Optimizer is an integral part of CurrentRF's energy harvesting technology, primarily featured in products like PowerStic and Exodus devices. This compact IC, measuring 1.4mm x 1.4mm, is designed to be embedded in the ground lead of DC-Link capacitors, offering significant energy savings. By harvesting, inverting, and recycling a portion of the normally wasted current, the CC-100 Power Optimizer effectively cancels a part of the incoming current impulse. This feedback mechanism reduces battery charging requirements, extending the driving range of electric vehicles by up to 10%, making it an economical solution for manufacturers striving to enhance efficiency without considerable expense. This IP is particularly effective in electric and hydrogen fuel cell vehicles, where it helps to alleviate deep discharges of key capacitors in the vehicle's electrical system. As a result, it aids in reducing the recharge demand, consequent battery drain, and thus extends not only the longevity of the battery system but also the range per charge. The CC-100 Power Optimizer supports manufacturers’ designs by decreasing power consumption while improving the overall eco-friendliness of vehicle technology. Its implementation is particularly advantageous in high-voltage systems, where interfacing with cable shields and disconnecting waste currents can lead to substantial performance enhancements. The CC-100 IC stands as a testament to how innovative design in power management can lead to substantial energy savings and operational improvements.
The eSi-ADAS suite is a high-performance radar processing solution primarily designed to enhance ADAS systems. It comprises a comprehensive set of radar accelerator IPs, such as FFT and CFAR engines, alongside tracking capabilities powered by Kalman filter technology. This setup facilitates real-time monitoring of diverse radar environments. Automotive and UAV sectors benefit significantly from eSi-ADAS, as it ensures precise situational awareness necessary for modern safety and collision avoidance systems. By offloading computationally intensive tasks from the central processing unit, it optimizes performance and power efficiency. This enables the handling of complex scenarios, from short-range radar operations to simultaneous tracking of numerous objects.
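The CFAR engine mentioned above adapts its detection threshold to the local noise floor rather than using a fixed level. A minimal cell-averaging CFAR (CA-CFAR) sketch follows; the window sizes and threshold factor are illustrative assumptions, not eSi-ADAS parameters.

```python
def ca_cfar(power, train=4, guard=2, alpha=5.0):
    """Cell-averaging CFAR over a 1D power profile.

    For each cell under test (CUT), estimate the local noise floor from
    'train' cells on each side, skipping 'guard' cells around the CUT,
    then declare a detection when the CUT exceeds alpha times that floor.
    """
    detections = []
    n = len(power)
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i - guard):
            if 0 <= j < n:
                cells.append(power[j])
        for j in range(i + guard + 1, i + guard + train + 1):
            if 0 <= j < n:
                cells.append(power[j])
        if not cells:
            continue
        noise = sum(cells) / len(cells)
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

# A flat noise floor of ~1.0 with a strong return at range bin 10.
profile = [1.0] * 20
profile[10] = 40.0
print(ca_cfar(profile))  # → [10]
```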
The Ncore Cache Coherent Interconnect is designed to address the challenges of multicore ASICs by ensuring efficient inter-core communication and synchronization within SoCs. It provides a high-bandwidth interconnect fabric, supporting multiple protocols and a range of processor designs, including Arm and RISC-V architectures. This coherent interconnect supports system scalability and ease of integration, meeting the rigorous demands of safety-critical environments like those in automotive applications. Ncore is engineered to reduce complexity and optimize power usage while maintaining high-performance standards, ultimately enhancing reliability in complex multi-core system designs.
The 2D FFT from Dillon Engineering efficiently handles two-dimensional data transformation applications. It is particularly beneficial in scenarios where large-scale data analysis and image processing tasks require swift execution. The core exemplifies Dillon's expertise in enhancing processing speeds while maintaining high-quality output, making it indispensable for projects involving complex two-dimensional signal processing. Dillon's 2D FFT is designed to operate with internal or external memory configurations, supporting high throughput and flexibility in memory management. By utilizing dual FFT engines, it ensures efficient handling of horizontal and vertical data streams, making it suitable for tasks involving multidimensional data like images or video streams. This FFT Core is highly adaptable due to its flexible architecture enabled by the ParaCore Architect™ tool, which ensures that it can be easily customized to meet specific design and performance criteria. Thus, Dillon's 2D FFT stands as a crucial component for developers seeking to incorporate effective, reliable, and fast two-dimensional FFT processing into their systems.
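The dual-engine horizontal/vertical scheme rests on the separability of the 2D DFT: transforming every row, then every column of the result, yields the full 2D transform. A pure-Python sketch with a naive O(N²) 1D DFT standing in for each hardware FFT engine (illustrative only, not Dillon's implementation):

```python
import cmath

def dft(seq):
    """Naive 1D DFT, O(N^2); stands in for one hardware FFT engine."""
    n = len(seq)
    return [sum(seq[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def fft2d(matrix):
    """2D DFT by separability: 1D transform of each row, then each column."""
    rows = [dft(row) for row in matrix]                  # horizontal pass
    cols = [dft([rows[r][c] for r in range(len(rows))])  # vertical pass
            for c in range(len(rows[0]))]
    # cols[c][r] holds element (r, c); transpose back to row-major order.
    return [[cols[c][r] for c in range(len(cols))] for r in range(len(rows))]

image = [[1, 0], [0, 1]]
result = fft2d(image)
print(result[0][0])  # DC term = sum of all samples → (2+0j)
```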
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The RV32EC_P2 Processor Core is a compact, high-efficiency RISC-V processor designed for low-power, small-scale embedded applications. Featuring a 2-stage pipeline architecture, it efficiently executes trusted firmware. It supports the RISC-V RV32E base instruction set, complemented by compression and optional integer multiplication instructions, greatly optimizing code size and runtime efficiency. This processor accommodates both ASIC and FPGA workflows, offering tightly-coupled memory interfaces for robust design flexibility. With a simple machine-mode architecture, the RV32EC_P2 ensures swift data access. It boasts extended compatibility with AHB-Lite and APB interfaces, allowing seamless interaction with memory and I/O peripherals. Designed for enhanced power management, it features an interrupt system and clock-gating abilities, effectively minimizing idle power consumption. Developers can benefit from its comprehensive toolchain support, ensuring smooth firmware and virtual prototype development through platforms such as the ASTC VLAB. Further distinguished by its vectored interrupt system and support for application-specific instruction sets, the RV32EC_P2 is adaptable to various embedded applications. Enhancements include wait-for-interrupt commands for reduced power usage during inactivity and multiple timer interfaces. This versatility, along with integrated GNU and Eclipse tools, makes the RV32EC_P2 a prime choice for efficient, low-power technology integrations.
FlexNoC is a high-performance Network-on-Chip (NoC) interconnect that facilitates efficient on-chip communication, enabling developers to create physically aware NoCs quickly. It's renowned for supporting various topologies through its adaptable architecture. By incorporating physical awareness features, FlexNoC simplifies timing closure, streamlines design processes, and reduces power consumption. Developers benefit from shorter turn-around times and enhanced design scalability, making it ideal for both small and large-scale SoCs. Equipped with comprehensive security and quality of service features, FlexNoC integrates seamlessly into existing design frameworks to support advanced systems.
The DQ80251 is a revolutionary microcontroller core, executing instructions with an ultra-high-performance quad-pipelined architecture. This core is optimized for both 16-bit and 32-bit embedded applications, offering unmatched speed and reliability. It supports classic 8051 instruction compatibility while pushing the limits with enhancements that accelerate processing power significantly. With a compact footprint of 13,500 gates and the ability to handle 8MB of code space, the DQ80251 core delivers 75.08 DMIPS, making it a vital component in applications demanding quick data processing.