Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.
Subcategories within Platform Level IP cover a broad spectrum of integration needs:
1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.
2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.
3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.
Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.
Continuing the evolution of AI at the edge, the 2nd Generation Akida provides enhanced capabilities for modern applications. This generation implements 8-bit quantization for increased precision and introduces support for Vision Transformers and temporal event-based neural networks. The platform handles advanced cognitive tasks with heightened accuracy and significantly reduced energy consumption. Designed for high-performance AI tasks, it supports complex network models and utilizes skip connections to enhance speed and efficiency.
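As a rough illustration of what 8-bit quantization involves (BrainChip's actual scheme is proprietary; this is a generic symmetric INT8 sketch, with all names hypothetical):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = float(np.max(np.abs(weights))) / 127.0  # one scale for the tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the quantized tensor."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

Storing weights as INT8 rather than FP32 cuts memory traffic by 4x, which is where much of the energy saving in edge inference comes from.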
The NMP-750 serves as a high-performance accelerator IP for edge computing solutions across various sectors, including automotive, smart cities, and telecommunications. It supports sophisticated applications such as mobility control, factory automation, and energy management, making it a versatile choice for complex computational tasks. With a high throughput of up to 16 TOPS and a memory capacity scaling up to 16 MB, this IP ensures substantial computing power for edge devices. It is configured with a RISC-V or Arm Cortex-R/A 32-bit CPU and incorporates multiple AXI4 interfaces, optimizing data exchanges between Host, CPU, and peripherals. Optimized for edge environments, the NMP-750 enhances spectral efficiency and supports multi-camera stream processing, paving the way for innovation in smart infrastructure management. Its scalable architecture and energy-efficient design make it an ideal component for next-generation smart technologies.
The Connected Vehicle Solutions by KPIT focus on integrating in-vehicle systems with the broader connected world, transforming the cockpit experience. Utilizing high-resolution displays, augmented reality, and AI-driven personalization, these solutions improve productivity, safety, and user engagement. The company's advancements in over-the-air updates facilitate seamless vehicle interactions and connectivity, ushering in new revenue streams for OEMs while overcoming the challenges of system integration and market competitiveness.
The NMP-350 is designed to offer exceptional efficiency in AI processing, specifically targeting endpoint accelerations. This IP is well-suited for markets that require minimal power consumption and cost-effectiveness, such as automotive, AIoT/Sensors, Industry 4.0, smart appliances, and wearables. It enables a wide variety of applications, including driver authentication, digital mirrors, machine automation, and health monitoring. Technically, it delivers up to 1 TOPS and supports up to 1 MB local memory. The architecture is based on the RISC-V or Arm Cortex-M 32-bit CPU, ensuring effective processing capabilities for diverse tasks. Communication is managed via three AXI4 interfaces, each 128 bits wide, to handle Host, CPU, and Data interactions efficiently. The NMP-350 provides a robust foundation for developing advanced AI applications at the edge. Designed for ultimate flexibility, it aids in predictive maintenance and personalization processes in smart environments. With its streamlined architecture, it provides unmatched performance for embedded solutions, enabling seamless integration into existing hardware ecosystems.
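The peak bandwidth implied by the three 128-bit AXI4 interfaces can be estimated with simple arithmetic; a sketch follows, where the 800 MHz clock is an assumed figure, not a published NMP-350 specification:

```python
def axi4_peak_gbps(bus_width_bits: int, clock_hz: float) -> float:
    """Peak throughput of one AXI4 data channel, assuming one beat per cycle."""
    return bus_width_bits * clock_hz / 1e9  # gigabits per second

# NMP-350: three 128-bit AXI4 interfaces (Host, CPU, Data); clock is assumed.
per_interface = axi4_peak_gbps(128, 800e6)
aggregate = 3 * per_interface
```

Real sustained throughput is lower than this peak, since AXI4 transactions carry address and handshake overhead on top of the data beats.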
KPIT provides state-of-the-art solutions for vehicle diagnostics and aftersales service, essential for the maintenance of software-intensive vehicles. The iDART framework offers comprehensive diagnostic functions and enhances service operations through AI-guided systems. This framework facilitates the transition to a unified, future-proof diagnostic ecosystem, reducing downtime and ensuring optimal vehicle performance. KPIT's solutions streamline complex diagnostic processes, making vehicles easier to manage and repair over their lifespans, enhancing customer satisfaction and loyalty.
The Metis AIPU PCIe AI Accelerator Card provides an unparalleled performance boost for AI tasks by leveraging multiple Metis AIPUs within a single setup. This card is capable of delivering up to 856 TOPS, supporting complex AI workloads such as computer vision applications that require rapid and efficient data processing. Its design allows for handling both small-scale and extensive applications with ease, ensuring versatility across different scenarios. By utilizing a range of deep learning models, including YOLOv5 and ResNet-50, this AI accelerator card processes up to 12,800 FPS for ResNet-50 and an impressive 38,884 FPS for MobileNet V2-1.0. The card’s architecture enables high throughput, making it particularly suited for video analytics tasks where speed is crucial. The card also excels in scenarios that demand high energy efficiency, providing best-in-class performance at a significantly reduced operational cost. Coupled with the Voyager SDK, the Metis PCIe card integrates seamlessly into existing AI systems, enhancing development speed and deployment efficiency.
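Taking the quoted figures at face value, the per-device contribution on the multi-AIPU card works out as follows; the four-AIPU count in this sketch is an assumption, consistent with the 3,200 FPS ResNet-50 figure quoted for the single-AIPU M.2 module:

```python
def per_aipu(card_total: float, num_aipus: int) -> float:
    """Average share of a card-level figure contributed by each AIPU."""
    return card_total / num_aipus

NUM_AIPUS = 4  # assumed device count on the PCIe card

resnet50_fps_each = per_aipu(12_800, NUM_AIPUS)  # card-level ResNet-50 FPS
tops_each = per_aipu(856, NUM_AIPUS)             # card-level peak TOPS
```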
KPIT's engineering and design solutions focus on accelerating vehicle development through new-age design and simulation techniques. This approach enables cost-efficient transformation and adherence to sustainability standards, offering integrated electrification solutions and cutting-edge design methodologies. KPIT's solutions in vehicle engineering support electric and hybrid vehicle innovation with advanced CAD tools, virtual prototyping, and AI augmentation.
The ORC3990 is a sophisticated System on Chip (SoC) solution designed for low-power sensor-to-satellite communication within the LEO satellite spectrum. Utilizing Totum's DMSS technology, it achieves superior Doppler performance, facilitating robust connectivity for IoT devices. The integration of an RF transceiver, power amplifiers, ARM CPUs, and memory components makes it a highly versatile module. Leveraging advanced power management technology, this SoC supports a battery life that exceeds ten years, even within industrial temperature ranges from -40 to +85°C. It is optimized for use with Totum's global LEO satellite network, ensuring substantial indoor signal coverage without the need for additional GNSS components. Efficiency is a key feature, with the chip operating in the 2.4 GHz ISM band, providing unparalleled connectivity regardless of location. Compact in design, comparable in size to a business card, and designed for easy mounting, the ORC3990 offers sought-after versatility for IoT applications. Its excellent total cost of ownership (TCO) compared to terrestrial IoT solutions makes it a valuable asset for any IoT deployment focused on sustainability and longevity.
WAVE6 is a sophisticated multi-standard video codec designed to handle an array of video standards such as AV1, HEVC, AVC, and VP9. Capable of efficiently managing high-resolution video encoding and decoding processes, WAVE6 offers unmatched performance for applications demanding 4K and 8K resolutions. The technology incorporates a dual-core architecture that doubles operational efficiency and is crucial for high-throughput sectors like data centers and surveillance systems. Key features include support for color depth adaptations ranging from 8-bit to 10-bit and advanced power efficiency mechanisms. The WAVE6 codec is notable for incorporating features such as Chips&Media’s unique lossless frame buffer compression technology, CFrame™, to significantly minimize external memory bandwidth usage. With a streamlined architecture that simplifies video processing tasks, this codec supports multiple interface standards, enhancing system scalability and integration. High versatility makes WAVE6 a preferred choice for modern multimedia processing units, providing effective solutions for bandwidth challenges while maintaining superior image quality. WAVE6's efficient resource management and multi-instance capabilities make it a standout product in environments requiring low power consumption and high output precision. It facilitates color space conversion, bit-depth switching, and offers secondary interface options, tailoring it for a diverse range of implementation scenarios, from mobile technology to media broadcasting facilities.
Origin E1 neural engines are expertly tuned for the networks typically employed in always-on applications, found in devices such as home appliances, smartphones, and edge nodes requiring around 1 TOPS of performance. This focused optimization makes the E1 LittleNPU processors particularly suitable for cost- and area-sensitive applications, using energy efficiently and reducing processing latency to negligible levels. The design also incorporates a power-efficient architecture that maintains low power consumption while handling always-sensing data operations. This enables continuous sampling and analysis of visual information without compromising on efficiency or user privacy. Additionally, the architecture is rooted in Expedera's packet-based design which allows for parallel execution across layers, optimizing performance and resource utilization. Market-leading efficiency with up to 18 TOPS/W further underlines Origin E1's capacity to deliver outstanding AI performance with minimal resources. The processor supports standard and proprietary neural network operations, ensuring versatility in its applications. Importantly, it accommodates a comprehensive software stack that includes an array of tools such as compilers and quantizers to facilitate deployment in diverse use cases without requiring extensive re-designs. Its application has already seen it deployed in over 10 million devices worldwide, in various consumer technology formats.
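The quoted 18 TOPS/W efficiency figure translates directly into a power estimate for a ~1 TOPS always-on workload; a rough sketch, treating the peak efficiency as if it were sustained:

```python
def power_mw(tops: float, tops_per_watt: float) -> float:
    """Power in milliwatts implied by a throughput target at a given efficiency."""
    return tops / tops_per_watt * 1000.0

# Origin E1 class: ~1 TOPS workload at up to 18 TOPS/W.
estimate = power_mw(1.0, 18.0)  # roughly 56 mW
```

An always-on budget in the tens of milliwatts is what makes continuous visual sensing feasible on battery power.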
Designed for high-performance environments such as data centers and automotive systems, the Origin E8 NPU cores push the limits of AI inference, achieving up to 128 TOPS on a single core. Its architecture supports concurrent running of multiple neural networks without context switching lag, making it a top choice for performance-intensive tasks like computer vision and large-scale model deployments. The E8's flexibility in deployment ensures that AI applications can be optimized post-silicon, bringing performance efficiencies previously unattainable in its category. The E8's architecture and sustained performance, alongside its ability to operate within strict power envelopes (18 TOPS/W), make it suitable for passive cooling environments, which is crucial for cutting-edge AI applications. It stands out by offering PetaOps performance scaling through its customizable design that avoids penalties typically faced by tiled architectures. The E8 maintains exemplary determinism and resource utilization, essential for running advanced neural models like LLMs and intricate ADAS tasks. Furthermore, this core integrates easily with existing development frameworks and supports a full TVM-based software stack, allowing for seamless deployment of trained models. The expansive support for both current and emerging AI workloads makes the Origin E8 a robust solution for the most demanding computational challenges in AI.
The Metis AIPU M.2 Accelerator Module is a powerful AI processing solution designed for edge devices. It offers a compact design tailored for applications requiring efficient AI computations with minimized power consumption. With a focus on video analytics and other high-demand tasks, this module transforms edge devices into AI-capable systems. Equipped with the Metis AIPU, the M.2 module can achieve up to 3,200 FPS for ResNet-50, providing remarkable performance metrics for its size. This makes it ideal for deployment in environments where space and power availability are limited but computational demands are high. It features an NGFF (Next Generation Form Factor) socket, ensuring it can be easily integrated into a variety of systems. The module leverages Axelera's Digital-In-Memory-Computing technology to enhance neural network inference speed while maintaining power efficiency. It's particularly well-suited for applications such as multi-channel video analytics, offering robust support for various machine learning frameworks, including PyTorch, ONNX, and TensorFlow.
SCR9 is tailored for entry-level server-class applications and high-performance computing. This 64-bit RISC-V core supports a range of extensions, including vector operations and scalar cryptography. Utilizing a dual-issue 12-stage pipeline, SCR9 excels in environments requiring Linux-based operations, enabling advanced data processing capabilities like those needed in AI and personal computing devices.
The H.264 FPGA Encoder and CODEC Micro Footprint Cores are designed to offer superior video compression capabilities, ensuring minimal latency with a remarkable sub-1ms delay for 1080p30. This licensable core is notable for its compliance with ITAR standards, making it adaptable for various strategic applications. It facilitates 1080p60 baseline support with a single compact core that's touted as the fastest and smallest in its class. These cores are customizable, allowing for tailored pixel depths and unique resolutions that can be modified based on the specific requirements of a project. Moreover, the flexibility of these cores extends to various encoding flavors, including H.264 Encoder, CODEC, and I-Frame Only Encoder, which further enhances their usage in a wide range of applications. A low-cost evaluation license is also available, making the cores accessible for diverse testing and development scenarios.
Cortus's High Performance RISC-V Processor represents the pinnacle of processing capability, designed for demanding applications that require high-speed computing and efficient task handling. It implements the 64-bit RISC-V instruction set architecture in an out-of-order (OoO) execution core billed as the world's fastest, supporting both single-core and multi-core configurations for unparalleled processing throughput. This processor is particularly suited for high-end computing tasks in environments ranging from desktop computing to artificial intelligence workloads. With integrated features such as a multi-socket cache coherent system and an on-chip vector plus AI accelerator, it delivers exceptional computation power, essential for tasks such as bioinformatics and complex machine learning models. Moreover, the processor includes coherent off-chip accelerators, such as CNN accelerators, enhancing its utility in AI-driven applications. The design flexibility extends its application to consumer electronics like laptops and supercomputers, positioning the High Performance RISC-V Processor as an integral part of next-gen technology solutions across multiple domains.
The Origin E2 family of NPU cores is tailored for power-sensitive devices like smartphones and edge nodes that seek to balance power, performance, and area efficiency. These cores are engineered to handle video resolutions up to 4K, as well as audio and text-based neural networks. Utilizing Expedera’s packet-based architecture, the Origin E2 ensures efficient parallel processing, reducing the need for device-specific optimizations, thus maintaining high model accuracy and adaptability. The E2 is flexible and can be customized to fit specific use cases, aiding in mitigating dark silicon and enhancing power efficiency. Its performance capacity ranges from 1 to 20 TOPS and supports an extensive array of neural network types including CNNs, RNNs, DNNs, and LSTMs. With impressive power efficiency rated at up to 18 TOPS/W, this NPU core keeps power consumption low while delivering high performance that suits a variety of applications. As part of a full TVM-based software stack, it provides developers with tools to efficiently implement their neural networks across different hardware configurations, supporting frameworks such as TensorFlow and ONNX. Successfully applied in smartphones and other consumer electronics, the E2 has proved its capabilities in real-world scenarios, significantly enhancing the functionality and feature set of devices.
The AndeShape platform supports AndesCore processor system development by providing a versatile infrastructure composed of Platform IP, hardware development platforms, and an ICE Debugger. This allows for efficient integration and rapid prototyping, offering flexibility in design and development across a comprehensive set of hardware options. It aims to reduce design risk and accelerate time-to-market.
The H-Series PHY supports the latest in high-speed memory interfaces, specifically engineered for comprehensive compatibility with a range of memory standards. Backed by an extensive support ecosystem, including Design Acceleration Kits, this PHY streamlines integration and enhances performance for high-demand applications. With significant emphasis on minimizing die size while optimizing both performance and latency, this PHY is particularly useful for graphics and compute-intensive operations where speed and reliability are paramount.
Aimed at performance-driven environments, the NMP-550 is an efficient accelerator IP optimized for diverse markets, including automotive, mobile, AR/VR, drones, and medical devices. This IP is crucial for applications such as driver monitoring, fleet management, image and video analytics, and compliance in security systems. The NMP-550 boasts a processing power of up to 6 TOPS and integrates up to 6 MB of local memory, empowering it to handle complex tasks with ease. It runs on a RISC-V or Arm Cortex-M/A 32-bit CPU and supports multiple high-speed interfaces, specifically three AXI4, 128-bit connections that manage Host, CPU, and Data traffic. This IP is engineered for environments demanding high performance with efficient power use, addressing modern technological challenges in real-time analytics and surveillance. The NMP-550 is adept at improving system intelligence, allowing for enhanced decision-making processes in connected devices.
The Automotive AI Inference SoC by Cortus is a cutting-edge chip designed to revolutionize image processing and artificial intelligence applications in advanced driver-assistance systems (ADAS). Leveraging RISC-V expertise, this SoC is engineered for low power and high performance, particularly suited to the rigorous demands of autonomous driving and smart city infrastructures. Built to support Level 2 to Level 4 autonomous driving standards, this AI Inference SoC features powerful processing capabilities, enabling complex image processing algorithms akin to those used in advanced visual recognition tasks. Designed for mid to high-end automotive markets, it offers adaptability and precision, key to enhancing the safety and efficiency of driver support systems. The chip's architecture allows it to handle a tremendous amount of data throughput, crucial for real-time decision-making required in dynamic automotive environments. With its advanced processing efficiency and low power consumption, the Automotive AI Inference SoC stands as a pivotal component in the evolution of intelligent transportation systems.
The AndesCore range includes high-performance 32-bit and 64-bit CPU core families tailored for emerging market segments. These processors, adhering to the RISC-V technology and AndeStar V5 ISA, span across several series such as the Compact, 25-Series, 27-Series, 40-Series, and 60-Series. Each series is designed for specific applications, offering features like high per-MHz performance, vector processing units (VPUs), branch prediction, and memory management enhancements like MemBoost, which optimizes memory bandwidth and latency.
Origin E6 NPU cores are cutting-edge solutions designed to handle the complex demands of modern AI models, specializing in generative and traditional networks such as RNN, CNN, and LSTM. Ranging from 16 to 32 TOPS, these cores offer an optimal balance of performance, power efficiency, and feature set, making them particularly suitable for premium edge inference applications. Utilizing Expedera’s innovative packet-based architecture, the Origin E6 allows for streamlined multi-layer parallel processing, ensuring sustained performance and reduced hardware load. This helps developers maintain network adaptability without incurring latency penalties or the need for hardware-specific optimizations. Additionally, the Origin E6 provides a fully scalable solution perfect for demanding environments like next-generation smartphones, automotive systems, and consumer electronics. Thanks to a comprehensive software suite based around TVM, the E6 supports a broad span of AI models, including transformers and large language models, offering unparalleled scalability and efficiency. Whether for use in AR/VR platforms or advanced driver assistance systems, the E6 NPU cores provide robust solutions for high-performance computing needs, facilitating numerous real-world applications.
The PolarFire FPGA Family is designed to deliver cost-efficient and ultra-low power solutions across a spectrum of mid-range applications. It is ideal for a variety of markets that include industrial automation, communications, and automotive sectors. These FPGAs are equipped with transceivers that range from 250 Mbps to 12.7 Gbps, which enables flexibility in handling diverse data throughput requirements efficiently. With capacities ranging from 100K to 500K Logic Elements (LEs) and up to 33 Mbits of RAM, the PolarFire FPGAs provide the perfect balance of power efficiency and performance. These characteristics make them suitable for use in applications that demand strong computational power and data processing while maintaining energy consumption at minimal levels. Additionally, the PolarFire FPGA Family is known for integrating best-in-class security features, offering exceptional reliability which is crucial for critical applications. The architecture is built to facilitate easy incorporation into various infrastructure setups, enhancing scalability and adaptability for future technological advancements. This flexibility ensures that the PolarFire FPGAs remain at the forefront of the semiconductor industry, providing solutions that meet the evolving needs of customers worldwide.
The Chimera GPNPU series stands as a pivotal innovation in the realm of on-device artificial intelligence computing. These processors are engineered to address the challenges faced in machine learning inference deployment, offering a unified architecture that integrates matrix, vector, and scalar operations seamlessly. By consolidating what traditionally required multiple processors, such as NPUs, DSPs, and real-time CPUs, into a single processing core, Chimera GPNPU reduces system complexity and optimizes performance. This series is designed with a focus on handling diverse, data-parallel workloads, including traditional C++ code and the latest machine learning models like vision transformers and large language models. The fully programmable nature of Chimera GPNPUs allows developers to adapt and optimize model performance continuously, providing a significant uplift in productivity and flexibility. This capability ensures that as new neural network models emerge, they can be supported without the necessity of hardware redesign. A remarkable feature of these processors is their scalability, accommodating intensive workloads up to 864 TOPs and being particularly suited for high-demand applications like automotive safety systems. The integration of ASIL-ready cores allows them to meet stringent automotive safety standards, positioning Chimera GPNPU as an ideal solution for ADAS and other automotive use cases. The architecture's emphasis on reducing memory bandwidth constraints and energy consumption further enhances its suitability for a wide range of high-performance, power-sensitive applications, making it a versatile solution for modern automotive and edge computing.
The RISC-V Core-hub Generators by InCore Semiconductors provide an advanced level of customization for developing processor cores. These generators are tailored to support configuration at both the Instruction Set Architecture (ISA) and the microarchitecture levels, enabling designers to create cores that meet specific functional and performance needs. By allowing detailed customization, these generators support a wide range of applications, from simple embedded devices to complex industrial systems. The Core-hub Generators are designed to streamline the SoC development process by integrating optimized SoC fabrics and UnCore components such as programming interfaces, debugging tools, and interrupt controllers. This comprehensive integration facilitates efficient communication between different processing units and peripheral devices, thereby enhancing overall system performance. InCore's generators leverage the flexibility and scalability of RISC-V technology, promoting innovation and accelerating the deployment of custom silicon solutions. This makes them an ideal choice for designers looking to build cutting-edge SoCs with enhanced capabilities and reduced development times.
The RF/Analog offerings from Certus Semiconductor represent cutting-edge solutions designed to maximize the potential of wireless and high-frequency applications. Built upon decades of experience and extensive patent-backed technology, these products comprise individual RF components and full-chip transceivers that utilize sophisticated analog technology. Certus's solutions include silicon-proven RF IP and full-chip RF products that offer advanced low-power front-end capabilities for wireless devices. High-efficiency transceivers cover a range of standards like LTE and WiFi, alongside other modern communication protocols. The design focus extends to optimizing power management units (PMU), RF signal chains, and phase-locked loops (PLLs), providing a full-bodied solution that meets high-performance criteria while minimizing power requirements. With the ability to adapt to various process nodes, products in this category are constructed to offer definitive control over power output, noise figures, and gain. This adaptability ensures that they align seamlessly with diverse operational requirements, while cutting-edge developments in IoT and radar technologies exemplify Certus's commitment to innovation. Their RF/Analog IP line is a testament to their leadership in ultra-low power solutions for next-generation wireless applications.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic, Inc. is a groundbreaking processor core designed for efficiency in both power consumption and performance. Operating at a mere 10 mW at 1 GHz, this core leverages advanced design techniques to run at reduced voltages without sacrificing performance, achieving clock speeds up to 5 GHz. This innovation is particularly valuable for applications requiring high-speed processing while maintaining low power usage, making it ideal for portable and battery-operated devices. Micro Magic's 64-bit RISC-V architecture embraces a streamlined design that minimizes energy consumption and maximizes processing throughput. The core's architecture is optimized for high performance under low-power conditions, which is essential for modern electronics that require prolonged battery life and environmental sustainability. This core supports a wide range of applications from consumer electronics to automotive systems where energy efficiency and computational power are paramount. The RISC-V core also benefits from Micro Magic's suite of integrated design tools, which streamline the development process and enable seamless integration into larger systems. With a focus on reducing total ownership costs and enhancing product life cycle, Micro Magic's RISC-V core stands out as a versatile and eco-friendly solution in the semiconductor market.
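The quoted 10 mW at 1 GHz operating point corresponds to a fixed energy budget per clock cycle; a quick sketch:

```python
def energy_per_cycle_pj(power_watts: float, clock_hz: float) -> float:
    """Average energy consumed per clock cycle, in picojoules."""
    return power_watts / clock_hz * 1e12

# Micro Magic core: 10 mW at 1 GHz works out to 10 pJ per cycle.
e_pj = energy_per_cycle_pj(10e-3, 1e9)
```

Energy per cycle is a clock-independent way to compare cores: two designs at different frequencies but the same pJ/cycle drain a battery at the same rate per unit of work.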
GSHARK is a family of GPU cores targeted for embedded devices, such as digital cameras and automotive systems. Known for its high performance and low power consumption, GSHARK effectively minimizes the CPU load while maintaining outstanding graphic rendering capabilities. Its reliability is proven by millions of units shipped in commercial silicon. The architecture of GSHARK adapts PC, smartphone, and console-grade graphics technologies to embedded systems, enhancing the user experience in human-machine interfaces. Its capability to handle dynamic graphics with low resource usage makes it an ideal choice for embedded systems.
The iniDSP is a 16-bit digital signal processor core optimized for high-performance computational tasks across diverse applications. It boasts a dynamic instruction set, capable of executing complex algorithms efficiently, making it ideal for real-time data processing in telecommunications and multimedia systems. Designed for seamless integration, the iniDSP supports a variety of interface options and is compatible with existing standard IP cores, facilitating easy adaptation into new or existing systems. Inicore's structured design methodology ensures the processor is technology-independent, making it suitable for both FPGA and ASIC implementations. The core's modular construction allows customization to meet specific application needs, enhancing its functionality for specialized uses. Its high-performance architecture is also balanced with power-efficient operations, making it an ideal choice for devices where energy consumption is a critical consideration. Overall, iniDSP embodies a potent mix of flexibility and efficiency for DSP applications.
The Dynamic Neural Accelerator II Architecture (DNA-II) by EdgeCortix is a sophisticated neural network IP core structured for extensive parallelism and efficiency enhancement. Distinguished by its run-time reconfigurable interconnects between computing elements, DNA-II supports a broad spectrum of AI models, including both convolutional and transformer networks, making it suitable for diverse edge AI applications. With its scalable performance starting from 1K MACs, the DNA-II architecture integrates easily with many SoC and FPGA applications. This architecture provides a foundation for the SAKURA-II AI Accelerator, supporting up to 240 TOPS in processing capacity. The unique aspect of DNA-II is its utilization of advanced data path configurations to optimize processing parallelism and resource allocation, thereby minimizing on-chip memory bandwidth limitations. The DNA-II is particularly noted for its superior computational capabilities, ensuring that AI models operate with maximum efficiency and speed. Leveraging its patented run-time reconfigurable data paths, it significantly increases hardware performance metrics and energy efficiency. This capability not only enhances the compute power available for complex inference tasks but also reduces the power footprint, which is critical for edge-based deployments.
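Peak TOPS for a MAC array follows from the MAC count and clock, counting each MAC as two operations (a multiply and an add); a sketch, where the 1 GHz clock is an assumed figure rather than a DNA-II specification:

```python
def peak_tops(num_macs: int, clock_hz: float) -> float:
    """Peak throughput: each MAC unit performs 2 ops (multiply + add) per cycle."""
    return 2 * num_macs * clock_hz / 1e12

# DNA-II scales from a 1K-MAC entry point; the clock here is assumed.
entry_point = peak_tops(1024, 1e9)  # ~2 TOPS
```

Scaling the MAC count (and keeping them fed, which is where DNA-II's reconfigurable data paths come in) is how the same architecture reaches the 240 TOPS quoted for SAKURA-II.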
The Cortus Lotus 1 is a multifaceted microcontroller that packs a robust set of features for a range of applications. This cost-effective, low-power SoC boasts a RISC-V architecture, making it suitable for advanced control systems such as motor control, sensor interfacing, and battery-operated devices. Operating at up to 40 MHz, its RV32IMAFC CPU supports hardware floating-point (the F extension) and integer multiply/divide (the M extension), optimizing performance for computationally demanding applications. Designed for high code density and a reduced memory footprint, Lotus 1 incorporates 256 KBytes of Flash memory and 24 KBytes of RAM, enabling the execution of complex applications without external memory components. Its six independent 16-bit timers with PWM capabilities are perfectly suited for controlling multi-phase motors, positioning it as an ideal choice for power-sensitive embedded systems. This microcontroller's connectivity options, including multiple UARTs, SPI, and TWI controllers, ensure seamless integration within a myriad of systems. Lotus 1 is thus equipped to serve a wide range of market needs, from personal electronics to industrial automation, ensuring flexibility and extended battery life across sectors.
The software-defined High PHY from AccelerComm is tailored for ARM processor architectures, offering the flexibility to accommodate a wide array of platforms. This configurable solution can operate standalone or, where an application's power and performance targets demand it, with hardware acceleration. Suited to deployments where adaptability is paramount, it is optimized to minimize latency and maximize efficiency across platforms, adheres to O-RAN standards, and can be combined with other IP offerings, such as accelerators, to enhance overall system capability.
WAVE5 is a mature multi-standard video codec IP designed to meet the demands of high-performance multimedia processing. Supporting AV1, HEVC, AVC, and VP9 decoding, WAVE5 delivers exceptional speed and efficiency, making it suitable for various applications including data centers, surveillance, and set-top boxes. This codec achieves stunning high-resolution performance with 4K and 8K video streams, providing a comprehensive suite of virtualization and encoding tools that maximize resource utility. The WAVE5 architecture is optimized for both power and bandwidth efficiency, thanks to its dual-core setup and state-of-the-art clock gating techniques. It supports extensive pixel depth options and advanced features such as multi-instance support and frame buffer compression. These features can considerably reduce latency while simultaneously optimizing image quality across diverse platforms. Applications that require reliable, high-speed video processing benefit greatly from WAVE5’s dependable multi-format compatibility and robust interface options. Equipped with extensive video formatting options and features, WAVE5 allows color format conversions, deep pixel manipulations, and effective data handling functionalities. These features create a seamless operation for developers aiming to integrate high-performing video codec solutions with broad compatibility across modern video standards.
DMP’s ZIA Stereo Vision solution is engineered for depth perception and environmental sensing, leveraging stereo image inputs to compute real-time distance maps. This technology applies stereo matching techniques such as Semi-Global Matching (SGM) to accurately deduce depth from 4K resolution images, paving the way for precision applications in autonomous vehicles and robotic systems. The system employs pre- and post-processing techniques to optimize image alignment and refine depth calculations, achieving high accuracy with low latency. By interfacing through the AMBA AXI4 protocol, it ensures easy integration into existing processing chains, requiring minimal reconfiguration for operation. DMP’s expertise in small footprint, high-performance IP allows the ZIA Stereo Vision to deliver industry-leading depth perception capabilities while maintaining a compact profile, suitable for embedded applications needing robust environmental mapping.
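The core of any stereo-matching pipeline, including SGM, is a per-pixel matching cost minimized over candidate disparities. The sketch below shows only the local cost and winner-take-all step for a single scanline in plain Python; real SGM additionally aggregates these costs along multiple image paths to enforce smoothness, and a hardware block like ZIA Stereo Vision operates on full 4K frames rather than toy rows.

```python
def disparity_row(left, right, max_disp):
    """Winner-take-all disparity for one scanline, using an absolute-difference
    matching cost. left[x] is compared against right[x - d] for each candidate
    disparity d; the cheapest match wins."""
    disparities = []
    for x in range(len(left)):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = abs(left[x] - right[x - d])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A scene shifted by 2 pixels between the two views yields disparity 2
# wherever the search window is valid (x >= 2):
left_row  = [10, 20, 30, 40, 50, 60]
right_row = [30, 40, 50, 60, 70, 80]   # right view: left_row[x] == right_row[x-2]
```

Disparity maps like this are what the pre- and post-processing stages refine before conversion to metric depth via the camera baseline and focal length.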
The 3D Imaging Chip from Altek is a sophisticated piece of technology designed to enhance depth perception in imaging applications. This chip is deeply rooted in Altek's extensive expertise in 3D sensing technology, developed over several years to provide optimal solutions for various devices requiring mid to long-range detection capabilities. It's particularly effective in enhancing accuracy in depth recognition, crucial for applications such as autonomous vehicles and complex robotics. With its integration capabilities, the 3D Imaging Chip stands out by seamlessly combining hardware and software solutions, offering a comprehensive package from modules to complete chip solutions. This versatility allows the chip to be employed across various industries, wherever precision and depth detection are paramount. It facilitates improved human-machine interaction, making it ideal for sectors like virtual reality and advanced surveillance systems. Engineered to support sophisticated algorithms, the 3D Imaging Chip optimizes performance in real-time image processing. This makes it a pivotal sensor solution that addresses the growing demand for 3D imaging applications, providing clarity and reliability essential for next-generation technology.
The Arria 10 System on Module (SoM) is designed with a focus on embedded and automotive vision applications, leveraging the robust capabilities of the Arria 10 SoC devices. Housed in a compact 8 cm by 6.5 cm form factor, this module incorporates a multitude of interfaces, offering immense flexibility and a wide array of functionalities suitable for high-performance tasks. This SoM integrates an Altera Arria 10 FPGA with 160 to 480 KLEs along with a dual-core Cortex-A9 CPU, ensuring efficient computational performance. It features a sophisticated power management system and support for dual DDR4 memory interfaces, optimizing power distribution and memory efficiency for safety-critical applications which demand precision and reliability. The Arria 10 SoM is crafted to maximize data throughput, with capabilities such as PCIe Gen3 x8 and 10/40 Gbit/s Ethernet interfaces, alongside dedicated clocking arrangements for minimized jitter. Supporting high-speed data transmissions via multiple LVDS lanes and USB interfaces, it's engineered to handle demanding operations in sophisticated systems requiring rapid processing speeds and expansive interfacing.
The NPU by OPENEDGES, known as ENLIGHT, is a cutting-edge neural processing unit tailored for deep learning applications requiring high computational efficiency. It introduces an innovative approach by utilizing mixed-precision computation (4/8-bit), heavily optimizing processing power and reducing DRAM traffic through advanced scheduling and layer partitioning techniques. This NPU offers superior energy efficiency and compute density, making it significantly more effective than competing alternatives. It is highly customizable, accommodating varying core sizes to meet specific market demands, ensuring a broad application reach from AI and ML tasks to edge computing requirements. ENLIGHT enhances performance with its DNN-optimized vector engine and advanced algorithm support, including convolution and non-linear activation functions. Its toolkit supports popular formats like ONNX and TFLite, simplifying integration and accelerating the development process for complex neural network models in high-performance environments.
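A minimal sketch of the symmetric linear quantization that underlies mixed-precision schemes like the 4/8-bit computation described above (this assumes nothing about ENLIGHT's actual number formats): values are mapped to narrow signed integers with a per-tensor scale, and a layer can trade 8-bit for 4-bit storage at the cost of coarser steps.

```python
def quantize(values, bits):
    """Symmetric linear quantization to signed integers of the given width.
    Returns the integer codes and the per-tensor scale factor.
    (Sketch only: assumes at least one non-zero value.)"""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8-bit, 7 for 4-bit
    scale = max(abs(v) for v in values) / qmax
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

sample = [0.5, -1.0, 0.25]
codes8, s8 = quantize(sample, bits=8)   # fine-grained steps
codes4, s4 = quantize(sample, bits=4)   # half the bits, coarser steps
```

The 4-bit codes halve memory and DRAM traffic per value relative to 8-bit, which is why a scheduler that assigns precision per layer can cut bandwidth substantially while keeping accuracy-sensitive layers at the wider format.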
The Talamo SDK is a powerful development toolkit engineered to advance the creation of sophisticated spiking neural network-based applications. It melds seamlessly with PyTorch, offering developers an accessible workflow for model building and deployment. This SDK extends the PyTorch ecosystem by providing the necessary infrastructure to construct, train, and implement spiking neural networks effectively. A distinguishing feature of Talamo SDK lies in its ability to map trained neural models onto the diverse computing layers inherent in the spiking neural processor hardware. This is complemented by an architecture simulator enabling fast validation, which accelerates the iterative design process by simulating hardware behavior and helping optimize power and performance metrics. Developers will appreciate the end-to-end application support within Talamo SDK, including the integration of standard neural network operations alongside spiking models, allowing for a comprehensive application pipeline. With ready-to-use models, even those without detailed SNN knowledge can develop powerful AI-driven applications swiftly, benefiting from high-level profiling and optimization tools.
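As an illustration of the spiking dynamics such an SDK trains and deploys, here is a generic leaky integrate-and-fire neuron in plain Python. This is a textbook model, not Talamo's actual neuron model or API: membrane potential decays over time, accumulates input, and a spike is emitted and the potential reset when a threshold is crossed.

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron over a sequence of input currents.
    Each step: decay the membrane potential, add the input, then spike and
    reset if the threshold is reached. Parameter values are illustrative."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

# Sub-threshold inputs integrate across steps until a spike fires;
# a sufficiently strong input fires immediately:
train = lif_spikes([0.6, 0.6, 0.6, 0.0, 1.2])
```

Because information is carried in sparse spike events rather than dense activations, hardware that maps such models well can skip work on silent neurons, which is the efficiency argument for spiking processors generally.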
Time-Triggered Ethernet (TTEthernet) represents a cutting-edge networking solution, engineered for applications requiring deterministic real-time communication. By implementing time scheduling methods, TTEthernet ensures high precision and fault-tolerant communication over Ethernet, catering to the needs of cyber-physical systems across aerospace, automotive, and industrial sectors. The protocol is distinguished by its capability to handle safety and high availability requirements directly at the network level, thus bypassing application layers. This level of assurance is attained through a robust system of redundancy management and fault-tolerant clock synchronization, as standardized in SAE AS6802.

The protocol promotes a standardized approach to network design, facilitating seamless integration with a wide array of Ethernet components and maintaining compatibility with IEEE 802.3 standards. This feature is crucial for simplifying the complexities of high-availability and fault-tolerant systems. By allowing for precise scheduling and replicated packet transmission, TTEthernet significantly enhances network reliability. In cases of network faults, this feature ensures that communication is maintained without interruption, supporting fail-operational safety systems.

Additionally, TTEthernet is scalable from smaller networks to expansive systems, maintaining optimal safety, performance, and security levels. The platform's ability to partition traffic classes permits the convergence of different protocols within a single network, enhancing its adaptability and application range. As a result, TTEthernet underpins numerous critical applications by ensuring both real-time responsiveness and robust data handling capabilities, ultimately reducing time-to-market for integrated solutions.
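The essence of time-triggered scheduling can be sketched as an offline table build: each critical frame is assigned a fixed offset within its period, and every dispatch instant over the hyperperiod (the least common multiple of the periods) must be unique. The toy below ignores frame lengths, link rates, and the AS6802 synchronization machinery; the frame names and timing numbers are invented for illustration.

```python
from math import gcd

def tt_schedule(frames):
    """Build a dispatch table over one hyperperiod for periodic frames.
    `frames` maps name -> (period_us, offset_us). Raises on a slot clash,
    which a real synthesis tool would resolve by choosing new offsets."""
    periods = [period for period, _ in frames.values()]
    hyper = periods[0]
    for p in periods[1:]:
        hyper = hyper * p // gcd(hyper, p)      # least common multiple
    table = {}
    for name, (period, offset) in frames.items():
        for t in range(offset, hyper, period):
            if t in table:
                raise ValueError(f"slot clash at t={t}us: {table[t]} vs {name}")
            table[t] = name
    return dict(sorted(table.items()))

# Two hypothetical traffic flows with non-colliding offsets:
plan = tt_schedule({"flight_ctrl": (250, 0), "sensor_bus": (500, 125)})
```

Because every sender transmits only at its precomputed instants against a synchronized clock, frames never contend for the link, which is where the deterministic latency guarantee comes from.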
The ULYSS MCU range from Cortus is a powerful suite of automotive microcontrollers designed to address the complex demands of modern automotive applications. These MCUs are anchored by a highly optimized 32/64-bit RISC-V architecture, delivering impressive performance levels from 120 MHz to 1.5 GHz, making them suitable for a variety of automotive functions such as body control, safety systems, and infotainment. ULYSS MCUs are engineered to accommodate extensive application domains, providing reliability and efficiency within harsh automotive environments. They feature advanced processing capabilities and are designed to integrate seamlessly into various automotive systems, offering developers a versatile platform for building next-generation automotive solutions. The ULYSS MCU family stands out for its scalability and adaptability, enabling manufacturers to design robust automotive electronics tailored to specific needs while ensuring cost-effectiveness. With their support for a wide range of automotive networking and control applications, ULYSS MCUs are pivotal in the development of reliable, state-of-the-art automotive systems.
The NeuroSense AI Chip, an ultra-low power neuromorphic frontend, is engineered for wearables to address the challenges of power efficiency and data accuracy in health monitoring applications. This tiny AI chip is designed to process data directly at the sensor level, which includes tasks like heart rate measurement and human activity recognition. By performing computations locally, NeuroSense minimizes the need for cloud connections, thereby ensuring privacy and prolonging battery life. The chip excels in accuracy, offering three times better heart rate accuracy than conventional algorithm-based solutions. At the same time, it keeps power consumption below 100 µW, allowing users to experience extended device operation without frequent recharging. The NeuroSense supports a simple configuration setup, making it suitable for integration into a variety of wearable devices such as fitness trackers, smartwatches, and health monitors. Its capabilities extend to advanced features like activity matrices, enabling devices to learn new human activities and classify tasks according to intensity levels. Additional functions include monitoring parameters like oxygen saturation and arrhythmia, enhancing the utility of wearable devices in providing comprehensive health insights. The chip's integration leads to reduced manufacturing costs, a smaller IC footprint, and a rapid time-to-market for new products.
The NeuroVoice AI Chip offers a revolutionary solution for voice processing, harnessing neuromorphic frontend technology to provide ultra-low power consumption and superior noise resilience. It is designed for hearables and smart voice-controlled devices, ensuring efficient operation even in high-noise environments. This chip processes audio data on-device, eliminating the need for continuous cloud connectivity while enhancing user privacy. By integrating NASP technology, the NeuroVoice chip excels in voice activity detection, smart voice control, and voice extraction, making it ideal for applications in earbuds, voice access systems, and smart home devices. Its ability to only transmit or recognize human voice while muting background sounds significantly improves command clarity and user interactions, especially in environments prone to irregular noises. The chip is designed to adapt to various audio inputs, providing capabilities for clear communication, enhancing speech intelligibility, and offering features like voice passthrough in hearing aids. With power consumption kept below 150µW, it allows for prolonged device usage and efficient battery management, making it an ideal component for modern voice-activated devices and hearing assistance technologies.
The Tyr Superchip is engineered to facilitate high performance computing in AI and data processing domains, with a focus on scalability and power efficiency. Designed around a revolutionary multi-core architecture, it features fully programmable cores suitable for any AI or general-purpose algorithm, ensuring high flexibility and adaptability. This product is crucial for industries requiring cutting-edge processing capabilities without the overhead of traditional systems, thanks to its support for CUDA-free operations and efficient algorithm execution that minimizes energy consumption.
The SiFive Essential family is all about customization and configurability, meeting diverse market needs with a range spanning low-power embedded systems to high-performance application processors. SiFive's Essential processors scale in performance, allowing them to cater to applications like IoT devices and real-time control. This customization extends to their architecture, facilitating specific configurations to match exact needs, highlighting their utility in varied industrial applications.
Efinix's Titanium Ti375 FPGA is a high-density device designed for applications demanding low power consumption alongside robust processing capabilities. This FPGA is embedded with the Quantum® compute fabric, an architecture that delivers significant power, performance, and area benefits. Notably, the Ti375 incorporates a hardened quad-core RISC-V block, various high-speed transceivers for protocols like PCIe Gen4, and supports LPDDR4 DRAM for efficient memory operations. The Ti375 excels in its ability to facilitate high-speed communications and sophisticated data processing, owing in part to its multiple full-duplex transceivers. These transceivers support a swath of industries by enabling data rates up to 16 Gbps for PCIe interfaces or up to 10 Gbps for Ethernet links. Additionally, the FPGA is equipped with advanced MIPI D-PHY functionalities, crucial for applications in the fields of imaging and vision. This versatile FPGA supports the development of complex systems, from industrial automation to advanced consumer electronics, by offering features like extensive I/O configurations and on-board debugging capabilities. With the comprehensive Efinity software suite, developers can streamline the transition from RTL design to bitstream generation, enhancing project timelines significantly. Whether used as a standalone solution or integrated into a larger system, the Ti375 provides an adaptable framework for modern design challenges.
The General Purpose Accelerator, known as Aptos, from Ascenium is a state-of-the-art innovation designed to redefine computing efficiency. Unlike traditional CPUs, Aptos is an integrated solution that enhances performance across all generic software applications without requiring modifications to the code. This technology utilizes a unique compiler-driven approach and simplifies CPU architecture, making it adept at executing a wide range of computational tasks with significant energy efficiency. At the heart of the Aptos design is the capability to handle tasks typically managed by out-of-order RISC CPUs, yet it does so with a streamlined and parallel approach, allowing data centers to move past current performance barriers. The architecture is aligned with the LLVM compiler, ensuring that it remains source-code compatible with numerous programming languages, an advantage when future-proofing investments in software infrastructure. The gains stem from Aptos's ability to execute standard high-level language software far more efficiently, achieving nearly four times the efficiency of existing state-of-the-art CPUs. This is instrumental in reducing the energy footprint of data centers globally, aligning with broader sustainability goals by cutting carbon emissions and operational costs. Moreover, this makes the technology extremely appealing to organizations seeking tangible ROI through energy savings and performance enhancements.
Eliyan’s NuLink technology revolutionizes die-to-die connections in the semiconductor landscape by delivering robust performance and energy efficiency using industry-standard packaging. The NuLink PHY is designed to optimize serial high-speed die-to-die links, accommodating custom and standard interconnect schemes like UCIe and BoW. It achieves significant benchmarks in terms of power efficiency, bandwidth, and scalability, providing the same benefits typical of advanced packaging techniques but within a standard packaging framework. This versatility enables broader cost-effective solutions by circumventing the high cost and complexity often associated with silicon interposers. NuLink Die-to-Die PHY stands out for its integration flexibility, supporting both silicon and organic substrate environments while maintaining superior data throughput and minimal latency. This innovation is particularly beneficial for system architects aiming to maximize performance within chiplet-based architectures, allowing the strategic incorporation of elements such as high-bandwidth memory and silicon photonics. NuLink further advances system integration by enabling simultaneous bidirectional signaling (SBD), doubling the effective data bandwidth on the same interface line. This singular feature is pivotal for intensive processing applications like AI and machine learning, where robust and rapid data interchange is critical. Eliyan’s NuLink can be implemented in diverse application scenarios, showcasing its ability to manage large-scale, multi-die integrations without the customary bottlenecks of area and mechanical structure. By leading system designs away from vendor-specific, cost-prohibitive supply chains, Eliyan empowers designers with increased freedom and efficiency, further underpinning its groundbreaking role in die-to-die connectivity and beyond.
aiWare is a cutting-edge hardware solution dedicated to facilitating neural processing for automotive AI applications. As part of aiMotive’s advanced offerings, the aiWare NPU (Neural Processing Unit) provides a scalable AI inference platform optimized for cost-sensitive and multi-sensor automotive applications ranging from Level 2 to Level 4 driving automation. With its unique SDK focused on neural network optimization, aiWare offers up to 256 Effective TOPS per core, on par with leading industry efficiency benchmarks. The aiWare hardware IP integrates smoothly into automotive systems due to its ISO 26262 ASIL B certification, making it suitable for production environments requiring rigorous safety standards. Its innovative architecture utilizes both on-chip local memory and dense on-chip RAM for efficient data handling, significantly reducing external memory needs. This focus on minimizing off-chip traffic enhances the overall performance while adhering to stringent automotive requirements. Optimized for high-speed operation, aiWare can reach up to 1024 TOPS, providing flexibility across a wide range of AI workloads including CNNs, LSTMs, and RNNs. Designed for easy layout and software integration, aiWare supports essential activation and pooling functions natively, allowing maximum processing efficiency for neural networks without host CPU interference. This makes it an exemplary choice for automotive-grade AI, supporting various advanced driving capabilities and applications.
Avispado is a 64-bit in-order RISC-V processor core engineered for efficiency and versatility within energy-conscious systems. This core supports a 2-wide in-order pipeline, which allows for streamlined instruction decoding and execution. Its compact design fits well in SoCs aimed at machine learning markets where power and space efficiency are crucial, yet it retains the capacity to handle demanding processing tasks. With the inclusion of Gazzillion Misses™ technology, Avispado can handle high sparseness in data effectively, particularly beneficial in machine learning workloads. The core's full compatibility with RISC-V vector specifications and open vector interfaces offers flexibility in deploying various vector solutions, reducing the energy demanded by operations. Avispado is multiprocessor-ready and supports cache-coherent environments, ensuring it can be scaled as operations demand, from minimal cores up to comprehensive systems. It is suitable for applications looking to leverage high throughput with minimized silicon investments, making it a favorable choice for efficiently deploying machine learning and recommendation system support.
ChipJuice is an innovative tool for reverse engineering integrated circuits, uniquely designed for comprehensive IC analysis and security evaluation. This versatile software tool supports digital forensics, backdoor research, and IP infringement investigations, making it indispensable for labs, government entities, and semiconductor companies. ChipJuice operates efficiently across various IC architectures, allowing users to extract internal architecture details and generate detailed reports, including netlists and hardware description language files. The tool's intuitive user interface and high-performance processing algorithms make it accessible to users of different expertise levels, from beginners to advanced professionals. It is capable of handling a wide range of chips, regardless of their size, technology node, or complexity, providing a scalable solution for diverse reverse engineering tasks. ChipJuice's automated standard cell research feature further enhances its analytic capabilities, enabling efficient identification and cataloging of IC components. Moreover, ChipJuice facilitates a seamless analysis process by simply using electronic images of a chip's digital core. This allows for precise signal tracing and thorough IC evaluation, supporting its users' strategic objectives in security audits and architectural exploration. ChipJuice is an essential tool for those seeking to delve deep into ICs for security validation and developmental insights.