
Vision Processor Semiconductor IPs

Vision processors are a specialized subset of semiconductor IPs designed to efficiently handle and process visual data. These processors are pivotal in applications that require intensive image analysis and computer vision capabilities, such as artificial intelligence, augmented reality, virtual reality, and autonomous systems. The primary purpose of vision processor IPs is to accelerate the performance of vision processing tasks while minimizing power consumption and maximizing throughput.

In the world of semiconductor IP, vision processors stand out due to their ability to integrate advanced functionalities such as object recognition, image stabilization, and real-time analytics. These processors often leverage parallel processing, machine learning algorithms, and specialized hardware accelerators to perform complex visual computations efficiently. As a result, products ranging from high-end smartphones to advanced driver-assistance systems (ADAS) and industrial robots benefit from improved visual understanding and processing capabilities.

The semiconductor IPs for vision processors can be found in a wide array of products. In consumer electronics, they enhance the capabilities of cameras, enabling features like face and gesture recognition. In the automotive industry, vision processors are crucial for delivering real-time data processing needed for safety systems and autonomous navigation. Additionally, in sectors such as healthcare and manufacturing, vision processor IPs facilitate advanced imaging and diagnostic tools, improving both precision and efficiency.

As technology advances, the demand for vision processor IPs continues to grow. Developers and designers seek IPs that offer scalable architectures and can be customized to meet specific application requirements. By providing enhanced performance and reducing development time, vision processor semiconductor IPs are integral to pushing the boundaries of what's possible with visual data processing and expanding the capabilities of next-generation products.

66 IPs available

Akida Neural Processor IP

Brainchip's Akida Neural Processor IP represents a groundbreaking approach to edge AI processing by employing a neuromorphic design that mimics natural brain function for efficient and accurate data processing directly on the device. This IP stands out due to its event-based processing capability, which significantly reduces power consumption while providing high-speed inferencing and on-the-fly learning. Akida's architecture is designed to operate independently of traditional cloud services, thereby enhancing data privacy and security. This localized processing approach enables real-time systems to act on immediate sensor inputs, offering instantaneous reactions. Additionally, the architecture supports flexible neural network configurations, allowing it to adapt to various tasks by tailoring the processing nodes to specific application needs. The Akida Neural Processor IP is supported by Brainchip's MetaTF software, which simplifies the creation and deployment of AI models by providing tools for model conversion and optimization. Moreover, the platform's inherent scalability and customization features make it versatile for numerous industry applications, including smart home devices, automotive systems, and more.

Brainchip
1191 Views
TSMC
28nm
AI Processor, Vision Processor
View Details Datasheet

Akida 2nd Generation

The 2nd Generation Akida platform is a substantial advancement in Brainchip's neuromorphic processing technology, expanding its efficiency and applicability across more complex neural network models. This advanced platform introduces support for Temporal Event-Based Neural Nets and Vision Transformers, aiming to enhance AI performance for various spatio-temporal and sensory applications. It's designed to drastically cut model size and required computations while boosting accuracy. Akida 2nd Generation continues to enable Edge AI solutions by integrating features that improve energy efficiency and processing speed while keeping model storage requirements low. This makes it an ideal choice for applications that demand high-performance AI in Edge devices without needing cloud connectivity. Additionally, it incorporates on-chip learning, which eliminates the need to send sensitive data to the cloud, thus enhancing security and privacy. The platform is highly flexible and scalable, accommodating a wide array of sensory data types and applications, from real-time robotics to healthcare monitoring. It's specifically crafted to run independently of the host CPU, enabling efficient processing in compact hardware setups. With this generation, Brainchip sets a new standard for intelligent, power-efficient solutions at the edge.

Brainchip
635 Views
TSMC
28nm
AI Processor, Multiprocessor / DSP, Vision Processor
View Details Datasheet

Origin E1

The Origin E1 is a highly efficient neural processing unit (NPU) designed for always-on applications across home appliances, smartphones, and edge nodes. It is engineered to deliver approximately 1 Tera Operations per Second (TOPS) and is tailored for cost- and area-sensitive deployment. Featuring the LittleNPU architecture, the Origin E1 excels in low-power environments, making it an ideal solution for devices where minimal power consumption and area are critical. This NPU capitalizes on Expedera's innovative packet-based execution strategy, which allows it to perform parallel layer execution for optimal resource use, cutting down on latency, power, and silicon area. The E1 supports a variety of network types commonly used in consumer electronics, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and more. A significant advantage of Origin E1 is its scalability and market-leading power efficiency, achieving 18 TOPS/W and supporting standard, custom, and proprietary networks. With a robust software stack and support for popular AI frameworks like TensorFlow and ONNX, it ensures seamless integration into a diverse range of AI applications.
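
The Origin E1 description above cites TensorFlow and ONNX support in its software stack. As a hedged illustration of the kind of framework-level hand-off such a stack typically consumes (generic PyTorch/ONNX tooling, not Expedera's actual compiler, whose API is not documented here), the sketch below exports a tiny CNN to ONNX and runs it with ONNX Runtime on the CPU; the NPU-specific compilation step is deliberately omitted.

```python
# Illustrative only: export a small CNN to ONNX and run it with ONNX Runtime.
# This is a generic framework flow, not Expedera's NPU toolchain.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(                      # tiny always-on-style CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

dummy = torch.randn(1, 1, 64, 64)
torch.onnx.export(model, dummy, "tiny_cnn.onnx",
                  input_names=["input"], output_names=["logits"])

sess = ort.InferenceSession("tiny_cnn.onnx")           # CPU reference run
logits = sess.run(None, {"input": dummy.numpy().astype(np.float32)})[0]
print(logits.shape)                                     # (1, 2)
```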

Expedera
54 Views
TSMC
7nm, 10nm
AI Processor, AMBA AHB / APB / AXI, Content Protection Software, CPU, GPU, Receiver/Transmitter, Vision Processor
View Details Datasheet

Origin E8

The Origin E8 neural processing unit (NPU) stands out for its extreme performance capabilities, designed to serve demanding applications such as high-end automotive systems and data centers. Capable of delivering up to 128 TOPS per core, this NPU supports the most advanced AI workloads seamlessly, whether in autonomous vehicles or data-intensive environments. By employing Expedera's packet-based architecture, Origin E8 ensures efficient parallel processing across layers and achieves impressive scalability without the drawbacks of increased power and area penalties associated with tiled architectures. It allows running extensive AI models that cater to both standard and custom requirements without compromising on model accuracy. The NPU features a comprehensive software stack and full support for a variety of frameworks, ensuring ease of deployment across platforms. Scalability up to PetaOps and support for resolutions as high as 8K make the Origin E8 an excellent solution for industries that demand unrivaled performance and adaptability.

Expedera
54 Views
TSMC
7nm, 10nm
AI Processor, AMBA AHB / APB / AXI, GPU, Receiver/Transmitter, Vision Processor
View Details Datasheet

Origin E2

The Origin E2 is a versatile, power- and area-optimized neural processing unit (NPU) designed to enhance AI performance in smartphones, edge nodes, and consumer devices. This NPU supports a broad range of AI networks such as RNNs, LSTMs, CNNs, DNNs, and others, ensuring minimal latency while optimizing for power and area efficiency. Origin E2 is notable for its adaptable architecture, which facilitates seamless parallel execution across multiple neural network layers, thus maximizing resource utilization and providing deterministic performance. With performance capabilities scalable from 1 to 20 TOPS, the Origin E2 maintains excellent efficiency up to 18 TOPS per Watt, reflecting its superior design strategy over traditional layer-based solutions. This NPU's software stack supports prevalent frameworks like TensorFlow and ONNX, equipped with features such as mixed precision quantization and multi-job APIs. It’s particularly suitable for applications that require efficient processing of video, audio, and text-based neural networks, offering leading-edge performance in power-constrained environments.

Expedera
51 Views
TSMC
7nm, 10nm
AI Processor, AMBA AHB / APB / AXI, GPU, Receiver/Transmitter, Vision Processor
View Details Datasheet

TimbreAI T3

TimbreAI T3 is purpose-built to deliver ultra-low-power AI for audio processing, providing critical noise reduction capabilities especially in power-constrained devices such as headsets. With energy needs as low as 300 microwatts, it offers a specialized solution for enhancing audio experiences without compromising battery life. This AI inference engine supports up to 3.2 GOPS performance without requiring external memory, thereby optimizing power efficiency and reducing silicon area. Its packet-based architecture leverages Expedera's expertise in low-power design to maintain audio clarity while reducing the chip's size and power consumption. TimbreAI T3 is readily deployable across a wide array of devices, with support for popular audio processing neural networks and frameworks like TensorFlow and ONNX. This unit's seamless integration and sustained field performance make it a preferred choice for companies aiming to enhance the audio features of their wearable tech and other portable devices.
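
Taken at face value, the figures quoted above (3.2 GOPS within roughly a 300-microwatt budget) imply an efficiency on the order of 10 TOPS/W; the short calculation below simply reproduces that ratio from the numbers in the description.

```python
# Back-of-the-envelope efficiency implied by the figures quoted above.
ops_per_second = 3.2e9      # 3.2 GOPS
power_watts = 300e-6        # 300 microwatts

tops_per_watt = ops_per_second / power_watts / 1e12
print(f"~{tops_per_watt:.1f} TOPS/W")   # ~10.7 TOPS/W
```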

Expedera
49 Views
TSMC
7nm, 10nm
Audio Processor, Vision Processor
View Details Datasheet

Automotive AI Inference SoC

The Automotive AI Inference SoC by Cortus is a cutting-edge chip designed to revolutionize image processing and artificial intelligence applications in advanced driver-assistance systems (ADAS). Leveraging RISC-V expertise, this SoC is engineered for low power and high performance, particularly suited to the rigorous demands of autonomous driving and smart city infrastructures. Built to support Level 2 to Level 4 autonomous driving standards, this AI Inference SoC features powerful processing capabilities, enabling complex image processing algorithms akin to those used in advanced visual recognition tasks. Designed for mid to high-end automotive markets, it offers adaptability and precision, key to enhancing the safety and efficiency of driver support systems. The chip's architecture allows it to handle a tremendous amount of data throughput, crucial for real-time decision-making required in dynamic automotive environments. With its advanced processing efficiency and low power consumption, the Automotive AI Inference SoC stands as a pivotal component in the evolution of intelligent transportation systems.

Cortus SAS
45 Views
AI Processor, Audio Processor, Multiprocessor / DSP, Processor Core Dependent, Vision Processor, W-CDMA
View Details Datasheet

Origin E6

The Origin E6 NPU is engineered for high-performance on-device AI tasks in smartphones, AR/VR headsets, and other consumer electronics requiring cutting-edge AI models and technologies. This neural processing unit balances power and performance effectively, delivering between 16 and 32 TOPS per core while catering to a range of AI workloads including image transformers and point cloud analysis. Utilizing Expedera's unique packet-based architecture, the Origin E6 offers superior resource utilization and ensures performance with deterministic latency, avoiding the penalties typically associated with tiled architectures. Origin E6 supports advanced AI models such as Stable Diffusion and Transformers, providing optimal performance for both current and predicted future AI workloads. The NPU integrates seamlessly into chip designs with a comprehensive software stack supporting popular AI frameworks. Its field-proven architecture, deployed in millions of devices, offers manufacturers the flexibility to design AI-enabled devices that maximize user experience while maintaining cost efficiency.

Expedera
45 Views
TSMC
7nm, 10nm
AI Processor, AMBA AHB / APB / AXI, GPU, Receiver/Transmitter, Vision Processor
View Details Datasheet

AX45MP

The AX45MP is a high-performance processor core from Andes Technology, designed for demanding computational tasks. It features a 64-bit architecture with dual-issue capability, enhancing its data throughput. This processor is suited for applications needing robust data and memory handling, including AI, machine learning, and signal processing. Its architecture includes instruction and data prefetch capabilities, alongside a sophisticated cache management system to improve execution speed and efficiency. The AX45MP operates on a multicore setup supporting up to 8 cores, providing exceptional parallel processing power for complex applications.

Andes Technology
43 Views
CPU, IoT Processor, Processor Cores, Vision Processor
View Details Datasheet

KL730 AI SoC

The KL730 AI SoC is equipped with a state-of-the-art third-generation reconfigurable NPU architecture, delivering up to 8 TOPS of computational power. This innovative architecture enhances computational efficiency, particularly with the latest CNNs and transformer applications, while reducing DDR bandwidth demands. The KL730 excels in video processing, offering support for 4K 60FPS output along with capabilities like noise reduction, wide dynamic range, and low-light imaging. It is ideal for applications such as intelligent security, autonomous driving, and video conferencing.

Kneron
43 Views
TSMC
12nm
A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, GPU, Vision Processor
View Details Datasheet

Akida IP

Akida IP is Brainchip's pioneering neuromorphic processor technology that mimics the human brain's function to efficiently analyze sensor inputs directly at the acquisition point. This digital processor achieves outstanding performance while significantly lowering power consumption and maintaining high processing precision. Edge AI tasks are handled locally on the chip, minimizing latency and enhancing privacy through reduced cloud dependence. The scalable architecture supports up to 256 nodes interconnected via a mesh network, each featuring Neural Network Layer Engines that can be tailored for convolutional or fully connected operations. The event-based processing technology of Akida leverages the natural data sparsity found in activations and weights, cutting down the number of operations by significant margins, thus saving power and improving performance. As a highly adaptable platform, Akida supports on-chip learning and various quantization options, ensuring customized AI solutions without costly cloud retraining. This approach not only secures data privacy but also lowers operational costs, offering edge AI solutions with unprecedented speed and efficiency. Akida's breakthrough capabilities address core issues in AI processing, such as data movement, by instantiating neural networks directly in hardware. This leads to reduced power consumption and increased processing speed. Furthermore, the IP is supported by robust tools and an environment conducive to easy integration and deployment, making it highly attractive to industries seeking efficient, scalable AI solutions.
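
The description attributes much of Akida's power saving to event-based processing that exploits sparsity in activations and weights. The NumPy snippet below is a purely conceptual sketch of that idea (not BrainChip's datapath or MetaTF): it counts how many multiply-accumulates a fully connected layer can skip when zero-valued activations are never dispatched; the 80% sparsity level is an illustrative assumption.

```python
# Conceptual illustration of sparsity exploitation, not Akida's actual hardware.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((1, 1024))
activations[rng.random(activations.shape) < 0.8] = 0.0   # assume ~80% of activations are zero
weights = rng.standard_normal((1024, 256))

dense_macs = activations.size * weights.shape[1]                 # every input contributes
event_macs = np.count_nonzero(activations) * weights.shape[1]    # only non-zero "events" fire

print(f"dense MACs: {dense_macs}")
print(f"event MACs: {event_macs} ({event_macs / dense_macs:.0%} of dense)")
```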

Brainchip
42 Views
TSMC
28nm
AI Processor, Vision Processor
View Details Datasheet

NeuroMosAIc Studio

NeuroMosAIc Studio is a comprehensive software platform designed to accelerate AI development and deployment across various domains. This platform serves as an essential toolkit for transforming neural network models into hardware-optimized formats specific for AiM Future's accelerators. With broad functionalities including conversion, quantization, compression, and optimization of neural networks, it empowers AI developers to enhance model performance and efficiency. The studio facilitates advanced precision analysis and adjustment, ensuring models are tuned to operate optimally within hardware constraints while maintaining accuracy. Its capability to generate C code and provide runtime libraries aids in seamless integration within target environments, enhancing the capability of developers to leverage AI accelerators fully. Through this suite, companies gain access to an array of tools including an NMP compiler, simulator, and support for NMP-aware training. These tools allow for optimized training stages and quantization of models, providing significant operational benefits in AI-powered solutions. NeuroMosAIc Studio, therefore, contributes to reducing development cycles and costs while ensuring top-notch performance of deployed AI applications.
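
The quantization stage mentioned above is described only at a high level. As a generic sketch of what post-training weight quantization involves (standard NumPy arithmetic, not AiM Future's NMP compiler or its actual quantization scheme), the example below maps float32 weights to int8 with a per-tensor symmetric scale and reports the round-trip error and memory saving.

```python
# Generic post-training int8 quantization sketch; not the NMP toolchain itself.
import numpy as np

weights = np.random.default_rng(1).standard_normal((256, 256)).astype(np.float32)

scale = np.abs(weights).max() / 127.0                       # per-tensor symmetric scale
q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale                      # what the accelerator effectively sees

print(f"max abs error: {np.abs(weights - dequant).max():.4f}")
print(f"memory: {weights.nbytes} B float32 -> {q.nbytes} B int8")
```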

AiM Future
40 Views
AI Processor, IoT Processor, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details Datasheet

Guiliani GUI Framework

The Guiliani GUI Framework is a robust and user-friendly software designed to facilitate the quick creation of high-quality graphical user interfaces. This powerful tool is object-oriented and customizable, enabling the development of stylish and dynamic GUIs. The framework supports various design elements, such as widgets, skins, and fonts, to create an engaging user experience. Guiliani's innovative run-time engine concept allows for a fast time-to-market, particularly advantageous for white goods, industrial applications, and automotive interfaces. With its WYSIWYG PC Streaming Editor (GSE), Guiliani offers an intuitive and efficient workflow for GUI developers, ensuring seamless integration of data and events into the user interface. Its modular approach also makes it ideal for resource-limited embedded devices, providing a vast array of graphical assets to meet diverse industry needs. Additionally, it leverages TES's D/AVE graphics accelerators for enhanced IP performance, making it a tried-and-tested solution across millions of devices globally. The Guiliani framework is optimized for a variety of target markets such as consumer electronics, medical devices, and transportation, making it extremely versatile. Embedded devices using this framework benefit from reduced development cycles and enhanced user interface quality, providing a significant advantage in rapidly evolving technological landscapes.

TES Electronic Solutions
36 Views
Peripheral Controller, Vision Processor
View Details Datasheet

ZIA Stereo Vision

ZIA Stereo Vision is an advanced stereoscopic vision module designed to provide precise distance estimation. By combining left and right camera inputs, it leverages semi-global matching algorithms to derive depth maps essential for applications like autonomous vehicles and robotic navigation. It operates under varying image resolutions and provides high-speed processing, ensuring integration into systems where rapid environmental mapping is crucial. Its hardware design optimizes power, space, and performance metrics, making it ideal for high-demand use cases that rely on accurate spatial awareness.
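
For readers who want a software reference point for the semi-global matching mentioned above, OpenCV ships a comparable CPU implementation; the sketch below assumes a rectified grayscale stereo pair and illustrates that OpenCV baseline rather than the ZIA hardware pipeline itself.

```python
# Software reference for semi-global matching (OpenCV), not the ZIA hardware module.
import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # must be a multiple of 16
    blockSize=5,
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0   # output is fixed-point, scaled by 16

# For calibrated cameras: depth = focal_length_px * baseline_m / disparity
print(disparity.min(), disparity.max())
```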

Digital Media Professionals Inc.
32 Views
2D / 3D, GPU, Graphics & Video Modules, Processor Core Independent, Vision Processor
View Details Datasheet

KL630 AI SoC

The KL630 AI SoC embodies next-generation AI chip technology with a pioneering NPU architecture. It uniquely supports Int4 precision and transformer networks, offering superb computational efficiency combined with low power consumption. Utilizing an ARM Cortex-A5 CPU, it supports a range of AI frameworks and is built to handle scenarios from smart security to automotive applications, providing robust capability in both high and low light conditions.

Kneron
31 Views
TSMC
12nm
ADPCM, AI Processor, Camera Interface, GPU, Vision Processor
View Details Datasheet

AI Camera Module

Altek's AI Camera Module exemplifies innovation in the realm of smart imaging solutions, designed to serve as a critical component in AI recognition and video processing systems. This module integrates advanced image processing capabilities, enabling it to deliver superior high-resolution images that are indispensable for AI-driven applications. With an expert blend of lens design and software integration, the module achieves optimal performance in AI and IoT contexts. This modular solution is highly adaptable, supporting edge computing to meet real-time data processing needs. It can cater to high-resolution demands such as 2K and 4K video quality, enhancing detail and clarity for surveillance or autonomous platforms. Its rich functionality spans a range of use cases, from facial recognition and tracking to complex video analytics, ensuring clients have a flexible solution that fits into various AI ecosystems. Altek’s AI Camera Module is designed for seamless integration, offering capabilities that span across consumer electronics, industrial applications, and smart cities. It stands out by providing robust performance and high adaptability to different environments, harnessing machine learning algorithms to improve precision and efficiency. The module's collaboration potential with global brands underlines its reliability and advanced technological framework, making it a go-to choice for organizations aiming to excel in high-end AI+IoT implementations.

Altek Corporation
30 Views
AI Processor, JPEG, MIPI, Peripheral Controller, USB, Vision Processor
View Details Datasheet

xcore.ai

The xcore.ai product line from XMOS represents a pioneering approach towards versatile and high-performance microcontroller solutions. Engineered to blend control, DSP, artificial intelligence, and low-latency input/output processing, the xcore.ai platform is optimized for a wide range of applications. This includes consumer electronics, industrial automation, and automotive industries where real-time data processing and robust computational power are crucial. With its advanced processing capabilities, xcore.ai facilitates the development of smart products by integrating AI functions directly into devices, making them more responsive and capable. This line of microcontrollers supports audio signal processing and voice control technologies, which are essential for modern smart home and entertainment applications. xcore.ai is uniquely designed to handle multiple data streams with precision while maintaining the low power consumption needed for sustainable product development. The product leverages XMOS's commitment to providing cycle-accurate software programmability, which allows developers to quickly adapt and customize hardware functions to meet specific needs. By fostering an environment where software and hardware seamlessly interact, xcore.ai not only supports rapid prototyping and deployment but also ensures long-term durability in demanding environments.

XMOS Semiconductor
29 Views
TSMC, UMC
10nm, 28nm
AI Processor, Audio Processor, CPU, DSP Core, IoT Processor, Vision Processor
View Details Datasheet

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor (CIP) on Intel Agilex FPGA offers a high-performance image processing solution that shifts workload from CPUs to FPGA technology, significantly enhancing data center efficiency. Using the Intel Agilex 7 FPGAs and SoCs F-Series, which are built on the 10 nm SuperFin process, the CIP can boost image processing speed by 5 to 20 times while reducing latency by the same measure. This enhancement is crucial for accommodating the explosive growth of image data in data centers due to smartphone proliferation and extensive use of cloud storage. The Agilex FPGA's advanced features include transceiver rates up to 58 Gbps, versatile DSP blocks supporting both fixed-point and floating-point operations, and high-performance cryptographic capabilities. These features facilitate substantial performance improvements in image transcoding, thumbnail generation, and image recognition tasks, reducing total cost of ownership by enabling data centers to maintain higher compute densities with lower operational costs. Moreover, the CIP's support for mainstream image processing software such as ImageMagick and OpenCV ensures seamless integration and deployment. The FPGA's capability for remote reconfiguration allows it to adapt swiftly to custom usage scenarios without server downtimes, enhancing maintenance and operational flexibility.
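
Since the CIP is positioned as a drop-in accelerator for ImageMagick/OpenCV-style workloads, it helps to see the CPU baseline it offloads. The snippet below is an ordinary OpenCV decode-resize-encode thumbnail pass; it is not CTAccel's host API, which is not documented here.

```python
# CPU baseline of the kind of pipeline CTAccel offloads: JPEG decode -> resize -> re-encode.
import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_COLOR)                         # JPEG decode
thumb = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)       # thumbnail generation
cv2.imwrite("photo_thumb.jpg", thumb, [cv2.IMWRITE_JPEG_QUALITY, 85])   # JPEG re-encode
```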

CTAccel Ltd.
28 Views
AI Processor, DLL, Graphics & Video Modules, Vision Processor
View Details Datasheet

RayCore MC Ray Tracing GPU

The RayCore MC is a state-of-the-art real-time path and ray-tracing GPU that delivers high-definition, photo-realistic graphics with exceptional energy efficiency. Utilizing advanced path tracing technology, this GPU excels in rendering complex 3D images by simulating natural light behaviors such as global illumination and soft shadows. Its small form factor and low-power architecture make it ideal for mobile and embedded devices, supporting a broad range of high-end applications from gaming to augmented reality. Optimized with a MIMD (Multiple Instruction, Multiple Data) architecture, the RayCore MC supports independent parallel computation, enabling effective real-time path and ray tracing regardless of the graphic complexity. As a fully hardwired solution, it ensures linear scalability, enhancing graphics performance as it scales up in multi-core configurations. This GPU is designed to cater to the high demands of photo-realistic rendering in movies, education, simulations, and more. The RayCore MC uniquely supports immersive game environments and high-intensity virtual applications. Its sophisticated hardware design and support for advanced features facilitate cost-effective, low-power graphics solutions, making it an industry leader in cutting-edge GPU technology.

Siliconarts, Inc.
28 Views
All Foundries
All Process Nodes
2D / 3D, GPU, Vision Processor
View Details Datasheet

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator sits at the forefront of artificial intelligence processing, delivering robust performance for complex machine learning tasks. T-Head's Hanguang 800 excels in accelerating inference, providing the computational efficiency demanded by AI-centric applications. Designed for high-speed AI workloads, the Hanguang 800 integrates sophisticated neural network computing capabilities. Its architecture is optimized to handle large volumes of data processing, making it ideal for deep learning inference, which requires high parallel computing power. The design underscores T-Head's strength in innovation, aligning efficient power consumption with high processing speeds, thereby making the Hanguang 800 a competitive choice for next-gen AI solutions across industries in need of cutting-edge processing efficiencies.

T-Head
27 Views
SMIC, TSMC
12nm, 28nm
AI Processor, Security Processor, Vision Processor
View Details Datasheet

Neuropixels Probe

The Neuropixels Probe represents a significant breakthrough in the field of neuroscience research, offering unprecedented resolution and data gathering capabilities. Designed by Imec for use in in vivo studies, this probe enables researchers to acquire signals from thousands of neurons simultaneously, providing invaluable insights into brain function and neurology. With its high-density electrode array, the Neuropixels Probe delivers precise neural recordings, capturing a vast range of neuronal activity across different brain regions. This enables a deep and comprehensive understanding of neural pathways and functions, pivotal for advancing neurological and psychiatric research. Imec's world-leading semiconductor expertise ensures the Neuropixels Probe is equipped with the latest advancements in microfabrication technology, making it highly compatible with current laboratory equipment and methods. This innovation facilitates seamless integration with existing setups while opening new vistas for exploration in neuroscience.

Imec
27 Views
Digital Video Broadcast, Microcontroller, Sensor, Vision Processor
View Details Datasheet

PB8051 Microcontroller Core for Xilinx FPGAs

The PB8051 is at the forefront of Roman-Jones' offerings, exemplifying a sophisticated implementation of the famed 8051 Microcontroller Family, specifically tailored for integration with Xilinx FPGAs. This product is a superb emulation core that bridges legacy software with modern hardware, ensuring that engineers can utilize existing 8051 object code within contemporary FPGA environments. Designed with compatibility in mind, it incorporates essential microcontroller features such as timers and serial ports, ensuring seamless integration into a wide range of applications. Equipped to handle a multitude of FPGA models, the PB8051 is backed by a well-documented design flow that includes support for VHDL and Verilog programming languages. This flexibility not only allows for ease of use among seasoned engineers but also ensures a rapid learning curve for newcomers taking on FPGA design tasks. The core’s small footprint—around 300 slices—enables efficient usage of FPGA resources, while maintaining performance standards that are on par with the original 8051 devices. Featuring a unique architecture centered around the PicoBlaze softcore microcontroller, the PB8051 maximizes operational efficiency and minimizes resource consumption. The system is capable of executing legacy code at enhanced speeds, thanks to a sophisticated emulation strategy that increases clock rate while reducing core size. The provision for user customization, simulation tools, and a broad range of support options highlights Roman-Jones’ dedication to facilitating a seamless user experience. These characteristics make the PB8051 an invaluable asset for anyone leveraging the power of Xilinx’s FPGA technology to build the systems of tomorrow.

Roman-Jones, Inc.
27 Views
CPU, Microcontroller, Vision Processor
View Details Datasheet

CVC Verilog Simulator

The CVC Verilog Simulator from Tachyon Design Automation is a comprehensive solution for simulating electronic hardware models following the IEEE 1364-2005 Verilog HDL standard. This simulator distinguishes itself by compiling Verilog into native X86_64 machine instructions, allowing for rapid execution as a simple Linux binary. It supports both compiled and interpreted simulation modes, enabling efficient elaboration of designs and quick iteration cycles during the design phase. The simulator boasts a large gate and RTL capacity, enhanced by its 64-bit support which enables faster simulations compared to traditional 32-bit systems. To further augment its high speed, CVC integrates features like toggle coverage with per-instance and tick period controls. These allow designers to maintain oversight over signal changes and states throughout the simulation process. Additionally, CVC provides robust support for various interfaces and simulation techniques, including full PLI (programming language interface) and DPI (direct programming interface) support, ensuring seamless integration and high-speed interaction with external C/C++ applications. This simulator also supports various design state dump formats which enhance compatibility with GTKWave, a common tool used for waveform viewing.

Tachyon Design Automation
26 Views
All Foundries
All Process Nodes
AMBA AHB / APB / AXI, CPU, Input/Output Controller, Receiver/Transmitter, Vision Processor
View Details Datasheet

KL520 AI SoC

The KL520 AI SoC by Kneron marked a significant breakthrough in edge AI technology, offering a well-rounded solution with notable power efficiency and performance. This chip can function as a host or as a supplementary co-processor to enable advanced AI features in diverse smart devices. It is highly compatible with a range of 3D sensor technologies and is perfectly suited for smart home innovations, facilitating long battery life and enhanced user control without reliance on external cloud services.

Kneron
26 Views
TSMC
12nm
AI Processor, Camera Interface, Clock Generator, GPU, Vision Processor
View Details Datasheet

NeuroSense AI Chip for Wearables

The NeuroSense AI Chip, an ultra-low power neuromorphic frontend, is engineered for wearables to address the challenges of power efficiency and data accuracy in health monitoring applications. This tiny AI chip is designed to process data directly at the sensor level, which includes tasks like heart rate measurement and human activity recognition. By performing computations locally, NeuroSense minimizes the need for cloud connections, thereby ensuring privacy and prolonging battery life. The chip excels in accuracy, significantly outperforming conventional algorithm-based solutions by offering three times better heart rate accuracy. This is achieved through its ability to reduce power consumption to below 100µW, allowing users to experience extended device operation without frequent recharging. The NeuroSense supports a simple configuration setup, making it suitable for integration into a variety of wearable devices such as fitness trackers, smartwatches, and health monitors. Its capabilities extend to advanced features like activity matrices, enabling devices to learn new human activities and classify tasks according to intensity levels. Additional functions include monitoring parameters like oxygen saturation and arrhythmia, enhancing the utility of wearable devices in providing comprehensive health insights. The chip's integration leads to reduced manufacturing costs, a smaller IC footprint, and a rapid time-to-market for new products.

Polyn Technology Ltd.
26 Views
AI Processor, Peripheral Controller, Processor Core Independent, Vision Processor
View Details Datasheet

Vector Unit

The Semidynamics Vector Unit is a powerful processing element designed for applications requiring complex parallel computations such as those found in machine learning and AI workloads. Its remarkable configurability allows it to be adapted for different data types ranging from 8-bit integers to 64-bit floating-point numbers, supporting standards up to RVV 1.0. The unit can perform a wide array of operations due to its included arithmetic units for addition, subtraction, and complex tasks like multiplication and logic operations. Poised to deliver exceptional performance, the Vector Unit leverages a cross-vector-core network that ensures high bandwidth connectivity among its vector cores, capable of scaling up to 32 cores. This feature helps maximize operational efficiency, allowing tasks to be distributed across multiple cores for optimized performance and power efficiency. Its design caters to extensive data path configurations, allowing users to choose from DLEN options ranging from 128 bits to an impressive 2048 bits in width. Moreover, this Vector Unit supports flexible hardware setups by aligning vector register lengths (VLEN) with the data path requirements, offering up to an 8X ratio between VLEN and DLEN. This capability enhances its adaptability, allowing it to absorb memory latencies effectively, making it particularly suitable for AI inferencing tasks that require rapid iteration and heavy computational loads. Its integration with existing Semidynamics technologies like the Tensor Unit ensures a seamless performance boost across hardware configurations.
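
The DLEN and VLEN figures in the description translate directly into per-cycle element counts. The arithmetic below tabulates that relationship for a few combinations drawn from the quoted ranges; the specific pairings are illustrative, not a statement of which configurations Semidynamics ships.

```python
# How DLEN (datapath width) maps to elements processed per cycle, by element width.
# The combinations below are illustrative points within the ranges quoted above.
for dlen_bits in (128, 512, 2048):
    for elem_bits in (8, 32, 64):            # int8 .. 64-bit floating point
        print(f"DLEN={dlen_bits:>4}  {elem_bits:>2}-bit elements: "
              f"{dlen_bits // elem_bits:>3} per cycle")

# A VLEN of, say, 8x DLEN means each vector register holds eight cycles' worth of work,
# which is what lets the unit keep computing while memory latency is absorbed.
dlen_bits = 512
vlen_bits = 8 * dlen_bits
print(f"VLEN/DLEN ratio: {vlen_bits // dlen_bits}")
```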

Semidynamics
26 Views
AI Processor, GPU, Processor Core Independent, Vision Processor
View Details Datasheet

KL530 AI SoC

Kneron's KL530 introduces a modern heterogeneous AI chip design featuring a cutting-edge NPU architecture with support for INT4 precision. This chip stands out with its high computational efficiency and minimized power usage, making it ideal for a variety of AIoT and other applications. The KL530 utilizes an ARM Cortex M4 CPU, bringing forth powerful image processing and multimedia compression capabilities, while maintaining a low power footprint, thus fitting well with energy-conscious devices.

Kneron
25 Views
TSMC
12nm
AI Processor, Camera Interface, Clock Generator, GPU, Vision Processor
View Details Datasheet

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module by Axelera AI is a cutting-edge AI acceleration tool designed for edge applications. Its compact form factor, combined with powerful AI processing technology, enables real-time data processing and analysis. Equipped with 512MB of dedicated LPDDR4x memory, this module is capable of handling multiple data streams simultaneously. Its breakthrough digital in-memory compute architecture facilitates remarkable energy efficiency, consuming far less power than traditional GPUs while maintaining top-notch performance standards. This module is ideal for applications requiring high-speed computation, such as computer vision tasks involving multi-channel video analytics and quality inspections, thereby enhancing operational efficiency and reducing latency in decision-making processes. Whether deployed in retail, security, or industrial settings, the Metis AIPU M.2 Accelerator Module provides users with significant performance gains at a lower cost, facilitating seamless integration into existing systems. With a practical design for next-generation form factor M.2 sockets, this accelerator module opens the way for innovative AI-enabled solutions in diverse contexts, promising scalability and adaptability to future technological advancements.

Axelera AI
25 Views
TSMC
28nm
AI Processor, AMBA AHB / APB / AXI, Processor Core Dependent, Vision Processor
View Details Datasheet

SiFive Intelligence X280

Designed to meet the future needs of AI technology, the SiFive Intelligence family introduces AI dataflow processors with scalable vector compute capabilities. The X280 model emphasizes high-performance scalar and vector computing suitable for AI workloads, data flow management, and complex processing tasks. By integrating SiFive Matrix Engine technology, the X280 enhances compute capabilities with a 512-bit vector length ensuring efficient computation flows. The platform is scalable, supporting integrations from entry-level to high-performance needs, whilst maintaining a focus on power efficiency and footprint reduction.

SiFive, Inc.
25 Views
AI Processor, Multiprocessor / DSP, Processor Cores, Security Subsystems, Vision Processor
View Details Datasheet

Tianqiao-80 High-Efficiency 64-bit RISC-V CPU

The Tianqiao-80 is optimized for high-efficiency applications, bringing a balance between performance and power consumption. This CPU core targets scenarios that demand robust computational capabilities with an emphasis on efficiency, such as mobile devices, artificial intelligence, and automotive systems. Tailored specifically for high-efficiency needs, it provides the computational power needed while maintaining energy conservation. This 64-bit RISC-V CPU core offers extensive application coverage, ranging from desktop solutions to AI-driven functionalities. It is designed to meet the requirements of both mobile computing and desktop environments, ensuring that it supports modern, demanding applications efficiently. The architecture focuses on performance without compromise to energy consumption, making it a versatile option across various market segments. Part of the robust Tianqiao lineup, this core can be deployed in numerous configurations, supporting the expansion of IoT and AI technologies. It reflects StarFive's commitment to providing powerful yet energy-smart solutions that address evolving technological needs.

StarFive
25 Views
TSMC
28nm
CPU, IoT Processor, Processor Cores, Vision Processor
View Details Datasheet

Tianqiao-90 High-Performance RISC-V CPU Core

The Tianqiao-90 is a robust RISC-V CPU core designed for high-end applications, including data centers and advanced computation scenarios. It incorporates leading-edge design features such as superscalar and deep out-of-order execution, making it suitable for performance-intensive environments. The architecture supports standard RISC-V RV64GCBH extensions and has undergone substantial optimizations for performance and frequency, achieving SPECint2006 scores of 9.4 per GHz. Developed with multi-core scalability in mind, Tianqiao-90 allows configurations ranging from single-core to quad-core, enhancing its versatility for various applications. This CPU core simplifies SoC development processes and can be widely utilized across sectors like machine learning, mobile devices, and network communications. Its adaptability for memory coherence makes it ideal for multi-core systems. With a fabrication process at TSMC's 12nm node, the Tianqiao-90 offers exemplary efficiency in power consumption and area effectiveness, crucial for enterprise-level computation. Its high-frequency operation, reaching up to 2GHz, provides the necessary power for demanding computing tasks, ensuring swift and reliable performance in all deployed scenarios.

StarFive
25 Views
TSMC
28nm
AI Processor, CPU, IoT Processor, Processor Cores, Vision Processor
View Details Datasheet

KL720 AI SoC

Designed for high power efficiency, the KL720 AI SoC achieves a superior performance-per-watt ratio, positioning it as a leader in energy-efficient edge AI solutions. Built for use cases prioritizing processing power and reduced costs, it delivers outstanding capabilities for flagship devices. The KL720 is particularly well-suited for IP cameras, smart TVs, and AI glasses, accommodating high-resolution images and videos along with advanced 3D sensing and language processing tasks.

Kneron
24 Views
TSMC
12nm
AI Processor, Audio Interfaces, AV1, GPU, Image Conversion, Vision Processor
View Details Datasheet

Smart Vision Processing Platform - JH7110

The JH7110 carries forward StarFive's legacy with enhancements in smart vision processing platforms, addressing the ever-evolving needs of AI-driven visual applications. Its architectural sophistication enables it to adeptly manage complex visual data processing, supporting intelligent systems that rely on quick and accurate vision-based insights. This platform is suited for high-demand environments where the precision and speed of visual data handling are paramount. By incorporating advanced processing units capable of high throughput, the JH7110 meets the challenging requirements of modern AI workloads seamlessly. This integration supports a broad spectrum of applications, from robotics to immersive visual technologies. StarFive has crafted the JH7110 to act as a pivotal component in smart environments, capitalizing on superior processing power and efficiency to deliver outstanding performance across varied intelligent systems. By efficiently managing energy consumption while delivering high-speed data processing, it supports diverse applications in future-ready technology landscapes.

StarFive
24 Views
AI Processor, JPEG, Vision Processor
View Details Datasheet

aiData

aiData is aiMotive's sophisticated data pipeline system for automated driving applications. This comprehensive toolchain encompasses the entire data management lifecycle, from collection and annotation to synthetic data generation and evaluation. One of its key components, the aiData Recorder, is tailored to capture critical reference data in a synchronized manner, addressing edge cases that are vital for developing robust automated driving solutions. The platform also features an Auto Annotator, leveraging AI to perform automated annotation with precision, while minimizing storage and computational demands. The addition of aiFab to the workflow allows for high-fidelity sensor simulations that replicate varied real-world conditions through domain randomization, effectively refining the quality and efficacy of synthetic training data. Integrated metrics evaluation is another highlight, offering real-time insights into the performance and development progress of neural network algorithms against predefined benchmarks. Aiding the continuous improvement of AD systems, aiData's versioning system precisely measures the impact of newly added data, ensuring ongoing relevance and optimization across the entire pipeline.

aiMotive
23 Views
AI Processor, Embedded Security Modules, Vision Processor
View Details Datasheet

AON1100

The AON1100 is a groundbreaking edge AI chip that stands out in the realm of power efficiency and performance. Designed for voice activation and sensor fusion, it fuses AONVoice and AONSens technologies in one compact device. This chip significantly enhances the capabilities of smart devices, catering to applications in both the automotive and industrial IoT sectors. With a focus on energy efficiency, the AON1100 operates on an ultra-low power budget, ensuring devices remain always-on yet highly efficient. This chip integrates multiple processing units, including Neuromorphic Processing Units (NPU), RISC-V, and Hardware DSP, to achieve superior computational prowess. Its advanced sensor fusion and voice detection capabilities present a shift in automotive applications, promoting enhanced safety and operational efficiency. By offering unrivaled accuracy even in noisy environments, the AON1100 raises the bar for edge AI solutions across various markets.

AONDevices, Inc.
23 Views
AI Processor, Audio Processor, Input/Output Controller, Multiprocessor / DSP, Vision Processor
View Details Datasheet

RISC-V CPU IP NI Class

Specializing in artificial intelligence and advanced driver-assistance systems, the NI Class processor is engineered for high-performance computing and communication tasks. With an architecture designed for robust functionality and scalability, it supports extensive RISC-V extensions enhancing processing power and flexibility. This processor class is tailored for applications demanding superior computational capabilities and efficient data processing, such as AI-driven analytics and complex communication protocols. The NI Class features a comprehensive suite of development tools and ecosystem resources, ensuring developers can leverage its full capabilities. This includes optimizing performance for machine learning algorithms and sophisticated telecommunication infrastructures, making it indispensable in the era of intelligent systems and devices.

Nuclei System Technology
23 Views
3GPP-LTE, CPU, Processor Cores, Vision Processor
View Details Datasheet

Yitian 710 Processor

Yitian 710 Processor marks a significant milestone as T-Head's flagship offering. Designed internally by T-Head, this processor features a cutting-edge architecture, integrating high-performance capabilities with broad bandwidth support. It is compatible with the Armv9 architecture, showcasing T-Head's commitment to forward-thinking technology solutions. This processor utilizes a 2.5D packaging framework, which includes two dies totaling roughly 60 billion transistors, significantly enhancing computational throughput. It comprises 128 high-efficiency Armv9 CPU cores, each equipped with substantial caches, facilitating rapid data processing and storage. Specifically, each CPU core is accompanied by 64KB of instruction cache, 64KB of data cache, and a larger 1MB secondary cache, while the chip itself features 128MB of system-level cache. Memory support for the Yitian 710 is robust, featuring 8-channel DDR5 support with peak bandwidth reaching 281GB/s. Its I/O system incorporates 96 PCIe 5.0 lanes, with a bidirectional theoretical bandwidth of up to 768GB/s, catering to high-throughput applications in cloud computing and data-intensive environments.
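
The bandwidth figures quoted above can be sanity-checked from first principles. The arithmetic below assumes DDR5-4400 with 8-byte channels and the common simplification of about 4 GB/s per PCIe 5.0 lane per direction (encoding overhead ignored); both assumptions are mine, chosen because they reproduce the 281 GB/s and 768 GB/s numbers.

```python
# Sanity check of the quoted bandwidth figures.
# Assumptions: DDR5-4400, 64-bit channels; ~4 GB/s per PCIe 5.0 lane per direction.
ddr5_channels = 8
ddr5_transfers_per_s = 4400e6
bytes_per_transfer = 8
ddr_bw = ddr5_channels * ddr5_transfers_per_s * bytes_per_transfer
print(f"DDR5 peak: {ddr_bw / 1e9:.0f} GB/s")                # ~282 GB/s vs. 281 GB/s quoted

pcie_lanes = 96
bytes_per_lane_per_s = 4e9                                  # PCIe 5.0, overhead ignored
pcie_bw = pcie_lanes * bytes_per_lane_per_s * 2             # bidirectional
print(f"PCIe 5.0 bidirectional: {pcie_bw / 1e9:.0f} GB/s")  # 768 GB/s
```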

T-Head
23 Views
GLOBALFOUNDRIES, TSMC
7nm, 12nm, 16nm
AI Processor, Audio Processor, CPU, Multiprocessor / DSP, Vision Processor
View Details Datasheet

CTAccel Image Processor on Alveo U200

The CTAccel Image Processor on Alveo U200 provides a robust image processing solution by shifting demanding computational workflows from the CPU to FPGA. Specifically designed to handle massive data throughput efficiently, the CIP elevates server performance by up to 6 times while simultaneously reducing latency fourfold. This jump in performance is critical for managing the vast influx of mobile-generated image data within Internet Data Centers (IDCs). Utilizing the FPGA as a heterogeneous coprocessor, the CIP leverages the Alveo U200 platform to enhance tasks such as JPEG decoding, resizing, and color adjustments. The technology removes bottlenecks associated with conventional processing architectures, making it ideal for environments where quick data processing and minimal latency are imperative. The FPGA's ability to undergo remote reconfiguration supports flexible deployment and is designed to maximize operational uptime. The CIP is compatible with popular software libraries like OpenCV and ImageMagick, ensuring an easy transition from traditional software-based image processing to this high-performance alternative. By deploying CIP, data centers can drastically increase compute density, which translates into lower hardware, power, and maintenance costs.

CTAccel Ltd.
22 Views
AI Processor, DLL, Graphics & Video Modules, Vision Processor
View Details Datasheet

Edge AI Neural Network Fabric 2.0

Edge AI Neural Network Fabric 2.0 by Uniquify is a next-generation architectural framework tailored for implementing contemporary neural networks. This innovative fabric supports numerous neural network variants, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Feedforward Neural Networks (FNNs), Generative Adversarial Networks (GANs), Autoencoders (AEs), and Batch Normalization techniques. A primary focus of this advanced neural network fabric is to deliver high efficiency in terms of area and power consumption, making it a cost-effective solution for edge AI applications. With the ability to adapt to various algorithmic demands, it enhances not just performance but also scalability for developers looking to harness the power of AI at the edge. The Edge AI Fabric 2.0 is designed to integrate seamlessly into diverse systems while ensuring minimal power draw and footprint. Its architectural advantages make it suitable for deployment in applications where compact size and energy efficiency are crucial, such as in IoT devices and mobile platforms.

Uniquify, Inc.
22 Views
AI Processor, Audio Processor, Vision Processor
View Details Datasheet

Heimdall Toolbox - Low Power Image Processing

Heimdall Toolbox specializes in low-power image processing, designed for scenarios necessitating immediate image analysis with minimal energy usage. Its core function is processing low-resolution images at 64x64 pixels, enabling fast classification and detection suitable for object tracking and motion interpretation. This makes Heimdall ideal for industrial and IoT applications where quick visual data assessment is essential. The toolbox is equipped with a high-level image signal processor, achieving significant efficiency by reducing power consumption and operational times. Utilizing a minimal silicon footprint, it integrates seamlessly into compact environments, supporting energy-efficient IoT devices without compromising performance. It can operate autonomously, powered by energy-harvesting techniques, which eliminates the need for batteries and enhances its practicality in off-grid or remote monitoring applications. Heimdall finds uses in automation, surveillance, smart agriculture, and various industrial applications requiring visual data processing and analytics. This tool excels in offering a sustainable solution for image-based sensing and control tasks, maintaining high accuracy and scalability across multiple sectors.
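
Since the toolbox is built around 64x64-pixel analysis for detection and motion interpretation, the sketch below shows the generic idea of downscaling frames to that resolution and flagging motion by frame differencing. It is a conceptual stand-in using OpenCV on a host CPU, not Presto Engineering's ISP pipeline, and the threshold value is an arbitrary illustration.

```python
# Conceptual 64x64 motion flagging by frame differencing; not the Heimdall ISP itself.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                 # any camera or video file
_, prev = cap.read()
prev = cv2.resize(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (64, 64))

for _ in range(300):                      # bounded run for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    cur = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (64, 64))
    motion = np.mean(cv2.absdiff(cur, prev))   # mean absolute change over the 64x64 frame
    if motion > 8:                             # illustrative threshold
        print("motion detected:", motion)
    prev = cur
cap.release()
```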

Presto Engineering
22 Views
2D / 3D, JPEG, MIL-STD-1553, NTSC/PAL/SECAM, RF Modules, Vision Processor
View Details Datasheet

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

The Tianqiao-70 is crafted for low-power applications, delivering robust computing capabilities suitable for commercial-grade environments. Ideal for contexts requiring energy efficiency without sacrificing performance, this CPU caters to the needs of mobile, artificial intelligence, and desktop applications. Its 64-bit architecture is optimized to handle computational tasks with minimal energy expenditure. Perfectly positioned within the commercial market, the Tianqiao-70 provides essential features to meet various demands, including those of portable devices and energy-conscious desktops. It ensures that performance demands are met while maintaining an eco-friendly approach to energy consumption. With an emphasis on low power, this CPU core helps in the development of competitive products aimed at a broad spectrum of technology sectors. By prioritizing efficient power usage, it's an excellent choice for developers looking to innovate in resource-constrained areas while still achieving significant computational goals.

StarFive
22 Views
TSMC
28nm
CPU, IoT Processor, Processor Cores, Vision Processor
View Details Datasheet

Catalyst-GPU

Catalyst-GPU is a line of NVIDIA-based PXIe/CPCIe GPU modules designed for cost-effective compute acceleration and advanced graphics in signal processing and ML/DL AI applications. The Catalyst-GPU leverages the powerful NVIDIA Quadro T600 and T1000 GPUs, offering compute capabilities previously unavailable on PXIe/CPCIe platforms. With multi-teraflop performance, it enhances the processing of complex algorithms in real-time data analysis directly within test systems. The GPU's integration facilitates exceptional performance improvements for applications like signal classification, geolocation, and sophisticated semiconductor and PCB testing. Catalyst-GPU supports popular programming frameworks, including MATLAB, Python, and C/C++, offering ease-of-use across Windows and Linux platforms. Additionally, the Catalyst-GPU's comprehensive support for arbitrary length FFT and DSP algorithms enhances its suitability for signal detection and classification tasks. It's available with dual-slot configurations, providing flexibility and high adaptability in various chassis environments, ensuring extensive applicability to a wide range of modern testing and measurement challenges.
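
The module targets FFT-heavy signal detection and classification. As a framework-neutral reference point (NumPy on the CPU rather than any Catalyst-specific or GPU library), the sketch below detects a tone buried in noise by locating the peak of its power spectrum; the sample rate and tone frequency are illustrative values.

```python
# CPU reference for the FFT-based detection workloads described above (NumPy only;
# a GPU deployment would swap in a CUDA-backed FFT).
import numpy as np

fs = 1_000_000                                   # 1 MS/s sample rate (illustrative)
t = np.arange(4096) / fs
tone = 0.5 * np.sin(2 * np.pi * 120_000 * t)     # 120 kHz tone of interest
noise = np.random.default_rng(0).normal(0.0, 1.0, t.size)
x = tone + noise

spectrum = np.abs(np.fft.rfft(x * np.hanning(t.size))) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = np.argmax(spectrum[1:]) + 1               # skip the DC bin
print(f"detected peak near {freqs[peak] / 1e3:.1f} kHz")
```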

RADX Technologies, Inc.
21 Views
AI Processor, DSP Core, GPU, Security Processor, Vision Processor
View Details Datasheet

CTAccel Image Processor on Intel PAC

The CTAccel Image Processor (CIP) on Intel PAC platform leverages FPGA technology to offload image processing workloads from CPUs, thereby significantly boosting data center efficiency. By transferring tasks such as JPEG transcoding and thumbnail generation onto the FPGA, the CIP increases image processing speeds by up to 5 times and reduces latency by 2 to 3 times, promoting higher throughput and reducing total costs dramatically. The Intel PAC enables this swift processing by utilizing advanced FPGA capabilities, which support massively data-parallel processing. This effectively addresses the limitations of traditional CPU and GPU architectures in handling intricate image processing tasks, particularly those requiring high parallelism. Additionally, the CIP ensures full compatibility with leading image processing libraries, including ImageMagick, OpenCV, and GraphicsMagick, which facilitates hassle-free integration into existing workflows. The use of Partial Reconfiguration technology allows users to reconfigure FPGA processing tasks dynamically, ensuring maximum performance adaptability without necessitating server reboots, thus enhancing operational ease and efficiency.

CTAccel Ltd.
21 Views
AI Processor, DLL, Graphics & Video Modules, Vision Processor
View Details Datasheet

Polar ID Biometric Security System

Polar ID is a revolutionary face authentication system that leverages advanced meta-optic technology. Designed for smartphone integration, this system captures the polarization signature of a human face, providing a new layer of biometric security. The system's high resolution surpasses conventional methods, ensuring robust performance even with partial obstructions like sunglasses or masks. By operating within the near-infrared spectrum, Polar ID maintains accuracy across diverse lighting conditions, presenting a versatile solution for secure digital interactions. This system eliminates the need for complex optical modules traditionally used in face authentication, such as structured light systems. Polar ID provides substantial reductions in the component size and cost, enabling widespread implementation even in modestly priced devices. This bio-authentication technology is powered by a near-infrared polarization camera and an active illuminator, ensuring secure user identification. The Polar ID's integration with advanced Snapdragon platforms further enhances its capabilities, delivering a seamless experience for manufacturers and end users. By offering superior security at a lower price point, Polar ID paves the way for comprehensive facial recognition capabilities across a wide range of devices, pushing the boundaries of user convenience and security.

Metalenz Inc.
21 Views
Cryptography Cores, Photonics, Sensor, Vision Processor
View Details Datasheet

Spiking Neural Processor T1 - Ultra-Low-Power Microcontroller for Sensing

The Spiking Neural Processor T1 is a microcontroller tailored for applications that require always-on sensing at ultra-low power. Combining spiking neural networks (SNNs) with a RISC-V processor core, the T1 delivers strong signal processing performance on a single chip, enabling efficient, low-latency processing of sensor data in power-limited environments. The processor excels at pattern recognition tasks thanks to its analog mixed-signal neuromorphic architecture. To cover a variety of application domains, the T1 offers versatile interfaces, including QSPI, I2C, UART, JTAG, and GPIO, along with a front-end ADC, ensuring compatibility with numerous sensor types used in devices that demand high accuracy and very low energy consumption, such as wearables and remote sensing devices. Its compact footprint and low energy requirements make it well suited to pervasive sensing tasks that call for continuous pattern recognition. A comprehensive evaluation kit and development tools such as the Talamo SDK further ease deployment.
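
To illustrate the event-driven style of computation behind spiking neural networks in general, the sketch below simulates a single leaky integrate-and-fire neuron. It is a conceptual toy only and does not model the T1's analog mixed-signal hardware or the Talamo SDK; all constants and the input signal are hypothetical.

```python
# Toy leaky integrate-and-fire (LIF) neuron: illustrates event-driven spiking
# computation in general, not the T1's hardware or SDK.
import numpy as np

dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0
v = 0.0
input_current = np.random.rand(1000)        # placeholder sensor-driven input
spikes = []

for step, i_in in enumerate(input_current):
    v += dt / tau * (-v + i_in * 25.0)      # leaky integration of the input
    if v >= v_thresh:                       # threshold crossing emits a spike event
        spikes.append(step)
        v = v_reset

print(f"{len(spikes)} spike events over {len(input_current)} timesteps")
```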

Innatera Nanosystems
21 Views
AI Processor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Vision Processor, Wireless Processor
View Details Datasheet

Cottonpicken DSP Engine

The Cottonpicken DSP Engine is a sophisticated digital signal processing solution offering extensive functionality for image manipulation. Its capabilities include Bayer pattern decoding into formats like YUV 4:2:2 and RGB, as well as programmable delays for conversion operations. This DSP engine supports various YUV conversions and 3x3 or 5x5 filter kernels, making it versatile for advanced image processing tasks. Moreover, it is designed to handle full pixel data clock speeds up to 150 MHz, adjusting its performance to suit different platform requirements. While available as part of an integrated development package, it remains a closed-source netlist object, emphasizing section5’s proprietary approach to innovation in DSP technology.
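
As a reference for the color-space math implied by the engine's YUV output formats, the sketch below applies the standard full-range BT.601 RGB-to-YCbCr matrix in NumPy. The engine's internal coefficients, Bayer demosaic, and filter kernels are proprietary and may differ.

```python
# Standard full-range BT.601 RGB -> YCbCr conversion, shown only to illustrate
# the color-space math behind YUV output formats; the Cottonpicken engine's own
# pipeline and coefficients are not disclosed and may differ.
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 float array in [0, 1]; returns YCbCr with chroma centred at 0.5."""
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 0.5            # centre the chroma channels
    return ycbcr

frame = np.random.rand(4, 4, 3)      # placeholder RGB tile
print(rgb_to_ycbcr(frame)[..., 0])   # luma plane
```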

section5
20 Views
DLL, DSP Core, Vision Processor
View Details Datasheet

Smart Vision Processing Platform - JH7100

The JH7100 is StarFive's smart vision processing platform, built for the advanced visual processing required by AI and machine learning applications. Its architecture integrates the core vision processing capabilities needed for high-performance visual computing and intelligent vision solutions. Designed for intensive vision workloads, the JH7100 handles demanding image and video processing, and its processing cores are tuned to maximize performance while maintaining power efficiency, making it a strong fit for both current and emerging smart visual applications. StarFive has structured the platform for intelligent vision scenarios ranging from augmented reality to deep-learning-based visual interfaces, and its deployment across a variety of fields reflects StarFive's position in intelligent, high-performance vision solutions.

StarFive
20 Views
AI Processor, JPEG, Vision Processor
View Details Datasheet

WiseEye2 AI Solution

The WiseEye2 AI solution from Himax represents a significant leap in ultra-low-power AI processing, particularly suited for IoT devices. It combines an advanced AI microcontroller with proprietary CMOS image sensors, enabling continuous operation and data processing at remarkably low power levels. This makes it a strong fit for applications requiring persistent AI-driven insights, such as smart home devices, consumer electronics, and industrial sensing, where battery conservation is critical. Built with Arm's Cortex-M55 CPU and the Ethos-U55 NPU, WiseEye2 supports complex neural network computations while maintaining high energy efficiency, allowing it to run more sophisticated algorithms and deliver precise real-time processing and actionable insights. The architecture lets devices operate autonomously for extended periods on a minimal power supply. Robust security features, based on industry-standard cryptography, safeguard sensitive data, making the solution well suited to applications where privacy and data integrity are paramount. Himax's WiseEye2 continues the company's tradition of pioneering technology that pushes past conventional limitations, enabling smarter, more efficient endpoint AI.
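
A typical first step when targeting a Cortex-M55 plus Ethos-U55 device is full-integer (int8) quantization with the TensorFlow Lite converter, sketched below with a placeholder model and calibration data. Compiling the result for the Ethos-U55 and deploying it with Himax's own tooling are separate, vendor-specific steps not shown here.

```python
# Sketch of the usual first step for Cortex-M55 + Ethos-U55 targets: full-integer
# (int8) quantization with the TensorFlow Lite converter. Model and calibration
# data are placeholders; Ethos-U55 compilation and Himax deployment are not shown.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([                      # placeholder vision model
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    for _ in range(100):                           # fake calibration images
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("model_int8.tflite", "wb").write(converter.convert())
```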

Himax Technologies, Inc.
20 Views
AI Processor, Embedded Security Modules, Processor Cores, Security Subsystems, Vision Processor
View Details Datasheet

aiWare

aiWare is aiMotive's trailblazing hardware solution for automotive AI, providing industry-leading neural network acceleration tailored to meet the challenges of next-generation automotive applications. The aiWare architecture achieves remarkable efficiency and scalability, supporting implementations from edge processing to high-performance centralized systems, and delivers up to 256 effective TOPS per core. Consistent with automotive-grade standards, aiWare's hardware has received ISO 26262 ASIL B certification, ensuring safety and reliability in automotive environments. Its deterministic architecture minimizes external memory traffic, optimizing power consumption and system performance. aiWare's embedded capabilities extend across a wide range of neural network models, including CNNs, LSTMs, RNNs, and Transformer Networks, making it a versatile core for varied AI workloads. Distinctive features like aiWare Studio offer a sophisticated SDK for NN optimization, empowering designers with greater flexibility in neural network implementations. The platform encourages adaptability through its offline performance estimation, allowing 90% of workload optimizations to occur without immediate hardware access. aiWare thus stands out as an invaluable asset for achieving efficient, rapid AI deployments in the automotive sector.
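
As a back-of-envelope illustration of how an "effective TOPS" figure relates to frame rate, the sketch below divides sustained compute by a per-frame operation count. The network size and utilization values are hypothetical examples, not aiMotive benchmark data.

```python
# Back-of-envelope throughput estimate relating "effective TOPS" to frame rate.
# The per-frame operation count and utilization below are hypothetical examples,
# not aiMotive benchmark data.
effective_tops = 256            # per-core figure quoted for aiWare
ops_per_frame = 50e9            # hypothetical: a 50 GOP/frame perception network
utilization = 0.7               # hypothetical sustained utilization

frames_per_second = effective_tops * 1e12 * utilization / ops_per_frame
print(f"~{frames_per_second:.0f} frames/s for a {ops_per_frame / 1e9:.0f} GOP network")
```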

aiMotive
19 Views
AI Processor, Cryptography Cores, Processor Core Dependent, Vision Processor
View Details Datasheet

CTAccel Image Processor on AWS

The CTAccel Image Processor tailored for AWS takes advantage of FPGA technology to offer superior image processing capabilities on the cloud platform. Available as an Amazon Machine Image, the CIP for AWS offloads CPU tasks to FPGA, thereby boosting image processing speed by 10 times and reducing computational latency by a similar factor. This performance leap is particularly beneficial for cloud-based applications that demand fast, efficient image processing. By utilizing FPGA's reconfigurable architecture, the CIP for AWS enhances real-time processing tasks such as JPEG thumbnail generation, watermarking, and brightness-contrast adjustments. These functions are crucial in managing the vast image data that cloud services frequently encounter, optimizing both service delivery and resource allocation. The CTAccel solution's seamless integration within the AWS environment allows for immediate deployment and simplification of maintenance tasks. Users can reconfigure the FPGA remotely, enabling a flexible response to varying workloads without disrupting application services. This adaptability, combined with the CIP's high efficiency and low operational cost, makes it a compelling choice for enterprises relying on cloud infrastructure for high-data workloads.
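
The sketch below reproduces the watermarking and brightness/contrast adjustments named above with plain OpenCV on the CPU, as a reference for the operations the FPGA path accelerates. The CIP's AMI-based offload itself is not shown, and the file names and blend parameters are hypothetical.

```python
# CPU-side sketch of the watermark and brightness/contrast operations the CIP for
# AWS accelerates. Plain OpenCV; the AMI's FPGA-offloaded path is not shown.
import cv2

img = cv2.imread("photo.jpg")
logo = cv2.resize(cv2.imread("logo.png"), (128, 128))

# Brightness/contrast: out = alpha * img + beta
adjusted = cv2.convertScaleAbs(img, alpha=1.2, beta=10)

# Simple watermark: blend the logo into the top-left corner
h, w = logo.shape[:2]
adjusted[:h, :w] = cv2.addWeighted(adjusted[:h, :w], 0.7, logo, 0.3, 0)

cv2.imwrite("photo_out.jpg", adjusted, [cv2.IMWRITE_JPEG_QUALITY, 90])
```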

CTAccel Ltd.
19 Views
AI Processor, DLL, Graphics & Video Modules, Vision Processor
View Details Datasheet