The Origin E2 offers a balanced AI inference solution designed for power and area efficiency, suited to smartphones, edge nodes, and other consumer devices. Employing an efficient packet-based architecture, the E2 supports a range of neural networks, including RNNs, CNNs, LSTMs, and DNNs, at performance points from 1 to 20 TOPS. The core advantage of the E2 lies in handling extensive AI processing with minimal latency and reduced power consumption, avoiding typical hardware-specific constraints. This flexibility lets designers run trained AI models without compromising their precision or accuracy, which is crucial for maintaining high-quality AI functions. With its scalable architecture, the E2 manages resources efficiently and sustains high utilization rates, delivering strong performance at a power efficiency of 18 TOPS/W. This makes it a good fit for applications requiring consistent AI processing, such as smartphones and other edge devices where space and power are limited.
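As a rough sanity check, a TOPS/W efficiency figure implies a sustained power draw at each performance point. The helper below is an illustrative sketch using only the numbers quoted above (not vendor data):

```python
def implied_power_watts(tops: float, tops_per_watt: float) -> float:
    """Sustained power draw implied by a throughput and efficiency figure."""
    return tops / tops_per_watt

# Origin E2 at its top configuration: 20 TOPS at the quoted 18 TOPS/W
print(f"{implied_power_watts(20, 18):.2f} W")  # prints "1.11 W"
```

At the 1 TOPS low end, the same arithmetic gives roughly 56 mW, which is consistent with the power-constrained device classes the listing targets.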
The Origin E8 NPU is built for performance-intensive applications requiring exceptional throughput and low latency, such as data centers and advanced automotive systems. Delivering up to 128 TOPS per core, and PetaOps-level performance in multi-core configurations, the E8 addresses sophisticated AI tasks with high efficiency. Its packet-based architecture achieves parallel execution, improving resource allocation and performance predictability without altering pre-trained neural network models. This design reduces the need for hardware-specific optimization, improving system scalability and reducing complexity. The E8 excels in scenarios requiring robust AI processing, such as autonomous driving and complex data center operations. Highly customizable, the IP core can be aligned with specific processing needs, ensuring that AI models execute with precision and minimal resource usage even under demanding conditions, as reflected in its efficiency of 18 TOPS/W.
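To make the single-core versus multi-core claim concrete, a back-of-the-envelope core count for a PetaOps-class target can be derived from the quoted per-core figure. This is an illustrative sketch assuming ideal linear scaling, which real integrations may not achieve:

```python
import math

def cores_for_target(target_tops: float, tops_per_core: float) -> int:
    """Minimum cores to meet a throughput target, assuming linear scaling."""
    return math.ceil(target_tops / tops_per_core)

# 1 PetaOps = 1000 TOPS; Origin E8 quoted at up to 128 TOPS per core
print(cores_for_target(1000, 128))  # prints 8
```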
The Origin E1 neural engines are designed for applications requiring consistent low-power AI processing, particularly in consumer electronics such as home appliances and smartphones. Leveraging a packet-based architecture, the E1 minimizes power consumption and system cost while delivering 1 TOPS of performance. The architecture allows parallel execution across neural network layers, optimizing resource utilization and performance predictability. The E1 is tailored for always-sensing use cases, such as cameras in smart devices that continuously process visual data to enhance user interaction. These capabilities require precise power management, which the E1 achieves by operating without external memory, reducing both the power footprint and silicon area. Delivering up to 18 TOPS per watt, the E1 LittleNPU is optimized for the neural networks most commonly deployed by leading OEMs. Its deterministic execution ensures all tasks complete without data loss or back pressure, making it a natural choice for power-sensitive applications.
Expedera's TimbreAI T3 is an ultra-low-power AI inference engine designed for audio noise reduction in power-constrained devices such as wireless headsets. Capable of 3.2 GOPS while consuming less than 300 microwatts, the core is especially suitable for audio applications where power efficiency is crucial. The T3's design requires no external memory, reducing power consumption and silicon area, which is critical for portable devices that demand extended battery life. Implemented as soft IP, TimbreAI is portable across silicon processes, allowing quick deployment in a range of smart audio devices. Optimized for audio neural networks, it integrates without requiring changes to pre-trained models, preserving their accuracy and performance. TimbreAI T3 ensures AI processing executes with minimal power while maintaining the functionality and quality users expect from high-end audio systems.
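The two quoted figures (3.2 GOPS, under 300 µW) can be combined into an energy-per-operation estimate, a common way to compare ultra-low-power inference engines. The sketch below uses only those numbers and is an upper bound, since 300 µW is a ceiling:

```python
def energy_per_op_fj(power_watts: float, ops_per_second: float) -> float:
    """Energy per operation in femtojoules implied by power and throughput figures."""
    return power_watts / ops_per_second * 1e15

# TimbreAI T3: under 300 microwatts at 3.2 GOPS
print(energy_per_op_fj(300e-6, 3.2e9))  # ~93.75 fJ per operation (upper bound)
```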
Origin E6 is a cutting-edge NPU IP tailored for high-demand edge inference, offering performance from 16 to 32 TOPS. It supports large AI models such as image transformers and stable diffusion, as well as traditional networks like CNNs and LSTMs. Its packet-based architecture enables efficient resource use, with parallel processing across neural network layers to raise performance. Designed for versatility, the E6 covers a broad range of applications including smartphones, vehicles, and AR/VR systems, supporting both standard and custom neural networks. The architecture is crafted for maximum processing efficiency, sustaining up to 90% processor utilization and thereby significantly reducing power and memory overheads. Customization is a major highlight of the E6: it adapts to specific customer PPA (power, performance, area) requirements to provide the most relevant configuration. This flexibility ensures the E6 meets the demands of both current and emerging AI networks while maintaining power efficiency of 18 TOPS/W across a variety of cutting-edge devices.
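The utilization claim translates peak throughput into effective (sustained) throughput. A minimal illustrative sketch, using only the figures quoted above:

```python
def effective_tops(peak_tops: float, utilization: float) -> float:
    """Effective throughput given peak throughput and sustained utilization."""
    return peak_tops * utilization

# Origin E6 top configuration: 32 TOPS peak at the quoted 90% utilization
print(effective_tops(32, 0.90))  # prints 28.8
```

Utilization, not peak TOPS, is what a workload actually sees, which is why the listing highlights the 90% figure alongside the raw throughput range.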