
Expedera

Expedera is at the forefront of semiconductor IP design, specializing in advanced AI accelerators. Its innovative approach to neural processing units (NPUs) distinguishes it in the industry: the NPUs are designed to enhance the performance, efficiency, and adaptability of AI applications across sectors. By providing highly efficient, low-latency, power-optimized NPUs, Expedera addresses the evolving needs of modern consumer electronics and automotive systems.

The company's flagship technology is built on a unique packet-based architecture that maximizes processor utilization and optimizes memory management. This architecture supports a diverse array of neural network types through parallel execution, eliminating the need for hardware-specific tweaks. Combined with Expedera's ability to tailor solutions to specific customer needs, this makes its NPUs highly adaptable and scalable, delivering unmatched performance per watt while significantly reducing system power and area overhead.

Expedera's Origin line of products exemplifies this commitment to innovation and efficiency. These NPUs scale from small edge devices to powerful automotive applications and are field-proven, with millions of units already deployed globally. A full software stack supports rapid deployment of AI models, making it easy for developers to implement their solutions. Whether for smart home devices, automotive systems, or industrial applications, Expedera's NPUs deliver the reliability and performance needed for next-generation AI deployments.


5 IPs available

Origin E1

The Origin E1 neural engines by Expedera redefine efficiency and customization for low-power AI. Built for edge devices such as home appliances and security cameras, they target ultra-low-power applications that demand continuous sensing, operating within a 10-20 mW power envelope while keeping data on-chip, which eliminates external memory access and the security exposure it brings. The packet-based architecture enables parallel layer execution, improving resource utilization. Designed as a close fit for dedicated AI functions, the Origin E1 can be tailored to support specific neural networks efficiently, reducing silicon area and system cost. It supports a range of network types, from CNNs to RNNs, and is among the most power-efficient engines in the industry at 18 TOPS per watt. A full TVM-based software stack eases integration and performance optimization across customer platforms. With support for a wide array of data types and networks and sustained utilization averaging 80%, the E1 is a reliable choice for OEMs that need high performance in always-sensing applications, offering a competitive edge in both power efficiency and security.
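The efficiency and power figures quoted above imply a simple throughput budget. A minimal sketch of that arithmetic (the 15 mW operating point is an assumed midpoint of the quoted 10-20 mW range, not a vendor figure):

```python
# Effective throughput of an NPU given its efficiency and power budget.
# Figures from the Origin E1 description: 18 TOPS/W efficiency,
# a 10-20 mW power envelope, and ~80% average utilization.

def effective_tops(efficiency_tops_per_watt: float,
                   power_watts: float,
                   utilization: float = 1.0) -> float:
    """TOPS available at a given power draw, scaled by sustained utilization."""
    return efficiency_tops_per_watt * power_watts * utilization

# At an assumed 15 mW midpoint of the quoted range:
peak = effective_tops(18.0, 0.015)            # 0.27 TOPS (270 GOPS) peak
sustained = effective_tops(18.0, 0.015, 0.8)  # 0.216 TOPS at 80% utilization

print(f"peak: {peak * 1000:.0f} GOPS, sustained: {sustained * 1000:.0f} GOPS")
```

Even at a few hundred GOPS, that is ample headroom for typical always-sensing workloads such as keyword spotting or person detection.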


Origin E8

The Origin E8 NPUs are Expedera's cutting-edge solution for environments demanding the utmost processing power and efficiency. This high-performance core scales from 32 to 128 TOPS in a single-core configuration, addressing complex AI tasks in automotive and data center settings. The E8's architecture stands apart in its ability to handle multiple concurrent tasks without compromising performance. It adopts Expedera's signature packet-based architecture for optimized parallel execution and resource management, removing the need for hardware-specific tweaks. The Origin E8 also supports input resolutions up to 8K and integrates well with both standard and custom neural networks, enhancing its utility in future AI applications. Built on a flexible, scalable design, the E8 IP cores are backed by a comprehensive software suite to speed AI deployment. Field-proven and already shipping in a multitude of consumer vehicles, the Origin E8 is a robust, reliable choice for developers who need optimized AI inference in data centers and high-performance automotive systems.

Categories: AI Processor, AMBA AHB/APB/AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Dependent, Processor Core Independent, Receiver/Transmitter, Vision Processor

Origin E2

Origin E2 NPUs focus on power and area efficiency, making them ideal for on-device AI in smartphones and edge nodes. These processing units support a wide range of neural networks for video, audio, and text applications while maintaining impressive performance. The packet-based architecture delivers effective performance with minimal latency and eliminates the need for hardware-specific optimizations. The E2 series can be customized to fit specific application needs, with configurations supporting up to 20 TOPS; this flexibility increases processing efficiency without introducing latency penalties. Expedera's power-efficient design yields industry-leading performance of 18 TOPS per watt. Further augmenting the E2's value is its ability to run multiple neural network types efficiently, including LLMs, CNNs, and RNNs. The IP is field-proven, deployed in over 10 million consumer devices, reinforcing its reliability in real-world applications. This makes the Origin E2 an excellent choice for companies aiming to enhance AI capabilities under tight power and area constraints.

Categories: AI Processor, AMBA AHB/APB/AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Dependent, Processor Core Independent, Receiver/Transmitter, Vision Processor

TimbreAI T3

The TimbreAI T3 is an ultra-low-power AI inference engine designed specifically for audio applications, providing strong performance in noise-reduction tasks for devices like headsets. The T3 consumes less than 300 µW and requires no external memory, keeping both power draw and system cost down. Its packet-based architecture is tuned to the specific requirements of audio neural networks, and trained models run without alteration, so no retraining or model changes are needed to reach the stated performance, setting a new standard for energy-efficient AI in audio-centric devices. Aimed at consumer electronics and wearables, the T3 extends battery life in TWS headsets and similar devices by significantly reducing power consumption. Preconfigured to handle common audio network functions, TimbreAI gives OEMs a straightforward path to integrating AI capabilities with minimal power and area overhead.
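The sub-300 µW figure is easier to judge against an earbud's energy budget. A back-of-envelope sketch (the 50 mAh, 3.7 V cell is an assumed typical TWS battery, not a figure from the listing):

```python
# Daily energy cost of an always-on 300 uW audio AI engine, relative to a
# small TWS earbud battery. The <300 uW draw comes from the TimbreAI T3
# description; the 50 mAh @ 3.7 V cell is an assumption for illustration.

POWER_W = 300e-6          # worst-case T3 power draw
BATTERY_MWH = 50 * 3.7    # assumed 50 mAh cell at 3.7 V = 185 mWh

daily_mwh = POWER_W * 24 * 1000   # ~7.2 mWh per 24 h of continuous use
share = daily_mwh / BATTERY_MWH   # fraction of the battery per day

print(f"{daily_mwh:.1f} mWh/day, {share:.1%} of battery")
```

Under these assumptions, a full day of continuous inference costs under 4% of the cell, which is why an engine in this power class can run always-on without dominating battery life.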

Categories: Audio Processor, Building Blocks, Coprocessor, Vision Processor

Origin E6

The Origin E6 neural engines are built to push the boundaries of edge AI. Supporting the latest AI model innovations, including generative AI alongside traditional networks, the E6 scales from 16 to 32 TOPS, balancing performance, efficiency, and flexibility. This versatility is essential for high-demand applications in next-generation devices such as smartphones, digital-reality headsets, and consumer electronics. The E6 employs Expedera's packet-based architecture, enabling parallel execution for optimal resource usage and eliminating the need for dedicated hardware optimizations. A standout feature is its ability to sustain up to 90% processor utilization even in complex multi-network environments, proving its robustness and adaptability. Crafted to fit varied use cases, the E6 offers a comprehensive TVM-based software stack and is well suited to workloads that run numerous neural networks simultaneously, a capability proven by deployment in over 10 million consumer devices. Its design effectively manages power and system resources, minimizing latency and maximizing throughput in demanding scenarios.
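Sustained utilization matters more than the peak TOPS number once several networks share the core. A quick comparison using the E6's quoted figures (the 60% baseline for a conventional NPU is an assumption for illustration, not a measured competitor figure):

```python
# Effective throughput = peak TOPS x sustained utilization.
# The E6 description quotes 16-32 TOPS peak and up to 90% utilization
# in multi-network workloads; the 60% comparison point is assumed.

def sustained_tops(peak_tops: float, utilization: float) -> float:
    """Throughput actually delivered once utilization is accounted for."""
    return peak_tops * utilization

e6_sustained = sustained_tops(32.0, 0.90)  # 28.8 TOPS delivered
baseline = sustained_tops(32.0, 0.60)      # 19.2 TOPS at an assumed 60%

print(f"90% utilization: {e6_sustained:.1f} TOPS; 60%: {baseline:.1f} TOPS")
```

Under these assumptions, the same 32 TOPS core delivers roughly 50% more useful throughput at 90% utilization than at 60%, which is the practical payoff of the parallel-execution claim above.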

Categories: AI Processor, AMBA AHB/APB/AXI, Building Blocks, Coprocessor, CPU, GPU, Processor Core Independent, Receiver/Transmitter, Vision Processor