The EFLX eFPGA (embedded FPGA) from Flex Logix integrates FPGA programmability directly into SoCs (systems on chips), combining flexibility with high performance. Embedding FPGA digital logic in the SoC provides customizable processing without the overhead of a discrete FPGA package: power-hungry elements such as SERDES/PHYs are eliminated, leaving dense, high-speed digital logic. Customers get the same execution speed and density as discrete FPGAs, tailored precisely to their design needs, and the exact ratio of logic, DSP, and RAM can be configured for any given process node. Because unneeded DSPs or BRAMs are simply omitted, the technology delivers substantial reductions in power and area.

The EFLX eFPGA has been incorporated into multiple working chips and is in design for many more. It spans process nodes from 180nm to advanced 18A and gives users extensive control over their SoC architecture, supporting use cases from data encryption to communications with efficient interoperability and future scalability. With proven compiler software, customers can integrate their functionality rapidly and effectively, with minimal latency and efficient processing.
Flex Logix's InferX AI platform is an efficient AI inference solution designed to integrate seamlessly into SoCs. Its highly efficient architecture delivers exceptional batch processing for vision AI at low power and cost, and it supports INT8, INT16, and BF16 precision for the accuracy modern AI workloads require. A standout feature is its reconfigurable architecture, which adapts to run new models efficiently, including YOLO and even Transformers: new operators can be implemented accurately, and multiple models can run concurrently.

Beyond raw AI performance, InferX streamlines integration into existing SoCs and software stacks, making it attractive for developers who want state-of-the-art AI capabilities without significant restructuring. The platform is proven, having been incorporated into dozens of chips and production qualified. It delivers the performance of high-end discrete AI processors in a smaller, more power-efficient footprint, ideal for applications that need fast, flexible AI processing, with comprehensive support that keeps modern AI applications robust and adaptable.
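To make the precision options above concrete: INT8 inference relies on quantizing floating-point tensors to 8-bit integers. The sketch below shows generic symmetric per-tensor INT8 quantization in NumPy; it is an illustration of the general technique only, not Flex Logix's actual quantization scheme or any InferX API.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats into [-127, 127].

    This is a generic textbook scheme, assumed here for illustration;
    real inference engines may use per-channel scales or asymmetric
    zero points.
    """
    scale = np.max(np.abs(x)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from INT8 values."""
    return q.astype(np.float32) * scale

# Round-trip a small tensor and check the quantization error.
x = np.array([0.1, -0.5, 1.27, -1.27], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Rounding bounds the per-element error by scale / 2.
max_err = np.max(np.abs(x_hat - x))
```

Higher-precision modes such as INT16 and BF16 trade more bits per value for a smaller quantization error, which is why supporting all three lets a design balance accuracy against power and area.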