Flex Logix's Cheng Wang to Speak at the AI Hardware Summit on Inference Efficiency

MOUNTAIN VIEW, Calif., Sept. 12, 2019 /PRNewswire/ -- Flex Logix® Technologies, Inc. today announced that its Co-Founder and Senior VP of Engineering Cheng Wang will be speaking about inference efficiency at next week's AI Hardware Summit in Mountain View, CA. Cheng's presentation is titled "From TOPS to Throughput: How to Get the Most Throughput from the Least Hardware" and will be a part of the track titled "Inference in Client & Edge Computing."

Flex Logix recently launched its InferX X1 edge inference co-processor, based on its nnMAX architecture, which delivers up to 10X the throughput of existing edge inference ICs.

Date of Presentation:   Wednesday, September 18, 2019
Time:                   9:00 am
Location:               Computer History Museum

About Cheng's Presentation: The Inference in Client & Edge Computing track will cover the deployment of AI accelerators in cameras, consumer electronics, autonomous vehicles and other edge devices. Cheng's presentation will discuss how to achieve high levels of throughput in edge applications using the least amount of hardware.

About the AI Hardware Summit: The AI Hardware Summit is the premier event for the AI chip ecosystem. The aim of the summit is to assemble the critical mass of the global industry to promote innovation and adoption of silicon and systems for processing deep learning, neural networks and computer vision. It serves as the venue where the technology roadmap of emerging AI hardware is analyzed and updated each year and connects silicon and system vendors and hardware innovators to customers, partners, ML researchers and investors.

About Flex Logix
Flex Logix, founded in March 2014, provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x faster than Microsoft Azure processing in the Cloud. eFPGA is available in any array size on the most popular process nodes, with increasing customer adoption. Flex Logix's second product line, nnMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 2 to >100 TOPS using 1/10th the typical DRAM bandwidth, resulting in much lower system power and cost. Having raised more than $25 million of venture capital, Flex Logix is headquartered in Mountain View, California, and has sales rep offices in China, Europe, Israel, Japan, Taiwan and throughout the USA. More information can be obtained at http://www.flex-logix.com or by following @efpga on Twitter.

PRESS CONTACT:
Kelly Karr
Tanis Communications, Inc.
kelly.karr@taniscomm.com
+1 408-718-9350

View original content to download multimedia: http://www.prnewswire.com/news-releases/flex-logixs-cheng-wang-to-speak-at-the-ai-hardware-summit-on-inference-efficiency-300916664.html

SOURCE Flex Logix Technologies, Inc.