ADAS and Autonomous Driving Industry Chain Report, 2018-2019: Automotive Processor and Computing Chip

NEW YORK, April 23, 2019 /PRNewswire/ -- As automobiles go smart, both the cockpit and intelligent driving require more efficient processors.

Read the full report: https://www.reportlinker.com/p05763556/?utm_source=PRN

A full LCD instrument cluster, with at least three or even five to six screens in total, will be an integral part of mainstream electronic cockpit solutions, which may also integrate local and cloud capabilities such as natural language processing (NLP), gesture control, fatigue detection, face recognition, AR HUD, HD maps and V2X. It can therefore be said that the cockpit has an almost endless demand for computational resources: around 50,000 DMIPS in 2020, and more in the years after.

Autonomous driving needs processors that perform far better. According to Horizon Robotics' summary of OEM demand, each higher level of automated driving raises the required computing power by roughly an order of magnitude: 2 TOPS for L2 autonomy, 24 TOPS for L3, 320 TOPS for L4 and 4,000+ TOPS for L5.
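As a quick back-of-the-envelope check on that scaling, the short Python sketch below works only from the figures quoted above; the script and its printed multipliers are illustrative and are not taken from the report itself.

    # Rough illustration of the compute scaling quoted above; the TOPS figures
    # come from the Horizon Robotics summary cited in this report, while the
    # script itself is only an illustration.
    requirements_tops = {"L2": 2, "L3": 24, "L4": 320, "L5": 4000}

    levels = list(requirements_tops)
    for prev, curr in zip(levels, levels[1:]):
        factor = requirements_tops[curr] / requirements_tops[prev]
        print(f"{prev} -> {curr}: about {factor:.0f}x more TOPS")
    # Prints roughly 12-13x per step, i.e. about an order of magnitude per level.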

Computing power alone is not enough; the complexity of automotive applications must also be taken into account. An automotive processor is also judged on how much power it consumes, how efficiently that computing power is used, and whether it meets automotive-grade and functional safety standards.

Automotive processors, also referred to as automotive computing chips, typically fall into three types: application-specific standard products (ASSP), such as CPUs and GPUs; application-specific integrated circuits (ASIC); and field-programmable gate arrays (FPGA). As AI computing develops by leaps and bounds, conventional CPUs and GPUs are struggling to meet the new demand and, in terms of energy efficiency, underperform semi-custom FPGAs and full-custom ASICs, both of which are booming.

By and large, FPGA and ASIC each have their own merits and demerits, offering options for different application areas.

The intelligent connected vehicle (ICV) market's huge demand for semiconductors (including processors) has enticed an inrush of consumer electronics processor vendors. Take Qualcomm, the fastest entrant, whose 820E and 855E, among other products, have won great popularity in the automotive sector. Of the top 25 OEMs worldwide, 18 have chosen the giant's processors. Samsung, MediaTek, Huawei and even Apple are following suit, forging into the automotive semiconductor field.

Processor vendors are fighting over more than computing power; the tool chain is also their battleground.

One competitive edge for a processor lies in offering more tools that let users work with it more easily and efficiently.

"No one will buy your GPU, if you don't have software and applications", said Greg Estes, the vice president of NVIDIA, at GTC CHINA 2018. With efforts, the inventor of the GPU has expanded its developer's community with more than 1 million members and 600,000 GPU applications.

In 2017, NVIDIA unveiled its NVIDIA® TensorRT™ 3 AI inference software, which significantly boosts performance and slashes the cost of inference from the cloud to edge devices, including self-driving cars and robots. With TensorRT, users can get up to 40x faster inference performance when comparing a Tesla V100 with a CPU. TensorRT inference with TensorFlow models running on a Volta GPU is up to 18x faster under a 7ms real-time latency requirement.
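For flavor, the sketch below shows the kind of workflow this enables through the TensorFlow-TensorRT integration: converting a saved TensorFlow model so that supported subgraphs run as TensorRT engines. It is a minimal sketch assuming a TensorFlow 2.x build with TensorRT support installed; the model paths are placeholders rather than examples from the report.

    # Minimal sketch: optimizing a TensorFlow SavedModel with the TF-TensorRT
    # integration. Assumes a TensorFlow 2.x build with TensorRT support; the
    # SavedModel paths are placeholders, not from the report.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir="resnet50_saved_model"   # placeholder input path
    )
    converter.convert()                         # replace supported subgraphs with TensorRT engines
    converter.save("resnet50_trt_saved_model")  # placeholder output; load it like any SavedModel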

At CES 2019, NVIDIA did not release more efficient processors but instead enlarged its software toolkit. The company integrated its earlier DRIVE AutoPilot software, DRIVE AGX computing platform and DRIVE Works development tools into a single platform called DRIVE AP2X. DRIVE AutoPilot offers precise localization against the world's HD maps to position the vehicle on the road and create a self-driving route. DRIVE Works provides developers with reference applications, tools and a complete module library.

Deephi Tech's deep neural network development kit (DNNDK) is an equivalent of NVIDIA TensorRT. DNNDK covers the complete flow for neural network inference, from model compression through heterogeneous programming and compilation to runtime deployment, giving deep learning algorithm engineers and software development engineers a solution for accelerating AI workloads.
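To make those stages concrete, here is a hedged Python sketch of a generic compress-compile-deploy flow of the kind DNNDK covers. DNNDK itself is driven by command-line tools and a C runtime, so every function name below is a hypothetical stand-in used only to illustrate the stages, not DNNDK's actual API.

    # Hypothetical sketch of a compress -> compile -> deploy flow like the one
    # DNNDK covers. None of these functions are real DNNDK APIs; they only
    # illustrate the stages named in the text.

    def quantize_model(float_model_path: str) -> str:
        """Stage 1: model compression, e.g. quantizing FP32 weights to INT8."""
        # Calibration over a small sample dataset would happen here.
        return float_model_path.replace(".pb", "_int8.pb")

    def compile_for_accelerator(quantized_path: str) -> str:
        """Stage 2: compile the quantized network into a kernel for the DPU/FPGA."""
        return quantized_path.replace(".pb", ".kernel")

    def deploy_and_run(kernel_path: str, frame) -> dict:
        """Stage 3: load the compiled kernel on the device and run inference,
        with the CPU handling any layers the accelerator does not support."""
        return {"kernel": kernel_path, "detections": []}

    if __name__ == "__main__":
        kernel = compile_for_accelerator(quantize_model("detector_fp32.pb"))
        print(deploy_and_run(kernel, frame=None))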

In July 2018, Xilinx acquired Deephi Tech in a USD300 million deal, helping the two-year-old firm promote FPGA adoption.

Starting from EyeQ®5, Mobileye will support an automotive-grade standard operating system and provide a complete software development kit (SDK) to allow customers to differentiate their solutions by deploying their algorithms on EyeQ®5. The SDK may also be used for prototyping and deployment of neural networks, and for access to Mobileye pre-trained network layers.

In July 2018, Intel released the OpenVINO™ toolkit to accelerate the development of high-performance computer vision and deep learning vision applications.
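As a rough illustration, the Python sketch below runs a model through OpenVINO's Inference Engine. It assumes an OpenVINO release that ships the IECore Python API (roughly the 2020-2021 releases; newer ones use openvino.runtime.Core instead), and the model file names and dummy input are placeholders, not material from the report.

    # Minimal sketch of inference with OpenVINO's Inference Engine. Assumes a
    # release that ships the IECore Python API; the IR file names and dummy
    # input are placeholders, not from the report.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")  # IR from the Model Optimizer
    exec_net = ie.load_network(network=net, device_name="CPU")     # could also target GPU, MYRIAD, etc.

    input_name = next(iter(net.input_info))                        # name of the first input blob
    dummy = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
    outputs = exec_net.infer(inputs={input_name: dummy})           # dict: output name -> ndarray
    print({name: blob.shape for name, blob in outputs.items()})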

There are more than 70 AI start-ups globally, but few of them are strong enough to develop a tool chain. Conforming to active safety standards poses an even bigger challenge for the development of automotive computing chip tool chains.

In China, Horizon Robotics, an autonomous driving chip bellwether, provides full-stack perception software and a full-stack tool chain. By co-designing algorithms, computing architecture and tool chain, the firm's BPU achieves performance 30 times higher than a GPU.

Traditional automotive chip vendors are likewise deficient in the deep learning capability of their processors, and they are going all out to shore up this weakness.

In early 2019, NXP joined forces with Kalray to co-develop an autonomous driving computing platform, with the aim of helping NXP gain muscle in deep learning. The partnership will combine NXP's scalable portfolio of functional safety products for ADAS and Central Compute with Kalray's high-performance intelligent MPPA (Massively Parallel Processor Array) processors. Paired with optimized tools and libraries, the MPPA delivers top performance for deep learning and vision algorithms.

Renesas plans to roll out a next-generation R-Car SoC for deep learning, which is expected to be mounted in L4 autonomous cars in 2020. A sample of the new SoC will be unveiled in 2019; it can perform 5 trillion operations per second (5 TOPS) at a power consumption of a mere 1W. Renesas is also upgrading its processor tool chain and ecosystem through its Autonomy Platform.

Read the full report: https://www.reportlinker.com/p05763556/?utm_source=PRN

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________
Contact Clare: clare@reportlinker.com
US: (339)-368-6001
Intl: +1 339-368-6001

View original content: http://www.prnewswire.com/news-releases/adas-and-autonomous-driving-industry-chain-report-2018-2019-automotive-processor-and-computing-chip-300836644.html

SOURCE Reportlinker