VAYAVISION Launches VAYADrive 2.0, a Software-based Autonomous Vehicle Environmental Perception Engine

TEL AVIV, Israel, January 7, 2019 /PRNewswire/ --

VAYAVISION (https://www.vayavision.com), a leading provider of raw data fusion and perception software solutions for autonomous vehicles, today announced the release of VAYADrive 2.0, an AV perception software engine that fuses raw sensor data with AI tools to create an accurate 3D environmental model of the area around the self-driving vehicle.

VAYADrive 2.0 breaks new ground in several categories of AV environmental perception - raw data fusion, object detection, classification, SLAM, and movement tracking - providing crucial information about dynamic driving environments, enabling safer and more reliable autonomous driving, and getting the most out of cost-effective sensor technologies.

"This launch marks the beginning of a new era in autonomous vehicles, bringing to market an AV perception software based on raw data fusion," said Ronny Cohen, CEO and co-founder of VAYAVISION. "VAYADrive 2.0 increases the safety and affordability of self-driving vehicles and provides OEMs and T1s with the required level of autonomy for the mass-distribution of autonomous vehicles."

The VAYADrive 2.0 software solution combines state-of-the-art AI, analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware. The software is compatible with a wide range of cameras, LiDARs, and radars.

VAYADrive 2.0 solves a key challenge facing the industry: the detection of 'unexpected' objects. Roads are full of 'unexpected' objects that are absent from training data sets, even when those sets are collected over millions of kilometers of driving. As a result, systems that rely mainly on deep neural networks fail to detect the 'unexpected'.

To detect objects, no single type of sensor is enough: cameras don't see depth, and distance sensors, such as LiDARs and radars, have very low resolution. VAYADrive 2.0 upsamples the sparse samples from distance sensors and assigns distance information to every pixel in the high-resolution camera image. This gives autonomous vehicles crucial information about an object's size and shape, allows them to separate out every small obstacle on the road, and accurately defines the shapes of vehicles, humans, and other objects.
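For illustration only, the depth-upsampling idea can be sketched in a few lines of Python. This is a generic nearest-neighbor interpolation over LiDAR points projected into the camera image, not VAYADrive's implementation; the camera intrinsics, image size, and point cloud below are invented placeholder values:

    # Illustrative sketch only: dense per-pixel depth from sparse LiDAR returns
    # via projection into the camera frame and nearest-neighbor interpolation.
    # Intrinsics, image size, and points are made-up examples, not VAYADrive data.
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical pinhole intrinsics and image size.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    H, W = 720, 1280

    # Sparse LiDAR points already transformed into the camera frame (x, y, z in metres).
    rng = np.random.default_rng(0)
    points = rng.uniform([-10.0, -2.0, 4.0], [10.0, 2.0, 60.0], size=(5000, 3))

    # Project the 3D points onto the image plane and keep those that land inside it.
    uvw = (K @ points.T).T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    depth = points[:, 2]
    mask = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (depth > 0)

    # Interpolate the sparse depth samples to every camera pixel.
    grid_u, grid_v = np.meshgrid(np.arange(W), np.arange(H))
    dense_depth = griddata(
        np.stack([u[mask], v[mask]], axis=1),  # sparse sample locations (pixels)
        depth[mask],                           # sparse depth values (metres)
        (grid_u, grid_v),                      # query: every pixel in the image
        method="nearest",                      # simple stand-in for an edge-aware upsampler
    )

    print(dense_depth.shape)  # (720, 1280): one depth estimate per camera pixel

A production raw data fusion engine would replace the nearest-neighbor step with an edge-aware or learned upsampler and fuse radar returns as well, but the sketch shows how every camera pixel can be given a depth estimate from sparse distance-sensor samples.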

"VAYADrive 2.0's raw data fusion architecture offers automotive players a viable alternative to inadequate 'object fusion' models that are common in the market," said Youval Nehmadi, CTO and co-founder of VAYAVISION. "This is critical to increasing detection accuracy and decreasing the high rate of false alarms that prevent self-driving vehicles from reaching the next level of autonomy." 

VAYAVISION will be showing its solution at CES (Consumer Electronics Show) in Las Vegas from January 8-11, 2019, at Booth 301 of the OurCrowd Pavilion, Westgate Paradise Center.

About VAYAVISION 

VAYAVISION is a leading provider of raw data fusion-based environmental perception software solutions for autonomous vehicles. Compatible with all autonomous sensor systems, VAYAVISION's patented autonomous driving technology fuses raw data from cameras, LiDARs, and radars to provide a full environmental model of the driving scene, including highly reliable object detection, classification, and tracking, traffic and road sign recognition, and free space analysis. Working with leading OEMs and Tier 1s globally, VAYAVISION is paving the way for comprehensive autonomous vehicle environmental perception.


Media Contact:
Sarah Small
Sarah@headline.media
IL: +972-052-214-8601
US: +1-949-255-1449
UK: +44-203-807-1858


SOURCE VAYAVISION