Technology helps self-driving cars learn from their own 'memories'

Cornell University researchers have created a method to aid autonomous cars in storing "memories" of past events and using them in future navigation, particularly in bad weather when the car cannot safely rely on its sensors.

Regardless of how many times they have traveled a certain route, cars utilizing artificial neural networks have no recollection of the past and are always seeing the environment for the first time.

To overcome this limitation, the researchers have produced three concurrent papers, two of which are being presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), taking place June 19–24 in New Orleans.

"Can we learn from repeated traversals?" is the key question, stated senior author and computer science professor Kilian Weinberger. "For instance, a car's laser scanner could at first mistake a strangely shaped tree for a pedestrian while it is far away, but once it is close enough, the object's category becomes obvious. Therefore, you would hope that the car would detect the tree correctly the second time you drove past it, even in snow or fog."

To create a dataset, a team led by doctoral student Carlos Diaz-Ruiz drove a car outfitted with LiDAR (Light Detection and Ranging) sensors repeatedly along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture varied settings (highway, urban, campus), weather conditions (sunny, rainy, snowy), and times of day. The final dataset contains more than 600,000 scenes.

According to Diaz-Ruiz, the dataset deliberately highlights one of the major challenges for self-driving cars: bad weather. When the street is covered with snow, humans can rely on their memories, whereas neural networks, having none, suffer greatly.

HINDSIGHT is a method that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the team calls SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a digital map, much like a "memory" kept in the brain.

The next time the self-driving car travels through the same area, it can query the local SQuaSH database of all the LiDAR points along the route and "remember" what it learned last time. The database is continuously updated and shared across vehicles, enriching the information available for recognition.
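The store-and-query idea behind SQuaSH can be pictured as a spatially quantized feature map: descriptors computed at nearby positions fall into the same coarse grid cell, and a later pass retrieves what was stored there. The sketch below is only an illustration of that idea, not the authors' implementation; the 2-meter cell size, the running-average aggregation, and the class name are assumptions made here for brevity.

```python
from collections import defaultdict

import numpy as np

CELL_SIZE = 2.0  # meters per grid cell; an illustrative value, not from the paper


def quantize(x, y):
    """Map a world-frame position to a coarse, discrete grid cell."""
    return (int(np.floor(x / CELL_SIZE)), int(np.floor(y / CELL_SIZE)))


class SquashMap:
    """Toy SQuaSH-style map: one running-average descriptor per grid cell."""

    def __init__(self, dim):
        self.dim = dim
        self._cells = {}  # cell -> (sum of descriptors, count)

    def add(self, x, y, descriptor):
        """Record a descriptor computed at (x, y) during some traversal."""
        key = quantize(x, y)
        total, count = self._cells.get(key, (np.zeros(self.dim), 0))
        self._cells[key] = (total + descriptor, count + 1)

    def query(self, x, y):
        """Return the averaged 'memory' stored near (x, y), or None."""
        key = quantize(x, y)
        if key not in self._cells:
            return None
        total, count = self._cells[key]
        return total / count


# Two traversals see slightly different descriptors at the same spot;
# a later pass retrieves their average as the remembered feature.
m = SquashMap(dim=3)
m.add(10.3, 4.1, np.array([1.0, 0.0, 0.0]))  # first drive
m.add(10.9, 4.7, np.array([0.0, 1.0, 0.0]))  # second drive, same cell
print(m.query(10.5, 4.5))  # -> [0.5 0.5 0. ]
```

In the actual system the descriptors are learned features attached to LiDAR points and the map is shared between cars; the toy version only shows why quantizing space lets different drives contribute to, and read from, the same "memory" cell.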

Yurong You, a doctoral student, stated that this information can be added as features to any LiDAR-based 3D object detector. The detector and the SQuaSH representation can be trained jointly, without additional supervision or labor- and time-intensive human annotation.

The team's ongoing research project, MODEST (Mobile Object Detection with Ephemerality and Self-Training), would take HINDSIGHT even further by enabling the vehicle to learn the entire perception pipeline from scratch.

Whereas HINDSIGHT still presumes that the artificial neural network is already trained to detect objects and augments it with the capacity to create memories, MODEST assumes the network in the car has never been exposed to any objects or streets at all. By traversing the same route repeatedly, it can figure out which parts of the environment are stationary and which are moving, gradually learning what constitutes other traffic participants and what can safely be ignored.
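One way to picture the static-versus-moving distinction that MODEST exploits is an occupancy vote across traversals: locations occupied on nearly every drive are likely fixed structure, while locations occupied only occasionally are likely ephemeral objects. The sketch below is a simplification invented here to convey the intuition; the thresholds and the function are made up, and the actual method learns this from data rather than using fixed rules.

```python
import numpy as np


def label_cells(occupancy, static_thresh=0.9, ephemeral_thresh=0.3):
    """Label grid cells from repeated traversals of the same route.

    occupancy: (num_traversals, num_cells) boolean array recording whether
    a LiDAR return fell in each cell on each drive. Cells hit on nearly
    every pass are likely fixed structure (buildings, trees); cells hit
    only occasionally are likely ephemeral objects (cars, pedestrians).
    """
    rate = occupancy.mean(axis=0)  # fraction of drives each cell was occupied
    labels = np.full(rate.shape, "uncertain", dtype=object)
    labels[rate >= static_thresh] = "static"
    labels[rate <= ephemeral_thresh] = "ephemeral"
    return labels


# Four drives past three grid cells: a wall (always hit), a parked car
# that appeared on one pass, and a spot occupied on half the passes.
occ = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 0, 0],
], dtype=bool)
print(label_cells(occ))  # -> ['static' 'ephemeral' 'uncertain']
```

The ephemeral detections, accumulated over many drives, are what give a self-training pipeline its initial, annotation-free examples of mobile objects.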

The system can then reliably detect these objects, even on roads that were not part of the initial repeated traversals.

According to the researchers, both approaches could drastically reduce the development costs of autonomous vehicles, which currently still rely heavily on expensive human-annotated data, and make such vehicles more efficient by letting them learn to navigate the locations where they are used the most.
