New algorithm helps autonomous vehicles find themselves, summer or winter

Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in the journal Science Robotics.

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
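To make the idea concrete, here is a minimal sketch of classical correlation-based terrain matching, roughly the kind of technique the article describes. The file names, grayscale preprocessing, and OpenCV-based matching are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the classical VTRN idea: match a downward-looking camera
# frame against a georeferenced satellite basemap by normalized cross-correlation.
# The file names and the pixel-coordinate output are hypothetical placeholders.
import cv2

basemap = cv2.imread("satellite_basemap.png", cv2.IMREAD_GRAYSCALE)   # large reference map
frame = cv2.imread("onboard_camera_frame.png", cv2.IMREAD_GRAYSCALE)  # current terrain view

# Slide the camera frame over the basemap and score the match at every offset.
scores = cv2.matchTemplate(basemap, frame, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

# The peak gives the vehicle's position in map (pixel) coordinates; a real system
# would convert this to world coordinates and fuse it with inertial measurements.
print(f"best match at pixel {best_xy} with correlation {best_score:.2f}")
```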

The problem is that, in order for it to work, the current generation of VTRN requires that the terrain it is looking at closely match the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, causes the images not to match and confounds the system. So, unless there is a database of landscape images under every conceivable condition, VTRN systems can be easily confused.

To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems.


“The rule of thumb is that both images—the one from the satellite and the one from the autonomous vehicle—have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues,” says Anthony Fragoso (MS ’14, Ph.D. ’18), lecturer and staff scientist, and lead author of the Science Robotics paper. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”

The process—developed by Chung and Fragoso in collaboration with graduate student Connor Lee (BS ’17, MS ’19) and undergraduate student Austin McCoy—uses what is known as “self-supervised learning.” While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that would likely be missed by humans.
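The paper's exact network and training objective are not described in this article, but one common self-supervised recipe for this kind of seasonal invariance is a contrastive setup: an encoder is trained so that co-registered patches of the same location photographed in different seasons map to similar features, while patches of different locations are pushed apart. The sketch below, in PyTorch with random stand-in data, is purely illustrative; the architecture, loss, and sizes are all assumptions.

```python
# Hedged sketch, not the authors' architecture: train a small encoder so that
# the same terrain patch seen in "summer" and "winter" produces nearby features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length feature vectors

encoder = PatchEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stand-in batch: 8 terrain patches seen in summer and the same 8 in winter.
summer = torch.randn(8, 1, 64, 64)
winter = torch.randn(8, 1, 64, 64)

z_s, z_w = encoder(summer), encoder(winter)
logits = z_s @ z_w.t() / 0.1             # similarity of every summer/winter pairing
labels = torch.arange(8)                 # patch i in summer matches patch i in winter
loss = F.cross_entropy(logits, labels)   # InfoNCE-style contrastive objective

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.3f}")
```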

Supplementing the current generation of VTRN with the new system yields more accurate localization: in one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50 percent of attempts resulting in navigation failures. In contrast, insertion of the new algorithm into the VTRN worked far better: 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance, and then easily managed using other established navigation techniques.
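As a hedged illustration of how a learned, seasonally invariant transform could slot in front of the correlation step, and of how low-confidence matches might be flagged and handed off to other navigation techniques, consider the following sketch. The `deep_transform` placeholder and the confidence threshold are assumptions for illustration, not the published method.

```python
# Hypothetical pipeline: apply a seasonally invariant transform to both images,
# correlate, and flag weak matches for fallback navigation.
import cv2
import numpy as np

def deep_transform(img):
    # Placeholder for the learned seasonally invariant transform; here it only
    # normalizes intensities so the snippet runs end to end.
    return cv2.normalize(img.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)

def localize(basemap, frame, min_confidence=0.6):
    scores = cv2.matchTemplate(deep_transform(basemap), deep_transform(frame),
                               cv2.TM_CCOEFF_NORMED)
    _, peak, _, xy = cv2.minMaxLoc(scores)
    if peak < min_confidence:
        return None, peak  # flagged as problematic; fall back to other navigation
    return xy, peak
```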

“Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” says Lee. VTRN was in danger of turning into an infeasible technology in common but challenging environments, he says. “We rescued decades of work in solving this problem.”

Beyond the utility for autonomous drones on Earth, the system also has applications for space missions. The entry, descent, and landing (EDL) system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at Jezero Crater, a site that was previously considered too hazardous for a safe entry. With rovers such as Perseverance, “a certain amount of autonomous driving is necessary,” Chung says, “since transmissions take seven minutes to travel between Earth and Mars, and there is no GPS on Mars.” The team also considered the Martian polar regions, which undergo intense seasonal changes much like Earth’s; the new system could allow for improved navigation there in support of scientific objectives, including the search for water.

Next, Fragoso, Lee, and Chung will expand the technology to account for changes in the weather as well: fog, rain, snow, and so on. If successful, their work could help improve navigation systems for driverless cars.

The Science Robotics paper is titled “A Seasonally-Invariant Deep Transform for Visual Terrain-Relative Navigation.”


More information:
A. T. Fragoso et al., “A seasonally invariant deep transform for visual terrain-relative navigation,” Science Robotics (2021). robotics.sciencemag.org/lookup … /scirobotics.abf3320

Provided by
California Institute of Technology

Citation:
New algorithm helps autonomous vehicles find themselves, summer or winter (2021, June 23)
retrieved 23 June 2021
from https://techxplore.com/news/2021-06-algorithm-autonomous-vehicles-summer-winter.html

