Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.
Published in Nature Machine Intelligence, the study offers an overview of a hybrid methodology designed to improve how AI-based machinery senses, interacts with and responds to its environment in real time, as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.
Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly relied on data-driven machine learning to improve performance. On a separate track, physics-based research has explored the physical principles underlying many computer vision challenges.
It has been a challenge to incorporate an understanding of physics—the laws that govern mass, motion and more—into the development of neural networks, where AIs are modeled after the human brain with billions of nodes to crunch massive image data sets until they gain an understanding of what they “see.” But there are now a few promising lines of research that seek to add elements of physics-awareness into already robust data-driven networks.
The UCLA study aims to harness the power of both the deep knowledge from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.
“Visual machines—cars, robots, or health instruments that use images to perceive the world—are ultimately doing tasks in our physical world,” said the study’s corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise.”
The research team outlined three ways in which physics and data are starting to be combined in computer vision artificial intelligence:
- Incorporating physics into AI data sets: Tag objects with additional information, such as how fast they can move or how much they weigh, similar to characters in video games
- Incorporating physics into network architectures: Run data through a network filter that codes physical properties into what cameras pick up
- Incorporating physics into network loss functions: Leverage knowledge built on physics to help AI interpret training data on what it observes (a minimal sketch of this idea follows the list)
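To make the third strategy concrete, the snippet below is a minimal sketch, assuming a PyTorch setup, of how a physics term can be added to an ordinary training loss for a motion-tracking network. The function name physics_informed_loss, the constant-velocity smoothness prior, and the weight lam are illustrative assumptions for this article, not the formulation used in the paper.

```python
import torch

def physics_informed_loss(pred_traj, target_traj, dt=1.0, lam=0.1):
    # Hypothetical example: combine a data-fitting term with a physics
    # residual. pred_traj and target_traj have shape (batch, time, 2),
    # holding predicted and ground-truth 2D object positions.
    data_term = torch.mean((pred_traj - target_traj) ** 2)

    # Assumed physics prior: penalize large finite-difference
    # accelerations, encoding a smooth, near-constant-velocity motion model.
    vel = (pred_traj[:, 1:] - pred_traj[:, :-1]) / dt
    acc = (vel[:, 1:] - vel[:, :-1]) / dt
    physics_term = torch.mean(acc ** 2)

    # lam trades off fidelity to the data against the motion model.
    return data_term + lam * physics_term

# Gradients flow through both terms, so training favors predictions
# that match observations while remaining physically plausible.
pred = torch.randn(4, 10, 2, requires_grad=True)
target = torch.randn(4, 10, 2)
loss = physics_informed_loss(pred, target)
loss.backward()
```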
These three lines of investigation have already yielded encouraging results in improved computer vision. For example, the hybrid approach allows AI to track and predict an object’s motion more precisely, and can produce accurate, high-resolution images from scenes obscured by inclement weather.
With continued progress in this dual-modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.
The other authors on the paper are Army Research Laboratory computer scientist Celso de Melo and UCLA faculty members Stefano Soatto, a professor of computer science; Cho-Jui Hsieh, an associate professor of computer science; and Mani Srivastava, a professor of electrical and computer engineering and of computer science.
More information:
Achuta Kadambi et al, Incorporating physics into data-driven computer vision, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00662-0
Provided by University of California, Los Angeles