In the past few years, "AI" has become a major buzzword in technology. The prospect of a computer performing tasks that once only a human could do is a captivating one.
AI systems can be built in many ways, but one of the most popular approaches today uses deep neural networks (DNNs). These structures loosely mimic the neural connections and function of the brain and are typically trained on a dataset before being deployed in the real world. Through this training, a DNN can be 'taught' to identify features in an image; for example, it can learn to recognize images containing a boat by being trained on a dataset of boat images.
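In code terms, that kind of training loop might look like the minimal sketch below. This is not the authors' code: the folder layout, class names, and hyperparameters are hypothetical placeholders, and it simply fine-tunes a small convolutional network to label images as "boat" or "no boat".

```python
# Minimal, illustrative sketch of training an image classifier (not the authors' code).
# Assumes a hypothetical ImageFolder layout: train_images/boat/*.jpg, train_images/no_boat/*.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("train_images", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)            # or load pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: boat / no boat

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```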
However, the training dataset can cause problems if it is not properly designed. In the boat example, since photographs of boats are usually taken with the boat on water, the DNN may learn to recognize the water rather than the boat and still report that the image contains a boat. This is called co-occurrence bias, and it is a common problem encountered when training DNNs.
To solve this problem, a team of researchers, including Yi He, a researcher from Japan Advanced Institute of Science and Technology (JAIST), Senior Lecturer Haoran Xie from JAIST, Associate Professor Xi Yang of Jilin University, Project Lecturer Chia-Ming Chang of the University of Tokyo, and Professor Takeo Igarashi, has reported a new human-in-the-loop system. A paper detailing this system has been published in the Proceedings of the 28th International Conference on Intelligent User Interfaces (ACM IUI 2023).
Prof. Xie says, “There are some existing methods to solve the co-occurrence bias by either reorganizing the dataset or telling the system to focus on specific areas of the image. But reorganizing the dataset can be very difficult, while current methods for marking regions of interest (ROI) require extensive, pixel-by-pixel annotations by humans hired to do so, which incurs a high cost. Thus, we created a much simpler attention method which helps humans point out ROI in the image using a simple one-click method. This drastically reduces time and costs for DNN training, and thus, deployment.”
The team realized that previous approaches to attention guidance were inefficient because they were not designed to be interactive. They therefore proposed a new interactive method for annotating images with single clicks. Users simply left-click on the parts of the image that should be identified and, if necessary, right-click on the parts that should be ignored.
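To give a rough idea of how such click feedback could be turned into attention guidance, the sketch below is an illustration under assumptions rather than the published implementation: it converts left and right clicks into a coarse guidance mask and defines a loss term that rewards attention inside left-clicked regions and suppresses it in right-clicked ones. The attention map, click coordinates, and radius are all hypothetical.

```python
# Illustrative sketch only (not the authors' implementation): one-click feedback
# turned into an attention-guidance loss term.
import torch

def clicks_to_mask(shape, left_clicks, right_clicks, radius=16):
    """Build a +1 / -1 / 0 guidance mask from left (focus) and right (ignore) clicks."""
    h, w = shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    mask = torch.zeros(h, w)
    for (cy, cx) in left_clicks:
        mask[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
    for (cy, cx) in right_clicks:
        mask[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = -1.0
    return mask

def attention_guidance_loss(attention_map, mask):
    """Encourage high attention where mask = +1 and low attention where mask = -1."""
    pos = (mask > 0).float()
    neg = (mask < 0).float()
    loss_pos = ((1.0 - attention_map) * pos).sum() / pos.sum().clamp(min=1.0)
    loss_neg = (attention_map * neg).sum() / neg.sum().clamp(min=1.0)
    return loss_pos + loss_neg

# Example: a 224x224 attention map (e.g. a class activation map from the network),
# one left click on the boat and one right click on the water.
attn = torch.rand(224, 224)
mask = clicks_to_mask((224, 224), left_clicks=[(120, 100)], right_clicks=[(200, 60)])
print(attention_guidance_loss(attn, mask))
```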
In the case of the boat images, for instance, users would left-click on the boat and right-click on the surrounding water. This helps the DNN identify the boat itself and reduces the effect of the co-occurrence bias in the training data. To reduce the number of images that need to be annotated, the team also devised a new active learning strategy based on a Gaussian mixture model (GMM).
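The article does not spell out the selection rule, but a GMM-based active learning step could, for example, look like the sketch below: fit a mixture model to image feature embeddings and route the most ambiguous images (those with the highest entropy over mixture components) to human annotators. The embeddings, number of components, and selection size here are placeholders.

```python
# Illustrative sketch (not the authors' exact strategy): GMM-based selection of
# images whose cluster assignment is most uncertain, as candidates for annotation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))          # placeholder image embeddings

gmm = GaussianMixture(n_components=5, random_state=0).fit(features)
resp = gmm.predict_proba(features)             # posterior responsibilities

# High entropy over components = ambiguous image = worth sending to a human.
entropy = -np.sum(resp * np.log(resp + 1e-12), axis=1)
to_annotate = np.argsort(entropy)[-20:]        # pick the 20 most ambiguous images
print(to_annotate)
```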
This new system was tested against existing ones, both numerically and through user surveys. The numerical analyses showed that the new active learning method was more accurate than any of the existing ones, while user surveys showed that the click-based system reduced the time required to annotate ROI by 27%, and 81% of the participants preferred it over other systems.
Xie says, “Our work can drastically improve the transferability and interpretability of neural networks by increasing their accuracy for real-world applications. When systems make correct and clear decisions, it increases the confidence users have in AI and makes it easier to deploy these systems in the real world. Thus, our work focuses on increasing the trustworthiness of DNN deployments, which can have a major impact on the application and development of AI technologies in society.”
The team believes their work could have a strong influence on the tech industry and enable more applications of AI technologies in the near future.
More information:
Yi He et al, Efficient Human-in-the-loop System for Guiding DNNs Attention, Proceedings of the 28th International Conference on Intelligent User Interfaces (2023). DOI: 10.1145/3581641.3584074
Provided by
Japan Advanced Institute of Science and Technology