Building a precise assistive-feeding robot that can handle any meal

Eating a meal involves multiple precise movements to bring food from plate to mouth.

We grasp a fork or spoon to skewer or scoop up a variety of differently shaped and textured food items without breaking them apart or pushing them off our plate. We then carry the food toward us without letting it drop, insert it into our mouths at a comfortable angle, bite it, and gently withdraw the utensil with sufficient force to leave the food behind. And we repeat that series of actions until our plates are clear—three times a day.

For people with spinal cord injuries or other types of motor impairments, performing this series of movements without assistance can be nigh on impossible, meaning they must rely on caregivers to feed them. This reduces individuals’ autonomy while also contributing to caregiver burnout, says Jennifer Grannen, a graduate student in computer science at Stanford University.

One alternative: robots that can help people with disabilities feed themselves. Although there are already robotic feeding devices on the market, they typically make pre-programmed movements, must be precisely set up for each person and each meal, and bring the food to a position in front of a person’s mouth rather than into it, which can pose problems for people with very limited movement, Grannen says.

A research team in Dorsa Sadigh’s ILIAD lab, including Grannen and fellow computer science students Priya Sundaresan, Suneel Belkhale, Yilin Wu, and Lorenzo Shaikewitz, hopes to make robot-assisted feeding more comfortable for everyone involved. The team has now developed several novel robotic algorithms for autonomously and comfortably accomplishing each step of the feeding process for a variety of food types.

One algorithm combines computer vision and haptics to evaluate the angle and speed at which to insert a fork into a food item; another uses a second robotic arm to push food onto a spoon; and a third delivers food into a person’s mouth in a way that feels natural and comfortable. Their studies are published on the arXiv pre-print server.
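To make the division of labor concrete, here is a purely hypothetical sketch of how the three algorithms might chain into a single feeding loop. Every function name below is a placeholder stub invented for illustration, not the lab's published code or interfaces.

```python
# Hypothetical feeding loop; all functions are illustrative stubs, not the team's API.

def skewer_with_vision_and_haptics(item):
    print(f"probing and skewering {item}")

def bimanual_scoop(item):
    print(f"second arm pushes {item} onto the spoon")

def transfer_bite_to_mouth(item):
    print(f"moving {item} to the mouth at a comfortable angle, then gently withdrawing")

def feed_one_bite(item, skewerable=True):
    # 1. Food acquisition: fork for skewerable items, spoon (with a pushing arm) otherwise.
    if skewerable:
        skewer_with_vision_and_haptics(item)
    else:
        bimanual_scoop(item)
    # 2. Bite transfer: deliver the food into the mouth and pull the utensil back.
    transfer_bite_to_mouth(item)

feed_one_bite("a piece of melon")
feed_one_bite("rice", skewerable=False)
```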

“The hope is that by making progress in this domain, people who rely on caregiver assistance can eventually have a more independent lifestyle,” Sundaresan says.

Visual and haptic skewering

Food items come in a range of shapes and sizes. They also vary in their fragility or robustness. Some (such as tofu) break into pieces when skewered too firmly; others that are harder (such as raw carrots) require a firm skewering motion.

To successfully pick up diverse items, the team fitted a robot arm with a camera to provide visual feedback and a force sensor to provide haptic feedback. In the training phase, they offered the robot a variety of fare, including foods that look the same but have differing levels of fragility (e.g., raw versus cooked butternut squash) and foods that feel soft to the touch but are unexpectedly firm when skewered (e.g., raw broccoli).

To maximize successful pickups with minimal breakage, the visual system first homes in on a food item and brings the fork into contact with it at an appropriate angle using a method derived from prior research. Next, the fork gently probes the food to determine (using the force sensor) whether it is fragile or robust. At the same time, the camera provides visual feedback about how the food responds to the probe. Having made its determination of fragility or robustness using both visual and tactile cues, the robot chooses between, and instantaneously acts on, one of two skewering strategies: a faster, more vertical movement for robust items, and a gentler, angled motion for fragile items.
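The probe-then-skewer decision could be sketched roughly as follows. The thresholds, function names, and sensor readings here are assumptions made for illustration, not the team's actual parameters or implementation.

```python
# Minimal sketch of fusing haptic and visual probe cues into a skewering strategy.
# All numbers and names are assumed placeholders, not the published method.

FORCE_THRESHOLD_N = 2.0        # assumed probe-force cutoff separating fragile from robust
DEFORMATION_THRESHOLD = 0.15   # assumed fraction of visible deformation under the probe

def classify_fragility(probe_force_n: float, visual_deformation: float) -> str:
    """Fuse the force-sensor reading and the camera's deformation estimate."""
    if probe_force_n < FORCE_THRESHOLD_N or visual_deformation > DEFORMATION_THRESHOLD:
        return "fragile"
    return "robust"

def choose_skewer_action(fragility: str) -> dict:
    """Map the fragility estimate to one of the two skewering strategies."""
    if fragility == "fragile":
        return {"approach_angle_deg": 45, "speed": "slow"}   # gentler, angled motion
    return {"approach_angle_deg": 90, "speed": "fast"}       # faster, more vertical motion

# Example: placeholder readings from gently probing a piece of tofu
fragility = classify_fragility(probe_force_n=0.8, visual_deformation=0.30)
print(choose_skewer_action(fragility))   # -> the gentle, angled strategy
```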

The work is the first to combine vision and haptics to skewer a variety of foods—and to do so in one continuous interaction, Sundaresan says. In experiments, the system outperformed approaches that don’t use haptics.
