Images from the meals that participants with motor impairments ate using the robot. Each image is described in detail in the alt text below.

Lessons Learned from Designing and Evaluating a Robot-Assisted Feeding System for Out-of-Lab Use

Amal Nanavati, Ethan K. Gordon, Taylor A. Kessler Faulkner, Yuxin (Ray) Song, Jonathan Ko, Tyler Schrenk, Vy Nguyen, Bernie Hao Zhu, Haya Bolotski, Atharva Kashyap, Sriram Kutty, Raida Karim, Liander Rainbolt, Rosario Scalise, Hanjun Song, Ramon Qu, Maya Cakmak, Siddhartha S. Srinivasa

ACM/IEEE International Conference on Human Robot Interaction (HRI) 2025






Spotlight Video




Abstract

Eating is a crucial activity of daily living. Unfortunately, for the millions of people who cannot eat independently due to a disability, caregiver-assisted meals can come with feelings of self-consciousness, pressure, and being a burden. Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers. The key challenge of developing a robot feeding system for out-of-lab use is the varied off-nominal scenarios that can arise. Our key insight is that users can be empowered to overcome many off-nominals, provided customizability and control. This system improves upon the state-of-the-art with: (a) a user interface that provides substantial customizability and control; (b) general food detection; and (c) portable hardware. We evaluate the system with two studies. In Study 1, 5 users with motor impairments and 1 community researcher use the system to feed themselves meals of their choice in a cafeteria, office, or conference room. In Study 2, 1 community researcher uses the system in his home for 5 days, feeding himself 10 meals across diverse contexts. This resulted in 3 lessons learned: (a) spatial contexts are numerous, customizability lets users adapt to them; (b) off-nominals will arise, variable autonomy lets users overcome them; (c) assistive robots' benefits depend on context.




Study 1 Footage


The top image shows a cafeteria with many people in the background. A person sits on one side of the table with a fork at their mouth. Another person sits on the other side of the table, with a robot arm in front of them. The robot arm has a fork pointing down towards their plate. The bottom five images each show a table with two people. In each, the robot arm is holding a piece of food in front of one person's mouth, while the other person sits with their own plate of food. Across all images, the person being fed by the robot arm is in a wheelchair. The images are overlaid with the location and food items of that study session, which can be read in Table III of the main paper.

Study 1 focuses on the research question: How does the system perform across different users in out-of-lab settings? To investigate this, we invited five participants and one community researcher, all people with motor impairments, to eat a meal of their choice in a cafeteria, conference room, or office. Direct footage from the study, one bite from each user, can be found below.


This figure shows the different plates of food across all the users in Study 1. P1's is on an off-white paper plate on a blue placemat, with square-cut pieces of pepperoni pizza and pieces of broccoli on it. P2's is on a reflective plate with red, white, and blue wavy patterns. The plate has three groups of food: broccoli, pasta, and chicken, all cut into bite-sized pieces. P3's is on a red reflective plate. Half of the plate is occupied by cut-up pieces of sandwich, with mostly bread and lettuce visible, and the other half has strawberries, a melon piece, and a pineapple piece. CR2's meal has two plates. The first has the same wavy red, white, and blue pattern and holds pieces of beef, tofu, and some fruits (strawberry, pineapple, grapes). The second is a white reflective plate with cut-up pieces of bagel. P4's is on the same red plate as P3's, and has cut-up pieces of chicken, potato wedges, and cauliflower. Finally, P5's meal has two plates. The first is the same red plate as above, with salmon, mac and cheese, and Brussels sprouts. The second is a bright blue plate with two pieces of donut and one piece of chocolate cake.



P1

(start at 3:04)

CR2

(start at 0:05)

P2

(start at 6:43)

The videos above and to the right show P1, CR2, and P2's meals. A key takeaway from these videos is the impact that assistive technology has on the user experience. Because P1 and CR2 use mouth-based assistive technology, they cannot interact with the system while chewing or talking; this contrasts with P2, who uses touch-based assistive technology (a stylus) and therefore can. Further, since P1's assistive technology is not cursor-based, it takes him much longer than CR2 or P2 to specify a target point for bite selection.




P3

(start at 5:31)

P4

(start at 1:59)

The videos above and to the right show P3, P4, and P5's meals. A key takeaway from these videos is the impact that spatial context has on the user experience. P3 sat near the front of his wheelchair whereas P4 sat near the back, resulting in a quicker bite transfer for P3. However, the staging configuration was also much closer to P3's eyes, which he felt was "weird." For P5, the relative positioning of the social diner impacted her user experience; the robot came between her and the social diner, breaking their eye contact and interrupting the social interaction.

P5

(start at 7:59)




Study 2 Footage


This figure shows images from all 10 meals of Study 2. The first image shows CR2 sitting in a wheelchair facing the robot, which
                  has a bite of baked chicken on it. CR2's laptop is in front of
                  him, showing the results of face detection, and a television
                  is behind the laptop showing a scene from a movie. The second
                  image shows CR2 sitting up in a bed, with the robot arm above
                  a plate of chicken teriyaki and cucumber kimchi. In front of
                  CR2, a scene from a television show is projected onto the wall.
                  The third image shows CR2 leaning back in a bed. The robot arm
                  is mounted onto a hospital table and nearing his face, with a
                  watermelon piece on the fork. The fourth image shows CR2 leaned
                  back in a bed. The robot arm is above a plate with cheese,
salami, and dried apricots. The fifth image shows CR2 sitting in a wheelchair in front of a kitchen table, with his laptop
                  in front of him. The robot arm is mounted on the hospital table
                  and is above a plate with pieces of avocado toast. The sixth
                  image shows CR2 sitting in a wheelchair facing his laptop,
                  while the robot arm is near his mouth with a bite of cantaloupe.
                  The seventh image shows CR2 sitting up in bed, with the robot arm
at his mouth with a piece of pizza on the fork. The eighth image shows CR2 sitting in a wheelchair and his caregiver (C3) sitting on a sofa next to him. The robot arm is above a plate, acquiring a bite of chicken katsu. The ninth image shows CR2 sitting in a wheelchair,
                  with his smartphone in front of him. He is eating a bite of chicken
                  teriyaki off of the robot arm's fork. The tenth image shows CR2
                  sitting in a wheelchair, with the robot arm moving away from his
                  face after having fed him donuts. In order, the activities written
                  on each image are: 'dinner while watching TV,' 'dinner while
                  watching TV,' 'snack between work,' 'snack during work,' 'breakfast
                  before work,' 'breakfast before work,' 'dinner as a caregiver
                  folds laundry,' 'dinner with caregiver,' 'dinner while watching
                  TV,' and 'breakfast before work.'

Study 2 focuses on the research question: How does the system perform across the diverse contexts that arise when eating in the home? To investigate this, we deployed the robot in CR2's home for five consecutive days, feeding him two meals per day across various spatial, social, and activity contexts.


This figure shows a schedule for the week of the deployment. All text should be screen-reader accessible in the version of this figure in the paper's appendix. This alt text focuses on the images. Monday breakfast shows a violet plate with discrete bites of strawberries, watermelon, cantaloupe, and honeydew. Monday dinner shows the same plate with discrete bites of chicken, artichoke, bell peppers, and olives. Tuesday snack shows a cyan plate with discrete bites of salami, cheese, and dried apricots. Tuesday dinner shows a beige plate with discrete, square-cut bites of pizza. Wednesday breakfast shows a cyan plate with discrete bites of avocado toast. The avocado is mashed, and there is salami on top. Wednesday dinner shows a violet plate with discrete, long, thin bites of chicken teriyaki and cucumber kimchi, and a cyan plate with discrete bites of egg roll and steamed dumplings. Thursday snack shows a light blue plate with discrete bites of watermelon, cantaloupe, and honeydew. Thursday dinner shows the same plate with discrete bites of chicken teriyaki, cucumber kimchi, and fried dumpling. Friday breakfast shows a pink plate with discrete bites of donut on it. Friday dinner shows a violet plate with discrete bites of roasted carrots and zucchini on it, and the same plate with long, thin bites of chicken katsu on it.

Direct footage from the study can be found below.




Dinner While Watching TV

(start at 0:05)

Dinner While Caregiver Does Other Care Work

(start at 1:18)

A key takeaway from the above videos is the varied spatial contexts in which CR2 eats. When he is seated in his wheelchair, the robot is mounted on his right side, and one of his existing wheelchair buttons is used as the e-stop. There is a face-height hospital table in front of him with his laptop/phone and mouth joystick, which means the hospital table with his food has to be on his right. In contrast, when he is eating in bed, the robot, plate, and e-stop are all mounted on a hospital table to his left. His laptop and mouth joystick are still in front of him, but the collision-free space in front of CR2's face is narrower. Further, CR2 has less head mobility due to the bed back and the tilt of the bed. He often said the robot was "threading the needle" on bed days.




Breakfast Before Work

(start at 4:20)

Snack While Working

(start at 2:34)

A key takeaway from the above videos is how activity context shapes the user's meal. When eating breakfast before work, CR2 is more rushed and thus eats the bites without delays. When he is working while eating, CR2 switches from the web app to his work while the robot is acquiring food, keeps working while it is in the "resting" configuration, and transfers the bite after working for a bit. Also, note how CR2 teleoperates the transfer; he came up with this strategy by customizing the "resting" configuration such that a single joint-1 rotation could move the robot along a collision-free path to his mouth, allowing him to bypass face detection in contexts where it wasn't working reliably.
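To make this strategy concrete, here is a minimal, hypothetical Python sketch of the idea; it is not the system's actual code, and every joint value, name, and function below is an illustrative assumption. The point is that a user-customized "resting" joint configuration can be chosen so that rotating only joint 1 (the base) sweeps the fork along a fixed arc to a point near the user's mouth, letting the user trigger the transfer without face detection.

from dataclasses import dataclass
from typing import List

@dataclass
class JointConfig:
    """Arm joint angles in radians; index 0 is joint 1 (the base rotation)."""
    joints: List[float]

# Hypothetical, user-customized values -- not from the actual system.
RESTING = JointConfig(joints=[0.00, -1.10, 1.40, -0.30, 1.20, 0.00])
JOINT1_MOUTH_OFFSET = 1.05  # base rotation (radians) that reaches the mouth

def teleop_transfer(resting: JointConfig, joint1_offset: float) -> JointConfig:
    """Target configuration for a one-joint teleoperated transfer.

    Only joint 1 changes relative to the resting configuration, so the fork
    traces the same arc on every bite; the user customizes the resting
    configuration so that this arc is collision-free in their setup.
    """
    target = list(resting.joints)
    target[0] += joint1_offset
    return JointConfig(joints=target)

if __name__ == "__main__":
    mouth_config = teleop_transfer(RESTING, JOINT1_MOUTH_OFFSET)
    print("Commanding joint angles:", mouth_config.joints)

Because only one joint moves, the path stays predictable across bites, which is what makes this customization a practical fallback when face detection is unreliable.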




Snack Between Work

(start at 5:40)

Dinner With Caregiver

(start at 7:10)

A key takeaway from the above videos is the importance of social contexts. In both videos, CR2 is feeding himself, but a caregiver is available in case he needs help. In the former video, this involves wiping watermelon juice off of his face. In the latter video, this involves putting a cap on him to avoid reflection-induced false positive face detections. Further, in the latter video CR2 is eating alongside a caregiver, which has been a long-standing goal of his: "I have become friends with most of my caregivers. So I consider eating with them [to be] nice. I would rather eat with them [than have them feed me]."




Bibtex

@inproceedings{nanavati2025lessons,
  title={Lessons Learned from Designing and Evaluating a Robot-assisted Feeding System for Out-of-lab Use},
  author={Nanavati, Amal and Gordon, Ethan K and Kessler Faulkner, Taylor A and Song, Yuxin (Ray) and Ko, Jonathan and Schrenk, Tyler and Nguyen, Vy and Zhu, Bernie Hao and Bolotski, Haya and Kashyap, Atharva and Kutty, Sriram and Karim, Raida and Rainbolt, Liander and Scalise, Rosario and Song, Hanjun and Qu, Ramon and Cakmak, Maya and Srinivasa, Siddhartha S},
  booktitle={Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction},
  year={2025}
}