Robotics Research Projects

The Robotics Research Lab is focused on a wide range of research areas in Human-Robot Interaction (HRI). Currently, there are four main research directions in our lab.

Learning by Demonstration

The ability to learn new knowledge is essential for any robot to be successful in real-world applications, where it is impractical for a robot designer to endow it in advance with all the task capabilities it will need during its operational lifetime. It is therefore necessary for the robot to acquire this information in a real-world context, from cues provided by observation, through a natural, human-like teaching approach.

While significant research has been performed on learning motor behavior primitives from a teacher's demonstration, the problem of learning general task or problem-solving knowledge has not been sufficiently addressed. Existing approaches in this area are limited to learning simple tasks in constrained environments, and are prone to overspecialization or misinterpretation of the task to be learned.

The main challenge for the learner is to extract all the information pertaining to the task while eliminating the observations that are irrelevant. This research focuses on developing a set of algorithms that enable generalization of correct and complete task knowledge from a small number of demonstrations of the same task, performed in realistic scenarios in real-world, unconstrained environments.

In our approach, we assume that a learner robot is equipped with a basic set of capabilities or skills; the robot knows the goal of each of these skills and the conditions under which each can or should be executed. The robot learns how to use its existing skill repertoire to perform tasks, or achieve goals, similar to those presented by a human teacher, based on a small number of examples.

To generalize the correct task knowledge, our solution is based on the hypothesis that during a demonstration of a task, the environment changes in ways that are consistent with achieving all the intermediate goals that lead to the final desired outcome. By analyzing these world-state changes and mapping them to the known preconditions and goals of the robot's own skills, we can extract the necessary task information and discard the irrelevant observations.
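
As a rough illustration of this idea, consider the sketch below (in Python). The symbolic world-state predicates, skill names, and helper functions are invented for the example; they are not our actual system, which operates on real sensor observations.

    # Illustrative sketch: skills with known preconditions and goals, and a
    # pass that maps observed world-state changes onto those goals.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Skill:
        name: str
        preconditions: frozenset  # facts required before the skill can run
        goals: frozenset          # facts the skill is known to achieve

    def explain_demonstration(states, skills):
        """Map each observed world-state change to the skill whose goals
        account for it; changes explained by no skill are irrelevant."""
        task = []
        for before, after in zip(states, states[1:]):
            added = after - before  # facts the teacher's action introduced
            for skill in skills:
                if skill.goals <= added:
                    task.append(skill.name)
                    break
        return task

    skills = [
        Skill("pick_up(box)", frozenset({"near(box)"}), frozenset({"holding(box)"})),
        Skill("go_to(table)", frozenset(), frozenset({"near(table)"})),
    ]
    demo = [  # a demonstration, observed as a sequence of world states
        frozenset({"near(box)"}),
        frozenset({"near(box)", "holding(box)"}),
        frozenset({"near(box)", "holding(box)", "light_on"}),   # irrelevant change
        frozenset({"holding(box)", "light_on", "near(table)"}),
    ]
    print(explain_demonstration(demo, skills))  # ['pick_up(box)', 'go_to(table)']

Across multiple demonstrations of the same task, retaining only the explained steps common to all runs is one simple way to discard demonstration-specific details and converge on the general task structure.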

Relevant Publications

  • Katie Browne, Monica Nicolescu, "Learning to Generalize from Demonstrations", in Proceedings of AIMSA 2012 Workshop on Advances in Robot Learning and Human-Robot Interaction, 2012.
  • Monica Nicolescu, Odest Chadwicke Jenkins, Adam Olenderski, Eric Fritzinger, "Learning Behavior Fusion from Observation", Interaction Studies - Special Issue on Robot and Human Interactive Communication, vol. 9, no. 2, pages 319-352, 2008.
  • Monica Nicolescu, Maja J Matarić, "Task Learning Through Imitation and Human-Robot Interaction", in Models and Mechanisms of Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions, Kerstin Dautenhahn and Chrystopher Nehaniv Eds., pages 407-424, 2006.
  • Monica Nicolescu, Maja J Matarić, "Natural Methods for Robot Task Learning: Instructive Demonstration, Generalization and Practice", to appear in Proceedings, Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, Melbourne, AUSTRALIA, July 14-18, 2003 (best student paper nomination).
  • Monica Nicolescu, Maja J Matarić, "A Hierarchical Architecture for Behavior-Based Robots", Proceedings, First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 227-233, Bologna, ITALY, July 15-19, 2002.

Intent Recognition

Understanding intent is a critical aspect of communication. While people are very good at recognizing intentions, endowing an autonomous system (robot or simulated agent) with similar skills is a more complex problem, which has not been sufficiently addressed in the field.

The issue of intent recognition is particularly important in situations that involve collaboration among multiple agents or the assessment of potential threats. In the former case, recognizing intent can greatly enhance collaboration; in the latter, it allows dangerous situations to be detected before any harmful actions are completed.

Our approach relies on a novel formulation of Hidden Markov Models (HMMs), which allows a robot to understand the intent of other agents by virtually assuming their place and inferring their potential intentions from learned models of activities. The type of activity being performed, coupled with the context in which it is executed, is the major indicator of the performing agent's intent. We therefore incorporate extensive contextual information in order to detect agents' intent effectively.
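
As a simplified stand-in for this formulation, the sketch below scores a set of candidate activity models with the standard HMM forward algorithm and selects the best-explaining intent. The two toy models, their parameters, and the discretized observations are invented for the example; our actual formulation extends this basic scheme.

    # Simplified illustration: score candidate intents with the standard
    # HMM forward algorithm.
    import numpy as np

    def forward_likelihood(obs, pi, A, B):
        """P(observation sequence | model) for a discrete HMM.
        pi: initial state probs (N,); A: transitions (N, N); B: emissions (N, M)."""
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return alpha.sum()

    # One learned HMM per modeled activity; the robot evaluates the observed
    # agent's motion as if it were performing the activity itself.
    models = {
        "follow":    (np.array([0.8, 0.2]),
                      np.array([[0.9, 0.1], [0.2, 0.8]]),
                      np.array([[0.7, 0.3], [0.1, 0.9]])),
        "intercept": (np.array([0.5, 0.5]),
                      np.array([[0.6, 0.4], [0.3, 0.7]]),
                      np.array([[0.2, 0.8], [0.6, 0.4]])),
    }
    observations = [0, 1, 1, 0]  # discretized features of the observed agent
    scores = {name: forward_likelihood(observations, *m) for name, m in models.items()}
    print(max(scores, key=scores.get))

In practice, the observation sequence would come from the robot's visual tracking of the other agent, discretized into the models' observation alphabet.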

There are numerous contextual parameters that provide valuable information about the underlying intent of the actions being performed. In this work, we integrate information such as location, time, object affordances, and history. These factors are incorporated using a probabilistic model that accounts for many possible contexts. The probability distributions over these contexts will be learned from existing databases of commonsense knowledge and from the agents' own experience.
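
One simple way to realize such a model, sketched below, is to treat each contextual cue as an independent prior over intents and reweight the activity-model likelihoods accordingly. The cue tables and numbers are invented for illustration, and our full model is richer than this naive factored form.

    # Illustration: fold contextual priors into the intent estimate as
    # P(intent | obs, context) proportional to P(obs | intent) * P(intent | context),
    # with the context prior factored over independent cues.
    def intent_posterior(likelihoods, cue_priors):
        posterior = {}
        for intent, lik in likelihoods.items():
            prior = 1.0
            for cue in cue_priors:  # one table per cue: location, time, affordances, ...
                prior *= cue.get(intent, 1e-6)
            posterior[intent] = lik * prior
        z = sum(posterior.values())
        return {i: p / z for i, p in posterior.items()}

    # Activity-model likelihoods (e.g., from the forward algorithm above),
    # reweighted by location and time-of-day priors:
    likelihoods = {"follow": 0.012, "intercept": 0.009}
    location_prior = {"follow": 0.7, "intercept": 0.3}
    time_prior     = {"follow": 0.4, "intercept": 0.6}
    print(intent_posterior(likelihoods, [location_prior, time_prior]))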

Relevant Publications

  • Richard Kelley, Alireza Tavakkoli, Christopher King, Amol Ambardekar, Monica Nicolescu, Mircea Nicolescu, "Context-Based Bayesian Intent Recognition", IEEE Transactions on Autonomous Mental Development - Special Issue on Biologically-Inspired Human-Robot Interactions, vol. 4, no. 3, pages 215-225, September 2012.
  • Alireza Tavakkoli, Richard Kelley, Christopher King, Mircea Nicolescu, Monica Nicolescu, George Bebis, "A Visual Tracking Framework for Intent Recognition in Videos", Proceedings of the International Symposium on Visual Computing, pages 450-459, Las Vegas, Nevada, December 2008.
  • Richard Kelley, Christopher King, Alireza Tavakkoli, Mircea Nicolescu, Monica Nicolescu, George Bebis, "An Architecture for Understanding Intent Using a Novel Hidden Markov Formulation", International Journal of Humanoid Robotics - Special Issue on Cognitive Humanoid Robots, vol. 5, no. 2, pages 203-224, June 2008.
  • Richard Kelley, Alireza Tavakkoli, Christopher King, Monica Nicolescu, Mircea Nicolescu, George Bebis, "Understanding Human Intentions via Hidden Markov Models in Autonomous Mobile Robots", Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pages 367-374, Amsterdam, Netherlands, March 2008.
  • Alireza Tavakkoli, Richard Kelley, Christopher King, Mircea Nicolescu, Monica Nicolescu, George Bebis, "A Vision-Based Architecture for Intent Recognition", Proceedings of the International Symposium on Visual Computing, pages 173-182, Lake Tahoe, Nevada, November 2007.

Design and Human Factors

The design of the robot, and particularly its human-factors concerns, is a key aspect of HRI. Research in this area draws from similar research in human-computer interaction (HCI), but features a number of significant differences related to the robot's physical, real-world embodiment. Physical embodiment, form, level of anthropomorphism, and simplicity or complexity of design are some of the key research areas being explored in HRI.

Embodiment

The most obvious and unique attribute of a robot is its physical embodiment. By studying the impact of physical embodiment on social interaction, HRI researchers hope to find measurable distinctions and trade-offs between robots and non-embodied systems (e.g., virtual companion agents, personal digital assistants, intelligent environments, etc.).

Little empirical work to date has compared robots to other social agents. Work by Bartneck et al. (2004) claimed that robotic embodiment has no more effect on people's emotions than a virtual agent does. Compelling work (Kidd & Breazeal, 2004) used three characters, a human, a robot, and an animated character, to verbally instruct participants in a block-stacking exercise. The study reported differences between the embodied and non-embodied agents: the robot was more engaging to users than the simulated agent. Woods et al. (2006) studied differences in perception between live and video-recorded robot performances, and proposed using video recordings during system development as a complementary research tool for HRI.

An increasing body of work (Wainer, et al., 2006, 2007; Fasola & Matarić, 2013) suggests that there are several key differences between a robot and a virtual agent in the context of human-machine interaction. The three conditions explored in that work (a physical robot body, a physical robot located elsewhere and seen through a video link, and a simulation of a robot) were designed to control variables and isolate the effects of embodiment from those of realism. The researchers surveyed participants about various properties of the interaction. The results showed that the embodied robot was viewed by participants as more watchful, helpful, and appealing than either the realistic or the non-realistic simulation. A large study involving 70 participants (half over 65, the rest spanning the age range from young adults to 65) and five interaction sessions with an exercise coach showed a strong preference for a robot coach over a computer one, across different games, interaction modalities, and user ages (Fasola & Matarić, 2012; 2013).

Much work remains to be done in order to address the complex issues of physical embodiment in human-machine interaction. One confounding factor of this pursuit involves the robot's form, discussed next.

Anthropomorphism

The availability and sophistication of humanoid robots have soared in recent years. The humanoid form allows for exploring the use of robots for a vast variety of general tasks in human environments, which propels forward the questions involved in studying the role of anthropomorphism in HRI. Evidence from communications research shows that people anthropomorphize computers and other objects, and that this anthropomorphism affects the nature of participant behavior during experiments (Reeves & Nass, 1996).

HRI studies have verified that there are differences in interaction between anthropomorphic and non-anthropomorphic robots. For example, children with autism are known to respond to simple mobile car-like robots as well as to humanoid machines; some pilot experiments have suggested that humanoid robots may be overwhelming and intimidating, while others have shown therapeutic benefit (Robins, et al., 2005; Scassellati, 2005). Biomimetic, and more specifically anthropomorphic, forms allow for human-like gestures and direct imitation of movements, while non-biomimetic forms preserve the appeal of computers and mechanical objects.

Several studies have examined the effects of anthropomorphic form on HRI (Duffy, 2003). These include studies of how people perceive humanoid robots compared to people and to non-humanoid robots (Oztop, et al., 2005), possible benchmarks for evaluating the role of humanoid robots and their performance (Kahn, et al., 2006), and how the design of humanoid robots can be altered to affect how users interact with them (DiSalvo, et al., 2002).

Simplicity/Complexity of Robot Design

The simplicity/complexity of the robot's expressive behavior is related to the biomimetic/anthropomorphic property. Researchers are working to identify the effect that simple versus complex robot behavior has on people interacting with robots. For example, Parise et al. (1999) examined the effects of life-like agents on task-oriented behavior. Powers and Kiesler (2006) examined how two forms of agent embodiment and realism affect HRI for answering medical questions. Wainer et al. (2006; 2007) used a similar experimental design to explore the effects of realism on task performance. In those studies, the more realistic or complex a robot was, the more watchful it seemed. However, it was also found that participants were less likely to share personal information with a realistic or complex robot.

Other Attributes

Reeves and Nass (1996) explored several human-factors concepts in relation to human-computer interaction (HCI). As researchers work to better understand human-robot interaction, human-factors insights from HCI can be valuable, though they may not always carry over directly. Nass examined the relationship between a virtual agent's voice and its personality, and found that users experienced a stronger sense of social presence from the agent when the voice type and personality matched than when they did not. In an HRI study, researchers showed that when a robot's expressive personality matched the user's personality, task performance was better than when the personalities were mismatched. Studies have correlated users' actions with surveyed perceptions of feedback to determine how, and in what context, feedback can be given most effectively. A similar design has been used to evaluate how a robot (compared to a computer agent or to a human) can give feedback for making decisions.

Ongoing research is also exploring how cultural norms and customs affect the use of computer-agent and robot systems. For example, one experiment tested differences in behavior reciprocity between users of a virtual agent in the US and users in Japan. It found that users from both countries expressed attitudes consistent with behavior reciprocity, but only US users exhibited reciprocal behavior. When recognizable brands from popular culture were used, however, reciprocal behavior was exhibited by Japanese users as well.

Relevant Publications

  • Wainer J, Feil-Seifer D, Shell D, Matarić MJ (2007) Embodiment and human-robot interaction: A task-based perspective. In: Proceedings of the international conference on human-robot interaction
  • Powers A, Kiesler S (2006) The advisor robot: Tracing people's mental model from a robot's physical attributes. In: Proceedings of the 2006 ACM conference on human-robot interaction. ACM Press, Salt Lake City, UT, pp 218-225
  • Oztop E, Franklin DW, Chaminade T, Cheng G (2005) Human-humanoid interaction: Is a humanoid robot perceived as a human? Int J Humanoid Robot 2(4):537-559
  • Nicolescu M, Matarić MJ (2005) Task learning through imitation and human-robot interaction. In: Dautenhahn K, Nehaniv C (eds) Models and mechanisms of imitation and social learning in robots, humans and animals: behavioural, social and communicative dimensions. Cambridge University Press, New York

Social, Service and Assistive Robots

Service and assistive robotics covers a very broad spectrum of application domains, such as office assistants, autonomous rehabilitation aids (Feil-Seifer and Matarić, 2005), and educational robots. This broad area integrates basic HRI research with real-world domains that require some service or assistive function. The study of social robots focuses on social interaction (Fong, et al., 2003), and so is a proper subset of the problems studied under HRI.

Rehabilitation robotics grew from the need for rehabilitation technologies, and the notion of assistive robotics has emerged out of, and beyond, that context. An assistive robot is broadly defined as one that gives aid or support to a human user. Research into assistive robotics includes rehabilitation robots, wheelchair robots and other mobility aids, companion robots, manipulator arms for the physically disabled, and educational robots. These robots are intended for use in a range of environments, including schools, hospitals, and homes. In the past, assistive robotics (AR) largely referred to robots developed to assist people through physical interaction. This definition has broadened significantly in the last several years, as assistive robots that provide help through non-contact, social interaction have defined the new field of socially assistive robotics (SAR).

In rehabilitation robotics, an area that has typically developed physically assistive robots, non-contact assistive robots are now being developed and evaluated. These robots fulfill a combined role of coach, nurse, and companion, motivating and monitoring the user during rehabilitation therapy. Observing the user's progress, the robots provide personalized encouragement and guidance. Applications for post-operative cardiac-surgery recovery and post-stroke rehabilitation have been studied. Other rehabilitation projects have explored using a robot to motivate rehabilitation through mutual storytelling. In these experiments, a robot and a user construct a story which, when acted out, requires the user to perform physical-therapy exercises.

A variety of assistive robotic systems have been studied for use by the elderly. Such robots are meant to be used in the home, in assisted-living facilities, and in hospital settings. They work to automate some physical tasks that an elderly person may not be able to do, including feeding, brushing teeth, getting in and out of bed, getting into and out of a wheelchair, and adjusting a bed for maximum comfort. In some cases, the robots are envisioned as part of a ubiquitous computing system that combines cameras and other sensors in the environment with computer-controlled appliances (such as light switches, doors, and televisions). In others, the robots serve SAR roles such as promoting physical and cognitive exercise (Tapus, et al., 2008; Fasola & Matarić, 2013).

HRI systems have been used as companion robots in the public areas of nursing homes, aimed at increasing resident socialization. These robots are designed not to provide a specific therapeutic function but to serve as a focus of resident attention. One such example is The Huggable, a robot outfitted with several sensors to detect different types of touch. Another is NurseBot, a robot used to guide users around a nursing home. Paro (Wada, et al., 2005), an actuated stuffed seal, responds to touch and sound; its goal is to provide the benefits of pet-assisted therapy, which can improve resident quality of life, in nursing homes that cannot support pets. Initial studies have shown lowered stress levels in residents interacting with this robot, as well as an overall increase in socialization among residents in the common areas of the same facility.

Finally, HRI is being studied as a tool for the diagnosis and socialization (Diehl, et al., 2012) of children with autism spectrum disorders (ASD). When used for diagnosis, robots can observe children in ways that humans cannot; in particular, eye-tracking studies have shown remarkable promise in evaluating children for an ASD diagnosis. In terms of socialization, robots can be a more comfortable social partner than people for children with ASD. These robots encourage social behaviors, such as dancing, singing, and playing with the robot and with other children or parents, in the hope of making such behavior more natural.

Relevant Publications

  • David J. Feil-Seifer and Maja J. Matarić, "Defining Socially Assistive Robotics," In International Conference on Rehabilitation Robotics, pp. 465-468, Chicago, IL, Jun 2005.
  • David J. Feil-Seifer and Maja J. Matarić, "Distance-Based Computational Models for Facilitating Robot Interaction with Children," In Journal of Human-Robot Interaction, 1(1), pp. 55-77, Jul 2012.
  • David J. Feil-Seifer and Maja J. Matarić, "Ethical Principles for Socially Assistive Robotics ," In IEEE Robotics and Automation Magazine, 18(1), pp. 24-31, Mar 2011.
  • Diehl, J.J., Schmitt, L., Crowell, C.R., & Villano, M. (2012). The clinical use of robots for children with autism spectrum disorders: A critical review. Research in Autism Spectrum Disorders, 6(1), 249-262. doi: 10.1016/j.rasd.2011.05.006. PMID: 22125579
  • Topping M, Smith J (1999) The development of handy, a robotic system to assist the severely disabled. In: Proceedings of the international conference on rehabilitation robotics, Stanford, CA, http://rose.iinf.polsl.gliwice.pl/~kwadrat/www.csun.edu/cod/conf2001/proceedings/0211topping.html
  • Wada K, Shibata T, Saito T, Sakamoto K, Tanie K (2005) Psychological and social effects of one year robot assisted activity on elderly people at a health service facility for the aged. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), pp 2785-2790
  • Michaud F, Clavet A (2001) Robotoy contest - designing mobile robotic toys for autistic children. In: Proceedings of the american society for engineering education (ASEE), Albuquerque, New Mexico, http://citeseer.nj.nec.com/michaud01robotoy.html