In a recent blog post, we discussed how emotion recognition technology could change interactions between individuals in an online setting. The same technology is also being built into robotic companions for children with autism or chronic health problems.
Scientific American recently published an article about ALIZ-E, a consortium of schools and other institutions working together to develop a robot that can bond with children as a mentor and friend. These robots are being designed to read a child’s emotions and respond to them. According to the ALIZ-E Project’s website, “The robots will use a distributed model of long-term memory, which acts as a switchboard for other cognitive modalities…will rely on adaptive and sustainable non-verbal interaction, taking an embodied perspective to affective interaction…the robot adapts its behavior to different user profiles and employs user-specific strategies to achieve a goal.”
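The project doesn’t publish code alongside that description, but a minimal sketch can make the “memory as a switchboard, adapted to user profiles” idea concrete. Everything below is hypothetical: the class names, the two toy modalities (speech and gesture), and the profile fields are invented for illustration, not taken from ALIZ-E.

```python
# Toy sketch (not ALIZ-E's actual code): a long-term memory that routes
# interaction events to "cognitive modality" handlers and adapts its
# behavior to a per-child user profile. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    name: str
    age: int
    preferred_activity: str                        # e.g. "quiz", "dance", "story"
    episodes: list = field(default_factory=list)   # long-term memory for this child


class Switchboard:
    """Routes each event to every modality handler and records it in memory."""

    def __init__(self):
        self.handlers = {"speech": self.speak, "gesture": self.gesture}

    def dispatch(self, profile: UserProfile, event: dict):
        profile.episodes.append(event)   # remember the interaction long-term
        for handler in self.handlers.values():
            handler(profile, event)

    def speak(self, profile, event):
        # User-specific strategy: fall back to the child's preferred activity
        # when the detected mood is low.
        if event.get("mood") == "sad":
            print(f"{profile.name}, shall we play {profile.preferred_activity}?")

    def gesture(self, profile, event):
        if event.get("mood") == "happy":
            print("(robot nods and raises its arms)")


board = Switchboard()
child = UserProfile(name="Ana", age=8, preferred_activity="quiz")
board.dispatch(child, {"mood": "sad", "source": "camera"})
```

Even this toy version shows why the memory matters: without a per-child record and profile, the robot cannot pick a strategy specific to the user in front of it.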
In the past, hospitals have tried introducing pets to help keep young patients calm. However, pets are expensive to train and care for, and they are not hygienic enough for a hospital environment. ALIZ-E is looking to develop a robot that can take the place of these pets and serve as a companion that young patients can bond with and turn to for counsel.
The challenge in engineering these robots lies in programming them to emote. According to Lola Canamero (coordinator of an earlier project, FEELIX GROWING), “young patients are quite willing to suspend disbelief and bond with [a] robot, with one caveat – the robot has to be capable of expressing emotions [and] reading them in the patient.”
The seven basic emotions are expressed universally, so imagining a robot that could display and read them isn’t too big a stretch. However, other emotions (guilt, love, etc.) are not expressed by everybody in the same way, so it would be difficult to program a robot to read more than the basic emotions. So what happens when a child displays a more complex emotion? The robot could misinterpret that expression and react inappropriately, which could greatly affect a sensitive child. Do you believe that a robot can (or should) be designed in such a manner?
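To make that limitation concrete, here is a deliberately simplified sketch. Real systems use trained classifiers rather than hand-picked prototypes, and the “feature” values below are invented, but the structural problem is the same: a recognizer that only knows the seven basic labels has no way to say “something else,” so a complex emotion gets squeezed into whichever basic one it happens to resemble.

```python
# Toy sketch of the limitation described above: a recognizer limited to the
# seven basic emotions must force every expression into one of them.
# The (brow, mouth, gaze) feature values are made up for illustration.
import math

BASIC_PROTOTYPES = {
    "anger":     (0.9, 0.2, 0.8),
    "contempt":  (0.4, 0.3, 0.6),
    "disgust":   (0.7, 0.1, 0.5),
    "fear":      (0.8, 0.6, 0.9),
    "happiness": (0.1, 0.9, 0.7),
    "sadness":   (0.3, 0.1, 0.2),
    "surprise":  (0.6, 0.8, 0.9),
}

def classify(features):
    """Return the closest basic emotion -- there is no 'none of the above'."""
    return min(BASIC_PROTOTYPES,
               key=lambda label: math.dist(features, BASIC_PROTOTYPES[label]))

# A complex emotion such as guilt has no prototype of its own, so it gets
# mislabeled as whichever basic emotion it most resembles.
guilty_look = (0.35, 0.15, 0.25)   # invented values
print(classify(guilty_look))       # -> "sadness" (a misreading)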
Also, it’s one thing for ALIZ-E to program robots to read a child’s emotions, but it’s a whole other issue to create artificial intelligence that responds to those emotions. The ALIZ-E Project does not specify exactly how one of its programmed robots would respond to a child. For example, if the child is sad, how would the robot console them?
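Since the project leaves that open, one can only guess what the simplest version of a response policy might look like. The lookup table below is entirely hypothetical, not anything ALIZ-E has described; it mainly illustrates how thin a hand-authored policy is compared with genuine consolation, and how a misread or unrecognized emotion falls straight through to a generic fallback.

```python
# Hypothetical sketch: a hand-authored emotion-to-response policy.
# These canned responses are invented for illustration, not from ALIZ-E.
CONSOLE_POLICY = {
    "sadness":   "Would you like to hear your favorite story?",
    "fear":      "I'm right here with you. Shall we take a slow breath together?",
    "anger":     "It's okay to feel upset. Do you want to tell me what happened?",
    "happiness": "That's wonderful! Want to play a game to celebrate?",
}

def respond(detected_emotion: str) -> str:
    # A generic fallback hides, rather than solves, the hard cases.
    return CONSOLE_POLICY.get(detected_emotion, "Tell me more about how you feel.")

print(respond("sadness"))
print(respond("guilt"))   # no entry -> fallback, which may ring hollow to a child
```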
The idea of emoting robots raises other questions as well: Can bonding with a plastic robot be a substitute for genuine human interaction? Wouldn’t it be detrimental to a child in their formative years to spend more time with a robot than with another human being? Why aren’t researchers encouraging more human companionship instead of developing technology to take its place?
While the robots may be programmed to emote and to read their subjects’ emotions, they aren’t processing and responding in the same way a human would. Children have a huge range of emotions and expressions. Even humans can’t always understand one another, so is it possible to program a robot that can understand a child 100% of the time and respond appropriately in every situation? Perhaps these robots could be healthy companions if used only when family, friends, or hospital staff are not available to be with a patient. However, it is difficult to imagine one of these robots taking the place of another human figure in a child’s life.
In researching the general topic of cognition and social robots, I haven’t found a compelling reason for developing this technology. Having these robots in hospitals wouldn’t make things more efficient or cost-effective, and humans are capable of bonding with other humans on a much higher level than any machine can be programmed to achieve. Are engineers designing these robots simply because they can? Or do you think there is a concrete and valid reason behind the project?
I think there could be some benefit in cases where there just aren’t enough people capable of bonding in a meaningful way with patients, but there are a plethora of negatives that go along with something like this.
What happens when the robot responds so well that a patient becomes dependent on the robot’s interactions over a human being?
What happens when the robot triggers too many euphoric emotions in a patient and they become addicted?
What liabilities would the hospital face if the robot’s interactions are inappropriate and cause life-threatening reactions in a patient?
What if the robot’s interactions are more “perfect” than most human beings’, so that only a small number of humans can still reach a patient emotionally after the patient has interacted with the robot for a period of time?
What about the potential negative psychological effects these robots could have over the longer term? A robot might help a patient within the hospital environment, but six months later that patient could be left dealing with thoughts of “people don’t care about me; I’m so worthless that they had to program a robot to be my friend.”
And these questions are only the beginning. There’s a lot more work to be done in this emerging field, and most of it probably relates to ethics rather than technology.
A robot is a mechatronic device (combining mechanical, electronic, and computer engineering) that automatically performs tasks which are dangerous, difficult, repetitive, or impossible for humans, or simpler tasks that it can carry out better than a human being would. The most advanced robots are able to move and recharge themselves, like the ASIMO robot manufactured by Honda.