Ever heard of a robot that is emotional? Robots are supposed to be cranky, cold, metallic creatures with machine-like, completely emotionless voices, from their depiction in Isaac Asimov's novels to the more recent Transformers movies. But behold! What if a robot that looks meek and harmless, and bears a stark resemblance to one of the greatest physicists of all time, Albert Einstein, looks at you and frowns? Or better still, gives you a sarcastic smile?
This week in Star Tech, we will look into some of the major advances in the world of robotics and the recent trends in its research and development.
Take, for example, the Einstein robot head at the University of California, San Diego (USA), which performs asymmetric, random facial movements as part of its expression-learning process. This hyper-realistic robot has learned to smile and make facial expressions through self-guided learning: the UC San Diego researchers used machine learning to "empower" their robot to make realistic facial expressions.
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, a computer science Ph.D. student at the UC San Diego Jacobs School of Engineering, who presented this advance on June 6 at the IEEE International Conference on Development and Learning.
The faces of robots are increasingly realistic, and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.
This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. To begin automating this process, the UCSD researchers looked to both developmental psychology and machine learning.
Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including the babbling with which they learn to speak. Initially these movements appear random, as infants learn to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD's Machine Perception Laboratory.
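The coverage does not spell out the group's learning algorithm, so here is a minimal sketch of the motor-babbling idea in Python, assuming hypothetical set_servos, face_image and detect_expression helpers (none of these names come from the UCSD work):

    import random

    NUM_SERVOS = 30  # roughly the number of facial actuators described above

    def babble_and_learn(robot, detect_expression, trials=1000):
        """Motor babbling: try random servo activations, observe the
        resulting expression, and keep the (activation, expression)
        pairs as training data for a face-control model."""
        dataset = []
        for _ in range(trials):
            # Random activation level (0.0 to 1.0) for each facial servo.
            activations = [random.random() for _ in range(NUM_SERVOS)]
            robot.set_servos(activations)                  # hypothetical API
            label = detect_expression(robot.face_image())  # e.g. "smile"
            dataset.append((activations, label))
        return dataset

In the actual research, a model trained on such data would then be inverted to map a desired expression back to servo commands; only the exploratory data-collection step is sketched here.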
Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.
Moving on to machines that understand how we feel or what mood we are in, consider a scene at a New York (USA) restaurant where 46 sports fans have gathered to watch the Super Bowl, America's most popular televised sporting event, whose advertisements are valued at USD 3 million for 30 seconds! Machines are monitoring these fans' every move and every breath they take.
The viewers are wearing vests with sensors that monitor their heart rate, movement, breathing and sweat. A market research company has kitted out the party-goers with these sensors to measure their emotional engagement with adverts during the highly expensive commercial breaks. Advertisers pay a fortune during the Super Bowl, so they want to be as confident as they can be that their ads are hitting home. And they are willing to pay for the knowledge. "It's a rapidly growing market - our revenues this year are four times what they were last year," says Carl Marci, CEO and chief scientist for the company running the experiment, Innerscope Research based in Boston, Massachusetts, USA.
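Innerscope has not published how it turns these raw signals into a measure of engagement. Purely as an illustrative sketch, one could imagine scoring a viewer by how far each signal strays from that viewer's resting baseline; the signal names and the formula below are invented for this example:

    from statistics import mean, stdev

    SIGNALS = ("heart_rate", "movement", "breathing", "sweat")

    def engagement_score(current, baseline):
        """Toy engagement index: average how far each physiological
        signal deviates from the viewer's resting baseline, measured
        in standard deviations. `current` and `baseline` each map a
        signal name to a list of sensor readings."""
        deviations = []
        for signal in SIGNALS:
            mu, sigma = mean(baseline[signal]), stdev(baseline[signal])
            deviations.append(abs(mean(current[signal]) - mu) / sigma)
        return mean(deviations)  # higher = more physiologically aroused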
Innerscope's approach is the latest in a wave of ever more sophisticated emotion-sensing technologies. The latest technologies could soon be built into everyday gadgets to smooth our interactions with them. In-car alarms that jolt sleepy drivers awake, satnavs (satellite navigation systems) that sense our frustration in a traffic jam and offer alternative routes, and monitors that diagnose depression from body language are all in the pipeline. Prepare for the era of emotionally aware gadgets!
The most established way to analyse a person's feelings is through the tone of their voice. For several years, companies have been using "speech analytics" software that automatically monitors conversations between call-centre agents and customers. One supplier is NICE Systems, based in Ra'anana, Israel. It specialises in emotion-sensitive software and call-monitoring systems for companies and security organisations, and claims to have more than 24,000 customers worldwide, including the New York Police Department and Vodafone.
As well as scanning audio files for key words and phrases, such as a competitor's name, the software measures stress levels, as indicated by voice pitch and talking speed. Computers flag up calls in which customers appear to get angry or stressed out, perhaps because they are making a fraudulent insurance claim, or simply receiving poor service.
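NICE Systems' software is proprietary, but the underlying idea can be sketched with a toy heuristic: flag a call when pitch and talking speed both climb well above normal. The function and thresholds below are invented for illustration, not taken from any vendor:

    def flag_call(avg_pitch_hz, words_per_minute,
                  baseline_pitch_hz=140.0, baseline_wpm=150.0):
        """Toy stress heuristic: flag a call for review when the
        caller's average pitch and talking speed both exceed their
        typical levels by 25%. All thresholds here are made up."""
        raised_pitch = avg_pitch_hz > 1.25 * baseline_pitch_hz
        raised_rate = words_per_minute > 1.25 * baseline_wpm
        return raised_pitch and raised_rate

    # An agitated caller, speaking fast and high-pitched, gets flagged.
    print(flag_call(avg_pitch_hz=190.0, words_per_minute=210.0))  # True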
Voice works well when the person whose feelings you are trying to gauge is expressing themselves verbally, but that is not always the case. Several research teams are therefore figuring out ways of reading a person's feelings by analysing posture and facial expressions alone.
Using different techniques, computer programs can correctly recognise six basic emotions - disgust, happiness, sadness, anger, fear and surprise - more than 9 times out of 10, but only if the target face uses an exaggerated expression. Software can accurately judge more subtle, spontaneous facial expressions as "negative" or "positive" three-quarters of the time, but it cannot reliably spot spontaneous displays of the six specific emotions - yet. To accurately interpret complex, realistic emotions, computers will need extra cues, such as upper body posture and head motion.
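The studies behind these figures use a variety of classifiers; as a generic sketch (not any particular lab's pipeline), facial-landmark coordinates could be fed to an off-the-shelf support-vector machine, here via scikit-learn:

    from sklearn.svm import SVC

    EMOTIONS = ["disgust", "happiness", "sadness",
                "anger", "fear", "surprise"]

    def train_emotion_classifier(landmark_vectors, labels):
        """Fit a support-vector classifier on flattened facial-landmark
        coordinates (one feature vector per face image); each label is
        one of the six basic emotion names in EMOTIONS."""
        classifier = SVC(kernel="rbf")  # a common choice for this task
        classifier.fit(landmark_vectors, labels)
        return classifier

    # Usage, given landmark data from a face tracker:
    #   model = train_emotion_classifier(X_train, y_train)
    #   model.predict([new_face_landmarks])  # -> e.g. ["happiness"]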
All in all, we have seen major advances both in machines portraying emotions themselves and in machines that can comprehend and analyze human emotions and act on them accordingly. Robots are fast becoming human, or so it seems!
Compiled by Mahdin Mahboob
Information Sources: UCSD Website, NewScientist, stltoday.com