The University of Arizona’s Eller College of Management has developed a new way to protect our borders.
According to an Arizona News 13 article, UA has created an Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR). This new security measure is supposed to detect cues that current border patrol agents might miss, such as dilating pupils.
Doug Derrick, a researcher on the project, believes the goal of this technology is not to replace agents but to give them another tool. He states, “Agents at the border have to make rapid decisions about people’s credibility so they have [a] limited amount of time with large traffic.”
The AVATAR kiosk looks similar to an ATM and has a high-definition camera that can capture facial expressions of emotion, infrared sensors to detect eye movement and pupil dilation, and a microphone to pick up what is said and how it is said. People can simply walk up to the kiosk and begin the questioning process.
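The article doesn’t say how AVATAR actually combines those three channels, but presumably the kiosk fuses them into some kind of credibility score that decides whether to wave a traveler through or refer them to a human agent. Here’s a minimal sketch of that idea in Python; to be clear, every name, weight, and threshold below is my own invention for illustration, not anything disclosed by the UA project.

```python
from dataclasses import dataclass

@dataclass
class CueReadings:
    """Per-question sensor signals; the names and 0-1 scales are hypothetical."""
    pupil_dilation: float   # infrared sensors: deviation from the traveler's baseline
    facial_tension: float   # HD camera: micro-expression intensity
    vocal_stress: float     # microphone: pitch/tempo irregularity

def risk_score(cues: CueReadings, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted fusion of the three channels into a single score.

    The weights here are made up; a real system would calibrate them
    against labeled interview data.
    """
    signals = (cues.pupil_dilation, cues.facial_tension, cues.vocal_stress)
    return sum(w * s for w, s in zip(weights, signals))

def refer_to_agent(cues: CueReadings, threshold: float = 0.6) -> bool:
    """Flag the traveler for a human agent when the fused score is high."""
    return risk_score(cues) >= threshold

# A traveler with dilated pupils but a calm face and voice:
reading = CueReadings(pupil_dilation=0.8, facial_tension=0.5, vocal_stress=0.2)
print(risk_score(reading))      # 0.53
print(refer_to_agent(reading))  # False -- stays below the 0.6 referral line
```

Notice that even in this toy version the output is a referral, not a verdict, which fits Derrick’s framing of AVATAR as another tool for agents rather than a replacement.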
This lie-detecting AVATAR is aimed at helping border agents weed out people who have something to hide.
Another News 13 article about AVATAR gave examples of the questions it might ask. Some are as simple as, “How do you feel about passing through this checkpoint?” or “Please let me know how you spent the last few hours before coming to this checkpoint.”
The project is part of the Department of Homeland Security Center of Excellence in Border Security and Immigration at the University of Arizona. It is still in its testing phase, and AVATAR will not hit border checkpoints for a while.
Researcher Doug Derrick goes on to state, “We all think we’re good at it, overconfident in our ability to do it [controlling deception cues], but we are pretty bad at it.”
I’m curious how this technology fares against people who’ve spent way too many hours fighting automated telephone agents and other voice-activated systems, like some airlines’ flight-status systems. I could see someone being misidentified or wrongly flagged because they hate feeling like they have to answer to a machine, or because they harbor a lot of anger or contempt for what they see as technology moving in the wrong direction.
I also wonder if, over time, this wouldn’t effectively “lower the stakes” of lying for some people, similar to the way the sense of anonymity on the internet has eroded some people’s impulse to be respectful, leading them to act as if they’re engaged in a “race to the bottom” and try to be even more ignorant than the next person. To me, eye-to-eye contact with another human being adds to the sense of being in a real moment, whereas for some people, interacting with a machine might feel more like playing a video game, where the object is to beat the “enemy” on the screen.
I’m really curious to see how all of these new technologies pan out over the long run, what the research uncovers, and how they evolve over time.
I suspect that a system like this will eventually employ an artificial intelligence that analyzes a person’s responses and renders an on-screen avatar designed to lower that person’s inhibitions and make them more likely to tell the truth. Just as police often use deception to draw a confession out of a suspect by playing on the suspect’s weaknesses, so too could a system like AVATAR choose a face and expressions of emotion that play to an interviewee’s weaknesses to get at the truth. I find the thought of such a system simultaneously intriguing and insidious.
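To make that speculation concrete, a toy version of such a persona selector might look like the sketch below. The personas, inputs, and decision rules are entirely my own invention; nothing like this has been described by the AVATAR researchers.

```python
from enum import Enum

class Persona(Enum):
    """Hypothetical on-screen faces the system could render."""
    AUTHORITATIVE = "stern uniformed officer"
    EMPATHETIC = "warm, sympathetic interviewer"
    PEER = "casual, friendly peer"

def choose_persona(hostility: float, anxiety: float) -> Persona:
    """Pick the face most likely to lower this interviewee's guard.

    The inputs would come from the same sensors described above;
    the thresholds and rules are invented purely for illustration.
    """
    if anxiety > 0.7:
        return Persona.EMPATHETIC    # soothe a nervous traveler into talking
    if hostility > 0.7:
        return Persona.PEER          # defuse contempt for authority figures
    return Persona.AUTHORITATIVE    # default: signal that evasion has consequences

# An anxious but non-hostile interviewee gets the sympathetic face:
print(choose_persona(hostility=0.2, anxiety=0.9).value)  # warm, sympathetic interviewer
```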
Keith
These are all very insightful and interesting comments. They give our readers more to think about. Thanks!
You’re welcome. Thank you guys for posting all the thought-provoking blogs! 🙂