Facial recognition technology is currently being tested by several businesses and government agencies for everything from policymaking to employee timesheets. Even more granular capabilities are on the way: the companies behind the technology claim that automatic emotion recognition could soon help robots understand humans better, or detect road rage in drivers.
But experts warn that the facial recognition algorithms attempting to interpret facial expressions may rest on uncertain science. The claims appear in the annual report of the AI Now Institute, a nonprofit that studies the impact of artificial intelligence on society. The report also includes recommendations for regulating AI and for greater transparency in the industry. Kate Crawford, co-founder of AI Now, distinguished research professor at NYU, and principal researcher at Microsoft Research, said the problem is that AI is being deployed in many social contexts: psychology, anthropology, and philosophy are all extremely germane to those settings, but they are not part of the training of people who come from a technical background.
Crawford and the AI Now report refer to the system commonly used to codify facial expressions into seven core emotions, which originates with psychologist Paul Ekman. His work studying facial expressions in communities isolated from modern society suggested that facial expressions are universal. The idea of universal facial expressions is expedient for AI researchers, since much of the AI in use today must sort complex images or sounds into a fixed set of categories. Amazon sells facial recognition software that promises to detect emotions, as does Microsoft. The list of emotions that Microsoft’s Emotion API claims to discern is the exact same list Ekman identifies as having universal facial expressions. And the technology is reportedly already in use in products such as those from Pivothead, a company that makes smart glasses to live-stream an employee’s point of view.
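To see why a fixed, universal label set is so convenient for machine learning, consider that an emotion classifier typically reduces to scoring an image against each label and picking the highest. The sketch below is a hypothetical illustration of that final step only: the label list follows the Ekman-style categories described above, but the scores and scoring model are invented placeholders, not any vendor's actual algorithm.

```python
import math

# Ekman-style "universal" emotion categories, as used by emotion-recognition
# products. (Illustrative label set; the scoring below is hypothetical.)
EKMAN_LABELS = [
    "anger", "contempt", "disgust", "fear",
    "happiness", "sadness", "surprise",
]

def softmax(scores):
    """Convert raw per-label scores into probabilities summing to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Map raw per-label scores to (label, probability) pairs,
    sorted from most to least likely."""
    probs = softmax(scores)
    return sorted(zip(EKMAN_LABELS, probs), key=lambda pair: -pair[1])

# Placeholder scores a model might emit for a smiling face:
raw_scores = [0.1, -0.3, -0.5, 0.0, 2.2, -0.1, 0.4]
top_label, top_prob = classify(raw_scores)[0]
print(top_label)  # happiness
```

The critique in the report is aimed at exactly this structure: however sophisticated the scoring model, the output is forced into one of seven boxes, which only makes sense if those seven categories really are universal.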