AI Lie Detection in the Global South

In Philip K. Dick’s rich science fiction universe, lies are sifted from truth with the aid of a Voight-Kampff machine — which measures physiological changes, such as respiration and eye movement, in response to stressful emotional stimuli. Today we are witnessing the development, testing, and application of similar tools in the Global South.

Mark Harris has written about EyeDetect, a lie detector that relies on a machine learning algorithm to analyze images from an infrared camera aimed at the user's eye. The company behind this new technology, Converus, markets EyeDetect as a lie detector that outperforms the polygraph, a tool whose validity the academic community has long questioned. Harris found that Converus has "close to 500 customers in 40 countries." In the United States, EyeDetect is mainly used to screen candidates for law enforcement agencies; its wider domestic use is constrained by the Employee Polygraph Protection Act, which bars most private employers from using lie detectors. That restriction, however, leaves open use cases beyond the United States.

Harris shows that companies such as Best Western, FedEx, IHOP, and Sheraton have employed this technology in Guatemala and Panama.


U.S. state and federal agencies are also free to integrate EyeDetect into their arsenals; organizations like Customs and Border Protection and the Defense Intelligence Agency are at the forefront of adopting it.

Professor Virginia Eubanks, author of Automating Inequality, powerfully writes that “systems tested in low rights environments will eventually be used on everyone.” In other words, marginalized communities, like people who live in the Global South, serve as the guinea pigs for surveillance; these technologies are perfected abroad and eventually deployed broadly in the U.S. as well.

The debate surrounding EyeDetect raises a number of questions. What is the accuracy of this technology? Are there proper channels to dispute the results? While Converus produced a lengthy rebuttal, including ad hominem attacks on the researchers Harris cited, it misses the point altogether about the biases built into technological systems. Converus claims that by removing the human from the loop, EyeDetect likewise eliminates the bias that made polygraphs so pernicious.

Yet the major ethical problems remain unanswered: what are acceptable false positive and false negative rates? And who decides? We need to make major inroads on these questions before leaving matters of truth up to chance.
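The stakes of those error rates can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely hypothetical figures (not Converus's published numbers) to show that when deception is rare among the people being screened, even a modest false positive rate means most people the system flags are actually telling the truth:

```python
# Illustrative sketch with hypothetical error rates -- not Converus's
# published figures. Shows how base rates interact with false positive
# and false negative rates in a screening setting.

def positive_predictive_value(sensitivity, false_positive_rate, base_rate):
    """P(actually deceptive | flagged as deceptive), via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = false_positive_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Suppose (hypothetically) the detector catches 90% of deceptive subjects
# (a 10% false negative rate) and wrongly flags 10% of truthful ones,
# and that 5% of the screened population is actually being deceptive.
ppv = positive_predictive_value(0.90, 0.10, 0.05)
print(f"{ppv:.2%}")  # about 32% -- most flagged people are truthful
```

Under these assumed numbers, roughly two out of every three people the system accuses are innocent, which is exactly why "who sets the error rates, and who decides" is not a technical footnote but the central question.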

  • Post by Renata Barreto

Igor Rubinov