How Myers-Briggs and AI Are Abused

Let’s say you’re a job seeker with a pretty good idea of what employers want to hear. Like many companies today, your potential new workplace gives you a personality test as part of the hiring process. You plan to provide answers showing that you’re enthusiastic, a hard worker, and a real people person.

Then they put you in front of a camera while you take the test orally, you frown slightly during one of your answers, and their facial-analysis program decides you are “difficult.”

Sorry, next please!

This is just one of the many problems with the increasing use of artificial intelligence in hiring, argues the new documentary “Persona: The Dark Truth Behind Personality Tests,” premiering Thursday on HBO Max.

The film, from director Tim Travers Hawkins, begins with the origin of the Myers-Briggs Type Indicator. The mid-20th-century brainchild of a mother-daughter team, it sorts people along four dichotomies: introversion/extraversion, sensing/intuition, thinking/feeling, and judging/perceiving. The quiz, whose 16 four-letter “types” have an astrology-like following, has evolved into a hiring tool used across America, along with successors such as the “Big Five,” which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism.

“Persona” argues that the written test carries certain ingrained biases; it can, for example, discriminate against individuals who are unfamiliar with the kind of language or the scenarios the test uses.

And according to the film, incorporating artificial intelligence into the process makes things even more problematic.

The technology scans written applications for red-flag words and, in on-camera interviews, screens applicants for facial expressions that might contradict their answers.

Four generations of Briggs Myers women.
HBO Max

“[It] works according to 19th-century pseudoscientific reasoning that emotions and character can be standardized based on facial expressions,” Ifeoma Ajunwa, associate professor of law and director of the AI Decision-Making Research Program at the University of North Carolina School of Law, told The Post via email.

Ajunwa, who appears in the film, says the potential for bias is huge. “Since the computerized systems are mostly trained on white male faces and voices, the facial expressions or voices of women and racial minorities can be misunderstood. In addition, there are privacy concerns that arise from the collection of biometric data.”

HireVue, a widely used recruiting-technology company, analyzed the “facial movements, word choice and speaking voice of applicants before ranking them against other applicants based on an automatically generated ‘employability’ score,” The Washington Post reported. The company announced last month that it has since stopped the practice.

Although the company claims that “visual analysis no longer adds significant value to the assessments,” the move followed an outcry over the technology’s potentially harmful effects.

Cathy O’Neil is a data science consultant, author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” and one of the experts interviewed in “Persona.” Her company, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), conducted an audit of HireVue’s practices following the announcement.

“No technology is inherently harmful; it’s just a tool,” she told The Post via email. “But just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities. This is especially true because people often assume that technology is objective and even perfect. When we have blind faith in something very complex and opaque, it is always a mistake.”

A typical question from the Myers-Briggs personality test.
HBO Max

There has been a wave of legislative action around facial-analysis algorithms in recent years, but New York City is the first to introduce a bill that would specifically regulate their use in hiring. It would require companies to disclose to applicants that they are using the technology and to conduct an annual audit for bias.


But Ajunwa thinks the bill does not go far enough. It is “a necessary first step to protect workers’ civil liberties,” she said. But “what we need is federal regulations tied to federal anti-discrimination laws that would apply in all states, not just New York City.”

To those who knew Isabel Briggs Myers, seeing the test paired with AI and used to ruthlessly decide who is “employable” seems far from its original intent: helping users discover their true calling.

As one of Briggs Myers’s granddaughters puts it in the movie, “I think there are ways it is being used that she would like to correct.”
