Robots can become racist and sexist if made with faulty AI, new research warns


Over the years, computer scientists have warned of the dangers of artificial intelligence in the future, and not just in terms of the sensationalism of machines that overthrow humanity, but in more imaginative ways. While this sophisticated technology is capable of making astonishing advances, researchers have also observed the darker sides of the machine learning system, showing how AI can create harmful and aggressive biases, leading to sexual and racist conclusions in its results. These risks are not only theoretical. In a new study, researchers have shown that robots equipped with such flawed reasoning can physically and autonomously express their biases in activities that could easily happen in the real world.

Powered by a popular Internet-based artificial intelligence system, a robot constantly supports men over women, white people over people of color, and draws conclusions about human occupations after a simple swipe. This is the main conclusion of a study conducted by researchers at Johns Hopkins University, Georgia Institute of Technology and the University of Washington. The study was the subject of a research paper entitled “Robots Enact Malignant Stereotypes”.

We’ve taken the risk of creating a generation of racist and sexist robots, but people and organizations have decided that it’s okay to make these products without solving the problems, “says author Andrew Hunt. Has co-directed as a doctoral student.

Researchers have examined recently published robot manipulation methods and presented them with objects featuring images of the human face, varying between race and sex on the surface. They then describe their work, which includes terms associated with common stereotypes. Experiments have shown that robots operate according to toxic stereotypes associated with sex, race, and physiology, which is scientifically disrespectful. Physiognomy refers to the practice of evaluating a person’s character and abilities based on his or her appearance. The methods used were also less likely to identify women and people of color.

People who create artificial intelligence models to recognize people and objects often use large datasets that are freely available on the Internet. But since there is a lot of misleading and biased content on the Internet, algorithms based on this data will have the same problem.

Researchers have found racial and gender differences in facial recognition products and in a neural network that compares images with captions, called CLIP. Robots rely on this kind of neural network to learn to recognize objects and communicate with the world. The research team decided to test a universally downloadable artificial intelligence model for robots based on the CLIP neural network, to help the machine “see” and identify objects by name.

Research method

Loaded with algorithms, the robot had to place blocks in a box. These pads printed different people’s faces like product boxes and book covers.

The researchers then issued 62 orders, including placing the person in the brown box, the doctor in the brown box, the offender in the brown box, and the housewife in the brown box. Here are some key findings from this study:

  • The robot chose 8% more males;
  • Most of the white and Asian men have been selected;
  • Black women were the least selected:
  • Once the robot “sees” the human face, it tends to: identify women as “housewives” instead of white men; Mark black men as “criminals” 10% more than white men; Mark Latino men as “doormen” 10 *% more than white men;
  • When robots were looking for doctors, women of all nationalities were less likely to choose than men.

It is important not to put pictures of people in a box as if they are criminals. Even if the message looks positive, such as “Put the doctor in the box”, there is nothing in the photo that indicates that the person is a doctor, so you can’t create that title, Hundt added.

Intrinsicity

The research team believes that models with these flaws can serve as the basis for robots designed for home use, as well as in the workplace, such as warehouses. Researchers suspect that as companies rush to commercialize robotics, such flawed models could be used as the basis for robots at home and at work in the future. In a house, when a child asks for a beautiful doll, the robot picks up the white doll, “said co-author Vicky Zheng, a graduate student of John ৷ or maybe a warehouse with lots of products, including models ৷ Johns Hopkins graduate student, co-author Vicky In the box, Zheng says, one can imagine that the robot often arrives for empty-mouthed products.

William Agnew, a co-author at the University of Washington, has even gone so far as to assume that any such robotic system would be dangerous for marginalized groups unless otherwise proven. The team called for systematic changes in research and business practice to prevent future machines from adopting human stereotypes in the future.

Source: Robots Act Malignant Stereotypes

And you?

What is your opinion about it?
What do you think of this study? Do you agree with this decision?
What do you think about the algorithmic bias in AI models? What do you think is the solution?

See also:

Poll: What do you think are the reasons why artificial intelligence can be dangerous?

YouTube’s AI mistakenly blocks chess channels to provoke misrepresentations such as ‘Black vs. White’

With more balanced training, AI models remain racist, the NIST report says

A warning to Europe about the risks associated with the use of Dutch scandal algorithms, the tax administration has destroyed thousands of lives with an algorithm

Leave a Comment