
Biased Algorithms

From the source: "The U.S. Department of Energy’s Oak Ridge National Laboratory unveiled Summit as the world’s most powerful and smartest scientific supercomputer on June 8, 2018." (OLCF at ORNL, Flickr)


Artificial Intelligence is often viewed through rose-colored glasses, but AI has its problems, too. Computer algorithms can show prejudice just like humans. Sometimes that prejudice comes from training on data sets into which humans have already coded bias. However, researchers have also shown that AI machines can develop prejudice all on their own.

Simulated Prejudice



Researchers at Cardiff University and MIT ran computer simulations in which AI individuals decided whether to give virtual money to someone in their own group or in a different group. After a computer ran thousands of simulations, the researchers saw that individuals were more likely to donate to others with traits similar to their own, and to develop prejudices against those who were different. The agents appeared to have learned that this behavior paid off in the short term.
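A setup like the one described can be sketched as a simple agent-based simulation. The parameters, trait names, and update rule below are hypothetical illustrations, not the study's actual model: each agent carries a group tag and a "prejudice" level (its probability of refusing to donate to the other group), plays rounds of a donation game, and then imitates the prejudice level of higher-scoring agents.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical parameters -- the study's actual values are not given here.
N_AGENTS = 60              # agents split evenly into two groups
BENEFIT, COST = 3.0, 1.0   # recipient's gain vs. donor's cost per donation
ROUNDS, GENERATIONS = 50, 30
MUTATION = 0.05            # chance a copied trait is slightly perturbed

def new_agent(group):
    # 'prejudice' = probability of refusing to donate to the other group
    return {"group": group, "prejudice": random.random(), "payoff": 0.0}

agents = [new_agent(i % 2) for i in range(N_AGENTS)]

def play_round(agents):
    # Each agent considers one randomly chosen potential recipient.
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = donor["group"] == recipient["group"]
        # Always donate in-group; donate out-group only if the
        # prejudice roll fails.
        if same_group or random.random() > donor["prejudice"]:
            donor["payoff"] -= COST
            recipient["payoff"] += BENEFIT

def evolve(agents):
    # Imitation step: each agent copies the prejudice level of a
    # randomly chosen higher-scoring agent, with occasional mutation.
    snapshot = [(a["payoff"], a["prejudice"]) for a in agents]
    for a in agents:
        model_payoff, model_prejudice = random.choice(snapshot)
        if model_payoff > a["payoff"]:
            a["prejudice"] = model_prejudice
        if random.random() < MUTATION:
            a["prejudice"] = min(1.0, max(0.0,
                a["prejudice"] + random.uniform(-0.1, 0.1)))
        a["payoff"] = 0.0  # reset for the next generation

for _ in range(GENERATIONS):
    for _ in range(ROUNDS):
        play_round(agents)
    evolve(agents)

avg = sum(a["prejudice"] for a in agents) / N_AGENTS
print(f"average prejudice after {GENERATIONS} generations: {avg:.2f}")
```

Because imitation copies whatever traits happen to score well, group-correlated behavior can spread through the population even though no agent is programmed to discriminate, which is the qualitative effect the study reports.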

Cyclical Discrimination



The researchers also found that random instances of prejudice caused prejudicial groups to grow, and that groups seeking protection from prejudicial groups formed new prejudicial groups of their own. According to this research, once the process starts in the virtual population, it is very difficult to reverse.

What's interesting is that this prejudicial cycle occurs in AI individuals with very low cognitive abilities. Apparently you don't need sophisticated human cognition to form prejudices.
