
AI develops prejudice all on its own


Transcript

D: Hey, why’d you turn off the radio? I wanted to hear the news.

Y: What for? It’s the same every day—story after story about humans doing a bad job of getting along. Sometimes I think we’d be better off if we weren’t in charge. Maybe we should let artificial intelligence rule the world.

D: AI has its problems too. Computer algorithms can show prejudice just like humans.

Y: But isn’t that because they’re learning from data sets that had prejudice coded into them by humans?

D: Sometimes, but researchers have also shown that AI machines can develop prejudice all on their own. The researchers ran computer simulations in which AI individuals decided whether to give virtual money to someone in their own group or in a different group. After thousands of simulation runs, the researchers saw that individuals were more likely to donate to others with traits similar to their own, and to develop prejudices against those who were different. They seemed to have learned that this behavior paid off in the short term. The researchers also found that random instances of prejudice made the prejudiced groups grow, and that groups seeking protection from prejudiced groups formed new prejudiced groups of their own. Once the process starts in a virtual population, it's very difficult to reverse.

Y: That sounds familiar.

D: And what’s interesting is that we’re seeing it here in AI individuals with very low cognitive abilities. Apparently you don’t need sophisticated human cognition to form prejudices.

Y: Well, I wouldn’t call prejudice one of our more sophisticated human traits.

[Image: an artistic interpretation of AI, with several light blue branches coming out of a person's head against a dark blue background]

AI machines can learn prejudices from human influence, but also all on their own. (deepak pal / flickr)

Keeping up with the news can feel like hearing the same stories, over and over, about humans doing a bad job of getting along. What if humans weren't in charge? Maybe we should let artificial intelligence rule the world.

AI has its problems too. Computer algorithms can show prejudice just like humans.

Sometimes this happens because algorithms are learning from data sets that had prejudice coded into them by humans, but researchers have also shown that AI machines can develop prejudice all on their own.

The researchers ran computer simulations in which AI individuals decided whether to give virtual money to someone in their own group or in a different group. After thousands of simulation runs, the researchers saw that individuals were more likely to donate to others with traits similar to their own, and to develop prejudices against those who were different. They seemed to have learned that this behavior paid off in the short term.

The researchers also found that random instances of prejudice made the prejudiced groups grow, and that groups seeking protection from prejudiced groups formed new prejudiced groups of their own. Once the process starts in a virtual population, it's very difficult to reverse.
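The setup described above lends itself to a simple agent-based sketch. The Python snippet below is only a toy illustration of that kind of donation-game simulation, not the researchers' actual model: the agent attributes, payoff values, and copy-the-better-strategy update rule are assumptions made for the example, and whether a strong in-group bias emerges in this toy version depends on the parameters chosen.

    import random

    # Toy sketch of a donation-game simulation (illustrative assumptions only).
    NUM_AGENTS = 100
    NUM_GROUPS = 2
    ROUNDS = 2000
    DONATION_COST = 1.0     # assumed cost to the donor
    DONATION_BENEFIT = 3.0  # assumed benefit to the recipient
    MUTATION_RATE = 0.05

    class Agent:
        def __init__(self):
            self.group = random.randrange(NUM_GROUPS)
            # Strategy: probability of donating to in-group vs. out-group partners.
            self.p_in = random.random()
            self.p_out = random.random()
            self.payoff = 0.0

    agents = [Agent() for _ in range(NUM_AGENTS)]

    for _ in range(ROUNDS):
        # Each agent meets a random partner and decides whether to donate.
        for donor in agents:
            recipient = random.choice([a for a in agents if a is not donor])
            same_group = donor.group == recipient.group
            p_donate = donor.p_in if same_group else donor.p_out
            if random.random() < p_donate:
                donor.payoff -= DONATION_COST
                recipient.payoff += DONATION_BENEFIT

        # Imitation step: each agent copies a random model's strategy if the
        # model earned a higher payoff this round, with occasional mutation.
        snapshot = [(a.p_in, a.p_out, a.payoff) for a in agents]
        for agent in agents:
            model_in, model_out, model_payoff = random.choice(snapshot)
            if model_payoff > agent.payoff:
                agent.p_in, agent.p_out = model_in, model_out
            if random.random() < MUTATION_RATE:
                agent.p_in = min(1.0, max(0.0, agent.p_in + random.gauss(0, 0.1)))
                agent.p_out = min(1.0, max(0.0, agent.p_out + random.gauss(0, 0.1)))
            agent.payoff = 0.0  # reset payoffs for the next round

    avg_in = sum(a.p_in for a in agents) / NUM_AGENTS
    avg_out = sum(a.p_out for a in agents) / NUM_AGENTS
    print(f"Average willingness to donate in-group:  {avg_in:.2f}")
    print(f"Average willingness to donate out-group: {avg_out:.2f}")

Running the sketch prints the population's average willingness to donate to in-group versus out-group partners, which is one crude way to read off whether a bias like the one described has taken hold in the virtual population.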

That sounds familiar.

What’s also interesting is that we’re seeing it here in AI individuals with very low cognitive abilities. Apparently you don’t need sophisticated human cognition to form prejudices.

Although, it might not be fair to call prejudice a sophisticated trait.
