Artificial intelligence is often viewed through rose-colored glasses, but AI has its problems, too. Computer algorithms can show prejudice just as humans can. Sometimes this prejudice results from learning on data sets into which humans have coded their own biases. However, researchers have also shown that AI agents can develop prejudice entirely on their own.
Researchers at Cardiff University and MIT ran computer simulations in which AI agents decided whether to give virtual money to someone in their own group or in a different group. After thousands of simulated rounds, the researchers found that individuals became more likely to donate to others with traits similar to their own, and to develop prejudices against those who were different. The agents appeared to have learned that this behavior paid off in the short term.
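The published model is built on indirect reciprocity and reputation, which is more involved than can be shown here. As a much simpler illustration of the basic dynamic, the toy sketch below simulates agents who each carry a fixed group trait and an evolvable "tolerance" controlling how different a recipient can be before the agent refuses to donate. Because donating is costly and nothing in this stripped-down setup rewards out-group generosity, imitation of higher-scoring agents tends to drive tolerance down over time. Every name, parameter value, and the imitation rule are illustrative assumptions, not the authors' actual model.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

B, C = 3, 1              # illustrative benefit to recipient / cost to donor
N_AGENTS = 40
N_GROUPS = 5
N_GENERATIONS = 50
N_INTERACTIONS = 1000    # pairwise donation opportunities per generation
MUTATION_RATE = 0.05

class Agent:
    """An agent with a fixed group trait and an evolvable tolerance level."""
    def __init__(self, group, tolerance):
        self.group = group          # trait label, 0..N_GROUPS-1
        self.tolerance = tolerance  # max trait distance the agent donates across
        self.payoff = 0

def play_generation(pop):
    """Random pairwise encounters: the donor gives only to similar-enough agents."""
    for agent in pop:
        agent.payoff = 0
    for _ in range(N_INTERACTIONS):
        donor, recipient = random.sample(pop, 2)
        if abs(donor.group - recipient.group) <= donor.tolerance:
            donor.payoff -= C
            recipient.payoff += B

def evolve(pop):
    """Imitation dynamics: copy a better-scoring agent's tolerance, rarely mutate."""
    new_pop = []
    for agent in pop:
        model = random.choice(pop)
        tolerance = model.tolerance if model.payoff > agent.payoff else agent.tolerance
        if random.random() < MUTATION_RATE:
            tolerance = random.randrange(N_GROUPS)
        new_pop.append(Agent(agent.group, tolerance))  # traits pass on unchanged
    return new_pop

pop = [Agent(random.randrange(N_GROUPS), random.randrange(N_GROUPS))
       for _ in range(N_AGENTS)]
start = sum(a.tolerance for a in pop) / N_AGENTS
for _ in range(N_GENERATIONS):
    play_generation(pop)
    pop = evolve(pop)
end = sum(a.tolerance for a in pop) / N_AGENTS
print(f"mean tolerance: {start:.2f} -> {end:.2f}")
```

Running the sketch typically shows the population's average tolerance shrinking across generations, a crude analogue of agents "learning" that in-group favoritism pays off in the short term.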
The researchers also found that random instances of prejudice caused prejudicial groups to grow, and that groups seeking protection from prejudicial groups formed new prejudicial groups of their own. According to the research, once this process starts in a virtual population, it is very difficult to reverse.
What's interesting is that this prejudicial cycle occurs in AI individuals with very low cognitive abilities. Apparently you don't need sophisticated human cognition to form prejudices.
Sources and Further Reading
- Cardiff University. Could AI robots develop prejudice on their own? ScienceDaily, September 6, 2018.
- Whitaker, R. M., Colombo, G. B., & Rand, D. G. (2018). Indirect Reciprocity and the Evolution of Prejudicial Groups. Scientific Reports, 8 (1).