Artificial intelligence has been a fascination of science fiction for almost a century, but recent advancements show that autonomous robotics and artificially intelligent systems are becoming a reality.
The United Nations is set to open the Centre for Artificial Intelligence and Robotics to monitor the development of autonomous technology and its potential risks. The UN warns that advancements in artificial intelligence (AI) present new threats to humankind, from massive unemployment to lethal autonomous weapons to cyber-security threats.
The new center aims to increase knowledge and understanding of the risks and benefits of the new technology.
This week on Noon Edition, our panelists discussed the present and future impacts of artificial intelligence.
The important question is how we define artificial intelligence.
David Leake is a professor of computer science at the IU School of Informatics, Computing, and Engineering and works on AI systems that solve problems using case-based reasoning. Leake says his preferred definition of AI comes from Northwestern engineering professor Chris Riesbeck.
“It’s basically: how do we answer the eternal question of why are computers so stupid?” Leake says. “The AI classic picture is we want systems that do things like chess, that we think smart people do, or solving equations. The things that are really hard for AI systems are the really common things. Things like vision, things like reasoning about everyday events.”
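Leake's research area, case-based reasoning, solves a new problem by retrieving the most similar previously solved case and reusing its solution. The following is a minimal illustrative sketch of that retrieve-and-reuse idea, not Leake's actual system; the case data and similarity measure are invented for illustration.

```python
def similarity(a, b):
    """Toy similarity measure: count of matching feature values."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def solve(case_base, problem):
    """Retrieve the most similar past case and reuse its solution."""
    best = max(case_base, key=lambda case: similarity(case["problem"], problem))
    return best["solution"]

# Hypothetical case base of previously solved troubleshooting problems.
case_base = [
    {"problem": {"symptom": "no power", "device": "laptop"},
     "solution": "check battery"},
    {"problem": {"symptom": "overheating", "device": "laptop"},
     "solution": "clean fan"},
]

# A new problem is matched to the closest stored case.
print(solve(case_base, {"symptom": "no power", "device": "phone"}))
# -> check battery
```

Full case-based reasoning systems also adapt the retrieved solution to the new situation and store the outcome as a new case, but retrieval is the core step.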
As AI develops, many warn of the existential dangers of artificial intelligence and the need to regulate it.
Nathan Ensmenger is an associate professor at the IU School of Informatics, Computing, and Engineering and studies the history of artificial intelligence. Ensmenger says we should not be regulating the decisions of AI, but rather the decisions of the engineers behind it.
He cites the example of AI used in financial analysis, which can replicate structural patterns of discrimination in the real world.
“It seems like we just got this decision from this machine; how could a machine be prejudiced or racially biased?” Ensmenger says. “And it conceals that social factor, and that’s what I think ought to be regulated.”
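Ensmenger's point can be made concrete with a toy example. The sketch below is hypothetical and not from the broadcast: a trivial "model" trained on historically biased loan decisions learns to reject applicants by zip code, a common proxy for race, even though race never appears as an input feature. The zip codes, incomes, and decision rule are all invented for illustration.

```python
from collections import defaultdict

# Invented historical decisions reflecting past discrimination:
# one zip code was systematically denied regardless of income.
history = [
    {"zip": "46201", "income": 55, "approved": False},
    {"zip": "46201", "income": 60, "approved": False},
    {"zip": "46290", "income": 55, "approved": True},
    {"zip": "46290", "income": 50, "approved": True},
]

# "Training": compute the historical approval rate per zip code.
outcomes = defaultdict(list)
for record in history:
    outcomes[record["zip"]].append(record["approved"])

def predict(applicant):
    """Approve if most past applicants from this zip code were approved."""
    past = outcomes[applicant["zip"]]
    return sum(past) / len(past) > 0.5

# Two equally qualified applicants get different outcomes
# based on zip code alone.
print(predict({"zip": "46201", "income": 55}))  # -> False
print(predict({"zip": "46290", "income": 55}))  # -> True
```

The "machine's decision" here is nothing more than the historical pattern replayed, which is exactly the engineering choice (what data to train on, which features to allow) that Ensmenger argues should be the target of regulation.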
In terms of the nightmare scenario of a robot takeover, Associate Professor of Informatics and Computing David Crandall says there is a huge technological gap between where we are now and a robot apocalypse.
“I think the state of AI is such that we’re optimizing for the simple cases, the things that people do a lot, and that makes [it] a lot easier than trying to handle all of the possible situations that can happen in the real world,” Crandall says.
David Crandall: Associate Professor of Informatics and Computing, IU School of Informatics, Computing, and Engineering
Nathan Ensmenger: Associate Professor, IU School of Informatics, Computing, and Engineering
David Leake: Professor of Computer Science and Executive Associate Dean, IU School of Informatics, Computing, and Engineering