The White House recently released an executive order laying out guidelines to improve the safety and security of artificial intelligence. The overarching goal of the order is to increase transparency from AI companies about how their models work and to address concerns about the growing power of AI.
Isak Asare, co-director of the IU cybersecurity and global policy program and executive director of the IU cybersecurity clinic, said the country is already years behind in responding to AI; the executive order is one of the first major steps the White House has taken to address the issue.
“The more that it becomes integrated into all of our systems of life, the more that we become reliant on it, the harder it is to disentangle from that technology, and the harder it is to control against its negative effects,” Asare said.
Asare said he thinks the executive order falls short of answering questions about what kind of privacy rights Americans have, what it means to be a human in a technology-driven world and what laws should be put in place to further regulate AI. The order uses measurements and testing methods for AI risk from the National Institute of Standards and Technology.
“It really starts with answering the question of how do we create an infrastructure where more people are at the table, where there's a more broad, diverse set of contributors to the AI field, that all of us have a say in shaping the world that we want to live in and the role that AI plays in it,” Asare said.
Beth Plale, Michael A. and Laurie Burns McRobbie Bicentennial professor of computer engineering at IU, said she does not find the executive order lacking. The order builds on the Blueprint for an AI Bill of Rights that the White House released last year; the document included principles for the design, use and deployment of AI.
“It lays out a case for, I would say, a more conservative approach to how we think about regulating what are imminent threats to health and safety and economic viability,” Plale said. “This actually makes things more specific and directs agencies to take steps forward. I was actually really quite pleased with that.”
Asare said people have already let social media gain too much control and traction in society, and AI has helped social media sites become more powerful. AI gives social media sites more access to personal information about their users and allows them to generate more personalized content. To address this, the executive order introduces the idea of watermarking: being able to mark and recognize content that is created by AI.
“Our relationship with truth is already becoming very strained because of our failure to think very carefully about the role that social media was playing in our lives,” Asare said. “That's being augmented times infinity with artificial intelligence technologies…it has this amplified ability to influence you in any way.”
Asare said it will be hard to enforce these guidelines because they rely on the integrity of AI companies to follow through. Yet it is in AI companies’ interests to have them in place, he said. Ideally, these companies aim to solve societal issues, such as optimizing food production, minimizing teacher workload and making the healthcare system more efficient. While AI has the potential to do many different things, releasing these guidelines is just a first step in keeping it under control. He thinks Congress should take more action in passing comprehensive legislation on AI security and data privacy.
“There are serious concerns about what happens when an amoral actor or a bad actor gets a hold of AI technologies. There's sort of unlimited amounts of ways that AI has been, is being and can be used to limit people's right of movement, to limit people's privacy,” Asare said. “This is a sort of Pandora's box of negative effects because any technology is always going to be the result of social processes.”
Plale said as AI tools become more popular, she expects educators to push for more transparency of sources in classwork and to raise awareness of what AI is capable of and its limitations. If students decide to use ChatGPT, they should be personally responsible for whether their work is right or wrong. Plale also thinks the executive order will encourage more national conversations about AI research and its funding through Congress.
“As a primary research institution, I think all of us that are in the university have a responsibility to be aware of the research that we put out there,” Plale said. “What I do see through the executive order is a lot of community engagement. I think there's opportunity for us as citizens and educators and people, journalists, people in industry to contribute and I think that's a very positive thing.”
Both Plale and Asare are worried about the power and presence AI already has in society, particularly when it comes to facial recognition.
“I am more focused on the short-term harms from having facial recognition in public places where one's actions can be tracked and have punitive effects,” Plale said. “The ability to track one's comings and goings can be so easily abused.”
Asare said he thinks AI presents an existential threat to society, and has already changed the way we interact, work and learn. He thinks people should take AI more seriously and educate themselves on it.
“New technologies sort of have a power relationship because of their effect on the environment and their effect on people; if there's a power relationship, that means there's usually going to be a race,” Asare said. “We should be really concerned about any type of a race, because races incentivize people cutting corners, right? Winning the race is a terrible outcome for everybody, usually.”