

Are You Immune to AI in Your News Feed?

Think you’re immune to the influence of Artificial Intelligence on what you believe? 

With Meta introducing (and then promptly backtracking on) AI-powered user accounts on its platforms, and replacing human fact checkers with “community notes” similar to those on X (formerly Twitter), questions arise about how AI might influence what people believe and share online.

A new study from Indiana University explores how people’s judgment of political news articles changes depending on whether the articles are accompanied by fact checks written by humans or by fact checks generated by an AI chatbot – in this case, ChatGPT.

The Setup

In the experiment, the researchers divided more than 2,000 participants into four groups.

Each group read the same 40 news headlines, each presented with an image and lede statement. Half of the headlines were intended to appeal to Republican audiences, and the other half to Democratic audiences. Of the 40 headlines, half were true and half were false.

For each headline, participants indicated whether they believed the article was true and then whether they would share it.

Participants in the first group were always shown a fact check generated by AI, which appeared below the article preview. The second group could view that same AI fact check, but only if they clicked to open it. The third group was given fact checks written by a human. The control group saw only the news headlines, with no fact checks at all.

How did they respond?

The results are a bit different from what you might expect. While ChatGPT correctly determined that 90 percent of the false news articles were indeed false, it struggled to verify true ones. In fact, ChatGPT expressed uncertainty about roughly 65 percent of the true articles and incorrectly labeled 20 percent of them as false. In other words, it was great at confirming that false articles were false, but not so great at confirming that true articles were true.

This could be explained by the data that ChatGPT’s underlying large language model (LLM) was trained on. A key limitation of automated fact-checking systems is known as the “breaking news problem”: developing stories often contain newer information that the model hasn’t been exposed to, which could explain its uncertainty even about articles that have been verified.

However, even though ChatGPT is good at flagging false news, that didn’t seem to stop participants from believing and wanting to share false articles anyway. In fact, in the few instances where ChatGPT was unsure about false headlines, participants’ belief in them increased by about 9 percent compared to the control group. Conversely, ChatGPT’s mistaken judgments of true headlines decreased belief in them by almost 13 percent.

Attitudes Towards AI and Design

Things get more interesting when we look at the data from participants who had the option to click on the AI fact check. These participants were more likely to share both true and false news, which suggests that we might be using AI fact checks to validate the opinions we already hold on a subject. The researchers also noted that, when given the fact check option, some participants disregarded it entirely, especially in the case of false headlines.

It turns out that participants’ attitude towards AI (ATAI) likely plays a role in this process. Participants with a positive ATAI were more likely to believe and share true news even when ChatGPT was unsure about it. Overall, participants seemed to treat the LLM’s judgments as objective, and its uncertainty led them to doubt true articles on the basis of these generated fact checks.

There is strong evidence that human fact checks inspire the most confidence: they had the most positive impact on belief in, and willingness to share, true articles. This was the case even though participants were never told whether a fact check had been produced by a human or by the LLM.

As social media platforms become more invested in artificial intelligence, there is a strong need to evaluate the effectiveness and, in particular, the design of these systems. Going a step further, Meta has also announced plans to lay off engineering staff and pass that work on to AI. How could this influence the platforms overall, and how susceptible we are to the information we encounter on them?

Sources 

This study comes from The Observatory on Social Media (OSoMe, pronounced “awesome”) - an interdisciplinary research center at Indiana University. OSoMe unites data scientists and journalists in studying the role of media and technology in society, and building tools to analyze and counter disinformation and manipulation on social media. 

DeVerna, M. R.; Yan, H. Y.; Yang, K.-C.; and Menczer, F. (2024). Fact-checking information from large language models can decrease headline discernment. Proc. Natl. Acad. Sci., 121(50): e2322823121.

The Observatory on Social Media at Indiana University: https://osome.iu.edu/

Learn more

How Artificial Intelligence impacted our lives in 2024 (PBS NewsHour)

