University of Michigan School of Information
UMSI researchers find journalists temper (un)certainty in science communications

Friday, 02/18/2022
A large-scale study of uncertainty in science communications indicates that journalists tend to temper, not exaggerate, scientific claims.
While splashy clickbait headlines touting the power of chocolate to cure everything from acne to cancer certainly grab attention, such articles may not be commonplace in science communication. Sensational reports and exaggerated scientific findings are a concern because of their potential to erode the public’s trust in journalism and science.
In a new paper, University of Michigan School of Information scholars Jiaxin Pei and David Jurgens dug into how scientific uncertainty is communicated in news articles and tested whether scientific claims are exaggerated. They also wanted to see how scientific claims in the news might differ between well-respected, peer-reviewed journals and less rigorous publications. The work was published in the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
“I feel like when we talk about the potential of journalists exaggerating claims, it's always these extreme cases,” says Jurgens, assistant professor of information and co-author of the paper. “We wanted to see if there was a difference when we lined up what the scientist said and what the journalist said for the same paper.”
The team looked at certainty, which can be expressed in subtle ways. “There's a lot of words that will signal how confident you are,” says Jurgens. “It’s a spectrum.” For instance, adding words like “suggest,” “approximately,” or “might” tends to increase uncertainty, while using a precise number in measurements indicates greater certainty.
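To make that intuition concrete, a very simple lexicon-based heuristic could score a sentence by counting hedging and boosting words. The word lists and scoring rule below are illustrative assumptions for this article, not the researchers' actual model:

```python
# Minimal illustration of how hedging words can signal (un)certainty.
# The word lists and weights are illustrative assumptions, not the study's lexicon or model.

HEDGES = {"suggest", "suggests", "may", "might", "approximately", "could", "possibly"}
BOOSTERS = {"demonstrate", "demonstrates", "show", "shows", "prove", "proves", "clearly"}

def rough_certainty_score(sentence: str) -> float:
    """Return a crude score in [0, 1]: more hedges -> lower, more boosters -> higher."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    hedges = sum(w in HEDGES for w in words)
    boosters = sum(w in BOOSTERS for w in words)
    # Start neutral at 0.5 and nudge by 0.1 per cue word, clamped to [0, 1].
    return min(1.0, max(0.0, 0.5 + 0.1 * (boosters - hedges)))

print(rough_certainty_score("The results suggest chocolate might reduce acne."))  # ~0.3
print(rough_certainty_score("The trial clearly demonstrates a 12% reduction."))   # ~0.7
```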
Pei and Jurgens pulled news data from Altmetric, a company that tracks mentions of scientific papers in news stories. They collected nearly 129,000 news stories mentioning specific scientific articles for their analysis. In each of these texts, they parsed any sentences containing discovery words, such as “find” or “conclude,” to see what scientists or journalists were presenting as the paper’s claims.
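A filtering step like this can be sketched in a few lines of Python. The verb list and the sentence-splitting rule below are assumptions made for illustration, not the exact procedure used in the paper:

```python
import re

# Illustrative sketch of pulling out "discovery" sentences; the verb list and
# the naive sentence splitter are assumptions, not the study's actual pipeline.
DISCOVERY_VERBS = re.compile(
    r"\b(find|finds|found|conclude|concludes|show|shows|suggest|suggests)\b", re.I
)

def discovery_sentences(text: str) -> list[str]:
    """Return sentences that contain a discovery verb such as 'find' or 'conclude'."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if DISCOVERY_VERBS.search(s)]

article = ("Researchers studied 500 patients over two years. "
           "They found that sleep quality improved modestly. "
           "Funding came from a national grant.")
print(discovery_sentences(article))
# ['They found that sleep quality improved modestly.']
```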
A group of human annotators went through the scientific papers and news articles, noting certainty levels in more than 1,500 scientific discoveries. “We took claims in the abstract and tried to match them with claims found in the news,” explains Jurgens. “So we said, ‘OK, here's two different people, scientists and journalists, trying to describe the same thing, but to two different audiences. What do we see in terms of certainty?’”
The researchers then built a computer model to see if they could replicate the certainty levels that the human readers had identified. The model’s predictions were highly correlated with human assessments of how certain a claim was.
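One common way to check such a model is to correlate its predicted scores against human ratings of the same claims. The snippet below shows that kind of comparison with made-up numbers; the values are not data from the study:

```python
import numpy as np

# Toy illustration of validating a model against human annotations by correlation.
# These scores are invented for the example, not results from the paper.
human_scores = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.7])     # annotators' certainty ratings
model_scores = np.array([0.85, 0.3, 0.55, 0.75, 0.5, 0.65])  # model predictions, same claims

pearson_r = np.corrcoef(human_scores, model_scores)[0, 1]
print(f"Pearson correlation: {pearson_r:.2f}")  # closer to 1.0 means closer agreement with humans
```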
“The model’s performance is good enough for large-scale analysis, but not perfect,” says Pei, a UMSI doctoral student and first author of the paper. He explains that there is a gap between human judgment and machine predictions, mostly because of subjectivity. “When identifying uncertainty in text, people's perceptions can be diverse, which makes it very hard to compare model predictions and human judgments,” Pei says. “Humans can sometimes disagree a lot.”
The team uncovered positive news about science communication. “Our findings suggest that journalists are actually pretty careful when reporting science,” says Pei, adding that if anything, some communicators can even reduce the certainty of scientific claims.
“Journalists have a hard job,” says Jurgens. He notes that it is a skill to take scientific results and translate them to a general audience. “It's nice to see that journalists really are trying to contextualize and temper scientific conclusions within the broader space.”
However, Pei notes that the translation of research can get murkier when it comes to the quality of the journal, or what researchers call the journal impact factor. Some science news writers report similar levels of certainty no matter where the original study is published.
“This can be problematic given the journal impact factor is an important indicator of research quality,” says Pei. “If journalists are reporting research that appeared in Nature or Science and some unknown journals with the same degrees of certainty, it might not be clear to the audience which finding is more trustworthy.”
Overall, the researchers view this work as an important step in better understanding uncertainty in scientific news. “I think one of the fun things that we were thinking about was how to build this tool to make it useful for folks,” says Jurgens. The team created a software package for scientists and journalists to calculate the uncertainty in research and reporting.
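As a rough sense of the workflow such a tool supports, the sketch below compares the certainty of a paper's claim against its news paraphrase. The function here is a self-contained stand-in, not the team's released package or its actual model; see the project website for the real software:

```python
# Hypothetical workflow sketch: estimate_certainty() is a crude placeholder
# standing in for the team's released package, NOT their actual model.
def estimate_certainty(sentence: str) -> float:
    hedges = {"suggest", "suggests", "may", "might", "could", "possibly"}
    found = sum(w.strip(".,").lower() in hedges for w in sentence.split())
    return max(0.0, 1.0 - 0.2 * found)  # more hedging words -> lower certainty

claim_in_paper = "Our results suggest a modest association between diet and mood."
claim_in_news = "Diet changes improve mood, study finds."

if estimate_certainty(claim_in_news) > estimate_certainty(claim_in_paper):
    print("The news phrasing sounds more certain than the paper's own claim.")
```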
While journalists can benefit from a certainty check on their work, Jurgens notes that this tool might be helpful to readers as well. “It’s easy to get frustrated with uncertainty,” he says. In times of great stress, the public yearns for clear answers, not hedging language.
“I think providing a tool like this could have a calming effect to some degree,” says Jurgens. “This work isn’t the magic bullet, but I think this tool could play into a holistic understanding for readers.”
—Sarah Derouin, UMSI public relations specialist
To learn more about this project and access all the related resources, please visit the project website.