
University of Michigan School of Information



Doc on the Tok | Getting Cozy with Causality: UMSI Research Roundup

Tuesday, 05/28/2024

University of Michigan School of Information faculty and PhD students are creating and sharing knowledge that helps build a better world. Here are some of their recent publications. 


Audio-as-Data Tools: Replicating Computational Data Processing

Media and Communication, May 2024 

Josephine Lukito, Jason Greenfield, Yunkang Yang, Ross Dahlke, Megan A. Brown, Rebecca Lewis, Bin Chen

The rise of audio-as-data in social science research accentuates a fundamental challenge: establishing reproducible and reliable methodologies to guide this emerging area of study. In this study, we focus on the reproducibility of audio-as-data preparation methods in computational communication research and evaluate the accuracy of popular audio-as-data tools. We analyze automated transcription and computational phonology tools applied to 200 episodes of conservative talk shows hosted by Rush Limbaugh and Alex Jones. Our findings reveal that the tools we tested are highly accurate. However, although different transcription and audio signal processing tools yield similar results, subtle yet significant variations could impact the findings’ reproducibility. Specifically, we find that discrepancies in automated transcriptions and auditory features such as pitch and intensity underscore the need for meticulous reproduction of data preparation procedures. These insights into the variability introduced by different tools stress the importance of detailed methodological reporting and consistent processing techniques to ensure the replicability of research outcomes. Our study contributes to the broader discourse on replicability and reproducibility by highlighting the nuances of audio data preparation and advocating for more transparent and standardized practices in this area.
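The transcription discrepancies the authors describe are typically quantified with word error rate (WER), the edit distance between two word sequences divided by the reference length. A minimal pure-Python sketch (the transcript strings below are invented for illustration, not taken from the study's data):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical outputs of two transcription tools on the same audio segment:
t1 = "the media is not telling you the whole story"
t2 = "the media is not telling you a whole story"
print(round(wer(t1, t2), 3))  # one substitution over nine words
```

Comparing tool outputs pairwise this way makes the "subtle yet significant variations" concrete and reportable.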

The acceptance of evolution: A developmental view of Generation X in the United States

Public Understanding of Science, March 2024

Jon D. Miller, Belen Laspra, Carmelo Polino, Glenn Branch, Robert T. Pennock, Mark Ackerman

The public acceptance of evolution remains a contentious issue in the United States. Numerous investigations have used national cross-sectional studies to examine the factors associated with the acceptance or rejection of evolution. This analysis uses a 33-year longitudinal study that followed the same 5000 public-school students from grade 7 through midlife (ages 45–48) and is the first to do so with regard to evolution. A set of structural equation models demonstrates the complexity and changing nature of influences over these three decades. Parents and local influences are strong during the high school years. The combination of post-secondary education and occupational and family choices demonstrates that the 15 years after high school are the switchyards of life.

Bringing Communities In, Achieving AI for All

Issues in Science and Technology, May 2024

Shobita Parthasarathy, Jared Katzman

To ensure that artificial intelligence meaningfully addresses social inequalities, AI designers and regulators should seek out partnerships with marginalized communities, to learn what they need from this emerging technology and build it.

Estimating the Ideology of Political YouTube Videos

Political Analysis, February 2024 

Angela Lai, Megan Brown, James Bisbee, Joshua A. Tucker, Jonathan Nagler, Richard Bonneau

We present a method for estimating the ideology of political YouTube videos. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work to estimate the ideology of a political YouTube video. First, we start with a matrix of political Reddit posts linking to YouTube videos and apply correspondence analysis to place those videos in an ideological space. Second, we train a language model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. We demonstrate the utility of this method by applying it to the watch histories of survey respondents to evaluate the prevalence of echo chambers on YouTube in addition to the association between video ideology and viewer engagement. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological landscape.
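The first step the abstract describes, placing videos in an ideological space by applying correspondence analysis to a posts-by-videos matrix, can be sketched with NumPy on a toy co-occurrence matrix. The matrix, its block structure, and the scaling choice (principal column coordinates) are illustrative assumptions; the paper's actual data and implementation may differ:

```python
import numpy as np

# Toy subreddit-by-video share-count matrix: rows are two left-leaning and two
# right-leaning subreddits (hypothetical), columns are four videos.
N = np.array([
    [8, 6, 1, 0],
    [7, 5, 0, 1],
    [0, 1, 6, 9],
    [1, 0, 7, 8],
], dtype=float)

P = N / N.sum()                       # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
# Standardized residuals: (P - r c^T) / sqrt(r c^T); the trivial CA
# dimension has zero singular value, so the first SVD axis is dimension 1.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
# First-dimension principal coordinates for columns = video positions
# along the dominant (here, ideological) axis.
video_scores = Vt[0] * sv[0] / np.sqrt(c)
print(video_scores.round(2))
```

With this block-structured input, the videos shared by the first pair of subreddits land on one side of the axis and the other pair's videos on the opposite side; the sign of the axis itself is arbitrary, as in any CA or SVD.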

Curiosity in News Consumption

Applied Cognitive Psychology, April 2024 

Jingyi Qiu, Russell Golman

We analyze how curiosity drives news consumption. We test predictions of the information-gap theory of curiosity using over 100,000 WeChat news articles, applying NLP methods to construct measures of salience, importance, and surprisingness associated with news headlines, experimentally validating these measures, and using them to predict clicks. Our findings confirm that people tend to consume news when: the headline sparks a salient question; the content appears more important (e.g., emphasized by the headline's position on the webpage or an exclamation mark); the headline refers to more surprising topics (measured as the KL-divergence from a baseline topic distribution); and the headline has lower valence. Information-gap theory helps predict aggregate news consumption. Yet our data also reveal a small negative correlation between the number of clicks and the ratio of likes to clicks, suggesting that while inducing curiosity can drive short-term news consumption, it doesn't necessarily enhance long-term reader engagement.
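The surprisingness measure the abstract mentions, KL-divergence of a headline's topic distribution from a baseline topic distribution, is simple to compute. A minimal sketch with invented topic distributions (the real measures come from NLP models fit to the WeChat corpus):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) over discrete topic distributions; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

baseline   = [0.40, 0.30, 0.20, 0.10]  # corpus-wide topic mix (hypothetical)
typical    = [0.38, 0.32, 0.20, 0.10]  # headline close to the baseline
surprising = [0.05, 0.05, 0.10, 0.80]  # headline dominated by a rare topic

print(kl_divergence(typical, baseline), kl_divergence(surprising, baseline))
```

A headline concentrated on a topic that is rare in the baseline gets a much larger divergence, which is exactly the sense in which it "refers to more surprising topics."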

An archival perspective on pretraining data

Patterns, March 2024 

Meera A. Desai, Irene V. Pasquetto, Abigail Jacobs, Dallas Card

Large language models have become ubiquitous but depend crucially on the data on which they are trained. These pretraining datasets are themselves distinctive artifacts that are reused, built upon, and made legitimate beyond their role in shaping model outputs. We consider the similarities between pretraining datasets and archives: both are collections of diverse sociocultural materials that mediate knowledge production and thereby confer power to those who select, document, and control access to them. We discuss the limitations of current approaches to assembling pretraining datasets and ask whose voices are amplified or obscured? Who is harmed? Whose perspectives are taken up or assumed as the default? We highlight the need for more research on these datasets and the practices through which they are built and suggest possible paths forward, drawing on ideas from archival studies.

Abstract LB375: Multimodality dietary intervention for colorectal cancer prevention: The MyBestGI randomized trial

Proceedings: AACR Annual Meeting 2024, April 2024

Zora Djuric, Michelle Segar, Ananda Sen, Reema Kadri, Rob Adwere-Boamah, Juno Orr, Katherine Poore, Samara Rifkin, Lorraine Buis

Colorectal cancer (CRC) is one of the cancers most highly affected by diet. Recommendations for reducing risk of CRC include weight management, eating plentiful plant-based high fiber foods, and limiting intakes of red meats and ultra-processed foods. In addition, an increased proportion of monounsaturated and omega-3 fats in the diet is beneficial for promoting weight management and reducing inflammation. Despite the evidence supporting the importance of diet and the demonstrated success for eliciting dietary changes in research settings, simple methods that can be easily implemented in medical settings are lacking. We developed a bespoke mHealth app (MyBestGI), incorporating an autonomy-supporting, self-regulatory approach to behavior change. The program promotes identification of the benefits of healthy eating on well-being. Two versions of the program are being tested in a 12-month, 3-arm randomized trial that seeks to recruit 240 people at increased risk for CRC. Participants are randomized to either 1) a control group that receives written information on cancer preventive diets, 2) a treatment group that receives a MyBestGI app version for logging 4 food groups associated with increased risks of CRC (red meats, processed meats, added sugars and refined grains), or 3) a treatment group that receives a MyBestGI app version for logging the same 4 food groups to limit plus 7 food groups to encourage in personalized quantities. The two treatment groups also receive a user manual, behavior-oriented, biweekly text messages, and supportive coaching calls. Logging of food groups is requested at least 3 days per week for 3 months, and 3 days per week every other week for the next 9 months. The app displays logging results relative to individualized targets and has features that prompt reflecting on the user’s own results and planning for future eating. The primary endpoints are weight loss and improvement in a dietary cancer prevention score.
Recruitment began in June 2023, and the study is enrolling 8-10 participants per month. The drop-out rate is lower than expected. Compliance with study procedures and app use is high. Responses to the end-of-day reflection questions within the app show that 92% of users are neutral, happy, or very happy with their dietary behavior for the day. For the 15 participants who had 12 weeks of logging data available, foods were logged for an average of 38 days (range 20-75) which is slightly higher than the requested logging of 36 days over 12 weeks. Review of the support calls for protocol fidelity indicates participants are enthusiastic about the MyBestGI program, making changes in their eating, and are internalizing the autonomy-supporting beliefs promoted in the program. These early results suggest users are engaging with the MyBestGI app at a very high level, making use of the provided app features, and enjoying the program. The MyBestGI approach therefore has excellent potential to support dietary change towards a cancer preventive diet in a format that is facile to implement in high-risk populations.

After automation: Homelessness prioritization algorithms and the future of care labor

Big Data & Society, March 2024

Pelle Tracey, Patricia Garcia

People experiencing homelessness seek support from homeless services systems that increasingly rely on prioritization algorithms to determine who is the most deserving of scarce resources. In this paper, we argue that algorithmic harms in homeless services require a reparative approach that takes the data work of care workers seriously. Building on Davis, Williams, and Yang’s concept of algorithmic reparation, we present a qualitative study that examines the intertwining of data work and care labor of 15 care workers. We show how they wrestle with the ethics of algorithmic prioritization and develop workarounds that allow them to advocate for their clients. We contribute an empirical understanding of how care workers provide care under homeless services systems that equate data work with care labor to justify work intensification. Our findings have implications for understanding the future of care labor in datafied conditions and the social and political ramifications of algorithmically mediated care.

Doc on the Tok: How BIPOC College Students Perceive Healthcare Professionals' Social Media Content

Proceedings of iConference 2024 (Best Poster Finalist), May 2024

Kiara Fletcher, Maahe Kazmi, Adam Alabssi Aljundi, Jordyn Ingram, Heaven Thomas, Kayla Booth, Oliver L. Haimson

90% of the U.S. population interacts with health information on social media. While access to this information can be important to those who experience financial, geographical, and logistical barriers to receiving medical care, social media is also a source of health-related misinformation and disinformation that can cause/exacerbate serious harm. One of many proposed initiatives to combat medical misinformation online is for healthcare professionals to create their own channels and disseminate health information based on their professional expertise on platforms like TikTok. But how do users, particularly Black, Indigenous, and People of Color (BIPOC) who are more likely to experience harm and neglect in medical settings due to systemic racism in the US, perceive the quality of the information healthcare professionals create? This poster paper is the first step in a larger research project to explore this phenomenon in which we: present a preliminary literature review, identify two gaps, and propose a qualitative study to explore BIPOC college students' perceptions of social media content created by healthcare professionals on popular, short-form video platforms. 

What Does CrowdTangle’s Demise Signal for Data Access Under the DSA? 

Tech Policy Press, March 2024

Megan Brown, Josephine Lukito, Kai-Cheng Yang

Last week, Meta announced that CrowdTangle, a tool commonly used by researchers and journalists to shed light on what goes on on Facebook and Instagram, will sunset in August of this year. In 2024, 64 countries, nearly half the world, are holding elections, and key transparency tools for social media platforms are increasingly inaccessible. Against the backdrop of profound global humanitarian crises, the violence of war, and the broad threats to civil liberties across the globe, the consequences of this election year are dire. Digital infrastructure is a core piece of the puzzle: it is a source of information, storytelling, organizing, and newsworthy commentary, but it also enables the proliferation of hate speech and mis/disinformation, including election lies that can contribute to offline violence and strife.

Virtual Care: Perspectives From Family Physicians

Family Medicine, April 2024

Olivia Ritchie, Emily Koptyra, Liz B. Marquis, Reema Kadri, Anna Laurie, V. G. Vinod Vydiswaran, Jiazhao Li, Lindsay K. Brown, Tiffany C. Veinot, Lorraine R. Buis, Timothy C. Guetterman

Background: During the COVID-19 pandemic, virtual care expanded rapidly at Michigan Medicine and other health systems. From family physicians’ perspectives, this shift to virtual care has the potential to affect workflow, job satisfaction, and patient communication. As clinics reopened and care delivery models shifted to a combination of in-person and virtual care, the need to understand physician experiences with virtual care arose in order to improve both patient and provider experiences. This study investigated Michigan Medicine family medicine physicians’ perceptions of virtual care through qualitative interviews to better understand how to improve the quality and effectiveness of virtual care for both patients and physicians. 

Methods: We employed a qualitative descriptive design to examine physician perspectives through semistructured interviews. We coded and analyzed transcripts using thematic analysis, facilitated by MAXQDA (VERBI) software. 

Results: The results of the analysis identified four major themes: (a) chief concerns that are appropriate for virtual evaluation, (b) physician perceptions of patient benefits, (c) focused but contextually enriched patient-physician communication, and (d) structural support needed for high-quality virtual care. 

Conclusions: These findings can help further direct the discussion of how to make use of resources to improve the quality and effectiveness of virtual care.

Everyday Equitable Data Literacy is Best in Social Studies

Improving Equity in Data Science, June 2024 

Tamara L. Shreiner, Mark Guzdial 

Teaching data literacy in social studies provides opportunities to teach fundamental data literacy skills to all students while also teaching social studies content. However, social studies teachers are not often properly prepared to teach about data and data visualizations and not all social studies teachers are motivated to teach equity-driven, justice-oriented content and skills. Additionally, tools for data visualizations are more often designed for STEM classes than for social studies classes. This chapter discusses technology and learning supports we have designed to address these challenges by helping social studies teachers implement equity-driven, justice-oriented data literacy. 

Now Is the Time to Strengthen Government-Academic Data Infrastructures to Jump-Start Future Public Health Crisis Response

JMIR Public Health and Surveillance, April 2024 

Jian-Sin Lee, Allison R B Tyler, Tiffany Christine Veinot, Elizabeth Yakel

During public health crises, the significance of rapid data sharing cannot be overstated. In attempts to accelerate COVID-19 pandemic responses, discussions within society and scholarly research have focused on data sharing among health care providers, across government departments at different levels, and on an international scale. A lesser-addressed yet equally important approach to sharing data during the COVID-19 pandemic and other crises involves cross-sector collaboration between government entities and academic researchers. Specifically, this refers to dedicated projects in which a government entity shares public health data with an academic research team for data analysis to receive data insights to inform policy. In this viewpoint, we identify and outline documented data sharing challenges in the context of COVID-19 and other public health crises, as well as broader crisis scenarios encompassing natural disasters and humanitarian emergencies. We then argue that government-academic data collaborations have the potential to alleviate these challenges, which should place them at the forefront of future research attention. In particular, for researchers, data collaborations with government entities should be considered part of the social infrastructure that bolsters their research efforts toward public health crisis response. Looking ahead, we propose a shift from ad hoc, intermittent collaborations to cultivating robust and enduring partnerships. Thus, we need to move beyond viewing government-academic data interactions as 1-time sharing events. Additionally, given the scarcity of scholarly exploration in this domain, we advocate for further investigation into the real-world practices and experiences related to sharing data from government sources with researchers during public health crises.

Repairing the harm: Toward an algorithmic reparations approach to hate speech content moderation

Big Data & Society, April 2024 

Chelsea Peterson-Salahuddin 

Content moderation algorithms influence how users understand and engage with social media platforms. However, when identifying hate speech, these automated systems often contain biases that can silence or further harm marginalized users. Recently, scholars have offered both restorative and transformative justice frameworks as alternative approaches to platform governance to mitigate harms caused to marginalized users. As a complement to these recent calls, in this essay, I take up the concept of reparation as one substantive approach social media platforms can use alongside and within these justice frameworks to take actionable steps toward addressing, undoing and proactively preventing the harm caused by algorithmic content moderation. Specifically, I draw on established legal and legislative reparations frameworks to suggest how social media platforms can reconceptualize algorithmic content moderation in ways that decrease harm to marginalized users when identifying hate speech. I argue that the concept of reparations can reorient how researchers and corporate social media platforms approach content moderation, away from capitalist impulses and efficiency and toward a framework that prioritizes creating an environment where individuals from marginalized communities feel safe, protected and empowered.

Networks and Influencers in Online Propaganda Events: A Comparative Study of Three Cases in India

Proceedings of the ACM on Human-Computer Interaction, April 2024

Anirban Sen, Soham De, Joyojeet Pal

The structure and mechanics of organized outreach around certain issues, such as in propaganda networks, is constantly evolving on social media. We collect tweets on two propaganda events and one non-propaganda event with varying degrees of organized messaging. We then perform a comparative analysis of the user and network characteristics of social media networks around these events and find clearly distinguishable traits across events. We find that influential entities like prominent politicians, digital influencers, and mainstream media prefer to engage more with social media events with a lesser degree of propaganda while avoiding events with a high degree of propaganda, which are mostly sustained by lesser-known but dedicated micro-influencers. We also find that network communities of events with a high degree of propaganda are significantly centralized with respect to the influence exercised by their leaders. The methods and findings of this study can pave the way for modeling and early detection of other propaganda events, using their user and community characteristics.
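The centralization finding can be illustrated with Freeman's degree centralization, a standard network measure that is 1 for a star (one leader drives all interaction) and 0 for a perfectly even network. The toy networks below are illustrative, not the study's data:

```python
def degree_centralization(edges):
    """Freeman degree centralization: 1 for a star, 0 for a degree-uniform network."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    dmax = max(deg.values())
    # Sum of deviations from the maximum degree, normalized by the
    # star network's value (n - 1)(n - 2).
    return sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))

star = [("hub", x) for x in "abcd"]                   # leader-centered community
ring = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]  # evenly spread interaction
print(degree_centralization(star), degree_centralization(ring))
```

A community sustained by a single dominant account scores near 1; organic, evenly distributed conversation scores near 0.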

Getting cozy with causality: Advances to the causal pathway diagramming method to enhance implementation precision

Implementation Research and Practice, April 2024

Predrag Klasnja, Rosemary D. Meza,  Michael D. Pullmann, Kayne D. Mettert, Rene Hawkes, Lorella Palazzo, Bryan J. Weiner, Cara C. Lewis

Background:  Implementation strategies are theorized to work well when carefully matched to implementation determinants and when factors—preconditions, moderators, etc.—that influence strategy effectiveness are prospectively identified and addressed. Existing methods for strategy selection are either imprecise or require significant technical expertise and resources, undermining their utility. This article outlines refinements to causal pathway diagrams (CPDs), a method for articulating the causal process through which implementation strategies work and offers illustrations of their use. 

Method:  CPDs are a visualization tool to represent an implementation strategy, its mechanism(s) (i.e., the processes through which a strategy is thought to operate), determinants it is intended to address, factors that may impede or facilitate its effectiveness, and the series of outcomes that should be expected if the strategy is operating as intended. We offer principles for constructing CPDs and describe their key functions. 

Results: Applications of the CPD method by study teams from two National Institutes of Health-funded Implementation Science Centers and a research grant are presented. These include the use of CPDs to (a) match implementation strategies to determinants, (b) understand the conditions under which an implementation strategy works, and (c) develop causal theories of implementation strategies.

Conclusions: CPDs offer a novel method for implementers to select, understand, and improve the effectiveness of implementation strategies. They make explicit theoretical assumptions about strategy operation while supporting practical planning. Early applications have led to method refinements and guidance for the field.

Friendship Formation in an Enforced Online Regime: Findings from a U.S. University Under COVID

Proceedings of the ACM on Human-Computer Interaction, April 2024

Soyoung Lee, Kentaro Toyama

Friendships are a key element of mental health, yet modern life increasingly involves "enforced online regimes," which can inhibit friendship formation. One example is provided by residential university students under COVID-19. Through interviews with 17 graduate students at a U.S. university, we investigate how new friendships were made and maintained under the pandemic. While some of our individual findings echo previous work with online social interaction, our analysis reveals a novel 7-phase friendship formation process that extends Levinger & Snoek's classic pair-relatedness theory. The model enables pinpoint diagnoses. For our participants, three specific phases were blocked -- Physical Awareness (apprehension of another's physical characteristics); Personal Contact (exchange of personal information); and Ongoing Mutuality (repeat interactions to build friendship). The model also explains divergent results under similar but different situations (e.g., residential students under COVID eventually made friends, but students of purely online courses do not), and enables targeted recommendations.

The Online Identity Help Center: Designing and Developing a Content Moderation Policy Resource for Marginalized Social Media Users

Proceedings of the ACM on Human Computer Interaction, April 2024

Samuel Mayworm, Shannon Li, Hibby Thach, Daniel Delmonaco, Christian Paneda, Andrea Wegner, Oliver L. Haimson

Marginalized social media users struggle to navigate inequitable content moderation they experience online. We developed the Online Identity Help Center (OIHC) to confront this challenge by providing information on social media users’ rights, summarizing platforms’ policies, and providing instructions to appeal moderation decisions. We discuss our findings from interviews (n = 24) and surveys (n = 75) which informed the OIHC’s design, along with interviews about and usability tests of the site (n = 12). We found that the OIHC’s resources made it easier for participants to understand platforms’ policies and access appeal resources. Participants expressed increased willingness to read platforms’ policies after reading the OIHC’s summarized versions, but expressed mistrust of platforms after reading them. We discuss the study’s implications, such as the benefits of providing summarized policies to encourage digital literacy, and how doing so may enable users to express skepticism of platforms’ policies after reading them. 

A dataset for measuring the impact of research data and their curation

Scientific Data, May 2024

Libby Hemphill, Andrea Thomer, Sara Lafia, Lizhou Fan, David Bleckley, Elizabeth Moss

Science funders, publishers, and data archives make decisions about how to responsibly allocate resources to maximize the reuse potential of research data. This paper introduces a dataset developed to measure the impact of archival and data curation decisions on data reuse. The dataset describes 10,605 social science research datasets, their curation histories, and reuse contexts in 94,755 publications that cover 59 years from 1963 to 2022. The dataset was constructed from study-level metadata, citing publications, and curation records available through the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. The dataset includes information about study-level attributes (e.g., PIs, funders, subject terms); usage statistics (e.g., downloads, citations); archiving decisions (e.g., curation activities, data transformations); and bibliometric attributes (e.g., journals, authors) for citing publications. This dataset provides information on factors that contribute to long-term data reuse, which can inform the design of effective evidence-based recommendations to support high-impact research data curation decisions.

Interpretability Gone Bad: The Role of Bounded Rationality in How Practitioners Understand Machine Learning

Proceedings of the ACM on Human-Computer Interaction, April 2024 

Harmanpreet Kaur, Matthew R. Conrad, Davis Rule, Cliff Lampe, Eric Gilbert

While interpretability tools are intended to help people better understand machine learning (ML), we find that they can, in fact, impair understanding. This paper presents a pre-registered, controlled experiment showing that ML practitioners (N=119) spent 5x less time on task, and were 17% less accurate about the data and model, when given access to interpretability tools. We present bounded rationality as the theoretical reason behind these findings. Bounded rationality presumes human departures from perfect rationality, and it is often effectuated by satisficing, i.e., an inclination towards "good enough" understanding. Adding interactive elements---a strategy often employed to promote deliberative thinking and engagement, and tested in our experiment---also does not help. We discuss implications for interpretability designers and researchers related to how cognitive and contextual factors can affect the effectiveness of interpretability tool use.

Exit Ripple Effects: Understanding the Disruption of Socialization Networks Following Employee Departures

WWW’24, Proceedings of the ACM on Web Conference, May 2024

David Gamba, Yulin Yu, Yuan Yuan, Grant Schoenebeck, Daniel M. Romero

Amidst growing uncertainty and frequent restructurings, the impacts of employee exits are becoming one of the central concerns for organizations. Using rich communication data from a large holding company, we examine the effects of employee departures on socialization networks among the remaining coworkers. Specifically, we investigate how network metrics change among people who historically interacted with departing employees. We find evidence of "breakdown" in communication among the remaining coworkers, who tend to become less connected with fewer interactions after their coworkers' departure. This effect appears to be moderated by both external factors, such as periods of high organizational stress, and internal factors, such as the characteristics of the departing employee. At the external level, periods of high stress correspond to greater communication breakdown; at the internal level, however, we find patterns suggesting individuals may end up better positioned in their networks after a network neighbor's departure. Overall, our study provides critical insights into managing workforce changes and preserving communication dynamics in the face of employee exits.
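One simple way to see the kind of communication "breakdown" the abstract describes is to compare the density of the subgraph induced on a departing employee's former contacts before and after the exit. The snapshots below are toy data for illustration; the paper works with richer longitudinal network metrics:

```python
def density(nodes, edges):
    """Density of the subgraph induced on `nodes`: observed / possible edges."""
    nodes = set(nodes)
    within = [e for e in edges if set(e) <= nodes]
    possible = len(nodes) * (len(nodes) - 1) / 2
    return len(within) / possible if possible else 0.0

# Hypothetical communication ties among the departing employee's contacts,
# in snapshots taken before and after the departure.
contacts = {"ana", "bo", "carla", "dev"}
before = [("ana", "bo"), ("ana", "carla"), ("bo", "carla"),
          ("carla", "dev"), ("bo", "dev")]
after = [("ana", "bo"), ("carla", "dev")]
print(density(contacts, before), density(contacts, after))  # connectivity drops
```

A drop in this density among the remaining coworkers is one concrete signature of the "exit ripple effect."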

Unfulfilled Promises of Child Safety and Privacy: Portrayals and Use of Children in Smart Home Marketing

Proceedings of the ACM on Human-Computer Interaction, April 2024 

Kaiwen Sun, Jingjie Li, Yixin Zou, Jenny Radesky, Christopher Brooks, Florian Schaub

Smart home technologies are making their way into families. Parents' and children's shared use of smart home technologies has received growing attention in CSCW and related research communities. Families and children are also frequently featured as target audiences in smart home product marketing. However, there is limited knowledge of how exactly children and family interactions are portrayed in smart home product marketing, and to what extent those portrayals align with the actual consideration of children and families in product features and resources for child safety and privacy. We conducted a content analysis of product websites and online resources of 102 smart home products, as these materials constitute a main marketing channel and information source about products for consumers. We found that despite featuring children in smart home marketing, most analyzed product websites did not mention child safety features and lacked sufficient information on how children's data is collected and used. Specifically, our findings highlight misalignments in three aspects: (1) children are depicted as users of smart home products but there are insufficient child-friendly product features; (2) harmonious child-product co-presence is portrayed but potential child safety issues are neglected; and (3) children are shown as the subject of monitoring and datafication but there is limited information on child data collection and use. We discuss how parent-child relationships and parenting may be negatively impacted by such marketing depictions, and we provide design and policy recommendations for better incorporating child safety and privacy considerations into smart home products.

Low Mileage, High Fidelity: Evaluating Hypergraph Expansion Methods by Quantifying the Information Loss

WWW’24: Proceedings of the ACM Web Conference, May 2024

David Y. Kang, Qiaozhu Mei, Sang-Wook Kim

In this paper, we first define the information loss that occurs in hypergraph expansion and then propose a novel framework, named MILEAGE, to evaluate hypergraph expansion methods by measuring their degree of information loss. MILEAGE employs the following four steps: (1) expanding a hypergraph; (2) performing unsupervised representation learning on the expanded graph; (3) reconstructing a hypergraph based on the obtained vector representations; and (4) measuring the MILEAGE-score (i.e., mileage) by comparing the reconstructed and the original hypergraphs. To demonstrate the usefulness of MILEAGE, we conduct experiments via downstream tasks on three levels (i.e., node, hyperedge, and hypergraph): node classification, hyperedge prediction, and hypergraph classification on eight real-world hypergraph datasets. Through these extensive experiments, we observe that information loss through hypergraph expansion has a negative impact on downstream tasks, and that MILEAGE can effectively evaluate hypergraph expansion methods through their information loss and recommend a new method that resolves the problems of existing ones.
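As a toy illustration of the information loss the paper formalizes (not the MILEAGE framework itself, which additionally involves representation learning and reconstruction), the sketch below shows how the common clique-expansion method erases the distinction between two different hypergraphs; `naive_loss_proxy` is a hypothetical stand-in for a loss measure, not the paper's MILEAGE-score.

```python
from itertools import combinations

def clique_expand(hyperedges):
    """Expand a hypergraph into an ordinary graph by connecting every
    pair of nodes that co-occur in some hyperedge (clique expansion)."""
    edges = set()
    for he in hyperedges:
        edges.update(combinations(sorted(he), 2))
    return edges

def naive_loss_proxy(hyperedges):
    """Crude stand-in for a loss score (NOT the paper's MILEAGE-score):
    the fraction of hyperedges with more than two nodes, since only those
    can be confused with combinations of pairwise edges after expansion."""
    return sum(len(he) > 2 for he in hyperedges) / len(hyperedges)

# Two different hypergraphs that expand to the *same* graph:
h1 = [{1, 2, 3}]                  # one three-node hyperedge
h2 = [{1, 2}, {2, 3}, {1, 3}]     # three two-node hyperedges
assert clique_expand(h1) == clique_expand(h2)  # expansion cannot tell them apart
```

Because the expanded graphs are identical, no method operating on the expansion alone can recover which hypergraph produced it; MILEAGE quantifies exactly this kind of irrecoverability by round-tripping through reconstruction.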

Learning to Rewrite Prompts for Personalized Text Generation

WWW’24: Proceedings of the ACM Web Conference, May 2024

Cheng Li, Mingyang Zhang, Qiaozhu Mei, Weize Kong, Michael Bendersky

Facilitated by large language models (LLMs), personalized text generation has become a rapidly growing research direction. Most existing studies focus on designing specialized models for a particular domain, or they require fine-tuning the LLMs to generate personalized text. We consider a typical scenario in which the large language model, which generates personalized output, is frozen and can only be accessed through APIs. Under this constraint, all one can do is improve the input text (i.e., text prompts) sent to the LLM, a procedure that is usually done manually. In this paper, we propose a novel method to automatically revise prompts for personalized text generation. The proposed method takes the initial prompts generated by a state-of-the-art, multistage framework for personalized generation and rewrites a few critical components that summarize and synthesize the personal context. The prompt rewriter employs a training paradigm that chains together supervised learning (SL) and reinforcement learning (RL), where SL reduces the search space of RL and RL facilitates end-to-end training of the rewriter. Using datasets from three representative domains, we demonstrate that the rewritten prompts outperform both the original prompts and the prompts optimized via supervised learning or reinforcement learning alone. In-depth analysis of the rewritten prompts shows that they are not only human-readable, but also able to guide manual revision of prompts when there are limited resources to employ reinforcement learning to train the prompt rewriter, or when it is costly to deploy an automatic prompt rewriter for inference.

Opportunities for Incorporating Intersectionality into Biomedical Informatics

Journal of Biomedical Informatics, June 2024 

Oliver J. Bear Don’t Walk IV, Amandalynne Paullada, Avery Everhart, Reggie Casanova-Perez, Trevor Cohen, Tiffany Veinot

Many approaches in biomedical informatics (BMI) rely on the ability to define, gather, and manipulate biomedical data to support health through a cyclical research-practice lifecycle. Researchers within this field are often fortunate to work closely with healthcare and public health systems to influence data generation and capture and have access to a vast amount of biomedical data. Many informaticists also have the expertise to engage with stakeholders, develop new methods and applications, and influence policy. However, research and policy that explicitly seeks to address the systemic drivers of health would more effectively support health. Intersectionality is a theoretical framework that can facilitate such research. It holds that individual human experiences reflect larger socio-structural level systems of privilege and oppression, and cannot be truly understood if these systems are examined in isolation. Intersectionality explicitly accounts for the interrelated nature of systems of privilege and oppression, providing a lens through which to examine and challenge inequities. In this paper, we propose intersectionality as an intervention into how we conduct BMI research. We begin by discussing intersectionality’s history and core principles as they apply to BMI. We then elaborate on the potential for intersectionality to stimulate BMI research. Specifically, we posit that our efforts in BMI to improve health should address intersectionality’s five key considerations: (1) systems of privilege and oppression that shape health; (2) the interrelated nature of upstream health drivers; (3) the nuances of health outcomes within groups; (4) the problematic and power-laden nature of categories that we assign to people in research and in society; and (5) research to inform and support social change.

Pre-prints, Working Papers, Articles, Reports, Workshops and Talks

Keynote: Put Accessibility into Practice: Experience Design for the Edge Case 

HighEdWeb 2024 Michigan Regional Conference, May 2024

Lija Hogan

Accessibility, often reduced to WCAG 2.2 compliance, is frequently the last checkbox we tick before we launch a website. That means that we lose an opportunity to be intentional about how we design the digital experience for people who are evaluating, applying and attending colleges and universities. By extension, it also means that we miss an opportunity to broaden the scope of work to address people's needs more effectively and coordinate conversations about how class materials, processes and services align to ensure that all learners can access the environments we offer equitably. 

Designing for edge cases often improves the experience for everyone; keeping in mind the unique needs of people who live with disabilities forces us to be more creative in our problem-solving. This talk will focus on strategic and tactical recommendations that you can use to improve your approach to accessibility across campus: 

  • Discover how AI can be used to complement accessibility work. 
  • Learn best practices built on years of experience in experience research, design and education around marketing and supporting equitable learning environments. 
  • Take away recommendations around how you can connect with teams to build momentum to ensure the learning experience works for everyone.

Crowdsourcing public attitudes toward local services through the lens of Google Maps reviews: An urban density-based perspective

arXiv, April 2024 

Lingyao Li, Songhua Hu, Atiyya Shaw, Libby Hemphill

Understanding how urban density impacts public perceptions of urban services is important for informing livable, accessible, and equitable urban planning. Conventional methods such as surveys are limited by their sampling scope, time efficiency, and expense. On the other hand, crowdsourcing through online platforms presents an opportunity for decision-makers to tap into a user-generated source of information that is widely available and cost-effective. To demonstrate such potential, this study uses Google Maps reviews for 23,906 points of interest (POIs) in Atlanta, Georgia. Next, we use the Bidirectional Encoder Representations from Transformers (BERT) model to classify reviewers’ attitudes toward urban density and the Robustly Optimized BERT approach (RoBERTa) to compute the reviews’ sentiment. Finally, a partial least squares (PLS) regression is fitted to examine the relationships between average sentiment and socio-spatial factors. The findings reveal areas in Atlanta with predominantly negative sentiments toward urban density and highlight the variation in sentiment distribution across different POIs. Further, the regression analysis reveals that minority and low-income communities often express more negative sentiments, and higher land use density exacerbates such negativity. This study introduces a novel data source and methodological framework that can be easily adapted to different regions, offering useful insights into public sentiment toward the built environment and shedding light on how planning policies can be designed to handle related challenges.
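The paper's pipeline relies on fine-tuned BERT-family models; as a minimal sketch of just the aggregation step between review-level sentiment and the POI-level averages that a regression would be fitted on, the toy below substitutes a hypothetical keyword lexicon for RoBERTa. Every name here (`toy_sentiment`, the lexicon) is an illustrative assumption, not the authors' method.

```python
from statistics import mean

# Hypothetical stand-in for a learned sentiment model: score each
# review in [-1, 1] with a trivial keyword lexicon (assumed words).
POSITIVE = {"quiet", "spacious", "walkable", "convenient"}
NEGATIVE = {"crowded", "noisy", "congested", "cramped"}

def toy_sentiment(review):
    """Score a review by the balance of positive vs. negative keywords."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

def mean_sentiment_by_poi(reviews):
    """Aggregate review-level scores into per-POI averages, the unit
    on which a downstream regression against socio-spatial factors
    would be fitted."""
    by_poi = {}
    for poi, text in reviews:
        by_poi.setdefault(poi, []).append(toy_sentiment(text))
    return {poi: mean(scores) for poi, scores in by_poi.items()}
```

A call like `mean_sentiment_by_poi([("park", "quiet and walkable"), ("park", "crowded")])` averages the two review scores into a single value for the "park" POI.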

Global News Synchrony and Diversity During the Start of the COVID-19 Pandemic

arXiv, May 2024 

Xi Chen, Scott A. Hale, David Jurgens, Mattia Samory, Ethan Zuckerman, Przemyslaw A. Grabowicz

News coverage profoundly affects how countries and individuals behave in international relations. Yet, we have little empirical evidence of how news coverage varies across countries. To enable studies of global news coverage, we develop an efficient computational methodology that comprises three components: (i) a transformer model to estimate multilingual news similarity; (ii) a global event identification system that clusters news based on a similarity network of news articles; and (iii) measures of news synchrony across countries and news diversity within a country, based on country-specific distributions of news coverage of the global events. Each component achieves state-of-the-art performance, scaling seamlessly to massive datasets of millions of news articles. 

We apply the methodology to 60 million news articles published globally between January 1 and June 30, 2020, across 124 countries and 10 languages, detecting 4357 news events. We identify the factors explaining diversity and synchrony of news coverage across countries. Our study reveals that news media tend to cover a more diverse set of events in countries with larger Internet penetration, more official languages, larger religious diversity, higher economic inequality, and larger populations. Coverage of news events is more synchronized between countries that not only actively participate in commercial and political relations—such as pairs of countries with high bilateral trade volume, and countries that belong to the NATO military alliance or BRICS group of major emerging economies—but also countries that share certain traits: an official language, high GDP, and high democracy indices.
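The event-identification step (component ii) can be caricatured in a few lines: treat each article as a vector, link pairs whose cosine similarity clears a threshold, and read the connected components of the resulting network as candidate events. This is a toy with hand-picked vectors and an assumed threshold, not the paper's transformer-based multilingual similarity model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_events(embeddings, threshold=0.8):
    """Link articles whose similarity >= threshold, then return the
    connected components (candidate 'events') via union-find."""
    n = len(embeddings)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

With three articles embedded as `[[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]`, the first two land in one component and the third stands alone, i.e., two candidate events. At the paper's scale (60 million articles), the all-pairs loop would of course be replaced by approximate nearest-neighbor search.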

Modeling Empathetic Alignment in Conversation

arXiv, May 2024 

Jiamin Yang, David Jurgens 

Empathy requires perspective-taking: empathetic responses require a person to reason about what another has experienced and communicate that understanding in language. However, most NLP approaches to empathy do not explicitly model this alignment process. Here, we introduce a new approach to recognizing alignment in empathetic speech, grounded in Appraisal Theory. We introduce a new dataset of over 9.2K span-level annotations of different types of appraisals of a person’s experience and over 3K empathetic alignments between a speaker’s and observer’s speech. Through computational experiments, we show that these appraisals and alignments can be accurately recognized. In experiments on over 9.2M Reddit conversations, we find that appraisals capture meaningful groupings of behavior but that most responses have minimal alignment. However, we find that mental health professionals engage with substantially more empathetic alignment.

The Sociotechnical Stack: Opportunities for Social Computing Research in Non-consensual Intimate Media

arXiv, May 2024 

Li Qiwei, Allison Mcdonald, Oliver L. Haimson, Sarita Schoenebeck, Eric Gilbert

Non-consensual intimate media (NCIM) involves sharing intimate content without the depicted person's consent, including "revenge porn" and sexually explicit deepfakes. While NCIM has received attention in legal, psychological, and communication fields over the past decade, it is not sufficiently addressed in computing scholarship. This paper addresses this gap by linking NCIM harms to the specific technological components that facilitate them. We introduce the sociotechnical stack, a conceptual framework designed to map the technical stack to its corresponding social impacts. The sociotechnical stack allows us to analyze sociotechnical problems like NCIM, and points toward opportunities for computing research. We propose a research roadmap for computing and social computing communities to deter NCIM perpetration and support victim-survivors through building and rebuilding technologies.

The Call for Socially Aware Language Technologies

arXiv, May 2024 

Diyi Yang, Dirk Hovy, David Jurgens, Barbara Plank 

Language technologies have made enormous progress, especially with the introduction of large language models (LLMs). On traditional tasks such as machine translation and sentiment analysis, these models perform at a near-human level. These advances can, however, exacerbate a variety of issues that models have traditionally struggled with, such as bias, evaluation, and risks. In this position paper, we argue that many of these issues share a common core: a lack of awareness of the factors, context, and implications of the social environment in which NLP operates, which we call social awareness. While NLP is getting better at solving the formal linguistic aspects, limited progress has been made in adding the social awareness required for language applications to work in all situations for all users. Integrating social awareness into NLP models will make applications more natural, helpful, and safe, and will open up new possibilities. Thus, we argue that substantial challenges remain for NLP to develop social awareness and that we are just at the beginning of a new era for the field.

A scoping review of using Large Language Models (LLMs) to investigate Electronic Health Records (EHRs)

arXiv, May 2024

Lingyao Li, Jiayan Zhou, Zhenxiang Gao, Wenyue Hua, Lizhou Fan, Huizi Yu, Loni Hagen, Yongfeng Zhang, Themistocles L. Assimes, Libby Hemphill, Siyuan Ma

Electronic Health Records (EHRs) play an important role in the healthcare system. However, their complexity and vast volume pose significant challenges to data interpretation and analysis. Recent advancements in Artificial Intelligence (AI), particularly the development of Large Language Models (LLMs), open up new opportunities for researchers in this domain. Although prior studies have demonstrated their potential in language understanding and processing in the context of EHRs, a comprehensive scoping review is lacking. This study aims to bridge this research gap by conducting a scoping review based on 329 related papers collected from OpenAlex. We first performed a bibliometric analysis to examine paper trends, model applications, and collaboration networks. Next, we manually reviewed and categorized each paper into one of the seven identified topics: named entity recognition, information extraction, text similarity, text summarization, text classification, dialogue system, and diagnosis and prediction. For each topic, we discussed the unique capabilities of LLMs, such as their ability to understand context, capture semantic relations, and generate human-like text. Finally, we highlighted several implications for researchers from the perspectives of data resources, prompt engineering, fine-tuning, performance measures, and ethical concerns. In conclusion, this study provides valuable insights into the potential of LLMs to transform EHR research and discusses their applications and ethical considerations.

HCC Is All You Need: Alignment—The Sensible Kind Anyway—Is Just Human-Centered Computing

arXiv, April 2024

Eric Gilbert

The argument of this very short paper is that the problem academic AI has termed “alignment” is just a type of Human-Centered Computing (HCC). HCC is an existing academic field. 

The term “alignment” rose to prominence among AGI and crypto researchers/enthusiasts/grifters [2]—and has been linked with eugenics traditions. Nevertheless, it has jumped into academic discourse and become an umbrella term for creating AI systems that respect “human intentions and values.” I address this latter, arguably more sensible kind of alignment currently happening in academia and industry. Here, alignment often quickly becomes specific technical problems, such as how to learn reward functions that correspond to what users want (e.g., [28]), or how to construct models that can explain themselves to people (e.g., [30]). However, the high-level goal is broader: to bring a complex technology into concert with what people want it to do. 

This is literally what HCC is. There are whole journals and conferences (e.g., CHI, CSCW, UIST, FAccT). There is a whole program at the U.S. National Science Foundation.

For over 40 years, HCC has struggled with, and made progress on, “aligning” different technologies to people. Key issues include: Who, exactly, are we talking about (e.g., [1, 4, 5, 11, 24, 38, 42])? How do we know what they want (e.g., [8, 13, 16, 21, 26, 37, 41])? How stable is what they want to do with technology (e.g., [39])? Can they help us design it (e.g., [19, 20, 25, 31, 34])? What are the different ways a technology can be designed (e.g., [6, 33])? How do we know if it’s good for people (e.g., [9, 32])? What are the limits of design- and tech-centric approaches (e.g., [15, 18, 35, 43])? Is it possible to avoid baking systemic oppression into technology (e.g., [3, 7, 10, 14, 22, 27, 29, 36, 40])? 

Casting alignment as HCC invites it to draw upon the considerable theories, methods, and findings of HCC, and the fields from which it borrows (e.g., STS, Communication, Ethics, etc.)—instead of re-inventing them under new names. Perhaps we don’t need the word “alignment” at all. 

One vs. Many: Comprehending Accurate Information from Multiple Erroneous and Inconsistent AI Generations

arXiv, May 2024

Yoonjoo Lee, Kihoon Son, Tae Soo Kim, Jisu Kim, John Joon Young Chung, Eytan Adar, Juho Kim

As Large Language Models (LLMs) are nondeterministic, the same input can generate different outputs, some of which may be incorrect or hallucinated. If run again, the LLM may correct itself and produce the correct answer. Unfortunately, most LLM-powered systems resort to single results which, correct or not, users accept. Having the LLM produce multiple outputs may help identify disagreements or alternatives. However, it is not obvious how the user will interpret conflicts or inconsistencies. To this end, we investigate how users perceive the AI model and comprehend the generated information when they receive multiple, potentially inconsistent, outputs. Through a preliminary study, we identified five types of output inconsistencies. Based on these categories, we conducted a study (N = 252) in which participants were given one or more LLM-generated passages in answer to an information-seeking question. We found that inconsistency within multiple LLM-generated outputs lowered the participants’ perceived AI capacity, while also increasing their comprehension of the given information. Specifically, we observed that this positive effect of inconsistencies was most significant for participants who read two passages, compared to those who read three. Based on these findings, we present design implications suggesting that, instead of regarding LLM output inconsistencies as a drawback, systems can reveal potential inconsistencies to transparently indicate the limitations of these models and promote critical LLM usage.

Feminist Interaction Techniques: Deterring Non-Consensual Screenshots with Interaction Techniques

arXiv, April 2024

Li Qiwei, Francesca Lameiro, Shefali Patel, Cristi Isaula-Reyes, Eytan Adar, Eric Gilbert, Sarita Schoenebeck

Non-consensual Intimate Media (NCIM) refers to the distribution of sexual or intimate content without consent. NCIM is common and causes significant emotional, financial, and reputational harm. We developed Hands-Off, an interaction technique for messaging applications that deters non-consensual screenshots. Hands-Off requires recipients to perform a hand gesture in the air, above the device, to unlock media—which makes simultaneous screenshotting difficult. A lab study shows that Hands-Off gestures are easy to perform and reduce non-consensual screenshots by 67%. We conclude by generalizing this approach and introducing the idea of Feminist Interaction Techniques (FIT), interaction techniques that encode feminist values and speak to societal problems, and by reflecting on FIT’s opportunities and limitations.


Check out more research from UMSI faculty and PhD students by visiting our research roundup and subscribing today



— Noor Hindi, UMSI public relations specialist