University of Michigan School of Information


UMSI researchers recognized with best paper and honorable mention awards at 2025 CHI conference

Monday, 04/21/2025

By Noor Hindi

University of Michigan School of Information researchers have earned two Best Paper awards and seven Honorable Mention designations at the 2025 ACM CHI Conference on Human Factors in Computing Systems.

Honorable Mention awards go to the top 5% of accepted papers at the annual conference, and Best Paper awards to the top 1%.

This year’s conference will take place in Yokohama, Japan. 

To see a full list of accepted papers and workshop presentations by UMSI researchers, check out our CHI research roundup.

Best Paper 

Creative Writers’ Attitudes on Writing as Training Data for Large Language Models

CHI 2025, Mon, 28 Apr | 5:32 PM - 5:44 PM

Katy Ilonka Gero, Meera Desai, Carly Schnitzler, Nayun Eom, Jack Cushman, Elena L. Glassman

The use of creative writing as training data for large language models (LLMs) is highly contentious and many writers have expressed outrage at the use of their work without consent or compensation. In this paper, we seek to understand how creative writers reason about the real or hypothetical use of their writing as training data. We interviewed 33 writers with variation across genre, method of publishing, degree of professionalization, and attitudes toward and engagement with LLMs. We report on core principles that writers express (support of the creative chain, respect for writers and writing, and the human element of creativity) and how these principles can be at odds with their realistic expectations of the world (a lack of control, industry-scale impacts, and interpretation of scale). Collectively these findings demonstrate that writers have a nuanced understanding of LLMs and are more concerned with power imbalances than the technology itself.


Placebo Effect of Control Settings in Feeds Are Not Always Strong

CHI 2025, Wed, 30 Apr | 2:58 PM - 3:10 PM

Silas Hsu, Vinay Koshy, Kristen Vaccaro, Christian Sandvig, Karrie Karahalios

Recent work has catalogued a variety of "dark" design patterns, including deception, that undermine user intent. We focus on deceptive "placebo" control settings for social media that do not work. While prior work reported that placebo controls increase feed satisfaction, we add to this body of knowledge by addressing possible placebo mechanisms, and potential side effects and confounds from the original study. Knowledge of these placebo mechanisms can help predict potential harms to users and prioritize the most problematic cases for regulators to pursue. In an online experiment, participants (N=762) browsed a Twitter feed with no control setting, a working control setting, or a placebo control setting. We found a placebo effect much smaller in magnitude than originally reported. This finding adds another objection to use of placebo controls in social media settings, while our methodology offers insights into finding confounds in placebo experiments in HCI.

Honorable Mention

Micro-narratives: A Scalable Method for Eliciting Stories of People’s Lived Experience

CHI 2025, Mon, 28 Apr | 5:32 PM - 5:44 PM

Amira Skeggs, Ashish Mehta, Valerie Yap, Seray B Ibrahim, Charla “Aubrey” Rhodes, James J. Gross, Sean A. Munson, Predrag Klasnja, Amy Orben, Petr Slovak

Engaging with people's lived experiences is foundational for HCI research and design. This paper introduces a novel narrative elicitation method to empower people to easily articulate ‘micro-narratives’ emerging from their lived experiences, irrespective of their writing ability or background. Our approach aims to enable at-scale collection of rich, co-created datasets that highlight target populations' voices with minimal participant burden, while precisely addressing specific research questions. To pilot this idea, and test its feasibility, we: (i) developed an AI-powered prototype, which leverages LLM-chaining to scaffold the cognitive steps necessary for users’ narrative articulation; (ii) deployed it in three mixed-methods studies involving over 380 users; and (iii) consulted with established academics as well as C-level staff at (inter)national non-profits to map out potential applications. Both qualitative and quantitative findings show the acceptability and promise of the micro-narrative method, while also identifying the ethical and safeguarding considerations necessary for any at-scale deployments. 


Plurals: A System for Guiding LLMs via Simulated Social Ensembles 

CHI 2025, Tue, 29 Apr | 11:34 AM - 11:46 AM

Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak

Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a "view from nowhere" but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by deliberative democracy, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI.


Exploring the Design Space of Privacy-Driven Adaptation Techniques for Future Augmented Reality Interfaces

CHI 2025, Tue, 29 Apr | 12:10 PM - 12:22 PM

Shwetha Rajaram, Macarena Peralta, Janet G Johnson, Michael Nebeling

Modern augmented reality (AR) devices with advanced display and sensing capabilities pose significant privacy risks to users and bystanders. While previous context-aware adaptations focused on usability and ergonomics, we explore the design space of privacy-driven adaptations that allow users to meet their dynamic needs. These techniques offer granular control over AR sensing capabilities across various AR input, output, and interaction modalities, aiming to minimize degradations to the user experience. Through an elicitation study with 10 AR researchers, we derive 62 privacy-focused adaptation techniques that preserve key AR functionalities and classify them into system-driven, user-driven, and mixed-initiative approaches to create an adaptation catalog. We also contribute a visualization tool that helps AR developers navigate the design space, validating its effectiveness in design workshops with six AR developers. Our findings indicate that the tool allowed developers to discover new techniques, evaluate tradeoffs, and make informed decisions that balance usability and privacy concerns in AR design.


ShamAIn: Designing Superior Conversational AI Inspired by Shamanism

CHI 2025, Tue, 29 Apr | 3:10 PM - 3:22 PM

Hyungjun Cho, Jiyeon Amy Seo, Jiwon Lee, Chang-Min Kim, Tek-Jin Nam

This paper presents the design process, outcomes, and installation of ShamAIn, a multi-modal embodiment of conversational AI inspired by the beliefs and symbols of Korean shamanism. Adopting a research-through-design approach, we offer an alternative perspective on conversational AI design, emphasizing perceived superiority. ShamAIn was developed based on strategies derived from investigating people's experiences with shamanistic counseling and rituals. We deployed the system in an exhibition room for six weeks, during which 20 participants made multiple visits to engage with ShamAIn. Through subsequent in-depth interviews, we found that participants felt a sense of awe toward ShamAIn and engaged in interactions with humility and respect. Our participants disclosed personal and profound concerns, reflecting deeply on the responses they received. Consequently, they relied on ShamAIn and formed relationships in which they received support. In the discussion, we present the design implications of conversational AI perceived as superior to humans, along with the ethical considerations involved in designing such AI.


“A Bridge to Nowhere”: A Healthcare Case Study for Non-Reformist Design

CHI 2025, Wed, 30 Apr | 3:10 PM - 3:22 PM

Linda Huber 

In the face of intensified datafication and automation in public-sector industries, frameworks like design justice and the feminist practice of refusal help to identify and mitigate structural harm and challenge inequities reproduced in digitized infrastructures. This paper applies those frameworks to emerging efforts across the U.S. healthcare industry to automate prior authorization, a process whereby insurance companies determine whether a treatment or service is “medically necessary” before agreeing to cover it. Federal regulatory interventions turn to datafication and automation to reduce the harms of this widely unpopular process shown to delay vital treatments and create immense administrative burden for healthcare providers and patients. This paper explores emerging prior authorization reforms as a case study, applying the frameworks of design justice and refusal to highlight the inherent conservatism of interventions oriented towards improving the user experience of extractive systems. I further explore how the abolitionist framework of non-reformist reform helps to clarify alternative interventions that would mitigate the harms of prior authorization in ways that do not reproduce or extend the power of insurance companies. I propose a set of four tenets for non-reformist design to mitigate structural harms and advance design justice in a broad set of domains.


Designing Daily Supports for Parent-Child Conversations about Emotion: Ecological Momentary Assessment as Intervention

CHI 2025, Wed, 30 Apr | 5:32 PM - 5:44 PM

Seray B Ibrahim, Predrag Klasnja, James J. Gross, Petr Slovak 

Parental emotion coaching approaches that advocate for noticing and validating child emotions can greatly impact children's regulatory abilities. However, in daily life, parents often struggle to apply emotion coaching strategies that they access through parenting programmes or online help, suggesting a need for in situ support. This paper explores a potential new avenue for providing such support. We undertook conceptual work to develop a set of emotion-focused reflective questions that could increase parents’ attention to child emotions and delivered these as daily ecological momentary assessments (EMAs). We investigated the perceived impact of the approach through a 2-week online trial (n=33) and then co-designed a child-facing component with parents through a 4-week asynchronous remote community study (n=15). Our paper contributes (1) conceptual insights on designing a potential novel intervention approach, (2) empirical insights on its acceptability and perceived impacts for parents, and (3) design implications for applying the approach to wider psychological constructs.


Development of the Critical Reflection and Agency in Computing Index

CHI 2025, Thu, 1 May | 9:12 AM - 9:24 AM

Aadarsh Padiyath, Mark Guzdial, Barbara Ericson

As computing's societal impact grows, so does the need for computing students to recognize and address the ethical and sociotechnical implications of their work. While there are efforts to integrate ethics into computing curricula, we lack a standardized tool to measure those efforts, specifically, students' attitudes towards ethical reflection and their ability to effect change. This paper introduces the novel framework of Critically Conscious Computing and reports on the development and content validation of the Critical Reflection and Agency in Computing Index, a novel instrument designed to assess undergraduate computing students' attitudes towards practicing critically conscious computing. The resulting index is a theoretically grounded, expert-reviewed tool to support research and practice in computing ethics education. This enables researchers and educators to gain insights into students' perspectives, inform the design of targeted ethics interventions, and measure the effectiveness of computing ethics education initiatives.

RELATED

Check out more research at UMSI by subscribing to our free research roundup newsletter