
University of Michigan School of Information



Robots, Blood Pressure and Data: UMSI Research Roundup

Wednesday, 06/14/2023

University of Michigan School of Information faculty and PhD students are creating and sharing knowledge that helps build a better world. Here are some of their recent publications.

"Would I Feel More Secure With a Robot?": Understanding Perceptions of Security Robots in Public Spaces

Proceedings of the ACM on Human-Computer Interaction, October 2023

Gabriela Marcu, Iris Lin, Brandon Williams, Lionel Peter Robert, Florian Schaub

Robots are increasingly deployed as security agents that assist law enforcement in spaces such as streets, parks, and shopping malls. Unfortunately, the deployment of security robots is not without problems and controversies. For example, the New York Police Department canceled its contract with Boston Dynamics after public backlash against its use of Digidog, an autonomous robotic dog that sparked fear among the public. However, it is unclear to what extent affected communities have been involved in the design and deployment of these robots. This is problematic because, without community input during design and deployment, security robots are unlikely to satisfy the concerns or safety needs of real communities. To gain deeper insight into people's perceptions of security robots, including both potential benefits and concerns, we conducted 17 semi-structured interviews addressing the following research questions: (RQ1) What characteristics do people ascribe to security robots? (RQ2) What expectations do people have about the function and role of security robots? (RQ3) What are people's attitudes toward the use of security robots? Our study offers several contributions to the existing literature on security robots.

Assessing the Impact of Context Inference Error and Partial Observability on RL Methods for Just-In-Time Adaptive Interventions

arXiv, May 2023

Karine Karine, Predrag Klasnja, Susan A. Murphy, Benjamin M. Marlin

Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community. JITAIs aim to provide the right type and amount of support by iteratively selecting a sequence of intervention options from a pre-defined set of components in response to each individual's time-varying state. In this work, we explore the application of reinforcement learning methods to the problem of learning intervention option selection policies. We study the effect of context inference error and partial observability on the ability to learn effective policies. Our results show that the propagation of uncertainty from context inferences is critical to improving intervention efficacy as context uncertainty increases, while policy gradient algorithms can provide remarkable robustness to partially observed behavioral state information.
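The interaction between context inference error and intervention efficacy can be illustrated with a toy simulation. The two-context setup, the epsilon-greedy learner, and the noise model below are illustrative assumptions for intuition only, not the paper's experimental design:

```python
import random

def simulate(context_noise, episodes=2000, seed=0):
    """Toy JITAI loop: reward arrives only when the chosen intervention
    matches the person's true context, but the agent acts on a possibly
    wrong context inference."""
    rng = random.Random(seed)
    # One learned value per (observed context, action) pair.
    q = {(c, a): 0.0 for c in (0, 1) for a in (0, 1)}
    n = {(c, a): 0 for c in (0, 1) for a in (0, 1)}
    total = 0.0
    for _ in range(episodes):
        true_context = rng.randint(0, 1)
        # Context inference error: observation flips with probability context_noise.
        observed = true_context if rng.random() > context_noise else 1 - true_context
        # Epsilon-greedy selection on the *observed* context.
        if rng.random() < 0.1:
            action = rng.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: q[(observed, a)])
        reward = 1.0 if action == true_context else 0.0
        n[(observed, action)] += 1
        q[(observed, action)] += (reward - q[(observed, action)]) / n[(observed, action)]
        total += reward
    return total / episodes

clean = simulate(context_noise=0.0)  # accurate inference: high average reward
noisy = simulate(context_noise=0.4)  # noisy inference: reward degrades toward chance
```

Even this crude sketch shows why a policy learned on mis-inferred context caps out near the context-inference accuracy, motivating the paper's focus on propagating context uncertainty into the learner.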

"I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models

Proceedings of FAccT, June 2023

Vinitha Gadiraju, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton, Robin Brewer

Large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward historically marginalized communities. While previous work has primarily focused on harms related to age and race, emerging research has shown that biases toward disabled communities exist. This study extends prior work exploring the existence of harms by identifying categories of LLM-perpetuated harms toward the disability community. We conducted 19 focus groups, during which 56 participants with disabilities probed a dialog model about disability and discussed and annotated its responses. Participants rarely characterized model outputs as blatantly offensive or toxic. Instead, participants used nuanced language to detail how the dialog model mirrored subtle yet harmful stereotypes they encountered in their lives and dominant media, e.g., inspiration porn and able-bodied saviors. Participants often implicated training data as a cause for these stereotypes and recommended training the model on diverse identities from disability-positive resources. Our discussion further explores representative data strategies to mitigate harm related to different communities through annotation co-design with ML researchers and developers.

Envisioning Equitable Speech Technologies for Black Older Adults

Proceedings of FAccT, June 2023

Robin Brewer, Christina N. Harrington, Courtney Heldreth

There is increasing concern that how researchers currently define and measure fairness is inadequate. Recent calls push to move beyond traditional concepts of fairness and consider related constructs through qualitative and community-based approaches, particularly for underrepresented communities most at-risk for AI harm. In one such context, previous research has identified that voice technologies are unfair due to racial and age disparities. This paper uses voice technologies as a case study to unpack how Black older adults value and envision fair and equitable AI systems. We conducted design workshops and interviews with 16 Black older adults, exploring how participants envisioned voice technologies that better understand cultural context and mitigate cultural dissonance. Our findings identify tensions among fairness, inclusion, and representation in voice technologies. This research raises questions about how and whether researchers can model cultural representation with large language models.

Care and Coordination in Algorithmic Systems: An Economies of Worth Approach

Proceedings of FAccT, June 2023

John Rudnik, Robin Brewer

Algorithmic decision-making has permeated health and care domains (e.g., automated diagnoses, fall detection, caregiver staffing). Researchers have raised concerns about how these algorithms are built and how they shape fair and ethical care practices. To investigate algorithm development and understand its impact on people who provide and coordinate care, we conducted a case study of a U.S.-based senior care network and platform. We interviewed 14 technologists, 9 paid caregivers, and 7 care coordinators to explore their interactions with the platform’s algorithms. We find that technologists draw on a multitude of moral frameworks to navigate complex and contradictory demands and expectations. Despite technologists’ espoused commitments to fairness, accountability, and transparency, the platform reassembles problematic aspects of care labor. By analyzing how technologists justify their work, the problems that they claim to solve, the solutions they present, and caregivers’ and coordinators’ experiences, we advance fairness research that focuses on agency and power asymmetries in algorithmic platforms. We (1) make an empirical contribution, revealing tensions when developing and implementing algorithms and (2) provide insight into the social processes that reproduce power asymmetries in algorithmic decision-making.

Voice Assistant Use in Long-Term Care

Proceedings of the ACM Conference on Conversational User Interfaces (CUI), June 2023

Bruna Oewel, Tawfiq Ammari, Robin Brewer

Research on voice assistants has primarily studied how people use them for informational needs, music requests, and to control electronic devices (e.g., IoT). Recent research suggests that people such as older adults want to use them to address social and relational needs, but there is little empirical evidence of how older adults currently engage in these behaviors. In this paper, we use a machine learning approach to analyze more than 600,000 queries that 456 older adults in assisted living communities made to Amazon Alexa devices over two years, classifying how older adults use voice assistants for social well-being purposes. We present empirical evidence showing how older adults engage in three primary relational behaviors with Alexa: (1) asking personal questions to "get to know" the assistant, (2) asking for advice, and (3) engaging with the voice assistant to alleviate stress. We use these findings to discuss ethical implications of voice assistant use in long-term care settings.
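A rule-based sketch conveys the kind of query categorization the study describes. The category names follow the three behaviors reported above, but the keyword rules are illustrative assumptions; the paper's own approach is a machine learning classifier, not keyword matching:

```python
# Hypothetical keyword rules for the three relational behaviors; a real
# classifier would be trained on labeled queries rather than hand-written rules.
RULES = {
    "personal_question": ("your name", "do you like", "how old are you"),
    "advice": ("should i", "what do you recommend", "advice"),
    "stress_relief": ("relax", "calm", "i'm lonely", "tell me a joke"),
}

def classify_query(query):
    """Return the first matching relational category, or None for
    non-relational queries (music, weather, device control, etc.)."""
    q = query.lower()
    for category, phrases in RULES.items():
        if any(phrase in q for phrase in phrases):
            return category
    return None

classify_query("Alexa, what do you recommend for dinner?")  # "advice"
classify_query("Alexa, play some jazz")                     # None
```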

The effect of conference presentations on the diffusion of ideas

arXiv, May 2023 

Misha Teplitskiy, Soya Park, Neil Thompson, David Karger

Conferences are ubiquitous in many industries, but how effectively they diffuse ideas has been debated, and the mechanisms of diffusion remain unclear. Conference attendees can adopt ideas from presentations they choose to see (direct effect) or see incidentally (serendipity effect). We quantify these direct and serendipitous effects of academic conference presentations on future citations by exploiting quasi-random scheduling conflicts. When multiple papers of interest to an attendee are presented at the same time, the person is less able to see them on average and, if seeing presentations is important, cites them less. We use data from Confer, a scheduling application deployed at 25 in-person computer science conferences that lets users Like papers and receive personalized schedules. Compared to timeslots with many conflicts, users cited Liked papers with no scheduling conflicts 52% more. Users also cited non-Liked papers in sessions with no conflicts 51% more, and this serendipitous diffusion accounted for 22% of the overall diffusion induced by presentations. The study shows that conference presentations stimulate substantial direct and serendipitous diffusion of ideas, and adds the analysis of scheduling conflicts to the toolkit of management scholars.

Counter-hegemonic AI: The Role of Artisanal Identity in the Design of Automation for a Liberated Economy

AI and the Future of Work, 2023

Matthew Garvin, Ron Eglash, Kwame Porter Robinson, Lionel Robert, Mark Guzdial, Audrey Bennett 

Transformative improvements require systemic change in an economy marked by extreme wealth inequality, stratified by geography, identity and other social markers. In this chapter, we seek to raise awareness in the technology, design and scientific communities of the long history of artisans—skilled, independent labor striving to keep a relatively unalienated workplace—that can offer a crucial resource to those interested in designing for a liberated economy. This history is a resource as a record of counter-hegemonic movements and identities organized around technology. But it is also a potential site for participatory design, solidarity design and other methods for co-developing innovative strategies by which automation can support the rise of an unalienated economy. Rather than simply a throwback to a romantic past, artisans offer a present-day space for understanding work as a liberated form of expression and a locus for technological innovation grounded in just and sustainable ways of life.

Searching for or reviewing evidence improves crowdworkers’ misinformation judgments and reduces partisan bias

Collective Intelligence, May 2023

Paul Resnick, Aljohara Alfayez, Jane Im, Eric Gilbert

Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no research condition, they were just asked to view the article and then render a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they were not asked to search, but instead to review links collected from workers in the individual research condition. Both research conditions reduced partisan disagreement in judgments. The individual research condition was most effective at producing alignment with journalists' assessments. In this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist's ratings.
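A minimal sketch of the panel-versus-expert comparison logic: average each panel's per-article scores, then measure how close the panel judgment lands to a held-out rater. The toy numbers and the mean-absolute-difference metric are illustrative assumptions; the paper's evaluation uses correlation with a held-out journalist's ratings:

```python
from statistics import mean

def panel_score(ratings):
    """Collapse individual raters' misleadingness scores into one panel judgment."""
    return mean(ratings)

def alignment(panel_scores, holdout_scores):
    """Mean absolute difference from the held-out rater; lower is better."""
    return mean(abs(p - h) for p, h in zip(panel_scores, holdout_scores))

# Toy per-article scores (1 = not misleading, 7 = very misleading).
crowd_panels = [[6, 5, 7, 6], [2, 1, 2, 3], [4, 5, 4, 4]]
expert_panel = [[7, 6, 5], [1, 2, 2], [5, 4, 4]]
holdout = [6, 2, 4]

crowd_alignment = alignment([panel_score(p) for p in crowd_panels], holdout)
expert_alignment = alignment([panel_score(p) for p in expert_panel], holdout)
```

Averaging cancels individual raters' idiosyncratic errors, which is why a sufficiently large crowd panel can match or beat a small expert panel on this kind of metric.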

“A Patchwork of Data Systems”: Quilting as an Analytic Lens and Stabilizing Practice for Knowledge Infrastructures

Journal of the Association for Information Science and Technology, May 2023

Andrea K. Thomer, Alexandria J. Rayburn

Museums and archives rely on databases and similar technologies to manage their collections, but even when tailor-made for memory institutions, databases require considerable adaptation to remain usable over long periods of time. To better understand how collection staff maintain and migrate databases over multiple years and decades, we talked to archivists from the US-based Archon User Collaborative and collection managers from the University of Michigan Research Museums. We found that collection staff use terms taken from quilting to describe database curation: they "tie" and "weave" a "patchwork of data systems" together. We extend their quilting metaphor as an analytical lens and show what can be gained through framing database work as a craft. We describe database curation as a process of creating a quilted infrastructure: a long-lived knowledge system that is sustained by the use of multiple "digital surfaces," a reliance on a community of practice, intergenerational transfer of "quilts," and the leveraging of invisibility to conduct work. We argue that this nonnormative mode of computing needs better support from both software developers and administrators. We also show that although the invisibility of craft practices offers practitioners independence, it can also increase their precarity.

Dementia and electronic health record phenotypes: a scoping review of available phenotypes and opportunities for future research

Journal of the American Medical Informatics Association, May 2023

Anne M Walling, Joshua Pevnick, Antonia V Bennett, VG Vinod Vydiswaran, Christine S Ritchie

Objective: We performed a scoping review of algorithms using electronic health record (EHR) data to identify patients with Alzheimer’s disease and related dementias (ADRD), to advance their use in research and clinical care.

Materials and Methods: Starting with a previous scoping review of EHR phenotypes, we performed a cumulative update (April 2020 through March 1, 2023) using PubMed, PheKB, and expert review with exclusive focus on ADRD identification. We included algorithms using EHR data alone or in combination with non-EHR data and characterized whether they identified patients at high risk of or with a current diagnosis of ADRD.

Results: For our cumulative focused update, we reviewed 271 titles meeting our search criteria, 49 abstracts, and 26 full text papers. We identified 8 articles from the original systematic review, 8 from our new search, and 4 recommended by an expert. We identified 20 papers describing 19 unique EHR phenotypes for ADRD: 7 algorithms identifying patients with diagnosed dementia and 12 algorithms identifying patients at high risk of dementia that prioritize sensitivity over specificity. Reference standards range from only using other EHR data to in-person cognitive screening.

Conclusion: A variety of EHR-based phenotypes are available for use in identifying populations with or at high-risk of developing ADRD. This review provides comparative detail to aid in choosing the best algorithm for research, clinical care, and population health projects based on the use case and available data. Future research may further improve the design and use of algorithms by considering EHR data provenance.
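The sensitivity/specificity trade-off the review highlights, between screening phenotypes that flag patients at high risk and phenotypes that confirm a diagnosis, can be made concrete with a small confusion-matrix calculation. The counts below are illustrative, not drawn from any reviewed phenotype:

```python
def sensitivity(tp, fn):
    # Of patients who truly have ADRD, what fraction does the phenotype flag?
    return tp / (tp + fn)

def specificity(tn, fp):
    # Of patients without ADRD, what fraction does the phenotype clear?
    return tn / (tn + fp)

# A high-risk screening phenotype: few missed cases, more false alarms.
screen_sens = sensitivity(tp=95, fn=5)     # 0.95
screen_spec = specificity(tn=700, fp=200)  # ~0.78

# A diagnosis-confirming phenotype: fewer false alarms, more missed cases.
dx_sens = sensitivity(tp=70, fn=30)        # 0.70
dx_spec = specificity(tn=880, fp=20)       # ~0.98
```

Which column to optimize depends on the use case the review emphasizes: casting a wide net for research recruitment tolerates false positives, while driving clinical action does not.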

DataChat: Prototyping a Conversational Agent for Dataset Search and Visualization

Annual Meeting of the Association for Information Science and Technology, June 2023

Lizhou Fan, Sara Lafia, Lingyao Li, Fangyuan Yang, Libby Hemphill

Data users need relevant context and research expertise to effectively search for and identify relevant datasets. Leading data providers, such as the Inter-university Consortium for Political and Social Research (ICPSR), offer standardized metadata and search tools to support data search. Metadata standards emphasize the machine readability of data and its documentation. There are opportunities to enhance dataset search by improving users’ ability to learn about, and make sense of, information about data. Prior research has shown that context and expertise are two main barriers users face in effectively searching for, evaluating, and deciding whether to reuse data. In this paper, we propose a novel chatbot-based search system, DataChat, that leverages a graph database and a large language model to provide novel ways for users to interact with and search for research data. DataChat complements data archives’ and institutional repositories’ ongoing efforts to curate, preserve, and share research data for reuse by making it easier for users to explore and learn about available research data.
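A toy sketch of the graph-backed retrieval pattern DataChat describes, with a plain keyword matcher standing in for the large language model. The dataset identifiers, titles, graph shape, and matching logic are all assumptions for illustration, not the actual DataChat system:

```python
# In-memory stand-in for a graph database: dataset nodes linked to topic nodes.
GRAPH = {
    "ICPSR-0001": {"topics": {"health", "aging"}, "title": "National Health Survey"},
    "ICPSR-0002": {"topics": {"elections", "voting"}, "title": "Voter Turnout Panel"},
    "ICPSR-0003": {"topics": {"health", "income"}, "title": "Income and Wellness Study"},
}

def extract_topics(question):
    """Stand-in for the LLM step: pull known topic terms out of a question."""
    vocab = {t for node in GRAPH.values() for t in node["topics"]}
    words = {w.strip("?,.").lower() for w in question.split()}
    return words & vocab

def search(question):
    """Rank datasets by how many of the question's topics they touch."""
    topics = extract_topics(question)
    hits = [(len(node["topics"] & topics), ds) for ds, node in GRAPH.items()]
    return [ds for score, ds in sorted(hits, reverse=True) if score > 0]

search("Which datasets cover health and aging?")  # ["ICPSR-0001", "ICPSR-0003"]
```

The division of labor is the point: the language model interprets the user's question, while the graph supplies curated, machine-readable metadata, so answers stay grounded in the archive's actual holdings.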