University of Michigan School of Information
UMSI at CSCW 2024: Awards, Workshops and Papers
Friday, 11/08/2024
By Noor Hindi
The 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) will be held November 9-13 in San Jose, Costa Rica. Several University of Michigan School of Information researchers will be presenting their work.
AWARDS
Honorable Mention
Yao Lyu, Jie Cai, Bryan Dosono, Davis Yadav, John M. Carroll
Identity work in Human-Computer Interaction (HCI) has focused on marginalized groups to explore designs that support their assets (what they have). However, little has been explored about the identity work of people with disabilities, specifically visual impairments. In this study, we interviewed 45 BlindTokers (blind users on TikTok) from various backgrounds to understand their identity work from a positive design perspective. We found that BlindTokers leverage the affordances of the platform to create positive content, share their identities, and build community with the desire to flourish. We propose flourishing labor to describe the work conducted by BlindTokers for their community's flourishing, with implications for supporting that labor. This work contributes to understanding blind users' experiences on short-video platforms and highlights that flourishing is not just an activity for any single blind user but also a job that requires the serious and committed contribution of all stakeholders, including all user groups and the TikTok platform.
Recognition for Contribution to Diversity and Inclusion
Yao Lyu, John M. Carroll
There has been extensive research on the experiences of individuals with visual impairments on text- and image-based social media platforms, such as Facebook and Twitter. However, little is known about the experiences of visually impaired users on short-video platforms like TikTok. To bridge this gap, we conducted an interview study with 30 BlindTokers (the nickname of blind TikTokers). Our study aimed to explore the various activities of BlindTokers on TikTok, including everyday entertainment, professional development, and community engagement. The widespread usage of TikTok among participants demonstrated that they considered TikTok and its associated experiences as the infrastructure for their activities. Additionally, participants reported experiencing breakdowns in this infrastructure due to accessibility issues. They had to carry out infrastructuring work to resolve the breakdowns. Blind users' various practices on TikTok also foregrounded their perceptions of independence. We then discussed blind users' nuanced understanding of the TikTok-mediated independence; we also critically examined BlindTokers' infrastructuring work for such independence.
PAPERS
Li Qiwei, Allison Mcdonald, Oliver L. Haimson, Sarita Schoenebeck, Eric Gilbert
Non-consensual intimate media (NCIM) involves sharing intimate content without the depicted person's consent, including "revenge porn" and sexually explicit deepfakes. While NCIM has received attention in legal, psychological, and communication fields over the past decade, it is not sufficiently addressed in computing scholarship. This paper addresses this gap by linking NCIM harms to the specific technological components that facilitate them. We introduce the sociotechnical stack, a conceptual framework designed to map the technical stack to its corresponding social impacts. The sociotechnical stack allows us to analyze sociotechnical problems like NCIM, and points toward opportunities for computing research. We propose a research roadmap for computing and social computing communities to deter NCIM perpetration and support victim-survivors through building and rebuilding technologies.
The Search for Paxlovid: Medication Acquisition as Anticipation Work After China’s Zero-COVID Policy
In early December 2022, China’s zero-COVID policy came to an end. For Chinese citizens, an imaginary of safety predicated on collective action and state care was replaced with a general focus on individualized responsibility for maintaining one’s health. This expedited policy reversal had the unintended effect of infecting around 80% of China’s population in one month, leading to a period of confusion, anxiety, and infection. One way of obtaining safety in a precarious time was to acquire medical resources. Through an online ethnographic investigation, we trace the acquisition of Paxlovid, an antiviral medication that had its distribution within China complicated by geopolitical tension and supply issues. By attending to the precarity of Chinese citizens at home and abroad, we show how Paxlovid was desired in both the present tense for the presently ill and in the future tense as an anticipatory form of safety. We use the analytical lens of anticipation work to unpack individual, collective, and state anticipatory practices supported by various social media platforms within and across borders. In doing so, we highlight moments of friction between and among various actors to argue that attending to affect, precarity, and practices in moments of conflict is critical to understanding the complexities of computer-supported cooperative work.
Code-ifying the Law: How Disciplinary Divides Afflict the Development of Legal Software
Nel Escher, Jeffrey Bilik, Nikola Banovic, Ben Green
Proponents of legal automation believe that translating the law into code can improve the legal system. However, research and reporting suggest that legal software systems often contain flawed translations of the law, resulting in serious harms such as terminating children’s healthcare and charging innocent people with fraud. Efforts to identify and contest these mistranslations after they arise treat the symptoms of the problem, but fail to prevent them from emerging. Meanwhile, existing recommendations to improve the development of legal software remain untested, as there is little empirical evidence about the translation process itself. In this paper, we investigate the behavior of fifteen teams—nine composed of only computer scientists and six of computer scientists and legal experts—as they attempt to translate a bankruptcy statute into software. Through an interpretative qualitative analysis, we characterize a significant epistemic divide between computer science and law and demonstrate that this divide contributes to errors, misunderstandings, and policy distortions in the development of legal software. Even when development teams included legal experts, communication breakdowns meant that the resulting tools predominantly presented incorrect legal advice and adopted inappropriately harsh legal standards. Study participants did not recognize the errors in the tools they created. We encourage policymakers and researchers to approach legal software with greater skepticism, as the disciplinary divide between computer science and law creates an endemic source of error and mistranslation in the production of legal software.
Kaiwen Sun, Jingjie Li, Yixin Zou, Jenny Radesky, Christopher Brooks, Florian Schaub
Smart home technologies are making their way into families. Parents’ and children’s shared use of smart home technologies has received growing attention in CSCW and related research communities. Families and children are also frequently featured as target audiences in smart home product marketing. However, there is limited knowledge of how exactly children and family interactions are portrayed in smart home product marketing, and to what extent those portrayals align with the actual consideration of children and families in product features and resources for child safety and privacy. We conducted a content analysis of product websites and online resources of 102 smart home products, as these materials constitute a main marketing channel and information source about products for consumers. We found that despite featuring children in smart home marketing, most analyzed product websites did not mention child safety features and lacked sufficient information on how children’s data is collected and used. Specifically, our findings highlight misalignments in three aspects: (1) children are depicted as users of smart home products but there are insufficient child-friendly product features; (2) harmonious child-product co-presence is portrayed but potential child safety issues are neglected; and (3) children are shown as the subject of monitoring and datafication but there is limited information on child data collection and use. We discuss how parent-child relationships and parenting may be negatively impacted by such marketing depictions, and we provide design and policy recommendations for better incorporating child safety and privacy considerations into smart home products.
Daniel Delmonaco, Samuel Mayworm, Josh Guberman, Hibby Thach, Aurelia Augusta, Oliver L. Haimson
Shadowbanning is a unique content moderation strategy receiving recent media attention for the ways it impacts marginalized social media users and communities. Social media companies often deny this content moderation practice despite user experiences online. In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning, develop folk theories about shadowbanning, and attempt to prove its occurrence. We find that marginalized social media users collaboratively develop and test algorithmic folk theories to make sense of their unclear experiences with shadowbanning. Participants reported direct consequences of shadowbanning, including frustration, decreased engagement, the inability to post specific content, and potential financial implications. They reported holding negative perceptions of platforms where they experienced shadowbanning, sometimes attributing their shadowbans to platforms’ deliberate suppression of marginalized users’ content. Some marginalized social media users acted on their theories by adapting their social media behavior to avoid potential shadowbans. We contribute collaborative algorithm investigation: a new concept describing social media users’ strategies of collaboratively developing and testing algorithmic folk theories. Finally, we present design and policy recommendations for addressing shadowbanning and its potential harms.
Oliver L. Haimson, Aloe DeGuia, Rana Saber, Kat Brewster
Extended reality (XR) technologies are becoming increasingly pervasive, and may have capacity to help marginalized groups such as transgender people. Drawing from interviews with n = 18 creators of trans technology, we examined how XR technologies do and can support trans people. We uncovered a number of creative ways that XR technologies support trans experiences. Trans technology creators are designing augmented reality (AR) and virtual reality (VR) systems that help people explore trans identity, experience new types of bodies, educate about and display trans stories and curated trans content, manipulate the physical world, and innovate gender-affirming surgical techniques. Additionally, we show how considering XR as an analogy for trans identity helps us to think about the fluidity and fluctuation inherent in trans identity in new ways, which in turn enables envisioning technologies that can better support complex and changing identities. Despite XR’s potential for supporting trans people, current AR and VR systems face limitations that restrict their large-scale use, but as access to XR systems increases, so will their capacity to improve trans lives.
Samuel Mayworm, Michael Ann DeVito, Daniel Delmonaco, Hibby Thach, Oliver L. Haimson
Social media users create folk theories to help explain how elements of social media operate. Marginalized social media users face disproportionate content moderation and removal on social media platforms. We conducted a qualitative interview study (n = 24) to understand how marginalized social media users may create folk theories in response to content moderation and their perceptions of platforms’ spirit, and how these theories may relate to their marginalized identities. We found that marginalized social media users develop folk theories informed by their perceptions of platforms’ spirit to explain instances where their content was moderated in ways that violate their perceptions of how content moderation should work in practice. These folk theories typically address content being removed despite not violating community guidelines, along with bias against marginalized users embedded in guidelines. We provide implications for platforms, such as using marginalized users’ folk theories as tools to identify elements of platform moderation systems that function incorrectly and disproportionately impact marginalized users.
Samuel Mayworm, Shannon Li, Hibby Thach, Daniel Delmonaco, Christian Paneda, Andrea Wegner, Oliver L. Haimson
Marginalized social media users struggle to navigate inequitable content moderation they experience online. We developed the Online Identity Help Center (OIHC) to confront this challenge by providing information on social media users’ rights, summarizing platforms’ policies, and providing instructions to appeal moderation decisions. We discuss our findings from interviews (n = 24) and surveys (n = 75) which informed the OIHC’s design, along with interviews about and usability tests of the site (n = 12). We found that the OIHC’s resources made it easier for participants to understand platforms’ policies and access appeal resources. Participants expressed increased willingness to read platforms’ policies after reading the OIHC’s summarized versions, but expressed mistrust of platforms after reading them. We discuss the study’s implications, such as the benefits of providing summarized policies to encourage digital literacy, and how doing so may enable users to express skepticism of platforms’ policies after reading them.
Cassidy Pyle, Ben Zefeng Zhang, Oliver L. Haimson, Nazanin Andalibi
Migrants experience unique needs and use social media, in part, to address them. While prior work has primarily focused on migrant populations who are vulnerable socio-economically and legally, less is known about how highly educated migrant populations use social media. Additionally, a growing body of work focuses on algorithmic perceptions and resistance, primarily from laypersons’ perspectives rather than people with high degrees of algorithmic literacy. To address these gaps, we draw from interviews with 20 Chinese-born migrant technology professionals. We found that social media played an integral role in helping participants meet their unique needs but that participants perceived social media algorithms to negatively shape the content they consumed, which ultimately influenced their mobility-related aspirations and goals. We discuss how findings challenge the promise of algorithmic literacy and contribute to a human-centered conceptualization of algorithmic mobility as socially and algorithmically produced motion that concerns the movement of physical bodies and interactions as well as associated digital movement. Specifically, we introduce a fourth dimension of algorithmic mobility: algorithmically curated content on social media and elsewhere based on facets of users’ identities directly influences users’ mobility-related aspirations and goals, such as how, when, and where they go. Finally, we call for transnational policy interventions related to algorithms and highlight design considerations around content moderation, algorithmic user-control, and contestability.
Ethical Speculation on the Imagined Futures of Emotion AI for Mental Health Monitoring and Detection
Nadia Karizat, Alexandra H. Vinson, Shobita Parthasarathy, Nazanin Andalibi
Patent applications provide insight into how inventors imagine and legitimize uses of their imagined technologies; as part of this imagining they envision social worlds and produce sociotechnical imaginaries. Examining sociotechnical imaginaries is important for emerging technologies in high-stakes contexts such as the case of emotion AI to address mental health care. We analyzed emotion AI patent applications (N=58) filed in the U.S. concerned with monitoring and detecting emotions and/or mental health. We examined the described technologies’ imagined uses and the problems they were positioned to address. We found that inventors justified emotion AI inventions as solutions to issues surrounding data accuracy, care provision and experience, patient-provider communication, emotion regulation, and preventing harms attributed to mental health causes. We then applied an ethical speculation lens to anticipate the potential implications of the promissory emotion AI-enabled futures described in patent applications. We argue that such a future is one filled with mental health conditions’ (or ‘non-expected’ emotions’) stigmatization, equating mental health with propensity for crime, and lack of data subjects’ agency. By framing individuals with mental health conditions as unpredictable and not capable of exercising their own agency, emotion AI mental health patent applications propose solutions that intervene in this imagined future: intensive surveillance, an emphasis on individual responsibility over structural barriers, and decontextualized behavioral change interventions. Using ethical speculation, we articulate the consequences of these discourses, raising questions about the role of emotion AI as positive, inherent, or inevitable in health and care-related contexts. 
We discuss our findings’ implications for patent review processes, and advocate for policymakers, researchers, and technologists to refer to patent applications to access, evaluate, and (re)consider potentially harmful sociotechnical imaginaries before they become our reality.
Emotion AI Use in U.S. Mental Health Care: Potentially Unjust and Techno-Solutionist
Kat Roemmich, Shanley Corvite, Cassidy Pyle, Nadia Karizat, Nazanin Andalibi
Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects' privacy. 
We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups.
Theorizing Self Visibility on Social Media: A Visibility Objects Approach
Kristen Barta, Nazanin Andalibi
Self-presentation undergirds social interaction on social media. HCI and social computing scholarship draws on visibility to theorize self-presentation management; while research addresses how social media users leverage (in)visibility for self-presentation goals, how users perceive and assess the visibility of themselves and others merits investigation. We conducted interviews to explore how U.S.-based social media users perceive and assess self visibility. Findings indicate that self visibility comprises a set of related objects’ visibility—content, persons, and identity—associated with distinct, but related, visibility attributes. We develop the visibility objects lens to examine self-presentation, contributing a unified social media visibility framework. We show how users perceive themselves as visible to platforms and algorithms, which act as visibility agents. We introduce reflected algorithmic visibility to describe awareness of visibility to platforms and algorithms informed by algorithmic feedback. We conclude with design implications of a visibility objects lens.
Characterizing the Structure of Online Conversations Across Reddit
Yulin Yu, Julie Jiang, Paramveer Dhillon
The proliferation of social media platforms has afforded social scientists unprecedented access to vast troves of data on human interactions, facilitating the study of online behavior at an unparalleled scale. These platforms typically structure conversations as threads, forming tree-like structures known as "discussion trees." This paper examines the structural properties of online discussions on Reddit by analyzing both global (community-level) and local (post-level) attributes of these discussion trees. We conduct a comprehensive statistical analysis of a year's worth of Reddit data, encompassing a quarter of a million posts and several million comments. Our primary objective is to disentangle the relative impacts of global and local properties and evaluate how specific features shape discussion tree structures. The results reveal that both local and global features contribute significantly to explaining structural variation in discussion trees. However, local features, such as post content and sentiment, collectively have a greater impact, accounting for a larger proportion of variation in the width, depth, and size of discussion trees. Our analysis also uncovers considerable heterogeneity in the impact of various features on discussion structures. Notably, certain global features play crucial roles in determining specific discussion tree properties. These features include the subreddit's topic, age, popularity, and content redundancy. For instance, posts in subreddits focused on politics, sports, and current events tend to generate deeper and wider discussion trees. This research enhances our understanding of online conversation dynamics and offers valuable insights for both content creators and platform designers. By elucidating the factors that shape online discussions, our work contributes to ongoing efforts to improve the quality and effectiveness of digital discourse.
On Being an Expert: Habitus as a Lens for Understanding Privacy Expertise
Houda Elmimouni, Eric P. S. Baumer, Andrea Forte
How do privacy experts and laypersons differ? We investigate this question using data about the use of privacy-enhancing technologies and strategies from 128 surveys and 17 follow-up interviews with two populations: privacy experts (i.e., privacy researchers and professionals) and privacy laypersons. Findings reveal that both experts and laypersons use common privacy strategies, but experts employ a broader range of strategies, favor open source technologies, and possess a more technical understanding of internet privacy risks and technologies such as onion routing and tracking algorithms. We characterize these differences in terms of sociological habitus, where experts and laypersons differ not only in technical skill but also in broader conceptualizations and practices. From this characterization, we make recommendations for technology design practices, as well as legal and pedagogical implications, that can help decenter the experiences of privacy experts and accommodate privacy laypersons' preferences and habits.
Friendship Formation in an Enforced Online Regime: Findings from a US University Under COVID
Friendships are a key element of mental health, yet modern life increasingly involves "enforced online regimes," which can inhibit friendship formation. One example is provided by residential university students under COVID-19. Through interviews with 17 graduate students at a U.S. university, we investigate how new friendships were made and maintained under the pandemic. While some of our individual findings echo previous work with online social interaction, our analysis reveals a novel 7-phase friendship formation process that extends Levinger & Snoek's classic pair-relatedness theory. The model enables pinpoint diagnoses. For our participants, three specific phases were blocked -- Physical Awareness (apprehension of another's physical characteristics); Personal Contact (exchange of personal information); and Ongoing Mutuality (repeat interactions to build friendship). The model also explains divergent results under similar but different situations (e.g., residential students under COVID eventually made friends, but students of purely online courses do not), and enables targeted recommendations.
AppealMod: Inducing Friction to Reduce Moderator Workload of Handling User Appeals
Shubham Atreja, Jane Im, Paul Resnick, Libby Hemphill
As content moderation becomes a central aspect of all social media platforms and online communities, interest has grown in how to make moderation decisions contestable. On social media platforms where individual communities moderate their own activities, the responsibility to address user appeals falls on volunteers from within the community. While there is a growing body of work devoted to understanding and supporting the volunteer moderators' workload, little is known about their practice of handling user appeals. Through a collaborative and iterative design process with Reddit moderators, we found that moderators spend considerable effort in investigating user ban appeals and desired to directly engage with users and retain their agency over each decision. To fulfill their needs, we designed and built AppealMod, a system that induces friction in the appeals process by asking users to provide additional information before their appeals are reviewed by human moderators. In addition to giving moderators more information, we expected the friction in the appeal process would lead to a selection effect among users, with many insincere and toxic appeals being abandoned before getting any attention from human moderators. To evaluate our system, we conducted a randomized field experiment in a Reddit community of over 29 million users that lasted for four months. As a result of the selection effect, moderators viewed only 30% of initial appeals and less than 10% of the toxically worded appeals; yet they granted roughly the same number of appeals when compared with the control group. Overall, our system is effective at reducing moderator workload and minimizing their exposure to toxic content while honoring their preference for direct engagement and agency in appeals.
3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration Design
Nahyun Kwon, Tong Steven Sun, Yuyang Gao, Liang Zhao, Xu Wang, Jeeeun Kim, Sungsoo Ray Hong
The widespread availability of consumer-grade 3D printers and online learning resources enables novices to self-train in remote settings. While troubleshooting is an essential part of 3D printing, the process remains challenging for many remote novices even with the help of well-developed online resources, such as troubleshooting archives and community help. We conducted a formative study with 76 active 3D printing users to learn how remote novices leverage online resources in troubleshooting and the challenges they face. We found that remote novices cannot fully utilize online resources. For example, online archives statically provide general information, making it hard for novices to search for and relate their unique cases to existing descriptions. Online communities can potentially ease their struggles by providing more targeted suggestions, but helpers who can provide custom assistance are scarce, making it hard to obtain timely help. We propose 3DPFIX, an interactive 3D printing troubleshooting system powered by a pipeline designed to facilitate Human-AI Collaboration, improve novices' 3D printing experiences, and thus help them easily accumulate domain knowledge. We built 3DPFIX to support automated diagnosis and solution-seeking. 3DPFIX was built upon shared dialogues about failure cases from Q&A discourses accumulated in online communities. We leverage social annotations (i.e., comments) to build an annotated failure-image dataset for AI classifiers and to extract a solution pool. Our summative study revealed that using 3DPFIX helped participants spend significantly less effort in diagnosing failures and find more accurate solutions than relying on their common practice. We also found that 3DPFIX users learn 3D printing domain-specific knowledge. We discuss the implications of leveraging community-driven data in developing future Human-AI Collaboration designs.
U.S. Job-Seekers’ Organizational Justice Perceptions of Emotion AI-Enabled Asynchronous Interviews
Cassidy Pyle, Kat Roemmich, Nazanin Andalibi
Emotion AI is increasingly used to automatically evaluate asynchronous hiring interviews. Although touted for increasing hiring fit and reducing bias, it is unclear how job-seekers perceive emotion AI-enabled asynchronous interviews. This gap is striking, given job-seekers’ marginalized position in hiring and how job-seekers with marginalized identities may be particularly vulnerable to this technology’s potential harms. Addressing this gap, we conducted exploratory interviews with 14 U.S.-based participants with direct, recent experience with emotion AI-enabled asynchronous interviews. While participants acknowledged the asynchronous, virtual modality’s potential benefits to employers and job-seekers, they perceived harms to job-seekers associated with automatic emotion inferences that our analysis maps to distributive, procedural, and interactional injustices. We find that social identity can inform job-seekers’ perceptions of emotion AI, extending prior work’s understandings of the factors contributing to job-seekers’ perceptions of AI (broadly) in hiring. Moreover, our results suggest that emotion AI use may reconfigure demands for emotional labor in hiring and that deploying this technology in its current state may unjustly risk harmful outcomes for job-seekers – or, at the very least, perceptions thereof, which shape behaviors and attitudes. Accordingly, we recommend against the present adoption of emotion AI in hiring, identifying opportunities for the design of future asynchronous hiring interview platforms to be meaningfully transparent, contestable, and privacy-preserving. We emphasize that only a subset of perceived harms we surface may be alleviated by these efforts; some injustices may only be resolved by removing emotion AI-enabled features.
Social Media as a Lens into Careers During a Changing World of Work
People share and encounter a range of posts on social media that depict different career experiences, such as one’s successes or failures. In a continually shifting world of work, observing others’ job experiences can be useful for navigating one’s own career journey. The present study applies the possible selves theory to understand how viewing people’s career experiences on social media might affect viewers’ career-related expectations, hopes, and fears. From semi-structured interviews with 19 social media users with different career experiences, this study found that social media posts may facilitate functions of possible selves by 1) increasing people’s awareness of new and diverse career paths and 2) motivating people in planning and preparing for a desired possible self. Across these functions, we found that videos were particularly beneficial for uncovering and learning about life in a career and that observing others’ career experiences online often evoked social comparison. Through an affordances lens, findings also indicate that social media affordances of visibility and persistence were particularly relevant in developing and assessing career-related possible selves and sometimes strategically managed through leveraging platform algorithms. This paper brings forth empirical contributions to the possible selves theory in the digital age and highlights the significance of social media mechanisms for possible selves. This paper also provides design implications for social media platforms and recommendations for potential career counseling and mentoring strategies that may be useful for individuals navigating and exploring career possibilities.
Form-From: A Design Space of Social Media Systems
Amy X. Zhang, Michael S. Bernstein, David R. Karger, Mark S. Ackerman
Social media systems are as varied as they are pervasive. They have been almost universally adopted for a broad range of purposes including work, entertainment, activism, and decision making. As a result, they have also diversified, with many distinct designs differing in content type, organization, delivery mechanism, access control, and many other dimensions. In this work, we aim to characterize and then distill a concise design space of social media systems that can help us understand similarities and differences, recognize potential consequences of design choice, and identify spaces for innovation. Our model, which we call Form-From, characterizes social media based on (1) the form of the content, either threaded or flat, and (2) from where or from whom one might receive content, ranging from spaces to networks to the commons. We derive Form-From inductively from a larger set of 62 dimensions organized into 10 categories. To demonstrate the utility of our model, we trace the history of social media systems as they traverse the Form-From space over time, and we identify common design patterns within cells of the model.
Spirits in the Material World: Older Adults’ Personal Curation of Memory Artifacts
Sam A. Ankenbauer, Robin N. Brewer
Memory artifacts are personal and collective belongings that elicit deliberate or involuntary memories. They are significant as objects of continuity, vessels for identity, and links to past relationships and history for individuals, families, and communities. Drawing from in-depth interviews and cultural probe sessions with 16 individuals over 65, we consider how older adults curate and interact with their personal artifacts that embody and inform memory. Participants' hands-on experiences with memory artifacts uncover a heterogeneous set of personal curation practices and identify tensions that result from the competing goals of creating a legible narrative or legacy for themselves, their family, and their communities. The transition from physical to digital memory artifacts often perpetuates tension but can also create moments of reflection. These findings contribute a set of design considerations for supporting curation practices. This paper joins and expands upon CSCW scholarship regarding the importance of memory artifacts and the ongoing challenges of retaining individual memory and history over time, which, if managed effectively, can benefit and sustain family and community history at large.
SIG: Future Dialogues: Personal AI Assistants and Their Interactions with Us and Each Other
Austin L. Toombs, Kyle Montague, Richmond Y. Wang, Robin N. Brewer, Suchismita Naik, Paul C. Parsons, Selma Šabanović, Derek Whitley
We propose a Special Interest Group (SIG) session during which participants will discuss future possible configurations of personal artificial intelligence assistants (PAIAs) and their potential capabilities to interact with other humans and other personal AI assistants. Participants will engage in design fiction and speculative design activities to discuss the boundaries of acceptable roles that personal AI assistants may play in our relationships in the future. The goal is for discussion and activities during the SIG to help attendees think through their own research and design work as it relates to exploring the impact that PAIAs and PAIA-like systems might have on our relationships with others and our relationships with technology. In the introduction to the SIG, we will use design fictions, sci-fi analyses, and short case studies to introduce a broad conceptual playing field that will inspire discussion for the 75-minute session.
Gendered, Collectivist Journeys: Exploring Sociotechnical Adaptation Among Afghan Refugees in the United States
Amna Batool, Tawanna Dillahunt, Julie Hui, Mustafa Naseem
This paper presents findings from an empirical study that uncovers the economic, psychological, and sociocultural adaptation strategies used by recent Afghan refugees in a Midwestern U.S. state. Through 14 semi-structured interviews conducted between February and April 2023, this study investigates how Afghan refugees utilize technology, tools, and skills in their resettlement process, and builds upon Hsiao et al.’s conceptualization of sociotechnical adaptation. The findings reveal that (i) gender and collectivist cultural values play a significant role in shaping the adaptation strategies used by men versus women; (ii) participants made strategic choices about the type of support they sought depending on whether host community members shared their identity; (iii) a notable tension exists between economic adaptation and preserving sociocultural values; and (iv) women participants devised creative, collective solutions to economic challenges, contributing to the discourse on solidarity economies in HCI. Key contributions include (a) design implications for technological products that can aid in psychological adaptation, foster solidarity economies, and create digital safe spaces for refugees to connect with shared-identity host populations, and (b) policy and program recommendations for refugee resettlement agencies to enhance digital literacy among refugees.
Entangled Independence: From Labor Rights to Gig “Empowerment” Under the Algorithmic Gaze
Ira Anjali Anwar, Patricia Garcia, Julie Hui
This study offers a critical analysis of how the Indian gig platform Urban Company (UC) mobilizes the promise of empowerment to assemble, discipline, and algorithmically entangle its gig labor force. Much of the scholarship on gig work has described how the (mis)classification of gig workers as independent contractors allows gig corporations to abdicate their employment responsibilities, thereby disentangling the traditional employer-employee relationship. By contrast, UC promises to invest in its Indian gig workforce, with offerings such as health insurance and financial loans to counter the enduring socio-economic precarity of the Indian labor market. However, through a critical content analysis of UC’s public media, we find that UC’s organization of such "social security" benefits rests on an insidious system of algorithmic evaluation and classification. Access to benefits is conditional, determined by the platform’s algorithmic control mechanisms, which classify workers as worthy or unworthy on a shifting basis. Mobilizing Agre’s analysis of business discourses of “empowered work,” we highlight how UC’s algorithmic classification of workers creates a system of “conditional empowerment.” This system reflects a critical shift in the labor process, from rights-based labor regimes to a neoliberal social order where individuals must constantly prove their worth.
Intermediation: Algorithmic Prioritization in Practice in Homeless Services
Homelessness is a significant and growing crisis in the United States. In an effort to more effectively and fairly distribute limited housing resources, jurisdictions across the US have adopted algorithmic prioritization systems to help select which unhoused people should receive resources. Given the impact of algorithmic prioritization on the lives of unhoused people, there is a need to more fully examine how these systems are implemented in practice by frontline workers such as social service workers. In this paper, we present a qualitative study that draws on interviews and artifact walkthroughs with fifteen social service workers to examine how they interacted with algorithmic prioritization systems as part of their job duties. We found that social service workers employed discretionary work practices to mediate between the rigid formats of algorithmic prioritization systems and the messy, situated realities of homelessness. We term these discretionary work practices “intermediation” and provide four examples that illustrate how our interlocutors were able to maintain their commitment to advocacy and to express their expertise despite the automation of major aspects of their professional decision-making. These work practices, which we argue were motivated by care for clients, as well as a desire to preserve professional autonomy, lead us to conclude that discretion cannot easily be removed from bureaucratic systems and that removing discretion is not necessarily a desirable outcome.
PANELS
Navigating Tensions, Managing Conflict, and Reaching Academic Harmony in HCI
Tamara Clegg, Tawanna Dillahunt, Sheena Erete, Oliver L. Haimson, Neha Kumar, Amanda Lazar, Yolanda Rankin, Katta Spiel
What should we do with Emotion AI? Towards an agenda for the next 30 years
Nazanin Andalibi, Luke Stark, Daniel McDuff, Rosalind Picard, Jonathan Gratch, Noura Howell
Nurturing Digitally Mediated Post-Growth Work Economies
Vishal Sharma, Neha Kumar, Pejman Mirza-Babaei, Aakash Gautam, Cindy Kaiying Lin, Volker Wulf, Nazanin Andalibi
Datafication Dilemmas: Data Governance in the Public Interest
Linda Huber, Anubha Singh, Lynn Dombrowski, Shion Guha, Jean Hardy, Naja Holten Moller
WORKSHOPS
Envisioning New Futures of Positive Social Technology: Beyond Paradigms of Fixing, Protecting, and Preventing
JaeWon Kim, Lindsay Popowski, Anna Fang, Cassidy Pyle, Guo Freeman, Ryan M. Kelly, Angela Y. Lee, Fannie Liu, Angela D. R. Smith, Alexandra To, Amy X. Zhang
Social technology research today largely focuses on mitigating the negative impacts of technology and, therefore, often misses the potential of technology to enhance human connections and well-being. However, we see a potential to shift towards a holistic view of social technology’s impact on human flourishing. We introduce Positive Social Technology (Positech), a framework that shifts emphasis toward leveraging social technologies to support and augment human flourishing. This workshop is organized around three themes relevant to Positech: 1) “Exploring Relevant and Adjacent Research” to define and widen the Positech scope with insights from related fields, 2) “Projecting the Landscape of Positech” for participants to outline the domain’s key aspects, and 3) “Envisioning the Future of Positech,” anchored around strategic planning towards a sustainable research community. Ultimately, this workshop will serve as a platform to shift the narrative of social technology research towards a more positive, human-centric approach. It will foster research that not only fixes technologies and protects or prevents humans from technology’s faults but also enriches human experiences and connections through technology.
POSTERS
Exploring Online Support Needs of Adolescents Living with Epilepsy
Jessica Y. Medina, Jordyn Young, Afsaneh Razi, Wendy Trueblood Miller
SPECIAL INTEREST GROUPS
Imagining Computing Futures and Mitigating Algorithmic Harm: Conversations Between Artistic Disciplines and Computing
Angela Schöpke-Gonzalez, Justin Wyss-Gallifent, Charli Brissey, Steph Jordan, Libby Hemphill
This SIG invites interdisciplinary researchers in ethics, computing, and the arts; data professionals; and arts practitioners into a collective reflection on how these disciplines can work together to mitigate algorithmic harm. As we explore this topic, we recognize the historical and current marginalization of artistic disciplines in terms of credit and funding, and we bring into our discussion not only what computing and data professions can gain from the arts, but also what artistic disciplines can gain from working with computing and data professions, guided by the shared goal of mitigating algorithmic harm. This SIG invites reflection on how arts-based methods can help answer the following questions:
- What agency and responsibility do data professionals as individuals have for mitigating algorithmic harm in their day-to-day workflows?
- How does a data professional’s individual agency and responsibility relate to collectives (e.g., institutions, groups of colleagues, families, employers, etc.) that they are a part of?
- What agency and responsibility do collectives have for mitigating algorithmic harm?
- What can artistic disciplines gain from working with computing and data professions to mitigate algorithmic harm?
During this SIG, we will introduce the necessary disciplinary contexts for our collective discussion, facilitate a movement improvisation score that we developed called “On The Perils of Poorly Chosen Sorting Algorithms”, and facilitate reflective discussion around how arts-based methods can invite data professionals to imagine new data workflows that mitigate algorithmic harm.