
Ashley Zhang and Steve Oney earn Best Paper award at the 2024 Learning @ Scale Conference

UMSI News.

Tuesday, August 6, 2024

University of Michigan School of Information PhD candidate Ashley Zhang and UMSI associate professor Steve Oney earned a Best Paper award at the 2024 Learning @ Scale Conference in Atlanta in July. 

The Learning @ Scale Conference brings together educational technology researchers from around the world to promote the scientific exchange of research at the intersection of learning science and computer science. The goal is to enhance educational practices, improve student outcomes and facilitate effective teaching and learning. This year’s conference focused on scaling learning in the age of AI.

Zhang and Oney’s paper, “CFlow: Supporting Semantic Flow Analysis of Students' Code in Programming Problems at Scale,” proposes a new pedagogical technique and system that helps instructors of introductory programming courses understand “class-wide problem-solving patterns and issues.”

“Receiving this award is a recognition of our technical depth and research contribution,” Zhang says. “Teaching programming at scale is a significant challenge, and making code understandable at scale is crucial. CFlow presents a novel technique to visualize the semantic flow in a large collection of students' code, demonstrating potential impact on the field of learning at scale. I hope our work inspires new perspectives and ways for thinking about students' code and even their learning behavior at scale.”  

Additionally, UMSI PhD students Xinying Hou and Zihan Wu, along with assistant professors Xu Wang and Barbara Ericson, earned a Best Paper nomination for their paper “CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming.” The paper proposes CodeTailor, a system that provides personalized help to students struggling in introductory-level programming courses.

“This acknowledges our focus on a growing key problem in programming learning in the generative AI era and recognizes our contributions,” Hou says. “CodeTailor creates a programming support system that enhances engagement and active learning, encourages students to think about solution construction, fosters continuous learning, promotes reflection, and boosts their confidence. I hope this work can open ways for tackling new problems students face when learning with easily-reachable AI.”

UMSI researchers were well represented at the 2024 Learning @ Scale conference. Here are their accepted papers and workshops: 

Research Papers 

CFlow: Supporting Semantic Flow Analysis of Students’ Code in Programming Problems at Scale

Ashley Zhang, Xiaohang Tang, Steve Oney, Yan Chen

Generative Students: Using LLM-Simulated Student Profiles to Support Question Item Evaluation

Xinyi Lu, Xu Wang

CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming

Xinying Hou, Zihan Wu, Xu Wang, Barbara Ericson

Influence on Judgements of Learning Given Perceived AI Annotations

Warren Li, Christopher Brooks

Workshops

Learnersourcing: Student-Generated Content @ Scale

Organizers: Steven Moore, Anjali Singh, Xinyi Lu, Hyoungwook Jin, Paul Denny, Hassan Khosravi, Chris Brooks, Xu Wang, Juho Kim, John Stamper

Description: The second annual workshop on Learnersourcing: Student-Generated Content @ Scale is taking place at Learning @ Scale 2024. This full-day hybrid workshop will feature invited speakers, interactive activities, paper presentations and discussions as we delve into the field’s opportunities and challenges. Attendees will engage in hands-on development of learnersourcing activities suited to their own courses or systems and gain access to various learnersourcing systems and datasets for exploration.

This workshop aims to foster discussions on new types of learnersourcing activities, strategies for evaluating the quality of student-generated content, the integration of LLMs with the field, and approaches to scaling learnersourcing to produce valuable instructional and assessment materials. We believe participants from a wide range of backgrounds and levels of prior knowledge can both benefit from and contribute to this workshop, as learnersourcing draws on work from education, crowdsourcing, learning analytics, data mining, ML/NLP and many more fields. Additionally, because the learnersourcing process involves many stakeholders (students, instructors, researchers, instructional designers, etc.), multiple viewpoints can help inform what future and existing student-generated content might be useful, suggest new and better ways to assess the quality of that content, and spark potential collaborations between attendees.

Ultimately, we want to show how everyone can make use of learnersourcing. Participants will gain hands-on experience with learnersourcing tools such as RiPPLE and PeerWise, create their own learnersourcing activities using these tools or their own platforms, and discuss the next challenges and opportunities in the learnersourcing space. We hope to attract attendees interested in scaling the generation of quality instructional and assessment content, as well as those interested in the use of online learning platforms.

RELATED 

Learn more about UMSI research by subscribing to our free research roundup newsletter


— Noor Hindi, UMSI public relations specialist