UMSI team wins information retrieval competition
UMSI PhD student Cheng Li (fifth from left) and U-M Computer Science and Engineering PhD student Yue Wang (second from right), under the supervision of UMSI professors Paul Resnick and Qiaozhu Mei, were recognized for delivering the top submission in their category at the Text Retrieval Conference (TREC) held November 19-22 in Gaithersburg, Maryland.
TREC is an annual workshop and competition for information retrieval research, organized by the National Institute of Standards and Technology since 1992. Each year, the conference hosts several competitions, called tracks, by selecting a few challenging retrieval tasks, providing standard data sets, and judging participants' results.
Li and Wang’s submission was entered in TREC’s Microblog Track, one of eight tracks in the 2013 competition, each focused on a different retrieval task. Both students are members of Mei’s Foreseer research group, which conducts cutting-edge research broadly related to data mining and information retrieval and has found applications in Web search, social computing, scientific literature mining, and health informatics.
This year's competition required participants to interact with a collection of Twitter posts via a search application programming interface (API). Teams were tasked with developing a system to satisfy users' real-time information needs by retrieving relevant information from the stream of tweets. The systems were required to answer a search query by providing a list of appropriate tweets ranked in decreasing order of predicted relevance.
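To illustrate the shape of the task, the sketch below ranks tweets for a query by a simple term-overlap score and returns them in decreasing order of predicted relevance. This is a hypothetical minimal example for exposition only; the function names and scoring are assumptions and bear no relation to the winning system's actual retrieval model.

```python
def score(query: str, tweet: str) -> float:
    """Score a tweet by the fraction of query terms it contains.

    This naive keyword-overlap score stands in for a real
    relevance model, purely for illustration.
    """
    q_terms = set(query.lower().split())
    t_terms = set(tweet.lower().split())
    return len(q_terms & t_terms) / len(q_terms)

def rank_tweets(query: str, tweets: list[str]) -> list[str]:
    """Return tweets sorted in decreasing order of predicted relevance."""
    return sorted(tweets, key=lambda t: score(query, t), reverse=True)

# Toy example: the tweet sharing the most query terms ranks first.
tweets = [
    "weather update",
    "trec results announced",
    "trec microblog track winners",
]
print(rank_tweets("trec microblog track", tweets))
```

A real Microblog Track system would replace the overlap score with a learned relevance model and query the official search API rather than a local list, but the output contract is the same: a ranked list of tweets per query.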
According to Mei, the submission owed its success to a novel retrieval framework that actively involves users in the loop. The “double-loop” system was developed by Mei, Resnick, and fellow UMSI professor Dragomir Radev through research exploring tools to help people make personal assessments of credibility. Their research sought to retrieve the entire traces of rumors spread in social media and to create a text mining system aimed at minimizing the amount of socially implausible content in online statements and interactions.
Li and Wang’s effort beat 20 other submissions in the field; many more teams registered for the competition but were unable to deliver final submissions. This marked the first time that a team representing the University of Michigan won the competition. Two former students of Mei’s Information Retrieval course (SI 650), Ben King and Ivan Provalov, won the 2011 competition for TREC’s Medical Records Track, but they were sponsored by Cengage Learning.
The tracks selected for the TREC competition each year often reflect the most timely, influential and challenging information retrieval problems. Top information retrieval research groups have participated in various tracks over the years, including teams from Carnegie Mellon, the University of Massachusetts, Cornell, University of Illinois at Urbana-Champaign, the University of Glasgow, Microsoft Research and other renowned institutions.
TREC is overseen by a program committee consisting of representatives from government, industry, and academia. Since its inception, the conference has helped address the major problems in information retrieval research by creating new, larger test collections, developing standardized evaluation methods, distributing research results, and providing models for other information retrieval workshops. In addition, technology developed through TREC is used in many of the world’s commercial search engines. A number of the improvements in Web search engines over the past decade are attributed to TREC.