UMD Computerized System Beats Human Quiz Bowl Team at Atlanta Exhibition
A computerized question-answering system built by researchers from the University of Maryland (UMD) and University of Colorado recently bested a team of human competitors during a quiz bowl exhibition match in Atlanta, Georgia.
The artificial intelligence (AI) system, known as QANTA—which stands for "question answering is not a trivial activity"—won by a score of 260 to 215 against a volunteer team of students at the High School National Championship Tournament, which features top quiz bowl teams from across the U.S.
The exhibition match was not part of the official competition, but was conceived as a way to test recent improvements in artificial intelligence algorithms, says Jordan Boyd-Graber, one of the UMD-affiliated faculty involved in the project.
Boyd-Graber, who returns to UMD in July after spending the past three years at the University of Colorado, says these types of matches are important for the broader field of AI.
“It's an example of how we can synthesize disparate information—that is, data from previous questions and Wikipedia—to not only answer questions, but to know when we can't,” he says. “This can help people gain more trust in AI systems and interact with them more comfortably in the real world.”
This is the third time that QANTA has taken on human competitors. A similar exhibition held in 2015 ended in a 200–200 tie, and last year the computerized system lost “badly” against a strong human team, Boyd-Graber says.
The exhibition matches have been fairly popular, he adds, drawing hundreds of students, coaches and parents.
Unlike contestants on the game show “Jeopardy!,” quiz bowl players try to answer a question as soon as they can, even before the entire question has been read.
The questions are structured to reward deeper knowledge earlier in the question. For example, a question about Venus might start by naming a Japanese space probe that studied the planet, and end with the much easier clue that it is the second planet from the sun.
That incremental structure shapes how QANTA works—unlike question-answering systems that see the entire question all at once (e.g., IBM Watson of “Jeopardy!” fame), QANTA must decide on its own when it has enough information to answer.
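The idea of reading a question word by word and buzzing once confidence crosses a threshold can be sketched in a few lines. This is purely illustrative—QANTA's real guesser is a trained deep learning model, and the toy `guess` function, threshold value, and sample question below are all assumptions for the sake of the example:

```python
# Hypothetical sketch of incremental "buzzing" -- not QANTA's actual code.
# The question text, guesser, and threshold are illustrative stand-ins.

QUESTION_WORDS = (
    "This planet was studied by the Japanese probe Akatsuki and "
    "it is the second planet from the sun"
).split()

def guess(words_so_far):
    """Stand-in guesser returning (answer, confidence).

    A real system scores candidate answers with a trained model;
    here, confidence simply grows with the number of clue words seen.
    """
    confidence = min(1.0, len(words_so_far) / 12)
    return "Venus", confidence

BUZZ_THRESHOLD = 0.8  # buzz only once the guesser is confident enough

def play(question_words):
    seen = []
    for word in question_words:
        seen.append(word)  # the question is revealed one word at a time
        answer, confidence = guess(seen)
        if confidence >= BUZZ_THRESHOLD:
            return answer, len(seen)  # buzz early, on a partial question
    return answer, len(seen)  # otherwise, answer at the end

answer, words_heard = play(QUESTION_WORDS)
```

Systems like this trade off speed against accuracy: buzzing early on a hard clue earns more credit but risks a wrong answer, which is why tuning the buzzing decision matters as much as generating good guesses.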
“When we lost last year we had some technical glitches—our deep guess generation system was not tied to the same one used to score buzzes,” Boyd-Graber says.
For the 2017 competition, however, UMD doctoral student Shi Feng optimized the buzzing system, which helped significantly, and University of Colorado doctoral student Pedro Rodriguez overhauled QANTA's infrastructure and developed a better computational method for generating guesses.
In addition to Boyd-Graber, Feng and Rodriguez, others involved in developing the QANTA system are Hal Daumé III, an associate professor of computer science and director of the Computational Linguistics and Information Processing Laboratory, and Mohit Iyyer, who recently graduated with a doctoral degree in computer science and will start as an assistant professor at the University of Massachusetts, Amherst in fall 2018.
Daumé and Boyd-Graber both have appointments in the University of Maryland Institute for Advanced Computer Studies.
Feng, a first-year computer science doctoral student, says he has enjoyed working on QANTA.
“As I'm interested in natural language processing and reinforcement learning, QANTA is a great match for me since it uses techniques from both fields,” he says.
Feng adds that it was exciting to see QANTA compete against humans.
“I personally did not expect QANTA to win, seeing how good the human players are, but it turned out that the robustness of the machine is indeed advantageous,” he says. “There is still a large room for improvement for QANTA—especially for the buzzing module—and I look forward to continuing work on it.”
To see a video overview of the QANTA system, go here.
—Story by Melissa Brachfeld