Lecture ‘Can’t You See I’m Busy?’ addresses ‘interruption management’
You’ve opened a Microsoft Word document and are just about to write. Feel good?
No. Instead of inspiration, along comes Clippy, the annoying little pop-up man with his bobbing eyebrows and balloon full of intrusive questions. “It looks like you’re writing a letter. Would you like help?”
If that puts you in the mood for revenge, welcome to the world of “interruption management” research. Computer scientists are using statistical reasoning and behavioral surveys to find ways of modifying when computers interrupt their human users.
Never interrupting, researchers have found, is as unacceptable as always interrupting. What’s the middle ground, where the benefits of an interruption outweigh its costs?
This research question, just gaining ground in information technology, is one that intrigues Barbara J. Grosz, Higgins Professor of Natural Sciences in Harvard’s School of Engineering and Applied Sciences (SEAS). She’s also the new dean at the Radcliffe Institute for Advanced Study, where cross-disciplinary research blending the arts and science has created a sort of intellectual commons.
At the Radcliffe Gymnasium on Monday (Oct. 27), Grosz delivered her inaugural Dean’s Lecture, leading an audience of more than 200 through a survey of her research on interruption management.
In intellectual terms, the aptly titled “Can’t You See I’m Busy?” set the bar pretty high, but made the goals of Grosz’s research accessible for the humanists on hand, who included a fair sampling of this year’s Radcliffe fellows — a novelist who writes about Byron, for instance, and the National Poet of Wales.
Grosz, animated and comfortable, drew a lot of laughs and landed the main points of her quest for better human-computer interaction. (In the end, she exhorted software creators to pay attention and computer-users to keep complaining.)
Grosz’s lifelong path to science and research was admirably summarized in an introduction by Harvard President Drew Faust, a longtime admirer of the computer scientist, who she said will be “a careful steward of Radcliffe’s past, and a distinguished leader for its future.”
In grade school, Grosz was intrigued by mathematics, but then buffeted by the prejudices of her day. Girls excelled at reading and writing, it was presumed, but could not excel at the elegant mysteries of numbers.
Inspired by a teacher who understood her gender predicament, Grosz went off to Cornell University, determined to be “a math teacher just like him,” Faust related. Instead, her intellectual interests blossomed into a Ph.D. from the University of California, Berkeley, and by 1986 a posting at Harvard, where as a resident expert in artificial intelligence she became the first female tenured professor in what is now SEAS.
In the 1970s Grosz investigated an early computer-aided system for teaching mathematics — one that “managed to capture everything wrong,” she said, by emphasizing rote answers over conceptual exploration.
Emerging from that experience was a question that has occupied Grosz ever since: How can humans and computers communicate well — as collaborators and partners?
Not by Clippy queries, she said, or inaptly named dialogue boxes that abstrusely scold, warn, insist, or simply confuse. (To comic effect, Grosz put a series of these boxes on screen.)
Better to let the computer do what it’s good at — fast computation and search — while its human user, intuitive and synthesizing, does the same, she said. “Computers are good at searching; we’re good at writing.”
As an example of computer-human collaboration, Grosz cited Writer’s Aid, which lets a computer find, scan, and format bibliographic information, while its human operator concentrates on writing.
Writer’s Aid keeps the computer busy while “waiting for you to interrupt,” said Grosz. The system was described at a 2002 conference on intelligent user interfaces, and was developed with Stuart M. Shieber, the James O. Welch Jr. and Virginia B. Welch Professor of Computer Science at SEAS, and Tamara Babaian of Bentley University.
Grosz said the need to communicate often includes a need to interrupt — but knowing when to interrupt is “not a simple matter” and has occasioned a range of empirical studies on how both people and computers make decisions.
Some of the studies, with an underpinning of statistical formulations, are designed to measure the willingness of a computer user to accept an interruption. The higher the perceived value of an interruption, she said, the more willing a computer user is to accept it.
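The weighing of an interruption’s value against its cost can be thought of as a simple expected-utility rule. The sketch below is only an illustration of that general idea, not Grosz’s actual model; the function name, parameters, and numeric weights are all hypothetical.

```python
# Illustrative sketch (not Grosz's actual formulation): a bare-bones
# expected-utility rule for deciding whether to interrupt a user.
# All parameter names and weights here are hypothetical.

def should_interrupt(value_of_info: float,
                     prob_user_needs_it: float,
                     cost_of_disruption: float) -> bool:
    """Interrupt only when the expected benefit of the interruption
    outweighs its expected cost to the user's current task."""
    expected_benefit = value_of_info * prob_user_needs_it
    return expected_benefit > cost_of_disruption

# A low-value tip while the user is focused: stay quiet.
print(should_interrupt(value_of_info=1.0,
                       prob_user_needs_it=0.2,
                       cost_of_disruption=0.5))   # False

# An urgent, clearly relevant alert: worth interrupting.
print(should_interrupt(value_of_info=5.0,
                       prob_user_needs_it=0.9,
                       cost_of_disruption=0.5))   # True
```

In a richer version, the cost term itself would vary with context — a user deep in composition pays a higher disruption cost than one idly scrolling — which is exactly the kind of variable the behavioral studies try to measure.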
Then there is Colored Trails (CT), a gamelike way of investigating how people make decisions in concert with a computer. (Grosz designed it with Sarit Kraus of Bar-Ilan University in Israel.)
CT is “a family of games,” said Ya’akov “Kobi” Gal afterwards. (He’s a postdoctoral researcher who works with Grosz, and was the first to use CT in experiments that teamed humans and computers.) It provides an analog to the way decisions are made in the real world, and gives weight to the social and psychological factors at play.
Computer scientists are only now beginning to understand the importance of how people make decisions, he said — in part inspired by Grosz and her work in interruption management.
“We … can’t get computers to do everything,” said Gal — and if you try, “you get that Microsoft Clippy thing.”