
Research: Outlines Improve Learning from Videos

MIT researchers used crowdsourced outlines to help improve learning outcomes for users watching videos.

Researchers at the Massachusetts Institute of Technology and Harvard University are using crowdsourced conceptual outlines to help learners get more out of educational videos.

The outlines can work as navigation tools, so that "viewers already familiar with some of a video's content can skip ahead, while others can backtrack to review content they missed the first time around," according to a news release from MIT.

"That addresses one of the fundamental problems with videos," said Juho Kim, an MIT graduate student in electrical engineering and computer science and one of the paper's co-authors, in a prepared statement. "It's really hard to find the exact spots that you want to watch. You end up scrubbing on the timeline carefully and looking at thumbnails. And with educational videos, especially, it's really hard, because it's not that visually dynamic. So we thought that having this semantic information about the video really helps."

Though previous studies had demonstrated that instructions accompanying videos improved viewer learning, the group decided to conduct a study of its own.

They used Photoshop video tutorials and created their own outlines. Some study participants were given the outlines before viewing the videos; others simply watched the videos. Both groups were then asked to complete tasks that put their new skills to use. Afterwards, the subjects who had access to the outlines said they felt more confident in their work and more satisfied with the videos than those who didn't receive the outlines, and Photoshop experts evaluated their work more favorably.

"Last year, at the Association for Computing Machinery's (ACM) Conference on Human Factors in Computing Systems, the researchers presented a system for distributing the video-annotation task among paid workers recruited through Amazon's Mechanical Turk crowdsourcing service," according to information released by the school. "Their clever allocation and proofreading scheme got the cost of high-quality video annotation down to $1 a minute."

The Mechanical Turk solution worked for basic step-by-step instructions, but the MIT team knew from research by Georgia Tech's Richard Catrambone that outlines with subgoal labeling worked better for learners.

"Subgoal labeling is an educational theory that says that people think in terms of hierarchical solution structures," Kim said, in a news release. "Say there are 20 different steps to make a cake, such as adding sugar, salt, baking soda, egg, butter and things like that. This could be just a random series of steps, if you're a novice. But what if the instruction instead said, 'First, deal with all the dry ingredients,' and then it talked about the specific steps. Then it moved onto the wet ingredients and talked about eggs and butter and milk. That way, your mental model of the solution is much better organized."

"We did a bunch of experiments showing that subgoal-labeled videos really dramatically improve learning and retention, and even transfer to new tasks for people studying computer science," says Mark Guzdial, a professor of interactive computing at Georgia Tech who has worked with Catrambone, in a prepared statement. "Immediately afterward, we asked people to attempt another problem, and we found that the people who got the subgoal labels attempted more steps and got them right more often, and they also took less time. And then a week later, we had them come back. When we asked them to try a new problem that they'd never seen before, 50 percent of the subgoal people did it correctly, and less than 10 percent of the people who didn't get subgoals did that correctly."

To create outlines with subgoal labels at virtually no cost, the researchers again turned to crowdsourcing. On their site, Crowdy, the team began by showing participants YouTube videos about programming languages, paused at a random point, and asking them to summarize one minute of instruction. Once enough summaries were collected, other participants were asked to watch the same minute and choose from among three descriptions of it.
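The release doesn't spell out Crowdy's exact consensus rule; a plausible minimal version is a majority vote over the candidate descriptions, sketched here with hypothetical data and an assumed agreement threshold:

```python
from collections import Counter

# Hypothetical sketch: each later participant picks one of the
# candidate summaries for the same one-minute segment.
votes = ["declaring variables", "declaring variables", "using loops",
         "declaring variables", "using loops"]

label, count = Counter(votes).most_common(1)[0]
if count / len(votes) >= 0.6:  # assumed consensus threshold
    print(f"Consensus label: {label}")
else:
    print("No consensus yet; collect more votes.")
```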

"Once a consensus emerges, Crowdy identifies successive minutes of video with similar characterizations and merges their labels," according to a news release. "Finally, another group of viewers is asked whether the resulting labels are accurate and, if not, to provide alternatives."

A paper on the team's findings will be presented at ACM's Conference on Computer-Supported Cooperative Work and Social Computing this March.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
