Can online learning make teaching more human?


Data-driven pedagogy. The phrase conjures a robotic, dull future that only intensifies the worst aspects of 20th-century, bureaucratic, industrial wasteland-style schooling, where learners are defined down to “users,” or even metonymized as disembodied “eyeballs,” and force-fed bits of disconnected information.

For a counternarrative, the question is simple. What can creative humans do with the power of data? One possible answer is that computer-powered analytics could expand humans’ ability to focus on the most human aspects of teaching and learning.

I reported earlier this year on a small experiment the video website Khan Academy ran to this end.

While browsing the website, some Khan users saw a simple slogan added to the page next to, say, a math problem: “The more you learn today, the smarter you’ll be tomorrow.” The line linked to a further explanation of the concept of “mindset,” the well-known body of research by Stanford psychologist Carol Dweck on growth, achievement, and motivation.

Displaying that one line led to a 5% increase in problems attempted, proficiencies earned, and return visits to the site, compared to otherwise similar learners who did not see the line.
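As a rough illustration of how such an experiment is evaluated (with made-up numbers, not Khan Academy’s actual data), the lift from showing a message to one group of users can be checked with a standard two-proportion z-test:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two proportions (pooled variance)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: 40% of 50,000 control users return to the site,
# vs. 42% of 50,000 users who saw the mindset message.
z = two_proportion_z(20_000, 50_000, 21_000, 50_000)
print(round(z, 2))  # well above 1.96, so significant at the 5% level
```

With samples this large, even a modest absolute difference produces an unambiguous statistic; with a classroom-sized sample, the same difference would be indistinguishable from noise.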

This week, Andrew Liu, Udacity’s data science intern, blogged about his own research with the data generated by that MOOC platform. The questions they are framing apparently run along similar lines, toward the psychological aspects of motivation and engagement.

[Figure: Modeling student engagement over time.]

“At Udacity, we now have the opportunity to take findings that originated from studies on tens of students in physical classrooms – such as Carol Dweck’s concept of growth mindset – and apply learnings to hundreds of thousands of students with improved teaching. But even more powerful is Udacity’s ability to conduct our own pedagogical research at scale on a rapidly growing worldwide classroom that was not even possible a year ago. Pedagogical areas we’re exploring include the importance of metacognition, expectation setting around formative assessment, and even new online challenges such as which characteristics of video keep students most engaged.” 

Mindset, metacognition (learning about learning), engagement: these are great research questions for educators to be looking at. They are not chiefly about automating the consumption and digestion of information, but about deepening the learner’s physical and emotional relationship with the process of learning.

It’s in part simply the growth of sample sizes that has some researchers so excited about what they might learn in the emerging field of data-driven pedagogy. I haven’t verified this independently, but I have often heard researchers repeat the claim that very few large-scale randomized controlled trials compare the efficacy of different classroom techniques and methodologies. Sample sizes tend to be quite small, and experimental effects are hard to compare. (If there are counterexamples, I’d love to hear them.)

A major example is the efficacy of online and blended learning itself. According to a comprehensive literature review published by Ithaka S+R earlier this year, of the more than 1,000 online and blended learning studies reviewed by the US Department of Education, only 45 met the minimal criteria of having an experimental research design and considering objective learning outcomes. Of those 45 studies, “most have sample sizes of a few dozen learners; only five include more than 400 learners.”

The kind of A/B trials that Khan Academy and Udacity are running, by contrast, can easily enroll hundreds of thousands of people.
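A back-of-the-envelope power calculation shows why the scale gap matters. Using the standard approximation for the sample size needed per arm to detect a shift between two proportions (the numbers below are illustrative assumptions, not figures from the studies cited above):

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect a shift from p1 to p2
    with ~80% power at a two-sided alpha of 0.05."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 5% relative lift on a hypothetical 40% baseline (0.40 -> 0.42)
print(n_per_group(0.40, 0.42))  # roughly 9,500 learners per arm
```

By this rough estimate, reliably detecting an effect of that size takes thousands of learners per condition, far beyond the few-dozen samples typical of the classroom studies in the literature, and trivially within reach of a MOOC platform.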

Obviously, some relationships and aspects of the human dimension of learning can’t be addressed with even the best data tracking and experimental design, or the largest sample sizes. There is an ever-present danger that the metrics chosen will distort the nature of the undertaking itself. Still, I can’t help but be a little optimistic that at least data scientists are starting with the right kinds of questions.