Beyond-standardized testing: Can emotional intelligence be tested?

image: flickr user Franklin B Thompson

This post draws on the ongoing research I’m doing for my next book, The Test, about the past, present and future of assessment in public schools.

This past week the New York Times Magazine published Jennifer Kahn’s big feature story on emotional intelligence. Like much recent coverage of education, it argued for the importance of a certain set of skills: noncognitive skills.

So-called noncognitive skills — attributes like self-restraint, persistence and self-awareness — might actually be better predictors of a person’s life trajectory than standard academic measures. A 2011 study using data collected on 17,000 British infants followed over 50 years found that a child’s level of mental well-being correlated strongly with future success. Similar studies have found that kids who develop these skills are not only more likely to do well at work but also to have longer marriages and to suffer less from depression and anxiety. Some evidence even shows that they will be physically healthier.

There’s evidence these skills can be learned, and they are starting to be taught. But there’s a big problem. Our current high-stakes testing regime, which is becoming the dominant standard by which schools and teachers are judged, funded, and allowed to keep doing what they’re doing, leaves very little room for anything other than math facts and reading facts. That’s it.

Anything that is not measured is not managed, as the business axiom goes. So no matter how compelling the research and evidence for “non-cognitive skills” or “emotional intelligence,” or anything similar, our school system will never orient itself around these priorities unless we measure them.

But how? Multiple-choice tests have been around for about 100 years. We are pretty confident in their ability to assess students’ knowledge of individual facts, and they are cheap to administer. But assessing a student’s emotional intelligence seems like something that can only be done qualitatively, one on one. That is far too messy and expensive to be adopted nationwide. Kahn alludes to the idea, which you typically hear, that emotional intelligence may boost performance on conventional standardized tests, but that’s not enough to ensure that emotional intelligence will be emphasized or taught, particularly to the most disadvantaged students at the most under-resourced schools, who are more likely than others to be subjected to intensive test prep.

Some researchers, however, are coming at this problem a different way. They are building computer-based tools like games and simulations to measure the kinds of things we actually think are important for kids to know. Because they are computer-based, once they are designed and built they are just as cheap to administer as a standard multiple-choice test. One researcher I spoke with recently is Dr. Dan Schwartz. He is director of the awesomely named AAALab, an acronym for Awesomely Adaptive Advanced Learning and Behavior, at Stanford University.

Schwartz spent a few decades in teaching and holds a PhD in Human Learning and Cognition from Columbia. I mention this because it’s actually unusual for people who design assessments to have either field experience in teaching, or knowledge of the ways humans learn and think. It may seem amazing, but I’ve learned that historically the field of psychometrics, which includes test design, is more or less completely separate from the field of learning or the profession of education. It’s almost as though medicine were divided into diagnosticians and treatment specialists who went to different schools and spoke different technical languages.

In any case, Schwartz is working on something called Choice-based Assessments. It’s a simple but revolutionary idea: don’t test kids’ knowledge, test how they approach the process of learning. “In our assessments we make little fun games, and to do well at the games you need to learn something. So they’re not just measures of what the student already knows, but attempts to measure how well they are prepared to continue learning when they’re no longer told exactly what to do.” That is, after formal schooling ends and lifelong learning continues. Or as they say on their website, these “interactive assessments can evaluate students in a context of choosing whether, what, how, and when to learn.”

One of the assessments currently under development is called Posterlets. It’s designed for middle school students. Kids log on and their task is to make posters for a fun fair. They work in a little design program, drawing the graphics, putting in the text, and so on. Then they choose to get feedback on their design from a group of animal characters. In each instance, they can choose whether they want to hear positive or negative feedback. (This evaluation is done automatically by the computer program, which is “smart” enough to tell whether the words are spelled right, whether the colors clash, or whether the font sizes are too small.) After that, the student has the option to change her design to incorporate the feedback. Then she gets to see how many fun fair tickets were sold in response to her design.
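To make those mechanics concrete, here is a minimal sketch in Python of the kind of interaction log a Posterlets-style assessment might keep. The names and structure are my own illustration, not the AAALab’s actual implementation; the point is simply that every choice the student makes, which feedback to hear and whether to revise, becomes data.

```python
# A hypothetical sketch of the data a Posterlets-style assessment might log.
# Names and structure are illustrative only, not the AAALab's actual code.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FeedbackRound:
    poster_id: int
    chose_negative: bool   # did the student ask to hear critical feedback?
    revised_design: bool   # did she change the poster afterward?
    tickets_sold: int      # the outcome shown to the student


def summarize(rounds: List[FeedbackRound]) -> Dict[str, float]:
    """Aggregate the choices the assessment cares about: how often the
    student sought negative feedback, and how often feedback led to a revision."""
    total = len(rounds)
    negative = sum(r.chose_negative for r in rounds)
    revised = sum(r.revised_design for r in rounds)
    return {
        "negative_feedback_rate": negative / total if total else 0.0,
        "revision_rate": revised / total if total else 0.0,
        "tickets_total": float(sum(r.tickets_sold for r in rounds)),
    }


# Example: a student who asked for criticism twice and revised both times.
log = [
    FeedbackRound(1, chose_negative=True, revised_design=True, tickets_sold=40),
    FeedbackRound(2, chose_negative=False, revised_design=False, tickets_sold=25),
    FeedbackRound(3, chose_negative=True, revised_design=True, tickets_sold=60),
]
print(summarize(log))
```

A log like this is what makes the measurement Schwartz describes below possible: how often a student opts into negative feedback falls straight out of the recorded choices.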

A couple of things strike me about Posterlets. First of all, it sounds more like a game than a test. It actually sounds fun. It also sounds far more like what you might actually do in a workplace or creative context than anything that usually happens on test days: coming up with a piece of creative work, seeking internal feedback, and reworking it. It’s not as separate from the curriculum as a traditional test.

While it incorporates some material that you might find in a basic graphic design course, the content of Posterlets is secondary to what the simulation is really trying to get at: how does the student approach the learning process? In particular, says Schwartz, it looks at negative feedback. “The more negative feedback you chose, the better your poster gets.” Negative feedback, though it may be harder to hear, gives you more to go on. And a student who has the right level of resilience, motivation and persistence is more likely to be able to choose to hear the tough stuff on the way to getting great.

Posterlets is being tested as a means of evaluating design-based curricula. There are K-12 schools where students are taught to think like designers, a process by which they gather evidence, create novel solutions to problems, and test prototypes. “Good design based curricula emphasizes seeking feedback,” Schwartz explains. So a school that does a better job teaching that kind of skill should have students who do better at the Posterlets task.

To learn more about Schwartz’s research, I recommend this paper. In future posts I’ll talk about assessing students in informal learning contexts such as Makerspaces, and the concept of “stealth assessment.”


POSTED BY Anya Kamenetz ON September 17, 2013

Comments & Trackbacks (1)

Tim McClung

You wrote: “I mention this because it’s actually unusual for people who design assessments to have either field experience in teaching, or knowledge of the ways humans learn and think. It may seem amazing, but I’ve learned that historically the field of psychometrics, which includes test design, is more or less completely separate from the field of learning or the profession of education.”

Doesn’t that say it all?
