The data-driven education dilemma

by David Safier

OK, this is a tough one. There are so many on-the-other-hands here, I'm running out of hands.

The Obama administration took aim on Thursday at state laws — adopted after heavy teachers’ union lobbying — barring the use of student achievement data to evaluate teacher performance.

The federal Department of Education proposed rules to prevent states with such laws from getting money from a $4.3 billion educational innovation fund.

Obama and Ed Sec Arne Duncan are big believers in collecting data on students and using it to evaluate student achievement and teacher performance.

On the one hand, we all want student achievement to improve, and we want to keep and reward our best teachers while we weed out the incompetents. On the other hand, increasing our emphasis on data collection may not be the best way to accomplish those ends and may make things worse, not better.

Here's how data-driven education works. You check each student's achievement as often as you can, trying to diagnose weak areas that need to be improved and looking for overall growth. The idea is to measure each child's progress for the purpose of accelerating that progress, and see how a class full of students progresses to assess each teacher's competence.

So if Child A is two years above grade level in reading in the 3rd grade, then slips to only one year above by the 5th grade, you look carefully to see if there's a problem with the child, the teachers or both. On the other hand, if Child B is two years below grade level in the 3rd grade, then jumps to one year above by the 5th grade, you break out the ice cream and champagne and throw a party.

If a class of students gains more than a grade level in a given year, that's probably a good sign that the teacher is competent. If the students' rate of growth slows, that's a warning sign that the teacher either needs help or needs to find another line of work.
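
For what it's worth, here's the arithmetic behind that kind of growth model at its crudest. This is just a toy sketch: the grade-equivalent scores are made up, and the one-grade-level cutoff is arbitrary; real evaluation systems use far more elaborate statistical models.

```python
# Toy illustration of the growth-model logic described above.
# Scores and the one-grade-level threshold are hypothetical.

# Grade-equivalent reading scores: student -> (start of year, end of year)
scores = {
    "Child A": (5.0, 5.6),   # started well above grade level, gained 0.6
    "Child B": (2.8, 4.1),   # started below grade level, gained 1.3
    "Child C": (3.9, 4.9),
}

# Per-student growth over the year
growth = {name: end - start for name, (start, end) in scores.items()}

# Class average growth, used as a crude proxy for teacher effectiveness
avg_growth = sum(growth.values()) / len(growth)

for name, g in growth.items():
    print(f"{name}: grew {g:+.1f} grade levels")

print(f"Class average growth: {avg_growth:.1f} grade levels")
print("Looks fine" if avg_growth >= 1.0 else "Warning sign for the teacher")
```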

All that sounds good. On the other hand . . .

The only somewhat objective way we have to measure student achievement is through standardized tests, like AIMS, NAEP, etc. And those are very, very crude measures. Real education is filled with intangibles, and, by definition, you can't test intangibles. In reading, for instance, you can more or less measure whether students can decode written language and comprehend its meaning in short passages, but that's about all you can measure about students' reading ability. Other factors that involve deeper, more personal understanding or the joy of reading, both of which create lifelong readers, can't be measured in any objective way.

And tests aren't even a reliable measure of the tangibles. As we all know — teachers more than anyone — you can effectively raise students' scores by teaching to the test. What you get are better testers, not better educated students.

And of course, you can't test art, because art is untestable by any objective measure. And it's difficult to test history and science, since they're more knowledge- and concept-based than skill-based. So we end up focusing more and more time on the rudiments of reading, writing and math to get those scores up, to the detriment of other subjects.

That sounds bad. On the other hand . . .

If a child never learns to read fluently, or write coherently, or perform basic math competently, most of the other things you want to teach that child aren't going to sink in very well. That person is most likely doomed to low-wage jobs or worse and will go through life with a very limited understanding of what goes on in the world, no matter how well rounded the school's curriculum is.

Much as it pains me to say it, if someone convinced me that education is a zero-sum game, and that there is a way to teach reading, writing and math successfully to the lowest-achieving students but only at the cost of slighting the rest of their education, I would say, "If that's the only way to do it, go ahead. Do it!"

So if a data-driven educational program succeeded at genuinely adding, say, two grade levels to the lowest-achieving students' scores — not just by improving testsmanship, but by genuinely improving skills — I would reluctantly applaud the program even if it slighted other areas of the curriculum.

On the other hand . . .

I have yet to see those magic bullets that go to the heart of a student's reading, writing and math skills and lift them to significantly greater heights. So far, what we've managed to do is drill the students more, test them more, then start prepping them for the next set of tests without seeing much in the way of results. All we've gotten for this data-driven, skill-based education is a more rigid educational system that slights those wonderful aspects of our education that can't be tested.

It may be that the best way to increase students' basic skills is to teach a fuller, richer curriculum in a way that encourages students to want to learn. Or there may be some magical, mysterious combination of skill-based and concept-based learning, where one feeds the other. However, I have yet to see that type of magic bullet either.

To sum up . . . On the one hand, I don't want to keep things as they are. That's unacceptable. But on the other hand, I don't want to replace the status quo with status worse.

Have I ever told you how tough education is, and how difficult it can be to figure out how to make things better?