If paying students can affect their test scores . . .

by David Safier

Standardized tests are fine, so long as the results are kept in perspective. But when they're made the gold standard for assessing student achievement, things get weird, and measurements of achievement become badly skewed.

Here's one of the many problems with high-stakes tests.

If motivated test takers do better than unmotivated test takers — which, of course, they do — then tests measure motivation as well as achievement. And that's a problem, because if part of the score can be accounted for by how much students want to succeed, the results aren't reliable measures of achievement.

Here's the latest evidence.

A study was conducted to determine whether high school seniors would do better on the NAEP (National Assessment of Educational Progress) if they were paid.

The answer is Yes, the study concluded.

Some students were simply given $20 at the beginning of the test. Others were given $5 in advance and told that two questions would be chosen at random and that they would get $15 for each one they answered correctly. Students in the control group were given no financial incentive.

The students who were given $20 up front did better than those in the control group who got nothing. The ones told they could make money by getting randomly chosen questions right did better still.

The lowest-scoring subgroups showed the most marked improvement with the random-question incentive.

What do we learn from all this? Whether it's done with money or prizes or coaching or cheerleading, anything that can get students to want to do well on a standardized test will increase their scores.

And the lower the students' scores, the more the incentives give them a boost.

The lesson for schools and teachers is: (a) teach to the test, and (b) persuade students, especially the lowest achievers, that it matters that they do their best.

Unfortunately, none of those score-boosting lessons have any direct connection with boosting student achievement.