Computers grading essays?


by David Safier

The NY Times has an article about essay grading software being given away free by EdX, a "nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology to offer courses on the Internet." I rebel at the thought. Computers are, as someone said to me long ago, lightning-fast idiots. In the time since I heard that line, the idiots have become far more sophisticated. They play terrific chess. They answer questions on my phone. They do a pretty good job on Jeopardy. But grading higher-level thinking and the quality of someone's prose? I'm very, very skeptical.

Here's what happens. A student writes an essay on a computer and submits it to the program. Faster than you can say "Have you graded my essay yet?" the results are back, with the essay scored and corrections made. A student can improve the essay on the spot and resubmit it for a better grade and more suggestions.

The problem is, the computer isn't actually "reading" the essay the way a human being does. People have submitted lofty-sounding gobbledygook to these programs and gotten high scores, because the software scans the essay for certain traits it uses for evaluation. It can't take in the essay's totality — its overall logic and subtlety of thought.

On the other hand . . . This kind of "artificial intelligence" is part of our future. Meanwhile, teachers are assigning less writing as their class loads increase. (If you grade essays for 160 students, giving yourself 5 minutes per essay, you'll be finished in 13 hours and 20 minutes! Been there, done that.) The question is, can this kind of computer grading be put to good use by good teachers? I think so.

Grading short answer questions on a test could be a very valuable use of the program. Too many teachers have gone to multiple choice tests because of their ease and efficiency. When I taught high school English, I almost always gave questions students had to answer in complete sentences on their tests. They were laborious to grade, but the students had to think; they could show an understanding of the topic even if they didn't have the exact right answer at their fingertips (and they could display a more sophisticated understanding if they wanted to write an extra sentence or two); and they had yet another classroom opportunity to learn how to express themselves in writing. I considered those tests covering literature to be an important part of my writing curriculum.

I think it could be valuable to have a computer program grade short answers on tests, since the teacher is looking for a few specific facts, phrases and details. Students should be allowed to bring the graded test to the teacher and challenge how the computer graded some of their answers. That would take care of the possibility that a student would be penalized for the computer's mistake.

If I were teaching students to write standard expository essays and I wanted them to follow the 5-paragraph formula so they learned the basic organization, I can see how a computer could evaluate practice essays. Giving students a chance to revise their papers on the spot could give them instant feedback on how to improve their overall organization. I would consider this to be a writing exercise. After the computer-based practice, I would assign an expository essay on a different subject that I would grade. That would mean an extra, valuable writing assignment without doubling my work load.

Another possibility would be to use the software with students who were struggling with grammar and syntax in their writing. This is almost always the case with students who are new to the English language, even if they're not classified as ELL students, as well as students with learning disabilities who have a difficult time putting together a decent sentence or paragraph. I would often hand papers back to those students with corrections written in over their mistakes — verb tenses, phrasing, spelling, punctuation — and have them rewrite the papers incorporating my corrections. It was one of the most valuable tools I had for helping these students — their writing would gradually, noticeably improve — but it was phenomenally time consuming. If a computer program could make those corrections and give the students a chance to see their incorrect choices, then rewrite them in the correct form, the teacher's grading time would be cut way back, and the assignment could be given on a more regular basis.

It's important for educators to be skeptical of these computer-based educational changes, because they turn out to be snake oil as often as they're genuine educational tools, but it's equally important not to be Luddites. The change is coming, often for better, sometimes for worse. There's no point in trying to stop the locomotive by standing on the tracks, but it makes sense to try to switch the train to another line that will take us to a more desirable destination.