You know those essay questions on tests like the SATs or GREs? Turns out the ideal reader/scorer is a computer:
“Turns out, though, that standardized test essays are so formulaic that test-scoring companies can use algorithms to grade them. And before you get worried about machines giving you a bad score because they’ve never taken an English class, said algorithms give the essays the same scores as human graders do, according to a large study that compared nine such programs with human readers. The team used more than 20,000 essays on eight prompts, and as you can see in the figure to the right, the mean scores found by the programs and the people were so close that they appear as one line on a chart of the results.”
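For a sense of what “so close they appear as one line” means in practice, here’s a toy sketch in Python with completely made-up scores (nothing from the study itself): if a simulated “machine” agrees with a human rater most of the time, the two mean scores end up nearly indistinguishable, which is exactly what that chart is plotting.

```python
# Toy illustration only -- synthetic scores, not the study's data or method.
import random
from statistics import mean

random.seed(0)

# Hypothetical 1-6 scores for 200 essays from a human rater.
human_scores = [random.randint(1, 6) for _ in range(200)]

# Pretend the machine matches the human most of the time and is off by
# one point otherwise, just to mimic a high level of agreement.
machine_scores = [
    s if random.random() < 0.8 else max(1, min(6, s + random.choice([-1, 1])))
    for s in human_scores
]

print(f"human mean:   {mean(human_scores):.2f}")
print(f"machine mean: {mean(machine_scores):.2f}")
# With agreement that high, the two means land almost on top of each other,
# which is why the two lines on the chart look like one.
```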
Says a lot about how we evaluate students’ writing ability, doesn’t it? Ugh.
Wow… that’s kind of amazing… just when you think you’ve seen it all.
What’s next, then… I wonder 🙂