February 7th, 2012

The Sorry State of Standardized Writing

A couple of items from the world of writing and assessment have been niggling at me of late.

First, news that the Hewlett Foundation is sponsoring a $100,000 competition to create automated essay scoring software that, in theory at least, will do as good a job of assessing student writing on standardized tests as the current human graders do, or a better one. I get the reasoning behind this. Current “scoring mills” have turned test essay readers into skimmers, and it’s well documented that the more kids write, regardless of quality, the better they score. And as currently structured, these assessments do nothing to improve student writing. As always, it’s a time and money issue, but my initial reaction is this: if we value writing enough to make sure every child can do it reasonably well, we should also value the time and effort it takes to evaluate it reasonably well.

I remember the long, long hours of reading and responding to tens of thousands of essays during my English teacher days. The turnaround wasn’t always fast, and the notes and marks and narrative comments on the page went largely unread, even though I didn’t put a grade on most pieces. I spent as much time as I could holding conversations with kids about their writing, both one-on-one and in small peer groups. The best assessment and advice came out of that analysis and feedback, where we had a chance to reflect on the writing and experiment with it. I know that many of my students learned to really appreciate failure in that process because it was a safe place to try things, to push their practice without any stakes, high or low. I know at the end of the day that every child, including my own, should reach some level of expression that allows them to communicate ideas clearly and compellingly, but I also know that the paths to that place are varied and filled with stops and starts. It’s a highly complex process, much more than putting commas in the right places and simply varying sentence structure (though neither of those hurts).

Having said that, it’s scary to see what passes for acceptable writing on the state tests. Yesterday, Michael Winerip’s piece in the New York Times showed examples from the state scoring guide for the high school English Regents exam of writing that should be scored a 1 on a 2-point scale:

These two Charater have very different mind Sets because they are creative in away that no one would imagen just put clay together and using leaves to create art.

I’m wondering why that would even get one point. Are we really satisfied that this student is sufficiently ready to communicate in writing to the larger world? I get the tension here, too:

If the standard is set too high, so many will fail — including children with special education needs and students for whom English is a second language — that there will be a public outcry.

But if the standard is set too low, the result is a diploma that has little meaning.

But will machine scored essays fix this? The Hewlett Foundation seems to think so:

“Better tests support better learning,” says Barbara Chow, Education Program Director at the Hewlett Foundation. “Rapid and accurate automated essay scoring will encourage states to include more writing in their state assessments. And the more we can use essays to assess what students have learned, the greater the likelihood they’ll master important academic content, critical thinking, and effective communication.”

Look, I’m on board with the idea that kids should write more, and that finding out what a student has learned by having her write is better than having her fill in bubbles for questions she could answer with her phone if we let her. But here’s the problem: this is more about money than it is about serving kids well. Let’s be honest: while human grading may be less consistent and more complex, and while it may take more time and money to get right, human graders have a distinct advantage over machines when it comes to writing: emotion. I love the way the Conference on College Composition and Communication (CCCC) puts it:

Automated assessment programs do not respond as human readers. While they may promise consistency, they distort the very nature of writing as a complex and context-rich interaction between people. They simplify writing in ways that can mislead writers to focus more on structure and grammar than on what they are saying by using a given structure and style.

Here’s what I know will happen once we move to the machines: we’ll help kids learn how to write what the machines want instead of focusing on the power and beauty and uniqueness of human communication. I can name a slew of brilliant writers who would probably fail the test because they wrote in a unique, compelling style that went far beyond our traditional thinking about “good writing.” Sure, in the name of efficiency we can choose to set the bar low and reward kids for putting together a sentence that’s barely readable, as long as it conveys a simple thought. But why wouldn’t we choose something better?

In the end, I’m getting tired of “efficiencies” when it comes to education. But that’s a larger discussion of priorities that really needs to be left to another day…

education · writing · schools · assessment