How does MY Access!® Score Writing?
Just yesterday, I found myself in a business meeting with clients from one of the nation’s largest textbook publishers. After a day-long meeting, during which we discussed working together to build a powerful, new writing application, my colleague stood and said, “Imagine that. Not once during this meeting did anyone question whether a computer could score writing. Look how far we have come in ten short years.”
Indeed. We’ve known for some time that computers could score human writing, but it has taken a while for everyone else to catch up. And still, not everyone is there yet. My mom continues to ask me at least once a month, “What do you do at your job? I told so-and-so that you do something with computers.”
We appreciate that you, as a MY Access!® user, have placed your trust in the ability of our software to fairly score your children’s writing and provide timely, accurate feedback. But I still receive the occasional survey in my email inbox with the following comment: “I disagree with how my writing was scored.” So, I thought this would be a good time to explain exactly how MY Access!® is able to score your essays and stories.
Meet IntelliMetric®
Let’s start, as they say, at the beginning. The tool that scores your writing is called IntelliMetric®. We have spent many years perfecting it, and it is best described as a highly sophisticated set of natural language processing tools. No doubt you’ll hear more about this technology in the future.
But the story of how it works begins in a much less exciting manner and depends on two important factors: it begins the old-fashioned way, with expert humans reading essays and with a writing rubric they use to score them.
One of the most common concerns we hear about scoring writing is that it is just so subjective. How could one possibly judge another person’s writing and assign a score? It’s true; scoring writing can be a very subjective undertaking. But, if we are going to teach our children how to write, and more importantly, if they are going to learn, we need to find some common ground upon which to begin building a structure.
Writing rubrics provide this common structure by helping humans decide on the appropriate score for an essay or story. Rubrics describe each of the features that will be evaluated as well as each of the score points. With this in hand, it should be clear to a human scoring an essay what a “three” on a six-point scale looks like. If the human is a consistent scorer, he or she will award a “three” to every essay or story that deserves it.
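To make this concrete, a rubric is essentially a grid: one axis lists the features being evaluated, the other lists the score points, and each cell holds descriptive language. Here is a minimal, hypothetical sketch of that structure in Python; the feature names and descriptors are invented for illustration and are not drawn from any actual MY Access!® rubric.

```python
# A hypothetical, heavily simplified six-point rubric represented as data.
# Feature names and descriptor text are illustrative assumptions only.
RUBRIC = {
    "focus": {
        6: "Maintains a clear, consistent focus throughout.",
        3: "Focus is apparent but drifts in places.",
        1: "No discernible focus.",
    },
    "organization": {
        6: "Logical structure with smooth transitions.",
        3: "Some structure; transitions are uneven.",
        1: "Little or no organization.",
    },
}

def describe_score(feature, score):
    """Look up the rubric language for a feature at a given score point."""
    return RUBRIC[feature].get(score, "(no descriptor at this score point)")
```

A consistent human scorer is, in effect, applying this grid the same way to every essay, which is what makes the scores usable as training material later.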
In the next step of the process, we gather hundreds of essays scored by humans using a specific writing rubric and create a set of “instructions” for IntelliMetric®. It examines all the essays that were scored “three,” for example, and begins to “learn” what it means for an essay to be scored “three.” It does this by examining hundreds of features in each essay, from the concepts the writer used all the way down to the words he or she used.
The outcome of this process is what we call a “scoring model.” It enables IntelliMetric® to apply scores consistently, time after time, to a specific writing task using what it learned from the original human scorers.
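The two steps above, reducing each essay to features and then learning what essays at each score point have in common, can be sketched in miniature. This is a toy illustration under stated assumptions, not IntelliMetric®’s actual method: it uses just three crude features and a simple nearest-centroid rule, whereas the real system examines hundreds of features.

```python
# Toy sketch of training a scoring model from human-scored essays.
# The features and the nearest-centroid approach are illustrative
# assumptions, not a description of IntelliMetric®'s internals.

def extract_features(essay):
    """Reduce an essay to a few crude numeric features."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return [
        len(words),                            # essay length
        len(set(w.lower() for w in words)),    # vocabulary size
        len(words) / max(len(sentences), 1),   # average sentence length
    ]

def train_scoring_model(scored_essays):
    """For each human-awarded score, average the feature vectors of
    the essays that received it (the 'what a three looks like' step)."""
    centroids = {}
    for score, essays in scored_essays.items():
        vectors = [extract_features(e) for e in essays]
        centroids[score] = [sum(col) / len(col) for col in zip(*vectors)]
    return centroids

def predict_score(model, essay):
    """Score a new essay by finding the closest learned centroid."""
    v = extract_features(essay)
    def dist(score):
        return sum((a - b) ** 2 for a, b in zip(v, model[score]))
    return min(model, key=dist)
```

Once trained, the model applies the same rule to every new essay, which is where the consistency described above comes from: the same features, compared against the same learned profiles, every time.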
So What?
Well, I described this process because I thought you might be interested in learning a little about how IntelliMetric® works. But there is a more important consideration.
“I don’t agree with this score!” Since IntelliMetric® is designed to be true to the writing rubric that was used during its training, any question about scoring, indeed every question about scoring, should return to the writing rubric.