Thursday, May 26, 2005

Value-Added Assessment is the Only Way To Go

There are a lot of politically motivated claims and counterclaims made about whether or not charter schools are helping kids achieve. In recent months charters have been under attack by the NEA, which commissioned a study that used NAEP data to "show" that students in charters aren't doing as well as students in traditional public schools.

I wrote a response to that study, published in the Oregonian, pointing out that it is meaningless to measure snapshot academic levels of students in charters. What matters is academic GAINS.

That is, assume charter schools disproportionately draw students who are behind academically. (Which they do.) Why would we be surprised, then, if we give them a test and find they tend to be behind the other students? DUH!

Of course this doesn't prevent groups with a dog in the fight from commissioning studies that use this data to "prove" charters are failing. But it's just advocacy research. The NEA isn't interested in actually answering the question "How are charters performing relative to the traditional schools?" They want to create the impression that charters are not working for the students in them.

The question we should be asking is how much each student gains academically from any given year spent in a charter school (or any school). Let's say a fifth grader enters a charter school in the poorest part of northeast Portland, reading at a second grade level. After one year, his reading ability grows to a fourth grade level, and at that point he takes the Oregon reading test and, of course, fails to "meet standard."

Has the school succeeded with that child? Not by Oregon's measure. He is below standard, even though his reading ability grew two academic years in a single year. It's a false negative.

On the other hand, take a child in Riverdale School District, an affluent enclave near Lake Oswego. She enters the fifth grade reading at a seventh grade level, and by the end of the year is still reading at the same level.

She takes the Oregon test, and meets the standard. Has that school succeeded with her? Yes, according to Oregon's assessment system. The school gets to take bows for their success. It's a false positive.

The problem is, Oregon's testing system is not geared to reveal academic gains, and until it is, it will continue to be of little value for the purpose for which it was designed: school accountability. We need an assessment system designed to reveal "Academic Value Added."
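The two students above make the contrast concrete. As a minimal illustrative sketch (the 5.0 "standard" and the grade-level numbers are assumptions drawn from the hypothetical examples, not Oregon's actual scoring scale):

```python
# Contrast a snapshot "meets standard" check with a gain-based
# (value-added) measure, using the two hypothetical students above.

STANDARD = 5.0  # assumed fifth-grade reading standard (illustrative only)

def meets_standard(end_level):
    """Snapshot check: is the student at or above standard right now?"""
    return end_level >= STANDARD

def value_added(start_level, end_level):
    """Gain-based check: how many grade levels did the student grow?"""
    return end_level - start_level

# Portland student: enters reading at a 2nd-grade level, exits at 4th.
print(meets_standard(4.0))      # False -- labeled failing despite huge growth
print(value_added(2.0, 4.0))    # 2.0 grade levels gained in one year

# Riverdale student: enters at a 7th-grade level, exits unchanged.
print(meets_standard(7.0))      # True -- labeled succeeding despite no growth
print(value_added(7.0, 7.0))    # 0.0 grade levels gained
```

The snapshot measure rewards where a student starts; the gain measure rewards what the school actually added during the year.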

The fact that socioeconomic status is highly correlated to academic performance is perhaps the least contested research result in public education. Why would we have an assessment system, then, that fails to account for it? Without a "value added" measure, Oregon will forever be tagging schools in poorer neighborhoods as failures and schools in affluent neighborhoods as successes, even if the academic gains are precisely the opposite.

Guess who opposes changing Oregon's assessment system so it can measure gains? The teachers union. Why would they oppose it? For starters, if we could measure the academic gains of each student, we could also track the data back to the teacher. We would know which teachers were effective at raising students' academic levels and which were not - something the unions cannot abide.

Tennessee has had such a system in place for several years. Oregon could have one, too, if the education establishment weren't so threatened by it.

My bill to replace Oregon's testing system, HB3162, would enable the state to measure annual academic gains of every student. We would know which teachers were effective and which were not. We would finally be able to separate the effects of socioeconomic status from our measure of school accountability, which has been unfairly categorizing some schools as failures (and others as successes) for years. You'd finally have an honest answer to the question: "How is my kid's school doing?"

The bill has powerful opposition: The Oregon Department of Education, the Oregon Education Association, the Oregon Business Council, the Oregon Business Association, and the Associated Oregon Industries.

I understand why the ODE and the OEA can't abide accurate measures of academic gains, but why would the supposed business advocates oppose it? Hard to say, other than the fact that they have been cheerleading the existing system of deeply flawed assessments for so long that they would lose face if they admitted its failure.

I think they have fallen victim to a medical condition that unfortunately has run rampant through Oregon's political establishment:

Cranial Rectumitis.

6 comments:

Mr.Atos said...

The NEA and Teacher's Union, like other Leftwing advocacy groups, are not interested in the facts, just the appearance that can be projected through the media by the manipulation of those facts. It's the nature of agitprop. Interesting to note, perhaps, that when advocates for public education use deception as a means of information, it thoroughly undermines the legitimacy of their position.

Your research, post and legislative actions are the best offense against their propaganda. The truth may take a long time to build momentum, but as long as we keep pushing, the inertia becomes increasingly formidable.

Sisyphus just might gain the top of that ledge with his boulder.

Keep pushing!

Anonymous said...

Michael:

Give up on the teacher exams. The whole licensing system is designed to keep the system closed off from people who haven't been properly "programmed" at schools of education. It protects the teachers from us, not us from them.

Anonymous said...

Anon, I am aware of the purpose of the "licensing" - after all, licensing was a key part of the "Black Codes." I am also a firm believer in the separation of state and education. However, in the meantime we have to deal with the system we have, and it would be nice to know where it is failing. If licensing is one of those cases then it needs to be exposed.
M.W.

Anonymous said...

Franklin School in Corvallis has been using simple value-added analysis for several years to evaluate performance in grades 3-8. The analysis is also broken down into 3 groups, reflecting the above average, average and below average student groups. The teachers really appreciate the feedback, because it helps them see how well they are reaching these three groups. For example, if they have large value-added scores for the high end students, so-so for the middle students, and little or no value-added for the low end students, it suggests that they are teaching to the high end students and the rest aren't getting it. So they need to back it off and make sure they're bringing along the lower end of the class.

Good teachers appreciate this kind of feedback, if they believe the tests accurately measure what students should be learning at each grade. There can be some honest disagreement about that, but if the state has determined what students should know at each grade level, that is the minimum that they should be teaching.

The big opposition to value-added, of course, will come from poor teachers or lazy teachers, because it exposes them for what they are -- lazy or incompetent. This is precisely why value-added is needed, to let parents and administrators identify who does a good job and who doesn't.

Greg Perry
Corvallis

Anonymous said...

Hiya, Rob! (Yep, I found your blog.)

As a researcher with particular expertise in assessment and evaluation, I think that "value-added assessment" is neither all good nor all bad. It's a tool, one of many in the bag. Sorry, folks, it's not a cure-all.

I am concerned that the obsession with testing - putting all of our evaluation of teacher effectiveness into one broad, general, annual measurement - is foolish. I could *never* get hired to evaluate a program if I proposed one measure each year and then used that single measure (usually with lower reliability than we'd all prefer) over and over and over again - compounding reliability problems.

Well, I don't like long posts, so I'll stop. I won't go into the boring technical problems with so-called "value-added" statistical procedures. I mostly wanted to just stop in and say hi!

Rob Kremer said...

Kathy:
Great to hear from you!

But you pose a bit of a straw man here. Nobody said anything about Value-Added being a cure all, nor that all teacher evaluation would be a single annual measurement.

The question is: given we must have an assessment system, what is the most useful design?

Oregon's system fails on virtually every measure of an effective system, as I have painstakingly chronicled in my post on HB 3162.

Yes, there are statistical issues with every value-added model out there. But the perfect should not be the enemy of the good, especially when the choice is between good and the awful we have now.

Rob