Value Added Testing

Value Added Testing (VAT) is a method of teacher assessment where educators are compared on how much "value" they "add," as measured by "tests" (hence the name).

Instead of simply giving students tests and comparing the average scores of each teacher's students to find the best teachers, VAT looks at how students perform before, during, and after their time under a teacher. Good teachers are expected to make students do better than they otherwise would have. Bad teachers are expected to make students do worse than they otherwise would have.
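The core comparison can be sketched in a few lines of code. This is a toy illustration, not an actual VAT model: the scores, the one-grade-level-per-year growth rule, and the class data are all invented for the example.

```python
# A minimal sketch of the value-added idea: compare each student's
# actual "after" score against a prediction of what it would have
# been anyway. The growth rule and all numbers below are made up.

def predicted_score(before):
    """Naive growth model: a typical student gains 1.0 grade level per year."""
    return before + 1.0

def value_added(students):
    """Average of (actual score) minus (predicted score) across students."""
    diffs = [after - predicted_score(before) for before, after in students]
    return sum(diffs) / len(diffs)

# (before, after) scores in grade-level units
class_a = [(3.0, 4.5), (4.0, 5.2), (5.0, 6.3)]   # students beat expectations
class_b = [(3.0, 3.6), (4.0, 4.7), (5.0, 5.5)]   # students fall short

print(value_added(class_a))  # positive: this teacher "added value"
print(value_added(class_b))  # negative: students grew less than expected
```

Note that both teachers' students improved in absolute terms; only the comparison against expected growth separates them, which is the whole point of the method.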

It’s easy to see how VAT can be fairer than traditional approaches to interpreting test scores, as far as evaluating teachers goes. Under old methods of evaluating teachers, the system could be easily rigged by giving favored teachers better students. Those better students would of course do better regardless of the teacher, because a teacher’s influence is limited. Under VAT, however, such a favored teacher’s effect of holding back students’ future performance should show up.

VAT has a major flaw, though, one that is ignored by anti-testing demagogues but rarely addressed by supporters. The flaw is not with VAT itself, but with how the tests are written.

VAT comes from econometrics, the statistical study of the economy, where there is a common currency (such as the dollar, the euro, or the yen). A CEO may be evaluated through a VAT by the board of directors: sure, the value of our company went up 26% under you, but if we look at what we should have expected, it should have gone up 30%! VAT is an interesting way of forcing company executives to think long term: if you keep paying executives after you fire them based on a VAT, you can reward executives who were unfairly fired, and punish those who were unfairly kept on, without giving up the ability to rapidly hire and fire.

But it is very rare for “common” tests to be used across grade levels. Tests for third graders do not measure “learning,” they typically measure “third grade proficiency.” Tests for fourth graders do not measure “learning,” they measure “fourth grade proficiency.” Thus it’s very hard to compare learning across levels using these tests.

For example, imagine Miss Smith, a very good fifth-grade teacher who focuses on each individual student. Miss Smith tailors instruction for each child. Whether a child is a year behind, at, or a year above grade level, the child gains two grade levels of knowledge under Miss Smith. What a great teacher!

But the VAT model, if used with normal proficiency testing, will not show this. Take the student who was at a 1st grade level. Now that student is at a 3rd grade level. But many tests will instead accurately identify the student as far below grade level, so the VAT model may show no gain at all. (A bad teacher could even have gamed the system by drilling a few superficial skills measured on the fifth-grade test, instead of helping the student develop across the board.) Likewise, a student who is now at the 7th grade level will still be accurately shown to be at or above grade level. Because a fifth-grade proficiency exam is not designed to measure 7th-grade skills, the benefits this student gained from Miss Smith will not show up.
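The Miss Smith scenario can be simulated directly. In this toy illustration (all numbers invented), every student truly gains two grade levels, but the fifth-grade test is modeled as only resolving scores in a narrow band around grade level, clipping everything else:

```python
# Why a grade-level proficiency test can hide Miss Smith's effect.
# Assumption for illustration: a 5th-grade test can only resolve
# scores between grade 4 and grade 6; anything outside that band
# just reads as "far below" or "above" grade level.

def proficiency_test(true_level, floor=4.0, ceiling=6.0):
    """Clip a student's true grade level to what the test can measure."""
    return max(floor, min(ceiling, true_level))

students_before = [1.0, 4.0, 5.0, 7.0]                    # true entering levels
students_after = [lvl + 2.0 for lvl in students_before]   # +2 under Miss Smith

for before, after in zip(students_before, students_after):
    measured_gain = proficiency_test(after) - proficiency_test(before)
    print(f"true gain: 2.0, measured gain: {measured_gain}")
```

Only the student who starts near grade level shows the full two-level gain; the far-behind and far-ahead students show little or nothing, even though every student learned exactly the same amount.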

The VAT model is a big improvement on old models of testing. But when combined with inappropriate or archaic tests, VAT can produce deceptive results. Teacher advocates should focus on these real drawbacks of VAT-based teacher assessments so that the assessments can be improved.
