In the last few posts we have been exploring different aspects of assessment. In “NJASK: Don’t Ask” we discussed “Standardized Test Stress Disorder,” and in “Assessing Comprehension: Know the Limitations” we emphasized that each type of reading test may yield different results because of its format or style.
Staying on the comprehension assessment theme, it brings me back to my days as a young graduate assistant in the Temple University Reading Clinic. I remember bringing data to my mentor Dr. Rosner.
Somewhat like Mickey Mouse approaching the Wizard in Fantasia, I asked Dr. Rosner about the results on the comprehension section of an informal reading inventory.
“Gee, Dr. Rosner,” I said in a squeaky Mickey voice. “How can it be that the kid I tested got 30% correct on the third grade selection (a passage about caring for cats), but got 100% on the fourth grade level selection (a story about a skunk family)? How do you explain him doing better on the harder fourth grade level selection?”
Dr. Rosner shot me the wizard glare and boomed in a deep voice, “WELL, MR. SELZNICK, PERHAPS HE’S JUST NOT THAT INTO CATS!!!”
“Ooh, sorry, Mr. Wizard, I mean Dr. Rosner,” I said as I slid out of his office with my little test results, trying to make sense of the data.
From that time forward I have seen many kids bomb out on “comprehension” items for a whole host of reasons, including “just not that into cats.”
Testing data is a snapshot in time. That’s it. The better tests have good predictive power and offer a roadmap: where the kid is at that moment and what he or she needs next. What one should do with the data depends largely on the purpose for giving the test in the first place.
Tests have limitations, and scores must be interpreted cautiously. “Comprehension” is not easy to test or to get right, no matter what anyone in the field tells you.
Maybe he’s just not into cats.