As a public-school educator for more than 30 years, and the site leader of two elementary schools for fourteen of those years, I found the state testing cycle every spring unsettling.
We hoped our work planning instruction, executing lessons, and analyzing data weekly in PLCs would yield positive results on a publicly published performance summary report. Like many of you, I have haunting memories of waiting for and then making meaning of those state test reports. There was no information about what proficiency meant beyond references to cut scores. Sometimes these reports ranked schools with stars or other labels, but in essence they gave us only raw scores of student performance to guide us in making instructional decisions for the coming school year.
But what do these reports tell us? Can a raw score reveal the root cause of why a student missed a particular question on a test? The information given on a state performance summary report does not allow schools to determine exactly why a student scored the way he or she did.
When we rely only on test scores, it is difficult to figure out where exactly a student's work broke down. In order to plan purposefully, with a focus on building students as learners and performers, we must know where non-proficient work breaks down and why it broke at that point.
I vividly remember mining state test data and making assumptions like, "They just didn't learn the content," or "That particular standard must not have been covered prior to the test." These were, at best, grossly inadequate assumptions.
Don't get me wrong: we can certainly name some institutional and environmental causes of poor student performance, like content and/or curriculum, and design solutions that will get results. I was the queen of everything intervention. I led my teams through extensive data team meetings where we analyzed raw scores from a variety of sources, including state tests and benchmark data. We engaged in discussions about individual students and made honest attempts at designing interventions that would result in performance growth. What we neglected to do was examine and analyze student work at the individual task level to determine where the work broke down and the cause of that breakdown. This type of laser focus ensures that we "analyze and plan" instead of "guess and hope."
Engaging in an in-depth structural and causal analysis will allow you and your teams to pinpoint specifically what is wrong with the student work and where exactly the work broke down. You will be able to determine whether the causes are institutional issues, like curriculum alignment, pacing, or a lack of congruence with the expectations or standards; teaching and learning issues, like a misalignment between the rigor of class work and that of testing; or student issues, like individual perception problems or a lack of effort.
This type of analysis requires your teams to look for causes and then design individual solutions and interventions to ensure the success of every student.