
The Academic Leadership Year – Part 2

Academic Leadership

Once we have established where students are expected to be as learners and performers by the time they take a test or transition to the next year, we facilitate the school’s analysis of its current status. This analysis varies from state to state, depending on the nature of the state report and the data included with it, but it almost always begins with an analysis of district or school “success” rankings.

  1. Did the school meet its state goal?
    (Note that the ultimate state goal is 100% proficiency in the content areas; the standards apply to all students.) Were there any recommendations or mandates included by the state in its report?
  2. If the school did not reach its goal, why not? Where were points lost? (Note: Ed Directions coaches use the simple chart below to begin this discussion; a short code sketch after this list illustrates the same calculation.)
Tested Area | Last Year’s Score | This Year’s Goal | This Year’s Actual Score | Initial Thoughts
(one blank row per tested area)
  • Why were the points lost?
    This requires a “deep dive” into the test report. In many states, schools can learn:
    • possible weak areas of content
    • possible weaknesses by question type (e.g. constructed-response questions), venue (e.g. reading with an embedded visual or graphic), or duration (performance decline at the end of the test)
    • “red flag” areas (e.g. areas of student proficiency under 50%, off-level performance by advanced or honors students, performance gaps between and among identifiable groups, etc.)
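For teams that keep these figures in a spreadsheet or student information system, the “points lost” chart from item 2 reduces to a simple calculation. The sketch below is one minimal way to express it in Python; the field names and figures are illustrative assumptions, not data from any actual state report.

```python
# A minimal sketch of the "points lost" chart as a calculation.
# Field names and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestedArea:
    name: str
    last_year_score: float
    this_year_goal: float
    this_year_actual: float

    @property
    def points_lost(self) -> float:
        # Positive means the school fell short of its goal.
        return self.this_year_goal - self.this_year_actual

areas = [
    TestedArea("Reading", 61.0, 66.0, 58.5),  # hypothetical figures
    TestedArea("Math", 55.0, 60.0, 62.0),
]

for area in areas:
    if area.points_lost > 0:
        print(f"{area.name}: {area.points_lost:.1f} points lost -- investigate why")
    else:
        print(f"{area.name}: goal met")
```

The “Initial Thoughts” column stays a human judgment; the calculation only tells the team where to look first.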

Some states provide more information, but almost all states allow this level of data analysis. It lets Ed Directions coaches use a simple chart to transition leadership teams from thinking about the scores to thinking about the students who earn those scores. (A code sketch after the chart illustrates one way to populate it.)

Concern | Data/Description | Initial Thoughts
(one blank row per concern)
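One way to populate the concern chart is to screen the deep-dive data automatically against the “red flag” criteria listed earlier. The sketch below assumes the test report can be reduced to a proficiency rate per area and per identifiable group; the rates, group names, and thresholds are illustrative assumptions.

```python
# A sketch of a "red flag" screen feeding the concern chart.
# Proficiency rates, group names, and thresholds are illustrative.
proficiency_by_area = {"Reading": 0.48, "Math": 0.61}
proficiency_by_group = {"Group A": 0.63, "Group B": 0.41}

RED_FLAG_FLOOR = 0.50  # "area of student proficiency under 50%"
GAP_THRESHOLD = 0.10   # gap between groups large enough to question

concerns = []
for area, rate in proficiency_by_area.items():
    if rate < RED_FLAG_FLOOR:
        concerns.append((f"{area} proficiency at {rate:.0%}", "below the 50% floor"))

gap = max(proficiency_by_group.values()) - min(proficiency_by_group.values())
if gap >= GAP_THRESHOLD:
    concerns.append((f"{gap:.0%} gap between identifiable groups",
                     "performance gap worth a closer look"))

# Each tuple becomes a chart row: Concern | Data/Description
for concern, description in concerns:
    print(f"{concern} | {description}")
```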

In states where scores can be sorted by group or by teacher, this analysis can be carried to another level: we can identify areas of strength and concern with more precision. Ed Directions coaches use the questions below to help clarify what the scores actually tell us. (Because each question calls for the same tabulation with a different grouping key, a single code sketch after the list covers all four.)

  • Were the proficient and non-proficient scores evenly distributed across teachers at a grade level or within a discipline?
Teacher / grade level | Tested area(s) | Number proficient | Number not proficient | Score
(one blank row per teacher)
  • Were the proficient and non-proficient scores evenly distributed across programs (e.g. honors program, special education, ESL, etc.)?
Program: ___   Grade level: ___   Total number: ___
Test | Number proficient | Number not proficient | Score
(one blank row per test)
  • Did students with “at risk” characteristics (e.g. high absence rate, behavioral issues, lack of prior academic success, etc.) underperform?
At-risk group: ___   Grade level: ___   Total number: ___
Test | Number proficient | Number not proficient | Score
(one blank row per test)
  • Were there gaps between/among the “accountability” groups of students (e.g. socioeconomic status, race, location, etc.) as compared to the “majority”?
Accountability group: ___   Grade level: ___   Total number: ___
Test | Number proficient | Number not proficient | Score
(one blank row per test)
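All four questions above are the same tabulation run with a different grouping key (teacher, program, at-risk status, or accountability group), so one routine can fill every chart. The sketch below assumes a flat roster export with one record per student; the records and field names are invented for illustration.

```python
# One tabulation covers all four charts above: count proficient and
# non-proficient students per value of a grouping key. The roster
# records and field names are illustrative assumptions.
from collections import defaultdict

roster = [
    {"teacher": "T1", "program": "honors", "at_risk": False, "proficient": True},
    {"teacher": "T1", "program": "general", "at_risk": True, "proficient": False},
    {"teacher": "T2", "program": "general", "at_risk": False, "proficient": True},
    {"teacher": "T2", "program": "ESL", "at_risk": True, "proficient": False},
]

def proficiency_by(records, key):
    """Return {group value: (number proficient, number not proficient)}."""
    counts = defaultdict(lambda: [0, 0])
    for record in records:
        counts[record[key]][0 if record["proficient"] else 1] += 1
    return {group: tuple(pair) for group, pair in counts.items()}

for key in ("teacher", "program", "at_risk"):
    print(key, proficiency_by(roster, key))
```

An even distribution across groups suggests a curriculum or format issue; a skewed one points the conversation toward particular classrooms, programs, or student groups.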

We can also begin an analysis of individual student performance and identify “red flags” in individual student score profiles. In this case, the red flags indicate that a student did not meet expectations, whether by overperforming or by underperforming. A sorting tool used in many Ed Directions schools looks like this (a code sketch after the chart shows the comparison it supports):

Student | Test: ___ | Test: ___ | Test: ___ | Test: ___ | Test: ___
(each test column splits into an Expected score and an Actual score; one row per student)
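The sorting tool reduces to comparing each student’s expected and actual score per test and flagging differences in either direction. The sketch below is one way to express that; the student names, scores, and tolerance are illustrative assumptions.

```python
# A sketch of the sorting tool: flag students whose actual score differs
# from expectation in either direction. Names, scores, and the tolerance
# are illustrative assumptions.
TOLERANCE = 5  # points of difference before a score is flagged

# {student: {test name: (expected, actual)}}
profiles = {
    "Student A": {"Test 1": (70, 58), "Test 2": (70, 72)},
    "Student B": {"Test 1": (55, 68)},
}

for student, tests in profiles.items():
    for test, (expected, actual) in tests.items():
        if actual < expected - TOLERANCE:
            print(f"{student}, {test}: red flag -- underperformed "
                  f"(expected {expected}, actual {actual})")
        elif actual > expected + TOLERANCE:
            print(f"{student}, {test}: red flag -- overperformed "
                  f"(expected {expected}, actual {actual})")
```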

At this point, we know the district-wide and school-wide problems: not only gaps in curriculum congruence with the standards, but also areas where goals were missed and where content or format weaknesses appear. These “red flags” give us a starting point for building an intentional plan, one that focuses attention far more precisely than a “low reading score” notation on a state report.
