Monday, January 20, 2014

Using Student Achievement Data to Support Instructional Decision Making

The What Works Clearinghouse is a government-sponsored initiative whose stated goal is:

  • "We review the research on the different programs, products, practices, and policies in education.
  • Then, by focusing on the results from high-quality research, we try to answer the question “What works in education?”
  • Our goal is to provide educators with the information they need to make evidence-based decisions."
It is therefore with some irony that I appreciate the report Using Student Achievement Data to Support Instructional Decision Making, authored by L. Hamilton, R. Halverson, S.S. Jackson, E. Mandinach, J.A. Supovitz, and J.C. Wayman. Although the authors repeatedly acknowledge that there is limited or no research supporting their assertions, which rest on case studies, professional judgments, and studies with confounding factors, they conclude that data is an important component of effective instruction. Evidence-based decisions really do not enter into the conclusions of this report.

As a special educator whose entire pedagogy is deeply rooted in data, I agree that data is an essential component of designing effective instruction. NCLB, signed into law in 2002, required some thought about data-driven instruction, as have the AIS (Academic Intervention Services) and RTI (Response to Intervention) movements. I also know that teachers who are required to provide report card grades know at least a little about data. The formative assessment ideas that have become abundant of late certainly push the concept of using data to drive instruction. Data is an integral component of teaching, so it is not surprising that locating what the Clearinghouse considers studies of adequate rigor is difficult.

While teachers use data within a limited scope, there is certainly a movement to increase the quantity and quality of data use in education. This means that teachers need to better understand data and how to collect and use it, and expanding the capacity of schools to do so is essential. I should never go to a school board meeting where the assistant superintendent for instruction is presenting data on district-wide progress on exams and have my question about the significance of a 1% increase in scores send both the presenter and the superintendent scurrying for their statistics books. (The answer was no, it was not significant. I knew this when I asked, but needed the school board to know that the celebration was premature.)
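For anyone curious how such a question gets answered, here is a minimal Python sketch of a pooled two-proportion z-test. The cohort sizes and pass counts are hypothetical numbers of my own, not the district's; the point is only that a 1% bump in cohorts of this size is nowhere near significant.

```python
import math

def two_prop_ztest(passed_a, n_a, passed_b, n_b):
    """Pooled two-proportion z-test for a change in pass rate."""
    p_pool = (passed_a + passed_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (passed_b / n_b - passed_a / n_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical cohorts: pass rate moves from 70% to 71% year over year.
z, p = two_prop_ztest(passed_a=700, n_a=1000, passed_b=710, n_b=1000)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 0.49, p = 0.62: not significant
```

With cohorts of about a thousand students, the pass rate would need to move several points, not one, before the change cleared conventional significance thresholds.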

My favorite piece of the report is the section on teaching students to examine their own data and set learning goals. The authors propose two graphic organizers to help students engage in self-examination.

 
Areas of strength and areas for growth
Topic: Writing a 5-paragraph essay
Based on: Rubric-based feedback
Student: ________________________________________________

| Strengths | Weaknesses |
|-----------|------------|
|           |            |
|           |            |
|           |            |


Learning from math mistakes
Test: Unit 1 – Operations with fractions
Student: __________________________________________

| Problem number | My answer | Correct answer | Steps for solving | Reason missed | Need to review this concept? |
|----------------|-----------|----------------|-------------------|---------------|------------------------------|
|                |           |                |                   |               |                              |
|                |           |                |                   |               |                              |
|                |           |                |                   |               |                              |

The authors further recommend that teachers use this sort of item analysis to implement any needed remediation lessons. I think having a set form for students to use increases the likelihood that self-evaluation will occur. The first graphic organizer could easily be adapted to work with any rubric-based activity: reading a map, interpreting a primary source, creating a piece of art in the style of a particular artist, balancing chemical equations, etc. In the math case, students could complete the first three columns as a group, and then small groups could be formed: one for reteaching important concepts and another for enrichment activities. If an extra person were available (a paraprofessional, special educator, math coach, or parent volunteer), someone could supervise the enrichment group while instruction was going on, or multiple instructional groups could run at once. An amalgam of the two organizers could be used with any short answer, true/false, or multiple choice test: the first three column headings could remain the same, "steps for solving" could become "explanation of the correct answer", "reason missed" could be eliminated altogether, and the last column could remain the same.
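To make the sorting step concrete, here is a minimal Python sketch of that item analysis: score each student's row, split the class into reteach and enrichment groups, and surface the most-missed problems. The names, answer patterns, and the 80% mastery cutoff are all illustrative assumptions of mine, not values from the report.

```python
# Per-problem results from the "Learning from math mistakes" sheet,
# one row per student (True = answered correctly).
results = {
    "Avery":  [True,  False, True,  True,  False],
    "Blake":  [True,  True,  True,  True,  True],
    "Carmen": [False, False, True,  False, False],
}
MASTERY = 0.8  # cutoff for the enrichment group; my assumption

reteach, enrich = [], []
for student, answers in results.items():
    score = sum(answers) / len(answers)
    (enrich if score >= MASTERY else reteach).append(student)

# Problems missed most often point at the concepts to reteach first.
num_problems = len(next(iter(results.values())))
miss_rates = [
    sum(not row[i] for row in results.values()) / len(results)
    for i in range(num_problems)
]

print("Reteach group:   ", reteach)   # ['Avery', 'Carmen']
print("Enrichment group:", enrich)    # ['Blake']
print("Miss rate per problem:", miss_rates)
```

The same sorting works unchanged for the amalgam form, since it only needs a right/wrong mark per item.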

For me, the biggest challenge of the report lies not in the fact that the level of evidence is low, but in the complete absence of any discussion of how this strategy could be used to improve the education of kids who already get it. There is a sum total of one sentence about enrichment for kids who scored well. Every comment limits the educational experience to grade-level benchmarks. This is an inadequate target for probably at least the top 10% of the class, and it is not sufficiently precise to pinpoint problems and inform instruction for the bottom 10%. If we really want data to drive instruction, we need tools that are sensitive enough to assess skills without a large number of students hitting the ceiling or failing. Perhaps this means having multiple tools, accelerating high-performing students toward benchmarks at higher grades and allowing struggling students to aim for benchmarks that have been broken down into smaller substeps. Repeated administration of sample tests might show growth for the middle eight deciles, but it is unlikely to inform instruction at the ends and is thus a waste of those students' time and of the educational resources that go into the assessment (e.g., paper copies, instructional time, and teacher grading time).
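One quick diagnostic for whether an assessment has this problem is to count how many students land at or near the extremes. A minimal sketch, with hypothetical scores and cutoffs of my own choosing:

```python
# Hypothetical raw scores on a 50-point unit test.
scores = [50, 50, 49, 50, 44, 38, 35, 50, 29, 12, 50, 8, 50, 47, 50]
MAX_SCORE = 50

# Arbitrary illustrative cutoffs: "near ceiling" means within one point
# of a perfect score, "near floor" means 20% of the points or below.
ceiling_rate = sum(s >= MAX_SCORE - 1 for s in scores) / len(scores)
floor_rate = sum(s <= 0.2 * MAX_SCORE for s in scores) / len(scores)

# When half the class maxes out, the test cannot show growth or
# pinpoint needs for those students.
print(f"At or near ceiling: {ceiling_rate:.0%}")  # 53%
print(f"At or near floor:   {floor_rate:.0%}")    # 7%
```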

Data can and should inform instruction, but we need to be sure we are collecting data that is meaningful and actionable.
