Examining Effect

John Hattie has spent much of his career synthesizing effect sizes that relate school, teacher, and student factors to student achievement.  Here he lists 195 influences on student achievement along with their effect sizes.  For example, he lists creativity programs, providing formative evaluation, teacher clarity, and interventions for students with learning disabilities as factors that influence student achievement with an effect size greater than 0.4.

His work is important for two reasons:

  1. His list of effects forms a normal curve with a mid-point effect size of 0.4 (using Cohen’s d).  Many have been using Cohen’s rule-of-thumb interpretation of effect sizes (small effect = 0.2; medium effect = 0.5; large effect = 0.8), but this interpretation has proven misleading for educational interventions and has led many to say that any positive effect on student achievement is “good.”  Hattie sets a bar for effect sizes at 0.4 and concludes that typical teaching provides effects between 0.15 and 0.30.  He implores us to go well beyond typical teaching effects.
  2. Hattie’s method of pooling effect sizes can be adopted to discuss effects across programs.  The National Science Foundation (and other federal education funders) has searched for mechanisms to broadly explain the merit and worth of a body of programs, and effect sizes give us a broad approach to do this.  Naturally, we must be cautious about using effect sizes because it may be easy to inflate them, the named effect might contain multiple other factors, and the outcome variable (student achievement) may be difficult to accurately measure.  Still, in the absence of mandating the same instruments across programs, effect sizes give us a method for pooling our data and supporting agencies that need to account for their funded programs.
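To make the arithmetic concrete, here is a minimal sketch of computing Cohen’s d and pooling effects across programs with a fixed-effect, inverse-variance weighted mean.  The program numbers below are hypothetical, and the variance formula is the standard large-sample approximation for d — a sketch of the general approach, not Hattie’s exact synthesis method.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def d_variance(d, n_t, n_c):
    """Approximate sampling variance of d (large-sample formula)."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

def pooled_effect(effects):
    """Fixed-effect pooled estimate: inverse-variance weighted mean of d."""
    weights = [1 / d_variance(d, n_t, n_c) for d, n_t, n_c in effects]
    ds = [d for d, _, _ in effects]
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

# Hypothetical results from three funded programs: (d, n_treatment, n_control)
programs = [(0.45, 120, 115), (0.30, 80, 82), (0.55, 60, 61)]
print(pooled_effect(programs))  # weighted estimate falls between 0.30 and 0.55
```

Because larger studies get larger weights, the pooled estimate is pulled toward the better-powered programs — which is also why a single inflated effect from a small study cannot dominate the summary.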

Evaluating Micro-Credentialing

I wonder if others are noticing a shift in teacher professional development.  The shift moves away from the kind of K-12 professional learning we often see in schools – 1) a teacher attends a week-long professional learning session in the summer to fulfill a requirement for continuing education, or 2) a teacher attends professional learning sessions on Saturdays throughout the school year.  These sessions seem to begin with the premise that teachers need to know something they do not currently know, and that it should be delivered in a setting similar to a graduate school education class.  The shift I’ve noticed asks teachers to choose from a menu of targeted learning events.  Digital Promise provides some examples of micro-credentials for educators.  Examples include “Leading a Professional Learning Community” and “Using Wait Time Effectively.”

How, then, do we evaluate micro-credentialing initiatives?  We might adopt Guskey’s (2000) variant of the Kirkpatrick (1996) model.  Here’s what it might look like:

  1. Participation and Selection:  Who is choosing what micro-credential?
    • How did the designers arrive at the set of micro-credentials?  Did they conduct a needs analysis?
    • Did the participant self-select into the program?  If not, how was the participant connected to a specific micro-credential?
  2. Participant Reactions:  Did the participant find the professional learning useful, informative, and engaging?
  3. Participant Knowledge: What did the participant hope to learn?
    • What did the participant learn?
    • What is the connection between what the participant hoped to learn, the learning objectives of the micro-credential, what the participant actually learned, and what the participant applies on the job?
  4. Organizational Support and Change
    • What effect has the professional learning had on the school environment?
    • What barriers prevent participants from using what they have learned?
    • What affordances of the environment promote use?
  5. Participant Actions: How does the micro-credential affect practice?
    • How does the participant intend to use what they learn in the micro-credential?
    • What action does the participant actually take as a result of the micro-credential?
  6. Student Success:
    • Before taking the Micro-Credential:  What effect will the micro-credential have on students?  That is, how will students be different as a result of this micro-credential?
    • During the Micro-Credential:  How has the teacher changed his/her perspective of the effect of the micro-credential on students?  That is, now that the teacher is taking the micro-credential, how has his/her thinking changed with regard to the effect it will have on students?
    • After the Micro-Credential:  What effect has the micro-credential actually had on students?  Can we draw theoretical and ultimately causal connections between the micro-credential and student success?

Evaluating Professional Learning Communities (TCAR)

As we begin preparations for summer professional learning, some may be considering using professional learning communities (PLCs).  Woodland recently published a rubric in the American Journal of Evaluation to evaluate PLCs.  It builds on previous PLC rubrics and incorporates improvement science.  This is particularly timely for those considering research-practitioner partnerships (see this solicitation from NSF).  Researchers are often university faculty who may not know the extent to which PLCs are used in public education, and the rubric provides them with both a framework for understanding these potentially robust communities and an approach to measuring them.  For practitioners, typically K-12 school leaders and teachers, it provides a strategy to return sleepy, mostly complacent PLCs to their intended use – to foster critical and sometimes tense dialogue around student success.  Here is an overview of the rubric (tcar-form).