The blog of a professional educator, university faculty member, and data geek. This blog contains commentary on collaborative, data-driven decision-making in education and on assessment for program evaluation and accreditation in higher education. Join the conversation!
Tuesday, March 10, 2015
Open Your Eyes
I remember when I was a student teacher in first grade. A small table of emerging readers. Pattern books with predictable text. During guided reading one day, one of my students said, "Look, I can read with my eyes closed!" Well, I guess the point was somehow missed! Unless we can read braille, we most certainly cannot "read with our eyes closed." With our eyes closed, we are not reading at all.
I have angst over assessing with rubrics. I have angst over performance assessment. Mostly, I have angst over the lack of validity and reliability, and the bias that creeps into our assessments.
When assessing a teacher candidate's or in-service teacher's performance, many university supervisors and school administrators are asked to use the rubrics associated with the Danielson Framework for Teaching. The framework breaks teaching down into 22 components and 76 smaller elements. A rubric exists for each of the components, and each of the 76 smaller elements has its own line within one of those rubrics.
After training, most professionals feel they "know" the Framework. Gee, all of those rubrics... that is a lot of paper. That takes a lot of time. I have how many people to assess? How many times? I know good when I see it. This intern/teacher is good! Distinguished, distinguished, distinguished!
Wait, somewhere I heard... in training, or a meeting... that the intern/teacher doesn't "live" at the distinguished level. It's just a place that we "visit," so we do not give distinguished ratings. More than that, we're going to rate the intern/teacher low so they have room to grow! Basic, basic, basic!
When we rate an intern/teacher against a framework without actually referencing the rubric, and without providing evidence that connects what we observed to the rubric's language, aren't we really assessing with our eyes closed? Are we really assessing at all?
Is there a better way? How do you use the Danielson Framework for Teaching to evaluate interns/teachers?