Okay, in my last post I said I would talk a little more about how my teaching partner and I are writing and grading assessments for our competencies, so here goes.
I have struggled a lot with how to define and assess competencies. There were so many questions. What is a competency? What is the best way to assess competency? What represents the different levels of competency? What role does DOK play in evidence of competency?
My first year, the assessments given were almost exclusively tests, and if a scholar did not meet competency (a grade of 2.5 on a scale of 4), they would have to retake a second version of the test after completing revision procedures. From a teacher standpoint this was easy to manage (tests are easy to grade compared to other assessments), but it never quite sat right with me, as I find tests restrictive in the amount of creative thinking and problem solving they allow.
When I got a new teaching partner last year, we swung in the opposite direction, trading in tests for more creative and engaging projects. A little more to grade on the teacher side, but the richness of learning was worth it. However, we ran into a problem: it was difficult to assess and maintain competency in some of the more basic skills of the content.
If you don’t know, I teach chemistry, and learning chemistry is akin to learning a new language. There is a whole new alphabet (the elements), writing (chemical symbols), combining letters to make words (chemical compounds), and combining words to tell stories (chemical reactions). It’s great to have creative projects, but can you really consider a scholar competent if they aren’t fluent in the basics of the language?
To address both the basic skills and creative problem solving, we have moved toward two-part assessments for each competency: a basic skills test and an investigation.
First, we outlined the core competencies of our course as competency statements. These were based on NGSS as well as our professional judgment, informed by our course content and the 4-year context of our science curriculum.
Then, we wrote 20-question basic skills tests around the low-DOK (1 & 2) learning goals of each competency. Finally, we designed investigations to get at the heart of the competencies. These investigations necessarily include the Science and Engineering Practices: this might mean creating and explaining a model via screencast, writing a lab report around an experiment the scholars planned and carried out themselves, or creating an infographic to share information about nuclear technology.
These assessments are then graded, and the scholar receives a grade on a 4-point scale based on our Bump It Up board, which I adapted from Madly Learning. The levels on the board are roughly based on DOK levels. If a scholar passes the basic skills test, they are at a level 2. To earn a higher competency level, they must complete the investigation to the specified criteria. A level 3 shows integration of the Science and Engineering Practices with the content. A level 4 goes beyond, tackling more challenging subject matter and taking a deeper dive to find real-world connections to the content.