After deciding to create a diagnostic tool to determine my students' understanding of different mathematical concepts, it was time to actually build it.
I wanted the tool to function as a standardised way of checking whether students understood what the different strands of mathematics were asking of them, not whether they could solve the problems.
In this question, I wanted to know whether they could match the term to the definition to the example.
In this one, they needed to be able to order the sequence of problem-solving steps.
I ran the test with a sample group of students rather than my target group; my rationale was to get a sampling across a range of abilities, reading levels, and attitudes towards maths. The prototype was tested by 11 different students, and the results were a little disappointing: not because I felt they hadn't succeeded, but because I felt the test itself was too confusing. In their reflections, students said they were unsure what they were being tested on and that some of the questions were confusing, even for the extension students.
This leaves me at a bit of a crossroads: is the test tricky because I have made the questions and tasks too confusing, or does it genuinely challenge their ability to comprehend the maths concepts and reveal holes in their understanding?
I am going to gather feedback from my colleagues and adapt the test using Google Forms, to see whether its more rigid structure can scaffold the students and remove ambiguity. Hopefully the results will show there is merit in continuing to pursue this diagnostic tool.
In the meantime, I am continuing to explore the CPA Approach in my maths teaching. Check out the next post to learn more about CPA, and how it has been working out in Room 8.