Using assessment to motivate and empower students in grade 9 math

I hadn’t taught MFM1P for a long time – about 9 years. As I began planning this semester, I had flashbacks to the last time I taught MFM1P, in Moosonee. I vividly remember making great gains in implementing new instructional strategies to support a variety of students with learning disabilities. We used manipulatives regularly and completed more projects and tasks than tests. Unfortunately, I also remember my attempt to shift assessment failing miserably.

I tried to assess based on the overall expectations in a portfolio-like manner. Each student had 20 file folders and would put their favorite work for each learning goal into the corresponding folder. At the end of the semester they reflected and self-assessed. It was a logistical nightmare and a chaos of paper. It was poorly executed (by me) and one of my first major educational fails.

Thank goodness for technology. Nine or ten years later, I am able to revisit this method of assessment and make it work much better. We have 22 learning goals for our class, and in-class activities and assessments that cover these learning goals in a variety of creative, fun ways. One project might tap into three learning goals, or maybe only one. As a class, we set out what makes a level 1, 2, 3 or 4 for each learning goal. Here is a snapshot of how our learning goals are organized.

[Image: MFM1P learning goals]

We use ActiveGrade to track and record our assessment. Students and parents can log in and see their grades for each learning goal. This first student aced the Pythagorean theorem. He missed a group activity in class where we investigated finding the area and perimeter of 2D composite shapes, and he didn’t quite get caught up before the in-class assessments for that goal.

[Screenshot: a student’s ActiveGrade record of learning-goal grades]

Because he wants to do better, he’s been working on improving that concept, and he has a few ways to do it. He has videos he can watch. He has an online program (Knowledgehook) containing video lessons and EQAO-like questions for practice. He also has a printable paper package I created to help him review the concept, practice and self-assess.

This next student has a learning disability. I was still learning how best to support him at the beginning of the semester; I think we are getting better at meeting his needs. He will take longer to grasp the concepts involved in solving problems using the area and perimeter of composite 2D shapes, and that is just fine. In ActiveGrade, each time I add a new assessment for a learning goal, all previous assessments for that goal are pushed into 25% of the total for that goal, and the new one counts for 75%. I can also (and often do) simply delete a previous assessment when a student really struggled and it truly doesn’t reflect their understanding.

As long as the assessments used for a goal collectively cover all of the achievement categories (knowledge and understanding, application, thinking and inquiry, and communication), all is well. This allows different students to demonstrate their understanding in different ways if they choose. If they want to make up their own example and create a short video describing how to solve a problem – great. If they want to do paper-and-pen practice and then come and explain to me how they did two of the questions – great. If they want to take me into Minecraft and explain how they solve for the unknown in a ratio to figure out the length of a wall in “blocks” – great. HOW they demonstrate their understanding is inconsequential.
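To make the weighting concrete, here is a minimal sketch of that 75/25 rule, assuming it applies recursively as each new assessment arrives (the function name and data shapes are my own, not ActiveGrade’s):

```python
def learning_goal_grade(scores):
    """Fold a chronological list of scores for one learning goal so that
    the newest assessment counts for 75% and all prior evidence for 25%."""
    if not scores:
        return None
    grade = scores[0]
    for new_score in scores[1:]:
        # Each new assessment pushes everything before it into 25% of the total.
        grade = 0.25 * grade + 0.75 * new_score
    return grade

scores = [45, 70, 90]                 # a rough start, then steady improvement
print(learning_goal_grade(scores))    # recent understanding dominates the mark
print(learning_goal_grade(scores[1:]))  # or simply delete the unrepresentative attempt
```

Because the most recent evidence carries 75% of the weight, a student who levels up late in the semester is not anchored to an early struggle – which is exactly the incentive described above.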

This way of assessing has worked well so far this semester. Students only see their overall marks at reporting periods, which means we aren’t focused on their “grade”; instead, we spend most of the semester asking “what area do you need to level up?”.

These are a few reasons this way has worked for us:

  • it has empowered students to take ownership and responsibility for their learning. They see the direct correlation to working on a specific area and their achievement.
  • it allows us to add in quick, on-the-fly assessments for a specific student (example: grab a video of a student during a group problem explaining a concept to the rest of his group)
  • it provides a couple of in-class assessments for each learning goal, so students who are less engaged with assessment and just want to participate in class without thinking about it are still covered
  • it allows for all the accommodations we integrate to be effective (extra time, multiple methods of assessment, etc.)
  • accommodations are no longer viewed as “cheats”. Everyone can make use of extra time or different methods of assessment. It values different ways of learning without appearing “unfair”
  • it lets STUDENTS take ownership for differentiating ways that work for them

I started off the semester thinking that I would have multiple online, digital versions of “levelling up” each learning goal. We have Knowledgehook, which is an awesome tool for this. However, I quickly realized that I also needed a paper-and-pen version of levelling up each learning goal, for a few reasons:

  • MANY of our students do not have internet access at home
  • paper versions help students attending our program for lengthy suspensions
  • paper versions work for students who are catching up in alternative learning environments such as the resource room, student success or credit save days when a device is not always available
  • paper versions can be done in the hospital, on the bus, on the train, on a plane, etc.
  • to differentiate – some students truly prefer paper and pen

I am always shocked at how many times in one semester I am asked to “send work” for a student. Fifteen to twenty times each semester I am asked to hand over a hard copy of what a student will miss or has missed in class. When you teach through problem solving, assess with tasks more than tests and use technology regularly, this is actually very difficult. Class is no longer a teacher-directed lesson followed by practice; handing over a Minecraft task or a ClassFlow interactive activity is genuinely hard. Having paper versions of each learning goal has let me keep those interactive activities as the base of our course while meeting the varied needs of students throughout the semester.

The curious part of assessing this way is that at midterm I ended up with no students in the level 1 range. Other than some special cases of non-attending students, we have a couple of students in the level 2 range and the rest are in the level 3 or 4 range. I am far happier when students “level up” their assessment and show me a good, solid understanding of a concept than when a student is left with a poor understanding and we simply move on. They are far better prepared for the next grade. The few students in the level 2 range are there because they simply need more time with the content. If we can find that time within our 110-hour, credit-based semester system, I am convinced they will move into the level 3 range as well.

My one concern with assessing this way is the potential to turn my class into a simple drill-and-kill mastery class. I highly value the creative and critical-thinking aspects of math. The FIRST assessments for all learning goals are always interactive, creative, problem-solving activities in class; levelling up with videos and practice is only for when that didn’t work. For example, we assessed students’ ability to set up and solve ratios through Educreations videos made after creating scale models in Minecraft or with 3D printing. It is almost impossible for a student who missed this in-class activity to catch up on it – the structure of 75-minute high school periods doesn’t make catching up on interactive class activities easy. The videos and alternative assessments ensure that such a student doesn’t get left behind. Not the best learning option, but a great alternative.

In the future we plan on improving this system by:

  • creating and collecting better “level up” resources (videos, assessments, tutorials, etc.)
  • creating a rubric for each of the 22 learning goals with examples and explicit details on what makes a level 1, level 2, etc.

I have decided that my assessment is effective if it motivates and empowers students to actually improve their understanding. How do you use assessment to motivate students?

Assessment in Grade 9 Applied Math

Yesterday a student asked me a question that made me stop and think. After squelching my initial reaction, I gave it some thought.

The student’s question was “do I really have to do all these practice questions? I know how to solve two- and three-step equations with like terms on each side of the equation”. The old teacher in me would have stressed the importance of practice. I would have thought that I knew best and that all students should do the work that I chose and assigned. After some thought, I realized that this student’s question is an indicator of great things happening in our math class.

We are assessing by standards in our math class. This means that we have broken the course up into 22 learning goals and we measure student ability to do these things instead of measuring achievement on “stuff” (assignments, tests, tasks, etc.). At the end of the day, I need to know whether a student can add and subtract polynomials. Whether they show me this through a task, a test or a video is inconsequential. I just need to know if they can do it.

I have assessed like this for years, and I remember the excitement when my science students finally understood how the assessment worked and began to advocate for themselves and for what they needed to meet learning goals. I always worry that in math I may not be able to ensure students fully understand how they are being assessed. I stress about how to empower them to come up with ways to demonstrate understanding that work for them. I’m more confident in science assessment.

So, after catching myself and thinking through this student’s request, I responded by telling him to “archive your learning and move on”. We archive our learning in our math class using the Sesame Snap app (thank you, Min Min, for sharing this tool with me). Each student has a digital math portfolio. We often roam the class with our phones and take pictures, videos or notes of student work, which can be used in our assessment.

This student’s question ended up making my day because it showed me that he understood how he was being assessed. He knew that he was not being marked on “stuff”, but on his ability to solve a multi-step equation with like terms on both sides of the equation. He knew that he could better use his time revisiting the concept of multiplying polynomials, because he wanted to improve his mark on that learning goal.

A few of the boys in our class have “gamified” our math class. They have decided that they want to continuously improve and do better on the learning goals, so they are motivated to figure out how to get better and better at the concepts. A couple of these guys have learning disabilities; the ability to show a concept in a different way than the bulk of students do, or to take more time and reassess later, has really helped with engagement and, of course, achievement.

I’d love to hear more about how other math teachers view and manage assessment.


Descriptive Feedback and Assessment – What my students in Fiji taught me

Many times I’ve heard teachers (and myself) say “students don’t read the comments, they just look at the mark.” I’m not sure why it has taken me so long to make this connection, but my students in Fiji have taught me something about descriptive feedback: if they have the opportunity to improve, they care. Descriptive feedback by itself is not enough; assessment practices must also change.

I took 40 students to Fiji for a month (I’m either the craziest or the luckiest person in the world – the jury is still out). In addition to learning the ins and outs of the Fijian health care system, my students taught me all about proper assessment. I set up my grade 11 biology course so that students had 10 blog post prompts, 8 labs (some of these were really activities as opposed to labs), 3 assignments and a final task. The blog was the core of our ongoing assessment. Students could (and often did) put their assignments and labs up on it as well. Parents and friends could check in and see what they were learning. They had an authentic audience. Many of my “tweeps” and friends commented on student blogs, so I wasn’t the only one providing feedback and pushing student thinking (a HUGE thank you to all those who commented on my students’ blogs).

I provided my feedback through our class Moodle. On the Moodle were the blog prompts, success criteria and rubrics. The blog posts were very open-ended and provided lots of choice. The success criteria pointed students to the specific scientific concepts I needed to see their thinking around. I provided some links or examples to help them get started (we had no textbook – by choice). When I gave feedback on the Moodle, I checked off the levels on the rubric and it automatically calculated a mark based on how I had set it up. I also provided written feedback, usually stating what they did well and what they needed to do to improve. I gave students the opportunity to fix up their blog posts and resubmit them for assessment. I honestly didn’t think many students would make use of this. Boy, was I wrong. Almost every student has made use of it for at least one blog post, some of them for many. They reword, include new components, add new research, etc., and then ask me to reassess. It’s great.
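A rubric that “automatically calculates a mark” usually just converts checked levels into a percentage of the points possible. Here is a hypothetical sketch of that conversion (the criteria names and four-level scale are my own illustration, not the actual course rubric):

```python
# Each criterion records the level checked off and the maximum level available.
rubric = {
    "scientific concepts": {"level": 3, "max_level": 4},
    "connections to Fiji": {"level": 4, "max_level": 4},
    "communication":       {"level": 3, "max_level": 4},
}

def rubric_mark(rubric):
    """Convert checked rubric levels into a percentage mark."""
    earned = sum(c["level"] for c in rubric.values())
    possible = sum(c["max_level"] for c in rubric.values())
    return 100 * earned / possible

print(round(rubric_mark(rubric), 1))
```

The point of pairing this calculated mark with written comments is that the number alone says nothing about which criterion to improve on resubmission – which is exactly what the students noticed.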

Now that we are back from Fiji, wanting to get on with my summer and finish report cards, I rushed through one set of blog posts. I simply marked them and made a 3-5 word comment. I shouldn’t be admitting this, but I didn’t think anyone would notice just ONE poorly assessed blog post among many. Well… they did! I got messages saying “I got an XX% and I can see the rubric, but what exactly can I do to improve? The comments just weren’t there”. This has taught me something important. Students DO READ the written feedback and comments. We, as teachers, simply need to ENSURE that the opportunity for them to improve upon their work is there. Otherwise they have neither the need nor a way to really process the feedback.

I did not offer this resubmit opportunity with the labs and assignments. They were much more “old school”: each had specific questions to answer, or a specific set of genetics problems. Each student was doing the same thing, so the ability to let students resubmit and improve simply wasn’t there. I am now looking at each of those with a very critical eye. How could I have opened up those tasks to allow for growth and personalization? The blog posts were perfect: no two kids had the same posts, rarely even the same topics (except when they co-created them together).

I created this course very specifically to work while on the road in Fiji, and I am now looking at how many parts of it I could fix up and improve. What sticks out most in my mind is opening up some of the labs and assignments. Secondly, I need to find a way to have students read each other’s blog posts more often and provide feedback – perhaps a feedback form, or maybe simply commenting. We did struggle with internet access on the trip, so that would have to be sorted out as well, but that is a simple challenge.

My students fascinated me throughout the entire trip: the creativity in their blog posts, the connections they made to all aspects of the course and to Fiji, how they took responsibility for finding the information they needed, asked the questions they needed to understand, and responded to and learned from the feedback provided. Aside from spending a month touring an amazing country like Fiji, I learned so much this month about feedback and assessment practices. I was amazed to watch my students take full responsibility for their own learning. I think the setting may have helped a teeny bit, but ultimately the students took charge.

You can read about our adventures in Fiji here: http://biologyinfiji.edublogs.org . On the left hand side of the page are links to the student blogs.

The Ongoing Zero Debate

This post was first posted and commented on, on June 2nd, 2012 here: http://dynamagogy.posterous.com/the-ongoing-no-zero-debate-drives-me-crazy

A reply post was also written here: http://dynamagogy.posterous.com/135926813


Original Post:

This debate makes me angry. It makes me hopping mad, red in the face and ready to scream! The lack of communication, or miscommunication, around these no-zero policies is atrocious. If a policy cannot be appropriately communicated to educators, how are we supposed to communicate it and stand behind it when working to support students and parents in understanding it? I believe it comes down to ineffective adult learning models and communication in our education system. We need to do better.

The debate most recently bringing this to the forefront of my mind is the teacher in Alberta who the media states has been suspended for giving zeros (the school board states this was not the case). One article written about the issue can be found here.

What really gets me going is the misconception that we should be marking “work”. As a teacher, I should not be marking “work”; I should be assigning marks that reflect a student’s ability to demonstrate certain skills. I am not marking how the student did on “unit 1 test” – I should be marking how well the student can demonstrate “solving linear systems”.

If I am marking “work”, then giving anything other than a 0 for an assignment not handed in does not make sense. But, any good policy states that we are marking skills or expectations. I should no longer have a markbook divided up into “unit 1 test”, “unit 2 test”, “fractions project”, “unit 3 test”. If I do, how on earth do I know what skills the student has mastered and what they haven’t? How do I provide targeted support as needed when all I know is that they struggled on the whole assignment or test?

As a teacher I need to begin tracking actual skills. My markbook should include a list of skills that need to be demonstrated. Then, as per our much-griped-about assessment policy, Growing Success, I need to provide multiple opportunities to demonstrate each skill. If I provide more than one opportunity for students to show me how they can “solve linear systems” (e.g., multiple activities and a test) and a student does not submit the third activity I assigned, it no longer makes sense to give them a zero on the concept “solving linear systems”. If I’ve already seen twice that the student can solve these systems, how can I assign a mark of 0 for that skill just because they missed one assignment? I cannot.
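The difference between the two markbooks can be sketched in a few lines (the structure and names here are my own illustration, not any real gradebook software): keyed by skill rather than by assignment, a missed piece of work is treated as absent evidence rather than a zero dragged into the average.

```python
# A markbook keyed by skill, not by assignment. None marks a missed submission.
markbook = {
    "solving linear systems": [82, 78, None],  # demonstrated twice; one activity missed
    "adding polynomials": [65, 88],
}

def skill_mark(evidence):
    """Average only the evidence the student actually produced.
    A missed assignment is missing evidence, not a zero."""
    seen = [score for score in evidence if score is not None]
    return sum(seen) / len(seen) if seen else None

for skill, evidence in markbook.items():
    print(skill, skill_mark(evidence))
```

Under this scheme the missed third activity leaves the “solving linear systems” mark grounded in the two demonstrations that exist; a zero would only appear if there were no evidence of the skill at all – and that is a different conversation, recorded under learning skills, not achievement.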

We need to shift away from the belief that the marks we assign are a reflection of work and work ethic. They are not. The marks are simply a reflection of how well students can demonstrate skills. Nothing more, nothing less. The learning skills marks on a report card are where we comment on these things. And that’s a whole other story: why are the marks more important than the learning skills? Why are the learning skills less prominent on a report card? Will Richardson led a good discussion on his blog about the importance of learning skills vs. academic skills.

Dan Meyer shows how he does assessment in his math class in this video. It fits within this “new” mindset of assessment. His weekly testing model doesn’t have to occur in every class, but he shows how his markbook works really well within this “new” mindset.