The Early Warning Radar System: Meaningful, targeted, and responsive formative assessment that busy teachers can actually use
By definition, formative assessment is supposed to provide data that teachers use to identify needs, modify instruction, and provide support.
Formative assessment refers to a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course. Formative assessments help teachers identify concepts that students are struggling to understand, skills they are having difficulty acquiring, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support. (Glossary of Education Reform)
But how many teachers actually do this on a regular basis? How many teachers can? I'm not talking about a token event that occurs just every now and then. I'm talking about consistent formative assessment that is conducted throughout the year and across the entire curriculum of a course. That proposition takes a lot of time, and time isn't something that teachers have in great abundance. Nonetheless, I am here to report that, with the right tools, consistent and comprehensive formative assessment can not only be done, but it can be fun and motivating as well.
The tools I'm going to discuss in this post include Google Classroom, Kahoot!, and Gmail. Let's get started.
To implement this system, you will need to use Google Classroom to post comprehensive lesson plans, complete with objectives and instructions, as well as links and attachments for any required resources. It is important to understand that the "Assignment" is the best option for a lesson plan, as it can be associated with a Google Calendar date. (Although it will appear as a "due date," which isn't necessarily ideal.) It is also important to note that you can quickly obtain and copy a URL that will send students to a specific Google Assignment post (i.e. lesson plan). More on that later.
Anyone who knows me or who follows my blog knows how much I love Kahoot. I have developed a Kahoot for virtually every concept I teach. (I won't just have a big Kahoot for a unit review. Rather, if a unit has eight concepts, then that unit will have eight Kahoots.)
I therefore use a Kahoot to provide a quick check-in at the end of each topic. I usually run my Kahoots at the end of a class or at the beginning of the next class. They take about 15 to 20 minutes, and they provide both the students and the teacher with excellent and immediate formative feedback. They are also well designed to allow the teacher to provide timely, targeted remediation on any points of confusion revealed by the check-in. After the results for a given question have been revealed, the teacher can even expand an image that may have been used in that question in order to examine the issue before moving on to the next question.
At the end of the Kahoot, the teacher can download a comprehensive spreadsheet of the class results. For this reason, it is imperative that students sign in using their real names. I make a habit of downloading the Kahoot results, and then quickly seeing who might have failed the Kahoot. Anyone who failed a given Kahoot will receive an Early Warning Radar Bulletin via Gmail.
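As a rough sketch of what that triage step can look like, here is a short Python script that scans a downloaded results file and flags anyone below a pass line. The file name, the column headers, and the 50% threshold are all my own assumptions; adjust them to match the spreadsheet Kahoot actually gives you (you may need to export or convert it to CSV first).

```python
import csv

PASS_THRESHOLD = 50.0  # percent; set this to your own pass line

def find_struggling_students(results_csv, threshold=PASS_THRESHOLD):
    """Return (name, score) pairs for students below the threshold,
    lowest scores first, from a CSV of Kahoot-style results."""
    flagged = []
    with open(results_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = row["Player"]            # assumed column header
            score = float(row["Score (%)"])  # assumed column header
            if score < threshold:
                flagged.append((name, score))
    return sorted(flagged, key=lambda pair: pair[1])

# Each flagged student gets an Early Warning Radar Bulletin:
# for name, score in find_struggling_students("kahoot_results.csv"):
#     print(f"{name}: {score:.0f}%")
```

Because students sign in with their real names, the flagged list maps directly onto your class list and your email addresses.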
An email can be sent out from within Google Classroom, or from within Gmail. I simply copy, paste, and modify a version of the email presented below, and send it out to anyone who failed the Kahoot. If the circumstances warrant it, I might also carbon copy the parents. I've also been known to send out motivational emails to students (and, at times, their parents) who demonstrate exceptional brilliance within a Kahoot. I should point out that the email will contain live links back to the lesson plan that resides within Google Classroom. If you recall, that lesson plan will contain attachments to any resources associated with the lesson.
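Since the bulletin is mostly boilerplate with a few details swapped in, the copy-paste-modify step can be sketched as a simple fill-in template. This is just an illustration of the idea: the function name and placeholders are hypothetical, and the lesson-plan URL is the one you copy from the Google Classroom assignment post, as described earlier.

```python
from string import Template

# Hypothetical bulletin template; $name, $topic, and $lesson_url
# are the only parts that change from student to student.
BULLETIN = Template("""Subject: Early Warning Radar Bulletin

Hi $name,

If you are receiving this email it is because our recent Kahoot
check-in indicates that you could benefit from reviewing the
material on $topic.

Lesson plan (with all resources attached): $lesson_url
""")

def draft_bulletin(name, topic, lesson_url):
    """Fill in the template for one student; the result is pasted
    into Gmail or Google Classroom and sent."""
    return BULLETIN.substitute(name=name, topic=topic, lesson_url=lesson_url)
```

In practice I paste the filled-in text into a draft, tweak the wording if needed, and add a carbon copy to parents when circumstances warrant it.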
As I have often pointed out, setting up technical infrastructure such as Google Classroom lessons and Kahoot check-ins does indeed require a lot of time up front. However, the dividends it pays are great, and the teacher will get to draw those dividends for years to come. Once in place, this outstanding form of formative feedback, timely remediation, and student/parent communication can become a practical, realistic, and regular part of a teacher's program.
Give it a try, and let me know how it goes.
Early Warning Radar Bulletin
If you are receiving this email it is because our recent Kahoot check-in indicates that you could benefit from reviewing the material on consumer and producer surplus, socially optimal outcome, and market intervention (price ceilings and price floors).
I'm going to suggest that you review this material at your earliest convenience, and then try doing the Market Intervention Quiz in the Mastery Learning Lab. [Unit #1: Economic Theory & Consumer Behaviour --> The Mechanics of the Market System --> Market Intervention Quiz].
"Formative Assessment Definition." The Glossary of Education Reform, 29 Apr. 2014, http://edglossary.org/formative-assessment/. Accessed 11 Oct. 2017.
What's all this about index assessment?
A basic definition of an index is "an indicator, sign, or measure of something." A more thorough analysis of the term might reveal a definition such as, "a number derived from a series of observations and used as an indicator or measure." In either case, these definitions really do serve to describe what I mean by a new approach to assessment that I've been experimenting with in recent years. Thus, I have come to call it index marking or index assessment. As the years have gone by, I've been incorporating more and more index assessment into my assessment mix, and I've done this primarily because both technology and connectivity have made this new form of assessment possible.
Essentially, "index assessment" describes an assessment that is based on some sort of running total. That running total is built from numerous and ongoing collections of data. The index value is formative during a given unit of study, but becomes summative at the end of the unit. This allows both the teacher and the student to derive all of the value to be gained from formative assessment during the unit (such as low-stress check-ins, immediate feedback to students, and data to inform instructional next steps). However, students also enjoy one additional but highly critical aspect of index assessment: motivation. Knowing that an index mark will eventually become summative, those students who are more motivated by marks will still be motivated not only to complete an index assessment, but to give it their best effort as well. In my experience, motivation has been a perennial problem with formative assessment, and no amount of conversations, speeches, lectures, reminders, or even infographics would solve this problem.
Over the years, I have made great efforts to communicate the value of formative assessment to my students. While these efforts would make a modest impact on completion rates, I would still never obtain anything close to a 50% completion rate on formative assessments. With the introduction of the index approach, my completion rates are now well over 90% for the exact same assessments. Moreover, the overall level of achievement on associated summative assessments (e.g., the test at the end of the unit) has also increased.
Putting Index Assessment Into Practice
At the moment, I have two index assessments that account, in total, for 15% of a student's overall grade in my courses. Specifically, these index assessments are the Ongoing Triangulation Index (OTI) and the Mastery Learning Lab (MLL). I have expanded on each of these forms of assessment in their own respective posts. Both of these index assessments can essentially be thought of as marks that are recorded during a given unit of study, remain observable by both the teacher and the student during the unit, and are always available to be improved upon through subsequent efforts made by the student. In other words, the student can respond to his mark in ways that can actually improve his mark during the unit.
The critical point is that an index assessment is formative during the unit, but becomes summative at the conclusion of the unit. Naturally, it is critical that students understand this at the outset of the course. Given that index assessment is both new and somewhat unorthodox, this information needs to be communicated both verbally and in writing, repeatedly, to both students and parents. (More on communication to students and parents is explored below.)
The Strength of Index Assessment: Distributed Practice
Distributed practice refers to the long-noted beneficial effect of spacing out practice across numerous, smaller periods of time. In other words, it is better to practice something for 15 minutes a day across four days than to practice for one hour on one day. This effect was first studied by Hermann Ebbinghaus in 1885. Ebbinghaus discovered that he could successfully remember more material in less time if he spaced out his studying rather than concentrating the same amount of time into fewer sessions. This effect is also known as the "spacing effect," and it has held up remarkably well over more than a century of study. In fact, I would dare say that most musicians and athletes naturally discover and take advantage of the spacing effect in their practice and training, as they come to see that skills are acquired far more effectively through practice spread out over extended periods than through a single concentrated session.
The Logistics of Index Assessment
Index assessments ask students to repeatedly take the same (or similar) assessments over an extended period of time. Moreover, the student is encouraged to repeat attempts with the knowledge that individual attempts will not count towards a final grade in the short run, but that overall achievement will indeed count towards a final grade in the long run. For such assessment to be practical or realistic, it must reside within some form of powerful educational technology that tracks the student's progress. Thus, we must use the appropriate technology, and we must set the scoring preferences in the most appropriate way. I use CourseSites for my Mastery Learning Lab. Specifically, I use the "Tests, Surveys, and Pools" feature made available under "Course Tools." I also make sure to organize my index assessments under unit titles, to manage the columns under the Grade Center so that the quizzes progress in order, and to set up an "Average Column" at the end of each unit. I set the preferences for each individual assessment so that the "highest" grade on each assessment is counted towards the overall mark within the unit - not the "average" or "most recent" grade. (The high score option locks high scores in place so that students can repeat attempts on quizzes or exercises without fear of losing a previously attained high score.)
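To make the "highest grade" scoring rule concrete, here is a minimal sketch in Python of how such an index could be computed by hand, assuming each quiz simply stores a list of attempt scores. CourseSites does this bookkeeping for you; the function names here are my own.

```python
def quiz_index(attempts):
    """Highest score across all attempts on one quiz, so repeat
    attempts can never lower a previously attained high score."""
    return max(attempts) if attempts else 0.0

def unit_index(quiz_attempts):
    """Average of the highest scores across all quizzes in a unit.
    Unattempted quizzes count as zero, as they do at unit's end."""
    highs = [quiz_index(a) for a in quiz_attempts.values()]
    return sum(highs) / len(highs) if highs else 0.0

# A student with three attempts on Quiz 1, one on Quiz 2, and none
# on Quiz 3 gets (80 + 90 + 0) / 3, roughly 56.7:
# unit_index({"Quiz 1": [60, 80, 75], "Quiz 2": [90], "Quiz 3": []})
```

This is why the index stays formative during the unit: every new attempt can only raise the running value, never lower it, right up until the unit average is recorded as the summative mark.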
I also prefer to set up a "Smart View" for each section of a given course, as opposed to setting up an entirely new CourseSite for different sections of the same course. These Smart Views allow me to see each section in alphabetical order, which greatly assists when transposing the marks from CourseSites into my school's grade management system. At the end of the unit, I will then record the mark within the Unit Average column as a summative mark. Literally, this means that I will wait until I am entering the mark for the unit's culminating assessment (e.g., the unit test), and I will then set up a separate column entitled something like, "Unit #2, Mastery Learning Lab."
Other logistical advice that I would highlight includes the need to collect lots of data and to clear the slate at the end of the unit. Given that indexes are based on collections of data, an index mark should be based on numerous assessments that each contribute to the overall index value over the course of a unit. An index mark should then be reset at the end of each unit, allowing a new value to be generated for each successive unit. This is achieved in different ways depending on the digital utility that one might be using to administer an index assessment.
Why not just use formative assessments?
As I've mentioned elsewhere, formative assessments are great, but they're not perfect. Let's just acknowledge two elephants in the room when it comes to formative assessment: i) students often don't do them, and, ii) when they do, they don't tend to give them their best effort.
Thus, an index mark carries on as a fluctuating, formative value throughout each unit, but carries the promise of being recorded as a summative mark at the end of each unit. This provides the student with an extended period of time in a low-stress environment to master their knowledge and skills regarding a given topic, but then rewards the student's diligence and achievement with a summative mark that will actually make an impact on his overall grade.
How do you explain it to your students?
This is exactly what I tell my students regarding the quizzes in my MLL:
The quizzes in the Mastery Learning Lab are technically considered "formative" during the unit because they are not counted toward your mark during the unit. Moreover, these are mastery quizzes that you can take over and over again during the unit to help you develop your understanding of the topic. Finally, they will help both you and me identify areas of strength and weakness in your understanding of the topics as we move through the unit. However, at the conclusion of the unit, these quizzes will become "summative" because the overall average for a given unit will indeed be counted toward the calculation of your grade. Bear in mind that unattempted quizzes will receive marks of zero as of the conclusion of the unit.
But... who can do all that marking?
I completely understand the skepticism that one might naturally have regarding index marking. It sounds like some airy-fairy, pie-in-the-sky initiative that only a partial-load teacher could possibly pursue. I will point out, however, that I am a full-load teacher, and have been for more than 25 years. Index assessment is quite possible, but only with the appropriate technology and connectivity.
In other articles, I have examined my index assessments in greater detail, and I would invite anyone who is curious about them to read more about the Mastery Learning Lab (MLL) and the Ongoing Triangulation Index (OTI).
As might be evident from the above discussion, index assessment is inextricably tied to the idea of mastery learning. As I've mentioned before, genuine mastery learning requires unlimited opportunities to revisit material and then subject one's understanding of its content to an objective assessment until that assessment indicates that the material has been mastered. (It is all too easy for students to revisit material and then believe that they understand it, but one's sense of understanding can at times be found wanting when it is subjected to an empirical, objective test.)
In the final analysis, it's probably easiest to think of an index assessment as a summative assessment that both the student and the teacher can observe and improve upon as it develops. This provides a significant contrast to typical summative assessments because, with most summative assessments, by the time the teacher or the student sees the mark, it's too late for either of them to do anything about it.
To be sure, it takes a while for students, teachers, and parents to get the gist of index marking. It's not quite formative, and it's not quite summative... it's a bit of both. I would like to think that it's the best of both, as I believe that index assessment allows students to enjoy the low-pressure feedback and remediation associated with formative assessments, while also enjoying the motivation, acknowledgement, and reward associated with summative assessments.
Bahrick, Harry P; Phelps, Elizabeth. Retention of Spanish vocabulary over 8 years. Journal of Experimental Psychology: Learning, Memory, & Cognition. Vol 13(2) Apr 1987, 344-349.
Bloom, Kristine C; Shuell, Thomas J. Effects of massed and distributed practice on the learning and retention of second-language vocabulary. Journal of Educational Research. Vol 74(4) Mar-Apr 1981, 245-248.
Donovan, John J; Radosevich, David J. A meta-analytic review of the distribution of practice effect: Now you see it, now you don't. Journal of Applied Psychology. Vol 84(5) Oct 1999, 795-805.
Ebbinghaus, H. Memory: A contribution to experimental psychology. New York: Dover, 1964 (Originally published, 1885).
Rea, Cornelius P; Modigliani, Vito. The effect of expanded versus massed practice on the retention of multiplication facts and spelling lists. Human Learning: Journal of Practical Research & Applications. Vol 4(1) Jan-Mar 1985, 11-18.
Willingham, Daniel T. Allocating Student Study Time: "Massed" versus "Distributed" Practice. http://www.aft.org/periodical/american-educator/summer-2002/ask-cognitive-scientist#sthash.g0xfsxpB.dpuf
If you're a teacher who requires your students to write essays, I highly recommend that you consider implementing a practice that I've been doing in my law course for a number of years. Essentially, I write an essay in front of my students. I call this annual custom the "80-Minute Challenge."
The 80-Minute Challenge is a demonstration that I will do once a year during an essay-writing work period. During this challenge, I attempt to research and write a complete 800 to 1200 word essay within 80 minutes. I do not know the topic going into the challenge. Rather, the students in the class brainstorm a selection of positions on a variety of legal issues. A student volunteer writes the propositions up on the board, and then we number the propositions (Proposition 1, Proposition 2, Proposition 3, etc.). Finally, a random number generator picks one of the topics. Once the topic is selected, I begin to write the essay while a timer counts down the eighty minutes. Everything I do during the 80-Minute Challenge is projected onto the screen at the front of the class, so students can turn to observe my progress throughout the class. At the end of class, the students can see how far I've progressed. By that same afternoon or evening, I share the completed essay with the class. Admittedly, over the years I have found that some essays go a little better than others, but most essay topics end up being quite a lot of fun to research and write.
Too often, students put off essay projects out of a sense of fear and dread. Particularly desperate students will even resort to plagiarism in order to avoid facing the task of legitimately writing an original essay. The purpose of the 80-Minute Challenge is to demonstrate that essay writing is not just an academic skill, but a fun, stimulating, and exciting endeavour. Embarking on a good essay is actually very similar to setting out on a treasure hunt or rally. The act of searching for and locating compelling sources that support one's thesis is actually a great deal of fun, but students can only access that sense of fun after they learn the requisite skills associated with essay writing. For too long, schools have failed to expose students to the sheer fun of essay writing. Once we add in that sense of fun, the technical skills associated with essay writing become something that students are a little more inspired to learn, and the actual writing of essays becomes something that students start to embrace.
Various points of skill, strategy, and technique are demonstrated and discussed during the 80-Minute Challenge. Students see actual demonstrations of:
Essay writing is one of the most critical skills we demand of our students in both high school and university, yet the fact of the matter is that essay writing is also one skill that we as educators do not actually demonstrate to our students. Contrast how we teach essay writing to almost any other skill. Do we not demonstrate everything else, from physics formulas to financial statements to jump shots? I would argue that we continue to see both poor essay writing and high rates of plagiarism because essay writing is the one skill that we attempt to teach without actually demonstrating.
Admittedly, writing an essay in front of our students does seem a bit extreme... possibly even eccentric, but I maintain that this is simply because we've never been able to provide such a demonstration until very recently. These days, if a teacher has i) a computer, and ii) an LCD projector in their classroom, then they have the ability to take the 80-Minute Challenge. Having said that, this final point does give rise to what may be the most important point of introspection a teacher could ask themselves on the topic: if they have the tools, and they have the time, then why wouldn't they? This question will be explored more fully in an upcoming article.
The Art of Argument: Exploring the foundations of essay writing.