The New Learner Lab

Deconstructing two approaches to triangulation: The OTI and the CWP

4/12/2018

Art Lightstone
I know I've said it before, but teachers in Ontario are still scratching their heads over the question of how to collect evidence of "observations" and "conversations," and then integrate that evidence into a final grade for a student. 

The confusion stems from a paragraph found on page 39 of the Ministry of Education's Growing Success policy document. The paragraph states:

Evidence of student achievement for evaluation is collected over time from three different sources – observations, conversations, and student products. Using multiple sources of evidence increases the reliability and validity of the evaluation of student learning.

Sadly, teachers have not found much guidance from the Ministry on the implementation of this directive. In particular, teachers are left wondering about two critical questions:

  1. How might one actually record or collect evidence of conversations and observations?
  2. If it were collected, how would one use this data to inform a student's final grade?

Perhaps even more troubling is the fact that teachers cannot find a great deal of consensus, even amongst themselves, about one crucial point: namely, how do we actually differentiate between a "conversation," an "observation," and a "product"?

To be sure, in our own minds we are all quite clear on the distinction between the three, and we would scarcely imagine any particular debate on the matter. It is only when we subject our individual perceptions of these elements to the scrutiny of our colleagues that we come to realize that there is indeed a great variety of perspectives regarding products, conversations, and observations.

Most teachers probably like the spirit of triangulation, but just because we like it doesn't mean we do it. In fact, a recent NewLearnerLab Twitter survey revealed that most teachers do not collect evidence of observations or conversations at all.

Personally, I have developed and experimented with two distinct triangulation systems, both of which collect conversation and observation data and then integrate this data into a student's final grade. I called my first system the Ongoing Triangulation Index (OTI), and my second the Classwork Portfolio (CWP).

I have discussed the logistics of each method in separate articles, but after spending a good deal of time implementing both systems in my program, I am now in a better position to present a comparative review of the two.

Ongoing Triangulation Index 

Description:


The Ongoing Triangulation Index is a value based on observations and conversations recorded in class. The OTI relies on the ClassDojo application to capture these observations and conversations and then curate them within a single record for each student.

Each observation or conversation is time-stamped, dated, and accompanied by a brief explanation and the curriculum associated with it. Within this index, students can accumulate both positive and negative observations, the sum of which determines their overall Index value.

The OTI counts for 5% of the overall mark in a course, and a number of OTI observations are recorded each term.
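
For readers who like to see the arithmetic, here is a minimal sketch, in Python, of how an OTI value might be computed from a running record of observations and then scaled into the 5% course component. The point values, the 0-10 band, and the scaling function are my own illustrative assumptions; they are not prescribed by ClassDojo or by the OTI itself.

```python
# A minimal sketch of the OTI arithmetic. The point scheme and scaling
# below are illustrative assumptions, not part of the OTI as described.

def oti_contribution(observations, weight=5.0):
    """Convert a record of observations into a course-mark contribution.

    observations: list of dicts such as
        {"date": "2018-04-12", "note": "...", "points": 1}
    Positive observations add points; negative observations subtract them.
    weight: the OTI's share of the overall course mark (5% here).
    """
    total = sum(obs["points"] for obs in observations)
    clamped = max(0, min(total, 10))   # hold the index within a 0-10 band
    return weight * clamped / 10       # scale the index to its 5% weight


record = [
    {"date": "2018-04-02", "note": "Strong contribution to fiscal policy discussion", "points": 1},
    {"date": "2018-04-05", "note": "Off-task during group work", "points": -1},
    {"date": "2018-04-12", "note": "Explained the balanced budget multiplier", "points": 2},
]
print(oti_contribution(record))  # 1.0 out of a possible 5.0
```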

Strengths:

Allows the teacher to capture demonstrations of learning, understanding, appreciation, etc. that occur spontaneously within the classroom.

Generates a comprehensive record of observations and conversations for each student.

This record can contain detailed information regarding the specific learning goals associated with each observation. Example: “Demonstrated a deep understanding of how Keynesian fiscal policy could be used to fill a fiscal gap using the balanced budget multiplier.”  

The above record is updated instantly and can be made accessible to both parents and students.  

Challenges:

The OTI system uses the ClassDojo app, which lends itself to capturing evidence of observable behaviours. Thus, this system can potentially be criticized as assigning a mark for learning skills and/or work habits.

This system must record both negative and positive observations in order to generate a meaningful value that can then be translated into a mark.

While the OTI facilitated the recording of both conversations and observations, conversations never figured into the index value.

Classwork Portfolio                

Description:

The Classwork Portfolio uses Google Classroom, Kahoot, and Edvance (an online student information system and grade manager) to generate a portfolio of the various activities, exercises, and check-ins that are pursued in class on a day-to-day basis. Each activity is assessed and given a mark between zero and four based on relative quality.

For those in-class activities and exercises that require deeper qualitative assessment, a streamlined Canada and World Studies, 2015 Achievement Chart is used to assess the varying levels of achievement.

The Classwork Portfolio counts for 5% of the overall mark in a course. Thus, a number of portfolio marks are recorded for each student within each reporting period.  
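
As a rough sketch of the arithmetic, the example below averages a student's 0-4 portfolio marks and scales the result to the portfolio's 5% weight. The conversion shown is my own assumption; in practice, the calculation lives inside the online grade manager.

```python
# A minimal sketch of the Classwork Portfolio arithmetic: each activity is
# marked 0-4, and the average is scaled to the portfolio's 5% weight.
# The scaling shown here is an illustrative assumption.

def cwp_contribution(marks, weight=5.0):
    """marks: the 0-4 activity marks recorded over a reporting period."""
    if not marks:
        return 0.0
    average = sum(marks) / len(marks)  # average level on the 0-4 scale
    return weight * average / 4        # scale to the 5% course component


print(cwp_contribution([3, 4, 2, 4]))  # average 3.25 -> 4.06 out of 5.0
```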

Strengths:

The CWP allows the teacher to capture progress that occurs in class on a daily basis, and then record the observation of that progress in a way that impacts the student’s mark immediately.

Students come to realize that they can exert a positive influence on their mark during each and every class.

The CWP tends to focus more on the manifest result of a student’s learning than it does on student behaviour or learning skills. (Although, depending on one's perspective, this could be seen as a weakness.)

The teacher can include detailed information regarding the specific learning goals associated with each observation. Example: “Achieved the 80% threshold on today’s check-in on using Keynesian fiscal policy to fill a fiscal gap using the balanced budget multiplier. Silver medal... well done!”

As this information is recorded in an online grade manager, it is made accessible to parents and students immediately.

Regular check-ins are designed to make it easier for students to exert a positive influence on their mark, while limiting the potential for a negative impact.

Challenges:

The CWP system can generate a large quantity of micro-assessments that can rather quickly outpace a teacher’s ability to mark. Teachers are therefore advised to limit CWP items to activities and exercises that can be assessed rather quickly: preferably, during the class in which they are assigned.  

The CWP generates a rather disconnected record of observations and conversations for each student. This record is brought together, somewhat loosely, within our online grade manager.

A Design Thinking Approach to the Problem of Learning: Using a Learner Profile Study to Gather Data and Revise Instruction

1/10/2018

Art Lightstone
I've blogged before about the importance of gathering student feedback for improving program design and delivery. However, the methods that I discussed in that earlier piece made use of anonymous online class surveys and suggestion boxes to gather student input. These anonymous tools are great for obvious reasons. The security of anonymity allows students to speak frankly and provide input that they might not otherwise be comfortable giving. Having said that, these approaches lack one key feature when observed from within a personalized learning paradigm: namely, the ability to get to know students on an individual basis.

Introducing the Learner Profile 

A learner profile is not necessarily a new idea, but its use is gaining popularity amongst educators, especially in the current pedagogical era, which places increased emphasis on personalized learning. A learner profile essentially asks students to discuss their interests, their goals, and their learning preferences. However, from the standpoint of getting to know each learner on an individual basis, the learner profile's benefit would be completely negated were it to be anonymous. Thus, getting students to provide genuine input that can truly help educators develop their programs does represent a bit of a challenge, but it is a challenge that can certainly be addressed when approached with sensitivity, tact, and a clear mandate to help each and every student.

Attaining an Understanding of Student Learning Preferences

I think it is both fair and reasonable to suggest that students know best how they prefer to learn. While learning preference may or may not be correlated with learning success, I think it is also fair to say that we could not begin to examine this correlation until we first obtain an understanding of learner preference. I set out to do this at the beginning of the second term of the 2016-2017 academic year. I did it partly out of inspiration from a PD session that I attended during our beginning-of-term startup, and partly out of inspiration from my students. At that time, my students were somewhat adrift in a veritable sea of learning styles, approaches, and pedagogical ideologies. As someone who not only relies on research to guide my practice, but who also teaches research methods to my students, I naturally set out to apply a field study approach to my analysis of learner preferences. These days, this particular approach might be described as a "design thinking" approach. Call it what you will... I was pretty happy with the results. So happy, in fact, that I repeated my learner profile study this year at the exact same point of the academic year - the first class of our second term. Again, this effort was triggered by an inspiring PD session that kicked off our second term.

Learner Profile Methodology: A Two-Pronged Study 

My learner profile study consisted of two diagnostic tools: a survey, followed by an interview. After these two phases were complete, I coded my findings with no particular view to shaping or influencing the results. I will briefly describe each phase of the study below.

Phase I: The Learner Profile Survey

To implement my learner profile study, I created a short, open-ended Learner Profile in a Google Doc that asked five fairly straightforward questions. I then shared this document with all of my students over Google Classroom. The Learner Profile asked my students to tell me about themselves, their interests, and their learning preferences. The profile featured the following areas of investigation:

  1. Interests (both academic and non-academic)
  2. Academic Goals (ex. career aspirations, universities, specific programs, etc.)
  3. Academic Challenges (ex. writing, researching, public speaking, focusing in class, test writing, etc.)
  4. Learning Preferences (i.e. approaches and activities that really engage your interest and participation while also supporting your learning and understanding)
  5. Learning Non-Preferences (i.e. approaches and activities that don’t really engage your interest and participation or effectively support your learning and understanding).

The learner profile was introduced with the following description: 
Help me get to know you as a learner: The Learner Profile

Now that you've had a term in my course, I'd like you to reflect on the aspects of my teaching and your learning that work well together, as well as those areas that might not be a perfect fit. Please take a moment to fill out the attached "Learner Profile," and then submit it to this Google assignment. 

While I will also be distributing an anonymous survey in the near future, this particular document is neither anonymous nor a survey. The Learner Profile is genuinely meant to help me get to know you as a learner so that I can perhaps emphasize those approaches that work best for you, and minimize those approaches that might not best support your learning.

Phase II: The Interview

Students were asked to complete the learner profile at home, or in class if they did not finish it at home. I then sat down with each and every student to go over their learner profile. I did this primarily to clarify their feedback and to probe deeper into their responses. For example, if a student said that he liked a "hands-on approach" to learning, then I would typically ask what he meant by a hands-on approach. In such cases, I would often ask for examples of the hands-on learning the student had experienced at any point in his schooling.


I then set about coding and categorizing the feedback that I gathered from my students into the various themes that emerged. 
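
For those curious about the mechanics of this step, here is a minimal sketch of the kind of tallying involved once each response has been tagged with one or more theme codes. The codes shown are hypothetical examples, not my actual codebook.

```python
# A minimal sketch of tallying coded learner-profile responses into themes.
# The theme codes below are hypothetical examples.

from collections import Counter

coded_responses = [
    {"student": "A", "codes": ["hands-on", "kahoot"]},
    {"student": "B", "codes": ["traditional", "dislikes-long-lectures"]},
    {"student": "C", "codes": ["independent", "hands-on", "kahoot"]},
]

theme_counts = Counter(code for response in coded_responses
                       for code in response["codes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")   # themes ranked by frequency
```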

Phase III: Analyzing the Findings

A comprehensive analysis of the data that I collected from this study is definitely deserving of its own blog post, which I will provide in the future after incorporating the data gathered from this year's study. However, it is probably safe to divulge the general themes that emerged from last year's study. In no specific order, they are as follows:


  1. Most students fall into one of two general schools of learning preference: traditional learners or independent learners.
  2. Many students enjoy what they describe as "hands-on learning." This is often mentioned as a preference, yet with varying interpretations of what "hands-on" learning actually means.
  3. Kahoot: everybody loves it.
  4. Long lectures: nobody likes them.

A Final Word:

The learner profile is a particularly valuable tool that teachers can use to help DIG for student feedback, as it can do more for a teacher's professional development and program design than almost anything else. The DIG philosophy highlights the fact that the data gathered from a learner profile can help to i) diagnose problems early on, ii) identify areas of strength and weakness in students, and iii) generate new approaches for both instruction and assessment. In the final analysis, I feel strongly that the learner profile must serve as the foundation of any genuine attempt to pursue personalized learning in the classroom. Moreover, I believe the profile itself must be followed by a diligent analysis of, and honest reflection upon, the knowledge gained from the effort. The data is right there for the taking - just as long as one is open to gathering it, listening to it, and acting upon it. The learner profile invariably honours the student while informing the teacher, and that, in my view, is a win-win proposition.

If you're a teacher who has an interest in personalized learning, then I would encourage you to explore implementing the learner profile in your program. At the very least, you'll get to know your students better, and, at the very best, you may just reinvent your practice - helping you to become an even more effective teacher. If you would like to give the learner profile a try, you are welcome to use the learner profile document attached below as a starting point. Please feel free to use or revise this document as you wish. If you do give it a try, please do drop me a line to let me know how it goes.
Attachment: learner_profile.pdf (58 kb)


Exploring the Power of Index Assessment

10/23/2016

Art Lightstone

What's all this about index assessment?

A basic definition of an index is "an indicator, sign, or measure of something." A more thorough analysis of the term might reveal a definition such as, "a number derived from a series of observations and used as an indicator or measure." In either case, these definitions describe the new approach to assessment that I've been experimenting with in recent years, which I have come to call index marking or index assessment. As the years have gone by, I've been incorporating more and more index assessment into my assessment mix, primarily because technology and connectivity have made this new form of assessment possible.

Essentially, "index assessment" describes an assessment that is based on some sort of running total. That running total is based on numerous and ongoing collections of data. However, the index value is formative during a given unit of study, but becomes summative at the end of the unit. This allows both the teacher and the student to derive all of the value to be gained from formative assessment during the unit (such as low-stress check-ins, immediate feedback to students, and data to inform instructional next steps). However, students also enjoy one additional but highly critical aspect of index assessment: motivation. Knowing that an index mark will eventually become summative, those students who may be more motivated by marks will still be motivated to not only complete an index assessment, but to provide it their best effort as well. In my experience, motivation has been a perennial problem with formative assessment, and no amount of conversations, speeches, lectures, reminders or even infographics would solve this problem. 
Over the years, I have made great efforts to communicate the value of formative assessment to my students. While these efforts would make a modest impact on completion rates, I would still never obtain anything close to a 50% completion rate on formative assessments. With the introduction of the index approach, my completion rates are now well over 90% for the exact same assessments. Moreover, the overall level of achievement on associated summative assessments (ex. test at the end of the unit) has also increased. 

Putting Index Assessment Into Practice


At the moment, I have two index assessments that account, in total, for 15% of a student's overall grade in my courses. Specifically, these index assessments are the Ongoing Triangulation Index (OTI) and the Mastery Learning Lab (MLL). I have expanded on each of these forms of assessment in their own respective posts. Both of these index assessments can essentially be thought of as marks that are recorded during a given unit of study, remain observable by both the teacher and the student during the unit, and are always available to be improved upon through subsequent efforts made by the student. In other words, the student can respond to his mark in ways that actually improve it during the unit.

The critical point is that an index assessment is formative during the unit, but becomes summative at the conclusion of the unit. Naturally, it is essential that students understand this at the outset of the course. Given that index assessment is both new and somewhat unorthodox, this information needs to be communicated both verbally and in writing, repeatedly, to both students and parents. (More on communication to students and parents is explored below.)
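
The sketch below models that lifecycle in miniature: attempts update the index freely during the unit, and a finalize step freezes the value as the summative mark. The class and method names are hypothetical; they simply mirror the behaviour described in this post.

```python
# A sketch of the formative-then-summative lifecycle of an index mark.
# The names here are hypothetical; the behaviour mirrors the post.

class IndexMark:
    def __init__(self):
        self.best = {}           # quiz name -> highest score so far
        self.finalized = False

    def record_attempt(self, quiz, score):
        """Formative phase: keep the highest score on each quiz."""
        if self.finalized:
            raise ValueError("Unit is closed; the mark is now summative.")
        self.best[quiz] = max(score, self.best.get(quiz, 0))

    def finalize(self, all_quizzes):
        """Summative phase: unattempted quizzes count as zero."""
        self.finalized = True
        scores = [self.best.get(quiz, 0) for quiz in all_quizzes]
        return sum(scores) / len(scores)


mark = IndexMark()
mark.record_attempt("Quiz 2.1", 60)
mark.record_attempt("Quiz 2.1", 85)   # improves on the earlier attempt
mark.record_attempt("Quiz 2.2", 70)
print(mark.finalize(["Quiz 2.1", "Quiz 2.2", "Quiz 2.3"]))  # (85 + 70 + 0) / 3
```

Creating a fresh IndexMark for each new unit would correspond to the "clear the slate" step discussed under the logistics below.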


The Strength of Index Assessment: Distributed Practice

Distributed practice refers to the long-noted beneficial effect of spacing out practice across numerous, smaller periods of time. In other words, it is better to practice something for 15 minutes a day across four days than to practice for one hour on one day. This effect was first studied by Hermann Ebbinghaus in 1885. Ebbinghaus discovered that he could remember more material in less time if he spaced out his study sessions rather than concentrating the same amount of study time into fewer occasions. This effect is also known as the "spacing" effect, and it has held up very well over more than a century of study. In fact, I would dare say that most musicians and athletes naturally discover and take advantage of the spacing effect in their practice and training, as they come to confront the fact that a learner cannot acquire a skill within a limited period of time nearly as well as they can by spreading their practice over extended periods of time.


The Logistics of Index Assessment

Index assessments ask students to repeatedly take the same (or similar) assessments over an extended period of time. Moreover, the student is encouraged to repeat attempts with the knowledge that individual attempts will not count towards a final grade in the short run, but that overall achievement will indeed count towards a final grade in the long run. For such assessment to be practical or realistic, it must reside within some form of powerful educational technology that tracks the student's progress. Thus, we must use the appropriate technology, and we must set the scoring preferences in the most appropriate way. I use CourseSites for my Mastery Learning Lab. Specifically, I use the "Tests, Surveys, and Pools" feature made available under "Course Tools." I also make sure to organize my index assessments under unit titles, to manage the columns under the Grade Center so that the quizzes progress in order, and to set up an "Average Column" at the end of each unit. I set the preferences for each individual assessment so that the "highest" grade on each assessment is counted towards the overall mark within the unit - not the "average" or "most recent" grade. (The high score option locks high scores in place so that students can repeat attempts on quizzes or exercises without fear of losing a previously attained high score.)
Screenshot: you can set the scoring preferences by clicking on the dropdown menu just to the right of a given assessment's column name within the Grade Center. Then click on "Edit Column Information." You will then be able to select "Highest Grade" beside "Score attempts using."

I also prefer to set up a "Smart View" for each section of a given course, as opposed to setting up an entirely new CourseSite for different sections of the same course. These Smart Views allow me to see each section in alphabetical order, which greatly assists when transposing the marks from CourseSites into my school's grade management system. At the end of the unit, I will then record the mark within the Unit Average column as a summative mark. In practice, this means that I will wait until I am entering the mark for the unit's culminating assessment (ex. unit test), and I will then set up a separate column entitled something like "Unit #2, Mastery Learning Lab."

Other logistical advice that I would highlight includes the need to collect lots of data and to clear the slate at the end of the unit. Given that indexes are based on collections of data, an index mark should be based on numerous assessments that each contribute to the overall index value over the course of a unit. An index mark should then be reset at the end of each unit, allowing a new value to be generated for each successive unit. This is achieved in different ways depending on the digital utility that one might be using to administer an index assessment.


Why not just use formative assessments?

As I've mentioned elsewhere, formative assessments are great, but they're not perfect. Let's just acknowledge two elephants in the room when it comes to formative assessment: i) students often don't do them, and ii) when they do, they don't tend to give them their best effort.

  1. Students tend not to do activities that are not associated with a summative mark. (Oh... did you think that was unique to your students? Nope! That's basically a universal law... and it doesn't matter how "interesting," "relevant," "fun," or "engaging" the work is. It's basic economics: time is scarce, so, all other things being equal, students will direct less effort toward school activities that are not associated with a summative mark.)
  2. Related to the point above, even if we do collect formative data, we must remember that we are observing the results of efforts towards an assessment that the student knew would not count. Again, all other things being equal, results on formative assessments will generally reflect a lower degree of effort and diligence than we would see on equivalent summative assessments. Thus, steering your classroom by observing formative assessment data is kind of like steering your car by looking through a foggy windshield.

Thus, an index mark carries on as a fluctuating, formative value throughout each unit, but carries the promise of being recorded as a summative mark at the end of each unit. This provides the student with an extended period of time in a low-stress environment to master their knowledge and skills regarding a given topic, but then rewards the student's diligence and achievement with a summative mark that will actually make an impact on his overall grade.

How do you explain it to your students?

This is exactly what I tell my students regarding the quizzes in my MLL:

The quizzes in the Mastery Learning Lab are technically considered "formative" during the unit because they are not counted toward your mark during the unit. Moreover, these are mastery quizzes that you can take over and over again during the unit to help you develop your understanding of the topic. Finally, they will help both you and me identify areas of strength and weakness in your understanding of the topics as we move through the unit. However, at the conclusion of the unit, these quizzes will become "summative" because the overall average for a given unit will indeed be counted toward the calculation of your grade. Bear in mind that unattempted quizzes will receive marks of zero as of the conclusion of the unit. 


But... who can do all that marking?

I completely understand the skepticism that one might naturally have regarding index marking. It sounds like some airy-fairy, pie-in-the-sky initiative that only a partial-load teacher could possibly pursue. I will point out, however, that I am a full-load teacher, and have been for more than 25 years. Index assessment is quite feasible, but only with appropriate technology and connectivity.

In other articles, I have examined my index assessments in greater detail, and I would invite anyone who is curious about them to read more about the Mastery Learning Lab (MLL) and the Ongoing Triangulation Index (OTI). 


In Summary:

As might be evident from the above discussion, index assessment is inextricably tied to the idea of mastery learning. As I've mentioned before, genuine mastery learning requires unlimited opportunities to revisit material and then subject one's understanding of its content to an objective assessment until that assessment indicates that the material has been mastered. (It is all too easy for students to revisit material and then believe that they understand it, but one's sense of understanding can at times be found wanting when it is subjected to an empirical, objective test.)

In the final analysis, it's probably easiest to think of an index assessment as a summative assessment that both the student and the teacher can observe and improve upon as it develops. This provides a significant contrast to typical summative assessments because, with most summative assessments, by the time the teacher or the student sees the mark, it's too late for either of them to do anything about it.

To be sure, it takes a while for students, teachers, and parents to get the gist of index marking. It's not quite formative, and it's not quite summative... it's a bit of both. I would like to think that it's the best of both, as I believe that index assessment allows students to enjoy the low-pressure feedback and remediation associated with formative assessments, while also enjoying the motivation, acknowledgement, and reward associated with summative assessments.




References:

Bahrick, Harry P.; Phelps, Elizabeth. Retention of Spanish vocabulary over 8 years. Journal of Experimental Psychology: Learning, Memory, & Cognition, Vol. 13(2), Apr 1987, 344-349.

Bloom, Kristine C.; Shuell, Thomas J. Effects of massed and distributed practice on the learning and retention of second-language vocabulary. Journal of Educational Research, Vol. 74(4), Mar-Apr 1981, 245-248.

Donovan, John J.; Radosevich, David J. A meta-analytic review of the distribution of practice effect: Now you see it, now you don't. Journal of Applied Psychology, Vol. 84(5), Oct 1999, 795-805.

Ebbinghaus, H. Memory: A Contribution to Experimental Psychology. New York: Dover, 1964 (originally published 1885).

Rea, Cornelius P.; Modigliani, Vito. The effect of expanded versus massed practice on the retention of multiplication facts and spelling lists. Human Learning: Journal of Practical Research & Applications, Vol. 4(1), Jan-Mar 1985, 11-18.

Willingham, Daniel T. Allocating Student Study Time: "Massed" versus "Distributed" Practice. American Educator, Summer 2002. http://www.aft.org/periodical/american-educator/summer-2002/ask-cognitive-scientist

