To my mind, the field study is the new essay. Don't get me wrong, I remain a staunch advocate for teaching essay writing (see the 80-Minute Challenge, the Art of Argument, and The Decline of the English Language), but I believe that students must understand how so much of the knowledge they are taught was actually generated in the first place. I think this will help them become not only better students, but also more critically minded members of society.
The benefits of teaching students about field studies, as well as involving students in the development of their own field studies, have themselves been the topic of scholarly investigation. Dr. Barbara Manner published the results of her own investigation of field studies as a pedagogical approach back in 1995. She discovered that involving students in the creation of original field studies revealed many benefits. "For students, field studies create opportunities for first-hand experiences that encourage critical thinking, long-term retention, transfer potential, positive attitudes towards science, appreciation for nature, and increased scientific curiosity" (Manner, 1995).
I have integrated field studies into my economics programs for at least a couple of decades now. In more recent years, I have implemented a more comprehensive field study project across all of my courses. I find that my students not only enjoy the field study project, but they become far more comfortable with the basic scientific method involved in designing a study, collecting data, and drawing inferences from that data.
Over the years, my students have discovered so many interesting - even shocking - phenomena through their own field investigations. Student field studies from my courses have generated data that would suggest:
These are just a few of the interesting inferences my students have discovered for themselves over the years. More importantly, while these students have been designing, implementing, and presenting their studies, they have also been examining and critiquing each other's studies in an effort to isolate possible flaws in study designs, such as post-hoc fallacies, false directions of causality, and composition errors.
Naturally, developing and implementing a field study is not something that students can do overnight. It is critical to first teach students what field studies are, and what they are not. In my program, I focus heavily on five main components of the field study: the issue, methodology, findings, inferences, and directions for further study. I have included a link below to an activity that helps students explore and summarize field studies before they set about designing their own study. This activity asks students to listen to online interviews between journalists and researchers discussing the findings of a new study. I have opted for this approach because it requires students to listen to an entire discussion without searching for, or cutting and pasting, information from a web page. Moreover, this activity helps students differentiate between field studies and other things that might easily be confused with a field study, such as an experiment or mere anecdotal observations.
Consider implementing a small field study activity or even a larger field study project in your program. You may be surprised what your students will discover.
Manner, Barbara Marras (1995). Field Studies Benefit Students and Teachers. Journal of Geological Education, 43(2), 128-131.
A Star Chamber is a modified fishbowl discussion activity wherein students in the class discuss a topic, and slowly get persuaded to join one side or the other as the discussion ensues.
The topic of the discussion is given ahead of time (this can be a matter of just minutes, or days). Brief articles may be provided outlining each position on an issue. However, students do not need to limit themselves to the material provided in their preparation for the discussion.
The Star Chamber starts out with a small group of students (relative to the class size). These students either volunteer for the first round, or they are chosen at random. The Star Chamber sits within an inner circle of chairs (anywhere from two to six chairs, with even numbers on each side). Only the students in the Star Chamber may speak. Students outside the Star Chamber must listen off to the side in the gallery until they are persuaded to join one side or the other. Once persuaded, a student may seat themselves on one of the two backbenches of the Star Chamber (see diagram above). After six minutes of discussion, an alarm will sound, and students on the backbench will trade places with their representatives in the Star Chamber.
As students enter the Chamber, they hand their Star Chamber ticket to the teacher (The Star Keep). Students earn a Round 2 ticket every time they coax students from the gallery to join their side’s backbench. (A pat on a shoulder indicates which student persuaded the newcomer.)
All Round 1 tickets must be redeemed before Round 2 tickets can be redeemed.
The side with the most participants at the end wins.
See the attached PDF below for blackline masters of the classroom setup and the Star Chamber entrance tickets. Enjoy!
What's all this about index assessment?
A basic definition of an index is "an indicator, sign, or measure of something." A more thorough analysis of the term might reveal a definition such as, "a number derived from a series of observations and used as an indicator or measure." Either definition serves to describe the new approach to assessment that I've been experimenting with in recent years; thus, I have come to call it index marking or index assessment. As the years have gone by, I've been incorporating more and more index assessment into my assessment mix, primarily because technology and connectivity have made this new form of assessment possible.
Essentially, "index assessment" describes an assessment that is based on a running total, which is itself built from numerous, ongoing collections of data. The index value is formative during a given unit of study, but becomes summative at the end of the unit. This allows both the teacher and the student to derive all of the value to be gained from formative assessment during the unit (such as low-stress check-ins, immediate feedback to students, and data to inform instructional next steps). However, students also enjoy one additional but highly critical aspect of index assessment: motivation. Knowing that an index mark will eventually become summative, those students who may be more motivated by marks will still be motivated not only to complete an index assessment, but to give it their best effort as well. In my experience, motivation has been a perennial problem with formative assessment, and no amount of conversations, speeches, lectures, reminders, or even infographics would solve this problem.
Over the years, I have made great efforts to communicate the value of formative assessment to my students. While these efforts made a modest impact on completion rates, I still never obtained anything close to a 50% completion rate on formative assessments. With the introduction of the index approach, my completion rates are now well over 90% for the exact same assessments. Moreover, the overall level of achievement on associated summative assessments (e.g., the test at the end of the unit) has also increased.
Putting Index Assessment Into Practice
At the moment, I have two index assessments that account, in total, for 15% of a student's overall grade in my courses. Specifically, these index assessments are the Ongoing Triangulation Index (OTI) and the Mastery Learning Lab (MLL). I have expanded on each of these forms of assessment in their own respective posts. Both of these index assessments can essentially be thought of as marks that are recorded during a given unit of study, remain observable by both the teacher and the student during the unit, and are always available to be improved upon through subsequent efforts made by the student. In other words, the student can respond to his mark in ways that can actually improve it during the unit.
The critical point is that an index assessment is formative during the unit, but becomes summative at the conclusion of the unit. Naturally, it is critical that students understand this at the outset of the course. Given that index assessment is both new and somewhat unorthodox, this information needs to be communicated both verbally and in writing, repeatedly, to both students and parents. (More on communication to students and parents is explored below.)
The Strength of Index Assessment: Distributed Practice
Distributed practice refers to the long-noted beneficial effect of spacing out practice across numerous but smaller periods of time. In other words, it is better to practice something for 15 minutes a day across four days than to practice for one hour on one day. This effect was first studied by Hermann Ebbinghaus in 1885. Ebbinghaus discovered that he could successfully remember more material in less time if he spaced out his study sessions, as opposed to concentrating the same amount of time into fewer occasions. This effect is also known as the "spacing" effect, and it has held up very, very well over more than a century of study. In fact, I would dare say that most musicians and athletes naturally discover and take advantage of the spacing effect in their practice and training, as they come to see that skills crammed into a single, limited period of time never develop nearly as well as skills built up through practice spread over extended periods.
The Logistics of Index Assessment
Index assessments ask students to repeatedly take the same (or similar) assessments over an extended period of time. Moreover, the student is encouraged to repeat attempts with the knowledge that individual attempts will not count towards a final grade in the short run, but that overall achievement will indeed count towards a final grade in the long run. For such assessment to be practical or realistic, it must reside within some form of powerful educational technology that tracks the student's progress. Thus, we must use the appropriate technology, and we must set the scoring preferences in the most appropriate way. I use CourseSites for my Mastery Learning Lab. Specifically, I use the "Tests, Surveys, and Pools" feature made available under "Course Tools." I also make sure to organize my index assessments under unit titles, to manage the columns in the Grade Center so that the quizzes progress in order, and to set up an "Average Column" at the end of each unit. I set the preferences for each individual assessment so that the "highest" grade on each assessment is counted towards the overall mark within the unit - not the "average" or "most recent" grade. (The high score option locks high scores in place so that students can repeat attempts on quizzes or exercises without fear of losing a previously attained high score.)
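The "highest grade counts" aggregation described above can be sketched in a few lines of plain Python. CourseSites performs this calculation internally; the quiz names and scores below are hypothetical, and included only to illustrate the logic (including the rule that unattempted quizzes count as zero):

```python
# Sketch of the "highest grade counts" unit average described above.
# CourseSites does this internally; all data here is hypothetical.

def unit_index(attempts, quizzes):
    """Average a student's best attempt per quiz across a unit.

    attempts: dict mapping quiz name -> list of scores (0-100) from
              that student's repeated attempts.
    quizzes:  every quiz in the unit; unattempted quizzes score zero.
    """
    best_scores = []
    for quiz in quizzes:
        scores = attempts.get(quiz, [])
        best_scores.append(max(scores) if scores else 0)  # zero if unattempted
    return sum(best_scores) / len(quizzes)

# One hypothetical student's attempts during a unit:
attempts = {
    "Quiz 2.1 Supply and Demand": [55, 70, 85],  # improved across three tries
    "Quiz 2.2 Elasticity": [90],
    # "Quiz 2.3 Market Equilibrium" was never attempted -> counts as zero
}
quizzes = [
    "Quiz 2.1 Supply and Demand",
    "Quiz 2.2 Elasticity",
    "Quiz 2.3 Market Equilibrium",
]

print(unit_index(attempts, quizzes))  # (85 + 90 + 0) / 3
```

Note how a weak early attempt (55) does no harm once a stronger attempt (85) is on record, which is exactly what removes the fear of repeating a quiz.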
I also prefer to set up a "Smart View" for each section of a given course, as opposed to setting up an entirely new CourseSite for different sections of the same course. These Smart Views allow me to see each section in alphabetical order, which greatly assists when transposing the marks from CourseSites into my school's grade management system. At the end of the unit, I will then record the mark within the Unit Average column as a summative mark. In practice, this means that I will wait until I am entering the mark for the unit's culminating assessment (e.g., the unit test), and I will then set up a separate column entitled something like "Unit #2, Mastery Learning Lab."
Other logistical advice I would highlight includes the need to collect lots of data and to clear the slate at the end of the unit. Given that indexes are based on collections of data, an index mark should be based on numerous assessments, each contributing to the overall index value over the course of a unit. The index mark should then be reset at the end of each unit, allowing a new value to be generated for each successive unit. How this is achieved depends on the digital utility used to administer the index assessment.
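The full life cycle of an index mark - rising with each new attempt during the unit, locked in as a summative mark at the unit's close, then reset for the next unit - can be sketched as follows. This is a simplified model of the behavior the grading platform provides, not actual CourseSites code, and the quiz names and scores are hypothetical:

```python
# Sketch of an index mark's life cycle: formative updates during the
# unit, a summative value at the close, then a clean slate.
# A simplified model only; names and numbers are hypothetical.

class IndexMark:
    def __init__(self):
        self.best = {}  # quiz name -> locked-in high score this unit

    def record_attempt(self, quiz, score):
        # A new attempt can only raise the locked-in high score.
        self.best[quiz] = max(score, self.best.get(quiz, 0))

    def current_value(self, quizzes):
        # Formative view during the unit: average of best scores,
        # with unattempted quizzes counting as zero.
        if not quizzes:
            return 0.0
        return sum(self.best.get(q, 0) for q in quizzes) / len(quizzes)

    def close_unit(self, quizzes):
        # Record the summative value, then clear the slate for the
        # next unit so a fresh index can be generated.
        final = self.current_value(quizzes)
        self.best = {}
        return final

mark = IndexMark()
mark.record_attempt("Quiz A", 60)
mark.record_attempt("Quiz A", 80)   # high score rises from 60 to 80
mark.record_attempt("Quiz B", 70)
summative = mark.close_unit(["Quiz A", "Quiz B"])
print(summative)  # 75.0, and the slate is now clear for the next unit
```

The reset in `close_unit` is the "clear the slate" step: each unit's summative mark is frozen, while the running index starts over from nothing.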
Why not just use formative assessments?
As I've mentioned elsewhere, formative assessments are great, but they're not perfect. Let's just acknowledge two elephants in the room when it comes to formative assessment: i) students often don't do them, and, ii) when they do, they don't tend to give them their best effort.
Thus, an index mark carries on as a fluctuating, formative value throughout each unit, but carries the promise of being recorded as a summative mark at the end of each unit. This provides the student with an extended period of time in a low-stress environment to master their knowledge and skills regarding a given topic, but then rewards the student's diligence and achievement with a summative mark that will actually make an impact on his overall grade.
How do you explain it to your students?
This is exactly what I tell my students regarding the quizzes in my MLL:
The quizzes in the Mastery Learning Lab are technically considered "formative" during the unit because they are not counted toward your mark during the unit. Moreover, these are mastery quizzes that you can take over and over again during the unit to help you develop your understanding of the topic. Finally, they will help both you and me identify areas of strength and weakness in your understanding of the topics as we move through the unit. However, at the conclusion of the unit, these quizzes will become "summative" because the overall average for a given unit will indeed be counted toward the calculation of your grade. Bear in mind that unattempted quizzes will receive marks of zero as of the conclusion of the unit.
But... who can do all that marking?
I completely understand the skepticism that one might naturally have regarding index marking. It sounds like some airy-fairy, pie-in-the-sky initiative that only a partial-load teacher could possibly pursue. I will point out, however, that I am a full-load teacher, and have been for more than 25 years. Index assessment is quite possible, but only with appropriate technology and connectivity.
In other articles, I have examined my index assessments in greater detail, and I would invite anyone who is curious about them to read more about the Mastery Learning Lab (MLL) and the Ongoing Triangulation Index (OTI).
As might be evident from the above discussion, index assessment is inextricably tied to the idea of mastery learning. As I've mentioned before, genuine mastery learning requires unlimited opportunities to revisit material and then subject one's understanding of its content to an objective assessment until that assessment indicates that the material has been mastered. (It is all too easy for students to revisit material and then believe that they understand it, but one's sense of understanding can at times be found wanting when it is subjected to an empirical, objective test.)
In the final analysis, it's probably easiest to think of an index assessment as a summative assessment that both the student and the teacher can observe and improve upon as it develops. This provides a significant contrast to typical summative assessments because, with most summative assessments, by the time the teacher or the student sees the mark, it's too late for either of them to do anything about it.
To be sure, it takes a while for students, teachers, and parents to get the gist of index marking. It's not quite formative, and it's not quite summative... it's a bit of both. I would like to think that it's the best of both, as I believe that index assessment allows students to enjoy the low-pressure feedback and remediation associated with formative assessments, while also enjoying the motivation, acknowledgement, and reward associated with summative assessments.
Bahrick, Harry P.; Phelps, Elizabeth. Retention of Spanish vocabulary over 8 years. Journal of Experimental Psychology: Learning, Memory, & Cognition, Vol. 13(2), Apr 1987, 344-349.
Bloom, Kristine C.; Shuell, Thomas J. Effects of massed and distributed practice on the learning and retention of second-language vocabulary. Journal of Educational Research, Vol. 74(4), Mar-Apr 1981, 245-248.
Donovan, John J.; Radosevich, David J. A meta-analytic review of the distribution of practice effect: Now you see it, now you don't. Journal of Applied Psychology, Vol. 84(5), Oct 1999, 795-805.
Ebbinghaus, H. Memory: A contribution to experimental psychology. New York: Dover, 1964 (Originally published, 1885).
Rea, Cornelius P; Modigliani, Vito. The effect of expanded versus massed practice on the retention of multiplication facts and spelling lists. Human Learning: Journal of Practical Research & Applications. Vol 4(1) Jan-Mar 1985, 11-18.
Willingham, Daniel T. Allocating Student Study Time: "Massed" versus "Distributed" Practice. American Educator, Summer 2002. http://www.aft.org/periodical/american-educator/summer-2002/ask-cognitive-scientist#sthash.g0xfsxpB.dpuf
The New Learner Lab
Exploring the ever-changing, often challenging, and always controversial world of teaching.