Thursday, August 11, 2016

Effective Learning Techniques?


August is here! With that comes a shift in priorities and in how I use my time. I was able to submit a research manuscript earlier this week and start some preliminary work on a grant proposal to be submitted in the fall. But classes start at the end of this month, and so teaching occupies more of my thoughts these days. I was able to read several papers on learning and cognitive psychology this summer, so here’s a summary of one on “effective learning techniques”. For the source, see the snapshot with the relevant citation info.

The authors analyzed and evaluated ten learning techniques. There is a scorecard of sorts at the end of the paper summarizing strengths and weaknesses and where more research is needed. The paper itself is a lot more nuanced, but let’s jump to the soundbite conclusion first, before I discuss the criteria and some salient points from the study. The ten techniques and their “relative utility” are:

·      Elaborative Interrogation: Moderate
·      Self-Explanation: Moderate
·      Summarization: Low
·      Highlighting: Low
·      Keyword Mnemonic: Low
·      Imagery for Text Learning: Low
·      Re-reading: Low
·      Practice Testing: High
·      Distributed Practice: High
·      Interleaved Practice: Moderate

One very useful aspect of this study is that multiple criteria are applied to determine that final “relative utility” rating, and the authors are clear about the caveats and limitations of their analysis. For example, they look at how applicable a method may be across different age groups, the type and breadth of the materials used, the actual tasks asked of the “users”, whether the context closely resembled actual educational contexts, and more. Although the information on each technique was synthesized from multiple studies, key representative examples along with data are shown. (This is a clear and well-written paper!)

Hence, a technique may receive an overall low utility rating not because it is a lousy method, but because it may not transfer well across different contexts, because its effects are small compared to others, or because there is simply insufficient evidence and more is needed. For example, interleaved practice seems to show some gains, but it has the fewest studies, so how generally and widely it applies is still an open question. I am not surprised to see that highlighting and re-reading have low overall utility even though they are go-to techniques for students. (This says something about educating students in more effective ways to “study”.) They are better than doing nothing, but far from optimal – certainly for learning concepts in chemistry!

Practice Testing is one of the high-utility methods. The word “practice” is important because this refers to low-stakes or no-stakes tests, including student-generated self-tests. The other high-utility method is Distributed Practice, basically spreading out the learning rather than cramming at the last minute. I tell students this, but perhaps I should show them the graph below. (This study by Bahrick was on translation of Spanish words.) The final test 30 days after the last practice session is rather telling. The authors also summarize that a useful rule of thumb is that “criterion performance was best when the lag between sessions was approximately 10-20% of the desired retention interval.” To remember something four months later, it’s good to practice every three weeks or so. I’d like to think that’s why in some classes I give exams every three weeks, but that was not how I came to such a practice.
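As a quick back-of-the-envelope illustration of that 10-20% rule of thumb (my own sketch, not something from the paper itself), here is a tiny Python snippet that converts a desired retention interval into a suggested spacing between practice sessions; the function name and the four-month example are just for illustration:

```python
def suggested_lag_days(retention_days, low_frac=0.10, high_frac=0.20):
    """Suggested spacing between practice sessions, in days,
    using the ~10-20%-of-retention-interval rule of thumb."""
    return retention_days * low_frac, retention_days * high_frac

# Example: remember material for roughly four months (~120 days)
low, high = suggested_lag_days(120)
print(f"Practice roughly every {low:.0f} to {high:.0f} days")  # ~12 to 24 days, i.e. every 2-3 weeks
```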

Learning is complex. One limitation of many of the studies is that criterion tasks for recall vary greatly both by type (what is asked and how) and by timing (how long, how frequent). Another is that the strategies students use to implement a particular technique also vary – and there are often other confounding factors that affect one’s “performance”. It is not easy to tease out the effects of one particular technique in isolation. Not surprisingly, many of the studies were conducted on college students (often in introductory psychology classes), but there are a fair number on children of different ages and a few on adult learners. Subject matter is also a potential problem. Some of the tasks may be trivial and/or irrelevant to what the learner actually wants to learn. (Frank Smith would say this is the “nonsense” approach.) However, it was nice to see a few studies that were carried out in actual real-life educational contexts (where the results are also messier), as opposed to lab-test-like conditions.

By far, the hardest issue to isolate, and one that affects the majority of studies, is the learner’s prior knowledge and motivation. A number of studies that showed gains for a particular technique showed larger gains for students at the “higher end” of the scale. It was hard to tell whether this was because they already had more background (or “domain”) knowledge, were more interested in the topic, had actually utilized other learning techniques, or were simply more practiced in certain areas. I think this says something about the holistic nature of learning. Any technique is limited by the background of the learner and his/her relationship with learning (in all its facets, which may not be easily separable). There is no one size fits all, or even most. This also means that one cannot make significant efficiency gains through increased technology and the massification of education – at least not if quality is to be retained. (Caveat: Unless we all become robots!)
