It’s been a few years since I followed up on the pedagogical approach known as Productive Failure, much of the work done by Kapur and co-workers and summarized in a 2016 article I’ve blogged about. The overall idea is that students first participate in what seems like an open-ended “problem-solving” phase. The problem is challenging, and the students are expected to generate a range of solutions but not to “get the right answer” on their own. The second phase involves “direct instruction”, in which students learn the solution, building upon the seeming failure of the first phase. Results from a number of these “experiments”, mostly involving grade-school students tackling math-based problems, suggest that Productive Failure yields better results on a post-assessment than switching the two phases (direct or explicit instruction first, followed by problem solving).
“When failure fails to be productive” is the provocative title of an article published just last month in Instructional Science (DOI: 10.1007/s11251-020-09525-2). The authors review recent literature applying Productive Failure approaches to non-STEM domains, and they also conduct two quasi-experiments with tenth graders. They find that students who went through Productive Failure did not perform better than students who received direct instruction in the first phase. Before probing why, we need to know how having a problem-solving phase first might be beneficial in the first place. The principles are articulated clearly in a 2017 paper by Loibl et al. (Educ. Psychol. Rev. vol. 29, pp. 693-715), based on a meta-study of previous work contrasting the two approaches.
First, it is hypothesized that having students generate solutions to a problem before direct instruction allows them to “activate prior knowledge”, i.e., they have to call on what they know (erroneous or not) to aid them in coming up with a solution. This may also involve being creative and inventive, depending on the scope of the problem. Second, in transitioning to the direct instruction phase, students are made aware of the gaps in their knowledge. If you don’t know what you don’t know, how are you going to correct and learn? And finally, it is suggested that this process helps students “identify, explain, and organize deep features of the target knowledge”. We don’t want shallow learning. We’re trying to move students from novice to expert, so we’d like them to see this deeper structure.
There’s a nice figure in the Loibl paper that encapsulates the principles of Productive Failure. Note in Mechanism 2 the importance of using either (erroneous) student solutions or contrasting cases to highlight the gap between the wrong or limited answers and the correct or canonical solution. If this isn’t done carefully, they found, students do not recognize the deeper structure. Also noted in the 2017 paper is that the majority of previous studies supporting Productive Failure’s superior results were in narrowly scoped, math-based learning. In the 2020 study, the authors discuss a key difference between STEM and non-STEM domains, namely that the former allows a “level of control of the conditions under which knowledge can be gathered” that is not available in the latter. In STEM cases, where things can be quantified and variables can be more easily controlled, the scope of the problem is more constrained. More importantly, there is a canonical solution in the STEM problems used in such experiments.
Another meta-study, published in 2020 by Chen and Kalyuga (Eur. J. Psych. Educ. vol. 35, pp. 607-624), examines these issues through a different lens. Their starting point is Cognitive Load Theory, which they use to investigate when Productive Failure might do better than first-phase explicit instruction. The categories in their meta-analysis look at (1) whether the type of knowledge to be learned is conceptual or procedural, (2) whether the materials show high element interactivity, and (3) the learners’ prior expertise level. They conclude that Productive Failure shows gains for conceptual knowledge while explicit-instruction-first does better for procedural knowledge, but this picture is complicated because the type of knowledge, element interactivity, and expertise are not independent of one another.
What is one to conclude from all of this? It’s hard to say. And I say this as someone who’s read hundreds of such papers, but as a non-expert, i.e., I don’t do this type of research myself. I’m merely a dilettante. All these studies have limitations and particularities. While the meta-studies do suggest that under some conditions Productive Failure approaches yield superior learning outcomes, there are always confounding exceptions that are hard to explain. For example, the Glogger-Frey et al. paper from 2015 (Learning and Instruction, vol. 39, pp. 72-87) is often cited as a contradictory case; it’s an interesting and cleverly designed study in my opinion, but this isn’t the space for me to delve into its details.
For me, the point of reading these articles is to keep learning about the art and science of teaching and learning. I’ve taught long enough to know which parts of the subject material students stumble over, and I’ve come up with some effective strategies to help students over the hump. They don’t work for all students all of the time – at least not with what I cover in class and through assignments. Office hours allow me to address particular issues with particular students, but often it’s not the students who need the most help who come by. Teaching is an ever-changing endeavor, and my students today are different in many ways from those I taught twenty years ago. There are no sure-fire pedagogical approaches, and I choose different methods based on the topic I’m teaching that day and my perception of student background knowledge and readiness. All approaches have their limitations. Reading the literature reminds me not just of this fact, but also to be wary of the pronouncements of educational punditry.