Wednesday, December 7, 2016

That's Obvious


If you are playing a game of Balderdash, and making up the meaning of a new word, what might meandertal mean? What comes to mind in 2-3 seconds?

By combining the English word meander with the German suffix tal (which means a valley, dale, or glen), I'm going to say that it's someone who meanders around in an area or field deliberately and thoughtfully. Sort of like a laid-back academic who isn't too worried about fame and fortune. Or someone just doing his or her own thing, guided by wide and varied interests. By that definition, I am a meanderthal. (I'm now adding the "h.") That definition seemed obvious to me, probably because (1) I'm an academic, (2) I recently read the Thursday Next series by Jasper Fforde, and (3) based on accumulated recent data, the caricature of Neanderthals as slow or dim-witted is likely false.

But it might not be obvious to someone who comes at this from a different viewpoint. For example, at urbandictionary.com, meanderthals are “people who wander aimlessly and always seem to get in your way in stores and supermarkets, chatting on their cell phones and paying no attention to their surroundings”. Was this definition obvious to you? I hadn’t considered it until I did a Google search and found it as the top hit. How sad. I like my definition much better, but it hasn’t yet become mainstream. (Feel free to promote my definition!)

I've been thinking about the "That's Obvious" response quite a bit as I meander through the higher education literature, primarily when reading studies that try to measure whether a particular pedagogy is effective. Then I stumbled across the article by George Yates in Educational Psychology. The citation and abstract are shown below, for those interested in reading the article in full.

Here’s the opening paragraph to whet your appetite: ‘ “These research findings are just obvious,” glares the critic. On the receiving end of such criticism, the seminar presenter feels a mixture of anguish and momentary worthlessness. Can it be the case that educational researchers, especially those whose base draws upon the discipline of scientific psychology, spend years striving to advance propositions already known to all thinking people? Were such notions known already to the intelligent person in the street even at the time our great-grandparents were alive? If what we do is validate truisms, then are we not wasting our energies? Houston (1983) stated this cogently: “A great many of psychology’s principles are self-evident. One gets the uneasy feeling that we have been dealing with the obvious but did not know it” (p. 208).’

Yates draws examples from evaluating the effectiveness of teachers and evaluating whether learning is taking place. As teachers we should care greatly about these two things. Am I, as a teacher, using effective practices in my classes, both in the classroom and through outside-of-class assignments? Are my students learning effectively, and how do I know? Having taught for many years, I have to a large extent forgotten what it felt like to be a novice teacher. I'm sure I was often confused as to whether I was being an effective teacher or whether my students were actually learning anything. Even now, I might still be confused, but I don't feel that way. In fact, I feel quite confident that I apply a repertoire of teaching techniques honed over the years for effectiveness in both teaching and learning. But is that really true? Am I being partially blinded by my own specific context? Do my strategies actually generalize to other contexts?

When I sit in a classroom as an observer watching one of my colleagues teach, it is interesting to reflect on what I notice. For better or for worse, I am likely projecting what I think effective teaching looks like, from my own context and experience, onto the person I'm observing. Thus, when I give post-visit feedback, I find myself suggesting things "I would have done" at those points during the class. "Here's how I would explain it, which might be more effective," I might say. Or, "By posing the question this way, here's how one could increase student participatory learning." Since I don't know what goes on in the parallel universe where my colleague does what I suggested, and I don't know the students in that class as well as my colleague does, my suggestions may or may not be all that helpful or useful. My colleagues should wisely ignore some of my suggestions, although others might prove helpful to them. If all this were simply in the context of formative assessment of a colleague's teaching, that would be fine and good. But if my visit is part of a formal evaluation, then I as the observer should be very careful about what I think demonstrates effective teaching. A single observation could be misleading, much like a single data point.

When I listen to a seminar or read an article about teaching, pedagogy, curriculum matters, or higher education in general, I sometimes have the "That's Obvious" response. I'm sure in some cases this is warranted, especially when I know the data supporting the argument in detail, but in other cases perhaps not. Yet that doesn't shake the "that's obvious" feeling. Yates suggests several reasons why this is so. First is the false consensus effect, "the belief that others construe the world in more or less the same way as oneself." Another is ego defense, where the psyche fits the data to "[confirm] the self's command of knowledge, wisdom, and intelligence, and [establish] how facile it was of another person or agency, using the cover of research, to try to upset the self's worldview." It is essentially a threat-reducing response. Yates also suggests that prior knowledge leads to a projection process with cognitive liabilities, because of its relation to the fast activation of System 1 in formulating an adaptive response.

Yates argues that the compilation of "best practices traits" from teacher effectiveness research studies is not as obvious as one might think. (Read the article if you're interested.) However, he then goes on to tackle a second issue that I want to discuss briefly: "The misconception that knowledge discovered is superior to knowledge transmitted." The currently popular pedagogy in STEM surrounds the idea of "inquiry-based" learning. This is a loaded phrase because it conjures up different specific pedagogical practices depending on who speaks it or hears it. (The speaker and listener may actually have different, although possibly overlapping, notions of what the phrase means.) Inquiry-based approaches are rooted in constructivism. I personally think there are many good ideas and suggestions that have come from practitioners of "inquiry-based" learning (in the broadest sense of the phrase), and I myself use such methods to some extent.

However, it is problematic when the apostles of inquiry learning oversell the constructivist philosophy, often using it to denigrate what I will call "direct instruction". Yates describes cringing painfully while sitting in a seminar and being told that "cognitive psychology", his own subfield and specialty, supports the notion that "discovered knowledge is more meaningful than knowledge transmitted by a teacher" and that the teacher can only be a knowledge "facilitator" and not a "transmission source". Yates argues that this idea "is flawed, since it invokes false dichotomies and confuses motivational goals with instructional methodology. Put simply, the goal of direct instruction is to promote understanding, and there is no conflict between constructing knowledge and listening to a superb teacher explain complex processes."

One thing I have learned over the years as I have delved into the science of learning literature is that a little bit of knowledge can be a dangerous thing. I was quite enamored of the constructivist approach early in my journey as an educator. But as I delved more into the primary literature, I started to cast a critical eye on the parameters of each study and the conclusions drawn. I now have a healthy skepticism towards the notion that inquiry-based learning is superior to direct instruction at the introductory level in the sciences. I don't think direct instruction is superior to an inquiry-based method either. I think what you use depends on the context: who your students are, what background knowledge or experiences they have, the subject matter, the particular topic you're teaching that day or week, what level you're aiming at, and more. Use what is best for the learning goal you are trying to get across during that particular class meeting. My official written teaching philosophy sounds constructivist, but it is tempered by the practice of what I think works best for student learning in varying contexts. Sometimes it is not so obvious what the best approach might be. But this is what makes teaching both delightful and challenging.

P.S. As I was writing this, I started to peruse the most recent PISA results. They actually indicate a negative correlation between student performance on the science questions and what would be broadly construed as inquiry-based methods. It is worth looking at the sample questions and the actual report data.
