Monday, February 27, 2023

Knowledge-Rich

In chapter six of Curious, the author, Ian Leslie, turns his attention to the importance of knowledge, not just for cultivating curiosity but for educating children as a societal imperative. For me, he’s preaching to the choir. But to my chagrin, it’s a choir that’s diminishing in influence. It feels like those who preach a knowledge-rich curriculum are going out of fashion. Yet again.

 

And that’s because the progressive education fad runs in cycles. It rises to a crescendo with seemingly simple small-scale interventions that show positive results. But when scaled up, the supposed benefits begin to vanish – unless sustained with significant additional resources. The downswing results in knowledge-rich curricula and pedagogies becoming relevant again. But eventually the fad rises again in a new guise and the cycle repeats.

 

This is not to say there aren’t any positive things coming from progressive education ideas, which got a large visible boost from John Dewey’s work in the early twentieth century. These new ideas forced educators to think about what they were teaching and how they were teaching, and brought balance to stale practices that had relegated the learner to a robot. Kids were unhappy about school. Teachers were unhappy about school. Everyone was looking for that magic bullet to cure the travails of mass education.

 

There is no magic bullet. If there were, we’d be in an education utopia. The fact that we’re not, and the failure of each fad, reminds us of this. But hope remains and the cycles will continue. Does anything work? Yes, but it takes work. The learning sciences have accumulated plenty of data over the last seventy years suggesting that knowledge-rich curricula, at times seemingly tedious for everyone involved, continue to inch us forward in preparing children for an evolving milieu. It’s not sexy. There are accusations that it’s a killer of creativity. But the reality is that, unless we want to return to the static stratified society of the Middle Ages, a knowledge-rich curriculum for mass education is the best way forward. While it has its many flaws (rightly pointed out by its detractors), reducing the primacy of content knowledge is likely to have overall worse outcomes, both for tackling the ‘wicked problems’ of today and for societal fairness.

 

Leslie begins his chapter with Rousseau’s Emile, and describes the key disagreement starkly: “The fault line in these debates is this: Should schools be places where adults transmit to children the academic knowledge that society deems valuable? Or places where children are allowed to follow their own curiosity, wherever it takes them?” I would argue that schools need to do both. It doesn’t need to be an either-or. But when you listen to the progressive education camp rail against the straw man of ‘traditional’ education, the knowledge-rich curriculum gets painted as robotic and soul-killing. When executed poorly, that is indeed what it feels like. But I’d also argue that just because something feels tedious, repetitive, and un-fun, at least some of the time, doesn’t mean it isn’t valuable or that it’s being done badly. Leslie spends most of his chapter discussing three myths.

 

Myth #1 is that “the natural curiosity of children is stifled by pedagogical instruction”. Yes, it can be. But evolutionary evidence suggests otherwise. Human children are especially dependent on their elders and adept at learning from them. And “in the absence of knowledge imparted by adults, children’s natural curiosity only takes them so far.” They get discouraged, or give up, or learn things that are wrong. This is particularly true in the natural sciences, where direct instruction by teachers is crucial. Leslie provides a chilling example: “The Internet doesn’t solve this problem; it makes it worse. Imagine a group of children trying to learn about Darwinian evolution, for example, armed only with a broadband connection. How many would end up concluding that it is a Satanist plot? Some of them might learn some valuable information but only after wasting a lot of time struggling to distinguish spurious nonsense from informed discussion.” In addition, teachers can play a key role in introducing new things that pique students’ curiosity – things the kids would otherwise never have been exposed to. Leslie argues that children “need to gain enough information to be conscious of their own information gaps, and sometimes require firm direction. Without it, we condemn them to be forever uninterested in their own ignorance.”

 

Myth #2 is that ‘traditional’ schooling kills creativity. Leslie’s book is about curiosity. Does curiosity naturally lead to creativity? It could, but if you want to be truly creative in a way that makes a difference, it turns out that you need lots of knowledge. That’s because meaningful creativity is, to a large extent, domain-specific. And for cross-domain creativity, you either need to know more, or better still you need to find creative partners with complementary knowledge. An interesting statistic: “Researchers who study innovation have found that the average age at which scientists and inventors make breakthroughs has increased over time. As knowledge accumulates across generations, it takes longer to acquire it, and thus longer to be in a position to supersede or add to it.” From an educational perspective, this means that we teachers should be helping students gain such knowledge. We help digest knowledge so students can consume it more efficiently, in the same way that cooking food helps us digest it. To push my cooked-food analogy further: cooking food so that it smells tasty whets the appetite – and we should be inspiring our students to do the necessary work of chewing the food, not just be satisfied with the aroma.

 

Myth #3 is that students should be taught generic ‘skills’ (‘critical thinking’ is the popular one right now), more so than knowledge. I’ve stated this baldly. Of course, one needs knowledge to think critically. But the current fad wants to minimize content knowledge to focus on the ‘skill’. To this, my response is that doing so will lead to skills that are merely superficial at best. That’s what the research shows and I was pleased that Leslie (throughout the chapter) quotes relevant studies; he did his homework. Cognitive load theory makes its appearance although Leslie doesn’t name it as such: “Knowledge makes you smarter. People who know more about a subject have a kind of X-ray vision; they can zero in on a problem’s underlying fundamentals… The less we know in the first place, the more brain power we have to expend on processing, comprehending, and remembering what we’ve read and the less we have left over to reflect on it. The emptier our long-term memories, the harder we find it to think.” And how do you build your long-term memory store? Through knowledge-rich learning. Leslie cites example after example of how the gap widens over time between those who have knowledge and those who have not. The disparity is sobering.

 

I’ve become tired of the fight between educational camps. Once upon a time, I was more vocal in larger gatherings and formal meetings, in my attempt to temper the rising cycle of proponents putting new clothes onto Rousseau’s old argument. Now, I just do my own thing and wait for the cycle’s downswing. I guess I would be characterized as ‘old-school’ but I’m not sitting still. I’m curious and I like learning new things and trying new things out, as evidenced by much that I’ve written on this blog. I hope more people read Leslie’s book, and I was refreshed that he took time to sift through the evidence and conclude that it favors a knowledge-rich curriculum. That’s not what’s popular right now.

Thursday, February 23, 2023

The Curiosity Zone

Humans seem to lose curiosity with age. Babies, toddlers, and young children seem eternally curious. Then as we grow up, many of us have fewer questions and we’re content with what we think we know. Some claim that formal schooling drives it out and that we should return to some idyllic Eden where wonder was eternal. No one knows what that Eden should look like, and the history of education is littered with failed initiatives. That doesn’t mean we shouldn’t keep trying to improve and adapt our educational approaches to changing circumstances. But one should be skeptical of any proposed magic bullet that would miraculously transform the system.

 


Curiosity is the subject of Ian Leslie’s very readable and engaging book, Curious. Leslie begins with the familiar behavior of curious babies, toddlers trying to eat everything, children constantly asking “why?”, going where they’re not supposed to, and touching what they’re not supposed to. This phenomenon is dubbed diversive curiosity. Social media and our phone apps are designed to whet this appetite for “the new and the next”. But for curiosity to really pay off, here’s what Leslie has to say.

 

“Diversive curiosity is essential to an exploring mind; it opens our eyes to the new and undiscovered, encouraging us to seek out new experiences and meet new people. But unless it’s allowed to deepen and mature, it can become a futile waste of energy and time, dragging us from one object of attention to another without reaping insight from any. Unfettered curiosity is wonderful; unchanneled curiosity is not. When diversive curiosity is entrained – when it is transformed into a quest for knowledge and understanding – it nourishes us. This deeper, more disciplined and effortful type of curiosity is called epistemic curiosity.”

 

In human development, the brain of an infant has many more neural connections than the adult brain, but these are pruned as we age. Thus, “the baby’s perception of the world is consequently both intensely rich and wildly disordered. As children absorb more evidence from the world around them, certain possibilities become much more likely and more useful…” That’s actually a good thing. Leslie continues: “It’s essential in becoming a person who can act on the world, rather than one helplessly in thrall to it, hostage to every passing stimulus. Computer scientists talk about the difference between exploring and exploiting… As babies grow… into adults, they begin to do more exploiting of whatever knowledge they have acquired… however, we have a tendency to err too far toward exploitation – we become content to fall back on the stock of knowledge and mental habits we built up when we were young, rather than adding to or revising it. We get lazy.”

 

So how do we maintain a healthy balance between diversive and epistemic curiosity? How do we capitalize on both exploring and exploiting? Leslie dives into some of the studies in behavioral psychology. In considering the cognitive aspects of curiosity, Jean Piaget argued that what makes us curious is encountering an incongruity between our knowledge and a new observation – we are surprised! Simplifying this as a U-shaped graph (left side) suggests a happy medium that maximizes curiosity. If you’re not surprised by something, curiosity is low. If something is completely incomprehensible, you don’t want to think about it and shut it out.

 


George Loewenstein builds on this idea by suggesting that being aware of an information gap (known unknowns) makes us curious. But once again, there’s a sweet spot. If you have zero knowledge of a subject, getting into it is intimidating. Imagine you’re in a group conversation that turns toward something you know nothing about; chances are you’ll begin to disengage. On the other hand, if you’re an expert in something and a novice brings you what they think is “new” information in the area, chances are you’ll be less interested. As a teacher, I’m enthusiastic when my students have an “aha!” moment, but it doesn’t whet my curiosity on the subject. Thus, you can construct a U-curve for the correlation of prior knowledge and curiosity (middle graph). My goal as an educator is to get students into the “zone of proximal development” where they recognize an information gap, but not one that is insurmountable. And by guiding them into it, they (hopefully) get excited to know more!

 

This brings us to the third U-shaped graph (right side) – the role of confidence in one’s knowledge. If you know absolutely nothing about the topic and the information fed to you feels like it far exceeds your capabilities, you get scared and shut down. Fear is a curiosity killer, and it may not even be about the new information. If you’re anxious about having enough to eat, feeling safe, navigating a conflict, or the many other things a student might worry about that are unrelated to what they’re supposed to learn in class, it makes the learning more difficult. Cognitive and emotional resources are being eaten up by the anxiety. On the other hand, supreme confidence that you already know everything (you don’t know what you don’t know) is also a curiosity killer.
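All three graphs share the same inverted-U shape: curiosity is low at both extremes and peaks somewhere in the middle. A toy sketch captures the idea – the quadratic form here is purely my own illustration, not a model from Leslie’s book or the underlying studies:

```python
# Toy model of the inverted-U relation between curiosity and a stimulus
# variable (surprise, size of the information gap, or confidence), each
# scaled to the range 0..1. The quadratic form is an illustrative
# assumption, not a fitted model from the literature.
def curiosity(x):
    """Peaks at the midpoint x = 0.5; falls to zero at both extremes."""
    return 4.0 * x * (1.0 - x)

# No surprise at all (x ~ 0) or total incomprehensibility (x ~ 1):
# curiosity is near zero. A moderate, bridgeable gap: curiosity peaks.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}  curiosity = {curiosity(x):.2f}")
```

The teaching goal, on this picture, is to steer students toward the middle of the curve – a gap they can see and believe they can bridge – rather than toward either extreme.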

 

So if I want my students to be curious about chemistry, I need to expose them to things that are surprising and unexpected. Chemistry is full of the surprising and unexpected, I’m pleased to say. But I also need to get them to a place where they recognize the information gap and have enough confidence they can bridge the gap. This is tricky to manage in a classroom with a wide range of academic ability and background knowledge. But if I want them to learn the material deeply and not superficially, I also have to help them want to put in the effort to learn. That’s also tricky. A false heuristic that some of my students have been discovering is equating learning easily with learning deeply. This week, two students (independently during office hours) told me their discovery that my “smooth” explanation and going through examples in class made them think they understood easily, but when they had to work problems, they realized they didn’t really understand deeply. We talked about strategies to bridge that gap.

 

This reminded me of studies on “desirable difficulties”. Leslie also discusses this, connecting it to the cognitive effort that must be employed to encode the material into long-term memory where it can be built on. That’s also why some things students are learning should be memorized: they provide a scaffold that opens up their ability to learn more complicated things. In the old days, a search meant “embarking on an arduous quest. It implied a question that led to more questions. You would encounter obstacles, or get lost, and you might not even find what you started looking for, but you would learn something along the way. Your perceptual scope – your mental map – would have increased.” But nowadays, it means “typing a word or two into a box, or muttering them into a mouthpiece, and getting an answer almost instantaneously.” And we leave satisfied by our meagre superficial meal of insta-knowledge that we quickly forget about. The answers seem to come so easily that we forget how to ask meaningful questions. Google even admitted this. When asked by the Guardian if “efforts to refine Google’s accuracy are being boosted as users learn how to enter search terms with greater precision”, the head of search Amit Singhal replied that unfortunately the opposite is true: “The more accurate the machine gets, the lazier the questions become.” But I do not yearn for the pre-Internet days. Having such a resource at one’s fingertips can be so, so very useful!

 

Parting remark: Is the internet making us more stupid or more intelligent? Yes. Depends on how you use it.

Sunday, February 19, 2023

Do dementors get cold feet?

Question: Do dementors get cold feet? Quick answer: Sure! Cast a patronus charm at them and away they scurry!

 

Follow-up question: Do dementors even have feet? Good question. I don’t know. When first introduced in Harry Potter and the Prisoner of Azkaban, the dementor is described as “a cloaked figure that towered to the ceiling… face completely hidden beneath its hood… a hand protruding… [was] glistening, grayish, slimy-looking, and scabbed, like something dead that decayed in water.” So we know dementors are tall and have hands. Dementors are never mentioned as walking, though. They glide. But do they have feet?

 

In Harry Potter’s first encounter with a dementor, it draws a “long, slow, rattling breath, as though it were trying to suck something more than air from its surroundings. An intense cold swept over… Harry felt his own breath catch in his chest. The cold went deeper than his skin… He was drowning in cold…” That’s the book’s description. There’s also a darkness about them and they can create a chill mist. Dark and cold. In the movie, dementors seem to be patterned after the Black Riders in Peter Jackson’s Lord of the Rings rendition. They are more skeletal than decaying. And the Harry Potter movies significantly accentuate the cold. Window panes frost up. Water freezes. The thermal energy of the surroundings goes down significantly when a dementor is present. That’s interesting, thermodynamically speaking.

 

Thermodynamic Analysis: If the environment is getting much colder, then thermal energy is being transferred (“heat”) from the environment to the dementor. Is it because the dementor is colder to begin with? If so, you’d expect spontaneous transfer (per the Second Law of Thermodynamics, heat flows from hot to cold) until thermal equilibrium is reached. But perhaps the coldness isn’t felt unless the dementor is actively trying to absorb energy (corresponding to human “life-force” in the books), i.e., when it draws its long breath which somehow doubles as a sensor. The dementors were supposed to be searching the Hogwarts Express for Sirius Black. So maybe it’s an active absorption of thermal energy, like when a vacuum cleaner is turned on and sucks in anything in its immediate environment.

 

Does the internal energy of a dementor increase when it takes in thermal energy from the environment? Thermodynamically the answer should be yes. Will its temperature increase? Will the dementor get hotter? Not necessarily. When ice absorbs energy at its melting point, there is no change in temperature. The absorbed energy goes to breaking some of the hydrogen bonds in water. There is no mention of anyone touching a dementor to see if it feels warmer. One might wonder whether it would feel colder to the touch, but it’s unclear why that should be. If a dementor were constantly keeping itself at a temperature much lower than the environment, it would have to actively pump out energy that would otherwise flow in because of the temperature gradient. Humph, this is all sounding contradictory!
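The ice analogy can be made concrete with a back-of-the-envelope calculation (my own sketch, using standard textbook values for water; nothing dementor-specific here):

```python
# Back-of-the-envelope check of the ice analogy: energy absorbed during
# melting produces no temperature change, while the same energy added to
# liquid water produces a large one. Standard textbook values for water.
L_FUSION = 334.0   # latent heat of fusion of water, J/g
C_WATER = 4.18     # specific heat of liquid water, J/(g*K)

def heat_to_melt(mass_g):
    """Energy absorbed melting ice at 0 C; the temperature stays at 0 C."""
    return mass_g * L_FUSION

def temp_rise_if_liquid(q_joules, mass_g):
    """Temperature rise the same energy would cause in liquid water."""
    return q_joules / (mass_g * C_WATER)

q = heat_to_melt(10.0)               # 3340 J absorbed, delta-T = 0
rise = temp_rise_if_liquid(q, 10.0)  # ~80 K rise if it warmed liquid instead
print(f"{q:.0f} J absorbed with no temperature change; the same energy "
      f"would warm the liquid by {rise:.0f} K")
```

So a dementor could, in principle, keep absorbing thermal energy without its own temperature rising – if, like melting ice, the energy goes into some internal change of state rather than into warming it up.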

 

What if a dementor had feet? And these feet touch the ground. If the ground gets cold in the presence of a dementor, would the dementor then get cold feet? Or are its feet well-insulated from the ground to reduce any transfer of thermal energy? If dementors glide above the ground, then a cushion of air could insulate it to some extent, but it might still get cold feet because thermodynamics doesn’t care who you are, magical or not. Could the energy that the dementor sucks in travel to its feet to keep them warm? Possibly. Dementors might be thermodynamic-heat-engines of a different sort.

 

Final answer: I don’t know, but I would guess, partly yes. And I wouldn’t have thought about this if not for the movies accentuating the coldness of the environment in the presence of a dementor. Who would’ve thought that dementors could be interesting thermodynamically?

Thursday, February 16, 2023

Test Anxiety

What is the most frequently reported student emotion in the college classroom? You might have guessed it: Anxiety.

 

Why is anxiety so pervasive? Here’s what Sarah Rose Cavanagh has to say in Chapter 6 of The Spark of Learning: “…there are just so many things to be anxious about: performance anxiety when giving group presentations, anxiety about speaking in class, anxiety about not being smart enough to master the material and, of course, anxiety about tests, quizzes, and grades.” I try to allay student anxieties about speaking by giving them time to think after a question is posed, the opportunity to discuss with their classmates, and I encourage them to write something down so they don’t have to speak extemporaneously. I give plenty of low-stakes five-minute quizzes (typically dropping a third of the lowest scores).

 

But the big one is test or exam anxiety. I’ve given practice self-tests and provided past-year exams, and I’m presently experimenting with providing more practice through study guides. Still, the students are anxious and this is no surprise. In reviewing the literature, Cavanagh identifies two general causes: “cognition-based worry about assessment, and physiologically based emotional arousal”. Turns out most of the research on test anxiety is on math anxiety. It’s quite common, and I certainly see it in my chemistry courses. Essentially, “students high in math anxiety take longer to solve problems and perform less well than students low in math anxiety… [and] avoid mathematics courses and careers where math is involved”. I attribute this mostly to confidence or lack thereof. But the effects are observable: the worrying steals away cognitive resources that should otherwise have gone to problem-solving.

 

Cavanagh cites a number of studies (see her book for details and references). The one that jumped out at me looked at the correlation between cortisol levels and performance in solving “large math problems”. Interestingly, high-anxiety students with higher cortisol levels did worse, while low-anxiety students with higher cortisol did better. Apparently you have to be somewhat aroused/engaged (higher cortisol) and also have sufficient working memory cognitive resources (i.e., not stolen away by anxiety) to perform well. Lower cortisol was associated with being bored and unmotivated.

 

What to do about this? Cavanagh has four suggestions: (1) Give students more time so they don’t feel rushed, especially if they are slower math-problem solvers. I’m reminded that I need to take another close look at my P-Chem exams. I sometimes forget that even though the students have supposedly had math practice from the prerequisite calculus and physics courses, this math anxiety can actually be even more pronounced. (2) Encourage mindfulness in students. Hmm… I haven’t done anything here. (3) Be transparent and clear both in the syllabi and in teaching. I’d like to think I do a good job here. The vast majority of my students rate highly my level of organization and clarity. (4) Expose students to your testing style. I’d like to think I do a good job here, certainly in providing both examples and opportunities in multiple contexts. But students don’t always take advantage of these or they don’t take seriously the self-annotation assignments.

 

Chapter 6 in Cavanagh’s book also brought up a term I was unfamiliar with: “psychological reactance”. This has to do with negative emotions when students “perceive an unjust infraction… [and] report feeling angry, pained, frustrated, stressed, violated, cheated, disgusted and embarrassed.” This can lead to things in the classroom going downhill very quickly. It is exacerbated when students know each other outside of class (“hyperbonding” – another new term for me) and this can lead them to “encourage each other to greater heights of rebellion”. I have had the good fortune not to have personally experienced this, but I have heard the stories and I’ve personally known colleagues who found themselves in this situation. It’s a real problem, and while sometimes the instructor carries some of the blame, that’s not always true, and the rebellion is often disproportionate to the perceived injustice. I suspect being male protects me somewhat from this. Students know I have a different national origin but it’s not one they’re familiar with and they likely have not developed stereotypes about it.

 

Cavanagh discusses how to reduce psychological reactance: (1) “…use language that is low in threat or demands, expressing empathy and interpersonal similarity…” I’m not sure what to make of this. I don’t think I make threats or demands. I think I’m clear about what students need to be doing to be successful in my class. (My advice isn’t always followed.) And I don’t sense interpersonal similarity with my students. It’s stark to me that given my different background, I’m very different from them. (2) “… paying attention to the power dynamics of the classroom…” Honestly, I hadn’t thought about that very much. Maybe I need to pay more attention. Cavanagh goes through different types of power, those that are favorably and unfavorably viewed by students. I don’t have much to say about this mostly out of my own ignorance. Cavanagh did give me something to ponder here.

 

The title of Chapter 6 is “Best Laid Plans: When Emotions or Challenge Backfire” and Cavanagh ends her book by reminding us that students have emotional lives that intersect with their learning even when those things seem disconnected to their academics at first glance. It’s a reminder that we deal with human beings, many of whom are adolescents with emotional highs and lows and who deal with varying degrees of uncertainty about their present and future life. As someone who is over-the-hill, and does not experience those huge emotional shifts, I should be cognizant that my students are dealing with so much else. I can do my best as a teacher and learning might still be sub-optimal. But I should keep trying. And so should my students.

Tuesday, February 14, 2023

Curiosity and Mystery

I’ve been thinking about the effect of affect in learning while reading The Spark of Learning by Sarah Rose Cavanagh. While I’ve read some of the primary literature on how emotions play a role in learning, it was nice to find many of the highlights in one place. Cavanagh does a good job weaving the studies along with anecdotal material in her book to make her point that emotions do play an important role, and we as educators should take it into consideration. She provides many examples. Today’s post is on Chapter 4 (“Burning to Master: Mobilizing Student Efforts”). I’ll highlight my takeaways; but for the specific examples, please read her book!

 


How do you trigger student interest and draw them into a topic? I’ve dealt with the issue of fostering interest because I often teach the early 8am section of general chemistry – the least preferred time for many students. I’ve also had plenty of experience teaching the nonmajors chemistry course, where a number of students wish they weren’t there but need to fulfil a science requirement. Here’s Cavanagh on the topic: “Interest arises when information is new and potentially complicated, but inherently graspable. Novelty and complexity in the absence of comprehensibility only leads to confusion, which is usually not our goal.”

 

But triggering isn’t enough. How do you maintain student interest especially when some of the material is challenging? Chemistry certainly falls into this category. As Cavanagh says, after triggering, we want students “to burn to know what comes next”. That’s when curiosity is piqued! One way to do this is by introducing puzzles. Essentially you “draw people’s attention to the gap between their current state of knowledge and what they perceive as knowable”. That’s the heart of the most engaging novels and TV series. I can imagine using case studies that require applying chemical knowledge to solve a puzzle. Students get an endorphin boost by being successful in resolving the problem. It’s a virtuous cycle, and boosts confidence and motivation to solve other puzzles.

 

Cavanagh doesn’t stop there. She suggests that, in addition to puzzles, we should introduce mysteries. “Mysteries provoke a different type of curiosity than do puzzles… characterized by deep, effortful, and sustained pursuit of understanding. One engages… not just because one wants to solve and set aside a focused question, but because the quest is its own reward, and the knowledge that the quest is ongoing is enticing.” For me, the mystery I work on as a scientist is the chemical origins of life. I chose it because it’s likely to keep me motivated to the end of my career. I introduce bits and pieces of it into my classes when I can make a relevant connection. But I could improve on how I do so.

 

The subject of getting-into-the-zone or a flow state is tackled next. It draws on the pioneering work of Mihaly Csikszentmihalyi who posited that the sweet spot is to “match the challenge level of your activity to the very limit of your skills or abilities”. If things are too easy, boredom results. If too difficult, confusion followed by frustration reigns. What jumped out at me: While the studies show that eliciting the flow state does increase both interest and motivation, it doesn’t necessarily mean you’ve learned more – at least when measured via multiple-choice assessments after the activity. Cavanagh speculates that it could be “that flow increases enjoyment but not greater learning… [yet] this can only have good effects in the long term – even if it doesn’t translate into immediate learning gains.”

 

While I’ve spent time designing how I use my class time around hitting this “zone of proximal development” (from Vygotsky’s work), I haven’t spent much time designing mystery or puzzle activities that span more than one class period. (I did design a week-long Alien Periodic Table exercise that I’m proud of, but it was so time-consuming for me that I’ve never done anything like it again.) Hitting the sweet spot is difficult. What seems like a promising exercise or activity can just as well collapse into confusion.

 

Cavanagh writes: “Curiosity and confusion can be considered dark mirrors of each other, in that both involve uncertainty, and both create an itch to know more.” Curiosity is seen in a positive light, while confusion is often viewed negatively. However both can contribute to learning. Encountering confusion should perhaps be expected since we are trying to get students to cast aside faulty misconceptions they have accumulated over time that take some work to dislodge. Perhaps the students experience, for a time, some sort of cognitive dissonance, as they wrestle with seemingly incompatible ideas before they are able to refine their conceptual knowledge. Sometimes ambiguity or uncertainty helps.

 

While I think the evidence for the general effectiveness of “productive failure” is lacking (although I think it can work well in specific instances), there might be occasions when introducing some confusion may be worthwhile. Cavanagh presents guiding principles from her colleagues when applying this in the classroom: “students should possess the ability to successfully resolve the confusion; and/or when students cannot resolve the confusion on their own, there are appropriate scaffolds or buttresses in place to aid the students in their resolution of the confusion.”

 

One last takeaway from the chapter from Cavanagh: “giving feedback to students about what they’ve done right, particularly if it is a skill that they were previously lacking”. I haven’t done this much, but I should.

Thursday, February 9, 2023

Life might be a noun

On the first day of my Metals in Biochemistry class, we discussed two questions: “What is Life?” and “What is Living”? In the first question, Life looks like a noun. In the second question, Living looks like a verb. The students agreed that the second question was easier to tackle than the first, and we proceeded to come up with a range of examples. We also discussed the possible fuzzy boundary between living and non-living – the realm of cryptobiosis.

 

But maybe the words life and living should be adjectives. Or maybe it might be instructive to examine the possible interchange of adjectives and nouns. This is what Robert Rosen does in his essay “Genericity as Information”, compiled in his Essays on Life Itself.

 

Rosen is trying to argue that context-independence, a way of slicing up the world into objective chunks, is too impoverished to describe complex systems such as life. By stripping out the subjective, science then “assert[s] that only particular things are real, and that we can learn about them through enumeration of their adjectives.” A corollary to this is that “a set of such particulars is not itself a particular, and accordingly is allowed no such objective reality”. In chemistry, a classification method such as the periodic table is a “subjective intrusion superimposed on the particulars it assorts and classifies”. This is what it means to be a strict empiricist. Rosen is not of this clan.

 

Here’s the example Rosen provides. We can think of particular nouns such as water, oil, and air. The chemist in me is already envisioning particular (pun intended) pictures of H2O, hydrocarbon chains, and a box consisting mainly of N2 and O2 molecules. Rosen adds that we might classify these three substances as “fluids” for convenience, but the empiricist will emphasize that this is merely for convenience and that “fluid” is not a thing in itself. Rosen then says that the empiricist “has no trouble with phrases such as turbulent water, turbulent oil, and turbulent air. Here, the adjective turbulent correctly modifies a particular noun to which our empiricist will grant an objective status, or reality. But suppose we turn these phrases around, to yield water turbulence, oil turbulence, and air turbulence. We are now treating the turbulence as if it were an objective thing, and the water, oil, and air, as instantiations or adjectives of it.”

 

This interchange of noun and adjective is anathema to the strict empiricist. How can you empirically analyze turbulence in itself? When the famous physicist Erwin Schrödinger asked the question “What is Life?” he implied that it was a particularly challenging question that the physics of his era was incapable of answering and that “new” physics was needed. (No, quantum mechanics was not the new physics capable of answering his question.) Perhaps that’s why it’s a little easier to compare a living cat, dead cat, and hibernating cat, and less easy to tackle what it means for a cat to be alive versus a dog to be alive. We sometimes think we can make distinctions: What it means for a bacterium to be alive might be different from what it means for a human to be alive. But it’s hard to grasp what is so distinct about life – not that there aren’t models that have been proposed.

 

The trouble with words is that attaching meaning to them seems inherently subjective. Is there a yawning abyss between syntax and semantics? Is that why describing mechanics in terms of present physics and chemistry doesn’t quite get us to biology? While systems-thinking attempts to bridge some of this gap, is it enough? And if it isn’t enough, then is it true that life cannot truly be simulated in an enclosed system, computer or otherwise? Perhaps my efforts as a computational chemist studying the origin of life are doomed from the start.

Monday, February 6, 2023

Heat is not a noun

In thermodynamics, heat is not a noun. It’s a verb. This is confusing. Why? Because colloquially we think thermal energy and heat are the same thing, but in thermodynamics they are not. One is a noun and the other is a verb. Before diving into the weeds, let’s first acknowledge that nouns and verbs are labels. Keeping things simple: a noun labels a ‘thing’; a verb labels an ‘action’. Now we’re ready to define the two.

 

Thermal energy, from the chemical perspective, is the energy of molecular motion. The amount of thermal energy in a chemical system can be quantified by measuring its temperature. Thermal energy is a noun. This is confusing because its definition contains the word ‘motion’ that sounds verb-ish. Considering the two broad categories of kinetic energy and potential energy, thermal energy is more closely associated with kinetic energy, the energy due to motion.

 

Heat is the transfer of thermal energy. Heat is a verb because it involves the ‘action’ of transferring. This transfer of energy takes place spontaneously across a temperature gradient if a suitable pathway for the flow of energy is available. Heat is the flow, not the thing that flows. The thing that flows is thermal energy. The zeroth law of thermodynamics is about this flow: When a hot object is placed next to a cold object, thermal energy is transferred from the hot object to the cold one. The cold object is heated by the hot one. Simultaneously the hot object is cooled by the cold one. Heat is more closely associated with potential energy, as are other gradient-related energies such as gravitational (potential) energy.

 

But it’s hard to exclusively use heat as a verb. We easily slip into saying that ‘heat’ is transferred when something becomes hotter or colder. To repeat, heat is the transfer, not what is being transferred (which is thermal energy). In our minds, because of how we colloquially use the word ‘heat’, we associate it with temperature change – and thus conflate it with thermal energy. I tell my students to link thermal energy with thermometer (its measure). Thus, a change in thermal energy of an object leads to a change in its temperature. Many examples of heat-flow do lead to a change in temperature, but not always. Let’s look at three examples.

 


In the zeroth law example shown above, two objects at different temperatures are brought together. Spontaneously, heat flows down the gradient in the direction indicated by the red arrow. Both objects change temperature during the process. Finally, both objects measure the same temperature: the gradient has been degraded – it no longer exists – and we say that the two-object system has reached thermal equilibrium.
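The bookkeeping behind this equilibration can be sketched in a few lines of code. This is only an illustration – the objects, masses, and specific heats below are invented examples – but it shows how the final shared temperature is a heat-capacity-weighted average of the starting temperatures.

```python
# Sketch: two objects in thermal contact exchange thermal energy until the
# temperature gradient no longer exists. All numbers are invented examples.

def equilibrium_temperature(m1, c1, T1, m2, c2, T2):
    """Final shared temperature of two objects left in thermal contact.

    Assumes no energy leaks to anything outside the two-object system,
    so the energy lost by the hot object equals the energy gained by
    the cold one: m1*c1*(T1 - Tf) = m2*c2*(Tf - T2).
    """
    return (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)

# A 100 g copper block (C ~ 0.385 J/g.K) at 80 C dropped into
# 200 g of water (C ~ 4.184 J/g.K) at 20 C:
Tf = equilibrium_temperature(100.0, 0.385, 80.0, 200.0, 4.184, 20.0)
print(round(Tf, 1))  # about 22.6 C -- water's large heat capacity dominates
```

Note that heat, the verb, never appears here as a stored quantity; only the thermal energies (via the mC products and temperatures) are tracked.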

 

Chemists want to know about energy changes in a chemical reaction. In thermodynamics, we define the ‘system’ as the chemical substances. They have some energy content before the reaction, and after a chemical reaction (where chemical bonds have been made and broken) they often have a different energy content. We cannot easily measure the system’s change in energy directly, so we couple the system to what we call the ‘thermal surroundings’ – typically modeled as an insulated water bath. If a chemical reaction releases energy (and most favorable reactions do), that energy is transferred to the water; when the water’s temperature rises, we can calculate the rise in thermal energy of the thermal surroundings. A calorimeter is the device we use to measure this energy change. (An insulated water bath works well as a calorimeter – hence the model!) This is illustrated below.

 


Students can quantify this energy transfer with the formula qtherm = mCΔT, where m is the mass of water, C is the specific heat capacity of water, and ΔT is the change in temperature of the water. All this is for the thermal surroundings. If the temperature goes up, ΔT is positive and therefore qtherm is also positive. But what about the system? Since the thermodynamic universe containing both the system and the surroundings is ‘isolated’, the energy gained by the thermal surroundings must have come solely from the chemical system. Thus, students learn that qsys = –qtherm and energy is conserved. But heat is a verb. The noun used to represent the energy of the system is ‘enthalpy’. Textbooks and chemistry instructors often refer to it as ‘heat energy’, which is historically true (and noun-ish) but semantically confusing. This is why students have trouble understanding the concept of state functions: enthalpy, the noun, is a state function; heat, the verb, is not.
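To make the sign conventions concrete, here is a minimal calorimetry sketch. The water mass and temperature change are invented numbers, not data from any particular experiment.

```python
# Minimal calorimetry bookkeeping: qtherm = m*C*dT for the water bath,
# and qsys = -qtherm by conservation of energy. Numbers are invented.
SPECIFIC_HEAT_WATER = 4.184  # J/(g.K)

def q_thermal(mass_g, delta_T):
    """Energy gained by the thermal surroundings (the water bath)."""
    return mass_g * SPECIFIC_HEAT_WATER * delta_T

# Suppose a reaction warms 150 g of water by 2.50 K:
q_therm = q_thermal(150.0, 2.50)  # positive: the bath gained energy
q_sys = -q_therm                  # the chemical system lost that energy
print(round(q_therm, 1), round(q_sys, 1))  # 1569.0 -1569.0 (joules)
```

The signs carry the story: a positive qtherm (the bath warmed) forces a negative qsys – exothermic from the system’s point of view.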

 

A brief aside: I have skipped using the term ‘internal energy’ for the chemical system and ignored PV-work in the model of the thermodynamic universe, although I did draw the piston and shaft (in black) to represent it. Leaving PV-work out of the discussion, as I argue in a previous blog post, keeps this analysis cleaner and simpler for students.

 

In the model shown above, we only measured temperature changes in the thermal surroundings. The temperature of the system may or may not change. If we consider reactions carried out under ‘standard conditions’, so that we can tabulate standard enthalpies of formation, then we take the chemical reactants and their subsequent products to be at the same temperature. The system temperature didn’t change! But the temperature of the thermal surroundings might have. Thus heat-flow in this case involves a temperature change in only one milieu, not both. It makes sense to refer to qtherm as ‘heat’ because the thermal surroundings did change temperature, but for historical reasons we still call qsys ‘heat’ even though the system may not have changed in temperature. Why is there still an enthalpy change? The chemical bonds made and broken in the reaction have different enthalpies. In a typical ‘exothermic’ reaction, one that ‘releases heat to the surroundings’, weaker bonds are broken and stronger bonds are formed in the chemical system. Thus the system becomes more stable, or lower in energy, and ΔH, the change in enthalpy of the system, is negative.
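The ‘weaker bonds broken, stronger bonds formed’ bookkeeping can be made concrete with average bond enthalpies. The sketch below uses round textbook values (in kJ/mol) for the reaction H2 + Cl2 → 2 HCl; treat it as an estimate, since average bond enthalpies are themselves approximations.

```python
# Estimate dH from average bond enthalpies (kJ/mol, textbook values):
# dH ~ (energy spent breaking bonds) - (energy released forming bonds).
# Reaction: H2 + Cl2 -> 2 HCl
bonds_broken = [("H-H", 436), ("Cl-Cl", 243)]
bonds_formed = [("H-Cl", 431), ("H-Cl", 431)]

delta_H = sum(e for _, e in bonds_broken) - sum(e for _, e in bonds_formed)
print(delta_H)  # -183 kJ/mol: stronger bonds formed, so the system is stabilized
```

The negative sign falls out directly: 679 kJ/mol spent breaking bonds versus 862 kJ/mol recovered forming them.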

 

Second aside: In some physical systems, students may be analyzing the opposite situation, where the system changes temperature as it exchanges energy with a thermal bath (which remains constant in temperature). For example, you could use a water bath to ‘heat’ a system as shown below. For that matter, you can consider a heating element in a reaction to act as such a reservoir of thermal energy.

 


In my final example, energy can be exchanged between the system and the surroundings without any change in temperature in either milieu. Consider the picture above, where energy is being supplied by the thermal bath/reservoir to melt ice, the system, at zero Celsius. The ice receives energy and turns into liquid water at zero Celsius. Was thermal energy transferred? Hmm… no temperature change was involved. Historically we’ve come to call this ‘latent heat’. We don’t measure any temperature change, although the word ‘heat’ is still invoked. In class, I avoid saying latent heat and simply refer to this as “delta-H of fusion” or ΔHfus.
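A quick numerical sketch of this case: energy flows into the ice, yet ΔT for the system is zero. The mass below is invented; the ~334 J/g enthalpy of fusion for ice is a standard textbook value.

```python
# Melting ice at 0 C: energy is supplied, but the system's temperature
# does not change. dH_fus of ice is roughly 334 J/g (textbook value).
DELTA_H_FUS_ICE = 334.0  # J/g

def energy_to_melt_ice(mass_g):
    """Energy the thermal bath must supply to melt ice already at 0 C."""
    return mass_g * DELTA_H_FUS_ICE

print(energy_to_melt_ice(50.0))  # 16700.0 J transferred, with dT = 0
```

There is no mCΔT term to compute here at all – the transferred energy shows up entirely in the phase change, not in a thermometer reading of the system.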

 

All this seems very clumsy. In class, I try to avoid using the phrase ‘heat of formation’ and use the more cumbersome ‘standard enthalpy of formation’. I get better every year but old habits die hard. I warn the students that they will often hear many of these enthalpies referred to as the “heat of [something]” even if there are no temperature changes. From a chemist’s point of view, I emphasize to students that when they think of ΔH, they should be thinking about changes in the strengths of chemical bonds (or interactions that fall under the rubric of “intermolecular forces”) – the old ones being broken and the new ones being made – in a chemical process. They shouldn’t think about heat per se.

 

This becomes doubly important when students begin learning about entropy. In any chemical reaction, there is an inherent entropy change. Here’s my brief one-paragraph version. If you’re utilizing a chemical reaction to do useful functional work (which may be different from PV-work), you may lose some of that energy as ‘heat’ to the thermal surroundings that isn’t due to inefficiencies in your apparatus (which may also be present). Rather, it is inherent to the chemical reaction. In the picture below, you want to minimize this ‘heat’ loss (the small double-headed arrow) and maximize the energy transfer from the system to functional work. In an equation, I would write this as ΔH = w’ + TΔS. As the enthalpy changes, the entropy changes (ΔS), and w’ (w-prime, to distinguish it from PV work) is the maximum useful work you could get out of the system, under ideal conditions. Students learn w’ as ΔG, the change in free energy of the system. When we talk about heat as the worst form of energy when it comes to utilization, we’re primarily referring to issues of entropy, not enthalpy.
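That one-paragraph version can be turned into a toy calculation. The numbers below are invented for a hypothetical exothermic reaction; the point is only the bookkeeping ΔH = w′ + TΔS, i.e. w′ = ΔG = ΔH – TΔS.

```python
# Toy bookkeeping for dH = w' + T*dS, so w' (= dG) = dH - T*dS.
# All values are invented for a hypothetical exothermic reaction.

def max_useful_work(delta_H, T, delta_S):
    """w' (= dG): the maximum useful work extractable under ideal
    conditions; the T*dS term is the inherent entropic cost."""
    return delta_H - T * delta_S

# dH = -200.0 kJ/mol, dS = -0.050 kJ/(mol.K), T = 298 K:
w_prime = max_useful_work(-200.0, 298.0, -0.050)
print(round(w_prime, 1))  # -185.1 kJ/mol -- less work available than |dH|
```

With a negative ΔS, the TΔS term siphons off 14.9 kJ/mol to the thermal surroundings, so the maximum useful work is smaller in magnitude than the enthalpy change – an inherent cost, not an apparatus inefficiency.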

 


I could go on, but this post has likely passed the TL;DR threshold. If you’re a student reading this, I hope this helped you. If you’re an instructor reading this, the take-home message is that when you use the word ‘heat’ in place of enthalpy in the context of thermodynamics – using it as a noun rather than a verb – you’re essentially using it as a label, a name, for a more abstract quantity. Your students hearing the word ‘heat’ may associate it with temperature changes even when no such changes occur. Yes, I do want my students to use the terms ‘endothermic’ and ‘exothermic’ correctly based on the sign of ΔH. But I want to drum into them that, conceptually, they should primarily associate ΔH with the relative strengths of chemical bonds and interactions; that’s the system property chemists want to focus on! In thermodynamics, heat (unless being used as a more abstract label) is not a noun.