Tuesday, March 30, 2021

Lotsa Lotka

After a busy last week, I now have time to resume my reading of Into the Cool while I enjoy Spring Break! This morning’s chapter was “Thermodynamics and Life”, a mere seventeen pages, and it took me an hour and a half – not because the writing was particularly dense, but because I kept stopping to wonder and ponder after reading a paragraph or two. I was particularly slow reading the four pages describing the work and thoughts of one Alfred Lotka.

 

Today I’ll be quoting a fair bit from Dorion Sagan and Eric Schneider, the authors of Into the Cool. What a fantastic book. I think today’s is my fourth blog post this month on some aspect of the book. Funny story: I borrowed Into the Cool from the library some ten years ago, read a little but didn’t appreciate it, thinking it was fringe-ish science, and returned the book. How wrong I was. It’s a marvelous book!

 

Let’s look at Lotka. He is “best known for the Lotka-Volterra equations used to simulate populations of predation and prey… [which] led to population dynamics, today a subfield of ecology… But Lotka also developed such equations to model observations about life as an energy-driven autocatalytic process… Lotka’s most frequently cited work gives a nod to Boltzmann’s view that life’s evolutionary struggle is for available entropy. Lotka argues that selective advantages accrue to those living beings best able to capture and store energy.”
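Those predator-prey equations are compact enough to sketch in code. Below is a minimal forward-Euler simulation; all coefficients and starting populations are made-up illustrative numbers, not anything from the book.

```python
# Lotka-Volterra predator-prey model, integrated with forward Euler.
# dx/dt = a*x - b*x*y   (prey grow, get eaten)
# dy/dt = -c*y + d*x*y  (predators starve, feed on prey)
# All coefficients are illustrative, not fitted to anything real.
def lotka_volterra(x, y, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.001, steps=20000):
    history = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

traj = lotka_volterra(10.0, 5.0)
peak_prey = max(p for p, _ in traj)
# The populations cycle: prey overshoot, predators catch up,
# prey crash, predators starve, and around it goes.
```

It’s the same mathematical machinery that, as Sagan and Schneider note, Lotka also turned toward modeling life as an energy-driven autocatalytic process.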

 

One aspect that has confused me thus far in my reading of non-equilibrium thermodynamics is the existence of two seemingly mutually exclusive principles: maximum energy flux and minimum entropy production. Having been immersed in equilibrium thermodynamics (maximum entropy production), including teaching it for twenty years, makes it a challenge for me to conceptualize what’s going on. Sagan and Schneider are particularly helpful in clearing up the paradox.

 

“Maximum power principles state that those organisms or ecosystems that can most efficiently convert energy into biomass (including seeds and spores) enjoy an evolutionary advantage over their neighbors. Individuals and populations that fail to maintain or expand their systems’ energy flow head for the exits of extinction… Lotka wrote that the energy flux through a system will maximize but only insofar as such maximization is compatible with the constraints to which the system is subject… Modern discussions of Lotka’s [most-cited work] almost never mention Lotka’s ideas on minimum entropy production.”

 

The key here is that power is energy flow per unit time. I’m so used to picturing energy flow down a gradient almost exclusively as gravitational potential energy: objects falling down a height, a length unit. I puzzled over this, alluding to it in my most recent blog post about degrading a gradient, but I think the fog is clearing. The relationship of length, time, and the amount of material flowing across some gradient is starting to make more sense, although I haven’t quite grasped it yet. Hence all those pauses as I was reading the chapter, and as I’m writing this post. I’ll turn to Sagan and Schneider again.

 

“Biological systems do attempt to capture and degrade as much high-quality energy as completely as possible; yet, at the same time, green life captures solar energy, intercepting it from falling to ground state and maximum entropy production. Life passes around that stripped-off photon, trading immediate entropy production for getting the most out of energy over time. The second law does not say that systems come to equilibrium as fast as possible. Life defers, delays the immediate fall of free energy to ground state, trapping and rerouting it. This, the essence of metabolism, allows life to preserve itself as a degrading system.”

 

I’d been talking about this in vaguer terms in my G-Chem classes this semester, discussing entropy in terms of heat dissipation and “quality” of energy. Photons are high-grade energy that can be used (while being degraded). Heat is low-grade energy that can hardly be degraded any further. There’s no more entropy to produce in the equilibrium state. But if energy is continuously flowing, non-equilibrium thermodynamics moves you towards steady state – it’s equilibrium-like in the sense that concentrations of intermediates are hardly changing with time, and entropy production is minimized in a sense, but energy is still flowing maximally over time. As much as the system allows. Until a dam breaks and that energy can be rerouted for increased flow. That’s where kinetics comes in. What is a catalyst? It lowers the activation barrier by providing an alternate pathway. Or so my G-Chem students learn. By doing this, life can wring out useful energy to do work, losing free energy in tiny steps rather than blowing it all in a single fiery explosion.
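The catalyst point can be made quantitative with the Arrhenius equation, k = A·exp(−Ea/RT), which is about where G-Chem leaves it. A quick sketch; the barrier heights and pre-exponential factor are made-up illustrative numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(Ea_kJ_per_mol, T=298.15, A=1e13):
    """Arrhenius rate constant k = A * exp(-Ea/RT);
    A is an assumed generic pre-exponential factor."""
    return A * math.exp(-Ea_kJ_per_mol * 1000 / (R * T))

uncatalyzed = arrhenius(75.0)  # hypothetical barrier
catalyzed = arrhenius(50.0)    # alternate pathway, lower barrier
# Dropping the barrier by 25 kJ/mol speeds the reaction
# roughly 10^4-fold at room temperature.
enhancement = catalyzed / uncatalyzed
```

The exponential dependence is the point: an alternate pathway, rather than a brute-force increase in temperature, is how flow gets rerouted in controlled steps.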

 

Sagan and Schneider summarize it by rewriting Lotka. “Evolution and ecosystems will… maximize, on one hand, the energy intake of organic nature from the sun, and on the other, minimize the outgo of free energy by dissipative processes in living and decaying material. The net effect is to optimize in this sense the energy flux through the system of organic matter… But when organisms through their own evolution come upon a new energy source, there may be a period of danger associated with experimentation and rapid spread. The new energy forms, useful as they are, have not yet been integrated into stable modes of survival… Having lived through the global depression, two world wars, and the state-orchestrated development and deployment of nuclear warheads, Lotka towards the end of 1945 wonders whether, with higher energy flow, humans will become even more addicted to the energy capture and degradation business. Considering luxury products, he notes that the desire for such things… is not, like the biological appetite for food, in principle limited. He interprets luxury as new forms into which excess energy can flow.”

 

I admit to not having thought about the difference between commodities and luxuries in this way. Sagan and Schneider make connections to a related difference between natural and sexual selection. They also make an argument for why we observe convergence in biological evolution – the constraints are energetic and entropic. There’s an interplay or balance between rapid spread and adapting to shifting environmental conditions. Replicate identical copies quickly when you can, but at the same time, albeit at a slower pace, diversify! Or wait out harsher conditions as a seed or spore. “Go with the flow… but if there is no flow, hunker down and wait for the next good gradient.”

 

The final paragraph of this chapter sums up the story well in evocative prose:

 

“Quickly growing systems – ones that through evolution, technology, or both, tap into previously unrecognized or untapped gradients – may spread like wildfire. But, like raging flames, they rob themselves of their own resources. Slow growers, by contrast, display an innate ingenuity; they make up in longevity and cunning what they lack in rapid gradient destruction, dissipation, and entropy production. They gratify nature not instantly, but enduringly. There are many ways to skin a cat, whether Schrödinger’s new cat of the role thermodynamics plays in living systems or Blake’s feline of energy and fearful symmetry.”

Monday, March 29, 2021

Degrading a Gradient

If physico-chemical processes can be summarized by the dictum “nature abhors a gradient”, how then is that gradient reduced or degraded?

 

I've been puzzling over this, and have sketched out a preliminary picture, but I'd like to get into the weeds. Let’s begin with a cliff. A ball falling over the edge of the cliff goes down its (potential energy) gradient (A). When it hits the bottom, equilibrium has been reached. Nothing else happens. There is no longer any gradient where the ball is concerned.

 


Let’s now imagine a waterfall with the same cliff. As long as there is water (perhaps from rain) at the top, it will keep falling down along its gradient to the bottom. If it’s a hard cliff that cannot be altered by the flowing water, this picture remains unchanged, as long as the water keeps flowing (B). This is a steady state situation. The potential energy of the water continues to be dissipated as it falls. It might be possible to extract work from this continuously falling water – unlike the single ball that has hit rock bottom.
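The difference between the ball and the waterfall can be put in numbers. The ball dissipates its potential energy mgh once; the waterfall dissipates power, energy per unit time. The figures below are arbitrary, just to make the contrast concrete:

```python
g = 9.81  # gravitational acceleration, m/s^2

def waterfall_power(mass_flow_kg_per_s, height_m):
    """Rate of potential-energy dissipation for water falling
    down a cliff at steady state: P = m_dot * g * h, in watts."""
    return mass_flow_kg_per_s * g * height_m

# A single 1 kg ball dropped from a 50 m cliff dissipates
# 1 * 9.81 * 50 = 490.5 J, once, and then nothing else happens.
# The same cliff with 1000 kg of water falling per second
# dissipates 490.5 kJ every second, for as long as it rains:
print(waterfall_power(1000, 50))  # -> 490500.0 (watts)
```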

 

If it’s not a hard cliff, but consists of softer dirt, the cliff can be eroded. The extent of erosion would depend on both the force of the flowing water and how easily the dirt can be moved in little bits and pieces. This could reshape the top of the cliff, but also the bottom of the cliff as silt accumulates (C). The flow has now changed, albeit very slightly. If the flow is of very low force, but continuously removes dirt from the top and deposits it at the bottom, the cliff might eventually become a slope (D) and perhaps reduce in steepness (E), although the overall gradient for water flow (from top to bottom) remains the same. Eventually when you run out of cliff, that overall gradient may finally begin to reduce.

 

But what if you had a variable flow? You might imagine a different situation where silt is carried some distance but then starts to build up a small mound (F). This creates a small pond at the bottom of the waterfall pent up by a natural dam. The pond may overflow when water is abundant. The natural dam may break, depending on how sturdy it is and how well it can hold together against the force of water. The water may find some other way out beyond the single dimension represented by these simple figures. Imagine a mountain lake constantly fed by rains, its water finding its way down through many streams and rivers, the flows changing, and their paths evolving over time as natural dams are built up and broken down. The landscape might look quite complicated as nature finds its way to degrade the gradient.

 

On planet Earth, there are three source gradients that could power the evolution of the complex structures we call life. These are (1) the sun (providing both a thermal gradient and energy in the form of photons), (2) the gradient in chemical redox potential between the minerals of the lithosphere and the hydrosphere/atmosphere, and (3) geothermal energy from radioactivity. I’ve also ranked them by importance in extant life. Most life today essentially harnesses its energy from the sun. But deep down in hydrothermal vents, other (albeit weaker) sources of energy can power life.

 

How would we map chemistry onto this landscape? Kinetically stable compounds are pools that can build up until they overflow their banks. Increasing energy is akin to increasing the flow of water. “Low barrier” pools with low banks are easily overflowed as energy increases. “High barrier” steep banks are less easily overcome, but a weakness in the bank may cause a breach, resulting in an alternate pathway for water to escape – this is the role played by chemical catalysts that lower the energy barrier. A high enough flow can result in water sometimes traveling uphill, but only if there’s an inexorable downhill flow at a later point. The need to reduce the thermodynamic gradient reigns supreme. Our vision is limited by our discomfort imagining higher dimensions, but this is what we need to imagine in a truly complex chemical landscape. I don’t know how to describe the picture, nor am I sure I can grasp it in my feeble little brain.

 

Let’s therefore return to one or two dimensions to explore if there are other conceptual gains in our simple picture. In 1900, Henri Bénard “created” hexagonal cell-like structures by heating sperm whale oil; order arising from disorder spontaneously. Intriguingly, in a paper titled “Life as a Manifestation of the Second Law of Thermodynamics” (Math. Comput. Modelling, 1994, 19, 25-48), Schneider and Kay present a scheme analyzing these Bénard cells that looks similar to the waterfall cliffs (B, D, C, respectively). In (a), Q represents heat flow in an “ideal” isothermal system. In these experiments, heat initially flows by conduction, showing a linear slope across the temperature gradient in (b). At a critical point, conduction transitions to convection, as shown in (c).
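The conduction-to-convection transition happens at a critical value of a dimensionless group, the Rayleigh number (Ra ≈ 1708 for a fluid layer between rigid plates). Here’s a sketch using rough property values for room-temperature water rather than Bénard’s whale oil, purely for illustration:

```python
def rayleigh_number(delta_T, depth, g=9.81, beta=2.1e-4,
                    nu=1.0e-6, kappa=1.4e-7):
    """Ra = g*beta*dT*d^3/(nu*kappa). The thermal expansion (beta),
    kinematic viscosity (nu), and thermal diffusivity (kappa) are
    rough figures for water near room temperature."""
    return g * beta * delta_T * depth**3 / (nu * kappa)

RA_CRITICAL = 1708  # classical value for rigid-rigid boundaries

for dT in (0.5, 2.0, 8.0):
    ra = rayleigh_number(dT, depth=0.005)  # a 5 mm fluid layer
    regime = "convection" if ra > RA_CRITICAL else "conduction"
    print(f"dT = {dT:4} K -> Ra = {ra:9.0f} ({regime})")
```

Push the temperature gradient past the critical Rayleigh number and the linear conduction regime gives way to the organized convective cells.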

 


While the zig-zag flow across the gradient represents a “longer” path compared to the straight-line slope, it is more efficient at dissipating heat. Below is a plot showing both the heat dissipation rate and entropy production along the gradient, emphasizing the shift from conduction to convection. This critical point is where order seemingly arises from disorder. Increase the heat flow further, and eventually the hexagonal structures collapse once again into chaos.

 


I’ve been puzzling over the meaning of “length” and “time”, which in scientific terms are two independent and fundamental variables. But when we think about energy flow in non-equilibrium thermodynamics, the two categories might get fuzzy. For one thing, I recently saw a similar picture proposed for the evolution of “global structure of metabolic pathways” using a toy model (Abedpour, N.; Kollman, M. BMC Syst. Biol. 2015, 9:88). Their analogy is that there’s a trade-off between metabolic “path length” and “maintenance costs” and depending on the particular situation, you may observe the switch.

 


And I happen to simultaneously be reading an academic history book (The Art of Not Being Governed by James C. Scott) about the various “tribal” peoples in the highlands that border Burma, Thailand, Vietnam, Laos, and China. When asked how far it is from location A to location B, the locals don’t tell you the distance in length units, but rather in time units. Because that’s what counts. And therefore that’s what is counted. Fascinatingly, useful politico-economic “boundaries” can be drawn when one makes maps based on travel time rather than travel distance. What seems like a longer distance to us non-locals unfamiliar with the local terrain turns out to be the practical “shortest” time. The simple one-dimensional pictures I’ve drawn are misleading in a sense. In a multi-dimensional landscape, the most efficient way to degrade a gradient is the one that life figures out evolutionarily. How it does so… ah, that’s the fun of research.

Wednesday, March 24, 2021

Every Life is on Fire

What do Moses’ encounter with the burning bush, and the theory of dissipative adaptation in non-equilibrium thermodynamics, have in common?

 

Answering a question with a question: That we understand neither?

 

Okay. So that’s my lame attempt at inventing an origin-of-life joke; if it ever goes viral in the future, you read it here first.

 

You might be surprised, however, that a serious scientist is making a connection between a religious encounter with a strange sight and the physics of complex systems. That individual would be Jeremy England in his first book Every Life is on Fire. The subtitle of the book: “How thermodynamics explains the origins of living things”.

 


I’ve enjoyed reading England’s papers, and I’ve recently mentioned and quoted one of his key papers introducing dissipative adaptation. His prose is very clear and he’s good at explaining things by providing touchstone examples to the non-physicist, even when these papers are in highly technical journals.

 

Every Life is on Fire, though, is a strange hybrid book. It’s aimed at a very general non-scientist audience. You won’t see equations, and instead you get simple sketches communicating complex physical principles in simpler lower-dimension examples. However, in addition to the science, there are short vignettes on how key events in the first two books of the Bible (Genesis and Exodus) provide analogies (or perhaps parables) to picturing the science. The cynic in me thinks this will reduce the marketability of his book. Non-religious scientists reading the introduction might cringe at what they think might be mystical mumbo-jumbo that obscures rather than illuminates. Non-scientists interested in religious and spiritual things might be disappointed a few chapters in when they find that the book focuses on the science; the connections to spirituality are tenuous and loosely allegorical at best. That’s too bad, because while there is an unevenness to the book, England overall does a good job explaining his theory of dissipative adaptation.

 

Let’s dive into the science. What is dissipative adaptation? The idea is that if one is in a situation where energy is flowing, one will adapt oneself to that energy flux in particular ways. This adaptation can be in harmony with the ‘direction’ of energy flow, as when a tree bends its shape over time given the prevailing winds. But it can also resist that flow by reducing the effects of the energy-shaping over time. In some cases, the shaping causes the incoming energy to be more favorably absorbed. In other cases, the shaping causes “holes” where response to those energy bands is minimized. England uses vibrational resonance to illustrate a number of his ideas, e.g., the shattering of a wineglass by an opera singer. He also discusses balls rolling up and down hills, springs, and escalators.

 

I particularly liked his escalator analogy. He uses it to describe how an energy source can drive a chemical reaction one-way over an activation barrier, but make it very difficult for the reverse to happen. This is not what we tell our chemistry students in college-level General Chemistry. The picture we show them has a smooth curve representing the energy “hill” that must be surpassed for a chemical reaction to take place. Imagine rolling a ball up the hill. Only if the ball has sufficient energy in its motion to overcome the activation energy will it get over the hill to the other side. If not, it rolls back down. Indiscriminate broad-spectrum thermal energy (a.k.a. “heat”) is like this. The ball rolls around in its valley unless it gets enough of a kick to get over the hill.

 


But what if the source of energy had the feature of providing small kicks in a particular direction? Think of this as an escalator going up. It powers your movement in that direction so you don’t need to exert the same amount of energy climbing the stairs. Small work cycles in thermodynamics can conceivably do this under the right conditions. If you’re trapped in a deep valley surrounded by steep smooth mountains, but one direction provides a way of escape via an escalator, and all other directions would require you taking a run at the slope, what’s your most likely path out? The escalator!

 

The crux is that when you get over the hill, you slide down and there’s no escalator going back up in reverse. So even though the activation energy is the same backwards as forwards (in the diagram), this reaction will be mostly one-way. For such a symmetric barrier, equilibrium thermodynamics predicts equal amounts of reactants and products over time. Non-equilibrium thermodynamics with dissipative adaptation will favor the products.
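England’s escalator can be caricatured in a few lines of Monte Carlo. This is my own toy sketch, not England’s model: a walker hops along a symmetric energy profile (same barrier height both ways), but an energy “kick” assists only the rightward hops. All numbers are invented.

```python
import math
import random

ENERGY = [0, 2, 4, 2, 0]  # symmetric double well: same Ea both ways
DRIVE = 3.0               # the "escalator": assists rightward hops only
kT = 1.0

def fraction_in_product_well(steps=200_000, seed=42):
    random.seed(seed)
    pos, right_time = 0, 0
    for _ in range(steps):
        new = pos + random.choice((-1, 1))
        if 0 <= new < len(ENERGY):
            dE = ENERGY[new] - ENERGY[pos]
            if new > pos:
                dE -= DRIVE  # escalator helps forward, not back
            # Metropolis rule: accept downhill moves always,
            # uphill moves with Boltzmann probability
            if dE <= 0 or random.random() < math.exp(-dE / kT):
                pos = new
        if pos == len(ENERGY) - 1:
            right_time += 1
    return right_time / steps
```

Even with identical barrier heights in both directions, the walker ends up spending most of its time in the product well – the one-way traffic England describes.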

 

What does creating an escalator look like at the molecular level? This is not so clear. Mechanochemistry is a young up-and-coming field where mechanical stimulus through microwaves or some other oscillatory source at specific frequencies has opened up new types of chemistry targeting specific chemical bonds in specific chemical reactions. This is akin to the more established field of photochemistry that uses lasers or monochromatic photonics to drive specific reactions in chemistry. Resonance at specific vibrational modes is the driving force in these cases. England also mentions other clever setups involving rearrangements of particles that interact via weak forces taking on specific “organized” arrangements with an appropriate stimulus.

 

But all this clever chemistry is designed by intelligent humans with a specific target in mind. At the origin of life, things are much messier. The chemical system is a diverse hodgepodge of molecular compounds. The energy sources are less specific and more broad-ranging. In solution, where most chemistry happens, molecules are constantly tumbling about, and it’s difficult to “force” a specific orientation. This may be an argument for a key role played by solid surfaces – orienting molecules which are then subject to experiencing a particular gradient across the solid-solution interface. I’m discussing this in vague terms because we don’t quite know exactly what this would involve. In a sense, England’s book can be frustrating – we don’t know the answer or the specifics – but it also reflects the reality of what we know about such systems, which is not very much.

 

England recognizes the limitations in his theory of dissipative adaptation. He doesn’t make oversized claims, and he is careful to hedge his pronouncements. While his approach based in non-equilibrium thermodynamics and leveraging kinetics is not new, he provides a fresh way of looking at the problem with easy-to-understand analogies and examples even though the underlying reality is much more complicated, or more accurately, complex. The picture I have after reading his book is the shaping of a lump of clay by external forces. As you push the clay, it bends partly to your will absorbing the mechanical energy into its new structure. If you pressed your thumb into the clay, the imprint gives you a clue as to what took place. New external forces can cause other deformations into new structures. These might leave clues as to how the clay evolved into its present shape. Some of that “history” may have been erased or written over, but a close examination may recover other traces. I suppose that’s what we do in origin-of-life research.

 

The burning bush is a fitting analogy to England’s book and he discusses it in his final chapter. Fire, the energy source, does not consume the bush nor reduce it to ash. Rather the bush takes on larger life characteristics, or a larger-than-life character, as the deity speaks through the burning bush. Those opposed to the introduction of spiritual and religious imagery into science can skip it while enjoying the rest of the book. (They can also skip the couple of pages at the end of each chapter.) But for those who are open to a more holistic view, England does a relatively good job discussing how such imagery can help the reader appreciate complexity and paradox, things that science stumbles over. He doesn’t try to infuse mysticism into every aspect (as Gerald Schroeder does in The Science of God, which I find unconvincing), but instead England is thoughtful about science and spirituality. Overall though, it’s still a book about science – and it’s nice to see someone introduce non-equilibrium thermodynamics in layman’s terms and analogies. While I personally enjoyed reading it, I suspect others will find it less engaging compared to some of the best science writing out there.

Saturday, March 20, 2021

In-Between Spaces

Coincidentally, this month I read two books of fiction that feature the multiverse. What is the multiverse? It’s the idea that as we make choices in our lives, our space-time lines split into new universes as divergent paths are taken. This would mean universes are constantly splitting and there exist an infinite number of universes capturing every possible path traversed. This is the Many Worlds Interpretation of quantum mechanics generally attributed to the physicist Hugh Everett.

 

Is there evidence for the multiverse? Everett’s interpretation suggests that the equations of quantum mechanics do not preclude this scenario, and might even require it – at least if you believe in real possible non-deterministic choices. But is there “hard” evidence? No. And it’s unclear how one would test the Many Worlds hypothesis because there is no clear path between one universe and another. These worlds exist side-by-side yet are inaccessible to each other. How could that be? Well, time and space are funny things – at least according to physics. Science is stranger than fiction, thereby giving rise to interesting science fiction. Even if you don’t think you read science fiction, you’ve encountered time-travel, wormholes, and multiverses, in the movies, on popular TV shows and in the mass media. They’re fun to think about, but there’s no proof they exist.

 


The Midnight Library (written by Matt Haig) opens the narrative with a young woman about to kill herself. That sounds like a morbid way to begin a story that reads like a novel geared at young adults. But at the moment between life and death, the young woman finds herself transported to a strange space that resembles a library with shelves and shelves of books. It’s a liminal space. Transient. In-between. And the books contain stories of her other lives – paths she could have taken. She gets to explore these, accessing the multiverse via the in-between space of the liminal library.

 

No spoilers from me, except to say that the author cleverly uses a mechanism of “zoning out” to afford these visits. Just as the Matrix movies hint that the feeling of déjà vu might indicate you’re in a computer simulation, these zoning-out episodes might be a glimpse that the Midnight Library is in operation. You wouldn’t classify Haig’s book as science fiction, although it utilizes this clever mechanism to explore the roads not taken – with a hint that they could still be, if one so chose.

 


The Space Between Worlds (written by Micaiah Johnson) would indeed fall under the sci-fi genre. The protagonist is also a young woman who is born and grows up in a situation where death is rampant and comes easily. Therein lies the key to bodily accessing the multiverse. The premise is that technology has built the ability to traverse the multiverse. How the transporter works is nebulous – it’s literally a black box that’s spherical. But to survive the trip to an alternate universe, you must be dead in that other timeline. Otherwise bad things happen. As our young protagonist travels to her other “world” she traverses a liminal space that invokes ancient gods and mysteries, and perhaps the key to what holds the multiverse together.

 

But this isn’t what’s explored as science-technology questions are not the focus of the book. It’s about people and the choices they make. The space between worlds is about the gap between the haves and the have-nots, much like you might see in a movie such as Elysium or District 9. Johnson’s emphasis on peoples of different ethnicities reminds me of the writing in The Fifth Season (by N. K. Jemisin) although the latter has more prominent science elements. Both are excellent books, and I expect they will be made into movies – Johnson’s likely being the more tractable. No spoilers from me other than to say that life and death lurk prominently as one travels in between spaces (or times).

 

The question that reading these books brought to my mind: What is the space in between spaces? I hadn’t really asked myself that question because we typically think of space as the thing in between objects. The ancient Greeks might have been the first to articulate something that seems natural to us: There exist Atoms and the Void. Atoms are discrete. The void is continuous, and it’s the open space forming the backdrop allowing the movement of discrete objects. Without it, no movement. But could space be discrete rather than continuous? Can it be chopped up into tiny pieces and then chopped no further? And what are the implications if space is discrete?

 

We sometimes think of time, the fourth dimension of space-time, as being discrete. We measure moments. Blocks of time. And if those blocks get small enough, we can’t sense them, thereby allowing the movie industry to thrive by showing us reels of static pictures, each slightly different from the other, but separated by a time block too small for us to tell. Perhaps this is why quantum mechanics can be so speculative. It deals with spaces and times too small for us to directly sense or even build a device to tell if there are in-between spaces or in-between times.

 

If atoms and the void are all that exist, and if you wait long enough, any arrangement of atoms within that void, no matter how rare, might arise again. Maybe the accessing of different spaces in the multiverse is simply time-travel in the very, very long game. In thermodynamics, we have a word to describe such a system. Ergodic. A god-like vision appears in The Space Between Worlds. The Ur-God perhaps. An ancient one outside of time and space. Einstein called some of this deep physics the Secrets of the Old One. We have yet to uncover them. But in the meantime, I recommend both The Midnight Library and The Space Between Worlds. Both are engaging pieces of story-telling and very well written!

Thursday, March 18, 2021

Revisiting Textbooks

I’m annoyed. At textbooks.

 

Thanks to the supposed arms race of textbook prices and the resale market, my university bookstore is haranguing me about informing them what textbook I will be adopting (if I do so) for next semester. Yes, this now starts in early March for the semester that begins in September.

 

I haven’t submitted the requisite information for my quantum class, because this might be the semester where I decide not to use the textbook and switch to worksheets. I’ve been doing this for over a decade in my statistical thermodynamics class because I don’t like any of the textbooks out there. There is a quantum textbook I do like that I’ve assigned for years. But the price has gone up significantly over the years, and I’ve been thinking of incorporating new aspects into the class such as a more extensive focus on chemical bonding. Also, I think something more liberal-arts-ish as complementary reading such as The Quantum Astrologer’s Handbook might be an interesting pairing. And I’m not going to ask students to purchase two books. I could see my students later in life choosing to re-read the Handbook for fun. On the other hand, they’d try to sell their textbook as soon as possible if not going to graduate school in P-Chem.

 

To textbook or not to textbook? That is a question I’ve mused about. And my conclusions (or lack thereof) haven’t changed these last several years.

 

I’m also annoyed at the G-Chem textbook we’re using. While I was on sabbatical last year, far away and not participating in the discussion, the group teaching this class voted to switch textbooks starting this academic year. The primary reason, I surmise, was unhappiness with the less-developed and possibly clunkier online homework system, because it comes from a much smaller publisher rather than the Pearson juggernaut. We’ve used the Pearson juggernaut in the past for quite a while so most of us are used to its online homework system, Mastering Chemistry. And professors generally like what they’re used to – there’s an activation barrier to learn a new system. My opinion is that from the student perspective, there’s not much difference between one system and another. (See here for the limitations of such online homework.)

 

Last semester in G-Chem 1, the drawbacks of the present textbook (which, to be fair, is like most of the popular G-Chem textbooks in the market) were not as marked. In my opinion, it was marginally inferior in terms of arrangement of material, quality of figures in doing their job of visually highlighting the key features of otherwise challenging concepts, and the types of conceptual questions available. It wasn’t as big a deal. But this semester in G-Chem 2, I’m noticing these drawbacks to a greater extent – significant enough to irritate me beyond the occasional gnat. This, I think, is poorer for the students. But here’s the rub: While I might be significantly bothered by some of these aspects, my fellow instructors might consider these occasional gnats, and much prefer the online homework system that all of us are more used to.

 

We won’t be switching G-Chem textbooks next academic year, because it’s a group decision (and I’m just one vote out of a dozen instructors) and because I think one should always give a book at least two years before switching. So I’ve requested that the group revisit the discussion next year, for possibly adopting a different text the following year. I’m not optimistic I will prevail in a change back to the textbook we used the prior two years, which I think was much better for the students even if instructors didn’t like the homework system as much. I’m painting this picture in broad brushstrokes to more starkly emphasize the differences, but the actual discussion will be more nuanced and subtle when the time comes to air our differences of opinion. There are good reasons for us as a group to agree on a single textbook for both semesters of G-Chem and not go our own rogue ways, and I’m sure we’ll revisit these arguments when we meet next year.

 

Since G-Chem and P-Chem are what I’m teaching next semester, and the G-Chem textbook is already decided, I just need to figure out what to do for Quantum. Should I finally abandon the textbook and restructure my class using worksheets? It will be a lot of work on my part, but I think it might be worth it. But would the students be at a significant disadvantage without a textbook? Some think so. In my Thermo class, some students write in their evaluations that it would have helped them to have a textbook. This is balanced by those who write positively about the worksheets and not having a textbook. I expect the same to be true (from the student perspective) if I do the same in Quantum. But there are some significant conceptual and mathematical challenges in Quantum that may make the class “harder” for the students. I can imagine what these might be, but I think there are ways to overcome them. I won’t know until I try and they try with me.

 

Now that I’ve written this post, I feel less annoyed. Maybe that’s another role for writing my blog. Besides being a Pensieve-like storehouse for my thoughts, it’s cathartic.

Tuesday, March 16, 2021

Dammed by Kinetics

In general chemistry, we tell our students that the second law of thermodynamics is all about maximizing entropy. What is entropy? It’s a nebulous thingamajig that we are seemingly “forced” to introduce to explain why certain physical processes take place. For example, gases seem to always expand into vacuum, and table salt always seems to dissolve in water at room temperature and pressure.

 

When I first started teaching, general chemistry textbooks characterized entropy qualitatively (e.g., disorder, freedom of movement, heat dissipation) and then quantified the change in entropy by relating it to the change in enthalpy per unit temperature. Students were puzzled by what this meant. Then we thought it might be clearer if we smuggled some physical chemistry concepts into general chemistry, and current textbooks “teach” students how to count microstates and group them into macrostates. This causes our students to focus on one particular view of entropy: counting the number of arrangements and assessing the probabilities of different arrangements. It feels more scientific when you can quantify something, I suppose. And (though they shouldn’t) students feel more comfortable when there’s a formula you can plug things into.
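To make that counting concrete, here’s a minimal sketch (my own toy example, not from any particular textbook) in Python: four distinguishable particles, each in the left or right half of a box. Every specific assignment is a microstate; grouping by how many particles sit on the left gives the macrostates, and the evenly-split macrostate turns out to be the most probable.

```python
from itertools import product
from collections import Counter
from math import comb

N = 4  # four distinguishable particles, each in the Left or Right half of a box

# Each microstate is one specific assignment of particles to halves: 2^N in all.
microstates = list(product("LR", repeat=N))

# Group microstates into macrostates by how many particles are on the left.
macrostates = Counter(state.count("L") for state in microstates)

for n_left in sorted(macrostates):
    count = macrostates[n_left]
    print(f"{n_left} particles on the left: {count} microstates "
          f"(probability {count / len(microstates):.4f})")

# The counts are just binomial coefficients, comb(N, n_left).
assert all(macrostates[k] == comb(N, k) for k in range(N + 1))
```

Scaling N up shows why gases expand into a vacuum: the near-even macrostates swamp the lopsided ones.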

 

What we often don’t tell them: Our characterization of entropy arises because we’re using a closed-system model of equilibrium thermodynamics. I’ve been trying to highlight the usefulness and limitations of models in my classes, more than usual this semester. I think subconsciously that’s because it feels like we’re all trapped in our little Zoom boxes on-screen. I can’t wait to get back to in-person classes and relish the freedom of being in a more “open” environment.

 

Perhaps a better and more general way to think about the second law is that it’s all about reducing gradients. We talk about gradients in chemistry, usually concentration gradients, and we allude to gradients when we use the dictum that “everything is trying to become more stable” – energetically. We talk about downhill chemical reactions being favorable. We use the example of a waterfall when we discuss potential gradients in electrochemistry, analogizing the flow of electrons in the latter to the flow of water molecules in the former. And of course, the thermal equilibration underlying the zeroth law of thermodynamics is all about reducing temperature gradients – heat flowing from hot to cold, our epitome of a ‘spontaneous’ process.

 

In closed systems, gradients are reduced until they no longer exist. You’re done. You’re at equilibrium. You’ve maximized your entropy. This is an oddly static picture. We remind our students that things are actually at dynamic equilibrium – chemical reactions are still going back and forth, but at the same rates, so that macroscopically it looks like nothing’s changing. Students then get muddled when they encounter steady-state flows in biochemistry. It vaguely looks similar, but seems different. That’s because we’re now in a non-equilibrium situation, and we’re no longer operating in a closed system. But we’re still trying to reduce a gradient – even though that gradient persists in an open system even as energy flows down the gradient to be dissipated. This wider-lens view of thermodynamics is (I think) more useful because it helps us understand Schrödinger’s Paradox – why seemingly ordered life exists in conjunction with the second law of thermodynamics.
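The equilibrium/steady-state distinction can be seen in a toy simulation (my own illustrative sketch; all rate constants are made-up numbers, assuming simple first-order kinetics for A ⇌ B). Closed off, the system relaxes until the forward and reverse rates balance and the net flux dies. Given a constant inflow of A and an outflow of both species, it instead settles into a steady state where concentrations stop changing but a net A → B flux persists.

```python
def simulate(inflow, kout, steps=200_000, dt=0.001, kf=1.0, kr=0.5):
    """Euler-integrate A <-> B with optional inflow of A and outflow of both."""
    A, B = 1.0, 0.0
    for _ in range(steps):
        net = kf * A - kr * B          # net forward reaction flux
        dA = inflow - net - kout * A
        dB = net - kout * B
        A += dA * dt
        B += dB * dt
    return A, B

# Closed system: no exchange with surroundings -> equilibrium, net flux ~ 0.
A_eq, B_eq = simulate(inflow=0.0, kout=0.0)

# Open system: constant feed and drain -> steady state with a persistent net flux.
A_ss, B_ss = simulate(inflow=0.1, kout=0.2)

print(f"closed:  net flux = {1.0 * A_eq - 0.5 * B_eq:.6f}")
print(f"open:    net flux = {1.0 * A_ss - 0.5 * B_ss:.6f}")
```

Macroscopically both look "unchanging"; only the open system is still dissipating energy down a maintained gradient.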

 

The authors of the book Into the Cool have something useful to say about this: “Classical thermodynamics heads towards maximum entropy, exhaustion. In Onsager’s realm [of near-equilibrium processes] we see another situation, systems that minimize their entropy production. Energy scientists often assign systems a certain quantity of entropy production. But a better measure is specific entropy production… per unit weight, per unit volume of flow, per unit surface area… Subjected to continuous flow of energy and matter, no system can come to equilibrium… it does the next best thing… goes to a state of minimum entropy production – that is, to a state as close to equilibrium as possible.”

 

How do living systems accomplish this? Interestingly, here’s where kinetics comes in. Below is a typical picture I show in my G-Chem classes. I remind them that we often treat thermodynamics and kinetics as being independent from each other (the “activation energy” is not connected to the thermodynamic gradient), but they are more subtly connected.
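The independence shows up in the two formulas students learn: the Arrhenius rate constant depends on the activation barrier Ea, while the equilibrium constant depends only on the overall free-energy drop. A quick sketch (hypothetical numbers, chosen purely for illustration) makes the point: change the barrier and the rate changes enormously, but the thermodynamic endpoint doesn’t budge.

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # room temperature, K

def rate_constant(A, Ea):
    """Arrhenius: kinetics is set by the activation barrier Ea (J/mol)."""
    return A * math.exp(-Ea / (R * T))

def equilibrium_constant(dG):
    """Thermodynamics: K is set only by the overall free-energy change dG (J/mol)."""
    return math.exp(-dG / (R * T))

# Same downhill thermodynamic gradient, two very different barriers.
dG = -50_000.0  # J/mol
for Ea in (40_000.0, 80_000.0):
    print(f"Ea = {Ea/1000:.0f} kJ/mol: k = {rate_constant(1e13, Ea):.3e} s^-1, "
          f"K = {equilibrium_constant(dG):.3e}")
```

A big barrier dams a favorable reaction, which is exactly the leverage living systems (and dam builders) exploit.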

 


Imagine a mountain-top lake, such as those that feed the great rivers of planet Earth. Water flows down, often by more than one path. Gradients drive the flow, and the kinetic energy of liquid water cuts its way through the solid earth. The flows can change, cutting out new paths downhill, and sometimes they deepen the current flow path, allowing a greater flux of water. Sometimes they spill over their banks in flooding devastation. What did we humans do? We sought to utilize the natural flow down the gradient. We built dams. These structures allow us to control the flow of water, converting the potential energy into other forms whereby we can do useful work. We’ve set up a barrier allowing us to control the kinetics of water flow. The downhill thermodynamic gradient is dammed by kinetics.

 

I’ll quote from Into the Cool once more: “We are continually threatened by a too-large entropy production that would destroy our delicate bodies. Activation energy… keeps our bodies from exploding in puffs of smoke… In life, the chemical tendency inherent in the second law for the hydrogen of bodies to react with the oxygen in the atmosphere does not happen as violently – as in rocket fuel – but is channeled through the complex chemical systems we recognize as metabolism. And so, with intricate feedback loops and controls, we slowly ‘burn’, metabolizing rather than bursting into flames. But these chemical systems, like a waterwheel catching and redirecting a powerful stream to run a mill, can fail…”

 

Death lurks around the corner. We’re damned by thermodynamics. But life persists. Dammed by Kinetics. It’s an intricate dance. A dynamic steady state. A spiral of cycles. Let the music play on while the sun shines and maintains its energy gradient with us here on planet Earth.

Monday, March 15, 2021

Atomic You

When I teach kinetics in G-Chem, we always open with a short discussion of Molecular Me, or what your experience might be if you were the size of an atom or small molecule. This semester I took things one step further by asking students to write a short creative piece as their response to the weekly prompt, playfully titled Atomic You. I’ve warned my students about the dangers of anthropomorphizing in chemistry, but it can be helpful in getting students to think about the molecular level.

 

Since I also participate in the prompts (usually to write a quick thought or an encouraging response to a student who wrote something interesting), I decided to type in my own entry:

 

I am K+, a potassium ion hanging out in a neuron. My life is a roller-coaster. When there’s nothing much going on, I lazily move around randomly going with the flow while surrounded by my posse – water molecules who love hanging out with me. Every now and then, one water molecule leaves but another quickly takes their place. But then something called an action potential travels down the neuron and things get crazy. I suddenly feel impelled to charge at a tunnel in the wall. I lose my water friends and swoosh through the tunnel. And when things calm down again I float back to my original resting place. Whew! It would be nice to know when this happens so I don't get such a jolt every time.

 

The most common imagined situation was being a water molecule, sometimes in a soda or coffee, but two that I’ve highlighted below imagined being in water, the substance.

 

Hi! I am an atom inside a glass of water. Last night I was poured into a glass with some of my friends who were all stuck together in an ice cube, but overnight they separated and joined the rest of us in the glass who were all moving freely. All I really do all day is just float around and bounce off my other atom friends. Sometimes when some new friends are added in an ice cube we all get a little bit more stiff, but I can still float around. Sometimes when we are left in front of the window we get a little bit hot! When that happens we loosen up even more and I bounce off of my friends way quicker than usual.

 

I am an atom inside the pitcher of water in the fridge. I am excited because today I think I will finally get poured into a glass. My atom friends and I have been waiting for this day for almost a week, the people in this household do not drink a lot of water. All I hope is that we don't get separated and the atom friends that got frozen and turned into ice might even join us! It is fairly cold in the fridge so we are moving pretty slow so it will be exciting when we get poured so we can move a little faster as we warm in room temperature. I am hopeful that we won't get boiled and move into the air, it is dangerous out there once we become a gas since we start moving so fast. We all tend to get separated and take on different roles in our atom lives.

 

Sometimes the students picked interesting cases based on what they’ve learned. This one discusses selective precipitation, but also reveals that the student thinks ion-pairs often stick together in solution, even for “soluble” salts. I’m not sure any of my standard assignment questions would have elicited this artifact. Something for me to ponder.

 

I am an atom of Ag in the compound AgNO3. KCl is being poured into the beaker that I am in, creating a mixture and causing a reaction to begin. I notice that my bond between me and NO3 is breaking, and so is the bond between my K and Cl counterparts. After our bonds are broken, I begin to be attracted to nearby Cl atoms. However, this time I notice something different: The AgCl bonds that are forming are beginning to bond together as well. I am now a member of a lattice, and my bonds between my counterparts are stronger than ever. I am now part of a solid. Meanwhile, the K and NO3 have bonded as well, however they are still moving around quite freely as they are still in the aqueous state. My solid is now at the bottom of the beaker, and later on future reactions may break me apart. Otherwise, I will remain a solid.

 

Here’s one with material I haven’t covered in class. We get to batteries and electrochemistry only later in the semester.

 

I am a Lithium ion that is within a lithium ion battery. When the battery is not being used, I stay in the negative terminal of the battery within the device or system. Since my outermost shell has one valence electron, I constantly want to get rid of it. When the battery is being used, the electrons that leave me flow from the negative terminal to the positive terminal through the circuits and components in the battery. Since I am giving off negative electrons, I become more positive. Since I become more positive, I move through an electrolyte to the positive end of the battery in order to neutralize the charge build up and keep the reaction going with specific gradients. Throughout using the battery, there is less buildup of myself and other lithium ions in the negative end and the battery becomes empty.

 

The one below about carbon is my favorite! Reminds me of Hazen’s book, Symphony in C.

 

I am a carbon atom hanging out about 100 miles below the surface of the Earth. A lot of my other friends down here are also carbon atoms. Our environment is very hot and there is a ton of pressure on us because of this enormous rock called Earth. Because of these conditions, I am strongly covalently bonded to 4 of my closest carbon friends. We have assigned spots, resulting in a very orderly structure. My town is quite nice because of these strong bonds between carbon atoms. We are all so close and orderly that we become a hard crystal. Woah, did you feel that? Why is the ground shaking? What is AHHHHH! One second I was chilling in the Earth's upper mantle and now I'm on the surface of the Earth! A strong volcanic eruption must have taken my friends and I all the way up here! The eruption was so quick and powerful that the entire crystallized town stayed together...phew. Time passes (I don't know how much, I am just a carbon atom) and these weird walking creatures described my town as being a "radiant, rare diamond". I am very flattered :). They carefully pick my crystallized town up and gently dust us all off. I heard them discussing how we are going to be transformed into beautiful jewelery! Better yet, I can always be right near my other carbon friends, as one carat of our crystal diamond represents our entire town composed of billions of carbon atoms. 

 

And this final one, while not quite atomic-sized since the perspective is perhaps more macromolecular, is hilarious anyway. Life as a baseball.

 

I am an atom on the surface of a baseball. Only a few short weeks ago, my life consisted of travelling stretches of grass, easily the most boring thing of all time. And the smell! Horrendous! Thankfully, I made out of that life and into the life of surrounding cork and yarn. I certainly was handled pretty rough, but after the excruciatingly long time of doing nothing, I don't mind. I now reside in some sort of trap, made of the same components as me, but smelling so much better than the cows we came from. It's dark in here and the noises outside are loud, but I am finally in contact with something else. It seems to be a humans fingers! They're kind of sweaty...I wonder if he's nervous? The fingers begin to spin me in the trap...I think I heard them call it a glove? The fingers settle in on a position and suddenly I am out in the sun. Wait, now I'm not touching anything! But this rush of wind is so amazing, exhilerating, wonderful! CRACK! OWWWWW!!! What'd you do that for? That really hurt! Whoaaaa the view from up here is crazy! A sea of grass, a sea of people, and then a sea of...black? I came to a stop in some sort of painted box. I see a bunch of cars around me but man is it hot! Guess I'm going to get my suntan today!

 

I suspect that the allusion to cows being smelly is my fault, since I don’t know if many of my students have actually smelled cows. I grew up at the edge of town next to a rubber plantation where cows regularly trekked around, and our street would occasionally be plastered with cow poop. I didn’t tell the students this. But I did use methane being belched from cows as a memorable illustrative example of what I dubbed the mystery of the Unexploded Cow. Students who wanted to know more were directed to the origins of dragonflame.