Thursday, May 22, 2025

Exploring ADOM

I’m thirty years late to Ancient Domains of Mystery, more commonly known by its acronym ADOM. Created by Thomas Biskup in 1994, it is a computer role-playing game (CRPG) that is often described as Rogue-like. There’s an irreverent but informative video that discusses this classification amidst a high-speed playthrough of the ASCII version of ADOM. While there are spoilers, they go by much too fast for you to remember any of them, so I wouldn’t worry about it. I’ve never played the original Rogue so I have no basis for comparison. The three features that stood out to me are that it is turn-based, the dungeons are procedurally generated, and there is permadeath – when your character dies, the game immediately records it and all you can do is restart from scratch with a new character.

 

Most of my experience with computer games was for a half-decade in the mid-to-late 1980s, sometimes referred to as the golden age of CRPGs, when plenty of different designs opened up new spaces to explore. I was first hooked by Ali Baba and the Forty Thieves and got through Ultima V before life took over. In the last year, however, I have been rediscovering the cousins and close descendants of those early games. I’ve been surprised by delightful obscure gems such as Antepenult alongside well-known old classics such as the first Might & Magic.

 

The old CRPGs required patience. You had to grind your way through lots of fights to earn experience, gold, and better weapons and armor. You had to level up your spellcasters to access more powerful destructive and protective magic. The baddies and bosses got harder. You needed special items to access special areas. I don’t have the same patience now as I did forty years ago, but I am enjoying the discovery aspects of ADOM. There’s a huge world to explore (mostly underground since it is a dungeon-crawler) with tons of different items. You don’t know what’s around the next corner so you’d better be prepared to fight or run. There’s a sweet satisfaction in surviving a nail-biting encounter, discovering a strange new space, or coming across an item you’ve never seen before.

 

I think I’m on my tenth or twelfth character in ADOM, and only the second to make it to the mid-game stage. (You die early and often, which is part of the exploration.) My previous troll fighter reached level twelve and made it to Dwarftown but got killed shortly after, somewhere in the Caverns of Chaos. I currently have a hurthling archer who has successfully completed many early quests and whom I’m quite attached to by now, so I am save-scumming (keeping backup saves that let me restart if something bad happens or my character is close to death). What’s a hurthling? It’s like a halfling, hobbit or bobbit. If you recognize any of those names, you’ll know that CRPGs borrow heavily from Tolkien and his D&D derivatives. In ADOM, mithril gear is better than regular gear made of wood, leather, or iron. It’s also lighter – and carrying weight matters! But there’s also adamantium and eternium. ADOM has no problem mixing genres.

 

While I had no idea what I was doing in the first several games, experimenting with different objects and strategies, now that I have mid-level characters, I would prefer not to go back to the beginning and grind my way anew. (I’ve always had Fate decide all aspects of my starting character rather than picking my own stats.) So alongside my natural experimenting within the game world, I’ve also started referring to Internet resources (such as the ADOM Guidebook). In addition, I’m watching my way through a very entertaining play-through on YouTube, where I’m pacing myself episode-wise so my character is roughly at the same level. I’m learning that ADOM is even bigger than I thought, and there are things you can do that I hadn’t even considered. I suppose I’m learning from the large community of those who have gone before me. Long-term players have played thousands of games over the years and built up a deep body of lore.

 

It’s a bit (or perhaps a lot) like science – discovering the natural “laws” of the world around you, what you can do and what you can’t do. There’s the trial-and-error approach, which I used early on, and there’s learning from the community of those who have trialed-and-errored a lot more and are sharing the results of their labor. Finally, there might be folks who have looked at the code and can say something about the “hidden” underlying rules. This is how we humans learn. Many before us have experimented directly, and the fruits of discovery have been passed down to us so we don’t have to reinvent the wheel. As a teacher, it’s an integral part of my job to pass down this knowledge in my field of expertise, which is chemistry. One thing I convey to students is that we can come up with abstractions to understand the underlying rules of chemistry. This includes mathematical models that allow us to make powerful predictions about what will happen in new situations; it’s like discovering the source code of nature!

 

Yes, I could try to “enjoy” grinding my way through ADOM with no outside references. But I think the exploration is enhanced by tapping into the wisdom of the community while being careful to avoid spoilers. Without it, I think I would just give up – ADOM is a hard and unforgiving game. But games also allow you to explore the paths not taken, and ADOM’s many different starting characters and strategies, and its multiple endings (from what I’ve gleaned without looking at any of them in detail), provide a certain satisfaction. The procedurally generated dungeons add to the game’s high replay value. For someone with my old-school 1980s CRPG background, ADOM provides an exploration experience at its finest. Warts and all. My character recently grew horns due to increased background corruption. That was yet another surprise of reaching the mid-game!


Thursday, May 8, 2025

Shapeshifting Vine

I’ve been thinking about plants and photon-absorbing pigments having recently read a speculative and interesting origin-of-life article that suggests animals might be gardeners co-opted by plants. Last year, I read about flavor molecules and poisons, which are part of the suite of secondary metabolites released by plants. Right now I’m reading The Light Eaters by Zoe Schlanger, a fascinating look into cutting-edge and controversial research in botany. Do plants “scream in pain” when we pluck a leaf or break a twig? Do they then warn their neighbors with signaling molecules that danger is nearby? Can they listen to sounds? Are they “conscious” in their own way, different from humans or octopi? These are interesting questions, and Schlanger delves deep into the research. Her writing is also thoroughly engaging, aimed at the non-expert, reminding me of Ed Yong’s superb book. Who would have thought botany would be so exciting!

 


Chapter 8 discusses the “chameleon vine”: Boquila trifoliolata. I’d never heard of it before. It is native to Chile and has some interesting cousins in Asia. This vine of a plant is an actual shapeshifter. Not like the chameleon, the lizard that can only change its colors. Not like the leafy seadragon that has evolved to look like the seaweed where it spends its time. Not like the many examples of adaptation in nature that take generations. Boquila does its shapeshifting in real time. But this is plant time, measured in hours or days; too slow for us impatient humans who prefer to marvel at it with time-lapse photography. It has reshaped its leaves to mimic dozens of other plants, some of which look very different from each other. Sometimes the details are astonishingly close; sometimes the match is poorer. How does this happen?

 

There are two prevailing theories; there is some experimental evidence for each, but scientists are still in the throes of figuring out what’s going on. That’s what the cutting edge of science looks like. The theories may sound wild. One is the “plants have vision” hypothesis. Plant leaves have plenty of light-sensitive molecules, not just the ones used in photosynthesis. Maybe the vine sees its neighbor and mimics it. The other gives primacy to microorganisms: the shared space may allow microorganisms to exchange information (horizontal gene transfer) as they move from one plant to another, and that exchange may then translate into building leaves that look identical. Both ideas sound crazy when you first hear them, but there is some evidence for each. Not enough to gain widespread acceptance. Science is conservative, for good reason. You don’t discard an established theory that was built up from earlier evidence unless the new evidence against it is sufficient, and overwhelmingly so.

 

I don’t know which of the two theories I lean towards. However, both ideas have pushed me to think more about my own research. Since getting into origin-of-life research, I’ve started to pay close attention to microorganisms, bacteria and archaea. Since starting to teach biochemistry, I’ve been marveling at the world of metabolites – plants and fungi are amazing in this regard where secondary metabolism is concerned. I’m starting to see conjugated pi-systems show up in origin-of-life related molecules, and I’ve started to read up on how to analyze photochemical reactions using computational chemistry. There’s something intriguing about the interplay of photons and chemistry that could be the key to why we have dynamic systems, building up molecules and breaking them down, going along the flow of the second law of thermodynamics yet diverting it to one’s own ends. That last phrase might sound speculative and crazy too.

 

This brings me back to pondering the nature of the boggart, one of my early posts when I started this Potions for Muggles blog. We now have a plant boggart of sorts, able to shapeshift, though seemingly limited to mimicry. While the Harry Potter books do mention interesting plants and their properties, this seems subdued compared to Fantastic Beasts and Where to Find Them. We miss the wondrous nature of plants because they seem so unlike us. Boquila would have been of great interest to Professor Sprout, and I could see her working closely with a Potions Master to delve into the subtle connections between plants and their secondary metabolites that would go into potions!


Thursday, April 17, 2025

Spontaneity, Reversibility, Equilibrium

Last week, right after my P-Chem II class, a bright student came up to ask me to help clear up an issue: comparing reversible processes that actually move a system forward (where you might expect a change in free energy or overall entropy) with equilibrium (where you expect delta-G to be zero). Since we only have ten minutes between when one class gets out and the next one comes into the room, I did what many professors do – I gave a handwaving explanation. I said something about an overall system being at equilibrium macroscopically (equal rates of forward and reverse reactions), and separated that from a “reversible” process where you’re moving something along via infinitesimal steps. One’s the overall “system”, the other is looking at a series of steps for a specific process, I said. Clearly, or maybe obtusely, I was hedging.

 

Part of the issue here is that in real life you’d never run a process in infinitesimal steps because it would take an infinite amount of time. Essentially nothing is changing if each step takes an eternity. I did tell students that, in practice, no process is truly reversible in this infinitesimal-step sense if it actually takes place. Students understand this issue of practicality. But since P-Chem II is calculus-infused, we can do the math by taking limits. In particular, we go to the limit of the infinitesimal step and use d’s instead of deltas. This leads to beautifully simple expressions for the mechanical work or heat transfer in idealized “reversible” cases, which we compare to “irreversible” cases that are “less efficient”. In the reversible case, work in equals work out if you did this ideally. In the irreversible case, you get less work out compared to what you put in. Most students are satisfied by all of this, but this student had a bee in her bonnet, and rightly so.
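
Her question is also a good excuse for a toy calculation. Here’s a minimal Python sketch (all numbers invented for illustration) of an isothermal ideal-gas expansion done in a finite number of irreversible steps, where the gas pushes against a constant external pressure matched to the endpoint of each step. As the steps shrink toward the infinitesimal limit, the work extracted converges to the reversible result, -nRT ln(V2/V1), and every finite-step path extracts less.

```python
import math

# Isothermal expansion of n mol of ideal gas from V1 to V2 at temperature T.
# All values are illustrative, not from the class discussion.
R = 8.314                  # J/(mol K)
n, T = 1.0, 298.0          # mol, K
V1, V2 = 1.0e-3, 2.0e-3    # m^3

# Reversible limit: integrate dw = -P dV with P = nRT/V.
w_rev = -n * R * T * math.log(V2 / V1)

def w_stepwise(n_steps):
    """Expand in n_steps equal volume increments, each against a constant
    external pressure equal to the gas pressure at the end of that step."""
    w, dV, V = 0.0, (V2 - V1) / n_steps, V1
    for _ in range(n_steps):
        V += dV
        P_ext = n * R * T / V
        w -= P_ext * dV    # work done on the gas (negative: gas does work)
    return w

for steps in (1, 10, 100, 10000):
    print(f"{steps:6d} steps: w = {w_stepwise(steps):8.1f} J")
print(f"reversible:     w = {w_rev:8.1f} J  (the infinitesimal-step limit)")
```

The one-step case is the most “irreversible” and extracts the least work; by ten thousand steps you are within a fraction of a joule of the reversible value, which is the limit the calculus delivers in one line.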

 

Actually, I don’t know if she found my hedging answer satisfactory. I should ask her. I was unimpressed by my own answer even though it was practical in the interest of time (she had to run to another class). In fact, I told her that I needed to be more careful in how I defined the terms “equilibrium” and “reversible process” and that sometimes I’m not careful enough. I provide the students with the formal definitions in our lecture notes, but I slide my way from one term to another in the “heat of the moment” when I’m trying to be dynamic and lively in class. (“Heat” is another of those tricky terms.) In class I only look sporadically at my own notes. Sometimes I remember to emphasize something to watch out for, and other times I simply forget.

 

In my quest to figure out how not to confuse future students, I started scouring the primary literature for inspiration. The article I’ve found most useful thus far is by John Norton, titled “The impossible process: Thermodynamic reversibility” (Studies in History and Philosophy of Modern Physics 2016, 55, 43-61). It provides a historical slant and includes many examples where much more famous physicists have also elided their way through. Now I don’t feel so bad about my quick handwave. I think I can tighten up my definitions a little. And I need to spend a little time talking about the issue of using calculus to take the limit of infinitesimal steps. Many chemistry students are a little calculus-phobic, so I try not to emphasize the mechanics of calculus and instead concentrate on the chemistry. But I’m reminded that I need to be more careful in this regard. Norton also points out that on the molecular scale, thermal fluctuations make taking this ideal calculus limit problematic. In a big-picture Mack view, all this might be okay, but tiny Mike would protest that there’s a problem! (Mack and Mike represent macroscopic and microscopic views.)

 

I also need to be very clear when I use each of these terms: reversible, equilibrium, and spontaneous. I make a very big deal (multiple times) in both my G-Chem and P-Chem classes that thermodynamic spontaneity has nothing to do with how fast a reaction might take place. All it tells you is which way the reaction is likely to proceed absent any external intrusions on the system. Thankfully, we’re on the verge of changing our G-Chem textbook to, in my opinion, a superior one that excises the confusing term “spontaneity” and instead uses “thermodynamic favorability”. I’m all in favor of that change. I’ll just have to remind students to be careful when they encounter “spontaneity” on the internet because, sadly, that’s where many of them go to look up things rather than their textbook.

 

I think I should stop using the term “reversible” in G-Chem as a thermodynamic definition. I should limit myself to discussing forward and reverse reactions (in the kinetic sense), noting that both occur and that, if one waits long enough, the rates of the forward and reverse reactions eventually become equal. That’s when dynamic equilibrium is reached. If the change in system free energy or the change in the entropy of the thermodynamic universe is zero, then the system is overall at equilibrium. That’s it. No need to belabor the point.
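
To convince myself this framing holds up, here’s a tiny Python sketch with made-up rate constants: integrate A ⇌ B forward in time and watch the forward and reverse rates converge, at which point the concentration ratio equals kf/kr.

```python
# A <=> B with forward rate kf*[A] and reverse rate kr*[B].
# Rate constants and the time step are made up for illustration.
kf, kr = 2.0, 0.5        # s^-1
A, B = 1.0, 0.0          # mol/L; start with pure A
dt = 0.001               # s

for _ in range(20_000):  # 20 s of simulated time, ample here
    dA = (-kf * A + kr * B) * dt
    A += dA
    B -= dA

print(f"forward rate = {kf*A:.4f} M/s, reverse rate = {kr*B:.4f} M/s")
print(f"[B]/[A] = {B/A:.2f}  vs  kf/kr = {kf/kr:.2f}")
```

No free energies, no “reversible”, just two rates racing until they match. That’s all the G-Chem students need.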

 

In P-Chem I’m considering using the term quasi-reversible to emphasize that taking the infinitesimal limit is actually an impossible situation at the molecular level. Perhaps I should always say quasi-reversible process. This may help emphasize the distinction between the macroscopic system as a whole (which may or may not be at equilibrium) and considering a specific process in getting from one state to another state. I’m not sure I want to go into the language of “a series of connected equilibrium states” since this muddies the waters. Since I don’t use a textbook in P-Chem, I can just change all the notes that I provide students to tighten up these definitions. I will restrict using the word equilibrium to the usage I mentioned above in G-Chem. When I get to the stat mech version of discussing equilibrium, I will focus it on the equilibrium constant as a ratio of the number of product molecules versus reactant molecules, while reminding the students that the state of being at equilibrium is a macroscopic description. There will be a tricky part when I get to transition-state theory in thinking about the transition state as a quasi-equilibrium state; not sure how to handle that terminology-wise. We’ll see how this all works out the next time I teach P-Chem II.
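
For the stat mech version, a toy Monte Carlo sketch may help make “K as a headcount” concrete. The energy gap and temperature below are invented; for two equal-degeneracy states the predicted constant is just the Boltzmann factor, and counting molecules after equilibration recovers it.

```python
import math, random

# Toy two-state system: each molecule is reactant (energy 0) or product
# (energy dE). dE and T are invented for illustration.
kB = 1.380649e-23        # J/K
T = 300.0                # K
dE = -2.0e-21            # J; product slightly downhill from reactant

K_boltzmann = math.exp(-dE / (kB * T))   # prediction for equal degeneracies

# Equilibrate N independent molecules with Metropolis flips, then count.
random.seed(1)
N = 100_000
state = [0] * N          # 0 = reactant, 1 = product
for _ in range(50):      # sweeps; plenty for independent molecules
    for i in range(N):
        d = dE if state[i] == 0 else -dE     # energy change of flipping
        if d <= 0 or random.random() < math.exp(-d / (kB * T)):
            state[i] ^= 1

n_prod = sum(state)
print(f"K (counted molecules) = {n_prod / (N - n_prod):.3f}")
print(f"K (Boltzmann factor)  = {K_boltzmann:.3f}")
```

The “state of being at equilibrium” lives in the aggregate count; any individual molecule keeps flipping back and forth, which is exactly the macroscopic-versus-microscopic distinction I want the students to hold onto.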


Friday, April 11, 2025

The Optimality of Forgetting

In the education business, we often emphasize the business of remembering. Remembering what you learned is good. Forgetting what you learned is bad. Students may wish they had better memories to retain all the stuff I’m telling them. Heck, I often wish for better memory as I age and forgetfulness increases in frequency. So why do we forget, when improved remembering seems like what we want? If remembering were adaptively so much better than forgetting, evolution should have selected for the best memorizers!

 

What has our memory evolved for? And why might forgetting be just as important as remembering? One possibility is that in a noisy and ever-changing environment, having specific detailed memories that persist makes it difficult to learn new things and adapt appropriately to analogous yet different situations. I didn’t come up with this myself. I just spent the last hour reading a perspective article: “The Persistence and Transience of Memory” by Richards and Frankland (Neuron 2017, 94, 1071-1084). Parts of the article were slow going because I lack the background related to the experimental work being reviewed, but I think I got the gist of it. And that’s the point! Getting the gist may be what matters adaptively.

 

The authors argue that the interplay between persistence (remembering) and transience (forgetting or erasing memories) is key. In particular, transience “enhances flexibility, by reducing the influence of outdated information on memory-guided decision making, and prevents overfitting to specific past events, thereby promoting generalization.” There are supporting experiments in rats and fruit flies for this hypothesis. Neural network models also suggest a congruence with the experiments: injecting “noise” into the network, reducing weighting factors, and encoding sparsely rather than densely all seem to improve the network’s ability to handle generalized situations.

 

When teaching physical chemistry (and to a lesser extent in general chemistry), I try to emphasize the models underlying the equations we use. The simpler the model, the simpler the equation and the more generalizable it is: the ideal gas law (PV = nRT) is an example of a very powerful equation that works for any gas, as long as it behaves close to ideally. The model of an ideal gas imagines a large number of particles moving randomly in a box with plenty of empty space, with all collisions being elastic. That’s a good approximation for N2, O2, CO2 and Ar, which constitute over 99% of dry air. We can elaborate the model further for “real” gases through the two-parameter van der Waals equation or a multi-parameter virial equation. A mathematical model is powerful because its quantitative nature allows it to make predictions about situations not yet encountered.
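
To show what “elaborating the model” buys you, a short Python comparison works nicely. The van der Waals a and b below are the commonly tabulated textbook values for CO2; the two models agree at low density and part ways as the gas is compressed.

```python
R = 0.082057  # L atm / (mol K)

def P_ideal(n, V, T):
    """Ideal gas law: PV = nRT."""
    return n * R * T / V

def P_vdw(n, V, T, a, b):
    """van der Waals: (P + a n^2/V^2)(V - n b) = nRT, solved for P."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# Commonly tabulated van der Waals constants for CO2 (textbook values):
a, b = 3.640, 0.04267    # L^2 atm mol^-2, L mol^-1

n, T = 1.0, 298.0
for V in (22.4, 2.0, 0.5):   # L: from dilute to quite compressed
    print(f"V = {V:5.1f} L:  ideal {P_ideal(n, V, T):6.2f} atm,"
          f"  vdW {P_vdw(n, V, T, a, b):6.2f} atm")
```

At 22.4 L the two pressures differ by less than one percent; squeezed to 0.5 L the van der Waals correction is large, which is exactly when the extra parameters earn their keep.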

 

But putting in too many parameters can result in overfitting, which can then result in incorrect predictions. So if we go through life encoding every moment in dense detail, it might actually hamper our ability to see the forest for the trees and adapt to new situations. Everything is a detail and the big picture is lost. The article’s introduction mentions the oft-quoted story of a patient with a seemingly photographic memory of his entire life who nevertheless had plenty of trouble navigating it. I’m also reminded of how we learn when encountering something new. If you’re a novice, you try to absorb as much as you can, but you have no idea which “details” are important and which are not. But if you already have some background, you’re able to ignore the artifacts and focus on abstracting the most crucial features. How exactly that happens, I don’t know. But I see it every day in my teaching. I constantly have to remind myself that I have the curse of knowledge, in that I can’t quite remember or fathom how hard it was for me to build my chemistry scaffold oh so many years ago.
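
The over-fitting point is easy to demo numerically. A minimal numpy sketch with synthetic data (the underlying “law” is just a straight line plus noise): a high-degree polynomial chases every noisy point and then typically predicts new situations worse than the simple model does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a simple underlying "law": y = 2x + 0.5.
x = np.linspace(0, 1, 12)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)

x_new = np.linspace(0, 1, 200)       # "new situations" not seen in training
y_true = 2.0 * x_new + 0.5

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)    # degree 9 threads the noise
    y_pred = np.polyval(coeffs, x_new)
    rms = np.sqrt(np.mean((y_pred - y_true) ** 2))
    print(f"degree {degree}: RMS error on new points = {rms:.3f}")
```

The degree-9 fit “remembers” every training point in dense detail and generalizes badly; the straight line forgets the noise and gets the gist.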

 

We humans haven’t had enough time to evolve towards learning academic subjects. Or even the seemingly simple acts of reading, writing and arithmetic. I don’t remember how I learned to read. I improved my writing through sheer practice and repetition. I have a vague “memory” that algebra was completely opaque when I first encountered it; but I had an aha(!) moment at some point in life and somehow grasped it in a gestalt experience. Now algebra is obvious to me, yet sometimes I’m at a loss helping students work a chemistry problem and realize they don’t get algebra. (This is a very small number of students, but I’ve noticed a few more post-pandemic.) Learning is still mysterious to me.

 

What can I do to help students learn chemistry? In class and through homework and practice, I try to emphasize the things students need to remember. I repeat the salient points so often that I sound like a broken record, but I think it’s crucial to keep the students attending to the main thing. The first time I say something, the strongest students may grasp the salience, but the majority of the class hasn’t yet. So I need to keep repeating and emphasizing the most general principles. But I have to do this in the context of multiple examples that look different from each other. Same principle, different example. This is the key to “transfer”, the ability to effectively apply something you’ve learned in a different situation; and this includes knowing the limits of applicability!

 

I also add a lot of tidbits (history, broader applications, interdisciplinary connections) to my lectures. I hope that the students find them interesting, possibly strengthening a neural connection; but even if students forget these, that’s okay. For the things I need them to remember and use, there’s no substitute for repetition to strengthen the memory (both conceptual and procedural). If the students don’t practice retrieving these memories and using them, they will forget. It’s not a bad thing. Transience and persistence go together, and I wouldn’t want my students to be maladaptive to new situations. So I’m not looking for them to have better memories (even though they might wish for it), but I’m trying to strengthen the neural connections they do have and maybe even dislodge some misconceptions they might hold. Forgetting has its place in learning!


Saturday, March 8, 2025

Rediscovering Earth

Captain’s log, 25.04-05 in the year 4620. We’ve found the planet they call Earth in the Sol system of the Pythagoras cluster. There are no signs of life but we will try to find the underground station.

 


I’m playing Starflight, released in 1986 for the IBM-PC. The four-color CGA made it hard to distinguish terrain types, and I found it challenging to use the controls via the emulator on my laptop. After a bit more research, I found a “cracked” Amiga version that doesn’t require using the copy-protection codewheel every time I leave starbase. The expanded color palette is also very welcome, and the controls are more streamlined. Except for space battle, where I still don’t know what I’m doing and randomly shoot at enemy craft.

 

Starflight is impressive. I’m amazed at how much game they were able to pack in given the memory and disk constraints of the time. You start out as a starship captain from the Arth system, hire a crew, and explore the galaxy. The galaxy is huge! All those little circles represent solar systems, each of which may have multiple planets. There are more hidden in the blobby green nebulae.

 


I don’t know what the object of the game is yet. Starting out, I needed to make some money to improve my ship and train my crew. To do that, mining is the name of the game. I was told that the innermost rocky planet was a good place to start prospecting so that’s what I did. Then I decided to explore the third planet and found strange artifacts such as a blue bauble, a silver gadget, a bladed toy, and strange cloth. There were ruins and a message telling me about a black egg. Also, there were strange creatures. And despite the simple graphics, the game gives you the feeling that you are indeed an explorer. It feels like you’re in Star Trek exploring strange new worlds. Even the silhouette of my starship reminds me of the Enterprise. Except mine’s much smaller and only has six crew members.

 


Not only is the galaxy large; the planets themselves are sizeable areas to explore in one’s all-terrain vehicle. You pick a landing site and then drive around the local area grabbing minerals and specimens of the local fauna and flora. Sometimes there are ancient ruins where you load up on endurium, fuel for your starship. As a chemist, I’m delighted to read the chemical composition of the atmosphere, hydrosphere, and lithosphere of each planet I visit. We note the climate and the gravitational field, and determine if the planet is suitable for colonization. If so, we log it and gain a reward upon returning to starbase, which is also where we sell our minerals and specimens, upgrade our ship, visit the bank, and, if needed, hire crew replacements.

 

Then I ventured outside the Arth solar system. The nearby system has a K-class star, slightly unstable, apparently. On one of the planets I found a message asking me to report to another planet in a different system, but the message was cut off. Clues lead to more clues, and in the meantime, the message board at starbase suggests that things are amiss. The sun of Arth is dying. Other ships have been destroyed by androids. And as I venture further into outer space, I encounter aliens! The dialogue system runs in real time and you need to keep your wits about you. Do I try to be friendly? Do I raise my shields and prepare to fight? These encounters are tension-filled, even nerve-wracking, and you can be destroyed quickly by superior foes.

 

I’ve now learned a little more history from some of these encounters. A long time ago there was an old empire, but as starfaring expanded, different alien races encountered each other, and war often broke out. But now something is beginning to threaten all of life. There is no longer any life on Old Earth. It might be a race against time but I don’t know what to do yet.

 


Interestingly, Pluto is not a planet in Sol, even though at the time Starflight was released Pluto had not yet been downgraded to a dwarf planet. Mars is mineral-rich, and there was a polar station, but it has long been deserted. In the meantime, I’ve made some friends and made some enemies. My ship is pretty formidable and I’ve got a good crew. I’ve picked up some useful and interesting alien devices, and I have a reasonable guess of what I might need to do to prevent galactic destruction, but I need more specific information. Meanwhile, the price of endurium is going up as people look to flee to safer havens. But for now, I’m elated that I rediscovered Earth!


Thursday, February 13, 2025

Animals as Gardeners

Yesterday, in my General Chemistry class, we discussed using bond energies to calculate the change in enthalpy of a chemical reaction. Breaking bonds is endothermic and requires energy input into the system. Conversely, making bonds is exothermic and energy is released from the system. Chemical reactions almost always involve both the making and breaking of bonds. Therefore, whether the overall chemical reaction will be endothermic or exothermic will depend on whether the bonds being broken are stronger (or weaker) than the bonds being formed.
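
For readers who want the arithmetic spelled out, here’s the bookkeeping for the combustion of methane in a few lines of Python, using typical average bond enthalpies (tables differ slightly between textbooks, so treat the exact numbers as illustrative).

```python
# Typical average bond enthalpies in kJ/mol (textbook values vary slightly).
BE = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

# CH4 + 2 O2 -> CO2 + 2 H2O
broken = 4 * BE["C-H"] + 2 * BE["O=O"]   # endothermic: energy into the system
formed = 2 * BE["C=O"] + 4 * BE["O-H"]   # exothermic: energy out of the system
dH = broken - formed

print(f"bonds broken: +{broken} kJ/mol")
print(f"bonds formed: -{formed} kJ/mol")
print(f"estimated dH = {dH} kJ/mol")     # about -818 kJ/mol: strongly exothermic
```

The bonds formed are collectively stronger than the bonds broken, so the reaction comes out exothermic: exactly the comparison described above.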

 

One example I showed was ATP hydrolysis. The reaction is marginally exothermic. Even though the same types of bonds are being made and broken, the bond energies are slightly different in different chemical structures. That’s the beauty of chemistry – a subtle interplay between structure and energetics! The purpose of this example was to counter the conceptually wrong mind-worm students acquire whereby they tell me that “breaking bonds releases energy”. This usually comes from a simplified misunderstanding of something they hear in a biology class.

 

Towards the end of class, I couldn’t resist connecting bond energies to the origin of carbon-based life on Earth. The students had previously worked a problem on the strength of the O–H bond in water and the corresponding wavelength of a photon that matched the bond energy. Referring to the solar spectrum and ultraviolet light, I speculated about how adenine may have been important as a photon absorber prior to its role in the universal energy transduction of living systems. I mused about water-splitting, the invention of photosynthesis and suitable molecular pigments (conjugated pi-systems!) that may have arisen through chemical evolution. I didn’t say anything about such pigments dissipating thermal energy and seemingly “wasting” it.
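
The bond-to-photon problem the students worked goes something like this sketch (I’m using a common textbook average of about 463 kJ/mol for the O–H bond in water, not necessarily the exact class value): the matching photon lands squarely in the ultraviolet.

```python
# What photon wavelength matches the O-H bond enthalpy in water?
# 463 kJ/mol is a common textbook average, used here for illustration.
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
N_A = 6.022e23     # Avogadro's number, mol^-1

E_bond = 463e3 / N_A            # J per bond
wavelength = h * c / E_bond     # from E = hc/lambda

print(f"energy per bond: {E_bond:.2e} J")
print(f"wavelength:      {wavelength * 1e9:.0f} nm (ultraviolet)")
```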

 

This brings us to today’s question: Why do animals exist on Planet Earth?

 

This morning, I went down a rabbit-hole reading several articles by Karo Michaelian. It all started with “The Pigment World: Life’s Origins as Photo-Dissipating Pigments” (Life 2024, 14, 912). He makes the provocative claim that animals essentially “provide a specialized gardening service to the plants and cyanobacteria, catalyzing their absorption and dissipation of sunlight in the presence of water, promoting photon dissipation, the water cycle, and entropy production.” That’s a mouthful. We’ll break it down momentarily, but essentially the claim is that animals help to move molecules around, spreading them far and wide so that more and more photons can be absorbed and that energy dissipated. It’s the second law of thermodynamics in action at the level of the biosphere. And what’s the stuff we’re moving around? Pigment molecules!

 

It’s an interesting argument. He begins with the observation that many leaves absorb photons across the ultraviolet and visible range before their absorption drops off significantly at the infrared boundary. Leaves look green to us because red and blue light are absorbed more than green. Photosynthesis, however, only makes use of a narrow regime of red light, yet leaves strongly absorb in the ultraviolet and in the (blue) visible range. Plants evolved to absorb photons that are hardly absorbed by water, and apparently “fill even small photon niches left by water over all incident wavelengths”. That’s rather curious. Also, the albedo in life-rich ecosystems (jungles and forests) is considerably lower than in sandy deserts, which reflect much more of the incident light. Additionally, “the albedo of water bodies is also reduced by a concentrated surface microlayer of cyanobacteria”. What happens to this absorbed energy? It is converted to heat – essentially chopping up a small number of high-energy photons into a large number of low-energy photons. It’s the second law of thermodynamics: energy is being dissipated and entropy increases mightily!
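
The “chopping up” claim fits on the back of an envelope. Here’s a rough Python estimate with the usual round numbers for the sun’s and Earth’s surface temperatures, treating each thermal photon as carrying on the order of kB*T: one green solar photon degrades into roughly a hundred infrared photons, and the entropy bookkeeping comes out strictly positive.

```python
# One solar photon absorbed and re-radiated as thermal photons at Earth's
# surface temperature. Round-number temperatures, order-of-magnitude only.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
T_sun, T_earth = 5800.0, 300.0     # K

E = h * c / 500e-9                 # energy of a ~green solar photon, J
n_out = E / (kB * T_earth)         # thermal photons carry ~kB*T each
dS = E / T_earth - E / T_sun       # entropy change, treating E as heat

print(f"one {E:.2e} J photon -> roughly {n_out:.0f} low-energy photons")
print(f"entropy produced: ~{dS:.1e} J/K per photon (positive, as the second law demands)")
```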

 

The evolution of these absorbing pigments in plants may have been primarily to increase transpiration. Photosynthesis is a secondary player in this regard. That’s a shocker to me. I’ve always considered the oxygenating of the atmosphere via photosynthesis to be a driver for the complexity of life – which it is – but I hadn’t thought of it as a byproduct of mostly increasing heat dissipation via transpiration. In the first week of class, I told students about water’s high heat capacity and its suitability as a calorimeter. In a couple of weeks, after we get through entropy, we’ll be looking at the change in enthalpy and entropy of vaporization as liquid water turns into gas. Water is an excellent dissipator that helps drive the second law of thermodynamics, but it does so more effectively when there’s more of it moving through Earth’s water cycle. Transpiration puts more water into the cycle!

 

What do animals do? They help disperse the pollen or seeds of plants. They help bring nutrients to plants through poo or death. As heterotrophs disperse organic matter, they disperse the pigment molecules. More opportunities for absorbing photons. More dissipation into high-entropy heat. We animals are the gardeners, helping the second law roll along. Humans in particular have come up with alternative ways to tap photons with inorganic materials, but that’s a recent phenomenon. The organic pigments have been at it far longer than we have. All this makes me wonder if the reason photosynthesis is so inefficient is that life isn’t optimizing for capturing energy from photons in that way; rather, it is optimizing for seemingly wasteful heat dissipation. The second law rules!

 

The tropics are rife with life. Is it because they receive the most photons? Why are there so many insects there? They’re a key part of the gardening crew. Why are there larger animals further away from the equator? The gardening crew is mostly about seed dispersal and larger creatures roam far and wide to stay alive in a less energy-rich environment. Michaelian argues that his proposal cuts to the heart of the source of evolution – the second law, a physical imperative. It cuts through the Gordian knot of biological relativity. It gets around the problem of extending the ecosystem to include more and more of its environment until it becomes an organism of sample size one where Darwinian evolution becomes nonsense. It’s an intriguing argument.

 

A linchpin of the argument is the chemical evolution of pigment molecules that absorb well in the UV-C range eventually transforming into the “broadband pigment world of today”. A specific detailed example looks at the oligomerization of HCN into adenine (C5H5N5) and relies on physics-based arguments about the dissipative process after a UV-C photon is absorbed. In particular, it hinges on the photoexcited pigment rapidly reaching the conical intersection that shunts it towards a particular product. There is some hand-waving about how this opens up the production of a broader spectrum of molecules capable of absorbing a larger range of wavelengths in the UV-Vis range. Analogies are made to how thermal convection cells arise as forces come into “balance”. Stationary states, autocatalytic cycles, and other such features are invoked. And finally, once the ozone layer built up, access to UV-C was much reduced, and therefore we’re unlikely to see life originate again from scratch on our planet. (Also, the heterotrophs will chomp up anything they can!)

 

The final kicker? If UV-C is crucial to the origin and evolution of carbon-based life, then you’re unlikely to see life evolve in systems powered by M-type red dwarf stars. That’s not good news for astrobiologists, who have become increasingly interested in such systems as providing suitable cradles for life. UV-C, primarily thought of as a destroyer, now also takes on the role of creator – Brahma and Shiva, two-in-one, with Vishnu in between as preserver while the photon flux from our sun lasts.

 

The bottom line of how the second law and chemistry intersect? In Michaelian’s words (from a different article): “All material will dissipatively structure, depending on the strength of the atomic bonding and appropriate wavelength region.” Funny how a first-day class exercise of connecting bond energies to photon energies might turn out to be the foundation of everything we see in our solar system, be it on our living planet or our seemingly dead neighbors. I haven’t yet wrapped my head around all of this. In the meantime, I’ll just keep on being a gardener and cultivator of my students’ understanding of chemistry and its wonders. And I won’t look at a plant in the same way again!


Monday, January 27, 2025

Arc of Invention

Who invented the airplane, the steam engine, and the printing press? I would have answered: the Wright brothers, Watt, and Gutenberg. Now I’m not so sure after reading How Invention Begins by John Lienhard. His argument is to look much more broadly at the ecosystem surrounding such technologies both backwards and forwards in time. And while the aforementioned names are the most famous or well-known, Lienhard brings to light many other names and their contributions to the process. Nor is there one type of airplane, steam engine, or printing press. Rather there’s a rich variety of such technologies even if the famous names are associated with iconic versions of each.

 


Why did these inventions come about, and were they inevitable? Lienhard wants us to figure out the broader motivations: the desire for flight, the desire for energy sources to do work, and the desire to get your ideas out! And as incremental improvements built up along the road to each technology, there was an inevitability that a flying machine, an engine, and a method for mass-producing reading material would show up. Perhaps not the iconic Wright, Watt, or Gutenberg version we encounter in museums or history books, but some version would have been invented, then widely used, and then possibly surpassed.

 

Today’s blog post focuses on one chapter in the book, “From Steam Engine to Thermodynamics”. It’s particularly relevant for my G-Chem and P-Chem classes this semester, where thermodynamics is a sizable chunk of the course material. Here’s the broad sweep before we get to the nineteenth century. Mankind has long known that boiling water turns it into steam. It’s obvious that the gaseous state can be powerful when you encounter strong winds. But how would you harness that power? As far back as antiquity, there was one Hero of Alexandria who made mini steam-powered turbines. The Egyptian alchemist Zosimos writes of one Maria the Jewess who invented the double boiler, figured out how to make silver sulfide for metalwork, and essentially founded a school of chemistry. By medieval times, windmills had shown up, and in the seventeenth century the behavior of gases was being investigated and vacuum pumps were introduced.

 

I’m skipping over the details of the steam engine, in which Watt played an important role amidst a constellation of many others; Lienhard, as a mechanical engineer, discusses this in detail. He also does a great job condensing and lucidly explaining how the ideas and terminology of phlogiston and caloric came about, even though those theories have now been superseded by the atomic theory. Joseph Black (a contemporary and friend of James Watt), William Cleghorn, Joseph Priestley, Antoine Lavoisier, and Carl Scheele all show up as they puzzled over the nature of heat (which still confuses us today), and it took a while before the mechanical theory of thermal energy began to take precedence. Even today we still tend to think of heat as fluid-like, a holdover from the caloric theory.

 

In the nineteenth century, while atomic theory was still fighting for recognition, another constellation of folks built the foundations of classical thermodynamics – no atoms needed! Carnot, Mayer, Joule, Clausius, and Tyndall show up for their turn in the spotlight. While I had heard all these names and learned the streamlined version of the history, I appreciated Lienhard’s wading into what was confusing at the time. Carnot accepted caloric theory even as he formulated his now famous “ideal” engine model. Mayer, who was trained as a doctor, made the observation that venous blood was redder when he was in Indonesia (then called the Dutch East Indies). It turns out that’s because in the tropics you don’t need to “burn” as much food (there’s a little more oxygen left in the venous hemoglobin). And that got him thinking about energy transformation. Joule connected work and heat through his famous paddle-wheel experiment, now a mainstay of textbook figures. By the time Clausius put it all together, you have the introduction of the new term entropy, and a way to quantitatively discuss the efficiency of an engine.
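
That quantitative efficiency statement is compact enough to put in two lines of Python. With illustrative temperatures: no engine running between a hot and a cold reservoir can beat the Carnot limit, eta = 1 - Tc/Th, no matter how cleverly it is built.

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of heat convertible to work between two reservoirs (K)."""
    return 1.0 - T_cold / T_hot

# Illustrative numbers: steam at 450 K dumped to a 300 K environment.
print(f"eta_max = {carnot_efficiency(450.0, 300.0):.0%}")   # ~33%, before any friction
```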

 

I always find it a balancing act when teaching thermodynamics. Much of the language and terminology we have inherited isn’t intuitive, and students easily get confused. The equations we use are built on models from the nineteenth century, when calorimetry was an important technique for trying to figure out energy changes in chemical reactions. Chemists use the word enthalpy to describe these changes, again confusing the students when it is used interchangeably with heat, even in cases where no temperature change is taking place. Knowing the models and their limitations helps us think about thermodynamics, but they’re a little strange and were defined for an age now past. In P-Chem, my treatment of thermodynamics is heavily statistical and I try to show students how this leads to what they first encountered in G-Chem. I try to include some of the history for context, but I’m not sure the students quite appreciate it. I certainly didn’t when I was an undergraduate.

 

One thing in Lienhard’s book that I’m still pondering is that we can trace the broad arc of invention in hindsight. But it’s very hard to see where something is headed when you’re in the midst of what might be a technological revolution. Right now the buzzword is A.I., most familiarly in the form of large language models that guzzle energy resources. How Invention Begins was published almost twenty years ago as the Internet was becoming ascendant. After discussing the printing press, the explosion of literature, and then the opening up of tertiary education opportunities with the G.I. Bill, Lienhard wonders where education is headed in the age of the Internet. We haven’t quite figured that out, and we’re already starting to grapple with A.I., with numerous pundits championing or decrying it. There is an arc, and we should ask the broader question of what humankind is aspiring to, but I’m not sure it’s a thirst for knowledge per se, at least in the way an educator like me envisions it.

 

Will we always crave novelty? I think we’re wired to do so. Do we want labor-saving devices? Yes, most likely. But we also want nebulous things like meaning and fulfilment in life, and it’s less clear how the technological arc will lead us in that direction. If we’re not careful, we can end up becoming slaves to a small oligarchy satisfying their desires for novelty, labor-saving, and fulfilment, which will override, at least for a time, what the majority would like. But within such a complex system, with nonlinearities that we cannot easily predict, at some point a phase change may take place. A revolution. An evolution. It will likely be messy and painful because of globalism and interconnectivity.

Wednesday, January 22, 2025

The Predictive Brain

What is our brain for? Making predictions. Why? Because that’s one way for a living organism to survive and possibly thrive in an environment that’s constantly changing. In the words of Andy Clark, author of Surfing Uncertainty, the brain is “an action-oriented engagement machine, adept at finding efficient embodied solutions that make the most of body and world.” I’m glad Clark provided that pithy summary at the end of his book. Because I’m not a neuroscientist, it took me a while to work my way through his argument. But I’m glad I did because it made me think a lot about how humans learn and about my origin-of-life research; both are key topics I think about a lot in my professional life.

 


I haven’t fully digested his argument, which essentially uses a model he calls Predictive Processing (PP) to explain what the brain does and why. Many open questions remain, and Clark acknowledges early on that the specific details of his model may turn out to be wrong, but the overarching idea is that top-down predictive processes and bottom-up error-signaling processes work together in concert to home in on a best guess of any encountered situation. But this isn’t an isolated brain in a jar. Embodied action is a critical part of honing the process. I will quote parts that really struck me and muse about them briefly in a meandering way. Like a surfer perhaps. This may be fitting given the title of his book.

 

More than a decade ago, when I first encountered the notion of System 1 and System 2 thinking (made famous by Daniel Kahneman’s Thinking Fast and Slow), I was enamored by the idea. But over time I’ve found the separation a little too clean. Clark argues they are one multi-faceted system. We might “use some quick-and-dirty heuristic strategy to identify a context in which to use a richer one, or use intensive model-exploring strategies to identify a context in which a simpler one will do. The most efficient strategy is simply the (active) inference that minimizes overall complexity costs… system 1 and system 2… are now just convenient labels for different admixtures of resource and influence, each of which is recruited as circumstances dictate.” I have a feeling Clark is correct and that his emphasis on multi-timescale processes is a key part of how organisms do what they do. I don’t quite understand how the longer timescale ‘higher-level’ brain processes couple to shorter timescale sensory signals, but I suspect the dynamic coupling of such processes is the beating heart of life.

 

Thermodynamic terms show up in Clark’s treatise. There’s free energy minimization when the brain tries to be efficient and make a prediction at the lowest cost. It’s why we continue to make mistakes (and learn from them) as we encounter new situations or variations of what we thought were things we knew. Entropy is defined in terms of surprisal; when a prediction goes awry and we have an oops moment, this allows us to recalibrate. As a chemist, I define these terms differently, but I see a kinship between how I think about thermodynamics and what Clark is trying to do with them. However, having seen thermodynamic principles invoked in so many different areas, I see more and more muddied thinking that may introduce more confusion than clarity.
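
Since “surprisal” has a precise information-theoretic meaning, a tiny sketch (probabilities invented) may anchor the term: the surprisal of an outcome is -log2(p), and entropy is the surprisal you expect on average. The oops moments that force recalibration are the low-probability, high-surprisal ones.

```python
import math

# Surprisal of an outcome with probability p is -log2(p), in bits;
# entropy is the expected surprisal. The probabilities are invented.
def surprisal(p):
    return -math.log2(p)

forecast = {"sunny": 0.70, "rain": 0.25, "snow": 0.05}

for outcome, p in forecast.items():
    print(f"{outcome:5s}: p = {p:.2f}, surprisal = {surprisal(p):.2f} bits")

entropy = sum(p * surprisal(p) for p in forecast.values())
print(f"entropy (expected surprisal) = {entropy:.2f} bits")
```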

 

I very much appreciated Clark’s emphasis on perception and action being inseparable. He writes that they are “two sides of a single computational coin. Rooted in multilevel prediction-error minimizing routines, perception and action are locked in a complex circular causal flow… Percepts and action-recipes here co-emerge, combining motor prescriptions with continuous effort at understanding our world.” While I mostly thought of sensory signals as exteroception, I appreciated Clark’s reminder that proprioception and interoception are just as important, and our brain needs to make sense of all three incoming channels. This made me ponder how to include all three in origin-of-life modeling, and also how to structure the seeming digital-analog divide. Information is efficiently stored digitally, but the action of life is analog. I’m sure that different timescales are important here, but I haven’t figured out how these could or should be modeled.

 

In Chapter 6, “Beyond Fantasy”, Clark delves into the idea that “perception is controlled hallucination”. He thinks we should be circumspect about the notion that our brains and thoughts are akin to virtual reality. Action on our part is important to continuously update the “probabilistic prediction-driven learning… able to see past superficial noise and ambiguity in the sensory signal, revealing the shape of the distal realm itself.” But our brain has also evolved to be an efficient computing machine, and this means pruning out or ignoring a lot of the sensory stimuli to focus on what is salient. I’m reminded about the mystery of learning, especially when it comes to the nonintuitive subject of chemistry. When the aha moment occurs, it’s a gestalt experience. After that I can’t unsee what I now know. It also blinds me as a teacher through the curse of knowledge. It reminds me that I constantly have to work hard at teaching because things obvious to me are not obvious to students encountering it for the first time. I can provide helpful scaffolding but how one actually learns is still mysterious. And my learning needs to be continuously updated. I’m sure I have erroneous notions I’m still passing along to students, but they’re in my blind spot – and I won’t know until I’m surprised by them.

 

Uncertainty surfaces when you least expect it. Perhaps that’s the moral of the story.

Sunday, January 19, 2025

Space Sucks

When The Expanse begins, a Mars colony already exists and is known for its military prowess. Earth’s moon is an established rendezvous port that avoids the energy-costly gravity well. And the asteroid belt is active with space stations and mining for ores. When you hear enthusiasts discuss the near future of space exploration, this is what they’re imagining is within reach. But is it really? In their tongue-in-cheek book A City on Mars, Kelly and Zach Weinersmith, authors of Soonish, suggest it will be very, very difficult.


 

Why? Because, to put it bluntly, outer space sucks for us humans, used to the natural resources and the gravity well of Planet Earth. We evolved to live at one atmosphere of pressure, with abundant water and oxygen, surrounded by carbon-based food sources. These niceties will not be easily available on the moon, on Mars, or anywhere else in outer space. Behind the humor of their presentation is well-researched science. The basic issues that outer space travel and long-term survival pose for human physiology will probably not surprise you. We simply don’t know enough, and what we do know suggests there will be numerous obstacles. There’s even a section titled “Actually, the Whole Universe Wants You Dead” on radiation poisoning outside our home planet.

 

I already knew about bone loss and muscle atrophy from not living at 1 g. But I hadn’t thought about fluid shifts. Apparently, there’s something called “Puffy-Face Bird-Leg syndrome”. The fluid shift also causes vision problems. They provide the statistics: in short (less than two-week) missions, 23% of astronauts reported problems with close-up vision. In longer missions involving a stint on the International Space Station, it’s 50%. That’s ghastly. Apparently, the “best guess is that the upward fluid shift increases the pressure in your head, altering the shape of your eyeballs and the blood vessels that feed them.” Other questions include whether you can have sex in space and whether you can give birth in space; we’ll need to know this for long-term settlement. The answer: we don’t know. But one speculated theoretical solution involves a strange donut-shaped environment.

 

That was only a smidgen of Part I. Part II goes through the pros and cons of different possible locations; quoting the chapter titles conveys this nicely:

·      The Moon: Great Location, Bit of a Fixer-Upper

·      Mars: Landscapes of Poison and Toxic Skies, but What an Opportunity!

·      Giant Rotating Space Wheels: Not Literally the Worst Option

·      Worse Options

All I can say is that it’s interesting to think about the ecology (or lack thereof) of the different locations. You’ve really got to worry about the environmental issues, which are all there to kill you! But if you did manage to overcome these, there’s Part III, which worries about how to create and live in a terrarium. The authors argue that we should have spent a lot more money creating many different Biosphere-like experiments here on Earth instead of blowing money on prestige-inducing projects involving launching stuff out of our gravity well. We need some of the latter, but we need much more of the former. I found this section interesting because it made me think carefully about the inputs and outputs needed to sustain a complex system. That’s what a living organism is!

 

Until I read Part IV, I had never considered the legal and sociocultural issues related to space settlements. This part was fascinating and new to me. The authors explain the loose framework we have now and its problems. They also discuss in detail two other frameworks: the Antarctic treaties and the Law of the Sea (UNCLOS). The history and ongoing negotiation of these treaties are a lesson we should learn from. These community rules kinda, sorta work, but there’s always the lurking problem of “might makes right”. At several points, the authors caution us that the scramble for outer space real estate could well lead to more conflict here on our planet’s surface. I’m inclined to agree with them.

 

What I really liked about the book, besides the humorous and engaging approach, is that the question of outer space exploration and settlement is inherently very interdisciplinary. I could see a series of interesting linked college-level courses across the humanities, social sciences, and natural sciences, that could prompt students to think deeply and learn a lot! However, the bottom line still stands. Space sucks. But it’s fascinating to think about.