Thursday, November 20, 2025

Concreteness

As a theorist, I’m very comfortable thinking abstractly. I believe this has helped me gain expertise in my field, evidenced by being able to “see” the deep features when problem-solving, and not be distracted by surface-level features. I also believe that one of the biggest challenges of being a teacher in my area of expertise is the curse of knowledge. I cannot “unsee” the deep features of chemistry, but neither can I bestow my mind’s-eye-sight to the novice. It’s not a concrete gift I can give.

 

My job is to help move students along the path from novice to expert. I’m trying to help students see chemistry the way I do. I have a limited amount of time to carry out this task. Similarly, my students have a limited amount of time to learn a body of material before they take the final exam that assesses their knowledge. My strategy is to make sure students know definitions and problem-solving protocols. Then we go through several different examples where I try repeatedly to point out the common deep structure of the problem even though the surface features are diverse. One challenge is baseline knowledge. The expert has tons of it; the student has little. When you don’t know much, you can only grasp in vain at the surface features that seem more concrete, even though they are less important for solving a problem.

 

I’ve been wondering if, over the years, my theoretical bent has quietly asserted itself more and more in my pedagogy, favoring the abstract over the concrete. Yes, I do want students to be able to think abstractly. This is particularly important in chemistry where understanding what is happening in the tiny nanoscale regime requires abstract imagination. We can’t see atoms or wavefunctions or chemical bonds or dipole attractions. Chemists are always imagining what is invisible to the naked eye, because the heart of chemistry comes from making and breaking chemical bonds. I talk about balls and sticks and springs and waves. I ask students to imagine such entities. I draw graphs. I write equations. I try to include real-world tidbits of chemistry in the mundane that’s all around us. I find myself excited just thinking about such things. But they are all still in my mind’s eye.

 

Do the students see things the way I do? They can tell that I’m knowledgeable and enthusiastic about what I’m teaching (evidenced by comments in student evaluations). But this doesn’t mean they are learning how to think chemically. Humans haven’t been around for long, and for most of human evolution, learning has been visceral. Concrete. Physical. Not so much in the abstract realm. I think I need to bring more concreteness to the teaching and learning of chemistry, and counter my strong bent towards the abstract. Not that the abstract is unimportant – it’s still vital! – but I need to do better in helping novice students learn chemistry. The best science writers who are able to convey complex ideas to their novice readers employ the visceral in their language. Blood and guts and more. Your mind’s eye is not the only thing activated. You can almost feel, smell, and taste even though you’re just scanning words on a page. There’s a concreteness to it.

 

How will I remind myself to keep working on this? Over time I am likely to revert back to emphasizing the theoretical and abstract. Maybe I need a (small) concrete block. A physical brick might be sufficient. I could muse more about this topic, but instead maybe I should take the good first step of getting up off my chair and away from my computer and locate said concrete brick.


Thursday, October 23, 2025

Thermodynamic Warfare

Sometimes you just need an equation. Even if you’re writing a “popular” book where equations are discouraged. No, I’m not talking about E = mc² that shows up just to be associated with someone famous.

 


Karen Lloyd, the author of Intraterrestrials, is a superbly engaging writer. Her book is littered with well-chosen metaphors and analogies to explain how scientists study organisms hiding away deep in the subsurface of our planet. But I appreciate that the professor in her wants to teach her readers something useful and profound. She chose the Gibbs Free Energy equation:
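ΔG = ΔH − TΔS

That’s its standard textbook form: ΔH is the enthalpy change, T the absolute temperature, and ΔS the entropy change, and a process is energetically “downhill” when ΔG is negative.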

 


I explain this equation every year to my G-Chem 2 students when we discuss thermodynamics. Lloyd does so with much more flair. In chapter 6 (“Breathing Rocks”) she opens with her harrowing yet exhilarating experience of sampling for microbes at a volcano caldera in Chile. After the scenario of physical heat and motion (get your samples quickly so you don’t die!), she launches into the heat and motion associated with thermodynamics. She explains the Gibbs equation with colorful examples such as roller coasters and hand warmers. I could quote her for several paragraphs, but instead, I recommend you read her book in full. It’s a page-turner!

 

The crux is that subsurface organisms, unable to get their energy from the sun (like photosynthetic organisms) or eat food they can metabolize with oxygen (like most of us do), respire by breathing rocks. They eke out a low-energy lifestyle, turning carbon dioxide (from carbonate rocks) into biomass with the help of nitrogen and sulfur compounds, also found in minerals. Such chemical reactions typically have a small negative delta-G, so you can’t get much energy from them, but they are still energetically “downhill” and thus favorable.

 

But things get even weirder when there’s competition for resources. In chapter 7 (“Life on the Edge”), Lloyd sets up the discussion with another vignette in the cold of Svalbard, Norway, where she is cutting sediment cores dug up for her research. While doing so, she ponders life in the cold Arctic with tremendously varying sunlight. And now I have to quote her: “But intraterrestrials don’t care about sunlight or cold. They care about delta-G.” And unlike our familiar surface microbes that “secrete deadly antibiotics, hoard nutrients, and grow ultrafast to get ahead” and beat out the competition, subsurface microbes have an additional weapon: “If one microbe’s delta-G is better than another one’s, then the first microbe can asphyxiate the second.”

 

I was delighted that Lloyd chose sulfate-breathing microbes to illustrate her point since I’m studying the role of sulfur at the origin of life in competing autocatalytic cycles. She delves into the equation, now focusing on how delta-G can be modulated by Q, the reaction quotient. It’s counterproductive for these sulfate reducers to grow big fast because that decreases sulfate diffusion into their cells. Releasing antibiotics is also bad because it would kill symbiotic species in addition to direct competitors. Molecular hydrogen is a required “food”; you can’t stop your competitors from getting it, but you can hoard enough so that you still have a borderline negative delta-G, while forcing the delta-G of your competitors to turn positive (“uphill”, energetically unfavorable) so their metabolism no longer yields energy and they die.
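In textbook form, the Q-dependence is

ΔG = ΔG° + RT ln Q

Hoarding hydrogen, a reactant, lowers its concentration in the surroundings; that raises Q for a competitor’s reaction and nudges its delta-G toward positive.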

 

Lloyd writes: “Like a shipwrecked sailor dying of dehydration while surrounded by water, these microbes expire with their food right in front of them. Sulfate reducers win because they take the whole system to the bitter edge of their own thermodynamic capabilities, which pushes everyone else off the cliff.” Ugh. That’s war. But then, as the sulfate supply dwindles, the sulfate reducers themselves face extinction. As they die off, their competitors (often methanogens) can access more hydrogen once again, and a revolution takes place.

 

Life gets weirder still. Some microbes (methanogens!) can reverse their food and waste as delta-G switches, so they can keep eking out energy. Others ferment; in Lloyd’s words, a fermenter “takes one slice out of the pie and puts the rest back into the fridge for others to eat later. It’s very polite. Because of this restrained eating, fermentation ends up being one of the lowest-energy processes known to support life.” The low-energy living of intraterrestrials suggests that they might live a long, long time without reproducing. It’s immortality of a sort, though not the one we might desire.

 

Reading Lloyd’s book rejuvenated my excitement about my research projects. It also reminded me that I want to be a better teacher and communicator. While this was a library book, I will be purchasing my own copy because it deserves re-reading, and I still need to delve into the scientific papers listed in the references!

 

Tuesday, October 14, 2025

Optimizing Learning and Attention

“Learning is the slow, ponderous and beautiful Galapagos tortoise, and online content is the invasive predator which will inevitably drive it to extinction.”

 

This quote from Daisy Christodoulou’s article (“Why education can never be fun”) really struck me. Addictive online games and videos, she argues, only need to optimize in one dimension (fun!) while apps that might actually increase learning need to optimize in two dimensions (fun and learning). Sometimes the two parameters oppose each other. For many learners, this will be the case when the material involves math. Your most engaging learning app will never beat the app that only needs to optimize for holding one’s attention. She writes: “However much fun you make learning, someone else will use the same techniques minus the constraint of learning. You are in an arms race where you have one arm tied behind your back.”

 

The subject that I teach, chemistry, is hard. While there is some math involved in the introductory levels, what is more challenging is the abstraction of having to juggle three aspects simultaneously – the macroscopic, the submicroscopic, and the symbolic – known as Johnstone’s Triangle. Chemistry is abstract by nature. We’re trying to explain everything in terms of tiny things that we can’t see. Hence, we have to think about chemistry using models. No single model is complete in itself; expertise in chemistry involves fluidly moving amongst a panoply of such models. This is not easy for the novice learner.

 

As more tasks become facile with the aid of technology, we humans who outsource our thinking to such machines will become less adept at the basics. In many instances, that might be okay. I have no interest in going back to the stone age, and I’m glad I was not born a century ago. I like my technology-aided creature comforts. I even blog to offload some of my cognitive effort. But something is lost in the process. It’s okay if what I’m losing isn’t crucial, but if it’s something important such as basic numeracy, facility with language, concentration skills, or thinking deeply and actively, then this is a problem. It’s possible that humankind is heading towards a general idiocracy with a small number of elites controlling the levers. Or it could be worse – the oligarchs might just be richer, power-wielding members of the idiocracy.

 

Learning science and math is not easy, but I think it is important for understanding the world we live in. It will sometimes be a slog. No pain, no gain. As educators, we should try to make our subject matter interesting and relevant, but there is only so much you can do to gamify your course before running into the hard reality of actually learning difficult material. Passively consuming short-form videos from creators who are optimizing for eyeballs may give you a false sense that you know something. But that knowledge might be superficial at best, or possibly misleading, or simply wrong. And if you don’t have the basic knowledge, you won’t be able to tell when you’re consuming crap.

 

After thinking about this, I took a moment to think more carefully about some seemingly basic concepts in chemical bonding where there is much more than meets the eye. I needed to remind myself of things that I had read a year or two (or more) ago but have since forgotten because I haven’t practiced the effortful cognition needed to retain some of these ideas. But if I don’t keep making the hard effort, I will slowly but surely be joining the idiocracy and not even realize it.


Wednesday, October 8, 2025

Hermione's Handbag

The Nobel Prize in Chemistry was announced this morning! The winners were pioneers in constructing metal-organic frameworks (MOFs). These are three-dimensional porous materials made up of metal centers and organic molecule linkers that have a surprising amount of empty space. This empty space could be used for storage, gas uptake, and separating mixtures.

 

When I got to work this morning I made a slide, then gave a five-minute presentation to each of my three classes about these materials and why they were considered Nobel-prize worthy. I was also happy to tell them that an alum of our program (who was a student in a couple of classes) went on to a graduate program to work with one of the Nobelists, and then recently started a company based on these materials. I also mentioned one of my colleagues who was working on MOFs.

 

Then I told them about my brief MOF fling. My graduate program had a “proposal exam” where I had to write three proposals (two out-of-field) and defend them in a grilling oral exam. It was the late ‘90s and MOFs were a nascent field. I figured no one knew much about them, so I wouldn’t get grilled too badly, and I wrote about MOF research as one of my out-of-field proposals. I passed! I then parlayed that proposal into my research statement when applying for faculty positions. But after I started my position, I got distracted by other interesting projects and never ended up working on MOFs. Oh, well. I’m still happy working on self-assembly problems, even if they are not MOF related.

 

The mass media quickly picked up the announcement of the Nobel Prize awarded today. The moniker that has caught on to describe the work is “Hermione’s Handbag”. I also told students about this, which garnered a lot of smiles. For those who don’t know what this magical object is, Hermione (of Harry Potter fame), the cleverest witch of her age, had a handbag whose inside could hold much more than you might expect if you saw the small bag from the outside. An “extension” charm is supposedly used to do this, although how it works in practice is less than clear. Does it shrink all the materials when you put them into the bag? I’ve previously discussed the tricky chemical considerations of shrinking the size of atoms. Or does it give you access to some other adjacent space not part of the standard three dimensions we experience? Folded space maybe, as string theorists might postulate. Or maybe a wormhole to some other space… all speculation at this point. Space-filling might be an odd duck; or an Occamy.

 

MOFs are not like Hermione’s Handbag; they have no magical extension to occupy more volume than the size of their pores. What was surprising was how many more molecules they could uptake compared to zeolites, another porous material that had been known for decades. There was both early excitement and skepticism about MOFs in those early years. I kept up with the literature through the decade of the aughts, but not so much after that. I’m pleased that I actually know something about the content for this year’s prize, and I’m pleased that a Harry Potter magical object analogy was used, even if it’s wrong. It gives the sense of the surprise you might have in your first encounter with Hermione’s Handbag.


Tuesday, August 26, 2025

Hydrothermal Conditions

I’ve been conflating hydrothermal vents and fields. I realized this after reading Chapter 3 of David Deamer’s Assembling Life. Most of us who study the origins of life are familiar with the hypothesis that life on Earth may have begun in submarine hydrothermal vents. The initial discovery in 1977 – that life was teeming deep on the ocean floor where the water was locally hot around magma-driven minerals – was a surprise! The heat and the minerals both act as energy sources (thermal and redox-chemical, respectively). Since living organisms crave energy, they congregated to form a local ecosystem.

 


There are two types of hydrothermal vents. The first to be discovered were dubbed “black smokers” because that’s what the sulfide minerals look like under the very harsh conditions, where temperatures could reach 400 degrees Celsius. Water remains a liquid because, 2 km deep in the ocean, the pressure is approximately 300 atmospheres. The vent fluid is also quite acidic (pH 2-4), which can drive certain types of chemical reactions. Black smokers are transient, lasting up to a few hundred years before they collapse and reappear elsewhere along the mid-ocean ridge where the crust is thinner and underlying magma can break through.

 

Origin-of-life researchers have been more enthusiastic about “white smokers” because that’s what the carbonate minerals look like under the not-as-harsh conditions. Water temperatures might be 50-90 degrees Celsius, and the vent fluid is alkaline (pH 9-11), which also drives chemical reactions. These vents can last thousands of years, perhaps longer, and are not associated with volcanic activity. Their existence was predicted before they were discovered in 2000, and the most famous of these, Lost City, is sometimes referred to as a hydrothermal field, which confuses things and likely contributed to my conflating vent and field. Abiotic chemical reactions at these minerals under these conditions can generate methane and molecular hydrogen, important precursors for prebiotic chemistry experiments. Scientists have set up experiments mimicking these alkaline vents and produced some key molecules that may be the building blocks of organic life.

 

Deamer distinguishes hydrothermal fields from vents in the following way. In the submarine vents, there is only one interface: mineral-seawater. Fields, on the other hand, are exposed to the atmosphere. Instead of being deep in the ocean, they are terrestrial in origin. In the aftermath of a volcanic eruption, the minerals slowly transform such that eventually rainwater collects to form pools. Hot springs and geysers at Yellowstone National Park are an example of such fields. There are three interfaces: mineral-water, mineral-atmosphere, and atmosphere-water. Crucially, the water starts out as freshwater, and although it will dissolve some of the minerals, the ionic strength of the solution is much lower than in seawater. This is critical if you want to form cellular structures from lipid molecules. The high Ca2+ and Mg2+ content in seawater inhibits the self-assembly of micelles and vesicles.

 

Another important feature of such pools is that the acidic water (pH 4-5) of hydrothermal fields also dissolves apatite (calcium phosphate), the same mineral that makes up your tooth enamel. Phosphate is a key constituent of living systems: it’s in your DNA backbone, it’s crucial in the energy-transducing molecule ATP, and it’s also used as a biochemical tag in proteins. At neutral or alkaline pH, phosphate precipitates as a solid and is not available for chemistry in an aqueous solution; this is known as the “phosphate problem” in the origin of life. Sulfur compounds likely contribute to the acidity in hydrothermal field solutions, which is why I’m studying them.

 

Terrestrial pools of water have two other attractive features. Since they are not as deep, photosynthesis can play a role. By this I mean that an appropriate mix of molecules absorbing the solar photons that penetrate the atmosphere provides an additional energy source for driving chemistry. There’s even a pigment-world hypothesis of the origin of life that makes this center stage. Secondly, a shallow pool potentially allows for wet-dry cycling. This is important because as water evaporates, it concentrates the potential reactants in solution. In particular, evaporating conditions drive the assembly of polymers. If you’re deep in an ocean with lots of water, hydrolysis reigns and water chops any short polymers back into smaller fragments. The wet-dry cycling of a shallower pool, on the other hand, allows polymers to form and re-form, and a complex mixture could begin to “select” for the most robust ones.
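Schematically, joining two building blocks is a condensation reaction:

monomer + monomer ⇌ dimer + H2O

In bulk water the reverse reaction (hydrolysis) dominates, but let the pool dry out and the loss of water pulls the equilibrium toward the longer chains.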

 

Did life on Earth begin in hydrothermal fields (as opposed to vents)? I don’t know. Deamer makes an attractive case for the fields. But it’s a messy, complex system, and designing good experiments that allow you to extract good data is not easy. I’m thankful to Deamer for making the distinction between vents and fields explicit, and I expect to use his definition in the future.


Thursday, July 31, 2025

Hermione's Helping Hand

I’m on vacation and was inspired to re-read the Harry Potter series. This seemed like an appropriate time given that today, July 31, is Harry Potter’s birthday. Also, in a recent family conversation about the re-telling of fairy tales, we mused about the different experiences you might have with an “updated” fairy tale, or one that takes a different perspective from an original source, depending on whether you had read the original version. I remember back in 2001 talking to a friend who had watched the first Harry Potter movie in the cinema, but who had not read the books beforehand. Being from a different country, he had also not been exposed to the Western canon of fairy tales. He enjoyed the movie, but found it a bit disjointed, and was confused about what some scenes were about.

 

So, I wondered what it would feel like to re-read the books with some of these thoughts in mind. I have to admit that the first book is not as good as I remembered. That being said, every fresh re-reading rewires how one thinks about the text, so perhaps all this is not surprising. I found the text clunky in some parts, possibly because I have not read any fiction catering to eleven-year-olds in a while. Another thing I noticed this time around is how much signposting the author uses to set up a future scene. Is it a helping hand for younger readers? I don’t know.

 

My reading is also coloured by my profession as an educator. I’m constantly noticing what may be “teachable moments”. This time around, Hermione’s nagging, her drawing up study schedules for Harry and Ron, her checking of their work, made me wonder if students today need more Hermiones. It may not seem cool, but having a friend and peer who wants you to do well academically and makes the effort to help, even when it seems like being a nag, might be a good thing. That sort of helping hand might not be welcome, but in this case, Ron and Harry greatly benefit from it. Once Harry gets on the Quidditch team and his timetable gets tight, it’s Hermione’s strategies that get him through the end of the year and final exams.

 

The title of today’s post comes from Book 6. In that instance, the beneficiary is Ron, but the help is particularly un-Hermione-esque. And throughout the books, the influence runs both ways. I’d like to think that Ron and Harry learn good study habits with Hermione’s help, but this aspect isn’t emphasized. If Hermione weren’t there to nag them, would Ron and Harry be diligent in their classes? Rather, when Hermione decides to stray from her straight-laced approach and become more “rebellious”, this is what’s celebrated. I’m not sure what the lesson is here. (For example, I previously blogged about Hermione organizing an illegal study group.)

 

Finally, an observation made by my sister after she had read the books has stuck with me. One of Hermione’s roles is to help provide information to the reader. In the first book, Hermione does so by quoting books she has read such as Hogwarts, A History. This keeps the story moving along without being bogged down. Need a factoid to keep things going? The Hermione character provides a way to insert knowledge. Other characters in the book also do this, but none as much as Hermione. Her helping hand is integral to the books!


Sunday, July 13, 2025

On Not Reading

To read or not to read. That is the question. Not having read a book in no way prevents you from talking about it. Or if you feel obligated to skim, ten minutes might be enough. It might even be preferable not to read at all if you are a book critic. This advice sounds positively blasphemous if you love reading and talking about books. But it does come packaged in a witty and humorous book by Pierre Bayard, aptly titled How to Talk About Books You Haven’t Read.

 


I’ve written about many books on this blog. I assure you I’ve read all of them. I even read most of Bayard’s, but I did skip a few chapters and skimmed others. I think the author would be proud of me. On the one hand, the book made me think that literary criticism is an absolutely vacuous activity. On the other hand, Bayard emphasizes the non-static nature of a book. Read or not, it provides a jumping-off point for talking about opinions, ideas, musings, and speculations, and for engaging in other human-like activities. It seems apt that books, read or unread, can promote the idealism of the humanities. Or it might just be a load of rubbish.

 

Ideas are two-faced. Janus-like. That was my biggest takeaway from Bayard’s musings. Two people can have completely different ideas when encountering some reading material, especially if they differ greatly in their backgrounds. There’s a most amusing chapter cherry-picking conversations that an anthropologist has with the Tiv tribe in Africa, where she tries to tell them about (or perhaps sell them on) the universal human tale of Hamlet. The Tiv may disagree with the typical literary interpretations you might encounter in a college classroom, but they interact with the story nevertheless as they ridicule its tropes. I have never read Hamlet although I know enough of the story to quote from it.

 

The other interesting idea comes in the very first chapter with quotes from a book that I hadn’t heard of, The Man Without Qualities by Robert Musil. In it, there is a most peculiar librarian who pointedly never reads any book in the library other than its table of contents, so that the book can be situated with the other books it is related to. An exasperated patron wants to know why. The librarian says that were he to read the actual book, he might “lose perspective”. That sounds preposterous, but it turns out the librarian in fact loves all books, so much so that this love “incites him to remain prudently on their periphery, for fear that too pronounced an interest in one of them might cause him to neglect the others.” By taking a step back and having a more expansive view, he treats the dynamic relationship between books as more important than any one book’s particular content. It’s holistic knowledge, with preservation of the whole taken to the extreme.

 

Most books have not been read by most people. And if you do read a book, you begin to forget the moment you start reading. I find this to be more and more true as I’ve aged. I retain the gist of books, stories, TV shows, movies, but I’ve forgotten the details. If enough time has passed, I can’t even tell you the gist. I could consult my blog to reacquaint myself with what I thought of it back when I read it the first time, but a second reading might induce a different response. I’m a different person now than when I first read the book and may interact with it differently as my constellation of ideas has shifted over time. But I don’t think I will ever be like Musil’s librarian. I love the pleasure of reading a book even if it means I miss out on others. Or even re-reading. Since skimming Bayard’s book, I have a hankering to re-read the Harry Potter series. Fresh eyes might provide more fodder for my blog!


Saturday, July 5, 2025

In Search of Nothing

Nature abhors a vacuum. At least on the surface of Planet Earth, which supports a gaseous atmosphere at a pressure of 760 mm Hg. How do we know this number? One of Galileo’s students, Torricelli, turned a tube of mercury upside down into a bowl of mercury. As long as the tube is more than 760 mm long, there will be a gap of nothing at the top. It’s not an air gap. It’s a gap of Nothing.
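A quick back-of-the-envelope check: the pressure exerted by that mercury column is

P = ρgh ≈ (13,600 kg/m³)(9.8 m/s²)(0.760 m) ≈ 1.0 × 10⁵ Pa

which is one atmosphere – the weight of the air pressing down on the bowl.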

 

Torricelli was actually looking for the mystical aether, the sacred material breathed by the gods, the fifth element, the quintessence. Supposedly it “allowed light from the stars to propagate” and was “also holding planets in their orbits”. I’m learning about this history by reading through Mark Miodownik’s It’s a Gas. It seemed Torricelli had finally isolated the aether, a quest of the alchemists, some of whom thought it was associated with the philosopher’s stone that would balance the four humours and cure all illnesses. Perhaps it could even prevent death. No wonder that Voldemort coveted it.

 




The trick to creating a vacuum is to pump all the air molecules out of a closed container. That container must be truly air-tight. No leaks! Miodownik writes: “We take the accuracy and intricacy of screws, gaskets and valves for granted today. In the seventeenth century such precision engineering was just beginning.” What shot the vacuum to fame was the famous demonstration at Magdeburg by Otto von Guericke. He didn’t use the chemical techniques of the alchemists. He just used mechanics to make an airtight pump. Once the air was pumped out of two hemispheres cupped together into a sphere, held together by no other means, two teams of eight horses each could not pull the hemispheres apart.

 

What are the properties of Nothing? Now that scientists could reliably make it, they could start running tests. No living thing survived. (Oxygen was yet to be discovered.) Sound does not travel through a vacuum, although light does, and magnetism is unaffected. It turns out that metal wires will glow hot in an enclosed vacuum tube when a voltage is applied, and voila! Electric lighting is invented! Even if the wire breaks, you can sometimes get electricity to flow. (Electrons leap across, but they didn’t know that yet!) This led to vacuum tubes. And now you have TV. Once you’ve mastered manufacturing silicon chips in vacuum conditions, you have computers and all manner of smart devices. Who would have anticipated that Nothing would be so important!

 

Miodownik also relates the now-familiar story of the discovery of the noble or inert gases. They upended Mendeleev’s Periodic Table. It took painstaking evidence to show that they existed. They weren’t just Nothing even though they seemed to have no chemical reactivity. How were the noble gases discovered? Rayleigh was unhappy with the imperfections in the masses of the chemical elements. They almost followed a beautiful mathematical pattern, but not quite, and so he decided to measure their masses again with high precision. This is much harder than it sounds. You needed to create a vacuum in a flask and weigh it, then pipe the gas in and weigh it again. But the pressure, temperature, and humidity of the room can affect this measurement. You needed to more than triple-check everything. Most scientists didn’t believe Rayleigh, even after Ramsay provided an independent confirmation. Eventually argon was joined by helium, neon, krypton, xenon and radon. Chemistry’s 1904 Nobel Prize went to Ramsay for his discoveries. And eventually scientists and engineers found uses for all these gases that at first glance did Nothing!


Sunday, June 29, 2025

Gaslight

As an urban kid, my first encounter with the will-o’-the-wisp was through literature. In the dreary journey to Mordor in The Lord of the Rings, Frodo and Sam make their way through the Dead Marshes; Gollum warns them not to follow the lights that can lead them astray. For Harry Potter readers, the equivalent creature is the hinkypunk. Wikipedia defines it as an “atmospheric ghost light… dancing or flowing in a static form, until noticed or followed, in which case they visually fade or disappear”. Today, their lore has been transformed into the kid-friendly version of the jack-o’-lantern at Halloween.

 

We now know something about the chemistry that causes this luminescence – oxidation reactions involving methane and phosphines, gases released in marshland from decaying organic matter. Air currents play a role in the wispy behavior. While I’ve never seen this myself – I don’t visit marshlands in poorly lit areas – it is apparently spooky-looking. In his latest book, It’s a Gas, author Mark Miodownik shares historical writings about such spooky observations. I learned that a Major Blesson, from Napoleon’s army, did some early experiments to figure out what was going on and concluded that the wisps were “caused by flammable gas bubbling up from the bottom of the marsh”.

 

Living in an age of electric lights in urban areas, I have scant notion of what it would really be like to have experienced what most of humanity knew when the sun went down. Darkness. Danger. And the fear of not being able to see what might be lurking nearby. (Yes, I could go camping in some remote area to see the stars, but I like my creature comforts.) Miodownik discusses the “anatomy of a flame” from a wood fire, and how careful observations led to mass production of charcoal and tar. Folks also discovered that the invisible gas released was explosive: methane. In the marshes we can thank anaerobic microorganisms that eat carbon dioxide and poop methane.

 

Enter the scientists and engineers: Could methane gas be used to light the streets and households of urban areas? Can we shoo away the dark and eliminate the spooky? It was also a safety issue. You might fall into a cesspool or get mugged. In 1801, the inventor Philippe Lebon rigged a system for a hotel in Paris. According to Miodownik, “so marvelous was the spectacle of will-o’-the-wisps flickering away around every corner that the public happily paid three francs to enter and see the wonderland he had created.” But that first system didn’t catch on. It was the stink. Not from odorless methane, but from small amounts of hydrogen sulfide that were naturally part of the gas mix. British engineers eventually figured out how to remove the stink: one step in their refining process involved bubbling the gas mixture through lime water (calcium hydroxide solution), which reacts with the acidic hydrogen sulfide.

 

Storing gas was a tricky business. You had to compress it. Then you had to release it at the right pressure to get optimal lighting while avoiding too many fumes from incomplete burning. Then there was the problem of gas leaks. Today, a tiny amount of methanethiol, a compound very similar to hydrogen sulfide, is added so our noses can detect the smell of a gas leak. Within a mere 25 years of Lebon’s demonstration, every large town in Britain had gaslight. Eventually gaslight was replaced by electric light as science marched onward.

 

The word gaslight has returned to our vocabulary in the twenty-first century. As women entered the workforce in ever-increasing numbers and began to vie for positions in leadership, boorish men took to “gaslighting” them. Miodownik relates that the phrase comes from a 1938 play titled Gaslight, in which a conniving husband tries to manipulate his wife into thinking she is insane “by dimming the gaslights in their home, and when she notices, he claims the lights are not dimmer – it is all in her mind.” In a former age, gaslight illuminated. Now it obscures. What will it do in tomorrow’s age?


Sunday, June 22, 2025

Educating AI

One reason my blog writing has fallen off the past year – I’m ambivalent about bots scraping my data to train AI models. But honestly, I’m not that great a writer, and it’s not like the bots are mining gold. I just need to get over myself and keep sharpening my writing practice, be it on this blog or elsewhere.

 

I just finished reading The Alignment Problem by Brian Christian. While the issues of AI ethics and the dangers posed by advanced AI are the main themes, what I spent time mulling over was comparing the educating of AI with the educating of human students. There are differences between human brains and machine learning neural networks, but the bigger difference is the wetware of the entire human body-organism, which cannot be separated into dry hardware and software.

 


Christian launches the historical story with Skinner’s behaviorism, Turing’s computing machines, and the neuron assembly of McCulloch and Pitts. (I didn’t know Pitts was such an enigmatic character until reading this book!) This is the framework of reinforcement learning. The reward hypothesis states that “all of what we mean by goals and purposes [is essentially] the maximization of the cumulative sum of a received scalar reward”. Shoot for the high score! Not surprisingly, Atari and other early video games were utilized in the training process. (I also learned that Montezuma’s Revenge, a game I played in the 1980s, is particularly tricky for an AI to get good at and represented some sort of gold standard.) What made the world pay attention was when AI beat grandmasters at Chess and Go.

 

I appreciate Christian going through the challenges of any training method. (He also carefully distinguishes reinforcement learning from supervised and unsupervised learning.) These include the problem of the terseness of a scalar reward or punishment, compounded by a delay in knowing that a much earlier blundering move may have cost the game. It turns out “reinforcement learning is less like learning with a teacher than learning with a critic. The critic may be every bit as wise, but is far less helpful.” There’s an interesting story about the “dopamine puzzle” that leads to a learning model (known as temporal difference learning) in which what’s really being tracked is the “error in its expectation of future rewards”.
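To make that idea concrete for myself, here is a minimal sketch of temporal-difference learning on a toy random walk; the setup, names, and numbers are my own illustration rather than anything from the book.

    import random

    # Toy TD(0) value learning on a 5-state random walk.
    # State 4 is terminal and pays a reward of 1; every other step pays 0.
    N_STATES = 5
    ALPHA = 0.1   # learning rate
    GAMMA = 0.9   # discount factor

    values = [0.0] * N_STATES   # estimated value of each state

    for episode in range(2000):
        state = 0
        while state != N_STATES - 1:
            # Move right half the time, stay put otherwise.
            next_state = min(state + random.choice([0, 1]), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # The TD error: how wrong was the expectation of future reward?
            td_error = reward + GAMMA * values[next_state] - values[state]
            values[state] += ALPHA * td_error   # nudge the estimate toward the surprise
            state = next_state

    print([round(v, 2) for v in values])

The “critic” here is just that one scalar td_error; no teacher ever points out which move was the blunder.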

 

The most interesting part for me was Chapter 5 (“Shaping”) on the Problem of Sparsity. Essentially, “if the reward is defined explicitly in terms of the end goal, or something fairly close to it, then one must essentially wait until random button-pressing, or random flailing around, produces the desired effect. The mathematics show that most reinforcement-learning algorithms will, eventually, get there…” but it’s inefficient and takes too darn long. The solution is to put together a Curriculum. That’s what we do as human educators. I break down the learning of chemistry into steps; I set tasks for the students; I try to motivate them; and there’s a rewards system in terms of points and a final grade. But creating the right incentives in AI training turns out to be quite tricky. Specifying certain steps along the pathway often does not have the desired outcome. Evolution has had hundreds of millions of years to shape humans, dolphins, elephants, and octopi, all naturally intelligent creatures among many others.

 

Can you get beyond external reinforcement strategies? Can you build intrinsic curiosity into a computer? Can you value novelty? There are some clever tricks to do this. OpenAI (now famous for ChatGPT) is profiled for its early efforts working on Atari-arcade-like games. Can we learn from how humans and apes learn? Can computers learn through imitation? Do they learn the same way? I learned that human children in some situations over-imitate compared to chimpanzees; “children are from a very young age, acutely sensitive to whether the grown-up demonstrating something is deliberately teaching them, or just experimenting.” Why does this work? It “allows the student (be it human or machine) to learn things that are hard to describe.” The OpenAI folks managed to get an AI to beat Montezuma’s Revenge by having it watch YouTube videos of many human players.

 

This may be why taking students through worked examples, then letting them try simpler problems, before adding complexity to a more sophisticated problem is a pedagogical approach that works well, at least for the subject of chemistry. Many of these principles came from folks doing research into teaching and learning math. There’s also a tricky balance between intrinsic and extrinsic motivational approaches. It’s not that one always works better than the other. I’m not sure that final grades, which I assign based on numerical scores, are the best value function for most of my students to strive towards. I understand that grades loom large for increasingly stressed students in what they perceive to be a global cutthroat career market. My generation did not experience the pressures they are facing now. With AI nipping at their heels as a competitor, the business of educating AI may be existential for them, even if they don’t realize it yet.


Thursday, May 22, 2025

Exploring ADOM

I’m thirty years late to Ancient Domains of Mystery, more commonly known by its acronym ADOM. Created by Thomas Biskup in 1994, it is a computer role-playing game (CRPG) that is often described as Rogue-like. There’s an irreverent but informative video that discusses this issue amidst a high-speed playthrough of the ASCII version of ADOM. While there are spoilers, they go by much too fast for you to remember any of them, so I wouldn’t worry about it. I’ve never played the original Rogue so I have no basis for comparison. The three features that stood out to me are that it is turn-based, the dungeons are procedurally generated, and there is permadeath – when your character dies, the game immediately records it and all you can do is restart from scratch with a new character.

 

Most of my experience with computer games was for a half-decade in the mid-to-late 1980s, sometimes referred to as the golden age of CRPGs, with plenty of different designs and new spaces to explore. I was first hooked by Ali Baba and the Forty Thieves and got through Ultima V before life took over. In the last year, however, I have been rediscovering the cousins and close descendants of those early games. I’ve been surprised by delightful obscure gems such as Antepenult alongside well-known old classics such as the first Might & Magic.

 

The old CRPGs required patience. You had to grind your way through lots of fights to earn experience, gold, and better weapons and armor. You had to level up your spellcasters to access more powerful destructive and protective magic. The baddies and bosses got harder. You needed special items to access special areas. I don’t have the same patience now as I did forty years ago, but I am enjoying the discovery aspects of ADOM. There’s a huge world to explore (mostly underground since it is a dungeon-crawler) with tons of different items. You don’t know what’s around the next corner so you’d better be prepared to fight or run. There’s a sweet satisfaction with surviving a nail-biting encounter, discovering a strange new space, or coming across an item you’ve never seen before.

 

I think I’m on my tenth or twelfth character in ADOM, and only the second to make it to the mid-game stage. (You die early and often, which is part of the exploration.) My previous troll fighter reached level twelve and made it to Dwarftown but got killed shortly after, somewhere in the Caverns of Chaos. I currently have a hurthling archer who has successfully completed many early quests and whom I’m now quite attached to, so I am save-scumming (allowing me to restart if something bad happens or my character is close to death). What’s a hurthling? It’s like a halfling, hobbit, or bobbit. If you recognize any of those names, you’ll know that CRPGs borrow heavily from Tolkien and the D&D derivatives of his world. In ADOM, mithril gear is better than regular gear made of wood, leather, or iron. It’s also lighter – and carrying weight matters! But there’s also adamantium and eternium. ADOM has no problem mixing genres.

 

While I had no idea what I was doing in the first several games, experimenting with different objects and strategies, now that I have mid-level characters, I would prefer to not go back to the beginning and grind my way anew. (I’ve always had Fate decide all aspects of my starting character rather than picking my own stats.) So alongside my natural experimenting within the game world, I’ve also started referring to Internet resources (such as the ADOM Guidebook). In addition, I’m watching my way through a very entertaining play-through on YouTube, where I’m pacing myself episode-wise so my character is roughly at the same level. I’m learning that ADOM is even bigger than I thought, and there are things you can do that I hadn’t even considered. I suppose I’m learning from the large community of those who had gone before me. Long-term players have played thousands of games over the years and built up a lore of knowledge.

 

It’s a bit (or perhaps a lot) like science – discovering the natural “laws” of the world around you, what you can do and what you can’t do. There’s the trial and error approach which I used early on, and there’s learning from the community of those who have trialed-and-errored a lot more and are sharing the results of their labor. Finally, there might be folks who have looked at the code and can say something about the “hidden” underlying rules. This is how we humans learn. Many before us have experimented directly, and the fruits of discovery have been passed down to us so we don’t have to reinvent the wheel. As a teacher, it’s an integral part of my job to pass down this knowledge in my field of expertise, which is chemistry. One thing I convey to students is that we can come up with abstractions to understand the underlying rules of chemistry. This includes mathematical models that powerfully allow us to make predictions of what will happen in a different situation; it’s like discovering the source code of nature!

 

Yes, I could try to “enjoy” grinding my way through ADOM with no outside references. But I think the exploration is enhanced by tapping into the wisdom of the community while being careful to avoid spoilers. Without it, I think I would just give up – ADOM is a hard and unforgiving game. But games also allow you to explore the paths not taken, and ADOM’s many different starting characters and strategies, and its multiple endings (from what I’ve gleaned without looking at any of them in detail), provide a certain satisfaction. The procedurally-generated dungeons add to the game’s high replay value. For someone with my old-school 1980s CRPG background, ADOM provides an exploration experience at its finest. Warts and all. My character recently grew horns due to increased background corruption. That was yet another recent surprise with reaching the mid-game!


Thursday, May 8, 2025

Shapeshifting Vine

I’ve been thinking about plants and photon-absorbing pigments having recently read a speculative and interesting origin-of-life article that suggests animals might be gardeners co-opted by plants. Last year, I read about flavor molecules and poisons, which are part of the suite of secondary metabolites released by plants. Right now I’m reading The Light Eaters by Zoe Schlanger, a fascinating look into cutting-edge and controversial research in botany. Do plants “scream in pain” when we pluck a leaf or break a twig? Do they then warn their neighbors with signaling molecules that danger is nearby? Can they listen to sounds? Are they “conscious” in their own way, different from humans or octopi? These are interesting questions, and Schlanger delves deep into the research. Her writing is also thoroughly engaging, aimed at the non-expert, reminding me of Ed Yong’s superb book. Who would have thought botany would be so exciting!

 


Chapter 8 discusses the “chameleon vine”: Boquila trifoliolata. I’d never heard of it before. It is native to Chile and has some interesting cousins in Asia. This vine of a plant is an actual shapeshifter. Not like the lizard chameleon that can only change its colors. Not like the leafy seadragon that has evolved to look like the water plants where it spends its time. Not like many examples of adaptation in nature that take generations. Boquila does its shapeshifting in real time. But this is plant time, measured in hours or days; too slow for us impatient humans who prefer to marvel at it with time-lapse photography. It has reshaped its leaves to mimic dozens of other plants, some of which look very different from each other. Sometimes the details are astonishingly close; sometimes the match is poorer. How does this happen?

 

There are two prevailing theories; there is some experimental evidence for each, but scientists are still in the throes of figuring out what’s going on. That’s what the cutting edge of science looks like. The theories may sound wild. One is the “plants have vision” hypothesis. Plant leaves have plenty of light-sensitive molecules, not just ones used in photosynthesis. Maybe the vine sees its neighbor and mimics it. The other gives primacy to microorganisms; the shared space may allow microorganisms to exchange information (horizontal gene transfer) and to move from one plant to another, which may then translate into building leaves that look identical. Both ideas sound crazy when you first hear them, but there is some evidence for each. Not enough to gain widespread acceptance. Science is conservative, for good reason. You don’t discard an established theory that was built up from earlier evidence unless the new evidence against it is overwhelming.

 

I don’t know which of the two theories I lean towards. However, both ideas have pushed me to think more about my own research. Since getting into origin-of-life research, I’ve started to pay close attention to microorganisms, bacteria and archaea. Since starting to teach biochemistry, I’ve been marveling at the world of metabolites – plants and fungi are amazing in this regard where secondary metabolism is concerned. I’m starting to see conjugated pi-systems show up in origin-of-life related molecules, and I’ve started to read up on how to analyze photochemical reactions using computational chemistry. There’s something intriguing about the interplay of photons and chemistry that could be the key to why we have dynamic systems, building up molecules and breaking them down, going along the flow of the second law of thermodynamics yet diverting it to one’s own ends. That last phrase might sound speculative and crazy too.

 

This brings me back to pondering the nature of the boggart, the subject of one of my early posts when I started this Potions for Muggles blog. We now have a plant boggart of sorts, able to shapeshift but seemingly limited to mimicry. While the Harry Potter books do mention interesting plants and their properties, this seems subdued compared to Fantastic Beasts and Where to Find Them. We miss the wondrous nature of plants because they seem so unlike us. Boquila would have been of great interest to Professor Sprout, and I could see her working closely with a Potions Master to delve into the subtle connections between plants and their secondary metabolites that would go into potions!


Thursday, April 17, 2025

Spontaneity, Reversibility, Equilibrium

Last week, right after my P-Chem II class, a bright student came up to ask me to help clear up an issue comparing reversible processes that actually move a process forward (you might expect a change in free energy or overall entropy) and equilibrium (you expect delta-G to be zero). Since we only have ten minutes in between when one class gets out and the next one comes into the room, I did what many professors do – I gave a handwaving explanation. I said something about an overall system being at equilibrium macroscopically (equal rates of forward and reverse reactions), and separated that from a “reversible” process where you’re moving something along via infinitesimal steps. One’s the overall “system”, the other is looking at a series of steps for a specific process, I said. Clearly, or maybe obtusely, I was hedging.

 

Part of the issue here is that in real life you’d never run a process in infinitesimal steps because it would take an infinite amount of time. Essentially nothing is changing if each step takes eternity. I did tell students that, in practice, no process is truly reversible in this infinitesimal-step sense if in fact it actually takes place. Students understand this issue of practicality. But since P-Chem II is calculus-infused, we can do the math by taking limits. In particular, we go to the limit of the infinitesimal step and use d’s instead of deltas. This leads to beautifully simple expressions for the mechanical work or heat transfer in idealized “reversible” cases, which we compare to “irreversible” cases that are “less efficient”. In the reversible case, work in equals work out if you did this ideally. In the irreversible case, you get less work out compared to what you put in. Most students are satisfied by all of this, but for this student, there was a bee in her bonnet, and rightly so.
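The standard comparison I have in mind is the isothermal expansion of an ideal gas from V1 to V2:

w(reversible) = −nRT ln(V2/V1)
w(irreversible, against a constant external pressure) = −p_ext (V2 − V1)

The reversible path extracts the maximum possible work; any real, finite-rate route gets you less.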

 

Actually, I don’t know if she found my hedging answer satisfactory. I should ask her. I was unimpressed by my own answer even though it was practical in the interest of time (she had to run to another class). In fact, I told her that I needed to be more careful in how I defined the terms “equilibrium” and “reversible process” and that sometimes I’m not careful enough. I provide the students with the formal definitions in our lecture notes, but I slide my way from one term to another in the “heat of the moment” when I’m trying to be dynamic and lively in class. (“Heat” is another of those tricky terms.) In class I only look sporadically at my own notes. Sometimes I remember to emphasize something to watch out for, and other times I simply forget.

 

In my quest to figure out how to not confuse future students, I started scouring the primary literature for inspiration. The article I’ve found most useful thus far is by John Norton, titled “The impossible process: Thermodynamic reversibility” (Studies in History and Philosophy of Modern Physics 2016, 55, 43-61). It provides a historical slant and includes many examples where much more famous physicists have also elided their way through. Now I don’t feel so bad about my quick handwave. I think I can tighten up my definitions a little better. And I need to spend a little time talking about the issue of using calculus to take the limit for infinitesimal steps. Many chemistry students are a little calculus-phobic, so I try not to emphasize the mechanics of calculus and instead concentrate on the chemistry. But I’m reminded that I need to be more careful in this regard. Norton also points out that on the molecular scale, thermal fluctuations make taking this ideal calculus limit problematic. In a big-picture Mack view, all this might be okay, but tiny Mike would protest that there’s a problem! (Mack and Mike represent macroscopic and microscopic views.)

 

I also need to be very clear when I use each of these terms: reversible, equilibrium, and spontaneous. I make a very big deal (multiple times) in both my G-Chem and P-Chem classes that thermodynamic spontaneity has nothing to do with how fast a reaction might take place. All it tells you is which way the reaction is likely to proceed absent any external intrusions on the system. Thankfully, we’re on the verge of changing our G-Chem textbook to, in my opinion, a superior one that excises the confusing term “spontaneity” and instead uses “thermodynamic favorability”. I’m all in favor of that change. I’ll just have to remind students to be careful when they encounter “spontaneity” on the internet because, sadly, that’s where many of them go to look up things rather than their textbook.

 

I think I should stop using the term “reversible” in G-Chem as a thermodynamic definition. I should limit myself to discussing forward and reverse reactions (in the kinetic sense), that both occur, and that if one waits long enough eventually the rates of the forward and reverse reactions are equal. That’s when dynamic equilibrium is reached. If the change in system free energy or the change in the entropy of the thermodynamic universe is zero, then the system is overall at equilibrium. That’s it. No need to belabor the point.
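The one-line bookkeeping that ties it together:

ΔG = ΔG° + RT ln Q, and at equilibrium ΔG = 0 with Q = K, so ΔG° = −RT ln K

Favorability, the reaction quotient, and the equilibrium constant all live in that single relation.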

 

In P-Chem I’m considering using the term quasi-reversible to emphasize that taking the infinitesimal limit is actually an impossible situation at the molecular level. Perhaps I should always say quasi-reversible process. This may help emphasize the distinction between the macroscopic system as a whole (which may or may not be at equilibrium) and considering a specific process in getting from one state to another state. I’m not sure I want to go into the language of “a series of connected equilibrium states” since this muddies the waters. Since I don’t use a textbook in P-Chem, I can just change all the notes that I provide students to tighten up these definitions. I will restrict using the word equilibrium to the usage I mentioned above in G-Chem. When I get to the stat mech version of discussing equilibrium, I will focus it on the equilibrium constant as a ratio of the number of product molecules versus reactant molecules, while reminding the students that the state of being at equilibrium is a macroscopic description. There will be a tricky part when I get to transition-state theory in thinking about the transition state as a quasi-equilibrium state; not sure how to handle that terminology-wise. We’ll see how this all works out the next time I teach P-Chem II.