Tuesday, May 5, 2026

All About Maxwell

I am halfway through The Man Who Changed Everything, a biography of James Clerk Maxwell, written by Basil Mahon. As a physical chemist, I know something about Maxwell's scientific achievements. The Maxwell-Boltzmann distribution shows up in multiple places because chemistry is about the movement of zillions of tiny particles, meaning you have to apply statistical methods to bridge the microscopic world to the macroscopic phenomena that we large lumbering humans observe. Maxwell's 1873 paper titled "Molecules" is marvelous, showcasing his lucid writing and insightful thought; I have on occasion assigned it to first-year undergraduates along with light annotations to help them read along. I mentioned this in my very first blog post!
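For readers who haven't seen it, this is the Maxwell-Boltzmann speed distribution in its standard textbook form (my own aside, not something from Mahon's book), for particles of mass m at temperature T:

```latex
f(v) = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^{2}\, e^{-m v^{2}/2 k_B T}
```

It is exactly the kind of statistical bridge I mean: a single smooth curve summarizing the speeds of zillions of individual particles.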

 


What I didn't know much about, and am delighted to learn from Mahon's book, is who James Clerk Maxwell was as a person. I learned about his rural upbringing that made him initially stick out as a weirdo in a more urban school, and how his geniality, generosity and genius eventually won over his classmates. Although his mother died when he was young, he had a loving extended family, and in his early twenties, he devoted much of his time to caring for his ailing father. He was beloved by his friends, an occasional prankster, liked to exercise, and wrote poetry. Given his fame for conjuring mathematical relationships of physical phenomena, I was surprised to learn that he frequently made math mistakes in his derivations, but his scientific intuition was brilliant and almost always on the mark. A famous contemporary scientist said: "He is a genius, but one has to check his calculations."

 

Maxwell was also a devout Christian but did not get sucked into the many debates pitting science against religion. In declining to join an eminent society discussing such matters, he replied: "I think that the results which each man arrives at in his attempts to harmonise his science with this Christianity ought not to be regarded as having any significance except to the man himself, and to him only for a time, and should not receive the stamp of a society. For it is in the nature of science, especially those branches of science which are spreading into unknown regions, to be completely changing." Maxwell's views on the relationship between theory and experiment in science are also immensely quotable, for example: "I have no reason to believe that the human intellect is able to weave a system of physics out of its own resources without experimental labour. Whenever the attempt has been made it has resulted in an unnatural and self-contradictory mass of rubbish."

 

I confess that I never took a physics course in college or beyond. How I became a professor who teaches physical chemistry still amazes me. My weak physics background, and perhaps lack of effort to improve my mediocre mathematical ability, means that I don't really understand Maxwell's famous equations, although I do have the gist of their broader impact. I was heartened to read that Maxwell made a great effort to find physical analogies to explain seemingly mysterious phenomena such as lines of force. Even now, I find it challenging to think through the lens of a field approach, and I use Maxwell's ideas of fluid flow as a crutch to think about flux. Maxwell's analogy of potential difference and hydrostatic pressure is also helpful; I use it when I teach electrochemistry in General Chemistry. (In fact, I will use it in my class tomorrow morning!)

 

What jumped out at me in reading the account of Maxwell's struggle to derive a mathematical framework for Faraday's lines of force was the ability to bring together insights from one area of physics to solve another. The jumping-off point was a discovery by William Thomson (later Lord Kelvin), who found that the equations for the strength of electrostatic force looked similar to those describing the rate of steady heat flow. This seems odd: why would static equations resemble dynamic ones? But Maxwell made it work by imagining the flow of an "ideal" weightless incompressible fluid through pipes. I'm presently covering kinetics in my Physical Chemistry course, and was looking ahead at my lecture on molecular collision theory. With Maxwell on my mind, one of the equations looked suspiciously familiar. I flipped back to a lecture I had given in the second week of the semester on the Lennard-Jones potential energy curve (for two-body molecular interactions), and sure enough, the mathematical expression for the static temporary dipole attraction looked analogous to the rate equation in collision theory. Wow!
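In modern notation, the parallel Thomson spotted can be sketched like this (my own summary, not a formula quoted from Mahon): in a charge-free region the electrostatic potential obeys the same equation as the steady-state temperature in a uniform conductor, namely Laplace's equation:

```latex
\mathbf{E} = -\nabla\phi, \quad \nabla\!\cdot\!\mathbf{E} = 0 \;\Rightarrow\; \nabla^{2}\phi = 0
\qquad \text{vs.} \qquad
\mathbf{q} = -k\,\nabla T, \quad \nabla\!\cdot\!\mathbf{q} = 0 \;\Rightarrow\; \nabla^{2} T = 0
```

Same mathematics, two very different physical situations, which is why the "static" and "flowing" pictures can be mapped onto each other.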

 

I was impressed to read about the breadth of problems Maxwell tackled. His work on optics and colour vision, culminating in his famous colour triangle, is brilliant. He even devised spectacles for those with red-green colour-blindness. I did not know that Maxwell won a prestigious award for deriving mathematical equations to describe the conditions of stability of Saturn's rings. When tackling the possibility that the rings are a fluid rather than a solid, he showed they would break up into smaller entities. But how would a hodgepodge of particles maintain an orbit? Maxwell showed that such rings vibrate in different ways and could be stable at low enough average densities. When he considered multiple rings, "he found that some arrangements were stable but others were not: for certain ratios of the radii the vibrations would build up and destroy the rings." This reminds me of the remarkable Bohr orbits of quantum mechanics, where the electron orbiting the nucleus is treated as a standing wave to be stable.

 

Another surprising thing I learned was that despite his lucid and clear writing, Maxwell’s success in classroom teaching was mixed. Mahon writes: “For all his talents, he never mastered the technical part of teaching. He would prepare a lesson beautifully, do fine for a time while he stuck to his script, and then fly into analogies and metaphors which were intended to help the students but more often than not mystified them. He was not expert on the blackboard, where he made algebraic slips which took time to find and correct. And yet the students liked him and some found him truly inspiring… It seems paradoxical [for] such a fine scientific writer… as he believed fervently in the value of good education… Appreciating that people learn in different ways, he may have tried too hard to bring in helpful illustrations and analogies, confusing his audience with a welter of rapidly changing images… And perhaps he was too much of an idealist. All good teachers aim, as he did, to teach people to think for themselves, but most also recognize that all some students want is to gain a second-hand smattering of the subject so they can pass exams, and make a specific effort to help them succeed in this limited ambition. Maxwell never did.”

 

Those are sobering words for me as an educator who is also very excited about imparting chemistry to my students. I certainly try to give metaphors and analogies which I hope are helpful. Given my theoretical bent (a product of both my training and my interests), I have noticed that I now spend more time trying to impress upon my students the key frameworks on which my discipline builds its foundations. And I do this unprompted; it's not in my lecture notes. It's almost as if, like Maxwell, I can't help myself. I feel compelled to make those connections to the broader edifice of how chemists think about the world. One progresses from novice to expert by first glimpsing and then progressively seeing more clearly the abstract categories that undergird chemical knowledge. I pontificate more than I used to. When I first started teaching, I couldn't see some of the hidden frameworks; my focus was getting the students through the material in a systematic way that would (hopefully) give them the basics to solve chemical problems on an exam and prove they understood what I was trying to teach. I am still aware that the majority of students in my classes are interested in the "second-hand smattering of the subject so they can pass exams", and I make efforts to help them along, but I also want to truly inspire the minority to see the beauty and depth of chemistry. Maxwell cannot help me resolve this tension, but I am inspired by his efforts. I look forward to sinking my teeth into the second half of his biography!


Tuesday, April 21, 2026

Biochemistry Mishmash

I am slowly working my way through The Natural Selection of the Chemical Elements by Williams and Frausto da Silva. It's not the easiest book to read, but it approaches issues of biochemistry through an inorganic and evolutionary lens that I find helpful. I used one of their books in a class three years ago because our library had a digital copy.

 

Today's post is a mishmash of thoughts sparked by my reading of chapters 11-13 touching on the evolutionary organization of cells and the roles of different chemical substances. Since I study the chemical origins of life, I filter what I'm learning through that particular lens. From that perspective, the book's contents are idiosyncratic and generate more questions than they answer. But it gives me much to mull over.

 

Since the authors have a background in inorganic chemistry, the function of metal ions features prominently. The big change to the chemical environment was a redox shift from reducing to oxidizing conditions. We have plenty of O2 in our atmosphere today, but this was not so on the Hadean Earth. The progressive oxidation led to a decrease in the availability of some substances, particularly Fe(II) and sulfides, and an increase in others, with newcomers such as Zn and Cu becoming available, alongside a shift to complexity, symbiosis, and eventual multicellularity.

 

The final paragraph of chapter 13 begins: "The conclusion we have reached is that multicellular development was bound to increase in complexity as newly available elements were incorporated but could only do so by coexistence with simpler forms. Complexity is eventually self-defeating and the escape from this dilemma is only possible with an ecosystem of the simple and the complex." Biochemistry is a tinkerer, so the first sentence is not surprising. There is a mishmash of systems layered upon more primitive ones, palimpsests sometimes peeking through. The second sentence is provocative. Is it true? I don't know. But we do know that complex systems open the possibility of catastrophic system failure.

 

Things that jumped out at me:

(1) The evolution of life is all about kinetic traps. "Energized" molecules quickly dissipate energy in their thermodynamic progress towards the equilibrium state. But to get a system going that allows for control, kinetic traps are essential, as is the evolution of catalysts. Before central control emerged, persistence was about being trapped long enough or often enough.

(2) Vesicles and other compartments within the cell have chemical environments that can be very different from the cytoplasm and use messenger systems similar to the “outside” of a cell, calcium-based systems for example.

(3) The rates of phosphate versus thioester hydrolysis can vary greatly with pH and temperature. This may be a clue to a takeover of energy transduction from a thioester world to the modern one primarily using phosphate esters.

(4) The assertion that DNA codes qualitatively for proteins, but not quantitatively, is interesting. The quantitative aspects that require control and regulation were previously "set" by more primitive cells. Since I think a primitive metabolism predates nucleic-acid-coded information, this makes sense to me.

 

Things I need to ponder more: Why is negative feedback more prevalent than positive feedback in the evolution of living systems? Does it arise because living systems are thermodynamically semi-closed? I regularly tell my students in G-Chem II and P-Chem II that we study equilibrium thermodynamics because we can construct a model and its accompanying equations for closed systems. I contrast this to the non-equilibrium thermodynamics of open systems and use life as an example of staying alive by avoiding the equilibrium state. But an enclosed cell that tries to maintain some level of homeostasis along with growth and repair isn't completely open. It's very finicky about what goes in and what goes out, and what concentrations are maintained inside.

 

The authors also reminded me about the distinction between control and regulation: “Control acts at the level of metabolism and one part of it is concerned with the use of proteins including catalysts but not with their productions… Regulation acts at the level of gene and… was little altered from that in anaerobes by the development of multicellular organism…” From my slant, this suggests that control precedes regulation, at least on a local level. A protometabolic system evolves to control matter and energy. How? By tinkering! Why? I don’t know but it reminds me of the dictum: “What persists, persists. What does not, does not.”


Wednesday, March 18, 2026

Discovery Learning: ADOM Edition

This past weekend, I notched my fifth ADOM win with a Drakeling Elementalist who is now #2 on the high-score list. In preparation for today’s blog post, I also replayed a Tutorial game on a new installation to remind myself what tips the system provides to brand new players.

 

What I’ve been musing about is “Discovery Learning”, a buzz-phrase that leverages (in this case simple-minded) “common sense” thinking. In the extreme version, there is no formal schooling for kids. Let them explore and discover the world and learn “naturally” from nature. Natural – good! Artificial – Bad!

 

I don’t disagree that much can be improved about the seemingly artificial settings of today’s classrooms especially for kids who have lots of energy and are bouncing off the walls. But I don’t think Discovery Learning and doing away with formal schooling is the answer. It could work well for some people after they’ve had a decent foundation (acquirable in diverse ways). The media likes highlighting the college dropout who went on to found a tech company and become ridiculously wealthy. They don’t tell you about the tens of thousands of other dropouts who did not become billionaires or even millionaires.

 

In ADOM, the world of Ancardia has rules. The basics are provided in the manual, and I found the tutorial much clearer now that I have 70-80 games under my belt compared to the very first time when I was floundering around. Because I had experience with old-school CRPGs, ADOM wasn’t impenetrable, but many of the rules are “hidden”. I actually did okay getting to my first mid-game character within a dozen games purely through Discovery Learning. Characters die early and often in ADOM. I could sink hundreds or thousands of hours into the game and learn more nuances about staying alive and making further progress towards the end goal, or I could learn from experts who have already traversed the path. I chose the latter, and my enjoyment of the game increased by more efficiently getting over many of the otherwise frustrating barriers that would have killed dozens of characters.

 

The "natural" environment of ADOM is brutal. You might even say it is hostile to learning efficiently. While the tutorial gives you a "warning" when you first enter the Small Cave, you have no idea what that really means. And until you notice or understand how the hostile monsters are generated, you'll bang your head against the wall trying to get through. You have no sense of how the difficulty level scales. You encounter monsters you know nothing about. You might get cursed or doomed and not realize why it happened and what it means for your character. You don't know what talents or attributes are helpful and how they might be trained naturally. There is plenty that is hidden unless you know exactly what to look for. I'm not sure how many games it would have taken me to figure out that dropping a potion of water on a co-aligned altar blesses it, and that when you dip a scroll of identify into holy water, you can then read it (if your literacy is high enough) to identify all your items in a single swoop.

 

As a chemistry professor, the natural sciences and math are the areas I am most familiar with. Learning math or chemistry efficiently is very unnatural. If you had to figure it out from scratch, it might take you several lifetimes. (Also, failure is not always productive.) The accumulation of human knowledge has taken lifetimes – small bits of info passed down from teacher to student. The apprenticeship model has been around for a long, long time. It's far better than having to discover everything from scratch through trial and error, but this one-on-one learning is inefficient and very expensive on a larger scale. I don't like having forty students in G-Chem; I think I do a better job when I have five or ten or twenty. (At least it's not four hundred.) But I recognize the efficiency of teaching a group of students. They can also help and encourage each other, which is a plus in my opinion.

 

Becoming an expert requires depth of knowledge and acquiring abstract schemas in long-term memory. Without books and teachers and some very effortful thinking on my part, I would not have the expertise that I now have in chemistry. I can’t imagine getting there through pure discovery. Of course, here I’m caricaturing Discovery Learning, and an advocate would say that no one is promoting pure “throw you into the deep end of the pool and you sink or swim”. They’d say the learning has to be guided. I don’t disagree. But the same advocates caricature current classroom practices, especially what is known as “explicit teaching” as inferior to discovery approaches, or “lecturing” as an artifice and therefore worse than a more “natural” approach. In reality, one balances multiple aspects when considering pedagogical strategies.

 

My current ADOM character is a level 13 gnome druid. I just made it to the High Mountain Village, although I chose not to retrieve the waterproof blanket on the way because I understand how the Small Cave works. The fun in ADOM is that the dungeon layouts (and the game is a dungeon-crawler) are procedurally generated, so each game feels quite different. Your character's inherent skill set provides even more variation. I think this is my third druid (the previous two did not make it past level 10 before succumbing), and I've learned how to balance spellcasting with traditional weapons. I also now know that most animals are generated friendly, and switching my alignment to Lawful means that I have a reasonably good chance of completing the Rolf Quest and getting the ring of the master cat, provided I don't die in the Pyramid or somewhere else. The balance of some discovery and some guide-reading, in my case, has led to maximum enjoyment. I still do bits of both when I encounter something rare (statues and artifacts) or explore a different aspect of the game, and I wouldn't do this any other way.


Tuesday, March 10, 2026

Square Integrable

I am reading about the extraordinary math and science contributions of John von Neumann in Ananyo Bhattacharya's book The Man from the Future. I definitely get the feeling that von Neumann was indeed a rare genius. I also get the feeling that maybe I should have persevered in learning more math when I was younger. If so, not only would I have a better appreciation of von Neumann's achievements, I would also be able to tackle some interesting problems in my research that require mathematical modeling beyond my current abilities. Feynman's quote notwithstanding, I would like to better understand quantum mechanics since I use it heavily in my research.

 


Today’s blog post is about Chapter 3 of Bhattacharya’s engaging book. The chapter is titled “The Quantum Evangelist” and leverages the author’s physics background. While I know a number of facts about the history of the development of quantum mechanics, I learned a lot more about von Neumann’s contributions and the context surrounding his work. Reading this chapter gave me a better idea of the conceptual differences between Heisenberg’s matrix mechanics and Schrodinger’s wave mechanics. The connections to set theory in mathematics (and Hilbert’s program of systematization) helped clarify the context. Quoting the author: “An atom has an infinite number of orbits… so Heisenberg’s matrices must also be of infinite size to represent all possible transitions between them. The members of such a matrix can… be lined up with a list of the counting numbers – they are ‘countably’ infinite. Schrodinger’s formulation, on the other hand, yielded wave functions describing… an uncountably infinite number of possibilities. An electron that is not bound to an atom… could be literally anywhere.”

 

I now have a better appreciation of Dirac's "ingenious trick to merge the 'discrete' space of Heisenberg's matrices and the 'continuous' space of Schrodinger's waves" with the delta function. Bhattacharya describes it as a "salami slicer, cutting up the wavefunction into ultra-thin slivers in space". While Hilbert space still feels fuzzy to me and I don't quite comprehend it, I can dimly see where square-integrable functions come from. When I teach quantum chemistry, I tell students about this important property and its practical uses along with Born's probability postulate, but I had never talked about their mathematical basis (because I didn't understand it myself).

 

Where does von Neumann come into the story? Given his mathematical talents, he realized that square integrable functions "can be represented by an infinite series of orthogonal functions, sets of mathematically independent functions that can be added together to make any other… How much of each function is required is indicated by their coefficients... [which] were exactly the elements that appear in the state matrix." In my class, I invoke orthogonality as a consequence of Hermitian operators. I discuss the importance of having linearly independent functions and spaces (e.g. Cartesian space or polar coordinates) conceptually, but my students still struggle to think about it. Linear algebra is not a prerequisite for my class and most students haven't taken it. Neither have I, for that matter. Until reading this chapter, I had not realized the connection between square integrable wavefunctions and orthogonality. In my class, when we get to multi-electron multi-atom systems, I introduce students to manipulating linear combinations of functions that sum up (invoking the principle of superposition) to give better results when solving the Schrodinger equation. They learn that the sum of the squares of the coefficients must add up to one, but I hadn't made the connection to square-integrability.
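To make the connection concrete, here is my own compact summary of the chain of ideas in standard quantum chemistry notation (not quoted from Bhattacharya): a square-integrable wavefunction can be expanded in an orthonormal basis, and normalization forces the squared coefficients to sum to one:

```latex
\int |\psi|^2 \, d\tau < \infty, \qquad
\psi = \sum_n c_n \phi_n \;\;\text{with}\;\; \langle \phi_m | \phi_n \rangle = \delta_{mn},
\qquad c_n = \langle \phi_n | \psi \rangle,
```

```latex
\text{so that (Parseval)} \qquad
\int |\psi|^2 \, d\tau = \sum_n |c_n|^2 = 1 .
```

The rule my students learn, that the squares of the coefficients add up to one, is just the last equality.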

 

There is plenty more in the chapter about the weirder aspects of quantum mechanics, wavefunction collapse, hidden variable theory, pilot waves, Bell inequalities, and Many Worlds. But what really stood out to me was where square integrable functions come from (as part of Hilbert space) and how they connect to orthogonal component wavefunctions. All these connections were a revelation to me, and I've been teaching for a quarter of a century! How little I know. How much more to learn. This reminds me that I should get back to Beyond Weird by Philip Ball.


Monday, March 9, 2026

Cybernetics Informing Learning?

I stumbled across an interesting blog post connecting Ross Ashby’s principles of cybernetics to how one designs questions to probe student learning. I have some familiarity with the cybernetics principles for thermostat design; several years ago I was reading papers using this to analyze a complex prebiotic chemistry problem adjacent to one of my research projects. I had not, however, considered how this affects instructional design. Given that A.I. methods are heavily encroaching on education, I think the article highlights some of the potential pitfalls of a computerized system that supposedly personalizes learning.

 

The word "system" is important here. The blogger, Carl Hendricks, has this to say: "An instructional system can only regulate what it can detect and many learning environments rely on a channel of extremely low capacity: correct or incorrect [which] carries almost no information about process. It does not distinguish decoding from guessing, understanding from memorisation, reasoning from elimination." Prior to the present LLM burst, computerized learning systems relied on multiple choice questions (MCQs) or True/False questions. A subject matter expert designed these questions as a proxy to probe certain learning goals, usually atomized Taylorian-style. In the last decade, this morphed into "adaptive" systems that mixed-and-matched questions depending on whether a student got a given question right or wrong.

 

I think that expert-designed questions and answers for the computer-distance-online-learner can be effective to some extent. Writing good questions and answers is time-consuming and challenging. It's also why lazy me doesn't use exam MCQs. It's faster for me to write a short-answer question and then evaluate the student answers, i.e., the time it takes me to grade the student answers is less than the time it would take to design really good MCQs. I've tried getting the LLM to help generate good question-answer pairs but right now the results are low quality. I expect they will improve with time; I might even be able to train one on a limited chemistry corpus.

 

But while expertly-designed individual questions may be quite good, stringing them together in an A.I. “adaptive” system degrades that goodness. This is why after trying out some pre-LLM systems, I never selected the “adaptive” option. Hendricks mentions the drawbacks of not coming up with good questions and answers that really get at what you want the student to learn, and the additional problem of having a regulating control system that supposedly personalizes the learning. He writes that measuring such performance is “fragile” because “the system ensured that answers were right often enough, but it never ensured that the right thinking had occurred. [It is] informationally impoverished; and no amount of pedagogical enthusiasm can compensate…”

 

LLMs continue to push the personal tutor aspect; I should say A.I. tech companies are pushing heavily because they need revenue streams. Last month I found that the current LLMs do a better job generating chemistry questions and answers compared to previous ones. I see more nuance and better accuracy overall. And the "voice" of the LLM tutor leans heavily on trying to sound helpful, offering follow-up information and more. One thing I learned is that I can make an effort to sound more helpful when students ask me questions, so the LLM did help me improve in at least that one aspect. But the LLM doesn't always (and maybe not often) offer what might be best in actually improving learning. It's good at helping the student feel good. But that's because it was designed to do so. It wasn't designed to be an expert tutor.

 

A thermostat does just one thing – regulate the temperature by measuring the ambient value and then turning on (or off) the heating/cooling device. Even with its narrow purpose, the mechanics of designing a good thermostat are trickier than they look at first glance. The business of teaching and learning is more nebulously defined in purpose and certainly much more complex. Cybernetics may be a starting point to think about adaptive tutors but there is far to go before it will replace an actual human expert in terms of quality. Pessimistically, I predict that overhyped adaptive tutors will degrade the desired quality to a low common denominator. Hendricks writes: "Learner ingenuity will always exceed designer foresight; there will always be shortcuts that were not anticipated, strategies that were not mapped, paths that were left open by accident. Requisite variety is an asymptote, not a destination."
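To see why even the humble thermostat is trickier than it looks, here is a minimal sketch of a bang-bang controller with a dead band; the function name, the numbers, and the crude heat balance are all my own illustration, not anything from Hendricks's post:

```python
def thermostat_step(temp, setpoint, band, heating):
    """Bang-bang control with a dead band to avoid rapid on/off cycling."""
    if temp < setpoint - band:
        return True    # too cold: turn heating on
    if temp > setpoint + band:
        return False   # too warm: turn heating off
    return heating     # inside the dead band: keep the current state

# Simulate a drafty room: it loses 0.3 degrees each step, gains 1.0 when heating.
temp, heating = 16.0, False
history = []
for _ in range(50):
    heating = thermostat_step(temp, setpoint=20.0, band=0.5, heating=heating)
    temp += (1.0 if heating else 0.0) - 0.3   # crude heat balance
    history.append(temp)
```

Without the dead band the heater would chatter on and off every step near the setpoint; with it, the room settles into a gentle oscillation around 20 degrees. And this is a system regulating a single number through a rich, continuous measurement channel – compare that to regulating "learning" through a one-bit right/wrong signal.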

 

I’m reminded how amazing it is to learn something as a human being. I don’t pretend to know exactly how it happens especially in glorious moments of gestalt “aha!” understanding. Present neural networks underlying LLMs are not like our brain or our mind or our sense of self. As a computational scientist, I have some vague and wild ideas of how to improve on this. I’m sure others like me have such thoughts and hence I expect over time that LLMs will continue to improve. Whether they will eventually achieve the quality of the hype remains an open question.


Tuesday, February 17, 2026

The Feeling of Knowing

To err is human. To admit to erring… well, that’s difficult. Like most people, I don’t like the feeling of realizing that I’m wrong. I always think I’m right (I can’t help it!), but I don’t think I’m always right. Past experience confirms that I do err; it’s consistent but I can’t predict when it will happen. And I feel I’m right… right up to the moment that I’m proven wrong.

 


The subject of Wrong-ology is taken up by Kathryn Schulz in her book, Being Wrong. Our minds are funny things and the way we learn and remember things is much more complex and mysterious than we imagine. We think, as Plato suggested, that memory works like a wax tablet: “Everything you experience, from your own thoughts and sensory impressions to interactions with others, creates an imprint in that wax… an unchanging mental replica of the events of the past, captured at the moment they occurred.” This may contribute to that feeling of knowing, even when in gross error.

 

As a professor, I’m well practiced at professing. Students think you’re more knowledgeable and know what you’re talking about when you present the material confidently. That’s not hard to do because I feel that I know the material. Even when I don’t know it as well as I should, I still present it confidently. Fake it till you make it. Did I do so when I first started teaching? Was I more diffident back then? Honestly, I don’t remember. I’ve learned not to trust my memory even if I feel I can visualize it in my mind’s eye. My spouse provides me a very useful signal when I might be professing with confidence about something I know little about; she says: “You say that so confidently”. That gets me to chuckle, stop, check, and think.

 

Schulz discusses medical cases of brain issues where patients confidently describe or explain something with no correspondence to actual reality. And they seemingly believe it. This is known as confabulation. Here’s how Schulz describes it: “Imagine, by way of analogy, that each of us possesses an inner writer and an inner fact-checker. As soon as the writer begins devising a story, the fact-checker gets busy comparing it with the input from our senses, checking it against our memory, examining it for internal consistencies, thinking through our database of facts about the world, and, once we utter it, gauging other people’s reactions to assess its credibility… When the fact-checker falls asleep on the job, however, our theories about the world can become wholly unmoored from reality. All of us have experienced this, because the one time our fact-checkers reliably fall asleep is when we do, too. Think about dreams again for a moment, and about how weird even just the averagely weird ones can be… Now, two bizarre things are going on here. The first is that your brain is generating representations of the world that are only lightly tethered to the real, or even to the possible. The second is that you are completely untroubled by this fact.”

 

Being surprised is a good wake-up call to discovering error. You'd think that by now I would have gotten used to being surprised every time I err. But I am surprised every single time. Consistent, yet unpredictable in when it will happen. I'm heartened when Schulz writes that saying "I don't know" is a good sign of brain function because in some forms of dementia, the fact-checker falls asleep and confabulation ensues. It also turns out that being confabulatory is part of how the human brain works. It's an engine, possibly the engine, of creativity and imagination. I can think about and imagine things that are not real. I can make mental models of things that are abstract or invisible (which I must do frequently in thinking about chemistry). Our minds have adapted to come up with quick instinctive solutions, not always thought through, that do serve us well on many an occasion. The feeling of knowing allows us to act quickly when needed.

 

When grading exams, I still get surprised by the occasional confabulatory explanations of students. A student who has no idea what’s going on is nonetheless able to come up with a fantastical story, throwing together chemistry concepts completely untethered to reality. It doesn’t happen often, but it’s interesting for me to read these “answers” and try to imagine how a student came up with them. I wonder if that student had the confident feeling of knowing. But actually didn’t. Not knowing what you don’t know isn’t a great situation to be in.

 

Reading Being Wrong has made me a little quicker to say “I was wrong” in my classes when I make a mistake on the board and it is pointed out by a student. I apologize to the class, then thank the student for paying attention and being brave enough to tell me so that I don’t mislead the class any further. While it doesn’t happen often, I feel that as I age it has ticked up in frequency. I’m not as sharp as I used to be, perhaps. Or maybe I am less well prepared because I’m overconfident in the feeling of knowing, having taught the subject matter multiple times over a couple of decades. But if I actually learn something from my errors, that’s a good thing!


Tuesday, February 10, 2026

Rare Earth: P and N

The idea that Planet Earth is rare and special for being able to host life gets a very readable book-length argument in Rare Earth by Ward and Brownlee. Technically, it argues why complex life is rare, while simpler life may be achievable over a broader range of conditions. Given that we only have a sample size of one for life-harboring planets, who knows if life proliferates beyond our star? You can plug a range of numbers into the Drake equation to convince yourself either way.
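To see just how wide the range is, here is a quick sketch of plugging numbers into the Drake equation. The parameter values below are illustrative guesses of mine for the two camps, not figures from the book or any survey:

```python
# The Drake equation as a simple product of factors:
# N = R* . fp . ne . fl . fi . fc . L
# All parameter values below are illustrative, not measured quantities.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicative civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# "Optimistic" inputs: life and intelligence arise readily.
optimistic = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)

# "Pessimistic" (Rare Earth-flavored) inputs: complex life is hard.
pessimistic = drake(R_star=1, f_p=0.5, n_e=0.01, f_l=0.1, f_i=1e-4, f_c=0.1, L=1e3)

print(optimistic)   # on the order of 1e5: a crowded galaxy
print(pessimistic)  # far below one: effectively alone
```

Swapping a few optimistic factors for pessimistic ones moves the answer by roughly ten orders of magnitude, which is the point: the equation organizes our ignorance rather than resolving it.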

 

The idea that Earth sits in a Goldilocks habitable zone most often refers to whether water exists as a liquid on a planet’s surface. This assumes that liquid H2O is crucial to life, and that other liquids (ammonia, hydrocarbons, formamide) may not have the same versatility. It’s hard to say otherwise with a sample size of one. We also assume that carbon-based molecules are crucial for life, which is reasonable from a chemical point of view (diversity, bond energies, or as a carrier). That takes care of carbon, hydrogen, and oxygen. What about nitrogen and phosphorus?

 

The idea that our rare Earth may be rarer than we previously thought comes from a paper published last month in Nature Astronomy (DOI: 10.1038/s41550-026-02775-z). It examines core formation of rocky planets and estimates the availability of nitrogen and phosphorus under different conditions. The key factor is the redox state of the mantle. Our planet apparently sits in a zone that provides decent amounts of both N and P (although both are much less abundant than C, H, and O). It’s a tricky balance: if the redox situation is too reducing, the availability of P plummets; too oxidizing, and N might be lost to space through significant degassing. Simulated origin-of-life chemistry in the lab has always worked better under reducing conditions. Thus, the authors conclude: “there is a plausible case to make that only moderately oxidizing planets have both sufficient mantle P and sufficient reducing power to sustain prebiotic chemistry and then life”.

 

Why do we need N? Amino acids. Catalysts. Chemical versatility. We would not have fine-tuning of the thermodynamics and kinetics of (bio)chemical reactions without nitrogen. Why do we need P? I’m not sure. Arguments can be made for its crucial role in nucleic acids (Westheimer’s famous paper, “Why Nature Chose Phosphates”), but it’s possible some other backbone might work. Bioenergetic currency relies on phosphates today, but it’s possible that sulfur played the early role in energy transduction. Sulfate-reducing bacteria are also fierce competitors. I’d be very curious to learn what is known about sulfur availability during planetary formation; we know that volcanic outgassing on the early Earth is one source. And I’m also clearly biased, because I have a current research grant to study the role of sulfur in origins-of-life chemistry.
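As a concrete anchor for “bioenergetic currency” (my illustration, not something from the paper): the workhorse reaction is ATP hydrolysis, which under standard biochemical conditions releases on the order of 30 kJ/mol:

```latex
\mathrm{ATP^{4-} + H_2O \longrightarrow ADP^{3-} + HPO_4^{2-} + H^+},
\qquad \Delta G^{\circ\prime} \approx -30.5~\mathrm{kJ\,mol^{-1}}
```

A sulfur-based analogue would need a comparable and readily coupled free-energy step; thioesters are the usual candidate in origins-of-life proposals.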

 

A final tidbit from the paper that I enjoyed: the authors use oxygen fugacity as their redox measure. When I teach P-Chem, I try to pepper in examples of why the math-and-models are useful. Fugacity is one of those bewildering topics for students, because the math looks like a merry-go-round and having a hypothetical reference state seems strange. Now I can point to another example of why learning about fugacity is useful and interesting. The paper also makes reference to Mars and to exoplanets in the search for life in the universe.
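For the students, the compact version, as I would write it on the board (standard P-Chem notation, not taken from the paper): fugacity is the effective pressure that preserves the ideal-gas form of the chemical potential,

```latex
\mu = \mu^{\circ} + RT \ln\frac{f}{p^{\circ}},
\qquad
\varphi = \frac{f}{p}, \quad \varphi \to 1 \ \text{as} \ p \to 0 .
```

Oxygen fugacity is then the effective partial pressure of O2 that a mineral assemblage maintains at equilibrium, and geochemists typically quote its logarithm relative to a standard buffer, which is why it works so well as a single-number redox scale for planetary mantles.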