Tuesday, April 21, 2026

Biochemistry Mishmash

I am slowly working my way through The Natural Selection of the Chemical Elements by Williams and Frausto da Silva. It’s not the easiest book to read, but it approaches issues of biochemistry through an inorganic and evolutionary lens that I find helpful. I used one of their books in a class three years ago because our library had a digital copy.

 

Today’s post is a mishmash of thoughts sparked by my reading of chapters 11-13, touching on the evolutionary organization of cells and the roles of different chemical substances. Since I study the chemical origins of life, I filter what I’m learning through that particular lens. From that perspective, the book’s contents are idiosyncratic and generate more questions than they answer. But it gives me much to mull over.

 

Since the authors have a background in inorganic chemistry, the function of metal ions features prominently. The big change to the chemical environment is a redox shift from reducing to oxidizing conditions. We have plenty of O2 in our atmosphere today, but this was not so on the Hadean Earth. The progressive oxidation led to a decrease in the availability of some substances, particularly Fe(II) and sulfides, but led to the increase in others, with newcomers such as Zn and Cu becoming available, alongside a shift to complexity, symbiosis, and eventual multicellularity.

 

The final paragraph of chapter 13 begins: “The conclusion we have reached is that multicellular development was bound to increase in complexity as newly available elements were incorporated but could only do so by coexistence with simpler forms. Complexity is eventually self-defeating and the escape from this dilemma is only possible with an ecosystem of the simple and the complex.” Biochemistry is a tinkerer, so the first sentence is not surprising. There is a mishmash of systems layered upon more primitive ones, palimpsests sometimes peeking through. The second sentence is provocative. Is it true? I don’t know. But we do know that complex systems open the possibility of catastrophic system failure.

 

Things that jumped out at me:

(1) The evolution of life is all about kinetic traps. “Energized” molecules quickly dissipate energy in their thermodynamic progress towards the equilibrium state. But to get a system going that allows for control, kinetic traps are essential, as is the evolution of catalysts. Before central control emerged, persistence was about being trapped long enough or often enough.

(2) Vesicles and other compartments within the cell have chemical environments that can be very different from the cytoplasm and use messenger systems similar to the “outside” of a cell, calcium-based systems for example.

(3) The rates of phosphate versus thioester hydrolysis can vary greatly with pH and temperature. This may be a clue to a takeover of energy transduction from a thioester world to the modern one primarily using phosphate esters.

(4) The assertion that DNA codes qualitatively for proteins, but not quantitatively, is interesting. The quantitative aspects that require control and regulation were previously “set” by more primitive cells. Since I think a primitive metabolism predates nucleic-acid-coded information, this makes sense to me.
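The kinetic traps in point (1) can be made concrete with a back-of-the-envelope Arrhenius estimate (my own sketch with illustrative, made-up barrier heights, not numbers from the book): a modest increase in the activation barrier stretches the lifetime of an “energized” state enormously, even though thermodynamics still favors decay.

```python
import math

# Arrhenius estimate: k = A * exp(-Ea / (R * T))
# Illustrative numbers only -- not data from the book.
R = 8.314    # gas constant, J / (mol K)
T = 298.15   # room temperature, K
A = 1e13     # pre-exponential factor, 1/s (typical order of magnitude)

def rate_constant(Ea_kJ_per_mol):
    """First-order decay rate constant for a given activation barrier."""
    return A * math.exp(-Ea_kJ_per_mol * 1000 / (R * T))

for Ea in (50, 80, 110):  # barrier heights in kJ/mol
    half_life = math.log(2) / rate_constant(Ea)
    print(f"Ea = {Ea:3d} kJ/mol -> half-life ~ {half_life:.2e} s")
```

Each extra 30 kJ/mol multiplies the lifetime by roughly exp(30000/RT), about a factor of 10^5 at room temperature, taking these toy numbers from tens of microseconds to weeks. That is the sense in which a molecule can be “trapped long enough.”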

 

Things I need to ponder more: Why is negative feedback more prevalent than positive feedback in the evolution of living systems? Does it arise because living systems are thermodynamically semi-closed? I regularly tell my students in G-Chem II and P-Chem II that we study equilibrium thermodynamics because we can construct a model and its accompanying equations for closed systems. I contrast this to the non-equilibrium thermodynamics of open systems and use life as an example of staying alive by avoiding the equilibrium state. But an enclosed cell that tries to maintain some level of homeostasis along with growth and repair isn’t completely open. It’s very finicky about what goes in and what goes out, and what concentrations are maintained inside.

 

The authors also reminded me about the distinction between control and regulation: “Control acts at the level of metabolism and one part of it is concerned with the use of proteins including catalysts but not with their productions… Regulation acts at the level of gene and… was little altered from that in anaerobes by the development of multicellular organism…” From my slant, this suggests that control precedes regulation, at least on a local level. A protometabolic system evolves to control matter and energy. How? By tinkering! Why? I don’t know but it reminds me of the dictum: “What persists, persists. What does not, does not.”


Wednesday, March 18, 2026

Discovery Learning: ADOM Edition

This past weekend, I notched my fifth ADOM win with a Drakeling Elementalist who is now #2 on the high-score list. In preparation for today’s blog post, I also replayed a Tutorial game on a new installation to remind myself what tips the system provides to brand new players.

 

What I’ve been musing about is “Discovery Learning”, a buzz-phrase that leverages (in this case simple-minded) “common sense” thinking. In the extreme version, there is no formal schooling for kids. Let them explore and discover the world and learn “naturally” from nature. Natural – good! Artificial – Bad!

 

I don’t disagree that much can be improved about the seemingly artificial settings of today’s classrooms especially for kids who have lots of energy and are bouncing off the walls. But I don’t think Discovery Learning and doing away with formal schooling is the answer. It could work well for some people after they’ve had a decent foundation (acquirable in diverse ways). The media likes highlighting the college dropout who went on to found a tech company and become ridiculously wealthy. They don’t tell you about the tens of thousands of other dropouts who did not become billionaires or even millionaires.

 

In ADOM, the world of Ancardia has rules. The basics are provided in the manual, and I found the tutorial much clearer now that I have 70-80 games under my belt compared to the very first time when I was floundering around. Because I had experience with old-school CRPGs, ADOM wasn’t impenetrable, but many of the rules are “hidden”. I actually did okay getting to my first mid-game character within a dozen games purely through Discovery Learning. Characters die early and often in ADOM. I could sink hundreds or thousands of hours into the game and learn more nuances about staying alive and making further progress towards the end goal, or I could learn from experts who have already traversed the path. I chose the latter, and my enjoyment of the game increased by more efficiently getting over many of the otherwise frustrating barriers that would have killed dozens of characters.

 

The “natural” environment of ADOM is brutal. You might even say it is hostile to learning efficiently. While the tutorial gives you a “warning” when you first enter the Small Cave, you have no idea what that really means. And until you notice or understand how the hostile monsters are generated, you’ll bang your head against the wall trying to get through. You have no sense of how the difficulty level scales. You encounter monsters you know nothing about. You might get cursed or doomed and not realize why it happened and what it means for your character. You don’t know what talents or attributes are helpful and how they might be trained naturally. There is plenty that is hidden unless you know exactly what to look for. I’m not sure how many games it would have taken me to figure out that dropping a potion of water on a co-aligned altar blesses it, and that when you dip a scroll of identify into holy water, you can then read it (if your literacy is high enough) to identify all your items in one fell swoop.

 

As a chemistry professor, the natural sciences and math are the areas I am most familiar with. Learning math or chemistry efficiently is very unnatural. If you had to figure it out from scratch, it might take you several lifetimes. (Also, failure is not always productive.) The accumulation of human knowledge has taken lifetimes – small bits of info passed down from teacher to student. The apprenticeship model has been the norm for a long, long time. It’s far better than having to discover everything from scratch through trial and error, but this one-on-one learning is inefficient and very expensive on a larger scale. I don’t like having forty students in G-Chem; I think I do a better job when I have five or ten or twenty. (At least it’s not four hundred.) But I recognize the efficiency of teaching a group of students. They can also help and encourage each other, which is a plus in my opinion.

 

Becoming an expert requires depth of knowledge and acquiring abstract schemas in long-term memory. Without books and teachers and some very effortful thinking on my part, I would not have the expertise that I now have in chemistry. I can’t imagine getting there through pure discovery. Of course, here I’m caricaturing Discovery Learning, and an advocate would say that no one is promoting pure “throw you into the deep end of the pool and you sink or swim”. They’d say the learning has to be guided. I don’t disagree. But the same advocates caricature current classroom practices, especially what is known as “explicit teaching” as inferior to discovery approaches, or “lecturing” as an artifice and therefore worse than a more “natural” approach. In reality, one balances multiple aspects when considering pedagogical strategies.

 

My current ADOM character is a level 13 gnome druid. I just made it to the High Mountain Village, although I chose not to retrieve the waterproof blanket on the way because I understand how the Small Cave works. The fun in ADOM is that the dungeon layouts (and the game is a dungeon-crawler) are procedurally generated, so each game feels quite different. Your character’s inherent skill set provides even more variation. I think this is my third druid (the previous two did not make it past level 10 before succumbing), and I’ve learned how to balance spellcasting with traditional weapons. I also now know that most animals are generated friendly, and switching my alignment to Lawful means that I have a reasonably good chance of completing the Rolf Quest and getting the ring of the master cat, provided I don’t die in the Pyramid or somewhere else. The balance of some discovery and some guide-reading, in my case, has led to maximum enjoyment. I still do bits of both when I encounter something rare (statues and artifacts) or explore a different aspect of the game, and I wouldn’t do this any other way.


Tuesday, March 10, 2026

Square Integrable

I am reading about the extraordinary math and science contributions of John von Neumann in Ananyo Bhattacharya’s book The Man from the Future. I definitely get the feeling that von Neumann was indeed a rare genius. I also get the feeling that maybe I should have persevered in learning more math when I was younger. If so, not only would I have a better appreciation of von Neumann’s achievements, I would also be able to tackle some interesting problems in my research that require mathematical modeling beyond my current abilities. Feynman’s quote notwithstanding, I would like to better understand quantum mechanics since I use it heavily in my research.

 


Today’s blog post is about Chapter 3 of Bhattacharya’s engaging book. The chapter is titled “The Quantum Evangelist” and leverages the author’s physics background. While I know a number of facts about the history of the development of quantum mechanics, I learned a lot more about von Neumann’s contributions and the context surrounding his work. Reading this chapter gave me a better idea of the conceptual differences between Heisenberg’s matrix mechanics and Schrodinger’s wave mechanics. The connections to set theory in mathematics (and Hilbert’s program of systematization) helped clarify the context. Quoting the author: “An atom has an infinite number of orbits… so Heisenberg’s matrices must also be of infinite size to represent all possible transitions between them. The members of such a matrix can… be lined up with a list of the counting numbers – they are ‘countably’ infinite. Schrodinger’s formulation, on the other hand, yielded wave functions describing… an uncountably infinite number of possibilities. An electron that is not bound to an atom… could be literally anywhere.”

 

I now have a better appreciation of Dirac’s “ingenious trick to merge the ‘discrete’ space of Heisenberg’s matrices and the ‘continuous’ space of Schrodinger’s waves” with the delta function. Bhattacharya describes it as a “salami slicer, cutting up the wavefunction into ultra-thin slivers in space”. While Hilbert space still feels fuzzy to me and I don’t quite comprehend it, I can dimly see where square-integrable functions come from. When I teach quantum chemistry, I tell students about this important property and its practical uses along with Born’s probability postulate, but I had never talked about their mathematical basis (because I didn’t understand it myself).
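For my own notes, the two properties this paragraph gestures at can be written compactly (standard textbook statements, not quotes from Bhattacharya):

```latex
% Square-integrability: the wavefunction has a finite norm,
% which is what lets Born's postulate normalize probabilities
\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx < \infty

% Dirac's delta "salami slicer": it picks out the value
% of a function at a single point
\int_{-\infty}^{\infty} \delta(x - a)\, f(x) \, dx = f(a)
```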

 

Where does von Neumann come into the story? Given his mathematical talents, he realized that square integrable functions “can be represented by an infinite series of orthogonal functions, sets of mathematical independent functions that can be added together to make any other… How much of each function is required is indicated by their coefficients... [which] were exactly the elements that appear in the state matrix.” In my class, I invoke orthogonality as a consequence of Hermitian operators. I discuss the importance of having linearly independent functions and spaces (e.g. Cartesian space or polar coordinates) conceptually but my students still struggle to think about it. Linear algebra is not a pre-requisite for my class and most students haven’t taken it. Neither have I for that matter. Until reading this chapter, I had not realized the connection between square integrable wavefunctions and orthogonality. In my class, when we get to multi-electron multi-atom systems, I introduce students to manipulating linear combinations of functions that sum up (invoking the principle of superposition) to get better results when solving the Schrodinger equation. They learn that the sum of the squares of the coefficients must add up to one, but I hadn’t made the connection to square-integrability.
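The connection can be checked numerically in a few lines (my own illustration, not from the book): build a normalized function from an orthonormal basis, recover the coefficients by projection, and confirm that the squared coefficients sum to one. I use the particle-in-a-box eigenfunctions sqrt(2/L) sin(n pi x / L), which are orthonormal on [0, L].

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]

def basis(n):
    # Particle-in-a-box eigenfunctions: orthonormal on [0, L]
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# A trial wavefunction: a superposition of the n=1 and n=3 states
psi = 0.6 * basis(1) + 0.8 * basis(3)

# Square-integrability / normalization: the integral of |psi|^2 is 1
norm = np.sum(psi**2) * dx

# Recover each coefficient as an overlap integral with the basis function
coeffs = [np.sum(basis(n) * psi) * dx for n in range(1, 6)]
sum_sq = sum(c**2 for c in coeffs)

print(f"norm = {norm:.4f}, sum of squared coefficients = {sum_sq:.4f}")
# Both come out ~1: 0.6^2 + 0.8^2 = 1, as the superposition picture requires.
```

Orthogonality is doing the work here: the projection onto basis(2), basis(4), and basis(5) comes out essentially zero, so only the coefficients actually present in the superposition survive.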

 

There is plenty more in the chapter about the weirder aspects of quantum mechanics, wavefunction collapse, hidden variable theory, pilot waves, Bell inequalities, and Many Worlds. But what really stood out to me was where square integrable functions come from (as part of Hilbert space) and how they connected to orthogonal component wavefunctions. All these connections were a revelation to me, and I’d been teaching for a quarter of a century! How little I know. How much more to learn. This reminds me that I should get back to Beyond Weird by Philip Ball.


Monday, March 9, 2026

Cybernetics Informing Learning?

I stumbled across an interesting blog post connecting Ross Ashby’s principles of cybernetics to how one designs questions to probe student learning. I have some familiarity with the cybernetics principles for thermostat design; several years ago I was reading papers using these principles to analyze a complex prebiotic chemistry problem adjacent to one of my research projects. I had not, however, considered how this affects instructional design. Given that A.I. methods are heavily encroaching on education, I think the article highlights some of the potential pitfalls of a computerized system that supposedly personalizes learning.

 

The word “system” is important here. The blogger, Carl Hendricks, has this to say: “An instructional system can only regulate what it can detect and many learning environments rely on a channel of extremely low capacity: correct or incorrect [which] carries almost no information about process. It does not distinguish decoding from guessing, understanding from memorisation, reasoning from elimination.” Prior to the present LLM burst, computerized learning systems relied on multiple choice questions (MCQs) or True/False questions. A subject matter expert designed these questions as a proxy to probe certain learning goals, usually atomized Taylorian-style. In the last decade, this morphed into “adaptive” systems that mixed-and-matched questions depending on whether a student got a given question right or wrong.
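Hendricks’s “channel of extremely low capacity” can be made concrete with Shannon entropy (my own back-of-the-envelope sketch, not from his post): a right/wrong signal carries at most one bit per question, no matter how rich the thinking behind the answer. The five response categories below are hypothetical, echoing his decoding/guessing/memorisation/reasoning/elimination list.

```python
import math

def entropy_bits(probs):
    # Shannon entropy of a discrete outcome distribution, in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Binary correct/incorrect channel: at most 1 bit per question,
# and less whenever outcomes are unbalanced (e.g. easy questions).
print(entropy_bits([0.5, 0.5]))   # 1.0 bit -- the best case
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits

# A hypothetical richer channel distinguishing five response processes
# (guessed / memorized / reasoned / eliminated / decoded) could carry
# up to log2(5) bits per question:
print(math.log2(5))               # ~2.32 bits
```

The point is not the exact numbers but the ceiling: no amount of adaptive cleverness downstream can recover process information that the right/wrong channel never transmitted.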

 

I think that expert-designed questions and answers for the computer-distance-online-learner can be effective to some extent. Writing good questions and answers is time-consuming and challenging. It’s also why lazy me doesn’t use exam MCQs. It’s faster for me to write a short-answer question and then evaluate the student answers, i.e., the time it takes me to grade the student answers is less than the time it would take to design really good MCQs. I’ve tried getting the LLM to help generate good question-and-answer pairs but right now the results are low quality. I expect they will improve with time; I might even be able to train one on a limited chemistry corpus.

 

But while expertly-designed individual questions may be quite good, stringing them together in an A.I. “adaptive” system degrades that goodness. This is why after trying out some pre-LLM systems, I never selected the “adaptive” option. Hendricks mentions the drawbacks of not coming up with good questions and answers that really get at what you want the student to learn, and the additional problem of having a regulating control system that supposedly personalizes the learning. He writes that measuring such performance is “fragile” because “the system ensured that answers were right often enough, but it never ensured that the right thinking had occurred. [It is] informationally impoverished; and no amount of pedagogical enthusiasm can compensate…”

 

LLMs continue to push the personal-tutor aspect; I should say A.I. tech companies are pushing it heavily because they need revenue streams. Last month I found that the current LLMs do a better job generating chemistry questions and answers compared to previous ones. I see more nuance and better accuracy overall. And the “voice” of the LLM tutor leans heavily on trying to sound helpful, offering follow-up information and more. One thing I learned is that I can make an effort to sound more helpful when students ask me questions, so the LLM did help me improve in at least that one aspect. But the LLM doesn’t always (and maybe not often) offer what would best improve actual learning. It’s good at helping the student feel good. But that’s because it was designed to do so. It wasn’t designed to be an expert tutor.

 

A thermostat does just one thing – regulate the temperature by measuring the ambient value and then turning on (or off) the heating/cooling device. Even with its narrow purpose, the mechanics of designing a good thermostat are trickier than they look at first glance. The business of teaching and learning is more nebulously defined in purpose and certainly much more complex. Cybernetics may be a starting point to think about adaptive tutors but there is far to go before it will replace an actual human expert in terms of quality. Pessimistically, I predict that overhyped adaptive tutors will degrade the desired quality to the lowest common denominator. Hendricks writes: “Learner ingenuity will always exceed designer foresight; there will always be shortcuts that were not anticipated, strategies that were not mapped, paths that were left open by accident. Requisite variety is an asymptote, not a destination.”
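Even the “one thing” a thermostat does takes some care to get right. Here is a minimal sketch of my own (with made-up setpoints and heating/cooling rates): a naive controller that switches exactly at the setpoint chatters on and off every cycle, so real designs add a hysteresis band, a small built-in tolerance for error.

```python
def thermostat_step(temp, heating, setpoint=20.0, band=0.5):
    """Bang-bang control with hysteresis: the heater turns on below
    setpoint - band and off above setpoint + band; inside the band
    the previous state persists, which prevents rapid chattering."""
    if temp < setpoint - band:
        return True
    if temp > setpoint + band:
        return False
    return heating

# Tiny simulation with made-up heat-gain/heat-loss rates per time step
temp, heating = 18.0, False
for _ in range(50):
    heating = thermostat_step(temp, heating)
    temp += 0.3 if heating else -0.2
print(f"final temp ~ {temp:.1f} C, heater on: {heating}")
```

The room temperature settles into a bounded oscillation around the setpoint rather than landing exactly on it. If even this narrow regulator involves deliberate trade-offs (band width versus switching frequency), a regulator for something as nebulous as learning has far more to contend with.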

 

I’m reminded how amazing it is to learn something as a human being. I don’t pretend to know exactly how it happens especially in glorious moments of gestalt “aha!” understanding. Present neural networks underlying LLMs are not like our brain or our mind or our sense of self. As a computational scientist, I have some vague and wild ideas of how to improve on this. I’m sure others like me have such thoughts and hence I expect over time that LLMs will continue to improve. Whether they will eventually achieve the quality of the hype remains an open question.


Tuesday, February 17, 2026

The Feeling of Knowing

To err is human. To admit to erring… well, that’s difficult. Like most people, I don’t like the feeling of realizing that I’m wrong. I always think I’m right (I can’t help it!), but I don’t think I’m always right. Past experience confirms that I do err; it’s consistent but I can’t predict when it will happen. And I feel I’m right… right up to the moment that I’m proven wrong.

 


The subject of Wrong-ology is taken up by Kathryn Schulz in her book, Being Wrong. Our minds are funny things and the way we learn and remember things is much more complex and mysterious than we imagine. We think, as Plato suggested, that memory works like a wax tablet: “Everything you experience, from your own thoughts and sensory impressions to interactions with others, creates an imprint in that wax… an unchanging mental replica of the events of the past, captured at the moment they occurred.” This may contribute to that feeling of knowing, even when in gross error.

 

As a professor, I’m well practiced at professing. Students think you’re more knowledgeable and know what you’re talking about when you present the material confidently. That’s not hard to do because I feel that I know the material. Even when I don’t know it as well as I should, I still present it confidently. Fake it till you make it. Did I do so when I first started teaching? Was I more diffident back then? Honestly, I don’t remember. I’ve learned not to trust my memory even if I feel I can visualize it in my mind’s eye. My spouse provides me a very useful signal when I might be professing with confidence about something I know little about; she says: “You say that so confidently”. That gets me to chuckle, stop, check, and think.

 

Schulz discusses medical cases of brain issues where patients confidently describe or explain something with no correspondence to actual reality. And they seemingly believe it. This is known as confabulation. Here’s how Schulz describes it: “Imagine, by way of analogy, that each of us possesses an inner writer and an inner fact-checker. As soon as the writer begins devising a story, the fact-checker gets busy comparing it with the input from our senses, checking it against our memory, examining it for internal consistencies, thinking through our database of facts about the world, and, once we utter it, gauging other people’s reactions to assess its credibility… When the fact-checker falls asleep on the job, however, our theories about the world can become wholly unmoored from reality. All of us have experienced this, because the one time our fact-checkers reliably fall asleep is when we do, too. Think about dreams again for a moment, and about how weird even just the averagely weird ones can be… Now, two bizarre things are going on here. The first is that your brain is generating representations of the world that are only lightly tethered to the real, or even to the possible. The second is that you are completely untroubled by this fact.”

 

Being surprised is a good wake-up call to discovering error. You’d think that by now I would have gotten used to being surprised every time I err. But I am surprised every single time. Consistently, yet unpredictably as to when it will happen. I’m heartened when Schulz writes that saying “I don’t know” is a good sign of brain function because in some forms of dementia, the fact-checker falls asleep and confabulation ensues. It also turns out that being confabulatory is part of how the human brain works. It’s an engine, possibly the engine, of creativity and imagination. I can think about and imagine things that are not real. I can make mental models of things that are abstract or invisible (which I must do frequently in thinking about chemistry). Our minds have adapted to come up with quick instinctive solutions, not always thought through, that do serve us well on many an occasion. The feeling of knowing allows us to act quickly when needed.

 

When grading exams, I still get surprised by the occasional confabulatory explanations of students. A student who has no idea what’s going on is yet able to come up with a fantastical story involving throwing together chemistry concepts completely untethered to reality. It doesn’t happen often, but it’s interesting for me to read these “answers” and try to imagine how a student came up with them. I wonder if that student had the confident feeling of knowing. But actually didn’t. Not knowing what you don’t know isn’t a great situation to be in.

 

Reading Being Wrong has made me a little quicker to say “I was wrong” in my classes when I make a mistake on the board and it is pointed out by a student. I apologize to the class, then thank the student for paying attention and being brave enough to tell me so that I don’t mislead the class any further. While it doesn’t happen often, I feel that as I age it has ticked up in frequency. I’m not as sharp as I used to be, perhaps. Or maybe I am less well prepared because I’m overconfident in the feeling of knowing, having taught the subject matter multiple times over a couple of decades. But if I actually learn something from my errors, that’s a good thing!


Tuesday, February 10, 2026

Rare Earth: P and N

The idea that Planet Earth is rare and special for being able to host life has a very readable book-length argument, Rare Earth by Ward and Brownlee. Technically it argues why complex life is rare, while simpler life may be more achievable over a broader range of conditions. Given that we only have a sample size of one for life-harboring planets, who knows if life proliferates beyond our star? You can plug a range of numbers into the Drake equation to convince yourself either way.

 

The idea that Earth sits in a Goldilocks habitable zone most often refers to whether water exists as a liquid on the surface of a planet. This assumes that H2O is crucial to life as a liquid, and that other liquids (ammonia, hydrocarbons, formamide) may not have the same versatility. It’s hard to say otherwise with a sample size of one. We also assume that carbon-based molecules are crucial for life, which is reasonable from a chemical point of view (diversity, bond energies, or as a carrier). That takes care of carbon, hydrogen, and oxygen. What about nitrogen and phosphorus?

 

The idea that our rare Earth may be rarer than we previously thought comes from a paper published last month in Nature Astronomy (DOI: 10.1038/s41550-026-02775-z). It examines core formation of rocky planets and estimates the availability of nitrogen and phosphorus under different conditions. What matters most is the relative redox state of the mantle. Our planet apparently sits in a zone that optimizes decent amounts of N and P (although both are much less abundant than C, H, O). It’s a tricky balance. If the redox situation is too reducing, availability of P plummets; too oxidizing and N might be lost to outer space by significant degassing. Simulated origin-of-life chemistry in the lab has always worked better under reducing conditions. Thus, the authors conclude: “there is a plausible case to make that only moderately oxidizing planets have both sufficient mantle P and sufficient reducing power to sustain prebiotic chemistry and then life”.

 

Why do we need N? Amino acids. Catalysts. Chemical versatility. We would not have fine-tuning of the thermodynamics and kinetics of (bio)chemical reactions without nitrogen. Why do we need P? I’m not sure. Arguments can be made for its crucial role in nucleic acids (Westheimer’s famous paper). But it’s possible some other backbone might work. Bioenergetic currency relies on phosphates today, but it’s possible that sulfur could have played the early role of energy transduction. Sulfate bacteria are also fierce competitors. I’d be very curious what is known about sulfur availability during planetary formation. We know that outgassing of volcanoes on the early Earth is a source. And I’m also clearly biased because I have a current research grant to study the role of sulfur in origins-of-life chemistry.

 

Final tidbit of the paper that I enjoyed: the authors use oxygen fugacity as their redox measure. When I teach P-Chem, I try to pepper in examples of why the math-and-models are useful. Fugacity is one of those bewildering topics to students, because the math looks like a merry-go-round and having a hypothetical reference state seems strange. Now I can point to another example of why learning about fugacity is useful and interesting. The paper makes reference to Mars and to exoplanets in the search for life in the universe.
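For reference, these are the standard textbook relations I lean on when introducing fugacity (not equations from the paper):

```latex
% Chemical potential of a real gas, with fugacity f replacing pressure
\mu = \mu^{\circ} + RT \ln\frac{f}{p^{\circ}}

% Fugacity coefficient from the compression factor Z = pV_m/RT;
% the integrand vanishes for an ideal gas, recovering f = p
\ln\varphi = \ln\frac{f}{p} = \int_0^{p} \frac{Z - 1}{p'} \, dp'
```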


Friday, February 6, 2026

Second Week: Spring 2026 Edition

I forgot to write last Fall’s First Week edition (so here’s the one from 2024), probably because I was super-busy teaching two sections of G-Chem 1 and one section of Biochem 1. I’m not teaching any Honors section this academic year so last semester I had around 100 students across my three courses. We’re also using a new textbook and online homework system for G-Chem, and it was just my second time teaching Biochem. All in all, more prep work than usual. This semester, I’m teaching one section of G-Chem 2 and one section of P-Chem 2 with around 60 students across both courses. I’m now used to the G-Chem online homework system, and I like the new textbook. Thus, the semester feels lighter from a workload standpoint. Hurrah!

 

But I was still very busy last week because of the possibility of a government shutdown here in the U.S. (which turned out to be mercifully short, thankfully). I decided to write up my annual report for my current federal research grant and submit it about a month earlier than usual, in case the government shut down. There was some confusion about whether an administrative request that I sent via webform went through because the website was apparently having problems, but eventually some back-and-forth emails confirmed that they received my request and my report. And hence today’s post is happening at the end of Week 2 rather than Week 1 of the semester.

 

After the long winter break, I’ve been enjoying interacting with students in the classroom, in my office, or in the hallways or atrium of my building. I’ve been making an extra effort not to be overly quick or efficient in my interactions, and hopefully students feel I’m not rushing them when they have questions. We’ll see how that shakes out the rest of the semester. I feel I have more energy even though my first class starts at 8am (instead of 9am last semester), maybe because of the lighter load and maybe because I just feel freer. The first week of class I was still struggling with timing in my G-Chem class because I had rearranged the material to match the new textbook. This week I feel I did a much better job without rushing in the last 5-10 minutes of class. I’m not making many changes to the first eight weeks of my P-Chem class so that has been going smoothly timing-wise. (I will be making some changes to the latter half.)

 

I also feel I have time for research this semester! Last semester, I felt that I hardly made any progress on my own projects. I still helped my research students make progress in their projects, but didn’t have much time for my own. This semester, however, I’ve been getting in 5-10 hours of research or writing (working on a paper) per week, which has been very nice! One of my summer 2025 research students who continued working with me last semester is also a very capable writer so I invited her to write the first draft of the research paper featuring her work. The carrot is she gets to be first author! I have revised much of the text, but kept the overall flow intact. She also made all the Figures and Tables. (For many other papers, I make the Figures since I feel they look nicer and more consistent in size/shape, but this student is exceptional and detail-oriented.)

 

I’m not taking on any new service activities because over winter break, I found out that my sabbatical application for AY26-27 was approved! This means fewer committee meetings and more time this semester, probably contributing to my feeling freer! I’ve also decided to try presenting my work digitally rather than in-person at the upcoming American Chemical Society national conference, so I don’t need to block off travel time. More time freed up! I am reading a little more to try to fine-tune a plan for my upcoming sabbatical. There are so many interesting things to explore. Okay, that’s the end of Week 2, Spring 2026 edition. It may be my last such post as I’ve been reducing my blogging activities.