Sunday, March 31, 2024

Classifying Failures

To err is human. Sometimes it’s necessary. Sometimes inevitable. But is it ever desirable?

 

Perhaps there’s a Right Kind of Wrong, the title of Amy Edmondson’s latest book. Edmondson is a professor of leadership and management with a diversity of work experience before she entered academia. The subtitle of her book: “The Science of Failing Well.” The gist: it is useful to classify failures into three types (intelligent, basic, complex) and to be cognizant of your situation, which requires practicing self-awareness, situation awareness, and system awareness. A key ingredient to failing well is to have high standards while working in an environment of high psychological safety: you’re not afraid to own up to a failure because you will be supported by those around you. Let’s take these in turn.

 


An intelligent failure is when you learn from failure in a novel situation. When you encounter something you’ve never done before, something new to you, the only way to make progress is trial and error. Getting things wrong is inevitable. Edmondson brings up the example of a research lab: you’re pushing into the unknown, and you will try a lot of things that fail before you succeed. It’s something I try to impress upon my research students; I typically gesture to my file cabinet of abandoned projects. If you don’t try, you won’t succeed. But you’ve also got to know when to throw in the towel, and that’s a skill that takes time and experience (and hopefully good advice from mentors). So it’s important to ask yourself: Am I in a novel situation?

 

A basic failure takes place in well-trod territory. It’s truly an oops! You should have known better, especially since you’ve done this before. But sometimes overconfidence and inattention lead to an error. The consequences could be small; the consequences could be devastating. Regardless, you must try to learn from it so you can avoid repeating it. A complex failure, on the other hand, is not so easy to diagnose. I’ve previously blogged about this after reading Charles Perrow’s classic Normal Accidents. Such failures occur when there are complex interactions among multiple parts in a tightly coupled system; system failure always has multiple causes. Again, paying attention is crucial, especially to early warning signs.

 

The first ingredient in learning from failure is self-awareness. Failure feels like a letdown, and your first instinct is to beat yourself up over it, or worse, to ignore it and shift the blame. Edmondson’s advice: “Choose learning over knowing” and reframe a failure as an opportunity to learn. That requires taking a pause and not acting on that pernicious first instinct. To be situationally aware, ask yourself what context you are in. Is it novel? Is it routine? Is something different than before? Is this part of a complex system? You also have to rate the consequences: Is this a low-stakes or a high-stakes situation? If low-stakes, taking a risk so you can learn might be desirable; if high-stakes, you might want to think twice before betting the farm. The stakes may be physical, financial, or reputational.

 

It’s hard to be system aware. Ever since I dipped into systems chemistry, I’ve often found myself lost in a tangle. Thinking systemically can also be discouraging; sometimes you feel stuck in a system with no easy way out. Edmondson outlines a deceptively simple exercise she often uses called the Beer Game, in which students play four roles in a supply chain: “factory, distributor, wholesaler, retailer”. The rules are simple. The retailer picks a card providing the demand for that turn, and then each player makes orders and keeps track of inventory. But there’s a lag time as inventory makes its way through the system, and things go awry in a hurry. There’s a tiny catch in the game that surprises students, but I won’t give it away; read Edmondson’s book or look it up. Edmondson admonishes the reader to “anticipate downstream consequences”, “resist the quick fix”, and “redraw the boundaries”. Per the typical business book, she provides lively anecdotes, engaging examples, and a positive self-help vibe.

 

Reading this book made me ask myself if I am risk-averse. Do I try to avoid failing? Partially, I suppose. Research is probably an area where I’m not risk-averse. But I don’t put all my eggs in one basket; I usually juggle multiple investigations to increase the chances of success. I’m also protective of my time, and that makes it difficult for me to make a large pivot. I sometimes imagine being able to do so, but then step back and make small, incremental changes instead. This is also true of my teaching. I’m always trying new things, but in small increments. Edmondson made me pause and think about where I can shake things up for a much better payoff, especially since almost everything I do as a professor is low-stakes for me.

 

How about my students? These days students seem much more risk-averse than when I started teaching. Getting a B or a C on an exam can seem like a devastating outcome. I’ve designed my class with lots of low-stakes ways for students to engage in the “struggle” of learning the material. Chemistry is novel, and it’s certainly not easy. I’m upfront with my students about this, but hopefully I also convey that they can all learn it if they’re willing to put in the time and effort. But I recognize that if students don’t feel psychologically safe, they won’t be willing to take risks and make mistakes as part of learning. As a result, they don’t learn as well as they could. Right Kind of Wrong has challenged me to think about how I can help students gain better situational awareness and see learning as a transition from novel to variable to routine, such that when you’ve practiced a lot, you really do have the material down pat.

 

I tell students to make their mistakes in class and on the low-stakes homework and quizzes so that they won’t make them on exams. Making mistakes when learning new subject matter is inevitable, even necessary. To err is to (hopefully) learn.

Tuesday, March 26, 2024

Theory of Learning and Evolution

I recently read three papers by Vanchurin (and colleagues) that build a theory of learning and illustrate its generality with reference to machine learning, biological evolution, and the (presumably physico-chemical) origin of life. Math is involved, and I found some parts difficult to follow, but the writing is clear and the progression of the argument methodical. Today’s blog will focus on only one of these, the most conceptual of the three: “Toward a theory of evolution as multilevel learning” (PNAS 2022, DOI: 10.1073/pnas.2120037119). I will be quoting the paper often, since my paraphrases would be clumsier than their clear prose.

 

What drives evolution in biology? Essentially, “solving optimization problems, which entails conflicts or trade-offs between optimization criteria at different levels or scales, leading to frustrated states…” There are three important pieces here: (1) optimization, (2) an interplay of distinct timescales in a hierarchical system, and (3) non-ergodicity that arises from competing interactions (the “frustrated states”).

 

The paper begins with seven basic principles. The first, and most important, is the existence of a “loss function of time-dependent variables that is minimized during evolution”. To stay alive is to solve an optimization problem. There must be some sort of learning that comes from an organism interacting with its environment. And when you’re dealing with the open unknown, the best solutions we know of involve the “implementation of a stochastic learning algorithm”. Thus, learning and evolution are “optimization by trial and error”.
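Since the paper frames evolution this way, a minimal sketch of trial-and-error optimization may help; the toy loss landscape, mutation size, and keep-if-better rule below are my own illustrative stand-ins, not the authors’ model:

```python
import math
import random

# "Optimization by trial and error": random mutations to a state are kept
# only when they lower a loss function. The rugged toy landscape (a parabola
# plus wiggles) creates local optima, so the walker may settle in one.

def loss(x):
    return (x - 3.0) ** 2 + math.sin(5 * x)

x = 0.0
for _ in range(10_000):
    trial = x + random.gauss(0, 0.1)  # random "mutation"
    if loss(trial) < loss(x):         # selection: keep only what works
        x = trial

print(f"settled near x = {x:.2f} with loss {loss(x):.2f}")
```

Because only improvements are kept, the walker tends to settle into whatever local optimum is nearby, which is exactly the behavior the authors emphasize later.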

 

The next three principles, still very general, cover the following:

·      There’s a “hierarchy of scales”, each of which has its own “dynamical variables that change on different temporal scales”.

·      These timescales are distinct.

·      The faster-changing variables can be statistically defined by the slower-changing ones. (In thermodynamics, we use a few macroscopic parameters to encompass the cacophony of the microscopic world.) This is known as renormalization; a toy illustration follows.
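As a toy numerical illustration of that last principle (with the caveat that real renormalization is far richer), thousands of fast-fluctuating microscopic values can be summarized by a couple of slow macroscopic statistics:

```python
import numpy as np

# Coarse-graining sketch: a fast-changing, noisy microscopic variable is
# summarized by slow statistics (mean and variance), in the spirit of
# thermodynamics compressing microscopic detail into macroscopic parameters.

rng = np.random.default_rng(0)
fast = rng.normal(loc=1.0, scale=0.3, size=100_000)  # 100,000 microstates

slow_mean, slow_var = fast.mean(), fast.var()        # just two slow numbers
print(f"macroscopic summary: mean = {slow_mean:.3f}, variance = {slow_var:.3f}")
```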

 

The final three principles are more specific to the living systems of Planet Earth, a sample size of one. “Evolving systems have the capacity to recruit additional variables that can be utilized to sustain the system and the ability to exclude variables that could destabilize the system.” Replication plays a key role in this process but requires the sequestering of “information-processing units”. Finally, there is a two-way information flow between the slower-moving information units and the faster-changing data-collecting parts that interact with the environment. This, essentially, is learning how to stay alive.

 

What follows is an exposition of ten “basic phenomenological features of life” which the authors link to their theory of learning. I won’t go into these one by one, but rather pick out the concepts I thought noteworthy. Let me preface this by commenting on the existence of a multiscale ‘situation’. It’s physico-chemical. The universe, and by extension Planet Earth, is made up of a mixture of multiple substances, which consist of molecules, which consist of atoms connected by vibrating bonds. Everything is in motion – it’s a dynamical system – but the timescales of motion are vastly different. Chemical bond vibrations are in the femto- to picosecond range. Molecules diffusing and colliding may be in the nano- to microsecond range. Nerves might pulse in a tenth of a second. I measure my time usage in minutes.

 

An inevitable consequence of multiple distinct timescales is that different processes will ‘compete’, leading to frustrated states. But in addition to temporal frustration, there is also spatial frustration. This leads to a balance of sorts – an equilibrium, so to speak, but one that is only semi-stable and may shift as the environment changes. Living systems that sequester their slow-changing informational subsystems, which provide some organismal stability, must continually receive and adapt to information relayed from the faster-changing ‘detector’ subsystems that interact directly with the environment. But neither of the two is privileged. This is hard for us to think about because we’re used to linear thinking of cause followed by effect. The lines of communication go both ways – a complex system where separation of the parts leads to death. You can’t separate the function of an organism from its genesis.

 

As to trial-and-error learning, the problem is that it guarantees “neither finding the globally optimal solution nor retention of the optimal configuration when and if it is found. Rather stochastic optimization tends to rapidly find local optima and keeps the system in their vicinity”. Frustrated competing variables keep things that way. That’s not necessarily a bad thing, since the environment will change. It’s why “biological evolution comprises numerous deleterious changes, comparatively rare beneficial changes and common neutral changes” that explain genetic drift, according to the authors. The diversity of local optima that arises from nonergodicity is why “evolution pushes organisms to explore and occupy all available niches and try all possible strategies”. We should expect a “diversity of solutions”.

 

Why do parasites show up? Diversity means that entities will arise that “scavenge information from the host” and “minimize their direct interface with the environment”. This may lead to symbiosis, but it might not. Competing imperatives are always in play. Why is there programmed cell death? There is an overall loss function to be minimized (the first principle!), and in a multicellular system, the tug-of-war between different scales could well result in reducing system failure by having individual cells (which have naturally accumulated problems – thanks, entropy) die off for the greater good.

 

To illustrate all this, the authors set up a (learning) neural network with variables on different timescales. Some are ‘trainable’, others are not. There’s a logic to how they assign organismal versus environmental variables, and to how they divide the slower-changing variables into ones whose change would be deleterious, neutral, or adaptive. I won’t go into the math. Their conclusion: “slow variables determine the rules of the game, and changing these rules depending on the results of some particular games would be detrimental for the organism”, so it’s better to have “temporally stable rules” than an unconstrained optimization. But in the background, some of these rules can change. And since optimization occurs continuously and dynamically, an environmental change may lead to a successful adaptation. Or it may lead to system failure.
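As a rough sketch of that conclusion (and only a sketch; the authors’ actual network is more elaborate), here is a toy model in which a slow ‘rule’ parameter stays frozen while a fast variable adapts:

```python
import numpy as np

# Slow rules, fast adaptation: a tiny linear model where the bias (the slow
# "rule") is frozen while the weight (the fast variable) is trained by
# gradient descent on a mean-squared-error loss. Purely illustrative.

rng = np.random.default_rng(1)
X = rng.normal(size=200)
y = 2.0 * X + 0.5 + rng.normal(scale=0.1, size=200)  # environment to learn

slow_bias = 0.5     # frozen: the rules of the game do not change mid-game
fast_weight = 0.0   # trainable: adapts to the data

for _ in range(500):
    pred = fast_weight * X + slow_bias
    grad = 2 * np.mean((pred - y) * X)  # d(loss)/d(weight)
    fast_weight -= 0.05 * grad          # only the fast variable moves

print(f"learned weight: {fast_weight:.2f} (bias held fixed at {slow_bias})")
```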

 

Most important to me were the setup of a system containing processes with distinct timescales, the inevitability of frustration, the role of renormalization, and the question of what an appropriate loss function should be. The chemical systems I study (for which I build thermodynamic and kinetic maps) are likely too small. I’ve used the maximization of free energy reduction as the function to optimize, but perhaps I need to broaden it or subdivide it into several mini-loss-functions. One of the papers that I didn’t discuss today treats thermodynamics at a more general level and tries to connect it to the origin of life. It’s at a very abstract level, and the question is whether I can use their framework and fill in the details. That’s what keeps research interesting!

Monday, March 25, 2024

Trinity: Live. Die. Repeat.

With my discovery of A Dark Room, reading about the genre of interactive fiction, and watching Get Lamp, I finally decided to bite the bullet and try Brian Moriarty’s famed Trinity. The 1980s were the heyday of interactive fiction marketed as computer games. I tried Adventure and Zork back in the day, but (even crappy) graphics seemed much more alluring than a text adventure, so I never looked at the many Infocom hits of the 80s. But having watched Oppenheimer a month ago and just started Spring Break, I was motivated to try Trinity. (Pic from the Macintosh Repository.)

 


The 1980s were a strange time, with the spectre of nuclear war looming. This was before the fall of the Berlin Wall and the breakup of the U.S.S.R. Was World War III just around the corner? That sets the stage for Trinity, released in 1986. From what I’ve learned of its history, Moriarty had been working on it for several years. (By the time the 90s rolled around, I was no longer playing computer games, so I never tried Loom, which is probably what Moriarty is most famous for.) Since I had little experience with Infocom games, I read through the manual (which opens with a comic strip about the Manhattan Project) to get a sense of the navigation commands I would need. Then I booted up the game.

 

The scene opens in Kensington Gardens. You’re an American tourist. It’s a crowded day, but there are strange scenes suggesting that something is not quite right. I was able to visit all the spots and acquire all the necessary items, but couldn’t quite work out the combination before the air-raid sirens rang out. Then I was killed, along with all the other tourists, when a nuclear missile struck, ushering in a nuclear holocaust. While all this sounds bleak, I was very impressed with Moriarty’s writing. He knew how to paint a scene, provide sufficient hints to help the reader progress, and maintain the tension of the story. I had not expected the game to have a countdown timer, which explains why I started with a wristwatch that told me the time. Every action brought me closer to the appointed hour of doomsday.

 

As an old-school gamer, I knew how to draw a map and take notes. I dutifully did this in my first game and knew I had fully explored the Kensington ‘scenario’. It was time to brush away the cobwebs and reactivate my old-school gamer-puzzling abilities. In my second run-through, I figured out how to finish the first stage – which turned out to be a prologue. At that point, the opening credits began: Trinity, a game by Brian Moriarty! I was impressed.

 

The second stage resembled the Neitherlands of The Magicians, except that instead of fountains that connect you to other worlds, you have ‘doors’ that connect you to relevant timelines in Earth’s history. These timelines are related to the making of the atomic bomb and its detonation at the Trinity test site near Los Alamos in the New Mexico desert. The tricky part was figuring out where the doors were and how to open them. This required more mapping and systematically testing hypotheses about which items I needed to use in a particular way. I was thrown off for a while when the ‘world’ seemed to reverse east-to-west, but I finally figured it out. I patiently worked my way through the puzzles, most of which were straightforward, but a few were quite obscure and took several attempts. (I died once or twice and learned when to strategically save the game.)

 

Now it was time to do the time-traveler thing. Most of the portal doors led to a short scenario where you needed to get an item to help complete your quest. This required the use of other items. It’s crucial to save the game before you enter a portal door, because you’re not likely to do them in the right sequence the first time. And you’ll die a few more times in scenarios where time is of the essence. I admit to a little impatience and occasionally looking up ‘hints’ on the internet to help me along. Back in the old days, I had the tenacity to keep attacking the problem and exhaust the possibilities, but there was also no access to hints. (Because I acquired free pirated versions of most games, I didn’t have manuals either.)

 

I knew that the game would culminate in Trinity 1945, so I saved that scenario for last. That was the correct move. The final scenario is much larger, and the puzzle is fiendish and time-constrained. Playing it felt like being in the movie Groundhog Day, or more appropriately Edge of Tomorrow. Live. Die. Repeat. I died a lot. After at least twenty runs at it (with a few hints), and possibly more (I lost track), I gave up. I knew where the necessary items were and how to get them, but I just couldn’t figure out a sequence that didn’t run out of time. And timing is everything in this scenario of Trinity. I succumbed to reading a walkthrough on the Internet, and it’s quite clear that I would not have had the patience to finish the game. (No, I didn’t finish the scenario with the walkthrough instructions.) I do highly recommend this site for the walkthrough, because it had an external link to The Digital Antiquarian, which has some superb articles about Trinity.

 

Playing Trinity made me think of the value of simulations in training for difficult missions. Pilots train on simulators. Navy SEALs are put through rigorous exercises and challenging scenarios before their actual missions. NASA trains its astronauts extensively. Tom Cruise had the tenacity, with Emily Blunt’s encouragement, to keep going again and again until he got the sequence right and didn’t die. Could even a tourist with no special training, with lots of trial and error in a simulation, do what it takes to prevent World War III? It also made me think about tenacity and perseverance. And when to quit. In this case, I quit Trinity at the right time. I spent enough time to enjoy and appreciate it, but didn’t burn many more hours, so I could move on to other interesting brain-tickling activities.

Sunday, March 24, 2024

Magi: Artist-Engineers

It was a fateful moment when Harry Potter shared a train cabin with Ron Weasley on the way to Hogwarts for the first time. Harry generously shares his food-treats with Ron, and Ron acclimates Harry to the parts of the Wizarding World that eleven-year-olds care about. Ron collects cards of famous wizards, found in treats called Chocolate Frogs. It’s an opportunity for Harry to learn a little more about Dumbledore, the Hogwarts Headmaster. Ron, however, wanting to complete his set, is missing Agrippa. Who is this shadowy Agrippa?

 

Heinrich Cornelius Agrippa (1486-1535) is famous for writing On Occult Philosophy. What does he have to do with magic? He’s a Magus, the singular of Magi. Today we might associate ‘magi’ and ‘occult’ with some cult that believes in demons and casting magic spells. But in ancient times, magi were wise and learned men. The most famous instance comes in the Bible, where ‘wise men from the east’ following a ‘bright star’ journey to see the baby Jesus, bringing gifts fit for a king. The Babylonian magi were also astronomers, studying the heavenly bodies of the night sky. What signs might the stars portend? In those times, there wasn’t a clear distinction between astronomy and astrology. Today, one is a legitimate science, the other is ‘occult’ pseudoscience that retains surprising popularity.

 


In a book focused on the fifteenth and sixteenth centuries, the historian Anthony Grafton details the writings and doings of characters who melded magic, religion, and science – when these were still nascent categories with plenty of overlap. The book is appropriately titled Magus, and Agrippa gets a large chapter to himself. In his time, Agrippa was a polymath, someone knowledgeable across many areas of learning. He was a mercenary who fought in wars, but he was better known as a theologian, physician, lawyer, scholar, and writer of the occult. His occult tastes were influenced by Johannes Trithemius, who also gets a hefty chapter in Magus. Both magi drew on the Kabbalah and the neo-Platonist writers in their work.

 

My interest in the history of magical thought was initiated by reading the Harry Potter books as a scientist. Could magic and spell-casting work physically, and if so, how? (I speculate that it makes use of photons.) As a chemist, I pondered why a wizard would need to rely on potions. Is spell-casting not enough? (I speculate that intricate biochemistry is difficult to control with conventional spell-casting.) That led me to reading books about alchemy and how it influenced the birth of chemistry as a science. (I recommend anything by Lawrence Principe.) But I didn’t know much about magi who did not focus on alchemy.

 

Agrippa was much more interested in the relationship of magic to art and engineering than to chemistry. Like Trithemius and others before him, he divided the power of magic into two broad categories. On one hand, there was ‘black’ magic (to be shunned), which involved summoning demons to do one’s bidding. (The Bartimaeus trilogy by Jonathan Stroud uses this as a basis for magic.) On the other hand, there was ‘natural’ magic: the magus, in harmony with nature, learned its secrets and how to harness its power. Scientists and engineers were the true magicians! And since a feature of magic is to amaze its audience, the magus also needed to be a true artist.

 

In the days before TV or radio, if you wanted entertainment, it had to be live! The magi knew how to put on a good show. Smoke, fire, elaborate costumes, mechanical beasts – this is what brought wows and gasps from the audience. The magus was an artist-engineer, and a favorite of kings and rich patrons was the automaton, a “medieval robot” that seemed, so to speak, alive. Grafton writes: “Agrippa’s emphasis on automata was not accidental. Nothing bothered orthodox readers more than the description of the statues the Egyptians had directed daemons into in order to make their idols speak.” In contrast, the magus was “a master mechanic and a creator of dazzling effects” through knowledge of the natural world, and therefore able to design seemingly lifelike creations – flying birds, fire-breathing creatures, and talking heads.

 

Making magical devices or artifacts was the distinction of the true magus, and it required a true understanding of the underlying rules of the natural world. Our machines and labor-saving devices would awe a medieval time-traveler; a sufficiently advanced science seems indistinguishable from occult magic. Our harnessing of wired electricity and wireless electromagnetic waves would seem miraculous to Agrippa, but he wouldn’t ascribe it to demons. He’d want to learn from today’s scientists how they accomplished such feats through the study of nature. Agrippa, the magus, would be both surprised and delighted by the advances we have made. He’d also be happy that the demonologists have, for the most part, been relegated to a fringe group. But he would be concerned that in the present century, science is losing its prestige as a trusted source.

 

Agrippa was an interesting character in an interesting time for science, philosophy, religion, and magic. I’d like to think that his writings elevated the craftsman, who in medieval times was more of a second-class citizen. While Grafton’s book is rather academic, and at times I found myself skimming over details I was less interested in, I’m happy to have learned more about Agrippa and company. Now his name isn’t just a factoid for Harry Potter Trivia Night; I actually know much more about his life and times as a magus.

Thursday, March 14, 2024

Indelible

I’ve previously read about the wonders of the ballpoint pen and its significant improvement over its predecessors. Yet I found myself entranced re-reading the history of inks and writing in Mark Miodownik’s Liquid Rules. There were many nuggets I had forgotten and others that were new to me. Along with the chapter on glue, it made me ponder the wonder of sticky molecules – fluid one moment, solid and stuck a moment later!

 

Let’s begin with ink. The trick: you need it to flow onto paper, and then you need it to stick. Timing is everything if you don’t want to end up with ugly smudges! I learned that reed pens were used by the Egyptians circa 3000 BCE. The ink was made “by combining soot from oil lamps with the gum from the acacia tree, which acted as a binder.” The gum binder let the hydrophobic carbon disperse in water to give a flowing black ink; that gum, gum arabic, can still be found in art shops today. Problem #1: carbon ink doesn’t dry fast and can smudge. Problem #2: once dry, it doesn’t bind strongly to the surface, so you can scratch it off – but perhaps that’s a feature rather than a bug.

 

The next revolution was gall ink. (A gall is an oak apple.) Used from biblical times up to the last century, it’s a clever bit of chemistry. Miodownik writes: “You make gall ink by putting an iron nail in a bottle with some vinegar; the vinegar corrodes the iron and leaves behind a red-brown solution, full of charged iron atoms… It reacts with the tannic acid from the galls and produces a substance called iron tannate, which is highly water-soluble and very fluid. When iron tannate comes in contact with paper fibers, it flows, through capillary action, into all the small crevices in the paper, distributing itself evenly. And as the water evaporates, the tannates are deposited inside the paper, leaving a lasting blue-black mark.” It’s permanent ink. High tannin content is also why “red wine and tea can leave such bad stains on your clothes and teeth.” We even have historical documents of people complaining (written in that same ink!) about how it got all over the place and was hard to wash off. What was needed was a better delivery device.

 

The fountain pen was invented as early as the tenth century, with many updates over subsequent centuries. With the ink enclosed in the pen itself, you no longer needed to carry an ink bottle with you. But there were problems: “controlling the flow so that the ink didn’t all rush out at once [made] an enormous blob.” It took a while for inventors to figure out that the culprit was the seemingly random formation of air (vacuum) pockets. Miodownik illustrates this by describing the glugging that ensues when you upend a bottle: “Each glug corresponds to air forcing its way in… and as it does so, it keeps the liquid from coming out. Take it in turns – liquid out, air in, liquid out, air in, glug, glug, glug.”

 

A simple solution might be to put a hole at the other end, but then if you turn the pen upside down, the ink leaks. Eventually a clever design by Lewis Waterman in 1884 utilized “a metal nib that allowed ink to flow down a groove by a combination of gravity and capillary action, while incoming air passed through in the opposite direction”. The problem: the acidic gall inks eventually corroded the metal nibs. Miodownik writes: “People would shake their pens in rage, trying to dislodge whatever unseen obstacles were mucking up their writing, but in the process, they would lob ink… onto the clothes of unsuspecting passers-by.” Having been ‘forced’ to learn how to write with a fountain pen in grade school (why, oh why?), I can relate. I was glad when my school gave up on it after a year and we went back to ballpoint pens.

 

So back to inks. Quink was developed by the Parker Pen Company. It was a “blend of synthetic dyes with alcohol… flowed well in the pen… dried very quickly when it came into contact with paper.” But there were still problems. Laszlo Biro’s solution was to redesign the pen. From his knowledge of how the newspaper printing press worked, he eventually hit on using a tiny ball as a roller to deliver ink to paper. And this works beautifully because of non-Newtonian flow. Miodownik goes into detail explaining the relationship between the viscosity and flow of liquids. Some liquids behave strangely: “if you mix cornmeal with a bit of cold water, it forms a liquid that’s runny when you stir it gently, but if you try to stir it quickly, the liquid becomes very viscous, to the point that it behaves like a solid.” Emulsion paint is another non-Newtonian liquid: thick in the can, fluid when stirred, but then it thickens again quickly and doesn’t drip! Quicksand is yet another: if you’re in it, it flows under pressure when you move, then reverts to a semi-solid when you don’t. So you get stuck!
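Miodownik stays qualitative here, but the standard power-law (Ostwald-de Waele) model, which the book itself doesn’t invoke, captures the idea in one line: apparent viscosity = K·(shear rate)^(n−1), with n < 1 for shear-thinning liquids like emulsion paint and n > 1 for shear-thickening ones like the cornmeal mixture:

```python
# Power-law (Ostwald-de Waele) fluids: apparent_viscosity = K * rate**(n - 1).
# n < 1 thins with stirring (emulsion paint); n > 1 thickens (cornmeal in
# water, quicksand). The K and n values here are illustrative.

def apparent_viscosity(shear_rate, K=1.0, n=1.5):
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0):
    thinning = apparent_viscosity(rate, n=0.5)    # gets runnier when stirred fast
    thickening = apparent_viscosity(rate, n=1.5)  # stiffens when stirred fast
    print(f"shear rate {rate:>5}: thinning {thinning:.2f}, thickening {thickening:.2f}")
```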

 

Ballpoint pens are a thing of beauty, thanks to these non-Newtonian inks. They don’t smudge easily, and “the ink doesn’t bleed as it seeps into the paper… It’s been chemically formulated to have a low surface tension when it comes into contact with cellulose fibers, as well as with the ceramic powders and plasticizers that are added to the top surface of paper…” You can even write upside down, against the force of gravity. You don’t even need a cap for the pen. All this and more can be found in chapter ten of Liquid Rules, aptly titled “Indelible”. I expect to refer back to it again in the future!

Tuesday, March 12, 2024

Jack and Jill, Down the Hill

What are the fundamentals of the Laws of Nature? It’s challenging to find a book that threads the needle between being too simple and being too difficult, one that gets at the big picture without sacrificing the important details. I’ve just stumbled on a delightful book written two decades ago by Michael Munowitz: Knowing.

 


It’s an ambitious book. I’m five chapters in, and I think Munowitz rises to the challenge. Here’s an excerpt from his short opening chapter, aptly titled Great Expectations: “Look closely at the weave of the world and see, as if in a tapestry, a frugal simplicity masquerading as complexity. Go beyond the finished work, which can only dazzle, to find the pattern hidden within. Take apart the tapestry strand by strand, color by color, stitch by stitch. Find the regularity. Find the rules. There must be rules. Nothing can be as complex as the universe first appears, and nothing deceives the mind more than complexity.”

 

Munowitz is an engaging writer. I daresay he has the gift of communicating the seemingly alien concepts of the physical sciences in a language anyone can understand. But it does require some effort from the reader to read slowly, pause, and think. While the prose is fluid and seems effortless, don’t be fooled. There’s a lot of hard science packed into it, and by taking the time to chew over what he conveys, I think the reader will come away with a much better foundation of how Nature works. And where does it all begin? By observing, measuring, and then looking for patterns!

 

I particularly liked that Munowitz chose the interaction between two particles to begin his quest for the explanation of everything. Chapter Two is aptly titled Ties That Bind. And in a subheading titled “The Potential To Be Different”, Munowitz introduces us to Jack and Jill going down the hill. Far away from each other, their walking paths seem random. But as they get closer and start to notice each other, they draw nearer. “Closer and closer they come, and with each step the sense of attraction increases. The symbolic slope grows steeper.” At some point, they will get too close for comfort, move apart, and settle into a comfortable (equilibrium) distance. Here’s a picture from the book.

 


The Bond-Energy Curve is the fundamental underlying business of chemistry. I find that using it to scaffold all of chemical bonding is conceptually helpful to students, so I lead with it in G-Chem. We go through several examples to illustrate how different interacting particles prefer different equilibrium distances and have “energy wells” that may be deep or shallow. I try to hammer home the key principle that breaking a bond requires energy input into the chemical system, while forming a bond (which lowers the energy of the system) releases energy to the surroundings because of conservation of energy. This is challenging for students because in most chemical reactions, bond breaking and bond forming take place simultaneously! Thus the net energy change of the chemical system may be positive or negative, depending on the relative strengths of the chemical bonds in the reactants versus the products.
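For the curious, here is what such a curve looks like in numbers: a minimal sketch using the Morse potential, with loosely H2-like parameters chosen by me for illustration (the book and my class treat the curve pictorially):

```python
import numpy as np

# Bond-energy curve via the Morse potential:
#   V(r) = De * (1 - exp(-a * (r - re)))**2 - De
# so V(re) = -De at the bottom of the well and V -> 0 as the atoms separate.
# Parameters are illustrative, loosely H2-like.

De, a, re = 4.5, 1.9, 0.74  # well depth (eV), width (1/angstrom), eq. distance (angstrom)

def morse(r):
    return De * (1 - np.exp(-a * (r - re))) ** 2 - De

r = np.linspace(0.4, 4.0, 1000)
V = morse(r)
print(f"minimum at r = {r[V.argmin()]:.2f} angstrom, well depth = {-V.min():.2f} eV")
# Climbing out of the well (breaking the bond) requires an energy input of De;
# falling in (forming the bond) releases the same amount to the surroundings.
```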

 

In Chapter Five (Mass as a Medium), Munowitz tackles the equivalence of mass and energy. It’s mostly about E = mc² (multiplied by gamma as things pick up speed) and the warping of spacetime, but I like how he introduces potential energy wells. Imagine two hills of similar height with a valley in between. A ball starting at rest on top of one hill gets nudged and rolls into the valley. It picks up speed as it rolls down (potential energy converted to kinetic energy), and after reaching the bottom it starts up the other hill (kinetic energy converted back to potential energy). Will it make it to the top? Or get trapped in the valley? If the ball loses some energy along the way, it will get trapped. How might this happen?

 

Here's Munowitz: “Suppose that the particle does not keep every bit of energy the field bestows. It has a mass, remember, a built-in store of energy, an endowment that depends neither on its own motion nor on the influence of any external agent. And by giving away some of that internal energy now, by losing mass, the particle avoids climbing all the way up to the top. It settles in the valley, finding a new stability with less mass and less rest energy than before. Somebody else pockets the difference.”

 

In G-Chem, we discuss this in the chapter on Nuclear Chemistry when we go over the source of energy in fission and fusion nuclear reactions. The missing mass is small, but it translates into a wallop of energy, thanks to E = mc². I tell my students that this chapter is different from any of the chemistry we discuss all semester long, where the making and breaking of chemical bonds involves rearrangements of electrons (with no change to the atomic nuclei). Yet the same thing happens when a chemical bond forms: the two atoms get trapped in an energy well, and a tiny amount of mass goes missing, much tinier than in nuclear reactions. I’ve mentioned this in my class, but I don’t think the students really get it. Jack and Jill, when they get together, shed some hair or dead skin as they approach each other going downhill.
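To make “much tinier” concrete, here is a back-of-the-envelope comparison; the ~400 kJ/mol bond energy and ~1 MeV nuclear energy are typical magnitudes I chose for illustration, not numbers from the book:

```python
# Missing mass from E = m c^2 when a chemical bond forms vs. a nuclear event.

c = 2.998e8            # speed of light, m/s
N_A = 6.022e23         # Avogadro's number, 1/mol

E_bond = 400e3 / N_A   # ~400 kJ/mol bond energy, converted to J per bond
print(f"mass lost per chemical bond: {E_bond / c**2:.1e} kg")  # ~7e-36 kg

E_nuclear = 1e6 * 1.602e-19  # ~1 MeV per nuclear event, in J
print(f"mass lost per nuclear event: {E_nuclear / c**2:.1e} kg")  # ~2e-30 kg
# The nuclear mass change is roughly a few hundred thousand times larger.
```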

 

I’m looking forward to slowly working my way through the rest of Knowing. I hope that Munowitz tackles complexity in some satisfactory way, even though the opening chapter hints at an essentially reductionist approach, which I think ultimately fails if not paired with thinking about emergence. Analysis and synthesis: we need both bottom-up and top-down to complete the picture. Munowitz hints at a broader approach in Chapter Four, where he introduces three regimes of nature:

1.     A clockwork mechanism: “Every move would be predetermined by the one that came before… With enough observation, with enough analysis, with enough familiarity, we might eventually learn what makes everything tick.”

2.     The quantum realm: “For small particles confined in small spaces, the universe submits to a different kind of government… under which the promise of omniscience is honored (after a fashion) but devalued at the same time… It is a knowledge not of certainty, but rather of probability and chance.”

3.     Chaos: “Between certainty and uncertainty, there is a third form of governance… where the promise of mechanics is upheld according to the letter of the law yet mocked in spirit. We find it everywhere… in the ordinary occurrences of every day life, often the richest and most complex to be found.”

 

This third regime is the cutting edge of science in my opinion. There is so much more to learn and know!

Tuesday, March 5, 2024

Eutectic Surprise

I tried out a new class activity in my G-Chem Honors class yesterday. In the previous class, we had finished covering colligative properties, including calculations of boiling point elevation and freezing point depression and the thermodynamic arguments that explain these phenomena.

 

To incorporate my research interests (chemical origin-of-life) and motivate the students as to why this might be interesting, I had them read a paper before class (“Prebiotic Synthesis of Adenine and Amino Acids Under Europa-like Conditions”; Levy, M.; Miller, S.L.; Brinton, K.; Bada, J.L. Icarus 2000, 145, 609-613). I annotated the paper so that they would focus on the key paragraphs; in my experience, first-year students can read scientific papers if provided this sort of scaffolding support.

 

At the beginning of class, I briefly discussed the role of heat-shock proteins in preventing the cell contents of organisms from freezing at temperatures below zero Celsius. Then I divided the students into small groups and let them loose on a worksheet. The first part required them to recall how to calculate the freezing point depression of a solution (in this case, the NH4CN solution in the paper they read). I discovered that some of the students got stuck, even though they should have known how to do this from the examples in class and the previously assigned homework. After some quick reminders, everyone was up and running.
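For reference, the worksheet calculation is a one-liner: ΔTf = i·Kf·m. Here is a minimal sketch, where the 0.1 mol/kg molality and the assumption of complete dissociation of NH4CN (i = 2) are mine for illustration, not values taken from the paper:

```python
# Freezing point depression: dTf = i * Kf * m.
# Kf(water) = 1.86 degC*kg/mol; the molality and van't Hoff factor are
# illustrative assumptions (0.1 mol/kg, complete dissociation of NH4CN).

Kf = 1.86   # degC*kg/mol, cryoscopic constant of water
m = 0.1     # mol solute per kg water (assumed)
i = 2       # NH4CN -> NH4+ + CN- (assumed complete dissociation)

dTf = i * Kf * m
print(f"freezing point lowered by {dTf:.2f} degC, i.e., to {-dTf:.2f} degC")
```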

 


I hoped they would be able to puzzle out a simple binary eutectic diagram (shown above). In the worksheet, I had defined the eutectic point and the eutectic temperature, and then asked them to work out how to label the appropriate parts of the diagram, given the eutectic temperature and the freezing points of the two substances (HCN and H2O). One group figured this out relatively quickly, but the others struggled a little. The next instruction had them figure out the approximate mole fraction of each substance, assuming all the “curves” were straight lines. I assumed they would use a simple ratio guesstimate, but the students were either stumped or attempted to use y = mx + b equations (which would have worked but were terribly slow). I had to step in before they tied themselves in knots. They were surprised at my simple explanation. I had momentarily forgotten that students often reach for an algorithmic procedure before thinking of a guesstimate, while I usually do the opposite.
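For the curious, here is the y = mx + b route the students attempted, which does work, just slowly by hand: treat each liquidus branch as a straight line and solve for the crossing. Every anchor point below is a hypothetical stand-in for values read off a diagram like the worksheet’s:

```python
# Find the eutectic as the intersection of two straight liquidus lines.
# All coordinates are hypothetical stand-ins for points read off a diagram
# (x = mole fraction of HCN, T in degC).

def line(p1, p2):
    """Return slope and intercept of the line through two (x, T) points."""
    slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
    return slope, p1[1] - slope * p1[0]

m1, b1 = line((0.0, 0.0), (0.5, -16.0))    # water branch, from pure water's fp
m2, b2 = line((1.0, -14.0), (0.9, -18.0))  # HCN branch, from pure HCN's fp

x_e = (b2 - b1) / (m1 - m2)                # where the two lines cross
print(f"eutectic near x(HCN) = {x_e:.2f}, T = {m1 * x_e + b1:.1f} degC")
```

The ratio guesstimate does the same geometry by eye, comparing the temperature drops from each pure freezing point down to the eutectic temperature.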

 

The point of this was to have them think through what would happen if you started with a 0.1 M HCN solution and progressively lowered the temperature. Because I was running out of time, I only gave them a few minutes to think about it in their groups before walking them through what they would expect to observe. I think the students got my main point: as one substance (in this case, the solvent water) solidifies, its mole fraction in the liquid decreases, and the solution follows the diagonal line down to the eutectic point. I think they were surprised, but they understood the implications. (I didn’t have time to double-check this with a follow-up question.)
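Here is a small simulation of that main point, reusing the hypothetical straight water-side liquidus from the sketch above (T = −32·x): as ice forms, the HCN stays behind, and the liquid slides down the line to the eutectic:

```python
# Cooling a dilute HCN solution: water freezes out, so the remaining liquid
# is forced onto the (assumed straight) liquidus T = -32 * x until it hits
# the (assumed) eutectic at x = 0.75. Starting amounts approximate 0.1 M HCN.

n_hcn, n_liq_water = 0.0018, 0.9982      # mole fractions of a ~0.1 M solution
for T in range(-1, -25, -1):             # cool one degree at a time
    x_liquidus = T / -32.0               # liquid composition the liquidus allows
    n_liq_water = min(n_liq_water, n_hcn * (1 - x_liquidus) / x_liquidus)
    x_liquid = n_hcn / (n_hcn + n_liq_water)
    if T % 6 == 0 or x_liquid >= 0.75:
        print(f"T = {T:>3} degC: liquid is {100 * x_liquid:.1f} mol% HCN")
    if x_liquid >= 0.75:
        print("at the eutectic, the remaining liquid solidifies as a whole")
        break
```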

 

We ended with a brief discussion of the difference between a low-temperature eutectic approach to making adenine from HCN and a high-temperature approach. The students were able to make connections to collisions and thermal energy, which is good because this discussion was meant to be a segue into our upcoming kinetics module. We briefly discussed why guanine is also found in the mixtures (students looked up the structures), but we ran out of time and weren’t able to discuss other aspects of the paper.

 

All in all, I was reminded that even after many years of teaching, I overestimate how much students can do, even in the Honors section, which includes many academically cream-of-the-crop students. (The performance on the first exam over a week ago was strong.) I did tell the students that they were helping me refine a new class activity I had designed, and I’m sure they were relieved to know that none of this eutectic stuff will be on class exams. Clearly, I have more work to do, and it put a damper on my desire to completely overhaul my G-Chem class next year with these sorts of activities. I need to remember: baby steps!