Thursday, March 29, 2018

The Logic of Failure


Books I wish I had read earlier in life: The Logic of Failure by Dietrich Dörner. Originally published in German in 1989 and translated into English in 1996, the book carries the catchy subtitle “Recognizing and Avoiding Error in Complex Situations”. It has an excellent cover to match, featuring a large bright red F.



Would it have helped me plan and think through complex issues before I became an administrator? Possibly, but I’m not sure. That’s a good thing, because Dörner doesn’t try to sell the reader a “new” method that grooms leaders into strategic and creative thinkers. He proposes a methodology, but then adds plenty of buts and caveats. The devil is truly in the details because every complex situation is different. We can’t always see when, where, and why things will go wrong. There is no substitute for experience, but even experience can cause its own compounding problems. Depressed yet?

How does Dörner study the problem? By putting people in computer simulations designed to model complex situations. Mind you, this is back in the ‘80s. Today’s simulations would likely be exponentially more sophisticated, although I suspect the behavioral results of the participants would be similar today. The setup? There are two. Managing a small fictitious town in an isolated hilly region in Europe. Or managing an African region with different tribes subsisting on farming or herding. Both situations feature complex and inter-related variables, and the point is not just to see who succeeds or fails, but why. As you’d expect, a few do well, many fare poorly, some learn from their mistakes, and others don’t. There are common threads among the successful. The reasons for failure, however, are myriad. But there is a logic to them. A self-help guru would package the common threads into a five-steps-to-success program, but Dörner is much more circumspect. His book closes with a cautionary tale I will discuss at the end.

The introductory chapter includes an analysis of the 1986 Chernobyl “disaster” (although Richard Muller would argue it was less disastrous than commonly thought). From his analysis and from observing his simulation participants, Dörner describes four features of complex situations. First, they are complex. His definition: “Complexity is the label we will give to the existence of many interdependent variables in a given system. The more variables and the greater their interdependence, the greater that system’s complexity. Great complexity places high demands on a planner’s capacities to gather information, integrate findings, and design effective actions. The links between the variables oblige us to attend to a great many features simultaneously, and that, concomitantly, makes it impossible for us to undertake only one action in a complex system.”

This situation is the bane of scientists designing an experiment to answer a question. We try to keep everything constant except for one variable, so we can isolate its causes and effects. And when you’re in a complex situation, without the luxury of time to gather all the information you think you need… well, I don’t envy policy makers and administrators (having had some experience in such situations, now that I’m no longer one). It gets worse. “Complexity is not an objective factor but a subjective one,” Dörner writes. “We might even think we could assign a numerical value to it…” Having immersed myself this semester in learning about measuring complexity in chemical systems, I’m very much inclined to agree. There are several approaches to measuring complexity in molecules and molecular systems, but all their devisers would agree that they are pegged to subjective reference states.

The other three features of complex situations are dynamics, intransparence, and ignorance/mistaken hypotheses. There are many variables to which we have no direct access, hence the “frosted glass” of intransparence. This contributes to our ignorance, and we make assumptions (both implicit and explicit) that might simply be wrong, only to realize later that our actions have made things worse. Dynamics is one feature that is particularly difficult for us to grasp – how things change over time. Possibly because of a combination of brain evolution and our education system, we extrapolate linearly when many situations (even simpler ones) might suggest power-law or exponential behavior. I’m certainly guilty of that as a teacher. Just this month I have discussed generating simpler linear models (a common feature in science) multiple times in general chemistry: the Clausius-Clapeyron equation, first- and second-order rate laws, and the Arrhenius equation. We drew lots of graphs. Why? Graphs help us translate time into space – turning something dynamic that’s hard to grasp into something static and easier for our feeble minds to comprehend. And I’m pretty sure most of my students don’t grasp the log scale of a graph axis, even if they can build up the data and sketch the graph.
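
To make that “translating time into space” trick concrete, here’s a minimal sketch (with invented rate constants, not real data) of the Arrhenius linearization we do in class: plot ln k against 1/T, and the slope of the resulting straight line hands you the activation energy.

```python
import numpy as np

# Hypothetical rate constants k (1/s) at several temperatures T (K).
# These numbers are invented for illustration, not real data.
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
k = np.array([1.2e-4, 6.8e-4, 3.1e-3, 1.2e-2, 4.0e-2])

# Arrhenius: k = A * exp(-Ea / (R T))  =>  ln k = ln A - (Ea/R) * (1/T)
# so a plot of ln k vs 1/T is a straight line with slope -Ea/R.
R = 8.314  # J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)

Ea = -slope * R          # activation energy in J/mol
A = np.exp(intercept)    # pre-exponential factor in 1/s

print(f"Estimated Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e} 1/s")
```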

Dörner devotes a whole chapter to Time Sequences, and another to Information & Models. Other parts of his book discuss the important twin features of Goal Setting and Planning. There are many, many, many places to Fail. I recommend reading the eye-opening examples in his book. There is a logic to them, and I’m not sure this logic could have been extracted without running the simulations. If anything, I felt mildly justified by many youthful hours spent on long complicated games, my favorite being the ‘80s Avalon Hill boardgame Civilization. Its descendant, the ‘90s Sid Meier computer game of the same name (the first version) is probably the last computer game I’ve played. Spending most of my working day in front of a computer, I’m not interested in using it for leisure. I still have my old ‘simulation’ boardgames, but no longer have the time to play them. The shorter ones strip out some of the complexity, and are still fun, but do not teach the lessons of complexity. The variables are fewer. I’ve recently pulled out a few older Reiner Knizia favorites. His designs are exquisite and force tough choices even with a simple ruleset.

In the book’s final chapter, Dörner asks: “How can we teach people to deal effectively with uncertainty and complexity?” The problem: “There is probably no cut-and-dried method for teaching people how to manage complex, uncertain, and dynamic realities, because such realities, by their nature, do not present themselves in cut-and-dried form.” While, on average, experienced leader-managers performed better than students in these simulations, there was more than a fair share of significant failures among the experienced.

Dörner’s last example, however, is sobering. Before one of the simulations, participants were divided into three groups. Here are his descriptions. “The strategy and tactics groups received instruction in some fairly complicated procedures for dealing with complex systems. The strategy group was introduced to concepts like system, positive feedback, negative feedback, and critical variable, and to the benefits of formulating goals, determining and, if necessary, changing priorities and so forth. The tactics group was taught a particular procedure for decision making...”

First, the self-evaluation results of each group after the simulation (which was conducted over several weeks). “The members of the strategy and tactics groups all agreed that the training had been ‘moderately’ helpful to them. The members of the control group, who had received training in some nebulous, ill-defined ‘creative thinking’, felt that their training had been of very little use to them.” Just think of all those snake-oil salespeople selling the latest workshop in innovation or creativity for leadership. Heck, I even sold a group of sophomores on the idea of experimenting with combining creativity and chemistry. We’ll reflect on the results at semester’s end.

But how did the actual participants do in Dörner’s simulations? (In this case it was being mayor of the small fictitious town.) No difference in performance. You heard that right. There was no difference in actual performance! Despite the training. Yet participants receiving the training thought it helped them. Why though? Dörner’s answer gives me goosebumps.

“The training gave them what I would call ‘verbal intelligence’ in the field of solving complex problems. Equipped with lots of shiny new concepts, they were able to talk about their thinking, their actions, and the problems they were facing. This gain in eloquence left no mark at all on their performance, however. Other investigators report a similar gap between verbal intelligence and performance intelligence… The ability to talk about something does not necessarily reflect an ability to deal with it in reality.”

Does this bring back memories of listening to a well-spoken leader or administrator who turned out to be ineffective or incompetent, or worse? I’ve even done the speaking myself, just a few times, when I’ve had to be on a panel or say something in public. (I try to avoid these situations when I can.) I’ve also been sent to leadership training sessions. I think I learned something from them, but maybe it’s just a better vocabulary. I read voraciously to learn, but maybe I’m only acquiring better ways to sound knowledgeable. At present, I’m slowly working my way through Thinking in Complexity by another German author, Klaus Mainzer. It is subtitled “The Computational Dynamics of Matter, Mind and Mankind.” You can bet that I am able to discuss such concepts in Astrophysics, Biology, Consciousness, as smooth as ABC. I know all about non-linearity, attractors, neural nets, and feedback loops. And I am trying to solve the riddle of the origin of life, a complex problem about how complexity arises.

I’m likely not to succeed in solving the riddle. The problem is full of intransparence, with too many interdependent variables. But it’s a toy problem to work on, and it keeps my mind active. Real-world practical problems, however, are subject to time-sensitive decision making. Delaying a course of action is itself an action, for good or ill, whether one is collecting more data or concentrating on some other urgent priority. If anything, reading The Logic of Failure has made me more circumspect. I wish I had read it earlier. Perhaps it could have helped me make better decisions in the past, but perhaps not. There is a logic to failure, but the situations are so diverse that a lifetime of encounters might not be sufficient to learn how not to fail in the next complex situation.

Tuesday, March 27, 2018

Resizing Matter: Part 3


After considering two simpler situations, we are now ready to tackle a more difficult problem. How would the strength of covalent bonds change if we magically resized matter? We care about this because living creatures are predominantly made up of molecules containing covalent bonds. So if you were engorging a spider to scare Ron Weasley, or had obtained the technology of Ant-Man’s suit, you might care to know the side-effects – which may turn out to be not secondary but devastatingly problematic.


The challenge here is that the covalent bond is not as easy to describe, because it is inherently quantum mechanical. Back in Part 1, we used a simple model to describe ionic bonds relying on the classical Coulomb’s Law, which works satisfactorily for the most part. Then in Part 2, we discussed weak attractive inter-particle forces using an intuitive (also classical) polarizability model. There is no classical analog to describe covalent bonding, and the intuitive model found in many general chemistry textbooks is misleading. Here’s one from a textbook I’ve used in the past.



I show students this figure because it is familiar to them if they have taken chemistry in high school. The main idea: the shared pair of negatively charged electrons, conveniently located between the positively charged nuclei, attracts both atomic centers and thereby holds the molecule together. A wave of the hand then concludes that this attractive force outweighs the repulsive forces. The figure implies these repulsive forces are electrostatic, and while that might be true for H2, it’s certainly not true for anything larger once Pauli repulsion kicks in. I tell the students that the figure is partly true but partly misleading.

Why it is partly true: The potential energy term due to electrostatic attraction between the electrons and the nuclei does give rise to the attractive force in the covalent bond. And it is sufficiently strong to outweigh the repulsive forces at the appropriate equilibrium bond distance. Why it is misleading: A chemical bond is defined with reference to the infinitely separated atoms. As the atoms approach each other to form a bond, the energy-stabilizing contribution due to the potential energy is actually reduced. Egad! Shouldn’t that cause the bond not to form? It turns out that the repulsive contribution of the kinetic energy is reduced even more. Hence, there is a net stabilization of energy due to forming the covalent bond. (Quantum Mechanics folks: We’ve assumed that the covalent bond is represented by the Heitler-London wavefunction.)


All this is a bit too complicated for the general chemistry student, and so in recent years I’ve moved away from this approach of introducing the topic of chemical bonding. My most recent approach is to use the General Bond Energy Curve early on, and briefly discuss different attractive and repulsive contributions. Then we get into the details of ionic bonds, covalent bonds, and intermolecular forces. The equilibrium distances and equilibrium bond energies change but the overall curve still has a steep repulsive part on the left, an optimum well in the middle, and a shallower attractive part on the right that asymptotically approaches zero energy – the reference state for infinitely separated particles, be they atoms, ions or molecules.
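
If you’ve never seen the general curve, here’s a quick sketch using a Morse potential, with parameters that roughly mimic H2 (illustrative numbers, not the figure from my slides): steep repulsive wall at small separations, a well at the equilibrium distance, and an asymptotic approach to zero energy for separated atoms.

```python
import numpy as np

def morse(r, De=4.5, a=1.9, re=0.74):
    """Morse potential energy (eV) relative to separated atoms.
    Default parameters roughly mimic H2 (De ~ 4.5 eV, re ~ 0.74 angstrom);
    they are illustrative, not fitted values."""
    return De * (1.0 - np.exp(-a * (r - re)))**2 - De

r = np.linspace(0.3, 4.0, 10)
for ri, E in zip(r, morse(r)):
    print(f"r = {ri:4.2f} angstrom   E = {E:7.3f} eV")
# Repulsive (positive) at small r, a minimum of -De at r = re,
# and E -> 0 as r increases (the separated-atom reference state).
```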



Now back to resizing matter. How would the strength of the covalent bond change if atoms magically or technologically changed in size? Using the same assumptions laid out in Part 2, we would expect that the outermost electrons are held more weakly by the nucleus, and therefore the energy-stabilizing attractive contribution due to potential energy should be weakened. We see this if we compare the covalent bonds of the Cl2 and I2 molecules. The larger iodine atoms lead to a longer and weaker I–I bond compared to the smaller chlorine atoms, which form a shorter and stronger Cl–Cl bond. Br2 lies in between, as befits the intermediate atomic size of bromine. (F2 is anomalous because of the Pauli Principle!)
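
For the curious, here are the approximate textbook numbers behind that trend (quoted from memory, so treat them as ballpark values): bigger atoms, longer bond, weaker bond.

```python
# Approximate bond lengths (angstrom) and bond dissociation energies (kJ/mol)
# for the halogen diatomics; textbook ballpark values, quoted from memory.
halogens = {
    "F2":  (1.42, 159),   # anomalously weak for such a short bond
    "Cl2": (1.99, 243),
    "Br2": (2.28, 193),
    "I2":  (2.67, 151),
}

for molecule, (length, energy) in halogens.items():
    print(f"{molecule:4s} bond length = {length:.2f} A   bond energy = {energy} kJ/mol")
```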

How would the kinetic energy change? That’s a bit more difficult to picture. The simple notion of using ½mv² likely fails us here. That’s because to some extent the electrons are “localized” in the mid-point region, but in a different sense the kinetic energy drops because they have more “space” to move around in, thanks to the overlap of the two orbitals that contribute to the bond. Technically, the kinetic energy is proportional to the integral of the squared gradient of the wavefunction, so the decrease can be estimated by comparing the molecule’s wavefunction with those of the separated atoms. Does the kinetic energy decrease in proportion to the decrease in potential energy?
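
For the record, here is the standard quantum mechanical expression I’m alluding to (nothing specific to the resizing scenario): the kinetic energy of an electron with normalized wavefunction ψ is

```latex
\langle T \rangle = \frac{\hbar^{2}}{2m_{e}} \int \left| \nabla \psi \right|^{2} \, d\tau
```

Comparing this integral for the molecular wavefunction with the corresponding integrals for the separated atoms gives the change in kinetic energy upon bonding.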

Honestly, I’m not sure what the answer should be. (Sorry to disappoint you!) If there is a relationship between the two, it’s likely not linear. To solve this problem, what I would actually need to do is set up a quantum mechanical calculation involving two “mutant” atoms with the approximate properties I would expect for a magically resized atom. The answers from this calculation would still be approximate, but I would have a better idea of how both the equilibrium bond length and the bond energy change as the atoms are resized. I’m too lazy to do the calculation at the moment, so I will make a guess. (I imagine this is how my students feel sometimes when they encounter my exam questions.) I will guess that bigger atoms lead to longer bonds with weaker attractive potential energy, and although the repulsive kinetic energy term is also lower in magnitude, the fact that a chemical bond exists suggests that the potential energy term is overall larger in magnitude than the kinetic energy term, and therefore more impacted as the equilibrium bond distances change.

While I’m dissatisfied with my answer, and assuming I’m in the Advanced Arithmancy course, I submit it to the Hogwarts professor grading my exam. Hopefully it will be graded soon and an answer key provided. That way I don’t have to actually do the hard work, as long as I’m willing to accept a not-so-good grade. I suppose sometimes school is about trade-offs, and I’m reminded in a tiny way of what it’s like to be in the shoes of my students.

Friday, March 23, 2018

Resizing Matter: Part 2


In Part 1, we considered how resizing matter impacts the relative strength of ionic bonds in simple salts. This was an easy case because we assumed that (1) charge magnitudes remain unchanged, (2) Coulomb’s Law holds, (3) the distance between ion centers is the only variable parameter that affects the bond, and (4) masses of atoms scale proportionally but don’t impact the ionic bonds.

Resizing a living creature (e.g., Engorgio-ed spiders or Ant-Man) is much more complicated. Living systems are made up of a plethora of organic molecules containing covalent bonds. Furthermore, these molecules interact with each other through intermolecular forces.

Let’s get fundamental. What would happen if you magically or technologically resized an atom?

My General Chemistry students would argue that the size of an atom is related to the effective nuclear charge – how strongly the nucleus of an atom pulls on the outermost (valence) electrons. In this view, electrostatics (via Coulomb’s Law) explains why the electron is attracted to the nucleus – we call this the potential energy contribution to the overall nucleus-electron system. But this doesn’t explain why the electron doesn’t collapse into the nucleus. Since electrons are governed by quantum mechanics, the kinetic energy of the electron must also be taken into account. The mass and speed of electrons could affect the system if the atom is resized.


If the average distance of an electron from the nucleus doubled, the magnitude of the (electrostatic) potential energy would be halved (while the attractive force would drop four-fold, per Coulomb’s Law). The increase in mass of the nucleus should not impact the system because the electrons move much faster than the nucleus. To electrons, the sluggish nuclei resemble stationary objects (P-Chem students should recognize the Born-Oppenheimer approximation). If the mass of an electron doubles, but its average speed does not change, then (as a simple approximation) the kinetic energy also doubles. Thus, the potential and kinetic energies balance each other out.

If you double the radius of a sphere, its volume increases eightfold. Assuming a spherical atom, if you wanted to keep the density of the atom the same, you would need the particle masses to increase eightfold, not merely double. But maybe it’s okay for the density of the atom to decrease as the size increases. After all, atoms don’t merge into each other (they do squish into each other a bit, but not a whole lot) because of the Pauli Exclusion Principle. (P-Chem does come in useful!) The Pauli principle isn’t affected by changes in the masses of the particles.
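
Here’s a quick back-of-the-envelope check of the scaling factors from the last two paragraphs (just the doubling scenario sketched above, nothing fancier):

```python
# Scaling factors for the "double the atom" thought experiment described above.
# The scenario itself (doubled electron-nucleus distance, doubled particle
# masses) is hypothetical; the proportionalities follow from the formulas
# already quoted.
distance_factor = 2.0   # average electron-nucleus distance doubles
mass_factor = 2.0       # particle masses double, speeds assumed unchanged

potential_energy_magnitude = 1.0 / distance_factor    # |V| ~ 1/r    -> 0.5x
coulomb_force = 1.0 / distance_factor**2               # F ~ 1/r^2    -> 0.25x
kinetic_energy = mass_factor                            # KE = m v^2/2 -> 2x
atomic_volume = distance_factor**3                      # V ~ r^3      -> 8x

print(f"|potential energy| scales by {potential_energy_magnitude}")
print(f"Coulomb force scales by      {coulomb_force}")
print(f"kinetic energy scales by     {kinetic_energy}")
print(f"atomic volume scales by      {atomic_volume}")
```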

Before we get to covalent bonds (likely Part 3), let’s consider interatomic interactions in an atomic gas. The noble gases are a good example here. Imagine a sample of helium where all atoms doubled in size and mass as described above.

For the most part, helium atoms bounce off each other when they collide. However, there is a very slight attraction between atoms due to the polarizability of each atom. The much larger xenon atom, for example, has a higher polarizability, and so xenon-xenon attractions are stronger than helium-helium attractions. In xenon, although the nucleus is much more positive, the outermost electrons are very effectively shielded by shells of inner electrons. Overall, xenon’s effective nuclear charge is smaller, and hence the atoms are larger.

But would polarizability change if we simply resized helium? Okay, at this point I cracked open a few of my P-Chem books and started reading. Instead of inundating you with equations, let’s simplify it to the following: polarizability is a measure of how much an atom responds to an applied electric field, and it is roughly proportional to the volume of the atom. There are a bunch of niggling details, but let’s keep it simple for now. Since volume increases eight-fold with a doubling of atomic radius, polarizability is expected to increase even if the mass of the electron doubled. This means that the interatomic attractions increase with an increase in size. This will be important when we consider a more complex system consisting of a collection of molecules interacting with each other in, say, a biological cell. The intermolecular forces should increase with an increase in size.
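
And here’s the polarizability piece as numbers – a crude sketch that leans only on the rough “polarizability scales with volume” rule of thumb, ignoring all the niggling details:

```python
# Crude estimate: polarizability (as a "polarizability volume") scales
# roughly with atomic volume, i.e. with radius cubed. The resizing scenario
# is hypothetical; the proportionality is a rough rule of thumb, not a law.
def polarizability_scale(radius_factor):
    return radius_factor**3

for factor in (1.0, 1.5, 2.0):
    print(f"radius x{factor:>3}:  polarizability scales by ~{polarizability_scale(factor):.1f}")
```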

Given the density (pun intended) of these ideas, it seems prudent to stop for now. We’ll continue this analysis in Part 3.

Saturday, March 17, 2018

Gulp and Belch, Origins of Dragonflame




Here’s today’s summary pic* (cobbled from Google Image searches). We’ll get to that a little later, but first...

Did you know there are two types of saliva? Stimulated saliva comes from the parotid glands and flows when you chew. It doesn’t matter what you chew. It could be bone or a cotton wad. Stimulated saliva also helps to dilute acid in foods. This is a good thing because acidic drinks (coffee and cola) start to dissolve the enamel on your teeth. Unstimulated saliva, the viscous type (thanks to mucins), helps to rebuild your enamel. It also traps bacteria, thereby removing them when you swallow or gulp them down. These facts, and other interesting stories, can be found in Mary Roach’s Gulp, subtitled Adventures on the Alimentary Canal.


Roach explores the entire stretch of the canal, starting with your mouth and steadily moving to the other end, with everything else in between. Her humorous footnotes provide interesting analogies and factoids. For example: “The human digestive tract is like the Amtrak line from Seattle to Los Angeles: transit time is about thirty hours, and the scenery on the last leg is pretty monotonous.” Discussing the infamous ear-biting incident of Mike Tyson, she writes: “Fear the fight bite: it can cause septic arthritis. In one study, 18 of 100 cases ended in amputation of a finger. Hopefully the middle one. In the aggressive patient, a missing middle finger may be good preventive medicine.”

In Roach’s hands, or pen, or imagination put to paper, saliva becomes fascinating. She muses about why sores in her mouth don’t get infected, and then goes on to explore how saliva is both “bacterial cesspool” and “antimicrobial miracle” – the former necessitating the latter. She delves into the details of mouthwash, old remedies advocating the use of saliva for countless ailments (some more gross than others), why dogs lick their wounds, spitting for luck and blessing, and enzymes in detergents. And all that’s packed into just one chapter, with breezy, engaging prose that is as informative as it is funny! (For those who prefer to zoom in, it is Chapter 6: “Spit Gets a Polish: someone ought to bottle the stuff.”)

The chemistry and magic enthusiast in me greatly enjoyed Chapter 12: “Inflammable You: fun with hydrogen and methane.” No, we don’t have a sufficiently verified case of spontaneous human combustion yet, in case you were wondering. But there are potential cases of “inflammable eructation” – belching that can, supposedly, catch fire. For most people, a belch does not contain flammable gases. (Yes, inflammable and flammable mean the same thing and are NOT opposites of each other!) However, in rare cases, there might be some evidence. You’ll have to read it for yourself to find out! That’s because I want to get to cows, snakes and mice.


When I discuss combustion reactions and thermodynamic spontaneity in class, I have a go-to illustration involving the dangers of lighting up a cigarette in a cow-field if you’re facing a bunch of cow butts. Since my chemistry classroom has a natural gas line, we could create an environment containing methane and oxygen that stays non-explosive as long as no one supplies a spark to trigger combustion. I put my hand on the tap to the gas line as I describe my fanciful idea as a simulation. (I don’t actually open the valve, but all the students are watching very closely, ready to run.) We talk about what spontaneity means thermodynamically as we ponder cows grazing in a field. (No, I have not looked up reports of spontaneous bovine combustion.) My story even has a tie-in to biosignatures and the search for extraterrestrial life. If a planet were found with an appropriately proportioned mixture of methane and oxygen in its atmosphere, it might indicate that living organisms were present and doing their best to stay away from equilibrium!
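
For readers who want the thermodynamics spelled out, here’s a minimal sketch of the spontaneity calculation for methane combustion, using approximate standard-state values quoted from memory (check them against your own textbook’s appendix before relying on them):

```python
# Approximate standard thermodynamic data at 298 K (values from memory;
# verify against a textbook appendix before using for anything serious).
# Reaction: CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O": -285.8}   # kJ/mol
S   = {"CH4": 186.3, "O2": 205.2, "CO2": 213.8, "H2O": 70.0}    # J/(mol K)

dH = (dHf["CO2"] + 2 * dHf["H2O"]) - (dHf["CH4"] + 2 * dHf["O2"])   # kJ
dS = (S["CO2"] + 2 * S["H2O"]) - (S["CH4"] + 2 * S["O2"])           # J/K
T = 298.15
dG = dH - T * dS / 1000.0                                           # kJ

print(f"dH = {dH:.1f} kJ, dS = {dS:.1f} J/K, dG = {dG:.1f} kJ  (negative => spontaneous)")
```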

Roach was also curious about cows. Given that large quantities of methane are produced by grazing cows, and that this methane would likely be “vented, as stomach gases typically are, through the mouth. You would think that cow-belch-lighting would rival cow-tipping as a late-night diversion for bored rural youth. How is it that growing up in New Hampshire I never heard a cow belch?” Roach provides an answer (you’ll have to read her book) as to why cows don’t belch, but quickly moves on to a more interesting story about snakes. They typically don’t belch either, but a python that has swallowed a cow or some other ruminant herbivore can produce an inflammable eructation, or a fiery belch.

How might this work? Stephen Secor at the University of Alabama has a theory and supporting experiments. Apparently, he fed rats to pythons in his lab and then measured the amount of hydrogen gas exhaled as the rats were digested. There was a hydrogen spike, but it came earlier than expected. (For my chemistry student readers, picture hooking pythons to a gas chromatograph in lab!) I will now quote Roach for parts of three paragraphs because her prose is simply unbeatable in this story of her interview with Secor.

… Secor suspected [that] the hydrogen spikes were the result of the decomposing, gas-bloated rat bursting inside the python. “One thing led to another.” (Secor’s way of saying he popped a bloated rat corpse and measured the hydrogen that came off it.) Suspicion confirmed. The hydrogen level was “through the roof”. Secor had stumbled onto a biological explanation for the myth of the fire-breathing dragon. Stay with me. This is very cool.

Roll the calendar back a few millennia and picture yourself in a hairy outfit, dragging home a python you have hunted. Hunted is maybe the wrong word. The python was digesting a whole gazelle and was in no condition to fight or flee. You rounded a bend and there it was, Neanderthal turducken. Gazython. The fact that the gazelle is partially decomposed does not bother you. Early man was a scavenger as well as a hunter. He was used to stinking meat. And those decomp gases are key to our story. Which I now turn over to Secor.

“So this python is full of gas. You set it down by the campfire because you’re going to eat it. Somebody kicks it or steps on it, and all this hydrogen shoots out of its mouth.” Hydrogen, as the you and I of today know but the you and I of the Pleistocene did not know, starts to be flammable at a concentration of 4 percent. And hydrogen, as Stephen Secor showed, comes out of a decomposing animal at a concentration of 10 percent. Secor made a flamethrowery whoosh sound…

With her slaying of the dragon of stilted, boring, academic prose, Roach has become my new hero(ine), shining armor notwithstanding. She conveys knowledge with wit, and turns arcana into engaging topics with just the right balance of humor.** I merely give you a flavor; a tiny whiff at best. If the tip of your tongue has been tickled, I recommend reading Gulp in its entirety. A word of caution: it is not for the faint of heart, or more accurately, the weak of stomach. Roach does not skimp on the details of various bodily parts and functions in the journey from one end of the tube to the other. Topics you might think about in private but not discuss in public are openly dissected by Roach.

While the book is entertaining, it is also very informative, and I have learned plenty, especially from Chapter 14, which covers all things flatus. Three of the key contributors to the stink are sulfur-containing gases: H2S, CH3SH and (CH3)2S. (Any wonder that hell is described with brimstone?) But there are numerous other molecules that subtly contribute to the smell. It turns out that one’s fart is akin to one’s fingerprint in its uniqueness. So how do scientists go about testing and evaluating odor-eliminating products? Roach poses an excellent question and then tells you what Michael Levitt of the Minneapolis VA Medical Center did in his lab.

Which – whose wind – should represent the average American’s? No one’s, as it turned out. Using mean amounts from chromatograph readouts as his recipe and commercially synthesized gases as the raw ingredients, Levitt concocted a lab mixture deemed by the judges “to have a distinctly objectionable odor resembling that of flatus.” He reverse-engineered a fart. This “artificial flatus” was put to work testing a variety of activated-charcoal products…

Reverse engineering a fart. Just picture it.

Roach has risen to the top of my list of writing excellence I aspire to emulate. I won’t write in the same style, but she has inspired me to think more carefully about the craft of communicating complicated topics (such as chemistry!) in a way that is educationally sticky, while fostering further interest. Reading about the possible origins of dragonflame makes me consider widening my gaze in writing a book exploring the interface between magic and chemistry, one that is hopefully both educational and engaging. (Here’s a potential Prologue to my potential Potions book.)

Having read Gulp and discovered Roach’s writing style, several of her other books are now on my must-read list, namely Stiff, Spook and Packing for Mars. (Those are three different books.) I’m also considering using her “Inflammable You” chapter as supplementary reading in my General Chemistry class next semester. I’d been thinking about a new angle on my “Hiding in Plain Sight” theme: Gases. I’ve picked “Into the Blue” (from Sam Kean’s Caesar’s Last Breath) as a reading to supplement the standard (potentially boring) textbook. Roach’s chapter nicely adds to the Gases theme. Gosh, just thinking about it makes me all excited about teaching! And on a weekend too.

So if you’d like to be inspired, horrified, amused, and learn a bunch of cool and hot things about, um, your alimentary canal, I highly, highly recommend Gulp.

*I put up a summary slide before each class. Here are two examples.

**Other good science-y books with doses of humor don’t quite have the exquisite balance and panache Roach demonstrates. Examples I’ve read recently are in my blog posts on We Have No Idea, Spineless, and Soonish. (Those are three different books.)

Wednesday, March 14, 2018

Resizing Matter: Part 1


Happy Pi Day!

In my most recent blog post, I started to consider how chemistry might change if an object was resized magically (Engorgio and Reducio spells in Harry Potter) or technologically (Marvel’s Ant-Man). My plan is to explore this issue in more detail in a multi-part series.


Let’s start with a simple case: a cube of table salt (sodium chloride). The chemical formula of table salt is NaCl, and the solid forms cubic crystals. The figures above and below are from the Atoms In Motion website and give you a sense of the length scale.


If I (the wizard Hufflepuff Hippo) cast Engorgio on a cube of table salt, I could make it magically grow in size. The question is HOW the cube grows in size. First let’s take an atom-level view. NaCl is an example of an ionic solid, i.e., positive ions (cations) and negative ions (anions) are arranged in a 3-dimensional lattice (see above). Note that the Na(+) ions are smaller in size than the Cl(–) ions.

Possibility #1. The amount of matter increases to match the increase in size. The sizes of the ions themselves do not change.


The advantage of this approach is that there is no change to the chemical behavior of table salt. The strength of the ionic bonds due to the attraction between the cations and anions does not change. If I doubled the number of ions, I would double the cube’s mass – but other than being more massive, it would still behave the way I would expect table salt to behave. The large cube does not behave differently from the small cube. How might this spell work? If you’re near the sea, Na(+) and Cl(–) ions in the salt water can be summoned and added to the crystal lattice of the cube. This is likely easier than creating new matter from scratch – the problem of Gamp’s Law.

In this scenario, the object changes in inertia, but otherwise functions in exactly the same way. This is typically how we imagine the effect of an Engorgio spell. When a spider increases in size, to Ron Weasley’s horror, it still functions the same way a spider would. We see similar mechanics in Ant-Man when he increases or decreases in size. Any behavioral difference is due to the difference in inertial mass. But in the case of a spider or Ant-Man, it’s much harder to picture how you might add existing matter or create new matter to match the features of the spider or Ant-Man. It was much easier to imagine this for the structurally simpler NaCl cube with its ordered arrangement of ions in a lattice.

Possibility #2. The size of matter increases locally to the object being resized. In NaCl, this means that the size of the individual ions increases with Engorgio.


This is what we typically have in mind when we imagine a spider or Ant-Man increasing in size. But now there’s a problem. If atom or ion sizes increase, this affects the strength of chemical bonds.


Glossing over the details of how an atom would increase in size, let’s make the following initial assumptions. The mass increases proportionally as the sizes of all the particles that make up the atom (protons, neutrons, electrons) increase. The charges stay the same since, as far as we know, charge is a label unrelated to size. Thus, the relative charges of protons and electrons remain +1 and –1. Consequently, an enlarged Na(+) and Cl(–) will have the same charges, but the ionic bonds would lengthen as the sizes increase. According to Coulomb’s Law, this would weaken the ionic bonds, since the bond strength is inversely proportional to the bond length. (Technically, the force between a pair of ions is proportional to 1/r² while the potential energy is proportional to 1/r. The Madelung constant of the lattice will also be a contributing factor.)
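
To put rough numbers on the electrostatic piece, here’s a sketch treating a Na(+)/Cl(–) pair as point charges roughly 2.8 Å apart (an approximate nearest-neighbor distance in the lattice), with the rock-salt Madelung constant (about 1.75) thrown in as a multiplier. The point is simply that doubling the distance halves the attraction; this ignores the short-range repulsion that also contributes to the real lattice energy.

```python
# Coulomb energy between a Na+ / Cl- pair treated as point charges,
# before and after a hypothetical doubling of the ion separation.
# Only the attractive electrostatic piece is included here.
k_e = 8.988e9          # N m^2 / C^2
e = 1.602e-19          # C
N_A = 6.022e23         # 1/mol
madelung_nacl = 1.748  # rock-salt lattice

def pair_energy_kJ_per_mol(r_m):
    return -k_e * e**2 / r_m * N_A / 1000.0

r0 = 2.82e-10  # approximate Na-Cl nearest-neighbor distance in the lattice (m)
for label, r in (("normal", r0), ("doubled", 2 * r0)):
    E_pair = pair_energy_kJ_per_mol(r)
    print(f"{label:8s} r = {r*1e10:.2f} A   pair energy = {E_pair:7.1f} kJ/mol   "
          f"with Madelung factor = {madelung_nacl * E_pair:7.1f} kJ/mol")
```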

While there are many ions in our body, there aren’t many ionic compounds with a lattice structure. Hydroxyapatite in bone is the closest thing I can think of. So if you engorgio-ed yourself into a giant, you would have weaker bones relative to your increased inertial mass. Something to think about in case you thought it would make you stronger. You might cripple yourself very quickly instead.

It’s less clear how other interactions involving ions would play out. The solubility and transport of ions also depend on the attractive strength between an ion and (polar) water molecules, and on how the solvation shell orders itself around the ion. Other examples include metal ions that play a vital role in enzymes or porphyrins. Mg(2+) and Ca(2+) ions have numerous interactions with polar molecules and macromolecules, including the backbone of your DNA. I’ve also restricted the above discussion to a simple classical view of ionic bonds. More importantly, how would resizing impact the covalent bonds and intermolecular interactions that govern the chemistry of life?

Resizing is a complicated matter (pun intended!) but we will explore several possibilities in subsequent blog posts. Stay tuned!

P.S. Pi features in Coulomb’s Law!

Saturday, March 10, 2018

Molecular Me


Imagine you are the size of a molecule. Surrounded by many other molecules. What is happening to you and around you?

Before each class, I put up an ‘opening slide’. Students arriving early can ponder something amusing or interesting related to the topic of the day. Yesterday we started Kinetics in my General Chemistry II course. Here’s my opening slide.


My flash of inspiration came 30 minutes before class when I decided to add a small headshot of my younger self to a molecule and label it Molecular-Me! Students who came to class early thought this was funny and we had some good discussion about teaching and learning chemistry for a few minutes before class officially started. I’ve continued to take advantage of my Fifteen Minutes Before Class, at least for the 8am class when there isn’t a class occupying the room beforehand.

I always try to encourage my students to think at the atomistic and molecular level in our class discussions and in my office. That’s the length-scale where chemistry takes place – breaking and making chemical bonds! However, I had not taken the further step of asking students to imagine they were molecules. Two weeks ago, a concrete idea was floated by one of my research students in my creative cluster working on designing creative assignments in General Chemistry. She chose to work on the Photoelectric Effect and mocked up an assignment where students imagined they were different atomistic components in the experiment: an electron, a photon, or a metal atom. What would they experience? What would their neighbors be doing? She “did” her own assignment and came up with an imaginative story involving running from the rain. We might turn it into a video, so stay tuned.

Another inspiration came from watching a preview of Ant-Man and the Wasp when I was at the cinema to watch Black Panther. Just think: If we had the technology to miniaturize ourselves to the size of molecules, we might visualize chemical interactions in a new way! Another of my students has been working on connecting topics in General Chemistry to what astronaut observers might see as they traverse the universe and different planets. When I suggested a miniaturization approach, the students started telling me about watching The Magic School Bus as kids, and we got excited about what chemistry we might see along the way if we had the equivalent magic.

A Shrinking Spell, perhaps? Reducio! Actually, that should be a very difficult spell to cast, as would its counterpart Engorgio! Resizing could wreak havoc on interactions between particles unless ‘universal’ constants also changed proportionally and locally to the resized object. If the sizes of two objects got smaller, would their masses decrease proportionally? (I expect their distances to decrease proportionally.) If so, would the density change? In Coulomb’s Law, would charges change in magnitude? (Charge seems to be a label that isn’t related to size.) Would the permittivity of free space change? Would electrostatic interactions strengthen or weaken? Hmm… that’s something else I should ponder further. Magic can wreak such havoc on the physical world if not used carefully. That’s why we have Hogwarts – to train young wizards and witches to use magic responsibly and be aware of the dangers of spells with unintended consequences. As the characters in Once Upon A Time learn, “magic comes with a price”.

Analogies have their limits. While asking students to imagine themselves as molecules may enlighten them as they traverse conceptually challenging topics, it may also lead to false notions. Part of my time as an instructor is spent teaching things that are new. Sometimes I reinforce something the student already knows. However, a chunk of the time is spent attempting to overturn misconceptions. Some of these are deeply lodged, and the student has to struggle through the process. So while I’m excited about exciting my students’ imaginations in chemistry, I should be equally cautious in using these so-called ‘creative’ exercises. They need to be well-tailored to provide room for novel interpretative thinking, without themselves becoming a road to misinterpretation.

Speaking of misinterpretations, here’s my opening slide for Monday’s class on the Integrated Rate Law in Kinetics. I’m looking forward to class!

Tuesday, March 6, 2018

Vibranium... Soonish


When will the chemists successfully synthesize a material with similar properties to vibranium? Soonish.


How long is Soonish? Just over 350 pages including the index. I’m referring to the book Soonish by Kelly and Zach Weinersmith, a scientist-and-cartoonist husband-and-wife team. The subtitle of the book tells it all: Ten Emerging Technologies That’ll Improve and/or Ruin Everything. What are those ten technologies?

·      Cheap access to [outer] space
·      Asteroid mining
·      Fusion power
·      Programmable matter
·      Robotic construction
·      Augmented reality
·      Synthetic biology
·      Precision medicine
·      Bioprinting
·      Brain-computer interfaces

Okay, how long is soonish? The emphasis is on the –ish, according to the authors. It may be 20, 30, 50 years, or more. Or never. There is progress being made in all those areas, except perhaps asteroid mining – unless we get cheaper access to outer space. The authors estimate the present cost at $10,000 per pound sent into space, but that cost is dropping. Each chapter imagines a scenario where sci-fi has become reality, what it would take to get there, and where we are along that journey. But the best part of the book is that this is done with jokes every two or three sentences and funny cartoons to keep the reader entertained. The jokes-per-minute rate far exceeds We Have No Idea by Cham & Whiteson.

Having just watched the latest Marvel offering Black Panther, reading about Programmable Matter in Soonish, and reading about shape-shifting materials in last week’s Chemical & Engineering News, I’ve been thinking about new materials that have the whiff of a Transfiguration spell. The magic is in designing a material that, when provided the appropriate stimulus, transforms itself or another substance. A year ago, I had my General Chemistry students invent a new element and discuss how and why it would be useful, as their final project.

I don’t know much about vibranium. From the movie, its primary ability seems to be the absorption and transformation of energy, which then allows it to transform other materials. The vast technological apparatus of Wakanda is built (somehow) by using vibranium to transform pretty much anything else. Presumably, there is much scientific and engineering know-how required to utilize the vibranium. The Black Panther suit absorbs energy and re-releases it in battle. Vibranium weapons can transform cars into metallic smithereens with a single shot.

In Harry Potter’s world, you have magical Transfiguration instead of vibranium. But if casting a spell is simply the utilization of electromagnetic (energy) waves, perhaps it is not so different from a substance that stores and transforms energy through vibrations. Waves are oscillations, and they vibrate at characteristic frequencies that tell us how much energy they carry. Vibranium is like magic. Any sufficiently advanced technology looks like magic, although Transfiguration has its limits.

So how far are we from making vibranium? Well, apparently you can make cool self-folding materials that respond to light (electromagnetic radiation) of different wavelengths. Here’s a cool video from the North Carolina State University research group working on this. The materials are ‘shape-shifting’ polymers. We’ve had shape-memory materials for a while now. Aerogels are another interesting class of substances that allow for changes in size and shape. For a super-strong material that can absorb energy, we have Kevlar – a polymer that could be a stepping stone to vibranium if that energy could be stored in some way and then released.

In their chapter on Programmable Matter, the authors of Soonish discuss several more exotic things people are working on. Origami robots could be useful, particularly if you needed to send one through the entire length of your digestive tract. What about reconfigurable houses or workspaces? A table that can transform into a chair with an arm that turns into a reading lamp, perhaps? Or a swarm of tiny robots with directional magnets that can join in different configurations to form different tools. They could work together like nanobots in a sci-fi movie, controlled by wifi or some other remote method. Of course, these could be hacked for nefarious purposes.

Each chapter has a section on Concerns, should the technology become a reality, and a section on How It Would Change the World. It’s an amusing read, both thoughtful and entertaining. But in terms of learning something useful, I think Cham & Whiteson do a better job. Amidst the ridiculous amount of wise-cracking, the running joke through the book is a Terminator-esque impending Robot Uprising in 2027. That sounds soonish to me.

Saturday, March 3, 2018

Mayonnaise and the Origin of Life


What would the blog of an eminent scientist and thinker look like thirty years ago, before the word blog existed?

Welcome to Mayonnaise and the Origin of Life by theoretical biologist Harold Morowitz. Published in 1985, it’s a collection of musings – short essays of roughly a thousand words – on a variety of topics that relate to his broad interests in science, philosophy and society. I chanced on the book at my university library while looking for something else; ah, the serendipity of browsing! It’s a bit dusty and there’s a slight residue on the covers. It probably has not been checked out in a long time.


I first encountered articles by Morowitz when I became interested in origin-of-life chemistry. His writing is clear and lucid, be it a review article or one that dives into the details and weeds. I’ve learned a lot from him, and a number of his ideas have influenced the niche of the problem that I’m targeting. But that’s another story.

Mayonnaise takes its name from one of the fifty short essays. Morowitz starts with the basic recipe for mayonnaise: vegetable oil, egg yolk, and vinegar; but his main focus is on amphiphiles, dual-nature molecules that have a polar head (attracted to water) and a non-polar tail (attracted to oils). Incidentally, I covered this very topic this morning in my General Chemistry class, on the thermodynamics of solution miscibility. The cells in your body (and, for that matter, of all living things) are compartments because of the behavior of amphiphiles. These molecules are fundamental to the origin of life – and life as we know it is cell-ular. (The picture below is from an article in the journal Life showing a potential prebiotic amphiphile and how a collection of amphiphiles can form micelles and vesicles.)


In the 1980s, the “RNA world” was all the rage in origin-of-life research because of the discovery of ribozymes, RNA molecules that also behave as catalysts. It was a possible solution to the chicken-and-egg problem of which came first: DNA or protein. DNA is excellent as genetic material because of its stability and its information storage and copying structure, the double helix with A–T and G–C base-pairs. Proteins are crucial as the molecular machines in our body that catalyse chemical reactions. But proteins can’t copy themselves informationally; they are encoded by and built from DNA. On the other hand, DNA has no catalytic ability – and to make matters worse, you need catalysts to help unzip the DNA and copy it.

While there was earlier work looking into the role of amphiphiles in the origin of life, it wasn’t until the 1990s that this area of research started to take off, thanks to Pier Luigi Luisi and others. Morowitz tried to highlight their importance back in the mid-1980s. He predicted this revolution at the end of his essay: “They are of vital importance now and appear to have been equally important in the salad days of our planet. We are clearly dealing with most significant pieces of biological apparatus. If you have not been informed of them before, ignore that journalistic omission; you will hear much of amphiphiles in years to come.”

I’m halfway through Mayonnaise and two of my favourite essays so far are “Do Bacteria Think?” and “ESP and dQ over T”. Reacting to a Supreme Court ruling that allows genetically engineered bacterial strains to be patented, Morowitz muses that this “seemed to imply that the tiny organisms were not fully alive in the same sense that higher organisms are... What the Court did not realize was that this materialistic view runs counter to recent developments in microbiology… The new studies raise a profound question about the evolution of mind: If the simplest forms of life are capable of purposive activity, can they be said to engage in a form of thinking?” Is there a continuum of forms of thinking? Are there phase transitions that lead to a discontinuity? Are all forms of thinking just a reaction to physical and chemical stimuli? Since bacteria can sense time through variable chemical reaction rates, do they have memory?

Morowitz divides extra-sensory perception (ESP) into two categories. Information is transmitted by (1) “physical signals we have not discovered”, or (2) “methods… totally outside the range of measurement of physical devices and not energy dependent in the thermodynamic sense”. He argues that the first isn’t true ESP. We simply haven’t figured out where, how and what the sensory organs or devices are. The second, however, is much more interesting because it would violate the second law of thermodynamics. (Maybe I should bring this up in class. Ha!) Morowitz walks the reader through the measurement of entropy (dQ over T), discusses Maxwell’s Demon and Brillouin’s solution, and posits how we might then be able to “design a device to continuously convert heat into work. Among other benefits to mankind the energy crisis would immediately be solved.”
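
For the general chemistry readers: the “dQ over T” in the essay title is just the thermodynamic definition of an entropy change, and the second law is the thing any true ESP channel would have to sneak around.

```latex
dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad
\Delta S_{\mathrm{universe}} = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \ge 0
```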

I’m looking forward to the second half of Mayonnaise. I’m slowly reading a couple of entries per week. Each takes less than 5 minutes, but then I spend the next 10 minutes thinking about it before remembering that I should be doing something else! So if you find an old dusty copy in your library, I recommend it!