Wednesday, October 28, 2020

Found an Argument

“I have found you an argument.” That’s the title of a short, excellent article by George Bodner in the Journal of Chemical Education. I’d read it a long time back (the article is from 1991) but stumbled across it again as I’ve been reading and thinking about student misconceptions in chemistry. The article is based on a conceptual knowledge chemistry test given to entering first-year graduate students at Purdue – note that these are students who were likely academically very strong in their respective undergraduate programs.

 

Examples of the concept questions tested include: “What are the bubbles in boiling water?”, “How does salt melt ice?”, and “What happens to the weight of an iron bar when it rusts?” While many of the students provided something reasonably close to a decent answer, a significant number did not. Sometimes only a small fraction of the students could come up with a cogent argument.

 

The paper’s title comes from a quote by Samuel Johnson: “I have found you an argument; I am not obliged to find you an understanding.” This fits with the argument made by Mercier and Sperber that coming up with reasons or explanations is often lazy and post-hoc. It’s why I try to emphasize the explanatory parts of chemistry, often the hardest for students. I’d like to think that’s the value-added of having an expert interlocutor for students to interact with in real time.

 

An example that Bodner discusses is one that I have emphasized strongly to my first-semester G-Chem students this semester. The prompt is: “Everyone knows that sodium metal reacts with chlorine gas to form sodium chloride. Explain why. In other words, what is the driving force behind this reaction?” Bodner reports that at first students mostly said “because of a decrease in the Gibbs free energy of the system”. Further prompting with “why?” led to “a significant fraction not being able to answer the question”. Scarily, and perhaps not surprisingly: “By far, the most common explanation was based on the assumption that the octet rule drives chemical reactions.”

 

Every year I try to emphasize to my students that the octet rule is simply a rule of thumb for identifying good Lewis structures, and should not be invoked for its explanatory power. I even have a Happy Atoms Story to go along with this part of class. We explicitly go through the energetic calculations showing that the first ionization energy of sodium is not sufficiently compensated by the first electron affinity of chlorine. Then we invoke Coulomb’s Law and define lattice energy. Before all this is done, we’ve already discussed bond forming and breaking across a range of examples illustrating the general bond energy curve.
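Since we walk through those numbers in class, here is a minimal back-of-the-envelope sketch in Python, using standard textbook values (in kJ/mol) and the gas-phase Na–Cl separation of about 2.36 Å for the Coulomb term. The point is that the electron transfer alone is uphill, and the Coulombic attraction more than pays for it.

```python
# Back-of-the-envelope energetics for Na(g) + Cl(g) -> Na+ Cl-,
# using standard textbook values (kJ/mol).
IE1_NA = 496.0   # first ionization energy of sodium, kJ/mol
EA1_CL = -349.0  # first electron affinity of chlorine, kJ/mol

# Electron transfer by itself is endothermic:
electron_transfer = IE1_NA + EA1_CL  # +147 kJ/mol

# Coulomb's law for a single Na+...Cl- ion pair: E = k*q1*q2/r
K = 8.988e9          # Coulomb constant, J*m/C^2
E_CHARGE = 1.602e-19 # elementary charge, C
AVOGADRO = 6.022e23
r = 2.36e-10         # m, roughly the gas-phase Na-Cl distance

coulomb_per_mole = (K * (+E_CHARGE) * (-E_CHARGE) / r) * AVOGADRO / 1000  # kJ/mol

print(f"Electron transfer alone: {electron_transfer:+.0f} kJ/mol")
print(f"Coulombic attraction of one ion pair: {coulomb_per_mole:+.0f} kJ/mol")
```

The full lattice energy of solid NaCl (about –787 kJ/mol) is larger still, since each ion in the crystal interacts with many neighbors, not just one.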

 

Here’s a snapshot showing some of the erroneous answers from the incoming chemistry graduate students.

 


But it gets worse…

 


The purpose of Bodner’s paper is not to ridicule students. It is aimed at us, the instructors. We should be thinking carefully about our teaching – it needs to be more robust, and we need to pay explicit attention to common misconceptions and the “Swiss-cheese knowledge” of our students. Here is my paraphrase of Bodner’s observations and conclusions: (1) Students are often only good at applying domain knowledge narrowly, and they need more “cross-training” examples; (2) Misconceptions can be quite sticky and difficult to dislodge; (3) Students can “know stuff” yet lack understanding; (4) Misconceptions can often be caused inadvertently by instructors.

 

I will close with a different example. Here’s the prompt: “Ice is less dense than water, but steel is almost eight times as dense as water. Explain why both the Titanic and the iceberg it hit were able to float on water.”

 

One of the answers given by a student: “The Titanic was made from titanium, not steel.”

Monday, October 26, 2020

Colors in Chemistry

I recently finished the module on Stoichiometry and Chemical Reactions in my G-Chem class. The last group of reactions we looked at were redox reactions. Of these, one of the classic demos is the metal displacement reaction. I usually put zinc or magnesium, a grey and shiny metal, into a blue solution of copper sulfate. Moments later, the grey metal begins to dissolve, the solution turns colorless, and out pops a brownish solid – copper metal. Our G-Chem lab has the students run through what’s known as the “copper cycle” where they get to observe these changes and more – although in a Covid year, they will likely not be able to actually do this in person.

 

The metal displacement reaction I described above can be represented by the following chemical equation.

 

Zn(s) + CuSO4(aq) --> Cu(s) + ZnSO4(aq)

 

The students learn how to write the net ionic equation and identify sulfate as a spectator ion that does not participate in the reaction.

 

Zn(s) + Cu2+(aq) --> Cu(s) + Zn2+(aq)

 

We also break up this net ionic equation into its two half-equations to track the loss and gain of electrons in the redox reaction.
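That electron bookkeeping can even be checked mechanically. Here is a toy Python sketch (the dictionaries are just my own ad hoc representation, not any standard library) verifying that each half-equation balances charge once electrons are included, and that the electrons lost by Zn equal those gained by Cu2+:

```python
# Toy bookkeeping for the two half-equations of Zn + Cu2+ -> Zn2+ + Cu.
# Values are the ionic charges of each species.
oxidation = {"reactants": {"Zn": 0}, "products": {"Zn2+": +2}, "electrons_released": 2}
reduction = {"reactants": {"Cu2+": +2}, "products": {"Cu": 0}, "electrons_consumed": 2}

# Oxidation: reactant charge must equal product charge minus released electrons.
ox_balanced = (sum(oxidation["reactants"].values())
               == sum(oxidation["products"].values()) - oxidation["electrons_released"])

# Reduction: reactant charge minus consumed electrons must equal product charge.
red_balanced = (sum(reduction["reactants"].values()) - reduction["electrons_consumed"]
                == sum(reduction["products"].values()))

# Electrons lost must equal electrons gained for the overall redox reaction.
electrons_match = oxidation["electrons_released"] == reduction["electrons_consumed"]

print(ox_balanced, red_balanced, electrons_match)
```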

 

All this is accompanied by nice “molecular” level pictures of what’s going on in solution. Here’s one from the current textbook we’re using. 

 


Notice it juxtaposes the microscopic world pictures with the macroscopic picture you’d observe in a real-life demo. Interestingly, the particle picture of zinc metal colors the zinc atoms grey, and similarly the copper atoms have the reddish-brown of copper metal. This is somewhat misleading. Bulk copper metal might be reddish brown, and bulk zinc metal might be greyish-silver, but the atoms certainly don’t have those colors.

 

The day after my class, I was reading another chapter from Multiple Representations in Chemistry (blogged about here and here), which discussed misconceptions students have about bulk properties such as color. This particular zinc-copper displacement experiment was one of several such examples. The chapter described a study probing the conceptual understanding of high school chemistry students navigating between the macroscopic, microscopic, and symbolic views of chemistry (known as Johnstone’s Triangle). Not surprisingly, the students didn’t really understand what gives rise to the color of a chemical substance that they can see – and many of them thought that it came from the color of individual atoms.

 

Horrors! I wonder if my G-Chem students have the same misconception. I certainly talked about the observed color changes. When copper precipitates out of solution, you can see the brownish metal. This, along with the disappearance of the blue color (which I attributed to Cu2+ ions in aqueous solution), was connected to the reduction half-reaction Cu2+(aq) + 2e- --> Cu(s). Similarly, the dissolving of zinc metal and the solution turning colorless were tied to the oxidation half-reaction Zn(s) --> Zn2+(aq) + 2e-, and I probably said that Zn2+ ions in aqueous solution are colorless.

 

I think the students comprehend that individual atoms in the solid metal are not colored per se. We went over this the very first day of class when we discussed atom representations, and I occasionally make reference to it when molecular pictures show up again in chemical bonding. “Atoms aren’t colored! This is just a standard representation chemists use!” But I hadn’t discussed the ions. And the textbook picture “colors” the dissolved aqueous ions the same color as the atoms in the bulk solid. Needless to say, individual Cu2+(aq) and Zn2+(aq) ions are not themselves blue and colorless respectively. I’m not sure what the students were thinking about the different colors of aqueous solutions because I didn’t emphasize this point. I should check but haven’t had time to do so, as my class sessions are usually very tightly scripted.

 

In one of our early G-Chem labs, the students make a standard curve for a red dye solution, and then use it to measure the concentration of red dye in a Kool-Aid solution. There is some discussion about what the absorption spectrum of the dye looks like as we discuss Beer’s Law and how to set the UV-visible spectrometers to “lambda-max”. I’m not teaching lab this year, and because of remote teaching, I didn’t do my usual activity of having the students use hand-held spectroscopes and look at colored solutions to see what light gets through and what is absorbed. So I’m not sure my students have a good notion of why they observe color in a solution. The textbook picture doesn’t quite help: the molecular-level picture of CuSO4(aq) has a darker blue background than that of ZnSO4(aq), supposedly representing the “blue” of water. I have mentioned to the students a couple of times that water is not blue even as I use my blue marker to draw beakers of solution on the white board. I might be adding to student confusion without knowing it.
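The analysis in that lab is just a linear fit plus an interpolation, so it can be sketched in a few lines of Python. To be clear, the absorbance numbers below are invented for illustration (they are not our lab’s data), and the Kool-Aid reading is likewise hypothetical:

```python
# Sketch of a Beer's-Law standard curve (A = epsilon * l * c): fit a line
# to absorbances of known standards, then read an unknown off the line.
# All numbers are made up for illustration.
import numpy as np

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # standards, micromolar
absorb = np.array([0.11, 0.22, 0.33, 0.43, 0.55])  # measured at lambda-max

slope, intercept = np.polyfit(conc, absorb, 1)  # least-squares line

# Interpolate a hypothetical Kool-Aid absorbance back to a concentration:
A_koolaid = 0.38
c_koolaid = (A_koolaid - intercept) / slope
print(f"[dye] in Kool-Aid ~ {c_koolaid:.1f} uM")
```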

 

What am I learning from all this? Next year when I teach first-semester G-Chem again, I’m going to pay attention to this issue of color and try to address potential misconceptions systematically by building it into several places in my syllabus. And I need to make a stronger connection to what students are seeing and doing in lab. I haven’t taught the lab for a number of years, and that’s caused me to “forget” to make some of those stronger connections between lecture and lab courses.

 

On top of all this, I think the students might further trip up when seeing the simple picture of metallic bonding. The chapter I read discussed students viewing the metal “atoms” in the bulk solid as metal ions – with a sea of separated electrons in the nooks and crannies between the atom/ion spheres. And these pictures have colors. Who knows what the students are actually thinking? I’ll need to pay attention to this again next year. But right now I’m just trying to finish this Covid semester without burdening the students with more details. As to the specific zinc-copper metal-displacement reaction, I have another shot at it next semester when we cover electrochemistry in detail, so I’ll try to design an activity to probe what the students are thinking and how we can correct any misconceptions.

Sunday, October 18, 2020

Merlin's Fungi

Today’s book review is not about Harry Potter’s textbook for first-year Herbology – that would be One Thousand Magical Herbs and Fungi by Phyllida Spore. The book I’ll be discussing today is titled Entangled Life. It is authored by Merlin Sheldrake; that’s not a pseudonym, and he’s a real bona fide scientist, I kid you not. And the book is indeed about fungi. 

 


If you want to know more about fungi, this is the book for you. While aimed at a general audience, it’s a bit denser and slower going than your typical science trade book. Spineless, The Tangled Tree, or I Contain Multitudes are breezier reads. Entangled Life shares similarities with these books: it has interesting first-person anecdotes, discusses a web of complexity, and digs beneath the surface at things you can’t see. I’m happy to say it’s well worth the patience and effort to read the book cover-to-cover. The narrative gets breezier in the second half, but the joy of reading it comes from building on the slower first half.

 

Given my ignorance about fungi, other than my hobbit-like enjoyment of edible mushrooms, there is so much to learn from Sheldrake’s book. I’d been thinking about zombies lately, and one thing that jumped out was the “zombie fungus” Ophiocordyceps unilateralis, which apparently takes over whatever free will carpenter ants might have. In what looks like mind control, the ant is forced to climb up high on a plant, something it normally wouldn’t do. The ant is then forced to “clamp its jaws around the plant in a death grip”, at which point the “mycelium grows from the ant’s feet and stitches them to the surface. The fungus then digests the ant’s body and sprouts a stalk out of its head, from which spores shower down on ants passing below.” That sounds diabolical, straight out of an Alien movie.

 

In the same chapter as the zombie fungi, “Mycelial Minds”, are the ergot fungi, famous or infamous for being a source of the drug LSD. There are a lot of mind-altering chemicals out there, and psilocybin mushrooms, of which there are many varieties, have been consumed by humans through the ages for their mind-altering effects. Apparently there are cave paintings celebrating what look like mushroom deities, and there’s even a theory that such neurochemicals helped hominids free their minds. Magic mushrooms, we call them now. I wonder what Phyllida Spore had to say about these mushrooms, or if such “magical” properties were out-of-bounds to first-year Hogwarts students.

 

My favorite part of the book was the last three pages of Chapter 6, “Wood Wide Webs”. Celebrating the complex connections and relationships among fungi and their ecosystem, Sheldrake asks broad age-old questions, not too dissimilar from the questions I would ask while pondering origin-of-life questions. But he does this with both aplomb and wonderful prose. Here’s one of the best paragraphs that captures the heart of the story.

 

“… fungi form and re-form their connections with plants, tangling, detangling, and retangling… wood wide webs are dynamic systems in shimmering, unceasing turnover. Entities that behave in these ways are loosely termed ‘complex adaptive systems’: complex, because their behavior is difficult to predict from a knowledge of their constituent parts alone; adaptive, because they self-organize into new forms or behaviors in response to their circumstances. You – like all organisms – are a complex adaptive system. So is the World Wide Web. So are brains, termite colonies, swarming bees, cities, and financial markets… Within complex adaptive systems, small changes can bring about large effects that can only be observed in the system as a whole. Rarely can a neat arrow be plotted between cause and effect. Stimuli – which may be unremarkable gestures in themselves – swirl into often surprising responses. Financial crashes are a good example of this type of dynamic nonlinear process. So are sneezes, and orgasms.”

 

Symbiosis, a term we now use familiarly to describe relationships between living organisms, came about through the study of fungi. Coexistence is a tension between competition and cooperation. It’s complicated. And fungi show a full spectrum of different behaviors towards their neighbors. Beyond psychedelics, fungi have a many-layered relationship with humans. They help us to bake bread and brew beer. Some people are willing to pay top dollar for the delicacy of odorous truffles. And there is a whole range of marvelous sustainability and eco-industry solutions – Chapter 7 showcases many creative examples, which gave me the “wow, fungi can do that?” feeling over and over. There’s also an intriguing narrative in Chapter 8 that connects the mutation of the alcohol dehydrogenase enzyme in primates (thus allowing us humans to enjoy a drinking buzz without being violently poisoned in the first few sips) to the quest for biofuels.

 

In his epilogue, Sheldrake muses on how “fungi make worlds; they also unmake them.” In response, now that he’s finished the book, he plans to sacrifice two copies to fungi. One will be dampened and seeded with fungi that will decompose or “eat” the book, and when they’ve sprouted mushrooms, he will eat them. And thus, eat his own words. The other will be mashed into sugars, and yeast will ferment it into beer, which he’ll drink. Sheldrake calls this “closing the circuit”. Making and unmaking. Composing and decomposing. The circle of life and death.

Thursday, October 15, 2020

Brick, Beaker, Balloon

I’ve been pondering ways to make explicit the macroworld-microworld connections in chemistry, as I’ve been reading Multiple Representations in Chemistry. Chapter 5 on “the role of practical work” discussed the purposes of lab-work and demonstrations in fostering a better understanding of chemistry fundamentals. While many of the examples discussed and the conundrums posed are better aimed at grade-school chemistry and science, there is some overlap to college-level chemistry.

 

An idea that came to mind was to overhaul the approach I’ve used to introduce my chemistry courses. I’ve been starting with the question “What is matter? And why does it matter?” first with the ancient Greek philosophers and the Four Elements, before moving to the alchemists and the philosopher’s stone, and then we discuss Dalton’s Theory and how to picture it, before touching on when we were finally able to “see” atoms with the scanning tunneling microscope. It’s a nice story, but maybe a bit too esoteric and abstract.

 

How can I concretely introduce something we’ll have to do throughout the semester – shifting our thoughts facilely between macroscopic and atomistic descriptions, the heart of chemistry? I came up with the 3 B’s. For my first day of class, I can bring in a Brick, a Beaker, and a Balloon.

 

First, I could break the brick. That would be a whole lot more dramatic than what I do now, which is tearing paper! Then we can discuss what makes up the brick. Smaller particles? How small? This gets us into the definition of the atom. How are the particles arranged? This gets us into imagining an atomistic picture of solids. What holds the particles together? We can talk about “attractive forces” in general, and how energy is used to overcome those forces and break the brick. It also previews multiple instances in first-semester chemistry where energy will be used to “take things apart”, be it knocking electrons off an atom or breaking a chemical bond. We might even discuss the “elements” found in the brick: aluminum, oxygen, silicon. The brick is a compound and not a pure element.

 

Then I could fill the beaker with water. We can imagine what a particle picture would look like, and how this is different from the picture of solids. What’s different? How are the attractive forces different? Why do liquids take the shape of their container? We could discuss the molecular picture of a water molecule and a similar picture when you have many water molecules. But water from the tap isn’t just pure water. There might be dissolved ions. This allows us to talk about homogeneous mixtures and maybe what constitutes a “substance”. We could even talk about why pure (deionized) water might be dangerous to their health.

 

Finally, the balloon. We can work on a particle picture of gases and how it differs from solids and liquids. We can think of hot air balloons, and how temperature might change the picture, thus allowing a discussion of the competition between kinetic (thermal) energy and attractive forces between particles. Density enters the picture. We can also talk about the common gases in air, and why balloons that we blow up fall to the ground. I’ve been thinking about how to weave the Gases textbook chapter throughout the semester ever since reading Caesar’s Last Breath, which has lots of great vignettes and examples to keep students curious and interested. While I might not spend as much time on Dalton and his theory, I could at least briefly pay tribute to the fact that his “discoveries” came from the study of gases.
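The “why do blown-up balloons fall” question reduces to ideal-gas densities, which makes for a quick in-class calculation. Here is a sketch assuming room conditions (25 °C, 1 atm); exhaled breath is mostly nitrogen and oxygen with a few percent CO2, so it is only marginally denser than air, and the balloon falls mainly because of the weight of the rubber and the slight overpressure inside.

```python
# Ideal-gas densities, rho = P*M/(R*T), at roughly room conditions.
R = 8.314     # J/(mol*K)
P = 101325.0  # Pa (1 atm)
T = 298.0     # K (25 C)

# Molar masses in kg/mol; "air" is the usual average value.
molar_mass = {"air": 0.02896, "He": 0.004003, "CO2": 0.04401}

densities = {gas: P * M / (R * T) for gas, M in molar_mass.items()}
for gas, rho in densities.items():
    print(f"{gas}: {rho:.2f} kg/m^3")
```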

 

I haven’t drawn out a blow-by-blow lesson plan to figure out if all this will fit into a single one-hour session, and how exactly I will use it to leverage other material throughout the semester. But it’s a start. Brick, Beaker, Balloon. It’s macroscopically visual yet full of microscopic imagination. And visual touchpoints make it an accessible and quick reference for students. I’m looking forward to trying it next Fall. Hopefully in front of a live audience!

Sunday, October 11, 2020

The Invention of Cold

We are energy guzzlers. As humans. As creative technological beings. As comfort-seeking creatures who change the environment around them.

 

Consider a bacterium. It makes just enough energy for itself to survive, and possibly procreate. But at some point in Earth’s history, one ate another (technically an archaeon eating a bacterium), and more complex unicellular creatures (eukaryotes) appeared on the scene with superior energy transducing abilities – thanks to new little engines we now call mitochondria. Cells signaled each other, and started to cluster in colonies, sharing out the work, specializing and becoming more efficient. And thanks to the buildup of free molecular oxygen, they could become bigger multicellular organisms – gorging more energy and burning it up as they built brain and brawn.

 

While every other living creature eats to live, human beings wanted more. We wanted more food. We wanted it to look bigger and better. We wanted more shelter. Bigger and better. We wanted stuff to make our life easier – machines to do the labor. We wanted to live wherever we chose (from the desert to the mountains), travel with ease and speed, communicate in new ways, and tickle our fancies with novel entertainment. New creations require energy when raw matter is shaped into technology. We dug deep to find new energy in black gold, harnessed the power of wind and water, imitated green plants as they looked to the sun, and unlocked the secrets hidden in the tiny nucleus of the atom.

 

Adding energy adds heat, increasing the wiggle and dance of atoms and molecules. We humans know how to add Heat, and we’ve been doing this for a long time. Using Fire in its myriad forms, we burn, burn, burn. But the laws of thermodynamics are relentless. Heat, a most inefficient form of energy, dissipates until the temperature of the hearth matches the temperature of its surroundings. More fuel is needed for the fire to maintain heat. Hot things cool down. We notice this easily because the object of our attention is small and specific. But it is surrounded by the large thermal reservoir of our environment, which hardly changes its temperature as it absorbs our heat but not our attention. You might say that the cold does get hot, but we never notice if we can’t tell the difference.

 

This makes the invention of Cold all the more interesting. It’s a fascinating tale, spun by a master story-teller, Steven Johnson, in his book How We Got to Now: Six Innovations That Made the Modern World.

 


The story begins with a Bostonian, one Frederic Tudor, who hatched a plan to ship large blocks of ice from the cold Northeast to the sunny Caribbean. The ice survived the trip, but Tudor hadn’t taken marketing into account – the locals didn’t think it useful, not having needed this seeming “luxury” for as long as they could remember. And the ice does eventually melt down in the heat of the tropics. Surviving debtor’s prison and persevering with his “ice trade”, Tudor’s luck eventually turned. In the sweltering heat, when there are those who can afford the luxury of cooling down, there’s profit to be made. And it wasn’t just the Caribbean.

 

We all need to eat. Our bodies need fuel to burn. Refrigeration would completely change the food industry, from the Chicago meatpackers of the nineteenth century to the flash-frozen food innovations of Clarence Birdseye. How do you feed an urbanizing population with ever-more complex lifestyles? Can we live and work more comfortably in our dense concrete jungle? Can we enjoy a theater performance indoors in the summer? Can we cool the air in a hospital ward for patients suffering from fever and ague?

 

It is this last question that eventually gave us modern air-conditioning. John Gorrie, a doctor in Florida, “suspended blocks of ice from the hospital ceiling” to help reduce the fever of his patients and make them more comfortable. But ice shipments were sometimes delayed, and the good doctor built his prototype using recently discovered scientific knowledge of how gases behave. If you can compress a gas, then cool it down to room temperature (for example with running water), and then allow it to expand, the gas will “pull heat from its environment”. This is still the strategy used in air-conditioners and refrigerators today. We’ve become clever and efficient at doing this, and can boast of huge buildings in our ever-growing metropolises. Always comfortable on the inside, no matter what the temperature is outside.
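Gorrie’s trick can be quantified with the textbook formula for a reversible adiabatic expansion of an ideal gas, T2 = T1·(P2/P1)^((γ−1)/γ). The pressures in this sketch are illustrative only, not the operating points of any real machine, but they show how dramatic the cooling can be:

```python
# Temperature drop for a reversible adiabatic expansion of an ideal gas:
# T2 = T1 * (P2/P1)^((gamma-1)/gamma). Illustrative numbers only.
gamma = 1.4        # heat-capacity ratio for a diatomic gas like air
T1 = 298.0         # K: compressed gas, cooled back down to room temperature
P1, P2 = 5.0, 1.0  # atm: then let the gas expand back to ambient pressure

T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
print(f"Temperature after expansion: {T2:.0f} K ({T2 - 273.15:.0f} degrees C)")
```

A gas that cold readily “pulls heat from its environment” as it warms back up, which is the working principle of the cycle.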

 

Tudor’s story begins barely two hundred years ago; Birdseye’s flash-frozen food, just a hundred years ago. They seem like old stories, but they’re relatively new in human history. The invention of Cold looks easy. But when your refrigerator or air-conditioning unit breaks down, you’ll likely throw up your hands in despair and quickly call an expert for repair. It takes energy to fight thermodynamics and keep things colder than the ambient temperature. No organism does it well. Pulling energy out of a system to cool it down is a very neat trick that creative humans with the knowledge of science and technology can now perform. Until recent history, the idea that a box could cool things down would have seemed like magic. (I suppose you could try trapping a Dementor.) Humans can be wizards in their own way. 

Friday, October 9, 2020

Spatial Visualization in Chemistry

I’ve started reading Multiple Representations in Chemistry, a compendium of articles exploring the challenging link between Johnstone’s Triangle and how chemists use different visualizations (even for the same molecule). Over the years, I’ve been paying attention to these aspects of my introductory chemistry courses (G-Chem and non-majors Chem), for example when we discuss chemical reactions, but I had not pondered many of the issues that arise in O-Chem. Chapter 1 (“Learning at the Sub-micro Level: Structural Representations”) provides several interesting examples. I’ve picked out three for today’s blog post.

 


Let’s look at chirality. This is not often emphasized in the G-Chem syllabus, and I’ve only explicitly covered it a few times, when we used a textbook that folded it into the chapter on molecular shape and hybridization. I do however cover it regularly in my non-majors class (where we spend four weeks on organic chemistry at a very simple level). Here’s a Figure showing the visualization task.

 


Did you find the task easy? I didn’t. And I’ve been teaching for a long time. Although I probably figured it out much faster than most students (simply from experience). The folks who studied this task report that “most students employ a strategy of rotation to align a corresponding bond in each structure, thus reducing the problem to a 2D consideration…” Another interesting tidbit is that “students found mental rotation around the vertical bond easier than about the other bonds.” Thinking about my own comfort level, I found that I tend to draw one of the thin lines in either a vertical or horizontal position because that seems to be the easiest for me to visualize. If they’re both diagonal, I have more difficulty. I find this interesting, never having considered it before.

 

The second example is one that I haven’t used in G-Chem, and since I don’t teach O-Chem, I haven’t used it in class either – although I’m very comfortable using it as a visual aid for myself. I’ve also used it when trying to explain dihedral angles to my research students. Here’s the Figure below from a task asking students to name molecules, one with a typical “structural” drawing, and the other with a Newman projection.

 


The study showed that most students had no trouble naming the structural drawing on the left (using the standard rules one learns in O-Chem), but many had trouble with the Newman projection on the right. Now it seems to me that all one needs to do is redraw the Newman projection into the corresponding structural drawing. I don’t know if the students did this (the text doesn’t say and I didn’t look up the original paper) but apparently, when asked again on two further exams, students still found it problematic. Did they not learn it after having trouble the first time? I don’t know. It does say that the students complained that the question was “not fair”.

 

Here’s the final example I will highlight today. This involves drawing the product of a standard SN2 nucleophilic substitution. The only difference in the two questions is whether the hydroxide is shown on the left or the right of the reactant organic molecule. Look at the dominant answer provided by students in both cases.

 

 

If you teach O-Chem, you might slap your head in dismay after all that time you spent discussing inversion of stereochemistry and “back-side” attack. But if you’ve seen this error before, chances are you’re incorporating examples in class so that students don’t keep making this sort of mistake.

 

Reading all this made me think about resonance structures. When we get to this point in Lewis structures, I normally make a song-and-dance in class about how to think about them in terms of the delocalization of electrons. We talk about different representations, the limitations of models, and different ways of drawing such structures on a two-dimensional flat surface. Or I should say I talk about them in class. (This post briefly mentions how I discuss it.) Now I’m not sure whether the students really understand what I’m talking about. (We went over this in my G-Chem class last week.) I should consider writing up some supplementary information for my students, since this concept likely remains fuzzy for them.

Wednesday, October 7, 2020

Life's Engines

There’s a surprising twist to Paul Falkowski’s book, Life’s Engines. From its subtitle – How Microbes Made Earth Habitable – you’d expect discussion about microbes, their energy-harvesting nanomachines, and biological evolution. You do get all of this, in easy-to-read clear prose, but you also get the germ of a profound idea – that the origin of life and its subsequent evolution is about the global-scale movement of electrons across minerals, organisms, and the environment. It’s not a new idea. For example, this week my Origin of Life class is reading an American Scientist article from 2009 titled “The Origin of Life: A case is made for the descent of electrons.” But it’s an idea normally found in scientific articles rather than a book aimed at the general public.

 


The argument is compelling, in my opinion as a chemist who studies the origin of life, although Falkowski leaves out many of the esoteric details. He tries to give readers the big picture, and in that he is mostly successful. But the interesting pieces, for those interested in the origin-of-life, are in the trickier details. Grand sweeping views are always easier to explain in broad brushstrokes. But because no one is an expert on everything, the view provided is always limited, with the strongest examples coming from the expertise of the story-teller. The parts I was most interested in, a global view of electron-economics, had much less page time than some very interesting discussions about microbes and their biochemical machinery – the expertise of the author.

 

Chapter 7 is cleverly titled “Cell Mates”. Here’s the opening paragraph: “One of the strategies nature uses to insure that its intellectual property is resilient in the face of potential massive catastrophic events is to spread the risk across a wide range of microbes. The instructions for nanomachines are spread by means of horizontal gene transfer. Although horizontal gene transfer is the principal mode of evolution in microbes, the process is not totally haphazard. One of the major drivers is ecological – the symbiotic association of microbes to optimize the use of scarce nutrients. That driver has served the evolution of life well.”

 

The author argues for the importance of studying microbes in their more complicated ecological environment, and not just isolating and growing them as pure cultures (which has its benefits). The community of microbes has its own urban jungle and economy, much like the ones humans experience in some of the largest cities especially those in the developing world. The currency in this economy is electrons – in the form of chemical molecules. There are electron-rich molecules such as methane, there are electron-poor ones such as molecular oxygen, and the rich tapestry of chemical diversity provides a sprawling bartering marketplace like no other. While we learn in school that ATP is the universal currency of biochemistry, if you look a little deeper you’ll see a messier and more diverse set of chemical substances involved. And there are positive and negative feedbacks in this microscopic world, similar to things you might have heard about in our macroscopic world, global warming for instance.

 

One puzzle I learned about in this chapter had to do with endosymbiosis and the entrance of mitochondria into the global electron-energy game. I already knew the broad strokes: an archaeon engulfs a bacterium, and the latter evolves into a symbiotic energy powerhouse. But I hadn't considered the details. It turns out that the ancient bacterium that was engulfed is closely related to extant “purple nonsulfur photosynthetic bacteria”, which carry out photosynthesis only under anaerobic (no-oxygen) conditions. When oxygen is available, however, an “electrical circuit is inhibited, and the cells lose their capacity to synthesize the pigments that absorb light. To survive they rewire their internal electronic circuits and allow oxygen to become an acceptor of hydrogen that comes from organic matter. The same bacterium that is a photosynthetic Dr. Jekyll during the day under anaerobic conditions can become a respiratory Mr. Hyde under aerobic conditions.” Falkowski goes on to explain why the evolution of the engulfed bacterium into today’s mitochondria requires a delicate balance – one that we don’t completely understand yet.

 

Last week I gave a talk to first-year students about the interdisciplinarity of origin-of-life research and why it is intriguing to me. I’ve thoroughly enjoyed learning about different areas and have a newfound love for biology (which I thought was a slog back in secondary school). Falkowski reminds us readers of the close connection between the reduction-oxidation reactions of chemistry and the intricacies of evolution in biology, not to mention the physics of electron transfer or the minerals of geology that provide sources and sinks of electrons. Life’s Engines is a wonderful little book that I will re-read as I continue to ponder the origin of life.

Sunday, October 4, 2020

Of Salamanders and Skele-Gro

Frozen 2 had promise. It shared some premises with Onward, namely the existence of a pre-technological society that had learned to live in harmony with “natural” magic. This natural magic was founded upon the four elements of Earth, Water, Air, and Fire, the foundation that led to the modern field of chemistry after a long detour into alchemy. But the movie botches what could have been a truly magical story into a neither-here-nor-there mess, partly saved by the light-hearted Olaf and a cute, delightful salamander.

 


In Frozen 2, the salamander is a spirit of (elemental) fire, an idea from the medieval alchemist Paracelsus. In our world, however, salamanders are amphibians – creatures of both earth and water. They are often found hiding in driftwood, fallen branches, and broken logs. They don’t tolerate heat, and when wood is put to the torch they try to escape. They scamper away as the fire starts, seemingly appearing from nowhere, as if born from the flames – hence their association with fire.

 

Salamanders are mentioned in the Harry Potter series when the twins Fred and George Weasley get a salamander to swallow some fireworks, or when Hagrid treats them for scale-rot. Salamander blood is also used in Strengthening Solutions, and Hagrid builds a bonfire to keep them happy during Care of Magical Creatures. Salamanders are described in Fantastic Beasts and Where to Find Them (the book, not the movie) as fire-dwelling lizards. The author of said book is one Newt Scamander, a very salamander-ish name if you ask me.

 

However, it’s the non-magical salamanders of our world that are much more interesting. They can regrow their limbs! How do they do so? Well, it’s in their genes. Actually, it’s in all our genes too, since we have genetic instructions to grow limbs. Some people even grow an extra finger! For us humans, however, the genes haven’t yet been coaxed into regrowing a missing limb. Scientists continue to probe and unlock the secrets of the salamander’s ability to regrow different limbs, although apparently there’s a limit. Chop off the limb enough times and eventually it doesn’t regrow. Ah, biochemistry – fascinating and complicated!

 

Voldemort magically provides Wormtail with a new hand, although it seems more like an advanced prosthetic (with its silvery appearance) than a regrowth of his own hand. The hand isn’t completely under Wormtail’s control. It’s not really his, and it kills him in the end. It’s unclear how exactly Voldemort magically builds his new body, although it is likely not the same as his old body – not exactly grown from him – although it does take extracts from his father, from Wormtail, and from Harry. Bone, flesh, blood. But in his long efforts to escape death, Voldemort has marred his own body-soul complex. How exactly remains unclear.

 

A simpler example of regrowth is Skele-Gro, a potion used to regrow missing bones. In this case, Harry has to endure a painful evening of regrowing the bones in his arm after they are accidentally vanished by the bungling Professor Lockhart. Could a magical spell have been used to put the bones back in – essentially the reverse of what Lockhart did? Possibly. But the choice to do it “properly” is to use a potion. I’ve argued that what makes Potions so important to magic-users is their action at the molecular level. Biochemistry is complex, and when fine control is required for things too small to see and too complicated to mentally picture, a potion is much more effective than a magical spell. Perhaps the potion magically activates the appropriate bone-growing DNA in Harry, first detecting what’s missing underneath his flesh, and then regrowing his bones salamander-like. I wonder if salamander extract is used in Skele-Gro; it would certainly be appropriate.

 

Could a complex potion, perhaps with salamander extract, have regrown Wormtail’s hand? Or helped George Weasley regrow an ear? I don’t know. Neither was attempted. Neville Longbottom does accidentally transplant his ears onto a cactus in a Transfiguration class while attempting a Switching Spell. But that’s not the same as re-growing. I’ve postulated that Transfiguration spells wear off after some time, so they wouldn’t have the permanence one wants in a new limb. Magical spells have their limits, and the Wizarding World still needs chemistry!