Thursday, March 31, 2022

H2O2 too?

There are many hypotheses for the ‘crucial ingredient’ in the origin of life. Adding to this list is hydrogen peroxide, H2O2. That’s the suggestion of Rowena Ball and John Brindley in a review article with an intriguing title: “The Power Without the Glory: Multiple Roles of Hydrogen Peroxide in Mediating the Origin of Life” (Astrobiology 2019, 19, 675-684). It is clearly labeled as a Hypothesis article, and the authors carefully state that “this discussion is purposely (and necessarily) speculative… to stimulate further ideas concerning possible feedbacks between an evolving biomolecular system and its energetic medium, and how these interactions themselves may have shaped emergent life.”

 

H2O2 has long been cast as a villain. You’ve probably heard the standard joke of someone who walks into a bar, asks for H-2-O, gets a glass of clear liquid, drinks it up, and declares it refreshing. On hearing this, another patron asks for H-2-O-too, gets a glass of clear liquid, drinks it up, and dies! We think of H2O2 as toxic. We caution our students in chemistry lab to be careful when handling it because of its reactivity. The two are related. Because of its anomalously weak O–O (single) covalent bond, H2O2 is considered a ‘high-energy’ molecule and participates readily in strongly exothermic reactions. It is part of the family of ‘Reactive Oxygen Species’, known in biology and the behemoth cosmetics industry as something that ‘causes damage’, broadly speaking.
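To put a rough number on ‘high-energy’: here is a back-of-the-envelope bond-enthalpy estimate for the disproportionation of H2O2 into water and oxygen. This is a minimal sketch using approximate textbook average bond enthalpies (the specific values below are typical tabulated averages, not taken from the article):

```python
# Rough bond-enthalpy estimate for 2 H2O2 -> 2 H2O + O2.
# Average bond enthalpies in kJ/mol; approximate textbook values.
OO_SINGLE = 146   # the anomalously weak O-O single bond in H2O2
OH        = 463   # O-H (4 broken, 4 formed -- cancels out)
OO_DOUBLE = 498   # O=O in O2

# Bonds broken: 2 x O-O (the O-H bonds cancel on both sides).
# Bonds formed: 1 x O=O.
delta_h = 2 * OO_SINGLE - OO_DOUBLE   # kJ per 2 mol H2O2

print(f"Estimated dH = {delta_h} kJ per 2 mol H2O2")  # -206 kJ
```

The estimate (about −206 kJ per two moles of H2O2) lands in the right ballpark of the experimental reaction enthalpy (roughly −196 kJ), and the sign of the answer traces directly back to that weak O–O single bond.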

 

Five years prior to this article, Ball and Brindley had examined the thiosulfate–hydrogen peroxide (THP) redox oscillator. Thiosulfate and H2O2 react readily and rapidly according to the chemical equation:

 


Ball and Brindley argue that the THP oscillator provides a power source, i.e., its exothermic reaction releases energy that can potentially power anabolic reactions and build molecules required for biomass. This “chemiosmotic coupling”, they argue, provides “a viable (or complementary) alternative to… proton gradients across alkaline hydrothermal vents” as proposed by Nick Lane and others. Ball and Brindley also claim that it “improves ribozyme activity [and] provides a possible resolution of the replication versus ribozyme activity paradox”. I haven’t read their cited paper, but I think their argument is that an oscillating pH offers a route between the functionally folded state a ribozyme needs to act as a replicase and the unfolded state it needs to serve as a template for replication.

 

One intriguing part of their paper is that their previous simulations with the THP oscillator led to an unexpected probability distribution. Not having read the cited work, I don’t know how it arises. But in any case, while we often observe the perturbation of a Gaussian distribution into a Maxwell-Boltzmann distribution (left-heavy with a long tail on the right) that all my students should recognize, the THP oscillator, under the appropriate non-equilibrium conditions, leads to a right-heavy perturbation. Here’s their figure showing the three cases: Gaussian, Maxwell-Boltzmann, and their New Distribution.
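As a quick sanity check on that ‘left-heavy with a long tail on the right’ description, one can sample a Gaussian and a Maxwell-Boltzmann speed distribution and compare their skewness. This is my own illustration in reduced units, not the paper’s simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def skewness(x):
    """Third standardized moment: 0 for a Gaussian, positive for a right tail."""
    x = np.asarray(x)
    return np.mean((x - x.mean())**3) / x.std()**3

# Gaussian sample: symmetric, so skewness ~ 0.
gauss = rng.normal(size=n)

# Maxwell-Boltzmann speeds: the magnitude of a 3D Gaussian velocity vector.
speeds = np.linalg.norm(rng.normal(size=(n, 3)), axis=1)

print(f"Gaussian skewness:          {skewness(gauss):+.3f}")   # ~ 0
print(f"Maxwell-Boltzmann skewness: {skewness(speeds):+.3f}")  # ~ +0.49
```

A Gaussian has zero skewness; the Maxwell-Boltzmann speed distribution has a positive skewness near +0.49, the signature of that long right tail.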

 


This result is potentially interesting but, without seeing how their new distribution is perturbed as other environmental conditions change, it’d be difficult to assess its importance as a driving force for molecular assembly that builds order even while seeming entropically unfavorable. Nevertheless, it is intriguing. The authors tantalizingly suggest that “characterizing this distribution in explicit form could effectively give us a fundamental equation of life which may provide useful guidance in designing molecular experiments: they should be messy… but not too messy.”

 

As to why all of life seems to use the same machinery (genetics and metabolism), they argue that the rise of catalase enzymes to effectively remove H2O2 cuts off the possibility for new life to begin powered by H2O2. In effect, this is “pulling up the ladder” after you’ve climbed up the tree so no one else can follow suit. They characterize evolution as “the burning of a succession of small bridges: the results of a transformational evolutionary step usually destroy the preconditions for its own occurrence.”

 

Personally, I’m skeptical that H2O2 plays such an important role. I can see how, if formed, it can be an energy source. But it’s highly unlikely to be a useful one that leads to a series of ‘forced trajectories’. Rather, I expect H2O2 in a prebiotic milieu would be indiscriminate in its reactivity. It’s akin to striking a match: you burn your fuel directly, blow off the released energy as heat, and go straight to low-energy ‘waste’ molecules, rather than following the tortuous and intricate path used by life in its dance of anabolism and catabolism.

Monday, March 28, 2022

Riddles

A topic new to me: Interactive fiction. I didn’t know there was a whole community supporting this genre and that they give out annual awards. Heck, I didn’t even recognize the phrase. Rather, I stumbled across it while reading a random article by someone doing a retrospective look at what Zork wrought. I played Zork once, on an Apple II back in the day. It didn’t seem as interesting as other graphics-based games. And I did enjoy Choose Your Own Adventure books in that same era. I thought this genre had died, until I encountered it in boardgame form through The 7th Continent.

 

Somehow reading that random article conjured in me unrealistic visions of someday writing a work of interactive fiction that subtly taught readers chemistry as they puzzle-solved their way through learning how to cast magical spells in a Harry Potter type world. After all, this blog got started because I thought I should write a chemistry textbook disguised as a potions book, but I needed to practice writing to begin with! The writing has continued but the potions book hasn’t materialized. I did write a prologue once, and also a recipe for Hemodote that I used as an exemplar for a class assignment. In any case, I decided to learn more about interactive fiction. As an academic, how do I proceed? I read a fellow academic. This led me to Twisty Little Passages by Nick Montfort. 

 


Montfort argues that the ancestry of interactive fiction stretches back well before the early computer games Adventure and Zork, and even before the role-playing game Dungeons & Dragons, into the far recesses of time: “the riddle is not only the most important early ancestor of interactive fiction but also an extremely valuable figure for understanding it… considering the aesthetics and poetics of the form today.” Essentially, he argues that interactive fiction is not just a game, story, or puzzle; it is at heart a riddle – albeit a potentially long and immersive one when executed well.

 

What is a riddle? It can be slippery to define. Not just a diversion for children, it is a form of enigmatic poetry involving the riddler and the guesser. It’s a literary puzzle where the literal is hidden yet waiting to be uncovered. And there is a joy in the process of unveiling as puzzle-lovers know! Montfort provides multiple examples to help sharpen the genre of literary riddles, including the much-debated example in The Hobbit: Did Bilbo’s final question to Gollum constitute a riddle? But let’s cut to the chase: Montfort connects riddles and interactive fiction in four ways: “Both have a systematic world, are something to be solved, present challenge and appropriate difficulty, and join the literary and the puzzling.”

 

The riddle creates a world – a metaphoric one. It has its own rules and analogies, and the best riddles are constructed carefully to keep their ‘worlds’ self-consistent. Understanding this world is key to solving the riddle. But it must be of appropriate difficulty, and necessary clues should lead the guesser (or riddlee) to the puzzle’s solution. The best riddles start off obscure, but the layers peel away as the puzzle progresses, keeping the riddlee engaged. (It is certainly challenging for the riddler to formulate a clever long-form riddle posed by a literary piece of interactive fiction.) As the riddlee responds, new avenues open up – some anticipated by the riddler, others perhaps truly novel. But eventually the solution is reached, and the itch is scratched. Montfort writes:

 

The riddle, like an interactive fiction work, must express itself clearly enough to be solved, obliquely enough to be challenging, and beautiful enough to be compelling. These are all different aspects of the same goal; they are not in competition. An excellent interactive fiction work is no more “a crossword at war with a narrative” (quoting Graham Nelson) than a poem is sound at war with sense.

 

That’s a tall order. It’s motivating me to try out some award-winning works; Montfort provides a starter list in his 2003 book; I can easily find newer ones thanks to the Internet. But to truly enjoy it and immerse myself in a fictional world, I might need to wait until the semester is over. As someone who enjoys puzzles and does one crossword puzzle per day, I can see an element of the riddle in each crossword clue. Themed crosswords allow a brief buildup of clues leading to the solution. Crosswords are not an extended literary narrative, a key feature of interactive fiction by Montfort’s definition, but they are a collection of riddles of some sort. They get me to twist my mind in various ways as I mull the meanings and allusions of words – a surprisingly fun activity even for a few minutes. Thank you, crossword constructors, for providing your version of riddles that give me daily enjoyment!

Sunday, March 27, 2022

Hidden Variables

My assigned prompt in G-Chem 2 this past week has students scratching their heads. I tell them that the equations we’ve been using in thermodynamics and kinetics were elucidated in the nineteenth century, before the widespread belief in tiny invisible entities known as atoms and molecules. But everything we discuss in class on these topics has been suffused with molecular language. Is it possible to imagine developing and using these equations without knowing about the atomic world?

 

I’ve had some thoughtful responses so far. Students have recognized that to some extent it is easier to connect something you observe with potentially vague yet macroscopic quantities such as pressure and temperature. They’ve also started to muse on how one grafts new ‘theories’ onto existing equations that work. Essentially, previously hidden variables are being brought to light!

 

A similar narrative arc is how Manjit Kumar ends his book, Quantum, that focuses on the debate between Bohr and Einstein (and their supporters and detractors) on the nature of reality in the physical world. This, I think, distinguishes Kumar’s telling of the tale from the many other histories I’ve read about the development of quantum mechanics. The reader gets the sense of why the Copenhagen interpretation reigned supreme for many years, but also where it has proved less than satisfying. Kumar argues effectively that Einstein was not so much against the probabilistic nature of quantum mechanics (associated with his famous “God does not play dice” statement), but instead was philosophically wedded to the idea that nature has an independent reality not necessarily dependent on the observer.

 

Einstein thought quantum mechanics was incomplete, and found anathema Bohr’s view that reality is undefinable until the act of observation causes the superposition of states in the wavefunction to ‘collapse’. In essence, Einstein would have supported the idea that there are hidden variables to be discovered that would underpin quantum mechanics. He didn’t propose hidden variables explicitly as such, but others after him have pursued the idea – Bell’s theorem being the most famous such ‘test’, one that eventually put strange ideas such as quantum entanglement and nonlocality on an experimental footing.

 

In one of my G-Chem 1 prompts last semester (Week 5), I ask the students what they think about indeterminacy as illustrated by Heisenberg’s Uncertainty Principle. The students have trouble with the thought that there is a fundamental limit to measuring the two variables (momentum and position) simultaneously. Many of them argue that we’ve seen how technology has allowed us to access what seemed inaccessible, including revealing the invisible entities we now call atoms. The idea of atoms far preceded observing them with the scanning tunneling microscope, but eventually scientists and technologists succeeded. Shouldn’t we expect that behind indeterminacy there are hidden variables which we’ve yet to uncover?

 

In P-Chem 2 this semester, I emphasize the statistical underpinnings of thermodynamics (from a Boltzmann view) and we work on understanding how the mathematics of the microscopic world leads to the macroscopic variables of thermodynamics. I hadn’t thought about couching this discussion in terms of hidden variables until reading Kumar’s book, but essentially that’s what we’re doing. Early in the semester when I introduce the full virial equation to describe non-ideal gases and contrast it to the two-parameter van der Waals equation, we talk about the introduction of variables to ‘model’ an invisible picture underlying reality. I’ve been emphasizing the use of models and their limitations, but I hadn’t explicitly addressed the general idea behind hidden variables (even when we use them). Something for me to consider doing!
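That contrast is easy to make concrete. Here’s a minimal sketch comparing the ideal-gas pressure to the two-parameter van der Waals pressure for CO2 (the a and b constants are standard literature values; the chosen temperature and molar volume are arbitrary illustrative numbers):

```python
R = 0.083145  # gas constant, L bar / (mol K)

def p_ideal(T, Vm):
    """Ideal gas: no molecular picture at all."""
    return R * T / Vm

def p_vdw(T, Vm, a, b):
    """van der Waals: b models finite molecular size, a models attraction."""
    return R * T / (Vm - b) - a / Vm**2

# CO2 constants (approximate literature values).
a_co2, b_co2 = 3.640, 0.04267   # L^2 bar / mol^2 and L / mol

T, Vm = 300.0, 1.0  # K, L/mol
print(f"ideal: {p_ideal(T, Vm):.2f} bar")               # ~24.9 bar
print(f"vdW:   {p_vdw(T, Vm, a_co2, b_co2):.2f} bar")   # ~22.4 bar
```

The two fitted parameters quietly encode a molecular picture: b excludes the volume the molecules themselves occupy, and a accounts for their mutual attraction, which at these conditions pulls the pressure below the ideal value.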

Tuesday, March 22, 2022

Exact Science

Earlier this month, I blogged about The Principles of Life by Tibor Ganti (and collaborators). I was able to get my hands on a physical copy of the book (thanks to Interlibrary Loan) and I’m now in Chapter 3, “The unitary theory of life”. It begins by introducing the term “exact science”. Here’s how Ganti introduces the idea.

 

Exact sciences, such as mathematics, mechanics, the theory of electricity, thermodynamics, chemistry, etc., are characterized by the common fact that they all have specific model systems, i.e., systems which represent phenomena of the real world without disturbing factors… By using model systems the exact sciences can describe the phenomena under investigation in qualitative and quantitative respects, can formulate them mathematically.

 

I like how Ganti emphasizes the necessity of models in the physical sciences. As chemists, we constantly use structure-based models to represent too-small-to-see atoms and molecules. He also talks about model systems. We’re not just modeling individual structures. We’re also modeling systems and the relationships of different parts within the system. Ganti then emphasizes two things.

 

First… any one of the exact sciences models only one part of the real world and even this one only from a definite point of view, independently of the other phenomena. Second, it must be understood that it is not the real world which the exact sciences are capable of treating with an arbitrary(?) exactness, but their own model systems. Real-world phenomena are only approximated by them.

 

I’ve been thinking a lot about the modeling relation thanks to Robert Rosen’s work. I don’t quite grasp how it all works, but what I do know: When we formulate a model system (yes, with formulae!), we do so via reductionism. This is inherent in making a model. Ideally it captures the key characteristics of the behavior or phenomena we’re trying to describe. But inevitably it will leave out some things. If the system is complex, and not merely complicated, probing some aspect of it with a controlled test will inevitably produce surprises somewhere down the road – they might not reveal themselves immediately, depending on underlying dynamics the model cannot capture.

 

Ganti goes on to playfully describe the absurdity of geometry (points, lines, planes) in the way it defines idealized ‘units’. Mechanics then looks ridiculous in how it uses ‘point’ masses. But the wonder is how well it works. We’ve been using Newton’s laws for over three centuries very, very fruitfully. Electricity begins with ‘point’ charges. Chemistry begins with ‘atoms’, but we have other strange, hard-to-pin-down elementary definitions, such as the word ‘element’. But since we’re discussing Life, what is the ‘unit’ of entities that are alive? This question is tricky, and Ganti spends quite a few pages discussing life, death, and the in-between – not dead, but not living – realm associated with cryptobiosis. The simple answer, the ‘cell’, isn’t quite sufficient, and one has to account for different levels without privileging any one in particular – a biological relativity point of view.

 

This leads to an interesting discussion of stability. Ganti exhorts us to be careful because we have to describe narrow scientific ‘model’ terms using everyday language, much like how Bohr argues about the nature of reality using his ‘complementarity’ view given the strangeness of quantum mechanics. Ganti distinguishes equilibrium from stability. He then considers the stationary (or steady) state, which he will differentiate from homeostasis.

 

The stationary state is, by definition, a state of open systems with an equal rate of inward and outward movement of matter. However, living systems are fundamentally growing (accumulating) systems, in which more matter enters than leaves… [it] cannot be in a stationary state, and hence attempts to reduce the stability of living systems to the irreversible thermodynamics of open systems in the steady state are… doomed to failure.

 

The nub of my research studying proto-metabolic systems is how to move from thermodynamic (and kinetic) systems where equilibrium reigns into the arena of non-equilibrium systems that exhibit some measure of stability (such as stationary states), with layered hierarchies of control on top of all this. I’ve barely begun to learn how to deal with non-equilibrium thermodynamics, and the field of control theory (of which I am mostly ignorant) already looms. I have a long, long way to go. Ganti does provide some direction – his idea of “constrained paths” embodied in the wetworks of cyclic chemistry. Autocatalytic networked cyclic chemistry, to be more specific.

 

This measure of control allows the living system to maintain a ‘stable’ internal environment (the idea of homeostasis) against what is going on external to it. This requires sampling some parameters of the external environment, and then responding to it in some way. An element of prediction or anticipation must be involved. The system needs to formulate a model that is sufficiently reduced to guess what might happen next – akin to running a quick simulation in a short timeframe before deciding how to respond. I find myself forced to use such anthropomorphic expressions: guessing and deciding seem to tread on consciousness and free-will. Indeed, there is a long way to go. But the way forward seems to be starting with ‘exact science’, recognizing its limitations, and continuing to refine better and better models.

Saturday, March 19, 2022

X out of Compton

Einstein won his Nobel prize for explaining the photoelectric effect, one of several strange observations that led to the quantum mechanics revolution. Einstein himself was uncomfortable with the idea that light (electromagnetic radiation) might not be a wave but a particle. The physics luminaries of the day also thought that this strange idea seemed far-fetched, although it was difficult to deny how elegantly it solved the problem. Planck, who first introduced quanta, tried to ignore the implications. Bohr, who made powerful use of quanta to explain the behavior of electrons in atoms, was so uncomfortable with the idea of light being quantum in nature that he proposed breaking the law of conservation of energy.

 


These stories and more are lively narrated in Manjit Kumar’s Quantum, subtitled “Einstein, Bohr, and the great debate about the nature of reality”. I’m about 40% through the book. Most of the story is not new to me. Over the years, I’ve read much of the history of quantum mechanics and how we have come to our present description of the atom – the fundamental unit of chemistry. Bohr is central to that story, but many others contributed. Reading Kumar’s book reminded me of something I had forgotten: the key role of Arthur Compton’s X-ray scattering experiments in turning the light-quanta hypothesis into bedrock theory.

 

Briefly, Compton fired X-rays (short wavelength EM radiation) at various elements and then measured what came out as these X-rays were scattered. It’s somewhat like the famous Rutherford experiment we teach students in introductory chemistry, where alpha-particles were fired at a thin layer of gold. Compton discovered that the “secondary X-rays” that resulted from the scattering were at longer wavelengths (and thus lower frequency) than what he started with. Essentially, when EM radiation interacts with matter, some of its energy can be transferred to other forms, and the resulting “scattered” radiation has lost that same amount. Energy remains conserved. One photon comes in. A different one comes out, red-shifted in color.
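The shift Compton measured is captured by a simple formula, Δλ = (h/mₑc)(1 − cos θ), where θ is the scattering angle. A quick sketch with standard physical constants:

```python
import math

h   = 6.62607015e-34    # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
c   = 2.99792458e8      # speed of light, m/s

def compton_shift(theta_deg):
    """Increase in wavelength of a photon scattered off an electron at angle theta."""
    return (h / (m_e * c)) * (1 - math.cos(math.radians(theta_deg)))

print(f"shift at  90 deg: {compton_shift(90) * 1e12:.3f} pm")   # 2.426 pm
print(f"shift at 180 deg: {compton_shift(180) * 1e12:.3f} pm")  # 4.853 pm
```

At 90° the shift equals the electron’s Compton wavelength, about 2.43 picometers – tiny in absolute terms, but a measurable fraction of an X-ray wavelength, which is why X-rays (and not visible light) revealed the effect.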

 

This made me think of magic, line-of-sight, and Harry Potter’s second Triwizard task. I had previously proposed that magic is mediated by electromagnetic radiation. That’s why Hogwarts can’t abide the use of electricity or electronic devices. Too much interference! But if you cast a spell, thereby directing ‘magical energy’ in a particular direction, there might still be interference as it interacts with matter. Since we live in “thin” air, EM radiation easily passes through. It can even “bounce” off solid objects. But could it go awry if there is some “scattering” involved?

 

If your spell is for destructive purposes, then perhaps it doesn’t matter. You’re just trying to channel EM radiation to break a bunch of chemical bonds and have the object fall apart. Like firing a laser cannon in sci-fi, or cutting something with a lightsaber (in a galaxy far, far away). But what if you need a controlled specific amount of energy aimed at a specific location? To unlock a door perhaps you’d need just the right amount of energy to mechanically disengage the lock. But if air molecules between you and the door interacted with your magical energy, transforming some of the “carrier” EM radiation into different frequencies, maybe that messes up your door-unlocking spell. Maybe that’s why the closer you are to wherever your spell is cast, the more effective it is. This also brings up the possibility of constructing a door-lock that interferes with the standard Alohomora, as Snape does, to prevent break-ins into his dungeon-office. (One must dissuade students up to no good from trying to steal potion ingredients, or in my line of work, lab chemicals.) A suitable material that scatters EM radiation of the Alohomora wavelength should work!

 

Quantal-light, when behaving like a particle, doesn’t “spread out” like a wave. Does this mean line-of-sight is very important when spellcasting? There seems to be some amount of “aiming” involved during spell-casting especially where doing battle is concerned. You need to point your magic wand in the right direction to be successful – a feature visually emphasized in the Harry Potter movies. But perhaps I’m being too limiting in my emphasis on the physical aspects. Mental energy and imagination are likely to be just as important in directing one’s spell to its desired outcome. And so the adept magician can bend and direct those EM waves in an extra-sensory mind-over-photon way. The Accio summoning spell doesn’t require line of sight; and is useful when you’re summoning an object you can’t necessarily see.

 

This brings us to the Second Task in Harry Potter and the Goblet of Fire. The action is all underwater. Magic, carried by EM radiation, has to pass through a much denser milieu – lots of water molecules amongst other things! We first see Harry dispatch a grindylow with Relashio. According to the text: “A large bubble issued from his mouth, and his wand, instead of sending sparks at the grindylows, pelted them with what seemed to be a jet of boiling water…” The medium (water) changed the spell’s effects. Nothing much else happens magically. Cedric uses a knife and Harry uses a jagged stone to cut their friends free. Harry then threatens the merpeople with his wand so he can also save Fleur’s sister, but no other magic spells are cast.

 

I hypothesize that the denser the medium, the more one might expect scattering of EM radiation. Spells cast underwater change in their effects. What about in a dense fog? Light certainly is scattered. Or in driving rain? Lots of opportunity for interference. But let’s get back to fundamentals. Magic may be mediated or carried by EM radiation, but it might or might not be quantal in nature. To interact with matter, the results might be quantal (e.g. the breaking of a chemical bond) but the magic-energy might be a continuous function, and perhaps magic is infinitely divisible. Hmm… I need to learn more magical theory. For now they seem like “rays” of unknown properties. We might have even called them X-rays, except that name is already taken, and thanks to Compton, we learned something fundamental and strange about the nature of reality.

Friday, March 18, 2022

Pearls in a Pigsty

I try to read broadly in my field: the chemistry of the origin of life. The latest compilation of articles in book form is Prebiotic Chemistry and the Origin of Life (Springer, 2022) edited by Neubeck and McMahon.

 


Essay #5 in the book is “Origin of Nucleic Acids” by Frank Trixler. The title is ambitious. The contents are a mess. Or maybe I’m just not smart enough to pull all the threads together. The chapter starts off promisingly. Trixler poses several interesting questions. Among them: Why is AMP (adenosine monophosphate) so common? Why does life use just four nucleobases? How do we solve the water paradox? What makes a nucleotide sequence functional? And what is the genetic code encoding anyway?

 

I hadn’t thought much about the ubiquity of AMP so I found this section interesting. Trixler points out the many places it shows up, but doesn’t satisfactorily answer why it is ubiquitous. He makes the claim that its “catalytic activity is of very high statistical significance” in a crowded fluid system such as you might find in cells. These “nanofluidic effects” supposedly arise from structural considerations, but Trixler’s arguments sounded like Kipling-esque Just So stories. I was unconvinced.

 

Why don’t we have multiple versions of the genetic code? Standardization is Trixler’s answer. I’m inclined to agree in a broad sense, although it’s less clear why nature has picked out the particular purine and pyrimidine nucleobases (A, C, G, T/U) that we use for the four-letter code. There are numerous chemical cousins that can also exhibit base-pairing and form alternative or even expanded codes. No clear answer here.

 

The next two sections are titled “A Paradox Falls Into Water” and “The Crystalline Womb”. Trixler favors minerals and solid surfaces to facilitate polymerization of nucleic acids to generate sequences. I personally think that inorganic materials played an important role at life’s origin, but much of these two sections quotes experimental results too broadly to be relevant to the question at hand. Montmorillonite, the magical clay, comes up again as expected, in a Just So way. Nothing new here.

 

Then there’s a section about “Functional Sequence Complexity” as proposed by Abel. Trixler makes the argument that thinking about Complexity as a one-dimensional function, with “order” at one end and “randomness” at the other, needs to be supplemented by two other measures: algorithmic compressibility and algorithmic function. But then he claims that functional complexity lies in this special space – and there’s a graph to demonstrate this – but no evidence whatsoever. There’s a connection to lichen growth, but if he was trying to make an argument here, I just didn’t grasp it. (The Abel reference was interesting but mainly arguing for a negative, and thereby inconclusive at the end.)

 

You’d think I would give up after the mess so far – like wading through a pigsty of facts thrown around willy-nilly. But then there’s a beautiful gem, a pearl of wisdom and clarity that shines through. Here’s a snapshot of the two relevant paragraphs.

 


While I disagree with the word ‘all’ in the first sentence, I felt that Trixler lays out the thermodynamic viewpoint cleanly and clearly. Since this echoes how I look at things, his first paragraph is preaching to the choir, but it’s not new to me. The second paragraph is the real winner for me for distinguishing what living organisms do! I can’t say it any better so I won’t try to paraphrase. It’s the pearl in the pigsty!

 

Unfortunately, that’s it. The remaining sections about the chicken-and-egg DNA-protein paradox, or the idea of dynamic kinetic stability, are a mishmash. I was a little dazed after I finished reading the article. It sounded to me like one of my confused students trying to do a data dump at a superficial level, pulling in threads all over the place with bits and pieces of “evidence”. Or perhaps a better analogy is that this is what I sound like to some of my students: stuff is coming out of my mouth expounding on some chemistry topic and they feel totally lost, possibly because they didn’t read or don’t remember what they’ve learned that we’re trying to build on. Or maybe it’s somewhere in between the two. Unbelievably, I decided to download a couple of other Trixler papers where he provides “evidence” for some of his statements, and gave up after skimming them. I stopped hunting for pearls in the pigsty – at least that’s what it looks like to me. Possibly I’m ignorant, or my mind isn’t able to see majestic threads being pulled together. All I saw was mostly mess. But I won’t discount the beautiful pearl – and for that it was worth making my way through the article.


Sunday, March 13, 2022

The Dawn of Everything

Ambitious narratives should be supported by significant evidence. This is not an easy thing to do when the narrative encompasses the large sweep of human history. An anthropologist and an archaeologist teamed up to write a 500+ page tome with the ambitious title The Dawn of Everything: A New History of Humanity. Sadly, one of the authors, David Graeber, passed away. It remains to be seen whether his co-author David Wengrow will follow up the story – there is still much to uncover.

 


Many parts of the story were not so new to me because I had previously read Against the Grain and The Art of Not Being Governed, both by James C. Scott. A number of the themes Scott writes about are echoed by Graeber and Wengrow. I find their work more compelling than the grand narratives of Jared Diamond (Guns, Germs and Steel) or Yuval Noah Harari (Sapiens) – captivating sweeps that are conceptually ‘cleaner’ but perhaps downplay the messiness of human history and how much we really don’t know about the past.

 

There are several conceptual pieces in The Dawn of Everything that struck me. The most prominent is how the authors identify three conceptual parts that give the modern nation-state its particular flavor of dominance, bureaucracy and governance. Sovereignty is one piece; some ancient ‘kingdoms’ had a monarch in a central role who wielded power – although being able to enforce one’s will was much more limited in the ancient world. Administration is another piece; it is related to esoteric knowledge and results in management applied on a larger scale via the use of specialists. Charismatic competition is the third piece; this may feature voting and the election of leaders, but may also encompass how decisions are made that affect the whole ‘tribe’. The authors highlight ancient societies that focused on one of the three, which are then followed by others that combine two of the three. They speculate how this might take place, but leave the question open. And how the three parts come together in modern dominions was likely planned for a future book.

 

The strength of their arguments comes from a wide range of examples, including many puzzling ones such as Teotihuacan and several other societies in the early Americas. I was familiar with the famous Mayan and Incan ‘empires’, and I knew something about the more shadowy Olmecs. I’d also read a lot about Egypt, Sumeria, and the Near East – although I didn’t realize how unusual Knossos (Crete) was until reading the authors’ analyses. I knew something about the Shang dynasty, but I was intrigued to learn of what might have come before. The authors also provided new insight into the Indus valley ‘civilization’ cities. I also came away with the appreciation that words such as ‘empire’, ‘kingdom’, ‘tribe’ or ‘civilization’ carry with them a lot of baggage.

 

Schismogenesis – when societies expressly try to distinguish themselves from their neighbors – gets a prominent treatment by Graeber and Wengrow. They expand on Scott’s observations contrasting the societies of the plains with those in the hinterlands. I think the argument is relevant today as globalization continues its steady march: we see polarization among different people groups and increased “us versus them” rhetoric. Interestingly, Graeber and Wengrow started their quest by exploring “what are the origins of inequality?” and then went down a huge rabbit hole. They haven’t answered their original question, perhaps, as they suggest, because it’s the wrong question to ask. I’m happy that they didn’t stop with Hobbes and Rousseau (which is where they started) but instead give their readers something much richer and more complex.

 

I won’t make any of their arguments here – because I would not do them justice at all in a short informal blog post. (Read their book!) I will, however, in closing, quote one paragraph from their Conclusion that gives you a flavor of their writing and the interesting questions they explore. I found it both powerful and revealing.

 

“Social science has been largely a study of the ways in which human beings are not free: the way that our actions and understandings might be said to be determined by forces outside our control. Any account which appears to show human beings collectively shaping their own destiny, or even expressing freedom for its own sake, will likely be written off as illusory, awaiting ‘real’ scientific explanation; or if none is forthcoming (why do people dance?), as outside the scope of social theory entirely. This is one reason why most ‘big histories’ place such a strong focus on technology. Dividing up the human past according to the primary material from which tools and weapons were made (Stone Age, Bronze Age, Iron Age) or else describing it as a series of revolutionary breakthroughs (Agricultural Revolution, Urban Revolution, Industrial Revolution), they then assume that the technologies themselves largely determine the shape that human societies will take for centuries to come – or at least until the next abrupt and unexpected breakthrough comes along to change everything again.”

 

And with that, I was motivated to play a game of History of the World this weekend. (The Sumerians survived the seven epochs!) Its narrative of monolithic empire-building and subsequent shattering is much the opposite of The Dawn of Everything, but one is indeed reminded that such empires are not built to last. Dusk still comes, and Ozymandias turns to dust.

Monday, March 7, 2022

Forced Trajectories

Google Books had the first chapter of the hard-to-find Principles of Life by Tibor Ganti, famous for his chemoton model. Ganti’s prose is remarkably clear and, in my opinion, hits all the key conceptual points needed to define Life. Chapter 1, titled “Levels of Life and Death” was written circa 2000. I’ve put in a request for the book through interlibrary loan and hope to read the rest of it soon! Meanwhile, in today’s post I will quote Ganti (in italics) accompanied by my short commentary.

 

First, what distinguishes living systems from non-living systems? Tricky question. Ganti starts with a discussion of Schrodinger’s famous What is Life? book.

 

All living systems – while alive – do something, work, function.

 

But other things do work and have a function. Ganti anticipates this. He also distinguishes natural systems from man-made (artificial) systems.

 

However, living things are not the only systems that ‘do something’ and also do it for long periods… rivers do their erosive work continuously… in technology, engines are also able to do work without interruption. What is common in these systems is that they are positioned between the higher and the lower potential level of some kind of energy, and, part of the energy which flows through the system is transformed to work.

 

He then hits on the crucial requirement of an energy gradient. There must be a flow of energy, and then something in the system needs to be able to extract work from that energy flow.

 

To get from random to directed work, the flow of energy must be manipulated along a series of forced trajectories within the system.

 

I think this idea of ‘forced trajectories’ is very important. I’ve been puzzling over what this should look like in a chemical ‘living’ system. There are some allusions to it in other articles and books, but I’m still fuzzy on what this means. I get the sense that there is a cascade of chemical reactions with a particular direction or driving force. But how such a system is assembled is less than clear.

 

The driving force of living systems is chemical energy… However, in contrast with the situation for mechanical [or electrical] machines, the energy flow in living systems is manipulated by chemical means… In contrast with manmade technologies, where the machines are based on mechanical or electronic automata, living systems are fundamentally chemical automata. During evolution, the mechanisms of living systems have sometimes been extended using mechanical and electronic components, but their basic structures remain chemical automata. They manipulate the driving energy by chemical methods.

 

As a chemist, I strongly resonate with Ganti’s description. Of course this raises the question of how chemistry manipulates the ‘driving energy’. The second law of thermodynamics is a driving force. Bond-breaking and bond-forming at the molecular level is the activity I consider fundamental. At the ‘body’ temperature of living organisms, the enthalpic contribution to making and breaking bonds often outweighs the entropic contribution, and a combination of both allows one to calculate the change in free energy of reaction (delta-G!) – by definition, the maximum amount of useful work one might be able to extract from the chemical reaction. So my imagined cascade of reactions needs to have a negative delta-G, but also be arranged in a way that allows for extraction of energy for useful work. I haven’t defined ‘useful’, but it connotes an end-goal or function, thereby complicating ideas of cause, effect, time, and agency.
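The bookkeeping here is simple enough to sketch in a few lines. Here’s a toy calculation of delta-G = delta-H – T·delta-S at roughly body temperature; the numbers are illustrative placeholders of my own choosing, not data for any real reaction.

```python
# Toy free-energy bookkeeping: delta-G = delta-H - T*delta-S.
# All numbers are illustrative placeholders, not data for any real reaction.

def delta_g(delta_h_kj, delta_s_j_per_k, temp_k=310.0):
    """Change in Gibbs free energy (kJ/mol) at temperature temp_k (~body temperature)."""
    return delta_h_kj - temp_k * (delta_s_j_per_k / 1000.0)  # convert J/K to kJ/K

# An exothermic step with a mildly unfavorable entropy change:
dg = delta_g(delta_h_kj=-50.0, delta_s_j_per_k=-20.0)
print(f"delta-G = {dg:.1f} kJ/mol")  # prints delta-G = -43.8 kJ/mol
```

A negative delta-G means the step is spontaneous, and its magnitude bounds the useful work extractable from that step – which is exactly the budget a cascade of reactions has to spend.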

 

Chemical reactions can proceed with suitable intensity only in the fluid phase (gas or solution)... the continuous presence of some kind of solvent is essential. The functioning of mechanical automata is restricted to a rigorous geometrical order of their parts, and the functioning of electronic automata is also restricted to some geometric arrangement of their components. The functioning of the fluid automata is largely independent of any kind of geometrical order. It works even if the solution is stirred, or if half of it is poured into another container… Compared with mechanical and electrical automata… these properties provide living systems with highly favorable possibilities. One of these is, the capacity for reproduction – autocatalytic systems are well known in chemistry.

 

I’ve spent some time thinking about autocatalytic systems, but I hadn’t pondered the importance of being in a fluidic milieu and being ‘independent of geometrical order’ other than superficially. Ganti’s argument makes sense to me, especially if you want to have reproduction of what might be a complex system. The ‘suitable intensity’ of fluids highlights an analog system that presages control mechanisms. Or maybe I’m reading too much into this.

 

Ganti then goes on to define his minimal living system. I have no quarrel with his definition.

 

The fundamental unit (i.e. the minimal system) of biology must have some specific properties:

·      It must function under the direction of a program

·      It must reproduce itself

·      It and its progeny must be separate from the environment

 

This is followed by his description of the chemoton made up of three subsystems. Autocatalysis features importantly in all of them, and they have to work together. They cover the fundamentals we observe when we think about ‘classes’ of molecules found in a cell, the smallest autopoietic unit.

 

A chemoton consists of three different autocatalytic (i.e. reproductive) fluid automata, which are connected to each other stoichiometrically…

(1)  the metabolic subsystem [with] a reaction network of chemical compounds with mostly low molecular weight able to reproduce itself, but also the compounds needed to reproduce the other two subsystems,

(2)  a two-dimensional fluid membrane [with] the capacity for autocatalytic growth using the compounds produced by the first subsystem,

(3)  a reaction system able to produce macromolecules by template polycondensation using the compounds synthesized by the metabolic subsystem… the byproducts are also needed for the formation of the membrane. In this way, the third subsystem is able to control the working of the other two solely by stoichiometric coupling.

… the three fluid automata become a unified chemical supersystem through the forced stoichiometrical connections… unable to function without each other… but their co-operation can function.

 

I’m reminded that my focus on proto-metabolism, leading to the first subsystem, might blind me to its crucial interactions with the other two subsystems. This might also explain why I’ve been puzzling over how to repeatedly drive autocatalytic cycles if the food molecules run out. High-energy ‘food’ molecules transform to low-energy waste molecules while organisms use some of the energy towards growth and repair. And if one organism’s waste is another’s food, then a natural symbiosis may sustain those organisms. You’ve gotta eat poop in the primordial soup!

 

Kinetic analysis of the elementary chemical reactions allows us to perform an exact numerical investigation of the workings of the chemotons using a computer… The fact that it is an abstract system means that its components are not restricted to particular chemical compounds. However, they must have certain stoichiometric capabilities and, they must be able to produce certain compounds, which are important for the whole system.

 

Since I’m a computational chemist, I’m encouraged by Ganti’s words. As a quantum chemist, so far I’ve focused on the easier thermodynamic parts, because determining kinetics requires calculating transition states – transient and potentially tricky to optimize. But I’m reminded that I have to worry about the kinetics; it’s a crucial piece of the story. Thermodynamic gradients may provide a driving force, but kinetics is the key to ‘forced trajectories’, opening dams in strategic places.

 

The model does not contain any prescription or restriction on the speed of the chemical reactions in the system. Therefore it remains valid whether the reaction rates are determined exclusively by the concentrations of the components or are influenced by catalytic effects…

 

I take this to mean that to some extent I’m on the right track with my model-building approach. The kinetics are going to be important in some of the nitty-gritty, but I might be able to say something both useful and interesting without having to figure out all the activation barriers. Here’s Figure 1.1 from the book illustrating Ganti’s chemoton. I’ve drawn similar pictures myself.
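In that spirit, here’s the sort of minimal mass-action model I have in mind – not Ganti’s chemoton, just a single autocatalytic step A + X → 2X with ‘food’ A fed in and X subject to decay, using arbitrary rate constants of my own choosing:

```python
# A single autocatalytic step, A + X -> 2X, with food A supplied at a constant
# rate and X subject to decay/dilution. Mass-action kinetics, forward Euler.
# All rate constants and initial concentrations are arbitrary illustrative choices.

def simulate(k_cat=1.0, feed=0.1, decay=0.05, a=1.0, x=1e-3, dt=0.01, steps=20000):
    """Integrate da/dt = feed - k_cat*a*x and dx/dt = k_cat*a*x - decay*x."""
    for _ in range(steps):
        growth = k_cat * a * x
        a += dt * (feed - growth)
        x += dt * (growth - decay * x)
    return a, x

a_final, x_final = simulate()
print(a_final, x_final)  # X grows until it eats A as fast as A is supplied
```

The system settles to a steady state where supply and consumption balance (a at decay/k_cat, x at feed/a), regardless of the starting point as long as some X is present. A tiny cartoon of a driven cycle that neither runs out of food nor blows up.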

 


There are no details of the enzymes or catalysts in the model. As I’ve been thinking about how to maintain a forced trajectory through chemical means, I’ve started experimenting (by which I mean computational tests) with ‘carrier molecules’ for shuttling energy. Not ATP, which I think came later in the game. Some are redox-neutral and easier to deal with, but I’m also trying to decide the best way to model the redox reactions that drive the incorporation of CO2 into carbon-based biomass. H2 is the easy reductant to use computationally, and it might be important in some cryptobiotic cases, but extant life doesn’t use molecular hydrogen as is, for good reasons.

 

This post doesn’t have a conclusion because all this is still rattling around in my mind. But my take-home message from the first chapter of Ganti’s book is to focus on ‘forced trajectories’ and think about how to build them into my computational models. Maybe when I read the rest of the book, I’ll discover that Ganti knew the answers to the questions I’m asking. Sadly, he passed away in 2009, so I won’t be able to ask him any follow-ups.