Thursday, June 30, 2022

Thermodynamic Driving Forces

In my previous post on Jeffrey Wicken’s Evolution, Thermodynamics, and Information, I discussed how he distinguishes machines from organisms. Today, I will focus on Chapter 5 of the book: “Thermodynamic Driving Forces”.

 

Wicken doesn’t like the expression “driving forces” because “it suggests some kind of external propulsion – which is not what the teleomatic directive of the Second Law to randomize matter and energy in quantum space is about.” But he admits that “the metaphor has value, since there is an internal propulsion to the emergence and evolution of life”. Note his mention of randomizing matter and energy; I’ll get to that momentarily.

 

As in previous chapters, Wicken is concise in his definitions and tells you where he is going upfront: “Evolution is about variation, the constraints under which it occurs, and its selection according to ecological success. We will be discussing the physical basis for the variation-constraint-selection triad…” So, let’s proceed with some thermodynamics.

 

For anything to proceed according to the Second Law, you need an energy source that can be utilized, and a sink to dissipate the energy (or as Wicken says, “to receive its entropic waste”). For energy to flow from higher to lower potential, there needs to be “[energy] charging of the prebiosphere to higher levels”. Once this charging is achieved, the dissipation of that energy generates molecular complexity. Carbon chemistry is particularly well suited to generating molecular diversity.

 

What’s the best source? Photons. Concentrated packets of energy. Matter (atoms and molecules) can absorb the photon energy and become “energized”, so to speak, by having an electron excited to a higher energy level. Its potential energy has increased. It could release that energy by emitting a photon of the same wavelength, but more interestingly it could dissipate the energy via different pathways. It could, for example, dissipate it by making a new chemical bond between two atoms – an exothermic reaction that releases energy as heat (non-utilizable entropic waste). By forming chemical bonds, matter becomes more structured.
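
As a reminder of just how concentrated those packets are (my gloss, not Wicken’s), the photon’s energy is fixed by its frequency or wavelength via the Planck relation, which is also why re-emitting a photon of the same wavelength returns exactly the energy that was absorbed:

\[ E = h\nu = \frac{hc}{\lambda} \]

Here h is Planck’s constant, \nu the frequency, \lambda the wavelength, and c the speed of light; a visible photon carries a few electron-volts, comparable in order of magnitude to the energy of a chemical bond.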

 

But Wicken goes further and argues that for this structuring to lead to complexification, it is necessary that “the properties of the elements involved in the structure not be sufficient to determine that structure.” This is particularly interesting because it suggests that for complex matter to form, you need access to multiple diverse structures, with uncertainty as to which subset of structures is actually formed. Physics and chemistry constrain those possibilities, but if there is still room to play after those constraints, complexification is not only possible but expected according to the Second Law. Wicken says “complexity requires structure; but it also requires options”.

 

And now the crux of Wicken’s argument: “The physical basis for information is complexity. Complexity makes available different possibilities for functional interconnections. Organization can’t itself be quantified, since it reflects how well a system operates with respect to some desired output, rather than a physical property. One can, however quantify the complexity of an organization. A major evolutionary trend has been toward increased organizational complexity. The question is, does this trend result simply from selective pressures that provide ecological space for complex systems? Or is there a drive toward complexity operating in evolution independently of natural selection? Both are involved in evolution; but it is in prebiotic evolution that the drive toward complexity can be seen most clearly apart from selection, preparing the scene for life’s emergence. The operation of this drive depended on certain conditions of closure to which the prebiosphere was subject.”

 

Wicken defines information mathematically using a simple equation that mirrors Boltzmann’s entropy equation. He then divides up this “macroscopic” information into three parts: energetic (a function of internal energy), thermal, and configurational. The latter two are the “negative entropy” terms that can be calculated from statistical mechanics. The changes to the thermal contribution relate to structuring (“the movement of thermal energy from practically continuous translational modes to much less densely spaced vibrational modes”) and can be quantified.
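
Wicken’s actual notation isn’t reproduced here, but the equation he mirrors is Boltzmann’s, and the two “negative entropy” terms he names correspond roughly to the standard statistical-mechanical decomposition (my sketch of the textbook forms, not his):

\[ S = k_B \ln W, \qquad S \approx S_{\text{thermal}} + S_{\text{config}}, \qquad S_{\text{config}} = -nR \sum_i x_i \ln x_i \]

The thermal term counts how energy is spread over translational, rotational, and vibrational modes (hence the quote about moving thermal energy from nearly continuous translational modes to sparsely spaced vibrational ones), while the configurational term counts how matter is spread over distinguishable arrangements – written here as the ideal entropy of mixing over mole fractions x_i.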

 

Now let’s get back to dissipation. Wicken divides it into two categories: “energy-randomization” and “matter-randomization”. Energy-randomization is driven by forming chemical bonds. This reduces internal energy (and its informational counterpart). Wicken states: “The result of these associations has been to reduce the number of discrete chemical entities in the biosphere and to increase their average sizes and complexities.” Where does this energy go? It is dissipated thermally.

 

Matter-randomization is the more interesting case. Essentially the more different kinds of molecules you can form, the more you can dissipate configurational information (or produce configurational entropy). Wicken uses some simple kinetic arguments to show that this must be a crucial player in any chemical reaction trying to move towards equilibrium: “Matter-randomizing considerations therefore promote all reactions through some compositional range, and reciprocally assure that no reaction can proceed entirely to completion.”
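
A hedged sketch of why matter-randomization forbids completion – this is the standard equilibrium-thermodynamics version of the point, swapped in for Wicken’s kinetic argument: the free energy of a reacting mixture contains an ideal entropy-of-mixing term, and the entropic cost of removing the last traces of any species is unbounded, so the minimum of G always lies at some intermediate composition.

\[ \Delta S_{\text{mix}} = -R \sum_i n_i \ln x_i, \qquad \mu_i = \mu_i^{\circ} + RT \ln x_i \to -\infty \ \text{as}\ x_i \to 0 \]

Because the chemical potential of a vanishing species plunges without bound, there is always a thermodynamic payoff for keeping at least a little of everything around – which is Wicken’s point that every reaction proceeds to some extent and no reaction proceeds entirely to completion.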

 

Once he has done this, Wicken can track the information from all three parts through the various scenarios of prebiotic chemistry. Forming the diverse molecular building blocks of life is possible because this process dissipates configurational information. But after the divergent step comes the convergent step of forming biopolymers and other aggregates, driven by the formation of chemical bonds. Wicken then discusses the constraints in play and what drives the process forward in connection with the Second Law. I feel that I’m grasping the mere edges of his argument, but I think he’s on to something compelling. I’m also biased by my own research project of mapping the energetic space of proto-metabolism, so I’m predisposed to his matter-randomization arguments. But I’ll need to ponder this a little longer before I can translate a Wicken-type analysis to a model system for which I have data.

Wednesday, June 29, 2022

How and Why

A decade ago, I read Jeffrey Wicken’s Evolution, Thermodynamics, and Information. I should say I tried to read it, but didn’t understand it. Now I’m trying again, and with some Ganti and Rosen under my belt, it’s making more sense. Like Ganti’s and Rosen’s, Wicken’s ideas were outside the mainstream. The trio are pioneers – not understood in their time – but they’ve paved the way to a more comprehensive view of what it means to be a living system, and given us glimpses of what it might mean to transition from non-life to life.

 

Wicken’s goal is to bring together the three strands of thermodynamics, evolution, and information. Each area has its own historical development. They make reference to one another, but more often the result is confusion, perhaps because we don’t quite understand any of the three well enough to forge deep connections among them. Wicken begins with the distinction between operation and genesis – essentially how and why. The two can be separated cleanly for machines: one can explain the cause-and-effect processes as the machine proceeds through its mechanism of operation. As for the purpose of the machine – its genesis – the reason for its existence and why it was built lies outside the machine itself.

 

For an organism, on the other hand, operation and genesis are not so easily separable. We kinda sorta talk about them separately (yet in parallel), but how they intertwine is a trickier business. Wicken’s one-sentence summary: “It is characteristic of natural organizations from organisms to societies that their existences are inseparable from their operations as informed dissipative structures.” There’s a lot packed into that sentence because it adds three new pieces to the story in the last three words: information, dissipation, structure.

 

How do we even begin to untangle this? Wicken’s approach is to follow the energy flow. For that, one has to begin with thermodynamics. He will expand on this in detail, but here’s his pithy encapsulation: “All irreversible processes result from the disequilibrium brought about by cosmic expansion, between potential and kinetic forms of energy. Energy flows occurring under the impress of this disequilibrium have predictive consequences with respect to the overall form of evolution.” This involves things I’ve mentioned in a previous blog post: The disequilibrium comes because radiant energy from the sun hits our planet, and thanks to carbon chemistry, this energy is dissipated as heat because outer space is cold! Energy flows from source to sink, with life in between to dissipate it.

 

But the sun’s rays also fall on other planets and moons and there’s no life there that we know of – so we need to figure out how and why the environmental constraints of planet Earth allowed for the evolution of life. Thus is born a research program. I like Wicken’s description: “Research programs are conceptual spaces in which theories are nurtured, examined, and given room to grow. They can therefore only be as good as their metaphysical suppositions, which must be carefully spelled out. So too must language, terms, and definitions.” Wicken will do this throughout his book, but here’s his research program in his own words: “… to provide unifying principles for evolution – from the prebiotic generation of molecular complexity through the self-organization of living systems through their phylogenetic diversification.”

 

In discussing reductionism, Wicken writes: “Whenever one attempts to explain a phenomenon in terms of something else, it is with the presumption that the ‘something else’ is more general, more basic than the phenomenon at hand. Bringing one discipline into the explanatory orbit of the other is theory reduction… Those who believe that living systems can be explained exhaustively in terms of present physics and chemistry are ontological reductionists. Ontological reductionism has little to recommend it biologically. Organisms are organized wholes, not sums of molecular parts. The most conspicuous triumphs in the application of reductionist methodology… biochemistry and molecular biology – disciplines whose major foci have concerned the study of parts and processes of organisms in isolation from organizational settings… a willingness to whittle down the richness of the organic world to one of nuts and bolts.”

 

And this is why Wicken wants to bring in thermodynamics: “A virtue of the thermodynamic approach is that it does not lend itself to talking about chemical parts. Thermodynamics is a science of systems, dealing more with processes than static elements… Life can’t however be reduced to thermodynamics…” The second law of thermodynamics is often quoted by proponents and antagonists of the theory of biological evolution, and the common understanding of entropy relates it to order and disorder. Wicken wants to carefully define his terms and makes a distinction between order and organization. Order has to do with how. Organization has to do with why. Here is how he describes it:

 

“Organisms are not only systems of very high potential energies compared with equilibrium systems of the same composition; they are also extremely ordered, low-entropy systems. Low entropy is of itself of little biological concern [e.g. inorganic crystals are highly ordered]… Organization involves function, and the physical sciences don’t deal with function… Organized systems are characterized by structural relationships that require information for their specification… When we talk about a ‘well-organized’ system we are referring to how effectively it carries out certain activities, rather than to specific structural factors internal to the system. Yet, function depends on the existence of structural factors, and quantification requires that organization be looked at from their vantage point… Degree of organization would then be measured by the degree of constraint, by the extent to which interactions between components limit their individual degrees of freedom.”

 

That last sentence is essentially the thesis of Terrence Deacon’s Incomplete Nature, except Wicken is both clearer and more concise despite his book being 35 years old. Wicken carefully defines constraints, and ties them to function and information. Constraints have two properties: degree and complexity. This allows him to distinguish machines from organisms: “Machines are organized for external functions, which means that their operations can be kept distinct from their existences. This allows machines to be equilibrium organizations: there is no need for them to degrade free energy to maintain themselves. This is not the case with organisms…”

 

Wicken continues: “There are two ways in which the elements of a physical system can be arranged nonrandomly. One is according to internal patterns or statistical biases; the second is according to functional considerations. The former expresses order, the latter organization and functional information. That biological organization is ordered as well as information-rich expresses a unique feature of living systems that the machine world does not share… But at the molecular level, order and informational-richness stand in mutual opposition: order requires pattern, periodicity, or a reduction of combinatorial options; informational capacity decreases as these are imposed.” (Wicken defines order in terms of information compressibility, and complexity as “the measure of information required to uniquely specify the elemental relationships of structured systems.”)
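
Wicken’s compressibility notion of order is easy to demonstrate with a toy example (mine, not from the book): a periodic string compresses to almost nothing, while a random string over the same alphabet does not – the random one needs more information to specify.

    import random
    import zlib

    random.seed(0)  # reproducible 'random' string

    # Highly ordered (periodic) vs. disordered string over the same two-letter alphabet
    ordered = "AB" * 1000
    disordered = "".join(random.choice("AB") for _ in range(2000))

    # Compressed size as a rough stand-in for the information needed to specify the string
    for label, s in [("ordered", ordered), ("disordered", disordered)]:
        print(f"{label}: {len(s)} chars -> {len(zlib.compress(s.encode()))} bytes compressed")

The periodic string should shrink to a few tens of bytes while the random one stays in the hundreds – exactly the opposition Wicken describes: imposing pattern boosts order but collapses informational capacity.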

 

Entropy lies at the heart of quantifying order and information. But like other very general concepts such as Energy and Life, it is tricky to define. Wicken distinguishes Boltzmann entropy from Shannon entropy, and discusses why they are sometimes conflated even though there are significant differences in how they are defined. (They share equations that look similar.) This provides a way to quantify the ‘how’ in terms of statistics and probability calculations, but the ‘why’ remains elusive. Wicken writes: “Information content depends on complexity but is not coextensive with it.” He will attempt to connect the how and why, but that’s the subject of a future blog post.
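
For reference, the two equations that get conflated (standard textbook forms, not Wicken’s notation):

\[ S = -k_B \sum_i p_i \ln p_i \qquad \text{(Gibbs–Boltzmann, over microstate probabilities)} \]

\[ H = -\sum_i p_i \log_2 p_i \qquad \text{(Shannon, over message or symbol probabilities)} \]

They are formally identical up to the constant and the base of the logarithm, but the first is anchored to physical microstates and carries units of energy per temperature, while the second applies to any probability distribution whatsoever – which is exactly why they are so easily conflated.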

Friday, June 24, 2022

Space-filling

After watching the third Fantastic Beasts movie, and realizing how little of the previous movies I remembered, I decided to go back and watch them. The fantastic beasts are more interesting and plentiful, especially in the first movie. This time around, I found myself pondering the space-filling serpent. According to the Harry Potter Fandom Wiki, it is called an Occamy (and shown in the picture below from the wiki). The ability to grow or shrink to fill space is ‘choranaptyxism’, an invented word in the Wizarding World pulled from the Greek words for ‘space’ and ‘unfolding’. Why not use Latin instead of Greek?

 


Why does the beast grow or shrink to fill its space? I don’t know. Why doesn’t it do so when living in Newt Scamander’s garden of beasts? I don’t know either. Why does Newt ask for a teapot to trap it – shouldn’t any other container do? Possibly; it’s unclear. Maybe Newt says this in the heat of the moment to allow his comrades to focus on a solution. How does this beast change its size to fit its container? I don’t know, but I have speculated on what atomic-level forces might be affected as matter is enlarged or shrunk (it is more problematic than it looks). Then again, this is a fantastic magical beast, so maybe the rules of ‘normal’ matter don’t apply, just as when you cast an Engorgio spell to make a spider larger (to Ron Weasley’s horror) or when (Marvel Comics) Ant-Man shrinks.

 

In an introductory chemistry class, students are taught that gases “take the shape of their container”. The size of the gas particles does not change. Rather, because of their constant motion, the particles whiz around to “fill” the container in their never-ceasing movement. Matter isn’t being resized as the container size changes. The pressure of the gas rises as the volume of the container is decreased and falls as it is increased, provided the temperature remains constant (Boyle’s Law).
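
In symbols, the relationship is the familiar inverse proportionality (nothing beyond the standard textbook statement):

\[ PV = \text{constant} \quad \Rightarrow \quad P_1 V_1 = P_2 V_2 \qquad \text{(fixed amount of gas, constant } T\text{)} \]

Halve the container’s volume and the pressure doubles; the gas particles themselves never change size.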

 

Also in introductory chemistry class, students learn different ‘models’ to represent molecules, one of which is called the space-filling model. This model is sort of like a gas-filled container except that (1) instead of gas particles, electrons are the constantly moving entities, and (2) there isn’t actually a boundary per se – the negatively-charged electrons stay relatively close to the positively-charged nucleus because of electrostatic attraction. The space-filling model comes into play because the electron ‘clouds’ on two different molecules cannot substantially penetrate each other, thanks to the Pauli Exclusion Principle. Were molecules to meet in motion, they would bounce off each other, never passing through each other like ghosts.

 

This reminded me of a vignette in Tom McLeish’s book. Robert Grosseteste was a thirteenth century cleric who became Bishop of Lincoln in 1235. He incorporated what we would today call ‘science’ into his theological thoughts. One thing that puzzled him was the ‘solidity’ of matter. (It’s surprising that students today aren’t puzzled – it is strange!) McLeish writes: “If [Pauli Principle repulsion] were not the case then solid or liquid matter as we know it could not exist – all matter would simply pass through itself… think of a simple version of an atomic picture in which the atoms are really point-like particles – since this is the picture faithful to the ancient Greek atoms – the indivisible ones. At first it seems as if this might be a promising route to explain the matter we experience in terms of its hidden, and simpler, substructure… but in this case classical atomism doesn’t work. If we stay with the idea of point-like particles then solidity simply does not appear.”

 

Why don’t we just give up the idea of point-like particles and use ‘hard spheres’? This isn’t as easy as it looks because it still begs the question of why you have hard spheres in the first place. For that matter, classical mechanics (the stuff one learns in physics class) is predicated on using point particles that give us beautiful and simple equations we can use! Without explaining the ‘bulk’ of atoms, we’re stuck in a circular argument spiraling into infinitesimal point-sized particles (which don’t exist). We had to wait until quantum mechanics to figure out Pauli repulsion – a good seven hundred years after Grosseteste became a bishop.

 

Grosseteste’s idea (in De luce) was to use light as a sort-of explanation. McLeish writes: “… light, unlike atoms, does possess a natural ‘extension’ – open a shutter and it streams in to fill the dusty air beyond uniformly and immediately. If matter cannot of itself (simple and without dimension) fill space, then maybe the operation of light on matter might endow the tiny particles with extension by carrying them, or somehow extendedness into them. Actually, [Grosseteste] is very careful to say that this source of corporality might not actually be light itself, but if not then something very like it… Remarkably, Grosseteste’s insight turns out to be more or less correct… light is space-filling in that it is a wave… the quantum waviness of matter allows it to be solid, and prevents my falling through the chair I am sitting on.”

 

Taking this a step further, Grosseteste “makes an extraordinary leap of imagination: he attempts to apply his theory of local matter to the structure of the universe as a whole. Beginning with a flash of light, the entire universe is filled and expanded by its self-propagation until it has reached huge dimensions. Solidifying in its exterior shell, re-radiated light from this shell of ‘perfected matter’ then concentrates matter back towards the center, leaving the successive planetary shells in its wake, and the unrefined elements of fire, air, water and earth at the center.” It’s clever. Grosseteste comes up with a ‘Let There Be Light’ creation of Aristotle’s cosmos. As an aside, Grosseteste was also interested in how matter might manifest in different Aristotelian ‘accidental’ forms – because he was interested in how one might explain Catholic theology’s concept of transubstantiation – the bread becomes the ‘body’ of Jesus Christ in some participatory way even though it still looks and tastes like bread to the eater.

 

Is there a creative explanation for the space-filling serpent in the magical world of Fantastic Beasts? I don’t know. I’d love to get my hands on a standard Hogwarts textbook: Magical Theory by Adalbert Waffling might be one. Or maybe Newt’s textbook. But I have a feeling that the explanation will leave me wanting. Or I need to ghostwrite the book with an infusion of the magic of quantum mechanics. Quantum mechanics isn’t astrology, but sometimes feels like it. (That book has been written.) So as you’re sitting on your chair reading this, take a moment to ponder why you don’t fall through it. The weirdness of quantum mechanics that gives rise to the space-filling model of molecules gives us solidity!

Tuesday, June 21, 2022

Germ Theory

In the final chapter of his book, Plagues Upon the Earth, Kyle Harper begins with the setting of a Jules Verne sci-fi novel (The Begum’s Millions). It’s a tale of two cities. One is all about industry and mass-producing weapons of destruction. The other focuses on health, which sounds like a good thing but is stifling in its own way: “… public spaces and private habits were minutely regulated to promote healthy living… Hygiene was a public imperative and private duty.” Covid-19 has brought this view to the forefront once again, but the “City of Health” approach has been going on for a while. Verne’s utopia is now a reality in many places, at least if you count crude mortality rates – we’ve exceeded Verne’s expectations.

 

As public health laws were being enacted in the mid-nineteenth century, Harper writes: “It is one of history’s ironies that the sanitary reformers based their progressive public health politics on scientific principles that were already becoming obsolete. The sanitarians were mostly committed to the miasma theory of disease. In this view, filth, pollution, putrefaction are the agents of disease.” Miasma theory was challenged by ‘contagionists’ who “believed that disease was transmitted from one infected person (or infected article, like a piece of clothing or agricultural product) to another.” This was the birth of early epidemiology and bolstered the ideas of germ theory.

 

I find it particularly interesting that germ theory became ascendant in the 1860s and 1870s, around the time that atomic theory started to gain more adherents. The Karlsruhe conference of 1860 was a landmark international chemistry conference, the highlight being Cannizzaro’s work on atomic weights. The tide was turning and chemists were starting to believe (or at least invoke) invisible particles as the basis of matter that could explain their chemical experiments. There is an uncanny parallel to germ theory; Harper writes: “Germ theory is the radical idea that disease is caused when the body is invaded by microorganisms invisible to the naked eye.”

 

The two towering figures in germ theory during this period were Louis Pasteur and Robert Koch. Pasteur’s name is well-known, thanks to the widespread process of pasteurization. Pasteur’s experiments are also invoked as the death-knell of the theory of spontaneous generation (which ironically has resurfaced in origin-of-life chemistry, my area of research). Pasteur is also honored for turning vaccination into a science. I learned from Harper’s book that Pasteur chose the word ‘vaccination’ in honor of Edward Jenner’s smallpox vaccine that came from cowpox (Latin vacca for cow).

 

While Koch is less widely known than Pasteur, he is usually credited as the father of bacteriology. His breakthrough was isolating the anthrax bacterium, thus “connecting a specific microbe to a specific disease”. He then discovered the tuberculosis bacterium, the great scourge of the time. Harper writes of Koch’s contributions: “for the first time, a scientific paradigm was born in the artificial environment we know as the laboratory.” What followed as the twentieth century was born? “The hygienics of everyday life were transformed by an active campaign to disinfect the person and the household environment.” Cleaning and disinfection involve a lot of chemistry. And we started killing bugs – the vectors that carry disease-causing microorganisms. We became Homo hygienicus.

 

At the end of the chapter, Harper reiterates key points he has made throughout the 500+ page book. Because we now live on a human-dominated planet, “Homo sapiens now fundamentally drives patterns of evolution across the biosphere, directly and indirectly. We favor some species intentionally (cows, chickens, pigs), others unintentionally (squirrels, pigeons). We harm some species deliberately (cockroaches, bedbugs), others inadvertently (polar bears, black rhinos, and, well, thousands of other animals). Pathogens are a special case; we harm them intentionally but benefit them inadvertently because our own biological success is an opportunity for organisms that can exploit us as sources of energy, nutrients, and cellular machinery.”

 

We should expect more global pandemics beyond Covid-19. Microorganisms evolve and replicate, adapting to new conditions quickly. As we crowd more into urban areas, as we rely on mass-produced monocultures for food, as we change land use, as global travel resumes, as wars are fought, as climate change leads to mass migrations, pathogens will adapt. We know how germ theory works. But knowing the theory will not prevent the next pandemic.

Tuesday, June 14, 2022

Robert Brown, Investigator

I recently read an interesting vignette about Robert Brown, namesake of the phenomenon known as Brownian motion. Back in 1827, Brown noted the jittery motion of pollen grains on the surface of water, but struggled to explain his observations. We’ll get to his investigations in a moment, but what I hadn’t known until I read the vignette was that thirty years before Darwin made his famous journey on the Beagle, Brown made his own journey on the Investigator. Brown was a botanist who collected and catalogued numerous new plant species on his trip to Australia. Brown is also famous for introducing the term nucleus – the organelle in eukaryotic cells that stores genetic material.

 

Like any capable investigator, Brown tried all sorts of things. He used different particles instead of pollen. He tried different fluids. He systematically altered the ‘reaction’ conditions. As told by Tom McLeish in his book Faith & Wisdom in Science from which I read this vignette: “Beautifully designed tests of various possible causes of the motion ruled them out one by one.” Not fluid current. Not electrical effects. Not magnetic forces. Not external mechanical vibrations. Not the presence of light. “All the more tantalizing must have been his realization that the motion is universal – not depending on the particularities of particle or fluid.”

 

The mystery wasn’t solved in Brown’s time. McLeish has some words of wisdom: “Sometimes even the deepest questions simply arise before the time to answer them has come. One of the most impressive demonstrations of self-restraint within any scientific writing must be Brown’s masterly scientific detective work, its long list of dead ends and his explanation of why he was not proposing a theory for the effect… Brown wisely guessed that satisfying the temptation to suggest various untested causes might well have set others along false trails before they had allowed imagination sufficient free rein.”

 

Interestingly, one of the popular explanations at the time, not espoused by Brown, was that the jittery motion was indication of a ‘vital force’. Apparently, the great Michael Faraday devoted much time and energy to telling the public not to jump to such conclusions. Faraday had suspected that atomic theory had legs, but back then the idea of all matter being made of seemingly occultic invisible elementary particles was far from established.

 

The solution came in 1905 in one of Einstein’s famous annus mirabilis articles. I’ve had students read it in an introductory college science class. (I annotated it heavily to help the students.) Why was Einstein able to come up with the explanation? Because he was a certified genius? Or perhaps because ‘chance favors the prepared mind’? McLeish makes the following connections: “Einstein felt discomfort with the idea of one law to govern one aspect of the world, while a different law held elsewhere… of the atomic theory – that if these particles existed then they must be in constant yet random motion… that this motion would generate the manifestation of the property we call ‘heat’ in collections of very large numbers of atoms… Brown’s pollen granules were placed among a collection of molecules in seething thermal motion… would have to pick up the random packets of energy that were jumping from particle to particle.”

 

The rest is a few lines of algebra to complete the proof. (Which is why introductory college students can appreciate it.)
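
For the curious, here is the headline result of that algebra in modern notation (my summary, not McLeish’s): the mean squared displacement of a suspended particle grows linearly with time, with a diffusion coefficient set by the temperature, the fluid viscosity \eta, and the particle radius r:

\[ \langle x^2 \rangle = 2Dt, \qquad D = \frac{k_B T}{6\pi \eta r} \]

Measure the jitter and you can extract Boltzmann’s constant (equivalently Avogadro’s number), which is how Perrin’s experiments a few years later clinched the case for atoms.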

 

Brown noticed something that wasn’t part of his main research interest and pursued it relentlessly. He was a botanist. I suspect that pollen moving on the surface of water was uninteresting to most botanists of his time. Brown wasn’t just curious, he was also careful. Careful in his experiments. Careful in drawing conclusions. While I’ve certainly pursued side avenues as a scientist, none were seemingly as mundane as what Brown looked at. I’m also quick to discard something that seems ‘non-promising’. One doesn’t have the luxury of squandering time and resources in this age of competitive research. But perhaps that’s part of the problem – the scientific research enterprise is skewed towards productivity and efficiency. We’re also (too) quick to trumpet success in the business. Brown’s inconclusive efforts would be considered failure in today’s cut-throat world of scientific reputations and dollars. Maybe that’s why I found it refreshing to read about Robert Brown, the investigator.

 

P.S. I highly recommend the Introduction in McLeish's book. His parable of sonology is superb.

 

P.P.S. I found it interesting that Terrence Deacon also invokes Brownian motion to discuss how macroscopic work comes from the microscopic world.

Monday, June 13, 2022

A History of Plague

How do we learn about health? From disease, unfortunately! And there are intricate connections between large-scale disease and human history. As morbid as it sounds, I’m enjoying learning about such things as I read Kyle Harper’s Plagues Upon the Earth. I don’t know why I’m drawn to reading such things during Covid. Best not to ponder that question for too long. Instead, on to the book!

 


The historical approach taken by Harper is anchored in two recent scientific advances: molecular phylogenetics and the sequencing of ancient DNA. In his book, Harper refers to these as “tree thinking” and “time travel”. The four major parts of the book are Fire, Farms, Frontiers, and Fossils – a memorable way to keep track of its key parts. I’m only halfway through, so I can tell you that Fire relates to the hunter-gatherer epoch and that Farms is about the rise of agriculture. Harper is a professor of classics, so he also includes archaeological evidence and historical written evidence where available. Here are some highlights of what I’ve learned.

 

Human diseases take a new turn as we start to live in more crowded areas: villages, towns, cities, metropoli. Or perhaps I should say that the disease-causing organisms evolve to take advantage of this situation. The four main classes are viruses, bacteria, worms, and protozoa. While we have focused a lot on spillover from other animals thanks to Covid, it was interesting to learn that compared to other primates, we humans are the biggest carriers. Reverse spillover happens more often – we infect chimps more than they infect us!

 

One chapter is focused on the Black Death, the plague recorded as sweeping Eurasia and Northern Africa in the mid-fourteenth century. The nasty offender is the bacterium Yersinia pestis. I had known about the rats, in particular the black rat Rattus rattus. But what I didn’t know is how well adapted rats are to living alongside congregating humans in cities supported by agriculture and trade. I also learned about the blood-sucking fleas. Turns out that Y. pestis doesn’t care about humans – it’s adapted to the flea-rat combo – and that’s why humans died indiscriminately. We’re a dead-end for the bacteria. Literally. I also learned that the Justinian (Roman) plague some eight centuries before the Black Death was likely also due to an earlier form of Y. pestis. Ugh.

 

When I was young, growing up in the tropics, worms were a concern. My mother was constantly telling me to wear shoes when I was running around outside. I didn’t like socks and shoes. They made my feet hot! They felt heavy, and I felt less fleet-footed. Reading about helminth-caused diseases and the life-cycle of these worms inside one’s body made me shudder. I’m glad I didn’t contract worms. When I was nine, my mother put me on a regimen of deworming medication because I seemed so skinny even though I ate voraciously. The thought was that the worms inside me might be eating my food and that’s why I wasn’t putting on any weight.

 

The section on tuberculosis was particularly interesting. Harper calls TB the Ultimate Human Disease. It’s been around for a long time. Harper writes: “TB is a respiratory disease characterized by patience. It possesses the remarkable ability to modulate the human immune responses and cause chronic infection. It featured prominently in the classical medical literature of China, India, and Greece… thrived throughout antiquity and the Middle Ages. In modern Europe, it was the white plague…” TB was a slower killer. It was known as the “wasting disease”. While the bacterial lineage shows up in both humans and animals, the genomic data suggest that we humans are the likely spreaders of the disease. You can learn about human migration and trade routes from the spread of TB strains.

 

The grand-scale historical scope of the book singles out how humans are different from any other animal on Earth. With our ability to extract energy from nature (fire!), our growing numbers and spread as the alpha-predator, our dependence on vast monocultural farms as a primary food source, our settling into dense urban areas, we have directly caused remarkable changes to the evolutionary history of parasites upon the earth. They continuously adapt to us, spread with us, and some will kill us. Perhaps to the other animals, we are the great plague upon the earth.

Thursday, June 9, 2022

Organisms vs Machines

This post is Part 3 of my notes on Terrence Deacon’s Incomplete Nature (here are Part 1 and Part 2). In this last section of the tome, I was hoping for some illumination, but while a few of the cobwebs in my mind have cleared, the questions of the origin of life and the origin of consciousness remain enigmatic.

 

Let’s begin where I left off – with “Work”. Deacon first defines it as “a spontaneous change inducing a non-spontaneous change to occur”. That’s not unreasonable from a thermodynamic view. Deacon then uses the example of Brownian motion to argue that “even at equilibrium there is constant molecular collision, and thus constant work occurring at the molecular level”. But there’s a problem. You can only get macroscopic work (our colloquial view of getting something done) if microscopic work “is distributed in a very asymmetric way throughout the system”, i.e., when the system is not at (thermal) equilibrium. Otherwise, the symmetric system gives you nothing. Deacon concludes: “Microscopic work is a necessary but not sufficient condition for macroscopic work… [which] is a consequence of the distributional features of the incessant micro work, not the energy of the component collisions, which as a whole can increase, decrease or remain unchanged.”

 

Deacon then proposes a more general definition of work: “the production of contragrade change” and “contragrade processes arise from the interaction of non-identical orthograde processes”. (From Part 2, a contragrade process is defined as going against the flow, while an orthograde process goes with the flow.) When I first introduce thermodynamics in my chemistry classes, we start with a basic observation: put a hot object next to a cold object and heat spontaneously flows from the hot one to the cold one. Before the objects were put together, the microscopic particles in each separate system had symmetric distributions. When brought together, the combined single system has an asymmetric distribution (heat-wise) and is no longer at equilibrium – it proceeds to find a new equilibrium via heat flow, where symmetry can be re-achieved. This is all orthograde.

 

But this is all still in the realm of equilibrium thermodynamics. How do living systems keep themselves away from it? As in Part 2, Deacon invokes climbing up the dynamics ladder. Orthograde thermodynamic processes that “oppose” each other can lead to contragrade morphodynamic work. And after a morphodynamic system has been established, orthograde morphodynamic processes can lead to contragrade teleodynamic work. The establishment of appropriate constraints in all this is crucial. But such situations may be rare, which may be why no scientist has succeeded in creating life from non-life using such principles. It’s unclear how this works outside of the abstract. Even though Deacon provides what he calls practical examples, I’m having trouble seeing this, and I feel like I’m stumbling through a cobweb-filled cave. Deacon acknowledges this later in the chapter: “If you have read to this point, you have probably… struggled without success to make sense of some claim or unclear description.” I agree.

 

The next chapter is about Information. All I will say is that at least Deacon reminds the reader not to conflate Shannon entropy and Boltzmann entropy. The next several chapters discuss cybernetics, evolution, and the notion of self. I kept plowing through until getting to the following nugget in the chapter on “Sentience” that relates to the title of today’s post: machines versus organisms. I think it’s worth quoting Deacon in full here (italicized paragraphs below) – he does a good job discussing the distinction. He begins with the notion of computation.

 

Whether described in terms of machine code, neural nets, or symbol processing, computation is an idealized physical process in the sense that the thermodynamic details of the process can be treated as irrelevant. In most cases, these physical details must be kept from interfering with the state-to-state transitions being interpreted as computations. And because it is an otherwise inanimate mechanism, there must also be a steady supply of energy to keep the computational process going. Any microscopic fluctuations that might otherwise blur the distinction between different states assigned a representational value must also be kept below some critical threshold. This insulation from thermodynamic unpredictability is a fundamental design principle for all forms of mechanism, not just computing devices… we construct our [machines] in such a way that they can only assume a certain restricted number of macro states, and we use inflexible… regularized structures… and numerous thresholds for interactions, in order to ensure that incidental thermodynamic effects are minimized. In this way, only changes of state described as functional can be favored.

 

… Although living processes also must be maintained within quite narrow operating conditions, the role that thermodynamic factors play in this process is basically the inverse of its role in the design of tools and the mechanisms we use for computation. The constituents of organisms are largely malleable, only semi-regular, and are constantly changing, breaking down, and being replaced. More important, the regularity achieved… is not so much the result of using materials that intrinsically resist modification, or using component interactions that are largely insensitive to thermodynamic fluctuation, but rather due to using thermodynamic processes to generate regularities…

 

… In machines, the critical constraints are imposed extrinsically, from the top down, so to speak, to defend against the influence of lower-level thermodynamic effects. In life, the critical constraints are generated intrinsically and maintained by taking advantage of the amplification of lower-level thermodynamic effects. The teleological features of machine functions are imposed from outside, a product of human intentionality. The teleodynamic features of living processes emerge intrinsically and autonomously.

 

And to connect it to the mind and consciousness (which is Deacon’s last chapter):

 

…computation only transfers extrinsically imposed constraints from substrate to substrate, while cognition (semiosis) generates intrinsic constraints that have a capacity to propagate and self-organize.

 

Essentially, human brains are meaty, sloppy, computing devices, but this makes all the difference between mind and computing, or between organism and machine. That’s my takeaway from Deacon’s tome.

 

Bonus track: In his epilogue, Deacon has a short section titled “the calculus of intentionality” where he uses taking a derivative at a tangent (to compute an instantaneous velocity) as an analogy for how telos shows up unannounced. (Integrals are also mentioned, less convincingly, in passing.) This reminded me of another such analogy.

Tuesday, June 7, 2022

A Chat of Cheaters

What do you call someone who cheats? A cheat or a cheater, depending on when and where you learned English. I learned it as ‘cheat’, but for this post I use the more informal ‘cheater’, which is more familiar in the U.S.

 

What do you call a group of cheaters? A cluster of cheaters? Like a gaggle of geese? Or a congress of cheaters? Baboons! Or better yet, a conspiracy of cheaters? Ravens!

 

In today’s social media age, I’d like to suggest a chat of cheaters. Why? Because if you’re an educator, you should read Matt Crump’s saga of a massive cheating incident in his class and what he did to resolve it. It’s a tale with twists and turns that takes a while to resolve, but well worth the read. Read it now here! The rest of this blog post can wait.

 

Okay, now that you’ve at least skimmed it, here are my brief thoughts below.

 

First, I’m glad that I didn’t have to deal with any such incidents in my remote year. I knew that in at least one of my classes composed of only first-year students, there was a chat group. I wasn’t part of it, but from what I understand it was mostly get-to-know-you socialization which I think was a good thing. My classes were also structured such that while it was possible to cheat, it wouldn’t do you much good if you didn’t know the material anyway, or it made practically no difference to the grade. So, in some minor questionable instances, I left it alone.

 

Second, I’m amazed at the heroic effort made by the instructor to try and treat each student fairly while documenting every incident carefully. I don’t think I would have the patience to fill out the paperwork for each individual student, nor take the time to write code to sort and analyze the data. All I can say is… wow! Nor would I have come up with an alternative syllabus to give the students quite the second chance. I don’t think I would have the patience to write a long blog post or short story with appropriately amusing gifs in different sections.

 

Third, I learned a little bit about students today. Perhaps not so different from many years ago when I began teaching. And perhaps not so different from when I was a student. I had a theoretical notion of what group-chat cheating might look like – I had considered the possibility it might occur in my first remote semester – but seeing the train of texts gave me a glimpse of reality. It was an inefficient conspiracy at best, sprinkled with plenty of confusion. I’m not surprised at the range of student responses, from contrition to outright lying. I was surprised that not a single student tattled in a class of that size. I was mildly amused at how students continued to cheat, especially when it was obvious the instructor was on to them in the second midterm.

 

Fourth, I liked the setup of the alternate syllabus. Essentially, one can implement such an expanded suite of ‘how to earn points’ at the beginning of the semester that allows students to earn a distinction by doing the work and learning the material in a variety of ways. Some of those alternate assignments sound cool – especially if designed to draw students into the material. It made me think about the rigidity of my own syllabus and ways I could effect a similar expansion. It made me think about my laziness in not wanting to set up such assignments and have to grade them. I’m chastened.

 

Fifth, I was glad to see the redemptive part of the story at the end. The time and effort that the instructor put in paid off, I think. This could have been a terrible train-wreck for the students, instructor, and the institution. I could imagine headlines in the higher education media. I’m surprised this story hasn’t quite gone viral yet. The only reason I knew about it was because a former student (who graduated a number of years back) forwarded it to me last week.

 

And finally, having data skillz is useful! Not just for students, but especially for instructors.