Tuesday, December 28, 2021

History of the World

It has been many years since I last played the epic boardgame Civilization. But to get the full experience you’d want the full complement of seven players and maybe ten hours. It would likely take thirty minutes to set up and teach new players. To get a similar but broader experience, there’s always History of the World. Easier to teach, faster to set up, and you’re looking at four hours for the epic experience. You don’t need the full complement although I think five players is the sweet spot.

 


Today’s post is an overview of the game, some of its features, and then several snapshots of a recent game so you get a sense of what an alternate world history might resemble. Replaying the tape allows you to explore these different possibilities! I have the old Avalon Hill version (no figurines) from 1993.

 

The game begins with the Sumerians, circa 2600 BC. Then it proceeds through seven epochs. Each player controls one empire in each epoch, for a total of seven empires throughout the course of the game. Each empire has a Strength (corresponding to the number of units you can use to expand), a Start Area, and sometimes a Capital city. Each player also has nine event cards to play throughout the game: temporary military strength, a minor kingdom you can control, fortifications, natural disasters to afflict your enemies, and so on. But each event card can be played only once, so you have to be strategic. But how? Especially when the main catch of this game is that you don’t know in advance which empire you will be controlling!

 

That’s the beauty of the game. Each epoch begins with the assigning of empires. The player with the lowest strength draws the first empire card, looks at it, and then decides whether to keep it or give it face-down to another player. If given a card by another player, you cannot look at it until all empires for that epoch have been assigned. The player with the next lowest strength draws the next empire and looks at it. If this player was given an empire card this turn, then the newly drawn card must be given away to a player who doesn’t yet have an empire. If this player doesn’t already have a card, there are the usual two options: keep or give away. This is the heart of the game: not only does it provide a self-balancing mechanism, it also leaves room for scheming and strategy.

 

After all players have received an empire, they are revealed one at a time in chronological order. The controlling player expands the empire by placing units in the starting area and then expanding outward, possibly fighting battles along the way. In Epoch 1, the empires are (in order) Egypt, Minoa, Indus Valley, Babylonia, Shang Dynasty, and finally the Aryans. If there are six players, all will eventually have their several hundred years of fame. If there are fewer players, not all these empires might show up. In Epochs 2-7, there are seven possible empires. Thus, even with the full six-player game, there is some uncertainty as to which empire will be the no-show. (If there are only two or three players, each gets to control two empires per turn.)

 

There aren’t many options during a player’s turn. Expand, fight, or build a fortress. Combat is handled with a simple die roll. There are some combat modifiers, but they are few and simple, unlike your typical wargame; thus each turn proceeds quickly without too much fuss or muss. You score points for controlling capitals, cities, monuments (which you can build), and areas. The world map is divided into regions (as you’ll see in the pictures below), and you can score for having presence in, majority control of, or full control of each region. The values change with the epochs. When you tally your score each turn, you count your control for not just your present empire but any of your past empires still on the board (if they haven’t already been wiped out).

 

And yes, that’s about it for the rules. Now onward to some pictures illustrating some moments in history! (I apologize in advance for my poor photo-taking skills.)

 

Figure 1: Epoch 1. The Babylonians expand their empire with advanced weaponry, overrunning the Hittite capital in Eastern Anatolia. The Sumerians are confined to the Lower Tigris. To the east, the Indus Valley civilization is thriving. To the west are the Minoans, and to the southwest the glorious civilization of Egypt with a monument! (The Epoch 1 counters are supposed to represent a sling and a stone, but I always think I’m looking at an amoeba!)

 


Figure 2: Epoch 3. The Macedonians expand to the south and east, controlling the Eastern Mediterranean, but they have to fight their way through several costly battles. The old Egyptian empire is engulfed, but the Persians still maintain much of their former territory. A Jewish revolt has established a kingdom in Palestine. In India, the Maurya dynasty has displaced the Vedic City States, but the Indus Valley civilization maintains its capital at Mohenjodaro.

 


Figure 3: Epoch 4. The Tang Dynasty takes full control of China, expanding into the Mekong Valley. A remnant of the Vedic dynasty holes up in the Burmese highlands while a Malay state has been established to the south. The Guptas now rule most of India. To the west of China’s Great Wall lurk remnants of the Hsiung-Nu (the “White Huns”) in Mongolia, while Attila’s horde still controls most of the Eurasian Steppes.

 


Figure 4: Epoch 4. The Arabs under Omar launch their expansion. All of North Africa falls to their advance, except they leave in peace an old fortified kingdom of allies in the upper Nile. The Arabs expand as far as Iberia in the west, but their eastern advance is severely hampered by the Persian-Sassanid alliance. To the northwest the Byzantines control the Mediterannean waterways while old Rome’s waning might still hold sway over parts of Southern Europe.

 


Figure 5: Epoch 7. The French launch their conquest starting with the Inca Empire, where they take heavy, heavy losses. But along with their Spanish allies, they are able to control all of South America. The Aztec empire still holds Central America and Mexico, while the Russians control the eastern half of North America. The French eastward push into northern Europe is more successful, but they run out of steam before reaching the Baltics and Eurasia. Remnants of the Holy Roman Empire still persist, and ancient kingdoms in England and Scotland hold their own. (The area scoring table is in the lower left.)

 


Figure 6: Final scores. Purple had the highest score (175 points) despite having the lowest total strength (67). Purple controlled four kingdoms on the Indian subcontinent, and that controlling stake brought in a raft of points.

 


In this game, Russia turned to overseas expansion in North America and in sub-Saharan Africa, leaving the Huns in control of most of the motherland. North China is still controlled by the Mongols, who are held off in the south by the Ming dynasty. The Manchurians are no-shows. Another notable no-show: the Ottoman Turks. Thus, at the beginning of the twentieth century, the Arabs still control North Africa, and Crusader states hold Palestine but have also ravaged Mecca. Remnants of the Persians, the oldest empire, still exist with a fortress in the Anatolian highlands. France has a colony in Madagascar. Remnants of the Tang and Sung dynasties control the Mekong and the East Indies respectively, untouched by the European powers.

 

And that’s a wrap! Or at least it’s one version of the History of the World.

Sunday, December 26, 2021

The Matrix Resuscitated

I remember exactly where I was when I first encountered The Matrix. I was in Old Town Pasadena trying to decide whether to watch a movie at the AMC Cineplex or at United Artists across the street. It was twilight. I know this because I’m cheap, and the twilight showing (the movie that starts just before 6pm, priced even lower than a matinee) has the cheapest seats. UA, at $4.50 per twilight ticket, was slightly more expensive than AMC, but there wasn’t anything that sounded interesting at the AMC. That left The Matrix, which I didn’t know much about except that it was sci-fi-ish. I had not seen any previews nor read any reviews. Friends hadn’t told me about it.

 

After the movie, I walked out in a daze. It was day when I entered the cinema but night when I stepped out. Normally, I would take the free Old Town bus back to a stop close to my apartment, but instead I decided to take a half-hour walk back just to clear my head. I had never seen anything like it. Mind-bending. Stylish. It was awesome. Well, except for the telephone booth scene in the final minute before Neo flies away. But then, the Wachowskis likely didn’t know they would have a sleeper hit and that tons of money would be thrown at them to make their sequels, so I can see how they stuck that scene in for “closure”.

 

By the time the next two sequels were released, there was plenty of pre-buzz. I had seen previews. (You couldn’t avoid them.) Hype was plentiful. The Wachowskis were already famous. In my opinion, the sequels are progressively less satisfying than the original. For one, the novelty is lost – that cannot be avoided. But Reloaded made up for it with some very clever bits. We saw Zion for the first time. The short fight scene between Neo and the Merovingian’s exiles is superb. The freeway vehicle chase scene is among the very best. The storytelling arc of the final mission to get Neo to the core is artful and clever. There are some neat surprises throughout the movie. I still greatly enjoyed it.

 

Despite being poorer overall story-wise, Revolutions was still a fitting end to the trilogy. The last stand at Zion with the battle mechs. The tense ride to machine city. And while the final fight was weak in the narrative arc, there was still good drama and tension interspersed throughout the movie. Redeemable. I still enjoyed it even though (or perhaps because) I had correctly anticipated that it would be weaker than its predecessors. Still, it had some skill, as the Merovingian might aptly say.

 

I just watched Resurrections. It was a mess. My expectations were not high, but it still came in below par in my book. I was cheap, watching it through HBO Max, which came gratis with my home internet package. Instead of coming up with something original and clever, it felt like themes from the first two movies were repackaged, then compressed into a single movie. The narrative was rushed. Perhaps it should have been The Matrix Retreaded. It still could have been pulled off by a new generation of young faces. But the story got bogged down by hanging on to the old. The fresh-faced actors might be talented, but they suffer from the weight of their forebears: Laurence Fishburne is Morpheus and Hugo Weaving is Smith. And Carrie-Anne Moss is severely under-utilized. Even the poor Merovingian is reduced to a gibbering vengeful nutcase. An empty shell.

 

They tried to bring back the dead. But resurrection wasn’t achieved. Resurrection is what you get when the perishable old is raised imperishable into new glory. Or so says a wise old glorious book. What we got was resuscitation. A temporary reprieve from lying dead in the grave to briefly experiencing a living state. But it’s the old self living in an old body with one foot in the grave. What we got was The Matrix Resuscitated.

 

There were some clever parts to the latest movie. I liked how programs could instantiate themselves in the “real” world. Thinking about how machines and humans might cooperate, moving from explicit examples to imagined hybrid visual tech, was interesting. But otherwise there wasn’t much meat. The video game idea was not mind-blowing or even spoon-bending. All you’re left with is action sequences. But they feel empty without a sufficiently supporting story arc. “Am I living in a simulation?” is no longer an interesting question after the first movie. “What does Resurrection entail?” could have been an interesting question to tackle. But all we get is Resuscitation. And we should be profoundly dissatisfied.

 

P.S. Cham & Whiteson tackle the question “Do We Live in a Computer Simulation?” in their latest book. They quote Neo in The Matrix. Their analysis is also less than satisfying.

 

P.P.S. In Harry Potter and the Deathly Hallows, the so-called Resurrection Stone doesn't do any better.

Thursday, December 23, 2021

FAQ: Universe Version

I enjoyed Jorge Cham and Daniel Whiteson’s previous book We Have No Idea. They still have no idea, but have new questions to ponder in Frequently Asked Questions About The Universe. I’m finding it to be a drag; a sure sign is when I start skimming to find an interesting nugget or two. Maybe it’s because I’ve read most of this physicky speculation before in some form or another.

 


About two-thirds through the book, I found a few nuggets. Nothing ground-breaking, but interesting to ponder nonetheless. In “Will time ever stop?” the authors ask if Time will end at, well… some point in time. If time stops for everything, would we even notice? Probably not. You can’t tell if you can’t take note of any change. For that matter, we don’t really know what Time is. The authors do make the usual allusion to entropy (which my students find very fascinating when I mention this possible connection in my classes). But their more interesting and basic idea is that we assume time flows. We can’t imagine what it would be like for it not to do so. I hadn’t thought of that before, although I’ve seen “static” space-time block arguments.

 

In “Is An Afterlife Possible?” the authors mull the physics of heaven (or hell). They consider three “key elements that define a traditional afterlife”:

· There’s a you that can outlive your physical body

· That you is captured and transported to another location

· You exist in that other location, still able to experience things forever

 

After dispensing with the purely materialist argument, they suggest that the you that can be “captured and transported” is information. This suggests that you can reduce the essence of you into bits and bytes. I have my doubts since I’m coming around to the idea that reductionist physics is a subset of non-reductionist biology rather than the other way around. Anyway, even if you could reduce your essence into information, the compression problem remains so you’ll have to cut out some stuff. Is an “average” you sufficient to rebuild future-you? And quantum entanglement suggests that in making future-you, past-you will likely be irreversibly changed. Then again, if no quantum information is ever ultimately lost, maybe a reasonable rebuild is possible and you can live forever. Sort of.

 

This of course brings us to the question: could we exist in a simulation? Or are we living in a simulation right now? I’ll pause on that until after I’ve watched The Matrix Resurrections, just released. (Reviews don’t bode well.) Instead, I’ll briefly mention that I also enjoyed their chapter on the strange nature of mass and its connection to energy, similar to Wilczek.

 

P.S. See here for my previous post on FAQs, unrelated to today’s post.

Tuesday, December 14, 2021

Sanitizing Science

Teaching science is an inherently different activity from doing science. At least it is so today under the auspices of ‘mass education’, a phrase I heard in my younger days as a student but which is hardly ever used today. Is this because almost all education ‘systems’ today inherently assume the goal of efficiently getting students up to speed on the basics? We don’t require students to re-discover scientific truths and theories. It would be painstakingly slow as they muddle their way through a morass of knowledge. No, what we do is package the knowledge in relatively quick and digestible chunks so they can potentially consume and internalize this knowledge.

 

I refer to this as “sanitizing science” – to borrow the phrase from John Ziman’s Real Science, a thoughtful meditation on the relationship between science and society. This sanitizing comes in two aspects: (1) We clean things up. Most of the messiness of science is not discussed. We present “just the facts”, leaving out human and social foibles. We sanitize the story. (2) We preserve the sanity of ourselves and our students. By packaging the story efficiently, we avoid the potential insanity of the strangeness of science and the nuttiness of the scientific endeavor. We normalize science. We’ve done such a good job with packaging that science has come to be seen as the bedrock of knowledge – the uber-normal. That’s certainly the story for the past couple of centuries. And even now, with ‘backlash’ against scientific authority, the establishment paints the rebellion as a minority, abnormal view.

 

Secondary school science education has done this so successfully that my students come to my introductory chemistry courses primed to “get the right answer”. Ambiguity frustrates them. It’s inefficient and messy. Some of the academically strongest students are the most resistant to the ambiguity. They’ve become very efficient at training their sights on the right answer that yields a high score on their exams. My college-level chemistry classes aren’t vastly different in one sense – there’s still a canon to be learned, and while some ambiguity is introduced, what we ‘cover’ in class is still well-sanitized and streamlined – especially in the introductory-level courses which serve as prerequisites to more advanced courses. Chemistry education is very hierarchical in structure.

 

I try my best to convey to students that science is strange – with its highly constrained processes and its (often) surprising results. What we understand scientifically is to some extent far removed from the practical and useful folk-science way of understanding and interacting with the natural world. In general chemistry, we encounter this very quickly when discussing the structure of the atom with its massive yet tiny (in size) nucleus surrounded by a cloud of electrons we can’t quite pinpoint. Several such stories are sprinkled throughout the semester, but I still spend most of our class time streamlining the scientific knowledge I want the students to acquire. It’s still mostly sanitized, for good or ill, to maintain the efficiency of the process within the ‘system’ (where I’m purposefully evoking the industrial factory metaphor).

 

Do I hark back to the medieval days of apprenticeship? No thank you. Even my graduate school experience didn’t have that feel. My research/thesis adviser would make interesting suggestions here and there, but for the most part I was left to my own devices to make headway on my research projects. That being said, teaching at a liberal arts college with only undergraduates often results in an apprenticeship model of sorts being applied to undergraduate research. I don’t have many students, so I meet with each one individually every week. I perform a process and they mimic it. Then they get practice on their own, repeating the process over and over again for different starting “materials and molecules”. I pose questions to get them to take increasing ownership of the project and to help them think like scientists. I’m deliberate in doing this. (Looking back, I’m not sure my graduate adviser did this, but I learned a lot from observing and listening to him – and it has clearly influenced some of my own idiosyncrasies as a faculty member.)

 

Efficiency in education isn’t necessarily a bad thing – although it can be. From the perspective of advancing scientific knowledge, I see it as mostly a good thing. It’s hard to stand on the shoulders of giants if it takes you an entire lifetime just to claw your way up to their shoulders. By sanitizing and repackaging the core knowledge that allows one to (hopefully) build on that foundation, and then quickly reach the frontier, we can learn new things we didn’t know before. It would be nice if I had fewer students to teach so I could give more individual time to each – but that’s unlikely to become a reality soon, unless I start my own boutique outfit that’s aimed at a niche population of ‘elites’ willing to pay the money for their children to be educated in such an environment. A part of me finds this idea attractive, but another part of me desires to spend my time in ‘mass education’ and have a broader impact (narrowly construed).

 

Sanitizing science will continue to be part of my practice, but I hope to do it more thoughtfully. Over the years I have slowly ditched textbook materials for my own. I’ve rearranged material, cut some things from the canon, and introduced other things that were traditionally considered peripheral and left out. But I still find myself constrained by the system that I’m part of, and so this progression (if it even constitutes ‘progress’) moves along slowly, in fits and starts. Little changes here and there punctuated by the occasional overhaul. My own sanity is also at stake.

Thursday, December 9, 2021

Mathemorphism

Quantifying stuff is a bedrock of the natural sciences. By saying this, I’m intentionally attempting to juxtapose some precise measurement (“quantifying”) of something vague (“stuff”) while also communicating its foundational importance via a metaphor (“bedrock”). I have to work hard to notice that I’m doing so. Perhaps I am so steeped in my practice as a scientist, that I’m well-conditioned to think the way I do, perhaps unreflectively.

 

As a computational scientist and educator, using the language of mathematics as a formalism to discuss scientific concepts and theories is a key part of my job – especially when I’m teaching physical chemistry. As a chemist, attempting to connect visible macroscopic properties to nanoscopic stuff we cannot directly observe, I equally use the language of metaphor with a good dose of pictorial representation. Essentially, I’m in the business of helping students build abstract conceptual models to help them think chemically – as a chemist would, until it (hopefully) becomes second nature.

 

From one viewpoint, mathematics has been extraordinarily successful in helping scientists think about and represent what goes on in the natural world. You’ve perhaps heard the phrase “the unreasonable effectiveness of mathematics”. The very notion seems to defy explanation. But its power is apparent – we scientists often make an appeal to mathematics when we go down a rabbit hole trying to explain something and reach the end of our personal knowledge. If we can represent something mathematically, we think we know something fundamental about it, even if this turns out to be illusory.

 


I’m enjoying reading John Ziman’s Real Science; I would classify it as a practical philosophy of science. It tackles the subject matter broadly and thoughtfully and I’ve found it to clarify ideas that were fuzzy in my own mind about the nature of science and its practice. Ziman perceptively reminded me that much of what I do is trying to set up a tractable mathematical model to represent some complex system, and that by doing so I am likely making over-simplified assumptions. He would also say that this isn’t wrong (nor is this about right and wrong) but reminds me to be thoughtful about the limitations and assumptions inherent in the process. Here’s a quote from him that I found very helpful when he brings in syntax and semantics – ideas I have been wrestling with.

 

“On the one hand, mathematical formalisms have the advantage that they are semantically wide open – that is, the terms that occur in them can be designated to mean whatever we want them to represent theoretically in each case. On the other hand, mathematical formalisms are syntactically very restricted – the relationships symbolized by mathematical operations on these terms are highly specialized and are very often meaningless.”

 

This discussion on the use of mathematics and science is embedded in Chapter 6 (“Universalism and Unification”) of Ziman’s book. It’s a rich chapter that has gotten my mind buzzing as he discusses the role of classification in science – it is a bedrock of what we do as scientists and what we perceive as thinking scientifically. Ziman also emphasizes the widespread use of schemas and the notion of a system that’s inherent to the scientific endeavor. We “schematize the seamless web of the real world by representing it as a more or less closed and coherent set of relationships between potentially separable entities.” Furthermore, what differentiates the ‘soft’ sciences from their ‘hard’ cousins isn’t necessarily that they are more complex, but that classification becomes extremely challenging. Reductionism depends heavily on classification. Ziman writes that the “notion of a model defies formal definition… that a theoretical model is an abstract system used to represent a real system, both descriptively and dynamically.” I’m inclined to agree after steeping myself in Rosen’s work. Ziman also describes computational modeling as building an increasingly important bridge between the twin poles of theory and experiment.

 

Ziman also reminded me of how much metaphor is used in scientific reasoning. I’ve discussed this before in a previous blog post, but lately it has made me think about whether I should ease up on decrying anthropomorphic explanations in my chemistry classes. That’s the subject for a future post. After all, what I’m doing in my daily practice as a computational chemist is what Ziman refers to as “arithmomorphic”. Since this doesn’t always involve arithmetic, I will generalize it to “mathemorphism”, perhaps a close cousin of metamorphism. Not to be confused either with metaphorism. It’s hard not to get all tangled up just thinking about all of this.

Monday, December 6, 2021

Evidence-Based Educational Policy

To give your pet theory some heft, precede its name with the phrase evidence-based. Evidence-based anything is all the rage. And it’s broader than using the phrase data-driven, which has its own problems. Given that educating students is what I do for a living, and that I teach in the sciences, I pay attention to theories about science education and how those might translate into suggested pedagogy. Taking a broader lens, applying a pedagogy on a larger scale might result in sweeping educational policy.

 

Last month, the title of a paper in Educational Psychology Review caught my attention: “There is an Evidence Crisis in Science Educational Policy”. The authors include Paul Kirschner (who has in the past made the careful and important distinction between scientific practice and science education practice) and John Sweller (originator and champion of Cognitive Load Theory). I’ve found their overall arguments compelling over the years, even when I have occasional quibbles with a minor point or two. Here’s a snapshot of the abstract with the DOI reference.

 


The most useful thing about this paper is how it groups studies into three types: Program-based, Controlled, and Correlational. For practicing scientists, the gold standard is the controlled experiment. These sorts of studies are key to figuring out whether a vaccine or a therapeutic drug is actually going to be effective; you’ve likely heard of RCTs or randomized controlled trials in the news, related to whether something will be efficacious against Covid-19.

 

It’s not so easy to control most of the variables when you’re trying to figure out if a pedagogy you are experimenting with is favorable compared to “business as usual”. There are good experimental designs and there are poor ones. You typically don’t generate very large data sets, but a well-designed study can allow you to elicit causative factors and eliminate things that don’t work. In contrast to these, big-data correlational studies may show you connections between one factor and another, but do not pinpoint causes. One might say that these two types complement each other.

 

However, most of the education literature discusses program-based studies, which is neither of these two but a sort-of hybrid. As an instructor, you likely perform these even if you don’t write them up as a paper. You want to know if your students will do better if you try a different approach for a particular topic. So you design your new approach, try it out, and then anecdotally compare it to your previous approach. Or you might teach two sections of the same class where you use your new approach in one, and the old approach in another. While you might think of the “business as usual” approach as your control, that’s rarely the case. You’ve likely not designed your comparison to consider all sorts of confounding variables. You just want to know if what you’re trying seems like it works better according to some arbitrary measurement (quantitative or qualitative).

 

It’s good that Sweller and co-workers highlight these differences for the unaware among us who don’t often think about these different categories. What constitutes evidence? How strong is that evidence for a particular pedagogy “working well”? Why has there been no silver bullet in education – no one method that is the “best practice”? I think it’s because the process of education is complex, and therefore cannot be boiled down to a single best practice. But I also think it’s because we simply don’t have as much strong evidence about what works and what doesn’t, and in what situation… and there are a variety of confounding variables to throw us off!

 

I think the authors rightly state that all three types are important. And although program-based studies are the most prevalent and in some sense the easiest to design and carry out, we should be cautious about how widely applicable the “results” are, and be circumspect about supposed “causative” factors that are proposed in these papers. These sorts of studies are generally underpowered statistically and have too many confounding variables. But that doesn’t mean they’re unimportant. I continue to tweak my classes every semester using this approach. I think that’s a good thing to do to improve learning and the student experience. And it’s practical and useful. But I should be careful in making pronouncements that my pet pedagogy is particularly effective for reasons I have come up with anecdotally.

Sunday, December 5, 2021

Zoonoses

If there’s a new phrase we’ve learned from Covid-19, one contender could be zoonotic diseases: when a bug (be it a bacterium, protist, or virus) makes the leap from a reservoir species (usually mammalian) to humans. While zoonoses have afflicted humankind since perhaps the dawn of civilization, there seems to be an uptick in the last several decades. My subconscious must be dwelling on all this, since I’ve chosen to spend my time reading, watching videos, and playing boardgames related to epidemics or, worse, global pandemics.

 


First, the book. I’m reading Spillover by David Quammen. He’s a brave traveler and superb writer. I’ve only read one of his books thus far, but his other books are now on my reading list. Spillover was written back in 2012, and Quammen has essentially chronicled the rise of zoonoses – why they occur and why they are accelerating: the encroachment of humans into virgin territory, the mass domestication of animals for food or other products, the increase of the human population in density and spread, easy global travel, climate change, and, not to forget, the prime directive of biology to replicate, replicate, replicate. Quammen devastatingly lays out why we should expect the Big One of global pandemics to hit. Less than a decade later, here we are.

 

At the beginning of Covid, I read Mark Honigsbaum’s 2019 book The Pandemic Century alongside other books about the complexity of systems and what happens when they cannot respond robustly to a fast-evolving situation. Quammen’s 2012 book gets into the guts of the stories and the individuals involved – he goes out and interviews firsthand witnesses on the frontlines, those who hadn’t been killed by some awful outbreaks, and visits live-food markets, rat farms and bat caves. His narrative of the early days of SARS is even more eye-opening than Honigsbaum’s. Reading his account also explains why East Asia was so much more prepared than the rest of the world when Covid-19 hit, having fumbled their way through SARS. But even back then in 2003, they acted fast and they got very lucky. That’s Quammen’s read of the situation in hindsight, and I’m inclined to agree with him.

 


Second, a TV series and a movie. For the past fortnight, I’ve been watching the first season of The Last Ship. A global virulent pandemic has wiped out much of the human population. Governments have fallen. There are pockets of survivors. A U.S. destroyer whose crew includes a virologist is on a four-month radio-silence mission out in the Arctic and returns to both silence and mayhem. We don’t know much about the virus, although it seems to be zoonotic. But the TV series is mostly about people and how they act and react to the situations thrown at them in this brave new world. I just watched the season finale where the ship and crew make landfall back in the U.S.; I’m not sure how I feel about where the story arc might be heading. (I have the DVD set of the second season on hold at my local library.) In any case, parts of the first season reminded me of Battlestar Galactica (the remake), one of the few series I’ve watched from beginning to end. Similar situation, but different.

 

This weekend, I decided to watch 12 Monkeys, the Terry Gilliam movie from 1995 with Bruce Willis as the main protagonist. I had seen it when it was first released and thought it was a confusing mess – probably because it was my first Terry Gilliam movie. I vaguely remembered something to do with time travel and a global pandemic that wipes out most of the human population, but other than that I hardly remembered anything in the storyline. I was pleasantly surprised that in this second viewing, I thought the time-loop narrative was cleverly told. The virus itself does not feature prominently.

 

Third, the boardgames. After playing Pandemic early in the pandemic at the super-hard level of six epidemics (my final tally was winning 7 out of 20 games), I took a long break from it. Until this weekend. The two games were both losses, although one was down to the wire and was an almost-win. I’m back to including the expansion (mixing in the new roles and special cards) with five epidemics (two regular and three virulent strain). That was my preferred level pre-pandemic.

 

On the other hand, my isolation for most of this year led me to revisit Origins: How We Became Human. It’s a funky civilization-ish game starting with primitive humans discovering technology, both the good and the bad, and making their way to the end of the twentieth century. I’ve immersed myself in exploring the game system and I’m really enjoying it; I might even write a strategy guide for the game. One feature of the game is that to progress to the “first energy level”, one has to domesticate animals, which carries the risk of picking up a zoonotic disease – a setback to your civilization. Then you acquire immunity, but other diseases lurk and can wreak havoc. It’s not the main story of the game, but it’s one factor among many that you have to watch out for.

 

I’m so ready for 2021 to be over. But I think zoonoses are here to stay. Is Covid-19 the Big One? Perhaps. But there’s likely to be another Big One, and possibly another and another. Reading Quammen’s book has convinced me that will be the case. I hope there won’t be too many in my lifetime. And I realize it’s quite possible one of them could end my lifetime.

Tuesday, November 23, 2021

Smoke in the Leaf

In quantum chemistry this semester, we’ve been solving eigenvalue equations over and over. I think some of my students get what we’re doing, but I still have moments in office hours where a student seems to struggle with the very concept. Perhaps the narrow way in which I introduce this topic isn’t helping some of the students.

 

It was therefore refreshing to read a different take on eigenvalues in Jordan Ellenberg’s Shape, which I recently blogged about. Starting with geometric progressions, he shows examples of how one can extract a specific number that seems to control the rate of geometric growth. This number is the eigenvalue. He then introduces the Fibonacci sequence and shows that it arises from the difference of two geometric progressions, controlled by the eigenvalues -0.618 and +1.618, the latter famously known as the “golden ratio”. I like the way Ellenberg discusses the nature of eigenvalues:

 

[They] capture something deep and global about the system’s behavior. They’re not properties of any individual part of the system, but emerge from interaction among its parts. The algebraist James Joseph Sylvester called these numbers the latent roots – “latent,” as he vividly explained, “in a somewhat similar sense as vapour may be said to be latent in water or smoke in tobacco-leaf.” Unfortunately, English-speaking mathematicians have preferred to half translate David Hilbert’s word Eigenwert, which is German for “inherent value”.
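
To make the earlier Fibonacci claim concrete, here’s a minimal sketch of my own (not from Ellenberg’s book): the two numbers are the eigenvalues of the little matrix that steps the sequence forward, and the difference of the two geometric progressions they control reproduces the sequence exactly.

```python
import numpy as np

# The matrix that steps Fibonacci forward: (F(n+1), F(n)) = M @ (F(n), F(n-1))
M = np.array([[1, 1],
              [1, 0]])
print(np.linalg.eigvals(M))  # approx [1.618, -0.618]: the golden ratio and its conjugate

# Binet-style formula: Fibonacci as a difference of two geometric progressions
phi, psi = (1 + 5**0.5) / 2, (1 - 5**0.5) / 2
fib = lambda n: round((phi**n - psi**n) / (phi - psi))
print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```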

 

In my class, we talk about using Hermitian operators to extract eigenvalues from a wavefunction. I’ve tried to give my students the sense that you’re pulling out a number from some sort of global system that is captured by a mathematical (wave)function, but sometimes when I talk about math the students seem befuddled. Perhaps Ellenberg’s “smoke in the leaf” will give them a physical picture of what we’re trying to do – perhaps capturing the vapor in a GC-MS and extracting an output number, e.g., the mass of a particular molecule in the vapor.

 

Ellenberg expands the idea with more examples beyond the geometric-progression “growth” of a pandemic and R0 values. There’s an example of Google’s PageRank and of Markov (random) walks. There’s a segue into the boardgame Monopoly, where apparently Illinois Ave (in the U.S. version) is the square where a counter would spend more of its time than any other in the long run (i.e., you’d need to play a very, very long game to notice this) once the system approaches ergodicity. You can capture this in an eigenvector: “something inherent to the long-term behavior of a system that’s not apparent just by looking at it, something latent like the smoke in the leaf.”
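
Here’s a toy version of that idea (my own sketch with made-up numbers, not the actual Monopoly board): the long-run occupation of a Markov walk is the eigenvector of the transition matrix with eigenvalue 1, and you can find it either by brute-force iteration or by asking for the eigenvector directly.

```python
import numpy as np

# Hypothetical 4-square "board"; P[i, j] is the probability of moving
# from square i to square j on one turn. Rows sum to 1.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.3, 0.0, 0.4, 0.3],
              [0.5, 0.1, 0.4, 0.0]])

# Brute force: keep taking turns until the distribution stops changing.
dist = np.full(4, 0.25)
for _ in range(1000):
    dist = dist @ P
print(dist)  # long-run fraction of time spent on each square

# Same answer from the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stat = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(stat / stat.sum())
```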

 

What’s helpful at the end of the chapter: Ellenberg provides a worked example of two infinite sequences and two operations. One is a shift, where moving all the numbers in a sequence one spot to the left can look like you’ve multiplied the original sequence by a constant. The other is a pitch, where you multiply each term of a sequence by its position in the sequence. This leads to a discussion of eigensequences, akin to eigenfunctions, followed by a demonstration that these are examples of non-commuting operations. And yes, this leads to Heisenberg’s Uncertainty Principle. My students are regularly puzzled by this even though they follow the mathematics. I don’t think I’ve done a great job helping to shed light on this, so I’ve scanned some pages of Ellenberg’s book for my students to read. Maybe it will help shed light, maybe it won’t.
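
Here’s a small sketch of those two operations (my rendering, on finite truncated sequences rather than Ellenberg’s infinite ones). Applying them in the two different orders gives different answers, and a geometric sequence behaves as an eigensequence of the shift.

```python
import numpy as np

def shift(seq):
    """Move every term one spot to the left (pad the tail with 0)."""
    return np.append(seq[1:], 0.0)

def pitch(seq):
    """Multiply each term by its (1-based) position in the sequence."""
    return seq * np.arange(1, len(seq) + 1)

a = np.arange(1.0, 9.0)        # the sequence 1, 2, 3, ..., 8
print(shift(pitch(a)))         # [ 4.  9. 16. 25. 36. 49. 64.  0.]
print(pitch(shift(a)))         # [ 2.  6. 12. 20. 30. 42. 56.  0.] -- not the same!

# A geometric sequence is an eigensequence of shift: shifting it just
# multiplies the whole thing by the common ratio (here r = 2).
g = 2.0 ** np.arange(8)
print(shift(g)[:-1] / g[:-1])  # [2. 2. 2. 2. 2. 2. 2.]
```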

 

Next week we’ll be looking at Hückel Molecular Orbital Theory for delocalized pi-systems. One of the examples will be butadiene. Interestingly, the eigenvalues for the “resonance” integral (in units of beta) are -1.618, -0.618, +0.618, +1.618 for the four molecular orbitals – seemingly related to the two eigenvalues controlling the Fibonacci sequence. I don’t know if there’s a connection between the two, or I should say that I haven’t spent the time thinking about it more carefully. My brain is somewhat fried, and I’m just happy to make it to the Thanksgiving holiday. My students are also tired, maybe more so. It’s been a grueling semester.
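
As a quick numerical check (a minimal sketch of the standard Hückel setup, with alpha set to zero and energies in units of beta), those four numbers do fall out of the butadiene matrix:

```python
import numpy as np

# Hückel matrix for butadiene: four carbons in a chain, with a beta
# interaction only between bonded neighbors (alpha on the diagonal set to 0).
H = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

print(np.sort(np.linalg.eigvalsh(H)))  # approx [-1.618 -0.618  0.618  1.618]
# The golden ratio and its conjugate again, with both signs.
```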

 

This chapter in Ellenberg’s book ends with the “notes in the chord”. It’s kinda cool. When you hear a chord triad, you can separate out the three notes and their three eigenvalues mathematically using a Fourier transform. But more amazingly, a trained ear can actually hear these notes in the chord “even if you don’t know calculus… because this deeply geometric computation… is also carried out by a curled-up piece of meat in your ear called the cochlea.” It’s amazing what these small body parts can do. An eigenvalue separator in our very ears!
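
Here’s a minimal numerical sketch of that separation (my own toy example, with the triad frequencies rounded to whole hertz so they land on exact FFT bins):

```python
import numpy as np

rate = 8000                                   # samples per second
t = np.arange(rate) / rate                    # one second of time points
freqs = [262.0, 330.0, 392.0]                 # roughly C4, E4, G4
chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# The Fourier transform pulls the three notes back out of the mixed signal.
spectrum = np.abs(np.fft.rfft(chord))
bins = np.fft.rfftfreq(rate, d=1 / rate)      # bin k corresponds to k Hz here
peaks = bins[np.argsort(spectrum)[-3:]]
print(sorted(peaks))                          # [262.0, 330.0, 392.0]
```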

Sunday, November 14, 2021

Trial and Error

Sometimes I enjoy reading math. Or I should say I enjoy reading about math when it’s aimed at the non-specialist. Jordan Ellenberg does a great job at this, and I enjoyed reading his book How Not to Be Wrong. I had a feeling I would enjoy his latest book Shape, and so far I’ve not been disappointed. Once again, he wraps math – this time focusing on geometry and number theory – around interesting stories of people and events. Yes, there’s a chapter about Covid-19 and geometric progressions, but I won’t be discussing it today.

 


I particularly enjoyed Chapter 6, “The Mysterious Power of Trial and Error”. It’s about random walks, and features both the Drunkard’s Walk and the Gambler’s Ruin. Ellenberg begins the chapter with a question he often hears in his math class (one that I occasionally hear in my P-Chem office hours): “How do I even start this [problem]?” Ellenberg jumps at the teaching moment: “… it matters much less how you start than that you start. Try something. It might not work. If it doesn’t, try something else. Students often grow up in a world where you solve a math problem by executing a fixed algorithm...”
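
In that spirit, here’s a tiny trial-and-error sketch of the Gambler’s Ruin (my own toy simulation, with made-up stakes): rather than solving the random walk analytically, just run it many times and count.

```python
import random

def gamblers_ruin(stake=10, goal=20, p=0.5, trials=100_000):
    """Bet $1 at a time (win with probability p); stop at the goal
    or at zero. Returns the fraction of trials ending in ruin."""
    ruined = 0
    for _ in range(trials):
        money = stake
        while 0 < money < goal:
            money += 1 if random.random() < p else -1
        ruined += (money == 0)
    return ruined / trials

print(gamblers_ruin())        # fair coin: approx 0.50
print(gamblers_ruin(p=0.48))  # slightly unfair coin: approx 0.69 -- ruin dominates
```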

 

That’s a good description of how my students approach chemistry problems. In my G-Chem classes, we’re in stoichiometry tackling problems of how much of A reacts with B to form some amount of C and D. What is the limiting reactant? What if the reaction yield is less than 100%? How much leftover reactants do you have? There are systematic ways to approach these problems, and I try to model these with worked examples. But there are multiple ways to solve these problems, so I try to show the students the common approaches and their caveats. In most cases, these problems are not as open-ended, so learning algorithmic approaches is helpful.
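
One such systematic approach fits in a few lines of code. A minimal sketch (my own, with made-up amounts, not a method from any textbook), assuming the equation is already balanced: find the reactant that supports the fewest “turns” of the reaction, then scale everything by that extent.

```python
# Hypothetical example: 2 H2 + O2 -> 2 H2O, starting from 3.0 mol H2 and 2.0 mol O2.
moles  = {"H2": 3.0, "O2": 2.0}
coeffs = {"H2": 2, "O2": 1}

# The limiting reactant supports the fewest "turns" of the balanced reaction.
limiting = min(moles, key=lambda s: moles[s] / coeffs[s])
extent = moles[limiting] / coeffs[limiting]          # here: H2, 3.0/2 = 1.5

water = extent * 2                                   # 3.0 mol H2O at 100% yield
leftover_o2 = moles["O2"] - extent * coeffs["O2"]    # 0.5 mol O2 left over
print(limiting, water, leftover_o2)                  # H2 3.0 0.5
```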

 

Several weeks ago, we were drawing Lewis Structures in G-Chem. Trying to draw the best structures is a more open-ended problem. I tell my students that the only way to get better is to practice, practice, practice. As you draw more structures and evaluate them (using general guidelines about the octet rule, formal charges, resonance), you get better at the task. I show the students my method which is more intuitive and diagrammatic, involving some trial and error. But some of my students have learned a more algorithmic method from their high school chemistry class. I tell students that they don’t have to use my approach if they prefer something else they’ve learned. (My approach also differs from the textbook.) Students don’t like this open-endedness. They want a surefire algorithm. But real chemistry doesn’t work that way. Neither does real math, according to Ellenberg.

 

Research is a good example of trial and error. Sure, there’s intuition involved, and I’ve built up some amount of it over the years. But as I branch into areas new to me, I become a novice again, and so, sans any better guidance, I launch in and try a few things that may or may not work. This is a challenge for students when they start working in my research group. Yes, I do tell them the first several molecules to build and calculate, and what data to extract – I’m a computational chemist – but then I try to coax them into coming up with their own ideas of what to try next. For some students, this comes naturally. Others resist this approach and don’t last long in my group – because research starts to feel like a tedious chore.

 

I’ve been educating myself about machine learning approaches for some of my research projects. Nothing hardcore yet; I’m still mostly playing in the kiddie sandpit. Hence it was fun to read Chapter 7, “Artificial Intelligence and Mountaineering”. Ellenberg introduces gradient descent, a method I’m familiar with, but then he zooms out to discuss how one approaches huge N-dimensional problems – things I will have to tackle in the large data space of chemistry. How does one navigate between underfitting and overfitting? That’s an interesting challenge, and a lot of it involves trial and error as you decide how many layers to use and how to weight your model neural net. You get the computer to do the number-crunching for you, but you should always be cautious about the output and whether it makes sense. I’ve learned that lesson through trial and error.
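
For the record, plain gradient descent itself fits in a few lines. A minimal sketch on a made-up two-variable function (my toy example, not anything from Ellenberg’s book):

```python
# Minimize f(x, y) = (x - 3)^2 + 5 * (y + 1)^2 by walking downhill.
def grad(x, y):
    return 2 * (x - 3), 10 * (y + 1)   # partial derivatives of f

x, y, lr = 0.0, 0.0, 0.05              # starting point and learning rate
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy    # step against the gradient
print(round(x, 4), round(y, 4))        # converges near the minimum at (3, -1)
```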

 

One way you can do this is to have the algorithms play games against each other, the subject of Chapter 5. Tic-tac-toe, checkers, chess, and Go are famous in the A.I. and machine learning literature. Tic-tac-toe can be worked out by hand. Checkers can be (almost) exhaustively decision-treed. Chess and Go have too many combinations to be checked at the speed of present processors, although quantum computing may cut the Gordian knot. But these games are all closed systems. I was interested to hear that some folks had written an A.I. for the Lord of the Rings CCG – a much trickier prospect with a random draw deck and different sorts of interactions (the A.I. was written for the cooperative version of the game). Could an A.I. learn to negotiate with players? Apparently, there are some folks working on an A.I. for Diplomacy. That is a very interesting choice for a case study: limited movement with simple rules, but the tricky part is all about the negotiations among players.

 

Can playing games through trial and error train the machine to play the perfect game? I suppose it depends on how tractable the decision-tree might be and what the complicating factors are, but perhaps this is a less important question. Ellenberg quotes top checkers and chess players and concludes: “Perfection isn’t beauty. We have absolute proof that perfect players will never win and never lose [games that end in Draws based on finite decision trees]. Whatever interest we can have in the game is there only because human beings are imperfect. And maybe that’s not bad. Perfect play isn’t play at all… To the extent that we’re personally present in our game playing, it’s by virtue of our imperfections. We feel something when our own imperfections scrape up against the imperfections of another.”

 

That last line is perhaps the beauty in trial and error.