Tuesday, June 30, 2020

Face Metamorphs


In Harry Potter and the Order of the Phoenix, readers are introduced to Nymphadora Tonks, one of the Aurors sent to help Harry get from Privet Drive to Grimmauld Place. Tonks is a Metamorphmagus, presumably meaning a mage or magic-user who can metamorphose. Tonks describes her skill as being able to “change her appearance at will”. She was born with the ability, unlike other magic-users who “typically need a wand or a potion”. The relevant potion is used throughout the series: Polyjuice. (I’ve made use of it in a Potions project for one of my classes.) The wand might refer to a disfiguring spell that Hermione casts on Harry in Deathly Hallows as the trio are about to be captured by Snatchers.

The Polyjuice potion, we are told, is temporary; a dose lasts an hour, according to Chamber of Secrets. It’s unclear how long Hermione’s spell lasts. Harry certainly wonders how temporary it is as he tries to pass himself off as a Slytherin to the Snatchers, drawing on his knowledge of the Slytherin common room from his Polyjuice escapade in his second year at Hogwarts. While Polyjuice transforms the drinker entirely into the person who owned the hair (presumably a DNA source) added to the potion, the extent of Hermione’s spell is less clear. It certainly affects Harry’s facial features significantly; everyone has trouble recognizing him. The spell must wear off after some time, since everyone seems to recognize him normally after the incident at Malfoy Manor. As for Tonks’ ability, the books mainly describe changes she makes to her face simply by willing the transformation through mental (and presumably magical) concentration. She doesn’t seem to have trouble maintaining her disguises.

Okay, I’ve talked about the magic. Regular readers of my blog can anticipate the next question: Where does the science come in?

In Chapter 8 of Do Zombies Dream of Undead Sheep?, written by a pair of neuroscientists, the subject of facial recognition (or the lack thereof in zombies) is discussed. Here’s what I learned. There are two types of clinical disorders related to facial recognition problems. One is psychiatric and known as Capgras delusion, where you think that someone you know has been “replaced by an impostor”; how and why it arises is unclear. The other is neurological and known as prosopagnosia (combining the Greek words for “face” and “not knowing”), and closely related to it is prosopometamorphopsia – perceiving a “visual distortion of facial features”. The latter can also be induced by electromagnetically stimulating the fusiform gyrus in the brain.

The system that recognizes faces, part of the ventral visual stream, turns out to be quite complex. The authors of Zombies refer to it as the “face network”; multiple areas of the brain light up in an fMRI scan, more so than for many other perceptual activities. But is it more complex to disguise yourself by physically transforming your face than by affecting the perceptions of those around you? From an energy-counting perspective, perhaps the latter is the easier route. If magic is conducted via electromagnetic radiation, then releasing the appropriate photons (via magical means) to affect other humans within a nearby radius might be energetically less costly than physically altering your biology: changing bone, skin, and other facial features. The Harry Potter books implicitly assume physical changes to oneself rather than altering others’ perceptions, but there’s also the Obliviate charm, akin to the nifty memory-altering neuralyzer in the Men in Black movies. A Confundus Charm might also work.
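Purely for fun, here is a back-of-envelope sketch of that energy-counting argument. Every number below (how many photons a perceptual disguise might need, how much tissue a physical disguise rearranges, the average bond energy) is my own illustrative assumption, not anything from the books or from Zombies.

```python
# Back-of-envelope energy comparison (all quantities are illustrative assumptions)
h = 6.626e-34          # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
N_A = 6.022e23         # Avogadro's number, 1/mol

# Perceptual disguise: a burst of visible photons aimed at nearby observers
wavelength = 550e-9                    # green light, m
E_photon = h * c / wavelength          # ~3.6e-19 J per photon
n_photons = 1e20                       # assumed size of the burst
E_perceptual = n_photons * E_photon    # ~36 J

# Physical disguise: rearrange ~1 g of tissue, treating it crudely as small
# organic molecules (~20 g/mol) with one ~300 kJ/mol bond broken and reformed each
E_bond = 300e3 / N_A                   # ~5e-19 J per bond
n_bonds = (1.0 / 20.0) * N_A           # bonds in ~1 g of "tissue"
E_physical = n_bonds * E_bond          # ~1.5e4 J

print(f"Photon burst: ~{E_perceptual:.0f} J; tissue remodeling: ~{E_physical:.0f} J")
```

Under these (very crude) assumptions, the perceptual route comes out cheaper by a couple of orders of magnitude, which is all the argument above really needs.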

All this is to say that we should learn more science to have more powerful magic!

Friday, June 26, 2020

Zombie Dreams


Do Zombies Dream of Undead Sheep? That’s the title of an amusing and highly informative book if you want to learn the fundamentals of neuroscience. The authors, Verstynen and Voytek, are both neuroscience professors (at Carnegie Mellon and UCSD, respectively). And they share a love of zombie movies – Night of the Living Dead and more. If I ever get around to writing my introductory chemistry-in-the-guise-of-magic book, Zombies will be my lodestar. But I will have to come up with a very different title since I don’t plan on riffing off Philip K. Dick’s marvelous Do Androids Dream of Electric Sheep? (introduced to the wider world via the movie Blade Runner).


Seven years ago, while preparing to teach a unit on neuroscience in an interdisciplinary scientific inquiry course, I taught myself some basics of the human brain. Zombies is a good refresher; I had remembered some but forgotten a lot. Even better, there are many references to the chemistry of neuroscience in Zombies. I’ve been paying attention to different molecules, looking up structures, and trying to understand evolutionary relationships (given my origin-of-life interests). Living systems in general, and human beings in particular, are remarkable organisms. I have a new appreciation for the cerebellum and how it ‘computes’ small adjustments to complex motor movements, and for how the hippocampus helps you keep track of where you are in space. It’s amazing that we can function as well as we do without getting our wires crossed (both electrical and chemical signals). Until something goes off course, of course.

In addition to the chemistry, I enjoy the historical vignettes in Zombies. Neuroscience was founded on investigating people (and animals) suffering all manner of maladies and damage to the nervous system. I learned about Mike the Headless Chicken, apparently a celebrity in the 1940s. There was also an operation on a virile rooster that helped advance knowledge about the cerebellum in the nineteenth century. The vignette I found most interesting was zombification in Haiti; I did not know much about the process and how the belief in zombies evolved, but the authors refer to The Serpent and the Rainbow, a book that is now on my reading list so I can learn more about Haitian magic. Two chemicals of interest in the process are tetrodotoxin and the alkaloids of datura – yes, there is a close connection between chemistry and magic!

The approach taken by Zombies is to examine different parts of the system (involving the brain, but also many other parts of the human body) and consider what happens when each part is impaired, thereby giving rise to different behaviors. Zombie behavior has certain (stereotypical) characteristics: a lumbering gait, insatiable hunger, and somehow a knack for finding living flesh for food instead of trying to eat fellow undead or their undead sheep. Which parts of the brain affect these? Among other things, the authors examine sleep, dreaming, and wakefulness, and they suggest that zombies have impaired sleep cycles. How exactly might they be impaired? I learned about the reticular activating system (RAS), the tuberomammillary nucleus (TMN), the ventrolateral preoptic nucleus (VLPO), and the swath of neurotransmitters involved in waking up and falling asleep. Zombies is a science book at heart, just with a lot of zombie jokes thrown in. And if you can bear the zombie humor, the book is a marvelous read!

Wednesday, June 24, 2020

Delaying Detailed Plans


In general, I am not a procrastinator when it comes to teaching-related activities, be it class preparation or grading. I sometimes procrastinate writing up a manuscript, because it feels less productive than getting more research data. Not having a deadline helps, I suppose. But when it comes to grant-writing, I usually start routing university-level paperwork a month ahead of the grant deadline, and I typically officially submit two weeks ahead of the deadline. I don’t like to be under time pressure; I think I perform worse, but it’s rarely been put to the test so I don’t know for sure.

However, with rising Covid cases throughout the U.S. and in my state, there is much uncertainty as to how we will operate when the Fall semester begins. At present, my institution is planning to have (masked) face-to-face in-class teaching, especially for incoming first-year students. It’s hard to say anything different when your educational model sells itself on small class sizes, personal interactions with faculty, and a beautiful campus, among other factors. But there are contingencies in place: mask-wearing, physically distanced classrooms with lower seating capacities, health checks, and more.

Having spent my teaching life in face-to-face environments, and having escaped the rapid online shift last semester because I was on sabbatical, I am apprehensive about what the new semester will bring. I’d like to teach in person, and I might actually get to do so because it so happens I’m teaching two low-enrollment classes in the Fall. One is a first-year G-Chem class with only twenty students, where I’m also the students’ academic adviser. This is by design – the college ensures that every incoming student has at least one such class in their first semester. These classes are being prioritized for in-person teaching in the Fall, so it’s likely I will be assigned an appropriately sized classroom. My other class is a special topics origin-of-life chemistry course. I only have eleven students registered, and the room I’m presently assigned has a Covid capacity of twelve students. So I might end up being able to hold all my classes in person. But that won’t be true for most of my colleagues. Reduced classroom capacities mean that, at a minimum, parts of a course must be remote – and there’s some registrar-related scheme to assign students different “live” days throughout the week.

I’m dragging my feet on class preparation. The situation is still uncertain and much could change over the next couple of months before classes begin. The state or county could mandate fully remote classes. I might be assigned or reassigned classrooms requiring partially remote, partially in-person classes. I might have fully in-person classes. And any of these situations could change mid-semester – likely towards more remote instruction as autumn turns to winter and the flu season hits, or if there’s a serious outbreak.

The challenge, and I might be looking at this too simplistically, is that what I think I would do to provide the best in-person teaching-learning experience differs quite a bit from my approach for an all-remote experience. I would structure my classes rather differently in the two cases. The middle of the road, the so-called HyFlex approach (hinted at, though never explicitly, by my institution), is making me nervous. Trying to serve both in-person and remote students simultaneously seems overwhelming – especially when I think about whether I could even pay attention to so many different communication streams at the same time. The image that comes to mind is a TED talk or a game show, with a live studio audience and many more tuning in remotely. Except I’d want my class to be highly participatory from both groups. Could I do it, and would the technology support it? Unclear. I’m not sure how I would keep good eye contact with my in-person class while trying to watch a small screen at the same time.

An additional wrinkle is that teaching and learning chemistry have a strong visual component. We’re trying to see the unseen, and that requires drawing structures on the board, using hand-held models, and lots of gesturing with one’s hands and fingers, not to mention pictures and equations. Yes, one can flash up slides, share the screen, and even set up one’s presentation to show step-by-step changes. Just thinking about how I would do all this while trying to communicate well with simultaneous “audiences” is overwhelming. Perhaps I’m better off dividing in-class work and remote work into two separate realms, synchronous and asynchronous. But what do you do about a student who has fallen sick or who needs to quarantine for fourteen days or more? We’ve always had policies for illnesses and other absences, but those have always been treated as individual, idiosyncratic circumstances affecting a small number of students, worked out between the student and the faculty member. But when one’s institution (in a bid to keep students engaged) promotes these high-flexibility options, it becomes a different ball game.

Perhaps I’m over-thinking the situation with needless fretting. I’ve started preparing for my special topics class because it’s easier to think about. We’ll be reading lots of primary literature and discussing it. Students will be writing. I recognize more than half of the enrolled students, so that will make connecting easier even if we have to go fully remote. I think I could pivot reasonably between in-person and remote given the smaller class size. Also, I’ve been thinking about the topic since I’m just coming off sabbatical and the majority of my workday is spent on research (while trying to write up a paper). I haven’t done much about my G-Chem class. Yes, I’ve taught the course almost every year in my familiar face-to-face format, but I might well have to think differently this coming semester. For now, I’m delaying making detailed plans given the fluidity of the situation. But at some point I’d really like to feel well-prepared. Not sure if I can be.

Friday, June 19, 2020

Lyfe


As the origins-of-life research community grows, we see more offshoots examining “life as we don’t know it”. Whether you’re studying the creation of synthetic life or figuring out how to detect a reasonable biosignature for a Mars mission, it helps to have a working definition of life that’s perhaps broader than “life as we know it” here on Planet Earth. The latest foray comes from Bartlett and Wong in their recent article “Defining Lyfe in the Universe: From Three Privileged Functions to Four Pillars” (Life 2020, 10, 42, doi:10.3390/life10040042). Life is an open-access journal, so you can read the article in full for yourself.

Yes, they call it Lyfe. No, it’s not a typo.

First, the motivation. Your definition of life will affect how you detect it. Complicating matters is that “life is a verb, not a noun” (the title of Russell’s article in Geology 2017, 45, 1143-1144). Lane has also argued that “what is life?” is the wrong question; rather, it should be “what is living?” So if you think life requires Darwinian evolution (per the NASA definition), then something that exhibits Lamarckian or some other type of evolutionary process will be discarded by definition. That might be problematic because Earth life provides a sample size of one. Our Terran view might be very myopic.

Competing hypotheses have always been a part of the origins-of-life research community. They can be categorized by “what came first?” or, alternatively, “what is foundationally crucial?” questions. While amino acids can be easily synthesized from simple molecules under a variety of conditions, making functional proteins from such prebiotic soup mixtures remains very challenging. The RNA World hypothesis provided a possible way out, and it is the reigning paradigm of the Genes-First camp. The rival Metabolism-First camp has picked up adherents and gained ground over time, while the Lipid World (the Compartments-First camp) has always been acknowledged as important, but perhaps peripherally so, with fewer researchers working in that area. These are the Three Privileged Functions outlined by Bartlett and Wong.

However, these three hypotheses may sample only a very small space in the grand scheme of Lyfe out there in the universe. Bartlett and Wong provide some very useful visual aids to help the reader understand why, one of which is shown below. The many pink-lavender arrows represent such hypotheses aimed at discovering the trajectory of life on Earth. Synthetic approaches (orange dotted lines), not necessarily aimed at prebiotic plausibility, may provide further routes both to extant life and to artificial/alien life (as we don’t know it). And there might be yet other paths to other types of alien life, or subsets thereof.


The broader Lyfe, the authors argue, should have four pillars: (1) Dissipation (due to free energy and the second law of thermodynamics), (2) Autocatalysis (for exponential growth), (3) Homeostasis (to maintain some unity amidst a changing environment), and (4) Learning (to not just survive, but to thrive!). They provide several examples of how these pillars may feature in different types of lyfe or sub-lyfe-forms; they also try to avoid privileging one pillar over others, although it is clear that without free energy, it’s difficult (although not impossible) to do anything else.

I’ve read many conceptual origin-of-life papers, and while there’s nothing earth-shattering about Bartlett and Wong’s approach, I particularly appreciated the Figures in the paper. These do an excellent job communicating the authors’ arguments, better than in many other cases where the reader just gets bogged down in text and technicalities. I smiled at their choice of colors to illustrate the three privileged functions (replication, metabolism, compartments) because the exact same three colors (blue, red, yellow) are used for those same three functions in the origin-of-life game Bios Genesis. (Green, the fourth color in the game, represents negentropy – related to dissipation.)

As I’m preparing to teach a special topics origins-of-life chemistry course this coming semester, I’ve been considering adding a “search for life outside Earth” component, and this paper might be one that I will assign. It’s nice to find well-written articles that will be accessible to undergraduates for a class.

Tuesday, June 16, 2020

The New Normal


I’m back in the U.S. after a year away on sabbatical. We found an apartment, moved our stuff out of storage, and bought basic supplies and groceries to restart life here in the new normal. The global pandemic rages on. The U.S. isn’t doing anywhere near as well as the country I came from, and I’m amazed at how many people here are walking around not wearing masks.

Since it hasn’t yet been fourteen days since we arrived, I’m avoiding going into my office and lab. The good thing is that I’ve gotten over the hump of working from home the last several months. I haven’t set up a dedicated work space at home yet – we’re mostly unpacked, but haven’t finalized exactly where we’d like our furniture to reside for optimal functionality and some aesthetics. I might have to get a new chair and a better headset and headphones if I’ll be doing more remote meetings; I didn’t do many while on sabbatical, so it wasn’t an issue.

The semester begins in two months. I don’t know yet if my classes will be fully in-person, hybrid, or fully remote. A lot will depend on how the state and county are doing over the next two months. I hope to teach in person, especially since I’ve missed that face-to-face interaction with my students for over a year now. This week I’ve started to set aside some time to learn more about best practices for remote teaching, going through resources that my university has put together. I’ve yet to host my first Zoom meeting, although I’ve been a participant in many. I haven’t used my university’s LMS much, but I’m starting to browse its different features in case I have to do a lot remotely – my HTML-hacked course website will probably not suffice, even though it has worked well for in-person teaching thus far. And I’ll have to experiment with some lecture-capture technology.

In the meantime, I’m trying to make research progress and hope to finish writing up a manuscript before classes begin and I get very busy with other things. One nice thing about being back in the same city as my university is that VPN works much faster. Since I’m a computational chemist and much of my remote work involves logging on to the local computing cluster and moving files back and forth, I’m happy to have smoother connections and less latency. We also signed up for higher internet speed and bandwidth at home, so that helps.

I’m not completely over jet lag. At present, I’m falling asleep at 9pm and waking up at 5am. That’s not a bad schedule. Maybe I should make this the new normal in my new normal!

Saturday, June 13, 2020

Everyday Forecasting


In the old days, I never trusted the weather forecast. I’d check it only to pooh-pooh how poorly it did. Today, I’m amazed at how accurate forecasts can be, down to my local area. The story of how weather forecasting improved as it became a complex, gargantuan endeavor is detailed in Andrew Blum’s The Weather Machine. Published in 2019, the book is aimed at non-experts who might be interested in how the weather wizards do what they do.


Did you know that before the World Wide Web there was the World Weather Watch (WWW)? Formed in 1963 at the instigation of an American, Harry Wexler, and a Russian, Viktor Bugaev, in the midst of the Cold War, it proposed three interlocking global systems for weather observations, data processing, and telecommunications. The core idea: “open and equal access to weather information, for operational and experimental use.”

Weather knows no borders. But measuring the weather has broad implications spanning politics, national security and global technologies. Blum astutely describes the tensions: “The only caveat written into the charter was that the WWW be used for peaceful purposes only. The UN proper might have been overwhelmed by the festering tensions of a world divided between East and West, but the weather diplomats were insistent on the borderless atmosphere. This was bold of them, given the technology on which they relied. Weather satellites were so expensive that they could be justified only on national security grounds. Mostly this limitation was technological: The innovation they required overlapped significantly with both intercontinental missiles and spy satellites. But it was also political: The jingoistic appeal of satellites was also a function of how they overflew the whole earth, without regard for the borders below – overturning the historical understanding of sovereignty and territory.”

In the book’s epilogue, Blum describes efforts by the UN’s World Meteorological Organization (WMO) to tackle climate change and its global effects, especially on poorer countries. The tension between public good and private enterprise has invaded the world of weather data collection and analysis. Global cooperation faces increasing challenges with the rise of jingoistic world leaders and demagogues with a “me first” attitude. We are seeing this starkly in the current global pandemic involving the WHO, a sister UN organization of the WMO. The coronavirus knows no borders either, and yet politics, national security, public good, and private choice all come into play.

Three things jumped out at me while reading The Weather Machine. I will spend a brief paragraph on each.

(1) One chapter, titled “Euro”, takes the reader inside the workings of the European Centre for Medium-Range Weather Forecasts (ECMWF), which currently has the best weather-forecasting models and simulations. The scientists, the culture, the competitiveness, the collaboration: these are fascinating windows into the human side of the weather enterprise, one fast being taken over by computers as they crunch more data in ever more sophisticated ways. This chapter is one extended vignette among many in which Blum visits the meteorologists in person to learn the history and inner workings of the weather business. It’s an interesting look behind the curtain to see how things work!

(2) I resonated with the discussion of Wexler’s theory that the macroscopic and the microscopic are equally important in weather observation – “a bigger picture at a higher resolution”. An accessible example might be the evolution of TVs, driven by consumers’ desire for larger screens with ever finer resolution. I study the chemical origins of life, and we face a similar issue. On the one hand, a global bird’s-eye view is important to get a handle on how life may have started, but the nitty-gritty details are equally important. We’ll need both in increasing measure to make headway on the problem. While my research focus is on the microscopic, I make sure my reading diet regularly includes macroscopic views. This will be crucial for research in the field to advance.

(3) Blum makes another very astute observation when he discusses one aspect of how models and simulations work in the context of climate change, an observation with broader implications for other types of simulations. “The glory of good data assimilation is that it allows for the model to compensate for places where observations are sparse. It becomes a bridge between the areas that are well observed and the areas that aren’t. The surprising result of that discrepancy between model space and real space is that the model, you might say, is more detailed than reality – or reality, at least as it is observed.” This is exactly what happens as we develop models for the origin of life. The data is sparse indeed. As a quantum chemist, I’m threading my way through the complexity by examining the in-between things that are difficult to observe experimentally.
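To make the data-assimilation idea a bit more concrete, here is a minimal toy sketch (my own illustration, not anything from Blum’s book): a scalar Kalman-style update that blends a drifting model forecast with sparse, noisy observations, weighting each by its uncertainty. Between observations, the model simply carries the estimate forward, which is exactly the “bridge” Blum describes.

```python
import numpy as np

def assimilate(forecast, forecast_var, obs, obs_var):
    """Blend a model forecast with an observation, weighted by their variances."""
    gain = forecast_var / (forecast_var + obs_var)   # trust the obs more when the model is uncertain
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

def step(temp):
    """A crude 'model': temperature relaxes toward 20 C each time step."""
    return temp + 0.1 * (20.0 - temp)

truth, estimate, estimate_var = 25.0, 18.0, 4.0
rng = np.random.default_rng(0)

for t in range(10):
    truth = step(truth)                      # what the real atmosphere does
    estimate = step(estimate)                # what the model predicts
    estimate_var += 0.5                      # model uncertainty grows between observations
    if t % 3 == 0:                           # observations are sparse: every third step
        obs = truth + rng.normal(0.0, 1.0)   # a noisy measurement (variance ~1)
        estimate, estimate_var = assimilate(estimate, estimate_var, obs, 1.0)
    print(f"t={t}: truth={truth:.2f} C, model estimate={estimate:.2f} C")
```

The resulting estimate is defined at every time step even though the observations are not, which is the sense in which the model can look “more detailed than reality as observed”.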

I enjoyed The Weather Machine. I wasn’t expecting to entertain thoughts about origin-of-life simulation/modeling while reading about the weather. It’s hard to forecast when one thought will lead to another!

Thursday, June 11, 2020

Bios Origins


How did we become human? Ignoring the uplift hypothesis and focusing on evolutionary processes, one might hypothesize an ‘unlocking’ of the brain to access so-called higher functions. Unbelievably, there’s a boardgame that simulates this. Back in 2007, Phil Eklund of Sierra Madre Games published Origins: How We Became Human. The game is a brutal slog – evolution isn’t easy! Sometimes you get stuck and have difficulty making any advances. I played fifteen complete games between 2008 and 2017, and it took me several incomplete games to climb the steep learning curve. Once you’ve got the hang of it, a full game takes 4-5 hours. (Here’s a review I wrote on BoardGameGeek a decade ago.)

Bios Origins, published in 2019 in collaboration with Ion Games, is the reboot of the original Origins. The game has been retooled to fit the Bios trilogy; I’ve previously written about the two earlier games: Bios Genesis and Bios Megafauna. While Bios Origins retains the flavor of the original, it feels less science-y and plays more like a boardgame than a simulation. This is also true of the second edition of the retooled Bios Megafauna. The new Bios Origins is less brutal and much more forgiving, and there’s always something to do, so you don’t get stuck due to some bad luck. But fans of the original will miss the challenge of unlocking their brain.

There are several scenarios for Bios Origins. You can use the present world map, a custom map, or the cratons from a campaign game following Bios Megafauna. You could be terrestrial or aquatic (mer-folk!). There are solo rules too. Below is a three-player game with homo habilis (Player Black), homo floresiensis (Player Green), and homo heidelbergensis (Player White). Cuboids represent cities (or metropoli) and meeples represent migrants. There are circular ‘climate chits’ that can be flipped by climate change events. Also printed on the board are resources that can be accessed: mining, animal domestication, and horticulture.


Bios Origins proceeds through four epochs. Players score points in three areas: Culture, Politics, Industry. There are six tech-levels for advancement: Footprint, Energy, Metallurgy, Immunology, Maritime, Information. The pictures below show a game in the final epoch. The players have made advances in multiple areas. All of them are in the industrial era, Footprint-wise. Energy-wise, one player has made it to Nuclear. They’ve maxed out Metallurgy but are somewhat behind in Immunology. All can access Outer Space, but they aren’t as far along Information-wise. Unlike its predecessors, Bios Origins has more of a tech-tree advancement feel, and less of an evolution feel to gameplay.


Each player still has a brain map, but it’s easy to advance your brain power. The heart of the game comes from the Foundation cards (horizontal) and Idea cards (vertical) as shown in the picture below. Each of these cards provides actions the player may take during their turn. With more cards, there are more actions. Iconography is heavy, but once you gain familiarity, the action phase proceeds quite rapidly. Each player has a Ruling Class favoring Culture, Politics, or Industry. These may shift due to revolutions (chaotically or through an ordered election) as governments favor different advancement policies.


On the left, the sapiens player (Black) currently has a Politics ruling class, indicated by cards with a pink strip on top. Figures on the cards represent dissidents. In the center, the hobbit player (Green) also has a Politics ruling class and on the brain map you can see pawns waiting to be deployed as specialists to pursue new Idea cards. On the right, the neanderthal player has an Industry ruling class (red strip on top) and no dissidents. The Foundation and Idea cards, besides providing actions, also provide the means for advancements along the tech tree. This aspect of the game is markedly different from the older Origins, and makes the gameplay more open by providing multiple options so a player doesn’t get stuck.

Bios Origins is likely the most interesting of the three, but it also has the highest complexity and the longest playing time. The sweet spot, in my opinion, is Bios Megafauna. It plays in less than half the time, is easier to pick up and understand, and evolving new biological structures makes for interesting conversation during gameplay. (In Bios Origins, too much is going on to appreciate the details on the cards, or maybe I just haven’t played enough games – only five to completion thus far.) While I have a soft spot for Bios Genesis, having served as the chemistry consultant on origin-of-life matters (and it’s also the game I’ve played the most), the gameplay is much more brutal and can be much less ‘fun’ for struggling players. Evolving new biochemistry, less familiar to most players, simply isn’t as interesting as evolving wings, armor, opposable thumbs, or mimicry.

The science nerd in me thoroughly enjoys the Bios series, but it simply won’t get as much play time due to length and complexity. That’s okay, I enjoy my varied game diet; sometimes I want something simple, at other times I enjoy being immersed in a more complex puzzle. I suppose I’m unlocking my brain in different ways!

Monday, June 8, 2020

Uplift and Unpredictability


What makes us human? Some would argue that being sapient, as in homo sapiens, is what distinguishes humans from animals. And how did this sapiency arise? Well, that’s the million-dollar question; one that eludes simple answers despite the many advances in evolutionary biology and neuroscience. Evolution might be one route, albeit a messy one. Religion might provide a different answer – that the awakening of sapiency is a gift (or mistake) of the gods. However, a third possibility is explored by David Brin in his famous sci-fi trilogy from the 1980s: Uplift.

What is uplift? It’s a combination of the first two possibilities. In the distant past, starfaring beings known as the Progenitors, upon finding creatures at the “edge” of sapiency, provided the extra push so that conscious, thinking intelligence might flourish. As to how they did so, the answers seem lost in the vast stretches of time. These newly uplifted species, as they gained knowledge and understanding and developed advanced technology, would in turn seek out other almost-sapient creatures and uplift them. Thus, uplift provides the means whereby sapiency is achieved. Biological evolution sets the stage. Advanced technology and patronage complete the process.

The first book in the trilogy, Sundiver, set several hundred years in the future, introduces the unique dilemma. Of all galactic species, Earthling humans present an enigma. No one seems to know who uplifted them. When First Contact was made with extraterrestrials, humans were a young, sapient, starfaring species, seemingly the only one of their kind that did not go through the formal process of uplift. Other galactic species can trace their ancestry millions of years through multiple uplifts. When addressing a colleague formally, the line of patronage is part of one’s galactic surname. Instead of son of so-and-so, one’s patrons are named in a line of succession.

Galactic politics is complicated. There are species friendly to Earthlings, but others are hostile and suspicious of the seeming lack of a patronage line. Some of the latter would like to enslave the Earthlings and provide them “proper instruction” through patronage. An uplifted species is contracted to serve its patron masters for thousands of years before being released and allowed to occupy new worlds, uplift others, and thus become a patron in its own right. It’s part of galactic prestige; it’s part of galactic alliances; it’s part of having galactic clout. The second book in the series, Startide Rising, is the most interesting of the three in exploring these matters. Neo-chimps and neo-dolphins, uplifted by humans before First Contact, work alongside humans as colleagues rather than patronage slaves. Terran ways are viewed with suspicion by many other E.T.s, who have followed the age-old traditions of uplift.

Earthlings are enigmatic because they have bootstrapped their way into sapiency and developed their own (still primitive) technology without the benefit of patronage. Every other known species relies heavily on the Galactic Library, a repository of knowledge and tradition built up over eons and slowly added to as technologies progress. Earth is grudgingly provided a branch of the Library so that humans may learn more advanced technology, but humans, having innovated their way through trial and error, don’t conform to the traditions of “best practices” laid down by the Library and followed assiduously by other galactic species. Humans are unpredictable. And therein lies their greatest strength as relative younglings in the starfaring business, still using rather primitive technologies. We Terrans have upended the galactic order. That unpredictability and innovation confounds Earth’s enemies while delighting and amusing its allies.

The trilogy is a paean to human ingenuity and independence, warts and all. The process of evolution might be trial-and-error messy, but it provides the benefits of “thinking outside the box”. As an educator, I found it interesting to consider the role of the Galactic Library in the “educational” process. Do you go with age-old best practices or with seat-of-your-pants disruption? Can the two approaches be combined profitably? For someone who studies the origin of life, the balance between conservation and innovation in robust systems is a key issue. And as in other “hard” sci-fi, ever present is the question of what it means to be human in an intergalactic context. All this comes wrapped in exciting, clever, and engaging stories.

This sabbatical year I’ve read some great sci-fi, including the trilogies that began with The Three-Body Problem and The Fifth Season. I’m enjoying my varied and refreshing reading diet thus far! But my sabbatical is coming to an end, and it will be back to the daily grind soon enough. That being said, the year has been both uplifting and unpredictable. Appropriate, I suppose.

Wednesday, June 3, 2020

The Disappearing Lecture


Thanks to Covid-19, we will not be seeing large university lecture classes meeting in person. Social and physical distancing are the watchwords. Students could still watch lectures online, but why should they watch no-name me when they can instead watch lectures from famous scholars at famous institutions? The university lecture of the twentieth century is going extinct – one of the predictions made by Wertheimer & Woody in an intriguingly titled article that muses about the professoriate of this century (see abstract below).


The authors note the increase in the scholarship of teaching and learning coming out of their broader field of psychology. The future they predict, one of “individual mentorship, small seminars, and advising”, emphasizes things already valued at undergraduate-focused liberal arts colleges, so I’m happy to see that. But it will be an expensive shift if you’re not already doing this, regardless of whether you’re employing new technologies. I’m sure someone out there is looking for the be-all-end-all education app, the robot-in-the-sky tutor that you can plug into to download into your brain the ephemeral thing we call knowledge. I think it’s presumptuous, but perhaps the nature of teaching makes that so. The authors take a stab at this idea:

“Teaching is a presumptuous activity. It presumes that the teacher knows something that the students do not know, that it’s worth knowing, and that the teacher knows how to teach it. Is paying substantial tuition and other costs and sitting for a long time in a classroom with someone talking at you and a group of your colleagues a worthwhile endeavor?” If Zoom collected statistics on student attention, we might have a numerical answer to this question. The authors probe the ‘task’ of teaching:

“What is its aim? It is to impart knowledge, skill, and wisdom to those who presumably do not yet have it by someone who presumably does have it. The effective teacher obviously has to have a reasonably sophisticated understanding of the subject to be taught. But the effective teacher also should have some idea of the student’s initial ‘cognitive map’ of the subject to be taught… [and] the teacher must have a strategy for generating the transition of the student’s naïve cognitive map into one that coincides more closely with the teacher’s map.” Essentially, one is trying to move students along a continuum from novice to expert, or at least that’s what I talk about in my blog posts.

The article calls for “further development of and emphasis on a technology of teaching that is thoroughly grounded in rigorous empirical evidence and based on the translation of basic research findings into practical applications.” This sounds nice. And scientific. But finding reliable and valid measurements is a huge bugbear. Because of my interest in this area, I’ve read hundreds of such articles – they do shed light on some small aspects here and there, but there are too many confounding variables. Teaching humans is a messy business. We don’t truly understand how it works. Even the nature of improving reasoning ability is not straightforward.

Teaching at scale will always be a challenge because of the diversity of students and their experiences. We are not robots. Some technocrats might hope we are, or try to treat us as such. Or make us verify on a website that we are indeed not bots. I occasionally fantasize about leisurely one-on-one tutorials with a motivated student or two. The only time I experienced this was while working on my undergraduate thesis at a liberal arts college. My adviser was technically on sabbatical, but for some reason he didn’t say no when I asked if he would supervise my thesis, for which I’m very thankful! Unfortunately, I’m not doing as well with my own students; everyone just seems so busy, including me. In the meantime, I will just keep calm, carry on, and try to improve as a teacher bit by bit.