Saturday, February 27, 2021

Obliviate! Harder than it looks

It’s hard to sympathize with Gilderoy Lockhart in Harry Potter and the Chamber of Secrets. He’s a self-absorbed narcissist who seems rather inept at most magic, except for the Memory Charm which he uses effectively to remove the memories of wizards and witches who have done great magical deeds so that he can claim such deeds for himself. This leads to a career as an author and celebrity. Lockhart is good at selling an oversized image of himself.

 

In the one instance when Gilderoy tries to use the spell Obliviate to burnish his reputation once more, he unwittingly uses an almost-broken wand that causes the spell to backfire – stupendously so in this case, because he seems to have erased all his own memories and has no idea who he is anymore, or of any past events in his life. This gross damage is actually the more likely outcome in real life. We’ve learned a lot about how the human brain works from mishaps: from people who’ve had accidents that caused severe lesional damage to their brains, and from early medical practices of lobectomy – removing chunks of the brain in the hope of reducing ill-understood maladies such as epilepsy. The most famous case in psychology and neuroscience books is patient H.M., who had major memory problems after the surgical removal of much of his medial temporal lobes, and then happily allowed himself to be studied and tested over decades. Of course, he couldn’t remember any of this.

 

Gilderoy’s specializing in Obliviate, cast with surgical precision, is actually very remarkable, given the complexities of the human brain. Erasing a memory associated with just a singular event – one that was likely highly emotional and dramatic – is very challenging, especially if you want to leave the unwitting victim none the wiser. Even more so if the event was not recent and is likely lodged in long-term memory. You can’t just turn on a blue-flashing light and slam people with electromagnetic radiation as the Men in Black do. Neuralyzing, they call it.

 

Memory and cognition are tricky things. I’ve been marveling at the complexity of how our human brains do what they do, as I’ve been reading through The Idea of the Brain by Matthew Cobb. The year is 1950 and Karl Lashley is giving a seminar at Cambridge reviewing his long studies on memory. The title: “In Search of the Engram”. What is an engram? Apparently, it means “the physical trace of a memory”. Are memories physically located in certain areas of the brain? And if we knew exactly where they were, could we mess with them, or even remove them? (A dramatic view of how this might play out can be seen in the movie Eternal Sunshine of the Spotless Mind.)

 

Better known in the field than Lashley is his former student Donald Hebb. Both Lashley and Hebb thought that memory was distributed throughout the brain rather than localized in a particular area. Hebb is also associated with some great quotes. Referring to neuron activation and synapse development, Hebb’s theory is couched in the memorable “cells that fire together wire together”. One quote that I particularly like refers to the dualist brain-mind arguments of the day: “Our failure to solve a problem does not make it insoluble. One cannot logically be a determinist in physics and chemistry and biology, and a mystic in psychology.”

 

Then there’s the “grandmother” cell. Cobb relates the funny story of how it came about as an extrapolation of the work of the Russian neurosurgeon Akakhievitch (you’ll have to read the book to find out more!), and how “it was used as a shorthand way of underlining the inherent silliness of suggesting that every object we recognize, whatever its orientation or context, is represented by the activity of a particular cell or group of cells. Taken to its absurd conclusion, there would have to be a cell for your grandmother sitting, your grandmother standing on her head, your grandmother playing the ukulele, and all possible combinations of the infinite variations in which you could recognize your grandmother.”

 

And then there’s the famous 2005 study by Fried and Koch. Apparently, after inserting electrodes into the brains of patients (who were being prepped for surgery for severe epilepsy), they showed images to the patients and recorded the activity of individual hippocampal cells. The results are mind-boggling: “In one case, a unit responded only to three different images of the ex-president Bill Clinton. Another unit (from a different patient) responded only to images of the Beatles, another one to cartoons from The Simpsons…” And most strikingly, “one patient possessed a single unit in the left posterior hippocampus activated exclusively by different views of the actress Jennifer Aniston. The cell did not respond if Aniston was pictured with her then partner, Brad Pitt. In another patient, a cell responded consistently to pictures of the actress Halle Berry, including when she was dressed up as Catwoman…”

 

What is one to make of all this? Cobb writes that “the authors were more cautious – although the cells responded consistently to Aniston or Berry or Clinton, that did not mean that these were the only stimuli that could potentially excite these cells – the patients had been shown only a very limited range of pictures… just because a single cell responded to an image, that did not mean it was the only cell involved in recognizing the image, merely that it was the only cell they had recorded from that belonged to the relevant network. They estimated that a million neurons would be activated by each stimulus…”

 

I agree with Cobb when he argues that a significant limitation of our understanding comes from the helpful (yet limiting) reductionist approach – take things apart to find how each part works. The key issue is that “function is both localized and distributed – or rather, to be clearer, both terms are misleading: localization is rarely precise, and distributed functions are also localized to particular networks and cells, even if these may sprawl over the brain. Brain function therefore involves both segregation and integration.” A long-standing dictum employed in neuroscience – that location implies function (or “where is how”) – needs to be re-thought. A related dictum we face in chemistry and biochemistry is that structure implies function. But when we get to systems chemistry, this reductionist stricture becomes very problematic.

 

Let’s get back to Gilderoy. If engrams and grandmother cells exist as localized patches, he’d have to be an exceedingly able wizard to isolate those patches without causing severe harm to his victims (which might then implicate him and reveal his thievery). Perhaps one of the reasons why he’s a bungler at most other magic is that he doesn’t practice it, given his exclusive concentration on perfecting the Memory Charm. Messing with the brain too much, though, should start to produce side effects. Legilimency, the Imperius Curse, and even the seemingly simpler Confundus Charm might start to cause longer-term degeneration. There are hints of this in the Harry Potter series, where victims seem to become more susceptible to further break-ins to their brains and executive functions.

 

Hermione is praised as the greatest witch of her age. In the final book, she removes the memory of herself from her parents – likely requiring great magical skill made even more difficult by the emotional challenge of the very act. This is one instance where the movie does a better job than the book in conveying the drama of Hermione’s Obliviate. And it reminds me that Lockhart may not be simply a magical oaf with poor skills. Obliviate is much, much harder than it looks.

 

P.S. For a previous post on memory, and The Chamber of Secrets, see here.

 

Thursday, February 25, 2021

Molecular Topology Networks

In preparation for a grant I’m writing, I’ve been reading about graph theory and network topology. This has led me down multiple rabbit holes, some productive, some less so. Today’s productive reading comes from “Topology of molecular interaction networks” (Winterbach, W.; Van Mieghem, P.; Reinders, M.; Wang, H.; de Ridder, D. BMC Systems Biology 2013, 7:90). It’s an excellent overview of network biology, and very helpfully classifies extant research approaches as descriptive, suggestive, and predictive analysis. Each has its pros and cons, clearly laid out.

 

It’s nice to read a topology paper that is itself well-organized, both topically and topologically. There are some excellent tables, with clear definitions. Here’s Table 1 on metric descriptions. I’ve been reading up on graph theory, so most of the terminology was familiar, but I wish I had read this paper first because it would have eased my reading of several other dense papers that were not always as well-organized.

 


To figure out what’s more closely related to what, we typically employ a distance metric. This is usually associated with a numerical value. Algorithms allow us to apply statistical measures, and out pop new numerical values – perhaps telling us something about how closely things are clustered, or whether there is an overall modular architecture, or whether a network is scale-free. We can compare different networks. We can compare one numerical value to a different numerical value.

 

I also found clear descriptions of how we can compare real-world networks to randomly simulated (null) networks, why it is useful, and helpful summary statements about different methods to do this. Even though I had come across this before, I hadn’t quite grasped the importance of the null network comparison until reading this paper. As I read I was jotting down useful language and terminology for my grant.

 

There’s a helpful paragraph where they describe two views of robustness, a concept I’ve been wrestling with. “One property thought to emerge from natural selection is robustness, the ability to maintain function under perturbations… [in some] networks, the number of interaction partners of nodes initially appeared to correlate with their essentiality: … [they] have few hubs and many low-degree nodes. In metabolic networks, almost the opposite is true, with networks being susceptible to disruption of low-degree linker nodes that connect metabolic modules. However, in both cases the systems are resilient to most perturbations but susceptible to targeted attacks, a property known as highly optimized tolerance.”

 

It’s always hard to read a paragraph pulled from a paper, but trust me if you do read this paper, what the authors are saying becomes quite clear. They also highlight limitations at every turn. In the next paragraph, they refer to simulations showing that “such structural features emerge from network dynamics rather than selective pressure.” This cleared up a lot of confusion for me. (I apologize if you find this confusing, but I write my blog to offload excess memory, and this is very helpful to me.)
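The “resilient to most perturbations but susceptible to targeted attacks” idea can be sketched in a few lines of Python (an assumed toy example of mine, not a simulation from the paper): a hub-and-spoke network shrugs off the random loss of a leaf node but falls apart when its hub is attacked.

```python
# Hub-and-spoke toy network: one hub connected to four leaves.
edges = [("hub", leaf) for leaf in ("a", "b", "c", "d")]
nodes = {"hub", "a", "b", "c", "d"}

def connected(surviving, edges):
    """Depth-first check that the surviving nodes form one component."""
    adj = {n: set() for n in surviving}
    for u, v in edges:
        if u in adj and v in adj:  # keep only edges between survivors
            adj[u].add(v)
            adj[v].add(u)
    stack, seen = [next(iter(adj))], set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n])
    return len(seen) == len(adj)

print(connected(nodes - {"a"}, edges))    # True: losing a leaf is tolerated
print(connected(nodes - {"hub"}, edges))  # False: the hub is a single point of failure
```

Most random failures hit low-degree leaves, so the network usually survives; a targeted attack on the hub is catastrophic – highly optimized tolerance in miniature.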

 

But what if you don’t have a distance metric? I alluded to this in my most recent post because my challenging reading for the day was “The Topology of the Possible: Formal Spaces Underlying Patterns of Evolutionary Change” (Stadler, B. M. R.; Stadler, P. F.; Wagner, G.; Fontana, W. J. Theor. Biol. 2001, 213, 241-274). I’ve been playfully calling this “Nearness is Not a Number”.

 

At first glance, this seems like an impossible task. If you can’t measure the distance between A and B, and also between A and C, how can you tell which is more closely related to A? Is it B or C? And could you relate B and C somehow? Turns out you can still do this using set theory. This paper was a bear to read. I glossed over much of the math beyond the initial definitions. However, Figure 3 (shown below) in the paper was very helpful. I’m used to working in Euclidean vector space, and have started to think about metric space where I’m more limited in the mathematical operations I can use, but I can now see how there are other non-numerical operations or features you can use for your groupings. I also have a new appreciation for how fuzzy boundaries can be utilized. 
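To give a flavor of what “nearness is not a number” might look like, here’s a hypothetical sketch (my own toy illustration, not the construction in Stadler et al.): relatedness judged by set containment of shared features. This yields only a partial order – sometimes B is strictly nearer to A than C is, and sometimes the two overlaps are simply incomparable, with no number to force a ranking.

```python
# Toy feature sets (hypothetical labels, for illustration only).
features = {
    "A": {"ring", "polar", "acidic"},
    "B": {"ring", "polar"},
    "C": {"ring"},
}

def nearer(x, y, z, features):
    """Is y nearer to x than z is, judged only by shared-feature containment?

    Returns True or False when one overlap strictly contains the other,
    and None when the overlaps are incomparable (no ranking exists).
    """
    xy = features[x] & features[y]
    xz = features[x] & features[z]
    if xz < xy:   # strict subset: y shares strictly more with x than z does
        return True
    if xy < xz:
        return False
    return None   # incomparable under set containment

print(nearer("A", "B", "C", features))  # True: {ring} is a strict subset of {ring, polar}
```

With features like `{"a", "b"}`, `{"a"}`, `{"b"}`, `nearer` returns `None` – exactly the kind of fuzzy, unrankable situation a numerical metric would paper over.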

 


This blog post does not have a conclusion. I’m soaking in information; I don’t quite know what to do with some of it yet. Maybe when I sleep and dream, new connections will be made. Topology is a strange and funny beast to the non-mathematician me. I’m just beginning to glimpse its usefulness for interesting questions in systems chemistry.

Tuesday, February 23, 2021

Metaphor Limits: Brain Version

We still don’t really understand the brain. And we’re limited by the metaphors of our current milieu.

 


That’s my summary of the first half of Matthew Cobb’s new book, The Idea of the Brain. The book is subtitled “the past and future of neuroscience” and I’ve just finished the first two hundred pages in the section labeled PAST. The story is an interesting one, and Cobb is an engaging storyteller. Beginning with ancient myths, then briskly moving on to folks like Hippocrates and Galen, we quickly reach Vesalius. All in the first chapter. From then on, things move more slowly, and while there are familiar names, Cobb also highlights many lesser-known and interesting characters.

 

Many of us imagine the brain to be something like a computer. That’s the metaphor of today as we are surrounded by such machines. As computers became more powerful, we started talking about artificial intelligence – to distinguish it from our human intelligence, I suppose. Except it’s no longer an artifact, but embedded into our everyday lives. We are now shifting to discussing machine intelligence. Is man a machine? Can a computer-machine be human? Or mimic one cleverly enough such that we can’t tell the difference?

 

Before the computer arrived, the workings of the brain were described with different metaphors: pipes & hydraulics, the telegraph, electrical circuits, and even immaterial forces. Some of these metaphors seem quite outdated today given that we’ve learned a lot about the brain – and that it’s much more complicated than we imagined. And we still can’t explain the nature of consciousness. I’m not even sure we’re close. Perhaps decades or centuries from today, future scientists will look back at the early twenty-first century and muse about the strange, inadequate ideas and analogies we used. This is the salient lesson I’ve learned from Cobb, and I’ll quote from his opening paragraphs of the PAST.

 

The history of science is rather different from other kinds of history, because science is generally progressive – each state builds upon previous insights, integrating, rejecting or transforming them. This produces what appears to be an increasingly accurate understanding of the world, although that knowledge is never complete, and future discoveries can overthrow what was once seen as the truth… the history of science is not a progression of brilliant theories and discoveries: it is full of chance events, mistakes, and confusion.

 

To properly understand the past… and even to imagine what tomorrow may hold, we must remember that past ideas were not seen as steps on the road to our current understanding. They were fully fledged views in their own right, in all their complexity and lack of clarity. Every idea, no matter how outdated, was once modern, exciting and new. We can be amused at strange ideas from the past, but condescension is not allowed – what seems obvious to us is only that way because past errors, which were generally difficult to detect, were eventually overcome through a great deal of hard work and hard thinking.

 

A great deal of hard work and hard thinking. Is that what I’m doing as a scientist? It feels like that sometimes, and my brain “hurts” when I read a tough-to-understand paper, as I did this afternoon. (I’d summarize it as “Nearness is Not a Number” – and it has to do with mathematical topology and biological evolution.) But I’m amazed at the wonder of being able to read and think about abstract mathematics and relate it to big ideas in the nature of life. How do our three-pound brains even accomplish such a feat? It’s truly amazing. I’m looking forward to learning more as I get to the PRESENT in Cobb’s book. And Onward to the FUTURE!

Tuesday, February 16, 2021

Paying Attention to Attention

Being distracted is the norm. When I expect students to sit and focus on learning challenging material in the dense time-block of our class meeting, I’m asking for something very difficult. It’s hard to maintain laser-focused attention throughout. The brain simply gets tired. Attention flags. Distraction distracts. What can we do? That’s the focus of James Lang’s latest book, Distracted.

 


Why are humans easily distracted? Lang summarizes the story outlined in Ancient Brains. It’s an evolutionary argument suggesting that being wired to attend to our surroundings while focusing on a task at hand was important to our survival, back in the hunter-gatherer days. While today we’re unlikely to be attacked by crouching tigers or other hidden dangerous beasts, we might still get mowed down by a car while crossing the street – if we’re not paying attention. The odds of tragedy go up when we’re plugged into our headphones and our eyes are glued to our cellphone screens.

 

The book has a chapter on the debate about whether tech should be banned in the college classroom. Designers of apps and social media have learned how to get our attention, thanks in part to cognitive research over the years. Yes, our science has helped entrepreneurs exploit our attention, and has even persuaded us to give up our privacy. Wisely, Lang does not spend too much time on all this, because, as in many other debates, the optimal solution will depend on your particular situation in your particular classroom. Instead he focuses on concepts, principles, and helpful practical suggestions.

 

Distracted begins by outlining its three main ideas about Attention.

·      It’s an achievement. (Distraction is the norm!)

·      It is achievable. (Amazingly, we can get so focused and caught up in something!)

·      Our job as teachers is to cultivate attention.

 

Why this focus on attention? Lang outlines the three phases we must traverse for learning to take place.

1.     We must attend to the item, whatever it might be.

2.     We must process what we are attending to and incorporate it into our existing knowledge frameworks.

3.     We must be able to retrieve the newly learned item from our memory after the initial apprehension and processing.

Too often we focus on the second stage. Sometimes we use the third stage to measure learning, and forget to utilize it as part of learning. As to the first stage? Sometimes we assume that what we’re presenting is so compellingly interesting that we skip cultivating attention before we launch into what we think are the meaty parts. I’m certainly guilty of this, but over the years I’ve started to pay attention and be more thoughtful of how to frame each class.

 

My favorite passage in the book comes from Chapter 5, “Curious Attention”. We’re all curious. Our ancient brains may have something to do with this. We’re still certainly curious in this day and age. Young children are marvelous examples of curiosity, constantly asking why, why, why. I particularly enjoy puzzles, which is not surprising: working through a puzzle tickles a part of my brain and brings me joy and satisfaction. Lang says it better than I do, so I’ll quote him instead.

 

“Puzzles intrigue us, but they do so because we know they have a solution that just happens to evade us at the moment. We expect to solve them or have them solved for us. Your favorite mystery novelist excels in the creation of puzzles, which she generally resolves for you by the end of the story. Mysteries are those big, open-ended questions that fascinate us, and yet have no easy answers, or no answers at all. Mysteries are capable of long, sustained study or exploration without resolution…”

 

This might explain why I’m attracted to origin-of-life research. It’s a mystery of the big open-ended kind. But I also enjoy crossword puzzles, logic puzzles, spatial puzzles, jigsaw puzzles, of the shorter and solvable variety. (Interestingly I don’t read many mystery novels.) I like how Lang goes on to ask the question: “What is the mystery that lies at the heart of your discipline?” This semester I opened my G-Chem 2 classes with the elusive and nebulous nature of energy. It’s something we’ll be exploring throughout, and this week’s discussion board prompt asks the student to ponder that strange thing we call entropy. I’m looking forward to reading the students’ entries.

 

I’d like to return to Lang’s third key idea – that as instructors, cultivating attention is an important part of our job. One tidbit he mentions is that attending to complex ideas can be very different experiences for the speaker versus the listener. Chemistry is both challenging and complicated, even at the introductory level. Lots of things are going on simultaneously at different levels. There’s also the gulf between the expert and the beginner to take into account. My ability as an expert to get into the “flow” while expounding on chemistry might feel good to me, but the students might simply be dumbfounded. I should pay attention to this phenomenon.

 

And if distraction is the norm, and if attention naturally degrades over time in my classroom, I need to pay attention to the pacing and the pausing. Lang discusses studies where student attention is renewed when switching to an “active learning” activity, but also when switching out of it and into a mini-lecture. Pondering this makes me doubly irritated at having to teach remotely through Zoom, where I’ve not done as many mix-and-match activities as I did for in-person classes. This also means I need to revisit the density of information in my classes – I’ve always felt the need to make every minute count in the classroom, and I’m reminded that I often err on the side of doing too much, or expecting too much of the students. It’s important and good to have high expectations, but I too easily forget to be attentive as a teacher. I need to pay more attention to attention.

Thursday, February 11, 2021

Rise of the Machines

Rise of the Machines. It conjures the portent of a dystopian future, Terminator-like or Matrix-like, pitting man versus machine in a struggle to survive and thrive.

 

A clear distinction is made between man and machine. But can we clearly nail down the differences? Humans are alive. But the machines seem just as alive. Humans are intelligent. Well, so are the machines, although we may not understand their black-box intelligence with the newest machine-learning algorithms. Humans are carbon-based. While you might think of machines as being silicon-based now, that distinction is being slowly eroded as we approach a cyborgian future. A sufficiently clever Turing machine might even fool you into thinking it was human. Sci-fi has no trouble imagining such hybrids, from Star Trek’s Data to Westworld’s Dolores.

 

In marveling at biochemistry and molecular biology, we imagine being made up of a host of molecular machines. Microscopic engines whir and hum along, allowing us to accomplish a variety of tasks. (Life’s Engines is an excellent book!) Perhaps the origin of life, and the arrival of LUCA, the last universal common ancestor, was a landmark event in the history of Planet Earth. Maybe the story of biological evolution is a story of the rise of the machines.

 

We’re good at identifying man-made tools and machines. The industrial revolution led to the development of man-made engines; in the early days, these were machine-like behemoths that burned fuel and converted its energy into work we could harness. You can think of an engine as an energy transducer. One form goes in. Another, seemingly more useful, form comes out. These days our machines are much smaller, much more compact, and run by tiny engines. They seem intelligent too. We call some of them smartphones. But there’s a part of us that wants to distinguish man-made machines from seemingly natural ones.

 

Forty years ago, the physicist-metallurgist Alan Cottrell wrote an intriguing article titled “The Philosophy of Engines” (Contemporary Physics, 1979, vol. 20, pp. 1-10). The paper begins with the second law of thermodynamics, and considers how open systems that experience a constant influx of energy from their surroundings organize a subset of themselves into seemingly life-like engines that transduce energy and dissipate it. (Most arguments about why living systems do not break the second law run along similar lines.) There are some excellent analogies in Cottrell’s paper, and although it has some equations, I found it relatively easy to follow as a non-physicist; I recommend the article if this topic interests you. Here are a handful of quotes.

 

From the standpoint of statistical physics an engine is a strange object, a highly improbable configuration of atoms and quanta, which displays an enormously exaggerated motion in one of its modes. The description of such objects is a familiar task in [biology]… In fact, the total phase space of a large material system covers all possible configurations of the system, among which may be those which we recognise as engines, or artifices generally… But, according to classical statistical thermodynamics, these are unique configurations and hence extraordinarily rare…

 

So long as physics lacks a historical dimension it cannot deal with the special properties of living systems. Similarly, it cannot account for the existence of purposively made things, such as transistor radios, internal-combustion engines, ant-hills and birds’ nests, all of which are statistically improbable yet exist abundantly on earth today. Although we usually recognize such organized systems as these through their special structures, it is nevertheless not through structure that they are to be properly defined. In fact, they do not differ significantly in purely structural aspects from non-functional systems such as misconnected electrical circuits, abiotic polynucleotides and machine-like sculptures…

 

Cottrell uses the example of a self-propagating crack in a crystal to illustrate a simple kind of autocatalysis that is also replicative in nature. Comparing the process of deformation (subjecting a material system to an outside energy source) in a plastic versus a brittle material illustrates how energy is stored in various dislocations, providing an interesting interplay between stability and instability, with different degrees of meta-stability that Cottrell equates to homeostasis. Some definitions of life suggest that it exists at the “edge of chaos”, between seeming order and randomness, in a strange liminal space with a fuzzy boundary.

 

In another short, readable article (with just a handful of equations), “Dissipative adaptation in driven self-assembly” (Nature Nanotechnology, 2015, vol. 10, pp. 919-923), the physicist Jeremy England sketches out a scenario for how systems can be driven towards becoming seemingly more ordered structures that then function as microscopic machines to maintain themselves, possibly even becoming more complicated over time as energy flows through the system. The trick comes from how seemingly random motion, coupled with the absorption of “work” energy, provides the route to self-sustaining structures. Here’s an example of England’s clearly-written prose.

 

The absorption of energy from a drive allows the system configuration to traverse activation barriers too high to jump rapidly by thermal fluctuation alone, and if this energy is dissipated after the jump, then it is not available to help the system go back the way it came. Thus, while any given change in shape for the system is mostly random, the most durable and irreversible of these shifts in configuration occur when the system happens to be momentarily better at absorbing and dissipating work. With the passage of time, the ‘memory’ of these less erasable changes accumulates preferentially, and the system increasingly adopts shapes that resemble those in its history where dissipation occurred… the structure will appear to us like it has self-organized into a state that is ‘well’ adapted to the environmental conditions.

 

Cottrell says structure is not enough; function has to be considered beyond mere structure. England, by providing a mechanistic explanation, inadvertently tries to collapse the distinction, in the sense that once you have a mechanism, you have a machine. The two words ‘machine’ and ‘mechanism’ have similar roots. Does this mean that man-made machines and the ‘natural’ molecular machines of biology are fundamentally no different from each other? Is the difference only a matter of degree? Are living things merely complicated rather than complex? Robert Rosen would disagree. By his definition, if life is complex, then its mechanism cannot be comprehensively conceived and no algorithm can be devised to compute it. Mathematical equations will always be lacking and cannot represent the complex system, although they can provide interesting insights into a ‘reduced’ model of such a system.

 

Can something that started out as a man-made machine cross the threshold of complicated to complex, if such a threshold exists? Perhaps we can only wait and see if today’s smart devices will be tomorrow’s Rise of the Machines.

Tuesday, February 9, 2021

Science Fictions

You might lose faith in science after reading Science Fictions by Stuart Ritchie. That would be the OPPOSITE of the reaction Ritchie desires, and he makes this clear in the book. But when you read about the problems plaguing how results are communicated – shaded with spin or inflated with hype – and consider the perverse incentives in today’s scientific currency, it’s enough to make one increasingly skeptical about the whole enterprise.

 


Ritchie’s area of expertise is in the quantitative social sciences (particularly in social and behavioral psychology), where the problems are the most egregious, and he has many nightmarish and depressing examples of bad actors. But medical research, fundamental physics, and even biochemistry take a share of hits in Science Fictions. While there are stories of outright fraud, the book also explores inherent biases, motivations, unconsciously blinkered views, and the increasing role played by hype – the need to get noticed in an increasing deluge of scientific “breakthroughs”. Many of the vignettes relate to the misuse and massaging of statistical data; you might have heard of the p-hacking scandals and the “replication crisis” in the behavioral sciences.

 

“Hype” is the chapter that I found most interesting. The opening vignette was very familiar to me: the exciting press release about GFAJ-1, a microorganism from the harsh environment of Mono Lake, that supposedly survives and grows on arsenic rather than phosphorus, possibly even incorporating it into its genetics and metabolism. Ten years ago, I was in a packed session at a conference where the lead scientist was assailed by critics – still the early days, before subsequent attempts to replicate the work led to conclusions that were nowhere near as exciting as the hyped press release. It was a circus, and not in a good way. (My origins-of-life class read the paper and discussed this issue last semester.)

 

Why is there scientific hype? And why is it getting worse? Here’s an illuminating paragraph from that chapter:

 

All this spin serves the same ultimate purpose as exaggeration in press releases and books: scientists want to emphasize the impressiveness and ‘impact’ of their work because impressive and impactful work is what attracts grants, publications and plaudits. The problem is that this can create a feedback loop: the hype nurtures an expectation of straightforward, simple stories on the part of funders, publishers and the public, meaning that scientists must dumb down and prettify their work even more in order to maintain interest and continue to receive this funding.

 

That’s not me. Or so I think, because my research isn’t going to be soaked up by mass media anytime soon. And I don’t think the reviewers of my grant applications and my papers are caught up in the hype machine. But academia has its peculiar and perverse incentives, which Ritchie discusses in detail – a story that is familiar to us in the business. To be “successful”, be it publication-wise, funding-wise, or prestige-wise, is like running in a hamster wheel that keeps speeding up. If you don’t keep running faster, you’ll fall out of the virtuous cycle, which in reality is a vicious cycle.

 

I’ve personally wondered about my continuing complicity in this system. Now that I’ve reached full professor-hood, I can no longer get promoted (at my current institution) unless I want to go into administration – which I don’t consider a promotion because the fancy title and a higher salary do not, in my opinion, offset the frustrating and ridiculous parts of those jobs. (I can speak from personal experience. I’ve taken on such roles when the service was important and I was clearly needed; otherwise I’m happy to bow out, and have done so more than once.) I find myself edging away from it all, falling away from the trappings of what it means to be “successful”, and therefore awards and accolades won’t be coming my way, nor will I be headhunted by some other institution looking to burnish its credentials. I still publish research results in the usual channels because I want to give my students a leg-up into a (bleakly) increasingly competitive future, but otherwise I’ve mostly exited the rat race.

 

Finding our way out of a perverse system – and it is a complicated system that “takes on a life of its own” (to use an apt phrase) – will be very challenging. The closing chapters of Science Fictions describe some steps that have been taken to combat fraud, bias, negligence, and hype. But Ritchie also discusses why these measures are limited and may, in themselves, set up new perverse incentives that replace the old ones. Goodhart’s Law keeps coming into play: When a measure becomes a target, it ceases to be a good measure.

 

Or maybe it’s all just an ever-moving target. Science Fictions is a welcome addition to this discourse. I recommend reading it even though parts of the story can be depressing to many of us, the rank-and-file scientists stuck in a system of perverse incentives. It certainly made me think a little more carefully about my biases, how I frame hypotheses in my research projects, and how I write my papers. I was already familiar with the issues in the behavioral sciences since I read that literature on a regular basis too. But it was a good reminder to me not to be lazy when making an argument, and to catch myself from using post-hoc rationalizations. And that it’s worth doing good science!

 

Saturday, February 6, 2021

Five Minutes After Class

One change I’ve noticed with having remote rather than in-person classes: Students only come to class a minute before, or right on the dot of, when class begins. Some no longer need to come to class early if they want to get the best seats – usually closer to the front because my handwriting on the board is not the largest. It’s also less noticeable if you’re a minute or two late online; there’s no opening and closing of doors with a bunch of people (and the instructor!) looking at you if you’re late. Aiming to come slightly early precludes that.

 

This means I no longer get my five minutes before class to chat with students, learn their names, and make an in-person connection. With Zoom, I no longer need to ask students their names since those are listed alongside their video. Most of my students turn on their cameras so I’ve been able to match names and faces. However, to make sure I pronounce student names correctly and to make a personal connection, I require all my students to visit with me for a 3-5 minute chat in office hours in the first several weeks of class. Hopefully this makes them more comfortable participating in class, and also ensures they know when my office hours are and how to visit via Zoom.

 

Instead, I’m now having a subset of students stay on for five minutes after class, usually to ask questions. This is also a welcome change. Previously I might have had only one or two. In person, we need to vacate the room before the next class comes in, and students who have classes right after mine have to get to their next classroom and account for walking time. But with being able to instantaneously leave one class and hop on to the next, students seem to be more willing to spend a few minutes after class and ask additional questions. This is particularly true of my 8am honors G-Chem class, even though most of them have a 9am class right after mine (the paired bioenergetics course I’ve mentioned in previous posts). I also know most of the students well, since I had them in the honors class last semester. On the other hand, not many stay back from my larger regular G-Chem section, probably because it’s lunchtime. I’m hungry too at that point!

 

It’s a good thing that my classes are automatically recorded, since some of the questions are excellent and they get me talking about things that would benefit all the students. When this happens (as it did on Wednesday), I e-mail the class and encourage them to watch the extra 8 minutes of video from the after-class discussion. This made me think that perhaps I should regularly aim to end class five minutes early, although I should think of a way to encourage everyone to stay while we do Q&A. Perhaps a “muddiest point” end activity could do this, although that usually involves rehashing something already said in class and might preclude some of the more interesting questions I’ve been getting, which are follow-ups or extensions of what we talked about in class – usually related to applications of the material that the students find curious.

 

I’ll have to keep an eye on this as the semester progresses. I don’t know why I didn’t notice this last semester, but it hit me on Wednesday that I can learn something from the five minutes after class that I’m experiencing with the students!

Wednesday, February 3, 2021

Prompts

With remote classes and my reliance on the LMS, I’ve been thinking of more ways to engage my students since we can’t meet in person. One thing I’ve started doing is to require students visit me in Zoom office hours at least once early in the semester for a 3-5 minute chat. It’s a meet-and-greet sort of thing, and the students already know what I’ll ask them about beforehand (since they’ve introduced themselves on the Discussion Board as a first assignment). I’m enjoying meeting the students, learning how to pronounce their preferred names correctly, and learning about some of their interests and motivations.

 

The other thing I’m doing is trying to utilize the Discussion Board to help students, as a community, think about chemistry. Last semester, the Discussion Board was much more open-ended. I had general guidelines at the beginning of the semester, and while postings had to have something to do with chemistry (and affirming other people’s good ideas), the particulars of the content were left open. I think this worked well for some people, and about two thirds of the class showed lively participation, but there was much less from the remaining third.

 

This semester I’m trying to focus the discussion a bit more by providing a prompt each week. To facilitate continued discussion, prompts have to be somewhat open-ended. I’ve decided, as a start, to connect these to some of the more abstract conceptual things we’re doing in class.

 

My honors G-Chem class makes many explicit connections to biology because most of the students are simultaneously taking a bioenergetics Honors course. Our first prompt (last week) was:

 

In class on Monday, we discussed definitions of Life and definitions of Energy, and how both are difficult to define. How might the two be connected? Could one be discussed in terms of the other? Or are they two separate yet related concepts?

 

I kicked off the discussion with a quote: "As it is usually rather difficult to produce good definitions for rather general concepts, we have to let examples guide us on our way." [by Hans Primas, in Chemistry, Quantum Mechanics and Reductionism, 1982] There was some good discussion. One of the students mused: “…all living things obtain some sort of energy, which in an essence is what gives them life. Energy is also what allows life to reproduce, and the lack of any energy would mean an object is not living. However, this is somewhat confusing because when something dies, where does the energy go? Since energy cannot be destroyed it cannot die with life…” This sparked off some good discussion about the transformation of energy as things die or decay and other students brought up some relevant examples.

 

Another student who is taking an intro philosophy class and just read about Thales (who comes up in my first day of G-Chem) had this to say about Thales’ musing about soul and motion. “I think that a better way to interpret Thales' original theory is that life harnesses energy in a never-ending cycle of contorting both life and energy into something different.” The student jumped off a comment from another student who brought up the “intertwining” of energy and life. I particularly liked the idea of “contorting” from the thermodynamic point of view of how thermodynamic “machines” might be created. This is something I’ve been musing about myself! Anyway, here’s the follow-up prompt for this week.

 

While the nature of energy is nebulous, and yet can be "counted", we use models to examine the movement of energy as it pertains to chemistry. The thermodynamic universe is a model, as are its parts, as are the models used to represent its parts, and so on. What do you think are some advantages and disadvantages of models and representations that we use to track and measure energy? (Pick a model or an aspect of a model and comment.)

 

At the moment I’m using different prompts in my regular G-Chem section because I’m covering the material a little differently (though with much overlap), and hopefully I won’t confuse myself. Here’s the prompt the other group is discussing.

 

On our first day of class we discussed different forms of energy and how they might be related to each other. Most scientists recognize two broad energy categories: potential and kinetic energy. Other "types" are then related to one of these. But there may not be a clean-cut way to classify energy types as being associated with just one of the two in some cases. Discuss a type of energy you have come across (it doesn't have to be from this class) and how it may or may not fall into one (or both) of the categories.

 

I don’t have an over-arching plan for the prompts and I haven’t thought about the prompts too far in advance, although I’d like there to be a general thread linking them. I’m adopting a wait-and-see approach to figure out what sorts of things work best. Trial and error, I suppose.