Monday, August 22, 2022

Paean to Learning

I’m enjoying Carlo Rovelli’s collection of essays, published in book form as There are Places in the World Where Rules are Less Important than Kindness… and Other Thoughts on Physics and the World. As my summer comes to an end, and no new flash of insight has emerged on the research project I’ve been working on, Rovelli reminds me (in the essay “Ideas Don’t Fall From the Sky”) that discovery is preceded by lots and lots of work. You don’t wake up one morning with a Eureka! if you haven’t been working hard at the problem.

So where do novel groundbreaking ideas come from? Rovelli writes: “They are born from a deep immersion in contemporary knowledge. From making that knowledge intensely your own, to the point where you are living immersed in it. From endlessly turning over the open questions, trying all roads to a solution, then again trying all the roads to a solution – and then trying all those roads again. Until there, where we least expected it, we discover a gap, a fissure, a way through. Something that nobody had noticed before, but that is not in contradiction with what we know; something miniscule on which to exert leverage, to scratch the smooth and unreliable edge of our unfathomable ignorance, to open a breach onto new territory.”


One example Rovelli provides is Copernicus’ deep astronomical knowledge of those who came before him, Ptolemy above all. On top of that, Copernicus was in the rich learning environment of the University of Bologna, where Rovelli also spent time as a student. In another essay (“Copernicus and Bologna”), Rovelli reminisces on his time as a university student, while imagining what the young Copernicus might have experienced there as a student five hundred years prior. Rovelli speculates that it wasn’t just immersing oneself in specific subject knowledge, but also the rich milieu beyond one’s field, being exposed and challenged to see everything anew.


In our current milieu, where higher education is on the defensive, Rovelli’s final paragraph is a paean to learning: “What can the university offer us now? It can offer the same riches that Copernicus found: the accumulated knowledge of the past, together with the liberating idea that knowledge can be transformed and become transformative. This, I believe, is the true significance of a university. It is the treasure house in which human knowledge is devoutly protected, it provides the lifeblood on which everything that we know in the world depends, and everything that we want to do. But it is also the place where dreams are nurtured: where we have the youthful courage to question that very knowledge, in order to go forward, in order to change the world.”


This reminded me of another paean about science and the love of learning by Tom McLeish, author of Faith and Wisdom in Science. (I previously pondered his vignette on Robert Brown.) I close this post with an excerpt of the majestic first paragraph of his Chapter 5, which reminds me to persevere in building computational models for complex non-linear origin-of-life chemistry even when I feel discouraged by my lack of insight.


“Science runs far deeper, quirkier and at more fully human levels than we would think from stories of relentless discoveries, spectacular phenomena or the cool application of [scientific] methodology. We know better than to swallow an inadequate narrative that portrays science as simply replacing an ancient world of myth and superstition with a modern one of fact and comprehension… [This] older love of wisdom of natural things, does indeed call on a growing illumination of nature by experiment and imagination, creating understanding where there was none before and opening up the exploration of new phenomena. It maps, in increasing detail, the physical world onto patterns, often mathematical ones, in our own minds. Notably, the scope of science in both its experimental and theoretical explorations needs to capture the stochastic, the random and the chaotic as well as the regular, smooth and periodic. But science also emerges from an ancient longing, and from an older narrative of our complex relationship with the natural world. Its primary creative grammar is the question, rather than the answer. Its primary energy is imagination rather than fact. Its primary experience is more typically trial than triumph – the journey of understanding already travelled always appears to be a trivial distance compared with the mountain road ahead. But when science recognises beauty and structure it rejoices in a double reward: there is delight both in the new object of our gaze and in the wonder that our minds are able to understand it.”

Thursday, August 18, 2022

Symbolic Consciousness

Today’s post is Part 3 of Terrence Deacon’s The Symbolic Species (links to Part 1 and Part 2). I finished the book. The last chapter is interesting and thought-provoking. If you don’t want to plow through 464 pages, the last 32 pages will give you the gist of his argument. My quick version: Deacon’s thesis is that what makes humans unique among all other creatures is the co-evolution of the human brain and early hominid ‘societal’ behavior that leads to referential symbolic consciousness and language. Symbolic consciousness, which is necessary for abstract thought, emerges from indexical consciousness, which in turn is supported by iconic consciousness. (Other creatures may exhibit indexical and iconic consciousness, and it’s possible they may attain the symbolic but it will be difficult!) Deacon builds his case methodically, although there are still many unknowns and gaps, which he acknowledges.


In this day and age, we imagine the mind to be like a computer. Thus, the question arises as to whether our computers’ artificial intelligence can be conscious. Last month Google fired Blake Lemoine, an engineer who claimed that the A.I. chatbot has a soul. Does it? I don’t know. Depends on how one defines soul or consciousness, I suppose. Deacon argues that human symbolic consciousness is virtual in a way that transcends the physical flesh, blood, and guts. But he’d also say there are no disembodied souls. Cartesian Mind/Body dualism, he thinks, is an ineffective way of tackling the problem of consciousness. Deacon’s argument is more nuanced (you’ll have to read his book for the full version), and while I think his theory still has many unanswered questions, I find his co-evolutionary approach helpful in sketching out the boundary issues. And he takes seriously the mutual feedback between individual organisms and their environment (which may include fellow organisms). Physical science and social science can’t be separated so cleanly.


Deacon considers the pitfalls of equating mind to computing, and he carefully works through Searle’s Chinese Room argument and its criticisms. I’ll quote Deacon: “Part of the danger in current computer metaphors comes from our tendency to call typographical characters ‘symbols’, as though their referential power was intrinsic, and to call the deterministic switching of signals in an electronic device a ‘computation’, because it simulates operations we might perform to derive an output string of numbers from an input string according to the laws of mathematics. We fall into the trap of imagining that the sets of electronic tokens (data) that are automatically substituted for one another in a computer according to patterns specified by other sets of tokens (programs or algorithms) are self-sufficient symbols, because of their parallelism to the surface features of corresponding human activities. This brackets out of the description the critical fact that the ‘computation’ is only a computation to the extent that someone is able to interpret the inputs and outputs as related in this way… All the representational properties are vested in the interpreter.”


At the beginning of chapter 13, Deacon provides the following quote from the journalist Sydney Harris: “The real danger is not that computers will begin to think like men, but that men will begin to think like computers.” Deacon’s book was published 25 years ago. The quote is even more apt today. As we charge into online mass education and the use of A.I.s for so-called ‘adaptive learning’, this is precisely what we are doing – embracing what I think is a myopic vision of using computers to teach us to think like them. How could we not? The machine is efficient and tireless, but only at its narrow task. Taylorism rears its ugly head again, and machine-like productivity is king. Before we know it, we’ll no longer know what a joke is, and become artificially unintelligent. Deacon writes: “Our cherished belief in the specialness of consciousness has not prevented us from thoughtlessly treating people as throw-away tools… The question before us is whether we will begin to treat people like unconscious computers, or come to treat conscious computers like people.”

Tuesday, August 16, 2022

Losing Focus

I’ve noticed that it’s getting harder for me to concentrate when reading. I’m easily distracted and I lose focus. My eyes zone through a paragraph or two before I realize I haven’t processed any of the words that came into my visual field. It feels like I’m having a memory lapse. Maybe I’m just getting old. Or maybe, it’s because everyday things in my environment are stealing that focus away. This is the subject of Johann Hari’s recent book, aptly titled Stolen Focus.

The book opens with Hari going on an internet-device fast. He takes three months away, cold turkey, from his fast-moving, information-saturated journalistic lifestyle, and moves to a small fishing town sans devices. He goes on walks by the water, reads physical newspapers once a day in the morning, talks to people in town, and starts reading the physical books he’s brought along to keep him company. The detox proceeds in stages, but he finds his ability to focus slowly returning. What lessons did he learn?


Hari covers a dozen or so topics in brisk, very readable, engaging chapters. Many of the early ones went over material familiar to me: We’re drinking from an exhausting information firehose while multitasking far too much. We don’t get enough sleep. The bite-sized shallow copypasta of internet sound-bites, retweets, and mindless videos has made it difficult to engage in sustained reading. (Although binge-watching an engaging mini-series with a complex story arc is one example showing that we can focus in a different immersive medium.) And of course, social media and our devices, designed with constant alerts that exploit psychological principles to grab our attention, mean our brains are losing the battle to the enemy of distraction. In later chapters, he discusses environmental pollutants, deteriorating diets, and other stress factors that promote hypervigilance, things that we may not have thought about. There are also vignettes discussing the virtues of a four-day workweek and universal basic income.


The two topics I want to highlight, because they are related to education, are what Hari calls the “disruption of mind wandering” and “the confinement of children, both physically and psychologically”. In interviewing psychologists and scientists, Hari learns that mind-wandering can be a good thing. He describes the following example: “When you read a book – as you are doing now – you obviously focus on the individual words and sentences, but there’s always a bit of your mind that is wandering. You are thinking about how these words relate to your own life. You are thinking about how these sentences relate to what I said in previous chapters. You are thinking about what I might say next. You are wondering if what I am saying is full of contradictions, or whether it will all come together in the end. Suddenly you picture a memory from your childhood, or from what you saw on TV last week… This isn’t a flaw in your reading. This is reading… Having enough mental space to roam is essential for you to be able to understand a book. This isn’t just true of reading. It is true of life. Some mind-wandering is essential for things to make sense.”


Perhaps, I need to consider what is actually happening when I think I’m losing focus in my reading. Maybe I’m more conscious of my mind-wandering now compared to when I was younger. And I do notice it mainly happens when I read non-fiction and when the material is denser. Perhaps when I see a glazed look on the face of a daydreaming student, it isn’t necessarily unproductive (although it could be – I can’t perform legilimency). Teaching at the college level, I’ve never had to exhort my students to ‘pay attention!’ although I could see teachers doing so in grade school. I suspect it happened to me in my early school years, but I honestly don’t remember. Our twenty-first century work culture is all about productivity. And mind-wandering seems to be the opposite of productivity. I think there is some truth to this, but mind-wandering may not be all bad. In my early years as a faculty member, being productive and efficient was important to me. Now, I’m less bothered about productivity, and slowly weaning myself away from trying to be overly efficient.


The second topic, closely related to the first, has to do with the more rigid structure of how many children grow up today, both at school and at home. At least in middle-class and wealthier families. In families trying to make ends meet, parents may barely have time to schedule their children’s activities, or the children themselves may be out hustling to make a living. Hari’s point is that he thinks the loss of “free play” and the rigid structural confinements that have aggregated into a system of mass education are a big problem. (He experienced this as a child, half a generation younger than me.) While I grew up in a mass education environment, our school days were shorter (lack of facilities), and there was a lot of free play outside of school with my neighborhood friends. It’s hard for me to imagine how my psychological makeup would differ from that of someone who grew up in a much more structured environment – but these are my students today! Perhaps I need to make a better effort to understand them.


Reading about free-play approaches made me wonder if these are useful and applicable at the college level, or whether their largest impact is for younger kids, and that by the time they get to college there’s not much one can do. I also wonder if the highly structured environment has led to more stress and behavioral anomalies when students first come to college and many are living away from home for the first time in a seemingly less-structured environment. What sort of changes could I make in my classroom that incorporate the creativity of free play? Maybe I’m limited in my imagination in finding it hard to see how to do so in general chemistry or physical chemistry, the two classes I teach most. I’ve tried a few things (e.g. here and here), although I’m not sure they panned out all that well in terms of student understanding of chemical concepts. The experimenting goes on, I suppose.


I’m less worried about my own ability to focus after reading Hari’s book. That’s probably because my social media use is very low, I downscaled my doom-scrolling news reading from the early days of Covid, and I hardly use my mobile phone. I’m also very fortunate to live in an environment where I eat healthily and get enough sleep and exercise, and I have a job that pays decently and therefore don’t worry about making rent or being able to afford groceries. Not to mention I enjoy my job and I don’t find it stressful, which is a huge bonus. But just because I’m not experiencing as much stolen focus, doesn’t mean that it isn’t a huge burgeoning systemic problem. I should be part of the solution, and that means thinking about ways to do so in my field. I’m glad Hari’s book motivated me to consider such things.

Monday, August 15, 2022

Locating Language

Among many myths of the human brain that have securely latched themselves as parasitic brain-worms is that artistic and creative types are right-brain dominant and that scientist-analyst types are left-brain dominant. Phrenology just won’t die. When it comes to language learning, there are similar pronouncements – usually backed up by some experimental evidence. Broca’s area and Wernicke’s area (which I first learned about from a board game over a decade ago) have been implicated as distinct locations for language because of their associated aphasias (language disorders). But then things get complicated. These two areas aren’t located in exactly the same place in different patients’ brains. Nor can one distinguish them at the microanatomical level from other brain areas. That’s not to say they are unimportant; there’s just much more to the messy story.


I’ve made it through Part 2 of Terrence Deacon’s The Symbolic Species. (Here’s my take on Part 1.) It’s all about brain structure and evolution. I can get geeky about the brain but some of the detail Deacon provides made my brain mushy and my eyes heavy. But there are nuggets, and admittedly I skimmed my way through some parts to feast on those tasty morsels. Here’s one: “Classic high-level models of language functions were conceived in order to explain the large-scale features of language breakdown, what might be called macrocognition. We must now face up to the daunting task of analyzing the microcognition of language: analyzing phenomena at a level of scale where the functional categories often no longer correspond to any of the familiar behavioral and experiential phenomena we ultimately hope to explain.”


So where are the language modules located? And are there even modules to begin with? How did the brain evolve to allow for or accommodate the use of language? My take on Deacon’s story (and admittedly I might not grasp what he’s really getting at) is that the facility of language is distributed over many parts of the brain, although one does see localized concentrations in some parts that might hint at modular subfunctions. The takeaway I found most interesting was his argument on the evolution of neuronal connections in the developing brain. In particular, differential signal input can lead to “cell and axonal displacement effects” and subdivision into discrete areas with different functions.


As to why humans are unique among primates and other mammals, Deacon uses the morbid analogy of an alien brain transplant experiment. Imagine transplanting the embryonic brain of a giant extinct eight-foot ape (Gigantopithecus) into the embryonic body of a modern-day chimpanzee. Then let the Franken-ape grow. What happens as the organism develops? Essentially, because the body isn’t going to be so large, fewer parts of the developing brain need to be recruited to take care of motor and other physical functions. But evolution does what it does, and neural nets shift their connections (with differential signal input) and get recruited for other things. Like Daredevil’s amazing sense of hearing because he is blind. I’m not doing Deacon’s more nuanced argument justice so I recommend reading his book (dry as it is in parts) and going through his detailed examples if you find any of this even mildly interesting or wildly unbelievable. And if not, you can at least read his take on Hoover, the talking seal, a very interesting story. Hoover’s brain (at autopsy) showed some damage possibly related to early encephalitis. Could that be why he talked? Short-circuit? We don’t know.


I close this post by quoting a paragraph of Deacon’s that speaks to the limits of a reductionist approach: “The central problem faced by researchers studying the brain and language is that even the minutest divisions of cognitive function we hope to explain at the psychological level are ultimately products of the functioning of a whole brain – even if a damaged one – whereas the functions we must explain at a neurological level are the operations (or computations) of structures. If there was ever a structure for which it makes sense to argue that the function of the whole is not the sum of the functions of its parts, the brain is that structure. The difficulty of penetrating very deeply into the logic of brain organization almost certainly reflects the fact that the brain has been designed according to a very different logic than is evident in its most elaborated behavioral and cognitive performances. This is precisely where the comparative and evolutionary approaches can provide their most crucial contribution.”


When Deacon tries to tackle the problem of consciousness in a later book, he discusses parallels to the problem of the origin-of-life, possibly an easier problem with a similar architecture. Since the chemical origins of life are my research area, my takeaway is that I need to combine both the logic of chemical evolution and how to functionally think about biochemistry in vivo. Well, I just checked out another library book for that but I should first finish reading Deacon’s book. One more part with 150 pages to go!

Friday, August 5, 2022

Random OoL Thoughts

Earlier this week, a biochemist colleague asked me what I would tell students in a biochemistry course about the origin of life (OoL). My mind immediately leapt to specific topics that were salient to origin-of-life research for historical reasons: prebiotic syntheses of building blocks (sparked by the Miller spark-discharge experiments!), ribozyme discovery and manipulation for the RNA World, and my own current interest in proto-metabolism. I rambled some random thoughts, not particularly coherently, and not thinking about how I might relate this to what students are learning in our biochemistry lecture courses.


I’ve since had time to mull over making connections to what students might ponder when they’re taking biochemistry. What is biochemistry? It’s studying the molecules and processes of living systems. Structure and function are intertwined, although the chemist’s reductionist approach tends to impose causality from structure to function. This is useful when you are taking systems apart to study them, but risks throwing out the baby with the bathwater. I’d like students to recognize this conundrum, and remember that life is embedded in a complex system. Thus, the OoL question has to do with how such a system with its myriad inter-relationships could be established.


Experimentally, one can attempt to approach the line between life and death from opposite ends. In the top-down approach, assuming a single cell is the basic unit of life, one could strip out parts until the cell is no longer viable. The challenge is that there are many, many, many ways to kill the cell. And because life is a system, teasing out the ‘fundamental’ part on which everything else depends is nigh impossible because all the parts are interrelated. One might call this ‘irreducible complexity’ although I think the phrase has been hijacked by creationists to argue that evolution cannot lead to a living system. But if complexity is irreducible by definition, this allows us to distinguish the complex from the merely complicated, and free our minds to contemplate the paradoxical cyclical chicken-and-egg relationship of structure and function. As Wicken argues, “the whole is not exactly more than the sum of its parts, since parts are relationally constituted by the wholes in which they evolved…”


That being said, and because our students are required to take two semesters of organic chemistry as a prerequisite to biochemistry, I would want to highlight how O-Chem’s “functional group” thinking allows us to group parts together and see relationships among the parts. I tell students that the way to approach O-Chem (while it does require some memorization in the beginning to build up one’s experiential database) is to look for the patterns and groupings. Biochemically, a rough breakdown of the cell’s building block constituents are carbohydrates, proteins, lipids, nucleic acids, and a bunch of other molecules (co-factors and more), plus ions and water. Could these different molecular groups be synthesized under simpler ‘abiotic’ (or ‘prebiotic’) conditions starting from simpler molecules? The answer is yes, and this is where I would highlight the historical advances made by the prebiotic chemists working in this field – the bottom-up approach to OoL.


But extant living systems only utilize a narrow subset of these molecular groups, while prebiotic syntheses generate a great diversity. Hence a pruning process needs to take place, and the question becomes one of selection. Here is where I would bring up the idea of autocatalysis and how it plays into the growth and pruning of interconnected cycles of reactions. I would need to briefly discuss how a messy diverse milieu of inefficient catalysts would be expected to evolve into a smaller range of more efficient catalysts in such autocatalytic systems. This would also highlight how control and regulation come into play in a related way. One can argue that what drives this is thermodynamics due to the non-equilibrium situation of a potential energy gradient between the sun and the coldness of space, interposed by a suitable planet such as Earth. But I’d want to avoid too much speculation into why life has not arisen on other planets (that we know of) even though students find these ‘alien life’ questions very intriguing.
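The growth-and-pruning idea above can be made concrete with a toy simulation. This is purely my own illustration (the number of species, rate constants, and normalization scheme are all invented assumptions, not a model from the literature): a pool of autocatalysts with different efficiencies compete for a shared feedstock, and the inefficient ones are pruned away.

```python
# Toy sketch of autocatalytic pruning (my own illustration, with
# invented parameters): each species grows autocatalytically at a
# rate proportional to its own concentration (rate ~ k_i * x_i),
# and the pool is renormalized each step to mimic a fixed, shared
# feedstock. The most efficient catalyst comes to dominate.

import random

random.seed(0)

# 20 catalysts with random efficiencies (rate constants)
ks = [random.uniform(0.1, 1.0) for _ in range(20)]
xs = [1.0] * len(ks)  # equal starting concentrations

dt = 0.1
for _ in range(2000):
    # autocatalytic growth: each species amplifies itself
    xs = [x + k * x * dt for x, k in zip(xs, ks)]
    # limited shared feedstock: renormalize total concentration
    total = sum(xs)
    xs = [x / total for x in xs]

best = max(range(len(ks)), key=lambda i: ks[i])
print(f"most efficient catalyst k = {ks[best]:.2f}, "
      f"final share = {xs[best]:.3f}")
survivors = sum(1 for x in xs if x > 0.01)
print(f"species above 1% of the pool: {survivors}")
```

The messy starting diversity collapses onto the best catalyst, which is the sense in which selection prunes a diverse prebiotic milieu, though real proto-metabolic networks would couple many such cycles together rather than run one winner-take-all race.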


That’s a lot to squeeze into an hour if I did a class guest-lecture. I’ve left out many interesting stories. The top two on my list would be (1) the discovery and evolution of ribozymes (and the promise and problems of the RNA World theory), and (2) why carbon is uniquely suited as the skeleton for the molecules of life given the environmental conditions experienced by Planet Earth. Other interesting vignettes would be the role of encapsulation and the formation of simple vesicles, the question of homochirality, the iron-sulfur world theory and connections to hydrothermal vents, origins of the genetic code, and my own interests in proto-metabolism. Our first semester biochemistry lecture only covers a bit of metabolism, and much of the focus is on protein structure and enzymes. So I think discussing prebiotic syntheses followed by (auto)catalysis and its evolution fits well with what students would be seeing and thinking about in class.


A pessimistic view of biochemistry is that it takes the ‘life’ out of biology with its reductionist approach. I have a more optimistic vision – that reductionist studies in biochemical systems can help us solve the mystery of the OoL, keeping in mind their limitations but also using them as a guide to how autocatalytic systems prune out the myriad mess of molecular cousins (generated in a bottom-up prebiotic chemical synthesis approach). And the boundary between the living and the non-living is fuzzy, as cryptobiosis bears witness. As someone who worked in heterogeneous (surface science) catalysis last century, I’ve imbibed the view that the edges are where all the interesting action happens. The origin of life is surely the ultimate question of what takes place at the edges of biology and chemistry and their increasingly large intersection!

Monday, August 1, 2022

Language for Tots

Learning languages was easy when I was a tot. I have three ‘native tongues’. For the most part, they don’t interfere with each other. As an adult, learning a new language has been difficult. I’ve tried two thus far – I estimate I have the competency and speed of a six-year-old kid in both cases – and while my vocabulary is probably a little larger than a typical kindergartener’s, my listening comprehension isn’t great, especially when something is said at great speed. Why, oh why, was it so much easier to learn a language as a tot?

An intriguing answer comes from Terrence Deacon’s The Symbolic Species. In Chapter 4 (“Outside the Brain”), Deacon suggests that language has evolved to best adapt to our brain development at the immature age of toddlers! Previously, if you had asked me why humans have language, I would have said our brains evolved to accommodate language. That’s still partially true, but as Deacon argues, languages evolve faster than brains, so it’s much more likely that once rudimentary language developed, it was the languages doing more of the adapting. I suspect he’s right.


Instead of Chomsky’s view that “the source of prior support for language acquisition must originate from inside the brain”, Deacon argues that the support comes from outside brains and resides in language itself. The analogy he provides is the evolution of desktop microcomputers from command-line DOS-based systems to the graphical, windowed systems we are familiar with in today’s Macs and PCs. Why the adaptation? To be more user-friendly! Who wants to remember lists of commands and read manuals when you can intuitively point-and-click? The same is true for coding. We do object-oriented programming instead of old-school assembly language (which I attempted to teach myself and mostly failed). And modern ‘smart’ phones are designed to be intuitive, although admittedly I fail to intuit some things and it reminds me that I have become an old fogey.


So, how do tots learn language? Certainly not the way I have systematically tried to learn languages as an adult with the help of Duo, the green owl. Deacon argues that “it is discovered, though not by introspection of rules already available in the brain. On the surface, it simply appears that children have an uncanny ability to make ‘lucky guesses’ about grammar and syntax… this appearance of lucky coincidence accurately captures what happens, though it is not luck that is responsible.” Deacon thinks that “language regularities are not just any set of associations… [but] arranged so that [tots’] intuitive guesses are more likely to work.”


If you didn’t buy the above argument, I recommend reading the entire chapter of Deacon’s book. He has more examples and analogies, and I find his arguments compelling. To boil it down: “Languages are under powerful selection pressure to fit children’s likely guesses, because children are the vehicle by which a language gets reproduced. Languages have had to adapt to children’s spontaneous assumptions about communication, learning, social interaction, and even symbolic reference, because children are the only game in town.”


Deacon’s view provides, in my opinion, a better explanation for what previously seemed idiosyncratic about languages. Our computer-designed languages are math-like or machine-like, not like natural languages, which seem organism-like. Therefore, argues Deacon, “the proper tool for analyzing language structure may not be to discover how best to model them as axiomatic rule systems but rather to study them the way we study organism structure: in evolutionary terms. The structure of a language is under intense selection because in its reproduction from generation to generation, it must pass through a narrow bottleneck: children’s minds.”
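The bottleneck idea can be sketched as a toy simulation in the spirit of iterated-learning models. This is my own illustration, not Deacon’s (the mini-language of plural forms, the “-s” rule, and the sample size are all invented assumptions): each generation, a child hears only some of the forms and fills in the rest with its easiest guess, so the language drifts toward what children find guessable.

```python
# A hypothetical toy model of language passing through the bottleneck
# of children's minds (my illustration, with invented assumptions):
# a mini-language of plural forms is transmitted generation to
# generation. Each child hears only some of the forms; unheard forms
# are filled in by the child's easy guess (add "-s"). Irregular forms
# that fail to squeeze through the bottleneck get regularized.

import random

random.seed(1)

nouns = [f"noun{i}" for i in range(10)]
# start fully irregular: each plural has an arbitrary ending
language = {n: n + random.choice(["-en", "-ren", "-im", "-ae"]) for n in nouns}

def transmit(lang, heard=5):
    """One generation: the child hears `heard` random forms, guesses the rest."""
    observed = dict(random.sample(sorted(lang.items()), heard))
    child = {}
    for n in nouns:
        # heard forms are memorized; unheard ones get the regular rule
        child[n] = observed.get(n, n + "-s")
    return child

def regularity(lang):
    """Fraction of forms matching the children's easy guess."""
    return sum(1 for n in nouns if lang[n] == n + "-s") / len(nouns)

history = [regularity(language)]
for generation in range(30):
    language = transmit(language)
    history.append(regularity(language))

print(f"regular forms: {history[0]:.0%} -> {history[-1]:.0%}")
```

No rules are stored in the “child” beyond one cheap guess, yet the language itself evolves to fit that guess, which is the selection-through-learners picture in miniature.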


When I first started learning chemistry, it made no sense to me whatsoever. At some point, something clicked. I can’t explain it. Actually, it was likely a series of gestalt moments separated in time rather than one glorious illumination. When I teach chemistry, I attempt to pre-digest some bits for my students, help nudge them to focus on what I think are the key aspects, and build up their chemical intuition through many examples. How effective is all this? Honestly, I don’t know, although I think I’m getting better at it with practice. There is a method to the madness. Could tots learn chemistry just like they learn language? Unfortunately, there’s no evolutionary advantage to knowing chemistry, and so I expect one has to slog through the hard work. Learning how to read and write. Learning math. Learning chemistry. All these biologically secondary aspects of education won’t materialize as knowledge in our brains in a gestalt swoop. The slog is necessary.


Deacon provides a very interesting vignette about Kanzi, a bonobo who demonstrated significant ability to understand human language – not just at an associative level but by manipulating (lexigram) symbols. But what made Kanzi so adept might be the early exposure as a bonobo-tot literally hanging around. The researchers were trying to teach Kanzi’s mother without much success, and were amazed that when Kanzi was older (and able to focus without being so easily distracted), he showed tremendous abilities. Deacon thinks there’s something about the immature still-developing brain that is particularly amenable to language-learning. I can’t tell you what it is yet, because I’m only a third of the way through Deacon’s book, but forgetting might also be important in learning. Parts of the book are a slog but there are golden nuggets! And it’s making me think a lot about the brain, learning, and teaching!