Friday, June 25, 2021

Cryptobiosis

“A physiological state in which metabolic activity is reduced to an undetectable level without disappearing altogether.” – the definition of cryptobiosis from Oxford Languages.

 

Several weeks ago, Current Biology published a paper on the revival of 24,000-year-old microscopic animals called rotifers from the Siberian permafrost. This seeming resurrection from the dead was first observed by the famed microscopist van Leeuwenhoek, way back in 1701. His sample of rotifers came from reddish gutter water collected at his house. Intriguingly, the organisms shrank in size as they dried out. The gutter water turned to dust in the hot, dry summer. Water was added and behold – the rotifers came back to life!

 


The rotifer vignette is one of several stories of organisms perched near the fuzzy boundary between life and death in Carl Zimmer’s new book, appropriately named Life’s Edge. Forty years after van Leeuwenhoek’s discovery, the famed naturalist Needham found that nematode worms could survive several years in dormancy. Then came tardigrades, sometimes known as water bears, extensively studied by modern-day scientists and known to survive the vacuum of outer space, temperatures close to zero kelvin, and being hit by speeding bullets.

 

Water resurrects these creatures in the twilight zone.

 

How do they do it? According to Zimmer, some species produce a sugar: “thanks to its chemical structure, trehalose can help proteins keep their proper shape, much like water does.” Others “make a new batch of proteins that link together to form a kind of biological glass. It entombs the cell’s DNA and other molecules in their three-dimensional form, so that they’re ready to revive when water returns.”

 

While larger creatures have not been classified as cryptobiots – we can detect their unusually low metabolic levels – many of those in colder climates use a hibernation strategy when winter arrives. Bears do it. Bats do it. Bees do it. Even one species of bird does it. (Most others migrate.) Trees do it too. Could humans do it? Maybe. If we tipped into another ice age. And if you believe the situation in Early Riser is possible.

 

In his short book Origins of Life, Freeman Dyson proposes a simple toy model of the transition between the living and the dead. The two states are ‘attractors’ in complexity-theory parlance, and the hilltop between the two is an unstable transition state. With simple assumptions and parameters, Dyson calculates that a population of 2,000-20,000 molecules containing 6-8 monomeric species (that also serve as catalysts), with a discrimination factor of 60-100 (modern enzymes are a hundred-fold more discriminating), allows for moving from being dead to alive.
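Dyson’s picture of two attractors separated by an unstable hilltop can be caricatured in a few lines of code. The sketch below is my own toy-of-a-toy, not Dyson’s actual equations: each ‘monomer’ becomes active with a probability that rises sigmoidally with the current active fraction, giving a ‘dead’ attractor, an ‘alive’ attractor, and a barrier at the halfway point. The population size and parameter values are arbitrary illustrations.

```python
import random

def p_active(x, floor=0.15, gain=0.70):
    """Probability that a monomer is active in the next generation, given the
    current active fraction x. The sigmoidal feedback (active monomers catalyze
    activation) creates two attractors: a 'dead' state near x ≈ 0.19 and an
    'alive' state near x ≈ 0.81, separated by an unstable hilltop at x = 0.5.
    Parameters are illustrative, not Dyson's."""
    s = x * x / (x * x + (1.0 - x) * (1.0 - x))
    return floor + gain * s

def first_passage(n_pop, target=0.7, max_gens=20000, seed=0):
    """Generations until a population starting in the 'dead' basin first
    reaches the 'alive' basin, or None if it never gets there."""
    rng = random.Random(seed)
    active = 0
    for gen in range(max_gens):
        p = p_active(active / n_pop)
        active = sum(1 for _ in range(n_pop) if rng.random() < p)
        if active / n_pop >= target:
            return gen
    return None

if __name__ == "__main__":
    print("N = 10 :", first_passage(10))                  # noisy enough to jump the barrier
    print("N = 400:", first_passage(400, max_gens=3000))  # stays in the dead basin
```

The small population eventually fluctuates across the barrier while the large one never does within the allotted generations – a cartoon of Dyson’s point that sloppiness (here, statistical noise) is exactly what makes the jump possible.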

 


The caveat? In Dyson’s words: “The basic reason for the success of the model is its ability to tolerate high [reproductive] error rates. The model overcomes the catastrophe by abandoning exact replication. It neither needs nor achieves precise control of its molecular structure. It is this lack of precision that allows… [a] jump into an ordered state without invoking a miracle.” This also means it’s easy to die without invoking a miracle. By changing the parameters in line with evolutionary processes improving the fidelity of replication, Dyson’s toy model shows that life is more easily preserved, but resurrection becomes much, much more difficult.

 

As someone focusing on the chemical origins of metabolism, I find Dyson’s model intriguing. Coincidentally I’ve been considering numbers not too different from his (based on thermodynamic and kinetic data), at least in my own small and restricted model – which is nevertheless much bigger than Dyson’s and requires orders-of-magnitude more computer time. The idea of treating such a chemical system as cryptobiotic is a possible framework. And water may play a key role, as we’re seeing in recent wet-dry cycle origin-of-life experiments. Maybe I’m in the business of resurrection after all – or perhaps just resuscitation.

Monday, June 21, 2021

The Quick Fix

Life hacks. Self-improvement. Quintessential American traits. Makes for good clickbait: Transform your life through this one simple trick!

 

Also, good fodder for TED talks. In the early days, some of these might have featured substance over showmanship. I no longer watch them. Hyped-up chaff has become dominant; thin on substance and thick on sales.

 


Jesse Singal takes on several of these hyped topics in his new book The Quick Fix, appropriately subtitled “Why Fad Psychology Can’t Cure Our Social Ills”. His book is complementary to Stuart Ritchie’s Science Fictions (here’s my post on it), which focuses more on the statistics. While Singal also points out the statistical problems, he focuses instead on why we’re so easily drawn into such fads and the history behind their meteoric rise. Topics include the self-esteem movement, super-predators, and power posing, among others. The one most closely related to education – and the one we’re still in the grip of – is Grit, the topic of today’s blog post.

 

I was excited about grit when it burst onto the stage, championed by Angela Duckworth, who has an interesting backstory. I viewed her TED talk, and even before I read her book, I had started to read the primary literature and felt positively disposed towards the idea. But over the years, as I’ve followed the primary literature, I’ve become much less sanguine about the effectiveness of so-called grit interventions, and even about grit’s usefulness as a construct. For example, Conscientiousness (from the OCEAN big-five personality traits) seems to correlate better with some measures of student “success”, but there’s still much to quibble about even in the larger meta-studies.

 

Singal opens his chapter on grit with the following: “Grit is everywhere. By the time you read this, it will have been a golden child of the world of education for well over a decade. It’s a sexy, appealing idea: grit predicts success, grit can be measured, and grit can be improved.” Given my own prior reading of the primary literature, I didn’t learn much that was new from Singal’s take, although I did appreciate his historical narrative and the personalities involved – something one doesn’t quite get from reading the primary literature.

 

Grit is appealing to me as an instructor. I teach chemistry. It’s hard. Students both complain and acknowledge this. Don’t expect to breeze through the class. You have to persevere and put in the hard work. Even if you do, you might not be successful. But if you don’t, you’re unlikely to do well unless you’re some sort of super-genius. Now I’ve had students breeze through first-semester G-Chem if they’ve had a strong AP Chem or Honors Chem class in high school. But that’s because much of the material is a repeat for them. Second-semester G-Chem is a little harder, but if the students have seen the material before, they still do well – but they actually have to put in some work. Not a single student has told me that P-Chem was easy in my twenty years of teaching it. I suspect every single one of those who got A’s (and there aren’t many) will say they spent many hours studying and working on the problem sets. So would those who got B’s and C’s, for that matter.

 

The problem with Duckworth’s book (Grit: The Power of Passion and Perseverance), according to Singal, is that the vignettes are happy stories of the winners: “… we don’t really hear anything about hardworking, gritty, resilient people who don’t get as far as they would like to, or who fail spectacularly; the losers are nowhere to be seen.” It’s what I see in P-Chem. Perseverance may be necessary. But it isn’t sufficient. Whether or not one is ultimately seen as “successful” depends on a whole range of other factors, some of which are structural, some of which are not under one’s control (be it student or instructor). Grit can get you out of the hole in some circumstances, but not others.

 

A common thread in the psychology fads that Singal writes about is that there is an element of truth in all of them. That’s why they resonate. That’s why one can make up a plausible explanation of why they work, at least in some cases (usually in simplified “laboratory” conditions removed from the real world). The most clickbait-y ones are low-hanging fruit. Simple life hacks. But they over-promise. A small life hack will likely not lead to a large transformation, at least in most cases. Yes, there will be some outliers, and the positive outliers make great stories. For the average person, a simple optimization could lead to a small improvement. And after you’ve done the small optimizations, you hit a dead end. Unless you’re willing to “disrupt” and make large changes. And there’s always someone willing to sell you something to make that transition easier – technology in education is one I’ve been thinking about lately.

 

The truth is that many things are outside our control. Larger structures and systems are not easy to change. When you run up against them, you feel stuck. Like you’re banging your head against the wall. All your passion and perseverance may lead to nought, not to mention your ingenuity, creativity, and whatever other positive trait is celebrated as being the savior. The quick fix doesn’t get you very far. Worse, if you are one of the haves in society who likes the idea of grit, you might see the have-nots as lazy and not persevering enough. That’s the dark side of meritocracy.

 

Singal closes his book by discussing priming and nudging – also popular fads in education today. If you’re interested in these topics, and more, I recommend his very readable book, which has an extensive index for those who want to delve into the primary literature. His take on finishing the book in the midst of a pandemic is interesting, and I’ll quote him in closing.

 

When the virus arrived in the United States, Americans’ choices were, as always, defined by big, complicated structures of power and wealth. Some Americans in the pandemic’s epicenters were forced to choose between financial ruin and continuing to work low-wage jobs in which they faced infection, while others were able to make a fairly seamless shift to working remotely. It would be impossible to overstate the significance of these divergences: no, having money didn’t render anyone immune from the virus, but overall one’s chance of riding this pandemic out safely and in relative comfort had everything to do with the resources at one’s disposal, which, as usual, meant that shocking racial disparities soon emerged. Structural forces went a long way toward dictating who lived, who died, who struggled, and who was relatively unaffected. They always do.

Thursday, June 17, 2021

Models and Metaphors

I’m reading two books about models and metaphors in science. One is straightforward with many interesting vignettes, aimed at highlighting the role of models in describing abstract concepts in science. The other is difficult, aimed at telling a single story, and uses abstraction to conceptualize what scientists do when creating a model. Both highlight the role of metaphor, in different ways, to parse inherent problems and uncertainties in science and our understanding of the natural world.

 


The first book, published in 2003, is Making Truth: Metaphor in Science by Theodore Brown, a retired chemistry professor at the University of Illinois (Urbana-Champaign). His name is familiar to many students from his chemistry textbook, now in its fourteenth edition. You don’t have to be a chemist to enjoy and learn from Making Truth. As an educator, Brown builds his case step-by-step with many examples. As a chemistry instructor, I particularly enjoyed the two chapters devoted to the concept of the atom, both classical and modern.

 

Atomic theory is the foundation of chemistry. This seems obvious to students today, steeped in images and metaphors that assume all matter is made up of tiny entities called atoms, and that molecules (combinations of two or more atoms) give rise to all the interesting stuff you can see and touch, and some that you can’t see. Atoms are tiny, tiny, tiny. Nanoscopic tiny. And they’re actually rather strange. My first-semester general chemistry course has been themed around the idea of making visible the invisible; discovering the molecular basis for matter in its unity and diversity!

 

I am familiar with most of the examples used by Brown, several of which also make an appearance in my classes. I emphasize the use of models and their limitations, and the important role they play. We use various visual aids, and discuss their pros and cons. Brown also talks about these. But I have not emphasized the metaphors that I use except when I tell the Happy Atoms story. More recently I’ve also used teleportation (Apparition in Harry Potter) to describe electrons changing energy levels in an atom. Brown’s book reminded me to pay attention to metaphors; especially since a metaphor that’s clear to me may not be as clear to my students, a generation removed from my own touchstones and experiences. I’ve now encountered students who’ve never read the Harry Potter books or seen the movies, believe it or not, and would not understand why my blog is called Potions for Muggles. (This proved interesting in one class session.)

 

Brown devotes a chapter to molecular models in chemistry and biology. The story of Watson and Crick’s elucidation of DNA structure is the crowning vignette; no surprises there. But what I enjoyed most from that chapter was the story of van’t Hoff trying to make sense of optical activity, and coming up with the idea of a tetrahedral carbon. Optical isomerism isn’t usually in our G-Chem syllabus although I’ve included it more often in recent years. Instead the tetrahedral shape is discussed through VSEPR theory. I like the van’t Hoff story, and will try to incorporate it into my G-Chem classes next semester. Brown also reminded me of the power of visuals. While I already use such visuals in my classes, I feel motivated to think more carefully about how I can include more interesting and relevant images, in the context of the metaphors I use, when discussing chemistry.

 

There’s a chapter on the protein folding problem (how the ‘primary’ polypeptide structure turns into its ‘tertiary’ functional structure) and its associated models and metaphors. Brown uses the metaphor of language, employing the distinction between syntax and semantics, something I’ve been thinking about recently. If you misspell a word, sometimes you change the entire meaning of a sentence, and other times the reader recognizes it as a typo and the meaning is unchanged. Using this example, Brown gets to the crux of the matter, and I’ll quote him here.

 

The gestalt that consists of the complex of associations and ideas that make up our understanding and use of the written language maps onto the molecular domain of protein sequence. Notice that this mapping does not involve a directly emergent physical experience but rather a human artifact in the social domain. This is an early example of an important and interesting aspect of metaphors in science: As the scientist attempts to understand systems of increasing complexity, metaphors based solely on embodied physical experiences no longer suffice.

 

Brown goes on to illuminate the metaphors we use regularly in our classrooms, sometimes not giving a second thought to our use of them. Energy is one of those nebulous things; we use metaphors such as the waterfall with its attendant directionality of a downward flow. Higher energy is UP. Lower energy is DOWN. Like Hermione, I’ve even had an epiphany about Pipes, although my metaphor was linked to a research problem rather than to finding a fantastic beast. Brown examines the word ‘folding’ as a metaphor that “evokes the notion of bringing into contact various parts of the object, as in folding clothing or card table chairs.” My example would be origami. Extending this image to ‘solving’ the Levinthal Paradox, creasing or pre-folding your origami is key to obtaining the end goal of a beautiful intricate structure.

 

But let’s not forget Brown’s point that the metaphor is a mapping that is also an artifact. He splits the physical domain from the human social domain, but how one imposes this artificial separation is a conundrum. That’s why we have trouble answering the question “What is Life?” as my students have encountered.

 


This brings us to the second book, Life Itself by Robert Rosen, published in 1991. Rosen, a theoretical biologist, was a student of Nicholas Rashevsky at the University of Chicago. Both names are likely unfamiliar to most biologists. The book’s subtitle, A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life, sounds grandiose. Rosen will attempt to answer the question “What is Life?” by distinguishing machines and organisms. It’s a very challenging book and not for the faint-of-heart. Background knowledge is assumed, and I especially struggled through the mathematical abstractions. Who would have thought that set theory and algebraic topology should be employed for such an endeavor?

 

I will honestly say that I don’t understand large chunks of the book. I have a glimpse of where Rosen is going; I suspect he’s on the right track; I can partially follow the arguments made; but there’s this feeling of swirling in a fog as I grope my way through, partially blind. Perhaps this is how my students feel in P-Chem. Rosen, now deceased, isn’t available to answer my questions, and he doesn’t have any disciples that I know of – although I have alluded to Mikulecky’s work, and to Rosen’s follow-up book. I’ve also mentioned Noble’s concept of Biological Relativity, that no level in biology is privileged when discussing causation – Rosen and Rashevsky’s version is called Relational Biology. But where Noble stays with qualitative examples that illustrate the problems with reductionism in biology, Rosen delves deep into the source of reductionism. And it has to do with models and metaphors.

 

If I had any reasonable grasp of Rosen’s work, I would try to explain it, but I don’t – and so you’ll have to make do with some quotes from the book and some ill-defined rambling from me. In the first half of the book, Rosen defines something called the Modeling Relation (see diagram below). In a nutshell: The real world is complex. To discover something about nature, we simplify by utilizing a formal system. This requires two additional steps: encoding and decoding, loosely associated with the activities of observation and prediction in science. And if process 1 commutes with the sum of processes 2+3+4, we might have a reasonably good model for what is going on in nature. I’m significantly oversimplifying things here.
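The commuting diagram can itself be caricatured in code – with the obvious caveat that in a toy like this we get to write nature’s rule down ourselves, which is exactly what we can’t do in reality. The cooling-cup example and all the names below are mine, not Rosen’s: process 1 is the natural system evolving on its own; processes 2, 3, and 4 are encoding, formal inference, and decoding; and the model is good insofar as the two paths agree.

```python
import math

# A cartoon of Rosen's Modeling Relation (example and names are mine, not Rosen's).

# Process 1: causal entailment in the natural system -- here, a cooling cup of
# coffee. In this toy we must write nature's rule explicitly; in reality this
# process is inaccessible except through measurement.
def natural_process(temp, ambient=20.0, k=0.1, dt=1.0):
    return ambient + (temp - ambient) * math.exp(-k * dt)

# Process 2: encoding -- observation, with finite instrument precision.
def encode(temp):
    return round(temp, 1)  # a thermometer that reads to 0.1 degrees

# Process 3: inferential entailment in the formal system -- Newton's law of cooling.
def infer(measured, ambient=20.0, k=0.1, dt=1.0):
    return ambient + (measured - ambient) * math.exp(-k * dt)

# Process 4: decoding -- reading the formal result back out as a prediction.
def decode(value):
    return value

temp0 = 87.34
via_nature = natural_process(temp0)        # path 1
via_model = decode(infer(encode(temp0)))   # paths 2 + 3 + 4
# The model is 'good' if the two paths commute to within measurement error:
assert abs(via_nature - via_model) < 0.1
```

The assertion at the end is the commutation check; when it fails, we don’t have a model, just a formal system and a wish.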

 


Here’s a quote from Rosen on models and their role in causation (what he calls entailment).

 

Modeling… is the art of bringing entailment structures into congruence. It is indeed an art, just as surely as poetry, music, and painting are. Indeed, the essence of art is that, at root, it rests on the unentailed, on the intuitive leap. I have stressed repeatedly that the encodings and decodings on which modeling relations depend are themselves unentailed. Thus theoretical scientists must be more artist than craftsmen; Natural Law assures them only that their art is not in vain, but it in itself provides not the slightest clue how to go about it.

 

Um, that’s reassuring. But Rosen has good reasons for this. So let me quote him some more.

 

Modeling relations can be thought of as transductions, in which one kind of entailment can be replaced by, or converted into, another, in an invariant way. We convert causal entailments to inferential ones for a very simple and basic reason; we can hope to understand what goes on in a formal system. The concrete manifestation of such understanding lies mainly in our capacity to predict and (though this is a quite different order of question) perhaps to control. It would be nice if [we] could pull the modeling process itself inside a formal system, where we can see it whole. We cannot do this directly, but we can do it metaphorically…

 

In probing the natural world, we are rather limited (yes, even with state-of-the-art technology) in what we can actually pull out as actionable data, i.e., Step 2 is very difficult and often incomplete. Nevertheless, we try. Then using some inferential rules (Step 3), we eventually use Metaphor to decode (Step 4). In a sense, science is always speaking in metaphors. Brown offers plenty of examples of how we probed the atom and expressed our ‘conclusions’ in metaphors.

 

Rosen will subject his own framework to intense scrutiny, at least that’s what it feels like to me, the foggy reader. He will carefully define a simulation and how that differs from a model. There will be a distinction between analytic and synthetic models, the former deriving from (mathematical) Cartesian products (and not easy to decompose into its parts) and the latter from direct summands, which correspond to the reductionist approach in science championed by physics. Reductionism is very useful, and Rosen acknowledges this. But it is limiting. Much too limiting when it comes to the wider world of biology. Synthetic models can give us machines with hardware and software, but they can’t give us organisms. Rosen has a solution, but I don’t understand it well enough to summarize.

 

With reductionism, one is stuck with the “Cartesian metaphor of organism as machine”, at least formally, and with as much rigor as the physicist can muster. When we talk about the limits of reductionism and hint at emergence, it’s mostly hand-waving in the fog. Rosen’s goal is to give it rigor through abstraction in the world of mathematics and graph theory. Since I’m still in the fog, I’ll simply quote Rosen again on the (philosophical) Cartesian metaphor.

 

It succeeds in likening organisms to machines, to the extent that both classes of systems admit relational descriptions. But beyond that, it is fundamentally incorrect; it inherently inverts the notions of what is general and what is special. On balance, [it] has proved to be a good idea. Ideas do not have to be correct to be good; it is only necessary that, if they do fail, they do so in an interesting way.

 

Rosen will use the same analogy as Brown, comparing the machine-organism distinction to the syntax-semantic distinction. I’ll leave this to the astute reader of his book to figure out how he does it. In fact, maddeningly, Rosen does this all over the book with “I leave it to the reader…” statements where something is clearly obvious to him and so he skips over steps. I try very, very hard not to do this when teaching P-Chem. Like Brown, Rosen also uses protein folding as an example, although writing in 1991 he is not sanguine about the prospects of solving the problem. Interestingly, in 2021 we’ve gotten much better at the prediction part of the problem, but with a significant loss in understanding, as the number-crunching parts have moved to an A.I. black box. We don’t know the details of Step 3 any longer. What does that tell us about Step 1? I’m no longer sure.

 

Why do proteins fold into their specific structures to carry out their specific functions? Rosen uses the metaphor of a scaffold.

 

Sequence pertains to scaffolding… held together, not by any direct intersymbol bonds, but by being suspended in a larger structure. Conversely, any larger structure that maintains their configuration would create the sequence; its exact nature, its chemistry, if you will, is otherwise irrelevant. If we perchance interpret the elements of such configurations to be… chemical groups… then such scaffolded configurations may themselves act like conventional chemical species. If so, they are in fact much more general than conventional molecules… only “exist” when scaffolded together… If the scaffolding as a whole is perturbed, or disrupted, they disappear, they cease to exist, they denature. But they do not “decompose” in any conventional sense, and they reappear when the scaffolding is restored.

 

This is a metaphor I find very helpful as I’m puzzling over my present origin-of-life projects. I might even come up with a model to go with the metaphor. Perhaps that will help concretize my foggy understanding of this whole business. It’s no wonder many folks think science is difficult. Rosen certainly hasn’t made it any easier. But he might have made it more profound.

Sunday, June 13, 2021

Assumptions and Beliefs

I’ve been exploring the intersection of education and technology through Neil Selwyn’s book in my two previous posts. The conclusion? Selwyn writes: “… the claims made for education technologies are highly symbolic and often ideologically driven in nature… by people’s wider beliefs, values and agendas… ‘educational technology’ is used as a site for wider debates, contests and struggles over education.” I’m inclined to agree with this sentiment.

 

What are some of these assumptions and beliefs? Selwyn lists several, and I will comment briefly on each of them.

 

On the interplay of technology and learning:

 

·      Valuing “individual-driven learning” over “institutional-directed instruction”: Constructivist theories of education have seen a resurgence in recent years, for both good and ill, in my opinion. Yes, I agree that learning is not just moving chunks of content from teacher to student, and that something both subtle and mysterious happens in each individual mind-brain. But I’m not sure that individuals are necessarily the best drivers of their education especially at earlier stages in life, or in introductory-level courses.

 

·      Valuing “exploration and experimentation” over “pre-determined instruction”: We should continue to explore and experiment in education. But over the years we’ve learned a lot about what works (and what doesn’t) when it comes to how humans learn. The latest educational fads are often old ideas wrapped in new clothing. Perhaps I’m just conservative, but I think that education is not ripe for disruption in a major overhaul, at least where learning is concerned. There are other wider societal issues that may reasonably argue for massive changes in the structures of education, but human brains haven’t evolved so quickly as to significantly change what works in helping individuals learn new things. What works may be different depending on what you’re learning, but what works is well established.

 

·      Valuing “social and communal” learning environments: I’m in favor of this shift, because I think there is great value in students learning from each other in a wider social space. However, I think we should be cognizant not to idolize this approach. My modus operandi is to utilize different pedagogical approaches depending on what we’re going to learn in a particular class meeting. And even within a session, several approaches may be employed. Outside of class, I highly encourage students to work together on problem sets, especially in P-Chem; during class, it often works better for the instructor to explain things and work through examples.

 

On the relevance of teachers, as valuing the authority of expertise decreases: I suppose much depends on the subject material and the level at which it is being learned or taught. If I want to learn something well, and efficiently, having someone with expertise as a teacher is extremely helpful. So I think teachers will continue to be relevant. What might change is the formal teacher-student relationship; it may become less formal, it may become more asynchronous, and I certainly hope it won’t be replaced by a bot – which I think will deepen disparities between the haves and have-nots.

 

On the relevance of schools, valuing the “efficiency” of markets over government: It’s hard for me to thread my way through this murky debate. I simply don’t have enough information to have a strong opinion leaning one way or the other. Perhaps because I’m based in the diverse and messy U.S. educational landscape, I see value in having a variety of options. Before coming to the U.S., I was in a government-only education system and had little notion that a private sector existed, not counting the problematic private test-prep market which exists whenever standardized exams of some import are a factor.

 

Like many other hot topics, this one has a small number of loud voices and a mostly silent majority. Selwyn writes that “few people are overly concerned with the topic of education and technology beyond a vague notion that digital tools and applications are ‘desirable’ and ‘probably a good thing’.” That’s a problem, especially when assumptions and beliefs are not explored and debated carefully and proactively. If all we’re doing is responding reactively, it will be difficult to escape the Groundhog Day cycle we seem to be stuck in.

Monday, June 7, 2021

Atomizing Knowledge

Paper gold stars, pinned next to your name, signaling your achievement at a task, may have constituted your first encounter in kindergarten with a leaderboard. It kept track of your successes, directed you towards the next task, and perhaps gave you bragging rights within your local community. That gold star may or may not mean much to the rival kindergarten down the street.

 

The present-day high-tech equivalent? Digital badges. Micro-credentials. And if you string enough of these gold stars together, you might be awarded a nano-degree. At a scale of 10⁻⁹, you’d need a billion of these to reach a degree. We’re reaching the atomic scale of knowledge, broken down into its bits and bites. And to store it somewhere, we’ll need bytes.

 

While the culture of assessment is certainly a contributor to this trend, and the reductionism of ‘being scientific’ plays its part, my post today is to muse about how educational technology has influenced the atomization of knowledge. Some of this is discussed more eruditely in Neil Selwyn’s Education and Technology that I’ve been reading. I highly recommend his balanced and thoughtful book to the reader interested in pursuing these topics. But on to my musing.

 

Atomizing knowledge in ‘factory’-like schools was seen as a good thing at the tail-end of the industrial revolution in the early twentieth century. Instead of patchwork and varied curricula of one-room schoolhouses, the ‘modern’ school was considered standardized and efficient, just like its factory counterpart. In the present-day, the factory-like aspects are a routine punching bag for pundits of education, technology, and of course, politicians. Technology is heralded as savior, breaking the strictures of school, and having the power to unleash your creativity through freedom of exploration. Personalized education is the new watchword – tailored, crafted – God forbid it be industrially produced.

 

Teachers are often blamed for their ‘resistance’ to this evolution. Or revolution. They should move away from being ‘sage on the stage’ to ‘guide by the side’ or perhaps even ‘peer at the rear’. No, I didn’t come up with that last one on my own. Selwyn mentions it in his book, which suggests it has some widespread use. I’d never heard it before and hope it doesn’t perpetuate. I realize my blogging about it seems antithetical to my hope. Such is the power of sticky, funny-sounding, buzz-phrases.

 

But teachers might have good reason to be suspicious of the technology that claims to assist them but is not-so-secretly attempting to replace them, despite protestations of the entrepreneurs seeking the killer app holy grail of education. Some of us educators are being recruited in that effort. Sometimes there is the offer to elevate myself from an expert to a rock-star expert. Other times, it seeks my ‘valuable’ contribution to the Robot, the Singularity of A.I. For educational purposes, of course. I received yet another e-mail this morning about such an opportunity. In the past, I used to send them directly to spam without looking. But nowadays I take a brief glance to see what the public face of the edtech startup is purporting to deliver to its clients. Then I send it to spam.

 

How do you train a robot to do a human’s job with precision and efficiency? By atomizing the tasks. Since I’m in the education business, it’s the atomization of knowledge. That’s the basis of how educational adaptive systems work, at least in the present paradigm. (I’m not smart enough to predict the future paradigm.) There is learning and cognitive science to back up some of the A.I. approaches. I personally find Cognitive Load Theory to be a useful framework when designing my course and its activities. There are many helpful practices we’ve learned that help make things stick for students. Duolingo even sent me a message earlier this year explaining its ‘techniques’ – for example, why it occasionally uses funny and amusing phrases.
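One of those stick-making practices is spaced repetition: review an item just before you would otherwise forget it, at lengthening intervals. As a toy illustration (my own sketch of the classic Leitner-box scheme, not the actual algorithm of Duolingo or any other app, and the intervals are made-up values), here is how simple such a scheduler can be:

```python
from collections import defaultdict

# Toy Leitner-box spaced-repetition scheduler.
# Cards move up a box when answered correctly and fall back to box 1
# when missed; higher boxes are reviewed at longer intervals.
# The interval values below are illustrative, not from any real product.
REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

class LeitnerDeck:
    def __init__(self):
        self.box = defaultdict(lambda: 1)  # card -> current box number

    def record_answer(self, card, correct):
        if correct:
            self.box[card] = min(self.box[card] + 1, 5)  # promote, cap at box 5
        else:
            self.box[card] = 1  # a miss sends the card back to daily review

    def next_review_in_days(self, card):
        return REVIEW_INTERVAL_DAYS[self.box[card]]

deck = LeitnerDeck()
deck.record_answer("le chat", True)
deck.record_answer("le chat", True)
print(deck.next_review_in_days("le chat"))  # box 3 -> review in 7 days
```

The atomization is right there in the data structure: each ‘card’ is an isolated fact whose state is a single box number. That is what makes the scheduling tractable, and also what makes it feel factory-like.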

 

When you’re teaching something that’s new and challenging – chemistry, for example! – it really helps to break things down into bite-sized pieces. Some of them need to be pre-digested. Others need enough texture for students to chew on for a while. I am in the business of atomizing knowledge, and in my case the pun is clearly intended, because conceptual chemistry works at the scale of atoms and molecules. This reductionist, analytical approach is useful, but I pair it with a building-up, synthetic approach that is not so easy to describe. I’m not just hedging. There’s a good reason for this. I think learning is a complex process, not merely a complicated one, which means it cannot be reduced to the sum of its parts. The parts are the things most easily measured. It’s the ‘science’, if you will.

 

How does the Robot adaptively figure out when you’ve learned something and award you the digital badge? By asking questions. If you answer them ‘correctly’, it assumes you have learned. Those correct answers come from programmers codifying the ideas of subject-matter experts, running the result past test users (students), and analyzing the data. Rinse. Repeat. This is how the machine learns. A beta release then codifies some of the ‘best practices’, subject to tweaking as more machine-learning data comes in. Sounds like a factory operation to me. Tailored and crafted to what end? A standardized credential that can be exchanged for other factory goods. Gold-star trading.
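The decision rule behind the badge can be sketched with Bayesian Knowledge Tracing, one common model underlying adaptive systems. This is my illustrative sketch, not the code of any particular product, and every parameter value here is made up for the example:

```python
# Toy Bayesian Knowledge Tracing (BKT): estimate the probability a skill
# is "mastered" from a sequence of right/wrong answers.
# All parameter values are illustrative, not from any real system.

def update_mastery(p_known, correct,
                   p_slip=0.1,     # chance a student who knows it answers wrong
                   p_guess=0.2,    # chance a student who doesn't know it answers right
                   p_learn=0.15):  # chance of learning during each practice item
    """Return the updated probability that the skill is mastered."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Account for learning that happens during the practice itself.
    return posterior + (1 - posterior) * p_learn

# The badge is awarded once estimated mastery crosses some threshold.
p = 0.3  # prior belief before any questions are asked
for answer in [True, True, False, True, True]:
    p = update_mastery(p, answer)
print(p > 0.95)  # four rights and one wrong push mastery past 95% here
```

Note what the model reduces a student to: a single probability per atomized skill, nudged up or down by each answer. That is the precision the reports want, and also exactly the reduction I’m wary of.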

 

I won’t pretend to know what my students have learned with the precision of a machine. What I can offer them is an ongoing conversation. I do ask them questions and I try to elicit responses. Some knowledge will likely be passed along. Hopefully their thinking will be challenged and expanded. And ideally, good decisions will be made in their lives based on wisdom derived from knowledge and understanding. None of this will pass muster in the assessment reports. For those, they’re looking for the atomization of skills. Not even knowledge.

Friday, June 4, 2021

Edutech Groundhog Day

A year of remote teaching motivated me to think more about the relationship between education and technology. So what do I do when I want to learn about something? I start by reading a book. In this instance, it was Neil Selwyn’s Education and Technology: Key Issues and Debates (2nd edition, 2017). Today I will focus on Chapter 3, “A Short History of Education and Technology”, but let me first set the stage with some Chapter 2 quotes that advise caution in our thinking – to avoid either adoring or vilifying today’s push for digital technology in education.

 


Anyone who is studying education and technology therefore needs to steer clear of assuming that digital technology has the ability to change things for the better. History reminds us that technical fixes tend to produce uneven results, very rarely resulting in similar outcomes across the population and often just replacing one social problem with another. Even when [it] is seen to ‘work’, it can be difficult to ascertain why… Often, [it] will only deal with the surface manifestations of a problem rather than its roots… In particular, some of the most misleading assumptions about education and technology are the deterministic claims of technologies possessing inherent qualities and being capable of having predictable ‘impacts’ or ‘effects’ on students, teachers and educational institutions if used in a correct manner.

 

Selwyn goes on to state the dangers of technological determinism and simplistic ‘cause-and-effect’ instrumentalist viewpoints. Not only is it not just a matter of figuring out “the impediments that are delaying the march of technological progress”, but worse, that framing blinkers one’s view and reduces adaptability to the ensuing unpredictable outcomes. Selwyn briefly discusses the assumption (prevalent from 1980 to 2000) that computers, and subsequently e-mail, would lead to a paperless office. In fact, paper use increased. In surveying his brief history of education and technology, Selwyn approaches the subject by looking at how society shapes and uses technology, rather than marching through each development as an upward evolutionary arc of ever-increasing technology.

 

Chapter 3 focuses on four technologies of the 20th century: film, radio, TV, and the (micro)computer. Unlike the staying power of the textbook (17th century) and the chalkboard (19th century), these four have had mixed success in sustained widespread use. I will provide just brief highlights along with some of my thoughts on each of these, and if you’re interested in the details, I highly recommend Selwyn’s very readable and thoughtful book for the full story.

 

The famous inventor Thomas Edison was a champion of educational film in the early 1900s. He thought it would displace textbooks and completely revolutionize the educational system. He put in money and resources, commissioning educational films conveying the wonders of science and the natural world. There was plenty of enthusiasm for this venture, and historians looking back found that “early ‘experimental’ studies… found that groups of students using film were ‘greatly superior in learning information and concepts’ when compared to students using traditional methods.” Does this sound familiar? The same language is adopted today: study X has shown result Y, which enhances student learning of Z, which is then extrapolated into a general methodology for learning anything and everything.

 

But then the use of that technology declines over time. Subsequent studies question the positive results from the earlier studies. Enthusiasm is curbed. Blame is assigned. The costs were too high. The implementation was wrong. Teachers were Luddites resisting the new technology. It didn’t fit with lesson plans and other institutional requirements. The list goes on.

 

In the mid-1900s, Groundhog Day replays the situation, but now the new technology is radio. I didn’t know much about this and found it interesting to learn that World Radio University (established in 1937) “broadcasted classes in 24 languages to 31 countries”, and that there were many “Schools of the Air” offering supplemental instruction, reaching over a million students in the U.S. in their heyday. Rise and fall. Repeat. Radio was supplanted by the next emerging technology: television. Enthusiasm reached new heights. What could be better than a medium so engaging, with access to the very best teachers, surpassing anything you could experience in your local school classroom? Sounds like a MOOC to me. Why, we should bring it into the classroom! And in American Samoa, 80% of students were “spending between one-quarter and one-third of their class time watching televised lessons, which were then supplemented by follow-up exercises and question periods led by teachers. Similar ‘immersive’ projects in US states suggested that television-viewing students could improve their position in league tables of test scores when compared to national norms.” Active-learning flipped classroom, anyone?

 

And then there’s the computer. Selwyn describes a 1966 study where students from a ‘deprived’ school found themselves immersed in “computer terminal, light pens and screens to teach reading and arithmetic” and apparently loved it. New studies trumpeted the successes of computer-assisted learning to reduce the drop-out rate, not to mention freeing up teachers to do other important things, or even giving them the axe and saving money on manpower. Quoting a 1985 study by Stonier and Conlin: “Not the least of the successes was the testimony of a girl who stated that the computer was the first math teacher who had never yelled at her.”

 

When I was going to the equivalent of grade school in the 1980s, computer-assisted education had not permeated my country’s education system. I did find computers fascinating, not least because you could play cool games. I would visit friends who had home computers (Apple II clones were in vogue), and I borrowed books from the library to teach myself BASIC. I would write out programs on paper, and when I had a chance to visit a friend – after we had played a few games – I would type in my program to see if it worked the way I had anticipated. These were my “educational” excuses for why my parents should let me visit those friends, preferably more often. Looking back on those experiences, I wonder if they subtly moved me towards computational chemistry, mainly by getting me comfortable with the command line.

 

Computers are certainly ubiquitous in education today, but perhaps not in the way envisioned by their early prophets. They have certainly made data analysis a whole lot easier, both in class and when students are writing up their lab reports or working on their problem sets. Real-time access to data repositories, thanks to the Internet, allows me to design small-group in-class work that, I think, promotes useful learning of the material. Without today’s technology and bandwidth, classes might have ground to a halt during the Covid pandemic. As a teacher, I’m very grateful for the Internet as a resource that helps me put together what I hope are engaging classes for my students. There is reason to think that some of our 21st-century digital technologies are significantly transforming education and that we might not see a replay of Groundhog Day. But we should also learn the lessons of the past. I’ll quote Selwyn again (from the end of Chapter 3).

 

We have seen… how successive introductions of film, radio, television and micro-computing into education were accompanied by considerable hyperbole and hucksterism. Many claims were made about the enhanced nature of technology-based learning and the resulting improvements to learning, as well as the establishment of ‘fairer’ conditions for ‘rich’ and ‘poor’ students and schools. We also saw how research ‘evidence’ was produced quickly to ‘prove’ the ‘effect’ of these technologies, especially in terms of learning gains – regardless of the fact that this evidence was inconclusive and equivocal… it is notable how many of the ‘educational’ rationales for these technologies were based on ambitions towards the mechanisation of the teacher’s work, increased efficiency and economies of scale…

 

In general, the twentieth century was a period where many people were keen to proclaim the ‘power’ of various technologies to affect substantial societal change… The flaw in this reasoning… was that ‘social problems are much more complex than are technological problems’ [quoting a Manhattan Project physicist]… [there is] a clear ‘cycle’ of events that is more or less repeated with each ‘wave’ of technology development. This cycle is seen to begin with substantial promises for the transformative potential of the technology backed by research evidence and other instances of scientific credibility. Yet… educators go on to make inconsistent use of the new technologies for a variety of technical, professional and personal reasons. Perhaps most importantly, few changes appear to occur in the arrangements of educational institutions. A number of rationales are then proposed to explain this ‘lack of impact’…

 

It’s difficult to see where things are headed when you’re in the thick of it. The historical perspective is useful to consider when one has the benefit of hindsight. Even if our present ‘wave’ proves transformative, it will likely unfold in a way different from that predicted by its present champions. I suspect thinking about the relationship between technical and social problems is important, but I haven’t quite wrapped my head around what this means. Also, Selwyn’s caution against educational hyperbole and hucksterism still stands. And there will be a lot of it. Separating the wheat from the chaff will not be easy; it might be even more difficult amid the so-called information explosion. But even that’s an old story, repeated for a new generation with different technology. To quote from the Matrix movies, which also feature cycles: “some things never change… but some things do.”

 

Stay tuned for more on Selwyn’s book in future posts!