Tuesday, August 31, 2021

Grand Creation

When describing my research to a broader audience, I’ve been using the following mantra: “Things that persist, persist. Things that don’t, don’t.” It’s a good segue into describing simple chemical autocatalytic cycles. Anyway, I finally looked up the source of that quote and traced it to one Steve Grand in his book Creation, published back in 2000. I promptly went to the library, borrowed the book, and read it cover-to-cover. 

 


I had never heard of Steve Grand, who became famous for creating the popular computer game Creatures in the 1990s. I don’t recall ever hearing about Creatures, a game where players nurture alien species named Norns, helping them to grow and learn within a virtual environment. I did hear about Tamagotchis, so-called handheld “digital pets”, which function on a similar premise. Never having played with either, I plead ignorance to the aficionados who can point out the dissimilarities between the two.

 

Creation is a strange book, at least when reading it twenty years later, having spent a bunch of time reading and thinking about complex systems while probing the question “what is life?”. I recognized much of what Grand was trying to describe. I agree with him that reductionism is flawed when it comes to describing biology, and that one needs to analyze life at different functional levels rather than just focus on its material mechanistic aspects – a focus especially tempting to a chemist who is trained to think at the molecular level of action. As I work my way through Rosen’s much heavier treatise on the nature of dynamics and what it means when we try to model a system, it seems that Grand has worked his way to similar conclusions without the heavy theory.

 

But not having the theory means that Grand struggles to describe what seems indescribable, and one gets the sense that he is grasping at whatever straw-like metaphors he can come up with. I say this with the utmost respect for his book, since I don’t think I could put into coherent words the swirl of thoughts I have about similar questions. I appreciate how challenging it is, and that’s why I find Creation to be a strange book. It attempts to appeal to a general audience with an interesting hook (designing life-like organisms in a virtual world), but feels nebulous when you read it. If I had read it twenty years ago, I don’t think I would have appreciated how challenging it is to describe what Grand attempts. The subtitle of his book is “Life and How to Make It”. It’s a grand goal (pun intended), but not an easy one, because the notion of life, and in particular organic life, is more slippery than an eel.

 

For those Creatures gaming enthusiasts who read Creation, Grand does a nice job describing what goes on in his mind as a designer trying to give the player the feel of life. As someone who has never played the game, I found it interesting because his approach is quite different from how I would approach the problem. Our goals are different, of course, but I think game designers and scientists can learn from each other in this respect. Grand doesn’t worry about the mechanics at the molecular level; instead, he uses a neural net to mimic feedback and feedforward loops. It’s elegant in its own way, and I’ve been pondering how some of his ideas might be married to graph theory to solve the problems I’m working on. I particularly appreciated the way he approached the analog-digital divide. And his computer-programmer’s view of movement as a copy-and-erasure process in the virtual space of bits was eye-opening to me.
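
To make that last idea concrete for myself, here is a toy sketch (entirely my own construction, not Grand’s actual code) of how “movement” in a virtual world can be nothing more than copying a pattern to a new location and erasing it from the old one:

```python
# A toy sketch of movement as copy-and-erasure (my construction, not Grand's).
world = [0] * 10   # a tiny one-dimensional virtual space
world[3] = 1       # a "creature" occupying cell 3

def step(world, src, dst):
    """Move the occupant of src to dst: copy the pattern, then erase the original."""
    world[dst] = world[src]  # copy the pattern into the new cell
    world[src] = 0           # erase it from the old cell

step(world, 3, 4)
print(world)  # [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```

Nothing travels; the bits are simply rewritten in place, which is what made the framing so striking to me.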

 

I enjoy having my mind stretched. Grand’s book does that in a way that’s different from most of what I’ve read, which has tended towards the dense and academic. But it’s still a strange book that doesn’t fit any particular niche. Perhaps I did read it twenty years too late. Grand doesn’t worry about getting things right, as a scientist would. It’s as if he’s feeling his way towards the solution of a problem, like going into a dark room with arms stretched out to figure out what’s around you. Like an explorer! Perhaps it is not so different from a broader notion of research after all.

 

P.S. The title of this blog post comes from the book spine, which essentially reads “Grand Creation”.

Thursday, August 26, 2021

Fading Summer

Classes start next week. I think I’m ready. Or at least I’ve spent most of the last two weeks getting ready. Syllabi are done. LMS content is ready-to-go. I had a training session yesterday with media services to learn how to use the new technology in our classrooms, given that I taught completely remotely last year and was away on sabbatical the previous year. We now have fancy cameras installed in each classroom. I don’t plan on using them because my college is planning all in-person classes (masked). But there needs to be a contingency plan in case a student has to quarantine, so I have one in theory, which I hope I won’t have to put into practice.

 

Office hours will be interesting. I am offering them hybrid, i.e., I will be logged on to Zoom but students can also visit me in my office (masked). However, before they can enter the suite leading to my office, they’ll have to Zoom in to get permission to physically enter, so that we can all maintain lower human density in the vicinity of a bunch of small rooms. I’m not sure yet how this will work. One of my classes is the math-symbol-heavy P-Chem, and I’m encouraging those students to come in person when possible. My other two classes are G-Chem, where online Q&A generally works fine, with only slightly reduced efficiency.

 

My original plan over the summer was to spend about two thirds of my time on research and a third on class prep, because I am overhauling my P-Chem class. We won’t be using a textbook, and I’m making worksheets for every class meeting. At this point I’ve done the bare minimum; I have enough material prepared to get me through the first four weeks. I only started working on this in earnest several weeks ago rather than in June as I had originally planned.

 

The other unexpected activity this summer was doing a chunk of data analysis on “student success” in my department, particularly as it relates to different student groups (race/ethnicity, socioeconomic status, whether a student transferred from a different institution or is a first-generation college student, etc.). I was able to obtain a reasonably sized data set from the institutional repository, and I spent probably a solid two weeks writing scripts to analyze the data. I’m not done yet, but I did put together a presentation for my department. I think it was time well spent even though it wasn’t originally in my summer plan.

 

The rest of my summer was earmarked for two things: writing a paper and learning some new research methodology. I’m pleased to say I succeeded in the first endeavor, and I submitted the paper last week. The figures took a long time to make, but I’m quite proud of this paper because I think I pulled together some creative ideas from the research data. We’ll see what the reviewers think, but I won’t know for another month or two. The second item on my list is still ongoing. I haven’t made as much progress as I would like, although I’ve slogged through some challenging reading with many equations I don’t understand. I also spent a solid few weeks trying to make progress on a new project – I ran a bunch of test simulations and collected a bunch of data, but I’m not sure I made much of a dent in the problem I’m working on. I have a suspicion that my current approach is flawed.

 

I did enjoy working mostly from home all summer (I went into the office once a week for a half-day or so). I’m pleased that I was rather disciplined about partitioning my time between work and relaxation at home. I suppose having no travel plans thanks to Covid meant that there was simply a lot of staying in. I enjoyed having lunch at home, and occasionally walking in the park after lunch. Not having to commute saved me 40 minutes every day, and probably several tanks of gasoline.

 

But summer is fading and the new semester is upon me. I’m actually excited about being back in-person, having run into several students on campus that I had only met in class over Zoom. (I recognized some, but not others; we’re harder to identify when we’re all masked.) I’m not looking forward to teaching while masked, but my classrooms aren’t very big so I hope my voice carries sufficiently. Otherwise I’ll have to get a wireless microphone and amplifier. I have new research students that I’ll be training next week. And soon there will be meetings aplenty. I’ve enjoyed the quietness of the summer but I suppose it’s time to get back into the bustle. I have one more weekend to enjoy before summer fades!

Monday, August 16, 2021

Futureproof

Can I be replaced by a robot? Maybe. If you can break my job down into discrete repetitive tasks that can be encapsulated into an algorithm such that it satisfies a particular goal that can be assessed by measurement. Sounds like a test. I mean that literally – one tests the algorithmic system to see if it matches the desired outcomes. Multiple tests preferably, so you can collect statistics to see if your test is both valid and reliable.

 

How might I prevent myself from being replaced by a robot? That’s the subject of Futureproof by Kevin Roose, where he provides “9 Rules for Humans in the Age of Automation”. Roose does not classify himself as an optimist or pessimist, but rather as a sub-optimist, which he defines as “while our worst fears about A.I. and automation may not play out, there are real, urgent threats that require our attention.” He probably wouldn’t classify that as a sub-optimal position to take.

 


While A.I. in its many guises is much discussed in his book, Roose makes a useful distinction between A.I. (the subset) and its broader realm, automation. The latter does not necessarily involve machine learning – today’s new catchphrase for grant applications. Whatever can be automated soon will be; we’ve seen many examples from the first industrial revolution to today. However, what gets automated will also depend on the acceptable error rate. Who determines this, or how it will be determined, is a matter of debate that should involve a much wider group beyond technologists and technocrats.

 

A distinction I found most useful in Roose’s telling of the tale is the difference between machine-assisted and machine-managed processes. The former is job-enhancing, and frequently used examples come from techno-optimists. The latter is soul-sucking, and provides the necessary pessimistic counterpoint. I certainly use technology in my job, and that use has increased over time, taking a quantum leap thanks to Covid-19. I’ve blogged about the pros and cons of how I spent my Covid year; in summary, I didn’t like it but it wasn’t as bad as I had anticipated.

 

So, which parts of my job are machine-assisted and which parts are machine-managed? Let’s try a few examples. E-mail has changed how I communicate with students and colleagues – there’s a lot more of it. What’s nice: it’s efficient, it keeps records, it lets me think before responding asynchronously, and I don’t have to be spatially co-located. What’s not-so-nice: precisely because of those advantages, it can be a taskmaster in more ways than one. What do I do? I don’t keep it open all the time, checking only once every hour or so during “work hours” and not at all outside them. I happen to be in a job that hardly ever involves a life-or-death situation. And students and colleagues can be trained not to expect replies during evenings and weekends.

 

Example Two: a website to deliver course materials. Until Covid, which forced my complete use of the LMS, I delivered materials through my simple HTML-hacked website. It was easy to change things on the fly without having to print or re-print, be it class notes, problem sets, quizzes, syllabus items, and more. Coupled with something like Microsoft Word, which lets me modify and reuse documents, this has been a huge time-saver. I remember the old days when I would handwrite most things – faster and easier than using a typewriter. (Having weak fingers, I was a one-fingered typist on those older machines, but I’m very quick on the modern QWERTY soft-touch keyboard, where multiple digits are used.) I can’t think of drawbacks to my simple use of these tools, but I do not like the enforced LMS categories, which might look fancier but turn out to be less efficient, at least for the way I teach – presumably referred to as old-school.

 

Now let’s tackle whether my job can be atomized and algorithmized. I’m in the business of helping students learn chemistry. The end goal is that students have learned the chemistry I wanted them to know. Often that chemistry is “standardized”, i.e., college chemistry courses all over the country have similar content and skills we want the students to master, at least for standard courses such as G-Chem, O-Chem, P-Chem, etc. How do we assess whether students have learned the material? Typically through some final assessment that may be an exam, project, portfolio, or paper. Could a machine conduct and “score” that final assessment? For a multiple-choice exam, certainly. For other formats, let’s just say that machines are getting much, much better. Whether the assessment fairly evaluates student knowledge (a rubric is like Swiss cheese, focusing on the holes!) is a different matter altogether and a much longer discussion.

 

Instead, let’s ask whether, given the assessment tool, an algorithm can be devised that leads the student through a process that improves their score on that assessment. I suspect the answer is yes, and that the error rate is decreasing over time. We call this “teaching to the test”. I don’t mean that in a derogatory way. In a sense, all teaching is to the test. We have final goals in mind that we want to assess, and we want to train our students to reach those goals. If the test is standardized, it’s likely that the learning process to reach it can be atomized and algorithmized – minimally in a Swiss-cheese manner that assumes the reduced parts capture the whole. So, could a robot do my job? Under the circumstances and constraints I have proposed, I think the answer is yes.

 

What is my added value as a human instructor? Is it presumptuous to think I add value? I’d like to think that knowledge and life cannot be ultimately atomized and algorithmized, and therefore cannot be automated. Parts of it can – the parts that are reducible – but others cannot be because they can’t be part-itioned. Living systems are likely one of those irreducible things. Hence, part of my job (oh, the irony) is to constantly allude to those complex things. They’re never easy to define – fundamental things never are – but we can get at such systems with many different examples that complement each other to some extent.

 

Roose’s book suggests habits-of-mind to do so in his nine rules, the most useful of which is Rule #1: “Be Surprising, Social, and Scarce”. A.I. is not so good in these areas, at least for now, and perhaps for a long time if indeed these are parts of what it means to be complex and not just complicated. I can see how being Surprising or being Social can be construed as complex. I’d replace Scarce with Unique or Rare, which captures the meaning better. Roose provides examples for these, but the question is how this applies to my specific job. Thinking about how I teach, and how this has evolved over the years, has energized me for the upcoming semester. I feel I’ve unconsciously gone in the direction of providing the extra sauce that machines cannot provide: thinking more deeply about the conceptual parts of my classes and conveying them through multiple examples that defy a simple definition or description. Chemistry looks supremely organized from a bird’s-eye view, when you think about an entity such as the Periodic Table, but as you take a closer look it becomes so much more messy, complicated, and interesting!

 

The other rules take different angles; I’ll highlight a few with a short sentence or two.

·      Rule #2: “Resist Machine Drift” reminds us to not let machine-recommended systems drive what we read or consume online. Reset your browser. Venture out of your comfort zone.

·      Rule #5: “Don’t Be an Endpoint” asks you to take a close look at whether your job involves helping two machines talk to each other simply because different systems haven’t perfected direct communication yet. You might think teaching would be immune to this, but I was recently at a vendor presentation where the vendor supplied everything a teacher needed electronically, so that a new teacher could use their materials out-of-the-box. You’d be there to help connect the learning system and the assessment system by being a friendly face and answering some questions the system can’t yet handle.

·      Rule #6: “Treat A.I. Like a Chimp Army” is obvious.

 

I thought that reading Futureproof would make me despondent about the prospect of losing my job to a robot. Instead, it galvanized me to think about teaching and learning at a fundamental level, about the key role played by human-to-human connection (mediated or not by technology), and about how to continue leveraging technology in a machine-assisted manner to improve the process. And this makes me excited about meeting my students face-to-face again this upcoming semester! What makes me futureproof is continuing to engage in the conversation about what’s important and why, in my field of teaching and in learning more broadly. Even better, my research now involves thinking about systems, algorithms, complexity, and the limits of reductionism. How exciting!

Sunday, August 15, 2021

Reality and Chaos

It’s been a while since I blogged about the intersection of science and magic, the reason I started writing Potions for Muggles. And while this is something I still find interesting, in reality I have too many interests and read too many books. When I randomly came across The Science of Dune, which I wrote about recently, I also noticed there’s a Science of Discworld series. I’ve only read one Pratchett book, and it was so long ago (two to three decades) that I’m not sure which one. (I suspect Guards! Guards!) I did remember it being wacky and chaotic. I had not yet learned to appreciate it.

 

Thanks to the internet and Wikipedia, I discovered a flowchart for the Discworld novels. The Science series is an offshoot of the Rincewind series. I suppose this is fitting, since the Rincewind series begins with The Colour of Magic, also the first Discworld book (I think). Thanks to my local library, I borrowed a battered old copy (cover shown below) and read it this weekend. It’s a short novel and a riot to read, as in a chaotic riot. Or should that be riotous chaos? It reminded me of Jasper Fforde’s series, which I discovered several years ago. This has helped me appreciate the chaos amidst a reality of some sort.

 


The main protagonist of The Colour of Magic is Rincewind, a wizard who flunked out of magical school – for reasons that are (partially) explained in the book. In a series of unfortunate events, he takes up with a clueless tourist named Twoflower, who hails from a different realm and is looking for adventure after apparently reading about the adventurous deeds of colourful characters in Rincewind’s realm. Twoflower travels with a strange many-legged magical Luggage box that stores his belongings, and sports a Demon-in-the-Box Polaroid camera. Yes, there is a homunculus-like demon who operates the box. He remains unnamed, but I vote for Maxwell.

 

What is the colour of magic? Given my interest in how magic might be transduced via the electromagnetic spectrum, and the fact that Pratchett fans lobbied for a new element to be named after this colour – I mentioned this in a previous post – it was fun and interesting to explore the wacky world of science and magic in Discworld. Reality meets chaos is how I would summarize Pratchett’s formula. But there are rules. Of a sort. Here’s Rincewind trying to explain some theory:

 

[Rincewind] tried to explain that magic had indeed once been wild and lawless, but had been tamed back in the mists of time by the Olden Ones, who had bound it to obey among other things the Law of Conservation of Reality; this demanded that the effort to achieve a goal should be the same regardless of the means used. In practical terms this meant that, say, creating the illusion of a glass of wine was relatively easy, since it involved merely the subtle shifting of light patterns. On the other hand, lifting a genuine wineglass a few feet in the air by sheer mental energy required several hours of systematic preparation if the wizard wished to prevent the simple principle of leverage from flicking his brain out through his ears. He went on to add that some of the ancient magic could still be found in its raw state, recognizable – to the initiated – by…

 

You’ll have to read the book for yourself if you want to know more. Today’s quotes are found in Part 2, “The Sending of Eight” – yes, there is a magic number too (cubed). Interestingly, I’ve had my students go through a similar exercise, but I didn’t have the wit to call it the Law of Conservation of Reality. I’m stealing that phrase for the next time I facilitate a similar discussion. In any case, the adventures of Rincewind and Twoflower get progressively wackier, but there are limits. Ummm… conservation limits of some sort where reality smacks you in the face. I don’t pretend to understand how this works (maybe the Science series will be illuminating), but here’s another colourful description by Pratchett:

 

But Time, having initially gone for the throat, was now setting out to complete the job. The boiling interface between decaying magic and ascendant entropy roared down…

 

Yet amidst all the chaos:

 

There was a sound on the edge of Rincewind’s hearing. It sounded like several skulls bouncing down the steps of some distant dungeon.

 

Perhaps gods do play dice with the universe. I should remember this every time I sit down to play a boardgame.

Wednesday, August 11, 2021

The Genesis Quest

The next time I get to teach an origin-of-life course, I will likely use Michael Marshall’s new book, The Genesis Quest, as supplementary reading. I’ve taught the class twice, and our main activity is to read and discuss the primary literature. Alongside those tough-to-digest articles in all their science-jargon glory, I sprinkle in readings from a book aimed at the wider public. The first time, we read selections from Robert Hazen’s Genesis, and the second time we used David Deamer’s First Life as a guide.

 


Unlike Hazen and Deamer, who are both practicing scientists, Marshall is a journalist and science writer. This gives his book a very different feel. He gives you a sense of the science in broad strokes, peppering in anecdotes and analogies, rather than trying to give you the “right” scientific details. He also focuses on people-stories, as is apparent from the subtitle of his book: “The Geniuses and Eccentrics on a Journey to Uncover the Origin of Life on Earth”.

 

If you want a combination of journalist and scientist, there’s Bill Mesler and Jim Cleaves’s recent book A Brief History of Creation, but I found it less suitable for my class, and I think Marshall hits the appropriate complementary notes. In my most recent class, I sensed that what students found more fascinating than the science were the anecdotes I would drop about the scientists and the field in more general terms, including the many controversies and personalities involved. I’ve met many of the “geniuses” profiled by Marshall in his book, although I don’t know most of them personally, as I’m an outsider who hasn’t worked in the origins-of-life field for very long. I don’t attend many conferences and panels, and my contact with famous-name scientists has usually been brief and focused on the science.

 

I enjoyed Marshall’s arrangement of the material. He begins the story with individuals involved in the controversies surrounding vitalism and spontaneous generation before quickly moving on to Oparin, Haldane, Urey, and Miller – the last being the famous Stanley Miller, whose 1953 experiment kicked off the frenzy of prebiotic chemistry research. 1953 was also the year Watson and Crick published their famous paper on the double helix of DNA. Marshall picks up on this story and does a nice job describing the key tenets (and problems) of the RNA World, the dominant theory in the field, and of course all the personalities behind it.

 

The most interesting eccentric genius profiled is the late Graham Cairns-Smith. I never met him, but I devoted one class period to discussing his Clay World theories. The students found them interesting but abstract; they also found them implausible compared to the RNA World, which they found the most compelling. I don’t reveal to the students what I think and try to play the role of impartial interlocutor in our discussions. But I didn’t know much about Cairns-Smith the person before reading Marshall’s masterful narrative, gleaned from interviews with those who knew him. It’s a very, very interesting story – I won’t provide any spoilers, but that for me was reason enough to read Marshall’s book. It’s Chapter 5 (“Crystal Clear”) for those who are already familiar with the field, but I recommend reading Marshall’s book in sequence because his narrative sets up the individual stories superbly.

 

Marshall lays out the three main camps in origins-of-life research, divided by what you think came first: genes, metabolism, or compartments. Each camp has its subfields along with many variations. There are many interesting personalities behind these ideas, and the proponents of the respective theories have strong arguments – but so do their detractors. Marshall also traces the work of the most recent decade that has blurred the boundaries between the three camps. By and large, most of us in the field now think that things were messier and that elements of the different theories come into play in an ecological, cooperative sense. There is evolutionary competition, of course, but the heated arguments of previous decades have died down. Since this past decade was when I joined the field, I didn’t have an entrenched horse to back.

 

That being said, I’m personally drawn towards some of the ideas of outsiders in the field. One such is Günter Wächtershäuser, a patent lawyer trained in organic chemistry, who quietly worked on his own for many years to build what is now known as the Iron-Sulfur World, where pyrite (FeS2) is a key player. Marshall provides several back-story anecdotes that I hadn’t heard and found very interesting. I met Wächtershäuser once at a conference a few years back, where he listened to one of my talks and said it was “interesting”. He seemed a little frail, and my impression was that of a genial elderly gentleman. His papers had given me the impression of a firebrand.

 

For those in the field, there’s no new science revealed in Marshall’s book; rather, it highlights the key experiments. Students who have taken my most recent class have read the scientific articles that Marshall alludes to. But what Marshall does, which my students found hard to do, is put things in perspective. That’s difficult in a one-semester class when you’re diving into the deep end of the pool, steeped in scientific jargon. Marshall provides the important step back to take in the larger vista. He does this in a very readable book that, in my opinion, gets most things right – although he throws in the occasional materialist/reductionist quip that I think reveals his ignorance in some areas.

 

The last chapter in The Genesis Quest is appropriately titled “Just Messy Enough”, and there’s an epilogue that muses on the meaning of life. Marshall leaves off on the right note as to where we are right now in origins-of-life research. It will be very interesting to see how far the field has progressed a decade or two from now. But if you want an excellent summary of the last seventy years of active research in easy-to-digest form, I highly recommend The Genesis Quest. I will certainly be telling my students about it.

Sunday, August 8, 2021

Science by Allusion

With the new Dune movie to be released later this year, and having thought more about the science in my last reading of Herbert’s novel six years ago, I serendipitously stumbled on a library book. The Science of Dune is a collection of essays edited by Kevin Grazier. It is subtitled “An Unauthorized Exploration into the Real Science Behind Frank Herbert’s Fictional Universe”. Doesn’t sound promising, does it?

 


It turns out that Herbert didn’t delve much into the scientific details in Dune. (He might have slipped more details into later novels, but I haven’t read any of them.) It’s really all about politics, economics, psychology, and anthropology. No surprise; that’s true of much science fiction. Grazier relates that, even before the book’s publication, a chatboard message asked: “Is there any science in Dune?” After getting over his initial apoplexy, Grazier had an interesting insight; I’ll quote from the first page of his introduction to the book: “… Frank Herbert, either by design or accident, mostly gave hints about the technical underpinnings of the Dune Universe employing… science by allusion. Herbert avoided the pitfall into which many science writers wander: by omitting much in the way of technical details, the work never becomes dated, at least in this respect.”

 

That’s likely part of the reason Dune holds up well today even though it was published back in 1965. In contrast, Star Trek, which premiered the following year, looks extremely dated – Grazier and his co-authors provide some humorous examples, which you can read for yourself in their book. Grazier makes another important point: “When a writer makes a technical gaffe, the increasingly technically literate reader of today is taken out of the novel, is no longer seeing the depths of the writer’s universe through the eyes of one of its characters, and reverts instead to a person in the twenty-first century holding a book saying, Hang on a minute!”

 

So that we’re not unfair to Star Trek, I do think the job is harder in a visual medium. I haven’t read enough early science fiction to get a sense of how poorly many other authors hold up over time. Asimov’s Foundation series, which I read as a teenager and didn’t understand, put me off sci-fi until recently, when I discovered Rainbows End. I think it’s excellent, but the question is how well it will hold up thirty years from now. I don’t remember much of the Dune movie from the ‘80s, and I don’t think I will re-watch it, although I’m very much looking forward to Villeneuve’s reboot. I’m likely to scrutinize it with a scientific eye – I can’t help myself – and it’ll be interesting to see how science by allusion gets interpreted on the screen.

 

Because Herbert provided few details, scientists have the opportunity to speculate. That’s what The Science of Dune is all about. The most interesting essays, in my biased opinion, have to do with the spice melange and its role in opening up the mind so that Guild navigators can traverse faster-than-light pathways or prophets can see the future. There’s chemistry (of psychoactive stuff) and physics (of time and seeing the future) and biology (of differential effects depending on your species). There’s also ecology – one might argue this is Dune’s most prominent scientific aspect.

 

Other questions considered: How do sandworms move through the desert and what is their life cycle? Do stillsuits really work and what would it take to make one? What are the evolutionary pressures on Arrakis and other planets? Where are all these planets anyway? How does the Reverend Mother’s pain-box work? Can you cheat gravity with suspensor technology? There’s even one essay that goes into detail about how and why sand dunes form, humorously titled “The Dunes of Dune”. An even better title perhaps is “The Real Stars of Dune”; this essay, written by Grazier, has science-y graphs of spectral class versus luminosity, but the best two lines in the chapter are: “Astronomers are like paparazzi to the stars of our galaxy. They take pictures of stars, always without their consent, and determine who is hot and who is not.”

 

The Science of Dune is a humorous little book with ideas that could win Ig Nobel Prizes. There’s some science but lots of speculation too. Having read it, I’m not sure the science of Dune holds up, but the allusion – or maybe illusion – is what will make it a classic for years to come. I hope Villeneuve doesn’t mess up the movie – I’ve already seen large worms show up in the Battle of the Five Armies movie, which I re-watched last winter. No such worms are in Tolkien’s actual Hobbit book. Herbert’s influence can be felt even there. The Science of Dune covers not just the original novel, but the sequels and the Duniverse as a whole. And while I still have no interest in the sequels, the very first prequel has piqued my interest. I’d like to know why humans decided “thinking machines” couldn’t be trusted, since I’m presently watching Person of Interest. Funny how these ideas come around.

Thursday, August 5, 2021

Brain Food

This week I’ve been reading two books related to brains. Brain food, perhaps? The first title is obvious: Great Myths of the Brain by Christian Jarrett. It’s a scoped collection of 41 myths, some of which you’ve heard many times, others more obscure. I suppose I find thinking about neuro-thingies interesting. Was that circular reasoning? Hmm. Anyhoo, today I’ll briefly discuss Myth #29 (“Brain Training Will Make You Smart”) and Myth #30 (“Brain Food Will Make You Even Smarter”) before we get to the second book with its less obvious title.

 


If you’re packing 41 myths into a 300+ page book, you can’t go into too much detail. But Jarrett does a very nice job in encapsulating the key research for and against each myth. There’s always an element of truth in myths. These may be profound, or they may be trivial – mostly the latter in the case of the present book, which is the point of trying to debunk such myths.

 

I generally ignore brain-training commercials. I don’t think there’s a ten-step program that will unlock your brain’s potential in any of the faddish ways being promoted. I’m certainly not going to pay good money for it. Um, except that we subscribe to the online New York Times because my wife and I enjoy the daily crossword puzzle and we’ve recently added Spelling Bee to our routine. I’ve always loved puzzles, and yes there’s a small part of me that hopes my continuing to be challenged by them daily will stave off mental deterioration as I age, but it’s mainly the pleasure of solving puzzles.

 

There is research supporting the idea that training can lead to improvement, in general. That goes for anything in life. I haven’t been watching the Olympics (too much of a cheapskate to buy a subscription, and we don’t own a TV), but I do think training can improve whatever skill you’re practicing. But brain-training fads claim “far transfer” – that their simple exercises extend far beyond their remit. The evidence is slim. Very slim. And the misinformation is compounded by companies (Jarrett names some names) that blur the boundaries between near and far transfer; some of what they’re doing helps in a narrow way, but it’s mixed in with faddish nonsense. Jarrett also provides two vignettes – one on such programs aimed at the elderly, and the other on programs for young kids (aimed at their parents) that supposedly give the kids a head start in life.

 

Let’s move on to brain food. My mother has told me repeatedly that fish is brain food. Every morning (until I left home) I had a spoonful of Scott’s Emulsion Cod Liver Oil – original flavor, in a glass bottle. Most kids hate the stuff. I actually found it tasty and happily ate my tablespoon a day. Nowadays it comes mostly in a sickly sweet orange flavor in a plastic bottle. You can’t find it easily in the U.S., so I stopped taking it after moving overseas. But my mother still gets a bottle of original flavor (now hard to find) when I fly home periodically for a two-week visit. Somehow, I went on to earn my PhD in chemistry at Caltech. Was it the fishy stuff?

 

Jarrett takes this head-on (in “A Fishy Tale”), and once again there isn’t good evidence supporting the link between omega-3 polyunsaturated fatty acids and amping up your brain power. Fatty fish is good for a balanced diet, but it doesn’t make you smarter, ads notwithstanding. Then there are vitamin pills. Blah, blah, blah. But I did not know about “neuro” drinks (Jarrett names names). Oddly, my students make no mention of these even though a Kardashian apparently swears by them. Maybe my students don’t want to let me know they’re consumers. (In my time, the tonic of choice for exam-cramming was “essence of chicken”.) And while I’ve heard about the supposed wonders of chocolate, I had not heard of the glucose-willpower fad. (I enjoy dark chocolate on occasion, but I’m not into sugary stuff taste-wise.) Interestingly, there are experiments suggesting that “glucose in the mouth triggers reward-related activity in the brain, thus prompting participants to interpret the [experimental] task as more rewarding, which boosts their motivation”. And yes, they ran the experiment by asking students to gargle rather than drink lemonade sweetened with glucose versus artificial sweeteners (or none). But let’s move on.

 

The second book I’m reading is First Steps by paleoanthropologist Jeremy DeSilva, a foot specialist. (There’s a good reason why researchers in his profession specialize; read his book to find out!) It starts off with familiar fossil figures such as Lucy and T. rex, but also covers the Laetoli footprints and some of the less well-known hominins. There’s also a discussion of the biomechanics of walking, echoing Daniel Lieberman’s book, Exercised. There’s a nice blend of anecdotal storytelling and detailed research information, packaged in easy-to-read chapters. But since we’re talking about the brain – what does bipedalism have to do with it? After all, First Steps is subtitled “How Upright Walking Made Us Human”. And brains are a large part of what makes us human.

 


Research on various species of Australopithecus shows that bipedalism came early, before the significant growth in brain size. Yet the tantalizing connections and puzzles remain. DeSilva outlines Darwin’s (1871) suggestion that brain size increase “was the consequence of a suite of changes in early members of our lineage – bipedalism, tool use, and canine reduction. But the timing doesn’t appear to work. The earliest bipeds had brains that were no larger than a modern chimpanzee’s. Other researchers have proposed that walking on two legs required a large brain to balance and coordinate such a sophisticated musculoskeletal machine. Tell that to a chicken, whose brain is the size of an almond.” (DeSilva is an engaging writer!)

 

Our brains are energy-guzzling machines, grabbing ten times more energy than their proportion of body weight would suggest. Interestingly, human walking is rather energy-efficient, at least compared to chimpanzee locomotion. DeSilva speculates that walking more and tree-climbing less allowed energy to be funneled into growing the brain. Perhaps this led to cooperation or to developing tools in the quest to find food (energy!). A positive feedback loop led to more food, more energy, and more brain. Richard Wrangham’s thesis in Catching Fire on the importance of cooking fits in well with this storyline. It’s a fun and interesting read (my vague recollection from over a decade ago), but not to be confused with a Hunger Games novel, if you’re presently looking for it at the library.

 

In a later chapter titled “Why Walking Helps Us Think”, there are interesting anecdotes from Darwin to Dickens about how walking can spark the creative juices while solving thorny problems to boot. (Although wearing boots might weaken certain foot muscles; DeSilva is a foot expert, after all.) There are some experiments exploring these links, and they are interesting, although I think still speculative. It helps if you take a walk in the woods like Thoreau, but not so much in an urban area with vehicular traffic and construction. I still remember the first time (about fifteen years ago) a student in office hours hit a brick wall in understanding and nothing was getting through. I asked the student to take a walk around our beautiful campus and come back in 10-15 minutes. (This was before they all owned distracting cellphones.) Somehow it worked. Why? I don’t know.

 

Given that I’ve been playing Origins: How We Became Human because of Covid, it was interesting to read DeSilva’s speculation on the origin of human language as it relates to changes in bone, muscle, ligament, and cavity structures. Hand signals may have come early on, since bipeds have freer arms, hands, and fingers for a wide range of activities and gestures. Somehow this leads to symbols and abstract thought. The Origins game explores this through Julian Jaynes’ theory of consciousness. I’ve borrowed the book from my local library but haven’t had time to read it yet with all these other interesting books! First Steps also has a chapter on Homo floresiensis (a.k.a. Hobbits) appropriately titled “Migration to Middle Earth”. I’ve also explored this in an Origins session report.

 

There you have it – quick highlights from two books that provide food for thought. Or brain food. I’m looking forward to Jarrett’s Myth #31: “Google Will Make You Stupid, Mad, or Both.” In it, there’s a quote from psychologist Daniel Simons, who writes (in relation to video games) that “there’s no reason to think that gaming will help your real world cognition any more than would just going for a walk.”

Monday, August 2, 2021

Excel Too Good

I’ve told you about Tim Harford’s new book, The Data Detective. His blog continues to be interesting, and his post on “The Tyranny of Spreadsheets” sparked a few thoughts. I recommend reading the post in full; it’s interesting and witty. The tale begins with 16,000 “missing” Covid cases, and the culprit was Microsoft Excel – or at least its older file format. You’ll learn how spreadsheet programs got started, how they make our lives easier, and how they’re not the best-suited tool for, say, genetics researchers.

 

I use Excel in my research. Data gets parked there, and then algebraic manipulations turn some numbers into other numbers. I encourage my students to use Excel and take advantage of entering formulae for calculations so they don’t make mistakes by hand. It also allows handy and quick analysis through some simple manipulations. Fill Down and Fill Right are wonderful inventions!
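
The same habit carries over when the analysis moves into code. Here’s a minimal sketch of the Fill-Down idea in Python (the column names, molar absorptivity, and path length are invented for illustration): write the formula once and let it apply to every row.

```python
import pandas as pd

# Hypothetical data: absorbance readings to be converted into concentrations.
df = pd.DataFrame({"absorbance": [0.12, 0.25, 0.41, 0.56]})

# Spreadsheet version: type the formula once, then Fill Down the column.
# Vectorized version: one expression covers every row, so there's no
# chance of a hand-calculation slip on row 37.
epsilon, path_length = 1.5e4, 1.0  # invented Beer-Lambert parameters
df["concentration"] = df["absorbance"] / (epsilon * path_length)
print(df)
```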

 

But I don’t use Excel exclusively, particularly for larger data sets that require more advanced data manipulation. (To be clear, I use the word “manipulation” in a neutral sense, not to indicate that I’m trying to fudge or twist the data.) It helps that I can write code. Most of my students can’t (since they’re mostly undergraduates majoring in chemistry or biochemistry), but a few can, and I’ve recently encouraged my students to take the Intro to Computing class offered at my university, since it switched from C to Python and revamped the curriculum to emphasize computational thinking.

 

Data can be fumbled. And automated functions in a data-processing program can mislead you and severely compound errors if you’re not careful. My students sometimes learn this the hard way, and it’s a good lesson in the importance of thinking carefully about how you set up those data manipulations. I’ve had my own Excel fumbles. It helps that I’ve built up an intuition over the years, so a sixth sense tingles when a number looks suspicious and I double- or triple-check. Too bad it only works within the narrow methodology of my expertise. Being a mediocre coder also reminds me to be extra careful: I write in little tests to check both the integrity of the data and my code. This means that things sometimes take a little longer at the initial stages, but once I’m confident everything’s working fine, the analyses proceed quickly.
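
For the curious, those “little tests” are nothing fancy. A minimal sketch, with invented column names and plausibility bounds standing in for whatever a real data set would need:

```python
import pandas as pd

def sanity_check(df: pd.DataFrame) -> pd.DataFrame:
    """Cheap assertions that catch fumbled data before any analysis runs.
    The column names ("id", "energy") and bounds are placeholders."""
    assert not df.empty, "empty data set -- did the file load correctly?"
    assert df["id"].is_unique, "duplicate IDs -- possible double-counting"
    assert df["energy"].notna().all(), "missing energy values"
    # A wildly implausible value usually signals a parsing or units error.
    assert df["energy"].between(-1e4, 1e4).all(), "energy outside plausible range"
    return df

# Fails loudly at load time instead of silently later:
# df = sanity_check(pd.read_csv("results.csv"))
```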

 

Reading Harford’s post was an excellent reminder not to let my guard down. Also, I need to make sure I use the newer Excel file formats, since I have several old templates still circulating in my folders. Excel’s strength – computation, the automation of mathematics – is also its greatest weakness. It is too clever by half.
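
In that spirit, one last sketch (mine, not Harford’s): the legacy .xls worksheet tops out at 65,536 rows, exactly the kind of silent ceiling behind those “missing” Covid cases. A guard like this fails loudly instead:

```python
from pathlib import Path

XLS_ROW_LIMIT = 65_536  # hard row ceiling of the legacy .xls worksheet format

def check_excel_target(path: str, n_rows: int) -> None:
    """Refuse to write a data set that won't fit the legacy format."""
    if Path(path).suffix.lower() == ".xls" and n_rows > XLS_ROW_LIMIT:
        raise ValueError(f"{n_rows} rows won't fit in {path}; save as .xlsx instead")

check_excel_target("cases.xlsx", 80_000)    # fine: modern format
# check_excel_target("cases.xls", 80_000)   # raises, instead of dropping rows
```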