Thursday, February 29, 2024

Liquid Rules

I hadn’t given much thought to how glues work until reading “Sticky”, the fourth chapter of Mark Miodownik’s Liquid Rules. But it’s not just glues that stick. Water sticks too. It sticks to glass. It sticks to towels (by wicking, so that it sticks less to your skin). I like Miodownik’s description of paint – essentially you’re trying to get colored substances to stick to paper, canvas, concrete, wood, your face, or the walls of a cave.

 


What makes substances sticky? It has to do with a change of state. Miodownik writes: “Glues start off as liquid and then, generally speaking, turn into a solid, creating a permanent bond.” He then describes the myriad ways this can take place with many interesting examples. Here’s what I learned:

 

·      The 5,000-year-old iceman Ötzi had an axe that used birch-bark resin as glue. The major component is a methoxyphenol. Once the liquid turpentine in the mixture evaporates, the remaining methoxyphenol polymerizes. Tree resins are a great source of glue; some, such as frankincense, smell fragrant!

·      Animal glues come from collagen. Extract the collagen in hot water and you can turn it into gelatin. Apparently, the Egyptians were pioneers in constructing plywood by crisscrossing the grains of thin pieces of wood and glue-stacking them. Thankfully Egypt is dry, because gelatin falls apart in hot and humid environments. Woodcrafters took advantage of this reversibility, unsticking animal glue with steam to cleanly repair and restore everything from furniture to violins. Knowing how to stick and how to unstick a glue makes it versatile!

·      Rubber is a glue of sorts. Its ability to be stretched and molded is what helps it retain a “grip”, be it on a bicycle’s handlebars or a car’s tires. Post-it notes use rubber, and that’s why they don’t damage the surface when you peel them off.

·      Phenol and formaldehyde make a polymer that is a strong glue. (I’ve published a number of research papers related to formaldehyde, but never with phenol.)

·      Superglue is cyanoacrylate: oily by itself, but super-sticky once it comes into contact with water, which triggers polymerization.

 

In between a liquid and a solid, you can have liquid crystals. They’re fascinating substances! Miodownik discusses how they alter the polarization of light, and how you change a display by changing the electric field. It’s how my cheap handheld calculator works. I also learned that unlike watercolor paints, which stick quickly once the water evaporates, oil paints adhere much more slowly through oxidation reactions. Famous paintings in museums are mostly oils, and the masters layered these oils to “create complex visual effects… [by] controlling color, luminosity and texture”. The same effect can be realized by tiny dots placed close to each other. Inkjet printers just need four primary colors to do this: cyan, magenta, yellow, and black. Thus, CMYK on your ink cartridge.
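The four-ink trick is easy to see in code. Here’s a minimal sketch (my own, not from the book) of the standard naive conversion from screen RGB to printer CMYK: the black (K) channel soaks up the common darkness, and the leftover color is split among cyan, magenta, and yellow.

```python
def rgb_to_cmyk(r, g, b):
    """Convert r, g, b in [0, 1] to (c, m, y, k) using the naive textbook formula."""
    k = 1.0 - max(r, g, b)   # black ink covers the shared darkness
    if k == 1.0:             # pure black: no colored ink needed
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(1, 0, 0))  # red   -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0, 0, 0))  # black -> (0.0, 0.0, 0.0, 1.0)
```

Real printer drivers use far more sophisticated color profiles, but the arithmetic shows why four inks suffice.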

 

Reading Liquid Rules reminded me of two things. First, I should be leveraging a much wider variety of interesting examples in my G-Chem classes; there is so much fascinating chemistry in the world around us and instead students get bogged down in abstraction (which is partly my fault). It’s another reason to consider ditching the textbook. Second, in P-Chem I’ve mostly avoided the mathematical modeling of liquids because they’re hard to deal with; it’s much easier to describe solids and gases with “clean” mathematical equations. But the dynamic liquid space is where much of the interesting chemistry happens; Liquids Rule! I should consider what changes I can make beyond the simple modeling of “dilute” solutions where I currently spend a small sliver of time. Any book that gets me thinking about improving my teaching makes reading it worth my while. Liquid Rules certainly qualifies. (I also recommend his previous book, Stuff Matters. Here is my post on aerogels.)

Tuesday, February 27, 2024

Dozenal

Why do we use the decimal system? Supposedly because most of us have ten fingers, and humans used this to count before the appearance of written numerals. This is known as base-ten. We go 1, 2, 3, 4, 5, 6, 7, 8, 9 and then we shift the 1 by one spot to give 10. But ten is a terrible number when it comes to division. If you wanted to equitably divide ten sweets and not deal with kids complaining that someone else got more than them, that’s difficult to do if the number of kids is not two, five or ten. Base-twelve works much better. Twelve sweets can be divided equally by two, three, four, six or twelve.
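The sweets argument is just a statement about divisors, which a few lines of code can check (the helper name `divisors` is mine):

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))  # [1, 2, 5, 10]
print(divisors(12))  # [1, 2, 3, 4, 6, 12]
```

Ten splits evenly only four ways; twelve splits six ways, which is what makes it the friendlier base for sharing.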

 

It's much easier to slice up a pizza for twelve people than ten. We divide our day into twelve hours and our night into twelve hours. (I grew up near the equator where sunrise and sunset times didn’t change very much.) The ancient Babylonians used base-sixty. We still use this to count minutes and seconds, and to measure angles. I learned that the Yuki of California used base-eight by “using the space between their fingers as markers”, while the Oksapmin of Papua New Guinea used base twenty-seven, counting body parts that included the nose. The Egyptians, though, used base-ten but had symbols to handle place-value. I learned this from reading Kit Yates’ The Math of Life & Death.

 


Does it matter? Apparently there is a Dozenal Society of Great Britain (Yates is British) and a counterpart here in the U.S. who think that base-twelve is superior. Yates writes that “advocates of the dozenal system claim it would reduce the necessity for rounding off and hence mitigate a number of common problems”. Yates provides real-life examples where rounding errors mattered in a national election and in a stock exchange index. And in real life we often divide things by the number three. A third. Two thirds. And then we have to deal with the annoying 3.333… and 6.666… Ugh.
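The annoying 3.333… really does disappear in base twelve, and it's easy to verify by doing long division digit by digit in any base (the function below is my own sketch, not from Yates' book):

```python
def digits_in_base(numerator, denominator, base, places=6):
    """Expand the fractional part of numerator/denominator as digits in the given base."""
    out = []
    r = numerator % denominator
    for _ in range(places):
        r *= base
        out.append(r // denominator)  # next fractional digit
        r %= denominator
        if r == 0:                    # expansion terminates
            break
    return out

print(digits_in_base(1, 3, 10))  # [3, 3, 3, 3, 3, 3] -> 0.333333...
print(digits_in_base(1, 3, 12))  # [4]                -> terminates: one digit
print(digits_in_base(2, 3, 12))  # [8]                -> terminates: one digit
```

In base ten, a third repeats forever; in base twelve it stops after a single digit (written 0;4 in dozenal notation).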

 

When I was in elementary school, we were forced to memorize our times-table up to twelve. Why? I don’t know. Is it an unconscious relic of a dozenal base? Or is it simply because nine, ten, and eleven are easy, so kids need to be challenged to twelve? I don’t know, but my education was based on the British system. My spouse who grew up in the U.S. thinks she only had to memorize the times-table up to ten, but she can’t quite remember. I’ve heard that some countries require kids to get to sixteen. Ah, hexadecimal. I remember learning to read it when I first hacked into computer programs.
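For anyone rusty on reading hexadecimal, Python handles base conversions natively; a quick refresher:

```python
# Hexadecimal is base-sixteen: digits 0-9 followed by a-f (ten through fifteen).
print(int("ff", 16))   # 255 = 15*16 + 15
print(int("1a", 16))   # 26  = 1*16 + 10
print(hex(255))        # '0xff'
print(int("10", 12))   # 12  -- "10" read in dozenal is twelve
```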

 

I don’t think the general public will ever shift to the dozenal or hexadecimal system. Decimal is too strongly baked in. But we, the general public, should learn to think more mathematically. And that’s what Yates’ book is about. Yates provides engaging examples in words and numbers. There are no equations for the math-phobic to worry about. Yates talks statistics, exponentials, probability, algorithms. His final chapter is devoted to thinking about disease spread in pandemics, aptly so since it was published in 2019 shortly before Covid-19. If only more people had read his book, they would have been less confused when talking heads mentioned herd immunity, patient zero, or R-nought.

 

I’ve read a number of “math is cool” books aimed at the general public. The Math of Life & Death is easy to read and has relevant examples that make you feel smarter after you figure out that math isn’t as hard as you thought, and it’s so useful! I’d been mulling an overhaul of my G-Chem classes to include a more significant data science and modeling component. Not sure yet if I’m ready to take the plunge but Yates is helping me move the needle.

Tuesday, February 20, 2024

Pedagogy of Abundance

For much of human history, education has operated according to an ‘economics of scarcity’. What does this mean? Martin Weller of the Open University (UK), in his 2011 article “A pedagogy of abundance” (available on JSTOR), lays out the underlying factors, which I would summarize as:

·      Expertise in a subject area is scarce

·      Access to the experts is limited

·      Materials for content knowledge are physical and not easily distributed

 

But thanks to today’s digital technology that allows myriad interconnections in the blink of an eye, we are in a situation where ‘knowledge’ is abundant. There’s so much of it now that the bigger problem is separating the valuable nuggets from the false, nonsensical, and misleading. We want our students to be life-long learners. Their digital worlds are steeped in having access to an abundance of content knowledge. (Whether they can learn from it is a different issue.) How can we help them live and learn in this milieu of abundance?

 

Of the three aforementioned factors, one has clearly been overturned. Materials for content knowledge are abundant and easily distributed. It’s very easy to copy 1’s and 0’s thanks to the underlying digital format, and to push this through the internet infrastructure at close to the speed of light. As to access to experts, technically this should be easier with modern telecommunications and crowdsourcing from online communities, but finding what you need might require wading through a marsh of gobbledygook. And it’s only getting worse. Disinformation is much easier to spread than verified correct information. It used to be the case that if I was looking up data (usually a physical constant), the first hit on a web search engine would give me the correct information. Now that’s no longer the case, as my students discover to their chagrin.

 

Expertise in a subject area, I would argue, remains scarce. Of course this depends on the level of knowledge you want to access. College-level chemistry? I’d say that knowledge is relatively scarce. On the occasions that I happen to wander into a Q&A website, often superficial knowledge is repeated and proliferates. I started looking because I regularly assign pre-class questions to students. They no longer look in a textbook but do a quick web search, and then parrot those answers back to me, some of which are just plain wrong. Others are misleading. Some turn out to be correct, but it is increasingly more miss than hit.

 

While I’ve ditched the textbook in P-Chem (and I’m liking it), I have yet to do so in G-Chem. But I have been thinking about restructuring my G-Chem course to get students to look up and use more data from the internet, and to learn how to suss out the more reliable sources. This is partly because students no longer own their chemistry textbooks but rent them digitally for a semester. This is both cheaper and spares them from carrying around a heavy book. In Weller’s article, he suggests principles of leveraging a ‘network’ of learning. Here’s my summary.

·      We encounter abundant and varied content that is easy to share.

·      Organizing groups or communities of learning is both easier and more fluid.

·      User-generated content can be constantly corrected and updated by the community.

·      Today’s student must learn to be adept at navigating an ever-changing knowledge environment.

Learning challenging content such as chemistry beyond a superficial understanding doesn’t get any easier, even with all this. I think that’s why I have job security – at least while such expertise is recognized and valued. A day may come when folks may “gather around them a great number of teachers to say what their itching ears want to hear” (to quote the Bible). In some quarters, that day is already here.

 

Weller poses two questions at the end of his article: “How can [educators] best take advantage of abundance in their own teaching practice, and secondly how do we best equip learners to make use of it?” He also adds a cautionary note: “Abundance does not apply to all aspects of learning, indeed the opposite may be true, for example an individual’s attention is time-limited. The abundance of content puts increasing pressure on this scarce resource, and so finding effective ways of dealing with this may be the key element in any pedagogy.” That’s one challenge I’ve been pondering. When I try new activities in class, I sometimes overestimate the ability of students to cut through the noise and focus on the key things. But that’s what experts do while novices flail. A data-rich open-source version of G-Chem will certainly have its challenges. But in the long-term, this approach may be more important and applicable for life-long learning.

Thursday, February 15, 2024

Decoherence

I finally finished Philip Ball’s Beyond Weird on the nature of quantum mechanics. I feel I haven’t digested it. Maybe that’s because Feynman was right – no one really understands quantum mechanics. I will let some time pass and then take a second stab at it. Content-wise it overlaps with Manjit Kumar’s Quantum and Amanda Gefter’s Trespassing on Einstein’s Lawn but is stylistically very different. There’s something about Ball’s writing that hits a sweet spot in my brain – maybe his background training in chemistry leads him to language that resonates with me.

 

Today I’d like to discuss decoherence. I’m likely not to do so coherently because I don’t understand it. Also, it’s clearly beyond weird. I will have to quote Ball a fair bit since I don’t really know what I’m talking about. And in Ball’s telling of quantum mechanics, knowledge is a tripping (or perhaps trippy) factor.

 

Why do we have certain knowledge about larger objects to which the rules of classical mechanics apply? They have well-defined properties. We can say something about both an object’s position and velocity simultaneously; its properties are localized to the object and not “spread out mysteriously through space”. On the other hand, the “quantum world is (until a classical measurement impinges on it) no more than a tapestry of probabilities, with individual measurement outcomes determined by chance.” This wave-like behavior can manifest itself when waves superimpose coherently. We see this in wave interference because there is a “well-defined relationship between the [waves]… when they are in step.”

 

But, “if the quantum wavefunctions of two states are not coherent, they cannot interfere, nor can they maintain a superposition. A loss of coherence (decoherence) therefore destroys these fundamentally quantum properties, and the states behave more like distinct classical systems.” Coherence is what gives us the beyond weird smeary quantum behavior. Decoherence destroys it and provides distinction. What causes decoherence? Somehow making a measurement to extract information from a quantum system does so. How? We don’t know. Making a measurement of two conjugate properties in a different order (because you can’t do so simultaneously) gives different results.

 

Ball latches on to some clues. No system is truly isolated. It has to interact with an environment. Then comes the whopper suggestion. Decoherence is not because quantum states are “fragile” but rather because “they are highly contagious and apt to spread out rapidly.” Here’s how it works. When a quantum particle interacts with another, this “places the two entities in an entangled state. This is, in fact, the only thing that can happen in such an interaction… the quantumness – the coherence – spreads a little further. In theory there is no end to this process… [molecules hit more molecules!] As time passes, the initial quantum system becomes more and more entangled with its environment. In effect, we then no longer have a well-defined quantum system embedded in an environment. Rather, system and environment have merged into a single superposition… [they] infect the environment with their quantumness, turning the whole world into one big quantum state.”
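In standard bra-ket notation (mine, not Ball's, whose book avoids equations), the spreading looks like this: an interaction turns a product of system and environment states into a sum that can no longer be factored apart.

```latex
% Before: system in a superposition, environment untouched
\Big( \sum_i c_i \,\lvert s_i \rangle \Big) \otimes \lvert e_0 \rangle
\;\xrightarrow{\text{interaction}}\;
\sum_i c_i \,\lvert s_i \rangle \otimes \lvert e_i \rangle
% After: an entangled state -- neither part has its own wavefunction
```

Each further interaction tacks on more environmental factors, which is the "contagion" Ball describes.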

 

Ball continues: “Quantum mechanics is powerless to stop it, because it contains in itself no prescription for shutting down the spread of entanglement… This spreading is the very thing that destroys the manifestation of a superposition in the original quantum system… we can no longer ‘see’ [it] just by looking at the little part of it… What we understand to be decoherence is not actually a loss of superposition but a loss of our ability to detect it in the original system.” This is starting to sound a bit like what we see even in classical thermodynamics. A temperature differential induces heat-flow, essentially a one-way street. Entropy rears its head. You haven’t lost any energy, but you’ve spread it out in such a way that it’s highly improbable you will collect it all back in one ‘place’ so you’ve effectively lost it from the system to the environment.

 

If the nonlocality of quantum mechanics spreads itself everywhere, why do we experience profound locality and distinctiveness in classical objects? The crux has to do with making a measurement. Ball writes: “A measuring device must always have some macroscopic element with which we can interact: a pointer or a display, big enough to see, say.” Decoherence must take place in this interaction and essentially “imprints information about an object onto its environment. A measurement on that object then amounts to harvesting this information from its environment.” It’s what happens when we ‘see’ an object. Our “retinas are responding to the photons of light that have bounced off it.”

 

But beyond weird, decoherence “creates a kind of ‘replica’ of the object… that eventually produces a reading in our classical measuring apparatus”. I sense echoes of Command Copy. Repeated over and over again. Ball writes that “some quantum states are better than others at generating replicas… These are the states we tend to measure, and are the ones that ultimately produce a unique classical signature from the underlying quantum palette. You could say that it’s only the ‘fittest’ states that survive the measurement process, because they are best at replicating copies in the environment that a measuring device can detect.” The physicist Wojciech Zurek has dubbed this “quantum Darwinism”. I’m reminded of the notion that perhaps it’s not mathematics all the way down, but biology at every level!

 

It turns out that the position of an object is particularly well suited for this decoherence replica-generating thingamajig. Ball explains: “[It’s] because those interactions tend to depend on the distance between the object and elements of its environment, such as other atoms or photons: the closer they are, the stronger the interaction. So interactions ‘record’ position very efficiently. The corollary is that decoherence of position states tends to happen very quickly, because pretty much any scattering of photons from an object carries away positional information into the environment. And so it is really hard to ‘see’ large-ish objects being in ‘two places at once’… The states we can measure are the ones that are most easily found out.”

 

And if all that wasn’t mind-blowing enough: “when we measure a property of a quantum system by probing its ‘replica’ in the environment, we destroy that replica (by entangling it with the measurement apparatus).” This implies that “too much measurement will ultimately make the state seem to vanish.” That’s what makes it hard to make measurements on very, very, very small objects. We perturb it into another state. Ball writes: “Take a peek and you’ve used up all the information that was available about it. Subsequent measurements may then have a different outcome.” This sounds like what we see when we observe beyond weird quantum behavior. Thus we are limited when trying to extract information from a quantum system.

 

There’s more, but I have to stop here. My mind is swimming in decoherence and I’m likely to soon spout nonsense, so I’ll stop before the spread reaches my fingers and my keyboard.

Wednesday, February 14, 2024

Knowledge and Overconfidence

A little knowledge can be a dangerous thing.

 

This aphorism seems obvious, but can it be quantified? There’s an interesting study published in the journal Nature Human Behaviour (2023, vol. 7, pp. 1490-1501). The title is provocative: “Intermediate levels of scientific knowledge are associated with overconfidence and negative attitudes towards science”. Ugh. Probably most of the general population has some (albeit limited) knowledge. And thanks to the internet and more widespread dissemination of information (which may be true, false, and shades of grey), more people than ever before fall into the category of having a little knowledge.

 

Maybe it’s not so bad in chemistry? Anecdotally, when I meet someone new who doesn’t know what I do for a living, the most common response translates as “I was bad at chemistry and didn’t understand anything”. But that’s what most people might say if they met someone with an expertise in a particular area. It’s a social response that doesn’t necessarily reflect a deep-seated feeling about competence (or lack thereof) in science.

 

The study looked at three large-scale surveys covering almost 100,000 respondents across Europe and the U.S. over a 30-year timespan. Often, the way to measure confidence was to provide a “don’t know” answer option, but this has several drawbacks. The authors address this by redoing the analysis, and acknowledge that there are always confounding factors whatever technique you use to measure confidence. (I do like the emoji approach!) They also report on two newer but smaller-scale studies that mitigate some of the issues in the previous surveys. I was at first skeptical about whether their conclusions were warranted, but after reading the article in its entirety, I think they’ve been careful in pointing out the caveats of their work. And I think their conclusions might be relatively robust.

 

The questions that test knowledge are interesting. Here are some True/False/Dunno statements that relate to chemistry.

·      The oxygen we breathe comes from plants

·      Radioactive milk can be made safe by boiling it

·      Electrons are smaller than atoms

The attitudinal questions are also interesting. Here are a few.

·      Thanks to science and technology, there will be more opportunities for the future generations.

·      For me, in my daily life, it is not important to know about science.

·      Because of their knowledge, scientific researchers have a power that makes them dangerous.

 

But on to the results. Confidence does increase with knowledge. But it increases faster than knowledge before tapering off at the end. Thus participants who answered very few questions with true or false said dunno a lot. At the other end, participants who answered many questions correctly and had very few wrong answers also used dunno a bit more (in relative proportion). Those who got a bunch of wrong and some right answers had relatively fewer dunno responses. This group also had the most negative attitude towards science based on the attitudinal surveys. I’m probably not explaining this very well (because the variables aren’t completely independent), so I recommend looking at the bar charts and graphs in the actual paper. (It might be paywalled; sorry.)

 

At first glance, some results seem “obvious”. But that’s our hindsight projecting in to give ourselves a boost. It does make sense that someone who was willing to say dunno and really didn’t know much indeed had lower knowledge confidence. Someone who knows a lot and encounters a “tricky” question might acknowledge dunno – confident in what one knows and doesn’t. Both these have more accurate metacognition on their actual state of knowledge. The overconfident person – a little knowledge being a dangerous thing – has poorer metacognition, over-estimating self-knowledge and getting lots of stuff wrong.

 

Maybe this is simply part of the learning process. When you learn something new for the first time, you might learn it superficially. You have some grasp but don’t really know it well, and you simply don’t have enough knowledge to assess how well you actually know it. You automatically try to connect this new piece of knowledge to other things, correctly or incorrectly, perhaps randomly, because your overall knowledge base isn’t wide. When you actually have a deeper grasp, your metacognition is sharpened and as you encounter new information, you can better evaluate how it fits into your wider knowledge base. (I’m assuming that knowledge here means “factually correct” scientific knowledge.) Anecdotally, in my chemistry class, the very weakest students know that they don’t know anything, and are not surprised by their exam results. The strongest students actually underestimate their ability slightly (but this might just be socially playing down their ability). It’s the middle-to-weak group that don’t have a good sense of where they stand. “I studied really hard and felt I knew the material” is a common refrain.

 

As to attitude towards science, the results surprised me a little. I thought that those with the least knowledge would have the most negative attitude towards science, but that’s not the case. It turns out that this group tended to be rather ‘neutral’ towards science overall. The ones with the strongest negative opinions tend to think they know more about science than they actually do. This makes things especially difficult for science communicators to the public. Trying to simplify the science so that it is easier to grasp or digest could “offer a false sense of knowledge to the public, leading to overconfidence and less support, further reinforcing the negative cycle.” It gets worse: the overconfident are “more resistant to new information, especially if it contradicts their certainty, creating a negative reinforcement loop”. And because the surveys examined covered a wide timespan, there seems to be a higher correlation between overconfidence and negative attitude as the internet has grown to be a key information source for most people.

 

It's with a little despair that I ponder this information. I suppose that I should just keep calm and carry on doing my part to convey scientific knowledge as accurately as possible, clear up misconceptions, and try to sharpen student metacognition. After several years of self-tests in G-Chem, I stopped doing them because I felt it wasn’t helping the lower end of the class, at least in the current format I am using. Possibly some tweaking could fix the problem but I will need to ponder this a bit more.

Sunday, February 11, 2024

Grit versus Quit?

About a decade ago, I made an agonizingly difficult decision to quit a new position that I had been very excited about, had uprooted my life and moved thousands of miles for, and that still had the potential for a bright future. I wasn’t completely disillusioned, but I had glimpsed multiple potential problems down the road. The higher-ups in the organization seemed unwilling to change course despite my tactful warnings. So I quit. In hindsight, it was the right decision. But for a number of years the feeling that I had given up too soon bubbled up regularly. That’s why I read Annie Duke’s book Quit: I had seen a blurb about a chapter heading that read “Quitting on Time Feels Like Quitting Too Early.” It captures exactly how I felt, but also vindicated the timing of that fateful decision.

 


I had never thought of myself as a quitter until that big decision. I’ve since quit other things. After quitting something big, it’s easier to quit smaller things. The rationale I’ve given myself is that I’d rather spend my time in other ways. I suppose I’ve learned some of the things that Duke discusses in her book, such as opportunity cost. Doing one thing means you’re not doing some other thing. I’m not a quantum particle with the potential to be in two places at once. Actually, even quantum particles can’t do so. But the quitting still hung over me, tinged with negativity. Duke’s book helps one look past that perspective. Here are some excerpts from her prologue that set the tone of the book.

 

“We view grit and quit as opposing forces. After all, you either persevere or you abandon course… and in the battle between the two, quitting has clearly lost. While grit is a virtue, quitting is a vice. The advice of legendarily successful people is often boiled down to the same message: Stick to things and you will succeed… Quitters never win, and winners never quit… By definition, anybody who has succeeded at something has stuck with it. That’s a statement of fact, always true in hindsight. But that doesn’t mean that the inverse is true, that if you stick to something you will succeed at it. Prospectively, it’s neither true nor good advice. In fact, sometimes it’s downright destructive.”

 

Duke provides a host of examples to underscore this point, the main thesis of her book. She frames success instead as “picking the right thing to stick to and quitting the rest.” She also reminds the reader that circumstances constantly change. While you may have set a goal for yourself, you sometimes have to adapt. You have to look down the road and make an approximate determination (because you’ll never have all the facts and you can’t exactly predict the future) as to whether your road ahead leads to a positive expectation value or a negative one. Duke, a championship poker player who was forced to quit academia for health reasons, knows what it means to constantly decide when to stay and when to fold.

 

Should you always quit while you’re ahead? Duke has practical suggestions for how to consider this. She also explains why it’s particularly difficult to quit once you’ve wrapped your identity into your present course, and why, instead of quitting when you should, you do the opposite and “escalate your commitment”. She employs the visual aid of a katamari, a sticky rolling ball that picks up debris, growing larger in the process. She discusses the behavior of ants and how, even after a food source is found, there is still constant exploration. Keeping the options open. Because the environment is going to change.

 

As a scientific researcher, I’m constantly evaluating whether a project is worthwhile to pursue. I have a filing-cabinet-full of projects that were discarded partway when I deemed it was time to move on. When you first start a project, it’s hard to tell whether it’s going to pan out. You don’t know what will stick. I liked Duke’s discussion of the importance of kill criteria. One needs to set these out to know when to kill a project and move on. The sunk cost is already sunk. Continuing to push ahead will likely result in diminishing returns or worse: further losses of time, energy, money, and more. My kill criteria have been haphazard, and Duke made me think about how to sharpen these.

 

I also found Duke’s assertion that you should tackle (or at least get some handle or realistic picture of) the most difficult thing in a project first (before the easier stuff) to make good sense. Her visual image, borrowed from an interview with Eric Teller: “Imagine that you’re trying to train a monkey to juggle flaming torches while standing on a pedestal in a public park. If you can achieve such an impressive spectacle, you’ve got a moneymaking act on your hands. There are two pieces to becoming successful at this endeavor: training the monkey and building the pedestal… The bottleneck, the hard thing, is training a monkey to juggle flaming torches… there is no point building the pedestal if you can’t train the monkey.” So now I have the image: Monkeys and Pedestals, also a chapter title in her book. I’m also reminded of Barbara Oakley’s advice to students to tackle the hardest problems on an exam first, but quickly pivot when needed.

 

As a chemistry instructor and also an academic advisor, I occasionally find myself in a discussion with a student about whether they should quit something. It may be the dream of going to medical school. It may be dropping a class. It may be changing their major. My students, certainly more so than me, have grown up hearing the gospel of grit. Quitting looks super-bad to them. The student is usually quite shocked to hear me support a quitting decision that they have agonized over. I tell them that they know themselves best. But I also have to tell them that sticking out a class and getting a ‘C’ is not the end of the world, particularly when it means they don’t have to retake it (and waste more time and energy).

 

Grit isn’t a bad thing. But sometimes it needs to be paired with quit. They aren’t opposites. They’re complementary. Duke makes that clear in a chapter title: The Opposite of a Great Virtue is also a Great Virtue.


Thursday, February 8, 2024

Not Everything at Once

I’m midway through Philip Ball’s Beyond Weird. It’s about quantum mechanics. Thanks to popular media, the idea that the quantum world is weird is having its heyday. Multiverse – here I come! Perhaps weird isn’t the right word to use; I’d suggest counter-intuitive. Ball argues that the challenge is “our (understandably) contorted attempts to find pictures for visualizing it or stories to tell about it.” One can learn the math and do the calculations. The problem is that we feel compelled to add interpretations to kinda sorta explain what’s going on. This alludes to the fundamental problem: if quantum information forms the bedrock of all this funny business, then knowledge, observation, measurement, and calculation seem to merge, although it’s unclear what exactly that means.

 


Today’s post is on a “chapter” of the book titled “Not everything is knowable at once”. Technically there are no chapters in the book and each section is bookended by a grey page instead of a white one with words on it. As a quantum mechanic, I appreciate the design choice even though it makes it infuriatingly harder to refer to things. (Thankfully, page numbers remained.)

 

The chapter is about Heisenberg’s Uncertainty Principle, although Ball argues that uncertainty is a misleading word. It makes us think that everything in the quantum world is fuzzy. Nor is it “that if we want to measure one thing very accurately then we have to accept a commensurate blurring in the values of everything else.” I admit that I have sometimes even inadvertently misled students toward this interpretation. My G-Chem students, on first encountering it, think this is a ridiculous notion. But what’s going on is more subtle. Ball writes: “Quantum objects may in principle have a number of observable properties, but we can’t gather them all in a single go, because they can’t all exist at once.” Fundamentally, it’s a question about knowledge.

 

But this restriction only applies to certain pairs of (conjugate) variables – the ones most often used are momentum and position. And it should be emphasized that it has to do with simultaneous knowledge of both momentum and position. It’s not just that we can’t measure them to an arbitrary degree of precision. It’s that the two properties are linked in such a way that they don’t manifest separately all at once. Isn’t this language maddening? Counter-intuitive? Weird? Ball writes: “if the math says that we can’t measure some observable quantity with more than a certain degree of precision, that quantity simply does not exist with greater precision”, or that’s what Niels Bohr might say. And it only affects certain pairs: mass and charge can be known simultaneously; no problem there.
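In symbols, the standard statement for the position–momentum pair (my addition, not a quotation from Ball), where ħ is the reduced Planck constant:

```latex
% Position-momentum: the product of the two spreads has a lower bound.
\Delta x \, \Delta p \geq \frac{\hbar}{2}
```

The bound constrains the pair jointly; either spread alone can be made as small as you like.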

 

Ball makes the analogy that in matrix mechanics, the order in which you multiply two matrices matters: “M x N is not necessarily the same as N x M.” In contrast, “3 x 2 is the same as 2 x 3”. Matrix mechanics is how Heisenberg formulated quantum mechanics. I am much more comfortable with Schrödinger’s wave mechanics, and so that’s what I teach students in quantum chemistry. Ball also suggests that instead of Uncertainty, we should call it Unknowability or Unbeability, presumably the inability to be or exist. I like this. You can’t have Everything, Everywhere, all at once. Simultaneity is the problem. But that opens up another can of worms. We think of simultaneity in terms of the time variable. But time and energy are conjugate variables. And as I tell my students, everything in chemistry is about energy. We don’t really know what it is, but we can count it, even as it morphs from one form to the other. Follow the Energy!
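The non-commutativity Ball describes takes only a few lines to demonstrate. Here is a minimal sketch in Python (the two matrices below happen to be the Pauli x and z matrices, but any generic pair would make the same point):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Ordinary numbers commute...
assert 3 * 2 == 2 * 3

# ...but matrices, in general, do not.
M = [[0, 1],
     [1, 0]]   # Pauli x
N = [[1, 0],
     [0, -1]]  # Pauli z

print(matmul(M, N))  # [[0, -1], [1, 0]]
print(matmul(N, M))  # [[0, 1], [-1, 0]]
```

The two products differ by an overall sign, so M x N is not N x M, which is exactly the structure behind conjugate-variable pairs in Heisenberg’s formulation.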

Tuesday, February 6, 2024

Reorganizing

Not being able to find previous papers I’ve read and stored in pdf format is increasingly annoying me. I haven’t organized the files in the most logical way; rather it has evolved organically over time in what looks like a mess. When there were fewer files, and my memory was better, I could easily find what I needed. Also, prior to Covid, I separated my work-life and home-life. All my files were in a single location – on my desktop computer at work. But with a recent overseas sabbatical and then Covid (and working remotely), I now have multiple places where my files are saved.

 

My desktop contains copies of all the files from my various laptops over the years, but I haven’t made an organized effort to mesh them. No, I don’t store things on the cloud even though my university uses GoogleDrive. Instead, I have local files on various computers which I periodically back up. The inertia to change my ways – I’m not sure what to say about it, other than that I am a creature of (bad) habit. But I’m now sufficiently annoyed that I might do something about it. My current goal is to go through some of the folders every week and start to mesh everything into a single place, perhaps renaming some files so I can better search for them.

 

What I need to do is think of an overarching organizational system. I didn’t know what this would look like as a new faculty member. Also, in my early pre-tenure years, I did not read as widely. Most of my papers were particular to the projects I was actively working on or grants that I was writing when considering future projects. But during my first sabbatical, I started to read much more widely as I pondered new research directions and interests. And not just chemistry research or pedagogy: I read more widely in the sciences, history, philosophy, and psychology. My “future research project papers” folder (labelled “FutureResProjPapers” because that’s how I label folders) now has way too many files, and that’s the one that would be most practically useful as I plan ahead.

 

A starting point for reorganizing would be looking at all the subfolder names in my “Reading” folder. Actually, I have multiple such folders. There are some overlaps between the names so some can be consolidated. But I haven’t figured out what to do about papers that span multiple categories. In the past, I made copies and stuck them in different thematic folders, but I doubt I’ve done so consistently. I perhaps need a way to tag my files with an appropriate set of tags. Decisions, decisions! The whole business looms and just thinking about what I should do is giving me pangs of paralysis.
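For what it’s worth, the hunt for papers copied into multiple thematic folders could be scripted. A minimal sketch in Python, assuming duplicates share a filename (the “Reading” folder name comes from above; the helper function is mine):

```python
from collections import defaultdict
from pathlib import Path

def find_duplicate_names(pdf_paths):
    """Group file paths by filename; return names stored in more than one place."""
    by_name = defaultdict(list)
    for p in pdf_paths:
        by_name[Path(p).name].append(str(p))
    return {name: paths for name, paths in by_name.items() if len(paths) > 1}

# Usage: scan a reading folder recursively for duplicated pdfs.
# dupes = find_duplicate_names(Path("Reading").rglob("*.pdf"))
# for name, places in dupes.items():
#     print(name, "->", places)
```

This only catches copies that kept the same filename; renamed copies would need something like a checksum comparison instead.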

 

Are my desktops, both physical and virtual, a sign of my messy and disorganized mind? Maybe. I’m tempted to just “start fresh” with a clean slate. Are all those old papers important? Likely most of them are outdated now and less interesting. But it would take time for me to figure that out. If I could get an A.I. to read pdf files and churn out a summary, that might help. But would I trust it? I don’t know. Maybe I shouldn’t worry if I miss anything. If something is that important, it will come up again. And likely I can find the paper on the internet. So maybe I should just work on what I’m using actively (say the past one year) and just archive the rest in an “OldPapers” folder. That’s tempting.
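That archive-the-rest idea is also easy to script. A hedged sketch (the “OldPapers” name comes from above; the one-year cutoff, the dry-run default, and the paths are my assumptions):

```python
import shutil
import time
from pathlib import Path

ONE_YEAR = 365 * 24 * 3600  # seconds

def is_stale(mtime, now=None, cutoff=ONE_YEAR):
    """True if a file's modification time is older than the cutoff."""
    now = time.time() if now is None else now
    return (now - mtime) > cutoff

def archive_old(folder, archive="OldPapers", dry_run=True):
    """Move pdfs untouched for over a year into an archive subfolder."""
    folder = Path(folder)
    dest = folder / archive
    for pdf in folder.rglob("*.pdf"):
        if dest in pdf.parents:
            continue  # already archived
        if is_stale(pdf.stat().st_mtime):
            if dry_run:
                print("would move", pdf)
            else:
                dest.mkdir(exist_ok=True)
                shutil.move(str(pdf), dest / pdf.name)
```

Running with `dry_run=True` first just prints the candidates, which seems prudent before letting a script rearrange years of papers.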

 

What I should be doing right now is getting a move on this process. Instead I’m procrastinating by writing this blog post. Okay, I have to stop writing now and spend the next twenty minutes making some organizing headway.

Sunday, February 4, 2024

Device Use

This week I noticed that over 80% of my P-Chem students this semester use a tablet and stylus to take notes in class. That’s a new record high. Pre-pandemic, the number was below 20%, but the increase in tablet use has been noticeable. In P-Chem, this makes sense. I provide pdf worksheets ahead of time, and the most facile way for students to take notes is to write on the worksheets. Whether you do it old-school with pencil-and-paper or electronically with a tablet and stylus, it’s easier to write than type when lots of math is involved. (Funnily, one might consider the tablet-and-stylus combo very old-school, when those tablets were made of clay.) Students can also turn in their problem sets electronically, and doing the work electronically means not having to take pictures of one’s handwritten scrawl.

 

In my G-Chem classes, there is increased tablet use but still less than 50%. Most students use pencil-and-paper. I write on the board a lot. There are calculations, simple illustrations (Lewis structures for example), chemical equations, along with written-out explanations after I’ve reasoned through a procedure verbally. No one uses a laptop since it’s simply not practical to try to take notes in chemistry class when non-text is involved. I do show some PowerPoint slides, but not many – and they’re usually pictures from the textbook (data tables, graphs, or pretty figures). I expect the number of tablet-users to go up as more and more students use them in high school, and as first-year college students see more of their peers using these devices very effectively for note-taking and working homework problems.

 

In Biochem class, which I’ve only taught once (last semester), almost all the students brought laptops, and a few also had tablets with them. That’s because much of the class is taught through PowerPoint slides. That’s what I did the first time because that’s what my biochemistry colleagues do. It makes sense since there are a lot of hard-to-draw pictures of macromolecules and complicated schemes that would take way too long to draw by hand. The slides are provided ahead of time and the students annotate or type as we go along. This works well since the class isn’t equation-heavy. In the few classes where I do more math (full derivation of the Michaelis-Menten equation for example), students pull out a notebook and write by hand or use their tablet. I also encouraged students to bring laptops on the days we would probe protein structures in the Protein Data Bank. I had designed a few in-class activities (and a problem set) for them to get practice using visual tools.

 

Last spring, when I taught the senior-level elective course Metals and Biochemistry, which essentially involves reading and discussing primary literature, all the students had either laptops or tablets. This makes sense because pdfs of the papers and the discussion questions are provided ahead of time. Students take notes on their reading and come prepared to discuss. When I taught Origins-Of-Life chemistry during the remote year, we were all boxes on an electronic screen, so all of us were on our devices.

 

What have I learned from all this? Students adapt their use of technology to the way I teach and the types of materials I provide. This means I should think carefully as I introduce new activities in class that leverage technology use. Last semester I introduced in-class electronic structure calculations in Quantum Chemistry that used a webserver. Students all brought their laptops or tablets on those class days, and I think it worked well. At some point I’d like to introduce more Python, but I haven’t jumped in yet because I wanted to think about it more carefully. In G-Chem I’ve made use of students pulling data off the internet or their electronic textbook for in-class activities, but those have been few and far between. Interestingly, an increasing number of G-Chem students don’t have a hand-held scientific calculator; I have to tell them to get one to use in exams. Except for the remote year, all my exams in G-Chem and P-Chem are still old-school pen-and-paper. I expect to stick to this while we continue meeting in person. But who knows what the future holds?