Friday, December 29, 2017

Rainbows End


Wanna know what the hi-tech future might be like in a city you’re familiar with? That’s the feeling I got reading Hugo Award winner Rainbows End by Vernor Vinge. The novel was published back in 2006, i.e., most of it was likely written in the preceding years. The story is set in 2025, so reading it in 2017 is particularly interesting. While I don’t think we will reach the portrayed tech levels by 2025, the novel feels prescient, a realistic picture of what it might be like to live in the not-so-distant future. Rainbows End feels real. It is neither dystopian nor utopian, and there is no pot-of-gold at the end of the rainbow.

Wearable technology has become ubiquitous. While keypads are still available for novices, expert users employ bodily motions to adjust the view overlays through their contact lenses. Data abounds. Do you want to know more about the plants that you’re walking by? There’s an (overlay) app for that. How about information about someone near you? Sure thing (with varying privacy controls, of course). Or does the world seem drab to you, and you’d rather live and breathe in your own fantasy-land? Yes, you can physically immerse yourself in (a digitally-enhanced) World of Warcraft. (A visual example is the Big Market sequence in Luc Besson’s recent Valerian movie.)

Global interconnectivity at lightning speed and super-broad bandwidth allow thousands, millions, billions of users to fashion the world around them via digital enhancement. No need to look at your cellphone screen to hunt for elusive Pokemon created by a gaming corporation, because user-generated crowd-sourced collaborative content pervades the ether. The new Gods of Entertainment are sustained by enthusiasts via ‘belief circles’. And when these factions clash for supremacy, a mash-up of genres is inevitable. A battle might ensue between knights of the Renaissance Faire and My Little Pony & Friends. You can bet that bookies are hard at work taking real-time bets with real-time odds.

For governments, corporations and their shadowy counterparts, Data Analysis is king. The ubiquitous job of the future? Data Analyst. Supported by varying levels of Artificial Intelligence, of course. With the power of global high-speed interconnectivity comes the possibility of catastrophic chaos. Will a terrorist threat go global in the blink of an eye? In Rainbows End, the moving of analytic resources by ‘leaders’ seeking to neutralize threats resembles an insect swarm. Diffuse, yet focused. Global fireflies conducting a merry dance through wire and air.

The action is set in San Diego, and reaches its zenith at UCSD. I am familiar with the campus, having spent time there first as a postdoc and later as a sabbatical visitor. The author, formerly a math and computer-science professor at neighboring SDSU, is very familiar with the environs. In 2025, he imagines high-security high-level biotech facilities dotting the area around UCSD. His description of the geography is spot-on. As I followed characters in the story walking through campus, treading dead eucalyptus leaves underfoot or winding along the snake-path, I could imagine being there.


Appropriately, the iconic Geisel Library, with its otherworldly architecture, is an important part of the story. (The snow fortress in Inception could be an overlay of the Geisel Library; see comparative figure from Uproxx above.) In one thread of the narrative, book digitization is proceeding apace with a method akin to shotgun genome sequencing. What should the library of the future look like? How should information be arranged, accessed, enhanced, embellished, and brought alive to future generations? What would Theodore Geisel, Dr. Seuss himself, think of the digitally enhanced creatures that resemble those in his stories? Could not, would not, there be a Seuss belief circle powered by green eggs and ham? As digital titans clash over the control and flow of information, should the Library decide the victor? How would it decide? (Read the book to find out!)

What does education look like in this new world? For me that was one of the most interesting threads of the story. What are the skills prized in the future? What would schools teach? How do the arts mesh with the sciences? Can you take a crash-course Matrix-upload style? How do students collaborate on projects and how would teachers grade them? Will the divide deepen between the haves and the have-nots? What happens to adult workers as their skills become obsolete and they have to re-train? Is there a place for an eminent poet of yesteryear who, thanks to medical advances, is experiencing a reversal of Alzheimer’s but has missed the rapid upgrades of the digital revolution? That’s the story of Rainbows End. If you would like a glimpse of the future from a master storyteller, this is a science fiction story for the liberal artist. And while it abounds in future-tech, at its heart it is a story of changing relationships and what it still means to be human.

Tuesday, December 26, 2017

Rest


Rest is actually about creative work. That’s the thesis behind a 2016 book by Alex Soojung-Kim Pang, a Silicon Valley consultant, author, and founder of the Restful Company. The book is subtitled Why you get more done when you work less.

In the introduction, Pang makes four claims. (1) Work and rest are partners. (2) Rest is active; examples include exercise, stimulating hobbies, and REM sleep. (3) Rest is a skill. It takes practice and discipline to reap its rewards. (4) Deliberate rest stimulates and sustains creativity. This last one is the selling point, and the reason why there’s a consulting business surrounding the idea.

There is a chapter on the ‘science of rest’. It has neuroscience-related acronyms such as fMRI, PET, EEG and DMN (default mode network) and acronyms for standard creativity measures such as RAT (Remote Associates Test) and AUT (Alternative Uses Test). The tests might measure certain variables associated with small-c creativity. The neuroscience methods measure several proxies for brain activity. All this small-scale semi-quantitative information is meshed (or mashed-up) with anecdotal accounts of big-C Creative individuals in science, music, the arts, technology and leadership. Only a few of these examples are contemporary. Most are fifty to a hundred years old, some older (such as Charles Darwin). Einstein is of course mentioned.

While the anecdotes are interesting, correlation of creativity with certain ‘habits’ of possibly unique individuals, who may or may not even form a reasonable sample, is tenuous at best. Among the historical examples (which form the majority), many of these individuals had resources, access and opportunity in an era when few had such advantages, and without the global competition and connectivity of today. To this reader, it is unclear if these habits on their own will indeed boost creativity, or the spark-of-genius, or even elevate your company/corporation above the competition. After all, many people have these habits in some measure, but we don’t classify them as creative geniuses. That being said, the book is a light-and-easy read, and it made me feel good about myself since I have many of these habits. Now, I just need that spark of genius to illuminate me and unleash my creativity!

So I thought it would be amusing for this post to go through the different habits and see where I stand or fall in each area or activity. Let’s begin with the six habits of Part I: Stimulating Creativity.

(1) Four Hours. According to the author, that’s how much genuinely difficult intellectual work you can accomplish in a day. Putting in more hours just leads to diminishing returns. That’s not to say you should leave work early every day. There are many routine tasks that are part of one’s job and can be interspersed with your four hours. While I log my time, I haven’t logged the intensity of my intellectual activity. Some days I might get in those four hours, but most days, it’s likely to be one or two at best. And on busy days with lots of administrative busywork, it might be close to zero. The lesson here: I should be more deliberate in carving out time and be more disciplined with high-intensity intellectual work.

(2) Morning Routine. Apparently many creative folks start their day early, given the sample of anecdotes chosen by Pang. I’m pleased to say that when I started my faculty position, I shifted my schedule earlier. This was partly to accommodate my wife, who is a morning person (while I was a night owl for many years), but partly because I detest looking for parking. So I happily teach the MWF 8am section of General Chemistry most semesters. However, Pang’s examples got most of their high-intensity creative work done in the morning. Should I still come in early, do my creative thinking in the morning, but teach later classes when my energy levels are lower? Or is teaching my creative work, and therefore it’s good for me to keep those sections in the morning? Hmm… I’ll have to think about that and maybe experiment a little. I do notice that I get some great ideas when I’m driving to work at 7am.

(3) Walk. Apparently that’s when ideas may come. I walk 1-2 miles every day for my aerobic exercise. That’s a far cry from some of those great thinkers who took much longer walks, sometimes with pals to bounce ideas off. Not sure how many great ideas I get walking in the evening (when I’m more tired). Maybe I don’t walk long enough.

(4) Nap. Good for many of those famous scientists back when they had more leisurely schedules. The author does single out Winston Churchill as an example of a very busy man whose naps were crucial. I used to nap in my days as a student night owl, mainly because I didn’t get enough sleep at night. I’d fall asleep after lunch anyway so I might as well do it in comfortable conditions at home. But now that I hold a ‘regular’ job, or at least I keep such hours, I no longer nap. I think that’s okay, because of the next item.

(5) Stop. Separate your work and non-work activities. That’s the advice from Pang, given his anecdotal examples. By not napping, and being very good about separating my work and non-work, I’ve managed to stay very productive and happy in my career. It might have improved my creativity, at least of the small-c variety, though I haven’t noticed any genius in myself. I do think I have creative ideas at least some of the time, and I enjoy turning these ideas around in my head.

(6) Sleep. It’s important to get good sleep. That’s a problem for me because I’ve had insomnia for many years, although things have improved over the last five years, partly from some lifestyle changes and possibly because I’m simply getting older and sleeping earlier.

There are four other habits in Part II: Sustaining Creativity.

(7) Recovery. Take those vacations and take time off from work. It’s closely related to #5 Stop. I’m good about doing these actively and I enjoy my mind-stimulating hobbies: daily crossword puzzle, reading, playing games, and even cooking.

(8) Exercise. Apparently a number of creative folks took part in strenuous physical activities, running marathons, scaling mountains, etc. Niels Bohr and his mathematician brother Harald were top-notch soccer players. I don’t see myself doing very much in this category although maybe I can up my daily exercise routine a little more.

(9) Deep Play. When I have a larger block of time, I enjoy deep puzzles and immersive boardgames. Got this covered!

(10) Sabbatical. I’m blessed to be an academic where sabbaticals are built into my job. (I do have to apply for them and write a reasonable proposal.) I’m very much looking forward to my next sabbatical (coming up in 2019) and I’ve already been thinking about new directions and new possibilities! My last sabbatical was wonderful because I immersed myself learning about origin-of-life chemistry. This led to a funded proposal, lots of interesting projects, a bunch of papers, some academic activity opportunities (grant panels, editorial boards, etc.), and being a consultant on a successful origin-of-life game!

In the meantime it is Winter Break and I’m enjoying my Rest with a variety of non-work activities. I might even be feeling some creative juices churning!

Friday, December 22, 2017

The Alchemist's Daughter


Reading fiction is for the holidays. Why? Because Mortimer Adler convinced me (in a non-fiction book) that fiction is best enjoyed in a large block of undisturbed time. This allows the reader to immerse in the fictional world. During the school year, non-fiction is suitable for a twenty or forty-minute block in the evening before bedtime. Weekends are when chores and other errands get done.

I kicked off winter break with The Strange Case of the Alchemist’s Daughter by Theodora Goss. My sister had recommended the book, knowing of my interests in settings where the distinction between science and magic was blurred. The setting is 1890s England. The protagonist is one Mary Jekyll. Readers of Victorian era literature will recognize the connection to The Strange Case of Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson.

The 2017 novel by Goss is a modern mash-up of fictional 19th century literary characters whose own stories intertwine the strange and macabre of that era. A strange case requires a detective, and so Sherlock Holmes gets involved. The Whitechapel murders form one thread of the case. There is also a fourth-wall undercurrent running through the story that is slightly distracting and highly amusing at the same time. Sisterhood is explored in interesting ways, and the book has a breezy 21st century feel – the plot keeps moving, and I won’t be surprised if this book gets adapted into a movie or miniseries.

I won’t give away the plot other than to say that a shadowy group of neo-alchemists is involved, although our protagonist needs to explore her way through the case to figure things out. The setting, however, is interesting. It presumes that the alchemists have moved on from the failed quest of transmuting the chemical elements. Thanks to Darwin’s theory of evolution, the new quest is the transmutation of biology. The mutants could be seen as 19th century versions of today’s X-Men or X-Women. Science is the driving force here, much as it is in the origin stories of many of today’s familiar super-powered comic book mutants.

Folks from the 19th century would see today’s biological tools for transmutation as magical. CRISPR technology reminds us of the ethical issues debated over the last century as biochemistry and molecular biology have raced ahead. Unfortunately, The Strange Case of the Alchemist’s Daughter does not go into any scientific details, which I would undoubtedly have found interesting. But the book is still a delight to read. At 380 pages, it took me 4.5 hours at a leisurely pace, since I read fiction slowly to immerse myself in that world. If any of the above sounds interesting, you might enjoy it too!

Tuesday, December 19, 2017

Noble Chemicals


I swear this post is not about noble gases. But if that’s what you were looking for, I have recently discussed the original Bohr model of argon and how the discovery of the noble gases could feature in my general chemistry course next year.

Instead, I was surprised to see “Nobel Prize winner introduces skin care line” in this week’s Chemical & Engineering News. The prizewinner in question is J. Fraser Stoddart, famous for molecular machines, a crest in the nanotechnology wave. The company is PanaceaNano. The skin care line is branded NOBLE with a square diamond in place of the O. The square represents the “organic nano-cubes” that deliver the appropriate rejuvenating molecules.


I used the word nano thrice in the preceding paragraph. So let’s define it before we move further along. A nanometer is 10⁻⁹ meters (or 0.000000001 m). Think nine decimal places for nano. That’s the size of most relatively small molecules, i.e., the realm of the nanoscale. To get a sense of how these powers of ten compare to the sizes of different objects, I highly recommend the animation Powers of 10.

The Noble Skin Care website has a slick video to explain their chemistry. The opening scenes trade on Sir Fraser Stoddart’s name and his Nobel, as if to give the stamp of approval that their product uses Nobel-worthy cutting-edge chemistry – and therefore one can charge higher prices for such a premium product. No details are given in the video of the organic nanocubes, other than their size (3 nm in length), but they are represented as cubes. The C&EN article mentions that they are cyclodextrins. So not cubes per se, but container-shaped molecules that can be used for drug delivery, among other things. I considered them in my potion design example for one of my classes but went with a polycaprolactone instead. Here’s a Wikimedia Commons picture of the cyclodextrin molecule and its approximate “container” shape.

In my chemistry for non-science majors courses, I have the students watch some videos promoting pseudo-scientific concepts. In the past, I’ve mainly aimed at water products since they are ubiquitous and popular. Penta water has been the poster-child offender for a number of years, but alkaline water is the rising fad. Some of the videos are slick, while others are clumsy. My goal is to have students look past the slickness and ask questions about the chemistry presented. They should not just believe the talking head’s scientific claims. Most of the talking heads in these water-product videos have dubious scientific credentials. Stoddart, on the other hand, has a Nobel prize along with an array of top-notch scientific credentials. He doesn’t function as a talking head in the video, although he is chief technology officer and co-founder of PanaceaNano. Is nanoscale chemistry a panacea or will it overrun us with evil nanobots? It’s too early to tell. But the company “chose to make its first commercial product a line of cosmetics because of the high margins and the ease of market entry.” Money first. World domination later.

The scene that caught my eye in the video was the one showing the molecules, so I took a screenshot (above). What are these molecules? Can I use this video as an exercise in my class and have my students hunt down the identities and uses of each substance? Can they use their rather limited background (3-4 weeks) in organic chemistry and functional groups? How long would it take me, a physical chemist (i.e. not an organic or biochemist), to identify these? What strategies would I use? Well, I decided to put it to the test.

I could easily identify lactic acid, salicylic acid and retinol right off the bat; 3 out of 10 with no extra work (though this would not be true of my students). With retinol as a clue that vitamins were involved, it took me just a few minutes to identify ascorbic acid, niacinamide (a B3 derivative), and what looks like a B6 derivative. Then it took some staring at the trihydroxy stilbene molecule, which looked very familiar, before I figured out what it was. However, I had to make use of my own background knowledge of what might be in anti-aging skin-rejuvenating creams. It also took some staring before I identified ubiquinone. Next I tackled the polymer. I could see it was a disaccharide of glucose and GlcNAc (N-acetylglucosamine), but I had to rack my brains a little. Then in a flash of insight, I remembered that four years ago, one group in my class decided for their final project to look at hydration creams. They even asked me about this specific compound, and then the name popped into my head. Chance favors the prepared mind, perhaps? Not sure how that memory ball was accessed.

So in maybe 15-20 minutes, I had 9 out of 10. (Okay, maybe 8.5 because I couldn’t directly identify the B6 derivative by a common name even if I could tell you the IUPAC name.) I could not identify the last molecule and needed to do a web search to find out it was ferulic acid. A student without much chemistry background would not be able to find most of these (except for the smallest three) using a chemical formula search. They would need other clues. If I learned anything from this exercise, it was interesting to reflect on my search strategy and what background information I accessed.

Timed delivery and molecular precision are the two other factors pushed by the NOBLE video and website. Given the range of molecules and what I know about other skin-rejuvenating products (that use the same range of molecules), I don’t think there’s anything more precise going on in the molecular interactions. The timed delivery is more interesting. Presumably one can tailor-make cyclodextrin derivatives that provide a slower release over a longer period of time. But that strategy also works for any container-shaped molecule you might use. Some chemist might actually come up with a cube, although it would likely not be completely organic. By organic, I mean not containing any metals (rather than “natural”). From a design perspective, to get right-angled shapes, metal centers are almost always (although not exclusively) required. In any case, cyclodextrin is a good choice as a delivery agent as it is likely not to screw anything else up.

The alchemists of olde sought to synthesize the substance that would form the elixir of life. The nuevo nanochemists carry on the noble quest.

Friday, December 15, 2017

Child's Play


Do children lose their creativity as they grow older? Are present formal school systems responsible for suppressing the ‘natural’ creativity of kids?

Several authors in Creativity and Development suggest that these questions are not meaningful because they fundamentally assume that the answer to the question “Are children creative?” is Yes. But perhaps the answer is No.

Here’s a provocative excerpt from David Henry Feldman, currently a professor at Tufts, and one of the contributors.



I suppose it depends on how one defines ‘creativity’. The contributors to Creativity and Development, all major figures in their field who have pondered this issue, differ in specifics but they all acknowledge that context is important. They try to counter the ‘myth’ that creativity is primarily a function of the individual. Instead, the environment, society, field and domain all play a part in whether some ‘thing’ will be ultimately judged creative (with hindsight, of course). I purposefully kept ‘thing’ vague because while it is common to assess creativity via a product (a work of art, an insightful theory, a nifty device), one could conceive of a creative act where the process, rather than the product, is the crucial piece. Improv is often cited as an example.

Why do we hearken to the idea of the creative child? Here’s an excerpt of a thoughtful narrative by Seana Moran, currently a professor at Clark University.


There is some correlation between children’s (fantasy) play and standard tests of divergent thinking (one measure of creativity), but the effect is small. Are there developmental precursors to adult creativity? That’s one of the questions explored by the book – relating creativity to a developmental process across time, challenging the popular trope of the eureka moment. If our brains and thought processes develop differently over time, do we need different theories of creativity for different life stages? Are transitions between one stage and the next creative, akin to the emergence seen at a phase transition in the natural sciences, where something novel and impactful is produced?

In Chapter 1 of the book, R. Keith Sawyer (also the volume editor) explores the idea of emergence in the context of creativity and development. Freud and Piaget make their appearance, but so do other thinkers. The broad definition of creativity is introduced. “Creativity is a socially recognized achievement in which there are novel products.” In Chapter 2, Seana Moran and Vera John-Steiner analyze the contributions of Vygotsky to the dialectic of creativity and development. I knew something of Vygotsky’s theories and targeting the ‘zone of proximal development’ has been one of my teaching mantras for many years. However, I learned much more about the context of Vygotsky’s ideas and how they have (developmentally) influenced contemporary theories in the field. I also appreciated Vygotsky’s emphasis on the joy of learning amidst the struggle! I should consider how to incorporate this in my chemistry classroom.

Robert Sternberg, a stalwart in the field, discusses how creativity with a small ‘c’ develops in the process of decision-making in Chapter 3. I’ve read a lot of Sternberg so I didn’t find his chapter as novel-ly interesting as the others but he provides clear theoretical models with practical implications. He also has a nice section titled “Teaching Creativity: 21 Ways to Decide for Creativity” that I might assign my students to read in my classes. I’ve been thinking about how to inject some type of Creativity-in-Chemistry into my classes and experimenting in small doses with particular activities and class projects.

The chapter I found most interesting (and I surprised myself because I typically shy away from reading biographies) was David Henry Feldman’s attempted reconstruction of the Multiple Intelligences (MI) Theory developed by Howard Gardner. While I’m familiar with the populist view of MI Theory, I’ve had a knee-jerk reaction against it because of how it has been misused in popularizing Learning Styles (successfully debunked, yet lives on – like a zombie). Reading the context in which Gardner developed his theory gives me a greater appreciation for what he was trying to accomplish as a challenge to the then-dominant and narrow approach of psychometrics. I also enjoyed reading Chapter 5 (“Creativity in Later Life”) by Jeanne Nakamura and Mike Csikszentmihalyi. As someone over-the-hill (i.e. on the other side of age forty), I found it encouraging to ponder the many and varied examples from interviews with folks in their sixties, seventies and eighties. There was the joy of continuing to be creative coupled with a realistic appraisal of additional constraints added by age.

As a book in the Counterpoint series, the authors discuss a range of questions in the final chapter. These include the two quoted screenshots above, but I want to share one more because it got me thinking about my research into origin-of-life chemistry. This one is also by Seana Moran.


I’m interested in proto-metabolism. How does a collection of small molecules organize into a primitive metabolic cycle, streamline into a novel energy-transducing system, and in one instance promote an explosion of diversity (think of the range of novel life forms!) while keeping strong constraints on the building blocks? In prebiotic chemistry, the problem is an embarrassment of riches. How did nature prune its metabolic pathways to use only a small subset of the myriad closely related chemicals? There are many different amino acids, sugars, and pterins that could function similarly. Why did nature settle on such a narrow selection? And yet the diversity of life-forms is fantastic. Moran’s ecological approach gave me an outside-the-box idea of how to attack this problem.

While novelty is often emphasized in creativity, constraints provided by the environment might be equally important. The open-endedness of an overly dilute soup of novelty might just as well be tons of uselessness. While I may disagree with particular curricular choices of formal school systems, the philosophy of having a formal school system may be good for a creative society as a whole. It’s difficult to be creative in a domain and/or field in which one has very little knowledge, and formal systems help to build that knowledge base. On the other hand, as one gains expertise and is acculturated into a community of practice, those constraints could also act as blinkers that hinder subsequent creative acts. Striking the right balance is tricky.

Reading Creativity and Development slowly over the last two weeks has made me think seriously about putting together some broader Creativity-in-Chemistry projects. I now have some vague outlines of how to proceed and I’m looking forward to discussions with colleagues and students about these ideas before launching a pilot program. I should also get back to working on my back-of-the-envelope chemistry card game. I developed it a little further over Thanksgiving break, so I’m looking forward to winter break to refine my ideas!

Monday, December 11, 2017

Revisiting the Bohr Model


After distracting myself the last two weeks with a copper-cyanide research project, I have returned to reading Helge Kragh’s Niels Bohr and the Quantum Atom. In a previous blog post, I described an inquiry idea I got from the book based on Thomson’s (flawed) atomic model. In today’s post I ponder the difference between Bohr’s originally proposed 1913 model and the Bohr model described in textbooks today. In the figure below, I have illustrated the two models using the argon atom.


If you’ve taken a chemistry course, the model on the left should look familiar. The Bohr electron configuration is (2,8,8) with the innermost shell having two electrons. The outermost shell has eight electrons corresponding to the “octet rule” – I have argued previously that one should be careful teaching this topic since students tend to imbibe the ‘happy atom’ story even when you as a teacher make the effort not to spin the tale that way. On the right, you see Bohr’s original 1913 idea. The electron configuration is flipped (8,8,2); and if you drew this on your chemistry exam, your teacher would likely flip out.

I can imagine a teacher (myself included) saying “That’s just wrong! The first shell can only accommodate 2 electrons, then the next two shells can accommodate 8 electrons, blah, blah, blah…” But why, though? You might be tempted to invoke the quantum numbers if you’re teaching general chemistry, but a skeptical student should see them as arbitrary. Why do those quantum numbers have those arbitrary rules? And who’s to say that they translate into the Bohr shell model the way you’ve described? (Unless you recall your own Quantum Chemistry class, you’ll be at a loss to explain any of this.) A much stronger argument can be made using experimental data from photoelectron spectroscopy, and I’ve opted to use this approach the last several years in my general chemistry course even though the standard first-year college chemistry textbook does not.

Bohr was a very, very clever scientist. Is there anything we can learn from his original “wrong” model? Maybe it isn’t so wrong after all. First let’s take a look at the electron configurations for the first twenty-four elements as presented in Kragh’s book (Figure 2.3) shown below.

The first six elements (Hydrogen to Carbon) are what you would expect. But then Bohr proposes that Nitrogen is (4,3) instead of (2,5). The argument was based on chemistry, i.e., in its chemical compounds, nitrogen is known to be trivalent – it forms three bonds with other atoms, never five. Phosphorus (#15) is correspondingly (8,4,3) instead of (2,8,5). While phosphorus can be pentavalent, its trivalent compounds are significantly more stable thermodynamically. Bohr takes chemistry seriously and considers the types of stable compounds formed by each element, just as Mendeleev did fifty years prior in his iconic version of the Periodic Table. Oxygen is (4,2,2) since it tends to be divalent; Fluorine is (4,4,1) since it tends to be monovalent. Interestingly, H, F and Cl are in the same ‘row’ of the table above, indicating that they all tend to form just one bond.

While Neon (8,2) has two outer electrons in the original model, this places it in the same ‘row’ as Helium and Argon – all unreactive noble gases. It doesn’t demand that having two outer electrons corresponds to being divalent. However, we now know that when noble gases do form compounds (as difficult as that might be), the smaller ones tend to be divalent. Krypton difluoride, KrF2, was synthesized in 1963; and when the divalent H-Ar-F was finally synthesized in 2000, there was cause for celebration! So maybe Bohr’s original idea wasn’t so ridiculous after all.

The naysayer might point to the messiness of having the inner-shell occupancies vary as atomic number increases. While elements #3-#6 accommodate 2 inner electrons in Bohr’s original model, elements #7-#9 accommodate 4 inner electrons, and then the rest have 8 inner electrons. (And then you see the same pattern of 2, 4, and then 8 building up in the second-innermost shell.) It’s a “reverse octet rule” except that it progresses in even-numbered stages. It turns out Bohr had some good reasons for the even numbers (see Kragh’s book for more information), but the idea of increasing the number of inner electrons has merit. As the atomic number (# of protons) increases, the electrons should experience a much stronger electrostatic attraction and be increasingly pulled closer to the nucleus. Who’s to say what the limit should be? (Bohr picked 8 as the limit because of periodicity, again appealing to chemistry!) Remember, at this point we don’t know about the four quantum numbers, although there is some idea that quantized shells exist and that electrons have angular momentum.

In fact we use this sort of argument when appealing to the transition metals. In General Chemistry you would recognize this as the strange exception where, if asked to write the electron configuration of a transition metal cation, you must remove the s-electrons before the d-electrons. For example, a neutral titanium atom is 1s²2s²2p⁶3s²3p⁶4s²3d², but the Ti(+1) cation is 1s²2s²2p⁶3s²3p⁶4s¹3d² instead of 1s²2s²2p⁶3s²3p⁶4s²3d¹. Students are flabbergasted by these exceptions. The reason we give them for this reversal of affairs? For transition metal cations, the 3d subshell ‘sinks below’ the 4s subshell in terms of energy. We say this having just taught them the Aufbau principle for writing electron configurations with its weird ‘inter-level’ crossings!
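
If you like to see the bookkeeping spelled out, here is a minimal Python sketch of the textbook rules as I teach them (my own toy illustration, not anything from Kragh’s book, and all the names in it are made up): fill subshells in the usual Aufbau order for the neutral atom, then, for a cation, remove electrons from the highest principal quantum number first, which is why 4s empties before 3d.

```python
# Toy illustration of the gen-chem rules of thumb, not a quantum calculation.
# Known Aufbau exceptions (Cr, Cu, etc.) are deliberately not handled.

AUFBAU = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p"]
CAPACITY = {"s": 2, "p": 6, "d": 10}

def neutral_config(z):
    """Fill subshells in Aufbau order for a neutral atom of atomic number z."""
    config, remaining = [], z
    for sub in AUFBAU:
        if remaining == 0:
            break
        n_e = min(remaining, CAPACITY[sub[-1]])
        config.append((sub, n_e))
        remaining -= n_e
    return config

def cation_config(z, charge):
    """Remove `charge` electrons, highest principal quantum number n first
    (ties broken by the later-filled subshell), so 4s empties before 3d."""
    occupancy = dict(neutral_config(z))
    removal_order = sorted(occupancy,
                           key=lambda s: (int(s[0]), AUFBAU.index(s)),
                           reverse=True)
    left = charge
    for sub in removal_order:
        taken = min(left, occupancy[sub])
        occupancy[sub] -= taken
        left -= taken
        if left == 0:
            break
    return [(s, occupancy[s]) for s, _ in neutral_config(z) if occupancy[s] > 0]

def pretty(config):
    return " ".join(f"{sub}{n}" for sub, n in config)

print(pretty(neutral_config(22)))    # Ti:     1s2 2s2 2p6 3s2 3p6 4s2 3d2
print(pretty(cation_config(22, 1)))  # Ti(+1): 1s2 2s2 2p6 3s2 3p6 4s1 3d2
```

Two different rules (fill 4s before 3d, but empty 4s before 3d) living side by side in a dozen lines of code makes the awkwardness of the standard story rather plain.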

When we discuss trends in the periodic table, how atomic radius or first ionization energy changes across a row or down a column, we often appeal to contraction or expansion of the shells. By this we mean that the ‘circle’ representing the shells gets smaller or larger, but not the number of electrons (at least that’s the mental picture we give the students). But having electrons move from outer to inner shells is not unreasonable. If we didn’t go into orbitals and quantum mechanics, there’s a lot you could do with the original Bohr model that avoids all manner of strange exceptions, because Bohr based his ideas on chemical valence.

In 1913, Bohr also attempted to model molecules. The picture above (Figure 2.4 from Kragh) shows some early sketches of H2, H2O, O2, O3, CH4 and C2H2. For H2, the two electrons in the chemical bond are illustrated as rotating in a circle (because: electrodynamics). In O2, four electrons are involved – in today’s Lewis dot structure parlance we call this a double bond. CH4 is correctly predicted to be tetrahedral, and C2H2 is indeed linear, although there should be six electrons in the middle ‘ring’ rather than four. Although H2O and O3 are incorrectly drawn as linear, Bohr anticipates the resonance structures of ozone with two diagrams. Lewis dot structures combined with VSEPR theory would ultimately prove superior; Bohr, being a physicist, had worked out very complicated electrodynamics calculations for the electrons in his model.

One reason for the success of Bohr’s theory is how it explained the results of spectroscopy – those patterns of light you see in a chemistry textbook. (This previous blog post has an example.) Bohr didn’t know why or how an electron jumped from one level to another. “How does it know where to stop?” is a question my students should ask when they encounter this, but no one has done so the last several years. Pity. I think it’s because I have well-prepared students, who have encountered these models before, and so the strangeness eludes them. What I’ve enjoyed about Kragh’s book is a reminder of the strangeness of the chemistry models of atoms and molecules. Historians of science, please continue to do your good work!

P.S. In case you’re wondering what happened to the copper-cyanide project mentioned in the first line, I’m temporarily stuck. This is a very common state of affairs in research. I usually leave the problem alone for a while before getting back to it later. Is it part of the Creativity cycle? Stay tuned for the next blog post!

Friday, December 8, 2017

Garbage Can University


Studying complex systems via computational modelling can lead you far afield to interesting ideas. This week I read several papers on the Garbage Can Model and how it might describe the decision-making processes (or lack thereof) in universities. I first read the original 1972 paper describing the model. (Abstract and citation in figure below.)


The system being modeled is “organized anarchy”; the authors claim that universities are a prototype of organized anarchy. Such systems have three general properties. First, problematic preferences means that the organization “operates on the basis of a variety of inconsistent and ill-defined preferences [and] can be described better as a loose collection of ideas than as a coherent structure…” Second, unclear technology means that the organization’s own processes are not well understood by its members. The system therefore “operates on the basis of simple trial-and-error procedures, the residue of learning from accidents of past experience, and pragmatic inventions of necessity.” Third, fluid participation means that participants’ time, effort and involvement vary widely, and therefore “the boundaries of the organization are uncertain and changing; the audiences and decision makers for any particular kind of choice changes capriciously.” Sound familiar?

In particular, the authors are interested in studying (1) how such organizations make choices in the absence of consensus, and (2) how participants become “actively” involved when “not everyone is attending to everything all of the time”. At its base, the model assumes four independent streams that vary temporally: Problems, Solutions, Participants, and Choice Opportunities. The streams have varying flow rates, and a nebulous “energy” term is needed to solve (or, in most cases, merely resolve) a problem. Sounds like a multidimensional kinetics problem of a complex system, at least to me, the physical chemist. The 1972 paper details the model and includes a full Fortran program for users to tinker with the model and its parameters.

How can decisions be made in the model? The way we all assume: by resolution – a problem shows up and gets worked on until it is solved. But there are two others. By oversight – there’s a norm, and it is simply applied without much time or energy. Or by flight, i.e., the problem is punted and not resolved as new problems come in, because the participants have re-attached themselves to new and different incoming problems. To see the model and simulation statistics, I recommend reading the paper in full. For this blog post, I wanted to highlight the implications of the results as described by the authors. A note of caution: the model is very simplistic, so one should be wary about the strength of the conclusions. That being said, as an academic I find the qualitative descriptions familiar-sounding.
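
Since the model reads like a kinetics simulation, I couldn’t resist sketching a toy version of it. To be clear, the snippet below is my own illustration and not the authors’ 1972 Fortran model (which has access structures, decision structures, and calibrated energy loads that I’ve left out); the arrival rates, the energy trickle, and the rule that problems always chase the newest choice are all made-up simplifications. It only shows how the three decision routes can be operationalized.

```python
# Toy sketch of the garbage-can dynamic, NOT the Cohen-March-Olsen 1972 model.
# Problems, participant energy, and choice opportunities arrive as independent
# random streams; unresolved problems chase the newest open choice; a choice is
# decided by resolution (enough accumulated energy), by oversight (no problem
# ever attached to it), or by flight (its problems moved on before a decision).
import random
from collections import Counter

random.seed(1)
choices, problems, outcomes = [], [], []

for t in range(200):
    # independent arrival streams
    if random.random() < 0.3:
        choices.append({"energy": 0.0, "attached": [], "ever_had_problems": False})
    if random.random() < 0.5:
        problems.append({"difficulty": random.uniform(1.0, 3.0)})

    # unresolved problems all re-attach to the newest open choice
    if choices:
        newest = choices[-1]
        for c in choices:
            c["attached"] = []
        newest["attached"] = list(problems)
        newest["ever_had_problems"] |= bool(problems)

    # participants contribute a trickle of energy to every open choice
    for c in choices:
        c["energy"] += random.uniform(0.0, 0.5)

    # make whatever decisions are possible this time step
    for c in list(choices):
        needed = sum(p["difficulty"] for p in c["attached"])
        if c["attached"] and c["energy"] >= needed:
            outcomes.append("resolution")
            for p in c["attached"]:
                problems.remove(p)
            choices.remove(c)
        elif not c["attached"]:
            outcomes.append("oversight" if not c["ever_had_problems"] else "flight")
            choices.remove(c)

print(Counter(outcomes))
```

Tweak the arrival rates and the energy trickle and you can shift which route dominates; the point is simply that choices get ‘made’ even when very little is actually resolved.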

“University decision making frequently does not resolve problems… Decisions whose interpretations continuously change during the process of resolution… Problems, choices, and decision makers arrange and rearrange themselves. In the course of these [re]arrangements the meaning of a choice can change several times… Problems are often solved, but rarely by the choice to which they are first attached. A choice that might, under some circumstances, be made with little effort becomes an arena for many problems… The matching of problems, choices, and decision makers is partly controlled by attributes of content, relevance and competence; but it is also quite sensitive to attributes of timing, the particular combinations of current garbage cans, and the overall load on the system.”

The 1972 paper has some interesting graphs. Above is Figure 5 from the paper, which looks at how hypothetical schools of different sizes and baseline resources might change the way decisions are made depending on whether times are plentiful or lean. The authors make some predictions based on their model. “As adversity continues… all schools, and particularly rich schools, will experience improvement [i.e. resort to ‘higher efficiency’] in their position… Presidents of such organizations might feel a sense of success in their efforts to tighten up the organization in response to resource contraction.” Interestingly, small selective liberal arts colleges with large endowments are in the ‘unsegmented’ decision structure, i.e., they remain less hierarchical than their counterparts, although as the famine approaches, they may cross the border into the ‘hierarchical’ decision structure.

The authors conclude: “It is clear that the garbage can process does not resolve problems well. But it does enable choices to be made and problems resolved, even when the organization is plagued with goal ambiguity and conflict, with poorly understood problems that wander in and out of the system, with a variable environment, and with decision makers who may have other things on their minds.” Sound familiar?


How has the Garbage Can model fared over time? Not very well, because of fundamental flaws described in a 2001 paper by Bendor, Moe and Shotts. (See abstract and citation in figure above.) The authors think this is unfortunate because the “[original] paper is brimming with provocative insights that offer a promising basis for theory, but thus far their potential has largely gone untapped.” Their review and critique is comprehensive; I recommend reading the paper in full. I will highlight two significant issues. First, there is a disconnect between the verbal form of the theory and its heavily constrained incarnation within the model. Everyone should remember the dictum: all models are wrong, but some are useful. But in this case, the conclusions drawn may well have exceeded what the model is able to do given its limitations and strictures. The verbal articulations do not match up well with the mathematical constraints and rules. Second, the model neglects the role of individuals and how they actually behave when problems arrive. Depending on their role in the organization and their relative expertise, there could be a number of specific feedback loops crucial to a realistic model. The independent stream assumption coupled with a linear temporal flow is unrealistic, possibly to the breaking point. However, the authors hope that the model will be revamped significantly so that it might actually prove insightful.

One example that tries to improve on the model is recasting it as agent-based. The authors, Fioretti and Lomi, also add several features to the model. (Abstract and citations shown below.) Flight (i.e., not resolving the problem) can now be postponement (per the original) but also passing-the-buck. I can personally attest that both happen very often at multiple administrative levels in the university. At first glance, this might seem to be a bad thing – but the simulation shows some interesting results.


The authors also compare and contrast organized anarchy with two different hierarchical setups. (In a hierarchy, “participants are only allowed to make decisions on choice opportunities that are equally or less important than their own hierarchical level.”) In the competent hierarchy, participants higher up in the hierarchy have the greatest ability to ‘solve problems’. In the incompetent hierarchy, the opposite is true: the higher-ups have the lowest ability to ‘solve problems’. Problems are narrowly defined to be technical, and solutions to problems also have narrow characteristics – read the paper in full for details. Again, there are some surprises. The incompetent hierarchy may actually outperform the other systems under some desired metrics/outcomes.

Fioretti and Lomi have worked on confirming the decision-making characteristics outlined by the original 1972 model. Three are highlighted by the authors. (I have modified their quotes slightly for clarity.)
· Decisions by oversight are much more common than decisions made by resolution, suggesting that the rational mode of decision-making is rare (< 20%). Most decisions are socially induced acts, made with the purpose of obtaining legitimacy by conforming to required rituals.
· In a hierarchy, top executives are busy gaining legitimacy for their organization by means of decisions by oversight, whereas the [lower-level] bottom line cares about solving [technical] problems.
· Organizations make themselves busy with a few problems that present themselves again and again. So participants have the impression of facing the same problems repeatedly.
Sound familiar?

There are several interesting conclusions (again, subject to the limitations of the extended model). Postponement and passing-the-buck can be “beneficial to an organization, since they avoid members wasting time on problems they cannot solve… they channel the most difficult problems to the best problem-solvers, creating opportunities for them to display their abilities.” Another surprising observation is that, contrary to common wisdom, it is NOT necessary that the most capable problem solvers sit at the top of the hierarchy. Instead, good “socializers” who can “obtain legitimacy” for the organization should be the high-level “managers”. That already happens in many cases, but it means that the different roles in a hierarchical structure simply emphasize different strengths. The disparity in compensation seems particularly outlandish in this light – unless one values the social aspect much more highly than the technical aspects. The authors carefully state that “incompetence at problem solving should not be confused with [overall] incompetence… [and] top decision-makers should be good at gaining legitimacy for their organization, which is possibly the kind of ability they should be selected for.”

While I think the garbage can model is still too crude to establish particular practices for organized anarchy and hierarchical systems, it did remind me that I should be a little more understanding when the system doesn’t “work” as efficiently as I would like – i.e., when the person I think should solve a problem postpones or passes-the-buck. Not because I think that’s the right response in the particular cases I have in mind, but because being in the midst of an organized anarchy leads to certain behavioral ‘ruts’ – explicable to some extent by systemic issues. The papers also made me think about how encountering different modeling approaches to complex systems, in areas vastly different from my own, has expanded my horizons. Organized anarchy seems oxymoronic, but it’s also a good description for thermodynamics. Are managers simply needed to control entropy?

Monday, December 4, 2017

Taking on Airs


Just two weeks of classes to go before final exams. Today I started on the topic of Gases in my general chemistry class. In most popular textbooks today, the relevant chapter shows up somewhere in the middle. Therefore, in our two-semester sequence, Gases occupies an uncomfortable spot. Since our department offers many sections of general chemistry (and labs), the instructors must come to an agreement on the topics to be covered. For the last five years, we’ve covered gases at the end of the first semester. For the five to seven years before that, we covered them in the second semester. And prior to that, they were back in the first semester. We’ve had an uncomfortable relationship with Gases, you might say.

Coincidentally, I have been working my way through Caesar’s Last Breath by Sam Kean. The book is subtitled Decoding the Secrets of the Air around us. The book flap teaser begins with intrigue. “It’s invisible. It’s ever-present. Without it, you would die in minutes. And it has an epic story to tell.” The intriguing title poses the question of whether you might be breathing in one of the molecules that Julius Caesar exhaled as he died. Of course, the same question could be asked about the first or last breath of anyone in history, but hey, it’s Julius. Thankfully the book spends little time on Julius, because there are many more interesting vignettes the author would like to tell us.

Having previously read Kean’s The Disappearing Spoon and found it not as engaging, I did not hold my breath when Caesar’s Last Breath was released. But after recently reading a few more glowing reviews, I decided to check out the book at my local library. The first two chapters were just okay, possibly because I knew many of the factoids and stories, and it felt like Kean was trying a little too hard to be cute and punny with his prose. At that point I almost stopped reading, but I’m glad I persevered. While Chapter 3 (“The Curse and Blessing of Oxygen”) trod familiar ground, the whimsical retelling and intertwining of the tales involving Priestley and Lavoisier started to engage me, even though I already knew many of them.

Section II of the book (“The Human Relationship with Air”), comprising chapters four through six, is where Kean’s writing shines. (I haven’t started Section III yet.) Chapter 4 opens with the mischievous and misanthropic Thomas Beddoes who would “develop a reputation as the queerest man in English science”. A physician-scientist, Beddoes was attempting to use gases as cures. During that time, most folks thought that disease originated from “bad air”, hence the “flocking to seaside resorts and mountain sanatoriums, places where they could breathe free and easy.” Beddoes tested a variety of gases on himself, but I did not know that he was the one who hired another eccentric, Humphry Davy, who enjoyed taking “hits” of different gases. Beddoes and Davy went on to promote nitrous oxide to get high. Their Pneumatic Institution was “a respectable clinic” by day, but “resembled an opium den, with [people] lounging around and huffing nitrous gas from green silk bags.” Davy, ever the experimentalist, apparently tested people’s sensory responses while they were high. (I shared this vignette in class today!) Davy went on to become a famed scientist and experimenter, but I had not known about his earlier life working with Beddoes until reading Kean’s book.

Chapter 5 (“Controlled Chaos”) has several interesting tales. I learned about James Watt joining the Lunar Society, Birmingham’s famed intellectual club that “met one evening per month for raucous discussions of literature and philosophy, always gathering on the Monday nearest the full moon. Basing your meetings on the phases of the moon seems charming today, if not downright mystical, but the schedule actually had a prosaic explanation. Members needed moonlight to find their way home afterward.” You could call them Lunatics; there were certainly some strange characters in that group. To help expand the marketing of his steam engines, Watt coined the term horsepower as a standard of comparison, something his customers would easily understand. He “envisioned [the engines] as universal sources of energy – machines capable of powering any mechanical process… Watt dreamed of building the steam equivalent of computers, machines versatile enough to work in any industry.”

The science of thermodynamics grew from Watt’s steam engine, and eventually the horsepower was fittingly replaced by the watt as the unit of comparison. I particularly enjoyed this section because I have been thinking about better ways to connect material from the first semester to the second semester. Students seem to “forget” much of what they learned over winter break. The first topic of second semester general chemistry is thermochemistry, the introductory chapter to thermodynamics. I have often used animated machine simulations to get my students thinking about energy conversion on their first day back. But Kean’s chapter gave me more concrete ideas of how to connect the two pieces, and in fact I should start scaffolding the material now as we go through gases these last couple of weeks.

What got me thinking even more about tying different material together is Chapter 6 (“Into the Blue”). It begins with two sets of brothers, the Montgolfiers and the Roberts, competing for the honor of flying the first humans in balloons. Yes, it’s a hot-air story involving many gaseous protagonists. There are many chaotic incidents, which is very fitting, as the word ‘gas’ is derived from ‘chaos’. But then the chapter moves to discussing the exploits of Rayleigh and Ramsay. Kean weaves a story of noble gas discovery with spectroscopy, the periodic table, density measurements, and why the sky is blue. Discovering that the noble gases existed as single atoms rather than as two-atom molecules was one of the surprises. Kean ties together multiple topics that I would cover in the first semester alone. After reading the chapter I feel sufficiently motivated for yet another makeover of my class next fall semester. I’m looking forward to planning this!

Friday, December 1, 2017

Laziness and Suboptimal Learning


I do not practice what I preach. Well, maybe I don’t preach. But I do teach. And I do profess. Does profess + teach = preach?

You would think that after all my learning about learning (evidenced by many blog posts), I would have learned to apply it to myself. But no! I still personally persist in sub-optimal learning because I am lazy. Admitting one’s sin is the first step towards repentance and learning, no? Now I’m preaching.

I do not practice what I preach. This epiphany revealed itself two days ago in a typical conversation with one of my students – and yes, this is the unfortunately common before-exam conversation about study habits. This student happens to be quite conscientious, and mostly applies good (or relatively optimal) studying strategies. She’s worked lots of problems (a good thing in chemistry), but is still nervous about the exam and wants to know what else she can do to ‘solidify’ the material. We discuss several other strategies she can use. My course website also has a chunk of material devoted to “How to Study for this class [General Chemistry]” with pointers that many students read and forget.

This made me think about suboptimal strategies that students use, such as massed re-reading or gravitating towards solving easier rather than harder problems. (For a summary of both effective and ineffective techniques, see here.) Students who try to go the easy (or lazy) route find out they haven’t learned very much come exam time. And after not doing well, I often hear the phrase “I’m just bad at chemistry”. For many years, thinking of myself as a ‘math-science person’, I bought into a related false dichotomy. My version: “I’m just bad at learning languages.” Reality: I was lazy and not willing to put in the optimal work. While I have been multilingual from a young age, I hadn’t made any attempt to learn a new language since probably middle school. Until recently.

Here is my uplifting tale. Several years ago I had a cool opportunity to help start a new higher education institute (HEI) in a different country. My department and college were very supportive, allowing me to take a leave of absence, and we made a big move across an ocean to a different continent. While English was mainly used at the HEI, there were other local languages. My goal was to learn the next most widely used language to interact with locals who were less comfortable using English. With some intensive Rosetta Stone, a notepad, and a scheme of revising older material in a semi-organized way (‘interleaved practice’ for those who know the jargon), I started language learning a few months before the move. Upon arrival, I kept up my study (30 minutes a day at least) but I also watched soap operas on TV (the nightly news was much harder to follow) and made attempts to read signage and occasionally use simple phrases. While I did not achieve fluency, I was able to communicate with younger kids (simpler vocabulary, slower speech) and understand parents talking to their kids.

But now it’s time for my sub-optimal tale. I return to the U.S. disabused of the false dichotomy since I made decent progress learning a language at, ahem, a more ‘advanced’ age. Enthusiastically, I decide I should learn another language now that I’m back in the U.S. After English, what’s the most spoken language in the U.S.? That’s right, it’s Spanish. Voy a estudiar espanol (with tilde above the n). I go back to what worked the last time around, Rosetta Stone. But I don’t keep a notebook, nor do I devise my own system of interleaved practice. Also, since my daily life doesn’t require any knowledge or use of the language, I hardly practice. We don’t own a TV, and I hardly watch any Internet videos. (Shocking isn’t it? That’s because as a computational chemist and a professor, I spend most of my workday in front of a computer. So I minimize using the computer at home.)

After a while, I started to slack off, even with Rosetta Stone, although I eventually managed to get through all five levels. Some days I’m motivated to learn, other days I’m not. I feel guilty about this occasionally, and I recognize that the lack of practice and consistency (not to mention motivation) means that I’m not achieving fluency. After a while I tried Pimsleur (working my way through scores of CDs) and supplemented with some books and DVDs. But even with the books, I just read through the exercises without exercising too much effort on my part. In short, I’m lazy. I now know that I am able to learn, but I’m not motivated to put in the effort. I go through the motions in the hope that doing so will magically enable me to achieve fluency. I know this doesn’t work, but I still persist anyway. That’s the epiphany – I’m doing the same thing as some of my students! And I expect my students to learn chemistry? (In my opinion, chemistry is much more challenging than Spanish thanks to Johnstone’s Triangle.) It’s a lazy persistence. That sounds oxymoronic, but I think it best describes what’s happening.

Now, my students have more motivation to learn chemistry than I have to learn Spanish. Whose fault is that? Mine and mine. Let me explain. Chemistry is a gatekeeper class for a variety of science majors, or perhaps simply a way to graduate (if chosen to fulfill the ‘science requirement’). And the pre-med students are all hoping to get A’s in chemistry. This is not easy. (The average grade in my classes is typically in the C+ range.) There are exams. Students are at least somewhat motivated to study so they can earn a ‘good’ grade and move on. I’m the gatekeeper, the setter of exams, and therefore function (for the most part) as an extrinsic motivator. Why am I less motivated to learn Spanish? I don’t have an extrinsic motivator. And apparently my intrinsic motivator is not strong enough.

My epiphanic solution is that I need to plan some sort of extended trip (not just a few days) in a mainly Spanish-speaking country. I’m uncomfortable living in a place where I’m not fluent in the language. (Short visits are fine.) Perhaps that will help me get over my ‘activation barrier’ of laziness and spring me out of my sub-optimal learning approach. What have I learned in all this? I have a better appreciation for how challenging it is for students to take upon themselves the practice of better and more optimal strategies. The knowledge of such strategies is not enough. (I know all these strategies but still do not use them for my Spanish.) Somehow I need to help them over the barrier with chemistry. In any case, I’m thankful to my student who sparked my epiphany. Sometimes in those conversations with your students, you learn more about yourself. Now, para mi, I just need to practice what I preach. Or teach. Or profess. And not be so lazy.