Saturday, October 31, 2015

Ghosts!


Happy Halloween! This marks one year of Potions for Muggles. I just re-read my very first post from a year ago, where apparently I wrote about alchemy and teaching. Coincidentally, I’ve just started reading The Chemistry of Alchemy, written by the chemistry professors who also wrote The Joy of Chemistry. I’ve only read the first two chapters, so I haven’t formed a solid opinion on the book yet. Each short chapter comes with Do-It-Yourself instructions for the reader interested in recreating alchemical reactions. Safety information is included and emphasized. I think I’ll stick to watching videos of others attempting these. Let’s just say the smells are less than desirable. In fact, the subtitle of the book is “From Dragon’s Blood to Donkey Dung, How Chemistry Was Forged”. I’ll stay away from donkey dung, thank you very much.

In the first Harry Potter book, the discovery of the 12 uses of dragon’s blood is attributed to Dumbledore. Harry first learns this from a Chocolate Frog card. He also learns about Nicolas Flamel and alchemy from that same card. When Harry arrives at his new school, Hogwarts, he also encounters ghosts for the first time – the subject of today’s Halloween-appropriate post.

What are ghosts? According to Severus Snape, the potions master at Hogwarts, a ghost “is the imprint of a departed soul left upon the earth”. Sir Nicholas, the ghost of Gryffindor House, elaborates: “Wizards can leave an imprint of themselves upon the earth, to walk palely where their living selves once trod.” Ghosts show very little interaction with physical matter familiar to us. They pass smoothly through solid objects and do not seem influenced by gravity. Perhaps they are made out of neutrinos, sometimes referred to as ghost particles. Maybe magic can interact with neutrinos, localizing them as an “imprint” when a wizard “dies” but does not move on to what Dumbledore calls “the next great adventure”.

Although technically not alive, ghosts certainly seem to be sentient. Since I’ve been pondering artificial intelligence (A.I.) and life in recent posts, perhaps a comparison between ghosts and A.I.s is appropriate. A sufficiently advanced A.I. would appear sentient to humans in its interactions with us. One could in fact create a modern ghost using a holographic projection, animated by a computer, that looks ghost-like. However, for the ghost to verbally interact with humans, it would require physical (i.e., non-ghost-like) sensors for listening and a speaking device. Not to mention the physical hardware needed to project the holograms.

How could ghosts pass through solid opaque objects within a room? If electromagnetic radiation were the source of the hologram (rather than neutrinos, which we don’t quite know how to manipulate), a room equipped with a sufficient number of sensors and projectors could simulate this. However, my money would be on a virtual reality (VR) system to more effectively simulate not just ghosts, but any other magical interactions. Come to think of it, maybe I should put some actual investment money into emerging VR technologies. The VR option would be much cheaper than outfitting a castle in Poland with some seriously souped-up technology. But perhaps the physical human experience is key – at least now, while the technology is still somewhat primitive. If you could get to the level of the Matrix, you’d have serious competition – so serious you might not be able to tell the difference.

Would Hufflepuff Hippo become a ghost post-mortem? I don’t know. The next great adventure sounds appealing provided I had “a well-organized mind” (according to Dumbledore). Is my mind sufficiently well-organized? I have no idea how this is determined. However if Hufflepuff Hippo were to become ghostly, the resident ghost of Hufflepuff House, the Fat Friar, seems like a jolly example. Not much is known about the Fat Friar in the Harry Potter books – much more is revealed in Pottermore, but I don’t read it. The books depict the Fat Friar as sociable, welcoming, forgiving, enjoying food and drink (at least in his former life), and excited about meeting new students. I’m guessing there was some religious component to the Friar’s life, hence he would have been “an educated person” in the Middle Ages – he might even have been an alchemist. Or a very good cook. Perhaps a fat fryer.

Happy Halloween! Here’s to another year of Potions for Muggles, if I can keep up the discipline of writing.

Sunday, October 25, 2015

Random Thoughts on Machines and Life


After reading Jerry Kaplan’s book (reviewed in my previous post), I dived into The Second Machine Age by Erik Brynjolfsson and Andrew McAfee. While it shares some of the same concerns as Kaplan’s book, particularly the problem of the widening spread between haves and have-nots, this book is significantly more optimistic. The book is subtitled “Work, Progress, and Prosperity in a Time of Brilliant Technologies”. Their conclusion is that humans can find their niche in a Race With the Machine, as opposed to a Race Against the Machine, the title of their earlier book. The niche is in ideation and novelty: generating new good ideas and finding novel recombinations of existing ideas while leveraging technology. The last few chapters of the book also include suggestions for individuals, corporations and governments.

The problem of spread may be at a critical juncture. The authors build up the case starting with Moore’s Law, followed by mass digitization and the exponentially widening payout gap between the number-one product and the second-best, leading to an economic stratification of superstars and everyone else. Instead of something close to a normal (or Gaussian) distribution, where the mean and median are nearly the same, the exponential speed of technology has moved us into a power law distribution. In a power law distribution the median is much lower than the mean, so the majority will have (far) below-average wealth.
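To make the mean–median gap concrete, here’s a quick toy sketch of my own (not from the book), comparing a normal distribution to a heavy-tailed Pareto (power law) distribution. The parameter choices are arbitrary, purely for illustration:

```python
import random
import statistics

# In a normal distribution, mean and median nearly coincide and roughly
# half the population sits below the mean. In a heavy-tailed Pareto
# distribution, a few huge values drag the mean far above the median,
# so the large majority ends up "below average".
random.seed(42)

normal_sample = [random.gauss(100, 15) for _ in range(100_000)]
pareto_sample = [random.paretovariate(1.5) for _ in range(100_000)]

def summarize(name, sample):
    mean = statistics.fmean(sample)
    median = statistics.median(sample)
    below = sum(x < mean for x in sample) / len(sample)
    print(f"{name}: mean={mean:.2f}, median={median:.2f}, "
          f"fraction below the mean={below:.2f}")

summarize("normal", normal_sample)
summarize("pareto", pareto_sample)
```

Run it and the Pareto sample shows a median well under the mean, with far more than half of the “population” below average – exactly the superstars-versus-everyone-else picture the authors describe.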

In their policy recommendations, they argue that a strong education program for all – one that keeps up with technology – is needed to keep economic inequality at bay. A study is cited in which improved test scores (using PISA, for example) are strongly correlated with economic growth. Technology seems to be the savior in this regard, and the usual arguments for how technology-enhanced education will be a positive feedback loop towards superior learning are presented without much detailed analysis. The authors call for a “Grand Bargain” – higher teacher salaries paired with more accountability. I’m all for higher teacher salaries, but I think many of the present efforts at accountability are wrong-headed. Here’s an optimistic summary from the authors.

“We have little doubt that improving education will boost the bounty by providing more of the complementary skills our economy needs to make effective use of new technologies. We’re also hopeful that it can help reduce the spread, especially insofar as it’s caused by skill-biased technical change. That’s largely a matter of supply and demand. Reducing the supply of unskilled workers will relieve some of the downward pressure on their wages, while increasing the supply of educated workers diminishes the shortages in those areas. We also think creativity can be fostered by the right educational settings, boosting the prospects not only of the students but also society as a whole.”

I’m in agreement with the last sentence, and I’ve been working on coming up with some creative principles to foster, not just in my classes, but perhaps ones that change the culture around me too! That’s a subject for another post. Today I’d like to ponder something a little crazier.

Is our system of networked computers “alive”? I suppose it depends on how you define life. If asked to define life, most folks will think about some of the characteristics of extant life from a biological point of view. Wikipedia has a nice summary of these seven characteristics. However, there are difficulties with such definitions, as there always seem to be odd exceptions to the rule that pop up. For example a mule is sterile and unable to reproduce but we would certainly consider it to be alive. Are viruses alive? More generally, are parasites alive?

The ability to adapt and evolve seems to be important, as is a metabolic system of some sort that transduces energy. From a thermodynamic point of view, perhaps life is a system that more efficiently disperses energy over both space and time. Certainly one could connect the dots that early cellular evolution was concerned with improving energy transduction. The better an organism was at collecting and using energy, the better its odds of survival in a changing environment as food or energy sources changed. One could even paint a narrative connecting the rise of multi-cellularity to climax ecosystems as a story of energy use and dispersal. Human beings might be the ultimate organisms to transduce energy, with transformative leaps as the steam engine is invented or electricity is harnessed. Computers were once humans. Now they are machines, networked together for increasing interconnectivity, transducing more energy in the process. If James Lovelock’s Gaia is a living organismic system, perhaps so are our networked computational clouds.

Sure, we could pull the plug and stop the electrical juice from powering our computers, and they would be “dead”. But if the sun went dark, so would we and most other living things on earth, unless we could find a different source. Perhaps we’d have sufficient technology to form the underground city of Zion (from the Matrix) and make use of geothermal energy – although we would be much reduced in numbers and would need to invent machines to effectively transduce this energy source for our use to survive. Are all computer programs like viruses? When shut down, they go dormant. But when turned on and connected, behold, they are alive! We have backup systems that copy our files just in case our computer “dies”.

Perhaps “artificial life” isn’t so artificial after all. Maybe we could distinguish inorganic and organic life, just like the chemists of old distinguished organic and inorganic chemistry. But we chemists know that there is no hard and fast distinction, and in fact much of the interesting chemistry takes place at the edges of these two fields. As we become cyborgian, tethered to the computing systems that have become a natural extension of ourselves, perhaps that’s where all the action is. Certainly that’s where venture capitalists are putting their money and where we have seen the most gains in economic wealth, for good or ill. Here we are in the Second Machine Age. Will we Race against, Rage against, Race with, or become a joined Race with the machine?

Friday, October 23, 2015

How to Innovate Your Career into Obsolescence: A.I.


Imagine an infinitely patient teacher: available 24-7, able to access vast informational resources in the blink of an eye, versed in the science of learning, and able to adapt course materials to match your learning speed and style. Sounds like the dream of every parent. The learner might even find it fun, because the coursework is designed to put the learner in the Zen state of “flow” even for the most challenging material. Sounds like the dream of every student. Personalized curricula will match your interests and skills, but embedded in your learning program will be the skills to think critically and creatively in a broad way as you adapt to a modern, ever-changing cosmopolitan society. Sounds like the dream slogan of every politician. Or university president. Or technocrat. Or pundit.

This sounds like an impossible scenario, but our world might be inexorably moving in that direction. At least for those who have the good fortune of such an education. As we transition into an information-rich age with ever more powerful processing machines, the landscape of education is changing rapidly. Both my parents were schoolteachers and I don’t think the computer factored into any of their lesson plans. I started out as a whiteboard-only teacher (or blackboard, depending on the age of the classroom) with my lecture notes written out, and revised, by hand. I adapted quickly to the overhead projector (to show useful figures in black and white) and replaced it with PowerPoint slides in color. Textbook companies kept up with the changing times by providing transparencies and slides of their figures. The changes seemed gradual.

Learning Management Systems came along. I resisted them for a number of years, and now I use them in some of my classes. Then online homework bundled with the textbook helped ease the burden of grading, and allowed the students to get immediate feedback with a primitive system that could give hints or nudge them to correct an error. Now these systems (such as ALEKS and Knewton) are built towards adaptive learning, personalizing the lesson plan according to the learner’s base. Control of the curriculum has moved away from the instructor to the A.I. The questions and exercises have also increased in sophistication, as have the feedback systems. As the number of users increases, data analytics can be increasingly leveraged. Adjustments can be made in real-time and propagated in the blink of an eye. All these changes have come rapidly in the span of the last several years, and the technology is only getting better. (Here’s a link to an earlier post on technology trajectories in higher education.)

We are just starting to see the unbundling of higher education as online courses, coding boot camps, digital badges and certifications from a variety of providers jostle for your time and money (promising you future time and money). What will be the fate of our “traditional” institutes of higher education? Will they adapt or die? If all we are offering is content (i.e., the one-way transmission one sees in a stereotypical traditional science course, particularly at the introductory levels), this can be outsourced to an appropriate A.I. system, which might do a better job as a teacher than the human instructor. Even classes that are discussion-based will evolve to increase the role of the A.I. and data analytics, and decrease the role of the instructor. We teachers will be introduced to these as enhancements to our teaching, taking away some of the less enjoyable repetitive tasks so we can “concentrate on the essential teaching tasks”. These “essential” tasks are ones that the A.I. isn’t so good at now, but it will get better as it monitors how teachers manage discussions online. The day may come when we will increasingly become assembly line workers as the “best” curricula are consolidated through much pruning. Sure, there will probably remain a place for a small number of human designers to keep these up-to-date, but they will be a small minority.

If you don’t think that A.I. systems will be sufficiently sophisticated, I highly recommend Jerry Kaplan’s book Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. His involvement with the Stanford A.I. Lab, and with startups in Silicon Valley that used A.I. methods in designing and developing new technologies, gives him a bird’s-eye view of the progress over the years. While the title of his book and some of its content seem bleak, his outlook is provisionally optimistic. He thinks we still have a window of opportunity to “get things right” at this developmental stage, particularly to address the looming spectre of income inequality that is likely to develop given the present course. Governments, or perhaps agreements among large multi-national corporations, might facilitate such measures.

Some highlights of the book (for me) involved peeking behind the curtain of companies that make use of A.I. The power of A.I. was rather eye-opening in two areas: high-frequency trading and digital marketing. Unbeknownst to me, there is a furious war raging in the digital world of bits and bytes, where computer programs jostle for supremacy. Kaplan asks: “So what’s the root cause of all this electronic pandemonium – computer programs fighting each other over the opportunity to game our financial systems or influence our consumer behavior? Can’t synthetic intellects just play nice, like decent civilized people?” If I were to imagine this anthropomorphically, it would be like a scene from the Matrix movies where programs do battle with each other in hand-to-hand combat, with any weapons they can access.

Kaplan responds: “The answer is surprisingly simple. These systems are designed to achieve singular goals, without awareness of or concern for any side effects… there’s no incentive for combatants in these new electronic coliseums to show any mercy to each other, or to pay anything more than the bare minimum they must to get what they want.” He goes on to provide a series of examples, from what may seem benign to behaviors that we humans might find morally repugnant. The way A.I. learns adaptively does not follow the same process as humans, and it could optimize differently in a given situation.

Thanks to sci-fi movies, I had thought that the rise of A.I. would be Terminator-like, with an epic battle between humans and machines. Kaplan thinks this is unlikely, and that the “takeover”, if there is one, will be much more subtle. The “embodied” A.I., usually in some partly anthropomorphic robot entity, may resonate with us viewers in the entertainment world. However, this is not an efficient way of deploying an A.I. Kaplan argues that robots would work much better (and more cheaply) as a distributed network of sensors and tools. He gives a great example of a robotic housepainter.

“It’s easy to imagine a humanoid form climbing ladders and swinging a brush alongside its mortal coworkers. But it’s more likely to appear (for instance) as a squadron of flying drones, each outfitted with a spray nozzle and trailing a bag of paint. The drones maintain a precise distance from each other and the wood siding of your Colonial, instantly adjusting for wind gusts and other factors. As they individually run low on supplies, they fly over to a paint barrel to automatically refill and recharge, then return to the most useful position. A series of cameras sprinkled around the perimeter of the project continuously monitors this flying menagerie and assesses the quality of the job. The actual device directing this mechanical ballet needn’t even be present. It can be what’s called software-as-a-service (SAAS) rented by the manufacturer and running on the Amazon cloud. Why bother to put all that computing power out in the field where it may get rained on and be used only a few hours a week?”

Kaplan has plenty more examples, and if you like this excerpt of his writing style, I recommend you read his book. In the meantime, I’m wondering if I should think twice about delving further into the intersection of education and technology. While I’m excited about designing top-quality technology-enhanced curricula with adaptive capability to maximize learning, where the teacher becomes more of a guide-by-the-side coach, at some point I might be innovating myself out of a job. I knew I wanted to be a teacher when I was young, and honestly I thought that if I was good at it, I would have job security. We’ll always need teachers, won’t we? Actually I’ll probably be able to retire before such innovations completely change the playing field, but maybe I should think carefully about the human relational element and its distinct role in the learning process.

Saturday, October 17, 2015

Neutrino Hunters


I was oddly unexcited about the Chemistry Nobel prize announcement this year. While DNA repair mechanisms are clearly important, it just didn’t excite me. I did show a slide summarizing the announcement to my classes as soon as I heard the news, but it felt rather “blah” for lack of a better word.

On the other hand, the Physics Nobel prize announcement the previous day got me all excited! I read the press release and a summary of the work aimed at the general public. (I also attempted to read the scientific report but had trouble following it, as I did not have the requisite background in particle physics.) I did not know much about neutrinos, other than their famous introduction by Pauli back in 1930 and their involvement in beta-decay, first postulated by Fermi. I had some vague memory of hearing about early experimental detections being awarded the Nobel Prize, and I had heard about facilities such as Kamiokande involved in neutrino detection.

My excitement led me to borrow Ray Jayawardhana’s book Neutrino Hunters, published just a couple of years ago. I remember reading some great reviews of the book but never got around to reading it. The book is subtitled “the thrilling chase for a ghostly particle to unlock the secrets of the universe”. I concur. The book was very engaging; I had trouble putting it down! It is aimed at the non-physicist, but the physics is so well communicated that I feel I learned a little bit of particle physics and experimental methods from the book. That’s not the main point, though. The best part is how the author weaves a thrilling narrative that jumps between the present and the past. The history of great discoveries in physics is conveyed with excitement, and the colorful scientists are presented quirks and all. While some of these were familiar to me, I had not heard of Bruno Pontecorvo, nor was I familiar with the work by Ray Davis and John Bahcall. I think I also gained a better understanding of the work awarded this year’s physics Nobel Prize, and I think it is very well-deserved. I wonder if Jayawardhana’s book had anything to do with it. He certainly did a superb job popularizing the work.

While there was much in the book that caught my attention, let me just point out two things I really enjoyed learning. First was the careful experimental work by Ray Davis, complemented by equally careful theoretical work by Bahcall, leading to their proposal of the Homestake mine experiment. The thoughtfulness of both scientists was amazing – in particular, Bahcall’s patience in holding to the soundness of his theoretical calculations while much of the physics community thought there was something wrong with his work. It was really satisfying to read about his vindication in the end.

The other thing that really jumped out at me came from the final chapter of the book, where the author looks forward to applications of further neutrino research. One of these was designing detectors that a body such as the IAEA could use to figure out if a facility was stockpiling material to produce a nuclear warhead. Since neutrinos pass easily through matter, they would be difficult to hide even in an underground bunker. Of course, the same property makes their detection difficult, but it sounds like solid progress is being made in this area.

Who would have thought neutrinos would be so interesting? I think I will include them next year in my chemistry classes. By the time the Nobel announcement came out I had finished “covering” nuclear chemistry and I usually don’t talk about neutrinos when we discuss beta-decay. I think I’ll cover it next time; it’s quite the thrilling story and I think there are some scientific inquiry lessons to be learned.

Bottom line: I recommend Neutrino Hunters. You won’t be disappointed!

Monday, October 12, 2015

What if?


Serious scientific answers to absurd hypothetical questions – this is the premise of “what if?”, authored by Randall Munroe, creator of xkcd comics. A colleague once remarked that he could probably teach an entire course using xkcd comics. This book, in particular, is a good source of quantitative-reasoning Fermi problems – ones that are absurd, of course; otherwise where’s the fun? I’m having a blast reading through the humorous explanations accompanied by xkcd illustrations.

My favorite question in the book that Randall attempts to answer is “How quickly would the oceans drain if a circular portal of 10 meters in radius leading into space were created at the bottom of Challenger Deep, the deepest spot in the ocean? How would the Earth change as the water was being drained?” (submitted by Ted M.) First, Randall locates the other end of the portal far away from Earth; otherwise “the ocean would just fall back down into the atmosphere [while wreaking] all kinds of havoc with our climate”. The more interesting part is his series of world maps showing the landmasses and the sea as the ocean levels drop. A 50-meter drop doesn’t show that much change (at least on a global scale). A 250-meter drop, though, starts to reveal some strange features. A bunch of islands start to appear, and Indonesia is a big blob. New Zealand grows dramatically at the 1-km mark. What’s interesting is that after about 3 km, most of the major sea bodies would be disconnected and the draining would stop. You get the following picture below. (I’ve skipped the intermediate pictures, which you can find in the book or possibly on the web by searching for “Drain the Oceans”.)


As a chemist who ponders the origin of life, I thought about my own “what if?” question. Here it goes: what if the elements in the Li–Ne Line (pun intended) of the Periodic Table disappeared? Since the molecules of life are mainly built of carbon, hydrogen, oxygen and nitrogen (abbreviated as CHON), what would happen if the entire row Li, Be, B, C, N, O, F, Ne did not exist? (Maybe there was some absurd mechanism that caused all their isotopes to be unstable.)

This of course raises the question as to why life makes use of C, N, O in particular. So let’s try to get a handle on this. (Disclaimer: I have no good xkcd-style comics to accompany my explanations because I’m simply too lazy.) Let’s first consider the elemental abundances in our solar system (see Wikipedia figure below). Note that the vertical axis is a log scale, i.e., each unit is an order of magnitude (e.g., a value of 6 represents a quantity 10 times larger than a value of 5).
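Since log axes trip people up, here’s a two-line illustration of the arithmetic. The log abundances below are hypothetical placeholder numbers, not values read off the actual figure:

```python
# Reading a log-scale axis: a difference of 1 unit means a factor of 10.
# These log10 abundances are made up for illustration only.
log_abundance = {"C": 7.4, "N": 6.9}

# A gap of 0.5 log units corresponds to a factor of 10**0.5, about 3.2x.
ratio = 10 ** (log_abundance["C"] - log_abundance["N"])
print(f"In this toy example, C is {ratio:.1f} times as abundant as N")
```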


While H and He are the most abundant, you can see a fair amount of C, N, O. Hence it is perhaps not surprising that most of the molecules you come across are “organic” molecules that contain CHON. Actually there’s probably a slightly higher diversity of CHO compounds, partly due to the lower abundance of N. There’s a second reason – the triple covalent bond that holds the two N atoms in N2 is very strong. One of the most important biological innovations was to “fix” nitrogen by breaking this strong bond and converting it to compounds such as ammonia (NH3). There is an interesting biochemical evolution story here but I’m going to pass over it.

Let’s take a closer look at the bond energies of other such covalent bonds. A typical table (search “Bond Energy” on Google images) is the one shown below.


As you can see, most of the bonds shown are single bonds (one line between the atoms) and there are not many multiple bonds shown. The multiple bonds that are shown involve C, N or O as at least one of the partners. There’s a reason for this. Atoms that are smaller in size can get closer to each other to form that second (or third) bond. For example, the double bonds C=C, N=N and O=O have strengths of 620, 420 and 495 kJ/mol respectively. They are also quite a bit stronger than the single bonds C–C, N–N and O–O, with strengths of 345, 160 and 145 kJ/mol respectively. (The N–N and O–O single bonds are anomalously weak for reasons I won’t get into.) On the other hand, you don’t see the analogous Si=Si, P=P, S=S bonds listed. That’s because they are just a tad stronger than their single bonds, and therefore if you’ve got a lot of hydrogen hanging around (it being the most abundant element), it’s energetically much more favorable to have single bonds all around (the Si–H, P–H and S–H bonds are decently strong too).
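Using the bond energies quoted above, a quick back-of-the-envelope sketch shows how much the second bond is “worth” for each element – the point being that for C, N and O the increment is comparable to a whole single bond, which is why double bonds pay off for these small atoms:

```python
# Bond energies quoted in the text (kJ/mol): single bonds between like
# atoms, and the corresponding double bonds.
single = {"C": 345, "N": 160, "O": 145}   # C–C, N–N, O–O
double = {"C": 620, "N": 420, "O": 495}   # C=C, N=N, O=O

# The "increment" is the extra energy the second bond contributes.
for elem in ("C", "N", "O"):
    increment = double[elem] - single[elem]
    print(f"{elem}={elem}: the second bond adds {increment} kJ/mol")
```

For Si, P and S the analogous increments are so small that, with abundant hydrogen around, single bonds all around win out – which is exactly why the table lists no Si=Si, P=P or S=S entries.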

The molecules of life have a large variety of single and double bonds, giving rise to a large diversity of structures. The strong C=O bond, in particular, is quite prevalent. It’s part of the reason why burning “organic molecules” as fuels provides energy when carbon dioxide and water are formed. Such molecular diversity cannot be found among Si, P and S (which I shall now refer to as SiPS).

There is another problem with silicon. Even though it can form four bonds just like carbon, it does not form compounds that are more stable than pure silicon and pure hydrogen. On the other hand, many hydrocarbons are more stable than pure carbon and pure hydrogen. For example, methane (CH4) has a “heat of formation” of –75 kJ/mol, i.e., it is more stable than graphite and molecular hydrogen by 75 kJ/mol, as indicated by the negative sign. Silane (SiH4), on the other hand, is 34 kJ/mol less stable than its pure elements in their standard states. Silane is also quite reactive – since Si–O and O–H bonds are much stronger, it is quite susceptible to oxidation.

Phosphine (PH3) is only marginally less stable than its elements (+5.5 kJ/mol), but it is susceptible to oxidation for similar reasons. Phosphorus is also much less abundant, as seen in the earlier chart. It is only when you get to H2S and HCl that you find relatively better stability than the elements, but it’s difficult to form a diversity of structures with S and Cl. Therefore, at least in our current atmosphere with plenty of oxygen, SiPS compounds will tend not to form. In fact, it is only when silicon and chlorine combine that you get compounds somewhat analogous to the hydrocarbons. For example, SiCl4, Si2Cl6 and Si3Cl8 are relatively stable. There was even a nice report in 2011 on forming Si5Cl12, analogous to neopentane, C5H12.
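The sign of the heat of formation does a lot of work in this argument, so here’s a tiny sketch collecting the three numbers quoted above (the values are the ones from the text; the sign convention is standard):

```python
# Standard heats of formation quoted in the text (kJ/mol).
# Negative = more stable than the pure elements in their standard states;
# positive = less stable.
heat_of_formation = {"CH4": -75.0, "SiH4": +34.0, "PH3": +5.5}

for compound, dHf in sorted(heat_of_formation.items(), key=lambda kv: kv[1]):
    verdict = "more stable" if dHf < 0 else "less stable"
    print(f"{compound}: {dHf:+.1f} kJ/mol -> {verdict} than its elements")
```

Only methane comes out on the stable side of the ledger, which is the crux of why carbon chemistry, and not silicon or phosphorus chemistry, dominates in a hydrogen-rich universe.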

But if there were no oxygen, perhaps you could get SiPS compounds to hang around. There would still be the problem of limited molecular diversity, because bonds from SiPS to hydrogen are strong relative to bonds within SiPS itself. I would expect molecules with single bonds to form, but there probably won’t be any double bonds.

As a computational chemist, I could calculate a suite of SiPS molecules and actually come up with a scenario of what sorts of molecules one might expect to observe if the Li–Ne Line indeed disappeared. This isn’t the sort of project that will attract grant funding, although it might be publishable if pitched properly to a journal that might be interested in such speculation and not deem it too absurd. I haven’t burned any computer time on the project, although I can outline the steps that I would take. Maybe one day I’ll have a student who is a fan of xkcd knocking on my door, who thinks it would be fun to do something so absurd without any potential “real world” payoff. Or maybe I’ll do it when I have a chunk of free time and don’t mind burning some computer time. I might just get a kick out of it.

Saturday, October 10, 2015

The Evolution of Universities


Over the past several weeks I finished reading two books discussing the evolution, or lack thereof, of colleges and universities in the United States. Both books were published in 2011, and although their central theses are very different and their views contrast in many respects, there are some points of convergence. The most obvious of these is the clarion call from both authors that if universities continue down the road they are headed, they will be in serious trouble.

One of these books is Abelard to Apple: The Fate of America’s Colleges and Universities, written by Richard DeMillo. The author has lived and worked both in the academic world, as a professor and an administrator, and in industry, as the chief technology officer of a major corporation. The main thesis of the book is that the majority of the two to three thousand institutions constituting America’s traditional colleges and universities are in trouble as the education “marketplace” has evolved. The “elites” may survive thanks to their large endowments and the brand-name recognition that people are willing to pay for. But new upstart educational approaches leveraging technology, he argues, are nipping at the heels of the “Universities in the Middle” – those that will go extinct if they do not find their value proposition. Here’s the opening of “The Laws of Innovation” – a chapter in the final section of his book.

“When it is written, the story of American colleges and universities in the twenty-first century will note that they became strong at a time when there were comparatively few choices in higher education. When faced with competition, some institutions reinvented themselves, but most of them clung to the belief that change, if it came at all, would be gradual. They seemed to be helpless bystanders as their value was quickly eroded by newer – often more agile – institutions. It is not a new story.”

“The pattern repeats throughout history: institutions that become inwardly focused, self-satisfied, and assured of their central role in society are easy prey for innovative experimenters who tap into the needs of students, places, and times. Universities that want to escape this fate have to understand the laws of innovation.”

“The forces shaping higher education – curriculum, a faculty centered culture, reliance on simple fixes, unexamined assumptions, and the inherent advantages of disruptors – are strong. There are incentives to solve big problems, but higher education is a massive system, and the ability of an individual institution to change is often masked by complexity. How many university presidents would turn their attention from solving immediate, near-term problems to charge into a battle where the stakes were high and the likelihood of prevailing depended on so many different factors? … Universities in the Middle that want to make it to the end of the twenty-first century should look again at the historical arc. They should take the long view.”

The author makes his long-view case by tracing the history of tertiary education starting with the twelfth-century monk Peter Abelard. The rise of universities across Europe, the role of religious institutions in shaping the early universities, the shifts in purpose wrought by the German approach, the changes in the nineteenth and twentieth centuries brought about by a confluence of politics, economics and technology – all these are woven into a narrative arc to show that the early twenty-first century is yet another crossroads. It is a story of evolution – and like its biological counterpart, those that adapt to the changing milieu may not just survive, but thrive. There are also examples both in the U.S. and abroad where large sweeping changes are being made – the author has interviewed a number of higher education leaders.

DeMillo’s background is in computer science, and he sees the exponential developments in technology and networking as a major disruptor that will change the higher education landscape significantly. He argues that the faculty-centered curriculum of the Middle needs to shift to a student-centered curriculum. I suspect that Benjamin Ginsberg, a professor of political science who spent many years at both Cornell and Johns Hopkins, would disagree. Ginsberg’s book, also published in 2011, The Fall of the Faculty, is subtitled “The Rise of the All-Administrative University and Why It Matters”. Ginsberg chronicles the rise of faculty governance (and the tenure process) through the early twentieth century and its current demise. He argues that the rise of university administrators and their “minions” (he calls them “deanlets”) has undermined what a liberal arts education should be, to the detriment of students, faculty, society – and everyone else except the administrators.

The book is peppered with invective, harsh words, and examples of the worst kind of administrative behavior and arrogance. The numbers he cites on administrative bloat (and he calls out institutions by name) are downright appalling, especially when compared with the concomitant increase in students and faculty. Ginsberg recites the administration’s arguments for why more staff and administrators need to be hired, and presents his counter-arguments. The bloating of administration is portrayed as almost virus-like – intent on perpetuating its own size and influence to the detriment of the educational enterprise. Here are a couple of paragraphs and sentences to give you the gist of his scathing comments from chapter two, “What Administrators Do”.

“The number of administrators and staffers on university campuses has increased so rapidly in recent years that often there is simply not enough work to keep them all busy… To fill their time, administrators engage in a number of make-work activities. They attend meetings, conferences, they organize and attend administrative and staff retreats, and they participate in the strategic planning processes that have become commonplace on many campuses… Most administrators and staffers attend several meetings every day… In any bureaucracy, a certain number of meetings to exchange information and plan future courses of action is unavoidable [though] many meetings seem to have little purpose [other than] reports from and plans for other meetings.”

“Whenever a school hires a new [senior administrator], his or her first priority is usually the crafting of a new strategic plan… The typical plan takes six months to two years to write… A variety of university constituencies are usually involved in the planning process [but] most of the work falls to senior administrators and their staffs as well as to outside consultants… A flurry of news releases and articles in college publications herald the new plan as a guide to an ever brighter future for the school… Strategic planning serves administrators’ interests as a substitute for action.”

“It would be incorrect to assert that strategic plans are never what they purport to be… Such a plan typically presents concrete objectives, a timetable for their realization, an outline of the tactics that will be employed, a precise assignment of staff responsibilities, and a budget… The documents promulgated by most colleges and universities, however, lack a number of these fundamental elements of planning. They tend to be vague and their means left undefined… These plans are, for the most part, simply expanded vision statements… What was important here was not the plan but the process… the new appointee asserted leadership, involved the campus community, and created an impression of feverish activity and forward movement. The ultimate plan, itself, was indistinguishable from dozens of other college plans and could have been scribbled on the back of an envelope or copied from some other school’s planning document.”

While Ginsberg acknowledges that faculty are partly responsible for the rise of the all-administrative university (by not fighting against it), by and large, he paints faculty as having the right approach and that things would be much better off with fewer administrators and with faculty members in charge of most things. DeMillo, on the other hand, argues that the faculty-centered approach (which was not always true in history) is starting to look outmoded. Ginsberg wants a return to this approach. DeMillo thinks it is not possible.

Where both might agree is that universities today are engaged in too much mission creep. Ginsberg would see this as fuel that helps administrators continue to concentrate power and hire more of themselves, plus more support staff to carry out these plans. He argues that many of these plans and initiatives should be axed. DeMillo argues that one of the problems of the Universities in the Middle, with fewer resources and serving different populations of students, is institutional “envy” of the “elites”. This has led to a proliferation of initiatives spread thin and demanding ever more resources while passing the costs to students and their families. In his view this is unsustainable: institutions need to carefully define their value proposition and narrow their focus to what’s important, or risk going the way of the dinosaurs.

I resonate with many of the arguments brought up by both DeMillo and Ginsberg in their rather different books (addressing different issues). That being said, I don’t agree with everything, and I do question some of the assumptions. Did these two books get me to think more carefully about the direction we are headed (for those of us who work in higher education)? A resounding yes! If either of these books has piqued your interest, I recommend reading them in full. I haven’t done justice to either of the authors through short quotes and quick summaries.

Monday, October 5, 2015

The Gut, Microbes and Prebiotics


I read two books last week (finishing both this weekend) that were very engaging. It was hard to put them down! Usually I try to read a chapter a day, but I was easily doing multiple chapters because I was hooked. I’ll write about the first book today since it is in keeping with my previous blog post on Making Visible the Invisible, except that instead of molecules it’s mainly about microbes.

The English translation of the book is titled “Gut: The Inside Story of the Body’s Most Underrated Organ”. The author, Giulia Enders, is a German student doing her Ph.D. in medical research. Giulia’s book came to be after she won a Science Slam (the video is fun and went viral!) based on her knowledge of gastroenterology. Giulia has a great sense of humor and it comes out in her writing. The book is accompanied by equally humorous illustrations with cartoons by her sister Jill.

There are so many interesting things I learned from the book. (If only more textbooks were like this!) Besides a journey through the digestive system culminating in the gut, a large portion of the book is dedicated to discussing the microbes in our digestive system. Who would have thought that gut microbiota could be so fascinating! While I had heard the statistic that the bacteria that live in us outnumber our cells by an order of magnitude, I did not realize how varied and interesting they are! They are also largely unique to the individual, so I expect we’ll soon have bacterial records, similar to DNA records, used to identify perpetrators of criminal activity. There’s also a fascinating section connecting the gut and the nervous system – those “gut feelings” you get might really originate in the gut. The gut might even be a second brain of sorts, at least the way Giulia describes it.

The last section of the book covers Antibiotics, Probiotics and Prebiotics. As someone with research interests in the chemistry of the origin of life, I use the word “prebiotic” in a different context. This is why I don’t tell people I study “prebiotic” chemistry, because then I would get asked all sorts of questions unrelated to my actual knowledge in chemistry. I did learn useful definitions for these three terms from the author. In particular, one can think of prebiotics as the nutrients needed to feed the “good” bacteria in one’s gut (i.e., the ones that don’t make us sick). Turns out that garlic and onions, two things that I love eating, are good prebiotics – as are several other vegetables that I also enjoy.

Some researchers in the origin-of-life or astrobiology fields look to extreme environments to examine how different “life” may have evolved possibly giving us clues as to what to look for when we send probes to other suitable planets or their moons. These unique “harsh” environments have a range of interesting microbes. But it turns out we don’t have to go very far to look for unique microbes – they’re living in our gut!

I highly recommend Giulia’s book – you’ll never look at your digestive system, gut, feces (yes, they are organized and classified!), microbes, or think about how what you eat is “processed” in the same way. And you’ll enjoy learning new things! What could be better?

Saturday, October 3, 2015

Making Visible the Invisible


[Disclaimer: All images were grabbed by doing an image search on Google.]

As I’ve been teaching introductory chemistry to two different groups of students (science majors and non-science majors), I’ve been pondering how chemists use different representations to explain tiny things that we cannot see. A chemistry demo in class gives the viewer a macroscopic observation of a chemical reaction – the louder and brighter, the better! But the whizz-bang of the demo is merely a prelude, at least in the mind of a chemistry teacher, to the microscopic (or perhaps more accurately nanoscopic) description that “explains” the observation.

The modern expensive chemistry textbook is fully illustrated with colored balls and sticks connected to each other in a sometimes intricate arrangement. “Atoms First” is the current fad in chemistry textbooks. What this means is that the atomistic or molecular view takes center stage in the early chapters. Older textbooks had fewer pictures and started with macroscopic observations of chemical reactions but then used strange symbols and equations to represent them.

As a quantum mechanic who spends time thinking about the nature of the chemical bond, I personally like having atoms and molecules be front and center. However, picking a suitable model of the atom to describe the invisible (to us) particles can be challenging. It is clear that the heavily mathematical quantum model that I teach in an upper division physical chemistry course is unsuitable at the introductory level. In the non-majors class, we use the Bohr/shell model, not just for the hydrogen atom (where it works marvelously) but for every other atom in the periodic table where the simplistic model is actually wrong. It is however very, very useful. Students can get a feel for general trends in the periodic table and an atomistic level description of chemistry just using the shell model.
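
To show just how marvelously the model works for hydrogen, here’s a quick sketch in Python (the function name is my own; the constant is the familiar 13.6 eV ionization energy of hydrogen):

```python
# Bohr model energy levels for the hydrogen atom: E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.6057  # hydrogen ionization energy, in electron-volts

def bohr_energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

for n in range(1, 4):
    print(f"n={n}: {bohr_energy(n):+.2f} eV")

# The n=2 -> n=1 (Lyman-alpha) transition emits an ultraviolet photon:
print(f"Lyman-alpha: {bohr_energy(2) - bohr_energy(1):.2f} eV")  # ~10.20 eV
```

These energies match hydrogen’s observed spectral lines beautifully – but add a second electron and the simple one-electron formula no longer applies, which is exactly why the shell model is “wrong but useful” for the rest of the periodic table.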

In science majors chemistry, we wade into atomic orbitals. This allows a finer grain description of atomic properties across the periodic table and a more detailed description of chemical bonds. The students are introduced to the “four quantum numbers” without much idea where they come from or why they are used. (An alternative approach that I have used ignores the numbers and makes use of photoelectron spectroscopy data.) We draw pictures of circles, dumbbells and cloverleafs alongside energy diagrams – and it’s a wonder that the students aren’t more confused as we throw a dizzying array of symbols and representations at them – all in an attempt to make visible the invisible.
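
As an aside, the counting rules hiding behind those four quantum numbers are simple enough to enumerate in a few lines (a sketch of my own for illustration): for each shell n, l runs from 0 to n-1, m_l from -l to +l, and m_s takes two values – which recovers the familiar shell capacities.

```python
# Counting electrons allowed in shell n from the quantum number rules:
#   l = 0..n-1;  m_l = -l..+l (2l+1 values);  m_s = +1/2 or -1/2.

def shell_capacity(n):
    """Number of electrons a shell with principal quantum number n can hold."""
    count = 0
    for l in range(n):                 # l = 0, 1, ..., n-1
        for m_l in range(-l, l + 1):   # 2l+1 orbitals for each l
            count += 2                 # two spin states per orbital
    return count

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```

The totals are just 2n², the numbers students usually memorize without seeing where they come from.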

Here’s an example. All my students recognize the symbolic formula for the water molecule, H2O. (This is thanks to commercial product advertising, much more than chemistry classes!) Even without my telling them, I can project the following “space-filling” picture and they can all automatically tell me that it is a molecule of H2O.


They have no problem recognizing the ball-and-stick model either. Do either of these pictures represent what a water molecule “truly” looks like? Why do we choose one representation over the other? (These are good questions to toss at students who often haven’t stopped to think about it.)

Here’s a shell model showing just the valence electrons.


Which can be “reduced” to the Lewis structure of the water molecule.


And after talking about Valence Shell Electron Pair Repulsion (VSEPR) Theory to predict molecular shape, we talk about “hybridization”. I’m pretty sure the students are rather clueless as to why we teach them this. (Some instructors may be clueless too.)

And if you were a Molecular Orbital aficionado you might show the students the following diagram from an Inorganic Chemistry textbook.

My students in both classes learn how to draw Lewis structures. In the non-majors class we talk about the principle of keeping electron clouds away from each other (explaining the difference between Pauli repulsion and electrostatic repulsion is not helpful to them so we don’t delve into it). I don’t use the term VSEPR theory since I want students to understand the concept and not try to memorize a fancy term.

In the majors introductory class, they do need to know the fancy term and we do discuss the qualitative difference between the aforementioned types of repulsion (although it’s unclear to me that all but the strongest students actually get the idea). Then I dutifully cover hybridization because we have many General Chemistry sections, and students are expected to have seen this before they get to Organic Chemistry. While I think I’ve managed to persuade most of my colleagues that d-orbitals hardly contribute to “hypervalent” molecules, it wasn’t until the textbooks (30 years late) started mentioning this in passing that I’ve seen instructors move away from invoking them. I do very little molecular orbital theory in my General Chemistry class even though it is “covered” in the textbook, although I do cover it in great detail when I teach upper division physical chemistry and/or inorganic chemistry.

We’ve got all these different representations to discuss the different properties of a single water molecule. But that’s not how any of us experiences water. We experience it in dollops of gazillions of molecules. Now clearly no one is going to draw a mole (6.022 x 10^23) of water molecules, the amount of water you might experience cupped in your hands.
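
The back-of-the-envelope arithmetic is worth making explicit (sketched in Python for fun, using the standard molar mass and density of water):

```python
# How much is a mole of water? Molar mass ~18.02 g/mol and density ~1.00 g/mL
# mean one mole occupies roughly 18 mL -- about a small handful.

AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.02   # grams per mole (2 x 1.008 + 15.999)
DENSITY_WATER = 1.00       # grams per milliliter, near room temperature

mass = MOLAR_MASS_WATER          # grams in one mole of water
volume = mass / DENSITY_WATER    # milliliters

print(f"1 mol of H2O: {mass:.1f} g, about {volume:.0f} mL")
print(f"...containing {AVOGADRO:.3e} molecules")
```

Eighteen milliliters holding six hundred sextillion molecules – no wonder we never try to draw them all.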


Instead, we have pictures like the one below to tell us about the wonders of intermolecular forces! All represented by just five molecules.

Where am I going with all this? I don’t really know. But writing about it has made me more attuned to the myriad ways that chemistry is represented. As an experienced practitioner, I see different symbols and my trained brain knows what information to extract from them, cutting straight to the salient points being illustrated. Students newly exposed to chemistry, on the other hand, do not have that advantage. I guess I’m trying to remind myself to be more judicious about the models that I use, to take wise advantage of the power of illustration, but to point out to my students why certain representations are being used – and not to assume that a picture is worth a thousand words.

Finally, I’m amazed by the human brain’s ability to conceptualize all these abstract models with the aid of such pictures and illustrations. This is how we see that which is unseen. Through Art!