Thursday, August 30, 2018

Pre-Semester Meetings Galore


It’s the week before classes, and oh so busy! This time around for me, it’s full of meetings.

Monday was my department retreat. It’s basically an all-day department meeting. Fortunately, my department is super-functional and we have strong positive camaraderie even when we don’t agree with each other on every issue. This means that we accomplish a lot during the retreat, but it’s still a very tiring day. Our retreat is off-campus, and we take a mid-day break for a low-key lunch while enjoying a view of the nearby bay! Coffee and other snacks help keep us powered throughout the day. Overall, we had good discussions and even took votes to move certain processes forward.

Tuesday was my catch-up day. I had drafted syllabi for the classes I’m teaching this coming semester. I was able to finalize these and then worked on constructing the homework/problem sets for the first week of class. I also have slides ready for the first day of class. There was also some administrative work since I’m Associate Chair this year. Questions need to be answered. Forms need to be signed. I don’t feel fully prepared for classes next week, but I’m in reasonably good shape. It’s a good thing I checked my classrooms beforehand. They were cold! I notified our building manager, who then contacted Facilities. I will check again tomorrow morning.

Wednesday and Thursday (today) were mainly spent training new students in undergraduate research. I have three new students starting in my group: two sophomores and a senior. For the sophomores, who are about to start organic chemistry, I’m teaching them a little on-the-fly. I’m also being light on the math since they won’t see physical chemistry until the following year. But that’s okay, they can start research, and I hope this has positive synergy with their classes! Last week I worked on updating the many tutorials and handouts that I use. Since I’ve upgraded my software and moved all my projects to a new high-performance computing cluster, there are new protocols.

I’ve also changed some of the training elements. Previously I just had students play around with a list of common commands in vi. This time around, I added a few exercises that mirror the common things they would need to do to edit their files. I think this helped when we started learning the computational software. Although the software vendor has some tutorials for how to use the graphical user interface, I made up a more focused cheat-sheet to get at some of the specific things my students would have to do. Building transition states for quantum chemical calculations can be tricky, so I tried to be more explicit about how to do this. My previous sessions took a day and a half. This time I asked the students to commit to two full days. The last half-day was devoted to starting their actual projects, after they had learned how to use the software by working through several prototypical examples. I think my changes helped the process, but after two full days I’m beat.

On Friday (tomorrow), I will be doing more catch-up in the morning (both administrative and class prep). The afternoon is blocked off for a training session that academic advisors must attend. Most years I teach a small general chemistry course in the fall semester in which all the students are newly matriculating first-years who are also my academic advisees. The college has now linked such classes into larger structures called living-learning communities (a current buzzword in higher education). Tomorrow afternoon we’ll have the usual informational sessions from the Dean’s office and Student Affairs. After that there will be “breakout” groups into our designated communities involving faculty, staff, and student leaders; the purpose of this is mostly brainstorming and event planning.

Then I have a long weekend (thanks to Labor Day on Monday in the U.S.). I meet my new class of advisees on Tuesday morning. Then, bright and early, on Wednesday morning, classes begin! I’m on at 8am. Good thing I’m already used to waking up early these last couple of weeks!

Saturday, August 25, 2018

The Upside Down


Is there a dark and decayed parallel world sitting right next to us? What if the boundaries between them started to thin? And what if this happened in a small town in Indiana in the 1980s because of a secret government lab experiment gone awry? That’s the premise of the Netflix series Stranger Things, a show that cleverly combines science fiction, drama, and horror, with intriguing connections between Dungeons & Dragons, the real world, and the parallel world.



[Warning – Season 1 SPOILERS ahead!]

The setting is the fictional small town of Hawkins, Indiana, that also happens to house a secret government lab. It begins with four nerdy middle school kids in a marathon session of Dungeons & Dragons (D&D), the role-playing game popular in the ‘80s. At a crux of their epic quest, a powerful evil creature appears – the Demogorgon. The adventurers can’t make up their minds what to do, and whether the wizard in the group should cast a defensive protection spell or an offensive fireball. As the Demogorgon takes advantage of their indecision, the dice are rolled over-enthusiastically and fly off the table. Just at that moment, parents intervene because it’s past bedtime and the merry adventurers have to break up their game. Three kids get on their bikes and cycle home in the night. As one of them, Will Byers, cycles close to the fenced government facility, he hears the chilling sound of an alien presence. He runs for home in an attempt to escape the beast, and then disappears! That’s the setup of the first episode.

Subsequent episodes unveil the stranger things happening in town. Other disappearances follow. A stranger shows up in the form of a hungry young girl with a shaved head. Will’s friends and relatives search for him with the help of the local sheriff, who experienced the tragedy of losing his own daughter to illness some years back. Shadowy government figures begin interfering with the investigation, including killing a local restaurateur who finds and (briefly) shelters the young girl, who turns out to have run away from the government facility. She possesses special powers, having been in utero while her mother was undergoing drug-induced experiments to test the limits of the human mind. (Yes, the government did at one point test LSD and other mind-warping drugs to learn if telepathy and telekinesis were possible.)

Eventually we learn that the young girl, named Eleven by the tattoo on her forearm, was used by the government to telepathically listen in on Soviet conversations (this being the Cold War) and trained to maim or kill using telekinesis. The telepathy, though, required traversing what seemed at first like a dark dimension, which then turned out to be a parallel decaying world containing an alien creature – like the one from Alien, Aliens, and numerous other movies. An encounter between the creature and Eleven opens up a large portal between the parallel worlds within the secure government facility, but also weakens the boundary in the vicinity of rural small-town Hawkins, Indiana.

The alien itself, especially when revealed in full in the final episode, is much less interesting – a trope borrowed from Alien and its descendant movies. In earlier episodes it breaks out from its dark world into Hawkins, drawn by the smell of blood, because it needs to feed. The girl Eleven, having escaped from the facility and being sheltered by the three boys looking for their missing friend, uses their D&D game to explain the situation as best she can. She identifies Will as the wizard and the alien creature as the Demogorgon, all residing in the Upside Down. The Upside Down is her name for the parallel dark world because she turns the game board upside down. It is all black. That’s where the Demogorgon resides and where Will is hiding – so close, yet unable to break back into his own world. The kids recognize it – the Vale of Shadows in their D&D world.

The dark world itself is interesting. There are parallel buildings and structures in the same locations. It’s as if this is Hawkins, Indiana, after a pack of dementors had gone through and sucked all its life away. The buildings are decaying, covered with black alien ooze. All is cold and hazy, and the atmosphere is a miasma. With the boundary weakened between the two worlds, the alien creature is able to snatch victims from Hawkins, both human and wildlife. Eventually some of our protagonists enter the dark world through the portal and other cracks in the boundary. Will Byers, the wizard, is close to death in the poisonous haze, but has managed to survive and hide from the creature. Perhaps his spell of protection ‘worked’ where fellow townspeople could not survive, and his immersion in D&D helped him keep his wits against a nasty Boss Creature.

One of the clever parts of this series is when Will’s mother realizes that he is trying to communicate through electric lights. In this dark and decaying world, energy signatures of life are uncommon. So as Will moves in the vicinity of his home, the lamps light up along his path. When the alien creature approaches, its energy presence causes large light-ups and electrical disturbances. Energy in the form of electromagnetic waves (it’s like magic!) seems to be leaking between the cracking boundaries of the two parallel worlds sitting in the same simultaneous space – so close, yet worlds apart.

An ‘80s stereotype in the form of the enthusiastic science teacher who also advises the AV club (of which the four boys are the only enthusiastic members) helps to explain the science of parallel worlds. Hugh Everett’s multi-universe proposal comes up. There’s a neat analogy of exploring different dimensions through the ‘flea and acrobat’ illustration. The AV equipment is utilized to make contact between worlds. The boys use their compasses after learning about magnetic and energy fields from the teacher. And when they want to construct a makeshift sensory-deprivation pool so that Eleven can make contact with Will, the teacher gives them instructions akin to what’s used in Float Therapy. (I had my students work on this in one of my classes!)

Eventually the creature is killed by Eleven; but she disappears in the process. Would rolling an improbable eleven have killed the Demogorgon in D&D? I don’t know, but maybe that’s an unexplored connection. We don’t really know what the over-enthusiastic dice-roll in the first episode was, even though one of the boys claims it’s a seven – the most common outcome from two six-sided dice. And are the stranger things over? Was the portal closed? We don’t know. The Upside Down world must still be there – cracks and all. And if it’s a world in which the present buildings and structures exist in decay, is it a future world? Is there a traversing through time? I’ll have to wait until Season Two.* Even Stranger Things might be afoot in Hawkins, Indiana.
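As an aside, the boy’s claim checks out: of the 36 equally likely outcomes of two six-sided dice, six sum to seven, more than any other total. A quick illustrative sketch (in Python, purely for fun – obviously not from the show):

```python
from collections import Counter
from itertools import product

# Tally the sums of all 36 equally likely outcomes of rolling two d6
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
most_common_sum, ways = counts.most_common(1)[0]
print(most_common_sum, ways)  # -> 7 6 (seven occurs in 6 of the 36 outcomes)
```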

*You might be wondering why I haven’t already seen Season Two. That’s because I don’t have a Netflix subscription and don’t plan on getting one, so I’m just waiting for my local library to get the DVDs (which was our source for Season One). I did watch Chef’s Table when my sister (who has a subscription) visited in the early summer.

Monday, August 20, 2018

Desirable Difficulties


Should a teacher always be clear and streamlined in a classroom presentation? While this is my aim, I wonder if I should occasionally be murkier and a little disorganized.

Why would anyone purposefully do such a thing? I’m not talking here about being a last-minute disorganized teacher as one’s natural state of affairs, or the trope of the mumbling bumbling professor that no one can understand. Rather I’m pondering whether the smoothness of my pre-digested carefully arranged material gives some students a false sense of security. They think they understand the material because it seems so comprehensible – but actually they don’t. Reality kicks in when attempting to do the homework problems, or worse, when taking the exam (if they didn’t struggle through the homework themselves).

A common student refrain: “Everything seemed so clear in class, but I don’t know why I can’t solve these problems.” Yes, I do carefully go through worked examples in class and strategies for problem-solving. Sometimes there are worksheets for the problem-solving to actually take place in class. There are homework problems to help solidify the material.

A common question from me: “Did you read the textbook and look through the worked examples there?” Often the answer is a sheepish no. With further probing a student will admit that “the lecture is easier to understand than the textbook” so why bother with the struggle? It seems inefficient to the student to look at both textbook and lecture notes. My students think that it’s worth attending class. (I get very good attendance even though that’s not part of their course grade.) This makes sense, to some extent. Chemistry is challenging. There’s lots to learn and I emphasize what I think is most important in class. And my exams do assess what I think is most important in class.

But the nagging question remains of how to get students to grapple and struggle more with the material, rather than giving in to strategies that ‘look’ easy, some of which are known to be rather ineffective. There’s also the occasional study that touts the introduction of a desirable difficulty that improves learning in the classroom. You’ve likely heard of the study showing that taking notes the old-fashioned way with pen(cil) and paper is better than using a laptop or other electronic device. Or, more obscurely, there’s the study showing that a harder-to-read font can encourage more eyeball time and therefore better reading comprehension. Or you might overhear a student proclaiming (about another class, not yours!) that “I had to do all the studying myself in Professor X’s class because he’s so confusing in class.”

There are in fact studies showing that introducing so-called ‘desirable’ difficulties can promote deeper learning – longer-term retention and (in the lingo) ‘transfer’. The difficulty is that it may slow down learning short-term. For example, interleaving practice shows evidence of longer-term learning compared to massed practice. But there’s a fine line. Making things too difficult or frustrating has a negative impact on learning. Taking an earlier example, you could make the font so blurry that the student, instead of reading more slowly and carefully, resorts to skimming and skipping. That’s going to be bad for learning!
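To make the contrast concrete, here’s a toy sketch (topic and problem names invented for illustration) of massed versus interleaved orderings of the same set of practice problems:

```python
from itertools import chain, zip_longest

# Three invented topics with three practice problems each
problems = {
    "moles": ["m1", "m2", "m3"],
    "gases": ["g1", "g2", "g3"],
    "bonds": ["b1", "b2", "b3"],
}

# Massed practice: finish all of one topic before moving to the next
massed = list(chain.from_iterable(problems.values()))

# Interleaved practice: rotate through the topics
interleaved = [p for batch in zip_longest(*problems.values())
               for p in batch if p is not None]

print(massed)       # ['m1', 'm2', 'm3', 'g1', 'g2', 'g3', 'b1', 'b2', 'b3']
print(interleaved)  # ['m1', 'g1', 'b1', 'm2', 'g2', 'b2', 'm3', 'g3', 'b3']
```

Same problems, same total effort; only the ordering differs – and that ordering is exactly the variable these studies manipulate.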


In an open-access review paper published last week in Frontiers in Psychology (DOI: 10.3389/fpsyg.2018.01483), Paas and co-workers analyze how and when desirable difficulty turns into undesirable difficulty. The key theory used to analyze the issues is cognitive load theory; John Sweller, one of the principals in this area, is one of the co-authors. The three desirable difficulties discussed are testing, generation, and varied conditions of practice. Let’s take these in turn.

Testing. Students do not like this. Tests stress them. But tests can be very effective for learning. The poster study for the testing effect shows that students who took quizzes before a final exam performed better than students who spent the equivalent time rereading the material. (There are many other related studies that support the testing effect.) That’s why my General Chemistry classes have plenty of very low-stakes five-minute pop-quizzes. I also provide practice exams, but for the coming semester I’m going to try something new to help students test-and-reflect with take-home exams.

The generation effect is “the finding that generating one’s own answers rather than studying the answers of others may have long-term advantages for learning”. Most of these studies involve word generation or sentence completion although some contain math calculations. In my classes, I stress writing out the answers with intermediate steps included. It’s also why multiple-choice questions do not feature strongly in any of my classes. A multiple-choice question requires recognition (or luck), while a generation question forces the student to utilize what they have learned by drawing on resources in long-term memory. (Note, this doesn’t come from sheer memorization. While definitions and some procedural knowledge must be memorized, the rest can be constructed.)

By varying the conditions of practice, for example through interleaving (mentioned above), or providing different-looking problems that help the student practice the same concept, studies show signs of long-term retention. This is contrasted with repeating the same solution or procedure (massed practice) under the same conditions, which is effective for short-term remembering and regurgitating, but shows little lasting effect. (Note that this principle doesn’t apply to automating physical movements such as one might practice in sports, because in those cases one is aiming for autonomous memory rather than long-term memory.) I try to provide varying homework problems, because there’s only so much you can cover in class, but I’m not sure how effectively I’m doing this. I need to work on this more systematically.

In all three cases (testing, generation, varied conditions of practice), there are also studies that show the opposite effect, i.e., that learning is hindered (usually measured by retrieval or a final test of some sort). Cognitive load theory provides a framework to explain these cases. Our working memory is limited. Therefore, if a novel task is presented and there are too many new interactions among the elements of this task, learning is inhibited – what gets encoded in long-term memory is at best a jumble of incorrect notions. Folks in the learning sciences have quantified ‘element interactivity’, and they find that novices have difficulty handling this interactivity, while experts can do so with ease (up to a certain point). The authors illustrate how learning effectiveness decreases as element interactivity increases, and how this blunts a desirable difficulty into one that is less desirable.

Reading this article made me think about how to quantify element interactivity in chemistry. I know it’s high for novices from Johnstone’s Triangle. But I don’t know how high. Students starting college chemistry also have a very wide variety of prior knowledge. Some of them have had excellent high school chemistry courses that cover much of the material in first-year college chemistry. Others have problems with algebra and proportionality, which seriously impedes learning basic concepts such as the mole (quantity) and manipulating chemical formulae. In any case, I’m approaching the new semester with a sharp lookout for element interactivity in the material I’m teaching. I expect this to be significant in my Quantum Chemistry class, since the math is demanding, and the concepts are counter-intuitive and challenging.

There are occasions where I attempt to confuse the students, but I let them in on it. During class discussions, in trying to sharpen and clarify conceptual material, a student might provide a ‘textbook answer’. I respond by providing a spurious counter-explanation, often prefaced with “I will now try to confuse you by claiming…” and challenge the students to go a little deeper. I have also made deliberate common math or graph errors in class early in the semester to make sure students are paying attention and not just blindly copying what I write on the board. I alert them to this issue within a couple of minutes at most, and we discuss why I made the error. This strategy becomes very useful later in the semester when I inadvertently make a math error on the board. This inevitably happens in the math-laden physical chemistry courses. And I do veer off the beaten path occasionally in class, when our discussion uncovers something interesting. This is desirable and keeps us all on our toes!

Tuesday, August 14, 2018

Omnivorous Eating is Underrated


What is the opposite of a picky eater?

The first two terms that popped into my head were negations: non-picky and non-choosy. In particular I’m trying to describe the ability to eat a very wide variety of things. A varietal eater? A wide/broad eater? A promiscuous eater? This came to me as I started thinking about bacteria being promiscuous, not just in eating but also in exchanging genetic material. Wanton eater? Not to be confused with wantan (the Chinese dumpling) eater, this popped into my head because the game Bios Genesis uses wanton-ness to measure ability for horizontal gene transfer (HGT).

For the title of this post, I’ve settled on omnivorous; not in the sense of eating both meat and vegetables, but rather ‘taking in or using whatever is available’. I’m an omnivorous reader, at least of non-fiction. (I’m rather narrow when it comes to reading fiction.) I also happily eat animal and plant products, and I have no qualms in eating strange international delicacies that would put off the average American. I also enjoy eating and talking about food, and my most regular visual social media posts are probably photos of yummy food.

However, what motivates today’s post is thinking about the distant past and the possibly near future – the origin of life and future technology just around the bend.

The origins of life on our planet are way back in the distant past. Since this is one of my research interests, I’ve been pondering the ancient origins of metabolism. Several articles I read this week speculate about the promiscuity of bacteria (and archaea) in utilizing a variety of carbon sources for food. This is crucial to survival in an ever-changing environment. If your main food source is depleted, can you ‘innovate’ or ‘evolve’ to use another food source? If not, you die. HGT could play an important role in providing bacteria with the robustness to handle different food sources. There are experimental studies and computer simulations aimed at discovering how organisms can be robust.

Chemical reactions involving simple precursor molecules that are likely to be found on the early Earth result in diverse complex mixtures of hundreds of interesting molecules, only a small subset of which are used by extant life. I will quote myself from a previous blog post. “The riddle of origin-of-life chemistry has less to do with making a large variety of molecules [this is easy!] – it’s about why life only picks out a select few and uses them over and over again.” My suspicion is that proto-metabolic systems started off being very promiscuous but perhaps not very efficient. They could make use of certain types of molecules as sources of energy – but also all the chemical cousins of these molecules. But as energy demands grow with larger, more complex metabolisms (although still ancient), less efficient pathways are used less and less, and possibly pruned out. We see the same thing in genes. An endosymbiont, compared to its free-living counterpart bacteria, has ‘lost’ a lot of the genes that would generate proteins that make it robust. It’s happy in its host, and would die outside of its cocooned world.

But the environment changes. Bacteria adapt. Those that don’t, die. Those that do, live on and multiply. We’ve seen these evolutionary changes in our lifetime. Micro-organisms have evolved to eat man-made chemicals previously thought non-biodegradable. Scientists utilize this promiscuity to evolve organisms that will eat toxins, plastics, and other ‘trash’. You can also engineer micro-organisms to eat particular types of food and then produce (or ‘poop’) other chemicals – think biofuels of the more exotic kind, or other specialty chemicals. We humans think we are remarkably omnivorous too! And so we are, perhaps more so than many other creatures close to our size and mass, but those bacteria are amazing. There’s an evolutionary reason why your gut and digestive system is full of them. We contain multitudes!

Eating, in fact, is simply amazing! We take in nutrients from a variety of sources, and our metabolism turns them into biomass and powers our biomechanics. Think about an automobile engine. The number of fuels it can take is rather limited. It powers motion, but there’s no equivalent to building biomass. How fantastic would it be if a car could do simple self-repairs? Heal thyself, automobile! The bodies of living organisms do this all the time, as long as the injury is not too severe – in which case we need the help of doctors and medicines. When you take your car to the garage to be serviced or repaired, that’s sort of like a visit to the doctor.

Why can’t cars repair themselves? Why can’t they redirect their resources when something not-too-catastrophic fails to function? Maybe they can. With computer systems controlling the ‘heart’ of the car, it’s possible to design the system to consider such contingencies. You could design the software to be systems-oriented. For example, instead of code that connects your indicator switch directly and solely to the indicator lights, the code could link your input hardware (switches, knobs, buttons, perhaps even the steering wheel or the gearstick) to an array of sensors and outputs, with an artificial intelligence helping to sort things out.

Let’s imagine a simple example. Your left rear indicator light stops working. A sensor notes that. When you, the driver, turn on your indicator – the system, knowing that its primary indicator light is down, makes an alternative choice. It could flash your left rear lamp instead. Or it could make a mechanical adjustment that pushes your front left indicator light away from the body so that a driver behind can see it, if your left rear lamp has also failed. The idea is not necessarily to think up all the contingencies and program them in by priority, but to code the system to reason about functionality. Let me call it programming with a systems mentality.
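Here’s a minimal sketch of what I mean (all names hypothetical, and Python used only for illustration – a real automotive system would be far more involved): the driver’s intent is mapped to whichever working output can still express it, rather than hard-wired to one lamp.

```python
# Hypothetical fallback table: for each output, alternatives that can
# express the same intent if the primary output has failed.
FALLBACKS = {
    "left_rear_indicator": ["left_rear_lamp", "left_front_indicator_extended"],
}

def signal(intent, working_outputs):
    """Return the first still-working output that can express the intent."""
    for output in [intent] + FALLBACKS.get(intent, []):
        if output in working_outputs:
            return output
    return None  # nothing available; warn the driver instead

# The left rear indicator has failed, but the rear lamp still works:
print(signal("left_rear_indicator",
             {"left_rear_lamp", "left_front_indicator_extended"}))
# -> left_rear_lamp
```

The point of the sketch is that the contingencies live in data (the fallback table), not in a tangle of hand-coded special cases.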

Not just the software; the hardware of the car should perhaps also be designed and built differently. I haven’t thought about exactly how that should work, but I think eating should be considered important. With the advent of 3D printing and software plans that could be beamed in from the cloud, an intelligent diagnostic system could potentially build the needed fix and tell you how to swap something out at your next stop. In my previous post, I discussed how the manufacture of guns was changed so that the system was built of interchangeable parts that did not require an expert gunsmith every time something went wrong. Precision engineering played a role in that, but the systematic design for plug-and-play (or in this case, plug-and-shoot) was crucial. For more serious matters, the modern garage, equipped with more advanced 3D printers, could have the part ready for you when you arrive.

What does eating have to do with this? You feed raw chemicals to the 3D printer, and it produces an object that you need for functioning. As 3D printers become smaller, cheaper, and more automated, what’s to prevent one from being installed in the automobile of the future? One that can self-repair at least certain minor but important issues. Perhaps the automobile is not the best place for this technology – but I do expect plug-and-play to continue in prominence with the intermingling of software directing the construction of custom hardware. Perhaps it is no coincidence that the cutting-edge 3D printing technology comes from a company called Carbon. I’ve seen some of their work and listened to the company’s founder Joseph DeSimone at a couple of conferences. It’s pretty amazing stuff.

Eating. It seems so basic, but it’s anything but. And omnivorous eating? Now, that’s something quite special.

Saturday, August 11, 2018

Mass Production


Delving into the history of precision engineering seems like an unlikely topic for a mass-market book. You might think that only a few nerdy enthusiasts would care. But in the hands of Simon Winchester, the story is engaging and interesting, and you’ll look at the countless man-made gadgets and gizmos around you in a different light. All this awaits you in The Perfectionists, subtitled “How Precision Engineers Created the Modern World”. And while it showcases the past in light of the present, it made me think about the future of my profession – higher education – and the tensions within, as we see continued stratification between the ‘elite’ and the masses.


The prologue begins by defining and contrasting two terms, precision and accuracy, something I also do on Day 2 of my introductory chemistry classes. (Day 1 is about an atomistic view of matter and ‘seeing the unseen’.) Each chapter is then organized around a target precision, or tolerance; the values get progressively smaller as inventors and engineers make the ‘cutting edge’ ever finer. Yes, indeed. The increasing ability to make fine, precise cuts gave us the phrase we now familiarly associate with the forefront of technology.
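Since this distinction trips up students every year, here’s the numerical version of the contrast I draw in class (measurement values invented for illustration): precision is about the spread of repeated measurements; accuracy is about how close their average lands to the true value.

```python
import statistics

true_value = 100.0

# Invented example data sets
precise_but_inaccurate = [90.1, 90.2, 90.0, 90.1]    # tight cluster, far off
accurate_but_imprecise = [95.0, 105.0, 99.0, 101.0]  # scattered, centered on truth

def spread(measurements):
    """Precision: how tightly repeated measurements cluster."""
    return statistics.stdev(measurements)

def bias(measurements):
    """(In)accuracy: how far the mean sits from the true value."""
    return abs(statistics.mean(measurements) - true_value)

print(spread(precise_but_inaccurate), bias(precise_but_inaccurate))
print(spread(accurate_but_imprecise), bias(accurate_but_imprecise))
```

The first data set has a small spread but a large bias; the second, the reverse – which is exactly the contrast between the Antikythera mechanism (precise, not accurate) and Harrison’s clocks (both) that comes up later in the book.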

Chapter 1 begins, appropriately, with chronometry – the measurement and keeping of time. One hero in this story is John Harrison, maker of the most precise clocks available in the eighteenth century. His timekeeping devices allowed mariners to determine their longitude and, later, allowed railway lines to substantially increase coordination and efficiency. The business of education is ruled, perhaps enslaved, by time – when classes start and end, when meetings are scheduled and for how long, so we can get on with our day. Harrison’s devices had many tiny interacting parts that had to work just right in concert. His achievements are amazing given the lack of precision tools at the time.

In the eighteenth century, being able to consistently make parts and devices to the tolerance of one-tenth of an inch was remarkable. But did the Greeks beat Harrison and his contemporaries to this level of tolerance two thousand years ago? Winchester introduces us to the discovery and analysis of what is termed the ‘Antikythera mechanism’. Was it an ancient computer? We now know from X-ray tomography measurements that there is “miniscule inscribed lettering in Corinthian Greek chased into the machine’s brass work – a total of 3,400 letters, all millimeter-size… – [suggesting] that the gearwheels, once fully engaged with one another on the side of the box, could also predict the movement of [the moon and] five other planets then known to the Ancient Greeks.” Unfortunately, while the device certainly is precise, it is not accurate. Harrison’s clocks, on the other hand, were both precise and accurate, up to a certain tolerance at least.


In Chapter 5, Winchester brings us to the early twentieth century. Two Henrys feature prominently – Royce and Ford. Their automotive creations, the Rolls-Royce and the Ford Model T, would come to exemplify the divergence between products for the elite and products for the masses. But the interesting story here is that what truly requires cutting-edge precision engineering is efficient mass production, not slower custom-built pieces of engineering. The drive toward perfection shows up in both, but in different places, and for different reasons.

Winchester writes: “… while Henry Royce over in Manchester had been captivated by perfection. Henry Ford in Dearborn was consumed by production. Their two fledgling companies, so similar in so many ways, each wedded to the idea of making the best and most suitable machine it could, began to diverge in both purpose and practice from the moment of their respective foundings… Within Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, it was far more crucial to the making of the less costly, less complex, less remembered machines… for a simple reason: the production lines required a limitless supply of parts that were interchangeable.”

Interchangeable is key here. A system made up of precisely and consistently manufactured parts can be built and fixed quickly and efficiently. If two pieces don’t fit, production slows down until something is fixed or replaced. Winchester’s conclusion is haunting: “Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.” The Rolls-Royce doesn’t require precision everywhere in the process because the well-trained artisan engineers can handcraft the required fitting. The production line doesn’t require expensive engineers to run. It just needs all component parts to be precise.

This story isn’t just true of cars; it turns out to be true of guns. Back when these new weapons of destruction entered the battlefield, they were unreliable. A jammed gun was a frequent problem. (Bayonets, swords, or knives were crucial!) And if your gun needed repair, gunsmiths were few and far between. There was no easy D.I.Y. for the common soldier. It was the French who came up with a solution: constructing a gun from interchangeable, consistent parts that could easily be replaced in a pinch. But shortly after the French Revolution, “the idea of interchangeable parts had withered and died in France – and some say to this day that the survival of craftsmanship and the reluctance entirely to embrace the modern has helped preserve the reputation of France as something of a haven for the romantic delight of the Old Ways.”

But one person who saw the demonstrations of the French interchangeable flintlock parts back in 1785 was Thomas Jefferson, then emissary of the fledgling American government. Winchester narrates how Jefferson brought these ideas back to the New World, vigorously championing the system. The armory at Harpers Ferry became renowned for its production-line guns: consistent, easily fixed, and efficient to produce. No longer did you need a well-trained gunsmith. Machines did all the work – making “lock, stock and barrel”.

All this made me think about higher education. For a while, it was thought that Baumol’s cost disease applied to higher education. After all, isn’t this where you truly need human experts to teach human learners in a comprehensive way? As the Wikipedia entry suggests, it takes “college professors the same amount of time to mark an essay in 2006 as it did in 1966”. Except that robo-graders for student essays are on the rise, despite teething flaws that could be ironed out over time. Or how about the robot teaching assistant that flew under the radar, undetected by students? Yes, there was still a human professor for the class, by all accounts a fantastic one, over in the computer science department at Georgia Tech. Ashok Goel, who does cutting-edge Artificial Intelligence (A.I.) research, invented Jill Watson to help him with the online 400-person class he was teaching. Are we innovating ourselves into obsolescence?

In my own field of chemistry, another Georgia institution, Emory University, is requiring students enrolling in the (introductory) General Chemistry college-level course to participate in a Preparatory Module. This module is run on ALEKS, an adaptive online learning system that aims to help every student achieve competency in core concepts regardless of their individual backgrounds and learning speeds. (I’ve mentioned ALEKS in some blog posts.) ‘Requiring’ is not quite the right word, since the module is technically optional; ‘highly encouraged’ is more accurate, since successful completion counts toward the grade of the General Chemistry course it supports. The goal is to “ensure that your math and chemistry backgrounds are strong enough for you to succeed”. And if they’re not when you begin the module, ALEKS will help you get there. ALEKS is fully automated. No human instructor.

Several new education outfits at both the grade school and college level (I’m not naming names), all with a strong technology slant, aim to convince you that their new brand of education is both personalized and built on the latest ‘active learning’ strategies. Classes, especially introductory core ones, are tightly scripted and controlled because ‘the system works’. These claims are backed up by evidence from the learning and cognitive sciences (mainly associated with psychology departments). I’ve spent a lot of time reading these studies out of personal interest, but I don’t fully buy into how these strategies are married to the curriculum. (Like many other seemingly innovative strategies, there are some good parts and some bad parts.) You don’t need faculty or teachers to be well-trained in the subject area. You need good discussion facilitators who have a script with talking points, and who are able to leverage data analytics to keep the system on track. Humans also provide that personalized touch that maybe we’re not quite ready to jettison in higher education.

Online education is here to stay, and will continue to grow markedly in the higher education sector. (Here are some technology trajectories.) At least, that’s my prediction. Today, we witness the proliferation of online master’s programs in myriad fields, as the baccalaureate degree starts to lose its distinctiveness. But it will be the masses that increasingly contend with this new world. Online education is mass production for the masses. Data analytics within integrated learning systems are the new precision devices to deliver a consistent product. That’s how education is portrayed today. It’s a product. Educational institutions produce graduates.

There will always be a small segment of the population that will support so-called ‘elite’ Rolls-Royce education. The rich, the 1%, the elite, will pay for their children to be educated in prestigious institutions with human teachers and professors – a dwindling group of experts and artisans. Competition will be fierce for the ‘top’ teachers, and reputation, more so than ability, will matter most for rising to the top in an age awash with data, where it becomes ever harder to distinguish quality from quantity. I used to think that my job as a tenured liberal arts professor was quite secure, and that I could continue my tried-and-true artisanal craftsmanship approach to education. I’m likely to make it to retirement just fine, but I’m less sanguine about the prospects of my younger colleagues, especially those who are not at the top reputable institutions. Expertise isn’t required for mass production, be it at Ford when it churned out the Model T, or for online facilitation of core courses.

Is there a way to break out of this technological system we are caught in? I don’t know, but I hope that folks like Simon Winchester, possibly one of the few remaining polymaths of our era, can help point the way. In any case, I recommend his book.

Wednesday, August 8, 2018

Questioning Data


The article that prompted me to read Too Much To Know (blogged here, here, and here) was the epilogue in a journal issue exploring the intersection of data and history. The article is intriguingly titled “Big Data Is the Answer … But What Is the Question?” (figure below shows authors, abstract, citation).


The authors provide their musings on eight questions they ask to probe the concept of data (listed in the abstract). I will explore a couple that attracted my attention. I recommend reading the article in full, and possibly other articles in that issue of Osiris, if you’re interested in knowing more.

Question 1 is “What counts as data?” I thought this was obvious at first glance and then quickly realized that I bring a rather narrow chemist’s idea of what constitutes data. The authors use the example of crystallography: “… x-rays diffracted by a crystal produce an image containing dark spots, whose intensities are used to calculate ‘structure factors’, which in turn are used to determine the coordinates of each atom composing the crystal. But where are the data? Crystallographers were first content to publish atomic coordinates as the ‘data’ supporting a proposed structure, before they were asked to provide more foundational data…” Perhaps all of them are data. And given that I know something about chemical structure determination via crystallography, I might even add that simulation parameters used in various steps might be data of yet another sort.

Additionally, what counts as data may change over time. Being a computational chemist, I’ve certainly experienced this when attempting to publish research results. Which data is primary and should be part of the main article? Which is secondary and should be moved to Supporting Information? The authors quote a philosopher writing that data are “fungible objects defined by their portability and prospective usefulness as evidence”. When something is categorized as data, it is being used, at that moment in time, to support a knowledge claim. How many of us try to bolster an argument by saying “I have data to support…”? I have done this many times, especially when trying to get increased resources. But is the data sufficient? Is that data relevant? Is it raw data? Is it derivative data? Has it been ‘interpreted’ in some way? Is it ‘compelling’?

Question 4 is “What makes data measurable? What does quantification do to data?” If you use (standard Shannon information) bits and bytes to measure data, then data sizes might be quantified differently depending on what constitutes your base data. For example, the authors discuss the ASCII text encoding system and how bytes can represent the space of the ‘English’ alphabet and alphanumeric system, but might have difficulty with Chinese or Czech. As a second example, “a 500-page book and a single scanned photograph require the same number of bytes of computer memory, yet from a human point of view, the book usually contains far more information.” Comparisons may be tricky. When we say we are drowning in petabytes of data, what does that really mean qualitatively? A single byte-based unit system struggles to measure data of different types.
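The ASCII-versus-other-languages point is easy to see in practice. Here’s a minimal Python sketch (my own toy example, not from the article) showing that the ‘same word’ costs a different number of bytes depending on the alphabet, even under a single encoding like UTF-8:

```python
# Byte counts depend on the alphabet, even within one encoding (UTF-8).
english = "precision"
czech = "přesnost"   # Czech for 'precision'; ř needs 2 bytes in UTF-8
chinese = "精度"      # Chinese for 'precision'; each character needs 3 bytes

for word in (english, czech, chinese):
    n_chars = len(word)
    n_bytes = len(word.encode("utf-8"))
    print(f"{word}: {n_chars} characters, {n_bytes} bytes")
# precision: 9 characters, 9 bytes
# přesnost: 8 characters, 9 bytes
# 精度: 2 characters, 6 bytes
```

So ‘number of bytes’ already bakes in choices about what the base symbols are – exactly the authors’ point that quantification is not neutral.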

What does quantification do to data? I’m not sure, but trying to compare elephants and oranges on the same metric might be very misleading. I’ve been thinking about how to measure molecular complexity, as I’ve been probing the question of whether chemical evolution in a non-equilibrium situation leads to the formation of more ‘complex’ molecules. But as I’ve delved into the literature on how to measure molecular complexity, the situation becomes decidedly more ‘complex’ when attempting to choose an appropriate metric. Most approaches use something akin to a Shannon or Boltzmann-type scale coupled with some ad hoc add-ons.
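To make the Shannon-type idea concrete, here is a toy Python sketch of my own (not a published metric, and far cruder than anything in the molecular-complexity literature): the Shannon entropy of a molecule’s element composition, which at least captures that glycine is ‘more complex’ than methane.

```python
from collections import Counter
from math import log2

def composition_entropy(atoms):
    """Shannon entropy (in bits) of a molecule's element composition."""
    counts = Counter(atoms)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

methane = ["C"] + ["H"] * 4                          # CH4
glycine = ["C"] * 2 + ["H"] * 5 + ["N"] + ["O"] * 2  # C2H5NO2

print(round(composition_entropy(methane), 3))  # 0.722
print(round(composition_entropy(glycine), 3))  # 1.761
```

Of course this ignores bonding, topology, and symmetry entirely – which is precisely why choosing an appropriate metric is so fraught.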

Up to this point I haven’t touched on Big Data. The authors’ last two questions touch on this aspect. In “Who owns data? Who uses data?” they briefly discuss “supply chains” of data. One interesting historical aspect that I suspect has far-reaching ramifications is that in recent years, “data suppliers, data managers, and data users have become far more differentiated and specialized. As the collection, organization and curation of data become increasingly professionalized, a divide has appeared between the scientists who produce data, those who manage it, and those who analyze it.” This has led to tension, or even conflict, between data producers and data analyzers.

Big Data is here to stay. Beyond privacy issues, we should be cautious and thoughtful about what data signifies in whatever context it is being used. We should certainly ask questions.

Some previous posts on big data:
·      Deep-Fried Data

Saturday, August 4, 2018

Technical versus Entrepreneurial Creativity


When I think about creativity, I tend to imagine new inventions or ideas coupled with out-of-the-box approaches. In Exceptional Creativity in Science and Technology, two of the book’s contributors attempt to narrow and distinguish different types of creativity.

Chapter 7, by Susan Hackwood, a former engineer at the famed, creative Bell Labs, is titled “Technically Creative Environments”. To probe this idea, she first defines what is meant by technical creativity. Her operational definition: “Creativity is the ability to bring about the new and valuable [where] the distinguishing characteristic… is that the ‘valuable’ part brought about by technical creativity is not the true, good or beautiful, but rather the ‘useful’… Moral neutrality is a second characteristic specific to technical creativity… The key driver… is to achieve power over nature, and [the reasons] can be either Promethean or compassionate.”

Hackwood argues that technical creativity also requires “basically high IQ with a high quantitative and spatial component” as a necessary, but not a sufficient, condition. She doesn’t cover how one acquires those skills or whether they are innate; instead she focuses on how a group of such individuals can “work together to achieve higher creativity”. What makes creative achievement possible? Hackwood argues for four elements: two abilities and two traits.
(1) Ability #1: Master the knowledge and skills to accomplish the (creative) task.
(2) Ability #2: Sustain an intense focused effort toward a specific goal.
(3) Trait #1: Be prolific in generating ideas beyond the scope of one ‘type’.
(4) Trait #2: Be guided by an internal (autonomous) vision, and not by external values.

While “blocking any one of these four elements stunts creativity”, Hackwood focuses on the two traits as key to creative thought. (The two abilities are necessary, but not sufficient.) In a statement that will undoubtedly raise the hackles of non-scientists, she argues that “broadly speaking, the presence of the humanities and social sciences departments in the university is not necessarily an asset to technical creativity… [Her] view is that they currently, rather than broadening the mind, often produce individuals inhibited by political correctness, who are discouraged from relying on their autonomous vision by relativism… Paradoxically, the very disciplines meant to provide breadth can actually foster limits and inhibitions.” This clears the way for Hackwood to focus on the technical research environment itself.

With the history of Bell Labs in mind (I recommend Jon Gertner’s fantastic The Idea Factory), both in its intellectual heyday, and its eventual decay, Hackwood takes aim at what she calls “technological leadership”. I take this to mean anyone who is in a position of authority and holds the purse-strings over people and programs in science-engineering research and technological advancement. Hackwood defines a type: IBNC, Intelligent But Not Creative. Such individuals have Abilities #1 and #2, and possibly bits of Trait #1 (being prolific, but not necessarily wide in scope). The result is that “IBNCs are fundamentally incapable of moving against the accepted vision/opinion of the group in relation to which they define themselves.” (I’m cutting out details in her argument here.)

Here’s the crux, and Hackwood does not mince her words. “The successful creative research environment is characterized by its power to prevent IBNCs from becoming leaders or from dominating the group by intimidation and other social means. This is not easy for the simple reason that the filters (notably schools) that select for abilities [#1 and #2] tend to select many IBNCs, who therefore are inevitably found within any potential creative research environment. The task is to isolate, restrict, and if possible remove IBNCs from the [leadership] group… Once power passes to the IBNCs, the process is irreversible, and the creative group ceases to be such. The decay into mediocrity may be delayed but it is almost inevitable.” I suspect Hackwood was personally there to observe the decay of Bell Labs.

But it gets worse, and she continues: “In practice, leadership by creative people is very difficult to achieve because technically creative people generally are not attracted to management, which is a social task… [thus leadership] is forever threatened with takeover by IBNCs, especially in a time of scarce resources… They end up controlling much of the research activity by inevitably fostering group projects (always in culturally sanctioned areas) and megaprojects (always in interdisciplinary and sanctioned areas). Such control kills the autonomy of the creative person’s vision and inhibits ideational fluency [Trait #1].” And even if you have technically creative leaders, it’s still very challenging to ‘organize’ a creative group of autonomous individuals.

Hackwood has five principles to sustain a technically creative environment. (She makes an argument for each in her chapter. This is just the summary list.)
·      Hire the best and let them free.
·      Do not let IBNCs become managers, leaders, or even dominant in the group.
·      Provide the best research tools.
·      Do not make access to basic research resources depend on constant, fierce competition where noncreative agents pick winners.
·      Move the group to a location where the quality of personal life is high.
Whether or not you agree with Hackwood, I can see how each one of these can be problematic once resources get scarce. It’s challenging to have an ‘ideal’ environment. Patronage or independent wealth seem like the way to go. It’s difficult for government or industry to provide the goods.

In contrast to Hackwood, the next chapter, on “Entrepreneurial Creativity”, by Timothy Bresnahan, an economics professor at Stanford, shifts the focus away from technical creativity. The social sciences become much more important in this domain. He writes that “no matter how brilliant and creative [a technical invention]… entrepreneurial creativity is also needed [because it] creatively locates and exploits overlaps between what is technically feasible and what will create value for society. This is the key step in the founding of new technology-based industries.” Bresnahan uses the story of the Integrated Circuit to bring home this point and highlight various aspects of entrepreneurial creativity.

Some stage-setting is in order. Finding those overlaps (“between technical opportunity and value creation”) turns out to be very difficult because “knowledge is dispersed widely in the economy… [For example] understanding computer technology deeply does not endow computer specialists with deep knowledge of markets, entertainment, or the delicate arts of social communication. That knowledge is, typically, held by others. More generally, when markets and industries do not yet exist, there is no good reason for the same person to have knowledge of both technical feasibility and value creation.”

Bresnahan continues: “Entrepreneurial implementation lies in building the firms, markets, or industries that exploit a technological opportunity to create value. In many ways, this market focus distinguishes entrepreneurial creativity. The new product or process innovation that serves an important need may appear quite mundane, but if it was not foreseen, it is creative. Indeed, a good working definition of practical creativity ought to emphasize the transition from a state in which something was unforeseen to a state in which it is compelling. Many innovations seem obvious with hindsight because they are compelling to their users.”

Three definitions that Bresnahan uses are helpful here.
·      Invention: The conception of new scientific or engineering ideas.
·      Innovation: The development of new marketable products or new usable processes.
·      Diffusion: The adoption of new products or processes widely in the market.
While Invention very much mirrors technical creativity, Innovation and Diffusion are the hallmarks of entrepreneurial creativity. Interestingly, from Innovation’s point of view, Invention is counted as a cost rather than a benefit; a necessary one perhaps, but a cost nevertheless.

Diffusion is what I found hardest to grasp. It is clearly important for a new industry to be successful and prove its value, but how it can transform millions of lives is exceedingly difficult to foresee. Bresnahan traces the history of the integrated circuit to the development and widespread use of the personal computer, weaving in stories of high-tech firms rising and falling in Silicon Valley.

Four points that I gleaned from this story: First, there was a certain ‘generalness’ in the design and diffusion of the integrated circuit that led to the blooming of a wide range of other high-tech industries. Second, the many brilliant scientists involved learned how to be creative leaders on the fly, by experience. Ironically, many of those lessons came from William Shockley’s example of how not to lead. Third, no one knows everything, so having a knowledge network is crucial. This allows different firms to work within their constraints and resource limitations, in a sort of competitive-collaborative partnership with their peers. Fourth, recombination shows up a lot. Take inventions and ideas that already exist, but combine them in novel ways while keeping an eye on what will overlap with market value.

Comparing Hackwood’s and Bresnahan’s back-to-back chapters, it seems that both technical and entrepreneurial creativity are important, although they are increasingly rarely found in the same individual. The heyday of the creative polymath has passed. It’s simply too time-consuming to be an expert in multiple areas, so collaboration is crucial. Not just between individuals, but between corporations, between institutions, between governments. Hackwood’s five principles for fostering creativity are ideal, but increasingly difficult to realize. Neither author delves into what makes someone creative, but I increasingly suspect that being in the right place at the right time with the right complementary knowledge is key. Chance favors the prepared mind, perhaps. Predicting the future has never been easy.

For my review of Chapter 1 of this book, click here.