Sunday, October 30, 2016

A Reflection on Writing and Paper


Tomorrow (Halloween) will mark two years of the Potions For Muggles blog. It’s always good to reflect on why I spend time on any particular activity. After all, I want my students to do the same. Earlier in the semester (between weeks 4-6), I had my first-year advisees keep a timelog for a week. I wanted them to see where they spent their time, and then have them stop by my office to share their reflections. (I specifically tell them that I’m not going to ask to see their actual timelog – I just want to know their reflections on the activity.) So far the students have found the activity helpful!

Has blogging been helpful to me this past year? One of my goals was to improve my writing. While I think I saw significant improvement my first year, I’m not sure how much more I have improved over this second year. I’ve noticed that I don’t spend as much time drafting and crafting my posts as I did in my first year. Can one’s writing deteriorate? I think so. Like many other skills, writing needs continued practice. I also think blogging about the new things I am trying in my classes keeps me accountable. I am more likely to follow through on an idea once I have announced it publicly in some form or other. Blogging has also helped me reflect on the new things I am trying in class, simply by forcing me to sit down and write out my thoughts in some coherent fashion.

This past week I have been reading Paper: Paging Through History by Mark Kurlansky. Socrates shows up in the first chapter. In the Phaedrus, there is a section titled “The Superiority of The Spoken Word. The Myth of the Invention of Writing”. The familiar argument is that writing is both inferior and dangerous – our memory recall becomes flabby through lack of exercise, one-way communication through writing does not allow the dialectical asking and answering of questions, and being removed from the writer makes it “not sincere, not heartfelt, and thus in a sense less true.” In the dialogue, Socrates describes what happens when (Egyptian god) Thoth invents writing. The pharaoh tells Thoth: “You have invented an elixir not of memory but of reminding, and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant.” (As we move towards more online A.I.-driven education, I see an argument for having a live instructor here.)

Reading the history of paper and writing made me think about the difference between the millennia of writing on physical objects and today’s digital writing – moving 1’s and 0’s around on a computer chip stored somewhere in a cloud. In ancient times, only the most important things were etched into stone or carefully transcribed on papyrus or vellum. Very few people could read or write, and a class of scribes grew in power and influence with the “technology” of writing. While many valuable ancient writings have been preserved, many have been lost. Much was lost when the Library of Alexandria was destroyed by fire. The burning of the Mayan codices by overly zealous Spaniards erased much of the history of a people.

But as paper got cheaper and the printing press was invented, the hunger of the general populace for reading material helped to drive the technology of printing for the masses. With spreading literacy, more folks even took to writing. Kurlansky goes into great detail describing the impact of each technological iteration. He weaves a rich tapestry from threads of political pamphlets, the invention of newspapers, the scouring of rags, the fad of paper clothes, and the relationship of artists to their paper. I greatly enjoyed the first half of the book, but the second half felt ponderous and I found myself skimming.

In earlier times, ideas spread only as quickly as paper and printing could carry them. But this would change in the 20th century as new media superseded paper. When I was young, I very briefly (for a few months) kept a diary. I have no idea where it is anymore. Lost forever, perhaps, in a landfill somewhere after being thrown into the trash. (This was before recycling became popular.) Now, I have a blog that is broadcast to the world with the press of a button. It exists, not as ink molecules binding to the molecules of paper in some sort of permanence, but in a transient form of bytes somewhere in a cloud hosted by Blogger. The spread of my writing is almost instantaneous. It could all be lost in an instant unless a bunch of folks downloaded local versions. But even then, digital files can be corrupted just as acidity can destroy paper.

Tagging my blog posts and using the search function means that I do not exercise my memory in the way Socrates wishes. The external hard drives in the cloud have become an extension of my memory, searchable by anyone connected to the network using standardized pattern recognition; my posts are only transiently stored in the idiosyncratic neuronal network of my brain. I don’t claim to offer my readers true wisdom, or even the appearance of wisdom that the pharaoh warns of. But maybe it’s not the elixir of memory that is important, but the reminding in the form of reflection. Looking back allows us to see where we’ve come from. Those who do not learn from history are likely to repeat its mistakes. So perhaps writing’s utility is as a reflective reminder to the writer. Perhaps I should keep blogging for another year!

Friday, October 21, 2016

Balancing Creativity and Caution


I just finished The Creativity Crisis by Roberta Ness, dean of the University of Texas School of Public Health and an author on innovation in the sciences. While the title of the book sounds “alarmist”, the message is not. I found it well-balanced, though perhaps that displays my bias as an academic. (Who doesn’t have biases?) The thrust of the book is to examine the balance between creativity and caution in the scientific research enterprise through three lenses: economics, sociology and ethics. Each chapter ends with an executive summary with practical ideas, and peppered throughout the book are clear proposals for how to promote innovation while exercising sufficient oversight.

Each of the three main sections concludes with a chapter that exhorts organizations, both government and private, to participate in the promotion of creativity and innovation. The economics section closes with “Reinventing Meandering Exploration”, sociology with “Reinventing the Power of the Group”, and ethics with “Reinventing Freedom”. The economics section draws from an excellent book that I read maybe three years ago, Paula Stephan’s insightful How Economics Shapes Science.

The road to innovative discovery is often meandering, but as the forces of efficiency and shorter-term assessment bear down, the pragmatic scientist often chooses the road of caution over the risk of meandering. I can see this in my own research; I only started to take on more risky projects post-tenure. They are less likely to get funded, and may not lead to a slew of publications; therefore I risk falling out of the virtuous (or vicious) cycle of grants and publications. Once you fall out, it’s hard to get back in. Even so, I still have a stream of low-hanging-fruit projects that will continue to yield publications – and are also more amenable to undergraduate research. But if my recently submitted grant gets funded, I might try to put a band of undergraduates on a more challenging, complex project next summer.

The book is chock-full of interesting data, vignettes and thoughtful suggestions. It also has those moments where I nod vigorously in agreement. I’ll share one from page 160. The context is increased government regulations in a variety of reporting areas. “Effort reporting is yet another questionable federal requirement. Investigators must regularly certify the time spent on each funded project. A survey [concluded that] ‘effort is difficult to measure, provides limited internal control value, is expensive, lacks timeliness … and is confusing when all forms of remuneration are considered.’ In a word, effort reporting is useless.” My institution, like many others, has seen an increase in the number of administrators hired, partly because of increased compliance measures across a slew of areas. A few bad actors cause extra paperwork for everyone. I expect this compliance trend to continue, unfortunately. The book cites an estimate that 42% of a scientist’s time goes to administrative tasks – much of which could go to actual research instead.

As with the rest of the book, the closing chapters also provide food-for-thought and helpful suggestions. Ness suggests that “science would benefit from a common definition of what is innovative”. She suggests “surprise with a use”. Rather than the words creative or innovative, the word surprise makes things clear because “when we see true novelty, originality and innovation we are struck by its cleverness, its unexpectedness.” The use part is equally important. Creativity isn’t just an idea; it is steeped in a social context. “Culture is the arbiter of what is considered useful. Only those things that are socially accepted and impactful can be useful.” So to some extent, innovation will be recognized partly in hindsight as it is adopted, rather than when it is first presented or unveiled.

At her institution, a survey of faculty members surfaced the following barriers to creativity. “Innovation is a great thing until you try to get funded. It is difficult for a junior faculty member to take that kind of risk. Emphasis on short-term performance undermines risk taking and departing very far from what has worked in the past. We need to have time to be innovative.” And the best one: “Challenges to innovation are mandatory annual training, administrative drudgery, silly paperwork.”

As I have been attempting to inject a dose of creativity into my General Chemistry class, I wonder if I am providing enough incentive for students to take risks. My attempt a couple of years ago to de-emphasize grading didn’t sit well with the students – many of them were much too focused on doing what was needed to earn the grade (they were Honors students after all). But in the grand scheme of things, I don’t think I provided enough of an incentive for risk-taking. I tried to improve on this in my recent Alien Periodic Table assignment. I gave the top grade to a student who didn’t get the “ideal” answer, but came up with a very creative and well-justified solution. I will have to think about this more carefully when I evaluate my next scaffolded assignment due next week. (The students invent new compounds using known elements and discuss structure and properties.)

The other obstacle I face is how to allow some meandering exploration. The syllabus feels so tight in chemistry, particularly when your class is a pre-requisite to the next one in the sequence. I introduced take-home exams as a self-test (to free up more classroom time), and I have some room for creativity in my problem sets, but after grading the most recent exam and problem set, I wonder if I’m pitching things at the wrong level, or if my expectations are unrealistic. That will be the subject of another post.

Sunday, October 16, 2016

The Robustness Principle


Another week, another book read. This time it was The Seventh Sense by Joshua Cooper Ramo. The premise of the book is that we are at the edge of a new revolution, governed by understanding networks and navigating complex systems, and that we need to develop a seventh sense to survive or thrive. I found the first half of the book gimmicky-sounding, like one of those self-help books that promises you the secret to getting ahead in life. This led to me skimming over certain parts. Things get more interesting in the second half when the author gets into detailed examples of technology. I learned a lot about internet protocols, computer viruses, and the “disappearing problem” in artificial intelligence (AI) research.

There are many vignettes in the book, but the one I want to discuss today is Jon Postel’s rule for the transmission control protocol (TCP): “Be conservative in what you do (or send), be liberal in what you accept.” Since I’ve been reading and thinking about robustness and its connection with higher education, I’m going to try to push Postel’s rule a little further and see how it might work.
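To make the rule concrete, here is a minimal sketch of the idea in Python – my own toy example, not anything from Ramo’s book or Postel’s protocol documents: a reader that is liberal about the date formats it will accept, paired with a writer that is conservative and always sends one canonical form.

```python
from datetime import datetime

# A toy illustration of Postel's robustness principle (my own sketch):
# be liberal in the date formats we accept, conservative (canonical)
# in the single format we emit.

ACCEPTED_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y", "%d %b %Y"]

def parse_date_liberally(text: str) -> datetime:
    """Accept a date written in any of several common formats."""
    cleaned = text.strip()
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(cleaned, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {text!r}")

def emit_date_conservatively(when: datetime) -> str:
    """Always send dates in one canonical form (ISO 8601)."""
    return when.strftime("%Y-%m-%d")

if __name__ == "__main__":
    for messy in ["  31/10/2016 ", "October 31, 2016", "2016-10-31"]:
        print(emit_date_conservatively(parse_date_liberally(messy)))
    # Each line prints the same canonical date: 2016-10-31
```

The asymmetry is the point: tolerance on input keeps the conversation going when the other side is sloppy, while discipline on output avoids passing that sloppiness along.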

If you read higher education news regularly, you will have seen a fair bit of chatter about how to break the “iron triangle” of higher education: Access, Cost, Quality. The dilemma is that you can choose to do two well at the cost of the third. (Cooper Ramo has a similar triangle for networks – you can read his book if you’re interested in finding out more.) Do you want wide access at low cost? Quality will suffer. How about high quality and wide access? That will cost more than a fortune. Could you retain high quality at lower cost? Possibly, but this will only be available to a small segment – and who will choose the beneficiaries?

The U.S. system of education is modular in three ways: (1) there is a large range of institutional types that cater to different segments of the population, (2) courses are arranged in modular units allowing for some mix-and-match catering to different goals/students within an institution, (3) the modularity allows for mid-stream changes both intra- and inter-institution. (There are some very specialized institutions attached to specific career goals but we won’t consider these for now.) At many colleges and universities, you do not have to declare a major for admission. This creates some messiness for administrative planning; you might say it is less efficient. However, the advantage is flexibility – robustness perhaps.

Since chemistry is what I am most familiar with, I will use that as an example. Note that the chemistry curriculum tends to be more hierarchical than many other areas, so there will be significant differences, but the general principles may still apply. General Chemistry, the first-year college sequence (two semesters or three quarters), covers the same core content across institutions. Textbooks have likely played an important role in the “standardization” of this curriculum. Different departments and instructors will have different emphases. They may have different lab experiments, but the core goals are similar across programs. This means that it is relatively easy to transfer credit across institutions. Our department is liberal in what it accepts coming in, while the program we deliver in General Chemistry is conservative and does not change much from year to year. Something similar takes place in Organic Chemistry in the second year, and Physical Chemistry in the third or fourth year.

There are elective classes that may vary a fair bit from one institution to another depending on whether they are offered as a one-semester or two-semester sequence (ditto for quarters): analytical chemistry, inorganic chemistry, biochemistry, advanced organic chemistry, interdisciplinary courses (bioorganic, organometallic, biophysical, physical-organic) and more specialized topics (instrumental analysis, computational chemistry, electrochemistry, materials, polymers, etc.) Different institutions may have different program requirements, but we will accept transfer credit that counts towards the major, i.e., there will be X number of elective units in chemistry that constitute the requirements. Our program, like many others in the U.S., is accredited by the American Chemical Society (ACS). The ACS has certain core requirements that must be fulfilled, but there is also some flexibility in how you fulfill them. Overall, I would say the system in chemistry is robust.

As a private liberal arts college, we are costly. I’d like to think we are high-quality – at least our students who go on to graduate school tell us that we prepared them very well. As to access, I don’t know of any online programs in the undergraduate college that might provide wider opportunities. Our department has a number of students who transfer in. While we accept their credits, many of them struggle through the program, suggesting that there is some mismatch. Some have no problem adapting, and others take a semester or two to find their footing. Transferring in is one way to reduce the overall cost to the student (in terms of tuition). There are financial aid packages, but those are limited, and therefore access is constrained.

Is there a robust solution to the iron triangle? I’m not sure, and I don’t think the solution lies in the selective liberal arts college as an archetype. In combination with other institutions and initiatives, perhaps. The larger public state institutions may be the place to break the iron triangle – but they are beset by their own problems, not least of which is diminishing state funding. Can the programs be transformed to run “lean” without sacrificing quality? Difficult to say. I’ve been mulling over whether an online-hybrid of sorts can handle some of the basics encountered in General and Organic chemistry. Certainly adaptive learning platforms claim some success in this area. But a high-quality platform is not cheap, even if it may provide widespread access. Would a college or university host and maintain such a platform at very low cost to students (as opposed to a for-profit ed-tech provider)? I don’t know.

The problem with complex technological systems as they grow (inevitable according to Cooper Ramo and Ellul) is that we humans understand less and less, and that blurs the meaning of robustness. When AIs are tasked with making determinations, what do robustness rules look like? It would be scary to aim purely for efficiency. Biologically, at least, that leads to a loss of robustness that could end in extinction when the environment changes significantly. Then again, we human-cyborgs and our machine counterparts seem to be the most successful so far in altering the environment to suit our needs, at least in the short term. Will our robustness be measured in the hundreds of years since the industrial revolution? It is a mere blip in the eons of time.

Tuesday, October 11, 2016

Being Robust


I just finished reading Arrival of the Fittest by Andreas Wagner, an evolutionary biologist at the University of Zurich. Early on, the book hinted that it would reveal the secret of how new innovative structures are formed in living systems. I admit I was a little disappointed at the end, because the argument hinged mainly on a trade-off between robustness and efficiency. That being said, I learned some interesting things in the book. Chapter 6 (out of 7 chapters) was marvelous and got me thinking about how information is stored, accessed and acted upon – and how different the “library” of metabolic reactions is from the way we would organize a library. Wagner’s take-home point is to look at genotype networks, but you’ll have to read through his book to understand what these mean.

The book did make me think about robustness and efficiency, and I found my mind wandering to education systems instead of molecular-based living systems. One way an education system can be robust is by being modular. This is a key characteristic that distinguishes the American education system from the British system. I’m familiar with the latter, having gone through many years of it growing up in a Commonwealth country. In fact I chose the U.S. for tertiary education (turning down a nice scholarship to study pharmacy in the U.K.) because I have always been interested in teaching and I wanted to experience the U.S. system. (Timing-wise, I was aided by generous financial aid packages from liberal arts colleges wanting to expand their international student pool from under-represented countries. I give much thanks to the diverse and sometimes messy U.S. system for this.)

In the British system (which itself is changing) that I went through, one didn’t choose courses. There was no chemistry, honors chemistry, AP chemistry, or any other flavor. I was shunted into the “science” stream and started taking chemistry in my fourth year of secondary school, equivalent to “Form 4” (for those of you who know the system). The curriculum and syllabus were set. I’ll call it Form 4 chemistry. The next year I took Form 5 chemistry. I honestly had very little idea what was going on in class, memorized a lot, and made it through the national exams – a grueling two-week affair in which one sat for multiple papers across all of one’s subjects. It was only in Form 6 that things started to click. I still thought biology and physics were more interesting, but I found I was getting good at chemistry (at least judging from exam scores).

Coming to America was a bit of a shock. I was not used to the pick-and-choose curricular options. Yes, there were pre-requisites and co-requisites to help you keep on track, but in a small liberal arts college, you can often talk your way out of them (I did on multiple occasions, mainly because of timetable conflicts). The chemistry curriculum also had separate modules that sometimes overlapped in terms of course material, so you’d see the same thing, though not in exactly the same way, in two or more different classes. It’s not the most efficient way of learning the material. (That’s partly why the British system takes three years while the American one takes four.) For example, I saw a small section on orbitals and hybridization in organic chemistry at the same time I was seeing them in a different context in inorganic chemistry. And then the next year, I saw them again differently in physical chemistry. Physical organic chemistry and advanced inorganic chemistry once again showed me different facets, although there was much overlap. (Chemical bonding is one of my specialties, so I’ve thought deeply about this topic.) Then there was more in graduate school.

All this seemed messy and inefficient if the goal was to churn out cookie-cutter chemistry majors with the “content” topics they should be exposed to as undergraduates. If I had not gone to the U.S., I would have been taking generic University Year 1 Chemistry instead of my plethora of interesting choices. But I think by seeing how different professors chose to present sometimes the same material in very different ways, I learned how to think robustly as a chemist. By being placed in different “environmental” conditions, I learned to adapt. If this sounds a little like evolutionary biology, maybe it’s because the two are analogous in some way. My training was robust, and I’ve found myself able to teach myself new chemistry and enter areas I was previously unfamiliar with. This is not to say that one system is clearly better than the other. Over the years I’ve seen parts of the U.S. system move towards narrower training, while other parts of the world with British-like systems move towards broader and more modular approaches. The two systems are no longer as different as they were when I was a student.

My department periodically assesses its curricular offerings and we have made some recent changes to both our chemistry and biochemistry major requirements. In some places, we have restricted choice, and in other places we have opened it up by loosening pre-requisites and co-requisites. This requires modifications in the affected courses, in the former cases for better streamlining, and in the latter cases for improved modularity.

I have seen a similar evolution in my own teaching. When I first started, my lectures were very clean and systematic, as I marched through the material according to my pre-conceived plan, being careful not to deviate (so that the beautifully designed plan would be carried out efficiently). Now I allow for a bit more messiness, which may involve going down a rabbit-hole or two depending on the questions and discussion with the students. I have an inkling that part of teaching students how to think critically and creatively is guiding them to become robust learners. Sometimes the students don’t appreciate it – they want the “quick, clean” answer so they can get on with their lives. But even though it takes more of my time, I should resist providing that quick, clean answer.

A final comment on the Wagner book: one consequence of extant life being robust to a certain extent is the suggestion that proto-life must have evolutionarily (in a chemical and physical, not biological, sense) “selected” for robustness. This means that it will be challenging to use the often-scientifically-successful reductionist approach in trying to understand the riddle of life. A much more complicated and messier systems chemistry approach will be needed as we move forward.

Saturday, October 8, 2016

Deep-Fried Data


Sometimes reading one thing on the web leads you to another thing, and then you discover something humorous, well-written and very thought-provoking. Yesterday, for me, that was Deep-Fried Data. I encourage you to read the post in its entirety; it is the text version of a talk given to the Library of Congress. The post has a link to a video if you prefer the audio-visual experience. I personally prefer the text because I can chew over the arguments slowly, at my leisure.

It’s a relatively long article (from a 15-20 minute talk), but I’ll quote three short sections to whet your appetite.

“Today I'm here to talk to you about machine learning. I'd rather you hear about it from me than from your friends at school, or out on the street. Machine learning is like a deep-fat fryer. If you’ve never deep-fried something before, you think to yourself: ‘This is amazing! I bet this would work on anything!’ And it kind of does...”

“I find it helpful to think of algorithms as a dim-witted but extremely industrious graduate student, whom you don't fully trust. You want a concordance made? An index? You want them to go through ten million photos and find every picture of a horse? Perfect... You want them to draw conclusions on gender based on word use patterns? Or infer social relationships from census data? Now you need some adult supervision in the room.”

“People are pragmatic. In the absence of meaningful protection, their approach to privacy becomes ‘click OK and pray’. Every once in a while a spectacular hack shakes us up. But we have yet to see a coordinated, tragic abuse of personal information. That doesn't mean it won't happen. Remember that we live in a time when a spiritual successor to fascism is on the ascendant in a number of Western democracies. The stakes are high.”

The article is particularly thought-provoking as I have been warily watching the rise of data analytics approaches to solving the problems of higher education. When behemoth companies start plunking down gobs of money into selling products and services to universities, the faculty should really take notice. The calls for “data-driven” assessment that pervade our institutions should make us pause and ask questions. What is this data for? How is it chosen? What does it actually tell us? What is left out in the choice? How is the data reduced into a digestible sound bite, often some numerical value? Who owns the data? How would it drive decision-making and strategic planning?

My ears now perk up when I hear the phrase “data-driven”. It’s something like “best practices”, usually implying one best practice determined by whoever is bandying the phrase. As a scientist, I’m strongly in favor of using data to support an argument, or to make a case. When I was department chair, I would go to the administration with a data-driven argument, accompanied by graphs and tables in a clear and pre-digested format to get what I needed resource-wise. It’s an effective tactic given the way the winds have been blowing in the increasingly all-administrative university. Put several of these tactics together and you get a strategy. But is it a wise strategy?

With data science programs popping up all over (such as this one at the University of Illinois), fully online of course, and costing a fair chunk of change, the lure of big-data jobs sings its siren song. As long as the corporate world is infatuated with Big Data, there will be plenty of takers. I’ve never taken any of these courses, but I sincerely hope that students learn how to interrogate their own assumptions as they mine their data quarries, looking for the riches hidden within. It is human nature to see patterns and weave them into a narrative. But human choices are made – every step of the way. Humans design the algorithms. Choices are made in the underlying theoretical models. And as the layers get deeper and more complex, we start to understand less and rely more on the output happily provided by the black box.
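Here is a tiny, made-up illustration of what I mean (mine, not from the article): the same student “risk scores” tell two different stories depending on a cutoff that some human had to choose.

```python
# A toy example of how human choices hide inside a data pipeline:
# identical scores yield different conclusions depending on an
# analyst-chosen threshold.

scores = {"Alice": 0.61, "Bob": 0.58, "Carmen": 0.72, "Dev": 0.49}

def classify(scores: dict, threshold: float) -> dict:
    """Label each student 'at risk' or 'on track' using a chosen cutoff."""
    return {name: ("at risk" if s < threshold else "on track")
            for name, s in scores.items()}

if __name__ == "__main__":
    # Two equally defensible thresholds, two different narratives.
    print(classify(scores, threshold=0.50))  # only Dev flagged 'at risk'
    print(classify(scores, threshold=0.60))  # Bob and Dev flagged 'at risk'
```

Multiply that one small choice by the dozens of choices buried in feature selection, model design, and training data, and the “objective” output starts to look a lot more human.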

After all, everything tastes good when it is deep-fried.

Saturday, October 1, 2016

H.M.S. Sulphur, the other Beagle


Periodic Tales by Hugh Aldersey-Williams continues to delight with interesting vignettes. I recently read the sections on sulphur and phosphorus. I did not know that the H.M.S. Sulphur was sort of the sister ship of the much more famous H.M.S. Beagle. Instead of Charles Darwin as resident naturalist, it had a surgeon named Richard Brinsley Hinds collecting the specimens. The Sulphur set sail on its circumnavigation shortly before the Beagle returned to England.

Aldersey-Williams begins his section with references to brimstone in the books of Genesis and Revelation in the Bible. I’d always wondered about the pairing of “fire and brimstone” – it turns out that in ancient times sulphur was used as a disinfectant and a cleanser. One example comes from Odysseus of Ithaca who, upon returning home the long way around after the sacking of Troy and killing the suitors of his wife Penelope, gives instructions to “bring some sulphur to clean the pollution, and make a fire so that I can purify the house”. I didn’t know this, but apparently sulphur is still sold today for cleansing purposes, although the target is now greenhouses. The author adds: “Sulphur fires were used to combat cholera in the twentieth century, and sulphur was taken internally for digestive and other complaints.”

The H.M.S. Sulphur was formerly a battleship, hence its terror-invoking name to match its guns, but was then recommissioned as a survey ship. The ship’s captain, Edward Belcher, was an adventurous man who visited sulphur-spewing volcanoes and hot sulphur springs in the Americas. However, when they reached Asia, he was instructed to sail to Canton as part of the British naval force in the First Opium War, where the ship’s guns were brought into play.

Sulphur was one of the key ingredients in the quest for the philosopher’s stone. Every year when I teach General Chemistry, I query the students to see if they can figure out what the popular ingredients might be. This year, no one guessed sulphur, even though I tried to prompt them to think about colours that might be related to gold since, among other things, the stone was supposed to turn base metals into gold. Students are being exposed to less and less these days, perhaps. At least when we covered the mole (Avogadro’s number, not the creature), I had a clear vial of sulphur passed around the class – 32 grams of it! So now they’ve all seen the elemental substance.
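(A quick back-of-the-envelope aside from me, not from the book, for the non-chemists: the 32 grams is no accident – it is almost exactly one mole of sulphur atoms.)

```latex
n = \frac{m}{M} = \frac{32~\mathrm{g}}{32.06~\mathrm{g\,mol^{-1}}} \approx 1.0~\mathrm{mol},
\qquad
N = n \, N_{A} \approx 6.0 \times 10^{23}~\text{sulphur atoms}
```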

Aldersey-Williams ends his section on sulphur with a great final sentence worth quoting. It refers to the H.M.S. Sulphur’s return to England after its voyage. “The ship returned to a country where the inventor Thomas Hancock had just obtained a patent for the use of sulphur in the vulcanization of rubber, and the brimstone terrors of the book of Revelation had been sufficiently tamed that the name of Lucifer could be tolerated as a brand of matches.”

[For previous posts on Periodic Tales, see here and here.]