Tuesday, June 26, 2018

Three Fall Ideas


If a tree falls in the forest and no one is around to hear it, does it make a sound?

Okay, that first line is simply a pun on the title of today’s post. Now for the actual stuff. I’m a month into summer break and ruminating on ideas for my classes in the Fall. This past semester, I had two students help me generate in-class activities (and themes) to promote creative thinking while at the same time solidifying some of the more challenging chemical concepts. The photoelectric effect and the three-dimensional visualization of orbitals are two examples.

One challenge is to find more in-class time to run these activities. So, Idea #1 for my first semester General Chemistry class involves revisiting take-home midterm exams. I tried this two years ago with mixed results. The thought was that students would use the exam as a self-assessment. Study up; then take the exam under “exam conditions” – closed-book, timed, individual. Students received full participation credit for turning in the exam regardless of how they actually performed. I “graded” the exams so they would see how they did and learn from any errors. Unfortunately, some students did not adhere to the exam conditions (I could tell when grading) and others might not have reflected on their performance because mistakes didn’t affect their grade. The strong students still did well on the in-class closed-book final exam, but the weaker students did worse than in previous years. (The grades for that class are not included in the analysis I posted last month.)

Anyway, my plan is to include similar take-home midterm exams once more (under exam conditions), but to add a reflective component. After taking the exam, students go through it again and make any comments or corrections in a different color. In this second “reflective” stage they can consult their notes/textbook, take as much time as they want, and consult with their classmates. Then I will grade the exam based on their “best” answers. I fully expect that all of them will get close to full credit, i.e., it will be the same as giving full participation credit in my previous iteration, but the students will have engaged in their own formative assessment. I think they will be strongly motivated to do this well, and more likely to actually take the initial exam under exam conditions.

Idea #2 also relates to my General Chemistry class. My class will only be composed of incoming first-year students for whom this is their first college experience. For a subset of such classes, the college is running a pilot program to schedule a Magic Hour (yes, that’s what they called it) separate from regular class time. My class still meets for the usual three hours per week. But then there will be an extra scheduled hour for integrative activities that involve students thinking about the liberal arts, learning about college resources, academic advising, etc. The Magic Hour should not be used for more course content, but I’m free to structure it in any way I like. My plan is to invite colleagues from other disciplines to lead a discussion about the intersection between their field and the sciences. Our college has been pushing for the “first-year experience” to include more “integration” and for students to appreciate the “value of the liberal arts”. And yes, those are the buzzphrases used. In the past I had sprinkled bits and pieces within my regular course content, but I will experiment with moving and expanding those into the Magic Hour, thereby giving me more in-class time for those other creative activities involving learning chemistry.

I am also teaching Quantum Chemistry, one of the dreaded classes for our chemistry and biochemistry majors because of its high math content. The material is conceptually difficult and counterintuitive, and one needs to understand the math to get some sense of the abstractness of the quantum world. Students often get bogged down in the math. Some claim they “understand the concepts” but can’t do the math, which is an oxymoron in quantum mechanics. However, there might be an alternative way to assess student understanding other than the traditional exams I’ve been using for many years.

Hence, Idea #3 is allowing students to potentially replace their lowest midterm exam grade with a written reflection on the material covered on that exam. To do this, however, they will have to journal regularly each week. The reflection will then draw on their raw journal entries musing about what they’ve been learning in class – and hopefully consolidating their understanding. I plan to give them a sample of what a raw journal entry might look like in the first week of class. I’m hoping that all students will do some thinking and reflection on what they’re learning each week, regardless of whether they choose to write up the formal reflection. Yes, it will mean more end-of-semester grading for me, but since I have an unusually small enrollment in the Fall (half the usual size), it won’t be as much of a burden.

Those are my three ideas so far. I’m sure there will be more. I’m also narrowing down three research project ideas for my three new research students who are replacing the three students who just graduated. I have plenty of ideas but I need to narrow these down to give the students the best possible experience. One of my students will be a senior and the other two will be sophomores, i.e., just starting organic chemistry. The sophomores are both quite capable (at least judging from my G-Chem II class last semester) and my pitch to them was that my origin-of-life projects would also help them see and learn organic chemistry in a different context. One of the many goals of involving undergraduates in research early is the positive spillover it might have on their coursework, and also on their engagement with their major interest!

Thursday, June 21, 2018

The Values of Note-Taking


“Can I take notes using my laptop?”

I get asked this question every year by a small handful of students. The vast majority of my students still come to class on the first day with notebook in hand and a pen or pencil. My stock answer is “Yes, if your laptop use doesn’t distract classmates, but I should let you know that most students find it easier overall to take notes by hand.” There are several reasons for this. Chemistry class features drawing molecular structures, writing chemical equations, doing some math, and sketching experimental setups. I also write quickly on the board and my class proceeds at a relatively brisk pace, as my students will attest.

Recently, a few students started using their tablets to take notes. Last semester, one of my new research students did this regularly. In my general chemistry class, another student (who asked my permission on the first day of class) seemed to take notes with ease while doing really well in the class. I decided to ask these students how and in what contexts they find note-taking with the tablet efficient. It’s only a sample size of two, but I learned that the students tend to use the tablet in their science classes because (a) the instructors allow it, and (b) note-taking involves a combination of typing text and drawing structures/equations/graphs. Apparently it’s a bit more challenging to just keep typing straight text in a humanities or social science lecture, and also some of these instructors don’t allow the use of electronic devices. Both students were very fluid in their use of the tablet, and they both had apps that worked well for them.

“How can I learn to take good notes in your class?”

I’ve never actually been asked this question in class. Not sure why. Perhaps there was a tacit assumption that students should know how to take notes after being in school for so many years before college. I assumed it. The students themselves also assumed it, even if they had suboptimal strategies. It was only after several years of teaching that I started putting together a “how to study for this class” guide. One point exhorts students to read the relevant sections of the textbook ahead of time so that they don’t try to write down every single word I say (or write) in class. That way they can focus on the most salient and important things. I’m not sure how well this has worked, although last semester when I had several students in my office hour, one remarked to another how reading ahead of time had really helped her follow along in class. I quietly smiled to myself.

I’m sure there are a plethora of note-taking advice websites. (I haven’t checked.) More interestingly, from Ann Blair’s book Too Much To Know, I learned that instructional manuals on note-taking began to flourish in the seventeenth century. The second chapter of her book is devoted to the art and science of note-taking from a historical perspective. (For highlights of the first chapter on information management, see my previous post.) I also learned that “the first manual solely devoted to excerpting, or note-taking from reading, was composed for students in the advanced or rhetoric class at Jesuit colleges by Francesco Sacchini (1570-1626).” The translation of the manual’s title is “A Little Book on How to Read with Profit”. It was published in multiple editions and even had translations from the original Latin into French and German.

Blair discusses one potential source of the popularity of note-taking manuals: extracurricular instruction. “Early modern professors earned extra income by teaching private courses on topics that held special appeal to students, typically because they were fashionable or practical, including courses on study methods and note-taking.” Need that extra edge as a student in a competitive world? Sign up for Complete Note-Taking Best Practices from renowned Professor So-And-So! But even so, manuals were incomplete. Different courses and instructors had their idiosyncrasies. Furthermore, there was a belief that the best methods should be kept secret to maintain a competitive edge. Sounds like what the alchemists would do. Why be secretive? One advice-giver suggested that “[other] people would be most impressed by achievements that they did not understand.”

A historical survey of annotations reveals similarities and differences from practices today. Blair writes: “Pupils typically wrote down commentary dictated to them in class; and in books of all kinds one can find annotations that are irrelevant to the text, from family or other records entered in the flyleaf of a book for safekeeping, to doodles and penmanship practice, to recipes, prayers, or poetry written down in a book apparently for the convenience of the writing surface it offered. In the main, however, especially in Latin books, early modern annotations in the margins and flyleaves were reading notes – not personal responses of the kind found in more recent periods, but notes primarily designed to facilitate retrieval and retention of interesting passages. Annotations might make corrections to the text, add cross-references, … words of praise or criticism…” If not for those annotations by the Half-Blood Prince in his copy of Advanced Potion Making, the sixth Harry Potter book might have been a lot less interesting.

One place where we do teach students specific note-taking skills is the first-year General Chemistry laboratory. Keeping a good lab notebook is an important skill for the chemistry student (and potentially future scientist). While there are specific protocols to follow, students are also encouraged to write down their observations, and the thought process that led to their tentative conclusions. Nothing is erased. Errors are cleanly crossed out with a single line. Research lab notebooks have been a boon to historians of science piecing together stories of discovery, often different from the cleaned-up version coming from the scientist’s own recollection many years later. Memory is a fickle thing.

But because memory is fickle, down through the ages, an individual with a seemingly prodigious memory was, according to Blair, “highly regarded as a sign not only of intellectual ability but also of moral worth.” Scholars in the old days spent a substantial amount of time memorizing large amounts of material. In the widely reprinted note-taking manuals by the Jesuits (Sacchini and Drexel), memory was improved first by the act of writing the notes, and then by re-reading them later to aid recall. Before the laptop, tablet or smartphone, the notebook was what you carried around – since you wouldn’t want to lug a pile of books around. You could study it anytime, anywhere!

Prior note-taking also helps when you are writing, according to Drexel, who “asserted that all abundant writers relied on collections of excerpts gathered over years of reading… [but] offered no empirical evidence to support his claim…” Apparently the elder Pliny was a prodigious note-taker, according to his namesake nephew Pliny the Younger. Thomas Aquinas, on the other hand, seemed to compose more from memory than from notes, at least according to the historical information available. But if you have stacks and stacks of notes, how will you find what you need? Blair’s book is about information management down through the ages. It turns out that one Thomas Harrison, while in prison sometime in the 1640s, wrote up a design for a “note closet”.


The picture above, based on Harrison’s description, comes from De arte excerpendi (1689) by Vincent Placcius. Apparently this influenced the great Gottfried Wilhelm Leibniz to have one constructed for himself. Mobile slips of paper were the key “invention”, not so much the cabinet itself. A line can be drawn from slips of paper to the index cards of library catalogs. Dewey (of Dewey Decimal fame) even standardized the size of such index cards. And coming full circle, I regularly see students using a stack of index cards as they prepare for exams.

I used to regularly take notes while reading when I was in college and graduate school, back before widespread Internet use and search capability. I no longer do so, but I’m not sure why. Laziness perhaps. Or searchability. Blair’s chapter is making me ponder the value of taking up the practice again. Maybe I read too lazily, and therefore do not learn as much as I should. Writing blog posts on what I read functions partly as an external memory aid, a searchable one in particular. I’m also pondering whether I need to do a bit more to help students take good, useful notes in my chemistry classes. Or maybe I should lead a discussion on what is known about Effective Learning Techniques. More to ponder.

Monday, June 18, 2018

TMI Way Before the Internet


We are said to be living in the Age of Information, thanks to the advent of widespread access to the Internet. Too Much Information, actually; the deluge is now considered a problem. Complaints abound about having to wade through the mounds of trash to find a sliver of gold. Fake news is everywhere and harder to distinguish from real news, whatever those mean. Oh, for the days of yore, when life was simpler.

Except it wasn’t.

That’s the thrust of Ann Blair’s Too Much to Know, subtitled Managing Scholarly Information before the Modern Age. Published in 2010 by Yale University Press, it is definitely an academic book, dense with information. It’s also very fascinating if you enjoy reading history chockful of detailed and thoughtful analysis. I’m not quite halfway through the book – it’s slow reading – but I’m enjoying it thoroughly!


In the book’s introduction, Blair writes: “We complain about overload in almost every field, from hardware-store stocking to library holdings to Internet searches. A Google search for ‘information overload’ itself generates more than 1.5 million hits, with the promise of solutions from office supply stores, management consultants, and stress relief services, among many others. But the perception of and complaints about overload are not unique to our period. Ancient, medieval, and early modern authors and authors working in non-Western contexts articulated similar concerns, notably about the overabundance of books and the frailty of human resources for mastering them (such as memory and time).”

The objects of Blair’s study are reference materials: reference works, bibliographies, encyclopedias and their forerunners, dictionaries, quotation lists, florilegia, and more. Writers in antiquity complained about information overload and the abundance of useless books. And this was before the printing press. The famous Seneca of Rome chided his “well-to-do contemporaries [wasting] time and money accumulating too many books. Instead Seneca recommended focusing on a limited number of good books to read thoroughly and repeatedly… This position exemplifies an effective and often dominant method of information management – to limit the quantity and nature of information to an established canon of works…”

But how do you choose that canon? How do you pick out the good from the bad? Even trashy works might contain some gleaming nugget of truth. How do we save the best for posterity? How do you determine relative importance? Well, someone has to read a bunch of material and then make selections. Maybe it was Cliff, making a study summary for himself. You might not even have to read the established canon. Just read Cliff’s Notes on those books and, thanks to Cliff, you’ve saved yourself lots of time!

According to Blair, “one of the great feats of information management in late antiquity was the composition of ten books of ecclesiastical history by Eusebius (260-339 CE), who worked with the support of a large staff to excerpt from the abundant holdings of the Library of Caesarea.” Copying was a laborious process in those times, and many old books have been lost over time; we only know of their existence through human compilers and early encyclopedists – before the word encyclopedia was invented. I loved reading encyclopedias as a child. I haven’t touched a physical copy in years now that I can just Google-It. But I was particularly pleased that on a family vacation last week, my ten-year-old nephew regularly regaled us with relevant “Did you know…?” facts from his reading of encyclopedias. I’m glad kids these days still enjoy such an activity, and I learned some new facts while enjoying a relatively internet-free vacation.

Of the many reference genres that Blair discusses, I particularly enjoyed learning about florilegia. The term is derived from the Latin flores for flowers and legere for choosing/selecting. I like the term. It conveys the impression of carefully choosing flowers and assembling them into an artful arrangement. They were used in teaching during medieval times. Some reinforced the canon of the Middle Ages. (I learn from Blair that in descending order of citations, these are the Bible, church fathers, Ovid, Virgil, Horace, Cicero, Juvenal, Lucan, Seneca.) Other florilegia excerpted relatively unknown authors, keeping their work alive. And since really ‘old’ school meant being able to quote the pithy saying of an authority to bolster your argument, florilegia were likely the Cliffs Notes of an ancient generation.

If you follow my blog regularly, maybe it helps you with information management. While I did not consciously intend it to be a florilegium, I find that I do excerpt quotes from authors that I’ve read. Writing about the books I’ve found interesting and why also allows me to manage my own information intake. I’ve used my own blog to look up things I’d previously written about. And thanks to the Search function, ubiquitous in the age of the Internet, I don’t have to index my blog. I did start adding keywords about 6-12 months after starting my blog to help with management. Perhaps one of my book reviews led you to read a book in full that you found worthwhile. Maybe what I’m doing is not too different from the compilers in antiquity. I often learn about other books through blogs that I read.

A final note about Blair’s first chapter focusing on information management. What was the impact of printing? Blair argues that “general cultural consequences of printing are particularly hard to disentangle from those of multiple other [specific] cultural changes under way during precisely the same time.” She provides examples from both Western Europe and China, where the invention of printing (at different times) coincided with other concurrent discoveries and movements. Yes, books became much cheaper. Yes, errors that were introduced became more widespread. Yes, there were now a lot more trashy books and reading materials. But I learned from Blair that one huge new invention was the book cover and title! Previously, manuscripts did not have title pages. They were referred to typically by their opening words.

I like Blair’s discussion on this topic so I’ll quote her. “By contrast, a printed book needed to appeal to buyers who had no advance knowledge of the book, so the title page served as an advertisement, announcing title and author, printer, … and also additional boasts about useful features – ‘very copious indexes’… Title pages occasionally made deceptive claims, proclaiming novelty where there was little, or none.” Who would have thought that the invention of printing would lead to book titles being the forerunner to Clickbait! Why, I even spend at least a few minutes on each blog post trying to think up a catchy title! And if I’m too lazy, I try to edit the lede so that it sounds attractive. I try to convince myself that I’m honing my writing skills to be a better communicator, but maybe I’m subconsciously trying to attract a click!

And to get you to come back? I will discuss Blair’s fascinating chapter on Note-Taking. Interested in how and why scholars and students took notes? Did you know that there are instruction manuals for note-taking? Clever strategies and devices? How to organize all those notes you’ve taken? That’s coming up next!

Saturday, June 16, 2018

Balancing Starfish and Spider


The Starfish and the Spider champions the counterintuitive capabilities of decentralized organizations. Authors Ori Brafman and Rod Beckstrom are startup entrepreneurs. The opening chapter grabs you with two salient examples: MGM fighting peer-to-peer outfits such as Napster and its quick-mutating cousins; and the Apaches successfully fending off the Spanish conquistadors where earlier tribes had speedily succumbed. The common thread? A smaller, seemingly weaker, decentralized organization somehow holds its own against the behemoth. Echoes of David versus Goliath.


Brafman and Beckstrom have chosen the spider and the starfish as the archetypes of centralized and decentralized organizations. To determine whether an organization is more like the spider or the starfish, the authors ask a series of questions. Is there a person in charge? Will it die if you chop off its head? Are knowledge and power concentrated or distributed? Can you accurately count its ‘membership’? Are working groups self-funded? The spider has a central nervous system with a controlling head. Chop off its head and the spider dies. In the starfish, a chopped-off piece can regenerate a whole new organism that can further proliferate.

The book is organized around starfish examples such as Alcoholics Anonymous, Skype, Wikipedia, and Craigslist, and investigates how these organizations ‘function’. Instead of a centralized Big Kahuna CEO, decentralized organizations grow because of people called ‘catalysts’. The catalyst acts as a peer rather than the boss. Collaboration dominates over giving directives. There is a high tolerance for ambiguity and less-than-orderly situations. Catalysts are inspirational rather than powerful; they work behind the scenes rather than hog the spotlight; they trust rather than attempt to control.

That being said, the authors recognize that whether an organization will be more effective centralized or decentralized depends on its mission, its environment, and a host of other factors. There is a sweet spot that changes with the times. Evolve and adapt, or die. Many organizations are spider-starfish hybrids. Where is the sweet spot? The authors suggest the following broad principles. “In any industry that’s based on information – whether it’s music, software, or telephones – these forces pull the sweet spot towards decentralization… [also] if there’s a reason for anonymity… But at the same time, other forces nudge the sweet spot toward centralization… the more important security and accountability become in a given industry… when services are unfamiliar…”

I’m in the higher education industry, and my organization is a traditionally organized university. At the moment, much of the bread-and-butter of my job is decentralized. I decide what content to teach and what pedagogies to use. I have some choice over when I teach time-wise, and how to allocate my time amongst different activities. I decide what research to pursue, how I will mentor my students, which students I will accept into my research group, what committees I am willing to serve on, and what I do with my time when school is not in session. But there are constraints, and those constraints are increasing over time. Higher education is becoming more centrally organized, with an increasingly powerful central administration. You can tell by the increase in sheer paperwork, not to mention being constantly invited to participate in extra meetings. Higher education is being financially squeezed here in the U.S., and not surprisingly, it has responded with further centralization. Brafman and Beckstrom explain why this happens through many examples. (You can read their book!) If you’re in traditional academia, I bet you’re seeing increased centralization as your institution unveils new approaches to ‘maintain its competitive edge’. More tracking. More assessment. Amass more data.

The Starfish and the Spider also made me think about the chemical origins of life, which happens to be one of my research interests. As molecules self-organize into systems, heralding the beginning of metabolic cycles, there is likely a sweet spot between centralization and decentralization. Life operates at the edge between order and chaos. An overly centralized system will be wiped out at the next catastrophe, scattered to the wind so to speak. An overly chaotic system achieves no lasting mission as it careens from one state to the next. Different organisms have their niche sweet spots, at least until resources run scarce or a new predator arrives on the scene. Catalysts are also the heroes of biochemistry. Without them, the richness of life as we know it would not exist. The origin of metabolism is rooted in catalysis. Energy transduction drives chemical evolution – and we have it decentralized in mitochondria throughout our cells.

What does decentralized chemistry look like? It’s messy, complex, seemingly chaotic, and difficult to analyze. In chemistry lab classes, students much prefer the ‘clean’ chemical reaction. Avoid contaminants. Reduce the unwanted side products. Prebiotic chemists are increasingly exploring messier conditions, because interesting life-like systems do not emerge from clean reactions. These messy systems are seemingly inefficient, at least when compared to a clean system. But perhaps efficiency isn’t always the goal. My students would like their learning to be efficient. So would I. But I find that students often use strategies that lead to efficient learning at only a superficial level. They want to know the superficial answers more quickly and efficiently. Deeper learning is likely a messier process of grappling, sometimes in the dark with slivers of occasional light. And once I’ve consolidated my learning, I’m not even sure I can articulate how exactly I’ve done so. Yet somehow I trudge on trying to be a catalyst for my students’ learning. One thing I enjoy about teaching – it’s never boring.

Friday, June 8, 2018

An Illustrated Guide to Kappa Life


Guest blog post on the origins of kappa.
=====

An Illustrated Guide to Kappa Life

“. . . creepy water-dwellers that looked like scaly monkeys, with webbed hands itching to strangle unwitting waders in their ponds.” – Harry Potter and the Prisoner of Azkaban
This brief mention in Harry Potter was my first encounter with kappa. According to traditional Japanese folklore, kappa are a type of supernatural creature that lures and pulls people into water.
Recently, however, I encountered a different depiction of kappa.

Photo 1: The water-loving kappa. Literally, “river child” in Japanese.

An Illustrated Guide to Kappa (カッパの生活図鑑), by Kunihiko Hisa, portrays kappa as generally harmless and merely fun-loving creatures.

I couldn’t tell how much of Hisa’s Illustrated Guide was made up by him, as opposed to drawn directly from folklore. Below, I paraphrase parts of his book, which you can contrast with Wikipedia’s traditional depiction of kappa.

Kappa Anatomy
Kappa can walk on land but will die if their bodies get too dry. So they are usually found around lake shores, riverbanks, dry river beds, wetlands, waterfalls, and ponds.

The “plate”: Their most distinctive feature is found on their heads. Functioning like a sponge, the plate’s moisture level tells kappa if they must hurry back to water. It also regulates body temperature through evaporation.

Photo 2: Hair around the “plate” protects the scalp and enables kappa to sense the flow of water and detect movement of fish nearby.

The shell: They have shells like turtles but webbed feet like frogs. You can tell males and females apart by the bottom of their shells: male shells have a pointed tip; female shells have a dent.

Photo 3: Kappa love napping on water, floating shell-up. The shell not only protects their bodies from attacks by predatory birds but also absorbs vitamin D from the sun! Another notable feature of kappa anatomy is the scent gland, located near the bum. If attacked, a kappa can release a gas of intense odor…

Built for underwater life: While swimming, their earlobes and noseholes shut to prevent water from entering. A “lid” also prevents water from entering the trachea. Meanwhile, their eyes have a protective transparent membrane. 

Photo 4: Kappa can spend any length of time underwater, as long as they aren't exercising rigorously. They don't actually breathe underwater, relying instead on oxygen absorbed through the skin and stored up in dedicated organs. Extreme skin dehydration is fatal.

Kappa Diet and Tools
The omnivorous kappa eat fish, amphibians, small birds and eggs, small animals like mice, and insects like dragonflies. Contrary to legend, Hisa claims, they do not gang up to drag horses into water. They do fight with otters but don’t eat them.

It’s true, however, that they like cucumbers. Their plant diet includes the seeds of watermelons, gourds, and pumpkins, as well as human-cultivated produce.

Since kappa don’t use fire, they can’t make clay or metal tools. But they do have the obsidian knife. Its glass-like blade can cut as sharply as metal.


Photo 5: Fermenting fruits into alcohol and storing it in dried-out gourds, corked with a piece of wood. 

Kappa Society
Generally, kappa live in family units. The father and mother work together to raise children. Children later leave their parents to form temporary, gender-segregated groups of young mature kappa. They eventually meet, marry, and start their own families. Kappa who do not marry tend to live alone.

Photo 6: Kappa also form a larger tribe with their “relatives” – those who live within the same region of river, lake, or wetland. Annually, the tribe gathers to exchange information and discuss conflicts with neighboring tribes.

Elderly kappa who no longer have families sometimes give up their homes to young couples. They then roam as young kappa do and visit many homes, teaching children how to make stone tools, hunt, and identify medicinal plants.

Photo 7: Kappa can live where a body of water is present. For instance, in a hole dug by the riverbank, under a giant fallen tree in the wetlands, behind a waterfall, or even in limestone caves.  


Photo 8: Underground home with a spiked air vent (the spikes deter unwanted visitors).

Kappa at Work and Play
A kappa’s work consists of gathering food, making tools or medicine, and raising children. But they work only as little as they need to survive, never gathering extra resources to sell or barter.

Photo 9: Medicine making: Ingredients for treating cuts, blisters, and fever. Legend even says that a kappa’s severed limb can be glued back with this miraculous salve. 

They enjoy basking in the sun, water sledding, and catching dragonflies. Male kappa enjoy competitive games, especially wrestling. 

Photo 10: Wrestlers grip each other by the shell; grabbing hair isn’t allowed. Other competitive games include stone-skipping and balancing bamboo sticks.

Raising Young
Kappa females generally lay 1 or 2 eggs. It takes 3 weeks for an egg to hatch and 3 months for a child to mature.

The baby knocks against the egg shell, indicating it’s time to hatch. The mother helps break the shell and pours water over the newborn. She feeds it with food the father has obtained, first chewing the food into small, soft bits that the baby can eat.

Photo 11: The baby cannot stand or swim on its own yet. The father first helps the baby learn to float and swim by holding it gently in his hands. 

Kappa Battles
Kappa seldom fight. But once in a while, kappa do meet in “battle.”

Photo 12: Kappa battles occur when an area becomes overpopulated, or when livable land shrinks during droughts.

But they rarely fight to the point of death. Rather, they gather in large numbers on opposite ends of a river, armed with bamboo poles as a show of force. A representative from each tribe comes forward to compete in a wrestling match. The victor wins the land for his tribe. 
Kappa Seasons
Winter:
Most kappa hibernate in winter. 

Photo 13: In the north, they sleep all winter. But in the south, kappa occasionally rouse from sleep and eat nuts, fruits, berries, fish, or salamanders they have stored up. 

Spring:
Young kappa look for a mate to establish their own families. Most families lay eggs in May, which hatch into babies in June. Babies begin walking on their own around the end of June or early July, which is also the rainy season. Rainy season is the best time for walking practice, as children that venture far off are less likely to die of dehydration.
Summer:
Children will have learned to walk and swim on their own by summer. Summer is the most fun season of all, a time for play and learning.
Autumn:
When spring comes, children will leave their parents to live with other young kappa. 

Photo 14: Whence did kappa come? Theory I: Evolved from a primeval shell-bearing reptile. Theory II: Evolved from a proto-human with dinosaur origins. 

Author’s Postscript (excerpt)
Kappa sightings were reported up to the Meiji period, which is about 100 years ago. But now they are a thing of the past. 

Photo 15: They were good at surviving, given they could make tools, raise children, and make medicine. As they developed survival skills, they began to lay fewer eggs to avoid overpopulation, over-foraging, and destruction of nature.

Conflict between kappa and humans has been minimal. Of course there has been the occasional stealing of cucumbers and pumpkins from humans. But as the kappa diet consists largely of meat – especially fish – kappa have rarely caused destruction to crop fields. Rather, there have been more occasions for gratitude toward kappa, such as when kappa shared their medicine with humans.
But how has human life affected kappa life in return? Humans have caused floods and redirected water to their fields and man-made canals. These changes to the natural landscape have made life difficult for insects, fish, and birds, which in turn affects the kappa. 
During the Meiji period especially, riverbanks and shores were built up with concrete, riverbeds were excavated, and dams erected for all sorts of human purposes. To top it all, cities and factories have been dumping dirty water into rivers and lakes.
So, sadly, kappa have vanished. And it did not take very long to happen. (Kunihiko Hisa, 1 January 1993)

Tuesday, June 5, 2018

Learning from Bologna


The Bologna Process is a massive project that attempts to create a European Higher Education Area. Started in 1999, it involves 46 European countries and affects 4,000 institutions of higher learning. Named after the city that houses the oldest university in Europe (seal below), Bologna attempts to provide common reference points allowing European institutions to “recognize” each other’s credentials and provide greater mobility for their students and the subsequent skilled workforce. If successful (some might say when), Bologna could propel Europe to the forefront of competitiveness, creativity, and innovation, and make the region an economic force to be reckoned with.


Why am I writing about Bologna? I’ve personally experienced education systems in several countries as a student. I’ve also worked as a professor and a higher education administrator on two different continents with markedly different systems. I regularly follow world higher education news, and I had been loosely following the Bologna process through news articles over the years. With the semester over, I finally had time to delve into Clifford Adelman’s 220+ page report – The Bologna Process: Relearning Higher Education in the Age of Convergence. I’d previously read shorter pieces from Adelman on assessment. He writes clearly and provides supporting data and detailed analysis. I knew I would learn from his report.

In his report, Adelman goes into detail explaining how Qualification Frameworks work, with national-level examples from Britain, France, Germany, Sweden and the Netherlands. He also provides examples of disciplinary or field-specific qualification frameworks (“Tuning” in the jargon), including chemistry! Adelman discusses the challenges of reaching consensus across very different national systems, each having their own language and terminology. Assessment is one of Adelman’s specialties; I’m not going to go into great detail other than saying that reading Adelman made me appreciate the arguments made for constructing learning outcomes and assessing them. Somehow when an administrator at my own institution talks about assessment, it just sounds like a pain-in-the-ass checkbox task for the purpose of ass-covering. Adelman takes the ass out of Assessment, if you’ll pardon my punny and colorful language. It’s still a lot of work but it sounds more worthwhile given his strong arguments for doing the work and doing it right.

Instead, I will highlight three things that the report made me think about. The first is the challenge of putting together the European Credit Transfer System (ECTS). I’d recently worked on a revised credit hour policy at my home institution, where I argued for using credit-hours as a measure of student workload (rather than faculty contact hours). Five years ago, while helping start a new international liberal arts college, I helped to translate and merge two different systems, again based on student workload arguments. I even wrote a white paper for this process, whimsically titled “Modular Credits and Teaching Loads and Workloads, Oh My!” (think lions and tigers and bears in The Wizard of Oz). I didn’t re-read my report but the boldfaced last line reads: “No system is perfectly equitable. Byzantine bean-counting should be AVOIDED!”

I hadn’t read Adelman’s 2009 report back then, although I should have. Here’s what he says in a section suggesting what the U.S. could learn from ECTS. These two paragraphs will also give you a sense of his sharp and clear writing!

The U.S. credit currency, based principally on faculty contact hours (along with varying assumptions about student study-time per faculty contact hour), is a metric designed for funding and resource allocation, not as a proxy for learning. Its engine lies in the office of the Vice President for Finance, not the office of the Vice President for Academic Affairs. The student is incidental. Even in the matter of time, the same faculty load serves considerable differences in student work load. Something is wrong here. If we care about accountability for student learning, perhaps we need a redesign. Perhaps the Bologna experience might help us.

Before one redesigns a credit system, one needs some definitions, principles, and guidelines. The mechanical implementation of ECTS doesn’t really do it. Credit should define levels of student work (time volume and intellectual demand) that render courses in different disciplines comparable. In a way, the U.S. system tries to do that now by giving an extra credit for science labs or language labs or by heavier credit weighting of externships. But we do so in a rather arbitrary fashion, and wind up awarding the same number of credits for course work of widely varying intellectual demand. We give three credits for a course in Econometrics and three for Introduction to Sports, and brush such dissonances under the rug. This observation is not new.

The second thing I will highlight is Adelman’s suggestion that we pay attention to the rising importance of the Masters degree in Europe. A lot can be said about the evolution of different Masters degrees in different areas in different countries, but I’m not going to delve into the details. On previous occasions, I opposed expanding our department offerings beyond the undergraduate degree. (At my college, three departments have small Masters programs.) My argument, made in unison with my department colleagues, was that we should focus our energy and resources on undergraduates – and I think we do a fantastic job at it. Arguments I’ve heard from administration seemed to be more about how to increase revenue streams to the university. Being in chemistry, I think we’d be more expensive to run, not to mention we’d stretch ourselves more thinly. Adelman, however, makes a global mobility and expertise argument for Masters programs – it’s the first time I’ve felt open to the possibility.

The third interesting nugget from Adelman’s report is a proposal to formalize short-cycle degrees or certificates at an institution that normally offers bachelor’s (and higher) degrees. For example, a student could receive a minor (if they have taken the requisite courses) at a certain point in their education before they receive a major a year or more down the road. The U.S. split between community colleges offering associate degrees and four-year institutions offering bachelor’s degrees might not be a model that serves today’s students well in the long run. Short-cycle degree formalization might be the easiest thing to accomplish piecemeal, although piecemeal might not be the best idea. The different facets of Bologna are “tightly intertwined” and therefore challenging to treat piecemeal, according to Adelman.

Bologna is comprehensive in vision and scope, not just for institutions, but for the students they serve. That also makes it a difficult long-haul process. Adelman writes that “our European colleagues have sought to do right by the student by reinvigorating the most basic and common role of institutions of higher education in every society and economy on this globe: the distribution of knowledge and development of skills to apply that knowledge.” While some institutions may also be involved in generating new knowledge (e.g. through research/scholarship) and preserving knowledge (through archives), all are involved in knowledge distribution and application. This means, according to Adelman, that “content counts”. Knowledge is very important. You can’t teach generic ‘critical thinking’ skills, particularly the higher order ones on Bloom’s Taxonomy, without a good deal of content knowledge.

Where do we go from here? I don’t know. Adelman and others have argued that importing or even adapting Bologna shouldn’t necessarily be the goal. The U.S. should be paying attention and learning from our European colleagues. Other global blocs are doing this, for example, the African Union and ASEAN (Association of Southeast Asian Nations). Unlike the language of standardization used in Bologna (and standardization does not mean clone copies but rather common standards), Africa and Southeast Asia use the language of harmonization. The U.S. is a large country, long esteemed for being the top tertiary education destination in the world. But the world is changing. Being inattentive to globalization, or becoming increasingly isolationist, could lead to a continued erosion of U.S. eminence. Then again, it’s hard to predict the future.

Saturday, June 2, 2018

Creating a Tasting Menu


This week I was introduced to the Netflix series Chef’s Table. I’ve now watched 9 of the 22 episodes, and at least one from each season. The personal story of each protagonist is fascinating; there are some commonalities among them all, but they come from diverse backgrounds, cultures and nations. Each individual is intensely creative, very hardworking, and experiences significant moments of failure and disillusion. The food visuals are mouth-watering, and well-juxtaposed into the narrative. The creator of Chef’s Table is David Gelb, famous for the superb Jiro Dreams of Sushi, which permanently changed my view of sushi.


Watching each master-cook at their craft in Chef’s Table prompted me to consider similarities and differences to my own craft – teaching primarily, professoring generally. Long before reaching their creative acclaim, each of the featured chefs had extensive early training and practice in the standards or classics of a particular cuisine. Apparently, the strategy is to apprentice at the best possible restaurant that will take you. Is that the strategy that aspiring academics take in considering graduate schools? In early chef training, French and Italian cuisine play a large (historical) role, but other cuisines also come into play, and it is the mingling of different ideas that sparks those creative, mouth-watering dishes that also double as eye candy. It reminded me that to be creative, you must have a strong knowledge base, so that you have something to be creative with.

Let’s take the oft-quoted 10,000 hours as a proxy for expert-hood corresponding to some level of mastery. In the chef apprenticeship model, aspiring chefs work really, really hard and very long hours. Let’s say 80 hours a week, or about 4,000 hours per year, so it would take two-and-a-half years to reach expert-hood. If I think about graduate school as preparation for an academic career, then 40 hours a week, or about 2,000 hours per year, means five years to reach expert-hood. (I likely worked between 45-50 hours per week in graduate school on research and teaching. I did more teaching than the typical graduate student because I enjoyed it!) Graduate school, at least in my day, did not train me for much of my job description as a professor in a liberal arts college. How does one actually get better at professoring and teaching? Well, you have to do it. Six years to tenure provides the opportunity to hit the 10,000-hour mark when that major evaluative stepping stone comes up.
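For the number-minded, here is a minimal back-of-the-envelope sketch of the arithmetic above. It assumes roughly 50 working weeks per year, and the labels and hour counts are only illustrative stand-ins for the figures in the paragraph:

```python
# Back-of-the-envelope: years to reach the oft-quoted 10,000 hours,
# assuming roughly 50 working weeks per year (an assumption, not a rule).
TARGET_HOURS = 10_000
WEEKS_PER_YEAR = 50

for label, hours_per_week in [("chef apprentice", 80),
                              ("nominal grad student", 40),
                              ("my grad-school pace", 47.5)]:
    hours_per_year = hours_per_week * WEEKS_PER_YEAR
    years = TARGET_HOURS / hours_per_year
    print(f"{label}: about {hours_per_year:,.0f} hours/year -> {years:.1f} years to 10,000 hours")
```

Which gives roughly 2.5 years for the apprentice chef, 5 years for the nominal grad student, and a bit over 4 years at my own graduate-school pace.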

Before opening up their own restaurants, these master-chefs had slogged well past the 10,000-hour mark, some by multiples. But when they then struck out on their own, with their new creative ideas, it was hard. Very, very hard. For some, it took years to gain recognition, with many failures and frustrations along the way. I did not have that experience, probably because the analogous move would have been setting up my own independent school with an avant-garde yet unfamiliar teaching pedagogy. I’m more like the chef who stays at an established joint, working on established cuisine with steady, tried-and-true recipes. (Actually I did help start up a new international liberal arts college some years back, but the curriculum was not unfamiliar, nor was the pedagogy way-out-there.) Striking out on your own is certainly the road less travelled. Who knows how many aspiring chefs out there did not ‘succeed’?

But maybe I should gain some inspiration from these creative chefs to strike out and try something new in a field that I care greatly about. I might even be able to do this from the comfort of my own perch. I was particularly intrigued by the concept of the tasting menu. The chef chooses and creates what you, the diner, will eat and experience; I have never experienced this as a diner (I’m too cheap). Now as a professor, I can decide what gets taught (to some extent) and how it gets taught, and how creative I would like to be. There are some agreed-upon limitations. All over the country, chemistry professors have general agreement over the main things that are taught in a year-long general chemistry (G-Chem) or physical chemistry (P-Chem) course. (These are the classes I teach the most often.) G-Chem is a pre-requisite for a host of other courses so there may be less room to maneuver, but upper division P-Chem doesn’t have many other classes that depend on it, at least at the undergraduate level. That being said, the P-Chem curriculum is still quite standard across colleges and universities in the U.S.

Could I break out of tradition without shortchanging my students? If a series of creative dishes comes from mixing-and-matching new combinations of familiar ingredients, perhaps there is an analogy to new pedagogical approaches to a similar base curriculum. This approach is already being tried all over with varying degrees of offbeat-ness and varying degrees of ‘success’. I’ve attempted a few things, again with varying degrees of success, but nothing too revolutionary. The general advice I’ve read from folks who overhaul their classes is to change a few things in a progression. Changing too many things too quickly is often a recipe for disaster, not to mention tons of prep work. I will have to weigh these considerations as I plan my upcoming fall classes – I’m teaching G-Chem and P-Chem as usual. Perhaps a special topics class best matches the tasting menu. I’m due to offer one soon according to my department’s rotation schedule. Not having a graduate program and running lean means that we can only offer one (sometimes two) electives outside the traditional chemistry curriculum per semester. I could call it “Great Tastes of Chemistry” although students would likely associate it with a food chemistry class.

A tasting menu is about enjoying an experience, albeit an expensive one. At top restaurants, these typically run in the $200-$300 range for a 2-3 hour gastronomic experience. At my private college, a typical (three credit hour) class costs $5,000. Let’s say this class meets three hours per week for 15 weeks, i.e., a little over $100 per hour, closely matching the per-hour cost of a tasting menu dinner. I’d like to think learning chemistry is an enjoyable experience, but learning it takes a lot of hard work from the learner. As a diner, you don’t do any work (other than the enjoyable tasting and digesting), but the experience is fleeting. I hope that what my students learn in chemistry class sticks with them, at least the core concepts. Perhaps Chef’s Table has fewer touchstones than I thought. Maybe a better comparison would be MasterChef Junior (I’ve seen a couple of episodes) or Nailed It (which I haven’t watched but sounds interesting).
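A quick sketch of that per-hour comparison, using midpoints of the ranges quoted above (the numbers are rough and purely illustrative):

```python
# Rough per-hour cost comparison (illustrative midpoint numbers only).
tasting_menu_cost = 250        # midpoint of the $200-$300 range
tasting_menu_hours = 2.5       # midpoint of a 2-3 hour dinner

class_cost = 5_000             # a typical three-credit-hour class at my college
class_hours = 3 * 15           # three hours per week for 15 weeks

print(f"Tasting menu: ~${tasting_menu_cost / tasting_menu_hours:.0f} per hour")
print(f"Chemistry class: ~${class_cost / class_hours:.0f} per hour")
```

That works out to about $100 per hour for the dinner and about $111 per hour for the class – close enough for the comparison to hold.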

What was my favorite episode of Chef’s Table so far? Christina Tosi featured on Episode 1 of Season 4 – the Pastry season!