Monday, March 28, 2016

The Michel Thomas Method


This weekend I stumbled upon the 1997 BBC documentary The Language Master. (Click here for a YouTube link.) The protagonist is Michel Thomas. He was born in Poland but made his way across Europe, where he was part of the French resistance in the Second World War. Eventually he worked for American counter-intelligence and was credited with the arrest of hundreds of Nazi war criminals and with locating documents. He was said to be an excellent communicator who used non-violent interrogation techniques.

After the war, he emigrated to the U.S. and became a language instructor to Hollywood stars. He would not explain his methods, but his results brought ringing endorsements. Celebrities would part with their cash for a quick and intensive learning session, and they all claimed that his method really worked. The Language Master was the first recorded session showing snippets of Michel Thomas teaching French to six students in the UK. He spent five full days with the students who had little to no French background and were considered below average in terms of language learning ability. Most of them had failed their foreign language exams.

The first thing Michel does when he meets the students is to push all the standard classroom furniture to the side. The students help unload a truck with rugs and comfortable chairs. No desks are needed. There is no writing, reading or memorization. Michel's philosophy is that students must be in as relaxed an environment as possible so they have maximum “openness” to learning, particularly given many of them have negative associations with language learning. He even tells them that if they forget a word, it is his fault as the teacher, not theirs. The students are nervous when they first meet him, but after a while they get comfortable speaking French. They know the grammar and are able to translate what he says in English into French. The students are amazed at their own ability to learn, and their confidence is restored.

A key aspect of the method has to do with Michel’s long experience in multiple languages. Others describe his approach as “breaking language learning into very small parts and then putting it back together”. It’s as if he has discovered the underlying fundamental structure (at least for Romance languages) and knows how to help the learner reconstruct the grammar and recall the words by making connections to things they already know. The edifice is then built block by block. He never describes his strategy, and is reluctant to “give it away” to academics who want to dissect his approach and compare it to extant methods. That his method “works” is, for him, the proof of the pudding.

Watching the documentary made me think about my own teaching strategy. I’ve never considered that students should feel comfortable. Rather, my teaching philosophy for many years has aimed at having most students be at the edge of their comfort zone. Too comfortable, and they think it is too easy and don’t learn. Too difficult, and they get frustrated and shut down. I try to give some additional help to the struggling low end of the class, and at the other end of the spectrum I try to include a little extra to keep the high end motivated and interested. My class sizes are clearly larger than Michel’s. He works with many of his clients one-on-one. (He also has a tape series where you can learn, in your own comfortable setting, by listening to his voice and going through his method – even if you don’t understand how or why it works.)

Another difference is that his approach is time-intensive and immersive. Perhaps this is part of why schools such as Colorado College and Quest University are built on the block system. Maybe I should try teaching an intensive class during the summer or January intersession. Would I teach it very differently compared to what I do now, where my class meets for 3-4 hours per week rather than per day? It seems that I should take advantage of the intensive format for a different kind of learning. (Is there a different kind in chemistry?) I’d never really thought deeply about how I would change my approach. Maybe in the future when I don’t think the experience would be energy-draining, I should challenge myself to give it a try.

Listening to testimonials from students and teachers (who had experience of learning from Michel), I wondered if the “atomization” of language learning followed by subsequent reconstruction also works in my field of chemistry. Is the atom the fundamental building block in teaching chemistry? While this is the first thing that came to mind, maybe it should be the chemical bond. (I'm biased because this is my area of expertise.) Maybe there’s an equivalent grammar and syntax to thinking chemically. I’d like to think that I do “think” chemically, but I’m not sure I can describe what that means with any precision. Nor do I know the point where chemistry suddenly “clicked” for me. (My first two years of chemistry I had very little conceptual understanding, but I had developed good exam-taking strategies.)

This is something I should think about much more carefully. Is there a way to break down chemistry (or physical science more broadly) and reconstruct it so that the learning is much smoother? I don’t know. The advice section of my course website stresses doing the pre-reading, working problems, persevering, working hard, and putting in lots and lots of effort. The Michel Thomas method, by contrast, seems almost effortless on the part of the students. Michel makes it look easy as the instructor, but clearly he has done a lot of hard work to develop and hone his method. He also seems very attuned to his students individually. In the documentary, this trait is juxtaposed with his counter-intelligence experience and his ability to read people and “coax information” out of them. Perhaps that’s a skill that can be developed as a teacher. Certainly experience has taught me where some of the stumbling blocks are, and in my office hours I am sometimes able to work with the student one-on-one to get past their individual roadblock(s). Perhaps this also says something about the problems of mass education where the tutor-pupil relationship is diluted in a large class.

I have no answers. And perhaps my own roadblock comes from running around doing all my duties as a faculty member. Maybe I haven’t taken the time to be comfortable, relaxed, and to immerse myself in thinking more carefully about this issue. I should figure out when my next sabbatical is scheduled. Or perhaps I should make some work “lifestyle” changes so I can free up some relaxing think-time. Perhaps over a soothing cup of tea! Who will be my muse though?

Friday, March 25, 2016

Metabolism Probability Insomnia


One might think that I would try to get more sleep during Spring Break. I tried. It just didn’t work. (I have occasional insomnia.)

It was my own fault. I had been thinking about one of the rule modifications for the game I’m playtesting. It is called Bios Genesis and is designed by Phil Eklund of Sierra Madre Games. (Planned release is Fall 2016.) For a very brief overview of the game, you can read this earlier post that describes how my research, a boardgame, watching TV, and thinking about time travel merged into a wild idea. But now to the matter at hand: I was thinking about the probability of rolling triples given N dice.

But first, a little context is needed. The object of the game is to create life, sustain it, and hopefully evolve it into a thriving organism. The currency (how you pay for upgrades) in the game comes through catalysts. In the early stages of the game, catalysts are obtained via dice rolls that allow you to cycle nutrient molecules. Basically you are trying to set up a robust autocatalytic cycle, which fits well with some origin-of-life scenarios. At some point, the autocatalytic cycle can be evolved into simple life – microorganisms. Once they arrive on the scene, each microorganism has to perform a Darwin Roll. The number of dice rolled depends on how many “chromosomes” the organism contains. The aptly named chromosomes are colored cubes in this game. The four colors represent abilities in four areas: metabolism (red), specificity (yellow), entropy (green) and heredity (blue).

If your organism has red cubes signifying its metabolism, you gain catalysts when you roll ones. But you might not have red cubes, or you might be unlucky and not roll ones. (Such is life.) In some playtests, there was a significant shortage of catalysts in the early stages. Thus a rule was suggested that for every triple rolled, your organism also receives a catalyst. This led to a few games that were flush with catalysts, while others might still be relatively lean. It also depended on whether players were more cooperative or conversely more competitive. Clearly the more dice rolled, the higher your chance of getting triples. With this new rule, an organism has better metabolism if it has more chromosomes, not just the red ones. But how does it scale?

[Warning: Some Math Ahead]

This is an easy problem, I thought to myself. Note to self: Don’t work on such exciting things shortly before going to bed. (Hence, the title of my blog post.) Here’s my reasoning. If three dice are rolled, the number of possibilities is 6^3 or 216. There are just six ways of getting a triple (all ones, all twos, all threes, all fours, all fives, all sixes). So the probability is 6/216 or 2.8%. Another way of calculating this is 6 x (1/6)^3 = 6/216. The probability of rolling three equal dice is (1/6) x (1/6) x (1/6), but there are 6 ways you could do this. Or you could say that it doesn’t matter what you roll for the first die, so this is 6/6, but after it is rolled the other two dice must match if this is to be a triple, i.e., (1/6)^2 = 1/36 or 2.8%.

What if you roll four dice? Now the total is 6^4 or 1296. As long as you have three dice equal, the fourth one shouldn’t matter. So now you have 4 x 6 x 6 x (1/6)^4, where the factor of four is because any one of the four dice could be the one that doesn’t matter, and one of the factors of 6 is because that die could be any number from one to six. This yields 144/1296 or 11.1%. (This turns out to be wrong, but I hadn’t realized it yet.) Note that you could also have written this as 4 x 6^2/6^4 or 4 x (1/6)^2.

How about five dice? Well, now you have two dice that don’t matter, so that gives you 6^3/6^5, but there should be 10 ways that you could do this, i.e., the dice that don’t matter are (1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5), where the numbers refer to the 1st, 2nd, 3rd, 4th or 5th die as the ones that don’t matter. So the probability would be 10 x 6^3/6^5 = 2160/7776, i.e., 27.8%. (This will also turn out to be wrong.) This could also have been written as 10 x (1/6)^2 = 10/36. At this point, I think I see a pattern. For N dice, you can calculate the probability by multiplying two factors. The first is N!/(3!(N-3)!) and the second is 6^(N-2)/6^N.

I am led astray because back in week 3 of the semester, I was teaching students the Boltzmann distribution. A good way to count the number of ways a particular arrangement of particles can arise, as they spread themselves over different quantum states, is to count marbles in boxes. If you are tossing N marbles into boxes and the relative box sizes are given by g1, g2, g3, … and the number of marbles in the respective boxes are N1, N2, N3, …, then you can calculate the number of arrangements W using the formula below. (In a quantum model, g is the degeneracy of the particular state.)
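
In standard form (with the g's as the box sizes, or degeneracies in the quantum case, and the N's as the occupation numbers), that counting expression is

\[ W \;=\; N!\,\prod_i \frac{g_i^{N_i}}{N_i!} \;=\; \frac{N!}{N_1!\, N_2!\, N_3! \cdots}\; g_1^{N_1}\, g_2^{N_2}\, g_3^{N_3} \cdots \]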

Assuming fair dice, this sort of resembles a six-box problem with boxes of equal sizes. Hence, all the values of g are 1. The number of marbles is the number of dice rolled. That factor of 4 that I used for four dice and the factor of 10 that I used for 5 dice, well those are just 4!/(3!1!) and 5!/(3!2!), which fits with N!/(3!(N-3)!) and sort of resembles the ratio shown in the formula above. This ratio is part of Pascal’s Triangle. I have highlighted the relevant section in the figure below.

It hasn’t taken me much time to get to this point and I am feeling very pleased with myself. (This is what happens when you get sloppy.) I now merrily plug in numbers for increasing N. When I get to N = 7, the probability is 0.972. This can’t be right. I go ahead and plug in N = 8 and sure enough, I get a number larger than 1, so I know my simple formula is dead wrong. Here’s the table below.
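
A few lines of Python capture that same two-factor calculation and show exactly where it falls apart (just an illustrative sketch of the flawed formula, not the original table):

```python
from math import comb

# Flawed formula: P(triple with N dice) = C(N, 3) * (1/6)^2
for n in range(3, 9):
    factor1 = comb(n, 3)        # N!/(3!(N-3)!), grows quickly
    factor2 = (1 / 6) ** 2      # 1/36, independent of N
    print(n, factor1, round(factor1 * factor2, 3))
# N = 7 gives 0.972, and N = 8 already exceeds 1 -- nonsense for a probability.
```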

You would think I’d have figured this out earlier simply by seeing that Factor 2 is unchanged at 1/36 while Factor 1 is just an integer that keeps growing; once it passes 36 (and it does so very quickly), the result becomes nonsense. It’s time for bed. I’m tired. And I’m somewhat dejected by my own idiocy. Why did I think it was going to be that simple?

I go to bed. Maybe 3 hours later I’m awake. But now my mind can’t stop working on the problem. I’m trying to get back to sleep but my mind has shifted gears to try a simpler problem – rolling doubles rather than triples. If you roll just two dice, then the probability is 6^1/6^2 = 1/6. That’s easy. What if you roll three dice? Is it 3 x 6^2/6^3 or 3 x 1/6 using similar reasoning as I did earlier? That would yield 108/216 or 50%. What if you roll four dice? That would be 4!/(2!2!) x (1/6), which is exactly 1. That’s clearly wrong. You should only hit a probability of one when rolling seven dice, because you could roll (1,2,3,4,5,6) with six dice. I’m still doing this in my head lying in the dark. Okay, I go back to the three-dice problem. I imagine all the possibilities in a matrix. The 216 possibilities can be written as a 6 x 36 matrix that I can systematically populate. And if I think of the three dice as xyz coordinates in Cartesian space, I can imagine a cube of length six. When x and y are equal (a double is rolled), then z can take any value. This leads to a plane parallel to z along the diagonal y = x. There should be two similar planes for x = z and y = z. These planes intersect each other, and that’s why my earlier formula didn’t work. I must have double-counted, triple-counted, etc., the intersections! Unfortunately I’m stuck at this point in the dark since I’m not very good at visualizing these three intersecting planes. Worse, even if I do figure it out, I probably can’t do the four-dimensional problem representing rolling four dice. I try to think about something else and eventually get back to sleep.

In the morning before going into work I sketch out the three planes and I can guess the intersection. But this is going to be more problematic for higher dimensions. I start writing out sequences (over breakfast) to get a sense of where I might be double-counting, and I’m able to quickly figure out that each intersecting plane has double-counted 6 possibilities, so instead of 3 x 36 = 108 in the numerator, it should be 36 + 30 + 30 = 96. The probability is 96/216 or 44%. I make a quick stab at the four-dice problem and it’s clear that much larger chunks of the matrix are being double-counted, but I don’t have the time or patience to figure it out. I’ve clearly learned that it’s not going to be so easy.
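
A quick brute-force check (a few lines of Python, simply enumerating all 6^3 outcomes) confirms that count:

```python
from itertools import product

# Outcomes of three dice with at least two equal faces.
doubles = sum(1 for roll in product(range(1, 7), repeat=3) if len(set(roll)) < 3)
print(doubles, doubles / 6**3)  # 96 out of 216, about 44%
```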

I get busy and leave the problem for several days. I contemplate writing a simple script that generates N random integers from 1 to 6 (for N dice rolls) and then checks to see if a triple is rolled. I could then run maybe a million trials and get some statistics. The problem is that 6^N grows very quickly. When N=10, 6^N is about 60 million, so I’d have to sample much more. Not only that, I’d need to go look for a better random number generator than the simple rand( ) function or its equivalent.
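
For what it’s worth, that Monte Carlo idea is only a few lines in Python, whose built-in generator (a Mersenne Twister) is plenty good for this kind of estimate; this is just a sketch, not the scripts described below:

```python
import random
from collections import Counter

def estimate_triple_probability(n_dice, trials=200_000):
    """Monte Carlo estimate of rolling at least one triple with n_dice fair dice."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randint(1, 6) for _ in range(n_dice))
        if max(counts.values()) >= 3:
            hits += 1
    return hits / trials

print(estimate_triple_probability(7))  # should land a little above 0.5
```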

Today I decide that I’m going to do this systematically. I’m a lazy and lousy coder. Hence I write a short script that generates a matrix listing all 6^N possible rolls. I then write a second script that goes in and checks how many triples are in each combination. This might be useful later because in the actual game, rolls of fives and sixes could cause an error catastrophe. If errors exceed the number of blue (heredity) chromosomes, your organism atrophies. Yellow (specificity) chromosomes allow you to reroll some of the dice. And some mutations (when DNA has evolved) confer additional stability whereby only sixes cause errors. Given that 6^N blows up exponentially and my script is inefficient, with lots of I/O writing out files with 6^N lines, you can imagine that this bogs down after a bit on my laptop. I could submit the job to my computational cluster at work but this doesn’t seem right. Anyway in my playtests so far, it’s not often that you have to roll more than ten dice, so I let my laptop work while I do something else.
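
A minimal sketch of that kind of brute-force count (in Python, enumerating rolls in memory rather than writing out files) might look like this:

```python
from itertools import product
from collections import Counter

def p_triple(n_dice):
    """Exact probability that some face appears three or more times,
    found by enumerating all 6**n_dice equally likely rolls."""
    hits = sum(1 for roll in product(range(1, 7), repeat=n_dice)
               if max(Counter(roll).values()) >= 3)
    return hits / 6**n_dice

for n in range(3, 9):  # 6**8 is ~1.7 million rolls, still quick; beyond that it crawls
    print(n, f"{p_triple(n):.1%}")
# N = 3 gives 2.8%, and the probability first passes 50% at N = 7.
```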

Here are the results for N=3 to 11 for triples.

Looks like 7 dice get you to at least half a chance of getting a triple. I haven’t done a further analysis taking into account rerolls. A fair assumption is that, if able, the player will try to reroll fives and sixes to avoid errors. Assuming those rerolls succeed, this reduces the player’s final number of triples by a third. I’ll give feedback to the designer, who can decide if the rule stays or goes.

And that’s how my episode of Metabolism Probability Insomnia transpired.

Monday, March 21, 2016

Autonomy and Common Curricula


It’s Spring Break, so I’m catching up on reading some blogs. Today I wanted to highlight a post by Greg Ashman titled “The Secret of a Strong Department”. While his experiences are geared at the grade school level (equivalent to U.S. high school), there are some potential takeaways for college-level teaching.

Ashman kicks things off by describing how having well-constructed and detailed lesson plans handed to you when you are a novice teacher can be very helpful. This way you don’t have to reinvent the wheel. There is nothing controversial or surprising about this. When I taught tightly prescribed curricula at the high school or pre-college levels, it was very obvious what I needed to cover and how much time I had to teach the appropriate content and skills. There was little flexibility and little room to deviate from the master plan – otherwise you would not prepare the student adequately for national-level exams (common in many countries outside the U.S.).

His example of a bad approach is to “give teachers a list of five vague themes to cover in a term and let them get on with it with very little guidance”. Ashman questions why this approach is perpetuated. Certainly there are departments that are disorganized, headed by folks who seem to be constantly behind the ball, and therefore instructions are both scant and vague. However, the next two reasons he highlights are the main points of his post.

First, he argues that “when teachers claim that they need a certain amount of autonomy, this is related to flawed notions of what it means to be a professional.” The problem lies in how much autonomy. Certainly it is important for teachers to be able to respond to questions and facilitate discussion in class, and these may vary depending on the students and their level of understanding and engagement. The issue raised is that if a teacher is “not comfortable with a key part of the curriculum, and [has] enough autonomy to de-emphasize it… things might not get taught.” This can certainly be a problem for core concepts. The chemistry program in my department (like many others) is hierarchical in nature. If a student does not master (or is not taught) key concepts and skills, it is difficult to progress to the higher levels where the earlier core knowledge is assumed.

Second, Ashman thinks that really good explanations, really good reading choices, and really good exercises are few in number. Once honed, these can be employed repeatedly for each group of students. He thinks that students are “not as different from each other as is sometimes supposed.” He argues against fully scripted lessons, and suggests that if some key classroom activity isn’t working well, it probably should be changed for all instructors and all sections of the same class, and not just subject to one instructor’s whim. This semester our department has 12 sections of second-semester General Chemistry lab. The labs have been honed over the years to provide what we think is the best educational experience for our students, and we stick to the plan. (Certainly from a lab prep point of view, it is important to be doing the same experiment when you’re at scale.) There are minor variations from section to section, but not in the actual experiments, and we’ve all recognized that the pre-lab questions, quiz questions, analysis questions, and discussion/reflection questions, built up over the years, are well-chosen.

In the lecture sections, there is a little more variation in how we cover particular topics, but as a group we all decide on a common textbook and which chapters will be covered each semester. This year I’m covering the material in an order very similar to my fellow instructors’. I do this most of the time but I occasionally deviate. Last year when I had a smaller Honors section of 25 students, I re-themed the class and covered the topics in a very different order. I still covered the core topics but I had plenty of autonomy to do so in a very different way. It was a lot of extra work on my part, but I generated a lot of useful class activities, some of which I have modified for my regular section this year. My department is also very good about sharing materials and we regularly talk to each other in the hallways about things we’re trying – those that work well and those that don’t. One very useful thing I did when I first became a faculty member was to regularly visit my colleagues’ classes. I still do so, although less regularly than when I started out.

Ashman notes that “joint planning needs to be effectively led” and when done well is the “sign of a strong department”. While I happen to be in a department where this is true, I have also observed many more cases where it is not. With the recent upheavals in the world of higher education, many institutions (in the attempt to be more relevant and at least maintain their market share) are re-envisioning and revising their core curriculum. In a number of cases we are seeing more “joint planning”, certainly from an assessment point of view. There are also moves towards having part of the core be a common curriculum – that students should have a common yet distinctive experience is starting to be a selling point in admissions brochures, even if the reality behind the scenes is much messier. The administration becomes more complex, and segments of faculty will use the loss-of-autonomy argument. We are also seeing an erosion of faculty governance as administrations seek to be more “nimble to change”.

If you read Ashman’s post, I recommend skimming the comments too. Some of them are quite thoughtful. Much like other complex interconnected problems, there is no simple solution. But there is food for thought.

Saturday, March 19, 2016

(R)evolution


I just finished Matt Ridley’s new book, The Evolution of Everything. The book’s bold title clearly states its central thesis: Everything that you observe comes from the process of evolution. But what is evolution? Ridley uses the word to denote “bottom-up” change; he wants to clearly distinguish it from “top-down” change, attributed to governments and other bodies that seek to control outcomes. He refers to Darwin’s theory as the “special” theory of evolution, in contrast to the more “general” theory of evolution he is advancing, and perhaps in homage to Einstein’s special and general theories of relativity.

In 16 chapters, Ridley attempts to cover a vast swath of material. His chapters all begin with “The Evolution of” and the categories are the Universe, Morality, Life, Genes, Culture, the Economy, Technology, the Mind, Personality, Education, Population, Leadership, Government, Religion, Money and the Internet. That’s a mind-boggling list right there, and it is symptomatic of the main problem with how Ridley attempts to support his thesis. Basically he tries to shoehorn everything into his general framework by highlighting examples in which what we humans might consider significant “positive” revolutions are smoothly explained by bottom-up evolution. Thus, the (r)evolution pun that is the title of this blog post – I got it from Ridley. The top-down approaches by contrast are called “creationist”, a disparaging term that he borrows from the so-called “creation versus evolution wars”.

The problem is he does not take his own advice. He argues that we humans are too easily predisposed to seeing patterns and constructing narratives where they are not warranted, and while I’m inclined to agree with him on this point, he pretty much just cherry-picks examples to support his grand narrative. Worse, he tries to assign positive and negative values to the bottom-up and top-down frameworks respectively, baldly referring to these as good and bad. But in the evolutionary framework he espouses, one cannot do this to historical events – if nothing exists but “atoms and the void”, then history has simply unfolded as it has. Ridley praises Lucretius, and each chapter begins with a few relevant quotations from the superb De Rerum Natura. But where Lucretius’ narrative is coherently supported by his (more limited) examples, Ridley’s is not. Ridley even “accuses” philosophers and scientists who started treading the evolutionary path of not going far enough, of backing down by adopting a Lucretian swerve. (Read the poem to find out more about this.) But maybe they saw the difficulty with following this path to its logical conclusion, and saw that the overall evidence so far is lacking. Perhaps that is why Ridley’s grand narrative feels weak to me as a reader.

If we go back to the broader definition of evolution, change over time, without trying to endow it with good or bad qualities based on its purported mechanism, then it is not surprising that you find evolution in everything. This is different from saying that evolution is everything. Ridley does provide some neat examples tracing the evolution in all the categories he discusses, even if a number of them are cherry-picked. I enjoyed learning a number of interesting historical factoids through his narratives. Where he is more knowledgeable, he does a great job. The chapter on Genes is clear, engaging, and has generally well-supported evidence. (I have read Ridley’s earlier book Genome.) Areas in which the bottom-up analysis works well, such as technology development, are more coherent and convincing. His chapter on Education is interesting, but his prescriptions are weak. I think this is because the cherry-picked examples take a slice of history and fail to take into account the larger context. That being said, I do agree with him that the over-regulation of education (part of why we are all under assessment assault) is a serious problem.

Here’s his conclusion in the epilogue, The Evolution of the Future: “To put my explanation in its boldest and most surprising form: bad news is man-made, top-down, purpose stuff, imposed on history. Good news is accidental, unplanned, emergent stuff that gradually evolves. The things that go well are largely unintended; the things that go badly are largely intended.” While Ridley admits that one can find counter-examples, he thinks these are in the minority. However, if free will is an illusion (as Ridley thinks) then there is no purpose and ultimately no way to distinguish his two categories. He could go further and suggest we all try to live unintentionally and let ourselves be subject to our appetites, presumably the ones honed through evolution. But he doesn’t. He does intimate that “incremental, inexorable, inevitable changes will bring us material and spiritual improvements” serendipitously. I think a broader swath of examples muddies his positive outlook. Perhaps that is my swerve.

Thursday, March 10, 2016

LMS Choices: The Bad


A couple of days ago, I came across the following article in the Chronicle of Higher Education: What’s Really to Blame for the Failures of our Learning-Management Systems (LMS). In the first paragraph, Michael Feldstein writes: “Have you ever wondered why learning-management systems, which just about everyone on campus uses every day to keep classes running, seem destined to disappoint, year after year? I can tell you why. It’s because of a dirty word that academics don’t like to talk about: procurement.”

I checked the author’s credentials (found at the end of the article). He is a consultant who “helps schools, educational companies and policy-makers [to] navigate the new world of digital education.” Now, one might consider the article a way to drum up business by pointing out dissatisfaction with several major players in the LMS industry. Regardless, what’s interesting is the content – and for me at least, the author’s argument does explain my dissatisfaction with the LMS world.

Why isn’t there a killer app LMS? Innovative small-scale versions start out nimble and quick. If they capture some market share, there is an evolution to a larger-scale beast. More and more features get added on (Feldstein explains why) and once you’ve spent some time relying on your present LMS, it becomes very hard to switch. The big companies know this, and so landing a contract with an institution brings in millions of dollars. Migrating to a new system, even an inexpensive one, is fraught with all sorts of glitchiness. Inertia kicks in. The next version of the LMS fixes some of your problems but introduces new ones, and things that made intuitive sense to you (probably because you were used to them) no longer do. More time is spent complaining. Eventually some small band of entrepreneurs starts the cycle anew.

The person in charge of shepherding this process has a tough job, explains Feldstein. “None of this happens because the people involved with the selection are lazy. To the contrary, the entire selection process is time-consuming and thankless. The real problem is that it is difficult to gather real teaching and learning requirements, and the people in charge of doing it on campus have neither the time, nor the training, nor the resources to do the job properly.” Furthermore, “the makers of learning-management systems generally know this, but they are captive to the process. There is a strong incentive for them to say they can meet colleges’ needs in order to make sales.” (Feldstein provides examples.)

I have never liked any of the LMSs used at the institutions where I have worked. These range from behemoths like Blackboard (what we currently have) to home-grown systems which grow to gargantuan and unmanageable proportions. Some of the “simple and free” versions could be appealing, except that I started out hacking my own HTML code once university faculty had “home pages”. My pages constitute a bare-bones system with no fancy graphics, but I have full control, and I primarily use it for providing information to students (syllabus, class readings, homework assignments, timetable and exam days, studying tips, links to relevant videos). Some of the content is password-protected, which can easily be set up with htaccess.

Other things however come in separate pieces. I used blogspot for class blogs and discussions (I did not like Blackboard) and right now because we’re using a Pearson general chemistry textbook we have the online Mastering Chemistry homework system (another gargantuan beast to be tackled in a subsequent post). It’s a bit annoying for students to set up accounts for these other sites and our university recently moved to a single sign-on. I think the “success” of Apple has convinced folks that tight integration and bundling is the way to go. But that’s not always the case. Sometimes it is better to keep systems separate and robust. Neither bundling nor unbundling is the magic bullet or the evil practice in itself.

It is interesting to see higher education institutions wrestle with this crisis of “which road to take”. As the competition gets fiercer, finances get tighter, the cult of assessment increases in strength, and the high costs of a university education are front and center in the news, institutions are having to think very carefully about how to position themselves in uncertain times. Some are trying to bundle more services in what I think might be a death spiral (unless you’re at the very top) from a cost-finances point of view. Others are rapidly unbundling to find new niches or a different ecosystem to survive (or perhaps thrive). These are interesting times to be in higher education!

Saturday, March 5, 2016

Learning Glass


This week I sat in on a colleague’s General Chemistry class. It meets in the same room where I teach Physical Chemistry. The room isn’t ideal. The whiteboard takes up slightly more than half of the front wall, on the right side from the students’ view. Next to it, on the left side, is a screen-and-projector combo. One good thing about the room is that it has movable individual chair-desks. The class seats 40 students according to the safety limit. My colleague’s class is quite full, but I was able to find a seat by the back wall. Mine is two-thirds full so the back two rows are usually not occupied.

My colleague used a combination of PowerPoint slides and writing on the board, about half-and-half. I mostly write on the board and use slides mainly to show figures or worksheets that the students can access outside of class. One thing I noticed in this visit: since my colleague is right-handed (like me), when he does write on the board, he partially blocks the left side of the class while he is writing. This is exacerbated by the board being mostly on the right side of the room from the students’ view. Since I was sitting in the back row near the middle but slightly to the left, I was reminded of the student experience firsthand. It is likely worse in my class because I use the board more, and I write both smaller and faster. I do narrate exactly what I am writing as I write it down, thereby allowing students who are listening carefully to write things down even if I am partially blocking the view. However, this is clearly far from ideal.

I was reminded of a demo last month where the creator of Learning Glass visited my institution (sponsored by our I.T.S. department) to show off his setup. Matt Anderson is a physics professor at San Diego State University, and his first prototype was built in collaboration with their I.T.S. department. The basic setup is a large pane of glass studded with tiny lightbulbs around the edge. Behind the glass, the instructor writes using fluorescent markers. In front of the glass sits a video camera and a mirror that flips the image so that the instructor can write normally (rather than having to write backwards). You can watch a video (peppered with testimonials) to see how it works. The video feed can be projected to a screen for live classrooms, but it works particularly well for recorded lectures. The instructor has a microphone so there is simultaneous audio.

One key feature of this setup is that when the instructor is writing and explaining things, his or her back does not block what is on the screen. Furthermore, you can see the instructor’s facial cues and expressions. In my opinion, this works much better than seeing writing and hearing a disembodied voice. In a live classroom, the instructor can also directly see the students while writing, and can therefore take cues from them. I could see this also working in an online live classroom. When I first heard about Learning Glass several months ago, I did not think that turning my back to the students when writing on the board was such a big deal, until my class observation this past week. Let me be clear – my colleague wasn’t doing a bad job at all. In fact, it was mainly a reminder to me that I am likely doing a bad job with my small quick writing given this particular classroom setup. (In the other larger classroom where I teach the majority of my classes, this isn’t as big a problem.) But now I see how something like Learning Glass could be very useful even in a live classroom. (I think it’s clearly superior for lecture recordings that use significant board-work in a “flipped” classroom setting.)

Those of us attending the demo had the opportunity to participate in a Q&A with Matt Anderson after his demo. (Matt, if you’re reading this, I’m the one who proposed that the skier stuck on the frictionless frozen lake should throw his can of beer as hard as possible to get off the lake.) Interesting things that I learned: (1) The glass “boards” can be made larger but may require more illumination. (2) One needs glass of very low lead content, otherwise the specks show up. (3) After a week of use, the glass needs a heavy “wash off” to clean the residue that slowly accumulates from writing. (4) The video camera and mirror setup is relatively straightforward and does not require super fancy equipment.

If the video is of good resolution, it can be projected onto screens of different sizes, obviating the need for glass boards that are overly large. As for some of the other issues, those should be solvable with improved prototypes. I can’t write on the glass quite as fast as I can on a whiteboard, but perhaps that is not a bad thing, and it simply takes practice. The same was true the few times I’ve tried writing on smartboards. (Most recently I tried out the Sharp Aquos touchboard, which was much better than previous-generation SMART products.) Five or six years ago, one of my tech-savvy colleagues suggested that I record my P-Chem lectures. (He said there was a dearth of good P-Chem videos and even offered to help me with the video setup.) I was not sufficiently motivated to do it then, but with this new setup, I might reconsider the prospect!

Tuesday, March 1, 2016

Presentations: Practicing What I Preach


It’s been a number of years since I taught our department’s Research Methods course. The course has a cool structure (in my opinion)! As the course instructor I meet with the students once a week for an hour for “class time”. Students are also working in the research lab of a faculty member 8 hours a week during the semester. The course also fulfills the Upper Division Writing Requirement of the curriculum, i.e., in this case students will write a lengthy report about their research in the format of a research publication.

We begin the class by teaching the students about research resources: databases, how to find articles, using a citation manager, reviewing the literature in their research field, etc., before moving on to the nuts and bolts of writing. We then go through how to write the different portions of a research publication: Introduction, Methods, Results and Discussion. The students turn in multiple drafts of their writing, and on top of that have to make several oral presentations of their work throughout the semester. Towards the end of the semester we have a couple of sessions on research ethics, careers in science, and peer review. (Students participate in peer review of their classmates’ papers – and they get another shot at revising these before turning in their final reports.)

This past Friday, I went through “How To Give Presentations”. We discussed the different kinds of presentations that scientists give in different forums. Students were good at coming up with all sorts of tips. (Clearly some of them already have experience giving presentations in other classes.) We also scrutinized some research posters, and the students critiqued different aspects. All this was great timing for the upcoming American Chemical Society national conference in two weeks’ time. I’m giving a research talk and some of my students will be presenting posters. Unfortunately, for the first time ever, the session where my students will be presenting coincides with mine. I’ll be able to catch the start of their session before I have to run off for my talk. This has not happened in previous years and is an unlucky coincidence, I suppose.

As we were discussing oral presentations in class on Friday, I was reminded of my bad habits when putting together a talk. I talk fast and go through things rather quickly. (Students who have taken my classes will nod their heads vigorously in agreement. It keeps them on their toes, and no one is using non-class related social media or checking their phones simply because they quickly learn they cannot multitask effectively in my class in that way.) My presentation speed exceeds one slide per minute in a research talk. (I don’t use slides in my classes except to show a picture on occasion. The bulk is boardwork.) My slides are also information-dense, not with text per se, but simply with data. Over the years I have reduced the density and number of slides, but it still far exceeds the norm.

So I was very strongly reminded in my class on Friday that I should practice what I preach. I haven’t gotten around to making my slides yet for my presentation in two weeks. (I used to get these things done way ahead of time as a young faculty member, but as my administrative burden has increased, this gets harder to do.) I try not to give the same talk twice. By that I mean covering the bulk of the same material, so it’s actually a fair amount of work to put together a talk since I’m not just slapping together old slides. I try to discuss current work (usually unpublished), although the project is typically far enough along that I have plenty of data to share. My goal was to start working on my talk today, but my time got “used” up in other ways at work – all productive, just not always what I plan to do. I suppose if I plan a less dense talk with fewer slides, it might take me less time to prepare. Perhaps that’s a good motivation to practice what I preach. I also tell my students never to wait until the last minute. For many years I’ve practiced what I preach, but it seems to get more difficult every year.