Sometimes I enjoy reading math. Or I should say I enjoy reading about math when it’s aimed at the non-specialist. Jordan Ellenberg does a great job at this, and I enjoyed reading his book How Not to Be Wrong. I had a feeling I would enjoy his latest book Shape, and so far I’ve not been disappointed. Once again, he wraps math – this time focusing on geometry and number theory – around interesting stories of people and events. Yes, there’s a chapter about Covid-19 and geometric progressions, but I won’t be discussing it today.
I particularly enjoyed Chapter 6, “The Mysterious Power of Trial and Error”. It’s about random walks, and features both the Drunkard’s Walk and the Gambler’s Ruin. Ellenberg begins the chapter with a question he often hears in his math class (one that I occasionally hear in my P-Chem office hours): “How do I even start this [problem]?” Ellenberg jumps at the teaching moment: “… it matters much less how you start than that you start. Try something. It might not work. If it doesn’t, try something else. Students often grow up in a world where you solve a math problem by executing a fixed algorithm...”
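The Gambler's Ruin that Ellenberg features is easy to poke at with exactly the trial-and-error spirit he describes. Here is a minimal Python sketch (the function name and parameters are my own, not from the book): a gambler bets $1 per round on a coin flip, starting with a stake and quitting at $0 or at a goal. The classic result for a fair coin is that the chance of reaching the goal is stake/goal.

```python
import random

def gamblers_ruin(stake=10, goal=20, p=0.5, trials=10_000, seed=42):
    """Simulate the Gambler's Ruin: bet $1 per round on a flip with win
    probability p, starting with `stake`, stopping at $0 or `goal`.
    Returns the fraction of trials in which the gambler reaches the goal."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        money = stake
        while 0 < money < goal:
            money += 1 if rng.random() < p else -1
        wins += money == goal
    return wins / trials

# For a fair coin, theory predicts P(reach goal) = stake / goal = 0.5,
# and the simulated fraction should land close to that.
print(gamblers_ruin())
```

Nudging `p` below 0.5 (the house edge) and watching the success rate collapse is a nice way to see why the casino always wins in the long run.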
That’s a good description of how my students approach chemistry problems. In my G-Chem classes, we’re in stoichiometry, tackling problems of how much of A reacts with B to form some amount of C and D. What is the limiting reactant? What if the reaction yield is less than 100%? How much leftover reactant do you have? There are systematic ways to approach these problems, and I try to model them with worked examples. But there are multiple ways to solve these problems, so I show the students the common approaches and their caveats. These problems are not especially open-ended, so learning algorithmic approaches is helpful.
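The bookkeeping behind those limiting-reactant problems is itself a small algorithm. Here is a sketch in Python (the reaction, function names, and numbers are my own illustrative choices, not from any textbook): each reactant supports some number of "turns" of the balanced reaction, and whichever supports the fewest runs out first.

```python
def limiting_reactant(moles, coeffs):
    """Given moles on hand and stoichiometric coefficients for each
    reactant, return the limiting reactant: the one supporting the
    fewest 'turns' of the balanced reaction."""
    return min(coeffs, key=lambda r: moles[r] / coeffs[r])

def product_moles(moles, coeffs, product_coeff, percent_yield=100.0):
    """Moles of product formed, scaled by the actual percent yield."""
    turns = min(moles[r] / coeffs[r] for r in coeffs)
    return turns * product_coeff * percent_yield / 100.0

# Hypothetical reaction: 2 A + 1 B -> 1 C, with 3 mol A and 2 mol B on hand.
moles = {"A": 3.0, "B": 2.0}
coeffs = {"A": 2, "B": 1}
print(limiting_reactant(moles, coeffs))        # A supports only 1.5 turns vs. 2 for B
print(product_moles(moles, coeffs, 1, 80.0))   # 1.5 mol C theoretical, at 80% yield
```

Leftover reactant falls out of the same arithmetic: subtract (turns × coefficient) from each starting amount.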
Several weeks ago, we were drawing Lewis structures in G-Chem. Trying to draw the best structures is a more open-ended problem. I tell my students that the only way to get better is to practice, practice, practice. As you draw more structures and evaluate them (using general guidelines about the octet rule, formal charges, and resonance), you get better at the task. I show the students my method, which is more intuitive and diagrammatic and involves some trial and error. But some of my students have learned a more algorithmic method in their high school chemistry class. I tell students that they don’t have to use my approach if they prefer something else they’ve learned. (My approach also differs from the textbook’s.) Students don’t like this open-endedness. They want a surefire algorithm. But real chemistry doesn’t work that way. Neither does real math, according to Ellenberg.
Research is a good example of trial and error. Sure, there’s intuition involved, and I’ve built up some amount of it over the years. But as I branch into areas new to me, I become a novice again, and so, sans any better guidance, I launch in and try a few things that may or may not work. This is a challenge for students when they start working in my research group. Yes, I do tell them the first several molecules to build and calculate, and what data to extract – I’m a computational chemist – but then I try to coax them into coming up with their own ideas of what to try next. For some students, this comes naturally. Those who resist this approach don’t last long in my group, because research starts to feel like a tedious chore.
I’ve been educating myself about machine learning approaches for some of my research projects. Nothing hardcore yet; I’m still mostly playing in the kiddie sandpit. Hence it was fun to read Chapter 7, “Artificial Intelligence and Mountaineering”. Ellenberg introduces gradient descent, a method I’m familiar with, but then he scopes out to discuss how one approaches huge N-dimensional problems – things I will have to tackle in the large data space of chemistry. How does one navigate between underfitting and overfitting? That’s an interesting challenge, and much of it involves trial and error as you decide how many layers to use and how to assign weights in your neural net model. You get the computer to do the number-crunching for you, but you should always be cautious about the output and whether it makes sense. I’ve learned that lesson through trial and error.
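Gradient descent itself – the method Ellenberg introduces – fits in a few lines. This is a minimal one-dimensional sketch (the function names, learning rate, and toy objective are my own, not from the book): repeatedly step in the direction opposite the gradient, i.e., downhill.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: starting at x0, repeatedly step a
    distance proportional to -grad(x), walking downhill."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)
```

The trial-and-error part Ellenberg describes lives in the knobs: too large a learning rate overshoots and diverges, too small a rate crawls, and in a rugged N-dimensional landscape the walk can settle into a local valley rather than the best one.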
One way you can do this is to have the algorithms play games against each other, the subject of Chapter 5. Tic-tac-toe, checkers, chess, and Go are famous in the A.I. and machine learning literature. Tic-tac-toe can be worked out by hand. Checkers can be (almost) exhaustively decision-treed. Chess and Go have too many combinations to be checked at the speed of present processors, although quantum computing may cut the Gordian knot. But these games are all closed systems. I was interested to hear that some folks had written an A.I. for the Lord of the Rings CCG – a much trickier prospect, with a random draw deck and different sorts of interactions (the A.I. was written for the cooperative version of the game). Could an A.I. learn to negotiate with players? Apparently, there are some folks working on an A.I. for Diplomacy. That is a very interesting choice for a case study: limited movement with simple rules, but the tricky part is all about the negotiations among players.
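That tic-tac-toe "can be worked out by hand" can also be checked in a few lines of code. Here is a sketch of a plain minimax search over the full game tree (all names are my own illustrative choices): it exhaustively scores every position and confirms that with perfect play from both sides, the game is a draw.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0-8 left-to-right, top-to-bottom.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for X: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    nxt = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, sq in enumerate(board) if sq == "."]
    return max(values) if player == "X" else min(values)

# Exhaustive search from the empty board: perfect play on both sides draws.
print(minimax("." * 9, "X"))
```

This is exactly the kind of exhaustive decision-treeing that works for tic-tac-toe and (almost) for checkers, and that blows up hopelessly for chess and Go.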
Can playing games through trial and error train the machine to play the perfect game? I suppose it depends on how tractable the decision-tree might be and what the complicating factors are, but perhaps this is a less important question. Ellenberg quotes top checkers and chess players and concludes: “Perfection isn’t beauty. We have absolute proof that perfect players will never win and never lose [games that end in Draws based on finite decision trees]. Whatever interest we can have in the game is there only because human beings are imperfect. And maybe that’s not bad. Perfect play isn’t play at all… To the extent that we’re personally present in our game playing, it’s by virtue of our imperfections. We feel something when our own imperfections scrape up against the imperfections of another.”
That last line is perhaps the beauty in trial and error.