Monday, August 16, 2021

Futureproof

Can I be replaced by a robot? Maybe – if you can break my job down into discrete, repetitive tasks, encapsulate them in an algorithm, and measure whether that algorithm satisfies a particular goal. Sounds like a test. I mean that literally: one tests the algorithmic system to see whether it matches the desired outcomes. Multiple tests, preferably, so you can collect statistics to see whether your test is both valid and reliable.
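As a toy illustration of that literal test: the sketch below runs a hypothetical algorithmic "worker" against a handful of cases with known desired outcomes and tallies how often they match. The task, the grade_answer function, and the cases are all invented for illustration – nothing here comes from Roose's book.

```python
# A stand-in "algorithmic task": accept any answer mentioning "mole".
# (A deliberately crude rule, to show how testing exposes an error rate.)
def grade_answer(answer: str) -> bool:
    return "mole" in answer.lower()

# Desired outcomes, as (input, expected result) pairs -- the "test".
cases = [
    ("One mole is 6.022e23 particles", True),
    ("A mole is an SI unit of amount", True),
    ("Twelve grams of carbon", False),
]

# Run multiple tests and collect statistics on the match rate.
matches = sum(grade_answer(ans) == expected for ans, expected in cases)
accuracy = matches / len(cases)
print(f"{matches}/{len(cases)} cases matched (accuracy {accuracy:.0%})")
```

Whether that accuracy is acceptable – and whether the cases themselves are valid and reliable – is exactly the debate raised above.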

How might I prevent myself from being replaced by a robot? That’s the subject of Futureproof by Kevin Roose, where he provides “9 Rules for Humans in the Age of Automation”. Roose classifies himself neither as an optimist nor a pessimist, but as a sub-optimist: someone who believes that “while our worst fears about A.I. and automation may not play out, there are real, urgent threats that require our attention.” He probably wouldn’t classify that as a sub-optimal position to take.

While A.I. in its many guises is much discussed in his book, Roose makes a useful distinction between A.I. (the subset) and its broader realm, automation. The latter does not necessarily involve machine learning – today’s catchphrase for grant applications. Whatever can be automated soon will be; we’ve seen many examples from the first industrial revolution to today. How much automation we accept, however, will also depend on the acceptable error rate. Who determines this, or how it will be determined, is a matter of debate that should involve a much wider group than technologists and technocrats.

A distinction I found most useful in Roose’s telling of the tale is the difference between machine-assisted and machine-managed processes. The former is job-enhancing, and frequently used examples come from techno-optimists. The latter is soul-sucking, and provides the necessary pessimistic counterpoint. I certainly use technology in my job, and that use has increased over time, taking a quantum leap thanks to Covid-19. I’ve blogged about the pros and cons of how I spent my Covid year; in summary, I didn’t like it but it wasn’t as bad as I had anticipated.

So, which parts of my job are machine-assisted and which parts are machine-managed? Let’s try a few examples. E-mail has changed how I communicate with students and colleagues – there’s a lot more of it. What’s nice: it’s efficient, it keeps records, it’s asynchronous so I can think before responding, and I don’t have to be spatially co-located. What’s not-so-nice: those same advantages let it become a taskmaster in more ways than one. What do I do? I don’t keep it open all the time, checking only once every hour or so during “work hours” and not at all otherwise. I happen to be in a job that hardly ever involves a life-or-death situation. And students and colleagues can be trained not to expect replies during evenings and weekends.

Example Two: a website to deliver course materials. Until Covid forced my complete use of the LMS, I delivered materials through my simple HTML-hacked website. It was easy to change things on the fly without having to print or re-print, be it class notes, problem sets, quizzes, or syllabus items. Coupled with something like Microsoft Word that lets me modify and reuse documents, this has been a huge time-saver. I remember the old days when I would handwrite most things – faster and easier than using a typewriter. (Having weak fingers, I was a one-fingered typist on those older machines, but I’m very quick on the modern QWERTY soft-touch keyboard, where multiple digits are used.) I can’t think of drawbacks to my simple use of these tools, but I do not like the enforced LMS categories, which might look fancier but turn out to be less efficient, at least for the way I teach – presumably referred to as old-school.

Now let’s tackle whether my job can be atomized and algorithm-ized. I’m in the business of helping students learn chemistry. The end goal is that students have learned the chemistry I wanted them to know. Often that chemistry is “standardized”, i.e., chemistry college courses all over the country have similar content and skills we want the students to master, at least for standard courses such as G-Chem, O-Chem, P-Chem, etc. How do we assess whether students have learned the material? Typically through some final assessment that may be an exam or project or portfolio or paper. Could a machine conduct and “score” that final assessment? For a multiple-choice exam, certainly. For other formats, let’s just say that machines are getting much, much better. Whether the assessment fairly evaluates student knowledge (a rubric is like Swiss Cheese focusing on the holes!) is a different matter altogether and a much longer discussion.
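For the multiple-choice case, the machine’s scoring really is trivial – a dictionary lookup per question. A minimal sketch, with an invented answer key and invented student responses:

```python
# Hypothetical answer key and one student's responses for a 3-question quiz.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}
student = {"Q1": "B", "Q2": "C", "Q3": "A"}

# Count the questions where the student's choice matches the key.
score = sum(student.get(q) == correct for q, correct in answer_key.items())
print(f"Score: {score}/{len(answer_key)}")  # prints "Score: 2/3"
```

Of course, this scores only what the key can see – the Swiss Cheese problem again: everything outside those three letter choices falls through the holes.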

Instead, let’s ask whether, given the assessment tool, an algorithm can be devised that leads the student through a process that improves their score on that assessment. I suspect the answer is yes, and that the error rate is shrinking over time. We call this “teaching to the test”. I don’t mean that in a derogatory way; in a sense, all teaching is to the test. We have final goals in mind that we want to assess, and we want to train our students to reach those goals. If the test is standardized, it’s likely that the learning process to reach it can be atomized and algorithmized – minimally in a Swiss Cheese manner that assumes the reduced parts capture the whole. So, could a robot do my job? Under the circumstances and constraints I have proposed, I think the answer is yes.

What is my added value as a human instructor? Is it presumptuous to think I add value? I’d like to think that knowledge and life cannot be ultimately atomized and algorithmized, and therefore cannot be automated. Parts of it can – the parts that are reducible – but others cannot be because they can’t be part-itioned. Living systems are likely one of those irreducible things. Hence, part of my job (oh, the irony) is to constantly allude to those complex things. They’re never easy to define – fundamental things never are – but we can get at such systems with many different examples that complement each other to some extent.

Roose’s book suggests habits-of-mind to do so in his nine rules, the most useful of which is Rule #1: “Be Surprising, Social, and Scarce”. A.I. is not so good in these areas, at least for now, and perhaps for a long time if these are indeed parts of what it means to be complex and not just complicated. I can see how being Surprising or being Social can be construed as complex. I’d replace Scarce with Unique or Rare, which captures its meaning better. Roose provides examples for these, but the question is how they apply to my specific job. Thinking about how I teach, and how this has evolved over the years, has energized me for the upcoming semester. I feel I’ve unconsciously gone in the direction of providing the extra sauce that machines cannot provide: thinking more deeply about the conceptual parts of my classes and conveying them through multiple examples that defy a simple definition or description. Chemistry looks supremely organized from a bird’s-eye view, when you consider an entity such as the Periodic Table, but as you take a closer look it becomes so much more messy, complicated, and interesting!

The other rules take different angles; I’ll highlight a few with a short sentence or two.

·      Rule #2: “Resist Machine Drift” reminds us to not let machine-recommended systems drive what we read or consume online. Reset your browser. Venture out of your comfort zone.

·      Rule #5: “Don’t Be an Endpoint” asks you to take a close look at whether your job consists of helping two machines talk to each other simply because different systems haven’t perfected direct communication yet. You might think teaching would be immune to this, but I was recently at a vendor presentation that promised teachers everything they needed electronically, so that a new teacher could use the materials out-of-the-box. You’d be there to connect the learning system and the assessment system by being a friendly face and answering the few questions the system can’t yet handle.

·      Rule #6: “Treat A.I. Like a Chimp Army” is obvious.

I thought that reading Futureproof would make me despondent about potentially losing my job to a robot. Instead, it galvanized me to think about teaching and learning at a fundamental level, about the key role played by human-to-human connection (mediated or not by technology), and about how to continue leveraging technology in a machine-assisted manner to improve the process. And this makes me excited about meeting my students face-to-face again this upcoming semester! What makes me futureproof is continuing to engage in the conversation about what’s important, and why, in my field of teaching and in learning more broadly. Even better, my research now involves thinking about systems, algorithms, complexity, and the limits of reductionism. How exciting!
