Thursday, September 9, 2021

Edge Cases

Technology has changed our world in unprecedented ways. Ugh. That first sentence sounds like one of those standardized ‘suggestions’ of how to begin an essay you’re writing for a school assignment. It’s true, but it’s also drab.

 

Your standard computer as an example of technology? Boring. Maybe because of its ubiquity and its eclipse by tablets and smartphones.

 

What’s more exciting? What’s cutting-edge? What’s creative? The new buzzword is machine learning, previously better known as artificial intelligence. Or if you wanted to take the learning and intelligence out of it, these are automated algorithmic systems programmed for a specific task. That sounds more boring, but it’s the boring you have to be careful of if you worry whether you’ll be replaced by a robot. See my previous blog post about Futureproof, where author Kevin Roose hammers this point home.

 

Today’s blog post is about a different book: Artificial Unintelligence by Meredith Broussard. While presently an academic in the journalism school at NYU, Broussard is also a hacker and has the skillz, both in computation and communication. Her specialty is data journalism. She’s not afraid to get into the trenches even when it means riding a ‘startup bus’ packed with too many people, junk food, and all manner of cords for one’s technology – all while trying to create an app to win kudos at a hackathon. I admire her gumption. Her experience is one interesting story among others; I also learned about self-driving cars, campaign-finance rules, the history of computing, and how our obsession with rankings has obscured the difference between the popular and the good. Broussard is an excellent storyteller even as she peppers you with data tables and lines of Python code.

 


Broussard thinks that the general A.I. portrayed in Hollywood movies and dystopian fiction is a mirage. Such visions make for exciting stories, perhaps, but are unlikely to materialize. No, she doesn’t think the apocalyptic Singularity is nigh. Narrow A.I., on the other hand, deserves our attention, journalistic and otherwise. What seems boring will change our lives in ways we might not like if we don’t pay attention. Many of our twenty-first-century problems are intertwined with seemingly boring technology coupled with human greed and indifference. But like Roose, Broussard provides a positive counterframe to the problem: the edge cases.

 

Automation can be a good thing in many cases. Broussard writes: “[It] will handle a lot of the mundane work; it won’t handle the edge cases. The edge cases require hand curation. You need to build in human effort for the edge cases, or they won’t get done. It’s also important not to expect that technology will take care of the edge cases. Effective, human-centered design requires the engineer to acknowledge that sometimes, you’ll have to finish the job by hand if you want it done.” And here Broussard refers to doing something well in a broader sense of the word.

 

What can narrow A.I. do in my area of teaching and education? Certainly, automation can take care of many mundane tasks – record keeping, for example. For many introductory math and science classes, it can deliver homework problems and grade them! It will even mix up the variables so different students get different numbers, which means that when they help each other they’ll need to know how to solve a problem and can’t just copy each other’s answers (at least the numerical ones). Of the tasks many teachers would like to avoid, grading ranks very high. So-called adaptive learning is the rising star with its claims of personalizing the learning process – the ever-patient tutor who curates questions at the right level to help you advance your learning and “knows” when you’ve progressed sufficiently to move you to the next level. What does this require? In short, the ability to atomize knowledge, which I’ve argued is a questionable assumption.
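To make the variable-mixing concrete, here is a minimal toy sketch of my own – not any real homework platform’s code – of how a system might seed a problem’s numbers per student and auto-grade the numeric answer:

```python
import random

def make_problem(seed):
    """Generate a per-student problem with randomized numbers.

    Hypothetical example: seeding by something like a student ID means
    each student sees stable numbers, but different students see
    different ones.
    """
    rng = random.Random(seed)
    v = rng.randint(5, 20)   # speed in m/s
    t = rng.randint(2, 10)   # time in s
    prompt = f"A cart moves at {v} m/s for {t} s. How far does it travel (m)?"
    answer = v * t
    return prompt, answer

def grade(answer, submission, tol=1e-6):
    """Auto-grade a numeric submission against the expected answer."""
    return abs(submission - answer) <= tol

prompt, answer = make_problem(seed=1234)  # e.g., seed on a student ID
print(prompt)
print(grade(answer, answer))  # prints True
```

Two students who share only their method, not their numbers, each have to actually solve the problem – which is exactly the copying-resistance described above.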

 

But there are many steps in the learning process that can be atomized. My job as a teacher is to break down complex material into digestible steps for the student. After doing it for many years, I have a good idea where students get stuck, where the tricky bits are, and how to use different analogies and models to help illuminate abstract ideas. Some of this can be parameterized into an A.I. tutor. We’re still in the early days of this revolution despite the occasional grandiose claim, and I expect to see more progress in this area.
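Purely as an illustration of what “parameterized” might mean here – the rule and thresholds below are my invention, not anything from the book or a real product – an adaptive tutor’s level-up logic could be as crude as a mastery threshold over recent answers:

```python
def next_level(level, recent_results, threshold=0.8, window=5):
    """Promote a student once recent answers suggest mastery.

    Hypothetical rule of thumb: advance only after a full window of
    mostly-correct answers (1 = correct, 0 = incorrect).
    """
    recent = recent_results[-window:]          # look at the last few attempts
    if len(recent) == window and sum(recent) / window >= threshold:
        return level + 1                       # "knows" you've progressed
    return level                               # otherwise, stay and practice

# A student who got 4 of their last 5 problems right moves up a level.
print(next_level(2, [1, 0, 1, 1, 1, 1]))  # prints 3
```

Real systems are far richer than this, but the sketch shows the point: the tutor’s “judgment” is just a parameterized rule, which is why the atomization assumption matters so much.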

 

Can an automated tutor system handle the edge cases? Not in all cases, but it is likely to make inroads into those edges as such systems improve. How sizable are the edges? It depends on what you mean by learning and how you determine whether a student has learned or not. One danger is letting the boring technology define and determine what constitutes learning. A follow-up danger is allowing the black box machines to categorize us into boxes (the irony!) – something that’s happening at an alarming rate in so-called dynamic pricing systems, be it in retail or insurance. Data is not destiny. More data is not necessarily better data if you don’t understand its blind spots. Broussard’s many examples show us why this is so and why narrow A.I. works as well as it does. The edge might be larger than you think if you don’t stop to look at the bigger picture. Narrow A.I. cannot see the big picture, hence you might call it artificially Unintelligent. This is a good distinction to keep in mind.

 

One thing I can do well that an A.I. currently cannot is answer questions from students, or elicit the gaps in their knowledge by asking follow-up questions. When you don’t know something, it’s hard to come up with a well-posed question. Human expert intelligence is particularly good at divining these cases and getting to the root question efficiently. That’s why I’m still needed as a human educator – besides the human connection, which I think is just as important if not more so. But will A.I. be able to increasingly handle that task well? It’s hard to make predictions. Especially about the future. And edge cases, by their very nature, are the hardest to predict. Automated systems need predictability to be trained to work well. Real human behavior is not so predictable. Perhaps our humanity is the ultimate edge case.
