Tuesday, March 10, 2026

Square Integrable

I am reading about the extraordinary math and science contributions of John von Neumann in Ananyo Bhattacharya’s book The Man from the Future. I definitely get the feeling that von Neumann was indeed a rare genius. I also get the feeling that maybe I should have persevered in learning more math when I was younger. If I had, not only would I have a better appreciation of von Neumann’s achievements, I would also be able to tackle some interesting problems in my research that require mathematical modeling beyond my current abilities. Feynman’s quote notwithstanding, I would like to better understand quantum mechanics since I use it heavily in my research.

 


Today’s blog post is about Chapter 3 of Bhattacharya’s engaging book. The chapter is titled “The Quantum Evangelist” and leverages the author’s physics background. While I know a number of facts about the history of the development of quantum mechanics, I learned a lot more about von Neumann’s contributions and the context surrounding his work. Reading this chapter gave me a better idea of the conceptual differences between Heisenberg’s matrix mechanics and Schrodinger’s wave mechanics. The connections to set theory in mathematics (and Hilbert’s program of systematization) helped clarify the context. Quoting the author: “An atom has an infinite number of orbits… so Heisenberg’s matrices must also be of infinite size to represent all possible transitions between them. The members of such a matrix can… be lined up with a list of the counting numbers – they are ‘countably’ infinite. Schrodinger’s formulation, on the other hand, yielded wave functions describing… an uncountably infinite number of possibilities. An electron that is not bound to an atom… could be literally anywhere.”

 

I now have a better appreciation of Dirac’s “ingenious trick to merge the ‘discrete’ space of Heisenberg’s matrices and the ‘continuous’ space of Schrodinger’s waves” with the delta function. Bhattacharya describes it as a “salami slicer, cutting up the wavefunction into ultra-thin slivers in space”. While Hilbert space still feels fuzzy to me, I can dimly see where square-integrable functions come from. When I teach quantum chemistry, I tell students about this important property and its practical uses along with Born’s probability postulate, but I had never talked about their mathematical basis (because I didn’t understand it myself).
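A compact way to state the two ideas above, in standard notation (my own summary, not the book’s):

```latex
% Square-integrable: the wavefunction belongs to L^2, so the total
% probability is finite and can be normalized to one
\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx < \infty

% Dirac's "salami slicer": the delta function picks out the value
% of the wavefunction at a single point in continuous space
\psi(a) = \int_{-\infty}^{\infty} \psi(x)\, \delta(x - a) \, dx
```

The first condition is what makes Born’s probability postulate workable; the second is the bridge between the continuous wave picture and discrete slices of it.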

 

Where does von Neumann come into the story? Given his mathematical talents, he realized that square integrable functions “can be represented by an infinite series of orthogonal functions, sets of mathematically independent functions that can be added together to make any other… How much of each function is required is indicated by their coefficients... [which] were exactly the elements that appear in the state matrix.” In my class, I invoke orthogonality as a consequence of Hermitian operators. I discuss the importance of having linearly independent functions and spaces (e.g. Cartesian space or polar coordinates) conceptually, but my students still struggle to think about it. Linear algebra is not a prerequisite for my class and most students haven’t taken it. Neither have I, for that matter. Until reading this chapter, I had not realized the connection between square integrable wavefunctions and orthogonality. In my class, when we get to multi-electron multi-atom systems, I introduce students to manipulating linear combinations of functions (invoking the principle of superposition) to get better results when solving the Schrodinger equation. They learn that the sum of the squares of the coefficients must add up to one, but I hadn’t made the connection to square-integrability.
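The expansion idea above can be checked numerically. Here is a minimal sketch using particle-in-a-box eigenfunctions (my own illustrative choice of basis, not an example from the book) verifying orthonormality and the rule that the squared coefficients of a normalized superposition sum to one:

```python
import numpy as np

# Particle-in-a-box eigenfunctions on [0, L]: phi_n(x) = sqrt(2/L) * sin(n*pi*x/L).
# A hypothetical illustrative basis (my choice): square-integrable and mutually orthogonal.
L = 1.0
x = np.linspace(0.0, L, 20001)

def phi(n):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def integrate(f):
    # Trapezoid rule over the grid x
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def overlap(m, n):
    # <phi_m | phi_n>: close to 1 when m == n, close to 0 otherwise
    return integrate(phi(m) * phi(n))

# A normalized superposition psi = c1*phi_1 + c2*phi_2 with c1^2 + c2^2 = 1:
c1, c2 = 0.6, 0.8
psi = c1 * phi(1) + c2 * phi(2)
norm = integrate(psi ** 2)  # integral of |psi|^2
```

Because the basis is orthonormal, the cross terms in the norm integral vanish and it collapses to c1² + c2², which is exactly the “coefficients squared add up to one” rule my students learn.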

 

There is plenty more in the chapter about the weirder aspects of quantum mechanics, wavefunction collapse, hidden variable theory, pilot waves, Bell inequalities, and Many Worlds. But what really stood out to me was where square integrable functions come from (as part of Hilbert space) and how they connected to orthogonal component wavefunctions. All these connections were a revelation to me, and I’d been teaching for a quarter of a century! How little I know. How much more to learn. This reminds me that I should get back to Beyond Weird by Philip Ball.


Monday, March 9, 2026

Cybernetics Informing Learning?

I stumbled across an interesting blog post connecting Ross Ashby’s principles of cybernetics to how one designs questions to probe student learning. I have some familiarity with the cybernetics principles for thermostat design; several years ago I was reading papers using this to analyze a complex prebiotic chemistry problem adjacent to one of my research projects. I had not, however, considered how this affects instructional design. Given that A.I. methods are heavily encroaching on education, I think the article highlights some of the potential pitfalls of a computerized system that supposedly personalizes learning.

 

The word “system” is important here. The blogger, Carl Hendricks, has this to say: “An instructional system can only regulate what it can detect and many learning environments rely on a channel of extremely low capacity: correct or incorrect [which] carries almost no information about process. It does not distinguish decoding from guessing, understanding from memorisation, reasoning from elimination.” Prior to the present LLM burst, computerized learning systems relied on multiple choice questions (MCQs) or True/False questions. A subject matter expert designed these questions as a proxy to probe certain learning goals, usually atomized Taylorian-style. In the last decade, this morphed into “adaptive” systems that mixed-and-matched questions depending on whether a student got them right or wrong.

 

I think that expert-designed questions and answers for the computer-distance-online-learner can be effective to some extent. Writing good questions and answers is time-consuming and challenging. It’s also why lazy me doesn’t use exam MCQs. It’s faster for me to write a short-answer question and then evaluate the student answers, i.e., the time it takes me to grade the student answers is less than the time it would take to design really good MCQs. I’ve tried getting the LLM to help generate good question-and-answer pairs but right now the results are low quality. I expect they will improve with time; I might even be able to train one on a limited chemistry corpus.

 

But while expertly-designed individual questions may be quite good, stringing them together in an A.I. “adaptive” system degrades that goodness. This is why after trying out some pre-LLM systems, I never selected the “adaptive” option. Hendricks mentions the drawbacks of not coming up with good questions and answers that really get at what you want the student to learn, and the additional problem of having a regulating control system that supposedly personalizes the learning. He writes that measuring such performance is “fragile” because “the system ensured that answers were right often enough, but it never ensured that the right thinking had occurred. [It is] informationally impoverished; and no amount of pedagogical enthusiasm can compensate…”

 

LLMs continue to push the personal tutor aspect; I should say A.I. tech companies are pushing heavily because they need revenue streams. Last month I found that the current LLMs do a better job generating chemistry questions and answers compared to previous ones. I see more nuance and better accuracy overall. And the “voice” of the LLM tutor leans heavily on trying to sound helpful, offering follow-up information and more. One thing I learned is that I could make more of an effort to sound helpful when students ask me questions, so the LLM did help me improve in at least that one respect. But the LLM doesn’t always (and maybe not often) offer what might be best in actually improving learning. It’s good at helping the student feel good. But that’s because it was designed to do so. It wasn’t designed to be an expert tutor.

 

A thermostat does just one thing – regulate the temperature by measuring the ambient value and then turning on (or off) the heating/cooling device. Even with its narrow purpose, the mechanics of designing a good thermostat is trickier than it looks at first glance. The business of teaching and learning is more nebulously defined in purpose and certainly much more complex. Cybernetics may be a starting point to think about adaptive tutors but there is far to go before it will replace an actual human expert in terms of quality. Pessimistically, I predict that overhyped adaptive tutors will degrade the desired quality to a low common denominator. Hendricks writes: “Learner ingenuity will always exceed designer foresight; there will always be shortcuts that were not anticipated, strategies that were not mapped, paths that were left open by accident. Requisite variety is an asymptote, not a destination.”
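The measure-compare-act cycle of the thermostat can be sketched in a few lines. This is a toy illustration of my own; the setpoint, hysteresis band, and simple room model are all hypothetical values:

```python
# A minimal bang-bang (on/off) thermostat loop. The hysteresis band keeps the
# heater from chattering on and off when the temperature hovers at the setpoint,
# which is part of what makes even this "simple" regulator trickier than it looks.
SETPOINT = 20.0    # desired temperature, degrees C (hypothetical)
BAND = 0.5         # hysteresis half-width (hypothetical)

def thermostat_step(temp, heater_on):
    """One regulation cycle: decide the heater state from the measured temperature."""
    if temp < SETPOINT - BAND:
        return True          # too cold: turn heater on
    if temp > SETPOINT + BAND:
        return False         # too warm: turn heater off
    return heater_on         # inside the band: keep the current state

# Simulate a drafty room: the heater adds heat, the room slowly leaks it.
temp, heater_on = 15.0, False
for _ in range(100):
    heater_on = thermostat_step(temp, heater_on)
    temp += (0.8 if heater_on else 0.0) - 0.1 * (temp - 10.0) / 10.0
```

After enough cycles the temperature oscillates within the band around the setpoint. The contrast with teaching is the point: the thermostat’s single measured variable maps directly onto its single goal, while “correct/incorrect” maps only loosely onto learning.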

 

I’m reminded how amazing it is to learn something as a human being. I don’t pretend to know exactly how it happens especially in glorious moments of gestalt “aha!” understanding. Present neural networks underlying LLMs are not like our brain or our mind or our sense of self. As a computational scientist, I have some vague and wild ideas of how to improve on this. I’m sure others like me have such thoughts and hence I expect over time that LLMs will continue to improve. Whether they will eventually achieve the quality of the hype remains an open question.


Tuesday, February 17, 2026

The Feeling of Knowing

To err is human. To admit to erring… well, that’s difficult. Like most people, I don’t like the feeling of realizing that I’m wrong. I always think I’m right (I can’t help it!), but I don’t think I’m always right. Past experience confirms that I do err; it’s consistent but I can’t predict when it will happen. And I feel I’m right… right up to the moment that I’m proven wrong.

 


The subject of Wrong-ology is taken up by Kathryn Schulz in her book, Being Wrong. Our minds are funny things and the way we learn and remember things is much more complex and mysterious than we imagine. We think, as Plato suggested, that memory works like a wax tablet: “Everything you experience, from your own thoughts and sensory impressions to interactions with others, creates an imprint in that wax… an unchanging mental replica of the events of the past, captured at the moment they occurred.” This may contribute to that feeling of knowing, even when in gross error.

 

As a professor, I’m well practiced at professing. Students think you’re more knowledgeable and know what you’re talking about when you present the material confidently. That’s not hard to do because I feel that I know the material. Even when I don’t know it as well as I should, I still present it confidently. Fake it till you make it. Did I do so when I first started teaching? Was I more diffident back then? Honestly, I don’t remember. I’ve learned not to trust my memory even if I feel I can visualize it in my mind’s eye. My spouse provides me a very useful signal when I might be professing with confidence about something I know little about; she says: “You say that so confidently”. That gets me to chuckle, stop, check, and think.

 

Schulz discusses medical cases of brain issues where patients confidently describe or explain something with no correspondence to actual reality. And they seemingly believe it. This is known as confabulation. Here’s how Schulz describes it: “Imagine, by way of analogy, that each of us possesses an inner writer and an inner fact-checker. As soon as the writer begins devising a story, the fact-checker gets busy comparing it with the input from our senses, checking it against our memory, examining it for internal consistencies, thinking through our database of facts about the world, and, once we utter it, gauging other people’s reactions to assess its credibility… When the fact-checker falls asleep on the job, however, our theories about the world can become wholly unmoored from reality. All of us have experienced this, because the one time our fact-checkers reliably fall asleep is when we do, too. Think about dreams again for a moment, and about how weird even just the averagely weird ones can be… Now, two bizarre things are going on here. The first is that your brain is generating representations of the world that are only lightly tethered to the real, or even to the possible. The second is that you are completely untroubled by this fact.”

 

Being surprised is a good wake-up call to discovering error. You’d think that by now I would have gotten used to being surprised every time I err. But I am surprised every single time. Consistent, yet unpredictable in when it will happen. I’m heartened when Schulz writes that saying “I don’t know” is a good sign of brain function because in some forms of dementia, the fact-checker falls asleep and confabulation ensues. It also turns out that being confabulatory is part of how the human brain works. It’s an engine, possibly the engine, of creativity and imagination. I can think about and imagine things that are not real. I can make mental models of things that are abstract or invisible (which I must do frequently in thinking about chemistry). Our minds have adapted to come up with quick instinctive solutions, not always thought through, that do serve us well on many an occasion. The feeling of knowing allows us to act quickly when needed.

 

When grading exams, I still get surprised by the occasional confabulatory explanations of students. A student who has no idea what’s going on is yet able to come up with a fantastical story involving throwing together chemistry concepts completely untethered to reality. It doesn’t happen often, but it’s interesting for me to read these “answers” and try to imagine how a student came up with them. I wonder if that student had the confident feeling of knowing. But actually didn’t. Not knowing what you don’t know isn’t a great situation to be in.

 

Reading Being Wrong has made me a little quicker to say “I was wrong” in my classes when I make a mistake on the board and it is pointed out by a student. I apologize to the class, then thank the student for paying attention and being brave enough to tell me so that I don’t mislead the class any further. While it doesn’t happen often, I feel that as I age it has ticked up in frequency. I’m not as sharp as I used to be, perhaps. Or maybe I am less well prepared because I’m overconfident in the feeling of knowing, having taught the subject matter multiple times over a couple of decades. But if I actually learn something from my errors, that’s a good thing!


Tuesday, February 10, 2026

Rare Earth: P and N

The idea that Planet Earth is rare and special for being able to host life has a very readable book-length argument, Rare Earth by Ward and Brownlee. Technically it argues why complex life is rare, while simpler life may be achievable over a broader range of conditions. Given that we only have a sample size of one for life-harboring planets, who knows if life proliferates beyond our star? You can plug a range of numbers into the Drake equation to convince yourself either way.

 

The idea that Earth sits in a Goldilocks habitable zone most often refers to whether water exists as a liquid on the surface of a planet. This assumes that H2O is crucial to life as a liquid, and that other liquids (ammonia, hydrocarbons, formamide) may not have the same versatility. It’s hard to say otherwise with a sample size of one. We also assume that carbon-based molecules are crucial for life, which is reasonable from a chemical point of view (diversity, bond energies, or as a carrier). That takes care of carbon, hydrogen, and oxygen. What about nitrogen and phosphorus?

 

The idea that our rare Earth may be rarer than we previously thought comes from a paper published last month in Nature Astronomy (DOI: 10.1038/s41550-026-02775-z). It examines core formation of rocky planets and estimates the availability of nitrogen and phosphorus under different conditions. What matters most is the redox state of the mantle. Our planet apparently sits in a zone that optimizes decent amounts of N and P (although both are much less abundant than C, H, O). It’s a tricky balance. If the redox situation is too reducing, availability of P plummets; too oxidizing and N might be lost to outer space by significant degassing. Simulated origin-of-life chemistry in the lab has always worked better under reducing conditions. Thus, the authors conclude: “there is a plausible case to make that only moderately oxidizing planets have both sufficient mantle P and sufficient reducing power to sustain prebiotic chemistry and then life”.

 

Why do we need N? Amino acids. Catalysts. Chemical versatility. We would not have fine-tuning of the thermodynamics and kinetics of (bio)chemical reactions without nitrogen. Why do we need P? I’m not sure. Arguments can be made for its crucial role in nucleic acids (Westheimer’s famous paper). But it’s possible some other backbone might work. Bioenergetic currency relies on phosphates today, but it’s possible that sulfur could have played the early role of energy transduction. Sulfate bacteria are also fierce competitors. I’d be very curious what is known about sulfur availability during planetary formation. We know that outgassing of volcanoes on the early Earth is a source. And I’m also clearly biased because I have a current research grant to study the role of sulfur in origins-of-life chemistry.

 

Final tidbit of the paper that I enjoyed: the authors use oxygen fugacity as their redox measure. When I teach P-Chem, I try to pepper in examples of why the math-and-models are useful. Fugacity is one of those bewildering topics to students, because the math looks like a merry-go-round and having a hypothetical reference state seems strange. Now I can point to another example of why learning about fugacity is useful and interesting. The paper makes reference to Mars and to exoplanets in the search for life in the universe.


Friday, February 6, 2026

Second Week: Spring 2026 Edition

I forgot to write last Fall’s First Week edition (so here’s the one from 2024), probably because I was super-busy teaching two sections of G-Chem 1 and one section of Biochem 1. I’m not teaching any Honors section this academic year so last semester I had around 100 students across my three courses. We’re also using a new textbook and online homework system for G-Chem, and it was just my second time teaching Biochem. All in all, more prep work than usual. This semester, I’m teaching one section of G-Chem 2 and one section of P-Chem 2 with around 60 students across both courses. I’m now used to the G-Chem online homework system, and I like the new textbook. Thus, the semester feels lighter from a workload standpoint. Hurrah!

 

But I was still very busy last week because of the possibility of a government shutdown here in the U.S. (which turned out to be mercifully short, thankfully). I decided to write up my annual report for my current federal research grant and submit it about a month earlier than usual, in case the government shut down. There was some confusion about whether an administrative request that I sent via webform went through because the website was apparently having problems, but eventually some back-and-forth emails confirmed that they received my request and my report. And hence today’s post is happening at the end of Week 2 rather than Week 1 of the semester.

 

After the long winter break, I’ve been enjoying interacting with students in the classroom, in my office, or in the hallways or atrium of my building. I’ve been making an extra effort not to be overly quick or efficient in my interactions, and hopefully students feel I’m not rushing them when they have questions. We’ll see how that shakes out the rest of the semester. I feel I have more energy even though my first class starts at 8am (instead of 9am last semester), maybe because of the lighter load and maybe because I just feel freer. The first week of class I was still struggling with timing in my G-Chem class because I had rearranged the material to match the new textbook. This week I feel I did a much better job without rushing in the last 5-10 minutes of class. I’m not making many changes to the first eight weeks of my P-Chem class so that has been going smoothly timing-wise. (I will be making some changes to the latter half.)

 

I also feel I have time for research this semester! Last semester, I felt that I hardly made any progress on my own projects. I still helped my research students make progress in their projects, but didn’t have much time for my own. This semester, however, I’ve been getting in 5-10 hours of research or writing (working on a paper) per week, which has been very nice! One of my summer 2025 research students who continued working with me last semester is also a very capable writer so I invited her to write the first draft of the research paper featuring her work. The carrot is she gets to be first author! I have revised much of the text, but kept the overall flow intact. She also made all the Figures and Tables. (For many other papers, I make the Figures since I feel they look nicer and more consistent in size/shape, but this student is exceptional and detail-oriented.)

 

I’m not taking on any new service activities because over winter break, I found out that my sabbatical application for AY26-27 was approved! This means fewer committee meetings and more time this semester, probably contributing to my feeling freer! I’ve also decided to try presenting my work digitally rather than in-person at the upcoming American Chemical Society national conference, so I don’t need to block off travel time. More time freed up! I am reading a little more to try to fine-tune a plan for my upcoming sabbatical. There are so many interesting things to explore. Okay, that’s the end of Week 2, Spring 2026 edition. It may be my last such post as I’ve been reducing my blogging activities.


Tuesday, January 27, 2026

Words and Pictures

I’d read several papers by Richard Mayer on the dual-coding model: Learners have two channels for processing incoming information, verbal and visual. Over time, this was combined with insights from cognitive load theory and learning more about the brain and how memory works. Mayer now calls it the cognitive theory of multimedia learning (CTML) and I read a recent review that goes through the history of how they got there and where to next. The citation is Educational Psychology Review (2024) 36:8, DOI: 10.1007/s10648-023-09842-1. I very much enjoyed the personal insights the author shared about his research journey. Each of his headings is listed below, followed by my thoughts.

 

1. Theory Building Depends on Intellectual Curiosity. Mayer became very curious about how to improve teaching for “transfer” – being able to apply something you’ve learned usefully to a new situation. He did this by first narrowing the issue to the effects of multimedia. I am curious about a lot of things, but I haven’t had the discipline to really narrow my focus, and as a result I remain a dilettante on a broad range of topics. Consequently, I haven’t made significant theoretical contributions in my field even though I’ve learned a number of interesting things about a number of interesting systems I’ve studied. It seems I scratch the surface, pick the low hanging fruit, and move on. Maybe I need to change my approach.

 

2. Theory Building is Grounded in Old Ideas. Mayer discusses his reading of classic works in his field. I find reading the historical underpinnings of my research and teaching very enjoyable from a learning point of view. I hadn’t thought much about building new theory off the old ideas in a systematic way. Something for me to consider.

 

3. Theory Building is Not a Straight, Planned-Out Path. Mayer relates how he usefully breaks down interesting questions into “shorter 2- or 3-year plans targeted on specific research questions”. This led him to the multimedia principle: “people learn better from words and pictures than from words alone”. I’ve known about this, and it’s common in the natural sciences to have lots of pictures. I’ve also learned that the pictures I project on the screen should not be cluttered with text as I verbalize my way through an explanation (Mayer’s coherence principle). After doing so, I then write things on the board for students to have good notes, at least in G-Chem. (I’m worse at it in upper division classes.) Mayer also writes about pursuing fruitful paths; I also do this research-wise but I likely move too quickly away from something that looks like it would take more work. I’m lazy.

 

4 & 5. Theory Building is an Engineering Problem [and] an Iterative Process Involving the Persistent Interplay Between Research and Theory. By this Mayer means that it requires tinkering to make something work better, and going through a development cycle where theory leads to research experiments, the results of which feed back into theory. Mayer discusses fostering generative processing: “motivating the learner to actively engage with the material”. This is a weak area for me. I’ve relied on my enthusiasm for my subject area (which students recognize and comment positively on) but this is likely not enough. My activities mostly require the students to do analysis, but few of them ask the students to be generative. This needs more work on my part.

 

6. Theory Building Depends on Persistence in Collecting New Research Evidence. Sounds obvious, but this requires hard work which is not my strong suit.

 

7. Theory Building is a Team Activity. The days of the lone theorist making substantial novel discoveries are long gone. A good and fruitful collaboration requires work to sustain it, and since I’ve already admitted I’m lazy, my collaborations tend to be short-term and specific, and not dedicated to theory building in particular. Maybe I need to change that.

 

In the middle section of his article, Mayer discusses his “inching towards a visual representation of the theory”. This is very appropriate given what he studies. He starts with simple flowcharts that slowly build up to what has become a compact and useful picture. Here’s Figure 8 from this article. You’ll have to read his article to get all the details, but once you know what each of the boxes and arrows represents, it summarizes the theory in a single uncluttered visual representation.

 


There’s also a useful Table with his fifteen principles of multimedia instructional design along with their effect sizes from experiments. I already follow some of these, given my prior immersion into cognitive load theory. Here are some that I hadn’t thought about much or haven’t incorporated yet.

·      Presenting material in user-paced segments rather than a continuous unit. I don’t do this well and I need to improve how I cue different segments in class.

·      Sometimes I assume students know definitions and terminology that they don’t and/or present them in an order that confuses them.

·      Apparently in multimedia, using a conversational style works better than a formal style. I don’t know where I am on this spectrum and should reflect more on this.

·      If you’re onscreen as an instructor, high embodiment helps. I take this to mean that being a disembodied voice talking through slides is inferior. In our pandemic all-on-Zoom year, my camera showed me writing on a large white board, and I would sometimes step out of frame so that more of the board would be visible. At some point we’ll have another pandemic and I’ll have to think about this.

·      Generative learning activities help. I mentioned this above; I should design more of these.

While I don’t use 3D immersive virtual reality, apparently studies show that students don’t necessarily learn better compared to a corresponding 2D screen presentation. The effect size of this was small.

 

I have a sabbatical coming up that will allow me to think more deeply about some of these issues. A third of my sabbatical proposal had to do with pedagogy but mostly related to adapting machine learning and data science. And there were a whole bunch of other things in my proposal which are dilettantish, so maybe what I should be considering is how to narrow what I’d like to accomplish into specific questions and design specific activities ahead of time instead of my ad hoc muddle-through approach. But meanwhile I should look over my upcoming class materials and think about the words and pictures and whether I can better optimize student learning.


Sunday, January 25, 2026

Are you You?

Getting a new phone this month meant moving from thumb authentication to face authentication. It seems to work pretty seamlessly when I pick up my phone – the devices are getting smarter. Also, this month I’ve noticed Gmail regularly Captcha-asking for additional authentication to prove I’m not a bot. Is agentic A.I. causing more issues? I don’t know. But it made me think about how verifying my identity has changed over my lifetime.

 

Everything was done manually when I was young. No computers or internet. I don’t remember how I was verified when I first entered primary school. How did the school and teachers know I was not an impostor? In my most recent videochat with my mother (a former schoolteacher, long retired), I asked her and she told me about the systems they would use. My first major verification that I vaguely recall was the primary school national exams. Apparently, the government sent us letters with an entry slip and a unique number; and this allowed me to take my exams where I think I had to carefully write the unique number on all my exam papers. I vaguely remember teachers drilling us to do this. And while I don’t recall exactly, I think our own teachers were also the ones who verified us because we’d been in their classes for a whole year.

 

After the age of twelve, when I had secondary school examinations, we all had to bring in our identity cards, and place them on the corner of our desks for each exam. Each of us had a specific desk that had our name on it, and the invigilators who were not our teachers, walked around (clipboard in hand) to verify each of us via the identity cards. I still experience this process when I’m at the airport, visiting the bank in person, checking in to a hotel, picking up my badge at a conference, or any other situation where I need to verify who I am because the verifiers don’t know me and wouldn’t recognize my face. If you lived in a small village and never had to leave, everyone knows everyone and verification is easy. But in an era of urbanization, global travel, and not knowing your neighbors, verification becomes trickier.

 

The age of the internet has made authentication even more challenging. Are you who you say you are? How does the system know? There are logins, passwords, two-factor authentications, additional questions for information unique to you, and now voice and face verification. These are going to get more stringent as A.I. makes it easier to “fake” more characteristics. We’ll be increasingly up the wazoo in verification.

 

“Authenticating” is the title of the second chapter in Brian Christian’s The Most Human Human which details his experience as a confederate in an annual Turing Test competition. He’s trying to prove he’s the human against an A.I. competitor. The chapter opens with a story about a man with phonagnosia. He cannot recognize anyone’s voice, and growing up he had assumed his voice was distinctive because everyone else recognized his voice but he couldn’t recognize anyone else’s, which made phone calls an interesting case of guess-the-identity for him. He couldn’t voice-verify. In another vignette, someone easily breaks into the email account of a public figure simply by selecting “I forgot my password” and then verifying information based on internet searches.

 

The meat of this book chapter, however, is about what might be unique between a human conversationalist and a chatbot. Many of the successful chatbots in the early competitions were able to steer the conversation to avoid the tricky out-of-book situation, which is a reasonable strategy when the conversations are timed in a speed-dating-like format. Successful bots typically had a single programmer devoted to developing the bot’s personality so that it would seem like a single coherent individual. The bot felt like a singular You. Today’s A.I. large language models, however, were developed with the opposite philosophy: use weighted statistics from millions of disembodied conversations. Apparently, this is why A.I. translators are weak on long literary novels, which require a singular coherent voice throughout, but do just fine on shorter snippets.

 

When you’re conversing with a chatbot today, you’re conversing with a multitude of voices averaged into a response. Your interlocutor is Legion for they are many. They contain multitudes. You are no longer talking to an individual but a host of ghosts. I’ve never had an extended conversation with a chatbot (I have better things to do with my time), and my queries have usually been specific and chemistry-related; I dabble in exploring if chatbots can help my students gain a better understanding of chemistry. So I don’t personally know if I would ever feel that a chatbot feels like a friendly human; I know some of my students do enjoy their chatbot chats. And it may be that sufficient familiarity and multiple chats provides its own authentication, for better or worse, now that chatbots can access their memory store of past conversations with you and draw from it. I suppose this is what Personalization is all about. At some point the chatbot might feel like a You. But that’s because of you.