Thursday, March 30, 2023

Chatbot Study Help

While ChatGPT doesn’t actually understand or know chemistry the way humans do, can it still help students in some way? Since my first chat, I’ve come up with two ways to leverage the generative capabilities of GPT: study guides and test questions. These are two things that students regularly ask for. So, if a professor is unwilling to provide them, maybe ChatGPT will oblige. (I have been revamping my G-Chem study guides this semester.)

 

I asked GPT: “Can you help me put together a study guide for my General Chemistry 1 class? What should I know or know how to do to prepare for the final exam?” It came up with a bulleted list that mirrored much of what we cover in G-Chem 1 (atomic structure, chemical bonding, chemical reactions, gas laws, periodic trends), although it added some G-Chem 2 topics as well. Here’s one of the items:

 

GPT: Periodic Trends.

·      Atomic Radius

·      Ionization Energy

·      Electron Affinity

·      Electronegativity

 

And it added a nice study tip:

 

GPT: In addition to understanding these concepts, make sure to practice solving problems related to each topic. You can find practice problems in your textbook, online, or from your professor. Good luck with your studies!

 

To drill down, I asked: “Under Periodic Trends, could you be more specific as to what I should be able to do or explain.” It does a nice job. Here’s the first item as an example:

 

GPT: Atomic Radius

·      Understand what atomic radius is and how it changes across a period and down a group.

·      Be able to explain the factors that influence atomic radius (such as the number of electron shells, effective nuclear charge, and shielding effect).

·      Be able to predict which element has a larger atomic radius in a given pair of elements.

 

I asked GPT to provide me a sample question that covers the three things on the list for Atomic Radius. (I did not choose Ionization Energy or Electron Affinity because they are trickier.)

 

GPT: Explain how atomic radius changes across a period and down a group, and identify the factors that influence atomic radius. Using this information, predict which element has a larger atomic radius in each of the following pairs:

a.     Cl or S

b.     Ca or Sr

c.      O or F

Here’s my partly erroneous response: “As the number of protons increases, it pulls the valence electrons closer to it. For this reason, atomic radius should decrease going across a period since the number of protons increases. For the same reason, atomic radius should decrease going down a group because the number of protons increases. Applying it to the pairs: (a) Cl will be smaller than S, (b) Sr will be smaller than Ca, and (c) O will be smaller than F.”

 

GPT encourages me (Great effort!) and agrees with my initial response. Then it tells me I got (a) and (c) correct and (b) wrong.

 

GPT: For part (b), your answer is not correct. Sr has a larger atomic radius than Ca. This is because Sr is located below Ca in the same group, and as we move down a group, the atomic radius increases due to the addition of new energy levels.
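
An aside, and not something from the chat: for readers who like to see the rule of thumb spelled out, here is a minimal sketch of the trend GPT is describing. It encodes only the main-group heuristic (no exceptions, no ions), and the period/group positions are just the usual ones from the periodic table.

```python
# Rough encoding of the main-group trend: atomic radius increases going down a
# group (more occupied shells) and decreases going left-to-right across a
# period (greater effective nuclear charge). Positions are (period, group).
positions = {"O": (2, 16), "F": (2, 17), "S": (3, 16), "Cl": (3, 17),
             "Ca": (4, 2), "Sr": (5, 2)}

def larger_radius(a, b):
    """Predict which of two main-group elements has the larger atomic radius (trend only)."""
    (pa, ga), (pb, gb) = positions[a], positions[b]
    if pa != pb:                     # same group in these pairs: more shells means larger
        return a if pa > pb else b
    return a if ga < gb else b       # same period: farther left means larger

for pair in [("Cl", "S"), ("Ca", "Sr"), ("O", "F")]:
    print(pair, "->", larger_radius(*pair))   # S, Sr, and O are the larger ones
```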

 

I ask GPT to help explain what adding new energy levels means, and I ask more clarifying questions when it brings up new terminology (such as shielding). It does a decent job in its responses. Then things go awry. I ask it to help me use shielding in a better response for (b).

 

GPT: to improve your answer to (b), you could say that Sr has a larger atomic radius than Ba because, even though they both have the same number of energy levels, the valence electrons in Sr experience less effective nuclear charge than the valence electrons in Ba due to increased shielding. Therefore, Sr has a larger atomic radius than Ba.

 

Eeeeks! The original question was Sr vs Ca. Now Ba gets thrown in, and the answer is wrong. I say I’m confused because Ba is below Sr on the periodic table. GPT apologizes and corrects itself. We do some back and forth on when to use shielding in an argument, and it gets things half right and half wrong.

 

Then I go back to an earlier problem and say: “Earlier you said I was correct when I said O was smaller than F, but now I think I might be wrong. Was I wrong earlier?” GPT assures me that I was right but spouts a nonsensical explanation. After some back and forth it corrects itself apologetically. We then go through some examples that include cations and anions, and I essentially see the same thing: half-right, half-wrong explanations.

 

In a new chat, I decided to try a different tack on a different topic: “I need practice with stoichiometry in my chem class. Exam coming up. Can you help generate a practice question for me? (Just the question no answer please.)”

 

GPT: If 5.00 grams of calcium reacts completely with excess hydrochloric acid, how many grams of calcium chloride will be produced?

 

That’s a good start. I ask for a hint to help me get started, and it obliges with an appropriate and helpful one. I then claim to be stuck: “Okay, I'm trying to write the equation: Ca + HCl --> CaCl2 but where does the H go?” GPT responds correctly that H2 gas is released as a byproduct and provides a correctly balanced chemical equation. I then provide a wrong answer. Impressively, it tells me the detailed correct steps I should follow to do the calculation. And even better, at the end it says…

 

GPT: “So, without giving away the full answer, I would suggest you to follow these steps to solve the problem. Let me know if you have any further questions!”

 

It might be getting the hang of this hint-providing approach. I give it another wrong answer, and things go awry: in an encouraging tone it tells me I’m correct but suggests I check my sig figs. I say “Oh wait, I made a calculator error” and proceed to give it the correct answer. GPT confuses itself, probably because I used the word “error”, and tells me I’m wrong. It then methodically works out the problem and gets the same answer I did.
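
For readers following along, here is the arithmetic we were both circling, written out. This is my own sketch using standard molar masses, not GPT’s worked solution verbatim.

```python
# Ca + 2 HCl -> CaCl2 + H2, starting from 5.00 g Ca with excess HCl
m_Ca = 5.00                    # grams of calcium
M_Ca = 40.08                   # g/mol for Ca
M_CaCl2 = 40.08 + 2 * 35.45    # g/mol for CaCl2 = 110.98

mol_Ca = m_Ca / M_Ca           # about 0.125 mol Ca
mol_CaCl2 = mol_Ca             # 1:1 mole ratio from the balanced equation
m_CaCl2 = mol_CaCl2 * M_CaCl2  # about 13.8 g CaCl2
print(round(m_CaCl2, 1))       # 13.8
```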

 

I follow up, asking for a harder question that requires me to “know limiting reactant and %yield”. GPT obliges. Good question, but a wrong answer: it picks the wrong limiting reactant, misled by me. When the resulting yield is over 100%, it suggests that there may be error or inefficiency in the reaction. I ask for some examples, and it provides a plausible list. I try asking for a harder question with different reaction stoichiometry, and GPT chokes on balancing chemical reactions.
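
Since I haven’t reproduced GPT’s limiting-reactant question here, this is a generic sketch of the bookkeeping involved. The reaction (2 H2 + O2 -> 2 H2O) and all of the quantities are made up for illustration; they are not the ones from the chat.

```python
# Hypothetical example: 2 H2 + O2 -> 2 H2O, with 4.0 g H2, 24.0 g O2,
# and a measured ("actual") yield of 20.0 g H2O.
molar_mass = {"H2": 2.016, "O2": 32.00, "H2O": 18.02}
coeff      = {"H2": 2,     "O2": 1,     "H2O": 2}
grams_in   = {"H2": 4.0,   "O2": 24.0}

# Limiting reactant: smallest value of (moles available / stoichiometric coefficient)
extent = {r: grams_in[r] / molar_mass[r] / coeff[r] for r in grams_in}
limiting = min(extent, key=extent.get)                        # O2 in this example

theoretical = extent[limiting] * coeff["H2O"] * molar_mass["H2O"]   # grams of H2O possible
percent_yield = 100 * 20.0 / theoretical                             # actual / theoretical
print(limiting, round(theoretical, 1), round(percent_yield, 1))      # O2 27.0 74.0
```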

 

In a new chat, I try a different topic: “I have a quiz coming up on intermolecular forces in chem. Could you tell me what are the main things I need to know?” GPT generates the key points and does a very nice job at the G-Chem level, even though I didn’t mention General Chemistry in the prompt. I ask it to generate three different questions testing different aspects of intermolecular forces. The questions are reasonable, although two are essentially multiple choice. Not surprisingly, it can defend the correct answers to the multiple-choice questions well. But it goes awry with the question that requires explanation. At one point, when I ask follow-up questions about the explanations, it makes a claim correlating hydrogen bonding with atom size. I ask it for harder questions. Once again, reasonable questions, but if pressed for answers, GPT spouts partial nonsense.

 

When I ask GPT to help me make the main points of intermolecular forces memorable with a poem, it obliges with a whimsical ditty that’s better than anything I could come up with. The poem has meter and rhyme. But it doesn’t help with knowing the relative strengths of the forces. So I ask for another poem that incorporates these, and it obliges with five stanzas, the fifth being a summary. I claim the poem is too long and ask it for a “memonic [sic] or memory aid”. Here’s the response:

 

GPT: Dancing Elephants Don't Hug Intensely. Each letter in the phrase represents one type of intermolecular force, in order from weakest to strongest:

·      Dispersion forces

·      Dipole-dipole interactions

·      Hydrogen bonding

·      Ion-dipole interactions

 

I’m not sure where the Elephants come from, but the image in my mind is stark. Hard to unsee! Not sure if I’ll remember the mnemonic.

 

Here’s my summary thus far: GPT can generate reasonable study guides. By asking it for more details and “what should I be able to do?”, one can get a pretty good, comprehensive guide. GPT can also generate a variety of test questions. By explicitly providing categories that build complexity into the questions, one can generate decent mock exam questions. I tell my students that writing their own test questions is a good skill when preparing for an exam, and that doing so in groups is even better. I could conceive of an exercise that gets students to think about what should go into a good exam question and have them use GPT as an aid. The study-guide points can be fed back into the test-question generation process.

 

But asking GPT for help answering questions is where things become problematic. It provides plausible-sounding answers, but there’s no check when nonsense is generated. I think it’s useful for students to see this too. I find it interesting to read GPT’s responses because sometimes it reminds me of a student who crams information without understanding and then does a data dump, except that GPT sounds much smoother than the student. Perhaps seeing this can lead to a helpful discussion about what students should look out for when they use GPT, and about what it means to actually understand something. Maybe it will help students be more reflective and ask themselves whether they really know something or whether their knowledge is just plausible-GPT-like. I regularly tell students that they should speak or write out explanations, and that the act of doing so helps with self-clarification. If seeing GPT in action helps them with such metacognitive awareness, that’s probably a good thing.
