In an Education in Chemistry feature article last year,
Ross Galloway and Simon Lancaster discuss the challenge of measuring learning gains in students. First, they
mention standard measures and their associated difficulties. For example,
higher final exam scores might indicate more learning, but these may be highly
dependent on students' prior background in the subject. This has certainly been my experience
in first-semester introductory science courses.
Assessment officials continue to ask for measures of student learning. The public questions what a college education actually provides. For
better or for worse, measurement tools are here to stay. As educators, we
should push for better measurement tools rather than allow an administrator to
(mis)use a poor one and draw spurious conclusions. Galloway and Lancaster
suggest concept inventories. The best-known example is the Force Concept Inventory (FCI) in physics: its questions are insightful
and well validated, and the FCI is widely used. The authors list five concept
inventories available in chemistry. I downloaded all the references and read
through the articles. I’m not sure all of them have gone through the same
validation rigor as the FCI.
One of the challenges in regularly using a concept inventory
is the danger of starting to “teach to” its questions.
Strictly from an educational point of view, this is not a bad thing: if
the questions are well designed, then as a teacher I would want my students to
grasp the key concepts. It does, however, diminish the inventory's usefulness as a measurement tool. There is an adage
known as Goodhart’s Law: “When a measure becomes a target, it ceases to
be a good measure.” This is not just true of concept inventories; in my opinion, national
high-stakes exams in many countries exemplify it.
Galloway and Lancaster mention the American Chemical Society
(ACS) national examinations. A number of colleges use these as final exams at
the end of a first-year college course in General Chemistry for science majors.
I’ve been in discussions about the validity of such exams, and about how the results
can be interpreted at different schools to account for minor variations in the
curriculum. (General Chemistry is quite standard across U.S. colleges, although
there is some variation in topics at the edges.) Has the use of standardized
exams contributed to the rigidity of curricula in general and chemistry
curricula specifically? Possibly. That’s one of the challenges of dealing with
legacy systems. Compared to the other sciences, chemistry has traditionally
maintained the most hierarchical and rigid curriculum. It does, at least, make
transferring credit across institutions easier.
A number of the concept inventory questions in chemistry
rely on interpreting atomistic-particle diagrams. I’ve been diving deep into
this area after recently discovering Concepts of Matter in Science Education (Springer). This multi-author
monograph contains many studies investigating how students think about
atomistic and particle models – their conceptions and misconceptions. I’m
increasingly convinced that models are crucial in teaching chemical
concepts, but they are susceptible to all sorts of problems when students
misapply or misunderstand a model and its limits. I think that chemistry leans
more heavily on such models than the other sciences do, and inherits the
accompanying challenges in teaching and learning. Towards the end of the
semester, I cut out some “new content” in my non-majors course and “reviewed”
material from early in the semester using atomistic-particle models as a lens
and emphasizing their usefulness and limitations. We’ll see if this
“intervention” helped when I grade the final exams.
As to learning gains, I’m thinking of designing a
model-based questionnaire to use as a pre-test and post-test in my General Chemistry
class next semester. Then I need to remember not to “teach to the test”.
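If I do run a pre/post questionnaire, the standard way the physics education literature reports learning gains on the FCI is Hake's normalized gain: the fraction of the possible improvement actually achieved, (post − pre)/(100 − pre) with scores expressed as percentages. It helps correct for exactly the problem above, that raw scores depend heavily on where students start. Here's a rough sketch of the arithmetic (the scores below are invented, just to illustrate; none of this comes from the article):

```python
# Rough sketch: Hake's normalized gain from pre/post percentages.
# The scores below are made up for illustration only.

def normalized_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement a student actually achieved."""
    if pre >= 100:
        return 0.0  # nothing left to gain; avoids division by zero
    return (post - pre) / (100 - pre)

# Hypothetical (pre, post) percentages for a few students
scores = [(40, 70), (55, 80), (25, 60), (80, 90)]

gains = [normalized_gain(pre, post) for pre, post in scores]
print("Individual gains:", [round(g, 2) for g in gains])
print("Mean of individual gains:", round(sum(gains) / len(gains), 2))

# Hake's original <g> uses class-average pre/post rather than averaging per-student gains
avg_pre = sum(pre for pre, _ in scores) / len(scores)
avg_post = sum(post for _, post in scores) / len(scores)
print("Class-average gain <g>:", round(normalized_gain(avg_pre, avg_post), 2))
```

The two averages (per-student versus class-level) usually come out close but not identical, so whichever convention I pick, I should stick with it from semester to semester.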