This year the National Academies Press (NAP) released Undergraduate
Research Experiences for STEM Students: Successes, Challenges, and
Opportunities. You can read it online for free at the NAP website. I’ve read several NAP reports pertaining to science education.
Typically, a blue-ribbon panel assembles these reports narrating the current
state of affairs in the field. These reports tackle multi-faceted issues with
no clear-cut answers, but they bring the reader up-to-date with a summary of
the evidence from the panel’s research.
The first take-home message from the report: undergraduate
research experiences (UREs) in STEM are highly varied. The two most common
approaches are (1) the apprenticeship model, where students work in the lab of
a faculty member, and (2) embedding the experience within a course (referred to
as CUREs: Course-based UREs), but there are many others. The
research mentor or course instructor plays an important role in whether the
student views his or her experience positively. There is some evidence that
UREs have measurable positive impacts for historically under-represented
groups, at least with respect to retention or persistence in STEM.
Measuring impact, however, is tricky. There are very few
gold-standard randomized control trials (RCTs) measuring the effects of UREs.
Trying to eliminate the effects of self-selection and other confounding factors
is challenging, not to mention convincing your institutional review board to
approve this type of RCT. If you were a student (or the parent of a student),
you might be very unhappy finding out later that you were randomly assigned to
a control group and given a “less favorable” educational experience. This is
true of education as a whole; assessing the impact of different interventions
can be challenging. As someone whose teaching and research spans physics and
chemistry (the “harder” sciences), but who also extensively reads educational
research, I have great respect for my colleagues in the so-called “softer”
social sciences who contend with messier data that is much more difficult to
de-convolute. It is not easy to design an experiment to cleanly separate out
the particular effect of something as multifaceted as a research experience
within an already complex educational experience.
How “success” is defined also varies considerably. In
chapter 3, the report summarizes the many different goals into three broad
categories: (1) increasing participation and retention in STEM, (2) promoting
STEM disciplinary knowledge and practices, and (3) integrating students into
STEM culture. Since undergraduate research is required for chemistry and
biochemistry majors, I asked my students in Research Methods to critique and
sharpen the vague hypothesis “Participating in undergraduate research increases
student success”. (This was part of an exercise on hypothesis development a
month ago.) The students caught on quickly and proposed different ways the
words “participate,” “research,” and “success” could be
construed. Besides qualitative differences, we also discussed the challenge of
quantifying: coming up with a measurement that acts as a proxy for a vague word
in the hypothesis.
The first of the three categories may be more easily
measured. For example, some of the goals listed include enrollment in STEM
courses (for non-majors beyond the minimum requirement), retention of STEM
majors (many students who declare a STEM major “drop out” when they realize it
is “hard”), going to graduate school in STEM, or pursuing a career in the
sciences post-graduation. (Many of our students go into biotech/pharma.) The other
two categories are not as easily measured. I suppose students could take an
exam to determine how much they know about disciplinary knowledge and practices.
(We would measure whether URE participants do better on the exam.) Many of the
studies touting the positive impacts of UREs come from self-reported student
surveys. Arguably, student satisfaction is a positive outcome and it may
translate into STEM careers or an appreciation for the guild of STEM, but
without RCTs, it is unclear if UREs are truly “high-impact practices” (an
increasingly annoying buzzword) as claimed. The panel acknowledges these
limitations in their summary of studies measuring impact (in different ways),
and not surprisingly, they recommend further study. (This is a common
recommendation in all the NAP reports I have read.) They also conclude that
there are some reasons to be optimistic as there is some (although limited)
evidence for some positive impact in some areas. That’s a lot of “some”.
There are several vignettes of “successful” programs.* (This
is another common feature of the NAP reports I’ve read.) These are potentially
useful if one is looking for examples and ideas to emulate. The panel also does
a commendable job framing UREs within the broader context of institutions and
their goals. If you’re thinking of starting a URE on your campus on in your
major, resources and support from the administration is needed. The NAP report
is written not just for the faculty member but targets the university
administrator and funding sources. It argues in support of funding UREs that align
with institutional goals. In addition, research that measures the effectiveness
of UREs should also be supported. A cynic might see a self-serving streak of
experts in the field recommending more research, and therefore funding, is
needed. But perhaps the report’s conclusion is true of any line of
investigation that looks promising: further research is needed. That’s how
scientists continue to make progress in their research labs, thereby
contributing to knowledge as a whole (at least that is a key institutional goal
for a research-oriented university). Indeed, the last words in the report’s
title are Challenges and Opportunities.
My department believes UREs are so important that we
require every student majoring in chemistry or biochemistry to have a mentored research
experience per the apprenticeship model. Several of our lab-based courses also
have research projects built into the last several weeks. Have we quantified
the impact of that experience and measured it against students who did not
participate in UREs? No. And the relationship won’t be easy to untangle. We do
have plenty of anecdotal data, and we have worked hard to acquire the
characteristics that correlate with some measure of excellence.*
Every semester we host a Sci-Mix poster session where
research labs present their work, and interested students mingle and talk
science! We had our session tonight, and I’m thankful for the enthusiastic
students in my group who took turns helping to answer questions, otherwise I
might have been utterly exhausted. These events have become increasingly
popular over the years. (We ran out of food halfway through the 1.5-hour
session.) And the students seem to enjoy the mix of science. Seeing the
excitement of my research students is what helps me keep going. Those
individual conversations with students suggest they find what they do valuable!
*For a detailed list of requirements, with brief
explanations, for “successful” UREs, see the first chapter of the COEUR report
(Characteristics of Excellence in Undergraduate Research) published by the
Council on Undergraduate Research.