Ever Changing Remediation: A Specific Example
“We’ve been doing it wrong, focusing on getting remedial students to pass
remedial courses. This new process puts remedial students directly into college
classes, and passes them from there!”
--Administrator, explaining the latest amazing new idea on August 6, 2012
Administrators are well aware that
remedial students tend to fail college classes, and rather than make the choice with integrity (stop loaning money to so many remedial students), they are always eager to try something, anything, to get these students to pass. Improved
passing, improved retention, is the only definition of a good education an
administrator can understand, all those unemployed but degree-holding people
occupying Wall Street notwithstanding. Almost every year I endure yet another
cockamamie plan to improve retention, so allow me to cover the most recent
one in detail:
On August 6, 2012, I attended six hours of
meetings to address the latest method: co-requisite courses for remedial math
and English. I can’t even guess at the money spent merely to have English and
math faculty, along with administrators, travel from all over the state to
Baton Rouge for this catered meeting. First, administrators present an array of
statistics regarding how amazing the new idea is: at every level, “education”
is ignored, and instead all improvement is measured in passing rates. The
entire definition of a successful program is whether it passes students or not.
Every statistic presented to us is about retention. After a flood of
such data, impressive to administrators and irrelevant to educators, we’re told
of the new plan.
The new plan this time around? Instead of having students take remedial classes before college classes (for example, College Preparatory Algebra II before being allowed into College Algebra), students take both courses, remedial and college level, at the same time, and are forced to get extra support. The
remedial class is more like a lab, with teaching assistants giving more
personal help. And the statistics show it works! Yay, passing rates are much
higher!
It’s so, so trivial to use statistics to mislead about the effectiveness of these programs, however, and most everyone outside of administration knows it—all
the grey-haired faculty simply chuckle behind the administrators’ backs, having
seen this stuff dozens of times over their careers. It takes nothing to pass more students; for example, one could just make “C” the lowest possible grade in the course.
The new English remedial program was
particularly amusing—first, the salesman tells us how 15% of remedial students
never take a college level writing course even after passing remedial writing.
Then, he tells us that under his new program, 0% of students fail to take a college writing course. How does he achieve it? The new program forces all remedial
students directly into college level writing courses. At least he knew that his
success wasn’t exactly surprising, given the requirements. Still, if success is
defined as taking a college level course, a program that forces the taking of a
college level course is guaranteed to be successful. It is, comparatively
speaking, one of the more honest plans for improved retention I’ve seen.
The math co-requisite program likewise wasn’t
above manipulation to show how successful it was, although at least it put some
effort into being misleading. As in almost all these programs, the students were self-selected; only those who accepted the warnings that it would require hard work and dedication could enter the co-requisite program. Sure enough, this
program showed greatly improved retention rates, 80% as opposed to the more
typical 50%.
Restrict my classes to only hard working,
dedicated students, and I’ll probably do quite a bit better than average, too,
no matter what special program I try. In twenty years of being subjected to
these pitches, I’ve yet to see a program where they divided self-selecting students like these into two groups, with the program used on only one group. That would go some way toward showing it’s the program, and not the self-selected students, that is making the difference. Why is this basic, rudimentary, obvious idea seldom, if
ever, implemented? I imagine it’s because legitimate studies that use such
controls don’t show any major improvements among hardworking students relative to
anything else (since the material at this level is learnable by most anyone who puts effort into it), and thus aren’t sold to administrators.
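To make the objection concrete, here is a minimal sketch in Python, with numbers I have invented purely for illustration: 10% of students are the hard-working, dedicated type who pass at 80%, everyone else passes at 50%, and the "program" contributes nothing at all. The self-selected group still posts the impressive pass rate entirely on its own.

```python
import random

random.seed(0)

# Invented numbers: 10% of students are the hard-working, dedicated type.
# Dedicated students pass at 80%; everyone else passes at 50%.
# The "program" itself contributes nothing.
students = [random.random() < 0.10 for _ in range(10_000)]  # True = dedicated

def passed(dedicated):
    # Pass probability depends only on the student, never on any program.
    return random.random() < (0.80 if dedicated else 0.50)

volunteers = [s for s in students if s]            # self-selected into the program
program_results = [passed(s) for s in volunteers]
typical_results = [passed(s) for s in students]    # an ordinary, unselected class

print("self-selected 'program' pass rate:", sum(program_results) / len(program_results))
print("ordinary class pass rate:         ", sum(typical_results) / len(typical_results))
# Roughly 0.80 versus 0.53: the same kind of gap the pitch credits to the program.
```

Split those same volunteers at random into a program section and an ordinary section, and you would finally be measuring the program instead of the students.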
Next, we were shown how the co-requisite math students, taught with a variety of poorly demonstrated interactive techniques and group projects, had much higher grades and passing rates. Group projects are a red flag that something is bogus about the coursework; there’s just no real learning in group projects, merely, at best, a theoretical division of skills, with students doing what they already know how to do. Nobody pays
attention to the new activities that supposedly help so much, because everyone
knows the ones running the program can give whatever grades they want and pass
everyone on a whim. Higher grades and passing rates are meaningless in a
controlled system with the person selling it giving out the grades.
Even
administrators and the folks selling the new plan know passing rates are easily
manipulated upward, so they back up the improved passing rates with an
important-sounding statistic. Every incoming student had to take a standardized
placement test, and the scores were recorded. The new students had an average
score on the test of 211. At the end of the course, they re-administered the
tests to the students that passed, and the students got an average score of
232. A 21 point improvement on a standardized test! The program works! Yay!
The pitchmen gloss over the fact that 250 is the placement test’s cutoff score for entering College Algebra, so all they’ve done is pass students out of College Algebra who, by that same test, still don’t qualify to take College Algebra despite having just passed it. Instead, the
improved scores are celebrated. Without any question from administration,
statistics like these are always taken as evidence, but a bit of critical
thinking reveals these numbers from a standardized test still don’t show
anything worth believing.
Much like with the self-selected students,
there’s no control group, no set of students outside the program to compare
test scores to. For all I know, a typical student who passes College Algebra shows a 30 point improvement on that placement test, or simply taking the test repeatedly causes a 30 point improvement on its own. In the latter case, numbers like the ones administration provided could be taken as evidence that the co-requisite
program is a massive disaster, with students worse off than if they’d never
entered the program. For some reason, administration never thinks to ask such a
basic question or administer this rudimentary control. It would cost a hundred
bucks or so to double-administer those tests to a class or two of students, but
I guess there’s hardly any money left over after the many thousands of dollars
spent implementing the program and pitching it to various states.
Even without a single control, there’s
another issue here. Only the students that pass the course are re-taking the
placement test, and yet it’s taken as evidence of improvement when the average
score goes up. The students that failed (or dropped) don’t retake the test, so
we don’t have their scores. Gee whiz, you take out all the F students, and the
average grade increases. Practically everyone on the stage selling this to us
has a Ph.D., a research degree, in Education, and none of them see this issue.
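For anyone who wants to see the arithmetic, here is another minimal sketch, again with numbers I have made up (the score spread, a small bump from simply retaking the test, and the assumption that the weaker half fails or drops): the 211-to-232 "improvement" falls right out with zero learning.

```python
import random, statistics

random.seed(0)

# Invented numbers: pretest scores centered at 211, a small bump from simply
# having seen the test before, and no actual learning anywhere.
pretest = [random.gauss(211, 20) for _ in range(1000)]
retest_bump = 5

# Suppose the weaker half fails or drops and never retakes the test;
# only the survivors get retested.
cutoff = statistics.median(pretest)
passers = [score for score in pretest if score > cutoff]
retest = [score + retest_bump for score in passers]

print("pretest average, everyone:    ", round(statistics.mean(pretest)))  # ~211
print("retest average, passers only: ", round(statistics.mean(retest)))   # ~232
# Drop the failing students, add a retest bump, and the headline improvement
# appears without anyone learning a thing.
```

A control group of ordinary students retaking the same placement test would tell us whether a 21 point bump means anything at all; nobody ran one.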
One of the pitchmen dares to complain that they’re still looking for better ways to gather statistics, but none of the faculty bother to offer any of the above suggestions about controls or accounting for the failing students.
An empowered faculty would point and laugh
at administrators using such poorly compiled statistics in a vacuum to justify
a major re-working of course programs. Instead, the faculty sit quietly and
take it, for the most part. One brave soul asks “How do you expect us to teach
the college material with the unprepared students sitting right there, doing
nothing?” The people giving the pitch mumble out a response that the answer to
the question will come later, but it never does. Similarly, a question is asked
about the failing students in the program, and we’re told that the only
students that failed in this program were the ones that didn’t want help,
disregarding how that conflicts with the self-selection process.
This program could affect every remedial
student, as well as the non-remedial students who get put in co-requisite
“college” courses along with the remedial students. In short, faculty are being strongly encouraged to follow a program that could affect a clear majority of college students, and the best argument given to us for buying into this plan, beyond the meaningless “better retention,” is a single feeble statistic from an uncontrolled experiment. Administration knows it can do whatever it wants
anyway, and just goes through the motions of convincing faculty that it works.
Someone with a legitimate research degree, to say nothing of a statistician, would be embarrassed to present this sort of isolated statistic as useful, and yet one of the people pitching the new plan proudly tells us she’s an actual mathematics professor, instead of saying that her doctorate is actually in Education. She gave no hint at all of her degree in the presentation or in any of the handout materials, as though she were embarrassed about it. I had to scramble around online to find that detail out. I found something else online, too. But I'll save that for the next post.