Sunday, June 9, 2013

An example of how the folks in charge plan to improve remediation


Ever Changing Remediation: A Specific Example

 

 

“Never attribute to malice that which can be explained by incompetence.”

--attributed to Napoleon

     An honest look at what’s going on in higher education today might lead one to believe that the folks running the show are deliberately trying to hurt people, or at least exploit them for profit. I’d like to present an example of how administrators respond to the disaster of remediation. Look below, and see a plan to “fix” higher education, and decide for yourself if malice or incompetence best describes the actions of our leaders.

 

 “We’ve been doing it wrong, focusing on getting remedial students to pass remedial courses. This new process puts remedial students directly into college classes, and passes them from there!”

--Administrator, explaining the latest amazing new idea

 

     Administrators are well aware that remedial students tend to fail college classes, and rather than take the path of integrity (stop loaning money to so many remedial students), they are always eager to try something, anything, to get these students to pass. Improved passing, improved retention, is the only definition of a good education an administrator can understand, all those unemployed but degree-holding people occupying Wall Street notwithstanding.

     There were six hours of meetings to introduce the latest method: “co-requisite” courses for remedial math and English. First, administrators present an array of statistics about how amazing the new idea is: at every level, “education” is ignored, and instead all improvement is measured in passing rates. Every statistic presented to us is about retention (the word administration uses for “passing”). After a flood of such data, impressive to administrators and irrelevant to helpless educators, we’re told of the successful new plan.

     What’s “co-requisite”? Instead of having students take remedial classes before taking college classes (for example, taking remedial math before being allowed to take College Algebra), the new plan is to have students take both courses, remedial and college level, at the same time. The remedial class is more like a lab, with teaching assistants giving more personal help. And the statistics show it works! Yay, passing rates are much higher!

     It’s trivial to make these programs look effective via statistics, and most everyone outside of administration knows it—all the grey-haired faculty simply chuckle behind the administrators’ backs, having seen this stuff dozens of times. It takes nothing to pass more students; for example, one could just make “C” the lowest possible grade in the course, so that everyone passes.

     The math co-requisite program wasn’t above manipulation to look successful. Students were self-selected; only those who accepted the warnings that it would require hard work and dedication could enter the co-requisite program. Sure enough, this program showed greatly improved retention rates: 80%, as opposed to the nationwide average of around 50%.

     Restrict my classes to only hard-working, dedicated students, and I’ll probably do quite a bit better than average, too. In twenty years of being subjected to these pitches, I’ve yet to see a program where they divided self-selecting students like these into two groups, with the program only being used on one group. That would go some way toward showing it’s the program, and not the self-selected students, that makes the difference. Why is this basic, rudimentary, obvious idea seldom, if ever, implemented? It’s because legitimate studies that use such controls don’t show any major improvements.
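     To see how far self-selection alone goes, here is a minimal sketch (every number in it is invented for illustration; none comes from the program being pitched): motivation alone decides whether a student passes, the “program” itself does nothing, and the group that volunteers still posts a much higher pass rate than the student body as a whole.

```python
import random

random.seed(0)

# Hypothetical illustration only: none of these numbers come from the
# program being pitched. Each student has a "motivation" level that alone
# determines the chance of passing; the co-requisite "program" adds nothing.
students = [random.random() for _ in range(10_000)]        # motivation in [0, 1)

def passes(motivation):
    """Pass/fail driven purely by motivation, not by any program."""
    return random.random() < 0.25 + 0.6 * motivation

everyone = [passes(m) for m in students]
self_selected = [passes(m) for m in students if m > 0.6]   # only the dedicated volunteer

print(f"overall pass rate:       {sum(everyone) / len(everyone):.0%}")
print(f"self-selected pass rate: {sum(self_selected) / len(self_selected):.0%}")
# The volunteers pass far more often even though the "program" did nothing;
# selection alone manufactures the gap.
```

Split those same volunteers into a program group and a control group and the gap would vanish, which is exactly why that comparison is never made.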

     Next, we were shown how the co-requisite math students, taught with a variety of poorly demonstrated interactive techniques and group projects, had much higher grades and passing rates. Group projects are a red flag that something is bogus about the coursework: there’s no learning in group projects, merely, at best, a division of skills among students doing what they already know how to do (a recent book, Academically Adrift, shows that the more time students spend in group work, the less they learn).

     The pitchmen back up the improved passing rates with an important-sounding statistic. Every incoming student had to take a standardized placement test. The new students had an average score on the test of 211. At the end of the course, they re-administered the test to the students who passed, and those students got an average score of 232. A 21-point improvement on a standardized test! The program works! Yay!

     The pitchmen gloss over that 250 is the placement test’s cutoff score for entering College Algebra, so all they’ve done is pass students out of College Algebra who still don’t qualify to place into it. Instead, the improved scores are celebrated.

     Much like with the self-selected students, there’s no control group, no set of students outside the program to compare test scores to. For all I know, a typical student who passes College Algebra gets a 30-point improvement on that placement test, or simply taking the test repeatedly causes a 30-point improvement. Either would make the co-requisite students worse off than if they’d never entered the program…no control group means there’s no way to know. It would cost a hundred bucks or so to double-administer those tests to a class or two of students, but I guess there’s hardly any money left over after the many thousands of dollars spent implementing the program and pitching it to various states.

     Even without a single control, there’s another issue here. Only the students who pass the course retake the placement test, and yet the rising average score is taken as evidence of improvement. The students who failed the course don’t retake the test, so we don’t have their scores. Take out all the F students and the average grade increases...and this is presented as legitimate improvement. Practically everyone on the stage selling this to us has a Ph.D., a research degree, in Education or Administration, and not one of them sees even this issue.
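     A few lines of arithmetic show how much of that 21-point gain the missing failers alone could manufacture. In this sketch (all scores are invented; only the 211 entry average is borrowed from the pitch), students learn nothing: the retest score is just the entry score plus random noise, and only the higher scorers get re-tested, a crude stand-in for “only the students who passed retake the test.”

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration only: every score here is invented.
# Each student "learns" nothing; the retest score is just the entry
# score plus random day-to-day noise.
entry = [random.gauss(211, 25) for _ in range(1_000)]
retest = [score + random.gauss(0, 25) for score in entry]

# Stand-in for "only the students who passed retake the test":
# assume passing correlates with doing well on the retest, so only
# the higher retest scores ever get reported.
survivors = [score for score in retest if score > 211]

print(f"entry average (everyone):   {statistics.mean(entry):.0f}")
print(f"retest average (survivors): {statistics.mean(survivors):.0f}")
# Dropping the low scorers pushes the reported average well above the
# entry average with zero actual learning -- the same arithmetic as
# removing the F students and announcing that the class average rose.
```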

     An empowered faculty would point and laugh at administrators using such poorly compiled statistics to justify a major re-working of programs. Instead, the faculty sit quietly and take it. One brave soul asks “How do you expect us to teach the college material with the unprepared students sitting right there, doing nothing?”…the people giving the pitch mumble out a response that the answer will come later, but it never does.

     Someone with a legitimate research degree would be embarrassed to present this isolated statistic, a single uncontrolled test score, as useful, and yet one of the pitchers of the new plan says she’s studied it and is sure it works. She proudly tells us she’s an actual mathematics professor, instead of mentioning that her doctorate is in Education. Administrators in suits just stood there as she “misled” us, though some of them had to know the truth. She gave no hint of her degree in the presentation or in any of the handout materials, as though she were embarrassed about it. I had to scramble around online to find that detail. I found something else online, too.

 

Faculty: “Did you see her students’ comments on Ratemyprofessors.com?”

Me: “Crap, I was going to tell you about it. I didn’t think anyone else goes there.”

Faculty: “I thought I’d do some investigation.”

--Faculty are not stupid; we know better than to take anything from administrators at face value. We also know that when administration wants us to “say something about” their new ideas, they only do so because it’s easier to cram things down our throats if we open our mouths first.

 

     My online search also found what students had to say about her, and indirectly the program, at Ratemyprofessors.com. Despite the professor’s presentation claiming great teaching skills, she has miserable ratings there. Now, I grant, only the most irate students would go to an online site and post their complaints, but after being forced to watch videos of testimonials from students about how life-changing—I’m not exaggerating!—the new program is, it’s fair to present testimonials biased in the other direction.

     She’s rated in several categories at Ratemyprofessors.com, but “easiness” is by far her best category. Granted, “easiness” isn’t on a collegiate evaluation-of-teaching form, but this website caters to student desires. An administration that was actually curious about challenging students--critical to education!--would have such a question on the evaluations, and it’s very telling that there is no such question on the official evaluations. Here are some abbreviated quotes; see if you can spot any red flags:

“Hardly a teacher at all. All she does is talk, talk, talk. The second day of class she taught us how to turn on our calculator. The tests don't make sense.”

“HORRIBLE!!! she is so rude and doesnt seem like she cares about anyone including her TAs. She acts as if she is better than everyone and her lectures are completely WORTHLESS. STAY AWAY!”

“A lot of people don't go to the lectures because they are extremely boring and attendence doesn't count but some test questions come straight from lecture. The labs are never fully coordinated. Tests are all about theory more than solving problems. Use the answer choices to solve test questions.”

“Her tests are always about concepts with no actual calculations of answers.”

“Her tests are all on principles and alghorithms. There were hardly any problems you had to actually solve”

“stay away from this teacher taking this class with her will be the biggest waste you will ever come across”

“Wholly education/grant-funded class. Does a horrible job explaining in class. Try to get a different professor unless you enjoy being taught like you're in middle school.”

     Not a single student gave a positive review, not one, and many professors at Ratemyprofessors.com have positive reviews. Looking too closely at negative student evaluations usually isn’t a good idea, but when the same specific complaints keep coming up over the course of years, that usually means there’s some truth to them. Complaints like this had to have turned up in the student evaluations administered at her school, but administration only looks at passing rates. In reviewing those quotes, a reasonable conclusion about her higher passing rates would be that she gives easy multiple-choice tests where you can get the answer merely by reading the questions; the program has nothing to do with it. All administration saw was higher retention; a complaint like “waste of time” is insignificant next to the only goal of merit to an administrator.

     After the pitch, the head pitchman tells us that when this program is tried at other institutions, retention rates vary from 25% to 77%; in other words, outside of controlled situations, the co-requisite plan doesn’t work. He asks that if we get high retention, we contact him and tell him what we did, so they can put that in their recommendations for the program. Some faculty drove hundreds of miles to be told how to implement this plan (sic) to improve retention, and the people pitching it already know it doesn’t work.

     Year after year, colleges cycle through one retention plan after another, as apparently clueless administrations buy into one piece of hokum after the next. Never is it considered that maybe you can’t teach people who don’t want to learn, and that simply not accepting such people in the first place would help far more than admitting anything with a pulse and then optimizing the best way to extract the maximum amount of money from the victims of this “higher education” scheme.

     The Cycle of Remediation:

1. College courses with failing students in them have “low” retention.

2. To improve retention, failing students are pre-identified and shuffled off to remediation.

3. To improve retention in remedial courses, various plans are tried, but for reasons administration cannot fathom, remedial students still fail more than non-remedial students.

4. To improve retention, remedial students are put back into college courses.

5. Return to step 1.

     A remedial course, years ago, was an important tool, helping a student who just had a few weaknesses preventing him from going into higher education. Administration has taken that tool and turned it into a weapon capable of channeling herds of suckers into “higher education,” where they are fleeced in the short term, paying way too much for courses that will benefit them far too little, and slaughtered later when the loan bills come due.

      So we’re back to the question: incompetence, or malice? Do the people running higher education know the evil they do, or not?

Think about it.

2 comments:

  1. I've wondered about that myself, I squarely cast my vote towards evil.

  2. I wish I could squarely cast it, myself. I'm sure some are pure evil, and I'm sure some are pure stupid. But mostly one or the other? Still a question for me, after decades of consideration.
