Oh, the professorial outrage at being objectively measured!

I’m cynically amused by the protests coming from professors at the University of Texas at Austin.

The University of Texas at Austin this week became one of the most prestigious research institutions to join a faculty rebellion against Academic Analytics, a data company that promises to identify low-performing professors.

UT-Austin’s Faculty Council voted on Monday to approve a resolution recommending that the university make no use of Academic Analytics, especially concerning promotions, tenure, salaries, curriculum, and other faculty issues.

As with previous faculty protests of the company at Georgetown and Rutgers Universities, UT-Austin faculty members cited concerns about the accuracy of Academic Analytics’ data, the lack of opportunities for professors to correct errors, and the inappropriateness of numerical rankings for making complex decisions about people and education.

There’s more at the link.

The protest sounds great, doesn’t it? – that is, until you look at what Academic Analytics actually does, the data it gathers, and how it enables administrators to examine every professor and lecturer in comparison with others in their field nationwide.  The company asserts:

The Academic Analytics Database (AAD) includes information on over 270,000 faculty members associated with more than 9,000 Ph.D. programs and 10,000 departments at more than 385 universities in the United States and abroad. These data are structured so that they can be used to enable comparisons at a discipline-by-discipline level as well as overall university performance. The data include the primary areas of scholarly research accomplishment…

. . .

Academic Analytics’ accurate and comprehensive database is accessible through our unique online Portal. The portal offers more than 40 different tables, charts, and data-cutting tools facilitating rapid answers to common questions that our data can help solve, as well as unique visualizations that provide an opportunity for the discovery of previously unrecognized data, trends, patterns, and centers of strength and weakness at your university.

Again, more at the link.

Gee!  You mean, professors can now have their actual job-related performance measured accurately in comparison to the requirements of their positions, and the performance of their peers in universities and colleges across America?  Why would they object to that, I wonder . . . NOT!!!

I think we all know why the academics are upset at being accurately assessed.  They can’t get away with “fudge factors” any more.  They might even have to stop fashionable, politically correct protests and other extracurricular activities, and get down to the business of teaching – which is, after all, the reason they were employed in the first place.  Clearly, as far as they’re concerned, accountability is very low on their list of priorities – but for people like you and me, forced to pay exorbitant fees to study under them, it’s rather more important.

In fact, why don’t we ask Academic Analytics to make available public profiles of every professor in their database?  It might help prospective students, and/or their parents, choose professors who best meet our needs – and boycott those who don’t, and the institutions that employ them.  Wouldn’t that just set the cat among the academic pigeons?

Peter

12 comments

  1. Student at an R1 institution here. I can tell you that the evaluation system is broken but this is NOT the way to fix it. In fact, a large part of the current problem is that quantitative metrics are more and more relied upon, and they are precisely the ones that do the worst job of evaluating teaching. Number of publications? Easy. Impact Factor? Sure. Grant money brought in? That's a big one. But credit hours taught does little to capture how useful their class is for those who take it, and course evaluations are just as likely to be tanked by snowflake students who are upset that they felt challenged. Even average grade is misleading. A course with an A average could be a "gimme" or one with an amazingly gifted teacher. Ditto for a C average that could be the result of a rambling, incoherent prof or one who challenges their students to engage tough material and grades honestly.

    If you like the current state of affairs then this option is the best way to continue the trend. Personally though, I'd rather these things be decided more by the judgement of one's peers and the administrators that are hired to do so. Sure, maybe they'll choose wrong but that's how capitalism works. Demand will shift and market forces will select for the best option. Sounds a heckuva lot better to me than some sort of "black box" quota system. Just my 0.02 though.

  2. tu (how Aggies…Texas A&M students and alumni…refer to that college in Austin) has long been a bastion of moonbattery. No surprise that they would reject objective data.

  3. It's difficult to rank qualitative work with quantitative data. I can understand a resistance to it. Also, having had a company come in and try to evaluate one of my former departments (I don't remember which company did the evaluation), it was quite obvious from their questions that they had no idea what to do with law enforcement.

  4. I dimly recall a service that tried to rank hospital care, and one of their metrics was patient deaths. Everyone in San Diego thought that was a wonderful way to perform a like-for-like comparison until someone pointed out that only a handful of hospitals had trauma units, and that an awful lot of patients were transported to them at death's door. It wasn't objective to call those hospitals less safe than hospitals that didn't have trauma centers or offer most surgical procedures, when the metric ignored that most "at risk" patients were routinely sent to only certain hospitals. It made me wary of other 'comparisons'.

  5. Actually, major universities are called 'research institutions' for a reason. They long ago gave up on teaching being their primary mission. Their primary mission is research, and the fundraising that goes with it. Teaching is merely a means to an end, a necessary evil, to far too many professors out there. It was that way 30 years ago when I was getting my engineering degree, and from researching colleges for my kids, it has only gotten worse.

    As far as metrics go, well, almost EVERY technical job has difficulty finding impartial metrics to measure success. The days of measuring job success by counting pieces of production are long past, and empowering, much less trusting, university management to make good subjective evaluations is completely against the concept of 'tenure'.

    I don't have a magic solution, but I certainly admire the problem…

  6. I think administrators are a much bigger problem on campus than professors. Administrators far outnumber the professors. They eat up enormous quantities of money without actually teaching anything or doing research. They also try to control what courses are taught and how they are taught, despite their typical ignorance of the topics. Read "The Fall of the Faculty" for good information on the topic.

  7. Similar programs are used in some European countries, but they measure scholarly publication more than teaching. All of the academic journals are ranked, so the professor gets a higher score for publishing in a "more prestigious" journal than in a less-prestigious one, regardless of the quality of the article or research.

  8. My Father was a college Professor, and was always astonished (and outraged) at the bone idleness of a significant proportion of any college staff. He knew tenured idiots who had not published as many ARTICLES as he had books, and who did everything they could to duck working as thesis advisors. But my Father was a scholar by avocation, and all too many of the staff of any college are merely 'intellectuals'; people who want the reputation of thoughtfulness and possibly wisdom without doing any actual thinking.

    That said, I suspect that this kind of metrics collection is simply an effort on the part of college administration to avoid actually making any decisions of their own. If they have an outside data set, they can abdicate their responsibility and simply do what it says.

    OTOH, they COULD simply measure the loudness of individual shouting and scheming, and fire the loudest 10%.

  9. 1) If instructors graded on a rigid scale, grade distribution would be valuable. On a sliding scale, it's meaningless. If everyone masters the material to >90%, why should there be a bell-shaped distribution of As, Bs, and so on? If no one does, there shouldn't be any As. Which is why grade curves were invented to begin with: participation trophies for the mentally incapable.
    2) Peer evals would be utterly worthless, unless those peer professors regularly audited each other's classes. Never happens in recorded history.
    3) What you want is student evals, weighted based on final grade (a rough sketch of such a tally follows this comment).
    Twenty bad evals from F students = 0 weight.
    One good eval from an A student = 5 pts.
    Then look at the tallies year over year, and you'd quickly note who was covering the material well, and who was merely ticking off the snowflakes and getting a rep as a tough nut.

    I was always an excellent student, and the hard but fair instructors were universally my favorites in both high school and university.
    The majority of them were mediocrities, as one would expect. And some should have been shoveling tar on a road crew.

    And I'd have the students rate them only upon graduation; they'd have the benefit of hindsight to disclose who prepared them best for subsequent classes (vs. the "easy A" types), and the drop-outs and ne'er-do-wells' comments would be self-selected out.

    Professors hate objective measurement for the same reason students do: no one likes to be evaluated. It's uncomfortably painful, and the truth would affect their livelihood.

    Of course, if students were actually footing the bill for an average $100/hr lecture (in a nominal $40k/yr degree program)*, their scrutiny would be a lot more valuable.

    *{$40K/yr gets you $20K/semester; figure a 4-class load = $5K/class; each class meets notionally 3x/wk for 16 weeks, so one class session is about 100 bucks a period. Obviously, state schools are less than private ones, but only because your education is being subsidized by everyone in the state who doesn't go, whether they like that or not. So, on average, you're paying your prof almost as much as the guy who fixes your plumbing or transmission, and for something you could have taught yourself better on the internet for free. The degree is worthwhile in many cases, certainly not all, but the coursework is a joke, and a horrible value by any intelligent and objective standard.}
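
    A rough sketch of that grade-weighted tally might look something like the Python below. The letter-grade weights (A = 5 ... F = 0), the good/bad flag per eval, and the record layout are purely illustrative assumptions drawn from the comment's examples, not anything Academic Analytics or any registrar actually uses:

    ```python
    # Illustrative sketch only: the grade weights, the good/bad flag, and the
    # (professor, year, grade, liked_it) record shape are assumptions, not any
    # real evaluation system's format.
    from collections import defaultdict

    GRADE_WEIGHT = {"A": 5, "B": 4, "C": 3, "D": 1, "F": 0}

    def eval_tally(evals):
        """Sum grade-weighted eval points per (professor, year).

        evals: iterable of (professor, year, grade, liked_it) tuples, where
        liked_it is True for a good eval and False for a bad one.
        """
        tally = defaultdict(int)
        for professor, year, grade, liked_it in evals:
            points = GRADE_WEIGHT.get(grade, 0) * (1 if liked_it else -1)
            tally[(professor, year)] += points
        return dict(tally)

    # Twenty bad evals from F students add nothing; one good eval from an
    # A student is worth 5 points.
    sample = [("Prof. X", 2016, "F", False)] * 20 + [("Prof. X", 2016, "A", True)]
    print(eval_tally(sample))  # {('Prof. X', 2016): 5}
    ```

    Comparing those per-year tallies over time would then show, as the commenter suggests, who holds up in hindsight and who was merely handing out easy As.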
