Are student evaluations the best measure of teacher effectiveness?

By DONOVAN HARRELL

Provost Ann Cudd told members of Faculty Assembly on Oct. 30 that she was looking into how student evaluations are used, especially how they relate to the University’s promotion and tenure process.

This comes as the Educational Policies Committee decided in an Oct. 15 meeting to examine whether student evaluations of professors are an accurate, trustworthy measurement of teaching effectiveness. Research has found that such evaluations may hold inherent biases.

Cudd said she’s heard a lot of comments from faculty on the evaluations, administered through the Office of Measurement and Evaluation of Teaching, or OMET.

“In my view, evaluation of teaching should involve much more than just reporting OMET scores,” Cudd said. “So to that end, I'm considering changing our existing evaluation for promotion and tenure evaluation forms that ... to prompt other means of assessment such as holistic observation portfolios and other innovations in teaching like using adaptive learning ... or experiential learning.”

Some studies found that women and professors of color received lower ratings when compared to their white male counterparts.

Citing some of these findings, Senate President Chris Bonneau asked the Educational Policies Committee to examine the “problematic” evaluations.

“The problem is that they're not particularly valid,” Bonneau said. “So, if they're not valid measures of teaching, this raises two problems: One, it causes problems for these individuals when they're applying for jobs or coming up for promotion or tenure or anything else because it looks like they're not as effective teachers when we don't know that.”

He said the second problem is that the lack of validity also prevents accurate data from being gathered about which teaching methods work best, data that could help faculty improve.

Members of the Educational Policies Committee also discussed some of the other potential pitfalls involved with student evaluations.

Helen Petracchi, an associate professor with the School of Social Work, mentioned research suggesting that student satisfaction can be tied to their anticipated grade.

Another member agreed, adding that they were concerned that the evaluations could put pressure on professors to make their courses less rigorous.

Concerns about these biases in student evaluations have been raised in the past. An ad hoc committee in March 2017 handed the former provost a resolution, which, among other recommendations, asked that Pitt “move away” from using student evaluations to measure teaching effectiveness.

Student input still needed

Joseph J. McCarthy, vice provost for Undergraduate Studies and chancellor’s liaison to the committee, said he agreed that it may not be the best idea to use student evaluations for merit increases because of bias, but the “party line” for the Provost’s Office is that teaching still needs to be assessed.

“I would be personally staunchly against eliminating OMETs,” McCarthy said. “And the reason is one of the things we need to critically assess is student satisfaction. And I think the OMET does that. And students need to have a voice.”

The name of the system could be changed, he suggested, but at the end of the day, student satisfaction still needs to be measured.

Teaching effectiveness also isn’t uniformly measured across Pitt, McCarthy said, in part because of the wide array of subjects taught.

Each school and its departments approach these evaluations differently depending on the course and subject. Some adjust the questions to be more course-specific, and schools also can weight the importance of these evaluations differently.

Many methods to evaluate teaching

Cynthia Golden, director of the University Center for Teaching and Learning, said student evaluations are just one of many methods the center encourages Pitt faculty and administrators to use to evaluate teaching effectiveness.

Other methods include consultations, classroom observations, course reviews, peer evaluations and assessments, surveys and more.

“These surveys are just one part of an overall picture of a faculty member's teaching practice,” Golden said. “There are all kinds of ways for a school or a department to get input about somebody's teaching. And I want to say that the faculty take the surveys very seriously. And they frequently comment to us on how helpful the student comments can be.”

She said professors often use the information gathered from the evaluations to improve their future courses. Most faculty, she said, also value student comments.

And even though studies have shown inherent flaws in the evaluations, OMET tries to make sure faculty are aware of that research, along with other resources.

“While there may be bias in student ratings of teaching, we still think that they can provide valuable feedback,” Golden said. “I think having people aware of what these studies are, if you're aware of it, at least you can factor it into your thinking about the results.”

Dietrich School trying to spot biases

Administrators with the Dietrich School of Arts & Sciences, Pitt’s largest school, are well aware of the existing research on various forms of bias present in student evaluations and are taking steps to recognize and combat them.

There’s a core set of questions in the evaluations that the Dietrich School uses, and departments can ask for more specialized questions if necessary.

Rebecca Roadman, director of Special Projects and Initiatives, said that in summer 2017, the school began gathering and evaluating the responses in these surveys to help spot potentially biased questions.

By gathering and interpreting aggregate data on responses to certain questions, Kathleen M. Blee, dean of the Dietrich School, hopes to clamp down on bias and improve the questions in future evaluations.

“This is something we're really concerned about,” Blee said. “And now we don't just have to guess about whether the questions are biased. We can measure them. And we can adjust those questions going forward to try to work on reducing that bias.”

Blee said she also is examining other methods for evaluating teaching, including peer observation and focus groups, to supplement student evaluations.

“There's no one-size way of evaluating teaching, because teaching could be in a lab and could be performance space, you know, all kinds of places,” Blee said. “Teaching is a complex art, and we need to evaluate in a whole range of different ways.”

Donovan Harrell is a writer for the University Times. Reach him at dharrell@pitt.edu or 412-383-9905.