Pitt data analyst cuts through dense forest of University rankings

By SHANNON O. WELLS

Publicly shared university rankings data are plentiful, pervasive and interpreted in so many different ways that they often muddy the academic waters more than they clarify which institutions are best, worst or middle of the road in any given field.

After nearly six years of immersing herself in rankings-related minutiae, Pitt data analyst Chelsea Kluczkowski is all too aware of the phenomenon. “When it comes to rankings, there are a lot out there, and they can be very overwhelming,” she said in an April presentation to Pitt’s Communications Council.

Kluczkowski inherited her role from Amanda Brodish, whom she described as a “one-woman show” fielding data requests and time-sensitive projects for the provost’s office. When the data-analyst team expanded, primary responsibility for interpreting and organizing rankings fell to Kluczkowski.

“When I started five-and-a-half years ago, we decided that we really wanted a better understanding of university rankings and to take a more thorough approach to indicating the rankings,” she explained. “I am able to devote time into really getting to know the various rankings, the data behind them and the methodology that is utilized each year.”

With metrics and methodologies constantly in flux, Kluczkowski has worked on a variety of projects that “took a deep dive” into data to better understand how Pitt performs against its peers and look for distinct areas in which the University can improve. “Understanding the bigger picture is crucial to really grasping the importance of these rankings,” she said.

To determine which rankings Pitt should most closely track, the data team divided them into four categories: National, International, Best Value and Field/Subject. From there, they compiled a list of rankings released within each category, identifying those the team believes are worth tracking based on the methodology used and Pitt’s overall mission.

National rankings for Pitt are drawn from seven entities and media outlets:

  • Center for Measuring University Performance

  • Forbes

  • Princeton Review

  • U-Multirank

  • U.S. News & World Report Best National Universities

  • Washington Monthly National Universities Rankings

  • Wall Street Journal/Times Higher Education College Rankings

International rankings are drawn from organizations including:

  • Center for World University Rankings World's Top Universities

  • Moscow International University Ranking

  • QS World University Rankings

  • Shanghai Ranking's Academic Ranking of World Universities

  • Times Higher Education World University Rankings

  • University Ranking by Academic Performance World Rankings

  • U.S. News & World Report Best Global Universities

Once the desired rankings data are collected, Kluczkowski’s office strives to share them across campus as soon as release embargoes allow and to update the rankings on the provost’s website.

While rankings are released throughout the year, prime time is August and September. “So if you're on my distribution list for the rankings, I apologize,” Kluczkowski quipped to Communications Council members. “I think you get an email from me probably every day in August and September.”

Information shared includes historical data, overall rankings and the metrics used to determine them. The reports also include Pitt’s performance benchmarked against peer institutions, analysis of the metrics utilized and an overview of the methodology, she said. Changes made from previous years to data sources or metric weights are highlighted.

Organizations gather data on key metrics from public sources such as the College Scorecard and the Integrated Postsecondary Education Data System, along with data submissions from universities and surveys such as the QS Employer Survey and U.S. News & World Report’s Peer Assessment Survey.

“It's important to note here that data submissions are not typically audited and require universities to ‘operationalize’ the data,” Kluczkowski said. “And this is where colleges and universities have been known to skew their data.”

That is, institutions may interpret the same data points in ways that further their own ranking objectives. At Pitt, the Institutional Research department submits data for institutional rankings, while individual schools submit their own data for specific program or school rankings.

“I think the best way to interpret the questions and respond to the data submissions is to take a step back and ask, ‘If this request was coming across my desk from the provost, how would I respond?’ ” Kluczkowski told the University Times. “Setting aside that the question is being used to determine the institution's rank helps to keep the reporting honest.”

Best national universities

Each organization produces a score that is used to generate a university’s rank. While most entities rank universities numerically, some, such as the Center for Measuring University Performance and Princeton Review’s Best Value Colleges, simply provide lists.

Kluczkowski calls U.S. News & World Report’s Best National Universities, established in 2003, “probably the most widely known rankings” Pitt reports on. The rankings draw on data submissions from schools and peer assessment surveys, in which presidents, provosts and deans are asked to rate undergraduate academic programs on a scale of 1 (marginal) to 5 (distinguished). U.S. News then averages the two most recent years of peer assessment survey results for its rankings. Best National Universities relies on 17 metrics tied to student outcomes and academic quality, including:

  • Graduation and retention rates (22 percent)

  • Graduation rate performance (8 percent)

  • Alumni giving (3 percent)

  • Graduate indebtedness (5 percent)

  • Undergraduate academic reputation (20 percent)

  • Faculty resources (20 percent)

Based on weighted normalized values across these metrics, the top performer receives an overall score of 100. Other universities are ranked in descending order of their weighted overall scores. Those outside the top 75 percent are placed in “bands” rather than given an individual rank, e.g., 90 to 120 instead of, say, a rank of 93.
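To make that arithmetic concrete, here is a minimal sketch in Python of how a “normalize, weight and rescale the top score to 100” calculation can work, with banding for lower-ranked schools. The metric names, weights, data and band cutoff are illustrative assumptions, not U.S. News’s actual formula or figures.

```python
# Illustrative sketch of a weighted, normalized ranking score.
# Not U.S. News's actual formula; metrics, weights and data are hypothetical.

def normalize(values):
    """Rescale a metric so the best value maps to 1.0 and the worst to 0.0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

def rank_universities(data, weights, band_cutoff=0.75):
    """data: {school: {metric: value}}; weights: {metric: weight}."""
    schools = list(data)
    # Normalize each metric across schools, then take the weighted sum.
    normed = {m: dict(zip(schools, normalize([data[s][m] for s in schools])))
              for m in weights}
    raw = {s: sum(weights[m] * normed[m][s] for m in weights) for s in schools}
    top = max(raw.values())
    scores = {s: 100 * raw[s] / top for s in schools}  # top performer gets 100
    ordered = sorted(schools, key=scores.get, reverse=True)
    cutoff = int(len(ordered) * band_cutoff)  # schools past this point get a band
    results = {}
    for i, s in enumerate(ordered, start=1):
        results[s] = i if i <= cutoff else f"band {cutoff + 1}-{len(ordered)}"
    return scores, results

# Hypothetical example using two of the metric weights quoted above.
weights = {"grad_retention": 0.22, "faculty_resources": 0.20}
data = {"A": {"grad_retention": 95, "faculty_resources": 80},
        "B": {"grad_retention": 88, "faculty_resources": 85},
        "C": {"grad_retention": 70, "faculty_resources": 60}}
print(rank_universities(data, weights))
```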

Best global universities

Established in 2015, U.S. News’s Best Global Universities rankings are based on analytical-services firm Clarivate’s Academic Reputation Survey, along with Web of Science and Essential Science Indicators data. The rankings aim to assess academic research performance and to evaluate global and regional reputations. The metrics used include:

  • Global and regional research reputation (25 percent)

  • Scientific excellence (10 percent)

  • Bibliometric indicators (65 percent)

An invitation-only survey is sent to academics from Clarivate’s database of published research, Kluczkowski said, with respondents providing their views on programs and specific disciplines. “This allows our respondents to rank universities in the field and department level instead of at the institutional level, like the survey in the Best National Universities rankings. So, it's very different,” she said.

The overall Best Global Universities score is the sum of weighted normalized values across 13 metrics. The top performer receives an overall global score of 100, and the remaining universities are ranked from 2 to 1,750 based on their weighted, rescaled overall global scores. Unlike the banded national rankings, each globally ranked university receives an individual rank.

Graduate school rankings

U.S. News’s Graduate School Rankings, established in 2012, assess professional school programs in business, education, engineering, law, medicine and nursing, along with specialties in each area. Data sources include submissions from schools and another academic reputation survey, in which deans, program directors and senior faculty rate the academic quality of programs in their field on a 1-to-5 scale. “Whenever the people get the survey, they aren’t told what school they are getting, so there is no bias in the survey,” Kluczkowski noted.

The graduate school rankings aim to assess the qualities that students and faculty bring to the educational experience and to evaluate graduates’ achievements linked to their degrees. The number of metrics varies by school and program, with the top performer receiving an overall score of 100.

Limitations

Emphasizing the limitations of rankings data and assessments, Kluczkowski said, “As you can see, with just those three (categories), no two organizations look at exactly the same metrics or utilize the same data sources.”

Using different metric weights or processing (“tumbling”) the data differently, such as taking a three-year average of graduation rates versus only the most recent year, reflects the inherent limitations of university rankings.
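As a toy illustration of that point, the snippet below uses invented graduation-rate figures to show how the same history yields different numbers depending on whether an organization uses only the most recent year or a three-year average.

```python
# Invented graduation-rate history; real rankings pull such figures from
# sources like IPEDS, but the values here are purely illustrative.
grad_rates = {2021: 0.74, 2022: 0.84, 2023: 0.81}

most_recent = grad_rates[max(grad_rates)]                    # 0.81
three_year_avg = sum(grad_rates.values()) / len(grad_rates)  # ~0.797

print(f"Most recent year:   {most_recent:.3f}")
print(f"Three-year average: {three_year_avg:.3f}")
```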

“Each organization has a different objective for producing the rankings,” Kluczkowski said. “So, you cannot compare performance across rankings, such as you can't compare the top 50 in U.S. News with the top 50 in The Wall Street Journal Times Higher Education rankings.”

Changing methodologies also can throw off historical trends, she added, especially in the wake of the COVID-19 pandemic. U.S. News & World Report, for example, has added graduate indebtedness and social mobility metrics to its rankings criteria in recent years and changed metric weights to reflect changing priorities. “It's very hard to look longitudinally at ranking sometimes,” Kluczkowski said.

When it comes to interpreting and using rankings data from her office, Kluczkowski stressed that Pitt departments should:

  • Understand the limitations of the rankings, recognize the driving force behind the rankings and maintain their integrity.

  • Be careful how rankings are used for marketing purposes.

  • Avoid sharing embargoed pre-release information, which can contain erroneous or incomplete data.

  • Reach out with questions if help is needed in understanding the rankings.

Responding to a question about what makes her “roll her eyes the most” regarding the use or misuse of rankings, Kluczkowski said it’s typically when universities choose to talk about one favorable ranking alone.

“They might be the top five of something, but then they forget all the others, especially if they don't do well in them. I think it's important to point out those that you are doing well in and improving in the rankings, and also those where you are not doing so well and may be falling,” she said. “I think it's important to talk about both the good and the bad. Just to bring awareness on the bad so that schools and faculty and students — everyone knows that we're not perfect — we might need to work on a few different things.”

Shannon O. Wells is a writer for the University Times. Reach him at shannonw@pitt.edu.

 
