Measuring Universities: Once More Unto the Breach

Making change sometimes involves an elaborate public discourse and the preparation of affected stakeholders, and in Ontario the discourse is aimed at getting people in the public sector to do more with less. The latest target was drawn to my attention by Alex Usher’s Higher Education Strategy Associates (HESA) morning bulletin, which featured the preliminary report by the Higher Education Quality Council of Ontario (HEQCO) on The Productivity of the Ontario Public Postsecondary System.

To summarize: “Ontario universities have received increased absolute levels of funding and funding per student since 2002. Nonetheless, they are teaching more students per full‐time faculty member with less money per student than all other Canadian provinces. They also lead Canada in research profile and output. A pilot
study of four institutions suggests that full‐time faculty teach approximately three and one half courses over two semesters. On average, faculty who are not research intensive, as defined by the universities themselves, teach a little less than a semester course more than those who are research active.”

So it would appear that the post-secondary sector in Ontario is doing more with less and is “already quite productive” but naturally we can do better, so: “further critical information is required to better assess productivity
and identify the most promising steps for improvement, including: measurement
of the quality of education, especially whether desired learning outcomes are
achieved; better information on graduation rates; more input from employers on
their satisfaction with the knowledge and skill sets of postsecondary
graduates; more detailed measurement of relevant information in the college
sector, both within Ontario and across Canada; and greater detail on the
workloads of university faculty.”

The workload issue is going to be a big one, especially given that one of
the tables highlighted by Alex Usher – Table 9 – attempts to compare faculty
workloads at four institutions (Guelph, Queen’s, Laurier and York) according to
whether they are in science or social science and humanities and whether they
had research output (defined as a grant or publication in the 2010-2011
period).  What they found was not particularly illuminating to me, though Alex Usher felt it raised the question of what all the non-research-active professors were doing. (He also asks why Quebec gets 39 percent of research council grants – that is a more interesting question, but I'll let Alex Usher try to answer it at some future date.)

The Ontario government would like to generate even greater future
productivity increases by changing “the design of the Ontario postsecondary
system and how it is funded. For individual institutions, the greatest
productivity opportunities may lie in greater flexibility in the distribution
and deployment of their faculty resources, particularly in the distribution of
workloads of individual faculty taking into account their relative
contributions to teaching and research.”
I think the Ontario government would like to see an increase in teaching loads, but in a redistributive fashion so as not to harm its lead in research productivity.
That is: it would like universities to raise the loads on “non-productive”
faculty and either maintain or perhaps even lower them a bit on “productive”
faculty.  I am also assuming that
means more than simply saying something like all research in Ontario is going
to be done at the University of Toronto and everyone else must teach more.

However, before you can do anything, I think you seriously need to beef up the data, given what I think the ultimate purposes of the Ontario government seem to be. You cannot just compare the average teaching loads in science and in the humanities and social sciences across four universities. True, they say this is preliminary, but the fact is these preliminary tables, figures and statements have a way of sticking around in popular memory and then becoming stylized facts that generate “policy changes”. You need to do much better than that.

So, my points specifically with respect to universities and the
measurement of productivity:

  1. You need to have data on teaching and research on
    all the Ontario universities and not just a few.
  2. You need to have data on teaching and research
    across all disciplines and faculties at the university and not just a few.
  3. You need to measure faculty research output for
    more than just the last year  – why
    not the last seven as SSHRC does?
  4. You need some type of standardized measure of research output by institution/discipline/field, as well as its impact via citations. (Ask Alex Usher about this. I’m sure he will recommend something based on Google Scholar. The report uses HESA H-index numbers to compare Ontario universities to other provinces, so I’m sure it can be done.) A rough sketch of how an H-index is computed appears after this list.
  5. Research grants and funds for sponsored research
    are not research outputs.  They are
    inputs.  Over the long term they
    are indeed correlated with research success in those fields that require funding to
    accomplish research but they are not an output.  They are increasingly being considered an output measure because
    universities are desperate for cash and generate revenue by taxing grants and
    contracts.
  6. Simply comparing course loads per faculty member is misleading. You will need to incorporate some measure of class size as well: a half-course difference in teaching loads can be significant if the class has 300 students. (See the second sketch after this list.)
  7. Workload is not just research and teaching anymore.  Just ask anyone who has gone through a “provincially mandated” quality assurance review in order to justify quality.
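
On point 4: the H-index that the report borrows from HESA is one example of a standardized citation-impact measure, and it is easy to compute once you have per-paper citation counts. Here is a minimal sketch in Python; the citation counts are invented for illustration and the function is my own, not anything taken from the HEQCO report or HESA.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has at least rank citations
        else:
            break
    return h

# Hypothetical researcher (or department) with six papers:
print(h_index([42, 18, 7, 5, 3, 1]))  # 4: four papers have at least 4 citations each
```

The same calculation can be rolled up over a department, institution or province, which is presumably how the comparisons across institutions and provinces are built.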
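
On point 6: the simplest way to see why raw course counts mislead is to weight each course by its enrolment. This is only an illustrative sketch under assumptions of my own; the enrolment numbers and the square-root weighting are made up and are not drawn from the HEQCO report or any actual workload formula.

```python
import math

def weighted_load(enrolments, reference_size=50):
    """Teaching load where each course's weight grows with its enrolment.
    The square-root scale is purely illustrative: a 300-student course
    counts for more than a 30-student course, but not ten times more."""
    return sum(math.sqrt(n / reference_size) for n in enrolments)

# Hypothetical comparison:
prof_a = [300, 250, 200]   # three large courses: raw count 3.0
prof_b = [40, 30, 25, 20]  # four small courses: raw count 4.0

print(round(weighted_load(prof_a), 2))  # 6.69
print(round(weighted_load(prof_b), 2))  # 3.01
```

By this yardstick the professor with the lower course count carries the heavier load, which is exactly the half-course-versus-300-students point above.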

And remember, in the university sector, the assorted bureaucratic infrastructure for measuring productivity in itself often involves increasing faculty members’ workloads, so we can kill two birds with one stone! Now that’s productivity! Enjoy!

8 comments

  1. Sean Hunt

    While the workload of teaching a small versus a big course can vary, I’m not sure that institutions make that much distinction themselves. I know that my institution doesn’t; a professor is expected to teach some number of courses per year, and the size isn’t really relevant (but the large first- and second-year classes are almost invariably taught by lecturers while the professors teach the smaller third- and fourth-year ones).

  2. Brett Reynolds

    Ontario colleges use a Standard Workload Formula, which takes into account class sizes, marking type (in process, automated, essay), and whether you’ve taught the course before/recently.

  3. Thomas

    We’ve just been through the research assessment process in New Zealand. This is only an assessment of research; the government hands out money for teaching based on student numbers (for domestic students — international students pay fees), and institutions decide internally how to allocate funding and workload for their employees.
    The research portfolio (based on six years’ work) has three categories. The major one, worth about 70%, is ‘Research Outputs’ (4 nominated research outputs to be assessed for quality, plus up to 30 more to demonstrate ‘a platform of research’). These have to be outputs: articles, conference papers, reports, patents, software, etc., etc. The two minor categories, worth about 15% each, are (research-based) ‘Peer Esteem’ and ‘Contributions to the Research Environment’. That’s where you put refereeing, being on conference committees, supervising research students, getting grants, prizes, invited presentations, and a partridge in a pear tree (provided it’s a research-based partridge).
    The whole thing is then assessed by a panel assigned to a collection of related disciplines adding up to 400-800 portfolios per panel. Each portfolio is rated independently by two panel members to give two initial scores. They then agree on a second-round score. Finally, the whole panel discusses the portfolios and assigns the definitive scores at a very long meeting.
    I think the assessment of research is pretty good (disclaimer: I was on one of the panels, so I’m biased and there are lots of things I can’t say). But the process is very expensive — both in preparing the portfolios and in assessing them. Also, there’s a good case that the funding formula gives relatively too much to the top researchers, and the process doesn’t even attempt to get a fair allocation of resources to individuals, just to larger units.

  4. Livio Di Matteo

    Thomas, thanks for sharing those insights. It sounds like a very methodical and comprehensive effort to measure research productivity. One question: how is joint authorship on publications handled in terms of weighting? For example, is the publication weighted equally across all the authors?

  5. Nick Rowe

    Livio: someone ought to do a post on that question one day. (Not me!) The only right answer is 1/number of authors. Any other answer creates perverse incentives as well as adverse selection.

  6. Thomas

    Livio: how is joint authorship on publications handled in terms of weighting?
    The assessment is supposed to be primarily of research quality, not quantity. For the four nominated research outputs the researcher writes explanatory comments. These tend to include something about who was responsible for what parts of the research. At least for collaborations within New Zealand, the fact that all the portfolios go to the same panel ensures a certain level of internal consistency in the claims — you’re unlikely to get two people both saying they did nearly all the work for a particular paper.
    There is no prescribed method for deciding how to handle joint authorship in the other 30 research outputs; it’s left up to the panel, who at least understand the norms for publication rates, co-authorship, and author order in their disciplines. I don’t think there is a universal answer to this question: Nick’s 1/number of authors also creates perverse incentives, just different ones.
    This approach doesn’t scale well: you’d need to narrow the scope of each panel to make it feasible for Canada, let alone the USA, and so you’d lose the cross-discipline comparisons that help precisely where bibliometric approaches do very badly.

  7. Chris J

    @Nick, so a physicist who spent three years of his life away from home at the LHC gets 1/3000th of a paper per publication? My last paper had ~40 authors and I am on a relatively small project.

  8. Joseph

    The number-of-authors issue is intractable for between-field comparisons without some sort of norming approach. Entire fields (like Genetic Epidemiology or High energy physics) would be rendered completely unproductive by a 1/n authorship rule. It is true that the current system of counting papers disadvantages fields with high barriers to publication (e.g. Economics) and is very favorable to fields with low barriers (e.g. Electrical engineering).
    My guess at the best compromise would be to rank people relative to field and level, presuming that the median member of any academic field is trying to be productive. The problem here is entirely people trying to get themselves classed into a lower paper-count field using items from a higher-count field. My guess is that judgement could handle this but that no fully automated approach is immune to “gaming”.
