Teaching Excellence Framework (TEF): Informing Student Choice?
- 26 June 2017
Results from the first year of the TEF (the so-called TEF2) have recently been announced, and I think it’s fair to say that the traditional way of ranking universities has been turned on its head. Probably for the first time, teaching excellence is receiving the same attention as research excellence, as measured through the Research Excellence Framework (REF). This is a welcome focus for many universities, such as De Montfort, where we always place teaching excellence, social mobility and student choice at the centre of all our activities.
In this first iteration, the TEF judged university performance in teaching excellence through a combination of quantitative performance measures across six core metrics, which were used to form an initial hypothesis (Gold, Silver or Bronze). This was considered alongside a qualitative 15-page provider submission from each institution, which presented additional evidence to make the case for a higher rating or, in the case of a potential Gold, to further justify that outcome. The six core metrics are drawn from the National Student Survey (NSS), non-continuation data from the Higher Education Statistics Agency (HESA) and employability data from the Destination of Leavers from Higher Education (DLHE) survey. These data are then compared to what are called “benchmark metrics”, which define the expected outcome for each institution given the characteristics of its students, including factors such as age, ethnicity and subject of study. The core metrics are accompanied by split metrics, which look at variations in each of the six areas by characteristics such as gender, ethnicity, age and disability, to establish how students from different backgrounds fare on the various measures relative to their peers.
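The core idea of benchmarking can be illustrated with a short sketch. To be clear, the thresholds, metric names and flagging logic below are invented for demonstration purposes only; they are not the official TEF rules, which use statistical significance tests alongside materiality thresholds.

```python
# Illustrative sketch only: thresholds and flag logic are hypothetical,
# NOT the official TEF methodology.

def flag_metric(indicator: float, benchmark: float,
                threshold_pp: float = 2.0) -> str:
    """Compare an institution's observed metric value (a percentage)
    against its benchmark (the expected value given its student mix)
    and return a simple flag.
    """
    diff = indicator - benchmark
    if diff >= threshold_pp:
        return "+"   # materially above benchmark
    if diff <= -threshold_pp:
        return "-"   # materially below benchmark
    return "="       # broadly in line with benchmark

# Hypothetical (indicator, benchmark) pairs for one institution's
# six core metrics:
core_metrics = {
    "NSS: teaching":          (88.0, 85.5),
    "NSS: assessment":        (79.0, 81.5),
    "NSS: academic support":  (84.0, 83.0),
    "Continuation":           (91.0, 88.0),
    "Employment (DLHE)":      (94.0, 93.5),
    "Highly skilled (DLHE)":  (70.0, 73.0),
}

flags = {name: flag_metric(ind, bench)
         for name, (ind, bench) in core_metrics.items()}
```

The point of the sketch is that the same raw score can be a strength at one institution and a weakness at another, because each is judged against its own benchmark rather than against an absolute sector-wide figure.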
While some in the university sector queried the appropriateness of the metrics for measuring teaching excellence, it is important to note that these benchmarks have been produced by the Higher Education Funding Council for England (HEFCE), in the form of UK Performance Indicators (UKPIs), since the late 1990s. Indeed, two of HEFCE’s stated objectives have been to provide “benchmarks for use in institutions’ consideration of their own performance” and “the basis for comparisons between individual institutions of a similar nature, where appropriate”. Consequently, to a large extent, the TEF seems to be a formalisation of this existing process, utilising UKPIs that have existed for over 15 years. It is, of course, well recognised that these are “proxy” metrics that do not provide a direct measure of teaching excellence, which is one of the reasons it is important for the TEF panel to have the ability to nuance the metrics using the provider submission.
While the metrics, therefore, were based on factual data in the public domain, one of the main areas of uncertainty for the sector was the contribution of the 15-page provider submission to the final outcome, given the guidance from HEFCE that “the more clear-cut performance is against the core metrics, the less likely it is that the initial hypothesis will change in either direction in light of the further evidence”. As with Impact Case Studies in the last REF, this was a new form of self-assessment for many institutions, and there was a lack of consensus about how the submissions would be used. Interestingly, in the end the initial metric-driven hypothesis was changed by the panel in just over 20% of cases, with the vast majority resulting in upgraded final assessments, from either Bronze to Silver or Silver to Gold (one submission was upgraded from Bronze to Gold). It seems reassuring that the provider submission was indeed able to influence the final outcome, but I’m sure that the extent to which it contributed in some cases and perhaps not in others will be the topic of much ongoing debate.
Key to the TEF methodology is the concept that it provides information on how universities perform for the students they admit rather than more traditional absolute performance measures which, of course, are already in the public domain.
Some universities will argue that it would be better to look at absolute metrics as there may be unintended consequences. But surely that is the point.
For example, absolute metrics will, of course, benefit those universities that attract the most highly qualified school leavers, but they take no account of factors such as student background, which is key to widening participation. Importantly, the TEF aims to incentivise institutions to address inequity among different student groups, which is made visible through publication of the split metrics.
Of course, there is no commonly agreed definition of teaching excellence, and I’m sure the metrics will evolve over time. It is important, though, that the proxies used are routinely measured across the sector and are in the public domain, and that the limitations and risks of using such proxies are understood. Otherwise, the sector runs the risk of recreating the overly bureaucratic Teaching Quality Assessment system of the 1990s, which was found to be not fit for purpose partly because of the burdens it placed on universities.
In summary, my view is that the TEF provides a welcome focus on teaching excellence to balance the emphasis placed on research excellence, via the REF and its predecessors, for over 30 years, an emphasis that has resulted in anecdotal reports of teaching not being taken seriously at some institutions.
With the government commitment to reviewing the TEF and its methodology, we may find ourselves working with new or additional metrics.
Given the potential for institutions to return to the TEF annually (or at least much more frequently than the seven-year cycle of the peer-reviewed REF), the use of proxies seems to me a pragmatic way forward. Proxy measurement is used routinely in many other sectors, and we should take any opportunity to learn from them in order to flex and perfect this very welcome new emphasis on evaluating how we do the very thing that is at the heart of higher education: teaching.
Article by Professor Andy Collop, Deputy Vice-Chancellor, De Montfort University