From faith to fact: measuring student equity initiatives

Written by Dr Tim Pitman, Senior Research Fellow

In 2013, the Federal Government provided Australian higher education providers with more than $80 million to improve their efforts to attract and retain disadvantaged students in tertiary studies.

Many initiatives are now underway and, whilst it is generally too early to tell whether they have been effective, universities are expected to measure the relative success or failure of their efforts.

It follows that the performance of these programs should be measured in ways that allow comparisons to be made across the sector, and that findings should be widely communicated.

What’s the point of getting something right (or wrong) if others can’t learn from it?

In the case of student equity in higher education, any program or initiative needs to measure, as best it can, its effect on one or more of the following outcomes:

1. The number of disadvantaged students aspiring to higher education (e.g. applications)
2. The number of disadvantaged students accessing higher education (e.g. offers and enrolments)
3. The number of disadvantaged students surviving higher education (i.e. retention rates), and
4. The number of disadvantaged students succeeding in higher education (i.e. success and completion rates)

There are many more indicators of success, of course, but all of them are dependent on, or result in, improvements in these four.
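To make these outcomes concrete, the sketch below shows one way a provider might compute such indicators from basic cohort counts. It is an illustrative example only, not an official or sector-agreed methodology, and every figure in it is hypothetical.

# Illustrative sketch only: one possible way to compute equity indicators
# from hypothetical cohort counts. Not an official methodology.

def rate(part: int, whole: int) -> float:
    """Return part/whole as a percentage, guarding against division by zero."""
    return 100.0 * part / whole if whole else 0.0

# Hypothetical figures for a cohort of students from a disadvantaged group.
applications = 1200        # 1. aspiration: applications received
enrolments = 800           # 2. access: offers accepted and enrolments made
retained_next_year = 680   # 3. retention: still enrolled one year later
completed = 560            # 4. success: completed their course

print(f"Access rate:     {rate(enrolments, applications):.1f}%")        # 66.7%
print(f"Retention rate:  {rate(retained_next_year, enrolments):.1f}%")  # 85.0%
print(f"Completion rate: {rate(completed, enrolments):.1f}%")           # 70.0%

Comparing such rates across the sector would, of course, also require consistent definitions of who counts as a commencing, continuing or completing student.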

But to what extent can universities measure causal relationships between a particular initiative and one or more of the above indicators?

This year, Aston University, Birmingham, and the Higher Education Academy, York, released a report into initiatives designed to improve the student experience. Most, if not all, of these initiatives have the potential to improve higher education outcomes for disadvantaged students. How the impact of these diverse programs was measured is illuminating, and it provides insight and direction for Australian higher education equity practitioners, both in what we should and what we shouldn’t be doing.

Of the 48 programs reported, a third found evidence of improved retention rates (outcome 3 above) and 29% found evidence of improved academic performance (outcome 4). The remainder, however, relied on less rigorous measures: increased participation (in the program itself, not necessarily in university) and positive feedback. Whilst participant involvement and opinion are important feedback mechanisms, they should not be the primary means of performance measurement.

When implementing programs to improve student equity outcomes in higher education, performance must be measured meaningfully, no matter how unique, risky or left-of-field the initiative is. It is for this reason that we at the National Centre will continue to work closely with government, researchers and practitioners to develop more comprehensive measurement tools across the sector, and to embed these principles in every equity initiative, however large or small.

Disclaimer: The views and opinions expressed in the articles published on this webpage/site are those of the authors and do not necessarily represent the policy or position of the National Centre for Student Equity in Higher Education (NCSEHE) or Curtin University. NCSEHE and Curtin University assume no liability for any content expressed on this webpage/site, nor do they warrant that the content and links are error- or virus-free.

Posted 28 October 2013 in Editorial, General