By Howard Lee
Media reported last week that the National University of Singapore has topped a list of universities in Asia, with Nanyang Technological University edging up three places to seventh in the Quacquarelli Symonds (QS) international rankings.
Those versed in the education sector, however, might take a different view of such rankings. In particular, the QS rankings present a rather different picture if one looks at the breakdown of the methodology used, or how the results of the study were arrived at.
The QS ranking measures universities based on a number of indices. These include academic reputation and employer reputation (both from global surveys), faculty-student ratio, citations per paper, papers per faculty, and the proportion of international and exchange students and faculty.
The single largest component of the QS score – a combination of 30% based on academic reputation and 10% on employer reputation – was centred on impressions. That is, the results of the survey were heavily reliant on how survey respondents perceive the universities in question.
Such perceptions are due in great part to the public impression a university enjoys, which can be a matter of positive reputation-building through word of mouth, or good advertising. In either case, such impressions might bear very little relationship to the actual learning experience students have at a particular university – the quality of education the university provides its students, its social influence, how and what types of jobs it places its students in, and so forth.
Most notably, the surveys can be completed by anyone, not just those best placed to evaluate the academic rigour of a school, such as lecturers and students. In fact, TOC tried taking part in the survey, and discovered that we were able to access the QS survey and complete it. As such, survey respondents would likely respond based on their familiarity with a particular university, without necessarily having an accurate understanding of the university they are voting on.
Another two indices of the QS survey, citations per paper and papers per faculty, each take up 15% of the total score. However, they are at best imprecise measures of the quality of education at the universities. This is because disciplines in the sciences, engineering and economics – fields traditionally entrenched in Singapore universities – usually involve researchers working in teams that include graduate students. These teams tend to publish quickly, more so than most social science and humanities disciplines. The number of publications says little about the influence of such publications in academia, much less their impact on teaching quality.
Moreover, the influence of papers usually takes a long time to become clear. Recognition for good-quality research – such as that which wins Nobel Prizes – usually comes only decades after publication. Measuring success by citations received says little about longer-term benefits, which would be a better indicator of education quality.
It is unclear how the proportion of international faculty and international students, which takes up 10% of the QS score, affects the quality of education and research. Nevertheless, it is reasonable to assume that the high levels of international students, exchange programmes, and international faculty at Singapore universities would give a boost to their standing in the rankings.
Faculty-student ratio, which accounts for 20% of the score, is important in terms of a university's ability to provide guidance and mentorship for students, as opposed to putting them in big lecture halls where academic guidance is minimal. However, what constitutes teaching staff is not clearly defined in the study. Would full-time research staff who do not teach – such as staff at various institutes at NUS like the East Asian Institute or South Asian Institute – count as teaching staff, too? If so, there may be some artificial inflation of the faculty-student ratio, but this cannot be ascertained from the results of the QS survey.
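Taken together, the indicator weights described above amount to a simple weighted sum. The sketch below illustrates how such a score would be combined – the indicator names and input values are hypothetical, and the weights are those reported in this article, not an official QS formula:

```python
# Illustrative sketch of a QS-style weighted score.
# Weights are as described in the article; indicator values are invented.
WEIGHTS = {
    "academic_reputation": 0.30,   # impression-based survey
    "employer_reputation": 0.10,   # impression-based survey
    "faculty_student_ratio": 0.20,
    "citations_per_paper": 0.15,
    "papers_per_faculty": 0.15,
    "international_mix": 0.10,     # international students and faculty
}

def qs_style_score(indicators: dict) -> float:
    """Combine normalised indicator scores (0-100) into one overall score."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# A hypothetical university with strong reputation scores but a
# weaker faculty-student ratio:
example = {
    "academic_reputation": 90.0,
    "employer_reputation": 85.0,
    "faculty_student_ratio": 60.0,
    "citations_per_paper": 70.0,
    "papers_per_faculty": 75.0,
    "international_mix": 95.0,
}
print(round(qs_style_score(example), 1))  # prints 78.8
```

Note that the two impression-based indicators alone carry 40% of the final score in this weighting, which is why reputation effects can dominate the ranking.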
In review, it probably does not serve our purpose to say that the QS ranking is or is not an accurate representation of the quality of learning at NUS and NTU. All surveys have gaps in their measurement indices, and it is worth taking a closer look at each published study to understand what it tries to measure, rather than accepting the rankings at face value.
The QS ranking, however, should be noted for its heavy dependence on impressions, which may be attributable to many factors unrelated to the actual reputation a university establishes with its faculty and students, past or present.
Much more can be achieved if we step away from rankings and focus on the aspects of learning that are a more accurate representation of teaching quality: Student-faculty ratios, the benefit of research to society, and employability.
These are the aspects of education that should matter to students when deciding on their educational institution of choice, and perhaps something that our local universities should be more focused on getting high scores in.
This article was produced in part through conversations with persons familiar with our local tertiary institutions. Top image – screen capture from ST Online website.