
What’s in a poll?

By Lim Jialiang

One common narrative we have heard repeatedly from those in the pro-banning camp is that the ban is in line with the “dominant social norms” or “community values” that Singaporeans hold. Minister Yaacob Ibrahim said that the “prevailing norms, which the overwhelming majority of Singaporeans accept, support teaching children about conventional families but not about alternative, non-traditional families, which is what the books in question are about.”

When pressed for substantiation, proponents most commonly claim that polling data supports this observation. Sociologist Tan Ern Ser said that “Recent surveys have shown that the majority of Singaporeans disapprove of same-sex relations.” However, these surveys are always mentioned in passing and never cited in detail. This post is an attempt to critically examine the claim.

The most recent example is a working paper dated June, written by Mathew Mathews, Mohammad Khamsya Bin Khidzer, and Teo Kay Key from the Institute of Policy Studies (IPS). At a quick glance, the survey looks legitimate: it has a reasonable sample size, and it has tried its best to be well represented in terms of age, race, and religion. It was also conducted by “a reputable marketing company.” Despite these markers of legitimacy, however, the survey is riddled with bad phrasing, imbalanced response scales, biased questions and a lack of nuance, all of which have the potential to skew results.

Bad Phrasing

[Image: excerpt of the survey question on religiosity, from section 4]

Questions in this survey are often posed without a definitional basis. For example, section 4 asks whether “increasing religiosity among religious groups would harm religious harmony.” What exactly is used here to quantify or define “religiosity”? Is it a performative element, where people spend more time on religious activities? Or is it one of fundamentalism, where people become more literalist and stricter in how they practise their religion? These are very different things, and require further explanation on the part of the surveyor.

Moreover, it is the respondents, not the surveyor, who should determine whether they feel there is increasing “religiosity”. The question is also double-barrelled: there are two things to answer here, “increasing religiosity” and “harm religious harmony”, yet the respondent can only give a single affirmative or negative. You could well agree that there is increasing religiosity, but feel that it would not harm religious harmony. You could also feel that there is no increasing religiosity, rendering the latter question irrelevant. This should instead have been a two-step question:

  1. Do you feel that there is increasing religiosity in Singapore? (Y/N)
  2. If yes, would this harm religious harmony?
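The branching the author proposes can be sketched as simple survey logic, in which the follow-up is only recorded when the respondent themselves affirms the premise. The function and field names below are illustrative, not from the paper:

```python
# Sketch of the proposed two-step question: the follow-up is only
# asked when the respondent affirms the premise in step one.
def ask_two_step(feels_increasing_religiosity, harms_harmony=None):
    """Record one respondent's answers to the two-step question."""
    answers = {"increasing_religiosity": feels_increasing_religiosity}
    if feels_increasing_religiosity:
        # Only now does the follow-up make sense to ask.
        answers["harms_religious_harmony"] = harms_harmony
    return answers

# A respondent who sees rising religiosity but no threat to harmony:
print(ask_two_step(True, harms_harmony=False))
# A respondent who sees no rise: the follow-up is never asked.
print(ask_two_step(False))
```

Separating the two steps means the surveyor never has to guess which half of the double-barrelled question a single answer was aimed at.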

Imbalanced Response Scale

Response categories have also been lumped together, creating a false impression of homogeneity. There is a significant difference between someone who “strongly disagrees” and someone who merely “disagrees”, yet the two have been merged into one category in the presentation of the survey results. Surveyors should also balance the scale with an equal number of positive and negative responses so as not to skew the results; in survey-speak, this is having good symmetry in responses. The lack of a neutral option creates a “forced choice” situation, where the respondent must pick a side. There is nothing wrong with this, especially for controversial and difficult questions where neutral might be the easy way out. However, having three positive options and two negative options, as here, prejudices respondents towards selecting the affirmative over the negative.
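A toy calculation makes the skew concrete (the labels below are hypothetical, not the paper's exact wording): if respondents with no real preference simply spread evenly across the five labels, the 3-positive/2-negative layout alone manufactures a “positive” majority.

```python
# Hypothetical 5-point scale with three positive and two negative
# labels (illustrative only; not the IPS survey's exact wording).
options = {
    "strongly agree": "positive",
    "agree": "positive",
    "somewhat agree": "positive",
    "disagree": "negative",
    "strongly disagree": "negative",
}

# 1,000 indifferent respondents spread evenly: 200 per option.
per_option = 1000 // len(options)

tally = {"positive": 0, "negative": 0}
for side in options.values():
    tally[side] += per_option

print(tally)  # {'positive': 600, 'negative': 400}
```

Even with perfectly indifferent respondents, the imbalance alone yields a 60–40 “agreement” headline.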

[Image: excerpt of the survey’s response scale]
Moreover, the scale used in the survey is very vague, with terms like “somewhat” or “quite”. “Quite”, for example, appears to be used as the middle option in the image above, but linguistically it denotes a greater intensity of feeling: describing a person as “angry” and as “quite angry” are two entirely different things. This confusion might have been avoided had the scale been presented numerically, where 1 is “not affected at all” and 5 is “very affected”; presented in text, as it was, it is confusing indeed.

Biased Questions

[Image: table of responses on the “wrongness” of various acts]

The table that supposedly contained the smoking gun on how conservative Singaporeans are is in fact the most flawed segment of the entire survey. Firstly, the non-neutral and morally laden language of “wrongness” prejudices respondents, inclining them to give socially desirable answers rather than appear to legitimise morally undesirable acts. This section of the survey also narrows their choices down to just three options:

  1. Not wrong most of the time or not wrong at all
  2. Only wrong sometimes
  3. Always wrong or almost always wrong

(Most Singaporeans conservative: IPS survey)

The respondents, in this situation, weren’t even allowed to choose an option that the acts presented above were NOT wrong! Moreover, the Working Paper only chooses to show the results when respondents felt that the acts above were “always wrong or almost always wrong.”

The exclusion of data, flawed as it may be, is most apparent where this response was selected by only a minority. This section of the survey, with its culled data, poor phrasing and moralistic overtones, obviously shows Singaporeans to be highly “morally conservative.” After all, it was designed to prejudice respondents into thinking that way, almost as if they were working through a checklist of sins. You could not be blamed for concluding that all the LGBT community seems to do is have sex, ask for gay marriage, and try to adopt babies.
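A quick sketch with invented numbers (not the IPS data) shows how reporting only the top bucket can flatten a genuinely mixed distribution, assuming for illustration a finer five-point underlying scale collapsed into the paper's three reported buckets:

```python
# Invented response counts for one act, for illustration only.
raw = {
    "not wrong at all": 150,
    "not wrong most of the time": 100,
    "only wrong sometimes": 250,
    "almost always wrong": 200,
    "always wrong": 300,
}
total = sum(raw.values())  # 1,000 respondents

# The Working Paper's three collapsed buckets.
collapsed = {
    "not wrong most of the time or not wrong at all":
        raw["not wrong at all"] + raw["not wrong most of the time"],
    "only wrong sometimes": raw["only wrong sometimes"],
    "always wrong or almost always wrong":
        raw["almost always wrong"] + raw["always wrong"],
}

# Reporting only the last bucket yields a flat "50% say always or
# almost always wrong" headline, hiding that a quarter of this
# (invented) sample did not consider the act wrong at all.
share = collapsed["always wrong or almost always wrong"] / total
print(f"{share:.0%}")  # 50%
```

The headline figure is arithmetically true, but the distribution behind it disappears from the report.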

Lacking Nuance

As a result, this is hardly a survey on attitudes towards the LGBT community in Singapore, and its lazy questioning lacks nuance. A better way to measure approval would be to ask respondents to consider scenarios in which they would interact with a member of the LGBT community, rather than the current fixation on sex. On a scale of 1–5, where 1 is “strongly disagree” and 5 is “strongly agree”, such questions could look like this:

  1. If a friend of yours were gay, you would remain friends with him.
  2. You would support a gay president.
  3. Being gay should be made a criminal offence.

Casting homosexuality as a checklist of sins and focusing on sex, as the survey has done, is inadequate for understanding how individuals negotiate the idea of dealing with a member of the LGBT community in real life. There are also undertones of the “homosexual lifestyle” when surveys focus only on sex, something all researchers should avoid: the LGBT community is similar to everyone else in that sex is but a part of their lives, and it shouldn’t be the only focus.

A distinction should also be drawn between disapproval of homosexuality as a private conviction and as a public one, for the former does not necessarily lead to the latter. Private disapproval does not necessarily translate into opposition to sexual minorities gaining rights such as anti-discrimination laws. It is therefore flawed to conclude that “Singapore” is conservative from surveys like this, which ignore the many shades of grey in this contentious issue.

Community Values

People should think twice about whether they want to legitimise “community values” using a survey like this. By Minister Yaacob Ibrahim’s own logic, we should criminalise “sex with someone other than your marriage partner”, for adultery is rated the most morally reprehensible act of all, ranked above even gay marriage and gay adoption. Ironically, even a survey as flawed as this one shows that we hold a diversity of views on the matters at hand, a diversity of interests that requires balancing; yet it is being used to justify a supposedly dominant “pro-family” stance.

The beliefs reported here do not necessarily correspond to actual behaviour. Whilst morality can be an indicator of behaviour, it is by no means fool-proof, and should not be used as a marker for planning social policy. Social reality is messy and confusing, and makes policy difficult to do properly. This is why we must collect data properly and without moralistic prejudice, for bad data might lead us to draw the wrong conclusions.