Mode effect

Mode effect is a broad term referring to a phenomenon whereby a particular survey administration mode causes different data to be collected. For example, when the same question is asked in two different modes (e.g. paper and telephone), responses given in one mode may be significantly different from responses given in the other. Mode effects are a methodological artifact, limiting the ability to compare results from different modes.

Particular survey modes put respondents into different frames of mind, referred to as a mental “script”.[1] This can affect the results they give. For example:

  • Face-to-face surveys prompt a “guest” script. Respondents are more likely to treat face-to-face interviewers graciously and hospitably, leading them to be more agreeable and affecting their answers. Differences between the interviewers administering the survey can also lead to a range of "interviewer effects" on survey results.
  • Phone interviews prompt a “solicitor” or "telemarketer" script. Respondents may place less priority on telephone interviews, making them more likely to satisfice (answer questions with the least possible effort) in order to finish the interview sooner. Wariness of who may be on the other end of the phone can also lead respondents to provide more socially acceptable answers than would be given in other survey modes.

Mode effects are likely to be larger when the differences between modes are larger[citation needed]. Face-to-face interviews are substantially different from self-completed pen-and-paper forms. By contrast, web surveys, pen-and-paper forms and other self-completed instruments are quite similar (each requiring respondents to read and privately respond to a question), and therefore mode effects between them may be minimised.

Users of surveys must consider the potential for mode effects when comparing results from studies conducted in different modes. This is difficult, however, as mode effects can be complex and subject to interactions between respondent demographics, subject matter and mode. Unless mode effects have been formally investigated for a survey instrument, their size is difficult to quantify, and qualitative judgments by experts familiar with the subject matter and the respective modes are required instead.

Social Desirability Bias

Studies of mode effects are sometimes contradictory, but some general patterns do emerge. For example, social desirability bias tends to be highest for telephone surveys and lowest for web surveys, with modes ranked roughly as follows (highest to lowest):[2][3]

  1. Telephone surveys
  2. Face-to-face surveys
  3. IVR (interactive voice response) surveys
  4. Mail surveys
  5. Web surveys

Therefore, because the data collected on sensitive topics (such as sexual behaviour or illicit activities) will change depending on the administration mode, researchers should be cautious about combining data or comparing results from different modes.

Differences in Questions between Modes

Some modes require different question wording from others, in order to suit the features of the mode. For example, self-complete forms can use lists of examples or extensive instructions to help respondents answer relatively complex questions. By contrast, in telephone interviews, respondents are often limited by their working memory and are unlikely to understand a long question with multiple sub-clauses. Another example is the 'matrix' of questions commonly found on self-complete forms, which cannot easily be read out in a verbal interview; a matrix would generally need to be scripted as a series of individual questions instead.

Differences in question wording across modes may cause different data to be collected by different modes. However, this is not always the case, and appropriate adaptation of questions to a new mode can yield comparable data[citation needed]. Survey designers should consider the conventions of the mode when adapting questions. For example, while it may be acceptable to require respondents to calculate total figures themselves in a paper form, respondents may perceive it to be burdensome if this is required in a web form (where respondents might expect totals to be calculated automatically by the computer). This may in turn change their attitude toward the form, altering their behaviour and ultimately changing the data collected.

Identifying and Resolving Mode Effects

Mode effects can be identified by embedding an experiment within the survey, in which a proportion of respondents is randomly allocated to each mode. Differences in results between the modes then provide an estimate of the 'mode effect' for that particular survey.
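As a sketch of how such an embedded experiment might be analysed (all figures below are illustrative, not from any real survey): with respondents randomly allocated between two modes and asked the same yes/no item, a two-proportion z-test compares the 'yes' rates between the arms.

```python
import math

def mode_effect_test(yes_a, n_a, yes_b, n_b):
    """Two-proportion z-test comparing an item's 'yes' rate between
    two randomly allocated survey modes (normal approximation)."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal tail probability.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a - p_b, z, p_value

# Hypothetical figures: 300 respondents per arm; the sensitive item
# draws more admissions on the self-completed (web) form than by phone.
diff, z, p = mode_effect_test(yes_a=84, n_a=300, yes_b=120, n_b=300)
# diff is about -0.12; a small p-value flags a likely mode effect.
```

In practice a survey asks many items, so some per-item differences will be 'significant' by chance; an analysis of a real embedded experiment would need to account for multiple comparisons.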

Once a mode effect has been quantified, it may be possible to use this information to reprocess existing data and allow comparison between data collected in different modes (e.g. by backcasting a time series to determine what past results 'would have' been had they been administered in the new mode).
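A minimal sketch of such a backcast, assuming a constant additive mode effect estimated from a parallel-run experiment (the series, years and shift below are invented for illustration):

```python
# Hypothetical annual rates (%) collected by telephone, to be restated
# as if they had been collected by the new web mode.
telephone_series = {2006: 41.2, 2007: 42.0, 2008: 41.5, 2009: 43.1}

# Assumed experimental estimate of the mode effect (web minus telephone).
mode_shift = -3.4

def backcast(series, shift):
    """Restate old-mode results in new-mode terms, assuming the mode
    effect is additive and constant over time."""
    return {year: round(value + shift, 1) for year, value in series.items()}

web_equivalent = backcast(telephone_series, mode_shift)
# e.g. the 2006 telephone figure of 41.2 becomes 37.8 in web terms.
```

The constant-shift assumption is a strong one; if the mode effect interacts with respondent demographics or with the level of the series, a simple additive backcast will be misleading.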

Differential Coverage between Modes

Different administration modes may inherently exclude some parts of the target population. This potentially biases the sample that is taken, and changes the data from what would have been collected using another mode. For example, people without a home phone are excluded from Random Digit Dialling (RDD) surveys, and people without internet access are unlikely to complete a web survey. This means different samples are taken from the population when using different modes. Unless mode effect experiments are specifically designed to investigate differential coverage of modes, significant differences between modes/conditions could have several explanations:

  • properties of the mode;
  • different 'types' of people responding to the different modes;
  • both the mode properties and different 'types' of respondents (in an additive fashion);
  • an interaction, where some respondents are affected by properties of the mode but others are not.

This problem is exacerbated when multiple modes are used in 'live' administration of a survey. Some surveys offer multiple modes so that respondents can choose the method most convenient for them; that is, different 'types' of respondents are expected to complete different modes based on their own choices. In this case mode effects are difficult to quantify, because randomly allocating respondents to a condition does not reflect their preference: such an experiment lacks external validity, and its results would not directly generalise to situations that offer respondents a choice. Conversely, failing to randomly allocate participants (i.e. allowing them a choice, thereby retaining external validity) means that apparent differences between modes reflect the combined effect of a) different respondent types choosing each mode and b) any mode effects.
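The confounding described above can be illustrated with a small simulation (all rates are hypothetical): when respondents choose their mode, the observed web-versus-phone gap mixes the true mode effect with the difference between respondent types; under random allocation, roughly the true mode effect is recovered.

```python
import random

random.seed(1)

# Hypothetical population: 'online' types both answer "yes" more often
# (base rate 0.50 vs 0.30) and prefer the web mode; the web mode itself
# adds a further +0.05 to the "yes" probability (the true mode effect).
TRUE_MODE_EFFECT = 0.05

def respond(person_type, mode):
    base = 0.50 if person_type == "online" else 0.30
    extra = TRUE_MODE_EFFECT if mode == "web" else 0.0
    return random.random() < base + extra

def observed_difference(self_selected, n=20000):
    """Web-minus-phone difference in 'yes' rates, with modes either
    randomly allocated or chosen by the respondent."""
    yes = {"web": 0, "phone": 0}
    total = {"web": 0, "phone": 0}
    for _ in range(n):
        person_type = "online" if random.random() < 0.5 else "offline"
        if self_selected:
            p_web = 0.9 if person_type == "online" else 0.1  # self-selection
        else:
            p_web = 0.5  # random allocation
        mode = "web" if random.random() < p_web else "phone"
        total[mode] += 1
        yes[mode] += respond(person_type, mode)
    return yes["web"] / total["web"] - yes["phone"] / total["phone"]

diff_random = observed_difference(self_selected=False)
diff_choice = observed_difference(self_selected=True)
# diff_random lands near the true +0.05 mode effect, while diff_choice
# is much larger, as it also absorbs the respondent-type difference.
```

The simulation corresponds to the third bullet above (mode properties and respondent types combining additively); richer interactions would require a correspondingly richer model.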


  1. ^ Groves, Robert M. (1989). Survey Errors and Survey Costs. New York: Wiley-Interscience.
  2. ^ Kreuter, Frauke; Presser, Stanley; Tourangeau, Roger (2008). "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity". Public Opinion Quarterly 72 (5): 847–865. doi:10.1093/poq/nfn063.
  3. ^ Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. (2003). "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly 67 (1): 79–125. doi:10.1086/346010.

Wikimedia Foundation. 2010.

