February 8, 2017

West, Conrad, Kreuter, and Mittereder evaluate bias and interviewer variance under conversational and standardized interviewing

Michigan Program in Survey Methodology (MPSM) faculty Brady West and Fred Conrad, Joint Program in Survey Methodology (JPSM) faculty member Frauke Kreuter, and MPSM doctoral student Felicitas Mittereder have published a new study on the effects of conversational interviewing in the Journal of the Royal Statistical Society, Series A. The work was funded by the National Science Foundation, the Institute for Employment Research in Nuremberg, and infas in Nuremberg.

Standardized interviewing, in which interviewers are trained to read questions exactly as worded, has been argued to minimize differences across interviewers in how they administer questions and thus to reduce interviewer variance in the responses. Previous studies have shown that conversational interviewing can elicit more accurate responses when questions are ambiguous. Beyond the higher cost associated with longer interviews, however, there had been no evaluation of whether allowing interviewers to deviate from the script also increases interviewer variance.

In their study, West, Conrad, Kreuter, and Mittereder employed a design with 15 geographic areas in Germany and 60 interviewers, randomly assigning two interviewers to conversational interviewing and two to standardized interviewing within each area. The sample was selected from a database so that sample members were employed, had held more than one job, had held a part-time job, had experienced a spell without employment, and were at least 18 years old. These criteria ensured more complex employment histories and also provided a gold standard for some of the survey questions. Survey questions were selected because they elicited greater need for interviewer-respondent interaction and requests for clarification, or because they had shown larger interviewer effects.

They fit multilevel models to 55 survey questions to estimate intra-interviewer correlations (ICCs) and shifts in response distributions, controlling for area effects in the interpenetrated interviewer design. Five questions showed significant or marginally significant differences in ICC, with standardized interviewing yielding lower between-interviewer correlations. However, for two of these items the within-interviewer variances were larger under standardized interviewing, which may have contributed to the lower ICCs. For one question for which administrative data were available, the reduction in bias under conversational interviewing outweighed the increase in interviewer variance. Fourteen questions showed significant or marginally significant shifts in response distributions, all in the expected direction of more accurate responding under conversational interviewing. The authors conclude that their findings support the use of scripted, standardized wording in survey questions, but not restricting interviewers from providing additional clarification.
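To illustrate the quantity at issue: the intra-interviewer correlation is the share of total response variance attributable to interviewers. The sketch below (not the authors' code) estimates it on simulated data with a one-way ANOVA method-of-moments estimator, the balanced-design counterpart of the random-intercept multilevel model; the interviewer counts, workloads, and variance components are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical balanced design: 20 interviewers, 30 respondents each,
# with a true interviewer effect (sd 0.5) and residual noise (sd 1.0),
# so the true ICC is 0.25 / (0.25 + 1.0) = 0.2.
n_int, n_resp = 20, 30
u = rng.normal(0.0, 0.5, n_int)                      # interviewer effects
y = 2.0 + u[:, None] + rng.normal(0.0, 1.0, (n_int, n_resp))

# One-way ANOVA variance-component estimates.
grand_mean = y.mean()
ms_between = n_resp * ((y.mean(axis=1) - grand_mean) ** 2).sum() / (n_int - 1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_int * (n_resp - 1))

var_between = max((ms_between - ms_within) / n_resp, 0.0)  # interviewer variance
var_within = ms_within                                     # residual variance
icc = var_between / (var_between + var_within)
print(f"estimated interviewer ICC = {icc:.3f}")
```

In the study itself, area effects also had to be controlled, which is why the authors used multilevel models rather than this simple estimator.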

Brady West, Frederick Conrad, Frauke Kreuter, & Felicitas Mittereder. Can conversational interviewing improve survey response quality without increasing interviewer effects? Journal of the Royal Statistical Society, Series A: Statistics in Society. Online: December 2016.