

April 5, 2018

James Wagner Partners with Kristen Olson to Examine How Interviewer Travel Behaviors Affect Field Outcomes in the NSFG and HRS

By Brady Thomas West

In a new article published in the Journal of Official Statistics, SRC Research Associate Professor James Wagner presents collaborative research on interviewer travel with former Michigan Program in Survey Methodology (MPSM) graduate Kristen Olson, now an Associate Professor of Sociology at the University of Nebraska-Lincoln. Clear gaps exist in the survey methodology literature with respect to interviewer travel behaviors: we don't really know what interviewers do on a daily basis, why they do what they do, or how these behaviors affect other important field outcomes. Interviewer travel is also a major component of the overall field costs incurred by a face-to-face survey. Wagner and Olson examine the relationships of interviewer travel patterns and behaviors with important field outcomes in two major face-to-face SRC surveys, the National Survey of Family Growth (NSFG) and the Health and Retirement Study (HRS), and they provide a compelling demonstration that paradata describing these patterns and behaviors can predict outcomes that matter to field operations.

More specifically, the authors aggregate call record paradata from the NSFG and HRS to the interviewer-day level and examine two measures of interviewer travel with important cost implications: the distance traveled to the different sampled area segments visited on each day, and the number of trips made to assigned segments on each day. They also note various possible sources of error in these measures, an important point, and observe that interviewer reports of distance traveled are highly correlated with what would be expected based on the locations of their homes.
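To make the aggregation step concrete, here is a minimal sketch of how call records might be collapsed to interviewer-day travel measures. This is not the authors' code: the file name, column names, and the distance proxy (segment-to-segment hops in visit order) are all assumptions, and the measures constructed in the paper differ in their details.

```python
import numpy as np
import pandas as pd

# Hypothetical call-record paradata: one row per call attempt, with
# (assumed) columns interviewer_id, call_date, call_time, segment_id,
# and seg_lat / seg_lon giving the location of the sampled segment.
calls = pd.read_csv("call_records.csv", parse_dates=["call_date"])
calls = calls.sort_values(["interviewer_id", "call_date", "call_time"])

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between pairs of lat/lon points."""
    r = 3958.8  # Earth radius in miles
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * r * np.arcsin(np.sqrt(a))

def day_summary(day):
    """Collapse one interviewer-day of call records into travel measures."""
    n_segments = day["segment_id"].nunique()  # distinct segments visited
    miles = haversine_miles(day["seg_lat"].values[:-1], day["seg_lon"].values[:-1],
                            day["seg_lat"].values[1:], day["seg_lon"].values[1:]).sum()
    return pd.Series({"n_segments": n_segments, "miles_proxy": miles})

# One row per interviewer-day, ready to merge with area and interviewer covariates.
interviewer_days = (calls
                    .groupby(["interviewer_id", "call_date"])
                    .apply(day_summary)
                    .reset_index())
```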

Using cross-classified random effects models appropriate for each type of dependent variable, which account for crossed random effects of interviewers and primary sampling units (PSUs), they first consider various auxiliary predictors of these travel measures (including area features, such as segment size and a Census Bureau estimate of the difficulty of obtaining an interview, and interviewer characteristics, such as experience), to see what influences the travel decisions made by interviewers in the two surveys. Next, they consider these travel measures as predictors of important survey outcomes, including contact attempts, contact rates, and response rates, controlling for the same auxiliary predictors. Separate rates were considered for screening and main interviews, given the screening designs of the HRS and NSFG. I liked how the authors presented clear a priori expectations about each relationship of interest, based on the small amount of prior research (mainly simulation-based) available.
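As a rough illustration of what such a cross-classified model looks like in practice, the sketch below fits a crossed random effects model for a continuous travel outcome (e.g., log miles per day) using statsmodels. The data file, variable names, and predictors are hypothetical, and the paper's actual models use specifications appropriate to each outcome type (counts, rates), which this simple Gaussian example does not cover.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical interviewer-day analysis file with (assumed) columns:
# log_miles, experience, segment_size, difficulty, field_week,
# interviewer_id, psu_id.
data = pd.read_csv("interviewer_days.csv")

# statsmodels fits crossed random effects by treating the entire data set as
# a single group and specifying each crossed factor as a variance component.
data["all"] = 1
model = smf.mixedlm(
    "log_miles ~ experience + segment_size + difficulty + field_week",
    data,
    groups="all",
    re_formula="0",  # no default random intercept for the single group
    vc_formula={"interviewer": "0 + C(interviewer_id)",
                "psu": "0 + C(psu_id)"},
)
result = model.fit()
print(result.summary())  # variance components give interviewer and PSU variance
```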

In their models, Wagner and Olson initially find evidence of substantial interviewer and area variance for nearly all of the outcomes, suggesting that interviewers vary substantially in these travel behaviors and field outcomes. But what might explain this variance? On their first question, they report that very few of the auxiliary predictors considered could predict total mileage traveled on a given day in either survey; significant interviewer variance remained even after controlling for the various area and interviewer characteristics, which is an interesting finding. They did find some predictors of the number of segments visited: in the HRS, interviewers working in non-self-representing PSUs tended to visit more segments on each day, as did more experienced interviewers, and in both surveys more segments were visited later in the field period. On their second question, they report that the number of segments visited on a given day had a positive relationship with the counts of screener and main interview attempts in both surveys, as expected, and a negative relationship with both contact rates and screening / main interview rates (again, in both surveys). These results held even when controlling for the number of miles traveled. Interestingly, however, the distance traveled did not significantly predict any of these outcomes.

These findings have important practical implications for interviewer training, and the authors do a nice job of considering these implications for practice. Specifically, they strongly recommend careful monitoring of interviewer variance in the number of segments visited on each day as a tool for minimizing variation among interviewers in nonresponse bias. They also indicate that this monitoring should be used to initiate conversations with interviewers about their travel patterns and whether they could be more efficient.
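As a loose illustration of that recommendation (not something the authors provide), a monitoring report could be built directly from an interviewer-day file like the one sketched earlier, flagging interviewers whose average daily segment counts sit well above or below their peers:

```python
# Assumes the hypothetical interviewer_days frame from the earlier sketch,
# with columns interviewer_id, call_date, and n_segments.
summary = (interviewer_days
           .groupby("interviewer_id")["n_segments"]
           .agg(days="size", mean_segments="mean"))

overall_mean = interviewer_days["n_segments"].mean()
overall_sd = interviewer_days["n_segments"].std()
summary["z"] = (summary["mean_segments"] - overall_mean) / overall_sd

# Interviewers more than two standard deviations from the overall mean are
# candidates for a conversation about their daily travel patterns.
flagged = summary[summary["z"].abs() > 2].sort_values("z")
print(flagged)
```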

The authors have been working on this paper for a long time, and it made for clear and enjoyable reading. The paper makes a solid contribution to a growing literature on interviewer behaviors and the ability of survey paradata to describe these behaviors and their relationships with other important survey outcomes, and provides clear suggestions for practice for managers of field operations.


James Wagner and Kristen Olson (2018). An Analysis of Interviewer Travel and Field Outcomes in Two Field Surveys. Journal of Official Statistics, 34(1): 211-237.