

December 14, 2016

Couper and Zhang Evaluate Techniques for Improving Web Survey Responses

By Brady Thomas West


Together with Chan Zhang of Fudan University in Shanghai, China, a recent PhD graduate of the Michigan Program in Survey Methodology, SRC Research Professor Mick Couper recently published an article entitled 'Helping Respondents Provide Good Answers in Web Surveys' in Survey Research Methods. The article presents the results of a series of experiments designed to evaluate techniques for helping respondents answer difficult or complex questions in web surveys. Specifically, it examined two alternatives to simply giving respondents an open-ended text box when asking factual questions: drop-down menus listing all possible responses, and JavaScript look-up tables, in which possible alternatives are displayed automatically and become more specific as respondents continue to type, much like a Google search. The specific subject matter was the prescription drugs that respondents were currently using.
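
The article does not reproduce the survey instrument's code, but the look-up idea is easy to picture. The sketch below (in TypeScript, with a made-up drug list and a hypothetical `suggest` helper, not the authors' actual implementation) filters a pre-loaded master list so that the suggestions narrow as the respondent keeps typing:

```typescript
// Illustrative only: a tiny stand-in for the master database of drug names.
const drugNames: string[] = ["Atorvastatin", "Lisinopril", "Metformin", "Metoprolol"];

// Return suggestions that narrow as the respondent types,
// loosely mirroring how a search-engine autocomplete behaves.
function suggest(typedSoFar: string, maxResults = 10): string[] {
  const query = typedSoFar.trim().toLowerCase();
  if (query.length === 0) return [];
  return drugNames
    .filter((name) => name.toLowerCase().startsWith(query))
    .slice(0, maxResults);
}

console.log(suggest("met")); // ["Metformin", "Metoprolol"]
```

The drop-down condition, by contrast, simply presents the full list of possible responses at once rather than narrowing it as the respondent types.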

The first experimental study was conducted in two opt-in web panels in the U.S. Participants from the two panels were randomized into one of the three design conditions described above. Couper and Zhang found significantly higher break-off rates, significantly higher missing data rates, and longer response times on the prescription drug question for respondents assigned to the two alternative options. However, they also found evidence of a learning curve: response times in the alternative conditions decreased significantly with each additional drug entered. Furthermore, assuming that manual coding effort is an option, more of the standard text box responses turned out to be either machine-matchable or codable by human coders during post-processing.
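
To make the "machine-matchable" idea concrete, here is a hedged sketch (again in TypeScript, with an illustrative master list rather than the study's actual matching rules): a typed answer counts as matched if, after light normalization, it appears in the master drug-name list, and anything else falls to a human coder.

```typescript
// Hypothetical master list, stored in lowercase for case-insensitive matching.
const masterList = new Set(["lipitor", "metformin", "lisinopril"]);

// A free-text answer is "machine-matchable" if light normalization
// (trimming and lowercasing) yields an exact entry in the master list.
function isMachineMatchable(freeTextAnswer: string): boolean {
  const normalized = freeTextAnswer.trim().toLowerCase();
  return masterList.has(normalized);
}

console.log(isMachineMatchable("  Lipitor "));      // true
console.log(isMachineMatchable("the purple pill")); // false -> needs a human coder
```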

Given the unexpectedly negative results of the first study, the authors ran a second experimental study in two additional web panels and improved the design of the alternative options. Specifically, the drug names shown in the alternative options were no longer in all caps, and duplicate entries were trimmed from the master database of drug names, reducing the number of response options. This time they found evidence of an improved user experience in the alternative conditions, with fewer break-offs, less item-missing data, and generally shorter response times. However, the alternative conditions still produced longer response times for the first drug entered. The machine matching and/or coding rates were higher in the drop-down condition, but not by enough to offset the longer response times. Importantly, in the alternative conditions respondents were still not given the option to enter the name of a drug they could not find on the list.
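
The paper describes these clean-up steps but not their implementation; de-capitalizing and de-duplicating a master list might look roughly like the following (the function name and sample entries are made up for the example):

```typescript
// Illustrative sketch of the kind of clean-up described above, assuming the
// master list is a simple string array; names and details are hypothetical.
function cleanDrugList(rawNames: string[]): string[] {
  const seen = new Set<string>();
  const cleaned: string[] = [];
  for (const raw of rawNames) {
    // Convert ALL-CAPS entries to title case for readability.
    const titleCased = raw
      .toLowerCase()
      .replace(/\b\w/g, (c) => c.toUpperCase());
    // Drop duplicates that differ only in capitalization or spacing.
    const key = titleCased.replace(/\s+/g, " ").trim().toLowerCase();
    if (!seen.has(key)) {
      seen.add(key);
      cleaned.push(titleCased.replace(/\s+/g, " ").trim());
    }
  }
  return cleaned;
}

console.log(cleanDrugList(["LIPITOR", "Lipitor ", "METFORMIN"]));
// ["Lipitor", "Metformin"]
```

Fewer, more readable entries mean a shorter list to scan in the drop-down and fewer near-identical suggestions in the look-up.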

This problem was rectified in a third experimental study, conducted as part of the 2013 Health and Retirement Study (HRS) Internet Survey, which also increased the number of drugs respondents could report from three to five. The authors found that break-off rates were once again low and that the drop-down option had the highest missing data rates. Furthermore, item-missing data rates rose with the number of drugs reported, suggesting possible fatigue in this older population. The standard text box approach once again had the lowest response times, and there was again evidence of a learning curve, with quicker response times as additional drugs were entered (especially in the drop-down condition). Across all five drugs, the JavaScript approach yielded slightly higher rates of usable (matchable or codable) data. These results again suggested a trade-off between longer response times for respondents and better coding for the survey organization. Importantly, the authors did not find evidence of any interactions between socio-demographics and the experimental conditions when predicting the various measures of respondent performance.

Ultimately, the authors concluded that their results point to an important trade-off. After the selected design changes were implemented, the alternative tools appeared to provide more matchable or codable responses (saving post-processing time), but they continued to take longer for respondents than the standard text box approach (which would also be easier to program). These results have important implications for web surveys, especially given that respondents consistently showed evidence of learning with more frequent use of the alternative tools. If immediate coding is needed, choosing one of the alternative options would probably be best. However, if the web survey is collecting data to be coded and analyzed later, the standard text box option may be best for minimizing respondent burden.