A Methodological Evaluation of Non-Response on the Physician Component of the Community Tracking Study
Grant Description: At what point does survey non-response no longer merit further expenditure of finite resources? Researchers at Project HOPE/Center for Health Affairs are evaluating the cost-effectiveness of different respondent cut-off dates for the Community Tracking Study (CTS) physician survey. They are examining two questions: 1) Is the survey’s level of non-response “acceptable”? and 2) What is an “acceptable” level of non-response, given the resources needed to increase response rates and the delays in producing timely data that result from pursuing non-respondents? They are examining whether respondents differ from non-respondents on key demographic variables; whether the addition of late respondents (the last 15-20 percent) changes key survey estimates; and whether analyses conducted by the Center for Studying Health System Change using the physician survey would have reached different conclusions had data been obtained only from early and middle respondents. The study uses data from the CTS Physician Survey and the American Medical Association Masterfile. The objective is to inform the Center for Studying Health System Change and other organizations that field physician surveys, and to aid in survey resource allocation decisions.
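As a rough illustration of one comparison described above, the following Python sketch checks whether adding the last wave of respondents shifts a key survey estimate. The file name, column names, and the 80 percent timing cutoff are hypothetical stand-ins; the actual CTS analysis files, survey weights, and variance estimation procedures are not reproduced here.

```python
# Hypothetical sketch: does adding the last ~20% of respondents change
# a key survey estimate? The data layout is assumed, not the actual
# CTS Physician Survey files.
import numpy as np
import pandas as pd

# Assumed columns: 'response_week' (when the questionnaire was returned)
# and 'charity_care' (1 if the physician reports providing charity care).
df = pd.read_csv("cts_physician_survey.csv")

# Split respondents by timing: the last 20% to respond count as "late".
cutoff = df["response_week"].quantile(0.80)
early_mid = df[df["response_week"] <= cutoff]

for label, group in [("early/middle respondents", early_mid),
                     ("all respondents", df)]:
    p = group["charity_care"].mean()
    # Simple unweighted standard error, ignoring the survey design.
    se = np.sqrt(p * (1 - p) / len(group))
    print(f"{label}: estimate = {p:.3f} (SE = {se:.3f}, n = {len(group)})")
```

If the two printed estimates are close relative to their standard errors, the late respondents are not materially shifting the cumulative estimate, which is the pattern the study set out to test.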
Policy Summary: Survey administrators face a tradeoff between expending additional data collection resources to increase the response rate and accepting a lower response rate in exchange for lower data collection costs. Typically, a disproportionate share of data collection resources is spent on efforts to make marginal improvements to the response rate. This study used data from the CTS physician survey to determine how increases in the response rate affect survey estimates and data quality. The findings should help research organizations make more cost-effective decisions about fielding the CTS physician survey, and may also be relevant for others responsible for conducting similar surveys of physicians.

The researchers examined variation in survey estimates and data quality for three respondent groupings, each designed to reflect the timing of response or the ease of obtaining the response. Their bivariate analyses showed very few significant differences by respondent group in the pattern of survey estimates, in the incidence of item non-response, or in the precision of survey estimates for various analytic subgroups. When statistically significant differences were observed in the survey estimates for mutually exclusive respondent groups, these differences were typically small in magnitude. Furthermore, even large differences between mutually exclusive groups did not change the cumulative estimates appreciably, because relatively few new respondents were being added to the respondent pool at each point. The researchers also estimated a multivariate logit model of physicians’ provision of charity care using different groups of respondents (an illustrative sketch of this kind of stability check appears below). They found great stability in the parameter estimates across respondent groups, indicating that the addition of new respondents did not change the estimated model significantly. Thus, expending additional data collection resources to increase the survey response rate had little impact on the survey estimates or data quality.

In light of these findings, it does not appear that making marginal improvements in response rates (e.g., from the high 50s to the low-to-mid 60 percent range) will change the CTS survey estimates appreciably or affect the quality of the data obtained. Although the researchers also found that the data resulting from a very low response rate (approximately 33 percent) were not significantly different from data obtained from all respondents, such low response rates are unlikely ever to be acceptable for peer-reviewed publication. Surveys will still have to meet a “face validity” test by achieving a credible response rate. However, as long as the study results are not overstated to imply that extremely low response rates are credible, this study may help researchers make a better case for dissemination and publication of interesting results in peer-reviewed journals even when the response rate falls slightly short of current goals.

Two additional points are worth noting. First, the implications of moving significantly beyond a 65 percent response rate could not be assessed through this study. It is possible that data quality could be significantly improved by achieving a much higher response rate; the researchers have outlined a small experiment that could be conducted on a future round of the CTS physician survey to answer this question.
Second, these results may not be generalizable to other physician surveys that use different data collection methods or that include a markedly different set of questions.
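The stability check on the charity-care model referenced above can be illustrated in miniature. This Python sketch, using statsmodels, fits the same logit specification on progressively larger respondent pools, ordered by response timing, and places the coefficient estimates side by side. The predictors, file name, and pool fractions are illustrative assumptions, not the study’s actual specification.

```python
# Hypothetical sketch of the stability check: fit the same logit model
# of charity-care provision on progressively larger respondent pools
# and compare coefficient estimates. Column names are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cts_physician_survey.csv")
df = df.sort_values("response_week")  # order respondents by reply date

predictors = ["years_in_practice", "solo_practice",
              "managed_care_revenue_share"]
results = {}

# Fractions of the final respondent pool, loosely echoing the response
# levels discussed in the summary (~33%, high 50s, everyone).
for frac in (0.33, 0.58, 1.00):
    pool = df.iloc[: int(len(df) * frac)]
    X = sm.add_constant(pool[predictors])
    fit = sm.Logit(pool["charity_care"], X).fit(disp=0)
    results[frac] = fit.params

comparison = pd.DataFrame(results)  # one column of coefficients per pool
print(comparison.round(3))
```

Broadly stable columns in the resulting table would correspond to the study’s finding that adding later respondents leaves the estimated model essentially unchanged.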