Evaluating the generalizability of the COVID States survey — a large-scale, non-probability survey

Authors

See citation below for complete author information.

Abstract

COVID-19 fundamentally changed the world in a matter of months. To understand how it was affecting life in the United States, we fielded a non-probability survey in all 50 states concerning people's attitudes, beliefs, and behaviors, designed to be representative at the state level. Here, we evaluate the generalizability of this study by assessing the representativeness and convergent validity of our estimates. First, we evaluate the representativeness of the sample by comparing it to baseline estimates and auditing the size of the weights we use to reduce bias. We find that our sample is diverse and that most weights fall below levels of concern, with the exception of those for Hispanic respondents. Second, we assess the convergent validity of our survey by evaluating how our estimates of attitudes, behaviors, and opinions compare to estimates from other surveys and from administrative data. Third, we perform a direct comparison of our results to the Kaiser Family Foundation's probability-based COVID-19 Vaccine Monitor. Overall, our estimates deviate from others by 1%-7%, with the larger differences stemming from states with small populations and few other data sources, and from items with differing question wording or response choices. In doing so, we put forward a standard for evaluating the representativeness of surveys, non-probability or otherwise.
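
The first check described above, auditing the size of survey weights, is a standard diagnostic. Below is a minimal sketch, not the authors' actual pipeline, of one common approach: computing the Kish design effect and effective sample size, and flagging normalized weights above a conventional trimming threshold. The file name `survey_wave.csv` and the `state` and `weight` columns are hypothetical.

```python
import numpy as np
import pandas as pd

def audit_weights(w: np.ndarray, cap: float = 5.0) -> dict:
    """Summarize a weight vector: design effect, effective n, share above cap."""
    w = np.asarray(w, dtype=float)
    w = w / w.mean()  # normalize weights to mean 1
    return {
        "n": len(w),
        # Kish approximation: deff = n * sum(w^2) / (sum(w))^2
        "design_effect": len(w) * (w ** 2).sum() / w.sum() ** 2,
        # effective sample size under weighting
        "n_effective": w.sum() ** 2 / (w ** 2).sum(),
        "max_weight": w.max(),
        # share of respondents whose weight exceeds the trimming cap
        "share_above_cap": float((w > cap).mean()),
    }

# Hypothetical usage: audit weights state by state and report states where
# the largest normalized weight exceeds the cap.
df = pd.read_csv("survey_wave.csv")  # assumed columns: 'state', 'weight'
for state, group in df.groupby("state"):
    report = audit_weights(group["weight"].to_numpy())
    if report["max_weight"] > 5.0:
        print(state, report)
```

A large design effect or a heavy tail of weights above the cap would signal that a subgroup (for example, Hispanic respondents, as noted in the abstract) is underrepresented enough that a few respondents dominate the weighted estimate.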

Citation

Radford, Jason, Jon Green, Alexi Quintana, Alauna Safarpour, Matthew D. Simonson, Matthew Baum, David Lazer, Katherine Ognyanova, James Druckman, Roy Perlis, Mauricio Santillana, and John Della Volpe. "Evaluating the generalizability of the COVID States survey — a large-scale, non-probability survey." March 7, 2022.