What Is Social Science Survey Methodology?
Survey methodology is the study of systematic and random errors that occur in survey measurement and estimation, how to avoid them, and how to adjust for them. Survey methodologists are essentially “methods scientists” who view these errors and data collection methods in general as their objects of study. They come from a wide range of academic disciplines across the sciences, humanities, and professional fields, and occupy an equally wide range of professional stations, including academic faculty, government statistical agencies, research corporations, polling firms, and anywhere that survey data are collected. This article focuses on methodologists who identify primarily with social and behavioral sciences or who apply principles from those disciplines in their work. Social science survey methodologists often work on survey question wording, questionnaire design and pretesting, interviewer training, usability and user experience research, and interactional and memory-related error. Much of social science methodologists’ work concerns the presence and quality of individual responses, but they also apply their skills to social, behavioral, cognitive, and linguistic phenomena that affect the quality of the representation properties of survey estimates (i.e., nonresponse and coverage). Social science methodologists are found anywhere disciplinary insights into social interaction and psychological processes can improve survey data collection processes and the resulting data.
As a scientific discipline and field of practice, social science survey methodology has often applied well-known social science concepts and theories. Influential examples include social exchange theory (borrowed from sociology to explain why incentives work; e.g., Cook et al. 2013; Dillman 2014), satisficing (borrowed from cognitive psychology to explain why respondents do not put maximum effort into responding; e.g., Holbrook et al. 2003), judgment heuristics and Gestalt principles (from cognitive and perception psychology to explain why respondents are sensitive to subtle aspects of question display; e.g., Köhler 1970; Tversky and Kahneman 1979; Dillman 2014), and subjective expected utility (from economics and decision science to explain why people participate in surveys at all; e.g., Fishburn 1981). Psycholinguistics (Clark 1992), sociolinguistics (Maynard and Schaeffer 2002; Sacks, Schegloff, and Jefferson 1974), human-computer interaction (Gong and Nass 2007; Reeves 1998), and other social and behavioral sciences also strongly influence social science survey methodology. Some of these are formally integrated into social science survey methodology theory and training, while others are not. This is a broad overview rather than a comprehensive list of social science theories and perspectives applicable to survey methodology. We return to this topic with respect to training below.
Survey methodology has a few theories and frameworks of its own, which also have strong social science associations. Total Survey Error (Groves and Lyberg 2010) may be the best-known framework for describing the entire survey cycle and its associated errors, and it includes concepts from both social science and statistical science orientations. Models of the survey response process help explain how measurement error can occur by isolating the comprehension, retrieval, judgment, and response steps involved in answering questions (e.g., Jabine et al. 1984; Sirken et al. 1999; Tourangeau et al. 2000). Leverage-saliency theory (LST) explains the causes of unit nonresponse from a largely cognitive decision perspective, asserting that a person’s propensity to respond is determined by survey design features, how salient those features are to the respondent, and the subjective weight the respondent puts on them when noticed (Groves, Singer, and Corning 2000).
Social science survey methodology research also has led to a number of replicable findings that have influenced survey practice, including demonstrating the effects of telescoping, acquiescence bias, social desirability bias, response option order, and context/question order (see The Social Science Survey Methodologist’s Bookshelf below for texts that include these concepts and findings). It can be tempting to group all social science methodologists under the largest and most well-known social sciences (e.g., sociology, psychology, political science), but there often is more heterogeneity than homogeneity across these disciplines. Psychology tends to focus on cognitive, affective, and perceptual processes within individual people. Sociology tends to focus on societal changes (e.g., demographics) and power relationships between classes of people (e.g., social stratification). There are notable areas where these disciplines overlap, such as social psychology, which can take on both psychological and sociological orientations. Further, there has been a rise in interdisciplinary fields of study over the past half century or more such that many contemporary researchers choose to focus on specific fields of application or research problems rather than disciplinary boundaries that predefine orientations to all scientific problems (Sternberg 2005). Survey methodology is a prime example of such interdisciplinary, “problem-based” research.
Although survey methodology is itself the study of survey methods, we employ methods from a variety of disciplines. This means that social science survey methodologists should be trained in a range of social science methodologies that can be usefully applied to survey methodology, rather than only in the methods most commonly used in survey practice. For example, qualitative training may include conversation analysis, participant observation, ethnology, and other methods that are not necessarily found in everyday survey research in most survey organizations.[1] Similarly, quantitative methodology could include experimental design and analysis, a common method in psychology that is useful for testing methodological changes to a survey, as well as for controlled tests of the psychological mechanisms of survey response. While randomized experiments are not used in everyday survey research, training more students in this technique might change that. Similarly, psychometrics and factor analysis, common in education and psychological measurement, can be used for developing multi-item measures and would be a good core statistical feature of social science survey methodology training.
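As a minimal illustration of the kind of experimental analysis described above, the sketch below tests whether two question wordings in a split-ballot experiment produce different response distributions, using a chi-square test of independence computed from scratch. All counts are hypothetical and invented for illustration only.

```python
# Illustrative split-ballot analysis: do two question wordings produce
# different response distributions? (All data are hypothetical.)

def chi_square(table):
    """Return the chi-square statistic for a contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = wording A / wording B (random half-samples),
# columns = agree / neutral / disagree.
observed = [
    [120, 40, 40],   # wording A
    [ 90, 50, 60],   # wording B
]

stat = chi_square(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # (2-1) * (3-1) = 2
print(f"chi-square = {stat:.2f} on {df} df")
# Compare against the 0.05 critical value for 2 df (5.991).
print("wording effect detected" if stat > 5.991 else "no detectable effect")
```

In practice one would use a statistical package rather than hand-coding the test, but the logic — randomize wording, compare observed response distributions to those expected under no wording effect — is exactly what split-ballot question-wording experiments do.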
What Does Social Science Survey Methodology Training Look Like?
Formal survey methodology training is currently offered only at the graduate level or in professional development contexts. At the master’s level, this training can be general or tracked, with specialization in statistical science, social science, or a substantive application area such as political science. Doctoral training tends to have a traditional PhD orientation in which students develop a research agenda around a specific error phenomenon or method. In this way, survey methodology has an academic orientation despite being an applied field at its core (i.e., if there were no surveys, there would be no survey methodology). Readers interested in the subtle differences between training programs should review their websites directly (e.g., MPSM,[2] JPSM,[3] SRAM,[4] UCONN,[5] UIC[6]). Survey methodology training can be found under various names, such as “Social Research Methods”[7] and “Applied Sociology.”[8] Social science survey methodology training also occurs in traditional social science departments that do not have a formal academic specialization in methodology, including the University of Wisconsin’s Sociology program, Northwestern’s Communication and Political Science programs, the New School for Social Research’s Psychology program, the University of Washington’s Political Science program, and others.
There are no undergraduate specializations in survey methodology to our knowledge, but survey methods certainly appear in undergraduate social science methods courses. A few interdisciplinary undergraduate programs are poised to train students for careers in survey methodology, such as Northwestern University’s Mathematical Methods in the Social Sciences program.[9] The University of Michigan’s Psychology Department often encourages students to minor in statistics, which prepares students well for survey methodology. Any social science program that includes a strong quantitative component positions its students for entry into the field of survey methodology. Statistics programs and other quantitative majors are natural preparation for survey methodology, particularly if they include study in social science. We are likely to see more interdisciplinary programs like these as data science rises in popularity as a field.
Existing Social Science Survey Methodology Training Curricula and Training Gaps
To investigate the status of social science survey methodology training, we reviewed known U.S. graduate programs with “survey” in their name and drew on our own experiences with survey methodology programs. There are notable non-U.S. programs, such as the Survey Methodology program at the University of Essex,[10] which we did not review in detail because their training model emphasizes research over formal courses, and we wanted to focus on the course-based U.S. model. One list of United States and European academic centers with survey methodology training can be found online.[11]
Reviewing programs broadly, we saw that statistical methodology and general survey methodology coursework dominate listed courses. Social science content certainly appears in general survey methodology courses and in special seminars that are not listed on program websites. But if our field were made up of social and statistical science equally, we would expect to see specialization courses split equally between social science and statistical science topics. That does not seem to be the case. Additionally, there are several areas of theory, methods, and professional practice that we think are not covered enough in social science coursework in the programs we reviewed and with which we had personal connections. We discuss those training gaps next. Identifying a gap does not mean that no program addresses the issue, but that programs tend to underemphasize it on average, and that we think emphasis on the topic should be strengthened in survey methodology training programs as a whole. Our goal is not to single out individual programs as strong or weak, but to move the entire field of survey methodology ahead in its social science training.
Training Gap 1 – Full survey lifecycle experience
Working survey methodologists can contribute more fully if they have experience with the full survey lifecycle. Such experience is difficult to replicate in coursework alone but can be gained from design seminars, practica, and internships. Design seminars and practica connect students with survey design and data collection opportunities in a relatively supervised and moderated environment. For example, in a one-semester design seminar, students may work with two to three clients chosen by their professor and tackle real design problems ranging from sample construction to questionnaire design to analysis and interpretation. Students work in small groups to produce a design, research plan, or even a final product for the client. Similarly, the JPSM and MPSM practicum sequence involves designing and pretesting a questionnaire in one semester (again, working with clients to refine research goals and questions) and analyzing the data in a second semester. Students may be involved in the data collection itself. The practica are supported by production surveys at Westat (for JPSM) and the Surveys of Consumers at the University of Michigan (for MPSM).
Internships involve less direct supervision from faculty but require a wide network willing to support students, often for a full summer at full-time pay. Some internships are academic, involving research in one particular area of survey methodology with the opportunity to publish or directly influence survey designs, while others are more applied and provide a good link between the scholarly side of survey methodology and its applied side. Hands-on experiences that expose students to multiple aspects of survey operations (similar to medical rotations) provide valuable insight into the factors that drive budgets, error considerations, and project management and design decisions. As an applied field, survey methodology should consider internships an essential part of training, similar to clinical internships for health service workers and therapists, or student teaching for educators. These analogies could be taken literally: students could be placed in a year’s worth of full-time employment that is considered a formal part of their training, as is done in teaching and clinical psychology. The medical rotation analogy could also be taken literally, with students serving three- to four-month rotations in different areas of a survey research center, agency, or company. Within a year, the student would experience questionnaire design and pretesting, sample design, interviewing and interviewer management, and data processing. To our knowledge, no such program exists.
Training Gap 2 – Training and experience with multilingual and multicultural survey research
Multilingual and multicultural survey research seems to receive relatively little formal attention in core survey methodology training. Of the programs we reviewed closely, only SRAM listed a regular course on this topic. Given the relatively large and increasing fraction of the United States population that cannot be surveyed in English, it would benefit the field greatly if other programs offered more focused multicultural, multilingual, and multinational survey methodology training and experience.[12] Cultural, cross-cultural, and multinational factors other than language can also create barriers to sampling and measurement. Training programs should design concentrations, courses, or themes within courses addressing language-related measurement and nonresponse error, nonlanguage cultural aspects of these errors, and the principles and logistics of translation and cultural adaptation. This theme could also be integrated into internships and practica.
Training Gap 3 – Qualitative research methods
In some social sciences, qualitative methods are the methods of choice. In others, they occupy a second-class status behind quantitative and statistical methods. Yet qualitative and quantitative methods complement each other well in any comprehensive empirical science, and so-called “mixed methods” research tries to leverage the best features of each. Survey methodology is inherently a quantitative science and thus tends to focus on quantitative methods in all areas. Qualitative methods usually are taught as methods of questionnaire pretesting and receive cursory coverage. We believe qualitative methods should take a larger and more sophisticated role in social science methodology training, which would expand and enrich their application to survey error inquiry in academic studies and applied settings (see Maynard and Schaeffer 2002 as a prime example).
Training Gap 4 – The social/statistical science balance
Similar to the divide between qualitative and quantitative methods, the division between social and statistical sciences in survey methodology is stark in some survey research contexts, nonexistent in others, and extremely dynamic. One can have a successful career in survey methodology with no training in social science (e.g., as a statistician) or with no training in statistics (e.g., as a qualitative researcher). Yet survey methodology benefits most when social science methodologists have strong statistical training to support their inquiry. Indeed, most modern quantitative social science training involves advanced statistical training as well as theory, driven largely by statistical advances of the past few decades. While social science survey methodologists may not be expected to master statistical techniques at the same level as statistical survey methodologists, they should understand enough about designing factorial experiments, analyzing experimental data (e.g., analysis of variance [ANOVA]), factor analysis (for questionnaire development and analysis), and regression modeling to be able to conduct or contribute to research using those techniques. To motivate this training, emphasis should be placed on operational (e.g., statistical quality and process control) as well as academic (e.g., isolating multiple sources of survey error) uses of statistics, particularly for master’s-level students. Students should graduate with the ability to employ basic and modern statistical techniques to solve operational and design problems they encounter on the job, as well as to support research that contributes to the larger survey methods literature, even if they would not be considered “practicing statisticians” per se. For some students, this may mean taking statistics courses in departments that reflect the fields in which they plan to work after graduation, such as business, education, or psychology.
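The logic of a factorial experiment can be sketched briefly. The numbers below are entirely hypothetical; a real analysis would model respondent-level data (e.g., with ANOVA or logistic regression), but the decomposition into main effects and an interaction is the core idea a methodologist needs:

```python
# Illustrative 2x2 factorial design: incentive (no/yes) x mode (mail/web),
# outcome = response rate per experimental cell. All numbers are hypothetical.

# Cell response rates, indexed by (incentive, mode).
rates = {
    ("no",  "mail"): 0.30, ("no",  "web"): 0.24,
    ("yes", "mail"): 0.42, ("yes", "web"): 0.40,
}

def mean(xs):
    return sum(xs) / len(xs)

# Main effect of incentive: average over modes, then take the difference.
incentive_effect = (mean([rates[("yes", m)] for m in ("mail", "web")])
                    - mean([rates[("no", m)] for m in ("mail", "web")]))

# Main effect of mode: average over incentive conditions.
mode_effect = (mean([rates[(i, "web")] for i in ("no", "yes")])
               - mean([rates[(i, "mail")] for i in ("no", "yes")]))

# Interaction: does the incentive effect differ by mode?
interaction = ((rates[("yes", "web")] - rates[("no", "web")])
               - (rates[("yes", "mail")] - rates[("no", "mail")]))

print(f"incentive main effect: {incentive_effect:+.2f}")
print(f"mode main effect:      {mode_effect:+.2f}")
print(f"interaction:           {interaction:+.2f}")
```

The design choice worth noting is that a factorial experiment estimates both main effects from the full sample at once, which is why it is more efficient than running two separate one-factor experiments.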
Training Gap 5 – Integrating theories and concepts from other disciplines
Social science survey methodology has origins in sociology, psychology, communication, and political science, and we want to see social science methodologists benefiting from deeper study of contemporary theory and findings from these disciplines. The cognitive, social, and behavioral disciplines and subdisciplines that, to us, are among the most important to include are social cognitive psychology and neuroscience, cognitive psychology, decision science, visual perception, vocal acoustics and phonetics, sociolinguistics, psycholinguistics, media studies, political science, interpersonal communication, mass communication, usability and user experience (UX) research, and graphical/visual design (Smyth et al. 2006). Without sufficient cross-pollination, our training paradigm is at risk of becoming outdated as other fields progress but we do not adopt their most applicable findings. Many of our current best-practice recommendations are based on theories and findings that are decades old (e.g., Cialdini’s principles of compliance adopted in the 1990s by Groves and Couper, or social exchange theory adopted in the 1970s by Dillman). This is not to say that theoretical advancement has not happened in survey methodology (Groves, Singer, and Corning 2000; Dillman 2014). Yet advances in other fields may hold pivotal results for survey design decisions and our understanding of survey error in general. To keep our perspectives fresh, we recommend that survey methodology training attempt to incorporate research findings from areas that might not normally be thought of as “survey methodology.” This list is by no means exhaustive, but we would recommend the following topics and researchers as “fruitful tangents” for social science survey methodologists to explore: persuasion (e.g., Cialdini[13]); social-cognitive decision-making and embodied cognition (e.g., Schwarz[14]); perception (e.g., Kubovy[15]);
ethnomethodology and conversation analysis (e.g., Maynard[16] and Heritage[17]); pragmatics and psycholinguistics (e.g., Clark[18]); animacy and human-computer interaction (e.g., Reeves[19] and Nass[20]); interpersonal interaction; frame analysis (e.g., Goffman[21]); the cooperative principle/Gricean maxims (e.g., Grice[22]); speech communities (e.g., Gumperz[23]); ethnography of communication (e.g., Hymes[24]); phonetics and acoustics (e.g., Johnson[25]); variation analysis (e.g., Labov[26]); social exchange theory (e.g., Cook[27]); elaboration likelihood (e.g., Petty[28] and Cacioppo[29]); and the theory of planned behavior and reasoned action (e.g., Ajzen[30] and Fishbein[31]). Some of these topics and theoretical perspectives have already entered the survey literature, but their direct impact on survey practice, survey training, and theories of response and nonresponse is not always clear. To underscore our own point, we should note that many of these theories and frameworks probably have more recent incarnations, and do not represent the current paradigm in their own fields. We hope readers who share our zeal for social science theory will seek them out and incorporate them into their survey methodology work.
Our Ideal Program
To address some of the gaps outlined above, we drafted a proposed social science survey methodology curriculum. We based our program on the general format of existing programs, adding courses where we believe there are gaps in social science methodology training. It assumes 30 credits, not including a thesis or capstone project. We propose this as a terminal master’s degree; thus, the courses were chosen to represent the aspects of survey design that social science survey methodologists tend to face in practice. A doctoral preparation program may look different.
- Data Collection Methods (3 credits): Covers research on various methods of collecting data via mainstream and experimental survey methods; can be organized around the Total Survey Error framework or around specific modes or phases of data collection; topics include mode effects, interviewer effects, and applications of emerging technologies to data collection
- Questionnaire Design (3 credits): Includes questionnaire development, pretesting and pilot testing (e.g., focus groups, cognitive interviews, behavior coding, split-ballot randomized experiments of question wording) and other methods for evaluating and reducing measurement error (e.g., randomized response technique)
- Cognitive and Social Aspects of Measurement (3 credits): Covers the psychological, sociological, and communication theories and principles that explain survey error and motivate design decisions, including survey protocol design; emphasizes not just how to design a questionnaire or survey, but scientific research that builds our general knowledge of how survey errors arise and how to avoid them
- Statistical Training (6 credits): At least two semesters of general graduate-level statistics coursework should make social science survey methodologists competitive, assuming they have moderate to strong undergraduate statistics preparation. The first semester covers intermediate to advanced statistical techniques, such as ANOVA, basic categorical data analysis, power analysis, linear and logistic regression, multilevel models, survival analysis, factor analysis, and other techniques commonly used in research practice. The second semester covers sampling, analysis of complex sample survey data, or a similar survey-specific statistical topic (see Kolenikov, this issue, on training survey statisticians).
- Qualitative Methods (3 credits): This course emphasizes practical qualitative inquiry and methods but goes beyond their use as a question/questionnaire pre-testing tool. Example topics could include ethnomethodology and conversation analysis, constant comparative method, participant observation, ethnology, ethnography, and the like.
- Usability (UX), human-computer interaction, and web surveys (3 credits): With the proliferation of web surveys and other electronic data collection tools, social science survey methodology students should be prepared to work on the design of computerized survey interfaces. Yet formal coursework on these topics is rare in survey methodology programs. The usability (i.e., user experience or “UX”) framework accommodates this need well and can also inform the design of paper survey forms, data entry interfaces (e.g., computer-assisted telephone interviewing and computer-assisted personal interviewing screens), and data dissemination websites (Bergstrom and Schall 2014; Couper 2008).
- Disciplinary specialization courses (3 credits): To keep survey methodology interdisciplinary and prepare employable students, at least one course should be taken outside the department in a topic-specialization field. Introductory, theory, or methodology courses in departments like health policy and management, epidemiology, marketing, social psychology, sociology, and education (to name only a few) are often good choices.
- Application and practicum courses (6 credits): These courses allow students to practice skills related to question and questionnaire development and data collection. Students may design and test questionnaires, or carry out some manageable part of the survey lifecycle (e.g., household rostering or interviewing). Such courses can also be good mechanisms for student-led research papers on the data that were collected. As such, they are “bridge courses” between the social science and statistical sides of the field. A well-rounded program will generally have more than one of these courses (e.g., one on design and collection, and one on analysis). Application courses should include some level of survey management training or exposure to the lifecycle of an active “real-world” survey.
- Summer internship (paid): Full-time, hands-on experience in survey methodology and with survey research techniques can be developed further through full-time summer internships. For MS students, this can be the summer between their first and second years. Doctoral internships, if used, should be focused on research leading to publication.
- Capstone project/thesis: For a terminal survey methodology master’s degree, we think that employability should be a priority over academic contribution. With that goal, the capstone project should be flexible. Students planning to pursue doctoral studies could complete a traditional thesis or quantitative paper. Students entering or currently in the workforce could complete a quantitative paper or report directly relevant to the survey they work with day-to-day. Qualitative research projects and substantial literature syntheses should also be considered to the degree that they prepare the student for their next step and provide a concrete product to add to their portfolio. For example, a student could be given a lead role (with support) on a questionnaire testing project during his or her internship in a government agency. The reports developed during that project, if reflecting his or her creative input and work, could be compiled into the capstone project.
The Social Science Survey Methodologist’s Bookshelf
Below are a few general and social science-focused survey methodology texts that we think are essential for social science training and to which we have turned in practice (see similar list in Statistical Science article).
- Overview of survey research/methodology concepts, survey error, and survey practice
- Biemer and Lyberg (2003)
- Blair et al. (2013)
- Dillman et al. (2014)
- Fowler (2014)
- Groves et al. (2009)
- Groves (1989)
- Weisberg (2005)
- Cross-cultural and multicultural survey methods
- http://ccsg.isr.umich.edu/
- Harkness et al. (2002)
- Harkness et al. (2010)
- Tourangeau et al. (2014)
- Psychology of survey response, questionnaire design, and question pretesting
- Miller et al. (2014)
- Presser et al. (2004)
- Schuman and Converse (1971)
- Sudman et al. (1996)
- Tourangeau et al. (2000)
- Willis (2005)
- Interviews and interviewer-respondent interaction
- Bradburn et al. (1979)
- Conrad and Schober (2007)
- Fowler and Mangione (1990)
- Kahn and Cannell (1957)
- Maynard and Schaeffer (2002)
- Web surveys and form design
- Bergstrom and Schall (2014)
- Callegaro et al. (2014)
- Couper (2008)
- Tourangeau et al. (2013)
- Vehovar et al. (2015)
- Paradata and process data
- Kreuter (2013)
Conclusions and Future Directions for Social Science Survey Methodologists
Social science survey methodologists bring a unique perspective to both the science of survey methodology and the practice of survey methods. More interdisciplinary training in social science theories, concepts, and methods will keep social science survey methodology training up-to-date with contemporary social and behavioral science, improving our field as a whole. Without such training, we risk reinventing the proverbial wheel and miss opportunities to increase our understanding of the mechanisms by which survey errors are caused. We should keep our vision wide and train students to search broadly for explanations of survey error.
A broad perspective also keeps our eyes open to societal, technological, and communication changes that influence survey practice. New technologies (e.g., social media and analytics dashboards) and data sources (e.g., big data) will continue to change our understanding of what it means to do survey research. While people will always crave data, those data will not always come in the form of a static survey design with a preplanned sample and explicitly defined and tested survey questions. (Indeed, it is not this way in many other sciences currently.) Knowing how to craft survey questions to avoid error means also understanding possible error sources in data wherever they come from, particularly if human intervention is involved at all (e.g., data entry into an online form). On the practice side, social science survey methodologists will increasingly be called upon to draw on visual design and user experience best practices to design control interfaces for survey operations (i.e., dashboards), CATI and CAPI interfaces, and other “inward-facing” aspects of survey practice. Such efforts can reduce the overall “burden profile” of a survey, which includes the burden to interviewers and staff in addition to the burden to respondents.
As society’s data requirements change, so must survey methodologists’ working environments. As data products move from large, single-mode surveys with predefined samples to record sources made up of smaller components that draw from multimode surveys and other data sources, the departmentalization of survey work will likely change as well. As the sampling phase and questionnaire design phase become less clear-cut, so do the divisions between the sampling and questionnaire design departments. To anticipate this future, we need to foster and be receptive to iterative and adaptive processes in management, analysis, and design interventions (e.g., Groves and Heeringa, 2006). Small, collaborative teams will replace large bureaucratic divisions because they are more capable of dealing with complex and changing design environments. With respect to training social science survey methodologists, this means that those with operations experience and practical statistical knowledge will be sought after in survey organizations of every type. Thus, while we propose expanding the depth of social science theory training, we cannot forget the important roles of hands-on survey experience and applied statistical training in producing well-rounded and highly-employable social science survey methodologists.
Making inroads earlier in students’ training could facilitate the training of methodologists our field needs. In particular, undergraduate or even introductory high school curricula on survey research are almost completely lacking, which seems shortsighted at a time when the field of survey research is constantly evolving and requires an influx of professionals with statistical competency and interdisciplinary knowledge (see Kolenikov, Jans, O’Hare, and Fricker, this issue).
Survey methodology is both an applied and an academic interdisciplinary field at its core. As a relatively young science, there are still exciting gaps for researchers to fill in the frameworks that define the field. While the prototypical coursework available in survey methodology graduate programs provides a strong foundation for researcher training, we identified a few training gaps in the paradigm, including limited experience with the full survey lifecycle, virtually nonexistent multilingual and multicultural survey training, lack of well-developed qualitative research methods curricula, an imbalance of social and statistical science training, and the need for integration of theories and concepts from essential disciplines. Social science survey methodologists’ contributions to all components of the survey lifecycle will be strengthened if we continue improving our training and practice programs going forward.
Disclaimer
Any views expressed are those of the author (Mikelyn Meyers) and not necessarily those of the U.S. Census Bureau. Any views expressed are those of the author (Scott Fricker) and not necessarily those of the U.S. Bureau of Labor Statistics.
http://www.lse.ac.uk/study/graduate/taughtProgrammes2015/MScSocialResearchMethods.aspx
http://www.umb.edu/academics/cla/sociology/graduate_programs/ma
http://www.mmss.northwestern.edu/undergraduate/program-overview/
http://www.essex.ac.uk/coursefinder/course_details.aspx?course=PHD+L31072
http://surveyresearch.weebly.com/academic-centres-specialising-in-survey-research.html
Over 8 percent of the U.S. population ages five and older reports speaking a language other than English at home, and also reports speaking English at a level that is something less than “very well” (“well,” “not well,” or “not at all”) (Ryan 2013).
http://www.influenceatwork.com/; https://webapp4.asu.edu/directory/person/10913
http://dornsife.usc.edu/cf/psyc/psyc_faculty_display.cfm?person_id=1048572
http://avillage.web.virginia.edu/Psych/Faculty/Profile/Michael-Kubovy
http://www.ssc.wisc.edu/soc/faculty/pages/DWM_page/DWM_index3.htm