Adapting and Improving Methods to Manage Cognitive Pretesting of Multilingual Survey Instruments

M. Mandy Sha RTI International

Yuling Pan U.S. Census Bureau


This paper outlines steps in adapting and improving methods to successfully manage large-scale cognitive pretesting of multilingual survey instruments. The study is based on a U.S. Census Bureau project that pretested the Chinese and Korean translations of the American Community Survey Language Assistance Guide. By following a systematic process guided by a sociolinguistic research framework, we were able to adapt the methodology in the current literature. Furthermore, we added and implemented the following steps in the process: conducting a systematic translation review prior to cognitive testing, using a systematic method to identify and recruit non-English speakers, adopting a consistent method to report results, and implementing a project management approach to control and monitor the progress of interviewing. These steps proved to be advantageous and effective in managing a large-scale multilingual pretesting project.


Managing cognitive pretesting involves many operational considerations. Willis (2005) discussed 12 logistical issues, including staffing, number of interviews, length of time, and various planning activities. He also covered respondent selection and common recruitment methods. When these considerations are applied to multilingual cognitive pretesting, researchers face the additional challenges of staffing interviewers who speak both the source and the target language and of recruiting research participants who mirror the characteristics of the potential users of the translations being pretested. Recent literature on non-English language cognitive testing (e.g., Forsyth et al. 2007; Goerman and Caspar 2010; Pan et al. 2008, 2009, 2010; Sha et al. 2010) has explored best practices and methods to manage cognitive testing of multilingual translations. The suggested methods in the literature can be summarized in five steps:

  1. Using and training language experts to conduct cognitive interviews
  2. Recruiting non-English speakers who are native speakers of the target language, sometimes coupled with English interviews to anchor findings
  3. Developing protocol in English and translating it into target languages
  4. Conducting interviews (in target languages) for at least two rounds and writing interview summaries (in English with specific quotes in target languages)
  5. Analyzing and reporting results

While prior studies identified these main steps, they were unable to follow a systematic approach to managing them because translation pretesting was still at an early stage of development. Study designs and analyses were not guided by an established theoretical framework, and most recommendations were based on lessons learned through trial and error. What is more, past efforts were usually limited in the number of interviews conducted per target language, and the source language materials concentrated on a single topic or only a few topics.

Our efforts to apply a systematic approach on a large-scale study offer a unique opportunity to further research in managing multilingual cognitive testing. This study is based on a Census Bureau project that pretested the American Community Survey (ACS) language assistance guides (LAGs) in Chinese and Korean. The cognitive testing project was large-scale by several standards. First, it included a high number of recruits and cognitive interviews. Table 1 shows the number of recruits and cognitive interviews conducted. Second, a team of eight highly qualified interviewers carried out the work over a period of 2 years. Third, the ACS survey questions and topics under testing were complex and numerous. Eligible Asian respondents spoke little or no English and had to meet a set of criteria of interest to the research in order to be selected for interviewing, which presented an additional challenge to the project. (More details about the methods can be found in Sha et al. 2012.)

Table 1 Number of recruits and interviewees.

Language   Screened   Found to be non-     Met recruitment criteria
                      English speakers     and interviewed
Chinese    404        351                  129¹
Korean     680        423                  139
Total      1,084      774                  258

¹Ninety-one interviews were completed in Mandarin, and 38 were in Cantonese.

Our key efforts are described in the following sections.

Maximizing Benefits from Prior Studies in Staffing and Training Interviewers

Multilingual pretesting requires language experts who have expertise in questionnaire design and are experienced with cognitive testing. This combination of skills, however, is rare and hard to find. For this reason, survey researchers usually train and maintain a small cadre of qualified candidates in multiple languages through multilingual pretesting projects. For this study, we were able to put together a core team of eight experienced interviewers trained in our previous projects. These interviewers were bilingual and bicultural, having worked or studied in both their native and American cultures. Such individuals can lead the panel of experts and avoid some of the constraints (e.g., trainings must be conducted in English rather than the target language) experienced by previous studies in which the lead researcher's native language was English. We recommend this approach whenever possible. Our experience nevertheless suggests that a lead researcher who is not proficient in the target language can succeed (for a few months, the Chinese language lead oversaw the Korean language expert panel because its lead was unavailable): knowing the intent of the questions being tested and having language sensitivity can help bridge the gap with the help of language experts.

When training interviewers, we suggest using the round-robin technique for practice interviews during the training. This style keeps team communications intact and allows trainees to share experiences. In addition, misunderstandings do not go unnoticed by the team lead and can be resolved immediately. However, the round-robin training can be optional when additional trainings are conducted during iterative testing (i.e., the second round of interviewing); we recommend using that time instead to discuss the changes since the previous round.
We also recommend providing examples of both good and poor interview summary reports to help the interviewers learn the level of details desired for the study.

Identifying and Recruiting Non-English Speakers Using Systematic Recruitment Methods

We used a combination of recruiting methods reported in the prior literature for recruiting Asians: leveraging community organizations, posting flyers at a variety of public places frequented by potential respondents, and issuing print and online advertisements in non-English language media outlets. Because of the large number of respondents that needed to be recruited over 2 years and the narrow recruitment criteria, we developed and implemented a systematic process rather than relying on the “trial and error” method of some prior studies. We collected metadata about recruiters’ time spent, the locations where respondents were recruited, and the types of methods used. Compiled with the types of respondents recruited and their eligibility, these data guided our effort to tailor recruiting to different kinds of respondents. For example, we used advertisements in local, ethnic in-language newspapers when we needed to reach a larger pool of potential respondents quickly. [More details about the efficiency of the recruitment methods to recruit Asian research participants can be found in Liu et al. (2013).] We recommend that future large-scale multilingual cognitive testing studies give collecting and managing recruitment data the same level of attention as the interview data.
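The kind of recruitment metadata described above can be summarized very simply once it is logged consistently. The sketch below is only an illustration of the idea, not the study's actual data schema: the field names, recruitment methods, and counts are invented, and it computes one plausible yield measure (eligible recruits per recruiter hour) by method.

```python
from collections import defaultdict

# Hypothetical recruitment log: one record per recruiting effort, noting
# the method used, the location, recruiter hours spent, and how many of
# the people screened turned out to be eligible. All values are invented.
records = [
    {"method": "newspaper_ad", "location": "ethnic press", "hours": 2.0,
     "screened": 40, "eligible": 9},
    {"method": "community_org", "location": "church", "hours": 6.5,
     "screened": 25, "eligible": 11},
    {"method": "flyer", "location": "grocery store", "hours": 4.0,
     "screened": 18, "eligible": 3},
]

def yield_per_hour(records):
    """Summarize eligible recruits per recruiter hour for each method."""
    totals = defaultdict(lambda: {"hours": 0.0, "eligible": 0})
    for r in records:
        totals[r["method"]]["hours"] += r["hours"]
        totals[r["method"]]["eligible"] += r["eligible"]
    return {m: t["eligible"] / t["hours"] for m, t in totals.items()}

print(yield_per_hour(records))
```

A summary like this is what would let a team notice, for instance, that newspaper advertisements reach many people quickly while community organizations yield a higher eligibility rate per contact.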

Tailoring Interview Protocol Guide to Translation Pretesting and Conducting Iterative Testing

In regard to the interview protocol, we used the concurrent probing technique in the first round, which was intended to detect translation problems. Concurrent probing allowed respondents to report their observations while they were answering the survey, and the level of interruption to the respondent’s question-answer process seemed acceptable to us. For the second round, in which recommended changes to the translation were tested, we found retrospective probing to be better suited because it allowed respondents to finish filling out the LAG in one sitting before doing the cognitive tasks.

In addition, we adapted the interview practice question about counting windows described in Willis (2005) and Goerman (2006) and had respondents answer “How many windows are there in the house or apartment where you live?” prior to the beginning of the cognitive interview. However, we quickly realized that this was not an ideal practice question for this type of translation pretesting. The “window” practice question was originally designed to induce the respondent to think aloud, while most of the cognitive tasks in our interview involved meaning-oriented probes designed to evaluate whether respondents comprehended the translation. Other types of probes that could improve questionnaire design were used less frequently, mainly because the source questionnaire could not be changed, a common constraint faced by many studies.

In subsequent rounds, we used variations of the window question for the practice session. Although these attempts were not designed as experiments, we determined that the practice session must be kept simple, natural, and tailored to translation pretesting. For example, the “window” practice question was problematic for Korean language interviews because “window” was understood phonetically to mean Microsoft Windows. This led to unnecessary confusion during the practice session and achieved the opposite of what the practice was intended to do. In general, respondents were more engaged when the practice question included an easy-to-spot translation issue that allowed them to readily point out the obvious error. For example, the Chinese language practice question used a grammatically awkward translation for “how many” that was quickly identified by many respondents. This allowed the interviewer to transition naturally to administering some of the meaning-oriented probes and engaged the respondents in a simulation of the larger interview to come.
We are not aware of prior research that demonstrates the utility of practice sessions for non-English cognitive interviews. This is clearly a topic for future research.

Using Consistent Method to Report Results

We adopted a two-step approach to identify translation issues. First, prior to pretesting, we implemented a translation review process to fix any language-specific errors introduced by translators who may not be familiar with the intent and implications of the questions. Doing so allowed us to correct translation errors before the cognitive interviews and to focus our attention on linguistic and sociocultural issues during the cognitive testing. While a crucial step in the review is to engage language experts who are experienced with questionnaire design, we found that following a systematic review process provides documentation that allows the results to be quantified and communicated to researchers who do not know the language. For example, we used a stepwise translation appraisal process that examined pre-identified categories of translation issues. A preliminary comparison of the issues found using a systematic process versus the traditional method can be found in Sha et al. (2010).

In the second step, we used the cognitive interviews to identify a set of translation issues that may only be observed through testing with non-English speakers. Using a coding scheme guided by sociolinguistic approaches to language and culture (Pan and Fond 2011), we were able to evaluate and clearly communicate the results by classifying translation issues in terms of linguistic rules, cultural norms, and social practices. This allowed us to compare across the Chinese and Korean language interview results, especially when they did not appear to be comparable.
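Once issues are coded into the three sociolinguistic categories named above, cross-language comparison reduces to a category-by-language count table. The sketch below shows that mechanic only; the coded entries are invented for illustration and do not reproduce the study's findings.

```python
from collections import Counter

# Hypothetical issue log: each entry pairs the interview language with one
# of the three sociolinguistic categories described in the text.
coded_issues = [
    ("Chinese", "linguistic rules"),
    ("Chinese", "cultural norms"),
    ("Korean", "linguistic rules"),
    ("Korean", "linguistic rules"),
    ("Korean", "social practices"),
]

def compare_languages(coded_issues, categories):
    """Build a category-by-language count table for cross-language comparison."""
    counts = Counter(coded_issues)  # missing pairs count as zero
    languages = sorted({lang for lang, _ in coded_issues})
    return {cat: {lang: counts[(lang, cat)] for lang in languages}
            for cat in categories}

table = compare_languages(
    coded_issues, ["linguistic rules", "cultural norms", "social practices"]
)
print(table)
```

Because every issue is forced into a shared category scheme, the Chinese and Korean columns stay comparable even when the underlying language problems look nothing alike.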

Implementing a Project Management Approach

All research studies must balance the constraints of schedule, budget, and scope, even when aided with experts and expertise. For this study, the RTI project manager used a project management approach to control and monitor the progress of interviewing, as shown in Figure 1.

Figure 1 Interview lifecycle.
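A lifecycle-based monitoring approach like the one in Figure 1 can be thought of as tracking each interview case through a fixed sequence of stages and counting cases at each stage. The figure's actual stages are not reproduced here; the stage names in this sketch are assumptions chosen only to illustrate the monitoring idea.

```python
# Assumed stage names for illustration; the actual lifecycle in Figure 1
# may differ.
STAGES = ["recruited", "scheduled", "conducted", "summary written", "reviewed"]

class InterviewTracker:
    """Minimal sketch of monitoring interview progress by lifecycle stage."""

    def __init__(self):
        self.cases = {}  # case id -> index into STAGES

    def add_case(self, case_id):
        """Register a newly recruited case at the first stage."""
        self.cases[case_id] = 0

    def advance(self, case_id):
        """Move a case to the next stage, never past the final one."""
        self.cases[case_id] = min(self.cases[case_id] + 1, len(STAGES) - 1)

    def progress_report(self):
        """Count cases at each stage, for checking progress against schedule."""
        report = {stage: 0 for stage in STAGES}
        for idx in self.cases.values():
            report[STAGES[idx]] += 1
        return report

tracker = InterviewTracker()
for case in ["C001", "C002", "K001"]:  # hypothetical case ids
    tracker.add_case(case)
tracker.advance("C001")  # C001 scheduled
tracker.advance("C001")  # C001 conducted
print(tracker.progress_report())
```

A report like this makes it easy to spot bottlenecks, for example a pile-up of conducted interviews whose summaries have not yet been written.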


Despite the large number of interviews and interviewers, the multiyear schedule, and the complexity of the survey instrument being tested, we completed the study on time, within budget, and met all the requirements set forth by the Census Bureau. It is important to note that we made continuous improvements to the methods based on regular debriefings among the language experts, results from the interviews, and the testing priorities of the Census Bureau. We recommend that future multilingual cognitive testing projects adopt a similar project management model.

In summary, this large-scale translation pretesting project successfully implemented the current methods in the literature, but we had to make adaptations to increase their “fit for use”. We also identified several areas for improvement. Our key efforts were:

  • Maximizing benefits from prior studies in staffing and training interviewers
  • Identifying and recruiting non-English speakers using systematic recruitment methods
  • Tailoring interview protocol guide to translation pretesting and conducting iterative testing
  • Using consistent method to report results
  • Implementing a project management approach

While a systematic approach leads to a high-quality pretesting operation, some respondent difficulties simply cannot be “fixed” within the parameters of the translation and must instead be addressed in the source language questionnaire. Future studies should examine how to more effectively manage issues related to a fixed source questionnaire during the multilingual pretesting process.


The authors thank the American Community Survey Office of the U.S. Census Bureau for advising this study: Barbara Lazirko, Todd Hugues, Herman Alvarado, Dameka Reese, and Debbie Klein. We also thank RTI survey methodologist Hyunjoo Park; this project would not have been successful without her. In addition, we acknowledge RTI’s panel of language experts and cognitive interviewers for their contribution to the study: L. Liu, J. Son, Q. Guo, G. Chan, S. Kim, Y. Harm, M. Yuan, G. Liu, H. Park, and former Census Bureau analyst V. Wake.

Disclaimer: This paper is released to inform interested parties of research and to encourage discussion of work in progress. Any views expressed on (statistical, methodological, technical, or operational) issues are those of the authors and not necessarily those of the U.S. Census Bureau.


Forsyth, B.H., M.S. Kudela, K. Levin, D. Lawrence and G.B. Willis. 2007. Methods for translating an English-language survey questionnaire on tobacco use into Mandarin, Cantonese, Korean, and Vietnamese. Field Methods 19: 264–283.

Goerman, P. 2006. Adapting cognitive interviews for use in pretesting Spanish language instruments. Statistical Research Division, U.S. Census Bureau, Washington, DC.

Goerman, P.L. and R.A. Caspar. 2010. Managing the cognitive pretesting of multilingual survey instruments: a case study of pretesting of the U.S. Census Bureau Bilingual Spanish/English Questionnaire. In: (J.A. Harkness, M. Braun, B. Edwards, T.P. Johnson, L. Lyberg, P. Ph. Mohler, B.-E. Pennell and T.W. Smith, eds.) Survey methods in multinational, multiregional, and multicultural contexts. Wiley and Sons, Inc., Hoboken, NJ.

Liu, L., M. Sha and H. Park. 2013. Exploring the efficiency and utility of methods to recruit non-English speaking qualitative research participants. Survey Practice 6(3). Available at

Pan, Y. and M. Fond. 2011. Evaluating multilingual questionnaires: a sociolinguistic perspective. Research and Methodology Directorate, Center for Survey Measurement Study Series (Survey Methodology #2012-04). U.S. Census Bureau. Available at

Pan, Y., A. Landreth, M. Hinsdale, H. Park and A. Schoua-Glusberg. 2008. Methodology for cognitive testing of translations in multiple languages. Statistical Research Division Research Report Series (Survey Methodology #2008-02). U.S. Census Bureau. Available at

Pan, Y., M. Sha, H. Park and A. Schoua-Glusberg. 2009. 2010 Census Language Program: pretesting of Census 2010 questionnaire in five languages. Statistical Research Division Research Report Series (Survey Methodology #2009-01). U.S. Census Bureau. Available at

Pan, Y., A. Landreth, M. Hinsdale, H. Park and A. Schoua-Glusberg. 2010. Cognitive interviewing in non-English languages: a cross-cultural perspective. In: (J.A. Harkness, M. Braun, B. Edwards, T.P. Johnson, L. Lyberg, P. Ph. Mohler, B.-E. Pennell and T.W. Smith, eds.) Survey methods in multinational, multiregional, and multicultural contexts. Wiley and Sons, Inc., Hoboken, NJ.

Sha, M., H. Park and Y. Pan. 2010. Developing a systematic process for translation expert review: the translation appraisal system (TAS-10). Paper presented at the 65th annual conference of the American Association for Public Opinion Research, Chicago, IL.

Sha, M., H. Park and Y. Pan. 2012. Translation review and cognitive testing of American Community Survey (ACS) language assistance guides in multiple languages. Prepared for the American Community Survey Library Collections. Available at

Willis, G.B. 2005. Cognitive interviewing: a tool for improving questionnaire design. Sage, Thousand Oaks, CA.
