Cross-cultural and multilingual research explores the cultural and linguistic factors that shape human experiences. Traditionally, “cross-cultural” has been synonymous with “cross-national”: data are collected across multiple countries, often in multiple languages, and the results compare two or more populations.
In this 2025 Survey Practice special issue, however, many articles analyze data collected within a single country or focus on advancing our understanding of the experiences of a specific cultural or ethnic group. This prompted the editorial team to reflect on a critical question: Do these studies contribute to the field of cross-cultural and multilingual research?
Embracing an inclusive definition
Cultural and linguistic variations often exist within a single country, such as the United States. In fact, many countries are home to multiple nations and languages, which makes cross-cultural and multilingual dynamics just as relevant within a single geopolitical boundary as across several. Recognizing this, we believe broadening the definition of “cross-cultural and multilingual” is overdue. In practice, researchers and practitioners have already been operating under this broader framework. For example:
- Survey Practice’s 2017 special issue on cross-cultural and multilingual research included articles offering practical recommendations to improve recruitment and the web survey experience for Spanish-dominant U.S. Latinos/as.
- The 72nd conference program of the American Association for Public Opinion Research (AAPOR) introduced a track soliciting contributions in Multinational, Multiregional, and Multicultural (3MC) research. The program committee accepted papers and posters examining within-country cultural and linguistic influences in addition to cross-national research, a practice that continues to this day.
- In 2021, the World Association for Public Opinion Research (WAPOR) and AAPOR published a joint task force report on quality in comparative surveys, attributing the rise of 3MC surveys to “an increased interest in understanding the consequences of within-country cultural and ethnic heterogeneity,” among other factors.
Ethnic and language diversity will continue to evolve due to globalization, migration, and the connectivity fostered by technological innovation. Embracing a broader definition of cross-cultural and multilingual research acknowledges this evolution and gives credence to the work of surveying diverse populations within and across countries. It also ensures that researchers and practitioners remain accountable for applying appropriate methodologies and tools.
16 articles, 64 written reviews, 6 months
With this inclusive vision in mind, we curated a total of 16 articles[1] for the 2025 special issue on cross-cultural and multilingual research. The collection features 15 research articles: 14 full articles ranging from 2,000 to 3,500 words and one in-brief note. They cover methods and practices to improve the design and implementation of studies involving various populations. We discuss the articles in four thematic areas:
- Low- and middle-income countries (four articles, including an in-brief note)
- U.S. Latinos/as (four articles)
- Survey translation practices (three articles)
- Advancements in methods and evaluations (four articles)
To complement these research articles, we also included an “Interview the Expert” piece, which provides real-life, personal examples from leaders in our field.
The 42 authors and coauthors represent diverse sectors, including academia (four articles), federal and city government agencies (four articles), and large and small businesses in the industry (five and three articles, respectively).[2]
Each article underwent up to four rounds of feedback from the special issue editor, the journal’s seven associate editors, and the editor-in-chief, who also provided oversight for all articles. Between August 2024 and January 2025, articles were revised two to four times, with review turnarounds averaging 11 days. In total, 64 written reviews and 48 decision letters were issued.
This progression was made possible not only by the authors’ partnership and dedication but also by the efforts of a highly collaborative and well-organized editorial team.
Low- and middle-income countries (LMIC)
During the peak of the COVID-19 pandemic, data collection around the world shifted to online and remote modes. In LMICs, mobile data collection has become a mainstay, but unequal rights and access to cell phones among household members create barriers. For example, interviewers may be unable to reach female respondents directly if a husband or male householder screens the call. This dynamic reflects one of the oldest access control mechanisms in respondent recruitment, known as “gatekeeping.”
Using data from the India Human Development Survey (IHDS), a large-scale panel survey, Sharan Sharma and his coauthors observed informal gatekeeping practices for married-out female migrants in over half of the Indian households dialed. Similarly, Mahmoud Elkasabi analyzed data from eight countries using the Demographic and Health Surveys (DHS) and found that respondents reached via cell phones owned by another household member exhibited certain characteristics. Together, these articles provide complementary insights into the implications of nonresponse and coverage bias in mobile data collection in LMICs and strategies to mitigate them.
A companion piece on mobile data collection in LMICs is Charles Lau’s analysis of eight national computer-assisted telephone interviewing (CATI) surveys conducted in Kenya and Nigeria. He and his coauthors examined the extent to which respondents round their ages to the nearest 5 or 10 years and offered practical suggestions to defuse this threat to data quality.
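To make age heaping concrete, the minimal sketch below (ours, not drawn from the article) computes Whipple’s index, a standard demographic measure of heaping on ages ending in 0 or 5; the reported ages are fabricated for illustration. Values near 100 indicate little heaping, while values approaching 500 indicate that nearly all reported ages are rounded.

```python
def whipples_index(ages):
    """Whipple's index of age heaping on terminal digits 0 and 5.

    Uses the standard 23-62 age window. Returns ~100 when terminal
    digits are evenly reported and up to 500 when every age ends
    in 0 or 5.
    """
    window = [a for a in ages if 23 <= a <= 62]
    if not window:
        raise ValueError("no ages in the 23-62 window")
    heaped = sum(1 for a in window if a % 5 == 0)
    # Absent heaping, one in five ages in the window ends in 0 or 5.
    return 100 * 5 * heaped / len(window)

# Fabricated reported ages showing heavy rounding to 30, 40, and 50
reported = [30, 30, 35, 40, 40, 40, 50, 50, 27, 33, 46, 58, 60, 25, 45]
print(f"Whipple's index: {whipples_index(reported):.0f}")  # ~367, heavy heaping
```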
Response errors can also arise from a lack of familiarity with survey conventions, such as not knowing that respondents are expected to select an answer from the available response categories. Meredith Massey explored this issue in a cognitive interview study with low-income participants in Rio de Janeiro (Brazil). She documented the practical effects of low “survey literacy” and suggested five strategies to mitigate these effects. As an additional strategy, we recommend that researchers conduct an interactive pre-interview practice with participants to familiarize them with the interview “tasks,” such as probing (Park, Goerman, and Sha 2017; Sha and Pan 2013). Additionally, since knowledge of survey conventions should not be universally assumed, researchers may want to consider supplementing cognitive debriefing with techniques such as hypothetical vignettes (Aizpurua 2020; Meyers, Trejo, and Lykke 2017; Sha 2016).
U.S. Latinos/as
According to the United States Census Bureau, 66.3 million people (as of July 2024) identify as having Hispanic origins, making U.S. Latinos/as the nation’s largest ethnic minority group, followed by Black Americans. While second- and third-generation Latinos/as may speak English as their dominant or only language, Spanish remains the most frequently spoken non-English language in the United States.
The conventional wisdom that translation encourages participation among Spanish speakers does not hold true in all situations. For self-administered surveys, translation can even have a “backfire” effect on response rates, particularly if Spanish-language questionnaires are not targeted to those most likely to use them. Kristen Olson, Minshuai Ding, and Amanda Ganshert demonstrated this effect in an experiment in rural Nebraska in which they added Spanish-language materials to a mixed-mode self-administered (web and mail) survey.
Accurately identifying Latino households and predicting their likelihood of speaking Spanish is key to providing more effective language assistance. Using surnames as a predictive measure has long been the prevailing method, but it lacks precision unless combined with statistical modeling. Martha McRoy and Juanita Vivas Bastidas compared the accuracy of three techniques for predicting Spanish-speaking households. These techniques differ in cost, sensitivity, and specificity, with some producing higher rates of false positives (rather than false negatives), a trade-off that may better serve specific study purposes. Their experimental results suggest that data collectors now have an expanded toolkit for identifying Spanish-speaking households in the U.S.
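The trade-off between false positives and false negatives can be made concrete with two standard screening metrics, as in the sketch below. The counts are invented purely for illustration: a hypothetical technique tuned for high sensitivity misses few Spanish-speaking households but flags more households that do not need Spanish materials, which may be acceptable when missing an eligible household is the costlier error.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and false-positive rate for a
    household-language screening technique."""
    sensitivity = tp / (tp + fn)  # Spanish-speaking households correctly flagged
    specificity = tn / (tn + fp)  # other households correctly passed over
    fpr = fp / (fp + tn)          # other households flagged anyway
    return sensitivity, specificity, fpr

# Hypothetical counts for two techniques applied to the same 1,000-household frame
techniques = {
    "surname list only":       dict(tp=70, fp=15, fn=30, tn=885),
    "surname + model (tuned)": dict(tp=95, fp=60, fn=5, tn=840),
}
for name, counts in techniques.items():
    sens, spec, fpr = screening_metrics(**counts)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}, FPR={fpr:.2f}")
```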
These identification techniques raise an important question: If we determine that most of our respondents are bilingual, what benefits could still be gained from providing a Spanish-language survey or hiring bilingual interviewers? Angel Saavedra Cisneros examined Latino “language switchers” (i.e., respondents who began the survey in English but switched to Spanish, or vice versa) in a telephone survey about politics. He suggested that, for some Latinos, answering a survey in Spanish is a symbolic choice to express their identity. This finding reinforces our view that language in survey research is not only a means of communication with respondents but also an enabling tool that gives them agency in the research process. Language of survey administration also has the potential to influence the responses (Peytcheva 2020; Zavala-Rojas 2018).
In the fourth article about U.S. Latinos/as, Ilana M. Ventura showed that first- and second-generation Latino/a immigrants often hold transnational assets in their home countries. She argued that U.S.-based surveys should include questions about transnational assets to capture a more accurate picture of immigrant finances. We believe that adopting this approach could also reveal important social trends, as income is often correlated with socioeconomic factors, health outcomes, and quality of life indicators.
Survey translation practices
Translating a survey instrument, in particular a questionnaire, is more complex than translating everyday messages such as emails (Behr and Sha 2018). First, survey topics often cover a wide range of domains, from demographics to health to income, requiring understanding across diverse subject areas. Second, surveys rely on standardization in a structured question-and-answer format, which differs from how people naturally communicate. In addition, translation is not simply a mechanical process of replacing words from one language with those in another language. Even when the correct words and grammar are used, translations may still sound unnatural if they fail to consider variations in language use that are influenced by context, the language users, and cultural norms of communication.[3]
Patricia Goerman, Alisú Schoua-Glusberg, and Ariana Muñoz Maurás exemplified this complexity in their article, where they analyzed five socio-demographic questions commonly found in U.S. surveys that become “cultural mismatches” when translated. These mismatches arise because the users of the translations do not necessarily share the same cultural frame of reference or may be unfamiliar with classification systems used in the U.S., such as race and ethnicity categories. Following their lead, we call on survey sponsors and questionnaire designers to recognize these cultural mismatches and engage survey translation experts to develop and implement appropriate solutions.
One potential solution is “advance translation,” which identifies potential translation problems while the source questionnaire is still being developed. This proactive method enables problematic questions to be refined before the questionnaire is finalized, improving final translation quality (Dorer 2023). Caitlin R. Waickman and Elyzabeth Gaumer provided a use case of this ex-ante harmonization approach on a seven-language housing survey conducted in New York City, the largest metropolis in the United States. They also advocated for “language justice,” a concept that could potentially motivate research leaders to view translation not only as a tool for reducing language barriers but as a means of promoting equity.
Cost, resource, and time constraints can sometimes deter survey projects from adopting best practices in translation or even from providing translations at all. With a project environment in mind, Liana Manuel, Luis Contreras, Lisa Lee, Barbara Fernandez, Jennifer Vanicek, Meredith Gonsahn, Eduardo Salinas, and Eileen Graf adapted the “gold standard” committee translation approach into a more streamlined process that still met the quality criteria they had established. Their article provides practical examples and actionable recommendations. It also serves as a reminder to research leaders and organizations that it is possible to survey multilingual populations when appropriate translation adaptations are carefully planned and valued.
Advancements in methods and evaluations
While Respondent-Driven Sampling (RDS) is not a new method, our collective understanding of its implementation for surveying vulnerable subpopulations among ethnic minorities remains limited. Through a scoping review of 39 studies conducted primarily in North America and Europe, Mariel Leonard identified RDS best practices and distilled them into six recommendations. This resource provides valuable guidance for researchers and practitioners seeking to maximize the success of RDS implementation and avoid common pitfalls.
Building on their application of machine learning (ML) to survey question assessment (Yan, Sun, and Battalahalli 2024), Ting Yan, Hanyu Sun, and Anil Battalahalli demonstrated how ML can also be used to process Spanish-language Computer-Assisted Recorded Interviewing (CARI) data to flag problematic items. Their approach highlights the potential of ML to improve automation and time efficiency and to reduce costs, and it would conceivably be even more powerful when combined with other question evaluation methods (Maitland and Presser 2016). We commend the authors for their pioneering contributions to the field of question evaluation and testing and encourage Survey Practice readers to replicate and adapt the ML pipeline for their own projects.
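For readers curious what an item-flagging step might look like, here is a deliberately simplified sketch, not the authors’ method: it assumes the CARI audio has already been transcribed and labeled, and it uses TF-IDF features with logistic regression as a transparent baseline. A production pipeline would require Spanish-capable speech-to-text and far more training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, fabricated training set of transcribed exchanges (Spanish),
# labeled 1 if question administration was problematic, else 0.
transcripts = [
    "no entiendo la pregunta, ¿puede repetirla?",
    "¿qué quiere decir con ingreso del hogar?",
    "sí, tengo cuarenta años",
    "mi respuesta es la opción dos",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a simple, inspectable baseline.
flagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
flagger.fit(transcripts, labels)

# Flag new exchanges whose predicted probability of being problematic is high.
new = ["¿puede explicar qué significa esa palabra?"]
prob = flagger.predict_proba(new)[0][1]
print(f"P(problematic) = {prob:.2f} -> {'flag for review' if prob > 0.5 else 'ok'}")
```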
Ed Rincón, Dexter Purnell, and Mandy Sha developed the Multicultural Insights Test (MIT), a diagnostic tool designed to assess an individual’s basic knowledge about U.S. Latinos/as, Blacks, and Asian Americans. Compared to self-reported measures of cultural sensitivity, language background, or racial and ethnic identity, tools like the MIT provide a more objective and standardized approach for evaluating cultural awareness and knowledge. For example, the MIT could be helpful in the hiring, training, and evaluation of data collectors and study managers. However, further development and testing are warranted before the MIT can be scaled for broader application.
American Sign Language (ASL) is a natural language used by people who are deaf or hard of hearing. Marcus Berger, Betsarí Otero Class, and Angie O’Brien examined how ASL users made sense of an English proficiency question about whether a household member “speaks English” (modeled after the wording from the American Community Survey). To provide cross-linguistic comparisons, they also analyzed responses from Spanish and English speakers. Their findings revealed that interpretations varied across groups, and the article provided an interesting discussion of what speaking English means for people’s self-perceived language proficiency, which can encompass listening, reading, and writing skills in addition to speaking.
Expanding the reach of scientific communication
A 2024 Pew survey on public trust in scientists found that while 89% of American adults view research scientists as “intelligent,” fewer than half consider them “good communicators.” Notably, this reflects a nine-percentage-point decline in the public’s positive perceptions of scientists’ communication skills since Pew’s 2019 survey on the same topic, raising important questions: What can research scientists do to better communicate research findings to the public? And what are the implications of scientific communication for facilitating cross-cultural understanding?
Known as superconnectors in our field, Frauke Kreuter and Mandy Sha shared real-life examples in an Interview the Expert article for this special issue. Despite having distinct careers and primary audiences, both emphasized that researchers and research leaders can and should make their messages relevant and easy to understand for both expert and non-expert audiences. Their conversation featured quotable moments from Frauke, personal stories from Mandy, and three practical suggestions for improving communication.
As readers consider these experiential perspectives, we want to point out that not all science communication benefits the public. For example, researchers and research leaders can inadvertently become a source of misinformation through “poor science communication,” according to Understanding and Addressing Misinformation About Science, a 2024 consensus study report from the National Academies of Sciences, Engineering, and Medicine (NASEM). To address this, NASEM recommends proactively engaging professional communicators, developing communication strategies, and accepting training and support. This echoes Frauke’s point about aligning with reputable professional associations like AAPOR, which have an established communications infrastructure.
Towards greater cross-cultural understanding
A limitation of this special issue is the underrepresentation of research from some global regions. We recognize that public opinion research and methodological adaptations remain unevenly developed across cultural and linguistic contexts, and we look forward to seeing more publications that interrogate ethnocentric assumptions and advance the field as a whole.[4]
The ultimate goal of cross-cultural research is to deepen our understanding of shared and unique experiences. This 2025 open-access special issue highlights Survey Practice’s ongoing commitment to publishing research conducted across diverse regions and languages and marks the second special issue on this topic in the last decade. We encourage other journals in the field to strengthen their efforts to center cross-cultural and multilingual research in their publications.
Individual researchers and research leaders can expand and improve their scientific communication to help foster cross-cultural understanding. We join Frauke and Mandy in their call to action for Survey Practice readers and authors: “Share your gift of knowledge and have fun doing it!”
Lead Author Contact Information
Mandy Sha
mandysha.com/special-issue
LinkedIn: @mandy-sha