K Scott, O Ummer, A E LeFevre.
Abstract
Cognitive interviewing is a qualitative research method for improving the validity of quantitative surveys, which has been underused by academic researchers and monitoring and evaluation teams in global health. Draft survey questions are administered to participants drawn from the same population as the respondent group for the survey itself. The interviewer facilitates a detailed discussion with the participant to assess how the participant interpreted each question and how they formulated their response. Draft survey questions are revised and undergo additional rounds of cognitive interviewing until they achieve high comprehension and cognitive match between the research team's intent and the target population's interpretation. This methodology is particularly important in global health when surveys involve translation or are developed by researchers who differ from the population being surveyed in terms of socio-demographic characteristics, worldview, or other aspects of identity. Without cognitive interviewing, surveys risk measurement error by including questions that respondents find incomprehensible, that respondents are unable to accurately answer, or that respondents interpret in unintended ways. This methodological musing seeks to encourage a wider uptake of cognitive interviewing in global public health research, provide practical guidance on its application, and prompt discussion on its value and practice. To this end, we define cognitive interviewing, discuss how cognitive interviewing compares to other forms of survey tool development and validation, and present practical steps for its application. These steps cover defining the scope of cognitive interviews, selecting and training researchers to conduct cognitive interviews, sampling participants, collecting data, debriefing, analysing the emerging findings, and ultimately generating revised, validated survey questions. 
We close by presenting recommendations to ensure quality in cognitive interviewing.
Keywords: Cognitive interviewing; methodological innovation; qualitative research; survey research; validity
Year: 2021 PMID: 33978729 PMCID: PMC8227989 DOI: 10.1093/heapol/czab048
Source DB: PubMed Journal: Health Policy Plan ISSN: 0268-1080 Impact factor: 3.344
Components of survey tools assessed by cognitive interviewing
| Survey tool component assessed | Explanation | Example |
|---|---|---|
| Word choice | Words used in the survey questions may not be understood by respondents, may have unintended alternative meanings, may be overly vague or specific or may be less natural than alternative words | When translating surveys from English to Hindi, we found that professional translators and Hindi-speaking researchers with experience in rural areas often selected formal Hindi words that were unfamiliar to rural women |
| Syntax | Sentences in survey questions may be too complex or too long, reducing respondent capacity to retain key features of the question | The question ‘During your time in the health facility did the doctors, nurses, or other health care providers introduce themselves to you when they first came to see you?’ contained too many words and clauses. By the time the researcher finished reading it, the respondent lost track of the core question |
| Sequencing | The order of questions may be inappropriate. Placing sensitive or emotionally charged questions too early in the survey can be uncomfortable for respondents and damage respondent–enumerator rapport, reducing the likelihood of a respondent providing a truthful and complete response | A survey on respectful maternity care initially asked post-partum women if they were verbally or physically abused during childbirth within the first few survey questions, to ensure that this crucial question was answered before any respondent fatigue set in. However, cognitive interviews revealed that women were uncomfortable with the question and unlikely to disclose abuse without first establishing rapport through a range of less emotionally intense questions |
| Sensitivity | Questions or response options may be too direct or include topics that are insufficiently contextualized, leading to respondent and enumerator discomfort and eroding rapport | When we asked women about their birth companions, they found it strange and uncomfortable to be probed about whether male family members were with them |
| Response options | Response options may be insufficient to capture the actual range of responses or may be incomprehensible or uncomfortable for respondents | Likert scales with more than three response options were incomprehensible to most rural Indian women we interviewed. |
| Resonance with local worldviews and realities | Questions may ask about domains of importance to the research team but that do not resonate with respondent views or realities | ‘Being involved in decisions about your health care’ is a domain of global importance in respectful maternity care. However, in rural India, the concept of healthcare workers involving the patient in healthcare decisions was unfamiliar and, when explained, considered undesirable |
| Cognitive mismatch | Questions may access respondent cognitive domains that do not map on to the domains intended by the researchers | Women were asked whether they would recommend the place where they gave birth to a friend, as a proxy for quality of care. However, women frequently responded ‘no’ because they did not have friends, did not want to tell other women what to do or did not think they should make recommendations for other people—which was unrelated to their maternity care experiences |
| Memory | Questions or response options may seek to access respondent memories in ways that are too cognitively demanding | Recalling specific post-partum practices from many months ago may not be possible for some respondents |
Approaches to strengthening surveys
| Approach | Description | Comparison to cognitive interviewing | Issue |
|---|---|---|---|
| Expert review | Subject area experts review the survey tool and judge how well each questionnaire item truly reflects the construct it is intended to measure | An important form of validation, but provides no insight into respondent understanding and interpretation of the survey questions | Experts are unable to predict how the survey respondents will interpret the questions |
| Respondent-driven pretesting | A small group of participants with the same characteristics as the target survey population complete the survey. Researchers elicit feedback during the survey or at the end through debriefings. Feedback elicitation can include targeted probes about questions that appeared problematic, in-depth exploration of each question, probing on a random sub-set of questions, or asking participants to rate how clear each question was | Respondent-driven pretesting may overlap with cognitive interviewing (e.g. eliciting in-depth reflection on how participants interpret questions and formulate answers as they proceed through the survey). However, it may also differ from cognitive interviewing by focusing instead on post-survey reflections through ratings or group debriefs | Low methodological clarity: can be the same as cognitive interviewing or quite different |
| Translation and back translation | After translating a survey from the origin to the target language, a different translator ‘blindly’ translates the survey back. Differences are then compared and resolved | Back translation includes the same close attention to language and meaning as cognitive interviewing. However, it does not examine cultural appropriateness or the extent to which questions achieve cognitive match between researchers and respondents | Involves bilingual translators whose world view and experience do not match the target population’s, making them unable to comment on the tool’s appropriateness |
| Pilot testing | Enumerators administer the survey to a small group of participants with the same characteristics as the target survey population, in as close to real-world conditions as possible | Pilot testing explores survey length, modality (e.g. is the computer-assisted personal interviewing (CAPI) programming and tablet hardware functioning properly?) and skip patterns, and catches obvious problems with content and translation. Pilot testing is undertaken by members of the quantitative enumeration team who will conduct the survey at scale and focuses on the practical application of the survey questions; one pilot test goes through the whole survey tool with a sample participant. Cognitive testing is undertaken by specially trained qualitative researchers, with a focus on extensive probing to understand the cognitive process underlying each response; one cognitive interview goes through a curated sub-set of questions from the survey tool with a sample participant. Cognitive interviewing is not optimal for exploring survey length, modality and skip patterns, but involves in-depth exploration of the resonance of content with local worldviews, and close attention to vocabulary, syntax, response options, question style and conceptual nuance | Focuses on the mechanics of implementation, while cognitive testing focuses on the survey questions achieving shared understanding between researcher intent and respondent interpretation |
Figure 1. Situating cognitive interviewing within the larger process of tool development
Figure 2. How much of the survey can you test through cognitive interviews?
Sample of participants for cognitive interviewing in Kilkari
| Topic of survey questions | Round 1 | Round 2 | Round 3 | Total |
|---|---|---|---|---|
| Set 1: Infant and young child feeding (IYCF) | 7 | 8 | 6 | 21 |
| Set 2: Family planning (FP) | 13 | 6 | 5 | 24 |
Approaches to eliciting feedback in cognitive interviews
| Approach | Description | Benefits | Drawbacks |
|---|---|---|---|
| Think aloud | Participant talks through their mental processes and memory retrieval as they interpret questions and formulate answers | Lower risk of interviewer biasing responses. Interviewer does not need much training | High cognitive burden on participant. Confusing for many participants, particularly poorer and less educated participants, who are unfamiliar with the idea of reflecting on and articulating their thoughts. Can be embarrassing for participants who do not understand the request. Does not allow for targeted exploration of areas of researcher interest |
| Probing | Interviewer asks scripted or emergent follow-up questions to explore how the participant interpreted each survey question and formulated their answer | Lower burden on respondent. More comfortable and natural for respondents to answer interview questions than to articulate their thought processes. Enables the researcher to explore inconsistencies and seemingly illogical responses | May be misinterpreted by respondents as an examination or a test of their knowledge, vocabulary skills or cognitive abilities, leading to respondent withdrawal or nervousness. Emergent probing in particular requires that the researcher has complete familiarity with the topic being studied, the intent of each survey question, and the overall flow and content of the entire survey, so that they can track meaning and make connections across the survey questions |
Figure 3. Example question from the IYCF cognitive interview guide
Illustrative schedule of 1 month of cognitive interview field work
| Day 1 | Day 2 | Day 3 | Day 4 | Day 5 | Day 6 | Day 7 |
|---|---|---|---|---|---|---|
| Training, including topic lecture and discussion on IYCF and detailed review of the IYCF cognitive interview guide | Break | | | | | |
| 5 cognitive interviews (CIs) on IYCF version 1 | Morning: 3 CIs on IYCF version 1 | Continue debrief and revise survey questions, create IYCF version 2 | 5 CIs on IYCF version 2 | Morning: 2 CIs on IYCF version 2 | Continue debrief and revise survey questions, create IYCF version 3 | Break |
| 6 CIs on IYCF version 3 | Last debrief and revisions to create final version of IYCF questions | Topic lecture and discussion on FP and detailed review of the FP cognitive interview guide | 6 CIs on FP version 1 | 7 CIs on FP version 1 | Break | |
| Debrief and revise survey questions | Additional debrief and revision, create FP version 2 | 6 CIs on FP version 2 | Debrief and revise survey questions, create FP version 3 | 5 CIs on FP version 3 | Last debrief and revisions to create final version of FP questions | Break |
Recommendations to ensure the quality of cognitive interviews
| Component | Recommendations | Rationale for recommendations |
|---|---|---|
| Scope of survey tool tested | The cognitive interview guide includes a reasonable number of survey questions to complete in 1.5 hours (likely around 30 questions). Additional guides should be developed if the total interview length exceeds 1.5 hours | The cognitive interview requires discussion time and probing for each survey question. A 1- or 1.5-hour quantitative survey tool will require over 10 hours of cognitive interview time if each question is examined; therefore, a priority sub-set of draft survey questions must be selected, or multiple guides developed and the sample expanded |
| Developing the cognitive interview guide | The cognitive interview guide includes survey questions, scripted probes and guidance on areas to explore through emergent probes | Researchers require guidance on which areas of each survey question to explore; they must also be encouraged to develop emergent probes throughout the course of the interview |
| Recruiting and training researchers | Researchers should be highly educated social scientists with prior qualitative research experience; quantitative survey research experience is desirable. Researchers must be fluent in the local language and, when relevant, the language of the broader team. Ample time should be allocated for training, covering orientation to the purpose of the larger study, detailed instruction on cognitive interviewing, in-depth topic area teaching, question-by-question examination of the guides, and role play | Cognitive interviewing is complex and quite different from mainstream qualitative research in terms of purpose, interview skills and debriefs; the field researchers drive the quality of the interviews and are fundamental in creating the final revised survey questions. Linguistic issues are the most common problems with surveys translated from English into regional languages; researchers must be fluent in both languages to ensure nuance is captured across the translation while adapting the language to local norms. Extensive training is vital to orienting the researchers, including ensuring they understand the topics being assessed, the intent of each survey question, and how to carry out effective cognitive probing |
| Participant sample characteristics | Cognitive interview participants are from the same geographic area as the target respondent population and have similar socio-demographic characteristics to the survey target population. Within the socio-demographic profile of the sample population, the most marginalized individuals, with the lowest levels of education, literacy and mobility, should be prioritized | Cognitive interviewing enables local adaptation and thus requires local participants who mimic the characteristics of the intended survey respondents. Cognitive failures in the draft survey questions are most efficiently and comprehensively identified by interviewing participants who are most likely to struggle with the material |
| Conducting interviews | Interviews should be carried out by pairs of trained qualitative researchers: one to conduct the interview and one to take notes throughout the interview | Notetaking is as important as leading the cognitive interview because debriefs and revisions of the survey questions depend on the notes taken during data collection; notetakers must be as well trained and experienced as the interviewers |
| Debrief and analysis | Balance data collection with debriefing: conduct a few (approximately 6 or 7) cognitive interviews and then allocate a day or more for debriefing and revision; do not gather a lot of data without time for reflection. Multiple rounds of data collection should be conducted to test subsequent versions of the draft survey questions | Conducting a large number of cognitive interviews before pausing to debrief and revise the survey questions is inefficient and impractical |
| Supporting quantitative survey enumerator training | Researchers who conducted the cognitive interviews should attend the survey enumerator training | Researchers who carried out the cognitive interviewing can explain the rationale for the final wording of the questions to the survey enumerators, provide locally grounded orientation to field realities (including local vocabulary), and help the enumerators anticipate the types of challenging responses they are likely to receive in the field |