Web surveys have long mimicked paper questionnaires in their layout and appearance. Even though the rapid development of the internet and of internet data collection methods offers various graphic and multimedia design features, very little is known about the influence of animated web survey questions on the question answering process. In a web survey among university journal readers, we conducted an experiment exploring the effects of implementing animated faces in scale questions. By varying the visual appearance of the faces in a scale question, we examine their influence on the question answering process.
Introduction
Rating scales with faces or “smileys” as symbolic labels are frequently used in questionnaires on job satisfaction (Herman, Dunham, and Hulin 1975; Jäger and Bortz 2001; Kunin 1998) and global well-being (Andrews and Withey 1976; Wanous, Reichers, and Hudy 1997). They are also considered especially suitable for surveying children, as they are more easily understood than text-based self-report measures (Chambers et al. 1999). The advantage of these scales is mostly seen in the easier formatting of affective answers. Global self-report measures ask respondents about complex constructs, such as general satisfaction, across broad categories and over long periods of time, such as one’s lifetime. Applied to the question answering process, this means that the retrieval of relevant information for global questions is nearly impossible (Schwarz and Strack 1991). Instead of relevant information, accessible information is used to generate an answer. In this case, answers are to a greater extent affective and fit more easily into an affective answer scale such as a faces scale. The translation of feelings into words is not necessary, and the respondent only has to “check the face which looks like he feels” (Kunin 1998, 824).
Even though faces scales are used in web surveys, the existing findings on these scales are mostly based on paper questionnaire experiments. The visual design capabilities of web surveys can be seen as a valuable addition to the established purposes of these scales. Using faces scales implies using graphical elements. Pictures in surveys attract attention (Couper, Tourangeau, and Conrad 2007) and affect answers particularly when visual and verbal information does not match the presented question (Couper 2001). Moreover, surveys consist of words, but they also employ a visual language of symbols, numbers, and graphics, which influences answers to survey questions (Christian and Dillman 2004). Even though visual content might increase respondent enjoyment, Couper, Tourangeau, and Kenyon (2004) found little support for this hypothesis.
Our study was designed to explore whether faces scales are appropriate to measure general satisfaction. We hypothesized that the easier formatting of affective answers on a faces scale would apply especially when the faces are animated and change their visual appearance. To better understand the characteristics of faces scales, we strengthened their affective aspect by animating the faces’ visual appearance. Apart from the easier formatting, we employed faces scales to attract attention and increase respondents’ enjoyment. If they do, respondents will spend more time answering the question, which allows for deeper question processing and therefore increases data quality.
Methods
Our study was carried out as a survey among university journal readers and non-readers (N=1042) using a mixed-mode design of paper-based and web-based questionnaires. The results reported here are based on the web survey (N=611). In the middle of the questionnaire, web survey respondents who read the journal answered a radio button question concerning their global satisfaction with the journal. At the end of the questionnaire, respondents were furthermore randomly assigned to one of three versions of the same satisfaction question measuring overall satisfaction with the university journal: a fixed design, an affective design, and a cognitive design of a faces scale.
Figure 1 illustrates the three faces scale designs and the radio button control question. The fixed design included no animation at all and mimicked the faces scales commonly used in paper-and-pencil questionnaires. In the affective design, the faces changed their color (blazing red to red-orange to dark orange to light orange to light green to grass-green) and increased in size when the cursor hovered over them, while in the cognitive version the faces kept their color, shrank, and displayed a text answer category. In the affective design we thus enhanced the emotional context and directed attention to the faces, while in the cognitive setting we accentuated cognitive processing by downsizing the faces and offering an additional text label for the answer category. The radio button control design used the same answer categories as the cognitive design but included no animation at all. As the faces scales were part of a survey conducted under contract, we were constrained to use a 6-point scale, even though a middle response option seemed more appropriate. On the other hand, this allowed us to avoid a neutral face with a straight mouth line, whose adequacy is questionable (Elfering and Grebner 2010).
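The paper does not describe how these hover animations were implemented; the following is a minimal TypeScript sketch of the behavior described above, assuming each face is rendered as an element with the class name "face" inside a scale container. The markup, class names, color ramp, and size factors are illustrative assumptions, not taken from the original study.

// Minimal sketch of the hover behavior of the three designs (assumptions noted above).

type Design = "fixed" | "affective" | "cognitive";

// Illustrative six-step ramp from "blazing red" to "grass-green".
const colorRamp = ["#d40000", "#e25822", "#e8901a", "#f5b971", "#a9d46b", "#4caf50"];

function attachFaceAnimation(scale: HTMLElement, design: Design): void {
  const faces = Array.from(scale.querySelectorAll<HTMLElement>(".face"));
  const label = scale.querySelector<HTMLElement>(".answer-label"); // used by the cognitive design

  faces.forEach((face, index) => {
    face.addEventListener("mouseover", () => {
      if (design === "affective") {
        // Affective design: the hovered face takes on its scale color and grows.
        face.style.backgroundColor = colorRamp[index];
        face.style.transform = "scale(1.4)";
      } else if (design === "cognitive") {
        // Cognitive design: the hovered face shrinks and a text answer category appears.
        face.style.transform = "scale(0.7)";
        if (label) label.textContent = face.dataset.category ?? "";
      }
      // Fixed design: no animation at all.
    });

    face.addEventListener("mouseout", () => {
      // Restore the resting appearance when the cursor leaves the face.
      face.style.backgroundColor = "";
      face.style.transform = "";
      if (design === "cognitive" && label) label.textContent = "";
    });
  });
}

// Example: attachFaceAnimation(document.getElementById("faces-scale")!, "affective");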
Results
Table 1 shows the distribution of responses for the three faces scale designs and the radio button version. We found no significant differences among the three faces scale designs. Comparing each of the three faces scales to the radio button scale, we found significant differences for the fixed and the affective design, while the cognitive design was not significantly different from the radio button version. Moreover, answers to the cognitive and the radio button design were slightly more positive (mean = 2.5) than answers to the fixed and affective design (mean = 2.6).
Table 2 reveals the time (measured in seconds) respondents needed to answer the questions. When answering one of the three faces scale designs (fixed/affective/cognitive), respondents needed about 14 seconds on average to select their answer and click the submit button. Again, we found no significant differences among the three faces scales. Comparing the radio button version to each of the faces scales yielded significant differences: the radio button version was answered on average four seconds faster.
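The paper does not report how the per-question answer times were recorded; a common client-side approach is sketched below, assuming one timestamp is taken when the question is rendered and another when the submit button is clicked. The element id and the paradata endpoint are hypothetical.

// Sketch of capturing the time from question display to submit click as paradata.
// "submit-button" and the "/paradata" endpoint are assumptions for illustration.

function measureAnswerTime(questionId: string): void {
  const start = performance.now(); // question rendered

  const submit = document.getElementById("submit-button");
  submit?.addEventListener(
    "click",
    () => {
      const seconds = (performance.now() - start) / 1000;
      // Send the timing alongside the answer to the (hypothetical) paradata endpoint.
      void fetch("/paradata", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ questionId, seconds }),
      });
    },
    { once: true }
  );
}

// Example: measureAnswerTime("journal-satisfaction");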
In summary, the results of our study show similarities in the answer distributions for the fixed and the affective design on the one hand and for the cognitive and the radio button design on the other hand. Furthermore, all faces scale versions took respondents longer to answer in comparison to the radio button design.
Discussion
Even though we found differences between the faces scale designs, face color and size surprisingly had no significant influence on the answers provided in comparison to the fixed faces scale design. We hypothesize that a change in color and size might not be enough to make a faces scale a more affective measure. The differences in the distributions reveal slightly lower satisfaction for the fixed and affective faces scale designs. However, the key finding of this study is that the cognitive faces scale design yields answers corresponding to those of the radio button question. We therefore suggest that the cognitive design (using faces and text) can be used instead of a radio button scale for questions on global satisfaction.
Because respondents needed more time to answer each of our faces scale designs, we assume that faces scales attract more attention. If respondents’ attention needs to be focused particularly on the answer categories, faces scales could provide that. On the other hand, faces scales might draw attention away from the question itself, which may cause problems, especially for questions with complex wording. Because of the survey design (we had to place the faces scales at the very end of the questionnaire), we are not able to assess break-offs and item nonresponse reliably. As including images in a survey slightly increases respondents’ enjoyment (Couper, Tourangeau, and Kenyon 2004; Toepoel and Couper 2011), we furthermore assume that the sparing use of faces scales might increase enjoyment and reduce nonresponse.
Nevertheless, our study has some limitations. The use of a 6-point scale is not ideal; a middle response option seems more appropriate. In addition, the question wording itself was quite specific for a faces scale. Moreover, there is some uncertainty about the ideal design of the facial shape and the use of the mouth line as an indicator of well-being or satisfaction, which appears to be less adequate in Eastern cultures, where emotional expression is primarily coded in the eye region of the face (Yuki, Maddux, and Masuda 2007).
Overall, the results suggest that faces scales using the fixed and the affective design yield response distributions that differ from those obtained with a cognitive faces scale or a radio button question. Based on our findings, if a faces scale has to be used, the cognitive design provides the best trade-off between entertainment, attention, and adequate measurement.
Note
An earlier version of this paper was presented at the AAPOR conference, Phoenix, AZ, May 2011.