CHAPTER EIGHT
DESIGNING THE QUESTIONNAIRE

LEARNING OBJECTIVES (PPT slide 8-2)
1. Describe the steps in questionnaire design.
2. Discuss the questionnaire development process.
3. Summarize the characteristics of good questionnaires.
4. Understand the role of cover letters.
5. Explain the importance of other documents used with questionnaires.

KEY TERMS AND CONCEPTS
1. Bad questions
2. Call records
3. Common methods variance (CMV)
4. Cover letter
5. Interviewer instructions
6. Introductory section
7. Questionnaire
8. Quotas
9. Research questions section
10. Response order bias
11. Screening questions
12. Sensitive questions
13. Skip questions
14. Structured questions
15. Supervisor instruction form
16. Unstructured questions

CHAPTER SUMMARY BY LEARNING OBJECTIVES

Describe the steps in questionnaire design. Researchers follow a systematic approach to designing questionnaires. The steps are: confirm research objectives, select the appropriate data collection method, develop questions and scaling, determine layout and evaluate the questionnaire, obtain initial client approval, pretest, revise, and finalize the questionnaire, and implement the survey.

Discuss the questionnaire development process. A number of design considerations and rules of logic apply to the questionnaire development process. The process requires knowledge of sampling plans, construct development, scale measurement, and types of data. A questionnaire is a set of questions and scales designed to collect data and generate information that helps decision makers solve business problems. Good questionnaires enable researchers to obtain a true report of the respondent's attitudes, preferences, beliefs, feelings, behavioral intentions, and actions. Through carefully worded questions and clear instructions, a researcher can focus a respondent's thoughts and obtain answers that faithfully represent the respondent's attitudes, beliefs, intentions, and knowledge. By understanding good communication principles, researchers can avoid bad questions that result in unrealistic information requests, unanswerable questions, or leading questions that prevent or distort the respondent's answers.

Summarize the characteristics of good questionnaires. Survey information requirements play a critical role in the development of questionnaires. For each objective, the researcher must choose the type of scale format (nominal, ordinal, interval, or ratio), the question format (open-ended or closed-ended), and the appropriate scaling. Researchers must also be aware of the impact that different data collection methods (personal, telephone, self-administered, computer-assisted) have on the wording of both questions and response choices. In good questionnaires, the questions are simple, clear, logical, and meaningful to the respondent, and they move from general to specific topics.

Understand the role of cover letters. The primary role of a cover letter is to obtain the respondent's cooperation and willingness to participate in the project. Ten factors should be considered in developing cover letters, and observing these guidelines will increase response rates.

Explain the importance of other documents used with questionnaires. When data are collected using interviews, supervisor and interviewer instructions must be developed, as well as screening forms and call record sheets. These documents help ensure the data collection process is successful. Supervisor instructions serve as a blueprint for training people to complete the interviewing process in a consistent fashion.
The instructions outline the process for conducting the study and are important to any research project that uses personal or telephone interviews. Interviewer instructions are used to train interviewers to correctly select prospective respondents for inclusion in the study, screen them for eligibility, and properly conduct the actual interview. Screening forms are a set of preliminary questions used to confirm the eligibility of a prospective respondent for inclusion in the survey. Quota sheets are tracking forms that enable the interviewer to collect data from the right types of respondents. All of these documents help improve data collection and accuracy.

CHAPTER OUTLINE

Opening VIGNETTE: Can Surveys Be Used to Develop University Residence Life Plans?

The opening vignette in this chapter describes a survey instrument developed for a "Residence Life" program at a large university (43,000 students) to determine the housing needs of current and future students. The objectives of the program revolved around the idea that high-quality on-campus living facilities and programs could help attract new students to the university. MPC Consulting Group, Inc., which specializes in the assessment of on-campus housing programs, prepared a self-administered survey instrument to be delivered to existing students through the university's newly acquired "Blackboard" electronic learning management system. The survey asked for demographic/socioeconomic characteristics, followed by questions about students' current housing situation and an assessment of those conditions, the importance of various housing characteristics, and intentions of living in on-campus versus off-campus housing. As administered through Blackboard, the questionnaire required viewing 24 screens and included six different "screener" questions that had respondents skipping back and forth between computer screens depending on how they answered. The response was dismal: only 17 students responded, and 8 of those responses were incomplete. The university asked MPC to address the reasons for the low response rate, the quality of the instrument, and the value of the data collected.

I. Value of Questionnaires in Marketing Research (PPT slides 8-3 and 8-4)

Most surveys are designed to be descriptive or predictive. Descriptive research designs use questionnaires to collect data that can be turned into knowledge about a person, object, or issue. In contrast, predictive survey questionnaires require the researcher to collect a wider range of data that can be used in predicting changes in attitudes and behaviors as well as in testing hypotheses. Researchers should know the activities and principles involved in designing survey questionnaires. A questionnaire is a document consisting of a set of questions and scales designed to gather primary data. Good questionnaires enable researchers to collect reliable and valid information. Advances in communication systems, the Internet, and software have influenced how questions are asked and recorded, yet the principles followed in designing questionnaires remain essentially unchanged. Whether developing a survey for use online or offline, the steps researchers follow in designing questionnaires are similar.
II. Questionnaire Design (PPT slide 8-5)

Exhibit 8.1 lists the steps followed in developing survey questionnaires (PPT slide 8-5):
Step 1: Confirm research objectives
Step 2: Select appropriate data collection method
Step 3: Develop questions and scaling
Step 4: Determine layout and evaluate questionnaire
Step 5: Obtain initial client approval
Step 6: Pretest, revise, and finalize questionnaire
Step 7: Implement the survey

A. Step 1: Confirm Research Objectives (PPT slide 8-6)
In the initial phase of the development process, the research objectives must be agreed upon by the researcher and the client organization.

B. Step 2: Select Appropriate Data Collection Method (PPT slide 8-7)
To select the data collection method, the researcher first must determine the data requirements for achieving each of the objectives, as well as the type of respondent demographic information desired. In doing so, the researcher should follow a general-to-specific order.

C. Step 3: Develop Questions and Scaling (PPT slides 8-8 to 8-11)
Questionnaire design is systematic and includes a series of logical activities. Researchers select the appropriate scales and design the questionnaire format to meet the data requirements. The researcher decides on:
The question format (unstructured or structured)
The wording of questions, scales, and instructions for responding to questions and scales
The type of data required (nominal, ordinal, interval, or ratio)
In making these decisions, researchers must consider how the data are to be collected. For example, appropriate questions and scaling often differ among online, mail, and telephone surveys.

Question format: Unstructured questions are open-ended questions that enable respondents to reply in their own words. There is no predetermined list of responses available to aid or limit respondents' answers. Open-ended questions are more difficult to code for analysis. Perhaps more importantly, these questions require more thinking and effort on the part of respondents. As a result, quantitative surveys generally contain only a few open-ended questions, and unless a question is likely to be interesting to respondents, open-ended questions are often skipped.

Structured questions are closed-ended questions that require the respondent to choose from a predetermined set of responses or scale points. Structured formats reduce the amount of thinking and effort required by respondents, and the response process is faster. In quantitative surveys, structured questions are used much more often than unstructured ones because they are easier for respondents to fill out and easier for researchers to code. Exhibit 8.2 shows examples of structured questions.
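To illustrate why structured questions are easier to code, the sketch below represents a single closed-ended item with precoded responses; the question wording, labels, and numeric codes are hypothetical and not taken from the text.

```python
# Minimal sketch of a structured (closed-ended) question with precoded
# responses. Question wording, labels, and numeric codes are hypothetical.

satisfaction_item = {
    "id": "Q7",
    "text": "Overall, how satisfied are you with your current housing?",
    "scale_type": "ordinal",        # nominal, ordinal, interval, or ratio
    "responses": {                  # label -> numeric code used in analysis
        "Very dissatisfied": 1,
        "Somewhat dissatisfied": 2,
        "Neither satisfied nor dissatisfied": 3,
        "Somewhat satisfied": 4,
        "Very satisfied": 5,
    },
}

def code_response(item: dict, answer: str) -> int:
    """Translate a respondent's chosen label into its numeric code."""
    return item["responses"][answer]

print(code_response(satisfaction_item, "Somewhat satisfied"))  # -> 4
```

An unstructured (open-ended) item, by contrast, would have to be read and categorized by the researcher before any numeric coding of this kind is possible.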
Wording: Researchers must carefully consider the words used in creating questions and scales. Ambiguous words and phrases, as well as vocabulary that is difficult to understand, must be avoided. Researchers must select words respondents are familiar with, and when unsure, questionable words should be examined in a pretest. Words and phrases can influence a respondent's answer to a given question. Some question topics are considered sensitive and must be handled carefully to maintain response rates. Examples of sensitive topics include income, sexual beliefs or behaviors, medical conditions, financial difficulties, alcohol consumption, and so forth. These behaviors are often engaged in but may be considered socially unacceptable. Guidelines for asking sensitive questions start with not asking them unless they are required to achieve the research objectives. If they are necessary, assure respondents their answers will be kept completely confidential. Another guideline is to indicate that the behavior is not unusual.

Questions and scaling: Question and scale formats directly affect survey design. To collect accurate data, researchers must devise good questions and select the correct type of scale. Whenever possible, metric scales should be used. Researchers also must maintain consistency in scales and coding to minimize confusion among respondents. Once a particular question or scale is selected, the researcher must ensure it is introduced properly and is easy to respond to accurately.

Bad questions prevent or distort communication between the researcher and the respondent. If the respondent cannot answer a question in a meaningful way, it is a bad question. Examples of bad questions are those that are:
Unanswerable, either because the respondent does not have access to the information needed or because none of the answer choices applies to the respondent
Leading (or loaded), in that the respondent is directed toward a response that would not ordinarily be given if all possible response categories, or all the facts of the situation, were provided
Double-barreled, in that they ask the respondent to address more than one issue at a time

When designing specific questions and scales, researchers should act as if they are two different people: one thinking like a technical, systematic researcher and the other like a respondent.

The questions and scales must be presented in a logical order. After selecting a title for the questionnaire, the researcher includes a brief introductory section and any general instructions prior to asking the first question. Questions should be asked in a natural general-to-specific order to reduce the potential for sequence bias. Also, any sensitive or more difficult questions should be placed later in the questionnaire, after the respondent has become engaged in the process of answering questions.

Some questionnaires have skip or branching questions. Skip questions can appear anywhere within the questionnaire and are used when the next question (or set of questions) should be answered only by respondents who meet a previous condition. Skip questions help ensure that only qualified respondents answer certain items. When skip questions are used, the instructions must be clearly communicated to respondents or interviewers. If the survey is online, skip questions are easy to use and are handled automatically, as sketched below.

Respondents should be made aware of the time it will take to complete the questionnaire and of their progress in completing it. This begins in the introductory section, when the respondent is told how long the questionnaire will take, and continues throughout the questionnaire. For online surveys this is easy, and most surveys include an icon or some other indicator of the number of questions remaining or of progress toward completion.

Prior to developing the layout for the survey questionnaire, the researcher should assess the reliability and validity of the scales. Once this is completed, the focus is on preparing instructions and making any required revisions.
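As referenced above, online surveys handle skip (branching) questions automatically. The following is a minimal sketch of that branching logic under assumed conditions; the question IDs, wording, and skip rule are hypothetical and do not come from any questionnaire in the text.

```python
# Minimal sketch of automatic skip (branching) logic in an online survey.
# Question IDs, wording, and the skip rule are hypothetical.

questions = {
    "Q1": "Have you lived in on-campus housing during the past year?",
    "Q2": "Which residence hall did you live in most recently?",
    "Q3": "How satisfied were you with that residence hall?",
}
order = ["Q1", "Q2", "Q3"]

# If the answer to Q1 is "No", skip the follow-up questions entirely.
skip_rules = {("Q1", "No"): "END"}

def next_question(current_id: str, answer: str) -> str:
    """Return the next question ID, applying any matching skip rule."""
    target = skip_rules.get((current_id, answer))
    if target is not None:
        return target
    idx = order.index(current_id)
    return order[idx + 1] if idx + 1 < len(order) else "END"

print(next_question("Q1", "No"))   # -> END (respondent skips the follow-ups)
print(next_question("Q1", "Yes"))  # -> Q2
```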
D. Step 4: Determine Layout and Evaluate Questionnaire (PPT slides 8-12 and 8-13)
In good questionnaire design, questions flow from general to more specific information and end with demographic data. Questionnaires begin with an introductory section that gives the respondent an overview of the research. The section opens with a statement establishing the legitimacy of the questionnaire. Screening questions (also referred to as screeners or filter questions) are used on most questionnaires. Their purpose is to identify qualified prospective respondents and prevent unqualified respondents from being included in the study. It is difficult to use screening questions in many self-administered questionnaires, except for computer-assisted surveys. Screening questions are completed before the main portion of the questionnaire, which contains the research questions. The introductory section also includes general instructions for filling out the survey.

The second section of the questionnaire focuses on the research questions. This is called the research questions section, and based on the research objectives, the questions are sequenced from general to more specific. If the study has multiple research objectives, then the questions designed to obtain information on each objective also should be sequenced from general to specific. One exception occurs when two sections of a questionnaire have related questions. In that situation the researcher would typically separate the two sections (by inserting an unrelated set of questions) to minimize the likelihood that answers to one set of questions influence the answers given to the second set. Finally, any difficult or sensitive questions should be placed toward the end of each section.

The last section includes demographic questions. Demographic questions are placed at the end of a questionnaire because they often ask for personal information, and many people are reluctant to provide this information to strangers. Until a "comfort zone" is established between the interviewer and the respondent, asking personal questions could easily bring the interviewing process to a halt. The questionnaire ends with a thank-you statement.

Questionnaires should be designed to eliminate, or at least minimize, response order bias. Response order bias occurs when the order of the questions, or of the closed-ended responses to a particular question, influences the answer given. Answers that appear at the beginning or the end of a list tend to be selected most often, and with numeric alternatives (prices or quantities) respondents tend to select central values. With online surveys this is not a problem because the order of presentation can be randomized. It also is less of a problem with mail surveys because the respondent can see all the possible responses. Another way to reduce order bias with mail or self-administered surveys is to prepare several different forms of the questionnaire, each with a different response order, and average the responses. Phone surveys present the most opportunities for response order bias to emerge.
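The paragraph above notes that online surveys can avoid response order bias by randomizing the order in which response options are presented. Below is a minimal sketch of per-respondent randomization; the response labels and respondent IDs are hypothetical.

```python
import random

# Minimal sketch of randomizing response order in an online survey to reduce
# response order bias. Response labels and respondent IDs are hypothetical.

options = [
    "Price",
    "Location",
    "Quality of facilities",
    "Roommate matching",
    "On-site programs",
]

def presented_order(options: list[str], respondent_id: int) -> list[str]:
    """Return a respondent-specific ordering of the response options.

    Seeding with the respondent ID keeps the order stable for that respondent
    (e.g., if the page is reloaded) while varying across respondents, so no
    single option is always shown first or last.
    """
    rng = random.Random(respondent_id)
    shuffled = options[:]      # copy so the master list is left untouched
    rng.shuffle(shuffled)
    return shuffled

print(presented_order(options, respondent_id=1001))
print(presented_order(options, respondent_id=1002))
```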
Some researchers have recently expressed concerns that questionnaire design might result in common methods variance being embedded in the respondent data collected from surveys. Common methods variance (CMV) is biased variance that results from the measurement method used in a questionnaire rather than from the constructs the scales are intended to measure. CMV is present in survey responses when the answers given by respondents to the independent and dependent variable questions are falsely correlated. The bias introduced by CMV is most likely to occur when the same respondent answers both the independent and dependent variable questions at the same time, the questions are perceptual in nature, and the respondent recognizes a relationship between the two types of variables.
Questionnaire formatting and layout should make it easy for respondents to read and follow instructions. If the researcher fails to consider questionnaire layout, the quality of the data can be substantially reduced. The value of a well-constructed questionnaire is difficult to estimate. The main function of a questionnaire is to obtain people's true thoughts and feelings about issues or objects. Data collected using questionnaires should improve understanding of the problem or opportunity that motivated the research. In contrast, bad questionnaires can be costly in terms of time, effort, and money without yielding good results. A summary of the major considerations in questionnaire design is given in Exhibit 8.4 in the text.

The design of online surveys requires additional planning. A primary metric for traditional data collection methods is the response rate. To determine response rates, the researcher must know the number of attempts made to contact respondents and complete a questionnaire. With mail or phone surveys this task is relatively easy. With online surveys, researchers must work with the online data collection field service to plan how respondents will be solicited. If the data collection service sends out a "blanket" invitation to complete a survey, there is no way to measure response rates. Even when the invitation is sent to an organized panel of respondents, calculating response rates can be a problem, because panels may involve millions of individuals described only by broad criteria. Another issue is the recruiting of participants. If individuals are invited to participate and decline, should they be included in the response rate metric? Or should the metric be based only on those individuals who say they qualify and agree to respond, whether or not they actually respond? To overcome these problems, researchers must work closely with data collection vendors to identify, target, and request participation from specific groups so that accurate response rates can be calculated. Moreover, a clear understanding of how individuals are recruited for online surveys is necessary before data collection begins.
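The response rate question raised above is ultimately a matter of which counts go in the denominator. The sketch below contrasts the two inclusion rules; all counts are hypothetical and used only for illustration.

```python
# Minimal sketch of the two response-rate definitions discussed above.
# All counts are hypothetical.

invited = 5000                 # individuals sent the official invitation
declined = 3200                # invited individuals who declined (these stay
                               # in the denominator under Rule 1)
qualified_and_agreed = 1400    # said they qualify and agreed to respond
completed = 900                # actually completed the questionnaire

# Rule 1: base the metric on everyone invited, including those who declined.
rate_all_invited = completed / invited

# Rule 2: base the metric only on those who qualified and agreed to respond.
rate_qualified = completed / qualified_and_agreed

print(f"Response rate (all invited):        {rate_all_invited:.1%}")  # 18.0%
print(f"Response rate (qualified & agreed): {rate_qualified:.1%}")    # 64.3%
```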
A related problem in calculating the online response rate metric is the possibility that participants are recruited outside the official online data collection vendor. For example, it is not uncommon for an individual who received the official invitation to recruit online friends and suggest that they participate. Effective control mechanisms, such as a unique identifier that must be entered before taking the survey, must be put in place ahead of time to prevent this type of unsolicited response. Another problem with online surveys is the length of time it takes some respondents to complete the survey. To manage this issue, researchers should make sure the online data collection vendor measures "time for completion" as a metric.

Research firms are just beginning to analyze online questionnaire design issues. Three specific issues that have been addressed to date are:
The effect of response box size on the length of answers to open-ended questions: respondents write more when the boxes are bigger. Some research firms cope with this by offering respondents a choice among small, medium, and large boxes so that box size does not affect their responses.
The use of radio buttons versus pull-down menus for responses: if a response is immediately visible at the top of a pull-down menu, it will be selected more often than if it is simply one of the responses on the list. Thus, if a pull-down menu is used, the top option should be "select one."
The appropriate use of visuals.

Online formats facilitate the use of improved rating and ranking scales, as well as extensive graphics and animation. Programming surveys in the online environment offers the opportunity to use graphics and scales in new ways, some of which are helpful and some of which are not. As with traditional data collection methods and questionnaire design, the best guideline is the KISS test: Keep It Simple and Short! Complex formats or designs can produce biased findings and should be avoided.

E. Step 5: Obtain Initial Client Approval (PPT slide 8-14)
Copies of the questionnaire should be given to all parties involved in the project. This is the client's opportunity to suggest topics that were overlooked or to ask questions. Researchers must obtain final approval of the questionnaire prior to pretesting. If changes are necessary, this is where they should occur; changes at a later point will be more expensive and may not be possible.

F. Step 6: Pretest, Revise, and Finalize Questionnaire (PPT slide 8-15)
The final evaluation of the questionnaire comes from a pretest. For a pretest, the survey questionnaire is given to a small (20 to 30 individuals), representative group of respondents who are asked to fill out the survey and provide feedback to the researchers. Pretest respondents are asked to pay attention to words, phrases, instructions, and question sequence, and to point out anything that is difficult to follow or understand. Returned questionnaires are checked for signs of boredom or fatigue on the part of the respondent, such as skipped questions or the same answer circled for all questions within a group (a minimal sketch of such a check appears at the end of this section). The pretest helps the researcher determine how much time respondents will need to complete the survey, whether instructions need to be added or revised, and what to say in the cover letter. If problems or concerns arise in the pretest, modifications must be made and approved by the client before moving to the next step.

G. Step 7: Implement the Survey (PPT slide 8-16)
The focus here is on the process followed to collect the data using the agreed-upon questionnaire. The process varies depending on whether the survey is self-administered or interviewer-completed. For example, self-administered questionnaires must be distributed to respondents, and methods must be used to increase response rates. Similarly, with Internet surveys the format, sequence, skip patterns, and instructions must be thoroughly checked after the questionnaire is uploaded to the web. Thus, implementation involves following up to ensure that all previous decisions are properly carried out.
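As referenced in Step 6, returned pretest questionnaires are checked for signs of respondent fatigue such as skipped questions or the same answer chosen for every item in a group. Below is a minimal sketch of an automated version of that check; the respondent data and question group are hypothetical.

```python
# Minimal sketch of a pretest quality check for skipped questions and
# straight-lining (the same answer for every question in a group).
# Respondent data and the question group are hypothetical.

pretest_responses = [
    {"id": 1, "Q5": 4, "Q6": 4, "Q7": 4, "Q8": 4},        # straight-liner
    {"id": 2, "Q5": 5, "Q6": 3, "Q7": 4, "Q8": 2},        # varied answers
    {"id": 3, "Q5": 2, "Q6": None, "Q7": 3, "Q8": None},  # skipped items
]
group = ["Q5", "Q6", "Q7", "Q8"]   # items that form one question group

def flag_respondent(resp: dict, items: list[str]) -> list[str]:
    """Return warning flags for one returned pretest questionnaire."""
    flags = []
    answers = [resp[q] for q in items]
    if any(a is None for a in answers):
        flags.append("skipped questions")
    answered = [a for a in answers if a is not None]
    if len(answered) > 1 and len(set(answered)) == 1:
        flags.append("straight-lining")
    return flags

for resp in pretest_responses:
    print(resp["id"], flag_respondent(resp, group))
# -> 1 ['straight-lining'], 2 [], 3 ['skipped questions']
```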
III. The Role of a Cover Letter (PPT slides 8-17 and 8-18)

A cover letter is used with a self-administered questionnaire. The primary role of a cover letter is to obtain the respondent's cooperation and willingness to participate in the research project. With personal or telephone interviews, interviewers use a verbal statement that covers many of the same points as a mailed or drop-off cover letter. Self-administered surveys often have low response rates (25 percent or less), and good cover letters increase response rates.

Exhibit 8.5 provides guidelines for developing cover letters (PPT slide 8-18):
Personalization
Identification of the organization
Clear statement of the study's purpose and importance
Anonymity and confidentiality
General time frame for conducting the study
Reinforcement of the importance of the respondent's participation
Acknowledgment of reasons for nonparticipation in the survey or interview
Time requirements and incentive
Completion date and where and how to return the survey
Advance thank-you statement for willingness to participate

IV. Other Considerations in Collecting Data (PPT slides 8-19 to 8-21)

When data are collected, supervisor and interviewer instructions often must be developed, as well as screening questions and call records. These mechanisms help ensure the data collection process is successful. Some are needed for interviews and others for self-administered questionnaires.
A. Supervisor Instructions (PPT slide 8-19)
A supervisor instruction form serves as a blueprint for training people to execute the interviewing process in a standardized fashion; it outlines the process for conducting a study that uses personal or telephone interviewers. Supervisor instructions include:
Detailed information on the nature of the study
Start and completion dates
Sampling instructions
Number of interviewers required
Equipment and facility requirements
Reporting forms
Quotas
Validation procedures
Exhibit 8.6 displays a sample page from a set of supervisor instructions for a restaurant study.

B. Interviewer Instructions (PPT slide 8-20)
Interviewer instructions are used to train interviewers to correctly select prospective respondents for inclusion in the study, screen them for eligibility, and properly conduct the actual interview. The instructions include:
Detailed information about the nature of the study
Start and completion dates
Sampling instructions
Screening procedures
Quotas
Number of interviews required
Guidelines for asking questions
Use of rating cards
Recording responses
Reporting forms
Verification form procedures

C. Screening Questions (PPT slide 8-20)
Screening questions ensure that the respondents included in a study are representative of the defined target population. They are used to confirm the eligibility of a prospective respondent for inclusion in the survey and to ensure that certain types of respondents are not included in the study. Exclusion occurs most frequently when a person's own occupation, or a family member's occupation, in a particular industry eliminates the person from inclusion in the study.

D. Quotas (PPT slide 8-20)
A quota is a tracking system used during data collection to help ensure that subgroups are represented in the sample as specified. When the quota for a particular subgroup of respondents is filled, questionnaires for that subgroup are no longer completed. If interviewers are used, they record quota information and tabulate it to know when interviews for targeted groups have been completed. If the survey is online, the computer keeps track of completed questionnaires to ensure quotas for targeted groups are met.

E. Call or Contact Records (PPT slide 8-21)
Call records, also referred to as reporting or tracking approaches, are used to estimate the efficiency of interviewing. These records typically collect information on the number of attempts each interviewer makes to contact potential respondents and the results of those attempts. Call and contact records are most often used with data collection methods that require an interviewer, but they can also be used with online surveys. Information gathered from contact records includes:
Number of calls or contacts made per hour
Number of contacts per completed interview
Length of the interview
Completions by quota category
Number of terminated interviews
Reasons for termination
Number of contact attempts
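As noted in subsections D and E, quota sheets and contact records are essentially running tallies. Below is a minimal sketch of how an online survey system might track both; the quota categories, targets, outcomes, and counts are hypothetical.

```python
from collections import Counter

# Minimal sketch of quota tracking and a simple contact-record tally for an
# online survey. Quota categories, targets, and outcomes are hypothetical.

quota_targets = {"on-campus": 150, "off-campus": 150}   # completes needed
completed = Counter()                                    # completes so far
contact_log = Counter()                                  # attempts by outcome

def record_contact(outcome: str, quota_group: str | None = None) -> bool:
    """Log a contact attempt; return True only if a complete is accepted.

    A completed questionnaire is accepted only while its quota group is still
    open; otherwise it is turned away, as described in subsection D.
    """
    if outcome == "completed" and quota_group is not None:
        if completed[quota_group] >= quota_targets[quota_group]:
            contact_log["quota full - turned away"] += 1
            return False
        completed[quota_group] += 1
    contact_log[outcome] += 1
    return outcome == "completed"

record_contact("no answer")
record_contact("refused")
record_contact("completed", quota_group="on-campus")
print(dict(completed))     # completes by quota group
print(dict(contact_log))   # attempts by outcome, for efficiency reporting
```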
MARKETING RESEARCH IN ACTION
DESIGNING A QUESTIONNAIRE TO SURVEY SANTA FE GRILL CUSTOMERS (PPT slides 8-22 and 8-23)

The Marketing Research in Action in this chapter extends the discussion on questionnaire design. It includes the actual screening questions (Exhibit 8.7) and questionnaire (Exhibit 8.8) for the Santa Fe Grill. The research objectives guiding the survey are also provided.