
Chapter Eight
Designing The Questionnaire

Answers to Hands-On Exercises

1. Based on the research objectives, does the self-administered questionnaire, in its current form, correctly illustrate sound questionnaire design principles? Please explain why or why not.

No, the self-administered questionnaire does not correctly illustrate sound questionnaire design principles. It does not flow from the general to the specific. A better order might be Selection Factors, Perception Measures, Relationship Measures, Life Style Questions, and finally, Classification Questions. In the Relationship section, Question 25 should come before Question 22, followed by the rest in the order given. Also, in Section 1: Life Style Questions, the term "Questions" in the subtitle should be changed to "Features" because the belief statements are not framed in a question format. In addition, the actual scale measurement used in Section 1 and Section 2 should be redesigned to eliminate the redundancy of repeating "Strongly Disagree" and "Strongly Agree".

2. Overall, is the current survey design able to capture the required data needed to address all the stated research objectives? Why or why not? If changes are needed, how would you change the survey's design?

Keeping the six research objectives in mind, the Santa Fe Grill could make the following changes to its survey design:

(i) To identify the factors people consider important in making casual dining restaurant choice decisions. The restaurant decision factors are fairly well covered, and it is a good idea to include an ordering scheme. A better approach would be to include an item for convenience of location and let respondents list any other factors they consider important. It would be more useful if some way could be devised to assign a relative importance to what is now a simple ordinal scale. A constant-sum question would be a simple way to get that information.
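The constant-sum approach mentioned above can be illustrated with a short sketch. Each respondent allocates 100 points across the selection factors; the attribute names and the allocations below are purely hypothetical, not taken from the Santa Fe Grill survey:

```python
# Hypothetical constant-sum data: each respondent divides 100 points
# across the selection factors (attribute names are illustrative only).
responses = [
    {"food_quality": 40, "price": 25, "service": 20, "location": 15},
    {"food_quality": 30, "price": 40, "service": 10, "location": 20},
    {"food_quality": 50, "price": 20, "service": 20, "location": 10},
]

def mean_importance(responses):
    """Sum each factor's allocations and normalize to weights that total 1."""
    factors = responses[0].keys()
    totals = {f: sum(r[f] for r in responses) for f in factors}
    grand_total = sum(totals.values())
    return {f: totals[f] / grand_total for f in factors}

weights = mean_importance(responses)
# e.g., weights["food_quality"] is 120/300 = 0.40 for the data above
```

Averaging the allocations in this way turns a simple ordinal ranking into relative importance weights that sum to one, which is exactly the "degree of importance" information the ordinal scale cannot provide.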
(ii) To determine the characteristics customers use to describe the Santa Fe Grill and its competitor, Jose's Southwestern Cafe. Customers' descriptions of the Santa Fe Grill are covered, but there is still a question concerning coverage of the important factors. Are all possible descriptors included in the questionnaire? Which descriptors do customers find most attractive?

(iii) To develop a psychographic/demographic profile of the restaurant customers. The relevant demographic information seems to be fairly well covered. One might question whether the most relevant psychographic characteristics and statements were included, as there were only 11 statements in the questionnaire [Section 1].

(iv) To determine the patronage and positive word-of-mouth advertising patterns of the restaurant customers. These factors seem to be covered. Is there any type of word-of-mouth promotion that goes beyond recommending a place to a friend? That issue might need more attention, since personal recommendations are an important reason that people choose a place to dine out.

(v) To assess the customer's willingness to return to the restaurant in the future. Likelihood of return is covered in one question.

(vi) To assess the degree to which customers are satisfied with their Mexican restaurant experiences. There is just one question on satisfaction, with an open-ended follow-up for people who expressed dissatisfaction. Satisfaction is a multi-dimensional construct. There is, however, a trade-off between getting complete information and having the respondent tire of answering questions.

3. Evaluate the "screener" used to qualify the respondents. Are there any changes needed? Why or why not?

The first screener is fine; it gets right to the characteristic of interest. The second screener, about having eaten at some other Mexican restaurant, isn't particularly useful. Why is it relevant to have eaten at some other restaurant?
There are no questions in the survey comparing the Santa Fe Grill with other restaurants. The income question doesn't add much. There may be a valid reason for excluding people with low income, but they are part of the Santa Fe Grill's customer base; evidence of that is that they are there at the time of the survey.

4. Redesign questions 26-29 on the survey using a rating scale that will enable you to obtain the "degree of importance" a customer might attach to each of the four listed attributes in selecting a restaurant to dine at.

Listed below are some reasons that many people might use in selecting a restaurant where they want to dine. Think about your visits to casual dining restaurants in the last three months. For each attribute listed below, please indicate, by circling the number which corresponds to your answer, how important that attribute is to your decision of where to dine when selecting a casual restaurant. For each attribute, use a scale such that 1 = not at all important, 2 = somewhat unimportant, 3 = neither important nor unimportant, 4 = somewhat important, and 5 = extremely important.

ANSWERS TO REVIEW QUESTIONS

1. Discuss the advantages and disadvantages of using unstructured (open-ended) and structured (closed-ended) questions in developing an online, self-administered survey instrument.

The type of question format (structured or unstructured) has a direct impact on survey design. The advantage of unstructured questions is that they provide a rich array of information to the researcher, since there are no predetermined responses available to aid (or limit) a respondent's answer. The downside is that this type of question requires more thinking and effort on the part of the respondent. If respondents fail to comprehend what is being asked, they may leave the question blank, since there is no interviewer present to intervene and address their questions or concerns.
Closed-ended questions require the respondent to choose from a predetermined set of scale points or responses, which reduces the amount of thinking and effort required of the respondent.

2. Explain the role of a questionnaire in the research process. What should be the role of the client during the questionnaire development process?

A questionnaire is an essential tool in the research process, especially when a secondary data scan fails to address the research question and decision problem with precision and clarity. Questionnaires capture new (primary) information from respondents. Clearly, as with every step of the research process, the client should play a central role. It is advisable to conduct an in-depth interview with the client to consider what questions should be asked and (most importantly) whether any surveys have been disseminated to the target population in the past, and what the outcomes were. Survey design is an iterative process, so the research team should expect to revise a number of drafts of the survey instrument before a pretest, and to subject the instrument to further revisions after that.

3. What are the guidelines available for deciding the format and layout of a questionnaire?

Questions and decisions about "format" and "layout" are directly related to function; namely, constructing a survey which captures primary information from the target population to address a business or marketing information problem. How a questionnaire is formatted speaks to the integration of sets of questions and scale measurements into a package which reads and tests well when put in front of a potential respondent. The form + layout = function equation is also critical when the data are captured and brought back to the research team for analysis.

4. What makes a question bad? Develop three examples of bad questions. Rewrite your examples so they could be judged as good questions.
Bad questions are queries which cause a fundamental "disconnect" between researcher and respondent. They can be unanswerable, leading (or loaded), and/or double-barreled. Listed below are three examples of "bad" questions and the remedies which flow from the guidelines presented:

(i) "What do you think of the exceptionally poor customer service provided by the circulation desk at our campus library?" This is a leading (or loaded) question. The phrase "exceptionally poor" should be eliminated.

(ii) "Can you see a world in which the mall-of-the-present will give way to cyberspace and the internet of the future?" This question is a brain-twister, full of odd conceptual formulations and strange suggestions. It would be better phrased as: "Will electronic commerce eventually replace the need to buy items from shopping malls?"

(iii) "Will you vote for Freda Johnson in the upcoming election and are you pleased with her stance on gun control in our local community?" This is a double-barreled question. If it were followed with two response categories (e.g., "Yes" and "No"), the output from respondents could not be segmented in a way that is of value to the researcher and the campaign team. (People could vote for Freda yet disagree with her position on gun control in the local area.) This question needs to be broken up into two separate questions: (a) "Will you vote for Freda Johnson in the next election?" and (b) "Do you agree with her position on gun control in the local community?"

5. Discuss the value of a good questionnaire design.

A good questionnaire increases the probability of collecting high-quality primary data that can be transformed into reliable and valid information. The layout of the instrument can enhance its ability to provide valid and reliable data. A good questionnaire is designed to ensure that all sampling units are asked relevant questions, in the same manner and in the same order, and that responses are uniformly recorded.
The instrument is designed to be understandable to respondents, interesting enough to encourage completion, and economical to administer, record, and analyze.

6. Discuss the main benefits of including a brief introductory section in questionnaires.

If the questionnaire is intended to be self-administered, it is necessary to explain to respondents the main objectives of the research. They also need some general instructions about how to complete the instrument. Finally, the introduction should impress upon respondents that their input is valuable and seek to motivate them to participate. Sometimes the introduction assures respondents that their answers will be kept confidential and that their right to privacy will be preserved. The cover letter attached to a mail survey is intended to accomplish much the same things.

7. Unless needed for screening purposes, why shouldn't demographic questions be asked up front in a questionnaire?

Gathering demographic data is not the main purpose of the survey, so it should not be covered in the main part of the questionnaire. Because demographic information is sometimes considered sensitive by respondents, these questions are best left for the end of the session—after respondents become comfortable with the idea of answering questions.

ANSWERS TO DISCUSSION QUESTIONS

1. Assume you are doing exploratory research to find out students' opinions about the purchase of a new digital music player. What information would you need to collect? What types of questions would you use? Suggest six to eight questions you would ask and indicate in what sequence you would ask them. Pretest the questions on a sample of students from your class.

This question should be assigned as a take-home question, or it could be an in-class activity. Answers will vary. The research is exploratory, so it may require more open-ended questions than closed-ended ones. The class can review the surveys in class to discuss good and bad questions and the sequence of questions.

2.
Assume you are conducting a study to determine the importance of brand names and features of mobile phone handsets. What types of questions would you use—open-ended, closed-ended, scaled?—and why? Suggest six to eight questions you would ask and in what sequence you would ask them. Pretest the questions on a sample of students from your class.

This question should be assigned as a take-home question, or it could be an in-class activity. Answers will vary. The questions will likely be closed-ended due to the specific nature of the study. Importance questions lend themselves to Likert scale formats. The class can review the surveys in class to discuss good and bad questions and the sequence of questions.

3. Discuss the guidelines for developing cover letters. What are some of the advantages of developing good cover letters? What are some of the costs of a bad cover letter?

The guidelines for developing cover letters include:

(i) Personalization of the cover letter
(ii) Identification of the organization
(iii) Clear statement of the study's purpose and importance
(iv) Anonymity and confidentiality
(v) General time frame of the study
(vi) Reinforcement of the importance of the respondent's participation
(vii) Acknowledgment of reasons for nonparticipation in the survey or interview
(viii) Time requirements and incentive
(ix) Completion date and where and how to return the survey
(x) Advance thank-you statement for willingness to participate

The advantages associated with a "good" cover letter are numerous, but the primary one is its ability to act as a behavioral incentive to get the respondent to complete and return the survey in a manner aligned with the deadlines for capturing data agreed upon by the research team and decision-makers.
The downside of a bad cover letter is negative fallout concerning what the project is about, the legitimacy of the project, and the image of the person and/or organization underwriting the request for communication.

4. Using the questions asked in evaluating any questionnaire design (see Exhibit 8.4), evaluate the Santa Fe Grill restaurant questionnaire. Write a one-page assessment.

Students' responses will vary, but all should follow the considerations given in Exhibit 8.4. These are as follows:

• Are the questions in the questionnaire appropriate and complete for addressing the research objectives?
• Will the questions as designed provide the data in a sufficient form to address each objective?
• Does the introduction section include a general description of the study?
• Are the instructions clear?
• Do the questions and scale measurements follow a logical order?
• Does the questionnaire begin with simple questions and then gradually lead to more difficult or sensitive questions?
• Are personal questions located at the end of the survey?
• Are questions that are psychological in nature toward the end of the survey or interview, but before demographics?
• Does each section use a single measurement format?
• Does the instrument end with a thank-you statement?

5. What are the critical issues involved in pretesting a questionnaire?

It is crucial to pretest a questionnaire, since no amount of input from the research team and client can ever be considered "final" until the instrument is subjected to a first litmus test; namely, getting it in front of the target population who will eventually be asked to fill it out in a more expensive and time-consuming fashion. The critical issue involved in pretesting a questionnaire is whether the respondents who fill it out (e.g., 15-30 in number) provide answers to the questions which are truly representative of the target population.
Sometimes, because of costs and time, nonprobability sampling methods are used to pretest a survey, making the results suspect. Even if random sampling methods are utilized for the pretest, there will always be those who doubt whether the "slice" of responses obtained from this initial group bears any value to the information problem and decision at hand. A rule of thumb is: "Better to pretest the survey than not to test it at all." It is rare that a survey returns from a pretest without needing subtle but important modifications before it "goes to market".

Chapter 9
Qualitative Data Analysis

ANSWERS TO REVIEW QUESTIONS

1. How are quantitative and qualitative data analyses different?

There are several ways in which these differ. First, the data that are analyzed in qualitative research include text and images rather than numbers. In quantitative research, the goal is to quantify the magnitude of variables and relationships, or to explain causal relationships; in qualitative analysis, the goal is deeper understanding. A second difference is that qualitative analysis is iterative, with researchers revisiting data and clarifying their thinking during each iteration. Third, quantitative analysis is driven entirely by researchers, while good qualitative research employs member checking, or asking key informants to verify the accuracy of research reports. Last, qualitative data analysis is inductive, which means that the theory grows out of the research process rather than preceding it, as it does in quantitative analysis.

2. Describe the three steps in qualitative data analysis and explain how and why these steps are iterative.

After data collection, there are three steps in analyzing qualitative data. Researchers move back and forth between these steps iteratively rather than going through them one step at a time. The steps are data reduction, constructing data displays, and drawing/verifying conclusions.
Data reduction consists of several interrelated processes: categorization and coding, theory development, and iteration and negative case analysis. Categorization is the process of coding and labeling sections of the transcripts or images into themes. The categories can then be integrated into a theory through iterative analysis of the data. Data displays are the second step; they picture findings in tables or figures so that the data can be more easily digested and communicated. After a rigorous iterative process, researchers can draw conclusions and verify their findings. During the verification/conclusion-drawing stage, researchers work to establish the credibility of their data analysis.

3. What are the interrelated steps in data reduction?

Data reduction involves categorization and coding, comparisons, integration, iteration and negative case analysis, and possibly tabulation.

4. How do you build theory in qualitative analysis?

Theory building in qualitative work is based on "grounded" theory, meaning that the theory is based on the data collected. This is accomplished through integration, the process of moving from identification of themes and categories to the investigation of relationships between categories. Relationships may be portrayed as circular or recursive, i.e., relationships in which a variable can both cause and be caused by another variable. In selective coding, researchers develop an overarching theme or category around which to build their storyline.

5. What is negative case analysis and why is it important to the credibility of qualitative analysis?

Negative case analysis involves deliberately looking for cases and instances that contradict the ideas and theories that researchers have been developing. It helps to establish boundaries and conditions for the theory being developed. The stance is one of skepticism.

6.
Give some specific examples of data displays and explain how they may be used in qualitative data analysis.

Displays may be tables or figures. They may be organized by themes or by informants. Figures may include flow diagrams, traditional box-and-arrow causal diagrams, diagrams that display circular or recursive relationships, trees displaying taxonomies, and consensus maps.

7. What are some of the threats to drawing credible conclusions in qualitative data analysis?

Possible threats are listed in Exhibit 9.9 in the text and include the following:

• Salience of first impressions or of observations of highly concrete or dramatic incidents
• Selectivity, which leads to overconfidence in some data, especially when trying to confirm a key finding
• Co-occurrences taken as correlations or even as causal relationships
• Extrapolating the rate of instances in the population from those observed
• Not taking account of the fact that information from some sources may be unreliable

8. What is triangulation and what is its role in qualitative analysis?

Triangulation establishes credibility by approaching the analysis from multiple perspectives, including using multiple methods of data collection and analysis, multiple data sets, multiple researchers, multiple time periods, and different kinds of relevant research informants.

9. What are the various ways that credibility can be established in qualitative analysis?

Credibility can be established using several tools. Cross-researcher reliability is the degree of similarity in the coding of the same data by different researchers. Triangulation, as described in question 8, establishes credibility by approaching the analysis from multiple perspectives.
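Cross-researcher reliability is often quantified with an agreement statistic such as Cohen's kappa, which corrects raw percent agreement for the agreement two coders would reach by chance. The statistic is not introduced in the text, and the theme codes below are purely illustrative; this is only a sketch of the idea:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Observed agreement corrected for chance agreement between two coders."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Chance agreement: probability both coders assign the same category
    # if each codes at random according to their own category frequencies.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned to ten transcript segments by two coders.
coder1 = ["price", "service", "service", "price", "ambience",
          "price", "service", "ambience", "price", "service"]
coder2 = ["price", "service", "price", "price", "ambience",
          "price", "service", "ambience", "service", "service"]
kappa = cohens_kappa(coder1, coder2)
# Observed agreement is 8/10 here, but kappa is lower (about 0.69)
# because some of that agreement would occur by chance.
```

A kappa near 1 indicates strong agreement between coders; values near 0 indicate agreement no better than chance.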
Peer review is a process in which external qualitative methodology or topic area specialists are asked to review the research analysis.

ANSWERS TO DISCUSSION QUESTIONS

1. Compare and contrast reliability and validity in quantitative analysis with the concept of credibility used in qualitative analysis. Do you believe the concepts are really similar? Why or why not?

The concepts are similar, but the methods of demonstrating reliability and validity are different. However, the methods have to be different given the differences between qualitative and quantitative methods, data, and analysis. As stated in the text, "Quantitative researchers establish credibility in data analysis by demonstrating their results are reliable (measurement and findings are stable, repeatable, and generalizable) and valid (the research measures what it was intended to measure). In contrast, the credibility of qualitative data analysis is based on the rigor of the actual strategies used for collecting, coding, analyzing, and presenting data when generating theory."

2. Let's say your college has as a goal increasing participation in student activities on campus. To help in this effort, you are doing an ethnographic study to better understand why students do or do not participate in student activities. How would you plan for triangulation in this study?

Triangulation establishes credibility by approaching the analysis from multiple perspectives, including using multiple methods of data collection and analysis, multiple data sets, multiple researchers, multiple time periods, and different kinds of relevant research informants. That means we could integrate triangulation in many ways. We could interview a variety of students, including undergraduates and graduate students, students from different majors, in-state and out-of-state students, men and women, Greeks and non-Greeks, and so on. We could also use multiple methods, including interviews, observation, focus groups, and ethnographic research.
We could use several researchers in collecting the data. Lastly, we could measure participation issues for several types of student activities at different times of the semester, times of day, and times of year.

3. EXPERIENCE THE INTERNET. Ask permission from three people to analyze the content of their Facebook or similar site (of course, you should promise them anonymity). If the sites are extensive, you may need a plan to sample a portion of the website (at least 5-10 representative pages). As you go through the sites, develop a coding sheet. What did you learn about social networking sites from your coding? What content categories are the most frequently occurring? What do you conclude based on the fact that these categories are the most frequently occurring at these three websites? Are there any implications of your findings for advertisers that are considering advertising on social networking sites?

Students' answers will vary, but some common themes include friendships, recent activities, and groups and clubs. In analyzing content, students should consider the content shown visually in pictures, the content provided about each person, and the textual content on "The Wall." Without being given access, one could not see private messages. There are many implications for advertisers, including reference groups, opinion leadership, brands of interest, and activities and opinions.

4. An anthropology professor over the age of 50 took a year of leave and spent the year undercover as a student at her college. She did not take classes in her own department, but instead signed up, attended classes, took exams, and wrote papers just like any other freshman. She lived in the dorm for part of the year. At the end of the year, she wrote a book entitled My Freshman Year, which details her findings. In reporting the research methodology of her study, what methodological strengths and weaknesses should the anthropology professor address?
This book is a very interesting read (for both teachers and students), but it is subject to limitations. Here are some to consider. The professor is much older than the average freshman. While she lived and studied in the culture, she was clearly, visibly different from a typical freshman. This would without doubt color her experiences as well as her perceptions of those experiences. She also embedded herself in the culture of only a single school, in a single dorm, for a single year. This limits the conclusions she can draw about freshman life, because it may differ in other environments.

5. Conduct three or four in-depth interviews with college students who are not business majors. You will be conducting an investigation of the associations that college students make with the word marketing. You can ask students to bring 5 to 10 images of any type (pictures, cutouts from magazines) that most essentially picture what they think marketing is all about. You may also conduct a word association exercise with the students. During the interview, you may want to tell informants that you are an alien from another planet and have never heard of marketing. Based on your interviews, develop a diagram that shows the concepts that students relate to marketing. Draw a circle around the most frequently occurring connections in your diagram. What did you learn about how college students view marketing?

To conduct the interviews, you will want to structure them in a way that allows for open-ended responses while also guiding the conversation toward the associations students make with the word "marketing." Here is a suggested approach:

1. Introduction:
• Introduce yourself and explain the purpose of the interview.
• Briefly explain that you are interested in understanding how college students perceive marketing.

2.
Icebreaker:
• Ask a general question to make the participants feel comfortable, such as: "Can you tell me a bit about yourself and what you're studying in college?"

3. Exploration of Understanding:
• Begin the word association exercise by asking: "When you hear the word 'marketing,' what comes to mind?" Encourage them to provide the first words or phrases that pop into their heads.
• Dive deeper into each association they mention. Ask follow-up questions like, "Can you tell me more about why you associate marketing with that?"
• Allow them to elaborate freely on their thoughts and associations.

4. Image Association:
• Invite participants to share the images they brought and explain why they chose them in relation to marketing.
• Encourage discussion around each image, probing for insights into their perceptions and understanding of marketing.

5. Closure:
• Thank the participants for their time and insights.
• Assure them that their responses will remain confidential.
• Let them know that their input will contribute to a broader understanding of how marketing is perceived by college students.

Based on the interviews, you will develop a diagram that illustrates the concepts and associations students relate to marketing. One simple format is a bubble diagram: each bubble represents a concept or association related to marketing mentioned by the participants, and the size of the bubble corresponds to the frequency of occurrence across the interviews. The circle around the most frequently occurring connections highlights the central themes or most common associations made by the students.

What you might learn about how college students view marketing:
• Common associations might include brands, advertising, social media, consumerism, persuasion, creativity, and sales.
• There may be varying perceptions of marketing, influenced by personal experiences, academic exposure, and cultural factors.
• Visual representations of marketing may encompass a wide range of images, such as logos, advertisements, products, and people engaging with brands.

Overall, this exercise can provide valuable insights into the diverse perspectives and interpretations of marketing among college students, informing future strategies for communication and engagement within this demographic.

Chapter 10
Preparing Data for Quantitative Analysis

Answers to Hands-On Exercises

1. Should the Deli Depot questionnaire have screening questions?

Screening questions are used in a study to make certain that only qualified respondents are included in the survey. The three screening questions provided in the Deli Depot questionnaire make certain that only qualified respondents—people who eat out, people who eat at Deli Depot, and people who have not completed a restaurant questionnaire on Deli Depot before—were included in the survey; otherwise the purpose of the study would not be served. Therefore, yes, the Deli Depot questionnaire should have these screening questions.

2. Run a frequency count on variable X3—Competent Employees. Do the customers perceive employees to be competent?

To answer the question of whether customers perceive employees to be competent, we need to analyze the frequency distribution of responses for variable X3—Competent Employees. First, we would run a frequency count on X3, which contains responses indicating the level of perceived competence of employees. This variable may contain categories or levels such as "Very Competent," "Competent," "Somewhat Competent," "Not Competent," etc.
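The frequency count just described can be sketched in a few lines. The response labels and the data below are hypothetical, since the actual Deli Depot data set is not reproduced here:

```python
from collections import Counter

# Hypothetical responses to X3 - Competent Employees (labels illustrative only).
x3 = ["Competent", "Very Competent", "Competent", "Not Competent",
      "Somewhat Competent", "Very Competent", "Competent", "Competent"]

counts = Counter(x3)       # frequency count for each response category
n = len(x3)                # total number of responses

# Percentage perceiving employees as competent: frequency of "Competent"
# or "Very Competent" responses, divided by the total, times 100.
pct_competent = 100 * (counts["Competent"] + counts["Very Competent"]) / n
```

The same frequency table is what a statistics package such as SPSS or a spreadsheet would produce for X3; the point is simply that the percentage of favorable categories answers the question directly.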
After obtaining the frequency distribution, we would examine the proportion or percentage of customers who perceive employees to be competent. This can be calculated by dividing the frequency of responses indicating competence (e.g., "Very Competent" and "Competent") by the total number of responses and multiplying by 100 to obtain the percentage. Additionally, we can assess the distribution of responses across the different levels of perceived competence to understand the overall sentiment. In conclusion, if a substantial proportion of customers rates employees as "Competent" or "Very Competent," the frequency count on X3 indicates a positive perception of employee competence among customers.

3. Consider the guidelines on questionnaire design you learned in Chapter 8. How would you improve the Deli Depot questionnaire?

Students' answers to this question will differ. A suggested answer: Deli Depot wants to know the important factors that people use to select a place to eat and how it compares on those factors with its direct competitors. How do they know that the factors offered in the list are important selection factors? Why not give customers a list of many possible factors? There is also nothing in the survey to indicate how people feel about the factors Deli Depot considers to be its competitive advantages—special sauces, supplementary menu items, and quick delivery. Those need to be evaluated, too.
Once they have a validated list of important factors, they can ask for a comparative evaluation of each, for example: "How does Deli Depot stack up against Subway and the other local competitors?" Finally, once those data have been evaluated, Deli Depot can focus on emphasizing its strengths and fixing its important weaknesses.

ANSWERS TO REVIEW QUESTIONS

1. Briefly describe the process of data validation. Specifically discuss the issues of fraud, screening, procedure, completeness, and courtesy.

Data validation is akin to a system of "checks and balances" that allows the research team to determine whether interviews and observations were conducted with accuracy and precision. The team must audit the data with a keen eye on the following five issues:
(i) Fraud: was the person actually interviewed, or was the interview falsified?
(ii) Screening: was a qualified respondent interviewed?
(iii) Procedure: was the data collected in the correct area?
(iv) Completeness: were all questions asked of all respondents?
(v) Courtesy: were respondents treated with propriety and respect during the interview process?

2. What are the differences between data validation, data editing, and data coding?

Validating data is about ensuring the interviews or observations were conducted correctly and free of fraud on the part of interviewers. Data editing and data coding, unlike data validation, approach the data captured from respondents in a "what we see is what we get" fashion; the errors at issue in editing and coding are administrative errors rather than lapses in procedure or character. Data editing involves checking survey instruments for mistakes that may have occurred during data collection. For example, when a questionnaire is reviewed, the responses may suggest the data were provided by a person who is employed.
However, in an earlier question the interviewer checked "Unemployed" by mistake. This is an administrative error, not indicative of the underhanded issues of character that surround practices such as "curbstoning." Data coding involves grouping the various responses from the survey instrument and assigning values to them.

3. Explain the differences between establishing codes for open-ended questions and for closed-ended questions.

Open-ended questions are useful to researchers and clients because they allow us to probe answers more deeply for clarity and specifics. The downside of establishing codes for open-ended questions is that the analysis is far more protracted, and open-ended responses are always open to interpretation by the team analyzing the data. Ultimately, the analysis of an open-ended question ends up in the same place as that of a closed-ended question: the researcher assigns a numerical code and attaches a coded value to each response. The challenge lies in taking all the many and varied responses to open-ended questions (sometimes illegible, often downright nasty or even foul, most of the time left blank), generating a master list of potential responses, and assigning values to those responses. When developing a survey instrument, class participants are often enticed by the opportunity to include three or four open-ended questions. However, once confronted with the data those questions capture, they discover that coding it is a painful and time-consuming process.

4. Briefly describe the process of data entry. What changes in technology have simplified this procedure?

Data entry has often been characterized as "grunt work" in the marketing research industry, and it is quite likely class participants will approach the topic (and the task itself) with similar perceptions.
However, changes in technology such as touch-screen capability, hand-held electronic pointers, hand-held devices (e.g., Palm Pilots), and optical scanners have done a lot to alter this unsavory (and quite unjustified) image of data entry. Great leaps have also been made in the online arena by companies such as Mercator Corporation, whose "Snap" software provides a seamless transition between data entry by respondents and analysis by the research team, effectively making data entry by "grunts" a thing of the past. Data entry comprises all the tasks involved with people (your class participants) directly inputting coded data into a software package (e.g., Minitab, SAS, SPSS). This essential and important step in the research process allows an analyst to manipulate and transform raw data into useful information.

5. What is the purpose of a simple one-way tabulation? How does this relate to a one-way frequency table?

One-way tabulations can be used to:
• Determine the degree of nonresponse to each question in the survey (nonresponse can suggest either a "bad" question or that the survey was given to a respondent not aligned with the target population)
• Ferret out simple slip-ups in the data entry process
• Profile a host of summary statistics of interest to the research team

A one-way frequency table is simply the output of a one-way tabulation: the counts of responses for each value of a single variable.

ANSWERS TO DISCUSSION QUESTIONS

1. Explain the importance of following the sequence for data preparation and analysis described in Exhibit 10.1.

The sequence moves from error detection through validation, editing, coding, data entry, and tabulation to analysis and interpretation. One could not begin to interpret without first ensuring that the data set is free of errors, or the results would not be accurate.

2. Identify four problems a researcher might find while screening questionnaires and preparing data for analysis.

The four areas that researchers must consider when preparing the data are listed below:
• Were the proper questions asked?
• Were the answers recorded accurately?
• Were the respondents correctly screened?
• Were open-ended questions recorded completely and accurately?

3. How can data tabulation help researchers better understand and report findings?

Data tabulations help researchers to find indications of missing data, determine valid percentages, and offer summary statistics. Missing data are indicators of problems in the data set, while percentages and summary statistics are important to presenting overviews of the findings.

4. SPSS Exercise. Using SPSS and the Santa Fe Grill employee database, develop frequencies, means, modes, and medians for all the relevant variables on the questionnaire.

This is an excellent way to introduce students to using the SPSS software.

Solution Manual for Essentials of Marketing Research, Joseph F. Hair, Mary Celsi, Robert P. Bush, David J. Ortinau, ISBN 9780078028816, 9780078112119
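The summary statistics requested in the SPSS exercise (question 4) can also be previewed outside SPSS. The sketch below uses only Python's standard library, and the 7-point ratings are hypothetical stand-ins for one variable from the Santa Fe Grill employee database, which SPSS would read directly.

```python
import statistics

# Hypothetical 7-point ratings standing in for one Santa Fe Grill
# employee-survey variable; real values would come from the SPSS database.
ratings = [5, 6, 6, 7, 4, 6, 5, 7, 6, 3]

# One-way frequency table: count of responses at each scale value
freq = {v: ratings.count(v) for v in sorted(set(ratings))}

mean = statistics.mean(ratings)
mode = statistics.mode(ratings)      # most frequent value
median = statistics.median(ratings)  # middle value of the sorted ratings

print("value -> count:", freq)
print(f"mean={mean:.2f}  mode={mode}  median={median}")
```

Running the same four statistics per variable is exactly what SPSS's Frequencies procedure produces, so this sketch can help students check their SPSS output on a small slice of the data.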
