CHAPTER ELEVEN
BASIC DATA ANALYSIS FOR QUANTITATIVE RESEARCH

LEARNING OBJECTIVES (PPT slide 11-2)
1. Explain measures of central tendency and dispersion.
2. Describe how to test hypotheses using univariate and bivariate statistics.
3. Apply and interpret analysis of variance (ANOVA).
4. Utilize perceptual mapping to present research findings.

KEY TERMS AND CONCEPTS
1. Analysis of variance (ANOVA)
2. Chi-square (X2) analysis
3. Follow-up test
4. F-test
5. Independent samples
6. Interaction effect
7. Mean
8. Median
9. Mode
10. n-way ANOVA
11. Perceptual mapping
12. Range
13. Related samples
14. Standard deviation
15. t-test
16. Variance

CHAPTER SUMMARY BY LEARNING OBJECTIVES

Explain measures of central tendency and dispersion. The mean is the most commonly used measure of central tendency and describes the arithmetic average of the values in a sample of data. The median represents the middle value of an ordered set of values. The mode is the most frequently occurring value in a distribution of values. All these measures describe the center of the distribution of a set of values. The range defines the spread of the data. It is the distance between the smallest and largest values of the distribution. The standard deviation describes the average distance of the distribution values from the mean. A large standard deviation indicates a distribution in which the individual values are spread out and are relatively far from the mean.

Describe how to test hypotheses using univariate and bivariate statistics. Marketing researchers often form hypotheses regarding population characteristics based on sample data. The process typically begins by calculating frequency distributions and averages, and then moves on to actually test the hypotheses. When the hypothesis testing involves examining one variable at a time, researchers use a univariate statistical test. When the hypothesis testing involves two variables, researchers use a bivariate statistical test.
The Chi-square statistic permits us to test for statistically significant differences between the frequency distributions of two or more groups. Categorical data from questions about sex, race, profession, and so forth can be examined and tested for statistical differences. In addition to examining frequencies, marketing researchers often want to compare the means of two groups. There are two possible situations when means are compared. In independent samples the respondents come from different populations, so their answers to the survey questions do not affect each other. In related samples, the same respondent answers several questions, so comparing answers to these questions requires the use of a paired-samples t-test. Questions about mean differences in independent samples can be answered by using a t-test statistic.

Apply and interpret analysis of variance (ANOVA). Researchers use ANOVA to determine the statistical significance of the difference between two or more means. The ANOVA technique calculates the variance of the values between groups of respondents and compares it with the variance of the responses within the groups. If the between-group variance is significantly greater than the within-group variance, as indicated by the F-ratio, the means are significantly different. The statistical significance between means in ANOVA is detected through the use of a follow-up test. The Scheffé test is one type of follow-up test. The test examines the differences between all possible pairs of sample means against a high and low confidence range. If the difference between a pair of means falls outside the confidence interval, then the means can be considered statistically different.

Utilize perceptual mapping to present research findings. Perceptual mapping is used to develop maps that show perceptions of respondents visually. These maps are graphic representations that can be produced from the results of several multivariate techniques.
The maps provide a visual representation of how companies, products, brands, or other objects are perceived relative to each other on key attributes such as quality of service, food taste, and food preparation.

CHAPTER OUTLINE

Opening Vignette: Data Analysis Facilitates Smarter Decisions

The opening vignette in this chapter describes the value of data analysis. In his book Thriving on Chaos, Tom Peters says, “We are drowning in information and starved for knowledge.” The amount of information available for business decision making has grown tremendously over the last decade. But until recently, much of that information just disappeared. It either was not used or was discarded because collecting, storing, extracting, and interpreting it was too expensive. Now, decreases in the cost of data collection and storage, development of faster data processors and user-friendly client–server interfaces, and improvements in data analysis and interpretation made possible through data mining enable businesses to convert what had been a “waste by-product” into a new resource to improve business and marketing decisions. Data analysis facilitates the discovery of interesting patterns in databases that are difficult to identify and have potential for improving decision making and creating knowledge. Data analysis methods are widely used today for commercial purposes.

I. Value of Statistical Analysis (PPT slide 11-3)

Once data have been collected and prepared for analysis, several statistical procedures can help to better understand the responses. It can be difficult to understand the entire set of responses because there are too many numbers to look at. Consequently, almost all data needs summary statistics to describe the information it contains. Basic statistics and descriptive analysis achieve this purpose.

A. Measures of Central Tendency (PPT slides 11-4 and 11-5)

Frequency distributions can be useful for examining the different values for a variable.
Frequency distribution tables are easy to read and provide a great deal of basic information. There are times, however, when the amount of detail is too great. In such situations the researcher needs a way to summarize and condense all the information in order to get at the underlying meaning. Researchers use descriptive statistics to accomplish this task. The mean, median, and mode are measures of central tendency (Exhibit 11.1; PPT slide 11-5). These measures locate the center of the distribution. For this reason, the mean, median, and mode are sometimes also called measures of location.

The mean is the arithmetic average of the sample; all values of a distribution of responses are summed and divided by the number of valid responses. It is the most commonly used measure of central tendency. It can be calculated when the data scale is either interval or ratio. The mean is a very robust measure of central tendency. It is fairly insensitive to data values being added or deleted. It can be subject to distortion, however, if extreme values are included in the distribution.

The median is the middle value of a rank-ordered distribution; exactly half of the responses are above and half are below the median value. If the number of data observations is even, the median is generally considered to be the average of the two middle values. If there is an odd number of observations, the median is the middle value. The median is especially useful as a measure of central tendency for ordinal data and for data that are skewed to either the right or left.

The mode is the most common value in the set of responses to a question; that is, the response most often given to a question. The mode is the value that represents the highest peak in the distribution’s graph. The mode is especially useful as a measure for data that have been grouped into categories.
Each measure of central tendency describes a distribution in its own manner, and each measure has its own strengths and weaknesses. For nominal data, the mode is the best measure. For ordinal data, the median is generally best. For interval or ratio data, the mean is appropriate, except when there are extreme values within the interval or ratio data, which are referred to as outliers. In this case, the median and the mode are likely to provide more information about the central tendency of the distribution.

B. SPSS Applications—Measures of Central Tendency

The instructor can use the Santa Fe Grill database with the SPSS software to calculate the measures of central tendency. The dialog boxes for the sequence are shown in Exhibit 11.2.

C. Measures of Dispersion (PPT slides 11-6 and 11-7)

Measures of dispersion describe how close to the mean or other measure of central tendency the rest of the values in the distribution fall (PPT slide 11-6). Two measures of dispersion that describe the variability in a distribution of numbers are the:
Range
Standard deviation

The range is the distance between the smallest and largest values in a set of responses. Another way to think about it is that the range identifies the endpoints of the distribution of values. It is more often used to describe the variability of open-ended questions where the respondents, not the researchers, are defining the range by their answers.

The standard deviation is the average distance of the distribution values from the mean. The difference between a particular response and the distribution mean is called a deviation. Since the mean of a distribution is a measure of central tendency, there should be about as many values above the mean as there are below it (particularly if the distribution is symmetrical). Consequently, if we subtracted each value in a distribution from the mean and added up the deviations, the result would be close to zero (the positive and negative results would cancel each other out).
The solution to this difficulty is to square the individual deviations before we add them up (squaring a negative number produces a positive result). To calculate the estimated standard deviation, we use the formula below:

s = √[ Σ(xi − x̄)² / (n − 1) ]

Once the sum of the squared deviations is determined, it is divided by the number of respondents minus 1. The number 1 is subtracted from the number of respondents to help produce an unbiased estimate of the standard deviation. The result of dividing the sum of the squared deviations is the average squared deviation. To convert the result to the same units of measure as the mean, we take the square root of the answer. This produces the estimated standard deviation of the distribution.

Sometimes the average squared deviation is also used as a measure of dispersion for a distribution. The average squared deviation, called the variance, is used in a number of statistical processes. Since the estimated standard deviation is the square root of the average squared deviations, it represents the average distance of the values in a distribution from the mean. If the estimated standard deviation is large, the responses in a distribution of numbers do not fall very close to the mean of the distribution. If the estimated standard deviation is small, you know that the distribution values are close to the mean. Another way to think about the estimated standard deviation is that its size tells you something about the level of agreement among the respondents when they answered a particular question.

Together with the measures of central tendency, these descriptive statistics can reveal a lot about the distribution of a set of numbers representing the answers to an item on a questionnaire. Often, however, marketing researchers are interested in more detailed questions that involve more than one variable at a time.

D. SPSS Applications—Measures of Dispersion

The text uses the restaurant database with the SPSS software to calculate measures of dispersion.
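The calculations above can be sketched with Python's standard library rather than SPSS; the survey responses below are hypothetical values chosen only for illustration.

```python
# A minimal sketch of the descriptive statistics discussed above,
# using Python's standard library; the responses are hypothetical.
import statistics

responses = [2, 3, 3, 4, 5, 5, 5, 7]  # hypothetical survey responses

mean = statistics.mean(responses)              # arithmetic average
median = statistics.median(responses)          # middle value of the ordered set
mode = statistics.mode(responses)              # most frequent value
value_range = max(responses) - min(responses)  # distance between the extremes

# Estimated (sample) standard deviation: sum the squared deviations from
# the mean, divide by n - 1, then take the square root.
n = len(responses)
variance = sum((x - mean) ** 2 for x in responses) / (n - 1)
std_dev = variance ** 0.5

print(mean, median, mode, value_range)
print(round(std_dev, 4), round(statistics.stdev(responses), 4))  # both agree
```

Note the n − 1 divisor in the variance line, matching the unbiased-estimate discussion above; `statistics.stdev` uses the same divisor.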
Exhibit 11.3 shows the output for the measures of dispersion (PPT slide 11-7).

E. Preparation of Charts (PPT slide 11-8)

Many types of charts and graphics can be prepared easily using the SPSS software. Charts and other visual communication approaches should be used whenever practical. They help information users to quickly grasp the essence of the results developed in data analysis, and also can be an effective visual aid to enhance the communication process and add clarity and impact to research reports and presentations. Exhibit 11.4 shows a frequency tabulation table. A bar chart shows tabulated data in the form of bars that may be horizontally or vertically oriented. Bar charts are excellent tools to depict both absolute and relative magnitudes, differences, and change. Exhibit 11.5 is an example of a vertical bar chart based on the data from Exhibit 11.4. Marketing researchers need to exercise caution when using charts and figures to explain data. It is possible to misinterpret information in a chart and lead marketing research information users to inappropriate conclusions.

II. How to Develop Hypotheses (PPT slides 11-9 and 11-10)

Measures of central tendency and dispersion are useful tools for marketing researchers. But researchers often have preliminary ideas regarding data relationships based on the research objectives. These ideas are derived from previous research, theory, and/or the current business situation, and typically are called hypotheses. A hypothesis is an unproven supposition or proposition that tentatively explains certain facts or phenomena. A hypothesis also may be thought of as an assumption about the nature of a particular situation. Statistical techniques enable us to determine whether the proposed hypotheses can be confirmed by the empirical evidence. Hypotheses are developed prior to data collection, as part of the research plan (PPT slide 11-9).
When we test hypotheses that compare two or more groups, if the groups are different subsets of the same sample then the two groups must be considered related samples for conducting statistical tests. In contrast, if we assume the groups are from separate populations then the different groups are considered independent samples. In both situations the researcher is interested in determining if the two groups are different, but different statistical tests are appropriate for each situation.

The null hypothesis states that there is no difference in the group means. It is based on the notion that any change from the past is due entirely to random error. Statisticians and marketing researchers typically test the null hypothesis. Another hypothesis, called the alternative hypothesis, states the opposite of the null hypothesis. The alternative hypothesis is that there is a difference between the group means. If the null hypothesis is accepted, we conclude there has been no change in the status quo. But if the null hypothesis is rejected and the alternative hypothesis is accepted, the conclusion is that there has been a change in behavior, attitudes, or some similar measure.

III. Analyzing Relationships of Sample Data (PPT slides 11-11 to 11-27)

A. Sample Statistics and Population Parameters (PPT slide 11-11)

The purpose of inferential statistics is to make a determination about a population on the basis of a sample from that population. Sample statistics are measures obtained directly from the sample or calculated from the data in the sample. A population parameter is a variable or some sort of measured characteristic of the entire population. Sample statistics are useful in making inferences regarding the population’s parameters. Generally, the actual population parameters are unknown since the cost to perform a true census of almost any population is prohibitive. A frequency distribution displaying the data obtained from the sample is commonly used to summarize the results of the data collection process.
When a frequency distribution displays a variable in terms of percentages, this distribution represents proportions within a population. The proportion may be expressed as a percentage, a decimal value, or a fraction.

B. Choosing the Appropriate Statistical Technique (PPT slides 11-12 and 11-13)

After the researcher has developed the hypotheses and selected an acceptable level of risk (statistical significance), the next step is to test the hypotheses. To do so, the researcher must select the appropriate statistical technique. A number of statistical techniques can be used to test hypotheses. Several considerations influence the choice of a particular technique:
The number of variables
The scale of measurement
Parametric versus nonparametric statistics

The number of variables examined together is a major consideration in the selection of the appropriate statistical technique. Univariate statistics use only one variable at a time to generalize about a population from a sample. Often researchers will need to examine many variables at the same time to represent the real world and fully explain relationships in the data. In such cases, multivariate statistical techniques are required. Exhibit 11.6 provides an overview of the types of scales used in different situations (PPT slide 11-13). With ordinal data, only the median, percentile, and Chi-square can be used.

There are two major types of statistics:
Parametric—when the data are measured using an interval or ratio scale and the sample size is large, parametric statistics are appropriate. It is also assumed the sample data are collected from populations with normal (bell-shaped) distributions.
Nonparametric—when a normal distribution cannot be assumed, the researcher must use nonparametric statistics.
Moreover, when data are measured using an ordinal or nominal scale it is generally not appropriate to assume that the distribution is normal and, therefore, nonparametric or distribution-free statistics should be used.

After considering the measurement scales and data distributions, there are three approaches for analyzing sample data, based on the number of variables:
Univariate statistics—analyze only one variable at a time.
Bivariate statistics—analyze two variables.
Multivariate statistics—examine many variables simultaneously.

C. Univariate Statistical Tests (PPT slide 11-14)

Univariate tests of significance are used to test hypotheses when the researcher wishes to test a proposition about a sample characteristic against a known or given standard. The following are some examples of propositions:
The new product or service will be preferred by 80 percent of our current customers.
The average monthly electric bill in Miami, Florida, exceeds $250.00.
The market share for Community Coffee in south Louisiana is at least 70 percent.
More than 50 percent of current Diet Coke customers will prefer the new Diet Coke that includes a lime taste.

These propositions can be translated into null hypotheses and can be tested. Hypotheses are developed based on theory, previous relevant experiences, and current market conditions. The process of testing hypotheses regarding population characteristics based on sample data often begins by calculating frequency distributions and averages, and then moves on to further analysis that actually tests the hypotheses. When the hypothesis testing involves examining one variable at a time, it is referred to as a univariate statistical test. When the hypothesis testing involves two variables, it is called a bivariate statistical test. The null and alternative hypotheses must be developed. Then the level of significance for rejecting the null hypothesis and accepting the alternative hypothesis must be selected.
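A univariate test like the electric-bill proposition above can be sketched with SciPy rather than SPSS; the bill amounts below are hypothetical, and `ttest_1samp` with `alternative="greater"` is one way to run the one-tailed test.

```python
# A hedged sketch of a univariate (one-sample) hypothesis test using SciPy;
# the monthly bill amounts are hypothetical data, not from the text.
from scipy import stats

# H0: mean monthly electric bill in Miami = $250; Ha: mean > $250
bills = [262, 255, 248, 271, 259, 244, 268, 275, 251, 266]

t_stat, p_value = stats.ttest_1samp(bills, popmean=250, alternative="greater")

alpha = 0.05  # chosen level of significance
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")
```

The steps mirror the text: state the null and alternative hypotheses, pick a significance level, then run the test and compare p to alpha.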
At that point, the researchers can conduct the statistical test and determine the answer to the research question.

D. SPSS Application—Univariate Hypothesis Test (PPT slide 11-15)

Using the SPSS software, researchers can test the responses in the Santa Fe Grill database to find the answer to the research questions. The SPSS output is shown in Exhibit 11.7 (PPT slide 11-15).

E. Bivariate Statistical Tests (PPT slide 11-16)

Bivariate statistical tests compare the characteristics of two groups or two variables (PPT slide 11-16). There are three types of bivariate hypothesis tests: Chi-square, which is used with nominal data; the t-test, which compares two means; and analysis of variance (ANOVA), which compares three or more means. The latter two are used with either interval or ratio data.

F. Cross-Tabulation (PPT slides 11-17 and 11-18)

Cross-tabulation is useful for examining relationships and reporting the findings for two variables. The purpose of cross-tabulation is to determine if differences exist between subgroups of the total sample. In fact, cross-tabulation is the primary form of data analysis in some marketing research projects. To use cross-tabulation the researcher must understand how to develop a cross-tabulation table and how to interpret the outcome. Cross-tabulation is one of the simplest methods for describing sets of relationships. A cross-tabulation is a frequency distribution of responses on two or more sets of variables. To conduct cross-tabulation, the responses for each of the groups are tabulated and compared. Chi-square (X2) analysis enables us to test whether there are any statistical differences between the responses for the groups. Researchers can use the Chi-square test to determine whether responses observed in a survey follow the expected pattern. For example, Exhibit 11.8 shows a cross-tabulation between gender and respondents’ recall of restaurant ads (PPT slide 11-18).
When constructing a cross-tabulation table, the researcher selects the variables to use when examining relationships. Selection of variables should be based on the objectives of the research project and the hypotheses being tested. But in all cases remember that Chi-square is the statistic used to analyze nominal (count) or ordinal (ranking) scaled data. Paired variable relationships (for example, gender of respondent and ad recall) are selected on the basis of whether the variables answer the research questions in the research project and are either nominal or ordinal data. Demographic variables or lifestyle/psychographic characteristics are typically the starting point in developing cross-tabulations. These variables are usually the columns of the cross-tabulation table, and the rows are variables like purchase intention, usage, or actual sales data. Cross-tabulation tables show percentage calculations based on column variable totals. Thus, the researcher can make comparisons of behaviors and intentions for different categories of predictor variables such as income, sex, and marital status.

Cross-tabulation provides the research analyst with a powerful tool to summarize survey data. It is easy to understand and interpret and can provide a description of both total and subgroup data. Yet the simplicity of this technique can create problems. It is easy to produce an endless variety of cross-tabulation tables. In developing these tables, the analyst must always keep in mind both the project objectives and the specific research questions of the study.

G. Chi-Square Analysis (PPT slide 11-19)

Chi-square (X2) analysis enables researchers to test for statistical significance between the frequency distributions of two or more nominally scaled variables in a cross-tabulation table to determine if there is any association between the variables (PPT slide 11-19). Categorical data from questions about sex, education, or other nominal variables can be tested with this statistic.
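A cross tabulation like the gender-by-ad-recall table described above can be sketched with pandas (not the SPSS workflow the text uses); the responses below are hypothetical.

```python
# A minimal sketch of building a cross-tabulation table with pandas;
# the gender and ad-recall responses are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["female", "male", "female", "male",
               "female", "male", "female", "male"],
    "recall": ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
})

# Column percentages, as described above, let the researcher compare
# subgroups within each column category.
table = pd.crosstab(df["gender"], df["recall"], normalize="columns")
print(table.round(2))
```

Dropping `normalize="columns"` yields the raw counts instead of column percentages.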
Chi-square analysis compares the observed frequencies (counts) of the responses with the expected frequencies. The Chi-square statistic tests whether or not the observed data are distributed the way we would expect them to be, given the assumption that the variables are not related. The expected cell count is a theoretical value, while the observed cell count is the actual cell count based on your study. The Chi-square statistic answers questions about relationships between nominally scaled data that cannot be analyzed with other types of statistical analysis, such as ANOVA or t-tests.

H. Calculating the Chi-Square Value (PPT slide 11-19)

The formula to calculate the Chi-square value is shown below:

X2 = Σ [(Observedi − Expectedi)² / Expectedi], summed over all n cells

Where:
Observedi = observed frequency in cell i
Expectedi = expected frequency in cell i
n = number of cells

Some marketing researchers call Chi-square a “goodness of fit” test. That is, the test evaluates how closely the actual frequencies “fit” the expected frequencies. When the differences between observed and expected frequencies are large, you have a poor fit and you reject your null hypothesis. When the differences are small, you have a good fit and you would accept the null hypothesis that there is no relationship between the two variables.

One word of caution is necessary in using Chi-square. The Chi-square results will be distorted if more than 20 percent of the cells have an expected count of less than 5, or if any cell has an expected count of less than 1. In such cases, you should not use this test. SPSS will tell you if these conditions have been violated. One solution to small counts in individual cells is to collapse them into fewer cells to get larger counts.

I. SPSS Application—Chi-Square

Based on their conversations with customers, the owners of the Santa Fe Grill believe that female customers drive to the restaurant from farther away than do male customers. The Chi-square statistic can be used to determine if this is true.
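The Chi-square calculation just described can be sketched with SciPy; the observed gender-by-distance counts below are hypothetical, not the Santa Fe Grill data, and `chi2_contingency` computes both the expected counts and the statistic.

```python
# A hedged sketch of a Chi-square test on a cross tabulation using SciPy;
# the counts (gender by distance driven) are hypothetical.
from scipy.stats import chi2_contingency

# Rows: female, male; columns: under 5 miles, 5 miles or more
observed = [[30, 45],
            [40, 35]]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
print("Expected counts:", expected.round(1).tolist())
```

Note that SciPy applies Yates' continuity correction by default for a 2 x 2 table, so a hand calculation with the raw formula above will differ slightly; the expected counts also let you check the small-cell caution mentioned in the text.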
The SPSS results are shown in Exhibit 11.9.

J. Comparing Means: Independent Versus Related Samples (PPT slide 11-20)

In addition to examining frequencies, marketing researchers often want to compare the means of two groups (PPT slide 11-20). In fact, one of the most frequently examined questions in marketing research is whether the means of two groups of respondents on some attitude or behavior are significantly different. There are two types of situations when researchers compare means. The first is when the means are from independent samples—independent samples are two or more groups of responses that are tested as though they may come from different populations. The second is when the samples are related—related samples are two or more groups of responses that originated from the same population. In a related-sample situation, the marketing researcher must take special care in analyzing the information. Although the questions are independent, the respondents are the same. This is called a paired sample. When testing for differences in related samples the researcher must use what is called a paired-samples t-test.

K. Using the t-Test to Compare Two Means (PPT slide 11-21)

The t-test is a hypothesis test that utilizes the t distribution. It is used when the sample size is smaller than 30 and the standard deviation is unknown. The t value is a ratio of the difference between the two sample means and the standard error. The t-test provides a mathematical way of determining if the difference between the two sample means occurred by chance. The formula for calculating the t value is:

t = (x̄1 − x̄2) / sx̄1−x̄2

Where:
x̄1 = mean of sample 1
x̄2 = mean of sample 2
sx̄1−x̄2 = standard error of the difference between the two means

L. SPSS Application—Independent Samples t-Test

To illustrate the use of a t-test for the difference between two group means, the text uses the restaurant database.
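Both t-test situations described above can be sketched with SciPy instead of SPSS; all of the ratings below are hypothetical.

```python
# A hedged sketch of the two t-test situations discussed above using SciPy;
# the satisfaction and attribute ratings are hypothetical.
from scipy import stats

# Independent samples: ratings from two separate groups of respondents.
males = [5, 6, 4, 7, 5, 6, 5, 4]
females = [6, 7, 7, 5, 8, 6, 7, 6]
t_ind, p_ind = stats.ttest_ind(males, females)

# Related (paired) samples: the same respondents rating two attributes,
# so a paired-samples t-test is required.
food_quality = [6, 7, 5, 8, 6, 7]
service_speed = [5, 6, 5, 6, 5, 6]
t_pair, p_pair = stats.ttest_rel(food_quality, service_speed)

print(f"Independent: t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"Paired:      t = {t_pair:.2f}, p = {p_pair:.4f}")
```

The key design choice mirrors the text: `ttest_ind` treats the two groups as coming from different populations, while `ttest_rel` pairs the observations respondent by respondent.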
Based on their experiences observing customers in the restaurant, the Santa Fe Grill owners believe there are differences in the levels of satisfaction between male and female customers. To test this hypothesis the researchers can use the SPSS “Compare Means” program. The results are shown in Exhibit 11.10.

M. SPSS Application—Paired Samples t-Test (PPT slide 11-22)

Sometimes marketing researchers want to test for differences in two means for variables in the same sample. To examine this, the researchers can use the paired-samples t-test. The results are shown in Exhibit 11.11 (PPT slide 11-22).

N. Analysis of Variance (ANOVA) (PPT slides 11-23 and 11-24)

Analysis of variance (ANOVA) is a statistical technique that determines whether three or more means are statistically different from one another. One-way ANOVA is used to examine group means. The term one-way is used because the comparison involves only one independent variable. Researchers also can use ANOVA to examine the effects of several independent variables simultaneously. This enables analysts to estimate both the individual and combined effects of several independent variables on the dependent variable.

ANOVA requires that the dependent variable be metric. That is, the dependent variable must be either interval or ratio scaled. A second data requirement is that the independent variable (for example, a grouping variable such as coffee consumption) be categorical (nonmetric). The null hypothesis for ANOVA always states that there is no difference between the dependent variable groups. Thus, the null hypothesis would be that all of the group means are equal.

ANOVA examines the variance within a set of data. The variance of a variable is equal to the average squared deviation from the mean of the variable. The logic of ANOVA is that if we calculate the variance between the groups and compare it to the variance within the groups, we can make a determination as to whether the group means are significantly different.
When within-group variance is high, it swamps any between-group differences we see unless those differences are large.

Determining statistical significance in ANOVA: Researchers use the F-test with ANOVA to evaluate the differences between group means for statistical significance (PPT slide 11-24). The total variance in a set of responses to a question is made up of between-group and within-group variance. The between-group variance measures how much the sample means of the groups differ from one another. In contrast, the within-group variance measures how much the responses within each group differ from one another. The F ratio is the ratio of these two components of total variance and can be calculated as follows:

F = between-group variance / within-group variance

The larger the difference in the variance between the groups, the larger the F ratio. Since the total variance in a data set is divisible into between- and within-group components, if there is more variance explained or accounted for by considering differences between groups than there is within groups, then the independent variable probably has a significant impact on the dependent variable. Larger F ratios imply significant differences between the groups. Thus, the larger the F ratio, the more likely it is that the null hypothesis will be rejected.

O. SPSS Application—ANOVA (PPT slides 11-25 to 11-27)

The owners of the Santa Fe Grill would like to know if there is a difference in the likelihood of returning to the restaurant based on how far customers have driven to get to the restaurant. They therefore ask the researcher to test the hypothesis that there are no differences in likelihood of returning across the distances driven to get to the restaurant. To test this hypothesis the researchers can use the SPSS compare means test. The results are shown in Exhibit 11.12 (PPT slide 11-25). A weakness of ANOVA, however, is that the test enables the researcher to determine only that statistical differences exist between at least one pair of the group means.
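The F-ratio logic above can be sketched with SciPy's one-way ANOVA; the likelihood-to-return ratings for the three distance groups below are hypothetical, not the Santa Fe Grill data.

```python
# A minimal sketch of one-way ANOVA with SciPy; the ratings for three
# hypothetical distance groups are illustrative only.
from scipy.stats import f_oneway

under_1_mile = [6, 7, 6, 5, 7, 6]
one_to_5_miles = [5, 5, 6, 4, 5, 5]
over_5_miles = [4, 3, 4, 5, 3, 4]

f_stat, p_value = f_oneway(under_1_mile, one_to_5_miles, over_5_miles)

# A large F means the between-group variance is large relative to the
# within-group variance, so the group means likely differ.
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

As the text notes, a significant F only says that at least one pair of means differs; identifying which pair requires a follow-up test.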
The technique cannot identify which pairs of means are significantly different from each other. Follow-up tests are used to flag the means that are statistically different from each other; follow-up tests are performed after an ANOVA determines there are differences between means (PPT slide 11-26). Follow-up tests are available in statistical software packages such as SPSS and SAS. All of the methods involve multiple comparisons, or simultaneous assessment of confidence interval estimates of differences between the means. That is, all means are compared two at a time. The differences between the methods are based on their ability to control the error rate.

Relative to other follow-up tests, the Scheffé procedure is a more conservative method for detection of significant differences between group means. The Scheffé follow-up test establishes simultaneous confidence intervals around all groups’ responses and imposes an error rate at a specified α level. The test identifies differences between all pairs of means at high and low confidence interval ranges. If the difference between each pair of means falls outside the range of the confidence interval, then the null hypothesis is rejected and it can be concluded that the pairs of means are statistically different. The Scheffé test is equivalent to simultaneous two-tailed hypothesis tests. Because the technique holds the experimental error rate to α (typically .05), the confidence intervals tend to be wider than in the other methods, but the researcher has more assurance that true mean differences exist.

To run the Scheffé post-hoc test the researchers use the SPSS compare means test. Results for the Scheffé test for the restaurant example are shown in Exhibit 11.13 (PPT slide 11-27).
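The Scheffé comparison described above can be sketched by hand (SciPy supplies the F distribution but no built-in Scheffé test); the groups are hypothetical, and a pair of means is flagged when its statistic exceeds (k − 1) times the critical F value.

```python
# A hedged, hand-rolled sketch of the Scheffé follow-up test; the three
# rating groups are hypothetical. Reject a pair when its statistic
# exceeds (k - 1) * F_critical.
from itertools import combinations
from scipy.stats import f

groups = {
    "under 1 mile": [6, 7, 6, 5, 7, 6],
    "1-5 miles": [5, 5, 6, 4, 5, 5],
    "over 5 miles": [4, 3, 4, 5, 3, 4],
}

k = len(groups)                            # number of groups
N = sum(len(v) for v in groups.values())   # total observations
means = {g: sum(v) / len(v) for g, v in groups.items()}

# Mean square within (the ANOVA error term)
ss_within = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
ms_within = ss_within / (N - k)

alpha = 0.05
critical = (k - 1) * f.ppf(1 - alpha, k - 1, N - k)

for g1, g2 in combinations(groups, 2):
    n1, n2 = len(groups[g1]), len(groups[g2])
    stat = (means[g1] - means[g2]) ** 2 / (ms_within * (1 / n1 + 1 / n2))
    verdict = "different" if stat > critical else "not different"
    print(f"{g1} vs {g2}: statistic = {stat:.2f}, {verdict}")
```

Because the critical value grows with (k − 1), the procedure is conservative, matching the text's point that Scheffé intervals are wider than those of other follow-up methods.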
III. n-Way ANOVA (PPT slide 11-28 to 11-30)

n-way ANOVA is a type of ANOVA that can analyze several independent variables at the same time (PPT slide 11-28). Using multiple independent factors creates the possibility of an interaction effect; that is, the multiple independent variables can act together to affect dependent variable group means. Another situation that may require n-way ANOVA is the use of experimental designs (causal research), in which the researcher uses different levels of a stimulus (for example, different prices or ads) and then measures responses to those stimuli. From a conceptual standpoint, n-way ANOVA is similar to one-way ANOVA, but the mathematics is more complex. However, statistical packages such as SPSS will conveniently perform n-way ANOVA.

A. SPSS Application—n-Way ANOVA (PPT slide 11-29 and 11-30)

To help students understand how ANOVA is used to answer research questions, the text refers to the restaurant database to answer a typical question. The owners first want to know whether customers who come to the restaurant from greater distances differ from customers who live nearby in their willingness to recommend the restaurant to a friend. Second, they also want to know whether that difference in willingness to recommend, if any, is influenced by the gender of the customers. On the basis of informal comments from customers, the owners believe customers who come from more than 5 miles away will be more likely to recommend the restaurant. Moreover, they hypothesize that male customers will be more likely to recommend the restaurant than female customers. The null hypotheses are that there will be no difference between the mean ratings for X24–Likely to Recommend for customers who traveled different distances to come to the restaurant (X30), and that there also will be no difference between females and males (X32). The purpose of the ANOVA analysis is to determine whether the differences that do exist are statistically significant and meaningful.
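The interaction effect at the heart of the distance-by-gender question can be illustrated with cell means. In this sketch the 2 × 2 table of means is invented for illustration; it is not output from the restaurant survey.

```python
# Conceptual sketch of an interaction effect in a 2 x 2 design
# (distance x gender): the factors interact when the effect of
# distance on willingness to recommend differs by gender.
# All cell means below are hypothetical illustration values.

# mean "likely to recommend" by (distance, gender)
cell_means = {
    ("<= 5 miles", "female"): 4.0,
    ("<= 5 miles", "male"):   4.5,
    ("> 5 miles",  "female"): 5.0,
    ("> 5 miles",  "male"):   6.5,
}

# Simple effect of distance within each gender
effect_female = cell_means[("> 5 miles", "female")] - cell_means[("<= 5 miles", "female")]
effect_male = cell_means[("> 5 miles", "male")] - cell_means[("<= 5 miles", "male")]

# If the two simple effects differ, the factors interact
interaction = effect_male - effect_female
print(f"distance effect (female) = {effect_female}")  # 1.0
print(f"distance effect (male)   = {effect_male}")    # 2.0
print(f"interaction contrast     = {interaction}")    # 1.0
```

A nonzero interaction contrast like this one is what the n-way ANOVA interaction term tests for significance; if it were zero, distance would shift recommendations by the same amount for both genders.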
To statistically assess the differences, ANOVA uses the F-ratio. The larger the F-ratio, the larger the differences among the means of the various groups with respect to their likelihood of recommending the restaurant. SPSS can help you conduct the statistical analysis to test the null hypotheses. The best way to analyze the restaurant survey data to answer the owners’ questions is to use a factorial model. A factorial model is a type of ANOVA in which the individual effects of each independent variable on the dependent variable are considered separately, and then the combined effects (an interaction) of the independent variables on the dependent variable are analyzed. To examine this hypothesis the text uses only the Santa Fe Grill customer survey data. The SPSS output for the ANOVA is shown in Exhibit 11.14 (PPT slide 11-29), and the n-way ANOVA means results are shown in Exhibit 11.15 (PPT slide 11-30).

B. Perceptual Mapping (PPT slide 11-31)

Perceptual mapping is a process used to develop maps showing the perceptions of respondents (PPT slide 11-31). The maps are visual representations, in two dimensions, of respondents’ perceptions of a company, product, service, brand, or any other object. A perceptual map typically has a vertical and a horizontal axis labeled with descriptive adjectives. Several different approaches can be used to develop perceptual maps, including rankings, medians, and mean ratings. To illustrate perceptual mapping, data from an example involving ratings of fast-food restaurants are shown in Exhibit 11.16.

C.
Perceptual Mapping Applications in Marketing Research (PPT slide 11-32)

Perceptual mapping applications in marketing research include:
1. New-product development
2. Image measurements
3. Advertising
4. Distribution

MARKETING RESEARCH IN ACTION
EXAMINING RESTAURANT IMAGE POSITIONS—REMINGTON’S STEAK HOUSE (PPT slide 11-34)

The Marketing Research in Action in this chapter provides an overview of an image study conducted for Remington’s Steak House, a retail theme restaurant located in a large midwestern city. A copy of the questionnaire used for the image study is in Exhibit 11.18. Exhibit 11.19 shows the average importance ratings for restaurant selection factors. Exhibit 11.20 lists the output for a one-way ANOVA for the restaurant competitors. Exhibit 11.21 shows the output for a one-way ANOVA for differences in restaurant perceptions. Exhibit 11.22 summarizes the ANOVA findings from Exhibits 11.19 to 11.21. Exhibit 11.23 illustrates an importance-performance chart for Remington’s Steak House.

Instructor Manual for Essentials of Marketing Research
Joseph F. Hair, Mary Celsi, Robert P. Bush, David J. Ortinau
9780078028816, 9780078112119
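The mapping logic behind an importance-performance chart like Exhibit 11.23 can be sketched from mean ratings. The attribute names and ratings below are invented for illustration; a real chart would plot the survey's actual importance and performance means and split the map at the chosen axis midpoints.

```python
# Sketch of placing attributes on a two-dimensional importance-performance
# map. Each attribute gets (importance mean, performance mean) coordinates,
# and the axis midpoints divide the map into four action quadrants.
# All values are hypothetical illustration data.

# (importance mean, performance mean) on 1-7 rating scales
attributes = {
    "food quality":     (6.5, 6.0),
    "speed of service": (5.8, 4.2),
    "atmosphere":       (4.9, 5.5),
    "prices":           (5.2, 3.9),
}

# Axis midpoints: here, the grand means of each dimension
imp_mid = sum(i for i, _ in attributes.values()) / len(attributes)
perf_mid = sum(p for _, p in attributes.values()) / len(attributes)

def quadrant(importance, performance):
    """Classify an attribute into one of the four map quadrants."""
    if importance >= imp_mid:
        return "keep up the good work" if performance >= perf_mid else "concentrate here"
    return "possible overkill" if performance >= perf_mid else "low priority"

for name, (imp, perf) in attributes.items():
    print(f"{name}: ({imp}, {perf}) -> {quadrant(imp, perf)}")
```

The quadrant labels follow the usual importance-performance reading: high-importance, low-performance attributes are the priorities for improvement, while low-importance, high-performance attributes may represent over-investment.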