
This document contains Chapters 12 to 14.

CHAPTER 12
USING SINGLE-SUBJECT DESIGNS

QUESTIONS TO PONDER
1. How were single-subject designs used in the early days of behavioral research?
2. What are the major characteristics of the single-subject baseline design?
3. What is a behavioral baseline?
4. Why is it important to establish a behavioral baseline in a single-subject design?
5. What is a stability criterion, and why is it important?
6. What is an ABAB design, and how does it relate to intrasubject replication?
7. What are intrasubject and intersubject replication, and what do they tell you?
8. What factors affect your choice of a stability criterion?
9. How is uncontrolled variability handled in the single-subject approach?
10. How do the single-subject and group approaches differ with respect to handling uncontrolled variability?
11. How is the generality of research findings established in single-subject research?
12. What is a drifting baseline, and how can you deal with one?
13. What is an unrecoverable baseline, and what can you do if you have one?
14. What can you do if you have unequal baselines between subjects?
15. What can you do if you have an inappropriate baseline?
16. What are the characteristics of the single-factor baseline design?
17. What are the characteristics of the multifactor baseline design?
18. What is a multiple-baseline design, and when would you use one?
19. Describe the changing criterion design. When is it used?
20. What are the characteristics of a dynamic design?
21. What are the major characteristics of the discrete trials design?
22. How are inferential statistics used in single-subject designs?
23. What are the advantages and disadvantages of the single-subject approach?

CHAPTER OUTLINE
A Little History
Baseline, Dynamic, and Discrete Trials Designs
Baseline Designs
An Example Baseline Experiment: Do Rats Prefer Signaled or Unsignaled Shocks?
Issues Surrounding the Use of Baseline Designs
Dealing With Uncontrolled Variability
Determining the Generality of Findings
Dealing With Problem Baselines
Types of Single-Subject Baseline Design
Dynamic Designs
Discrete Trials Designs
Characteristics of the Discrete Trials Design
Analysis of Data from Discrete Trials Designs
Inferential Statistics and Single-Subject Designs
Advantages and Disadvantages of the Single-Subject Approach
Summary

Key Terms
baseline design
behavioral baseline
stability criterion
baseline phase
intervention phase
ABAB design
intrasubject replication
reversal strategy
intersubject replication
systematic replication
direct replication
multiple-baseline design
dynamic design
changing criterion design
discrete trials design

CHAPTER GOALS
Chapter 12 introduces students to single-subject or small-n designs, which include both the baseline designs pioneered by B. F. Skinner and the older, discrete trials type. A major goal of the chapter is to show how valid inferences about causal relationships can be drawn from single-subject data, even though group-based statistical analyses cannot be performed. Instead, techniques such as rigid control over extraneous variables, use of stability criteria, and replication provide the means to uncover causal relationships and assess their reliability. The chapter opens with a brief outline of the history of single-subject designs and then compares the baseline and discrete trials approaches.
The characteristics of baseline designs are then described, including the behavioral baseline, stability criterion, intrasubject replication, and intersubject replication. The chapter then takes up the logic of the baseline design and walks students through an example implementation. How to deal with problem baselines is described next, followed by a description of the major types of baseline design. The next section of the chapter discusses dynamic designs, which are used when the independent variable is not discrete but varies continuously. Students should understand the difference between baseline and dynamic designs. Discrete trials designs are introduced next, and an example signal detection experiment is described. Some attempts to apply inferential statistics to single-subject data (using multiple observations in place of multiple subjects) are discussed, and the problems noted. The chapter concludes with a summary of the advantages and disadvantages of the single-subject approach.

Suggested points to cover in your lecture include the following:
1. The origins of the single-subject or small-n approach to research.
2. The differences between the baseline, dynamic, and discrete trials designs.
3. The different ways to assess reliability in single-subject and group-based designs. You might point out that both types of design use replication to assess reliability. Group designs expose several subjects to the same treatment (replication across subjects), whereas single-subject designs expose one subject to each treatment repeatedly (replication across treatments).
4. The role of the baseline in establishing a basis for comparison across treatments in the baseline design.
5. The requirement in baseline designs of rigidly controlling extraneous variables to obtain stable baselines.
6. How to establish a stability criterion, and how the stability criterion is used in baseline designs.
7. Conditions under which a simple baseline design cannot be used (e.g., irreversible effects).
8. How to use baseline designs with more than two levels of an independent variable or more than one independent variable.
9. The different types of baseline designs, and their uses.
10. How to use the changing criterion design.
11. How behavioral dynamics are observed, and what they tell you about behavior.
12. The logic of the discrete trials single-subject design.
13. The major characteristics of the discrete trials design.
14. How data from discrete trials designs are analyzed.
15. Why standard inferential statistics cannot be used to analyze data from discrete trials single-subject designs.
16. The advantages and disadvantages of single-subject designs.

IDEAS FOR CLASS ACTIVITIES

Single-Subject Baseline Experiment
If you have facilities available for conditioning rats or pigeons (and you can get the proper clearance from your local animal care and use committee), nothing beats a simple experiment involving reinforcement versus extinction, different reinforcement schedules, delay of reinforcement, or stimulus discrimination. Our students are always amazed to see their rats acquiring a lever-press response and demonstrating the behavioral patterns the students have previously only read about in their textbooks.
You can use these experiments to demonstrate the establishment of a behavioral baseline, selection of a stability criterion, the changes in baseline that take place between treatment conditions, intrasubject replication, and intersubject replication. Our procedure is to assign two students to a rat and operant chamber and have the students observe the animal and plot its response output over intervals of 5 minutes. We take care of properly depriving the rats prior to the lab session and magazine training the rats in the operant chamber. When the rat has acquired its response and the baseline has stabilized, the students switch the rat to the treatment condition. We normally use a simple ABAB design for the experiment.

When all the rats have completed all phases of the experiment, the students meet as a group and compare their graphs. This is a good time to talk about any difficulties some of the students may have encountered (failing to get response acquisition, erratic baselines, etc.) and to suggest possible causes and cures for these difficulties. When students compare their graphs, they are often surprised at how similar the performances of the different rats were to one another. We usually create a summary graph that includes the data that met the stability criterion. This makes it easy to examine the degree of intersubject replication.

Discrete Trials Experiment
We have used a number of discrete trials experiments in our lab with good results. Here is one of them.

Difference Threshold for Lifted Weight
This is a replication of Weber's classic lifted weight experiment using the method of constant stimuli. Each participant lifts a "standard" weight followed by a "comparison" weight and then tells the experimenter whether the comparison seemed heavier or lighter than the standard ("equal" is not an allowed response).

Materials
You will need to obtain 24 identical opaque plastic vials with screw caps, and fill each vial with lead shot so as to obtain a series of weights as follows (in grams):

Standard     Comparison weights
100 g        75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125 g
200 g        175, 180, 185, 190, 195, 200, 205, 210, 215, 220, 225 g

Stuff cotton over the shot to keep the shot from moving in the vial. Mark the weight of each vial on the bottom. Make up a blindfold and an informed consent form for each participant. Make up two data sheets as follows (you can block-copy these):

Data Coding Sheet
Experimenter ______________________________   Date ____________
Participant ______________________________
Coding: 0 = Lighter; 1 = Heavier
Columns: Comparison weights 75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125
Rows: Replications 1 through 20, followed by Total and Prop.
(Prop. = Proportion of "heavier" responses = Total/20)

Data Coding Sheet
Experimenter ______________________________   Date ____________
Participant ______________________________
Coding: 0 = Lighter; 1 = Heavier
Columns: Comparison weights 175, 180, 185, 190, 195, 200, 205, 210, 215, 220, 225
Rows: Replications 1 through 20, followed by Total and Prop.
(Prop. = Proportion of "heavier" responses = Total/20)
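If you want to prepare the randomized presentation orders ahead of time (the Procedure below calls for presenting the 11 comparison weights in a different order on each of the 20 replications), a short Python sketch along the following lines will generate them. The weight values come from the Materials list above; everything else (names, printing to the screen) is purely illustrative:

import random

# Comparison weights (in grams) from the Materials list above.
comparison_sets = {
    100: [75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125],
    200: [175, 180, 185, 190, 195, 200, 205, 210, 215, 220, 225],
}

N_REPLICATIONS = 20  # each comparison weight is judged 20 times per standard

for standard, comparisons in comparison_sets.items():
    print(f"Presentation orders for the {standard}-g standard:")
    for rep in range(1, N_REPLICATIONS + 1):
        # random.sample with k = len(list) returns a fresh random permutation
        order = random.sample(comparisons, k=len(comparisons))
        print(f"  Replication {rep:2d}: {order}")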
The experimenter blindfolds the participant and then selects one of the two sets of weights. The standard weight from this set is placed in front of the participant. The participant is informed that this is the standard weight and is instructed to lift the weight and immediately put it back down. The participant should use whichever hand is most comfortable but should always use the same hand for lifting the weights. The experimenter then replaces the standard weight with one of the comparison weights from the same set, informs the participant that this is a comparison weight, and instructs the participant to lift this weight in the same way as the standard. The participant then announces “heavier” or “lighter” according to whether the comparison weight felt heavier or lighter than the standard weight. The experimenter records the participant’s response on the data sheet, marking 0 for “lighter” and 1 for “heavier,” and then replaces the comparison weight with the standard weight. This procedure is repeated until the participant has compared the standard to each of the comparison weights. The entire set of comparisons is conducted 20 times with the comparison weights presented in a different order each time. (Prior to the experimental session, you or the students should prepare a list giving the ordering of the weights for each set of comparisons.) At this point, the participant is allowed to rest for a short while. The entire procedure is then repeated with the other set of weights. Analysis The student experimenter should total the responses in each column on the data sheet and then compute the proportion of “heavier” responses for each comparison weight by dividing each of the numbers by 20. The student should then make up two graphs, one for each weight set. Each graph should indicate the comparison weights along the x-axis and the proportion of “heavier” responses along the y-axis. After filling in the point for each comparison weight, the student should try to draw a smooth S-shaped curve through the points. The curve should pass through the .50 proportion at the value of the standard weight. The student should now find the place where the .25 proportion passes through the line and then read from this point on the line down to the x-axis. This point is the lower limit. Repeating this procedure at the .75 proportion gives the upper limit. The difference threshold is then computed as follows: Difference threshold = (Upper limit – Lower limit)/2. Weber’s law holds that the ratio of the difference threshold (delta I) to the intensity of the stimulus (I) should be a constant within the limits of experimental error. That is, where k is the Weber constant. Have students compute k for the two standard weights (I is the mass of the standard weight in grams). Does Weber’s law hold? CHAPTER 13 DESCRIBING DATA REVIEW QUESTIONS 1. Why is it important to scrutinize your data using exploratory data analysis (EDA)? 2. How do you organize your data in preparation for data analysis? 3. What are the problems inherent in entering your data for computer data analysis? 4. Why is it important to examine individual scores even when your analysis will be based on group averages? 5. How do various types of graphs diff er, and when should each be used? 6. How do negatively accelerated, positively accelerated, and asymptotic functional relationships differ? 7. Why is it important to graph your data and inspect the graphs carefully? 8. How do you graph a frequency distribution as a histogram and as a stemplot? 
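If you would like to check the students' graphical estimates, the following Python sketch finds the .25 and .75 points by straight-line interpolation between adjacent data points (an approximation to the hand-drawn S-shaped curve) and then computes the difference threshold and Weber fraction. The proportions shown are made-up illustrative values, not data from an actual participant:

import numpy as np

# Comparison weights for the 100-g standard and hypothetical proportions of
# "heavier" responses (illustrative only -- substitute a student's actual data).
weights = np.array([75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
props   = np.array([0.05, 0.10, 0.20, 0.30, 0.45, 0.50, 0.60, 0.70, 0.85, 0.90, 0.95])

# Invert the psychometric function by piecewise-linear interpolation.
# Note: np.interp assumes the proportions increase monotonically; smooth noisy
# data (or fit a curve) first if they do not.
lower_limit = np.interp(0.25, props, weights)   # weight judged heavier 25% of the time
upper_limit = np.interp(0.75, props, weights)   # weight judged heavier 75% of the time

difference_threshold = (upper_limit - lower_limit) / 2
standard = 100                                   # grams
weber_k = difference_threshold / standard        # Weber fraction, k = delta-I / I

print(f"Lower limit: {lower_limit:.1f} g, upper limit: {upper_limit:.1f} g")
print(f"Difference threshold: {difference_threshold:.1f} g, Weber k = {weber_k:.3f}")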
CHAPTER 13
DESCRIBING DATA

REVIEW QUESTIONS
1. Why is it important to scrutinize your data using exploratory data analysis (EDA)?
2. How do you organize your data in preparation for data analysis?
3. What are the problems inherent in entering your data for computer data analysis?
4. Why is it important to examine individual scores even when your analysis will be based on group averages?
5. How do various types of graphs differ, and when should each be used?
6. How do negatively accelerated, positively accelerated, and asymptotic functional relationships differ?
7. Why is it important to graph your data and inspect the graphs carefully?
8. How do you graph a frequency distribution as a histogram and as a stemplot?
9. What should you look for when examining the graph of a frequency distribution?
10. What is a measure of center?
11. How do the mode, median, and mean differ, and under what conditions would you use each?
12. What is a measure of spread?
13. What measures of spread are available, and when would you use each?
14. How are the variance and standard deviation related, and why is the standard deviation preferred?
15. What is the five-number summary, and how can you represent it graphically?
16. What do measures of association tell you?
17. What are the measures of association available to you, and when would you use each?
18. What affects the magnitude and direction of a correlation coefficient?
19. What is linear regression, and how is it used to analyze data?
20. How are regression weights and standard error used to interpret the results from a regression analysis?
21. What is the coefficient of determination, and what does it tell you?
22. What is the coefficient of nondetermination, and what does it tell you?
23. What is a correlation matrix, and why should you construct and inspect one?
24. How does a multivariate correlational statistic differ from a bivariate correlational statistic?

CHAPTER OUTLINE
Descriptive Statistics and Exploratory Data Analysis
Organizing Your Data
Organizing Your Data for Computer Entry
Entering Your Data
Graphing Your Data
Elements of a Graph
Bar Graphs
Line Graphs
Scatter Plots
Pie Graphs
The Importance of Graphing Data
The Frequency Distribution
Displaying Distributions
Examining Your Distribution
Descriptive Statistics: Measures of Center and Spread
Measures of Center
Measures of Spread
Boxplots and the Five-Number Summary
Measures of Association, Regression, and Related Topics
The Pearson Product-Moment Correlation Coefficient
The Point-Biserial Correlation
The Spearman Rank-Order Correlation
The Phi Coefficient
Linear Regression and Prediction
The Coefficient of Determination
The Correlation Matrix
Multivariate Correlational Techniques
Summary

Key Terms
descriptive statistics
exploratory data analysis (EDA)
dummy code
bar graph
line graph
scatter plot
pie graph
frequency distribution
histogram
stemplot
skewed distribution
normal distribution
outlier
resistant measure
measure of center
mode
median
mean
measure of spread
range
interquartile range
variance
standard deviation
five-number summary
boxplot
Pearson product-moment correlation coefficient, or Pearson r
point-biserial correlation
Spearman rank-order correlation (rho)
phi coefficient (φ)
linear regression
bivariate linear regression
least-squares regression line
regression weight
standard error of estimate
coefficient of nondetermination
correlation matrix

CHAPTER GOALS
Chapter 13 briefly reviews descriptive statistics, including frequency distributions and measures of center, spread, association, and bivariate regression. Students should learn how scores from a study are organized, summarized, and graphed. Emphasize the importance of exploring the data (exploratory data analysis, or EDA) prior to conducting inferential statistical tests. In this regard, graphical techniques are especially important, so we have introduced some of the most useful, including the histogram, stemplot, and boxplot for displaying distributions; line graphs, bar graphs, and pie charts for displaying measures of center and (exclusively in the last case) proportions or percentages; and scatterplots and multiple boxplots for displaying relationships.
Each of the measures of center and spread should be covered (we have found that even those students who have recently taken statistics need a review of descriptive statistics, especially the variance and standard deviation). The characteristics and applications of each measure are discussed. The section on measures of association and related topics includes discussions of the most popular measures of correlation. Students should learn the characteristics and applications of each. Also, discuss with your students the factors that affect the direction and magnitude of the correlation coefficient. Emphasize the fact that the correlation coefficient cannot be interpreted as an index of the causal relationship between variables.

Review the following points with your class:
1. How to organize data on a data summary sheet. Stress the importance of providing a logical organization.
2. The different ways of graphing data (histogram, line graph, etc.), and why it is important to graph data.
3. The characteristics and applications of each of the measures of center. Discuss with your class how the characteristics of the distribution of scores affect the decision about which measure of center to use.
4. The characteristics and applications of the measures of spread. Discuss how the characteristics of the distribution of scores affect each measure.
5. The various measures of association, especially the Pearson product-moment correlation coefficient. Discuss how to interpret correlation coefficients, including their signs and magnitudes.
6. The concept behind bivariate regression and prediction. Spend some time talking about the regression line, regression weights, and the standard error of estimate.

IDEAS FOR CLASS ACTIVITIES

Examples from the Literature
Have students go to the library and find two or three articles in scientific journals that report descriptive statistics and present the results graphically. Have them find articles that illustrate the use of various measures of center and spread and the different types of graphs. For each article, have students identify how the descriptive statistics were used and how the data were presented in the graph. Each student should also interpret the graph(s) found in each article (for example, the shape of distributions or the type of functional relationship shown). Ask students if they think the graphs make the results easier to understand.

Constructing a Frequency Distribution
To reinforce how to construct a frequency distribution, have each student measure the heights of five adults (friends or relatives serve nicely) and bring their data to class. Next, develop categories (for example, less than 5'0", 5'0" to 5'6", 5'7" to 6'0", and so on) and build a frequency distribution.
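If you want to tally the pooled class data quickly, a small Python sketch such as the one below will bin the heights into the example categories. The height values are placeholders; substitute the measurements your students bring to class:

from collections import Counter

# Pooled heights in inches (placeholder values -- replace with the class's data).
heights = [59, 62, 64, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 68, 66]

def category(height_in):
    """Assign a height (in inches) to one of the example categories from the activity."""
    if height_in < 60:
        return "less than 5'0\""
    elif height_in <= 66:
        return "5'0\" to 5'6\""
    elif height_in <= 72:
        return "5'7\" to 6'0\""
    else:
        return "more than 6'0\""

freq = Counter(category(h) for h in heights)  # frequency count per category
for label in ["less than 5'0\"", "5'0\" to 5'6\"", "5'7\" to 6'0\"", "more than 6'0\""]:
    print(f"{label:>16}: {freq[label]}")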
Deciding on Measures of Center and Spread
To illustrate how to choose measures of center and spread, present students with the following distributions of scores. Have them determine which measure of center and spread would best describe each distribution.

          Distribution 1   Distribution 2   Distribution 3
Scores    5                1                1
          3                2                3
          6                9                2
          4                1                2
          5                10               1
          7                2                4
          6                9                2
          2                7                10
          5                1                9
          6                10               10
          5                2                2
          4                9                9
M         4.83             5.25             4.58
SD        1.40             4.00             3.73
Median    5                4.50             2.50
Range     5                9                9
IQR       2                7.50             7.00

Distribution 1: The mean and standard deviation are the best descriptive statistics. In this distribution there are no extreme scores, so the mean and standard deviation provide reasonable measures of center and spread.

Distribution 2: Distribution 2 approaches a bimodal distribution. There are several low scores (1–2) and several high scores (7–10). The mean and standard deviation are not the best measures to use in this case. There is no one best value to represent center. Consequently, two modes should be derived and used to describe the distribution. The interquartile range is probably the best estimate of variability.

Distribution 3: Distribution 3 is positively skewed, with most scores at the lower end of the scale. The median is the best measure of center in this case. The interquartile range provides the best estimate of spread.

Interpreting Pearson r
Chapter 13 discusses the Pearson product-moment correlation as the most widely used measure of bivariate correlation. The text also discusses how the value of r is affected by various factors. Following are four sets of scores your students can analyze with a Pearson r. Have students compute r for each set of scores. Then, for each set, have them create a scattergram and determine whether Pearson r is the best index of correlation.

  SET 1      SET 2      SET 3      SET 4
  X   Y      X   Y      X   Y      X   Y
  1   2      3   4      1   7      1   10
  4   7      5   9      3   8      3   4
  2   3      4   9      2   6      7   9
  6   5      9   3      1   1      8   7
  4   7      10  4      3   4      5   7
  3   4      2   3      2   7      9   10
  5   4      1   2      2   8      10  10
  5   6      6   10     1   3      8   7
  7   7      5   10     2   1      9   8
  8   8      9   3      1   6      3   5
  6   7      10  2      3   4      1   3
  8   6      1   3      3   5      2   4
  9   10     6   9      2   10     5   7
  8   9      8   2      1   3      9   10
  1   3      2   3      1   8      8   8

Set 1: The Pearson r is appropriate for the data in Set 1 (r = .849). An inspection of a scatterplot of these data shows that the relationship between the two sets of scores is linear with no outliers.

Set 2: The Pearson r is not appropriate for the data in Set 2 (r = –.044). An inspection of a scatterplot of these data shows that there is a curvilinear relationship between X and Y. Thus, Pearson r will underestimate the degree of relationship between these two variables.

Set 3: The Pearson r is not appropriate for the data in Set 3 (r = .121). The problem in this data set is that the range of scores for variable X is restricted compared to its range in the other sets (all values fall between 1 and 3). Consequently, Pearson r will probably underestimate the degree of relationship between these variables.

Set 4: The Pearson r is not appropriate for the data in Set 4 as presented. Even though the value of r is relatively high (r = .657), the presence of an outlier (1, 10) in the data set results in a lower value of r than would appear to be true, given the pattern shown on a scattergram for the other pairs of scores. You might have students delete the outlying pair and rerun the analysis.

Bivariate Regression
Use the four data sets from the previous exercise to illustrate bivariate regression. Have students compute the necessary statistics, and discuss the results with them. Point out how the results are influenced by the "flaws" in the three "flawed" data sets.
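If you prefer to have students check their hand calculations (or explore the "flaws" discussed above) in software, the following Python sketch uses SciPy to compute Pearson r and the least-squares regression line for Set 1; the other sets are handled the same way, and Set 1 should reproduce the r = .849 reported above:

import numpy as np
from scipy import stats

# Set 1 scores from the table above.
x = np.array([1, 4, 2, 6, 4, 3, 5, 5, 7, 8, 6, 8, 9, 8, 1])
y = np.array([2, 7, 3, 5, 7, 4, 4, 6, 7, 8, 7, 6, 10, 9, 3])

# Pearson product-moment correlation and its p value.
r, p = stats.pearsonr(x, y)

# Least-squares regression of Y on X (slope, intercept, etc.).
result = stats.linregress(x, y)

print(f"Pearson r = {r:.3f} (p = {p:.4f})")
print(f"Regression line: Y' = {result.intercept:.2f} + {result.slope:.2f}X")
print(f"Coefficient of determination (r squared) = {r**2:.3f}")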
CHAPTER 14
USING INFERENTIAL STATISTICS

QUESTIONS TO PONDER
1. Why are sampling distributions important in inferential statistics?
2. What is sampling error, and why is it important to know about?
3. What are degrees of freedom, and how do they relate to inferential statistics?
4. How do parametric and nonparametric statistics differ?
5. What is the general logic behind inferential statistics?
6. How are Type I and Type II errors related?
7. What does statistical significance mean?
8. When should you use a one-tailed or a two-tailed test?
9. What are the assumptions underlying parametric statistics?
10. Which parametric statistics would you use to analyze data from an experiment with two independent groups?
11. Which parametric statistic is appropriate for a matched two-group design?
12. When would you need to use a one-factor ANOVA rather than a t test to analyze your data?
13. Why should you normally use ANOVA to analyze data from more than two treatments, rather than conducting multiple t tests?
14. When would you do a planned versus an unplanned comparison, and why?
15. What is the difference between weighted and unweighted means analysis, and when would you use each?
16. What is a post hoc test, and what does it control?
17. What are Latin square designs? What are they used for?
18. If you have two independent variables in your experiment, what type of ANOVA should be used to analyze your data?
19. What are main effects and interactions, and how are they analyzed?
20. What is a higher-order ANOVA? What difficulties arise as the number of factors increases?
21. What is ANCOVA, and what does it do that ANOVA does not do?
22. What is a nonparametric statistic? Under what conditions would you use one?
23. When would you use the chi-square test for contingency tables?
24. When would you use a Mann–Whitney U test or a Wilcoxon signed ranks test?
25. What is an effect size, and why is it important to include some measure of effect size along with the results of your statistical test?
26. What is meant by the power of a statistical test, and what factors can affect it?
27. Does a statistically significant finding always have practical significance? Why or why not?
28. When are data transformations used, and what should you consider when using one?
29. What are the alternatives to inferential statistics for evaluating the reliability of data?

CHAPTER OUTLINE
Inferential Statistics: Basic Concepts
Sampling Distribution
Sampling Error
Degrees of Freedom
Parametric Versus Nonparametric Statistics
The Logic Behind Inferential Statistics
Statistical Errors
Statistical Significance
One-Tailed Versus Two-Tailed Tests
Parametric Statistics
Assumptions Underlying a Parametric Statistic
Inferential Statistics with Two Samples
The t Test
An Example from the Literature: Contrasting Two Groups
The z Test for the Difference Between Two Proportions
Beyond Two Groups: Analysis of Variance
The One-Factor Between-Subjects ANOVA
The One-Factor Within-Subjects ANOVA
The Two-Factor Between-Subjects ANOVA
The Two-Factor Within-Subjects ANOVA
Mixed Designs
Higher-Order and Special-Case ANOVAs
ANOVA: Summing Up
Nonparametric Statistics
Chi-Square
The Mann–Whitney U Test
The Wilcoxon Signed Ranks Test
Parametric Versus Nonparametric Statistics
Special Topics in Inferential Statistics
Power of a Statistical Test
Statistical Versus Practical Significance
The Meaning of the Level of Significance
Data Transformations
Alternatives to Inferential Statistics
Summary

Key Terms
inferential statistics
standard error of the mean
degrees of freedom (df)
Type I error
Type II error
alpha level (α)
critical region
t test
t test for independent samples
t test for correlated samples
z test for the difference between two proportions
analysis of variance (ANOVA)
F ratio
p value
planned comparisons
unplanned comparisons
per-comparison error
familywise error
analysis of covariance (ANCOVA)
chi-square (χ²)
Mann–Whitney U test
Wilcoxon signed ranks test
power
effect size
data transformation

CHAPTER GOALS
The major goal of Chapter 14 is to provide students with a review of inferential statistics.
In our experience, many students come into the Research Methods course with a weak understanding of inferential statistics, even if they have already taken a statistics course. The greatest areas of confusion center on the logic behind inferential statistics and how to interpret the results of statistical tests. We have also found that students really do not have a good grasp of what the statistical tables in the text Appendix represent. Consequently, we wrote Chapter 14 to emphasize understanding of inferential statistics rather than how to compute them. After covering Chapter 14, your students should have a better understanding of the general logic behind inferential statistics and how to interpret the results obtained.

The following are some areas to highlight in class:
1. The notion of inferring population characteristics from samples. Reinforce the notion that inferential statistics are aids in decision making, the decision usually being whether the observed sample means represent the same or different underlying populations.
2. Sampling error, degrees of freedom, and the distinction between parametric and nonparametric statistics.
3. The difference between a one-tailed and a two-tailed test. This is a good place to review the concept of the sampling distribution of a statistic and the meaning of the critical values found in statistical tables.
4. The applications and interpretation of the major parametric statistics. Point out the importance of having your data meet the underlying assumptions of each test.
5. The analysis of variance, especially the two-factor analysis. Many students have a great deal of trouble understanding what an interaction is and why we do not interpret main effects when one is present. Be sure to highlight the difference between planned and unplanned comparisons, and the problem of familywise error when many means are compared.
6. Nonparametric statistics. Cover those discussed in the chapter, emphasizing the applications of each statistic and how the results are interpreted.
7. The power of a statistical test, and the factors that affect it.
8. The difference between practical and statistical significance. Often students equate statistical significance with practical significance. Point out that even if a difference between means is significant at p < .00000001, it can still be a meaningless difference on a practical level.
9. What the adopted alpha level really means. Students often think that if something is significant at p < .001, it is more significant than at p < .05. This error in reasoning should be corrected with a discussion of the meaning of statistical significance.
10. Alternatives to inferential statistics (especially replication). This discussion will reinforce the idea that inferential statistics are only a tool to help you decide whether your results are reliable.

IDEAS FOR CLASS ACTIVITIES

The Logic Behind Inferential Statistics
Chapter 14 makes the point that inferential statistics help you decide whether, for example, observed sample means represent the same or different underlying populations. You can use the following demonstration in class to help students gain a better understanding of statistical decision making. At the very bottom of this section on ideas, we provide numbers representing six populations, each having a different range of values. Construct samples (any size you want; 10 is good) by drawing numbers from the populations. (Keep track of which samples come from which populations.) Pair off the samples, and give them to your class. Have your students compute a mean and standard deviation for each sample, and then decide whether the samples come from the same or different populations. Next, have them conduct a t test for independent samples on each pair of samples and evaluate t for statistical significance. Have students note whether the observed t score is statistically significant and whether their conclusions based on the results of the statistical test agree with their evaluations made prior to doing the t test.

In class, discuss the following points:
1. The notion of errors in decision making (Type I and Type II errors), and why they might be made.
2. How inferential statistics such as the t test help you make statistical decisions and minimize errors.
3. The meaning of a statistically significant finding.
4. How statistical tables are used, and what they really are (sampling distributions showing the probability of obtaining scores of a given value from a random sample).
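A quick way to verify the students' hand-computed t tests is an independent-samples t test in Python (SciPy). The two samples below are placeholders standing in for one pair of samples drawn from the population lists:

import numpy as np
from scipy import stats

# Two samples of n = 10, as drawn in the exercise (placeholder numbers, not the
# actual population values provided at the end of this section).
sample_a = np.array([12, 15, 11, 14, 13, 16, 12, 15, 14, 13])
sample_b = np.array([18, 21, 17, 20, 19, 22, 18, 21, 20, 19])

# Descriptive statistics the students compute by hand first.
print(f"Sample A: M = {sample_a.mean():.2f}, SD = {sample_a.std(ddof=1):.2f}")
print(f"Sample B: M = {sample_b.mean():.2f}, SD = {sample_b.std(ddof=1):.2f}")

# Independent-samples t test (equal variances assumed, as in the hand computation).
t, p = stats.ttest_ind(sample_a, sample_b)
df = len(sample_a) + len(sample_b) - 2
print(f"t({df}) = {t:.2f}, p = {p:.4f}")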
Computing and Evaluating a Between-Subjects ANOVA
As a review of ANOVA, you can have students compute an ANOVA (using a statistical analysis program) and evaluate the significance of the findings. Below, we have reproduced the data from a 4 x 2 factorial experiment conducted by Bordens and Horowitz (1986). The data and results from an SPSS analysis of these data are provided at the end of this section.
1. Compute a two-factor between-groups ANOVA.
2. Determine whether the main effects and interaction are statistically significant at α = .05.
3. Conduct follow-up tests to determine the locus of the significant effects. (If your statistical package does not perform these tests, you can substitute t tests while using a more stringent level of significance, e.g., α = .01, to compensate for probability pyramiding, but be sure to point out to students that in professional work they should really be using one of the post hoc tests specifically designed for this purpose.)

Discuss with your class the following points:
1. What the values in the ANOVA table represent, and how they are used to determine statistical significance.
2. How to interpret a significant overall F ratio.
3. How to conduct follow-up post hoc tests and how to interpret the results.
4. The meaning of the two significant main effects in the light of a nonsignificant interaction.

Data for Analysis

One Charge Judged
One charge filed   Two charges filed   Three charges filed   Four charges filed
3                  5                   6                     2
3                  4                   3                     4
3                  4                   4                     5
4                  4                   4                     5
4                  5                   5                     5
3                  3                   5                     5
2                  3                   4                     5
5                  4                   5                     4
6                  4                   3                     6
5                  5                   4                     6

Two Charges Judged
One charge filed   Two charges filed   Three charges filed   Four charges filed
6                  3                   4                     5
4                  5                   6                     5
4                  5                   4                     5
5                  5                   5                     6
4                  4                   5                     5
5                  5                   4                     4
5                  4                   4                     5
3                  5                   5                     5
3                  5                   5                     6
5                  3                   6                     6
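If you would rather run the analysis in Python than in SPSS, a two-factor between-subjects ANOVA on these data can be sketched with statsmodels, roughly as below. The factor labels and the data-entry scheme (scores typed row by row from the two blocks above, cycling across the four charges-filed columns) are only one way to set this up; the SPSS output mentioned in the exercise remains the reference analysis:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Scores from the "Data for Analysis" tables above, entered row by row.
one_judged = [3, 5, 6, 2,  3, 4, 3, 4,  3, 4, 4, 5,  4, 4, 4, 5,  4, 5, 5, 5,
              3, 3, 5, 5,  2, 3, 4, 5,  5, 4, 5, 4,  6, 4, 3, 6,  5, 5, 4, 6]
two_judged = [6, 3, 4, 5,  4, 5, 6, 5,  4, 5, 4, 5,  5, 5, 5, 6,  4, 4, 5, 5,
              5, 5, 4, 4,  5, 4, 4, 5,  3, 5, 5, 5,  3, 5, 5, 6,  5, 3, 6, 6]

# Each row of the table cycles across the four charges-filed columns.
filed_labels = ["one", "two", "three", "four"] * 10

data = pd.DataFrame({
    "score":  one_judged + two_judged,
    "filed":  filed_labels * 2,
    "judged": ["one"] * 40 + ["two"] * 40,
})

# Two-factor between-subjects ANOVA: charges judged x charges filed.
model = smf.ols("score ~ C(judged) * C(filed)", data=data).fit()
print(anova_lm(model, typ=2))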
Computing and Evaluating a Within-Subjects ANOVA
The previous exercise had students compute and evaluate a between-subjects ANOVA. This exercise has students compute and evaluate a within-subjects ANOVA. In Chapter 10, we described a replication and extension of Peterson and Peterson's (1959) classic experiment on short-term memory. For this exercise, have students compute a two-factor within-subjects ANOVA using a statistical package. The two factors were, as you may recall, whether participants attempted to learn words or CCCs, and whether the retention interval was 3 or 18 seconds. The data and the results from an SPSS-PC analysis are provided below. Follow the guidelines suggested in the previous exercise for this one. There is one major difference between this exercise and the previous one: There is a significant two-way interaction. Use this opportunity to discuss with your class how to analyze the simple main effects of the interaction, and why you do not interpret main effects when a significant interaction is present.

Data for Analysis

Words/3 sec   Words/18 sec   CCC/3 sec   CCC/18 sec
18            16             15          15
20            17             19          18
20            13             18          14
19            18             13          4
18            16             16          11
19            17             19          17
20            15             14          7
18            15             18          7
20            19             18          15
20            18             20          17
19            14             10          5
18            16             17          16
19            15             20          16
20            16             13          3
19            19             14          6
19            17             16          6

Population Data for the "Logic Behind Inferential Statistics" Exercise

Instructor's Manual for Research Design and Methods: A Process Approach, by Kenneth Bordens and Bruce Barrington Abbott (ISBN 9780078035456)
