1. What are some misconceptions of single-case research designs?

2. What is the difference between applied and basic research?

3. What is the key difference between single-case research and between-group research?

4. What are the major characteristics of case studies?

5. What are the important contributions of case studies?

6. What are some methodological limitations of case studies?

7. From the late 1800s until the early 1900s, investigations in experimental psychology used one or a few subjects, and many important findings were identified by psychologists such as Wundt (1832-1920; sensory and perceptual processes), Pavlov (1849-1936; respondent conditioning), and Thorndike (1874-1949; trial-and-error learning, now considered operant conditioning), among many others. What events stimulated the shift in focus from one or a few subjects to larger sample sizes?

8. The point that information from groups and from individuals contributes separate but uniquely important sources of knowledge was underscored by distinguishing two approaches to research: the intensive study of the individual (the idiographic approach) as a supplement to the study of groups (the nomothetic approach). How did the distinction between these two approaches hurt single-case research?

9. When have case studies had a remarkable impact?

10. What is the difference between operant conditioning and the experimental analysis of behavior?

11. List the several distinct characteristics (and implications of each) of Skinner’s experimental analysis of behavior.

12. What is applied behavior analysis and in what publication was it formally defined?

13. What criteria are used to establish a treatment as evidence based? (There is no single set of criteria, but some are more commonly invoked than others.)

END

Kazdin (2011) Chapter 2: “Underpinnings of Scientific Research”

1. List and define the three key concepts that serve as a useful guide for designing experiments and interpreting the results of studies.

2. Define internal validity and list and define the threats to internal validity.

3. Define external validity and list and define the threats to external validity.

4. Define construct validity and list and define the threats to construct validity.

5. Define data-evaluation validity and list and define the threats to data-evaluation validity.

6. Since it is not possible to design a study that is perfectly attentive to all threats to internal, external, construct, and data-evaluation validity, what should an experimenter do when designing a study?

