I have published a paper titled "Drawing Inferences from Randomization Tests" in Personality and Individual Differences. My hope in writing this paper was to describe the types of inferences one can draw from the randomization tests used in the OOM software. Here is the abstract:
Randomization tests grew out of permutation tests that were developed in the 1930s. Since then, statisticians have expounded upon their nature as well as their various strengths and weaknesses. Uncertainty remains, however, with regard to the types of inferences that can be drawn from randomization tests, if indeed any type of inference can be drawn at all. In this paper we propose that randomization tests can play a role in drawing what are known as abductive inferences and inferences to best explanation from empirical research. Contemporary philosophers of science hold that such inferences are central to scientific reasoning; hence, randomization tests may serve as an effective bridge between the specific realm of statistical inference and the more general realm of scientific inference.
Speelman and McGann have published a paper in Frontiers titled “Statements About the Pervasiveness of Behavior Require Data About the Pervasiveness of Behavior.” This is a nice companion piece to our Persons as Effect Sizes paper. Generally, the argument we are all making is that one must be careful to focus on the individuals in one’s study. Aggregate statistics do not tell the entire story of one’s data. OOM can be used to analyze the data presented by Speelman and McGann, as their pervasiveness index is equivalent to the Percent Correct Classifications (PCC) index. They also discuss setting up thresholds for determining the number of people classified correctly according to expectation. In OOM this goal is accomplished with the Classification Imprecision option available in most analyses.
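To make the PCC/pervasiveness equivalence concrete, here is a minimal sketch of the underlying arithmetic. The function name, data, and categories are my own illustration, not code from OOM or from Speelman and McGann's paper:

```python
# Percent Correct Classifications (PCC): the percentage of individuals
# whose observed outcome matches the theoretically expected outcome.
# This is the same quantity Speelman and McGann call a pervasiveness index.
def pcc(observed, expected):
    matches = sum(o == e for o, e in zip(observed, expected))
    return 100.0 * matches / len(observed)

# Hypothetical data: theory expects every person to improve.
observed = ["improve", "improve", "decline", "improve", "decline"]
expected = ["improve"] * len(observed)

print(pcc(observed, expected))  # 3 of 5 persons match expectation -> 60.0
```

A threshold in the spirit of their discussion would then be a simple cutoff on this percentage (e.g., requiring PCC above some pre-specified value before claiming pervasiveness); in OOM, that role is played by the Classification Imprecision option.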
James Lamiell and Kate Slaney (Eds.) have published their new book, Problematic Research Practices and Inertia in Scientific Psychology: History, Sources, and Recommended Solutions. There are chapters on statistics, measurement, psychologists' distaste for criticism, and the struggle to understand persons using aggregate methods. We have a chapter in which we use OOM to analyze data from a study on Dissociative Identity Disorder. We also address strategies to help connect mainstream researchers to OOM and to the ideas expressed in Lamiell and Slaney's book.
The Personality Lab at OSU has published a paper titled “Persons as Effect Sizes” in Advances in Methods and Practices in Psychological Science. In this paper we demonstrate how OOM methods are used to answer the question “How many people in my study behaved or responded in a manner consistent with theoretical expectation?”
Dr. Frank Arocha has just published an article on scientific realism in the journal Theory & Psychology. The title of the article is: Scientific Realism and the Issue of Variability in Behavior. Here’s a link to the abstract:
The paper is broad in scope and offers a clear exposition of important issues facing modern psychologists and how we might move forward from a realist perspective. This will be required reading in my courses at OSU.
A new version of the OOM software has been uploaded. A number of minor bugs have been removed from the program, and a new option for generating data from proportions and frequencies (contingency tables) has been added. A video demonstrating this new feature has been uploaded to the Instructional Videos page (see link to the right, or click here). Two new videos for editing multigrams have also been added. Please update your copy of the software, and let me know if you find any bugs or run into any issues when using it.
Here’s a pithy article (behind a paywall) by Kevin Weinfurt of Duke University in which he revisits Francis Bacon’s famous idols: https://science.sciencemag.org/content/367/6484/1312.full Here’s my favorite quote: “And finally, the Idols of the Theater might be updated to include the uncritical adherence to systems of ritualized rules intended to automate the inductive activities of scientists” (p. 1312). One such system, of course, is Null Hypothesis Significance Testing (“p < .05”). I am hopeful OOM will encourage us to avoid statistical rituals and to instead always engage our data in a theoretically meaningful manner.
A sincere word of thanks to the faculty and staff of West Texas A&M University for hosting a talk on OOM last Friday, February 14th. I am particularly appreciative of John Richeson (an OSU alumnus!) and Mark Garrison for making the visit possible. West Texas A&M is growing and has a strong core of faculty…and, as a personally relevant fact, the university has an outstanding bowling program!
Thanks to Mark Garrison for the link to this N of 1 article. Science is the search for the causal structure of the world, and the history of science shows clearly that, while randomized trials can be useful, they are not necessary to gain such causal knowledge.
Thanks to Paul Barrett for alerting us to this newly published paper: Saylors, R., & Trafimow, D. (2020). Why the increasing use of complex causal models is a problem: On the danger sophisticated theoretical narratives pose to truth. Organizational Research Methods, in press, 1-14. https://doi.org/10.1177/1094428119893452 [paywall]
As pointed out by the authors, “As use of complex models increases, the joint probability a published model is true decreases.”
The paper comes with a calculator to compute said probability:
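As a rough illustration of the arithmetic behind the quote (my own sketch, not the authors' calculator): if each causal link in a model is true with some probability, and the links are treated as independent, the probability that the entire model is true shrinks multiplicatively with every link added.

```python
# Probability that a whole causal model is true, assuming each of its
# n_links causal links is independently true with probability p_link.
# Illustrative simplification only; the authors' calculator may differ.
def joint_truth_probability(p_link: float, n_links: int) -> float:
    return p_link ** n_links

# Even optimistic per-link probabilities erode quickly as links accumulate.
for n in (1, 3, 6, 10):
    print(n, round(joint_truth_probability(0.9, n), 3))
# 1 0.9
# 3 0.729
# 6 0.531
# 10 0.349
```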
An analogous concern in OOM is that as a path model increases in complexity, fewer and fewer individuals will be traceable through the model. It is easy to imagine a complex path model in which not a single person can be accurately traced through all of the links. What use would such a model be as an explication of causes and effects? Of course, this information can only be known if the researcher attempts to perform such person-centered analyses.
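The person-centered concern above can be sketched in a few lines. This is an illustrative simplification under my own assumptions (a person is "traceable" only if their data match the expected pattern on every link of the model), not code from OOM:

```python
# Each person is represented by a list of booleans, one per link in the
# path model: True if that person's data match the expected pattern on
# that link. A person is traceable only if every link matches.
def traceable(person_links):
    return all(person_links)

# Hypothetical three-link model with three persons.
persons = [
    [True, True, True],   # matches all three links
    [True, False, True],  # breaks at the second link
    [True, True, False],  # breaks at the third link
]

n = sum(traceable(p) for p in persons)
print(f"{n} of {len(persons)} persons traceable")  # 1 of 3 persons traceable
```

Because every added link gives each person another chance to deviate, the count of fully traceable persons can only stay the same or fall as the model grows, which is the analogue of the joint-probability argument above.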