In this new preprint, we (E. H. Witte, F. Zenker, and I) argue for better ways to evaluate our theoretical predictions in research, and for greater awareness of the limitations of effect size measures such as Cohen's d. Preprint and supplementary material available at 10.31234/osf.io/gdmvx
We argue for a direct evaluation of the theorized (expected) effect against the empirical (observed) effect. Only when the ratio between the two is approximately 1 can we be confident that our prediction is adequate in view of the data. Here, we introduce the Similarity Index as one way to achieve this (see formula 1 below).
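To make the idea concrete, here is a minimal illustrative sketch in Python. It assumes, purely for illustration, that the comparison is a plain ratio of an expected to an observed Cohen's d; the actual Similarity Index is defined by formula 1 in the preprint, and the helper names below are hypothetical.

```python
import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d using the pooled standard deviation of two samples."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    var_a = statistics.variance(sample_a)  # sample variance (n - 1)
    var_b = statistics.variance(sample_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return mean_diff / pooled_sd

def expected_vs_observed(d_expected, d_observed):
    """Naive expected/observed ratio: values near 1 suggest the
    theoretical prediction is adequate in view of the data.
    (Illustration only -- not the preprint's formula 1.)"""
    return d_expected / d_observed

# Hypothetical example: a theory predicts d = 0.50; the observed d
# is computed from two small groups of scores.
group_a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]
group_b = [4.6, 4.4, 5.0, 4.7, 4.3, 4.8]
d_obs = cohens_d(group_a, group_b)
print(round(expected_vs_observed(0.50, d_obs), 2))
```

A ratio far from 1 in either direction would then prompt the decisions discussed next: revising the prediction, collecting more data, or judging the expected effect impractical.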
Based on simulation studies, we develop a similarity interval that can serve as a guideline for deciding whether to (a) adjust the theoretical prediction, (b) increase the sample size, or (c) consider the expected effect impractical.
Several applications to existing findings in (Social) Psychology are provided. We also provide a step-by-step guide that researchers can use to apply the Similarity Index in their own work immediately, along with guidance on interpreting its coefficients.