Statistics

Can Small Effects be Meaningful?

Standard cut-offs are not recommended when determining a suitable effect size for a power analysis. Indeed, the ‘meaningfulness’ of an effect size depends on subjective, context-specific elements. That is, a ‘small’ effect may have drastic implications in certain contexts, while a ‘large’ effect may have little to no implication in others. A recent publication provided a practical example to help differentiate a statistically significant versus a clinically meaningful effect size.
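
As a quick illustration of why the chosen effect size matters, here is a minimal sketch (assuming the pwr package is installed) of the per-group sample size needed to detect a ‘small’ standardized mean difference:

```r
# Sketch: per-group n needed to detect a small effect (Cohen's d = .2)
# with 80% power at alpha = .05; assumes the pwr package is installed.
library(pwr)
pwr.t.test(d = 0.2, power = 0.80, sig.level = 0.05, type = "two.sample")
# Returns n of roughly 394 per group, showing the cost of powering for small effects.
```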

Residuals for Post-hoc Analysis in Chi-square

Chi-square tests are common in psychological science (Bakker & Wicherts, 2011). These tests compare the observed (i.e., the actual) frequencies against the expected frequencies (i.e., \(expected_{i,j} = \frac{n_{row_i} \cdot n_{col_j}}{n_{tot}}\)) in a \(Row \times Column\) contingency table, sometimes referred to as a crosstab (e.g., in SPSS). Formally, the Chi-square statistic is defined as \(\chi^2 = \sum\frac{(O-E)^2}{E}\), with degrees of freedom \(df = (n_{rows}-1)(n_{cols}-1)\). Despite the ubiquity of these tests, post-hoc analyses may be less common.
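
For a concrete sketch in R (the counts below are invented for illustration), the standardized residuals returned by chisq.test() offer a cell-wise post-hoc follow-up:

```r
# Hypothetical 2 x 3 contingency table (counts invented for illustration)
tbl <- matrix(c(30, 10, 20, 15, 25, 40), nrow = 2,
              dimnames = list(group = c("A", "B"),
                              response = c("Low", "Mid", "High")))
fit <- chisq.test(tbl)
fit$expected  # expected_{i,j} = row total * column total / grand total
fit$stdres    # standardized residuals; |value| > 1.96 flags a deviating cell
```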

Diagnosing Multicollinearity using Variance Inflation Factors (VIF)

In an ideal world, a regression model’s predictors will be uncorrelated with each other and with any omitted predictor that is associated with the outcome variable. When this is the case, the sums of squares accounted for by each predictor will be uninfluenced by any other predictor. That is, if you ran two simple regressions, Model 1: \(\hat{Y} = \beta_{0} + \beta_{1}X_{1}\) and Model 2: \(\hat{Y} = \beta_{0} + \beta_{2}X_{2}\)…
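
A minimal sketch in R, using the built-in mtcars data and the vif() function from the car package (assumed installed):

```r
# Assumes the car package is installed; mtcars ships with base R
library(car)
fit <- lm(mpg ~ disp + hp + wt, data = mtcars)
vif(fit)  # VIF_j = 1 / (1 - R^2_j), where R^2_j regresses predictor j on the rest
# Rules of thumb vary, but VIFs above roughly 5-10 are often taken to flag multicollinearity
```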

F Distribution

Front Matter I remember hearing about the F-test during my third-year undergraduate statistics class. I enjoyed statistics courses more than the average psychology student, or at least I believed so. I felt comfortable with the equations to calculate SSE, MSB, and so on, but I never gave much thought to why a certain F value was considered statistically significant. In fact, we were never taught what p-values mean during my undergrad (2012-2015).
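
To make the ‘why’ concrete: an observed F is compared against the F distribution with the model’s degrees of freedom. A minimal sketch in base R, using hypothetical degrees of freedom and an invented observed F:

```r
# Critical F for alpha = .05 with df1 = 2 (between) and df2 = 27 (within)
qf(0.95, df1 = 2, df2 = 27)
# p-value for a hypothetical observed F = 4.5 with the same degrees of freedom
pf(4.5, df1 = 2, df2 = 27, lower.tail = FALSE)
```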

Orthogonal Predictors Influence on Statistical Power

I recently came across a Twitter poll that piqued my interest. The specific poll asked: “Including non-confounding covariates (Z) in the regression y ~ X + Z increases power to detect an association of X with y (assuming the association of Z with y is non-zero).” My immediate response was “No,” because the variance predicted by the covariate will not influence the variance explained by the original predictor and, thus, will not influence the standard error.
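
A quick simulation sketch (with invented parameter values) makes the claim directly testable: compare the standard error of X’s coefficient with and without an orthogonal covariate Z in the model.

```r
set.seed(1)
n <- 200
X <- rnorm(n)
Z <- rnorm(n)                       # Z generated independently of X (non-confounding)
y <- 0.3 * X + 0.5 * Z + rnorm(n)   # Z is associated with y
m1 <- lm(y ~ X)
m2 <- lm(y ~ X + Z)
c(se_without_Z = coef(summary(m1))["X", "Std. Error"],
  se_with_Z    = coef(summary(m2))["X", "Std. Error"])
```

Comparing the two standard errors shows directly whether including Z changes the precision of X’s estimate.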

Visualizing Power

Primer on Statistical Significance Null-hypothesis significance testing (NHST) is a controversial approach to social science research. Although I will not visit the concerns in full, it’s important to understand the concepts related to NHST so you can fully appreciate why this approach is criticized and why its flaws are… well… flaws. One ubiquitous, yet misunderstood, concept is the p-value. For your own knowledge: …beginning with the assumption that the true effect is zero (i.e., …
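
As a minimal sketch of the ‘visualizing power’ idea (a one-sided z test with an assumed true effect of 2 standard errors), base R can draw the null and alternative distributions and compute the area past the critical value:

```r
crit <- qnorm(0.95)                              # critical value under the null
curve(dnorm(x), -4, 6, ylab = "Density")         # null distribution (H0)
curve(dnorm(x, mean = 2), add = TRUE, lty = 2)   # alternative distribution (H1)
abline(v = crit)                                 # rejection threshold
pnorm(crit, mean = 2, lower.tail = FALSE)        # power: H1 area beyond crit
```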

Undergraduate Research Methods and Statistics

Research and development of effective ways to teach research methods and statistics to undergraduates.