# Recent Posts

### Shiny Conditional Panels and pickerInput

Shiny applications offer a fantastic way to produce interactive web applications. Unsurprisingly, there are abundant R packages built specifically for Shiny to facilitate 'telling your data story'. For example, I often use the `pickerInput()` function from the shinyWidgets package, which is more aesthetic than the base `checkboxGroupInput()` and has built-in 'Select All/None' buttons:

```r
pickerInput(
  inputId = "picker",
  label = "Please choose your options",
  choices = c("Plot", "Text", "Analysis"),
  multiple = TRUE,
  options = list(`actions-box` = TRUE)
)
```

This basic interface allows users to select any combination of options.
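As a minimal sketch of how `pickerInput()` can drive a conditional panel, the app below shows a plot only when "Plot" is among the selections. The `inputId`, panel contents, and mtcars plot are illustrative assumptions, not taken from the original post:

```r
library(shiny)
library(shinyWidgets)

ui <- fluidPage(
  pickerInput(
    inputId = "picker",
    label = "Please choose your options",
    choices = c("Plot", "Text", "Analysis"),
    multiple = TRUE,
    options = list(`actions-box` = TRUE)  # backticks needed: actions-box is not a valid bare name
  ),
  # conditionalPanel() conditions are JavaScript; input.picker is an array when multiple = TRUE
  conditionalPanel(
    condition = "input.picker.indexOf('Plot') > -1",
    plotOutput("plot")
  )
)

server <- function(input, output) {
  output$plot <- renderPlot(plot(mtcars$wt, mtcars$mpg))
}

# shinyApp(ui, server)  # uncomment to launch the app locally
```

Note that the `options = list(...)` argument requires backticks around `actions-box`, since a hyphenated name is not valid R syntax on its own.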

### Residuals for Post-hoc Analysis in Chi-square

Chi-square tests are common in psychological science (Bakker & Wicherts, 2011). These tests compare the observed (i.e., the actual) frequencies versus the expected frequencies (i.e., $$expected_{i,j} = \frac{n_{rowi}*n_{colj}}{n_{tot}}$$) in a $$Row \times Column$$ contingency table, sometimes referred to as a crosstab (e.g., in SPSS). Formally, the Chi-square statistic is defined as: $$\chi^2 = \Sigma\frac{(O-E)^2}{E}$$ with degrees of freedom: $$df = (n_{rows}-1)*(n_{cols}-1)$$ Despite the ubiquity of these tests, post-hoc analyses may be less common.
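As a quick sketch of the post-hoc idea, base R's `chisq.test()` already returns the expected counts and standardized residuals needed for cell-level follow-up. The 2x2 table below is made up for illustration:

```r
# Illustrative 2x2 contingency table (fabricated counts)
tab <- matrix(c(30, 10, 20, 40), nrow = 2,
              dimnames = list(Group   = c("A", "B"),
                              Outcome = c("Yes", "No")))

fit <- chisq.test(tab)

fit$expected  # expected counts: (row total * column total) / grand total
fit$stdres    # standardized residuals; |value| > 1.96 flags a cell at alpha = .05
```

Cells with large standardized residuals are the ones driving a significant omnibus chi-square.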

### Diagnosing Multicollinearity using Variance Inflation Factors (VIF)

In an ideal world, a regression model’s predictors will be uncorrelated with each other or with any omitted predictor that is associated with the outcome variable. When this is the case, the sums of squares accounted for by each predictor will be uninfluenced by any other predictor. That is, if you ran two simple regressions: Model 1: $$\hat{Y} = \beta_{0} + \beta_{1}X_{1}$$ and Model 2: $$\hat{Y} = \beta_{0} + \beta_{2}X_{2}$$
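A minimal sketch of computing VIFs in R, using the car package on an mtcars model chosen purely for illustration:

```r
library(car)

# Predictors in mtcars are substantially intercorrelated, so VIFs are inflated
fit <- lm(mpg ~ wt + disp + hp, data = mtcars)

vif(fit)  # VIF_j = 1 / (1 - R^2_j), where R^2_j regresses predictor j on the others
```

Rules of thumb vary, but VIF values well above roughly 5-10 are commonly treated as a warning sign of multicollinearity.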

### F Distribution

Front Matter I remember hearing about the F-test during my third-year undergraduate statistics class. I enjoyed statistics courses more than the average psychology student did, or at least I believed so. I felt comfortable with the equations to calculate SSE, MSB, and so on, but I never gave much thought to why a certain F value was considered statistically significant. In fact, we were never taught what p-values mean during my undergrad (2012-2015).
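For the curious, base R makes the link between an F value and "statistical significance" concrete; the degrees of freedom and observed F below are illustrative assumptions:

```r
# Critical F at alpha = .05 for df1 = 2, df2 = 57: values above this are "significant"
qf(0.95, df1 = 2, df2 = 57)

# p-value for an observed F of 4.2 on the same degrees of freedom
pf(4.2, df1 = 2, df2 = 57, lower.tail = FALSE)
```

The p-value is simply the area under the F distribution to the right of the observed statistic.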

### Newfoundland Youth Hospitalized More than the Canadian Average?

I vividly recall my first research methods and statistics course, which I took during the second year of my undergrad. I considered myself a math-liker (not a lover) who would like research and do well with statistics. Thus, I was excited to get into the nitty-gritty of research. The course kicked off with an intense lecture regarding the nature of knowledge and believing information. Why should we believe what someone tells us?

### Orthogonal Predictors Influence on Statistical Power

I recently came across a Twitter poll that piqued my interest. The poll asked: "Including non-confounding covariates (Z) in the regression y ~ X + Z increases power to detect an association of X with y (assuming the association of Z with y is non-zero)." My immediate response was "No" because the variance predicted by the covariate will not influence the variance explained by the original predictor and, thus, will not influence the standard error.
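One way to probe the poll's claim is a quick simulation; the effect sizes, sample size, and replication count below are illustrative choices, not taken from the post:

```r
set.seed(1)
n_reps <- 2000
p_without <- p_with <- numeric(n_reps)

for (i in seq_len(n_reps)) {
  x <- rnorm(100)                         # predictor of interest
  z <- rnorm(100)                         # covariate, orthogonal to x in expectation
  y <- 0.2 * x + 0.5 * z + rnorm(100)     # z is genuinely related to y

  p_without[i] <- summary(lm(y ~ x))$coefficients["x", 4]      # p-value for x, no covariate
  p_with[i]    <- summary(lm(y ~ x + z))$coefficients["x", 4]  # p-value for x, with covariate
}

mean(p_without < .05)  # empirical power without Z
mean(p_with < .05)     # empirical power with Z
```

Comparing the two rejection rates shows whether including an orthogonal but outcome-related covariate changes the power to detect X, which is exactly the question the poll posed.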

# Projects

#### Self-injury and Online Activity

Understanding how and why people access or post online content related to suicide and self-injury, and the effects of doing so.

#### Undergraduate Research Methods and Statistics

Research and development of effective ways to teach research methods and statistics to undergraduates.

#### Rural Suicide

Seeking to support Rural Canadians affected by suicide.

#### Suicide Theory

Advancing suicide theory to better understand who, how, and why individuals experience suicidal ideation or engage in suicidal behaviors.

Connect with me