For the introductory statistics student, confidence intervals can be a daunting concept to grasp. Simply put, a confidence interval is a range of values that we have a certain measure of confidence contains the population parameter. The 95% confidence level is the most common choice in my academic circle, though other levels are perfectly viable as long as the choice is grounded in theory. It is easy to make the mistake of assuming that a larger sample size will reduce your likelihood of making a type 1 error (rejecting the null when you shouldn't have). To demonstrate this I put together a Shiny app that can be found here or by clicking on the image below. The code for the app is publicly available on GitHub here. As you increase the sample size, the intervals get narrower, but red intervals (type 1 errors) keep appearing at roughly the same rate. Special thanks to Larry Cook for introducing me to this neat plot and method for demonstrating confidence intervals.
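The same point can be checked outside the app with a quick simulation. The sketch below (in Python rather than the app's R/Shiny code, and not taken from the app itself) draws many samples of size n from a normal population with known mean, builds a 95% interval around each sample mean, and counts how often the interval misses the true mean. Whatever n you pick, the miss rate hovers around 5%; the function name and the known-sigma interval are my own simplifying assumptions, not part of the original app.

```python
import random
import statistics

def type1_error_rate(n, trials=2000, mu=0.0, sigma=1.0, z=1.96, seed=42):
    """Fraction of 95% CIs that miss the true mean mu.

    Missing mu corresponds to a type 1 error: we would reject the
    (true) null hypothesis that the mean equals mu. Uses the
    known-sigma z-interval for simplicity (an assumption, not the
    app's exact method).
    """
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        sample_mean = statistics.fmean(sample)
        half_width = z * sigma / n ** 0.5  # interval shrinks as n grows
        if abs(sample_mean - mu) > half_width:
            misses += 1  # a "red interval": it excludes the true mean
    return misses / trials

# Intervals narrow with n, but the miss rate stays near 0.05 throughout.
for n in (10, 100, 1000):
    print(f"n = {n:4d}  type 1 error rate ≈ {type1_error_rate(n):.3f}")
```

Running it shows the trade-off directly: larger samples buy you precision (narrower intervals), not protection against type 1 errors, whose rate is fixed by the confidence level you chose.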