Robert Matthews said: “Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.” He’s right.
In a previous life, I wrote a thesis in philosophy, in a specific area: epistemology, also called
the theory of knowledge, because it asks what knowledge is, how it can be acquired,
and to what extent any given subject or entity can be known.
My thesis dealt with the tradition, going back to Cournot, of applying mathematical modelling to the social sphere, and more specifically with how climate modelling interacts with interdisciplinary sources of knowledge (mathematics, physics, geography, philosophy).
After reading this:
http://www.academia.edu/1075253/Climate_Change_Epistemic_Trust_and_Expert_Trustworthiness
it seems to me that the use of Bayesian statistics is often misunderstood.
Consider two theses:
- Bayesian statistics is the science that deals with the degree of proof carried by observations. In other words, Bayesian statistics is a self-contained paradigm providing tools and techniques for all statistical problems.
- In the classical frequentist view of statistical theory, a statistical procedure is judged by averaging its performance over all possible data. The Bayesian approach, by contrast, gives prime importance to how a given procedure performs on the actual data observed in a given situation.
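This contrast can be made concrete with a toy sketch (all numbers below are hypothetical, chosen only for illustration): the frequentist evaluation averages the behaviour of a test over many simulated datasets drawn under H0, while the Bayesian evaluation conditions on the one dataset actually observed.

```python
import random
from statistics import mean

random.seed(42)

# A toy test: flip a coin n times, reject H0: p = 0.5 whenever
# we see 8 or more heads (a one-sided rejection rule).
n, threshold = 10, 8

def rejects(flips):
    return sum(flips) >= threshold

# Frequentist evaluation: average the procedure's behaviour over many
# hypothetical datasets drawn under H0 (this is the type-I error rate).
trials = [[random.random() < 0.5 for _ in range(n)] for _ in range(100_000)]
type_i_rate = mean(rejects(t) for t in trials)

# Bayesian evaluation: condition on the single dataset actually seen.
observed = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 heads out of 10
# Posterior P(p > 0.5 | data) under a uniform prior, by simple
# likelihood-weighted Monte Carlo over the prior.
ps = [random.random() for _ in range(100_000)]
weights = [p**8 * (1 - p)**2 for p in ps]
post_p_gt_half = sum(w for p, w in zip(ps, weights) if p > 0.5) / sum(weights)

print(round(type_i_rate, 3))     # close to P[X >= 8] = 56/1024, about 0.055
print(round(post_p_gt_half, 2))  # well above 0.5
```

The point of the sketch is the shape of each computation, not the numbers: the first quantity is a property of the procedure averaged over data that were never observed; the second is a statement about our beliefs given the data we actually have.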
The core of this theory was formalised by Karl Popper around two main principles:
- Knowledge cannot start from nothing (from a tabula rasa), nor from observation alone. The advance of knowledge consists mainly in the modification of earlier knowledge. Although we may sometimes advance through a chance observation, for example in archaeology, the significance of the discovery will usually depend upon its power to modify our earlier theories.
- Any probability is a degree of belief about something; it is not a property of the thing itself. This means a scientific model produces data either absolutely or conditionally on probabilities, and it is possible to measure how the data modify our degrees of belief (Bayes' rule).
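Bayes' rule as a belief update can be sketched in a few lines. The two hypotheses and their likelihoods below are made up for illustration: a coin is either fair or biased towards heads, and each observed flip reweights our degree of belief in each hypothesis.

```python
# Degrees of belief before and after data, via Bayes' rule:
# P(H | D) = P(D | H) P(H) / P(D).
# Hypothetical model: the coin is either fair (p = 0.5) or
# biased (p = 0.8), with equal prior belief in each.
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)

def update(beliefs, observed_heads):
    """One Bayes update: multiply each belief by the likelihood
    of the observation, then renormalise."""
    post = {h: b * (likelihood[h] if observed_heads else 1 - likelihood[h])
            for h, b in beliefs.items()}
    total = sum(post.values())
    return {h: v / total for h, v in post.items()}

beliefs = dict(priors)
for flip in [True, True, True, False, True]:   # 4 heads, 1 tail
    beliefs = update(beliefs, flip)

print(beliefs)   # "biased" ends up more credible than "fair"
```

Notice that the data never prove a hypothesis; they only shift the degrees of belief, which is exactly the second principle above.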
One result of frequentist theory, perhaps the most criticized, is Fisher's p-value.
As an example, suppose we want to evaluate how much a diploma matters for getting a first job.
The null hypothesis is that the diploma has no effect on the first job. In the frequentist framework, we compute the p-value, which can be
interpreted as the probability of observing a difference at least as large as the one observed in the data, if the null hypothesis is true.
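A permutation test makes this definition literal (the employment numbers below are entirely hypothetical): under H0 the "diploma" labels carry no information, so we can reshuffle them many times and count how often a difference at least as large as the observed one appears.

```python
import random

random.seed(1)

# Hypothetical data: first-job outcome (1 = got a job) for people
# with and without a diploma. All numbers are made up.
with_diploma    = [1] * 18 + [0] * 7     # 18/25 employed
without_diploma = [1] * 11 + [0] * 14    # 11/25 employed

observed_diff = sum(with_diploma) / 25 - sum(without_diploma) / 25

# Under H0 (no diploma effect) the group labels are exchangeable:
# the p-value is the share of relabelings whose difference is at
# least as large, in absolute value, as the observed one.
pooled = with_diploma + without_diploma
count = 0
n_perm = 20_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:25]) / 25 - sum(pooled[25:]) / 25
    if abs(diff) >= observed_diff:
        count += 1
p_value = count / n_perm
print(round(p_value, 2))
```

This is exactly the sentence above turned into a loop: "the probability of a difference at least as large as the observed one, if H0 is true."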
So let us talk about the p-value through this problem:
if you pass a beautiful girl at the same place on seven out of 10 days, can you conclude that she is always there, so that tomorrow you will still have a chance to speak with her?
We are going to examine both approaches: Fisher's (based on the p-value) and Jeffreys' (Bayesian).
- With Fisher, we have:
H0 : p = 0.5 and H1 : p ≠ 0.5
where p is “the probability of meeting the girl”. If we reject H0 (Fisher's test), we conclude that the girl is always there.
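As a numerical sketch (assuming the count is 7 meetings in 10 days, and a two-sided exact binomial test, which is symmetric under p = 0.5), Fisher's p-value can be computed directly:

```python
import math

# Under H0: p = 0.5, the number of meetings in 10 days follows
# a Binomial(10, 0.5) distribution.
n, k = 10, 7

def binom_pmf(n, j, p=0.5):
    """Probability of exactly j successes in n trials."""
    return math.comb(n, j) * p**j * (1 - p)**(n - j)

# One-sided p-value: probability of 7 or more meetings under H0.
p_one_sided = sum(binom_pmf(n, j) for j in range(k, n + 1))
# Two-sided: double it, by symmetry of Binomial(n, 0.5).
p_two_sided = 2 * p_one_sided

print(p_one_sided)  # 0.171875
print(p_two_sided)  # 0.34375
```

So even a count of 7 in 10 gives a p-value far above the usual 0.05 threshold, which is worth keeping in mind for the discussion that follows.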
The p-value is: