
How to systematically approach truth - Bayes' rule

Apr 11, 2024
conditioned on the evidence by the probability of the negated hypothesis conditioned on the evidence. It is easy to calculate this: simply apply the original formula to the numerator and the denominator, and we get this monster. But the probability of the evidence is the same in both updates, so it cancels out and we are left with this formula. Look at the right-hand side: there is a ratio of the priors, and a ratio of how well H and not-H fit the available evidence. Suppose your prior probability for H is one third of your prior probability for not-H. You then observe new evidence that is three times more likely under H than under not-H.
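The cancellation described here can be written out explicitly; this is a reconstruction of the on-screen algebra (the video shows it visually), writing P(e) for the probability of the evidence:

```latex
\frac{P(H \mid e)}{P(\neg H \mid e)}
  = \frac{P(e \mid H)\,P(H)/P(e)}{P(e \mid \neg H)\,P(\neg H)/P(e)}
  = \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{ratio of priors}}
    \cdot
    \underbrace{\frac{P(e \mid H)}{P(e \mid \neg H)}}_{\text{fit to the evidence}}
```

The factor P(e) appears in both the numerator and the denominator, which is why it drops out.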
Then H and not-H are updated to be exactly as likely, which means that both H and not-H have a posterior probability of 50%. See how these terms work together? The ratio of the prior probabilities is called the "prior odds" because, well, it gives the odds of the prior. Odds are a slightly different way of representing probability. The probability of rolling a four with a die is one in six; the odds are 1 to 5. The probability of a fair coin coming up heads is 50%, which corresponds to odds of 1 to 1. The prior odds in this form of Bayes' rule are sometimes written in "O(H)" notation.
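The odds bookkeeping above can be sketched in a few lines of code. This is an illustrative sketch, not from the video; the function names are my own, and it replays the running example (prior odds of 1 to 3, evidence three times more likely under H):

```python
def prob_to_odds(p):
    """Convert a probability to odds in favor (e.g. 1/6 -> 0.2, i.e. odds of 1 to 5)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Convert odds in favor back into a probability."""
    return odds / (1 + odds)

# Rolling a four with a fair die: probability 1/6, odds 1 to 5 (ratio ~0.2).
print(prob_to_odds(1 / 6))

# The example from the text: prior odds of 1 to 3 for H, and evidence
# three times more likely under H than under not-H.
prior_odds = 1 / 3
likelihood_ratio = 3

# The odds form of Bayes' rule: O(H | e) = L(e | H) * O(H).
posterior_odds = likelihood_ratio * prior_odds
print(posterior_odds, odds_to_prob(posterior_odds))  # odds of 1 to 1, i.e. 50%
```

The factor of 3 against H in the prior is exactly cancelled by the factor of 3 in favor of H from the evidence, which is why the posterior lands on even odds.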

The other factor is called the "likelihood ratio," simply because it is the ratio of the likelihoods. It can be written with the notation L(e | H). The result is called the "posterior odds," because, well, it expresses the odds of the posterior. It is often denoted O(H | e). And this different way of expressing Bayes' rule is called the "odds form" of Bayes' rule. Now let's look at a real-world application of the odds form, taken from one of the pioneers of Bayesian networks, Judea Pearl. One night you are woken up by the sound of your burglar alarm.
You want to know how likely it is that your house is being broken into by a burglar, or whether it is a false alarm. Let's apply the odds form. You know that if there is a burglary, the alarm will sound with a probability of 95%. However, you also know from past experience that there is a 1% chance that your burglar alarm will sound even when there is no burglary; sometimes it simply malfunctions, or something else triggers it falsely, like a tree branch hitting a window. Therefore, the likelihood ratio is 95. Finally, we know from local crime statistics that, in your city, there is approximately a one in ten thousand chance that a given house will be burglarized on a given night, which means that the prior odds are 1 to 9,999.
Therefore, after hearing the alarm, the posterior odds that your house is being burglarized are 95 to 9,999, which roughly corresponds to odds of 1 to 105. This ratio equals 0.0095, which means that after hearing the alarm, the probability of your house being burglarized is 0.0095 times the probability of it not being burglarized. If we convert the posterior odds into a probability, we find that the probability of a burglary given the alarm is about 0.94%. That's almost 1%: perhaps a high enough probability to take the situation seriously, but not so high that you should panic. It is worth reflecting on this example a little more.
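The burglar-alarm arithmetic can be checked directly. A minimal sketch using the numbers given in the text (the variable names are mine):

```python
p_alarm_given_burglary = 0.95      # alarm sounds in 95% of burglaries
p_alarm_given_no_burglary = 0.01   # 1% chance of a false alarm on any given night

# Likelihood ratio: how much more likely the alarm is under burglary (95).
likelihood_ratio = p_alarm_given_burglary / p_alarm_given_no_burglary

# Prior odds: one burglary per ten thousand house-nights, i.e. 1 to 9,999.
prior_odds = 1 / 9999

# Odds form of Bayes' rule: posterior odds = likelihood ratio * prior odds.
posterior_odds = likelihood_ratio * prior_odds   # 95 / 9999, about 0.0095

# Convert odds back to a probability: about 0.0094, i.e. roughly 0.94%.
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_odds, 4), round(100 * posterior_prob, 2))
```

Note that 95 to 9,999 and 1 to 105 are the same odds to within rounding, which is why both appear in the text.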
We started with the no-burglary hypothesis, which was approximately ten thousand times more likely than the burglary hypothesis. But the alarm was only about 100 times more likely to go off in a world with a burglary than in a world without one. That is only enough to cover two of the four orders of magnitude needed to tip the scales. The alarm ringing makes the burglary hypothesis gain credence relative to the no-burglary hypothesis, but only enough that the no-burglary hypothesis becomes about 100 times more likely, rather than 10,000 times more likely, than the burglary hypothesis. Even if you're not interested in following these calculations step by step, the important thing to understand about Bayes' rule is that, in one simple equation, it captures the essence of how we should change our beliefs in light of new evidence.
In the odds form of Bayes' rule, we see that our updates are governed by multiplying our prior odds by the likelihood ratio dictated by the evidence. Whenever we find new evidence, Bayesians ask us to consider: how likely would we be to see this evidence if our theory were correct? And how likely would we be to see it if our theory were not correct? Multiplying this ratio by our prior odds gives us our posterior odds: what we should believe in light of the evidence. We will now consider an interesting implication of Bayes' formula that you may recognize as a fairly central principle of scientific reasoning.
There are many more implications of this type and we will probably develop them in future videos. Bayesians typically believe that we should never be absolutely sure of any of our beliefs. In other words, we should never assign a probability of exactly zero or one to any proposition, no matter how absurd. This idea is sometimes called Cromwell's rule, after the English statesman Oliver Cromwell, who once declared before the General Assembly of the Church of Scotland: “I pray you, in the bowels of Christ, consider it possible that you are wrong.” The rationale for Cromwell's rule can be easily seen by asking what would happen if we ever assigned 100% or 0% confidence to any of our beliefs.
Let's look again at the original form of Bayes' rule. Let's say you think the chance of your house being burglarized is exactly zero. You've never heard of a burglary in your city. Burglaries? What are they? Furthermore, this time we have an extremely precise alarm. The chance of it making a mistake is one in a billion. It will give a false alarm only once every billion nights, or about once every 2.7 million years. And in only one in a billion burglaries will it fail to warn you. Even though the alarm is extremely good at its job, Bayes' rule tells us that hearing it provides exactly no update to your prior belief about being burglarized.
Here is why. In the numerator, we have P(alarm | burglary), which in this case is extremely high, multiplied by the prior probability of a burglary. However, the prior probability of a burglary is zero, remember? Multiplying these two quantities in the numerator gives zero. And we know that if the numerator is zero, it doesn't matter what is in the denominator: the result must also be zero. In other words, your complete and resolute certainty that you are not being burglarized was not affected by the evidence at all. Even in the presence of a ridiculously reliable alarm, your faith remains unshaken.
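The zero-prior trap is easy to demonstrate numerically. A small sketch (the function and variable names are my own, not from the video), using the one-in-a-billion alarm described above:

```python
def posterior(p_e_given_h, p_e_given_not_h, prior):
    """Bayes' rule: P(H | e) = P(e | H) P(H) / P(e), with P(e) by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# An absurdly reliable alarm: one-in-a-billion error rates in both directions.
p_alarm_given_burglary = 1 - 1e-9
p_alarm_given_no_burglary = 1e-9

# With a prior of exactly zero, the numerator is zero, so no evidence moves you.
print(posterior(p_alarm_given_burglary, p_alarm_given_no_burglary, prior=0.0))

# With even a tiny nonzero prior, the same evidence produces a huge update.
print(posterior(p_alarm_given_burglary, p_alarm_given_no_burglary, prior=1e-4))
```

The contrast between the two calls is the whole point of Cromwell's rule: a prior of exactly zero is the one belief that Bayes' rule can never revise.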
And that's kind of a problem, right? Wouldn't you like to at least doubt your initial hypothesis a little? As the Bayesian statistician Dennis Lindley put it, one should always "leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there, since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved." Another way to think about this is in terms of odds. A probability of 100% or 0% corresponds to odds of infinity to 1, or of 1 to infinity. Which is a little strange.
You have never observed infinite evidence, so it makes no sense to have infinities in your odds. For Bayesians, Cromwell's rule is just a generalization of the idea that we should keep an open mind. We should be prepared to change our minds about anything. And, in principle, some amount of evidence should always be enough to push our beliefs in the opposite direction from where they currently are, no matter how confident we are at the moment. More generally, Bayesians view beliefs as a way of constructing an understanding of the world. The Bayesian view of evidence tells us that our beliefs should dance like a leaf in the wind, drifting in whatever direction the evidence pushes them.
Truly embracing Bayesian reasoning means striving to change our beliefs by just the right amount in the face of new evidence. Bayes' rule is more than just a formula. It is a precise rule for reasoning. Just as the laws of physics govern the workings of the universe, the laws of human understanding govern how we should seek truth, and no one is exempt from them. If you choose to ignore them, your journey towards the truth will be hindered. Of course, Bayesians do not claim that we can realistically follow these calculations for all of our beliefs. Determining the precise likelihood ratio of real-world evidence can be extremely difficult.
It is important not to make the mistake of pretending to more precision than we actually have by claiming extreme confidence in specific numbers, especially in cases where the numbers only serve to elucidate our subjective impressions, which could be wrong. But even with these caveats, Bayes' rule means that there is a precise and general method for changing your mind. And even just knowing that there is a method for getting at the truth should inspire us to pursue it. If we live up to this ideal, our model of the world will slowly approach the true nature of reality. Hello everyone! If this is the first time you are hearing about Bayesian epistemology, I encourage you to dig deeper.
An incredible intellectual journey awaits you. I have included many links and recommendations in the video description that will hopefully help you deepen your knowledge. And, as always, many thanks to our sponsors at the Universal Turing Machine level and above. I am amazed at the consistency of your support.
