This is something that I love doing: giving my responses in the form of probabilities. It makes your judgement much more accurate.
Plus, I frequently ask questions like this during my interviews, such as: what do you think will happen to the FTSE 100 index on Monday? I am interested in learning about the candidate's thinking process and what inputs they include when making a decision. You would be surprised how many people reply with "dunno" or "how long is a piece of string?" Needless to say, I do not hire them, because if they cannot reason through a simple economic question, how can they work in my team and make decisions every two minutes?
Stuff to think about.
Rationally Speaking: Why we should use odds, all the time
http://rationallyspeaking.blogspot.com/2011/08/why-we-should-use-odds-all-time.html
by Ian Pollock
It is extremely important to quantify epistemic states — specifically, levels of certainty — if you wish to think clearly. The reason why was summed up rather nicely by one of my special historical heroes, James Clerk Maxwell:
The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.
In other words, even if everybody reasoned using classical logic without committing logical fallacies (fat chance), practical reasoning would still be impossible, because questions in real life just never deal in certainties.*
One of the more beautiful things to discover in this world is that there are objective rules for the manipulation of subjective certainties and uncertainties. Bayesian statisticians call these levels of uncertainty “probabilities.” (Frequentists… get confused at this point, on which I hope to write much more in the future).**
One of the most unexpected beneficial side-effects of thinking probabilistically as a habit is that it makes you realize just how much you actually know. (This is probably the one skeptical conclusion that doesn’t deflate one’s ego.)
For example, suppose that I ask you a weird question like “What did Peter Singer (the philosopher) eat for breakfast on October 12, 2007?”
The standard answer to such questions, which in my experience is elevated almost to the level of a Kantian imperative among some traditional skeptics, is “I don’t know.” Whereof one cannot speak, thereof one must be silent.
The problem with this is that you usually do know quite a lot. To illustrate, let’s consider my “Breakfast of Utilitarians” example.
To begin with, you know with near-certainty that Peter Singer didn’t eat anything that doesn’t actually exist — unicorn cheese, for example. Okay, but that’s trivial.
You also know with decent confidence that he didn’t eat anything actively poisonous — for example, fly agaric mushrooms. But that’s pretty obvious too.
Fine, now we’ve narrowed it down to non-poisonous foods that actually exist. You also may know that he is a champion of animal welfare, and a utilitarian vegan, so all or most animal products are out of the running. Now we’re getting somewhere.
Further, the man is Australian, and of European ancestry, which ceteris paribus makes various other world cuisines (e.g., Mexican, Finnish) somewhat less likely than not.
On the other hand, you might want to revise this last consideration if you’ve read “A Vegetarian Philosophy,” at the end of which he gives a recipe for Dal, an Indian dish. This suggests, if weakly, that he might have more international tastes.
Lastly (or is it?), the meal in question is breakfast, and people typically confine certain foods to specific meals. For this reason, tofu sausages are a fairly good bet relative to others, while onion soup is a fairly bad one. We could go on, if we wished…
The point is that if you cared enough, you could probably narrow Singer’s breakfast that day down to a sizeable, but not endless, list of possibilities, each weighted by its own likelihood. A probability density distribution over Platonic breakfast-space, if you will. You may not be able to pick one specific food and say “He ate this!”, but you are far from wholly ignorant — you’ll know the best candidates. And this generalizes to almost all sensible propositions. Try it — it’s actually a rather fun exercise!
Of course, it’s still reasonable to say “I don’t know” as a quick gloss of the actual truth: “I have no special information on this question that you do not.” However, the problem with thinking “I don’t know” in the sense of full ignorance, is that it allows you — intentionally or not — to sweep all your background knowledge under the rug and pretend to yourself that some question is perfectly uncertain. Background knowledge should always be the first thing to come to your mind when considering a truth question. This helps avoid mistakes like the base rate fallacy (and more generally, fallacies wherein you ignore your own priors), and allows for good decision-making under uncertainty.
However, if humans wish to think like this as a habit, it is often much more useful to forget about probabilities per se, and use the mathematically equivalent concept of odds.
Let’s have a quick refresher on what “odds” are. We all know what a probability is (or at least, we’re familiar with the term!). Odds can be seen as ratios of probabilities. Just as we use P(A) for the “probability of A,” we may talk about O(A), the “odds of A” (where A is some apparently sensible proposition).
In terms of probabilities, O(A) = P(A)/P(~A). So for example, if there is a 2/3 probability of rain tomorrow (about 67%), then O(rain) = (2/3)/(1/3), which reduces to 2:1 (usually read “two to one in favour”). The “:” is basically just a division sign, so O(rain) can be stated as “2 to 1” or as simply “2.” Although odds can be expressed as ratios of probabilities, they are best understood on their own terms altogether. In this case, “odds of 2 to 1 in favour of rain tomorrow” means something like “days like this are followed by twice as many rainy days as non-rainy days, to the best of my knowledge.”
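To make the conversion concrete, here is a minimal Python sketch of the formula above (the function names are my own, not from the post):

```python
def prob_to_odds(p: float) -> float:
    """Odds in favour of A: O(A) = P(A) / P(~A). Requires 0 < p < 1."""
    return p / (1 - p)

def odds_to_prob(odds: float) -> float:
    """Inverse conversion: P(A) = O(A) / (1 + O(A))."""
    return odds / (1 + odds)

print(prob_to_odds(2 / 3))  # ~2.0, i.e. "2 to 1 in favour" (up to floating-point rounding)
print(odds_to_prob(2.0))    # ~0.667, recovering the original probability
```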
Odds are even more familiar from the racetrack, where a bookie might give “10 to 1 on Longshot, to win.” What this means is that if the bookie is selling stakes for $5 each, then a single $5 stake will get you (10+1)*$5 = $55 if you win (i.e., a gain of $50 plus your $5 stake back), while a loss will simply lose you your $5 stake. (Of course, in order to make money, the bookie must think that the real odds on Longshot are even longer than 10 to 1.)
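If you want to check the bookie arithmetic yourself, here is a small sketch under the same conventions (the helper names are my own invention):

```python
def winning_payout(stake: float, odds_against: float) -> float:
    """Total returned on a win at 'odds_against to 1': the gain plus the stake back."""
    return stake * (odds_against + 1)

def break_even_prob(odds_against: float) -> float:
    """Win probability at which a bet at these odds breaks even in expectation."""
    return 1 / (odds_against + 1)

print(winning_payout(5, 10))  # 55: a $50 gain plus the $5 stake back
print(break_even_prob(10))    # ~0.0909: Longshot must win more than ~9.1% of the time to profit
```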
I advocate using odds rather than probabilities to quantify your epistemic states on all sensible propositions, for two main reasons:
(1) Odds have the appropriate mental associations.
Odds are associated in our minds with betting, which is an earthy activity in which irrationality might actually lose you your shirt; whereas probabilities are abstract and academic, probably associated with mathematics and statistics courses, and with Spock from Star Trek. The latter being the case, either you don’t get statistics at all (and the word “probability” just brings up memories of Spock being cold and emotionless), or you learned about probability in the context of wildly overspecified textbook problems, in which you had way more information handed to you than humans typically have in real-world situations.
By contrast, thinking in terms of odds and the racetrack forces you to let belief constrain anticipation — if you say you are 98% sure that Obama will win in 2012, that sounds to me suspiciously like “I really really hope he’ll win,” whereas the equivalent “49 to 1 in favour” leads to the obvious question: “Care to make it interesting?” Suddenly your wishful thinking needs to take a back seat to whether you can afford to lose this bet. (I think the advantages of this mode of thinking at least partly carry over, even if you don’t actually bet any money.)
Moreover, probabilities sound too precise, as though they have to be calculated rigorously or not at all. Stating a 95% probability makes me ask myself (and others ask me) “Why not 96% or 94%?” By contrast, “5 to 1” seems more acceptable as a tentative verbalization of a level of certainty, the arguments for which might not be readily quantifiable.
(2) Odds map epistemic states to numbers in a way that makes sense.
Alice the juror believes that Casey Anthony is guilty, with probability 90%. Bob the juror also believes she is guilty, with probability 99%. They seem to pretty much agree with each other, and yet…
If we switch over to odds, we find that Alice gives odds of 9:1 in favour of guilt, while Bob gives 99:1. This is more than an order-of-magnitude difference! Actually, Alice is substantially less convinced than Bob; they should still be arguing! Alice still entertains reasonable doubt — at this point, she should probably vote to acquit.
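Running the jurors' numbers through the conversion sketched earlier makes the gap vivid:

```python
for juror, p in [("Alice", 0.90), ("Bob", 0.99)]:
    # p / (1 - p) is prob_to_odds from the earlier sketch, inlined here
    print(f"{juror}: about {p / (1 - p):.0f} to 1 in favour of guilt")
# Alice: about 9 to 1 in favour of guilt
# Bob: about 99 to 1 in favour of guilt
```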
And tellingly, when Eve mentions that she is 100% certain of the defendant’s guilt, a quick conversion shows that she gives odds of 100:0, aka “infinity.” This means, if taken literally (which we should not actually do), that Eve should be willing to take a bet in which being proven right earns her a penny, while being proven wrong earns her unending torture. The fact that odds explode as mathematical objects when they try to map absolute certainty is a nice feature probabilities don’t have.***
In summary:
- Quantifying uncertainty about all sensible questions is a crucial cognitive tool, both for eliminating all-or-nothing thinking and for reminding us to always use our substantial background knowledge.
- Odds are more useful than probabilities for this purpose, because they have: more appropriate mental associations for most humans; good mathematical properties showing the folly of extreme cases (perfect certainty); and an intuitive relation to frequency that humans readily understand. Also, talking in odds will make you sound badass.
________
* “But what about questions like 1+1=2?” you ask? Remember, probability has to reference the fact that it is calculated in a fallible human mind. Maybe “1+1=2” is 100% correct as mathematics (I think it is), but there is still a chance that I am mistaken in thinking that 1+1=2 (epistemology) — for example, because aliens are messing with my brain. So I have to assign a probability slightly less than 100% to it.
** Also, it is often a point of contention as to what sorts of propositions “probability” can be meaningfully applied. For example, does it make sense to speak of probabilities where straightforward empirical evidence is lacking (e.g., “the probability that immaterial souls exist”)? Without wishing to get into this issue too deeply, I hold that this use of the word does make sense (provided any discourse about the existence or nonexistence of souls makes sense), since if we can discuss how likely souls are at all, we should be able to quantify our uncertainty in the same manner as for other questions.
*** If you use logarithms, you can get even nicer mathematical properties, but you lose all the intuitiveness.
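For the curious, a quick illustration of this footnote's point (a sketch, assuming natural logarithms): in log-odds, a 50/50 state sits at exactly zero, equally strong belief and disbelief are symmetric about it, certainty runs off the scale entirely, and Bayesian updating becomes simple addition of a log likelihood ratio to the prior.

```python
import math

def log_odds(p: float) -> float:
    """Natural-log odds (the 'logit'): log(P(A) / P(~A))."""
    return math.log(p / (1 - p))

print(log_odds(0.5))  # 0.0: perfectly undecided
print(log_odds(0.9))  # ~+2.20
print(log_odds(0.1))  # ~-2.20: symmetric with the line above
# log_odds(1.0) raises ZeroDivisionError: absolute certainty is off the scale
```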