Therefore, upon learning that you are in cell #1, you should become almost certain (Pr =
100/101) that the coin fell tails. Answer: At stage (a) your credence of tails should be 1/2 and at
stage (b) it should be 100/101.
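For concreteness, the Bayesian step behind this answer can be written out (a sketch, on the natural reading of model 1's conditional credences: being in cell #1 is certain given tails, and has probability 1/100 given heads, since heads would mean there are 100 cells you might be in):

$$\Pr(\text{tails}\mid\text{cell \#1}) \;=\; \frac{\Pr(\text{cell \#1}\mid\text{tails})\,\Pr(\text{tails})}{\Pr(\text{cell \#1}\mid\text{tails})\,\Pr(\text{tails}) + \Pr(\text{cell \#1}\mid\text{heads})\,\Pr(\text{heads})} \;=\; \frac{1\cdot\tfrac{1}{2}}{1\cdot\tfrac{1}{2} + \tfrac{1}{100}\cdot\tfrac{1}{2}} \;=\; \frac{100}{101}.$$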
Now consider a second model that sort of reasons in the opposite direction:
Model 2. Since you know the coin toss to have been fair, and you haven’t got any other
relevant information, your credence of tails at stage (b) should be 1/2. Since we know the
conditional credences (same as in model 1) we can infer, via Bayes’s theorem, what your
credence of tails should be at stage (a), and the result is that your prior credence of tails must
equal 1/101. Answer: At stage (a) your credence of tails should be 1/101 and at stage (b) it
should be 1/2.
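The inversion that model 2 performs can likewise be made explicit (a sketch, using the same conditional credences as in model 1 and writing p for the prior credence of tails): requiring the posterior credence of tails to come out at 1/2 gives

$$\frac{1\cdot p}{1\cdot p + \tfrac{1}{100}\,(1-p)} \;=\; \frac{1}{2} \quad\Longrightarrow\quad p \;=\; \tfrac{1}{100}\,(1-p) \quad\Longrightarrow\quad p \;=\; \frac{1}{101}.$$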
Finally, we can consider a model that denies that you gain any relevant information from
finding that you are in cell #1:
Model 3. Neither at stage (a) nor at stage (b) do you have any relevant information as to
how the coin fell. Thus in both instances, your credence of tails should be 1/2. Answer: At stage
(a) your credence of tails should be 1/2 and at stage (b) it should be 1/2.
5. Let us take a critical look at these three models. We shall be egalitarian and present one
problem for each of them.
We begin with model 3. The challenge for this model is that it seems to suffer from
incoherency. For it is easy to see (simply by inspecting Bayes’s theorem) that if we want to end
up with the posterior probability of tails being 1/2, and both heads and tails have a 50% prior
probability, then the conditional probability of being in cell #1 must be the same on tails as it is
on heads. But at stage (a) you know with certainty that if the coin fell tails then you are in cell
#1; so this conditional probability must equal 1. In order for model 3 to be coherent, you would
therefore have to set your conditional probability of being in cell #1 given heads equal to 1 as
well. That means you would already know with certainty at stage (a) that you are in cell #1.
Which is simply not the case! Hence we must reject model 3.
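The inspection of Bayes's theorem appealed to in this argument can be spelled out (a brief sketch): with both priors equal to 1/2, the posterior credence of tails equals 1/2 exactly when the two likelihoods are equal,

$$\Pr(\text{tails}\mid\text{cell \#1}) \;=\; \frac{1}{2} \quad\Longleftrightarrow\quad \Pr(\text{cell \#1}\mid\text{tails}) \;=\; \Pr(\text{cell \#1}\mid\text{heads}),$$

and since the likelihood on tails is 1, coherence would indeed force the likelihood on heads to be 1 as well.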
Readers who are familiar with David Lewis’s Principal Principle2
may wonder if it is not
the case that model 3 is firmly based on this principle, so that rejecting model 3 would mean
rejecting the Principal Principle as well. That is not so. While this is not the place to delve into
the details of the debates about the connection between objective chance and rational credence,
suffice it to say that the Principal Principle does not state that you should always set your
credence equal to the corresponding objective chance if you know it. Instead, it says that you
should do this unless you have other relevant information that needs to be taken into account.3
There is some controversy about how to specify which sorts of such additional information will
modify reasonable credence when the objective chance is known, and which sorts of additional
information leave the identity intact. But there is wide agreement that the proviso is needed.
Now, in Incubator you do have such extra relevant information that you need to take into
account, and model 3 fails to do that. The extra information is that, at stage (b), you have
discovered that you were in cell #1. This information is relevant because it bears probabilistically
on whether the coin fell heads or tails; or so, at least, the above argument seems to show.
2 Lewis (1986).
3 See, for example, Hall (1994), Lewis (1994), and Thau (1994).
6. Model 1 and model 2 are both all right as far as probabilistic coherence goes. Choosing
between them would therefore be a matter of selecting the most plausible or intuitively appealing
prior credence function.
Model 2 says that at stage (a) you should assign a credence of 1/101 to the coin having landed
tails. That is, just knowing about the setup but having no direct evidence about the outcome of
the toss, you should be virtually certain that the coin fell in such a way as to create 99 additional
observers. This amounts to having an a priori bias towards the world containing many observers.
Modifying the thought experiment by using different numbers, it can be shown that in order for
the probabilities always to work out the way model 2 requires, you would have to subscribe to
the principle that, other things being equal, a hypothesis that implies that there are 2N observers
should be assigned twice the credence of a hypothesis that implies that there are only N
observers. This principle is known as the Self-Indication Assumption (SIA).4
My view is that it is
untenable. To see why, consider the following example (which seems to be closely analogous to
Incubator):
The Presumptuous Philosopher. It is the year 2100 and physicists have narrowed down the
search for a theory of everything to only two remaining plausible candidate theories: T1 and T2
(using considerations from super-duper symmetry). According to T1 the world is very, very big
but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the
world is very, very, very big but finite and there are a trillion trillion trillion observers. The
super-duper symmetry considerations are indifferent between these two theories. Physicists are
preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show you that T2 is about a trillion times more likely to be true than T1!" (whereupon the philosopher explains model 2 and appeals to SIA).

4 See Bostrom (2002a). Principles or forms of inferences that are similar to SIA have also been discussed by Dieks (1992), Smith (1994), Leslie (1996), Oliver and Korb (1997), Bartha and Hitchcock (1999) and (2000), and Olum (2002).
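The philosopher's figure is easy to check (a sketch, reading SIA as weighting each candidate theory by the number of observers it implies, with the super-duper symmetry considerations themselves indifferent):

$$\frac{\Pr(T_2)}{\Pr(T_1)} \;=\; \frac{10^{36}}{10^{24}} \;=\; 10^{12},$$

i.e., a trillion trillion trillion observers against a trillion trillion, which is why T2 comes out about a trillion times more probable than T1.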
Somehow one suspects that the Nobel Prize committee would be reluctant to award the
philosopher the big one for this contribution. Yet it is hard to see what the relevant difference is
between this case and Incubator. If there is no relevant difference, and we are not prepared to
accept the argument of the presumptuous philosopher, then we are not justified in using model 2
in Incubator either.
7. What about model 1, then? In this model, after finding that you are in cell #1, you should set
your credence of tails equal to 100/101. In other words, you should be almost certain that the
world does not contain the extra 99 observers. This might seem like the least unacceptable of the
alternatives and therefore the one we ought to go for. However, before we uncork the bottle of
champagne, ponder what this option entails.
Serpent’s Advice. Eve and Adam, the first two humans, knew that if they gratified their flesh,
Eve might bear a child, and that if she did, they would both be expelled from Eden and go on to
spawn billions of progeny that would fill the Earth with misery. One day a serpent approached
them and spoke thus: “Pssst! If you hold each other, then either Eve will have a child or she
won’t. If she has a child, you will have been among the first two out of billions of people. Your
conditional probability of having such early positions in the human species given this hypothesis
is extremely small. If, on the other hand, Eve does not become pregnant then the conditional
probability, given this, of you being among the first two humans is equal to one. By Bayes’s