Carroll’s project is very similar to David Albert’s in Time and Chance: he’s trying to locate the arrow of time in thermodynamics, claiming that t1 is in the future of t0 just in case t1 has higher entropy than t0 and there is a steady entropy increase between the two. Entropy, roughly, measures chaos—a box in which particles are all spread out has more entropy than one where particles are all bunched up in a corner, my office has more entropy than Meghan’s, and Jackson Pollock’s paintings have more entropy than Piet Mondrian’s.
Ludwig Boltzmann tried to ground this entropic arrow by showing that the VAST majority of configurations of tiny particles that could make up the middle-sized dry goods around you (microstates compatible with the current macrostate) evolve to higher-entropy states. So we should take it as highly probable that the next macrostate will have higher entropy than this one.
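To see why the counting comes out so lopsided, here's a toy sketch (my illustration, not Boltzmann's actual setup): take N labeled particles, each sitting in the left or right half of a box, let the macrostate be just the number of particles on the left, and count microstates. The choice of N = 100 and of the 40–60 window are arbitrary.

```python
# Toy microstate counting (an illustration, not Boltzmann's derivation).
# Macrostate: "k particles in the left half". Its microstate count is
# C(N, k), and its Boltzmann entropy is log C(N, k).
from math import comb, log

N = 100  # arbitrary toy particle number

# Fraction of ALL 2^N microstates whose macrostate is "roughly even"
# (between 40 and 60 particles on the left).
even_ish = sum(comb(N, k) for k in range(40, 61))
print(even_ish / 2**N)  # roughly 0.96: almost all microstates are near-even

# Entropy of a bunched-up vs. a spread-out macrostate.
print(log(comb(N, 5)))   # low entropy: nearly everything in one corner
print(log(comb(N, 50)))  # high entropy: evenly spread
```

And this is for a mere 100 particles; at realistic particle numbers (~10^23) the dominance of high-entropy macrostates becomes astronomically more extreme.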
But that didn’t quite work; as Albert points out in Time and Chance, the underlying physics is time-reversal symmetric: it works the same way in both directions. So the argument runs backwards too: it’s highly probable that the previous macrostate had higher entropy. This means it’s overwhelmingly probable that everything you see now randomly coalesced from a state of thermal equilibrium (total chaos). In fact, since smaller fluctuations are always more probable than larger ones, it’s most probable that the only thing that coalesced out of this super-high-entropy state is your brain. Such a brain is called a Boltzmann brain.
The problem with Boltzmann’s original understanding of statistical mechanical probabilities, then, is this: the theory assigns a very low objective probability to the proposition that your evidence for the theory is veridical. This fact isn’t an internal inconsistency in the theory; it doesn’t even strictly imply that you don’t have evidence for it (only that it’s overwhelmingly unlikely for you to have evidence). But when the theory is combined with a plausible constraint on rationality, the Principal Principle (which tells you to set your credences equal to the objective probabilities), DISASTER! You can’t simultaneously have high credence in the theory and in your evidence for it.
This is what David Albert calls ‘cognitive instability’: as soon as you accept the theory, rationality compels you to doubt your evidence for it. As Albert and Tim Maudlin separately put it, there’s an almost Kantian transcendental argument against this theory: in order to rationally accept a theory, the theory has to have a place in it for you, the rational agent. Boltzmann’s theory doesn’t, so we can’t accept it.
It’s important to note that cognitive instability arises from the combination of (1) accepting Boltzmann’s probability measure as giving the true objective probabilities and (2) obeying the Principal Principle. Since most everyone accepts PP as a genuine constraint on rationality, accepting (1) without obeying PP can’t, by anybody’s lights, amount to rationally accepting the theory. So we must reject (1).
Solution: Albert adds a Past Hypothesis (PAST), which says roughly that the universe started in a very low-entropy state (much lower than this one). So the objective probability that this is the lowest-entropy state of the universe is 0—meaning we can’t be Boltzmann brains. As a bonus, we get an explanation of the direction of time, why ice cubes melt, why we can cause things to happen in the future and not the past, and how we have records of the past and not the future: all these things get a very high objective probability.
But (Sean Carroll argues) this moves too fast: just adding the past hypothesis allows the universe to eventually reach thermal equilibrium. Once that happens (in about 10^100 years) there will be an extremely long period (~10^10^120 years) during which random fluctuations bring about all sorts of things, including our old enemies, Boltzmann brains. And there will be a lot of them. And some of them will have the same experiences we do.
Now, here’s a plausible constraint on rationality for self-locating belief:
DE SE INDIFFERENCE (DSI): Proportion your credence equally amongst beings that (a) have the same experiences as you and (b) you know to live in the actual world. (For a good argument for DSI, see Adam Elga, ‘Defeating Dr. Evil with Self-Locating Belief,’ Philosophy and Phenomenological Research, 2004.)
It’s important to note that DSI doesn’t lead to run-of-the-mill brain-in-vat skepticism unless you know that there are a lot of brains in vats in the real world with experiences just like yours--in which case such skepticism is fully justified! DSI is a very weak indifference principle.
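Here's a minimal sketch of how DSI-style credences play out in these two cases (my toy formalization, not Elga's; the function name and the counts are made up for illustration):

```python
# Toy DSI calculation: if you know the actual world contains `ordinary`
# embodied observers and `envatted` brains (or Boltzmann brains) with
# experiences just like yours, DSI says to split your credence evenly
# among all of them.
def credence_ordinary(ordinary: int, envatted: int) -> float:
    return ordinary / (ordinary + envatted)

# Ordinary life: you reasonably believe there are no brains in vats with
# your experiences, so skepticism never gets going.
print(credence_ordinary(1, 0))  # 1.0

# A post-equilibrium universe: Boltzmann brains with your experiences
# vastly outnumber embodied duplicates of you (count is illustrative).
print(credence_ordinary(1, 10**120))  # vanishingly small
```

With no known duplicates your credence that you're an ordinary observer stays at 1; skepticism only kicks in when the duplicates you know to exist vastly outnumber beings like you.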
But now we’re in the same position as we were with Boltzmann’s probability measure: given (1*) David Albert’s theory (with some input from Carroll) and (2*) DSI, a reasonable constraint on our credences, we can’t simultaneously have high credence that our theory is true and that we have collected evidence for it. So (1*) & (2*) are together cognitively unstable.
There are a bunch of solutions to this. Sean Carroll’s is to reject PAST and look for a set of dynamical laws on which the death of a universe produces WAY more baby universes with people like us than Boltzmann brains: if baby universes vastly outnumber Boltzmann brains, we can have high credence that we’re in a baby universe and not a pocket of randomly coalescing particles surrounded by thermal equilibrium. For details, see his book. Potential problem: it looks like there will be infinitely many of both the baby universes and the Boltzmann brains, so motivating a probability measure that will get you the right result will be tough.
Tim Maudlin (non-seriously, I think) suggested a different solution: to PAST we add a future hypothesis (FUTURE), which says that the universe ends before too many Boltzmann brains can coalesce. So there aren’t enough of them that we should believe (by DSI) that we’re one.
But what I think Tim really believes (and I think Albert and Barry Loewer agree) is that we should reject DSI. In conversation they motivated this move in one of three ways: (I) if DSI is true, then we can’t take ourselves to rationally accept David’s theory, so DSI is false (for the sort of Kantian transcendental reasons given earlier). (II) DSI obviously leads to old-fashioned BIV skepticism, which is a problem for everybody, and can be rejected without comment. (III) Indifference principles suck.
I don’t think that any of these are good reasons to reject DSI. Responses:
(I) Pointing out that we can’t accept DSI and David’s theory together while taking ourselves to be rational illuminates the force of the objection rather than dispelling it. Whether rejecting DSI can lead to cognitive stability depends on whether DSI really is a constraint on rationality. So, to motivate this response, we need some extra reason to reject DSI:
(II) We are entitled to reject ordinary BIV skepticism because we’re entitled to believe that there aren’t any brains in vats! But on Albert’s theory we have good reason to believe that there are Boltzmann brains. So we have no good reason to think we’re special, and not one of them.
(III) Lots of indifference principles are bad, and big general ones meant to cover lots of cases are almost all bad. This isn’t a big general one, though, and we should accept some restricted indifference principles (like the one that tells us to have a credence of ~.0192, i.e. 1/52, that we’ll draw the ace of spades from a full deck). We should accept at least those sufficiently restricted indifference principles that have good arguments for them and that don’t have intuitive counterexamples. This is one.
Finally, it’s not clear whether Sean’s theory is devoid of problems: in his response, Albert argued that Carroll still needed PAST. I’m not convinced by Albert’s argument, but even if it works, Albert doesn’t immediately get a pass; at best it looks like the right view will be some combination of Albert’s and Carroll’s.
PS: Much of this post concerns discussions had during the conference; I’d like to apologize to everybody whose ideas I didn’t cite properly, as well as those I misrepresented. Hopefully we’ll get some video up soon so that everyone can see what people really said.