Here's the plan: (A) run a Moorean argument against Subjective Bayesianism; (B) note that Subjective Bayesians don't have a good reply; (C) identify some motivations for being a Subjective Bayesian; (D) note that these motivations are largely meta-normative, and that they are satisfied if you're an expressivist Objective Bayesian; (E) conclude that since expressivist Objective Bayesians can avoid the Moorean argument and satisfy the motivations for Subjective Bayesianism, there is reason for Subjective Bayesians to take this kind of view seriously.

This is a toned down version of a rant to which some of you have already been subjected.

A: Moorean Argument
According to Subjective Bayesians, the following are the only norms of rational belief:

  1. Have a Prior: Have a prior that satisfies the probability axioms.

  2. Conditionalize: At all times, let your credences be the result of conditionalizing your prior probability distribution on your total evidence.

  3. Maybe: the prior should satisfy the Principal Principle.

  4. Maybe: the prior should be non-dogmatic. (Roughly, unless it is analytic that p, don't have a prior probability of 1 in p. You have to say something else about continuous random variables.)
Objective Bayesians accept all of these constraints but add further ones that rule out certain prior probability functions. Some Objective Bayesians hold that there is a unique prior that is rationally permissible.
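To fix ideas, here is a minimal sketch (a toy example of my own, not anything either camp officially endorses) of what requirements 1 and 2 come to for a finite set of worlds: a prior is a normalized assignment of non-negative numbers to worlds, and conditionalizing on your total evidence means zeroing out the worlds your evidence rules out and renormalizing.

```python
# Toy illustration of requirements 1 and 2 on a finite set of "worlds".
# The particular prior below is an arbitrary choice for illustration.

def conditionalize(prior, evidence):
    """Return the result of conditionalizing `prior` on `evidence`.

    `prior` maps each world to its prior probability (non-negative, summing
    to 1, per requirement 1); `evidence` is the set of worlds compatible
    with your total evidence.
    """
    p_evidence = sum(p for w, p in prior.items() if w in evidence)
    if p_evidence == 0:
        raise ValueError("cannot conditionalize on evidence with prior probability 0")
    return {w: (p / p_evidence if w in evidence else 0.0) for w, p in prior.items()}

# A perfectly coherent prior that is nearly certain of a zombie strike:
prior = {"zombies_tomorrow": 0.98, "no_zombies_tomorrow": 0.02}

# Ordinary evidence doesn't rule either world out, so conditionalizing
# leaves the near-certainty untouched.
print(conditionalize(prior, {"zombies_tomorrow", "no_zombies_tomorrow"}))
# {'zombies_tomorrow': 0.98, 'no_zombies_tomorrow': 0.02}
```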

If Subjective Bayesianism is true, then it can be perfectly rational for someone with exactly my evidence to be nearly certain of the following claim:
ZOMBIE: flesh-eating zombies are about to descend from the sky and devour us all.*
It is irrational to be anywhere near certain of ZOMBIE when you have my evidence, so Subjective Bayesianism is false. Nothing the Subjective Bayesian throws at me will be more certain than my main premise, so my argument succeeds.

B: Subjective Bayesian Replies
Some Subjective Bayesians attempt to rebut this kind of argument by appealing to “washing out” theorems. These theorems are of the form “for any n agents whose priors satisfy certain fairly weak conditions, if those agents have a sufficiently long sequence of common observations of a certain kind, then their posterior probabilities will converge”. For instance, suppose two people draw balls from an urn with replacement, both satisfy 1-4, both regard the sequence of draws as exchangeable, and both conditionalize on the result of each draw. Then, if they see enough balls drawn from the urn, their estimates of the distribution of colors of balls in the urn will converge.
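Here is a quick simulation of the urn case (my own toy sketch of the phenomenon the theorems describe, with arbitrarily chosen Beta priors, not a proof of any theorem): two agents who start with quite different exchangeable priors end up with similar estimates of the proportion of red balls once they have seen enough draws.

```python
import random

# Washing out in the urn case. With a Beta(a, b) prior over the proportion of
# red balls and exchangeable draws, conditionalizing on r reds in n draws gives
# a Beta(a + r, b + n - r) posterior, whose mean (a + r) / (a + b + n) is the
# agent's estimate of the red proportion. The two priors below are arbitrary.

random.seed(0)
true_red = 0.3                          # actual proportion of red balls
priors = {"A": (1, 1), "B": (50, 1)}    # (a, b) hyperparameters for each agent

reds, n = 0, 0
for checkpoint in (10, 100, 10_000):
    while n < checkpoint:
        reds += random.random() < true_red
        n += 1
    estimates = {name: (a + reds) / (a + b + n) for name, (a, b) in priors.items()}
    print(n, {name: round(e, 3) for name, e in estimates.items()})

# Typical output: after 10 draws the two estimates still reflect the priors,
# but by 10,000 draws both sit close to 0.3.
```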

The first point is that while such theorems might help in some other contexts, they won't help with ZOMBIE. Maybe if someone regarded possible zombie strikes as exchangeable, he'd eventually agree that a zombie strike is unlikely in the near future. But (i) this isn't required by Subjective Bayesianism, and (ii) until the evidence accumulates and he stops expecting a zombie strike, it would still be irrational for him to be so damn confident in ZOMBIE.

A more entertaining reply to the use of washing out theorems comes from Carnap (though probably others made the same point before him). Although it is true that the opinions of two agents in the urn scenario will converge, the following is also true. For any credence x (0 < x < 1) that the next ball will be red (call this proposition R), for any finite sequence of observations S, there is an exchangeable prior Pr satisfying (1-4) such that Pr(R|S) = x. Put colorfully, if you lined up observers with all possible priors, no matter how long a finite sequence of red draws you had, there'd always be some guy who was 99% confident that the next ball wouldn't be red (and each such guy would be fully rational by Subjective Bayesian standards).
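Carnap's point can be made concrete with Beta priors (my rendering of the point, not Carnap's own formalism): for any target credence and any finite run of red draws, you can solve for hyperparameters that deliver exactly that credence in the next draw being red. The catch, of course, is that the required prior depends on how long the run is, which is why this is compatible with the washing-out theorems.

```python
def stubborn_prior(num_red_draws, target_credence, a=1.0):
    """Return (a, b) for a Beta prior whose credence that the *next* ball is
    red, after conditionalizing on `num_red_draws` consecutive red draws, is
    exactly `target_credence`.

    With a Beta(a, b) prior, that credence is (a + r) / (a + b + r) after
    r consecutive reds, so we solve for b.
    """
    b = (a + num_red_draws) * (1 - target_credence) / target_credence
    return a, b

# A coherent, non-dogmatic, exchangeable prior that is still only 1% confident
# the next ball is red after a million consecutive red draws:
a, b = stubborn_prior(1_000_000, 0.01)
r = 1_000_000
print((a + r) / (a + b + r))  # 0.01 (up to floating-point error)
```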

Long story short, I don't think that the washing out stuff mitigates the obvious weirdness of believing a lot of weird stuff, including ZOMBIE.

C: Motivation for Subjective Bayesianism
Why have people been so attracted to a view that entails things that are so obviously and irredeemably crazy? I take it there is nothing too weird (at least as an approximation) about having credences that satisfy the probability axioms and updating by conditionalization. The weirdness is entirely due to the paucity of constraints on the priors. So our question is: why go for the Subjective part of Subjective Bayesianism?

Part of the explanation of why some people have gone for this permissive view about priors is that they just mean something especially weighty by words like “irrational”, “unreasonable”, and “unjustified”. They consider credences irrational only if they are incoherent, where incoherence is the credence analogue of having inconsistent beliefs. Thus, we had people like Carnap at great pains to argue that the requirement to use his favorite prior was a lot like the requirement to use classical logic.

The idea that these normative notions were especially loaded, so that the two groups were, to some extent, talking past each other, might help explain the disagreement. But many (most?) Subjective Bayesians, I take it, think that there is no other interesting way of using these words. So this probably isn't the end of the story.

Subjective Bayesians are, I believe, more significantly motivated by two strands of meta-normative considerations. The first strand is metaphysical: what could make some priors, but not others, rationally permissible? Indifference principles don't work, and the fact that some priors, but not others, are rationally permissible needs explanation. Something has to make it true, and there's no decently natural property that could play the role.

The second strand is epistemic: how could you know that some priors, rather than others, are rationally permissible? Someone who revised his prior in light of his observations, over and above conditionalizing as requirement 2 demands, would be Dutch-book-able. So which prior you select can't depend on observation. So if it is knowable that some priors are just plain wrong, it must be a priori that they are just plain wrong. But it's not analytic that certain priors are bad (or anything like that), so it is mysterious how anyone could know whether a given prior was acceptable.

These strands interact with each other: the special property had by the good priors has to be knowable a priori, on pain of us not knowing what it is. So it couldn't be something deeply external. (It couldn't be, for instance, the probability distribution from statistical mechanics, conditional on the past hypothesis.)

That's a bit of a cartoon version of the motivations, but I think it captures the gist of it.

D: How to Meet the Motivations by Going Expressivist
These objections bear a striking resemblance to Mackie's objections to moral realism. As with Mackie's objections, there are a lot of things you can say here if you don't want to accept the Subjective Bayesian's conclusions. But those who find themselves pushed to Subjective Bayesianism upon considering these issues should take note: an expressivist meta-epistemology with an Objective Bayesian first-order epistemology could accommodate these worries, without endorsing the rational permissibility of crazy beliefs about flesh-eating zombies. On the expressivist line, there's no account of it being true or false that a certain prior is good. There is only an account of what you're doing when you say “That's an irrational prior”, and such. Roughly, you're expressing your commitment to a system of norms that rejects using that prior for belief updating. Thus, you skirt the metaphysical issue about what could make one prior better than another.

What about the epistemic objection? Consider this example:
JOHN: John uses the kind of prior that most of us, more or less, endorse. He adopted it when he was a young ideal Bayesian and has conditionalized ever since. John reflects on his use of this prior, considering its justifiability. Since John endorses a system of norms that uses this prior, he says, “I'm glad I used a good prior.” When he sees another guy using the ZOMBIE prior, he adds, “It sure is crazy to be worried about that.”
I take it that it is not an essential part of this story that John, behind the scenes, engaged in some objectionable variety of a priori reflection. So there's no problem here either.**

E: Conclusion
If you're worried about ZOMBIE and you're motivated by the standard arguments for Subjective Bayesianism, Objective Bayesian expressivism might be worth looking into. The view will inherit some of the problems of expressivism, but I doubt they will be as bad as endorsing wild beliefs about ZOMBIE.

I think you can run a similar argument against most versions of Humeanism about reasons.

Disclaimer: I don't mean to implicate that I endorse an expressivist version of Objective Bayesianism.

*Technically, you could get out of this by insisting that it is part of my evidence that zombies are not going to fall from the sky and devour us all. You might think, for instance, that we know that this isn't going to happen, and we therefore ought to have conditionalized on the claim that it won't happen. This sort of reply is short-sighted though. Just insert some other example of something that clearly merits little credence, given our present evidence, but is not entailed by our present evidence. (E.g., we'll all die at the hands of nuclear terrorists within the next two years.) For rhetorical purposes, I will stick to talking about flesh-eating zombies.

**Things may be a bit trickier than this. It isn't especially clear, even if expressivism is true, how you're supposed to get your normative beliefs. However, even if it is true that expressivists like John who go around using good priors are somehow engaging in an objectionable variety of a priori reasoning, which is really crazier to believe at the end of the day: (i) that it is rationally permissible to use a priori reasoning to select a prior, or (ii) that it is rationally permissible for someone with my evidence to expect flesh-eating zombies to fall from the sky and devour us all? At any rate, appeal to expressivism helps here at least as much as it helps in meta-ethical contexts.

--Nick Beckstead

 
This weekend we had a pretty exciting philosophy of physics conference here in the big red R. Sean Carroll was there.  Oh, you don’t know who Sean Carroll is?  Well, he’s famous, he has a blog, and he just wrote a book, which I will read as soon as a friendly man in a brown uniform drops it at my door.   [Warning: this post may be long]

Carroll’s project is very similar to David Albert’s in Time and Chance: he’s trying to locate the arrow of time in thermodynamics, claiming that t1 is in the future of t0 just in case t1 has higher entropy than t0 and there is a steady entropy increase between the two. Entropy, roughly, measures disorder: a box in which particles are all spread out has more entropy than one where particles are all bunched up in a corner, my office has more entropy than Meghan’s, and Jackson Pollock’s paintings have more entropy than Piet Mondrian’s.
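For a toy sense of the “more ways to be spread out” idea, here is a sketch (my own illustration, not Carroll’s or Albert’s model) using a Boltzmann-style count: entropy goes by the logarithm of the number of microstates that realize a macrostate, and there are vastly more ways to scatter particles over a whole box than to pack them into one corner.

```python
import math

# Boltzmann-style toy count: S = ln(W), where W is the number of microstates
# realizing a macrostate. For N distinguishable, non-interacting particles
# each free to occupy any of `cells` sites, W = cells ** N, so S = N * ln(cells).
# The particle and cell counts below are arbitrary.

def entropy(num_particles, cells):
    return num_particles * math.log(cells)

N = 100
spread_out = entropy(N, cells=1000)   # particles allowed anywhere in the box
bunched_up = entropy(N, cells=10)     # particles confined to one corner
print(round(spread_out, 1), round(bunched_up, 1))  # 690.8 230.3
```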

 
Suppose that you're offered the following bet: If a coin lands heads you will win $1.50. If it lands tails you will lose $1. Your credence in heads is .5. (Assume that the fact that you're being offered this bet gives you no information about the odds of heads or tails.) Is it rational to accept the bet?

According to expected utility theory, it is, and this seems right (setting aside risk aversion). What does a knowledge-action account advise? Well, that depends. Consider two possibilities:

1) You are certain that the coin is fair, so you know that the objective chance of heads is .5. You can use the fact about the objective chance as a reason to take the bet.

2) You have no idea whatsoever about the bias of the coin. Using the principle of indifference, you assign credence .5 to heads. What knowledge do you have that could make it rational to accept the bet? (Hawthorne and Stanley don't want to allow knowledge of subjective credences.) If there isn't any relevant knowledge, then it's not rational to accept the bet.

Here the knowledge-action account seems to make a distinction between the two cases that expected utility theory does not. This is worrisome for the knowledge-action account.
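To make the contrast explicit, here is the expected utility arithmetic (a sketch; dollar amounts stand in for utilities, and risk aversion is set aside as above). The calculation only consults your credence of .5, so it comes out exactly the same in case 1 and case 2, which is why expected utility theory does not distinguish them.

```python
# Expected utility of the coin bet, treating dollars as utilities.
credence_heads = 0.5           # the same in case 1 (known chance) and case 2 (indifference)
win, lose = 1.50, -1.00

eu_accept = credence_heads * win + (1 - credence_heads) * lose
eu_decline = 0.0

print(eu_accept)               # 0.25
print(eu_accept > eu_decline)  # True: accept, in both cases alike
```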

Note: Expected utility theory will distinguish between the two cases when you're allowed to acquire additional evidence first. But that's not an option here.

-Mary
 
Why is type-type physicalism so focused on identifying mental states with brain states?

Of course the brain states will be part of the picture, but if one of the many kinds of externalism or disjunctivism is correct, the bounds of the mental will extend far beyond the bounds of the brain.

And this shouldn’t be scary or worrying for physicalism, because physics is most likely blind to the boundaries of human bodies. If, say, knowing that p has different causal efficacy from merely believing that p, then we should expect that the underlying causal story is not going to be merely a story of impingements on and effects of the brain. The type-type physicalist project should be updated to include these insights.

L. Miracchi

 
A defender of luminosity might object to Williamson’s argument in the following way: 

I can concede that being cold is not luminous. That is not the kind of state I mean when I say that one’s occurrent states are luminous. Rather, I mean something like feeling like this is luminous. While both my feeling like this and my feeling like this* support my belief that I am feeling cold, only one of them can provide me with the knowledge that I am feeling like this. The idea is that, by merely being in the state of feeling like this, I am thereby in a position to know that I am feeling like this. No matter how similar feeling like this is to feeling like this*, feeling like this cannot play the sort of role for the belief that I am feeling like this* that it does for the belief that I am feeling like this.

Note that the claim that I am feeling like this is not trivial, for feeling like this is feeling a certain way, and it could be misrepresented. As an analogy, think about colors. If I claim that that box is that color, then I could either be making the trivial claim that it has whatever color it has, or the claim that it has the particular color that I perceive it as having (McDowell, Mind and World). Likewise, the claim is not supposed to be that I am feeling however I am feeling, but that I am feeling a particular way. I could misrepresent how I am feeling (e.g., I am feeling jealous of S but I form the belief that I am feeling angry at some wrong S has committed). The idea is that there is a particular color (feeling) picked out by the demonstrative, and it is that color (feeling) that is evaluated against the actual color (feeling).
(Thanks to T. Donaldson for pushing me to clarify this point.)
 
However, one might think that in the cases where I do misrepresent how I am feeling, introspection is thwarted by other factors. When introspection works properly, my belief that I am feeling like this cannot be wrong. Note that the analogy between representing one’s qualitative states and representing the colors of objects fails in an important respect. When I form the belief that that box is that color, I need to represent my qualitative state, my experience, and then on top of that my qualitative state is evaluated against a particular color. (I have to get things right twice over.) With knowing what I am feeling, however, I am only trying to represent the qualitative state. Both my ability to know what colors objects are and my ability to know what I am feeling rely on my ability to reflect upon my qualitative states. However, in contrast with the case of colors, my ability to know what I am feeling is merely this ability. It is merely the ability to reflect on my qualitative states that is employed here, not a further ability to accurately perceive further states.

Now Williamson’s argument against the reliability of introspection because of the similarity of the bases does not go through, for being in a certain qualitative state cannot be one’s basis (in the sense described) for forming the belief that one is in a qualitatively different state. On this view, the belief-forming process is still reliable, but in a different way than Williamson proposes. What is reliable is the following capacity: when I am feeling like thisₙ, and I reflect on how I am feeling, I can thereby come to believe truly that I am feeling like thisₙ. This concedes the requirement that knowledge be formed in a reliable way but avoids the problems Williamson poses for claiming that beliefs formed by introspection on one’s occurrent states are formed in a reliable way. (If there is any vagueness to “this cold”, it will not challenge the idea that our introspective capacities are reliable.)

One caveat: such a defense of luminosity is not committed to the idea that there could be no general concepts modifying the demonstrative element in the experience and in the piece of knowledge. For example, if I am feeling this hot, the knowledge that I can thereby non-observationally acquire is that I am feeling this hot. That is, the state need not be merely demonstratively picked out. My being this hot does not support my belief that I am this hot*, no matter how close feeling this hot is to feeling this hot*.

Thoughts?

L. Miracchi
 
The only truth I have to announce is that I'm puzzled by something. Here is a familiar case.

Smith buys a ticket in a lottery with an enormous payoff. The odds are vanishingly small that Smith's ticket is a winner. Smith knows both the odds and that the payoff is enormous. If her ticket is a loser, it's a piece of trash. Smith realizes this, so she's considering recycling it. As a matter of fact, the ticket *is* a loser, and Smith believes that it is. Smith's only evidence that it's a loser, however, is her knowledge of the odds.

One of the following must be true, but which one? (In case it isn't obvious that one of these must be true, see "PROOF" below.)

(1) Smith doesn't know that the odds are vanishingly small that the ticket is a winner.

(2) She does know that the odds are vanishingly small that the ticket is a winner, yet she's not justified in proportioning her confidence to the odds.

(3) She's justified in proportioning her confidence to the odds (and thus being virtually certain) that the ticket won't win, yet she's not justified in believing that the ticket won't win.

(4) She's justified in believing that the ticket won't win, yet she doesn’t know that the ticket won't win.

(5) She does know that the ticket won't win, yet it's not acceptable for her to reason that, since the ticket won't win, she should recycle it.

(6) It's acceptable for her to reason that, since the ticket won't win, she should recycle it, yet it's not the case that she should recycle it.

(7) She should recycle it. 

As I noted above, one of (1) through (7) must be true. But they are all hard to stomach. Very quickly, (1) through (7) have (at least) the following problems.

The scenario stipulates that (1) isn't the case, so, without reason to think the scenario is incoherent, we can't accept (1). Given that Smith wants to win the lottery, given that she paid good money for the ticket, and given that it's very little trouble for her to keep the ticket, it seems clear that (7) is false. If Smith had overriding reasons for not recycling the ticket, then (6) would seem true. But we can stipulate that she doesn't, in which case (6) looks false. And the same considerations apply to (5). It looks true if additional factors prevent Smith from reasoning that way, but we can just stipulate that they don't.

Of course, Hawthorne and others would accept (4). But (4) seems unstable. Ex hypothesi, the ticket won't win and Smith isn't Gettiered with respect to her belief that it won't win. Thus, our grounds for denying that Smith knows seem to equal our grounds for denying that Smith is justified. (And speaking for myself, insofar as it's intuitive that Smith doesn't know, it's also intuitive that she's not justified.) So, it looks like anyone who wants to deny that Smith knows is pushed toward (2) or (3). But (2) seems false. Why on earth shouldn't Smith proportion her confidence to the odds? After all, ex hypothesi, she knows the odds and the odds exhaust her evidence.

This leaves us with (3), which is also hard to swallow. Perhaps justified belief equals justified certainty. But then, which of our beliefs *is* justified? If we accept (3) and maintain that justified belief equals justified certainty, it will be tricky to avoid skepticism. Perhaps justified belief equals justified confidence above some threshold *below* certainty. But then, how can (3) be true while we have a significant stock of justified beliefs? Again, given that we accept (3), it will be tricky to avoid skepticism. Perhaps justified belief cannot be identified with any level of justified confidence, then. In this case, what does belief amount to, how is it related to confidence, and what explains how Smith could be justified in being virtually certain that the ticket won't win, yet not justified in *believing* that the ticket won't win?

I'm not sure what to say about these options, except that none of them looks particularly good. What do people think? If forced to pick among them, which should we pick?

Blake

PROOF: Here’s the proof I promised above.
 
Let 'p' through 'u' name the propositions in (1) through (7) as follows.

p = Smith knows that the odds are vanishingly small that the ticket is a winner.
q = Smith is justified in proportioning her confidence to the odds.
r = Smith is justified in believing that the ticket won't win.
s = Smith knows that the ticket won't win.
t = It's acceptable for Smith to reason that, since the ticket won't win, she should recycle it.
u = Smith should recycle the ticket.

Now consider the following presentation of (1) through (7).

(1) ~p
(2) p & ~q
(3) q & ~r
(4) r & ~s
(5) s & ~t
(6) t & ~u
(7) u

Either p is true or it's false. Suppose the latter. Then (1) is true. Suppose the former. Then either (2) is true or q is. Suppose q is. Then either (3) is true or r is. Suppose r is. Then either (4) is true or s is. So suppose s is true. Then either (5) is true or t is true. Suppose it’s t. Then either (6) is true or u is. But if u is true, then so is (7). So, one of (1) through (7) must be true.
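For the suspicious, here is a brute-force check of the same fact (a quick sketch, nothing beyond the proof above): running through all truth-value assignments to p through u confirms that at least one of (1) through (7) comes out true on each.

```python
from itertools import product

# Verify that the disjunction of (1)-(7) is a tautology: on every assignment
# of truth values to p, q, r, s, t, u, at least one of the seven claims holds.
for p, q, r, s, t, u in product([True, False], repeat=6):
    claims = [
        not p,         # (1)
        p and not q,   # (2)
        q and not r,   # (3)
        r and not s,   # (4)
        s and not t,   # (5)
        t and not u,   # (6)
        u,             # (7)
    ]
    assert any(claims)

print("One of (1)-(7) holds on every assignment.")
```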

 
I think that knowledge matters, but there are doubters. To paraphrase Tim Maudlin at a recent talk here: ‘if I know that the guy has a true belief, and I know that he’s done everything right [i.e., he’s epistemically justified], why do I care whether or not he knows?’ That someone has a true justified belief (JTB) is enough information for us to judge the epistemic worthiness of his actions. Furthermore, formal epistemologists seem to have a pretty good decision theory that doesn’t rely on knowledge at all. So, why is knowledge important?

Timothy Williamson (in Knowledge and its Limits, 2000, p. 62) argues that knowledge ascriptions are vital to explanations of behavior.  Not everyone buys this argument.  I don’t want to convince those people; rather, I think that the value of knowledge stretches beyond its usefulness in predicting behavior.  When we find out that someone knows something, we get more information than we do when we find out that they JTB the proposition in question.  It seems to me that this isn’t just more information: it’s more useful information.




 
I’ve been saying for a while that we (grad students in philosophy at Rutgers) should have a blog. So here it is. You should all by now have had an email giving you the username and password (ask me if not) so you can start posting at once!

Here’s what I think the blog might be used for:

  • People might want to blog about half-formed philosophical ideas, which they want comments on.
  • People giving grad talks might want to post abstracts in advance, to get everyone interested.
  • We could use the blog to have follow-up discussions on grad talks, colloquia, etc.
  • People could post questions (“Which logic textbook is best?”, “Where can I find this paper?” and so on).
  • Social events could be announced on the blog, as well as details of the successes of the departmental sports teams.
  • People who run reading groups could post reminders and links to readings on the blog.
Tom