It seems that a lot of people started their summer courses this year by giving students handouts with short arguments on them, and asking questions like "What is the conclusion?", "What are the premises?", "Is it valid?" and so on. For this purpose, I've started making a collection of examples of arguments. Thanks to Jenn and Mike, I now have quite a few. If you have any to contribute, or would like me to email you the collection, please let me know...

Tom.
 
Here's the plan: (A) Run a Moorean argument against Subjective Bayesianism; (B) Note that Subjective Bayesians don't have a good reply; (C) Identify some motivations for being a Subjective Bayesian; (D) Note that these motivations are largely meta-normative, and that they are satisfied if you're an expressivist Objective Bayesian; (E) Conclude that since expressivist Objective Bayesians can avoid the Moorean argument and satisfy the motivations for Subjective Bayesianism, there is reason for Subjective Bayesians to take this kind of view seriously.

This is a toned down version of a rant to which some of you have already been subjected.

A: Moorean Argument
According to Subjective Bayesians, the following are the only norms of rational belief:

  1. Have a Prior: Have a prior that satisfies the probability axioms.

  2. Conditionalize: At all times, let your credences be the result of conditionalizing your prior probability distribution on your total evidence. (See the sketch below.)

  3. Maybe: the prior should satisfy the Principal Principle.

  4. Maybe: the prior should be non-dogmatic. (Roughly, unless it is analytic that p, don't have a prior probability of 1 in p. You have to say something else about continuous random variables.)
Objective Bayesians accept all of these constraints, but add constraints that rule out certain prior probability functions. Some Objective Bayesians hold that there is a unique prior that is rationally permissible.
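
For concreteness, here is a minimal sketch of what the Conditionalize norm comes to on a finite set of worlds (a toy illustration of my own, in Python; nothing about it is anyone's official formulation):

    def conditionalize(prior, evidence):
        """Renormalize `prior` over the worlds consistent with the evidence.

        `prior` maps worlds to probabilities summing to 1; `evidence` is the
        set of worlds compatible with your total evidence.
        """
        total = sum(p for w, p in prior.items() if w in evidence)
        return {w: (p / total if w in evidence else 0.0)
                for w, p in prior.items()}

    # E.g., conditionalizing a four-world prior on "no zombie strike":
    prior = {'zombies_soon': 0.3, 'zombies_later': 0.2,
             'safe_rain': 0.1, 'safe_sun': 0.4}
    print(conditionalize(prior, {'safe_rain', 'safe_sun'}))
    # {'zombies_soon': 0.0, 'zombies_later': 0.0, 'safe_rain': 0.2, 'safe_sun': 0.8}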

If Subjective Bayesianism is true, then it can be perfectly rational for someone with exactly my evidence to be nearly certain of the following claim:
ZOMBIE: Flesh-eating zombies are about to descend from the sky and devour us all.*
It is irrational to believe ZOMBIE when you have my evidence, so Subjective Bayesianism is false. Nothing the Subjective Bayesian throws at me will be more certain than my main premise, so my argument succeeds.

B: Subjective Bayesian Replies
Some Subjective Bayesians attempt to rebut this kind of argument by appealing to “washing out” theorems. These theorems are of the form “for any n agents whose priors satisfy certain fairly weak conditions, if those agents have a sufficiently long sequence of common observations of a certain kind, then their posterior probabilities will converge”. For instance, suppose two people watch balls being drawn, with replacement, from an urn; they satisfy 1-4, they regard the sequence of draws as exchangeable, and they conditionalize on the result of each draw. Then, if they see enough balls drawn from the urn, their estimates of the distribution of colors of balls in the urn will converge.
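
Here is what that looks like concretely (a minimal simulation of my own, using conjugate Beta priors, which make the draws exchangeable; the specific numbers are just for illustration):

    import random

    random.seed(0)
    true_p = 0.7                    # actual proportion of red balls in the urn
    draws = [random.random() < true_p for _ in range(10000)]

    def prob_next_red(alpha, beta, observed):
        # Predictive probability of red for a Beta(alpha, beta) prior,
        # conditionalized on the observed draws (True = red).
        return (alpha + sum(observed)) / (alpha + beta + len(observed))

    for n in (0, 10, 100, 10000):
        print(n,
              round(prob_next_red(1, 1, draws[:n]), 3),    # flat prior
              round(prob_next_red(50, 2, draws[:n]), 3))   # heavily red-biased prior

Both columns drift toward .7 as n grows; that is the washing-out phenomenon.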

The first point is that while such theorems might help in some other contexts, they won't help with ZOMBIE. Maybe if someone regarded possible zombie strikes as exchangeable, he'd eventually agree that a zombie strike is unlikely in the near future. But (i) this isn't required by Subjective Bayesianism, and (ii) it would still be irrational to be so damn confident in ZOMBIE (at least) until he stops expecting a zombie strike.

A more entertaining reply to the use of washing out theorems comes from Carnap (though probably others made the same point before him). Although it is true that the opinions of two agents in the urn scenario will converge, the following is also true. For any credence x (0 < x < 1) that the next ball will be red (call this proposition R), for any finite sequence of observations S, there is an exchangeable prior Pr satisfying (1-4) such that Pr(R|S) = x. Put colorfully, if you lined up observers with all possible priors, no matter how long a finite sequence of red draws you had, there'd always be some guy who was 99% confident that the next ball wouldn't be red (and each such guy would be fully rational by Subjective Bayesian standards).
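
To put numbers on Carnap's point (my own illustration, using the standard predictive rule for an exchangeable Beta(α, β) prior: after r reds in n draws, the probability that the next ball is red is (α + r)/(α + β + n)):

    # A Beta(10, 99990) prior is exchangeable and non-dogmatic, yet after
    # 1000 consecutive red draws it still puts the chance that the next
    # ball is red at exactly 1%.
    alpha, beta = 10, 99990
    reds = n = 1000
    print((alpha + reds) / (alpha + beta + n))    # 0.01

Scale beta up and the analogous observer survives any finite run of reds, which is just Carnap's point.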

Long story short, I don't think that the washing out stuff mitigates the obvious weirdness of believing a lot of weird stuff, including ZOMBIE.

C: Motivation for Subjective Bayesianism
Why have people been so attracted to a view that entails things that are so obviously and irredeemably crazy? I take it there is nothing too weird (at least as an approximation) about having credences that satisfy the probability axioms and updating by conditionalization. The weirdness is entirely due to the paucity of constraints on the priors. So our question is: why go for the Subjective part of Subjective Bayesianism?

Part of the explanation of why some people have gone for this permissive view about priors is that they just mean something especially weighty by words like “irrational”, “unreasonable”, and “unjustified”. They consider credences irrational only if they are incoherent, where incoherence is the credence analogue of having inconsistent beliefs. Thus, we had people like Carnap at great pains to argue that the requirement to use his favorite prior was a lot like the requirement to use classical logic.

The idea that these normative notions were especially loaded, so that the two groups were, to some extent, talking past each other, might help explain the disagreement. But many (most?) Subjective Bayesians, I take it, think that there is no other interesting way of using these words. So this probably isn't the end of the story.

Subjective Bayesians are, I believe, more significantly motivated by two strands of meta-normative considerations. The first strand is metaphysical: what could make some priors, but not others, rationally permissible? Indifference principles don't work, and the fact that some priors, but not others, are rationally permissible needs explanation. Something has to make it true, and there's no decently natural property that could play the role.

The second strand is epistemic: how could you know that some priors, rather than others, are rationally permissible? Someone who changed his priors after making observations and followed requirement #2 would be Dutch-book-able. So what prior you select can't depend on observation. So if it is knowable that some priors are just plain wrong, it must be a priori that they are just plain wrong. But it's not analytic that certain priors are bad (or anything like that), so it is mysterious how anyone could know whether a given prior was acceptable.

These strands interact with each other: the special property had by the good priors has to be knowable a priori, on pain of us not knowing what it is. So it couldn't be something deeply external. (It couldn't be, for instance, the probability distribution from statistical mechanics, conditional on the past hypothesis.)

That's a bit of a cartoon version of the motivations, but I think it captures the gist of it.

D: How to Meet the Motivations by Going Expressivist
These objections bear a striking resemblance to Mackie's objections to moral realism. As with Mackie's objections, there are a lot of things you can say here if you don't want to accept the Subjective Bayesian's conclusions. But those who find themselves pushed to Subjective Bayesianism upon considering these issues should take note: an expressivist meta-epistemology with an Objective Bayesian first-order epistemology could accommodate these worries, without endorsing the rational permissibility of crazy beliefs about flesh-eating zombies. On the expressivist line, there's no account of it being true or false that a certain prior is good. There is only an account of what you're doing when you say “That's an irrational prior”, and such. Roughly, you're expressing your commitment to a system of norms that rejects using that prior for belief updating. Thus, you skirt the metaphysical issue about what could make one prior better than another.

What about the epistemic objection? Consider this example:
JOHN: John uses the kind of prior that most of us, more or less, endorse. He adopted it as a young ideal Bayesian and has conditionalized ever since. John reflects on his use of this prior, considering its justifiability. Since John endorses a system of norms that uses this prior, he says, “I'm glad I used a good prior.” When he sees another guy using the ZOMBIE prior, he adds, “It sure is crazy to be worried about that.”
I take it that it is not an essential part of this story that John, behind the scenes, engaged in some objectionable variety of a priori reflection. So there's no problem here either.**

E: Conclusion
If you're worried about ZOMBIE and you're motivated by the standard arguments for Subjective Bayesianism, Objective Bayesian expressivism might be worth looking into. The view will inherit some of the problems of expressivism, but I doubt they will be as bad as endorsing wild beliefs about ZOMBIE.

I think you can run a similar argument against most versions of Humeanism about reasons.

Disclaimer: I don't mean to implicate that I endorse an expressivist version of Objective Bayesianism.

*Technically, you could get out of this by insisting that it is part of my evidence that zombies are not going to fall from the sky and devour us all. You might think, for instance, that we know that this isn't going to happen, and we therefore ought to have conditionalized on the claim that it won't happen. This sort of reply is short-sighted though. Just insert some other example of something that clearly merits little credence, given our present evidence, but is not entailed by our present evidence. (E.g., we'll all die at the hands of nuclear terrorists within the next two years.) For rhetorical purposes, I will stick to talking about flesh-eating zombies.

**Things may be a bit trickier than this. It isn't especially clear, even if expressivism is true, how you're supposed to get your normative beliefs. However, even if it is true that expressivists like John who go around using good priors are somehow engaging in an objectionable variety of a priori reasoning, which is really crazier to believe at the end of the day: (i) that it is rationally permissible to use a priori reasoning to select a prior, or (ii) that it is rationally permissible for someone with my evidence to expect flesh-eating zombies to fall from the sky and devour us all? At any rate, appeal to expressivism helps here at least as much as it helps in meta-ethical contexts.

--Nick Beckstead

 
 
This weekend we had a pretty exciting philosophy of physics conference here in the big red R. Sean Carroll was there.  Oh, you don’t know who Sean Carroll is?  Well, he’s famous, he has a blog, and he just wrote a book, which I will read as soon as a friendly man in a brown uniform drops it at my door.   [Warning: this post may be long]

Carroll’s project is very similar to David Albert’s in Time and Chance: he’s trying to locate the arrow of time in thermodynamics, claiming that t1 is in the future of t0 just in case t1 has higher entropy than t0 and there is a steady entropy increase between the two.  Entropy, roughly, measures chaos—a box in which particles are all spread out has more entropy than one where particles are all bunched up in a corner, my office has more entropy than Meghan’s, and Jackson Pollock’s paintings have more entropy than Piet Mondrian’s.
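
(For those who want the textbook version: Boltzmann's formula, S = k log W, identifies the entropy of a macrostate with the logarithm of the number W of microstates that realize it, which is why spread-out arrangements, realizable in vastly more ways, carry higher entropy.)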

 
In “The Rationality of Belief and Some Other Propositional Attitudes,” Kelly appears to endorse the following theses:

Psychological thesis: For any subject S and any proposition p:  it’s not possible for S to hold a belief that p on the basis of the belief that it is in S’s best interest to believe p.

Weak rationality thesis: For any S and any p:  it’s not possible for S’s belief that p to be epistemically rationalized by S’s belief that it is in S’s best interest to believe p. 

Strong rationality thesis: For any S and any p:  it’s not possible for S’s belief that p to be all things considered rationalized by S’s belief that it is in S’s best interest to believe p.

(Kelly doesn’t clearly distinguish between the weak and the strong rationality theses; however, it’s pretty clear that he’s committed to both.)   

Here’s a counterexample to all three theses:

A semi-evil demon hands Sheila a folded sheet of paper.  Before she unfolds the sheet, the demon tells her the following:  “On this paper I have written a single sentence.  If, upon reading the sentence written on this piece of paper, you come to believe the sentence is true, I will give you a million dollars.  If not, I will inflict severe bodily harm upon you.” 

Suppose furthermore that Sheila has strong evidence that the demon is i) a reliable mind reader, ii) capable of carrying out his promise, and iii) sincere in his promise.  Sheila thus comes to believe the following proposition:

P:  It is in my best interest to believe that the sentence written on the sheet of paper is true. 

Next, Sheila opens the sheet of paper and reads the following sentence:

S:  It is in your best interest to believe this sentence is true. 

Sheila reasons as follows:  the written expression “this sentence” refers to the sentence written on the sheet of paper.  By P, Sheila concludes that it is in her best interest to believe said sentence.  Since the sentence simply says that it is in her best interest to believe said sentence, Sheila concludes:

C:  The sentence written on the sheet of paper is true. 

Reflecting on this case, I think the following three claims are plausible:

1)  Sheila believes C on the basis of her belief that P.
2)  Sheila’s belief that P epistemically rationalizes her belief that C.  
3)  Sheila’s belief that P all things considered rationalizes her belief that C. 

Clearly, 1) is inconsistent with Kelly’s psychological thesis; 2) is inconsistent with the weak rationality thesis; and 3) is inconsistent with the strong rationality thesis. 

Any thoughts?

 -Bob

 
Suppose that you're offered the following bet: If a coin lands heads you will win $1.50. If it lands tails you will lose $1. Your credence in heads is .5. (Assume that the fact that you're being offered this bet gives you no information about the odds of heads or tails.) Is it rational to accept the bet?
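
For concreteness, the expected monetary value of accepting is .5 × $1.50 + .5 × (−$1) = $0.25, as against $0 for declining.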

According to expected utility theory, it is, and this seems right (setting aside risk aversion). What does a knowledge-action account advise? Well, that depends. Consider two possibilities:

1) You are certain that the coin is fair, so you know that the objective chance of heads is .5. You can use the fact about the objective chance as a reason to take the bet.

2) You have no idea whatsoever about the bias of the coin. Using the principle of indifference, you assign credence .5 to heads. What knowledge do you have that could make it rational to accept the bet? (Hawthorne and Stanley don't want to allow knowledge of subjective credences.) If there isn't any relevant knowledge, then it's not rational to accept the bet.

Here the knowledge-action account seems to make a distinction between the two cases that expected utility theory does not. This is worrisome for the knowledge-action account.

Note: Expected utility theory will distinguish between the two cases when you're allowed to acquire additional evidence first. But that's not an option here.

-Mary
 
I just got back from Boston, where I spent most of spring break. In Cambridge, there is a great bookshop (the Harvard Book Store) where they have an Espresso Book Machine. I’d read about these things before, but this was my first chance to see one in action – and I was pretty impressed. 

Here's how it works. In the shop, they have a computer on which there is a searchable database of (type) books. You choose one, and the Espresso Book Machine prints out a token for you on the spot. I asked for a copy of Royce’s The Religious Aspect of Philosophy. They downloaded a scan of the book from Google, printed it out, and then charged me $8 for the book. This is a bargain – it costs $40 from Amazon. The whole process took about ten minutes; the book was still warm as I walked out of the shop. It’s a paperback, and it looks and feels cheap, but it’s hard to complain given the price. 

It seems to me that this is an exciting thing for people who do history of philosophy. Many old books – which are out of print and hard to get hold of – are actually in the public domain. From now on, it should be possible to get cheap copies of these books using an Espresso machine. 

According to Wikipedia, each one of these machines costs at least $75,000 – so a bookshop will have to get a hell of a lot of use out of one before it even makes enough to pay off the initial investment. And as I say, the books that the machine makes are of rather a low quality. However, if the technology improves, or gets cheaper, it may make a big difference to the way we buy books. Perhaps bookshops in the future will have just a few tables of books at the front (for browsing), plus the inevitable coffee-shop, and then a big printer in the back rather than rows of shelves.  

Tom
 
Fodor's new book has been getting a lot of play around the interwebs. You might want to see my old blog post for background. Here's a selection of reviews.

Rutgers alum Brian Switek has a review at his (excellent) blog Laelaps: Jerry Fodor: still getting it wrong about evolution.

PZ Myers reviews the book over at Pharyngula in a post entitled Fodor and Piatelli-Palmarini get everything wrong.

Michael Ruse has a review at boston.com: The origin of the specious.

Then there's Mary Midgley at the Grauniad: Review: What Darwin Got Wrong.

The money review is of course Ned Block & Phil Kitcher's at the Boston Review: Misunderstanding Darwin.

I commented on this review. Here's what I said:

Block & Kitcher say that if Fodor's right, all causal explanations are in jeopardy. I don't think they're correct on this point.

To the best of my knowledge, Fodor has a view of causation on which a causes b just in case there is a covering law, A -> B, such that a has A and b has B. So whenever a causes b, there'll be a fact of the matter about which property or properties of a are such that, in virtue of them, a caused b: namely, all those properties X such that there is a law X -> B, where a has X and b has B (for some B). So according to Fodor, there are lots of facts of the matter about what causes what, and in virtue of what.

But when there's no covering law, there's no real causation. Fodor's example is that of history. Suppose Frenchmen tend to lose battles and all and only Frenchmen are short. Is it because they're short or because they're French that they lose? Or because of some other property correlated with Frenchness and shortness? Well, none of the above. There's no fact of the matter, because there are no laws of the form French -> lose or short -> lose. There just aren't any laws of history, not even ceteris paribus laws.

This is also Fodor's view regarding natural history (evolution). White polar bears proliferate. Is it because they're white or because they're snow-colored? No fact of the matter *because* there's no law of the form snow-colored -> proliferate, and no law of the form white -> proliferate. There aren't even ceteris paribus laws of this form.

Furthermore, as I read Fodor, this is what the "seeing" metaphor is doing. Natural selection can't "see" counterfactuals like 'had the bears been white but not snow-colored, they'd've not proliferated' because they aren't entailed by biological laws, because there aren't any such things. These counterfactuals are true, but not sufficient to ground causal claims involving phenotypic properties, because they're true in virtue of the laws of fundamental physics and the physical state of the world, not laws of biology involving phenotypes.

I'm not saying Fodor's right. I just think that on at least this point, Block & Kitcher have his view wrong. [end comment]

Finally, there's an interview with none other than Jerry himself over at Salon.com: What Darwin Got Wrong: Taking down the father of evolution.

I won't comment on the quality of these reviews: that's for you to decide. If you see more around, I'm happy to add them to the list.

Michael Johnson
 
Why is type-type physicalism so focused on identifying mental states with brain states?

Of course the brain states will be part of the picture, but if one of the many kinds of externalism or disjunctivism is correct, the bounds of the mental will extend far beyond the bounds of the brain.

And this shouldn’t be scary or worrying for physicalism, because physics is most likely blind to the boundaries of human bodies. If, say, knowing that p has different causal efficacy than merely believing that p, then we should expect that the underlying causal story is not going to be merely a story of impingements on and effects of the brain. The type-type physicalist project should be updated to include these insights.

L. Miracchi

 
A defender of luminosity might object to Williamson’s argument in the following way: 

I can concede that being cold is not luminous. That is not the kind of state I mean when I say that one’s occurrent states are luminous. Rather, I mean something like feeling like this is luminous. While both my feeling like this and my feeling like this* support my belief that I am feeling cold, only one of them can provide me with the knowledge that I am feeling like this. The idea is that, by merely being in the state of feeling like this, I am thereby in a position to know that I am feeling like this. No matter how similar feeling like this is to feeling like this*, feeling like this cannot play the sort of role for the belief that I am feeling like this* that it does for the belief that I am feeling like this.

Note that the claim that I am feeling like this is not trivial, for feeling like this is feeling a certain way, and it could be misrepresented. As an analogy, think about colors. If I claim that that box is that color, I could either be making the trivial claim that it has whatever color it has, or the claim that it has the particular color I perceive it as having (McDowell, Mind and World). Likewise, the claim is not supposed to be that I am feeling however I am feeling, but that I am feeling a particular way. I could misrepresent how I am feeling—e.g., I am feeling jealous of S but form the belief that I am feeling angry at some wrong S has committed. The idea is that there is a particular color (feeling) picked out by the demonstrative, and it is that color (feeling) that is evaluated against the actual color (feeling).
(Thanks to T. Donaldson for pushing me to clarify this point.)
 
However, one might think that in the cases where I do misrepresent how I am feeling, introspection is thwarted by other factors. When introspection works properly, my belief that I am feeling like this cannot be wrong. Note that the analogy between representing one’s qualitative states and representing the colors of objects fails in an important respect. When I form the belief that that box is that color, I need to represent my qualitative state, my experience, and then on top of that my qualitative state is evaluated against a particular color. (I have to get things right twice over.) However, with knowing what I am feeling, I am only trying to represent the qualitative state. Both my ability to know what colors objects are and my ability to know what I am feeling rely on my ability to reflect upon my qualitative states. However—contrasted with the case of colors—my ability to know what I am feeling is merely this ability. It is merely the ability to reflect on my qualitative states that is employed here, not a further ability to accurately perceive further states.

Now Williamson’s argument against the reliability of introspection because of the similarity of the bases does not go through, for being in a certain qualitative state cannot be one’s basis (in the sense described) for forming the belief that one is in a qualitatively different state.  On this view, the belief-forming process is still reliable, but in a different way than Williamson proposes. What is reliable is the following capacity: when I am feeling like this_n, and I reflect on how I am feeling, I can thereby come to believe truly that I am feeling like this_n. This concedes the requirement that knowledge be formed in a reliable way but avoids the problems Williamson poses for claiming that beliefs formed by introspection on one’s occurrent states are formed in a reliable way. (If there is any vagueness to “this cold,” it will not challenge the idea that our introspective capacities are reliable.)

One caveat: such a defense of luminosity is not committed to the idea that there could be no general concepts modifying the demonstrative element in the experience and in the piece of knowledge. For example, if I am feeling this hot, the knowledge that I can thereby non-observationally acquire is that I am feeling this hot. That is, the state need not be merely demonstratively picked out. My being this hot does not support my belief that I am this hot*, no matter how close feeling this hot is to feeling this hot*.

Thoughts?

L. Miracchi