Block & Kitcher say that if Fodor's right, all causal explanations are in jeopardy. I don't think they're correct on this point.
To the best of my knowledge, Fodor has a view of causation on which a causes b just in case there is a covering law: A -> B, such that a has A and b has B. So whenever a causes b, there'll be a fact of the matter about which property or properties of a are such that in virtue of them, a caused b, namely all those properties X such that there is a law X -> B, where a has X and b has B (for some B). So according to Fodor, there are lots of facts of the matter about what causes what, and in virtue of what.
But when there's no covering law, there's no real causation. Fodor's example is that of history. Suppose Frenchmen tend to lose battles, and all and only Frenchmen are short. Is it because they're short or because they're French that they lose? Or because of some other property correlated with Frenchness and shortness? Well, none of the above. There's no fact of the matter because there are no laws of the form: French -> lose or short -> lose. There just aren't any laws of history, not even ceteris paribus laws.
This is also Fodor's view regarding natural history (evolution). White polar bears proliferate. Is it because they're white or because they're snow-colored? No fact of the matter *because* there's no law of the form snow-colored -> proliferate, and no law of the form white -> proliferate. There aren't even ceteris paribus laws of this form.
Furthermore, as I read Fodor, this is what the "seeing" metaphor is doing. Natural selection can't "see" the counterfactuals like 'had the bears been white but not snow-colored, they'd've not proliferated' because they aren't entailed by biological laws, because there aren't any such things. These counterfactuals are true, but not sufficient to ground causal claims involving phenotypic properties-- because they're true in virtue of the laws of fundamental physics, and the physical state of the world, not laws of biology involving phenotypes.
I'm not saying Fodor's right. I just think that on at least this point, Block & Kitcher have his view wrong.
Why is type-type physicalism so focused on identifying mental states with brain states?
Of course the brain states will be part of the picture, but if one of the many kinds of externalism or disjunctivism is correct, the bounds of the mental will extend far beyond the bounds of the brain.
And this shouldn’t be scary or worrying for physicalism, because physics is most likely blind to the boundaries of human bodies. If, say, knowing that p has different causal efficacy than merely believing that p, then we should expect that the underlying causal story is not going to be merely a story of impingements on and effects of the brain. The type-type physicalist project should be updated to include these insights.
A defender of luminosity might object to Williamson’s argument in the following way:
I can concede that being cold is not luminous. That is not the kind of state I mean when I say that one’s occurrent states are luminous. Rather, I mean something like feeling like this is luminous. While both my feeling like this and my feeling like this* support my belief that I am feeling cold, only one of them can provide me with the knowledge that I am feeling like this. The idea is that, by merely being in the state of feeling like this, I am thereby in a position to know that I am feeling like this. No matter how similar feeling like this is to feeling like this*, feeling like this cannot play the sort of role for the belief that I am feeling like this* that it does for the belief that I am feeling like this.
Note that the claim that I am feeling like this is not trivial, for feeling like this is feeling a certain way, and it could be misrepresented. As an analogy, think about colors. If I claim that that box is that color, then I could either be making the trivial claim: it has whatever color it has, or I could be making the claim: it has the particular color that I perceive it as having (McDowell, Mind and World). Likewise, the claim is not supposed to be that I am feeling however I am feeling, but that I am feeling a particular way. I could misrepresent how I am feeling—e.g., I am feeling jealous of S but I form the belief that I am feeling angry at some wrong x has committed. The idea is that there is a particular color (feeling) picked out by the demonstrative, and it is that color (feeling) that is evaluated against the actual color (feeling). (Thanks to T. Donaldson for pushing me to clarify this point.)
However, one might think that in the cases where I do misrepresent how I am feeling, introspection is thwarted by other factors. When introspection works properly, my belief that I am feeling like this cannot be wrong. Note that the analogy between representing one’s qualitative states and representing the colors of objects fails in an important respect. When I form the belief that that box is that color, I need to represent my qualitative state, my experience, and then on top of that my qualitative state is evaluated against a particular color. (I have to get things right twice over.) However, with knowing what I am feeling, I am only trying to represent the qualitative state. Both my ability to know what colors objects are and my ability to know what I am feeling rely on my ability to reflect upon my qualitative states. However—contrasted with the case of colors—my ability to know what I am feeling is merely this ability. It is merely the ability to reflect on my qualitative states that is employed here, not a further ability to accurately perceive further states.
Now Williamson’s argument against the reliability of introspection because of the similarity of the bases does not go through, for being in a certain qualitative state cannot be one’s basis (in the sense described) for forming the belief that one is in a qualitatively different state. On this view, the belief-forming process is still reliable, but in a different way than Williamson proposes. What is reliable is the following capacity: when I am feeling like this_n, and I reflect on how I am feeling, I can thereby come to believe truly that I am feeling like this_n. This concedes the requirement that knowledge be formed in a reliable way but avoids the problems Williamson poses for claiming that beliefs formed by introspection on one’s occurrent states are formed in a reliable way. (If there is any vagueness to feeling this cold, it will not challenge the idea that our introspective capacities are reliable.)
One caveat: such a defense of luminosity is not committed to the idea that there could be no general concepts modifying the demonstrative element in the experience and in the piece of knowledge. For example, if I am feeling this hot, the knowledge that I can thereby non-observationally acquire is that I am feeling this hot. That is, the state need not be merely demonstratively picked out. My feeling this hot does not support my belief that I am feeling this hot*, no matter how close feeling this hot is to feeling this hot*.
The only truth I have to announce is that I'm puzzled by something. Here is a familiar case.
Smith buys a ticket in a lottery with an enormous payoff. The odds are vanishingly small that Smith's ticket is a winner. Smith knows both the odds and that the payoff is enormous. If her ticket is a loser, it's a piece of trash. Smith realizes this, so she's considering recycling it. As a matter of fact, the ticket *is* a loser, and Smith believes that it is. Smith's only evidence that it's a loser, however, is her knowledge of the odds.
One of the following must be true, but which one? (In case it isn't obvious that one of these must be true, see "PROOF" below.)
(1) Smith doesn't know that the odds are vanishingly small that the ticket is a winner.
(2) She does know that the odds are vanishingly small that the ticket is a winner, yet she's not justified in proportioning her confidence to the odds.
(3) She's justified in proportioning her confidence to the odds (and thus being virtually certain) that the ticket won't win, yet she's not justified in believing that the ticket won't win.
(4) She's justified in believing that the ticket won't win, yet she doesn’t know that the ticket won't win.
(5) She does know that the ticket won't win, yet it's not acceptable for her to reason that, since the ticket won't win, she should recycle it.
(6) It's acceptable for her to reason that, since the ticket won't win, she should recycle it, yet it's not the case that she should recycle it.
(7) She should recycle it.
As I noted above, one of (1) through (7) must be true. But they are all hard to stomach. Very quickly, (1) through (7) have (at least) the following problems.
The scenario stipulates that (1) isn't the case, so, without reason to think the scenario is incoherent, we can't accept (1). Given that Smith wants to win the lottery, given that she paid good money for the ticket, and given that it's very little trouble for her to keep the ticket, it seems clear that (7) is false. If Smith had overriding reasons for not recycling the ticket, then (6) would seem true. But we can stipulate that she doesn't, in which case (6) looks false. And the same considerations apply to (5). It looks true if additional factors prevent Smith from reasoning that way, but we can just stipulate that they don't.

Of course, Hawthorne and others would accept (4). But (4) seems unstable. Ex hypothesi, the ticket won't win and Smith isn't Gettiered with respect to her belief that it won't win. Thus, our grounds for denying that Smith knows seem to equal our grounds for denying that Smith is justified. (And speaking for myself, insofar as it's intuitive that Smith doesn't know, it's also intuitive that she's not justified.) So, it looks like anyone who wants to deny that Smith knows is pushed toward (2) or (3). But (2) seems false. Why on earth shouldn't Smith proportion her confidence to the odds? After all, ex hypothesi, she knows the odds and the odds exhaust her evidence.

This leaves us with (3), which is also hard to swallow. Perhaps justified belief equals justified certainty. But then, which of our beliefs *is* justified? If we accept (3) and maintain that justified belief equals justified certainty, it will be a trick to avoid skepticism. Perhaps justified belief equals justified confidence above some threshold *below* certainty. But then, how can (3) be true while we have a significant stock of justified beliefs? Again, given that we accept (3), it will be tricky to avoid skepticism. Perhaps justified belief cannot be identified with any level of justified confidence, then.
In this case, what does belief amount to, how is it related to confidence, and what explains how Smith could be justified in being virtually certain that the ticket won't win, yet not justified in *believing* that the ticket won't win?
I'm not sure what to say about these options, except that none of them looks particularly good. What do people think? If forced to pick among them, which should we pick?
PROOF: Here’s the proof I promised above. Let 'p' through 'u' name the propositions in (1) through (7) as follows.
p = Smith knows that the odds are vanishingly small that the ticket is a winner.
q = Smith is justified in proportioning her confidence to the odds.
r = Smith is justified in believing that the ticket won't win.
s = Smith knows that the ticket won't win.
t = It's acceptable for Smith to reason that, since the ticket won't win, she should recycle it.
u = Smith should recycle the ticket.
Now consider the following presentation of (1) through (7).
(1) ~p
(2) p & ~q
(3) q & ~r
(4) r & ~s
(5) s & ~t
(6) t & ~u
(7) u
Either p is true or it's false. Suppose the latter. Then (1) is true. Suppose the former. Then either (2) is true or q is. Suppose q is. Then either (3) is true or r is. Suppose r is. Then either (4) is true or s is. So suppose s is true. Then either (5) is true or t is true. Suppose it’s t. Then either (6) is true or u is. But if u is true, then so is (7). So, one of (1) through (7) must be true.
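The case-by-case argument above can also be checked mechanically. Here is a quick brute-force sketch in Python (the function and variable names are mine, not part of the proof) confirming that the disjunction of (1) through (7) is a truth-functional tautology: on every assignment to p through u, at least one option comes out true.

```python
from itertools import product

# Truth-functional rendering of options (1)-(7), with p..u as in the proof.
def options(p, q, r, s, t, u):
    return [
        not p,        # (1)
        p and not q,  # (2)
        q and not r,  # (3)
        r and not s,  # (4)
        s and not t,  # (5)
        t and not u,  # (6)
        u,            # (7)
    ]

# Check every assignment of truth values to p..u:
# at least one of (1)-(7) must hold.
tautology = all(
    any(options(*vals))
    for vals in product([True, False], repeat=6)
)
print(tautology)  # prints True
```

This mirrors the proof exactly: whichever of p through u is the first to be false, the corresponding conjunction is true, and if none is false, (7) is true.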
I think that knowledge matters, but there are doubters. To paraphrase Tim Maudlin at a recent talk here: ‘if I know that the guy has a true belief, and I know that he’s done everything right [i.e., he’s epistemically justified], why do I care whether or not he knows?’ That someone has a justified true belief (JTB) is enough information for us to judge the epistemic worthiness of his actions. Furthermore, formal epistemologists seem to have a pretty good decision theory that doesn’t rely on knowledge at all. So, why is knowledge important?
Timothy Williamson (in Knowledge and its Limits, 2000, p. 62) argues that knowledge ascriptions are vital to explanations of behavior. Not everyone buys this argument. I don’t want to convince those people; rather, I think that the value of knowledge stretches beyond its usefulness in predicting behavior. When we find out that someone knows something, we get more information than we do when we find out that they JTB the proposition in question. It seems to me that this isn’t just more information: it’s more useful information.
I'll be talking about James's later work, particularly Pragmatism and the Essays in Radical Empiricism, and I'll concentrate on James's philosophy of perception, his metaphysics and his theory of truth. I'll also have a few things to say about the historical origins of Quine's pragmatism (yes, Quine's pragmatism).