Tuesday, February 28, 2017

The problem of priors

Counterfactuals about scientific practice reveal some curious facts about our prior probabilities. Our handling of experimental data suggests an approximate flatness in our prior distributions for various constants (cf. this). But the flatness is not perfect. Suppose we are measuring some constant k in a law of nature, a constant that is either dimensionless or expressed in a natural unit system, and we come back with 2.00000. Then we will assign a fairly high credence to the hypothesis that k is exactly 2. But any continuous prior distribution assigns zero prior probability to k being exactly 2, and then the posterior would still be zero; so our prior for 2 must have been non-zero and non-infinitesimal. Yet for most numbers, the prior for k being that number must be zero or infinitesimal, or else the probabilities won’t add up to 1.
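
To make the point concrete, here is a minimal numerical sketch (all the specific numbers are my own assumptions, not anything argued above): give the prior a point mass at the "simple" value k = 2 and spread the rest continuously, and a measurement of 2.00000 with small Gaussian error drives the posterior for "k is exactly 2" close to 1, whereas a purely continuous prior leaves it at zero.

    from math import exp, pi, sqrt

    P_POINT = 0.1        # assumed prior mass on k being exactly 2
    LO, HI = 0.0, 10.0   # assumed support of the continuous part of the prior
    SD = 1e-5            # assumed standard deviation of the Gaussian measurement error
    x = 2.00000          # the measured value

    def noise_density(e):
        return exp(-e**2 / (2 * SD**2)) / (SD * sqrt(2 * pi))

    # Likelihood of the measurement if k is exactly 2:
    like_point = noise_density(x - 2.0)

    # Likelihood under the continuous (flat) part: integrating the noise
    # density against a uniform density on [LO, HI] gives about 1/(HI - LO),
    # since the Gaussian's mass lies well inside the interval.
    like_flat = 1.0 / (HI - LO)

    post = P_POINT * like_point / (P_POINT * like_point + (1 - P_POINT) * like_flat)
    print(post)  # ~0.99998; with P_POINT = 0 the posterior would be exactly 0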

More generally, our priors favor simpler theories. And they favor them in a way that is tuned (finely or not). If our prior for k being exactly 2 were too high, then we would believe that k = 2 even after a measurement of 3.2 (experimental error!). If our prior were too low, then we wouldn’t ever conclude that k = 2, no matter how many digits after the “2.” we measured to be zero.

There is now an interesting non-normative question about the priors:

  • Why are human priors typically so tuned?

There is, of course, an evolutionary answer—our reasoning about the world wouldn’t work if we didn’t have a pattern of priors that was so tuned. But there is a second question that the evolutionary story does not answer. To get to the second question, observe that our priors ought to be so tuned. Someone whose epistemic practices involve the rejection of the confirmation of scientific theories on the basis of too strong a prejudice for simple theories (“There is only one thing, and it’s round—everything else is illusion”) or too weak a preference for simple theories (“There are just as many temperature trends where there is a rise for a hundred years and then a fall for a hundred years as where there is a rise for two hundred years, so we have no reason at all to think global warming will continue”) is not acting as she ought.

So now we have this normative question:

  • Why is it that our priors ought to be so tuned?

These give us the first two desiderata on a theory of priors:

  1. The theory should explain why our priors are tuned with respect to simplicity as they are.

  2. The theory should explain why our priors should be so tuned.

Here is another desideratum:

  3. The theory should exhibit a connection between priors and truth.

Next, observe that our priors are pretty vague. They certainly aren’t numerically precise, and they shouldn’t be, because beings with our capacity couldn’t reason with precise numerical credences in the kinds of situations where we need to.

  4. The theory should not imply that having the priors we ought to have requires us to always have numerically precise priors.

Further, there seems to be something to subjective Bayesianism, even if we should not go all the way with the subjective Bayesians. Which we should not, because then we cannot rationally criticize the person who has too strong or too weak an epistemic preference for simple theories.

  5. The theory should not imply a unique set of priors that everyone should have.

Next, different kinds of agents should have different priors. For instance, agents like us typically shouldn’t be numerically precise. But angelic intellects that are capable of instantaneous mathematical computation might do better with numerically precise priors. Moreover, and more controversially, beings that lived in a world with simpler or less simple laws shouldn’t be held hostage to the priors that work so well for us.

  6. The theory should allow for the possibility that priors vary between kinds of agents.

And then, of course, we have standard desiderata on all theories, such as that they be unified.

Finally, observe the actual methodology of philosophy of science: We observe how working scientists make inferences, and while we are willing at times to offer corrections, we use the actual inferential practices as evidence for how the inferential practices ought to go. In particular, we extract the kinds of priors that people have from their epistemic behavior when it is at its best:

  7. The theory should allow for the methodology of inferring what kinds of priors we ought to have from looking at actual epistemic behavior.

Subjective Bayesianism fails with respect to desiderata 2 and 3, and if it satisfies 1, it is only by being conjoined with some further story, which decreases the unity of the story. Objective Bayesianism fails with respect to desiderata 5 and 6, and some versions of it have trouble with 4. Moreover, to satisfy 1, it needs to be conjoined with a further story. And it’s not clear that objective Bayesianism is entitled to the methodology advocated in 7.

What we need is something in between subjective and objective Bayesianism. Here is such a theory: Aristotelian Bayesianism. On general Aristotelian principles, we have natures which dictate a range of normal features with an objective teleology. For instance, the nature of a sheep specifies that sheep should have four legs in support of quadrupedal locomotion. Moreover, in Aristotelian metaphysics, the natures also explain the characteristic structure of beings with that nature. Thus, the nature of a sheep is not only that in virtue of which a sheep ought to have four legs, but also what has guided the embryonic development of typical sheep towards a four-legged state. Finally, on an Aristotelian picture, when things act normally, they tend to achieve the goals that their nature assigns to that activity.

Now, in my Aristotelian Bayesianism, our human nature leads to characteristic patterns of epistemic behavior for the telos of truth. From the patterns of behavior that are compatible with our nature, one can derive constraints on priors—namely, that they be such as to underwrite such behavior. These priors are implicit in the patterns of behavior.

We can now take the desiderata one by one:

  1. Our priors are tuned as they are since our development is guided by a nature that leads to epistemic behavior that determines priors to be so tuned.

  2. Our priors ought to be so tuned, because all things ought to act in the way that their nature makes natural.

  3. Natural behavior is teleological, and our epistemic behavior is truth-directed.

  4. The priors we ought to have are back-calculated from the epistemic behaviors we ought to have, and our behaviors cannot have precise numbers attached to them in such a way as to yield precise numerical priors.

  5. Nothing in the theory requires that unique priors be derivable from what epistemic behavior is characteristic. Typically, in Aristotelian theories, there is a range of normalcy—a ratio of length of legs to length of arms between x and y, etc.

  6. Different kinds of beings have different natures. Sheep ought to have four legs and we ought to have two. We are led to expect that different kinds of agents would have different appropriate priors. Moreover, animals tend to be adapted to their environment, so we would expect that in worlds that are sufficiently different, different priors would be appropriate.

  7. Since beings have a tendency towards acting naturally, the actual behavior of beings—especially when they appear to be at their best—provides evidence of the kind of behavior that they ought to exhibit. And from the kind of epistemic behavior we ought to exhibit, we can back-calculate the kinds of priors that are implicit in that behavior.

This post is inspired by Barry Loewer saying in discussion that I was Kantian because I think there are objective constraints on priors. I am not Kantian. I am Aristotelian.

An unimpressive fine-tuning argument

One of the forces of nature that the physicists don’t talk about is the flexi force, whose value between two particles of masses m₁ and m₂ a distance r apart is given by F = km₁m₂r and which is radial. If k were too positive the universe would fall apart and if k were too negative the universe would collapse. There is a sweet spot of life-permissivity where k is very close to zero. And, in fact, as far as we know, k is exactly zero. :-)

Indeed, there are infinitely many forces like this, all of which have a “narrow” life-permitting range around zero, and where as far as we know the force constant is zero. But somehow this fine-tuning does not impress as much as the more standard examples of fine-tuning. Why not?

Probably it’s this: For any force, we have a high prior probability, independent of theism, that it has a strength of zero. This is a part of our epistemic preference for simpler theories. Similarly, if k is a constant in the laws of nature expressed in a natural unit system, we have a relatively high prior probability that k is exactly 1 or exactly 2 (thought experiment: in the lab you measure k up to six decimal places and get 2.000000; you will now think that it’s probably exactly 2; but if you had uniform priors, your posterior that it’s exactly 2 would be zero).

But this in turn leads to a different explanatory question: Why is it the case that we ought to—as surely we ought, pace subjective Bayesianism—have such a preference, and such oddly non-uniform priors?

Thursday, February 23, 2017

Flatness of priors

I. J. Good is said to have said that we can know someone’s priors by their posteriors. Suppose that Alice has the following disposition with respect to the measurement of an unknown quantity X: For some finite bound ϵ and finite interval [a, b], whenever Alice would learn that:

  1. The value of X + F is x where x is in [a, b], where
  2. F is a symmetric error independent of the actual value of X and certain to be no greater than ϵ in absolute value according to her priors, and
  3. the interval [x − ϵ, x + ϵ] is a subset of [a, b]

then Alice’s posterior epistemically expected value for X would be x.

Call this The Disposition. Many people seem to have The Disposition for some values of ϵ, a and b. For instance, suppose that you’re like Cavendish and you’re measuring the gravitational constant G. Then within some reasonable range of values, if your measurement gives you G plus some independent symmetric error F, your epistemically expected value for G will probably be equal to the number you measure.

Fact. If Alice is a Bayesian agent who has The Disposition and X is measurable with respect to her priors, then Alice’s priors for X conditional on X being in [a, b] are uniform over [a, b].

So, by Good’s maxim about priors, someone like the Cavendish-like figure has a uniform distribution for the gravitational constant within some reasonable interval (there is a lower bound of zero for G, and an upper bound provided by the fact that even before the experiment we know that we don’t experience strong gravitational attraction to other people).
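
Here is a quick grid-based check of the Fact, with toy numbers of my own choosing: with a uniform prior the posterior mean lands exactly on the measured value, while a non-flat prior pulls it away.

    import numpy as np

    a, b, eps, x = 0.0, 10.0, 0.5, 4.2    # assumed; [x - eps, x + eps] lies inside [a, b]
    grid = np.linspace(a, b, 100001)
    dx = grid[1] - grid[0]

    def err_density(e):                   # a symmetric error bounded by eps (triangular)
        return np.where(np.abs(e) <= eps, (eps - np.abs(e)) / eps**2, 0.0)

    for prior in (np.ones_like(grid),     # uniform prior over [a, b]
                  np.exp(-0.5 * grid)):   # a non-flat prior, for contrast
        post = prior * err_density(x - grid)   # posterior density, up to a constant
        post /= post.sum() * dx
        print((grid * post).sum() * dx)   # uniform prior: 4.2; non-flat prior: less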

Wednesday, February 22, 2017

Going from 2D drawings to 3D printing

I wanted to make a 3D-printable Valentine's Day card for my wife, with an inflated red heart on a white background, so I spent perhaps too much time playing with algorithms to "inflate" a 2D drawing of a heart into a 3D polyhedron that I can load into OpenSCAD. Here's the card I ended up with.

After Valentine's Day, I kept working on refining the Python code that did the inflation. The code is now an easy-to-use Inkscape extension, which adds the ability to save to an inflated SCAD or STL file. The official repository is here.

The final algorithm I settled on is a non-linear scheme that sets the height of the inflation of a 2D image at a given point x of the image by approximating the Lᵖ norm (E(Tₓᵖ))^(1/p) of the exit time Tₓ of a random walk started at x. For further adjustment, you can replace Tₓ with min(Tₓ, K) where K is exponentially distributed, which flattens the inflation in inner regions. The code could use a lot of optimization (using PyPy instead of CPython improves runtimes by a factor of 10, but Inkscape only bundles CPython), as on my laptop the code takes about 45 seconds with default settings on one of my simple test images.
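
Here is a toy version of the idea (not the actual extension code; the region, walk count and test points are made up): the height at a pixel is a Monte Carlo estimate of (E[T^p])^(1/p) for a simple random walk.

    import random

    def inflate_height(mask, x, y, p=2, walks=200):
        """mask[y][x] is True inside the region; returns the approximate
        Lp norm of the exit time of a random walk started at (x, y)."""
        total = 0.0
        for _ in range(walks):
            cx, cy, t = x, y, 0
            while mask[cy][cx]:           # step until the walk leaves the region
                dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                cx, cy, t = cx + dx, cy + dy, t + 1
            total += t ** p
        return (total / walks) ** (1.0 / p)

    # Example: a disc-shaped region; the centre inflates higher than the rim.
    N = 21
    mask = [[(i - N//2)**2 + (j - N//2)**2 < 81 for i in range(N)] for j in range(N)]
    print(inflate_height(mask, N//2, N//2), inflate_height(mask, N//2 + 7, N//2))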

While my original Valentine's Day card used p=1, I have since found that p=2 produces more nicely rounded output.

Tuesday, February 21, 2017

Total and average epistemic and pragmatic utilities

The demiurge flipped a fair coin. If it landed heads, he created 100 people, of whom 10 had a birthmark on their back. If it landed tails, he created 10 people, of whom 9 had a birthmark on their back. You’re one of the created people and the demiurge has just apprised you of the above facts.

What should your credence be that you have a birthmark on your back?

This seems a plausible answer:

  • Answer A: (1/2)(10/100) + (1/2)(9/10) = 1/2

Let’s think a bit about Brier scores, considered as measures of epistemic disutility. If everybody goes for Answer A, then the expected total epistemic disutility will be:

  • TD(A) = (1/2)(100)(1/2)² + (1/2)(10)(1/2)² = 13.75

That’s not the best we can do. It turns out that the strategy that minimizes the expected total epistemic disutility is:

  • Answer B: 19/110

which yields the expected total disutility:

  • TD(B) ≈ 7.9.

The same 19/110 answer will be optimal with any other proper scoring rule. Moreover, what holds for proper scoring rules also holds for betting scenarios, and so the strategy of going for 19/110, if universally adopted, will make for better total utility in betting scenarios. In other words, we have both an epistemic utility and a pragmatic utility argument for the strategy of adopting 19/110.
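
A numerical check of the 19/110 claim for the Brier score (a sketch; the grid search is just the crudest way to find the minimizer):

    # Heads: 100 people, 10 with the birthmark; tails: 10 people, 9 with it.
    def total_disutility(p):
        heads = 10 * (1 - p) ** 2 + 90 * p ** 2   # marked + unmarked Brier penalties
        tails = 9 * (1 - p) ** 2 + 1 * p ** 2
        return 0.5 * heads + 0.5 * tails

    best = min((i / 100000 for i in range(100001)), key=total_disutility)
    print(best, 19 / 110)              # both ~0.1727
    print(total_disutility(19 / 110))  # ~7.86
    print(total_disutility(1 / 2))     # 13.75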

On the other hand, the 1/2 answer will optimize the expected average epistemic and pragmatic utilities in the population. But do we want to do that? After all, we know from Parfit that optimizing average pragmatic utilities can be a very bad idea (as it leads to killing off of those who are below average in happiness).

Yet the 1/2 answer has an intuitive pull.

Monday, February 20, 2017

An argument that insects are not conscious

Suppose insects are conscious. There are about a billion insects per human being, if not more. So, if insects are conscious, we should be surprised to find ourselves not being insects. But if insects are not conscious, there is no surprise there. So, it seems, observing that we are not insects gives us very strong evidence that insects are not conscious.

But this just doesn’t seem to be a good argument… Perhaps the self-sampling thesis—the thesis that we should count ourselves as randomly selected from among observers—needs to be restricted to intelligent and not merely conscious observers? But isn’t that restriction ad hoc? If we're doing such restricting, maybe we should restrict even more finely, say to observers at our exact level of intelligence?

Thursday, February 16, 2017

Contraception, liturgy and self-giving

Alice has a paper due the day after Thanksgiving. She’s already gotten all the extensions she can, and she can’t get it done except by working through Thanksgiving. She is thinking of not going to the big Thanksgiving dinner that her grandfather organizes every year, even though it brings together relatives she hasn’t heard from for a long time, has much warm family fellowship, and great food. But then she has an idea: “It’s better to attend distractedly than not at all. The table is big and my laptop is small, so I can easily put my laptop beside a plate, and then I can write all the way through dinner and finish my paper. And I’m good at multitasking, so I can still have an ear out for interesting bits of conversation, and occasionally I can put a forkful of food in my mouth or make a friendly remark to someone. It would be permissible for me to skip the dinner completely, and this is better than skipping it.”

Bob has a major exam on Wednesday. It is his habit to attend Mass daily, both for the spiritual benefits and because there is an incredible organist. He could skip Tuesday Mass, but reasons much as Alice does: “If I skip Mass, I get none of the spiritual and musical benefits. I’ll just bring my tablet, sit in the back pew so the bright screen doesn’t disturb anybody, study hard and I’ll at least get some of the benefits of Mass. After all, there is nothing wrong with my skipping Tuesday Mass, and this is better.”

Alice is being obtuse about human relationships and Bob doesn’t understand the kind of participation the Mass requires. There are some activities that one should give oneself pretty completely to—or not do them at all.

What if Bob says something like this? “But I go to Mass on many days when I’ve already spent hours working hard, and I’m really exhausted, and barely able to pay any attention to what the priest says. There is nothing morally wrong with attending Mass on days like that. But today I’m still fresh, and multitasking today I can participate at least as well as singletasking on a bad day.” And Alice can say something very similar—after all, very tired people can go to Thanksgiving dinner, too.

But that’s still not an excuse. For when one goes to Thanksgiving dinner or Mass, one should give oneself to it as much as one can (within some reasonable limit of what counts as “enough”). Both Alice and Bob are going to be deliberately withholding themselves from participation. But on the days when they attended while really tired, they weren’t doing that—they were giving what they could (it would be different if Bob ran a marathon in order to be too tired to follow the Gospel reading!).

Now, consider a common response to John Paul II’s argument that contraception is wrong because it deliberately blocks the total self-giving in sex. “Granted, contraception blocks an aspect of the union as one body. But a partial union is better than no union at all, and a couple is morally permitted to refrain from union for good reasons.” But that’s like Alice’s and Bob’s initial argument. And there is a case that can be made that sex is a liturgical kind of act, akin to Thanksgiving dinner or the Mass, and that in these kinds of liturgical acts one can’t participate while blocking an aspect of one’s participation—one needs to give one’s all, or not at all. It is better not to have sex at all than to have it while blocking one’s participation.

And then there is the riposte: “But the Catholic Church says it’s permissible to have sex while infertile. And contracepted sex has in it everything that infertile sex does.” But that riposte is just like Bob’s suggestion that studying at Mass with his tablet leaves him with as much (or more!) of the function of attending Mass as he has on the days when he is really tired. Yes, that’s true, but it misses the liturgical meaning of deliberately distracting oneself with the tablet.

If it is objected that sex isn’t analogous to Thanksgiving dinner or the Mass (though I think it is), we could think about the case of Carl who is a professional movie reviewer. His wife would like to have sex with him, but he needs to watch and review a boring movie by tomorrow. So he sets up a laptop by the bed, and unites with his wife while watching the movie. Ugh! It would be better not to have sex at all.

The consent norm for sexual activity is insufficient

Consider the thesis that consent is the only norm of sexual activity. Of course, this does not imply the crazy claim that every consensual sexual act is permissible. Some consensual sexual acts violate promises, or constitute the neglect of some non-sexual responsibility (e.g., sex while driving), or just have sufficiently bad consequences for one or more people. Rather, the thesis can be taken to say that consent is the only norm of sexual activity as sexual, that it is the only distinctively sexual norm.

The thesis is still false. To see this, we will need a distinction between things that are very wrong and things that are wrong but not very wrong. Then:

  1. Every case of coitus without consent is a case of rape.
  2. Every case of rape is gravely wrong as a sexual act.
  3. There is a case of coitus which is wrong as sexual but not gravely wrong.
  4. So, there is a case of coitus which is not rape but is wrong as sexual. (2 and 3)
  5. So, there is a case of coitus which is wrong as sexual even though there is consent. (1 and 4)

(When I say that a case of coitus is wrong, I mean that at least one party responsible for the coitus is in the wrong. That party could be one of the participants in coitus, but need not be: a rapist does not actually have to participate in the act of coitus, but could instead force two other people to engage in coitus with themselves.)

I think premise 3 is very plausible. It would be quite surprising if sexual wrongness of coitus only came in grave and not-at-all varieties, with nothing in between. But I can also offer an argument for premise 3 (I’ve used this argument in a previous post which gave a similar but perhaps less clear argument) assuming that consent is the only norm of sexual activity—the target of my argument obviously can’t dispute that.

We imagine a continuum of cases of coitus, where at one extreme there clearly is no consent and at the other extreme there clearly is consent.

(For instance, it could be a set of cases where a party threatens an adverse consequence if coitus is not engaged in: at one end, the consequence is torture and at the other end it’s a minor expression of minor displeasure. Accepting coitus as an alternative to torture is not consent. Accepting coitus as an alternative to witnessing a minor expression of minor displeasure can be consent (assuming that minor displeasure is all there is; obviously, minor displeasure from a tyrant could have further adverse consequences—including torture and death).)

Assuming consent is the only norm of sexual activity, there is no sexual wrong at the consent end of the continuum and there is grave sexual wrong by (1) and (2) at the no-consent end of the continuum. Given continuous variation in cases, we would expect continuous variation in wrongdoing. So if at one end we have grave sexual wrong and at the other end no sexual wrong, somewhere in the middle there should be a case of non-grave sexual wrong, which is what premise (3) says.

Note how the enthusiastic consent alternative to the consent norm nicely escapes the argument. For the proponent of the enthusiastic consent norm can agree to (2) but say that there are some non-grave sexual wrongs. These non-grave sexual wrongs could, for instance, include some of the cases where there is consent but the consent is insufficiently enthusiastic.

Pragmatically speaking, this is a risky argument to use in teaching. The problem is that a student might try to get out of the argument by denying premise (2) which, given the rape problem on many campuses, would be very bad. On the other hand, if students have a sufficiently strong commitment to (2), this argument could have positive consequences for campus sexual culture by getting them to realize that minimally-valid consent is not enough for permissibility (even if by definition it is enough to make the act not be a case of rape).

Philosophically, there is a technical weakness in that the notion of a sexually wrong act is a bit foggy. I think one can reformulate the argument by dropping the “sexual” qualifier in the argument but specializing to cases where there is no promise breaking, there are no bad non-sexual consequences, etc. But it’s hard to explicate the “etc.”

Wednesday, February 15, 2017

Dignitary harms and wickedness

Torturing someone is gravely wrong because it causes grave harm to the victim, and the wickedness evinced in the act is typically proportional to the harm (as well as depending on many other factors).

But there are some wrongdoings which are wicked to a degree disproportionate to the harm. In fact, torture can be such a case. Suppose that Alice is caught by an evildoer who in a week will torture Alice for one second for every person who requests this by email. About a hundred thousand people make requests, and Alice gets over a day of torture. Each requester’s harm to Alice is real but may be quite small. But each requester’s deed is very wicked, disproportionately to the harm. The case is similar to a conspiracy where each conspirator contributes only a small amount of torment but collectively the conspirators cause great torture—the law would be just in holding all the conspirators guilty of the whole torture.

Here’s another way to see the disproportion. Suppose that someone is deciding whether to request torture for Alice or to steal $100 from her. Alice might actually self-interestedly prefer an extra second of torture to having $100 stolen. Nonetheless, requesting the torture seems much more wicked than stealing $100 from Alice (unless Alice is destitute).

Similarly, the evildoer could kill Alice with probability 1 − (1/2)ⁿ, where n is the number of requesters. Given sad facts about humanity, everyone might know that it is nearly certain that Alice will die, and no one requester makes any significant difference to that probability. So the harm to Alice from any one requester is pretty small, but the wickedness of making the request is great.

Another case. It is wicked to fantasize about torturing someone. And to be thought of badly is indeed a kind of harm. But if one can be sure that the fantasy stays in the mind—think, maybe, of the sad case of a dying woman who spends her last twenty minutes fantasizing about torturing Bob—one might self-interestedly prefer the fantasy to, say, a theft of $100. Hence, the harm is relatively small. Yet the wickedness in fantasizing about torture is great, in disproportion to the harm.

Yet another case. Suppose that with science-fictional technology, someone destroys my heart, while at the same time beaming into my chest a pump of titanium that is in every respect better functioning than my natural heart. I think I have been harmed in one respect: a bodily function, that of pumping blood by my heart, is no longer being fulfilled. But blood is still being pumped, and better. So overall, I may not be harmed. (I may even be benefited.) Yet it seems that to destroy someone’s heart is to do them a grave harm. I am least confident about this case. (I am confident that the deed is wrong, but not of how wrong it is.)

In all these cases, there is a dignitary harm to the victim. And even if it is self-interestedly rational for the victim to prefer this dignitary harm to a modest monetary harm, imposing the dignitary harm is much more wicked. This is puzzling.

Solution 1: Imposing the dignitary harm causes much greater harm to the wrongdoer, and that’s what makes it so much more wicked.

But that seems to get wrong who the victim is.

Solution 2: Alice and Bob are mistaken in preferring not to be robbed of $100. The dignitary harm in fact is much, much worse.

Maybe. But I am not sure. Is it really much, much worse to have ten thousand people request one’s death rather than five thousand? It seems that dignitary harm drops off with the numbers, too, and each individual harmer’s anti-dignitary contribution is small.

Solution 3: Wrongdoings are not a function of harm, but of irrationality (Kant).

I fear, though, that this has the same problem of dislocating the victim from the center of the wrong, just as Solution 1 did.

Solution 4: Dignitary harms to people additionally harm God’s extended well-being, by imposing an indignity on the imago Dei that each human being constitutes. Dignitary harms to people are dignitary harms to God, but they are either much greater when they are done to God (because God’s dignity is so much greater?) or else they are much more unjust when they are done to God (because God deserves our love so much more?).

Like Solution 1, this may seem to get wrong who the victim is. But if we see the imago Dei as something intrinsic to the person (as it will be in the case of a Thomistic theology on which all our positive properties are participations in God) rather than as an external feature, this worry is, I think, alleviated.

I am not extremely happy with Solution 4, either, but it seems like it might be the best on offer.

Monday, February 13, 2017

Let your "yes" be yes

Jesus seems to have forbidden swearing, insisting that our “yes” should be a yes, and our “no” a no and that everything else is “from the evil one” (Matthew 5:34-37). A strong reading of this would take Jesus to be forbidding all oaths. By and large, the Christian tradition has not taken that to be the correct reading. In the US, many people who accept the Bible, including presumably this text in Matthew, swear in court on the Bible.

A plausible reading is that Jesus is engaging in hyperbole to command an integrity such that there be no need for oaths. For Christians, the same norms of integrity apply to simple assertions as to sworn depositions. I will assume that this is the correct reading.

Suppose one thinks, contrary to the main line of the Christian tradition, that it is sometimes permissible to lie. Then one has to think that it is also permissible to lie under oath in precisely the same circumstances in which it would be permissible to lie without oath. But this is an implausible consequence.

Let’s say that it’s permissible to lie to save an innocent life—that’s one of the most commonly given criteria. Then, we have the consequence that it is permissible to swear to a false alibi for someone whom you know to be innocent in a capital case if you foresee that otherwise he will be convicted. This is already implausible. But it also means that, implausibly, even in a good and free society one might have good reason to keep one’s moral views secret. For if it were known that according to one’s moral views it would be permissible to provide a false alibi when one took oneself to know that the accused is innocent in a capital case, then one’s true alibi would be of little worth in such cases, too.

Second, whatever criterion one has is going to admit of borderline cases. For instance, in the case of “saving an innocent life”, at one end, there is the case where it is certain that the person will live if and only if one lies. Near the other end, the probability of survival is slightly bigger if one lies. It is implausible that one can permissibly lie just to give someone the slightest increase in probability of survival. If that were so, then it would be permissible to swear to a false alibi (assuming, as per my reading of what Jesus said, that swearing falsely is permissible whenever false assertion is) even when it is very likely that the innocent accused in the capital case will be let off, as long as it slightly increases the innocent’s chance of survival, and that can’t be right. And somewhere in between there will be borderline cases.

But now consider a borderline case of the permissibility of lying. If it is only borderline whether it is permissible to engage in a simple lie, to swear falsely in such a case would be simply wrong, not just borderline wrong. But this violates Jesus’s principle that the norms regarding simple lies are the same as the norms regarding false oaths.

It seems to me that the best reading of the situation is that:

  1. Lying under oath is always wrong.

  2. And so, lying is always wrong.

Friday, February 10, 2017

Measurement error

Let’s say that I am in the lab and I am measuring some unknown value U. My best model of the measurement process involves a random additive error E independent of U, with E having some known distribution, say a Gaussian of some particular standard deviation (perhaps specified by the measurement equipment manufacturer) centered around zero. The measurement gives the value 7.3. How should I now answer probabilistic questions like: “How likely is it that U is actually between 7.2 and 7.4?”

Here’s how this is sometimes done in practice. We know that U = 7.3 − E. Then we say that the probability that U is, say, between 7.2 and 7.4 is the same as the probability that E is between −0.1 and 0.1, and we calculate the latter probability using the known distribution of E.

But this is an un-Bayesian way of proceeding. We can see that from the fact that we never said anything about our priors regarding U, and for a Bayesian that should matter. Here’s another way to see the mistake: When I calculated the probability that U was between 7.2 and 7.4, I used the prior distribution of E. But to do that neglects data that I have received. For instance, suppose that U is the diameter of a human hair that I have placed between my digital calipers. And the calipers show 7.3 millimeters. What is the probability that the hair really has a diameter between 7.2 and 7.4 millimeters? It’s vanishingly small! That would be just an absurdly large diameter for a hair. Rather, the fact that the calipers show 7.3 millimeters shows that E is approximately equal to 7.3 millimeters. The posterior distribution of E, given background information on human hair thickness, is very different from the prior distribution.

Yet the above is what one does in practice. Can one justify that practice? Yes, in some cases. Generalize a little. Let’s say we measure the value of U to be α, and we want to know the posterior probability that U lies in some set I. This probability is:

P(U ∈ I | U + E = α) = P(α − E ∈ I | U + E = α).

Now suppose that E has a certain maximum range, say, from −δ to δ. (For instance, there is no way that a digital display with four digits can show more than 9999 or less than −9999.) And suppose that U is uniformly distributed over the region from α − δ to α + δ, i.e., its distribution over that region is perfectly flat. In that case, it’s easy to see that E and the event that U + E = α are statistically independent. Thus:

P(U ∈ I | U + E = α) = P(α − E ∈ I).

And so in this case our initial naive approach works just fine.

In the original setting, if for instance we’re completely confident that E cannot exceed 0.5 in absolute value, and our prior distribution for U is flat from 6.8 to 7.8, then the initial calculation that the probability that U is between 7.2 and 7.4 equals the prior probability that E is between −0.1 and 0.1 stands. (The counterexample then doesn’t apply, since in the counterexample we had the possibility, now ruled out, that E is really big.)
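
Here is a Monte Carlo sketch of the two cases, with made-up numbers. To capture the hair scenario, the error here is mostly small but has a rare gross component; the naive calculation works under the flat prior but fails badly under the "hair" prior.

    import random

    def sample_error():   # mostly N(0, 0.05), with a 1% chance of a gross error
        return random.gauss(0, 5) if random.random() < 0.01 else random.gauss(0, 0.05)

    def posterior_mean_U(prior_sampler, n=1_000_000, alpha=7.3, tol=0.05):
        kept = []
        for _ in range(n):
            u = prior_sampler()
            if abs(u + sample_error() - alpha) < tol:   # condition on U + E ≈ 7.3
                kept.append(u)
        return sum(kept) / len(kept)

    # Flat prior over [6.8, 7.8]: posterior mean ~7.3; the naive approach is fine.
    print(posterior_mean_U(lambda: random.uniform(6.8, 7.8)))

    # "Hair" prior near 0.07 mm: posterior mean stays ~0.07; the reading is
    # attributed to a gross error, so the prior of E is a bad stand-in for its posterior.
    print(posterior_mean_U(lambda: random.gauss(0.07, 0.01)))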

The original un-Bayesian way of proceeding basically pretended that U was (per impossibile) uniformly distributed over the whole real line. When U is close to uniformly distributed over a large salient portion of the real line, the original way kind of works.

The general point goes something like this: As long as the value of E is approximately independent of whether U + E = α, we can approximate the posterior distribution of E by its prior and all is well. In the case of the hair measurement, E was not approximately independent of whether U + E = 7.3: if U + E = 7.3, then very likely E is enormous, whereas unconditionally E is very unlikely to be enormous.

This is no doubt stuff well-known to statisticians, but I’m not a statistician, and it’s clarified some things for me.

The naive un-Bayesian calculation I gave at the beginning is precisely the one that I used in my previous post when adjusting for errors in the evaluation of evidence. But an appropriate flatness of prior distribution assumption can rescue the calculations in that post.

Thursday, February 9, 2017

Conciliationism and another toy model

Conciliationism holds that in cases of peer disagreement the two peers should move to a credence somewhere between their individual credences. In a recent post I presented a toy model of error of reasoning on which conciliationism was in general false. In this post, I will present another toy model with the same property.

Bayesian evidence is additive when instead of probability p one works with log-odds λ(p) = log(p/(1 − p)). From that point of view, it is natural to model error in the evaluation of the force of evidence as the addition of a normally-distributed term with mean zero to the log-odds.

Suppose now that Alice and Bob evaluate their first-order evidence, which they know they have in common, and come to the individual conclusions that the probability of some Q is α and β respectively. Moreover, both Alice and Bob have the above additive model of their own error-proneness in the evaluation of first-order evidence, and in fact they assign the same standard deviation σ to the normal distribution. Finally, we assume that Alice and Bob know that their errors are independent.

Alice and Bob are good Bayesians. They will next apply a discount for their errors to their first-order estimates. You might think: “No discount needed. After all, the error could just as well be negative as well as positive, and the positive and negative possibilities cancel out, leaving a mean error of zero.” That’s mistaken, because while the normal distribution is symmetric, what we are interested in is not the expected error in the log-odds, which is indeed zero, but the mean error in the probabilities. And once one transforms back from log-odds to probabilities, the normal distribution becomes asymmetric. A couple of weeks back, I worked out some formulas which can be numerically integrated with Derive.

First-order probability   σ      Second-order probability
0.80                      1.00   0.76
0.85                      1.00   0.81
0.90                      1.00   0.87
0.95                      1.00   0.93
0.80                      0.71   0.78
0.85                      0.71   0.83
0.90                      0.71   0.88
0.95                      0.71   0.94

So, for instance, if Alice has a first-order estimate of 0.90 and Bob has a first-order estimate of 0.95, and they both have σ = 1 in their error models, they will discount to 0.87 and 0.93.
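
Here is a sketch of how such numbers can be computed, on the assumption (my reading of the model above) that the discounted probability is the mean of the logistic transform of a normal distribution centred at the first-order log-odds; it reproduces the table up to rounding.

    from math import exp, log, sqrt

    def logistic(x):
        return 1.0 / (1.0 + exp(-x))

    def discount(p, sigma, steps=2001, width=8.0):
        lam = log(p / (1 - p))            # log-odds of the first-order estimate
        total = norm = 0.0
        for i in range(steps):            # crude quadrature over the normal error
            z = -width + 2 * width * i / (steps - 1)
            w = exp(-z * z / 2)           # unnormalized N(0,1) weight
            total += w * logistic(lam + sigma * z)
            norm += w
        return total / norm

    for p in (0.80, 0.85, 0.90, 0.95):
        print(p, round(discount(p, 1.0), 2), round(discount(p, 1 / sqrt(2)), 2))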

Let the discounted credences, after evaluation of the second-order evidence, be α* and β* (the value depends on σ).

Very good. Now, Alice and Bob get together and aggregate their final credences. Let’s suppose they do so completely symmetrically, having all information in common. Here’s what they will do. The correct log-odds for Q, based on the correct evaluation of the evidence, equals Alice’s pre-discount log-odds log(α/(1 − α)) plus an unknown error term with mean zero and standard deviation σ, as well as equalling Bob’s pre-discount log-odds log(β/(1 − β)) plus an unknown error term with mean zero and standard deviation σ.

Now, there is a statistical technique we learn in grade school which takes a number of measurements of an unknown quantity, with the same normally distributed error, and which returns a measurement with a smaller normally distributed error. The technique is known as the arithmetic mean. The standard deviation of the error in the resulting averaged data point is σ/√n, where n is the number of samples. So, Alice and Bob apply this technique. They back-calculate α and β from their final individual credences α* and β*, they then calculate the log-odds, average, and go back to probabilities. And then they model the fact that there is still a normally-distributed error term, albeit one with standard deviation σ/√2, so they adjust for that to get a final credence α** = β**.

So what do we get? Do we get conciliationism, so that their aggregated credence α** = β** is in between their individual credences? Sometimes, of course, we do. But not always.

Observe first what happens if α* = β*. “But then there is no disagreement and nothing to conciliate!” True, but there is still data to aggregate. If α* = β*, then the error discount will be smaller by a factor of the square root of two. In fact, the table above shows what will happen, because (not by coincidence) 0.71 is approximately the reciprocal of the square root of two. Suppose σ = 1. If α* = β* = 0.81, this came from pre-correction values α = β = 0.85. When corrected with the smaller normal error of 0.71, we now get a corrected value α** = β** = 0.83. In other words, aggregating the data from one another, Alice and Bob raise their credence in Q from 0.81 to 0.83.

But all the formulas here are quite continuous. So if α* = 0.8099 and β* = 0.8101, the aggregation will still yield a final credence of approximately 0.83 (I am not bothering with the calculation at this point). So, when conciliating 0.8099 and 0.8101, you get a final credence that is higher than either one. Conciliationism is thus false.
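
Continuing the sketch above (and reusing discount()), aggregation can be mimicked by inverting the discount, averaging log-odds, and re-discounting with σ/√2; on my assumed reading this reproduces the 0.81 → 0.83 example and shows the same behavior for 0.8099 and 0.8101.

    def undiscount(p_star, sigma, lo=1e-6, hi=1 - 1e-6):
        for _ in range(60):               # bisection, since discount() is increasing
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if discount(mid, sigma) < p_star else (lo, mid)
        return (lo + hi) / 2

    def aggregate(a_star, b_star, sigma=1.0):
        a, b = undiscount(a_star, sigma), undiscount(b_star, sigma)
        lam = (log(a / (1 - a)) + log(b / (1 - b))) / 2   # average the log-odds
        avg = 1 / (1 + exp(-lam))
        return discount(avg, sigma / sqrt(2))             # residual error: sigma/sqrt(2)

    print(aggregate(0.81, 0.81))        # ~0.83, as in the worked example
    print(aggregate(0.8099, 0.8101))    # still ~0.83: above both inputs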

The intuition here is this. When the two credences are reasonably close, the amount by which averaging reduces error overcomes the downward movement in the higher credence.

Of course, there will also be cases where aggregation of data does generate something in between the two data points. I conjecture that on this toy model, as in my previous one, this will be the case whenever the two credences are on opposite sides of 1/2.

Wednesday, February 8, 2017

Main academic task right now

My main academic task right now is revising my Infinity, Causation and Paradox book manuscript in light of referee comments. One of the tasks a referee set me is including more diagrams (the original only had one). This is a fun break from writing. I'm doing some of the drawings with TikZ right in the LaTeX file, and some I'm drawing with Inkscape.

Here's Smullyan's rod with exponentially decreasing density (to ensure finite total force). It's a rigid rod suspended over an infinite plane, and it can't fall down because then it would hit the plane but on the other hand it's never in contact with the plane. Art it's not.


Peer disagreement, conciliationism and a toy model

Let’s suppose that Alice and Bob are interested in the truth of some proposition Q. They both assign a prior probability of 1/2 to Q, and all the first-order evidence regarding Q is shared between them. They evaluate this first-order evidence and come up with respective posteriors α and β for Q in light of the evidence.

Further, Alice and Bob have background information about how their minds work. They each have a random chance of 1/2 of evaluating the evidence exactly correctly and a random chance of 1/2 that a random bias will result in their evaluation being completely unrelated to the evidence. In the case of that random bias, their output evaluation is random, uniformly distributed over the interval between 0 and 1. Moreover, Alice and Bob’s errors are independent of what the other person thinks. Finally, Alice and Bob’s prior as to what the correct evaluation of the evidence will show is uniformly distributed between 0 and 1.

Given that each now has this further background information about their error-proneness, Alice and Bob readjust their posteriors for Q. Alice reasons thus: the probability that my first-order evaluation of α was due to the random bias is 1/2. If I knew that the random bias happened, my credence in Q would be 1/2; if I knew that the random bias did not happen, my credence in Q would be α. Not knowing either way, my credence in Q should be:

  1. α* = (1/2)(1/2) + (1/2)α = (1/2)(1/2 + α).

Similarly, Bob reasons that his credence in Q should be:

  2. β* = (1/2)(1/2 + β).

In other words, upon evaluating the higher-order evidence, both of them shift their credences closer to 1/2, unless they were at 1/2.

Next, Alice and Bob pool their data. Here I will assume an equal weight view of how the data pooling works. There are now two possibilities.

First, suppose Alice and Bob notice that their credences in Q are the same, i.e., α* = β*. They know this happens just in case α = β by (1) and (2). Then they do a little Bayesian calculation: there is a 1/4 prior that neither was biased, in which case the equality of credences is certain; there is a 3/4 prior that at least one was biased, in which case the credences would almost certainly be unequal (the probability that they’d both get the same erroneous result is zero given the uniform distribution of errors); so, the posterior that they are both correct is 1 (or 1 minus an infinitesimal). In that case, they will adjust their credences back to α and β (which are equal). This is the case of peer agreement.

Notice that peer agreement results in an adjustment of credence away from 1/2 (i.e., α* is closer to 1/2 than α is, unless of course α = 1/2).

Second, suppose Alice and Bob notice that their credences in Q are different, i.e., α* ≠ β*. By (1) and (2), it follows that their first-order evaluations α and β were also different from one another. Now they reason as follows. Before they learned that their evaluations were different, there were four possibilities:

  • EE: Alice erred and Bob erred
  • EN: Alice erred but Bob did not err
  • NE: Alice did not err but Bob erred
  • NN: no error by either.

Each of these had equal probability 1/4. Upon learning that their evaluations were different, the last option was ruled out. Moreover, given the various uniform distribution assumptions, the exact values of the errors do not affect the probabilities of which possibility was the case. Thus, the EE, EN and NE options remain equally likely, but now have probability 1/3. If they knew they were in EE, then their credence should be 1/2—they have received no data. If they knew they were in EN, their credence should be β, since Bob’s evaluation of the evidence would be correct. If they knew they were in NE, their credence should be α, since Alice’s evaluation would be correct. But they don’t know which is the case, and the three cases are equally likely, so their new credence is:

  3. α** = β** = (1/3)(1/2 + α + β) = (1/3)(2α* + 2β* − 1/2).

(They can calculate α and β from α* and β*, respectively.)

Now here’s the first interesting thing. In this model, the “split the difference” account of peer disagreement is provably wrong. Splitting the difference between α* and β* would result in (1/2)(α* + β*). It is easy to see that the only case where (3) generates the same answer as splitting the difference is when α* + β* = 1, i.e., when the credences of Alice and Bob prior to aggregation were equidistant from 1/2, in which case (3) says that they should go to 1/2.

And here is a second interesting thing. Suppose that α* < β*. Standard conciliationist accounts of peer disagreement (of which “split the difference” is an example) say that Alice should raise her credence and Bob should lower his. Does that follow from (3)? The answer is: sometimes. Here are some cases:

  • α* = 0.40, β* = 0.55, α** = β** = 0.47
  • α* = 0.55, β* = 0.65, α** = β** = 0.63
  • α* = 0.60, β* = 0.65, α** = β** = 0.67
  • α* = 0.60, β* = 0.70, α** = β** = 0.70.

Thus just by plugging some numbers in, we can find some conciliationist cases where Alice and Bob should meet in between, but we can also find a case (0.60 and 0.70) where Bob should stand pat, and a case (0.60 and 0.65) where both should raise their credence.
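
The list above is just formula (3) evaluated at those inputs; here is a quick check:

    def agg(a_star, b_star):              # formula (3)
        return (2 * a_star + 2 * b_star - 0.5) / 3

    for a_star, b_star in [(0.40, 0.55), (0.55, 0.65), (0.60, 0.65), (0.60, 0.70)]:
        print(a_star, b_star, round(agg(a_star, b_star), 2))
    # prints 0.47, 0.63, 0.67, 0.70, matching the cases above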

When playing with numbers, remember that by (1) and (2), the possible range for α* and β* is between 1/4 and 3/4 (since the possible range for α and β is from 0 to 1).

What can we prove? Well, let's first consider the case where α* < 1/2 < β*. Then it's easy to check that Bob needs to lower his credence and Alice needs to raise hers. That's a conciliationist result.

But what if both credences are on the same side of 1/2? Let’s say 1/2 < α* < β*. Then it turns out that:

  1. Alice will always raise her credence

  2. Bob will lower his credence if and only if β* > 2α* − 1/2

  3. Bob will raise his credence if and only if β* < 2α* − 1/2.

In other words, Bob will lower his credence if his credence is far enough away from Alice’s. But if it’s moderately close to Alice’s, both Alice and Bob will raise their credences.

While the model I am working with is very artificial, this last result is pretty intuitive: if both of them have credences that are fairly close to each other, this supports the idea that at least one of them is right, which in turn undoes some of the effect of the α → α* and β → β* transformations in light of their data on their own unreliability.

So what do we learn about peer disagreement from this model? What we learn is that things are pretty complicated, too complicated to encompass in a simple non-mathematical formulation. Splitting the difference is definitely not the way to go in general. Neither is any conciliationism that makes the two credences move towards their mutual mean.

Of course, all this is under some implausible uniform distribution and independence assumptions, and a pretty nasty unreliability assumption that half the time we evaluate evidence biasedly. I have pretty strong intuitions that a lot of what I said depends on these assumptions. For instance, suppose that the random bias results in a uniform distribution of posterior on the interval 0 to 1, but one’s prior probability distribution for one’s evaluation of the evidence is not uniform but drops off near 0 and 1 (one doesn’t think it likely that the evidence will establish or abolish Q with certainty). Then if α (say) is close to 0 or 1, that’s evidence for bias, and a more complicated adjustment will be needed than that given by (1).

So things are even more complicated.

Tuesday, February 7, 2017

Counterparts and singletons

Here is an interesting problem for Lewis. Lewis says that sets are necessary beings, and hence count as existing in all worlds. Very plausibly then:

  1. If A is a set and w₁ and w₂ are worlds, then A in w₂ is a counterpart of A in w₁.

After all, if identity isn’t good enough for being a counterpart, nothing is. Note that (1) does not say that A at w₂ is the only counterpart of A in w₁. To handle some identical twin scenarios, Lewis may need to allow a world to have more than one counterpart of an object.

Let α be Aristotle. Let A = {α} be the singleton of α. Lewis is now committed to the truth of:

  2. Possibly α is not a member of A.

For Lewis’s criterion for whether F(a, b) is possible is whether there is a world w with counterparts a′ and b′ of a and b respectively such that F(a′,b′) holds at w. Let w be a non-actual world where there is a counterpart β of Aristotle. Since individuals are world-bound, β ≠ α. Moreover set membership is necessary, so:

  3. β is not a member of A at w.

Since β is a counterpart of α and A is a counterpart of A by (1), it follows that (2) is true. But (2) seems clearly wrong: it is impossible for Aristotle not to be a member of A.

Here’s what seems to me to be the best way out for Lewis: Require pairwise counterparts rather than individual counterparts (in fact, I vaguely remember that Lewis may do that somewhere) for possibility claims involving two objects. Thus that β is not a member of A and β and A are individually counterparts of α and A isn’t enough to make it be that possibly α is not a member of A. One would need β and A to be pairwise counterparts of α and A. But perhaps they’re not. Perhaps, rather, it is β and B = {β} that are pairwise counterparts of α and A. However, this greatly complicates the counterpart relation as well as Lewis’s identification of properties with sets.

Monday, February 6, 2017

Are there unicorns here?

Multiverse theories like David Lewis’s or Donald Turner’s populate reality with a multitude of universes containing strange things like unicorns and witches riding broomsticks. One might think that positing unicorns and witches makes a theory untenable, but the theorists try to do justice to common sense by saying that the unicorns and witches aren’t here. Each universe occupies its own spacetime, and the different spacetimes have no locations in common.

But why take the different universes to have no locations in common? Surely, just as a unicorn can have the same charge or color as I, it can have the same location as I. From the fact that a unicorn can have the same charge or color as I, we infer in a Lewisian setting that some unicorn does have the same charge or color as I (and likewise in Turner’s, with some plausible auxiliary assumptions about values). Well, by the same token, from the fact that a unicorn can have the same location as I, we should be able to infer that some unicorn does have the same location as I.

Not so, says Lewis. Counterpart theory holds for locations, but not for charges and colors. What makes it true that a unicorn can have the same charge as I now have is that some unicorn does have the same charge as I. But what makes it true that a unicorn can have the same location as I now have is that some unicorn has a counterpart of my location in a different spacetime.

But what justifies this asymmetry between the properties of charge and location? The asymmetry seems to require clauses in Lewis’s modal semantics that work differently for different properties. It seems there are properties—say, being green—whose possible possession is grounded in something’s having the property, and there are properties—say, being at this location—whose possible possession is grounded in something’s having a counterpart of the property.

Specifying in a non-ad hoc way which properties are which rather complicates the system. Moreover, it leads to this oddity. Lewis thinks abstract objects exist in all worlds. So, he has to say that being at this location exists in all worlds. And yet the counterpart of being at this location in another world is a different property, even though this exact property does exist at that world.

There is a solution for Lewis. Lewis is committed to counterpart theory holding for objects. It is reasonable for him thus to take counterpart theory also to hold for properties defined de re in terms of particular non-abstract objects. Thus, what makes it true that a unicorn could have had the property of being a mount of Socrates is not that some unicorn in some universe has this property—for no unicorn in our universe has that property, and Socrates according to Lewis only exists in our world—but that some unicorn has a counterpart to this property, which counterpart property is the property of being a mount of S where S is a counterpart of Socrates.

If Lewis can maintain that location properties are defined de re by relation to non-abstract objects, then he has a way out of the objection. Two kinds of theories allow a Lewisian to do this. First, the Lewisian can be a substantivalist who thinks that points or regions of space are non-abstract. Then being here will consist in being locationally related to some point or region L, and Lewis can take counterpart theory to apply to points or regions L. Second, the Lewisian can be a relationalist and say that location is defined by relations to other physical objects, in such a way that if all the objects were numerically different from what they are, nothing could be in the same place it is, and counterpart theory is applied to physical objects by Lewis.

What Lewis cannot do, however, is take a view of location that either takes location to be a relation to abstract objects—say, sets of points in a mathematical manifold—or that takes location to simply be a non-relational determinable like charge or rest mass.

In particular, multiverse theorists like Lewis and Turner are committed to treating location as different from other properties. Anecdotally, most philosophers do treat location like that. But for those of us who are attracted to the idea that location is just another determinable, this is a real cost.

Sunday, February 5, 2017

Cloning and parental permission

  1. If x is a biological parent of y and z is y’s full sibling, then x is a biological parent of z.

  2. No one should be made a biological parent of someone without their permission.

  3. One’s clone is one’s full sibling.

  4. So, cloning oneself makes one’s biological parents be the biological parents of one’s clone. (By 1 and 3)

  5. So, one shouldn’t clone oneself without one’s biological parents’ permission. (By 2 and 4)

(I also think that one shouldn’t clone oneself, period, but that’s a different line of thought.)

Saturday, February 4, 2017

Substantivalism about space is a kind of relationalism

It’s just occurred to me that substantivalist views of space or spacetime are actually relationalist: they define location by relations between objects. It’s just that they introduce one or more additional objects—say, points or space or spacetime—to fill out the theory. An entity’s being located is then a matter of the entity standing in a certain relation to one or more of these additional objects.

Moreover, a substantivalist theory couched in terms of points may have to be even closer to relationalism, in that it may need to say that what makes the points be points of the kind of space or spacetime they are points of are their mutual spatial or spatiotemporal relations.

What has a hope of being a more radical alternative to relationalist theories are property theories, on which being in a location is a property very much like having a certain electric charge—the only difference being that the location properties have a three- or four-dimensional structure while the charge properties have a one-dimensional structure. Of course, having properties will be a matter of relation on heavy-weight Platonism and on trope theories, but these relations are not special spatial or spatiotemporal relations, but just general-purpose relations like instantiation or inherence.

Of course, maybe we don’t want an alternative to relationalism because we like relationalism.

Friday, February 3, 2017

Evidentialism and higher-order belief

It seems epistemically vicious to induce or maintain a belief for which one has insufficient evidence.

But suppose that my evidence supports a quite low degree of confidence about (a) whether I have or will have any higher-order beliefs, (b) the reliability of my introspection into higher-order beliefs, and (c) whether I am capable of self-inducing a belief. I now try to self-induce a belief that I have a higher-order belief, reasoning: either I’ll succeed or I’ll fail in the self-induction. If I succeed, I will gain a true belief—for then I will have a higher-order belief. If I fail, no harm done. So I try, and I succeed.

Nothing epistemically vicious has been done, even though I self-induced a belief for which I had insufficient evidence.

Given the low degree of confidence that my evidence supports regarding the reliability of introspection into higher-order beliefs, once I have gained the belief I still, on balance, have insufficient evidence for it. But it doesn’t seem irrational to try to maintain the belief, on the grounds that one can only successfully maintain it if one has it, and if one has it, it’s true. And so I try to maintain the belief, and I succeed. So I maintain the belief despite continuing insufficient evidence, and yet I am rational.

Here’s a reverse case. Let’s say that I find myself with very strong evidence that I do not have and will never have any higher-order beliefs. It would be irrational to try to get myself to believe this proposition on this evidence.

So perhaps we should tie rationality not to evidence for a belief, but to evidence for the material conditional: if I have the belief, it is true?
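
To make the proposal a little more explicit (the “Bel” operator is my shorthand): the suggestion is that inducing or maintaining a belief that $q$ is rationally permissible when one’s evidence supports $\mathrm{Bel}(q) \rightarrow q$, even if it does not support $q$. In the case above, with $q$ = “I have a higher-order belief”, $\mathrm{Bel}(q)$ is itself a higher-order belief, so $\mathrm{Bel}(q) \rightarrow q$ holds as a matter of logic and enjoys maximal evidential support however weak my evidence for $q$. In the reverse case, with $q'$ = “I do not have and will never have any higher-order beliefs”, $\mathrm{Bel}(q')$ entails $\neg q'$, so on the supposition that I gain the belief the conditional is guaranteed false, matching the verdict that self-inducing it would be irrational.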

Cf. this about assertion.

Two kinds of parthood?

I want to explore the thesis that every plurality has a fusion. Suppose that we live in a non-gunky materialist world where everything bottoms out in particles, with all particles being simple. Assume:

  1. A fusion cannot gain or lose parts.

  2. A fusion continues to exist if all its simple parts do.

  3. Parthood is transitive.

  4. I can gain and lose simple particles.

Now let F be the fusion of me, who, I suppose, am made of multiple particles, with some particle P1 outside of me. Suppose now that the following happens: I, all my particles, and P1 continue to exist, but a new particle P2, distinct from P1, additionally comes to be a part of me. Then by (2), F continues to exist, since all of F’s simple parts do. By (1), I continue to be a part of F. By (3), P2 will be a part of F. But that violates (1).
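
To spell out the contradiction, write $x \le_t y$ for “x is a part of y at time t” (the relation symbol and times are my notation), with $M$ for me, $t_0$ before and $t_1$ after the acquisition of P2:

$$F \text{ exists at } t_1 \quad \text{by (2), since all of } F\text{'s simple parts at } t_0 \text{ survive}$$

$$M \le_{t_1} F \quad \text{by (1), since } F \text{ cannot lose parts}$$

$$P_2 \le_{t_1} M \text{ and } M \le_{t_1} F, \text{ so } P_2 \le_{t_1} F \quad \text{by (3)}$$

$$P_2 \not\le_{t_0} F, \text{ so } F \text{ has gained a part} \quad \text{contradicting (1)}$$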

(This is not a new argument—I vaguely remember seeing something like it.)

Maybe if we accept the universality of fusions, then the sense of “part” that goes along with fusions—the sense of part in (1), (2) and (3)—is different from the ordinary sense of “part”, as when I say that my kidneys are a part of me. Let’s talk of these as f-parts and o-parts. If we do that, then we can block the argument: P2 comes to be an o-part of me, but one cannot infer that P2 comes to be an f-part of F. For while f-parthood is transitive, and maybe o-parthood is too, an o-part of an f-part need not be an f-part.

That doesn’t quite solve the problem. Let’s keep on elaborating the case. Suppose that I am a material object, and I eventually exchange all my particles, but these particles continue to exist outside of me. Then by (2), F continues to exist. Moreover, by (1), I continue to be an f-part of F. But interestingly, none of my particles are f-parts of F: for the particles I now have weren’t parts of F, since they were neither o- nor f-parts of me initially, and fusions don’t gain parts. Now suppose one of my particles were an f-part of me. Then by transitivity of f-parthood, that particle would be an f-part of F, which I argued it’s not. So none of my particles is an f-part of me.
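
In the same notation (again mine), now reading $\le_t$ as f-parthood: let $r_1, \dots, r_m$ be my current particles. Each satisfied $r_i \not\le_{t_0} F$, and by (1) $F$ gains no parts, so $r_i \not\le_{t_1} F$. But if some $r_i$ were an f-part of me, then from $r_i \le_{t_1} M$ and $M \le_{t_1} F$, (3) would give $r_i \le_{t_1} F$, a contradiction. So no $r_i$ is an f-part of me.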

This is very weird: I’m made of particles, but no particle is an f-part of me. It seems I am f-simple. (There are some alternatives, but they are also very weird.) But presumably if this is true after the exchange of particles, it’s true before—my f-simplicity status shouldn’t change over the course of my life. So I’m always f-simple, even though I have many proper o-parts, say my particles.

It’s now looking like f-parthood is very different from o-parthood, and I wonder if it’s a kind of parthood at all.

Thursday, February 2, 2017

Characterizing quantifiers by rules of inference

One standard characterization of quantifiers is that they are bits of speech that follow certain rules of inference (universal instantiation, existential generalization, and so on). This characterization is surely incorrect.

Let p be any complicated logical truth. Consider the symbols ∃* and ∀* such that ∃*xF and ∀*xF are abbreviations for p ∧ ∃xF and p ∧ ∀xF. Then ∃* and ∀* satisfy exactly the same rules of inference as ∃ and ∀, but they are not quantifiers. The sentence ∃*xF expresses a conjunctive proposition rather than a quantified one.
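
A quick check that the rules really do carry over, using the fact that $\vdash p$ because p is a logical truth:

$$\exists^* x\,F := p \wedge \exists x\,F \qquad\qquad \forall^* x\,F := p \wedge \forall x\,F$$

Existential generalization: from $F(a)$, infer $\exists x\,F$, and since $\vdash p$, conjoin to get $p \wedge \exists x\,F$, i.e., $\exists^* x\,F$. Universal instantiation: from $\forall^* x\,F$, detach the second conjunct $\forall x\,F$ and instantiate to $F(a)$. The other standard rules are verified in the same way.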

In other words, the characterization of quantifiers by logical rules misses the hyperintensionality at issue. The same is true of the characterization of any other logical connective by logical rules.

Rollerblading three and eight miles

Suppose I intend to rollerblade eight miles. And I succeed. Then I also rollerbladed three miles. But I need not have intended to rollerblade three miles, though of course my arithmetic is good enough that if asked “Will you also go three miles?” I would have answered affirmatively. Suppose that rollerblading three miles were intrinsically wrong. Then I couldn’t excuse myself by invoking the Principle of Double Effect, saying “I intended eight miles, not three.” Rollerblading three miles wasn’t intended. But it also wasn’t a side-effect like Double Effect talks about.

But isn’t rollerblading three miles a means to rollerblading eight miles? If it is, it isn’t a causal means (unless I have to rollerblade three miles before I’m allowed to do eight). Maybe it’s a constitutive means: rollerblading three miles is partly constitutive of rollerblading eight miles. But even that’s not quite right. There are many instances of rollerblading three miles within rollerblading eight miles: the first three miles, the last three miles, the middle three miles, and so on, perhaps even ad infinitum. One could make a case that rollerblading the first (or last or middle) three miles is a constitutive means to rollerblading eight. But rollerblading three miles is a disjunctive event: it is rollerblading the first three or the middle three or the last three or …. And while this disjunctive event has to happen for me to rollerblade eight miles, it isn’t a constitutive means to rollerblading eight miles.
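
The “perhaps even ad infinitum” can be made precise (the parametrization is mine): if the route is parametrized by distance $s \in [0,8]$, then the three-mile stretches are exactly the intervals $[a, a+3]$ for $a \in [0,5]$, so there are continuum-many of them, and the disjunction has uncountably many disjuncts.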

So it looks like rollerblading three miles is neither a causal nor a constitutive means to rollerblading eight miles. This is another consideration in favor of my thesis that the Principle of Double Effect must go beyond the concept of means to that of accomplishment. For I definitely accomplish rollerblading three miles (in fact, multiple times) in rollerblading eight miles. Here’s a quick test for this: Suppose that after three miles I have to stop. Then I would say: “I aimed for eight miles but all I managed to accomplish was three.” But if I didn’t stop after three, I would surely still have accomplished three.

Wednesday, February 1, 2017

Fingers and other alleged body parts

Squeeze your fingers around something hard. It feels like you’re making an effort with your fingers. But you’re making an effort with muscles that are in your forearm rather than in your fingers—fingers have no muscles inside them.

Now, if I thought that bodies have proper parts, I would be inclined to think that my body’s parts are items delineated by natural boundaries, say, functional things like the heart, lungs and fingers, rather than arbitrary things like the fusion of my nose with my toes, or even my lower half. But when we think about candidates for functional parts of the human body, it becomes really hard to see where the lines are to be drawn.

Fingers, for instance, don’t make it in. A typical finger has three segments, but the muscles to move these segments are, as we saw, far away from the finger. What is included in the finger, assuming it’s a real object? Presumably the tendons that move the segments had better be included. But these tendons extend through the wrist to the muscles. Looking at anatomical pictures online, they are continuous: they don’t have any special boundary at the base of the finger. Moreover, blood vessels would seem to have to form a part of the finger, but they too do not start at the base of the finger.

Perhaps the individual bones of the finger are naturally delineated parts? But bones only have delineated boundaries when dead. For instance, living bones have a nutrient artery and vein going into them, and again based on what I can see online (I know shockingly little about anatomy—until less than a year ago, I didn’t even know that fingers have no muscles in them), it doesn’t look like there is any special break-off point where the vessels enter the bone.

Perhaps there are some things that have delineated boundaries. Maybe cells do. Maybe the whole interconnected circulatory system does. Maybe elementary particles qualify, too. But once we see that what are intuitively the paradigmatic parts of the body—things like fingers—are not in the ontology, we gain very little benefit vis-à-vis common sense by insisting that we do have proper parts, but that they are things that require science to find. It seems better—because simpler—to say that in the correct ontology the body is a continuous simple thing with distributional properties (“pink-here and white-there”). We can then talk of the body’s systems: the circulatory system, the neural system, ten finger systems, etc. But these systems are not material parts. We can’t say where they begin and end. Rather, they are abstractions from the body’s modes of proper function: circulating, nerve-signaling, digital manipulating. We can talk about the rough locations of the systems by talking of where the properties that are central to the explanation of the system’s distinctive functioning lie.