Probability

O'Hear: Chap. 7

Review/Preview

Probability arises

on the outskirts of scientific respectability

explanations & predictions in the social sciences

weak probabilistic generalizations

indeterminism

due to free will?

indeterminacy merely epistemic

not calculable by us

due to complicated determinants

not to any actual or metaphysical indeterminacy in our affairs

as opposed to the hard sciences:

strict universal laws

determinism

quantum indeterminacy and probability invade the inner sanctum: the most basic
laws are probabilistic

quantum indeterminacy not inconsistent with macro determinism

in fact:

most accurate clocks are based on quantum processes;

laws of chemistry remain deterministic despite their indeterminate underpinnings

and so on . . . on up to biology, psychology, and sociology

in theory: a bit of a mystery

Schrödinger's cat puzzle is one expression

improbabilities multiply down to impossibilities (near enough)

Questions about probability

What is the predictive force of probabilistic explanations?

What is probability itself?

Probabilistic Explanation

Example

Explanandum event: Jones, a heavy smoker, dies of lung cancer at 55

Explanation

L: Smoking tends to cause lung cancer.

C: Jones was a lifelong smoker.

:.(p) E: Jones dies of lung cancer at 55.

But consider Smith, also a lifelong smoker, who dies of causes unrelated to
smoking at age 95

since L was merely a probable generalization

it can allow a few exceptions, even many exceptions

which may seem good: the law is hardy

but the silver lining has a dark cloud: prediction is iffy

probabilistic laws' resistance to falsification is a drawback: too much resistance
to falsification means

lack of empirical content: a statement that's equally consistent with any
way things might turn out

tells you nothing about how things will turn out

e.g., the universal weather forecast: partly cloudy, chance of rain

Moral of the Jones/Smith Story According to O'Hear

Probabilistic explanations are ill-conceived as leading to predictions of particular
events

no rule of detachment: the lottery paradox

compare the Copenhagen interpretation of quantum probabilities

apply to the behavior of the average nucleus not directly to the
behavior of any actual particular nucleus

similarly L applies to the average smoker

the average smoker is 73% male and has 1.7 children, etc.

not directly to any actual particular smoker
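The "no rule of detachment" point, via the lottery paradox, can be put numerically (an illustrative sketch with an assumed 1,000-ticket lottery, not an example from O'Hear):

```python
# Lottery paradox: a fair 1,000-ticket lottery with exactly one winner.
n = 1000
p_lose = 1 - 1 / n  # each particular ticket loses with probability .999

# A detachment rule like "accept h outright whenever p(h) > .99"
# would license accepting "ticket i loses" for every single i ...
assert p_lose > 0.99

# ... and conjoining those acceptances yields "no ticket wins" -- which is
# certainly false, and improbable even treating the tickets as independent:
p_all_lose = p_lose ** n
print(round(p_all_lose, 2))  # about 0.37
```

Hence high probability alone never licenses detaching a categorical prediction about a particular case.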

Consequent worries about predictivity & falsifiability

prediction worry: no rule of detachment seems to mean no direct predictions
of particular events

falsifiability worry: no number of improbable outcomes can conclusively
falsify, e.g., the fair die hypothesis

L: 5/6 probability the next roll will not be a 1.

C: The die is rolled.

E: It won't come up 1.

If it does, still this might just be that 1-in-6 case.

Try, try again . . . how is this availing?

L: 35/36 probability two consecutive rolls will not both be 1s

C: The die is rolled twice.

E: It won't come up 1 both times.

If it does come up 1 twice . . . still this might just be that 1/36 case.

In practice we do regard it as availing

after 10 consecutive 1s

we would be thinking it highly likely that it's not a fair die

Question: What warrants our so thinking?
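The warrant can be made quantitative (a sketch; only the calculation is claimed, not its interpretation, which the next section takes up):

```python
# Probability of 10 consecutive 1s on the hypothesis that the die is fair:
p_run = (1 / 6) ** 10
print(p_run)  # about 1.65e-08: under two chances in a hundred million

# The run doesn't deductively falsify "the die is fair," but an outcome
# this improbable under the hypothesis is strong grounds for rejecting it.
```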

Bernoulli's Theorem: Law of Large Numbers

Basics of probability: refresher

probability is expressed as a number between 0 and 1

an event with probability 1 is certain to occur

an event with probability 0 is certain not to occur

an event with probability .5 has a 1/2 chance of occurring.

Bernoulli's Theorem

any sufficiently large sample drawn from a parent population

is apt to match the parent population in its distribution of characteristics

assuming an unbiased sampling, i.e., "that any one sample of a given size
is as likely to be selected as any other"
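A quick simulation illustrates the theorem (a sketch assuming a fair coin as the parent population and a seeded PRNG standing in for unbiased sampling):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def sample_freq(n):
    """Frequency of heads in n independent draws from a fair-coin population."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, sample_freq(n))
# Larger samples are apt to lie ever closer to the parent value .5,
# which is just what Bernoulli's theorem leads us to expect.
```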

Practical Difficulty

the unbiased sampling condition is in reality never fulfilled

"our samples are drawn from a small section of the universe during a very
short period of time" (Ayer: cited by O'Hear, p.148)

induction headaches revisited

the silver lining

since we are most interested in the region of spacetime we're restricted
to

we can, at least partly, finesse the bias worry by tempering our claims

"we can still use [the Law of Large Numbers] to eliminate hypotheses

that do not appear [likely] to be true [at least]

in our region of space and time." (p. 149)

Conclusion: in this way probabilistic theories may be regarded as empirically
contentful, predictive, and testable

Probability and Explanation

Remaining worry: the unexplanatory feel of probabilistic "explanations"

Example

Jones got cancer because he was a heavy smoker.

Remaining puzzlement: Why Jones?

Why not Smith?

What was there

about Jones that made him get cancer (given that he smoked)?

about Smith that made him not get cancer (given that he smoked)?

Probabilistic explanations seem, in an important sense, incomplete

Davidsonian Advice: regard probabilistic explanations as incomplete descriptions
of fully determined effects

When we are considering a merely probabilistic cause

We should think of our description of the cause as essentially incomplete

A fuller description would reveal some exceptionless law, e.g.

All persons with biological trait X and tar and nicotine intake Y develop
lung cancer within Z years.

There may be such laws even though we don't know, and perhaps never will know,
them

Upshot: no inconsistency between using probabilistic laws and the world
being deterministic

Limits of Davidson's Approach

Applicable to the cancer case & probably also the roll of the die .
. . the die's fall is necessitated & predictable given

a roll of this force & a release at this angle

given the characteristics (e.g., elasticity) of the die and the table

the air conditions at the time of throw

etc.

Here the indeterminacy is most likely merely epistemic

it's just in our knowledge: we can't determine all these factors

not in reality: in reality there really are determining factors
that made the die fall as it did

Like the hidden variable approach to quantum mechanics.

Quantum indeterminacy

Bohm's hidden variable approach not widely favored

Consider the half-life of radioactive substances

to say U235 has a half-life of 64 years is to say

given a sample of U235

50% will have undergone nuclear decay

after 64 years
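On the standard exponential-decay reading, the surviving fraction after t years is 2^(-t/half-life); a sketch using the notes' illustrative 64-year figure (not U235's actual half-life):

```python
# Fraction of a sample still undecayed after t years:
def surviving(t, half_life=64):
    return 2 ** (-t / half_life)

print(surviving(64))       # 0.5   : half the sample decays in one half-life
print(1 - surviving(128))  # 0.75  : a 75% chance of decay within 128 years
print(surviving(192))      # 0.125 : a 12.5% chance of surviving 192 years
```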

But in this case, the standard interpretation of quantum indeterminacy
says

there is no further explanation

of why this or that particular U235 nucleus decayed

Explanatory worry: in what sense do we have an explanation of nuclear
decay?

in the die throw case:

we have no explanation of why it came up odd on this throw

that it had a 50% chance is no explanation of why this time

nevertheless, we think there is an explanation: not chance

in the radioactive decay case

we have no explanation of why this U235 atom decayed within 64 yr.

that it had a 50% chance of doing so is no explanation of why this one
decayed when it did

and there is no explanation: it is chance

Determinism v. Indeterminism

Determinism is the view that everything has a cause; that nothing really
happens by chance.

Laplace's demon: if the position and velocity of every particle were known

from this, together with the deterministic true laws of Newtonian mechanics

we could predict the whole subsequent history of the universe

Quantum mechanics seems to imply the falsity of determinism: indeterminism

at the quantum level some things do happen by chance

there is indeterminacy in the events themselves, not just in our knowledge
of them

So how explanatory are probabilistic laws and explanations anyhow?

in the case of theoretically eliminable probabilities, e.g., the die

the conditions cited in the probabilistic antecedent of the probabilistic
law will generally be part of the (unknown) full deterministic explanation

in the case of the die

the construction of the die that makes it a fair die

is among the causes of its actually falling as it does on this occasion

in the case of theoretically ineliminable probabilities, e.g., of quantum
events

the conditions cited in the probabilistic antecedent (its being U235)

are not part of any (unknown) full deterministic explanation

Conclusion: "our outcomes will be explanatory to the extent that seeing
occurrences in terms of tendencies within populations" (p. 154) is explanatory

this of itself, in the ordinary case, is not very explanatory

explanation: you're under thirty because you're Alma College students and
99% of Alma students are under 30

here, unlike the die case, the probability cited is not part of the
cause

you're under thirty because of when you were born, not because you're at
Alma

not a worry about probability per se but about causality

like the flagpole case

the sun's angle of incidence & the length of the shadow don't cause
the flagpole's height

perhaps a worry per accidens about probability though

since universal generalizations are more apt to reflect genuine causal
regularities

and probabilistic generalizations are more likely to reflect mere correlations

in the quantum mechanical case

explanation: the microgram sample of X decayed completely within a year
because the half-life of X is .64 milliseconds.

here, unlike the age/Alma "explanation", there is no alternative
explanation

issue: which is it?

1. The existence of the true alternative that makes the Alma "explanation"
nonexplanatory?

2. The existence of the fully deterministic elaboration that makes the smoking
explanation (partially) explanatory?

upshot

On 1, quantum mechanics will be explanatory: a bullet-biting feel about it?

that there's no further explanation below the population level

means that it really is a matter of chance

Example

the full explanation of why this particular atom (call it U235a)
decayed within 128 years is

"Well, there was a 75% chance of that happening."

Worry concerning potential predictivity requirement on explanation

the full explanation of why the nucleus of U235b remained undecayed for
192 years would be

"Well, there was a 12.5% chance of that happening."

On 2, quantum mechanics will fail to be explanatory unless

the hidden variable theory works out

and whatever determines the half-life of various isotopes is among
the factors interacting with the hidden variables

Interpretations of Probability

Probability Calculus

p = 0 through 1

0 = certainty-not, or impossibility

1 = certainty or necessity

intermediate values represent intermediate probabilities

multiplication theorem

general form: p(a & b) = p(a) * p(b,a)

where a and b are independent events: p(a & b) = p(a) * p(b)

addition theorem

general form: p(a or b) = p(a) + p(b) - p(a & b)

where a and b are mutually exclusive events: p(a or b) = p(a) + p(b)
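Both theorems can be verified by brute enumeration over a fair die (an illustrative sketch; the particular events a and b are chosen for the example, not taken from O'Hear):

```python
from fractions import Fraction

outcomes = range(1, 7)  # fair die: six equiprobable outcomes

def p(event):
    """Probability of an event (a predicate on outcomes) under equiprobability."""
    return Fraction(sum(1 for o in outcomes if event(o)), 6)

a = lambda o: o % 2 == 0           # even:   p(a) = 1/2
b = lambda o: o > 4                # 5 or 6: p(b) = 1/3
a_and_b = lambda o: a(o) and b(o)  # only 6: p(a & b) = 1/6
a_or_b = lambda o: a(o) or b(o)    # {2, 4, 5, 6}

# multiplication theorem: p(a & b) = p(a) * p(b,a)
p_b_given_a = p(a_and_b) / p(a)    # = 1/3 (a and b happen to be independent)
assert p(a_and_b) == p(a) * p_b_given_a

# addition theorem: p(a or b) = p(a) + p(b) - p(a & b)
assert p(a_or_b) == p(a) + p(b) - p(a_and_b)
print(p(a_or_b))  # 2/3
```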

p(a,b): the probability of a given b: how to understand
talk of this

when we say p(h,e) = .9, what are we talking about?

two interpretations

objective: we're speaking of real tendencies in the world

subjective: we're speaking of degrees of confidence of belief

Subjective Interpretations: "A statement of probability does not reflect
anything `rational or positive or metaphysical' in the world; it is merely
a psychological device which we use when we are in ignorance of the full
facts of the situation." (O'Hear, p. 160: quote from Bruno de Finetti)

Classical theory or a priori interpretation of Laplace

probability can be seen as subjective

we're talking about outcomes

such that we've no reason to expect one to be more probable than
another

a priori in that there's no appeal to observed frequencies in assigning
probabilities

we tote up the different possible outcomes and assign each an equal probability,
e.g.

probability of rolling a 1 in a single throw of a die = 1/6

probability of getting heads in a single coin toss = 1/2

assumes the outcomes are equiprobable

that the die is not loaded

that the coin is a fair coin

Criticisms of classical theory

equiprobability assumption limits application to situations where outcomes
are equiprobable (for all we know)

we often want to apply the probability calculus to cases where results
are not equiprobable (for all we know)

worse yet, even assuming equiprobability, the same outcome can be assigned
different probabilities depending on how it's described

example: pack of four cards (2 red, 2 black): what are the chances of being
dealt a two-card hand of a single color?

plan A: count individual cards drawn as the basic alternatives: p = 1/3

plan B: count color distributions of hands as basic alternatives: p = 2/3
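Both counting plans can be enumerated directly ("hand" taken to be two cards, which the arithmetic presupposes; a sketch with hypothetical card labels):

```python
from itertools import combinations

deck = ["R1", "R2", "B1", "B2"]  # two red cards, two black cards

# Plan A: the 6 equally likely two-card draws are the basic alternatives.
hands = list(combinations(deck, 2))
single_color = [h for h in hands if h[0][0] == h[1][0]]  # same color letter
print(len(single_color), "/", len(hands))  # 2 / 6, i.e. p = 1/3

# Plan B: the color distributions RR, RB, BB are the basic alternatives;
# weighted equally, two of the three are single-color.
distributions = ["RR", "RB", "BB"]
print("2 /", len(distributions))  # p = 2/3
```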

Carnap's Logical Theory

Enumerate the predicate and names in the language

Predicates: Run, Jump

Names: Spot

Enumerate the possible combinations: state descriptions, and assign an
initial probability to each

Spot runs. Spot jumps. (.4)

Spot runs. Spot jumps not. (.2)

Spot runs not. Spot jumps. (.1)

Spot runs not. Spot jumps not. (.3)

Calculate the probability of h (Spot jumps) on e (Spot runs) = 2/3
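The 2/3 falls out of the state-description weights by the usual ratio p(h,e) = p(h & e)/p(e) (a sketch of the calculation using the toy weights above):

```python
from fractions import Fraction

# Initial probabilities assigned to the four state descriptions,
# keyed as (Spot runs, Spot jumps):
weights = {
    (True, True): Fraction(4, 10),
    (True, False): Fraction(2, 10),
    (False, True): Fraction(1, 10),
    (False, False): Fraction(3, 10),
}

# h = "Spot jumps", e = "Spot runs"
p_e = weights[(True, True)] + weights[(True, False)]  # = 6/10
p_h_and_e = weights[(True, True)]                     # = 4/10
print(p_h_and_e / p_e)  # 2/3
```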

Criticisms

language dependence of the state descriptions & consequent probability
estimates

the initial weighting is based on unanalyzed induction

Popper's Paradox of Ideal Evidence

suppose we have a coin and our subjective interpretation says it has a
.5 probability it will come up heads

meaning we are 50% ignorant of how it will turn up.

suppose we have conducted a long string of tosses with a distribution approaching
50/50

we have learned nothing: we're still 50% ignorant

but we have learned something about the coin; that the probability
of heads really is .5; it's a fair coin.

Objective Interpretations: probability statements refer to real tendencies
individuals or sequences have to manifest specific patterns of outcomes

Frequency or Relative Frequency view

my assertion p(heads on the next toss) = .5 is not exactly about the next toss:
for any particular toss either it's coming up heads (p=1) or it's not (p=0)

my assertion p(heads on the next toss) = .5 is about what the relative frequency
of heads would be in a long series of tosses

Attraction (besides objectivity): ties probability closely to the means
we actually use to ascertain probabilities: observed relative frequencies
of outcomes within the population of so-far observed cases

Difficulty 1: trouble about induction redux

determination of relative frequency presumes an adequate sampling

but there's no such thing relative to an infinite or open-ended sequence
of events

as we're often faced with in scientific applications of probability

Difficulty 2: we often do want to talk about the probability of
particular events

suppose this coin is only flipped once in its life and comes up heads

then the relative-frequency probability of this coin coming up heads on any
single toss was 1

but it wasn't: it was .5; this was a fair coin

or if it had never been flipped, then there would be no probability assignable

Difficulty 3: inconsistent probability assignments to the same event

RF cannot account for a single event except in terms of theoretical classes
to which the event presumably belongs

the probability of an event having a property will depend on the relative
frequency of the property's occurrence among the class to which the event
is assigned.

but events belong to as many different classes as they can be described
as belonging to

the relative frequency of the property can vary between these different
classes

so the same individual or event will be judged to have different frequency
probabilities depending on the comparison class

example: the probability the temperature would rise above 70 degrees Fahrenheit
in Alma yesterday

yesterday being viewed as a member of the class late October (quite
low): 1/20 say

viewed as a member of the class days whose preceding day's temperature
rose above 70 (quite high): 19/20 say.

problem: the probability of the event itself can't be both .05 and
.95

Moral of the story

RF lacks the wherewithal for speaking to the probability of individual
events

due in part to lacking the wherewithal for selecting reference classes

Inadequacy of the moral

it seems we do want to speak of the probability of individual events sometimes

if I'm betting MSU wins Saturday

I want to know the probability of MSU winning Saturday

Not:

the probability of a team winning following a bye week

the probability of a team winning after two straight losses

the probability of a team winning three weeks after beating Michigan

it seems some reference classes are better than others

narrower better?

the probability of a team winning following a bye week

following two straight losses

after defeating their arch rival

relevance issue: some reference classes provide more useful information
than others (cf. are more projectable)

injuries, matchups, and caliber of opposition as opposed to the foregoing
irrelevancies

best when "the reference class . . . can be seen in terms of the conditions
which generate the outcomes involved"

i.e., when the reference class is causally relevant

Popper's Propensity Theory

Explained

Probability is a property of the generating conditions of events, i.e.,
their propensity for causing the probable target event

e.g., the 70% chance of rain tomorrow

is a property of the current meteorological conditions

they have a 70% propensity for bringing about rain tomorrow.

Propensities are actually existing (unobservable) dispositional properties
of the physical world.

What propensities bring about

are observed frequencies in runs of events

not single events because

in genuinely indeterministic cases, nothing brings about the single event

and in genuinely determinate cases, the determining cause brings about
the single event

probability of a single event = the "measure of an objective propensity
[of the world] . . . to make it happen"

Criticisms

The nature of propensities is unclear: they're supposed to be like
Newtonian forces

but not really forces

as shown by the biased coin problem

a coin's 60% propensity toward heads should always overbalance its 40%
propensity toward tails

if we're really talking forces, the coin should always come up heads

and how can propensities bring about a 70% frequency of rain (say) in a run of
events without bringing about any actual instances of rain?

And if they're not forces . . . what are they over and above relative frequencies?

note the extremely gingerly extension of probability talk to individual
events

"measure of objective propensity"

In what sense measure? Two I can think of are both unhappy

indicator of:

as in "clothes are the measure of the man"

but how can the individual event's probability be what tells us what the
propensity is, when we need to determine the propensity first, before we can
say anything of event probability?

portion of: despite his embarrassment, the President retains some
measure of the respect he formerly enjoyed.

hard to think of what this measure of propensity is a portion of

unless it's a force: compare, Ed, though downgraded to a tropical storm,
still retains a considerable measure of its original force.

Popper's appeal to quantum theory:

Popper's claim: propensity theory explains the two-slit result, showing
that propensities are physically real

just as putting new pins in a pin-table changes the probabilities or propensities
of balls rolling down the table

even when they don't actually go near the new pins.

so does opening the second slit change the propensities of distribution
of particles which go through the first slit

Criticism

it's the actual course of the particle that's altered, not just the probability
of it reaching a certain point (as in the pin board case)

a case of physical interference quite unlike the pin board case

subatomic particles, unlike pin balls, are

"never fully isolable from the larger systems in which they operate"

what the principle of complementarity describes.

Conclusions

Difference in emphasis

frequency theory

p(e,c) = .7: says that e is an event of a kind with a long-run frequency
of occurring 70% of the time under conditions such as c

a more epistemic, Humean, and positivistic (antirealistic) emphasis

propensity theory:

p(e,c) = .7: says that c is a circumstance of such a sort that, 70% of the
time, something of the e sort will be brought about.

a more causal, anti-Humean, realistic emphasis

With regard to the single case nothing follows on either theory

subjectivism vs. objectivism

with regard to truly determinate events

macroevents e.g., the fall of the coin

probabilities perhaps best thought of as subjective

not that the events are really undetermined: they'll certainly occur or
certainly not

we just don't know the determinants:

the uncertainty (or probability) is in our knowledge of reality

not in reality itself

with regard to truly indeterminate (e.g., subatomic) events

probabilities better thought of as objective

these statistical regularities at the subatomic level

real and objective

without being based on any unknown determining factors

extension to macro cases of genes, dice

may be more natural even in these cases to think of probabilities as ascribing
propensities

additionally, are we really all that sure about the underlying determinism
in such cases as

the fall of dice

& the weather?