Saturday, September 11, 2004

Technical matters

The variety of LP proposed by Carnap, Schlick or Ayer has been rejected by technically oriented ("analytic") philosophers largely because it fails to properly account for many aspects of science itself. Kuhn and Popper, in particular, are credited with this rejection.

Doc, if you ask me to give my technical opinion on the variety you propose, my answer will run along roughly the same lines. In your Wednesday, 10:55 am comment, you say that 'true scientific propositions can never be contradictory'. I maintain that some have to be, because using theories in practice means that we have to put up with some amount of discrepancy between them and experimental results. Otherwise, no theory would ever be accepted. In response to my previous example with T1 -> 12.458 and T2 -> 12.743, you say that 'Neither theory correctly predicts [the experimental results]'. This is true, but I nonetheless disagree when you say that neither would have been accepted. Physics works with theories that deviate marginally from experimental results all the time. And it is a good thing it does, because even an imperfect theory may produce useful insights before a better one is produced. If experimenters had started waiting for the theoreticians to come up with the perfect theory on every subject, physics would have stopped making any progress a long time ago.
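The acceptance-despite-discrepancy point can be sketched in a few lines of Python. The measured value 12.6, its uncertainty 0.2, and the two-sigma acceptance rule are all invented for this illustration; only the predictions 12.458 and 12.743 come from the earlier example:

```python
# Hypothetical illustration: two theories' predictions vs. one measurement.
# The measured value, uncertainty, and tolerance rule are assumptions.

def acceptable(prediction, measured, uncertainty, k=2.0):
    """A theory is provisionally acceptable if its prediction lies
    within k standard deviations of the measured value."""
    return abs(prediction - measured) <= k * uncertainty

measured, sigma = 12.6, 0.2
print(acceptable(12.458, measured, sigma))  # True: T1 deviates, but is tolerated
print(acceptable(12.743, measured, sigma))  # True: so is T2
print(acceptable(15.0, measured, sigma))    # False: a gross mismatch is rejected
```

Under a rule like this, both imperfect theories are provisionally accepted even though neither prediction matches the measurement exactly.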

Take, as another example, the historical account of free fall by Galileo and the experiment at the Tower of Pisa he conducted to support it. It is said that there were people who attacked Galileo by saying that his experimental results disproved his theory. And they were right! If you drop two spheres, one big and one small, from the top of the Tower of Pisa, the big one will indeed reach the ground slightly before the smaller one because of air friction. And Galileo's theory did not include any corrective term for air friction; it simply predicted that both objects would reach the ground at exactly the same time. Well, we are happy that a consensus developed to ignore this small discrepancy. But from this example we can build exactly the same logical contradiction as in the T1 and T2 case: E, E => P, P' => ~P and P', so P and ~P:

  • E : The tower of Pisa experiment.
  • P : As per Galileo's theory of free fall, both spheres reach the ground simultaneously.
  • P': The observed results. The big sphere reaches the ground slightly before the small one.
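The effect behind P' can be checked numerically. The sketch below (not Galileo's analysis; all parameter values are assumed, including the ~55 m drop height and iron spheres with a standard sphere drag coefficient) integrates free fall with quadratic air drag for a big and a small sphere:

```python
import math

# Minimal sketch: free fall with quadratic air drag, Euler integration.
# Parameters (height, densities, drag coefficient) are illustrative assumptions.

def fall_time(radius_m, height=55.0, density=7800.0, rho_air=1.2,
              c_d=0.47, dt=1e-4, g=9.81):
    """Time for a sphere of the given radius to fall `height` metres."""
    area = math.pi * radius_m ** 2                      # cross-section
    mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        drag = 0.5 * rho_air * c_d * area * v * v       # opposes the motion
        v += (g - drag / mass) * dt
        y -= v * dt
        t += dt
    return t

t_big = fall_time(0.10)    # 10 cm radius sphere
t_small = fall_time(0.01)  # 1 cm radius sphere
print(t_big < t_small)     # True: the big sphere lands slightly first
```

Because mass grows with the cube of the radius while drag grows only with its square, the larger sphere is less decelerated, exactly the small discrepancy the Pisa story describes.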

Doesn't this example conclusively show that some logical incoherence is a normal part of the process we call science? Another similar example could actually be built using what you say about the wave vs. particle problem. There was a time when only T1 (wave) and T2 (particle) existed, but not T3 (probability wave). During that time, there were certainly a lot of borderline cases that kept cropping up in experiments, and for which experimenters used either T1 or T2. For each of these cases a logical contradiction can be built along exactly the same lines as above. And no one is ever going to claim that all these physicists were incompetent. Some of them might even have won the Nobel Prize before T3 was put forward.

Coming now to the part of your theory that deals with natural language: I do not see how using 'complete' instead of 'finitely verifiable' makes it less vulnerable to the classical criticisms of LP. It seems to me that 'God will show up in 10^1,000,000,000 years' is a perfectly completable sentence. The Revelation, or any text that gives detailed descriptions of what will happen when 'God shows up', is the completion your definition asks for. On the contrary, 'All C14 nuclei will eventually disintegrate' seems just as uncompletable now as it was non-finitely-verifiable before. Your Sunday 2:14 pm post defined completable as 'contain[ing] all the necessary assumptions and definitions that will make it [...] falsifiable'. I do not see how you are going to falsify 'All C14 nuclei will eventually disintegrate' unless you sit in front of your sample, waiting for your Geiger counter to go 'ping', until the end of time.


1 Comments:

At September 12, 2004 at 12:11 PM, Blogger Doctor Logic said...

Okay, Nicolas. I think you are confusing logical structure with psychology.

Most theories that are constructed make simplifying assumptions. For example, take Hooke's Law:

F = kx { F = force, k = a constant, x = displacement from equilibrium}

which gives the force exerted by a spring. The theory assumes that the spring is extended or contracted by only a small fraction of its natural length L (x/L << 1). This enables the theorist to omit higher-order terms in the force equation. It makes the calculations much easier, and simplifies the model.

In reality, the force exerted by a spring is very complex and has higher order terms. For all we know, the formula for the force could have an infinite number of terms in the Taylor expansion, e.g.,

F = a x + b x^2 + c x^3 +... {a,b,c,... are constants}

When we “accept” Hooke's Law, we really mean that we are willing to accept it as a reasonable approximation. We actually expect the experimental results to differ from it at some level of precision.
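The point about approximation can be made concrete with a toy model. The coefficients below are invented: a linear Hooke model F = k·x is compared against a hypothetical "truer" force with a cubic term, and the linear model's relative error grows with displacement, which is why the small-displacement assumption matters:

```python
# Toy illustration (coefficients invented): linear Hooke's Law vs. a
# hypothetical force law with a higher-order term.

k, b = 100.0, 5000.0  # assumed spring constants (N/m and N/m^3)

def f_linear(x):
    """Hooke's Law approximation."""
    return k * x

def f_full(x):
    """Hypothetical 'true' force with a cubic correction."""
    return k * x + b * x ** 3

for x in (0.01, 0.05, 0.20):  # displacements in metres
    rel_err = abs(f_full(x) - f_linear(x)) / f_full(x)
    print(f"x = {x:.2f} m -> relative error of linear model: {rel_err:.1%}")
```

At small x the linear model is accurate to well under a percent; at large x it is badly wrong, mirroring the "acceptable as an approximation over some domain" reading above.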

Similarly, Galileo made the simplifying assumption that air resistance was a small effect. He may not have been explicit about this assumption, and, given his personality, I can believe he would omit this assumption from his claims.

So, the logical structure of these models is like the following:

T(total) = T1 + t1 => O(total) = O(T1) + O(t1)

where T1 is the main theory being tested, and t1 stands for the parts of the total theory which we are not attempting to test or understand. In Galileo's experiment, t1 includes air resistance, weather conditions, gravity fluctuations, differences due to reflected light, etc. The observed experimental value O(total) is equal to a contribution from the main theory, O(T1), plus a contribution from the unknown parts of the total theory, O(t1).

The assumption is that O(t1) is small. If the experiment shows that O(total) – O(T1) is large, then either the theory T1 does not work, or O(t1) is not small. Only further research will reveal whether T1 is reasonable or not.
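The accept-or-investigate logic just described can be sketched in code. The numeric values are invented for illustration; the structure is the comparison of the residual O(total) – O(T1) against an assumed bound on O(t1):

```python
# Sketch of the residual logic: T1 is only called into question when the
# residual O(total) - O(T1) exceeds the assumed bound on O(t1).
# All numbers below are invented for illustration.

def residual_verdict(o_total, o_t1_predicted, o_t1_bound):
    """Compare the observation with the main theory's prediction."""
    residual = abs(o_total - o_t1_predicted)
    if residual <= o_t1_bound:
        return "consistent: residual attributable to O(t1)"
    return "problem: either T1 fails or O(t1) is not small"

print(residual_verdict(12.6, 12.458, 0.3))  # small residual -> consistent
print(residual_verdict(12.6, 10.0, 0.3))    # large residual -> investigate
```

No contradiction appears anywhere in this structure: a nonzero residual is not ~P, it is just a quantity to be attributed either to t1 or to a failure of T1.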

Logically, there is no contradiction. Again, I maintain that a genuine contradiction never arises.

You say “even an imperfect theory may produce useful insights before a better one is produced”. This is true, but only when O(t1) is small, at least over some domain. Saying that T1 explains a phenomenon over some domain means that O(t1) is small over that domain. This is very different from saying there is any contradiction or logical “incoherence”.

The psychology of science research (and of individual scientists) is quite different. Science is expensive. In order to survive at science, you need funding and support. This means that scientists must also be politically aware. They must be champions for their theories in order to attract researchers and funding. However, to champion a theory is not the same as accepting it as unfalsifiable, or accepting it without reservations (e.g., that O(t1) is small everywhere).

'All C14 nuclei will eventually disintegrate'. Just to clarify my position: this test proposition is NOT complete (and not meaningful in the traditional LP sense of the term). It is only completable to the extent that we agree it is the same as 'the mean lifetime of C14 nuclei is 8000 years'. If you want to say that the test proposition is not saying the same thing as the law of radioactive decay, then I agree that it is not meaningful. My claim is that physicists who accept this proposition in natural language only do so because they accept (even if subconsciously) the equivalence of the test proposition with the law of radioactive decay. That is, they are completing the test proposition and transforming it into the law of radioactive decay.

Claiming that the average physicist would accept your test proposition as meaningful by itself just shows that the average physicist hasn't looked into things in detail (the vast majority of physicists don't study philosophy of science or logical positivism). Of course, the fact that the average man on the street accepts a proposition as meaningful (sensible) doesn't make it so either. Indeed, the role of LP is to make clear what is sensible and what is not. If it were obvious, we wouldn't need LP at all.
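The completion described above can be made explicit. Taking the 8000-year mean lifetime figure from the comment itself, the decay law N(t) = N0·exp(-t/tau) predicts a measurable surviving fraction at any finite time, and that prediction is what carries the falsifiable content:

```python
import math

# Sketch of the completed proposition: the exponential decay law,
# with tau = 8000 years as the figure quoted in the comment above.

TAU = 8000.0  # mean lifetime in years

def surviving_fraction(t_years):
    """Expected fraction of C14 nuclei still intact after t_years."""
    return math.exp(-t_years / TAU)

for t in (0, 8000, 40000, 80000):
    print(f"after {t:>6} years: {surviving_fraction(t):.6f} of nuclei remain")

# The fraction is positive at every finite t, so 'all nuclei have decayed'
# is never strictly observed; the falsifiable content is the predicted rate.
```

This is exactly the contrast being drawn: the natural-language sentence on its own predicts nothing at any finite time, while its completion predicts a testable count rate at every time.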

'Z will be observed (or falsified) within X years, and no other falsification of Z will be possible before X years, and, by the way, X is so large a time that it is probable all humans will be extinct.' Technically, this is meaningful. It is, however, a guess of the most useless kind. Since it is not scientific (i.e., it does not model any empirical data), one can always propose a Z2 such that Z2 => ~Z, with Z2 and Z equally probable. Still, your question was about meaningfulness, not reasonableness. Your proposition, if eventually falsifiable, is meaningful.

At least propositions about cards drawn in a casino are falsifiable in an entertainingly short period of time.

doctor(logic)

 
