In
defending critical scientific realism, I shall proceed by discussing
ontological, semantical, epistemological, axiological/methodological, and ethical
questions in separate chapters. In due course, the branches of Fig. 1 will be
explicated and assessed in more detail. Here some additional remarks are in order on the philosophical method that I shall follow in my argument.
(a)
Formal vs. historical methodology. The message of the Kuhnian revolution
was sometimes interpreted as the thesis that philosophy of science should
follow a descriptive historical method—and give up the Carnapian quest for the
logical and quantitative explication of concepts. While we may agree with
Lakatos that ‘philosophy of science without history of science is empty;
history of science without philosophy of science is blind’ (Lakatos 1976: 1), a
sharp contrast between formal and historical methods in science studies is
nevertheless misleading and unnecessary. Logical and quantitative methods are
by no means restricted to the ‘synchronic’ study of completed scientific
systems, but can be applied as well to the ‘diachronic’ study of scientific
change.
At
least since Derek de Solla Price's Little Science, Big Science (1963),
it has been clear that the growth of science (measured by the volume of the
literary output of the scientists) can be studied by quantitative methods.
Science indicators, now investigated in ‘scientometrics’ and widely used as a
tool of science policy, are as such not measures of the growth of knowledge,
since they simply count publications and citations while ignoring their semantic content (see Section 6.1). But this is no inherent limitation of the
quantitative method. Thus, the Carnap–Hintikka measures of semantic information
or the Tichý–Oddie–Niiniluoto measures of verisimilitude can be used for
expressing, in quantitative terms, that a new body of scientific knowledge ‘tells
more’ and is ‘closer to the truth’ than an old one (cf. Section 3.5). The
latter measures also allow us to make precise such Peircean notions of dynamic epistemology as ‘approaching’ or ‘converging towards’ the truth. Further, it is by
now well established and accepted that the study of theories and scientific
change can successfully employ concepts borrowed from set theory (the
‘structuralism’ of Suppes, Sneed, and Stegmüller) and logical model theory
(Pearce 1987a).
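To give a preliminary flavour of what such quantitative explications look like (the formulas below are only an illustrative sketch of the general shape of such measures, assuming a normalized distance function d between constituents and weighting parameters γ and γ′; the specific definitions and the choices among them are discussed in Section 3.5), the content measures going back to Carnap and Bar-Hillel, on which Hintikka's systems build, evaluate the semantic information of a hypothesis h by its improbability, while a representative distance-based measure of truthlikeness evaluates a theory g, expressible as a disjunction of constituents C_i (i ∈ I), by its distance from the true constituent C*:

\[
\mathrm{cont}(h) = 1 - p(h), \qquad \mathrm{inf}(h) = -\log_2 p(h),
\]
\[
\Delta_{\min}(g) = \min_{i \in I} d(C_i, C_*), \qquad
\Delta_{\mathrm{sum}}(g) = \frac{\sum_{i \in I} d(C_i, C_*)}{\sum_{j} d(C_j, C_*)},
\]
\[
\mathrm{Tr}(g, C_*) = 1 - \gamma\,\Delta_{\min}(g) - \gamma'\,\Delta_{\mathrm{sum}}(g), \qquad \gamma, \gamma' > 0.
\]

On measures of this kind, a theory ‘tells more’ when it excludes more constituents, and it is ‘closer to the truth’ when the constituents it allows lie, by the chosen distance function d, nearer to the true one; the weights γ and γ′ balance the demand of truth against the demand of information.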
As
Nowak (1980) correctly observes, when philosophers of science give descriptions
of scientific activities, they usually make some idealizing assumptions. It is
then an important task to make these descriptions more realistic by gradually
removing these assumptions. Another task is to derive, relative to a
sufficiently accurate description of science and to some philosophical
(axiological) premisses, methodological recommendations for doing good science
(cf. Section 6.2). Formal methods may be useful for both of these tasks.
(b)
Normativism vs. naturalism. Logical empiricism is often portrayed as an
attempt to establish by logical analysis and rational reconstruction general
prescriptions for sound science, or ideal models and norms expressing what
science ought to be. On this view, philosophy of science is an a priori account of
scientific rationality. On the other hand, the pragmatist tradition—in the
broad sense exemplified by W. V. O. Quine's (1969) ‘naturalized epistemology’
and the historical/sociological approach to the philosophy of science—holds
that scientific rationality has to be grounded in the actual practice of
scientific research.11
In
my view, philosophy has a lot to learn from empirical or factual disciplines
like psychology, cognitive science, and the history and sociology of science.
But this does not mean that epistemology could be reduced to empirical
psychology, as Quine suggests, any more than ethics can be reduced to cultural
anthropology. One obstacle to such a reduction is conceptual: while human
beliefs may be objects of ‘naturalized’ empirical and theoretical studies,
epistemological concepts like ‘truth’, ‘justification’, ‘confirmation’, and
‘knowledge’ are not determined by ‘nature’, but rather their specification or
definition is a matter of philosophical dispute. In the same way, the
demarcation between science and non-science is a basic problem in the
philosophy of science, and every attempt to study the actual history and
practice of science already presupposes some answer to this problem.
Another
obstacle to reduction comes from the normative dimension. As ‘ought implies can’, it may be reasonable to demand that normative epistemology not go beyond the actual capabilities of human cognition. The descriptive
question of how we actually think is thus relevant to the normative question of
how we ought to think (cf. Kornblith 1985). But this does not mean that the
latter could be solved simply by studying the former. I shall illustrate this
by commenting on the debate between ‘normativists’ and ‘naturalists’ in the
philosophy of science.
Following
Lakatos in the ‘naturalist’ demand that methodology should be tested against
the actual historical record of the sciences, Laudan (1977) required that a
methodological theory should capture as rational certain intuitively clear
cases of good science. Later he rejected this ‘intuitionist meta-methodology’
(Laudan 1986). But even though Laudan explicitly acknowledged the ‘palpable
implausibility’ of the claim that ‘most of what has gone on in science has been
rational’ (ibid. 117), he still insisted that theories of scientific change
(Kuhn, Feyerabend, Lakatos, Laudan) should be tested by the actual history of
science. The following quotation suggests that Laudan is in fact willing to
include all historical cases among the relevant test cases:
In
their original forms, these philosophical models are often couched in normative
language. Whenever possible we have recast their claims about how science ought
to behave into declarative statements about how science does behave. We have a
reasonably clear conscience about such translations since all the authors whose
work we have paraphrased are explicitly committed to the claim that science,
because it is rational, will normally behave in ways which those authors
normatively endorse. . . . it is plain that philosophers of
the historical school draw the internal/external distinction so as to include
within the range for which their normative views are accountable virtually all
the widely cited and familiar historical episodes of post-16th century physical
science. (Laudan et al. 1986: 148–9, my italics)
The
programme for ‘testing theories of scientific change’ has already produced
impressive and useful case studies (see Donovan, Laudan, and Laudan 1988). But
if the cognitive aims and methods of scientists have changed throughout
history, as Laudan convincingly argues in Science and Values (1984a),
and if the scientists have acted on the basis of their methodological
principles, it simply cannot be the case that ‘virtually all’ historical cases
could exhibit the same shared pattern of methodological rules. Hence, there is
no hope whatsoever that any non-trivial normative theory of scientific change
could pass ‘empirical tests’.
If
a case study reveals, for example, that Galileo or Ampère did not appeal to
novel predictions to support their theories, does this ‘contra-indicate’
Lakatos's demand that a good theory ought to be successful in making novel
predictions? Instead of using Galileo's and Ampère's behaviour as ‘tests’ of
methodological rules, we may simply conclude that they had not read Whewell's,
Popper's, and Lakatos's writings. In this sense, a normative ought
cannot be derived from, or refuted by, a historical is.
Similar
problems arise if the naturalist programme is applied to contemporary
scientists. Ron Giere, in his book Explaining Science (1988), gives
interesting material to show that high-energy physicists at least sometimes
behave as ‘satisficers’. He also suggests that the long debate on whether
‘scientists, as scientists, should be Bayesian information processors’
is futile:
We
need not pursue this debate any further, for there is now overwhelming
empirical evidence that no Bayesian model fits the thoughts or actions of real
scientists. For too long philosophers have debated how scientists ought to
judge hypotheses in glaring ignorance of how scientists in fact judge
hypotheses. (Ibid. 149)
But
this view ignores the fact that, for centuries, theory and practice have been in mutual interaction in the field of scientific inference.
Scientists learn to do science through implicit indoctrination and explicit
instruction from their masters, textbooks, and colleagues. So if a case study
reveals that a group of real scientists favours ‘bold hypotheses’ and ‘severe
tests’, we may judge that they, or their teachers, have read Popper. And if
some scientists do not behave like Bayesian optimizers, the reason is probably
that the Department of Statistics—and the introductory courses in methodology—at their university are dominated by representatives of the
‘orthodox’ Neyman–Pearson school (cf. Mayo 1996).
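For the record, and without prejudging which school is right, the contrast at issue can be stated explicitly (the schema below is the textbook form of Bayesian conditionalization, not Giere's or Mayo's own formulation). A ‘Bayesian information processor’ updates its degree of belief in a hypothesis H on evidence E by Bayes's theorem,

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}, \qquad
P(E) = \sum_i P(E \mid H_i)\, P(H_i),
\]

where the H_i form a partition of rival hypotheses, whereas the Neyman–Pearson tradition assigns no probabilities to hypotheses at all and evaluates test procedures by their long-run error rates (significance level and power). Scientists trained on the one schema and scientists trained on the other will exhibit systematically different inferential behaviour, which is precisely why observed behaviour cannot by itself settle the normative question.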
To
avoid this kind of vicious circularity in the testing procedure, we should find
some strange tribe of scientists who have never been contaminated by any methodological
or philosophical ideas. But naturalism is certainly implausible if it suggests
that the best advice for the conduct of science can be learned from those of
its practitioners who are most ignorant of methodology!
These
remarks indicate, in my view, that the debate between the positions of Fig. 1
cannot be resolved by studying how scientists in fact behave. While it is
important for the scientific realists to have a realistic picture of scientific
activities, and therefore to pay serious attention to historical and
sociological case studies, they should also maintain the possibility of
criticizing the way science is actually done. In considering the ontological,
semantical, epistemological, axiological, methodological, and ethical problems,
we need support from scientific knowledge, but genuinely philosophical aspects of these issues remain on the agenda. These observations mean, against
naturalism, that the choice between the interesting positions will at least
partly be based upon philosophical premisses. This is also a motivation for
discussing many traditional philosophical issues in a book on scientific
realism.
I
am not advocating a return to a foundationalist ‘first philosophy’ in the sense
criticized by Quine. Today we often hear the claim that science does not at all
need philosophy as its foundation. This thesis has been supported in two
radically different ways. The ‘positivist’ view (Quine) urges that science may
be a child of philosophy, but has since grown completely independent of her mother,
i.e. mature science has happily got rid of metaphysics and epistemology. The
‘postmodern’ view (Richard Rorty) asserts against the ‘Kantians’ that nothing
has foundations; hence, science in particular has no foundations either (cf.
Rouse 1996). Both views seem to imply that there is no special task for a
philosophy of science: science studies simply collapse into historical and
sociological description. For the positivist, this is motivated by the belief
that science, as it is, is the paradigm
of human rationality. For the postmodern thinker, on the other hand, there is
no interesting account of rationality to be found anywhere.
I
think both of these extremes are wrong. Science as a rational cognitive
enterprise is not yet complete: its tentative results are always corrigible and
in need of analysis and interpretation, and its methods can still be improved
in their reliability and effectiveness. The ethics of science also has to be
developed as a part of the philosophical conversation about the social role of
scientific practices. Philosophy of science cannot give any absolute and final
foundation for science, but neither can it leave science as it is. There is a
legitimate need to raise normative questions about scientific enquiry and
knowledge, to set up standards, and (if necessary) also to criticize the
activities of science. To be sure, such pronouncements are fallible and cannot
be expounded from an armchair: philosophy of science and the special sciences have to be able to engage in mutual dialogue.
(c)
Natural ontological attitude. The problem of realism has haunted
philosophers for so long that every now and then there appear attempts to
‘dissolve’ this query by rejecting it.
The
most famous of these attempts was made by the Vienna Circle: in his programme
of ‘overcoming metaphysics by the logical analysis of language’, Carnap
announced in 1928 that the realism debate was a meaningless pseudo-problem (see
Carnap 1967). Schlick's famous article in 1931 declared that both realism and
anti-realism (positivism) were meaningless theses (see Schlick 1959). However,
both of these claims were based upon very narrow empiricist criteria of
meaning: translatability into the phenomenalistic language of elementary
experiences (Carnap), and verifiability in principle by observations (Schlick).
Such strict theories of meaning were soon liberalized, and the realism debate
was resurrected.12
Arthur
Fine (1984) claims that ‘realism is well and truly dead’. Its death had already
been announced by neo-positivists, the process was hastened by the victory of
Niels Bohr's non-realist philosophy of quantum mechanics over Albert Einstein's
realism, and the death was finally certified when ‘the last two generations of
physical scientists turned their backs on realism and have managed,
nevertheless, to do science successfully without it’.
Fine—well
known for his earlier sustained attempts to defend a realist interpretation of
quantum theory (cf. Fine 1986b)—is not advocating anti-realism, either:
anti-realism is not ‘the winner in the philosophical debate that realism has
lost’ (cf. Fine 1986a). What he suggests instead as ‘a third way’ is the
natural ontological attitude (NOA): accept the results of science in the
same way as the evidence of our senses, but resist the impulse to ask any
questions or to propose any additions that go beyond the history and practice
of science itself. Thus, NOA claims to be neither realist nor anti-realist,
since it refuses to talk of ‘the external world’ or to propose any particular
analysis of the concept of truth. Here Fine's position agrees with the
‘minimalist’ account of truth (see Horwich 1990) and with anti-foundationalist
denials of ‘a first philosophy’ (cf. Pihlström 1998).
Fine's
NOA can also be regarded as an expression of philosophical despair. He in
effect suggests that we suspend judgement about the positions in Fig. 1—not
because they are meaningless (as the Vienna Circle urged), nor because they are
irrelevant to the actual practice of science,13 but rather
because there are no resources for settling such philosophical disputes.
Perhaps this is a ‘natural attitude’ towards philosophy—but not one that we
philosophers are willing to take, in spite of the fact that battles between
such wholesale philosophical orientations as realism and anti-realism will
never be finally settled.
What
is more, it seems to me clear that NOA is after all a variant of realism
(Niiniluoto 1987c), even if it wishes to avoid the customary realist
jargon and its refinements (such as the concept of approximate truth).
According to Fine, NOA accepts as a ‘core position’ the results of scientific
investigations as being ‘true’, on a par with ‘more homely truths’ (Fine 1984:
86). NOA treats truth in ‘the usual referential way’—presumably, in something like the Tarskian fashion (cf. Musgrave 1989)—and so ‘commits us, via
truth, to the existence of the individuals, properties, relations, processes,
and so forth referred to by the scientific statements that we accept as true’
(Fine 1984: 98). Unless ‘existence’ has a very unnatural meaning here,
this is a realist position. For example, a statement about the existence of
electrons could not be scientifically acceptable, and thus part of NOA's core
position, if electrons existed only in a subjective, phenomenalist,
mind-dependent way.14
Musgrave
(1989) suggests the possible interpretation that Fine's NOA is ‘complete
philosophical know-nothing-ism’. But if NOA, in this interpretation, leaves it
completely open which statements are to be taken ‘at face value’, then NOA
knows nothing at all—and thus is reduced to global scepticism. This is clearly
in contradiction with Fine's own statements about the core position. Hence,
Musgrave concludes that NOA is ‘a thoroughly realist view’.
In
my own evaluation, realism is undoubtedly alive. In particular, as I shall
argue in the subsequent chapters, it is only recently that realists have
developed adequate logical tools for defending the crucial theses (R4) and
(R5), i.e. for showing in what sense even idealizational theories may be
truthlike, how the truthlikeness of scientific statements can be estimated on
evidence, and how the approximate truth of a theory explains its empirical
success. Even though philosophical debates do not end with winners and losers,
critical scientific realism has at least been able to make progress (cf. Pearce
1987b).
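To indicate in a preliminary way what ‘estimating truthlikeness on evidence’ amounts to (the formula is a sketch of the general idea, with Tr a truthlikeness measure of the sort discussed in Section 3.5 and P an epistemic probability; it is not the final form), the unknown position of the true constituent C* can be handled by weighting the degrees of truthlikeness of a theory g relative to the rival constituents C_i by their probabilities on the available evidence e:

\[
\mathrm{ver}(g \mid e) = \sum_i P(C_i \mid e)\, \mathrm{Tr}(g, C_i).
\]

Such an estimate can be high even when g is known to be strictly false, which is the sense in which idealizational theories may nevertheless be rationally judged to be truthlike.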