Objectivity is a key concept both in how we talk about science in everyday life and in the philosophy of science. This Element explores various ways in which recent philosophers of science have thought about the nature, value and achievability of objectivity. Section 1 explains the general trend in recent philosophy of science away from a notion of objectivity as a 'view from nowhere' to a focus on the relationship between objectivity and trust. Section 2 discusses the relationship between objectivity and recent arguments attacking the viability or desirability of 'value-free' science. Section 3 outlines Longino's influential 'social' account of objectivity, suggesting some worries about drawing too strong a link between epistemic and ethical virtues. Section 4 turns to the value of objectivity, exploring concerns that notions of objectivity are politically problematic, and cautiously advocating in response a view of objectivity in terms of invariance.
This Element has two main aims. The first aim (sections 1-7) is an historically informed review of the philosophy of probability. It describes recent historiography, lays out the distinction between subjective and objective notions, and concludes by applying the historical lessons to the main interpretations of probability. The second aim (sections 8-13) focuses entirely on objective probability, and advances a number of novel theses regarding its role in scientific practice. A distinction is drawn between traditional attempts to interpret chance, and a novel methodological study of its application. A radical form of pluralism is then introduced, advocating a tripartite distinction between propensities, probabilities and frequencies. Finally, a distinction is drawn between two different applications of chance in statistical modelling which, it is argued, vindicates the overall methodological approach. The ensuing conception of objective probability in practice is the 'complex nexus of chance'.
This Element explores the Bayesian approach to the logic and epistemology of scientific reasoning. Section 1 introduces the probability calculus as an appealing generalization of classical logic for uncertain reasoning. Section 2 explores some of the vast terrain of Bayesian epistemology. Three epistemological postulates suggested by Thomas Bayes in his seminal work guide the exploration. This section discusses modern developments and defenses of these postulates as well as some important criticisms and complications that lie in wait for the Bayesian epistemologist. Section 3 applies the formal tools and principles of the first two sections to a handful of topics in the epistemology of scientific reasoning: confirmation, explanatory reasoning, evidential diversity and robustness analysis, hypothesis competition, and Ockham's Razor.
Unity of science was once a very popular idea among both philosophers and scientists. But it has fallen out of fashion, largely because of its association with reductionism and the challenge from multiple realisation. Pluralism and the disunity of science are the new norm, and higher-level natural kinds and special science laws are considered to have an important role in scientific practice. What kind of reductionism does multiple realisability challenge? What does it take to reduce one phenomenon to another? How do we determine which kinds are natural? What is the ontological basis of unity? In this Element, Tuomas Tahko examines these questions from a contemporary perspective, after a historical overview. The upshot is that there is still value in the idea of a unity of science. We can combine a modest sense of unity with pluralism and give an ontological analysis of unity in terms of natural kind monism. This title is available as Open Access on Cambridge Core.
A suite of questions concerning fundamentality lies at the heart of contemporary metaphysics. The relation of grounding, thought to connect the more to the less fundamental, is in turn central to those debates. Since most contemporary metaphysicians embrace the doctrine of physicalism and thus hold that reality is fundamentally physical, a natural question is how physics can inform the current debates over fundamentality and grounding. This Element introduces the reader to the concept of grounding and some of the key issues that animate contemporary debates around it, such as the question of whether grounding is 'unified' or 'plural' and whether there exists a fundamental level of reality. It moves on to show how resources from physics can help point the way towards their answers, thus furthering the case for a naturalistic approach to even the most fundamental of questions in metaphysics.
'Relativism versus absolutism' is one of the fundamental oppositions that have dominated reflections about science for much of its (modern) history. Often these reflections have been inseparable from wider social-political concerns regarding the position of science in society. Where does this debate stand in the philosophy and sociology of science today? And how does the 'relativism question' relate to current concerns with 'post-truth' politics? In Relativism in the Philosophy of Science, Martin Kusch examines some of the most influential relativist proposals of the last fifty years, and the controversies they have triggered. He argues that defensible forms of relativism all deny that any sense can be made of a scientific result being absolutely true or justified, and that they all reject 'anything goes' – that is, the thought that all scientific results are epistemically on a par. Kusch concludes by distinguishing between defensible forms of relativism and post-truth thinking.
Big Data and methods for analyzing large data sets, such as machine learning, have in recent times deeply transformed scientific practice in many fields. However, an epistemological study of these novel tools is still largely lacking. After a conceptual analysis of the notion of data and a brief introduction to the methodological dichotomy between inductivism and hypothetico-deductivism, several controversial theses regarding big data approaches are discussed. These include whether correlation replaces causation, whether the end of theory is in sight, and whether big data approaches constitute an entirely novel scientific methodology. In this Element, I defend an inductivist view of big data research and argue that the type of induction employed by the most successful big data algorithms is variational induction in the tradition of Mill's methods. Based on this insight, the aforementioned epistemological issues can be systematically addressed.