When renowned scientists now talk seriously about millions of multiverses, the old question "are we alone?" gets a whole new meaning.
Our ever-expanding universe is incomprehensibly large – and its rate of growth is apparently accelerating – yet it sits in a remarkably delicate balance.
It's then incredible that the universe exists at all. Let us explain.
In a 2004 review in Science of Searle's Mind: A Brief Introduction, neuroscientist Christof Koch wrote:
Whether we scientists are inspired, bored, or infuriated by philosophy, all our theorising and experimentation depends on particular philosophical background assumptions. This hidden influence is an acute embarrassment to many researchers, and it is therefore not often acknowledged. Such fundamental notions as reality, space, time and causality – notions found at the core of the scientific enterprise – all rely on particular metaphysical assumptions about the world.
This may seem self-evident, and was regarded as important by Einstein, Bohr and the founders of quantum theory a century ago, but it runs against the grain of the views of working scientists in the post-war period.
Indeed, 21st-century mathematicians and scientists seem to have little need of philosophy.
The glory days of Karl Popper, who argued that falsifiability was a hallmark of good science, and Thomas Kuhn, who noted the phenomenon of paradigm shifts, are long gone—in science, if not in the humanities.
For many years, scientific philosophy as practised by scientists has languished, punctuated only by lapses such as the Sokal hoax, when NYU physicist Alan Sokal wrote a tongue-in-cheek article with a lot of scientific nonsense that was accepted by a leading journal in the postmodern science studies field (and launched a cottage industry of similar hoaxes).
But maybe the tide is finally turning. Perhaps modern science really needs philosophy after all.
The main drivers here are some truly perplexing developments in physics and cosmology. In recent years physicists and cosmologists have uncovered numerous eye-popping "cosmic coincidences," remarkable instances of apparent "fine-tuning" of the universe.
Here are just three out of many that could be listed:
- Carbon resonance and the strong force. Although the abundances of hydrogen, helium and lithium are well explained by known physical principles, the formation of heavier elements, beginning with carbon, depends very sensitively on the balance of the strong and weak forces. If the strong force were slightly stronger or slightly weaker (by just 1% in either direction), there would be no carbon or any heavier elements anywhere in the universe, and thus no carbon-based life forms like us to ask why.
- The proton-to-electron mass ratio. A neutron's mass is slightly more than the combined mass of a proton, an electron and a neutrino. If the neutron were very slightly less massive, then it could not decay without energy input. If its mass were lower by 1%, then isolated protons would decay instead of neutrons, and very few atoms heavier than lithium could form.
- The cosmological constant. Perhaps the most startling instance of fine-tuning is the cosmological constant paradox. This derives from the fact that when one calculates, based on known principles of quantum mechanics, the "vacuum energy density" of the universe, focusing on the electromagnetic force, one obtains the incredible result that empty space "weighs" 10^93 g per cubic centimetre (cc). The actual average mass density of the universe, 10^-28 g per cc, differs from theory by 120 orders of magnitude.
Physicists, who have fretted over the cosmological constant paradox for years, have noted that calculations such as the above involve only the electromagnetic force, and so perhaps when the contributions of the other known forces are included, all terms will cancel out to exactly zero, as a consequence of some unknown fundamental principle of physics.
But these hopes were shattered with the 1998 discovery that the expansion of the universe is accelerating, which implied that the cosmological constant must be slightly positive.
This meant that physicists were left to explain the startling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit.
Curiously, this observation is in accord with a prediction made by Nobel laureate and physicist Steven Weinberg in 1987, who argued from basic principles that the cosmological constant must be zero to within one part in roughly 10^120 (and yet be nonzero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or would have recollapsed upon itself long ago.
The Anthropic Principle
In short, numerous features of our universe seem fantastically fine-tuned for the existence of intelligent life. While some physicists still hold out for a "natural" explanation, many others are now coming to grips with the notion that our universe is profoundly unnatural, with no good explanation other than the Anthropic Principle—the universe is in this exceedingly improbable state, because if it weren't, we wouldn't be here to discuss the fact.
They further note that the prevailing "eternal inflation" big bang scenario suggests that our universe is just one pocket in a continuously bifurcating multiverse.
Inflation cosmology, by the way, got a significant experimental boost with the March 17, 2014 announcement that astronomers had discovered gravitational waves, signatures of the big bang inflation, in data collected from telescopes based at the South Pole.
In a similar vein, string theory, the current best candidate for a "theory of everything," predicts an enormous ensemble, numbering 10 to the power 500 by one accounting, of parallel universes. Thus in such a large or even infinite ensemble, we should not be surprised to find ourselves in an exceedingly fine-tuned universe.
But to many scientists, such reasoning is anathema to traditional empirical science. Lee Smolin wrote in his 2006 book The Trouble with Physics:
We physicists need to confront the crisis facing us. A scientific theory [the multiverse/ Anthropic Principle/ string theory paradigm] that makes no predictions and therefore is not subject to experiment can never fail, but such a theory can never succeed either, as long as science stands for knowledge gained from rational argument borne out by evidence.
And even the proponents of such views have some explaining to do. For example, if there are truly infinitely many pocket universes like ours, as physicists argue is the case, how can one possibly define a "probability measure" on such an ensemble? In other words, what does it mean to talk of the "probability" of our universe existing in its observed state?
But others see no alternative to some form of the multiverse and the Anthropic Principle. Physicist Max Tegmark, in his recent book Our Mathematical Universe, argues that not only is the multiverse real, but in fact that the multiverse is mathematics—all mathematical laws and structures actually exist, and are the ultimate stuff of the universe.
Modern science needs philosophy
With this backdrop, a growing number of scientists are calling for head-to-head interactions with philosophers. In a recent New Scientist article, cosmologist Joseph Silk reviews these and other issues now faced by the field, and then notes that such problems, probing the meaning of our very existence, are closely akin to those that have been debated by philosophers through the ages.
Thus perhaps a new dialogue between science and philosophy can bring some badly needed insights into physics and other leading-edge fields such as neurobiology. (Indeed, there is a burgeoning subdiscipline of neurophilosophy.)
As Silk explains, "Drawing the line between philosophy and physics has never been easy. Perhaps it is time to stop trying. The interface is ripe for exploration."
This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).
Our existence in the universe is extremely improbable – or so runs the intuitive impression appealed to by Christian theologians, theistic Hindus, or any artist capitalizing on the emotional resonance of this thesis:
And ask our esteemed panel
Why are we alive?
And here’s how they replied:
You’re what happens when two substances collide
And by all accounts you really should have died. 
What follows is an abridged version of an argument in development as part of a longer essay. Due to space constraints, I will assume a general familiarity with the fine-tuning argument against the atheistic single-universe hypothesis (for a fuller account see Robin Collins, 'A Scientific Argument for the Existence of God,' in M. Murray (ed.), Reason for the Hope Within). In the arguments to follow, we will suggest that the apparently fine-tuned relationship between mathematics, science, and human agents is a more useful kind of evidence in fine-tuning proofs than the cosmological evidence. Since issues of intuition and probability are deeply divisive in fine-tuning arguments, I would love to hear anyone's thoughts on the matter…
To begin: evidence in modern cosmology suggests that the universe is “fine-tuned” for life. The cosmological constants appear to be “like the just-right settings on an old-style radio dial: if the knob were turned just a bit, the clear signal would turn to static” (Manson, 272). Neil Manson gives a helpful Bayesian formalization of the fine-tuning argument to which we will be referring in the following sections. Here, K is the given that the development of life requires fine-tuning; E is the statement “the universe is indeed fine-tuned for life;” and D is the statement “a supernatural designer of immense power and knowledge exists”:
(1) P(E|K & ~D) ≈ 0
(2) P(E|K & D) >> 0
(3) P(D|K) >> P(E|K & ~D)
∴ P(D|E & K) >> 0
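Given the premises, the arithmetic of the schema is straightforward. The sketch below runs the formalization through Bayes' theorem; every numeric value is an assumption chosen purely for demonstration, not a figure from any of the authors discussed:

```python
# A minimal sketch of the Bayesian schema above. All numeric values are
# illustrative assumptions; everything is implicitly conditioned on K.

def posterior(prior_d, p_e_given_d, p_e_given_not_d):
    # Bayes' theorem: P(D|E) = P(E|D)P(D) / [P(E|D)P(D) + P(E|~D)P(~D)]
    numerator = p_e_given_d * prior_d
    denominator = numerator + p_e_given_not_d * (1.0 - prior_d)
    return numerator / denominator

# Premise (1): P(E|K & ~D) is vanishingly small; premise (2): P(E|K & D)
# is substantial. Even a modest prior for D then yields a posterior near 1.
p = posterior(prior_d=0.01, p_e_given_d=0.5, p_e_given_not_d=1e-120)
print(p)
```

On these assumed inputs the posterior is indistinguishable from 1: the likelihood ratio, not the prior, does all the work – which is exactly why the debate below centres on whether premise (1) is meaningful at all.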
But does (1) make probabilistic sense? Can we make claims like P(E|K & ~D) ≈ 0? Arguments against the Fine-Tuning Argument most commonly attack this claim, suggesting either: (1) it is not, after all, so improbable that the universe exists as it does, or (2) the very notion of probability underlying the argument is fallacious. This essay will agree with objection (2) alongside McGrew, McGrew, and Vestrup (henceforth MMV) and Manson against Collins and Monton. However, while we will reject the fine-tuning argument in relation to the cosmological constants, we will argue that the “fine-tuned” relationship of mathematics, science, and rational agents is able to rely on a valid notion of epistemic probability and thus resuscitate the fine-tuning argument.
In the following sections, we will examine three different agents (A, B, C):
(A) Proponent of the fine-tuning argument who accepts its underlying notion of probability
(B) Proponent of the probability theory that underlies the FTA, but objector to the argument on grounds of the weak anthropic principle
(C) Objector to the probability theory underlying the FTA
The fine-tuning of the math-science-agent (MSA) relationship will add intuitive force at (A), defeat the weak anthropic principle in (B), and provide hope of valid use of epistemic probability in (C).
I: Case A: MSA evidence for FTA Proponents
Since the contingency of the mathematics-science-agent relationship may not be immediately clear, let us consider some alternative worlds (proposed by Alvin Plantinga) in which science is useless or inaccessible to embodied moral agents. Plantinga suggests that we can conceive of a world of "atomless gunk with nothing happening;" we can also conceive of worlds of total chaos. But neither of these examples is helpful for our project, since it is impossible (or else extremely tricky) to conceive of embodied moral agents in these kinds of worlds. Even if we can conceive of a humanoid form of life in the latter, the total inability to determine causal relationships excludes the FTA requirement of the possibility of morally significant actions. We can, however, easily conceive of embodied moral agents existing in a world where the degree of accessible scientific truth to which we are accustomed is placed beyond our reach (Plantinga, 28).
Plantinga’s argument echoes the one outlined by Eugene Wigner; both find it odd that effective (i.e. scientifically useful) mathematics should be as accessible as it is (Wigner, 1-14). What’s more, the mathematics useful for science is placed at the furthermost reach of our abilities. And so it has been the case that “[m]athematics and natural science in the West have developed hand in hand, from the Leibniz/Newton discovery of the differential calculus in the seventeenth century to the non-Abelian gauge theory of contemporary quantum chromodynamics” (27). This is certainly a problem for naturalists, or at least Neo-Darwinian reductionists: mathematics of this sort is not required for survival. “Indeed,” Plantinga cracks, “it is only the occasional assistant professor of logic who needs to be able to prove Gödel’s First Incompleteness Theorem in order to survive and reproduce” (Plantinga, 29).
We can foresee a number of potential objections to Plantinga’s notion of probability here, and these will be considered in sections IV and V. But granting for now that our universe is “mathematically fine-tuned” in this way, we can construct a Bayesian inference identical to Manson’s, with the following substitutions: Km is the given that the math-science-agent relationship we observe requires fine-tuning, Em is the statement “the universe is indeed fine-tuned for this math-science-agent relationship,” and D is the statement “a supernatural designer of immense power and knowledge exists”:
(1) P(Em|Km & ~D) ≈ 0
(2) P(Em|Km & D) >> 0
(3) P(D|Km) >> P(Em|Km & ~D)
∴ P(D|Em & Km) >> 0
Thus we have two parallel conclusions for cosmological and mathematical fine-tuning arguments, respectively:
P(D|E & K) >> 0
P(D|Em & Km) >> 0
For our first scenario, our hypothetical agent A – a theist convinced by the FTA and its underlying probability theoretic principles – has just encountered Wigner’s or Plantinga’s argument. Should this new evidence strengthen her conviction? In short, it shouldn’t, but it probably will intuitively. P(E|K & ~D) alone is already infinitesimal, so the MSA argument shouldn’t really affect the validity of the inference. The original Bayesian inference either works or it doesn’t; it is not a matter of the scales being tipped. In other words, our agent will either be convinced to begin with or else never be convinced at all. If our agent is convinced on the cosmological evidence alone, can we imagine that she will sit down, write out P(E|K & ~D) × P(Em|Km & ~D) ≈ 0, and find herself more convinced? It is silly to think so.
But it’s easy to sympathize with its prima facie intuitive pull: although this new evidence doesn’t formally change anything, it certainly has (rightly or wrongly) a gut appeal. We can envision a theist saying, “Even the relationship between math and science is fine-tuned? More reason to accept a probabilistic proof of God!”
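The formal point – that stacking a second near-zero likelihood leaves an already-decided inference where it was – can be checked numerically. The numbers below are my own assumptions for illustration, including the (generous) assumption that the two pieces of evidence are independent so their likelihoods multiply:

```python
# Sketch: conjoining the MSA evidence with the cosmological evidence.
# All numbers are illustrative assumptions, not values from the literature.

def posterior(prior_d, like_d, like_not_d):
    # Standard Bayesian update, implicitly conditioned on the background K.
    num = like_d * prior_d
    return num / (num + like_not_d * (1.0 - prior_d))

# Cosmological evidence alone: P(E|K & ~D) already infinitesimal.
cosmological_only = posterior(0.01, 0.5, 1e-120)

# Conjoined evidence: multiply in an (assumed) pair of MSA likelihoods.
conjoined = posterior(0.01, 0.5 * 0.5, 1e-120 * 1e-30)

print(cosmological_only, conjoined)
```

Both posteriors come out numerically indistinguishable from 1: the extra factor changes nothing, which is just the point that the inference is not a matter of scales being tipped.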
II: Case B: Weak Anthropic Principle (WAP) Objectors
Next, we will briefly consider the case of a weak anthropic principle (WAP) objector to the FTA. According to this principle, we should not consider it unlikely that we observe a universe fine-tuned for life: the only kind of universe in which such an observation could be made is one in which the existence of such conditions has allowed the observer to exist. Accordingly, P(E|K & D) = P(E|K & ~D) = 1.
This is where an alternative kind of fine-tuning evidence is sneakily helpful. The existence of embodied moral agents (and life-permitting conditions in general) requires the cooperation of every category of cosmological fine-tuning evidence. If we maintain the initial conditions of the universe but vary the law of gravity or certain higher-order characteristics of organic molecules, life doesn’t happen. The MSA relationship, on the other hand, remains contingent even once life-permitting conditions are in place. Because of this independence from the cosmological evidence, the MSA relationship is still unlikely across the range of cosmological constants that would allow embodied moral agents to exist. Thus, if the WAP originally gave agent B reason to reject the cosmological evidence of the FTA, the MSA fine-tuning rescues the fine-tuning argument for agent B.
III: Epistemic Probability: an Unsuitable Interpretation
Before we proceed to agent C – who finds epistemic probability unsuited for cosmological FTA claims – we will need to introduce the problem of normalizability. In cases A and B, our agents did not doubt the notion of probability that underlies the fine-tuning argument. It is my contention in this section that the fallacious application of epistemic probability to the fine-tuning argument deals a deathblow to its validity with respect to cosmological constant evidence.
Leslie writes that what is impressive is how our life-permitting world “is at the centre of an otherwise blank area” (Leslie, 142). This analogy certainly appeals to the gut – but is it meaningful? It is this very impression that Manson and MMV successfully undermine. With respect to objective probability, the classic image of a dart hitting a cherry on a massive wall is meaningful if and only if it can be constructed as a probability problem such that the space occupied by the cherry is a calculable percentage of a finite wall. But is this what fine-tuning is like? As Manson observes, “fine-tuning data typically do not say anything at all about probability” – the data suggest only that the cosmic dart would have missed the cherry if something were “different by one part in 10^n,” etc. (278). Can a proper sense of probability be established here?
No – here we agree with the normalizability objectors. Timothy McGrew, Lydia McGrew, and Eric Vestrup’s “normalizability problem” is characterized in the following manner: “there must be a way that the space of possible values for the cosmic parameters can be construed mathematically as a unity” (Manson 280). So, in the case of the cherry, it must make sense to calculate the area covered by the cherry over the total area of the wall. The possibilities must be countably additive, i.e. for n possibilities, P1 + P2 + … + Pn must sum to 1. But according to the principle of indifference, since we have no reason to favor one range of constants over another, an equal probability must be assigned to every possibility. Since all possibilities are weighted the same and there are infinitely many of them, their total must be either 0 (if each is assigned zero probability) or infinite (if each is assigned any positive probability) – it can never be 1.
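The dilemma can be made concrete. Under indifference, each of the n possibilities gets the same weight, and countable additivity demands the weights total 1 – which no fixed weight can satisfy as n grows without bound. A small sketch (the weights are arbitrary stand-ins, assumed for illustration):

```python
# Sketch of the normalizability problem: indifference over an unbounded
# space of possibilities cannot be made to sum to 1.

def total_probability(n_possibilities, weight_each):
    # Countable additivity over n equally weighted, disjoint possibilities.
    return n_possibilities * weight_each

# Any fixed positive weight makes the total diverge as n grows...
for n in (10**6, 10**9, 10**12):
    print(n, total_probability(n, 1e-6))

# ...while a zero weight sums to zero no matter how many possibilities.
print(total_probability(10**12, 0.0))
```

Only a bounded space of possibilities, or a weight that shrinks as the space grows, restores a genuine probability measure – which is precisely what the free cosmic parameters fail to provide.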
The countable additivity problem is fatal for the use of objective probability in the fine-tuning argument. The argument would be rescued if we could determine bounds for the cosmological constants – but this is impossible. Some have tried to suggest that the universe exhibits a bias towards life in the way that Tiger Woods’ golf swing exhibits a bias to hit the fairway (Manson 278). His drive could land anywhere – so why can we make the probabilistic assumption that it will land on the fairway? Because (as Manson acknowledges) his skill creates a bias for certain ranges of possibilities, and his limitations (i.e. his inability to land his drive on the moon) allow us to assign certain ranges P = 0. But it is fruitless to claim that “a bias in favor of life-permitting values likewise operates in connection with the free cosmic parameters” – where would this bias come from (Manson 279)? Is it not equally unlikely?
So objective probability is useless here. But forms of probability do exist that can function without the condition of countable additivity: a group that Swinburne calls inductive probability, and Collins epistemic probability. Manson rejects this kind of probability as “esoteric,” but this isn’t quite fair – varieties of epistemic probability are common to basic scientific principles of verification (281). For example, the theories of common ancestry and continental drift – singular and non-repeatable cases – rely on explanation by epistemic probability. But while we have evidential reasons for assigning certain ranges of constants P = 0 in those cases, we have no such reason to do so in the case of the would-be parameters of cosmological constants. Hence, epistemic probability does not apply.
To illustrate this, imagine constructing a probability distribution for the disappearance of the Mayans. Couldn’t this be explained in an infinite range of ways? Yes, but we have archaeological evidence of migration, climate change, warfare, disease, etc.; even if they disappeared without a trace, we have evidence of these forces in the demise of other societies. Furthermore, since we have no evidence to support other hypotheses from an infinite range of possibilities (alien abduction, for example), we are justified (by the likelihood principle) in assigning these ranges P = 0.
Bradley Monton argues that the most robust notion of probability appropriate to the fine-tuning argument is the subjectivist interpretation. Monton’s probability theoretic arguments are helpful in overcoming the problem of old evidence, but he ultimately fails to show that the existence of a life-permitting universe is evidentially significant except in specific cases where God communicates a limited range of constants.
But Monton’s project is doomed at a more fundamental level. To see this, all we have to do is follow his formalizations (Monton, 2006) and ask: what is his understanding of subjective probability, and how does his subjectivist interpretation narrow the range of constants? Under Monton’s take, it is left to the agent to assign probabilities to certain ranges of constants (Monton 410). But this is totally arbitrary! We have (and can have) no kind of evidence for these ranges, so it cannot be compared to the Mayan civilization or continental drift cases. Monton’s ur-probabilistic approach allows an agent to endorse either
P(G|L) > P(G), or
P(G|L) = P(G),
where L is the proposition that the universe is life-permitting. Certainly this is what Collins meant when he said subjective theory leads to an “extreme form of epistemic relativism” (Abridged 29).
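Collins’ relativism worry is easy to dramatize. In the sketch below, two agents share a prior and the same evidence L but assign different subjective likelihoods to L absent design – and nothing in the subjectivist framework adjudicates between them. All numbers are assumptions for illustration, not values from Monton:

```python
# Sketch of the "epistemic relativism" worry about subjectivist probability.
# Likelihoods are freely chosen by each agent; every number here is assumed.

def update(prior_g, p_l_given_g, p_l_given_not_g):
    # Bayesian update on L, the proposition that the universe is life-permitting.
    num = p_l_given_g * prior_g
    return num / (num + p_l_given_not_g * (1.0 - prior_g))

prior = 0.5  # both agents start from the same prior P(G)

# Agent 1 deems L all but impossible without design: finds P(G|L) > P(G).
agent1 = update(prior, 0.9, 1e-6)

# Agent 2, equally coherently, treats L as uninformative: P(G|L) = P(G).
agent2 = update(prior, 0.9, 0.9)

print(agent1, agent2)
```

Both updates are internally coherent, yet one agent ends near certainty and the other exactly where she began – the “extreme form of epistemic relativism” in miniature.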
In a final exchange with Monton, MMV write that “it is profoundly unsatisfying to stipulate that we can just ‘tell’ which [probability] functions are reasonable and which are not” (MMV, 206, Skeptical View). Monton somewhat absurdly reads hope into MMV’s objection: “On the subjectivist interpretation, as MMV implicitly admit, the argument is not demolished” (411). This response totally misunderstands MMV, who claim:
“If, at a critical point, the argument turns on a subjectively variable sense of which assessments of probabilities are reasonable, a sense that cannot be adjudicated in terms of any more fundamental criteria, then the [fine-tuning argument] is effectively forceless” (MMV, 206, Skeptical View).
Monton’s ultimate conclusion is a major letdown: “Since (in my opinion, at least) belief is not a matter of the will, as long as one does not hold these beliefs, one cannot refute the fine-tuning argument in these ways” (Monton 418).
IV: Case C: MSA Evidence as the Last Hope for the Fine-Tuning Argument
Finally, we will consider whether our considerations about the relationship between mathematics, science, and rational agents can provide a valid probabilistic proof of God. We noted above that this relationship is a problem that naturalists must come to terms with – it is ultimately in the same category of thought that claims the beauty and elegance of mathematics is proof of a divine creator. 
In the abridged version of Collins’ book on fine-tuning, he considers whether we can consider the cosmological constants within the bounds of an “epistemically illuminated range”, an illuminated region of an infinitely vast, dark wall (in the cherry-on-the-wall analogy) (Abridged, 44-5). But his cutoff is arbitrarily placed at the limit of our understanding of physics. For example, Collins sets his limit for considered electromagnetic force ranges at the limit of our ability to describe the accompanying physics.
Collins contends that if we say his argument lacks “probative force because WR [=the range of possible constants] is purportedly infinite, we must draw the counterintuitive consequence that although the fine-tuning argument gets stronger and stronger as WR grows, magically when WR becomes actually infinite, the fine-tuning argument loses all probative force” (Abridged, 52). This is fallacious for the same reasons as above (the constraints applied to the range of cosmological constants are ultimately arbitrary), but it is in this notion of an “illuminated range” that the fine-tuning argument has hope.
Since the MSA evidence is independent of the cosmological evidence, we can take the illuminated range to be the range of constants that permits the development of embodied moral agents. Next, we can weigh the improbability of the MSA relationship with respect to the illuminated range. But doesn’t this notion of improbability run into the same problem as the cosmological evidence? Here is how – if at all – the problem of probability might be circumvented: we might claim that we have evidence for this kind of claim – evidence suggesting that most forms of life function at a cognitive capacity below the threshold needed to do complex, effective mathematics. We can thus conceive of the development of embodied moral agents whose intelligence allows them to make moral decisions but not to develop non-Abelian gauge theory. Again, this is not a frequentist or logical interpretation of probability, but an epistemic one: it utilizes a kind of fine-tuning for which we have evidence.
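The structural difference from the cosmological case can be sketched as follows: once the illuminated range is bounded and measurable, a probability for the MSA sub-range is well-defined, where the unbounded case was not. The interval endpoints below are hypothetical placeholders I have assumed purely to illustrate the form of the calculation:

```python
# Sketch: probability is well-defined on a bounded "illuminated range".
# The interval endpoints are hypothetical placeholders, not measured values.

def uniform_probability(sub_lo, sub_hi, range_lo, range_hi):
    # Under a uniform (indifference) measure on the bounded illuminated
    # range, a sub-range has probability proportional to its length.
    return (sub_hi - sub_lo) / (range_hi - range_lo)

# Assumed: normalize the agent-permitting illuminated range to [0, 1], and
# suppose only a narrow sub-range also permits effective mathematics.
p_msa = uniform_probability(0.0, 0.001, 0.0, 1.0)
print(p_msa)
```

Nothing here fixes the true widths of these ranges – that is the evidential question – but unlike the unbounded cosmological case, the quantity being estimated is at least mathematically coherent.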
MMV’s objections to Collins’ and Monton’s formulations of the fine-tuning argument leave them totally powerless as probabilistic proofs of God. Even if agent C were to think that the power of this essay’s final claim is slight, it is still reason to hope – or at least to carry out further work in this direction. It is only through this kind of fine-tuning evidence that the FTA can be resuscitated.
V: Some Objections
Objection 1: We have no reason to think that the illuminated range of constants near the constants we observe is the only illuminated range. If the cosmological constants all varied by a billion billion orders of magnitude, who’s to say embodied moral agents couldn’t exist in a different form at a different point on the spectrum?
This is, I think, the toughest obstacle to overcome if agent C is to accept the FTA. Because of the limited scope of our theories in physics [see again endnote 3], we cannot guess whether or not the big bang would even occur past a certain limit in the constants. But my knowledge of the relevant cosmology (as well as my knowledge about the requisites of an embodied moral agent) is inadequate here.
Objection 2: In any kind of world in which embodied moral agents exist, we should expect some form of effective (i.e. scientifically useful) mathematics to be accessible.
To the contrary, many (Galileo, Einstein, Wigner, Plantinga) have thought it odd that math should be able to map the world so well. The intuition underlying Objection 2 is a product of the fact that Euclidean geometry appears to correspond perfectly to our world. This is not something we can know a priori: Lobachevsky’s non-Euclidean geometry proved that “only empirical observation can decide whether Euclidean geometry is true of actual space” (“Logical Positivism” 369). Again, pure math relates to an abstract world.
But the objection raises an interesting question – can’t we at least expect arithmetic to be effective in any world that supports embodied moral agents?
Objection 3: We should not expect God to fine-tune the relationship between mathematics, science, and human cognition.
On the contrary, many theists might hold that God wants us to develop and learn about the world he created. Furthermore, if God creates humans in his image, then “Science is a prime example of this image in us: science requires our very best efforts—our very best communal efforts—and it delivers magnificent results” (Plantinga, 28). Psalm 19:1 reads, “The heavens are telling the glory of God; and the firmament proclaims his handiwork.” Many scientists have taken this very seriously – Paul Dirac, for example, said, “God is a mathematician of a very high order and He used advanced mathematics in constructing the universe.”
 Lyrics from Andrew Bird’s “A Nervous Tic Motion of the Head to the Left,” The Mysterious Production of Eggs. 2005.
 Collins has promised to address the “beauty” of mathematics in his forthcoming book. The beauty of mathematics has long been influential in creative processes in mathematics, and even in theistic belief. Einstein is supposed to have quipped to Reichenbach that he trusted in the veracity of his general theory of relativity because of its beauty (even before the evidence of Mercury’s anomalous perihelion precession or the solar eclipse of 1919). G.H. Hardy’s A Mathematician’s Apology (1940) presents a version of the same thesis. In the sciences, Collins notes the example of Nobel Prize-winning physicist Steven Weinberg, who “devotes a whole chapter of his book Dreams of a Final Theory (Chapter 6, ‘Beautiful Theories’) explaining how the criteria of beauty and elegance are commonly used to guide physicists in formulating the right laws” (Collins, ‘God, Design, and Fine-Tuning’ 20).
 Collins: “The so-called Planck scale is often assumed to be the cutoff for the applicability of the strong, weak, and electromagnetic forces. This is the scale at which unknown quantum gravity effects are suspected to take place thus invalidating certain foundational assumptions on which current quantum field theories are based, such as continuous space-time. (For example, see Sahni and Starobinsky, 1999, p. 44; Peacock, p. 275.) The Planck scale occurs at the energy of 10^19 GeV (billion electron volts), which is roughly 10^21 times higher than the binding energies of protons and neutrons in a nucleus. This means that we could expect a new physics to begin to come into play if the strength of the strong force were increased by more than a factor of ~10^21” (Abridged, 49).
Collins, Robin. “Abridged Version of Fine-Tuning Book: Teleological Argument: an Exploration of the Fine-Tuning of the Cosmos” 2008. http://home.messiah.edu/~rcollins/Fine-tuning/FT.HTM
Collins, Robin. 2002. ‘God, Design, and Fine-Tuning.’ http://home.messiah.edu/~rcollins/Fine-tuning/FT.HTM.
Dirac, Paul. ‘The Evolution of the Physicist’s Picture of Nature.’ Scientific American 208:5 (May 1963): 53.
Manson, Neil. ‘The Fine-Tuning Argument.’ Philosophy Compass 4/1 (2009): 271–286.
McGrew, Timothy, Lydia McGrew, and Eric Vestrup. ‘Probabilities and the Fine-Tuning Argument: A Skeptical View’. God and Design: The Teleological Argument and Modern Science. Ed. Neil A. Manson. New York, NY: Routledge, 2003. 200–8.
Monton, Bradley. “God, Fine-Tuning, and the Problem of Old Evidence.” Brit. J. Phil. Sci. 57 (2006), 405–424
Russell, Bertrand. “Logical Positivism.” Logic and Knowledge. 1956.
Wigner, Eugene, ‘‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences,’’ Communications in Pure and Applied Mathematics 13:1 (February 1960): 1–14.