Part I: The fine-tuning of the universe for life (Geraint F. Lewis and Luke A. Barnes)
The story of fine-tuning for life starts with a perfectly innocent question, at least to a physicist. When trying to understand a physical system, be it a ball rolling down a slope or galaxies rushing through the cosmos, we want to build a mathematical model that predicts our current and future observations. Logical arguments and qualitative principles are great, but precise numerical predictions are the gold standard.
Broadly speaking, we build mathematical models out of three pieces. First, there are the laws of nature. These usually take the form of (or lead to) differential equations, which means that they do not tell us how the universe is; they tell us how the universe changes. Second, we need to understand the initial conditions (ICs) of the system. For example, if you want to predict the next solar eclipse, you need to know where the Sun, Earth, Moon and planets are at some particular point in time, as well as their masses and velocities. Because the laws only tell us how matter changes, we need to provide them with a starting point.
The third piece is the one that we will focus on here. When we write down the equation of a particular law of nature – for example, the standard model of particle physics, or the standard model of cosmology (together, ‘the standard models’) – we need to include a particular set of free parameters. These are simply numbers; everything else in the equation is more mathematically sophisticated, like a function, a differential operator, or a tensor. What makes these numbers the subject of our focus is that, unlike the predictions that we get out of the equations, these parameters go into the equation. That is why they are free parameters: the model does not tell us why they have the value that they have. We cannot calculate them, but we can measure them. (More precisely, they cannot be calculated from the equations in which they appear as a free parameter.) In our deepest laws of physics, these numbers are called ‘the fundamental constants of nature’, and without them, physics is sterile!
To summarise: laws, ICs, and fundamental constants.
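As a toy illustration of these three pieces, consider a ball dropped from rest. In the Python sketch below, the law is a differential equation for how height and velocity change, the ICs are the starting height and velocity, and the local gravitational acceleration g plays the role of a free parameter: we measure it, we do not derive it from the model.

```python
import numpy as np

# Toy illustration of the three pieces of a physical model:
# a law (a differential equation), initial conditions, and a free parameter.
g = 9.81                        # free parameter: measured, not derived from the model

def law(state):
    """The 'law': how the state (height, velocity) of a falling ball changes."""
    h, v = state
    return np.array([v, -g])    # dh/dt = v, dv/dt = -g

state = np.array([10.0, 0.0])   # initial conditions: dropped from 10 m, at rest
dt, t = 0.001, 0.0
while state[0] > 0:             # integrate the law forward from the ICs
    state = state + dt * law(state)
    t += dt
print(f"With g = {g}, the ball lands after ~{t:.2f} s")   # ~1.4 s
```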
About five decades ago, physicists started asking themselves a familiar question in a new context. As the details of the standard models began to fall into place, they did what they always do with all free parameters: they wondered what would happen if they varied them. Because they are free parameters, we want to know how sensitively the predictions of our model depend on them. All other things being equal, a model that can fit the data only for a narrow range of its free parameters is less favoured than a model that can fit the data over a wider range.
It is important to note what we are not doing at this point. By considering differing fundamental constants, we are playing a game of alternative history, wondering how the life of the universe would have played out with a different combination of fundamental constants. We are not demanding that these alternative universes actually exist.
So, we return to the core question: what would happen in a universe with the same laws as ours but different fundamental constants and ICs?
The short answer: we found extreme sensitivity. Compared to the range of values of these constants and ICs that are mathematically consistent with the equations, very small changes make dramatic changes to the resultant universe. Almost all of the other universes would be substantially simpler than our universe, with almost no capacity to build the basic components of the universe into complex entities. This precludes any known or conceivable form of life.
The basic properties of the universe, of our universe, need to be fine-tuned to allow the complexity necessary to produce a life-permitting cosmos.
This connection between our own existence and fundamental physics was unexpected. In the words of Freeman Dyson (1971),
‘As we look out into the universe and identify the many accidents of physics and astronomy that have worked together to our benefit, it almost seems as if the universe must in some sense have known that we were coming.’
Cosmological fine-tuning: the evidence
In this section, we will review some of the cases of fine-tuning: that is, considering the effects of changes in the fundamental constants of nature and ICs. Combining constants and ICs for convenience, there are thirty-one constants in the standard models: twenty-five from particle physics, and six from cosmology.
There is a subtlety here: this does not include four constants that are used to establish a system of units of mass, time, distance, and temperature. These are Newton's gravitational constant G, the speed of light c, Planck's constant ħ, and Boltzmann's constant k_B. Altering one of these constants would merely make an indistinguishable scale model of our universe.
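To see concretely how these four constants merely set scales, we can compute the Planck units built from them; the sketch below uses standard textbook values and simply evaluates the usual combinations of G, c, ħ, and k_B.

```python
import math

# Standard (approximate) SI values, used here purely as illustrative inputs.
G    = 6.674e-11      # m^3 kg^-1 s^-2, Newton's gravitational constant
c    = 2.998e8        # m s^-1, speed of light
hbar = 1.055e-34      # J s, reduced Planck constant
k_B  = 1.381e-23      # J K^-1, Boltzmann's constant

# Planck units: the natural scales built solely from the four constants above.
planck_mass   = math.sqrt(hbar * c / G)           # ~2.18e-8 kg
planck_length = math.sqrt(hbar * G / c**3)        # ~1.62e-35 m
planck_time   = math.sqrt(hbar * G / c**5)        # ~5.39e-44 s
planck_temp   = math.sqrt(hbar * c**5 / G) / k_B  # ~1.42e32 K

print(f"Planck mass:   {planck_mass:.3e} kg")
print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck temp:   {planck_temp:.3e} K")
```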
The technical details of the cases below are summarised in Barnes (2019) and references therein and more fully expounded in Lewis and Barnes (2016). Here, we describe the physics at a less precise, more intuitive level.
The cosmological constant
The discovery of the Cosmological Constant at the close of the twentieth century shocked and rocked the world of physics. In the decades before, astronomers had been on the quest to measure the expansion history of the universe, in particular, how much it has slowed from its immensely rapid growth in its youth to its more sedate expansion today. The deceleration rate of expansion depends on the quantity of matter in the cosmos, and so this quest would reveal not only the history of the universe but also what the cosmos is made of.
However, astronomers did not discover that the universe is decelerating. In fact, they found that the expansion of the universe is accelerating, getting faster and faster. This posed a conundrum, as matter can only act as a brake on expansion, and so some other stuff, some other energy, must be present in the universe. Within their equations, the physicists found that there was room for such stuff, which they called the Cosmological Constant or Dark Energy. However, the properties of the Cosmological Constant are radically different to normal matter, providing a physical tension at every point of space. The other shock about dark energy regarded how much there was: this tension in space accounts for seventy per cent of all the energy in the universe today.
Dark energy is hidden from telescopes as it does not interact with light; its influence is only felt on cosmic scales. But what is it? Physicists realised that there was what seemed like an obvious culprit in their equations, namely that dark energy is a manifestation of what is known as the quantum vacuum.
The quantum vacuum is a consequence of one of the strange properties of quantum mechanics, namely the Heisenberg uncertainty principle. This demands that particular pairs of properties of a particle, such as position and velocity, are intrinsically linked: the more we know about one, the less we can know about the other. The same holds true for energy and time, and it means that, for short periods of time, energy can be brought into existence before melting away again. This popping of energy in and out of nothing means that empty space is not truly empty. Every chunk of empty space seethes with quantum energy, and this quantum vacuum possesses a tension just like dark energy.
Quantum fields describe the contents of the standard model of particle physics. With these equations, physicists can calculate the expected energy density of the quantum vacuum and compare it to what is required to explain the observations of the accelerating universe. Often such comparisons between theory and observations give very good agreement, but sometimes the results are within a factor of a few, suggesting that something might be missing from the calculations. How well did the physicists do in predicting the observed energy density of the Cosmological Constant?
There was not an exact match. And the results were not within a factor of a few. In fact, the prediction differed from the observed energy density by a huge amount, a factor of 10^120, a one followed by one hundred and twenty zeros! This discrepancy has been referred to as the biggest embarrassment in all of physics. But with a little thought, the physicists realised this was much more a blessing than an embarrassment.
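The size of the mismatch can be illustrated with a back-of-the-envelope estimate: take the naive quantum prediction to be the Planck density (a crude cutoff, and only one of several conventions) and compare it with the dark energy density inferred from the observed expansion. The exact exponent depends on the convention chosen, but the ratio lands in the same staggering ballpark as the factor of 10^120 quoted above.

```python
import math

# Rough, order-of-magnitude comparison (assumed textbook values, SI units).
G    = 6.674e-11        # m^3 kg^-1 s^-2
c    = 2.998e8          # m s^-1
hbar = 1.055e-34        # J s

# Naive quantum estimate: cut the vacuum energy off at the Planck scale,
# giving roughly the Planck mass density.
rho_planck = c**5 / (hbar * G**2)              # ~5e96 kg m^-3

# Observed dark energy density: ~70% of the critical density for H0 ~ 70 km/s/Mpc.
H0 = 70 * 1000 / 3.086e22                      # s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)       # ~9e-27 kg m^-3
rho_lambda = 0.7 * rho_crit

print(f"naive prediction: {rho_planck:.1e} kg/m^3")
print(f"observed value:   {rho_lambda:.1e} kg/m^3")
print(f"mismatch factor:  {rho_planck / rho_lambda:.1e}")  # ~1e123 with this crude cutoff
```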
If the universe was born with the seemingly natural density of dark energy, the value calculated from quantum theories, its influence on expansion would have been dramatic. In the blink of an eye, the universe would have expanded immensely, thinning out matter to an extraordinarily low density. With maybe one electron per observable universe, the chances of forming atoms, molecules, planets, and people would vanish. In this alternate history, our universe would be dead and sterile. If the amount of dark energy had been naturally sized but negative, the same blink-of-an-eye would have collapsed the universe into a big crunch.
The question becomes: why was our universe born with a minuscule amount of dark energy? Perhaps there is another physical process that counters the immense accelerating effect of dark energy. If this process had been perfect, effectively countering all of the influence of dark energy, physicists would have been happy. Instead, this process would have to be extremely precise to shave off all but one part in 10^120. If this shaving were slightly less precise, say one part in 10^116, the dark energy density at the start of the universe would be ten thousand times greater than it is, and the universe would have been doomed, expanded into nothingness before any structure could form. Changing other properties of the universe gives us a little more wiggle room to produce a habitable cosmos. But this density of dark energy is still unfathomably smaller than the natural value predicted from quantum theories.
The presence of dark energy in our universe, in particular its source, remains a mystery, but the most crucial question is why there is so little of it. If there were ten or a hundred times more dark energy, there would still be a window for stars to form and life to emerge. But with much more than this, we simply would not be here to ponder this question.
The Higgs vacuum expectation value
One of the most important parameters in the standard model of particle physics is called the Higgs vacuum expectation value, a quantity that physicists cutely refer to as vev. Vev is important for mass, perhaps the most basic property of a fundamental particle. Some particles, like the photon, are massless. Other particles have a mass due to the Higgs mechanism.
To consider a specific example, the Higgs mechanism gives the electron its mass as follows. The Higgs field fills the entire universe. Massless particles fly through the Higgs field unaffected. Electrons, by contrast, interact with the Higgs field as they move around. This means that it takes some force to push electrons through the Higgs field; the stronger the interaction, the more force is required, and the heavier the particle is. Different particles interact more or less strongly with the Higgs field.
Imagine, if it helps, an electron wading through a river. In this (very inexact) analogy, the Higgs vev can be thought of as the depth of the river. The higher the water level, the harder it is for all the massive particles to wade through it. If we doubled the Higgs vev, every fundamental particle in the universe would become twice as heavy.
Would this not just make everything twice as heavy? No, because mass is only one form of energy, and the other forms do not simply scale along with it. As described by Einstein's famous equation E = mc², mass is a form of energy. Changing the masses of the fundamental particles affects the energy budget of the universe. It alters which processes are allowed (those which conserve energy) and which are not. This can make the difference between a system being unstable (falling apart is allowed) and stable.
If the Higgs vev is too large, then all atomic nuclei are unstable. Protons and neutrons will not stick to each other. The periodic table is erased. The only atom that can be made is hydrogen: a single proton with an electron in orbit. The only stable chemical compound is H2, molecular hydrogen.
(By contrast, in our universe, protons and neutrons can be arranged into 92 naturally occurring elements with 251 stable isotopes. The PubChem database lists 119,321,246 known chemical compounds.)
This change to the fundamental properties of our universe dramatically decreases its ability to support complexity. Instead of astounding biochemical systems, we get one atom and one compound, which can only be arranged into a geometrical shape known as a ‘pile’.
If the Higgs vev is too small, then the hydrogen atom will capture its orbiting electron, transforming the proton into a neutron. After about a minute in the early cosmos, the universe contains only neutrons. This universe is even worse than the last one: zero chemical elements, zero atoms, zero molecules, zero structure.
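To make the small-vev disaster slightly more concrete, here is a deliberately crude toy model, not the detailed treatment in Lewis and Barnes (2016). It splits the neutron-proton mass difference into a quark-mass piece, which scales with the vev, and an electromagnetic piece, which roughly does not, and asks when hydrogen becomes unstable to electron capture.

```python
# Toy model: how scaling the Higgs vev by a factor x affects the stability of
# hydrogen against electron capture, p + e- -> n + nu, which becomes allowed
# when m_p + m_e > m_n, i.e. when (m_n - m_p) < m_e.
#
# Rough inputs (MeV); the split of m_n - m_p into a quark-mass piece (scales
# with the vev) and an electromagnetic piece (does not) is only approximate.
QUARK_PIECE_MEV = 2.5    # ~ contribution of (m_d - m_u), scales with the vev
EM_PIECE_MEV    = -1.2   # ~ electromagnetic contribution, roughly vev-independent
M_E_MEV         = 0.511  # electron mass in our universe

def hydrogen_stable(x: float) -> bool:
    """Return True if hydrogen survives when the vev is scaled by x."""
    delta_np = QUARK_PIECE_MEV * x + EM_PIECE_MEV   # m_n - m_p at vev scale x
    m_e = M_E_MEV * x
    return delta_np > m_e                           # electron capture forbidden

for x in (1.0, 0.8, 0.6, 0.4):
    verdict = "stable" if hydrogen_stable(x) else "unstable -> neutrons"
    print(f"vev x{x:.1f}: hydrogen {verdict}")
```

In this toy model hydrogen survives only while the vev stays above roughly sixty per cent of its observed value; the realistic calculation differs in its numbers, but it tells the same qualitative story.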
Note that this has very little to do with life-as-we-know-it, or carbon-based life, or debates about the line between life and non-life. We could explore these boundaries, to delineate more precisely the line between life-permitting and life-prohibiting. But even very conservative limits, between universes that can stick things together and universes that cannot, put tight constraints on the fundamental parameters. Compared to the range of values of the Higgs vev that are allowed in the model – up to the Planck scale – the range of values that permits any significant complexity is less than one part in 10^16.
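The 'one part in 10^16' figure comes from comparing the life-permitting window for the vev, of order its observed value of 246 GeV, with the full range available in the model, which extends up to the Planck scale. The ratio of those two scales is easy to check:

```python
vev_GeV    = 246.0     # observed Higgs vacuum expectation value
planck_GeV = 1.22e19   # Planck scale (only the order of magnitude matters here)
print(f"vev / Planck scale ~ {vev_GeV / planck_GeV:.0e}")  # ~2e-17
```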
The masses of the up quark, down quark, and electron
Even if the universe gets the Higgs vev right, it still needs to get the relative masses of the up quark, down quark, and electron right, in order for complexity and life to be possible. The Higgs vev sets a typical mass scale, but other parameters (the Yukawa couplings) give the particle masses relative to typical. Returning to the river analogy, this is the difference between someone who wades with difficulty because of their short legs (heavy particle) and someone who wades with ease because of their long legs (light particle).
A variety of interesting disasters await those who mess with these dials. We can make universes in which the proton and neutron are replaced by a particle called the Δ++ (or, with other parameters, the Δ−). And that is where the excitement ends, because these particles will not stick to anything, including themselves.
In addition, we have more ways to make the neutron-only universe, and a few ways to erase various important parts of the periodic table. We can even mess with the fuel for stars, ensuring that no gravitationally bound collection of matter can stably sustain itself using nuclear reactions. These constraints can be plotted out in three dimensions (see Lewis and Barnes, chapter 7), leaving life a small window of opportunity.
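The kind of parameter scan behind such plots can be sketched schematically. The toy code below uses two drastically simplified stand-in criteria rather than the genuine nuclear-physics conditions of Lewis and Barnes (2016); it is meant only to show the shape of the exercise: scan a grid of up-quark, down-quark, and electron masses and flag which combinations keep both hydrogen and heavier nuclei viable.

```python
import itertools

# Crude stand-in criteria (illustrative only):
#   1. hydrogen survives electron capture:  (m_n - m_p) > m_e
#   2. neutrons survive inside nuclei:      (m_n - m_p) - m_e < ~ nuclear binding
EM_PIECE_MEV = -1.2     # rough electromagnetic contribution to m_n - m_p
BINDING_MEV  = 3.0      # rough per-nucleon binding scale

def habitable(m_u, m_d, m_e):
    delta_np = (m_d - m_u) + EM_PIECE_MEV          # crude m_n - m_p (MeV)
    hydrogen_ok = delta_np > m_e                   # otherwise: neutron-only universe
    nuclei_ok   = (delta_np - m_e) < BINDING_MEV   # otherwise: hydrogen-only universe
    return hydrogen_ok and nuclei_ok

masses = [0.5, 2.0, 5.0, 10.0, 20.0]               # MeV grid for each parameter
grid = list(itertools.product(masses, masses, masses))
ok = sum(habitable(mu, md, me) for mu, md, me in grid)
print(f"{ok} of {len(grid)} toy universes pass both criteria")
```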
The force coupling constants
Over the course of the scientific revolution, our picture of the forces of nature became simpler. From a complex mess of friction, pressure, push, and pull, physicists had realised by the mid-twentieth century that there are only four fundamental forces in the universe. Everyday life is governed by two of these: gravity, which is attempting to drag your mass to the centre of the Earth, and electromagnetism, which prevents this. You do not fall through the floor due to the electromagnetic repulsion between the electrons in the outer atoms of your feet and those that make up the ground.
Rounding off the fundamental forces are the strong and weak nuclear forces. As their name suggests, these forces operate on the subatomic level, with the strong force holding the nuclei of your atoms together. The weak force is perhaps the strangest of the lot and is responsible for aspects of radioactivity.
Each force has its own fundamental constant, a number that tells you how strong this force is relative to the others. In our universe, the nuclear strong force reigns supreme, with electromagnetism being about a hundred times weaker. The weak force is a trillion times weaker again, with gravity coming in at a paltry trillion trillion trillion times weaker than the strong force.
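Exactly how one quantifies 'strength' depends on convention, but the hierarchy itself is easy to exhibit. The snippet below compares the dimensionless electromagnetic coupling (the fine-structure constant) with the analogous gravitational coupling between two protons.

```python
import math

# Illustrative inputs (approximate SI values); the point is the hierarchy,
# not the precise numbers.
e    = 1.602e-19     # C, elementary charge
eps0 = 8.854e-12     # F/m, vacuum permittivity
hbar = 1.055e-34     # J s
c    = 2.998e8       # m/s
G    = 6.674e-11     # m^3 kg^-1 s^-2
m_p  = 1.673e-27     # kg, proton mass

alpha_em   = e**2 / (4 * math.pi * eps0 * hbar * c)  # fine-structure constant ~ 1/137
alpha_grav = G * m_p**2 / (hbar * c)                 # gravitational coupling of two protons

print(f"electromagnetic coupling: {alpha_em:.3e}")            # ~7.3e-3
print(f"gravitational coupling:   {alpha_grav:.3e}")          # ~5.9e-39
print(f"ratio EM/gravity:         {alpha_em/alpha_grav:.1e}") # ~1e36
```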
The evolution of the universe is the story of ongoing interactions of matter and radiation, mediated by fundamental forces. A perfect example of this is a star, a massive object whose own self-gravity is trying to crush it down to a point. The electromagnetic repulsion of the plasma provides a pressure to resist this, but the gravitational squeezing drives the central temperature up to millions of degrees. High temperatures mean that atomic nuclei in the plasma are hurtling about at immense speeds, and in the subsequent violent collisions, nuclei get close enough for the strong force, which only acts over a very limited distance, to bind nuclei together. This is nuclear fusion, a process that releases energy that floods outwards from the core and provides extra support against gravity. The weak force is also at play, crucially flipping the identities of particles during the nuclear interactions.
So a steadily shining star is the balance between the fundamental forces. If the constants that determine the force strengths were different, the lifetimes of stars would also be different. Increasing the relative strength of gravity closes the window for stable stars: most stars become unstable, violently pulsing rather than burning sedately for billions of years. Weakening the strong force can unbind the products of stellar burning, shutting off the nuclear heat source; stars will only briefly shine as their gas radiates away its ambient heat. And without the weak force, protons in stars would simply bounce off each other, never becoming bound nuclei.
We find that, in terms of the fundamental forces, our universe possesses a combination of strengths that allows for long-lived stable stars and a quiescent stellar environment suitable for forming planets, complex molecules, and life. But the lives of stars depend upon more than the fundamental forces; they also depend on other properties, such as the masses and charges of the particles that make up matter. Taking all of these into account, our universe sits in a vanishingly small volume of possible values, at a point that allows us to be here and wonder about the universe.
The question is: why do we find ourselves at this special point among all possible universes?
Scalar fluctuation amplitude
One of the most astounding moments of the 1960s was the discovery of the cosmic microwave background (CMB). This radiation, which fills all of space, is the leftover fireball of the initial moments of the universe, cooling down over billions of years as the universe expanded. The radiation we detect comes from a time almost 400,000 years after the Big Bang, a point where the universe transitioned from an opaque plasma to a transparent sea of neutral atoms. The CMB is, therefore, a snapshot of the cosmos before the first stars burst into light.
Astronomers discovered that the radiation of the CMB is not completely smooth across the sky. There are small ripples, with a magnitude of one part in a hundred thousand, that dapple the otherwise featureless CMB sky. It was realised that these ripples reflected the state of the universe when the CMB radiation was emitted and mapped out the distribution of gravity at these early epochs. In the initial hot plasma, gravity had started to drag matter together into the sites where galaxies would eventually form, and this map of clustering regions and emptying voids was imprinted on the sky.
The ultimate source of these ripples is thought to lie in inflation, a bout of super-rapid expansion that took place when the universe was only 10^−36 seconds old. There are a number of reasons why cosmologists think that this inflation took place, but for this story the important aspect is that the microscopic fluctuations of the quantum vacuum that we met previously were blown up to macroscopic scales and frozen into the matter distribution of the universe as subtle inhomogeneities. These lumps of matter had grown to one part in a hundred thousand by the time the universe cooled enough to become transparent to light.
Inflation is not fully understood, and cosmologists struggle to understand how it started, how it ended, and what was the source of energy that drove it. This uncertainty is reflected in the possible outcomes of the scale of the quantum inhomogeneities writ large across the sky.
If the scale of inhomogeneities were small, less than about one part in a million or so, the universe would have begun a lot smoother than it did. Gravity would still be doing its magic, drawing matter together, but starting from a smoother state would delay the formation of galaxies and stars. This delay allows the onset of dark energy and acceleration, which drives matter further and further apart. Reducing the scale of inhomogeneities eventually reaches the point where the universe empties out before the first star can form, and life never has a chance.
If reducing the scale of the inhomogeneities is so detrimental, you might feel that increasing it would benefit the growth of galaxies and stars and the emergence of life. But speeding up galaxy formation has its own downside, as matter is driven together more quickly, pouring into ultradense cores and eventually black holes. Black holes are the ultimate trap for matter, and any atoms that could have been destined for stars and life are locked up forever in the black hole's gravitational prison. If the scale of inhomogeneity were large enough, the intermediate stage of stars and galaxies would be skipped entirely, and essentially all of the matter emerging from inflation would end up locked in black holes. Again, the existence of life becomes impossible.
So why was inflation just right? Why did its burst of rapid expansion last for just the right amount of time to come to a close with matter inhomogeneities at a level of one part in a hundred thousand? Cosmologists are at a loss to account for this fine-tuning of the earliest epochs of the universe.
There is a glib answer to why the universe appears to be fine-tuned: if it were any different, we would not be here to ask the question. But let us now explore the perspectives, both scientific and theological, on the deeper issues surrounding fine-tuning.
Part II: Explaining fine-tuning – three perspectives
Geraint F. Lewis: Cosmological fine-tuning from a scientific perspective
If we approach the question of fine-tuning from a scientific perspective, a perspective that does not call on the supernatural as an explanation, what are the possibilities?
The immediate response of many physicists is that this is the only universe we know and can seemingly study, so we should just take the laws of physics as given and attempt to understand nature within this framework. Perhaps, if we work harder, we will ultimately uncover the true nature of reality in the form of a theory of everything, and, hopefully, within its mathematics we will find no free parameters. Maybe all of the constants of nature will be expressible in pure mathematics, combinations of pi and e and other mathematical structures. With no freedom, surely there is no longer a question of fine-tuning.
This is a rather naive approach to the question, as we would still like to know why the universe decided on a particular set of mathematics. We know our universe had some sort of creation event, although we do not know if this was the ultimate point of creation, and we do not know if, in the act of creation, our universe had to be imbued with the mathematics it has, or whether there was any choice in the possible mathematics of the universe. Choice here does not mean that there is a chooser, but rather that there are some sort of stochastic processes that assign the set of mathematics at cosmic birth. We have moved more into metaphysics than physics at this point, so let us explore the possibilities of just how the universe came into being.
There are no robust theories of cosmic birth, but there are a lot of suggestions and hunches. There are ideas of evolutionary universes that give birth to daughter universes with the characteristics of their mother, converging on particular properties that result in more and more cosmic daughters. There are more outlandish ideas, such as the simulation hypothesis, in which our universe is nothing but a computer simulation playing out in some higher dimension. On this view, some higher dimensional being, maybe a higher dimensional graduate student, chose the laws of physics to produce interesting results. However, it is safe to say that most cosmologists see these as quirky ideas rather than potential solutions to fine-tuning.
This is where we encounter the concept of the multiverse, this author's (GFL) favoured answer to the question of fine-tuning. The multiverse is the idea that our universe is not alone but is part of a potentially infinite ensemble of other universes. The operation of the multiverse remains unknown but might be related to the burst of inflation at cosmic birth, with individual universes condensing in patches within the overall hyper-inflating space-time. The idea is that, as each universe condenses, its laws of physics crystallise out of the inflationary maelstrom. Just as no two snowflakes are identical, no two universes in this process get written with identical laws of physics, and across the multiverse there are universes with immensely strong gravity, or non-existent electromagnetism, or electrons as massive as a basketball. The physics in the majority of universes will be nothing like the physics of our own.
As we have seen from the exploration of fine-tuning, we only have to vary the laws of physics a little from our own and the resultant universe would be uninhabitable. Universes like our own, universes that can host complexity and life, will be vanishingly rare in the ensemble. Over its immensity, the multiverse is a graveyard of universes that did not make it, vastnesses with no prospects for life. Here and there, nestled amongst the gravestones, we will find a couple of universes like our own, beacons of complexity in the silence. Of course, we expect to find ourselves in a universe that has the physics that permits us to be here, a simple anthropic selection, for where else could we be?
However, the notion of the multiverse can elicit violent objections from some physicists. It seems so wasteful to posit all of these other universes, universes that seem causally disconnected from our own and so beyond the bounds of science. Why is suggesting the multiverse different from suggesting a god to solve your physics problems? But I will close this section by noting that the multiverse, in its current form, is not a theory. It is barely a hypothesis, just a bunch of ideas and guesswork. Cosmologists are attempting to stitch these pieces into a scientific theory, a theory of cosmological origin, and then, like all physical theories, the multiverse will have to run the full gauntlet of scientific inquiry before it is accepted or rejected. But if I were to bet, I would not write off the multiverse as the ultimate reality underpinning our cosmos.
Luke A. Barnes: Cosmological fine-tuning from a theistic perspective
Recall Freeman Dyson’s words, quoted above: ‘the universe must in some sense have known that we were coming.’ In a similar vein, astrophysicist Fred Hoyle wrote in 1981,
‘A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature’ (Hoyle 1981).
But can we take this conclusion seriously?
Conversations about theism and science are often hamstrung by a conception of God that no one actually subscribes to. God is imagined to be a man with a beard on a cloud, who exists somewhere outside of (but also somehow attached to) a universe that does not need his input, but who nevertheless has a variety of old-fashioned moral opinions and is generally cranky at scientists, young people, and everyone else.
This will not do. To take the idea of God seriously, we cannot just staple a sky-fairy onto a naturalistic universe.
Let us think more carefully about two possibilities. Naturalism proposes that matter (plus energy, space, time, and whatever else you think physical stuff is) is the most fundamental reality. The natural, material world just is. It could have been different, it could have not existed at all, but it is all there is to reality. Once your explanation gets to the fundamental laws of nature, that is it. You get what you get, and you do not get upset.
On theism, by contrast, ultimate reality is a mind. The deepest level of reality is the kind of thing that has conscious experiences – thoughts, experiences, reasons, decisions. The natural world exists because it is the creation of God. For this reason, the physical universe is purposeful, rational, contingent, and orderly. It is part of a bigger story.
If we are just comparing these two possibilities – naturalism and theism – a crucial question arises, the kind of question we ask of any big-picture theory about reality: is the universe we inhabit the kind of universe that we would expect? If we could stand in front of a chalkboard that displayed the ultimate laws of nature, would we be satisfied to say ‘that’s just the way it is’? Or would we want more explanation than science – the study of interconnections and order within nature – can provide?
We could try to decide this with our imagination. Perhaps I imagine that a typical naturalistic universe is dead; meanwhile, you imagine that a typical naturalistic universe will evolve structure and complexity. Perhaps I imagine that a typical theistic universe will demonstrate God’s power and immensity, whilst providing an environment for genuine moral action and growth; meanwhile, you imagine that a typical theistic universe would be lakeside lawns occupied only by us and maximally fluffy bunnies. This does not seem to be very productive.
The fine-tuning of the universe for life presents the opportunity to address these questions systematically. Let’s not merely imagine universes; let’s model them. Let us use the best physics we have – the standard models – to represent a given possible universe. And let us theoretically generate other possible universes by varying something in those models, namely, the fundamental constants and ICs. This gives us a chance to systematically explore other universes, by varying free parameters over their ranges consistent with the models. It is practical: varying the forms of the equations is just too difficult. (Remember: someone actually has to do the theoretical physics!) The standard models are not the ultimate laws of nature on the chalkboard, but they are the best we have got. Further, they are not obviously biased for or against life a priori, because we have chosen them for their empirical adequacy.
So, now we have a controlled approach to the question of what kind of universe is typical. We have something to compare our universe to. We have a set of possible universes to examine, to see whether there is anything noteworthy, anything rare, anything suggestive of a bigger story, in the way our universe is. Are we taking our universe’s complexity for granted? Is this just any old universe?
If the kind of complexity that we find in our universe – say, the periodic table, stars and planets – were available in one form or another across most other possible universes, then we could conclude that no bigger story is suggested by physics. Bertrand Russell wrote in 1929 that we are ‘but the outcome of accidental collocations of atoms’. He could take solace in the thought that those atoms seem to be but the outcome of accidental constants of nature.
However, that is not the way it turned out. Our universe’s ability to support complexity, including life, is extraordinarily rare amongst the set of possible universes. Naturalism leaves unexplained the most undeniable feature of our universe: that it contains life. By contrast, theism, even if we are uncertain of God’s purpose in creating the universe, does not face anything remotely like the improbability that naturalism faces. If it is even somewhat plausible that, if God exists, God wants to make a universe that supports embodied life, then theism is enormously boosted, relative to naturalism.
Philip Goff: Cosmological fine-tuning from a limited pantheistic perspective
The above options face much-discussed problems. Some philosophers have argued that attempting to explain fine-tuning in terms of a multiverse commits the fallacy of understated evidence, the error of failing to work with the strongest evidence available. A multiverse may make it more likely that a universe is life-sustaining. However, our evidence is not merely that a universe is life-sustaining but that this universe is life-sustaining. And the existence of other universes does not make it any more likely that this universe in particular should be life-sustaining.
We can see the fallacy of understated evidence at work in the following case:
Sara goes into a casino and the first person she sees is having an extraordinary run of luck. Sara reasons, ‘Wow, the casino must be full tonight. Such a big win is highly improbable, but if thousands of people are playing, it’s not so surprising that somebody is going to win big.’
What is Sara’s mistake? After all, she has observed someone winning big, and a full casino does indeed make it more likely that someone will win big. Her error is the fallacy of understated evidence. Sara’s strongest evidence is not merely that someone won big but that this person won big, and the full casino hypothesis does not make that stronger evidence more probable.
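The distinction can be put into simple (made-up) numbers: filling the casino raises the probability that someone or other wins big, but leaves unchanged the probability that the particular person Sara happens to see wins big.

```python
# Toy numbers: with N independent players, each with a small probability p of a
# big winning streak, the chance that *someone* wins big grows with N, but the
# chance that the *particular* person Sara sees wins big stays fixed at p.
p = 1e-3  # assumed probability of any given player having a huge run of luck

for n_players in (1, 100, 10_000):
    p_someone  = 1 - (1 - p) ** n_players   # probability at least one player wins big
    p_this_one = p                          # probability this particular player wins big
    print(f"N={n_players:>6}: P(someone) = {p_someone:.3f}, P(this person) = {p_this_one:.3f}")
```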
Some have argued that a selection effect marks a difference between the two cases: we could not have observed a universe that was not life-sustaining, but Sara could have observed someone in the casino not winning big. But whilst selection effects can be important in revealing that evidence does not support what it initially seemed to – perhaps the survey results do not indicate strong support for the Conservatives once we take into account that wealthier people were more likely to participate in the survey – they do not give us permission to work with weaker evidence than is available. Moreover, we could set up the casino case in such a way that there is a similar selection effect: a sniper is hidden in the first room, ready to blow your brains out before you see anything unless the person in that room is winning big.
The most discussed problem with theism is the problem of evil. Why would an all-knowing, all-powerful, perfectly good being allow all the terrible suffering we find in the world? Many theists have pointed to great goods that God could not have brought into existence without allowing suffering. If God is to give us free will, this leaves open the possibility that some will use their free will to cause suffering to others. If we are to face serious moral choices – e.g. whether to help the victims of natural disasters – then others must suffer so that we can choose to help or to ignore.
However, even if there are such goods that are incompatible with a pain-free world, it’s not clear a creator would have the right to harm for the sake of these goods. It would be wrong for a doctor to kill a healthy person and harvest their organs, even if that brings about the greater good of saving five lives. Likewise, it would be wrong for a creator to kill and maim with hurricanes, even if this has the benefit of giving people in the developed world serious moral choices.
Sceptical theists argue that we should not expect to know God's reasons for creating, and hence the fact that we cannot understand why a God would allow suffering does not cast any doubt on God's existence. Unfortunately, this option is not available to those wanting to argue for God's existence on the basis of fine-tuning. If we have no idea how God is likely to create, then we have no idea whether life is more likely on theism than on naturalism.
If only there were another option, one that avoids both of these challenges. Fortunately, there is! The postulation of a God of limited power can explain the fine-tuning – in terms of God’s purposes – whilst also explaining the suffering – in terms of God’s limitations. We can have our cake and eat it too.
What are the limitations in question? Take physics and remove the values of the constants; that will give you a certain structure. The claim is that God is only able to create a universe with that structure and is unable to fiddle with it once it is created. God wanted to create intelligent life, but the only way They were able to do that was to create a universe with the right values of the constants so that life would eventually evolve. If there were a less painful way of creating life, God would have taken it, but sadly there was not.
Where do God's limitations come from? Ultimately, we have to take something as brute and unexplained. Traditional theists take the existence of God as brute and unexplained. Limitation theists do the same but just add that God has certain limitations. Perhaps limited theism is slightly more complex than traditional theism. But that slight increase in complexity is worth it to better account for the data of suffering.
We can also improve on traditional theism by identifying God with the universe to yield a form of pantheism. Physics is confined to dealing with mathematical structure, and is thereby neutral on the ontology that underlies those mathematical structures. There is nothing in physics, therefore, to rule out that those mathematical structures are realised in the conscious experience of God. By identifying the physical universe with the mind of God we can explain fine-tuning without having to postulate a supernatural entity outside of the universe.
At the very least, this middle ground option of limited pantheism deserves further exploration.