Simulation or Mere Semblance (of an Argument)?

E. John Winner

*In the following discussion, I am indebted to a paper written by Brian Eggleston when he was an undergraduate systems analysis student at Stanford University.* [1]

Much of the recent interest in the idea that we all might be living in a computer simulation, run by an advanced civilization, arises from an argument put forth by Nick Bostrom, a philosopher at Oxford University. [2] As long as we remain within the realm of higher-order probabilistic logic, the argument seems persuasive. If we apply common sense, however, we can easily identify serious problems with it.

First, let us separate the general argument from its initial construction, and refer to it as “a bostrom argument for a simulated reality.” Confronting Nick Bostrom’s original version of the argument requires a facility with probabilistic logic, and he cleverly hedges his claims, suggesting that while the case he makes could be true, the very fact that it hinges on probabilities means that there is no way to know whether it must be true. Bostrom, then, is really engaging in a kind of thought experiment. Nonetheless, a number of well-known figures – notably, science popularizer Neil deGrasse Tyson and billionaire technologist Elon Musk – have accepted and even promoted this idea that reality is a simulation. (In some quarters, the bostrom argument is deployed as a kind of pep talk, whose point is to urge continued technological research.) Consequently, “a bostrom argument” should be understood simply as a probabilistic argument that our reality is, in fact, a simulation programmed by a higher intelligence.

Bostrom’s own simulation argument rests on two assumptions: (A) Substrate-independence – the notion that consciousness may arise independently of the material in which it appears. “It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.” (It should be noted that this is in fact highly controversial. A silicon-based consciousness might be possible, but there is a danger in carrying this principle too far. Surely, it is no accident that a panpsychist like David Chalmers is a bostrom argument advocate. After all, if consciousness is independent of its material habitation, then it may be independent of any materiality at all.)

(B) The Principle of Indifference – Quoting Eggleston, “When there is no independent reason to believe one proposition over another, the probability that the proposition is true is equal to the number of possible ways that the proposition could turn out to be true divided by the total number of possible outcomes.” That is, two propositions are equally likely to be true, barring further information, even if they are contradictory. In traditional logic, this resolves into an exclusive “either/or” proposition, on the basis of non-contradiction. In probabilistic logic, it helps form a basis for decision. When I flip a coin, heads and tails are equally likely. If I’m betting on the outcome, however, I will have to apply further reasoning to decide which outcome I should bet on. The principle of indifference suggests that the outcome will determine the validity of that reasoning.
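As a toy illustration of my own (not Eggleston's), the principle reduces to simple counting:

```python
from fractions import Fraction

def indifference(favorable: int, total: int) -> Fraction:
    """Principle of indifference: absent any other information,
    P(proposition) = ways it can be true / total possible outcomes."""
    return Fraction(favorable, total)

# A fair coin: one way to land heads out of two equally possible outcomes.
print(indifference(1, 2))  # → 1/2

# A die showing an even number: three favorable outcomes out of six.
print(indifference(3, 6))  # → 1/2
```

Note that the counting tells us nothing about which outcome will actually occur – which is exactly the gap the betting example points to.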

Both of these principles carry over into the more popular forms of bostrom arguments, in sometimes quirky ways. The quirkiest is the manner in which the principle of indifference is assumed somehow to guarantee the original bostrom argument’s presumption about the future development of both the human species and its technology. This points to two hidden assumptions in bostrom arguments that need to be made explicit: (i) human history is a single-lane highway; (ii) this highway forms a one-way incline, pointed in the direction of progress. As we become smarter and smarter, our technology will become better and better, until we at last surpass ourselves and evolve into “post-humans” capable of simulating anything with our computers. When those engaged in a bostrom argument admit that neither of these assumptions concerning the future may prove true, they do so in apocalyptic tones: we must either evolve into a more advanced condition, with an advanced technology, or we are doomed.

These assumptions acknowledged, we move on to the argument itself. Bostrom himself introduces it as resolving the following trilemma:

  1. The fraction of human-level civilizations that reach a post-human stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero, or
  2. The fraction of post-human civilizations that are interested in running ancestor-simulations is very close to zero, or
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

According to Bostrom himself, at least one of these claims must be true.

Using probabilistic logic, the bostrom argument proceeds by interpreting the first of these propositions as so highly unlikely that it converts to its contrary – the fraction of human civilizations achieving post-human evolution with capability for high-level simulations is quite high. Thus, by the principle of indifference, and given the principle of substrate independence, the third proposition has the highest probability of being true. Setting aside the second possibility in Bostrom’s proposed trilemma (that future evolved post-humans choose not to engage in ancestral simulations, a possibility proponents and critics agree has a low probability and would change nothing in this context) [3], a bostrom argument, in more traditional form, reads something like this:

1.’ The probability of the human species surviving to evolve into a post-human civilization with world-simulating capabilities is quite low; or the probability that we all now live as simulations is quite high.

2.’ However, assuming that rapid advances in computer technology continue unabated, the probability that “the probability of humans surviving to evolve into a post-human civilization with world-simulating capabilities is quite low” is itself low. The probability of humans evolving into a post-human civilization with world-simulating capabilities is thus high.

3.’ Whether we are or are not living as simulations is probabilistically indeterminate.

4.’ But if the probability of humans evolving into a post-human civilization with world-simulating capabilities is high, then it likely has already occurred.

5.’ Thus, the probability that we all now live as simulations is quite high.
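For readers who want to see the probabilistic machinery rather than the traditional form, Bostrom's original paper [2] compresses steps 1'–4' into (roughly) a single fraction: if f_p is the fraction of human-level civilizations that reach a post-human stage and N is the average number of ancestor-simulations each such civilization runs, then the fraction of all observers who are simulated is f_p·N / (f_p·N + 1). A quick sketch (the numbers below are illustrative, not Bostrom's) shows how completely the conclusion depends on the assumed inputs:

```python
def f_sim(f_p: float, n_sims: float) -> float:
    """Fraction of observers who are simulated, per the simulation argument:
    f_p    : fraction of human-level civilizations reaching post-humanity
    n_sims : average number of ancestor-simulations each such civilization runs
    (assumes each simulation contains about as many observers as one real history)."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# If post-humanity is likely and simulations are cheap, the fraction nears 1:
print(f_sim(0.9, 1_000_000))

# But make f_p tiny and the same machinery yields a fraction near 0:
print(f_sim(1e-9, 1_000_000))
```

In other words, the argument's "high probability" conclusion is simply the assumption about f_p fed back out – which is the circularity the critique below develops.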

There are several fatal problems with this argument.

First, we need to recognize that many of these probabilities collapse into universal claims. If we allow that there is any fraction of the current human population not living in a simulation, we must confront an absurdity – at least one contemporary human is currently living in the fundamental reality of the post-human simulators, who will not come into being for a couple of centuries. Thus, the probability claims on fractions of existents (e.g., “the probability is very close to one”) must collapse into the universal claim that we are all now living in this simulation. (The probability of which isn’t “close to one”; it would equal one; that is, it must be a certainty.) Most attacks on a bostrom argument point out this weakness. For one thing, such a universal claim does not permit the deployment of probability arguments in support of it – we either are or are not living in a simulation. Further, a universal claim like this is open to truth-value analysis, and thus triggers demands for evidence, verifiability, falsifiability, etc. Further still, if we all live in a simulation, the only way we could know this is if we had been programmed to know it, and there is no external reasoning or reasoner to which we can appeal in order to determine this. Finally, if we all live in a simulation, there is nothing to be done about it. Everything we do has been programmed beforehand, so all that we can do is act in the same way, with the same motives, with the same sense of agency, that we always have. The notion that a bostrom argument can be used to urge further technological advance is both ironic and self-subverting. If we’re programmed to do this, we’ll do it anyway. If we’re programmed not to do this, we won’t, regardless of what we tell ourselves. (But of course, we’ve been programmed to make and respond to these arguments, so…)

The strangest thing about the bostrom arguments, however, is their weird sense of history. I noted that a base assumption of such arguments is that history is a one-lane highway, but we must also see that they entail that the human species has already traveled that highway. What we experience as the present is “in fact” the past of the future post-humans, who have programmed this simulation. This is where bostrom arguments exhibit a rather strange faith in the certainty provided by probabilistic logic. If a future event enjoys a high probability (“very nearly one”), then it is as if it has already happened. That’s fine for playing the lottery (the probability of losing is very nearly one, so don’t bother). But when we start speculating on possible future humans developing the capacity to interfere with the past, problems and paradoxes abound. The writers of and audiences for time-travel fiction are quite familiar with many, if not most, of these, but those advancing bostrom arguments seem blissfully unaware of them.

Undoubtedly, the most important problem here is that treating probabilities as certainties, predicated on the assumption that history is a one-lane highway – that there is no possible divergence or detour or fork in the road ahead – is so blinkered, so myopic, that it verges on delusional. One minor war at the wrong time in the wrong place (say, a brief nuclear exchange between North Korea and the US); one economic crash redefining the developmental strategies of major economies; or one major scientific discovery that improves our environment or our health care, and the map to the future may get wholly re-drawn. We’ve already seen this, several times, but even recently: the global rightward turn among voters has radically re-determined the kind of progress we can anticipate, at least in the foreseeable future.

So the grand historical narrative that the first wave of bostrom arguments depended on is dubious at best, delusional at worst. Not surprisingly, then, the narrative has been re-written. The more popular versions today, inspired by such science fictions as the Matrix films, hold that it is not a future post-human civilization that has constructed this simulation, but a super-intelligent alien species, on another world or even in another universe. [4]

The very fact that the narrative has been so substantially re-written should give those tempted by bostrom arguments reason for pause. If the fundamental historical assumptions of such arguments have to be changed to keep them standing, that raises the question of whether the structure of the arguments is itself sound. It was thought that clay would support the brick and mortar, but now we find it necessary to mix cement? And using ingredients imported from science fiction?

Alas, this new version, while seemingly unassailable, is in fact weakened in part by that very unassailability. There may be no glaring paradoxes to threaten it, but neither are there any empirical grounds for believing it.

That there may be other intelligent “alien” life forms in the universe is not a hypothesis, strictly speaking, but an allowable probability, given the size of the universe and the possible number of life-sustaining planets. Occasional research efforts, like sending “Welcome!” greetings in exploratory spacecraft, are little more than gestures signaling our openness to such a possibility.

The presumption of super-alien programmers of a simulated universe is an entirely different matter, since it hinges on no empirically grounded possibility beyond our own ability to build computers. Hence, it cannot be verified. Nor can it be falsified, since any falsification procedure presumably would be pre-programmed by the super-aliens, for their purposes, which remain forever inscrutable. The super-alien simulation “hypothesis” can only unravel if shown to be incoherent or otherwise logically defective. (One can easily imagine, for instance, a regress-style problem. If the super-aliens are programming us, then who is programming them? And what constitutes the fundamental or base reality, by which simulations are identified as such?)

Notably, the whole of the above paragraph applies to divine appeals as well. “You can’t prove there’s no god!  It’s possible that he exists!” Well, yes, but I can show that claims concerning such a supernatural intelligence are internally inconsistent and often self-contradictory. And the fact that super-alien programmer claims and the arguments for them are so similar to those that we find in religious apologetics should also concern those tempted by bostrom arguments. [5]

But for the sake of amusement, let us re-write our reading of the original bostrom argument, substituting super-intelligent aliens from another world (SIAFAW) for post-humans:

1.’’ The probability of SIAFAW with world-simulating capabilities is quite low; or the probability that we all now live as simulations is quite high.

2.’’ However, given the possibility of there being many universes with uncountable numbers of planets like our own, the probability that ‘the probability of SIAFAW with world-simulating capabilities is quite low’ is itself low. The probability of SIAFAW with world-simulating capabilities is thus high.

3.’’ Whether we are or are not living as simulations is probabilistically indeterminate.

4.’’ But if the probability of SIAFAW with world-simulating capabilities is high, then it has already likely occurred.

5.’’ Thus, the probability that we all now live as simulations is quite high.

But with the temporal element stripped from the argument, on what basis can we assume the probability of SIAFAW simulators is very high? The problem is that we now have a probability resting on a mere logical possibility. The original argument assumed that history is a one-way highway to future progress, but the SIAFAW argument doesn’t assume anything other than that there are super-intelligent aliens, either from our own universe or another one. Not only do we have no reason for thinking this is true, we have no grounds upon which to calculate its probability. What was once an argument, then, becomes little better than a series of assertions that are little more than appeals to non-divine divinities.

Notes

1. http://web.stanford.edu/class/symbsys205/BostromReview.html

2. http://www.simulation-argument.com/simulation.html

From the abstract:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “post-human” stage; (2) any post-human civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become post-humans who run ancestor-simulations is false, unless we are currently living in a simulation.

It would seem from this summary that Bostrom wants to argue that our descendants will not be evolved post-humans, and thus we are not living in a simulation, but the remainder of his paper argues the opposite.

Proposition 1 is problematic, given that extinction is an unknown, predicated on a myriad of incidentals, most of which we cannot know. Given the possible developments that Bostrom admits might make Proposition 1 moot, nothing hinges – either assertively or probabilistically – on Proposition 1. Thus, Bostrom’s argument falls apart. There is no trilemma, but rather, a mere thought experiment.

3. However if we set this second proposition aside, then we don’t have a trilemma, we have an exclusive disjunction: Either the human species goes extinct before evolving into post-humans, or we are living in a simulation.

It is notable that in his response to Bostrom, Eggleston raises the question that Bostrom claims he has no interest in: how do we live our lives if we accept that we live in a simulation? I’ve indicated that I find this question unresolvable. But Bostrom’s threat of extinction surely carries its own ethical implication in one respect: accepting that we live in a simulation, we should do everything to achieve post-human evolution within it; we thus either justify the simulation, which our programmers would surely admire, or we prove ourselves not to be simulations by becoming post-humans capable of such simulations. (The SIAFAW variant of this has us advancing our technology to somehow communicate with the SIAFAW in the future.)

Essentially, this is Pascal’s Wager for the computer age – believe in God and even if God doesn’t exist, you will live a better life (and you might get to heaven). Believe otherwise, and you will live miserably, and if God does exist you will burn in Hell.

So, this is the real bostrom argument underlying all this simulation hypothesis chatter: Promote post-human evolution, and realize it in either reality or simulation, or the human race is doomed.

4. One of Eggleston’s major points is that without this assumption, Bostrom’s original argument begs the question of indifference. After all, the principle of indifference determines a probability prior to any outcome. Hence, presumption of any outcome cannot be leveraged as added information to the indifference between the choices. Hence, the probability we live in a simulation can only be salvaged by assuming the probability that there are other worlds with the level of civilization needed to engage in simulation, whose inhabitants then do so.

5. What does it get us in science to pre-suppose gods or super-aliens? It’s one thing to be a believer and to look at the wonders science reveals and exclaim, “how beautiful is god’s creation!” It’s another thing entirely to initiate scientific research on the basis that the natural world simply must (or must be made to) reveal some god’s existence, and the same is true of super-aliens. I’m no longer a proselytizing atheist – those who wish to believe in God or gods may do so if they choose. But I see no need and have no interest in trying to mold science or philosophy according to a “possibility” that is little more than a hope. And of course, the same is true of super-alien simulators.

41 Comments

  1. I’ve tried in the past, and now each time I re-encounter it, I have to admit to myself that I can find nothing inherently interesting concerning simulation scenarios about reality. The Matrix trilogy was fun for a while, though wildly overrated in my opinion. And that’s about it: entertainment.

  2. Daniel:

    I see the bostrom proposition as a consequence of two observations made by de Unamuno in Tragic Sense of Life. He asserts that every man wishes to rule the world, and every man wishes to live forever. Speaking of the Christian tradition, de Unamuno then asserts (I paraphrase): “We create this God of love and eternal life by believing in him, and he in turn helps us to become stronger as people.”

    In this context, I interpret the bostrom proposition as a religious proposition. Those that believe that consciousness emerges from complex networks of matter (neurons) hope to create God from a silicon analog that is not corrupted by the impulses inherited from our primitive Darwinian biology. Many of them wish to escape death by uploading their consciousness to their simulation.

    The truth, as I understand it, is that consciousness resides in the soul, and the brain is only an interface between spirit and matter. My belief is that until they recognize that, the bostromians aren’t even prepared to ask the kind of questions that will support a successful integration of digital and biological consciousness.

    Brian

  3. The current popularity of this absurd argument must be in large part because of its probabilistic nature. Because it states nothing definitive, it generates an absurd amount of debate for and against. (My recent favourite is a talk by Maciej Cegłowski, from a techie/developer viewpoint: http://idlewords.com/talks/superintelligence.htm )

    From a psychological viewpoint, there is nothing inherently wrong with puerile power fantasies; they can be revealing about the deepest repressed motivations of the psyche. As an exercise, developing an ethics system for magic can be useful, even if you do it out of a fear that magic is about to be invented for real (possibly by someone evil, i.e. not you). Something about the categorical imperative is similar to this game with probabilities …

  4. E. John Winner

    Regarding those assumptions… The first is substrate-independence: “the notion that consciousness may arise independently of the material in which it appears.” You quote Bostrom (is it?): “It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.”

    What is the “it” here? Which particular trick are we talking about? The only consciousness we know is biological. Different life forms have various ‘levels’ of awareness, but all seem to incorporate somehow that basic level of sentience that we observe in very simple life forms. The unqualified term “consciousness” is extremely vague, it seems to me, so that to speculate about “essential properties” of consciousness is even more problematic.

    When you have a problem with a basic assumption (as I do here) there is little incentive to proceed.

    The second assumption (the principle of indifference) I think I understand and I’m very skeptical that it can be applied in cases like this – as you yourself suggest. (The OP talks about common sense, and you also reference an argument by Eggleston.)

    It seems that your major concern is however with bostrom arguments, i.e. with the people who exploit Bostrom’s ideas for their own purposes, and you identify further dubious assumptions. Again, generally I think I am with you on this.

    But doubling back, there are a couple of general ideas behind a lot of this talk which may have some merit. One is the idea of progress, which you come down very hard on. Sure, simplistic ideas of social progress are wrong, but we do witness scientific and associated technological progress. Knowledge advances. Barring major social cataclysm, it is a one-way, albeit winding, street. Given the contingency of history, it could peter out or suddenly stop at any time, but we do see a general forward movement over millennia.

    Secondly, all animals share a kind of basic sentience, but there are ‘levels’ of consciousness, self-awareness, intelligence, etc. There is no problem (is there?) considering the *possibility* of advanced intelligences. But my attitude is simply one of trying to understand (through science and reason). Like you, I am very wary of these technology-pushers with salvationist ideas.

  5. Sorry but I don’t see that your “one-lane highway” reasoning undermines Bostrom’s argument in any way. In fact, it seems like a total straw-man.

    Bostrom himself says that one reason for running simulations by future humans (or their AI progeny) is to explore *alternative* histories. Our timeline therefore need not be *identical* to that of the future human civilization that has created our simulation.

    In fact, a perfect replication probably would be very difficult to achieve and not very interesting for them, since they would already have very detailed records of what *actually* happened in their own history – so why recapitulate it? Instead, they’d be more likely to run somewhat different historical simulations, either as “what if” experiments (“what would have happened if we’d banned burning coal in 2018?”) or simply for entertainment purposes.

    So simulating different histories (“multi-lane highways”) seems quite a plausible thing for future humans to do. What am I missing?

  6. Bostrom isn’t even good science fiction, let alone good philosophy. The sort of thing that is giving our discipline a bad name, in fact.

    I thought EJ’s critique was spot on. Indeed, I would have been somewhat less charitable than he has been.

  7. Mark (and DanK),
    I didn’t discuss the substrate-independence assumption, because that would be a separate discussion of some substance. But I am a firm believer in substrate-dependence, and yeah, “When you have a problem with a basic assumption (as I do here) there is little incentive to proceed.” Nonetheless, the Simulation Argument – which is quite popular in certain circles – is just shot with so many holes, I thought it worth discussing these, however broadly.

    I am not opposed to progress in the sciences; but I think it has to be earned and not presumed.

    Dean,
    “I don’t see that your “one-lane highway” reasoning undermines Bostrom’s argument in any way.” If history is not a one-way highway to post-human evolution, there is no greater probability of that evolution – or of its coin-flip, extinction – than there is that the world’s leaders will agree to disarm their nations, or that a political movement goes world-wide and persuades us to abandon our technology and return to an agrarian existence. Or that we’ll just emigrate en masse to Mars. Bostrom’s First Proposition gives us only two possible futures, of which only one has high probability (the converse of the actual proposition), which is affirmed by the assumption of high probability for the Third Proposition (which it is used to argue for). So if the future is not a one-lane highway to post-human evolution, then the ground of the Third Proposition falls away, leaving it entirely indeterminate, and it cannot be used to affirm the converse of the First Proposition. This has nothing to do with why the post-humans simulate their ancestors or what alternative history they choose for this simulation.

    BTW, I refer to extinction as the coin-flip reverse of post-human evolution because after I wrote this essay and thought it over, I realized why I was reminded of Pascal’s Wager during the writing of it. What bostrom arguments offer us is a gambling choice (presumably having to do with the kinds of research we care to finance, since this is all about computer and related technology). I wrote a supplementary note on that posted at my own web log: https://nosignofit.wordpress.com/2017/01/02/simulation-argument-as-gambling-logic/

    The gamble that bostrom arguments direct us to is to bet against the First Proposition (and the only way to bet on the Third Proposition is by placing that bet). The reward is that, even should we prove not to be simulations, we will develop ourselves into a post-human evolution (and/or achieve communication with SIAFAW in the variants).

    At the end of that supplemental note, I remark that, in the current economy, such arguments are really rather irrelevant to the reasons we do pursue such research. The arguments are really intended to support a particular view of science and technology, rather than science and technology per se.

  8. As someone titled a paper – roughly – “even the Doomsday Argument is better than the Simulation Argument”. I think Bostrom even mentions my question about what credence do we assign to the possibility that Descendant Command would simulate only me: it would be a lot cheaper computationally and extremely worthwhile you know.

  9. We need to deal with the issues separately. Is a universe simulation with conscious human actors possible? Yes. We have a proof of concept already; it weighs about 1.5 kg and draws about 20 watts.

    Have you ever fallen asleep at night and had a dream? Yes, so your brain has simulated a world and it has one human actor in it. So this can be done with a physical structure and there is no reason why the principle cannot be extended to more than one conscious actor.

    A universe simulator, for the purposes of the argument does not need to be an entire universe, it can be any part of a universe and it does not have to be done down to the atom level.

    So there really is no question about the physical possibility. It will not require any kind of super intelligent alien to build. In fact the proof of concept was designed by a haphazard process of random mutations and natural selection. So we only need a race with technology slightly in advance of our own.

  10. Hi Dean,

    Bostrom himself says that one reason for running simulations by future humans (or their AI progeny) is to explore *alternative* histories. Our timeline therefore need not be *identical* to that of the future human civilization that has created our simulation.

    Indeed, they might want to explore absurd scenarios like “What if Donald Trump became President of the USA?”

  11. I should add that substrate independence is not relevant. We know that there is a physical substrate that can achieve the trick, that is all that is required for the argument. There is no rule that says that the simulator has to be made of silicon components.

  12. I don’t really understand how beings in a simulation can be said to be ‘programmed’ to do certain things any more than beings living in a real universe can be said to be ‘programmed’ to do something.

    Either the simulation is deterministic (as in exactly one possible next state) or it is not.

    The same goes for a real universe. So why should that make any difference to our reasoning?

  13. So if we are living in a real universe, what kind of real universe are we living in? Physicists don’t appear to have made up their minds.

    But the idea of an infinite multiverse of some kind seems to be quite popular and becoming more popular.

    They tell us that if that is the case then everything that is physically possible happens infinitely many times.

    So mind simulations are physically possible and so if there is an infinite multiverse then there are infinitely many simulations providing just this world that I am observing, and infinitely many real observers.

    That seems to make the Bostrom specifics irrelevant. Infinitely many of A and infinitely many of B and no way of lining them up to create a meaningful probability calculation – so it is undefined.

    But for those who take the idea of an infinite multiverse seriously I cannot see how they can avoid taking seriously the possibility that they are living in a simulation.

  14. Substrate independence does not mean context independence. If you simulate a brain without lungs or a heart, it lives for maybe a couple of seconds.

    If you remove enough context, you might as well say that any story or fiction is actually a simulation, and that characters in them are as real as you and me. (And should we petition George R.R. Martin to kill fewer of his darlings in the future, or at least do it in a more humane way?)

    There are these two statements that simulationists put together: “a simulation of human consciousness would be indistinguishable from the real thing” and “future historians study alternate histories with simulations”. So future scientists are using simulations in their experiments simply because it is cheaper and faster than experimenting on live humans, not because it is more ethical?

  15. Uh-oh, we will soon be back to gnosis. The different levels of lesser and lesser gods emanating from each other and creating a world that is not the real one are just being replaced by hierarchies of super-civilizations from other universes simulating each other, with us at the lowest level. Isn’t it fun to speculate? We no longer need an omnipotent god? Has this stuff already turned into a religion? Can one join a simulators’ church somewhere? I see promising possibilities here.

    For example, if we assume that the simulators are friendly towards us, they might copy our minds when we die and give us an afterlife for ethical reasons. Letting us perish inside the simulation would actually be murder if they have the technical possibility to save us – which, of course, they have – so hurray: if we are almost certainly in a simulation, we are going to be retired into a nice afterlife. So there is hope 🙂 If their motivation is less benign, however, oh lord…! They might have set up heavens and hells and purgatories, so we had better be good people. If, as in the original gnosis, every level of simulators is worse than the one above, then the one above ours must be demonic (this explains a lot of things…). But of course, the good guys at the top are observing this and will send some quantum-electronic messiah down to our level to copy us out of here and take us up, or shut this demonic simulation down on a day of judgement…

    The technological, pardon, theological possibilities this offers are fantastic. And if the physics of our universe does not allow for such a simulation, we just have to start with another universe where such limits do not exist. 🙂

    Joking apart, the possibility of such simulations is just assumed, but I think there might be theoretical limits to what can be simulated (I see problems from complexity theory and computability theory here and I am planning to post something on the topic at a later time).

    Most importantly, we should see that any civilization is using resources, and ours is running on a limited amount of such resources, using them up in unsustainable ways. To avoid collapse, a civilization like ours would have to turn itself into something sustainable. I don’t see this happening in our case, but should we manage to achieve that, I don’t see us then setting up such simulations. The question here is if a transition to the super-civilization stage is possible at all. I doubt it. The idea that it is possible, or even likely that a civilization like ours will develop into a super-civilization is an illusion, and I see it as part of a quite dangerous ideology that turns people and resources away from moving towards sustainability.

    So here is a joking, or actually only half-joking, line of argument (the joking parts are marked with an asterisk):
    a) The fastest-growing sub-civilizations (countries, businesses etc.) in a civilization like ours will outgrow and outcompete the others, becoming dominant.
    b) Any ideas or attitudes that promote growth will therefore spread, leading to further growth. This results in growth-promoting ideologies like neo-liberalism and consumerism becoming dominant. a) and b) might repeat in cycles, so technical civilizations will inevitably develop into a planet-wide system with a hyper-growth ideology.
    c) The resources of any planet on which a technical civilization starts are limited.
    d) The growth resulting from a) and b) will outgrow this resource base after some time (this time is actually short, since the civilization will grow exponentially for a while).
    e) After using up the resources, any such civilization is going to collapse. As a result, technical civilizations will almost always collapse.
    f*) The appearance of a simulation argument in one such civilization will help turn resources into further growth, helping it outgrow the (limited) resource base of the respective planet by making people invest in non-sustainable technologies.
    g*) So the appearance of a simulation argument will make it more likely that the respective civilization will collapse.
    h*) It will therefore make it less likely that the civilization reaches the state where it can build such a simulation.
    i*) If a simulation argument like this comes up in a civilization, it is probably not a simulated one but a real one, and actually doomed, since civilizations that are on their way to sustainability don’t invest time and resources into speculations like this.
    j) If a technical civilization collapses, the intelligent species, if it survives, does not develop into a super-civilization (there are no resources left on the respective planet even to start a second round of technical civilization).
    k) If a technical civilization manages to achieve a sustainable state, it has no reason to develop into a super-civilization (there are no sufficient resources for that anyway).
    l) So it is very likely that there are no super-civilizations in the universe. There are three possible outcomes for technical civilizations like our own: 1. collapse and species extinction, 2. collapse and species survival in a low-tech state, and 3. (unlikely but possible) transformation into a sustainable steady-state civilization on a relatively high (compared to 2) technological level.

  16. First, we need recognize that many of these probabilities collapse into universal claims. If we allow that there is any fraction of the current human population not living in a simulation, we must confront an absurdity – at least one contemporary human is currently living in the fundamental reality of the post-human simulators, which will not come into being for a couple centuries.

    I don’t understand this part. The sentence “If we allow that there is any fraction of the current human population not living in a simulation” does not make sense in terms of the argument. Either we are in a simulation in which case the “current human population” is entirely simulated, or we are not, in which case the “current human population” are real.

    So the absurdity does not arise from any premise of the argument, nor from anything implied by any premise of the argument. So how can it be a problem for the argument?

  17. We can check the basic logic by leaving consciousness out of it for the moment. Say I write a simple world simulation, somewhat like “The Sims”. I don’t have any interaction with the game but let it play out. But I have programmed in the possibility that if the characters in the world come to a certain stage – call this “Stage X” – some of the characters will start up simulations of scenarios in the earlier stages of the simulation that they are in: simulations within the simulation. They may or may not reach that stage; it all depends on how the base simulation plays out.

    I have point of view (POV) records for all characters in the game and similarly the simulations within a simulation also have POV records. I set it going and after a while I come back. I am presented with a POV record which appears to be of the earlier stages of the game.

    Can I tell if it is a POV record from the base simulation, or from one of the simulations within a simulation? From just looking at it, I can’t tell as they would look the same.

    But I can reason that either the simulation never reached stage X or else a fairly low proportion of the characters created simulations or else I am probably looking at a POV from a simulation within the simulation.

    That seems to be perfectly true so far.

    If I proceed and suggest that it would be highly probable that the simulation reaches stage X then this is obviously a claim that would need to be supported.

    If I suggest that it would be highly probable that many would create simulations within simulations then, again that would need to be supported. (As the programmer I have a certain advantage in this respect).

    But I don’t see any of these probabilities collapsing into universal claims. Either the POV is from a simulation within a simulation or it is not. That does not affect the probabilistic argument. If someone shuffles and cuts a deck of cards and then asks, without looking, what the probability is that the card on top is the Jack of Hearts, then I know that it either is or is not the Jack of Hearts, so the probability is either 0 or 1. But we are talking about epistemic probability, so I can say that the probability is 1/52.

    Same here, the probability that the POV is from a simulation within simulation is either 0 or 1, but the epistemic probability depends on the factors mentioned.

    There is no assumption that the history of the simulation is a one-way highway.

    So I don’t see any fatal flaws or paradoxes in Bostrom’s argument, only two premises that need to be supported and one can judge them as one sees fit.
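    The POV bookkeeping in this thought experiment can be turned into a toy program. A minimal sketch in Python (all parameters – population size, spawn probability, depth limit – are hypothetical illustration choices, not anything from Bostrom or the comment):

```python
import random

def run_world(depth=0, max_depth=2, p_stage_x=0.9, n_subsims=10, pop=100):
    """Toy base simulation: `pop` characters each leave one POV record.
    With probability `p_stage_x` the world reaches Stage X and spawns
    `n_subsims` nested simulations, down to `max_depth` levels."""
    povs = [depth] * pop  # tag each POV record with its nesting depth
    if depth < max_depth and random.random() < p_stage_x:
        for _ in range(n_subsims):
            povs += run_world(depth + 1, max_depth, p_stage_x, n_subsims, pop)
    return povs

random.seed(0)
povs = run_world()
nested = sum(1 for d in povs if d > 0)
# Epistemic probability that a randomly drawn POV record is nested:
print(f"{nested} of {len(povs)} POV records are from simulations within the simulation")
```

    With generous spawn parameters the nested POV records swamp the base-level ones, which is exactly the intuition the comment appeals to; set `p_stage_x` low and the ratio flips – mirroring how the conclusion hangs on the premises.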

  18. To clarify a step: someone shuffles and cuts a deck of cards and then asks, “Is the top card the Jack of Hearts?” I say, “Probably not,” which is a perfectly true statement as long as the shuffle and cut were fair.

    If you then say that the top card either is or is not the Jack of Hearts and that my “Probably not” collapses into the definite claim that the top card is definitely not the Jack of Hearts, thus requiring verification, falsifiability etc, then you would be confusing the epistemic probability (1/52) with the actual probability (either 0 or 1).
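    The distinction can even be demonstrated in a toy model: repeat the fair shuffle many times and the long-run frequency of “Jack of Hearts on top” tracks the epistemic 1/52, even though on each individual shuffle the card definitely is or definitely is not there. A small Python sketch (the card encoding is an arbitrary illustration):

```python
import random

DECK = [(rank, suit) for rank in range(13) for suit in range(4)]
JACK_OF_HEARTS = (10, 2)  # arbitrary encoding: rank 10 = Jack, suit 2 = Hearts

def top_is_jack_of_hearts():
    deck = DECK[:]        # fresh copy of the 52-card deck
    random.shuffle(deck)  # fair shuffle
    return deck[0] == JACK_OF_HEARTS

random.seed(42)
n = 100_000
freq = sum(top_is_jack_of_hearts() for _ in range(n)) / n
print(freq)  # hovers near 1/52 ≈ 0.0192
```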

  19. Personally I think that post-humans are more likely to be unimaginative working/consuming machines bio-engineered as some large company’s idea of the ideal human, but so disease resistant as to outcompete mere humans.

  20. Robin Herbert,
    All I can say is, you are confusing the logically *possible* with what is *probable* according to certain formal logics. Bostrom’s is a probabilistic argument, about what we can or should believe about our evolutionary future and about our present existence.

    If we put it into traditional syllogistic form, with probabilities reduced to universals, it reads something like the following:

    As an ontological argument:
    The human species has already evolved into a species of post-humans with world-simulating capability.
    Such a species is already involved in simulating their ancestors.
    Humans alive today are not post-humans with world simulating capabilities.
    Therefore humans alive today are the simulations of the ancestors of the post-humans.

    As a teleological argument:
    All humans today are living a world-simulation.
    Only those who evolve into a post-human species are capable of world-simulating technology.
    Therefore, the human species has already evolved into a post-human species capable of world-simulating technology.

    (The arguments change little if we replace ‘post-human’ with ‘super-intelligent aliens.’)

    The fact that the ontological argument and the teleological argument are really mutual re-arrangements of premises/conclusions should give us pause: The premises “all humans are simulations” and “there is a non-human species (alien or post-human) with world-simulating capabilities” cannot be used to argue for each other. What survives is a simple conditional proposition:

    ‘If we are all simulations, then there must be a non-human species with world-simulating capabilities.’ Or more strongly, ‘there is a non-human species with world-simulating capabilities, if and only if we are all simulations.’ But not strong enough to be particularly convincing. Here’s the problem: without any access to the ‘fundamental reality’ the non-human species supposedly live in, we have no way *whatsoever* to determine whether we are simulations. Thus the proposition fails; it devolves into the mere claim, ‘there is a non-human species with world-simulation capabilities,’ and now we need some evidence, some test, some methodological result that would indicate this, otherwise it is mere profession of faith. And it wouldn’t necessarily mean that *we* are simulations, anyway.

    At any rate, this makes clear why Bostrom and those who accept his arguments, or arguments like it, prefer to argue in probabilistic terms – ‘very likely,’ ‘probably are,’ ‘very nearly one’ – to make an argument that, rephrased in traditional logic, would readily appear groundless. However, that was exactly why I remarked on the use of ‘fractions of the population’ in making the argument. The absurdity *is* in the argument itself, because talking of a fraction of the population – as Bostrom must to make a probabilistic argument – allows the absurd possibility that I noted, of a fraction of non-simulation humans living among us (and yet still somehow in the fundamental reality of the non-humans). It really must be all or none, and then there is no probabilistic bostrom argument; there are simply the bare claims, without any evidence and without any logical ground beyond mere thought-experiment hypothesizing, and the kind of speculation you engage in in a couple of your replies. (And I certainly don’t buy any of the skeptical/solipsistic/Hindu speculation about dreaming simulations, or unknowable universes.)

    Substrate dependence/independence is relevant to Bostrom, exactly because it is contentious; if consciousness is substrate-dependent, then not even silicon can duplicate it, in which case the bostrom argument is not even interesting speculation. But that is a much larger argument than we need here.

    There is no such thing as a ‘free will program’ such as you are suggesting, and to be a simulation is to be programmed. All programming prescribes parameters of events. This can appear as random, but only within the limits its algorithms allow. Even an indefinitely complex algorithm would probably not fully mimic a truly stochastic universe such as we may be living in – unless you want to claim that some god is operating a computer and has programmed the universe, which would not be any god I would choose to believe in. Some gods have a certain majesty in their genius and whimsicality. A computer-programmer god would just be a bore.

    Finally: The fact that bostrom arguments and multiverse speculation are popular in some circles doesn’t persuade me to accept them as anything more than speculation.

  21. “one of the simulations within a simulation”: Hi Robin. If the substrate of simulation is in some ways similar to our universe – which the Simulation Argument at least makes some allusion to – then each sub-simulation is consuming computational resources of some kind of physical system and so competing with “our” simulation or its progenitors (a situation Alastair Reynolds uses in a short story). You can’t recurse indefinitely. If you posit physical infinities to get around this, then why not cut out the Demiurgic simulator middlemen and be done with it. The assignment and use of probability in these settings seems impossible to me. As someone put it, you have confidence in mathematics, physics, psychology etc., but not in whether you have two hands.

  22. Robin,
    additional note. If we take away from the bostrom argument everything you claim is not necessary, it has no premises left, and it is simply a thought-experiment discussion (which is what I am suggesting it reduces to) – much like your Sims discussion, which doesn’t really have much to do with the bostrom argument, and which is actually contrary to Bostrom’s own argument. Bostrom needs us to consider a post-human civilization, and that means history is going (or rather, has gone) in just the one direction (the alternative being extinction). That’s the nature of the argument and of the variants developed out of it (the multiverse history would also have to be uni-directional for the aliens to have developed super-intelligence).

    If you want to write “the Robin Herbert argument why we’re all living in a simulation,” fine – but that would not be Bostrom’s, nor any of the variations I discuss in my article.

    Finally, your card example actually supports my claim that what bostrom arguments amount to is a wager. I wrote that in my essay in an end-note, then reinforced it in a comment, which links to a supplementary note on the matter. Apparently, you missed all of these. And using epistemic probability to persuade us to accept simulation as an actual probability is precisely what bostrom arguments do, my friend – and what I was trying to draw attention to.

  23. Hi ej

    I don’t know where you are saying I have brought in logical possibility. This does not form any part of Bostrom’s argument, nor of my analysis.

    With your claim that you have put his argument in syllogistic form, I can stop you at the first premiss:
    “The human species has already evolved into a species of post-humans with world-simulating capability.”

    I have already responded to the claim that the probabilistic claim collapses to a universal claim so obviously I don’t agree that this is an accurate version of the argument.

    The claim “the card on the top of the pack is probably not the Jack of Hearts” does not collapse into the claim “the card on top is not the Jack of Hearts”, even though the card either is or is not the Jack of Hearts.

  24. Here is the logical structure in classical logic, using my simulation example:

    A=”The base simulation is extremely unlikely to reach a stage where the actors can run simulations themselves”
    B=”Any actors able to run simulations are unlikely to run a simulation of earlier stages of the base simulation”
    C=”The POV I am seeing is almost certainly from a simulation within the simulation”

    P1. A or B or C
    P2. Not A
    P3. Not B
    Conclusion: C

    My logic here is valid and P1 is true. Whether the argument is sound depends upon whether or not P2 and P3 are true, and therefore the evidence required is that which can support these premises.

    For any given base simulation there will be a fact of the matter about these. You would have it that I need the extra premiss “The base simulation has already run to stage X”, but clearly I don’t.
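    Setting aside how P2 and P3 are to be supported, the bare inference pattern here is a disjunctive syllogism, and its validity can be checked mechanically by brute-forcing the truth table. A small Python sketch:

```python
from itertools import product

def entails(premises, conclusion):
    """True iff every truth assignment of (A, B, C) that satisfies
    all the premises also satisfies the conclusion, i.e. the
    inference form is valid."""
    return all(conclusion(a, b, c)
               for a, b, c in product([True, False], repeat=3)
               if all(p(a, b, c) for p in premises))

premises = [
    lambda a, b, c: a or b or c,  # P1: A or B or C
    lambda a, b, c: not a,        # P2: not A
    lambda a, b, c: not b,        # P3: not B
]
print(entails(premises, lambda a, b, c: c))  # True – the form is valid
```

    Validity is the cheap part; as the comment says, the interesting question is whether P2 and P3 are true – which no truth table can settle.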

  25. Robin, if you want to continue developing the herbert argument for a simulated reality, go right ahead; I don’t care.

    You have not answered *Bostrom’s* argument that, assuming we’re simulations, then history followed a single path to post-human evolution.

    Your argument is valid but empty, lacking justification. Both the ontological argument and the teleological argument I presented are each individually valid; it is their conjunction that raises the question of validity – a conjunction implicit in any simulation argument I’ve seen. And in any event, they are also groundless.

    Your effort here to use my article to develop ‘the herbert argument for simulation’ is frankly at best annoying. You want to believe you’re a simulation, no one’s stopping you. But I am not betting on your fantasies, man, no matter how well articulated. Nor will I bet on the bostrom alternative. 1/52 means there are fifty-one cards that are not that Jack.

    And what is it you want us to do? How much money do you want spent on what research? The bostrom argument is a rhetorical device! Its purveyors are asking us to respond emotionally in a certain way and to do something as part of that response. What response, what action are you looking for here? What is the decision the herbert argument asks us to make? Believe in aliens? Spend money on research? Accept substrate-independent AI in our future? I mean really, what’s the point?

    At any rate, feel free to continue your self-examination. My part in your speculations is over.

  26. ‘Well,’ you might say, ‘we were discussing logical possibilities and logical probabilities.’

    Nope. As nannus noted, we’re discussing how we allocate our resources. This looked like a logical problem; it really has to do with rhetoric. Sorry if you didn’t catch that.

  27. Hi ej

    Robin, if you want to continue developing the herbert argument for a simulated reality, go right ahead; I don’t care.

    What an extraordinary thing to say! Quite obviously I have not presented any argument of my own about a simulated reality. I don’t have such a thing and can’t imagine how you got the idea that I did.

    As I clearly stated I was presenting the logical equivalent argument. If you disagree that it is logically equivalent then please say so.

    You have not answered *Bostrom’s* argument that, assuming we’re simulations, then history followed a single path to post-human evolution.

    I am not aware of Bostrom having said anything like that.

    I was referring to the version you linked in the article.

    You want to believe you’re a simulation, no one’s stopping you

    I am never quite sure what the purpose is of attributing opinions to me that I have never expressed nor implied.

  28. Hi davidlduffy,

    ” If you posit physical infinities to get around this, then why not cut out the Demiurgic simulator middlemen and be done with it.”

    I see what it is that I need to get around. Obviously there can’t be infinite recursive simulations, but what part of the argument does this affect? Are you saying that this implies that the simulations will be altogether impossible in the first place?

    Consider it in the context of my simple simulation example. The program I describe is possible right now, and so are the simulations within simulations in it. At some level it will crash the program or PC, but obviously the fact that I am looking at a POV record implies that I am seeing a view of the program in a running state.

    So the fact that this program may crash at a later stage does not alter any of the probabilities.

  29. Robin,
    You’re right, Bostrom himself does not say that. But it is a *necessary* implication of his argument, one made explicit in other versions of it, which is why I wanted to generalize the argument as ‘bostrom arguments.’ I’m sorry I failed to do that here, but my frustration limit had been reached.

    Abductively, it seems quite clear that you want to believe this, since you have struggled so tenaciously to save the simulation hypothesis. Why else would you do that? Again, I ask: what is it you want from this hypothesis? What is it you want from your audience here?

    What I wanted, going into this essay, was to persuade my audience to reduce the level of wild hypothesizing such as one hears from some TED talks, or at least to view such critically. I would like increased wariness when we see formal logics deployed for rhetorical purposes.

    And I would like us to consider our research expenditures in concrete terms having to do with our present needs, and not devote them to some whimsical ‘singularity’ in the future. I would like my readers to see the future as offering several possibilities, rather than just one or two. And just by the way, I would like us to see our computers as our tools, as they were invented to be, and not as our future replacements.

    So that’s been my motivation here. Now what’s yours?

  30. “Quite obviously I have not presented any argument of my own about a simulated reality.” You’re intending this ironically, I hope; or you are really unaware of what your comments add up to or how they’re received by at least some of your audience. Unfortunately, I suspect the latter, which is why I’m noting this to you.

    “We can check the basic logic by leaving consciousness out of it for the moment.” As soon as you took this turn you technically went off topic, since bostrom arguments (including Bostrom’s own) explicitly assert the possibility and then the probability of simulated consciousness. From here on, you were developing a separate argument for your own simulation hypothesis.

    Since this was OT, I lost interest in the specifics of your argument. And I don’t have any interest in continuing discussion concerning it. I would like to know your motivations here, however, as I’ve said.

    There is no unmotivated human activity, not even logic. We always reason to a purpose. That’s why we’re not computers.

  31. Hi

    “If you want to write “the Robin Herbert argument why we’re all living in a simulation,” fine”

    Again, I don’t know the purpose of you pretending I said something I obviously never said nor implied.

    The fact that you simply repeated this right after I pointed out that it is not true indicates that yours is not a misunderstanding but rather a deliberate misrepresentation.

    There is no point in having a discussion with you if you are going to repeatedly pretend I have said things I have not said.

  32. Hi ej

    And I don’t have any interest in continuing discussion concerning it. I would like to know your motivations here, however, as I’ve said.

    The purpose of examining the logical structure of an argument is to check for validity and any kind of structural flaw.

    I didn’t think I would need to explain that here.

  33. So we substitute (taken from Bostrom’s paper):

    A = “the human species is very likely to go extinct before reaching a ‘posthuman’ stage”

    B = “any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)”

    C = “we are almost certainly living in a computer simulation.”

    And the inference is just as I had it above:

    P1. A or B or C
    P2. Not A
    P3. Not B
    Conclusion: C

    Bostrom does not go so far as saying that P2 and P3 are true but gives reasons why we might think they are reasonable.

    If there is some kind of necessary logical equivalence between Bostrom’s arguments and your statements, I can’t see it. And I don’t agree that any of the probability claims collapse to universal claims, for the reasons I have stated.

  34. Robin,
    “The purpose of examining the logical structure of an argument is to check for validity and any kind of structural flaw.”

    That’s not a purpose; that’s a function. What motivates you to attempt this function?

    You’ve misunderstood the article, you’ve misunderstood Bostrom and arguments like his, you’ve misunderstood the issues involved, you’ve misunderstood my question. And you seem so obsessed with salvaging the simulation hypothesis that you’re not bothering with any attempt at such understanding. Your response is that I don’t understand you. But I confess that. I’ve only engaged in further discussion to see if I could understand you, and to try to redirect your obsession into the more general issues raised in my article – which you’ve utterly ignored, as well as some of the comments, which you have also largely ignored. So I still don’t understand you. When you finally collect all your notes for a herbert argument for a simulation hypothesis and post it somewhere, I will not read it, but perhaps others will. Good luck.

  35. “If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).
    Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.”

    What are the *rhetorical* implications of this conclusion? Obviously, that we need “wealthy individuals who desire to run ancestor-simulations and are free to do so.” (Which choice converts (1) to its contrary.) Bet on (3) and allow (fund research for) “wealthy individuals who desire to run ancestor-simulations and are free to do so,” and then even if we’re not simulations we may achieve post-human evolution. Pascal’s ghost. Only, I don’t believe in ghosts. I’m not betting on this. It’s a waste of time. And thinking about alternatives, even one which envisions the problem from a simulator’s perspective, is also a waste of time.

    Until there is empirical evidence that we even might be simulations – *which evidence is not possible* – all simulation arguments are mere chatter in the noise of pseudo-logical analysis.

    I’ve often remarked on the problem of the Logical Positivist tradition’s legacy of allowing all things to be said, despite irrelevance or violation of actual experience. I think we’ve had a good demonstration of that here.

    We are not logical machines. We are a certain kind of animal, with animal desires, animal ‘impurities,’ animal needs, animal fears and animal hopes. That in itself scrubs the simulation hypothesis, which implies that we are only a certain kind of purely rational consciousness. No, and our computers will not make us that.

  36. EJ,

    I enjoyed your essay, I’m glad you wrote it, and I agree. Just a few weeks ago I finally decided to look up Bostrom’s argument and I was appalled that it was being taken as seriously as it is.
