The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.

But if the real world sets up a simulated world which more or less perfectly simulates itself, wouldn’t running a mirror sim-within-a-sim require at least twice that much processing power and resources? How could the infinitely recursive simulations even begin to be set up unless the real meat people keep adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe and then just sit back and watch, any attempt by my simulated people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decided to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
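Here’s the back-of-the-envelope version of what I mean, with made-up units (one “capacity” for the real machine, roughly one unit of cost per 1:1 sim):

```python
# Toy model: the real world has 1 unit of compute, and a 1:1 sim costs ~1 unit.
# Every nested sim, however deep, ultimately has to be crunched by that same
# real hardware. `branching` = how many sims each simulated world spins up.
REAL_CAPACITY = 1.0

def total_load(depth: int, branching: int = 1) -> float:
    """Demand on the real hardware: 1 sim at level 1, branching**(k-1) sims at level k."""
    return float(sum(branching ** (level - 1) for level in range(1, depth + 1)))

for depth in (1, 2, 3):
    load = total_load(depth)
    print(f"{depth} level(s) deep: {load:.0f}x real capacity "
          f"-> {'fine' if load <= REAL_CAPACITY else 'blue screen'}")

# And if 10 people inside the first sim each start a sim of their own:
print(f"2 levels, branching 10: {total_load(2, branching=10):.0f}x real capacity")
```

Even with one sim per world the load climbs past the real hardware at the second level, and with several sims per world it grows exponentially with depth.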

What am I not getting about this?

Cheers!

  • Scubus@sh.itjust.works · 12 points · 5 months ago

    Quantum is weird. If we are in a simulation, that would explain a lot of it, because the quantum effects we see would actually just be lightweight simulations of much deeper mechanics.

    As such, if we were simulating a universe, there’s every chance that we may decide to only simulate down to individual atoms. So the people in the simulation would probably discover atoms, but then they would have to come up with their own version of quantum mechanics to describe the effects that we know come from quarks.

    The point is that each layer may choose to simulate things at slightly lower fidelity to save on resources, and you would have no way of knowing.

    • xantoxis@lemmy.world · 4 points · 5 months ago

      Indeed, and (interesting corollary) if we accept the concept of reduced-accuracy simulations as axiomatic, then it might be possible to figure out how close we are to the theoretical “bottom” of the simulation stack. There are only so many orders of magnitude, after all; at some point you’re only simulating one pixel wiggling around, and that’s not interesting enough to keep going down.

      There is not, as far as I know, any way to estimate the length of the stack in the other direction, though.
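      Rough sketch of that downward bound, with completely made-up numbers (the particle count and the per-layer fidelity cut are only illustrative):

      ```python
      import math

      PARTICLES_HERE = 1e80    # rough particle count of the observable universe (illustrative)
      CUT_PER_LAYER = 1e3      # assume each layer simulates ~1000x fewer degrees of freedom

      # How many layers down before a sim is reduced to ~one wiggling pixel
      max_layers_below = math.log(PARTICLES_HERE, CUT_PER_LAYER)
      print(f"at most ~{max_layers_below:.0f} layers below us")   # ~27 with these numbers
      ```

      Swap the assumptions and the number moves around, but it stays finite either way.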

    • bunchberry@lemmy.world · 3 points, 1 downvote · edited · 5 months ago

      I have never understood the argument that QM is evidence for a simulation because the universe is saving resources by not “rendering” things at that low a level, or something like that. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, for example when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive, because they do not even live in ordinary spacetime but in an abstract Hilbert space whose complexity grows exponentially with the size of the system rather than linearly like classical systems.

      That’s the whole reason for building quantum computers: simulating this classically is so expensive that it’s more efficient just to build a machine that does it. At a fundamental level the laws of physics get far more complex and far more computationally expensive, not the reverse.
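      To put a number on “exponentially”: describing n classical bits takes n values, while describing n qubits takes 2^n complex amplitudes. A quick illustration, assuming a standard 16-byte complex number purely for scale:

      ```python
      # Memory needed just to *store* a full n-qubit state vector
      # (16 bytes per complex amplitude), never mind evolving it in time.
      for n in (20, 40, 60):
          amplitudes = 2 ** n                  # dimension of the Hilbert space
          print(f"{n} qubits: 2^{n} = {amplitudes:.2e} amplitudes, ~{amplitudes * 16:.2e} bytes")
      ```

      Storage for a classical system grows roughly linearly with its size; storage for the wave function grows exponentially, and that’s before you even start evolving it in time.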

      • Scubus@sh.itjust.works · 2 points · 5 months ago

        To be clear, I’m not arguing that it is evidence; I’m merely arguing that it could be a result of how they chose to render our simulation. And just because it’s more computationally expensive on our side does not necessarily mean it’s more expensive on their side, because we don’t know what the mechanics of the deeper layer might be.

        For example, it would be a lot less computationally expensive to render our simulation accurately down to the cellular level than down to the atomic scale. From there, you could simply replicate the rules of how molecules behave without actually rendering them, something like: “cells seem to have a finite amount of energy based on the food you consume, and we can model the mathematics of how that works, but we can’t seem to find a physical structure that makes it happen.”
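        Something like this toy rule, where the function itself is the deepest layer that exists (the names and numbers are invented for the example):

        ```python
        # A "cell-level" rule standing in for all the molecular machinery underneath.
        # Simulated biologists could measure this curve perfectly and still never find
        # the machinery, because there isn't any - just this function.
        def cell_energy(energy: float, food: float, upkeep: float = 0.2) -> float:
            """One tick of a cell's energy budget, fitted rather than derived from molecules."""
            return max(0.0, energy + 0.8 * food - upkeep)

        e = 1.0
        for tick in range(3):
            e = cell_energy(e, food=0.5)
            print(f"tick {tick}: energy {e:.2f}")
        ```

        From the inside, the rule would be perfectly measurable but have no visible machinery underneath, which is roughly what I’m suggesting quantum mechanics could be.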