Does Google's cloud anchor technique for Augmented Reality (AR) prove the theory that the universe is a simulation?


After writing a few articles about the concept that you and I live in a virtual reality, the many reactions show that readers are rather shocked. Because I have argued that the principle of 'quantum entanglement' is a necessary phenomenon that undermines Einstein's locality principle and also the speed of light (as a physical boundary), I seem to be proposing something that turns the world view of many upside down. My assumption is that quantum entanglement is simply a necessary 'agreement' in a multiplayer game: once something has come out of superposition (through the first observer) and is materialized, everyone must observe the same thing. It therefore seemed to me that this principle would also have to show up in the virtual reality development world. And what turned out to be my big surprise? Google's Augmented Reality (AR) developer platform applies a principle similar to 'quantum entanglement'!

To make the concept that the universe is a simulation tangible, I made a comparison with an online multiplayer game. In that context, I advised some readers to take a look at the Netflix series Black Mirror, season 4, episode 1. Although Hollywood will of course never reveal the truth, because we have to be kept on the wrong track every time while the transhumanist (next) trap is being prepared, this episode gives you a reasonable feel for such a game (apart from the fact that you also see Einstein's, in my eyes incorrect, wormhole theory). With the current state of technology, we are not far removed from such developments.

I understand that this subject is quite a mouthful of technical information for the average reader of my site and that most will probably give up, but I will still try to explain the case in more detail. You may then discover that the idea that we have to explain our reality spiritually or through religion can actually be discarded: we can explain our reality purely on the basis of logic. That is why we are in a sense lucky to live in this time of advancing technological developments, because it simply shows how our current (sur)reality works. Religions only give the extra instructions to specifically identify 'the architect'. My advice is therefore to do your best to really understand the double-slit experiment (from my previous articles), so that you understand that matter only exists after observation. So an observer is needed. Until then, matter is in the 'all possible options' position, called 'superposition' by science. This can be compared to a multiplayer game in which several players, for example, enter a room and view an object. The shape of that object is already fixed in the source code of the program. Once observed by the first player, the other players must see the same thing, each from a different angle. If one of the players picks up the object and turns it around, the other players must observe that rotational movement from their own point of view. I used this to explain the concept of 'quantum entanglement'; it may not be a solid explanation, but it gives an idea.
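The multiplayer analogy above can be sketched in a few lines of code. This is a purely illustrative toy model (the class and method names are invented for this example, not any real game engine or Google API): a shared object stays 'undetermined' until the first player observes it, after which every later observer receives the same fixed state.

```python
import random

class SharedObject:
    """Toy 'superposition' model: the state is fixed on first observation."""

    def __init__(self, possible_states):
        self.possible_states = possible_states
        self.state = None  # 'superposition': not yet materialized

    def observe(self):
        # The first observer collapses the state; afterwards it is
        # anchored centrally, so every later observer sees the same thing.
        if self.state is None:
            self.state = random.choice(self.possible_states)
        return self.state

    def rotate(self):
        # A change made by one player is stored centrally, so the
        # other players observe the same rotation from their own angle.
        if self.state is None:
            raise RuntimeError("object must be observed before it can be rotated")
        self.state = self.state + " (rotated)"
        return self.state

# Two players enter the room: both see the identical, now-fixed state.
cube = SharedObject(["red cube", "blue cube"])
player_a_view = cube.observe()
player_b_view = cube.observe()
assert player_a_view == player_b_view
```

The point of the sketch is only the ordering: which of the possible states becomes real is decided exactly once, at the first observation, and from then on it is the same for everyone.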

Quantum entanglement in quantum physics means that when, for example, a photon pair is materialized (by observation), the two photons always assume the opposite position and direction of rotation. If you flip one particle, the other particle flips as well, even if it is light-years away from the first. Locality therefore plays no role, and the limitation of the speed of light suddenly no longer applies. Einstein stated that there would have to be a kind of back door through which the communication occurs: the well-known wormhole principle. Well, if that back door is simply the data stream, or what we now call 'the cloud', then we can suddenly picture it. If we explain this within our 3D perception, you would still run into the limitation of the speed of light (if the communication of data in 'the cloud' runs via optical fiber and laser light), but if we think in 4D, then that 'cloud' is present everywhere at the same time. Suppose you were a 2D being and I dropped a sphere from 3D through your plane: you would chronologically observe the appearance of a dot (out of nowhere), then an ever-growing line, followed by an ever-shrinking line back to a dot that suddenly disappears. Yet the sphere already exists above your plane and also below your plane. You experience the observation chronologically, while the spherical shape is actually a constant (always present) in 3D. So if we add a 4th dimension to our model, that dimension is, as it were, 'the cloud' that is present everywhere.
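The sphere-through-the-plane analogy can even be calculated. A sphere of radius R whose centre sits at height z above the plane intersects that plane in a circle of radius sqrt(R² − z²), and in nothing at all when |z| ≥ R. Sliding the sphere through the plane reproduces exactly the dot–growing line–shrinking line–dot sequence described above (a small illustrative calculation, function name invented for this sketch):

```python
import math

def cross_section_radius(sphere_radius, center_height):
    """Radius of the circle a sphere cuts out of a 2D plane.

    center_height is the distance from the sphere's centre to the plane;
    returns 0.0 when the sphere does not touch the plane at all.
    """
    if abs(center_height) >= sphere_radius:
        return 0.0
    return math.sqrt(sphere_radius**2 - center_height**2)

# A sphere of radius 1 dropping through the plane: the 2D being sees
# a dot appear, grow, reach full size, shrink, and vanish again,
# even though the whole sphere was 'there' the entire time.
for z in [1.5, 1.0, 0.5, 0.0, -0.5, -1.0, -1.5]:
    print(f"height {z:+.1f}: observed radius {cross_section_radius(1.0, z):.3f}")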

If, in a multiplayer game, player A in Japan turns an object around, player B in Canada should also see it turn on his screen. Quantum entanglement is thus explained by an omnipresent data flow. We thought that time (the speed of light) was the limiting factor in our 3D, but that is because we cannot perceive the sphere (which was always there) and only make the chronological observation of the line appearing and disappearing. Locality and the speed of light therefore do not count in 4D and in the always-and-everywhere-present data stream, in which all matter is still in superposition in the source code.

How interesting is it, then, to discover that current virtual reality techniques provide insight into the situation we are in. Take a look at the presentation below, which explains that the difference between virtual reality (VR) and augmented reality (AR) will disappear. In essence, VR is a fully digitally created world (as in a game), while AR is a digitally created layer on transparent glasses that still let you perceive the 'real world'. This difference will disappear because the two will, as it were, merge. The advantage of VR is that you can build a multiplayer game much more easily, because you have defined the playing field yourself and thus already fixed the reference points of observation in the game. You have built a large digital room, as it were, and you know exactly from which distance, height and angle the perception of an object takes place, so that you can project the image correctly in the 3D glasses for each player. With AR you still have to solve this problem, and that is possible by passing the observation and the position of the player to the central server in 'the cloud' via accurate GPS sensors. (Read more below the video)

Revelation 1:7 – 'Look, he is coming with the clouds, and every eye will see him'

Google has equipped its developer platform for this. Of course, because it is mainly the big companies like Google that want to see us disappear further and further into the VR and AR world, so that we also become more and more dependent on 'the cloud'. To tackle this technical problem, you need reference points. You have to know exactly where each player is in the playing field (the world), and you also need to know where a perceived object is located. Google has solved this with a very logical principle: cloud anchors. Logical, because you just have to tell 'the cloud' where on the planet the observer is located and where objects are located from the perceived perspective. So suppose you want a dinosaur to stand realistically (as AR) on a table in a room. Then you have to know the exact position of the observer. You can know this on the basis of GPS data, but also on the basis of the dual camera lenses in the glasses (which allow depth perception, just like two eyes). You can then lay a grid across the table, as it were, and determine the observer's angle of view in relation to the table. You then divide the table into so-called anchor points and register those anchor points in 'the cloud', so that they are the same for every observer.
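The cloud-anchor idea can be sketched as a shared registry: the first device hosts an anchor (a position in the real world), and every other device resolves the same anchor ID and renders the virtual object relative to it from its own viewpoint. This is a simplified illustration of the concept only, not Google's actual ARCore Cloud Anchors API; all class, method and anchor names are invented for the example.

```python
class CloudAnchorRegistry:
    """Toy model of a cloud anchor service: one shared store of anchors."""

    def __init__(self):
        self._anchors = {}

    def host_anchor(self, anchor_id, world_position):
        # The first device uploads the anchor's real-world position.
        self._anchors[anchor_id] = world_position
        return anchor_id

    def resolve_anchor(self, anchor_id):
        # Every other device downloads the exact same position.
        return self._anchors[anchor_id]

def view_from(anchor_position, viewer_position):
    # Each viewer sees the object relative to their own position,
    # but anchored to the same shared point in the world.
    return tuple(a - v for a, v in zip(anchor_position, viewer_position))

registry = CloudAnchorRegistry()
registry.host_anchor("dino-on-table", (2.0, 0.0, 1.0))

anchor = registry.resolve_anchor("dino-on-table")
viewer_a = view_from(anchor, (0.0, 0.0, 0.0))  # standing at the door
viewer_b = view_from(anchor, (4.0, 0.0, 1.0))  # standing at the window
# Different relative views, but one shared anchor: everyone sees the
# dinosaur on the same spot on the table.
```

The design point is that the anchor lives in the shared store, not on any player's device: each viewpoint is derived from one central position, which is exactly what keeps all observers consistent.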

In this way you can project all sorts of 3D holograms in space (via the Microsoft HoloLens) that are then visible to everyone. Different players can bring different objects into the playing field, and these match each other exactly via the anchor points. So imagine that you are holding an AR sword fight projected on top of the real world (via a Microsoft HoloLens): the anchor points in the Google cloud then ensure that the swords touch each other at exactly the right moment and do not swipe slightly past each other because the reference points are off.

Of course, VR and AR only become really interesting if we can make a neurological connection through, for example, Elon Musk's company Neuralink, so that the sensations of touch, smell, hearing and vision can be projected directly into the brain. By that time it will hardly be possible to distinguish the digitally created from the real. But with that I am making a side step towards the preliminary stage of transhumanism. The Netflix series Altered Carbon also provides a good picture of the transhumanist world that we could expect within a few decades. Temptation also plays a major role here, because it is of course portrayed romantically.

The reason I tell you this is, of course, the fact that Google introduced the concept of cloud anchors. What does that remind you of? Is it not the case that, through this principle, a materialized 3D image (as a hologram on your HoloLens) is the same for every observer? Doesn't that cloud anchor remind you of anything? What happens when information materializes from superposition at the first observation (in the double-slit experiment)? It assumes a position that is the same for every observer and whose position is anchored through the principle of quantum entanglement! Eureka!

Quantum entanglement has the same function as Google's cloud anchor!

Unfortunately, I cannot apply for a patent for this discovery and I will probably never be honored like Einstein, but I wanted to share my enthusiasm with you. How much more evidence do you want that we live in a virtual reality? Or is it perhaps an augmented reality, where the original universe has been used to lay an augmented layer on top, and our biocomputer is hacked full-time via a central server within the 'real reality'? Or is that 'real reality' (our universe) in its entirety also a simulation? Are there multiple layers, where a simulation runs within a simulation? Admittedly, I am making it complex now, but these are the considerations that occupy me.

The quantum physical explanation states that there must always be an observer and that the information only unfolds as matter after observation. Is the soul then the observer, and is the cloud omnipresent 'around' us? Do you remember that PlayStation game? All the code is already burned onto the CD. Once you start the game, you have all the options and experience the game chronologically. Your choices (and in a multiplayer game also the choices of others) determine how the information unfolds on your screen. You are the observer. Are you still waiting for the messiah avatar that will present itself from the cloud?



Comments (4)


  1. SalmonInClick wrote:

    Martin, I follow you completely; what you try to explain is made known in films such as Inception, Interstellar, etc. The point is that infinite simulations (timelines) run simultaneously with different outcomes, which at times flow seamlessly into each other: so-called dimension shifts. Time does not exist outside this simulation (or these simulations). I know that institutions such as CERN are dealing with these issues and are involved in the manipulation of this timeline.

    First, our soul is trapped in a shell whose functioning is limited by our brain; in addition, we are trapped in a simulation / 3D dimension. The first phase of awakening is realizing that we are trapped.

    Lucifer is not called the 'light bearer' for nothing: the light that got us into this simulation. The last trick is to keep us here definitively via the VR agenda, a simulation within a simulation (Droste effect / maze).

    'String theory is a theoretical framework in which the point-like particles of particle physics are replaced by objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the gravitational force. Thus string theory is a theory of quantum gravity.'

    • Martin Vrijland wrote:

      String theory was devised to explain quantum entanglement, because quantum entanglement ruined Einstein's locality theory. So they simply thought up strings between the particles; strings that then run through another dimension. Wow! Brilliant! Bullshit, if you ask me. Just a distraction to avoid having to admit that we are living in a (Luciferian) simulation.

      I do not know whether there are endless simulations running simultaneously. We can at least identify one: the one we are currently in. Incidentally, if you take the signals seriously, it is layered: there seems to be, for example, some sort of 'archontic layer' or 'entities layer' in the simulation. In short: the Luciferian simulation alone already seems to consist of several dimensions (where a dimension is a simulation within a simulation).

      Theoretically, you might be able to argue that your soul could experience multiple dimensions in parallel. The only question is whether that would be 'a smart choice' for, say, our great friend Lucifer, because it is quite difficult to remain focused if you are playing multiple PlayStation games as a result. And it seems that we do have to stay focused on his game. So I personally consider it less likely that we live in parallel universes at the same time.

    • SalmonInClick wrote:

      the 'ELite' that serve this Luciferian agenda are working hard to use these technologies against the ignorant masses

      4. Artificial intelligence
      9. Quantum computing

      "We cannot solve problems by using the same kind of thinking we used when we created them."

  2. Martin Vrijland wrote:

    So for the sake of clarity: a dimension is nothing more than a simulation within a simulation.

    Imagine that we are building a virtual reality simulation in which we tend to lose ourselves completely, so that we forget that we are 'at the controls'. That will soon be possible if we can stimulate all sensory perception directly in the neurons of our brain. If we then 'live' in that simulation, the current layer is the higher dimension.

