In the deleted scenes of the movie Terminator 2, there’s a clip where the liquid robot explores John Connor’s room and can clearly be seen fumbling around.
Whether this robot was remotely controlled or self-guided is irrelevant.
What is clear is that the room it was trying to navigate had not been fully documented.
This implied that the map of the area wasn’t preloaded as software, and that there were clear limitations on what was visible to the navigator, whether that navigator was a human on the other end or the robot itself using a form of dynamic navigational software.
Now what I’ve come to conclude is – both of these possibilities are plausible.
IF the robot were in a simulation and didn’t have the capability to ‘scan’ its surroundings through typical sensing mechanisms that rely on wave-based physics, then it would fumble through the unmapped terrain of John’s room like a blind person.
Now, in my world, I have no direct proof of the existence of atoms.
Sure, I’ve seen the research. But unlike Earth’s spherical nature and the cycles of the day, which I have physically observed, I have no observational evidence of the existence of atoms, waves, and all that junk.
I have an education. I have limited experiences through oscilloscopes and similar mechanisms, but no real tactile experiences.
This isn’t to say I don’t believe in them. There are other ‘mirror concepts’ in business which I DO have direct experience with, which provide patterns of correlation between the two. But that’s not hardened, actual science.
So first, I suspect this being who entered my world couldn’t get firm atomic grounding and readings here, simply because of my lack of direct experience with them.
But let’s say the robot was guided – remotely.
IF the area has changed since the mapping took place, even if that mapping happened microseconds before, there’s still a chance, due to the wonky nature of time and space, that the world being explored could be subtly, or even substantially, different from what was mapped.
And since digital imagery is a correlative and indirect way to translate light into one form and then back again, no matter how fast the mechanisms are, invariably there are differences between what’s seen and reality.
If one were to have the Mars rover guide itself to a given location in Cydonia, take pictures, and ONLY relay that imagery ONCE it’s all done,
my bet is that what the rover sees will be remarkably different from what can be seen through digital mechanisms at satellite level and from Earth, captured at precisely the same timing sequences.
But only if you quit calculating for Einstein’s effects on time when relaying that information.
It’s trippy. But you should understand why when you get the results.
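To get a feel for why “precisely the same timing sequences” is tricky across that distance, here’s a minimal sketch of the one-way light-travel delay between Mars and Earth, using approximate published closest and farthest Earth–Mars distances (the function name and distance constants are my own illustrative choices, not from any mission software):

```python
# Illustrative sketch: one-way light-time delay between Mars and Earth.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_seconds(distance_km: float) -> float:
    """Time for a signal to cross the given distance at light speed."""
    return distance_km / C_KM_PER_S

closest_km = 54.6e6    # approximate minimum Earth-Mars distance, km
farthest_km = 401e6    # approximate maximum Earth-Mars distance, km

print(f"Closest approach: {one_way_delay_seconds(closest_km) / 60:.1f} minutes")
print(f"Farthest apart:   {one_way_delay_seconds(farthest_km) / 60:.1f} minutes")
```

So even before any relativistic correction, an image relayed from the rover describes a scene that is between roughly 3 and 22 minutes old by the time it reaches Earth, which is why comparing it against an Earth-side or satellite image “taken at the same moment” depends entirely on how you account for that delay.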