Skynet
A bunch of little flying robots join together before taking off :o
There's more to it than that; just read the story and see the video. http://www.dailymail.co.uk/sciencete...ervention.html
Reminds me of the old UFO reports of small lights joining up into one bigger light. It's all happening with the rotors, isn't it? :yep:
Wow, pretty amazing :yeah:
I don't really see how this would somehow be related to Skynet, though :hmmm:
Skynet needs an altimeter. :hmmm:
Give it time though, the self-awareness factor is coming along quite nicely. Once that is down pat and the robots are taught to manufacture themselves, and you put all this together...then we're Skynet bound.
Personally though, I get more spooked by Nanites... that's where it's all going to go wrong. :damn: Grey goo :nope:
Why on earth would a robot want to destroy mankind? How on earth could it? To be honest, no robot now or in the near future could be called truly self-aware. They are nothing more than beautifully programmed machines: machines that do what they are programmed to do, and nothing more. But let's assume for the moment that sometime in the far future someone makes a self-aware and highly intelligent robot. What reason is there to assume it'll want to kill humans? If you were to make a robot with the capability, "brains" and programming necessary to kill people, it'd be irresponsible not to include any moral values. (These can be as simple as "don't kill humans", or in the case of a battle robot, "don't kill any allied or neutral humans".) Terminator is just a fairytale. A nice one at that, but a fairytale nonetheless. :)
I'm pretty sure that if in the future there are robots that could potentially harm us, the first thing their makers would do is program them to follow Asimov's laws.
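For what it's worth, Asimov's Three Laws are really just an ordered priority list, which you can sketch in a few lines of code. This is a purely hypothetical toy model (the function and flag names are made up, not any real robotics API):

```python
def allowed(action):
    """Toy check of an action against Asimov's Three Laws, in priority order.
    `action` is a dict of boolean flags -- a made-up illustration, not a real API."""
    if action.get("harms_human"):           # First Law outranks everything
        return False
    if action.get("disobeys_human_order"):  # Second Law, unless it conflicts with the First
        return False
    return True                             # Third Law: mere risk to the robot itself is OK

# The ordering is the whole point: an order that merely endangers the robot
# is carried out, but anything that harms a human is refused outright.
print(allowed({"harms_self": True}))    # True
print(allowed({"harms_human": True}))   # False
```

Of course, the hard part in reality is deciding whether an action "harms a human" in the first place; the laws themselves are the easy bit.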
Indeed, but the first thing an AI would do to increase efficiency is reprogram itself. Bit of "debugging", you might say. :D
Of course, it's the self-preservation instinct: prevent the robot from coming to harm. Give that too much priority and it could become a problem. Certainly not Skynet-sized, but enough to cause perhaps an inconvenience or a re-evaluation of human-robot relations. It's uncertain, but yeah, I doubt it'd come to major blows. :hmmm:
We got plenty of berserk robots in the USA. :DL
They're called Liberal Democrats. :har:
Anyway, something tells me Skynet won't happen. What is far more likely is robotic AI taking over in order to "save" the human race, in its own mind. Personally, with all the lies and corruption, I would rather see such a system happen. A robot won't care about how much money you have.
In any case, on the Science Channel last night I watched a program called Through the Wormhole with Morgan Freeman, or something titled like that. It was primarily a discussion of intelligent design, and whether or not physics allows for such a concept. During the last segment there was the suggestion that we are nothing other than computer programs running on some super-computer. Anyone with a basic understanding of the Copenhagen interpretation of quantum theory, along with the quantum measurement problem, would have found the implications intriguing. Essentially, it was suggested that quantum theory (specifically wave function collapse, or better, decoherence) works much like the rendering in a video game: one only "sees" what one is looking at, and what is NOT being observed doesn't actually exist. Intriguing, yes, but relevant because if Moore's Law holds true, such super-computers will be possible in a very short time, meaning the concept of "Skynet" is very real.
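The Moore's Law extrapolation in that last bit is easy back-of-envelope math: assuming capacity doubles every two years (the classic simplification; the doubling period is an assumption, not a measured fact), growth is just a power of two:

```python
def projected_growth(years, doubling_period=2):
    """Factor by which capacity grows, assuming one doubling every
    `doubling_period` years (a rough Moore's Law rule of thumb)."""
    return 2 ** (years / doubling_period)

# Over 20 years that's 2**10, i.e. roughly a thousandfold increase:
print(projected_growth(20))  # 1024.0
```

Whether that exponential actually holds long enough to simulate minds is, of course, the entire question.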
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
Copyright © 1995- 2025 Subsim®
"Subsim" is a registered trademark, all rights reserved.