SUBSIM Radio Room Forums

SUBSIM Radio Room Forums (https://www.subsim.com/radioroom/index.php)
-   General Topics (https://www.subsim.com/radioroom/forumdisplay.php?f=175)
-   -   Skynet (https://www.subsim.com/radioroom/showthread.php?t=170845)

SteamWake 06-09-10 04:42 PM

Skynet
 
A bunch of little flying robots join together before taking off :o

There's more to it than that; just read the story and watch the video.

http://www.dailymail.co.uk/sciencete...ervention.html

Oberon 06-09-10 05:56 PM

Reminds me of the old UFO reports of small lights joining up into one bigger light. It's all happening with the rotors, isn't it? :yep:

DarkFish 06-09-10 06:00 PM

wow, pretty amazing:yeah:

I don't really see how this is related to Skynet, though. :hmmm:

SteamWake 06-09-10 09:20 PM

Quote:

Originally Posted by DarkFish (Post 1415605)
wow, pretty amazing:yeah:

I don't really see how this is related to Skynet, though. :hmmm:

Skynet was the arch-enemy in the Terminator series of movies: a self-aware network of machines bent on the destruction of mankind.

Arclight 06-10-10 05:56 AM

Skynet needs an altimeter. :hmmm:

Dowly 06-10-10 06:01 AM

Quote:

The mini flying robot drones that join forces before takeoff - all without human help
Correct me if I'm wrong, but doesn't programming them to do that fall under 'human help'? :O: Impressive, tho. :yep:

DarkFish 06-10-10 06:43 AM

Quote:

Originally Posted by SteamWake (Post 1415696)
Skynet was the arch-enemy in the Terminator series of movies: a self-aware network of machines bent on the destruction of mankind.

Yes, I know, but the "self-awareness" of this thing is nowhere near Skynet's. ;)

Oberon 06-10-10 10:50 AM

Give it time though, the self-awareness factor is coming along quite nicely. Once that is down pat and the robots are taught to manufacture themselves, and you put all this together...then we're Skynet bound.

Personally though, I get more spooked by Nanites...that's where it's all going to go wrong. :damn: Grey goo :nope:

DarkFish 06-10-10 02:33 PM

Quote:

Originally Posted by Oberon (Post 1416032)
Give it time though, the self-awareness factor is coming along quite nicely. Once that is down pat and the robots are taught to manufacture themselves, and you put all this together...then we're Skynet bound.

Personally though, I get more spooked by Nanites...that's where it's all going to go wrong. :damn: Grey goo :nope:

We will never be Skynet bound.
Why on earth would a robot want to destroy mankind? And how on earth could it?
To be honest, no robot now or in the near future could be called truly self-aware. They are nothing more than beautifully programmed machines: machines that do what they are programmed to do, and nothing more.

But let's assume for the moment that sometime in the far future someone makes a self-aware and highly intelligent robot. What reason is there to assume it'll want to kill humans?
If you made a robot with the capability, "brains" and programming necessary to kill people, it would be irresponsible not to include any moral values. (These can be as simple as "don't kill humans", or in the case of a battle robot, "don't kill any allied or neutral humans".)


Terminator is just a fairytale. A nice one at that, but a fairytale nonetheless. :)

Dowly 06-10-10 02:37 PM

I'm pretty sure that if in the future there are robots that could potentially harm us, the first thing their makers would do is program them to follow Asimov's laws.

Quote:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

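The priority ordering of the three laws can be sketched as a toy decision check. This is purely an illustration of how the laws nest by priority; every name here is hypothetical, not any real robotics API:

```python
# Toy sketch of Asimov's three laws as a priority-ordered action filter.
# The `action` dict and its keys are hypothetical, for illustration only.

def permitted(action):
    """Return True if the proposed action passes the three laws, checked in priority order."""
    # First Law: never harm a human, whether by action or by inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders. Orders that would violate the First Law
    # were already rejected above, so the conflict clause is implicit.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, but only when it doesn't override
    # the first two laws (e.g. an explicit order takes precedence).
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True

print(permitted({"harms_human": True}))                      # harmful action is refused (First Law)
print(permitted({"destroys_self": True, "ordered": True}))   # ordered self-sacrifice: Second Law outranks Third
```

The point of the ordering is that each check only runs once the higher-priority laws are satisfied, which is exactly the "except where such orders would conflict" structure of the laws themselves.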

Arclight 06-10-10 02:42 PM

Indeed, but the first thing an AI would do to increase efficiency is reprogram itself. Bit of "debugging", you might say. :D

Oberon 06-10-10 02:43 PM

Quote:

Originally Posted by Dowly (Post 1416208)
I'm pretty sure that if in the future there are robots that could potentially harm us, the first thing their makers would do is program them to follow Asimov's laws.

Yes. Definitely.

Of course, it's the self-preservation instinct, keeping the robot from coming to harm. Give that too much priority and it could become a problem. Certainly not Skynet-sized, but enough to cause an inconvenience, or perhaps a re-evaluation of human-robot relations.

It's uncertain, but yeah, I doubt it'd come to major blows. :hmmm:

FIREWALL 06-10-10 03:44 PM

We've got plenty of berserk robots in the USA. :DL

They're called Liberal Democrats. :har:

Zachstar 06-10-10 05:43 PM

Quote:

Originally Posted by FIREWALL (Post 1416263)
We've got plenty of berserk robots in the USA. :DL

They're called Liberal Democrats. :har:

Was that really necessary? :down:


Anyway, something tells me Skynet won't happen. What is far more likely is a robotic AI taking over in order to "save" the human race, as it sees it.

Personally, with all the lies and corruption, I would rather see such a system happen. A robot won't care about how much money you have.

Aramike 06-10-10 11:31 PM

Quote:

Originally Posted by Zachstar (Post 1416326)
Was that really necessary? :down:


Anyway, something tells me Skynet won't happen. What is far more likely is a robotic AI taking over in order to "save" the human race, as it sees it.

Personally, with all the lies and corruption, I would rather see such a system happen. A robot won't care about how much money you have.

Maybe ... :O:

In any case, on the Science Channel last night I watched a program called Through the Wormhole with Morgan Freeman, or something titled like that. It was primarily a discussion of intelligent design, and whether physics supports such a concept.

During the last segment there was the suggestion that we are nothing more than computer programs running in some super-computer. Anyone with a basic understanding of the Copenhagen interpretation of quantum theory, along with the quantum measurement problem, would have found the implications intriguing.

Essentially, it was suggested that quantum theory (specifically wave function collapse, or better, decoherence) results from inherent physical processes like those of video games. To wit: one only "sees" what one is looking at, and what is NOT being observed doesn't actually exist.

Intriguing, yes, but relevant, because if Moore's Law holds true, such super-computers will be possible within a very short time, meaning the concept of "Skynet" is very real.



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
Copyright © 1995- 2025 Subsim®
"Subsim" is a registered trademark, all rights reserved.