Thread: Skynet
Old 06-10-10, 02:37 PM   #10
Dowly
Lucky Jack
 
Join Date: Apr 2005
Location: Finland
Posts: 25,055



I'm pretty sure that if, in the future, there are robots that could potentially harm us, the first thing their makers would do is program them to follow Asimov's laws (a rough sketch of how that rule ordering might look is below the quote).

Quote:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
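Just to make the ordering explicit: a toy sketch of how those three laws could be encoded as a priority filter over candidate actions. This is purely my own illustration, with made-up names; real robot control would be nothing this simple.

Code:
  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class Action:
      name: str
      harms_human: bool = False     # would injure a human, or allow one to come to harm through inaction
      obeys_order: bool = False     # carries out an order given by a human
      preserves_self: bool = False  # keeps the robot intact

  def choose(candidates: List[Action]) -> Optional[Action]:
      """Pick an action by applying the three laws in strict priority order."""
      # First Law: discard anything that harms a human; this veto is absolute.
      lawful = [a for a in candidates if not a.harms_human]
      if not lawful:
          return None  # no permissible action at all
      # Second Law: among lawful actions, prefer those that obey human orders.
      obeying = [a for a in lawful if a.obeys_order]
      pool = obeying or lawful
      # Third Law: only now break ties in favour of self-preservation.
      preserving = [a for a in pool if a.preserves_self]
      return (preserving or pool)[0]

  if __name__ == "__main__":
      options = [
          Action("ignore the order and pull the human clear"),
          # standing still is an order, but the resulting inaction lets the human come to harm
          Action("stand still as ordered", obeys_order=True, harms_human=True),
      ]
      print(choose(options).name)  # -> "ignore the order and pull the human clear"

The point of the sketch is just the precedence: the First Law filter runs before obedience is even considered, and self-preservation only ever breaks ties between actions the higher laws already allow.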