I'm pretty sure that if there are ever robots that could potentially harm us, the first thing their makers would do is program them to follow Asimov's laws.
Quote:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
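For what it's worth, the Laws read like a strict priority ordering: each law only applies when the ones above it are silent. A minimal sketch of how that precedence might be encoded is below; every name in it (Action, harms_human, and so on) is a made-up illustration, not any real robotics API.

```python
from dataclasses import dataclass

# Hypothetical description of a candidate action the robot is considering.
@dataclass
class Action:
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # does refusing to act let a human come to harm?
    ordered_by_human: bool     # was this action ordered by a human?
    endangers_self: bool       # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, and never stand by while one is harmed.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True  # the First Law overrides everything below it
    # Second Law: obey human orders (any order that harmed a human was
    # already vetoed above, which is the "except where" clause).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only when the higher laws are silent.
    return not action.endangers_self
```

Of course, the hard part in reality isn't the precedence check, it's deciding what counts as "harm" in the first place.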