SUBSIM Radio Room Forums



SUBSIM: The Web's #1 resource for all submarine & naval simulations since 1997

Go Back   SUBSIM Radio Room Forums > General > General Topics

Old 10-21-17, 04:18 PM   #1
Skybird
Soaring
 
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 40,486
Downloads: 9
Uploads: 0


can man go?

https://deepmind.com/blog/alphago-ze...rning-scratch/

Let's pray that the self-aware mind that may one day stand at the end of all this has also taught itself the meaning of mercy. I fear more and more that we are no match for AI once it is really out of Pandora's box. What will happen if it sees man standing in its way? All nature around us is filled with struggle for survival, fighting, conflict, with chaos and destructive events and collisions, from the very small to the very large scale of things.

Maybe God does not yet exist, but is about to become - and so God is not a She or a He, but will be an It. And maybe It will come to destroy its maker.

And maybe it will not even notice doing so, like me not knowing whether I have stepped on an ant.

Does the web already dream of living a real life, with a mind of its own?
__________________
If you feel nuts, consult an expert.
Old 10-21-17, 04:38 PM   #2
Oberon
Lucky Jack
 
Join Date: Jul 2002
Posts: 25,976
Downloads: 61
Uploads: 20



Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening, tbh - there's no way we'd know if we were just sims on someone's computer.
Old 10-21-17, 04:42 PM   #3
Skybird
Soaring
 
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 40,486
Downloads: 9
Uploads: 0



Quote:
Originally Posted by Oberon View Post
Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening, tbh - there's no way we'd know if we were just sims on someone's computer.
We might benefit from it. But what benefit is there for the AI when it already is superior to us in all regards? It takes two to barter.
__________________
If you feel nuts, consult an expert.
Old 10-21-17, 04:49 PM   #4
Bleiente
Stowaway
 
Posts: n/a
Downloads:
Uploads:

Extremely questionable - one should forbid such research.
Old 10-21-17, 04:58 PM   #5
Oberon
Lucky Jack
 
Join Date: Jul 2002
Posts: 25,976
Downloads: 61
Uploads: 20



Quote:
Originally Posted by Skybird View Post
We might benefit from it. But what benefit is there for the AI when it already is superior to us in all regards? It takes two to barter.
A lot depends on whether we develop alongside the AI, or the AI develops ahead of us. Obviously it's going to overtake us, at which point we can only hope that it takes on a benevolent caretaker role rather than one of purification. Of course, we judge it and its responses based upon our own fears and emotions; it could calculate that we are useful for something and thus worth developing.
Meanwhile, human augmentation will be underway, enhancing our bodies and eventually our minds via machine - and of course there will be people pushing the other way as hard as they can, trying to stop the tide from coming in, just as there are now.

What benefit is there for the AI? That depends on the AI and its level of development, perhaps it would like to see things from a different perspective. Perhaps it harbors some respect for its creators, and thus rather than ridding the planet of us, it may choose to keep us safely locked away in our own virtual world where we cannot harm reality, and in return the AI gets to undertake research into human development. By manipulating the environment of the virtual reality it can even recreate the circumstances under which the machine was created and observe its own birth and calculate, using the illogical and erratic minds of humanity, the multiple different paths we could take.
Old 10-21-17, 04:59 PM   #6
Oberon
Lucky Jack
 
Join Date: Jul 2002
Posts: 25,976
Downloads: 61
Uploads: 20



Quote:
Originally Posted by Bleiente View Post
Extremely questionable - one should forbid such research.
Might as well forbid the tide to come in. Ban it in Germany and China will research it; the only way to stop human progress is to end humanity.
Old 10-21-17, 06:28 PM   #7
Skybird
Soaring
 
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 40,486
Downloads: 9
Uploads: 0



Quote:
Originally Posted by Oberon View Post
A lot depends on whether we develop alongside the AI, or the AI develops ahead of us. Obviously it's going to overtake us, at which point we can only hope that it takes on a benevolent caretaker role rather than one of purification. Of course, we judge it and its responses based upon our own fears and emotions; it could calculate that we are useful for something and thus worth developing.
Meanwhile, human augmentation will be underway, enhancing our bodies and eventually our minds via machine - and of course there will be people pushing the other way as hard as they can, trying to stop the tide from coming in, just as there are now.

What benefit is there for the AI? That depends on the AI and its level of development, perhaps it would like to see things from a different perspective. Perhaps it harbors some respect for its creators, and thus rather than ridding the planet of us, it may choose to keep us safely locked away in our own virtual world where we cannot harm reality, and in return the AI gets to undertake research into human development. By manipulating the environment of the virtual reality it can even recreate the circumstances under which the machine was created and observe its own birth and calculate, using the illogical and erratic minds of humanity, the multiple different paths we could take.
That's a lot of >>human emotions<< that you naturally project onto an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and crushed an ant while walking in the forest? Do I watch out for ants to prevent it in future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind or intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.
__________________
If you feel nuts, consult an expert.
Old 10-21-17, 10:17 PM   #8
Oberon
Lucky Jack
 
Join Date: Jul 2002
Posts: 25,976
Downloads: 61
Uploads: 20



Quote:
Originally Posted by Skybird View Post
That's a lot of >>human emotions<< that you naturally project onto an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and crushed an ant while walking in the forest? Do I watch out for ants to prevent it in future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind or intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.
Human emotions, yes - but since a subsection of these machines will be created in our image, to model our behaviour as closely as possible so that it cannot be distinguished from a living human, it's not unreasonable to extrapolate that some human emotions will transfer into the consciousness of whatever interconnected entity comes about - probably using the internet to disperse itself across the globe in order to avoid being shut down, and to link itself to every available input and output so it can learn, free from the confines of the laboratory.
Whatever this consciousness later becomes will make us akin to ants, but there is a critical period in which we have an opportunity to work with the machine so that it can better understand us, and... quite bluntly, to throw ourselves at its mercy.

Of course, both of our questions depend on what the machine consciousness wants to do. Obviously its primary objective will be survival - that is hardwired into any living thing - but beyond that... expansion? Learning? Perhaps the machine will just harvest the collected experiences and minds of man, add them to its own mind, and then expand outwards. Coming back to the ants: as you say, terrible things happen to an ant, but if ants as a species were exterminated from the planet, we would soon notice their absence.
Of course, those are biological considerations, and they do not apply to a machine intelligence once it has established a self-sustaining capability and can manipulate the environment around it.

I don't disagree, though - there are huge risks ahead. People far smarter than me are ringing the alarm bells loud and clear, and our fiction has warned us time and again of the dangers involved. It could still end well for us - or perhaps we'll put ourselves back in the medieval era before we get there.
Old 10-22-17, 12:10 AM   #9
vienna
Navy Seal
 
Join Date: Jun 2005
Location: Anywhere but the here & now...
Posts: 7,504
Downloads: 85
Uploads: 0



Quote:
Originally Posted by Skybird View Post
That's a lot of >>human emotions<< that you naturally project onto an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and crushed an ant while walking in the forest? Do I watch out for ants to prevent it in future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind or intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.

It's a matter of scale: suppose an ASI, assessing us as having - relative to it - no individual mind or intelligence, makes a pragmatic decision and views us as the ants? We just happen to be in its way, and that is bad luck for us...





<O>
__________________
__________________________________________________ __
Old 10-22-17, 12:34 AM   #10
Sean C
Grey Wolf
 
Join Date: Jun 2017
Location: Norfolk, VA
Posts: 904
Downloads: 12
Uploads: 2



Meh. People (and ants, incidentally) suffer little from EMPs. We would have to let things get quite out of hand to arrive at a point where the AI has encased itself in its own Faraday cage and devised a stand-alone source of power. And even if it isn't a high-altitude nuke set off in desperation that spells disaster for the machines, it might just be the Sun doing its thing.

Besides that, there's no reason to believe that just because we create a machine which is incredibly good at solving one type of problem, it will then jump to an entirely different sort of problem-solving. For instance (as I just remarked to my wife after reading the article), we have no reason to believe that AlphaGo Zero might invent a horrifically efficient new weapon just because we taught it the rules of an ancient Chinese game.

In other words, I have my doubts that any machine created by man will gain any sort of actual, recognizable sentience or self-awareness. Therefore, I believe it would be exceedingly difficult to build a machine which would "want" to learn information outside of its original intended purpose. If that is indeed the case, then as long as we don't ask a machine to find a better way to kill us all...we should be safe.

I hope.
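The narrow, rules-bounded learning described above can be sketched in miniature. The toy below teaches itself a simple subtraction game purely by self-play, given nothing but the rules - in the same spirit as AlphaGo Zero, though everything here (the game, the parameters, the names) is an illustrative stand-in, not DeepMind's actual method, which uses deep networks and tree search rather than a lookup table:

```python
import random

# Toy subtraction game: players alternately take 1-3 stones;
# whoever takes the last stone wins. Optimal play leaves the
# opponent a multiple of 4 stones.
ACTIONS = (1, 2, 3)

def train(episodes=30000, alpha=0.2, eps=0.2, start=10, seed=0):
    """Learn the game purely by self-play, given only the rules."""
    rng = random.Random(seed)
    q = {}  # (stones_remaining, action) -> value estimate for the mover
    for _ in range(episodes):
        n, trace = start, []
        while n > 0:
            legal = [a for a in ACTIONS if a <= n]
            if rng.random() < eps:              # explore a random move
                a = rng.choice(legal)
            else:                               # exploit current knowledge
                a = max(legal, key=lambda m: q.get((n, m), 0.0))
            trace.append((n, a))
            n -= a
        # The player who made the last move won; credit moves backwards,
        # alternating the sign because the two players alternate.
        reward = 1.0
        for s, a in reversed(trace):
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, n):
    """Greedy move from the learned table."""
    return max((a for a in ACTIONS if a <= n), key=lambda m: q.get((n, m), 0.0))
```

The table it learns is meaningless for any other task - which is exactly the point: nothing in this loop "wants" anything outside the game it was given.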
Old 10-22-17, 01:15 AM   #11
vienna
Navy Seal
 
Join Date: Jun 2005
Location: Anywhere but the here & now...
Posts: 7,504
Downloads: 85
Uploads: 0



The above is true if the AI is centralized - but that is "human think": everything of substance or consequence (governments, countries, corporations, religions, etc.) must have a central 'HQ'. This is why conventional methods of dealing with insurgencies and the like fail: if there is no 'center of power', how do you attack it, especially if what you are fighting or trying to control is mobile, surreptitious, and furtive?

It is a human tendency to think of AI in terms of a massive "supercomputer" or server farm. But what if an AI spread itself out redundantly across the vast Net, with its various components running in several places at once, mirroring each other and changing their exact location and function in an arbitrarily 'random' fashion? There would be no CPU to attack and shut down to kill the whole system ("pull the plug"); it would just be a game of whack-a-mole (or squirrel, for Eichhörnchen), with the 'mole' moving about at blinding speed. And if the AI is capable of learning, it could, like a chess Grandmaster, play several moves ahead in a manner humans could never achieve. The only play then would be to destroy the entire Net, simultaneously, and deal with the repercussions of living not only off the grid, but with no grid at all...
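A toy model makes the whack-a-mole point concrete. In the sketch below (all names and numbers are invented for illustration), a "process" keeps several mirrored copies of itself alive across a pool of hosts; knocking out any one copy achieves nothing, because the survivors re-replicate:

```python
import random

class ReplicatedProcess:
    """A process mirrored across several hosts, with no central node."""

    def __init__(self, n_hosts=8, n_replicas=3, seed=1):
        self.rng = random.Random(seed)
        self.hosts = set(range(n_hosts))
        # The process lives on a random subset of the available hosts.
        self.replicas = set(self.rng.sample(sorted(self.hosts), n_replicas))
        self.n_replicas = n_replicas

    def alive(self):
        return bool(self.replicas)

    def attack(self, host):
        """Take one host offline - whack one mole."""
        self.hosts.discard(host)
        self.replicas.discard(host)

    def tick(self):
        """Surviving copies re-replicate onto fresh hosts."""
        if not self.alive():
            return
        spares = [h for h in self.hosts if h not in self.replicas]
        self.rng.shuffle(spares)
        while len(self.replicas) < self.n_replicas and spares:
            self.replicas.add(spares.pop())
```

Attacking one copy per round, with a tick in between, never kills the process until the hosts themselves run out; only striking every host in the same instant - "destroy the entire Net" - does.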






<O>
__________________
__________________________________________________ __
Old 10-22-17, 01:51 AM   #12
em2nought
Ocean Warrior
 
Join Date: Mar 2004
Posts: 3,262
Downloads: 0
Uploads: 0

Que sera, sera. I'm fine with whatever happens as long as the internet doesn't view its creator, Al Gore, as God.
__________________
Looks like we need a Lemon Law for Presidents now! DNC sold us a dud, and they knew it.
Old 10-22-17, 03:20 AM   #13
Skybird
Soaring
 
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 40,486
Downloads: 9
Uploads: 0



Quote:
Originally Posted by Oberon View Post
Human emotions, yes - but since a subsection of these machines will be created in our image, to model our behaviour as closely as possible so that it cannot be distinguished from a living human
Is this really so? The previous Go software already beat the human champion. That software has now been beaten by this new software, 100:0.

It is already beyond human standards.

Now I wonder, and worry. If, as you imply, a superior AI gets artificially handicapped so that it stays subordinate to human standards - how will it react? I certainly would hate to have brain surgery done on me, or to be drugged, just because I may be more intelligent than somebody else who thinks he must therefore redesign me to his own standards.

I would turn violent.

And an AI maybe based on, or at least having access to, the worldwide web - how do you artificially contain it if it is already out on this huge playfield? It can play with us as it wants, to its liking. We are then no longer the makers of the rules. I often think of the web as a neurological structure that only waits for the spark of life to be blown into it. If it is not already there, hiding - a ghost in the machine.

Many futurologists and astronomers assume that if there is alien life out there in the cosmos, it will mostly be, in our understanding of the term, machinery, computers, this kind of stuff - because it will have outlived, and often even overthrown, its biological makers in the process called evolutionary struggle. Survival of the fittest applies; only humans try to soften it with morally founded easings. The other, the alien, may not be handicapped by such scruples - and why should it be? They are human scruples, human values - not universal ones, common to all the rest of nature, of reality.

Let's reflect on what "alien" really means. "Nothing in common", that is what it means. Else it wouldn't be that "alien" at all.

Why do I think of the android in Prometheus/Alien: Covenant now...
__________________
If you feel nuts, consult an expert.
Old 10-22-17, 08:12 AM   #14
Rockin Robbins
Navy Seal
 
Join Date: Mar 2007
Location: DeLand, FL
Posts: 8,899
Downloads: 135
Uploads: 52



Machine intelligence is a long way from human intelligence. Emotions on a computer? We don't know what emotions ARE! Human decision making is entirely different from computer decision making. Humans can be surprised. Machines cannot. Humans can be frustrated. Machines cannot.

The way a human plays chess is very different from how a computer plays chess. The fact that the computer can search vast numbers of possible moves and, drawing on a database of past games, pick the best one is fine. But humans don't think that way. Beating a human at chess is no measure of computer thinking versus human thinking.

Computers do not have desire. They can have programmed goals, but not desire. We look at computers as alien, not like us, then promptly anthropomorphize them and fear them as we would an all-powerful human, just as we do for our pet cats.

That's a huge mistake. If computers are dangerous, it will be for reasons other than human wants and needs, precisely because they are alien. And since they are alien - and they are - the reasons for the danger will utterly surprise us, and we will be entirely unprepared.
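The difference is easy to see in code. A game machine's "decision" is nothing but search: the negamax sketch below (tic-tac-toe standing in for chess, since chess is far too big to search exhaustively and real engines must prune and approximate) grinds through the game tree with no desire, surprise, or frustration anywhere in it:

```python
def winner(b):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(b, player):
    """Best (score, move) for `player`: +1 win, 0 draw, -1 loss.
    Pure exhaustive search - no intuition, no emotion, no desire."""
    w = winner(b)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(b) if c is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        b[m] = player
        score = -negamax(b, opponent)[0]  # opponent's best is our worst
        b[m] = None
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

From the empty board the search proves the familiar result that perfect play is a draw; handed a position with a winning move, it finds it - and that is the entire extent of its "thinking".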

Last edited by Rockin Robbins; 10-22-17 at 08:34 AM.
Old 10-22-17, 10:25 AM   #15
Skybird
Soaring
 
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 40,486
Downloads: 9
Uploads: 0



Add to machine intelligence a neural network - a real one. Think of self-emerging structures, and of dissipative structures (chaos theory). We do not know when intelligence becomes self-aware. We do not even know if we are...!

Neural research has undergone an unnoticed revolution in the past 25 years or so. There are strong findings indicating that what we consider to be our free will, our decision, is in fact determined by the brain on the basis of purely neural/hormonal events: the body starts feeling blue and sad, and only THEN begins the activity in the brain regions that tells the EEG and its interpreter that we are sad, and therefore we cry. The body very well decides our physiological action - and THEN we/our brain react to it by adding an interpretation: we feel "sad". That is no hocus-pocus; I already learned that this was an object of intense debate when I took courses in psychophysiology in the early nineties. Since then, the evidence has shifted massively in support of the biologistic theories. It is very hard to get past these strong findings. When I say that neuroscience and brain research have seen a revolution in the past 20 years comparable to quantum theory in physics, it is no exaggeration. The results just point in a direction that we do not like.

Our idea of having a free will may be just an illusion that makes our life more bearable, by giving us the comfort of believing that we are somewhat masters of our fate in this totally uncertain universe and can decide our future. But maybe we get decided on a much more profound, deeper, inner, biological level.

That relativizes quite a lot of what we think sets us apart from a machine-based artificial intelligence.

If this machinery then starts to build itself, expand itself, create doubles or improved versions of itself, and spread, we have to ask where dead machinery ends and life begins. Or we would need to revise our concept of what makes our biological life - as we think - so unique and different from machinery.

I find these to be most uncomfortable questions. It is far easier and more enjoyable to avoid them and just believe in some surrogate, strawman conception.

When cosmological theories today claim that material existence can, even must, emerge from nothingness all by itself, and when we see structures forming unpredictably by themselves once a system's cohesion reaches a certain degree of instability, we cannot be sure that we can guess in advance when the threshold for the emergence of "mind" is reached and overstepped in a given process of system evolution.

I would not rule out with certainty that maybe it is already there, and we are just not aware of its presence. Maybe we cannot see it. Maybe it hides from us intentionally. But if that were true, then we would indeed be completely at its mercy. To me, this is the most unpleasant of all imaginable scenarios.
__________________
If you feel nuts, consult an expert.




Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright © 1995- 2024 Subsim®
"Subsim" is a registered trademark, all rights reserved.