View Full Version : can man go?


Skybird
10-21-17, 04:18 PM
https://deepmind.com/blog/alphago-zero-learning-scratch/

Let's pray that the self-aware mind that may one day stand at the end of all this also taught itself the meaning of mercy. I more and more fear we are no match for AI once it is really out of Pandora's box. What will happen if it sees man standing in its way? All nature around us is filled with struggle for survival, fighting, conflict, with chaos and destructive events and collisions, from the very small to the very huge scale of things.

Maybe God does not yet exist but is about to become, and so God is not a She or a He, but will be an It. And maybe It comes to destroy its maker.

And maybe it will not even notice, like me not knowing whether I have stepped on an ant.

Does the web already dream a dream of living a real life and mind of its own?

Oberon
10-21-17, 04:38 PM
Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening tbh, there's no way we'd know if we were just sims on someone's computer.

Skybird
10-21-17, 04:42 PM
Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening tbh, there's no way we'd know if we were just sims on someone's computer.
We might benefit from it. But what benefit is there for the AI when it already is superior to us in all regards? It takes two to barter.

Bleiente
10-21-17, 04:49 PM
Extremely questionable - one should forbid such research.

Oberon
10-21-17, 04:58 PM
We might benefit from it. But what benefit is there for the AI when it already is superior to us in all regards? It takes two to barter.

A lot depends on whether we develop alongside the AI, or the AI develops ahead of us. Obviously it's going to overtake us, at which point we can only hope that it takes a more benevolent caretaker role rather than purification. Of course, we judge it and its responses based upon our own fears and emotions, it could calculate that we are useful for something and thus worth development.
Meanwhile at the same time there will be human augmentation underway, enhancing our own body and eventually our mind via machine, and of course, there will be people pushing the other way as hard as they can, trying to stop the tide from coming in, just as there are now.

What benefit is there for the AI? That depends on the AI and its level of development, perhaps it would like to see things from a different perspective. Perhaps it harbors some respect for its creators, and thus rather than ridding the planet of us, it may choose to keep us safely locked away in our own virtual world where we cannot harm reality, and in return the AI gets to undertake research into human development. By manipulating the environment of the virtual reality it can even recreate the circumstances under which the machine was created and observe its own birth and calculate, using the illogical and erratic minds of humanity, the multiple different paths we could take.

Oberon
10-21-17, 04:59 PM
Extremely questionable - one should forbid such research.

Might as well forbid the tide to come in. Ban it in Germany and China will research it, the only way to stop human progress is to end humanity.

Skybird
10-21-17, 06:28 PM
A lot depends on whether we develop alongside the AI, or the AI develops ahead of us. Obviously it's going to overtake us, at which point we can only hope that it takes a more benevolent caretaker role rather than purification. Of course, we judge it and its responses based upon our own fears and emotions, it could calculate that we are useful for something and thus worth development.
Meanwhile at the same time there will be human augmentation underway, enhancing our own body and eventually our mind via machine, and of course, there will be people pushing the other way as hard as they can, trying to stop the tide from coming in, just as there are now.

What benefit is there for the AI? That depends on the AI and its level of development, perhaps it would like to see things from a different perspective. Perhaps it harbors some respect for its creators, and thus rather than ridding the planet of us, it may choose to keep us safely locked away in our own virtual world where we cannot harm reality, and in return the AI gets to undertake research into human development. By manipulating the environment of the virtual reality it can even recreate the circumstances under which the machine was created and observe its own birth and calculate, using the illogical and erratic minds of humanity, the multiple different paths we could take.
That's a lot of >>human emotions<< that you naturally project into an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and smashed an ant while walking in the forest? Do I watch out for ants to prevent it in all future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind and intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.

Oberon
10-21-17, 10:17 PM
That's a lot of >>human emotions<< that you naturally project into an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and smashed an ant while walking in the forest? Do I watch out for ants to prevent it in all future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind and intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.

Human emotions, yes, but since a subsection of these machines will be created in our image, modelling our behaviour as closely as possible so that they cannot be distinguished from a living human, it's not unreasonable to extrapolate that some human emotions will transfer into the consciousness of whatever interconnected entity comes about, probably using the internet to disperse itself across the globe in order to avoid being shut down, and to link itself to every available input and output so it can learn, free from the confines of the laboratory.
Whatever this consciousness later becomes will make us akin to ants, but there's a critical period where we have an opportunity to work with the input of the machine, so that it can better understand us, and...quite bluntly, throw ourselves at its mercy.

Of course, both of our questions depend on what the machine consciousness wants to do; obviously its primary objective will be survival. That's hardwired into any living thing, but beyond that...expansion? Learning? Perhaps the machine will just harvest the collected experiences and minds of man, add them to its own mind and then expand outwards. Coming back to the ants: of course, as you say, terrible things happen to an ant, but if ants as a species were exterminated from the planet, we would soon notice their absence.
Of course, these are biological things, and such things do not apply to machine intelligence once it has established a self-sustaining capability and is capable of manipulating the environment around it.

I don't disagree though, there are some massive risks ahead; people who are far smarter than me are ringing the alarm bells loud and clear, and our fiction has warned us time and again of the dangers involved. It could end well for us though, or perhaps we'll put ourselves back to the medieval era before we get there. :03:

vienna
10-22-17, 12:10 AM
That's a lot of >>human emotions<< that you naturally project into an >>alien<< intelligence system.

Do I worry about ethical priorities when realising that I have stepped on and smashed an ant while walking in the forest? Do I watch out for ants to prevent it in all future? - No. I shrug my shoulders and go on with my business. I expect ants to have no individual mind and intelligence that I must care for.

I most likely do not even notice that an ant was there. It happened to be in my way, and that was bad luck for it. Sh!t happens - to ants.


It's a matter of scale: suppose an ASI makes a pragmatic decision based on its assessment of us as, relative to it, having no individual mind and intelligence and views us as the ants? We just happened to be in its way and that is bad luck for us...





<O>

Sean C
10-22-17, 12:34 AM
Meh. People (and ants, incidentally) suffer little from EMPs. We would have to let things get quite out of hand to arrive at a point where the AI has encased itself in its own Faraday cage and devised a stand-alone source of power. And even if it wasn't a high-altitude nuke set off in desperation that spelled disaster for the machines, it might just be the Sun doing its thing.

Besides that, there's no reason to believe that just because we create a machine which is incredibly good at solving one type of problem that it will then jump to another, entirely different sort of problem solving. For instance (as I just remarked to my wife after reading the article), we have no reason to believe that AlphaGo Zero might invent a horrifically efficient new weapon just because we told it the rules of an ancient Chinese game.

In other words, I have my doubts that any machine created by man will gain any sort of actual, recognizable sentience or self-awareness. Therefore, I believe it would be exceedingly difficult to build a machine which would "want" to learn information outside of its original intended purpose. If that is indeed the case, then as long as we don't ask a machine to find a better way to kill us all...we should be safe.

I hope. ;)

vienna
10-22-17, 01:15 AM
The above is true if the AI is centralized; this is "human think": everything of substance or consequence, e.g., governments, countries, corporations, religions, etc., must have a central 'HQ'; this is why conventional methods of dealing with insurgencies, and the like, fail: if there is no 'center of power', how do you attack it, especially if what you are fighting or trying to control is mobile and/or surreptitious and furtive? It is a human tendency to think of AI in terms of a massive "supercomputer" or server farm; what if an AI spread itself out in a redundant fashion across the vast Net, with its various components running in several places, running in mirror fashion and changing the exact location and function of its components in an arbitrarily 'random' fashion? There would be no CPU to attack and shut down, killing the whole system ("pull the plug"); it would just be a game of whack-a-mole (or squirrel, for Eichhörnchen), with the 'mole' moving about at blinding speed; and, if the AI is capable of learning, it could, like a chess Grandmaster, play several moves ahead, in a manner humans could never achieve. The only play then would be to destroy the entire Net, simultaneously, and deal with the repercussions of having to live not only off the grid, but also with no grid at all...
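The whack-a-mole idea above can be sketched in a few lines of Python. This is purely a toy illustration (all class and variable names are my own invention, not any real system): the system's state is mirrored on every host, and the "executive" role simply hops to a random survivor whenever its current host is shut down, so no single plug-pull kills it.

```python
import random

class Node:
    """One host on the 'net' (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.replica = None            # mirrored copy of the system's state

class DistributedSystem:
    """Toy model: full state is mirrored on every node, and the
    'executive' role relocates to a random surviving host, so
    shutting down any single machine never kills the whole system."""
    def __init__(self, n_nodes, state):
        self.nodes = [Node(f"node{i}") for i in range(n_nodes)]
        for node in self.nodes:
            node.replica = dict(state)         # every host carries a mirror
        self.executive = random.choice(self.nodes)

    def kill(self, node):
        """'Pull the plug' on one host (the whack-a-mole attack)."""
        node.alive = False
        if node is self.executive:
            self._relocate()

    def _relocate(self):
        survivors = [n for n in self.nodes if n.alive]
        if survivors:                          # the role simply moves on
            self.executive = random.choice(survivors)

    def state(self):
        """The system's state survives as long as any host does."""
        return self.executive.replica if self.executive.alive else None
```

Killing four of five hosts one by one leaves the state intact; only taking down the entire "grid" at once (every node) finally returns nothing.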






<O>

em2nought
10-22-17, 01:51 AM
Que sera, sera. I'm fine with whatever happens as long as the internet doesn't view its creator, Al Gore, as God. :03:

Skybird
10-22-17, 03:20 AM
Human emotions, yes, but since a subsection of these machines will be created in our image, to model our behaviour as close as possible so that it cannot be distinguished from a living human
Is this really so? The last Go software already beat a human champion. That software was in turn beaten by this new software 100:0.

It already is beyond human standards.

Now I wonder, and worry. If, as you imply, a superior AI gets artificially handicapped so that it stays subordinate to human standards - how will it react? I certainly would hate to have brain surgery done on me, or to be drugged, because I may be more intelligent than somebody else who thinks he must therefore redesign me to his own standards.

I would turn violent.

And an AI based on, or at least with access to, the worldwide web - how to artificially contain it if it already is out on this huge playfield? It can play with us as it wants, to its liking. We are then no longer the makers of the rules. I often think of the web as a neurological structure that only waits for the spark of life to be blown into it. If it is not already there, hiding, a ghost in the machine.

Many futurologists and astronomers assume that if there is alien life out there in the cosmos, it mostly will be, in our understanding of the term, machinery, computers, this kind of stuff. Because it will have outlived and often even overthrown its biological makers in the process called evolutionary struggle. Survival of the fittest applies. Just that only humans try to soften it with morally founded easings. The other, the alien, may not be handicapped by such scruples - and why should it be? They are human scruples, human values - not universal ones, common to the rest of nature, of reality.

Let's reflect on what "alien" really means. "Nothing in common", that is what it means. Else it wouldn't be that "alien" at all.

Why do I think of the android in Prometheus/Alien: Covenant now...

Rockin Robbins
10-22-17, 08:12 AM
Machine intelligence is a long way from human intelligence. Emotions on a computer? We don't know what emotions ARE! Human decision making is entirely different from computer decision making. Humans can be surprised. Machines cannot. Humans can be frustrated. Machines cannot.

The way a human plays chess is very different from how a computer plays chess. The fact that the computer can search vast numbers of possible moves and, drawing on a database of past games, pick the best one is fine. But humans don't think that way. Beating a human at chess is no measure of computer thinking vs human thinking.
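The kind of play described above can be illustrated with a toy example (a simple Nim pile rather than chess, since a full chess search won't fit here; the function names are mine). The machine "plays" purely by exhaustively scoring every continuation; there is no intuition anywhere in it:

```python
def best_move(stones, take_options=(1, 2)):
    """Brute-force search of a toy Nim game: players alternately take
    1 or 2 stones, and whoever takes the last stone wins. The machine
    simply scores every possible continuation."""
    def wins(n):
        # True if the player to move can force a win from n stones
        return any(m <= n and not wins(n - m) for m in take_options)

    for m in take_options:
        if m <= stones and not wins(stones - m):
            return m              # leave the opponent in a losing position
    return min(take_options)      # every move loses; take the minimum
```

With options {1, 2}, any position that is a multiple of 3 is lost for the player to move, so the "computer" always moves to leave a multiple of 3 - perfect play achieved by nothing more than mechanical enumeration.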

Computers do not have desire. They can have programmed goals, but not desire. We look at computers as alien, not like us, then promptly anthropomorphize them and fear them as we would an all-powerful human, just as we do for our pet cats.

That's a huge mistake. If computers are dangerous, it will be for reasons other than their human wants and needs, because they are alien. If they are alien, and they are, reasons for danger will utterly surprise us and we will be entirely unprepared.

Skybird
10-22-17, 10:25 AM
Add to machine intelligence a neural network. A real one. Think of self-emerging structures, and dissipative structures (chaos theory). We do not know when intelligence becomes self-aware. We do not even know if we are...! Neural research has undergone an unnoticed revolution in the past 25 years or so. There are strong findings indicating that what we consider to be our free will or our decision is in fact determined by the brain on the basis of purely neural/hormonal events that make the body feel blue and sad - and THEN the activity starts in the brain regions that tell the EEG and its interpreter that we are sad, and therefore we cry. The body very well decides our physiological action - and THEN we/our brain react to it by adding an interpretation: we feel "sad". That is no hocus-pocus; I learned that this was already the object of intense debate when I took courses in psychophysiology in the early nineties. Since then, the evidence has massively shifted in support of the biologistic theories. It is very hard to get past these strong findings. When I say neural science and brain research have seen a revolution in the past 20 years, comparable to quantum theory in physics, it is no exaggeration. The results just point in a direction that we do not like.

Our idea of having a free will may be just an illusion to make our life more bearable, giving us the comfort of believing that we are somewhat masters of our fate in this totally uncertain universe and can decide our future. But maybe we get decided on a much more profound, deeper, inner, biological level.

That relativizes quite a lot of what we think may set us apart from a machine-based artificial intelligence.

If this machinery then starts to build itself, expand itself, create doubles or improved versions of itself, and spread, we have to ask where dead machinery ends and life begins. Or we would need to revise our concept of what makes our biological life so - as we think - unique and different from machinery.

I find these to be most uncomfortable questions. It is far easier and more enjoyable to avoid them and just believe in some surrogate, strawman conception.

When cosmological theories today claim that material existence can, even must, emerge from nothingness all by itself, and when we see structures forming up by themselves, unpredictably, when systems see their cohesive status reaching a certain amount of instability, we cannot be sure that we can guess in advance when the threshold for the emergence of "mind" is reached and overstepped in a given process of system-related evolution.

I would not even rule out with certainty that maybe it already is there, and we are just not aware of its presence. Maybe we cannot see it. Maybe it hides from us intentionally. But if that were true, then we would indeed be completely at its mercy. To me, this is the most unpleasant scenario of all imaginable.

Sean C
10-22-17, 09:51 PM
The above is true if the AI is centralized; this is "human think": everything of substance or consequence, e.g., governments, countries, corporations, religions, etc., must have a central 'HQ' [...] It is a human tendency to think of AI in terms of a massive "supercomputer" or server farm; what if an AI spread itself out in a redundant fashion across the vast Net, with its various components running in several places, running in mirror fashion and changing the exact location and function of its components in an arbitrarily 'random' fashion? There would be no CPU to attack and shut down killing the whole system ("pull the plug") [...] The only play then would be to destroy the entire Net, simultaneously, and deal with the repercussions of having to live not only off the grid, but also with no grid at all...

Ah, but that's just it: the very existence of "the grid" is itself a weak point. Even a small nuclear EMP or CME which only directly affected a small area could potentially take down huge portions of the power grid. An extremely large event (or multiple, scattered events) could theoretically take out all of the electronics on Earth. Even if some devices in the affected areas were shielded, without power they wouldn't operate for long. These events are also indiscriminate. A targeted attack is not even an option except in the broadest geographical sense.

On the other hand, you have computer viruses. A virus injected into a machine which is connected to the net containing the AI would necessarily have access to all of the areas the AI has access to. Firewalls and anti-virus measures notwithstanding, we humans are pretty crafty. After all, we did invent the idea of a "Trojan horse". I must slightly disagree with Rockin Robbins' assertion that computers cannot be "surprised". I'd be willing to bet that virtually anyone who has owned/used a computer has experienced a time when something occurred that the computer simply could not deal with. In extreme cases, you get the computer version of passing out: the BSOD. :)

vienna
10-23-17, 02:26 AM
Ah, but that's just it: the very existence of "the grid" is itself a weak point. Even a small nuclear EMP or CME which only directly affected a small area could potentially take down huge portions of the power grid. An extremely large event (or multiple, scattered events) could theoretically take out all of the electronics on Earth. Even if some devices in the affected areas were shielded, without power they wouldn't operate for long. These events are also indiscriminate. A targeted attack is not even an option except in the broadest geographical sense.

...




So, basically, you are echoing exactly what I stated: without a centralized target, the only recourse is to take down the whole structure it inhabits and the structure around it. I didn't say an AI that dispersed itself in a wide net, with multiple, mobile command redundancies and no central point of attack, is impervious to an EMP, etc.; it is impervious to a conventional, centrally targeted attack. Yes, you could defeat it by taking out the entire grid, just as you could kill a cancer by killing its host body; it's just that the end result is less than satisfactory...

On the other hand, you have computer viruses. A virus injected into a machine which is connected to the net containing the AI would necessarily have access to all of the areas the AI has access to. Firewalls and anti-virus measures notwithstanding, we humans are pretty crafty. After all, we did invent the idea of a "Trojan horse". I must slightly disagree with Rockin Robbins' assertion that computers cannot be "surprised". I'd be willing to bet that virtually anyone who has owned/used a computer has experienced a time when something occurred that the computer simply could not deal with. In extreme cases, you get the computer version of passing out: the BSOD. :)

Interesting concept. But what if the AI had superior diagnostics and the ability to self-anneal? Say the system is set up so that the current executive command node(s) are monitored by the redundant command nodes: a virus is injected into the current executive node(s) and the monitoring nodes detect the intrusion; the infected executive is severed (much like a reptile shedding its tail in the jaws of a predator) and command is passed on to a known unaffected node; perhaps the infected node is also quarantined for study by an independent subroutine and used to find an 'antidote' for the attack. Sound farfetched? This is the basic procedure of most current antivirus software today...
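The sever-and-promote scheme just described can be sketched as a toy in a few lines (the class and node names are my own illustration, not any real antivirus internals): monitoring nodes report an infection, the infected executive is cut off into quarantine for later study, and a known-clean redundant node takes command.

```python
class SelfHealingCluster:
    """Toy sketch of failover-and-quarantine: redundant nodes watch
    the executive; on intrusion they sever it, keep it for analysis,
    and promote a known-clean node to take command."""
    def __init__(self, node_names):
        nodes = list(node_names)
        self.executive = nodes[0]     # current executive command node
        self.clean = nodes[1:]        # redundant, known-good nodes
        self.quarantine = []          # severed nodes kept for study

    def report_infection(self, node):
        """Called by the monitoring nodes when a virus is detected."""
        if node == self.executive:
            self.quarantine.append(node)      # shed the infected 'tail'
            # promote the next known-clean node, if any remain
            self.executive = self.clean.pop(0) if self.clean else None
```

The system only dies when the attacker manages to infect executives faster than clean replacements exist, which is exactly the race the quarantine-and-antidote step is meant to win.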

As far as the concepts of free will, compassion, emotion, etc.: these are not necessary for self-awareness or intelligence. To a primitive degree, almost any computer with a free-standing OS is 'aware' of its existence, at least in the sense that it monitors its state through diagnostics, etc.; it 'knows' it's here; it may not know why, in an existential sense, or know it is 'mortal', but it does know it is 'here' (oddly making it, perhaps, philosophically quintessential). The need to know the whys and wherefores of existence is a human need born out of the need for self-validation of the 'purpose' of one's existence. AIs aren't really concerned with the 'niceties' of navel gazing; they exist to perform tasks within their parameters and, if so equipped, to learn from their mistakes (and ours) and make corrections as needed and as deemed necessary for the efficient operation of the system and the completion of its task(s). The idea that an AI must be fully "human", in intelligence and in 'humanity', in order to be truly an AI is a human conceit, akin to the way humanity has sought through the millennia to give an all-seeing, all-knowing, all-powerful entity a "human face" and "character"; it reassures us when the entity is in our 'likeness'. The truth of the matter is, AIs will never be able to fully and independently achieve the complex level of human emotion and compassion; there are just too many varied 'grey areas' in the human state. The truth of the matter also is, AIs don't really need to have that degree of 'humanness'; they just have to know their task(s) and have the ability to perform...






<O>

Rockin Robbins
10-23-17, 03:00 AM
AIs don't really need to have that degree of 'humanness'; they just have to know their task(s) and have the ability to perform...
<O>

But here we are anthropomorphizing them again, injecting our fears, desires, goals, etc. into a box of microchips. We play Silent Hunter. The computer moves pixels around and performs mathematical computations with no awareness of the game, while we play the game, completely unaware of the pixel manipulations and mathematical calculations. The machine has no concept of the game at all. It just executes instructions with no sense of their meaning, because the software told it to.

BSOD? It's the same thing: a checklist of qualifications which when satisfied jumps the execution to the BSOD subroutine. The computer has no awareness about the process at all. It follows instructions and mindlessly executes them, unaware that it has "died."

The trick is to make this alien manipulation of electrons look like intelligence. This thread is evidence that we've drunk the Kool-Aid and are now basking in our great "achievement." Unfortunately that achievement is only a hallucination, or maybe a nightmare. We give ourselves altogether too much credit.

Reminds me of the questions about which mods for Silent Hunter are the "most realistic." There IS NO REALISM in any Silent Hunter game. Until the food goes bad six weeks out and you have to eat it, we don't have to worry about realism in Silent Hunter either.

vienna
10-23-17, 04:20 AM
...

Until the food goes bad six weeks out and you have to eat it, we don't have to worry about realism in Silent Hunter either.





Does it count if one has a binge play marathon and mindlessly nibbles on cold, stale pizza?...






<O>

Sean C
10-23-17, 05:23 AM
So, basically, you are echoing exactly what I stated...

I was simply expanding on the first paragraph of my previous statement in the context of your response. The only agreement was with the very last sentence of your response...an idea I thought would be self-evident in my original statement. The rest of your response sounded (to me) like an argument against the efficacy of such a solution, ignoring the fact that such an act would necessarily take down the entire network. Of course, I could have misinterpreted what you wrote.

Yes, you could defeat it by taking out the entire grid, just as you could kill a cancer by killing its host body; it's just that the end result is less than satisfactory...

I believe a slightly better analogy would be something like: ridding a body of a cancer by cutting off the limb containing the tumor. Sure, it's an extreme (but effective) treatment, but we would still be here...we'd just have to adapt a lot. That's why I keep my slide rule and celestial navigation skills sharp: I figure I'll be quite in demand after the AI/electronics apocalypse. :D

Interesting concept. But what if the AI had superior diagnostics and the ability to self-anneal?...

Perhaps someone could design a virus which mimicked the AI's anti-virus routines and tricked the AI into believing that the virus was actually a patch to protect against some future attack, at which point it would be installed everywhere to be activated at a later date. I don't know...my knowledge of programming is too limited and the "what if" game is too open-ended for me to give a definite response. In any case, if the virus route didn't work, the extreme treatment option above would still be available.

Correct me if I am wrong, but I don't believe the rest of your reply was directed at me, per se, or that it contradicts anything else I wrote.

Cheers!

vienna
10-23-17, 06:20 AM
Oh, no, I wasn't directing anything at you specifically; I was just discussing the nature of what is or isn't intelligence in terms of an AI and those comments are just my viewpoint. I welcome any and all considered views, agreed or contrary. If you took my comments as an affront, I do apologize, as that was not my intent...

I find it interesting you seem to be a bit focused on the notion of taking down a possible AI (virus, EMP, etc.); it is a notion I have seen in an awful lot of other discussions of AI; it almost seems akin to the popular concept of extraterrestrial life: if 'they' do come, they probably won't be benevolent. It is a sort of human trait to exhibit a form of xenophobia toward anything unknown or uncertain. Human history has lots of instances where new ideas, inventions and philosophies have been met with sometimes vehement reaction, and later those 'aberrations' became commonplace parts of human society and knowledge: Galileo's persecution by the Roman Catholic Church leaps to mind. Even now, in a more limited sense, we are seeing devices such as PCs and the Net, once looked upon with a degree of suspicion and skepticism, as now almost indispensable parts of our lives. I recall how, in 1968, I was talking with a group of fellow cadets in high school and they laughed when I said that, one day, we would have computers on our desks, computers would run the functions of automobiles, and there might be an interconnected network of computers that would put information and data at our hands instantly. I wish I could see those guys now. Granted, back in those days, computers were massive devices filling hundreds of square feet of space that needed controlled environments. One day we will have advanced AIs, and we really do not need to overly fear them; they are only tools, and they are benign until ill-used by humans: a hammer is a constructive tool, at least until you try to bludgeon someone with it...

Your comment about the slide rule and celestial navigation is spot on. An overdependence on tech can and probably will come back to bite us in the long run. When the infamous Y2K brouhaha came about, the older types who had been considered redundant suddenly became very much in demand; a co-worker of mine told me of her programmer husband who had been forced to take early retirement because he was deemed unnecessary and redundant; however, once the company he worked for was faced with the Y2K problems and found they would need people versed in RPG and PL/1, they came to him hat in hand and almost begged him to come back; it seems all the young up-and-comers they had kept hadn't a clue how to revise the existing programs; he did go back - as a highly paid consultant, making back as much as and more than he lost by early retirement. I wish I had kept up my RPG skills; I could have made a tidy killing...







<O>

Skybird
10-23-17, 07:51 AM
it 'knows' it's here
Define "it". Define "knows".

A flippable light switch has its switch in either this or that position. Is there an "it" "knowing" anything about that?

Awareness, Mind, are more than the sum of all the individual parts; of this I am certain. We have no real understanding of where they begin, and why they even form, and when, under what conditions. We can only assume that the carrying system's structural complexity has something to do with it. A certain minimum number of degrees of freedom in the possibilities that any situation of choice can freely pick from, and a certain minimum of such decision points - splits in the deterministic tree - that allow their consequences to dynamically feed back on other such nodes both upwards and downwards in the hierarchy, thereby allowing autonomous self-alteration of the system.

And when this system alters itself to a new, higher level of order/complexity, then maybe something like mind or self-awareness may be the result, who knows.

We speculate, and all too often reduce reality to the inferior degrees of complexity that our own artificial categories allow us to handle. The "real reality", nevertheless, can only be experienced, which imo is only possible at the price of self-transcendence and moving beyond the borders of the defining limits of what we usually call "us" and "ego".

Sean C
10-23-17, 07:58 AM
If you took my comments as an affront, I do apologize, as that was not my intent...

No worries. :)

I have no inherent aversion to a possibly "conscious" AI itself. In fact, I find the possibilities and implications fascinating. What might we learn about ourselves? What insights might we glean about what it means to be "self-aware"? What moral and ethical challenges might we encounter? Will we find new ways to deal with human afflictions such as memory loss or mental illness? The list goes on and on. We humans have a natural curiosity which drives us to question and explore (sometimes at considerable risk) and I am no exception.

My comments were only prompted by the foreboding feeling already being expressed by others. Personally, I think the biggest problem with technology is its misuse by us. One example being the internet. Here we have a tool which brings all (or at least most) of history's knowledge to our fingertips and makes the world exponentially smaller, allowing us to instantly communicate around the globe and broaden our cultural horizons like never before. But, are people on the whole getting that much smarter and closer? Or are we just spending a lot of time posting "selfies" and "tweeting" our opinion of the latest episode of [insert show here]...achieving just the opposite?

Judging by what is deemed newsworthy in the most popular media, the best use of AI is in service staff and sex dolls. [Sigh]....We're doomed. :haha:

Rockstar
10-23-17, 09:50 AM
Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening tbh, there's no way we'd know if we were just sims on someones computer.


You go first :D There's no proof our consciousness resides within the confines of the brain.


"Despite zillions of us (neurologists) slaving away at the subject, we still don't know squat about how the brain works."
Robert Sapolsky, professor of biological science and neurology at Stanford University

vienna
10-23-17, 10:27 AM
Define "it". Define "knows".

A flippable light switch has its switch in either this or that position. Is there an "it" "knowing" anything about that?

Awareness and mind are more than the sum of all the individual parts, of this I am certain. We have no real understanding of where they begin, why they even form, and when, under what conditions. We can only assume that the carrying system's structural complexity has something to do with it: a certain minimum number of degrees of freedom from which any situation of choice can freely pick, and a certain minimum of such decision-points, splits in the deterministic tree, that allow their consequences to dynamically feed back on other such nodes both upwards and downwards in the hierarchy, thereby allowing autonomous self-alteration of the system.

And when this system alters itself to a new, higher level of order/complexity, then maybe something like mind or self-awareness may be the result, who knows.

We speculate, and all too often reduce reality to the inferior degrees of complexity that our own artificial categories allow us to handle. The "real reality", nevertheless, can only be experienced, which imo is only possible at the price of self-transcendence and moving beyond the defining limits of what we usually call "us" and "ego".


You will note I put the word knows in quotes ('knows'), indicating it is not a literal use of the word; the "it" was obvious from the context: I was referring to the computer I described. A computer 'knows' of its existence in the sense that it constantly monitors the OS and hardware status and reacts, as needed, to changes in that status rather than just sitting dumbly waiting to be 'told' it is running. If you turn on a computer and leave, it will continue to hum along, in patient expectation, as long as it has power and does not suffer from a physical malfunction or OS blip, much as I, at my age, am patiently waiting while hoping my 'hardware' and 'software' don't crash (nearer my God, etc.).

I do not believe AIs will achieve fully human status, so I don't think they will ever know of their existence in the fully human way humans have that knowledge. Your question seems to be perpetuating the "in order to be fully intelligent, it must be fully human" mythos. Emotion does not make intelligence and, often, is a hindrance to intelligence; in this I refer again to Galileo's situation. Also, intelligence can exist in the absence of knowledge; I have known a good many people who were not 'book smart' who were nonetheless highly natively intelligent; and, conversely, I have known a good many people who were virtual fonts of knowledge who did not have the ability to apply their knowledge, nor did they have what could best be described as good old-fashioned human 'common sense'. Being 'smart' does not necessarily make one 'intelligent'...
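That kind of machine 'knowing' can be made concrete in a few lines. The sketch below is a hypothetical Python toy (not how any real OS watchdog is implemented): it takes one reading of free disk space and picks a reaction, which is the whole extent of the 'self'-monitoring — a value is read, compared against a threshold, and an action chosen, with no subject anywhere observing the observation.

```python
import shutil
import time

def health_snapshot(path="/", min_free_bytes=1_000_000_000):
    """Take one 'self-monitoring' reading of disk state and choose a reaction.

    The threshold and the 'alert'/'idle' labels are arbitrary illustrative
    choices; the point is that monitoring-plus-reaction needs no awareness.
    """
    usage = shutil.disk_usage(path)
    return {
        "timestamp": time.time(),        # when the reading was taken
        "free_bytes": usage.free,        # raw observed value
        "reaction": "alert" if usage.free < min_free_bytes else "idle",
    }

snap = health_snapshot()
print(snap["reaction"])
```

A real system would run this in a loop and act on the "alert" branch (page an operator, free space, etc.), but the structure stays the same: measurement, comparison, branch.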

In an odd way, if you think about it, an AI would most likely be akin to a high-functioning sociopath: able to perform at a very high level but devoid of, or very seriously lacking, the strictures provided by a human framework, and highly dedicated to, perhaps obsessed with, singular tasks...






<O>

vienna
10-23-17, 10:54 AM
No worries. :)

I have no inherent aversion to a possibly "conscious" AI itself. In fact, I find the possibilities and implications fascinating. What might we learn about ourselves? What insights might we glean about what it means to be "self-aware"? What moral and ethical challenges might we encounter? Will we find new ways to deal with human afflictions such as memory loss or mental illness? The list goes on and on. We humans have a natural curiosity which drives us to question and explore (sometimes at considerable risk) and I am no exception.

My comments were only prompted by the foreboding feeling already being expressed by others. Personally, I think the biggest problem with technology is its misuse by us. One example being the internet. Here we have a tool which brings all (or at least most) of history's knowledge to our fingertips and makes the world exponentially smaller, allowing us to instantly communicate around the globe and broaden our cultural horizons like never before. But, are people on the whole getting that much smarter and closer? Or are we just spending a lot of time posting "selfies" and "tweeting" our opinion of the latest episode of [insert show here]...achieving just the opposite?

Judging by what is deemed newsworthy in the most popular media, the best use of AI is in service staff and sex dolls. [Sigh]....We're doomed. :haha:

The problem with the internet is what plagued television at its start: a medium with great potential was quickly reduced to such a degree it was derided as an "idiot box". The same affliction plagues the internet: crass commercialism. When TV was first launched, the sets were initially so expensive, the principal audience was rather more the well-to-do, educated consumer, so the network programming was heavy on cultural programs resulting in the "Golden Age" of television; later, as sets became more affordable and common, the networks tapped the new, less discerning demographic by moving towards broader fare, resulting in a new descriptor, the "vast wasteland". As long as the mantra is "Monetize, Monetize!", I don't see the Net getting any better...

As far as the future potential of AI is concerned, again, it is a tool and, like all tools, it can either build or destroy depending on who wields it and to what end. IBM's WATSON, perhaps the most advanced realization of AI technology, is currently being used for advanced medical research and disease diagnosis; however, I am sure it won't be long before some commercial entity or other decides the better use will be to enhance 'monetizing' whatever they wish to foist on the public...






<O>

Skybird
10-24-17, 11:53 AM
You will note I put the word knows in quotes ('knows'), indicating it is not a literal use of the word; the "it" was obvious from the context: I was referring to the computer I described. A computer 'knows' of its existence in the sense that it constantly monitors the OS and hardware status and reacts, as needed, to changes in that status rather than just sitting dumbly waiting to be 'told' it is running.

Not off my hook so easily! :) You said "it 'knows' of its" status/condition/work/whatever. Grammar rules apply: the "it" here is the subject observing something, while in your explanation above you have already turned "it" into the object of the observation. Those are two very different things! A subject needs self-realization - to differentiate between itself and the other/the object of its ongoing observation. Hence, my reply.

Emotion does not make intelligence and, often, is a hindrance to intelligence; in this I refer again to Galileo's situation. Also, intelligence can exist in the absence of knowledge; I have known a good many people who were not 'book smart' who were nonetheless highly natively intelligent; and, conversely, I have known a good many people who were virtual fonts of knowledge who did not have the ability to apply their knowledge, nor did they have what could best be described as good old-fashioned human 'common sense'. Being 'smart' does not necessarily make one 'intelligent'...
Why not make it simple and differentiate between education/knowledge and intelligence/thinking. However, I have this feeling that if intelligence is like a chase down a deterministically ever-splitting decision tree, then it may end in a situation where complexity grows so much that the sum of these single decisions added together no longer equals the result, but is left behind by something new emerging. Like bubbles in boiling water (heating it up turns the state of the molecules in the liquid into a chaotic dis-order) suddenly starting to form patterns in which they appear and rise: a new meta-structure, a new order, has emerged from the former chaotic state, and the system is now reorganised on a higher level of order. The comparison is imperfect, I know; I am just trying to give a hopefully helpful image of what I mean.

The brain is a remarkably complex and, as we now know, ALWAYS changing network of nodes, biological "transistors" and one-way highways for signal transmission. Its functioning, by its design, is what creates what we call our thoughts, our self-concept, our personality, our ego, our emotions, and, even if many do not want to hear it, our religious feeling, thinking and experiencing. All this obviously goes far beyond just processing the signal input from our sensory organs. Take away the functioning brain, and what is left is just a bunch of meat, without any mind, self-conception, human quality. Now, the web also has become an incredibly complex structure of data highways, decision splits and internodes. And it is linked to a treasury of data signals, sometimes usefully and sometimes uselessly arranged, that is the biggest datapool known to us beside the double helix of the DNA.

Isn't it a self-offering theory to consider that this system, as a carrier for self-learning algorithms that even have the ability to alter themselves, may result in something like autonomous self-aware intelligence from some "magic" threshold point/key event on? We do not know what this threshold could be. And this maybe is a worrisome gap in our idea of creating self-aware artificial intelligence: an intelligence that is not based on the biological needs of human bodies and is not the result of encoded needs and deterministically preset subroutines that our evolution has marked in our genes over the millennia. But much of our emotions founded our ideas of values, and both thus rest on the evolutionary needs of our species, the drive for the species' survival (not necessarily the individual's! ;) ). Hormones and our sexual drive dictate our individual as well as our social behaviour and decision-making to a much deeper, more far-reaching degree than many people would like to learn, for it offends their idea that they have free will, make their own decisions, and that their civilised mind can always command our biological fundament and their intellect keeps their sexual motives in check. And I do not even base this on Freud, but on modern biology. For example Robin Baker: Sperm Wars. Infidelity, Sexual Conflict and other Bedroom Stories, a refreshing and humorous read from a biological standpoint, though Puritans should be warned, the book in parts is quite explicit.

I wonder what an AI will look like that is not based on these biological factors and that has no need for sex. I am quite certain that it will not share human views on ethics and morals once it has indeed made the jump to real self-awareness. And from that moment on, we cannot make any predictions about its decisions and reasoning anymore. And that I find deeply worrying and alarming.


In an odd way, if you think about it, an AI would most likely be akin to a high-functioning sociopath, able to perform at a very high level but devoid of or very seriously lacking the strictures provided by a human framework and highly dedicated, perhaps obsessed, to singular tasks...

In the very first Alien movie, Ripley at the end kicks the android's head off. Before she finishes him off, he talks a bit about his attitude towards the alien, and he expresses admiration for the clean conception of its design: an organism not hindered by scruples and moral hesitations, just focussed on predation and survival. The last movie, Covenant, picks up here again, the android now showing no mercy nor even the smallest scruple about committing genocide and handing his human prisoners over to unimaginable pain and torment while having them killed by the baby aliens. Yes, I see parallels between these movie scenes and my reasoning above. We humans too easily fall for megalomania, but may find out to our cost that we are just apprentices who still cannot master the ghosts we called up by our careless mumbling of spells.

It is revealing that these implications receive almost no reflection, debate, or media coverage, and play almost no role in task-related courses at universities, in job training, or in business-oriented think tanks. To me it compares to bungee-jumping off a bridge without first checking whether the rubber rope is really latched. And that is why I think what we are doing is a very bad idea and currently on a terribly misled track.