SUBSIM Radio Room Forums

SUBSIM Radio Room Forums (https://www.subsim.com/radioroom/index.php)
-   General Topics (https://www.subsim.com/radioroom/forumdisplay.php?f=175)
-   -   can man go? (https://www.subsim.com/radioroom/showthread.php?t=233897)

Sean C 10-22-17 09:51 PM

Quote:

Originally Posted by vienna (Post 2519702)
The above is true if the AI is centralized; this is "human think": everything of substance or consequence, e.g., governments, countries, corporations, religions, etc., must have a central 'HQ' [...] It is a human tendency to think of AI in terms of a massive "supercomputer" or server farm; what if an AI spread itself out in a redundant fashion across the vast Net, with its various components running in several places, running in mirror fashion and changing the exact location and function of its components in an arbitrarily 'random' fashion? There would be no CPU to attack and shut down, killing the whole system ("pull the plug") [...] The only play then would be to destroy the entire Net, simultaneously, and deal with the repercussions of having to live not only off the grid, but also with no grid at all...

Ah, but that's just it: the very existence of "the grid" is itself a weak point. Even a small nuclear EMP or CME which only directly affected a small area could potentially take down huge portions of the power grid. An extremely large event (or multiple, scattered events) could theoretically take out all of the electronics on Earth. Even if some devices in the affected areas were shielded, without power they wouldn't operate for long. These events are also indiscriminate. A targeted attack is not even an option except in the broadest geographical sense.

On the other hand, you have computer viruses. A virus injected into a machine which is connected to the net containing the AI would necessarily have access to all of the areas the AI has access to. Firewalls and anti-virus measures notwithstanding, we humans are pretty crafty. After all, we did invent the idea of a "Trojan horse". I must slightly disagree with Rockin Robbins' assertion that computers cannot be "surprised". I'd be willing to bet that virtually anyone who has owned/used a computer has experienced a time when something occurred that the computer simply could not deal with. In extreme cases, you get the computer version of passing out: the BSOD. :)

vienna 10-23-17 02:26 AM

Quote:

Originally Posted by Nathaniel B. (Post 2519827)
Ah, but that's just it: the very existence of "the grid" is itself a weak point. Even a small nuclear EMP or CME which only directly affected a small area could potentially take down huge portions of the power grid. An extremely large event (or multiple, scattered events) could theoretically take out all of the electronics on Earth. Even if some devices in the affected areas were shielded, without power they wouldn't operate for long. These events are also indiscriminate. A targeted attack is not even an option except in the broadest geographical sense.

...


So, basically, you are echoing exactly what I stated: without a centralized target, the only recourse is to take down the whole structure it inhabits and the structure around it. I didn't say an AI that dispersed itself in a wide net, with multiple, mobile command redundancies and no central point of attack, is impervious to an EMP, etc.; it is impervious to a conventional, centrally targeted attack. Yes, you could defeat it by taking out the entire grid, just as you could kill a cancer by killing its host body; it's just that the end result is less than satisfactory...

Quote:

Originally Posted by Nathaniel B. (Post 2519827)
On the other hand, you have computer viruses. A virus injected into a machine which is connected to the net containing the AI would necessarily have access to all of the areas the AI has access to. Firewalls and anti-virus measures notwithstanding, we humans are pretty crafty. After all, we did invent the idea of a "Trojan horse". I must slightly disagree with Rockin Robbins' assertion that computers cannot be "surprised". I'd be willing to bet that virtually anyone who has owned/used a computer has experienced a time when something occurred that the computer simply could not deal with. In extreme cases, you get the computer version of passing out: the BSOD. :)

Interesting concept. But what if the AI had superior diagnostics and the ability to self-anneal? Say the system is set up so that the current executive command node(s) are monitored by the redundant command nodes: a virus is injected into the current executive node(s) and the monitoring nodes detect the intrusion; the infected executive is severed (much like a reptile shedding its tail in the jaws of a predator) and command is passed on to a known unaffected node; perhaps the infected node is also quarantined for study by an independent subroutine and used to find an 'antidote' for the attack. Sound farfetched? This is the basic procedure of most current antivirus software today...
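
If it helps to picture it, here is a minimal Python sketch of that sever-and-failover loop. Everything here is hypothetical and purely illustrative (the node names, the monitor routine); it is not any real system's API, just the shape of the idea:

Code:
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    infected: bool = False

class Cluster:
    def __init__(self, executive, standbys):
        self.executive = executive      # current executive command node
        self.standbys = list(standbys)  # known-clean redundant nodes
        self.quarantine = []            # severed nodes, kept for study

    def monitor(self):
        # The redundant nodes watch the executive; on detecting an
        # intrusion they sever it ('shed the tail') and pass command on.
        if self.executive.infected:
            bad = self.executive
            self.quarantine.append(bad)             # study it for an 'antidote'
            self.executive = self.standbys.pop(0)   # promote a clean node
            print(f"severed {bad.name}; command passed to {self.executive.name}")

cluster = Cluster(Node("exec-1"), [Node("exec-2"), Node("exec-3")])
cluster.executive.infected = True   # simulated virus injection
cluster.monitor()                   # -> severed exec-1; command passed to exec-2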

As far as the concepts of free will, compassion, emotion, etc.: these are not necessary for self-awareness or intelligence. To a primitive degree, almost any computer with a free-standing OS is 'aware' of its existence, at least in the sense that it monitors its state through diagnostics, etc. (see the sketch below); it 'knows' it's here; it may not know why, in an existential form, or know it is 'mortal', but it does know it is 'here' (oddly making it, perhaps, philosophically quintessential). The need to know the 'why' and wherefores of existence is a human need, born out of the need for self-validation of the 'purpose' of one's existence. AIs aren't really concerned with the 'niceties' of navel-gazing; they exist to perform tasks within their parameters and, if so equipped, to learn from their mistakes (and ours) and make corrections, as needed and as deemed necessary, for the efficient operation of the system and the completion of its task(s).

The idea that an AI must be fully "human", in intelligence and 'humanity', in order to be truly an AI is a human conceit, akin to the way humanity has sought through the millennia to give an all-seeing, all-knowing, all-powerful entity a "human face" and "character"; it reassures us when the entity is in our 'likeness'. The truth of the matter is, AIs will never be able to fully and independently achieve the complex level of human emotion and compassion; there are just too many varied 'grey areas' in the human state. The truth of the matter also is, AIs don't really need to have that degree of 'humanness'; they just have to know their task(s) and have the ability to perform...
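
To make that 'primitive awareness' point concrete, here is a stdlib-only Python sketch of a machine doing nothing but monitoring its own state and reacting to changes in it. The thresholds are invented for illustration, and os.getloadavg is Unix-only:

Code:
import os, shutil, time

def self_check():
    disk = shutil.disk_usage("/")
    status = {
        "load_1min": os.getloadavg()[0],   # Unix-only call
        "disk_free_pct": round(100 * disk.free / disk.total, 1),
    }
    if status["disk_free_pct"] < 10:       # react, as needed, to changes in status
        print("warning: disk nearly full")
    return status

for _ in range(3):                         # it 'knows' it's here, after a fashion
    print(self_check())
    time.sleep(1)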






<O>

Rockin Robbins 10-23-17 03:00 AM

Quote:

Originally Posted by vienna (Post 2519840)
AIs don't really need to have that degree of 'humanness'; they just have to know their task(s) and have the ability to perform...
<O>

But here we are anthropomorphizing them again, injecting our fears, desires, goals, etc. into a box of microchips. We play Silent Hunter. The computer moves pixels around and performs mathematical computations with no awareness of the game, while we play the game, completely unaware of the pixel manipulations and mathematical calculations. The machine has no concept of the game at all. It just executes instructions, meaningless to it, because the software tells it to.

BSOD? It's the same thing: a checklist of qualifications which, when satisfied, jumps execution to the BSOD subroutine. The computer has no awareness of the process at all. It follows instructions and mindlessly executes them, unaware that it has "died."
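
In toy Python, the point looks something like this. This is a deliberately silly sketch, not Windows' actual crash logic; the condition names are made up:

Code:
FATAL_CHECKS = [
    lambda s: s["unhandled_kernel_exception"],
    lambda s: not s["stack_intact"],
]

def bsod(state):
    # The 'death' routine: just another jump target, executed mindlessly.
    print("*** STOP: a problem has been detected ***", state)

def tick(state):
    if any(check(state) for check in FATAL_CHECKS):
        bsod(state)     # the machine 'passes out' without knowing it
    # otherwise: keep fetching and executing instructions, equally unaware

tick({"unhandled_kernel_exception": True, "stack_intact": True})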

The trick is to make this alien manipulation of electrons look like intelligence. This thread is evidence that we've drunk the Kool-Aid and are now basking in our great "achievement." Unfortunately that achievement is only a hallucination, or maybe a nightmare. We give ourselves altogether too much credit.

Reminds me of the questions about which mods for Silent Hunter are the "most realistic." There IS NO REALISM in any Silent Hunter game. Until the food goes bad six weeks out and you have to eat it, we don't have to worry about realism in Silent Hunter either.

vienna 10-23-17 04:20 AM

Quote:

Originally Posted by Rockin Robbins (Post 2519842)

...

Until the food goes bad six weeks out and you have to eat it, we don't have to worry about realism in Silent Hunter either.



Does it count if one has a binge-play marathon and mindlessly nibbles on cold, stale pizza?...






<O>

Sean C 10-23-17 05:23 AM

Quote:

Originally Posted by vienna (Post 2519840)
So, basically, you are echoing exactly what I stated...

I was simply expanding on the first paragraph of my previous statement in the context of your response. The only agreement was with the very last sentence of your response...an idea I thought would be self-evident in my original statement. The rest of your response sounded (to me) like an argument against the efficacy of such a solution, ignoring the fact that such an act would necessarily take down the entire network. Of course, I could have misinterpreted what you wrote.

Quote:

Yes, you could defeat it by taking out the entire grid, just as you could kill a cancer by killing its host body; it's just that the end result is less than satisfactory...

I believe a slightly better analogy would be something like: ridding a body of a cancer by cutting off the limb containing the tumor. Sure, it's an extreme (but effective) treatment, but we would still be here...we'd just have to adapt a lot. That's why I keep my slide rule and celestial navigation skills sharp: I figure I'll be quite in demand after the AI/electronics apocalypse. :D

Quote:

Interesting concept. But what if the AI had superior diagnostics and the ability to self-anneal?...

Perhaps someone could design a virus which mimicked the AI's anti-virus routines and tricked the AI into believing that the virus was actually a patch to protect against some future attack, at which point it would be installed everywhere, to be activated at a later date. I don't know...my knowledge of programming is too limited and the "what if" game is too open-ended for me to give a definite response. In any case, if the virus route didn't work, the extreme treatment option above would still be available.
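
For what it's worth, the standard defence against exactly that trick is to authenticate patches before installing them. A minimal Python sketch of the idea (the shared key and the install routine are hypothetical, purely illustrative):

Code:
import hmac, hashlib

TRUSTED_KEY = b"hypothetical-shared-secret"   # stand-in for real key management

def sign(patch: bytes) -> bytes:
    return hmac.new(TRUSTED_KEY, patch, hashlib.sha256).digest()

def install(patch: bytes, signature: bytes) -> bool:
    # Refuse any 'patch' whose signature doesn't verify.
    if not hmac.compare_digest(sign(patch), signature):
        print("rejected: signature mismatch (possible disguised virus)")
        return False
    print("patch installed")
    return True

good = b"legitimate update"
install(good, sign(good))                            # accepted
install(b"virus disguised as a patch", sign(good))   # rejected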

Correct me if I am wrong, but I don't believe the rest of your reply was directed at me, per se, or that it contradicts anything else I wrote.

Cheers!

vienna 10-23-17 06:20 AM

Oh, no, I wasn't directing anything at you specifically; I was just discussing the nature of what is or isn't intelligence in terms of an AI, and those comments are just my viewpoint. I welcome any and all considered views, agreeing or contrary. If you took my comments as an affront, I do apologize, as that was not my intent...

I find it interesting that you seem to be a bit focused on the notion of taking down a possible AI (virus, EMP, etc.); it is a notion I have seen in an awful lot of other discussions of AI; it almost seems akin to the popular concept of extraterrestrial life: if 'they' do come, they probably won't be benevolent. It is a sort of human trait to exhibit a form of xenophobia toward anything unknown or uncertain. Human history has lots of instances where new ideas, inventions, and philosophies were met with sometimes vehement reaction and, later, those 'aberrations' became commonplace parts of human society and knowledge: Galileo's persecution by the Roman Catholic Church leaps to mind. Even now, in a more limited sense, we are seeing devices such as PCs and the Net, once looked upon with a degree of suspicion and skepticism, become almost indispensable parts of our lives.

I recall how, in 1968, I was talking with a group of fellow cadets in high school and they laughed when I said that, one day, we would have computers on our desks, computers would run the functions of automobiles, and there might be an interconnected network of computers that would put information and data at our hands instantly. I wish I could see those guys now. Granted, back in those days, computers were massive devices filling hundreds of square feet of space and needing to be kept in controlled environments. One day, we will have advanced AIs and we really do not need to overly fear them; they are only tools, and they are benign until ill-used by humans: a hammer is a constructive tool, at least until you try to bludgeon someone with one...

Your comment about the slide rule and celestial navigation is spot on. An overdependence on tech can, and probably will, come back to bite us in the long run. When the infamous Y2K brouhaha came about, the older types who had been considered redundant suddenly became very much in demand; a co-worker of mine told me of her programmer husband who had been forced to take early retirement because he was deemed unnecessary and redundant; however, once the company he worked for was faced with the Y2K problems and found they would need people versed in RPG and PL/1, they came to him hat in hand and almost begged him to come back; it seems all the young up-and-comers they had kept hadn't a clue how to revise the existing programs. He did go back - as a highly paid consultant, making back as much as, and more than, he had lost by early retirement. I wish I had kept up on my RPG skills; I could have made a tidy killing...
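
For readers who never met the bug: the Y2K problem, in miniature, was two-digit year arithmetic. A toy Python reconstruction (the windowing rule shown was one common remedy, with an illustrative pivot of 50):

Code:
def years_between_2digit(start_yy, end_yy):
    return end_yy - start_yy           # the buggy legacy arithmetic

print(years_between_2digit(65, 99))    # 34  -- fine in 1999
print(years_between_2digit(65, 0))     # -65 -- the year 2000, stored as '00'

def years_between_windowed(start_yy, end_yy, pivot=50):
    # Common fix: a window mapping 00-49 to the 2000s, 50-99 to the 1900s.
    expand = lambda yy: yy + (2000 if yy < pivot else 1900)
    return expand(end_yy) - expand(start_yy)

print(years_between_windowed(65, 0))   # 35 -- correct across the rollover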







<O>

Skybird 10-23-17 07:51 AM

Quote:

Originally Posted by vienna (Post 2519840)
...it 'knows' it's here...

Define "it". Define "knows".

A flippable light switch sits in either this or that position. Is there an "it" "knowing" anything about that?

Awareness and Mind are more than the sum of all the individual parts; of this I am certain. We have no real understanding of where they begin, why they even form up, and when, under what conditions. We can only assume that the carrying system's structural complexity has something to do with it: a certain minimum number of degrees of freedom in the possibilities that any situation of choice can pick from, and a certain minimum of such decision points, splits in the deterministic tree, that allow their consequences to feed back dynamically on other such nodes, both upwards and downwards in the hierarchy, thereby allowing autonomous self-alteration of the system.

And when this system alters itself to a new, higher level of order/complexity, then maybe something like mind or self-awareness may be the result, who knows.

We speculate, and all too often reduce reality to the inferior degrees of complexity that our own artificial categories allow us to handle. The "real reality", nevertheless, can only be experienced, which imo is only possible at the price of self-transcendence and moving beyond the borders of the defining limits of what we usually call "us" and "ego".

Sean C 10-23-17 07:58 AM

Quote:

Originally Posted by vienna (Post 2519857)
If you took my comments as an affront, I do apologize, as that was not my intent...

No worries. :)

I have no inherent aversion to a possibly "conscious" AI itself. In fact, I find the possibilities and implications fascinating. What might we learn about ourselves? What insights might we glean about what it means to be "self-aware"? What moral and ethical challenges might we encounter? Will we find new ways to deal with human afflictions such as memory loss or mental illness? The list goes on and on. We humans have a natural curiosity which drives us to question and explore (sometimes at considerable risk) and I am no exception.

My comments were only prompted by the foreboding feeling already being expressed by others. Personally, I think the biggest problem with technology is its misuse by us. One example being the internet. Here we have a tool which brings all (or at least most) of history's knowledge to our fingertips and makes the world exponentially smaller, allowing us to instantly communicate around the globe and broaden our cultural horizons like never before. But, are people on the whole getting that much smarter and closer? Or are we just spending a lot of time posting "selfies" and "tweeting" our opinion of the latest episode of [insert show here]...achieving just the opposite?

Judging by what is deemed newsworthy in the most popular media, the best use of AI is in service staff and sex dolls. [Sigh]....We're doomed. :haha:

Rockstar 10-23-17 09:50 AM

Quote:

Originally Posted by Oberon (Post 2519644)
Or perhaps we will merge, moving our consciousness between machine and man like swapping a USB drive until there is no barrier between the two and all that remains is energy. The whole of humanity on a USB stick.

Might already be what's happening, tbh; there's no way we'd know if we were just sims on someone's computer.


You go first :D There's no proof our consciousness resides within the confines of the brain.


"Despite zillions of us (neurologists) slaving away at the subject, we still don't know squat about how the brain works."
Robert Sapolsky, professor of biological science and neurology at Stanford University

vienna 10-23-17 10:27 AM

Quote:

Originally Posted by Skybird (Post 2519866)
Define "it". Define "knows".

A flippable light switch has its switch in either this or that position. Is there an "it" "knowing" anything about that?

Awareness, Mind are more than the sum of all the individual parts, of this I am certain. We have no real understanding of where they begin, and why they even form up, and when, under what contiions. We can only assume that carrying system's structural complexity has somethign to do with it. A certain minimum amount of degrees fo freedom on possibilities that any situation of choice can freely pick, and a certain minimum of such deicison-points, splits in the detemrinistic tree, that allow their consequences to dynamcially feedback on other suhc nodes both upowards and downwards in the hierarchy, by this allowing autonomous self-alteration of the system.

And when this system alters itself to a new, higher level of order/complexity, maybe, then maybe somethign like mind or self-awareness may be the result, who knows.

We speculate, and all too often reduce reality to inferior degrees of complexity that you own artifical categories allow us to handle. The "real reality", nevertheless can only be experienced, which imo only is piossble at the porice of self-transcendence and moving beyond the borders of the defining limits of what we usually call "us" and "ego".


You will note I put the word knows in quotes ('knows'), indicating it is not a literal use of the word; the 'it' was obvious from the context: I was referring to the computer I described. A computer 'knows' of its existence in the sense that it constantly monitors the OS and hardware status and reacts, as needed, to changes in that status rather than just sitting dumbly waiting to be 'told' it is running. If you turn on a computer and leave, it will continue to hum along, in patient expectation, as long as it has power and does not suffer a physical malfunction or OS blip, much as I, at my age, am patiently waiting while hoping my 'hardware' and 'software' don't crash (nearer my God, etc.). I do not believe AIs will ever fully achieve human status, so I don't think they will ever fully know of their existence in the way humans have that knowledge. Your question seems to be perpetuating the "in order to be fully intelligent, it must be fully human" mythos.

Emotion does not make intelligence and, often, is a hindrance to intelligence; in this I refer again to Galileo's situation. Also, intelligence can exist in the absence of knowledge; I have known a good many people who were not 'book smart' who were nonetheless highly natively intelligent; and, conversely, I have known a good many people who were virtual fonts of knowledge who did not have the ability to apply their knowledge, nor did they have what could best be described as good old-fashioned human 'common sense'. Being 'smart' does not necessarily make one 'intelligent'...

In an odd way, if you think about it, an AI would most likely be akin to a high-functioning sociopath: able to perform at a very high level but devoid of, or very seriously lacking, the strictures provided by a human framework, and highly dedicated to, perhaps obsessed with, singular tasks...






<O>

vienna 10-23-17 10:54 AM

Quote:

Originally Posted by Nathaniel B. (Post 2519867)
No worries. :)

I have no inherent aversion to a possibly "conscious" AI itself. In fact, I find the possibilities and implications fascinating. What might we learn about ourselves? What insights might we glean about what it means to be "self-aware"? What moral and ethical challenges might we encounter? Will we find new ways to deal with human afflictions such as memory loss or mental illness? The list goes on and on. We humans have a natural curiosity which drives us to question and explore (sometimes at considerable risk) and I am no exception.

My comments were only prompted by the foreboding feeling already being expressed by others. Personally, I think the biggest problem with technology is its misuse by us. One example being the internet. Here we have a tool which brings all (or at least most) of history's knowledge to our fingertips and makes the world exponentially smaller, allowing us to instantly communicate around the globe and broaden our cultural horizons like never before. But, are people on the whole getting that much smarter and closer? Or are we just spending a lot of time posting "selfies" and "tweeting" our opinion of the latest episode of [insert show here]...achieving just the opposite?

Judging by what is deemed newsworthy in the most popular media, the best use of AI is in service staff and sex dolls. [Sigh]....We're doomed. :haha:

The problem with the internet is what plagued television at its start: a medium with great potential was quickly reduced to such a degree that it was derided as an "idiot box". The same affliction plagues the internet: crass commercialism. When TV was first launched, sets were initially so expensive that the principal audience was the well-to-do, educated consumer, so network programming was heavy on cultural programs, resulting in the "Golden Age" of television; later, as sets became more affordable and common, the networks tapped the new, less discerning demographic by moving toward broader fare, resulting in a new descriptor: the "vast wasteland". As long as the mantra is "Monetize, Monetize!", I don't see the Net getting any better...

As far as the future potential of AI is concerned, again, it is a tool and, like all tools, it can either build or destroy depending on who wields it and to what end. IBM's WATSON, perhaps the most advanced realization of AI technology, is currently being used for advanced medical research and disease diagnosis; however, I am sure it won't be long before some commercial entity or other decides the better use will be to enhance the 'monetizing' of whatever they wish to foist on the public...






<O>

Skybird 10-24-17 11:53 AM

Quote:

Originally Posted by vienna (Post 2519887)
You will note I put the word knows in quotes ('knows'), indicating it is not a literal use of the word; the 'it' was obvious from the context: I was referring to the computer I described. A computer 'knows' of its existence in the sense that it constantly monitors the OS and hardware status and reacts, as needed, to changes in that status rather than just sitting dumbly waiting to be 'told' it is running.

Not off my hook so easily! :) You said "It knows of its" status/condition/work/whatever. Grammar rules apply: the "it" here is the subject observing something, while in your explanation above you have already turned "it" into the object of the observation. Those are two very different things! A subject needs self-realization, to differentiate between itself and the other, the object of its ongoing observation. Hence my reply.

Quote:

Emotion does not make intelligence and, often, is a hindrance to intelligence; in this I refer again to Galileo's situation. Also, intelligence can exist in the absence of knowledge; I have known a good many people who were not 'book smart' who were nonetheless highly natively intelligent; and, conversely, I have known a good many people who were virtual fonts of knowledge who did not have the ability to apply their knowledge, nor did they have what could best be described as good old-fashioned human 'common sense'. Being 'smart' does not necessarily make one 'intelligent'...

Why not make it simple and distinguish between education/knowledge and intelligence/thinking? However, I have this feeling that if intelligence is like a chase down a deterministically ever-splitting decision tree, then it may end in a situation where complexity grows so much that the sum of these single decisions added together no longer equals the result, but is left behind by something new emerging, like bubbles in boiling water (heating it up turns the state of the molecules in the liquid into chaotic disorder) suddenly starting to form patterns in which they appear and rise: a new meta-structure, a new order, has emerged from the former chaotic state; the system is now reorganised on a higher level of order. The comparison is imperfect, I know; I am just trying to give a hopefully helpful image of what I mean.

The brain is a remarkably complex and, as we now know, ALWAYS changing network of nodes, biological "transistors", and one-way highways for signal transmission. Its functioning, by its design, is what creates what we call our thoughts, our self-concept, our personality, our ego, our emotions, and, even if many do not want to hear it, our religious feeling, thinking, and experiencing. All this obviously goes far beyond just processing the signal input from our sensory organs. Take away the functioning brain, and what is left is just a bunch of meat, without any mind, self-conception, or human quality.

Now, the web also has become an incredibly complex structure of data highways, decision splits, and internodes. And it is linked to a treasury of data signals, sometimes usefully and sometimes uselessly arranged, that is the biggest data pool known to us beside the double helix of DNA. Isn't it a self-offering theory to take into account that this system, as a carrier for self-learning algorithms that even have the ability to alter themselves, may result in something like autonomous self-aware intelligence from some "magic" threshold point/key event on? We do not know what this threshold could be. And this maybe is a worrisome gap in our idea about creating self-aware artificial intelligence: an intelligence that is not based on the biological needs of human bodies and is not the result of encoded needs and deterministically preset subroutines that our evolution has marked in our genes over the millennia.

But much of our emotion founded our ideas of values, and both thus are based on the evolutionary needs of our species, the drive for the species's survival (not necessarily the individual's! ;) ). Hormones and our sexual drive dictate our individual as well as our social behaviour and decision-making to a much deeper, more far-reaching degree than many people would like to learn, for it offends their idea that they have free will, make their own decisions, and that their civilised mind can always command their biological fundament and their intellect keeps their sexual motives in check. And I do not even base that on Freud, but on modern biology; for example, Robin Baker: Sperm Wars: Infidelity, Sexual Conflict and Other Bedroom Battles, a refreshing and humorous read from a biological standpoint, though Puritans should be warned: the book is in parts quite explicit.

I wonder what an AI will look like that is not based on these biological factors and has no need for sex. I am quite certain that it will not share human views on ethics and morals once it has indeed made the jump to real self-awareness. And from that moment on, we cannot make any predictions about its decisions and reasonings anymore. And that I find deeply worrying and alarming.

Quote:

In an odd way, if you think about it, an AI would most likely be akin to a high-functioning sociopath: able to perform at a very high level but devoid of, or very seriously lacking, the strictures provided by a human framework, and highly dedicated to, perhaps obsessed with, singular tasks...

In the very first Alien movie, Ripley at the end kicks the android's head off. Before she finishes him off, he talks a bit about his attitude towards the alien, expressing admiration for the clean conception of its design: an organism not hindered by scruples and moral hesitations, just focused on survival, predatory survival. The latest movie, Covenant, picks this up again, the android now showing no mercy, nor even the smallest scruple, in committing genocide and handing his human prisoners over to unimaginable pain and torment while having them killed by the baby aliens. Yes, I see parallels between these movie scenes and my reasoning above. We humans too easily fall for megalomania, but may find out to our cost that we are just apprentices who still cannot master the ghosts we called up by our careless mumbling of spells.

It is revealing that these implications receive almost no reflection, debate, or media coverage, and play almost no role in task-related courses at universities, in job training, or in business-oriented think tanks. To me it compares to bungee-jumping off a bridge without first checking whether the rubber rope really is latched. And that is why I think what we are doing is a very bad idea and currently on a terribly misguided track.

