Quote:
On the other hand, you have computer viruses. A virus injected into a machine connected to the same net as the AI would necessarily have access to all of the areas the AI has access to. Firewalls and anti-virus measures notwithstanding, we humans are pretty crafty; after all, we did invent the idea of a "Trojan horse". I must slightly disagree with Rockin Robbins' assertion that computers cannot be "surprised". I'd be willing to bet that virtually anyone who has owned or used a computer has experienced a time when something occurred that the computer simply could not deal with. In extreme cases, you get the computer version of passing out: the BSOD. :)
Quote:
So, basically, you are echoing exactly what I stated: without a centralized target, the only recourse is to take down the whole structure it inhabits and the structure around it. I didn't say an AI that disperses itself across a wide net, with multiple, mobile command redundancies and no central point of attack, is impervious to an EMP and the like; it is impervious to a conventional, centrally targeted attack. Yes, you could defeat it by taking out the entire grid, just as you could kill a cancer by killing its host body; it's just that the end result is less than satisfactory... Quote:
As far as the concepts of free will, compassion, emotion, etc. are concerned: these are not necessary for self-awareness or intelligence. To a primitive degree, almost any computer with a free-standing OS is 'aware' of its existence, at least in the sense that it monitors its state through diagnostics; it 'knows' it is here. It may not know why, in an existential sense, or know it is 'mortal', but it does know it is 'here' (oddly making it, perhaps, philosophically quintessential). The need to know the whys and wherefores of existence is a human need, born of the need to self-validate the 'purpose' of one's existence. AIs aren't really concerned with the 'niceties' of navel-gazing; they exist to perform tasks within their parameters and, if so equipped, to learn from their mistakes (and ours) and make corrections as needed and as deemed necessary to the efficient operation of their systems and the completion of their tasks.

The idea that an AI must be fully "human", in intelligence and in 'humanity', in order to be truly an AI is a human conceit, akin to the way humanity has sought through the millennia to give an all-seeing, all-knowing, all-powerful entity a "human face" and "character"; it reassures us when the entity is in our 'likeness'. The truth of the matter is, AIs will never be able to fully and independently achieve the complex level of human emotion and compassion; there are just too many varied 'grey areas' in the human state. The truth also is, AIs don't really need that degree of 'humanness'; they just have to know their tasks and have the ability to perform... <O>
Quote:
BSOD? It's the same thing: a checklist of conditions which, when satisfied, jumps execution to the BSOD subroutine. The computer has no awareness of the process at all. It follows instructions and mindlessly executes them, unaware that it has "died." The trick is to make this alien manipulation of electrons look like intelligence. This thread is evidence that we've drunk the Kool-Aid and are now basking in our great "achievement." Unfortunately, that achievement is only a hallucination, or maybe a nightmare. We give ourselves altogether too much credit. Reminds me of the questions about which mods for Silent Hunter are the "most realistic." There IS NO REALISM in any Silent Hunter game. Until the food goes bad six weeks out and you have to eat it, we don't have to worry about realism in Silent Hunter either.
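In code terms, that "checklist" is nothing more mysterious than a set of condition tests that hand control to an error routine. Here is a minimal Python sketch of the idea; the state flags and the bsod() handler are invented purely for illustration and have nothing to do with how Windows actually implements its bug checks:

Code:
# Toy illustration: a "checklist" of fault conditions that, when satisfied,
# jumps execution to an error routine. The machine evaluates the tests and
# branches; at no point is there any awareness that anything has "died".

def bsod(reason):
    # Hypothetical stand-in for the real bug-check routine.
    print(f"*** STOP: {reason} ***")
    raise SystemExit(1)

def check_faults(state):
    # Each entry is (condition, reason); the first satisfied condition wins.
    checklist = [
        (state["memory_corrupt"],     "MEMORY_MANAGEMENT"),
        (state["driver_fault"],       "DRIVER_IRQL_NOT_LESS_OR_EQUAL"),
        (state["critical_proc_dead"], "CRITICAL_PROCESS_DIED"),
    ]
    for condition, reason in checklist:
        if condition:
            bsod(reason)

# Normal operation: nothing on the checklist is satisfied, so nothing happens.
check_faults({"memory_corrupt": False,
              "driver_fault": False,
              "critical_proc_dead": False})

Evaluate, branch, halt. There is no point in that flow where anything resembling awareness could sneak in.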
Quote:
Does it count if one has a binge-play marathon and mindlessly nibbles on cold, stale pizza?... <O>
Quote:
Correct me if I am wrong, but I don't believe the rest of your reply was directed at me, per se, or that it contradicts anything else I wrote. Cheers!
Oh, no, I wasn't directing anything at you specifically; I was just discussing the nature of what is or isn't intelligence in terms of an AI and those comments are just my viewpoint. I welcome any and all considered views, agreed or contrary. If you took my comments as an affront, I do apologize, as that was not my intent...
I find it interesting that you seem a bit focused on the notion of taking down a possible AI (virus, EMP, etc.); it is a notion I have seen in an awful lot of other discussions of AI, and it seems akin to the popular concept of extraterrestrial life: if 'they' do come, they probably won't be benevolent. It is a human trait to exhibit a form of xenophobia toward anything unknown or uncertain. Human history has plenty of instances where new ideas, inventions, and philosophies were met with sometimes vehement reaction and where, later, those 'aberrations' became commonplace parts of human society and knowledge: Galileo's persecution by the Roman Inquisition leaps to mind. Even now, in a more limited sense, we are seeing devices such as PCs and the Net, once looked upon with a degree of suspicion and skepticism, become almost indispensable parts of our lives.

I recall how, in 1968, I was talking with a group of fellow cadets in high school and they laughed when I said that, one day, we would have computers on our desks, computers would run the functions of automobiles, and there might be an interconnected network of computers that would put information and data at our hands instantly. I wish I could see those guys now. Granted, back in those days, computers were massive devices filling hundreds of square feet of space and needing controlled environments. One day we will have advanced AIs, and we really do not need to overly fear them; they are only tools, and they are benign until ill-used by humans: a hammer is a constructive tool, at least until you try to bludgeon someone with it...

Your comment about the slide rule and celestial navigation is spot on. An over-dependence on tech can, and probably will, come back to bite us in the long run. When the infamous Y2K brouhaha came about, the older types who had been considered redundant were suddenly in very high demand. A co-worker of mine told me of her programmer husband who had been forced into early retirement because he was deemed unnecessary and redundant; however, once the company he had worked for was faced with the Y2K problem and found it would need people versed in RPG and PL/1, they came to him hat in hand and practically begged him to come back. It seems all the young up-and-comers they had kept hadn't a clue how to revise the existing programs. He did go back, as a highly paid consultant, making back as much as he had lost through early retirement, and more. I wish I had kept up my RPG skills; I could have made a tidy killing... <O>
Quote:
A flippable light switch has its switch in either this or that position. Is there an "it" "knowing" anything about that? Awareness and mind are more than the sum of all the individual parts; of this I am certain. We have no real understanding of where they begin, why they even form up, and when, or under what conditions. We can only assume that the carrying system's structural complexity has something to do with it: a certain minimum number of degrees of freedom in the possibilities that any situation of choice can freely pick from, and a certain minimum of such decision points, splits in the deterministic tree, that allow their consequences to feed back dynamically on other such nodes both upwards and downwards in the hierarchy, thereby allowing autonomous self-alteration of the system. And when this system alters itself to a new, higher level of order and complexity, then maybe, just maybe, something like mind or self-awareness may be the result; who knows. We speculate, and all too often reduce reality to the inferior degrees of complexity that our own artificial categories allow us to handle. The "real reality", nevertheless, can only be experienced, which IMO is only possible at the price of self-transcendence and moving beyond the borders of the defining limits of what we usually call "us" and "ego".
Quote:
I have no inherent aversion to a possibly "conscious" AI itself. In fact, I find the possibilities and implications fascinating. What might we learn about ourselves? What insights might we glean about what it means to be "self-aware"? What moral and ethical challenges might we encounter? Will we find new ways to deal with human afflictions such as memory loss or mental illness? The list goes on and on. We humans have a natural curiosity which drives us to question and explore (sometimes at considerable risk), and I am no exception. My comments were only prompted by the foreboding feeling already being expressed by others. Personally, I think the biggest problem with technology is its misuse by us. One example is the internet. Here we have a tool which brings all (or at least most) of history's knowledge to our fingertips and makes the world exponentially smaller, allowing us to instantly communicate around the globe and broaden our cultural horizons like never before. But are people on the whole getting that much smarter and closer? Or are we just spending a lot of time posting "selfies" and "tweeting" our opinion of the latest episode of [insert show here]... achieving just the opposite? Judging by what is deemed newsworthy in the most popular media, the best use of AI is in service staff and sex dolls. [Sigh]... We're doomed. :haha:
Quote:
You go first. :D There's no proof our consciousness resides within the confines of the brain. "Despite zillions of us (neurologists) slaving away at the subject, we still don't know squat about how the brain works." — Robert Sapolsky, professor of biological science and neurology at Stanford University
Quote:
You will note I put the word knows in quotes ('knows'), indicating it is not a literal use of the word; the 'it' was obvious from the context: I was referring to the computer I described. A computer 'knows' of its existence in the sense that it constantly monitors the OS and hardware status and reacts, as needed, to changes in that status rather than just sitting dumbly waiting to be 'told' it is running. If you turn on a computer and leave, it will continue to hum along, in patient expectation, as long as it has power and does not suffer a physical malfunction or OS blip, much as I, at my age, am patiently waiting while hoping my 'hardware' and 'software' don't crash (nearer my God, etc.).

I do not believe AIs will ever fully achieve human status, so I don't think they will ever fully know of their existence in the way humans have that knowledge. Your question seems to perpetuate the "in order to be fully intelligent, it must be fully human" mythos. Emotion does not make intelligence and, often, is a hindrance to it; in this I refer again to Galileo's situation. Also, intelligence can exist in the absence of knowledge; I have known a good many people who were not 'book smart' yet were highly natively intelligent, and, conversely, I have known a good many people who were virtual fonts of knowledge but did not have the ability to apply it, nor did they have what could best be described as good old-fashioned human 'common sense'. Being 'smart' does not necessarily make one 'intelligent'...

In an odd way, if you think about it, an AI would most likely be akin to a high-functioning sociopath: able to perform at a very high level, but devoid of, or very seriously lacking, the strictures provided by a human framework, and highly dedicated, perhaps obsessed, to singular tasks... <O>
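For what it's worth, the kind of 'knowing' I mean is no deeper than a polling loop. A toy Python sketch follows; the readings are simulated and the thresholds invented purely for illustration, since a real OS health monitor is far more involved:

Code:
import random
import time

# Toy self-monitoring loop: the machine "knows" it is running only in the
# sense that it keeps reading its own status and reacting to changes.

def read_status():
    # Hypothetical simulated readings; a real monitor would query the OS
    # and hardware instead of using random numbers.
    return {"cpu_temp_c": random.uniform(35, 95),
            "disk_free_pct": random.uniform(1, 60)}

def react(status):
    # Mechanical rule-following, not reflection: compare against thresholds
    # and take a canned action.
    if status["cpu_temp_c"] > 90:
        print("Throttling CPU (overheating).")
    if status["disk_free_pct"] < 5:
        print("Purging temp files (disk nearly full).")

for _ in range(3):          # a real monitor would loop indefinitely
    react(read_status())
    time.sleep(0.1)

It polls, it compares, it acts; there is no existential insight anywhere in the loop, and none is needed for it to keep itself humming along.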
Quote:
As far as the future potential of AI is concerned, again, it is a tool, and like all tools it can either build or destroy depending on who wields it and to what end. IBM's WATSON, perhaps one of the most advanced realizations of AI technology, is currently being used for advanced medical research and disease diagnosis; however, I am sure it won't be long before some commercial entity or other decides its better use is to enhance the 'monetizing' of whatever they wish to foist on the public... <O>
Quote:
I wonder what an AI will look like that is not based on these biological factors and that has no need for sex. I am quite certain that it will not share human views on ethics and morals once it has indeed made the jump to real self-awareness. And from that moment on, we cannot make any predictions about its decisions and reasoning anymore. And that I find deeply worrying and alarming. Quote:
It is revealing that these implications receive almost no reflection, debate, or media coverage, and play almost no role in task-related courses at universities, in job training, or in business-oriented think tanks. To me it compares to bungee-jumping off a bridge without first checking whether the rubber rope really is latched. And that is why I think what we are doing is a very bad idea and currently on a terribly misguided track.