Old 03-27-08, 03:39 PM   #26
Skybird
Soaring
 
Skybird's Avatar
 
Join Date: Sep 2001
Location: the mental asylum named Germany
Posts: 42,650
Downloads: 10
Uploads: 0



Quote:
Originally Posted by GlobalExplorer
@Skybird: I think you got it all wrong. It's just a pity that we are both not going to see it. Artificial intelligence is still out several centuries, but it is inevitable. I expect it will be of the "island" type and never model a complete human personality, because that would create insuperable problems (what if the computer personality is more advanced than any human?) and be very expensive.

First of all, your arguments are basically a conglomerate of the technophobe backlash that came about after the science-fiction utopia of the 50s had failed. And evidently, around the 80s and 90s we were not making any significant progress towards artificial intelligence, which seems to take the burden of proof from your argument.

But neither is true. We cannot simulate such complex processes easily, but neither is there any reason why we could not do it in the future. It's going to take much more time than people expected, because the human brain is so powerful. Don't forget that evolution needed millions of years to create human intelligence, so we cannot do it in 50 years.

Just take today's computer technology, multiply it by several thousand (or possibly millions, who knows), and there will be a certain threshold at which the computer reaches and finally overtakes human decision-making capability. This is still far out in the future, I would say at least a hundred years, probably more.

Whether you want to call that intelligence is another question, but for me it is; I can also accept that for you it is not. Maybe it has to do with you being a (latently) religious person and not accepting that the human brain is a biochemical computer (an extremely powerful one). I am a software engineer, and I don't see any difference, except that today's computers cannot even achieve the intelligence of an insect, though they are already getting damn close.

But as I said, neither of us is going to see it. And after seeing the disturbing images of a (completely harmless) robot like Big Dog, I guess that this could actually be a blessing. We could be creating our own doom, just as science-fiction writers have predicted.
I get it all wrong, you say. But I find that a strange statement, since I cannot see your reply touching on the details and perspectives I outlined.
You seem to imply that from a software engineer's perspective, intelligence is nothing more than decision-making capability. That is probably the most minimal conception of intelligence I have ever heard of. A random generator would already be intelligent then. Or an AMRAAM.

It all comes down to how you define intelligence. If you minimize it enough, even an automatic emergency braking system can be called intelligent. But whether that understanding of intelligence has any real validity and meaning beyond the marketing interests of software companies can be doubted. I am an ex-psychologist, and I can tell you that the behaviorists had no interest in cognitive processing at all, and understood "learning" to be nothing more than the forming of reflex patterns. Needless to say, beyond treating psychological problems that stem from certain established stimulus-response links, and designing penalty-reward schemes to make these "learning" processes (the shaping of reflexes, that is) as economical as possible, not much useful came from behaviorism, and not much that did the complexity of human nature any justice. Instead, in the 50s they had pages-long mathematical formulas that were supposed to describe why somebody raised a cup of coffee and drank it. That did not reveal much about man's cognition, and it shed no light on his intelligence - but it said something about the tunnel vision and stupidity of the researchers.

You can treat the symptoms of phobias very well with behavioristic concepts, but you cannot explain them. While some say it is not important to explain the why, statistics on therapy evaluations tell us something different, and very loudly: one thing that behavioristic concepts constantly have to fight with, more than any other form of therapy, is "Symptomverschiebung" (symptom substitution). That means the patient is freed from the original symptom but develops another symptom, produced by the same underlying cause that behaviorism is not interested in seeing, and has neither the tools nor the terminology to find and describe. Effectively fighting symptoms is all nice and well - but you need to know the initial "Why?" as well, you know. The first is pragmatic. The second is essential.



I am no technophobe, and if you had read what I said on evolution and the importance of technology, you would have seen that. I am just against a totally unleashed, totally uncontrolled, totally uncritical abuse of technology. Neuroscience is not technophobic either. Its research simply led to insights and evidence showing that the old comparison between computer hardware and the human mind is indefensible, and is nonsense. It was hype to see more in current technology than it currently can be. I do not know whether we will ever have a machine showing something like self-awareness, self-reflection, curiosity, the ability to learn things that are totally beyond the limits of what it initially started with, or whether a machine will ever have emotions and social behavior that is not just a blind copying of human behavioral routines but emerges from a felt desire to be social. But I am convinced that without these factors you cannot talk of cognitive intelligence, and I am in good company with that assessment.

And in case you do not know, many astronomers think that most civilisations that might exist out there (mind you, 90% of the suns in our galaxy are older than our solar system) will probably not be organic, life-dependent forms of life, but machinery civilisations ("machinery" in a wide sense, of course), because they argue that a civilisation living long enough will most likely transfer mind and cognition to machines, which are much easier to maintain than a vulnerable organic body, and also far less vulnerable and more enduring. If you travel between the stars, or want to survive collapsing biospheres or extreme environments, organic hulls like the ones life on Earth uses are the weakest, most dangerous option.

Do me some more justice, please - I am not as ignorant of technology as you think. I am just no blind believer, uncritically hailing it no matter what. Technology can be a benefit, a hazard, or useless. If it is a benefit, it is not my problem.
__________________
If you feel nuts, consult an expert.