Quote:
They also have a centuries-old debate about the consequences if there is no free will. All systems of ethics would break down then. Our laws that impose penalties on people who violate them would no longer make any sense. We would have no basis on which to form social communities. But I do not wish to say by this that we "must" have a free will or else our societies would crumble. It is simply what has been debated in jurisprudence, and for centuries. Quote:
Man, don't declare yourself lower than you are! You are a man, but you want to see yourself as nothing better than a complicated piece of Fischer-Technik? That might have been fine under the paradigm of Descartes, and in past times when it was thought that all of nature and the universe is just a machine, often with a predefined fate and future. But we have moved beyond that, for good reasons. Quote:
At school we learned Pascal (first half of the 80s), and later I used GFA Basic 3 on the Amiga, btw. :D Quote:
You think in binary here :) and argue like it is "all or nothing at all". I did not say to abandon all technology. I said to be more responsible in our decisions about what kinds of technology we want to drive, and what better not. Not all the technology there is must be realised. Not everything that can be done must be done for no other reason than that it can be done. Sometimes it is a decision about a certain project, like this huge dam in China (environmental and social costs putting it into question), a TV phone (apparently becoming a flop, since nobody wants to watch long movies on a 4x3 cm display), or the Transrapid. Sometimes it can be a decision about general trends and new classes of technology. And we need not always accept all of these just because they popped up from some line of research. But since these researchers are driven by business interests and investments by companies, we may need to tackle those economic interests in order to defend our freedom to decide for or against such things. Quote:
I think we had better stop here. |
Quote:
But you did get the general gist? Oh.. and "foreigner" is such a harsh word.. I prefer "heathen" :know: |
Yeah, I understood what you meant. ;)
|
Quote:
To say that if we had no free will we would choose to abandon ethics, the punishment of crime, and that our societies would crumble is a contradiction in terms. To say that if we had no free will we would automatically, and without choice or will, abandon ethics just shows that you overestimate the complexity of the human brain. Quote:
What we cannot do is identify it outside of ourselves, when we cannot directly experience it. If you hold that there is no possible way to identify it outside of ourselves, on what basis do you assume an emergency braking system does not have it? On what basis do you assume that an inert rock does not have it? If it is physical, it is possible to detect. We have no reason to believe it is not physical, and so there is no reason to believe we will never be able to detect it, however complex it may be. Quote:
"truly knows what consciousness even is"? The only reason humans are creative, emotional, social, questioning of their own existence, and have a will to survive is because they are programmed to do so. Quote:
There is no reason to believe in a mind that is in any way separate or different from brain activity. |
Slowly but surely I'm getting tired of having words I never said put in my mouth. Please note, everybody, that even small differences are important and can change the complete meaning.
Quote:
The implication you fail to see is totally different. If there is no free will, then you must necessarily conclude that in every situation where man makes a decision to do or not to do something, he had no other choice than to act the way he did. I assume that is a bit what GE meant when saying that if one went back in time to the same situation, one necessarily would do the same again, though he mixed it up. But if man has no other choice than to commit that crime, to act in that way that carries a penalty, to make that decision, it makes no sense anymore to put a penalty on him, to hold him responsible for it. Because he could not have done anything different than he did. Now consider the consequences in education. Theology. Every aspect of accepting a post where you have to accept responsibility. Legal procedures. It all would break down, because all of it, and our legal culture as well, is based on the idea that man has a free will that enables him to choose. Without that freedom, there is no responsibility. Without responsibility, nobody can be held responsible. With nobody being responsible for himself, penalty loses its justification. So do ethics differentiating between good and evil; they become pointless. So it is not about choosing to skip ethics. It is about ethics simply not existing. Quote:
Always arguing with extremes, eh? I think it is getting absurd now. If some of you want to see yourselves as nothing more than a highly developed type of automobile with a nice electronics suite, okay, fine; I honestly hope you will never reach a rank where you can do real damage, but beyond that it does not really bother me. What I say is this: 1. A hardware kit with motors, sensors and software code does not equal "intelligence" as the term is used in cognitive science, in the meaning of a set of characteristics that most scientists agree to be part of "intelligence". 2. A critical mind on whether or not to embrace this technology or that trend does not equal completely denying technology. It simply means choosing wisely. 3. Isaac Asimov, certainly not suspect of being a mysticist, said intelligence is what an IQ test measures. I would point out that paper-and-pencil tests do not measure anything: a test form has no sensor. Many psychologists would agree on certain key features of what intelligence is, but there is no single super-definition of it that has general consensus; the definitions depend on the perspectives of the various scientific fields. I also say that intelligence per se does not exist, but that the term is a meta-label for a collection of qualities that all interact and come together to create something that is greater than the sum of the single components/qualities. Whether these qualities can ever be created by software as we define "software" today is not a certain thing, and not even a theory; it is speculation. We do not know if it can be done, or not. We need to wait and see. But until then we can deal with the question of whether it is even desirable to achieve that. The understanding of "artificial intelligence" as it is used in software engineering I see as missing key features and as qualitatively inferior to what is addressed as "intelligence" in the cognitive sciences. It is the same term used for two very different things. 4.
Trying to make the human brain be seen as a kind of computer kit is absurd, and is contradicted both by the very different way a brain functions and by what comes as a result of this functioning. It was an analogy that became popular during the hype around spreading PC technologies in the late 70s and throughout the 80s, but today most brain researchers reject this analogy, saying it may help the reputation of PC technology, but it cripples the understanding of what the brain is. When I finished studying in the late 90s, it was not very popular anymore, except with technicians. But they claim more than what is theirs. 5. Finally, neurons in the brain are not the fastest neurons we know, yet this slow processor, the brain, "computes" incoming signals in a way whose result not only outmatches that of computer calculations in qualitative terms, and often also in quantitative terms, but also creates qualities that are not even present in the signals themselves. We see sharp images in our brains, for example, but the lenses in our eyes do not produce these; they form images that are distorted by at least 3.5 dioptres, often even more, up to 6 or 7. Why and how are the signals manipulated to form "sharp images" if our brain never experienced through signal input what sharp images are? The brain then creates additional information whose link to the data may or may not be known to the mind. You may smell a certain scent without being aware of it, but suddenly have a long-lost memory on your mind, or a certain mood you once experienced. You taste something and experience an emotion. You listen to something and feel a mood. The type of signal input does not equal the type of data result that is produced. Your muscles may remember reaction patterns that feed back into the brain and make you do things you cannot recall in your intellect's memory. Your brain outsources data into books and the internet, and takes advantage of knowledge it never formed itself.
You can literally think beyond your limits. And in the context of meditation, deep meditation and states of changed consciousness that may result in a changed neurochemical status but are not caused by it, you eventually can even have the experience of leaving all this behind and seeing beyond. That is why I tend to say that time is an invention of the intellect only; in fact it is merely a function deriving from brain activities, not a really existing quality of the "outside" world. That is also why I say that mind comes first, and then comes the brain. I think I was imprecise somewhere above, mixing up consciousness and mind. Awareness is a third, different quality, but I wonder if that maybe gets lost in translation; I sometimes think English and German use these terms somewhat differently. So in the end, use that robodog if you think you have a use for it, but don't call it artificial intelligence; and you can be interested, fascinated, whatever, but don't make more of it than what it is: a machine. I love science fiction, too, but I can see the difference between what is and what is imagined to be, and I know that some things from past science fiction turned into reality, others did not. Oh, and I use a computer and many modern gadgets of present-day households. So much for me being a technophobe. |
Quote:
We are generally programmed to punish people because of the way our social minds work. There is no meta-reason beyond that, free will or not. Quote:
the way our social brains work. So is the concept of responsibility. That would be no different whether we had free will or not. Quote:
not? I put it to you that your only basis for such decisions is whether or not your mind feels compelled to act socially towards the entity, and that it is this, rather than any rational reason, that leads you to believe that a machine cannot be conscious. Quote:
to make from it might be false. Quote:
"A hardware kit with motors, sensors and a software code" Quote:
time. Equally, thinking of a computer as a series of logic gates is a waste of time: it will not help you play Silent Hunter. Thinking of a computer that is as complex as, or more complex than, the human brain in the same way that you might think of a computer available these days is most likely a waste of time in the same way. Quote:
|
Hey, no need to get angry, Skybird. I admit I was making some exaggerations about psychology - I always like to exaggerate (other people have complained about it before) - and it was just revenge for you brushing aside computer science - which psychology is clearly competing with in the field of artificial intelligence - and for declaring someone of the capacity of Asimov as "moved beyond by time".
And it was not very nice to call me "fanatical" - I am also a critic of slavish worship of technology, because that is a fatal disregard of nature. But I think I have already understood why you feel so compelled to play down the significance of the next-gen robots: in your discipline everyone is jealous that computer science has results to show - which must be a shock after those long years when everyone could ridicule CS because it could not even create a machine that walks, let alone possesses the intelligence of the lowest insect. Now we see we are getting close to that point. Let me make it clear that I never said Big Dog is "very" intelligent. Yet I see it possesses that little bit of intelligence which is required to move. But it could be combined with a purpose - like walking around and surviving or, seeing who funds the development, "killing" - as well as a way of feeding itself (it could burn dry wood with a wood gasifier), and we would have created a primitive lifeform. I think that deserves to be hailed as a major first step towards artificial life, not more. And I agree with you insofar as it has not brought us any closer to an artificial consciousness. However, sadly, you made two fatal errors in your reply to my last post, when you were admitting what I had claimed in an earlier post: that you are latently religious. Quote:
But I do not hate you for that - in fact I like you because you have the guts to develop and defend your own ideas against anyone. That makes you a difficult person, but keep it up! Quote:
Sorry I must keep busting your balls, really sorry. I am acting on the assumption that someone with your capacity will be able to handle it ;) |
Quote:
Nevertheless there are a lot of books on ethics, and traditions of ethical codes. And this may be good so. If you do not see why, live in an anarchistic society under the law of the strongest for a while, and if you are a tough and strong guy and eventually survive, report back and tell us of your experiences of living without ethics that differentiate between good and evil, right and wrong. In our cultural tradition people are assumed to have a free will that gives them the freedom to choose bad or good, and it is like this in most cultures there are or were. If they choose the bad, they deserve penalty. If there is no free will, people cannot decide; if they cannot decide and are preprogrammed regarding their decision, they are not responsible for their action, because they had no choice anyway and the matter was already decided. So no ethics are needed anymore to separate bad from good. How much energy does it take you not to see that - or at least that is what you say? Sometimes you give me the feeling that you do not necessarily mean what you say, but raise abstruse resistance only for the sake of resisting something, no matter what. Maybe because you so often go into extreme thought experiments and hairsplitting that cannot be generalised. However, I find it interest-killing. Quote:
-learning, ever-adapting continuum. You do not seem to understand that I do not have such mechanistic, linear understandings of the kind you demand from me. You could just as well ask me for the one shape of a cloud, and when I tell you that there is no such thing as the one shape of clouds, accuse me of not answering your question. You could get an impression of an answer from what I already said, if you wanted that instead of wanting to disagree. I have pointed at it at least three times in this thread now; I just counted. Quote:
|
Quote:
I do not minimise Asimov; hell, I do not even know much about his scientific work, I only knew some of his fiction. But the laws of robotics he formulated have clearly been ignored for years already, if not decades. I again refer to the many robots the military uses - even a missile that steers itself is a robot, so is a cruise missile, and you know what kind of autonomously operating drones are next. None of them gets the three laws of robotics programmed into their digital heads, and the laws do not manifest all by themselves. Also, they would conflict with a weapon's intention. Quote:
And finally, psychology is not "my" discipline. But I can tell you as an insider that, while a lot of things have gone wrong in its history, mainly regarding its self-perceived competition with classical physics' working methods, which it uncritically copied without ever asking if that made sense, I can assure you that I never heard of any psychologist who is driven by the demand to compete with software engineers regarding intelligence. The engineers started the idea of a competition, not the psychologists. Cognitive science and psychology simply deal more or less competently with human intelligence, and have little interest, if any, in software development of that kind. You twist things very much. Quote:
Do you know www.dream-aquarium.com ? It is the most realistic aquarium screensaver I ever saw; it is worth a look, really. Tell me: these realistically moving and realistic-looking fishes - are they also "primitive lifeforms", and is the screensaver a primitive early stage of an aquarium? Hardly. So my compliments for what has been achieved with robodog, and best wishes for future development. But don't expect me to see it as either intelligent or a lifeform. Whether a machine can be that remains to be seen in a far-away future, and since we have no experience with living, intelligent machines, we do not know if that is possible; thus all expectations remain speculative. You hope for it, and that is okay; I liked science fiction too, and have sympathy for it. But I don't take it for granted that it will be like that someday. And I also do not take it as granted that in that future world it will even be a desirable option, because we do not know how the far-away future will be, and what man will be then. Quote:
But they probably are not religious, since religion is an invention of man. To me, spirituality and religion are mutually exclusive. I fight against religion. I cannot escape being necessarily spiritual. And I don't know if you really understand what I mean by "spiritual". So what is it you think I latently am? ;) Keep it simple: I am mortal. That is as precise as it gets. Quote:
I do not, and must tell you that this is a widespread standard of research. While memories are linked by some to chemical molecules storing them, there is not much objection in brain research to the idea that the brain nevertheless has a system, a model, a call-it-what-you-want, for storing memories in such a way that storage capacity is never-ending. That probably is why the brain's memory indeed operates in a kind of holographic mode. Every single memory quality does not just claim some space, but changes the structure of the complete memory system, and in every smallest quantity of this memory all memories can be found, just as you have all the visual information of the complete hologram in every single piece of it if you break it. We have even learned in experiments that the brain has this capability of storing everything in all places in total completeness. That surpasses chemical, local storing of data. It also surpasses what a hard drive does. As I said, the past 20 years in brain research have been revolutionary. Quote:
But I could have a new debate with you on the fact that atoms are 99.99999...% nothingness, and particles are just random clouds and tendencies to exist. Or, in short, we now enter the world of incredibly far-leading abstraction. Or that the concept of time we have is probably an illusion. Some physicist whose name I forgot was quoted with this: in the end it all is mind that dances with itself. A famous book by Gary Zukav tells the reader that the Chinese word for physics translates into "Wu Li - structures of organic energy". A Buddhist might tell you that it all is just images, which are empty; there is only the mind. One mind. - If I were to have a new debate, that is. Quote:
|
I am tired of it. But I can't let go as long as you - in between your very interesting passages - keep falling back into your "spiritual" phase, when it all gets ludicrous again.
Quote:
But in the end you have agreed that Big Dog is an achievement, and we both agree that technology must be critically monitored and sometimes limited. Apart from that we will never agree - but it is not really important whether a human is spiritual or a supercomplex information system - in the end our existence stays the same. As for AI, the main work is still to be done, so we can only let it begin and limit its dangers. Let's not forget that this thread was actually about the dangers, and the primordial shock of artificial life forms, not about spiritualist world models. |
Quote:
I would also say that everything that believes itself to have consciousness is conscious. Let's say I could create a program - with about the same deductive capabilities as the human brain - where all low-level activity is controlled by one high-level structure, "Self". I could then also hard-wire a mechanism, "Consciousness", to constantly feed in the realization of being conscious, very much like the human illusion of having consciousness and free will. It would also require a mechanism to reflect on thoughts, i.e. analyze its own processes and create new processes from that. When asked, this computer system would say that it has consciousness, and it would also raise questions about the meaning of consciousness, because it will not find a logical explanation for this thing, yet it must believe in its existence because it can experience it. I am not sure if that would be enough to solve the problem of artificial consciousness, but the end result would be a machine that believes itself to be a conscious self exactly like a human. Where this consciousness takes place is a good question - it is out of this world - very much like Skybird says - like every self is out of this world. I can see where the spiritualism comes from, yet for me the spirit is also physical; it only comes about through complexity. |
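For what it's worth, the architecture sketched in that post can be written down as toy code. This is purely an illustration of the thought experiment, not a claim about real AI; every name here (the `SelfModel` class, the hard-wired `conscious()` report, the `reflect()` step) is a hypothetical invention for the sketch:

```python
class SelfModel:
    """Toy model: a high-level "Self" supervising low-level processes."""

    def __init__(self):
        # Low-level activities controlled by the high-level structure.
        self.processes = {"perceive": lambda x: x.lower(),
                          "respond": lambda x: "echo: " + x}
        self.log = []  # record of its own activity, used by reflect()

    def conscious(self):
        # The hard-wired "Consciousness" mechanism: it simply always
        # feeds in the conviction of being conscious.
        return True

    def run(self, stimulus):
        # Pass the stimulus through each low-level process in turn,
        # logging the activity so it can later be reflected upon.
        for name, proc in self.processes.items():
            stimulus = proc(stimulus)
            self.log.append(name)
        return stimulus

    def reflect(self):
        # Reflection: analyze its own activity log and create a new
        # process out of that analysis.
        summary = ("I ran %d steps and I believe I am conscious: %s"
                   % (len(self.log), self.conscious()))
        self.processes["summarize"] = lambda _: summary
        return summary

m = SelfModel()
out = m.run("Hello")    # "echo: hello"
report = m.reflect()    # the machine "asserts" its own consciousness
```

Of course, this trivially echoes a string and then asserts a flag; the whole debate above is precisely about whether scaling such a loop up to brain-like complexity would ever amount to more than that.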
GE,
I typed a reply, but then found myself so annoyed by your arrogance, and by your accusing me of what you extensively did - and do - yourself, that I found it no longer worth it for me to reply. As I see it, you do not adapt to the standards set by reality and the world, but try to make both adapt to your toolbox. You mix up the process of simulation with the object of simulation. But that simply is not my problem; it is yours, and yours alone. |
Such accusations are empty without even the most basic reasoning behind them, SB.
|
You're welcome, Skybird ;)
|
Just to make clear what we have been discussing here.
Among other things there was dissent about the extent to which future developments of this robot - once it is provided with a purpose, a way to feed, the ability to survive several years in a (harsh) environment, and other functions - could be seen as a (primitive) artificial lifeform, like an insect that cannot reproduce, or not. http://www.christian-wendt.org/SUBSIM/robot1.jpg Later I defended the opinion that it seems possible to construct a machine that calculates all thought processes characteristic of the human brain, including a sense of consciousness. I basically took the position that if the thought "I feel I am alive" can take place inside the human brain, it might as well take place inside a computer. The implications are the same, i.e. the machine will believe itself to be conscious, very much like we do. Of course I have no idea if technology will get that far, or if we hit a brick wall somewhere, or if we even want to build such a machine. Unfortunately it got a bit heated - and I was declared more or less insane - but I don't take it to heart. http://www.christian-wendt.org/SUBSIM/robot3.jpg |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
Copyright © 1995- 2025 Subsim®
"Subsim" is a registered trademark, all rights reserved.