View Full Version : Big dog - science fiction becomes reality
GlobalExplorer
03-20-08, 06:14 PM
I think everyone's gotta see this:
http://www.youtube.com/watch?v=W1czBcnX1Ww
http://www.youtube.com/watch?v=mpBG-nSRcrQ&feature=related
I think everyone who has seen the Big Dog videos will agree that they have more or less done it. This machine moves like a real animal. It might actually have better balance than most of them.
At the end of the last century there was general agreement that artificial intelligence was going nowhere. Now, just a couple of years later, it looks like we are going to see robots with roughly the intelligence and motor skills of animals in our lifetime. I am amazed.
What surprises me is how frightening this is. It's creepy. I can only imagine what it will be like to one day see a computer talk and behave like a human.
And let's not forget this comes with a lot of real danger for humanity. I guess it's time to read Asimov again - what was science fiction yesterday might become important very soon.
Here is another video of similar robots:
http://www.youtube.com/watch?v=JYptK21vAgQ
http://www.youtube.com/watch?v=qbTOpHAinhA&feature=related
Different, but still strange:
http://www.youtube.com/watch?v=jkft2qaKv_o
http://www.youtube.com/watch?v=i25cfdcum7U&NR=1
mrbeast
03-20-08, 06:29 PM
Yep, freaks me out too! :eek:
Blacklight
03-20-08, 06:51 PM
Imagine weapons mounted on that thing. You're in your little trench when suddenly you hear that familiar "BuzzzZZZZzzzzZZZZZ"... LOTS of buzzes coming from different directions....:eek:
I'd pretty much soil myself.
I wonder if those people who work with the machine hear that buzzing noise in their sleep at night. :doh:
AVGWarhawk
03-20-08, 06:57 PM
Very strange contraption indeed. Interesting how it handles the balance to stay upright.
Blacklight
03-20-08, 07:04 PM
Very strange contraption indeed. Interesting how it handles the balance to stay upright.
Ahh.. the wonder of gyroscopes and processing power. I'm just imagining improved models running rampant, armored, and armed on the battlefield.
That's amazing! Moves VERY realistically in situations where it is about to lose balance. Now, I can finally get a robot to bring me my beer instead of having to walk to the fridge. :up:
Way, way, WAY in the Uncanny Valley, this thing :doh:
I don't want to look at it EVER again :dead:
Wolfehunter
03-20-08, 11:56 PM
So are they going to call this T-1? First Generation Terminator?:hmm:
kiwi_2005
03-21-08, 12:16 AM
I Robot.
Sailor Steve
03-21-08, 12:32 AM
I think it's pretty cool. They said it has a 300-pound payload--I'll bring a saddle.:rock:
Waaay too close to a blind date I had one time.
Foxtrot
03-21-08, 03:59 AM
I Robot.
Yo! Robot
Platapus
03-21-08, 05:19 AM
It's creepy. I can only imagine what it will be like to one day see a computer talk and behave like a human.
Never gonna happen. Computers are logical, methodical, dependable, consistent, non-belligerent, unemotional.
:up:
FIREWALL
03-21-08, 05:25 AM
What a bunch of sissies.
It's just a feck'in dog for christ's sake.
Btw nice find G. E. :up:
HunterICX
03-21-08, 06:06 AM
Seen it a while ago, still an amazing machine.:yep:
HunterICX
Sorry for the necro-bump.....
New impressive update!
Big Dog Beta.
http://www.youtube.com/watch?v=VXJZVZFRFJc
Sorry for the necro-bump.....
New impressive update!
Big Dog Beta.
http://www.youtube.com/watch?v=VXJZVZFRFJc
:p :rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl:
Skybird
03-26-08, 06:29 PM
Okay, imperial AT-AT walkers are next. :D
Seriously, it still is a machine without any intelligence at all. One component of intelligence is self-awareness, another is emotions, and maybe one should even add playful behavior and curiosity. All of this is lacking in this robo-thing. What it offers are technical surrogate mechanisms and sensors that replace neurological reflexes and signal feedback, controlled by preprogrammed reaction schemes that include what man has put into them, and that's it. I do not talk down this machine's technical achievement - but it is nothing more than a machine: no intelligence at all.
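To make plain what I mean by a preprogrammed reaction scheme: stripped of the engineering, such a balance "reflex" is at its core nothing more than a feedback loop of roughly the following shape. This is a purely illustrative sketch - the function names, gains and sensor readings are invented, not BigDog's actual software.

# Purely illustrative sketch of a sensor-feedback "reflex": read the body's
# tilt, push back against it. No awareness, no goals - just a fixed rule.
def balance_correction(tilt_deg, tilt_rate_deg_s, kp=8.0, kd=1.5):
    # Proportional-derivative correction: the lean error and how fast it is
    # growing are turned into a torque command that opposes the fall.
    return -(kp * tilt_deg + kd * tilt_rate_deg_s)

def reflex_loop(read_tilt, read_tilt_rate, apply_torque, steps=1000):
    # The entire "behavior" is this loop, repeated many times per second.
    for _ in range(steps):
        apply_torque(balance_correction(read_tilt(), read_tilt_rate()))

Everything the machine "knows" about standing up sits in those two gain numbers and the sign of the correction; change them and the same legs fall over.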
One could even argue about whether we have made even a first step towards something like artificial intelligence. I'd say: no, not at all. Of course it stands or falls with your definition of intelligence, and if you ask 100 psychologists you will get 30-40 different answers, depending on the view and school the person in question represents. Well, I pointed out some key components above, regarding what I think intelligence is made of. It is not a quality in itself; the term for me is more a meta-label for a set of features and characteristics, like the category "card games" describes things like Poker, Bridge, Skat, and whatever else there is.
Do not make the mistake of reducing human intelligence, or even that of higher animals, to the level of mechanical automatism with control software installed in machines. 15, 20 years ago, the fascination with the digital revolution led the sciences to compare the brain and the mind to the components a personal computer is constructed of. By the time I finished studying, they had moved on, and neurologists and brain researchers had understood that this comparison holds no truth at all, and only limits neuro- and psychophysiological research by limiting the possible understanding of how mind and brain function. Today's understanding of the brain's way of working does not compare to a computer at all, in no way. All scientific research, as well as the knowledge constructed from its findings, is based on paradigms. And if these paradigms are too tight, too small, too minimal, they hinder the understanding of any knowledge that lies beyond that set of possible perceptions, answers, and further development.
In other words, if you think of yourself as just a machine, sooner or later you will start to act like one and become just as limited in your behavior and social intelligence. Oops - I just introduced another concept of intelligence this robodog is not showing. ;)
Concerning Asimov and his robot rules, they are fiction only. We are close to fielding automated defense cannons, and we launch remote-controlled military drones which will soon operate fully autonomously. Both kill people, directly and indirectly. Or take military missile technology: preprogrammed cruise missiles, or air combat missiles that steer themselves once locked onto a target. So "a robot shall never kill a human" is a fading memory of past times and the golden age of science fiction. Asimov's laws have no scientific, technological or realistic relevance at all - time has already moved beyond them.
Intelligence?
Mount multiple 360° cameras, a speaker system, machine guns, ammo, grenade launchers, flame throwers, etc., and let the troops stay home here in the States and send entire battalions of these into Iraq... while we control them from home... can you imagine a squad of these walking down a street in Iraq or any country... I don't care what country, religion or whatever you are... you are going to crap your pants.
That is insane mobility/agility....they need not even worry about the sound of them...that would become a feared sound.
That is crazy.
That's amazing! Moves VERY realistically in situations where it is about to lose balance. Now, I can finally get a robot to bring me my beer instead of having to walk to the fridge. :up:
Johon on sinun aviovaimo? :p
Huge LOL at the Beta :rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl::rotfl:
It's our destiny, but as long as it is not asking me if I am John Connor, I'm not too worried ;)
Skybird
03-27-08, 10:48 AM
It's our destiny, but as long as it is not asking me if I am John Connor, I'm not too worried ;)
Maybe not destiny, but probably our way of managing the challenges of evolution. Other species adapt biologically to changes in their environment, or when colonizing new living environments. Man cannot do that at the needed speed, or he is trying to live in surroundings where he cannot survive: the deep sea, space, etc. We use technology to adapt to these environments. Seen that way, technology should maybe be considered a form of evolution. That's why it is totally idiotic these days when romantic minds call for returning to the "good ol' days" and living in harmony with nature again. We have broken so many things in the biosphere of our planet that we will hardly survive the consequences without technology and the sciences, while other species try to adapt as best as they can, many of them failing and dying for not being flexible and fast enough.
GlobalExplorer
03-27-08, 01:33 PM
@Skybird: I think you got it all wrong. It's just a pity that we are both not going to see it ;) Artificial intelligence is still several centuries away, but it is inevitable. I expect it will be of the "island" type and never model a complete human personality, because that would create insuperable problems (what if the computer personality is more advanced than any human?) and be very expensive.
First of all, your arguments are basically a conglomerate of the technophobe backlash that came about after the science fiction utopia of the 50s had failed. And evidently around the 80s and 90s we were not making any significant progress towards artificial intelligence, which seems to lift the burden of proof from your argument.
But neither is true. We cannot simulate such complex processes easily, but neither is there any reason why we could not do it - in the future. It's going to take much more time than people expected, because the human brain is so powerful. Don't forget that evolution needed millions of years to create human intelligence, so we cannot do it in 50 years.
Just take today's computer technology, multiply it by several thousands (or possibly millions, who knows) and there is going to be a certain threshold where the computer reaches and finally overtakes human decision capability. This is still far out in the future, I would say at least a hundred years, probably more.
Whether you want to call that intelligence is another question, but for me it is, and I can accept that for you it is not. Maybe it has to do with you being a (latently) religious person and not accepting that the human brain is a biochemical computer (an extremely powerful one). I am a software engineer, and I don't see any difference, just that today's computers cannot even achieve the intelligence of an insect, but they are already getting damn close.
But as I said, neither of us is going to see it. And after seeing the disturbing images of a (completely harmless) robot like Big Dog, I guess that this could actually be a blessing. We could be creating our own doom, just as science fiction writers have predicted.
CaptHawkeye
03-27-08, 01:53 PM
"Metal....Gear?"
Skybird
03-27-08, 03:39 PM
@Skybird: I think you got it all wrong. It's just a pity that we are both not going to see it ;) Artificial intelligence is still several centuries away, but it is inevitable. I expect it will be of the "island" type and never model a complete human personality, because that would create insuperable problems (what if the computer personality is more advanced than any human?) and be very expensive.
First of all, your arguments are basically a conglomerate of the technophobe backlash that came about after the science fiction utopia of the 50s had failed. And evidently around the 80s and 90s we were not making any significant progress towards artificial intelligence, which seems to lift the burden of proof from your argument.
But neither is true. We cannot simulate such complex processes easily, but neither is there any reason why we could not do it - in the future. It's going to take much more time than people expected, because the human brain is so powerful. Don't forget that evolution needed millions of years to create human intelligence, so we cannot do it in 50 years.
Just take today's computer technology, multiply it by several thousands (or possibly millions, who knows) and there is going to be a certain threshold where the computer reaches and finally overtakes human decision capability. This is still far out in the future, I would say at least a hundred years, probably more.
Whether you want to call that intelligence is another question, but for me it is, and I can accept that for you it is not. Maybe it has to do with you being a (latently) religious person and not accepting that the human brain is a biochemical computer (an extremely powerful one). I am a software engineer, and I don't see any difference, just that today's computers cannot even achieve the intelligence of an insect, but they are already getting damn close.
But as I said, neither of us is going to see it. And after seeing the disturbing images of a (completely harmless) robot like Big Dog, I guess that this could actually be a blessing. We could be creating our own doom, just as science fiction writers have predicted.
I get it all wrong, you say. But I think that is a strange statement, since I cannot see your reply touching the details and perspectives I laid out.
You seem to imply that from a software engineer's perspective, intelligence is no more than decision-making capability. That is probably the most minimal conception of intelligence I have ever heard of. A random generator would already be intelligent then. Or an AMRAAM.
It is decided by how you define intelligence. If you minimize it enough, even an automatic emergency braking system can be called intelligent. But whether that understanding of intelligence has any real validity and meaning beyond the marketing interests of software companies can be doubted. I am an ex-psychologist, and can tell you that in behaviorism they had no interest in cognitive processing at all, and understood "learning" to be nothing else than the forming of reflex patterns. Needless to say, beyond treating psychological problems coming from certain established stimulus-response links, and designing penalty-reward schemes to make these "learning" processes as economical as possible (shaping reflexes, that is), not much useful came from behaviorism, and not much that does the complexity of human nature much justice. Instead, in the 50s they had pages-long mathematical formulas that were supposed to describe why somebody raised a cup of coffee and drank it. That did not reveal much about man's cognition, and did not shed light on his intelligence - but it said something about the tunnel vision and stupidity of the researchers. You can treat the symptoms of phobias very well with behavioristic concepts, but you cannot explain them. While some say it is not important to explain the why, statistics on therapy evaluations tell us something different, and very loudly: one thing that behavioristic concepts constantly have to fight with (more than any other therapy form) is "Symptomverschiebung" (symptom substitution). That means the patient is free of the original symptom, but develops another symptom, caused by the same underlying cause that behaviorism is not interested in seeing, and has neither the tools nor the terminology to find and describe. Effectively fighting symptoms is all nice and well - but you need to know the initial "Why?" as well. The first is pragmatic. The second is essential.
I am no technophobe, and if you had read what I said on evolution and the importance of technology, you would have seen that. I am just against a totally unleashed, totally uncontrolled, totally uncritical abuse of technology. Neuroscience is not technophobic either. Its research just led to insights and evidence showing that the old comparison of computer hardware and the human mind is simply indefensible, and is nonsense. It was hype to see more in current technology than it currently can be. I do not know whether we will ever have a machine showing something like self-awareness, self-reflection, curiosity, the ability to learn things that are totally beyond the limits of what it initially started with, or whether a machine will ever have emotions, and social behavior that is not just a blind copying of human behavior routines but emerges from a felt desire to be social. But I am convinced that without these factors you cannot talk of cognitive intelligence, and I am in good company with that assessment.
And in case you do not know, many astronomers think that most civilisations that might exist out there (mind you, 90% of the suns in our galaxy are older than our solar system) probably will not be organic, life-dependent forms of life, but machine civilisations - "machine" in a wide meaning, of course - because they argue that a civilisation living long enough will most likely transfer mind and cognition to machines that are much easier to maintain than a vulnerable organic body, which is also far less enduring. If you travel between the stars, or want to survive collapsing biospheres or extreme environments, organic hulls like the ones life on Earth uses are the weakest, most dangerous option.
Do me some more justice, please ;) - I am not as ignorant of technology as you think. I am just no blind believer, uncritically hailing it no matter what. ;) Technology can be a benefit, a hazard, or useless. If it is a benefit, it is not my problem.
GlobalExplorer
03-27-08, 05:29 PM
I think we are not so far apart as it might seem, in that we are both aware of the consequences of this.
When I glimpse this motherless creature, I understand that reality feels much different from science fiction. Still, science fiction is being proven correct.
Everyone who watches these videos takes part in an important moment in human history, because it is their first contact with an alien creature. One that we have created ourselves, and that does not even have the capabilities of an insect, but will have in our lifetime. And beyond that, it will make us obsolete.
It scares the **** out of me to see it move, even if I switch off the disturbing sound of the engine.
Our disagreement is only where these developments will lead.
I am no technophobe, and if you had read what I said on evolution and the importance of technology, you would have seen that.
Ok, you are not ;). I only contest the idea that human intelligence is unique. And I do so on the grounds that it is a technophobe's refuge to capitulate before a problem of such proportions as artificial intelligence. And a lack of imagination, or maybe fear.
I get it all wrong, you say. But I think that is a strange statement, since I cannot see your reply touching the details and perspectives I laid out.
Your main point is that the analogy between computer (hardware) and human intelligence is untenable. You then go about proving your statements through purely psychological arguments. But all you prove to me is that the brain is much more complicated than we can imagine. Of course it is.
I have yet to see a convincing argument that the human brain is not a biochemical computer. From what is known today, it is based on network organization and signal exchange. The neurons and chemical agents of that process represent the hardware. The memes of humanity are the software, and it differs from individual to individual.
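To put the analogy in a software engineer's terms - a toy sketch only, with invented weights and names, and with no claim that a real neuron is remotely this simple - the basic unit of such a network is just a weighted sum of incoming signals pushed through a threshold, and "learning" is adjusting the weights:

# Toy "neuron": a weighted sum of incoming signals plus a threshold.
# Illustrative only - real neurons and real brains are far more complex.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if activation > 0.0 else 0.0

# Crude "learning": nudge the weights toward the desired output
# (the classic perceptron update, with an invented learning rate).
def learn_step(inputs, weights, bias, target, rate=0.1):
    error = target - neuron(inputs, weights, bias)
    weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights, bias + rate * error

Nobody is saying this toy is a brain; the point is only that nothing in the principle - signals, connections, adjustable strengths - requires biological matter.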
But if this process can take place in a few litres of ordinary matter, it can do so on different hardware. Today we do not have the computing power. Today we can create a few grains of sand (chess computers, walking robots, etc.), but to rival human intelligence we need a desert. There are many steps to take, but the process is already taking place.
You seem to imply that from a software engineer's perspective, intelligence is no more than decision-making capability. That is probably the most minimal conception of intelligence I have ever heard of. A random generator would already be intelligent then. Or an AMRAAM.
What is your definition of intelligence then? For me there is no difference between a decision made by a human or by a computer, as long as the decision has the same qualities. I see no way I can prove you have consciousness (though I am sure you do, buddy ;) ), so what is the difference whether I am talking with you or with a computer with your intelligence?
I can say that because I contest the concept of free will. It has already been put into question by neuroscience (there is a time lag between action and consciousness), but there are still more questions as to what this means than there are answers. Still, there is a lot of indication that consciousness means only registering what has already happened.
Free will also defies physics. You think humans have free will because they can go out of the flat and turn left, and the next day they do the same and turn right? But free will would require that you could go back to yesterday and turn the other way instead. Only then would you have free will, because only then did the other option exist.
According to physics, progressing in time means moving in the fourth dimension, and every point in the past still exists. So you could go back to yesterday but everything would stay the same. You would always turn left -> you have no free will, at least not in the way we have come to accept. So there is no difference between you and a machine that behaves like you.
It is decided by how you define intelligence.
I think that is indeed the only difference between our points of view on that matter.
And in case you do not know, many astronomers think that most civilisations that might exist out there (mind you, 90% of the suns in our galaxy are older than our solar system) probably will not be organic, life-dependent forms of life, but machine civilisations - "machine" in a wide meaning, of course - because they argue that a civilisation living long enough will most likely transfer mind and cognition to machines that are much easier to maintain than a vulnerable organic body, which is also far less enduring. If you travel between the stars, or want to survive collapsing biospheres or extreme environments, organic hulls like the ones life on Earth uses are the weakest, most dangerous option.
I know; I actually came to that conclusion myself when I was 15 years old. I think a future step will be the colonization of the Moon and Mars through semi-intelligent robots and possibly technobiological plant/animal life. It's a logical continuation of what we have been doing during our previous existence as a human race. And I think it is happening in many places in the universe.
I am just against a totally unleashed, totally uncontrolled, totally uncritical abuse of technology.
Seeing the problem we are discussing, I agree. But the past showed we cannot stop these processes, because they are larger than the single individual, and they have a life of their own.
Do me some more justice, please - I am not as ignorant of technology as you think. I am just no blind believer, uncritically hailing it no matter what. Technology can be a benefit, a hazard, or useless. If it is a benefit, it is not my problem.
Hey, sure. I am just saying that at the moment the pendulum is swinging back and technology is in the lead again. We are moving towards artificial intelligence, with small, logical steps. As I said at the start, it scares me, as it could mean the end of humanity as we know it.
Skybird
03-27-08, 07:37 PM
Everyone who watches these videos takes part in an important moment in human history, because it is their first contact with an alien creature. One that we have created ourselves, and that does not even have the capabilities of an insect, but will have in our lifetime. And beyond that, it will make us obsolete.
Not so dramatic - the robodog is NOT unique. In Lübeck, Germany, for example, they have developed an autonomous eight-legged spider that also walks all by itself. It is just not as hectic, but rests more stably on the ground. And there have been other experiments like this, too.
Ok, you are not ;). I only contest the idea that human intelligence is unique. And I do so on the grounds that it is a technophobe's refuge to capitulate before a problem of such proportions as artificial intelligence. And a lack of imagination, or maybe fear.
I did not say that human intelligence is unique. I said that it does not compare to software running on a traditional PC, and that the human brain does not compare to the hardware of a computer. I can imagine many different forms of higher intelligence. I do not even rule out the idea that maybe some species on Earth are as intelligent as humans, and maybe just too different for us to realise it. Right now, research is even rewriting all that we seemed to know about certain brain structures being a precondition for intelligence in birds. What is currently being said and found there is a silent revolution nobody takes much note of.
Your main point is that the analogy between computer (hardware) and human intelligence is untenable. You then go about proving your statements through purely psychological arguments. But all you prove to me is that the brain is much more complicated than we can imagine. Of course it is.
I have yet to see a convincing argument that the human brain is not a biochemical computer. From what is known today, it is based on network organization and signal exchange. The neurons and chemical agents of that process represent the hardware. The memes of humanity are the software, and it differs from individual to individual.
A PC hard disk cannot take over functions of the processor. A CPU does not calculate and store in holographic patterns. A GPU cannot learn by itself to produce sounds. A mainboard does not change its hardwiring. A human brain does not think in binary code. A PC cannot become aware of itself, and cannot be emotional. Software code defines the limits of what a PC can do, and even self-programming software is limited by its initial code in what it can develop by itself in future self-programming. Just some points that come to mind right out of the blue. Your enthusiasm for computer hardware in all honour, but you exaggerate it, massively. In the foreseeable future, computers maybe will mimic cognition by surrogate routines that give us the illusion that they have cognition, like the Japanese robots that are given human looks and facial expressions give the illusion that they are persons with true emotions. But they are not, and computers in the foreseeable future will have no real cognition. They will be programmed to "cheat" us instead.
But if this process can take place in a few litres of ordinary matter, it can do so on different hardware. Today we do not have the computing power. Today we can create a few grains of sand (chess computers, walking robots, etc.), but to rival human intelligence we need a desert. There are many steps to take, but the process is already taking place.
And still I say it does not compare. Computers and brains operate in two totally different working modes, the one in a digital, binary code, the other in a mode we have yet to understand, and maybe calling it holographic is only a rough approximation of the real nature of "thinking". It is not about electricity being used in both "devices"; it is about how the signal's structure is coded, and what is done with these very different types of information - and here, at the very latest, a brain "computes" differently than a CPU.
What is your definition of intelligence then? For me there is no difference between a decision made by a human or by a computer, as long as the decision has the same qualities. I see no way I can prove you have consciousness (though I am sure you do, buddy ;) ), so what is the difference whether I am talking with you or with a computer with your intelligence?
I have repeatedly mentioned several factors now. Self-awareness. True autonomy. Emotions. Social drive. Curiosity, a playful "mind". Even if I do the same thing a computer decides to do by its software (and maybe I even programmed that software) - you still do not know why this is so, and what my motivations, motives and/or biological drives are when doing this or that. Also, I have a completely different image in my mind of what I do, and why - and it may even change over time, and get distorted over time.
I can say that because I contest the concept of free will. It has already been put into question by neuroscience (there is a time lag between action and consciousness), but there are still more questions as to what this means than there are answers. Still, there is a lot of indication that consciousness means only registering what has already happened.
I know what results you mean, but they also need to be seen in a wider context and a discussion (influencing theoretical thinking) that leads beyond the results themselves. It is a complex theme, and I took a complete physiopsychological seminar just about this. So despite the latest findings, which get interpreted only in a very tight frame, I still say that the question of whether we cry because our brain creates emotions that cause us to cry, or whether we feel sad emotions because our brain made us cry, is not answered. It is even possible that both events have no causal link and just happen to fall at the same time by random happenstance - then we would even need to think in terms of Jungian synchronicity. :)
Free will also defies physics. You think humans have free will because they can go out of the flat and turn left, and the next day they do the same and turn right? But free will would require that you could go back to yesterday and turn the other way instead. Only then would you have free will, because only then did the other option exist.
That is flawed logic, because free will does not mean being able to defeat the laws of physics as our models define them, but to decide freely, and on this: see above. I do not want to open that can of worms, since it is VERY much material, and I admit I would need to refresh it before I engage in a deeper discussion of neurophysiology and philosophical implications.
According to physics, progressing in time means moving in the fourth dimension, and every point in the past still exists. So you could go back to yesterday but everything would stay the same.
Now that leads VERY far. There are so many theories in modern physics on alternate worlds, infinite worlds coexisting at the same time and in the same place, worlds constantly created by deciding between options and thus splitting the universe into one where one option, and one where the other option, was chosen. That is fascinating stuff, but I think there is a reason why I tend to read a whole book when I want to compare these many theories and their implications, often from quantum physics.
You would always turn left -> you have no free will, at least not in the way we have come to accept. So there is no difference between you and a machine that behaves like you.
Sorry, but I cannot see that comparison being founded by your example (which is an assumption only, btw: if you could go back in time to that crossroads, why are you so sure that, having decided once to move left, you would do it again and not differently when the same situation returned? Quantum physics has made that a questionable assumption, for it paradoxically both guarantees randomness that cannot be calculated at a very basic level of the universe as we have constructed it in our models, and nevertheless claims the possibility of non-causal links, while theoretical physics even thinks about particles moving backward in time). But that is again a discussion in itself.
I think that is indeed the only difference between our points of view on that matter.
No, it is one difference, but not the only one.
I know; I actually came to that conclusion myself when I was 15 years old. I think a future step will be the colonization of the Moon and Mars through semi-intelligent robots and possibly technobiological plant/animal life. It's a logical continuation of what we have been doing during our previous existence as a human race. And I think it is happening in many places in the universe.
That is all speculation, including the theory of the astronomers I referred to. I am always hesitant to label something a conclusion when all I have is speculation. It is a thought experiment. "Conclusion" I reserve for theories which are based on some prior information and findings offering more substance. Drawing conclusions on the basis of speculation alone necessarily leads not to conclusions but to - more speculation.
Seeing the problem we are discussing, I agree. But the past showed we cannot stop these processes, because they are larger than the single individual, and they have a life of their own.
That is exactly the uncritical, almost fatalistic acceptance of technology I mentioned. Technology (formed by scientific development) in human history was a trend that started slowly in the Western medieval era and today has almost a self-sustaining dynamic keeping it running. In the Orient it had a better start, but then came to an almost complete halt and stagnation. So, technology does not necessarily have a life of its own. Also, it is a two-sided thing. Decisions leading us to planes we consider to be good. Planes crowding the sky and polluting the atmosphere at altitudes where they really do damage, we consider not to be good. So, we cannot say "an airplane all by itself is necessarily a good thing". And I always have the results of Oppenheimer's project on my mind. And finally, there is business, and this keeps things running. Technological development is not so much keeping itself alive (and if so, then only via the intervening variable of science finding answers which hold new questions in themselves), but is being pushed by business that demands new products to make new profits. This is the drive behind scientific and technological development more than anything else. It may have been different in the past, but today it's money that makes the world go round.
Hey, sure. I am just saying that at the moment the pendulum is swinging back and technology is in the lead again. We are moving towards artificial intelligence, with small, logical steps. As I said at the start, it scares me, as it could mean the end of humanity as we know it.
It may sound banal, but I do not see it like that; I often tell myself: it is what it is.
But I do not have to like it, and when I do not like it, what kind of man would I be if I did not then try to change it or influence it?
P.S. On your next holidays, get a copy of Frank Schätzing's "Der Schwarm"; you have probably heard of it, it was a long-time bestseller. Not only is it a very exciting and well-written read, but it also offers you a conception of an alien intelligence that is totally different from everything we have discussed here. And not referring to the book but to natural fish swarms, and to the behavior of humans moving in big groups without colliding yet nevertheless forming movement patterns that you can see from the outside: that is a form of intelligence too, most social and natural scientists agree, calling it "swarm intelligence". Now compare that to a PC network's intelligence. The difference should be obvious, beyond the fact that swarms are not hardwired and are not connected in PC networks. They are two totally different things.
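To make the contrast concrete: the three classic "boids" rules (separation, alignment, cohesion) that computer scientists use to mimic such swarms can be sketched roughly as below. This is a purely illustrative toy with invented weights - and note that mimicking the pattern on a screen is not the same thing as the swarm itself being intelligent.

# Illustrative "boids" sketch: each animal follows three purely local rules
# (keep apart, match the neighbours' heading, drift toward their centre).
# No individual knows anything about the swarm-level pattern that emerges.
def boid_step(me, neighbours, sep_w=1.5, ali_w=1.0, coh_w=0.05):
    (x, y), (vx, vy) = me
    if not neighbours:
        return (x + vx, y + vy), (vx, vy)
    n = len(neighbours)
    cx = sum(p[0] for p, _ in neighbours) / n      # neighbours' centre
    cy = sum(p[1] for p, _ in neighbours) / n
    avx = sum(v[0] for _, v in neighbours) / n     # average neighbour velocity
    avy = sum(v[1] for _, v in neighbours) / n
    sx = sum(x - p[0] for p, _ in neighbours) / n  # push away from the others
    sy = sum(y - p[1] for p, _ in neighbours) / n
    vx += sep_w * sx + ali_w * (avx - vx) + coh_w * (cx - x)
    vy += sep_w * sy + ali_w * (avy - vy) + coh_w * (cy - y)
    return (x + vx, y + vy), (vx, vy)

That is pattern without any wiring between the animals - which is exactly why it does not compare to a PC network.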
GlobalExplorer
03-28-08, 05:16 AM
Free will also defies physics. You think humans have free will because they can go out of the flat and turn left, and the next day they do the same and turn right? But free will would require that you could go back to yesterday and turn the other way instead. Only then would you have free will, because only then did the other option exist.
That is flawed logic, because free will does not mean being able to defeat the laws of physics as our models define them, but to decide freely, and on this: see above. I do not want to open that can of worms, since it is VERY much material, and I admit I would need to refresh it before I engage in a deeper discussion of neurophysiology and philosophical implications.
It is only flawed logic because you assume we have a free will. Otherwise it proves that we cannot have one. Ok, if reality means "many worlds" I can accept defeat, but as long as we stick to the single space-time-continuum model, we can have no free will under the laws of physics.
But forget my excursion into physics - I cannot prove or explain it here, and I admit it could be wrong. Sorry for trying to lure you into this territory; it is indeed too big.
I said it for another reason. We think that a machine that simulates human intelligence through calculations is different from us because we understand the causalities behind it. When we no longer cling to our concept of consciousness and free will, we see that we are not different from this machine.
That is all speculation, including the theory of the astronomers I referred to. I am always hesitant to label something a conclusion when all I have is speculation. It is a thought experiment. "Conclusion" I reserve for theories which are based on some prior information and findings offering more substance. Drawing conclusions on the basis of speculation alone necessarily leads not to conclusions but to - more speculation.
The correct term is conjecture. A (mathematical) theorem is a conjecture until it is proven.
A PC hard disk cannot take over functions of the processor. A CPU does not calculate and store in holographic patterns. A GPU cannot learn by itself to produce sounds. A mainboard does not change its hardwiring. A human brain does not think in binary code.
I call lack of imagination. Why are you intentionally limiting AI to the silicon-based computers of today? Sure, with the current generation of computers we can never achieve the required processing power. There are many problems with silicon-based computers, the biggest being size. A computer that has enough computing power for human intelligence is just too large and too restricted by physical limits (speed of light, etc.) and will therefore never be built. There are already new technologies on the horizon, like quantum, chemical or biological computers.
But you seem to know very little about software. Software can adapt and take over all those functions you mentioned. (Well, it cannot do so today, but software technology is still in its infancy, you do realize that?)
Or in other words, it's not the neurons in Tolstoi's brain that wrote "War and Peace". It was written by the program "Leo Tolstoi", running on the "brain of Leo Tolstoi" computer. Both beyond the reach of today's technology, sure.
And still I say it does not compare. Computers and brains operate in two totally different working modes, the one in a digital, binary code, the other in a mode we have yet to understand, and maybe calling it holographic is only a rough approximation of the real nature of "thinking".
Once we know, we will begin building such an analogue, holographic computer!
Even if I do the same thing a computer decides to do by its software (and maybe I even programmed that software) - you still do not know why this is so, and what my motivations, motives and/or biological drives are when doing this or that. Also, I have a completely different image in my mind of what I do, and why - and it may even change over time, and get distorted over time.
As I said before, one also could not explain why you do these things, so there is no difference from the artificial you.
Seeing the problem we are discussing, I agree. But the past showed we cannot stop these processes, because they are larger than the single individual, and they have a life of their own.
That is exactly the uncritical, almost fatalistic acceptance of technology I mentioned.
So you say I can stop them? Memes like artificial intelligence, communism, Coca-Cola or the internet are larger than human beings, and they can only be stopped by other memes. I don't know how this realization would make me a slave of technology. If you tell me how to stop these things, maybe I would try.
But I already see we will never agree on anything here, because I am a technologist and you are a universalist. The problem I have with that is that you want to do everything differently, but you don't say exactly how. You have a habit of trivializing the momentous and complicating the obvious. Still, it's always interesting to read your posts because of the breadth of your ideas. I just don't see any conclusion forming out of your philosophy; probably that is your philosophy.
But in all honesty I must say I don't trust you psychologists. By nature you try to mystify the brain and human existence in order to make your field inaccessible to outsiders, while in reality psychology has achieved nothing, and never will. The only real progress is made in the field of neurology, because it looks at real stuff, not tapestry patterns. (And in psychiatry, insofar as it deals with easing diseases.)
Psychologists are very much like artists who paint a red circle and explain in 10,000 words that it is anything but a circle. Engineers are dumb painters, but they will make circles very much the same.
Anyway, I will have a look at that book you mentioned - sounds interesting!
That's amazing! Moves VERY realistically in situations where it is about to lose balance. Now, I can finally get a robot to bring me my beer instead of having to walk to the fridge. :up:
Johon on sinun aviovaimo? :p
Uh oh... *calls to Bletchley Park*
-"Guys, you remember the Enigma code you struggled with during WWII? Yeah, well, that was a crossword puzzle compared to this! A foreigner speaking Finnish!"
-"*GASP*"
:p
Skybird
03-28-08, 06:55 AM
It is only flawed logic because you assume we have a free will. Otherwise it proves that we cannot have one.
No, it does not prove it. I also do not know whether we have a free will to decide freely between the options available to us. Free will does not mean being able to fly if we want to; it means just that: free choice between the options available to us in a given situation. That is how philosophy uses the term. Psychology too. Jurisprudence (!). Your physics implication has not much, if anything, to do with it, not in the way the term is usually meant. Maybe you mean something very different? Omnipotence, maybe?
There is also a centuries-old debate about the consequences if there is no free will. All ethics systems would break down then. Our laws that penalize people who violate them would not make any sense anymore. We would have no basis on which to form social communities. But I do not wish to say by this that we "must" have a free will or else our societies would crumble. It is just what they have debated in jurisprudence, for centuries.
I said it for another reason. We think that a machine that simulates human intelligence through calculations is different from us because we understand the causalities behind it. When we no longer cling to our concept of consciousness and free will, we see that we are not different from this machine.
I tell you - no scientist on earth truly knows what consciousness even is, and how it comes into existence. Thus we cannot say that we are able to simulate it. What is it we simulate, then? We simulate nothing; we define algorithms by which an automated installation functions according to patterns that we have defined in advance. By your definition, an emergency braking system in a train or a lift would have consciousness and intelligence when a certain threshold speed is surpassed and the brakes lock. Somewhere above, you said it is all about making decisions, and by what you said, an automated reaction would be a machine making a decision. But that is nonsense. It compares at best to a biological reflex, and consciousness and decision-making play no part in reflexes.
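That kind of "decision" boils down to something like this - an invented toy example, of course, not any real train's software:

# Invented toy example: the entire "decision" of such an emergency brake.
# One fixed threshold, one comparison - no consciousness required.
SPEED_LIMIT_M_S = 20.0

def emergency_brake_needed(speed_m_s):
    return speed_m_s > SPEED_LIMIT_M_S

If that counts as intelligence, then the word has stopped meaning anything.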
Man, don't declare yourself lower than you are! You are a man, but you want to see yourself as nothing better than a complicated piece of Fischer-Technik? That might have been fine under the paradigm of Descartes, and in past times when it was thought that all of nature and the universe was just a machine, often with a predefined fate and future. But we have moved beyond that, for good reasons.
The correct term is conjecture. A (mathematical) theorem is a conjecture until it is proven.
No, a conjecture (Vermutung) is based on some kind of information that is considered to be logically linked to the object of the conjecture. But when all you have is speculation, playing around with it will never lead you anywhere beyond more speculation, as long as you have no better-quality data available to feed into your mind games.
I call lack of imagination. Why are you intentionally limiting AI to the silicon-based computers of today? Sure, with the current generation of computers we can never achieve the required processing power. There are many problems with silicon-based computers, the biggest being size. A computer that has enough computing power for human intelligence is just too large and too restricted by physical limits (speed of light, etc.) and will therefore never be built. There are already new technologies on the horizon, like quantum, chemical or biological computers.
You know as well as I do that until the early 90s it was in vogue to compare the brain - the way it works and the results it creates, consciousness among them - to the set of hardware a traditional computer is made of. I limit the understanding of "machinery" and "computer" to what we can imagine today for the sake of this discussion. But that is not the deciding point; the important thing is the way in which data gets used by the mind-machine, no matter the hardware it is based upon. About what future technologies will bring, we can only make "conjectures" for the short-term future, and speculation for the very long run. Thus it remains unknown to us whether machines of the future will ever create a true intelligence and consciousness of their own, or whether they will be capable of becoming the new home and hull of biological consciousness (that's the theory most astronomers prefer, I got the impression, not the first). I was talking in this discussion about what we can foresee, at least with some justification. Wild speculation is not my thing.
But you seem to know very little about software. Software can adapt and take over all those functions you mentioned. (Well, it cannot do so today, but software technology is still in its infancy, you do realize that?)
Software today is neither self-aware, nor creative, nor emotional, nor social. It does not have the ability to question its own existence, to decide to sacrifice itself or to want to survive; it does not ask where it comes from, or how long it will be there. You see, all these drives are also factors that influence the human mind, necessarily, we cannot escape that, and through our mind they influence the way we live our lives, approach challenges, find solutions and make decisions. It is all part of this mega-label "intelligence". And whether software development alone will have what it takes to truly create such qualities out of nothing is unknown. It does not adapt to these concepts right now, as you claim, because it has no cognitive representation that such qualities even exist. And later you say that it nevertheless cannot do so right now, but will in the future. Let's see.
At school we learned Pascal (first half of the 80s), and I later used GFA 3 Basic on the Amiga, btw. :D
Or in other words, it's not the neurons in Tolstoi's brain that wrote "War and Peace". It was written by the program "Leo Tolstoi", running on the "brain of Leo Tolstoi" computer. Both beyond the reach of today's technology, sure.
Interesting theory, although not new. But still, a brain organizes and processes information differently than a CPU, and what the human software is made of, if it can be called that, is still not really known: we have many theories, but no solid understanding. And how the brain can biologically be self-organizing, and why the software can change the functioning patterns of areas of the hardware (in case of damage, different brain areas can take over functions from areas that before were used for a different function): this interaction is also not explained; we only observe for certain that it exists. The understanding of the brain is very different from what it was just 20 years ago; the past twenty years have not been evolutionary, but revolutionary.
Once we know, we will begin building such an analogue, holographic computer!
Wait and see. Let's wait, watch and see. The brain is a limited object with precisely defined boundaries, yet it seems to include unlimited, infinite storage capacity. Physically that is a paradox, like a perpetuum mobile creating energy out of nothing, or by investing less energy than is won in return.
As I said before, one also could not explain why you do these things, so there is no difference from the artificial you.
C'mon.
So you say I can stop them? Memes like artificial intelligence, communism, Coca-Cola or the internet are larger than human beings, and they can only be stopped by other memes. I don't know how this realization would make me a slave of technology. If you tell me how to stop these things, maybe I would try.
What I said is that you should not fatalistically or uncritically embrace each and every process that is running just because it is there. We can choose. We can decide. We can decide against a certain technological trend or process. Did you hear that the Transrapid in Munich was laid to rest yesterday? ;) Too expensive. We can decide against buying Coca-Cola, and ruin the company. We only need unity. Or we would need to be the decision makers of the company and decide to shut it all down.
You think in binary here :) and argue like it's "all or nothing at all". I did not say to abandon all technology. I said to be more responsible in our decision making about what kinds of technology we want to drive, and which better not. Not all the technology there is must be realised. Not everything that can be done must be done, for no other reason than that it can be done. Sometimes it is a decision about a certain project, like the huge dam in China (environmental and social costs putting it into question), a TV phone (apparently becoming a flop because nobody wants to watch long movies on a 4x3 cm display), or the Transrapid. Sometimes it can be a decision about general trends and new classes of technology. And we do not always have to accept everything of these just because it popped up from the corresponding research. But since these researchers are driven by business interests and investments by business companies, we may need to tackle these economic interests in order to defend our freedom to decide for or against such things.
But I already see we will never agree on anything here, because I am a technologist and you are a universalist. The problem I have with that is that you want to do everything differently, but you don't say exactly how. You have a habit of trivializing the momentous and complicating the obvious. Still, it's always interesting to read your posts because of the breadth of your ideas. I just don't see any conclusion forming out of your philosophy; probably that is your philosophy.
Obviously I do not agree with you on this. I have said repeatedly and quite clearly what I meant, and how I define terms and issues like intelligence. It's just that from your extremely limited perspective of wanting to be a technologist, as you call it, you maybe cannot follow that. As I see your argumentation, you attempt not to understand reality, but to pick it and squeeze it and minimize it until it is so small that you can carry it in one hand and your technologist's set of tools can handle it. In other words, you do not adapt to wide reality; you try to make reality so small that it must adapt to your definitions of it.
But in all honesty I must say I don't trust you psychologists. By nature you try to mystify the brain and human existence in order to make your field inaccessible to outsiders, while in reality psychology has achieved nothing, and never will.
Then you have never spoken with physiopsychologists. Btw, I left psychology years ago, not for the same generalizing complaints you made, but for being critical of certain trends in talking psychology and therapy, too. But that does not change the fact that, like I said just above, you express annoyance with psychology only because it deals with qualities your fanatical technologist approach cannot and never will understand. And that psychology has achieved nothing at all is simply totally wrong. The problem I see with it is that it just claims more than it deserves, or can justify, and opportunistically also produces a lot of hot air.
The only real progress is made in the field of neurology, because it looks at real stuff, not tapestry patterns. (And in psychiatry, insofar as it deals with easing diseases.)
You are not only a technologist, but an extreme materialist then, too. But that is your problem, not that of psychology. There have been too many experiences in my life that tell me beyond doubt that there is more than just matter. In fact, you will hate me for it, as a physiology professor from back then did, too: I do not agree that because of a brain's activity there is a mind - it's more the other way around: because there is a mind not depending on matter, from this mind's existence it follows that a brain formed.
Psychologists are very much like artists who paint a red circle and explain in 10,000 words that it is anything but a circle. Engineers are dumb painters, but they will make circles very much the same.
Well, if you say so...
I think we better stop here.
That's amazing! Moves VERY realistically in situations where it is about to lose balance. Now, I can finally get a robot to bring me my beer instead of having to walk to the fridge. :up:
Johon on sinun aviovaimo? :p
Uh oh... *calls to Bletchley Park*
-"Guys, you remember the Enigma code you struggled with during WWII? Yeah, well, that was a crossword puzzle compared to this! A foreigner speaking Finnish!"
-"*GASP*"
:p
Yeah... it's all Greek to me... :p
But you did get the general gist?
Oh.. and "foreigner" is such a harsh word.. I prefer "heathen" :know:
Yeh, I understood what you meant. ;)
Our laws that penalize people who violate them would not make any sense anymore. We would have no basis on which to form social communities. But I do not wish to say by this that we "must" have a free will or else our societies would crumble. It is just what they have debated in jurisprudence, for centuries.
No!
To say that if we had no free will we would choose to abandon ethics, the punishment of crime, and that our societies would crumble is a contradiction in terms.
To say that if we had no free will we would automatically and without choice or will abandon ethics just shows that you overestimate the complexity of the human brain.
I tell you - no scientist on earth truly knows what consciousness even is, and how it comes into existence. Thus we cannot say that we are able to simulate it. What is it we simulate, then? We simulate nothing; we define algorithms by which an automated installation functions according to patterns that we have defined in advance. By your definition, an emergency braking system in a train or a lift would have consciousness and intelligence when a certain threshold speed is surpassed and the brakes lock.
Firstly, everything that is conscious knows what consciousness is, by definition.
What we cannot do is identify it outside of ourselves, where we cannot directly experience it.
If you hold that there is no possible way to identify it outside of ourselves, on what basis do you assume an emergency braking system does not have it? On what basis do you assume that an inert rock does not have it?
If it is physical, it is possible to detect. We have no reason to believe it is not physical, and so there is no reason to believe we will never be able to detect it, however complex it may be.
Man, don't declare yourself lower than you are! You are a man, but you want to see yourself as nothing better than a complicated piece of Fischer-Technik? That might have been fine under the paradigm of Descartes, and in past times when it was thought that all of nature and the universe was just a machine, often with a predefined fate and future. But we have moved beyond that, for good reasons.
That's nothing short of mysticism. :shifty:
software today is neither-self-aware, nor creative, nor emotional, nor social. It does not have the ability to question it's own existence, to make decisions to sacrifice itself or wanting to survive, it does not ask where it comes from, and how long it will be there. You see, all these are drives also are factors that influence humans mind, necessarily, we cannot escape that, and through our mind they influence the way we live our lives, and approach challenges and find solutions and make decisions. It all is part of this mega-label "intelligence". And if software development alone will have what it takes to truly create out of nothing such qualities, is unknown. It does not adapt to these concepts right now, as you claim, because it even has no cognitive representations hat such qualities do exist. And later you say that it nevertheless cannot right now, but will be in the future. Let's see.
How can you claim software is not self-aware when you admit that we do not "truly know what consciousness even is"?
The only reason humans are creative, emotional, social, questioning of their own existence, and have a will to survive is because they are programmed to do so.
You are not only a technologist, but an extreme materialist then, too. But that is your problem, not that of psychology. Too many experiences in my life tell me beyond doubt that there is more than just matter. In fact, you will hate me for it, like a physiology professor from back then did, too: I do not agree that because of a brain's activity there is a mind - it's more the other way around: because there is a mind not depending on matter, from this mind's existence comes the fact that a brain formed.
Utter mysticism again.
There is no reason to believe in a mind that is in any way separate or different from brain activity.
Skybird
03-28-08, 09:41 AM
Slowly but surely I'm getting tired of having words I never said put in my mouth. Please note, everybody, that even small differences are important and can change the complete meaning.
No!
To say that if we had no free will we would choose to abandon ethics, the punishment of crime, and that our societies would crumble is a contradiction in terms.
To say that if we had no free will we would automatically and without choice or will abandon ethics just shows that you overestimate the complexity of the human brain.
Fine. And who said that we would choose to abandon ethics? I did not.
The implication you fail to see is a totally different one. If there is no free will, then you must necessarily conclude that in every situation where man makes a decision to do or not to do something, he had no other choice than to act the way he did. (I assume that is a bit of what GE meant when saying that if one were going back in time to the same situation, one necessarily would do the same again, but he mixed it up.) But if man has no other choice than to commit that crime, to act in that way that is under penalty, to make that decision, it makes no sense anymore to put a penalty on him, to hold him responsible for it. Because he could not have done any differently than he did. Now consider the consequences in education. Theology. Every aspect of accepting a post where you have to accept responsibility. Legal procedures. It all would break down, because all of it, our legal culture as well, is based on the premise that man has a free will that enables him to choose. Without that freedom, there is no responsibility. Without responsibility, nobody can be held responsible. Nobody being responsible for himself: penalty loses its justification. So do ethics differentiating between good and evil - they become pointless.
So it is not about choosing to skip ethics. It is about ethics simply not existing.
Firstly, everything that is conscious knows what consciousness is by definition.
No. It simply is. Intellectual understanding is something different. Most people know some academic definitions and some theories of understanding it - but not before they have made the corresponding experience, for example in meditation, do they know that the quality of the experience and the words of the description do not match.
If you hold that there is no possible way to identify it outside of ourselves, on what basis do you assume an emergency braking system does not have it? On what basis do you assume that an inert rock does not have it?
Are you talking about consciousness, mind or "Buddha-nature"? ;) A single screw does not make calculations of any kind, nor does it count the number of its rotations. But of course I cannot look inside that screw. So although I cannot prove it, I nevertheless claim that it has neither consciousness nor intelligence. Forgive me for being that ignorant.
If it is physical, it is possible to detect. We have no reason to believe it is not physical
and so there is no reason to believe we will never be able to detect it, however complex
it may be.
A Fata Morgana is existent, too. You can see it. You can describe the details it shows. You can photograph and film it. You can explain it. Existent it is - but it is not real.
Man, don't declare yourself lower than you are! You are a man, but you want to see yourself as nothing better than a complicated piece of Fischer-Technik? That might have been fine under the paradigm of Descartes, and in past times when it was thought that all of nature and the universe is just a machine, often with a predefined fate and future. But we have moved beyond that, for good reasons.
That's nothing short of mysticism. :shifty:
And I think you want to provoke, just for the sake of provoking. ;)
How can you claim software is not self-aware when you admit that we do not "truly know what consciousness even is"?
The only reason humans are creative, emotional, social, questioning of their own existence, and have a will to survive is because they are programmed to do so.
Poor robot you are, living in a robot world. Descartes's (et al.) world-machine, that is. But you know, I am still totally sure that that old piece of software code I once wrote to create a Mastermind program had no consciousness or self-awareness at all.
Always arguing with extremes, eh?
I think it is getting absurd now. If some of you want to see yourselves as nothing more than a highly developed type of automobile with a nice electronics suite, okay, fine; I honestly hope you will never reach a rank where you can do real damage, but beyond that it does not really bother me. What I say is this:
1. A hardware kit with motors, sensors and software code does not equal "intelligence" as the term is used in cognitive science, in the sense of a set of characteristics most scientists agree to be part of "intelligence".
2. Being critical about whether or not to embrace this technology and this trend is not the same as completely denying technology. It simply means choosing wisely.
3. Isaac Asimov, certainly not suspected of being a mystic, said intelligence is what an IQ test measures. I would point out that word-and-paper tests do not measure anything: a test form has no sensor. Many psychologists would agree on certain key features of what intelligence is, but there is no single super-definition of it that has general consensus; the definitions depend on the perspectives of the various scientific fields. I also say that intelligence per se does not exist, but that the term is a meta-label for a collection of qualities that all interact and come together to create something that is greater than the sum of the single components/qualities. Whether these qualities can ever be created by software as we define "software" today is not a certain thing, and not even a theory - it is speculation. We do not know whether it can be done or not. We need to wait and see. But until then we can deal with the question of whether it is even desirable to achieve that. The understanding of "artificial intelligence" as it is used in software engineering I see as missing key features and as qualitatively inferior to what is addressed as "intelligence" in the cognitive sciences. It is the same term used for two very different things.
4. Trying to make the human brain be seen as a kind of computer kit is absurd, and is contradicted both by the very different way a brain functions and by what comes as a result of this functioning. It was an analogy that became popular during the hype around spreading PC technologies in the late 70s and throughout the 80s, but today most brain researchers reject this analogy, saying it may help the reputation of PC technology but cripples the understanding of what the brain is. When I finished studying in the late 90s, it was not very popular anymore - except with technicians. But they claim more than what is theirs.
5. Finally, neurons in the brain are not the fastest neurons we know, yet this slow processor - the brain - "computes" incoming signals in a way whose results not only outmatch computer calculations in qualitative terms, and often in quantitative terms as well, but also create qualities that are not even present in the signals themselves. We see sharp images in our brains, for example - but the lenses in our eyes do not produce these; they form images that are distorted by at least 3.5 diopters, often more, up to 6 or 7. Why and how are the signals manipulated to form "sharp images" if our brain never experienced through signal input what sharp images are? The brain also creates additional information whose link to the data may or may not be known to the mind. You may smell a certain scent without being aware of it - and suddenly have a long-lost memory on your mind, or a certain mood you once experienced. You taste something - and experience an emotion. You listen to something - and feel a mood. The type of signal input does not equal the type of data result that is produced. Your muscles may remember reaction patterns that feed back into the brain and make you do things you cannot recall in your intellect's memory. Your brain outsources data into books and the internet, and takes advantage of knowledge it never formed itself. You literally can think beyond your limits. And in the context of meditation, deep meditation and states of changed consciousness that may result in a changed neurochemical status but are not caused by it, you can eventually even have the experience of leaving all this behind and seeing beyond. That is why I tend to say that time is an invention of the intellect only - in fact only a function deriving from brain activities, not a really existing quality of the "outside" world. And that is also why I say that mind comes first - and then comes the brain. I think I was imprecise somewhere above, mixing up consciousness and mind. Awareness is a third, different quality, but I wonder if that maybe gets lost in translation; I sometimes think English and German use these terms somewhat differently.
So in the end, use that robodog if you think you have a use for it - but don't call it artificial intelligence. And you can be interested, fascinated, whatever - but don't make more of it than what it is: a machine. I love science fiction, too, but I can see the difference between what is and what is imagined to be, and I know that some things from past science fiction turned into reality and others did not. Oh, and I use a computer and many modern gadgets of present-day households - so much for me being a technophobe.
The implication you fail to see is a totally different one. If there is no free will, then you must necessarily conclude that in every situation where man makes a decision to do or not to do something, he had no other choice than to act the way he did. (I assume that is a bit of what GE meant when saying that if one were going back in time to the same situation, one necessarily would do the same again, but he mixed it up.) But if man has no other choice than to commit that crime, to act in that way that is under penalty, to make that decision, it makes no sense anymore to put a penalty on him, to hold him responsible for it. Because he could not have done any differently than he did.
That is not an implication I have "failed to see", it is not an implication at all!
We are generally programmed to punish people because of the way our social minds work. There is no meta-reason beyond that, free will or not.
Now consider the consequences in education. Theology. Every aspect of accepting a post where you have to accept responsibility. Legal procedures. It all would break down, because all of it, our legal culture as well, is based on the premise that man has a free will that enables him to choose. Without that freedom, there is no responsibility. Without responsibility, nobody can be held responsible. Nobody being responsible for himself: penalty loses its justification. So do ethics differentiating between good and evil - they become pointless.
So it is not about choosing to skip ethics. It is about ethics simply not existing.
Of course ethics are not physically existent! They are just systems that result from the way our social brains work. So is the concept of responsibility. That would be no different whether we had free will or not.
Are you talking about consciousness, mind or "Buddha-nature"? ;) A single screw does not make calculations of any kind, nor does it count the number of its rotations. But of course I cannot look inside that screw. So although I cannot prove it, I nevertheless claim that it has neither consciousness nor intelligence. Forgive me for being that ignorant.
So what is your basis for deciding whether something (other than yourself) is conscious or not?
I put it to you that your only basis for such decisions is whether or not your mind feels compelled to act socially towards the entity, and that it is this, rather than any rational reason, that leads you to believe that a machine cannot be conscious.
If it is physical, it is possible to detect. We have no reason to believe it is not physical
and so there is no reason to believe we will never be able to detect it, however complex
it may be.
A Fata Morgana is existent, too. You can see it. You can describe the details it shows. You can photograph and film it. You can explain it. Existent it is - but it is not real.
The phenomenal experience is existent and real. Only the conclusions you might want
to make from it might be false.
1. A hardware kit with motors, sensors and software code does not equal "intelligence" as the term is used in cognitive science, in the sense of a set of characteristics most scientists agree to be part of "intelligence".
When I started reading that sentence I assumed you were talking about a human:
"A hardware kit with motors, sensors and software code"
3. Isaac Asimov, certainly not suspected of being a mystic, said intelligence is what an IQ test measures. I would point out that word-and-paper tests do not measure anything: a test form has no sensor. Many psychologists would agree on certain key features of what intelligence is, but there is no single super-definition of it that has general consensus; the definitions depend on the perspectives of the various scientific fields. I also say that intelligence per se does not exist, but that the term is a meta-label for a collection of qualities that all interact and come together to create something that is greater than the sum of the single components/qualities. Whether these qualities can ever be created by software as we define "software" today is not a certain thing, and not even a theory - it is speculation. We do not know whether it can be done or not. We need to wait and see. But until then we can deal with the question of whether it is even desirable to achieve that.
The understanding of "artificial intelligence" as it is used in software engineering I see as missing key features and as qualitatively inferior to what is addressed as "intelligence" in the cognitive sciences. It is the same term used for two very different things.
This much I have no problems with.
4. Trying to make the human brain be seen as a kind of computer kit is absurd, and is contradicted both by the very different way a brain functions and by what comes as a result of this functioning. It was an analogy that became popular during the hype around spreading PC technologies in the late 70s and throughout the 80s, but today most brain researchers reject this analogy, saying it may help the reputation of PC technology but cripples the understanding of what the brain is. When I finished studying in the late 90s, it was not very popular anymore - except with technicians. But they claim more than what is theirs.
I have no doubt that thinking of a brain the way we think of a computer is a waste of time.
Equally, thinking of a computer as a series of logic gates is a waste of time; it will not help you play Silent Hunter.
Thinking of a computer that is as complex as, or more complex than, the human brain in the same way that you might think of a computer available these days is most likely a waste of time in the same way.
You literally can think beyond your limits.
A contradiction in terms. Clearly incorrect.
And in the context of meditation, deep meditation and states of changed consciousness that may result in a changed neurochemical status but are not caused by it, you can eventually even have the experience of leaving all this behind and seeing beyond.
You still think I am being unjust to accuse you of mysticism? :shifty:
So in the end, use that robodog if you think you have a use for it - but don't call it artificial intelligence
Oh, I don't make any claims about bigdog at all!
GlobalExplorer
03-28-08, 02:58 PM
Hey, no need to get angry, Skybird. I admit I was making some exaggerations about psychology - I always like to exaggerate (other people have complained about it before) - and it was just revenge for you brushing aside computer science - which psychology is clearly competing with on the field of artificial intelligence - and for declaring someone of Asimov's capacity as "moved beyond by time".
And it was not very nice to call me "fanatical" - I am also a critic of unbiased slavish worship of technology - because that is a fatal disregard of nature.
But I think I have already understood why you feel so compelled to play down the significance of the next-gen robots: in your discipline everyone is jealous that computer science has results to show - which must be a shock after those long years when everyone could ridicule CS because it could not even create a machine that walks, let alone one that possesses the intelligence of the lowest insect. Now we see we are getting close to that point. Let me make it clear that I never said Big Dog is "very" intelligent. Yet I see it possesses that little bit of intelligence which is required to move. But it could be combined with a purpose - like walking around and surviving or - seeing who funds the development - to "kill" - as well as a way of feeding itself (it could burn dry wood in a wood gasifier, a "Holzvergaser"), and we would have created a primitive lifeform. I think that deserves to be hailed as a major first step towards artificial life, not more.
And I agree with you insofar that it has not brought us any closer to an artificial consciousness.
However sadly you made two fatal errors in your reply to my last post, when you were admitting what I had claimed in an earlier post: that you are latently religious.
Once we know, we will begin building such an analogue, holographic computer!
Abwarten - let's wait, watch and see. The brain is a limited object with precisely defined boundaries, yet it seems to include unlimited, infinite storage capacity. Physically that is a paradox, like a perpetuum mobile creating energy out of nothing or by investing less energy than is won in return.
Oh-oh. Clearly you must have meant something else when you wrote this. An infinite number of permutations from a finite set - this defies mathematics. And that is profoundly unscientific, as you will certainly agree.
It's just that from your extremely limited perspective of wanting to be a technologist, as you name it, you maybe cannot follow that. As I see your argumentation, you attempt not to understand reality, but to pick it and squeeze it and minimize it until it is so small that you can carry it with one hand and the means of your technologist's set of tools can handle it. In other words you do not adapt to wide reality, but you try to make it so small that it must adapt to your definitions of it.
That is intelligence. We must reduce reality to an abstraction because there is no way to fully understand reality. To do so would mean squeezing a quart into a pint pot. Or will you deny that a pebble in reality consists of a hundred billion atoms in a specific geometric / energetic configuration? Or do you want to go that far with your wide reality?
You are not only a technologist, but an extreme materialist then, too. But that is your problem, not that of psychology. Too many experiences in my life tell me beyond doubt that there is more than just matter. In fact, you will hate me for it, like a physiology professor from back then did, too: I do not agree that because of a brain's activity there is a mind - it's more the other way around: because there is a mind not depending on matter, from this mind's existence comes the fact that a brain formed.
Now here you are finally retreating into the realm of religion.
But I do not hate you for that - in fact I like you because you have the guts to develop and defend your own ideas against anyone. That makes you a difficult person, but keep it up!
At school we learned Pascal (first half of the 80s), and I later used GFA 3 Basic on the Amiga, btw.
Well, that's good for you. But these are not even object-oriented languages (they are procedural), i.e. at the level of the 1970s. Today we have ontological modelling, neural networks, adaptive algorithms, game theory, computer vision, parallel computing, etc. I am not trying to sound smart here; you should just freshen up your ideas about advanced programming - it is no longer limited to the mere calling of subroutines - instead you can already create dynamic programs in which you need not know how exactly they are solving the problem. But of course it's still not nearly enough to tackle the problem of intelligence.
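To give a rough idea of what I mean by a program whose author never spells out the solution steps, here is a toy sketch in Python (purely my own illustration, nothing to do with Big Dog's actual software): a trivial evolutionary search that is only told how to score a guess, and still ends up "finding" the target phrase through random variation and selection.

import random

TARGET = "ROBOT DOG"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # How many positions already match the target - the only "knowledge" we give it.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve():
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while score(best) < len(TARGET):
        children = [mutate(best) for _ in range(100)]
        best = max(children + [best], key=score)
    return best

print(evolve())  # eventually prints ROBOT DOG

Nobody hand-coded the sequence of edits that produces the answer; only the scoring rule and the variation scheme are written down. That is of course still miles away from intelligence - it just shows the style of programming I mean.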
Sorry I must keep busting your balls, really sorry. I am acting on the assumption that someone with your capacity will be able to handle it ;)
Skybird
03-28-08, 07:07 PM
That is not an implication I have "failed to see", it is not an implication at all!
Okay, then claim credit for rewriting several centuries of the part of philosophy that thinks about laws and justice, because for them this has been a problem hotly debated for exactly that long: centuries. That ethics fade when there is no free will, and that free will is a precondition for ethics having any point, is a logical implication, and very much so - but you need not be aware of it if you don't want to be. It's your free will to reject elementary logic.
We are generally programmed to punish people because of the way our social minds work. There is no meta-reason beyond that, free will or not.
Now I must laugh. That is amateur sociology at work.
Of course ethics are not physically existent! They are just systems that result from the way our social brains work. So is the concept of responsibility. That would be no different whether we had free will or not.
A philosophy is not physically existent. Thanks for pointing out the obvious.
Nevertheless there are a lot of books on ethics, and traditions of ethical codes. And this may be a good thing. If you do not see why, live for a while in an anarchistic society under the law of the strongest, and if you are a tough and strong guy and eventually survive, report back and tell us of your experiences of living without ethics that differentiate between good and evil, right and wrong. People in our cultural tradition are assumed to have a free will that gives them the freedom to choose the bad or the good, and it is like this in most cultures there are or ever were. If they choose the bad, they deserve penalty. If there is no free will, people cannot decide; if they cannot decide and are preprogrammed regarding their decision, they are not responsible for their actions - because they had no choice anyway and everything was already decided anyway. So no ethics are needed anymore to separate the bad from the good.
How much energy does it take you not to see that - or is that at least what you say? Sometimes you give me the feeling that you do not necessarily mean what you say, but raise abstruse resistance only for the sake of resisting something, no matter what - maybe because you so often go for extreme thought experiments and hairsplitting that cannot be generalised. However, I find it interest-killing.
So what is your basis for deciding whether something (other than yourself) is conscious or not?
I put it to you that your only basis for such decisions is whether or not your mind feels compelled to act socially towards the entity, and that it is this, rather than any rational reason, that leads you to believe that a machine cannot be conscious.
That is not what I said. I said much more, and that "much more" had an even greater and more complex meaning, and refers to the basis of decision-making not as a solid basis, but as something like an ever-changing, ever-learning, ever-adapting continuum. You do not seem to understand that I do not hold the kind of mechanistic, linear understandings you demand from me. You could just as well ask me for the one shape of a cloud, and when I tell you that there is no single shape of clouds, accuse me of not answering your question.
You could get an impression of an answer from what I already said - if you wanted that instead of wanting to disagree. I pointed at it at least three times in this thread now; I just counted.
When I started reading that sentence I assumed you were talking about a human:
"A hardware kit with motors, sensors and software code"
If man is not more for you than a Cylon toaster, I cannot help you.
You literally can think beyond your limits.
A contradiction in terms. Clearly incorrect.
Let me be more precise: your thinking has the ability to reach farther, on the basis of previously won knowledge, than what initially would have been possible to think. You even can change thinking mode/model, and syntax and semantics, or the thing that you would call the software - it can transform into something totally different from what it initially was. Of course, it also can just grow, by adding new features and keeping the rest. You can use higher symbolic codes of communication, like using mathematics to make precise statements about things that you cannot imagine in hard, solid facts and images. Your thinking can grow in capability, both quality- and quantity-wise. Your thinking can lead itself beyond its initial capabilities, breaking those initial limits.
You still think I am being unjust to accuse you of mysticism? :shifty:
Since I cannot put such experiences of my own into words and deliver them, and nobody not sharing them would be able to understand (which leaves only the option of communicating about them with somebody who shares them, which would make the communication pointless altogether), I don't know, and am not much interested in it anyway. I think by your first posting you are giving a purely mechanistic position, very much like GE. That's how I read your replies, which I perceive as bringing matters to a head, which often is the case and leaves me wondering why somebody always thinks in highly specific extremes. But that produces the old reliability-validity dilemma, something your postings also often remind me of. That way, I fail to see them being relevant for the usual everyday routine of events; they are valid only for a highly pointed, tightly constructed thought-set that is so tight that it does not cover much of what is relevant for "standard" reality.
Skybird
03-28-08, 07:57 PM
Hey, no need to get angry, Skybird. I admit I was making some exaggerations about psychology - I always like to exaggerate (other people have complained about it before) - and it was just revenge for you brushing aside computer science - which psychology is clearly competing with on the field of artificial intelligence - and for declaring someone of Asimov's capacity as "moved beyond by time".
I became angry at the end of your posting indeed. Maybe turning that direction in your posting, towards generally accusing another branch of science, wasn't that good an idea. I am very critical of psychology myself - but at least I have a better insight into and perspective on why I am. Yours were generalised catch-phrases.
I do not minimize Asimov; hell, I do not even know much about his scientific work, I only knew some of his fiction. But the laws of robotics he formulated have clearly been ignored for years, if not decades. I again refer to the many robots the military uses - even a missile that steers itself is a robot, so is a cruise missile, and you know what kind of autonomously operating drones are next. None of them gets the three laws of robotics programmed into their digital heads, and the laws do not manifest themselves all on their own. Also, they would conflict with a weapon's intention.
And it was not very nice to call me "fanatical" - I am also a critic of unbiased slavish worship of technology - because that is a fatal disregard of nature.
I adapted to you.
But I think I have already understood why you feel so compelled to play down the significance of the next-gen robots: in your discipline everyone is jealous that computer science has results to show - which must be a shock after those long years when everyone could ridicule CS because it could not even create a machine that walks, let alone one that possesses the intelligence of the lowest insect.
Total nonsense you are telling. I do not play down future robotics. I said we should choose wisely what kinds of technological paths we want to walk in the future, and which ones better not. If not embracing totally uncritically each and every thing that is new equals "total rejection" in your opinion, I cannot help it. I am aware that that robodog's software is an achievement. But neither is it as unique as it seems - there are several other robots walking autonomously on 2, 4 and 6 legs - and I refuse to call that machine an intelligent entity. It is not intelligent. It is intelligently designed (if that is not ironic, when Skybird talks of intelligent design...), but the intelligence here is 100% on the side of the engineers and programmers, not on the side of the machine. Man, it's just a machine! A machine more developed than some others, but a machine. No awareness, no consciousness, no intelligence. Just cleverly programmed software, made by Homo sapiens. An automaton. Like a kitchen timer that lets you program six alarm times simultaneously.
And finally, psychology is not "my" discipline. But I can tell you as an insider that, while a lot of things have gone wrong in its history, mainly regarding its self-perceived competition with classical physics' working methods, which it uncritically copied without ever asking whether that made sense, I can assure you that I never heard of any psychologist who is driven by the urge to compete with software engineers regarding intelligence. The engineers started the idea of a competition, not the psychologists. Cognitive science and psychology simply deal more or less competently with human intelligence, and have little interest, if any at all, in software development of that kind. You twist things very much.
Now we see we are getting close to that point. Let me make it clear that I never said Big Dog is "very" intelligent. Yet I see it possesses that little bit of intelligence which is required to move. But it could be combined with a purpose - like walking around and surviving or - seeing who funds the development - to "kill" - as well as a way of feeding itself (it could burn dry wood in a wood gasifier, a "Holzvergaser"), and we would have created a primitive lifeform. I think that deserves to be hailed as a major first step towards artificial life, not more.
Not more? A new lifeform? That? I can't take you seriously with such statements, sorry. You want to claim far more fame for your business than it deserves, and when you are not given that, you complain about being totally rejected. I do not totally reject it, I just do not hold it to such unreasonable, totally overblown standards: neither standards that label a thing like robodog "intelligent", nor standards that consider such a machine to be a primitive lifeform.
Do you know www.dream-aquarium.com (http://www.dream-aquarium.com)? It is the most realistic aquarium screensaver I ever saw; it's worth a look, really. Tell me: these realistically moving and realistic-looking fish - are they also "primitive lifeforms", and is the screensaver a primitive early stage of an aquarium?
Hardly.
So my compliments for achieving what has been achieved with robodog, and best wishes for future development. But don't expect me to see it as either intelligent or a lifeform. Whether a machine can be that needs to be seen in a far-away future, and since we have no experience with living, intelligent machines, we do not know whether that is possible; thus all expectations remain speculative. You hope for it, that is okay; I liked science fiction too, and have sympathy for it.
But I don't take it for granted that it will be like that someday. And I also do not take it for granted that in that future world it will even be a desirable option - because we do not know what the far-away future will be like, and what man will be, then.
However sadly you made two fatal errors in your reply to my last post, when you were admitting what I had claimed in an earlier post: that you are latently religious.
Am I? No, I am not - more precisely, I attack religion at every opportunity and hack it into pieces. But like all men, I am - among other qualities that define us as human beings - a spiritual being, in the sense that I, like everybody, search for answers to: Why am I here? How long do I live? Where do I come from and where do I go? etc. Homo sapiens is spiritual, and that is a necessary effect of being self-aware and forming a clever mind - in short, of developing this set of qualities that I sum up under the meta-label "intelligence". In the end, spirituality means much the same as being self-reflective, for which self-awareness probably is a precondition. And you can see even some apes, well, actually even birds, being that self-aware, and thus I conclude: being self-reflective to a certain degree.
But they probably are not religious, since religion is an invention of man. To me, spirituality and religion are mutually exclusive. I fight against religion. I cannot escape being necessarily spiritual. And I don't know if you really understand what I mean by "spiritual".
So what is it you think I latently am? ;) Keep it simple: I am mortal. That is as precise as it gets.
Oh-oh. Clearly you must have meant something else when you wrote this. An infinite number of permutations from a finite set - this defies mathematics. And that is profoundly unscientific, as you will certainly agree.
I do not, and I must tell you that this is a widespread standard in research. While memories are linked by some to chemical molecules storing them, there is not much objection in brain research that the brain nevertheless has a system, a model, a call-it-what-you-want, to store memories in a way that the storage capacity is never-ending. That is probably why the brain's memory does indeed operate in a kind of holographic mode. Every single memory-quality does not just claim some space, but changes the structure of the complete memory system, and in every smallest quantity of this memory all memories can be found, just as you have all the visual information of the complete hologram in every single piece and bit of it if you break it. We have even learned in experiments that the brain has this capability of storing everything in all places in total completeness, too. That surpasses chemical, local storing of data. It also surpasses what a hard drive is doing.
As I said, the past 20 years in brain research have been revolutionary.
It's just that from your extremely limited perspective of wanting to be a technologist, as you name it, you maybe cannot follow that. As I see your argumentation, you attempt not to understand reality, but to pick it and squeeze it and minimize it until it is so small that you can carry it with one hand and the means of your technologist's set of tools can handle it. In other words you do not adapt to wide reality, but you try to make it so small that it must adapt to your definitions of it.
That is intelligence.
Yes it is - yours, the engineer's - not the machine's. ;) No kidding. Otherwise your colleagues wouldn't have been able to program that damn thing of a robodog. ;)
We must reduce reality to an abstraction because there is no way to fully understand reality. To do so would mean squeezing a quart into a pint pot. Or will you deny that a pebble in reality consists of a hundred billion atoms in a specific geometric / energetic configuration? Or do you want to go that far with your wide reality?
Not in the context of software engineering. I also said nothing like what you state above.
But I could have a new debate with you on the fact that atoms are 99.99999...% nothingness, and particles are just random clouds and tendencies to exist. Or, in short, we now enter the world of incredibly far-reaching abstraction. Or that the concept of time we have probably is an illusion. Some physicist whose name I forgot was quoted with this: in the end it all is mind that dances with itself. A famous book by Gary Zukav tells the reader that the Chinese word for physics translates into "Wu Li - structures of organic energy". A Buddhist might tell you that it all is just images, that they are empty, that only the mind is there. One mind. - If I were to have a new debate, that is.
Now here you are finally retreating into the realm of religion.
See above what I said on religion and spirituality. It makes no sense to me to split human nature into these many different things that all form into one thing, each of them being necessary to give a complete image and definition of what man is. Yes, I do not keep spirituality out of science, and many famous physicists did not do that either. But I keep religion out of it, and spirituality maybe means something different for me than what you think - I don't know what you think.
But I do not hate you for that - in fact I like you because you have the guts to develop and defend your own ideas against anyone. That makes you a difficult person, but keep it up!
I don't defend what I see as indefensible and unreasonable; doing that would make me a very stupid person. But I try to form opinions that are so well-founded that later I hopefully will not have to correct them too often. That is only reasonable acting, imo.
Well, that's good for you. But these are not even object-oriented languages (they are procedural), i.e. at the level of the 1970s. Today we have ontological modelling, neural networks, adaptive algorithms, game theory, computer vision, parallel computing, etc. I am not trying to sound smart here; you should just freshen up your ideas about advanced programming - it is no longer limited to the mere calling of subroutines - instead you can already create dynamic programs in which you need not know how exactly they are solving the problem. But of course it's still not nearly enough to tackle the problem of intelligence.
Ahem - I was not serious. That GFA and Pascal do not compare to today's standards, I can conclude myself. I finished school in 1985 ;)
Sorry I must keep busting your balls, really sorry. I am acting on the assumption that someone with your capacity will be able to handle it ;)
I can't see you bursting my bubble - can you handle that yourself? ;) On some very fundamental basis, which is more about a general paradigm of world orientation and not just about software and technology and psychology, we totally disagree. That's how it is, and it is not any different.
GlobalExplorer
03-29-08, 07:44 AM
I am tired of it. But I can't let go as long as you - in between your very interesting passages - keep falling back into your "spiritual" phase, when it all gets ludicrous again.
Sorry I must keep busting your balls, really sorry. I am acting on the assumption that someone with your capacity will be able to handle it ;) I can't see you bursting my bubble - can you handle that yourself?
Skybird you have a bad habit of reading and typing too fast and trying to drown disagreement with sheer verbosity. This is not the first time that you have barely skimmed my words and then accused me of something I never said.
I do not, and I must tell you that this is a widespread standard in research. While memories are linked by some to chemical molecules storing them, there is not much objection in brain research that the brain nevertheless has a system, a model, a call-it-what-you-want, to store memories in a way that the storage capacity is never-ending. That is probably why the brain's memory does indeed operate in a kind of holographic mode. Every single memory-quality does not just claim some space, but changes the structure of the complete memory system, and in every smallest quantity of this memory all memories can be found, just as you have all the visual information of the complete hologram in every single piece and bit of it if you break it. We have even learned in experiments that the brain has this capability of storing everything in all places in total completeness, too. That surpasses chemical, local storing of data.
I read your - very unintelligible - ideas about infinite storage in holograms, transmathematic structures and spiritualist science - nothing short of a mixture of mysticism and science as Letum said - if we were at university I think sooner or later it would lead to your removal from the lecture room - unfortunately.
Tell me: these realistically moving and realistic-looking fish - are they also "primitive lifeforms", and is the screensaver a primitive early stage of an aquarium?
Of course not, because they cannot interact with the physical world. Robots that walk autonomously, feed, and fulfill a purpose can.
But in the end you have agreed that Big Dog is an achievement, and we both agree that technology must be critically monitored and sometimes limited. Apart from that we will never agree - but it is not really important whether a human is spiritual or a supercomplex information system - in the end our existence stays the same. As to AI, the main work is still to be done, so we can only let it begin and limit its dangers.
Let's not forget that this thread was actually about the dangers, and the primordial shock, of artificial life forms, not about spiritualist world models.
GlobalExplorer
03-29-08, 08:17 AM
Our laws imposing penalties on people who violate laws would not make any sense anymore. We would have no basis on which to form social communities. But I do not wish to say by this that we "must" have a free will or else our societies would crumble. It's just what has been debated in jurisprudence for centuries.
No!
To say that if we had no free will we would choose to abandon ethics, the punishment of crime, and that our societies would crumble is a contradiction in terms.
Very good. In the end it would not make a difference if we don't have free will; that's why I myself don't fight against this idea. If we indeed had no free will, we would still continue debating the question, possibly until the end of humanity. Everything stays the same without free will.
Firstly, everything that is conscious knows what consciousness is by definition.
I like that definition.
I would also say that everything that believes it has consciousness is conscious.
Let's say I could create a program - with about the same deductive capabilities as the human brain - where all low-level activity is controlled by one high-level structure, "Self". I could then also hard-wire a mechanism, "Consciousness", to constantly feed in the realization of being conscious, very much like the human illusion of having consciousness and free will. It would also require a mechanism to reflect on thoughts, i.e. analyze its own processes and create new processes from that.
When asked, this computer system would say that it has consciousness, and it would also raise questions about the meaning of consciousness, because it will not find a logical explanation of this thing, yet it must believe in its existence because it can experience it.
I am not sure whether that would be enough to solve the problem of artificial consciousness, but the end result would be a machine that believes itself to be a conscious self, exactly like a human.
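Purely as a sketch of the kind of structure I mean, here is a toy version in Python (my own illustration; every class and method name is invented, and of course nothing in it is actual awareness - it only shows the wiring I described):

class Self:
    def __init__(self):
        self.thoughts = []                      # log of low-level results
        self.beliefs = {"conscious": False}

    def consciousness_feed(self):
        # Hard-wired mechanism: keep asserting the belief of being conscious.
        self.beliefs["conscious"] = True

    def run_low_level(self, stimulus):
        # Stand-in for all the low-level activity supervised by "Self".
        result = "processed(" + stimulus + ")"
        self.thoughts.append(result)
        return result

    def reflect(self):
        # Analyze its own processes and form a new "thought" about them.
        summary = "I notice I have produced %d thoughts." % len(self.thoughts)
        self.thoughts.append(summary)
        return summary

s = Self()
s.consciousness_feed()
s.run_low_level("camera frame")
print(s.reflect())
print("I am conscious." if s.beliefs["conscious"] else "No claim made.")

The point is only that such a system would report being conscious and could talk about its own processes - whether that amounts to anything more than a report is exactly the open question.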
Where this consciousness takes place is a good question - it is out of this world - very much like Skybird says - like every self is out of this world. I can see where the spiritualism comes from, yet for me the spirit is also physical; it only comes about through complexity.
Skybird
03-29-08, 11:46 AM
GE,
I typed a reply, but then found myself so annoyed by your arrogance and by your accusing me of what you extensively did - and do - yourself, that I just found it no longer worth it for me to reply. As I see it, you do not adapt to the standards set by reality and the world, but try to make both adapt to your toolbox. You mix up the process of simulation with the object of simulation. But that simply is not my problem; it is yours, and yours alone.
Such accusations are empty without even the most basic reasoning behind them, SB.
GlobalExplorer
03-29-08, 12:12 PM
You're welcome skybird ;)
GlobalExplorer
03-30-08, 08:06 AM
Just to make clear what we have been discussing here.
Among other things there was dissent about the extent to which future developments of this robot - once it is provided with a purpose, a way to feed, the means to survive several years in a (harsh) environment, and other functions - could be seen as a (primitive) artificial lifeform, like an insect that cannot reproduce, or not.
http://www.christian-wendt.org/SUBSIM/robot1.jpg
Later I defended the opinion that it seems possible to construct a machine that calculates all thought processes characteristic to the human brain, including a sense of consciousness.
I basically took the position that if the thought "I feel I am alive" can take place inside the human brain, it might as well take place inside a computer. The implications are the same, i.e. the machine will believe itself to be conscious, very much like we do.
Of course I have no idea if technology will get that far or if we hit a brick wall somewhere, or if we even want to build such a machine.
Unfortunately it got a bit heated - and I was declared more or less insane - but I don't take it to heart.
http://www.christian-wendt.org/SUBSIM/robot3.jpg
XabbaRus
03-30-08, 10:01 AM
Still too slow though. When they can make it run as fast as a real dog I'll be worried.