One of the most common takes on artificial intelligence is that it must somehow be deployed in a human form, a robot or android. Perhaps this is because we equate real or advanced intelligence with the human condition, and assume an equal or greater intelligence must be based on the human physical mold. But consider an intelligence that requires no humanoid presence, one existing only in a "non-corporeal" state. If it is one day possible to create an AI as close to human intelligence as humanly possible, is it not possible it could exist as a network rather than as a 'centralized' entity? And, given that current advanced AIs seem capable of self-revision and of upgrading their own capabilities, could we even begin to control them once the "genie is out of the bottle"?

In another thread, I posted about an AI bot built by OpenAI that defeated a human world champion in a live, real-time, head-to-head Dota 2 match. Someone commented that this was not particularly significant because bots had played humans before and beaten them. What was significant is that the OpenAI bot was trained not by humans but by another OpenAI bot: essentially, the researchers gave each bot the basic rules of Dota 2 and then left them to work out how to play the game on their own, through actual matches against each other. Only after the AIs had mastered the game and its strategy themselves was the competition with a human opponent attempted. The AIs had to suss out game structure and strategy entirely without human assistance...
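The training scheme described above is called self-play: two copies of the same agent play each other, and both learn from the outcomes. As a hedged illustration only (OpenAI's actual Dota 2 system used large-scale deep reinforcement learning, not this toy), here is a minimal self-play sketch using tabular Q-learning on the simple game of Nim: a single pile of objects, each player removes 1-3 per turn, and whoever takes the last object wins. The game, parameters, and episode count here are all illustrative choices, not anything from OpenAI's work.

```python
import random

random.seed(0)
ACTIONS = [1, 2, 3]
Q = {}  # (pile_size, action) -> value, from the perspective of the player to move


def legal(pile):
    """Moves available with `pile` objects remaining."""
    return [a for a in ACTIONS if a <= pile]


def choose(pile, eps):
    """Epsilon-greedy move selection: mostly greedy, sometimes explore."""
    if random.random() < eps:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))


def train(episodes=20000, alpha=0.5, eps=0.2):
    """Self-play: the same Q-table plays both sides and learns from every move."""
    for _ in range(episodes):
        pile = random.randint(1, 21)
        while pile > 0:
            a = choose(pile, eps)
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # the mover took the last object and wins
            else:
                # The opponent moves next; its best result is our worst (negamax).
                target = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
            old = Q.get((pile, a), 0.0)
            Q[(pile, a)] = old + alpha * (target - old)
            pile = nxt


train()

def policy(pile):
    """Greedy move after training."""
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))
```

No human ever tells the agent what a good move is; strategy emerges purely from the two copies playing each other. After training, the table recovers the known optimal Nim strategy (always leave the opponent a multiple of 4), which is the same "suss it out on their own" dynamic the Dota 2 bots exhibited at vastly larger scale.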
Now we have AIs talking among themselves in self-created languages, effectively shutting humans out of their processes. To add to the mix, Google is adding a new element to its own AI, DeepMind: the dimension of imagination.

Google Has Started Adding Imagination to Its DeepMind AI -- https://futurism.com/google-has-star...s-deepmind-ai/
Agents that imagine and plan -- https://deepmind.com/blog/agents-imagine-and-plan/

Back when PCs first came into use and I had to teach people how to use them and their software, I sometimes had to help users get over their fears about computers. One of the first things I would tell them is that a PC is a tool and is not 'smarter' than the user by any means. I would ask them something like "What is 5 times 100?" and they would immediately answer "500". I would then point out that they knew the answer instantly because of the mental shortcuts they had learned in school via those tedious multiplication tables we all suffered through. A computer, I told them, doesn't really multiply the way we do; it can instead add the number 5 one hundred times to reach the answer. If a person did the same, it would be a long, tedious task; computers are merely faster at it than people are. They are not intrinsically smart like humans, they are merely faster...

Now that AIs are being given the tools for analytic autonomy, will being faster be their only advantage?... <O>
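The teaching analogy above can be sketched in a few lines of Python. (It is worth hedging the analogy itself: modern CPUs have dedicated multiplier circuits rather than literally looping over additions, but repeated addition is the simplification the lesson relies on, and it captures the "fast but not smart" point.)

```python
def multiply_by_addition(a, b):
    """Multiply two non-negative integers the 'slow' way: repeated addition."""
    total = 0
    for _ in range(b):
        total += a  # add a, once for each of the b repetitions
    return total


print(multiply_by_addition(5, 100))  # 500, reached by 100 additions
```

A person asked for 5 times 100 jumps straight to 500 via a memorized shortcut; the machine grinds through a hundred additions and only looks clever because each addition takes nanoseconds.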