User Panel
Posted: 1/29/2020 2:28:08 PM EDT
Talking about the time when (not “if”) technology becomes self-aware. Potentially autonomous if it’s released to the Internet...
Pros? Cons? I mean I’ve seen the documentaries - Skynet, the Matrix, you name it, I love it, but if this Internet thing goes pear-shaped... it could be worse than Y2K. |
|
That Skynet thing and all those types of Terminators keep me up nights.
|
|
I have a PhD in computer science and I have zero concerns whatsoever about the singularity.
|
|
|
Well, it's not like we're short on panic threads lately, what's one more?
|
|
|
Quoted:
I have a PhD in computer science and I have zero concerns whatsoever about the singularity. View Quote There is enough not known about what self-awareness is, heuristic decision-making, emotion, etc. that even if it was replicable (and there is NO evidence that it is), we would not know how to replicate it. It would be like asking a cartographer in 800 A.D. to draw you a map of the Great Lakes region. |
|
|
Quoted: This. Computers cannot, and do not, "think". They execute decisions that humans made and that are stored within them. There is enough not known about what self-awareness is, heuristic decision-making, emotion, etc. that even if it was replicable (and there is NO evidence that it is), we would not know how to replicate it. It would be like asking a cartographer in 800 A.D. to draw you a map of the Great Lakes region. View Quote It will require some huge breakthroughs, though. |
|
Quoted:
Now tell us why. View Quote The work that has been done on machine learning, for example, still fails spectacularly, even when it has been designed and created by the brightest minds on the planet. Computers lack agency and can do no more than what we tell them. There is no concept of right or wrong, no metaphysical understanding of emotion or intelligence, nothing. It's not even defined. Processing power is not a substitute for an understanding of why. Computers can do nothing more than mimic. Consider speech recognition and AI conversations. These things operate on the basis of reinforcement learning and pattern analysis. A training program might read through a billion pages of text and then hold a conversation, but it will never understand why. It can't. That's uniquely human, and so is creativity. As processing power increases, there are complex problems a computer can help solve. But it can never do more than simply exist for its designed purpose. |
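The "mimicry via pattern analysis" point above can be made concrete with a toy sketch (an editor's illustration, not any poster's code; the corpus and function names are invented): a bigram model that strings words together purely from counted patterns, with no notion of what any word means.

```python
import random

def train_bigrams(text):
    """Count which word follows which: pure pattern tallying, no 'understanding'."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def babble(table, start, length, seed=0):
    """Generate text by blindly walking the counted patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
table = train_bigrams(corpus)
print(babble(table, "the", 6))  # plausible-looking word salad, zero comprehension
```

Scale the same trick up to billions of pages and far better statistics and you get something that can hold a conversation; whether that ever amounts to "understanding" is exactly what this thread is arguing about.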
|
If Hollywood is to be believed, a strong independent woman will arise to save us all.
|
|
Quoted:
Talking about the time when (not “if”) technology becomes self-aware. Potentially autonomous if it’s released to the Internet... Pros? Cons? I mean I’ve seen the documentaries - Skynet, the Matrix, you name it, I love it, but if this Internet thing goes pear-shaped... it could be worse than Y2K. View Quote |
|
Wait, I thought singularities were black holes that allowed time travel....
|
|
|
Quoted:
For one, we have yet to produce even a moderately complex computer system that is not fraught with errors. The technical ability of humans to create a system that is self-improving does not and cannot exist. As a general rule, the more complex a system is, the more flaws that exist. It isn't even possible to identify these flaws in most cases, much less fix them. View Quote |
|
Quoted: This. Computers cannot, and do not, "think". They execute decisions that humans made and that are stored within them. There is enough not known about what self-awareness is, heuristic decision-making, emotion, etc. that even if it was replicable (and there is NO evidence that it is), we would not know how to replicate it. It would be like asking a cartographer in 800 A.D. to draw you a map of the Great Lakes region. View Quote Machine learning applications are often entirely self-taught, meaning that a person did not write the code that makes them do the things they do. It is still one of the great mysteries: people can design and program a complex neural network, but nobody really knows how it actually learns to do stuff. I have written several myself, and while I can explain what each part of the code does, I simply cannot tell you how the code actually interprets the data it is given to arrive at the conclusions that it does. No idea at all. True self-aware AI will be an extension of this. Deep learning is basically building a layered network of simulated neurons that link together and activate in a way not entirely unlike those in a brain. Modern tech just has not unlocked the ability to get to the fully functioning brain part yet. It will happen, and many of the required pieces are already in progress or complete, just not all at the same time in the same place. |
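A minimal sketch of the "layered network of simulated neurons" idea described above (an editor's illustration with hand-picked rather than learned weights): two layers of threshold units computing XOR, which no single unit can do on its own.

```python
def step(x):
    """Crude stand-in for a neuron 'firing': 1 if the weighted input clears the threshold."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    """Two layers of simulated neurons computing XOR."""
    # hidden layer: one unit detects 'either input on', another detects 'both on'
    h1 = step(x1 + x2 - 0.5)   # behaves like OR
    h2 = step(x1 + x2 - 1.5)   # behaves like AND
    # output layer: fire when 'either' is on but 'both' is not
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Real deep networks just stack many more of these layers and learn the weights from data instead of having them written in by hand, which is where the opacity the poster describes comes from.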
|
Quoted:
Most famous last words: “Oh goodness Vinkat, I have forgotten to call rule #1 in latest build, is to late to stop production deploy?” View Quote |
|
|
Quoted:
I have a PhD in computer science and I have zero concerns whatsoever about the singularity. View Quote |
|
Artificial Intelligence doesn't necessarily mean consciousness.
That's actually the scariest thing about it, that and the monkeys giving it purpose. It's called a singularity because it's an event horizon, technology that changes things so fundamentally that predictions become impossible. So why worry? |
|
Quoted: This is not entirely accurate anymore. Machine learning applications are often entirely self taught, meaning that a person did not write the code that makes them do the things they do. It is still one of the great mysteries, people can design and program a complex neural network but nobody really knows how it actually learns to do stuff. I have written several myself and while I can explain what each part of the code does, I simply cannot tell you how the code actually interprets the data it is given to arrive at the conclusions that it does. No idea at all. True self-aware AI will be an extension of this. Deep learning is basically building a layered network of simulated neurons that link together and activate in a way not entirely unlike those in a brain. Modern tech just has not unlocked the ability to get to the fully functioning brain part yet. It will happen, and many of the required pieces are already in progress or complete, just not all at the same time in the same place. View Quote |
|
On which part?
|
|
|
Quoted:
Talking about the time when (not “if”) technology becomes self-aware. Potentially autonomous if it’s released to the Internet... Pros? Cons? I mean I’ve seen the documentaries - Skynet, the Matrix, you name it, I love it, but if this Internet thing goes pear-shaped... it could be worse than Y2K. View Quote What are you worried about? What do you mean by "self-aware"? What about autonomous? (this could mean many things) |
|
|
The super smart AIs don't concern me as much as those who have control of them.
Man has become the master of this planet solely because of his higher intelligence. Ceding that collective edge to one or a few will effectively make the rest of us their cattle. |
|
Quoted:
I think you have a fundamental misunderstanding of unsupervised learning and neural networks. View Quote Unless there has been a recent breakthrough that I have simply missed, nobody really knows for sure what happens and why between the input and output layers. You can read the numbers, you can even track a bit of data from one end to the other, but what really makes everything tick is a bit...too complex for people to actually write instructions for or explain exactly how it works. It is kinda the whole point of things like this. They do things that are beyond the simple if(a) then b sort of programming. It is pretty fascinating. One of the things that I have been trying to do is figure out how to answer that question. There is a lot of speculation about how they work but nobody really knows for sure. |
|
Quoted: I think it's important to differentiate meanings here, otherwise "big brains" will just scoff at you without addressing what we are trying to talk about. (see, FS7 posts above) What are you worried about? What do you mean by "self-aware"? What about autonomous? (this could mean many things) View Quote |
|
I would be less concerned with AI becoming self-aware and more concerned with a Borg-like hive mind resulting from everyone being plugged directly into the net.
Already, young people are far too often putting more emotion into an online presence than a real-life one. |
|
|
|
Quoted: This. Computers cannot, and do not, "think". They execute decisions that humans made and that are stored within them. There is enough not known about what self-awareness is, heuristic decision-making, emotion, etc. that even if it was replicable (and there is NO evidence that it is), we would not know how to replicate it. It would be like asking a cartographer in 800 A.D. to draw you a map of the Great Lakes region. View Quote In much the same way, it is extremely unlikely that machine intelligence and awareness will resemble human or animal intelligence any more than a 737 resembles a bird or a submarine resembles a dolphin. But in its own way, machine intelligence will continue to grow. It may not be conscious in any way that we recognize, but that doesn’t really matter any more than it matters that a self-driving Tesla lacks the natural instincts and intelligence of a horse. In the short term, augmented reality and augmented human intelligence will become widespread, and those early adopters will have a huge advantage over “natural” humans in many ways. The singularity doesn’t necessarily have to come from an all-knowing Skynet that develops consciousness; it may simply start from humans that augment themselves into weakly godlike intelligences. For a horse, it doesn’t matter that a human is required to drive an automobile. It’s simply obsolete. For us pre-singularity humans, it doesn’t really matter whether the intelligence at the core of a hyperintelligence is purely machine or some amalgamation of human and machine, as it will still be incomprehensible to us. |
|
Quoted:
I'm not scoffing at anyone. I'm trying to illustrate that fears are driven by science fiction, not science fact. View Quote Also, much of what we have today was science fiction 50 years ago. |
|
My totally serious opinion, from an agnostic, relatively tech-savvy GenX'er:
I believe that humans should not endeavor to create an artificial, truly sentient being. And I believe that because humans are too egotistical/myopic/whatever to create something that will not in some way ultimately knock us from the top of the food chain. |
|
Quoted: I have written several from scratch, I mean from the ground up, not just modifying some tensorflow script. Expert? No, but certainly more than most here. Unless there has been a recent breakthrough that I have simply missed, nobody really knows for sure what happens and why between the input and output layers. You can read the numbers, you can even track a bit of data from one end to the other, but what really makes everything tick is a bit...too complex for people to actually write instructions for or explain exactly how it works. It is kinda the whole point of things like this. They do things that are beyond the simple if(a) then b sort of programming. It is pretty fascinating. One of the things that I have been trying to do is figure out how to answer that question. There is a lot of speculation about how they work but nobody really knows for sure. View Quote |
|
|
|
AI/SA awaits microscopic, densely packed artificial neurons with dendrites able to form synapses (a brain), plus high-resolution sensory inputs so the being can interact with and test its environment and observe reactions, exactly as we learn.
|
|
|
No concern at all. Self-aware artificial intelligence may very well be a thing some day, but I doubt that day will come in my lifetime.
|
|
Quoted:
Of course people know what happens between the input and output layers. The hidden layers don't design themselves. The functions applied between the input and output are exactly what the designer says they are, almost always tailored to what you want the network to do. View Quote Knowing and understanding how the network is made is easy (relatively, anyway). Knowing what it does is also easy. Knowing how it does it...that is the tricky bit. You can set parameters for your inputs, expected outputs, and while the network training is little more than adjusting the weights and biases of all the connections between individual perceptrons and layers...it is still basically magic. Especially when you get into the networks that can grow as they learn. Then you are several steps removed from the code you wrote vs the actual product. Simple networks are a lot easier to understand than the truly complex ones. |
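To illustrate the "training is little more than adjusting the weights and biases" point, here is a toy single perceptron (an editor's sketch, not anyone's production code) that learns the AND function by nudging its weights toward each correct answer. No if/then rule for AND is ever written.

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Learn weights for a single simulated neuron by nudging them after each mistake."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out
            # the entire 'learning' step: shift weights and bias toward the target
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# teach it AND from examples alone
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # -> [0, 0, 0, 1]
```

Note that the trained weights are just numbers; reading them off says almost nothing about "how" the network decides, which is the opacity being debated above, and this is the trivial one-neuron case.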
|
John Lovell with the Warrior Poet Society just had a pro hacker on his channel talking about this.
HACKER warns of ‘The Singularity’, A.I., & Zero-Day |
|
Quoted: What you are saying goes against most of what the ML community at large seems to believe. There is still a lot of research being conducted to determine what is going on in and between layers in neural networks. As far as I have been able to find, so far it is still largely unanswered in any hard explanation. (That includes researching while typing this post, to see if maybe I missed that breakthrough; no, I have not.) There is a fair bit of theorizing and "probably," but not a lot of firm answers. If you have discovered a solution to this that is repeatable, you should publish it and become famous. Knowing and understanding how the network is made is easy (relatively, anyway). Knowing what it does is also easy. Knowing how it does it...that is the tricky bit. You can set parameters for your inputs, expected outputs, and while the network training is little more than adjusting the weights and biases of all the connections between individual perceptrons and layers...it is still basically magic. Simple networks are a lot easier to understand than the truly complex ones. View Quote |
|
Quoted:
Are people concerned about toddlers taking over? View Quote |
|
AR15.COM is the world's largest firearm community and is a gathering place for firearm enthusiasts of all types.
Copyright © 1996-2024 AR15.COM LLC. All Rights Reserved.