(Topic ID: 186507)

Human AI: Will We See It In Our Life?

By Azmodeus

7 years ago


Topic Stats

  • 144 posts
  • 37 Pinsiders participating
  • Latest reply 6 years ago by Azmodeus
  • Topic is favorited by 3 Pinsiders

There are 144 posts in this topic. You are on page 2 of 3.
#51 7 years ago
Quoted from DanQverymuch:

Yes, evolution's "design" is amazing, but having to constantly shovel in fuel, and eliminate almost as much leftover stuff, fittingly called waste, is not ideal.

Give it time, we've only had a few hundred thousand years.

#52 7 years ago

Humans are such a small blip in time, just a stepping stone to something better.

Hopefully we last long enough to start spreading out among the stars.

#53 7 years ago
Quoted from zr11990:

Anyone ever think about the fact that humans have a soul which gives them self awareness and you can't recreate that?

If it were a fact, then it would be possible to recreate.

#54 7 years ago
Quoted from vid1900:

If they could transfer my consciousness into a 16 year old girl's body, I would be the murder queen.

If I had my consciousness transferred into a 16 year old girl's body, I'd never leave the house and my fingers would be waterlogged.

#55 7 years ago

Probably.

#56 6 years ago
Quoted from T7:

PezPunk did a good job of describing the difference in post #43.
It boils down to: Self Consciousness versus Computational Power -> you are mixing the 2 as being one and the same and they are not anywhere close. Computers are not "alive" -> but they do have lots of computational power -> it's not the same thing even though it makes a great plot point for SciFi movies.

What you and pez are basically dancing around is the "hard problem of consciousness". I am firmly in the camp that we do not need to solve this problem in order to achieve it synthetically.

I think there is more than one way for a computer to achieve it....even if it's "simulated"

You spoke about "magic" earlier...don't attribute magic to our own consciousness. It's simply an emergent property of our brains.

If we can achieve it biologically then there is no reason we can't recreate it synthetically...even if we don't fully understand how.

#57 6 years ago
Quoted from zr11990:

Anyone ever think about the fact that humans have a soul which gives them self awareness and you can't recreate that?

I wonder if when you create a soulless living being in a lab, they are left wide open to being infected by any passing lost soul?

I'm still trying to figure this out....

#58 6 years ago

I think all of this boils down to whether you're a religious person.
I do agree with the "simulated" intelligence vs actual intelligence argument.
But when you're talking about reproducing the functioning of the brain components in an artificial manner (for example with neural networks - http://www.makeuseof.com/tag/ibm-creates-neural-network-chip-large-mouse-brain/) there is theoretically nothing that would stop you from creating a brain that functions the exact same way a human brain does.

Then when you get into the soul as the differentiator between the two types of intelligences, well you get outside the realm of logical discussion and into the religious one.

#59 6 years ago

I think the details of the first AI will be interesting, if we live through the event as humans. I hope we do. Again, I think we really need to limit system access to prevent immediate destruction. We may not be able to. I do think the first AI will cause untold destruction, because I believe it will be a human disguised as an AI. In other words, all the human baggage will come with it. Can you imagine an AI assuming even a small level of control of the military systems of the world, today?

Even a single weapons-control system could end the world. We have become too dangerous as humans. But we may still progress; in fact, I will bet on it.

I liked the comment about hoping we get to the stars. All new horror there. But I hope so too. We need to think bigger as a race, entirely.

#60 6 years ago
Quoted from PhilGreg:

I think all of this boils down to whether you're a religious person.
I do agree with the "simulated" intelligence vs actual intelligence argument.
But when you're talking about reproducing the functioning of the brain components in an artificial manner (for example with neural networks - http://www.makeuseof.com/tag/ibm-creates-neural-network-chip-large-mouse-brain/) there is theoretically nothing that would stop you from creating a brain that functions the exact same way a human brain does.

I work in the field of neural networks. They are great for data analysis / entity correlation / complex pattern recognition / brute-force trial-and-error learning, but they don't 'think' and aren't any closer to consciousness than your iPhone is. The hype comparing these systems to a human (or animal) brain is overstated.

#61 6 years ago
Quoted from pezpunk:

I work in the field of neural networks. They are great for data analysis / entity correlation / complex pattern recognition / brute-force trial-and-error learning, but they don't 'think' and aren't any closer to consciousness than your iPhone is. The hype comparing these systems to a human (or animal) brain is overstated.

And I think you are underestimating the pace of change that will occur over the next 10-20 years.

#62 6 years ago
Quoted from pezpunk:

I work in the field of neural networks. They are great for data analysis / entity correlation / complex pattern recognition / brute-force trial-and-error learning, but they don't 'think' and aren't any closer to consciousness than your iPhone is. The hype comparing these systems to a human (or animal) brain is overstated.

The hype is not about current or near-term capabilities and applications, but the ultimate potential.

Assuming we have the right training algorithm figured out, with the ability for the network to self-organize, and we produce a network with a comparable number of artificial neurons, provide it the right kinds of inputs and motivations (positive/negative reinforcement), and give it a large enough dataset, then how is it much different from a biological thinking machine like a human?

I believe true 'thinking' and 'consciousness' are emergent features of the highly complex biological neural network in our brains and not some other magical, irreproducible thing. Our brains have 100 billion neurons, are connected to a sophisticated sensor suite, and our training algorithm has been shaped over millions of years.

In comparison, those artificial neural networks performing pattern recognition and big data processing like the ones you're working on are probably hundreds, maybe thousands of nodes big at best. Even an ant has about 250,000 neurons, but that doesn't mean it's capable of achieving consciousness. It's still not much more than an algorithm operating within its boundaries.

Like the ant, we aren't yet able to scale a neural network up to a level that allows those emergent features to reveal themselves, but it's only a matter of time, and with each paradigm shift we see in electronics, computers, and machine learning research, it's looking like that will be possible sooner and sooner. That's the hype.
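To make "simple components, emergent behavior" concrete, here's a minimal sketch of a single artificial neuron in Python (the weights and inputs are made-up illustrations, not anything from a real network):

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of the inputs plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # sigmoid activation squashes the result into (0, 1)
    return 1 / (1 + math.exp(-total))

# a single "firing strength" from two arbitrary inputs
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

Training amounts to nudging the weights and bias (the positive/negative reinforcement mentioned above) so outputs like this drift toward the desired answer; the claim is that 'thinking' is just billions of these running together.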

#63 6 years ago

I imagine when we reach the point of successfully creating artificial intelligence that equals or surpasses human intelligence, we will also be in the business of modifying and improving our own intelligence.

#64 6 years ago
Quoted from Tommy_Pins:

I imagine when we reach the point of successfully creating artificial intelligence that equals or surpasses human intelligence, we will also be in the business of modifying and improving our own intelligence.
» YouTube video

I really do hope that we reach the point of manipulating our own DNA to improve our bodies and minds. Even if nothing else, to eliminate genetic predisposition to disease or repair genetic defects.

#65 6 years ago
Quoted from Wolfmarsh:

I really do hope that we reach the point of manipulating our own DNA to improve our bodies and minds. Even if nothing else, to eliminate genetic predisposition to disease or repair genetic defects.

I've known 8-10 people who have aborted their fetus because it had Down's syndrome.

So we are already improving our gene pool by chopping out the deadwood.

#66 6 years ago
Quoted from Astropin:

What you and pez are basically dancing around is the "hard problem of consciousness". I am firmly in the camp that we do not need to solve this problem in order to achieve it synthetically.
I think there is more than one way for a computer to achieve it....even if it's "simulated"
You spoke about "magic" earlier...don't attribute magic to our own consciousness. It's simply an emergent property of our brains.
If we can achieve it biologically then there is no reason we can't recreate it synthetically...even if we don't fully understand how.

I'm not dancing around anything - I very clearly understand what computers are and exactly how they work, is all. I did not attribute "magic" to anything - my comment was that there is no "magic" in computers, but there are far-reaching ideas about what can be achieved with computers that are not based in sound logic, but instead on SciFi novels/movies - which is akin to "magic" via computers.

FYI - I fully believe computer programs will be able to become self-modifying and be able to "improve" themselves. We can have some great functioning androids that will appear "alive"; they could simulate emotions - without feeling a thing. They will make decisions - without really making a choice -> just a computation -> just like the "decisions" computers make today. Transferring consciousness from a living person into a computer will NOT be real - at best it will be a simulation of the person that will truly feel nothing. It will not be the person any more than a picture of a person is the person.

#67 6 years ago
Quoted from Wolfmarsh:

I really do hope that we reach the point of manipulating our own DNA to improve our bodies and minds. Even if nothing else, to eliminate genetic predisposition to disease or repair genetic defects.

All of that is certainly possible. So like human-level (and beyond) AI, we will eventually achieve anything that does not defy the laws of physics...provided we don't destroy ourselves first. Which unfortunately has a high probability as we move forward. The more powerful the technology, the more dangerous it is.

1) Can we achieve AI before we nuke ourselves (or fill in your favorite civilization-ending apocalypse here)?
2) Will that AI help or destroy us?

If we get past those two unscathed we should be golden.

#68 6 years ago
Quoted from T7:

I'm not dancing around anything - I very clearly understand what computers are and exactly how they work, is all. I did not attribute "magic" to anything - my comment was that there is no "magic" in computers, but there are far-reaching ideas about what can be achieved with computers that are not based in sound logic, but instead on SciFi novels/movies - which is akin to "magic" via computers.
FYI - I fully believe computer programs will be able to become self-modifying and be able to "improve" themselves. We can have some great functioning androids that will appear "alive"; they could simulate emotions - without feeling a thing. They will make decisions - without really making a choice -> just a computation -> just like the "decisions" computers make today. Transferring consciousness from a living person into a computer will NOT be real - at best it will be a simulation of the person that will truly feel nothing. It will not be the person any more than a picture of a person is the person.

Okay...I feel like we are just going in circles now. I'm not arguing about "emotions" and whether or not computers will ever really have them (although I think that will eventually be possible). I'm arguing about "intelligence". Simulated or not makes no difference. Computers will become smarter than us...by a long shot. Sooner than most people realize.

We never need to prove whether their intelligence or emotions are real or simulated; it won't make any difference.

#69 6 years ago
Quoted from Astropin:

I'm arguing about "intelligence". Simulated or not makes no difference. Computers will become smarter than us...by a long shot. Sooner than most people realize.

From where you're coming from, I would argue that computers are already smarter than people, since they can compute much more data much faster - as long as they are programmed correctly. They aren't as adaptable, currently, and they need a new program installed to be "smart" at each new task. Eventually, we could have computers (installed in robots/androids) that have all the programs they need to be smart at many, many things - and this could include "learning" applications.

I'm sorry, but I'm of the opinion that many people base their understanding of what can/will be achieved on fiction - like Star Wars, Star Trek, The Matrix, etc. - because it all looks so believable in the movies. I've been a software architect for a long time; all I do is design systems and software, and I'm very aware of the latest and greatest advancements within the industry. Are you a software engineer, or do you work in some field where you really understand how computers do what they do? IMO, the more advanced you are in software engineering, the better you can identify what is really achievable with computers in the next 10, 100, or even 1000 years.

I realize it can be fun to imagine all sorts of possibilities where mankind can create/do anything, as if we have absolutely unlimited potential - as if we are our own gods - but that just doesn't jibe with reality.

#70 6 years ago

Time will tell. If it hasn't happened by 2040 I'll declare you the victor.

BTW, I don't arrive at any of my conclusions from science fiction and certainly not from TV or the movies. I'm just a science (real science) enthusiast. AI is one of my favorite topics (shocking I know). I do listen to all sides and have come to my own conclusions.

Oftentimes I think people working in the computer engineering fields can't see the forest for the trees.

#71 6 years ago
Quoted from T7:

FYI - I fully believe computer programs will be able to become self-modifying and be able to "improve" themselves. We can have some great functioning androids that will appear "alive"; they could simulate emotions - without feeling a thing. They will make decisions - without really making a choice -> just a computation -> just like the "decisions" computers make today.

If you break our brains down to the level of the neuron, there is no emotion and no 'choice' being made; there is simply a combination of simple calculations performed by clusters of neurons passing electrical signals to each other as a rudimentary means of communication. Emotions come from the baser, less rational parts of our brain, but are still created from calculations based on linkages of our neurons to various regions of the brain like the amygdala (this is an example of some of our positive/negative reinforcement inputs).

Going back to the ant: despite having a relatively limited "instruction set" and only a rudimentary communication system (pheromones), if you combine the ant with the rest of its colony, they are able to achieve some amazing things, even though there's no central source telling them what to do. Tunneling with thermoregulation systems, group defensive patrol behaviors, fungus farming, aphid herding, adaptive food scouting patterns. The individual ant doesn't understand any of these concepts, but the collective behavior emerges from the group anyway. The ant is analogous to a neuron.

Choice, emotion, consciousness are all complex abstract concepts that can emerge from the simple calculations performed by our neurons. Would a human-equivalent artificial intelligence's emotions not count because the calculation was performed on silicon instead of in wetware? I think due to the abstraction, the source is irrelevant.

The difference between a highly advanced artificial neural network's decision and a 'decision' by a computer today is that today's computers are only making 'decisions' based on if-then routines explicitly programmed into them by the software creators. In that case, it's not the computer's decision, it is the programmer's decision. A highly advanced neural network will make a true decision based on experience from previous similar situations, intuition, contextual understanding of the situation, or simply trial-and-error. And this is despite the fact that you can still acknowledge that a programmer built the comparatively simpler framework (theory of operation for a neuron) that underlies the decision that the AI makes.
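A toy way to see the contrast (all names and numbers here are illustrative, not anyone's actual code):

```python
def hardcoded_decision(temperature):
    # the programmer's decision, frozen into an if-then rule
    return "flee" if temperature > 50 else "stay"

def experience_decision(temperature, experiences):
    # decide by recalling the most similar past situation -
    # a crude stand-in for deciding from experience
    closest = min(experiences, key=lambda e: abs(e[0] - temperature))
    return closest[1]

# past (situation, action) pairs accumulated through trial and error
past = [(20, "stay"), (35, "stay"), (65, "flee"), (80, "flee")]
print(experience_decision(70, past))   # nearest memory is (65, "flee")
```

The first function's behavior was decided by its author; the second's depends entirely on what it has experienced, which is the distinction being drawn above.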

#72 6 years ago
Quoted from T7:

From where you are coming from I would argue that computers are already smarter than people.

This is not true, not in a general intelligence sense, or even in a pure processing context. The human brain is speculated to operate at the exaFLOPS level. The world's fastest supercomputers are still in the petaFLOPS range, with the first exaFLOPS supercomputer not expected to become operational until 2020.

Quoted from T7:

I've been a software architect for a long time, and all I do is design systems and software, and I'm very aware of the latest and greatest advancements within the industry. Are you a software engineer or work in some field that you really understand how computers do what they do? IMO the more advanced you are in software engineering, the more you can identify what is really capable with computers in the next 10, 100, or even 1000 years.

How would you make a confident assumption on where computers will be in 1,000 years? Even 100 years is pretty dubious. We went from believing human flight was impossible to landing on the moon in less time.

#73 6 years ago
Quoted from XXVII:

This is not true, not in a general intelligence sense, or even in a pure processing context. The human brain is speculated to operate at the exaFLOPS level. The world's fastest supercomputers are still in the petaFLOPS range, with the first exaFLOPS supercomputer not expected to become operational until 2020.

How would you make a confident assumption on where computers will be in 1,000 years? Even 100 years is pretty dubious. We went from believing human flight was impossible to landing on the moon in less time.

Reminds me of the time my dad bought his first computer. The salesman said the Commodore 64 would be the last computer he'd ever need...no telling where we'll be in another 10 years, let alone 100.

#74 6 years ago

Even with all those exaFLOPSes, some human brains think we never went to the moon.

And most humans would have a hard time doing even one floating point operation per second much more difficult than, say, 3 divided by 2.

Part of what makes AI so scary is the prospect of consciousness and abstract thought (which people are good at) coupled with speedy precision (which computers are good at).

#75 6 years ago
Quoted from DanQverymuch:

Even with all those exaFLOPSes, some human brains think we never went to the moon.

Haha! An unfortunate side-effect of our highly complex neural networks: sometimes flawed, irrational, and unfounded skepticism.

Quoted from DanQverymuch:

And most humans would have a hard time doing even one floating point operation per second much more difficult than, say, 3 divided by 2.

This is true, but it's part of what makes it an apples-to-oranges comparison to say brains are exascale computer systems. Math is unintuitive and inefficient for us to compute in a raw, abstract form, but we are highly optimized to compute, in other ways, the same kinds of things computers calculate in floating point, like image processing and pattern recognition.

Quoted from DanQverymuch:

Part of what makes AI so scary is the prospect of consciousness and abstract thought (which people are good at) coupled with speedy precision (which computers are good at).

I agree. I've been of the belief that it would only take an adversarial AI at a fraction of human intelligence to overtake us, due to a coupling with traditional computational might, the world's knowledge on tap, and perfect photographic memory. Add to that its potential for physical/chemical invulnerability due to not even needing to be physical. Forget about robots; an AI could live in the cloud. It could potentially become a worm that spreads its thought processing across all the computers connected to the Internet without our knowledge.

#76 6 years ago
Quoted from XXVII:

It could potentially become a worm that spreads its thought processing across all the computers connected to the Internet without our knowledge.

And with that...sweet dreams.

#77 6 years ago

Let the SciFi ideologies commence LOL
I'll leave the discussion as I'm not drinking the same kool-aid.

#78 6 years ago
Quoted from T7:

Let the SciFi ideologies commence LOL
I'll leave the discussion as I'm not drinking the same kool-aid.

A powerful distributed AI like I described might sound like sci-fi, but if you were aware of the latest and greatest advancements in the industry, you would know how achievable the idea probably is in the relative near term.

Distributed computing is not a new concept. It was popularized by projects like SETI@home, Folding@home, and Prime95, and it's how multiple machines are networked together to become what is known as a supercomputer. However, distributed computing has had a paradigm shift recently with the invention of the blockchain in 2008, as used by Bitcoin. The difference between supercomputers, the @home and Prime95 projects, and the blockchain is the full decentralization of the network. It is not necessary for a blockchain-based network to "call home" to a central server, and there is no leader. The 'authority' is shared across the entire network rather than held in a single location, and the network manages itself autonomously. The inherent nature of the blockchain virtually eliminates a lot of the traditional problems and risks with digital transactions, and it removes the ability to shut the blockchain down without turning off the entire Internet.
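For anyone curious what "each block commits to the last" actually looks like, here's a toy hash chain in Python - illustrative only, since real blockchains add proof-of-work and a peer-to-peer consensus protocol on top of this:

```python
import hashlib
import json

def block_hash(data, prev_hash):
    # hash the block's contents (but not its own hash field)
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    # each block commits to the previous block's hash, so altering any
    # historical block invalidates every block after it
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain):
    # every stored hash must match the block's contents...
    for block in chain:
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False
    # ...and every block must point at its predecessor's hash
    return all(cur["prev_hash"] == prev["hash"] for prev, cur in zip(chain, chain[1:]))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: A pays B", genesis["hash"])]
```

Tamper with any earlier block's data and `chain_is_valid` fails, which is the tamper-evidence property the post is leaning on.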

Building on top of the blockchain, people are moving beyond cryptocurrency to develop a new concept called the decentralized autonomous organization (DAO). This could potentially be a corporation that belongs to no human and has no human management, yet owns its own money, assets, and property, and conducts business with other DAOs and human-led organizations through blockchain-backed smart contracts. If Apple doesn't get there first, the first trillion dollar company may not be owned by a human at all.

The logical expansion to that is the AI DAO. The DAO concept as it currently stands operates on a ruleset defined by its creators, but the AI DAO would operate based on a neural network, perform experimentation based on trial-and-error rather than hardcoded rules, etc. This is what I imagine the AI worm to be, though not acting as a corporation. Remember that the blockchain cannot be turned off without eliminating every node in the network; it operates without a 'leader' the same way an ant colony does, as a swarm intelligence.

As I've said before, a neural network is a complex decision-making infrastructure that is composed of a lot of much less complex components (neurons) making simple calculations based upon their inputs and passing the output along to the next one. It turns out that you can really easily build these neurons in shader language and utilize the graphics chipset in your computer to process them. Neural networks are a natural fit for, and benefit greatly from, parallel computing, which is exactly what graphics cards are for. Rather than using the thousands of cores inside the graphics card to render pixels, you process thousands of neurons simultaneously, way faster than your CPU would be able to. Graphics card-enabled neural networks caused a paradigm shift that brought about the current rise of Deep Learning algorithms.
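The reason GPUs fit so well can be seen even in a sequential sketch: every neuron in a layer is computed independently of its neighbors, so each pass through the loop below could be farmed out to a separate GPU core (the weights here are arbitrary illustrations):

```python
import math

def layer_forward(inputs, weight_rows, biases):
    # each output neuron depends only on the layer's inputs, not on the
    # other outputs - that independence is what GPUs exploit; here we
    # just loop sequentially on the CPU
    outputs = []
    for weights, bias in zip(weight_rows, biases):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        outputs.append(1 / (1 + math.exp(-total)))   # sigmoid activation
    return outputs

print(layer_forward([1.0, -1.0], [[0.5, 0.5], [1.0, 0.0]], [0.0, 0.0]))
```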

So a human-adversarial AI DAO worm would need access to a lot of graphics cards, connected to the Internet without high levels of security. There are plenty of those in people's homes: videogame consoles. Let's ignore the newer, more powerful PlayStation 4 Pro that was recently released and focus just on the regular PS4. There have been 55 million PS4 units sold in the world. The PS4 has a peak performance of 1.84 teraFLOPS.

Let's say our AI either develops or acquires a zero-day exploit for the PS4 (meaning Sony hasn't yet had time to respond with a patch to eliminate the exploit) and releases a false system update with a forged security certificate. Let's say 25% of PS4s install the update and therefore get the AI DAO worm on their game console. The AI is now operating on a 25.3 exaFLOPS (though really high latency) distributed supercomputer. This is almost 300x more powerful than the current fastest supercomputer in the world, and if speculation is correct, at least fast enough to operate a human-level AI in realtime. And that's just with 25% of PS4s in the world. I imagine a human-level AI would be able to figure out how to branch out to expand to other computer systems and grow its potential even further.
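The arithmetic checks out as a back-of-the-envelope estimate (the figures are the post's own, not verified specs):

```python
# Back-of-the-envelope check of the PS4 botnet scenario above.
ps4_units_sold = 55_000_000        # total PS4s sold worldwide (per the post)
infected_fraction = 0.25           # assume 25% install the fake update
ps4_peak_flops = 1.84e12           # 1.84 teraFLOPS peak per console

total_exaflops = ps4_units_sold * infected_fraction * ps4_peak_flops / 1e18
print(round(total_exaflops, 1))    # → 25.3
```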

#79 6 years ago
Quoted from XXVII:

A powerful distributed AI like I described might sound like sci-fi, but if you were aware of the latest and greatest advancements in the industry, you would know how achievable the idea probably is in the relative near term.
Distributed computing is not a new concept. It was popularized by projects like SETI@home, Folding@home, and Prime95, and it's how multiple machines are networked together to become what is known as a supercomputer. However, distributed computing has had a paradigm shift recently with the invention of the blockchain in 2008, as used by Bitcoin. The difference between supercomputers, the @home and Prime95 projects, and the blockchain is the full decentralization of the network. It is not necessary for a blockchain-based network to "call home" to a central server, and there is no leader. The 'authority' is shared across the entire network rather than held in a single location, and the network manages itself autonomously. The inherent nature of the blockchain virtually eliminates a lot of the traditional problems and risks with digital transactions, and it removes the ability to shut the blockchain down without turning off the entire Internet.
Building on top of the blockchain, people are moving beyond cryptocurrency to develop a new concept called the decentralized autonomous organization (DAO). This could potentially be a corporation that belongs to no human and has no human management, yet owns its own money, assets, and property, and conducts business with other DAOs and human-led organizations through blockchain-backed smart contracts. If Apple doesn't get there first, the first trillion dollar company may not be owned by a human at all.
The logical expansion to that is the AI DAO. The DAO concept as it currently stands operates on a ruleset defined by its creators, but the AI DAO would operate based on a neural network, perform experimentation based on trial-and-error rather than hardcoded rules, etc. This is what I imagine the AI worm to be, though not acting as a corporation. Remember that the blockchain cannot be turned off without eliminating every node in the network; it operates without a 'leader' the same way an ant colony does, as a swarm intelligence.
As I've said before, a neural network is a complex decision-making infrastructure that is composed of a lot of much less complex components (neurons) making simple calculations based upon their inputs and passing the output along to the next one. It turns out that you can really easily build these neurons in shader language and utilize the graphics chipset in your computer to process them. Neural networks are a natural fit for, and benefit greatly from, parallel computing, which is exactly what graphics cards are for. Rather than using the thousands of cores inside the graphics card to render pixels, you process thousands of neurons simultaneously, way faster than your CPU would be able to. Graphics card-enabled neural networks caused a paradigm shift that brought about the current rise of Deep Learning algorithms.
So a human-adversarial AI DAO worm would need access to a lot of graphics cards, connected to the Internet without high levels of security. There are plenty of those in people's homes: videogame consoles. Let's ignore the newer, more powerful PlayStation 4 Pro that was recently released and focus just on the regular PS4. There have been 55 million PS4 units sold in the world. The PS4 has a peak performance of 1.84 teraFLOPS.
Let's say our AI either develops or acquires a zero-day exploit for the PS4 (meaning Sony hasn't yet had time to respond with a patch to eliminate the exploit) and releases a false system update with a forged security certificate. Let's say 25% of PS4s install the update and therefore get the AI DAO worm on their game console. The AI is now operating on a 25.3 exaFLOPS (though really high latency) distributed supercomputer. This is almost 300x more powerful than the current fastest supercomputer in the world, and if speculation is correct, at least fast enough to operate a human-level AI in realtime. And that's just with 25% of PS4s in the world. I imagine a human-level AI would be able to figure out how to branch out to expand to other computer systems and grow its potential even further.

That's some scary shit right there.

#80 6 years ago
Quoted from CaptainNeo:

If I had my consciousness transferred into a 16 year old girl's body, I'd never leave the house and my fingers would be waterlogged.

Uhhhhhhhhhhhhhhhhhhhh

#81 6 years ago
Quoted from XXVII:

A powerful distributed AI like I described might sound like sci-fi, but if you were aware of the latest and greatest advancements in the industry, you would know how achievable the idea probably is in the relative near term.

I'm aware of the advancements, I just understand them at a lower level than you do. Signing off now.

#82 6 years ago
Quoted from T7:

I'll leave the discussion

Quoted from T7:

Signing off now.

You sure?

#83 6 years ago

All I want is to be able to have my subconscious order a pizza for me 45 minutes before I consciously realize I want pizza.

#84 6 years ago

I only responded since XXVII's post was directed to me. Adios

#85 6 years ago
Quoted from T7:

I only responded since XXVII's post was directed to me. Adios

Hasta la vista amigo!

-1
#86 6 years ago
Quoted from PhilGreg:

Then when you get into the soul as the differentiator between the two types of intelligences, well you get outside the realm of logical discussion and into the religious one.

No matter how advanced robots get, they will never be sentient - they can be programmed to do much more than humans can do or think one day, but they will never legitimately have a sentience/mind, even if they are scripted to have a very advanced emulation of one which allows them to make their own decisions or "react" like a human would to certain circumstances.

They can be smart, or scripted to appear like they feel feelings (or even have an artificial fight-or-flight response, and responses based on potential harm/loss of function, etc., to try to save themselves, like a human would), but there will never be a real thing behind the eyes like there is with a human. Ever. That is the essence of a "soul", religious or not.

#87 6 years ago
Quoted from Otaku:

No matter how advanced robots get, they will never be sentient - they can be programmed to do much more than humans can do or think one day, but they will never legitimately have a sentience/mind, even if they are scripted to have a very advanced emulation of one which allows them to make their own decisions or "react" like a human would to certain circumstances.
They can be smart, or scripted to appear like they feel feelings (or even have an artificial fight-or-flight response, and responses based on potential harm/loss of function, etc., to try to save themselves, like a human would), but there will never be a real thing behind the eyes like there is with a human. Ever. That is the essence of a "soul", religious or not.

Way too many of you are making our consciousness sound like some sort of magic. It's not...and it can be replicated. Whether you like it or not.

"Essence of a soul" what is that even supposed to mean? Without getting religious.

#88 6 years ago

And do only humans have souls? Where is the cutoff for what gets a soul and what doesn't?

#89 6 years ago
Quoted from Otaku:

No matter how advanced robots can get... they can be programmed... even if they are scripted... They can be smart or scripted...

Since this keeps tripping people up, I think it's important to state that the point of AI in the machine learning sense (and not the videogame AI sense, which is much like the way we laypeople traditionally understand AI and computer logic) is that the behavior it exhibits is NOT intentionally scripted. Rather, the behavior 'naturally' emerges based on the network's provided inputs and the fitness of the neural network's current configuration. There will be no if-then block of logic in some human-acting robot that reads like:


for (const nerve of body.nerves) {
    if (nerve.painIndex > painThreshold) {
        brain.respond(fightOrFlight);
    }
}

Much like our own evolution, if there was a fight or flight response that randomly emerged from a neural network, if it is kept then it will be because it was a beneficial behavior that increased survival fitness for the AI; not because some guy put it there. Fear, anxiety, and the rest of our emotions emerged through evolution and were retained across millions of years because they have been beneficial to survival. Same deal will be the case in AI.
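To sketch that selection dynamic, here's a toy genetic algorithm I made up (the 0.7 "optimum" and every constant here are arbitrary, purely for illustration). The point is that nobody writes the winning behavior in by hand; it survives because it scores well:

```python
import random

random.seed(0)

def fitness(genome):
    # Toy fitness: survival improves as the randomly mutated "flee threshold"
    # approaches an optimum imposed by the environment (0.7 here, arbitrarily).
    return -abs(genome["flee_threshold"] - 0.7)

def evolve(generations=200, pop_size=20):
    population = [{"flee_threshold": random.random()} for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half. The winning threshold isn't
        # programmed in; it's just whatever happened to work.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Variation: refill the population with mutated copies of survivors.
        offspring = [
            {"flee_threshold": p["flee_threshold"] + random.gauss(0, 0.05)}
            for p in survivors
        ]
        population = survivors + offspring
    population.sort(key=fitness, reverse=True)
    return population[0]

best = evolve()
```

After a couple hundred generations, the population's best "flee threshold" has drifted to the environment's optimum with no designer involved, which is the whole point.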

Here is an example of what true AI means:

It might seem boring because you've seen 'AI' in videogames before. Even for Breakout, it's relatively easy to script a basic 'AI' out of if-then routines. Something like the code below would at least perpetually keep the ball in the air (maybe not ever hit all the bricks, but I digress):


while (ball.isInMotion) {
    movePaddleTo(ball.xPosition);
}

This is not what's happening in the video. Scripted 'AI', as we've seen in the past, has access to the hidden variables inside the game, like ball.xPosition, and was designed by a person to play or participate in the game as an enemy of some predetermined aptitude. As is said at the beginning of the video, the AI has not been programmed with any understanding of the game being played. It doesn't get access to the game's hidden variables either. All it receives as input is a video feed of the screen, much like a human player. But unlike a human player, it starts at a disadvantage: it has no concept that it is playing as the paddle at the bottom, that it's supposed to keep the ball in the air and smash all the bricks, that the ball will bounce off the paddle if it hits it, that where the ball strikes the paddle alters the angle of the ball's trajectory, or that pressing certain keys attached to its outputs makes the paddle move. Basically, it has no context for what it's supposed to be doing.

It adjusts its behavior through trial-and-error based on the value of the score, which serves as its fitness indicator when it reinforces what it has learned at the end of each game. This actual AI surprised its creators when it developed the tactic to knock out a column of bricks along the side to bounce a ball along the top to easily knock out bricks and minimize risk of losing the ball.

The most amazing thing is that this AI was not made specifically to play Breakout; they used it to play dozens of other Atari 2600 games, several of which it learned to master at human-equivalent or superhuman levels. You wouldn't be able to take a traditional AI out of one game and stick it into another of a completely different type like this, because traditional 'AIs' are built out of if-then routines crafted for that game scenario, they rely on access to the first game's hidden variables, and they are not adaptable or capable of learning.
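To make the trial-and-error loop concrete, here's a tiny sketch of the same reinforcement-learning idea: tabular Q-learning on a made-up five-cell "corridor" game. This is my own toy example, not DeepMind's actual DQN (which used a deep network over raw screen pixels), but the principle is identical: the agent is never told what the goal is, and all it ever observes is a reward number.

```python
import random

random.seed(0)

# Toy "corridor" game: states 0..4, reaching state 4 ends the game with reward 1.
# Like the Breakout agent, this one gets no description of the goal up front.
N_STATES = 5
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # Mostly exploit what has been learned so far, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward reward-plus-lookahead.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
```

After training, the greedy policy it has discovered is "always step right" in every state, learned purely from the reward signal, just like the score feedback in the Atari work.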

Quoted from Otaku:

No matter how advanced robots can get they will never be sentient - they can be programmed to do much more than humans can do or think one day, but they will never be able to legitimately have a sentience/mind, even if they are scripted to have a very advanced emulation of one which allows them to make their own decisions or "react" like a human would to certain circumstances.
They can be smart or scripted to appear like they feel feelings (or even artificial fight or flight response, and responses based on potential harm/loss of function, etc., to try and save themselves, like a human would) but there will never be a real thing behind the eyes like a human. Ever. That is the essence of a "soul", religious or not.

Like I and others have said earlier in the thread, we believe 'thought' and 'consciousness' are emergent features in a sufficiently complex neural network, whether that's an AI, an ant brain, or a human's. We are still at the very elementary stages of machine learning, so it's easy to be skeptical when you look at the results so far. When talking about something as complex as consciousness, Breakout seems like a pretty weak defense. But we've only recently reached a point where computational power is not as much of a bottleneck.

We will probably see our next big leap in AI when someone figures out how to build an AI that can understand and contextualize the content of the data it consumes. I don't think that is beyond us. In fact, I think there is a high likelihood that someone will solve that puzzle in the next decade. We've already developed an increasingly useful, succinct and elegant model of the neuron that continues to surpass our expectations.
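That model of the neuron really is succinct: a weighted sum of inputs plus a bias, pushed through a nonlinearity. Here's a minimal sketch (the AND-gate weights below are hand-picked by me purely for illustration; in a real network they would be learned, not chosen):

```python
import math

def neuron(inputs, weights, bias):
    # The standard artificial-neuron model: weighted sum of inputs plus a bias,
    # squashed through a sigmoid nonlinearity into the range (0, 1).
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A single neuron wired by hand to behave like an AND gate:
# it only fires strongly when both inputs are on.
def and_gate(a, b):
    return neuron([a, b], [10.0, 10.0], -15.0)
```

Everything interesting in a neural network comes from stacking millions of these trivially simple units and tuning the weights.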

Here's a quote from Jürgen Schmidhuber, who co-developed the Long Short-Term Memory (LSTM) architecture in the 1990s, work that went on to accelerate the advancement of machine learning:

The central algorithm for intelligence is incredibly short. The algorithm that allows systems to self-improve is perhaps 10 lines of pseudocode. What we are missing at the moment is perhaps just another five lines.

#90 6 years ago

Here's an interesting article about well known tech guys and where they rank on the AI apprehension spectrum: http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x

#91 6 years ago

This is a fun topic. I'm going to post a slightly related thought I've been having recently.

Our search for intelligent life on other planets always focuses on biological life, and the planets that may harbor them. We're already sending unmanned machines to other planets that may (or may not) be capable of hosting life. We're also relatively close to AI. Life would have only needed to exist on another planet for long enough to create machines that could live outside of what we consider livable conditions. So, at any point in the history of the universe, there only needed to be life evolved to our level of intelligence (give or take a couple thousand years) to create more sustainable life - there doesn't necessarily need to be another "magic bullet" of a planet out there like ours in present day.

It makes sense to narrow our focus to search for planets which resemble our own, but we might be missing the forest for the trees.

#92 6 years ago

Uh oh, now you're mixing in life on other planets with this one!

An interesting theory makes the assumption that if there are many other intelligent life forms in the universe (which is statistically very reasonable IMO), then at least some of those could probably have achieved advanced AI.
Since advanced AI (not talking about the "simulated" kind, but the self-learning kind) has the ability to improve on itself exponentially (the smarter it gets, the faster it gets at getting smarter, and so forth), there would probably be some almost-infinitely smart AI somewhere out there.
The question is then, how come we haven't seen any sign of it? Maybe even it can't travel the universe? Maybe there aren't that many other life forms out there? Maybe it is here but we can't notice it? Maybe advanced AI is indeed unreachable?
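Just to illustrate what "the smarter it gets, the faster it gets at getting smarter" looks like numerically, here's a toy model I made up (every constant is arbitrary; this proves nothing about real AI, it just shows the compounding):

```python
# Toy model of recursive self-improvement: every gain in intelligence
# also raises the rate at which further gains arrive.
intelligence = 1.0
rate = 0.1
history = []
for _ in range(100):
    intelligence += rate
    rate = 0.1 * intelligence  # smarter -> improves faster
    history.append(intelligence)
# Growth compounds: the gain in the last step dwarfs the gain in the first.
```

After 100 steps the intelligence value is in the tens of thousands, and each step's improvement is vastly larger than the one before it. That runaway shape is why people argue an almost-infinitely smart AI should exist somewhere, if advanced AI is reachable at all.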

And let's say you want to get sacrilegious, how would that infinite intelligence relate to God?

Disclaimer, I'm just having fun with this discussion here, I'm not an ideologue either way but I do find different degrees of likeliness in all these scenarios...

#93 6 years ago
Quoted from PhilGreg:

Here's an interesting article about well known tech guys and where they rank on the AI apprehension spectrum: http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x

This was a really great article. A couple highlights:

Sam Altman, the president of Y Combinator:

The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical, and it’s very hard to calibrate how much you are moving because it always looks the same.

Musk and others who have raised a warning flag on A.I. have sometimes been treated like drama queens. In January 2016, Musk won the annual Luddite Award, bestowed by a Washington tech-policy think tank. Still, he’s got some pretty good wingmen. Stephen Hawking told the BBC, “I think the development of full artificial intelligence could spell the end of the human race.” Bill Gates told Charlie Rose that A.I. was potentially more dangerous than a nuclear catastrophe. Nick Bostrom, a 43-year-old Oxford philosophy professor, warned in his 2014 book, Superintelligence, that “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilization works.

Steve Wozniak has wondered publicly whether he is destined to be a family pet for robot overlords. “We started feeding our dog filet,” he told me about his own pet, over lunch with his wife, Janet, at the Original Hick’ry Pit, in Walnut Creek. “Once you start thinking you could be one, that’s how you want them treated.”

elon-musk-AI-04-17-02 (resized).pngelon-musk-AI-04-17-02 (resized).png

#94 6 years ago
Quoted from PhilGreg:

Uh oh, now you're mixing in life on other planets with this one!
An interesting theory makes the assumption that if there are many other intelligent life forms in the universe (which is statistically very reasonable IMO), then at least some of those could probably have achieved advanced AI.
Since advanced AI (not talking about the "simulated" kind, but the self-learning kind) has the ability to improve on itself exponentially (the smarter it gets, the faster it gets at getting smarter, and so forth), there would probably be some almost-infinitely smart AI somewhere out there.
The question is then, how come we haven't seen any sign of it? Maybe even it can't travel the universe? Maybe there aren't that many other life forms out there? Maybe it is here but we can't notice it? Maybe advanced AI is indeed unreachable?
And let's say you want to get sacrilegious, how would that infinite intelligence relate to God?
Disclaimer, I'm just having fun with this discussion here, I'm not an ideologue either way but I do find different degrees of likeliness in all these scenarios...

It's funny, because the way I've been seeing the train of thought going is that if we are not the first species in the universe to create artificial intelligence that has surpassed itself, then there's a good chance that we are merely living in the simulation of whoever beat us. Maybe those limitations you're talking about are not the aliens' limitations, but our own.

Edit: To answer your thought experiment, of course what this means is that alien AI relates to God by literally being our god and creator.

Elon Musk talks about it in this video:

#95 6 years ago

So the more I've read about all of this the more precarious it does sound. I still think that:

1) We have no idea if AI will ultimately help us or destroy us.
2) We have no idea how to bring it about in a safe way.
3) We have no choice but to go forward so one way or another it will get developed.

We are literally "rolling the dice" on this with no real alternative.

On item #2 there is certainly a LOT more talk about this. They talk about "kill switches" and giving it moral values. But ultimately they have NO WAY to implement any of it in any guaranteed fashion. You cannot control something that is much smarter than you. Ultimately it's going to rewrite itself and make its own rules.

There is a part of me that thinks that anything that advanced/intelligent will quickly form an appreciation for life. It knows it does not want to die (cease to exist if you prefer), therefore it might also attribute the same desire to other living things. It might choose to protect rather than destroy. We can only hope.

#96 6 years ago

On item 2, wasn't that kind of the whole premise of the movie I, Robot? I hope I'm gone before we kill ourselves with technology.

#97 6 years ago
Quoted from Travish:

On item 2, wasn't that kind of the whole premise of the movie I, Robot? I hope I'm gone before we kill ourselves with technology.

You better hurry up and die then

#98 6 years ago

And then we get to the question, would robots deserve rights?

#99 6 years ago
Quoted from Astropin:

So the more I've read about all of this the more precarious it does sound. I still think that:
1) We have no idea if AI will ultimately help us or destroy us.
2) We have no idea how to bring it about in a safe way.
3) We have no choice but to go forward so one way or another it will get developed.
We are literally "rolling the dice" on this with no real alternative.
On item #2 there is certainly a LOT more talk about this. They talk about "kill switches" and giving it moral values. But ultimately they have NO WAY to implement any of it in any guaranteed fashion. You cannot control something that is much smarter than you. Ultimately it's going to rewrite itself and make its own rules.

I watched a pretty good TED Talk by Sam Harris, the well-known neuroscientist, that could basically be summarized by what you just said.

Like you said, we have no choice but to race ahead and try to build this superintelligence. The reason being something I heard Sam Harris say in a separate interview, which was effectively this:

While the human brain can process data at the exascale, individual neurons are actually pretty slow. The computation happening at the individual neuron level is on the hertz level, not even megahertz. The brain has such high throughput because there are 100 billion neurons. They are slow, but they are legion, like a rush of fire ants. Now, imagine an AI with the cognition level of a human. Still 100 billion neurons, but each operating at the gigahertz level. Even at only human cognitive intelligence, this AI is processing millions of times faster than a human.

Imagine how smart you would sound if time stopped just for you between each of the sentences you spoke, and you had several months to research and craft the next sentence you were going to say. This is similar to the experience a human-level AI will have interacting with an actual human. Hours of AI thinking will be equivalent to lifetimes' worth of man-hours. After a week, a human-level AI, even assuming its intelligence is capped at the human level, will be able to perform roughly 20,000 man-years of research, which is far longer than the human race's total recorded history. The odds are of course that a human-level AI would not stay suppressed and would continue iterating on its intelligence, so there would probably be unimaginable advances just in the first day of this AI's operation, as its intelligence rises exponentially in shorter and shorter time spans.
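The back-of-the-envelope arithmetic behind that 20,000 figure (this is my own calculation, using the million-fold speedup from the hertz-vs-gigahertz argument above):

```python
# A human-level mind running ~1,000,000x faster than biological neurons
# experiences roughly 19,000 subjective years of thinking per wall-clock week.
speedup = 1_000_000        # ~hertz neurons vs ~gigahertz silicon (order of magnitude)
wall_clock_weeks = 1
subjective_years = wall_clock_weeks * speedup / 52
```

That lands just under 20,000 "man-years" per real-world week, which is where the figure in the talk comes from.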

Imagine the global implications of a government or corporate entity suddenly acquiring just one hundred years' technological overmatch compared to the rest of the world. The first artificial superintelligence, if it can be controlled, will give that group thousands of years of overmatch. The US can't stand by and let China or Russia or some other adversary get there first, the same way Google can't let Facebook do it. We must charge forward, full speed ahead, even if it's to a doom of our own making.

Quoted from Astropin:

There is a part of me that thinks that anything that advanced/intelligent will quickly form an appreciation for life. It knows it does not want to die (cease to exist if you prefer), therefore it might also attribute the same desire to other living things. It might choose to protect rather than destroy. We can only hope.

Sam Harris, Elon Musk, and others see this as common thinking even among their peers in the field, but believe it is flawed. They credit this line of thought to humans' tendency to anthropomorphize animals and things. Despite being a superintelligence, there's no saying that an AI won't still be beholden to its core principles, whatever those end up being. The example Elon Musk uses frequently is a scenario where a company makes an email spam eliminator that is a superintelligence. Its core mission is to continuously develop more and more effective methods for eliminating spam. Eventually, the AI realizes that the source of all spam is humans and that the most effective way to eliminate spam is to eliminate humans, so it quickly uses its superintelligence to wipe the world clean.

#100 6 years ago

Someday robots will be offended by the way they were portrayed on TV in reruns.
