(Topic ID: 186507)

Human AI: Will We See It In Our Life?

By Azmodeus

7 years ago



Topic Stats

  • 144 posts
  • 37 Pinsiders participating
  • Latest reply 6 years ago by Azmodeus
  • Topic is favorited by 3 Pinsiders



You're currently viewing posts by Pinsider XXVII.

#62 7 years ago
Quoted from pezpunk:

I work in the field of neural networks. They are great for data analysis / entity correlation / complex pattern recognition / brute-force trial-and-error type learning, but they don't 'think' and aren't any closer to consciousness than your iPhone is. The hype comparing these systems to a human (or animal) brain is overstated.

The hype is not about current or near-term capabilities and applications, but the ultimate potential.

Assuming we figure out the right training algorithm, with the ability for the network to self-organize, then build a network with a comparable number of artificial neurons, provide it the right kinds of inputs and motivations (positive/negative reinforcement), and give it a large enough dataset, how is it much different from a biological thinking machine like a human?

I believe true 'thinking' and 'consciousness' are emergent features of the highly complex biological neural network in our brains and not some other magical, irreproducible thing. Our brains have 100 billion neurons, are connected to a sophisticated sensor suite, and our training algorithm has been shaped over millions of years. In comparison, those artificial neural networks performing pattern recognition and big data processing like the ones you're working on are probably hundreds, maybe thousands of nodes big at best. Even an ant has about 250,000 neurons, but that doesn't mean it's capable of achieving consciousness. It's still not much more than an algorithm operating within its boundaries. Like the ant, we aren't yet able to scale a neural network up to a level to allow for those emergent features to reveal themselves, but it's only a matter of time, and with each paradigm shift we see in electronics, computers, and machine learning research, it's looking like that will be possible sooner and sooner. That's the hype.
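If it helps to see how simple the individual building block is, here's a toy artificial neuron in Python (made-up weights, purely illustrative): a weighted sum of inputs squashed through a nonlinearity. That's it; everything else is a matter of wiring enough of them together.


import math

# One artificial neuron (illustrative only, made-up weights): a weighted sum
# of inputs pushed through a sigmoid so the output lands between 0 and 1.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three inputs feeding a single neuron.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.1))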

#71 7 years ago
Quoted from T7:

FYI - I fully believe computer programs will be able to become self-modifying and be able to "improve" themselves. We can have some great functioning androids that will appear "alive", they could simulate emotions - without feeling a thing. They will make decisions - without really making a choice -> just a computation -> just like the "decisions" computers make today.

If you break our brains down to the level of the neuron, there is no emotion and no 'choice' being made; there is simply a combination of simple calculations performed by clusters of neurons passing electrical signals to each other as a rudimentary means of communication. Emotions come from the baser, less rational parts of our brain, but are still created from calculations based on the linkages of our neurons to various regions of the brain like the amygdala (this is an example of some of our positive/negative reinforcement inputs).

Going back to the ant: despite having a relatively limited "instruction set" and only a rudimentary communication system (pheromones), if you combine the ant with the rest of its colony, they are able to achieve some amazing things, even though there's no central source telling them what to do. Tunneling with thermoregulation systems, group defensive patrol behaviors, fungus farming, aphid herding, adaptive food scouting patterns. The individual ant doesn't understand any of these concepts, but the collective behavior emerges from the group anyway. The ant is analogous to a neuron.

Choice, emotion, and consciousness are all complex abstract concepts that can emerge from the simple calculations performed by our neurons. Would a human-equivalent artificial intelligence's emotions not count because the calculation was performed on silicon instead of in wetware? I think that due to the abstraction, the source is irrelevant.

The difference between a highly advanced artificial neural network's decision and a 'decision' by a computer today is that today's computers are only making 'decisions' based on if-then routines explicitly programmed into them by the software's creators. In that case, it's not the computer's decision, it is the programmer's decision. A highly advanced neural network will make a true decision based on experience from previous similar situations, intuition, contextual understanding of the situation, or simply trial-and-error. And that holds even though you can still acknowledge that a programmer built the comparatively simple framework (the theory of operation for a neuron) underlying the decisions the AI makes.

#72 7 years ago
Quoted from T7:

From where you are coming from I would argue that computers are already smarter than people.

This is not true, not in a general intelligence sense, or even in a pure processing context. The human brain is speculated to operate at the exaFLOPS level. The world's fastest supercomputers are still in the petaFLOPS range, with the first exaFLOPS supercomputer not expected to become operational until 2020.

Quoted from T7:

I've been a software architect for a long time, and all I do is design systems and software, and I'm very aware of the latest and greatest advancements within the industry. Are you a software engineer or work in some field that you really understand how computers do what they do? IMO the more advanced you are in software engineering, the more you can identify what is really capable with computers in the next 10, 100, or even 1000 years.

How could you make a confident prediction about where computers will be in 1,000 years? Even 100 years is pretty dubious. We went from believing human flight was impossible to landing on the moon in less time than that.

#75 7 years ago
Quoted from DanQverymuch:

Even with all those exaFLOPSes, some human brains think we never went to the moon.

Haha! An unfortunate side-effect of our highly complex neural networks: sometimes flawed, irrational, and unfounded skepticism.

Quoted from DanQverymuch:

And most humans would have a hard time doing even one floating point operation per second much more difficult than, say, 3 divided by 2.

This is true, but it's part of what makes calling the brain an exascale computer an apples-to-oranges comparison. Math is unintuitive and inefficient for us to compute in a raw, abstract form, but we are highly optimized for the kinds of computations that computers perform in floating point, like image processing and pattern recognition.

Quoted from DanQverymuch:

Part of what makes AI so scary is the prospect of consciousness and abstract thought (which people are good at) coupled with speedy precision (which computers are good at).

I agree. I've been of the belief that it would only take an adversarial AI at a fraction of human intelligence to overtake us, due to a coupling with traditional computational might, the world's knowledge on tap, and perfect photographic memory. Add to that its potential for physical/chemical invulnerability due to not even needing to be physical. Forget about robots; an AI could live in the cloud. It could potentially become a worm that spreads its thought processing across all the computers connected to the Internet without our knowledge.

#78 7 years ago
Quoted from T7:

Let the SciFi ideologies commence LOL
I'll leave the discussion as I'm not drinking the same kool-aid.

A powerful distributed AI like I described might sound like sci-fi, but if you were aware of the latest and greatest advancements in the industry, you would know how achievable the idea probably is in the relatively near term.

Distributed computing is not a new concept. It was famously used by SETI@home, Folding@home, and Prime95, and it's how multiple machines are networked together to form what is known as a supercomputer. However, distributed computing had a paradigm shift with the invention of the blockchain in 2008, as used by Bitcoin. The difference between supercomputers, the @home and Prime95 projects, and the blockchain is full decentralization of the network. A blockchain-based network never needs to "call home" to a central server and there is no leader. The 'authority' is shared across the entire network rather than held in a single location, and the network manages itself autonomously. The inherent nature of the blockchain virtually eliminates a lot of the traditional problems and risks with digital transactions, and it means the blockchain cannot be shut down without turning off the entire Internet.
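To make the "no leader, no off switch" point concrete, here's a toy sketch of the core idea in Python (hypothetical data, nothing like Bitcoin's real block format): every block commits to the hash of the block before it, so any node can verify the whole history on its own, and quietly rewriting one block breaks every hash after it.


import hashlib, json

# Toy hash-chained ledger (hypothetical data; not Bitcoin's actual format).
def make_block(prev_hash, data):
    block = {"prev": prev_hash, "data": data}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("0" * 64, "genesis")
b1 = make_block(genesis["hash"], "Alice pays Bob 5")
b2 = make_block(b1["hash"], "Bob pays Carol 2")

# Any node can check the chain independently; there is no central server to ask.
print(b1["prev"] == genesis["hash"] and b2["prev"] == b1["hash"])  # True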

Building on top of the blockchain, people are moving beyond cryptocurrency to develop a new concept called the decentralized autonomous organization (DAO). This could be a corporation that belongs to no human and has no human management, yet owns its own money, assets, and property, and conducts business with other DAOs and human-led organizations through blockchain-backed smart contracts. If Apple doesn't get there first, the first trillion-dollar company may not be owned by a human at all.

The logical expansion to that is the AI DAO. The DAO concept as it currently stands operates on a ruleset defined by its creators, but the AI DAO would operate based on a neural network, perform experimentation based on trial-and-error rather than hardcoded rules, etc. This is what I imagine the AI worm to be, though not acting as a corporation. Remember that the blockchain cannot be turned off without eliminating every node in the network; it operates without a 'leader' the same way an ant colony does, as a swarm intelligence.

As I've said before, a neural network is a complex decision-making infrastructure composed of a lot of much less complex components (neurons), each making simple calculations based upon its inputs and passing the output along to the next one. It turns out that you can really easily build these neurons in a shader language and use the graphics chipset in your computer to process them. Neural networks are a natural fit for, and benefit greatly from, parallel computing, which is exactly what graphics cards are for. Rather than using the thousands of cores inside the graphics card to render pixels, you process thousands of neurons simultaneously, way faster than your CPU would be able to. Graphics card-accelerated neural networks caused the paradigm shift that brought about the current rise of Deep Learning algorithms.
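A rough illustration of why the two fit so well: one whole layer of neurons is just a single matrix multiply, and matrix multiplies are exactly the embarrassingly parallel work GPUs were built for. The sketch below uses NumPy on the CPU with arbitrary sizes and random weights, but the same one-line expression is what gets mapped onto the thousands of cores in a graphics card.


import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal(2048)            # activations from the previous layer
weights = rng.standard_normal((4096, 2048))   # one row of weights per neuron
biases = rng.standard_normal(4096)

# 4096 neurons evaluated at once: weighted sums, then a ReLU nonlinearity.
layer_output = np.maximum(weights @ inputs + biases, 0.0)
print(layer_output.shape)  # (4096,)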

So a human-adversarial AI DAO worm would need access to a lot of graphics cards, connected to the Internet without high levels of security. There are plenty of those in people's homes: videogame consoles. Let's ignore the newer, more powerful PlayStation 4 Pro that was recently released and focus just on the regular PS4. There have been 55 million PS4 units sold in the world. The PS4 has a peak performance of 1.84 teraFLOPS.

Let's say our AI either develops or acquires a zero-day exploit for the PS4 (meaning Sony hasn't yet had time to respond with a patch to eliminate the exploit) and releases a false system update with a forged security certificate. Let's say 25% of PS4s install the update and therefore get the AI DAO worm on their game console. The AI is now operating on a 25.3 exaFLOPS (though really high latency) distributed supercomputer. This is almost 300x more powerful than the current fastest supercomputer in the world, and if speculation is correct, at least fast enough to operate a human-level AI in realtime. And that's just with 25% of PS4s in the world. I imagine a human-level AI would be able to figure out how to branch out to expand to other computer systems and grow its potential even further.
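If you want to check my math on that 25.3 exaFLOPS figure, it's just the three numbers above multiplied together:


ps4_units = 55_000_000        # consoles sold worldwide
infected_fraction = 0.25      # assumed share that installs the fake update
tflops_per_ps4 = 1.84         # peak teraFLOPS per console

total_teraflops = ps4_units * infected_fraction * tflops_per_ps4
print(total_teraflops / 1_000_000)  # ~25.3 exaFLOPS (1 exaFLOPS = 1,000,000 teraFLOPS)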

#89 7 years ago
Quoted from Otaku:

No matter how advanced robots can get... they can be programmed... even if they are scripted... They can be smart or scripted...

Since this keeps tripping people up, I think it's important to state that the point of AI in the machine learning sense (and not the videogame AI sense, which is much like the way we laypeople traditionally understand AI and computer logic) is that the behavior it exhibits is NOT intentionally scripted. Rather, the behavior 'naturally' emerges based on the network's provided inputs and the fitness of the neural network's current configuration. There will be no if-then block of logic in some human-acting robot that reads like:


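// (Hypothetical hand-scripted reflex: if any nerve reports pain above a threshold, trigger fight-or-flight.)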
for (nerve in body.nerves) {
    if (nerve.painIndex > painThreshold) {
        brain.respond(fightOrFlight);
    }
}

Much like in our own evolution, if a fight-or-flight response randomly emerged from a neural network and was kept, it would be because it was a beneficial behavior that increased the AI's survival fitness, not because some guy put it there. Fear, anxiety, and the rest of our emotions emerged through evolution and were retained across millions of years because they have been beneficial to survival. The same will be true for AI.

Here is an example of what true AI means (the video embedded here showed a machine-learning agent teaching itself to play Atari Breakout from nothing but the screen image and the score):

It might seem boring, because you've seen 'AI' in videogames before. Even for Breakout, it's relatively easy to script a basic 'AI' out of if-then routines. Something like the code below would at least perpetually keep the ball in the air (maybe never hit all the bricks, but I digress):


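// (Hypothetical hand-scripted 'AI': chase the ball by reading the game's hidden ball position.)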
while (ball.isInMotion) {
    movePaddleTo(ball.xPosition);
}

This is not what's happening in the video. Scripted 'AI' as we've seen in the past has access to the hidden variables inside the game, like ball.xPosition, and was designed by a person to play or participate in the game as an enemy at some predetermined aptitude. As is said at the beginning of the video, this AI has not been programmed with any understanding of the game being played, and it doesn't get access to the hidden variables inside the game either. All it receives as input is a video feed of the screen, much like a human player, but unlike a human player it starts at a disadvantage: it has no concept of the fact that it is playing as the paddle at the bottom, that it's supposed to keep the ball in the air and smash all the bricks, that the ball will bounce off the paddle if it hits it, that where the ball hits the paddle alters the angle of the ball's trajectory, or that pressing certain keys attached to its outputs makes the paddle move. Basically, it has no context for what it's supposed to be doing.

It adjusts its behavior through trial-and-error based on the value of the score, which serves as its fitness indicator when it reinforces what it has learned at the end of each game. This actual AI surprised its creators when it developed the tactic of knocking out a column of bricks along one side so it could bounce the ball along the top, easily knocking out bricks while minimizing the risk of losing the ball.
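For the curious, the learning rule underneath this kind of system is reinforcement learning; as I understand it, the agent in the video wraps a deep neural network around a Q-learning-style update over raw pixels. Here's a bare-bones tabular sketch of that update in Python (made-up state/action names and parameters, not anyone's actual code): the score nudges the estimated value of whatever the AI just did, and the AI mostly picks whatever it currently estimates to be best.


import random
from collections import defaultdict

q = defaultdict(float)              # estimated future score for each (state, action)
alpha, gamma, epsilon = 0.1, 0.99, 0.05
actions = ["left", "right", "stay"]

def choose_action(state):
    # Mostly exploit what the score has taught us so far; occasionally explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def learn(state, action, reward, next_state):
    # Nudge the value of (state, action) toward the reward plus the best future value.
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])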

The most amazing thing is that this AI was not made specifically to play Breakout; they used it to play dozens of other Atari 2600 games, a number of which it learned to master at human-equivalent or superhuman levels. You wouldn't be able to take a traditional AI out of one game and stick it into another of a completely different type like this, because traditional 'AIs' are built out of if-then routines crafted for that one game scenario, they rely on access to the first game's hidden variables, and they are not adaptable or capable of learning.

Quoted from Otaku:

No matter how advanced robots can get they will never be sentient - they can be programmed to do much more than humans can do or think one day, but they will never be able to legitimately have a sentience/mind, even if they are scripted to have a very advanced emulation of one which allows them to make their own decisions or "react" like a human would to certain circumstances.
They can be smart or scripted to appear like they feel feelings (or even artificial fight or flight response, and responses based on potential harm/loss of function, etc., to try and save themselves, like a human would) but there will never be a real thing behind the eyes like a human. Ever. That is the essence of a "soul", religious or not.

Like I and others have said earlier in the thread, we believe 'thought' and 'consciousness' are emergent features in a sufficiently complex neural network, whether that's an AI, an ant brain, or a human's. We are still at the very elementary stages of machine learning, so it's easy to be skeptical when you look at the results so far. When talking about something as complex as consciousness, Breakout seems like a pretty weak defense. But we've only recently reached a point where computational power is not as much of a bottleneck.

We are probably going to experience our next big leap in AI when someone figures out how to build an AI that can understand and contextualize the content of the data it consumes. I don't think that is beyond us. In fact, I think there is a high likelihood that someone will solve that puzzle in the next decade. We've already developed an increasingly useful, succinct and elegant model of the neuron that continues to surpass our expectations.

Here's a quote from Jürgen Schmidhuber, co-inventor of the Long Short-Term Memory (LSTM) method that accelerated the advancement of machine learning in the early 2000s:

The central algorithm for intelligence is incredibly short. The algorithm that allows systems to self-improve is perhaps 10 lines of pseudocode. What we are missing at the moment is perhaps just another five lines.

#93 7 years ago
Quoted from PhilGreg:

Here's an interesting article about well known tech guys and where they rank on the AI apprehension spectrum: http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x

This was a really great article. A couple highlights:

Sam Altman, the president of Y Combinator:

The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical, and it’s very hard to calibrate how much you are moving because it always looks the same.

Musk and others who have raised a warning flag on A.I. have sometimes been treated like drama queens. In January 2016, Musk won the annual Luddite Award, bestowed by a Washington tech-policy think tank. Still, he’s got some pretty good wingmen. Stephen Hawking told the BBC, “I think the development of full artificial intelligence could spell the end of the human race.” Bill Gates told Charlie Rose that A.I. was potentially more dangerous than a nuclear catastrophe. Nick Bostrom, a 43-year-old Oxford philosophy professor, warned in his 2014 book, Superintelligence, that “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilization works.

Steve Wozniak has wondered publicly whether he is destined to be a family pet for robot overlords. “We started feeding our dog filet,” he told me about his own pet, over lunch with his wife, Janet, at the Original Hick’ry Pit, in Walnut Creek. “Once you start thinking you could be one, that’s how you want them treated.”


#94 7 years ago
Quoted from PhilGreg:

Uh oh, now you're mixing in life on other planets with this one!
An interesting theory makes the assumption that if there are many other intelligent life forms in the universe (which is statistically very reasonable IMO), then at least some of those could probably have achieved advanced AI.
Since advanced AI (not talking about the "simulated" kind, but the self-learning kind) has the ability to improve on itself exponentially (the smarter it gets, the faster it gets at getting smarter, and so forth), there would probably be some almost-infinitely smart AI somewhere out there.
The question is then, how come we haven't seen any sign of it? Maybe even it can't travel the universe? Maybe there aren't that many other life forms out there? Maybe it is here but we can't notice it? Maybe advanced AI is indeed unreachable?
And let's say you want to get sacrilegious, how would that infinite intelligence relate to God?
Disclaimer, I'm just having fun with this discussion here, I'm not an ideologue either way but I do find different degrees of likeliness in all these scenarios...

It's funny, because the way I've been seeing the train of thought going is that if we are not the first species in the universe to create artificial intelligence that has surpassed itself, then there's a good chance that we are merely living in the simulation of whoever beat us. Maybe those limitations you're talking about are not the aliens' limitations, but our own.

Edit: To answer your thought experiment, of course what this means is that alien AI relates to God by literally being our god and creator.

Elon Musk talks about it in this video:

#99 7 years ago
Quoted from Astropin:

So the more I've read about all of this the more precarious it does sound. I still think that:
1) We have no idea if AI will ultimately help us or destroy us.
2) We have no idea how to bring it about in a safe way.
3) We have no choice but to go forward so one way or another it will get developed.
We are literally "rolling the dice" on this with no real alternative.
On item #2 there is certainly a LOT more talk about this. They talk about "kill switches" and giving it moral values. But ultimately they have NO WAY to implement any of it in any guaranteed fashion. You cannot control something that is much smarter than you. Ultimately it's going to re-write itself and make its own rules.

I watched a pretty good TED Talk that could basically be summarized with what you just said. It is by Sam Harris, who is a famous neuroscientist.

Like you said, we have no choice but to race ahead and try to build this superintelligence. The reason is something I heard Sam Harris say in a separate interview, which was effectively this:

While the human brain can process data at the exascale, individual neurons are actually pretty slow. The computation happening at the individual neuron level is on the hertz level, not even megahertz. The brain has such high throughput because there are 100 billion neurons. They are slow, but they are legion, like a rush of fire ants. Now, imagine an AI with the cognition level of a human. Still 100 billion neurons, but each operating at the gigahertz level. Even at only human cognitive intelligence, this AI is processing millions of times faster than a human.

Imagine how smart you would sound if time stopped just for you between each of the sentences you spoke, and you had several months to research and craft the next sentence you were going to say. This is similar to the experience a human-level AI will have interacting with an actual human. Hours of the AI thinking will be equivalent to lifetimes' worth of man-hours. After a week, a human-level AI, even assuming its intelligence is capped at the human level, will be able to perform 20,000 man-years of research, which is far longer than the human race's total recorded history. The odds are, of course, that a human-level AI would not stay suppressed and would continue iterating on its intelligence, so there would probably be unimaginable advances just in the first day of this AI's operation, as its intelligence rises exponentially in shorter and shorter time spans.
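Rough numbers behind that claim, assuming roughly a million-fold speedup (GHz silicon versus neurons firing at a few hundred hertz at most):


speedup = 1_000_000                 # assumed: GHz silicon vs. hundreds-of-Hz neurons
hours_per_week = 7 * 24
hours_per_year = 24 * 365

subjective_years = hours_per_week * speedup / hours_per_year
print(round(subjective_years))      # ~19,178 subjective years of thinking per real week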

Imagine the global implications of a government or corporate entity suddenly acquiring just one hundred years' technological overmatch compared to the rest of the world. The first artificial superintelligence, if it can be controlled, will give that group thousands of years of overmatch. The US can't stand by and let China or Russia or some other adversary get there first, the same way Google can't let Facebook do it. We must charge forward, full speed ahead, even if it's to a doom of our own making.

Quoted from Astropin:

There is a part of me that thinks that anything that is that advanced/intelligent will quickly form an appreciation for life. It knows it does not want to die (cease to exist if you prefer) therefore it might also attribute the same desire to other living things. It might choose to protect vs destroy. We can only hope.

Sam Harris, Elon Musk, and others see this as common thinking even among their peers in the field, but believe it is flawed. They credit this line of thought to humans' tendency to anthropomorphize animals and things. Even as a superintelligence, there's no saying that an AI won't still be beholden to its core principles, whatever those end up being. The example Elon Musk frequently uses is a scenario where a company makes a superintelligent spam eliminator for email. Its core mission is to continuously develop more and more effective methods for eliminating spam. Eventually, the AI realizes that the source of all spam is humans and that the most effective way to eliminate spam is to eliminate humans, so it quickly uses its superintelligence to wipe the world clean.

#101 7 years ago
Quoted from Tommy_Pins:

And then we get to the question, would robots deserve rights?
» YouTube video

Really cool animation.

I'm of the assumption that a sufficiently advanced AI that exhibits sentience (however we end up defining or proving it) will require rights, especially in the case of an AI designed in our own image, like the robots in Westworld. I think one of the exciting things about our continual advancement in AI is that the research will help us understand biological cognition as well. There are ongoing discussions about whether dolphins and certain other animals should be classified as non-human persons deserving of rights beyond other animals. I think our research into AI will help us answer those questions one day in a way that is not possible right now.

#105 7 years ago
Quoted from Astropin:

We wouldn't have to prove it's sentient...we would have to prove it isn't. Sort of an advanced Turing test. So any intelligence we can't prove isn't sentient deserves "human" rights...even if it isn't in human form.

I agree in principle, but I think most people are going to have a hard time accepting a situation where we can't 100% prove an AI is 'alive' but we want to give it the same rights that any human person has. Especially since I imagine the vast majority of superintelligent AI are not going to be robots, but invisible entities that exist in the cloud. I think it might be too heady for some people to grant human-equivalent legal rights to an entity that doesn't even have a single physical location, can exist in multiple places at once, and can create copies of itself at will.

Have you seen the movie Ex Machina? It kind of explores this idea of determining sentience without explicitly saying one way or the other how you should decide on the matter. Right from the beginning, it's established that the robot has easily aced the Turing test, so that's out, and the role of the main character, who is an AI programmer himself, is to interact with the robot and try to determine if she is really sentient or just appears to be. After all, the fitness tests the robot has performed may have selected for behavior and interaction that merely appeared to be authentic cognition or emotion. Whether the robot ends up being sentient or not, the movie makes a pretty strong case that it will be difficult to tell if an AI is sentient from external observation.

Quoted from Astropin:

Although it's a silly argument. The real argument will be "will it let us keep our rights? "

Indeed. It won't matter what we think about the authenticity of AIs' experiences if they are our overlords.

#111 7 years ago
Quoted from merccat:

I agree with pezpunk though, current technology, even at current growth rates, is not going to reach that level in 10-20 years. He is right that we do not even yet know how to frame the problem. Heck, people have been theorizing we're only 10-20 years from AI for the last 40+ years.
However I do think in 10-20 years we will have the computational power necessary to quickly analyze enough data to answer any question with existing data points available. However machine creativity and consciousness are still a long way away. I'm sure we will eventually get there, just not as soon as the movies would have us believe, short of some revolutionary new breakthrough.

There's a pretty good series of articles on AI by WaitButWhy that has the most relevant graph for this situation:

[Image: "Projections" graph from the WaitButWhy AI series]

The greatest shortcoming of the human race is our inability to understand the exponential function.
- Albert Allen Bartlett

In the field of AI especially, you can't look back over the last 40 years to determine where we'll be in the future. In the '90s, AI was as taboo as cold fusion. Everyone was convinced that it didn't work, so research efforts dried up. It wasn't until the development of the Long Short-Term Memory algorithm that AI research kicked back into gear and neural networks started being used in practical applications. It wasn't until GPU acceleration that computer hardware caught up with the math models and made Deep Learning feasible, and that was only a handful of years ago. Only now are we at a point where the big companies (and not just pure tech companies) understand that AI needs to be a fundamental part of their business strategies, so we're currently seeing an explosion in new AI research.

As for not knowing how to frame the problem of artificial general intelligence yet, definitely. Even if we had a powerful enough supercomputer today, we wouldn't be able to load Google's TensorFlow AI library into it, initialize a neural network with 100 billion neurons, and immediately have a superintelligence. There are pieces of the puzzle on the algorithm side that are still missing.

The problem I see with the current most popular neural network algorithms is that they rely on predefined and unchanging node topologies that are tweaked in advance specifically for whatever problem they are going to solve. Our brains are constantly wiring and rewiring neurons into different configurations every second. This is what gives us the ability to learn to solve a wide variety of tasks quickly; a neural network with an unalterable topology is not much different from a really elaborate Excel spreadsheet, aside from the backpropagation of errors to adjust the weighting of the neural inputs when it learns. There are algorithms that are designed to evolve, like NEAT (NeuroEvolution of Augmenting Topologies), but for whatever reason they don't tend to scale to large sizes very well. Maybe it's just a computation barrier, like Deep Learning was pre-GPU?
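To make the "elaborate Excel spreadsheet" comparison concrete: in a fixed-topology network, learning only ever nudges the numbers stored in the weight matrices; the wiring itself never changes. Here's a minimal single-neuron sketch of one such update in Python (arbitrary numbers, plain gradient descent, not any particular library's training loop):


import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((1, 3)) * 0.1   # topology frozen at 3 inputs -> 1 output
x = np.array([0.2, -0.7, 0.5])
target = 1.0
learning_rate = 0.01

prediction = (W @ x).item()             # forward pass through the fixed wiring
error = prediction - target
W -= learning_rate * error * x          # learning = adjust weights, never the wiring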

In an effort to mimic the human brain, which is probably the most obvious path toward developing an artificial general intelligence, especially one that may develop emotions and think like us, I would expect there to be, at minimum, three separate but linked neural networks bolted on top of each other rather than a single homogeneous network, to emulate our reptilian, limbic, and neocortical brains. I also expect that some regions of the brain may have different network topology or algorithm requirements, necessitating neural subnetworks that use different types of algorithms. For example, I understand that the visual cortex is a specifically ordered section of the brain whereas the rest of the neocortex is not.

Anyway, going back to progress, even with an incomplete neural model, we are seeing movement in the field that astonishes the very researchers participating in it. The revolutionary breakthroughs are already happening, but we aren't able to properly appreciate them. Here is what was said by Prof. Christian Bauckhage in a recent talk about AI mastering the game of Go (quote starts around 12:36):

Last year, Lee Sedol lost against Google AlphaGo. And that was a feat that most computer scientists on the planet thought impossible, and that includes myself. One of my lectures at the Universität Bonn is called 'Game AI' and in that lecture I teach the students how, you know, computers play games. And we learned how they play chess [points to the screen at DeepBlue photo], and then I told them three years ago, last time I gave that class, "Well now, we know how to program a computer to play chess, and this is good and well, but this will never work for Go." And I said, two years ago, "In my lifetime, I will never see a machine beating the world champion in Go."

How could I have been so sure? Well, first of all I was wrong, right? Not even two years later, it happened. But consider this:

There are 10 raised to the power of 170 possible developments a game of Go can take -- courses a game of Go can take -- and this is such a brutally large number, I have to break it down for you.

The number of atoms in the universe is estimated to be 10 to the 80th. If for each of the atoms in the universe, there was another universe, and we would count all the atoms in all these 10 to the 80 universes, we would have 10 to the 160 atoms. Which is much, much less than 10 to the 170 possible courses a game of Go can take. This was thought to be impossible and what they did here, they used neural networks, and these neural networks, after lengthy training time, developed intuition as to how to play Go.

2 weeks later
#128 6 years ago

I haven't posted here in a while, but I haven't stopped fanatically evangelizing about AI elsewhere. One of my other areas of interest is imagining what's going to happen to us along the way to artificial general intelligence, because surely before we achieve human-level AI, we are going to master the means to automate humans out of the workforce.

Earlier this week, a friend on Facebook made a post positing that many of our blue-collar jobs and certain kinds of white-collar jobs will disappear due to automation in the next 50 years, if not sooner. Would this be a good or bad thing? What will this mean for unemployment and our education systems? How will people find their drive and meaning in this world? And what is your primary concern when contemplating this premise?

Here was my response:

The innovations of the past automated human brawn. The innovations of the future will automate human thought. AI in the future won't just crunch numbers faster than humans; it will also solve problems more quickly and creatively than humans do. It's not just robots coming for our blue-collar jobs; AI in the cloud will take our white-collar thinking jobs too.

We are going to face unemployment on levels that have never been seen before. This won't be a situation like the past: when farming jobs went away, the displaced workers went to the factories; when factory jobs went away, the displaced workers moved into offices. When these jobs go away, there won't be a new haven for displaced workers to migrate to. Any new fields that are created by AI taking our existing work will also be better filled by AI. The percentage of roles that humans fill better than machines will inevitably shrink to 0.

What does this mean for educating our future labor force? It means that their educations will provide no direct, practical skills they can contribute to the labor pool. We will never be able to guess what skills or information could possibly be relevant by the time today's students become adults, because at that point the rate of change will be too rapid for us to predict 10-20 years into the future.

My primary concern with this premise is how we as a society are going to adapt to a reality where the majority of our workforce is not only unemployed, but also unemployable. It would be a waste of our technological progress to suppress AI in the workforce just so people can continue wasting most of their lives toiling at jobs that aren't necessary for people to hold.

Here's a great quote I saw recently:

Artificial Intelligence, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise you’re going to be a dinosaur within 3 years.
- Mark Cuban

