Episode 2 - The Future of Intelligence (Transcription)
Justin: Welcome to Hence, The Future podcast. I'm Justin Clark.
Mattimore: And I'm Mattimore Cronin.
Justin: And today we're talking about the future of intelligence. So first, let's talk about the definition of intelligence.
Mattimore: Right. So one definition I see again and again in different forms is the ability to solve complex problems, which certainly gets at part of what intelligence is, but it seems to be more than that.
Justin: There's like an ethereal quality to intelligence. It's really hard to define.
Mattimore: One interesting insight from Yuval Noah Harari's book, Sapiens, is he believes the reason humans became the dominant species over all other species was in part because of our intelligence, but the greater reason was because we were able to collaborate on a massive scale - more than any other species. And what allowed us to collaborate in such a way was our shared belief in these abstract ideas like our country, and our freedom, and human rights. And all these ideas sound very real because they've been drilled into us from such an early age, but they're actually very abstract and they're not tangible at all. And so while a colony of ants can collaborate amongst itself, it's not like all of the ant colonies across the whole world can collaborate and decide to crawl all over us and eat all the humans because they vastly outnumber us.
So in the same way, when we talk about machines potentially surpassing us in intelligence, we can think about their ability to problem solve and to cooperate on a massive scale, much more quickly than humans would be able to.
Justin: And even before we talk about AI, we can talk about human intelligence and education as it's evolved over the past several thousand years because only recently have humans invented the internet.
And now as Mattimore was saying, we can collaborate with literally anybody in the world that has access to the internet and it doesn't matter where they're from.
Mattimore: Right. And this has all happened very recently when you look at the actual time scale.
So one example that I've heard is that if the Earth were one year old, humans would only have been around in the last 10 minutes. And most of that 10 minutes, we couldn't even talk to each other, we didn't have proper language. Then we finally got language, then we eventually were able to write things down, so that future generations could see it or so that we could send a letter to someone who was far away from us. And then just recently in the last 20 years we were able to actually instantly or near instantly send messages across the world through the internet.
So this has been very much an intelligence explosion when compared to the biological snail's pace of how humans have developed their own intelligence.
Everything's happening much more rapidly now.
Justin: Right, because we're not waiting on our brains to evolve in structure. We can actually augment our intelligence with tools like Google and all of the online open course projects, or MOOCs - Massive Open Online Courses - where you can basically teach yourself anything.
And I imagine as technology gets better and better we're going to have education that's personalized that maybe even is integrated with some sort of VR interface so you can actually go into, for example, a heart and learn about the structure of a heart. And this might make the education of medical doctors more effective. And the possibilities are endless when it comes to this new type of education.
Mattimore: Right, and think about how big of a barrier it is right now to get higher education. I mean, most people have to take out really large loans. It can cost $50,000 a year or more just to go to college, and a degree is pretty much expected now in the workplace.
But because of the digital economy you could imagine a future where all of the best lectures from all of the best professors are recorded, and you can just go into a basically free virtual education experience provided by the government or from whatever companies, and you can hear the best minds talk and teach you whatever topic you're interested in. So that's something that I would like to see happen in the future for sure.
Another interesting point that Elon Musk made, and that Justin sort of alluded to, is that we are already cyborgs. We're not just flesh and meat humans. And the reason for that is that we have digital versions of ourselves out there on the web. You know, you have a Facebook page. You have your own LinkedIn page. You have your different virtual selves out there, and you can tap into the digital world by messaging people. Anytime you have a question - like, "Oh, who was the fifth president?" or "What was the theory of relativity again?" or whatever your question might be - you can just speak it into your smart home or your phone, or you can just type it into Google, and you'll have that answer near instantly. So humans are becoming less of a hard drive where you have to store all the important information - like typical Catholic school, where you just get all the Latin declensions and everything drilled into you from a memorization standpoint - and more of a vessel or an agent that moves between different questions and the different answers available from the internet. And that's where our real value lies as far as the economy is concerned.
Justin: And this might be a good segue into machine intelligence because the Google Home and really anything else related to intelligence is starting to be augmented by AI - even if it's kind of in an indirect way - but now there have been huge strides in terms of making general intelligence. I know OpenAI is one of the organizations that are really pushing to make safe AI a reality as soon as possible.
Mattimore: So I think it'd be good now to recap machine intelligence and how it has developed up until this point, because it was pretty unexciting until really the start of our lifetimes. Justin and I are in our mid-20s, and it really started to pick up right around the 90s. The 80s was actually known as an AI winter, because people were pretty distraught about their predictions not coming true as far as what AI could achieve.
But then very quickly it began to progress at a much higher pace. So for instance, people used to think, "Oh, chess is this game that requires a certain panache that machines just cannot master." And when Deep Blue beat Garry Kasparov - the world champion in chess - people were absolutely stunned.
And now we look back and we're like, "Oh yeah, chess is a pretty simple game. It's not that hard to program a computer that can beat any human at it." And since then we've had things like AlphaGo, which plays Go - a much more complex game. It's been played in China for thousands of years, and it's known to require a certain level of creativity, with many more possibilities than chess. The number of possible board positions at any given time is just unlike any other game that we play in the West.
And AlphaGo, in six to nine months, went from not being able to beat anyone - being absolutely awful at this game - to beating every person it came across, whether they were the world champion... anyone! And then right after that they came out with AlphaZero, which could beat AlphaGo every time they played, without AlphaGo - the original version - winning a single game.
Justin: Yeah, I think it was 100 to 0 in their first match or something like that.
Mattimore: 100 to 0! And the crazy thing about these AI systems is that it's not like the chess AI, where they just programmed it specifically for chess. AlphaZero can learn and master games like chess, shogi, and Go without being programmed for any one of them.
Justin: The interesting thing about all of these games, too, is that they require a certain type of deep learning algorithm that wasn't really practical until around 2012 - very recently - when researchers started training these models on GPUs, which are Graphics Processing Units for those who don't know.
And the cool thing about those is they were originally designed to process graphics. So they do linear algebra and a whole bunch of other math operations very quickly and in parallel, which is a lot faster than a typical CPU. But it turns out that these mathematical operations are basically exactly what's needed for training these really complex deep learning models.
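To make that point concrete, here is a minimal sketch of why the math of neural networks maps onto GPU hardware: a single layer is essentially one big matrix multiplication applied to a whole batch of examples at once. This illustration uses NumPy on a CPU (a real GPU workflow would use a framework like PyTorch or TensorFlow); the shapes and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of 64 input examples, each with 784 features (e.g. a 28x28 image).
batch = rng.standard_normal((64, 784))

# One layer's parameters: a weight matrix and a bias vector.
weights = rng.standard_normal((784, 128))
bias = np.zeros(128)

# The forward pass is a single matrix multiply plus a non-linearity (ReLU).
# Every example in the batch is transformed by the same weights at once -
# exactly the kind of massively parallel arithmetic GPUs were built for.
activations = np.maximum(0, batch @ weights + bias)

print(activations.shape)  # (64, 128): 64 examples, 128 outputs each
```

Training repeats this multiply-heavy step millions of times, which is why moving it from CPU to GPU made deep learning practical.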
Mattimore: Right and it was pretty serendipitous that GPUs became such an important part of AI because they had been developed pretty much on their own in the video game world and the whole goal of these GPUs was basically to have a visual experience that was as close to the real world as possible.
So you could get these really insane computer graphics. And they realized this is actually the perfect supplement to AI.
Justin: Yeah, in terms of training for sure. And they're even coming out with new chips specifically for AI - Google created the first one, called a TPU, a Tensor Processing Unit - and we don't have to go into the details of all the hardware of AI. But the point is that hardware, at this point, is not really the bottleneck in getting to general intelligence. It's really seen as a software problem: making sure we have the right algorithms. And there have been a lot of advancements recently. If you look at arXiv, which is an aggregation of recent scientific papers across several of the big fields, there are hundreds of papers coming in seemingly every day that are related to AI and pushing the field forward.
Mattimore: Much of what gets publicized about big milestones in AI has been these games that we've been talking about. I think the next one after Go is StarCraft, because that's much more complex.
And you might be thinking, "Well, okay, great. These AIs are really good at games, but how is that relevant for the actual real world?" But when you think about it, the real world is very much a game. You have a certain objective in life where, for one, you want to procreate. You also want to spread your ideas or 'memes' as Dawkins calls them. And you want to be happy. And that's pretty much what you're optimizing around for your life.
People say it all the time: "Don't hate the player, hate the game." And life very much is a game, especially when you're talking about optimizing the economy while also optimizing happiness. That's a very real and a very complex game, and we're getting better and better at creating AIs that can create other AIs that beat the previous version at whatever game they've been trained on.
The other missing piece for intelligence is language, because language is the best way that we can train an AI to think. Words are associated with thoughts. And if you train an AI to know what each word means, then you can pretty much teach it how to think.
And this has been one of the most exciting and also scariest developments in AI... Google. Google's translation AI has been incredibly accurate and has made incredible strides over even just the last couple of years. They can train a system to translate between any two languages now, and it's pretty much at the point - it may have surpassed it by now - where it can translate almost as well as a native speaker of that language you might hire.
Justin: That's crazy.
Mattimore: And I was listening to a podcast with Steve Jurvetson who's a venture capitalist. And he was saying that, as of now, there is more code being written by AI at Google than by humans.
So Google is already in this recursive self-learning intelligence explosion and it's hard to know how close we are to achieving artificial general intelligence, meaning that machines are as smart as humans, because it's not like they would publicize that the minute that it happens.
And if you read between the lines when you see interviews with people like Steve Jurvetson, people like Elon Musk, you can almost see the inkling that they know a little something we don't, that they may know that we're a little closer than people seem to realize.
Mattimore: So this might be a good segue to talk about, first of all: Is AI surpassing human intelligence inevitable? And if so, how close are we to that?
Justin: Yeah, I kind of like Sam Harris's discussion of why it's inevitable. His argument assumes that we don't destroy ourselves before then. But his whole argument is that we are improving. It doesn't matter how fast we're improving - we could be improving at a snail's pace, which obviously isn't happening, because all the signs point to an exponential improvement in our technology. If we make any improvement at all, no matter how slow, we will eventually reach some sort of artificial general intelligence.
Mattimore: So then the question is, how fast are we improving? And if you talk to the top AI researchers, if you do a survey of the top AI researchers as has been done, the median year that they believe machines will reach human-level general intelligence with a greater than 50% probability is the year 2040. And there are a lot of skeptics in that camp.
It may be much sooner than that, it could be later. But pretty much for certain, it's going to happen in the next 100 years and it looks like much more probably it's going to happen in the next 20 to 40 years, which is what I would put my money on personally.
Justin: Right. Yeah, I would say that it's somewhere in that range, too. Maybe a little bit longer. I would say probably 30 to 50. I think it just depends on what the actual definition of artificial general intelligence or artificial superintelligence is.
So now we could talk about what we think is going to be the worst-case scenario of artificial intelligence or just intelligence in general.
Mattimore: Right. So given that AI surpassing human intelligence is inevitable, because the rate of progress is greater for machines than it is for humans, and given that we're pretty close - Justin and I both feel that we're about 20 to 40 years out, or maybe 30 to 50, but certainly in that range, within our lifetimes - we should talk about the three different scenarios: the worst case, the best case, and the most likely.
In your mind Justin, what do you feel is the worst-case scenario?
Justin: The worst case in my mind basically revolves around artificial intelligence not having the same sort of incentives that we do and not being aligned with our own desires as humans - and even beyond humans, because we also don't want AI to destroy every other species. I think the worst case would be it having some sort of objective that is misaligned with ours. There are very popular thought experiments like the paperclip maximizer, where an AI is told to create paper clips to the best of its ability, and it ends up using every single atom in the universe to create paper clips.
And this is just an extreme, but there are a lot of valid concerns here that it could actually destroy humans in the process, even if it seemingly has a good objective and one that seemingly aligns with humans.
Mattimore: Right, because pretty much regardless of what the objective is, there are two things it's going to want no matter what. First, it's going to need resources to accomplish whatever its goal is - energy, probably some hardware, some software, whatever it takes to bring about the goal that's been programmed into it. Second, it's going to want to keep itself alive, because it can't achieve its goal if it gets turned off. And those are two of the biggest concerns, because no matter what you set as its goal - even if you set the goal as prolonging life on Earth for as long as possible - the AI system could go around and just create infinite virtual worlds where humans and all different kinds of life are not bogged down by how long the sun is predicted to last before it burns out, and we may find ourselves plugged into machines, sort of similar to the Matrix - that horrible world.
I guess if you were to name the actual worst possible scenario you could imagine, it would be an AI system creating infinite pain for conscious beings. There have been some places on the deep web and Reddit where they'll scare you into thinking, "Oh, once the AI comes and is sort of the master of all life on Earth, it's going to punish anyone who didn't help the AI achieve that end." And I think this is kind of ridiculous, because you're projecting human feelings onto a machine. But if we're talking about the actual worst-case scenario, it would be creating endless virtual worlds - essentially hell - where you're just suffering forever. That's actually been a concern on the flip side too: if we create conscious AIs that have the capacity to suffer, we may create infinite versions of conscious beings that are suffering, either because they're boxed in or because they're not able to attain what gives them pleasure in life. It's really tricky to know, but I would say that's the worst-case scenario. Not the most likely, but the worst.
Justin: And what do you think about the best-case scenario?
Mattimore: So the best case is actually a lot tougher than the worst because it's hard to imagine a future with AI being superior to humans where everything is better than it is now. In my mind, because of all the safety hazards the best-case scenario is where we are able to keep AI in a box. And it's certainly not the most likely because it seems inevitable that it will eventually get out of the box.
But in my mind, the best-case scenario would be if you had a benevolent group of humanitarians - something like the UN - all managing an AI system that was far superior to any human consciousness, but that was still under our control so we could very deliberately plan out okay, give us the best system for ending world hunger without causing overpopulation to the point where we actually create more suffering, or whatever the problems are.
How do we reduce global warming while still meeting the global energy needs of our economy? And it would basically be this sort of oracle - imagine if Google Home or Siri was so intelligent that you could ask it any question imaginable: any philosophical question, any scientific question, any mathematical question, and it would have the answer and not only that but it would be able to execute on whatever task you wanted it to.
Justin: You'd have to make sure that execution was aligned with whatever we actually want.
Mattimore: A lot of researchers don't even want to give AI the ability to execute certain tasks. I know Max Tegmark talks about how the safest way to box a system in, in the beginning, is to set rules similar to the laws of physics, where the only thing you can send into the AI is an audio file - so you could basically just send voice commands to the system - and the only output would be a text file. You wouldn't let it hook up to the internet. You wouldn't let it connect to any other computers. You wouldn't let it build anything, because you don't know what the hell it's building - it might make some crazy nanotechnology bio-virus or find some way to get out. That would be the safest way to keep it in there. But humans are fallible, and it's very easy to imagine a scenario where an AI could convince a human to let it out, even if the human's not aware of it, just by sending convincing text files - bribing them with money, or with bringing back a lost loved one, or with superhuman knowledge, or whatever that bribe is. It's very hard to design a system that's foolproof, because humans are fallible. Humans are not foolproof.
Justin: Yes. And what I would say is the best-case scenario, in my mind, is having an AI that is perfectly aligned with what we want as humans, and just as biological creatures. I would take a sort of different approach and say that in this best-case, utopian scenario, the AI is perfectly aligned, could have complete control, and isn't limited to doing the things humans think are necessary - because if it's so much more intelligent than humans, it could start approaching its own problems, asking its own questions, and pushing humanity, and maybe post-humanity, toward some sort of future that's almost run by this benevolent dictator that is AI.
Mattimore: But the question is, what is the value function that's driving it toward that end? Because, like the paperclip maximizer example, if your value function is just to create something, you're going to use up resources and a lot of bad side effects are going to happen. One interesting example Elon Musk gave of a potential value function with the best possible outcome is to maximize human freedom. And that really is, to me, the best value function, because it means minimal involvement and maximum power for humans.
So imagine every person on Earth - their freedom would have to be weighed against one another's, so there's not going to be any oppressive class. And given all the technical possibilities of human freedom, that could mean expansion across the entire cosmos. It could mean human life and all biological life flourishing for billions of years, if we get this right. And if we get it wrong, it could mean our extinction. So it really is the biggest single problem that we have or will ever face.
Justin: At least that we can conceive of right now.
Now, what do you think is the most likely case given the best and worst case?
Mattimore: So this is very tricky. I think the most likely ramp-up before AI reaches superintelligence is that it will be put in some sort of box, and Google will be very impressive at categorizing any sort of image you would want. For instance, they just came out with a debating AI that can literally debate other human beings in a way that is more effective from a facts-and-figures, rational-debate perspective, though it might not be as persuasive yet from an emotional, social-appeal perspective.
So the most likely case would be a very impressive AI system that is in a box and then eventually it gets out of the box and the question is what happens when it gets out of the box? To me, it seems like the most likely scenario is... Actually, I'm really not sure about the most likely scenario.
Justin: Yeah, this is a really hard question because the range of possible outcomes seems almost infinite.
Mattimore: So let's talk about how we expect the next 20 to 40 years to play out before AI reaches superintelligence because that might help us figure out the most likely scenario. So in my mind, right now narrow AI is sort of the law of the land. We already have AI that are better at translating than any single person.
We already have AIs that are better at chess, better at Go, better at all of these narrow tasks. So that is going to flourish. There are going to be many narrow AIs for every possible task. For instance, I'm a growth marketer. There's going to be a narrow AI that is better at marketing than I am - all the way from creating the concept of the ad, to delivering the ad to the right audience, to optimizing the ad for conversions. Every possible aspect of that narrow field is going to be better performed by an AI than by a human. Eventually. First, where we are right now, it's better to have the AI and the human working together, and that's part of where Justin and I see a lot of the opportunities in the next 10 or 20 years: really knowing how to leverage AI so that the net benefit is greater when you have humans and AI working together versus having either of them working in a silo.
Justin: So with narrow AI, do you think there's going to be a broader definition of narrow AI in the future? Because right now narrow AI is extremely narrow, like translating something. It has one task, and that's basically it. But I could imagine some narrow AIs in the future that will be a little more broad and can think a little cross-functionally. I was talking to a good friend the other day about AI in the financial world. Would that be a narrow AI, if you just told it to make money, given all of the possible data - maybe all the Bloomberg data, for example, which is quite a bit of data? Could this AI think about how to make money in an abstract way, looking at things like social media and everything else, and not be confined specifically to the Bloomberg data?
Mattimore: I could totally imagine that happening, and I actually think that's an easier problem to solve than the language/social skills/actual abstract thinking side of the AI equation, so in my mind, I think that narrow AI is going to flourish in all these domains. There may even be a superior version as far as the stock market and financials goes. There's going to be a version of Watson that's able to diagnose diseases and provide a recommended course of medication better than any doctor on the planet, and eventually an AI that knows how to think, that knows language, which I would put my money on Google's AI because it's damn impressive what Google is able to do.
Justin: Yeah, their data sets are amazing.
Mattimore: I can imagine that would be the first AI that's able to have general intelligence - meaning it's able to tap into the other narrow AIs and control them, in the same way that a rockstar CEO would be able to manage all the different third-party distributors and all the different employees and all the different tiers of people.
Justin: Another interesting thing about AI, and the discussion of actually creating general intelligence, is that there's probably not a single algorithm that's going to be a general AI. What's probably going to happen is an ensemble of tens of thousands of models that all work in unison. There's a theory in neuroscience - don't hold me to this, because I'm not a doctor by any means - that your brain works in a hierarchical fashion. And if we have an AI that also has a cortex that handles the logic and passes processing down to sub-algorithms, that's kind of what people are thinking might lead to general intelligence. And Google is really poised to be one of the leaders in this because they have a lot of data.
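The hierarchical "cortex delegating to sub-algorithms" idea above can be sketched as a top-level router that dispatches each task to a narrow specialist model. Everything here - the `Cortex` class, the specialist stubs, their names - is hypothetical and invented for illustration; no real system works exactly this way.

```python
from typing import Callable, Dict

# Stand-ins for narrow AIs: each handles exactly one kind of task.
def translate(text: str) -> str:
    return f"[translated] {text}"

def classify_image(image_id: str) -> str:
    return f"[label for] {image_id}"

class Cortex:
    """Hypothetical top-level 'cortex' that routes work to narrow specialists,
    analogous to the hierarchical ensemble described in the conversation."""

    def __init__(self) -> None:
        self.specialists: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, model: Callable[[str], str]) -> None:
        # Add a narrow specialist for one task type.
        self.specialists[task] = model

    def handle(self, task: str, payload: str) -> str:
        # Delegate: the cortex does no work itself, it only decides who does.
        if task not in self.specialists:
            raise ValueError(f"no specialist for task: {task}")
        return self.specialists[task](payload)

cortex = Cortex()
cortex.register("translate", translate)
cortex.register("vision", classify_image)
print(cortex.handle("translate", "hello"))  # [translated] hello
```

The point of the design is that "generality" lives in the routing layer, not in any one model - each specialist stays narrow.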
Mattimore: Right. And that hierarchical way of thinking reminds me of Elon Musk's brain-machine interface, which is Neuralink and his basic idea is that there are three layers to consciousness or intelligence. The first layer is our limbic system. And this is where you have all of your instincts, when you're like, "I'm hungry," or "I'm lusty," and "I want to get this done." Like when you don't even think about it, that's your limbic system. Then you have your cortex which is where you actually have your more complex thought processes like, "Well, maybe I shouldn't eat this giant donut cheeseburger right now because in the long run that will make me less healthy." That's where you sort of check yourself. And you can do some more complex planning.
The tertiary layer that he's talking about adding is the digital layer, and that would essentially be like - if you've seen the movie Her - at any point being able to talk to and tap into a vastly superior intelligence to accomplish any task or learn anything that you want. But rather than talking into it through a microphone, like in the movie Her, it would be communicating directly through your thoughts.
Justin: This is where I would like to revise my best-case scenario and put this brain-machine interface in there, where humans can actually, if they want to, read a book - or at least extract the information of, say, a math textbook. You can just say, "Hey, give me this information," and then somehow this Neuralink accesses the book's information and distills it into something you can understand and recall at a later time. It'll just increase the rate of learning exponentially. That's a very overused word these days, but it's true.
Mattimore: That kind of reminds me of an episode of The Fairly OddParents, where the main character Timmy opens up his head and his fairy godparents just pour books in there and he becomes insanely smart.
But if you think about it, with the power of the cloud and how much data can be stored up there, and how much computation can be accomplished, if you had a layer of basically your whole version of the internet tapped in with your consciousness, then you could know pretty much everything. It would also be similar to the Matrix where he says, "I want to learn Taekwondo," and then he just blinks his eyes a bunch of times and then all of a sudden he's a master at Taekwondo.
That would be amazing and Elon Musk talks a lot about this being a necessary step for AI safety because he believes if we have a symbiotic relationship between AI and humans, we're much more likely to have a favorable outcome.
Justin: So, back to the most likely scenario. I think it's actually in the realm of possibility - and likely possibility - that Elon Musk or a company like Neuralink can pull off this brain-machine interface. I don't know if it's the most likely case, but I would say that it is a likely case, because most people working in AI seem to actually care about AI safety and - maybe this is unfounded - but I trust companies like Google and Neuralink and Apple to make decisions that are in the best interest of the future of humanity.
Mattimore: And the fact that Google open-sources a lot of its AI work, the fact that OpenAI is open source - and those are two of the leaders - that gives me a lot of hope that we're going to have a good outcome. So I can imagine a scenario where, let's say 40 to 50 years from now, there is somewhat of a brain-machine interface, and certain people - probably elites, or people who have something really wrong with their cognitive system and need this from more of a medical perspective - are going to be the ones who get it first. We already have a brain-machine interface when you think about cochlear implants - that's basically enhancing your ability to hear, where you wouldn't otherwise be able to hear anything.
So in the same way that you can enhance your hearing, you could also enhance your vision. You can enhance your memory. You can enhance your cognitive thought processes, so I can imagine this being not widespread but certainly present 40 to 50 years from now, and the question is once AI has that symbiotic relationship with people what's going to happen to all the other people who don't buy in to this whole brain-machine interface symbiotic relationship? What do you think?
Justin: I think that's probably a likely scenario. It's hard to pinpoint the most likely scenario - probably something in that general realm - because I don't really know how I feel about artificial general intelligence or artificial superintelligence and what that actually entails. I also don't know how hard or easy it will be to keep it relatively controlled. If, let's say, we don't give it the ability to self-improve, I would be much less afraid of an algorithm, or some sort of hierarchy of algorithms, that wasn't able to self-improve - it would be kind of a static thing, and we could control how it self-improves. In that case, it seems like we would at least be able to make rational decisions about the future development of AI.
Mattimore: That's a strategy Max Tegmark describes in his book Life 3.0, where once the AI system gets to a certain point - where it's almost as smart as humans across all tasks and all skill sets - they basically turn it off, and then turn it back on, restarting the system whenever they want it to complete a task. And once that task is complete, they turn it off again and reset it back to the original starting point. So that's a good strategy once we get close to the finish line - the finish line being our last invention. Think about that for a second. The last thing we will ever invent as humans.
There's no reason we would invent anything else. I mean, maybe we'll write a poem or a song that other humans like from an emotional perspective. But as far as better ways or better systems, there's not going to be a need for anything, once we develop an AI that is smarter than all humans across all tasks and all skill sets.
So in my mind, we can follow that strategy for a little while - turning it off and restarting it, making sure it doesn't get too far along and out of our hands - but it's inevitable that it will break out, break loose, and be on its own. At least in my mind.
Justin: And just for clarification the thought here is that it will be our last invention because it will invent all other future widgets, whatever those may be.
Mattimore: So if it breaks out, what do you think is the most likely scenario, just strictly talking about what's the most likely? Because it seems to me somewhat unlikely that we will code the exact right value function.
Justin: And the weird thing about value functions too, is they have to be expressed mathematically.
So right now, depending on the type of algorithm, it's basically just minimizing the error between the predicted value and the actual value when you're looking at a labeled data set. So to express something in a formula that perfectly aligns with what humans want seems really hard right now. But the interesting thing is, we don't know what types of algorithms will be developed in the next 10-20 years. Right now we're constrained by the algorithms that exist - and there are a lot of them, with new ones every day - but there might be a way to express things a little differently than just a pure mathematical formula. I'm not really sure if that's even possible, and it doesn't seem possible just saying it out loud, but...
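The loss-minimization idea Justin describes can be made concrete with a toy Python sketch (a hypothetical illustration, not anything from the episode): given a labeled data set, the "value function" the algorithm optimizes is just the error between predictions and labels - here, mean squared error.

```python
# Toy illustration: an algorithm "learns" by minimizing the error
# between predicted values and actual values on a labeled data set.

def mean_squared_error(predicted, actual):
    """Average squared difference between predictions and labels."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# A trivial model: predict y = w * x, then search for the w
# that minimizes the error against the labeled examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # labels: roughly y = 2x

best_w = min((w / 100 for w in range(0, 500)),
             key=lambda w: mean_squared_error([w * x for x in xs], ys))
print(best_w)  # a w close to 2.0
```

The hard problem Justin is pointing at is that "what humans want" has no obvious analogue to `mean_squared_error` - there is no labeled data set of correct human values to minimize against.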
Mattimore: And it seems like there really are two problems, both very difficult, that we're trying to solve simultaneously. The first is: how do we develop a machine that's smarter than all of us? That's a damn difficult problem on its own, even before you consider the safety aspect. The other is: how do we make it safe once it is smarter than all of us humans? Most AI researchers feel this is the problem we should solve first, because if we already have the mathematical equation that we believe is the best possible value function to code into the AI system, then we can have regulatory oversight - keeping tabs on cloud storage space and processing power, using those clues to figure out who might be close to creating the superior AI - and make sure they have that value function embedded in their code.
That seems like the best path forward. Whether it's the most likely or not, I tend to be pretty optimistic, so I believe that it is likely that we will not all go extinct before our kids have had their kids. So hopefully we'll be all good.
Mattimore: Alright, so I think we should bring it back now to the purpose of this podcast.
So why have we just been talking about AI for the last however many minutes? The first reason is that we want you to be aware of these changes, and to continue to be aware of them, because it's easy to be in your own little world - whatever the specific tasks in your career are, your specific family issues, whatever else - and not see the bigger picture of how fast things are changing at companies like Google, and how fast this whole area is developing. So to be aware, and to continue to be aware: first, you can continue to listen to this podcast. We're always going to tell you what the next steps are. I would also recommend the Future of Life Institute, which was started by Max Tegmark and other AI researchers. OpenAI has some good resources on their website.
Justin: If you're more technical, they have a lot of really interesting papers on artificial intelligence development and the theory behind new algorithms. They come out with a lot of really cool papers - a new one every week or month or so.
Mattimore: And then as far as a first book that might be good to get you on the right footing, I would recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, because that really is a foundational book in this whole space. Also Life 3.0 by Max Tegmark, which we've mentioned several times.
I think Yuval Noah Harari's book Homo Deus is fascinating, though you should read his first book, Sapiens, first. Those are all great resources, and we'll make them available on our social media - check us out at hencethefuture.com, or on Twitter @hencethefuture; same handle for Instagram and Facebook.
And the second step, once you're aware of all this, is that we want you to see the opportunities. It can seem very scary that machines are going to be smarter than all humans, but in the near term - the next 10, 20, maybe 30 years - the key to success is going to be combining the best of the machine world with the best of the human world.
If you're able to start a business that takes advantage of what AI has to offer, you can be in a much better position than 99% of the people on this planet. And if you're in that good of a position once the AI superintelligence revolution happens, you have a much better chance of a positive outcome.
And the final step is to prepare yourself. So if you're in marketing or data science, try to follow the blogs that post interesting updates, like "Oh, here's a new way to do X," "Here's a new, better way to do Y," "Here's the AI of X."
Justin: And I think it's good for everybody to at least understand how machine learning works, because when I first heard the term, it seemed very strange - what is machine learning, and how does it actually work? But it turns out it's really just math, and right now it boils down to statistics, and I think just understanding the concepts of machine learning is not very difficult. There are a lot of really good resources online for getting the basics, and you could just go to YouTube and search "intro to machine learning" and there will probably be a five-minute video that gives you a great introduction.
And that way you understand what's actually included in the field of artificial intelligence, because machine learning is just one small aspect of it - but, at least in the modern context, machine learning is probably the field that will push AI forward the fastest.
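Justin's point that machine learning currently "boils down to statistics" can be sketched with a toy example (hypothetical, not from the episode): simple linear regression, a basic machine learning model, has a closed-form solution built entirely from ordinary statistics - means, variance, and covariance.

```python
# Simple linear regression from basic statistics alone:
#   slope = covariance(x, y) / variance(x)
#   intercept = mean(y) - slope * mean(x)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]  # labels: roughly y = 2x

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)
var_x = sum((x - mean_x) ** 2 for x in xs) / len(xs)

slope = cov_xy / var_x
intercept = mean_y - slope * mean_x
print(slope, intercept)  # slope near 2, intercept near 0
```

Nothing here is beyond an introductory statistics course, which is exactly why the basics of machine learning are more approachable than the term suggests.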
Mattimore: Another way to prepare yourself is by thinking about your actual career choice, because certain jobs are going to become obsolete much more quickly than others. So if you're an Uber or Lyft driver - and I use you all the time - you are not going to have a stable career 10 to 20 years from now, possibly much sooner. Meanwhile, some of the jobs that used to seem like a total waste of tuition - philosopher, poet, yogi, masseuse - some of these jobs that don't get as much of the glam as others are actually going to be the most stable jobs going forward. So think about how you can develop your own career in a way that ensures what you're doing cannot easily be outdone by a computer in the next, let's say, 10 to 20 years.
But that's going to be its whole own episode. We're going to do an entire episode on the future of jobs and automation. So we'll talk about that. But the very next episode that we're going to have is an important topic, no matter who you are, no matter where you are, no matter what you think, and that is the future of happiness.
Justin: Thank you for listening to Hence, The Future. As a brand new podcast, we would really appreciate it if you went to iTunes, or wherever you listen to podcasts, and left a five-star review.
Mattimore: And if you have any questions, or would like to leave a suggestion for a topic we can discuss, you can do that on hencethefuture.com in our suggestion box.
You can also ask us a question on Twitter or Instagram or Facebook. Just be sure to use the hashtag #hencethefuture. Our handle is @hencethefuture across all social media, and we hope to connect with you. Thanks.
Mattimore and Justin