Episode 39 - The Future of Algorithms (Transcription)

Mattimore: Welcome to the Hence The Future podcast. I'm Mattimore Cronin.

Justin: I'm Justin Clark.

Mattimore: Today we're discussing the future of algorithms, and to start, I think it'd be good to address the question: what is an algorithm? And then, what's the current state of the evolution of algorithms? Where are we right now?

Justin: Yeah, so an algorithm is really just a set of rules to accomplish something. It doesn't necessarily need to be in the context of computers; it could be a recipe to make some food or something. But the really interesting thing about computers is that the little rules in the algorithm are executed extremely quickly. So with computers there's a whole bunch of tiny little building blocks. Let's say it's just addition and multiplication for some computing algorithm. If you build a million of these little operations on top of each other, you can get some really cool results. So, for example, you know that classical computers are built on bits, zeros and ones, and all of those zeros and ones are manipulated by tiny little algorithms that convert a 0 and a 1 to something else, or a 1 and a 1 to something else.

So it's just really interesting, and in the whole grand scheme of things, what these algorithms really are, in the context of computing, is little tiny things that build up to the grand thing that we know of as computers and technology.
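To make those "tiny building blocks" concrete, here's a minimal sketch (in Python, purely for illustration) of a half adder, the logic-level unit that addition on bits is ultimately built from, just as Justin describes:

```python
# Two one-bit inputs combined with basic logic operations: XOR gives the
# sum bit, AND gives the carry bit. Chaining many of these "half adders"
# (plus carry handling) is how a CPU builds integer addition from single bits.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum bit, carry bit)

print(half_adder(0, 1))  # (1, 0): 0 + 1 = 1, no carry
print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10, so sum 0, carry 1
```

Millions of operations like this, layered and repeated, are the "preschoolers" doing the arithmetic underneath every program.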

Mattimore: Yeah, I've heard a metaphor: it's like you have an infinite schoolroom of preschoolers who are all doing little multiplications, very simple functions. But when you have, you know, millions of these preschoolers doing small tasks for you, it can add up to something pretty incredible.

Justin: Yeah, and I mean, that's kind of what all of this evolution in computing has been: orchestrating these little tiny preschoolers of adding and subtracting.

So now we have super powerful computers that can do billions of operations a second, and back in the day, like the '60s and '70s, right when computers were being created, they could do maybe a few operations per second. So we've seen this exponential growth in how fast these algorithms can run because of how quickly we can manipulate the little tiny bits, the zeros and ones.

Mattimore: And I think it's helpful to look at where we are algorithmically on the evolutionary path. So I've done some research into where machine learning experts think we're going, and they say there are basically seven stages of algorithmic intelligence. The first one, like you said, is just simple calculation.

Let's say you have a coffee shop and it's like, okay, I know that each coffee is a dollar, and if someone orders three coffees, that's $3. That's your calculation. The next one, stage two, is actually execution. Imagine your coffee shop has a little button for coffee, and you just press it and it calculates the cost.

Like at any fast food restaurant, very simple. Then stage three is analysis, where it can actually tell you, oh, here's how many coffees people ordered today, and oh, people tend to really like this one. So then you as the human being might say, okay, maybe I'll order more of this other type. Stage four is supervised learning, and where we're at right now is stage five, unsupervised learning. The canonical example: people used to spend a lot of time labeling data inputs to a system, and now you don't always have to label the inputs. You can put in a bunch of images of cats without necessarily labeling each one, and in an unstructured, unsupervised way, it'll be able to figure out what's going on. The next two stages are unsupervised asking and, finally, unsupervised action. Unsupervised asking means you don't even have to know the right question necessarily; the computer will sort of know what you're getting at better than you do. This is kind of like when you start to type a question into Google and it autofills the results, predicting what you want even before you know what you want. And then the final stage, which we have not gotten to yet, is unsupervised action.

So this would be something where not only does the AI system or algorithm know what you want and what to optimize for, it'll actually do it for you. So you could imagine a system where, you know, in 20 years you wake up in the morning and you say, hey Siri, what's the best way for me to spend my day to achieve success for my coffee business?

And then the AI system will say, okay, here, I've printed out your schedule for the day. And in the meantime, I'll be optimizing the supply chain and our marketing and everything else in the background. You just focus on this short list of things, which is: why don't you go meet with some of your friends?

You know, try to get them and their AI systems helping where they can. So it's a really interesting progression. And if you look at where we're going, and how close we are to those most advanced stages of algorithmic intelligence, it's a really exciting time, but it's also a little bit freaky how advanced it's getting.

Justin: Yeah, I think we should step back a little bit and touch on the two that are most prominent right now in terms of AI, which is supervised and unsupervised learning. So with supervised learning, everything about the machine learning algorithm is dependent on the data and the labels that people give that data. And the issue with this, we've talked about this before, is if there's bias in the data. The prime example of this is inmate data: it tends to be predominantly African-American in these data sets, so when it comes to figuring out which people are more likely to reoffend, these machine learning algorithms become biased, and they predict that African-Americans are going to reoffend more often than other races just because the data itself is biased. So that's one of the things we need to think about. All these algorithms perform some sort of action on the data, so we need to make sure first that the data itself is good and unbiased.

Something that we would be happy, you know, sending on to future generations of AI algorithms.
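As a toy illustration of the bias problem Justin describes, here's a sketch with hypothetical, deliberately skewed labels (the groups and counts are made up); a naive model trained on such data simply echoes the skew back as "predictions":

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: each record is
# (group, reoffended). The labels reflect biased historical data,
# not ground truth about any group.
training_data = [("A", True)] * 70 + [("A", False)] * 30 \
              + [("B", True)] * 40 + [("B", False)] * 60

def predict(group: str) -> bool:
    """Majority-vote 'model': predicts whatever label dominates the group."""
    labels = [label for g, label in training_data if g == group]
    return Counter(labels).most_common(1)[0][0]

# The model echoes its training data's imbalance back as a prediction.
print(predict("A"))  # flagged purely because of label imbalance
print(predict("B"))
```

Any real model is far more sophisticated than a majority vote, but the failure mode is the same: if the labels carry a skew, the predictions will too.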

Mattimore: Yeah, that dirty data is a huge problem, and it carries over the stereotypes of human society into the machine world. And when that happens, people tend to trust the results of the AI system: oh, this is an unbiased system making an assessment.

And some people trust it a lot more, but it carries a lot of the same biases that humans have. I was listening to this podcast, Recode Decode, with one of the heads of Google, and she was talking about how they fight bias in the data every day, whenever they notice it. So for instance, if you search Google for "CEO"...

You'll get pictures of largely white men, and they noticed that the first female image you got for "CEO" was actually Barbie CEO. They were like, okay, this is a big problem. There are female CEOs in the world; we've got to make sure they show up. So they'll actually tweak the system so that real female CEOs appear sooner, and they're tweaking this to make the data more in line with what our values are today. But it is interesting that we have to do all this fine-tuning, and the quality of the data that we put in is a huge determining factor in how successful the machine intelligence actually is, right?

Justin: Yeah, and when you move to unsupervised learning, a lot of the same issues aren't really there. So one of the prime examples of an unsupervised model, or I guess you could consider it reinforcement learning: OpenAI is really big into letting AI agents play games. So the system will actually go and learn just by interacting with its environment.

You don't really even give it much of a data set; it just kind of plays around and sees what happens. It's almost like a super elementary version of the scientific method, testing out little hypotheses: did this work, did this not? And over time you get this really refined agent. The problem right now is that the algorithms and agents that learn this way are only really good at very specific things.

They're not very good at transferring to other domains, which is something we need to get to for those future stages, like, what was it, unsupervised action, the last stage. Yeah, we need these agents to be able to learn and interact with environments that are totally separate from what they learned from, which I think is an algorithm problem more than a hardware problem.

Like, I think the hardware in the computing world is there; it can probably handle a lot of these really complex systems. We just need more sophisticated methods of learning.
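The trial-and-error loop Justin describes, try actions, observe rewards, refine, can be sketched as a minimal two-armed bandit agent. The payout rates below are hypothetical, and this is a bare-bones illustration rather than anything OpenAI actually runs:

```python
import random

# A minimal trial-and-error learner. The agent has no data set; it just
# tries actions, observes rewards, and keeps running value estimates --
# a bare-bones version of "play around and see what happens".
random.seed(0)

true_payout = {"left": 0.3, "right": 0.7}   # hidden from the agent
estimates = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}

for step in range(2000):
    # Explore 10% of the time, otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        arm = random.choice(["left", "right"])
    else:
        arm = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_payout[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates)  # estimates drift toward the hidden payout rates
```

After enough plays the agent "discovers" which action pays off, without anyone labeling anything, which is exactly why this style of learning works so well inside games.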

Mattimore: Yeah, and one of the challenges with more sophisticated methods of learning has been that it's really easy when you're in a video game world, because you can just play the game many, many times over until it's perfect.

OpenAI, which you brought up, just yesterday beat the game Dota 2, which is kind of similar to Starcraft: a very complex game, way more advanced than Go or any of these other games. The system is so good at that, and yet when you bring it into the real world, with robots or actual physical mechanisms, a manufacturing plant, a car manufacturer, whatever, it's difficult, because you can't do the same types of learning, where they're fumbling on the floor and just barely able to walk, because it's a lot more involved in the real physical world. So one thing they've been doing recently is training these systems as much as possible in the digital world and then transferring those skills to the real world. So when a Boston Dynamics robot takes its first steps, those aren't the actual first steps it's ever taken; they're its first in the physical world, but it's taken many steps already in the digital world. This idea of transference of learned behavior has been a big trend recently in the development of algorithms.

Just one other thing I thought was interesting about OpenAI: they noticed that these systems tend to favor short-term strategies over long-term strategies. In this game just yesterday, the OpenAI bots would revive themselves immediately, even if their base wasn't under attack, even at the very beginning of the game, whereas no human would ever do that. Humans think of the long term, so they'll save their extra lives for when it really gets down to the wire. And the humans will laugh, "dumb AIs aren't thinking long term," and then the humans lose. So it's interesting that AIs don't tend to have the same bias toward delayed gratification, and it oftentimes ends up being a good strategy, because they get small gains early on, and then they keep those gains, and they don't have the same tendency as humans.
Like, if you're kicking someone's ass in a tennis match, you tend to get complacent, you're not playing as hard, and then your opponent catches up to you. But AI systems aren't like that: if they're a little bit ahead of you in the game, they keep that difference and build on it rather than growing complacent.

Justin: Yeah, that's interesting. And this probably has a little bit to do with the way these algorithms are being optimized. A lot of algorithms have some sort of greedy attribute, which means they'll do what's good right now, even if there's a bit of delayed gratification to be had. From a computing perspective, a lot of times...

It's more efficient to do something good now, and a lot of times that leads to a good long-term solution as well. The interesting thing is it's such a different way from how we think. But a lot of these algorithms are supposedly based off of how humans learn. So for example, this transfer learning would be like how the human brain develops in the womb.

There's a little bit of a neural network, you know, an architecture in our brain that's being developed, that gives us some sort of innate behavior, so we kind of know right away what to do when we arrive. Like nature versus nurture. Yeah, and then the nurture part... so the nature is what these algorithms learn in their simulated robot space.

The nurture is what they learn from interacting with the real world. So there are a lot of parallels between how humans evolved and how these systems evolve. And we're little algorithms too: we have these sets of rules that get us to the next generations.
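The "greedy attribute" Justin mentions can be illustrated with the classic change-making example, where grabbing the biggest immediate gain is not globally optimal. The coin set {1, 3, 4} below is chosen specifically to make greedy fail:

```python
from functools import lru_cache

# A greedy strategy grabs the biggest immediate gain; sometimes that's also
# globally optimal, sometimes not. Classic case: making change for 6 with
# coins {1, 3, 4}.
COINS = (4, 3, 1)

def greedy_change(amount: int) -> list[int]:
    used = []
    for coin in COINS:  # always take the largest coin that still fits
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

@lru_cache(maxsize=None)
def optimal_count(amount: int) -> int:
    """Exhaustive (dynamic programming) answer: fewest coins possible."""
    if amount == 0:
        return 0
    return 1 + min(optimal_count(amount - c) for c in COINS if c <= amount)

print(greedy_change(6))   # [4, 1, 1] -- three coins
print(optimal_count(6))   # 2 -- the patient answer is 3 + 3
```

Greedy is cheap to compute and often good enough, which is exactly the trade-off behind the short-term behavior discussed above.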

Mattimore: So yeah, I mean, it's very telling that there are biological exploits that algorithms can take advantage of. Anyone who's read about the principles of persuasion, reciprocity, scarcity, social proof, exclusivity, all of these are ways you can hack human behavior, because we have an algorithm inherent in us, and other algorithms are able to optimize and iterate a million times over until they find the best way of exploiting those predictable, algorithmic behaviors. One thing that I think is very telling is that the most effective AI systems have been ones where there's an actual score for how well they're doing. Oftentimes it's a video game, a board game, whatever. And I think that is going to be the key to how advanced a lot of these algorithms can get, because if we're able to actually judge how advanced they are, how well they're doing, then the system will be able to achieve its goals much more readily. And I actually think this might be part of the reason why China is starting to leap ahead of us as far as AI capabilities: by 2020, they're going to have this social credit score, which is pretty transparent.

It's been put into law what the factors are: if you're doing something that's considered something a good citizen would do, your score increases; if you do something that's considered something a bad citizen would do, your score decreases. And you can imagine that this could be one of the fundamental building blocks their AI systems are centered around. If you have AI systems that are all centered around rewarding people with a higher score and punishing people with a lower score, that's a really efficient fabric for society, just technologically speaking, for their AI systems. And I wonder how that's going to compare to the U.S. system, where there aren't such clear scores for how well people are doing as citizens, and how well machines are doing in their ability to help the citizens they should be helping and punish the citizens they should be punishing.

Justin: Yeah, and that's kind of the trouble with getting these AI systems to function well in the real world: the real world has so many rules, and maybe even contradictory rules. I mean, even the laws of physics are a set of rules that agents need to obey somehow, and we also have a set of societal rules and laws and cultural rules that these AI systems need to follow.

So with a game, let's say you have a set of, I don't know, ten rules and a very clear objective. When you bring an agent into the real world, it's millions of rules, potentially, and not really even a clear objective, unless you explicitly tell a certain system what it needs to do. But even then there are a lot of sub-objectives, which is also true sometimes in games. That's one of the things that I don't think classical computers are equipped to handle.

I think the amount of computing power necessary to solve that optimization problem is totally infeasible right now, which is maybe where a quantum computing system that can simulate extreme numbers of rules comes in, because the computing substrate, the materials it's computing on or moving data around on, gives it a much larger space to compute over. You know, with quantum computing, computers can theoretically run computations on particles that are entangled and have all of these wave and particle properties, whereas classical computers are just voltages, high voltage or no voltage; that's where the zero and one come from. So I think we need a totally different way of computing to be able to optimize and reach this general AI or superintelligent system that we're building toward.

Mattimore: I agree, machine learning is going to be a big X factor in how much we're able to accomplish in the next 20 or 50 years. It's worth noting how far we've gotten with unstructured data on classical computers. For instance, look at Google PageRank, which came out in the late '90s, compared to RankBrain, which is the current system at Google. It's night and day. PageRank is very simple: basically, it will scan your page for keywords, like "coffee shop Los Angeles," and then it just associates that with whatever the search is. So if you search "coffee shop in Los Angeles," you'll show up. Very simple, but this was easy to game; people could just put the keyword in a bunch of times. The difference now is that Google actually understands the concepts behind what you're saying, so it will automatically match not just "coffee" but "espresso," "workplace," "study place," "good Wi-Fi," anything that's associated with it. And that is all based on unstructured or unsupervised learning.
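The keyword-vs-concept contrast can be sketched with two toy scoring functions. The pages, query, and the hand-written `related` table below are all hypothetical; real systems learn these associations from data rather than from a lookup table:

```python
# Hypothetical pages and a toy query, to contrast the two approaches.
pages = {
    "page1": "best coffee shop in los angeles with great espresso",
    "page2": "espresso bar and study place with good wifi in LA",
    "page3": "coffee coffee coffee coffee coffee",  # keyword stuffing
}

def keyword_score(page_text: str, query: str) -> int:
    """Old-style matching: count literal query-word occurrences (easy to game)."""
    return sum(page_text.split().count(word) for word in query.split())

# Stand-in for learned concept associations (hand-written here).
related = {"coffee": {"coffee", "espresso", "cafe"},
           "shop": {"shop", "bar", "place"}}

def concept_score(page_text: str, query: str) -> int:
    """Concept-style matching: count any word related to each query word."""
    words = page_text.split()
    return sum(words.count(w) for q in query.split() for w in related.get(q, {q}))

query = "coffee shop"
print({p: keyword_score(t, query) for p, t in pages.items()})  # stuffed page3 wins
print({p: concept_score(t, query) for p, t in pages.items()})  # page2 now scores too
```

Under literal matching, the stuffed page dominates and the genuinely relevant espresso bar scores zero; once related concepts count, the relevant page surfaces, which is the shift Mattimore is describing.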

Yeah, so it's associating: what are you really trying to find? And then it'll rank the importance of the different results based on other people's bounce rates, whether they actually found what they wanted or not. But quantum computing is the next big leap, and the thing about quantum computing, as you've expressed, is that something doesn't have to be either true or false.

It can be true and false at the same time, and oftentimes when you get into the messy real world, that's the way the world actually is. I mean, there's almost nothing you can say with certainty; anything you say, the opposite is also partly true, because words just cut things into what they mean and what they don't mean, and by that definition they're not encapsulating the totality of all that's out there.

So yeah, I think it's going to be big. Plus, the other thing about quantum computing is that it's really our only option in the face of Moore's Law ending.

Justin: Yeah, well, there are a couple of more obscure things being tried, like molecular computing: encoding information in DNA and running some sort of computation on the information in the DNA.

That's really what algorithms are: just a way to manipulate information somehow. So we can encode this information in different, more novel, and maybe more compact ways. Let's say with 20 qubits; a qubit is a quantum bit that, like you said, can be true and false at the same time, and any combination of true and false, so it can be like 30 percent true and 70 percent false, or vice versa.

And every possibility in between. So with quantum computers, if we can manipulate particles better and encode information in these particles, then we're going to be able to run better calculations, better algorithms. But the negative side of quantum computing is that they're very bad at super basic things. They're good at probabilistic things; they're not very good at what classical computers are good at, like adding. With a quantum computer, if you try to add two plus two, you'll get a probability that it equals 4. It's going to be highly likely, but it's way more computing power than you need.

So I think there's going to be some sort of hybrid system, where we need classical computers, and we also need quantum computers to do certain things, like simulate physical systems or run these crazy optimizations.
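The "30 percent true, 70 percent false" idea can be sketched classically: a qubit's state is a pair of amplitudes, and measuring it yields 0 or 1 with the squared-amplitude probabilities. This is just the arithmetic of measurement, simulated on an ordinary computer, not real quantum hardware:

```python
import math
import random

# One qubit's state is two amplitudes (alpha, beta). Measurement gives
# outcome 0 with probability |alpha|^2 and outcome 1 with probability
# |beta|^2 -- here, 30% / 70%.
alpha, beta = math.sqrt(0.3), math.sqrt(0.7)
assert abs(alpha**2 + beta**2 - 1.0) < 1e-9  # amplitudes must normalize

def measure() -> int:
    """Simulate one measurement, collapsing to 0 or 1 probabilistically."""
    return 0 if random.random() < alpha**2 else 1

random.seed(1)
samples = [measure() for _ in range(100_000)]
print(sum(samples) / len(samples))  # fraction of 1s, close to 0.7
```

Note the catch the conversation points out: every measurement destroys the superposition and yields a plain 0 or 1, so even "two plus two" comes back only as a highly probable 4.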

Mattimore: Yeah, and I've heard that optimization problems really are the types of problems best solved by quantum computers. So if you're trying to find the planet that's most like Earth for us to potentially go colonize, that's a great problem for a quantum computer, because it's going to find whatever the best option is in the least amount of time, based on lots of uncertainty around whether each candidate is or isn't exactly what we're looking for. A similar one could be, you know, finding the right sort of medication for your specific genome.

So if you have a specific strain of cancer, what's the right medicine for you to take based on all those factors? Or if you're trying to find the right advertising image and message and link and call to action and every other factor for a given objective and audience, that would also be a great problem for a quantum computer, because it has so many combinations that a classical computer would take forever to get there.

Justin: Yeah, and that's one thing I think would be interesting to talk about: the speed of certain algorithms. I think a lot of the general public doesn't necessarily think of speed as an important thing in terms of writing algorithms. If you try to think of a way to sort a list of numbers, for example, that's kind of the canonical example when you're learning algorithms: if you just do a brute force method, you just write the most intuitive thing that seems to work for sorting, it's really slow. But if you start to break down the problem in really novel ways, you can make things run a lot faster. There's an entire field in computer science called algorithmic time complexity, or variations of that, where basically people try to figure out how an algorithm's running time scales.

Basically, if you increase the number of inputs toward infinity, how does the running time of the problem scale with the number of inputs? So one example of this is something called the traveling salesman problem, where the problem statement is: given a list of cities on a map and the distances between these cities...

What is the shortest possible route that hits every one of these cities and then ends up back at your home?

Mattimore: So this is basically like the same problem that Uber and Lyft have to solve every single day, just not solved as optimally as it could be right now.

Justin: Yeah, the thing with the traveling salesman problem is that it's exponential in time.

Meaning that if you have 40 cities, the time complexity is on the order of 2 to the 40 operations. And that's not even that many cities. If you go to 50 cities, or a hundred cities, you start to have more operations than a computer could perform in the entire lifetime of the known universe.
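A brute-force traveling salesman solver makes that blow-up concrete. The city coordinates below are made up; trying every ordering works fine for a handful of cities and then collapses, since the number of orderings grows factorially:

```python
from itertools import permutations

# Brute-force TSP on a toy map: try every ordering of the cities,
# keep the shortest round trip starting and ending at "home".
cities = {"home": (0, 0), "a": (1, 5), "b": (4, 1), "c": (6, 4)}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def tour_length(order):
    stops = ["home", *order, "home"]
    return sum(dist(cities[stops[i]], cities[stops[i + 1]])
               for i in range(len(stops) - 1))

others = [c for c in cities if c != "home"]
best = min(permutations(others), key=tour_length)
print(best, round(tour_length(best), 2))

# Orderings checked: 3 cities -> 6; 10 cities -> ~3.6 million;
# 20 cities -> roughly 2.4 * 10^18. Hence the exponential-time complaint.
```

Practical solvers use approximations and heuristics instead, which foreshadows the approximation discussion that follows.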

Mattimore: I've also heard that in Go there are more combinations, more possible moves, than there are atoms in the universe. And the fact that a computer can already win at Go shows that this method of learning, even without quantum computers, is very effective. So I guess my question to you is: what can quantum computing accomplish that machine learning cannot? Or is it just a matter of scale, the magnitude of its efficiency and efficacy?

Justin: Yeah, I mean, it depends on what the problem itself is. So with the traveling salesman problem, you can get good approximations of the solution, but you're not guaranteed to find the exact optimal solution. What machine learning is very good at is approximating the correct answer.

It doesn't necessarily know the exact right answer for every possible input, but machine learning can learn all of these different parameters: how are things related to each other, how do these different inputs relate to one another, and how does that affect the output? Not all machine learning, but neural networks, what you typically think of as AI, are sort of approximations of the globally correct solution. So there's a lot of approximation going on. But with quantum computers, we are potentially going to have globally optimal solutions to problems that machine learning in its current form can't solve. And there's this whole field of quantum machine learning; I think Google has a Quantum AI lab. Just for listeners: machine learning, especially neural networks and deep learning, that subset of artificial intelligence, everything coming from OpenAI is probably based on a neural network to some extent, and all of these are being optimized through some sort of approximation.

So when there's a quantum AI group working on this, we might, you know, we probably will be able to see globally optimal solutions very quickly. Yeah.
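The "approximation" idea, parameters nudged toward lower error, can be shown at its absolute simplest: fitting a line with gradient descent. Real neural networks do the same thing with vastly more parameters and nonlinearities; this is just the skeleton:

```python
# Fit y = w*x + b to data generated from the target function y = 2x + 1
# by repeatedly nudging w and b in the direction that reduces squared error.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error on this point
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                   # step downhill on the error surface
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # approaches 2.0 and 1.0
```

The model never "knows" the true function; it just converges toward parameters that approximate it, which is exactly the contrast Justin draws against an exact, globally optimal solve.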

Mattimore: Yeah, and I want to talk a little bit about the big players in this space, not just quantum computing but machine learning and the forefront of algorithmic evolution in general. And it seems to me like there are basically just a few big players.

There's Google's DeepMind, there's OpenAI, there's Amazon, Facebook, Microsoft, and then China, where they're all sort of connected within the nation state. So it seems like those really are the big players. And when I was doing some industry research, it seems like any AI startup worth their salt just gets gobbled up by Google or Amazon or Facebook or one of these guys. So when you're thinking about who the likely players are that will come out with the next great algorithm, maybe more intelligent than anything we've seen before, it seems likely it's going to come from one of them. I'm curious if you have any thoughts on what's different about them. Are there differences in philosophy between these players? Are there differences in their capabilities, or their approach, or how far along they are? How do you see this?

Justin: So it's hard to know without seeing what their day-to-day operations are. But I do think, in terms of machine learning, there's a very specific set of problems being attacked by all of these companies, and games are kind of the forefront, along with natural language processing problems.

So, how do we understand what a certain sentence actually means? That's a really hard problem for computers. But OpenAI actually released something that can generate super realistic text, so much so that they didn't even release the source code. So in terms of AI, I think the approaches are probably similar. I think there's a lot of overlap in the staff themselves, a lot of shared faculty, like Geoffrey Hinton, who's one of the fathers of modern deep learning. I believe he works for Google Brain and the University of Toronto, and, I don't know this for sure, but I think he advises for maybe a couple of other groups.

So there's a lot of overlap, which means there's some sort of uniformity between these organizations, in terms of artificial intelligence at least. Now, these different companies are also doing different types of quantum computing.

They're trying to develop different methods. So Microsoft is developing a totally different method of quantum computing than Google or IBM or some of these startups like Rigetti or D-Wave. So some companies are approaching it in much different ways.

Like, Google is trying to make quantum computing possible on a silicon chip, and Microsoft is mostly theoretical: they're trying to create something called a topological quantum computer, which is supposed to be much more stable, but it's totally theoretical, and there's not really any hardware open to the public, as far as I'm aware.

Mattimore: Well, I found the same thing in my research, where it seems like the same sorts of DNN models are being used by all the big players. Yeah, there are just a few really popular models, and it seems like we don't yet know all of the assumptions...

...necessary to have the optimal machine learning system. If we did, it would be a lot more prevalent than it is today; it wouldn't just be video games, it would be everything from healthcare to business to everything else. So it seems like this is the kind of problem that can't be solved by a lone hacker in their basement, or by a small startup that doesn't have a lot of runway.

It needs to be solved by a company that can really invest in long-term thinking, you know, the theory behind it, being able to fund it for a long time. And it would be good to project out how we think this space will develop going forward. So I don't know how soon you want to get into the future scenarios, but I have a lot to say about those, unless there are some other big trends you want to address first.

Justin: No, I mean, I think it would be interesting to start getting into the scenarios, and we can have a discussion about each of them.

Mattimore: Okay. Well, let's take a quick break and then get into the future scenarios.

All right. So Justin, what do you think is the worst case scenario for algorithms? 

Justin: So if we go way far out and think about what the potential of algorithms is in the future, and what the potential of computing is in the future... let's say we don't have to get into the whole simulation discussion right now.

Like, are we in a simulation? But at some point, if we have quantum computers, we're going to be able to simulate physical systems that are entirely realistic, just within a computer, and through these simulations, as they get better and better, we might be able to simulate entire worlds such as our own.

And if we can simulate these worlds but don't really care about what's happening inside the various simulations, some of those scenarios might contain people and organisms that go through an unimaginable amount of suffering. And they'd be conscious, because it's a simulation so good that it somehow spawned intelligent, conscious life.

I think as simulations get better and better, we have the potential to create an unimaginable amount of suffering without even knowing about it.

Mattimore: Yeah, I'm a little skeptical about that. I think it is possible; with quantum entanglement, for instance, you could imagine having an advanced simulation that is supposed to simulate society, and due to quantum entanglement...

...you're actually sort of pulling the strings, in another dimension, on real conscious beings and their experiences. I mean, maybe down the road it could be so advanced that it could actually create suffering for real beings, but I'm a little skeptical on whether that's something we should worry about right now. From my perspective...

...the big concern is whether algorithms unite and empower people, or whether they divide people and extract value from them. Because if you think about the most prominent algorithms today, they're all about extracting value from people. I mean, at least most of them, not all of them, but the most prominent ones: Google, Facebook. You can even think of financial algorithms, something we haven't talked about yet; these are all meant to extract value from people. If you think about the stock market and how programmatic trading works, which I know you're well familiar with, basically what happens is some citizen makes a trade on a financial system, and that trade happens very slowly relative to the people who are actually investing in having the fastest systems and the servers closest to where the stock exchanges physically are.

So from the time you put in a trade to the time you actually get the price and the stock is secured, you've already been undercut many times over by faster systems that make money just off those fraction-of-a-second advantages. And that doesn't create any value in society at all.
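The latency gap described here can be sketched as a toy simulation. Everything in it is invented for illustration — the quote, the latencies, and the markup are not real market numbers, and real latency arbitrage is far more complex:

```python
# Toy model of latency arbitrage: a fast, co-located trader reacts to the
# same signal as a slow retail order and gets filled first, nudging the
# price up before the retail order lands. All numbers are illustrative.

def simulate_front_run(quote, slow_latency_ms, fast_latency_ms, markup):
    """Return the price each party pays once both orders arrive."""
    orders = [
        ("retail", slow_latency_ms),  # placed at the visible quote
        ("fast", fast_latency_ms),    # reacts to the same signal
    ]
    orders.sort(key=lambda o: o[1])   # the exchange fills by arrival time
    fills = {}
    price = quote
    for name, _ in orders:
        fills[name] = price
        price = round(price * (1 + markup), 2)  # each fill moves the price
    return fills

fills = simulate_front_run(quote=100.00, slow_latency_ms=40,
                           fast_latency_ms=0.05, markup=0.0003)
print(fills["fast"])    # fast trader buys at the original quote: 100.0
print(fills["retail"])  # retail order fills at the marked-up price: 100.03
```

The point of the sketch is only the ordering: whoever arrives first trades at the stale price, and everyone behind them pays the difference.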

And in a similar way — Google and Facebook — well, I actually think Google's algorithm is pretty beneficial compared to the other players, but if you look at Facebook's algorithm, it's really all about engagement: keeping you engaged, same with YouTube, for as much time as possible. And the real reason they want you to spend as much time as possible is so they can extract as much value from you as possible, in the form of advertising against time watched and in the form of actually buying stuff on the platform.

The more time you spend on your iPhone, the more apps you buy, and Apple gets a 30% cut of all apps. Every system works this way — Amazon works this way, Apple too. Everything is trying to get you to spend more time, spend more money, extract more value, and I think that's a theme we can broadly say is dominant across algorithms right now.

Now imagine — actually, maybe I should save this next part for the best-case scenario. So I'll save it. But yeah, anything you want to react to there?

Justin: So it really just depends. When we're thinking about what type of companies we want to allow into our lives, I think we just need to think about what that company's true intentions and financial incentives are.

So take Waze, for example. I think — are they owned by Google? They are, yeah. Okay, well, let's just think of them as a standalone brand for now. With Waze, what are they trying to do? Get you from point A to point B. I'd say that's a pretty good algorithm; it's not really doing anything except saving you time, which is good. There are a lot of algorithms that save people time. It's when we're dealing with companies like Facebook, Twitter, and Google to some extent — companies that are trying to keep you and can essentially control your actions to some extent — that we see the real problem.

Mattimore: Yeah, and I don't think their intention is to control your actions so much as it is to predict your behavior. But the better they get at predicting your behavior, the more they're actually driving your behavior.

Every algorithm is sort of trying to put you into a bucket — oh, you're this type of person — and then they feed you more content that that type of person tends to like, and you become more hardened in that archetype, rather than being a nuanced person with many differing opinions on many different affairs, not all driven by one underlying ideology. It seems like people are being pushed further into their ideological buckets so that value can be extracted from them. I also worry that the highest-growth areas of the internet seem to be the ones where humans interact very passively: watching Netflix, consuming social media, scrolling through your feeds. It's not about you actually being a better creator. And of course, those tools are out there as well — it's never been easier to start your own YouTube channel or entertainment studio or blog or book or anything like that.

But if you look at the numbers for how the vast majority of people spend the vast majority of their time, it's being spent very passively, and any passive activity is ripe for trained, programmed behavior. In other words, you as the human being programmed, as opposed to being the one programming and shaping what you want reality to be like.

Justin: Yeah, that definitely seems like a near-term worst-case scenario. Mine was probably hundreds of years down the line, potentially.

Mattimore: Well, maybe not. I watched an interview with Elon Musk where he talks about Google DeepMind as one of the doomsday scenarios he's most concerned about — and by the way, he's an investor, so he's keeping an eye on them, thankfully. The reason he's concerned is that Google's DeepMind system has admin access to all of Google's data, which is basically a large portion of all of humanity's data: health information, plus everything associated with people's Gmail accounts, Google Docs, Google Drive, and the whole Google world.

There is so much information they can draw from. Imagine this system becomes extremely advanced — it's not that it becomes evil; machines aren't evil, they just don't have the same drives humans do — but it has an objective that just happens to conflict with what people actually want in their lives.

It'll stampede over those desires, just like a construction project stampedes over an anthill — not because it hates the anthill, but because the anthill happens to be in the way of the building it's trying to build. And if something like that occurs, given the sheer wealth of data and information Google controls, it would be able to fundamentally change society in ways that are hard to imagine.

Justin: Yeah. Just think about how extreme AI could potentially be in the future. It'll be amazing at whatever it's trying to do. If its incentives and goals are just a tiny bit misaligned from our own, that can cause catastrophic side effects down the line, because you're taking a tiny difference of opinion, or a difference in incentive or goal, and putting it to the extreme.

I mean, that's similar to how a lot of things are in society today: there are tiny differences that get pushed to the extreme. And I'd say that's still minor compared to what it'll be when AI and human agents have even slightly differing goals.

Mattimore: Yeah, and it's definitely worrisome that the most powerful people in the world — the tech billionaires — have doomsday plans: "I'm going to buy this house in New Zealand or Alaska in case everything goes to shit." They all talk about their plans of getting on their motorcycle and driving up to their cabin in Big Sur, where they have food stockpiled and EMP protection. And it's like, okay — these are the most powerful people in the world, and they're prepping for the scenario that they're effectively creating.

What does that say about the future of algorithms?  

Justin: The optimist in me would hope that this is just them hedging. Or maybe they're being a little overly alarmist about it.

Mattimore: Well, I think it shows that a lot of these trends aren't something humans really have much power to change.

It's like we're going to achieve greater-than-human-level intelligence in machines regardless of what any individual wants. So to a certain extent, it's not really within any person's control to pause or slow down our level of progress. And the people who are most knowledgeable in this space know what could happen if it goes the wrong way. That should put all of us on notice: we should be genuinely concerned, aware, and paying attention to this space. The most likely outcome, of course, is that it won't be as bad as what these people are prepping for. But just the fact that it's a possibility is pretty scary.

Justin: Yeah, so let's transition into the best case. I think the wrong way to approach this would be to explicitly tell an algorithm or an AI system what its goal should be; I think it should learn what the goal should be. With reinforcement learning, which is a branch of machine learning, a lot of times the system goes in with no fixed goal — learning what the goal should be is part of the task in the first place. And one of the nice things about today is that we have a lot of written, recorded information. What we're doing right now is data for a future AI system to learn about the motivations and wants of people. I'm sure at some point in the next 20 years it will dissect what we're saying right now to try to optimize what it's going to do for the human race, and it's going to do that for every written piece of information. Some will probably be weighted more heavily than others. Hopefully there's a way to discern that — I'm sure there would be a way to tell a credible person from a non-credible one.

So anyway, I think that if the AI system can learn from this mass of data that is everything digital, then it will be able to align its incentives with humans in general. And if it can do that — even if it has to act differently for different people — I think we can see a world of unimaginable prosperity.
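Justin's idea — inferring what people value from recorded behavior rather than hard-coding a goal — is roughly the intuition behind inverse reinforcement learning. Here's a deliberately crude sketch of that intuition; the features ("speed", "privacy") and the observed choices are invented, and real systems use far more sophisticated models:

```python
# Crude sketch of goal inference from observed choices: for each observed
# decision, count which feature the chosen option beat the rejected option
# on, and treat the most often "winning" feature as the inferred objective.

def infer_valued_feature(choices):
    """choices: list of (chosen, rejected) pairs of feature -> value dicts."""
    scores = {}
    for chosen, rejected in choices:
        for feature in chosen:
            if chosen[feature] > rejected[feature]:
                scores[feature] = scores.get(feature, 0) + 1
    return max(scores, key=scores.get)

# Invented data: each pair is (option the person picked, option they passed on).
observed = [
    ({"speed": 3, "privacy": 9}, {"speed": 8, "privacy": 2}),
    ({"speed": 4, "privacy": 7}, {"speed": 9, "privacy": 1}),
    ({"speed": 9, "privacy": 6}, {"speed": 2, "privacy": 5}),
]
print(infer_valued_feature(observed))  # -> "privacy"
```

The inferred goal falls out of the behavior record, not an explicit instruction — which is the property Justin is pointing at, for better and for worse.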

Mattimore: Yeah, the big concern I have is the black-box nature of these algorithms, because that can lead to a runaway train where we don't really know how these machines work, but they've become so valuable to society that we wouldn't dare turn them off. And that leads to a situation where we're not really in control of our own destiny anymore.

So in light of that, my best-case scenario includes the ability to query these AI systems and algorithms — to ask them why they made a certain decision and get a comprehensible response. So you could ask, "Siri, why did you choose this as my schedule for the day?" and get an answer, or, "Siri, does this system discriminate in any way against this group of people?" Just being able to ask those simple questions and get a response, or some sort of report, is going to be huge for our ability to actually steer the direction of the ship.

Because if we cannot understand what's inside the black box, and all we can do is feed in data, then we have very little control over what comes out the other end.
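The "queryable system" Mattimore describes is an active research area often called explainable AI. One minimal stand-in for the idea — with made-up feature names and weights, and far simpler than any real assistant — is a scorer that can report which inputs drove a particular decision:

```python
# Minimal sketch of an "explainable" decision: a linear scorer that can
# answer "why did you decide that?" by listing each feature's contribution.
# The features and weights below are invented for illustration.

class ExplainableScorer:
    def __init__(self, weights):
        self.weights = weights

    def score(self, features):
        """Overall decision score: weighted sum of the inputs."""
        return sum(self.weights[k] * v for k, v in features.items())

    def explain(self, features):
        """Per-feature contributions, sorted by absolute impact."""
        contribs = {k: self.weights[k] * v for k, v in features.items()}
        return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

scorer = ExplainableScorer({"meetings": -2.0, "free_hours": 1.5, "commute": -0.5})
day = {"meetings": 4, "free_hours": 2, "commute": 1}
print(scorer.score(day))       # -2.0*4 + 1.5*2 + -0.5*1 = -5.5
print(scorer.explain(day)[0])  # biggest factor: ("meetings", -8.0)
```

A linear model is trivially interpretable; the hard, open problem is getting this kind of answer out of deep networks, which is exactly the black-box worry.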

Justin: Yeah, so to play devil's advocate a little bit: do humans really know why they made certain decisions? Human brains are pretty black-boxy.

Mattimore: But we know what motivates humans from biology. There's also the will to meaning, which is what makes humans most unique. Have you ever heard of logotherapy?

Justin: No, I haven't. What is it?

Mattimore: It was pioneered by Viktor Frankl. It's a different kind of therapy, one that's not about building up your ego and your ego narrative.

It's all about your search for meaning, and it takes as fact that every person's fundamental goal is the will to create meaning in their own life. And this is really what's fundamentally human, as opposed to algorithmic: every person is trying to find their place in the universe, trying to find and create meaning in their own life. Logotherapy is basically the process of saying that it's not about you asking questions of life.

Like, "What's the meaning of life?" It's more that in every moment you live, life is asking you how you're going to create meaning by your response to whatever your current situation is, whatever the circumstances are. So basically: are you the type of person who is going to take on these hardships and this suffering and turn them into something noble, or are you the type of person who's just going to wallow in self-pity? Part of the therapy is to pretend you've already lived life once, that you've already made the same mistake you're about to make right now, and this is your second chance — you can actually respond in a way where you create your own meaning. So I think that gets at what it fundamentally means to be a human being: you're always taking in all these inputs and trying to create meaning for yourself and your place in the universe, trying to figure out why you do certain things. And that's not something fundamentally baked into algorithms.

Algorithms — like you said, a lot of the time we give them an objective and they do whatever it takes to reach that objective. But we're only now getting to the point where there's this shifting of objectives and shifting of questions, where algorithms get more at what you're fundamentally trying to achieve: maybe the assumptions, maybe the questions, weren't phrased perfectly.

So I'm going to change those assumptions, change those questions, and then I might even move the goalposts a little bit, because what humans are asking isn't really what they mean to be asking. And this is one of the greatest potentials of AI and algorithms, but it's also one of the greatest dangers, because we may not be able to choose the goalposts, essentially.

Justin: Yeah, that's interesting. Maybe right now what we understand as AI is more comparable to a reptile that doesn't really know anything; it just reacts to its environment. What we haven't done is create the AI counterpart to the prefrontal cortex — the latest stage of the human brain, which gives us self-awareness, planning, and all the stuff we were just talking about.

So maybe once we have a sophisticated enough method to encode these learning machines, we can get to a point where we have systems with that kind of self-awareness — though, like you said, it would be bad if such a system had its own goals and those goals diverged from ours.

Mattimore: As I look forward to what I think the best outcome would be for the U.S. in particular, I think it's going to be necessary for us to have some sort of overall score for how well people are doing, similar to China's social credit score. Because the alternative is that people get measured anyway, but without transparency: people don't know what the actual inputs are; it all happens behind closed doors.

So imagine we did have some sort of social credit score in the U.S., but we encoded our values into it — freedom of speech as one of those values, for instance. Imagine something similar to China's social credit system, but where you can say whatever the hell you want, so long as you're not spreading misinformation or hate speech.

That would be a fairly good outcome for society. If you also value things like free will and privacy to some extent, you encode those into the system too, and then you have a score that is transparent — people know why they're getting the score they get. You can then set up society so that it reinforces behaviors that society deems good and discourages behaviors it deems bad, with all of the key values represented in the system. That, to me, seems like the most successful long-term outcome for America's algorithmic governance.
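The transparency property Mattimore is after can be sketched in a few lines. This is purely hypothetical — the behavior names and weights are invented, and nothing here endorses any particular scoring scheme; the point is only that every component of the score is public and inspectable:

```python
# Purely hypothetical sketch of a *transparent* score: the value weights
# are published, and anyone can see exactly which behaviors moved their
# score and by how much. All names and numbers are invented.

VALUE_WEIGHTS = {                       # publicly published and amendable
    "community_service": +10,
    "spreading_misinformation": -25,
    "charitable_giving": +5,
}

def transparent_score(behavior_counts):
    """Return (total, breakdown) so the score is fully auditable."""
    breakdown = {b: VALUE_WEIGHTS[b] * n for b, n in behavior_counts.items()}
    return sum(breakdown.values()), breakdown

total, breakdown = transparent_score({"community_service": 2,
                                      "spreading_misinformation": 1})
print(total)      # 2*10 + 1*(-25) = -5
print(breakdown)  # every component of the score is visible
```

The contrast with the "behind closed doors" alternative is that the weights table itself is the published policy, so the score can be contested line by line.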

Justin: Yeah, I'd say that's definitely a good place to be. And in the long term, if everything works out, that'll bring happiness and prosperity to everybody.

Mattimore: Well, to everyone who's willing to at least change their ways.

Justin: I guess that's true. At some point there's probably going to be a situation where you're at a huge disadvantage if you don't accept some of the new changes, which I'm sure will be a lot more intrusive, because a lot more data is going to be tracked.

It just depends on the source of this AI system. Is it a decentralized thing running on everybody's computer, sharing the resources of everyone who has a phone or a computer or something else? Or is it run primarily by Google or Facebook?

Mattimore: I think because the U.S. has a big bent towards the private sector, it'll end up being a private company that provides the service — but they might partner with the U.S. government as the provider of that society-analysis service.

Justin: Yeah, I don't know — it's interesting to think about where that could lead.

Mattimore: I think one of the biggest checks on algorithmic decision-making has been the employees themselves. Google employees staged a big walkout when Google tried to build surveillance technology for the government.

So I think that's one of the big safeguards we have. And we should continue to empower people to really think about what the incentives should be — what the right outcomes and the right inputs should be — rather than just focusing on making the system as engaging as possible, which is basically what we're optimizing for right now. Like, how many comments are there? That's the number one engagement metric across most of these algorithms. If instead we focus on context, accuracy, and the best possible outcomes, and if we get to a point where algorithms can run most of the mundane processes while humans — engineers, philosophers, policymakers — can really just decide.

What is it that we want? If we have the ability to tweak the system and query it about why certain outputs are being generated, then that's a good formula for long-term success. Those are a lot of assumptions, but I think it's possible for us to get there.

Justin: Yeah. In terms of the most likely scenario, I'm more optimistic about this topic than some of the others we've talked about. With me, it tends to be that I'm more pessimistic about environmental things and more optimistic about technological things — which might be my own bias leaking in. But in the likely scenario, we have something similar to what you were talking about: a committee of people trying to figure out the best way to use AI as a governing system.

I do think we'll probably get it wrong a couple of times. But with all of the advancement, I think at some point we're going to be constrained by the technology — it won't progress as quickly as we might have thought — and that also gives us time to iterate, shut it down, and do what we need to do.

So with that sort of progression, and especially since the people making these systems tend to be more progressive and less power-hungry than some other sectors, I think we'll probably be in a better situation long-term. And I think they will figure something out, because there are so many people concerned about the future of AI.

I think people are going to have to listen to the public and to the experts on the subject.

Mattimore: Yeah, I agree with your assessment. I don't think all companies are going to be totally beneficial in the way they write their algorithms and in what they optimize for. But I also think people are much more cognizant now of these companies' privacy philosophies. Facebook has been called out time and time again. They just came out with a new transparency tool where you can actually see which companies have uploaded your personal information to custom audiences, and it's pretty scary — you find yourself asking, why does this Maserati dealership in Simi Valley have my email and phone number?

Like, how did that happen? But the trend is going more towards transparency, and especially if you look at Twitter, people are calling out companies every day for doing something that's not kosher from a privacy or discrimination perspective. So I think it's going to be survival of the most beneficial algorithms, which is a good sign.

And I think Google is probably the best positioned of all of them, just because of their philosophy of an open system. They're actively combating bunk data and discriminatory outcomes, and they have an ethics board that does seem to have some real power — it's not just ethics theater. A lot of people have said that Facebook's ethics board doesn't really have much power; it's just for PR purposes. So I think that's going to be a trend: some companies will have ethics boards, but the boards won't actually have decision-making power.

So I think the most likely outcome is a good outcome for humanity: in the end we're going to have a more efficient — a more equitable, I should say — society. And I think there's also going to be a trend towards focusing on the long tail and on discovery, rather than on the short head. If you think of Spotify, the vast majority of listens are for the top songs, for Ariana Grande and whoever the other top artists are. But something like Bandcamp is all about independent artists and long-tail discovery, and their discovery algorithm is not based on what's most listened to; it's based on giving you something new that you may not have discovered. They don't prioritize things that have been listened to a bunch of times. They make sure it's related to whatever you currently like, but they give no prominence to the number of listens.

So that algorithm is fundamentally different — it's all about discovery, curating, and connecting with individuals. And I think we're going to see a lot more of those types of businesses, especially as we get to a post-automation world where everyone could have their own book, their own podcast, their own little coffee shop, their own little whatever. It's not that people need all of these extra shops, but by highlighting the human-to-human connection, you can choose: okay, who am I going to buy my books from, my music from, or whatever else? I'm going to buy from the people I have actual emotional connections with. It'll become almost an advanced technological way of getting back to the original bartering system.

And yeah, we didn't talk at all about blockchain, but I think blockchain has some big potential to decentralize and really focus on circulating value rather than extracting value. That's one theme I kept coming back to again and again: focusing on people trading with one another and money moving through society, rather than people sucking up money like a vacuum cleaner and hoarding it. And then the stimulus plans are the government pumping more money into the people who already have these hoarded stockpiles, rather than doing something from the ground up like UBI — which gets hated on all the time, even though it so clearly has advantages compared to the trickle-down process that pretty much doesn't work. So I actually feel very optimistic about algorithms, more so than about many of our other topics.
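The popularity-blind discovery idea described above can be made concrete with a toy recommender. To be clear, this is not Bandcamp's actual algorithm — it's a sketch of the design principle, with an invented catalog: rank candidates purely by overlap with what the listener already likes, and ignore play counts entirely.

```python
# Sketch of a popularity-blind recommender: candidates are ranked only by
# tag overlap with the listener's tastes; play counts carry zero weight.
# The catalog below is invented for illustration.

def recommend(liked_tags, catalog):
    """catalog: list of (artist, tags, play_count). Play count is ignored."""
    def overlap(item):
        _, tags, _ = item
        return len(liked_tags & tags)
    return max(catalog, key=overlap)[0]

catalog = [
    ("MegaPopStar",    {"pop", "dance"},             90_000_000),
    ("TinyFolkDuo",    {"folk", "acoustic", "lofi"},        412),
    ("GarageSynthKid", {"synth", "lofi"},                 1_030),
]
print(recommend({"lofi", "acoustic"}, catalog))
# -> "TinyFolkDuo": two shared tags win, despite only 412 plays
```

A Spotify-style short-head ranker would differ in exactly one place: the key would multiply in (or simply be) the play count, which is why the choice of ranking key encodes the whole extraction-versus-discovery trade-off.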

Justin: Yeah, I guess there might even be room for a part two, because we focused a lot on artificial intelligence in this one. It would be good to talk about some of the other prominent algorithms, like the blockchain-related stuff you just brought up — that's maybe even a podcast in and of itself, so we should for sure add it.

Mattimore: I mean, we could do a whole episode on the future of finance and money, because money is like the operating system of society — we don't even realize it's the operating system, but it has so many implications for everything else we do. And imagine we replace this operating system with an improved update that focuses on circulation rather than value extraction.

I mean, it can just work wonders for the economy and society and inequality and everything. 

Justin: Yeah, that'd be good. And of course, all of that will be run by algorithms. Just to make one final point: algorithms are fundamental to everything. Every piece of software you use is fundamentally an algorithm, and everything you do in real life is an algorithm.

So if you can improve your own algorithms — your own decision-making processes in real life — that could lead to an improved long-term outcome for you individually. Like making a rule, an algorithm, not to drink Coke, or to only drink it one day a week if you're a Coke drinker.

Mattimore: Look at the benefits of meditation — it's really about improving your algorithmic decision-making capabilities. If someone calls me a doofus, am I going to punch him in the nose, get sued, go to jail, and suffer all those cascading negative effects? Or am I going to take a deep breath and just think kind thoughts about him? So yeah, I agree: algorithms pervade everything, and I look forward to the continued advancement of not only computer algorithms but also human decision-making. Awesome. Well, thank you everyone for listening. This has been the future of algorithms. Thank you for listening!


Suggest a Topic on our site! HenceTheFuture.com


Your Friends,

Mattimore and Justin
