What makes the human mind unique? How do we know there’s a future, and how do we recall the past? In this episode of This Anthro Life, Byron Reese, serial entrepreneur, technologist, and author of “Stories, Dice, and Rocks That Think: How Humans Learned to See the Future--and Shape It,” discusses these questions and more with host Adam Gamwell. Together, Byron and Adam explore the three leaps in human history that made us what we are today and how those leaps changed how we think about the future, the past, and everything in between.
[03:16] The inception of “Stories, Dice, and Rocks That Think: How Humans Learned to See the Future--and Shape It”
[05:23] Homo erectus and the Acheulean hand axe
[06:38] How the Acheulean hand axe is a genetic object, not a cultural one
[08:27] The awakening that ancient humans had undergone
[09:27] Language as a means to conceptualize the future and gain knowledge of the past
[13:02] The four things that all languages have
[16:01] How humans’ group action became more than just the sum of its parts
[18:57] A superorganism named Agora as a metaphor for how people working together can get more done
[24:06] How probability theory helps us understand how we imagine the future
[24:37] The probability problem
[28:01] How there is predictability in randomness
[34:33] The human body as a superorganism
[36:30] The problem with data in artificial intelligence
[41:48] Galton’s regression to the mean and eugenics as a cautionary tale
[44:59] Eternal vigilance as the price of current and future technological advancements
[47:04] Why humans are not machines
[50:05] The 21st purpose of telling stories, according to Byron
[52:32] Closing statements
Links and Resources:
[00:00:00] Adam Gamwell: Hello and welcome to This Anthro Life. I'm your host, Adam Gamwell. Good to be back with you. The question I want to inquire about today is what makes the human mind so unique and how did we get this way?
[00:00:15] To help dig into this question, I'm super excited to welcome back to the podcast Byron Reese. Byron is an entrepreneur, a futurist, an author, a speaker, and is a recognized authority on AI or artificial intelligence. Bloomberg Business Week has also credited Byron with having quietly pioneered a new breed of media company. You may be familiar with some of that media if you ever checked out sites like Gigaom.
[00:00:36] Now, Byron first joined us back in 2020 to talk about AI and robotics and what that all means for the future of work and life. You can check out the episode linked in the show notes below. It's on The Fourth Age.
[00:00:47] Now, he's back with a fascinating exploration of how humans learned to see the future and ultimately shape it. The book is aptly titled Stories, Dice, and Rocks That Think. But this is not just a book about the future. Reese argues that our unique status compared to other species on this planet is due to the emergence of our ability to imagine the future and to recall the past. Basically, this takes us out of the ever-present moment that every other species finds itself confined to. But he goes further. One of the very compelling analogies he uses to organize his thinking is envisioning human history as the development of a societal superorganism that he calls Agora, which is the Greek word for marketplace. Basically, taking what we think about with genetics, information passed down through cells and reproduction, and thinking about that culturally: how do we pass down ideas through things like stories?
[00:01:39] He organizes the book around three acts. The first explores how ancient humans underwent what we might call an awakening, developing the cognitive ability to mentally time travel using language, and ultimately learning to tell stories that help us think through the future, the past, and what else could be. The second act picks up in 17th century France, where a mathematical framework known as probability theory is born. This is the science of seeing into the future that we used to build the modern world. Then act three picks up with the invention of the computer chip, where humanity creates machines that gaze into the future with even more precision. Where probability theory required a bit of manual mathematics to determine the likelihood of an event, we then use computers to do this much faster and at a bigger scale. What this ultimately did, in essence, was overcome the limitations of our brains.
[00:02:29] So our conversation will range across these three different acts of the book, but it will also help us think through some of the deeper issues: what this means for humanity, how we understand the cultural transmission of ideas across time and space, and what some of the implications of our obsession with wanting to know the future are for how we live going forward. Suffice to say, it's a super fascinating conversation and I can't wait to share it with you, so let's jump to it.
[00:02:58] I mean, to kick off, I just want to say thanks for hopping on with me. It's really a great pleasure to be able to talk with you again, and I really enjoy your work. You know, coming from The Fourth Age, and kind of seeing the AI podcast that you've worked on, and then now into this idea of understanding what role understanding the future has played in who we are as people.
[00:03:16] I'd love to kind of think about this in terms of, you know, maybe even tracing your own interest and trajectory of how did we go from kind of The Fourth Age and areas around futurism and AI into this space in terms of putting humanity deeper into this question of what role the future plays in who we are. Like, how did we get from Fourth Age over to here, to Stories, Dice, and Rocks That Think?
[00:03:36] Byron Reese: Well, The Fourth Age was a book I wrote about AI and robots. And it's a philosophy book about them. I'm not supposed to say that; calling it philosophy is supposed to be death to sales. But that's what it is. I mean, trying to figure out what are these things. Are they entities? What are we? Are we machines? Are they machines? What are their limits? What are our limits? And trying to think all that through and run different scenarios, like, well, what if blank? And what if this? Because nobody knows, you know, the answers to a lot of the questions we're posing.
[00:04:09] So this book came from a different place. It started at a different place. It actually started with a visit to my eye doctor. And we were just talking about stuff. He asked me. He said, "You know what I've always wondered is like, what's the next species that's coming up behind us? What's gonna be the second super intelligent thing on the planet?" And I was like, "Oh. Well, that's a great question 'cause there's no distant second. There's nothing that's even close to us." And then I'm like, well, why is that?
[00:04:40] And even in the introduction, it's a, you know, where are the Bronze Age beavers? Where are the Iron Age iguanas? Where are the pre-industrial prairie dogs? And it's like they aren't, they don't exist. You know, if dolphins are really so smart, like I'm not expecting them to have made the internet but maybe telegraphs by now or a postal system or something. But they don't have anything. And beavers, they just build that same dam they've been building as long as we know. I mean, we have dams that are over a thousand years old. But then we have evidence where you can see even older ones. And it's the same dams. They don't seem to be adding, making them out of cement now. They haven't tried to generate electricity with them or anything like that.
[00:05:23] And so I really started at that spot. And I said, how are we different? So when you ask that question, where you start is with ancient people, you know? Really ancient. Homo erectus is where I started, because I think that thing at the beginning about the Acheulean hand axe is just fascinating. H. erectus is this creature that lived for 1.6 million years, which means it was a successful species. It was all over three continents: Africa, Asia, and Europe. And it had one tool called an Acheulean hand axe. It looks like an arrowhead in the shape of a teardrop, a big one. And they would use them, you know, they would have an Acheulean hand axe and they would use it for all these different things.
[00:06:03] But there's a very strange thing about those hand axes, and that is that they don't ever change. That's 80,000 generations of H. erectus passing them down, and they never changed. And what's even more fascinating is it doesn't matter where they were. The ones that wandered to the Himalayas? Same as the ones that stayed in Saharan Africa. And, you know, if every H. erectus had just copied their parents, then it would've drifted like the telephone game. But if I held up two of these things that were made a million years apart and asked which is older, it's hard to tell.
[00:06:38] I came to learn that there's a good argument to be made that they didn't know what they were building when they were making that hand axe. That it was not a piece of technology and it was not a cultural object. It was just a genetic object, the way the beaver makes the same dam. If you let a bunch of starlings go all over those three continents, they're all gonna build the same nests. Like that's what's hard-coded in them. And I think that's what H. erectus was doing.
[00:07:01] And so then, I was like, okay, well, something happened. That's not us. Like what did happen? And that took me to the cave at Chauvet, and the cave art, and just the suddenness with which all this stuff appeared, fully formed, out of nowhere. You know, our oldest cave art isn't stick figures that eventually get a triangle added for a dress. No. The oldest cave art we have is beautiful stuff, stuff I'd frame and put on my walls. The oldest art just appears, as if aliens settled on this planet or something. That's not what I think happened. But I'm saying it was that stark.
[00:07:38] But what else was shocking is just how much technology they had. Because if you were a cave painter in France 40,000 years ago and you needed black, well, you had charcoal in abundance. I mean, that's every dead fire you can pick up. But they didn't want that. That wasn't black enough for them. They took a mineral called hausmannite. And they would heat it to 1,600 degrees, very hard to do with a campfire, and it would turn a really dark black. And the punchline is the closest source of it was 140 miles away. So, you know, we had black. But it wasn't black enough. I'd rather hike 280 miles round trip to get some of this mineral just to make the black that much better. And they had to do all these other things. They had to build scaffolding. They used fat to help the pigment cling to the walls, and talc as an extender.
[00:08:27] And so then all of a sudden, you're like, okay, H. erectus, that wasn't us. That was a beaver kind of thing. Then where did we come from and why was it so sudden? And that is something that goes by a lot of names. You know, Jared Diamond writes about it. Harari writes about it. It's kind of an awakening, a great leap forward. But it's theorized that somebody had a beneficial mutation that gave them a new mental ability. And that mental ability was speech. It wasn't that we got speech so we could communicate; that came much later. We got speech so we could think. And just imagine if you had no concept of speech, how hard thinking would be. There's a beautiful quote in the book from Helen Keller who talks about what life was like before her teacher came. And, you know, she says, "I didn't even realize I was a thing that was different than the universe. I didn't realize I was an entity." And only when she learned, essentially, speech, that's when she said consciousness was born.
[00:09:27] So you can see how the book starts to get going at that point. What I think came along with it, 'cause then you have to say, well, we have language and animals don't have language. Boy, I wrote a lot about that in there 'cause there are people who will push back on that and I respect that. I think even if you believe animals have some form of language, I think everybody can agree it's, you know, not like ours. So, you know, we got language, we learned to think. And I think what that meant is we could conceptualize the future and we had knowledge of the past.
[00:10:01] And the future and the past are two things that don't really exist. They're just made-up things. There's no such thing as the future. It's just this moment. And animals don't know there's a future. And, again, I put a lot in there that said, you know, you can test that a lot of ways. There may be a few animals who can conceive of a few hours in the future, maybe. But then, you know, don't try to sell them a 401(k) or something. Like they don't have that kind of time horizon.
[00:10:29] So now, you see that humans are like, whoa, not only can we think in structure, we can imagine the future and we can think of the past. And so what we did is we started telling ourselves stories. And, again, that isn't "Once upon a time" kind of stories. It's how we think from day to day. We run scenarios; that might be a better way to think about it. Like you're in traffic and you're like, okay, well, I gotta get out of this. And you think, I could try to cut in front of that car and get on that exit. And then I could do that, or I could do this, or I could do that. So you're telling yourself these stories. And to power those stories, you're recalling specific events in the past, which animals don't seem to remember either. They have procedural memory. You can teach a dog to sit and the dog will know what "sit" means. But the dog doesn't remember that last Tuesday, he told me to sit and I did and I got a reward.
[00:11:25] Now, all of a sudden, you get some picture of a people with this new way of thinking about the future and the past. And boy, if you imagine there are two people, one who can imagine the future and picture alternate futures, and one who doesn't know the future exists, who do you think is going to have mastery over their life and their environment? It would be the person who can plan for the future.
[00:11:50] Adam Gamwell: Yeah. I think that's such an interesting timeline framework, too, in terms of what made us us, in essence. And even this idea, right? You know, you drew on Diamond and Harari, too, these folks that have helped us think about the grand narrative of the emergence of modern humans, you know? It's funny 'cause we sometimes say "behaviorally modern." But in your rendering, it's almost like we're linguistically modern, right? We're modern because we have language.
[00:12:13] But it's interesting to think about this in terms of, again, the Helen Keller quote, which really stood out to me in the book, too, where she describes that she became conscious, almost, when she got language as a way to think. And so even this idea, I thought, was really provocative and worth ruminating on: that language is how we became able to think, and that thinking ultimately means I can think forward and backward in time in a way that we see other animals just don't.
[00:12:38] I mean, this, I thought, was, again, a really provocative idea in terms of what this then allows us to do. It's almost a chicken-and-egg scenario: did we already have episodic memory, or did language make that come out, right? Was there something that shifted in the way that we think about memories? I know that's kind of a neurological question. But even this role in terms of what that will do to the way that we conceive of ourselves and how our brains work.
[00:13:02] Byron Reese: All languages have these four things, which is a really amazing thing to think about: if you found one of the uncontacted peoples that are still around, you would find they have a language, and it's gonna have these. There are languages that only have 350 words in the whole lexicon. That's it. And they have these four things. And one of them is displacement, the ability to use language to talk about other times and other places. And I'm guessing here, I'm guessing, but it would seem like our ability to think about other times and places came part and parcel with that. But like you said, it's a chicken-and-egg thing. If I were a betting man, though, I would put my money there.
[00:13:42] And I love thinking about the fact that this happened maybe to one person — one person — and boy, they didn't have anybody to talk to. I remember when they came out with those video phones, like in Sharper Image in 1988 or something. And I was like, well, why would you buy one if you don't know anybody else who has one? Like you just sit around waiting for somebody. And that's what that person did. Like they were like, man, I wish I had somebody to talk to. But nobody else knows how to talk. And so three generations later, their little band of 120 people, now they all had that favorable mutation, and then they could all chatter amongst themselves.
[00:14:16] But then, what chance would any other group of humans have had? And not even because they were hostile towards them. It's just, how do you outcompete the people who can communicate, the people who can think about the future, the people who can imagine scenarios? Like how do you outcompete them? And there's been really interesting work on how quickly favorable mutations can sweep through the planet; it can happen quickly. And that may be what gives it the illusion of happening all over the world at once. Maybe it did. We don't know how that's possible. But maybe it did.
[00:14:49] Adam Gamwell: Yeah. I mean, we have the advantage that we can look back and say what happened 40,000, 60,000 years ago, right? The florescence of cave art is a great example: it happens in multiple places that are disconnected, seemingly at the same time. And I think it's great 'cause it could be at the same time. But also, we're talking about a good time horizon there. So, you know, to us, as we project backwards and tell that story, it feels like it's collapsed in evolutionary time. We actually don't know that, 'cause you're right, it may have actually just been one band of people that then spread generation by generation over time.
[00:15:17] I liked what you said a little bit ago in terms of the idea of creating a cultural object versus a genetic object. And, you know, thinking about that with language, I think to your point, too, it's like that there was a genetic shift that gave us language that made it kind of turn on in our brains or however that may have taken place. But it's interesting from that genetic shift, then we had a bunch of cultural objects that came out of that, too. So it's interesting to kind of think about the relationship between those and what that will do.
[00:15:42] Byron Reese: But what we ended up doing, of course, is eventually there were other people that could talk. And we learned to vocalize these stories. Again, a lot of this is guessing. But you could imagine two people are now talking and they're like, we could go up behind that mountain. Or we could do this, we could do that. And they're conversing.
[00:16:01] Imagine there's five people who decide they want a mammoth steak for dinner, okay? And so they say, we're gonna go take down a mammoth. But we need a plan. And so, they talk amongst themselves and say, okay, here's the plan, you know? You two, hide in trees. And you two, and I'll do this, and we'll get a mammoth that way. And so, the mammoth is just minding his own business. And these five people attack the mammoth. And not only are they all kind of doing different things, they're yelling instructions to each other. "No, no, no. Go over there. Go over there. Go." Now, if you're the mammoth, you're not fighting five humans anymore. You're fighting one creature with one mind. 10 arms and 10 legs and five heads, but it's one creature. It's a superorganism. It's birthed the beginnings of that superorganism. And I think that's just like a crazy idea. That's how our group action became more than just the sum of our parts.
[00:17:00] I will say one other quick thing, which is another real advantage that this speech gave us, or even thinking in speech. For however many billions of years that we've had life on this planet, DNA-based life, DNA's been the only place you could write anything down. And if you wanted to, say, remember not to eat those berries, it's probably gonna take 10,000 years to get that written in. Then it'll work, great. But then one day, humans get this new ability and our genetic code gets augmented by what we store up here. And then I can just tell you, don't eat the green berries, they'll make you sick. And that spreads in five minutes. And that's a huge thing.
[00:17:47] And then what happened is we learned to write, much later, and that writing became our species-wide genome. We can do all these things as a species that no individual can do. We can make a smartphone whereas no one person could make a smartphone. But the species knows how to do it 'cause it's got this expansive genome that's mostly junk, just like your DNA is mostly, you know, episodes of Gilligan's Island or whatever. But it's got some good stuff in it that is useful.
[00:18:20] So what happened is we started telling stories, "Once upon a time" stories. And that superorganism, you know, the five people that attacked that mammoth, that unitary form of thinking, I call it Agora. Agora is an old Greek word for a marketplace, kind of in the middle of the town, where it's noisy and all the business is getting transacted. It's just where it's all at. And that's what Agora is. And what happens, I think, is that over time, as Agora grows, the kinds of stories we tell each other change dramatically.
[00:18:57] So imagine there's a superorganism. You can just think of it as a metaphor for how people working together can get more done, or it can be as literal as Agora being a living thing that lives and dies. Well, you've gotta imagine back when Agora woke up, and I think it emerged 50,000, 60,000 years ago, what would be the first story Agora would tell? I mean, the most striking thing in that world, I think, is the night sky. Like every night, you get that canopy, you know, of 3,500 stars. You can see that big band of the Milky Way. And you would lay there and look at that. And you couldn't help but make up stories.
[00:19:38] Now, a very interesting thing: Ursa Major — the Big Dipper — is around the world regarded as a bear. And you know kind of what it looks like. It's the four stars that make kind of a square and then it's got the tail.
[00:19:49] Adam Gamwell: But a very long tail if it was a bear, right?
[00:19:52] Byron Reese: Evidently. Like in a lot of places, they're like, that's a bear. And it's like, well, what are those three stars? That's its long tail. And it's like, bears don't have long tails. And it's like, yeah, well, that's what it is. It's a bear with a long tail. Now, there are these people in Siberia, and here we're going back 15,000 years: the Ket people. And they saw bears all the time. And they're like, that's no bear. Well, the four stars are a bear. But the three other ones? That's three hunters chasing the bear. And if you squint really hard next to the second hunter, you see a little faint star right next to it. And you could see it then, of course, with no light pollution. And they said, oh, that? That's a bird that knows where the bear is and is guiding the hunters there. A helper bird.
[00:20:39] So that's the Ket, 15,000 years ago. Then this ice bridge forms, maybe it's 20,000 years ago, like we're way back, that connects Siberia and Alaska. And we think only about 70 people went across it. I always kind of imagined it like an exodus, but it looks like just very few, because of the genetic diversity of the people that were here in 1491. You can tell it was very few; the genetic diversity is much narrower. And in Western Europe, they had this tradition that that's a bear with a long tail.
[00:21:14] Now, here's kind of the big finish. Multiple Native American languages share words with Ket. So it's not a big leap to say those would've been the people who came over, the ones who were living there. We have these cognates; the words are the same. And in Native American tradition, they identify that as, oh, that? That's a bear being chased by three hunters, and that middle star is a helper animal helping them along. So we can say with pretty good assurance that that story's 20,000 years old, and it came over when they walked across the ice bridge.
[00:21:50] And that's just one of the earliest stories we tell. And then later, you know, we moved into towns. We started seeing these things called strangers, and how do you deal with strangers? And you need Aesop's Fables, and you need all that kind of stuff. And that's how the stories changed over time as Agora grew and matured.
[00:22:12] Adam Gamwell: We're gonna take a quick break. Just wanted to let you know that we're running ads to support the show now. We'll be right back.
[00:22:23] With the rise of individualism that came with like urbanity and came with kind of Western European and Greek thinking and then, beyond that, into contemporary societies, this idea like it's, it feels uncomfortable, I think, perhaps to us as individualist thinkers here in the western part of the world, that we may be part of a superorganism, right? That we may not actually be so individual in the same way that we don't think about cells themselves as, you know, being particularly individually important. But they make up us, like our bodies, right? And then all the bacteria that makes us up.
[00:22:52] And so, I think it's a really compelling and nice idea that culture is Agora's genetics, you know? It's what's passed down from generation to generation, through stories, with Agora as this kind of arena helping us think through it. I think it's super fascinating and a really compelling story that we tell ourselves. But then also, we can think about what it is that we're actually doing in the world, right? So even this idea of the implications of the stories that we tell ourselves is something else that really stuck with me thinking through this. This is also what happens in act two, where you talk about the rise of probability theory, and then into act three, the rise of computing.
[00:23:26] So I'd love to think a little bit about these areas, too, in terms of this major shift that you talk about that happens in the 17th century, where it was both a new form of mathematics but also, and it was interesting that you spend some time on this, a new conceptual system we had to invent to make reasonable sense of how we predict the future. It was super fascinating to realize why we were not able to figure this out before. This was totally new to me: how hard it actually was for us to build the math and get it right, in terms of how we predict what might come next in a way that feels buttressed by science or mathematics.
[00:24:06] So tell me a little bit like what happened in this space? Like how did this come up? And like why was it such a challenge to put together this predictive system in the first place?
[00:24:14] Byron Reese: Well, you can imagine you come out of 50,000, 60,000 years ago with language. You can imagine the future. You can think, I can do that, or that, or that. But you can't predict it. Or you can't predict it reliably. Until at some point, we were like, wow. It'd be nice if we had a science that could not just allow us to conceptualize it, but actually try to figure out what's gonna happen.
[00:24:37] And this is in 1652. There was this math problem. Now, this math problem had been around for a long time. We know about it for at least a century. People talk about it. People put out different solutions for it. But nobody got it right. And the thing about it is this is a crazy, simple math problem. I think I put in there like if your middle schooler gets this right, they don't even get like a happy face sticker. Like they get nothing. Like this is a basic, simple problem.
[00:25:04] I'll set the problem up. You've got two people, Harry and Tom, H and T. And they decide to play this game where they're gonna flip a coin five times. And every time heads comes up, Harry gets a point. And every time tails comes up, Tom gets a point. They flip it three times and the score is Harry (heads) 2, Tom (tails) 1. And then they have to stop the game. You can imagine they lost the coin, you know? It fell in a crack in the floor or something. They didn't have a coin anymore. And so the question is: how do you split the pot in a fair way?
[00:25:39] And so for a hundred years, people are like, well, you know, Harry had twice as many points. He should get two parts of the pot and Tom should get one. And then you're like, well, that doesn't quite work. And then they're like, well, how about this? Harry only needs one point and Tom needs two. Same thing. And it's like, well, no, 'cause if they were playing to a million and the score was 999,999 versus 999,998, you wouldn't say, wow, you know, we're gonna split it that way.
[00:26:11] And so the real way you would solve it — there's a number of ways — but the real way, just in casual conversation, you would say, well, the only way Tom is gonna win is if the next two tosses are both tails. And then you say, well, that next toss is a 50-50 chance of being tails. And then the next toss after that is a 50-50 chance. So you can see there's four different ways those two tosses could have gone. They could be heads-heads, heads-tails, tails-heads, or tails-tails. And in only one of those does Tom win. So Harry gets three-quarters of the pot, 'cause he wins in three out of four cases, and Tom gets one quarter.
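The split Byron describes can be checked by brute-force enumeration of the two remaining tosses. Here's a minimal Python sketch (the variable names are mine, not from the episode):

```python
from fractions import Fraction
from itertools import product

# Score when the game stops: Harry (heads) has 2 points, Tom (tails) has 1.
# Two of the five flips remain; enumerate every way they could land.
outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT

harry_wins = 0
for flips in outcomes:
    tom_final = 1 + flips.count("T")
    # Tom only overtakes Harry if both remaining flips come up tails.
    if tom_final < 3:
        harry_wins += 1

harry_share = Fraction(harry_wins, len(outcomes))
print(harry_share)  # 3/4: Harry wins in three of the four equally likely futures
```

Each of the four continuations is equally likely, so counting winning continuations is the same as the 50-50-times-50-50 argument in the conversation.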
[00:26:45] So nobody could solve this for hundreds of years. These were smart people. And you gotta, at some point, kind of forgive them because, you know, they didn't even have the equal sign yet. We used to ask, why does the future happen the way it happens? And we had all these different theories. There are kind of four of them; I'll just enumerate them. One is determinism. The future can only happen one way, right? A leads to B leads to C leads to D. It's just written that way; it can only happen that way. Another is destiny, slightly different: somebody decided before you were born that this was gonna happen, and you couldn't escape it. Another is free will. People just choose, and there's no way to know what people are gonna choose.
[00:27:31] Another would be something called synchronicity, where things are all connected. And so, if you want to know what's gonna happen here, you've gotta study this over there. That's why they would, you know, read spots on livers. It wasn't a scam: we've found the textbooks, the marked-up things, the tests they took about how to read these spots. And it's true in a way. Like if I raise my hand, as Noam Chomsky once said, it causes the moon to move. That's true. But why would it? It's a crazy idea.
[00:28:01] Any case, nobody thought that the future was random. That different things could happen and that you associate probability with those. And that was like the big "whoa." And so, once we knew that though, we still were a ways away because we had a lot of other misunderstandings with the future. But one of the big surprises of these five — I'm only gonna go through one of them — is that there's predictability in randomness.
[00:28:27] You know, if somebody had asked me, if you flip a coin a thousand times, how many times is it gonna be heads? I've been trained to say 500. I've been trained to think 500. But if I didn't know that, and I can tell you what I would say, it'd be like, no way to know. You might get 200, and then you might get 900, then you might get exactly 500. Who knows? But that's not true. The odds that you'll get under 400 or more than 600 are one in 16 billion. Like it's not gonna happen. And think about that, like how much predictability there is, therefore, in randomness.
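That tail probability can be computed exactly with a binomial sum. A quick sketch (the exact figure depends on whether the endpoints count, so take the order of magnitude, rather than the precise "one in 16 billion," as the point):

```python
from math import comb

# Exact probability of fewer than 400 or more than 600 heads
# in 1,000 tosses of a fair coin.
n = 1000
total = 2 ** n
low = sum(comb(n, k) for k in range(400))           # 0..399 heads
high = sum(comb(n, k) for k in range(601, n + 1))   # 601..1000 heads
p_tail = (low + high) / total

print(f"P(<400 or >600 heads) = {p_tail:.2e}, about 1 in {1 / p_tail:,.0f}")
```

Either way the event is vanishingly unlikely, which is the "predictability in randomness" being described: a thousand individually unpredictable tosses produce an almost perfectly predictable total.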
[00:29:00] So we wanted to predict the future. We realized we needed a science to predict it. We realized our reality was really math. Like that's kind of what goes on underneath everything. And we learned how to master it. And we conceived of this idea of probability. So we come to 1654, and Blaise Pascal and Fermat finally solve the problem. And they build the math to do it. And then within 10 years, everybody knows how to — I mean, you got books on statistics now. Like it was the "Aha" moment that all of a sudden we could predict all these other things that we hadn't been able to predict before.
[00:29:34] And we learned all these things about the world. One of my favorite is governments used to sell annuities to raise money. So if the government needed money, it would make a deal with you. It would say, you give us a thousand dollars and we'll give you a hundred dollars a year for the rest of your life. And that's how they would raise that money. Now, if you had two people come in wanting that deal and one's 20 and one's 80 years old, do you really charge them both the same?
[00:30:01] Adam Gamwell: Doesn't sound smart.
[00:30:03] Byron Reese: No. I mean, you're gonna pay that fella a hundred bucks a year for the next 80 years or something. That other guy may not even make it out the door. Like he gives you his thousand dollars and maybe you make one payment. But you see, we know that. But they didn't. They thought your odds of dying every year were the same no matter what your age was. If that were true, then that works fine.
[00:30:27] And you can kind of forgive them for thinking — you can and you can't. You can forgive them because we're not used to premature death. To them, it was commonplace. Like, you know, a mule's gonna kick somebody in the head and kill them. They may be 20, they may be 40, they may be 80. Don't know. A mule could kick anybody. The odds of the 20-year-old, the 40-year-old, and the 80-year-old dying are all the same. Like you can forgive them for the capriciousness of death that they experienced.
[00:30:51] The reason you can't forgive them though is if you spent two hours walking around a cemetery, just writing down the ages of everybody who is buried in it, you could build a mortality table that shows you that actually, as you get older, your chances of dying go way up.
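The cemetery exercise can be sketched directly. Everything here is hypothetical — the rising-hazard model and its numbers are my stand-ins for real burial records — but it shows how a list of ages at death yields a mortality table whose death rates climb with age:

```python
import random
from collections import Counter

random.seed(7)

# Toy assumption: the annual chance of dying starts small and grows
# with age, roughly Gompertz-style.
def age_at_death():
    age = 0
    while True:
        if random.random() < 0.002 * (1.09 ** age):
            return age
        age += 1

# "Walk the cemetery": record 2,000 ages at death.
ages = [age_at_death() for _ in range(2000)]

# Crude mortality table: of the people who reached each 20-year band,
# what fraction died during it?
deaths_by_band = Counter(age // 20 * 20 for age in ages)
alive, rates = len(ages), []
for band in sorted(deaths_by_band):
    rate = deaths_by_band[band] / alive
    rates.append(rate)
    print(f"ages {band:2d}-{band + 19}: {rate:5.1%} of survivors die in this band")
    alive -= deaths_by_band[band]
```

The per-band death rates come out sharply increasing, which is exactly what the annuity sellers' flat-hazard assumption got wrong.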
[00:31:08] And so that kind of stuff started happening, so they started pricing these things differently. Then the strangest thing happens. If the listener can imagine that normal curve, that bell curve, what we found is that all kinds of things in the physical reality of our world behave that same way. And that's a mystery, because that bouncing is randomness. And if you find out that something like, oh, I don't know, suicides or murders follows that normal curve, then you have to say, well, it's random. So at its core, it's random.
[00:31:42] And there's some evidence for that. Like in the United States, a few years ago, 166 people died at their workplace being electrocuted. And the next year it was 160. How does that happen? It's kind of like the coin tossing, wouldn't you think? Like, well, some years, nobody's gonna electrocute themselves. And some years, it's gonna be a lot. And it's like, no. The only way you can electrocute yourself is to bounce. You had that little BB bounce 10 times to the left. Somebody had to leave the power on. You had to not test it. You had to touch it. And like each one of those is a bounce to the left. And then that's gonna happen a very predictable number of times. And it turns out it does. And same thing with automotive deaths. It takes a certain number of bounces to the left in order for it to happen.
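The year-to-year stability of rare accident counts is what a Poisson model predicts. A sketch (the mean of 163 is just an illustrative number near Byron's figures, and the model is my framing, not his):

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's method: count uniform draws until their running
    product falls below e^(-lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

# Ten simulated "years" of a rare accident with an expected count of 163.
# Every single event is pure chance, yet the yearly totals barely move:
# the standard deviation is only sqrt(163), about 13.
years = [poisson(163) for _ in range(10)]
print(years)
```

Because each death requires its own independent chain of unlucky "bounces," the totals cluster tightly around the mean, which is why 166 one year and 160 the next is not a coincidence at all.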
[00:32:30] So all of a sudden, we started finding all this kind of stuff in our world. And we got a lot of predictability about the future. And so then you get the financial system, and then the insurance industry springs up, and all these other things get developed 'cause we have the math underneath it. And that gets us to 1952, which is when act three starts.
[00:32:51] And so in 1952, we turned on our first transistor computer. And we decided we wanted to build machines, rocks that think, or machines that could predict the future for us. So we are not having to, you know, slide rule it and paper it and all that. I think I mentioned somebody who spent their whole life computing pi to like 30 digits. And it's on their tombstone. And, you know, now you can compute it to a million digits in three seconds. And so we said we want machines that can do this predicting work better than us. And that's what we have. Of course, we have now forecasting machines that we don't even understand anymore. And then that brings us to the third section.
[00:33:30] Adam Gamwell: Yeah. That's really great, too. And it is interesting to think about that in terms of when we are following our desire to know the future and thinking about, you know, how we best do that. Are teachers like culturally genetic abnormalities in the Agora, in the sense that they're more likely to pass on neurons to other beings? Are there superspreaders?
[00:33:54] Byron Reese: Maybe. You know, one thing that — since I had been writing, I have discovered I have a deep love for research papers. And the reason is because I have learned that everything you can imagine is somebody's passion. And they write a 14-page research paper about why crickets can do whatever they do or I don't know what. And I love reading them because they're always long because it's like that's their thing that they're like so excited about. And so I think it's more like a division of labor, a specialization thing, like different parts of Agora learn these different things and kind of share them with other people.
[00:34:33] Adam Gamwell: That makes, that makes good sense. I know I'm just saying this, but it's like, it really sticks in my head the idea of like a cell division of labor too in terms of if I'm using my superorganism metaphor, right, for what's physically happening in a body or an organic being and how we're doing this too. Like cells also, you know, they divide labor into specific — some fight diseases. Some of them carry oxygen. Some of them carry nutrients, right? I think it's a very apt idea. Yeah.
[00:34:55] Byron Reese: I mean, the crazy thing about it is all those cells are doing all those different things. But they all have the same DNA. And somehow, a liver cell says, I'm just gonna use this part of the DNA and make this protein. And a different cell says, I'm just gonna use this other part. So like, the cell has to kind of know where it is in the body. And it uses a whole different part of the genome that does something else. And it's crazy. It's a big mystery. It's a big mystery. That's what the next book's about. What is life?
[00:35:25] Adam Gamwell: What is life? Oh, man. Okay, cool. Okay, well I wanna know about this, too, 'cause — so is it, does it begin with the superorganism? 'Cause you mentioned the five people planning to kill a mammoth. I guess, is life a superorganism? Is that —
[00:35:35] Byron Reese: That's in this book. The next book starts with the cell. I mean, if you're gonna talk about life, you start with the cell. I build up from there all the way to consciousness. Or so I try. So act three is about, it's really about, artificial intelligence, the good and the bad. And it takes that idea that our genome is this whole thing where everything's written down, all the instructions for how to do everything.
[00:36:01] A person is made out of like 30 different elements. A smartphone's made out of 60. A smartphone's a lot harder to make than you are. And the genome knows just how to do that; that DNA is miles long. And what happened is we built these machines, and they were real, real good at crunching data, and finding patterns, and making predictions. And they can say, that's a spam email. And they can say, here's how you can get to the restaurant you're going to. Like they're good at that.
[00:36:30] But our bottleneck is not our technology, it's not our computers. It's our data. And most of our data is pretty poor. And we have to train the machine and tell the machine when it's right and wrong and all that. We have to normalize the data. I mean, like if there was not another advance in artificial intelligence for 20 years, we'd have plenty to do. Like there's no way we're gonna get it all done, 'cause we have so much to do with the data. So we said, Aha! Why don't we start collecting the data with computers as well? So they collect their own data, and then they look for patterns in it.
[00:37:04] And so, I hypothesize that throughout the course of human history, people have learned stuff and then forgotten it, or they learn stuff and then they die. Or they learn stuff, and they teach it to somebody else, and that person dies. Or they learn stuff, they teach it to somebody else, and that person messes it all up. And how do you make any progress in that world?
[00:37:26] And the answer is, what if you built a machine that could remember everything? Everything. Don't think about the dystopia part yet. I mean, we can come to that. But just imagine for a moment if every single thing about your life was recorded. Everywhere you went, every word you said, every word you typed, everything you looked at, what your eyes did, how they tracked, what your physiological response was, every bite of food you've ever taken analyzed by the spoon before you put it in your mouth. Every time your heart beats is recorded. Like everything. Love it or hate it, that's what we're building. And it isn't really that Big Brother is building it; it's like we want them to do all this.
[00:38:06] Now, imagine if you had all that data that the machines had collected, and say we had it for the last hundred years. Just imagine how you could mine that and get insights, because we are not used to thinking like that. Like we're kind of loose cannons in the way we live our lives. Like we have two pieces of anecdotal evidence, and then we make a decision about, like, what city am I gonna move to? And it's like, oh, my uncle says Tulsa is nice. And that's what we do. Like we just stagger through life, sort of capriciously making decisions, 'cause we don't really have the data. But imagine you have a hundred years' worth of everything everybody did. The system could start suggesting what you should do. You don't have to do it.
[00:38:47] But I think I liken it in the book to — you know, if you buy a metal detector, and you go to the beach, and you're walking down the beach, and it goes off. Well, you can dig anywhere on the beach you want. I mean, I personally would dig where the metal detector says to dig, you know? I'm not gonna say, oh, you're not the boss of me, metal detector. I'll dig over there. Like the reason you have the metal detector is that it's gonna tell you where to dig.
[00:39:10] Look. If knowledge is power, that's the ultimate empowerment. I don't know how you can argue that ignorance is good, that us forgetting all this stuff is good. So that's kind of what I see us doing with that technology. Now, in many ways, it can be misused, right? States can misuse it, you know, and track everything. You could find a dissident in three minutes if you had all that data. And so, there's all kinds of problems that can come out of it.
[00:39:34] I give all these examples in the book of things like hookworms in the South. They didn't know that if you didn't dig your outhouses more than six feet down, the hookworms could get up through the dirt. You walk around barefoot, and they're gonna come in your feet. And then you're gonna get like lethargic. The data could have told us that in three seconds.
[00:39:54] You know, iodine deficiency, before we started adding iodine to salt, was this thing. When they started adding iodine to salt, I think we went up like five IQ points. Like that's not me making up a number. I think it's five. And they said in some areas it was 10 to 15 points from that one thing. The system could find all of that. I mean, a natural human IQ might be 300, but maybe we're all doing something, I don't know, the way we brush our teeth, that's holding it down. Who knows what it is, right?
[00:40:25] Adam Gamwell: If I can ask about that, one thing that has me thinking, 'cause I think it's a really fascinating point, is that when we cut out things like hookworm infection, one of the results is that IQ goes up, which is a pretty incredible change, right? And I think you mentioned the removal of lead-painted walls as another example, too, where we see functional and social and mental changes in people when we make large-scale changes like that.
[00:40:49] I think something else you mentioned in the book that's important is how we help ourselves not go down the dystopia story, right? 'Cause it could be misused. And I don't wanna give too many spoiler alerts, but one of the challenges that came out of probability theory was the rise of eugenics, the idea that we could do this to people genetically and make a, quote, statistically better person. Putting the air quotes there, right? And then we might move it forward to this question: if we have an AI or a tech system telling us what we should do, it could also do something that feels similarly cold, right? Something that doesn't feel in line with humanism.
[00:41:23] Byron Reese: Yeah. And the worst news is you can do the cold thing. And then if somebody says, man, that's kind of cold, you say, well, that's what the data says. And it's sort of like it's beyond, you know, recrimination at that point. Oh, that's what the data said. And, you know, at my heart, I'm a humanist. Like I love people. The only thing I like about technology is how it empowers people. So that's why I put the chapter in there about eugenics.
[00:41:48] And what's particularly ironic about that whole thing is, there's this idea in the math section called regression to the mean. And what it says is if you're really tall, your kids aren't gonna be as tall as you. And the thinking is like this. If you drop that BB down that thing and it bounces, and you're super tall, well, it bounced to the left 10 times, and man, you're at six foot six. And then you take that BB out, that's you, and then you say, okay, what about my kids? And you drop it back in the top. There's no reason to think it's gonna go boop, boop, boop again. It may get one good boop by having your DNA. But all the other bounces were just chance.
[00:42:30] So what happens is all outliers, over time, return to the mean. And that idea came from this guy, Francis Galton. And he said, regression to the mean, you can't beat it. Like it's always gonna return to the middle. And then he's like, eugenics. We shall make people better by just getting rid of all the people that bounce to the right too many times. Right, right, right, right. Oh, no, get rid of them.
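Regression to the mean drops straight out of that bouncing-BB picture. A toy simulation (the board size and the inheritance fraction are my arbitrary choices, not Galton's actual data):

```python
import random

random.seed(1)

def bounces(n=100):
    """One trip down the board: each pin sends the BB one way (-1) or the other (+1)."""
    return [random.choice((-1, 1)) for _ in range(n)]

def child_of(parent, inherited=20):
    """A child keeps a handful of the parent's bounces (the heritable
    part) and re-rolls the rest."""
    return parent[:inherited] + bounces(100 - inherited)

population = [bounces() for _ in range(50_000)]
tall = [p for p in population if sum(p) >= 20]   # the extreme outliers
kids = [child_of(p) for p in tall]

parent_avg = sum(map(sum, tall)) / len(tall)
kid_avg = sum(map(sum, kids)) / len(kids)
print(f"outlier parents' average score: {parent_avg:.1f}")
print(f"their children's average score: {kid_avg:.1f}  (population mean is 0)")
```

The children inherit a few lucky bounces, so they land above average, but far closer to the middle than their outlier parents: regression to the mean, with no mechanism for eugenics to "beat" it.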
[00:42:55] And I remember I was telling this to my 15-year-old son. And he was like, wait a minute. That's the guy that did the regression to the mean thing. Like how could he not know you can't do eugenics? Like you take a smart person, and then maybe they have a kind of smart person as a child, and then their kid's just normal. And they told Galton that. And Galton had this really weird, convoluted explanation about it that he probably believed. Like, you know, self-delusion is such a thing, because eugenics was sanctified by science and math. It was like, oh, that sounds good.
[00:43:28] And eventually, the forced sterilization of Americans made it all the way to the Supreme Court. And they ruled in favor of it. They were like, it's okay to forcibly sterilize people. And it was no five-four squeaker. It was eight to one. Eight to one. That, yeah, if there are people that it's good for society to forcibly keep from having children, then so be it. And, you know, that lasted up till, what, 1980 or something, that we still did that. And it was a cautionary tale I put in the book.
[00:43:58] Adam Gamwell: I think it was interesting and important to have that cautionary tale in the probability section, 'cause then, as we get to the AI and tech space, that's where a lot of the modern narratives we tell ourselves live. And there's the cautionary tales, too. It's interesting because we have some of them where there could be subtle racism in technology, in terms of, you know, when they were first putting together smart camera phones, the cameras wouldn't properly take pictures of people with melanated skin, because nobody had built filters for that.
[00:44:22] But then on top of that, what does that mean for how law enforcement might pass down sentences if they're using technology to help them? And so, it's interesting, 'cause I think it's also helping us take seriously the case, and you raise this in the book too, that tech is about empowering humans in its best form. But the other side is you gotta recognize that people are the ones that are still making this. And it's interesting because, you know, you also noted that a lot of the AI and tech systems are moving a lot beyond what we understand or have the capacity to. And so, there is this interesting question of how we make sure we put in parameters that will help keep the positive aspects.
[00:44:59] And so one point I'd love to hear your thoughts on: you mentioned that GDP is probably not the best way to talk about what's good for society. What other measurements might we use? When we think about what we're looking to measure and then use for predictive possibilities with our technology, what are we trying to see? The GDP question stuck with me, 'cause if we're only trying to talk about output in an economic sense, what does that mean for human well-being? What if we measured that and used our tools to think in that direction? Because the truth is these are stories we tell ourselves, and we can tell different stories, right? And so, why don't we, right? We don't have to tell Terminator as our tale. It's not inevitable. Well, it's not fate, I might say, right?
[00:45:39] Byron Reese: That is right. You know, they say, who was it that said the price of liberty is eternal vigilance? You know? The price of all of this stuff of like, well, how do we keep this thing from happening, this bad thing from happening? And, you know, I wish there was this silver bullet. But it's eternal vigilance. Like look, there are some countries where you can do a lot of it by statute, right? Like you can pass laws that try to define acceptable behavior and punish unacceptable behavior. But there are parts of the world that that would not work. And I don't have an easy answer for that.
[00:46:11] Adam Gamwell: Yeah. But I think it's important, too. I mean, that's one thing I appreciate about following your work over the years: we need optimists that are realists too, right? That's what I read in your work. And it's an important part because, you know, pessimist voices get loud, and part of being human is that we are hard-coded to see the negative, right? Because this is how we survived. If we think there's a tiger somewhere, our brain says it's better to be safe than sorry. Better safe than eaten by a tiger. So we lean toward the slightly negative-sounding thing even when there isn't a tiger.
[00:46:42] And I think we do some of the same stuff here, too, where we're saying, okay, we could easily veer into Big Brother territory or Terminator territory or Matrix territory. But obviously, I mean, like I'm just quoting three very big stories, right, that we all think about in this. And so it's, I think, I mean, one thing I take away from the book and thinking about this too is that there is power in optimism. And that there is something important about humans that isn't replaced by machines.
[00:47:04] I think that's another point you make that's really important: we're not here to be replaced by machines, because they're actually speaking a slightly different language, right? There's the idea of math as the language of the universe, and what computers are speaking is not the same thing we're doing right now. We're being recorded in zeroes and ones, but we're being humans, you know, talking. And I think that's really interesting. That was something I had not considered before, the actual language difference between machines and people. And if I may, that's a pretty good feather in the cap in terms of why we're not machines. We literally speak a different language.
[00:47:40] Byron Reese: That is a fascinating point, actually. I mean, I wrote all that. But I never like connected that dot.
[00:47:45] Adam Gamwell: Well, I'm glad we could connect them here. That's what we're here for. But I think it's interesting too, 'cause, you know, one thing else I took away is that a lot of the folks making the technology, the computer scientists, the sensors, the readers, the AI, you know? I think you noted in your AI podcast too, I forgot what it was, but like 80% of them think that we're machines, whereas in the general public it's like 15%, right? So it's like even this —
[00:48:08] Byron Reese: 96%.
[00:48:09] Adam Gamwell: Wow. Okay. 96.
[00:48:10] Byron Reese: Yep.
[00:48:11] Adam Gamwell: You know, that's interesting.
[00:48:12] Byron Reese: Yeah. Only four people on the podcast, and I can name them, said we're not machines. Everybody else said, of course. What else would we be? What else would we be if we're not machines? That's the entire basis for why we can make general intelligence. That's the basis for why most AI people believe general intelligence is possible: because they think we're a machine with general intelligence.
[00:48:35] I'm with the four. And, you know, I always tell people that I'm a minority viewpoint on this. Like I don't wanna represent that A.) it's settled, or even that I'm in the majority, when I don't think general intelligence is possible. That kind of AI that you see in, like, Commander Data or C-3PO or Ava in Ex Machina or Her, Scarlett Johansson's character in Her. Like I don't believe any of that's possible. Like literally impossible.
[00:49:03] Unless you can make mechanical people, which by the way has been a long, long dream. People wanna build mechanical beings and control them. And I can give you examples 3,000 years old. But you can just think about Frankenstein, 200 years old, right? The idea that you can just slap together a bunch of parts, hook it up to a lightning bolt, and yell, "It's alive! It's alive!" and it wakes up. I mean, that's what this next book is about. It asks, what are you? And it gives you like eight choices, and so on.
[00:49:38] Adam Gamwell: I think that's super cool. Now, I'm excited. I'm assuming it's not done yet. But I'd love to talk about that too either as it develops or as it comes out, certainly.
I enjoy talking with you. I think you have a lot of really good ideas, and I, again, enjoy seeing the work that you're putting out there, helping us put together some of these bigger-picture questions, work that isn't afraid to be optimistic but then says: if we take seriously how these changes happen, then what is going to happen? And what better way to do that than thinking about what the future is and how we know what it is. So, well positioned, I think.
[00:50:05] Byron Reese: So I guess I can close with the epilogue of the book. So the first part of the book identifies 20 purposes of telling stories. Like why do we tell stories? And, you know, I mean, they're to relay our history, to teach morals, all of these things. And I try to figure out 20 of them. And then in the epilogue, it opens with the idea that sometimes the story doesn't make sense until the very end. Like that's where it all comes together.
[00:50:30] And there was this 21st one that I really held back. It's like the secret one that rewards anybody who made it that far. And, you know, there's like two very different narratives of what we are. One of them is this very kind of mechanistic one: that we are essentially just bags of chemicals and electrical impulses, and we kind of bounce and careen through life, and if you're lucky, you bounce into another bag that you connect with. You're this localized spot that can fight entropy for a little while, and then someday you die. And very soon, everybody's forgotten you were ever there. And that's sort of this bleak view.
[00:51:12] But that's, you know, the view that says your life is just a sequence of minutes. One led to the other, led to the other, led to the other, led to your death. The other narrative, the one that I think most people believe, and it ties back to that question of whether we're machines, is that everybody's life does matter. That everybody's life has intrinsic worth, and that the universe is different for them having been around.
[00:51:35] And, you know, Carl Sagan wrote this thing about how we're made out of star stuff. You know, stars used to be hydrogen and helium; those were the only elements. But as fusion happens, all the heavier elements get formed, and then the stars explode. And that's what makes you and me. And that's an amazing thing. But that's kind of a scientific "wow" thing.
[00:51:53] The flip side of that coin is that's not what you are. You aren't residue from a fallen star. You are a story. Your life is a story. And so that's purpose 21. Stories are what give life its meaning. That you can take all those minutes that make up your life and you can tell a story about them. You can weave a story about them. And that, that is your true self. That has meaning and that has significance.
[00:52:18] Adam Gamwell: Amen to that. I thought that was a nice secret ending there. But I think you're right. Even the idea that we can go from stories are kind of how we started being human and then it's where we are today, too, you know? It's like, it's still what differentiates us from everything else, in essence.
[00:52:32] I'm super excited to, you know, have folks check out your book and see what's cooking here. I think it's a great read, really fun, with a lot of thought-provoking elements. So I appreciate you doing the research and putting it out into the world. And, yeah, excited to share it. Thank you so much for joining me on the show today. It's been great as always.
[00:52:47] Byron Reese: I would love to come back if you would have me.
[00:52:49] Adam Gamwell: Awesome. Yeah, anytime. Anytime.
[00:52:52] Once again, many thanks to Byron Reese for joining me on the podcast today. It's been a pleasure to chat. And I'm excited for folks to check out his new book, Stories, Dice, and Rocks That Think: How Humans Learned to See the Future--and Shape It. You can check out a link for the book below in the show notes.
[00:53:07] And as always, I wanna get in conversation with you. How are you thinking about the different acts of humans' development for seeing the future: learning to recall the past, the rise of language and storytelling, the rise of probability theory, and then finally, the invention of computers as a way to help us think more deeply and quickly about the future, and potentially to shape it? What are you most excited about in terms of humanity's future around technology? What do you think we need to watch out for? And how might we think deeply about what this means for business, for technology, for culture, for keeping people well and foregrounding things like human wellness as we find ourselves moving deeper into the 21st century?
[00:53:44] I think there's a lot to ponder. So hit me up at @thisanthrolife on Twitter, or shoot me a message at firstname.lastname@example.org, or send a message from the website, thisanthrolife.org, and get in contact that way. As always, I love to hear from you, so keep me posted on new ideas and what's happening out in the anthro sphere. And if you're interested in writing for the This Anthro Life Substack, also shoot me a message. I'm looking for folks who may want to put together some content, some written stories, and pieces like that. Love to hear from you. And as always, we'll see you soon. I'm your host, Adam Gamwell. Be well, be safe, and we'll see you next time.
Entrepreneur, Futurist, Author, Speaker
BYRON REESE is an Austin-based entrepreneur with a quarter-century of experience building and running technology companies. He is a recognized authority on AI and holds a number of technology patents. In addition, he is a futurist with a strong conviction that technology will help bring about a new golden age of humanity. He gives talks around the world about how technology is changing work, education, and culture. He is the author of four books on technology, the most recent of which was described by The New York Times as "entertaining and engaging."