Dec. 6, 2023

How to Ensure Human Autonomy in the Age of Algorithms and AI with Brian Evergreen

This episode of This Anthro Life explored the intricate relationship between human autonomy and the rapidly evolving landscape of artificial intelligence. The discussion highlighted the crucial need for diverse voices to champion human well-being, flourishing, and autonomy in the face of technological advancement. Brian Evergreen's work, dedicated to empowering leaders to prioritize people over profit and harness technology to elevate our humanity, emphasized the importance of ensuring that AI serves as a tool for human betterment rather than a force of subjugation.

What does the concept of autonomy bring to mind for you? How has AI already begun reshaping how we work and make decisions? And do you think AI and algorithms should play a role in organizational decision-making? Why or why not?

The conversation also turns to the historical context of work, particularly during industrialization, when the focus was on efficiency over people. That history highlights the need to reevaluate the role of AI in the workplace and to consider how it can be harnessed to prioritize human values and autonomy.
Brian's framework for autonomous transformation emphasizes envisioning the future before selecting technologies, ensuring that people feel a sense of belonging and purpose in the face of technological advancement. It underscores uniquely human capabilities, such as belonging, exploration, and envisioning future states, and emphasizes the complementary relationship between humans and machines.




Key takeaways:


  • The importance of envisioning what future state an organization wants to create
  • Examples of working with companies to define their vision, challenges, and strategic approach
  • The role of autonomy in transformation and moving from digital to autonomous systems
  • How AI is highlighting the need for a renewed focus on culture and the human experience
  • Differences between leaders who view people as expendable vs. those who don't
  • The time lag between deciding to use new tech and it being developed for production
  • How leaders can focus on expanding revenue through new opportunities
  • The history of how work was set up during the industrial revolution
  • Transforming beyond viewing organizations like mechanical systems and people as cogs
  • The potential for technologies like AI to take work off humans' plates and elevate them to more creative roles





Timestamps:
0:00:00 AI's impact on work and society with Brian Evergreen
0:01:49 Chess and music experiences and their impact on leadership
0:06:29 Autonomy in workplace transformation
0:12:22 Work history and technological change
0:20:07 AI's impact on organizational culture and success
0:25:43 Balancing profit and doing good with AI
0:29:43 Profit, greed, and altruism in business
0:33:09 AI strategy and its implementation
0:43:23 Improving worker experience in a manufacturing setting
0:48:09 AI, human capabilities, and belonging in organizations
0:51:05 AI's impact on jobs and leadership
0:56:31 Leadership, technology, and decision-making
1:00:54 Autonomous transformation with Brian Evergreen




About This Anthro Life
This Anthro Life is a thought-provoking podcast that explores the human side of technology, culture, and business. Hosted by Adam Gamwell, we unravel fascinating narratives and connect them to the wider context of our lives.

Tune in to https://thisanthrolife.org and subscribe to our Substack at https://thisanthrolife.substack.com for more captivating episodes and engaging content.

Connect with Brian Evergreen
Purchase Autonomous Transformation through TAL’s Bookstore Affiliate Link to support independent bookstores, the author and the podcast!
Linkedin: https://www.linkedin.com/in/brianevergreen/
Twitter: https://twitter.com/brianjevergreen
Website: https://brianevergreen.com/
Instagram: https://www.instagram.com/brianjevergreen/

Connect with This Anthro Life:
Instagram: https://www.instagram.com/thisanthrolife/
Facebook: https://www.facebook.com/thisanthrolife
LinkedIn: https://www.linkedin.com/company/this-anthro-life-podcast/
This Anthro Life website: https://www.thisanthrolife.org/
Substack blog: https://thisanthrolife.substack.com

Transcript

Adam:

Hello and welcome to This Anthro Life, the podcast that delves into what it means to be human in the 21st century. I'm your host, Adam Gamwell. And today's episode is all about the future of human work in the age of artificial intelligence. Now, autonomy: what does this concept bring to mind? It's something that has steadily captured the imagination of futurists and leaders alike. What does it mean for humans to have the power of self-governing in a world where algorithms and artificial intelligence have become mainstays in decision-making processes? And what does that mean for how we do work? These questions shape my conversation with polymath, corporate AI strategist, and founder of The Profitable Good Company, Brian Evergreen, who's my guest today. Brian has a new book out called Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence that immediately caught my attention. Because the truth is, as compelling as consumer-facing AI products like ChatGPT are, AI behind the scenes is deeply reshaping how we work, how leaders lead, and what future they set their sights on. In other words, we need more diverse voices across the industry landscape that focus on human well-being, flourishing, and autonomy, especially in the era of AI. And Brian's work does just this. So join us as we embark on a journey of discovery and reimagine a world where leaders are empowered to make a difference, putting people just as high as, if not over, profit, and where technology serves to enhance our humanity. Can't wait to dive right in, so let's get to it. Spoiler alert: we know I'm an anthropologist, so I'm kind of interested in the human side of your work and of yourself. One of the fun pieces going through your work, Autonomous Transformation, is two fun facts that popped out. One is that you play chess, and the other is that you're a musician by training. So I'd love to start there and think about how these two pieces have played into how you approach the question of autonomous transformation, and into thinking about the role of technology in business and building a more human future.


Brian:

 

I love that question, because I think that our past experiences define the present and the way that we look at the future as well. My chess background has played a big role in everything that I've done, in a way that I wouldn't have expected as a kid playing in tournaments. I didn't think I was training for anything other than those chess tournaments, trying to win trophies. So it's been interesting to enter corporate spheres and realize that I'd basically gone through pretty intensive strategy training with a very quick feedback loop. Even the longest game I ever played was an eight-hour game. That's still a much faster feedback loop on the strategy that you set than most of what we experience in the corporate world, where you put together a strategy and it might be years before you actually see whether it worked; you might be in a different role by the time the data comes back on whether that was the right strategy. So from a chess perspective, the biggest role it played in the way that I think about what I do and approach work is strategy. Looking out to what I want to create first, and then thinking through all the different possibilities and decision trees and game theory: if I do this, they do that. But they might not do that. What if they do these other things? And even when you have a strategy set and you think, okay, that's my ideal, that's what I'm going to go create, you move your piece, they could move a different piece you weren't expecting, and that could throw everything off, and you have to reset your strategy live. Sometimes in milliseconds, if you're playing speed chess, for example. So that's probably the impact chess has had on me, at least the one that's most apparent to me. And the music side, especially for you as an anthropologist, is probably where we overlap the most in terms of the humanity side of things. Because chess can be a little impersonal: you're not supposed to play the other player, you're supposed to just play the board and focus on the situation and not the person, because you can get distracted, or you could fall for the social engineering tricks people play to make it seem like they don't know what they're doing, and then they win, things like that. Whereas in the music sphere, my experience started with an interest in creating beautiful music in a boys' choir when I was eight. From there, I don't know how many choirs I sang in, and I wrote music, playing mostly piano. And I think the pursuit of creating something that is nice to listen to, that expresses something meaningful to other people, in a group collaborative setting, almost always under the direction of a conductor, is a very interesting corollary to what it's like to be in the corporate sphere, right? Because in a way, we're a group of people coming together to do something. But on the music side, you automatically filter out anybody that isn't interested in creating something beautiful to express something meaningful to someone else. Because of the fact that we need to work to put food on the table, that's not necessarily the case on the corporate side of things.
And so from a music perspective, the biggest impact I've carried with me is that I enter every room in hopes of meeting other people who want to create something good and something meaningful together. Even if it's something that could otherwise seem mundane, I think we still as people have a choice to decide how we show up, and in that brainstorming session or in that meeting, what it is that we do and the energy that we bring. And I think that starts from the background I have in music. Then, from a leadership perspective: the exact same piece of music, depending on who's conducting it, even if they're all great conductors, can be interpreted so differently. And I think the same is true of strategy. You could have the best strategy in the world, and that could be like having a technically brilliant conductor who makes sure to get everyone in time so the music is executed perfectly. But maybe the conductor isn't connecting with the choir and the orchestra, and hasn't connected them to the meaning of what it is we're singing. We're singing in Latin; we don't even know what any of this means. There are plenty of conductors who just gloss over it and cue everybody in. But the ones that make the most beautiful music are the ones that talk about the history and the story: where was this composer coming from, what were they trying to express, what does that mean for us now, and what are we going to bring tonight to this audience that's taking time out of their Saturday night to come listen? That creates a special human connection to the music, where otherwise it could just be, you know, we're executing something.

 

Adam: 

 

Brilliantly said. I love both of those answers. And you raise so many important points in terms of what an organization is trying to accomplish, right? When we think about the corporate sector as places that we go for work and places that we cohere around for work. And something you said there, about the need, when you enter a space, to look for both something good and something meaningful that we can co-create together as humans. That's one of the proclivities we have as this weird species: we like to get together with strangers and organize things and put ideas together. We're a very weird species like that, you know? And what's cool about the music angle, too, is that oftentimes, as you said, folks who aren't interested in music self-select out; they either don't, won't, or can't play. But for those who want to be there, there's also the element of passion, something that's really emotionally evocative and that you feel deeply towards. And that's something I was thinking about while reading your book and as we're discussing here. Because when we think about where organizations might be going in the future, and as they're transforming, so much of the challenge that we see people butting up against, both leaders and employees, workers, I don't want to call them followers, it sounds weird, leaders and workers, I guess, is a concern over: am I going to lose my job? Ultimately, will I become purposeless? So there is this question around: how do I feel safe about having the passion I might have for work? How do we keep passion in that conversation as we're thinking about transformation? One of the ways I've been thinking about this, and I want to get your thoughts on it, is that you, really interestingly, when looking at transformation, settled on autonomy as the other side of that conversation. And that's something we do not hear people talking about very much. I think this is why your work is resonating with a lot of folks right now. So let's talk about: what is autonomous transformation when it comes to an organization? I'm curious why you settled on autonomy as the counterpoint to transformation.

 

Brian:

 

So I love that, and I'm glad that you teased that out. I'd say that the root of the word autonomy is the right or the power of self-governing; not government in a political sense, but being able to do something without needing to ask anybody else, let's say. In the work that I've been doing in the tech sector for a number of years, that's been one of the goals that I've had. And that's something I've noticed in terms of what we've created: when we create a system that doesn't need to double-check with a human, more often than not, in the majority of cases, it's able to autonomously handle something that otherwise a human would have to be keeping track of. A good example is a manufacturing plant where they have this kind of cutter that cuts steel. An operator was listening, and he didn't even realize he was relying on sound, but there were these distinct sounds: when one sound happened, he needed to go adjust the temperature; when another sound happened, he needed to adjust the angle. He had tons of other operational responsibilities, but in the meanwhile there was this ongoing task of listening, and if he heard that sound, going over and making the adjustment so he wouldn't have more issues down the line later. So that's an example where we were able to instrument the machine for sound, basically start measuring sound, and then work with him to correlate which sound corresponds to which action, which change you make to the machine in order to fix it. And then that was able to run autonomously. Maybe the first couple of weeks he would hear it, then hear it self-adjust, and think, okay, good, it took care of it. Over time it got to the point where that's not something he has to think about. That doesn't mean his job went away. It just meant he could focus on all the other things he has in front of him. So when I think about the word autonomy and transformation: we've been moving from analog to digital, but digital doesn't imply that you've reached any level of intelligence or autonomous action or decision-making in your systems. For me, the next goalpost I put out there, I think 2019 is when I first codified it, is: let's move from digital to autonomous. That's the next goalpost. And the idea of transforming our markets and sectors and organizations through the capability of autonomy is something I found very appealing and very empowering for people. Because, more often than not, and I spoke with the CEO of the Advanced Robotics for Manufacturing Institute, which was funded under the Obama administration at first and continues to thrive, he shared that in every implementation of robotics he's seen, and he has a lot of purview, he has yet to see robotics replace human workers. I think there was one time where they tried to do that, and they realized it actually cost more to do it that way than what they were doing before, and there were issues they ran into, so they ended up getting rid of the robotics and bringing the people back. In the vast, vast majority, nearly 100% of what he's seen, it's actually been: hey, by bringing in more robotics and AI, we're able to take work off the plates of human workers who were overloaded in the first place, and fill the gap.
Because most manufacturers, just to use one sector as an example, can't hire enough workers today to come work in these sort of dull, dirty, dangerous jobs. Especially because now, with the internet, people can go find jobs in Silicon Valley or New York City that might be higher paying and, frankly, safer. So there's already a gap. It's not coming; it's already here. And so the use of autonomy and autonomous systems is an opportunity to address that gap, so that we can keep having the goods and the clothing and the food and all the things that we rely on.
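For technically minded listeners, here is a minimal sketch of what a sound-to-action loop like the one Brian describes might look like in code. It is illustrative only, not the plant's actual system: the sound labels, the confidence threshold, and the machine, classifier, and mic_stream objects are all hypothetical stand-ins for whatever instrumentation a real deployment would use.

import numpy as np

WINDOW_SECONDS = 2.0
SAMPLE_RATE = 16_000

# Map each recognized sound signature to the corrective action the
# operator used to perform by hand (labels are hypothetical).
ACTIONS = {
    "overheat_whine": lambda machine: machine.adjust_temperature(delta=-5.0),
    "misaligned_scrape": lambda machine: machine.adjust_angle(delta=0.5),
}

def monitor(machine, classifier, mic_stream):
    """Continuously classify audio windows and apply corrections."""
    for window in mic_stream.windows(WINDOW_SECONDS, SAMPLE_RATE):
        label, confidence = classifier.predict(np.asarray(window))
        if label in ACTIONS and confidence > 0.9:
            # Confident match: self-adjust, as in Brian's example.
            ACTIONS[label](machine)
        elif label in ACTIONS:
            # Uncertain match: flag it for the human operator instead.
            machine.flag_for_operator(label, confidence)

The design point mirrors the story: the system only takes over the listening task it can handle confidently, and the operator stays in the loop for everything else.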

 

Adam: 

 

You've helped us think through that it's not just from analog to digital and now we're done, right? We've made it. Or even digital to AI, whatever that would be. It's actually this notion of: what are we allowing people to do, and what does technology help us do in that process? And it seems like one of the reasons this feels both super fast and a long time coming is when we look at the history of how we've set up work, something you also write about in the book. I think knowing the history of how work has been set up shows why it's so revolutionary now to think about what autonomy is. I'd love to have you break this down with us. We're having a technological change, and we have this weird notion that it's either, you know, it can take my job or not, or it's going to make my work safer or give me more time as a human. But really, what's important, and you've talked about this too, is that it actually ends up being social problems that we run into when we're trying to change the technology. And this comes back to how we've set work up. So can you walk us through this a little bit: what history of work are you drawing on as we think about what transformation is happening and where it's taking us?

 

Brian:

 

Absolutely. So the history of work that I'm drawing from, and obviously there's a long, storied history of work from a human perspective, which you as an anthropologist know far better than I, the point in history that I take up in my book is 1911. So close your eyes and imagine it's 1911. Henry Ford has come out, I think three years earlier, with the Ford Model T, and we're basically in the process of inventing the middle class. We talk all the time today about the middle class and about threats and risks to the middle class. Back then it was: we're going to create a new class. There are the haves and the have-nots, and now there's something in between that we can create. The vast majority of people had never even been one town over, let alone two cities over, and if they had, they'd taken the train. So this idea that people who normally could not have afforded even a horse and buggy can now afford a car. And now we have to pave roads. And now this idea of a Sunday drive, the eight-hour workday, the five-day workweek: all of this was created at that time. And the issue they were facing was a problem of national efficiency, something President Theodore Roosevelt spoke about at the time. The greatest challenge facing the nation, at least for the U.S., was efficiency, because they had more demand than they could possibly supply in terms of cars and other goods. So at that time the first management consultant, Frederick Taylor, looked at that challenge, took the call, and said: okay, what can we do about this? When he walked into factories, he thought: there are these groups of people that have been brought together with no background or experience, because this is something new that we're doing. Now we're manufacturing things, but unlike weaving, there's no master-apprentice sort of history. So if you walk down a line, you see a great deal of variability between every piece that's manufactured, because there's not really a standard way to do any of it. And it's not like there were any kind of guides or anything digital, right? It's all word of mouth. It's all just getting in there and showing someone how to do it, then hoping they remember and do it well enough that the product ends up being good. His solution: he thought, okay, this is not rigorous, it's not standardized, it's not repeatable. So what is? He started with the scientific method and said: what if we combine that with management, which was also basically a brand-new term. And so what he did was bring in teams of experts, and they would watch these people make these goods, time them with stopwatches, and document: okay, there are 15 different ways people are doing this; let's pull the best things out of all of them and design the perfect process. So this idea of process mapping, process re-engineering, process optimization, all of that was invented at that time by Frederick Taylor and his team. This is the idea of Taylorism.
One of the underlying issues with that is that Taylor had a fundamental view of the worker, whom he referred to as a man of sluggish mind. He contrasted a man of sluggish mind with an intelligent laborer as a way of distinguishing different types of workers. So you can see there's a somewhat flawed premise about people underlying some of the things that he wrote about and created. And if you boil it down, he said: in the past, the man has been first (and he meant the human); in the future, the system must be first. And ultimately, he succeeded. Most organizations are now run like a large mechanical system, based on the worldview developed during the Industrial Revolution: if everything's deterministic and everything's a mechanical system, then how can we tweak and tune the mechanical system of an organization or a company to get the output that we want? And the people end up being, we hear about people being cogs in a machine, and that wasn't accidental. So now we've finally run into a point where, and I have to say it this way, treating people like cogs in a machine used to be profitable. You could achieve your goals by treating people like cogs in a machine in the 20th century. You can't do that anymore. It is quickly becoming unprofitable. Things like quiet quitting, the fact that people can access jobs anywhere and learn new trades online for free in a matter of months, not to mention the way that partners will end partnerships and consumers will boycott brands. Whether it be workers, consumers, partners, investors, or shareholders, every single force is now turning and starting to show the cracks between running an organization like a mechanical system and treating people like cogs, versus running the organization like a social system, treating people like they matter, tying meaning into the work, and the economic difference between those two approaches. The best example I can give you is Microsoft, my alma mater, so to speak, where Steve Ballmer's leadership style is a case study in that mechanistic approach, treating an organization like a large mechanical machine. Satya, on the other hand, when he stepped into the office of CEO, said to one of his lieutenants: we need a new business strategy, a new technology strategy, and a new people strategy. He placed the people strategy at the same level as the others, and there are a lot of other examples of his social-systemic leadership style. And the economic difference between the two: shares are up, last time I checked, nine times what they were in 2014, just nine years after he stepped into office, nearly 10x the whole company's value. And before you say, well, that's just because it's the tech sector: what about IBM? What about Oracle? There are so many other companies that were at a similar point as Microsoft in 2014. That's not to disparage them or their leaders at all, but just to say there's something unique and special about what happened with Microsoft. And for me, having worked there, I would say it was the leadership.

 

Adam: 

 

That's fascinating, and I appreciate you sharing that. We've all heard of Microsoft, but only so many of us have actually worked there. So it's cool to think about what a leadership change actually means internally, for one's own perception and experience of working there. And to see that when leadership places people on the same level as the tech strategy, the go-to-market strategy, the product strategy, the difference it can make. It's nice to be able to say: look, here's the economic output of that. We've seen the stock orders of magnitude higher, we're seeing employee satisfaction higher, and attrition, people leaving just because they're unhappy, is lowering too. It's really interesting to think about that and to recognize that this is a major change we're seeing. And on one level it sounds a little crazy, right? We're coming from this history of basically thinking of ourselves as cogs in machines. I'm thinking of literally Spacely Sprockets from The Jetsons, right? They made cogs, and George Jetson was a cog in a machine. So now we need the people as part of that story too. And one thing I've been thinking about, and I'm curious to get your thoughts on, is: is there something about AI as the technology that's forcing a lot of organizations' hands to rethink the people side of their business? Or has this kind of just been on its way and we're just changing? Or is AI a smack in the face saying, actually, you've got to think about people?

 

Brian:

 

I think there is something unique about AI that's causing this, because only 13% of AI initiatives are making it into production. In other words, only 13% of data science projects are successful, is another way you could say it. And of the ones that are successful, only 7% are still in production two years later; that's roughly one initiative in a hundred still running after two years. So that's just atrocious considering the potential value of these technologies and the amount of investment going in, right? Think about all the waste. For me, in 2019, when I was the US AI strategy lead for Microsoft, I was working with Fortune 500 C-level executives to set up their AI agendas and the strategies and plans around them. So I got a firsthand look at what was working and what wasn't. And I eventually got to the point where I could really start to tease out and predict how well an organization would do on their AI initiative, not based on how much money they'd set aside or how great their brand recognition was. They were all Fortune 500 companies, all household names, so they all had similar amounts of resources, similar amounts of brand value, and lots of potential use cases or opportunities where the technology could do something really spectacular. It ended up coming down to culture being, for me and my team, the greatest predictor of whether or not they'd be able to do something powerful and meaningful with the technology, not just the budget or the brand. And so I think we have run into a point, and I sometimes joke that the Industrial Revolution has run out of steam. We talk about going into Industry 5.0 and being in Industry 4.0 right now, and I say, no. Let's all agree as a society: let's thank the Industrial Revolution for the goods it did provide us. There's no doubt that it has been the backbone of the modern society that we benefit from. But we can also acknowledge the issues and challenges we face thanks to it, and lay it to rest. It's time for a new era. It's time for us as leaders to determine it and move into a new era, another Enlightenment- or Renaissance-level era. And so I think AI is highlighting the need, because the economic potential is inarguable: seven out of ten of the top publicly traded companies in the world are tech companies. So it's inarguable that there's value there. But so many organizations are struggling to harness the potential. And I can guarantee you that in the vast majority of cases, it's not because they don't have good data scientists, and it's not because of bad data, because there are lots of organizations that didn't have good data but have somehow made the leap. Ultimately, it comes down to the culture of the organization, the way people are being treated, and the meaning they feel connected to. Because there are only about 10,000 data scientists in the world who can create what we'd consider real AI-level data science solutions, contrasted with, you know, a hundred million domain experts. So those 10,000 data scientists: if each Fortune 500 company hired 20 of them, we're done, there are no more data scientists left, right? So they have a line of opportunities out the door waiting for them.
And so if they don't feel connected to the meaning, or they don't feel respected by the domain experts, any number of reasons could throw something off track. And now the specific domain expertise it takes to architect and build that solution is gone. You might have been 75% of the way there, and it's going to be pretty challenging for even equally talented or skilled data scientists to come pick up the work and get the final 25% across the line. All that to say, long answer to your question, but I do think AI is maybe the breaking point where people are realizing: okay, we have to change the way that we lead if we want to harness the economic and societal potential.

 

Adam: 

 

No, that's fascinating. Such an interesting point, thinking about data scientists as an endangered species, but thankfully we can still make more of them, which is good. We have the technology. Yes, we can make more of them. But that's a really fascinating point to reflect on. I love this idea of: let's get past Industry 4.0, 5.0 and say that's not the point here. That's ultimately a kind of ancient god worship that's not very helpful to us today. On the power of the Industrial Revolution, it's like, yeah, thank you for getting us here. But also let's talk about the future we could envision. What could we do now that we have new technologies, but also humanity finding its way a little more to the forefront of the organization, where we see things like purpose and meaning as key elements? This is one of the things that gives me a lot of hope for the future: Gen Z is really pushing for this in terms of the kinds of work they ask for, what they require out of a corporation or organization they might work at. There's an interesting dichotomy there: of course, the youth are always a little more rebellious, right? They push back against the power structures. But at the same time, they will become the business leaders; some of them already are. And to your point, we're already seeing the economic incentive to do so. On top of that, there's the other element of what it means to actually do good through our work. Even your organization, The Profitable Good Company, and you use this terminology in the book itself, is, I think, a compelling prospect: there doesn't have to be a misalignment or disincentive between making money and doing good. And it's interesting how AI can sit at that inflection point and help us think about that. So let's walk through that a little bit. How do we think about the idea of profitability going together with AI?

 

Brian:

 

There's a lot you mentioned that I want to respond to, and I'm going to start from the end and go backwards. So, profitable good and AI. Like you mentioned Gen Z, I think we've definitely seen, in terms of the broader market conversation, a lot of concerns about the future and about the decisions being made in closed boardrooms. On the other hand, for me, having been in many of those closed boardrooms, I've seen people trying to figure out ways to do good and make the right decisions while avoiding systemic collapse. For example, I'm just going to say it: if we immediately stopped drilling for any new oil, it would be a matter of weeks, maybe months, before we reached the point of systemic collapse, when our food supply chains and our clothing and all the goods we rely on, the medicine people rely on to live, are suddenly no longer transportable. Then it's not, oh, climate disaster in 10 years; it's systemic collapse and a dystopian future in less than a year. For me, it was looking at these two things and seeing passionate, concerned people on both sides who often aren't able to even get in the same room to have a conversation, and they're missing each other in the conversation. So I looked at that and thought: okay, some of the root of this is thinking that anyone who's a corporate leader is clearly greedy and doesn't care about people, which really went against my own personal experience of working with these people and seeing how earnest they were in trying to solve some really big, hairy, thorny 21st-century issues. And yet they would run into various challenges: okay, even if we did all of our part from our side to solve that, we'd still have regulatory things to run up against, or we couldn't force the political leaders in the country where it's happening to enforce the thing we're asking them to enforce, et cetera. So I determined that I wanted to add a little more clarity to the conversation. I started with the word profit. Profit, as I wrote about in the book, is itself neutral. Profit is merely the difference between what it costs you to make something and what you sell it for. And if you think about the idea of profit in terms of how we live, and zoom out: in the US, at least 98% of the government budget is based on taxes paid on the profit made by organizations. So if none of our organizations were profitable, you eliminate 98% of the government's budget. And if you think about nonprofits, all these amazing, altruistic organizations out there trying to do good in the world, they rely on donations from the government, from organizations that made a profit, and from people who made enough of a profit in their lives that they now have extra money to give. So even though they themselves are set up as nonprofits, they're still reliant, indirectly or actually directly, on profit that other people made and gave to them.
And so if you remove all profit, again, from the US or any country in the world, it's just a matter of time before total systemic collapse, and the nonprofits aren't going to be able to help, and the government's not going to be able to help. No one can help if there's not the lifeblood that's currently within our economic system. Maybe someone can reinvent a new economic system where profit isn't the lifeblood, but right now it is. When a company sells food, there's a cost differential between what they're selling it for and what it costs them to make. That differential is how they pay their people, and how they pay the people who made the food, who can then pay their people, and so on. So with profit itself being neutral, I thought: okay, but there is absolutely greed and exploitation. On that side of the spectrum, that's money at any cost: I'm willing to pay anything, whether it be human lives, human experience, the environment; I don't care, because I want this money more than I care about any other values I could or should have. That's absolutely greed, and it absolutely exists. The other side is altruism, which is good at any cost, where you're willing to do anything to do good. And there's sometimes been a call for corporate leaders to be altruistic, good at any cost. But what I think is missing from that conversation is that the cost could be the collapse of the system we all rely on to survive. So, to add clarity: anything between neutral profit in the middle and just short of altruism, where you're doing good but still generating some form of profit, is doing good in an economically sustainable way. There are these concrete companies coming out with new methods that are carbon-neutral, or that even capture carbon in the process of making the concrete. And they're going to sell that concrete at a profit, which means they'll get to keep making more concrete. They could eventually eclipse or overtake all other concrete providers, theoretically, in a way that's really, really good for our environment, because they're generating a profit from it. That's what I mean when I talk about profitable good. And when I think about applications of AI: there have been a lot of times where I've worked with people to try to figure out ways to use AI to guarantee there's no child labor in a supply chain, or to get a UN-level certification that there's ethical sourcing, for example. The tech ended up not being the issue. Yes, we could figure out the tech. Usually it was the profitability equation, the level of partnerships across a coalition, across a sector, and with government institutes across many countries. It ended up being a much bigger, thornier problem than something we could just apply AI to, even with the right technology. And that's part of the reason I chose profitable good as one of the areas I wrote about.

Adam:

I think that idea is really helpful, for two reasons. One, I realized as I was asking that question in relationship to AI that AI wasn't actually the point, and you wonderfully showed why that's the case: the tech is solvable. That's one of the key pieces a lot of leaders can get stuck on, right? They're looking at the technology itself and thinking that's the thing that is either going to transform us or that we have to transform to work with. But you're highlighting that that's not really the point. We can ask questions of meaning, purpose, and impact around the technology. The technology can be part of the equation, but it isn't the thing. It can both serve and… it too can be neutral, right? Technology itself is neutral until we do something with it.

 

Brian:

 

100%. I call it tool worship in the age of AI. We've gotten to a point now where it's as if you got a skill saw for your birthday, stood up, and started looking around your house for where you're going to use it to make your house better. And the truth is, you're probably going to do more destruction than good, because you don't have a plan. I think the same is true here. You don't have saw-driven transformation in your home. Sounds scary. Or electric-screwdriver-driven or hammer-driven transformation. Yeah, the wording on that is a little tricky. That's not what you do. You say: oh, I want to remodel my kitchen. Okay, great. What kind of kitchen? What do you want your kitchen to be like? Well, I want more counter space, I want it to be open, I want to tear down this wall. Great. Let's figure out what's load-bearing or not. Let's make sure we're in line with the safety codes. At no point in that conversation are you saying: so you're telling me this saw is X percent faster than that saw? That's not the conversation when you're remodeling a home. You're talking about the future you want to create for your home, and you're trusting that the experts will take care of the tool part, because they're the experts. You don't need to put on a tool belt and become a general contractor to do that. Your role is to envision the future, and then to find and partner with the right team. I think the same can be true when it comes to AI. People often think every company needs to become an AI company, in the sense that every single company needs to develop that top-level capability in-house, which, as we talked about earlier in this conversation, is not actually possible, because there just aren't enough data scientists in the world. Often, when I was brought in to partner with executives on their AI strategy, the first thing I'd ask them was: okay, great, we're going to build a strategy. What's the vision? Coming back to the chess analogy, they'd picked up the piece and were looking for where on the board to put it. And it's like, well, hang on. In chess there's touch-move, right? Don't touch the piece until you're ready to move it. Otherwise you're required, not legally, but in tournament play, to move it, even if it's not the right move and you're going to lose. People pick up the AI piece and say, where am I going to put this thing? Instead of saying: well, let's see, we're in the retail sector and this is what's changing right now. This is how consumer expectations have changed. These are the market dynamics currently in play, from a supply chain perspective, an ecosystem perspective, a sustainability perspective. And this is something we feel would really build on our core competency and offer some kind of new or interesting value to our customer base or to our partners. That's the future we want to create. Then, working backwards: what would have to be true for us to create that? Well, you might not need AI at all.
It might all be social innovation, or it might all be on the manufacturing floor. You need to figure out if you can sustainably, profitably create that. Like LEGO is struggling with right now, trying to make bricks out of recycled plastic. It's a really, really admirable goal, but they're still figuring out: can we do that profitably and stay in business? If we can, then we're absolutely going to do it. If we can't, we'll have to figure something else out, right? So you start with that future in mind that you want to create, then work backwards, as opposed to pushing on the end of a rope, starting with a technology and saying…

 

Adam:

I think that's really a powerful point, because you talk about this in the book too, in terms of how we help leaders envision what they're looking for, the future they want. And I appreciate the way you articulate it, that we ought not to get stuck on the technology or the tool as the way to do it. Again, this saw-driven transformation is a funny and terrifying idea; I think about the movie Saw from the early 2000s.

 

Brian:

 

 No, I know. That's why the wording was tricky. I'm going to make a mental note not to use it.

 

Adam: 

 

Don't trigger Lionsgate, like, hey, that's copyrighted. But an important point there is: how do we help bring about or envision what it is that we're looking for? We have to have a strategic perspective and also envision the future that we want. Something that's interesting there, and maybe you can share an example or two, is how you've worked with leaders to do this, to walk them through the process of envisioning what that future could be. Because I imagine that's probably a little scary to some C-suite executives. Not that they may ever admit to being scared, but there can be some challenge: you say, let's talk about how we can envision that future, and they say, well, what does that mean? Here's what I want. And it's like, well, let's put that in perspective and draw those lines. What does it look like in the space when you're working with them?

 

Brian:

 

You're the first podcast host, I think, to ask me that, so I'm super excited to answer. Achievement unlocked. The way I put it is that the skills you need in order to grow inside an organization, let's say you're starting out in the mailroom, like the classic GE story, I started in the mailroom and now I'm a C-level executive, are different than the skills you need once you're in that executive leadership position. You need to be good at following orders. You need to be good at asking the right questions and understanding what your boss or your management wants, then executing extremely well on that, while also working well with others, lifting others up as you continue to grow, and being a good leader. But in a way, even in leading that team, it's still about executing, in a lot of cases, on a vision or a plan that's been handed to you; you need to be a good executor. It could be: there's a challenge we're trying to address, you need to come up with a plan and then address it. But what happens when you're an executive? Sure, there are shareholders and market dynamics and customers and partners asking things of you. But at the broadest level, you get to choose which challenges are important to you and your organization when you're in the C-suite. You get to decide. For instance, if you're a chocolate company: do we want to be able to guarantee there's no child labor in our supply chain? It's been an issue for years. We ourselves, in Hershey, Pennsylvania, are not hiring children, but there are multiple companies down our supply chain where we don't even have access to the information of who they hire. There's a potential we wouldn't know if they were using child labor. But is that something we can solve? Is that something we can fix? The short answer is yes: they could start to build all their own farms, and it would take a number of years, maybe a decade, to get to the point where they could shift all the reliance over onto farms they own and operate, cutting multiple middle people out of the equation. I'm guessing the profitability would be there, because imagine one Halloween, or Valentine's Day, or whatever major chocolate-buying time of year, being able to say: hey, these are the 20 chocolate bars, the 20 candies, that we can guarantee, because we invested in building every single cacao farm that we harvest from ourselves. That would be really powerful. All that to say, in terms of your question about being in the room where it happens, so to speak, and having those discussions: I'll give you an example of an organization with a turnover issue. The turnover issue is strong enough that they feel like: how are we possibly going to fill this gap? We hire people, our experts train them, and we're not able to keep them long enough. It takes about five years in this context for workers to be able to operate the machinery and equipment safely, but they're leaving after two, and we spend six months training them. So it's just not feasible.
And eventually, in the plants they have, they're going to run out of working-age people in those local towns, especially with Amazon bringing in warehouses and other competitors, not to mention all the tech jobs people can now learn about and get remotely, let alone by moving. So it's a big problem, and they said: we need to figure this out. It'd be easy to start the conversation with: well, what can we do with AI? How do we address this with AI? And that's part of what their IT department wanted, right? Their IT department said: we just need to improve the digital worker experience. The example I'll share from those conversations is that what we ended up discussing was: okay, what I'm hearing from you is that you as a leader have constituents, so to speak, within your organization, the technologists, the tool worshippers for lack of a better phrase, who think the tools can solve this. But you don't necessarily think that. What do you think will solve it? Just asking the open questions. And what we came down to was: people are leaving because, if you zoom all the way out to the systemic context of the market, you could have the best experience in the world from a tool perspective, but people aren't finding themselves there. They aren't seeing their future at your organization. They don't see: okay, I'm joining as a minimum-wage worker in a relatively unsafe environment, but you know what, it's going to pay off, because in five years' time, if I put in my time and do a great job, I could be promoted, I could be a plant manager, and that's what that salary looks like, and from plant manager these are the avenues into more of a corporate role. In the tech sector, people can see that line very clearly. It's extremely clear: junior data scientist, data scientist, and so on, and all over Glassdoor you can see the salary ranges you're going to get. But here they can't see that. They can't picture themselves; they can't see their future the way they could 100 years ago, when moving to Dearborn, Michigan, and being a manufacturer was the Silicon Valley-level job, the hottest job. Unfortunately, at least in the US, I can't speak for every market, that has shifted. So we need to reinvent jobs at this company. And what we ended up deciding was: we're going to run a series of designed experiments. In this plant, let's improve the digital worker experience. In this one, let's tweak economic incentives and try to be really creative around that. And in this one, we're going to do pure social innovation. We're going to put pictures on the wall of the things we're making, of people experiencing the product we make and the value it creates in their lives. We're going to focus on how the managers treat their teams, and we're going to take pulse checks the way we would in a corporate culture. We're going to do the same thing
on the front line in a manufacturing context, to check the emotional health of these workers, talk to them about their future, and create very clear paths for progression, with clear expectations and some degree of transparency around what they'll be paid. As you can see, there's not a lot of AI that comes into the conversation yet, because it's more about: what are we actually solving for? You could have the hottest AI thing in the world, but if people don't see their future there, and don't see how they're going to get the level of compensation they feel they need to, say, have a family or travel or whatever their personal goals are, it doesn't matter how cool the tech is. What matters is different in that sphere. And then, yes, there was a tech innovation that came out of the whole thread. Because of the burden on workers with operational responsibilities of continuously training new employees over and over again, what they ended up building was a digital, they called it a companion, that the employee could ask questions. This was before ChatGPT came out, but it would use generative AI to understand the intent of what they were asking about. And then it could surface a video. If there was a technique they wanted to see done, because they weren't sure of the right way to do it, or wanted to get the right sequence, let's say, they could say: I'm about to do this technique on this machine, can you pull a video of one of the experts doing it? It could pull it up, and they could watch it as many times as they wanted, without taking up anyone else's time or feeling silly for having to ask again. And then they could go do it themselves. The feedback from the floor was that it was a massive help to the operational leaders who had been taking time to repeat themselves over and over for new hires, because they'd have to train and then still execute on their daily lists of tasks. That training wasn't completely offloaded, but it was significantly reduced. And those new hires felt empowered by the technology to ask more questions and get more information without feeling like they were interrupting someone else's time. But that's one small component in a broader social strategy, which I would argue is the point.
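To make the companion concrete, here is a minimal sketch of how the intent-matching piece might work: embed the worker's question, embed short descriptions of the expert demo videos, and surface the closest match. Everything here is illustrative, not the company's actual system; the video catalog and the embed function are hypothetical stand-ins for whatever model and content library a real deployment would use.

import numpy as np

# Hypothetical catalog mapping a short description of each expert
# demo to its video file; a real library would be far larger.
VIDEO_CATALOG = {
    "calibrating the cutter head before a run": "videos/cutter_calibration.mp4",
    "safe blade change sequence": "videos/blade_change.mp4",
    "restarting the line after a material jam": "videos/jam_recovery.mp4",
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_demo_video(question, embed):
    """Return the video whose description best matches the question.

    `embed` is any callable that turns text into a vector, standing in
    for the intent-understanding model Brian mentions.
    """
    q_vec = embed(question)
    scored = {path: cosine(q_vec, embed(desc))
              for desc, path in VIDEO_CATALOG.items()}
    # A production system would apply a confidence threshold and fall
    # back to a human expert when no description matches well.
    return max(scored, key=scored.get)

The design choice mirrors the story: the worker gets an immediate, repeatable answer, and the expert's time is only drawn on when no good match exists.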

 

Adam: 

 

Yeah. No, it's funny, the subtitle of your book also really matters, right? We say autonomous transformation and people jump to the AI part, but it's really: how do we create a more human future in the age of AI? So you actually have a secret anthropological book here. But that's a really powerful point. And to that point, a quote that really stood out to me, which I'll read real quick, is where you said that a more human future is one in which humans have broken away from the steel chains of the industrial revolution and are able to employ the skills with which we have been uniquely gifted, to imagine future states in which we would like to exist. I think that encapsulates the story you just told, too: having the ability to not feel like I'm just one cog in a machine that has to do what it's told, and, on top of that, having the capacity to imagine both myself at an organization and the meaning and value that work provides for others.

This has come up in conversations I've had with other business leaders, too. As we talk about what the future could be, one of the big questions we always run into in consulting projects is whether people feel like they belong. And belonging is a fundamentally social question. It's so clear in your work why tech is not the answer; it's an answer that can enable, but it's not the point. That comes through very powerfully in how you're sharing that story, and it's a reminder that as business leaders who want to be positive change agents for transformation, it ultimately comes down to the people we're working with.

The other side of this, of course, is that AI is not here to replace work, though it will replace some jobs over time by function. One of the other things you write about in this space that I'd love to get your thoughts on is the notion of capabilities: that AI, humans, and tech help us do different things. This is something else that stuck with me as important to share with leaders: think about the levels of capability, with human reasoning as one of the pieces to keep in the equation. So how can we think about the role of capability when we're thinking about adopting technology in organizations?

 

Brian:

 

I love that question. I'd say a couple of things. Think about a Venn diagram of humans and machines in terms of capabilities. I don't think I ended up putting it in the book, but it's something I've shared a lot when I give talks or work with clients. Among the things humans are uniquely good at, you mentioned one of them: belonging. We're uniquely good at creating belonging. If someone's calling in because they're having a customer care issue, a machine, even the best machine you can imagine, won't make them feel like they belong as part of your brand and your customer base, like they're known or seen, even if it answers their question very efficiently. That's just not a capability it has. Conversely, machines are tireless. They don't get bored. They're very precise, and they're very fast.

So think about the way we set out to do something as humans, zooming all the way out. Let's use exploration of outer space as an example: say we find a new planet that we're going to try to land on and build a community around. The first stage of anything we do as humans is exploration, where we try to create words around that thing. We name planets, we name all the different sections, and if we encounter new substances or materials, or new species, we experiment with them and name them. So the first stage is that we name something and create language around it.

The second stage is that we create knowledge. I'm speaking to an anthropologist, so I know you know this better than I do. Once we've created language, we can start to observe and create knowledge from that language: okay, now we know that every time we do X, Y, Z, this other thing happens, and it's repeatable. So now we have knowledge around that.

The next step is causality: understanding why. Why does that happen? Sometimes we can't know. Sometimes, as we dig deeper, we can start to understand why that thing is happening the way it's happening, which gives us a sense that if we change this or that about it, a totally different thing will happen. Or we hypothesize and we test it. So we're going from language to knowledge to understanding.

Once we understand something and have a degree of expertise, we can operationalize that expertise, and that's when we start creating real data. And once we've operationalized our expertise, created processes and standards, and systematized understanding and repeatability, now we're getting to the sphere where machines are super useful. The next phase is to repeatedly do that thing as precisely and as quickly as possible, with as few errors as possible, without getting bored. Humans are not as good on that side of the equation; machines are fabulous there. On the exploration side, we can absolutely use machines to explore dangerous areas, collect samples, and all sorts of things.
But in terms of actually creating a mental model for a brand new thing that has never existed, and creating the language and the knowledge and the understanding and so on, machines become a partner in execution, but you need the human brain, at this point and for the vast foreseeable future, to actually go through that process of moving from language to knowledge. That's why language models are just the first stage. Language models don't have knowledge or understanding; we as humans do. There are knowledge graphs and causal graphs, so there are ways you can try to recreate that. But really the ultimate goal is to make operationalized work able to be done repetitively and faster, which elevates us as humans to turn and focus on the creative work and the exploration that we're best at.

You mentioned earlier how AI will replace jobs. I would argue that leaders will replace jobs if they can. Leaders who view people as expendable would have replaced them pre-AI the moment they had the opportunity or the excuse, and they'll do it now and blame it on AI. And yet the tech sector saw insane amounts of layoffs earlier this year, due not to AI but to management and leadership decisions. So I think it'll be easy for people to look and say, okay, AI is coming for jobs. I would argue it's really up to the leader. From the point that it's determined, we're going to leverage this technology to do this repetitive thing that we currently have humans doing (and if it's not repetitive, we can't really use AI for it, which is why I emphasize the repetitive piece), they have six months at a minimum, but more likely a year and a half to two years on average, before the technology is developed and tuned to the point that it can actually be put into production and the work that currently makes up those jobs could go away or be offloaded.

That means there's up to two years to create a people strategy, or to create a new line of business with more of that exploration, where you're going to need people thinking about it or doing manual work related to the new product line you're exploring or creating. If leaders at that point of decision have an expansionist mindset, asking what we are going to do to add to top-line revenue for our organization and honor our people and the tribal knowledge we have in the culture, that's a decision they can make. It's either through fear or through a lack of attention that a leader would make a technology decision without thinking that through. Or, coming back to our original conversation, it could be: by eliminating those jobs, we're saving X amount of dollars in cost, and therefore I'm going to get my bonus or my promotion. And I would argue that you can get that bonus and promotion by expanding top-line revenue just as effectively, if not more effectively, than by cutting costs.
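As a purely illustrative aside on the knowledge-graph point above: one common pattern is to keep verifiable facts in an explicit store of (subject, relation, object) triples, outside the language model, and let the model handle only the phrasing. The toy sketch below assumes made-up facts and names; it is not drawn from Brian's book or any specific product.

```python
# Toy triple store illustrating the split between a language model's
# fluency and explicitly stored knowledge. All facts are made up.
from collections import defaultdict

class KnowledgeGraph:
    """Minimal (subject, relation, object) store: facts live here,
    not inside a model's weights."""

    def __init__(self) -> None:
        self._facts: dict[tuple[str, str], set[str]] = defaultdict(set)

    def add(self, subject: str, relation: str, obj: str) -> None:
        # Store one fact as a triple, keyed by (subject, relation).
        self._facts[(subject, relation)].add(obj)

    def query(self, subject: str, relation: str) -> set[str]:
        # Return every object recorded for this subject and relation.
        return self._facts[(subject, relation)]

kg = KnowledgeGraph()
kg.add("press brake", "requires_check", "back gauge alignment")
kg.add("press brake", "requires_check", "hydraulic pressure")
kg.add("bandsaw", "requires_check", "blade tension")

# A hypothetical assistant would answer from the graph, so the facts
# stay auditable even if a language model writes the final sentence.
print(kg.query("press brake", "requires_check"))
```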

 

Adam: 

 

You just gave me hope for the future of all work. That's a great way to think about it. No, but that's a really powerful point. I hadn't conceptualized it that way, so that's actually a really powerful rendering: as much as we tool-worship the tech for doing something new, we also blame it for making the hard decisions, when really it comes down to leadership's vision, or lack thereof, about serving the bottom line versus the top line. That's a powerful reminder.

 

Brian:

 

Yeah, AI never knocks on the door of a factory and then just comes in and starts doing work, right?

 

Adam: 

 

No. That's cool. Maybe as a final wrap-up question: is there anything in the process of writing this book, or the consulting work you've done around it, that has unexpectedly challenged your assumptions about how business works, how technology works, or even how consulting works? Something that shaped your thinking in a new way as a result of doing this work?

 

Brian:

 

Oh man, that's a good question. There are so many. Yeah, hit me with the hardest one at the end. If anything, I would say that I'd been asking so many questions for years, and been asked those questions by leaders, and the process of writing the book was the chrysalis period, to borrow from the butterfly. And I don't mean me being the butterfly, but these ideas that had been out there for me sort of liquefied and went into a cocoon, so to speak, and when they came out, I realized things from an ecosystem perspective and a leadership perspective.

Every time I talk to somebody like Kim Scott, who wrote Radical Candor, or any leader I've had the opportunity to connect with who has come out with something they feel pretty proud of, they usually say: when I say it, it's so obvious to everyone that it doesn't seem that powerful. But getting to the point where you're seeing it clearly enough to say it, and say it simply, is a really, really thorough process.

Probably the biggest one for me comes from my background. Like we talked about before, in chess there are very clear rules. In music, someone wrote the music and you're following a conductor. And then in business, in the corporate sphere I entered at Accenture and AWS and Microsoft, there are all these rules and structures placed around you. The biggest shift, and it seems so easy and simple now that I'm saying it out loud, is realizing that all the rules we run by, the idea of best practices, the idea that we have to do something because that's the rule or that's how we work, are just things someone named. We're doing digital transformation because that's the current highest good that anyone has named. So the idea is: I could name a higher good. You could name a higher good and say, I want to create that future, and I'm going to tell everybody who will listen about it.

And to the leaders at these corporate organizations, or in government institutions, or wherever it might be: whether you know it or not, you're either backing into the future, creating the future without really being intentional about it, or you're being decisive, looking forward, choosing a future, and building toward it. Those are the only two options if you're a leader. So this idea of being able to say: hey, you're a C-suite executive at one of the largest companies in the world, and you have a hand right now in creating the future. Sure, you can just optimize the existing value proposition, serve your three-year tenure, and move on to your next opportunity. That's fine; that's the current pattern. Or you could look out and be bold. To your point in your feedback to my last answer, that's been the most empowering realization for me: we get to decide. Someone once stood up and said, the human has been first, but going forward it's time for the system to be first.
And now I've stood up and I'm saying, in the past, the system has been first. In the future, the human must be first. And the idea that I and we can choose that and say that, and then work to build that together, I think that's probably the one that has been the most paradigm shifting for me.

 

Adam:

 

And that's exciting. That's cool. And wonderfully said, too. I very much appreciate that idea: you say something succinctly, and it's like, okay, well, give me 35 minutes to write it down, and then I'll get to my one sentence, you know.

 

Brian:

 

Exactly. Mark Twain wrote a letter, right? You might have heard this one: he wrote a letter and said, I'm sorry this letter is so long, I didn't have time to make it shorter.

 

Adam: 

 

Awesome. Brian, this has been such a fun conversation. I really appreciate your perspective, storytelling, and business acumen. It's been really great to learn from your wisdom here, and I also really enjoyed your book, so I'm excited to get it into the hands of listeners. Keep doing the good work; this is great stuff.

 

Brian:

 

I appreciate it. Thank you so much. It's been great to join you and I've enjoyed the conversation as well.

 

Adam: 

 

And that wraps our episode on autonomous transformation with the insightful Brian Evergreen. As we reflect on our conversation today, we've been treated to a wealth of thought-provoking ideas, spanning the nuances of corporate strategy to the evolving landscape of work in society. Brian's perspective brought a fusion of art, strategic vision, and technological advancement in ways that challenged me to consider more deeply how we navigate these transitions and the impact they can have on our lives. I want to extend my gratitude to Brian for sharing his expertise and engaging in a conversation that I think has left us all with a greater understanding of the complex interplay between humans and machines, strategy and purpose.

The lessons we've dug through today apply not only within the corporate realm but carry significance for each of us in our daily lives and in the wider societal fabric. So as you go about your day, I invite you to sit with these ideas. How might the shifts we've discussed mirror or impact your own professional journey, or the broader societal and global landscapes you find yourself in? What insights or ideas resonated with you, and how might they influence your outlook on leadership, technology, and purposeful work?

I really appreciate the invaluable support of our listeners and watchers. If today's conversation has piqued your interest and you're eager to dive in further, you can check out Brian's book, linked below in our show notes to the AnthroCurious bookstore, where you can support the podcast, the author, and independent bookstores all in one go. It's a win-win-win. And as always, I want to hear from you, so please share your thoughts, feedback, and suggestions for future episodes by reaching out on the website or on social media. Your insights are vital to the ongoing mission of exploring the diverse, interconnected worlds of anthropology and its impact on our lives.

As a wrap-up, I invite you to take one small step to support the growth of the podcast: subscribe here on YouTube or on your favorite podcast app, leave us a review or a comment, or share the episode with someone you think will love it. It really helps. And of course, don't forget to check out the AnthroCurious Substack if you want written versions of some of this content and a way to dive deeper. Thanks once again for being a part of the community. Until next time, keep your curiosity thriving and your spirit open to the ever-evolving world around you. I'm your host, Adam Gamwell, and this is This Anthro Life.

 


Brian Evergreen

Author and Advisor

Brian Evergreen is best known for his work advising FORTUNE 500 executives on artificial intelligence strategy. Building on his experiences working at Accenture, Amazon Web Services, and Microsoft, Brian guest lectures at Purdue University and the Kellogg School of Management and is a Senior Fellow in the Economy, Strategy, and Finance Center of The Conference Board, sharing the unconventional and innovative methods and frameworks he developed leading and advising Digital Transformation initiatives at many of the world's most valuable companies.

Brian is the founder of The Profitable Good Company, a leadership advisory company that partners with and equips leaders to harness the economic and societal potential of technology by creating a more human future in the era of artificial intelligence.