episode 225
Who’s the AI Stakeholder: Leaders or Employees?
Rob and Justin had a plan. Scale Justin’s brain across the entire P3 consulting team. Build an AI agent that bottled up his frameworks, his instincts, the way he navigates AI conversations with clients. In theory, everyone gets smarter overnight. It was a solid idea. The tech worked. The knowledge base was deep. The guardrails were tight. And almost nobody used it. Not because it was broken. Because the team wasn’t waking up thinking, “Man, if only I could channel Justin right now.” That wasn’t the fire in front of them. So instead of feeling like leverage, the agent felt like homework.
And that’s the punchline. You can build something powerful and still miss the mark. No one was losing sleep over not having this tool. No one’s bonus depended on it. So it drifted. Not rejected. Just… optional. That’s a brutal place for a “strategic initiative” to land. The fix isn’t a better tool. It’s sequencing. Define the services, train the team, build the human infrastructure that makes the tool land on a surface that’s ready for it. Every AI project that has worked traces back to the builder being a direct stakeholder. Not adjacent to the problem. In it. Proximity to the pain is doing a lot of work that no amount of clever architecture can replace.
When leaders are the ones excited about AI and employees are the ones expected to use it, you’ve got a stakeholder mismatch. And that mismatch is quietly killing more AI initiatives than any technical failure ever will. If you’re planning a rollout, or already wondering why yours isn’t sticking, this episode is for you. Be sure to subscribe on your favorite podcast platform for new content delivered directly to your inbox.
Episode Transcript
announcer (00:04): Welcome to Raw Data with Rob Collie. Real talk about AI and data for business impact. And now, CEO and founder of P3 Adaptive, your host, Rob Collie.
Rob Collie (00:20): Hi, Justin. Welcome back to another fabulous recording of The Raw Data Podcast.
Justin Mannhardt (00:26): Have you moved the gray baffles?
Rob Collie (00:29): The gray baffles behind me move around all the time. They might even move on their own. No one's in this room except me doing these podcasts once a week. Who knows what's going on in this podcast room? Podcast lair gnomes.
Justin Mannhardt (00:42): I'm going to hire a team of job hoppers. I'll coordinate with Jocelyn, obviously. We're going to start messing with your podcast lair somehow.
Rob Collie (00:51): Sounds great. You can start by cleaning it up. I think she would really be into that. I mean, she does come in here, and occasionally be like, "Wow, you really need to clean that place up." I'm like, "The room doesn't even exist."
Justin Mannhardt (01:01): It's like a portal in Minecraft.
Rob Collie (01:03): It's Schrödinger's mess. Don't open the door and it's not even there. But then she goes, "But I know it's in there." I'm like, ah. So we do have a bit of a short recording window today. We should probably get right to it.
Justin Mannhardt (01:15): Let's jump in. I got fatherly duties pressing down on us today, my man.
Rob Collie (01:19): I understand. So we decided that today we were going to share a couple of lessons learned, valuable lessons, kind of learned the hard way by doing things a less than optimal way about how to think about AI. And they're kind of related. I'm going to tee up one of them right off the bat, which is it's a natural starting point. You and I, we've 100% lived this, where the first things you start building are things for which you are a stakeholder.
Justin Mannhardt (01:53): True.
Rob Collie (01:53): 100% true of the things that I have built, both personally and professionally, I have been a stakeholder, sometimes a very short-list stakeholder, for all of them. The Haystack screening agent that helped us get through all of our job applications in a more efficient manner, well, I was the one doing the manual first review.
Justin Mannhardt (02:16): Yeah, you were the one feeling that pain of doing that. Yeah.
Rob Collie (02:18): So not only did I benefit from it, but I was also really close to the problem. I understood the problem intrinsically, and I did a great job with it. The thing was amazing, and we're about to use it again. And another one was the GRIFF copywriter. Again, I'm a direct stakeholder, but also trying to empower other people on the team so that I wasn't the bottleneck for writing copy, and I wanted to make sure that we were consistent in everything. So again, very up close and personal problem. And then the two personal agents that I've been talking about a lot, the couples' coach that helps Jocelyn and I sort of work on our relationship, as well as the one that helps her manage one of her health conditions. These are right in my face.
Justin Mannhardt (03:01): Yeah. You're very intimate with all the ins and outs and details of all four of those things.
Rob Collie (03:08): And now for you, I think there's a similar vibe, right?
Justin Mannhardt (03:11): Yeah. I would say the thing that I've had the most success with is the AI system that just runs my personal productivity system.
Rob Collie (03:21): Which is a thing to see, but I still haven't gotten a demo.
Justin Mannhardt (03:24): Yeah. Well, we'll have to get there at some point. But again, it's highly tuned to the way I want to do things and the way I want things to work. And I am the direct beneficiary of how this works. And yeah, sure, other people benefit tangentially because maybe I'm more on the ball with things. There's a couple more.
(03:45): I built a chat agent. The intention was to help the team in the pursuit of exploring AI opportunities with their clients. And what's interesting about this is it's sort of maybe a little bit different because I wasn't necessarily super close to that specific activity. I did go out and talk to clients, but I was designing something that really was achieving a goal I had, which is to create conditions where we were having more of these conversations.
Rob Collie (04:18): So that was actually my second example.
Justin Mannhardt (04:20): Oh, great.
Rob Collie (04:21): So I was right there when you were building that agent to help the team talk about, learn about what AI was like. In a way, it was like a way to scale your brain.
Justin Mannhardt (04:34): That was the idea, is to create a vehicle for that type of leverage. Yeah.
Rob Collie (04:38): I mean, I was all in on this idea as much as you were. Now you were really enthusiastically engaged with the problem. For a while there, we lost you.
Justin Mannhardt (04:50): I was in the Batcave for sure.
Rob Collie (04:52): It's one of the ways you know that it's kind of, in a way, like on target.
Justin Mannhardt (04:56): I'm like declining meetings.
Rob Collie (05:00): I'm having a relationship with this agent that I'm developing. But in hindsight, one of my observations about that was that we didn't get great adoption of that agent. And the lens through which to view that is, I think, in hindsight, the consulting team wasn't viewing it as one of their current problems that they had right in front of them on a daily basis, that they weren't thinking like Justin enough. That wasn't like in their daily workflow, like, oh, I need to think like Justin and be talking about AI. That wasn't a current reality for them.
(05:38): You and I both, I know you were the one hands-on building it, but I was as much a part of greenlighting and supporting this as you were, you and I both were kind of like latching onto AI as a way to pull a future business reality forward, kind of neglecting the human plane of all of this, which is that even if we gave them the most magical, amazing, scaling Justin's brain coach of all time, we're just adding something to their to-do list.
Justin Mannhardt (06:14): Yeah. And another thing, I think this lesson is really informative for basically any type of chat-based agent you would go out and build, regardless of its purpose. I mean, this thing had a super rich knowledge base. We told it everything about us and what we sell and all the, oh man, the cosmic power, right? But then when you studied how people were actually using it, they were finding ways to use it that weren't its intended purpose necessarily. They would just say, hey, I need help responding to this issue. And they would get on-brand, on-message support, but it wasn't being used in the way we thought.
(07:00): And I think the other thing that was really interesting is it was, I'll punch up maybe what you said, it was additive. We added something to their plate. We added a to-do for them that wasn't surfacing in a natural workflow or a natural pain point that they were identifying with. And so I think that's a barrier to adoption we didn't really think to flag for ourselves.
Rob Collie (07:29): In hindsight, if we'd done our homework, eaten our vegetables, we would've done the responsible things first. We would've defined very clearly sort of our menu of new sellable services, educated the team at least at the high level as to what they were, established the incentive programs that go with them so that people understand how they benefit, as well as how and why the clients benefit, and then supported that with maybe exactly the same agent that you built.
Justin Mannhardt (08:02): Yeah. I think my point of view about solutioning AI is constantly evolving, basically at the pace that AI is progressing. You're spot on with this. And we used to talk about this with BI and data too. Let's get really clear on the problem or the value we're trying to unlock, and then work backwards from that faucet. I'll bring "faucets first" in here.
(08:29): I was so absorbed in AI is cool, and I'm like coming up with what I still think is like a pretty interesting way to go about building a knowledge base up and putting the guardrails in an agent and how to do all those things, but it was a bit tech forward, and then trying to pull the business along for the ride. And I think as business partners to clients, everybody's showing up with cool demos and frameworks and all of this, but you run this risk of missing the point. Or worse, the people you're trying to serve or equip or empower not seeing the point.
Rob Collie (09:09): One of the principles that's starting to really gel for me is any sort of AI solution, whether it's a chatbot, whatever, like that you give to other people to use, it needs to be better than the alternative. It needs to really, really help them make their life better. And if you're going to put something in front of customers, like an AI chatbot in front of customers, it better be better than the human experience.
(09:43): I think the same's true internally. And I think the agent that you built is better if they already sort of have either the problem or the goal. But we did not set up the problem or the goal ahead of time. You can't just use AI to pull that reality forward and, like, call it solved. I can't imagine, if you and I are falling for this in the early going, I mean, this is going to be happening everywhere. It's just going to be over and over and over again. And like AI is going to be deemed a failure in so many cases, and in these cases it won't be because of hallucination necessarily, it'll be because they just designed the social system wrong.
Justin Mannhardt (10:28): Some of this is a trap that was created by, I fell into this myself on more than one occasion, running around trying to answer the what do we do with AI question? And so I think for a while, I would look externally to what's happening in the world and the market, and all these things and see people struggling to identify use cases. But I'm like, ooh, I got lots of use cases. I got all kinds of use cases. Let me go. Let me go. Let me go. And I think it's just that opportunity to say like, whoa, okay, what's causing our business to struggle? Where are we leaking margin? Where are those problems? And then like, okay, how does AI fit into solving that problem is like just a different combination of clarity.
Rob Collie (11:16): I agree with that approach. I mean, I see it sort of like two different sides of the coin. On the one hand, thinking about it through the lens of how it helps the business is the way. And on the same positive note, people who are in business are good at thinking about things through that lens. So when you're asking them a question like that, that's not going to be an away game for them. That's going to be a home game. At the same time though, they're coming to you asking how should we change, and asking how do we adopt AI? Because they know that that is absolutely still going to be front of mind for them.
Justin Mannhardt (11:56): Yep. That's still the question.
Rob Collie (11:59): So saying, whoa, whoa, whoa, let's go back to before the Earth cooled and let's talk about... No, no, you're the AI people. I don't want to tell you about my business. You can sort of imagine how this is going to be a subversion of their expectations in a way.
Justin Mannhardt (12:17): I view the dynamic maybe just slightly differently. You brought up the home field advantage. I think that actually is a good thing. Because every business leader knows these are the places where we make or lose money, or whatever the case might be. I just think as a person trying to bring AI into the equation, if you can at least have that context to give yourself stable footing for why you're recommending or building what you're building, and having that clarity of how the people are going to actually use something and what it's really going to do for them. Which I didn't have in this example, right? I had this hope that everybody would just see it the way I was seeing it.
(12:59): A lot of companies, and you hear about stories in the news, they're just saying things like, "Oh, we're empowering everybody and we're giving them this AI and that AI." And then they're coming back around six months, and be like, "Well, we're not really sure what it's doing for us." The stories aren't connected. The human experience of the people trying to use these things isn't connected with what they're being provided in a clear way.
(13:23): I don't think we're the type that would show up and be like, "Let's not worry about AI. Let's talk about strategy." But it is sort of like clarity of I understand the human plane level of change that would be beneficial here, and does what I'm doing match that? Because we've talked about on the podcast, there's lots of different ways to use AI. There's chat AI, there's headless AI, there's agents running around in the background. There's all kinds of stuff. And then there's AI to build a middleware solution to a problem, or replace a little app, right? It's sort of exciting now because AI is just so capable of so many different things. World's your oyster. What are we trying to effect, change-wise, here? The way I'd describe the missed first step there is success was simply building this thing.
Rob Collie (14:17): You've heard me talk about the school of fish defense. Fish that can be eaten by other fish tend to school together as a defense mechanism, which seems counterintuitive. I'm a predator. Wouldn't I want them all to group together? So I just swim through the cloud of them and say, "Om, nom, nom." But it turns out that they're nimble enough that I, the predator, have to maintain sort of like radar lock on one of the fish long enough to complete the transaction. I have to adjust my course a few times to catch the fish that's trying to evade me. And in the course of all of these fish evading me all at once, and flitting across my visual field, it distracts me, and I can't stay focused on any one fish long enough to follow through.
(15:06): One of the downsides of the approach you're describing is that there are too many places to apply it. It's like just even getting off the starting line. Again, this is just reality. It's sort of like this other problem. I've actually seen this with one of the companies we're talking to, just sort of like getting through the process of picking something to get started on.
(15:27): Trust me, it's good for us that the addressable surface area of a business that we can now touch is so much larger. It's a majority of a majority instead of a minority of a minority. Yeah, two thumbs up, good problem to have. But it does bring this other downside, which is now, it used to be like one or two fish swimming along the ocean floor, and those were the ones we were going to go try to grab. Now it's like the whole school, the whole business. We have to develop new muscles there.
Justin Mannhardt (15:56): I do think that is something companies are struggling with: just a surplus of opportunity space.
Rob Collie (16:04): Well, and also, I still think really just not even understanding how it all works. AI is still just this giant genie mystery. I say that relatively confidently because the confident and clear picture that's developed in my head, and it's one that you share, I can kind of look at it and inspect it and go, "Yeah, this was not easily won, this clear picture." I can feel it having been extracted from the noise.
(16:32): And I have become relatively calibrated over the years, saying, "Okay, this is one of those cases where something that seems simple to me now, I do remember how unclear it was to me." And that distance between the two, I'm as incentivized as they come to develop this picture, and I've got a tech background, I've got all these sorts of things. If this was a non-trivial door to walk through for me, I should expect that the world is going to lag a bit. So on top of all of this, it's like, what is this mysterious thing?
Justin Mannhardt (17:07): And I do think experiences are really necessary. What I mean by this is your sort of first visceral experience of awakening to what's happening, and like that personal hands-on stakeholder experience we talked about at the beginning. I think it's really important for people because it sort of is the thing that lights you up and helps you have your aha moment or gets it to click for you.
(17:33): And we've been watching people at P3 have those moments over the past several months, and in different situations for different reasons, different projects. It's really cool. And I think that's the benefit of being able to show up to a client and have things like examples and ideas and demos or products because they start those conversations. A high percentage of conversations I've had with clients and prospective clients, we'll talk about a solution to this idea, like we have a demo or something. And 30 minutes later, we're talking about five or six other things and now they're engaged.
(18:10): So they do serve a purpose in closing that gap. I think the key is like, can you find the energy where these things start to align, and you've got likelihood of buy-in and adoption and a real change for the people that would use something. Easier said than done.
Rob Collie (18:26): It just keeps coming back to me, at the end of the movie Needful Things, the devil is talking to Ed Harris. And he's like telling them, he's just like laying out his algorithm, this is what I do. I come to town, I've been doing this for centuries. I come to town and I stir up trouble. I give them what their hearts desire, and everyone goes after each other. But in the end, I always give them weapons. That's how it always ends every time.
(18:52): I just kind of feel like I think I'm slowly easing my wife towards Claude Code. You know how your personal productivity system, you've not consumerized it beyond the Claude Code terminal yet, and you don't really need to. You're not trying to stand up someone else with this tool. So you get to the point where your Claude Code project has enough functions that it can call, you just sit down and it plans your day for you. You don't need to turn it into an iOS app that has all of these capabilities. That is actually a pretty significant lift. That last mile of consumerizing it is a long mile.
Justin Mannhardt (19:39): I thought about that at the beginning of this journey when I started building things with Claude Code because I had used different SaaS applications for to-do lists or projects or whatever. And I thought, oh, I'll go build an application that works exactly the way I want it to. Because I was always duct taping or having a little workaround here and there.
(19:59): But then yeah, there was a lot of inertia in, like, the application layer crap. I might have intentions to open source the goods here as an example of, hey, here's how you can build something for yourself. I don't need a UI. I do have a UI layer, in a way, back in Notion, because Claude and I both use Notion for all the things that are behind it. I make Claude do most of the things. But then when I need to go in and do something, I have a place that's human friendly for me to go and do that.
Rob Collie (20:33): Yeah. Yeah. So I did do that long, slogging last mile for the couples' coach.
Justin Mannhardt (20:42): You have an iOS app, right?
Rob Collie (20:44): iOS app. The LLM coach that we talk to has access to all of the tools that it needs to do all the things that it does. But the health researcher is maintaining a database of hard data of symptoms, but also like a journal of conversations, and even hypotheses of like how we might do better and things like that. It's a lot squishier. I just frankly haven't had time to complete the consumer experience of it. But like it's a bigger lift than the couples' coach, which was itself a pretty big lift to get it to the point where it was consumer friendly. I have an iOS app for it now, and it does have an LLM coach and has all kinds of things in it. But it's just not there yet. And so every time she engages with it, it fails her in some way.
(21:30): And so I see her all the time asking questions of ChatGPT about her health. Part of me dies every time I see this because I know that ChatGPT doesn't have access to this database of everything that's been going on with her. Her entire history is available to, quote, unquote, "my thing." ChatGPT doesn't got that. But I got her over the fear in the last 24 hours, and she's a technical person. Still, I didn't care for the Claude Code terminal when I first sat down with it.
(22:00): She's been having long chats with my Claude Code terminal because, again, it has access to everything. It's not going to mess things up in the way that the consumer versions of this currently do. And now I'm kind of hinting at her like the devil in Needful Things. I'm like, "You know what? You can start adding features to the iOS app. And Jocelyn, I'm not trying to make this your problem. I'm not trying to get out of work here. I'm trying to give you a greater sense of ownership. You're going to be more invested in these things."
Justin Mannhardt (22:31): Wow. We will follow this journey closely.
Rob Collie (22:36): Yes. Lots more to talk about. I'm sure we've got all kinds of things in the hopper for next week. Until then.
Justin Mannhardt (22:42): I'll see you on the flip side.