episode 224
The Dangers of Letting AI Speak for You, and Why Selling AI Might be Easier than Selling Dashboards
There’s an easy button for hard conversations now, and it’s dangerously good. You’ve got something complicated to say. It needs nuance. It needs empathy. It probably needs a little courage. The AI will draft the whole thing in seconds. It sounds smart. It sounds reasonable. You skim it. You send it. And most of the time, nothing bad happens. The problem is that the time it does go bad is the exact situation where you thought you were being thoughtful. This week’s Raw Data walks straight through one of those moments, from both sides of the exchange, and it’s a reminder that outsourcing the structure of your thinking is not the same thing as being clear.
Then there’s the part that’s almost more interesting. Thirteen years ago, the first real client engagement couldn’t get traction around dashboards. The connection between “this is my business” and “data should change how I run it” just didn’t stick. Same people, same company, different conversation recently around AI. Immediate traction. Leaning forward. Connecting dots in real time. That difference isn’t about better slides or better storytelling. Dashboards improved a slice of the business. AI shows up in the messy motion of the whole thing. In workflows. In manual processes. In strategic questions leaders don’t have time to chase down. That shift in surface area changes everything.
AI isn’t a toy and it isn’t a ghostwriter. It’s leverage. Real leverage. The kind that can remove friction across an organization faster than dashboards ever could. But leverage only works if you’re still the one steering. That’s really what this episode comes down to. Listen in, then decide where AI belongs in your workflow and where it needs to stay out of your head.
Episode Transcript
Announcer (00:04): Welcome to Raw Data with Rob Collie, real talk about AI and data for business impact. And now, CEO and founder of P3 Adaptive, your host, Rob Collie.
Rob Collie (00:18): All right. Well, Justin, welcome back, as usual. We've got a few things we want to talk about today. But first, we got to address a real burgeoning international issue.
Justin Mannhardt (00:31): There's a lot of those.
Rob Collie (00:33): This one almost burgeons on human trafficking. I mean, that's a joke. I mean, you don't want to make light of human trafficking. But let's just say that there are people being traded as international commodities right now, right under our noses. I'm here to blow the roof off. I'm here to put some sunshine into this problem that the world isn't really paying adequate attention to.
(00:54): But it starts off with me confessing that I've spent a good chunk of the past two nights watching a lot of Olympic ice dancing. Now, there's an old beer commercial in which they say something like, "Hey, if you admit one more thing like that, we're going to take your man card." And a younger version of me, a less savvy version of me, wouldn't have appreciated this at all.
(01:17): I still can't really get into the regular figure skating that much, but the ice dancing is pretty awesome. I think it's mostly because I've had two eras of playing hockey and playing poorly.
Justin Mannhardt (01:29): Okay.
Rob Collie (01:30): The younger me played, and just didn't have a sense of subtlety and nuance as much as I do now. But I've recently, I haven't played in a little while, but I just came off of a several year period of playing hockey again. And watching what these people can do on their skates, unbelievable. I know that the people who jump and twist in the air and then land and don't fall is also absolutely incredible. But I don't really have any frame of reference on that. I absolutely have a frame of reference on what these ice dancers do. Bobbing and weaving and cutting and turning and spinning and making moves that are much more similar to what you would make in an athletic activity, like in a sport.
Justin Mannhardt (02:09): Yeah. Change of direction.
Rob Collie (02:10): I mean, it's just ungodly. So, many, many, many, many moons ago we did an episode in which we laid out the rules of sport, and none of this stuff qualifies as a sport according to those rules. But these people are athletes, for sure. And if they wanted to, let's say they'd never played pickleball or something like that, and you've been playing for a while, and you challenge one of these ice dancers to play pickleball against you, 15 minutes in, they've got it dialed and they are annihilating you. They have control over their body and coordination that is just ...
Justin Mannhardt (02:42): Elite of elite.
Rob Collie (02:44): Right. And so, the respect for this, even though it's ice dancing, so I'm getting it. Jocelyn is loving ice dancing. So, me, loving it with her has been a really good thing. I'm getting good relationship bonding out of this as well.
Justin Mannhardt (02:57): The scoring is interesting, too.
Rob Collie (02:58): It's bananas, isn't it? The real time nature ... Oh, you've been ... We're going to take your man card, too.
Justin Mannhardt (03:03): Yeah.
Rob Collie (03:04): You're on thin ice, man. Oh, I didn't even mean to do that.
Justin Mannhardt (03:07): Ba dum tss.
Rob Collie (03:09): So, here's the dark underbelly of ice dancing where we trade humans like commodities. So, the first thing I start noticing about the US team is that they're all Russians. Well, not all of them, but a lot of the US skaters are Russian-born or born to Russian parents in the United States. The Russians were always really, really, really, really good at skating. If I'm a Russian watching this, I'm watching the Americans stealing my heritage. It's just brutal.
(03:39): But then the flipside is you see all of these skaters skating for other countries, and they're clearly Americans. There's an American skating for Lithuania. There's so many Americans skating for Canada. And what's happened is the US has essentially "imported" all of these Russian skaters who are so good that they've crowded the upper echelons of US skaters out of their own team, and so they need to go find another flag to skate for.
(04:11): And some of them had some tie to these other countries, and some of them absolutely didn't. It's like free agency. Countries are now just offering citizenship to people based on their Olympic athlete potential. A South Korean skating for Hungary, no connection at all, but they're a citizen now. And so, there's this international arbitrage of talent finding its way; the market's becoming more efficient, essentially. I mean, even one of the Russian skaters is named Diana Davis.
Justin Mannhardt (04:44): Oh, wow.
Rob Collie (04:45): Now, she was born in the United States and then moved at age three. Her mom's Russian, her mom took her back to Russia when she was three years old, and her mom has been a long time Russian skating coach.
Justin Mannhardt (04:58): Now, I'm just wondering, there's got to be people whose profession it is in certain athletic circles to help elite talent find their way into the top competition. It's like the transfer portal in college football, man.
Rob Collie (05:17): Yeah.
Justin Mannhardt (05:17): Well, I can't be a starting quarterback here? That's fine. Go somewhere else.
Rob Collie (05:22): Going for that NIL. I don't know. Maybe that's been going on forever and this is just my first time tuning into it. I'm getting a real kick out of the brisk international trade in ice dancers. Wouldn't have expected it. All right. So, transitioning. I had another thought, which is you and I have been doing this podcast together for many, many, many, many episodes, like triple digits, right?
Justin Mannhardt (05:43): Yeah.
Rob Collie (05:44): Imagine what we'd be talking about today if we didn't have AI to talk about.
Justin Mannhardt (05:47): Oh, my gosh.
Rob Collie (05:48): Would've closed up shop by now. We'd have talked all the data that needed to be talked. And so, we want to thank AI for sustaining our interest in this podcast and giving us something to do. And so, we're going to do that again today, obviously.
Justin Mannhardt (06:05): Shocker.
Rob Collie (06:05): Yeah. And you and I, we have planned a couple of things to talk about. So, the first one is a little bit of a departure for us, because we're usually talking about building custom AI solutions, custom systems for business. This goes back to the personal usage of AI that doesn't require all that customization, the off-the-shelf usage. There is a cautionary tale that we want to share with people. Basically, it takes the form of: do not let AI speak for you. As tempting as it is, when you have a big email to write or some difficult and nuanced communication to craft ...
Justin Mannhardt (06:47): Easy button.
Rob Collie (06:49): Yeah.
Justin Mannhardt (06:49): Easy button.
Rob Collie (06:51): It says easy, and then in fine print, it says, "Press here to really, really mess things up."
Justin Mannhardt (06:56): Yeah. Yeah. We've both had our experiences letting AI speak for us, learning this lesson in important ways.
Rob Collie (07:07): I've been on both ends of it, in situations that you're aware of.
Justin Mannhardt (07:11): Yeah.
Rob Collie (07:12): And sometimes, it takes being on both sides of something, a dynamic for it to really crystallize for you. But basically, the problem goes like this. You have something nuanced to communicate to a colleague or a business partner or a customer or ...
Justin Mannhardt (07:28): Employee.
Rob Collie (07:29): And it's going to consume a lot of energy. It needs to be in written form. You think it needs to be in written form anyway, so you have to produce some document or email or slide deck or whatever. So, you reach for the AI and it constructs a whole framework of how to communicate this to the other person right off the bat and then fleshes it out, because it is optimized around pleasing you, the person who's writing it. It's going to advocate for you, the person who's writing it, in ways that you might not, if you were crafting this on your own. One example of this, famously between the two of us, was you had a great idea back in the fall.
Justin Mannhardt (08:10): Great idea.
Rob Collie (08:11): It was a great idea. Capital G, capital I.
Justin Mannhardt (08:15): One of my many famous great ideas.
Rob Collie (08:17): I was on board. I thought, "Heck, yeah. That is a great idea. I'm an enthusiastic participant in this study." We didn't know it was a scientific study at the time, but it was, where you and I were working on something together trying to figure out a division of duties between the two of us. It was mostly just nuanced and complicated, but there was a little bit of like, "How do we get out of each other's way?" And you had an idea like, "Hey, Rob, I've written you a prompt for you to sit down with Claude."
Justin Mannhardt (08:44): I sat down with the AI to get you a prompt.
Rob Collie (08:47): Yes. You sat down with AI to give me a prompt for me to have a conversation with Claude to produce a document for you. And I said, "Oh, heck, yeah."
Justin Mannhardt (08:57): What could go wrong?
Rob Collie (08:58): So, good. This is so nice. You know why I liked it so much? In hindsight, he's pre-thought this for me, and now the AI, the LLM, is going to take me on a journey and I don't really need to work that hard. I get to be a passenger. Any opportunity to turn down the brain a little like that is like, "Oh, it's just such a warm idea." And so, what did I do? I went through this prompt. And after a long back-and-forth interview, it produced a very sizable document.
Justin Mannhardt (09:25): Yeah.
Rob Collie (09:25): I mean, okay, I read it. But the amount of energy I invested in this thing was minimal. And if I had gone through it super, super methodically, thinking, "Is this what I would say?" I wouldn't have produced that document at all. At all. And Justin, we trust, man. He gave me a prompt. I followed the prompt. I'm just following instructions. And here you go, I'll send this back to you. And it freaked you the hell out.
Justin Mannhardt (09:57): Yeah.
Rob Collie (10:00): It was a problem, right?
Justin Mannhardt (10:02): Yeah. It was a problem.
Rob Collie (10:03): Because it advocated in a way, way over the line of what even I would have. I wanted to almost protect myself from that realization, because otherwise I'd have to do this whole thing manually, right?
Justin Mannhardt (10:16): Yeah.
Rob Collie (10:17): Oh, man. It was awful. It was quite a rescue to pull that back once it had gone over. And I know that it freaked you out. It wasn't great.
Justin Mannhardt (10:28): I've had the same thing happen to me, where I've tried to use AI to basically get interviewed to make it easy, and then send something over to somebody else, and had that same thing. What I've realized has changed some of my own personal workflow as a result of this experience in certain places.
Rob Collie (10:44): Yeah. Tell us about that, because you mentioned that to me before we recorded, and I think that's a fascinating reversal.
Justin Mannhardt (10:50): Well, I think what I realized is there is this trick that you play on your brain when you do this. Because you're like, "Oh, I did have cognitive output. I was asked questions. I answered them. I considered my answers." And then, I struggle with this anyways, so then you get basically this wall of text document and you don't read it carefully. You just skim it.
Rob Collie (11:18): You think you're reading it carefully, but you're not.
Justin Mannhardt (11:20): You're not. And you're like, "Oh, good. Let's ship it." What I realized for myself was I skipped all the hard parts. I skipped all the hard parts of actually having conviction about an opinion or having the right level of empathy for the person I'm trying to communicate with, all of these things. And so, there's certain things now where I think I probably even said it on this show, we could probably go find it two years ago in the archives where I said something, "Anytime I'm going to write a document, I start with AI."
Rob Collie (11:54): You don't start blank. Let the AI do the first draft. That was explicitly part of your methodology.
Justin Mannhardt (12:00): Yeah. And I'd give some bullet points or some initial seed or explain what I'm looking for, and then I'd let it go. And I think I was mesmerized by that experience of like, "Wow." Almost thinking, "Yeah. I would've written the same thing, but now I don't have to." It took me maybe longer than I would've liked to come around here, but I realize what I'm getting is that mediocrity of an end state that's not truly representing my opinion, my voice. It's really important to have an opinion about things nowadays, especially with how AI can work. So, now, when I'm writing things, whether it's an article or a communication or a policy, I do the first draft myself.
Rob Collie (12:42): 180 reversal.
Justin Mannhardt (12:44): Writing is something I've struggled with all throughout my life. But I just, I'll let it flow, just stream of consciousness. This is what I'm thinking about. I'll feel like I get it reasonably solid in terms of its content, the opinion, the point of view. Then I'll start using AI to refine it and critique it and find the things to improve. And I've found that more successful getting to the end result of that and feeling like, "No. That's me." That's what I actually think about this situation.
Rob Collie (13:13): When you let AI do the first draft, it's already constructing the outline of the argument, or the outline of the proposal or whatever, in a way that you very often wouldn't. And there's no amount of wordsmithing and sanding off rough edges here and there that's going to change that. You're going to end up just following the path of least resistance and sticking to its approach, which might bake in failure.
(13:45): And even if this process of letting AI speak for you is a win 95% of the time, the 5% where it's a loss is going to be worth a lot. It's going to cause problems that make the other 95% of wins probably not worth it. You've really got to be careful with this.
Justin Mannhardt (14:08): Listen, knowledge work has enough challenge with AI already. But I think there's a certain respect for the craft of opinion that I wouldn't want us to lose. You've probably heard some of the bigger stories, how some of the bigger consulting outfits have been sued by governments because their AI is spouting hallucinations in their research and stuff like that, and it's like, "This is happening."
Rob Collie (14:35): And also in the legal field, citing cases that don't exist, and good Lord.
Justin Mannhardt (14:39): You think about the structure of an opinion or the structure of an argument that needs to be supported with all these things. If AI is doing the first take at whatever that argument or idea or thing you want to convey in a piece of communication, it's building its own construction of that. When you're trying to refine it like, "Oh, that's not quite right," or "That word isn't exactly, I wouldn't use that word." You're actually just poking more holes in the whole thing.
Rob Collie (15:07): Even just fundamentally it not understanding the assignment. Because the way you described the assignment is incomplete to the LLM. You say, "Hey, I have to write this thing to explain or propose this thing." You're leaving out things like, "Oh, and I better not freak them out." You're not constructing the request of the LLM in a manner that says, "There's actually two stakeholders you need to care about here, and I'm only one of them." Which was something that really surprised me about the couples coach that I built for my wife and I, was how good it was at not being my advocate when it shouldn't, because it knows in its system instruction that it is trying to help both of us.
(15:52): So, when I try to get away with something, I'm trying to tell it, "Nah, I think I'm going to skip that. I don't want to tell her that." The agent's like, "Brah, brah, brah, brah, brah. So, I'm going to give you some tough love here, buddy." That's not how it behaves very often when it thinks it's got an audience of one. And it's really bananas that even though it's only talking to me at the time, it's still in its system instruction that it knows there's another party. And most of the time when you're using these off-the-shelf tools, you're not baking that in, and it's not smart enough to do it on its own yet. It should get there. But it's not there yet.
Justin Mannhardt (16:24): I read a post the other day, I'm sure it was something I saw in my LinkedIn scroll. I want to say it was a study of how the quality of the output of an activity where AI was used is very highly correlated with the quality of the input. We're talking about writing and conveying communication. So, if the input is sloppy and high-level or whatever, the output is going to be seemingly well-constructed, well done, but in this zone of mediocrity. These systems are so capable and they know so much about so much. On certain things that I do, I really need to be forced to clarify my position.
Rob Collie (17:13): Right. Crucial question. Your 180 reversal on AI writes the first draft versus you write the first draft, did you make that 180 before or after you gave me the prompt of death?
Justin Mannhardt (17:26): After.
Rob Collie (17:27): Okay. Good. Because if you'd done it beforehand and still did that to me, it'd be like, "You bastard."
Justin Mannhardt (17:34): Yeah. No. Yeah. After. Listen, AI is a really good thought partner. I still maintain that if you're trying to get some clarity on something. But if you're writing for other people, if you want to put out an article or a video or a post or you're communicating with a client about a project, abdicating that is a real risk right now. Unless you've invested in some custom system that understands everything.
Rob Collie (18:01): That can be built, right?
Justin Mannhardt (18:02): Yeah.
Rob Collie (18:03): One that is designed for crucial, high-stakes conversations where people on the other end have feelings, thoughts, concerns, fears, all that kind of stuff. Recently, full circle, I've been on the receiving end of a couple of AI-generated documents that freaked me out and that overrepresented the other person's interest in the same way. However many times that's been successful on the sending end, this one wasn't worth it for them or for me. I mean, it was just really unfortunate. Put this one in the 10 Commandments: "Do not let AI speak for you."
(18:37): In fact, before I started writing the book, I had this cutesy idea, if I'm writing a book about AI, of course, I'm going to use AI to help me write the book. And I even had this idea, one of the intro chapters would be like, "AI helped me write this book." I sat down to write it and I instantly realized, "No. AI is not going to help me write this. It's not even going to write a word of it." Now, it's helping me research things. When I want to know about relative costs of GPUs and CPUs and how many cores they have and all that kind of stuff, I'm not going to go look that up myself. I'm going to send my buddy out to get that information for me.
(19:13): And it's a thought assistant. Not a single word of what I've written has come out of an AI, and it won't. Even though I have systems that are trained to speak like me, to write like me, and they're pretty good at it, I haven't touched them and I'm not going to. I mentioned on last week's podcast, I'm writing a whole chapter right now, one that I'm probably going to cut, about going through the process and owning every word and what I call ruthless rereading.
(19:45): Every sentence in every one of my books was written, rewritten, chopped up, read, ruthlessly criticized. I reread everything as the reader, and that's how I do my emails. That's how I do team announcements and all of that. It's not like I sit down and if it's 1,000 words, I didn't just type 1,000 words. I typed probably 3,000 words. And when I use AI to write something for me, nope, I haven't been organically part of creating it. If I bring that lens to it, I can just iterate with it infinitely, I suppose. But anyway, not going to write the book with it.
Justin Mannhardt (20:24): You're a better man for it.
Rob Collie (20:25): And that's what it's all about.
Justin Mannhardt (20:26): In the 3,000 words though, how many word equivalents are the index finger smashing the backspace key?
Rob Collie (20:33): Oh, many. Now Control-X, that whole sentence, move it up here, restructure the thing beneath it. I mean, there's just ... If you watch a timelapse of something that I write, it would be a really amusing timelapse. Holy cow. It didn't start that way. Changing gears, maybe?
Justin Mannhardt (20:56): Shift. Shift.
Rob Collie (20:57): I mean, we've solved that one. I had a really interesting opportunity yesterday to close a 13-year loop.
Justin Mannhardt (21:05): It's a long time.
Rob Collie (21:06): It's a long time. It's a long loop. So, 13-year cycle. The first client I ever had at this company, after forming the company officially anyway, was a family business that's a wing of my family. They're cousins of mine, and they hired me as an investment. They're like, "Yeah. We believe in you. We'll back you. But we're not just going to put money in your company to get you started. We're going to hire you for something." And I thought, that's a fine deal. Let me earn that seed money, right?
Justin Mannhardt (21:32): Yeah.
Rob Collie (21:33): It wasn't like some big investment. It was just like they paid me for an engagement. I failed them. It just didn't work. The failure was in the human plane. It wasn't a tech problem. We had Power Pivot. We didn't have Power BI. Maybe Power BI would've helped a little bit, because it'd have been more graphical, whatever. But I think the fundamental problem was that I failed to help them connect their business conceptually with the idea of being informed by data and how it might help them.
(22:05): I was telling the team earlier today that one of the benefits we've had at P3 in our history is that most of the customers, most of our clients, have come to us and they self-identify as having already made that connection. And I think most of the world might not have, but we get this selection bias where people are like, "Okay. We understand where ... We've already mapped out our business and found some places in it where, if we had better data, we know we'd be making better decisions."
(22:35): But my cousins didn't hire me for that. They hired me like, "Hey, you're just a wunderkind with data. We've never done any of that. Clearly, you can help us, right?" So, I had to do that for them and I failed to do that for them. I wasn't able to bridge that gap. And I learned a lot of powerful lessons from that. A lot of the things that we've talked about many, many, many times over the years, verb-oriented reports rather than nouns. That's where I first learned that lesson.
(23:04): Things like working forward from the data rather than backwards from their business. These are the lessons that I took away from that. I mean, I definitely learned, I didn't just go, "Oh, well, whatever," that left a mark on me. I let them down, essentially. So, it informed a lot of my future philosophy. It informed a lot of the way I would train people. It informed a lot of how we approach things as a company going forward, et cetera.
(23:28): But that said, 13 years later, even with all that extra wisdom, I think if I sat down with them again today and tried to make that data pitch, I think my chances of success would've gone up from 5% back in the day to like 50% today. They were still far enough away from making that connection on their own that I wouldn't necessarily be able to bridge the gap. I think I'd have a chance, but not necessarily.
(23:56): By contrast, though, yesterday, 13 years later, we had a conversation with the same company, the same people about AI. And you would think that, "Oh, well, AI is way more complicated than dashboards and all of that." So, it's even, "This is really ... If dashboards went over their head and we had a hard time making a connection over that, what's AI going to do?" It turns out they really get it.
Justin Mannhardt (24:25): Why do you think that is?
Rob Collie (24:27): Yeah. I've done a lot of reflection on that. So, I think there's two reasons why. Okay. Three. The first reason is I think we're really good at explaining it. But at the same time, I'm not any better at explaining AI than I was at explaining BI. I do think that, compared to most operations, we are light-years ahead in our ability to explain things. That's a precursor to this. But the other two reasons are the ones that I think are about AI and why AI is easier to understand. Which is, again, not what I expected, but it's proving to be true.
(24:59): So, the first one is something that we have talked about a few times on the podcast, which is that dashboards are relevant to a minority of workflows in a business. Of all the workflows in a business, dashboards aren't going to improve all of them. And then even within the workflows that can be improved, the dashboard only represents a part of that problem. It's also a minority of that workflow. So, you could think of dashboards as addressing a minority of a minority of a business's total workflows, which is small. Whereas AI completely inverts that. It's the majority of workflows, not all of them, but the majority of workflows that can be improved in some way.
(25:41): And then within each workflow, it's actually the majority of that workflow where there's surface area for improvement, where AI can attach. So, we're going from a minority of a minority to a majority of a majority. That just increases, right off the bat, statistically speaking, the chances that we might connect with the client, because there's just broader surface area. In the minority-of-minority world, we're hunting for needles in a haystack, in a way. And this one is like, "Nope. You just throw a dart at the map and you hit something." So, there's that dynamic.
(26:10): But the other dynamic, I think, is even more important. And this is really, really cool and really encouraging in that a business leader is well aware of all the moving parts in their business. They've got that on lockdown. They just know it. They know all of the things that are happening or that need to happen. So, they know about all the workflows, like movement, motion, action is something that they see. So, they can reel that off to you.
(26:39): And imagining how there are places in all of that action where there's a lot of wasted time or wasted effort or manual grunt work, or things that don't get looked at because there just aren't enough hours in the day and not enough people. I think that's a much easier leap for people to make. Getting people to make the leap from "I know there's things going on in my business" to "Where can data improve it?" is actually really hard. That's a big cognitive leap.
(27:14): And this other stuff, how to use AI to do these things, I think, really blows people's minds at the moment. They're not getting that on their own at all. But when we sit down and actually methodically talk to people about their business and try to find places with surface area, it turns out the same guy that I struggled to help bridge that gap to data-driven was absolutely on fire in this meeting yesterday. I mean, he was getting it, at points going two, three steps ahead of us, like, "Oh, okay, okay, okay. Like this?" And I'm like, "Oh, yeah." That meeting yesterday really opened my eyes.
(27:55): And here's one last thing that, again, I still don't even remember if I told you any of this. There was a moment where normally the way to turn the corner on these conversations is to pick something. Let's pick something. We'll get started on that thing. We'll build that thing. And in the course of talking to them about all this stuff, we were approaching that moment in the conversation where that's normally the turn that we would need to make.
(28:16): Instead, I found the words coming out of my mouth saying, "Hey, I think actually our first question is not what we should pick, but whether we should pick one or do two in parallel." It didn't feel like a leap for me to say that based on what we'd been talking about, and it didn't feel pushy. It didn't feel salesy. And they're like, "Yeah. Let's do two." So, we're going to do one that is very heads-down, detailed, operational. There's this awful, awful manual process. There are 150 to 200 printed-out invoices a week. They get scanned in and they're from all kinds of different suppliers. They don't follow anything resembling a consistent format, and they're scanned in.
(28:58): So, they're like pictures of pieces of paper that have been fed through a scanner, and by different people, too. So, by different people, probably with different scanners. I mean, it's just a mess. But at the same time, every one of those invoices has been manually entered by a manager at a store. There's a process; we just got to go through and make sure that whatever was on the written invoice is what got entered in the system. Okay. So, that's down in the weeds, and we think we can automate that. This is not hurting humanity in any way, to take that job back from some poor human being, whoever's the one that draws the short straw for that one.
(29:35): But then at the same time, realizing that the leaders of the company that we were talking to in that meeting wouldn't be getting a lot of reps, wouldn't be getting a lot of exposure to that system. And so, in parallel, the other one we're going to do is almost the opposite end of the spectrum: we're going to build them a chat-with-data system across all of their data. So, it'd be more of a traditional BI engagement, but just to get the Power BI model to a level where an AI chat interface can be connected to it.
Justin Mannhardt (30:08): Right.
Rob Collie (30:09): And this can even become a strategic advisor. It is not just chat with data. It's not just like, "Hey, I can ask any question I want and get any answer," which is amazing. Why not blend that with a strategic researcher that goes and researches their industry on a particular question, whatever they have that day, and cross-references it against how their business is performing? That goes beyond the chat-with-data part, which is the automation over the Power BI models.
(30:35): And/or as you discover something you say, "Hey, that's concerning that that happened and we didn't notice it. Dear Agent, please look for this in the future and let us know when that happens." And the agent goes, "Got it. I'll be checking for that on an ongoing basis from now on." There's so many outgrowths. It's not just chat with data. It's going to blend into so many other things and it's going to change the way he thinks about his business.
Justin Mannhardt (31:02): Yeah. This is the tipping point, I think, for leaders when you're thinking about AI and BI together. And I've talked to a lot of leaders and executives over the past 12 months about this particular scenario. Information recall is not exciting to them. It's cool when you talk to them, you'd be like, "Oh, cool. Look, we can ask, what was this metric?" Or "Tell me about performance." And you get the information recall. They're looking for something deeper. Help me understand why that is. What can I do about it? What are the implications? Much more in line with what you're describing. I think AI will serve that information recall role, for sure. But that's not where people want to go here.
Rob Collie (31:48): It's funny. We've fallen for our own principle, or failed to apply our own principle. When we first saw what chat with data could do, we're like, "This is really obvious to us. Mapping the question you have to the dashboard that answers it, and hoping that it's there, remembering that it's there and all that, that is a really hard problem. That is a friction and an inertia that we tolerated as a species for a long time, and we're not going to tolerate it anymore." You're still going to have dashboards, but needing to map your question to one, that has to go away.
(32:19): But still, I fell for this: we've been saying forever that it's always about improvement. It's not about being informed. That still applies. What's the improvement? So, this thing needs to help you with that, and it needs information to do it. "Hey, what can we do about X?" is an amazing question.
Justin Mannhardt (32:46): Even when you go to a company that is very mature with their analytics and their reporting and their dashboards, those are the conversations in those rooms. It's good that they can get the information recall, but they're asking those questions sitting around ...
Rob Collie (33:01): As they should be.
Justin Mannhardt (33:02): ... the conference room table.
Rob Collie (33:02): As they should be. They shouldn't be thinking as data people. All right. Well, I'm glad that you pointed that out, because that's a correction in my thinking that was long overdue.
Justin Mannhardt (33:11): I think what's really interesting about the story is when you describe the surface area of opportunity. Another thing, and it's more of a hypothesis for me at this point than something I feel is proven true: the surface area is really big, which is really exciting, but also, the variability in the initial investment to try ideas is much tighter than I would've expected, especially compared to the data and BI space that we used to spend all our time in. There, you really knew the difference between a little project and a big project.
(33:48): You describe the back-office processing thing alongside the researcher thing, and you realize the delta between the levels of effort is not as big as you thought it would be with AI. And I think part of that is the AI-powered nature by which you can start building some of these things; it's almost like everything's fast. I think what's neat about this is I believe what's going to win out here is when people find the right spots to land these darts, to use your analogy.
(34:19): Juan Garcia, that's an episode I keep coming back to for this idea of leverage. They didn't waste time figuring out how AI could try and save them a couple basis points in a place nobody cares about. They found the leverage.
Rob Collie (34:32): We had exactly that conversation yesterday with this company I was talking about. We had two different hyper-detailed places to apply it. We were going to have that high-level one, the chat-with-data, strategic-advisor kind of thing, but we also wanted to pick one of those way down in the weeds like that, and we had exactly that leverage conversation about the two. And in fact, the initial instinct was that there was more leverage in the other one. Then we talked about it some more and we're like, "Oh, actually, there's more leverage in the one we chose." So, it was not the first knee-jerk reaction; with a little bit more thinking, it was like, "Oh, yeah, actually."
Justin Mannhardt (35:08): Super cool. And even if this particular company, let's say you encounter some obstacle or you realize, "No. There's not as much leverage there as we thought." I think the institutional knowledge gained of understanding where AI systems can be pushed to then becomes an asset to further change in the organization.
Rob Collie (35:31): Totally. Developing that intuition, that intuition of where AI can help your company is going to be relatively commonplace at some point. I don't know. It's just going to be obvious to people in, I don't know, 18 months, 2 years. I have no idea.
Justin Mannhardt (35:47): It'll be fast.
Rob Collie (35:48): Well, I don't know about that, because people don't move that fast. I mean, I know that examples travel fast. I'm haunted by having seen what Power Pivot, which became Power BI, was going to do to the world. Haunted by my impression of it, which was me sitting there in 2010 saying, "Hey, by 2012, this is going to be ubiquitous. All the old methodologies are going to be gone." Amazing foresight, terrible depth perception. And all these years later, we still see some ungodly percentage of Power BI models that aren't models.
Justin Mannhardt (36:27): We bid on a project to do a transition from Cognos, like last week.
Rob Collie (36:32): If you told me in 2010 that in 2026 the penetration, the true uptake of this stuff, even though it was the number one thing in the world, was still like ... I mean, God. Does it even round to 50%? So, I don't know. But that other dynamic I was just explaining to you, having the same conversation about AI 13 years later that I had about BI, and it just being so much easier, speaks to this one maybe seeing uptake a lot faster.
Justin Mannhardt (37:07): I would predict we'll have similar long-tail effects. I think the widespread understanding of what AI is capable of will come, and as things start to coalesce around methods that work, people will see them. But then I think you've got to step that back down. Who's going to actually act on that, and who's going to have the level of conviction to really change their organization?
Rob Collie (37:35): Don't know. Maybe we'll close on this. I took a little bit of heat on LinkedIn for something I said in our last episode.
Justin Mannhardt (37:43): Did you now?
Rob Collie (37:44): Yeah. I did. I haven't had a chance to circle back and address this heat. Maybe addressing it here is best. So, we were talking last week about enterprise IT and their reluctance in some circles to take on custom code, preferring that it come as close to out-of-the-box as possible. And I think I said something like, "Well, the organizations that get out of their own way there and are willing to just embrace that they're going to have more custom code in their" ... I think I said something like they're going to lap the other companies.
(38:17): The pushback that someone gave me on LinkedIn was like, "Oh, I can't believe you, Rob, have devolved to that fear, uncertainty, and doubt, like fearmongering. The biggest problems for most SMBs right now are environmental conditions. It's the business environment. There's just so much that has nothing to do with AI." So, I wanted to circle back and just briefly say, "Yeah. Okay. Lap is probably the wrong word." And I was talking about enterprise, an explicit point we made in that episode. I don't think the average SMB, when they see these sorts of opportunities, is going to hesitate.
(38:54): If they truly believe that they can execute with some custom code, they're not going to worry about the technical debt so much, and they shouldn't. At the scale of their organization, it isn't going to pile up into the critical mass of technical debt that enterprises do fundamentally, legitimately have to worry about. I don't think I was even suggesting in the episode that Fortune 500 company number one is going to lap Fortune 500 company number two because of a difference in these philosophies.
(39:22): But if you just look at the IT department in isolation for a moment, and their capabilities, one IT department will be lapping the other one in terms of the capabilities that they're delivering. That might not translate into Fortune 500 number one killing off Fortune 500 number two or anything like that. I've always fallen on the side of, "Look, let's move the needle." It's an easy thing for me to overemphasize with a word like lapping. Anyway, I didn't intend to suggest that we're going to be having extinction events in the Fortune 500 if people are slow to get the memo here. But there is going to be a difference in effectiveness that's going to be noticeable.
Justin Mannhardt (40:06): I've been noodling on this idea because the technology itself is progressing at a rate like nothing we've ever experienced. We can draw parallels to the dot-com boom or the BI revolution, but this is a different category. At this point in time, I think it's going to be more important to find the conviction for change than it is to get things right technically or to find the perfect best practice. I think the people that come out ahead in this are the ones that, like both of us, stare it in the face: "We're consultants. We're knowledge workers. We're technology professionals."
(40:50): This stuff is changing a lot for us. And so, the conviction to change is, I think, a really important thing. I think it's even more important than going excruciatingly fast or anything else: just realizing that things are going to work differently in the future, that we're going to start figuring that out, and having the conviction to do it.
Rob Collie (41:13): I like that. I think that's a good note to end on. I give you the rare last word.
Justin Mannhardt (41:17): The last word.