My New Clarity and Confidence About AI

Rob Collie

Founder and CEO

Justin Mannhardt

Chief Customer Officer


It’s time to dive back into the wild world of AI with Rob Collie. Get ready to cut through the hype and uncover the real deal on artificial intelligence.

In today’s episode, Rob examines the varied faces of AI, from its potential to revolutionize industries with breakthrough innovations to its ability to streamline our daily activities and make routine tasks more efficient. Have you ever wondered how AI might be influencing your life without you even noticing? Rob brings these abstract concepts down to earth. He provides a fresh perspective on how AI operates silently yet significantly in the background of our daily interactions and decision-making processes.

You’ll leave this episode with a clearer understanding of how to interact with AI effectively, recognizing its benefits while being mindful of its limitations. Rob’s insights will help you navigate the AI landscape with confidence, armed with the knowledge to identify genuine opportunities and avoid common traps.

Don’t miss out on your weekly dose of tech reality! Subscribe to Raw Data on your favorite podcast platform. Stay ahead of the curve and discover how technology is reshaping our world!

Episode Transcript

Rob (00:00): Hello friends. I'd like to start this week's episode with a story that I heard, and the story goes like this: there's a new cell phone tower that's been constructed on a hilltop, and it's pretty close to a popular neighborhood hiking trail. And a news reporter goes out there on a weekend to interview hikers on this trail, and get their opinions about the new cell tower. One person he talks to says, "Oh, yeah, I really am enjoying and benefiting from the improved cell reception." A second person complains that their headaches have gotten worse recently, and a third person says, "Hey, it's no big deal, but I get this buzzing in my head when I get too close to it." And then later, when the hikers are all gone, the reporter concludes their segment by turning to the camera and saying, "I wonder how all of these opinions will change next week when they actually connect the power to this thing, and turn it on."

(00:55): And in a way, this story reminds me of where we're at with AI today. There are so many confident takes out there, and they're all just a bit premature, and they're driven by emotion, and our human need to force narratives over limited information. In that vein, the takes on AI these days tend to fall into one of three buckets. They're all believable, they all contain an element of truth, but taken individually, they all contradict each other, and that leaves us with a confusing picture. But I think I've actually reached a bit of a minor breakthrough, maybe even a significant breakthrough, in my own view of these conflicting takes, and I committed myself to recording a podcast about it as a means of forcing myself to crystallize it. I'm going to identify the three categories, the three buckets, into which I think most takes on AI, most things you read or encounter, most opinions, fall.

(01:51): I'm going to identify those buckets, and then one by one I'm going to walk through them, and explain what I think is both true, and false about each one. And in the process, the picture that emerges is of why things are so confusing, and even better, a way in which they can become much less confusing for all of us. All right, so here are the three categories. Category one are the takes that basically say, "Hey, generative AI is very different from all previous tech revolutions. It is a true disruptor, the most uniquely disruptive thing we've ever encountered, and nothing is going to be the same again." The second class of take is more along the lines of, "Oh, come on. It's the same as always. It's just another tool that enhances productivity. It's not going to replace much of anything you're doing already. It's just going to take some of the grunt work out." The third kind of take is even more skeptical, which is that all of this AI stuff is just way overblown. It's a hype bubble, and it's going to pop.

(02:53): Let's dive in with that first category of take. There's actually two separate opinions hiding in this same take. The first opinion is that generative AI is truly different from other tech developments, and the second half is that nothing in the professional world, nothing in society, nothing's going to be the same as a result. So, let's take that first half first. I believe this is true. I believe that generative AI actually is different from other tech revolutions, and tech developments, and tech breakthroughs that I've experienced. I mean, it's at least as big as, probably on par with, when the internet first became a thing. And the reason why I think generative AI is a qualitatively different type of breakthrough than other technical revolutions that we've experienced in my time is that it's the first time we've had a computer interface that not only understands freeform human input, truly understands it, but can then respond in kind. It can respond in a human manner. Now, we've had plenty of attempts at this sort of interface over the years.

(03:59): We've certainly had many, many, many, many attempts at what we called natural language interfaces, where you can ask a question of something in whatever your natural language is, like English, or whatever, and they've always fallen pretty short. And their ability to respond has also been incredibly minimal, and generative AI has changed all of that. In many cases, it's indistinguishable from a human being on the other end of the line, if this were an instant message conversation you were having with one of these systems like Copilot, or ChatGPT, or whatever. In fact, it's kind of funny that oftentimes the only way that we would know for certain that it's not a human on the other end is how quickly it can respond, and how much knowledge it has access to. Kind of ironically, the number one way that we can tell these things are not human is because they appear superhuman. That is incredibly significant. We now have a human-computer interface that is fundamentally unlike any other human-computer interface, or interaction, that we've ever had before. That is a very, very, very fundamental change.

(05:09): And much like the story where they haven't plugged in the cell tower yet, we're really at the very earliest stages of finding out what this new type of interface can do, and what it's going to do to software, and what it's going to do to technology, and what it's going to do to business. So, if you're telling yourself that these generative AI tools are no big deal, I really don't think so. It really does change the nature of human-computer interaction forever, and this particular genie is never going back in that lamp. But the second part of that take, the part that says nothing is going to be the same, that part of this most dramatic of the three categories of takes warrants some healthy, healthy skepticism. First of all, the current trajectory of these generative AI tools limits their usefulness to places where hallucination isn't a concern. Now, we did a whole episode on hallucination, and why hallucination is not just a problem in AI, but probably the problem. Go look that episode up.

(06:12): But the short version is that hallucination is a fancy word for saying AI makes mistakes. Of course, one of the problems when an AI makes a mistake is that we're kind of relying on it because of its superhuman capabilities. So, when it's wrong, how do we know it's wrong? Now, my current opinion is that the generative AI tools are never going to solve the hallucination problem. And the reason is that these generative AI tools, on their current trajectory anyway, are like a form of computational intuition rather than reasoning. These things are going to bluff when they don't know the answer, because in a sense they're always bluffing, and because they're always bluffing from intuition, they aren't going to be reliable witnesses when you ask them to explain themselves and justify their recommendations. Now, for a funny example of hallucination and bluffing: just today, I asked ChatGPT how much volume is occupied by five pounds of fat.

(07:12): My wife and I were marveling at how differently clothes can fit with just a five-pound change in our weight. And ChatGPT happily told me that five pounds of fat occupy three gallons, so like 12 liters. And this just blew my mind, and it blew my mind so much, in fact, that I just went ahead and posted it on Facebook saying, "Y'all, I just found out that, oh, my god, five pounds of fat is three gallons." Turns out that's not even true. It's not even a little true. People said things like, "I don't know, you might want to double check your math there, Rob." Five pounds of fat is actually less than a single gallon. So, while writing a podcast script about AI, and intuition, and hallucination, I spread disinformation without verifying it. Now imagine if the stakes had been higher. Yikes. So, the hallucination problem is one of two really big limiters on the practicality, and viability, and applicability of generative AI. And I'm going to circle back to these limitations when I wrap up later in this episode. But let's move on to the second limitation.
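(For the record, the back-of-the-envelope math, assuming the commonly cited density of body fat of roughly 0.9 kilograms per liter: 5 lb ≈ 2.27 kg, and 2.27 kg ÷ 0.9 kg/L ≈ 2.5 liters, which is about 0.67 US gallons. Two-thirds of a gallon, not three gallons, wrong by more than a factor of four.)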

(08:14): There are cases, a lot of cases actually, where these AI tools simply don't have the information. They don't have the context at all to answer your question. So, for grins, I deliberately asked ChatGPT a question that I knew it wouldn't have enough information to answer. I asked it, how much should we at P3 Adaptive charge our clients for our services? What should our rate structure be? This is ridiculous. Of course it's not going to know the answer. Now, to its credit, in this case it was able to recognize that it didn't even have the opportunity to apply its computational intuition. It didn't even have the opportunity to bluff. So, that's kind of nice. And so instead of telling me how much we should charge, it gave me a skin-deep summary of the factors that feed into how much a consulting company should charge. It didn't ask me follow-up questions like a human expert would. A human expert, when asked that same question, how much should we charge?

(09:11): They would know which pieces of information are most relevant for me to provide in order for them to formulate an answer for me. Whereas the generative AI kind of needs to be trained on everything. It would need to be like a digital fly on the wall at our firm, from every sales call to every last little client interaction, and then it would boil that ocean back down into what's important out of all of that noise. But as impossible as that is, of course, having it be a digital fly on the wall for every single possible customer touch point, even that wouldn't be enough. It would actually need to watch many firms like ours, because even the entirety of our company's existence is still just one sample, and not enough training data. So, the upshot of these twin problems, the hallucination problem and the insufficient context, insufficient training data problem, those two boundaries, those barriers, they limit the places where generative AI is going to be useful.

(10:10): So, the second part of that take, that nothing is going to be the same, is pretty much false, because there are definitely cases it doesn't affect, cases where it can't be used, where it won't be useful. Now, like I said, I'm going to tie all this back together at the end after I get through all three of these categories of takes. So, that's category one. The first category of take is, guess what? Partly true, partly false, which is why these things are so complicated, so hard to navigate. There's a shred of truth, and a shred of untruth, in those takes. The second category of take is that it's just a productivity tool that kind of replaces nothing. It doesn't replace anything that you do; it's just going to make certain things faster, easier, or cheaper, whatever. Well, this take is mixed in its accuracy. Even though the hallucination and context problems that I talked about with take one do significantly restrict the cases where generative AI is useful, the places that are still left, the ones not ruled out by those problems, are still pretty massive.

(11:14): Just as one example, consider the notion of a call center, like a help desk. Take the freeform human interface that generative AI is the first to get right, and combine that with the fact that in most help desk situations, all of the relevant data, all the relevant information, is something the generative AI systems can be trained on. Think about how these call centers work: if you're at a call center, say for an insurance company, the person working in that call center did not need to grow up from age eight working in insurance. They might've just been assigned to this call center, hired there just a few months ago, and trained up on these flow charts and everything, in much the same way that a generative AI can be trained up, except much faster, and much more comprehensively. So, there's the context, right? It absolutely will have sufficient context, and so it sort of clears that second bar that I was talking about.

(12:07): But returning to the hallucination side of things, even human help desk agents are a pretty low performance bar to clear. Human help desk agents are not anything close to a hundred percent reliable. We've all had plenty of examples of cases where we got bad advice, or got handled improperly. So, on the hallucination problem, all the AI has to be is as good as, or arguably even better than, at least the first line of human call center reps. And think about the combination of those factors: the sufficient availability of context, the relatively low bar for hallucination in terms of what the accuracy rate needs to be, and also, honestly, the narrowness of the domain that these things are trained on, which significantly decreases the incidence rate of hallucination. And again, the consequences of a hallucination aren't as bad in this situation because, again, humans also quote, unquote hallucinate in these cases. The call center situation, that whole scenario, is kind of a game over sort of thing. A massive reduction in human workforce is underway in that space, and it is still gaining speed.

(13:16): Now, another example that I want to talk about briefly, and then circle back to again later, is code. Writing code, programming, has some characteristics that make it resistant, in a way, to the hallucination problem. It makes the hallucination problem less of an issue, believe it or not, for writing all kinds of different computer code. One reason why hallucination isn't such a big problem for code is that code is so structured; it follows really rigid formal rules, and it's also very, very focused in its nature. So, the chances of a generative AI drifting off the mark in its answer in such a formal, logically constrained space are quite reduced compared to more general questions like, what should our pricing be? In some very real sense, code writing is like an ideal problem for a computer, for a generative AI system, to solve, and it's no accident that the first place where we got the term Copilot was from GitHub, another Microsoft division. GitHub Copilot has been quietly super effective for multiple years now, predating the ChatGPT revolution, and very heavily used, and used to great effect.

(14:33): Such a powerful example of success, in fact, that Microsoft decided to take the word Copilot and have it be the name for all of their generative AI systems. Good branding, not necessarily the same thing, but definitely good branding. But another crucial point about code with regard to the hallucination problem is that code can be tested completely independently of the generative AI system that made it. So, you can take the code that comes out of a generative AI system, almost like turning the generative AI system off, if you will, take the code over to your other system, and test it out in that sandbox. You can kick the tires. So, for example, if generative AI were to write you a DAX measure, like a Power BI DAX measure, you could test it out in different reports, at different levels of detail, at specific conditions like end of month, basically all the ways that you would test a DAX measure that you yourself wrote. (And formulas are code; that's well documented on multiple episodes of the show.)
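To make that sandbox idea concrete, here's a minimal sketch, written in Python rather than DAX purely for illustration. The function stands in for a hypothetical AI-generated formula, and the test cases are numbers a human already understands; the point is that the validation runs with the AI completely out of the loop:

```python
# Minimal sketch: validating AI-generated code with the AI out of the loop.
# `year_over_year_growth` stands in for a formula an AI wrote for us; the
# checks below are hand-picked by a human, using numbers we already understand.

def year_over_year_growth(current: float, prior: float) -> float:
    """Hypothetical AI-generated formula: year-over-year growth as a fraction."""
    if prior == 0:
        return float("nan")  # no meaningful growth rate from a zero base
    return (current - prior) / prior

# Human-chosen checks with answers we can verify by hand, including edge
# cases -- the same spirit as testing a DAX measure at different levels of
# detail, at specific conditions like end of month.
checks = [
    ((110.0, 100.0), 0.10),   # simple 10% growth
    ((100.0, 100.0), 0.00),   # flat
    ((90.0, 100.0), -0.10),   # decline
]

for (current, prior), expected in checks:
    got = year_over_year_growth(current, prior)
    assert abs(got - expected) < 1e-9, f"failed for {(current, prior)}: {got}"

print("All checks passed, verified without trusting the AI's explanation.")
```

None of this requires the AI to explain itself; the code's behavior is its own testimony.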

(15:34): So, even if the AI that wrote that code, that formula, isn't able to be transparent in a 100% trustworthy way, you don't really need it to be, because the code itself has a transparency and testability of its own after you unplug from the AI. Now, this isn't exactly the same as the call center help desk case, because a human referee is still required in this system to make the whole thing trustworthy, and I'll come back to this in a little bit. So, just put a pin in this thought for now. This brings us to the final of the three categories of popular takes on AI, and in this category, the takes are that AI, generative AI in particular, is an overblown hoax, and it's a hype bubble that is soon to burst. Based on the things I've already said in this episode, I think this one is pretty clearly false, but again, the reason why this category of take even exists is because there is some element of truth to it.

(16:31): We've been seeing stories for a while that these big AI projects aren't yielding results, and just recently we've seen big organizations like Goldman Sachs coming out with a report saying that AI's impact isn't justifying its cost. In fact, this week my feed is full of articles saying AI is a dead end. Now, this really shouldn't surprise us, that we're starting to see some cracks in the hype. My spider sense has been tingling pretty hard for a long time now that what we're seeing with AI is like one-third real, and two-thirds social phenomenon, and this recent spate of articles is kind of a necessary backlash to that two-thirds distortion. We need this phase, this backlash phase, to help us sort out the one-third real from the two-thirds illusion. Now, of course, along the way we're going to get plenty of clickbait headlines, and plenty of clickbait articles that really don't know what they're talking about, but that's the cost of information in today's modern world.

(17:29): So, what's the illusion? When I say that it's two-thirds social phenomenon, what is that? There are two big sources driving the illusion. The first one is that the best marketing lie that you can ever tell in the software space is that hard things are now going to be simple, and, just as crucially, that you'll no longer need nerds, you'll no longer need techies, you'll no longer need people with the data gene. Now, I blogged about this particular kind of lie a lot back in the day with regard to Tableau's sales and marketing strategy. Tableau very intelligently focused on what finished dashboards could do. Look how touchy-feely interactive these things are. Even a child can do it. You're just mashing on these bars with your finger, or your mouse, and you get answers. They sold this illusion that it was just that easy, and they never once, very carefully in all of this, emphasized how the dashboards that they were showing you got built. They kind of wanted you to think that the dashboard and Tableau were synonymous.

(18:38): The dashboard was the software. There wasn't any intermediate development work, which of course wasn't true. Now, because Tableau was the first company to really effectively tell that lie, that data was now going to be easy, because they were the first one to effectively tell that lie at scale, that propelled Tableau to a peak valuation of like $15 billion. It became a publicly traded company, and I believe that had much more to do with their effectiveness at telling that lie than it did with any sort of innovative technology, or unique IP that they had built. To be clear, despite the fact that Tableau's success, I think, largely rested on their ability to market and sell around that lie, society at large still gained something tremendously valuable, which is that interactive dashboards were now expected. They became table stakes. Even though static reports were still very much around in the post-Tableau era, they were no longer considered to be an adequate solution for disseminating key information across your organization.

(19:38): The bar had been raised. The net effect of them telling that lie effectively is that it did change the world for the positive. So, even the Tableau phenomenon of the early 2010s, you can think of it as two-thirds social phenomenon, the lie that data was now going to be easy, and one-third super substantive, that dashboards should become mainstream, and static reports should finally get exposed as the limited artifacts that they were. Okay, so here we are again with the same sort of thing. Generative AI is another opportunity to tell the world that data, and programming, and technology, all of that is going to be easy, and you won't need those technical specialists anymore. You won't need nerds. And this time the scope of this lie, the opportunity to tell it, is far larger than it was in the Tableau case, because every software company can now be in on it, and not just a handful of BI companies who were struggling to be the leaders in telling the dashboard lie. You better believe that every software company is going to jump in on this as a marketing message.

(20:47): They're going to jump in with both feet regardless of how much truth there is behind it. Regardless of how much technology, and actual value, they're able to provide at the moment, they are going to stretch that truth to its absolute limits when it comes to their marketing hype. And remember, the marketers producing all this content, the ones who are generating these strategies that are pushing all this information out into the world, those marketers hardly understand AI at all. They don't understand its limitations, and they don't care. It's game on, they understand that much, and they're right to pursue it. At the same time, they're going to be responsible for producing a tremendous amount of information that we end up accidentally consuming. Now, the other factor that is responsible for the social phenomenon, the two-thirds illusion around AI, is that generative AI seems kind of bottomless and unclear, not just to the average person but to titans of industry as well.

(21:44): Circling back to our recent podcast with Sean Rogers, he described a moment in the past year or 18 months where corporate advisory boards around the world, particularly in the software industry, nearly unanimously commanded their respective companies to freeze their existing projects, and go all in on gen AI. And that's pretty unusual. Take a moment, and think about it. It's not often that corporate boards interfere so directly in software companies' tech stacks. This worldwide intervention by corporate boards is weird. They weren't saying the normal corporate board stuff, like it's time to open new markets, or explore offshore resources, or let's shift focus from top-line to bottom-line financials, yada yada. Nor were they saying, "Hey, we need you to develop new versions of your software that target specific industry verticals." Those would be normal corporate board type things to do. Instead, this time around they were saying something like, throw out all of your existing plans, and go come up with plans that involve AI. Now, what do those plans look like? We, the corporate board, we don't know. You folks go figure that out. How bizarre a moment is that?

(22:57): We want you to change all of your plans, but we don't even know what we want you to change them to. That, my friends, is a panic moment. These corporate board members are very often very successful. They're very often billionaires in their own right, and they're surrounded by smart advisors, but they're still human beings. They aren't AI researchers, and they're really just as in the dark as the rest of the world. Remember, these are the kinds of people that kind of want to get the nerds out of the equation, right? They're the ones who are really effective at business. They're, generally speaking, not programmers; they're not techies at all. And worse though, their number one job as board members is to not have the company collapse and die on their watch. That's a bad look. Mostly their job is to occupy the spot, soak up the prestige of it, collect whatever compensation is provided, and not rock the boat. They do provide benefits like sage business wisdom, and they provide connections that help the company along and prosper, and all that kind of stuff.

(23:57): But when there's suddenly a chance that your software company can go extinct, the normally passive and incremental nature of corporate boards suddenly becomes quite galvanized. Now again, it doesn't matter how real, or large, that threat is; if there's even a 1% chance of it, corporate boards are going to have a strong knee-jerk reaction. And then add in the reality of the power of the marketing lie, which they are all aware of. They know about the power of those lies. And suddenly the entire software world is all aboard a train, and no one, neither the boards nor the software teams themselves, knows where it's actually headed. So, there is absolutely an element of hysteria here, and the fact that the world's largest software companies are leading this panic, it doesn't mean as much as we think it should. They are very much figuring it out as they go. But remember, one-third of the AI hype is justified, and that one-third is a really big deal. Okay, let's recap a bit. Number one, generative AI is different.

(25:03): It's the first time we've had a free-form human-computer interface that seems to actually understand our intent without requiring us to be anything more than conversational in our input. And it's also the first time that a computer's responses have ever really been indistinguishable from a human response. It fails spectacularly sometimes, but the cases where it succeeds are real. And in fact, a lot of those cases where it succeeds are well beyond what we would ever expect from another human being. But there are two walls that gen AI runs into. One is the hallucination problem, and my current belief is that it's never going to be truly solved. The other wall is the context problem. Does it have all the information? And a lot of the questions and tasks we have in the business world are things that the AI has never been trained on, and probably never will be. But there's still a lot of room between those two walls.

(25:56): There are cases where the hallucination problem isn't a big deal, where the consequences of it aren't high. Either the domain is so structured and narrow that hallucination rates are tiny, and/or the stakes are low when a hallucination does occur, or human referees are involved to act as kind of a shock absorber for those hallucination instances. And there are many cases where the full context of information that's necessary is available to the generative AI, or can be made available with reasonable investment. And finally, the hype is outpacing the reality, because gen AI is the greatest gift to marketers, and the greatest panic-inducing threat to corporate boards, that really any of us have ever experienced. But don't fall into the trap of thinking in binary here, because there is substance under the hype. And as a crude guideline, I suggest the two-thirds hype, one-third real kind of mental model of what's going on. Now, before truly wrapping up, there are a few additional and important thoughts that didn't really fit elsewhere in my outline, so I'm going to leave them here.

(27:00): One, everything I've been talking about here is generative AI, the ChatGPT, Midjourney, DALL-E (or is it Dali? I don't know), Copilot revolution of the last 18 months. Now, that's distinct from machine learning, which has been around for a long time now and is much easier to understand and apply. In fact, it's even easier to apply now in the Fabric world, and it's kind of like super ripe for the plucking from a business value perspective. Generative AI is also distinct from artificial general intelligence, AGI, which is the thing that Sam Altman is talking about, the thing that's going to cost trillions of dollars. It's going to monopolize like a significant fraction of the world's computing hardware, and even a significant fraction of the world's electricity. It's going to take all of that to develop AGI, if it can be developed at all along this trajectory. Now, personally, I think AGI can be developed, but again, I don't know if our current trajectory takes us there, versus whether we're going to need brand-new breakthroughs and research, or completely new kinds of hardware. I don't know.

(28:01): The people who are involved in these efforts think that we're on track. I don't know if that's true, but it's a data point for sure that they think we're on track. Now, either way though, I don't worry too much about AGI, because it either happens, or it doesn't. And honestly, there's nothing you can do to prepare. None of us can do anything to prepare for a world that is that different, unlike generative AI and machine learning and things like that, which you can and should be using, or at least slowly planning for. Really, the only valid professional move with regard to AGI is to ignore it. Seriously. If, and when, it happens, it's going to solve so many of humanity's problems, or cause us problems on a scale that we've never experienced before. And either way, it's going to be kind of a tide that sweeps us one direction or another, and we're all going to be in the boat together. Now, back to generative AI. There was another part in the Sean Rogers interview when he mentioned the all-in edicts that the corporate boards gave their software companies.

(28:57): He was describing this moment where all of those software companies were kind of wandering and lingering around the starting line for six to 12 months, again because they didn't actually know what they were going to do to satisfy the edict. But now, he said, there's some separation starting to happen, and he hinted at some of the sorts of functionality that software firms are going to be revealing in the coming months. Now, the examples he gave were, I think, kind of telling. For example, feature functionality that sort of automatically provides more context on a dashboard. So, to get more specific, let's say this dashboard, a Power BI dashboard for instance, shows that a particular metric, an important metric to you, is trending downward. Okay, why is it trending downward? Now, those of us who are data practitioners, we really don't need much help with that. We know where to look, whether that means clicking some features of the dashboard to drill down on various details, or maybe even quickly pulling together a dashboard specifically designed to answer that drill-down question.

(29:57): I mean, we tend to be close to the data models behind this, and we know how they work, and we know how to manipulate them to get that follow-up answer. So, that's kind of what the data gene is all about. But the average consumer of a dashboard often does not know what to do to answer that next question down. And oftentimes, even though they don't know, they're a very important business decision maker. And so, in the future, there's a chat interface on the dashboard where you can ask a question like, "Hey, why is metric X down this month?" And it gives you some potential explanations to consider as to why it might be down. I mean, that's great. That application of AI sits so neatly between the two walls of context and hallucination. Taking context first: the AI in that case has access to the entire data model and all of its historical data, as well as the definition of how that particular metric you care about gets calculated.

(30:49): So, it can quickly decompose, behind the scenes, the overall metric trend, the macro trend, into its individual inputs, and the trends in each of those inputs, the micro trends under the hood. And because of how structured and focused all of this is, the hallucination chances go way down. Even better, though, there's still a human referee in this case, the user, whose final judgment is the bottom line. So, there's still a bit of a shock absorber of sorts for even the minor hallucinations that occasionally leak through. That's a cool feature, and it's kind of not that big of a deal, is it? For years now, we've already had features that attempted to do this sort of thing in Power BI, but the difference now is that the interface itself, the ability to essentially converse with the dashboard in the same way that we'd IM with a colleague, is far, far more engaging and intuitive than all of those prior interfaces were. And the quality of the responses is also going to be much, much better than anything those previous interfaces were ever able to deliver.
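As a toy illustration of that decomposition idea, here's a minimal sketch in Python. The metric, the regions, and the numbers are all invented for the example, but it shows the mechanical part, splitting a change in an additive metric into per-input contributions, that an AI with full access to the data model could run behind the scenes:

```python
# Toy sketch: decomposing a drop in an additive metric into the
# contributions of its individual inputs. All names and numbers are invented.

last_month = {"North": 120.0, "South": 80.0, "West": 100.0}
this_month = {"North": 118.0, "South": 55.0, "West": 102.0}

total_change = sum(this_month.values()) - sum(last_month.values())
print(f"Total change in the metric: {total_change:+.1f}")

# Per-input deltas: the "micro trends" under the hood of the macro trend.
contributions = {r: this_month[r] - last_month[r] for r in last_month}

for region, delta in sorted(contributions.items(), key=lambda kv: kv[1]):
    share = delta / total_change if total_change else 0.0
    print(f"  {region}: {delta:+.1f} ({share:.0%} of the overall move)")
```

Run on these made-up numbers, that would immediately point at South as the driver of the decline; the chat interface's job is to wrap exactly that kind of mechanical decomposition in a conversational answer.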

(31:50): And as people become accustomed to these chatbot interfaces appearing all over the place, as they become accustomed to them actually being useful, those chat interfaces are going to become incredibly intuitive via repeated exposure. So, this sounds like an old idea, and it is, but this time it's so much better executed thanks to generative AI. The old way wasn't really useful, but the new way is going to be, so don't sleep on it. Now, continuing down that road, let's zoom back for a moment, and put that specific feature into a historical context. Think of it this way for a moment. A feature like that is just, and it's kind of ironic to use the word just here, continuing the trend of shrinking the distance between the business and the technical ability to execute. I've long said that if Power BI had existed from the beginning, we would've never had the IT department in charge of BI. In many places today, we do see the business now building all or most of its own BI content, like data models and reports, with IT providing the software and infrastructure.

(33:00): But not too long ago, IT was responsible both for all of the infrastructure and for building all of the consumable content. Now, that IT-driven model, it does still sadly linger today in lots of places, but it's clearly on its way out now. Before things like Power BI, and yes, even Tableau, the tech distance between the business and BI was just too great. The tech was just too difficult, and abstract, and slow. People who spent their careers acquiring business sense and experience simply didn't have an entire second career to spend getting good at that antiquated tech. And honestly, even if they did, they wouldn't have wanted to. And the same was true of the tech specialists in reverse, relative to acquiring all the business knowledge. But Power BI dramatically shrunk that gap. It didn't bring the technology, the ability to execute, within range of all business people, but for the roughly one-in-16 data-geners lurking in the biz, it was the difference between lights off and lights on.

(34:00): The resulting convergence, where the business knowledge and needs now lived so much closer to the ability to execute, that changed the world. So, a feature like a chat interface that allows non-data-geners to explore why a trend is going a certain direction really just, again ironically just, seems like a continuation of that trend. It's a perfectly welcome continuation of that trend, and it's one that actually enhances the value of the things we create in Power BI. It's definitely not as big of a leap as Power BI itself was, but perhaps more significant than any other single improvement that they've made to the platform. So, if we're going to see something gen AI related in our space that is as significant as Power BI itself, it's going to be something a bit more than this. And this, dear friends, brings me back around to code. As I said earlier, code writing is kind of like the ideal task for gen AI.

(34:59): It's structured, it's focused. There's a human referee involved to adjudicate hallucinations, and the code is testable outside of the AI context. So, we should expect Copilot, for instance, to get pretty good at helping us build Power BI semantic models. Now, at the time of recording, DAX Copilot is terrible. I mean, worse than having nothing. I tried a test today, and the suggestions it gave me for a formula were so misleading as to move me farther from the right answer. Negative value. It didn't seem to understand my intent, and it didn't seem to understand the structure of my data model. But folks, those seem like very solvable problems. I don't expect DAX Copilot to remain terrible. In fact, I expect it to get pretty good, and probably in a short timeframe, so we should plan on it getting good, and look ahead to how that will change our workflows.

(35:59): And right off the bat, I think the most important thing to keep in mind is that in this case, gen AI writing formulas for us, or helping us write formulas, is really just, I'm going to use that word again, just continuing the trend we've already been living. It, once again, is going to bring the ability to execute even closer to the biz knowledge and needs. When Power BI came along, many traditional BI folks whose careers were invested in the old ways viewed Power BI through a binary lens. It was either a hype bubble hoax to them, or an existential threat. Does that sound familiar? Now, what happened of course with the Power BI revolution is that the amount of demand for BI services exploded. Projects became so much faster and more affordable that orgs who had been priced out of the BI game could now afford it. And even orgs who could afford it before could now afford a lot more of it.

(36:53): And also, the whole thing just became a lot less risky at that pace and cost, and the ROI was insanely better too. So, BI was just a lot more valuable overnight, and things with better ROI get more spending, they get more attention. And I've obviously been thinking about the implications that this has for the business that P3 does with its customers. And I want to use our consultants, our P3 consultants, deliberately as an example here. From the beginning, they've always been more thinkers than tech developers. When we advertise and recruit for new talent for our consulting team, we never use the phrase BI developer. We never say we're hiring a Power BI developer. Who we hire, who we look for, and who we need are curious data-geners who are sympathetic to our clients' needs, and take them on as their own.

(37:50): Creative architects of solutions to problems, who aren't ever executing on a tightly specified paint-by-numbers set of requirements. That whole paint-by-numbers requirements thing has never worked. That was one of the real deadly lies of old BI: that you could write a requirements doc, execute on it, and succeed. When a near-perfect DAX Copilot arrives, and I'm saying when, not if, that is definitely going to change some things for sure. But in the context of our consulting team specifically, in the kinds of people we've hired throughout our existence, a near-perfect DAX Copilot is just, that word again, going to make things faster. And maybe that sounds surprising to you if you're looking at this from the outside, and you say, "Well, they're primarily a Power BI consulting company, don't they mostly write formulas?" But if the ability to write DAX had always been really the only important thing, we would've hired a very different kind of consultant. We'd have a very, very different hiring profile; we would be using, for example, offshore resources.

(38:59): I mean, technical skill in something like DAX is actually not that hard to come by on the open market. I'm going to say something that I think is very important, and because it's so important, I'm going to say it twice. Most of the time, the important thing, the valuable thing, is knowing what formula you need to write, as opposed to knowing how to write that formula. So, to help it sink in, I'm going to say it again. Most times, the valuable thing is knowing what formula, or code, you need to write, as opposed to knowing how to write it. I experienced exactly that recently. The most recent time that I touched Power BI with my own hands, I got a powerful reminder of this when I realized that I needed a different kind of metric. The analysis that I was performing at the time, the charts and the graphs and everything that I had, was misleading. And I needed to use some relatively sophisticated logic, sort of intermediate-grade logic, in my DAX to create that new metric.

(40:04): Now, this future version of DAX Copilot might've helped me get that measure written a bit faster. And if I were someone who wasn't able to write that measure myself, if I didn't possess sufficient DAX knowledge to do it, it quite probably would've helped me get over that hump. That's great. There are still formulas in DAX that I don't know how to write, so there is a frontier there where it's going to be helpful to me as well, helping me get those written. But either way, I still had to do a bunch of debugging on my own, even after I had done the most important human thing, which was realizing that I needed it, and specifying in my head what I needed. That part, the gen AI is never going to do for me. AGI will, if and when it arrives, but these generative AI systems are not going to produce that insight, again, because they are more computationally intuitive than they are reasoning and creative.

(40:57): And then after I had the formula, I still had to put it through its paces. I had to test it. I even had to change some of the coefficients. I had to flip some things around in there, and test it with numbers that I already understood, because it wasn't right the first time. Even if it had been right the first time, I still would've had to put it through its paces to validate that it was right. Think of it like a sandwich. There's the inspiration, realizing that I need this formula. Then there's the writing of the formula, and then there's the validation of the formula. Steps one and three I still have to do. And number two, the middle step, is going to get faster. And in some cases, not just faster, but also the difference between I can do it versus I can't do it, like helping me research a difficult formula. If we apply that to our practice, if our consultants once again get faster at their jobs, well, that's good news, right?

(41:46): The original revolution was that things got faster, things got more democratized, things got more affordable, and that is what P3 was founded around. We founded this company on the idea that we needed the traditional consulting model to change in order to take advantage of the faster pace enabled by Power BI and its related family of tools. It's why we chose to put the word adaptive in our name. So, can we adapt again? Yeah, you bet. And because we're built on thinkers and multi-talented decathletes as our consultants, rather than developers and specialists like the traditional firms, we've got a lot better field position than most. To the extent that you define yourself as someone who writes code, if that's something that you do as a full-time job, or if your company's business model is based in large part on a developer model of writing code, writing formulas, writing scripts, I do think the Copilot wave is a big threat to your career, and I think it's a big threat to your business model.

(42:53): Even at P3, this is going to change the way we think about ourselves a little bit, but really just by turning a dial; it's not going to be some qualitative new switch that gets flipped. And by what percentage do we think things like a DAX Copilot will speed things up? I really don't know. I think it's probably in the range of, let's say, 10% on the low end, and 50% on the high end. Now, at the end here, I want to take a moment and thank you so much for listening, because this podcast, the fact that there are people listening to it and that we have a responsibility to produce something valuable for you, has actually been the forcing function, as I mentioned at the very beginning, for figuring a lot of this stuff out. The amount of clarity that I have right now around AI is so much greater than it was even three months ago, and I owe that clarity in large part to the fact that we've been producing this podcast.

(43:50): It forces us to think things through. It also forces us to bring on guests like Forrest Brazeal and Sean Rogers to talk through these sorts of things. And this process has plugged in so many puzzle pieces. Another thing that I'm responsible for at P3 is defining and shaping what our services portfolio looks like for our customers around AI. And of course, just like the world's gigantic software companies, which are still very much figuring out and discovering what their AI strategy should be, ours isn't quite done either. It's a work in progress, but I do have, I think, some really solid previews in that regard. One thing that's really clear to myself, and Justin, and others at the company is that, unlike BI with dashboards, which kind of has an almost universal value proposition that everyone can understand, the possibilities with gen AI, and even things like machine learning, are so different, and they're going to be custom. Every little application of them is going to be partially unique in its own way.

(44:54): And so we will never have, to the same extent that we have the answers for what you should do about your BI strategy, that same degree of clarity with respect to any customer, any client of ours. The customer, you, the business leaders, are ultimately going to be the ones with most of that answer. So, one of the biggest things we can provide to our clients is the kind of clarity that we've been working so hard to acquire for our own company in this space. Now, of course, if you already know that you want to go do a gen AI project to do XYZ, absolutely, we're ready to go. We'll help you with that. But I think the majority of us are still in that figuring-it-out space. And so if I could beam, for example, just telepathically transmit clarity into the business leaders at our clients, to give them this sort of awareness, to help them make that mapping of the opportunities in their business to the types of things that are possible today, and to be able to more confidently answer questions like, what are we doing about AI?

(45:57): To be the person who knows, to be the superhero within your organization, the pathfinder that guides the organization through these admittedly quite uncertain and chaotic times. That's a super, super valuable service. Now, if you agree with that, if you disagree with it, whatever, either way, I'd love to hear what you think, because again, all interaction is super, super valuable, especially at this point in time. So, hit me up on LinkedIn, heck, email me directly at Rob@P3adaptive.com, or if you're in the steering committee, the Raw Data Steering Committee on LinkedIn, you can ask the question there, or provide your feedback there. I'd definitely like to hear from you, whether it's about this clarity thing I was just talking about, or a question about any of the opinions or thoughts I offered in this episode. Hey, it's all fair game. And with that, we'll see you next week.

Check out other popular episodes

Get in touch with a P3 team member


Subscribe on your favorite platform.