episode 208
AI is “Just” a New Faucet, plus the Value of Getting Specific
Last week we got a facelift—new name, new look, same deep data dives. This week? We prove the rebrand wasn’t just cosmetic.
Rob kicks things off with a time machine moment: his first gig at Microsoft in the ’90s, building the Windows Installer. The running joke back then? “Installing yesterday’s apps tomorrow.” Cut to 2025, and that exact same code shows up while he’s installing a Markdown editor for his AI work. Build something right, and it really sticks around.
And that’s the bridge: AI context management isn’t some brave new world. It’s the same discipline that made Power BI models and Copilot integrations actually useful. You don’t need to burn it all down and start over. You just need to get specific enough to matter.
If you’ve suffered through bloated “AI strategy” decks or watched a model confidently hallucinate through your business logic, this episode’s for you. The fix isn’t fancier AI—it’s giving it structure, purpose, and the right context to work with. That’s how you turn a show pony into a workhorse.
Bottom line: AI isn’t a revolution. It’s a new faucet. And the people who know how to connect it—and what to feed it—are already leading the next wave of transformation.
Episode Transcript
Speaker 1 (00:04): Welcome to Raw Data with Rob Collie, real talk about AI and data for business impact. And now, CEO and founder of P3 Adaptive, your host, Rob Collie.
Rob Collie (00:20): Welcome back to the show. Justin, you had a week off from the podcast last week, but now it's back to the grind. The most grueling thing you do in a work week, right, is record a podcast with me.
Justin Mannhardt (00:31): I'm so glad that it isn't.
Rob Collie (00:34): Yeah, one of the more enjoyable parts of our week, doing this show, for sure.
Justin Mannhardt (00:38): Without a question, and it typically comes towards the end of a day for me. So I get to finish the day having a chat with Rob, and then I get to pack up and go home.
Rob Collie (00:50): And it's kind of late in the week, it's Thursday. But before we dive into the real topic today, I want to share kind of a funny story/rant that just really cracked me up last week. We have to jump in the way-back machine for a moment to set it up.
Justin Mannhardt (01:05): Okay.
Rob Collie (01:06): My first job at Microsoft in 1996 was testing; I was a test engineer for Office 97 setup. This is a bad job, just yuck, okay? And I knew kind of from the moment I walked in the door that being a tester wasn't really my calling, no matter what it was. I wasn't someone who came in every day believing that things didn't work and trying to break them. I was much more of the optimist, let's-go-create-something type, and so I wanted to be a program manager. No one wanted to hire me at Microsoft as a program manager. No one wanted to let me make that shift right then, but guess what? A job opened up and I ended up being program manager for the Windows Installer v1, which is this MSI technology. Some of you listening to this know what it is. Most of you are blissfully ignorant of what it is, but here's the thing: if you've used Windows PCs, you've used this thing thousands of times. If I showed you a screenshot of what one of these setup installs looks like, you would be like, "Oh, that thing." Right?
(02:10): Basically, it is not the installer for all software, but it is the standard installation technology used by many application vendors all over the world, not just Microsoft, to create setups for their software to install on Windows. I was not in charge of it, but I was the only program manager on version one of that technology. We were building it in Office, and then we gave it to Windows so that Windows could give it to the rest of the world. The very first customer of this was Office 2000; Office 2000 was the first major application to use what was code-named Darwin at the time to install. So we were building it in Office, and the plan was that as soon as we were done with it for Office 2000, we would ship the whole team over to Windows, and Windows would take it from there. That was the plan, and that is what happened. But the Darwin project, the Windows Installer project, was really grueling.
Justin Mannhardt (03:07): Fond memories?
Rob Collie (03:09): And it took much longer to build this thing than anyone anticipated. The project just ran and ran. It was running over budget, running over time. It was threatening the whole release schedule for Office 2000, and about then was also when web-based install of applications was becoming a thing, like ActiveX controls. Those are long gone. Even today, there are many other ways to install software on Windows; it's not just MSI, not just Windows Installer. Anyway, it seemed like we were on the dawn of this new era. We weren't even in 2000 yet, right? This is like 1998 when this is happening. The internet was kind of brand new, Internet Explorer was still a hot new technology, and Netscape was a thing. My friend Jeff, who I worked really closely with, said, "Oh man, there's something I want to tell you and I think it'll bum you out."
(03:59): "I don't want to tell you, but it's just too good, so I'm going to tell you." Behind the scenes, our vice president Steven Sinofsky was saying things like, "Darwin: installing yesterday's applications tomorrow." At the time, of course, I was a little bit crushed. As a youngster working on this project, you don't want to hear that the VP is making fun of your project, but at the same time, part of me was like, "Well, yeah, that's actually hilarious." It's really funny because not only were we taking a long time to get this thing done, but also when you would run an install, it would take a lot longer than we wanted it to, and so everything about it struck a chord: this is a delicious joke.
(04:49): But anyway, the moment last week when I was just cracking up: I've been doing all this work with AI lately, and AI just loves itself some Markdown, this MD format. It doesn't give you Word docs, it doesn't give you HTML most of the time. It's obsessed with Markdown. I kind of understand why Markdown is so good for it, but nothing on my computer speaks Markdown. I want a Word doc, I want an email, whatever; Markdown doesn't do it.
(05:18): Finally, I'm like, okay, I'm going to install Typora. Typora is this awesome editor for Markdown and all kinds of other formats, and it'll do conversions between things; sooner or later, you need something like this. So I sit down to install Typora. I press go, and what am I looking at? What's staring me in the face but a Windows Installer installation screen, in 2025. 27 years after Steven's very well-formed joke, I'm installing something to help me with cutting-edge AI work, and it's being installed with this thing that we skewered. I kind of wanted to jump out of my seat and say, "Hey, take that, Steven."
Justin Mannhardt (06:05): Since we're on the topic, I recently had to switch over to a new laptop, and so that process involved installing a bunch of things and I've got one, two, three, four, five MSI files in my downloads right now.
Rob Collie (06:23): Yeah, I believe it. It's so wild how technology works. So it turns out that this tool we built got better over time: after we shipped it to Windows, gave it off to the Windows team, they worked on many more releases of it, refined it even further, probably made it run faster, to be perfectly honest. But even that version one was a tool that was perfectly adequate and perfectly competent at doing the job it was built to do. Not a sexy job at all, which is why it was available to me as my first program management job; no one wanted this but me. The thing just still works. They haven't even bothered to update the graphics. Seriously, the icon that you see in Windows for an MSI file: I didn't draw any of those, but I did coordinate the voting between five candidate options for what that icon would be. I facilitated the process where we selected that icon, and that icon has remained unchanged for 27 years. It's kind of a testament to how so much can change and then parts of it stay the same.
Justin Mannhardt (07:23): It's even got the dialog windows; they're still that weird taupe, beige-y color.
Rob Collie (07:30): The battleship gray of the time, and who's going to invest effort in like, "Well, we should go update the forms package so this looks more modern." No one cares.
Justin Mannhardt (07:41): Nope.
Rob Collie (07:42): So, if it ain't broke. I ran into someone, actually, I think we're going to try to have him on the podcast. I ran into someone at Excel's 40th birthday party who was also peripherally part of that project, and oh boy, did we have a fun time reminiscing about the war stories and the landmines we stepped on and everything. Anyway, this is AI-related because I'm installing Typora. There I am, staring at my past, and I'm like, "Yeah, I see. We did something good, turns out."
Justin Mannhardt (08:09): Well, it still works.
Rob Collie (08:11): Okay, so last week I recorded a solo podcast titled You Are the AI Cavalry, and by "you" I mean data people, whether hands-on data practitioners or business leaders who aren't afraid of data, who see the value in data. This is where AI leadership and AI impact is going to come from. There isn't another cavalry. I kind of wanted to get your thoughts on that, because I didn't run any of it by you before I recorded it.
Justin Mannhardt (08:38): Something I've been thinking about after I listened to the episode: even in what is really a short period of my own career and time with data, I've gone through so many phases of tools I was working with. And that idea of the AI Cavalry, if you're the type of person who's always trying to figure out how to use information or automation or streamlining or whatever to make a difference in a company, this is sort of a gold rush opportunity. I kind of got that excitement vibe as well. Whether it was with a spreadsheet, or with SAP Lumira, or with Power Pivot, or with SQL, or with Power BI, or with whatever, I've always been trying to do those things, and nothing has really changed. I mean, I've improved as an individual over this time, but nothing's really changed about my innate desire to make things better. If you've always had an innate desire to make things better, what an opportunity some of this stuff is. Two things stood out to me. The first is the comfort in the fact that you can't change the LLM.
(09:55): But you don't need to. It sort of reminds me of how I actually don't have a freaking clue how VertiPaq does anything, but I don't need to. And then I think, well, I don't need to understand how that works, because I can understand how to inform its context or hook it up to different systems and sources and stitch together something. I had a meeting internally today where we were talking about this idea that we have the power to minimize the AI's need to think about this so it can think more about that. We could see where we had control and where the opportunity was, and I thought that was really neat. I think in the early going there was a lot of fear and trepidation, even for myself: I don't know how to train a multi-billion-parameter large language model, and I probably never will. Does that mean I'm toast?
Rob Collie (10:52): It turns out that the economies of scale are such that they spend hundreds of millions of dollars getting these systems built, and then they need to rent that system in that form for a long time for a lot of money to pay off the expense of developing it, of training it in the first place. You can't be like, "We'll just retrain it again and again." It just doesn't work that way, at least not today. Things can change. There can be new breakthroughs. As of right now, that is how things work.
Justin Mannhardt (11:26): That's right. So yeah, that was probably the thing that resonated with me the most: realizing how much opportunity and capability you have. I've gotten pretty technical at times in my career, but I'm not slinging around lines and lines of code every day; that's not what I do. But to see that I have all these ways I could build things that are useful to myself and others, things that don't require me to really know how the LLM is built or works or how it's trained, that was cool.
Rob Collie (11:59): I'm glad you liked that. I think, in a way, last week's episode is me reverse-engineering and explaining back to people, and even to myself. When I come up with ways to explain things to other people, that's also the moment it's becoming clear for me; it's turned around. You know the old Einstein saying: "Until you can explain something simply, you don't actually understand it well enough." So forcing myself through a process like writing and recording last week's podcast is really valuable for me, even as I'm sharing it with the world. It's win-win, but it's kind of like reverse-engineering, for me, of why it was scary before and why it's not now.
Justin Mannhardt (12:41): True.
Rob Collie (12:42): If that's a gift I can give to my fellow data engineers, hands-on practitioners, business leaders, whatever, all of us who are comfy with data, which again is a subset of the population. Show someone a spreadsheet: is that cool, or is it gross?
Justin Mannhardt (13:00): Right.
Rob Collie (13:00): 15 out of 16 say yuck.
Justin Mannhardt (13:04): It's like a Rorschach test. What do you see here?
Rob Collie (13:09): Anyway, the first step to getting off the starting line is to shed the fear. The second step, though it's really kind of the same as the first, is understanding where it fits, what it means to succeed with it. But it's hard to get to that second step while you're afraid, and it's also hard to not be afraid when you haven't gotten to that second step. These things need to happen at the same time, in a way. I didn't know that I was going to say what I said until I wrote it. It's like, "Oh, right, you're an AI professional. Guess what you can't change. You can't change the AI."
Justin Mannhardt (13:43): Right. I like that a lot. And then a thought that has been bouncing around in my own head, and I think we might've even had a couple questions about it from the team: if what we as the data engineer, business-leader-with-data crowd can do is really bring data and context and organize it in a way that AI can use, does that change P3's philosophy at all? Does the mission now become, well, now we do need to plumb data to somewhere so AI can use it? If you've listened to this show for a while, we talk a lot about being faucets-first, and that's how we've been successful with business intelligence and dashboards and reports for a long time. But if we can't change the AI, and the mission is, "Well, I need AI, but enterprise data is needed and all these things are needed," does that change how we have to think and work at all? That's a curiosity amongst the ranks here.
Rob Collie (14:46): For the benefit of new listeners, let's briefly explain what we mean by faucets-first. It comes from the data and BI era, which is still very much alive and a going concern; the data and BI era is not over. The phrase is really there to stand in contrast, to contrast our approach with the approach that almost every firm uses other than us. When you hire a firm and you say, "Look, we need better information, we need dashboards, whatever," the usual firm will say to you, "Okay, absolutely, we'll get you that, but first we need to go build you a whole bunch of gleaming plumbing. We've got to get your data estate in order."
Justin Mannhardt (15:23): We need some lake houses, some warehouses, some pipelines.
Rob Collie (15:28): Medallions of various flavors.
Justin Mannhardt (15:31): Scalability, reliability, disaster recovery ability.
Rob Collie (15:35): And it's not that we don't believe in any of those things, right? We do believe there's value in those things. However, we also believe that most of the time, when you start building those things first, they become their own obsession, their own goal, and you're not on track to deliver actual business value. You're overbuilding that infrastructure by orders of magnitude while at the same time underbuilding it: you're not building it in the right ways, for the actual needs you'll eventually have. This is how these projects become so expensive and so time-consuming, which is not a bad thing for the consulting firm that sells the project; it's a very good thing for them. It's just not good for the customer.
(16:18): And then eventually, someday, you see your first dot on a chart and you're like, "That's in the wrong place." So for us, the metaphor is: instead of building gleaming plumbing to nowhere, we start with the faucet, because you're thirsty. You came here saying, "I'm thirsty." You didn't come saying you needed plumbing. So we start with the faucet. If we need to run some hoses to the faucet to make sure that the faucet is in the right place, that it's delivering the right kind of water, whatever, we won't overbuild the infrastructure behind the scenes until we know that the faucet's right. And guess what? That way, not only are you saving money, but you're getting your water faster, sometimes in the first couple of days, and you're like, "Okay, we're going to use this while the hose is running to it. It's going to make us money."
(17:04): And then we say, "Look, there will be some benefits to running a real pipe to it." Replace the hose, right? Maybe there aren't any benefits, maybe there are, but at least everyone will understand a hundred percent why that plumbing is now valuable. Then you go build exactly that, no more than you need, and it doesn't take very long. So that faucets-first mentality is central to our philosophy as a company. But last week's episode, if you listened to it, sounds a lot like plumbing, doesn't it?
Justin Mannhardt (17:31): Hooking up MCPs.
Rob Collie (17:33): Running arrows from this system to that system and all that kind of stuff. So everything I said in last week's podcast, I stand by. But if anyone had the impression that it's not going to be faucets-first for us: no, it's 100% still going to be faucets-first for us. AI is now a new kind of faucet, and it's a great faucet. We're still not going to overbuild infrastructure. We're still going to make sure that it's delivering on the business requirements, that it's exciting you, that you're getting something that makes you happy because it's delivering so much value, and then we can talk about hardening, robustifying infrastructure.
Justin Mannhardt (18:09): Robustifying.
Rob Collie (18:11): Yeah, I mean if it's not a word.
Justin Mannhardt (18:13): I feel like that could be a Weird Al riff on a Rage song.
Rob Collie (18:18): Oh, yeah.
Justin Mannhardt (18:18): Robustify.
Rob Collie (18:24): In an alternate universe, all we do as a company is produce nerd covers of popular songs, right? I could help write lyrics. That would be the end of it. No musical talent, no singing talent, but I can pen a cover lyric.
Justin Mannhardt (18:39): We have a high number of musicians here at P3.
Rob Collie (18:41): We do. We have the capability. It turns out that even a Power BI model is kind of its own form of really lightweight plumbing. You're connecting it to multiple places, sometimes with exported text files, sometimes with something resembling a hose, sometimes with something resembling a bucket of water that you carry over for a moment just to make sure. But if you look at a Power BI data model, no matter what the backend piping is that's powering it, that's feeding it, that data model is a form of very intelligent, high-efficiency plumbing of a sort that comes together very quickly, and it's not just plumbing. It has logic in it.
(19:18): It captures the rules of your business and all of that, and I think there's a very strong parallel here in the AI space: making sure the right MCP providers are available, or that it's got the right vector search capability and keyword search, so that we can get it what it needs at the right time, and also not too much, as you were saying earlier. You can wear these things out by giving them more than they need or giving them too much to think about at once.
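To make that concrete, here is a minimal sketch of what "the right context, at the right time, and not too much" can look like in code: blend a keyword pass with a vector-similarity pass, keep only the top few chunks, and enforce a hard budget before anything reaches the model. Every name in it (embed, the 70/30 weighting, the budget) is a hypothetical placeholder, not any particular vendor's API:

```python
# Minimal sketch of hybrid retrieval: fetch only what the model needs,
# never more than a fixed budget. embed() is a placeholder for an
# embedding model; the 70/30 blend and the budget are made-up knobs.

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model of choice here."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 5,
             budget_chars: int = 8_000) -> list[str]:
    q_vec = embed(question)
    q_words = set(question.lower().split())

    def score(chunk: str) -> float:
        keyword = len(q_words & set(chunk.lower().split())) / (len(q_words) or 1)
        semantic = cosine(q_vec, embed(chunk))
        return 0.3 * keyword + 0.7 * semantic  # blend keyword + vector signals

    ranked = sorted(chunks, key=score, reverse=True)[:top_k]

    picked, used = [], 0  # hard cap: never hand the model "everything"
    for chunk in ranked:
        if used + len(chunk) > budget_chars:
            break
        picked.append(chunk)
        used += len(chunk)
    return picked
```

A real system would query a prebuilt vector index rather than embedding every chunk per question, but the shape is the same: retrieve little, retrieve relevant.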
Justin Mannhardt (19:47): Even the simple idea of an AI that can work on top of your semantic model: if we were starting from scratch on a business problem that needed a new semantic model, and now AI is in the equation alongside reports or instead of reports, we would still approach that model the same way. Run the hose into the AI and say, okay, does this kind of do what it needs to do? Yeah, okay. What does it need plumbing-wise? Oh, it needs something like external memory. Oh, we need to harden the way the data gets to the model, because it makes sense. It's the same idea. Can we move fast so you're interacting with something? In the past, it was a dashboard. Today it might be an AI experience, so you can say, yes, this is the thing that's going to help me.
Rob Collie (20:37): And there are parallels in the security world. When the AI agent is helping me, it should be able to see different documents than when it's helping you. So, the equivalent of row-level security in Power BI: who's the user? Okay, we're going to act like them. If it's a backend agent, a headless agent running automated behind the scenes, maybe that needs its own level of security, or maybe even that one is impersonating and acting as individual users at times, even though it's not interacting with that user. The nightly thing that summarizes your day for you in the morning probably should still run as you, even though you're not sitting at the computer at the time.
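A sketch of that row-level-security parallel: EffectiveUserName is a real Analysis Services / Power BI XMLA connection-string property for impersonation, but the helper and the agent wiring below are hypothetical, a sketch of the pattern rather than anyone's actual implementation:

```python
# Sketch of the row-level-security parallel: queries an agent runs on a
# user's behalf carry that user's identity, so RLS in the semantic model
# still applies. EffectiveUserName is a real Analysis Services / Power BI
# XMLA connection property; execute_dax() is a hypothetical placeholder.

BASE_CONN = (
    "Provider=MSOLAP;"
    "Data Source=powerbi://api.powerbi.com/v1.0/myorg/SalesWorkspace;"  # example
    "Initial Catalog=SalesModel;"  # example dataset name
)

def execute_dax(conn_str: str, dax: str):
    """Placeholder: wire up your ADOMD/XMLA client of choice here."""
    raise NotImplementedError

def run_dax_as(effective_user: str, dax: str):
    # Impersonation: the engine evaluates RLS roles as if this user connected.
    conn_str = BASE_CONN + f"EffectiveUserName={effective_user};"
    return execute_dax(conn_str, dax)

# The nightly "summarize my day" agent still runs as the human it serves,
# even though no one is at the keyboard:
rows = run_dax_as(
    "justin@example.com",
    'EVALUATE SUMMARIZECOLUMNS(\'Date\'[Date], "Sales", [Total Sales])',
)
```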
Justin Mannhardt (21:20): Yeah.
Rob Collie (21:21): A lot of parallels in that regard. I also envisioned, when you were talking about that, that when we're building a Power BI model in this new world, I can very soon imagine us doing multiple things in parallel to test whether it's working properly. In the past, it was all just like, "Okay, let's slap together a really rudimentary dashboard and see if the charts look like they're giving the right numbers," validating and all that. At the same time, you could be firing up a chat session in parallel, asking it questions and seeing if it's got what it needs, right? It's the same sort of rapid iteration, really close to the funnel, really close to the faucet.
Justin Mannhardt (21:58): That idea has been really tangible for me, even very recently with some things I've been working on the last couple of weeks: how you approach these things and iterate fast, and understanding that even though I can't change the AI, I can still do a lot of interesting things. So I've watched some of our agents from our AI platform working with Power BI, and I'll watch them execute multiple DAX queries and then try to reason through the results of them together, and I go, "Oh, I'm actually chewing a lot of context right now. It'd be easier if the LLM didn't need to jump through all these hoops. If this is a routine question, that's something I want to fix in the model." Or, "Oh, it'd be cool if this was one of the tools in the MCP, so it didn't have to figure this out all by itself every time. It would just make it easier for it."
(22:50): And so I'm seeing these pathways of how you would start finding where to plumb, I guess, where I would start plumbing to really make this faucet deliver the cleanest, purest, smoothest water available in the 612. You know what I mean?
Rob Collie (23:12): Yeah. I'm getting flashbacks of the Eminem movie 8 Mile, right? He's up there talking about context windows, like, "you chew context." There's no way to finish that sentence, that lyric, without it being dirty if it's Eminem. I pulled up, didn't follow through. Mostly family show. Yeah, speaking of which, your son now asked the home automation system at your house to play the podcast that dad has with his friend?
Justin Mannhardt (23:43): This is the cutest thing ever. So I got home last night, and I was putting my bag away and everything, and my two boys were in the kitchen. We have one of those Google Home displays in the kitchen, and my oldest goes, "Hey, Mom, did you know you can ask Google to play Raw Data and it'll play Dad and his friend's show?" My kids have seen you when we've been on a meeting and they come in the background or something. And so I'm like, "Of course it does." I've just never thought to ask Google or Siri to play our podcast before.
Rob Collie (24:16): Did you have a moment there where you're like, "Oh, wow, I'm their dad and they look at me sort of like a god, like I have an aura"? Such a strong dad moment, that they're looking up to you that much that they want to listen, even just for a little bit, to your show.
Justin Mannhardt (24:31): It was the closest thing I had felt to them thinking I was really cool, because before, they'd ask me, "What do you do?" and I'd go, "Oh, I help people with data." They're like, "What's data?" And we try to explain it in terms they can understand. They're young, but they have YouTube channels that they like, so they very much understand the idea of an influencer-type persona. So the fact that Dad is part of a podcast is very cool.
Rob Collie (24:57): Oh, yeah and they're young enough to still be wowed by it as opposed to being like, "Yeah, my dad's a poser."
Justin Mannhardt (25:04): Yeah, right.
Rob Collie (25:05): Thinks of himself as this. My kids are in their 20s and one of them's even a computer science major. They don't listen to my podcast. I'm touched that they said dad and his friend. I like that. That feels good.
Justin Mannhardt (25:19): Yeah.
Rob Collie (25:20): And you mentioned chewing context.
Justin Mannhardt (25:22): Yeah.
Rob Collie (25:22): And we've explained this a couple of times on the show, but I think this is one of those things that you almost can't explain it too many times. Every time we talk about it, it's worth it.
Justin Mannhardt (25:32): For sure.
Rob Collie (25:33): The context window is essentially how much this thing is being forced to carry in its brain, in that session, about you and about this whole interaction. And if you use too much context, there are at least three ways it can manifest as a symptom, three ways something can go wrong. One of them is you reach the end of the chat limit. It's like, brain full.
Justin Mannhardt (25:59): Sorry, come back later. Chat, sleep now.
Rob Collie (26:05): And that is Claude Desktop. That is what hits you over and over again in Claude Desktop: the chat gets too long, too quick. It's very frustrating. That's one way it happens. The second way it happens is you give it too much information all at once, and it literally gives you an error and says, "I can't ingest all of that, because that amount would exceed the length of my quote unquote chat limit." But it's really the context window limit.
(26:28): You don't really think about it, but as you're having an ongoing conversation, it's supposed to remember that conversation, like a human being would if you were talking to them, and as the conversation grows longer, you've been eating more and more of that context window. And then the third way you can use too much is when you haven't gotten any explicit error. You haven't hit the end of the chat; it hasn't said, "No, I can't take all that on." But it starts to get tired, and it just starts forgetting things or making mistakes. ChatGPT is really famous for this: it gives you a link to a document, and that document link is not going to work six hours from now; it might not even work now. Have I told the story on the show, I don't think so, about asking our Power BI agent on P3 AI to assign fair teams in the hockey league?
(27:19): It's really fascinating: A, how well it did, and B, how tired it got. I asked it, I said, "Hey, you've got access to our Power BI model. You've got all of the players in the league and who's active in the most recent season. Take the rosters. There are seven teams, eight players each, so 56 players active in the current season. Based on their performance, their lifetime performance, not just this season's, make seven fair teams." It's your job, agent, to create the rosters for next season, to make a fair and competitive league. And I gave it rules: make sure there are enough defensemen and enough forwards on each team. You don't want a team that's all forwards where no one wants to play defense, or vice versa. But also make it fair in terms of how strong the teams are. And I mean, it really took to this problem in a way that impressed me. Think about what a human being would need to be doing if we were going through all of this.
(28:20): There's so much thinking, like shuffling people: no, that's not going to work, I put too much talent there. It's so exhausting, what it has to go through, that it got to the end, and it had a great system for everything. It had summed up the points per game for all the players on each team and was trying to hold the totals in a narrow range, like plus or minus one and a half points per game per team. Sure enough, it spits out seven teams, eight players each, and the numbers all look right. Then I look at the seventh team it assembled, and I'm like, "Wait a second. That team is really not good." I know the players on it, right? This team will not do well in the league.
(29:11): The sum of their points per game, if you did it manually, was seven, but the agent was adding it up to 14. It is capable of adding those numbers together and getting the right answer. This isn't like the classic "how many R's are in strawberry" type of problem, where they repeatedly face-plant and fail. No, it is able to do that. But it wasn't able to do that and get it right after becoming so tired, whether that was because it had chewed so much context window or just because it had to do too much thinking all at once. Again, I'm not an AI researcher, I'm not building LLMs, but you start to develop a bit of a spider sense for this stuff after a while.
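For the curious, the balancing problem itself is simple enough to write down deterministically. Here is a rough sketch of the kind of logic the agent was improvising: a snake draft by lifetime points per game, plus the re-checks (re-add the totals, re-count the defensemen) that a tired LLM skipped. All names and thresholds are invented for illustration:

```python
# Sketch of the balancing problem the agent was improvising: 56 players,
# seven teams of eight, balanced on lifetime points per game, with enough
# defensemen per team. A snake draft is a simple deterministic baseline;
# every name and threshold here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: str           # "F" or "D"
    points_per_game: float  # lifetime, not just this season

def snake_draft(players: list[Player], n_teams: int = 7) -> list[list[Player]]:
    ranked = sorted(players, key=lambda p: p.points_per_game, reverse=True)
    teams: list[list[Player]] = [[] for _ in range(n_teams)]
    for rnd, start in enumerate(range(0, len(ranked), n_teams)):
        # Alternate draft direction each round so no team hoards talent.
        order = range(n_teams) if rnd % 2 == 0 else reversed(range(n_teams))
        for team_idx, player in zip(order, ranked[start:start + n_teams]):
            teams[team_idx].append(player)
    return teams

# The sanity checks a "tired" LLM skipped: re-add the totals and re-count
# the positions instead of trusting earlier arithmetic.
def validate(teams: list[list[Player]], min_defense: int = 2) -> None:
    strengths = [sum(p.points_per_game for p in t) for t in teams]
    assert max(strengths) - min(strengths) <= 1.5, "teams outside ±1.5 total PPG"
    for t in teams:
        assert sum(p.position == "D" for p in t) >= min_defense, "not enough defense"
```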
Justin Mannhardt (29:52): Yeah, I had a super similar experience to that today, actually. I was effectively going through reassigning assets to different people, and it came up with one analysis, and I said, "Well, let's do it a little differently." Let's say the total population of assets I needed to work with was a hundred. It just started only giving me 30. It's like, oh, you're tired; you can't remember that there's a hundred. And at one point it even was like, "Hey, can you re-upload the list for me? I forgot it." It literally asked me to do that.
Rob Collie (30:26): Yeah, there's an economy, sort of a thriftiness, that needs to be applied here, in the same way that you wouldn't load a bunch of data into a Power BI model that will never be relevant. If no one's ever going to care about those rows or those tables, don't load them. It's just going to get in the way, slow things down, clutter up the interface, the list of tables, etc. There are parallels here. In order to get the right information to the right AI system at the right time, you can't just give it everything. You have to be precise about it, or at least a little bit thrifty. You don't have to be precise to the nth degree, but you need to be a little bit careful that you're keeping it as relevant as possible. Don't make it think too hard about things that you could make easy for it.
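One concrete form of that thriftiness, as a minimal sketch: pin the instructions, give the session a hard token budget, and drop the oldest turns first as the conversation grows. The four-characters-per-token estimate is a rough rule of thumb, not any provider's tokenizer:

```python
# Sketch: keep a running chat inside a hard context budget so the model
# never gets "tired." Rough heuristic of ~4 characters per token; a real
# system would use the provider's tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_budget(system_prompt: str, turns: list[dict],
                  budget_tokens: int = 8_000) -> list[dict]:
    """Pin the instructions; drop the oldest turns until the rest fits."""
    used = estimate_tokens(system_prompt)
    kept: list[dict] = []
    for turn in reversed(turns):  # newest-to-oldest so recent context survives
        cost = estimate_tokens(turn["content"])
        if used + cost > budget_tokens:
            break
        kept.append(turn)
        used += cost
    kept.reverse()
    return [{"role": "system", "content": system_prompt}, *kept]
```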
Justin Mannhardt (31:19): Yeah, and those are interesting things to realize when you see them and you understand these things, like: I actually don't want my LLM to keep trying to figure this out, because it's something I could just go get if I invested a little sweat equity, and it figuring it out isn't the magic sauce.
Rob Collie (31:38): Oh, here's a great example of that. I picked up an old data model of ours, one that I built a long time ago. It's a demo data model with fake data that simulated one of our company's earliest and most successful Power BI projects. We couldn't create a demo using the customer's real data, obviously, right? So we invented a fake company and fake data, and we put a lot of effort into it. I remembered the model from seven years ago; I thought I'd included metrics in it that were essentially the weighted average of performance across five different sectors of the business, which are five different fact tables in the data model. I thought I'd created an all-up weighted average score, so that if you wanted to look up a particular office or a particular territory or division of the company, you could see its overall score on customer service across the entire customer journey, which is a long journey. I thought I'd created those metrics, and it turns out I hadn't, or at least the version of the model I found didn't have them.
(32:44): So as I was having an interactive chat with our P3 AI Power BI agent against this model, asking it for the weighted average performance, it was going and essentially creating that metric itself, inventing it, writing some sort of DAX query and calculation that generated it, and it was putting a lot of effort into that. It's a perfect example: go create that metric, and make sure the metric is named in a way, and has the right metadata on it, that makes it recognizable to the LLM, so it knows it can just ask for it.
(33:22): Because then if you ask it, "Hey, which corners of the country, which regions of our company, which divisions, whatever, have improved the most over time?", now it's got to do the trended version of this thing that it's inventing, right? You're burning too much of its brain on something when you really want to save that brain for other things. And again, the more you burn the brain, the higher the chances it makes a mistake. It adds numbers that should add to seven, and it gets 14.
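Packaged up, the "just ask for the metric" idea might look like this: the weighted-average measure gets built once in the model, then exposed as a named, well-described tool so the LLM recognizes it instead of improvising DAX each session. The tool shape below is generic JSON-schema style, in the spirit of MCP tool definitions; the measure name, table names, and query helper are all hypothetical:

```python
# Sketch: expose a prebuilt semantic-model measure as a named tool so the
# model calls it instead of re-deriving the DAX every session. The tool
# shape is generic JSON-schema style, in the spirit of MCP tool
# definitions; the measure, table names, and helper are hypothetical.

WEIGHTED_SCORE_TOOL = {
    "name": "get_customer_journey_score",
    "description": (
        "Weighted-average customer service score across all five stages of "
        "the customer journey for a given office, territory, or division. "
        "Use this instead of writing DAX for overall journey scores."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "grain": {"type": "string", "enum": ["office", "territory", "division"]},
            "value": {"type": "string", "description": "e.g. 'Northeast'"},
        },
        "required": ["grain", "value"],
    },
}

def execute_dax(dax: str) -> float:
    """Placeholder: send the query through your XMLA/REST client of choice."""
    raise NotImplementedError

def get_customer_journey_score(grain: str, value: str) -> float:
    # One measure, already weighted and tested in the model, replaces the
    # multi-fact-table DAX the LLM was improvising (and getting wrong).
    dax = (
        f'EVALUATE ROW("Score", CALCULATE([Journey Score Weighted], '
        f"'{grain.title()}'[Name] = \"{value}\"))"
    )
    return execute_dax(dax)
```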
Justin Mannhardt (33:51): These are the places, I think, as members of the AI Cavalry, if you will, where we can just create so much value in these types of systems. Just identifying those places where you make it easier for the brain, or you create more capacity for it. It's a really fun thing.
Rob Collie (34:10): And then again, just to reiterate the whole point of last week's episode: everything we're talking about right now, economizing, giving it just the right things, making it not work too hard, all of that is in the context of the most important thing, which is making sure it has the information it needs, the stuff it doesn't come off the shelf with.
Justin Mannhardt (34:31): Right,
Rob Collie (34:32): Its access to information, rules, directions, clarifications, as well as hard data, all of that stuff that it doesn't have access to out of the box: that's our job. As Amiranet says, these off-the-shelf LLMs have PhDs in everything, but as far as your business is concerned, every time you interact with them, they're a new hire. Every time you wake them up, you've got to give them the employee handbook again: this data over here means XYZ. It's essentially its boot sequence every time. Well, catch you next week.
Justin Mannhardt (35:04): All right, man.