Can AI Go Straight from Requirements to Power BI Models?

Rob Collie

Founder and CEO

Justin Mannhardt

Chief Customer Officer


Nobody loves requirements docs. They’re the corporate equivalent of writing a novel just so someone can skim the back cover. The real question is whether you can ditch all that and go straight from “here’s what I need” to a working Power BI model. In this episode, Rob and Justin push AI into that role and see what breaks, what builds, and what actually saves you time.

Turns out, the magic isn’t in making AI look impressive on a demo slide. It’s in whether it can wire up tables, relationships, and measures fast enough that your team can skip the plumbing and jump right to the good part: asking “does this answer the question?” instead of “why won’t this table join?” That’s the test, and it’s the only one that matters.

From tools that feel like friendly appliances to those that lean full hacker-mode, Rob and Justin run the gauntlet. They even crack open Copilot’s inner workings to see how answers really get formed. It’s a gritty look at whether AI can finally cut the “first-hour tax” every project pays and give leaders a faster path to value.

Episode Transcript

Rob Collie (00:00): Welcome back, Justin. We're really stringing them together in a row here, although we already know that, I think next week, it's a solo Rob podcast.

Justin Mannhardt (00:08): I think it's going to have to be.

Rob Collie (00:11): Yeah. You got an AI pilot project next week.

Justin Mannhardt (00:15): Doing one of my favorite things, jumping on an airplane to go see a client and talk about cool stuff.

Rob Collie (00:21): Film at 11, as they used to say.

Justin Mannhardt (00:24): It's a need-to-know basis-type thing, but yeah.

Rob Collie (00:28): Fodder for future conversations, as opposed to something for the moment. We've had a little bit of a miniature tradition building here where we start off and I kind of rant about something small and pedantic, and I've got one for you. This gives me a reason for all the things that go through my head during an average week that are just like, "What am I doing," right? "Oh, this is useful. We'll talk about this on the podcast." Do you in your neighborhood have the garbage cans that are on wheels and have a square cross-section, that are designed to be grabbed by the garbage truck with the arm?

Justin Mannhardt (01:03): My neighborhood does not, but I am familiar with the design because other neighboring neighborhoods do.

Rob Collie (01:09): Okay, so in your neighborhood garbage collection, someone's actually grabbing the can with their hands and dumping it into the garbage truck?

Justin Mannhardt (01:17): Yeah. In the City of Minneapolis, the cans are in alleyways, so someone has to roll our can away from the garage, but they hook it onto an apparatus that then tilts it up, dumps it, and puts it back down.

Rob Collie (01:27): Then it is the square-profile garbage can that I'm talking about. It's just not being grabbed off the curb by the arm.

Justin Mannhardt (01:34): Yeah. They don't have the arm thing that grabs the whole thing.

Rob Collie (01:36): They roll it over to the truck, and then the truck grabs it and dumps it?

Justin Mannhardt (01:40): That's right, and they are square, yes.

Rob Collie (01:42): It's the same arrangement that I have here in Seattle, because the streets are way too narrow and they allow parking on the street. There's no way that these arms would ever be able to reach all the cans. Now, of course, when I lived in Indiana in the suburbs, they're just driving along with this arm that reaches out and grabs the thing. Either way, in both situations we've removed this incredible amount of dangerous, repetitive labor of lifting and dumping cans by hand. Doing that hundreds and hundreds of times a day, the repetitive stress injury from this, this is not something that a human body can sustain over a long period of time. This is a real advancement.

(02:24): Now, in Indiana, where the arm grabs it off of the curb, you orient the cans so that they face the street, so when they grab it and lift it up and dump it, well, the lid flops open and stuff falls out. The back of the handle that you drag it by, that's facing towards your house. What you would think of as the front of the can is facing the street. Okay. Now, in Seattle, someone's coming around to grab these by hand. That means they have to walk around the back of it. If you orient the cans like that, the person who's picking up has to walk around the back of it, grab it by the handle, spin it around, and then drag it out to the truck.

(03:02): I moved to Seattle and I see everyone's got their cans oriented in the, quote-unquote, facing-the-street way, just like in Indiana. Sometimes they're up on the curb. They're not even in a driveway. They're going to have to be dropped down onto the street. For the first several months I lived here, I'm just like, the guys that come along, this is so annoying for them. The hardest part of their entire job, and honestly, the thing that's probably most likely to injure them over time, is the twisting them around, reorienting them so that then they can drag them into the street. Eventually after a few months, I just built up the courage to flip them.

Justin Mannhardt (03:36): Man, I'm so proud of you.

Rob Collie (03:39): I'm the one person in the neighborhood who has the handles facing out.

Justin Mannhardt (03:43): Oh, man. You gotta come visit. Everybody's got handles towards the alley here, man.

Rob Collie (03:48): Really? Okay.

Justin Mannhardt (03:48): It's a standard practice.

Rob Collie (03:51): In the suburbs of Seattle, they've got the arm that lifts things, and apparently that's just enough to bleed into the city or something, and everyone's got this groupthink. Handles out, folks. Handles out. You know what's funny? After I did this, after I switched, I have found that when I come home later in the day or whatever, my cans have been put back in places that are a little bit more polite and a little bit more considerate to me than they used to be, right?

Justin Mannhardt (04:19): What goes around comes around. Turn your trash cans around, folks.

Rob Collie (04:22): We had a podcast guest on a long time ago, Scott Louvau, who has a website called Relentless Optimizer. He's relentless about code optimization. I'm not so much into that, but optimizing everyday life, thinking about the human beings around me and how to optimize for the whole system, is both a blessing and a curse. I couldn't banish these thoughts from my head if I wanted to.

Justin Mannhardt (04:45): I'm happy to learn that on our avenue here in Minneapolis, we've been doing right by your theory here.

Rob Collie (04:51): Good humans.

Justin Mannhardt (04:52): Man.

Rob Collie (04:53): You're humaning properly.

Justin Mannhardt (04:55): Now, the thing that we could do a better job of here in Minneapolis ... and this is required by law ... it snows a lot here, and you gotta clear the snow around all your receptacles so that they can wheel them away. What gets us sometimes is it'll snow on trash day. The alleys are the last thing they plow, so then they come down the alleys and they pack the snow up against everybody's trash bins. They're usually next to our garages. You gotta get out there and you gotta clear it away. Otherwise they won't take it.

Rob Collie (05:29): I just assumed that in the winter, y'all swapped out your wheels for skis on these trash cans.

Justin Mannhardt (05:35): That's a good idea, actually.

Rob Collie (05:37): I mean, it still wouldn't matter, right, but it's a funny visual. "Have you put the snow tires on your garbage cans yet?" All right. Switching gears to the real thing, you have been in the laboratory doing some mad scientist-type stuff lately. We haven't had an opportunity for you to catch me up on what you've been up to, and what better place to do that than on the podcast? Two birds with one stone. Let's go down the rabbit hole.

Justin Mannhardt (06:01): Let's do it. I've been using AI. I feel like saying on a daily basis is an insult. It's like on an hourly basis. I would say the best way to describe my use case for the most part to date is more along the lines of an executive thought partner, wrestling through problems, writing things, that kind of stuff. I've done very little work with AI in the pursuit of building solutions of any kind. I said I want to jump into this pool, and I've had a catalyst to do so. It's like a lot of people. If you don't have a catalyst for something, it's really hard to go out and learn something. It's hard to read a tutorial or have something pushing you.

(06:48): We're going to be going and visiting a client next week. We're going to be talking with them about some AI stuff, about some Power BI stuff, and so I had a reason to take an interest in getting back into those things. I said I just want to test drive how much damage I can do with Claude Code in a few scenarios, and maybe I'll talk through some of those and you can ask me some questions, and we can pontificate about what this may or may not mean for the future, that kind of thing. I would say what's really interesting to me is how many different types of things I can do with an AI assistant that in the past I would've imagined someone needed to develop purpose-built software for.

Rob Collie (07:34): Okay. Before we dive into that, can I offer a couple of quick meta observations? Because I think we're going to get into the sauce pretty quickly here. First of all, Brian Julius and Sam McKay have been very, very, very, very high on Claude Code for a while. I tried it, and I bounced off. My first attempt with Claude Code, I recoiled, but then I tried Cursor instead and Cursor was more my speed. It was a little bit less intimidating to me because I am not used to working in shell prompts, command windows, and all these sorts of things that real developers do all the time. I'm really interested in even just personally having that defanged for me. I did find Claude Code to be a little bit too scary for me, and then Cursor was like a warm hug by comparison.

(08:28): The other meta observation I want to make is you said you've been using AI on an hourly basis for a number of years now, and for the bulk of those years I have felt very much, capital B, capital J, Behind Justin. I had this glorious few months where I'm out ahead of you. My journey with a lot of these tools has been like having this renaissance of interest in tech in general as a result. This, though, feels like getting back to the natural state of affairs. You're off doing things that are ahead of me. This is the way that things are supposed to be. I welcome it.

(09:06): It's like when Ken Puls didn't understand DAX, and I wrote a book because I'm like, "Ken understands everything better than me. This is an injustice in the universe. He wants a book. I'm going to give him a book," and we'll skip ahead in the story. He eventually read the book and understood it, and suddenly was back to now writing DAX that I didn't understand. I'm like, "This is where we're supposed to be."

Justin Mannhardt (09:27): I think this rhythm of leapfrog is really healthy for us.

Rob Collie (09:30): I just don't know how many times we should rely on it to be me. I've been really re-energized, again, about tech in general ...

Justin Mannhardt (09:39): Yes.

Rob Collie (09:40): ... in a way that I haven't felt in many years, as a result of my last four months have been like that. Anyway, let's get into things that scare me. Like Claude Code.

Justin Mannhardt (09:51): The terminal scares Rob.

Rob Collie (09:54): It has the word terminal in it. Terminal means fatal.

Justin Mannhardt (10:00): It's true. "Would you like to approve this bash command in the terminal?"

Rob Collie (10:07): Like, "Oh." Okay. Just to start us off with, what is Claude Code? We've got a lot of different people listening. I would say that most people listening to this have not seen Claude Code.

Justin Mannhardt (10:22): Claude Code, I think you could best think of it like an extension you would install in your development application of choice. You technically install it just all by itself on your computer, but the way I interact with it or where most people interact with it, in my case, I open Visual Studio Code, which I've used over the years to do things like notebooks or build databases or whatever, and it sits there off to the side in a pane, just like an empty little chat deal.

Rob Collie (10:56): Pane, terminal, bash. We've named things to be scary, right? Okay, fine. Pane is not spelled the same way.

Justin Mannhardt (11:06): Honestly, it doesn't look like much of anything.

Rob Collie (11:09): Yeah, it doesn't.

Justin Mannhardt (11:11): I've got it open right now, and it's just got a little cursor indicator of here's where you could type some things. What have I done with it? Well, I had been playing around with GitHub Copilot also in a similar capacity, but I've found this to be much better in my experience so far, which is not years and years, but the difference is pretty obvious for me.

(11:34): I'll give a shout-out to Rui from Microsoft, Romano. He put out some things on social media a while back just demonstrating the idea that we could use AI to build semantic models, and that's the first thing I did. I just went and I got Rui's demonstration from GitHub and I walked through that, and I was like, "Yeah, this is pretty neat." I was like, "Nah, this is pretty clean room. This is teed up to work."

(12:04): I did that for myself against the database that we have here at P3 Adaptive. What I did is I wrote a document. Just by myself, I wrote a document explaining the database I had, what was in there, and the fact that I just wanted to build a Power BI model for it.

Rob Collie (12:23): This document is just text?

Justin Mannhardt (12:25): Just a text document.

Rob Collie (12:27): Almost like a chat, but you're just writing a long chat.

Justin Mannhardt (12:31): Yeah, long chat. I'll come back to that. I took time, basically, to write out a pretty long prompt, is the way you could think of it.

Rob Collie (12:40): It's written in a way that another person would understand?

Justin Mannhardt (12:43): Yeah.

Rob Collie (12:44): It could be like a, "Dear John, here's how my database works and here's my ambitions."

Justin Mannhardt (12:49): Yeah. "These are the types of things I'm interested in being able to analyze, I want to be able to have these types of metrics," just describing that all naturally. Then I simply asked Claude Code, "Can you build me a Power BI project based on this document?" Now, in this document, I borrowed some things from Rui. I also referred to a link or a separate document that explained the Power BI project format, so Claude can understand that.

Rob Collie (13:18): Right, you have to train it up.

Justin Mannhardt (13:19): Yeah. It just gives it all the context. I did that and I pressed Go. It's funny because Claude Code, when you watch it work, it uses funny words. Instead of a processing icon, it says, "Bamboozling, wandering about, finding the hullabaloo." It's kind of entertaining. I don't know, maybe a minute goes by, if that. It was fast. It says, "Hey, I'm done. I've created your Power BI project files, and they're in this project directory." What you get in a Power BI project, you get two folders. One is the definition of your reports, one is the definition of your model, and then the Power BI file itself, what we normally see as a PBIX, the binary file.

(14:04): I'm like, "Sweet." I go and I double-click to open the PBIX file, and it opens and I get this error message, and it's really cryptic. It doesn't make any sense whatsoever. I go, "Oh, damn, it didn't work." I copied all the error text and I said, "Hey, Claude, I got an error." It goes, "Let me check this. Oh, I see. I didn't put the setting in the right spot." It fixed it, and then it was fine. It was legit. There was a model that had Power Query definitions already, had tables, had some measures. It didn't do any visuals. I didn't go back and test that to the full extent, but I was like, "Okay, that's pretty interesting."

Rob Collie (14:43): How good is the model?

Justin Mannhardt (14:45): It's good. It's got dimension tables and fact tables, and it's related to each other in a logical way.

Rob Collie (14:52): How long was your original document?

Justin Mannhardt (14:55): I don't know. It'd be like a couple of pages in a Word doc, maybe.

Rob Collie (15:00): How technical? I mean, it was still written in English and it was still like prose. It wasn't like tables of definitions and blah, blah, blah. How technical was the document? Were you using things like fact and dimension lingo?

Justin Mannhardt (15:16): No. In all fairness to myself here, this was on one of those days where I have a thousand things to do and I was like, "Wow, it made a model. It looks legit, it's in star schema, I can see the measures, and I got 17 meetings to go to."

Rob Collie (15:30): Okay.

Justin Mannhardt (15:31): I want to come back around to it and keep going, but I was like ... what's interesting, and I'm curious what you think about this, because I think I finally formed an opinion on it for myself ... is that initial document, whether it's two pages, four pages, ten pages, is honestly something I would have never taken the time to write prior to this technology. I would always just sit down with the client, do our brainstorming, whiteboarding. I'm a machine. I'm off and running, building things in Power BI Desktop. Doing that document in the past would've been seen by me as a waste of time.

Rob Collie (16:11): Well, yeah, I completely agree with that. We've skewered the idea of a requirements document a jillion times.

Justin Mannhardt (16:18): Many times.

Rob Collie (16:19): Okay, so it is a really interesting topic. First of all, a couple of pages of essentially human-readable description and ambitions is not the same as a requirements document of old. This is still something we wouldn't have done in our methodology before today, but this is not saying, "Long live requirements documents." We're not reviving or changing our stance on those things, this big, monolithic monster that's supposed to capture everything and never even comes close to the mark, but takes weeks and weeks and weeks to generate. Nope, we're still not doing that. Let's be really clear.

(16:55): Just piecing through this, the real question is would we do this in the near future with a client as a means of getting off the starting line more quickly? I'm going to answer your question with a question. I might even be able to answer this, but it's easier just to formulate the question first. In your situation, this scenario you just described, you're working with an internal database that you already understand.

Justin Mannhardt (17:21): Correct.

Rob Collie (17:22): You're working inside of a business.

Justin Mannhardt (17:25): Yeah, still a bit of a clean room.

Rob Collie (17:27): I mean, it's not really a clean room. It's more like you're up to your eyeballs in the dirt. You're already swimming in it, and so there's a self-mind meld that you're able to pull off because you are both, in this case, the developer. Even though you're working with Claude Code, you're still the developer.

Justin Mannhardt (17:44): Correct.

Rob Collie (17:45): You're also deeply embedded in all of the tribal knowledge and of the nuance and all the business rules of our business, and all of the data and how it's all structured and blah, blah, blah. You have all of that on lockdown without even having to think about it. In the case of working with a client, let's say it's the first time we're working with this client, just to keep the example clean. We don't have that, as the consultant. We don't have that in our heads yet, all that stuff.

Justin Mannhardt (18:11): Correct.

Rob Collie (18:12): The question is, coming up with that document, the two- or four-page document, how efficient is it going to be to do that with the client, versus one of the things that has been so useful for us over the years is there is no communication more efficient than, "Oh, do you mean this?" Our methodology has long been getting a sense of their ambitions and their goals and all that kind of stuff up front. Eventually we get to the point we just kind of load data and try to get to some of those outcomes, and it drives an efficiency in communication that you just can't achieve by describing things in words in advance.

(18:57): That's my question back to you. This approach we're describing certainly sounds very promising for accelerating development and improving some of the efficiency of even the way that we work, and we're about as efficient as it gets today. Is that two-to-four-page document itself going to become a less-efficient version than something else that we would still do with Claude Code?

Justin Mannhardt (19:18): It's a great question, because having the right approach and setup needs to be demonstrably faster than how I could go in the past. I think there's reason to believe it could get there, but I think you'd approach it differently. For example, you'd want a way to distill actual conversation with someone into clear business objectives that they're trying to achieve with their data in this document. Maybe there's a pre-technical scrape. Harvesting the schema out of a database or a SharePoint list or something is a pretty easy thing, but it's going to need that, right? It's going to need to have that information.

(20:01): You and I have sat in the same room during jumpstarts before, and you get to that point where you're like, "Okay, we've downloaded a lot, this is great. I need about an hour. Why don't you guys go check your emails?" If that period of time vanished, and now we're more quickly into, "Okay, you mean like this," and we're on the canvas doing things a bit faster, that's the scenario I want to try and see if I can't prove out over time. Can we eliminate those waiting games while someone's just getting that first incarnation of a model together? "Okay, I got something we can start riffing through here." The first use case I went through was basically can I accelerate the initial build of a model, not the final build. I don't want to project here that I've got something I would ship to production, but I certainly got somewhere that would've taken me much longer if I just did it the old way.

Rob Collie (20:59): That hour or hour-plus where we typically go in and just have to just jam through a bunch of linear work. We know we need to load that table, and it probably needs to look like this and it needs some modifications. Here's the power query. We need to write the blah, blah, blah, blah, blah. Make the relationships between things and write the base measures.

Justin Mannhardt (21:16): We already know you're going to want X permutations of time intelligence versions, so let's throw that in there.

Rob Collie (21:23): Then you can really start the iteration phase, where we just keep getting better and better and more and more and more on target. Is there ever a point in that process, based on your experience thus far, where you would stop using Claude Code and go direct to making the changes yourself? "Oh, that measure that we've written that way, you'd think it needs to be written that way, but really you need to filter out X and Y. You need to turn it into a calculate, and remove certain kinds of records because they shouldn't count," or they should count as negatives or something like that. Whatever, the usual, very, very standard stuff. Would you still go to Claude Code and say, "Hey, make that change," or would you go and make that change directly?

Justin Mannhardt (22:10): I'm deeply curious to see how far I can get through a Power BI project without doing any of the work myself. The scenario you're describing, I want to see can I ask Claude Code to do it and how often does it do it well, and how often do I have to redirect or correct it. I think practically speaking, there will be times today where I just do it, but you bring up a really interesting human-playing question, and maybe I want to spin back to another use case because I think it's a good parallel.

(22:51): I'm an expert. I'm a bona fide, certified, from-Microsoft expert. When you're like, "Hey, can we do this complex thing," I'm like, "Sure." I have spent years of my life learning how to do that complex thing, and as good as Claude Code is ... it's very good, it can write very good DAX ... I'm also very good. We're more like peers, you know what I mean?

Rob Collie (23:14): It's hard to know how much you're benefiting from all that expertise, because you can't unsee all of your expertise.

Justin Mannhardt (23:23): Exactly.

Rob Collie (23:23): You have no idea. We're blind, by definition, to how much we benefit from that expertise.

Justin Mannhardt (23:30): Yeah. This is the interesting parallel in my experience. I've done a number of other cool things with Power BI project files specifically, even things like I haven't even opened the model yet. "Claude Code, can you analyze this model and tell me what it's all about? Can you tell me all the places that this measure is used in the reporting?" Really cool, cool, cool stuff. The parallel here is ... and it's related to this ... so we've been doing a lot of work checking out Power BI Copilot, and specifically the one that an end user might use on the side of a report.

Rob Collie (24:03): The chat with data?

Justin Mannhardt (24:04): The chat with data experience.

Rob Collie (24:05): Ask my questions of the data, get my answers, as opposed to finding the right dashboard. That version?

Justin Mannhardt (24:10): That version. As I've mentioned on a prior episode, I despise the fact that these chats are not persisted. Please fix.

Rob Collie (24:18): Close the window, refresh the window, chat gone.

Justin Mannhardt (24:21): It's gone. We're doing a lot of experimentation. We're going to be working with clients on this. I said, "I need a way to have this stuff." You can download the diagnostics from the chat. What word? The diagnostics.

Rob Collie (24:36): It's right there in the UI.

Justin Mannhardt (24:38): Yeah, it's right there. You just download it. It's got a little doctor symbol next to it, or a stethoscope.

Rob Collie (24:43): Continuing the phrases that scare people.

Justin Mannhardt (24:47): Yeah. Terminal, pane, diagnostics.

Rob Collie (24:50): Diagnostic sounds like the thing that you run when something's gone wrong, which it is, right? Just for the benefit, it even has the word die in it.

Justin Mannhardt (25:01): You download this thing and it's just a big ugly JSON, and if you had a really long conversation, it'd be even bigger and even uglier. We're trying to understand what Copilot is doing. We want to know is it performing well, is it getting things right, is it getting them wrong. That was the problem I was seeing, but I was like, "I wonder what I could do here." I downloaded some of these files, and I just fired up Claude Code and I said, "Here's the deal. I get these diagnostic files that explain what's going on." I first just said, "Hey, can you write up a Python script that would parse this out to recreate the conversation?" That was all I asked for, and on the first go, it nailed it, because in this JSON, you get this big ugly thing and it's not perfectly linear of question, answer, question, answer, question, answer. There's all this other junk in between, right?

Rob Collie (25:54): Of course, yeah.

Justin Mannhardt (25:56): I'm like, "Wow, that's cool." I started with that, and then two hours later, I actually had a GUI application that I could select a folder on my computer that had these files. I could pick one, and there'd be a pane in the middle that recreated the chat so I could see the chat exactly how it was had with the user ... this is actually a pretty complicated riddle that I'm still refining ... then recreate what Copilot actually did in another pane.

Rob Collie (26:28): How is it reasoning through the process?

Justin Mannhardt (26:31): How it reasoned through getting to an answer, and that includes things like how it reformatted the question into a set of instructions, and if there were any DAX queries going on, I can get the DAX query back. This is the parallel back to some of the questions you were asking about Power BI. I've never written a piece of software outside of ... I've never built an application. It's not something I've ever needed to do. I don't know Python. Most of my Python work has usually just been because that's been a very convenient way to call APIs to get data and then work with data frames to do things, but this thing's got buttons and click behaviors and colors and all this kind of stuff, and I don't know anything about any of this. All I'm able to do is observe what's not working right in this thing I'm trying to build, and tell that back to Claude Code.
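
The first step Justin describes, parsing the diagnostics export back into a readable conversation, might look something like this minimal Python sketch. The field names here ("entries", "role", "text") are hypothetical placeholders, not the real Power BI Copilot diagnostics schema, which is messier and would need to be inspected first; that mess is exactly why he handed the job to Claude Code.

```python
import json

def extract_conversation(path):
    """Recreate a linear question/answer transcript from a diagnostics export.

    Assumes a hypothetical schema: a top-level "entries" list where each
    entry has a "role" ("user", "assistant", "tool", ...) and "text".
    Everything that isn't a user or assistant turn is skipped as "junk."
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)

    turns = []
    for entry in data.get("entries", []):
        if entry.get("role") in ("user", "assistant"):
            turns.append((entry["role"], entry.get("text", "")))
    return turns
```

From there, recreating the chat in a GUI pane is just rendering each `(role, text)` pair in order.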

Rob Collie (27:25): I had the same experience with Cursor, which is a bit more of an appliance relative to Claude Code, meaning I don't need to deal with command prompts, I don't need to deal with terminals. I don't even need to deal with VS Code. I install Cursor and Cursor is the only thing I have to deal with, and I just have to give Cursor the ability to write to files on my hard drive so that it can make changes to the code that it's writing for me. It has these two modes. There's sort of question-and-answer mode where it's not making changes to the code, and then there's the version where it's making changes to the code. It took a little bit to get used to that, but that wasn't that big a deal, and I had it code for me Conway's Game of Life.

Justin Mannhardt (28:07): I think you told me about this.

Rob Collie (28:09): It really captured the imagination of 20-year-old me when I was in college. Conway's Game of Life, it's not a game. There's no game. There's no winner, no loser. It's just a simulation on a giant checkerboard. You put checkers on the checkerboard, and there are rules about what happens in the next generation. The clock is ticking forward, click, click, click. At each clock tick, there are rules that determine whether or not a square on the checkerboard is going to have a checker on it next time, based on how many checkers were around it on the previous round.

(28:41): It's very simple rules. You end up with these incredibly amazing geometric animations that happen on the screen and little bugs that crawl their way across the screen infinitely, and they look like insects. They have a tail that's almost like flapping them across the screen. It's all from these really, really simple little dumb rules. Saying to Cursor, "Give me a version of this."

(29:05): I remember in college, my friend and I spent a weekend, or a part of a weekend anyway, coding this up in C, and getting it working and being fascinated by it. It didn't have any creature comforts in it. It was not easy to set up starting patterns and press Go, and certainly you couldn't remember what you just did. There was no replay or anything like that. First of all, Cursor just knew what Conway's Game of Life was. I didn't have to explain the damn rules of it. If you're surrounded by three checkers, two or three checkers, you'll have a checker next time, and anything else, you're not. This web app is running on my web browser on my computer almost immediately, and then I started adding features. I'm like, "Okay, I want to know how many generations it runs, how many clock ticks it runs before it ends," because oftentimes it'll just end. You just end up with all the checkers disappear. Okay, fine. We'll add that feature. We give a little readout that was a readout of how many generations did it run.

(30:04): It turns out, a lot of times it ends up in a state where there are checkers, but it's the same exact checkers that it had last time, so it just stops. It gets stuck in this particular spot, like a square of four checkers will stay forever. That's also the end, as far as I'm concerned. I said, "Hey, let's add some detection. If it's the same as the last tick, if things haven't changed, we're also done." I wanted to add that detection step because I was really interested in which patterns had the longest runtime, because they all do eventually run down. Then I realized that there are some that oscillate. It looks like an X and then a T, and then an X and then a T, and it just keeps bouncing back and forth between those two. Okay, let's add a detection step that if this tick is the same as two ticks ago, it's also done. All these sorts of things. Eventually I started getting bugs, and again, I'm not looking at the code at all. I think it was writing JavaScript.
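The end-state detections Rob describes, plus the generation counter, might look something like this. Again a Python sketch under an assumed set-of-cells representation, not the app's actual JavaScript; the function names are ours:

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def run_until_settled(live, max_gens=10_000):
    """Tick until the board dies out, freezes, or oscillates with period 2.

    Returns (generations_run, reason). Keeping the last two board snapshots
    alongside the generation counter is the kind of statekeeping where the
    bug Rob mentions could creep in.
    """
    prev, prev2 = None, None
    for gen in range(max_gens):
        if not live:
            return gen, "died out"
        if live == prev:     # same as last tick: frozen (e.g. a 2x2 block)
            return gen, "frozen"
        if live == prev2:    # same as two ticks ago: period-2 oscillator
            return gen, "period-2 oscillation"
        prev2, prev = prev, live
        live = step(live)
    return max_gens, "still running"
```

A 2x2 block freezes after one tick, a blinker trips the period-2 check, and a lone checker dies out immediately.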

Justin Mannhardt (31:05): That makes sense.

Rob Collie (31:07): I'm not looking at the code at all, but I did have to start reasoning with the thing, arguing with it.

Justin Mannhardt (31:14): Yes, yes.

Rob Collie (31:15): "No, no, that really isn't working." Eventually it walked me through the process of, "Okay, hey, let's add a logging feature to it, and then can you give me that log?" It's like I'm a technical project manager at this point.

Justin Mannhardt (31:28): Yes, yes.

Rob Collie (31:30): It's like my old job at Microsoft, where I was in charge of the user experience but also needed to be technical enough to interface with the developers. It's interesting that maybe that old job is suddenly a lot more valuable now.

(31:51): Anyway, so eventually I got the bugs fixed and everything. I think back to the version of me who made this in college, who was a much better programmer than I am today, and, boy, did I smoke him. My friend, by the way, was a way better programmer than I was, and the two of us together did this. I mean, this new thing had features upon features upon features and creature comforts, the ability to save the starting state and replay it, and it was effortless. Yeah, it was really cool.

Justin Mannhardt (32:17): I had some similar experiences, where the app would almost get too complicated in terms of the things it was doing right versus wrong, kind of learning the hard way. "Oh, well, I'll just ask it to rip that whole thing out. We'll go bit by bit, now that I better understand what I want to get to." The thing that also was a little interesting, I don't know if you had this experience with Cursor, but Claude Code ... and I'm sure there's ways to control this with its rules and all this stuff that I haven't gotten too deep into yet ... it'll push beyond what you asked for. It infers more levels of like, "Oh, and we should have it work like this." Sometimes I was like, "Yeah, that was really smart." Sometimes I'm like, "Yeah, but I didn't want that to happen at all."

Rob Collie (33:05): I 100% got that with Cursor. In my version, the very, very first version of Life that it gave me came with stop, start, pause, and speed controls that I didn't ask for. I could control how many ticks per second were happening. It knew this human was going to ask for that next.

Justin Mannhardt (33:24): Right?

Rob Collie (33:24): Based on seeing prior art like that. It had seen many, many, many code projects; Conway's Game of Life was already in its pre-training data. It's probably on GitHub a thousand times. It's such a nerdy, esoteric thing, and yet because it's a nerdy, esoteric thing, guess what programmers like to do? "I'll really nail this. All those other people who've come before me and done Conway's Game of Life, they're amateurs."

Justin Mannhardt (33:54): "I'm going to put some detection features in it."

Rob Collie (33:55): The thing is, based on my experience, I don't think that was a very common feature, because the combination of telling me how many generations had run and that detection, those two features interfered with each other and caused the bug. While it was statekeeping for both of those, somewhere in there it went sideways, which is probably where a human programmer would have gone sideways as well. It was very interesting.

(34:17): In the time we've got left, let's push the fast-forward button. We have a lot of people listening, who one of our things here is to always try to turn this into business understanding and business impact and all that stuff. Clearly we're still early in all of this. We're early in the tool's maturity. Even more importantly, we're early in the cultural adoption of this stuff.

Justin Mannhardt (34:40): Absolutely.

Rob Collie (34:41): Nothing happens overnight. We have to allow time. That's why I'm saying we're jumping in the fast-forward machine. The picture evolving for me is that all the things that you need today and work on today, you're still going to need. We've talked about this extensively. Power BI models are a much better anchor point for conversational AI chat with data than the raw data, so you're still going to want Power BI models. They're going to be faster to develop. Then there's what you and I have been talking about, our experiences building these applications ... in your case, the diagnostic log viewer, and in my case, the completely-useless-to-a-business Conway's Game of Life ... the ability to produce custom line-of-business software. There's going to be a gold rush explosion in that.

(35:32): Whether you're doing it for yourself or whether you're hiring a firm like ours, it's the age-old thing: when something becomes less expensive, civilization makes more of it. There's going to be an expansion in productivity as a result of lowering the cost of all of these things, in the same way that adopting the PC accelerated things. That took some time, but it eventually ran its course. PCs were everywhere at some point, and we'd reached steady state in terms of PC adoption. There'll be some amount of time of really, really hyperspecific custom line-of-business software. That'll run its course. There'll be a lot of work and a lot of ROI to be had in that space. Then what that leaves is the other thing that we've been talking about: ongoing AI-powered workflow optimization and constant tuning.

Justin Mannhardt (36:32): Constant tuning.

Rob Collie (36:33): Constant improvement. Here at P3, we have a dedicated, trained copywriting agent. It even has a name, Griff, as in, how do you riff with Griff today?

Justin Mannhardt (36:48): We need to hook Griff up to text-to-voice and get it on the podcast, by the way.

Rob Collie (36:53): That'd be freaking amazing. Enough effort has been invested that this thing is us. I spend a lot of time riffing back and forth with it, just like you were riffing back and forth with your application. I'm still very much writing, just writing a lot faster, with something that knows how to write for us. And even that, I'm tuning all the time.

Justin Mannhardt (37:18): I can't see a world where that stops, because this technology is just fundamentally so different from anything else we've seen. If we write a function in an application that's supposed to take X and add it to Y, we're always going to get the sum of those two numbers. But you can swap out whatever LLM you're using with Griff for the next generation, and with these more creative, exploratory applications, there's always, always, always an opportunity to tune them and improve them. My experience continues to confirm that the startup cost for ideas goes to near zero. I was even thinking, after I built this little app, about things I was responsible for in prior roles, where if I could just have had a functioning example of what I was trying to explain, oh, man. That could have gotten through.

Rob Collie (38:13): Or in my example from this morning: Kellan, our president and COO, had some feedback on a web page I've been working on. He thought it needed an additional section: "I think there's one theme that might be missing here." I had time and energy between meetings to just sit down, engage with that, and jam something out with the help of Griff. Before, I would've needed a chunk of a day set aside to wind my own motivational muscles up and get a running start to go do that, because it would've been such a lift. It's such an energy cost.

Justin Mannhardt (38:53): Yes.

Rob Collie (38:53): Now it's something that is so kind of fun and light that I can fit it in between two meetings.

Justin Mannhardt (39:01): Exactly right. You'd be like, "Oh, man. Listen, I don't have time for that until next week. Oh, it can't be next week because I've got to travel." Startup costs, the ability to move fast. Even if there's still a period of time where you do need to pull in experts at different stages of ideation ... time's going to tell on that ... as a business leader, there's just so much surface area of things we could entertain, things we could try, without losing tremendous amounts of time.

Rob Collie (39:29): Just to circle back to one more theme, it's interesting. What I can get out of Griff is different than what others can get out of Griff, because I'm still an expert writer. I'm still an expert storyteller. I'm still an expert at identifying emotional themes that I think are somewhat universal across people in this space. These are my talents. Those still come with me when I'm working with an AI copywriting agent that I have trained. In the same way that you were talking about earlier, when you're working with Claude Code and a Power BI model, you can't unsee or unlearn all of your expertise. You still bring that with you to the story.

Justin Mannhardt (40:13): It's been a lot of fun to just get in and have some real tangible things that I wanted to try and solve, and it's given me a point of view on the ways we can approach situations differently. Yeah, I'll look forward to coming back in the future and sharing what other crazy application I've decided to build for some use case.

Rob Collie (40:32): All right. Well, I'm looking forward to it as well.
