episode 199
The “Dobie Moment” (The Awesome Power of AI/CoPilot Frontends – AND a Cautionary Tale)
Most of us have been in the trenches long enough to know when something’s about to flip the script. And brother, we’re standing at the edge of a cliff most data folks don’t even see coming.
Rob Collie thought he had Power BI figured out. Then Copilot did something impossible; it cracked a question that should’ve left it scratching its digital head. But it didn’t just answer. It nailed it. That’s what we’re calling the Dobie Moment—when AI stops being a fancy calculator and starts being genuinely scary-smart.
Here’s the thing nobody’s talking about: your semantic models aren’t just sitting there anymore. They’re waking up. And when Rob and Justin break down what happened in this episode, you’ll see exactly why that should make you sweat a little.
They’re not here to blow smoke. They’ll show you the magic, sure, but more importantly, they’ll show you where the landmines are buried. Because when AI starts connecting dots you didn’t even know existed, confidence and correctness become two very different animals.
Bottom line: The future of data just knocked on your door. You can pretend you didn’t hear it, or you can listen to this episode and actually be ready when your models have their own moment of reckoning.
Your call. But don’t say we didn’t warn you.
Episode Transcript
Rob Collie (00:00): Well, hello again Justin.
Justin Mannhardt (00:01): Hello Rob. Nice to see you.
Rob Collie (00:03): Likewise. We've had a really great little tradition going here lately where I just show up with the thing we're going to talk about.
Justin Mannhardt (00:08): And you told me on I believe Monday that you had a topic, and you're like, "Ah, but I'm not going to tell you."
Rob Collie (00:14): All right, so what I want to talk about today, we can talk about a couple of different things maybe, but the number one thing I want to share with you is this kind of mind-blowing experience that I had with Power BI Copilot this week. It's two things perfectly at the same time in opposite directions. One, it is a perfect demonstration of the jaw-dropping power of having an LLM working with you as your interface to the data. It shocked even me. You've heard me, and you and I both, on this podcast for several episodes, talking about just how this natural language, AI-powered interface to your semantic models is like the future, and I couldn't be more excited about it. And yet even then, this week, a perfect demonstration of the power, the awesome power of these things, and also a cautionary tale about making sure your semantic models are constructed properly, and your prep for AI instructions are constructed properly, so that you don't end up in ambiguous situations.
(01:18): Again, I'm just really leaning into using my old hockey league as a forcing function for a lot of this. Because it's not a business scenario, there's all those caveats, it's hockey data, all that kind of stuff. It's not life and death at all. But at the same time, I have such direct access to these human beings, and they are enthusiastic about it. Data is their story. It's the story of their community. Not just the story of their hockey games, it's the story of their community. These are people whose kids have now grown up together. It's interwoven into their lives. So I said, "Hey," and I had Luke, producer Luke, wearer of many hats, set up the commissioner, the person who runs that league, with a login so that he could log in and use the Copilot version. Because the version of these dashboards that I published to the league obviously doesn't have the Copilot experience in it.
Justin Mannhardt (02:11): It's a publish to web type thing.
Rob Collie (02:14): Straight up Power BI published to web. So we need to give him a login on our actual Power BI tenant. And as I was thinking about giving this to him, I'm now turning over the AI interface to a member of this community, to a leader of that community. And the idea is that he's going to take it to the bar with him after games or before games. There's a pre-bar and a post-bar. I don't think I ever once made it to both the pre and the post-bar on the same night.
Justin Mannhardt (02:40): That's a lot. I turn into a pumpkin as you know. I don't think I can do both.
Rob Collie (02:44): They do call these things beer leagues for a reason. Anyway, so I was thinking about things he was going to ask, and if he's sitting there with his buddies, anticipating the kinds of questions they're going to ask. And this is really cool because it forces you to think about how are they going to approach it, what are the questions they're going to ask? No one is going to walk up to this thing and say, "How good of a player is Aaron Dobrzynski?" I don't even know how to pronounce the guy's last name. I've known him for years. Aaron Dobrzynski. I'm sorry if you're listening, Dobie. So this is why everyone calls him Dobie.
Justin Mannhardt (03:22): Oh, that's better.
Rob Collie (03:23): That's his name. Dobie.
Justin Mannhardt (03:24): Oh, isn't that like a Harry Potter, "Dobby sorry, master."
Rob Collie (03:27): Right, right, right, right, right. He's not in the player list as Dobie. He's in the player list, in the players table in this model, as Aaron...
Justin Mannhardt (03:38): Jacabarshubikob.
Rob Collie (03:39): Exactly, right? Okay. So I'm like, "Wait a second, they're never going to type that," and I don't have a nicknames table in the model that translates real name into nickname. This is not going to work. So I sat down and asked it, "What kind of a player is Dobie?" Question mark, press enter, and it nails it. Comes back and says, "Aaron Dobrzynski..." Right?
Justin Mannhardt (04:03): No.
Rob Collie (04:04): "... often called 'Dobie'..." in quotes, is a, blah, blah, blah, blah, blah, "... very effective player, scoring this many goals per game," and I just can't get my jaw off the floor fast enough.
Justin Mannhardt (04:12): How? How?
Rob Collie (04:14): Right? I had some theories.
Justin Mannhardt (04:17): I call foul.
Rob Collie (04:18): I know, right? Kind of blew my mind. So there's a guy in the league, his name is Joe Mulvey, but everyone calls him Joe Flow, and there are two Joes in the league with different last names obviously, neither one of them has the last name Flow. I ask it what kind of player Joe Flow is and it nails it, "Joe Mulvey..." blah, blah, blah, and I'm just like, "What is going on?" So my theory is that we have this superlatives table. There's one table in the whole model that does have nicknames in it.
Justin Mannhardt (04:46): That's what I've been wondering, yeah.
Rob Collie (04:48): It's not related to the players table. So again, there's no way that this model can just go look at it and say, "Ah, player X equals nickname Y." It is timestamped, so these superlative entries, like when someone has a really good night or does something funny, we hand enter them into the superlatives table. Each one is tied to a particular week, so it would be able to match up. If it was really, really, really going crazy, it would be able to match up, like, Dobie was listed as having a five-goal night that night. Cross-referencing against players who had five-goal games that week, it could be doing things like that. It could be that Joe Flow was the only Joe who was active in the league at that point in time. The other Joe wasn't playing at the time when Joe appears in the superlatives table.
(05:31): So this was my theory, and I was so excited about it that I was already telling people that this thing was so smart, it was like traversing this model in this really sophisticated manner. And I told my friend Dave Gainer, who's been on the podcast, I told him this story, and he's like, "Nah." He's like, "It's not doing that, Rob." I'm like, "Come on, it's totally doing that." He's like, "No, it's just figuring out the names from the names. It's just figuring out Dobie from Aaron Dobrzynski." Right? I'm like, "Come on, it can't be doing that." So what did I do? I went and I deleted the superlatives table from the model, and yeah, it still gets them all right.
Justin Mannhardt (06:03): Interesting.
Rob Collie (06:04): In fact, I asked it, "What kind of player is Shorty?" It's another nickname. There is someone in the league named Justin Short, and it decided that Justin Short is Shorty, and that's right. Now, this player table has several hundred players in it. In other words, the LLM itself is figuring this out. It is looking at the list of players obviously, but it is trying to match the nickname to a player. And one of the things that our friends at Microsoft had told me is that the current version of Copilot kind of linearly follows the path to find an answer. It's going to come up with an answer if it can. It just didn't have any better guess than Shorty equals Justin Short, and it didn't have any better guess than Dobie equals Aaron. I think it broke the tie between the two Joes based on who's played more games and who's played more recently.
(07:02): That's all it would have to go on. And so its ability to make that leap again is kind of awe-inspiring, but it could have gotten this stuff wrong. What if there's someone else in the league that we call shorty? What if Justin Short is 6'5?
Justin Mannhardt (07:17): And his nickname is, like-
Rob Collie (07:19): Tiny. And what if Joe Flow was the other Joe? It's exciting on the one hand and also a cautionary tale on the other. Now, I do think that future versions of Copilot are probably going to reason through this differently, and aren't going to be quite as hell-bent on providing a single answer. They might be even willing to come back and ask you, "Hey, I don't know who Shorty is. Here's a couple of the candidates." That's kind of the experience we want, right?
Justin Mannhardt (07:47): Yeah. I liked it better when you were explaining it to me and I thought it was able to do semantic matching on your superlatives table, and was able to reason through-
Rob Collie (08:00): That was comforting.
Justin Mannhardt (08:01): That was comforting because it was like, oh, there's an anchor there. But when it is effectively just making its best guess, that sure seems like a place where you'd want some sort of safeguard in the response like you're describing, "Hey, I'm not quite sure who Dobie is. I think you mean Aaron Jacabagurshgabarshkan."
Rob Collie (08:18): Poor Dobie.
Justin Mannhardt (08:19): You're right. It's sort of discomforting, though. I think this is what we've seen in some of the Copilot stuff, where it's just sort of a few steps behind the frontier models in the confidence of its responses, and hopefully the newer versions will resolve some of that, but it's pretty impressive that it actually got it right on three separate occasions.
Rob Collie (08:39): Yeah, I mean I started off by just saying, "What kind of player is Sparr," last name Sparr. Like, instead of saying Ryan Sparr. That was my first one. And I was halfway expecting it to maybe not answer that one. There's only one person with the last name Sparr. It was very, very, very cool that it got that one. The Dobie one blew my mind, the Shorty one blew my mind, and then the Joe Flow one, I was just like, "Okay, what is going on here?" But I do think this is the kind of thinking we need to adjust to, when we are in control of the dashboard. So we're building the model and we're also building the dashboards, and we all agree that the model has far more capability than what the dashboards can represent. The dashboards are kind of like a bottleneck on all of that knowledge, but it's the best we've had, and we've never even thought to question it for many, many years.
(09:27): We never even thought to question it. The dashboards weren't perfect as a conduit for that knowledge, and now we're seeing that they're not. It's almost like a tyranny, the dashboard regime, in that I, the dashboard builder, the model builder, get to control the conversation with the users in a way that isn't quite right. Dobie is Dobie to all of them. No one ever calls him Aaron, ever. In all the years I've known him, I've never once heard anyone call him Aaron, even in jest. So the symbol running around that represents this human being is not one that's in the model. And that's just the tiniest example of what I'm talking about, which is that our constructed experiences, our constructed dashboards, are our conception of how people should interact with our information.
(10:20): That's when we're at our worst. When we're at our best, these dashboards are our best hypothesis, our best theory of how they want to interact with things. It's still a guess, a more or less informed guess. And by the way, not everyone wants to interact with it the same way, so there's no way you can get it right for everyone. It's like ordering a single pizza for 5 million people.
Justin Mannhardt (10:46): Impossible.
Rob Collie (10:46): Impossible, right? So putting the control in the hands of the end consumers of the information is going to stress our models in ways that we are at least by default not prepared for.
Justin Mannhardt (11:03): Even before LLMs exploded onto the scene and we've now seen sort of the early stages of where they're going to go with respect to working over semantic models, there was always a certain class of dashboard like the dashboard in your car. You're always going to look at that thing, because it's very process driven or it drives an executive meeting or whatever the case is. You're going to need that thing. Then there's a small team of people who are just constantly doing all kinds of ad hoc analysis over these models. It's not possible to have enough dashboards to cover that demand area. And so now AI coming into that space is going to be really exciting, because it, like you're describing, puts the user in control of the conversation, but also new because it's going to introduce concepts that we've never had to think about in our models.
Rob Collie (11:59): Even in the case where we have built a dashboard that is meant to address a particular need, they're now going to come to this system. If it's like the dashboard, the one that everyone knows about and knows intrinsically, those sorts of dashboards that you use every day, you can interact with them with your unconscious thought and your mouse way faster than you would with any chat interface. That's going to be a really high efficiency interaction, and you're still going to want those dashboards for sure.
Justin Mannhardt (12:27): For sure.
Rob Collie (12:28): Forget the ad hoc analysis for a moment. Even in the case where I, as a dashboard developer, a builder of dashboards, have built a dashboard to answer the question that's in their head, they don't know where it is. Even if they've seen it, they don't necessarily know, and this is going to blow people's minds unless you think about it for a moment. Even if they've seen that dashboard, even if they flip to that dashboard right now with the question in their head, there's not a 100% chance that they're going to go, "Oh, this is the place I need to be to answer my question." It takes a lot of cognitive work.
(12:56): And if the dashboard is constructed in a way that's... The perfect example I have of this was someone asking me, "Hey, can you build me a dashboard for the hockey stuff that shows who has the highest winning percentage?" Or something like that. And I'm thinking to myself, no, that's the skater sortables dashboard, and win percentage is one of the columns on there, and you just click on the top of that column and it sorts descending, and you're going to know, right? I'm thinking that, but they don't know that. Even if they've seen that in their recent history, they don't know that. So that's good news that this interface can now help them answer their question, but we don't get to control the way they ask the question.
Justin Mannhardt (13:32): That's right. Let's just imagine like a simple bar chart that's showing some type of metric over time, but someone wants that filtered down to certain categories or subcategories or segments of the data, but those filters are nowhere to be found on the report page. That cognitive load is also spent trying to figure out, "Is this actually what I'm trying to figure out?"
Rob Collie (13:55): Yep. And the names of the metrics, the titles we put in each visual. Even if you're getting it right 80% of the time, which is a pretty high batting average when you consider again that everyone's different, that's still a 20% failure rate, which means that on an average day everyone's going to run into a failure. I think this is going to be one of my favorite demos, essentially, because it's both first and foremost jaw-dropping that it connects the dots between these nicknames and the names without any help at all, while at the same time also a cautionary tale. When future versions of Copilot are released, it's going to behave differently in these situations. This is one of those places where you can just be sure. We're not sure exactly what the new behavior will be, but it will be different.
Justin Mannhardt (14:42): Yeah, it's hard to predict even where the clarity in solving some of these challenges is going to come from. Is it going to be a more advanced class of metrics available in models, is it going to be more enrichment on dimensions, or is Copilot, instead of jumping to a linear conclusion, going to be more capable of reasoning with the user and helping clarify what they want, and then rendering queries or whatever it needs to do to get that type of answer?
Rob Collie (15:13): Yeah, it's going to be a fuzzy hybrid of multiple approaches to answer this. Another perfect example of this is, I don't have a dashboard that is like, who's the best player. I don't have a measure anywhere in the model that says player score or best player or whatever. Nothing like that. What's the first thing that people ask when they walk up to this thing? "All right, who's the best?" Even me.
Justin Mannhardt (15:42): Yeah.
Rob Collie (15:43): That was the first thing I asked it. "Who's the best player in the league?" And hanging out at the bar with your friends, ribbing each other over whatever. Like, what's the question? "Okay, who's a better player? Me or him?" That was one of the first questions that Sparr asked it.
Justin Mannhardt (15:57): Did it say Sparr was a better player?
Rob Collie (15:59): It did not. But it used to give up on that question, until I gave it some prep for AI instructions. Because it had no sense of what made a better player.
Justin Mannhardt (16:12): Sure. And it didn't think to ask. It just said, "I can't."
Rob Collie (16:15): "I can't do it."
Justin Mannhardt (16:16): They might as well be like, "Sorry, X094X7536ZY24."
Rob Collie (16:21): Not quite that bad, but yeah.
Justin Mannhardt (16:24): "Contact your system administrator."
Rob Collie (16:27): Reminds me of the old favorite error in DAX, which was, "An expression has been used as a Boolean," and blah, blah, blah, blah, blah, and I'm like, "Oh, God." And then they eventually changed that to true/false instead of Boolean, but it's still awful. Anyway, so it didn't have any sense of better, so I had to tell it how to translate better. I'm having some real success with that. When I ask about who's better or to compare players or whatever, I told it, lean heavily on the per game version of the metrics, not total goals scored lifetime. People who've been around the longest are going to do the best in that. And also on their winning percentage.
(17:06): But the other thing I told it, I'm not having as much success with this; I suspect future versions will do better. I also told it, "When two players don't play the same position, be a little bit more nuanced in your comparison of them." Players who play on offense are closer to the point of the action where stats get recorded. In that league, we don't record shots blocked or breakaways defended. There aren't any defensive stats, really. It's all offensive stats: goals, assists, penalty minutes, and if you're a goalie, goals allowed.
Justin Mannhardt (17:43): Not even saves?
Rob Collie (17:45): No one's there recording saves. I mean, holy hell, this is a really low tech environment here. We're lucky to have a full-time scorekeeper, a rotating volunteer for every game, who's even entering things in the scorecards. So of course the forwards are going to have better numbers than a defensive player. If a forward and a defensive player have the same sort of numerical profile, the defender is almost certainly the better player. I'm trying to get it to include that nuance in its answer, and it doesn't, when I ask it to compare two players that play different positions. But then I can follow up and say, "Do they play the same position?" And it goes, "No, they don't." And then I say, "How does that influence your answer on comparing them?" And then it gives me a better answer.
(18:25): If I actually force it to answer the question, it then leans on my prep for AI instructions and says, "Yeah, okay, you're right. Comparing Leo..." which is who Sparr compared himself to. Leo and Sparr were the two that formed the league together. By the way, there is no Leo in the players table either. It's Leonid.
Justin Mannhardt (18:45): Clever.
Rob Collie (18:46): All of this stuff is nicknames, right? So who's better, Leo or Sparr? Leo plays forward, Sparr plays defense. It just says, "Leo's a better player, because he's got better numbers." Then I ask it the follow-up, and it's like, "Oh, you're right. It's kind of hard to tell, isn't it?" And that's the answer that I want. I want it to give me that answer.
Justin Mannhardt (19:04): It'll be an interesting trait to see how well it comes through in future versions of Copilot. One of the qualities that tends to separate, in my experience, really great analysts from others is that healthy skepticism and curiosity about what they're seeing, which is different than the LLM that just can't help but satisfy your question. Because we're working with our data through semantic models, you kind of want Copilot to have that sort of curious, not quite sure, skeptical attitude about some of these questions, so that it guides itself to a point of clarity with the user before it jumps in and answers some of these things.
Rob Collie (19:46): Let me give you one more human interest angle from this, and I don't know in the end if we're going to discover parallels to this in the business world or not. Sparr's playing around with it, and he comes back and says, "Hey, it currently answers a type of question that I don't want it to answer." Remember, he's like the caretaker of this community. He's not just the lead commissioner. He has created this community and makes the community work, and it's a valuable thing. It is a gem. He says, "Right now, if you ask it who the worst player is in a particular category, it will tell you." He's like, "I don't want that energy around here." He's so caring and responsible. He doesn't want this AI interface to answer that question, and he says, "Can it instead respond and say, 'When you're playing with your friends, no one is the worst'?"
(20:36): And I put that in the prep for AI instructions. Do you think it worked?
Justin Mannhardt (20:41): I'm going to say it did.
Rob Collie (20:42): It did. It absolutely did. When you ask who's the worst, blah, blah, blah, it comes back and says exactly that. It also does say something almost apologetic, like it knows it's not answering your question, but it does give the sentence, "When you're playing with your friends, no one is the worst."
Justin Mannhardt (21:01): The human interest side of that is really interesting, but it's also... it's an intriguing and honestly an encouraging fact that those instructions in the prep for AI will work at that type of level. Because you can imagine there'll be all sorts of scenarios in a business context that you can't boil down to lookup tables and fact tables and metrics and all this stuff. You're going to need to give it that type of clarity. And the fact that it responds well to this example is a good thing.
Rob Collie (21:36): Yeah. Again, I think it probably has minimal, if any, parallel in the business world, because the only kinds of questions that should be off-limits are ones where you don't have access to the data. And because Copilot does respect the existing security model, if you haven't been given access to your peer's performance, you're not going to be able to ask about it, and you're not going to need special instruction. But the fact that there is that kind of high-level knife switch type of instruction that can be given is really kind of encouraging.
Justin Mannhardt (22:04): Yeah. Or even like you were describing earlier, where you have in your prep for AI instructions to use fairer metrics, the per game versions, things like that. You can imagine, when comparing performance across business units, we want to use these metrics as the basis of comparison.
Rob Collie (22:20): Interacting with real stakeholders in something like this just opens your eyes to things you would never expect. Another piece of feedback he gave me: I had given prep for AI some very specific instructions. I was like, "Look, it comes down to points per game and winning percentage." I wanted to be really helpful in prep for AI and say best players are defined by that. It just would come back with those two stats. Just do the comparison of the two players or whatever, by those two stats. And Sparr said that's a not very interesting answer. You're expecting more from the AI interface than just having these two numbers dumped on you, right? And so I went back and changed the instruction to be, when asked a question like that, who's a better player or who's the best or whatever, give a more nuanced, long form answer, leaning on the class of statistics that are per game. And then I had to tell it, "Don't count penalty minutes per game as a good thing."
(23:16): It didn't know that higher numbers there were bad. Because it was just called PIM. The measure's just called PIM. If the measure had been called penalty minutes, given its knowledge of hockey, it might've figured that out, so there is nothing that's going to substitute for direct interaction with the stakeholders. It's really going to force us to do things the right way. And the idea that we could ever really nail dashboards for everybody without lots and lots and lots of visceral real-world interaction with the stakeholders, it was always kind of an illusion. This interface is really laying that bare for me.
Justin Mannhardt (23:57): I'm just reflecting on the number of times somebody wanted a dashboard to be able to do something, and it either didn't or it was on a page nobody knew how to get to, and somebody would say, "Well, we didn't understand that was a requirement." This is going to accelerate those types of realizations, and we're going to need a much different style of proactive thinking about how people are going to engage with data, to make sure that they get the information that they need and want. Going back to the beginning of this episode, talking about Dobie: maybe I know someone as Dobie, or, in a business context, I refer to a class of products with an acronym, and it's not present there, and it's giving me an answer, but I don't have a means to clarify how it got there.
Rob Collie (24:46): I think this hockey thing, it's kind of neat because it sort of dials up the rate at which we're going to encounter these sorts of problems. We're going to encounter these problems more frequently in this hockey context than we are in the business context, maybe, but it's not going to be like 10 times as much, it's going to be like twice as much. A dashboard that calls a product XYZ when everyone else calls it something else, as long as you subconsciously know that, you know to go looking for it. But when you sit down and ask the chat interface, you don't want to have to type the name that you don't think of it as, the one you translate into all the time. You want to type the name that you... We're going to run into this for sure.
Justin Mannhardt (25:25): Oh, yeah. None of us are innocent here. We've all done, air quotes, "creative things" in models, to achieve something on the visual canvas or to pull off certain granularity tricks. It'll be really interesting to see how some of these maybe more old school workarounds play in this LLM space. Again, we did those things to achieve something on the visual canvas. Disconnected tables or whatever it was, how those things hold up in this environment will be interesting to see.
Rob Collie (25:59): Oh, very interesting. If it doesn't detect a relationship between two tables-
Justin Mannhardt (26:03): I'd have to go find it, but I've personally done some pretty creative things that involved, like, row-level security, but then you wanted teams to be able to get comparative metrics, like an aggregated, anonymized comparison table. Those kinds of tricks you would do for the visual layer, and so I'm just curious, how do some of those quirks play in the AI space?
Rob Collie (26:30): In some sense, anything I do on behalf of the visual layer, on the one hand, you think that's completely going to still be legitimate for the Q&A, the natural language interface. But on the other hand, the things that I've done that are sort of forcing the visual layer to do something that it doesn't really necessarily know how to do, it's an inventive solution. When the LLM inspects the model, is it going to be able to pick up on, "Oh, that's how I answer the question"? So yeah, I understand what you're getting at. It's not the fact that it was built for the visual interface that's the problem. We were overloading concepts and using them in ways that they weren't originally explicitly intended. The inventive stuff, fun stuff.
Justin Mannhardt (27:15): And the user, they have a completely different experience, so when you roll up to an open chat window, your brain is in a different spot.
Rob Collie (27:24): As it should be. That's what we want. We want you to be able to maintain that natural mindset when you're using this thing, otherwise you're probably not going to use it.
Justin Mannhardt (27:35): Even dashboards I know really well, the mental inertia where I'm like, "Oh, what page? And do I..." Now I'm just in a thin report building...
Rob Collie (27:45): Our Power BI estate at P3 is large enough, comprehensive enough, that I have this problem. I've seen things answered places, but I don't remember where, or I don't even know which model to go to, which workspace to go to. Again, even for someone relatively technical like me, it's paralyzing. Sometimes it's very much the thing that makes me talk myself out of needing an answer to the question.
Justin Mannhardt (28:13): You just find yourself back in Slack, asking, "Hey, do you know this?"
Rob Collie (28:18): There's an old Mitch Hedberg joke that I love, which is like, "I sit around in hotel rooms writing jokes, but every now and then I've left the pen on the other side of the room, so then I'm left with this conundrum of either getting up to get the pen, or convincing myself what I thought of ain't that funny." It's going to be a brand new way of looking at everything. I'm excited for it, but I do think it's going to bring an additional level of responsibility to us as model builders. Professionally, we can adapt to it, but by default, we're not prepared for it. We're not anticipating the increase in the scrutiny, the level that our models have to reach. We've been able to cover for their sins by controlling the conversation. We're not going to be able to do that anymore, and I'm ready for it. Unleash the masses. Let's go.
Justin Mannhardt (29:15): When Copilot improves and keeps improving, we're all going to find ourselves revisiting models that we thought needed nothing.
Rob Collie (29:23): That's right. 100%.
Justin Mannhardt (29:25): I'll see you there.