What Exactly IS an AI Agent? We Propose a Friendly, Non-Gatekeeping Definition

Rob Collie

Founder and CEO

Justin Mannhardt

Chief Customer Officer


This isn’t another AI think-piece; it’s a full-on data brawl. Copilot is out here plagiarizing Rob’s pivot table crusade while the self-appointed nerd police try to lock down the definition of agentic AI. Meanwhile, thirty years of fantasy football become the unexpected proof that tuning beats buzzwords every single time.

What starts as a slip from Copilot turns into a bigger story about how AI really works. Off-the-shelf tools can sound impressive, but they collapse into clichés when they’re not tuned to the person using them. The difference isn’t just in efficiency, it’s in credibility. Get it right, and AI amplifies your voice. Get it wrong, and you sound like everyone else mailing it in.

Don’t settle for AI that sounds like everyone else. Listen in and hear what happens when tuned workflows collide with real-world stakes.

Episode Transcript

Rob Collie (00:00): Welcome back, Justin. This is unprecedented. Two in a row.

Justin Mannhardt (00:03): I think it's three, isn't it?

Rob Collie (00:04): Is it really three in a row? Then really unprecedented.

Justin Mannhardt (00:07): I could be completely wrong, but I'm going to call it three until someone proves me otherwise.

Rob Collie (00:12): It's just been the summer, people with families and things. You go and you take trips and you do vacations and things and you also get sick, again because of kids.

Justin Mannhardt (00:18): No, you're right. This is two.

Rob Collie (00:20): Is this really two in a row? Yeah, we're a data firm. We could go look.

Justin Mannhardt (00:23): Yeah. We have the receipts.

Rob Collie (00:25): Last week, we started off with a rant about something good happening, pivot tables being auto-refreshing, and why that was so pettily upsetting to me. I've had a follow-up text conversation with Brian and Dave after that. It's been pretty funny. Dave went and asked Copilot if Microsoft should rebrand pivot tables. Copilot came back and said, "Yeah, it does seem like it's techno jargon gatekeeping, and if you wanted it to be used by more people, you would want to rename that." He goes, "Okay, what should we call it?" It came up with three suggestions, and the first one was summary tables.

Justin Mannhardt (00:59): Unbelievable.

Rob Collie (01:00): Yeah.

Justin Mannhardt (01:00): Microsoft 365 Copilot is saying this.

Rob Collie (01:04): Yeah, I've developed a bit of a habit lately, a game that I like to play, which is, I talk to an LLM and I explicitly tell it not to search the web because I want to know what's baked into it. I want to know what it knows about the world without resorting to web search, because it's fascinating finding out the frontier of what it knows. I can ask it, where did Rob Collie live in the early 2000s, and it says, "Oh, he was probably in Seattle, because he was at Microsoft." That's, again, with it not searching the web. It's bananas. You ask it, why did the McDLT sandwich eventually get canceled at McDonald's? It has a very, very detailed understanding of all the dynamics that went into the canceling of the McDLT extra duper styrofoam sandwich. I asked ChatGPT last night in the wake of that conversation, "Don't search the web, but what did Rob Collie want to rename pivot tables to be?"

(02:00): What I was wondering was, have the LLMs been trained on me? Essentially the only person in the world who's been saying that pivot tables should have been renamed, as far as I know, is me. What if the summary tables suggestion that Copilot made to Dave came about because essentially the only data it's been trained on about what pivot tables should be named instead is "summary tables," from me? What if that was how it arrived at that? I want to think that it's arriving at the right answer just like I did, confirming it, but what if it's just been biased by the only data it's got on this?

(02:30): I asked it. I asked ChatGPT, figuring that it's the same OpenAI behind the scenes for Copilot as well. Don't search the web, and the answer starts off really, really compelling: Rob thought that it was a gatekeeping techno-barrier thing. He campaigned hard for it to be called Data Cruncher. But it's making up all of this backstory. I'm like, yeah, tell me more about what Rob thought. It is just absolutely manufacturing it all. It wasn't saying the real reason we didn't change it, which was that it was just too much of a shift in the object model, the documentation, people's understanding that they've got to change their noun everywhere. All of the reasons why we didn't do it, none of those came up. It was just completely manufacturing something. Then I say, "Okay, now search the web and tell me what Rob Collie wanted to name pivot tables," and it just nails it, and it references our podcast.

Justin Mannhardt (03:33): As opposed to an older blog or something.

Rob Collie (03:36): Yeah. The thing is, a lot of my older blogs, unfortunately, have been cleaned up for SEO purposes.

Justin Mannhardt (03:41): Oh sure. Yeah, that makes sense.

Rob Collie (03:43): Some of my favorite writing, I need to circle back with our web team and say, "Hey, where is that stuff in the archives?" I don't want to lose that stuff. There's some things I wrote that were really valuable, at least to me. Now that I've done that, I have an idea of a new, really informal working definition of agentic AI, of what we mean by it. In the same way that, if you were around in the data world circa 2010, there was this phrase 'big data.' If you were one of the ultra cool kids in the know, you understood what big data was: really, really big, terabytes and petabytes of data and everything. But to most people, big data was almost their first exposure to BI. When people would ask me, does your company do big data? Most of the time the correct answer for them was, "Yes."

Justin Mannhardt (04:37): "No" didn't connect.

Rob Collie (04:39): Yeah. To them, several hundred thousand rows of data, that certainly seems pretty big, and it is. You had to gauge the audience who was asking the question. The official, gatekeeping, uber-nerd definition of something isn't necessarily what the rest of the world ends up using. I think there's probably that same split in reality today with the concept of agentic AI. There's a gatekeeping uber-nerd take: if it's not X, Y and Z, it's not agentic, and you losers are calling this thing agentic when it isn't. There's that kind of person, and those are not terribly useful people.

(05:13): To me, agentic AI is just coming to mean any custom-tuned, repetitive workflow that you have at your company or even in your personal life. Repetitive, I think, is key, because if it's not repetitive, you don't have the opportunity or the ROI in training something up or building a custom workflow around the LLMs to make that workflow better. It's any line-of-business workflow that isn't just walk up to ChatGPT and ask it a question. There's so much usage of ChatGPT, of LLMs in general, that is cold start. We've all got it. I probably ask seven or eight cold-start questions a day, but these other things that you start to tune to improve your workflow, they can vary greatly in sophistication.

Justin Mannhardt (06:01): I like this definition, because it puts into contrast some of the features you're seeing in these subscription products like ChatGPT. As a counterexample, not finely tuned, not repetitive, here's a quick backdrop of context. I work at an office outside of my house, something I choose to do. However, school has started. Part of my responsibilities as a dad is to pick the kids up from school, which means I tend to end my workday at the house. I realized I want a bit of a workstation at the house, because I don't like sitting at the table. ChatGPT has this thing called agent mode and you can turn it on. I said, "Hey, I need to set up a workstation at home and this is what I need and here's my budget. Can you give me a list of things?" It goes off and it searches the web and does all these things, but I liken that experience to just asking ChatGPT a question. I don't really care that it searched the web and looked at all these rankings and made me product recommendations.

Rob Collie (07:09): What does it do in that situation with Agent Mode turned on that it wouldn't normally do? What's the actual difference?

Justin Mannhardt (07:15): I think the main thing is it will interact in front of you with more web pages. It'll do the web search automatically. It'll create a file. It just did more things all by itself.

Rob Collie (07:29): Normal ChatGPT will search the web as soon as you ask it a question that it knows it can't answer without searching the web, right?

Justin Mannhardt (07:35): Why did I turn this on? That was my question. I was like, did I need to turn this on? I don't know. Maybe.

Rob Collie (07:40): The difference might just be that it's got a coefficient somewhere, some sort of variable that's been defined as six instead of two. How far am I willing to go in response to a request?

Justin Mannhardt (07:50): Someone will hit me up on LinkedIn. They'll be like, "Hey, I just wanted to let you know what Agent Mode really does," so I like your definition a lot. That's the right way to think about it.

Rob Collie (07:58): Along these lines, I have a personal late-breaking agentic AI use case. Even though it's from my personal life, and as you'll hear in a moment it's not about a terribly serious topic, it's also deadly serious at the same time. As you're listening to this example, I would encourage you, dear listener, to think about cases in your business that resemble this, because there are plenty of examples in everyone's business that resemble what I'm about to say. This is my 30th year playing fantasy football.

Justin Mannhardt (08:31): 30th year.

Rob Collie (08:32): 30th season of fantasy football. I started in '96 and this is 2025. It's my 30th season playing fantasy football. This is one of those achievements, more impressive or sad depending on how you look at it, that was a more extreme example when I had been playing for seven years, because at that point, as far as the general public was concerned, fantasy football had only existed for two. I'd been playing for five years with spreadsheets by the time the rest of the world started to get used to being on a website and things like that. But now, 30 years is so long ago that no one knows that it predates the mainstreaming of the activity. Anyway, I have a lot of high-labor workflows.

Justin Mannhardt (09:15): You do indeed.

Rob Collie (09:17): That I engage in during a fantasy football season and they're repetitive. Let me give you an example of one. This is just a fun one, not even competitive. I like to send, at least to my family league, I like to send either a predictions email each week or a recap email each week. It just gives the group something to attach to. It's like a water cooler. It gives it a center of mass so that the community, especially with this family league, because there's people across the country that we don't get to see very often or talk to very often and it's an excuse to interact with them and it's an excuse to have fun with them and all that stuff. Having these emails, if you don't have them, you're missing two thirds of the value of even having the league.

Justin Mannhardt (09:57): Yeah, you're just looking at your stats on your app.

Rob Collie (09:59): Yeah, you're going through all the work, but you're getting only a third of the value, but no one else is going to send the emails. It falls to me and I'm really busy. Earlier today, a couple things. Number one, for the leagues I'm commissioner of, and with the help of the admin of the third league, the one I'm not commissioner for, we set all the leagues to be visible to the public. When you're not logged in in a browser, you can go to our league sites now and see people's records, people's teams, all that stuff. I was hoping that was sufficient to allow LLMs to access them. It is not. ESPN does appear to be blocking direct access. It knows that it's being crawled by Claude or ChatGPT. It's weird, though. When it goes through web search, it's not blocked that way.

(10:47): Claude does try and is clearly being blocked. ChatGPT won't go to an individual URL. It will only access it via its web search tool. I don't know if I gave it the name of our league and said, "Can you find it?" I doubt it's even indexed. Anyway, the point is ESPN doesn't want to be crawled, so I have a problem there, but let's set that aside, because I think we can crack that, because if an anonymous browser can get to it, that also means that something like Power Query can get to it. I think we just need a new tool in our custom AI environment that I'm going to be asking Jamie to help me set up, which is an anonymous browser whose only job is to download the HTML file or take a screenshot or something so that I can get a picture of all the matchups.
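The anonymous-browser tool Rob describes could be sketched like this, assuming a plain HTTP GET with browser-style headers is enough to reach a publicly visible page (the URL in the comment and the league ID are hypothetical, and ESPN's actual blocking may still defeat this in practice):

```python
import urllib.request

# Browser-style headers so the request looks like an anonymous browser visit
# rather than an LLM crawler. Purely illustrative; sites may block anyway.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0 Safari/537.36"
    )
}

def fetch_league_html(url: str) -> str:
    """Download the raw HTML of a publicly visible league page."""
    req = urllib.request.Request(url, headers=BROWSER_HEADERS)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

# The saved HTML (or a screenshot) then becomes context handed to the LLM
# when it's time to write the weekly league email, e.g.:
#   html = fetch_league_html("https://fantasy.espn.com/football/league?leagueId=12345")
```

As Rob notes, Power Query could do the same fetch; the point is only that anything presenting as an anonymous browser can capture the page for the agent.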

(11:29): It's an eight-team league, so it's taking four screenshots and saying, "Here's who's playing who this week, their teams and their records and everything." Get me off the blank page, help me write the league email. But my friends, that wouldn't be my definition of agentic. Even though it is repetitive, it's not tuned. It's not customized. It's not given extra context or extra workflow around it. Certainly, one of the things I want to do is be able to go to it and say, "Hey, I'm ready for this week's predictions email. Let's write it." I do want it to be able to just call to the website. That's a solvable problem. Is it worth our work? Yes, no, maybe, I don't know. But that's something I definitely want it to do, but here's the other thing.

(12:07): I created a database and I'm telling it about the people in the league. I'm telling it who's who. These two are brothers. This one is a preteen. These two are his parents. I'm working with Claude for this one and Claude's asking me. I'm saying, "By the way, Claude, ask me follow-up questions, because you know we're going to be writing fantasy football emails. What would you want to know about these people?" I'm having this Q&A with it. It's interviewing me and it's storing information about these people.

(12:32): It's asking me like, "Hey, is Steve a trash talker?" I'm like, "Oh, yeah, Steve is definitely a trash talker." "Can he back it up?" I'm like, "No, not really, but it's never stopped Steve." It's getting that all in there, so the personality of the people involved and where the line is of how hard to joke and what the tone should be and all that stuff, but I also have a database of how to write like me. Of course I want it to write like me, because here's the thing. I'm going to be editing whatever it gives me and I'm going to be interacting with it to help tune things. I'm not just going to fire and forget. There's still a human in the loop here and I will even give it, in a given week, I might even give it some inspiration like, "I definitely want to emphasize this theme."

Justin Mannhardt (13:17): Something you noticed.

Rob Collie (13:19): Right, but it gets me crucially off the blank page, which is crushing. The blank page is crushing.

Justin Mannhardt (13:27): Always.

Rob Collie (13:28): It's the difference between doing it and not doing it. If all that came out the other end every time was ChatGPT or Claude's completely from scratch impression of our league, who's going to want to read that? Not only would I be editing it, I'll be editing it in ways where I'm putting personality back into it.

Justin Mannhardt (13:47): It'd be quite heavy on the edit.

Rob Collie (13:49): Right. We had Forrest Brazeal on the podcast. It seems like a long time ago now, and one of his sayings has stuck in my head ever since then, which is, why should I bother to read something that you didn't take the time to write? I believe that. I think he's correct to ask that question. You know it when you receive something that was just written by AI.

Justin Mannhardt (14:11): Sometimes it's just so egregious. You've seen some of these examples on social media where people show even things like a recruiting email where the prompt is still sitting in the middle of the email: "Insert an interesting nugget about..." Gross.

Rob Collie (14:26): Yeah. I think there's both a human and a business tightrope to walk here. We do not want to sacrifice our humanity to these tools.

Justin Mannhardt (14:42): Not at all.

Rob Collie (14:43): By the way, if you do sacrifice your humanity to these tools, it's going to be bad for your business as well, because you're just going to look like everybody else that's just mailing it in, that didn't bother to take the time to write it. At the same time, perhaps paradoxically you might think at first, I am all in on the idea of using what I'm calling agentic AI, my more modest definition, my more realistic definition of agentic, not the gatekeeping uber-nerd one. I'm all in on using agentic AI to help us express ourselves.

(15:21): Okay, how do I reconcile these two? First of all, I'm putting in the effort upfront. I'm investing in the customization of these systems so that they do an authentic job, or at least from the beginning, I had to teach it to write like me and it's still not perfect. Of course, it's not perfect. Over time as I see it doing things that I wouldn't do, I say, "Okay, let's add a new rule that we don't do that. I would never talk like that and I would say something like this instead," so I'm investing effort in telling it about the league. I'm investing effort in telling it about how I write and how I sound so that the distance between what it spits out each iteration isn't so great from what I would authentically write myself. I don't have as much work to do to sand off the edges.

Justin Mannhardt (16:07): Like you said earlier, it could be the difference between the email getting out to the league and not getting out to the league.

Rob Collie (16:12): That's right. There's so many times where something, in terms of speed, time and energy cost, it's not like, "Oh, you got it done faster." It's the difference between it happening and not. We're going to get an email now and we weren't. We weren't going to get emails. Now, the other thing that I think makes me feel completely okay with it is that in the end, I'll just admit to a little trade secret here. I've been using an agent behind the scenes that is not just trained on how Rob writes, but is also trained on everything about P3 Adaptive, our core values, our differentiation, our ideal customers, everything about us that's relevant. I've also been training it on what Copilot is, for instance, what Microsoft Power BI Copilot is and what our opinion is on it. I put a lot of effort into training up this system, customizing the system, and I've been using it to help me write some of my LinkedIn posts. Now, am I just saying, "Hey, write me a LinkedIn post?"

Justin Mannhardt (17:07): Post it every Thursday at 2:17 PM.

Rob Collie (17:12): Anyone that follows my LinkedIn account knows there's no such thing as that kind of regularity. We're definitely not dialed in, but what I will do is, I'll sit down and I will just jam out in totally raw, informal form. I'll just say, "Hey look, I've got this thing I want to write. Here it is." I'm going to tell it the story that I want to tell, but I'm not trying to wordsmith it and get it into the way that I write professionally. I'm just talking to it, essentially. It says okay, and it gets me off the blank page and it doesn't turn it into ChatGPT-ese, because it's trained on who we are. It doesn't say things that we wouldn't say because, again, I've given it the guardrails and the guidance of how we approach things and then I'll iterate with it.

(17:56): I might spend, on a 200-word post, I might still spend 30 minutes, even sometimes 45 minutes on this thing. It's not like I'm not putting in the time. It's not like I'm not putting in the care. In fact, quite the opposite. I'm putting in even more care now because the cost of each improvement, of each fix is so much lesser. Even if I had been capable of sitting down in 30 or 45 minutes and writing a 200-word post, which I'm probably not, because my standards are too high. Let's say that it was the same amount of time, again, which it isn't. The energy cost, the amount that it drains me is night and day different.

(18:37): Having this thing that's trained and knows all the things it could possibly know in advance that matter and does a really good job and understands what I'm saying in terms of, no, no, that paragraph sounds a little too X. I want it to sound more like this. Here's an example. Boom, it just does it. That's like riffing with a teammate that's not getting tired. That teammate is sustaining me. What comes out the other end, I very much feel passes the test of, I wrote it in terms of the human and the business sense. That's the bar I'm going to hold it to. The same thing is going to be true of this personal fantasy football agent that is going to write prediction emails. By the way, it's also going to be researching trades for me, because this is not just about-

Justin Mannhardt (19:23): This is about winning. Let's just be clear. This is about winning.

Rob Collie (19:28): It is about winning, yes, and we are going to be scanning the website. I don't know. This is a thought thread that I've been really pulling on today, because of this idea of this fantasy football agent thing. I was thinking about it from the trade perspective last night, but this morning, I realized that it's just as useful to me in this league email sense. We were talking to a client today who has been transcribing and recording two-hour budget meetings at HQ and then handing that off to one of the services. It produces an AI podcast that describes all the decisions that were made. It's a podcast that describes the budgeting decisions that were made, what they are and why.

Justin Mannhardt (20:20): An internal podcast?

Rob Collie (20:21): An internal podcast, and all their people who weren't in the meeting, when they're driving in their cars the next day, because there's a lot of driving around in this business, they're listening to the podcast on their way to the work site.

Justin Mannhardt (20:35): That's cool.

Rob Collie (20:36): Meet the people where they're at. They're in their cars. They don't want to listen to a narrated article; that would put them to sleep. But they'll actually listen to two fake people.

Justin Mannhardt (20:48): That's amazing. Yeah, I don't want to listen to the recording of the budget meeting either.

Rob Collie (20:53): Yeah, keep me interested.

Justin Mannhardt (20:54): Wow. Kudos. That's a good one. There's almost a maturity you can think about in your use of AI, either as an individual or as a team. Reflecting back on my use, there was a time where I was fumbling with this thing, not sure if it was useful. Then I got to this point where I don't create any professional document without starting in AI in some fashion. I do like what you described. I'll just word salad, "This is the problem, this is what I want to do, help me get structure." But then you can elevate that, to your point, with the tuning being a critical part. If the tuning isn't there, I have to put in a lot of effort to refine it, or in some cases give up on the AI and go back to the blank page, because I'm so far off the mark of where I need to be. Tuning increases the value: it reduces the startup cost and decreases the distance to the finish, while still being very authentic about it.

Rob Collie (21:58): Help me be me faster.

Justin Mannhardt (21:59): Yeah, more effectively.

Rob Collie (22:02): All right, with all that said, yes, I've known this about you for a long time. You write. You get off the blank page with almost any professional content that you're going to produce and share. You're leaning on your teammate, usually ChatGPT, to help you with these things. Here's the thing. I think you've been doing that within the confines of the commercially available capabilities of essentially off the shelf ChatGPT.

Justin Mannhardt (22:30): No doubt about it.

Rob Collie (22:31): I think you're really good at using those off the shelf capabilities. You're probably using them basically to the fullest of their extent. Then I'm over here in this other zone doing things that are not off the shelf. I've been a little bit stingy about it. I haven't been making that system available. Now, of course, there's a switching cost. When I was talking earlier about the definition of agentic, I think the things that you're doing with ChatGPT, the writing that you do and all that stuff, I think it does pass the agentic test, because the thing is, you have done a lot of tuning. You've just done it in ways that are accessible via the ChatGPT web interface. Why don't you walk us through some of that?

Justin Mannhardt (23:11): I have quite a bit of custom instructions set up in my account.

Rob Collie (23:17): Yeah. Just be clear for people who don't understand that. In ChatGPT, in your account, your account settings, there's personalize or something like that. You can give it some amount of text about you in these different text fields. One of Brian Julius's core principles is, don't waste any of those. Use every character. It's an opportunity to tell it about you and your business, whatever, your job, all of that. You've taken advantage of that feature. That's tuning right off the bat. Of course, that's tuning for all of your workflows.

Justin Mannhardt (23:49): That's tuning for all of them. I've described who I am, what I do professionally, what I do personally, what I like, what I don't like, how I want it to behave with me when it's responding. It does a very consistent job of that, which I like. If you haven't done that, you really should, because I can fire up a chat about a problem I'm trying to solve at work and feel like I'm interacting with the same thing as when I fire up something about planning a vacation for my family. I have just a common experience. It's like going to a friend, in a way. It's like, "Oh yeah, when I talk to Rob, that's how Rob is and we talk about a lot of things." I recommend doing that.

(24:33): I've probably gotten more structured with the way I prompt on different problems, to try and effect tuning that way. I use canvases quite a bit. This is maybe a leap, but you could think of a canvas as a shared memory or a shared data asset, in a way, where me and ChatGPT, we're both working on this document together and reading it every time, so there's some value that comes through in that way as well.

Rob Collie (25:01): You also use projects. You also, I think, mentioned previously that you reference previous chats.

Justin Mannhardt (25:09): Mm-hmm.

Rob Collie (25:09): All of these are ways where you're not really hacking. These are all methods for giving ChatGPT more information about you, more information about your workflow, more information about your preferences. Again, it's on this tuning and customizing theme, but it runs into some limitations. One of them is honestly just manipulating all of that user interface. It gets a little clunky.

Justin Mannhardt (25:32): Yeah, it does.

Rob Collie (25:34): When you're in a particular, let's say you're drafting some sort of document for internal consumption at the company versus drafting a proposal for a client, there are certain aspects about who we are and who you are that remain constant, but there's a difference.

Justin Mannhardt (25:51): Yeah, the audience is different. The purpose of the document is different.

Rob Collie (25:56): In that moment, you're dealing with a largely global system that knows about you. Now in order to get specific, you need to go into a project or you need to reference a previous chat or something like that. You need to manually do something. This is the road that I walked. I started off with the built-in ChatGPT stuff. I filled up the custom instructions for me and I was astounded at how much better that made things. Then I said, "Oh, look at this, the custom GPT route."

Justin Mannhardt (26:24): It's another layer of custom instructions.

Rob Collie (26:26): It's another big text box where you can put custom instructions and it'll even help you write the custom instructions if you want to chat with it. Here's the thing about that that's cool is that those instructions only count when you're using that custom GPT. You're able to invoke it, opt in. Now I'm going to write a proposal. Now I'm going to write an internal thing. Even then, those rules, the custom instructions, they become clunky to edit over time. You can publish the custom GPT, which is another place where the personal manipulation of the user interface breaks down. It's difficult to share. It's difficult to enable this for a team.

(27:08): The custom GPT gets you in that direction, but it has a pretty limited capacity for instructions. It won't take that much before it says, "Hey, I'm full." Then you can add extra files and things like that that it will search over. That's great and all, but if you want to just provide it a lot of instruction, I found that I outgrew it. That led me to this idea of essentially offsite storage, an external database, in my case in Notion. There's lots of places you could do this. You could do it in Azure or wherever: offsite storage of instructions, and then agents that modularly load just the databases that they need at the time that they need them.

(27:56): Here's another thing. With ChatGPT custom GPTs, let's say you want one that writes proposals, one that writes internal things, one that writes ad copy or something like that. All of them need to know who P3 is. They need a shared definition that they can all pull from, and you don't want to copy that three times, because it gets out of date. An individual agent should be able to pick and choose which databases of instructions it's loading. It's like picking off the shelf: I want you to load one, five and seven, and that makes the copywriting agent the ad agent; but load one, two and four, and that makes the internal communications helper. That level of modularity.
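As a rough sketch of that modularity (the database numbers and contents below are invented for illustration, not P3 Adaptive's actual setup), each instruction database is stored exactly once, and each agent just declares which ones it loads:

```python
# Shared instruction "databases," each stored once, so updating one block
# (e.g. who P3 is) propagates to every agent that loads it.
INSTRUCTION_DBS = {
    1: "Who P3 Adaptive is: core values, differentiation, ideal customers.",
    2: "Internal communication norms, formats, and audience notes.",
    4: "Internal helper behaviors: meeting summaries, drafting rules.",
    5: "Ad copy style: tone, claims we make, claims we avoid.",
    7: "Messaging guardrails for public-facing copy.",
}

# Each agent is just a named selection of instruction databases.
AGENTS = {
    "ad_copywriter": [1, 5, 7],
    "internal_comms_helper": [1, 2, 4],
}

def build_system_prompt(agent_name: str) -> str:
    """Assemble an agent's instructions from the shared blocks it declares."""
    return "\n\n".join(INSTRUCTION_DBS[n] for n in AGENTS[agent_name])
```

Loading one, five and seven yields the ad agent; one, two and four yield the internal communications helper, and the shared "who P3 is" block is never duplicated.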

Then also, frankly, there's the database's ability to hold more, and its ability to be updated granularly: over time, as you discover that it's off track somewhere, you can add a new rule or tweak a rule. You can hand-edit it, or, if you're the admin of it, you can have the AI itself help with self-updating. That led me to Claude Desktop and all this custom local install of various MCP servers and all that stuff. It was fascinating. Again, it was awesome for a week, but then I wanted to share it, and now I've got to publish all these instructions for people, all the shit that they need to install on their computer. Frankly, Justin, I still have not installed all this stuff on my laptop.

Justin Mannhardt (29:14): You and I, we sat down and you walked me through it: install this, and do this, and run this from the terminal, and do that. It didn't take us that long, but I was like, "Man, there's-"

Rob Collie (29:22): Brutal.

Justin Mannhardt (29:22): Brutal.

Rob Collie (29:23): Yeah, this is never going to be a team solution. That has led us to our own custom website, P3AdaptiveAI.com. It's purely for internal use. This website allows us, we have our own, essentially, ChatGPT style web chat interface, but we have backend control over it where we can install all the things it needs to connect to. We don't have to tell people how to install. We just send them to the website. That front end already has all of the right plumbing installed to talk to these databases, and we can pre-configure specific agents that can be favorited. You can just bookmark them and that agent knows to load this, this, and this.

(30:03): After it loads those, it can say, "Hey, I also have these other couple of things that you might want to load. Do you want to load any of those." You can say yes or no, depending on what you're going to do, and you're off and running. This is the thing that I haven't made available to you yet. There are use cases that you're using ChatGPT for right now, for which I think this custom agent framework that we've been developing, to be clear, we're first developing it for our internal use so that we can get good at it and then make it available as something we can do for our clients. Justin, I apologize that I have not come to you and said, "Hey man, did you know there's this company, P3 Adaptive?"

Justin Mannhardt (30:41): I'd love to be a client.

Rob Collie (30:42): I want to offer our services for free to you.

Justin Mannhardt (30:44): I hear they've got a fantastic consulting team, some of the best, smartest people on the planet. Could you help me?

Rob Collie (30:51): I've heard good things. I would love to help you with this. Okay, even the switching cost from your existing ChatGPT system isn't going to be great, because what we're going to do is we're going to say, "Go create us a database of instructions" and then we're going to go just copy/paste an old chat or whatever it is that ChatGPT is using for its context instructions, which you have to manually feed it or whatever. We just go grab that, give it to our system and say, "Reverse engineer from this the starting point for the rules that go in this database."

Justin Mannhardt (31:27): There you go,

Rob Collie (31:28): We're off and running. It's not going to be perfect yet. Then you're going to start using it and saying, "Hmm, I need a new rule," or I need to modify that rule and it'll do it.

Justin Mannhardt (31:37): Beam me up, Scotty.

Rob Collie (31:39): That's going to be exciting.

Justin Mannhardt (31:40): Watching you guys work on that has been really, really cool and really inspiring. That tuning concept you start to connect the dots of, it seems subtle and I think it seems subtle to me even up until recently where I could take a particular workflow and go to one of these subscription-based products, whether it's Claude or ChatGPT or Gemini or whatever, and get a lot of value from that. Then you lose this ability to keep it grounded in how you want to think about something. You could pick any business workflow. Let's go to a customer service situation. You're handling customer complaints. I could probably get ChatGPT to help me quite a bit in how to respond to those things effectively, but it's not going to have any consistency or grounding in policy or grounding in solutions that have worked before at our company. That next level of maturity when thinking about these agentic workflows that I think we're all on.

Rob Collie (32:41): You can even include rules like, always make sure to reference back to this core principle, almost referencing back to core values or goals that we've set as a company. You put a rule in that says, "Help me remember essentially to tie all of this back to where we're headed," because it's something that we're so focused on explaining the what and the how or whatever the initiative is or whatever the change is or whatever, but tying it back to the greater framework is something that sometimes slips our mind. It's easy to forget and it shouldn't. That rule could be a very helpful rule. It'll either ask you, how does this tie back so that we can write something that ties back or it's going to figure out a best guess at how it ties back, and it's going to put that in there. As you're reviewing it, you can go, "Oh, right, we need to do that."

(33:34): Even if that guess, its interpretation of how it ties back, isn't quite right, you can fix it. This modularity of tuning, you don't feel like you have to go out of your way to tune. Also when you're tuning, you're not taking great risk. With the custom GPT, it's going back and re-editing all of the instructions every time, essentially. It's a big block of text and it might change more than what you want it to change, whereas the database thing is like, "No, no, it's very, very precise." It's only going to go update a record and you've just got a much cleaner audit trail. Just the whole thing just feels like you're constantly improving it, and it's not expensive. It actually feels good. You're being good to your future self every single time you improve it.

(34:23): Then when you're able to take that and publish it like I have, we have multiple people at the company now who have access to this writing agent that always knows our brand framework. I just put a rule in. I showed you this yesterday. We just edited a rule into our core brand framework, into our AI-powering database, the database that powers our AI that explains exactly what our approach is to guaranteeing our jump starts, what it means, how we talk about it, and it's going to be consistently referenced like that every time.

(34:55): Human beings do this, too. We talk about a guarantee. Every time, we're going to say something slightly different about it, and that's not how you should be. You need to be consistent in marketing materials. You need to be consistent with people in general. It helps them remember. It helps them have clarity to go and put the effort in. We did this manually. We didn't have the AI write this. We as a team, as a marketing team, hand edited this database record, whereas a lot of the database records have been AI authored based on interactions and conversations and stuff. This one, we wordsmithed down to the punctuation and it's now encoded everywhere and we're gaining that consistency. I'm really just excited about all of this.

Justin Mannhardt (35:33): That's great. That's a lot of progress in a short period of time when you think about it. We have our own AI website, Rob, with agents and stuff in it.

Rob Collie (35:42): Hell, yeah.

Justin Mannhardt (35:43): Wired up to databases.

Rob Collie (35:45): More and more every day, actually, by the way. We spent a lot of time, money, and energy with a branding firm. Let's shout them out. Space Force Strategy. Love these guys. They didn't invent a brand framework for us. They reverse engineered it from who we already are.

Justin Mannhardt (36:01): Discovered it.

Rob Collie (36:02): It sounds like, okay, if they just discovered it, did they do anything? Yeah, they extracted the essence of what it is to be us and helped us get clear about it for ourselves without changing us. We don't have to change to embody this brand, but it results in quite a bit of content about us and how we should approach things and how we should talk about things and all of that. It arrives in a slide deck. How do you normally activate this? You make everybody read it.

(36:35): Do you make them read it every week? No. That'd be cruel. You make them read it once and you ask them, "Do you understand it?" Everyone goes, of course I understand it. I understand it. The people who wrote it understand it. Guess what? What happens when you turn human beings loose on a deadline to produce, I don't know, a new webpage header, even just the headline and the first three sentences of copy for a webpage. They regress from the brand framework back towards their own personal brand almost inevitably. Even the branding professionals, they made the brand framework, they're not going to perfectly embody it.

Justin Mannhardt (37:17): Yeah, and you leak back to your own point of view or you start to have the gradient deviation away from a core idea. You don't feel like it's quite different, but it's eroding in its consistency.

Rob Collie (37:31): Yeah, there's a dilution in quantity as you make more and more stuff, but there's also an increasing dilution over time because what you're really doing every day is sitting down to match what you did yesterday. Your freshest memory about how you wrote about stuff was yesterday, not the brand framework. It's a copy of a copy of a copy of a copy, so all of this drifts. Turns out, though, that you can now take your brand framework that you acquired at significant cost from people with real expertise, and I don't think an AI could have done this. This is where humans are necessary, but then take that human output, turn it around and codify it into rules that are then loaded into an agent. It's not just making things easier, faster, lower costs, lower energy costs for people like I've been talking about, but it also brings a level of consistency and impact and punch.

(38:24): It scales that brand framework so much more effectively. It's a completely different way of thinking about things. This is a creative, squishy concept, this brand framework, identity, tone, voice, core values, differentiation, all of that, what we stand for, our origin story, all those sorts of things, and essentially turn that into almost code. It's almost like software now. It would've been just a static document that sat there and people forgot about, but now it's live. It moves with us.

Justin Mannhardt (39:05): It's on. It's turned on.

Rob Collie (39:07): It is so cool, and it really illustrates just how much things can change while at the same time being the same. It doesn't destroy the world here. I've been telling the Space Force folks, this increases the value of what they produce. A brand framework that is easier to follow, that is going to be more consistently followed, that can be processized.

Justin Mannhardt (39:31): Yeah, easier to activate, easier to sustain.

Rob Collie (39:34): Creates more value. Their service is now more valuable. How cool.

Justin Mannhardt (39:41): Way to go, guys.

Rob Collie (39:43): In the same way that Copilot natural interface querying of a data model, of a Power BI data model, makes Power BI models more valuable.

Justin Mannhardt (39:53): I did have a free moment this morning and I opened up Griff, our copywriting solution and said, "Hey, can you give me some punch for some headlines how we might talk about chat with data Copilot in Power BI?" My favorite idea that it came up with was the headline, "Your data has been waiting for this conversation." It's been wanting to be more valuable in a way. Just like the brand framework, it wants to be incredibly valuable and useful and scaled and sustained and cared for and nurtured.

Rob Collie (40:25): Note that that headline is so good, but it also understands what's going on, because you told it to load the Copilot database, so it understands what Copilot's about and it understands us. It sounds like us. It's something that we would say, if it just happened to occur to us. We'd be like, "Yeah, that's the thing we should say." Magnifique.

Check out other popular episodes

Get in touch with a P3 team member

  • This field is for validation purposes and should be left unchanged.
  • This field is hidden when viewing the form
  • This field is hidden when viewing the form

Subscribe on your favorite platform.