Episode 191
An Inflection Point in AI? Understanding MCP and Multi-Agent Systems
For a while, you could pretend AI was still a someday problem. Not anymore. Rob Collie and Justin Mannhardt are back, and this time they are tackling the Model Context Protocol (MCP) and multi-agent systems, two shifts that could finally put an end to the human copy-paste Olympics.
This isn’t about shinier tools. It is about AI that plugs in without the duct tape and starts doing the work without making you babysit. Rob and Justin dig into what’s real, what’s coming, and why “waiting to see” is no longer a strategy.
MCP is being called the USB-C of AI. Multi-Agent Systems are making AI check its own work, so you do not have to. Translation? The gap between “early” and “too late” is closing fast, and the status quo isn’t going to cut it much longer.
If you are tired of the hype but know you can’t sit this one out, this episode is the advice you have been waiting for. So, tune in and enjoy! If you like what you hear, don’t forget to leave us a review on your favorite podcast platform. Your feedback helps new listeners discover the show!
Episode Transcript
Rob Collie (00:00): Out there, if you've been listening, or not listening, the most recent weeks, we haven't had any episodes. Have you been out there wondering, "Oh no. Is the podcast dead? Is Raw Data going away?" It is not. It's not going away.
Justin Mannhardt (00:14): Fear not.
Rob Collie (00:15): We have been busy with a number of different things, there's a lot of things going on. We've also been planning, if your ears have been burning, dear listeners, we've been talking about you. We've been talking about our plans for what we do with this show, and it definitely doesn't involve ramping it down.
Justin Mannhardt (00:29): Quite the contrary, I would say.
Rob Collie (00:31): Yeah, so that's all we're going to say about that, Forrest Gump style, for the moment. But it's been a little while since we've sat down and talked like this, Justin, and kind of a lot's happened.
Justin Mannhardt (00:41): The normal state of things right now is that there is a lot happening all of the time.
Rob Collie (00:48): Business as usual has become kind of not usual. Everyone can feel it too. The world is changing. It's like in The Matrix, when the cat kind of glitches; that's what happens when they change something. There's something changing in the operating system of the world right now. We've been talking a lot about AI on this podcast.
Justin Mannhardt (01:10): Indeed.
Rob Collie (01:10): And using it in fact as a vehicle for conversation, for figuring things out. We actually advance the state of our understanding through these conversations with each other as much as we're sharing. It's a mix. It's a mix of both. They've certainly been very valuable to me. And one of the themes that we've been hitting on for a while has been something along the lines of, it makes a lot of sense to be paying attention to these things. It makes a lot of sense to be sort of kicking the tires, but it's probably too early to start making big commitments or really getting going, because things are changing so much, et cetera. And my sense lately, even within the past two weeks, has been a little different. Now doesn't really feel like that. Now it feels like there are some really tangible things that we could start doing. Just in general, we, the royal collective we-
Justin Mannhardt (02:03): Who are we?
Rob Collie (02:05): That's my first high-level observation: I'm feeling like we've changed phases.
Justin Mannhardt (02:12): Two things I saw recently in the feed sphere that I think resonate with what you're teeing up here. One was someone suggesting this idea that the pace of progress with AI had begun to plateau. I don't agree with that. The models are getting better all the time, and you see lots of new startup companies with different cool things all the time. What I've experienced is that we, the royal we, as a society and a community, are becoming far more acclimated to AI. It's becoming more and more normal in its existence, and so the applicability is becoming more and more clear to a lot of people, and the big leaps aren't as obvious and dramatic. I saw that and I thought that was interesting.
(03:01): And then this one's funny, it was a tweet, I can't remember who it was. But they said, "When our children grow up, they're going to think we were digital cavemen. We used to google questions and see pages of lists of blue underlined text." And it's true, I kind of don't Google anymore, unless I know exactly what I'm looking for and I want to get to a website that I know is out there. That's an example for me: I take my general query to ChatGPT pretty often. Those are some things that resonated when you were saying, "Yeah, we're in a different place today."
Rob Collie (03:36): In fairness, in terms of your own personal productivity, I think you've been way out ahead of the curve. Not everyone's curve, but certainly ahead of 98-plus percent of the people I know in terms of your own personal adoption for productivity purposes, right?
Justin Mannhardt (03:50): Yeah.
Rob Collie (03:50): Ahead of me.
Justin Mannhardt (03:51): For sure.
Rob Collie (03:52): I'm kind of famously slow with new tech, which, as a footnote, is one of the things that made me, I think, such an effective and unlikely advocate for Power Pivot and Power BI at the same time. When it first came out, I used to tell people, "Look folks, I don't adopt stuff. I don't believe in anything."
Justin Mannhardt (04:10): You're skeptical by default.
Rob Collie (04:12): I have adopted this stuff. I believe in it, and I love it. I want everyone to understand how wild it is for me to say that I love a piece of technology. And that really resonated with people, but I think it also made me a better teacher, because I'm used to not adopting things quickly.
Justin Mannhardt (04:26): Yeah, you're authentic.
Rob Collie (04:28): Except in the case of Power Pivot and Power BI, I'm used to not being one of the aggressive early adopters of something new. So for you to be ahead of me in that space, it's just kind of, "Oh, here we go again. It's another one of these." I've also been kind of personally getting off of my own starting line there a bit.
Justin Mannhardt (04:44): Yeah.
Rob Collie (04:45): There are some really interesting personal examples. We have these Apple Watches, Jocelyn and I, and they tell you things about your health. And, oh my God, they are not reliable witnesses. I trust it to tell me my heart rate; I bet it's pretty good at that. The first time we put these things on, we asked it about our heart rate variability, which is some metric, some KPI, I didn't even know existed. It told both of us that we were basically near death. We're like, "Well, we're in pretty good shape. We're probably in the best shape we've been in since our 20s." We asked our doctor and he's like, "Yeah, everyone in our practice has an Apple Watch, and they come in and say their heart rate variability says they're dead." Anyway, so Jocelyn, my wife, was having some chest pains the other day, and it's probably just muscles between the ribs or whatever.
(05:31): But we do the EKG feature on the watch, she does it and I do it too, and both of them come back saying we're completely normal, except that her graph looks jagged and wild and mine looks like this perfect work of art. So now she's even more freaked out, because the watch is telling her that she's healthy, but our graphs don't look the same. So we screenshot those, and I fed them into ChatGPT, cropping out the part of the screenshot that said "heart normal." I fed in these two waveforms and said, "Hey, this is person one and person two. Is there any reason why person two should be concerned? This is from an Apple Watch." And it comes back and gives all kinds of reasons why we shouldn't be concerned. Now, it's medical advice, right? Got to be careful, all that kind of stuff. Compare that process to Googling it.
Justin Mannhardt (06:18): Oh God, just give up. You get 47 hits to WebMD and you get in there and you're just like, "Oh my gosh, I have tuberculosis."
Rob Collie (06:29): But this thing I gave it to, ChatGPT, looks at and understands the graph. It's getting a sense of what's in the graph and reading it, so you're getting some sort of semantic meaning there. I'm increasingly thinking about ways in which ... There's the obvious stuff. There's using AI to write better code and write it faster. We're doing that at P3, everyone's at a different point on their adoption curve, and we're talking about figuring out ways we can better institutionalize this. So there's that obvious stuff. There's also the Power BI and Fabric-adjacent AI.
Justin Mannhardt (07:10): Copilot and-
Rob Collie (07:11): All that kind of stuff. We want to make sure, and again, it's the royal we, not just P3, it's our clients, it's anyone listening to this. The other category we want to be paying attention to is the other stuff. We did an episode where we talked about agents, and now I'm much more prepared to talk about agents.
Justin Mannhardt (07:30): Yeah, because you've gone down a little journey.
Rob Collie (07:33): I've been on a journey that feels like the beginning of a journey, but in itself has also been a bit of a microcosm. I have been down and back again in the past couple of weeks in a big way. Two things that really lit me up as I've been on this recent trip. Number one is MCP.
Justin Mannhardt (07:50): Which stands for?
Rob Collie (07:52): Model Context Protocol. I even know what it stands for.
Justin Mannhardt (07:54): Oh man, look at you.
Rob Collie (07:59): A lot of the stories I've been telling myself, or the pictures I've been developing in my head about AI over the past six months, MCP changes them. MCP changes those pictures-
Justin Mannhardt (08:10): Big deal.
Rob Collie (08:10): ... dramatically. Here's an example for our listeners that I think will be very tangible. If you're using AI to help you write some DAX, let's say, well, either you're using Copilot from Microsoft, which is difficult to turn on and is lagging behind in terms of its capabilities. It's lagging behind the public models like Gemini and ChatGPT. Those models are better at writing DAX-
Justin Mannhardt (08:40): Indeed.
Rob Collie (08:40): ... than what Copilot is. And so you kind of want to use the latest and greatest. You want to use the better models, so then you become what I call the human message pump.
Justin Mannhardt (08:51): Yeah. Meatware.
Rob Collie (08:51): Yeah, so you could start doing things like giving it pictures of your data model and things like that. You could take screenshots of the schema view. There are all kinds of tricks you can lean on to give it more and more context. That's the thing, context: the context of the problem I'm working on. And then you describe the problem and say, "Write me the formula." And I was doing this the other day with Power Query, and it sucked. It just sucked. This experience sucked.
Justin Mannhardt (09:15): The Copilot experience or the meatware experience?
Rob Collie (09:18): The meatware experience, where I was the human message pump, where I'm describing to it, "Here's what the data looks like now and here's what I want it to look like after the transformation. Can you help me write the" ... And it gives me the code, so I've got to copy that out of the ChatGPT window and go paste it into the damn Power Query editor, and guess what? Syntax error. This doesn't even run. So then I've got to figure out what the error message is and what line it's on, then go back to ChatGPT and say, "Hey, you know what? That code you gave me, blah, blah, blah. I'm getting this error." And it goes, "Oh, right, you are. Yeah. Here's the version that works." And so I paste that over there and something else goes wrong. I'm just like, I am not going to do that.
(09:59): But with MCP, which has been described, I think, as like USB-C for AI, universal plug and play, it won't be long before the Power BI environment is forced to support it. Microsoft might not want to do this, I don't know, but they're going to have to open an MCP port in Power BI Desktop so that I can plug in my own model of choice, and the MCP protocol will provide all that context: what the data model looks like, what the columns are, what the formulas are, everything that would be needed to essentially train up ChatGPT, Gemini, whatever, to make it a better assistant on that model. All of it's there. Why do I have to be the copy-paste-or-retype-it human message pump? And then when it gives me the code and there's an error message, guess what? MCP can pass that error message back. I don't have to be the one doing it. That's a big deal, and we're already seeing a lot of things being wired up to MCP. I was talking to Brian Julius the other day and he showed me: Zapier has MCP to everything.
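To picture the loop Rob is describing, with the human taken out of the middle: the assistant pulls the data model context itself, writes the script, and gets any error passed straight back so it can retry. This is only a minimal sketch in Python; get_model_context(), run_power_query(), and ask_llm() are hypothetical stand-ins for what an MCP-connected Power BI host and a model client might expose, not real APIs.

```python
# Minimal sketch of the MCP-style loop, with hypothetical stand-in functions.

def get_model_context() -> str:
    """Hypothetical MCP tool: returns the data model (tables, columns, measures)."""
    return "Table Sales(Date, Amount, Region); Table Dates(Date, Year, Month)"

def ask_llm(prompt: str) -> str:
    """Hypothetical call to whichever model you plugged in (ChatGPT, Gemini, Claude)."""
    return "let Source = Sales in Source  // illustrative output only"

def run_power_query(code: str) -> str | None:
    """Hypothetical MCP tool: executes the script, returns an error message or None."""
    return None  # pretend it ran cleanly

def write_transformation(request: str, max_attempts: int = 3) -> str:
    context = get_model_context()
    prompt = f"Data model:\n{context}\n\nTask: {request}\nWrite the Power Query (M) script."
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        error = run_power_query(code)  # the error comes back automatically...
        if error is None:
            return code
        prompt += f"\n\nThat script failed with: {error}. Please fix it."  # ...no human copy-paste
    raise RuntimeError("Could not produce a working script")

if __name__ == "__main__":
    print(write_transformation("Unpivot the monthly columns into rows"))
```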
Justin Mannhardt (11:11): Yeah, it's crazy.
Rob Collie (11:13): What a gift to Zapier. What have they been? They've just been this universal plug and play, really kind of a boring, boring, boring business in some ways that everybody needs.
Justin Mannhardt (11:26): Great product though.
Rob Collie (11:27): Great product, great product. Seriously.
Justin Mannhardt (11:29): We're fans.
Rob Collie (11:30): We love it here. But when all you are is the translator for a million different systems, well, then along comes AI and MCP, and suddenly Zapier just happens to have been sitting on some of the most valuable real estate in the entire universe.
Justin Mannhardt (11:49): Yeah, because they have the map, they have all this API to API stuff figured out, and-
Rob Collie (11:54): That's a lot of work that they've put in. So, the human message pump, or as you call it, meatware. I don't know, we're going to have to have an argument over which of these phrases we use. The human message bus.
Justin Mannhardt (12:08): Yeah, service bus.
Rob Collie (12:09): Okay, so maybe the use of the word bus will bring you over to me. Anyway, that barrier that's been there is a false barrier. MCP is the way that's going to tunnel around all of that. So that's one thing that's really lit me up and said, "Okay, I'm seeing with different eyes now." And the other one is this thing called multi-agent. And it kind of goes hand in hand in a way. Just thinking about writing code for a moment, writing formulas, writing scripts, whatever, something that I think everyone can relate to. Whether you're a business leader listening to this or a practitioner, you certainly have people close to you who are writing formulas.
(12:47): And who are writing Power Query scripts and things of that sort. If you have one system, one brain, one AI brain, that's off writing code for you, why can't you have another one whose job it is to look at the results that the code-writer agent produces? It's the validator. So the human said, "Give me X, Y, Z." Agent one goes off and writes the code, the script, whatever. Then agent two essentially runs it and inspects the output. Let's assume there aren't any errors; it then decides, "Okay, does that match what the human's instructions intended?" And if not, it can close the loop and re-instruct agent one to make the fixes. And this might even happen multiple times behind the scenes before the human even sees the result.
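Here is a minimal sketch of the writer/validator pattern Rob just outlined. The two agents are simply two differently-prompted model calls; call_model() is a hypothetical stand-in for whatever API or MCP connection you would actually use.

```python
# Writer/validator loop: agent one drafts, agent two reviews, and a result only
# reaches the human once the validator signs off (or the rounds run out).

WRITER_PROMPT = "You write DAX / Power Query code that satisfies the user's request."
VALIDATOR_PROMPT = "You run and inspect code. Reply 'APPROVED', or explain what to fix."

def call_model(system_prompt: str, message: str) -> str:
    """Hypothetical stand-in for a real LLM call with a given system prompt."""
    return "APPROVED" if system_prompt == VALIDATOR_PROMPT else "-- generated code --"

def write_and_validate(request: str, max_rounds: int = 3) -> str:
    draft = call_model(WRITER_PROMPT, request)
    for _ in range(max_rounds):
        verdict = call_model(VALIDATOR_PROMPT, f"Request: {request}\n\nCode:\n{draft}")
        if verdict.startswith("APPROVED"):
            return draft  # only now does a human see the result
        # Close the loop: re-instruct the writer with the reviewer's feedback.
        draft = call_model(WRITER_PROMPT, f"{request}\n\nReviewer feedback: {verdict}")
    return draft

if __name__ == "__main__":
    print(write_and_validate("Total sales by region for the trailing 12 months"))
```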
Justin Mannhardt (13:46): It's a fascinating concept. We did an episode where we talked about agents. I think I was describing how, at the time, some of the challenges were around figuring out how to successfully constrain the scope of an agent's task responsibility so that it stayed on the rails, to an extent. But if you have this sort of multi-agent approach, you can do that. You can make it very clear, and you sort of create a system of checks and balances that is theoretically more durable. I went to a music college and studied audio production and engineering, and one of the things we were taught early on is, "You, as the same engineer, should not record, mix, and master the same piece of music, because you become tone-deaf from the recording. You're hearing the thing as you're cutting it to tape for the first time, and the biases in your brain exist."
(14:40): And so I think it's sort of the same idea with multi-agent: you sort of remove those memory caches in a way, or you can have different context prompts for the agents so they think differently, and it's kind of neat. And a way you can try this for yourself, without getting into scary code and building something, is this: let's say you've got to produce a document in your job. Maybe it's a financial report, a sales proposal, a summary of a customer engagement, whatever it is. Use ChatGPT to help you write that document, and then take that document over to a different AI tool. Take it to Gemini, take it to Claude, take it to Copilot, and ask it for a critical review. That'll give you an easy, low-effort way to understand this concept of how different AI tools would react to different steps in the process, versus using the same AI tool and just riffing with it down the chat.
Rob Collie (15:37): Right, and MCP makes all this easier as well?
Justin Mannhardt (15:42): Yeah.
Rob Collie (15:42): The thing you just described would have involved the human message bus: I've got to copy this from over here, go over there. If you have a third environment that's sort of like your console, and it sends the original request for the document to ChatGPT, and then as part of its routine takes the output from that, again, this is all happening over MCP, and sends it over to Gemini for a critical review, then not only does it give me the original document, but it also gives me what the editor thinks.
Justin Mannhardt (16:18): This is like a suggested revision.
Rob Collie (16:20): All in one. These two concepts definitely have a lot of significant interplay with each other. And they've just really kind of marked an inflection point, and as far as I know, MCP is really new.
Justin Mannhardt (16:34): It's gained a place in the hype cycle recently, yeah.
Rob Collie (16:37): At this point. It was created nine months ago. That's years ago.
Justin Mannhardt (16:40): Yeah, right. Yeah, November. It came from the Claude team at Anthropic.
Rob Collie (16:45): It's existed for six months.
Justin Mannhardt (16:47): Yeah, it's crazy. Which is kind of fascinating too, because Anthropic saw that human service bus problem and was like, "Oh, let's go build a piece of technology that fixes it."
Rob Collie (16:57): And that's just regular software engineering, that's just regular, like, "Oh, there's a feature missing from this ecosystem." We can all understand that. I haven't looked at the protocol itself. I have no idea what the chatter looks like going over MCP. Is it just a bunch of readable English? Is it a bunch of instructions and stuff? I have no idea. But the idea is that almost every endpoint, every system, whether it's an AI system or a line-of-business system or even a personal productivity system, is eventually going to have these, and eventually might be really soon.
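For the curious: the chatter over MCP is JSON-RPC 2.0, so it is human-readable, but it's structured JSON rather than free-form English. The messages below are a rough, paraphrased illustration; the tool name and arguments are invented, and the exact fields are defined by the MCP specification.

```python
# Roughly what two MCP requests look like on the wire (illustrative only).
import json

list_tools = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/list",                 # client asks the server what it can do
}

call_tool = {
    "jsonrpc": "2.0", "id": 2,
    "method": "tools/call",                 # client invokes one of those tools
    "params": {
        "name": "get_data_model",           # hypothetical tool a BI host might expose
        "arguments": {"dataset": "Sales"},  # hypothetical arguments
    },
}

print(json.dumps(list_tools, indent=2))
print(json.dumps(call_tool, indent=2))
```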
Justin Mannhardt (17:29): Yeah. A concept that's been talked about frequently is this idea of a single human leader or person working with multiple AIs the way they would work with a traditional team. Think about a common workflow that someone like you or I experiences: we're curious about something, so we go ask a direct report, "Hey, can you dig into this?" They involve team members, and then it gets back to us. There's kind of a similar thing going on with agents. That can be concerning, depending on the nature of the work you do, but I think it's an opportunity as much as it is anything else: we'll have different ways of getting things done.
Rob Collie (18:14): We know that AI is going to be eliminating classes of jobs.
Justin Mannhardt (18:20): Just like the steam pump did. Just like-
Rob Collie (18:23): Well, maybe not just like.
Justin Mannhardt (18:25): Yeah, sure. Just like, okay, but technological innovation changes how the job ecosystem looks. It does.
Rob Collie (18:36): Even if you set aside that dystopian-type stuff for a moment, there's also so much surface area for things to be possible that aren't possible today, where we are not replacing a human being. You're able to achieve something that you weren't able to achieve before, because you didn't have enough human beings. A project that I'm working on right now, just as a starting point, is helping us search our own podcast transcripts for themes. You and I, over the last couple of years, I mean, we've talked about a lot of things. We've had observations that sort of just spontaneously emerge, facilitated by this conversation, where in the moment we go, "Wow, that's a pretty powerful concept. We should probably get that out there," in other ways than just hoping people listen to that 30-second stretch of the podcast.
(19:28): It's not actioned. We finish the recording, it gets edited, it goes out, and that's kind of the end of it. Hiding in that episode were three or four really important sound bites that I think would be really helpful for people to hear, and also, frankly, helpful for our business, our business at P3, to be the people purveying that kind of information in more digestible form. It would also bring more visitors, more listeners to the podcast, et cetera. So having an intelligent search that has superhuman speed, but isn't just relying on keywords, that's looking for themes, mining our own podcast for the timestamps of these sort of significant moments, is a very, very, very interesting example of what I'm talking about. That isn't going to be replacing anyone. We would never deploy human capital at our company to poring over hundreds of podcast transcripts.
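As a rough sketch of how that kind of theme search differs from keyword search: transcript chunks get ranked by semantic similarity to a theme using embeddings. The embed() function below is a hypothetical stand-in for any real embedding model; the toy vectors it returns exist only to keep the example runnable.

```python
# Rank transcript chunks by similarity to a theme rather than by keyword match.
import math

def embed(text: str) -> list[float]:
    """Hypothetical: return an embedding vector for the text (toy numbers here)."""
    return [float(len(text)), float(text.count(" ")) + 1.0]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_moments(chunks: list[tuple[str, str]], theme: str, top_n: int = 3):
    """chunks are (timestamp, text) pairs cut from the transcripts."""
    theme_vec = embed(theme)
    scored = [(cosine(embed(text), theme_vec), ts, text) for ts, text in chunks]
    return sorted(scored, reverse=True)[:top_n]

if __name__ == "__main__":
    transcript = [("00:08:10", "MCP changes those pictures dramatically"),
                  ("00:13:00", "agent two inspects the output and closes the loop")]
    for score, ts, text in find_moments(transcript, "inflection points in AI adoption"):
        print(f"{ts}  {score:.2f}  {text}")
```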
Justin Mannhardt (20:30): One thing I've seen in the last few months that's interesting, about the concept of how AI will affect people, jobs, et cetera, was a study supported by Harvard Business School and a variety of researchers. They were studying the variations in quality between individuals, teams, individuals using AI, and teams using AI, and the group that performed the best in this particular study was teams using AI. There's this idea of, "Oh, there are going to be single-person companies doing what mountains of people could do." Well, that might happen in some cases, but what I took away from this was remembering, well, what makes a team function at the highest level? Everybody being on the same page, everybody acting from the same source of information and knowledge. Well, AI could be a great asset to that. That's one of the things I took away from that study: if everybody is being supported by the same super-powered source of knowledge and support, they function better as a unit, because they're all using AI together.
(21:41): I thought that was really interesting, because a lot of my personal experience with AI, as you mentioned, has been for my own personal productivity, making me, myself, more effective. And so that was the first time I saw some significant body of research about this idea of teams and AI compared to individuals and AI, which was really interesting.
Rob Collie (22:01): Yeah, that is fascinating, and it makes sense. And it's also really validating to me, because all of that personal productivity stuff you've been doing way out in front of me, you've been barking up the wrong tree.
Justin Mannhardt (22:17): Yeah.
Rob Collie (22:19): Okay. Well, more to come. A lot more to come actually.
Justin Mannhardt (22:22): Yeah.
Rob Collie (22:23): I'm experiencing an inflection point.
Justin Mannhardt (22:25): My closing thought here is, if you're listening and you're finding yourself stricken with the common feelings of being behind or not knowing where to go or not knowing how to start, this is a great time to start. Nobody's really all that far behind in the grand scheme of things.
Rob Collie (22:44): I'm glad you brought that up, because I'm even more sympathetic to that than you are, I think, because some of this stuff just comes so easily to you. I'm more reluctant, a bit more hesitant, in a way. So there's that FOBO, fear of becoming obsolete, fear of falling behind. When you say, "This is a really good time to get started," they're sitting there going, "Okay, so not only am I all those other things, but now I'm missing out again on a really good time to start." They don't know how to start. If that's the way you're feeling, hit me up on LinkedIn, because I've been on those roads. I'm still kind of on those roads, but I have turned a corner in my own personal confidence about what the road ahead looks like, and it isn't as doom and gloom as it seems.
(23:30): I told some friends last night, the first thing you have to do is understand that this stuff is coming sooner than you want it to, and it's "worse" than you want it to be. You've got to get through a lot of self-denial. You've got to get through the denial that it's a long way out, and you've also got to get through the denial that it's not going to be good enough. Those are self-protective mechanisms, and when you let go of them, oh my gosh, there's a moment of terror. But you need to let go of them, because they're also some of the things that are holding you back. You kind of need that shock of discomfort, that shock of fear, to get to the other side, where you can start thinking clearly again. Because it's not as lights-out, airtight, end-of-the-world as some of the most breathless hype makes it out to be.
Justin Mannhardt (24:23): Right.
Rob Collie (24:23): But you've got to get past this other thing first. It's sooner and it's worse than what you would want it to be, but it's going to be okay. We have to get through that. So if that's the place you find yourself, reach out to me on LinkedIn.
Justin Mannhardt (24:36): Throw those DMs.
Rob Collie (24:38): I don't know how many people I can help, but I can at least therapy you through it a little bit.
Justin Mannhardt (24:43): I dig it. Maybe I'll send you a LinkedIn message.
Rob Collie (24:46): You will not.
Justin Mannhardt (24:49): Will you take mine? Will you take mine?
Rob Collie (24:50): You get out of here, you get out of here. You're too far ahead. Nope.
Justin Mannhardt (24:54): You're going to decline my meeting?
Rob Collie (24:56): Wait, we're going to have a different meeting.
Justin Mannhardt (24:57): I'm going to create an AI agent that runs a LinkedIn profile just to pester you now.
Rob Collie (25:03): Some fake, non-existent person.
Justin Mannhardt (25:05): Fake bot.
Rob Collie (25:05): That I can't tell the difference, right?
Justin Mannhardt (25:08): "Justin, what have you been up to the last couple of weeks with AI?" "Wasting Rob's time."
Rob Collie (25:12): Yeah, and spamming Rob about Bitcoin.
Justin Mannhardt (25:16): Spamming Rob. Perfect, I love this.
Rob Collie (25:22): All right, well, until next time.
Justin Mannhardt (25:23): Yes, sir.