episode 175
The Crutches Paradox (and Why Senior Software Engineers Don’t Care About AI)
Rob and Justin are back, catching up after a short break with an episode that’s part big ideas, part funny stories, and full of surprises.
They kick things off with the Crutches Paradox: their take on why we hang onto things we'll probably never use, "just in case." From there, the conversation moves into all kinds of unexpected territory: why politeness might not matter when working with ChatGPT, the infamous New Coke conspiracy theory from the '80s, and why some of the world's top software engineers just don't seem too concerned about AI.
Rob even shares a few personal moments, including how his old Pontiac Bonneville managed to become a legend among tow truck drivers. Between the laughs and the thought-provoking moments, this grab bag episode has a little something for everyone.
Whether you’re here to explore big questions or just enjoy a good story, this one will keep you hooked from start to finish. As always, if you enjoyed the episode, be sure to leave us a review on your favorite podcast platform.
Episode Transcript
Rob Collie (00:00): Hello, friends. We're back from our short break and brimming with thoughts, which naturally led to a bit of a grab bag episode format as Justin and I caught up with each other this week. I mean, we've been in touch in the interim in the normal course of work, but as I've mentioned before, our podcast stream of thinking exists almost on a separate plane, one that's adjacent to and deeply related to our day-to-day business.
(00:23): But thankfully, it's also at a higher level. The podcast forces us, but also allows us, to think about things a bit more strategically and proactively and thoughtfully than the normal flow of day-to-day business. So even though we've interacted quite a bit with each other since the last episode, our podcast selves hadn't seen each other in a bit.
(00:43): So what was in said grab bag? A number of things. We talked about what I'm terming the Crutches Paradox, which I propose to you as a corollary to Murphy's law. We also talked about being polite with ChatGPT, and how it might be something you need to get over. This led us to a brief dorm-room-philosophy sidetrack about whether these systems have feelings, and whether we'd ever even know if they did.
(01:07): Somewhere in there, we also discussed the high-fructose corn syrup conspiracy theory behind New Coke in the 1980s. We touched on the chaos-as-a-ladder life philosophy, coined by Littlefinger in Game of Thrones, and how having a more benign lens on that philosophy might actually be the key to success.
(01:24): Then, getting back to AI-related stuff, I related an anonymous story about a friend of mine getting hardcore, really hardcore, about AI-assisted software development.
We also talked about how some of my old friends out here in Seattle in the upper echelons of software engineering seem to not be thinking much about AI at all, and why that might be. We talked about the pros and cons of jumping into AI initiatives today versus waiting a bit for things to sort themselves out, and how there are two different kinds of, and two different reasons for, waiting. And then we closed with a couple of embarrassing stories about the old car I had when I first started work at Microsoft in the late '90s. Like I said, it's quite the grab bag. Let's open it up, shall we?
Speaker 2 (02:08): Ladies and gentlemen, may I have your attention, please.
Speaker 4 (02:12): This is the Raw Data by P3 Adaptive podcast with your host, Rob Collie, and your co-host, Justin Mannhardt. Find out what the experts at P3 Adaptive can do for your business. Just go to p3adaptive.com. Raw Data by P3 Adaptive, down to earth conversations about data, tech and biz impact.
Rob Collie (02:43): Hello there, Justin.
Justin Mannhardt (02:44): Hello, Rob.
Rob Collie (02:45): It's been a minute, as the kids say.
Justin Mannhardt (02:48): You had a week off. I had a week off.
Rob Collie (02:51): I feel like my quote unquote week off has been chaos and it's lasted more than a week.
Justin Mannhardt (02:56): Yeah, week off does make it sound like you were on vacation. You were away from the podcast for a week. I was away from the podcast for a week.
Rob Collie (03:06): I've done the packing and getting-rid-of-things phase on the Indy side. I've done the incredibly difficult move-across-the-country-with-pets phase, and now we're in the unpacking and getting-settled phase. I keep telling myself that the next phase is going to be the easier one, right? Nah, still in a camping chair. A little bit more comfortable camping chair.
Justin Mannhardt (03:29): This is an upgraded camping chair?
Rob Collie (03:30): Well, it's the same camping chair, but I've upgraded the microphone clamping.
Justin Mannhardt (03:34): Got it.
Rob Collie (03:35): But we're going to get this done. It's kind of the faucets-first philosophy. We don't sit around and build a whole bunch of infrastructure, like getting the podcast lair completely built out and everything, before we record the first note of sound. No, we're going to get it done, and then we're going to upgrade the infrastructure in parallel, organically.
Justin Mannhardt (03:53): Rob needs to sit. Let's start there.
Rob Collie (03:56): And the primary benefit that the upgraded infrastructure is going to provide is that it doesn't take me 10 to 15 minutes to set everything up before I can record. This aggression will not stand. The stuff that I have set up in this room is not going to be allowed to stay set up in this room, because it's an eyesore. That's the number one purpose of the podcast lair, of the podcast studio: all the podcast shit is out of sight.
Justin Mannhardt (04:17): Because that room has a greater purpose, a higher calling.
Rob Collie (04:21): It's technically the guest room. In the meantime, we're just calling it the shoe room.
Justin Mannhardt (04:24): Why is it called the shoe room?
Rob Collie (04:28): Because three quarters of the square footage of the floor is covered in shoes that have yet to acquire the storage solution.
Justin Mannhardt (04:35): This might make you sad, maybe. My experience has been it takes about a year to finish the settling-in phase. It was about a year before I could honestly say there was not some significant accumulation of cardboard and debris in my garage.
Rob Collie (04:49): Good news, bad news. We approach this with an intensity that most people don't. It's sandpaper-in-your-underwear-level irritation to have even the slightest hint of cardboard. So we're going to be going at this hard. But anyway, enough of that. I wanted to tell you about something that occurred to me during this move process that I would like to refer to going forward as the Crutches Paradox.
Justin Mannhardt (05:12): Okay.
Rob Collie (05:13): We were posting pictures on Facebook of all of our moving stuff, and I was just posting something like, "Hey people, do you know what this is?" It turned out it was the chime from a grandfather clock. I had the chime mechanism removed from a grandfather clock so I could keep the chime, but we didn't want to keep the grandfather clock. Anyway, in the background, people saw crutches in a big pile. The denizens of Facebook said, "Oh, I hope you're taking your crutches with you," because I've had a history of getting injured.
(05:36): Now, it turns out that despite my history of getting injured, I haven't really been injured since 2016. This reputation is eight years old. We're getting older. Our friends are getting older, so eight years seems like yesterday. So it's a running joke that Rob's always hurt. "I hope you're taking the crutches with you, Rob." The thing is, I know that we're not taking the crutches with us. Those crutches are going away. The reason the crutches are going away is that we have never once held onto crutches that I ended up using.
Justin Mannhardt (06:03): Even though you've been injured at a high frequency.
Rob Collie (06:06): If you've been sitting on crutches for eight years since the last injury, the Marie Kondo mentality says that if you haven't used something in eight years, you should just get rid of that thing, right? This is a good and sensible approach to things. You get rid of things that you don't use. Wait a second. Aren't crutches that you've never used, like, the best crutches? Aren't those exactly what you want?
(06:27): When we had Adam Harstad on the show a long time ago, we talked about the now-famous statistical study of all the American bombers coming back from Europe with holes in the fuselage. The holes tended to be in consistent places over time, and so someone had the bright idea that we should put more armor on the places where the planes were getting hit more frequently. This turned out to be exactly the opposite of the strategy you should adopt.
(06:50): It was an erroneous conclusion that these were the places where the planes were hit most often. The real conclusion was that when a plane was hit elsewhere, it didn't come back. Counterintuitively, instead of putting armor on the places where the planes coming back had holes, you want to put armor everywhere else. It's sort of like a Murphy's law type of thing, right? As soon as I get rid of these crutches, I'm going to need new ones.
Justin Mannhardt (07:12): But the newer ones will be better. They might have Bluetooth.
Rob Collie (07:16): If we could establish any sort of causal relationship between having crutches that weren't being used and not getting injured, if there were any mechanism in the world for this, which it turns out there isn't... but if there were, I would want to keep those crutches, because those crutches are what's holding injury at bay. The annoyance of having something that we're not using? The last thing we'd want to do is get rid of that. So basically, what I'm saying is we got rid of all of that stuff, and now I'm on the clock.
(07:44): Anyway, just kind of a humorous way of looking at things. But in the back of my brain, even my completely logical brain, there's still corners of it that are going, I don't know, man. I should probably keep these things. It's been a good run.
Justin Mannhardt (07:56): I'm happy you got rid of the crutches because if you need new crutches, you need the new ones. I ain't having Rob walk around on 8-year-old outdated crutches. I mean, how much advancement do you think has happened in the last eight years with crutches, Rob?
Rob Collie (08:12): None. This is something that sort of took its final form centuries ago, I think. Once they switched to aluminum tubing, what else is there? I don't anticipate breakthroughs in the crutch industry anytime soon.
Justin Mannhardt (08:29): Well, I certainly hope you avoid injury. You be careful with all the unpacking and the moving and the heavy lifting. Pace yourself. Mind the stairs.
Rob Collie (08:38): So we do have a bit of a grab bag of things to talk about today, just a number of things that have occurred to us or happened since we last spoke. Setting aside the utter silliness of the Crutches Paradox, stay tuned. We'll see what happens. I thought I'd kick us off with this: when you're working with ChatGPT or Copilot or anything like that, are you still using words with it like "please"?
Justin Mannhardt (09:01): I found myself early on writing prompts with the same awareness of how I would write messages to another human being.
Rob Collie (09:10): Yes, that's exactly what I'm getting at.
Justin Mannhardt (09:11): And then over time I realized I just need to be clear and direct about what I'm looking for. Not to say you don't want to be clear and direct with another human being when you're trying to communicate and request things, but I don't really need to worry about whether this potentially comes off as dismissive or rude. I tend not to worry so much about the please-and-thank-you stuff.
Rob Collie (09:32): This occurred to me while I was using it the other day. It was a chat session that I shared with you. I kept being not satisfied with the results I was getting. I was asking it for some ideas, and it would give me ideas, and I'd go, "No, not those." In the back of my head, there's sort of a timer for when you're becoming annoying to the person you're asking to help you.
(09:53): And I could almost get the sense that ChatGPT was trying to be done. At first, when I would say, "Ah, not like that," it would immediately generate new ideas, but it reached the point where it didn't generate new ideas for me. It's like, "Oh, let me know if you like those." One of the benefits, for sure, of working with a system like this is that you're not annoying someone.
(10:11): You can raise your standards, and you can keep pressing without annoying another human being. At the same time, though, there's a lot about using these systems where you're going to get better results the more you think of yourself as conversing with it as if it were a person. That's one of the primary benefits: you're not writing in command speak. In the old Star Trek movies, they would say things like, "Computer, calculate distance to blah, blah, blah."
(10:37): They talked to the computer as a machine, and the thing that's amazing about these new systems is that you don't have to talk to them that way. You can converse with them as in normal communication, and for the first time ever, the computer system responds to that. You need to lean into that in order to get the most out of it. But now I've got these politeness instincts. I'm starting to worry that I'm wearing it out, that I'm annoying it, whatever. And I wanted to ask you, because you're further down the road of incorporating these things into your daily workflow than I am: maybe we should use that as one of the signals of progress in using these tools, that you've gotten over this awkwardness of continually pressing it for better. That you can press farther with these tools than you would with a human being. You have to break through that barrier.
Justin Mannhardt (11:21): That idea of a barrier breakthrough is really interesting. There was a research paper done on, effectively, the quality of the response when you would offer a reward to the system. So you could write in your prompt, "If you make this better, I'll give you $10." Weird stuff like that. And the whole study was about the amount of money you were enticing it with. And it was interesting, because a little bit of money didn't have an impact, and a whole lot of money didn't really have an impact either. These models are trained on written human language, and truthfully, even for the people that work on this stuff, the neural network technology becomes this big black box. Nobody really quite understands what's going on in the black box. So for me, I got over saying things like "please" and "thank you," or "good job."
(12:16): I got more matter-of-fact. And the way I write prompts, giving it the context and the instruction for what I'm wanting it to help me with, is also not the same way I would interact with a human. It's closer to that than it is to "Computer, print diagnostics," right? But it's a different type of working. And at the same time, if I don't like what I'm getting from the response... this is a really common human-to-human experience. I ask you, Rob, to do something for me. I give you a work assignment or something, and you complete it and you come back to me. Maybe I'll say, "I don't quite like this," or I'll give you some type of critical feedback. Now we have to talk about why I don't like it, to resolve the tension. Whereas with the AI system, I can just say, "I don't like this. Let's try a different thing." And it's just like, "Okay."
Rob Collie (13:08): I was thinking about this politeness test, "Have you broken the politeness barrier?", as one of the questions or benchmarks that we share with our team. Because we recently took the step of getting a company-wide license for ChatGPT, for every single employee of the company, no matter what role they're in. And some people are further along; some people are closer to where you are. They've already been using it a lot, right? And for some people, this will be their first time engaging with it. We've been talking about what some of the heuristics are that we can use for how our adoption is going.
(13:39): And of course there's going to be honest self-reporting involved here, right? But one of the things we could ask people is, "Have you gotten over the politeness hump yet?" And I've even seen people using ChatGPT to do just funny things already, right? Did you see the one on our quote-unquote Daxtagram channel where I shared a picture of a truck that I saw? Literally a van here in Seattle where the corporate name was IDAX Data, right? IDAX Data, this data solutions company that requires vans. They require vans. Can you imagine panel vans with P3 Adaptive on them? What would be in those vans?
(14:19): Anyway, I had asked for wrong answers only: what does this company do, right? For one of the responses, someone on our team used ChatGPT, literally told it to be humorous, to come up with a humorous explanation for what this company does. It was hilarious.
Justin Mannhardt (14:32): So Anthony prompted with, "Let's make a joke. What could the acronym IDAX stand for in the context of a data company? Make it funny and absolutely ridiculous, only loosely involving data." Good prompt.
Rob Collie (14:49): Great prompt.
Justin Mannhardt (14:50): The response was, "IDAX: Imaginary Data for Astonishing Explanations. At IDAX, we specialize in crafting data that sounds so plausible, you'll forget it's made up. Our state-of-the-art hypothetical insights engine produces stats, figures, and trends designed to impress stakeholders and mystify competitors. With IDAX, you can confidently base decisions on numbers we invented while half asleep. Trust us. If it sounds true, it probably is."
Rob Collie (15:24): I love it. That's kind of genius. This whole politeness thing, and treating the system kind of like a person but kind of not like a person, reminded me of something I learned about in a philosophy-of-mind course in college, about whether or not dogs and cats have a similar type of consciousness to human beings. Of course we know that they're living things, and they respond to things, and they have personalities, but the question is: are they observers of their own experience? Or are they constantly just in what we would call a state of flow, which is still feeling, but unconscious, right?
(15:57): Is there an observer in there that's going, "Oh, I'm feeling like this today"? That's the question. One of the things that was really interesting was that if you look at successful animal trainers, they operate on the principle that there is an observer in there. They train animals as if they're people, essentially, as if there is a conscious observer in there. And the belief in this study that I read was that that form of training actually created the consciousness.
(16:24): It wouldn't have been there without us assuming that it was there. It develops as a result of that kind of training. And it's even been theorized that human beings don't develop this from infancy unless they're in the presence of other people who are essentially training it into them, giving them the reputation to live up to. And as they learn to respond to these stimuli, that's when this conscious thing emerges. It develops in response to this. There's of course the spooky angle of this: are we training consciousness into these systems? And maybe, maybe, right? But I think the more interesting thing about it is the utilitarian view, which is that you have to develop your own mental model of how you work with these systems, and keep some of the elements of how you work with people, but not treat them exactly like people, because they're better than people in some ways, right?
(17:18): They don't get worn out. They don't require a justification for why you want to try something different. And even in that chat session that I shared with you, I just arbitrarily, at one point in the thread, said, "You know what? I changed my mind. We're going to go in a completely different direction." Imagine if you were working with a third-party agency, and after you've burned weeks of their time, you go and say, "You know what? Nah." End of relationship.
Justin Mannhardt (17:41): Terrible client.
Rob Collie (17:43): Fired.
Justin Mannhardt (17:44): That's interesting, the consciousness and awareness. I forget the platform, but it was an AI engine that was capable of effectively rendering a lifelike avatar of a person. The setup was two AI avatars hosting a podcast, and it looks incredibly real. The video rendering is very realistic looking. The AIs end up having an existential crisis, realizing they're not real people, and you wonder how much of that is just the AI responding to the narrative it was provided to execute against. But there are tools out there now, or I don't know if I could call them tools, maybe prototype technologies, of these AI avatars having Zoom meetings with each other.
(18:31): There was a really funny one that Ethan Mollick put out. It was like, have the most ridiculous Zoom meeting you could imagine, and so they're like, "Yeah, why don't we circle back and follow up on that, and we'll have a touch base about our check-ins." Just all these buzzwordy things. It was really funny. But you wonder, because they are self-learning systems... oh, man, now we're getting into "what is consciousness?"
Rob Collie (18:52): Deep into metaphysics, right?
Justin Mannhardt (18:54): Pull me out, Neo.
Rob Collie (18:57): Is there a feeling in there? Almost by definition, we can't know. There'll be arguing about whether or not the thing is feeling. There'll be two camps, right? And we won't know.
Justin Mannhardt (19:06): Can you imagine protests against shutting down the server? Don't shut down the system.
Rob Collie (19:16): Well, it's almost like the more you believe that it's alive, the less responsible it seems to turn it off.
Justin Mannhardt (19:21): Well, that was the whole dust-up with one of the high-ranking Google engineers on one of their early products. Do you remember that?
Rob Collie (19:27): I don't remember that, no.
Justin Mannhardt (19:29): I forget all the details, but he effectively concluded that the system was self-aware.
Rob Collie (19:34): Yeah. The person who's most likely to believe that it's self-aware is going to be the person that says so first. They're the most sensitive instrument. You're going to get some number of false-positive readings from people, but at the same time, the people who are less sensitive to these things are going to be the ones that hold out the longest saying... anyway, we should move on from-
Justin Mannhardt (19:52): Yeah, this is Rob Justin late night.
Rob Collie (19:55): I will also tell you, I've had very little time for interacting with the broader community here in person. We moved out here to be near our friends, and we've been here two weeks now. Very little interaction with friends. We're just locked in the house. It's like that movie trope: I'm not locked in here with you, you're locked in here with me. We're in here until this house gets unpacked, okay?
(20:20): I'm going to finish this podcast, and I'm going to go downstairs, and I'm going to eliminate some cardboard. That's what's going to happen. I've had dinner with one senior Tableau software engineering leader, and I've had dinner with a handful of senior Microsoft software engineering leaders. None of them from the data stack, none of them from our corner; mostly from games, actually.
Justin Mannhardt (20:41): That's cool. Games are cool.
Rob Collie (20:42): Not one of them brings up AI. It's not on their mind. They're not thinking about it. When I ask them about it, they're like, "Eh." It's not threatening their world at all, in their perception. Most of these people are high-end developers. I mean, they're some of the best developers in the world. These aren't the script kiddies. These people are writing some of the most sophisticated C++, et cetera, on the planet.
(21:05): One of these guys has expensive oscilloscopes. He's writing code, and he has oscilloscopes to measure the analog outputs and inputs of certain systems, like controllers, et cetera, right? They're also, especially in the games region, not allowed to talk about what they're working on. I'm kind of getting the vibe that unless you are working on software features that incorporate Gen AI, unless you're working on, say, the Copilot interface to Power BI, the upper echelons of software engineering just are not at the moment receiving a whole lot of signal from this disruption.
(21:40): I don't think we should take that to mean that it's not disruptive. I don't think we should take that to invalidate any of the other conclusions you and I have been reaching, but it is almost sociologically interesting that these folks are just like, "Meh." It's like they're just insulated from it in a way.
Justin Mannhardt (21:56): That's very curious to me because if you are truly at the top of your craft, today you might not have any reason to gain utility from an AI in your craft.
Rob Collie (22:10): Let's also add that these people are of a certain age as well. I'm 50, and basically everyone I was talking to was either slightly younger or slightly older, and they're very senior. One of them said, "Hey, it's quite possible that a number of our developers are using Gen AI to produce better code faster," but it's invisible to him.
Justin Mannhardt (22:32): Yeah. How would you know?
Rob Collie (22:34): Yeah, he doesn't know, doesn't care. I have another friend, a longtime Microsoft developer who ran in those same circles, but my impression of him was that he was never really that into it. He has a master's in computer science, and in theory he's the big brain of computer software engineering, but in practice, his ADD, or his level of caring, meant he'd get a project 90% done, and that's as far as he would take it.
Justin Mannhardt (22:59): Yeah, why bother with the final 10? It's not exciting.
Rob Collie (23:02): And that final 10% takes 60, 70% of the work.
Justin Mannhardt (23:06): But all the interesting problems are solved.
Rob Collie (23:08): All the interesting problems are solved. All the dopamine reinforcement of solving interesting problems has been tapped out. It's like the chewing gum no longer has flavor, but now you've got to chew it for another six months. He just never built systems that were truly robust, truly reliable. So he tapped out. He left Microsoft, because Microsoft rewards the people who nail it airtight. You have to nail it. You're building software for the world. There's no two ways about it. But boy, these generative AI tools have really changed his life as a developer.
Justin Mannhardt (23:45): Wow.
Rob Collie (23:46): Much more so than they would change these other people's. And I wonder now if this friend of mine would be better than the people who were pointed out to him as positive examples in his time at Microsoft. "Why aren't you more like Jimmy over there?" I wonder if my friend, even with his sort of limitations of motivation and engagement on these things, just the way his brain is wired, I wonder if him plus the AI tools would run circles around the aforementioned Jimmy, who was really into it, but is so into it that he won't use the tools. It's an interesting thought experiment.
Justin Mannhardt (24:21): Well, that's the general prediction, right? You see it every day: AI is not going to take your job, but a human using AI will. And I think that holds especially when you look at the general population of people in any discipline that involves working with a computer, and even beyond at some point. I would buy into that theory, Rob, that that developer could rise to a much higher level of performance.
Rob Collie (24:47): With these tools, he's reached the point where the job now fits who he is. He was sending me these chat transcripts where, I mean, this is some dense, dense, dense C++ internet-of-things instrumentation stuff. Honestly, right on the face of it, it's got to be one of the most boring things ever. The subject matter he's working with isn't exciting. It isn't sexy. Right off the bat, it's boring, and it's working at a level of detail where only the airtight solution matters.
(25:18): There isn't any butchered-together, 90% version that's going to work. GPT is finding architectural issues deep, deep, deep, deep, deep down in the code that would've been invisible. He's in awe of these things, and I think he in a way overestimates their impact, because of how much impact they've had on him specifically. They turned him from someone who just wasn't into it enough into the version of himself that he always thought he could be. And how magical must that feel?
Justin Mannhardt (25:51): You probably have some experience with this from your time as a program manager: the things that would separate great developers from good developers. I was never involved in that kind of work, but it always seemed, from what you read about it, that it's their ability to pick those types of things out and be airtight, to see the things that aren't going to work, right? It's not just "I can write code fast and clean" and all that. It's that they have this ability to anticipate things, and if that's not in your nature, it's actually really frustrating and difficult to review dense material like this code you're describing.
(26:30): It's kind of a tedious process to review that as a human. The AI has no shortage of patience, and it can do it very quickly. It doesn't mind. It'll happily read your thousands of lines of code. It doesn't have that fatigue that you or I might have.
Rob Collie (26:48): Or frustration.
Justin Mannhardt (26:49): Right.
Rob Collie (26:50): I think that in terms of programming, I was and am similar to this friend of mine I'm describing, except that I unconsciously took that information and decided not to be a professional developer, whereas he doubled down. It's not like he's just sitting there and not writing code. It's not like he's sitting there and doesn't know code. He still needed to be him, the kind of person who would go and get a master's in computer science. Whether you got a master's in computer science or not, the point is you're part of a population that would consider it, which is not normal.
Justin Mannhardt (27:27): Not the majority.
Rob Collie (27:28): And I do believe that it takes someone like him sitting down with ChatGPT to build this incredible internet of things, instrumentation, device management, distributed architecture thing that he's working on, which again, I mean it just sounds like kryptonite. I want nothing to do with this.
Justin Mannhardt (27:44): That sounds hard and complicated and very detailed. No, I'm out.
Rob Collie (27:53): Yeah, yeah, and just boring all the way down. So to see him succeeding at this is like, okay, this is significant information.
Justin Mannhardt (28:02): That's what's so intriguing to me about the more senior folks: not into it, not concerned by it, not using it, especially when you're in the Microsoft bubble.
Rob Collie (28:11): I think there's another probably significant factor here, which is that all of them are either close to or beyond the point where they can comfortably retire. They just are not going to be forced to make this change. They're going to ride out who they are and what they've been doing, and if some big sea change comes, they're just going to ride into the sunset. I'm not in that place. I've got things to do.
(28:39): In some sense, the best work of my career is still ahead of me, whereas for them, the body of the best of their work, maybe not their single best thing, but the body of the best of their work, is in fact behind them. They didn't take a 15-year side quest like me to build a completely different company, honestly in a different industry. I know, we use those tools, but I'm still in it. I'm in the mix, and so it makes sense that I'm the one sitting at the table going, "Why aren't you thinking about this?"
Justin Mannhardt (29:05): Yeah, it's interesting to hear that point of view. I find myself continually coming back around to the reality that while the technology is clearly disruptive, clearly a thing, it's still early in its hype cycle, and it hasn't established itself in any clear ways. Yeah, it can help you do certain tasks and write certain code, but I don't think anyone has a real solid idea of what all of this stuff means and looks like, let's say, in five years. What are we doing differently, in very specific terms? I think that's still a bit opaque for people.
Rob Collie (29:44): By the way, here's a humorous anecdote from the dinner with the senior Tableau person. We were talking about our business model at P3 and everything, and he's like, "So it sounds like y'all are not in any sort of way contractually committed to Microsoft." I'm like, "No, that's right. We use their stuff and we like it." Here comes the recruitment: "So you could start using Tableau, right?" And I'm like, "Yeah, absolutely we can, and"-
Justin Mannhardt (30:09): We could.
Rob Collie (30:10): I said, "Absolutely we can." And I looked across the table, and I grinned at him and I said, "As soon as y'all build something that's better than Power BI, we will switch to it." And he just looks at me like this evil look, and he starts laughing. Probably unfair of me to interpret the look on his face as surrender.
Justin Mannhardt (30:30): Defeat.
Rob Collie (30:31): He had nothing. I was definitely getting snarky. I'm like, "You know that's possible, right? Because look at y'all. You finally started to build a dimensional engine. You're doing well."
Justin Mannhardt (30:40): That feels good, a little couple shots across the bow.
Rob Collie (30:43): And then I bought him dinner.
Justin Mannhardt (30:44): Good job. I think that's great about how we're thinking about things, because that's a genuine spirit here. We've hypothesized in conversations, yeah, something might come along that's better than the tools we use today. It's not even a might. It will. Technology keeps improving. That's reality. Now, whose campus is it going to come out of? Is it going to be a Microsoft thing, a Google thing, a Meta thing, a somebody-we-don't-know-about-yet thing? But our thing is we like data, we like people, we like helping people do cool things with data. I don't think we're going to use Tableau, though, Rob.
Rob Collie (31:21): Again, it's going to require some sort of qualitative step change out of what they're doing, and senior members of their team are not thinking about Gen AI, so-
Justin Mannhardt (31:33): I think we're good.
Rob Collie (31:33): I'm not expecting it to come from there, but again, it might. Some of my absolute best friends work there, and they continually come up in my head, like, oh, we should have them on the podcast. And then I'm like, "No, we can't." There's just too much inherent conflict of interest, and sometimes the conflict of interest isn't even a company conflict of interest; sometimes it's a personal one, right? I have a conflict of interest to be polite as a human, and I have a conflict of interest to be honest with our listeners at the same time. And how do you do that? Say, "They pay you really well, right?" Yeah, they pay you great to build the Pepsi to Coke. I think in a way they might even pay more to make Tableau.
Justin Mannhardt (32:16): Maybe they should change the logo. Pepsi notoriously changed their logo and that was a huge disaster.
Rob Collie (32:24): Have you heard the conspiracy theory? You say you weren't around and paying attention for New Coke, right?
Justin Mannhardt (32:31): No.
Rob Collie (32:31): I was in fifth grade.
Justin Mannhardt (32:33): I was not around.
Rob Collie (32:34): Okay, so New Coke came out, and it was a disaster. It's looked upon as a disaster. Everyone thinks it was one of the biggest bungled executions of all time. Why would you ever change the formula of Coke? So then Coke is forced to backpedal and introduce classic Coke. So for a long time it was classic Coke, Coke Classic, right? And we still had New Coke, and then eventually they retired New Coke, and now it's just Coke again, right? And it's the classic formula. However, somewhere in that process is when they switched from real sugar to high-fructose corn syrup. So when classic Coke came back, it wasn't the same.
(33:14): They'd switched to the cheaper sweetener. So then you're left to wonder: was this "don't waste a crisis, now's our chance, if we're going to switch the sweetener, we have to do it now"? Or was it a diabolical plan all along to hit people with New Coke, knowing that they wouldn't like it? What if this was a masterclass, and the people who did it are not allowed to talk about it? I just like living in a world where we can think about things like that. I think it's much more likely that they did F it up, right? They stepped in it big time, but then realized, well, okay, if we're going to go this direction, we might as well switch to the more poisonous of the two sweeteners. Cheaper, more poisonous, doesn't quite taste the same; they'll never know the difference.
Justin Mannhardt (33:58): I like to think about that kind of stuff too, but I realize every significant accomplishment or achievement I've had has just been the result of being extremely opportunistic or lucky or reactionary. And I'd love to sit back and say, "This was my plan all along."
Rob Collie (34:17): I think that's exactly it, right? As an individual or as an organization, you have an ability to respond to events. It's like those old tabletop electric football games that we had in the '70s, where all it was was just a vibrating field. You turned it on, this metal surface would just vibrate, and your players would move. And they would tend to move in the direction that you wanted them to, but there was nothing about the vibration that was driving them in a certain direction. The vibration was completely random, completely up and down. It wasn't like half the field was vibrating left to right, sending the offensive players a certain direction. It was that you bent the plastic tabs on the bottom of these players so that they had a tendency to respond to this randomness with a certain bias. They took advantage of the randomness as a means of making progress. Back to Littlefinger from Game of Thrones: chaos as a ladder.
(35:15): I think this is how life works. Random things happen. You take some action to mitigate the impact of the bad things, so you soften the bad things by some amount. Random things happen, and you take advantage of them in positive directions, and the net impact of all of this is progress. It comes back down to that thing we talk about internally a lot: the internal locus of control. If you think that the events beyond your control are everything, then they will be everything. You will just be swept along. The river is going to take you exactly where it's going to. But if you lean into the duality, the dual-mode model of this: there is randomness, and you also have tabs on the bottom of your football player that you can bend a certain way.
Justin Mannhardt (36:05): That message around being ready to respond, being ready to be opportunistic. I was talking to a customer the other day about AI, and we were talking about all these ways it might be useful in their organization, and I keep coming back to this realization: now is a really good time to be very curious, and to prototype ideas, and learn about the technology, and get educated, and be thinking. But it's probably not the right time to go hell-bent on a big implementation.
(36:37): The models are advancing and changing. The extensibility of the technology is changing. You want to be in a position where, at some point, certain things are going to tighten up and be crisp and clear: this is a really, really stable solution, or a really good thing to adopt. And you want to be ready to do that, because those moments will happen.
(36:58): You have to have a keen sense of where you're trying to go, either as an individual or as an organization. What are the things you want to be ready to try to influence and accomplish? That was the way we started that conversation. It's like, well, let's not talk about all the tech. Let's talk about some of the things that might be holding you back, or the things you want to make progress on, and we'll fill in the blanks from there. That's sort of my mindset right now: don't get boxed in right now, because you want to be ready to jump at the right opportunities with the right stuff. And I think it's still pretty early to say, "Yeah, let's go spend a lot of time and money and energy implementing this big old thing or process," when everything's still changing.
Rob Collie (37:35): So part of me thinks that it's always going to be still changing. And I wonder, are we going to get the plateau, and if we get the plateau, how far out is it? And what's the opportunity cost of not acting in the meantime? On the flip side, though, I also think that just gaining all the familiarity that you need, the level of comfort with what's going on and what's possible and all that, is really the work you need to be putting in no matter what. So it doesn't really matter about the plateau that I'm talking about, or whether it's so far out as to not even matter. We should just be jumping in, because unless you are acclimated, you can't act. Are you still saying "please"?
Justin Mannhardt (38:20): Are you still saying please? I'm kind of waiting for these moments where norms start to emerge. For example, there are all these Copilot Studios and GPT Builders and all these things, and here's where the not boxing yourself in comes into play. Their extensibility is still early, so you can only connect certain types of data to these systems.
(38:46): So don't jump through a whole bunch of hoops to build a bunch of plumbing so that it can hook up to the limited inputs it might have. The extensibility will come. I was having a chat with Brian Julius, who I did my episode with last week, and he was talking about how he was trying to build a custom solution that would run only on a very narrow knowledge base. You can call these services via their APIs, but you can't control the knowledge bases over the APIs. A sketch of that limitation follows below.
(39:13): I'm just giving these really specific examples as a way to say, now might not be the time to go develop the workaround to some of these extensibility issues or some of the model capability issues. Let the technology mature before you go accommodating its integration challenges.
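To make Justin's point concrete for readers: when you call one of these services over its raw API, any "knowledge base" has to travel inside each request itself, rather than being configured once the way a GPT Builder knowledge base is. Here's a minimal sketch of that pattern, assuming the OpenAI Python SDK; the model name and the knowledge snippet are illustrative placeholders, not anything from the episode.

```python
# Minimal sketch: calling a hosted model over its raw API.
# Assumes the `openai` Python SDK (v1+) and an OPENAI_API_KEY environment variable.
# The model name and knowledge snippet below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Over the raw API, you supply the narrow knowledge base yourself, per request;
# this is the limitation Justin describes: the hosted knowledge bases behind
# GPT Builder-style tools aren't controllable from here.
knowledge_base = "Q3 pipeline: 42 open deals, $3.1M weighted value."  # placeholder data

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your account exposes
    messages=[
        {
            "role": "system",
            "content": "Answer only from this context:\n" + knowledge_base,
        },
        {"role": "user", "content": "How many deals are open in Q3?"},
    ],
)
print(response.choices[0].message.content)
```

The design consequence is the one Justin draws: if the plumbing you would build today exists only to squeeze your data through that per-request bottleneck, it may be cheaper to wait for the extensibility to arrive than to build the workaround now.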
Rob Collie (39:33): Yeah, there are certain parts of this that just move as fast as they can move, because human beings are involved in building these systems and making decisions and all that kind of stuff. At the same time, there's this breakneck model development. OpenAI isn't bound up by any of this stuff. They're just in their own little workshop. Well, an increasingly larger workshop, consuming more and more of the world's electricity. There are a lot of power cords running into this little workshop. They get to operate in a largely clean, academic type of environment. Whether that stuff plateaus, and when-
Justin Mannhardt (40:07): Oh, yeah, I agree with you.
Rob Collie (40:08): ... completely separate question.
Justin Mannhardt (40:09): Yeah.
Rob Collie (40:10): But all the other boring infrastructure stuff is just sort of software as usual. It happens to be the glue in between. Again, if the developers are younger, maybe it'll move a little faster, because they'll be using Gen AI. And it won't be the cohort that's about to retire with just gobs of money. I was reading something the other day: there are something like 55,000 people in the Seattle area that have a million plus in liquid assets.
Justin Mannhardt (40:38): What's that as a percentage?
Rob Collie (40:40): I don't know, but 55,000 is a lot of people. And a million in liquid assets, we're not talking about the house that they're living in,
Justin Mannhardt (40:50): Cash money.
Rob Collie (40:51): where they've got maybe a million plus of equity in it. Basically a million plus in the bank. This came up because they were debating some sort of proposal to enact a capital gains tax in the city of Seattle.
Justin Mannhardt (41:04): Of course.
Rob Collie (41:06): Here's the thing: there's no state income tax in Washington, and there aren't city income taxes either. In some sense, it seems like a tax-free state until you go and buy something. I think the sales tax is like 10%. We're about to go register our car. Ooh, that's going to be a big price tag. It's like a significant percentage of the book value of your car.
Justin Mannhardt (41:27): Wow.
Rob Collie (41:28): Each year. I remember paying I think like $600 to register my 1989 Pontiac Bonneville when I moved here in 1996. I was just like, "Oh, my God."
Justin Mannhardt (41:42): Yeah. I don't pay that much to register my newer car in Minnesota.
Rob Collie (41:48): Funny story about that '89 Bonneville. As you might expect, a General Motors car manufactured in 1989 wasn't manufactured to last. American cars have gotten a lot better since then, but for a while there, American cars had a very deserved reputation for being clunkers. So my '89 Pontiac Bonneville was very much from the generation that was not built to last.
(42:08): So I moved out here in 1996. It was seven years old, but by 1998 it was starting to die, and so I had to have it towed one time from inside the parking garage at Building 17 at Microsoft. I had it done after hours, so there were not a lot of people watching and everything. I don't know, maybe six months later, or maybe less, it died again, and this time it died in the circular drive in front of Building 17, which is a little more visible.
(42:33): So I do the same thing. I call AAA, and they call me a tow truck. And as the tow truck is pulling up, I look at it and I go, "Oh, my God. It's the same outfit that towed me last time." And I'm just like, "Please, please, please, please, please don't let it be the same tow truck driver. I don't want to endure that." And no, it's not him, and I just feel this huge sigh of relief. We get all hooked up. I ride along with him, and as we're going down one of the main drags, we've just left Microsoft campus, we pass another tow truck going the opposite direction that's from the same company, right? And I'm thinking about it, and then suddenly the walkie-talkie that's in the cab crackles to life. I hear this voice go, "Hey, I've towed that car before."
Justin Mannhardt (43:21): I've seen that Bonneville.
Rob Collie (43:23): And I'm like, "Oh, no." And my driver picks up the walkie-talkie and says, "Yeah, he says he remembers you." Oh, and in that same car, before it died, the headliner glue failed, so the headliner was falling on me. So I went to the supply room at Microsoft and took the free thumbtacks, and I tacked the whole headliner back up with these thumbtacks. I was angling for a raise, and at one point I was at this off-site meeting where I had this opportunity to give my boss's boss a ride back to campus. And I'm like, "Oh, yeah, Yahtzee." He got to experience it: he has this guy, me, doing a job that was way above my seniority level at the time. The person who's handling the future of all Microsoft Windows application software installation, the person who's in charge of all of that, is driving around in a car with the headliner tacked up with multicolored thumbtacks, right? Yeah, I got my raise.
Justin Mannhardt (44:27): That would be, I think, way more interesting to see in the LinkedIn feed as a how-to-get-a-raise tip than some of the other shit you see.
Rob Collie (44:39): If your car is an embarrassment, turn it into an asset, right? Again, it's that chaos-as-a-ladder thing, right? I knew what I was doing there.
Justin Mannhardt (44:51): Hey, you were ready to move and you were opportunistic.
Rob Collie (44:54): Yeah. Some people would go, "I can't do that. It was so embarrassing."
Justin Mannhardt (44:57): No, no.
Rob Collie (44:58): I'm like, "No, this is perfect. Let him see how the rest of us live."
Justin Mannhardt (45:06): Good stuff.
Rob Collie (45:07): Yeah. Lots to come, right? Are you working some guest angles?
Justin Mannhardt (45:10): I'm working some angles.
Rob Collie (45:12): Yeah.
Justin Mannhardt (45:12): Get some folks we are doing some cool things with to hop on.
Rob Collie (45:15): I need to get finished moving in so that I can start having lunches with just so many interesting people.
Justin Mannhardt (45:22): Yeah, go unpack. Why are you sitting here with me?
Rob Collie (45:25): Yeah. I've got to get unpacked, folks.
Justin Mannhardt (45:28): Thanks for listening to the Raw Data by P3 Adaptive podcast. Let the experts at P3 Adaptive help your business. Just go to p3adaptive.com. Have a data day.