A Lifelong Learner Embraces AI, w/Brian Julius

Rob Collie

Founder and CEO Connect with Rob on LinkedIn

Justin Mannhardt

Chief Customer Officer Connect with Justin on LinkedIn


This week on Raw Data, Justin’s flying solo and catching up with Brian Julius, a true data veteran with a unique journey from government work to AI exploration. Brian’s path has been anything but typical. Starting in high-stakes federal roles, he later moved into management, only to find himself pulled back to his roots in hands-on analytics just as AI started turning the industry on its head.

Brian digs into the moments that shaped his career and how curiosity led him back to data. Now, he’s diving into AI, pushing the tools to see how far they can go, and discovering where human intuition still has the edge. In this conversation, he and Justin explore what it means to keep evolving in a field that doesn’t stand still, and why staying open and curious might just be the most important skill.

With Rob away this week, Justin and Brian take a deeper dive into what it’s like to adapt to constant change, and why there’s more to data than just knowing the latest tools. It’s an honest, down-to-earth look at the power of a lifelong learner’s mindset in an era of rapid transformation.

Also in this episode:

DAX/ChatGPT

Ideogram

Episode Transcript

Justin Mannhardt (00:00:00): Hello, friends. I'm not sure I got that quite right. Rob will have to let me know after he listens to this week's episode. This is Justin. Rob has the week off from Raw Data, because he is in the process of moving to Seattle. Head back to last week's episode, and catch up on his reflections on his journey back to where it all started. Really excited for him and Jocelyn to be back out on the West Coast. For this week's episode, I had a chance to sit down with Brian Julius. Brian bills himself as a lifelong data geek. Today, Brian is a top voice on LinkedIn sharing amazing content related to analytics and Generative AI. You should definitely follow him.

(00:00:45): There's going to be a link to his profile in this week's show notes. I first met Brian when he was a participant in an advanced DAX class that I was teaching for P3 about six years ago. To see Brian today as an established thought leader in our space is really inspiring to me. This is an individual who was still wrestling with things like how context transition works, and now, today, he's out there demonstrating how things like ChatGPT are actually quite good at DAX. Today, Brian and I talk about his journey, get into his experience as an expert witness, what truly makes a great analyst, the current state of AI tools related to our line of work, and where things might be headed.

(00:01:29): Brian is a tremendously insightful and kind person, and it was a delight to sit down with him. I hope you enjoy the conversation. So with that, let's get into it.

Rob Collie (00:01:40): Ladies and gentlemen, may I have your attention, please? This is the Raw Data by P3 Adaptive Podcast, with your host Rob Collie and your co-host Justin Mannhardt. Find out what the experts at P3 Adaptive can do for your business. Just go to p3adaptive.com. Raw Data by P3 Adaptive, Down to Earth conversations about data, tech, and biz impact.

Justin Mannhardt (00:02:15): Welcome to the show, Brian Julius. How are you today?

Brian Julius (00:02:19): Doing great. Doing great. I really appreciate the opportunity to be here. This is going to be a lot of fun.

Justin Mannhardt (00:02:24): You've been on my mind as I've been following you over the years, and I said, "Man, I've got to get Brian on a podcast." This just happened to be a great week for us to get together. Rob just finished the journey, so he finally moved back to Seattle after a very long stint in Indianapolis, which is really great for him. His podcast recording kit is in several boxes, I imagine, at this point, but he'll be back up and running next week. So, Brian, I'm just thrilled we get to sit down today. To start, tell the audience a little bit about who you are and what you're up to these days.

Brian Julius (00:02:55): I describe myself as a lifelong data geek. It really goes back to being a kid. My dad was a scientist, a biochemist. From the time I was a little kid, he used to sneak me into the lab on Saturdays, and we'd run experiments. I remember being eight years old, and having this giant lab coat on and goggles. We'd run these experiments in the lab, and he would say, "What do you think is going to happen? Make a guess," and then we'd run the experiments and see if that was right. I realized in retrospect that it was early hypothesis testing. As a teenager, I fell into... again with my dad, playing competitive bridge, and we used to travel around playing tournaments.

(00:03:34): I realized again much later that that was really an outstanding education in data science. By the time I got to college, I was already well versed in statistics and probability and hypothesis testing. It was all stuff that I didn't even think of as being math or analysis. It was just some fun that I did with my dad. Studied economics in college and econometrics in grad school. Came out really dedicated to analysis, and went to work in consulting, and then moved into government, and worked for 10 years doing natural resource damage assessment, working as an expert witness testifying on economic matters. Then I really moved into a management phase, and I call this my Rip Van Winkle data phase.

(00:04:19): Went into management, and got immersed in that for about 15 years, and really did very little data work. In retrospect, my skills really slipped, and I just fell out of the community, and woke up one day and realized how much I missed it, and took a four-level demotion out of the corner office, the big corner office with 170 people under me. Went down to a team of five, really back into analysis and hands-on data work, and I just loved it. I got assigned a project where we had failed an audit on our financial system, and we were doing billions of dollars of transactions that were being tracked, believe it or not, in Excel.

Justin Mannhardt (00:05:00): Of course.

Brian Julius (00:05:03): The audits came out to the penny, but they were like, "You cannot be doing this kind of transaction in Excel." So, I got tasked with building a new financial system, and that was interestingly where you and I intersected, which is we talked to Microsoft, and they had suggested Dynamics and Power BI. I'd never heard of either at the time. My office sent me to a series of courses. It was all the courses that P3 taught, Power Query, DAX, advanced DAX, and governance, and I took all four of them. That's where we met, and I worked for a year plus on that project, and then really got immersed in the Power BI community, and fell in with Sam McKay from Enterprise DNA and his group there.

(00:05:46): He called me up over one Thanksgiving, and offered me the job of chief content officer. I spent close to three years doing that, and then interestingly, AI hit like a bomb, a meteor out of the sky in November of 2022. As you guys probably experienced, it really disrupted technical education in a huge way. I don't think there was anybody outside of the LLM world that was anticipating that. At the time, I thought, "Boy, it's not something I really know," so I stepped away from the position, because I thought, "I really don't have the skills or the expertise to offer anything in that field." Interestingly, that's now where I find myself doing most of my work. It's been an interesting journey over the last 40 years or so in data.

Justin Mannhardt (00:06:33): A lifelong data geek, I think you've earned the right to put that on your profile for sure. We would've met first in 2019 sometime.

Brian Julius (00:06:43): Probably late 2018, I think.

Justin Mannhardt (00:06:46): So, you were in one of my advanced DAX classes that we put on. You came to one of those, and so we met, and then, it feels like all of a sudden, right, because we're in each other's orbit. We're connected on LinkedIn. This takes place over a period of five, six years. All of a sudden, here you are. You're a top voice on the platform, and you're talking about how the o1 model can handle different problems that you're throwing at it. So, even just in the last handful of years, you've been on a journey of your own continuous self-development, and even throughout your career. What propelled you along the way here, where you ended up creating content and being a well-known figure on the social platform? One day, this guy Brian's in my class trying to make sure he understands context transition?

(00:07:30): Next thing, he's wrestling with LLMs on doing custom visuals in Deneb. Where does all that come from? It's such a great story.

Brian Julius (00:07:37): I've always just been super curious. I don't know if you've ever taken the CliftonStrengths inventory. It was something designed by somebody, I believe at Gallup. Unlike a lot of the assessments that assess your weaknesses, this really assesses your strengths: what are you best at, and what do you build from? When I took that, learning was really my number one skill, and it's always been something I've focused on a lot, and tried to do a lot of reading and learning about whatever domain I happened to be in at the time.

(00:08:06): As I came back into data from a period of, say, this 15-year sleep, I was just astonished at how good the tools were. When I left, it was like I left in the era of 486 computers and basically Windows still being three point something. I wake up, and here we are in what feels like supercomputers and Power BI and R and all the tools that are out there, and then the AI tools. It just became this fascinating journey for me of going back and seeing where I'd come from. My first computer was a 4.77 megahertz XT with 640K of RAM.

Justin Mannhardt (00:08:48): Yeah, but they don't measure it that low anymore, Brian.

Brian Julius (00:08:51): Yeah, I know. I know. I was running econometric models on those, and it would take 17 or 18 hours to run them. It would keep me up at night, because you'd hear this hard drive just grinding. So, I came back into it from that, and it was just absolutely fascinating. I was just obsessed, and I'm an obsessive type anyway, and so with the background that I'd gotten from you guys, and then the content that was out there for free, I was just mainlining this stuff. On the bus home every night, I was watching YouTube. I was reading books.

(00:09:25): I was experimenting, really just started diving into the technical aspects of the tools, and just something I've always been fascinated with, and coming back to it with the perspective of where it's come from. Still to this day, it's like a lot of what we're doing now just feels like science fiction to me.

Justin Mannhardt (00:09:42): When Power BI, or even Power Pivot in Excel for those that were tracking with it at the time, came along, if you were paying attention, it felt massive. Like when you said, "I went to sleep for 15 years," you got out of the slipstream in a way. You weren't still wrestling with the tools of the era as things progressed, and then for you to even say, "Yeah, I was asleep, and then I came back, and I saw these tools." Quickly learn how they work. Quickly learn how to leverage them. Start creating value for...

(00:10:15): We take for granted how big of a shift that was, because without that, I would wager it'd be more difficult for you to go to sleep for 15 years, and then come back and jump into an analytic slipstream without all of these amazing things. Do you ever think about that?

Brian Julius (00:10:32): In some respects, it's something I loved doing before I got away from it. So, it was like rediscovering this past love. At the time I left, I'd been really at the peak of the tools of that day. I was testifying as an expert witness in federal cases, and so my analytic skills were really finely tuned at that point. If you ever want to test your skills, testing them in a courtroom in federal court is a hell of a good way to do it.

Justin Mannhardt (00:11:03): Yeah, the stakes are a little higher than how many red socks did we sell last quarter?

Brian Julius (00:11:09): I mean, the interesting thing about it is I was working with the Department of Justice litigators, and they are top of the top. So, the prep sessions that you'd go through were just harder than anything you do in actual court.

Justin Mannhardt (00:11:22): Wow.

Brian Julius (00:11:23): So, almost like being an athlete, retiring and then coming back, it took me a while to shake the rust off-

Justin Mannhardt (00:11:29): Sure.

Brian Julius (00:11:29): ... and to learn the sport as it was played now, but I felt like I had a really good foundation. I could feel the skills coming back, and it was really interesting because, particularly, statistics is something we don't think of as changing that much, but statistics changed dramatically. There were so many things that you could do theoretically that, at the time I left, we just didn't have the computing power to do practically, and then I came back, and all of a sudden, there were just R packages and some of the stuff you could do just in Power BI that you hadn't been able to do prior. It was kind of retraining myself in terms of what's best practice, because there were things that at the time I left were just not feasible, and I assumed coming back they weren't feasible.

(00:12:18): Another thing that was interesting was at the time I left, databases were slow, and storage was expensive. So, you used to just normalize everything to death. Then I come back to Power BI, and I immediately start creating these huge snowflake structures, and then realize, "Oh, that's not going to work at all." So, this whole retraining in terms of denormalizing your dimensions, it was really foreign to me. It took a while to undo the muscle memory of normalizing the hell out of your data.

Justin Mannhardt (00:12:51): When you haven't yet had the opportunity to really understand new technology, all of those pre-established assumptions can show up in very strong ways.

Brian Julius (00:13:02): Absolutely.

Justin Mannhardt (00:13:03): Even now, this is still a topic that comes up. You're working with a client or in your business, and you're trying to explain why you want a certain level of granularity for your fact table, and you hear, for example, "We should aggregate this, because we want to preserve the space in the storage." Not understanding how things work can be kind of a trap. So, it's a testament, I think, to your curiosity when you realize, "Hey, this isn't right. I'm building this third normal form database in a tool that's not designed for this type of schema. I need to learn what's going on here."

Brian Julius (00:13:37): Even stuff as basic as running a t-test, the Student's t is not best practice anymore. So, it was really not even knowing where the potholes were in terms of these things that you just take as givens that are no longer givens.
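
For the concrete version of that point: the Welch test, which drops the equal-variance assumption behind the classic Student's t, is now the usual default, and it is what R's t.test() runs unless you ask otherwise. A minimal sketch on simulated data:

```r
# Minimal sketch: two groups with unequal spread and unequal sample sizes,
# exactly the case where the classic Student's t (equal-variance) assumption bites.
set.seed(42)
group_a <- rnorm(30, mean = 100, sd = 5)    # smaller group, tight spread
group_b <- rnorm(80, mean = 103, sd = 15)   # larger group, much wider spread

# R's default is already the Welch test (var.equal = FALSE).
welch <- t.test(group_a, group_b)

# The classic Student's t has to be requested explicitly.
student <- t.test(group_a, group_b, var.equal = TRUE)

# With unbalanced variances and sample sizes, the p-values and degrees of
# freedom can differ meaningfully between the two.
c(welch = welch$p.value, student = student$p.value)
```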

Justin Mannhardt (00:13:51): This was interesting to me just hearing you talk about your experience in the courtroom. As an expert witness, what type of analytics or analysis is going into those types of activities with the work you were doing at the time?

Brian Julius (00:14:04): We did a lot of pretty complex economic work. What we were doing was basically for oil and chemical spills. When the resources were damaged, we would recover on behalf of the public. Like fishermen and restaurant owners and pier owners and stuff, they could recover directly for their losses, but nobody other than the federal trustee can recover for the loss of recreational fishing and beach use and the ecological services that are damaged in a spill. We would basically estimate the economic losses to the public, and then recover those funds for restoration. For example, in the Deepwater Horizon spill, we recovered over $8 billion for restoration, and we would use all kinds of things.

(00:14:54): We would use something called hedonic property valuation studies to look at the change in value of housing that was affected by contamination versus not, keeping everything else equal in a regression framework. We did something called habitat equivalency, which was a lot of ecological modeling where we look at the ecological services that would flow from a resource, then how those were affected by an event, and then what would be the provision of services from an equivalent resource, and so a lot of ecological modeling. We did some really fascinating stuff with a team of Nobel Prize winners for a case out in California looking at what's called existence and bequest value, which is, even for people who don't use a resource, you or I might value the fact that it's there for us to visit years from now, or for our children to see.

(00:15:47): So, how do you value that sort of loss? There were all sorts of pretty in-depth complex economics that we were doing to quantify those losses.

Justin Mannhardt (00:15:56): That's wild. It's not a story you hear about all the time, especially in our space. I hear about measuring sales and widgets and all that kind of stuff all the time, but to do that very nuanced and complex analysis, where I imagine there's quite a bit of subjectivity and debate involved around the analysis as well. Is that fair to say?

Brian Julius (00:16:16): Definitely. A lot of controversy, in that some of the methods were pretty heavily disputed, and that's why, for some of those, you've got to bring in the heavy hitters, people with the big medals around their necks from Sweden. The other thing we used to do is a lot of simulation. We did a lot of probabilistic analysis to say, "Okay, what matters and what doesn't?" Most of our cases didn't end up in court, so we would work with the attorneys to settle these cases, and the key in settlement is finding something that is not of high value to you but is of high value to the other side, and for them to do the same. So, you're trading things you don't really care about for things you do.

Justin Mannhardt (00:16:58): Wow.

Brian Julius (00:16:59): We would use a lot of mathematical simulation to figure out which parameters in a model really didn't have much impact. If the other side was heavily hung up on one of those, we'd say, "Okay, we'll give on the assumption on that in exchange for X."

Justin Mannhardt (00:17:14): Fascinating.

Brian Julius (00:17:16): A lot of interesting stuff there.

Justin Mannhardt (00:17:17): I like to ask people about their different experiences applying analysis or data science to whatever it is they were working on, because experts, people that have been there and really applied themselves to a particular area... When I listened to you answering that question, you were explaining all of the business rationale or societal rationale or economic rationale around some of these things, much more than the typical, "Oh, I had to learn how to write this fancy R function," though I'm sure you had to learn all kinds of interesting analytical or data science techniques along the way.

(00:17:51): But, I've always found the curiosity around the problem that you're trying to support or address has been the catalyst to propel you to learning different technologies or different methods. So, I'm just curious, how has that changed for you over the years? You put out a lot of content about how to achieve different types of results in the technologies. What's your reaction to that?

Brian Julius (00:18:13): I really take a broader view. I love the tools. I love playing with the technology, but the thing I really saw was that it wasn't the best technical analysts that really ended up having the biggest impact. I can safely say I worked with a lot of really smart people. I was never the best analyst in that group. I was solid, but never the peak technical analyst. The thing that I did do well was I could explain complex concepts to non-experts. So, when you're sitting in front of a jury or sitting in front of a judge or a special master, it's not so much that you know every little nuance and detail of every methodology, and have the most innovative technical approach.

(00:19:01): It's that you can really get across, "Why did we do it? Why did we do this? Why did we use this approach? What did it tell us? How do we know it's valid?" Being able to address those questions in a clear way was often a lot more important than having the deepest esoteric skill set.

Justin Mannhardt (00:19:19): You just made me think about a moment in my own career. This is many, many years ago. Before I had gotten really into Power BI, I had a job opportunity, and as part of the screening process, you were asked to perform an analysis on a particular topic that applied to this business. So, I was like, "Oh, I'm going to crush this," but the feedback I got, and this was very formative for me, was that my presentation was so absorbed with technical fascinations that I buried the lede on the business impact. It was my style of communication. Understanding what mattered to the audience... All those things, they mattered to me.

(00:20:00): It's like, "Yeah, this is a good analysis," but it was really a learning moment for me. It's like, "Wow, these things are important, but if you can't..." I love how you said this. If you can't break your complex topic down for a non-expert audience, you will really struggle. That's sage advice going back to the dawn of analytics as far as I'm concerned, but you just made me think about it, and I'm sure a lot of people have been through similar moments where they realize, "Oh, I'm just totally missing the boat here on how to achieve what I really want."

Brian Julius (00:20:28): We actually turned that into kind of an interview technique that we used when we'd have people come in for analytics positions. We would have them just give a 20-minute presentation on any project of their choice. It didn't have to be something related to what we did. In some sense, it was better if it wasn't. If it was something you were just totally unfamiliar with, seeing how somebody came in cold to a group that included PhD economists to biologists to managers, people who had no connection to the work that they were doing, and seeing how they would explain that, and how they would handle questions from that range of different domains and expertise.

(00:21:09): That ended up being the best interview technique we ever came up with, because somebody who could come in and give a really thorough, concise, clear presentation on a technical topic you knew nothing about, and then handle questions addressed to the level of the audience effectively, if you can do that, that's the job of a data analyst.

Justin Mannhardt (00:21:33): The terminology we use in the marketplace is always interesting to me, because it can get a little fuzzy when you see the memes about job positions like, "I'm looking for an analyst, but I really need someone who's a data scientist, a BI engineer, a data engineer, a DBA, a full stack developer, and has good presentation skills." The difference between doing things like data analysis and data science, to me anyway, is really about that curiosity, finding things we don't fully understand yet, coming to conclusions, and then educating people around all of that.

(00:22:09): It's not the most technical or most mathematical. It's being able to bring that all together and move it forward. But a gear shift here, Brian. So in the last five, six years, you've gone through this journey of really going full tilt into Power BI, the Power Platform, Python, R, a number of other things you've taken an interest in, and then, as you said earlier, a meteorite called Generative AI hit. For the past 18 months, a couple of years, we've all been caught in different phases of hype cycles.

(00:22:41): How is this changing the nature of the work we do? How's it changing our jobs? How is it changing the services and software and things that are available to people? If I asked you in 2021, Brian, if I want to become a successful data person, data scientist, whatever, you'd probably have a certain set of advice for me. So now, here's all this new stuff. Rob and I, when we talk about AI on the podcast, we know it's not nothing, right? AI is definitely something.

Brian Julius (00:23:10): Unquestionably.

Justin Mannhardt (00:23:11): There are some unknowns around what's really going to establish itself in the marketplace in terms of norms and how we do the work that we do. So, I'm curious. How is your thinking around what it means to be a data person evolving with the meteor that is Generative AI?

Brian Julius (00:23:32): I would say in some respect, I was one of the first people to lose a job from AI.

Justin Mannhardt (00:23:37): Really?

Brian Julius (00:23:38): I was working as the chief content officer for Enterprise DNA, which is an excellent firm in New Zealand. When I was there, we expanded into the Power Platform, into R and Python, and the whole environment around Power BI and Deneb and all those things, and then AI hit, as I mentioned. We just weren't sure what direction this was all going to go. I remember that first weekend of playing with ChatGPT, and as rough as that first model was, immediately seeing, "This is something, and it's going to get a lot better in a hurry. It's going to have a huge impact on education."

(00:24:15): So, I ended up stepping away from that role just because I felt like I didn't have the AI background that was going to be needed. That hit in November, and by late spring of the following year, I had made the decision to step away. So, it really directly affected my role. At the same time, I was fascinated by it, and one of the things that surprised me and frankly was a little disappointing was everybody in the data field had an opinion about it, but those opinions didn't seem very data-driven. It was like, "Oh, this is no good at this. It's great at this sort of thing," but it didn't seem to be based on the kind of analysis that we all pride ourselves on doing.

(00:24:56): So, I really saw an opportunity to dive in and say, "Okay, is there a place for somebody to really do a rigorous data analysis of how good AI is at tasks A, B, C, and D, those types of things that are directly going to affect our jobs as data analysts?" For the last year or so, the bulk of what I've been spending my time on is really digging into AI very, very heavily. I'm not saying in any way I'm an expert, but I feel like I'm a very knowledgeable end user. I've been applying my data analysis skills and data science skills to doing some rigorous testing of, "How good is AI at writing DAX, at writing M-code, at constructing visuals with Deneb, at doing statistical analysis with R?"

(00:25:47): The answer I'm finding is it's remarkably good. You need to prompt it properly. You need to give it the information that you would give a human expert. But if you do that, it is really, really impressive in what it can do. Particularly as we've moved from LLMs to LRMs and the reasoning models o1-preview and o1-mini, it's only going to be a matter of weeks or months before others start coming out with similar models, and we really start seeing the benefits of not only number of parameters and scale but of test-time compute, so giving the model longer to think in addition to just feeding it a lot more data.

(00:26:32): I think we're now seeing the models scaling up on both axes. I really think we're not far away from having most of the languages that data analysts use heavily pretty much be solved within the next year.

Justin Mannhardt (00:26:46): My own assessment of how this was going to progress, and by this, I mean Generative AI's capabilities within analytics to write DAX, write R, write Python, do data science applications, my assessment of how quickly that's going to move and who the players were going to be in its movement is something that I'm always recalibrating and just constantly unsure of. A great example is, I think, around the time when OpenAI moved from GPT-3.5 to GPT-4. That's when I first started to be like, "Okay, these things are going to get good at DAX, for example."

(00:27:27): I would have said at that time, it's just a matter of time before Microsoft gets serious about building the solution that's really good at that. Then the o1s come along, LRMs, right? So, we've already got new acronyms, large reasoning model versus large language model. I've seen in some of your posts and in some of my own work like, "Wait, it's already good at it."

Brian Julius (00:27:52): Oh, I'd say it's already great at it. I mean, I would say there are only a handful of people in the world who are better than it is right now.

Justin Mannhardt (00:27:59): That is remarkable. Rob and I, we had an episode we recorded a couple of weeks ago where Rob has a model he uses for his fantasy football league. What he ends up doing is he copies the text. He goes to a website, and control A, control C, and pastes it. So, it's just this garbage of an output into a spreadsheet, and he himself had worked through Power Query to be able to clean up this data so that it could be useful. So, we did an episode where we said, "Okay, let's see if ChatGPT..." You can imagine this, Brian, just like the grossest layout of data that makes absolutely no sense, and it solved it.

(00:28:36): I had to prompt it well. I had to correct its mistakes, and tell it where to go next, but it got to a solution that works. The code it gave me was really clean. It was annotated. It was well formatted, but I don't know if I would've been able to do that had I not had a reasonable understanding of M myself. So, I think that's one of the questions I have with AI: how important is it to have a baseline understanding of where you're trying to go? How durable is that need over time, do you think?

Brian Julius (00:29:11): It's a great question. It's something I've been really discussing and debating a lot. In the testing that I do, I often constrain myself so that I take the role of an intermediate beginner, somebody who knows something about DAX, but doesn't have anything approaching an advanced or expert level of knowledge. So, the prompting that I'll do is really just observational. I'll say, "Okay, this result threw this error," but I won't say, "Okay, I think what you need to do is address the context transition here, or address the filter condition here."

(00:29:49): I've been trying to keep it strictly observational, something that a beginner could just look at and say, "Okay, it's having this problem, throwing this error, or all the columns are blank," and then seeing if it can correct from there. Obviously, you can bring a more advanced level of knowledge to it. In particular, one of the things I've found is that M is probably the one where AI is least advanced of any of the languages that data analysts use, not because it's harder, but because there's less of a publicly available code base on which to train it. I think a lot of the M-code, the best M-code, sits in weird places that, unless you're very knowledgeable about M, you don't even know exist.

(00:30:32): One of the things I've been doing is talking to folks like Vijay Verma, who runs the Excel BI challenge nightly, and Omid, who runs challenges three times weekly on LinkedIn, and saying, "Okay, can we tap the resources of those challenges?" A lot of those challenges draw some of the best M coders in the world, literally, people who are competing in Vegas and [inaudible 00:30:57], and some of the real superstars of the field are writing code every night. If you could train these models on that code base, I think they would be unstoppable.

(00:31:10): That's something that Vijay and Omid and I have been talking about, wanting to try to bring that fine-tuning to M-code, and see if we can really advance its ability to program in M. It's still quite good, but that's the one that, to me, is lagging.

Justin Mannhardt (00:31:25): I wonder too how much of that is hampered by the reality that most M-code is not code that was actually written. It's code that was generated from people using the UI to complete their data transformations, which is what the tool was really designed for, and so the code output of all the... What I've worked with ChatGPT on looks nothing like what the UI would do. Things are named well, and it's spaced well, and it's still quite good, and it's the same thing. If someone found it important enough, which someone will, to train models to understand this better, that's not a huge leap. I read something this morning where something like 25% of all the code written at Google is written by AI now.

Brian Julius (00:32:12): That wouldn't surprise me at all.

Justin Mannhardt (00:32:13): It's happening. It's even things like, one of the reasons I personally liked Tabular Editor when I was still doing a lot of development myself is how I could just expedite quick things like, "Okay, give me this set of time intelligence measures for all my base measures. Can you just pop those in there, so I don't have to write all of them?" But that was for known, repeatable things that are going to come up. Instead of copying and trying code from the internet, you have a sparring partner, an AI tool to just go back and forth and find a solution with. I do wonder how low the bar is going to get, Brian, on the minimum level of knowledge or expertise you need in a particular language to get good results from AI, or are we going to end up learning a completely different way of doing this work three, five years out?

Brian Julius (00:33:00): I mean, I think it's really going to be the latter. I've been experimenting a lot with languages I don't know, where I couldn't write a single decent line of code. C# is a great example. I'm a huge Tabular Editor fan. I've never gotten around to learning C#. I had one the other day where it cranked out 128 lines of C# code for a script that I had an idea for. It banged out those 128 lines just as I watched. I popped it into Tabular Editor, and it ran perfectly.

Justin Mannhardt (00:33:30): Wow. First try.

Brian Julius (00:33:32): I didn't have a single reprompt or redirect on it. It took me a while to test it and make sure that it was doing exactly what I thought it was, because that's the downside when you're not skilled in a language. For example, I can look at the R code that ChatGPT writes, and I know exactly what it's doing. To be able to write in a language you have no knowledge of is really spectacular. It feels like those movies like Limitless, where you take a pill, and all of a sudden, you can use 100% of your brain, or The Matrix, where it's like you wake up, and you're like, "Now, I know Kung Fu." It's like, "Now, I know C#."

(00:34:07): For me, it's just incredibly exciting and really motivating to feel like there's nothing out there that I can't do now. There are no constraints on our ability to bring in tools or techniques. I think it really does change the focus. In particular, you look at something like statistical analysis. You can now do a year's worth of terrible statistical analysis in a day, but it will be terrible.

Justin Mannhardt (00:34:33): Right. I love that.

Brian Julius (00:34:35): So, I look at the ability to violate all of the best practices of valid analysis. In one prompt, you can basically render your entire analysis invalid. If you say, "Run every combination of independent variables, and only report those regressions where all of them are significant at the 0.05 level, and the F test passes," that is the worst possible thing you can do. Your analysis can be so biased in so many different ways. It's going to be basically unusable, but it's going to look really good.
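
To make that concrete, here is a minimal sketch in R on purely simulated, hypothetical data: the outcome below has no relationship to any predictor, yet running every small combination of predictors and keeping only the "fully significant" fits still turns up models that look convincing.

```r
# Illustrative sketch only: y is pure noise, unrelated to every predictor,
# yet an exhaustive search that keeps only "all coefficients significant at 0.05
# and F-test significant" fits will usually still find something to report.
set.seed(1)
n <- 100                                   # observations
p <- 12                                    # candidate predictors, all noise
x <- as.data.frame(matrix(rnorm(n * p), ncol = p))
names(x) <- paste0("x", 1:p)
dat <- cbind(y = rnorm(n), x)              # outcome has no real signal

keepers <- list()
for (k in 1:3) {                           # subsets of 1 to 3 predictors
  for (combo in combn(names(x), k, simplify = FALSE)) {
    fit    <- lm(reformulate(combo, response = "y"), data = dat)
    s      <- summary(fit)
    p_coef <- s$coefficients[-1, 4]        # coefficient p-values, intercept dropped
    p_f    <- pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3],
                 lower.tail = FALSE)       # overall F-test p-value
    if (all(p_coef < 0.05) && p_f < 0.05) {
      keepers[[paste(combo, collapse = " + ")]] <- fit
    }
  }
}

length(keepers)   # typically a handful of "significant" models found in pure noise
names(keepers)
```

The models such a search keeps are exactly the kind described above: biased by the search itself and unusable for inference, but superficially respectable.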

(00:35:11): I think that's really what I worry about is that the AI not... It's not the hallucination. It's not giving the wrong answer. It's that it will give you exactly the right answer to the wrong questions.

Justin Mannhardt (00:35:24): So, Rob and I have been using this analogy of a referee. You need someone to leverage the AI, guide it along the problem, and then today, you still need somewhat of a referee to test its output, critique it, make sure it is what you want. Hallucination, it's a funny term to me at this point. It's just a known byproduct of how these technologies work. It's going to get better over time, but giving a right answer to the wrong question speaks to a really fundamental belief we have, which is knowing what you need or knowing what formula to write, what code to write is more important than knowing how to do it.

Brian Julius (00:36:02): Yes.

Justin Mannhardt (00:36:03): It's knowing what you're really trying to solve for, and it's a totally different set of skills to work with these things. I'm just curious, what skills or techniques or approaches have you found to really accelerate your use of these tools? I mean, your ability to work with them, what are some of your tips and tricks?

Brian Julius (00:36:22): It's going back to the fundamentals. I think so much of what I see when people talk about tools is, "How do you write this particular code, or how do you get this particular result?" It's really getting to the point where that value is just going to fall to zero soon. There's not going to be a need. I remember talking to my dad about when he was coming out of grad school. It used to be considered a skill that people would actually put on resumes that you could do square roots and cube roots and reciprocals and, quote, "complex calculations."

(00:37:00): That was viewed as a desirable set of skills for STEM. Now, if you put that on a resume, people would think you're insane. I think the same way about what we now talk about, writing DAX or CTEs in SQL or something. That's going to be "a skill" that nobody in five, 10 years is ever going to talk or brag about. It's not going to be something we do anymore. I'm looking to that day, and I think it's probably going to come a lot faster than that. If the rumors are true that Orion is going to be released by the end of the year or early next, we could easily find ourselves at the point where that model is better at all the languages we use than any of us.

(00:37:42): That could happen very soon, and so my focus has really been going back to the basics. One of the books I'm looking at on my shelf is The Art of Statistics. To me, it's the best book ever written on the subject. It deals with statistics, but it's really about being a data analyst. It's about the questions that you should be asking, the lens through which you should be looking at problems. I go back and read and reread that book constantly. It's foundational at this point, but I get something out of it every time I reread it.

(00:38:11): I think it's the type of thing that I really encourage other analysts to dig into, to build those foundational skills, to really understand, "How do we apply statistics in terms of not how do we do the tests, but which tests do we do? How do we interpret the results? How do we explain those results?" Then just let AI do what AI does best. AI can bang that R code out like nobody's business. I'm good at writing R code, and I just don't write it anymore. AI is just faster. What I can do in an hour, it can do in 60 seconds.

Justin Mannhardt (00:38:47): I used to write a lot of SQL. I'm trying to think of the last time I actually wrote a line of SQL, and something came up a few weeks ago. I just threw it into ChatGPT. I was like, "This isn't working. Why? Oh, here's why. Here's what to fix." Three years ago, I would've sat there carefully studying to figure out what's going on. I agree that things are going to move faster than our expectations. We're not going to be quite sure how things are going to stabilize. The market's going to take some time to react and adopt certain things, but one thing I'm curious about, Brian, is when do some of these languages just start to disappear?

(00:39:27): The reason I ask that is I think of something like DAX, for example. All of DAX is effectively syntactic sugar for a language that's actually running under the hood in VertiPaq. So, even the language we're writing today isn't the thing that actually runs. There are all these layers, and so I wonder, at what point does the DAX just not exist anymore, and a human says to something like Fabric or Power BI, "Hey, I need to analyze this problem, and I'm interested in these metrics"? It's like, "Sure, Brian, here you go. Here's the first analysis."

Brian Julius (00:39:56): In some respects, I think it raises the next-level question, which isn't even about DAX but about Power BI in general, which is, at what point does that become moot, in the sense of why are we still building dashboards when we can just query our data completely in real time? We're already getting very, very close to that. If you've played around with the new Claude engine, its data analysis and data visualization capabilities are really, really good. When you join that with an LRM instead of an LLM, we're quickly getting to the point where it's like, "Are dashboards even still relevant?" Is the idea of building a dashboard in advance with M and DAX still a thing we do? I really question whether it is.

Justin Mannhardt (00:40:40): I've wondered about this myself, so there's a type of dashboard that I would refer to as something that we just use because it's routine. There's a dashboard that gives us the key things we want to know about what's happening in our organization every day, every week, every month. We're always going to look at that. We're always going to have tools like that, just give us the baseline, but then naturally, you're looking at something like that metric is up or down, or it's not trending, or it is trending, and we start to ask more and more questions. So, over the years, we've tried to build more bespoke or more flexible tools with different question and answer paths that could help end users leverage the dashboard to get to those things.

(00:41:22): That, for sure, is an area where we could have massive improvement: analysts are constantly cycling through iterations and iterations of reporting for people, and there's a bottleneck effect. It's like, "At what point do we not need those tools anymore?" To me, it still seems like we are at a juncture today where, specifically in the Power BI and Fabric ecosystem, good data models still seem to matter, with data as clean as we can possibly get it within reason. Before, I would have said the right measures, but even then, it's like, "Okay, it's capable of deriving new calculations already."

(00:41:58): The nature of work is going to change. Claude's been really impressive. There's another project at Microsoft, Project Sophia. Have you played around with that much?

Brian Julius (00:42:06): A little bit.

Justin Mannhardt (00:42:07): When I think about the struggles of end users, they're like, "I don't know which dashboard to look at," or, "The dashboard isn't convenient for me to find what I'm looking for, and I end up calling Brian anyway and saying, 'Brian, can you help me with this?'" You insert AI into this picture, and it's, "Hey, I'm going to upload a CSV that I exported from some system." We'd say, "Oh, why are you exporting the CSV?" Then Claude or Sophia can lead you down a journey of visualizing data and answering your questions in real time.

(00:42:37): How does that change the value proposition for... Let's talk about a couple of audiences here. First, the practitioners. People like you and me, we've been building these tools. Our careers have been based on them. The way we do that work is going to change. Where's the value proposition going to shift in the future, in your assessment of what's been going on?

Brian Julius (00:42:56): I love Power BI. It's a tool I just enjoy using. I think it's a brilliantly designed tool. It's fun to use. It hits all the right neurons for me in terms of puzzle solving and all the things I enjoy. I'm not in any way looking for it to go away, but I don't really think of myself as a Power BI person. I really think of my skills as being the data analyst, being the person who knows how to ask the right questions and structure a business problem in a way that I can come up with actionable insights on that problem. That skill set is really tool independent, and I think that's why I've been more open to AI than some of the people in the community who see themselves as a Tableau person or a Power BI person.

(00:43:44): I think if you see yourself as a particular tool, then you're very threatened by anything that might take the place of that tool. Whereas if you just see yourself as an analyst and somebody who knows how to leverage domain knowledge in a business context, and answer questions, and turn that into outcomes that are either profitable or socially desirable, then you're not really tied to any tool or any particular outcome. So, it's helped me be more open to the idea that, "Hey, AI could be edging out this tool that I really love."

(00:44:21): I remember a post I did of what my tool set looked like in the '80s and what it looks like now. There's obviously nothing that survived. Even back then, I was using Lotus 1-2-3 as my spreadsheet. It was funny talking to a lot of the younger people that are in the community. They didn't even recognize the list of tools that I had. They're like, "I've never even heard of that."

Justin Mannhardt (00:44:42): Lotus, what's that?

Brian Julius (00:44:43): Yeah, they've never heard of it. They've never seen it. I still remember some of the slash keystroke shortcuts. It's just natural. The tools evolve. Tools change. Things get edged out by better things. That, I just view as the natural progression. I'm not really invested in one or the other, and it's not going to be anytime soon that I think AI is good enough at the judgment side of things. What questions should we be asking? How should we be asking them? How should we be interpreting the results? I think those skills eventually may get overtaken by AI, but I see that as being much, much further down the road than it overtaking the technical skills.

Justin Mannhardt (00:45:29): We did an episode last week where I shared just some of the use cases I've found value from. I use it quite frequently just as a thinking partner. I'm always trying to advance or solve issues that more or less involve uncontrollable dynamics: market forces and other human beings, the way we're set up as a team, challenges we're having with projects, or whatever the case might be. Just to reinforce the judgment part of what you were just saying, I've found it helpful for providing frameworks for thinking about problems. Its ability to really solve those problems, I've not felt that.

(00:46:07): It'll say, "Here's a productive way you could go. Hey, Justin, answering these questions might help you progress on your thinking here." It's right about that, but I'm curious how fast that would progress, versus outputting usable code. I love all the use cases where people write the snarky email and it's like, "Please make this sound professional." Generating and summarizing all that kind of stuff, it's getting quite good at. Now, you've adopted an approach with AI where you've been very methodical and thoughtful about how you're testing it and finding out what it is actually capable of.

(00:46:42): Only a certain percentage of the knowledge workforce has even tried a tool, and that number keeps going up, but I don't think it's crossed the 50% mark. What are some of the easiest ways to get started in your opinion now that you've been playing with these for so long?

Brian Julius (00:46:59): The person who's really the guiding light for me on that is Ethan Mollick at Wharton, who wrote the book Co-Intelligence, which my friends and family are just tired of hearing me talk about. I think it's the most thoughtful book I've ever read in terms of explaining what AI is and how we interact with it to a lay audience. It's one of those books that I think you can recommend to your parents or to your technical team members, and they'll each get a lot out of it. He talks about the jagged frontier of AI.

(00:47:34): There are things we think AI is good at that it's not, and there are things we don't think AI is good at that it's incredibly good at. One of the examples he gives is if you prompt AI, it can write a perfect JavaScript program to play tic-tac-toe, but if you actually play tic-tac-toe against it, it's terrible. That is not what you would expect. That's a great example of the jagged frontier. I think your point earlier on about hallucinations, I think one of the biggest mistakes that OpenAI made was releasing that three-point whatever version in November of 2022, because that version was not ready for prime time.

(00:48:17): I think it did hallucinate all over the map, and just gave you some crazy answers and stuff. A lot of the people that I talked to tried it, saw that and were like, "No, thanks," and their views on it are still colored by that. I'm like, "That is like saying I still don't drive because I tested the Model T, and it was terrible." It's not relevant to where we are now, and yet people still carry a lot of those biases. So, I think part of what I would really recommend to anybody diving in is dive in without prejudice, that whatever you think about AI from your earlier testing, it has come so far since that. A lot of it's really just understanding that, again, going back to Mollick, he characterized it as working with an alien.

(00:49:09): It's like we've met something from another planet, and it's friendly. It wants to help us, we think, but we don't really know what it is and how to communicate with it. For the post I did on LinkedIn last night, I had to do four little, almost cartoon-like images. I was doing it with... My wife was in the room watching me, and she's like, "Why are you adding in this wording in the prompt? Why does it need that?" "Well, I know from my interactions with it that if I don't add that in, I'm going to get this, and if I do add it in, it's going to move it in this direction." She's like, "Wow." She's like, "I never would've thought that would've made a difference."

(00:49:47): It's one of those things that I don't even think about anymore, because I know how to talk to it. It's like a coworker I've worked with now for two years. I know how they work. They know how I work. If you turn on the memory feature, it really does learn your style, and it knows an almost disturbingly large amount about you. A lot of it is really just creative experimentation, just being willing to probe it and poke at it and say, "Okay, I didn't get exactly what I wanted this time, but let me try it a different way. Let me talk to it differently. Maybe it's just not understanding what I'm asking."

(00:50:22): One of the things I really recommend people do, even if you don't have much need for AI-generated images, is play around with an image program. The one I use is Ideogram, because I think it's the most coherent in terms of adhering to the written prompts. What you'll see is it gives you this very tangible feedback on what you're asking it to do, and then you get a picture of what it interprets that as. You really get this very symbolic, very resonant result that shows where your instructions are being misinterpreted, or where you're not being clear. Sometimes you'll get this crazy result, and you'll say, "Wow, why did it do that?"

(00:51:04): Then you'll read what you told it, and you're like, "Oh, it did exactly what I asked. I was just not asking for the right thing." I'm a very visual learner, and so I see that in pictures. Generating cartoons of red pandas or Godzilla is not really germane to your job, but I think it really does help you understand how these models think. It's fun to play around with. At first, my wife used to see me play around with it, and she's like, "You're just goofing around. You're avoiding work, not doing work." I was like, "No, it really is teaching me how to prompt these things, and how to talk to them, and what they interpret and what they misinterpret."

(00:51:44): Your point about hallucinations, I think people really get that wrong. If you turn the temperature down to zero, these models just become very static if-then machines. You ask the same prompt, you get the exact same answer every time, and all you've done is basically create this monstrous switch-case statement. It's not useful. It's the randomness and that degree of unpredictability that lets you brainstorm with it, lets you say, "Okay, I have this idea for a product. Give me your best 20 names that encompass these characteristics that it has."

(00:52:21): Well, of the 20 it gives you, 14 of them will probably be crap. Three of them will probably be decent, and three of them will probably be great. It's that randomness and that unpredictability that lets you do that brainstorming, or that, if you say, "Okay, here's a business situation. Give me 10 risks that may impact this situation that I'm not thinking about," I really think that prompting and exchanging with a really top-notch model can help you recognize some of these black swan events that we would otherwise totally overlook.
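
A rough sketch of that temperature knob from R, using httr against the OpenAI chat completions REST API (the helper and the model name are assumptions; substitute whichever model and client you actually use): at temperature 0 the same prompt comes back essentially the same every time, while a higher temperature is what gives you the spread of 20 names worth sifting through.

```r
library(httr)

# Hypothetical helper: send one prompt and return n completions.
ask <- function(prompt, temperature, n = 3,
                model = "gpt-4o-mini") {      # model name is an assumption; swap in your own
  resp <- POST(
    url  = "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
    body = list(
      model       = model,
      messages    = list(list(role = "user", content = prompt)),
      temperature = temperature,              # 0 = near-deterministic, higher = more varied
      n           = n                         # several completions, to see the spread
    ),
    encode = "json"
  )
  sapply(content(resp)$choices, function(ch) ch$message$content)
}

prompt <- "Give me one name for a data analytics consulting product."

ask(prompt, temperature = 0)   # answers come back nearly identical
ask(prompt, temperature = 1)   # the same ask now produces genuinely different ideas
```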

Justin Mannhardt (00:52:56): That's incredible. Business leaders that I work with, either they're customers of ours or maybe they're just in my network, we have a relationship. Everybody is getting asked, and I think we'll be perpetually asked for a number of years, "What are we doing about AI?" We're even thinking about it in our consulting practice. How are we going to adapt to AI? What does that mean specifically for us? What are some of the things people in those types of roles ought to be considering and doing today? What should we be thinking about as leaders right now, Brian?

Brian Julius (00:53:29): I think a lot of companies are throwing a lot of money at AI that is going to end up basically just being shoveling cash into a furnace, because they're doing it because all the other cool kids are doing it. The idea is basically if you throw money and technology at a problem, it will help. I think we've both been around long enough to know that is not true. If you just willy-nilly throw money and technology at a problem, all you're going to do is spend a lot of money.

Justin Mannhardt (00:53:56): Right, and if that's your goal-

Brian Julius (00:53:58): Mission accomplished.

Justin Mannhardt (00:53:59): Right.

Brian Julius (00:54:00): But you look at who's really generated defensible business results from AI already, and it's the McKinseys. It's BCG. It's Amazon. It's Google. It's the companies that already had their data really well structured and their knowledge management systems in place. You can't shovel garbage into an AI and expect it to produce good results. The data you train on has to be quality, and so I would really say to anybody in a management or leadership position looking at AI, first of all, be clear on what outcome you're trying to achieve. If you're just trying to say, "We want AI to improve our profitability," well, what's the mechanism by which it's going to do that? What is the specific outcome you're looking for in that?

(00:54:48): Then I would say, is your data of a high enough quality to support that kind of outcome? I think for the vast majority of organizations, the answer is no. One of the things I was heavily involved in, in government, was FOIA requests and information management, and basically the directives that would come from the White House in terms of different standards that our data would have to meet: in terms of metadata, in terms of documentation, in terms of organization, in terms of conversion from paper to electronic. There are a whole series of directives that the government has to meet, and meeting those is very difficult. It's tedious. It's expensive. It's work that people just don't like to do.

Justin Mannhardt (00:55:35): Right. Maybe there's one person that wakes up in the morning, and says, "Boy, I can't wait to get today going."

Brian Julius (00:55:40): Those people make a very, very good living and are in tremendous demand, because it's a necessary skill set that not a lot of people have and not a lot of people enjoy, but I got to know a lot of people in the information management community. I know a lot of people who do those assessments in business and government. They come in, look at your information management practices, and look at how far you are from the ideal standards. The vast majority, even of successful businesses, are way, way far from ideal on this. I think if you take data that is not organized well and is not of known quality, you're going to just end up spending a lot of money for low results on AI.

(00:56:27): I think it's really understanding that before you can jump in a Formula One race car, you need to learn how to drive on the beltway.

Justin Mannhardt (00:56:35): Yeah. I saw an article, I think it came out this week from HBR, Harvard Business Review, about AI's disruption of startups. It's a similar idea to what you were saying about the organizations that are truly generating lots of tangible business results: big firms already had lots of structured data available, plus knowledge bases, policies, documents, that type of information. One of the points this article was making is how, prior to AI, anybody could start a digital business. If you had a good idea that had market fit, that solved a real problem for people, you could build that product, deploy it, and scale it very quickly.

(00:57:21): A lot of people fail, of course, but a lot of people were very successful building great digital-first businesses. The challenge with AI today is, what does AI need to work well? It needs vast amounts of information to be trained on. It needs vast amounts of compute power. So you see a shift there toward the big players, Microsoft, Google, Amazon, Meta, et cetera, because they have those things. We're just in this period where how AI lands, where it lands, and what value it delivers is something I think we're all collectively exploring at the same time.

(00:57:57): At the same time, P3 is not a huge company. We have fewer than 100 employees, and if you think about how much information we have at P3 that we could train a model on for a bespoke purpose, a general model probably already knows more about that topic area just from the vastness of all its other information. Your point's really insightful, Brian: you need to think about what information you have that is truly novel, valuable, and of a quality and consistency that makes it worth doing something on your own here. But it's a wild time.

Brian Julius (00:58:30): There are three ways to improve your AI results, one of which is to improve your base model, and none of us are going to be doing that. OpenAI has already spent $1.3 billion or more training theirs.

Justin Mannhardt (00:58:42): I don't have 1.3 billion, Brian, so we've got to find a different idea.

Brian Julius (00:58:46): We've got to go down the list, because that one is not going to be something we do, but the other two are things we can do. One of which is that you can prompt the AI better. Going back to Mollick, this is where he says, "Individuals are getting much better return on investment from AI than corporations and organizations." Simply by probing that jagged frontier and learning how to prompt, you can really increase the value of the results you get out of AI. The second thing you can do is think about, "What knowledge do I have that's unique, that the system doesn't have?"

(00:59:23): If you understand how they derive these training corpora, one of the things I've done is develop a series of five analytics-related custom GPTs, what I call my Kaijus, like Rodan and Mothra and Godzilla. They each have a specific purpose. I'm looking over here at eight or nine shelves of data books I've collected over the years, the best books on every topic. I've got the best R books. I've got the best DAX books. I've got the best RegEx books. For RegEx, I built a custom GPT that included all of the best RegEx books. I got them digitized, took them from Kindle in digital form, and built them into a custom GPT along with the best cheat sheets and the best blog articles.

(01:00:13): That system has not missed a RegEx question in five months. It solved RegEx. I try to throw the craziest conditional lookbehinds at it, and it gets everything right. By bringing that improved knowledge base to the AI, you can get better results. That's something that I think is definitely feasible at the individual level. Much harder to do at the corporate level. Reading Mollick, he says that right now what you're seeing is individuals who really have become the new stars of analytics, of dealing with AI, of working in this co-intelligence mode, people who are just particularly skilled at it. He's one of the best; his creativity in the way he applies it is remarkable.
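Stepping back to the RegEx point for a moment: for listeners less familiar with these constructs, here's a minimal sketch in Python of a lookbehind assertion and a conditional group, two of the kinds of patterns Brian describes testing his custom GPT with. The specific patterns and test strings are illustrative examples, not ones from the episode.

```python
import re

# Lookbehind: capture amounts only when preceded by "USD ".
text = "USD 120, EUR 95, USD 40"
print(re.findall(r"(?<=USD )\d+", text))  # ['120', '40']

# Conditional group: if an opening "<" was captured (group 1),
# require a matching ">"; otherwise accept a bare word.
pattern = re.compile(r"(<)?(\w+)(?(1)>)")
print(pattern.fullmatch("<user>").group(2))  # 'user'
print(pattern.fullmatch("user").group(2))    # 'user'
print(pattern.fullmatch("<user"))            # None (unbalanced)
```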

(01:00:59): If you follow him on LinkedIn, he posts three times a day, and I'm just amazed at the things he's able to get out of it just by being a really smart, thoughtful, creative guy. I think there's a lesson in that, which is: if you're a company that's trying to get a lot out of AI, don't go throwing big money into enterprise installations. Train your people to be really great users of AI rather than building custom systems. As you say, there are only five companies building things that really matter. The bar for entry is so high that there probably aren't going to be a lot of new players coming in.

Justin Mannhardt (01:01:40): Yeah, it's hard to see in the near term anyways.

Brian Julius (01:01:43): Yeah, there's a few. I mean, Perplexity was one that came out of nowhere that I think is amazing. Remains to be seen whether that gets eaten up by one of the bigger fish, but there are a few. Mistral, I think, has done some really neat stuff. We're now two years in, and I can't think of any small company tools that have made a big impact in the work I do.

Justin Mannhardt (01:02:04): Even with the big players, using Microsoft as an example, Microsoft has announced Copilot in all its various forms. There's a Copilot for Office, Copilot in Fabric, Copilot in GitHub. There's Copilots everywhere. It was just recently, when Microsoft announced wave two for Copilot, that the base models Copilot runs on moved from 3.5 to 4.0. Meanwhile, we're already out here playing with o1 and these new models, and models from other players, that are already far surpassing where 4.0 is. Something just got announced in GitHub where you can now choose which model you want.

Brian Julius (01:02:46): I saw that this morning.

Justin Mannhardt (01:02:48): Hey, do you want your Copilot to run on Claude, or do you want it to run on ChatGPT or some of these other models that are out there? As a data analyst, you're probably not going to get into building your own base models. I think you're just spot on, Brian: the thing to focus on right now as a leader is how you can best equip your people to use, and get some value out of, these technologies. I think it's going to be some time before there are obvious enterprise products where it's like, "Oh, we use this AI product for this purpose, and everybody uses it."

(01:03:21): I think Ethan's spot on there too, that the value's at the individual level. I mean, that's where I'm getting value from. It's the ways I'm building habits to integrate it into my own workflow and the unique things that I need to do in my job. So, really good advice.

Brian Julius (01:03:34): With both of us coming from the Power BI universe, that's a really foreign concept in Power BI, because individuals are building enormously impactful enterprise capabilities in Power BI. I mean, you look at James Dales with Icon Map. It is, I believe, orders of magnitude superior from a mapping standpoint to anything that's native in Power BI. You look at Daniel Otykier with Tabular Editor and Tabular Editor 3, or Daniel Marsh-Patrick and Deneb. Individuals are in some ways miraculously writing tools that are better than those of a company with a trillion-dollar market valuation. That's incredible. That is never going to happen in AI.

Justin Mannhardt (01:04:23): Not in the way it's... You need lots of information. You need lots of compute power. I'm sorry, Brian, we don't have lots of compute power. We got to rent it from somebody else.

Brian Julius (01:04:30): Right.

Justin Mannhardt (01:04:33): Brian, one of the things that really resonated with me as a sign-off: if you got burned early on, when these tools were first coming out, by bad results or some skepticism, you definitely want to find a way to let that prejudice go and jump back in, because things are a lot better and likely to get even further advanced very soon. These last few tips we've been talking about are really about understanding how you as an individual can get into the process here, and there are a lot of ways you can start building habits around these tools and start gaining value just for yourself.

(01:05:07): Brian, you're always putting out great content on the ways you're getting great results from some of these tools in the Power BI space and otherwise. Thank you so much, man. It was a pleasure reconnecting with you. It's so cool to see your journey from one of the people in an advanced DAX class in DC to being a prominent voice on LinkedIn, sharing everything you're learning about. It's really cool, man.

Brian Julius (01:05:28): For me as well, and I have to thank you and P3. You guys were really the ones who started me on this journey. I look back to those days of sitting starry-eyed in those classes trying to memorize what Rob called the greatest formula ever. I still have the book I won from that course for having been able to recite that formula from memory. I really look back on that as having provided a great foundation for me. Honestly, the work you guys do is terrific. For me, it really helped launch me back into a field that I loved, and set me on a really fascinating course that I never could have predicted.

Justin Mannhardt (01:06:09): Well, that's great, man. I really enjoyed chatting with you.

Brian Julius (01:06:12): Likewise.

Rob Collie (01:06:12): Thanks for listening to The Raw Data by P3 Adaptive Podcast. Let the experts at P3 Adaptive help your business. Just go to p3adaptive.com. Have a day today.

