The Power BI Fundamentals Behind Expert Development *and* AI Simplicity, w/ Microsoft’s Rui Romano

Rob Collie

Founder and CEO

Justin Mannhardt

Chief Customer Officer

Everyone keeps asking whether AI kills Power BI or makes it stronger. Rui Romano flips that entire question on its head. As the Microsoft PM behind PBIP, TMDL, and the file format work that rebuilt Power BI's foundation, he explains how the platform became one of the most AI-ready systems in analytics – almost by accident, but not quite. His team was solving problems for real developers who were tired of unsupported workarounds and offshore relay races. They weren't training agents. But the work they did means AI now feels native instead of duct-taped on.

What we learned was that the semantic model is still the highest ground in this whole space. While other tools let AI stumble through raw tables and pray the math holds up, a proper model gives AI the one thing it absolutely cannot fake: context. Relationships. Business logic that works at every level of granularity without falling apart. Rui breaks down why that matters now more than ever, why all the hardening work his team did keeps your models from exploding when an agent gets ambitious, and why the future of BI isn’t about cranking out another hundred pixel-perfect dashboards. It’s about fast iteration, lower friction, and answers you can trust at scale. Dashboards still matter – but only the ones people use.

This conversation goes deep on architecture, not hype. Rui talks about what’s changing right now, what still needs work, and why natural language will eventually beat drag-and-drop for a lot of what we do today. If you’ve been wondering whether to invest in real semantic modeling or just let AI figure it out from scratch every single time, this episode makes the case for why foundations always win. Always.

Listen in and get ahead of the shift. And if the episode lands for you, leave us a review to help other folks find the show.

Episode Transcript

Speaker 1 (00:00:04): Welcome to Raw Data with Rob Collie. Real talk about AI and data for business impact. And now CEO and founder of P3 Adaptive, your host, Rob Collie.

Rob Collie (00:00:20): Hello, friends. Today we welcome a very special guest, Rui Romano of Microsoft. This conversation was a bit more technical than our average episode, which some of you will of course welcome. But for the rest of the audience, I want to give you a bit of a decoder ring because the technical stuff is much simpler than it sounds once you have a framework. And because there are some valuable non-technical things you can learn, both about AI and Power BI. We've all heard now, or perhaps even experienced firsthand, the amazing ability of LLMs to write code. I myself have written three Python applications, actually four, in the past 10 days, which is four more Python applications than I had written lifetime to date up until now. And I did that with the help of an LLM through Claude Code. Well, the second L in LLM stands for language.

(00:01:10): And the code that makes up web and mobile applications and all other software is very much a language. Not a human communication language like English, but it still follows rules and sequences and flows just as much as something like English does. And in fact, code is quite a bit more structured than English. AI became good at writing code before it got good at reading and writing English, because code is actually quite a bit more logical and predictable than the messy language of English, and the even messier set of human ideas that we try to convey with English. AI was born to write code. Code is easy mode for AI. And Power BI definitely uses code. It uses languages. DAX and M, for instance, very much pass the "is a language" test, just as clearly as things like Python and HTML and C# do. But when Power BI V1 was first launched back in 2014, the team behind it wasn't very concerned with making its code readable as code.

(00:02:15): If you cracked open a PBIX file back then, or even some number of years later, you would've seen a bunch of gibberish. There was no reason back then to make the contents of a Power BI file readable. A computer program, Power BI, created that gibberish whenever you press the save button. And that same program was the only thing that needed to understand it when you reopened one of those files. No one was expecting back then for AI to come along and suddenly need to be able to read all the code. And readability is an important distinction. Sure, if a non-developer looks at some Python code, they might not understand what it's doing, but they can still certainly read it. It's just text. Like you might not know what a function named hasattr does, but you can read it. Reading code that you don't understand is like being an English-only speaker and reading something written in French.

(00:03:08): You don't understand it, but you can still read it. The gibberish file format of a PBIX file did not pass that test. You couldn't even begin to read it. Python, despite its mysteries, is 1,000 times more readable than the dense sea of symbols you'd see if you managed to crack open the original Power BI format. So, even though there was code in there, like DAX formulas and Power Query M scripts, it was hidden in the sea of gibberish. And there were also other parts of the file, like relationships and report definitions and other important settings, that would have still been gibberish even if you could isolate them. The original Power BI file format was built for a different world, and it made 100% sense at the time to do things the way that they did. But as Power BI became increasingly popular and eventually came to dominate the world of BI, it became clear that in larger organizations, that Power BI file format was an obstacle to collaboration.

(00:04:03): Teams of developers working on Python code together can be working on lots of different parts of the code, lots of different parts of the application, at the same time without stomping on each other's changes, because the overall project consists of lots of smaller files, each containing readable, textual Python code. Developer one can work on Python files A, B, and C while developer two works on files X, Y, and Z. The chaos is manageable because not everything is stored in a single file and because each separate file is easy-to-read text. Power BI, in order to solve this collaboration problem, needed to become more like Python. A single PBIX file full of gibberish needed to become something you could break out into lots of smaller, non-gibberish files. So, long before AI came along and also had its opinion on the topic, Rui and company at Microsoft set off to tackle this huge challenge.

(00:04:59): They've been working on it for years. It involves all kinds of acronyms. You'll hear us say TMDL, T-M-D-L, a lot. So, when you hear tech terms like that in this episode, just kind of squint your ears and go, "Oh, they're talking about making a version of the Power BI format that is made up of lots of little readable, non-gibberish files." And all of that effort was originally aimed at the deepest end of the Power BI pool at the time: enterprise-grade, collaborative, version-controlled development.
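As a bit of extra decoder ring for the TMDL mentions to come, here is a rough sketch of what one of those "little readable, non-gibberish files" looks like. The table, column, and measure names here are invented for illustration, and exact property spellings can vary between TMDL versions:

```
table Sales

	measure 'Total Sales' = SUM(Sales[Amount])
		formatString: #,0.00

	column Amount
		dataType: double
		summarizeBy: sum
```

Compare that to the old binary blob: a human, a diff tool, or an LLM can read this at a glance and see exactly which measure or column changed between two versions.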

(00:05:25): Most mid-market Power BI deployments take a while to get to needing something like that, and honestly, many of them never do. But for Microsoft's enterprise customers, this was a must. It just so happens that years after that effort began, LLMs came along and also needed the same thing. And this is how Rui has found himself central to two very different stories: both the enterprise developer story and the "I want to sit down with AI and create a Power BI model and reports" story. Kind of neat. The deepest end of the pool on one end and the most human, conversational end on the other, both linked by a common need for a clear, readable, Lego-brick-style file format.

(00:06:07): I think that's enough context to unlock the magic of this conversation for everyone, because the best magic is the simple, understandable kind. And there's a lot of magic going on here. So, let's get into it.

(00:06:21): Welcome to the show, Rui Romano. How are you today, sir?

Rui Romano (00:06:25): Amazing. Thank you for having me. It's a pleasure to be here. I'm a fan of the show.

Rob Collie (00:06:29): That's very flattering. We're a fan of yours.

Rui Romano (00:06:32): Thank you.

Rob Collie (00:06:32): You're quite a hot commodity on LinkedIn these days, the things that you're working on and you yourself. We're very keenly interested in the things that you're up to. But before we dive into that, why don't you just give us a quick summary of how you find yourself in your current role? What's your condensed life and career path that leads you to where you are today?

Rui Romano (00:06:52): Yes. I'm a PM on the Power BI team within Fabric. I focus mostly on pro developer features. So my ultimate goal is to make the pro developer teams and the pro developer persona that is working with Power BI in Fabric successful using our tools. So, I drive features like the Power BI project, TMDL, PBIR, and recently the MCP. It's all about improving the experience of semantic model development, making it a little bit more enjoyable – more about describing what you need and then having an agent help you build it. Not necessarily do all the work for you. I think that we will probably get there, but it's still a journey to be made. But the thing that is amazing about these features, especially with agentic development and using MCPs or just using context, is that if you are really a good expert, you can just put your brain into some sort of context and then get that agent to do exactly as you would do it yourself.

(00:07:55): Actually, the thing that really impressed me was when I realized that I was able to put in my instructions, my style, my naming conventions, my way of doing a certain type of development, and then just see it going and implementing that. And before joining Microsoft, I worked in consulting for 15 years, from developer to manager of the data and BI team. And I always worked with Microsoft. So, I was a fan of yours when you were doing PowerPivotPro.

Rob Collie (00:08:24): Wow.

Rui Romano (00:08:25): I even took my own spin on it. I called it at the time Start Small, Grow Big, because one of the things that was really mind-blowing to me about Power BI, and even PowerPivot at the time, was when I realized that I could have an Analysis Services instance in the cloud in a couple of minutes. And I knew, because I had been building BI solutions for the past, I don't know, five years since I started my career – I knew that just setting up something like an Analysis Services server would take you a week or more.

Rob Collie (00:08:58): Just installing.

Rui Romano (00:08:59): Yeah, just installing and thinking about the hardware. And as a Power BI developer – a lot of people, I would say the big majority of Power BI developers, don't even realize how hard life was at that time. It was like 70% of the focus on technical stuff that the business did not care about and 30% on the business. And Power BI changed that. Power BI allowed us to say, okay, let's focus on the business. We can start small. We can start with a POC. We can focus on the value. And then, because we also had a very technical team, we could scale that up. Okay, now let's take this Power BI and let's scale it up to Azure or SQL or whatever – a SQL Server on-prem or Azure in the cloud. And that Start Small, Grow Big was a theme in the company.

(00:09:46): And I was seeing you doing the same thing. Actually, a lot of it drew inspiration from your stories – really focusing on the value and the delivery of the value, and not just on the technical jargon of building a BI solution.

Rob Collie (00:09:59): It's like the parallel version of what we call Faucets First here at P3. Yep. So, you got your real-world experience before you went to Microsoft to design software for the world. I did it in reverse order. I designed software for the world at Microsoft and then I went out and got my real-world experience. I think that probably it would've been better in reverse order. That's really cool though, right? That you were in the BI world. So, it's not like you're just picking up some domain at Microsoft like, "Oh, we need someone to go work on this." No, you walked that walk, so you know this space. And that's one of the things when you're developing software, when you work at Microsoft, you so rarely get the time to go get that sort of real-world experience that would help calibrate you and ground you. The job is just too demanding and you're too busy building the car to drive it. I would've loved in hindsight, to have driven the car a little bit more as it were.

Rui Romano (00:10:52): Yeah, 100%. I was very passionate about multidimensional, which was before Tableau and before Power BI. I always said that multidimensional was better than Tableau in a way, because it was more professional. You could tune everything. We try to forget how hard it was when, for example, okay, let's bring a new attribute to a dimension. You had to reprocess the whole thing, and that could take a week to make sure that you processed everything with minimal impact to whatever was running. And Power BI and Tableau changed that and just simplified things a lot, and the power of DAX is outstanding. The flexibility and the things that you can do with it, in comparison to having to build and think through everything from the design in multidimensional – the amount of work that you had to put in to do that, and also the lack of flexibility that you had – it is a big difference these days.

(00:11:52): Now with AI, it's taking this to the next level, because now it's like that one-way door versus two-way door problem from Amazon, right? Because now with AI, you don't have that problem of building a project for three months and then you need to make a change or a big refactoring – you don't want to go back, so you kind of stick with it. Now, okay, let's just refactor the whole thing and follow this new approach. If there is something that AI agents are really good at, it's understanding the context from your code, and you can just say, now refactor following this new approach, and implement three variants of the same thing. Which is kind of the same thing as when Power BI came along. You could just implement a lot more projects with much better speed and efficiency. So you don't need to make all those hard decisions upfront.

(00:12:38): You could just delay some of them and just, "Okay, let's deliver this and let's learn from it. Let's put this in the hands of the business and then come back because it's not that expensive anymore."

Rob Collie (00:12:48): Just like you pointed out that the Power BI approach to things relative to its precursor, people who don't know what multidimensional was, don't worry about it. It was hard. The notion that you could iterate a lot faster in Power BI was one of many reasons why it was such a better fit for the real world. But now you're talking about a whole additional level of iteration, which is I might be deeply into something, I've got a lot of inertia in a particular direction, I've built something relatively sophisticated and deep and then realized that like, "Wow, maybe there's a whole different twist on all of this." And even with Power BI, I wouldn't explore that, but that kind of bulk iteration is still, when you're doing manual Power BI work, you're going to sort of unconsciously deflect from that idea. You might see that idea and it's going to be better for a moment, a split second in your head, you know that this other approach would be better, but you subconsciously talk yourself out of it before you even get far down the road.

Nah, you just bounce off. Whereas with AI development, you might not bounce off of it. True story: I had to pause some Claude Code sessions on this laptop before I started this podcast, because one of the MCP servers was running Chrome windows to test itself, and I couldn't have that happening while I'm using Chrome.

Justin Mannhardt (00:14:18): That's not distracting at all.

Rob Collie (00:14:19): Right. And every now and then it gets into a state, it's like, "Oh, yeah, I need to kill all the Chrome windows now." So, I'm familiar.

Justin Mannhardt (00:14:26): Rui, following the work you've been doing over the last few years, PBIP was a big thing, GitHub integration for Fabric and workspaces, MCPs. I'm curious, you go back a couple of years ago, how much of what you've been working on has been a concerted effort to make the definition of what we're doing in Power BI accessible to AI because that was a challenge before, right?

Rui Romano (00:14:54): So, I would be lying if I told you that when we started the PBIP – which was when I joined, well, not Microsoft, because I started in the CAT team and then switched to being a feature PM, and the first feature I had to drive was PBIP and TMDL. Which is an absolute honor, because before joining Microsoft, as a developer, I was known for building hacks in Power BI to do this stuff. So, having the opportunity to really drive the thing that put me out of the market as a presenter doing sessions, it's great for me. But I would be lying if I said that when we designed the PBIP, the goal was AI. It was not, because AI was not even a thing back then – not the thing it is today. But there was something that was always clear in my mind, and in my team as well.

(00:15:42): I had many chats with Christian Wade about this. These are the fundamentals. Having good code formats will always produce good outcomes. Even though there was no AI at the time, for example, one of the things about the PBIP that I always felt really strongly about – and it applies to the PBIR as well as to TMDL – is just the ability to copy things around. We try to make it as easy as possible. If you want to copy a visual and put the same visual in the same report, you just copy the folder to all the pages and you are done. If you want to remove a page, you just drop the page folder. You want to copy a table, you just copy the table. If you want to drop the relationships, you drop the relationships file. So, just a file I/O operation is enough for you to copy things around. Which makes it accessible, not only for the pro devs – people that care about Git – because PBIP is not just for Git, although one of the main goals was to have a good source control story. It's like the X-ray of Power BI. It gives you the code behind everything, and you can just move things around. And at the time, one of the primary goals was also to finally expose, in a documented and supported way, the code of Power BI, so you can make changes to it. For the first time we can tell you: it is supported for you to go to those files and make changes. And if you make a change that is valid against the schema, you are in supported land. You are not in unsupported land, which was what happened before. So, those things are fundamentals.
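To make that "file I/O is enough" idea concrete, a PBIP folder is laid out roughly like the sketch below. The names are illustrative only; the exact folder and file names vary by PBIP version and settings:

```
MyReport.pbip
MyReport.SemanticModel/
    definition/
        model.tmdl
        relationships.tmdl
        tables/
            Sales.tmdl
            Calendar.tmdl
MyReport.Report/
    definition/
        pages/
            Overview/
                page.json
                visuals/
                    salesChart/
                        visual.json
```

Copying Sales.tmdl into another model's tables folder copies the table; deleting relationships.tmdl drops the relationships; duplicating a visual's folder duplicates that visual – exactly the plain file operations Rui describes.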

(00:17:17): And let me tell you a story. That is one thing we had to do, because now that we are exposing the code, we can no longer say, "Hey, you used Tabular Editor to make this change, and if you broke things, it's not supported." So now you can make those changes by changing things in VS Code, and also inside of the product with TMDL view. We also brought the TMDL language inside of desktop, so you can, for example, create a partition or multiple partitions in a table using TMDL view. Before, to create multiple partitions, you had to go to unsupported land and use Tabular Editor – and multiple partitions were not supported by Power BI, so even though Tabular Editor lets you do that, desktop would just crash.
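For context, multiple partitions in TMDL look roughly like this. This is a hypothetical sketch – the table name, file paths, M queries, and exact property spellings are illustrative, not taken from real docs:

```
table Sales

	partition Sales-2023 = m
		mode: import
		source =
				let
				    Source = Csv.Document(File.Contents("C:\data\sales-2023.csv"))
				in
				    Source

	partition Sales-2024 = m
		mode: import
		source =
				let
				    Source = Csv.Document(File.Contents("C:\data\sales-2024.csv"))
				in
				    Source
```

This is the kind of definition that previously forced you into an external tool; with TMDL view it lives in plain text that desktop itself can edit and validate.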

(00:18:05): So, we had to do a ton of investment in what we call desktop hardening, which basically means Power BI Desktop and Power BI tools need to be okay getting changes from tools that are not from Power BI and not blow up. They need to accept that. And the thing about the hardening work – again, the main driver for it was to make the experience with TMDL view better and avoid the frowns, and at the same time allow external tools like Tabular Editor to do more and be more powerful. But the story behind this is that without that hardening work, which was not built with AI as its main purpose, there would be no MCP today. The MCP wouldn't have existed, because as soon as you connected an MCP and an agent, and the agent started going crazy and doing renames, you would break the whole thing.

(00:19:04): For example, if you did a rename of a table before that hardening work, your Power BI Desktop would be broken, or your model would be broken, because the query behind the table was no longer aligned with the names of the columns, and desktop somehow always kept those things in sync. In the hardening work, we stopped making that a requirement. It's not a requirement anymore. Now you can have a column with a different name from the name in the Power Query expression. That's fine. We only keep things in sync if they have the same name. So, without that work, there would be no MCP. MCPs wouldn't have been possible for the Power BI semantic model.

(00:19:41): We didn't have in mind the things that are possible. MCP servers or building MCP server and agentic development because it was not even a thing, but that's the advantage of working in these fundamentals is that then things just play out. Now, yeah, we have code and guess what? AI is great at writing code. AI is great at understanding code and understanding patterns. And without that code interface, none of that would have been possible. And of course we can make things a lot better and we need to continue improving in those code languages, making it more and more accessible.

(00:20:13): And as the documentation out there increases, and the scenarios, and people sharing their own examples with code, the LLMs will just get better and better at understanding this. And maybe we won't even need the MCP to have an agent make changes to the semantic model – which is kind of already true. If you want to make a small change, the AI can understand the context of the TMDL, even though it's not a language that has existed for many years, and make changes to it. It will probably hallucinate a little bit more in comparison with using the MCP. We also have the APIs, which is also something that is not talked about a lot, but we also did a lot of investment in the Fabric APIs that work with Power BI too. You don't need to have a Fabric capacity for those APIs to work.

(00:20:59): We call them the CRUD – create, read, update, and delete. So, essentially it's an API, but for the first time, again, we have in Fabric and in Power BI an API where you can get the code definition of an item, and you can update or create an item with a code definition. And again, those APIs are critical for AI. If you want to be in a position where you are making changes to a semantic model or a report and you want to show that report to an agent, you need to deploy it to the service. So, you can have an agent making a change and at the same time viewing the results of that change, iterating and looping on that.
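The item-definition APIs Rui describes exchange an item's code as base64-encoded "parts." Here is a minimal Python sketch of that payload shape, based on my reading of the Fabric REST documentation – the endpoint route, IDs, and token in the commented call are placeholders, and the exact payload shape may differ from the current API:

```python
import base64

def build_definition_payload(parts: dict[str, str]) -> dict:
    """Wrap {path: text} parts into the shape the update/create
    definition endpoints expect: each part base64-encoded inline."""
    return {
        "definition": {
            "parts": [
                {
                    "path": path,
                    "payload": base64.b64encode(text.encode("utf-8")).decode("ascii"),
                    "payloadType": "InlineBase64",
                }
                for path, text in parts.items()
            ]
        }
    }

# One TMDL part standing in for a semantic model's table definition:
payload = build_definition_payload({
    "definition/tables/Sales.tmdl":
        "table Sales\n\tmeasure 'Total Sales' = SUM(Sales[Amount])\n",
})
print(payload["definition"]["parts"][0]["path"])  # definition/tables/Sales.tmdl

# Hypothetical call shape (workspace/item IDs, token, and route are placeholders):
# requests.post(
#     f"https://api.fabric.microsoft.com/v1/workspaces/{ws_id}"
#     f"/items/{item_id}/updateDefinition",
#     headers={"Authorization": f"Bearer {token}"},
#     json=payload,
# )
```

The point is the round trip: an agent can pull an item's definition as readable text, edit it, push it back, and then look at the deployed result – the loop Rui describes.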

Rob Collie (00:21:37): I love the full-circle irony of it, Rui. You and the team you work with set out to make the pro developer experience – the most advanced type of work – better. Big, team-wide development with check-in and check-out, where everyone has their own little corner of the Power BI model they're working on. Things that most businesses don't run into. These are the high-end features, right? And it ends up coming full circle and enabling some of the most low-end experiences, like sitting down to chat and building a data model and a set of Power BI reports from scratch with a chat interface. The high-end features ended up enabling this other, lowest-of-the-low-end experience, while at the same time, the pros are going to love this too – the ability to make wholesale changes and just refactor things.

Rui Romano (00:22:31): Yeah.

Rob Collie (00:22:32): That foundational work paid off in unexpected ways.

Rui Romano (00:22:36): That should have been there since the beginning. We should have made this since the beginning. Now, it's also true that maybe if we had, we couldn't have moved as fast as we did because as soon as you make a file format public, people will rely on that. You cannot break it. You need to have versioning. So, things will slow down.

Rob Collie (00:22:54): And at the beginning of the Power BI experiment, building Power BI, there wouldn't have been nearly enough knowledge about how it was going to evolve and how it was going to be used and all of that.

(00:23:03): Whatever file format had been come up with at that point would probably have been the wrong choice, and you'd have found yourself trapped. As inconvenient as it might be, this is really the way it had to be.

Rui Romano (00:23:15): Yeah. And for the business, what is the business value in getting CI/CD and getting these code formats? It's an indirect business value, because the business doesn't care if you are using CI/CD, if you have a code format, if you write scripts to make the changes. But they will care about getting more reliable deliveries. They will care about asking for a change that needs a full refactoring and somehow getting it – not as a three-month project, but as a few-weeks project, because you can automate things. And business leaders and team leaders will also care about having bigger teams working on a project, larger teams with more people, because without this stuff, the PBIP and the source control, it was a nightmare to have team collaboration. Or you had to be in unsupported land.

Justin Mannhardt (00:24:04): Oh yeah.

Rui Romano (00:24:05): Most of the customers that really cared, the ones working with a BI team, were already doing that. It was unavoidable. Which, by the way – I also want to make it clear that everything we are doing is not to put those external tools out of business. That's not the goal. Actually, those external tools become more and more powerful. They can do more stuff. Now, there's a thing I've been very passionate about since I joined the team, maybe because I felt the pain: I literally hated it whenever I joined a meeting with a customer and they said, "Hey, I want to use partitioning and I'm forced to use an external tool. But this is a supported feature. I need partitions. I need to do those things because otherwise my semantic model is not going to scale, and now I'm forced to go to an external tool."

(00:24:55): To me, that was like needles in the fingers. I was like, "Yeah, that does not make any sense." And fortunately, I think today there is no case where you are forced to use an external tool. Now, if you like and prefer to use an external tool, that's a different story. If you get value from using a tool like Tabular Editor, ALM Toolkit, or whatever the tool, go for it. It's your choice, but you are not forced. Those tools exist to provide a certain type of value for certain personas, while Power BI needs to take more of a generalist approach, especially in our own tools.

(00:25:34): And we want to – and this is also the cycle of making features that somehow don't scare away our self-service users, which are the big majority of our user base, but at the same time provide some of those pro dev, more advanced features like TMDL view. But again, we are not forcing anyone, and that's the thing. I'm really happy that, since the hardening work and with features like the PBIP and the TMDL view, you don't need to go to an external tool to achieve something.

Justin Mannhardt (00:26:06): Well, as someone who's been on the forced to use other tools side of the equation in the past, I'm very thankful for all the work you and your team... Because it was true, especially if you were building large models that needed to scale, you got to a point where just working only in desktop was unwieldy.

Rob Collie (00:26:24): Rui, did you happen to catch, back in the day, the handful of times I mentioned this on the blog? I just now realized that something I used to do in Power Pivot was this really, really, really clumsy version of a lot of the things we're talking about here. So in the original Power Pivot file format, it was all squirreled away in a particular corner of the Excel file. And Excel files are just zip files. Most people don't know this, but you can rename an Excel file to .zip and then open it up and start looking through the folders. And there were two things in the Power Pivot corner of the file. One was called item1.data. That was really the Power Pivot file. What you think of as the PBIX format today, that was this big binary blob. You couldn't open that and look at it and see anything sensible.

(00:27:12): But there was this other thing, I forget what it was called, but it was this full text... It was a text file, this giant XMLA backup file. And it was just for resilience. It was never used. It was just like a backup copy, like the ingredients of what was in the item1.data file. It was only there in case the item1.data file got corrupted or something. And if it got corrupted, it could sort of rehydrate itself from the instructions list that was in this XMLA file. So I found myself in situations like pro dev essentially situations where like, "Oh my God, I know I have to go write a hundred measures now that all follow this same pattern over and over and over again," and just enough to give you carpal tunnel, just frustrating as hell. It would take many, many, many, many hours.

(00:28:02): And what I discovered was I could edit that XMLA file by hand, very carefully, but you couldn't get a single character wrong. You had to make sure everything was perfect, but you could edit that by hand. So I could write a bunch of formulas in Excel, copy them out as text strings, slam them into this XMLA file, and save it back into the zip. But then I had to overwrite the item1.data file with a zero-byte file. I couldn't delete it; I had to overwrite it with an empty file in the Excel file, then rename the whole thing back to .xlsx and launch it. Power Pivot would look and say, "Oh, my item1.data is corrupted." And then it would rehydrate from my new hand-hacked instructions, and now I'd have my hundred measures in there.
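Rob's old hack can be sketched in modern terms with Python's zipfile module. This is purely illustrative of the mechanics – the member paths inside the workbook are approximate, from memory, and the demo uses a toy archive standing in for a real .xlsx:

```python
import zipfile

def zero_out_member(src_path: str, dst_path: str, member_to_empty: str) -> None:
    """Copy a zip archive, writing one member as zero bytes.

    An .xlsx file is just a zip; zipfile can't edit in place,
    so we copy every member into a new archive instead."""
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = b"" if info.filename == member_to_empty else src.read(info.filename)
            dst.writestr(info.filename, data)

# Toy archive standing in for a workbook (paths are approximate):
with zipfile.ZipFile("book.zip", "w") as z:
    z.writestr("customData/item1.data", b"\x00binary blob")       # the binary model
    z.writestr("customData/backup.xml", "<Create>...</Create>")   # the XMLA backup

# Empty out item1.data so Power Pivot would "rehydrate" from the XMLA:
zero_out_member("book.zip", "book-patched.zip", "customData/item1.data")

with zipfile.ZipFile("book-patched.zip") as z:
    print(len(z.read("customData/item1.data")))  # 0
```

In the real workflow you would first hand-edit the XMLA backup (say, to add a hundred measures), then zero out the binary member exactly like this before renaming the zip back to .xlsx.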

Justin Mannhardt (00:28:49): Clearly a supported way of working.

Rob Collie (00:28:53): And one day, out of the blue, they took it away. It didn't work anymore and I was just crushed. That was it. There was no way to bulk edit a Power Pivot model anymore. And it was just absolutely soul crushing.

Rui Romano (00:29:09): Yeah. And those things, even today, they still exist in PBIX files. You can still do it. Actually, with a PBIX, you need to save it as a PBIT, and then you will see the tabular definition of the semantic model, and you can make changes. There are many other techniques. In my previous life as a consultant, I was always doing presentations about this stuff. I had this presentation with 30 tips. It was called Power BI Hacks – a list of one slide for each tip. I had a lot of fun doing those things. But the thing is, not only is it unsupported, but as you were saying, you need to be really careful not to break anything. And then if you break it, you spend a lot of time debugging what was the thing that you missed, maybe a comma or something. And one of the principles of the PBIP – and when I say PBIP, Power BI project, I mean the folder representation, with textual files, of your entire semantic model and report.

(00:30:08): Even if the tool does not allow you to create a batch of measures, and you want to just do that in the code, either scripting it using AI or just copying and pasting because copy and paste will also be a lot more efficient, you will be able to do it. And not only that, when you go back to Desktop or Power BI in the service and you open it, we will do the best we can to tell you if anything is wrong. We actually did a lot of work on hardening to detect those situations, like validating the schemas. And that's why we also need to have versioning on all those files. And if anything goes against the rules, we will tell you, and we will tell you the file. If you are doing those things, you are a more advanced user.

(00:30:52): The file will tell you. No, we don't hide the error. We give you the full-blown error, and you should be able to figure out what is wrong. We should not require those types of hacks. And we should also acknowledge that, especially today, we have a fair amount of Power BI professional developers who care about control and transparency. They want to know what the tool is doing to the files. And that's one of the main tenets of the PBIP: to make those things supported. Right now you need to restart Desktop whenever you make a change to those files. One of the things that I'm putting a lot of effort into and prioritizing on the team is to just detect those changes and reload, so you don't need to wait five minutes or one minute to reload Desktop because you are working in a one-gigabyte file. It will just reload the metadata.

(00:31:42): It's not the type of feature that is important for the business, but they are really important for the quality of life of those developers, and they make those people a lot more efficient and also happier working with the product.

Rob Collie (00:31:55): And happy matters. Take care of the people who take care of you. It makes a difference. I've always said that suffering should be considered part of the cost of any project, not just time, not just money, right? And how much are you burning people out? Because you're not going to get their best work and eventually they're going to leave, et cetera.

(00:32:13): Let's back up for a moment. One of the lessons that I think we've been learning as a community, as an industry, is that LLMs are really good at writing code. And when you look at code, it's text. I mean, it's not English. It's not Portuguese. It's not friendly text, but it's still text and it follows a set of rules and it follows a flow. And it really is, they call them programming languages. That's not just an accident. They really are languages of a sort and large language models are very, very good at writing code.

(00:32:51): In fact, the earliest practical use cases, like the place where these things really caught fire, the reason why we have the name Copilot is because some of the earliest usage of this stuff was to write code. One of the first things that it got good at, before it was good at giving us advice on what to cook for dinner, it was good at writing functions in Java or in Python. And most people who aren't developers don't know that. They don't know that that's sort of like right in its wheelhouse. The Power BI file format by itself wasn't accessible to that. That's not code. It's a binary file. It's just a big blob of bits. It's like completely indecipherable even to the AI.

(00:33:33): Something we've been talking about is a lot of your work, your team's work has been making Power BI into a format that is readable and writeable by AI. But now there's also this MCP thing. And something you said earlier I really want to circle back to is when I'm using Claude Code or whatever to help me build some sort of application, if it's writing Python for me, for instance... Dear listener, if you've never written Python, don't worry about it. In a way, neither have I, because Claude Code does it for me. I have a scrupulous policy of never looking at my code, never looking at the code that Claude writes for me. Nope, not going to do it. So I don't write Python either. Claude does. But when I'm using Claude Code to write a Python script for me or whatever as part of a larger application, I don't need some sort of translation layer between the LLM and Python. LLM just knows Python. I mean, it really, really, really knows it.

(00:34:32): You hinted earlier that as more and more examples of Power BI are put on the web and made available in public repos and in blog posts, et cetera, future iterations of these LLMs are going to become better trained on these file formats, which are still relatively new and don't represent the majority of Power BI examples in the world. We're still early in terms of the training data for it. Do you anticipate that we're ultimately headed for a world where the LLMs know the Power BI file formats, the text-based formats anyway, as well as they know Python?

Rui Romano (00:35:19): I don't know if it's going to be as well as they know Python, because the amount of Python developers is huge and the amount of Python samples is huge as well. But I do believe we will get to a point where it will have enough knowledge to create those TMDL files without even an MCP. And the reason I tell you that is it kind of already does that even today. TMDL is a declarative, YAML-like syntax, a declarative language. It's not an imperative language where you are going to create loops and if statements, it's not like that. You declare a measure, and a measure has these properties and this expression and these descriptions and all of that. And then you just end up getting that measure in the semantic model.

(00:36:12): But because it's very readable, it's not only readable for the human. You can open a TMDL file and you can see, okay, here's a column. And you can intuitively know that if you have things in the next level of indentation below the column, those are the properties of the column. It's very obvious, even if you don't know TMDL and it's the first time you are looking at it. We also did that research with some business users. We just showed them the TMDL and they got scared, like, "What is this?" But then I asked, "Okay, can you spot how many columns? Can you spot how many measures?" "Oh yeah, I can see three columns and I can see the properties of the columns." They understand, they can read it.

(00:36:52): And because it's good for the humans, it's going to be good for the agents. One of the things that was interesting to see, when I first tried playing with GitHub Copilot's autocomplete, was that I opened a TMDL file and just started to type a measure, and Copilot, taking the existing file and the other examples into context, was able to generate the right code, even though it was not trained on TMDL content. It did not know anything about TMDL, because the first time I tried it, TMDL was like six months old or so. So there was no knowledge of it in the model.

(00:37:28): And I was very surprised to see that the LLM was able to generate the TMDL for me by using the examples and the patterns of the other objects. In one of the examples, I had a sales table that already had measures for sales amount and cost. I just typed "measure Profit" and you could see it generating the DAX code for the profit using the other two measures. And why was it able to do that? By looking at the existing pattern of the other measures and knowing how they are represented. Because I typed profit, it knew that it already had two measures, one for the sales and another for the cost.
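The pattern Rui describes might look roughly like this in a TMDL file. This is a from-memory sketch of TMDL's indentation-based style with illustrative table, measure, and property names, not guaranteed to match the exact syntax:

```tmdl
table Sales

	measure 'Sales Amount' = SUM(Sales[Amount])
		formatString: #,##0.00

	measure 'Cost' = SUM(Sales[Cost])
		formatString: #,##0.00

	/// The measure an autocomplete model can infer from the two above:
	measure 'Profit' = [Sales Amount] - [Cost]
		formatString: #,##0.00

	column Amount
		dataType: decimal
		summarizeBy: sum
```

The indentation is the point: properties sit one level under the object they describe, so both a business user and an LLM can read the structure without knowing the spec.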

(00:38:10): And guess what? It does know how to write DAX, because DAX is a language that has existed for many, many years and has a lot of examples, and it can generate the DAX. At least the basic DAX it will nail. If you ask it to create more complex stuff, it will also do it, but it will also be a good idea for you to provide some context on how to do things and not just trust whatever it creates. But the basic stuff, like creating the base measures, creating the measures with the format string, you can already do that and achieve a high level of success today without an MCP, just with TMDL. And it will just get better and better as these things become more pervasive in the community and on the web.

Rob Collie (00:38:51): So it's important I think to emphasize that what I think I'm hearing is that you believe there is nothing special about these text-based Power BI formats that makes them more difficult for AI to work with than say Python. The only difference is amount of training data, the amount of examples that's available in public, that's the only real difference between the two.

Rui Romano (00:39:16): Yes. Although for the reports, I'll make it clear that the report is not necessarily a language. The report is just a JSON representation of the report state. TMDL is a little bit different. It's a text representation of the semantic model, but it's also a language. We actually implemented it as a declarative language; you even have scripts with createOrReplace in TMDL. With PBIR, the report is just JSON. Now, of course, with that JSON becoming more and more available on the web, the agent will know, for example, how to change a filter of a visual, or how to change a semantic model field on a visual. Today, to work around that, because we don't yet have an MCP for the reports, what you need to do is put that into the context of the agent.

(00:40:09): And it's really nice, because you can just say, "Hey, here's what a visual looks like, and here's where the semantic model field should be. Now create me a report with these visuals, pick the best semantic model fields, and use your best judgment to decide what should be the top card, what should be the charts and the visual types." But you need to give those examples. You need to say, "Okay, here's how it looks. Here's the JSON, here's a schema if you want to do validation of the schema, and now just follow this pattern." Because again, in the end, it's just text, and the LLMs are really good at understanding the patterns in the text and just following them. You don't need to be really specific. You don't need to write a script to know how to parse the JSON or how to parse the TMDL. You can just put in an example and say, "Hey, agent, this is where you need to change things. Here's the semantic model, the table name and the field name," and it will just nail it.
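Because the report is plain JSON, the "show it where the field lives" trick can even be done deterministically. A minimal sketch, where the nested dictionary shape and the "Property" key are invented stand-ins for wherever a real report format stores its field bindings:

```python
def swap_field(node, old, new):
    """Recursively walk a report-style JSON tree and swap one
    semantic-model field reference for another.

    The key name "Property" below is a hypothetical stand-in for
    however a given report format stores field bindings. The point
    is that it's plain JSON, so plain tree-walking works.
    """
    if isinstance(node, dict):
        if node.get("Property") == old:
            node["Property"] = new
        for value in node.values():
            swap_field(value, old, new)
    elif isinstance(node, list):
        for item in node:
            swap_field(item, old, new)
    return node
```

An agent does the same thing less literally: given one example of the pattern, it edits the matching spots by analogy rather than by recursion.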

Justin Mannhardt (00:41:12): A question that we're working with internally a lot: there are kind of these two things that the work you and your team do enables. One was this intentional mission that you described earlier to empower the pro developer persona, make them more effective, and not get people stuck when they go to use external tools. That sort of happy accident also ladders up to the AI use cases really well. And there are two primary use cases. One is for the individuals that are creating solutions, which now with AI could be everyone from a pro to someone that's just getting started. And then, with things like MCP servers for reading models, we can now have very interesting conversations about how we could hook AI-based solutions up to models that are running in production, so they can use them, whether it's in workflows or agents. So I'm just curious, how much do you get exposed to how effectively that second use case is working for customers, if at all?

Rui Romano (00:42:16): I'm definitely more focused on the developer space, but I can give you my take on it. I do think the future of the data profession, or the future of BI, let's say, in terms of development, will change to be more declarative, less code, and less technical. The way I see it, as a developer you will evolve into being more like a director and a manager of these agents. And you'll need to be really good at expressing yourself, and also at validating whatever the agents or AI are going to create. And I don't think it changes that much the need to do work around the data, like data quality, data governance, data organization, how you are going to go from a raw, ugly CSV file into something that is organized and ready for consumption. What is going to change is that it's going to be more efficient to do those things.

(00:43:14): And better than that, you will be able to do multiple variations of the implementation, because it's not that expensive anymore to try different stuff. It's that thing about one-way versus two-way doors, right? It's not only one way, and you don't need to make all the decisions upfront. You can experiment a lot more. That's, let's say, on the developer side. At the same time, I do think one of the most painful things in Power BI is actually creating those dashboards and reports. It's very fun when you are exploring things, and it's awesome. But when you want to build professional, standardized reports inside of a company, it's a lot of work, because it's a lot of clicking and drag-and-drop, and it's a lot of effort if you want really nice and shiny dashboards that all look the same and are standardized. Although with the PBIP and PBIR, you can already script all of those things.

(00:44:10): But I do believe that on the consumption side, there won't be as much of a need to create 100 reports or 50 reports or 10 reports to answer multiple questions. I think consumption will evolve a lot more into: the user will ask a question and get an answer, and that answer can be textual or visual. They can share that discovery with others, just like they share a ChatGPT session. And if they want, when they get to a point where their exploration is showing some nice value, maybe they can persist that as a dashboard. They can go from that to a dashboard. I don't think that dashboards and reports will go anywhere. They will still be very valuable, because if I'm a BI developer and I'm building a solution for my business, I will probably have one dashboard that is there, and the user can trust that it will always work and is very deterministic.

(00:45:09): But on the exploration side, for maybe not the CEO, but the data analysts in that team who need to just explore data and do operational analysis, they will get a lot more value, and it will evolve into being more like chat. I have a question, I will ask it, and Copilot or AI will just give me the answer that I need, and I can iterate and keep asking until I get to that answer. And it will become a lot more personal. These agents can know what you need, what type of response you like, whether you prefer to see tables or visuals or both, whether you always want to get a report in the end. So you can tune that experience and get the agent doing that thing for you.

(00:45:56): So I'm really excited maybe also because I never really like the part of the project of building the reports.

Rui Romano (00:46:03): I love building reports, like interacting with the data, but when it comes down to really building these corporate, pixel-perfect reports, it was a lot of work, and it still is a lot of work. I was always this lazy developer. I never really liked things like bookmarks, creating hundreds of bookmarks and managing all those things, because I was always stressed. It's not that I dislike bookmarks, but I knew that if I went down the path of creating hundreds of bookmarks and then wanted to go back and refactor something, it was like 10X the work, and I needed to refactor all those bookmarks. It's a lot of work. In summary, I believe that it's going to be a lot more natural and declarative, and for consumption, a lot more of a chat experience, getting to the analysis you need without actually going to a dashboard and having to learn about the dashboard, which is not easy.

Rob Collie (00:47:04): No, it's not. I think a crucial distinction is that in any piece of software that you become familiar with, you're going to be able to interact with that software much, much faster with your mouse or with your thumbs than you can via voice. Imagine if you had to interact with email by saying, "Open next email," instead of tapping or double-clicking that message. Or, "Delete those six emails," instead of just selecting all six of them and pressing delete. You're able to move much, much more quickly with these mechanical interactions, whether it's mouse, touchpad, or touch on your mobile app. OK and Cancel are faster to tap than to say. But for experiences that you're not familiar with, voice is way faster. And so what we're going to find is that there's a handful of dashboards that are familiar to you because you use them all the time.

(00:48:03): They match your workflow almost perfectly, and you're still going to use them that way, just like there's still going to be software with graphical interfaces. Not everything is going to go to voice, or to typing instructions in English. But the other thing we were talking about: yes, let's talk about those reports for a moment. If I could get back the time that even I have spent trying to align visuals with each other, and trying to get the borders of those visuals to not overlap in a way that makes a double-thick black line instead of a single-thickness black line, my lifespan would be noticeably longer. I'd get more life.

(00:48:47): So this is one of the things we've been running into, and we were sort of talking about a little bit earlier, is that the report format in Power BI is still less friendly to AI than the model format. That's certainly what we've seen. Is there anything that you can say about that? Are there plans to change that? Is that even something that can be fixed or is the report format just sort of necessarily such a different animal that it's going to be more resistant to being made understandable and accessible to LLMs?

Rui Romano (00:49:20): Yeah. And before I answer that, it is absolutely true. The report is a different beast in terms of complexity compared with the semantic model. The amount of possibilities there, what you can do with the visuals, what you can do with the report, it all comes down to some sort of JSON inside of the report. It is a lot. But that example you just gave me, aligning the visuals, should be one of the easiest things for you to do today with AI. That's a very clear property, like position. AI is really good at understanding that position, creating a wireframe, and then you can just say, "Hey, align everything," and you will see that it'll do a good job. Actually, it's on my to-do list, because I was already able to do it. I just want to do a demo and a recording and publish the prompt, and maybe a little bit of the context that I need to give the AI, which is not a lot.
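The alignment case Rui calls easy really is mechanical once layout is just data. A minimal sketch, using invented key names rather than the actual PBIR layout schema:

```python
def align_left(visuals):
    """Left-align a set of visuals by snapping every x to the minimum.

    A "position" dict with x and y mirrors how report formats tend to
    store layout; the exact key names here are illustrative, not the
    real PBIR schema. An agent given one example of the real shape
    can apply the same transformation.
    """
    left = min(v["position"]["x"] for v in visuals)
    for v in visuals:
        v["position"]["x"] = left
    return visuals
```

The point is not that you'd ship this script, but that "align everything" maps onto a single well-defined property edit, which is exactly the kind of change an AI agent handles well.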

(00:50:12): It's like one page of text, so it knows what to do. Now, things get to a completely different level when you go into conditional formatting, filters, the selectors, how you configure a semantic model field inside of the report and then use it in multiple places of the visual. That JSON is quite complex, and hard today for AI agents to just go and make changes to and detect the patterns in. It can do it, but you need to provide a lot of context. Right now, to be honest, the absolute priority of the team that is working on the PBIR is taking this thing to GA. And the reason why it's not GA now, after we have been working on it for almost two years, is not because we abandoned the feature and we are not going to GA it. Quite the opposite. We have been working on it for the last two years.

(00:51:06): But the thing about the PBIR is not only what the customer sees, which is just the JSON. The thing that makes PBIR a really complex project is that we are also changing how we store things in the backend. It was also an opportunity for us to change how we store things into a much more efficient and future-proof format going forward. And that's why it's taking a lot more time: the way we store things in the backend, and also making sure that nothing is going to break, that all those millions of reports out there are not going to break when they migrate to the new format. And there are a ton of reports out there that were manipulated in unsupported ways. I'll give you an example. Because we have a public format, we need to have a schema, and we need to have a strict schema. So if you create, for example, a visual with the same name, with the same ID, you get an error. Guess what? There were a lot of reports out there with duplicated visuals.

Justin Mannhardt (00:52:09): Table one, table one, table one.

Rui Romano (00:52:11): The display name is the same, but so is the ID, because... And the only way for you to be in that situation is if you manually changed the JSON down that unsupported path.

Justin Mannhardt (00:52:20): Oh, sure.

Rui Romano (00:52:21): Because that was not supported, we did not have a lot of validation around this. We just assumed our tool is not going to put in duplicated visual names, so no one will do it. Now, people did do it, and somehow, honestly by accident and by luck, the report renderer picked the first one. And most of the time that's what the customer needs. That's what the user expects, to get the first visual. And because of that, things worked. Getting an agent to really be able to make those changes to the reports, to the JSON as it is, is going to be hard. The focus of the team, as I said, is to take this to GA. After that, and actually we have also been experimenting with this, we're looking into ways to get agents to be smarter about working with the reports. That could be having an MCP, but maybe before the MCP, we will need to have an object model.

(00:53:18): An object model that is capable of reading that report definition and exposing it in a programming language, in Python or C# or whatever, so agents can also use that object model to understand the code and generate new stuff. It would also include concepts such as the theme, and things like: if you do a rename on a semantic model, you can easily refactor and change that in the report without going through the JSON. But those things are, at this moment, more in the phase of experimentation, seeing what would be the best path going forward. Another approach could even be simplifying that JSON, which would be far more complex work. But because we already have the foundation of this JSON, it would be far easier than before, when it was a completely monolithic JSON file that no one could read.

(00:54:20): We are now in a much better shape if we want to create a [inaudible 00:54:25] version of the Power BI, if we want, but I'm not sure if that's the right move. Personally, I do feel we need to start by creating a really good object model that will feed everything else. For example, today you have the modeling MCP, which is great and working great. But before the modeling MCP, you had the APIs, like the Tabular Object Model and ADOMD. And the MCP, which is Model Context Protocol, is nothing more than exposing APIs to AI agents. That's the goal of the MCP. And which APIs? The Tabular Object Model and the APIs that already exist. And those APIs are the same APIs that Power BI Desktop uses whenever you are creating a measure through a UI gesture.

(00:55:15): But behind the scenes, what the Desktop is doing is calling those APIs. Now, with the MCP and AI agents, we are just swapping the clicks of the mouse and the keyboard for a natural language question, and then it's the agent that is using those same APIs behind the scenes. The problem is that for the reports, we don't have those APIs yet. There is a journey to be made on those libraries and APIs. And when we have that, we will be in a much better shape to either change the language itself if we want to, or implement an MCP, or implement a CLI on top of that object model that an agent can just leverage to go crazy and create and modify reports. I'm sure we will get there. Right now, the real focus of the team is taking the PBIR, which is our code format for the report, to GA. Hopefully it will happen sometime early next year.
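Rui's point that an MCP server adds no new engine capability, it just exposes existing APIs as named tools, can be illustrated with a toy registry. Everything here is invented for the sketch: the Model class stands in for something like the Tabular Object Model, and the tool name is hypothetical:

```python
class Model:
    """Stand-in for an existing modeling API (e.g. something
    TOM-like). The UI and an agent would both end up here."""

    def __init__(self):
        self.measures = {}

    def create_measure(self, table, name, expression):
        # The same call a Desktop "new measure" gesture would make.
        self.measures[(table, name)] = expression


class ToolServer:
    """Toy MCP-style layer: registers plain API methods as named
    tools an agent can invoke. No new capability, just exposure."""

    def __init__(self, model):
        self.tools = {"create_measure": model.create_measure}

    def call(self, tool, **kwargs):
        return self.tools[tool](**kwargs)
```

Whether the "caller" is a mouse click or a natural-language request routed through an agent, the underlying API call is identical, which is why the report side needs its object model before it can get a useful MCP.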

Justin Mannhardt (00:56:13): And GA meaning general availability, where it's out and everybody's able to benefit from it. I think the overarching narrative for all this stuff is that getting Power BI to a point where the solutions we're building are described in code enables AI to be part of the puzzle in a really effective way, which means the opportunity to reduce friction goes way up, and it's for everybody. So like what Rui was describing before: if I'm a business leader and I consume data and information about my business, yes, I'm going to have a dashboard or set of dashboards that I look at all the time. They're telling me all the things I know I need to know, just like in my car I need to know how fast I'm going and how much gas I have. But then I also have all these questions, and I go to people and say, "Hey, Rob, could you build me another report that further expands on this question I have?"

(00:57:10): And even with Power BI, it was like, "Oh yeah, he can maybe turn that around in a day or two." "Well, I needed it now." So AI coming into the equation can help that user. And then, when developers or report authors and model builders do need to build new solutions, they have less friction, because they don't need to spend an hour adding 100 measures, or realigning their visuals, or realizing they're about to make a change that affects a huge part of the solution. So much friction comes out of the equation across the org. So I think the message is: you should be able to get answers to the things on your mind with less friction, and you should be able to enhance and optimize the solutions you're using with less friction. That's what we should be expecting for ourselves.

Rui Romano (00:57:57): Even as a business user. Because we have code formats and file formats, and you have AI that is able to understand those formats and generate new things, whether with MCPs or with context, with tools, whatever, fundamentally it's because you have that code and you have APIs that can work with that code. I can see a future where I'm a business leader and I'm viewing a dashboard. I don't necessarily like what I'm seeing; I want to see things differently. And I can just bring in Copilot and say, "Hey, make this change." Or, "I also want to see sales trends by this new field that maybe does not even exist in the semantic model." But AI knows the code, AI knows the report and the semantic model, and it could say, "Okay, I'm going to take this dashboard and create a new one, a temporary one, where I'll make the changes for you, maybe even a change on top of the semantic model or a new measure in that report, and I'm going to show you the result."

(00:58:59): And then you can decide. You can even say, "Okay, I made some changes to this report," and ask the person who actually owns the report to incorporate those changes into the report that everyone sees, or keep it just for you. In Power BI, we had this amazing feature that I don't think ever got the love it deserved: personalized visuals, where you can just go to a visual and change it the way you want. These things can take that to the next level, because it's not just personalizing visuals: you can ask a question about the report, and at the same time you can ask it to change the report, and we can make that change in a personalized version of that report. And all of this is because of the code, because of AI being able to understand that code and generate that code. It's a little bit beyond my scope of things, but I'm sure we'll get there.

Justin Mannhardt (00:59:55): The power to just ask, regardless of whatever you're doing, whether I'm building the model or just using reports. I'm glad you brought up personalized visuals, because even that was a good idea: "Oh, they put a line chart here. I kind of want to see it with a couple different categories." But that was still a training effort, especially for people that weren't Power BI builders. "What do I click on? Where do I go?" And now they just say, "Hey, can I look at this with this other category?" or "Can you change this part?" and AI is able to understand that and make that change. Yeah, that's what I think we're all looking forward to being able to do.

Rui Romano (01:00:31): Yeah. "Can you put in a new line so I can set my static goal and see where I want to be and how the trends are going?" Again, if you are an experienced Power BI developer, you know what to do, but if you are not, you will struggle with it. You need to go to the format pane, search for the property, find the section where you want to make that change, configure it. It's not easy. And this will democratize that, because if there is an agent that knows how to modify a report and also create a report, and it's really good at that, and it basically has a set of tools and operations that teach it how to do it, then, as you said, you just ask and you get an output. And because it's in a rendered view mode, you can even view it.

(01:01:20): And the agent can also check whether it's doing what you just asked it to do. Going back to that example from Rob: you can make a change and launch a browser, and especially if the agent is running in the same browser session, you make a change and you just see it, and the agent also sees it, and it knows if it did a good job or not, and it can iterate until it does.

Rob Collie (01:01:41): So I have a question. It's a little bit of a challenging question. I'm sure you've been hearing it. We've been hearing it from time to time and even asking ourselves this question, and I think we have our own answers for it. We've all now had experiences where we've sat down with some sort of AI agent and some data of some sort and started to get analysis out of the process without going through Power BI. There's no Power BI involved. The most simple example of this is you're sitting down and it'll just produce, for example, like in Claude with artifacts, it can just draw charts and it can perform chart type analysis on your data. Or even better, you can generate Python code or whatever code that performs analysis, hallucination-free analysis, produces wonderful visuals, even interactive visuals.

(01:02:33): You can have a web application with interactive visuals in it, displaying line charts, whatever, that do display a lot of Power BI-like interactivity. You can get those built relatively quickly. So what is the future of Power BI in a world like this? Does AI long-term end up being like an end run around the need for Power BI? Does Power BI have a place in the future if every analysis can be generated so readily?

Rui Romano (01:03:01): That's a really good question, which of course is a hot topic internally as well today. First of all, we want to get to a point where you can get that same kind of experience. You don't need to spend a day creating a report, or a day or a week creating a Fabric solution end to end, to get there. So you will be able to say what you want. "I want a report." You attach a screenshot of the look and feel of the report that you want, give it a semantic model or give it a set of data. And if it's needed, we create a semantic model and then a report on top of it, and we show you the results. And it might be a transient report or it can be something that persists. So we want and we need to get to that same level of experience within the platform.

Rob Collie (01:03:46): So that it's just as fast, just as easy to get a Power BI style output as it is to get a coded from scratch Python and HTML type of output.

Rui Romano (01:03:56): Because the thing is this, and of course I have experimented a lot with those tools, creating interactive dashboards, and it's amazing. And then you come back to a tool like Power BI or Tableau or whatever BI tools already existed, and existed for many years, and the experience kind of feels a little bit outdated. So it's like, okay, in one, I ask and I get everything and it's very dynamic. It's even better than what I expected. And then you come back to Power BI, which, yeah, you can make amazing dashboards and really beautiful things, but it takes quite a bit of work, right? You need to invest a lot of time. And it's the difference between spending one hour on something versus working eight hours to get to something that maybe doesn't even have the wow effect you can get generating those webpages with Python and D3 and whatever chart libraries you are going to use.

(01:04:57): So it does feel outdated on the creator side, maybe not on the consumption side. On consumption, I don't think so; the look and feel of Power BI, the interactive dashboarding, that thing is still really powerful. And I think it's hard, even when you create those pages, to get to that same level of interactivity. But on the creator side, it does feel outdated. It's like, I don't want to do this anymore. Even if that thing gave you pleasure, you don't want to do it. And whenever I talk about AI and PBIP and semantic models these days, I usually say this all the time: there were things that gave me pleasure when I started, like, "Hey, I'm going to create a semantic model. I'm going to get the data. I'm going to shape the data in Power Query, clean it up, apply my naming conventions, create the star schema, the relationships." That thing was amazing when I did the first 50. It's not fun anymore. I don't want to do that anymore.

(01:05:54): What I want to do is put my brain into some sort of context, hand it over to AI, and then get the thing the way I want it, with my style. That's already possible today, but it's still a little bit technical. You need to go to VS Code or Claude Code. You need to install those tools. You need to use PBIP. And one of the goals of the team is that we want to get that same level of experience, but you don't need to install anything. You don't need to install VS Code or Claude Code.

(01:06:22): You don't need to get a subscription. It's already part of what you are paying for. It's already part of the product. You can just say what you want, and we will do it for you. And it needs to be really smart and good at it. So, again, say you want to create a dashboard like a Miguel Myers dashboard or a Kurt Buehler dashboard, one of those amazing ones: "Here's a screenshot, this is what I want, go ahead and do it." And then in a few seconds or a few minutes, you get that.

Rob Collie (01:06:51): I love that, by the way, people's names being used as adjectives. Oh, you mean like a Miguel Myers dashboard or a Kurt Buehler dash?

Justin Mannhardt (01:06:59): You use that with customers like, "You talking about a Rob Collie dashboard?"

Rob Collie (01:07:03): That's the platinum level.

Rui Romano (01:07:07): And I think that even the agents will struggle to get to the point of a Miguel Myers dashboard. The dashboards that he builds are pretty amazing. They

Rob Collie (01:07:14): Are, but our president and COO, Kellan Danielson, was showing me something he's building yesterday called The Claude Father. And it is so well themed with the Godfather type theme, and it's done by a design MCP. And again, it just becomes a matter of training data. How many Kurt Buehler and Miguel Myers dashboards need to be on the internet before the LLMs get closer and closer?

Rui Romano (01:07:41): Yeah. But honestly, I don't think you need to have a lot of examples. What I'm saying is that you can just take a screenshot of something that you like. "Here's a style of report that I like." It's the same thing as going to one of the consultants at P3, the best one you have at report design, and saying, "Hey, here's a semantic model. Here's a screenshot of a style of report. Build something like this." The experts will know how to do it. They'll know, okay, I need to start with a theme. I'm going to create a background image with these things, and I'm going to create some icons for the design.

(01:08:19): So the agent should also be capable of doing such a thing, and also able to take an example, understand the goal, and, using the tools and the context, knowing what it must do to get to that level, it should be able to do it. I'm not saying that it's going to be easy. I'm not saying that this is going to be the reality in a couple of months. What I'm saying is that I'm absolutely sure we will get there.

Rob Collie (01:08:44): Yeah, I agree.

Rui Romano (01:08:45): Now, that's one part of it. The second part, which is also very pertinent these days, is: okay, but do I need Power BI? Why do I need Power BI? Let's not even say Power BI: why do I need Fabric? Because Power BI right now is inside of Fabric. So the platform is no longer Power BI. The platform is Fabric; Power BI is an item, a workload. It's a very important workload, and it's now inside of Fabric. Power BI kind of got demoted and now it has a new manager, which is Fabric, and everything runs inside of it.

Rob Collie (01:09:17): Which is made very clear every time a Power BI report renders and the Fabric icon comes up. There's a very, very clear communication, like, who's in charge here?

Rui Romano (01:09:27): I don't know if that's a bug or not. I don't know. So, do I need that? I would say, and of course I'm biased, I will say yes, because the main value a lot of people forget when they are making those amazing dashboards is, first, how secure that thing is. Second, how can you trust that you are going to put some... how can you scale that to share that same dashboard with a thousand users? How can you put row-level security on it, and column-level security? Those are the things that you get from the platform. And especially as an organization, you can trust that platform. You can make sure that data is secure. There are a lot of things that we sometimes take for granted, but they are really important, especially in the corporate world, like sensitivity labels.

(01:10:16): I'm sure that I can have peace of mind that this dashboard, because it has a confidential label, is not going to be viewed outside of the company, or at least it's not going to be easy. And at the same time, you can give access and share with everyone within the company, but you need to be inside of this network. It's protected. So the platform side of it, I don't see that going away. And when you go into an approach of, "Okay, I have data and I'm going to just generate something in Python or a web application," it's very good. And I think it's very flexible for self-service usage. But when you want to scale this to a company, to a team, to customers, then you will start to run into a lot of complex problems to solve, problems that are already solved not only in Power BI, but in the platform.

(01:11:09): And one thing is analyzing data from a set of CSV files or a database. Another thing is what I said previously about the future of BI. I still see a lot of work and need for that data prep, that data analysis, that data curation, that data governance. And for that, you will need a platform to do those things. That is not going to come for free. And I do believe in our platform being capable of offering all of that as a unified platform, where you get all of this in one package, where things like security and governance are built in, and we take that really, really seriously. And why not? You can already kind of do that, but maybe not in the most efficient way, because you can have Power BI reports with Python visuals and R visuals. Maybe in the future we will have a type of report leveraging all the infrastructure, the secure, existing semantic model, in a secure way, with your account and everything that Power BI and Fabric offer, with a webpage on top of it that just connects to that data.

(01:12:17): And then you kind of get the best of both worlds. You can just say, "Hey, generate me a freeform analysis on top of the data that I already have, but in a trusted environment," and you get all that flexibility. So why not that? I still don't see anyone, or any platform, offering the same set of platform capabilities that Power BI and Fabric offer, with the BI on top of it. Now, I do agree that it's not the same experience on the creator side that you get these days with AI generating a dashboard. But again, if you want to share that thing, and you want to share it in a trusted way with security on it... yeah, you could vibe code those things as well, and in the end you could even get a backend to configure the access security and the row-level security, but it's not going to be easy. It is still a problem to be solved.

Rob Collie (01:13:14): We should assume that Microsoft has more than enough time to get the Power BI equivalent experience of vibe coding to make it as frictionless as it is to build a graphical dashboard in HTML and Python. The world is slow to adopt and slow to adapt to new methods, and you all know exactly what your mission is, and you're going to succeed at that. There's nothing that's going to stop you. For purposes of this question, we should evaluate the world where we're already there: the first-time experience of vibe coding, let's say, a Power BI-based and Fabric-based solution on one side, versus a from-scratch Python, HTML, whatever, solution on the other. Assume that the first-time experience is just as good and just as fast. That's when things like measures start to become pretty interesting. A measure definition in Power BI.

(01:14:10): Let's imagine a very sophisticated business logic measure. There are such things. We write them all the time. They're very nuanced and when you get them right, when you get them working, they work in a lot of different contexts. They don't just work at the state level, if you want to use geography. They don't just work at the state level. They work at the city level. They work at all levels of granularity and that you can ask questions on a million different dimensions and that measure works equally well across all of them. Whereas the Python coded version, let's say, is going to be hardwired around whatever you first built. It's going to be hardwired to calculate at the state level or at the country level. And if you want a dashboard that does something different, the Python code needs to be modified. Okay, but I've got the vibe coding assistant to write that other code for me, right?

(01:14:59): It might not be completely in agreement, and there are going to be a million of them lying around, because there's one for every different report. Every dashboard has its own version of it, because the context it's calculating in is a little bit different. And by the way, also, I can't imagine anything coded in Python ever being nearly as fast as Power BI. No matter what libraries you use, there's going to be response time buildup and all that kind of stuff. Eventually you'll reach a point where these vibe-coded non-Power BI solutions are much harder to work with and modify over time than the one that's anchored in the platform that was designed to do those sorts of things from the beginning. And I believe in all of that. At the same time, I also think we're going to have a conceptual problem with the marketplace, and maybe this won't be quite so true in enterprise customers, which is where Microsoft focuses most of its sales attention anyway.
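Rob's contrast, a measure definition that adapts to whatever context it is evaluated in versus Python code with the grain hardwired, can be sketched in a few lines of toy pandas. This is not Power BI code; DAX measures and filter context are the real mechanism, and the data and column names here are invented purely for illustration:

```python
import pandas as pd

# Invented toy sales data, purely for illustration.
sales = pd.DataFrame({
    "country": ["US", "US", "US", "DE"],
    "state":   ["WA", "WA", "CA", "BY"],
    "city":    ["Seattle", "Tacoma", "Fresno", "Munich"],
    "revenue": [100.0, 50.0, 75.0, 60.0],
    "cost":    [40.0, 20.0, 50.0, 30.0],
})

# The "hardwired" approach: the grain (state) is baked into the code.
# Asking the same question by city or country means writing new code.
def margin_pct_by_state(df: pd.DataFrame) -> pd.Series:
    g = df.groupby("state")[["revenue", "cost"]].sum()
    return (g["revenue"] - g["cost"]) / g["revenue"]

# The "measure" approach: the business logic is defined once, and the
# grouping context is supplied at query time, loosely mimicking how a
# DAX measure is re-evaluated in whatever filter context a visual provides.
def margin_pct(df: pd.DataFrame, by: str) -> pd.Series:
    g = df.groupby(by)[["revenue", "cost"]].sum()
    return (g["revenue"] - g["cost"]) / g["revenue"]

# Same definition, any granularity:
by_country = margin_pct(sales, "country")  # US, DE
by_state = margin_pct(sales, "state")      # WA, CA, BY
by_city = margin_pct(sales, "city")        # Seattle, Tacoma, Fresno, Munich
```

The point of the sketch is the asymmetry: one definition of the margin logic serves every level of granularity, while the hardwired version multiplies into one copy per report, exactly the drift Rob describes.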

(01:15:51): But on the lower end... This just occurred to me while I was listening to your answer. On the lower end, today, we have a very simple sales pitch to businesses who haven't yet jumped into Power BI. Why do you want Power BI? It's because you get dashboards. You want dashboards? You get Power BI. It's the cheapest, it's the most affordable, it's the industry standard. It's just very clear. If you're going to take the step into dashboards, you're going to be going into Power BI. In the same way that Power BI has, even today, if you took a census of all the Power BI models in the world, it'd be an appalling percentage of them that aren't built to take advantage of Power BI appropriately. So many of them that are one wide Franken table or...

Justin Mannhardt (01:16:37): 73 disconnected tables.

Rob Collie (01:16:39): Right. 73 disconnected tables or one large one, but definitely no star schema, right? And do they use measures? No. That tells you that the marketplace has a hard time absorbing nuance; those long-term, deeper benefits that they can't quite perceive are a harder sell. And so I do think that in the mid-market in particular, we're going to see probably more competition from one-off vibe-coded dashboards. They're not going to know necessarily that they're creating that future problem, because it's like, "Ooh, dashboards." What was the goal? It was always dashboards, and we're going to get there.

Justin Mannhardt (01:17:18): With lots of bookmarks for real.

Rob Collie (01:17:19): Lots of bookmarks because we love it. But also, I think the other real strong sales pitch that sort of comes back and might overwhelm that in a positive way for all of us is that chat-based, chat-with-data, ask-questions-of-your-data experience really requires that you have a data model underneath that's prepared to answer lots of different questions. If you're just building lots of one-off dashboards and you've got someone who's willing to vibe code them, great, but that's not going to help you when you have a business user sitting down asking questions of their data. That's going to go off the rails over and over and over again. So in a weird way, that chat with data, a natural language interface that allows you to ask questions of your data, I think is going to be the reason why you need Power BI, even more so than the dashboards.

(01:18:11): And all the different dashboard use cases and all of the disciplined high-end scaling and all that kind of stuff isn't going to be a clear enough message, except to the enterprises. But boy, will that chat with data sell it. Chat with data: if you have a good Power BI model, a good data model underneath, you can have that; otherwise you can't. That's going to be the reason why the less sophisticated, smaller companies, I think, still need to get on board. What do you think about that?

Rui Romano (01:18:41): I didn't even focus that much on the semantic model, which, again, I'm biased, I do think is the secret sauce inside of the platform. I think it's insanely powerful. As you said, a lot of customers don't even explore it in the way they should and don't get the biggest value out of it. It's insanely fast. The way a measure works: you define a measure and it just adapts to whatever the context is. And those are the things where, when you go into, let's say, the vibe coding of a dashboard, yes, the dashboard will work, but as you said, you will end up duplicating that code, and then you'll have variations of that dashboard. In a tool where you have a semantic model, you don't duplicate the semantic model. You probably have multiple variations of the reports with different sets of context and different sets of filters, and things will just work.

(01:19:30): And you have the centralized logic that, by the way, can even improve a lot more, because even today, if you have multiple semantic models and you want to share the definitions across semantic models, that's not easy. And this is also something that I hope we can improve a lot in the future, like composability and inheritance of these semantic models. It is the secret sauce that we have, and it's very powerful. Like you said, that's not going to be easy. So it's easy for you to get something [inaudible 01:19:58], but then you will run into the same set of problems that were already solved many years ago, not even with Power BI, before Power BI.

(01:20:08): And semantic models, in my opinion, there is no better data source that you can feed to AI to answer questions about the business, because it includes not only the data, and the data in a way that is really fast to query, but it includes the semantics, the business semantics, the relationships between the entities, and an agent can just get that context in an instant, understand how to query it and how to answer business questions about that data.
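To make that concrete, here is a drastically simplified, hand-written sketch of the kind of context a semantic model can hand an agent up front. The shape of this dictionary is invented for illustration; real models expose far richer metadata, for example through TMDL definitions or the XMLA endpoint, but the principle is the same: the agent receives tables, relationships, and measure definitions as text instead of reverse-engineering raw data:

```python
import json

# An invented, minimal stand-in for semantic model metadata.
# Table, column, and measure names are illustrative only.
semantic_model = {
    "tables": {
        "Sales": {"columns": ["OrderDate", "CustomerKey", "Amount"]},
        "Customer": {"columns": ["CustomerKey", "Region", "Segment"]},
    },
    "relationships": [
        {
            "from": "Sales.CustomerKey",
            "to": "Customer.CustomerKey",
            "cardinality": "many-to-one",
        }
    ],
    "measures": {
        "Total Sales": {
            "expression": "SUM(Sales[Amount])",
            "description": "Gross sales amount across all channels",
        }
    },
}

# Serialized, this becomes prompt context: the agent learns the entities,
# how they join, and the approved business logic in one shot, rather than
# guessing relationships and re-deriving aggregations from raw tables.
context_for_agent = json.dumps(semantic_model, indent=2)
```

The design point is that "Total Sales" is defined once, with a description, so an agent answering "sales by region" composes existing, vetted logic instead of inventing its own arithmetic.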

Justin Mannhardt (01:20:40): Think about the efficiency of an AI system with data. You could go out and you could see, "Oh, I have this AI and it's just talking to my data warehouse." And it's like, okay, well, what is it really doing? It's constantly trying to evaluate what's in there, how it should relate, how do I write this query? And the semantic model just simplifies AI's job, right? Especially when you combine it with things like the MCP tools. It's like, "Oh, I don't need to figure out how to invent a hammer. I've got a hammer." I think that's why this is so important, because it's a great pairing.

Rob Collie (01:21:18): Yeah. There's two kinds of bias. Let's be clear. There's the kind of bias we all want to avoid, which is, whoever I'm working for at the time, whatever team I've temporarily chosen to align myself with, I say that team's the best. That's the bias that we want to avoid. But then there's the bias that comes from betting with your career as well. So for example, I'm asking you this question, why do we still need Power BI, as if I am concerned about it. I'm not. I was on Brian Julius's and Sam McKay's podcast. We were talking about this exact same thing, and I was the one person on the show that was like, "No, no, no. Still need semantic models." I was the adversary in that conversation. So I'm very much on side. And this is something I believe. I mean, I'm not just saying this because I'm invested in it, I'm saying it because I remain invested in it.

(01:22:13): I see that there's a lot of value there that's going to persist. And if I didn't, we'd already be doing something else. At P3, we're already using plenty of non-Microsoft stuff in our AI stack. The things that Microsoft is getting around to but hasn't gotten there yet, we're going and solving those problems in the meantime with other tools, and it's totally fine. We are not such Microsoft Patriots that we have to be 100% Microsoft all the time. But one thing we have not strayed away from is the semantic model. We see its value. So when you believe something and you're already in on it, and then you tell the world that you're in on it, that's the right kind of bias. I don't think any of us should be ashamed about that. If you really didn't believe that semantic models were the future, you'd already be evaluating other career options.

(01:23:03): You wouldn't be super excited and doubling down on all this stuff in the way that you are. And the same with us. There's a kind of talking your own book that is appropriate, and I think this is one of them.

Rui Romano (01:23:16): Honestly, I think it's great. I'm really excited about these times. I'm really excited when I see someone else showing a blog: "Hey, I'm taking this semantic model. I'm taking this data and feeding it to AI. Look, I have an amazing dashboard," because this is also how we need to evolve as a tool and a platform. And like I'm saying, I'm absolutely sure that we will get to a point where it will be exactly the same experience. I want this, I get something, and I can just keep iterating on it. And even better, we can also have the UI. So you have both things. You have the prompt experience, the vibe experience, but in the end, if you want to make a small change to a title or a line chart, you don't need to go to the vibe again. You can just click and make the change.

Rob Collie (01:24:00): Yeah.

Rui Romano (01:24:00): That's unique. Now, the semantic model: if you just feed a set of tables to AI and ask questions, versus feeding it the semantic model, it's the difference between AI having a really high chance of hallucinating and AI actually answering the question. When you give it a semantic model, it already has the logic, the aggregations built in, the business definitions, the relationships; everything is already there. And this is not something new. It has existed for many years and has proved its value many, many times. The output quality that you get from AI, plus the fact that AI can speak that language and knows how to query semantic models, makes a huge difference compared to just feeding it some data and letting AI go crazy at it.

Rob Collie (01:24:47): LLMs are just horribly bad at math. You can't even trust it. I've built a solution recently that is incredibly impressive. It's just absolutely blown my hair back in terms of what it's able to do. But then when I ask it to write a narrative based on the data that I've fed it, it'll do things like: there's a number 0.08, and then in the text it says, "And this exceeded that by only eight-tenths," writing it out as eight-tenths in English when it was eight-hundredths. You can't give the LLM any room to breathe when it comes to the interpretation of numbers or even just arithmetic. It's got to be anchored in good old-fashioned CPU code. And as we've been talking about here, a semantic model is the most flexible version thereof.
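One common hedge against exactly this failure mode can be sketched as a minimal Python pattern (the function and field names here are illustrative, not from any particular library): compute and format every figure deterministically in code, then have the LLM write prose around already-rendered strings it cannot reinterpret.

```python
# Compute and pre-format all figures in ordinary code, so the narrative
# model only arranges words around fixed strings. Names are illustrative.
def render_figures(actual: float, target: float) -> dict:
    delta = actual - target
    return {
        "actual": f"{actual:.2f}",
        "target": f"{target:.2f}",
        "delta": f"{delta:+.2f}",
        # Fully spelled-out comparison, so "0.08" can never drift
        # into "eight-tenths" during narration.
        "delta_words": f"{abs(delta):.2f} ({'above' if delta >= 0 else 'below'} target)",
    }

figures = render_figures(actual=1.08, target=1.00)
# A prompt would then embed figures["delta_words"] verbatim and instruct
# the model to quote it unchanged, rather than restate the number itself.
```

The arithmetic and the rendering both happen in deterministic CPU code; the model's only job is narrative, which is the one thing it is reliably good at.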

Justin Mannhardt (01:25:37): I have this thing in my mind, I know we're running out of time, but I wanted to just, even just as a thing to put into Justin's head. Let's call it the mid-market CEO test. How do we explain to a mid-market CEO? Not even a CFO. How do we explain to a mid-market CEO why they still need Power BI in this world?

Rob Collie (01:25:54): Imagine this. Well, because the reuse and the blah, blah, blah and the scaling and then this and then that. And you know what's going to happen? "Shut up, nerd."

(01:26:04): That's the answer you're going to get, right? Even though we're right, you're going to be told, "Shut up, nerd," because look, I'm still getting dashboards right? I'm getting my answers. I'm not switching to your system, blah, blah, blah. Okay. How about chat with data where you can ask any question you want and you get answers you can trust?

(01:26:23): Okay. Mid-market CEO goes, "Now you have my attention." And I think it'll behoove all of us to constantly remember how close to these problems we are, and how much we understand things that we can't expect the general population to understand, and not to try to push that rock up the hill too hard. We need to keep coming back to the visceral. And again, enterprise, like the Fortune 500, gets it. They get those subtleties.

Justin Mannhardt (01:26:50): Maybe.

Rob Collie (01:26:51): Sometimes. Okay, I might be a little bit generous. I just don't think that chat with data is going to work in the absence of something equivalent to, anyway, a semantic model. And I've been on record on this multiple times. And in the space of semantic models, no one has anything resembling the track record, head start, history, and investment that Microsoft has. Everyone else is going to be running to catch up. So yeah, I'm a truther, as they say. I'm not neutral on this.

Rui Romano (01:27:22): At the same time, that does not mean that we can stop innovating in this space. For example, another thing that we shipped at Ignite, the ontology and Fabric IQ, bringing those together, feeding that knowledge that, again, can be consumed by AI and by the users. It will just build on top of the history and the experience that we have building these semantic models. And another thing that will happen is, as you said, and I don't have the telemetry to back me up, but I'm pretty sure it should be like 70/30: 70% of the semantic models are not really semantic models. People just put data in there, they want to build a dashboard, and they save that into the service, like multiple tables completely disconnected. But that's also something that AI can help with going forward. We can really just say, "Hey, this is not how it should be. Let me shape that for you. Let me help you make this semantic model in a way that will be absolutely perfect when you expose it to AI, when you want to chat with data."

(01:28:36): If you are using some weird names: "Okay, let me help you put some synonyms in here. Let me suggest the right names and the right context so AI can do a much better job." By the way, there is this idea, because Microsoft is also saying they have this "prep data for AI," that you need to prep your semantic model for it to work with AI, which I don't agree with, because if you do have a good semantic model, if you are following the best practices, good names, good descriptions, it will just work.

(01:29:06): You don't need to do anything else. If you want to make it more fine-tuned, then yeah, you go and you add some instructions. But if you do have a good semantic model, it should just work. And AI can also help fine-tune those "prep data for AI" instructions and synonyms automatically for you, because it will know what will work better when things are not already in a shape that works the way it should. But yeah, I fully agree with you that in the end it will come down to the basics. In the end, what people care about is this: they want to ask a question and get the answer as soon as possible, they want to get the answer on their smartphone, maybe in the future in their glasses, whatever, and be able to get it as efficiently, as easily, and as flexibly as possible.

(01:29:57): And for that, the semantic models are in good shape. That does not mean that we can just sit down and assume that we don't need to do anything. We need to continue evolving, not only the semantic models, but also the things that complement the semantic model, like the ontology stuff. But yeah, exciting.

Rob Collie (01:30:16): I'm very familiar with the Microsoft culture and it will not take this challenge complacently at all. Microsoft is not wired for complacency. I know that you aren't going to be sitting still. Rui, thank you so much for spending this time with us. Big chunk out of your day. We're honored that you spent the time.

Rui Romano (01:30:34): Oh, my pleasure. And keep doing this. I love your show. It's one of my favorites in the car when I'm driving and I love the engaging conversations. It's a great show, so keep doing it.

Justin Mannhardt (01:30:45): Keep innovating and we'll have to have you back on soon. Yeah, man, really appreciate the time.

Check out other popular episodes

Get in touch with a P3 team member


Subscribe on your favorite platform.