Speak to a consultant

Let’s solve your data problems.

Whether you have an immediate need or just want to know more about how we can help, we’re happy to chat.


OUR SPECIALTIES

We combine speed and expertise to get you further, faster.

Data Visualization
Governance and Security
Data Strategy
Advanced Analytics
Data Modeling
Custom Reporting

How we do it

Strategy


We’ll assess your current capabilities and map out the right plan of attack.

Implementation


Fingers hit keyboards, and you’ll see your data transformed into valuable insights.

Training


From fundamentals to expertise, we’ll show you it all so you can eventually do it without us.

Our Approach

Their approach: Infrastructure Forward

Traditional consultants overhaul your entire system first, delaying insights and costing you time and resources. By the time you’re set to go, the goalposts have moved.

Our approach: Impact Forward

P3 Adaptive will help you start winning now. We leverage Power BI and your current systems to quickly deliver the key insights, reducing costs and risk.

Mike Miskell

Director of Process Improvement, Kaman Industrial Technologies

“P3 Adaptive has empowered us, rather than making us dependent on them. Just the right amount of education, direction, and smart help, whenever we need it. The results have been transformational.”

Interested in getting started?

Custom solutions built to propel your business forward, and specialized training designed to unlock the potential of your Power BI ecosystem and the team using it.

FAQ

OneSecurity is still on the roadmap and not yet in preview, but it will enable us to secure data once, in OneLake, and that security will flow to every place the data is accessed.

Before Microsoft Fabric, if we defined security in something like a SQL Database, we would need to effectively redefine that same security into concepts like workspace access, dataset permissions, row-level security, and object-level security. All of these have different capabilities and constraints, making end-to-end security not just complex but sometimes not possible. OneSecurity will enable us to define security once, in OneLake, which will then propagate to all workloads in Microsoft Fabric. Pretty cool. 

 

Overall, we advise “don’t panic, but fortune favors the prepared.” At a high level, probably the biggest “risk” is just “lost upside” as opposed to “potential downside” – so not the traditional meaning of the word “risk.”  

That said, Microsoft is clearly all-in on The Fabric Way, so this new paradigm is also going to be hoovering up most of their engineering resources. We don’t expect “the old way” to break, of course, because they’re not irresponsible, but lots of new stuff will probably only work on the new paradigm.  

We’ve seen this before from Microsoft. Power BI mobile device support, for instance, was only made available in the cloud (back when 95% of adoption was on-prem servers). And SSAS Tabular got all kinds of cool features that SSAS MD (the forerunner to Power BI) technically could have had – but didn’t.  

So, there IS a risk – actually more of a certainty – of something super compelling becoming available, your stakeholders demanding it, and your infrastructure not being ready to support it because it only works against the OneLake layer. Data Activator is already one example of this, and OneSecurity is on the horizon, but there will be more. Many more.

So in short, the clearest risk is just lost upside, but “getting caught behind the stakeholder demand curve because Microsoft is focusing their new development work on OneLake” is a very real, traditional-style risk. 

This is basically the same story as what we’re already used to: Power BI gateways with dataflows. 

This is a great question, with a few possibilities. We think it is important to consider the shortest possible path to making Fabric available to your teams. In this scenario, one option is to update the target of the existing Data Factory processes to be OneLake. However, that may not be practical if your pipelines are invoking things like stored procedures or other code.

Another option is to use CETAS (CREATE EXTERNAL TABLE AS SELECT), which allows us to create a parquet representation of tables in the Azure SQL Database. This is the same storage format that OneLake is based on, so we can then use shortcuts to make that data available within Fabric for our users. The implementation sounds rather technical, so we’ll spare the details here, but it is very low friction to put in place and certainly MUCH faster than rebuilding the ELT/ETL from scratch.

Yes, that’s the concept here. Once data is available in Fabric, no replication or movement of data is required. Since your data originates in Azure databases, we can make it available within Fabric by ingesting the data into Fabric, creating shortcuts to Azure storage accounts, or using techniques like CETAS (CREATE EXTERNAL TABLE AS SELECT).

For now, Fabric’s disruption is centered around OneLake and its many benefits. Copilot for Fabric (later this year) will create the types of AI-driven developer experiences you’re referring to.

Not currently in Public Preview – but we will eventually get a full desktop and developer-supported experience for OneLake datasets.

There are LOTS of options for us to make data available to Fabric without starting over. We should talk 😊

Fabric licenses will be purchased similarly to Power BI licenses and Power BI Premium Capacities. 

That is correct. The AI/ML capabilities leverage Fabric Notebooks, which can be written in PySpark (Python), Spark (Scala), SparkSQL, or SparkR. You may also leverage any libraries you’d like within these languages, managing them at the workspace level, which means they are automatically imported and available to all notebooks created in that workspace!

From Rob – “Replace” is a strong word that I personally would hesitate to use, but maybe Justin would disagree. Going one level deeper on my own thoughts: to the extent that data lakes were already replacing data warehouses, I think that’s also true for Fabric/OneLake.

The data lake “advantage” over SQL-based warehouses was/is essentially this: you don’t have to think as hard about schema and design with a data lake. Incoming data can be “dumped” into a data lake with a lot less thought, in its raw form (or close to it), without fear of losing information, and without fear that the storage format will preclude you from achieving goals down the line.

I think of it as “turning raw data into rectangles” (aka tabular schemas). Data often doesn’t “arrive” in rectangular form (ex: XML, JSON, folders of interrelated CSVs). SQL warehousing requires you to design all of the rectangular shapes UP FRONT, and that’s both a dangerous and expensive process. The translation itself (from “curly” things like XML/JSON/CSV) risks “losing” important information that was present in the raw form. So you spend a tremendous amount of time on that process – and still get it wrong. If we could skip all that, it would be great.

You always EVENTUALLY need rectangles – analysis by its very nature operates on the things that are regular about your data – but delaying that “rectangularization” until later turns out to be super smart, and OneLake gives you exactly that: dump and store in raw form, then extract whatever rectangles you need later.

But if you already have data in SQL format, there’s not much to gain from data lake storage – except that if you load it into OneLake, it’s now “on the highway” that exposes your SQL-originated data to basically every single analytical service MS has to offer.
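To make that “rectangularization” idea concrete, here is a minimal, Fabric-agnostic sketch in plain Python. The nested order record and every field name in it are invented for illustration; the point is only that the raw, “curly” form is preserved as-is and the flat, tabular shape is extracted later, on demand:

```python
import json

# Raw, "curly" data as it might arrive: one order with nested line items.
# (Invented example record; all field names are illustrative only.)
raw = json.loads("""
{
  "order_id": 1001,
  "customer": {"id": "C-7", "name": "Acme"},
  "lines": [
    {"sku": "A1", "qty": 2, "price": 9.5},
    {"sku": "B2", "qty": 1, "price": 30.0}
  ]
}
""")

def to_rectangle(order):
    """Flatten one nested order into tabular rows ("rectangularization").

    Each line item becomes one row; parent-level fields are repeated
    onto every row so the result is a regular, rectangular table.
    """
    return [
        {
            "order_id": order["order_id"],
            "customer_id": order["customer"]["id"],
            "customer_name": order["customer"]["name"],
            "sku": line["sku"],
            "qty": line["qty"],
            "price": line["price"],
        }
        for line in order["lines"]
    ]

rows = to_rectangle(raw)
for row in rows:
    print(row)
```

The raw JSON stays untouched in storage; different analyses can later extract different “rectangles” (orders, customers, line items) from the same source without having committed to any one schema up front.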
 

From Justin – My answer here is definitely yes, though not discounting Rob’s comments that an actual EDW represents a rather mature state that may not be practical or necessary for every organization. There are two different SQL artifacts in Fabric: the SQL Endpoint on the Lakehouse and the SQL Data Warehouse. The former is essentially read-only, with its schema generated automatically from the Lakehouse. The latter is fully ACID compliant, with full transaction support and full DDL and DML functionality; it is built on the Synapse infrastructure and can most certainly serve as an EDW. The storage layer underneath is still the open Delta format, so you retain full flexibility toward other Fabric artifacts, as well as external platforms.