Intro

How did I come to PowerPivot and the related tools? Desperation in trying to fix a process problem. How did I get to the point where I can return a reasonable answer to a data query using the Power tools? Much frustration, a total Google search time measured in weeks if not months, Rob Collie’s book and the P3 Adaptive website, and a dedicated IT department to help me determine whether my reporting discrepancies were a problem in my DAX formulas or an underlying gap in the data (there were plenty of both). I found it tantalizingly challenging to learn – tantalizing because I always felt like I was one DAX command away from getting what I needed but unable to figure out what that one command was. If there is one thing I would urge upon anyone starting their PowerPivot journey, it is perseverance.

The Objective

Like any large organization’s supply chain mission, our department’s primary objective is to ensure a product is always available when needed at the point of use, at the right price and quality. Our project was to implement a 2-bin Kanban solution at four locations within our system. This effort had been hugely successful at three of the four hospitals, but the largest location was encountering difficulties in implementation. Stockouts (lack of product at the needed time) were at an unacceptable level – the Kanban implementation had worked at our other locations, so why was it failing to deliver the expected results at our flagship location? Sheer volume didn’t seem to be the culprit, as we could get the quantities of product from the dock to the points of use. The issue was that the product just wasn’t getting to the dock in the right quantity or wasn’t getting there at all.

This led to our first realization about our MMIS (Material Management Information System) – while it was very good at telling us what had happened, it wasn’t set up to tell us what should have happened but didn’t. Like many OLTP systems, it is optimized for transactional throughput with a secondary focus on analysis. If the data isn’t in the system in the first place, or it’s stored elsewhere requiring a great deal of processing power to retrieve, then finding the gaps in the necessary events is difficult.

I had used Excel for 20 years and didn’t know about PowerPivot until I was faced with trying to analyze these failures in 130 storerooms stocking 40,000+ SKUs. Learning what measures were, and creating the right measures using DAX, was a giant step, and a lengthy one. SUM() and DIVIDE() could be managed pretty quickly, but when we started moving to histograms things got complicated. The P3 Adaptive blog (and its willingness to link to other articles published by the community) was a tremendous help in understanding how to build such measures.
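
For context, the usual pattern for a histogram measure pairs a disconnected “bucket” table with a count of items whose order frequency falls inside each bucket. Here is a minimal sketch of that pattern – the Buckets, Items, and Orders tables and their columns are illustrative names, not our actual schema, and it assumes Orders is related to an Items lookup table on SKU:

SKUs in Order Count Bucket :=
VAR BucketMin = MIN ( Buckets[MinOrders] )
VAR BucketMax = MAX ( Buckets[MaxOrders] )
RETURN
    COUNTROWS (
        FILTER (
            VALUES ( Items[SKU] ),
            VAR OrderCount = CALCULATE ( COUNTROWS ( Orders ) )   -- orders for this SKU in the current period
            RETURN OrderCount >= BucketMin && OrderCount <= BucketMax
        )
    )

With the Buckets table on the chart axis (and no relationship back to the fact tables), each bar counts the SKUs whose order frequency lands in that range.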

[Figure: order-count histogram, Business Unit view]

Why start with histograms? They’re helpful for showing variation across large datasets on multiple criteria, and they revealed the gaps in what our MMIS could provide. What significant value is missing from the above histogram? (cue theme music from Jeopardy…) There is no zero value on the x-axis! We can see how many times something on the shelf was ordered if it was ordered at least once during the period – but how many items weren’t ordered at all? Those are the items we need to examine: should they have been ordered but weren’t? And if they’re legitimately not wanted, then get them off the shelf and make more room for the fast-moving items we were running out of.
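
That missing zero bucket is itself a measure worth building. A short sketch, again with illustrative table names and the same assumed Items-to-Orders relationship, counts the SKUs on the shelf that show no orders at all in the selected period:

SKUs Never Ordered :=
COUNTROWS (
    FILTER (
        VALUES ( Items[SKU] ),
        CALCULATE ( COUNTROWS ( Orders ) ) = 0   -- COUNTROWS of no rows returns BLANK(), and BLANK() = 0 evaluates TRUE
    )
)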

DAX and PowerPivot started highlighting the data gaps, and this is where the partnership with our IT department took off. Our storeroom (aka cart) data is not historical – the MMIS can tell you what 378 items are in storeroom 5A today and what their reorder (par) quantities are, but it can’t tell you what those items and quantities were a week ago or a month ago. So, on a daily basis, we had to take a snapshot of the storeroom inventory and bump it against the order history. With 40,000+ items managed within the location, that file ballooned quickly, so we limited ourselves to a 90-day history.
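
A daily snapshot table behaves differently from a transaction table: quantities can be summed across SKUs or storerooms, but never across days. One common way to handle that is a semi-additive measure that keeps only the latest snapshot date visible in the filter context – a sketch, with Snapshot, ParQty, and SnapshotDate as placeholder names:

Par Qty (Latest Snapshot) :=
CALCULATE (
    SUM ( Snapshot[ParQty] ),               -- additive across SKUs and storerooms
    LASTDATE ( Snapshot[SnapshotDate] )     -- but only the most recent day in the current selection
)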

Once we had the core transactional data, including the non-orders, we wanted to slice ‘n’ dice the data in a variety of ways, so we built the lookup tables and linked them together. There’s no need to go into detail on the next diagram – your business probably has a similar complexity of relationships and acronyms. We used P3 Adaptive hourly consulting at various points to help us with the measures, along with continual refinement of the queries generating the fact tables from our PeopleSoft MMIS. We also had to work out the timing of the data sources – for example, our Advance Ship Notice (ASN) data from the previous day’s orders was processed at 6 am in the MMIS, so we had to ensure our corresponding Power Query jobs ran after the ASN information was received.

[Diagram: MMIS data model]

Without going into mind-numbing detail about the various measures we created and studied, we can say that several features of PowerPivot were vital in that root-cause analysis. Among them are:

  • Immediate update of the tables and graphs for each data refresh
  • Immediate answers when we sliced and filtered the data for various factors
  • Graphical representation of events for rapid comparison and evaluation

Rob’s posts on allowing multiple measure selections with disconnected tables let us build this next pivot table combined with sparklines. The measure shown takes the amount ordered on a particular day and subtracts the amount received within the two subsequent business days, leaving anything either not ordered or completely filled as BLANK(). This report highlights what I consider one of the greatest strengths of PowerPivot – each mark represents the result of a sophisticated measure, yet the marks are embedded in a common Excel trope for easy comparison by lay users.
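
A hedged reconstruction of that idea looks like the following – not the exact production measure, with illustrative Orders, Receipts, and Calendar tables, and approximating “two business days” with two calendar days to keep the sketch short:

Unfilled Qty :=
VAR OrderedQty = SUM ( Orders[Qty] )                 -- ordered on the day in the current filter context
VAR ReceivedQty =
    CALCULATE (
        SUM ( Receipts[Qty] ),
        DATESBETWEEN (
            'Calendar'[Date],
            MIN ( 'Calendar'[Date] ),
            MAX ( 'Calendar'[Date] ) + 2             -- received on that day or the two days after
        )
    )
VAR Shortfall = OrderedQty - ReceivedQty
RETURN
    IF ( ISBLANK ( OrderedQty ) || Shortfall <= 0, BLANK (), Shortfall )   -- blank when not ordered or fully filled

Because the measure returns BLANK() for the “nothing to see here” cases, only the problem cells produce a mark in the sparkline grid.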

The vertical red marks highlighted by the blue box indicate a day when the EDI transmissions from our vendor were not sent. (The actual goods were shipped and received, but electronically we were blind.) We called the vendor as soon as the report was run to determine the status and verified their confirmation against our dock receipts.

The horizontal marks highlighted by the orange box represent a problem with shipments for the 36 French catheter – despite repeated orders the item was not coming in. This was a problem requiring the immediate substitution of the item until the vendor was able to resume shipments.

[Report: History Measure pivot table with sparklines]

The report is an immediate visual check for a stock person to know if an order has not been filled. We considered using a higher-level dashboard with a traffic-light metaphor so problems with any of 400+ items in larger storerooms would be highlighted on a single section of the screen, but the importance of a storeroom manager knowing which items were not received and the need to substitute for them at once led us to keep this level of detail.

Having this aggregated data meant that we could match stockout issues to external (product not shipping) vs. internal (correct orders not being made) causes. Once the transactional data was trustworthy as well as complete, we could continue developing more complex measures. We were now able to correctly differentiate a stockout (where something should have been there but wasn’t) from a lack of an order because an item had a sufficient par level, because it was the wrong day of the week for an order, or for other valid reasons. We could now automatically identify which events were “bad” ordering decisions and thus move to target the root cause. Power Pivot helped us with that too, but that’s another story.
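
One way to express that kind of classification is an iterator over the daily snapshot that looks for rows where an order was expected but none exists. The sketch below is a much-simplified, hypothetical version – Snapshot, Orders, and every column name are assumptions, and the real logic had more valid-reason exclusions than these:

Bad Non-Order Count :=
SUMX (
    Snapshot,
    VAR CurrentSKU  = Snapshot[SKU]
    VAR CurrentDate = Snapshot[SnapshotDate]
    VAR OrdersThatDay =
        COUNTROWS (
            FILTER (
                ALL ( Orders ),
                Orders[SKU] = CurrentSKU
                    && Orders[OrderDate] = CurrentDate
            )
        )
    RETURN
        IF (
            Snapshot[OnHandQty] < Snapshot[ParQty]        -- below par, so an order was expected
                && Snapshot[IsOrderDay] = TRUE ()         -- and the storeroom was scheduled to order that day
                && OrdersThatDay = 0,                     -- yet no order was placed
            1
        )
)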

Power Pivot (or any similar tool) didn’t/couldn’t give us an answer to our issues out of the box because at the start we didn’t have the proper data. What it could do was to lead us to the data organization we needed and then demonstrate where our data black holes were. Once those holes were fixed, we could then continually improve our reporting until we understood what was happening and adjust the people/process/technology issues. Without Power Pivot and the help provided by P3 Adaptive and the books that everyone should have on their shelf (Rob Collie, M is for Data Monkey, Definitive Guide to DAX, …) we would have foundered on the rocks.

Despite that help – and maybe it’s just me – here’s something very hard for me to understand: why is learning the tool so difficult? (Picture Will Smith in Men In Black yelling, “hey old guys!”) I have used spreadsheets for longer than most of the P3 Adaptive readers have been alive (VisiCalc on a VAX 1170 anyone?), and I have picked up what I needed as I went along, but always in service of a business objective rather than to become an Excel guru per se. Excel and the Power tools are not my day job, though I rarely do not have at least one spreadsheet open on my monitor. So why did it take me so much longer to get where I needed to go? To be sure, I was tackling a more sophisticated problem… but it’s Excel!

The Challenges

P3 Adaptive has had several posts about how approaching the Power tools from an Excel perspective can lead to problems, but I think they only cover part of the mystery. Some of the difficulty may be in changing the way you think about what the purpose of a cell should be. In a traditional spreadsheet, a cell (say $B$3) has a particular meaning based on the formulas within it. Specific information is found in a particular cell, and that cell should always have that same information (or the same calculation when the source materials change). With the Power tools, the connection between any given cell and a specific piece of information is no longer relevant. The focus switches from a single number to a set of numbers. Your desired result might be a set containing a single number, but the formula that produces it is no longer visible or traceable within the cells of the spreadsheet.
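
To make the contrast concrete: in Power Pivot a single measure definition serves every cell of a pivot table, and each cell simply supplies a different filter context. An illustrative example (the Orders table and Qty column are placeholders):

-- One formula, many answers: this measure returns a different value in each pivot
-- cell because each cell filters Orders differently (by storeroom, SKU, month, ...),
-- not because any cell contains its own copy of the formula.
Total Ordered Qty := SUM ( Orders[Qty] )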

The in-house help functions for DAX and PowerPivot are terrible as well. For a regular Excel function, the help system gives you an example that can be copied to your spreadsheet and built up piece by piece until it duplicates the documented result. For DAX, the examples are given in relation to the AdventureWorks database (not included), and proper importation and manipulation of the data model is needed before you can hope to start duplicating the help examples. The database wasn’t built for Power Pivot anyway and is more suited to creating flexible SQL joins than hardcoded one-to-many relationships. And just sayin’, could we have datasets with dates from the current decade at least?

Perhaps most disruptive is the mental switch to filtering – analysis is subtractive rather than additive. Instead of carefully building a report one data point at a time, you are now chipping away at large chunks of data, like Michelangelo on a block of marble, until you have retained only the part that you need. (No, I’m not Michelangelo with my data, more like the guy on the road crew with a jackhammer.) The various filters, and the sequence in which they must be applied so you don’t blast away the part you intended to be the sculpted hand of your statue, are very confusing to a newcomer. The Italians (yet another Renaissance motif) are so far advanced in their understanding of filter applications that even their basic postings on ALL() are intimidating. Microsoft does have some rudimentary datasets for HR, procurement, and a few other common areas, but the examples are pretty basic and quickly fall short of the issues a real user will encounter.
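
A small taste of that subtractive style, with illustrative table names: ALL() strips away the filters a pivot cell would otherwise apply, so the denominator below covers everything while the numerator is just the current slice.

Pct of All Orders :=
DIVIDE (
    COUNTROWS ( Orders ),                                  -- orders in the current slice
    CALCULATE ( COUNTROWS ( Orders ), ALL ( Orders ) )     -- ALL() clears the filters restricting the denominator
)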

I also didn’t talk much about Power BI. First, it wasn’t available when we were resolving our needs. Nowadays it’s very much the flavor of the moment, and it’s truly a great tool, especially if you don’t already have a Tableau or Qlik license. But I’m not ready to stop and build a dashboard. What I need to know changes as fast as my understanding of the real-world issues I’m trying to answer. There’s always another data source to link into my model, internal or external; another slice of the data to take, looking for a correlation between factors; another test against the real world. The measures and graphs I started with are not the ones I use today. I keep records of them, footprints in the sand for others to follow as they come up to speed on the questions I’ve already answered, but I need those followers to branch out and ask their questions in different directions from the path I’ve followed. There’s too much to know.

As Power Pivot allows us to become more accurate and specific in reporting – not coincidentally expanding our source data to millions of rows – we also have to be sure that granularity is accurately supported in the underlying tables. When executive reporting was highly aggregated, the inputs to those reports could also be aggregated. Now I need to break apart the data I already have into more atomic units to support the finer level of reporting we can provide. I can’t hide my summary values as averages of averages, because my customers can drill to the detail.
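
A small illustration of the difference (illustrative tables again): a ratio rebuilt from base quantities stays correct at whatever level a user drills to, while an average of pre-aggregated percentages does not.

-- Recomputed from the underlying quantities in every filter context,
-- so it remains correct at any drill level; never average stored percentages.
Fill Rate := DIVIDE ( SUM ( Receipts[Qty] ), SUM ( Orders[Qty] ) )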

Conclusion

Cole Nussbaumer Knaflic notes that data analysis has both exploration and explanation – but as tools like Power Pivot allow us to explore at finer levels of detail, we must remember that data validation remains a vital precursor to exploration, especially for aggregations we have uncritically used in the past. We have to look with fresh eyes at the data and queries we’ve used for years and be ready to rebuild from the ground up… Power Pivot is the red pill of data. (Does that make Rob Morpheus?)
