Excel in the News for the Wrong Reasons
A number of friends and readers have been sending me links lately about some high-profile mistakes made with Excel. I particularly like the one above where Excel is shown as a massive meteorite smashing into the Earth.
Since I am sure some of you are getting sent the same links, sometimes in a not-so-subtle, snarky way, I thought it would be good to offer my opinion – maybe it will give you something constructive to say in response.
1) There’s a BIG Difference Between Analysis/Reporting vs. Simulation
Simulation-Style Workbooks Operate Like This: Small Amount of Data, Lots of Formulas
It’s funny: Excel is used for a BROAD array of purposes, but we tend to lump them all together as “Excel.” To be honest, even I am guilty of this.
One of the articles in question (the one about the London Whale) relates how megabank JP Morgan lost a bazillion dollars on a bad bet. It turns out that some of the finance whizzes at JPM divided by a sum where they should have divided by an average in one part of their model. That mistake led them to underestimate the risks involved with placing billion-dollar bets in the markets.
The purpose of that “whale” workbook was to evaluate profit/loss risks associated with bets that they COULD make. It was NOT built to measure whether or not their existing bets were working.
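To see why that particular slip is so damaging, here’s the arithmetic, hedged to the articles’ description of the bug. For two numbers a and b, dividing by their sum instead of their average silently halves the result:

```latex
% Dividing by the sum of two rates instead of their average:
\frac{x}{a+b} \;=\; \frac{1}{2}\cdot\frac{x}{\frac{a+b}{2}}
```

So every quantity computed that way came out at half its intended value – which, per the articles, muted the measured volatility (and therefore the reported risk) by roughly a factor of two.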
And that is a crucial distinction. Here, let’s “redo” that same diagram from above, this time for reporting and analysis:
Reporting and Analysis Workbooks Operate Like This: “Large” Amounts of Data, Small Amounts of Formulas
2) The Opportunity for Mistakes is MUCH Greater in Simulations
Generally speaking, there aren’t any “moving parts” in data. The risk that a mistake occurs in the data itself, while not zero, is very small compared to the risks incurred when you write formulas.
Research has shown that no matter how good we are, we tend to make mistakes about 1% of the time – the article referenced here (which is fascinating reading) shows an error rate somewhere between 0.87% and 1.79% – per cell!
Let that sink in for a moment: roughly one mistake for every hundred actions, no matter how careful you are. And what’s an action? Basically everything is an action:
- Writing a formula
- Editing a formula
- Filling down a formula
- Copy/pasting a formula
- Repositioning input cells
- etc.
A simulation model like the “whale” model involves thousands of such steps, which means there’s a virtual certainty of multiple errors! An analysis or report, by contrast, may involve only 20 or fewer such actions.
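To put a number on “virtual certainty” – a back-of-the-envelope sketch, assuming (simplistically) that actions are independent, each with a 1% error rate – the chance of at least one error in a workbook built from n actions is:

```latex
P(\text{at least one error}) \;=\; 1 - (1 - 0.01)^{n}
\qquad
\begin{cases}
n = 20: & 1 - 0.99^{20} \approx 18\% \\
n = 1000: & 1 - 0.99^{1000} \approx 99.996\%
\end{cases}
```

Twenty actions gives you decent odds of a clean workbook. A thousand actions makes errors – plural – a near-certainty.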
3) PowerPivot Reduces the Number of Actions Dramatically Even Compared to “Traditional” Excel Reports
OK, so reporting/analysis involves far fewer formula-related steps than simulations. Great.
But PowerPivot reduces those steps even further, because when you write a PowerPivot formula (particularly a measure), that formula can be re-used all over the place without having to copy/paste or modify it. PowerPivot formulas are “portable,” in other words.
I’ve written specifically about portable formulas before, so check out the article if you haven’t seen it.
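Here’s a minimal sketch of what “portable” means in practice, using hypothetical table and column names (a Sales table with Amount and Profit columns):

```dax
-- Each measure is defined exactly ONCE in the PowerPivot model...
[Total Sales]  := SUM ( Sales[Amount] )
[Total Profit] := SUM ( Sales[Profit] )

-- ...and measures can build on other measures:
[Profit Pct]   := [Total Profit] / [Total Sales]
```

Drop [Profit Pct] onto any pivot – sliced by year, region, product, whatever – and that single formula recalculates correctly in every cell. No fill-down, no copy/paste, no per-report edits – which is exactly where the per-action error rate bites.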
So let’s recap, again graphically:
Traditional Reporting: Very Little Safe (Green), Lots of Unsafe (Red)
Much Less Red (Unsafe), Much More Green (Comparatively Safe)
PowerPivot Swaps Out the “Red” From Traditional Reporting and Replaces it With “Green” Portable Formulas
4) PowerPivot Mistakes Don’t “Hide” Like Traditional Mistakes
All right. Reports have fewer moving parts than simulations. And a PowerPivot report has still fewer moving parts than a traditional Excel report. But we’re not finished!
The “London Whale” mistake lived for many months (or maybe even years) undetected. But the average “lifetime” of a PowerPivot mistake tends to be mere seconds or minutes. Why? A few reasons:
- Measures are inherently tested, broadly and immediately, as soon as you write them. When you write a measure, PowerPivot slaps it on a pivot. And that means your formula gets applied in many different contexts all at once. You see multiple “answers,” immediately, and can instantly scan and compare them. Most formula mistakes will not survive this initial testing, because it’s actually quite rigorous.
- PowerPivot is a lot “stricter” about your assumptions. If you assume that there’s only ever one value for a variable, and sometimes there are multiple values, your measure is just going to return an error. At first this seems like a nuisance, but over time this strictness pays off (there’s a concrete sketch at the end of this section).
- If a mistake “escapes” those first two “traps,” it still will very likely be detected as you manipulate your pivot or create new ones.
- Inspecting PowerPivot formulas is inherently a more centralized and readable process than inspecting/auditing traditional Excel formulas. Compare these two:
Can You Spot Mistakes in This Sea of Formulas?
It Even Uses a Few Named Ranges! But Still… Eww.
There Are Just as Many Formulas in This Screenshot as in the Previous One, but You Can Inspect Them One at a Time, Centrally, and Every One of Them Is Both Named AND Uses Strictly Named References in Its Formula!
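And here’s the “strictness” point from the list above made concrete – a sketch using a hypothetical Orders[Currency] column. A formula that quietly assumes “there’s only one currency here” refuses to return a wrong answer when that assumption breaks:

```dax
-- Returns the currency label, but ERRORS the moment a pivot cell
-- covers more than one currency, rather than silently picking one:
[Currency Label] := VALUES ( Orders[Currency] )

-- The strict fix forces you to state the assumption explicitly:
[Currency Label (Safe)] :=
    IF ( HASONEVALUE ( Orders[Currency] ),
         VALUES ( Orders[Currency] ),
         BLANK () )
```

Annoying the first time it fires; invaluable the tenth time, when it fires in a context you never thought to test.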
5) Even in Simulations, the Benefits Still Outweigh the Risks
Last parting shot. Several of the articles about the Whale reference a post by James Kwak. In that post, James says the following:
“Excel-Negative” Articles Cited This Post by James Kwak as Evidence that Excel Sucks. I Guess They Only Read the First Three Paragraphs
Well said, James. Well said. Which is worse: the capability to build incorrect simulations, or lacking the capability to do simulations at all?
I really think that last sentence nails it, in particular.
The Articles
If you’d like to read some of the original articles, here are the links:
- https://m.motherjones.com/kevin-drum/2013/04/quote-day-excel-error-destroys-world
- https://arstechnica.com/tech-policy/2013/04/microsoft-excel-the-ruiner-of-global-economies/
- https://www.businessinsider.com/excel-partly-to-blame-for-trading-loss-2013-2
- https://baselinescenario.com/2013/02/09/the-importance-of-excel/
- https://www.strategy-at-risk.com/2009/03/03/the-risk-of-spreadsheet-errors/