“All models are wrong but some are useful.” – George Box
As an engineer, I was trained to model the world: take a complex system—say, an oil-well drillstring extending several kilometers underground—make assumptions about what matters (e.g., contact with the surrounding rock, friction with the surrounding liquid) and what doesn’t (e.g., how tightly the elements of the string are put together), keep what matters and ignore the rest, put it all into mathematical terms, and see if the resulting model accurately reflects reality. If the model is accurate, we can use it to better design and operate the string.
We cannot reproduce reality in its full complexity, so we need to simplify; a model is a simplification. A good model retains all and only the important aspects, doing away with the unimportant ones. For a drillstring, this is challenging but somewhat manageable. For other tasks—such as measuring the economic impact of climate change 200 years out—it might be far harder, if possible at all. In fact, the lack of agreement about what actually matters might undermine the whole effort.
Excellence in modeling doesn’t necessarily mean completeness
Achieving excellence in modeling doesn’t necessarily mean producing the most complete model of reality; it might mean achieving a good-enough description at a manageable cost.
The figure below, which I’ve adapted from my close collaborator Albrecht Enders, shows how the added precision coming from the completeness of a model tends asymptotically toward perfection while the cost of achieving that added precision rises exponentially. In other words, and in full compliance with the Pareto principle, don’t attempt to make your model fully complete if those last bits of completeness are (1) not crucial and (2) come at the price of a large effort. Instead, you might be better served staying in the sweet spot, where your model is complete enough and cheap enough.
You can apply this approach in many practical settings. Converting Fahrenheit to Celsius by subtracting 30 and then halving isn’t exact, but it’s close enough for me to figure out whether I need a sweater, and it’s a lot simpler to do mentally than subtracting 32 and multiplying by 5/9.
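To see how good-enough the shortcut really is, here is a quick sketch (in Python; the function names are mine) comparing it with the exact conversion over everyday temperatures:

```python
def f_to_c_exact(f):
    """Exact Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

def f_to_c_mental(f):
    """Mental shortcut: subtract 30, then halve."""
    return (f - 30) / 2

# Compare the two over a range of everyday temperatures.
for f in (30, 50, 70, 90):
    exact = f_to_c_exact(f)
    approx = f_to_c_mental(f)
    print(f"{f}F: exact {exact:5.1f}C, shortcut {approx:5.1f}C, "
          f"error {approx - exact:+.1f}C")
```

The shortcut is exact at 50°F and drifts by only a couple of degrees at the ends of the sweater-relevant range—well inside the precision the decision actually needs.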
Similarly, the London tube map doesn’t represent the distances between stations to scale, but that actually makes it clearer than if it did, and more usable when I need to figure out my way from Earl’s Court to Heathrow.
Favoring simplicity over perfection in modeling can therefore make your model more usable. It might also make it more transparent, enabling users to understand more readily the connection between assumptions and outputs.
Consider letting go of achieving MECEness
Extrapolating, this conscious effort not to be collectively exhaustive isn’t restricted to modeling. It is also present when selecting criteria to make a decision, where the decision maker is encouraged to include only those criteria that matter rather than all of them. For instance, when deciding whether to authorize a new development on the California coast, Edwards notes that commissioners identified various criteria—size of the development, conformity with land use in the vicinity, esthetics, and so on—that they deemed crucial. Other criteria, such as the name of the applicant, were raised but deliberately left out of the decision.
Another example of this conscious effort to leave things out is framing problems, where you summarize your problem in a situation–complication–question sequence (as I cover in my book, particularly on pages 35–36). There, you need to make the hard choices of including only what matters for what you define as the problem.
This focus on simplicity rather than perfection is, by the way, another example where MECEness isn’t desirable. In my penultimate post, we discussed a setting where we wanted collective exhaustiveness (CE) and ditched mutual exclusiveness (ME). Here, too, we consciously choose not to be MECE, but this time by preserving MEness and discarding CEness.
My point in summary, then, is to ensure that you have a model of at least acceptable accuracy. Past that, consider defining excellence as a great accuracy-per-unit-of-effort ratio.
Chevallier, A. (2016). Strategic Thinking in Complex Problem Solving. Oxford, UK, Oxford University Press, pp. 35–36.
Edwards, W. (1977). “How to use multiattribute utility measurement for social decisionmaking.” IEEE Transactions on Systems, Man, and Cybernetics 7(5): 326–340. (p. 328)
National Research Council (2012). Terrorism and the Electric Power Delivery System, p. 26.
Saltelli, A. and S. Funtowicz (2014). “When all models are wrong.” Issues in Science and Technology 30(2): 79–85.