Problem solving is about bridging the gap between where you are and where you want to be. Decision making is identifying the path you want to follow in bridging that gap. In that sense, decision making is part of the problem-solving process.
One of the popular tools to help managers in the decision-making process is a decision tree.
Now an issue tree isn’t a decision tree in the formal sense, but the two are closely related; in fact, an issue tree does everything a decision tree does and more, and it can provide invaluable assistance in the decision-making process. Here is how.
Think of your issue tree (almost) as a decision tree
A decision tree shows all the various ways you can answer your problem by comparing the expected value of each possible avenue.
Suppose you face a how problem, as in: “How can we increase our profitability?”. Your solution process—assuming you have already defined the problem and identified its root causes—consists of thinking about all the possible answers and organizing them in a mutually exclusive and collectively exhaustive (MECE) way by building an issue tree. In the end, you might have something that looks like the figure below.
Next, you assign a hypothesis to each branch (or group of branches, since some benefit from being considered together), you identify the analysis you need to conduct to test each hypothesis, and you spell out the data sources where you can find the information that will fuel that analysis.
A complete issue tree with hypotheses looks like the one we developed in our cable negotiation case study:
Go one step further by adding the analyses and the data sources next to the hypotheses, and you’ll have something that looks like this:
With the tree complete, you must decide which hypotheses to pursue first. You should base that decision on two key characteristics of each possibility: the probability of success of each hypothesis and the associated payoff in case of success. In other words: can you make it happen (probability of success) and do you want it to happen (associated payoff)?
If you keep these two factors independent, you can map where all the possibilities fall in a two-by-two matrix. Whatever hypothesis lands in the top-right quadrant is golden, and you should implement it first.
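The mapping can be sketched in a few lines of Python. The cutoffs (a 0.5 probability of success and a payoff of 100) are illustrative assumptions, not fixed rules; in practice you would set the thresholds for your own problem.

```python
def quadrant(probability: float, payoff: float,
             p_cutoff: float = 0.5, payoff_cutoff: float = 100) -> str:
    """Place a hypothesis in the probability-of-success vs. payoff matrix.

    The cutoff values are hypothetical; adjust them to your problem.
    """
    can_make_it_happen = probability >= p_cutoff   # horizontal axis
    want_it_to_happen = payoff >= payoff_cutoff    # vertical axis
    if can_make_it_happen and want_it_to_happen:
        return "top right: implement first"
    if want_it_to_happen:
        return "top left: attractive but hard to achieve"
    if can_make_it_happen:
        return "bottom right: feasible but low value"
    return "bottom left: drop"

print(quadrant(0.8, 300))  # → top right: implement first
```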
But you don’t have to keep these two dimensions independent. In fact, since both factors are necessary conditions (that is, non-compensatory: no matter how attractive an option is, if it isn’t feasible at all, there is no point in pursuing it), you might as well multiply one by the other. The product is the expected value of each hypothesis.
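The multiplication above can be sketched concretely. The hypotheses, probabilities, and payoffs below are hypothetical numbers chosen for illustration, not figures from the case study.

```python
def expected_value(probability: float, payoff: float) -> float:
    """Expected value = probability of success x payoff if successful."""
    return probability * payoff

# Hypothetical hypotheses: (name, probability of success, payoff in $k)
hypotheses = [
    ("Raise prices",       0.4, 500),
    ("Cut overhead costs", 0.8, 200),
    ("Enter a new market", 0.2, 900),
]

# Rank the options by expected value, highest first
ranked = sorted(hypotheses,
                key=lambda h: expected_value(h[1], h[2]),
                reverse=True)

for name, p, payoff in ranked:
    print(f"{name}: EV = {expected_value(p, payoff):.0f}")
```

Note how the ranking differs from sorting on either factor alone: the option with the highest payoff (entering a new market) does not win, because its low probability of success drags its expected value down.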
Now a typical decision tree does just that: it graphically separates all the possible answers to the key question and evaluates both the probability of success and the payoff of each option.
Still, an issue tree isn’t precisely the same thing, because it doesn’t stop at identifying potential solutions: it also spells out the necessary analyses and data sources. In that sense, it serves the decision maker as a road map for organizing and following the problem-solving process, something a typical decision tree doesn’t do. So decision trees and issue trees are different.
But an issue tree is also a perfectly acceptable basis for the decision maker to build a decision tree: all you have to do is spell out next to each hypothesis the numerical values for both the probability of success and the payoff, just as you would with a regular decision tree. In that sense, an issue tree provides the same benefit as a decision tree (identifying the expected value of each option) while also helping you manage the decision-making process. So why not forget about decision trees and work with issue trees?
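To make the annotation idea concrete, here is a minimal sketch of an issue tree stored as a nested structure, with each leaf hypothesis carrying its probability of success and payoff. The branch names and numbers are hypothetical, chosen only to mirror the profitability example above.

```python
# Hypothetical issue tree for "How can we increase our profitability?"
# Each leaf holds an assumed probability of success and payoff (in $k).
issue_tree = {
    "Increase revenue": {
        "Raise prices":       {"p": 0.4, "payoff": 500},
        "Sell more volume":   {"p": 0.5, "payoff": 300},
    },
    "Decrease costs": {
        "Cut overhead":       {"p": 0.8, "payoff": 200},
        "Renegotiate supply": {"p": 0.6, "payoff": 250},
    },
}

def best_hypothesis(node: dict, name: str = "root") -> tuple[str, float]:
    """Walk the tree; at each leaf compute EV = p * payoff, return the best (name, EV)."""
    if "p" in node:  # leaf: an annotated hypothesis
        return name, node["p"] * node["payoff"]
    return max((best_hypothesis(child, key) for key, child in node.items()),
               key=lambda pair: pair[1])

print(best_hypothesis(issue_tree))  # → ('Raise prices', 200.0)
```

Once the tree is annotated this way, the same structure answers both questions: which option has the highest expected value (the decision-tree role) and which analyses and data sources belong to each branch (the issue-tree role).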
(The figure above provides some pointers as to which metrics you might consider; keep in mind that these aren’t collectively exhaustive, as the metrics are very much problem-dependent.)
Learn more about using issue trees as decision trees:
Here is the slide deck on logic trees from my course.