This entry is part of a multi-post case study.
We’ve developed a ‘how’ tree exploring how we can close negotiations with our cable providers on time. The result was a set of hypotheses about how to proceed, so we now have several alternatives. Next, we need to decide which one(s) to pursue.
If you’re lucky, you have enough resources (time, money, infrastructure, etc.) to implement them all. Congratulations!
If you’re not that guy, you’ll have to decide which to pursue first—or which to pursue, period—and discard the others. Here’s how.
Since you won’t have the luxury of picking one hypothesis, implementing it, seeing if it works and, if not, moving to the next, you’ll need to come up with a custom analysis. To do so, think of hypothesis-testing as match-testing.
Imagine that you’re about to embark for a week-long trip to the jungle and all you have to start a fire is an old box of matches. How can you establish if these matches are reliable? Sure, the most thorough way is to try them all out beforehand but then, even if they work, that won’t be much help when it’s time to light your fire. So, instead, you can use proxies.
You can look at the wood of the match and make sure that it is still intact and dry. You can look at the combustible chemical at the tip and make sure that it is also dry; you can compress it gently between two fingers to see if it breaks down into powder or remains intact.
If the match needs to be struck on the rough surface of the box, you should check that too: is it dry? Is it still rough?
If applicable, you can review your history with that brand of matches: have they been reliable in similar conditions in the past?
Finally, if you have more matches than you intend to use during your trip, you may want to select a couple of representative ones and try them out. All these tests will give you some indication of your success rate when you play Crocodile Dundee. And, short of trying each match, they’ll provide you with the best information you can get about that particular box.
Design the tests for your ‘how’ hypotheses
Hypothesis-testing with ‘how’ trees works the same way: you probably won’t have the resources to implement all your potential solutions, so you should focus on the ones you think will be successful. Since you want to implement only successful ones, what analysis can you do to estimate each hypothesis’s chances of success?
The analysis should address two things: whether we can and whether we want to implement a given hypothesis / solution.
Whether we can implement this solution includes: do we have the people/savoir-faire, infrastructure, money, time to do it? If it requires someone else to do something, do we have the leverage to make them do it? Is it allowed (as defined by the law or by the elements in the out-of-scope section of our problem identification card)?
Whether we want to implement this solution includes: would it actually solve our problem? Does it have an attractive costs-vs.-benefits bottom line? In particular, would it create a significant problem elsewhere? Does it have a high opportunity cost; i.e., if we do it, would it prevent us from doing something more beneficial?
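The can/want screen above can be sketched as a simple pass/fail checklist. This is just an illustrative sketch: the function name, the example criteria, and the answers below are my assumptions, not part of the case study.

```python
# A minimal sketch of the can/want screen for a 'how' hypothesis.
# All names and criteria below are illustrative assumptions.

def screen_hypothesis(can_checks, want_checks):
    """Return True only if a hypothesis passes both the 'can' and 'want' tests.

    can_checks:  dict of feasibility questions -> bool (people, money, time, leverage, legality)
    want_checks: dict of attractiveness questions -> bool (solves the problem, costs vs. benefits)
    """
    can = all(can_checks.values())    # we must be able to do it...
    want = all(want_checks.values())  # ...and it must be worth doing
    return can and want

# Example: screening one branch, with hypothetical answers.
can_checks = {
    "have the people / savoir-faire": True,
    "have the time and money": True,
    "allowed (law / out-of-scope card)": True,
}
want_checks = {
    "would actually solve the problem": True,
    "attractive costs vs. benefits": False,  # creates a significant problem elsewhere
}

print(screen_hypothesis(can_checks, want_checks))  # False: fails the 'want' test
```

The point of writing it this way is that a single failed check on either side is enough to drop the hypothesis, which mirrors the logic of the tree: both aspects must hold.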
In the attached figure, I’ve indicated in parentheses which of these factors (can/want) each bit of the analysis tests. You might want to do the same in your issue tree: it lets you check that you’re indeed testing both aspects of each hypothesis.
So, for instance, the first four hypotheses are about reducing our expectations. To reduce our expectations, we need to want to do it and we need to be able to do it. The ‘want’ will vary with each sub-branch but the ‘can’ stays the same: are we able to reduce our expectations? If we’re currently getting only the bare minimum out of our negotiations, we cannot reduce our expectations and therefore we shouldn’t look any further in this direction. So the ‘can’ elements of the analysis (highlighted in green, in the attached) are present in each sub-branch of the tree.
There’s a lot to be said for making elements parallel in your logic trees. We’ll discuss that in depth in another post, but the point is that it applies to the hypothesis-testing part of the tree as well: you want to make elements as standardized as possible so that it is easier to check your logic. By using the same elements (in the case of the first branch, the analysis associated with the feasibility of reducing our expectations) we make our tree more boring, and that’s a drawback, I admit, but we also make it more standardized. And when you’re engaged in a process as energy-consuming as developing a logic tree, you want to reduce all distractions: standardization is your friend, and you should be prepared to sacrifice some flamboyance to get it.
Identify your solution(s)
Once you’ve identified the analyses you want to run, you need to choose which hypotheses to test. How do you do that? Well, you know the drill by now: it’s the same as for ‘why’ trees: ask your friends to help you choose the best solution(s). One way is to ask them to place the hypotheses in a 2×2 matrix with attractiveness (“want”) on one axis and feasibility (“can”) on the other.
Does this exercise leave you with one (or more) hypotheses in the top-right quadrant? If so, congratulations! These are the ones you want to test first. Follow the analysis you’ve spelled out in your tree to see if it confirms that these hypotheses are indeed both feasible and attractive.
If, on the other hand, the prioritization exercise leaves you without a single hypothesis in the top-right quadrant, you might want to look at the problem the other way around: discard all the hypotheses in the bottom-left quadrant (they are neither attractive nor feasible, so don’t waste your time) and refine your analysis of the ones that landed in the other two quadrants. Maybe you can develop an objective scale with your friends for defining feasibility and attractiveness? If so, develop the scale as a group and then ask each participant to rank the hypotheses individually according to the scale (remember, you want to avoid contamination amongst evaluators, so it’s better not to do the evaluation as a group).
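To make the scoring concrete, here is one way it could work: each participant scores every hypothesis on feasibility and attractiveness independently, the individual scores are averaged (so no one contaminates anyone else), and the averages determine the quadrant. A minimal sketch; the 1–5 scale, the threshold of 3, and the example scores are all my assumptions:

```python
# Sketch of averaging independent scores and reading off the 2x2 quadrant.
# Scale (1-5), threshold, and all data below are illustrative assumptions.

def quadrant(feasibility, attractiveness, threshold=3.0):
    """Map averaged scores to a quadrant of the feasibility/attractiveness 2x2."""
    if feasibility >= threshold and attractiveness >= threshold:
        return "top right (test first)"
    if feasibility < threshold and attractiveness < threshold:
        return "bottom left (discard)"
    return "refine analysis"

def average_scores(individual_scores):
    """individual_scores: one (feasibility, attractiveness) pair per evaluator,
    scored independently to avoid contamination amongst evaluators."""
    n = len(individual_scores)
    feas = sum(f for f, _ in individual_scores) / n
    attr = sum(a for _, a in individual_scores) / n
    return feas, attr

# Three evaluators score one hypothesis independently on the 1-5 scale.
scores = [(4, 5), (3, 4), (4, 4)]
feas, attr = average_scores(scores)
print(quadrant(feas, attr))  # top right (test first)
```

The averaging step is what preserves the “rank individually” advice: the group agrees on the scale beforehand, but each number comes from one person working alone.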
The point is to place all the hypotheses in the 2×2 according to your best guess (or your team’s). Then carry out the in-depth analysis of the most promising hypotheses and, if the analysis confirms your gut feeling, declare victory and call these the solutions to your problem.
These are the solutions you’ll want to implement, so all that remains is to convince the rest of your organization/boss/client/spouse that they’re the ones you need. We’ll talk about that in the last two posts of this case.