How to Measure Anything, 2nd Edition
by Douglas Hubbard, 2010
This book is of value to anyone in a decision making role. It details the steps necessary to… ahem… measure anything. It is written from a PRACTICAL standpoint of asking questions and performing basic statistics to determine what actually matters to a decision.
Even with a strong decision science and statistical background, I found the way this book framed questions and thought processes to be refreshing. Too often we delve so deeply into the math of a problem that we forget how to frame the question initially. So while the math wasn’t new at all to me, I found its presentation useful. I think other PhD students and professors would benefit from this book as well.
Here is the method the book suggests when deciding what (and how much) to measure to aid in making a decision:
1. Define the decision and the variables that matter to it. If you can’t define a decision that will be affected by your question/measurement, then it doesn’t matter.
2. Model the current state of uncertainty about those variables. This typically involves querying experts to get their confidence intervals on the variables of interest. Experts should be calibrated in providing accurate confidence intervals before beginning.
3. Compute the value of additional measurements. This aids in determining which variables have confidence intervals that are too wide to make a decision right now. Typically, only a couple variables require ANY measurement in a decision, and they are not necessarily the variables that decision makers would normally try to measure.
4. Measure the high-value uncertainties in a way that is economically justified. Measure in an iterative manner, starting with a small study. You will often need less information than you think to shrink a confidence interval to an acceptable range.
5. Make a risk/return decision after the economically justified amount of uncertainty is reduced. You will have adequate information at this stage to make your decision. Does the reward expected from the decision justify its risk for your organization?
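The value-of-information idea in steps 3 and 4 can be sketched numerically. Below is a minimal Monte Carlo estimate of the Expected Value of Perfect Information (EVPI) for a hypothetical funding decision; the numbers, the normal model for the expert’s interval, and the single-variable setup are my own illustrative assumptions, not an example from the book.

```python
import random
import statistics

random.seed(0)

# Hypothetical decision: fund a project costing $1M or do nothing.
# A calibrated expert gives a 90% CI of $0.5M-$2.5M for the payoff; we
# model that as a normal distribution whose 5th/95th percentiles match
# the interval endpoints (z = 1.645).
cost = 1.0
lo, hi = 0.5, 2.5
mean = (lo + hi) / 2
sd = (hi - lo) / (2 * 1.645)

n = 100_000
payoffs = [random.gauss(mean, sd) for _ in range(n)]

# Expected value under current uncertainty: commit to the better of the
# two actions (fund / don't fund) based on expected payoff alone.
ev_fund = statistics.mean(payoffs) - cost
ev_now = max(ev_fund, 0.0)

# Expected value with perfect information: in each simulated world we
# would fund only when the payoff exceeds the cost.
ev_perfect = statistics.mean(max(p - cost, 0.0) for p in payoffs)

# EVPI is an upper bound on what any measurement of this variable could
# be worth -- if it's small, step 4 says don't bother measuring.
evpi = ev_perfect - ev_now
print(f"EV now: {ev_now:.3f}  EV perfect: {ev_perfect:.3f}  EVPI: {evpi:.3f}")
```

Running this gives a small positive EVPI: even though funding already looks favorable on average, knowing the payoff in advance lets you avoid the losing cases, and that avoided loss caps what a measurement is worth.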
You should read the book. That said, the book is not perfect. It messes up a couple mathematical topics in its exposition. It can be very dry and boring in places, even for someone used to reading statistical prose. I think its benefits outweigh its drawbacks, however.
Thanks for your review. Can you clarify what mathematical topics you think I mess up?
Doug Hubbard
I believe it was a passage toward the middle of the book where convexity and concavity were backwards that most jumped out at me. Give me a couple days and I’ll find page numbers of where I thought there were errors. I’ll post them here. I read the 2010 edition; I see that you have a new version out since.
Errors or not, great book. Thanks for your comment.
Pages 108-109 of the 2nd edition: Expected Cost of Information is convex (not concave). Expected Value of Information is concave (not convex).
That’s all I could find on a quick skim, 2nd time around. Will post here if I can remember what else I noticed on first reading.
I was going to say whether a curve is concave or convex is arbitrary depending on the side it is viewed from. I said in the definition of the ECI “If we call the direction of the EVI curve convex, this is the concave direction” – which would not be wrong if that is all I had said. But now I see that when I defined the EVI as convex, I said it was convex “relative to the horizontal axis”. If I was going to call EVI convex and ECI concave I should have said “relative to the vertical axis” or reverse the labels.
Doug Hubbard