VoyageATL is an online magazine that highlights local small businesses and entrepreneurs and promotes local events. A few days ago they published a small piece on my company, Incite.
The official blog site of Incite! Decision Technologies. Thinking about thinking and deciding in a complex, uncertain, and risky world.
Saturday, July 22, 2017
I made the following "movie" trailers for two of my tutorials to play with the idea of making a teaser that didn't attempt to explain anything, but mostly just to have some fun.
It’s Your Move: Creating Valuable Decision Options When You Don’t Know What to Do
Information Espresso: Using Value of Information for Making Clear Decisions (with support from the R programming language)
Tuesday, June 13, 2017
Monday, November 14, 2016
Seeking Beta Testers for a Web-based Sales Opportunity Portfolio Analysis Tool
- Engineering, architecture & construction firms
- Professional service firms
- Capital equipment manufacturers
- Improved accuracy of revenue realization and timing forecasts;
- Guidance on how to allocate resources to maximize the likelihood of deal closure;
- Guidance on opportunity selection and prioritization.
If you are interested in learning more or know someone who might be, please contact me via LinkedIn message or send me an email from our web form.
Wednesday, November 09, 2016
The Power of Negative Thinking: “How do we know this opportunity is worth the time and effort?”
- Unwarranted optimism or wishful thinking – personal enthusiasm or a natural disposition to believe that desired outcomes will most likely occur; or, inflating initial estimates of desired outcomes to appear more effective than is warranted;
- Sand-bagging – under reporting potential outcomes to appear heroic when better than anticipated outcomes materialize;
- False precision – reporting anticipated outcomes with an unjustified level of certainty, usually as a single-point estimate rather than a range;
- Availability – recalling values that are memorable, easily accessible, recent, or extreme;
- Anchoring – using the first “best guess” as a starting point for subsequent estimating;
- Expert over-confidence – failure of creativity or hubris (e.g., “I know this information and can’t be wrong because I’m the expert.”);
- Incentives – the SME experiences some benefit or cost in relationship to the outcome of the term being measured, adjusting his estimate in the direction of the preferred outcome;
- Entitlement – the SME provides an estimate that reinforces his sense of personal value.
|The best laid schemes o’ Mice an’ Salesmen, Gang aft agley|
|No, no, boy, that's no way to make a plane. That'll, I say, that'll never...fly!|
- What is the real opportunity?
- What are our goals and objectives?
- What are the client's goals and objectives?
- What are the decision boundaries and open decisions?
- What are the sources of uncertainty?
- What is the likely range of outcomes for the uncertainties?
- What are the effects of uncertainties on sales goals, revenues, and profit?
- How much risk do we face with each opportunity; i.e., how much could we lose by pursuing one opportunity over another?
- What insights can we create for contingency plans or options?
- How do we prioritize our set of current opportunities?
A Decision Analyst's View of Electoral Surprise
1. Never, ever believe your own spin. Humans love narratives that give them comfort. Unfortunately, almost all narratives are constructed from evidence selected to fit a preferred conclusion.
2. Always question where your biases come from. You are biased. Until you recognize it, you will frequently be rudely embarrassed.
3. There is no meaningful position in certainty. All beliefs about future events should be held as degrees of belief.
4. Even events that happened in the past are open to interpretation. The real issue about the facts of events is not so much whether events have occurred in the past or whether they will occur in the future; it is our epistemic distance from the events. We generally don't know as much as we think we do.
5. We condition our beliefs on the evidence at hand. Thinking that a Clinton victory was highly probable was not a bad position to take; it made sense given much of the evidence. BUT, Prob(Clinton win) > 50% does not mean Prob(Clinton win) = 100%! (I'm actually getting tired of explaining this. I'm tired of seeing people make this mistake and the effects it has in real life on real people. Probabilities are degrees of belief, not statements of fact.) Always, always, always consider the disconfirming evidence.
6. Trump's chance of winning was never insignificant; his victory was always plausible. What I see and hear from those expressing shocked disappointment about the Clinton loss is that they didn't really explore and consider the edge cases that would lead to a Trump victory. Explore the edge cases. Explore aggressively. Keep exploring.
7. Informed accuracy trumps false precision (pun intended). Don't be embarrassed to draw your prediction intervals wide. It's more honest, more informative, and will allow you to do a better job preparing contingency plans. When #6 is performed honestly and aggressively, it should lead you to make your prediction intervals even wider. It's better to be humble and recognize how little you know than to be sure and then be rudely surprised.
8. The evolving probability-of-win curves for this election resemble the curves associated with predicting that a given hypothesis among several is true when there are unaccounted-for characteristics at play. Suddenly, a seemingly most likely explanation crashes, to be replaced by a previously less likely hypothesis as the unrecognized characteristic manifests itself. This is a long way of saying that people get caught up in false dichotomies (or n-chotomies) about the possible explanations for what really is the case. It is almost always the case that more explanations are available than the limited set we originally conceived.
9. If something really weird happens and somehow the posted results at 4 AM reverse by the time I wake up, all of the above still applies, maybe more so.
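The point that Prob(win) > 50% is not certainty can be illustrated with a quick simulation. This is a sketch only; the 71.4% figure is roughly FiveThirtyEight's final 2016 Clinton probability, used here purely for illustration:

```python
import random

random.seed(42)

def simulate_upsets(p_favorite, trials=100_000):
    """Return the fraction of trials in which the underdog wins
    against a favorite whose win probability is p_favorite."""
    upsets = sum(1 for _ in range(trials) if random.random() > p_favorite)
    return upsets / trials

# A ~71% favorite still loses nearly 3 times in 10.
upset_rate = simulate_upsets(0.714)
print(f"Underdog win rate: {upset_rate:.3f}")
```

An event with a 29% probability is not rare. It is roughly the chance of flipping two coins and getting two heads.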
|Although Nate Silver was leaning in the wrong direction for predicting the outcome, his odds were actually more realistic and informed than many other pollsters who were giving 19:1 odds or better for a Clinton win.|
Labels: decision analysis
Tuesday, September 27, 2016
New Book: Business Intelligence with R by Dwight Barry
Perhaps most importantly, I've also decided to give all proceeds to the Agape Girls Junior Guild, which is a group of middle-school girls who do fundraising for mitochondrial disorder research at Seattle Children's Research Institute and Seattle Children's Hospital. While the minimum price for this book will always be free, if you're the type who likes to "buy the author a coffee," know that your donation is supporting a better cause than my already out-of-control coffee habit. :-)
Wednesday, February 10, 2016
Becoming a Business Analytics Jedi: An application of values-framed decision making
In the current rush to adopt data-driven analytics, discussions about algorithms, programming tools, and big data tend to dominate the practice of business analytics. But we are defined by our choices, our values, and preferences. Data and business analytics that do not start with this recognition actually fail to support the human-centered reason for decision making. This is the way of the Sith. A Jedi, however, knows that framing business analytics in terms of the values and preferences of decision makers, and the uncertainty of achieving those, employs the tools of decision and data science in the wisest way. In this discussion, we will think about the principles of high quality decisions, how to frame a business analytics problem, and learn how to use information in the most efficient way to create value and minimize risk.
Interview with Atlanta Business Radio
Monday, January 19, 2015
Teaching the Love of Thinking and Discovery
|Curiosity photo by Rosemary Ratcliff, provided courtesy of FreeDigitalPhotos.net|
I've been thinking about this TED Talk almost non-stop since I watched it, and I'm beginning to think that one way to achieve the idea here is to provide mathematics education outside of traditional school environments. By that, I don't mean that we should advocate that schools quit teaching math; rather, I think we need to start providing private forums in which kids who are interested in math can learn math in the same way they might learn and participate in extracurricular sports or arts activities that are not offered in a traditional school. I'm currently convinced the program must be private and free from policy driven curricula that "teaches to the test" and arbitrary performance criteria. This is for fun, but a special kind of fun.
Wednesday, January 07, 2015
An Interesting Christmas Gift
Over the holidays, the New York Times delivered an unusual juxtaposition of headlines and content, with an apparent lack of self-awareness, eliciting a chuckle from its readers hearty enough to make the cheerful Old Saint jealous.
[image originally provided by @ddmeyer on Twitter]
To those imbued with the skill of basic high school Algebra 1, the information in the article about Sony’s revenues for the first four days of release of “The Interview” was enough to solve a unit value problem. If we let R = the number of rentals, and S = the number of sales; then,
- R + S = 2 million
- $6*R + $15*S = $15 million
However, not too far into the sudoku puzzle we might realize that a deeper, more instructive problem exists here, a problem that actually permeates all of our daily lives. That problem is related to the precision of the information we have to deal with in planning exercises or, say, garnering market intelligence, etc. A second reading of the article reveals that the sales values, both the total transactions and the total value of them, were reported as approximations. In other words, if the sources at Sony followed some basic rules of rounding, the total number of transactions could range from 1.5 million to 2.4 million, and the total value might range from $14.5 million to $15.4 million. This might not seem like a problem at first consideration. After all, 2 million is in the middleish of its rounding range as is $15 million. Certainly the actual values determined by the simple algebra above point to a good enough approximate answer. Right? Right?
To see if this is true, let’s reassign the formulas above in the following way.
- R + S = T
- $6*R + $15*S = V

Solving for S and R in terms of T and V gives:

- S = 1/9 * V - 2/3 * T
- R = T - S
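With the reported totals plugged in (T = 2 million transactions and V = $15 million), the algebra above can be checked in a few lines of Python (a sketch; the post itself doesn't specify a tool):

```python
# Reported totals: T transactions worth V dollars,
# with rentals at $6 each and direct sales at $15 each.
T = 2_000_000
V = 15_000_000

S = V / 9 - (2 / 3) * T    # direct sales, from 6R + 15S = V with R = T - S
R = T - S                  # rentals

print(f"Sales:     {S:,.0f}")     # ~333,333
print(f"Rentals:   {R:,.0f}")     # ~1,666,667
print(f"Ratio R/S: {R / S:.1f}")  # 5.0
```

At the nominal reported values, the rentals-to-sales ratio comes out to exactly 5. The question is how much that answer moves when the inputs are only approximations.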
[Fig. 1: The distribution of total transaction values for various combinations of rental and direct sales numbers.]
Here we see that the rental numbers could range from about 800 thousand to 2.4 million, while the direct sales could range from nearly 0 to 700 thousand! Maybe more instructive is to consider the range of the ratio of the rentals to direct sales:
[Fig. 2: The distribution of the ratio of rentals to direct sales for various combinations of rental and direct sales numbers.]
Using the sample values underlying these distributions in our last set of formulas, we observe that, with an 80th percentile likelihood, the actual ratio of rentals to direct sales falls in a much narrower range: 3 to 9, not 1.11 to 215.
[Fig. 3: The 80th percentile prediction interval for the ratio of the rentals to sales falls in the range of 3 to 9.]
Our manager may push back, saying that our SME doesn’t really have the credibility to justify the distributions assessed above. She asks, "What if we stick with maximal uncertainty within the range?” In other words, what if, instead of assessing a central tendency around the reported values with declining tails on each side, we assume a uniform distribution along the range of sales values (i.e., every value in the range is equally probable)?
[Fig. 4a, b: We replace our SME supplied distribution for (a) total sales transactions and (b) total value with one that admits an insufficient reason to suspect that any value in our range is more likely than any other.]
What is the result? Well, we see that even with the assumption of maximal uncertainty, while the most likely range expands by a factor of 2.7 (i.e., from 3-9 to 1.7-18), it still remains manageable: the extreme edge cases are ruled out, not as impossible but as fairly unlikely.
[Fig. 5: Replacing our original SME distributions that had peaks with uniform distributions flattens out the distribution of our ratio of rentals to sales, causing the 80th percentile prediction interval to widen. The new range runs from about 1.7 to 18.]
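The maximal-uncertainty case can be sketched as a small Monte Carlo simulation. This is an illustrative sketch only; the sample size and random-number details are my assumptions, and the original figures were likely produced with a different tool:

```python
import random
from statistics import quantiles

random.seed(1)

N = 100_000
ratios = []
for _ in range(N):
    # Uniform draws over the rounding ranges of the reported figures
    T = random.uniform(1.5e6, 2.4e6)    # total transactions
    V = random.uniform(14.5e6, 15.4e6)  # total dollar value
    S = V / 9 - (2 / 3) * T             # direct sales
    R = T - S                           # rentals
    ratios.append(R / S)

# 80% prediction interval: 10th to 90th percentile of the ratio
deciles = quantiles(ratios, n=10)
print(f"80% interval for rentals/sales: {deciles[0]:.1f} to {deciles[-1]:.1f}")
```

With these assumptions, the interval should land near the 1.7-to-18 range quoted above, while the sampled extremes approach the 1.11-to-215 bounds. Exact values vary with the random draw.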
The following graph displays the full range of sales and rental variation that is possible depending on our degrees of belief (as represented by our choice of distribution) about the range of total transactions and total value.
[Fig. 6: A scatter plot that demonstrates the distribution of direct sales and rental combinations as conditioned by our choice of distribution type.]
By focusing on the 80th percentile range of outcomes in the ratio of rentals to sales, we can significantly improve the credible range to estimate the rentals and direct sales from the approximate information we were given.
[Fig. 7: A scatter plot that demonstrates the distribution of direct sales and rental combinations as conditioned by our choice of distribution type, constrained only to those values in the 80th percentile prediction interval.]
Precise? Not within a hair’s breadth, no, but the degree of precision we obtain by employing probabilities in our analysis (as opposed to relying on just a best guess with no understanding of the implications of the range of the assumptions) improves by a factor of 13.1 (assuming maximal uncertainty) to 35.2 (trusting our SME). If our own planning depends on an understanding of this sales ratio, we can exercise more prudence in the effective allocation of the resources required to address it. Now, when our manager asks, “How do you know the actual values aren’t near the edge cases?”, we can respond that we don’t know precisely, but simple algebra combined with probabilities dictates that the actual values most likely are not.
The Zen of Decision Making
I copied the following nineteen zen-like koans from the website devoted to the Python programming language (don't leave yet...this isn't really going to be about programming!).
- Beautiful is better than ugly.
- Explicit is better than implicit.
- Simple is better than complex.
- Complex is better than complicated.
- Flat is better than nested.
- Sparse is better than dense.
- Readability counts.
- Special cases aren't special enough to break the rules.
- Although practicality beats purity.
- Errors should never pass silently.
- Unless explicitly silenced.
- In the face of ambiguity, refuse the temptation to guess.
- There should be one-- and preferably only one --obvious way to do it.
- Although that way may not be obvious at first unless you're Dutch.
- Now is better than never.
- Although never is often better than *right* now.
- If the implementation is hard to explain, it's a bad idea.
- If the implementation is easy to explain, it may be a good idea.
- Namespaces are one honking great idea -- let's do more of those!
The koans are supposed to communicate the essence of the guiding principles of programming. Their zen-like fashion is intended to motivate reflection and discussion more so than state explicit rules. In fact, there is a twentieth unstated (Or is it? How's that for zen-like clarity?) principle that you must discover for yourself.
- In what way is decision management like programming?
- How would you interpret these principles, if at all, for use in the role of decision making?
- What do you think is the missing principle?
Monday, October 20, 2014
You've probably heard the saying, "It's better to be mostly accurate than precisely wrong." But what does that mean exactly? Aren't accuracy and precision basically the same thing?
Accuracy relates to the likelihood that outcomes fall within a prediction band or measurement tolerance. A prediction/measurement that comprehends, say, 90% of actual outcomes is more accurate than a prediction/measurement that comprehends only 30%. For example, let's say you repeatedly estimate the number of marbles in several Mason jars mostly full of marbles. An estimate of "more than 75 marbles and less than 300 marbles" is probably going to be correct more often than "more than 100 marbles but less than 120 marbles." You might say that's cheating. After all, you can always make your ranges wide enough to comprehend any range of possibilities, and that is true. But the goal of accuracy is just to be more frequently right than not (within reasonable ranges), and wider ranges accomplish that goal. As I'll show you in just a bit, accuracy is very powerful by itself.
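The marble-jar intuition is easy to check with a simulation. This is a sketch; the interval endpoints come from the example above, but the assumed distribution of true jar counts (60 to 320 marbles) is hypothetical:

```python
import random

random.seed(7)

def coverage(lo, hi, true_counts):
    """Fraction of jars whose true count falls strictly inside (lo, hi)."""
    return sum(lo < n < hi for n in true_counts) / len(true_counts)

# Hypothetical "true" jar counts: assume jars hold 60 to 320 marbles.
jars = [random.randint(60, 320) for _ in range(10_000)]

wide   = coverage(75, 300, jars)   # "more than 75 and less than 300 marbles"
narrow = coverage(100, 120, jars)  # "more than 100 but less than 120 marbles"

print(f"Wide interval covers   {wide:.0%} of jars")
print(f"Narrow interval covers {narrow:.0%} of jars")
```

Under these assumed counts, the wide estimate is right for the large majority of jars while the narrow one is right only a small fraction of the time, which is the whole point: width buys accuracy.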
Precision relates to the width of the prediction/measurement band relative to the mean of the prediction/measurement. A precision band that varies around a mean by +/- 50% is less precise than one that varies by +/- 10%. When people think about a precise prediction/measurement, they usually think about one that is both accurate and precise. A target pattern usually helps make a distinction between the two concepts.
|The canonical target pattern explanation of accuracy and precision.|
The problem is that people jump past accuracy in their rush to be precise, thinking that the two are synonymous. Unfortunately, unrecognized biases can make precise predictions extremely inaccurate, hence the proverbial saying. Skipping the all too important step of calibrating accuracy is where the "precisely wrong" comes in.
Good accuracy trucks many more miles in most cases than precision, especially when high quality, formal data is sparse. This is because the marginal cost of improving accuracy is usually much less than the marginal costs of improved precision, but the payoff for improved accuracy is usually much greater. To understand this point, take a look again at the target diagram above. The Accurate/Not Precise score is higher than the Not Accurate/Precise score. In practice, a lot of effort is required to create a measurement situation that effectively controls for the sources of noise and contingent factors that swamp efforts to be reasonably more precise. Higher precision usually comes at the cost of tighter control, heightened attention on fine detail, or advanced competence. There are some finer nuances even here in the technical usages of the terms, but these descriptions work well enough for now.
Be careful, though: being more accurate is not just a matter of going with your gut instinct and letting that be good enough. Our gut instinct is frequently the source of the biases that make our predictions look as if we were squiffy when we made them. We usually achieve improved accuracy through the deliberative process of accounting for the causes and sources of the variation (or range of outcomes) we might observe in the events we're trying to measure or predict. The ability to do this reflects the depth of expert knowledge we possess about the system we're addressing, the degree of nuance we can bring to bear to explain the causes of variation, and a recognition of the sources of bias that may affect our predictions. In fact, achieving good accuracy usually begins with acknowledging that we are probably biased (we usually are) and asking why.
Once we've achieved reasonable accuracy about some measurement of concern, it might then make sense to improve our precision of the measurement if the payoff is worth the cost of intensified attention and control. In other words, we only need to improve our precision when it really matters.
|[Image from FreeDigitalPhotos.net by Salvatore Vuono.]|
Monday, September 22, 2014
Are Your Spreadsheets the Problem?
Mr. Patrick Burns at Burns Statistics (no, not that Mr. Burns) provides an excellent overview of the hidden dangers that lurk in your spreadsheets. Guess what. The problems aren't just programming errors and the potential for their harm, but errors inherent to the spreadsheet software itself. That's right. Before your analysts even make an error, the errors are already built in. Do you know what's lurking in your spreadsheets? Well, do you?
Before you answer that question, ask yourself these:
- What quality assurance procedures does our organization employ to ensure that our spreadsheets are free of errors of math, units conversion, and logic?
- What effort does our organization undertake to make sure that the decision makers and consumers of the spreadsheet analysis comprehend the assumptions, intermediate logic, and results in our spreadsheets?
- How do we ensure that spreadsheet templates (or repurposed spreadsheets or previously loved spreadsheets) are actually contextually coherent with the problem framing and subsequent decisions that the spreadsheets are intended to support?
My suspicion is that errors of the first level run amok much more than people are willing to admit, but their prevalence is relatively easy to estimate given our knowledge about the rates at which programming errors occur, why they occur, and how they propagate geometrically through spreadsheets. Mr. Burns recommends the programming language R as a better solution than spreadsheets, one that is easier to adopt than your analysts might currently imagine. I agree. I happen to like R a lot, but I love Analytica as a modeling environment even more. But the solution to our spreadsheet modeling problems isn't going to be completely resolved by our choice of software and our programming mastery of it.
My greater suspicion is that errors of the second and third level are rarely addressed and pose the greatest level of risk to our organizations because we let spreadsheets (which are immediately accessible) drive our thinking instead of letting good thinking determine the structure and use of our spreadsheets. To rid ourselves of the addiction to spreadsheets and their inherent risks, we have to do the hard work first by starting with question 3 and then working our way down to 1. Otherwise, we're being careless at worst and precisely wrong at best.
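One concrete way to act on question 1 is to move critical calculations out of opaque cells and into scripted functions that carry their own checks, whether in R (as Mr. Burns suggests) or any scripting language. A minimal sketch in Python; the revenue function and its figures are hypothetical:

```python
def projected_revenue(units, unit_price, discount_rate):
    """A calculation that might otherwise hide, untested, in a spreadsheet cell."""
    if not 0 <= discount_rate < 1:
        raise ValueError("discount_rate must be in [0, 1)")
    return units * unit_price * (1 - discount_rate)

# Unlike a cell formula, scripted logic can assert known cases up front.
assert projected_revenue(100, 10.0, 0.0) == 1000.0
assert projected_revenue(100, 10.0, 0.25) == 750.0
```

The assertions and the input guard are exactly the kind of quality-assurance procedure question 1 asks about; a spreadsheet cell typically fails silently instead.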
(Originally published at LinkedIn.)
Thursday, July 17, 2014
When A Picture is Worth √1000 Words
This morning @WSJ posted a link to the story about Microsoft’s announcement of its plans to lay off 18,000 employees. This picture (as captured on my iPhone)...
...accompanied the tweet, which is presumably available through their paywall link.
While I’m really sorry to hear about the Microsoft employees who will be losing their jobs, I am simply outraged at the miscommunication in the pictured graph. (This news appeared to me first on Twitter, and the seemingly typical response on Twitter is hyperbolic outrage.)
Here’s the problem as I see it: the graph communicates one-dimensional information with two-dimensional images. By doing so, it distorts the actual intensity of the information the reporters are supposed to be conveying in an unbiased manner. In fact, it makes the relationships discussed appear much less dramatic than they actually are.
For example, look at Microsoft’s (MSFT) revenue per employee compared to Apple’s (AAPL). WSJ reports MSFT at $786,400/person; AAPL, $2,128,400. The former is 37% of the latter. But for some reason, WSJ communicates the intensity with an area, a two-dimensional measure, whereas intensity is one-dimensional. Our eyes are pulled to view the length of the side of the square as a proxy for the measurement being communicated. The sides of the squares are proportional to √(786,400) and √(2,128,400); therefore, the sides of the squares visually communicate the ratio of the productivity of MSFT:AAPL as 61%. In other words, the chart visually overstates the relative productivity of MSFT's employees compared to that of AAPL's by a factor of about 1.65.
If the numbers are confusing there, consider this simpler example. The speed of your car as measured by your speedometer is an intensity. It’s one dimensional. It tells you how many miles (or kilometers, if you’re from most anywhere else outside the US) you can cover in one hour if your car maintains a constant speed. Your speedometer aptly uses a needle to point to the current intensity as a single number. It does not use a square area to communicate your speed. If it did, 60 miles per hour would look only 1.41 times faster than 30 miles per hour instead of the actual 2 times faster that it really is. The reason is that the sides of the squares used to display speed would have to be proportional to the square roots of the speeds. The square roots of 60 and 30 are 7.75 and 5.48, respectively.
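The distortion is simple to quantify: if a chart encodes a one-dimensional quantity as the side of a square (so area grows with the value), the visual ratio between two values is the square root of the true ratio. A quick check using the figures quoted above:

```python
from math import sqrt

msft = 786_400     # revenue per employee, MSFT (per WSJ)
aapl = 2_128_400   # revenue per employee, AAPL (per WSJ)

true_ratio = msft / aapl                # what the data say: ~0.37
visual_ratio = sqrt(msft) / sqrt(aapl)  # what square sides show: ~0.61

print(f"True ratio:           {true_ratio:.0%}")
print(f"Visual ratio:         {visual_ratio:.0%}")
print(f"Overstatement factor: {visual_ratio / true_ratio:.2f}")

# The speedometer analogy: 60 mph drawn as a square looks only
# sqrt(2), about 1.41 times, "faster" than 30 mph.
print(f"Speed distortion: {sqrt(60 / 30):.2f}")
```

The same square-root compression applies to any value pair, which is why area encodings of one-dimensional intensities systematically flatten real differences.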
For your own personal edification, I have corrected the WSJ graph here:
Do you see, now, how much more dramatic the AAPL employees' productivity is over that of MSFT's?
This may not seem like a big deal to you at the moment, but consider how much quantitative information we communicate graphically. The reason is that, as the cliché goes, a picture is figuratively worth a thousand words. I firmly believe graphical displays of information are powerful methods of communication, and a large part of my professional practice revolves around accurately and succinctly communicating complex analysis in a manner that decision makers can easily consume and digest. But I’m also keenly aware of how analysts and reporters often miscommunicate important information via visual displays, whether by design, through inexperience, or by trying to be too clever. I see these transgressions all the time in the analyses I’m asked to audit.
The way we communicate information is not just a matter of style for business reporters. We often make prodigious decisions based on information. If information is communicated in a way that distorts the underlying relationships involved, we risk making serious misallocations of scarce resources. This affects every aspect of the nature of our wealth - money, time, and quality of life. The way we communicate information bears fiduciary responsibilities.
For discussion’s sake, I ask,
- How often have you seen, and maybe even been victimized by, graphical information that miscommunicates important underlying relationships and patterns?
- How often have you possibly incorporated ineffective means of graphically communicating important information? (Pie charts, anyone?)
If you want to learn more about the best ways to communicate through the graphical display of quantitative information, I highly recommend these online resources as a starting point:
Friday, July 11, 2014
The Value of Knowing What You Do Not Know
Tuesday, February 25, 2014
How Do You Know That? Funny You Should Ask.
During a recent market development planning exercise, my client recognized that his colleagues were making some rather dubious assumptions about the customers they were trying to address (i.e., acceptable price, adoption rate, lifecycle, market size, etc.), the costs of development, and the costs of support. When he asked, “How do you know that?”, he was met with irritation and mild belligerence from those asked to justify their assumptions. So, together we devised a simple little routine to force the recognition that assumed facts might be shakier than previously thought.
|Before Western explorers proved that the Earth is round, ships used to sail right off the assumed edges.|
We then asked the team to supply a statement that answered each question in support of the original statement. Once this was completed, we then appended the dreaded question mark to each of these responses. We repeated this process until no declarative answers could be supplied in response to the questions. The cognitive dissonance among the team members became palpable as they all had to start facing the uncomfortable situation that what they once advocated as fact was largely unsupportable. Many open questions remained. More uncertainty reigned than was previously recognized. The remaining open questions then became the basis for uncertainties in our subsequent modeling efforts in which we examined value tradeoffs in decisions as a function of the quality of information we possessed. You probably won’t be surprised to learn that the team faced even more surprises as the implications of their tenuous assumptions came to light.
I am interested to know how frequently you find yourself participating in planning exercises at work in which key decisions are made on the basis of largely unsupported or untested assumptions. My belief is that such events happen much more often than we care to admit.
- Write down everything you believe to be true about the issue or subject at hand.
- Each statement should be a single declarative statement.
- Read each out loud, forcing ownership of the statement.
- Convert each statement to a question by changing the period to a question mark.
- Again, read each out loud as a question, opening the door to the tentative nature of the original statement.
- Supply a statement that you believe to be true that answers each question.
- Repeat the steps above until you reach a point with each line of statements-questions where you can no longer supply answers.
Let me know if you try this and how well it works.
Wednesday, January 22, 2014
Can Modeling a Business Work?
A friend on LinkedIn asks, “Can modeling a business work?” I respond:
Business modeling is a tool similar to accounting in that it aids our thinking in a world whose complexity seems often to exceed the grasp of our comprehension. I look at the value of modeling a business as a means to stress test both the business plan logic and the working assumptions that drive the business plan. In regard to the business plan logic, we're asking if the business has the potential ability to produce the value we think it can; and in regard to the working assumptions, we're testing how sensitively important metrics (i.e., payback time, break-even, required resources, shareholder value) of the business plan respond to conditions in the environment and controllable settings to which our business plan will be subjected.
- Think of such models as "what-ifs" more so than precise forecasts. Use the "what-if" mindset to make a business plan more robust against the things outside your direct control rather than to justify a belief in guaranteed success. The latter is almost a surefire approach to failure.
- Always compare more than one plan with a model to minimize opportunity costs. Oftentimes, the best business plans derive from hybrids of two models that show how value can be created and retained for at least two different reasons.
- Avoid overly complex models as much as, maybe more so than, overly simplistic models. Building a requisite model from an influence diagram first is usually the best way to achieve this happy medium before writing the first formula in a spreadsheet or simulation tool. Richer, more complex models that correspond to the real world with the highest degree of precision are usually not useful for a number of reasons:
- they can be costly to build
- the value frontier of the insights derived decline relative to the cost to achieve them as the degree of complexity increases
- they are difficult to maintain and refactor for other purposes
- they are often used to justify delaying commitment to a decision
- few people will achieve a shared understanding that is useful for collaborating and execution
Sunday, January 12, 2014
Double, double toil and trouble; Fire burn, and caldron bubble
This was a great article in The Wall Street Journal today.
For me, the key takeaway can be summed up in this quote from Prof. Goetzmann: "Once people buy in, they start to discount evidence that challenges them..." I relate this not only to investing decisions in the market, but also to making organizational decisions--investments in capital projects, new strategies, the next corporate buzz. We've all seen or been a part of the exuberant irrationality that leads organizations into malinvestments.
Let's consider the complementary action--saying "no." Against the tendency toward the irrational "yes, Yes, YES!", learning to say "no" is a very important skill--and probably one of the hardest to master, because requests from others make us feel important and liked.
I think, however, we always need to be aware that many of our initial reactions are driven by biases. Reactively saying "no," once it becomes easy to do, can emerge from the same biases that urge us unreservedly to say "yes." Both incur their costs: missed opportunity, waste, and rework.
More important than learning to say "no" is acquiring the skill of considering disconfirming evidence, especially when that evidence challenges our dearest assumptions about what is going to make us rich. Let's not be so quick to say "yes," or smug when we say "no." Rather, let's learn the practice of asking,
- "what information might disabuse me of my favorite assumptions?"
- "what biases are preventing me from seeing clearly?"
Tuesday, September 10, 2013
It's Your Move: Creating Valuable Decision Options When You Don't Know What to Do
The following is an excerpt of the first chapter from my newly published tutorial.
Business opportunities of moderate or even light complexity often expose decision makers to hundreds, if not tens of thousands, of coordinated decision options that should be considered thoughtfully before making resource commitments. That complexity is overwhelming! Unfortunately, the typical response is either analysis paralysis or "shooting from the hip," both of which expose decision makers to unnecessary loss of value and risk. This tutorial teaches decision makers how to tame option complexity and develop creative, valuable decision strategies that range from "mild to wild" with three simple thinking tools.
Read more here.
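Why do option counts explode so quickly? A coordinated strategy is one choice from each decision area, so the number of distinct strategies is the product of the alternatives per area. The decision areas and alternatives below are purely illustrative, not from the tutorial:

```python
# A strategy is one alternative chosen from each decision area, so the
# strategy count is the product of the per-area alternative counts.
# These decision areas and alternatives are illustrative.
from itertools import product
from math import prod

decision_areas = {
    "market entry": ["build", "buy", "partner"],
    "pricing": ["premium", "value", "freemium"],
    "channel": ["direct", "distributor", "online"],
    "timing": ["now", "in 1 year"],
    "scale": ["pilot", "regional", "national"],
}

n_strategies = prod(len(alts) for alts in decision_areas.values())
print(f"{n_strategies} distinct strategies")  # 3*3*3*2*3 = 162

# Each element of this iterator is one candidate strategy:
strategies = product(*decision_areas.values())
```

Five small decision areas already yield 162 coordinated strategies; add a few more areas or alternatives and the count runs into the thousands, which is why a systematic way to tame the combinatorics matters.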
Wednesday, July 24, 2013
RFP Competitive Price Forecasting Engine
Tuesday, July 23, 2013
Business Case Analysis with R
A Simulation Tutorial to Support Complex Business Decisions
1.2 Why use R for Business Case Analysis?
- Set up a business case abstraction for clear communication of the analysis
- Model the inherent uncertainties and resultant risks in the problem with Monte Carlo simulation
- Communicate the results graphically
- Draw appropriate insights from the results
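The tutorial itself uses R for steps like these; as a language-neutral illustration, here is a minimal Monte Carlo sketch of the same idea in Python. The distributions and figures are my own illustrative assumptions, not taken from the tutorial:

```python
# Minimal Monte Carlo business case: simulate uncertain revenue and cost,
# then summarize the distribution of net value. The triangular
# distributions below ($k; low, high, mode) are illustrative.
import random

random.seed(42)
N = 10_000
net_values = []
for _ in range(N):
    revenue = random.triangular(80, 150, 110)
    cost = random.triangular(60, 100, 75)
    net_values.append(revenue - cost)

net_values.sort()
mean = sum(net_values) / N
p10, p90 = net_values[int(0.10 * N)], net_values[int(0.90 * N)]
print(f"mean net value ~ {mean:.1f}, 10th-90th percentile ~ [{p10:.1f}, {p90:.1f}]")
```

The point of the simulation is the last line: instead of a single-point estimate, the decision maker sees a range of outcomes with probabilities attached, from which risk-aware insights can be drawn.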
1: You will find other examples of spreadsheet errors at Raymond Panko's website. Panko researches the cause and prevalence of spreadsheet errors.
Read more here. Or, if you prefer, find it on Amazon or Scribd.
Wednesday, March 06, 2013
Never Tell Me The Odds?
In a previous post, I discussed the meaning of expected value (EV) and how it's useful for comparing the values of choices we could make when the outcomes we face with each choice vary across a range of probabilities. The discussion closed by comparing the choice to play two different games, each with different payoffs and likelihoods. Game 1 returns an EV of $5, even though it could never actually produce that outcome; and Game 2 returns an EV of $4, also being incapable of producing that outcome.
But let's say that you hate it when C-3PO tells you the odds, so you commit to Game 2 because you like the upside potential of $15, and you think the potential loss of $5 is tolerable. After all, Han Solo always beat the odds, right? Well, before you so commit, let me encourage you to look into my crystal ball to show you what the future holds…not just in one future, but many.
Here's what we see. For Game 1, the accrued earnings range from ~$4,500 to ~$5,500 by the 1000th step.
But take a second look. That second-from-the-top band for Game 2 converges on the second-from-the-bottom band in Game 1. These are the upper and lower 5th percentile bands of the outcome, respectively.
So it is only in the fantasy of Hollywood that the mere mention of long odds ensures the protagonist's success. Unfortunately, life doesn't always conform to that fantasy. Over a long time and many repeated occasions to play risky games, especially those that afford little opportunity to adjust our position or mitigate our exposure, EV tells us that our potential for regret increases for having chosen the lesser valued of the two games. Depending on the relative size of the EVs of the two choices, that regret can accrue rapidly, as the outcome signal implied by the EV begins to overwhelm the short-term lucky outcomes in the random noise of the game.
So how can you know when you will be lucky? You can't. The odds based on short-term observations of good luck will not long be in your favor. Your fate will likely regress to the mean.
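The crystal-ball exercise is easy to reproduce. The sketch below simulates accrued earnings over 1000 plays of two games with EVs of $5 and $4. The specific payoffs are my own illustrative assumptions chosen to match those EVs and the post's description of Game 2's $15 upside and $5 downside; the post does not state Game 1's payoffs:

```python
# Simulate accrued earnings over many plays of two games. The payoffs
# below are illustrative assumptions consistent with EVs of $5 and $4.
import random

random.seed(1)

def game1():
    return 10 if random.random() < 0.5 else 0    # EV = 0.5*10 = $5

def game2():
    return 15 if random.random() < 0.45 else -5  # EV = 0.45*15 - 0.55*5 = $4

def accrued(game, n=1000):
    total, path = 0, []
    for _ in range(n):
        total += game()
        path.append(total)
    return path

# Over 1000 plays, the ~$1/play EV gap overwhelms short-term luck:
g1, g2 = accrued(game1), accrued(game2)
print(f"after 1000 plays: Game 1 = ${g1[-1]}, Game 2 = ${g2[-1]}")
```

Run it with different seeds and Game 2 occasionally leads early, but over the long run the paths separate in Game 1's favor, which is exactly the regression to the mean the post describes.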
(This post was also simultaneously published at the Lumina Blog.)