Posts Tagged ‘risk’

HBR article recommends confronting new venture risks early

Wednesday, May 19th, 2010

Once upon a time in innovation, there was a general rule: get to market as quickly as you can, meaning you should start on your “long-pole” development activities as soon as possible. But there’s a growing consensus in the innovation community that the best way to succeed isn’t to start developing quickly, but instead to do as much work as possible on paper, to validate assumptions cheaply and quickly, and defer more expensive, riskier (and even long-pole) activities until after some of the basic assumptions are validated.

Part of this thinking encourages innovators to rank their risks and work on critical assumptions first. If those assumptions don't pan out, the entire venture might fall apart, so all the better to examine them early. That's the premise behind the article "Beating the Odds When You Launch a New Venture" by Clark Gilbert and Matthew Eyring in the May Harvard Business Review.

The authors identify three types of risks that should be evaluated early in a new venture’s life:

1) Deal-killer risks – risks that can sink the venture. Often these are marketing- and sales-related risks: will anyone buy the product we want to build? Given that engineers often start with a product idea, it's easy to see why market testing is often left until last. However, prototyping and beta launches (common with internet products today) can provide cheap and quick data about a product's attractiveness to the market.

2) Path-dependent risks – these are situations that could go down multiple paths – for example, a new product that could be useful to consumers or businesses. Committing to one of these paths, and later learning the other path was a better choice, wastes time and money, and risks the venture never fulfilling its potential. The authors recommend entrepreneurs carefully evaluate these alternate paths early on, and consider outsourcing or other ways to cost-effectively pursue both paths until the correct one becomes clear.

3) Risks that are simple and quick to evaluate – other assumptions may not be as critical as the two above, but if they can be tested simply and cheaply, validating them reduces the overall risk of the venture.

This thinking is similar to ideas put forward in last year's book "Innovation Tournaments" by Terwiesch and Ulrich, which also discussed testing high-impact risks early, before expensive steps like building supply chains.

And, of course, all these efforts owe a debt to the thinking of McGrath and MacMillan, whose book “Discovery-Driven Growth” is the bible of the test-assumptions-first school.

Related posts:
A brief definition of strategy (Clark Gilbert)
When innovating, try more and more varied ideas (Innovation Tournaments)
On “Discovery-Driven Growth”

Risk is risk: an idea for spotting emerging asset bubbles

Thursday, January 21st, 2010

One of the causes of the recent financial meltdown was a bubble in the value of housing assets that emerged over a number of years. Some people saw this while it was happening, but could do little to impact things. When the bubble burst, institutions and individuals suffered tremendous damage that is still hurting people today.

An important question is this: can asset bubbles be spotted earlier, and, if so, can measures be taken to minimize their impact? Sendhil Mullainathan of Harvard Business School thinks the answer to both questions is yes. In one of the 10 Breakthrough Ideas discussed in the current issue of Harvard Business Review, Mullainathan postulates that, just as structural engineers can design buildings to withstand much of the force of an earthquake, a well-designed system can help absorb and dampen the negative effects of asset bubbles.

He proposes an "early warning system" composed of a committee that would watch markets for signs that contrarian views are underrepresented (according to Mullainathan, this happens often during bubbles: investors betting against the prevailing view are eventually chased from the market as prices rise inexorably). By publicizing this situation, and perhaps even placing counter-bets, the early warning committee could help dampen prices and keep bubbles from overinflating.
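
Such a committee would presumably draw on far richer data, but the core signal can be sketched in a few lines: flag an asset when contrarian (short) positions shrink even as prices keep rising. The short-interest series and the `min_short_ratio` threshold below are my own illustrative assumptions, not anything from Mullainathan's article.

```python
# Illustrative sketch only: flag an asset when contrarian (short) interest
# shrinks even as prices keep rising -- the pattern said to accompany bubbles.

def bubble_warning(prices, short_interest, min_short_ratio=0.02):
    """prices and short_interest are equal-length series, oldest first.
    short_interest is the fraction of shares sold short.
    Returns True if prices rose throughout the period while short
    interest ended below min_short_ratio (contrarians chased out)."""
    prices_rising = all(later >= earlier
                        for earlier, later in zip(prices, prices[1:]))
    contrarians_gone = short_interest[-1] < min_short_ratio
    return prices_rising and contrarians_gone

# Steadily rising prices while shorts get squeezed out: the warning fires.
print(bubble_warning([100, 120, 150, 200], [0.10, 0.05, 0.03, 0.01]))  # True
```

A real implementation would need to define "contrarian underrepresentation" much more carefully; this just shows the shape of the check.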

I have a lot of questions about this approach – for one, wouldn't investors put political pressure on such a public group betting against price rises? Nonetheless, Mullainathan's proposal deserves careful study, because anyone who's lived through this last bubble (John Paulson excepted) doesn't want to relive it.

(Photo by Viking_79 via Flickr creative commons)

Risk is risk – the specter of cybertheft

Monday, January 4th, 2010

Welcome to a new category of posts. My 2010 goal (resolution?) is to post each Monday on the impact of risk on businesses small and large.

My recent vacation reading included a daily dose of USA Today. Among the bar charts was this very interesting article on cybertheft.

According to the article, malicious links hide in official-looking emails, online ads and web pages. Clicking on these links downloads programs that scan your computer or log your keystrokes to find account numbers and passwords. And while consumers are by and large protected in the case of a fraudulent funds transfer, businesses are not. According to the USA Today article:

…consumer-protection laws require banks to fully reimburse individual account holders who report fraudulent activity in a timely manner. However, banks have taken to invoking the Uniform Commercial Code — a standardized set of business rules that have been adopted by most states — when dealing with fraud affecting business account holders. Article 4A of the UCC has been interpreted to absolve a bank of liability in cases where an agreed-upon security procedure is in place and a theft occurs that can be traced to a compromised PC controlled by the business customer.

“It’s time for small business to wake up and understand the true risk of online banking,” says [Gartner analyst Avivah] Litan. “If the bank thinks you were negligent, they do not have any obligation to pay you back.”

Here are some documented losses from cybertheft (some of these losses have been recovered, and in some cases the victims are suing the banks for more reimbursement):

Bullitt County, Kentucky: $415,000
Western Beaver County, PA, School District: $700,000
Cumberland County, PA: $479,000

The lesson here is to take great care when banking online. At minimum, use a PC dedicated for online banking and do no other web browsing or email on that PC. Look into accounts that offer better protection (they’ll cost you). Or, perhaps, consider writing plain old checks again.

UPDATE: Brian Krebs, the great IT security blogger referenced twice above, has left the Washington Post as of 12/31/09. Fortunately, he’s still blogging on security at his own website.

Duncan Watts: “If it’s too big to fail, it’s too big”

Wednesday, May 20th, 2009

(Funny, my wife made this point months ago.)

Duncan Watts, principal research scientist at Yahoo Research and expert on human networks and complexity, makes this point in the June Harvard Business Review ("Too Big To Fail? How About Too Big To Exist?").

Watts looks at the financial market as a complex system and compares it to another complex system: the power grid. Just as an outage at one power plant can cascade into regional blackouts, failures in the financial system (such as the collapse of Lehman Brothers) cascaded into a general meltdown in credit and prompted unprecedented governmental intervention.

He points out that in a complex system, the actors (financial firms, power system components) affect each other to the point that one's own risk profile can change dramatically depending on what happens to others. In other words, your risk department's calculations depend on assuming the other guy is stable and rational (risky assumptions, those).

Government coming in after a disaster and resuscitating the surviving firms is one approach. A better approach, according to Watts, is to make certain that each actor is small enough that its failure has a limited effect on the other actors.

This discussion reminded me of the evolution of robustness in computer systems over the past thirty years. In the 1980s, the best way to achieve robustness was to build a huge computer with redundant components and very complex software. Such computers were protected in military-style data centers with concrete walls and fire suppression systems. If a piece of the computer failed, the software helped the machine use the remaining pieces to continue operating. Tandem (now part of HP) was the market leader here.

Of course, relying on one huge computer (too big to fail) exposed you to lots of other risks. For example, what if the power went off? What if there was a localized weather disaster? etc. There were limits to the “too big to fail” computer architecture–exposed most notably in the 9/11 disaster, where reliance on Lower Manhattan data centers put the stock markets and other financial markets on hold for days till their data services could be relocated.

Another approach to computer redundancy was created in the internet space, perfected by Google. Rather than having one or two huge servers with complex software managing redundant everything, Google has created a worldwide network of hundreds of thousands of small, pretty dumb servers, and software that allows transactions to be moved across those servers depending on their health. If a Google server goes down, nobody notices because its traffic is quickly spread over the remaining zillion servers that are working.
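
The routing idea behind that architecture can be sketched minimally: send each request to any healthy server, so a single failure just shifts load onto the survivors. The `Server` and `Cluster` names and the random routing policy here are my own illustration, not Google's actual stack.

```python
# Minimal sketch of "many small servers" redundancy: requests are routed
# to any healthy server, so one failure spreads load over the rest.
import random

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True  # flipped to False by a health checker on failure

    def handle(self, request):
        return f"{self.name} served {request}"

class Cluster:
    def __init__(self, servers):
        self.servers = servers

    def route(self, request):
        # Only consider servers currently passing health checks.
        live = [s for s in self.servers if s.healthy]
        if not live:
            raise RuntimeError("no healthy servers")
        return random.choice(live).handle(request)

cluster = Cluster([Server(f"s{i}") for i in range(5)])
cluster.servers[0].healthy = False        # one server "goes down"
print(cluster.route("GET /"))             # still served by a survivor
```

Contrast this with the Tandem model: here robustness lives in the routing layer and sheer server count, not in any one machine's hardware.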

And that seems like a better model for our financial systems, too. I agree with Watts: too big to fail is too big.

Related post:
On Duncan Watts’ “Big Seed Marketing” idea