I’m wrestling with this question right now. I’ve completed several projects that involved collecting, reviewing, and working with customer stories to help companies with telesales, market positioning, and strategic planning. When I talk to larger companies, they inevitably point to an industrial-strength IT platform they’ve installed that collects and interprets thousands or millions of interactions. “See what we’re doing?” they say. “We are looking at millions of data points. So there’s nothing more you can do for us.”
My discussions with these companies won’t go anywhere unless I can demonstrate that there is a distinct difference between the data mining they’re doing today and the more hand-crafted approach I’m proposing.
So I’m laying out some important distinctions between story work and data mining.
1) My approach (and others’) is rooted in customer narratives, not in numbers.
2) With the narrative-based approach, an essential aspect is human immersion in the individual customer stories. This means a real person reading transcripts, listening to recordings, etc., to experience as fully as possible what’s going on in the moment. In customer encounters, hesitations, stammers, changes, long pauses, laughs, and interruptions are not noise; they are part of the story. Removing these is at best sterilizing and at worst misleading. [Even rudimentary, supposedly machine-readable stories such as Tweets can be easily misinterpreted.]
3) This human interactor helps catalyze* or identify unusual or possibly interesting patterns, acting as a naive observer (or customer proxy) outside the company. Passages that may illustrate patterns are excerpted and used in the sensemaking described below. [Sensemaking, in a somewhat different context, is discussed in this Sloan Business Review article.] Data items related to the stories are also collected, and the interactor creates graphical representations of the data, some of which are potentially interesting and insightful.
4) The catalyzed information is reviewed and assessed by groups of people inside the company, rather than by individuals scanning dashboards and using that data to reinforce their preconceptions. The groups use techniques designed to foster evidence-based collective sensemaking, rather than simply to confirm or refute preexisting hypotheses. The techniques encourage the groups to develop alternate interpretations of the data, an important advantage over numerical data analysis in evaluating areas such as customer service, sales, and product satisfaction, which have complex aspects that defy reduction to simple dashboard figures.
But what about the limitations of all this human intervention? Isn’t 200 stories too small a data set for a company with millions or tens of millions of customers? My answer, surprisingly, came from a CFO friend. “Gosh,” he said, “after 200 stories, I don’t think you’d find much of anything new.” I agree. If the scope and key questions are defined clearly enough, a sample of a hundred or two hundred can blanket the problem and provide patterns that apply equally to the remainder of the customer base.
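There is a simple arithmetic intuition behind the CFO’s instinct. If we make the (admittedly simplifying, hypothetical) assumption that a given pattern shows up independently in each story with some fixed prevalence, the chance of encountering it at least once in a sample of n stories is 1 − (1 − p)^n, which climbs toward certainty quickly even for uncommon patterns. A minimal sketch:

```python
# Back-of-envelope check on the "200 stories is enough" intuition.
# Simplifying assumption: a pattern appears independently in each story
# with a fixed prevalence p. Then the probability of seeing it at least
# once in a random sample of n stories is 1 - (1 - p)^n.

def chance_of_seeing(prevalence: float, sample_size: int) -> float:
    """Probability that a pattern with the given prevalence appears
    at least once in a random sample of the given size."""
    return 1 - (1 - prevalence) ** sample_size

for p in (0.05, 0.02, 0.01):
    print(f"prevalence {p:.0%}: "
          f"n=100 -> {chance_of_seeing(p, 100):.1%}, "
          f"n=200 -> {chance_of_seeing(p, 200):.1%}")
```

Even a pattern present in only 1 in 50 customer stories is very likely to surface at least once in a 200-story sample; what the sample cannot do, of course, is estimate the prevalence of rare patterns precisely, but for surfacing patterns worth sensemaking that is usually beside the point.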
*I first heard the term catalysis used by Cynthia Kurtz to describe this process.