Boardspan Library

A Leader’s Guide to Data Analytics

by Florian Zettelmeyer


In recent years, data science has become an essential business tool. With access to incredible amounts of data—thanks to advanced computing and the “Internet of things”—companies are now able to measure every aspect of their operations in granular detail. But many business leaders, overwhelmed by this constant blizzard of metrics, are hesitant to get involved in what they see as a technical process.

For Florian Zettelmeyer, a professor of marketing and faculty director of the program on data analytics at the Kellogg School, managers should not view analytics as something that falls beyond their purview. “The most important skills in analytics are not technical skills,” he says. “They’re thinking skills.” Managing well with analytics does not require being a math genius or a master of computer science; instead, it requires what Zettelmeyer calls “a working knowledge” of data science. This means being able to separate good data from bad, and knowing precisely where analytics can add value.

A working knowledge of data science can help leaders turn analytics into genuine insight. It can also save them from making decisions based on faulty assumptions. “When analytics goes bad,” Zettelmeyer says, “the number one reason is that data that did not result from an experiment are presented as if they had. Even for predictive and prescriptive analytics, if you don’t understand experiments, you don’t understand analytics.”

Start with the Problem

Too often, Zettelmeyer says, managers collect data without knowing how they will use it. “You have to think about the generation of data as a strategic imperative,” he says. In other words, analytics is not a separate business practice; it has to be integrated into the business plan itself. Whatever a company chooses to measure, the results will only be useful if the data collection is done with purpose.

Like all scientific inquiries, analytics needs to start with a question or problem in mind. Whether it is a software company that wants to improve its advertising campaign, or a fast food company that wants to streamline its global operations, the data collection has to match the specific business problem at hand. “You can’t just hope that the data that gets incidentally created in the course of business is the kind of data that’s going to lead to breakthroughs,” Zettelmeyer says. “While it is obvious that some kinds of data should be collected—for example, consumers’ browsing behavior—customer interactions have to be designed with analytics in mind to ensure that you have the measures you need.”

Nor can managers rely on data scientists to take the lead. Ultimately, it is the manager’s job to choose which problems need to be solved and how the company should incorporate analytics into its operations. Executives, after all, are the ones who have to make decisions; therefore, they should play a central role in determining what to measure and what the numbers mean to the company’s overall strategy.

Understand the Data-Generation Process

“There is a view out there that because analytics is based on data science, it somehow represents disembodied truth,” Zettelmeyer says. “Regrettably that is just wrong.”

So how can leaders learn to distinguish between good and bad analytics? “It all starts with understanding the data-generation process,” Zettelmeyer says. “You cannot judge the quality of the analytics if you don’t have a very clear idea of where the data came from.”

Zettelmeyer says most managers share a common behavioral bias: when results are presented as having been achieved through complicated data analytics, they tend to defer to the experts. “There is a real danger in managers assuming that the analysis was done in a reasonable way. I think this makes it incredibly important for managers to have a sixth sense for what they can actually learn from data.” To make informed decisions, he says, it helps to take a step back and establish some fundamentals.

Because analytics often boils down to making comparisons between groups, it is important to know how those groups are selected. For example, a marketing department may want to judge the effectiveness of an ad by comparing consumers who were exposed to the ad with those who were not. If the consumers were selected randomly, the groups are what data scientists call “probabilistically equivalent,” which is the basis for good analytics. But if, say, they were exposed to the ad because they had shown prior interest in the product, this will lead to bad analytics, since not even the most sophisticated analytical techniques could provide an answer to the basic question: Was the ad truly effective or was the consumer already interested?
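
The contrast is easy to see in a toy simulation. The sketch below, in Python with purely illustrative numbers (none come from the article), builds in a true ad effect of five percentage points: random assignment recovers it, while self-selected exposure, in which interested consumers are more likely to see the ad, roughly triples the measured lift.

```python
import random

random.seed(42)
N = 200_000

def measured_lift(randomized):
    """Difference in purchase rate between exposed and unexposed consumers."""
    buys = {True: 0, False: 0}
    count = {True: 0, False: 0}
    for _ in range(N):
        interested = random.random() < 0.30          # prior interest in the product
        if randomized:
            saw_ad = random.random() < 0.50          # coin-flip exposure
        else:
            # Self-selected exposure: interested consumers are far more
            # likely to see the ad (e.g., via retargeting).
            saw_ad = random.random() < (0.80 if interested else 0.20)
        # Ground truth of this simulation: the ad adds 5 points for everyone;
        # prior interest adds 20 points on its own.
        p = 0.02 + 0.20 * interested + 0.05 * saw_ad
        buys[saw_ad] += random.random() < p
        count[saw_ad] += 1
    return buys[True] / count[True] - buys[False] / count[False]

print(f"randomized experiment:  lift = {measured_lift(True):+.3f}")   # ~ +0.05, the true effect
print(f"self-selected exposure: lift = {measured_lift(False):+.3f}")  # ~ +0.16, badly inflated
```

With randomization the two groups are probabilistically equivalent, so the only systematic difference between them is the ad itself; with self-selection, prior interest is doing most of the work.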

This is not just a marketing problem. Take, for example, a hospital that wants to replace its ultrasound machines. Thanks to advanced wireless sensors, the hospital can measure, in the course of business, exactly how long an exam takes on the new devices, a metric that would help it decide whether to switch over for good. But the data show a surprising result: the new device takes longer to use than the old one. What the hospital had not accounted for was a preexisting difference between two groups of technicians. Novice technicians, who were naturally slower than experienced ones, disproportionately chose the newer device, and this skewed the data. “The problem,” Zettelmeyer says, “is one of confounding technician experience with the speed of the device.” Again, the analytics failed because fundamental questions were overlooked: What makes technicians choose one machine over the other? Is everything about the usage of the two machines comparable? And if not, was the correct analytics used to correct for that?
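
A minimal sketch of this confounding, again with made-up numbers: in the simulation below the new device is genuinely two minutes faster for any given technician, but because novices favor it, the naive comparison makes it look about four minutes slower. Comparing within experience levels recovers the truth.

```python
import random
import statistics

random.seed(7)

# Hypothetical exam times in minutes. Ground truth of this sketch: the new
# device is 2 minutes FASTER for any given technician, but novices, who are
# 10 minutes slower overall, disproportionately choose the new device.
exams = []
for _ in range(10_000):
    novice = random.random() < 0.50
    uses_new = random.random() < (0.80 if novice else 0.20)
    minutes = random.gauss(30 + 10 * novice - 2 * uses_new, 3)
    exams.append((novice, uses_new, minutes))

def avg_minutes(rows):
    return statistics.mean(m for _, _, m in rows)

# Naive comparison: the new device looks roughly 4 minutes SLOWER.
new = [e for e in exams if e[1]]
old = [e for e in exams if not e[1]]
print(f"all techs: new = {avg_minutes(new):.1f}, old = {avg_minutes(old):.1f}")

# Comparing like with like recovers the truth: new is ~2 minutes faster.
for is_novice in (True, False):
    n = avg_minutes([e for e in exams if e[0] == is_novice and e[1]])
    o = avg_minutes([e for e in exams if e[0] == is_novice and not e[1]])
    print(f"{'novice' if is_novice else 'experienced'}: new = {n:.1f}, old = {o:.1f}")
```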

Understanding the data-generation process can also uncover the problem of reverse causality. Here, Zettelmeyer points to the case of a company deciding whether or not to limit promotional emails. The data reveal that promotional emails are extremely effective: the more emails a customer receives, the more purchases they are likely to make. But what is not apparent in the data is that the company is following a piece of marketing wisdom Reader’s Digest hit upon decades ago: loyal customers—people who bought more recently, bought more frequently, and spent more on their purchases—are more likely to buy again when they are targeted. So rather than the number of emails driving sales, the causality actually runs the other way: the more purchases customers make, the more emails they receive. This means the data are effectively useless for determining whether email drives revenue.
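
This reverse-causality trap is just as easy to reproduce. In the hypothetical simulation below, emails have zero true effect on purchases; both are driven by an unobserved loyalty level, yet the two series come out strongly correlated. (statistics.correlation requires Python 3.10 or later.)

```python
import random
import statistics

random.seed(1)

# Hypothetical customers: an unobserved "loyalty" level drives BOTH how many
# promotional emails they are sent (loyal customers are targeted more) and
# how much they buy. Emails have ZERO true effect in this sketch.
emails, purchases = [], []
for _ in range(50_000):
    loyalty = random.random()
    emails.append(int(10 * loyalty) + random.randint(0, 2))    # targeting rule
    purchases.append(int(8 * loyalty) + random.randint(0, 2))  # emails play no role

# Yet the naive correlation looks like a resounding endorsement of email:
print(f"correlation = {statistics.correlation(emails, purchases):.2f}")  # ~0.9
```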

Use Domain Knowledge

In addition to making sure that data is generated with analytics in mind, managers should use their knowledge of the business to account for strange results. Zettelmeyer recommends asking the question: “Knowing what you know about your business, is there a plausible explanation for that result?” Analytics, after all, is not simply a matter of crunching numbers in a vacuum. Data scientists do not have all the domain expertise managers have, and analytics is no substitute for understanding the business.

Consider an auto dealership that runs a promotion in February. Based on a rise in sales for that month, the dealer assumes the promotion worked. “But,” Zettelmeyer says, “let’s say what they were trying to sell is a Subaru station wagon with four-wheel drive, and they completely ignored the fact that there was a giant blizzard in February, which caused more people to buy station wagons with four-wheel drive.” In cases like these, he says, having the data is not enough.

Know It—Do Not Just Think It

As Zettelmeyer sees it, decision making in the business world is being revolutionized in much the same way that healthcare was by the widespread adoption of “evidence-based medicine.” As big data and analytics bring about this revolution, managers with a working knowledge of data science will have an edge. Beyond being the gatekeepers of their own analytics, leaders should ensure that this knowledge is shared across their organization—a disciplined, data-literate company is one that is likely to learn fast and add more value across the board. “If we want big data and analytics to succeed, everyone needs to feel that they have a right to question established wisdom,” Zettelmeyer says. “There has to be a culture where you can’t get away with ‘thinking’ as opposed to ‘knowing.’”

Developing such a culture is a big challenge for leaders. Organizations are rarely willing to admit the need for change, and few managers feel confident enough to lead with analytics. This, he says, will have to change.

“Can you imagine a CFO going to the CEO and saying, ‘I don’t really know how to read a balance sheet, but I have someone on my team who is really good at it.’ We would laugh that person out of the room,” Zettelmeyer says. “And yet I know a whole bunch of people in other disciplines, for example, marketing, who, without blinking an eye, would go to the CEO and say, ‘This analytics stuff is complicated. I don’t have a full grasp on it. But I have assembled a crackerjack analytics team that is going to push us to the next level.’ I think this is an answer that is no longer acceptable.”

Florian Zettelmeyer is the Academic Director for Kellogg Executive Education’s Leading With Big Data and Analytics program.


Republished with permission from Kellogg Insight, which is published by the Kellogg School of Management at Northwestern University. For more, visit insight.kellogg.northwestern.edu
