Understanding Data

The power of data is often spoken of, but to wield that power, a company must internalize the complexity, discipline, and skill required to generate reliable internal data and use it effectively. Misunderstanding data can lead to very costly mistakes.

Data and insights are popular buzzwords in business circles, and machine learning and artificial intelligence are giving them a run for their money. But is it that simple? How often are these words dropped into sentences without a clear handle on the underlying principles? Important underlying concepts determine whether data is practically useful or completely misleading.

It is critical that both analysts (people who assess data for insightful messages) and decision-makers (people who review data assessments to choose the right course of action) internalize the pitfalls around data if the company is to improve its standing on the Objective & Analytical Culture maturity model.

Small- to mid-sized companies have to make major decisions about their future with little historical precedent, based on the sparse data available. Misunderstanding that data will have far-reaching consequences for companies with only a few years of history. Fast-growing small- to mid-sized companies do not have the luxury that tenured companies have: historical reference points against which to compare analytical recommendations.

Fundamentals first!

What is Data? Process-Data Symbiosis

Data is the qualitative or quantitative recorded reflection of behaviors in a process (inside or outside a company), captured manually or automatically, based on previously set guidelines. In other words, data is just a reflection of the process. As stated under Strong Process Culture, there is no data without process. It is a symbiotic relationship where well-designed processes capture useful data and poor processes capture tarnished and misleading data.

Speaking of misleading data, books can be written on this topic. For the moment, let’s explore two of the most critical aspects analysts and decision-makers should internalize to move further to the right of the Objective & Analytical Culture maturity model.

  1. Data Flaws: These are inherent flaws in data that analysts cannot fix except by finding alternate data. Companies need to preemptively remedy these flaws or avoid using such data for decision-making.
  2. Data Biases: These are not outright flaws in the data. Biases are natural human skews that are either captured in the data or affect people’s ability to look at the data objectively. Every company should leverage analysts with deep expertise to assess data and limit the risk of poor recommendations misleading the company’s direction.

Data Flaws

Data is often sliced and diced by many individuals in a company to get to underlying messages. However, what if the underlying messages are a result of disruptions in the process itself? Unfortunately, “messages” in data are far more often a reflection of process disruptions than real trends or insights.

There are two common types of data flaws. Both create the illusion that behaviors or results from a process have meaningfully changed, when in reality the observable change in the data is an undesirable aberration. Every analyst should evaluate the data for manifestations of both flaws before considering the information ready to be analyzed.

1: Process breakages

Process breakages occur when poorly designed processes cause poor execution, especially of the information-capture steps. They pollute data with incorrect entries and leave no obvious clues as to which entries are incorrect; i.e., the data does not reflect the process actually being executed.

In a day-to-day example of measuring health habits using a Fitbit watch (a health monitoring device), process breakage can happen in many forms:

  • The user can be careless and forget to wear the watch on certain days before heading to work. The data will incorrectly show those days as lazy ones, while the reality is different.
  • The user may not be well-trained on the Fitbit and may log exercises poorly, overstating or understating their length.

Unfortunately, there is nothing reasonable that can be done with the data in such circumstances. Attempting to go back and “fix” the entries will most likely create biases (see below) in the data, causing a different set of problems.
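As a rough illustration of how an analyst might quarantine suspected breakages rather than “fix” them, the sketch below flags days with too little wear time as unmeasured instead of treating them as lazy days. The field names, threshold, and records are all hypothetical:

```python
# Hypothetical daily Fitbit-style records; all field names and values are assumptions.
records = [
    {"day": "Mon", "wear_minutes": 840, "steps": 9200},
    {"day": "Tue", "wear_minutes": 0,   "steps": 0},      # watch left at home
    {"day": "Wed", "wear_minutes": 790, "steps": 7100},
]

MIN_WEAR = 600  # minimum minutes worn for a day to count as measured

# Separate genuinely measured days from suspected process breakages,
# instead of letting zero-step days masquerade as "lazy" ones.
measured = [r for r in records if r["wear_minutes"] >= MIN_WEAR]
suspect = [r for r in records if r["wear_minutes"] < MIN_WEAR]

avg_steps = sum(r["steps"] for r in measured) / len(measured)
print(f"Average steps over {len(measured)} measured days: {avg_steps:.0f}")
print(f"Excluded {len(suspect)} suspect day(s): {[r['day'] for r in suspect]}")
```

Excluding suspect days shrinks the sample, but that is preferable to letting breakage-polluted entries drive the average.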

2: Process and measurement changes

In small- to mid-sized companies with poor operational governance, process changes are decided through quick in-person conversations or informal email exchanges. Moving fast is fine, but it also creates problems. When employees are asked to change their behavior, or the definition of a process step is changed ad hoc without adjusting the information-capture guidelines, the company is distorting its data. Essentially, the data from before the change cannot be compared to the data from after it.

In the same example of wearing a Fitbit to measure a person’s exercise habits and heartbeat, data flaws due to process or measurement changes might include:

  • Comparing the number of exercises between two weeks is flawed if the individual had to log exercises manually during the first week and automated exercise capture was turned on just before the second week.
  • Comparing heartbeat data between two weeks is flawed if the individual switched the watch from the non-dominant to the dominant hand without changing the watch setting.
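One lightweight guard against this flaw is to tag every record with the capture method in force when it was recorded, and refuse comparisons across methods. A minimal sketch, with hypothetical field names and values:

```python
# Hypothetical weekly exercise counts, each tagged with how they were captured.
weeks = [
    {"week": 1, "capture": "manual",    "exercises": 3},
    {"week": 2, "capture": "automatic", "exercises": 7},
]

def comparable(a, b):
    """Two records are comparable only if captured under the same process."""
    return a["capture"] == b["capture"]

w1, w2 = weeks
if comparable(w1, w2):
    print(f"Change: {w2['exercises'] - w1['exercises']} exercises")
else:
    print("Not comparable: the measurement process changed between weeks")
```

Here the week-over-week jump from 3 to 7 exercises reflects the switch from manual to automated capture, not a real change in behavior, and the tag makes that visible before anyone draws a conclusion.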

Takeaways: Both types of data flaws are likely hiding in many data sets, and it takes experienced analysts and decision-makers to steer clear. An analyst should only attempt to learn about and make recommendations with data after validating that the process underlying the data wasn’t broken or changed. Decision-makers should have the wherewithal to pressure-test recommendations and ensure that the analysis didn’t use flawed data. Process-Data Symbiosis is an often-ignored reality, and ignoring it produces poor recommendations that can harm an organization.

Data Biases

Unlike data flaws, which are potholes in the data itself, biases are natural human tendencies that manifest in the data or affect the use of data. They skew execution of processes and replicate such tendencies in the data, or drive observers to look at the available data in a skewed manner. Cognitive biases form an entire branch of study called behavioral economics. So, we will just broadly classify biases into two summary themes that impact data from an analytical perspective: 1) Biases in data capture, and 2) Biases in observer mindset.

1: Biases in data capture

Human beings demonstrate natural biases that create a personal slant in the way process steps are executed and that slant is captured in the data. Biases that impact processes and data capture include Framing bias, Confirmation bias, and Anchoring bias. Unless these biases are addressed by employees in critical roles, the organization will be left with suboptimal data. Practical examples might include:

  • Asking customers leading questions to understand their decisions leaves the organization with data that skews toward internal perspectives. This is an example of confirmation bias.
  • During a sales pipeline review, if sales leaders ask sales reps, “Is there any reason not to close any of these deals?”, the answer is likely to be skewed by the organizational dynamics the sales rep is facing. The result is a skewed sales pipeline; this is an example of framing bias.

These skews don’t make the data wrong, technically speaking. However, the messages that analysts and decision-makers can take away from such data can be very misleading, especially if they don’t understand the impact of biases and how they might have skewed the data during information capture.

2: Biases in observer mindset

Even if we assume that data captured has no biases, certain types of biases creep in as the data is being assessed. Every senior executive and analyst has to develop hypotheses to look at data. Just wading through information hoping to find answers is not effective. Developing the right hypotheses is critical to using data. Over-confidence Bias, Self-serving Bias, Herd Mentality, and Narrative Fallacy are all cognitive skews that can lead a company astray. Analysts and decision-makers should look in the mirror and consider these questions:

  • Over-confidence bias – Am I so sure of my own judgment that I stop questioning what the data actually shows?
  • Self-serving bias – Do I have a vested interest in proving a point and thus missing the real insight?
  • Herd mentality – Am I tempted to look for messages that align with the thinking of the whole group?
  • Narrative fallacy – Has the company created positive storylines that are being used as the truth while the data might indicate otherwise?

These biases aren’t anyone’s fault. They are natural human tendencies. However, accommodating for them is a practiced skill and requires self-awareness. Senior executives and analysts operating in critical roles without internalizing these concepts are at risk of misleading the company.

Remedies To Accommodate Flaws And Biases

There are no hard and fast rules to ensure that a company understands its data well and uses it prudently to make decisions. However, a strong process designer and an experienced analytical team (analysts and decision-makers) can account for the pitfalls and move the organization towards an “insightful” maturity level.

1: Nip it in the bud

Most data flaws can be prevented with strong process design and governance. This responsibility falls to the company’s process designer. A good process designer (a small- to mid-sized company only needs one) ensures that information capture is effective through the following means:

  1. Always break down derived data into basic components and only expect the process to capture the basic components. For example, don’t expect the process to capture ‘total activity duration’; capture only the beginning and ending times through the process (if possible, automatically). The complex and subjective ‘total activity duration’ can then be derived.
  2. Capture information in the same sequence as it is performed in the process; i.e., design the information-capture aspects to help the Frontline Resource complete the task rather than creating onerous additional data-capture tasks.
  3. All process changes likely create a data distortion. Ensure there is a simple governance approach where the information-capture mechanism retroactively accounts for the changes in the process. This prevents process and measurement changes from degrading data quality.
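Point 1 above can be illustrated with a small sketch: the process captures only the basic components (start and end timestamps; the values and field names here are hypothetical), and the derived ‘total activity duration’ is computed afterward rather than asked of the Frontline Resource:

```python
from datetime import datetime

# The process captures only the basic components: start and end timestamps.
log = {"start": "2021-03-01T06:30:00", "end": "2021-03-01T07:15:00"}

start = datetime.fromisoformat(log["start"])
end = datetime.fromisoformat(log["end"])

# The complex, subjective 'total activity duration' is derived, not captured.
duration_minutes = (end - start).total_seconds() / 60
print(f"Derived activity duration: {duration_minutes:.0f} minutes")
```

Because duration is computed from the raw timestamps, it cannot drift with individual judgment about what “counts” as activity time, and any later change in the definition of duration only touches the derivation, not the captured data.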

2: Leverage analytical experts

Similar to operational expertise, the skills of developing objective and comprehensive hypotheses, choosing the appropriate analysis, assessing the available data while being wary of Process-Data Symbiosis, and teasing out the right insights are not easily learned. It takes years of practice in the right teaching environments to hone them. An organization that is ready to scale should have a well-trained analyst on staff. A common folly around analytical expertise is confusing reporting with analysis. Additionally, the key decision-makers in the organization have to internalize the concepts listed in the sections above.

Although using data to make decisions is broadly accepted as an absolute necessity, small- and mid-sized companies often trivialize the foundational structure and discipline necessary to create and use data. An organization-wide internalization of several core process and data concepts forms the groundwork on which strong strategic and operational insights and decisions sit.

Published By

John Oommen
