It’s easy to treat measuring everything as a deification of numbers. We’ve all heard the well-known axiom “You can’t manage what you don’t measure.” But isn’t it fair to ask how and why you are measuring in the first place?
Testing your metrics may not sound like fun, but it shapes how accurately you see your status and progress as a business. Each metric should be scrutinized and chosen carefully, taking into account its sensitivity, validity, responsiveness, reliability, and cost/benefit.
Sensitivity concerns how readily a metric registers a shift or change in the underlying construct. By what percentage must the underlying setup (e.g., campaign costs) change before the metric detects or registers it? Test how sensitive each metric is to changes in the underlying setup.
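As a rough illustration of this kind of test, consider a dashboard that displays cost per lead rounded to whole dollars: how large a cost shift can occur before the displayed figure moves at all? All numbers and the rounding rule below are hypothetical.

```python
def displayed_cpl(campaign_cost, leads):
    """Cost per lead as a dashboard might display it: rounded to whole dollars."""
    return round(campaign_cost / leads)

def min_detectable_shift(base_cost, leads, step=0.5, limit=100.0):
    """Smallest percentage increase in campaign cost that changes
    the displayed cost-per-lead figure."""
    base = displayed_cpl(base_cost, leads)
    pct = step
    while pct <= limit:
        if displayed_cpl(base_cost * (1 + pct / 100), leads) != base:
            return pct
        pct += step
    return None

# With $10,000 spend and 250 leads (CPL $40), cost shifts below ~1.5%
# never move the displayed number -- the metric is insensitive to them.
print(min_detectable_shift(10_000, 250))  # → 1.5
```

The same probing logic applies to any metric: vary the input by increasing amounts and record the point at which the reported value actually changes.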
Validity is most commonly associated with accuracy and reasonableness, but meaningfulness deserves equal attention: a measure must be valid for its intended purpose. One particular factor may be valid for determining market share, but not for a business valuation.
According to research, there are at least two ways to test the validity of a specific measure.
- Content validity refers to the extent to which the items on a test are fairly representative of the entire domain the test seeks to measure. It checks whether other subject-matter or content experts would agree on the use of the measure for its intended purpose.
- Predictive validity refers to the degree to which test scores accurately predict scores on a criterion measure. It checks whether the measure is predictive of the construct it’s intended to provide guidance about.
Predictive validity is more applicable to a key performance indicator (KPI), since a KPI should be predictive of a macro measure such as revenue.
Responsiveness concerns how quickly the value of the measure changes as the underlying construct changes. Your metrics should be responsive enough to pick up a change soon after the underlying measure has been updated. Ideally, executives can pull up their dashboard of metrics and see near-real-time updates, with every metric responding promptly to actual changes in their operations.
Reliability implies repeatability or consistency. A measure can be called reliable if it gives the same results (given the same measured factors) over and over again.
If the way you measure is not clearly defined, you leave room for misinterpretation, which introduces error, or “noise,” and makes your metrics unreliable.
The push for reliability is the fundamental basis for standardized processes; reliability is a necessary but insufficient condition for a measure to be valid.
For example, in sales, pipeline analysis (e.g., I’m in the Qualify or Develop stage) is meaningless without a clearly defined sales process and stage-gate criteria.
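One simple repeatability check is to have several people measure the same thing independently and compare the spread of their answers. The sketch below uses invented numbers: five analysts each compute average sales cycle time for the same set of closed deals, with and without documented stage-gate criteria.

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """Relative spread of repeated measurements of the same underlying factor;
    lower means more repeatable, hence more reliable."""
    return stdev(samples) / mean(samples)

# Hypothetical: five analysts, same closed deals, sales cycle time in days.
with_stage_gates = [31, 30, 31, 30, 31]     # documented stage-gate criteria
without_stage_gates = [24, 41, 33, 28, 45]  # each analyst interprets stages

print(f"{coefficient_of_variation(with_stage_gates):.2f}")     # low spread
print(f"{coefficient_of_variation(without_stage_gates):.2f}")  # high spread
```

A wide spread on identical inputs signals that the measurement process, not the business, is producing the variation.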
Cost/benefit focuses on the costs and benefits of measuring a particular construct. How long does it take to collect, input, and analyze the necessary information (to determine sales cycle time or cost per lead, for example)? What benefits (better resource allocation, for example) accrue from measuring the construct?
Considering the alternatives, do the benefits outweigh the costs in terms of the time, focus, and analysis required to measure the construct?
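The comparison itself can be back-of-envelope arithmetic; every figure below is invented for illustration.

```python
# Monthly cost of tracking one metric: analyst time to collect, input, analyze.
hours_per_month = 6
hourly_rate = 75                  # fully loaded analyst rate, $/h
monthly_cost = hours_per_month * hourly_rate

# Estimated monthly benefit, e.g. budget shifted away from weak channels.
monthly_benefit = 1_200

net = monthly_benefit - monthly_cost
print(net)  # → 750; positive means the metric pays for itself
```

If the net is negative, or barely positive, the metric is a candidate for retirement regardless of how interesting it looks on a dashboard.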
The above criteria can help you better understand the pros and cons of each metric used to monitor and track the status of a business, unit, and/or functional department. To be useful, each metric should be evaluated in the context of the portfolio of measures, since any single measure, just like any single asset class, may be unbalanced or risky in isolation.
The criteria mentioned above may seem like common sense, but they are not common practice. Many companies try to roll up or trend “apples and oranges” or inconsistent data, leading to erroneous conclusions and risky strategies.
The lifeblood of measurement is how you execute the measure: how you define and measure it in practice, and how you operationalize measures to accurately reflect a certain phenomenon, trend, or discrepancy.
Finally, because measurement is a tool for driving change, each metric should be tested rigorously so you do not dilute focus or waste time and energy on measures that provide little direction, are not actionable, and/or are not closely aligned with the goals and objectives of the organization.