The acronym SMART has been around for decades, helping people to move from pie-in-the-sky goals to practical objectives that are Specific, Measurable, Actionable, Realistic, and Time-bound. Using this mnemonic as a “hack” when setting goals and metrics can be quite helpful. However, the process of defining and deploying goals includes much more than simply being SMART.
Where do metrics come from?
Just as babies aren’t delivered by a stork, metrics don’t come from thin air. Any metrics that seem to have no ties to an organization’s raison d’être will generally be met with skepticism, indifference, or confusion. Instead, metrics are created as an integral part of the organization’s strategic and tactical planning processes.
Metrics are developed as part of the deployment process of identifying a vision, defining a strategy, setting goals, and assigning actions. The vision identifies where we want to go, the strategy explains how we want to get there, the goals define where “there” is, the actions say who will do what by when, and the metrics help us measure how well we are doing on our path and how far we have to go.
Going beyond SMART metrics
SMART is a good start, but other elements are important as well. When setting goals and metrics, consider:
Basis for metrics
Within the strategic framework, goals can be set in several ways. Using historic data can help set goals and metrics for achievable continuous improvement. Benchmarking, whether against direct competitors, against similar processes or technologies, or against other successful groups within the organization, can motivate workers to understand and implement best practices. Setting visionary goals that match a disruptive strategic change can guide workers to be creative in their gap closure efforts.
Qualitative vs. quantitative
Metrics can be qualitative, often yes-or-no results, or quantitative, with clearly measurable and differentiable levels of performance. Quantitative metrics are easier to measure and often more effective, especially because they provide information on the degree of performance. Many qualitative metrics can be converted to quantitative form. For example, project implementation can be articulated as a series of milestones with percent completion against a time-based plan. Other soft qualitative areas can be measured with surveys, yielding percent satisfaction with various elements.
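To make the conversion concrete, here is a minimal Python sketch, using hypothetical milestone and survey data, of turning qualitative progress into quantitative metrics: percent of planned milestones completed and percent of respondents satisfied.

```python
# A minimal sketch (hypothetical data and field names) of converting
# qualitative progress into quantitative metrics.
from datetime import date

milestones = [  # each milestone: planned date and whether it is done
    {"name": "design review", "planned": date(2024, 3, 1), "done": True},
    {"name": "pilot run",     "planned": date(2024, 4, 1), "done": True},
    {"name": "full launch",   "planned": date(2024, 5, 1), "done": False},
]

def percent_complete(milestones, as_of):
    """Share of milestones completed among those planned by 'as_of'."""
    planned = [m for m in milestones if m["planned"] <= as_of]
    if not planned:
        return 100.0
    return 100.0 * sum(m["done"] for m in planned) / len(planned)

# Survey answers on a 1-5 scale; count 4s and 5s as "satisfied".
responses = [5, 4, 3, 5, 2, 4, 4]
percent_satisfied = 100.0 * sum(r >= 4 for r in responses) / len(responses)

print(f"{percent_complete(milestones, date(2024, 4, 15)):.0f}% of planned milestones complete")
print(f"{percent_satisfied:.0f}% of respondents satisfied")
```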
Alignment
Clearly, local and departmental metrics must be aligned with those of the overall organization. This may be a simple cascade or roll-up process. Sometimes departments and the total organization measure quite different outcomes, but they must fit together to achieve the overall vision, strategy, and goals. Remember that the focus is on serving customer needs rather than optimizing a single department. For example, delivering to meet a customer lead time goal may drive quite different local metrics than if the department sets goals and metrics to increase uptime and reduce inventory, which may actually hurt the lead time goal.
“Good enough” vs. competitive
Overdelivering something the customer doesn’t value is simply waste. Goals and metrics need to tie to the needs of the customer and the capabilities of others in the marketplace. An organization can serve customers with parity goals, matching the competition in most baseline areas, while using its strategy to select the one or two areas where it will differentiate performance from competitors and become preferred by customers. The metrics align with this matrix of goals.
Lagging vs. leading
Many metrics are lagging. Monthly and quarterly financial reports reveal how the department or organization did on managing waste, controlling costs, and driving revenue during the past month or quarter.
If the results are disappointing, nothing—other than cooking the books—can change what has already happened. Alternatively, or often additionally, the organization can use real-time leading metrics to predict performance. Using metrics for SPC (statistical process control), activity-based cost tracking, and sales prospect tracking can provide much earlier indicators of problems or progress, giving managers and workers early triggers to make course corrections long before results are collected and reported.
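As one illustration of a leading indicator, here is a minimal Python sketch of an individuals (X) control chart, a common SPC tool, that flags readings outside the 3-sigma limits as soon as they arrive. The baseline and incoming values are hypothetical.

```python
# A minimal sketch of a leading indicator: an individuals (X) control chart
# that flags out-of-control points in real time, long before the monthly report.
def control_limits(values):
    """Estimate the center line and 3-sigma limits for an individuals chart.

    Sigma is estimated from the average moving range divided by d2 = 1.128,
    the standard constant for subgroups of size 2.
    """
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

# Hourly cycle times (minutes) from a baseline period, then new readings.
baseline = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 4.1, 4.2, 3.8, 4.0]
center, lcl, ucl = control_limits(baseline)

for reading in [4.2, 4.5, 5.1]:           # incoming real-time data
    if reading < lcl or reading > ucl:
        print(f"{reading}: out of control, investigate now")
    else:
        print(f"{reading}: within limits ({lcl:.2f}-{ucl:.2f})")
```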
Aggregating metrics
Any given process could have many different measurement opportunities, possibly too many for most people to understand and integrate. Simplification by using aggregates of key items can be helpful. For example, a supply chain operation may use an OTIFNE (on time, in full, no error) measure or a customer-facing department may develop a client happiness score that incorporates customer response time, right-the-first-time answers, and customer feedback. At a higher level, the leadership team may create a balanced scorecard that incorporates KPIs (key performance indicators) in areas such as customer service, operational effectiveness, financial performance, and resource development.
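A minimal Python sketch of such an aggregate, assuming a hypothetical order record with three pass/fail checks, shows how an OTIFNE-style measure might be rolled up: an order counts only if it is on time, in full, and error-free.

```python
# A minimal sketch of an aggregated supply chain metric along the lines of
# OTIFNE. Orders and field names are hypothetical.
orders = [
    {"on_time": True,  "in_full": True,  "no_error": True},
    {"on_time": True,  "in_full": False, "no_error": True},
    {"on_time": False, "in_full": True,  "no_error": True},
    {"on_time": True,  "in_full": True,  "no_error": True},
]

def otifne_percent(orders):
    """Percent of orders delivered on time, in full, and error-free."""
    perfect = sum(o["on_time"] and o["in_full"] and o["no_error"] for o in orders)
    return 100.0 * perfect / len(orders)

print(f"OTIFNE: {otifne_percent(orders):.0f}%")   # 50% for the sample data
```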
Visual workplace
One of the best ways to help people know where to focus their efforts on achieving goals is to make the metrics visible. At a leadership level, this might involve a monthly review and discussion of a visual balanced scorecard, perhaps with KPIs linked to rolled-up departmental metrics; misses are assigned gap closure actions. On the shop floor, performance charts can graph results, taking care that “up” means “good,” and leave room for annotations with actions, suggestions, and more. All of these metrics can be marked red, yellow, or green to show unsatisfactory performance, progress, or goal achievement. Items marked red or yellow get gap closure attention, and green performance is reinforced.
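As a rough illustration of the red/yellow/green marking, here is a minimal Python sketch with hypothetical metrics, goals, and a 90% “progress” threshold; any real thresholds would come from the organization’s own goals.

```python
# A minimal sketch of red/yellow/green status marking, with "up" defined as "good".
def rag_status(actual, goal, progress_threshold=0.9):
    """Green if the goal is met, yellow if within 90% of it, red otherwise."""
    if actual >= goal:
        return "green"
    if actual >= progress_threshold * goal:
        return "yellow"
    return "red"

# Hypothetical scorecard entries: metric name -> (actual, goal).
scorecard = {
    "on-time delivery %": (99, 98),
    "first-pass yield %": (88, 95),
    "machine uptime %":   (80, 95),
}
for name, (actual, goal) in scorecard.items():
    print(f"{name}: {rag_status(actual, goal)} (actual {actual}, goal {goal})")
```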
What gets measured gets done
It’s important to remember that metrics don’t just track what has been done. They also serve as triggers for people to take desired actions. Setting clear expectations for goal achievement helps people be more effective and more motivated to achieve those goals. Teachers use rubrics that explain how learning outcomes will be judged, ranging from unsatisfactory to exemplary.
Likewise, organizations can use rubrics to help workers understand what they need to do. This is especially useful when people are learning new skills, but it can apply to experienced workers as well. Most motivated workers strive for the highest level of achievement, driving learning and skills development to the desired “best” level.
Sample Worker Mastery Rubric
Job Component | Beginning | Proficient | Mastery
Safety | Wears all required safety gear. | Can explain safety requirements and appropriate safety procedures. | Utilizes process understanding to identify and address safety concerns in operations.
Product Specs | Knows where to find product information. | Uses product information appropriately to set machine conditions. | Utilizes understanding of product specs and machine capability to predict or investigate quality problems.
Product Output | Can produce 10 good units per hour of Widget A. | Can produce 20 good units per hour of Widget A or B. | Can produce 25 good units per hour of Widget A or B and complete a setup change in 10 minutes.
Always think about unintended consequences when setting goals and metrics. Even a “good” metric can sometimes trigger dysfunctional behavior. For example, a supervisor who is rewarded for machine uptime may choose to ignore minor quality issues rather than shut the machine down to investigate root cause. A worker may not report a safety incident because he doesn’t want to be the person to end the string of days without a safety incident. A manager charged with creating a budget may predict a low output level so she can overachieve and be seen as a hero.
Overachievers striving for top performance may try to do better than a stated metric. If too much of a good thing is actually a problem, make sure that’s reflected in the metric. For example, overdelivering units of production might sound good, but it can drive waste through excess inventory, storage costs, obsolescence, mismatched resource utilization, or other problems. Setting goals within a plus-or-minus range makes the absolute limits clear, and showing overachievement as a less-than-good level of performance makes plain that it is not desirable, as in the table and the sketch that follows it.
Metric | Goal | Unacceptable | Acceptable | Ideal
Output (units) | 100 | Less than 94 or greater than 105 | 94 to 97 or 103 to 105 | 98 to 102
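A minimal Python sketch of the table above shows how a plus-or-minus goal can be scored so that overachievement is not rewarded; the ranges simply mirror the table.

```python
# A minimal sketch of scoring output against a plus-or-minus goal of 100 units,
# where overdelivery counts as less than ideal. Ranges mirror the table above.
def score_output(units):
    """Classify hourly output against the 100-unit goal."""
    if 98 <= units <= 102:
        return "ideal"
    if 94 <= units <= 97 or 103 <= units <= 105:
        return "acceptable"
    return "unacceptable"   # under 94 or over 105

for units in (96, 100, 104, 110):
    print(units, score_output(units))
```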
What to do with metrics
Having metrics just for the sake of measurement is not valuable. Operators who diligently provide data and then see that nothing is done with it quickly realize the metrics are not an important part of their job.
What is done after metrics are put in place is important. Track and discuss metrics to identify areas for gap closure activities and for recognition and reinforcement. Use the metrics themselves to help drive improved performance.