Call center metrics can include individual agent metrics, team metrics, and center-wide metrics, providing both micro and macro views of various KPIs.

By: Colin Taylor

Metrics, Key Performance Indicators (KPIs), Reports – we have a lot of names for the information and data we review to help keep our centers on track and performing as we want them to. Call Centers can generate an immense amount of data and reporting on just about anything we could conceive. But just because we can produce a report, does that mean we should? If we generated every possible report, not only would we never have enough time to read, review and understand them all, but we would likely increase confusion and ambiguity regarding the centers’ performance.

To understand the metrics and reporting we should be looking at, we need to look at the reasons reporting exists in the first place. We are all familiar with the maxim that “you can’t manage what you don’t measure”, and it is certainly true. If you don’t know how a specific area or task is performing, then you cannot gauge the improvement or degradation that results from any changes made to processes, technologies or staffing. However, there is a second reason we measure and track performance with metrics, KPIs and reports: to influence the behavior of the staff whose work is reflected in those performance metrics.

Organizational Reporting vs. Individual Reports

In a Call Center, we generally run two types of reports:
1) those that focus on the organization and
2) those that focus on individuals

Organizational reporting can cover the overall Contact Center infrastructure (multiple centers), a single center, a single queue, or a single team or work group. Individual reports show us how individual Agents or staff are performing, alone or in comparison to other individuals. So when and why would we run an organizational report versus an individual report?

Organizational reporting, by its very nature, aggregates the performance below the organizational level. So, in a multi-center environment, the enterprise report will aggregate the performance of each of the centers or lines of business across multiple centers. The center-wide reporting will aggregate the performance of the individual queues. The queue reports will aggregate the shift and team performance, while the team-level reporting aggregates the individual Agent performance.

We can think of the reporting hierarchy in terms of altitude: at the 50,000-foot view we have the big picture, but not a lot of detail or granularity. As we descend through the layers of the organization, we get increasing detail about specific subsets of the enterprise’s performance. This is analogous to an airplane descending: things get clearer as we get closer, but the breadth of the view narrows until we reach ground level. But how does this help us know when and where to employ which reports?

From the above, we can deduce that high-level (50,000-foot view) reporting is best suited for high-level, big-picture performance metrics such as Service Level, Customer Satisfaction Score (CSAT), Net Promoter Score (NPS), First Contact Resolution (FCR) or Average Speed of Answer (ASA) across the organization. These data points help senior management understand overall performance, and if they are shared regularly, leaders will develop a frame of reference for changes, patterns and shifts in the data. This logic cascades down the organization, where each level can gain a global perspective on the performance of their direct reports and the parts of the organization that report up to them.

Each level of the organization relies upon these reports to inform them of specific performance data, relative to their scope of responsibility. The ‘C Suite’ does not need to know which Agent has the lowest quality score or the highest FCR rate. They do need a global perspective on how the customer-facing Contact Centers are performing: are volumes rising? Is satisfaction increasing or decreasing? Are returns declining? And so on.

With this context in hand, we can begin to specify the reports and analysis that should take place at various levels within the center(s) and the customer-facing organization. Whereas we took a top-down approach above, let’s now look at these reports and metrics from the bottom up.

Agent Metrics & Reporting

Agents are the lifeblood of every organization that supports and services customers through a Contact Center. Without Agents, we have no ability to provide live and personalized service to our customers. The role of ‘bots’ and AI has begun to impinge on this exclusivity, but for most organizations today, live Agents remain who and how service is delivered to our customers. So what information does the individual Agent want to know about their performance?

Agents are human beings and, like all of us, they want to know how they are performing. Management and their immediate Team Leaders define what those performance standards are. Best practice is for management to be transparent about what the performance standards are and why they are important. Typical Agent metrics will likely include:

1. Login time
2. Available time
3. Busy/Not ready time
4. Talk time
5. Wrap time
6. Hold time
7. Status states (lunch, break, coaching, training, etc.)
8. Schedule Adherence
9. Average talk time
10. Average hold time
11. Average handle time
12. First contact resolution (FCR)
13. Quality/Compliance score
14. CSAT/NPS score

In a sales environment, we would add to this list sales opportunities, sales conversions, and sales dollars.
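
To make the time-based metrics above concrete, here is a minimal sketch of how a few of them could be derived from raw call records. The field names (talk_sec, hold_sec, wrap_sec, resolved) and the sample data are hypothetical, and the sketch assumes the common definition of handle time as talk plus hold plus wrap; some centers define these slightly differently.

# Minimal sketch: deriving a few of the Agent metrics above from raw call records.
# Field names and sample values are hypothetical.
calls = [
    {"talk_sec": 240, "hold_sec": 30, "wrap_sec": 45, "resolved": True},
    {"talk_sec": 310, "hold_sec": 0,  "wrap_sec": 60, "resolved": False},
    {"talk_sec": 180, "hold_sec": 15, "wrap_sec": 30, "resolved": True},
]

n = len(calls)
avg_talk   = sum(c["talk_sec"] for c in calls) / n                                   # metric 9
avg_hold   = sum(c["hold_sec"] for c in calls) / n                                   # metric 10
avg_handle = sum(c["talk_sec"] + c["hold_sec"] + c["wrap_sec"] for c in calls) / n   # metric 11
fcr        = sum(c["resolved"] for c in calls) / n                                   # metric 12

print(f"Avg talk {avg_talk:.0f}s | Avg hold {avg_hold:.0f}s | AHT {avg_handle:.0f}s | FCR {fcr:.0%}")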

Agents want to know how they are performing against their goals, company expectations and in comparison to their peers. It is a best practice to share these metrics with the Agents. Everyone wants to succeed and excel, and if we share our expectations and their performance, Agents will work to improve or will seek assistance to do so.

All of the above call center metrics can inform the Team Leader and other management how an individual is performing. Points 1-11 (above) tell the Agent and their managers how they are performing against time management expectations and in comparison to their peers. This comparison, based on their tenure and other performance metrics, can lead to additional coaching if they are lagging behind the expectation or team performance, or to benchmarking if they are exceeding both. The final three points (12-14) deal with the effectiveness of the Agent in discharging their duties: where the first 11 points tell us how the Agent is doing on the time management elements, the final three indicate the quality of the interactions they are completing.

Metrics to Exclude

You may have noted that metrics such as Service Level, calls per day, abandoned calls, occupancy, etc. are missing from this list. These metrics are excluded because the Agent has no control over them. Workforce management sets the forecasts and schedules, and if the anticipated volumes, Agent numbers or handle times do not materialize, these metrics will be adversely affected. Because these factors are not controlled by Agent behavior, we cannot hold the Agents accountable for their attainment.

A similar argument for exclusion from Agent reporting may be made for NPS, as NPS can be narrowly viewed as a ‘brand’ or enterprise question. Certainly, this is correct if the question posed is “How likely are you to recommend XYZ Company to a friend or acquaintance?”. If that is how the question is presented, then it does not belong on the list of Agent reporting metrics. If, however, it is positioned as “Based on your call with Suzy, yesterday at 2 pm, how likely are you to recommend calling Suzy at the XYZ Contact Center to a friend or acquaintance?”, then the reference is clearly the Agent interaction and not the company or brand. In this case, I would suggest that including it in the Agent metrics is appropriate.
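
For reference, the NPS arithmetic is the same whichever way the question is framed: respondents scoring 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. The sketch below applies this to a hypothetical set of Agent-level survey responses.

# Minimal sketch: computing an Agent-level NPS from survey responses tied to
# the Agent's own interactions (as in the "recommend calling Suzy" framing).
# The responses list is hypothetical sample data.
responses = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]   # 0-10 "likelihood to recommend" scores

promoters  = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = 100 * (promoters - detractors) / len(responses)

print(f"Agent-level NPS: {nps:+.0f}")   # +30 for this sample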

Team Metrics and Reporting

The focus of the Team Leader or Supervisor is to maximize the performance of their team. As a result, they will want to examine all of the identified Agent reporting metrics and, in addition, schedule adherence, which is generally within their control and purview. They will view these metrics through a slightly different lens: the Team Lead will want to see how their team members compare to one another. The objective is that by viewing all Agents across each of these metrics, we can see and recognize the best performers. We can also identify the laggards who bring down the overall team performance; these weaker performers are prime candidates for additional training and coaching, and the impact of that training and coaching can be evidenced in subsequent weekly reporting.
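
As a simple illustration of that side-by-side view, the sketch below ranks a team on a single metric (AHT) and flags potential coaching candidates. The team data and the 15% threshold are hypothetical choices, not a standard.

# Minimal sketch: comparing team members on one metric (AHT) to flag coaching
# candidates. The agent names, AHT values and the 15% threshold are hypothetical.
team_aht = {"Alex": 395, "Priya": 430, "Sam": 610, "Dana": 415}   # seconds

team_avg = sum(team_aht.values()) / len(team_aht)
for agent, aht in sorted(team_aht.items(), key=lambda kv: kv[1]):
    flag = "  <- coaching candidate" if aht > team_avg * 1.15 else ""
    print(f"{agent:6s} AHT {aht}s (team avg {team_avg:.0f}s){flag}")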

In addition, Team Leads will want to know how they are performing relative to other Team Leads. By sharing the performance of all teams against the same metrics and criteria, individual Team Leads can see their performance in the context of the center or line of business. This transparency allows the Team Lead to consult with their peers and manager on how best to improve the performance of their team.

The Workforce Manager

Workforce management (WFM) is a specialized role in most centers, and the Workforce Manager, or head of the WFM function, is responsible for establishing the volume forecasts and distributions against which all Agents are staffed and rostered. It is the Workforce Manager who needs to understand contact volumes, distribution by channel, and distribution by hour or day-part. Actual results are then compared to the forecast so that the forecast’s accuracy, or alternatively its degree of variance, can be identified. Variance is a critical metric as it shows where the center was appropriately and inappropriately staffed. The Workforce Manager will then drill down and complete a root-cause analysis (RCA) to identify any factors that may have impacted the expected volume or distribution. This RCA exercise allows future forecasts to be informed by past history and leads to a virtuous cycle of improving accuracy.
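
A minimal sketch of that forecast-versus-actual comparison at the interval level is shown below; the interval labels and volumes are hypothetical.

# Minimal sketch: comparing forecast to actual volume by interval to surface
# where the center was over- or under-staffed. Interval data is hypothetical.
forecast = {"09:00": 120, "09:30": 140, "10:00": 160, "10:30": 150}
actual   = {"09:00": 115, "09:30": 170, "10:00": 158, "10:30": 120}

for interval in forecast:
    variance_pct = 100 * (actual[interval] - forecast[interval]) / forecast[interval]
    print(f"{interval}: forecast {forecast[interval]:3d}, actual {actual[interval]:3d}, "
          f"variance {variance_pct:+.1f}%")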

In addition, the Workforce Manager will examine Adherence, AHT, and Occupancy, as these factors all impact the center’s ability to serve and support the received volume and distribution. They will look specifically at adherence in terms of how well the staff adhered to the published schedule, including absenteeism and how well staff kept to start times and scheduled breaks. AHT and Occupancy provide insight into the Agents’ activities and, again, variance to the expected activities is identified. The Workforce Manager can then provide guidance to the site or center manager on the variances to forecast and plan, and the impacts of those variances on staffed hours and service level performance. Once again, this performance data can be employed to change or vary assumptions on volumes, channels, distribution, handle time, occupancy and adherence, and to inform future forecasts. It can also show the Workforce Manager how they are performing versus other Workforce Managers in other centers or other Lines of Business (LOB), a comparison that can help surface issues and challenges that transcend a particular center or LOB.
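
For clarity, here is a minimal sketch of two of those measures as they are commonly defined: schedule adherence as time spent in the scheduled state over scheduled time, and occupancy as handle time over total logged-in productive time. Exact definitions vary between centers, and the numbers below are hypothetical.

# Minimal sketch of two of the formulas the Workforce Manager reviews.
# Exact definitions vary between centers; the numbers below are hypothetical.

# Schedule adherence: time spent in the scheduled state vs. time scheduled.
scheduled_min    = 480   # an 8-hour shift
in_adherence_min = 454   # minutes the Agent was where the schedule said to be
adherence = in_adherence_min / scheduled_min

# Occupancy: time spent handling contacts vs. total logged-in productive time.
handle_min    = 336   # talk + hold + wrap
available_min = 96    # logged in, waiting for a contact
occupancy = handle_min / (handle_min + available_min)

print(f"Adherence {adherence:.1%}, Occupancy {occupancy:.1%}")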

Quality Manager

Quality reporting can vary considerably between verticals and centers, based upon a number of factors such as regulation and the quality model employed: call monitoring and scoring against a checklist, a compliance model, a CSAT/NPS approach, or some hybrid variation. Regardless of the approach employed in the center, the Quality Manager will want to understand quality performance by Agent, team and/or LOB. The Quality Manager will wish to see whether there is an improving trend overall for the center and whether individual and team performance has improved following past coaching interactions. A failure to see improvement in quality scores is a cause for concern, not just for the Quality Manager, but also for center/LOB and team managers.
This performance data can also be employed to show the Quality Manager how they are performing versus other Quality Managers in other centers or other LOBs. This comparison can help to surface issues and challenges that may transcend a particular center or LOB.
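
One simple way to check for that improving trend is to look at the week-over-week change in average quality scores, as in the sketch below; the weekly scores are hypothetical.

# Minimal sketch: checking whether team quality scores show an improving trend
# after coaching. The weekly average scores are hypothetical.
weekly_scores = [82.1, 83.4, 83.0, 85.2, 86.5]   # average quality score per week

deltas = [b - a for a, b in zip(weekly_scores, weekly_scores[1:])]
trend = sum(deltas) / len(deltas)
print(f"Average week-over-week change: {trend:+.2f} points")
if trend <= 0:
    print("No improvement - a cause for concern worth investigating.")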

Call Center Metrics and Reporting

The Site/Center Manager or LOB Manager needs a 360° view of their center or LOB. By reviewing these metrics, they can see how the center is performing by LOB and team, and how well the workforce and quality processes are being managed. Key reporting of interest to this manager includes:
• LOB/team performance

o versus goals
o between teams
o by center
o service level
o forecast vs actual variance (volume, adherence, occupancy, and costs)
o quality, coaching effectiveness, etc.

Additionally, the center manager will wish to see how their performance compares to other centers or LOBs within the organization. Visibility into their own performance, and the ability to compare and contrast with their peers, provides insight into how the center or LOB is functioning and creates opportunities for improvement or best-practice sharing.

Senior Leadership

Senior Leaders will likely require the fewest reports, as their interest is generally focused on how well the center is performing versus expectations. Those expectations typically concern volumes, service level, quality and budget. In many organizations, Senior Leaders just want to understand the variance in these call center metrics and its likely impact on the operational and financial performance of their areas of responsibility.

By keeping Senior Leaders apprised of the centers’ performance through these key metrics and reports, an organization can create a level of understanding and appreciation for the centers’ activities and establish an interconnected hierarchy of reporting that allows the centers’ performance to be viewed at every level, from the big picture down to the highly granular. These reports also allow Senior Leaders to assess the impact on Agents and on the budget.

Specific reporting requirements may vary between centers, LOBs and industries, but at the end of the day we want to ensure that, at every level of the organization, we can compare our performance to that of our peers, identify improvement opportunities, identify best practices that can be shared, and continue to refine our assumptions and processes to become both more effective and more efficient.

Please contact ApexCX for all your customer experience needs.

Aug 23, 2017