What a Dinosaur Taught Me About User Adoption

Throughout my career I've been part of many new technology implementations. An important element to nearly all of them has been successful user adoption. 

Years ago, the customer service contact center at the bank where I worked implemented a new performance reporting application from Merced Systems (side note: several years later, Merced Systems was acquired by NICE). We had been managing call center operations using antiquated and disjointed reporting across the phone system, a home-grown call quality database, and our customer servicing applications. 

We had, a few years before, implemented an earlier version of Merced’s performance reporting application. That rollout was led by our Information Technology department, and the Operations department was not fully engaged in the original design discussions. Success was defined as a high percentage of agents logging in each week. The expectation was that the tool would be exactly what we needed to reduce average handle time and improve call quality scores, and that the gains in efficiency and satisfaction would more than pay for our investment in less than one year.

But that’s not what happened.

It bombed. 

Agents hated it. They didn’t like the way their performance was presented. They didn’t like that it was yet another system with yet another password to remember. They didn’t understand all of the metrics we had crammed onto their “personalized dashboards”. Team Managers used it for a while, but only because their managers made it “mandatory”. That didn’t last long. After just a few months, many of them went back to creating their own spreadsheets and databases to report metric performance to their team members and managers.

Not only were we in danger of missing the return we had forecast to justify the investment, we were seeing something far more troublesome: a negative return. We were actually losing ground on our efficiency and quality results because Team Managers were spending even more time manually creating reports.

What did we do wrong?

As can sometimes happen with reporting projects, the goal eventually became producing reporting data rather than solving the problems the data was supposed to address. We didn’t bother asking the agents for their input; if we had, we might have caught that shift in focus. Instead, we focused our efforts on middle managers and our executive sponsor. (Executive sponsorship is still crucial, of course, but it alone will never ensure success.)

The Re-Launch

Fast forward a few years, and we once again found ourselves at the table with Merced Systems. They had made improvements to their service and wanted to try another launch. The team at Merced thoroughly understood the call center space, and understood that front-line supervisors needed current, actionable data to guide their efforts. They also helped us understand what had caused the failure of the first launch. We knew that neither the agents nor the Team Managers were to blame; we as managers had failed them.

What did we do?

For the re-launch, and on the advice of our partners at Merced, we started over completely. We put the agents and team managers first. We sought their buy-in from the start and rallied around the notion that if we couldn’t sell our most important users on the benefits of this new technology, then we had no business pursuing it. There was a new executive sponsor in our organization and, to his credit, he agreed that success depended on user adoption and encouraged the team to move forward. Don’t get me wrong, we still had to make the financial case for re-launching to earn his support. But, understanding what had happened with the first launch, he backed us once we convinced him we would build it from the agents’ and team managers’ perspectives.

This time, we included agents across many tenure ranges and skill levels, many of whom had never participated in these types of focus groups. We held many initial requirements-gathering sessions with those agents and team managers and, along the way, went back to them whenever we had further design questions or updates to share.

Branding

Besides the important steps of getting feedback from the agents and the team managers, and addressing the technical issues (single sign-on, what a novel concept), the most important element of our new implementation was the branding. The team at Merced was not the first to tell us that "Merced Systems" was a terrible name for a performance reporting application. We first heard that sentiment from, you guessed it, our agents and team managers. That’s right, the very people whose support we needed to succeed.

When it came to naming this new tool, we needed something unique that did a better job of communicating what the tool actually did. We asked for, and received, a number of suggestions from agents and team members, but it was someone from Merced who offered up the name that would stick.

[Image: Statzilla hat day, 2006]

Mike Leonard, who at the time was the Western Regional Sales Director at Merced Systems, suggested “Statzilla”. He offered it up in a “you can name it anything you’d like, even something like Statzilla” kind of way.

Statzilla. That was it. Before we launched, we went to the group of agents and team managers with a selection of suggested names. Statzilla won in a landslide. 

Successful branding of a web-based call center performance reporting service is more than just a name. Equally important for us were all of the accoutrements. The team at Merced helped us with the imagery and, in a stroke of genius, sent me a dinosaur mask. I wore that mask all over the department, all over our campus. I, and any other brave soul who donned the mask, became the embodiment of Statzilla. It was quirky. It was corny. It was fun. And it worked. It helped us successfully launch this new system.

One Version of the Truth

Another lesson I learned during this time was that there can be only one version of the truth, one version of the data. With our first implementation, we did not retire the old reporting systems. It doesn’t take a rocket scientist to figure out, in hindsight of course, that having multiple versions of our data meant one of them would lose out and, even more damaging, that eventually neither would be trusted. When we re-launched, we suspended all access to all other reporting systems. If a leader needed an operational metric from the call center, they would have to get it from Statzilla.

Success

After a short pilot, we eventually launched Statzilla across all of our service call centers. Team members across the organization were finally aligned around common goals and a common language for talking about them. Managers at all levels were able to more quickly recognize great performers and spot changes in productivity or quality trends.

All of this happened more than a decade ago, but I regularly call on the lessons I learned during that second implementation. I still have the Statzilla mask, and in addition to occasionally using it to scare my children, it serves as a useful reminder to look at new tools from the end user’s perspective, to remember that the names we give these tools matter, and, when there is an option to do so, to always go with the silly mask.

Postscript

I am very proud of the work we did with Merced Systems. The developers at Merced worked tirelessly to solve a fundamental problem we had in our call center. Hierarchical reporting is the foundation of solid call center performance reporting, and team changes are a fact of life in most large call centers. Every reporting platform we had used treated those team changes the same way: agents would always see their own data regardless of how many teams they had been on during a reporting period, while managers would only see an agent’s data for the time that agent reported to them. We wanted our performance reporting system to handle team changes (for instance, in the middle of the month) in a way that would allow all of the affected agent’s data for the period to roll up through that supervisor (even for the time the agent wasn’t on their team). The team at Merced was committed to delivering on this requirement, for which they eventually received a patent, and along the way I learned the meaning of “temporal specificity” (US Patent No. 7,856,431, “Reporting on facts relative to a specified dimensional coordinate constraint”). So Statzilla wasn’t just a crazy name; it was also the spirit behind a use case that ultimately became patented technology.
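
To make that roll-up idea concrete, here is a minimal, hypothetical sketch (my own illustration, not Merced’s actual implementation or the patented method) of how a reporting layer might handle mid-period team changes. It assumes a simple assignments table with effective date ranges and rolls an agent’s full-period call stats up to whichever supervisor the agent reports to at the end of the period, even for days when the agent was on a different team.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict
from typing import Optional

# Hypothetical data model, for illustration only.
@dataclass
class Assignment:
    agent: str
    supervisor: str
    start: date                 # first day on this supervisor's team
    end: Optional[date] = None  # None means the assignment is still current

@dataclass
class CallStat:
    agent: str
    day: date
    handle_time_sec: int
    quality_score: float

def supervisor_on(assignments, agent, as_of):
    """Return the supervisor the agent reports to on a given date, if any."""
    for a in assignments:
        if a.agent == agent and a.start <= as_of and (a.end is None or as_of <= a.end):
            return a.supervisor
    return None

def rollup_full_period(assignments, stats, period_start, period_end):
    """Roll each agent's entire period of stats up to the supervisor they
    report to at period end, rather than splitting the data at each team
    change - a rough sketch of the behavior described above."""
    by_supervisor = defaultdict(list)
    for s in stats:
        if not (period_start <= s.day <= period_end):
            continue
        sup = supervisor_on(assignments, s.agent, period_end)
        if sup is not None:
            by_supervisor[sup].append(s)
    return {
        sup: {
            "calls": len(rows),
            "avg_handle_time_sec": sum(r.handle_time_sec for r in rows) / len(rows),
            "avg_quality": sum(r.quality_score for r in rows) / len(rows),
        }
        for sup, rows in by_supervisor.items()
    }
```

Under a scheme like this, a supervisor who inherits an agent mid-month sees that agent’s whole month of data, which is the behavior we were after; the real system did far more than this sketch, but it captures the basic idea.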

[Image: Statzilla login screen, 2006]