What Comes Before "The One Number You Need To Grow"

Guest post by Andrew Gilliam

About six months ago, the survey that I'd been working on for almost two years went live. It's the first step toward revitalizing our Voice of the Customer program. From the beginning, I recognized that capturing actionable data would be both challenging and essential to gaining traction.

The old survey had been stagnant for about a decade. It was designed by committee, didn't ask tough or useful questions, and came to be largely ignored. After just a few months of living under its shadow, I began to notice patterns in the types of responses we received.


A prominent and relatable trend was the correlation between unresolved issues and minimum ratings. Occasionally customers were erroneously invited to take surveys before we finished working on their request. In other cases, we failed to address a multi-part problem completely.

In these situations, customers seemingly ignored the questions and chose the minimum rating regardless of our actual performance. The speed of our response and the representative's courteousness didn't matter if the issue wasn't resolved. I can't blame them for this; in fact, I've done it myself. It seems fair to raise as many red flags as you can when something isn't right.

Unfortunately, this behavior invalidated the data, because the questions weren't answered objectively. It also made it more difficult to isolate the source of a problem when the ratings weren't directly related to the cause. Furthermore, erratic answers made it impossible to automatically classify concerning responses and alert us to them. Asking CSAT (customer satisfaction) questions first seemed to waste customer effort without delivering any value.

Goodbye Uncertainty, Hello Absolute Resolution

I answered these challenges with an innovative type of question called Absolute Resolution (AR). It's the very first question on the survey, and its goal is to assess closure and completeness, not satisfaction. This yes-or-no question is extremely simple: "Would you like us to contact you to address any concerns or unresolved issues?"

Respondents who choose yes are asked to describe the unresolved issue or question; the survey then ends, triggering an instant email to the service recovery team. Choosing no continues on to the satisfaction, loyalty, and effort questions. To give respondents an idea of how this choice affects their participation in the survey, the options are labeled "Yes: I still need help, please contact me" and "No: I'm ready to take the survey."
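
To make the flow concrete, here's a minimal sketch of the branching logic in Python. Every name and field here is hypothetical; the real behavior lives inside our survey platform.

```python
# A hypothetical sketch of the AR branching flow; these function and
# field names are illustrative, not a real survey tool's API.

def notify_service_recovery(customer_id: str, issue: str) -> None:
    """Stand-in for the instant email to the service recovery team."""
    print(f"ALERT: customer {customer_id} still needs help: {issue}")

def route_ar_answer(response: dict) -> str:
    """Return the next survey step based on the Absolute Resolution answer."""
    if response["ar_answer"] == "yes":
        # "Yes: I still need help, please contact me" -> describe issue, end survey
        notify_service_recovery(response["customer_id"], response.get("issue", ""))
        return "end_survey"
    # "No: I'm ready to take the survey" -> satisfaction, loyalty, effort questions
    return "continue_to_questions"
```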

I dreamed of AR being an elegant solution to the problems plaguing our old survey, but I also hoped it would measure and solve some problems that we hadn't tracked before.

Closure Verification

This particular survey is designed to be completed after the customer receives some type of closure to their contact. The possibility of unresolved issues had previously skewed unrelated data, and the resulting metrics weren't helping to improve service. After almost 1,500 responses, I haven't seen any indication of unresolved issues in the subsequent questions. Respondents choosing the lowest possible score to highlight issues is a thing of the past. Additionally, customers seem to understand the purpose of the question and are correctly self-selecting.

Result: Success

Reducing Response Time

The previous system didn't have any automatic reporting mechanism. Our manager had developed a scheduled script to run reports twice per day, at the time we opened and shortly before we closed. It was a decent workaround (the frequency and timing could have been adjusted if desired), but it still required human analysis to determine whether there were concerns. That step was easily lost in the shuffle, and responses were only reviewed a few times each week. The immediacy of the new system has allowed us to consistently close the loop up to 35 times faster, and it's taken a burden off our service recovery team.
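
For a sense of where a figure like 35x comes from, here's an illustrative back-of-the-envelope calculation. These numbers are assumptions chosen only to show the arithmetic, not measurements from our queue.

```python
# Illustrative latency comparison; both figures below are assumptions.
OLD_REVIEW_GAP_HOURS = 3.5 * 24  # responses reviewed a few times per week
NEW_ALERT_HOURS = 2.4            # instant email, worked the same business day

speedup = OLD_REVIEW_GAP_HOURS / NEW_ALERT_HOURS
print(f"Worst-case loop closure improves up to {speedup:.0f}x")  # ~35x
```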

Result: Success

Answering Unasked Questions

Just as the send button magically reveals typos in email, the end button on a call or chat seems to spur new questions. This has happened to me plenty of times as a customer, and I often debate whether my uncertainty is worth another contact. Questions often remain unanswered because it doesn't seem worth the effort. AR is engineered to solicit these unasked questions and provide an easy pathway to resolution. Success on this point will prove difficult to measure, because we don't know how many questions go unasked. We've made slightly more follow-up calls than in the past. Some of these calls were for trivial follow-up questions that might otherwise have remained unanswered, but others definitely would have required the customer to contact us again. AR has helped us respond to these questions, so it seems to be working.

Result: Plausible

Reducing Customer Perception of Effort

Customers hate multiple contacts. Technically, one could categorize a survey response as an additional contact, but I hope that customers will perceive this method more favorably than dialing the phone or chatting online. To test this theory, I hope to eventually ask a Customer Effort Score (CES) question after our follow-up contact. Comparing the CES of negative AR contacts to general responses and low first-contact-resolution (FCR) responses should indicate effectiveness in this area.
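
Here's a minimal sketch of the comparison I have in mind, assuming each CES response is exported with a segment tag; the field names and sample scores are hypothetical.

```python
# Hypothetical sketch: compare mean CES across response segments.
from statistics import mean

responses = [
    {"segment": "negative_ar", "ces": 5},  # follow-up after a "yes" AR answer
    {"segment": "general", "ces": 6},      # ordinary survey responses
    {"segment": "low_fcr", "ces": 3},      # contacts that needed multiple touches
    # ... real data would come from the survey platform's export
]

for segment in ("negative_ar", "general", "low_fcr"):
    scores = [r["ces"] for r in responses if r["segment"] == segment]
    if scores:
        print(f"{segment}: mean CES {mean(scores):.2f} (n={len(scores)})")
```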

Result: To be determined

Empowering Anticipatory Service

Part of my vision for AR is to correlate unresolved issues with particular types (categories) of contact, and then identify opportunities to improve how we conclude those interactions. If specific types of incidents frequently result in a negative AR response, cues can be added to ticket types and knowledge articles, helping agents preempt follow-up questions. It's still too early to judge whether AR is successful in this mission.
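
A minimal sketch of that correlation, assuming survey responses can be joined to ticket categories; the categories, threshold, and data below are illustrative only.

```python
# Hypothetical sketch: negative-AR rate by contact category.
from collections import Counter

contacts = [
    {"category": "password_reset", "ar_negative": False},
    {"category": "vpn_setup", "ar_negative": True},
    {"category": "vpn_setup", "ar_negative": False},
    # ... joined from ticket data and survey responses
]

totals = Counter(c["category"] for c in contacts)
negatives = Counter(c["category"] for c in contacts if c["ar_negative"])

for category, n in totals.items():
    rate = negatives[category] / n
    flag = "  <- candidate for anticipatory cues" if rate > 0.10 else ""
    print(f"{category}: {rate:.0%} negative AR (n={n}){flag}")
```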

Result: To be determined

Next Steps

In the initial push to get my survey into production, I put aside some features that I think will be very important. I'd like to configure a recovery mode that we can use to evaluate the effectiveness of this question and our follow-up efforts. In this mode, AR would be replaced with questions related to our recovery contact, and the survey would then continue as usual. This method would enable us to complete our dataset and compare Net Promoter Score (NPS) and CES after recovery to our baseline.
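
Here's a rough sketch of that comparison using the standard NPS formula (percentage of promoters, scores 9-10, minus percentage of detractors, scores 0-6); the sample scores are placeholders, not our data.

```python
# Standard NPS calculation applied to baseline vs. post-recovery responses.
def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

baseline = [10, 9, 8, 6, 9, 7, 10, 5]  # placeholder regular-survey scores
post_recovery = [9, 7, 10, 8, 6, 9]    # placeholder recovery-mode scores

print(f"Baseline NPS: {nps(baseline):.0f}")
print(f"Post-recovery NPS: {nps(post_recovery):.0f}")
```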

I'd also like to get some idea of how many "unasked questions" AR is helping to resolve. The initial strategy for this involves the recovery team making notes and categorizing follow-up engagements. I'm also exploring ways of asking this question in recovery mode surveys.

Finally, I believe AR has the ability to significantly improve and expand anticipatory service. However, a tremendous number of responses will need to be collected before patterns emerge. Fewer than 3% of respondents have answered AR negatively so far, which is a great indicator of performance, but it means it will be a while before trends emerge.


Andrew Gilliam is a passionate customer experience innovator and change agent. He’s developed new employee portals, created effective surveys, and built silo-busting escalation systems. Andrew’s background in Information Technology put him on the front-lines of customer service as an IT Support Center Analyst. His vision: deliver Amazing Customer Service and Technical Support™. Learn more at andytg.com, follow @ndytg on Twitter, and connect on LinkedIn.