Monday, January 29, 2018

Audio Deep Dives: Listening Analysis Made Simple

By Diana Aviles



The core part of Speech Analytics that sometimes gets lost amongst the high-powered metadata and reporting functionalities is the audio insights themselves. The whole purpose of SA is the ability to analyze specific words and phrases mentioned in customer/agent interactions. For SA newcomers, I have found that once the dust settles from all of the extensive training, a feeling of “What’s next for me?” begins to settle in.

When we talk about “Deep Dives” or listening analysis, we are generally talking about taking a random sample of calls and listening for specific criteria within the audio. Sample sizes can vary from 25 calls to 10,000 (yes, I have randomized 10,000 calls before). Of course, the criteria you can look for are endless, and that’s oftentimes where people get overwhelmed. There is the fear of looking for too much or too little in a deep dive. There are also complicated grey areas you will need to account for. Here are some dos and don’ts for making deep dives a bit easier to manage.
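To make the randomization step concrete, here is a minimal sketch, assuming your call metadata has already been exported to a CSV; the file name and column names (call_id, market) are hypothetical placeholders, not anything prescribed by a particular SA platform.

```python
import csv
import random

SAMPLE_SIZE = 120  # e.g., a 100-call target plus a buffer for out-of-scope records

# Load the exported call metadata (file and column names are illustrative).
with open("call_metadata.csv", newline="") as f:
    calls = list(csv.DictReader(f))

# Draw the sample without replacement so no call is reviewed twice.
random.seed(2018)  # fixed seed so the pull can be reproduced if questioned
sample = random.sample(calls, min(SAMPLE_SIZE, len(calls)))

for call in sample:
    print(call["call_id"], call.get("market", ""))
```

Most SA tools can randomize for you, but pulling the sample yourself this way makes it easy to document exactly how the calls were selected.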



Don't

  • Create “mile-long wish lists” - It’s tempting to want to look at every little thing in one go, but depending on the type of deep dive you’re doing, it is recommended you look at the data in phases. Rome wasn’t built in a day, and neither are your insights.
  • Randomize/size the project incorrectly - Samples require balance. If you are trying to look at data from two different markets but the majority of your sample reflects only one of them, then it goes without saying that your data is tainted.
  • Improperly account for out-of-scope (OOS) data - Some listening analyses will have criteria that cannot be counted into the main pool of data. An example would be if you’re deep diving into cable box issues and you encounter a caller who is having issues with his phone service. His call does not meet the requirements for the project and must be bucketed to indicate that. Going back to the prior point about proper randomization and sizing, you need to account for OOS data by making sure that your sample is 20-30% above the total amount you’re looking for. Example: for a 100-call listening project, send over 120 calls to account for the possibility of OOS data (see the buffer sketch after this list).
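As a quick illustration of the buffer math above, here is a small sketch; the 20-30% cushion comes straight from the list, while the function name and the idea of parameterizing the OOS rate are my own additions.

```python
import math

def calls_to_pull(target_in_scope: int, oos_buffer: float = 0.20) -> int:
    """Inflate the target sample by an out-of-scope buffer (20-30% is a typical cushion)."""
    return math.ceil(target_in_scope * (1 + oos_buffer))

# For a 100-call listening project with a 20% cushion, send over 120 calls.
print(calls_to_pull(100, 0.20))  # -> 120
print(calls_to_pull(100, 0.30))  # -> 130
```

If you have historical OOS rates from earlier deep dives, use those to pick the cushion rather than guessing.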

Do

  • Keep questions clear and concise – It is important to keep the wording of your questions or standards clear in order to avoid confusing auditors (if you are performing a listening analysis with more than one person). You want to avoid causing people to second-guess how they are observing and documenting information.
  • Have job aids available for reference - I work mainly in telecom, where we deal with a lot of technical issues, so while I am pretty seasoned at troubleshooting most of them, I do like to have a reference for items I seldom come across and may be rusty on. If you outsource your listening analysis, this is also critical, as the listening team may not be as familiar with the line of business they are auditing as you are.
  • Maintain uniform data - There is nothing more annoying than having data that is all over the place. I am a fan of using conditional drop-downs in Excel to restrict what is entered in a cell, permitting only certain cells to have open text. I recommend the core and secondary drivers you are looking to capture be placed in a drop-down for this reason (a small sketch follows this list). I also recommend you avoid heavy use of “other” as a driver to prevent data pollution.
  • Require high-level summaries of calls reviewed - I like to ask for these for two reasons. Reason #1: when it comes to “scrubbing” or cleaning up the data (before I start building charts and reporting against it), the summaries allow me to see any major trends and observations captured OUTSIDE of the main listening project. Reason #2: to ensure the audio in question was ACTUALLY reviewed. In some studies I ask for a time stamp to determine where the criterion in the call was hit, as a method to maintain data integrity.
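For the drop-down suggestion above, here is a minimal sketch using openpyxl to generate a tracker template with a restricted driver column; the driver values, sheet layout, and file name are placeholders, and Excel’s built-in Data Validation dialog achieves the same result without any code.

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.append(["call_id", "core_driver", "summary"])  # column layout is illustrative

# Restrict the driver column to a fixed list so auditors cannot free-type values.
drivers = '"Billing,Cable Box,Equipment,Out of Scope"'
dv = DataValidation(type="list", formula1=drivers, allow_blank=False)
dv.error = "Please choose a driver from the list."
ws.add_data_validation(dv)
dv.add("B2:B121")  # enough rows for a 120-call pull

# The summary column (C) is intentionally left as open text.
wb.save("listening_tracker.xlsx")
```

Keeping the allowed values in one place like this also makes it trivial to reuse the same template, and the same driver list, across multiple deep dives.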


There are other related topics relevant to deep dives, such as presenting and “data cleansing,” which may be subjects for discussion in later articles. These are a few general suggestions I have for people who are beginning their Speech Analytics journey and looking to start on high-level deep dives. Once you get a few deep dives under your belt, they will become second nature to you. The goal is to make sure that all of your data insights make sense and can be organized in an efficient and concise manner.

Editor's note: This article was originally posted on LinkedIn.

Diana Aviles has more than 5 years of Quality Assurance experience in a call center environment. Her objective is to promote and educate others about Speech Analytics with a human touch, one which further emphasizes the importance of First Call Resolution and overall customer experience.

Follow Diana on LinkedIn.
