AWS Machine Learning Blog

Identify objections in customer conversations using Amazon Comprehend to enhance customer experience without ML expertise

According to a PwC report, 32% of retail customers churn after one negative experience, and 73% of customers say that customer experience influences their purchase decisions. In the global retail industry, pre- and post-sales support are both important aspects of customer care. Numerous channels, including email, live chat, bots, and phone calls, are used to provide customer assistance. As conversational AI has improved in recent years, many businesses have adopted cutting-edge technologies like AI-powered chatbots and AI-powered agent support to improve customer service while increasing productivity and lowering costs.

Amazon Comprehend is a fully managed and continuously trained natural language processing (NLP) service that can extract insights from the content of a document or text. In this post, we explore how AWS customer Pro360 used the Amazon Comprehend custom classification API to improve customer experience and reduce operational costs. The custom classification API enables you to easily build custom text classification models using your business-specific labels, without requiring any machine learning (ML) expertise.

Pro360: Accurately detect customer objections in chatbots

Pro360 is a marketplace that connects specialists who have industry-specific talents with potential clients, allowing them to find new opportunities and expand their professional networks. It allows customers to communicate directly with experts and negotiate a customized price for their services based on their individual requirements. Pro360 charges clients when a successful match occurs between a specialist and a client.

Pro360 had to deal with unreliable charges that led to customer complaints and reduced trust in the brand. It was difficult to understand the customer’s objective during convoluted conversations filled with multiple aims, courteous denials, and indirect communication, and these conversations were leading to erroneous charges that reduced customer satisfaction. For example, a customer might start a conversation and stop immediately, or end the conversation by politely declining with “I am busy” or “Let me chew on it.” Also, due to cultural differences, some customers might not be used to expressing their intentions clearly, particularly when they want to say “no,” which made the problem even more challenging.

To solve this problem, Pro360 initially added options for the customer to choose from, such as “I would like more information” or “No, I have other options.” Instead of typing their own question, the customer simply chose one of the provided options. Nonetheless, the problem remained, because customers preferred to speak plainly and in their own natural language when interacting with the system. Pro360 identified that the problem stemmed from the rules-based approach, and that moving to an NLP-based solution would yield a better understanding of customer intent and lead to higher customer satisfaction.

Custom classification is a feature of Amazon Comprehend that allows you to develop your own classifiers using small datasets. Pro360 used this feature to build a model with 99.2% accuracy by training on 800 data points and testing on 300 data points. They followed a three-step approach to build and iterate on the model, raising its accuracy from 82% to their desired level of 99.3%. First, Pro360 defined the two classes they wanted to classify: reject and non-reject. Second, they removed irrelevant emojis and symbols such as ~ and ... and identified negative emojis to improve the model’s accuracy. Finally, they defined three additional content classifications (small talk, ambiguous response, and reject with a reason) to reduce the misidentification rate and further iterate on the model.
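As a concrete illustration of this setup, the following is a minimal sketch of creating a custom classifier with the AWS SDK for Python (Boto3). Custom classification training data is a two-column CSV of label and document text; the bucket, IAM role, classifier name, and language code shown here are placeholders rather than Pro360’s actual configuration.

```python
import boto3

comprehend = boto3.client("comprehend")

# Training data is a CSV with one label and one document per row, for example:
#   reject,Let me chew on it
#   non-reject,Can you share more details about the price?
# The S3 path, IAM role, and classifier name below are placeholders.
response = comprehend.create_document_classifier(
    DocumentClassifierName="objection-classifier-v1",
    DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendDataAccessRole",
    InputDataConfig={
        "DataFormat": "COMPREHEND_CSV",
        "S3Uri": "s3://example-bucket/training/objection-training-data.csv",
    },
    LanguageCode="en",   # placeholder; set to the language of your conversation data
    Mode="MULTI_CLASS",  # each message gets exactly one label: reject or non-reject
)
print(response["DocumentClassifierArn"])
```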

In this post, we share how Pro360 used Amazon Comprehend to detect customer objections during conversations and applied a human-in-the-loop (HITL) mechanism to incorporate customer feedback into improving the model and its accuracy, demonstrating the ease of use and efficiency of Amazon Comprehend.

“Initially, I believed that implementing AI would be costly. However, the discovery of Amazon Comprehend enables us to efficiently and economically bring an NLP model from concept to implementation in a mere 1.5 months. We’re grateful for the support provided by the AWS account team, solution architecture team, and ML experts from the SSO and service team.”

– LC Lee, founder and CEO of Pro360.

Solution overview

The following diagram illustrates the solution architecture covering real-time inference, feedback workflow, and human review workflow, and how those components contribute to the Amazon Comprehend training workflow.

Solution architecture diagram

In the following sections, we walk you through each step in the workflow.

Real-time text classification

To use Amazon Comprehend custom classification in real time, you need to deploy an API as the entry point and call an Amazon Comprehend model to conduct real-time text classification. The steps are as follows:

  1. The client side calls Amazon API Gateway as the entry point to provide a client message as input.
  2. API Gateway passes the request to AWS Lambda, which calls Amazon DynamoDB and Amazon Comprehend in Steps 3 and 4.
  3. Lambda looks up the current version of the Amazon Comprehend endpoint stored in DynamoDB, then calls that endpoint to get a real-time inference.
  4. Lambda applies a built-in rule to check whether the returned confidence score is under the threshold. It stores the result in DynamoDB and, for low-confidence predictions, waits for human approval to confirm the evaluation result. A sketch of such a handler follows this list.
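The following is a minimal sketch of what the Lambda function in Steps 3 and 4 might look like, using the AWS SDK for Python (Boto3). The table names, key schema, and the 0.8 threshold are illustrative assumptions, not Pro360’s actual configuration.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
comprehend = boto3.client("comprehend")

# Table and attribute names are illustrative, not Pro360's actual schema.
CONFIG_TABLE = dynamodb.Table("comprehend-endpoint-config")
RESULTS_TABLE = dynamodb.Table("classification-results")
CONFIDENCE_THRESHOLD = 0.8  # assumed threshold for triggering human review

def lambda_handler(event, context):
    message = json.loads(event["body"])["message"]

    # Step 3: look up the currently deployed Comprehend endpoint version.
    config = CONFIG_TABLE.get_item(Key={"config_id": "active"})["Item"]
    endpoint_arn = config["endpoint_arn"]

    # Step 3: real-time inference against the custom classifier endpoint.
    result = comprehend.classify_document(Text=message, EndpointArn=endpoint_arn)
    top_class = max(result["Classes"], key=lambda c: c["Score"])

    # Step 4: store the result; low-confidence predictions wait for human approval.
    needs_review = top_class["Score"] < CONFIDENCE_THRESHOLD
    RESULTS_TABLE.put_item(Item={
        "request_id": context.aws_request_id,
        "message": message,
        "label": top_class["Name"],
        "score": str(top_class["Score"]),  # stored as a string to avoid DynamoDB float issues
        "needs_human_review": needs_review,
    })

    return {
        "statusCode": 200,
        "body": json.dumps({"label": top_class["Name"], "score": top_class["Score"]}),
    }
```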

Feedback workflow

When the endpoint returns the classification result to the client side, the application prompts the end-user with a hint to get their feedback, and stores the data in the database for the next round (the training workflow). The steps for the feedback workflow are as follows:

  1. The client side sends the user feedback by calling API Gateway.
  2. API Gateway passes the request to Lambda, which checks the feedback format.
  3. Lambda stores the validated feedback in DynamoDB, where it’s used in the next training process (see the sketch after this list).
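Below is a minimal sketch of the feedback Lambda function; the table name, attribute names, and accepted label values are illustrative assumptions rather than Pro360’s actual implementation.

```python
import json
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb")
# Table and attribute names are illustrative.
FEEDBACK_TABLE = dynamodb.Table("customer-feedback")

def lambda_handler(event, context):
    payload = json.loads(event["body"])

    # Format check: the feedback must reference a message and carry a known label.
    if "message" not in payload or payload.get("user_label") not in ("reject", "non-reject"):
        return {"statusCode": 400, "body": json.dumps({"error": "invalid feedback format"})}

    # Store the feedback; it becomes training data in the next retraining cycle.
    FEEDBACK_TABLE.put_item(Item={
        "feedback_id": context.aws_request_id,
        "message": payload["message"],
        "user_label": payload["user_label"],
        "created_at": datetime.now(timezone.utc).isoformat(),
    })
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```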

Human review workflow

The human review process helps us clarify data with a confidence score below the threshold. This data is valuable for improving the Amazon Comprehend model and is added to the next iteration of retraining. We used Elastic Load Balancing as the entry point for this process because the Pro360 system is built on Amazon Elastic Compute Cloud (Amazon EC2). The steps for this workflow are as follows:

  1. We use an existing API on the Elastic Load Balancer as the entry point.
  2. We use Amazon EC2 as the compute resource to build a front-end dashboard for the reviewer to tag the input data with lower confidence scores.
  3. After the reviewer identifies the objection in the input data, we store the result in a DynamoDB table, as shown in the sketch after this list.
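The following sketch shows how the dashboard backend running on Amazon EC2 might fetch low-confidence items for review and write the reviewer’s label back to DynamoDB. It reuses the illustrative table from the real-time inference sketch and is not Pro360’s actual implementation.

```python
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
# Same illustrative results table used in the real-time inference sketch.
RESULTS_TABLE = dynamodb.Table("classification-results")

def get_items_for_review(limit=50):
    """Fetch low-confidence predictions that are waiting for a human label."""
    response = RESULTS_TABLE.scan(
        FilterExpression=Attr("needs_human_review").eq(True),
        Limit=limit,
    )
    return response["Items"]

def save_reviewed_label(request_id, reviewed_label):
    """Store the reviewer's decision; reviewed items feed the next retraining run."""
    RESULTS_TABLE.update_item(
        Key={"request_id": request_id},
        UpdateExpression="SET reviewed_label = :label, needs_human_review = :done",
        ExpressionAttributeValues={":label": reviewed_label, ":done": False},
    )
```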

Amazon Comprehend training workflow

To start training the Amazon Comprehend model, we need to prepare the training data. The following steps show you how to train the model:

  1. We use AWS Glue to run extract, transform, and load (ETL) jobs that merge the data from two different DynamoDB tables and store the result in Amazon Simple Storage Service (Amazon S3).
  2. When the Amazon S3 training data is ready, we can trigger AWS Step Functions as the orchestration tool to run the training job, and we pass the S3 path into the Step Functions state machine.
  3. We invoke a Lambda function to validate that the training data path exists, and then trigger an Amazon Comprehend training job.
  4. After the training job starts, we use another Lambda function to check the training job status. If the training job is complete, we get the model metric and store it in DynamoDB for further evaluation.
  5. We check the performance of the current model with a Lambda model selection function. If the current version’s performance is better than the original one, we deploy it to the Amazon Comprehend endpoint.
  6. Then we invoke another Lambda function to check the endpoint status. When the endpoint is ready, the function updates the endpoint information in DynamoDB so that it’s used for real-time text classification. The sketch after this list outlines these Lambda tasks.
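The sketch below outlines the kinds of Lambda tasks the Step Functions state machine invokes: starting a training job, polling its status and recording metrics, and pointing the endpoint at the better-performing model. The resource names, IAM role ARN, and table schema are illustrative placeholders, not Pro360’s actual setup.

```python
import boto3

comprehend = boto3.client("comprehend")
dynamodb = boto3.resource("dynamodb")
# Table, role, and classifier names below are illustrative placeholders.
MODEL_TABLE = dynamodb.Table("model-metrics")

def start_training_handler(event, context):
    """Step Functions task: start a Comprehend training job from the prepared S3 data."""
    response = comprehend.create_document_classifier(
        DocumentClassifierName=f"objection-classifier-{event['version']}",
        DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendDataAccessRole",
        InputDataConfig={"DataFormat": "COMPREHEND_CSV", "S3Uri": event["training_data_s3_uri"]},
        LanguageCode="en",  # placeholder; use the language of your conversation data
        Mode="MULTI_CLASS",
    )
    return {"classifier_arn": response["DocumentClassifierArn"]}

def check_training_handler(event, context):
    """Step Functions task: poll the training job and record metrics when it finishes."""
    classifier = comprehend.describe_document_classifier(
        DocumentClassifierArn=event["classifier_arn"]
    )["DocumentClassifierProperties"]

    if classifier["Status"] == "TRAINED":
        metrics = classifier["ClassifierMetadata"]["EvaluationMetrics"]
        MODEL_TABLE.put_item(Item={
            "classifier_arn": event["classifier_arn"],
            "accuracy": str(metrics["Accuracy"]),  # strings avoid DynamoDB float issues
            "f1_score": str(metrics["F1Score"]),
        })
    return {"status": classifier["Status"], "classifier_arn": event["classifier_arn"]}

def deploy_model_handler(event, context):
    """Step Functions task: point the existing endpoint at the better-performing model."""
    comprehend.update_endpoint(
        EndpointArn=event["endpoint_arn"],
        DesiredModelArn=event["classifier_arn"],
    )
    return {"endpoint_arn": event["endpoint_arn"]}
```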

Summary and next steps

In this post, we showed how Amazon Comprehend enabled Pro360 to build an AI-powered application that increases the accuracy of customer objection detection, without requiring ML expert practitioners. Pro360 built a custom NLP model in just 1.5 months, and can now identify 90% of polite customer rejections and detect customer intent with 99.2% overall accuracy. This solution not only enhances the customer experience, driving 28.5% growth in the retention rate, but also improves financial outcomes, reducing operational costs by 8% and lightening the workload for customer service agents.

However, identifying customer objections is just the first step in improving the customer experience. To keep iterating on the customer experience and accelerate revenue growth, the next step is to identify the reasons behind customer objections, such as lack of interest, timing issues, or influence from others, and to generate an appropriate response that increases the sales conversion rate.

To use Amazon Comprehend to build custom text classification models, you can access the service through the AWS Management Console. To learn more about how to use Amazon Comprehend, check out Amazon Comprehend developer resources.


About the Authors

Ray Wang is a Solutions Architect at AWS. With 8 years of experience in the IT industry, Ray is dedicated to building modern solutions on the cloud, especially in NoSQL, big data, and machine learning. As a hungry go-getter, he has earned all 12 AWS certifications to make his technical knowledge not only deep but broad. He loves to read and watch sci-fi movies in his spare time.

Josie Cheng is an HKT AI/ML Go-To-Market Specialist at AWS. Her current focus is on business transformation in retail and CPG through data and ML to fuel tremendous enterprise growth. Before joining AWS, Josie worked for Amazon Retail and other internet companies in China and the US as a Growth Product Manager.

Shanna Chang is a Solutions Architect at AWS. She focuses on observability in modern architectures and cloud-native monitoring solutions. Before joining AWS, she was a software engineer. In her spare time, she enjoys hiking and watching movies.

Wrick Talukdar is a Senior Architect with the Amazon Comprehend Service team. He works with AWS customers to help them adopt machine learning on a large scale. Outside of work, he enjoys reading and photography.