Recently, a young work colleague approached me for a reference. He was fresh out of university and still on his initial six-month probation period with IBM. He asked me to provide a personal reference so that he could pass this important evaluation stage and, hopefully, become a permanent employee.
Although I had worked with him for only half a day, he struck me as smart, hard-working, enthusiastic, happy, positive, and keen to help: all the qualities and traits that make for a good colleague and a great IBMer who would serve our customers well. My first impression was very positive, so I agreed to provide the referral. What followed was a major learning experience for me.
I asked how the reference should be done. Should I drop a note to his manager or to HR? No, he said; he would send me an email with a link to a new tool, I would add my comments in that online tool, and I would be done. It all sounded simple enough.
So, the email arrived, and I clicked on the link provided. It took me to a protected online tool with a text box in which to enter my evaluation. The page showed his information and mine too, which was not too surprising. Once I had written a paragraph, I clicked the ‘send’ button. However, it did not actually send my message. Instead, it showed an analysis of my paragraph produced by Watson, IBM’s artificial intelligence cognitive computing platform. Needless to say, I was surprised to see an A.I. analysis of my posting.
The tool had deconstructed my paragraph and analyzed the tone, meaning, and unspoken emotional intent within it. The tones covered are frustrated, sad, satisfied, excited, polite, impolite, and sympathetic. It suggested that my paragraph was ‘sad’. This analysis astonished me. I certainly did not feel that my paragraph conveyed a ‘sad’ intent, nor did I wish to provide a personal reference for an employee candidate that read as ‘sad’. But the A.I. read my paragraph that way.
My goal was to help this young man, yet the analysis suggested that my reference was poor and would harm him. I was horrified.
The A.I. tool permitted me to edit the passage and rerun the analysis before the reference was actually submitted. So, naturally, I edited the paragraph, trying many different words to nudge it towards a more positive rating. I was aiming for ‘excited’.
It took about ten attempts to get it right. Each time I reran the process, the tool flagged the words that pulled the text towards one tone rather than another. The tool fascinated me. It was in beta, so I suppose my attempts to correct my tone were actually ‘training’ it, and I was helping to optimize this A.I. tool.
After some investigation with other senior colleagues, I learned that ever since we released the IBM Watson Tone Analyzer Service, we have received feedback from clients that they would like to use the service to analyze the logs from contact centers, chatbots, and other customer support channels. We worked with clients to figure out what those tones should be and came up with an answer: it turns out that tones such as frustration, satisfaction, excitement, politeness, impoliteness, sadness, and sympathy are important to detect when analyzing customer engagement data.
What can a Customer Service Manager do with these tones, you might ask. Knowing whether a customer is frustrated or satisfied with their interaction is a must-have for Contact Center managers to assess customer satisfaction. Of course, most of the customer service chat conversations start with frustrated customers. That is to be expected! However, it is the progression of tones throughout the conversation that is very important to track. If the customer is still frustrated when the conversation ends, that is bad news. However, just knowing how the customer felt at the end of the call alone doesn’t tell the whole story. Was the customer frustrated, even at the end of the conversation, because the resolution given was not acceptable? Or, was it because the agent did not show excitement when resolving the problem? Was the agent impolite or not sympathetic enough to the situation that the customer was in?
Tracking these tone signals can help Customer Service Managers improve how their teams interact with customers. Do the agents need more training in content or in communication style? Are there any patterns in the tones of successful agents? If so, what can be learned from it to replicate it more broadly? Are specific tones of agents indicative of how the conversation is likely to end?
We hope Customer Service Managers can now begin to use these tones to analyze their customer conversations by incorporating the results of this endpoint into their dashboards and analysis applications, thereby improving their customer engagement performance.
How does it work?
Given a set of customer support conversations and their associated tones, we trained a machine learning model to predict tones for new customer support conversations. The model leverages several categories of features, including n-gram features, lexical features from various dictionaries, punctuation, and the existence of a second-person reference in the turn. We use a Support Vector Machine (SVM) as our machine learning model. In our data, we observed that about 30% of the samples have more than one associated tone, so we decided to treat this as a multi-label classification task rather than a multi-class one.
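To make the feature families named above concrete, here is a toy sketch of what extracting them from a single conversation turn could look like. The lexicon, regular expression, and feature names below are invented for illustration; they are not IBM's actual feature pipeline.

```python
import re

# Toy politeness dictionary -- the real lexicons used by the service are not public.
POLITE_LEXICON = {"please", "thanks", "thank", "kindly", "sorry", "apologize"}

def extract_features(turn):
    """Build the four feature families described above for one turn."""
    words = re.findall(r"[a-z']+", turn.lower())
    feats = {}
    # 1. n-gram features (word unigrams and bigrams)
    for w in words:
        feats[f"uni={w}"] = 1
    for a, b in zip(words, words[1:]):
        feats[f"bi={a}_{b}"] = 1
    # 2. lexical features from a dictionary
    feats["polite_hits"] = sum(w in POLITE_LEXICON for w in words)
    # 3. punctuation features
    feats["exclamations"] = turn.count("!")
    feats["question_marks"] = turn.count("?")
    # 4. existence of a second-person reference in the turn
    feats["second_person"] = int(any(w in {"you", "your", "yours"} for w in words))
    return feats

features = extract_features("Please, can you help me!")
```

In a real system, feature dictionaries like this would be vectorized and fed to the classifier described next.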
For each of our tones, we trained the model independently using the one-vs-rest paradigm. During prediction, we report the tones that are predicted with at least 0.5 probability as the final tones. For several tones, our training data is heavily imbalanced; to address this, we find the optimal weight for the cost function for each of these tones during training.
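A minimal sketch of this one-vs-rest, multi-label setup, assuming scikit-learn with a linear SVM over TF-IDF n-grams. The training utterances and labels below are invented toys, the probability calibration is scikit-learn's default, and `class_weight="balanced"` merely stands in for the per-tone cost weighting mentioned above; none of this reproduces IBM's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

# Toy training data: utterances, each with one or more associated tones.
texts = [
    "This is taking forever and nobody is helping me!",
    "I have been waiting for hours and still no answer.",
    "Why is my order late again?",
    "Thank you so much, that fixed it perfectly.",
    "Great, everything works now.",
    "That resolved my issue, thanks.",
    "Please let me know if there is anything else I can do.",
    "Could you kindly share the tracking number?",
    "I am so sorry to hear that.",
    "I understand how upsetting this must be.",
    "We apologize for the inconvenience you experienced.",
    "Everything arrived on time, great service.",
]
labels = [
    ["frustration"], ["frustration"], ["frustration"],
    ["satisfaction", "politeness"], ["satisfaction"],
    ["satisfaction", "politeness"], ["politeness"], ["politeness"],
    ["sympathy"], ["sympathy"], ["sympathy", "politeness"],
    ["satisfaction"],
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # one binary column per tone (multi-label)

# One SVM per tone via one-vs-rest; class_weight="balanced" is a stand-in
# for the per-tone cost weighting used against label imbalance.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams + bigrams
    OneVsRestClassifier(
        SVC(kernel="linear", probability=True, class_weight="balanced")
    ),
)
model.fit(texts, Y)

# Keep only tones predicted with probability >= 0.5 as the final tones.
probs = model.predict_proba(["Thanks a lot, you were very helpful."])[0]
predicted = {tone: round(p, 2) for tone, p in zip(mlb.classes_, probs) if p >= 0.5}
print(predicted)
```

On this tiny toy corpus the calibrated probabilities are unreliable, but the shape of the pipeline, one binary classifier per tone with a 0.5 cutoff at prediction time, mirrors the description above.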
Input and output
Input to the Tone Analyzer for Customer Engagement API endpoint is either a single piece of text reflecting one statement, a set of statements, or a conversation delimited by newlines. For each input, the endpoint produces a confidence score for each predicted tone, drawn from the following set of seven tones: Frustration, Satisfaction, Excitement, Politeness, Impoliteness, Sadness, and Sympathy. The API returns only tones with a confidence score higher than 0.5.
For example, here is a single statement from a customer service agent at a package delivery service:
Given text: “Please have patience. We will work on your problem, and hopefully find a solution.”
Output tone(s): [Politeness: 0.90, Sympathy: 0.76].
Based on the scores, we can infer that the input text expresses “Politeness” and “Sympathy” with 90% and 76% confidence, respectively.
Here is a full conversation between a customer and an agent, with the predicted tone(s) shown after each turn:
Customer: I know it snowed in Maryland, why aren’t you delivering our range today?
Predicted tone(s): Sadness: 0.59
Agent: Did you receive notification?
Predicted tone(s): Sympathy: 0.83, Politeness: 0.74
Predicted tone(s): No Associated Tone
Agent: I understand, and I apologize about any disappointment.
Predicted tone(s): Politeness: 0.99, Sympathy: 0.70
Customer: Can you tell me when my package will arrive?
Predicted tone(s): Frustration: 0.78
Agent: Please give me the tracking number.
Predicted tone(s): Politeness: 0.83
Customer: Here is my tracking #.
Predicted tone(s): No Associated Tone
Agent: Your package will arrive today 🙂
Predicted tone(s): Excitement: 0.89, Politeness: 0.84, Satisfaction: 0.78
Customer: Thanks a lot.
Predicted tone(s): Satisfaction: 0.85
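The per-turn output above follows directly from the 0.5 cutoff: tones scoring below it are dropped, and a turn with no surviving tone is reported as having no associated tone. A minimal client-side sketch of that post-processing; the score dictionaries below are mocked for illustration, not real API responses.

```python
THRESHOLD = 0.5  # the API only returns tones scoring at least 0.5

def filter_tones(scores):
    """Keep tones meeting the cutoff; fall back to 'No Associated Tone'."""
    kept = {tone: s for tone, s in scores.items() if s >= THRESHOLD}
    return kept if kept else "No Associated Tone"

# Mocked raw scores for two turns from the conversation above.
conversation = [
    ("Agent: Please give me the tracking number.",
     {"Politeness": 0.83, "Sympathy": 0.21}),
    ("Customer: Here is my tracking #.",
     {"Politeness": 0.12}),
]

for turn, scores in conversation:
    print(turn)
    print("Predicted tone(s):", filter_tones(scores))
```

A dashboard built on this endpoint would apply the same filtering to every turn before tracking the progression of tones across a conversation.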
So, is this the future? Absolutely, yes. It is a wee bit scary to ponder, but if we can leverage these artificial intelligence tools to understand better and to make smarter, faster decisions, then it is all positive. Life is good.
However, if this A.I. tool is combined with the ‘data scraping’ techniques that I previously wrote about, where can this analysis go, and how might it be used?
It can also be used by bad actors for unintended purposes. In polling during elections, we have already seen a great deal of A.I. abuse to manipulate political opinion and sway votes. I, for one, do not want to be manipulated, or have my subconscious analyzed to steer me to the whims of others. I want to make my own decisions and use my own intuition to decide things like how I vote. So, there is a matter of trust and influence peddling here. In the end, we must all use our inherent common sense to make our own decisions. We need to tune out these A.I. tools and use our own brains. It scares me to think what can happen if people are not conscious of these tools and the subliminal power that they can wield over us.
Bhuiyan, M. (2017). Watson Tone Analyzer: 7 new tones to help understand how your customers are feeling. Retrieved April 8, 2018 from https://www.ibm.com/blogs/watson/2017/04/watson-tone-analyzer-7-new-tones-help-understand-customers-feeling/
About the Author:
Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.
He is a Senior Executive with IBM Canada’s GTS Network Services Group. Over the past 13 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He was previously a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).
Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V).
He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.
He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) and on the Board of Advisers of five different Colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.
He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.