Over the last decade or so, bots have revolutionised the world of customer service. For consumers, live chat provides instant answers without the pain of lengthy call waiting times. For companies, bots speed up query resolutions and slash costs.
Everything is perfect. Until a customer asks a question like this:
“Can I wear shorts to the concert?”
I give this example because it comes from a real-life scenario involving the Sinch team and a Brazilian online ticketing specialist. The company had trained its chatbot to answer questions on all the expected topics: venue, date, time, payment methods, cancellations, refunds and so on.
This worked very well for thousands of customers. But for a significant minority – including those with questions related to trouser length – the best answer the bot could manage was: “Not Understood”.
This little scenario goes to the heart of the persistent limitation of automated customer service: it can only handle specific queries. When it fails to comprehend, the customer is left frustrated, which can be very bad news for brands.
Is there a solution? Just weeks ago, we got a glimpse of one.
On November 30, 2022, OpenAI launched ChatGPT – a bot that could seemingly give text answers to any given question in a friendly, conversational way. As most readers will know, ChatGPT (GPT stands for “generative pre-trained transformer”) completely blew up. One million users signed up in five days.
Customer service professionals were almost certainly among them. They were keen to test the technology – to find out exactly what impact it might have on the future of the sector.
Before we get into that, let’s just re-cap the basics of how ChatGPT works, and what makes it a step change from what we have now.
Broadly speaking, conventional chatbots use simple AI tools (often based on keywords) that match a person’s intent to relevant answers in pre-defined and fixed conversational flows. We can describe these tools as both closed-domain and non-generative.
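To make "closed-domain and non-generative" concrete, here is a minimal sketch of that kind of keyword-based intent matcher. The intent names, keywords and canned answers are invented for illustration; real platforms use more sophisticated matching, but the fallback behaviour is the same.

```python
# A closed-domain, non-generative chatbot in miniature: intents are
# matched by keywords, and anything outside the pre-defined flows
# falls through to "Not Understood". (Illustrative only; the intents
# and answers below are invented.)

INTENTS = {
    "refund": (["refund", "money back"], "Refunds are issued within 5 business days."),
    "venue": (["venue", "address", "location"], "The venue address is printed on your ticket."),
    "payment": (["payment", "credit card"], "We accept credit cards and bank transfer."),
}

def answer(message: str) -> str:
    text = message.lower()
    for keywords, reply in INTENTS.values():
        # Match on simple keyword containment -- no understanding involved.
        if any(keyword in text for keyword in keywords):
            return reply
    return "Not Understood"
```

A question like "How do I get a refund?" hits the `refund` intent, but "Can I wear shorts to the concert?" matches no keyword list and dead-ends at "Not Understood" – exactly the failure mode described above.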
ChatGPT goes a step further by enabling completely free-form conversations. It can reply to any question (even if the answer is just made up) and it will remember the context even when the query goes outside standard predefined flows. In short, it has learned from the internet how to communicate.
Products such as ChatGPT therefore represent generative artificial intelligence. They use large language models to generate human-like responses on the fly from a few words of input.
On the surface, this would seem like the customer care golden ticket. At last, a bot that can improvise the answer to any query. No more ‘not understood’ dead ends.
Well, not so fast. There are some very obvious limitations. First, even a generative AI is not omniscient. It can only reproduce what it has learned from ‘reading the internet’. In the case of ChatGPT, its knowledge base stops at 2021. It can’t even tell you who won the 2022 World Cup.
Also, ChatGPT will almost always respond to a question, even if it is not sure of the answer. It will produce perfectly grammatical responses that seem to make sense but might be complete nonsense to anyone with domain knowledge. Tellingly, the industry even calls these responses ‘hallucinations.’
In an online search scenario this is bad enough, but in the world of customer service it’s a complete no-no. Customers don’t want improvised approximate answers. They want the right answers.
ChatGPT’s free-form ‘hallucinations’ don’t help brands either. One of the benefits of using virtual agents is that managers can analyse thousands of conversations in order to understand customer pain points and thereby improve their processes. But if every response is different, how can customer care execs accurately correlate questions and answers?
These are the big challenges facing generative AI in the customer service space. It’s why, in the short term, we should be realistic about its impact. Indeed, even Sam Altman, CEO of OpenAI, has admitted this.
He tweeted on December 11, 2022: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness… it’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
That said, there’s little doubt that the tech will ultimately make a big difference. It just needs to resolve issues such as:
It should contain more up-to-date information and give priority to a company’s own data and knowledge base.
It should be able to state whether it really knows the facts or if it is just guessing.
It should be able to connect to a back end to store and/or retrieve information about the user and the topic of the conversation.
It should be able to tell people they are talking to a bot and always let them ask to speak to a human agent.
If generative AI tools like ChatGPT can overcome these challenges, virtual agents will be able to handle vastly more queries.
The flurry of activity around ChatGPT suggests they will. In the two months since it launched, a range of companies have rushed to incorporate the tech into their products. They include Microsoft (an early investor in OpenAI) and Ada, a customer service specialist that has already licensed the large language model for use in its chatbot platform.
Here at Sinch, we are also helping to build the next generation of virtual agent technology. In 2022, we launched AskFrank, a question-answering search engine that integrates with any business’s chatbot, contact centre, website or knowledge base. It gives conversational answers to natural-language questions even when those answers are missing from a chatbot’s database – on any channel and in more than 100 languages.
These moves point to the rapid advance of generative AI in the customer service space. But there’s one final factor that might make the most profound change of all: the customer bot.
One of the reasons why ChatGPT has made such a splash is that it has shown end users how they can deploy bot technology for their own purposes. In time, it seems inevitable that consumers will hand over boring and time-consuming tasks to ‘personal’ virtual agents, just as brands have. We could soon be in a world where a customer bot negotiates and buys travel insurance from a company bot, and no human being is directly involved.
By Dr Pieter Buteneers
Director of engineering for AI and ML