CASE STUDY

What is an AI agent, and how does it differ from a traditional AI model or API call?
AI is a powerful technology with a lot of different use cases. What’s really changed is the ability of the technology to analyse and learn from massive amounts of data, to synthesise it and respond in a human-like way. At Nutun, our focus has been on voice agents, AI-powered personas that can actually mimic human behaviour and engage with customers on the front line. When we talk about AI agents, we’re really talking about that idea people have from sci-fi movies, robots that can simulate a human. Traditional AI models or API calls usually sit in the background making decisions or carrying out specific actions. So, in simple terms, Generative AI is about conversation, while Agentic AI is about action. One part might be talking to customers and building rapport, which is Generative AI. The other part is about making decisions and exercising judgement, which is Agentic AI working behind the scenes.
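To make that split concrete, here is a minimal sketch in Python of the two layers described: a generative layer that phrases the reply and an agentic layer that decides the next action behind the scenes. Everything in it, names, rules and replies, is hypothetical and illustrative, not Nutun’s actual stack.

```python
# Illustrative only: separating "conversation" (Generative AI) from
# "action" (Agentic AI). All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class CustomerTurn:
    text: str        # what the customer just said
    account_id: str  # which account the call concerns

def generate_reply(turn: CustomerTurn) -> str:
    """Generative layer: in production this would call an LLM to phrase
    an empathetic, on-brand reply; here it is stubbed out."""
    return "I understand. Let's look at your account together."

def decide_action(turn: CustomerTurn) -> str:
    """Agentic layer: applies business judgement behind the scenes."""
    lowered = turn.text.lower()
    if "can't pay" in lowered:
        return "offer_payment_plan"
    if "wrong amount" in lowered:
        return "open_dispute_query"
    return "continue_conversation"

turn = CustomerTurn(text="I can't pay the full amount this month", account_id="A123")
print(decide_action(turn))   # -> offer_payment_plan
print(generate_reply(turn))  # -> the conversational response
```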
How does Nutun use AI agents?
As a contact centre operator, we’ve been exploring how far an AI-powered persona can mimic human behaviour and engage with customers on the front line to meet their specific needs. When we talk about AI agents, we’re talking about the kind of Artificial Intelligence people might recognise from old sci-fi movies, robots that can simulate a human. As a BPO in South Africa, our focus has always been to bring empathy-driven, problem-solving people to work for our clients and represent their brands with care. Within Nutun, we have two main lines of business. The first is customer service, where we help solve complex problems that customers haven’t been able to resolve through self-service channels. The second is collections, which involves sensitive conversations with people in vulnerable situations. That’s where we’ve seen the greatest success with AI, because the conversations are more structured, the paths are clearer and that makes it easier to prompt and guide the agent effectively while still maintaining empathy.
How do you strike the right balance between automation and human connection?
Everyone has a story about how a robot answered the phone and their initial reaction was negative or toxic. They don’t want to talk to this ‘thing’. But if you get those models right, it moves from a conscious awareness that you’re talking to a bot to something more natural, where you’re willing to engage. It’s the golden spot of balancing the empathy of talking to a human with the efficiency and productivity of a bot. It’s comparable to the experience of engaging with a really good app. It’s user experience. In collections, a good conversation can lead to a payment and a customer fulfilling that promise. There’s also a stigma about having a collections conversation with a person. It can be embarrassing to admit you’re struggling. It’s human nature to overpromise to a human, but we’re seeing that element drastically drop off when talking to a bot.
What were some of the biggest failures or misbehaviours you encountered in your early experiments with agents?
AI is an incredibly powerful technology. What we’re learning with AI is that you still need to tell the machine what to do. When people talk about general Artificial Intelligence, where AI doesn’t need to be told what to do, we’re not there yet. Today’s Generative AI models still need clear direction and structure. You’ll hear the term ‘prompt engineering’, which is really the coding or the scripting that needs to sit behind the AI agent to tell it what to do. With AI, there are actually very limited paths a conversation can go down. In collections, for example, you’re approaching someone about an outstanding debt. There may be queries to resolve so the customer is comfortable the debt is accurate. From there, it is about facilitating a payment plan that works for both the company and the person who has to pay. This is where we have seen success, largely because the prompting in a collections conversation is manageable. In customer service, the scenarios are broader, so the prompting is much more extensive.
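As one hypothetical illustration of what that prompting can look like, the sketch below (in Python, with invented account data and plan options) pins a collections agent to the handful of paths just described: confirm the debt, resolve queries from account data only, and offer pre-approved payment plans. It is a sketch of the idea, not Nutun’s production prompt.

```python
# A hedged sketch of "prompt engineering" for a collections voice agent:
# a scripted system prompt that limits the conversation to a few paths.
# The client name, wording and data below are all invented.
COLLECTIONS_SYSTEM_PROMPT = """\
You are a collections voice agent for ACME_BRAND (hypothetical client).
Follow these paths only, in order:
1. Greet the customer and confirm you are speaking to the account holder.
2. Explain the outstanding balance. If the customer queries the amount,
   answer only from the account data provided below; never estimate.
3. Offer the approved payment-plan options listed below; do not invent
   discounts, write-offs or deadlines.
4. Confirm the agreed plan back to the customer and close politely.
If the customer raises anything outside these paths, say you will pass
the query to a human colleague.

Account data: {account_data}
Approved plans: {approved_plans}
"""

prompt = COLLECTIONS_SYSTEM_PROMPT.format(
    account_data="Balance: R4,200; last payment 2024-03-01",
    approved_plans="3 x R1,400 monthly, or 6 x R700 monthly",
)
print(prompt)
```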
What was an early assumption you had about AI agents that turned out to be wrong?
We’ve been live for about a year now and learnt a lot about using AI-driven voice agents in empathy-filled conversations when you’re trying to solve complex problems. The first thing is that you have to be very clear with the AI about what you’re trying to achieve. Your prompting needs to be incredibly specific and quite narrow in terms of fulfilling an overall conversation. When you’re engaging with ChatGPT, for example, it’s impressive because you can have a conversation and go down certain tangents, but a large language model isn’t always right. In a corporate environment, the AI agent is representing your brand, and if commitments or promises are made to a customer, there’s little room to get it wrong. What we’ve learnt is that you have to limit the freedom the model is allowed. To do that, you need business-orientated guardrails to stop the conversation from going into areas that haven’t been prompted. This is how you avoid hallucinations. The second thing we’ve learnt is about the human response when talking to a bot. A lot of work goes into making the conversation flow.
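A minimal sketch of what such a business-orientated guardrail might look like, assuming a simple rule-based check on the model’s draft reply before it is spoken. The blocked phrases and approved plans here are invented; a production system would use far richer checks and human escalation.

```python
# Illustrative guardrail: screen the model's draft reply so it cannot
# make commitments outside an approved list. Rules here are hypothetical.
APPROVED_COMMITMENTS = {"3 x R1,400 monthly", "6 x R700 monthly"}
BLOCKED_PHRASES = ["write off", "waive", "legal action", "guarantee"]

def passes_guardrails(draft_reply: str) -> bool:
    lowered = draft_reply.lower()
    # Rule 1: never let the agent improvise write-offs or threats.
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return False
    # Rule 2: any concrete plan mentioned must be a pre-approved one.
    if "monthly" in lowered and not any(
        plan.lower() in lowered for plan in APPROVED_COMMITMENTS
    ):
        return False
    return True

print(passes_guardrails("We can do 3 x R1,400 monthly."))    # True
print(passes_guardrails("We could waive half the balance.")) # False
print(passes_guardrails("How about 5 x R900 monthly?"))      # False
```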