Financial Services CX Insights: 4 Winning Chatbot Deployment Strategies

January 31, 2022 | Genesys | Contact Centre, Genesys
Five years ago, banks cautiously deployed bots for a very narrow set of tasks, like basic FAQs. Most customer inquiries still went directly to client service agents. Today, banks have bigger ambitions for bots - the technology has improved and there’s increased demand from customers to self-serve. An enterprise-wide program that supercharges automation with natural language understanding (NLU) and bots could take years for many financial institutions to fully realise. But with the right technology - one that speeds deployment and allows for customisation - the wait might not be as long.

Here are four areas for bot deployment and use that are unique to the financial services industry (FSI).

1. A Mountain of Intents

Unlike other verticals, the banking industry must contend with a wide variety and volume of possible client intents because there are multiple lines of business within the industry. Think of all the tasks in one financial institution spanning retail banking, checking accounts, card services, mortgages and wealth management.

Many technology vendors that specialise in financial services have built up ready-to-use libraries of intents and utterances. These same vendors might have pre-built integrations into the backend systems that banks use. But this doesn’t mean engaging a vendor that specialises in financial services bots will solve every potential problem.

Larger banks often support clientele in many languages; different geographic regions could have a banking-specific vernacular that’s not universal. Proper bot tuning, quality assurance and usability testing remain critical to success. Rushing crucial training and quality assurance stages will create bots that deliver a disjointed client experience.

“The challenge in financial services is that it is a very rich and complex domain, comprising many business lines and a wide range of distinct products,” said Jake Tyler, CEO at Finn AI. “The more scope you add, the more intents you need - and the more intents you add, the more data you need for each intent. At Finn, for example, we support hundreds of intents in the retail banking domain, and each of our intents is trained with an average of 2,000 human-labeled utterances, often significantly more (over 10,000) for complex parts of our language model. Weak model coverage or insufficient training data means the bot is less likely to be able to answer the user’s specific query.”
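
To make that concrete, here is a minimal sketch of what human-labelled training utterances for a couple of banking intents might look like. The intent names and phrasings are hypothetical illustrations, not Finn AI’s actual data or format.

```python
# Illustrative sketch only: a hypothetical structure for human-labelled
# training utterances mapped to banking intents.
training_data = {
    "transfer_funds": [
        "move 200 from checking to savings",
        "send 50 pounds to my savings account",
        "can you transfer money between my accounts",
        # ...in practice, thousands of labelled utterances per intent
    ],
    "card_lost_or_stolen": [
        "I lost my debit card",
        "my credit card was stolen, please block it",
    ],
}

# Regional vernacular matters: "current account" (UK) and "checking account" (US)
# should resolve to the same intent, which is why tuning and QA remain critical.
```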

2. Bot Fulfillment and FSI Backend Systems

Identifying intents and filling slots is only half the battle; sometimes it’s the easiest half. This is particularly true when you want bots to tie into backend systems for intent fulfillment.

Imagine your customer wants to transfer funds from their checking account to their savings account. Identifying that the customer wants to move a specific amount of money, and capturing all the details surrounding this transfer, satisfies the challenges of intent recognition and slot-filling. But that’s not where the bot’s task ends. It has to actually fulfill that request. That means integration with FSI backend systems is necessary to execute the intent. And a key challenge is ensuring each of those system interfaces adheres to security, compliance and communication protocols.
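
As a rough illustration, the sketch below shows a hypothetical NLU result for that funds transfer and the point at which fulfillment must hand off to a backend system. The class, field and core_banking names are assumptions for illustration only, not any specific bank’s or vendor’s interface.

```python
from dataclasses import dataclass

# Hypothetical NLU output for "transfer $200 from checking to savings".
@dataclass
class IntentResult:
    intent: str
    slots: dict
    confidence: float

nlu_result = IntentResult(
    intent="transfer_funds",
    slots={"amount": 200.00, "from_account": "checking", "to_account": "savings"},
    confidence=0.94,
)

def fulfil(result: IntentResult) -> str:
    # Recognising the intent and filling the slots is only half the job:
    # fulfilment still has to call the bank's backend (shown here as a
    # hypothetical core_banking client) over a compliant, authenticated channel.
    if result.intent == "transfer_funds":
        # In a real deployment: core_banking.transfer(**result.slots)
        return (f"Transferred {result.slots['amount']:.2f} from "
                f"{result.slots['from_account']} to {result.slots['to_account']}")
    return "Sorry, I can't complete that request yet."

print(fulfil(nlu_result))
```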

Simplifying the process can solve this. Some companies, like Oracle, ServiceNow and Salesforce, have consolidated the interfaces with many backend systems, essentially becoming a communications broker. This means your bot might not need to talk to 10 different systems.
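
A minimal sketch of that broker pattern, assuming a hypothetical adapter class and backend clients, might look like this:

```python
# Sketch of the "broker" idea: the bot integrates with one adapter, and the
# broker fans requests out to the underlying backend systems. Class and
# method names are illustrative, not any vendor's real API.
class BackendBroker:
    def __init__(self, systems: dict):
        # e.g. {"payments": PaymentsClient(), "cards": CardServicesClient()}
        self.systems = systems

    def execute(self, domain: str, action: str, **kwargs):
        # One integration point for the bot instead of ten bespoke ones.
        return getattr(self.systems[domain], action)(**kwargs)

# Usage (with hypothetical backend clients):
# broker = BackendBroker({"payments": PaymentsClient()})
# broker.execute("payments", "transfer", amount=200,
#                from_account="checking", to_account="savings")
```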

3. Data Security, Privacy and Compliance

Although many start their journey with FAQ bots or concierge bots, the real challenge comes in the form of transactional bots because banks have higher standards for controls. And when it comes to productising these transactional bots, there’s no quick fix.

The first step is to divide all intents into two buckets: those that require identification and verification (ID/V) and those that don’t. This clarifies the security profile that you need to follow within your bot ecosystem. The second step is to categorise the types of information that will fill slots or that bots will deliver; determine what’s required from the perspective of PCI compliance and data-privacy regulations. The third is to identify all the systems that will be leveraged for intent fulfillment - and then categorise each of these from a security/risk perspective.
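
The sketch below shows one hypothetical way to capture those three steps as a simple intent catalogue before taking it to your security and compliance teams. The intent names, data classes and backend labels are placeholders.

```python
# Illustrative triage of a bot's intent catalogue ahead of the security and
# compliance review. All names and categories are hypothetical placeholders.
intent_catalogue = {
    "branch_opening_hours": {
        "requires_idv": False, "data_classes": ["public"], "backends": [],
    },
    "check_balance": {
        "requires_idv": True, "data_classes": ["account_data"],
        "backends": ["core_banking"],
    },
    "transfer_funds": {
        "requires_idv": True, "data_classes": ["account_data", "payment_data"],
        "backends": ["core_banking", "payments"],
    },
}

# Step 1: which intents need ID/V; Step 2: what data classes they touch (PCI,
# personal data); Step 3: which fulfilment systems must be risk-assessed.
idv_required = [name for name, meta in intent_catalogue.items() if meta["requires_idv"]]
print(idv_required)
```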

Gathering this information ahead of time will prepare you for the necessary discussions with your security and compliance teams. You want your security and compliance team to be a partner in this endeavour as well as the ultimate authority on what you can allow. Do your homework; start the consultation process with these teams before developing very ambitious plans.

4. Model Risk Governance

Banks use complex models and data science within their business operations. Model risk governance grew out of this when banks first started to use artificial intelligence (AI) algorithms for risk assessment, such as deciding whether to give you a loan. If those algorithms didn’t work properly, the unintended consequences could put banks in jeopardy. Now, model risk governance is a formal process with strict gate control. When any type of AI or machine learning algorithm is proposed, banks must go through this very detailed and lengthy process to protect the bank.

Suppose you have a project in mind that you want to complete in six months. Talk to your in-house model risk governance team as early as possible and ask them how long their process will take. Then build this into your project timeline.

While model risk governance wasn’t originally intended to cover bots, these teams are now involved with all services that use AI models. If a process or system is making calculations, forecasting or analysing, it could be classified as a model. In the end, you’re responsible for navigating this process as well as having all the required information available for the model risk governance team. This information can include tuning, testing and QA processes, data source information and data cleansing processes, as well as detailed processes for eliminating the risk of unfair or unethical bias within the model.
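
As a rough illustration, that documentation could be captured in a simple model record like the hypothetical sketch below; the field names are assumptions, not a regulatory template or any bank’s actual form.

```python
# A minimal, hypothetical "model record" of the kind a model risk governance
# team typically asks for. Field names and values are illustrative only.
model_record = {
    "model_name": "retail_banking_nlu_v3",
    "purpose": "intent classification for a customer-facing chatbot",
    "training_data": {
        "sources": "anonymised chat and call transcripts",
        "cleansing": "PII redaction, deduplication, language normalisation",
    },
    "tuning_and_qa": "documented test sets and regression runs before each release",
    "bias_controls": "evaluation across languages, regions and customer segments",
    "owners": {"business": "digital banking", "technical": "conversational AI team"},
}
```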


The Road to Bot Sophistication

Considering all these challenges, you might wonder where to begin and how to move toward a sophisticated end-state bot deployment.

“Just like a regular conversation, a quality bot experience is dependent on both understanding what the user wants to do and being able to complete that task,” said Tyler. “The answer side of the equation combines two equally important elements - conversation design and integration with backend systems. It doesn’t matter how good your AI is at understanding what users are saying if your answers are poorly designed or you’re not integrated with backend systems to complete tasks.”


Tyler also suggests choosing use cases you know will work and deliver value. “Chatbots and virtual assistants are deployed at scale and delivering value at major banks today in their consumer banking units. Bank of America’s Erica Virtual Assistant, for example, has 21 million users and usage grew over 60% last year. Customers use Erica because it’s a faster, more convenient way to do things on mobile. This consumer use case is a great place to start. If you don’t want to dive straight into deploying a bot into digital banking, perhaps start on the dotcom and expand into authenticated channels over time.”


He also recommends finding a technology partner who brings a pretrained banking language model to the table. This will save time and resources on AI training, get you to market faster and lower the overall risk of the project. You still need to ensure the bot can answer user queries and complete their tasks.

“The industry is riddled with examples of FIs who underestimated the time and level of effort required to train the AI language model,” added Tyler.

Start with the low-hanging fruit: use cases with a high volume of routine, low-complexity queries and tasks. These are generally going to be in your retail/consumer business units. Pick high-volume use cases because high volume is a strong indication you’re solving a real problem and you’ll generate sufficient data to train and optimise your AI model quickly.


“The good news is that there is a lot of low-hanging fruit. One look at what customers are calling into your call centre about will give you a clear indication of where to start,” said Tyler. “Alternatively, start with low-risk use cases. These could be an internal HR or IT help desk, for example. These sorts of knowledge management bots are akin to a search engine with a conversational interface and can be very effective, easier to build and train and faster to deploy. But note that the solution you build for these use cases, and your experience training it, will not transfer over to more complex, mass consumer use cases.”
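
As a simple illustration of mining call-centre data for those high-volume use cases, the hypothetical sketch below tallies contact-reason codes to surface the most frequent, routine queries; the reason codes are invented for the example.

```python
from collections import Counter

# Hypothetical contact-reason codes pulled from call-centre records.
contact_reasons = [
    "check_balance", "reset_password", "transfer_funds", "check_balance",
    "dispute_transaction", "check_balance", "reset_password", "card_lost_or_stolen",
]

# High-volume, routine reasons are the low-hanging fruit for a first bot;
# low-volume, high-complexity reasons (like disputes) stay with live agents.
for reason, count in Counter(contact_reasons).most_common(3):
    print(f"{reason}: {count} contacts")
```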

Finally, keep humans in the loop. Bots help users solve routine, repetitive problems. These comprise the bulk of what customers want to do in digital banking, and these simple queries are monopolising your support team’s time. But bots will never work perfectly all the time. There are many higher-value tasks, such as transaction disputes and estate issues, that are best solved by your support team. Having the ability for the bot to seamlessly hand a user over to a live agent gives you the best of both automation and the human touch.
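
A minimal sketch of such a handoff rule, assuming hypothetical intent names, a confidence threshold and routing labels (not the Genesys API), might look like this:

```python
# Sketch of a human-in-the-loop escalation rule. Intent names, the threshold
# and the routing labels are illustrative assumptions.
HANDOFF_INTENTS = {"dispute_transaction", "estate_or_bereavement"}
CONFIDENCE_THRESHOLD = 0.7

def route(intent: str, confidence: float) -> str:
    # Escalate high-value or low-confidence conversations to a live agent,
    # passing the transcript and filled slots along so the customer never
    # has to repeat themselves.
    if intent in HANDOFF_INTENTS or confidence < CONFIDENCE_THRESHOLD:
        return "handoff_to_agent"
    return "handle_with_bot"

print(route("dispute_transaction", 0.95))  # -> handoff_to_agent
print(route("check_balance", 0.92))        # -> handle_with_bot
```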


See how Genesys chatbots and voicebots can help you save time and money.
