As generative AI (GenAI) reshapes the industry, family-owned community banks must balance opportunity against ethical obligations: improving customer workflows while protecting data privacy and ensuring regulatory compliance. The challenge is particularly pronounced in financial services, a fiercely competitive sector where trust is paramount and regulatory scrutiny is intense.
According to a 2023 report by McKinsey, AI could deliver an additional $200 billion to $340 billion in value across the banking industry, equivalent to 4.7% of the industry's annual revenue. For community banks, where personal trust is paramount, the stakes are especially high: new technologies present great opportunities but also amplify risk.
Community banks serve local customer bases with limited resources compared to global institutions, yet they play an important role in their communities, as outlined in a recent report from the National Bureau of Economic Research. Scaling AI responsibly is therefore even more important for community banks seeking to maintain customer trust and operational integrity.
Matthew DeMello, Senior Editor at Emerj, sat down with Miranda Jones of Emprise Bank on the "AI in Business" podcast to continue a conversation about scaling responsible AI.
This article examines three key takeaways from their conversation:
- Creating a safe environment for AI experiments: Give employees a controlled space to explore AI tools while ensuring data privacy and protecting proprietary data.
- Leveraging AI for unstructured data insights: Use GenAI to process unstructured data and help employees improve the clarity and efficiency of their communication.
- Implementing domain-specific AI models: Prioritize smaller, more targeted AI models over broader foundation models to address the unique needs of community banking customers and ensure contextually relevant outputs.
Listen to the complete episode below:
Guest: Miranda Jones, SVP, Data & AI Strategy Leader at Emprise Bank
Expertise: Strategic Leadership, AI, Machine Learning
Brief recognition: Before her current role at Emprise Bank, Miranda was Emprise's Vice President of Predictive Analytics. Previously, she was an analyst at Spirit AeroSystems, supporting procurement costs, pricing, and business analytics. She holds a Master of Science in Mathematics.
Creating a safe environment for AI experiments
When asked why it is important, from a data science perspective, to create a safe environment for employees to experiment with AI tools in financial services, Jones explains that skill development takes time, so it is important that employees can start using GenAI tools now rather than waiting for perfect conditions.
Jones explains that these spaces help employees build AI literacy: learning how to write prompts effectively, interpret outputs, and avoid treating AI tools like search engines. Recognizing issues such as bias, hallucinations, and misinformation also helps employees understand the risks and critically evaluate AI-generated results. The outcome is that employees can securely integrate AI into their workflows, enhancing customer service, document processing, or internal operations without violating privacy or compliance rules.
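One way a sandboxed environment can enforce the privacy side of this is to redact sensitive customer identifiers before any text leaves the bank's systems. The sketch below is illustrative only; the patterns, placeholder names, and policy are assumptions for demonstration, not anything described in the episode.

```python
import re

# Hypothetical guardrail for a sandboxed GenAI environment: strip obvious
# account-number-like digit runs and email addresses from text before it
# is sent to an external GenAI tool. Real deployments would use far more
# robust PII detection; these regexes are illustrative only.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
ACCOUNT_RE = re.compile(r"\b\d{8,17}\b")  # bare 8-17 digit runs

def redact(text: str) -> str:
    """Replace emails and account-number-like strings with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = ACCOUNT_RE.sub("[ACCOUNT]", text)
    return text

print(redact("Customer jane@example.com, account 123456789012."))
# -> Customer [EMAIL], account [ACCOUNT].
```

A guardrail like this lets employees experiment freely in the sandbox while keeping identifiable customer data out of third-party tools.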
Jones further explains that GenAI models are designed to "write words that sound like humans, rather than identify facts." As a result, critical evaluation of outputs is essential to prevent false information from undermining customer trust.
Leverage AI for unstructured data insights
Jones explains that GenAI excels at processing unstructured data such as email text, Word documents, and PDFs. She highlights how AI can help employees approach data structure and communication.
"For example, if you have ten pages of redundant text in a document, instead of having ten pages of redundant text, it might be easier to understand.
So by using GenAI, trying to learn things from the document, and iterating on the prompt, they may really learn that, in the end, what I needed to communicate was not ten pages. It was these five ideas, in a concise way."
– Miranda Jones, SVP, Data & AI Strategy Leader at Emprise Bank
Implementing domain-specific AI models
Jones argues that domain-specific models are essential to addressing the needs of a bank's own customers. She notes that even small variations in language, such as differences between British and American English or local slang, can affect how customers communicate.
Overly generalized AI models may fail to capture the nuances of a particular customer segment or local context, she explains, and words used in a financial services context can mean something very different in another industry. Using an analogy to Apple's App Store, she notes that specialized apps outperform apps designed to serve too many purposes.
When asked what insights she can offer leaders in regulated industries adopting AI responsibly at a deliberate pace, she advises that when deploying agents and other applications, companies should always start with a human in the loop, evaluate the process, and only then determine whether the human can be removed.
In parallel, leaders need to consider whether processes should be designed differently to scale AI while fully benefiting from the technology. Jones believes companies should approach these conversations by identifying the problem and what they are trying to achieve, rather than starting from the assumption that an AI agent is the answer, and only then determining whether AI is the right tool for the job.
