Last week I was in Amsterdam with Tom Whittaker, leader of the AI team at Burges Salmon, at InventU's 1st AI Governance Conference, which brought together leading minds from the AI governance community. It was an immersive experience. Tom has already posted some insightful thoughts of his own; below is a brief summary of mine.
- Trust (there’s a good reason to capitalize this important word):
- Trust requires rules
- Consumers need to trust your business
- Business depends on reputation
- Customer:
- What your customers care about really matters
- No trust gap between executives and customers
- Identify and engage with your customers
- Understand customer experiences
- Values:
- Your core values determine how you express yourself in the digital world
- Decide what you want to be known for
- If you want to be trusted with anything, this is the key to your strategy
- Trustworthy core values become a competitive advantage
- Have a clear mission:
- You need to develop an AI strategy and define your path forward.
- Be customer (human) centric. AI needs to exist alongside humans
- Speed without strategy guarantees chaos.
- Governing with true accountability:
- Blame can be a hot potato, but accountability must land with someone
- Requires real-time accountability, continuous monitoring, and continuous improvement
- Identify who will take ownership
- Good governance doesn’t inhibit good business
- Culture is important but undervalued
- AI cannot do governance.
- There are AI tools that can assist with governance, but responsible humans must be able to see, understand, and explain the results, and take full responsibility for them.
- Note the “middle management” gap.
- The approach must be both top-down and bottom-up, with no gap in the middle where the two fail to meet
- Collaboration and teamwork across the business is key
- Collaboration:
- Combine your talents
- Find your critical thinkers
- Get all the relevant skill sets into the room so that nothing falls through the gaps
- The people in that room don’t want each other’s jobs; they want to coexist and cooperate.
- Collaborate in ways that allow people to fill knowledge gaps. No single skill set or individual can do this alone.
- You may already have great talent in your data, legal, procurement, compliance, and privacy teams, or you may already have great processes and systems in place. There is no need to reinvent the entire AI wheel.
- Find your influencer:
- Change requires influence, your influencers can make things happen
- Create an ambassador
- Assemble a “Responsible AI Team”
- Employment:
- Humanoid robots aren’t science fiction, but they won’t take away every job; reskill and repurpose your workforce
- Everyone in your business needs to be AI-savvy in a way that fits their role
- Literacy reduces the risk of human error
- Train, train, and train again
- Data:
- The problem is data, not AI
- Specifically, the real question is where AI touches data
- Data issues delay adoption
- AI uncovers any problems in your data at speed and scale
- Sort, categorize, and sanitize your data; put controls around it; and know where it comes from
- Bias:
- Historical human biases are embedded in data
- AI amplifies those biases
- Be clear about your risk appetite.
- The risks are huge
- Identify your use case. You don’t need AI for everything
- Categorize use cases into risk lanes. Not all applications require the same treatment. Some applications can be addressed immediately, while others require more careful attention.
- Knowing your data helps isolate risk
- Some risks are acceptable, others are not.
- The highest risks demand the greatest attention
- Agents:
- Autonomous agents are good news
- Autonomy requires reliable data
- Agents need clear rules for engagement, intervention, and oversight, and a clear owner with full visibility
- Agent owners must be human, skilled, and knowledgeable
- All agent actions must be transparent, reversible, and explainable
- We are only at the beginning of our journey.
- We are all learning every day, and we have a long way to go
- Inflated expectations need to be tempered by reality
- Many AI projects have been abandoned
- Many senior leaders have resigned under intense public scrutiny
- The failure rate increases as more AI is introduced, and so does the rate of “falling on the sword”
- Kill switch:
- Test, monitor, train, evaluate, pilot…
- The journey continues
- AI widens the gap
- The technical part is easy.
- People, processes and culture are difficult.
- Change management is difficult
My area of focus is financial services, but there was an important message from our healthcare colleagues that touched me deeply (patients are not so different from financial services consumers in many ways):
- Patient-centered
One company wanted to be known for its core values and its culture, and that made it stand out. If I were a patient, those are the things I would care about. Wouldn’t you?
And the key message about ethics reminded me of a data protection lesson from early in my career:
- It might be legal
- It might be possible
- But should you?
AI isn’t all about speed. We can, and should, stop, consider the outcome, and ask ourselves: “Is this okay? Is this the right thing to do?”
You can read more updates like this by subscribing to our monthly Financial Services Regulation Update here, our AI Blog here, and our AI Newsletter here.
If you would like to discuss how current or future regulations may impact your use of AI, please contact me, Tom Whittaker, or Martin Cook. Meet our financial services experts here and our AI experts here.
