- A company’s chief ethics officer ensures that AI is used responsibly.
- They define the principles for regulating the technology, study the legal landscape, and liaise with stakeholders.
- People in this role often earn a salary in the mid-six figures per year.
The launch of ChatGPT marked a new era in the corporate world.
Generative AI can write emails, generate code, and produce graphics in minutes. Suddenly, the days of employees sifting through their inboxes and painstakingly crafting presentations seemed numbered.
Drawn by the promise of increased profits and productivity, businesses have rushed to adopt the technology: a May survey by the consulting firm McKinsey & Company found that 65% of the more than 1,300 organizations polled now use generative AI regularly, double the share from the previous year.
But the technology also carries serious risks if misused. Left unmanaged, generative AI can hallucinate, spread misinformation, or reinforce bias against vulnerable groups. And because these systems depend on large amounts of sensitive data, they raise the risk of data leaks. In the worst case, the more capable the technology becomes, the greater the risk that it drifts out of alignment with human values.
With great power comes great responsibility, so companies profiting from generative AI also need to ensure the technology is properly governed.
This is where the chief ethics officer comes in.
An important role in the age of AI
The details of the role vary from company to company, but broadly speaking, the chief ethics officer is responsible for determining the impact of a company's use of AI on society as a whole, according to Var Shankar, chief AI and privacy officer at Enzai, a software platform for AI governance, risk, and compliance. “So how does it impact not just your company and your bottom line, but your customers? How does it impact people around the world? And what is the impact on the environment?” he told Business Insider. The next step, he said, is to “build a program to standardize and scale those questions every time we use AI.”
It's a role that gives policy wonks, philosophy majors and programming whiz kids a foothold in the fast-changing tech world — and it often comes with a handsome salary in the mid-six figures.
But right now, companies aren't hiring fast enough for these roles, according to Steve Mills, chief AI ethics officer at Boston Consulting Group. “There's a lot of talk about the risks and the principles, but I think there's very little action to put that into practice within companies,” he said.
C-level responsibility
According to Mills, the ideal candidate for the role has expertise in four areas: technical knowledge of generative AI, experience building and deploying products, an understanding of key laws and regulations around AI, and extensive experience with adoption and decision-making in organizations.
“Too often we appoint people in middle management who may have the expertise, drive and passion, but who typically don't have the authority to change things within the organization and bring together legal, business and compliance teams,” he said. All Fortune 500 companies using AI at scale should appoint an executive to oversee a responsible AI program, he added.
Shankar, a former lawyer, said no special educational background is required for the job. The most important qualification is understanding a company's data, which means “the ethical implications of the data you collect and use, where that data comes from, where it sits before it enters the organization, what consents you have for that data,” he said.
He gave an example of how health care providers can unintentionally perpetuate bias when they don't have a firm grasp of their data: a study published in Science found that hospitals and health insurers used an algorithm to identify patients who would benefit from “high-risk care management,” but it ended up prioritizing healthier white patients over sicker Black patients. It's the kind of blunder an ethics officer can help a company avoid.
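The mechanism behind that finding is easy to reproduce in miniature: a model trained to predict health care cost rather than health need will rate any group that historically receives less care as lower-risk. The Python sketch below is a hypothetical illustration of that proxy-label effect, not the study's actual algorithm or data; every variable and number in it is an invented assumption.

```python
# Hypothetical sketch of the proxy-label problem: ranking patients by
# health care COST when the real target is health NEED. All values here
# are invented for illustration, not drawn from the Science study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true (unobserved) health need.
group = rng.integers(0, 2, n)                   # 0 = group A, 1 = group B
need = rng.normal(loc=5.0, scale=1.0, size=n)   # true underlying need

# Assumption: group B historically receives less care per unit of need,
# so observed spending systematically understates its need.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access + rng.normal(scale=0.5, size=n)

# "Risk score" = observed cost; enroll the top 10% of scores in a
# high-risk care-management program, as the proxy model would.
enrolled = cost >= np.quantile(cost, 0.90)

# Among patients with equally high true need, group B is enrolled far
# less often, even though the score never looks at group membership.
high_need = need >= np.quantile(need, 0.90)
for g, label in [(0, "group A"), (1, "group B")]:
    mask = high_need & (group == g)
    print(f"{label}: {enrolled[mask].mean():.1%} of high-need patients enrolled")
```

Running the sketch shows the high-need patients in the under-served group being enrolled at a fraction of the other group's rate, which is the shape of the disparity the study reported.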
Collaboration across companies and industries
The person in this role must also be able to communicate confidently with a range of stakeholders.
Christina Montgomery, IBM's vice president, chief privacy and trust officer, and chair of its AI ethics committee, told BI that her days are typically filled with client meetings and events in addition to other responsibilities.
“I've spent a lot of time externally, probably even more so recently, speaking at events, engaging with policymakers, and serving on external committees, because I feel like we have a huge opportunity to influence and determine what the future looks like,” she said.
For example, she sits on the board of the International Association of Privacy Professionals, which recently launched an Artificial Intelligence Governance Professional certification for those who want to lead in the field of AI ethics. She also engages with government leaders and other chief ethics officers.
“We think it's absolutely important to talk to each other on a regular basis and share best practices, and we do a lot of that between our companies,” she said.
She aims to develop a broader understanding of what's going on at a societal level, which she sees as key to the role.
“What concerns me about the situation we're in right now is that all of these regulations are not globally interoperable and companies don't know what the expectations are in terms of what they have to follow and what is right and what is wrong,” she said. “We can't operate in that kind of world, so that dialogue between companies, governments and boards is really important right now.”