- Businesses are rapidly integrating generative AI technologies to improve productivity.
- But experts are concerned that efforts to manage AI risks are lagging.
- A senior partner at BCG said efforts toward responsible AI are “not moving that fast.”
Businesses have been racing to implement generative AI technology into their operations since the launch of ChatGPT in 2022.
Executives say they are excited about how AI will increase productivity, analyze data and reduce red tape.
Nearly four in five business leaders believe their companies need to adopt AI to remain competitive, according to the 2024 Work Trend report from Microsoft and LinkedIn, which surveyed 31,000 full-time workers between February and March.
However, implementing AI in the workplace also comes with risks, including reputational, financial, and legal damage. The challenge is that these risks are often vague, and many companies are still trying to figure out how to identify and measure them.
A responsible AI program must include strategies for governance, data privacy, ethics, and trust and safety, but risk experts say these programs are not keeping pace with innovation.
Tad Rosenlund, a managing director and senior partner at Boston Consulting Group, told Business Insider that efforts to use AI responsibly in the workplace are "not moving that fast." According to BCG, these programs often require significant investment and at least two years to implement.
Given that cost and timeline, company leaders are more likely to focus their resources on deploying AI quickly in ways that boost productivity.
"Establishing good risk management capabilities requires significant resources and expertise, which not all companies currently have the luxury of or access to," Nanjila Sam, a researcher and policy analyst, told the MIT Sloan Management Review. He added that "demand for AI governance and risk professionals exceeds supply."
Investors also need to play a bigger role in funding the tools and resources for these programs, says Navrina Singh, founder of Credo AI, a governance platform that helps businesses comply with AI regulations. Funding for generative AI startups reached $25.2 billion in 2023, according to a report from the Stanford Institute for Human-Centered Artificial Intelligence, but it is unclear how much of that money went to companies focused on responsible AI.
"The venture capital landscape also reflects a disproportionate focus on AI innovation over AI governance," Singh told Business Insider in an email. "Deploying AI responsibly, at scale, and rapidly requires a similar focus on ethical frameworks, infrastructure, and tools to ensure sustainable and responsible AI integration across all sectors."
Legislative efforts are underway to close this gap. In March, the EU approved its Artificial Intelligence Act, which sorts AI applications into risk categories and prohibits uses that pose unacceptable risk. Meanwhile, President Biden signed a sweeping executive order in October calling for greater transparency from big tech companies developing artificial intelligence models.
However, the pace of innovation in AI means that government regulation alone may not be enough for businesses to reliably protect themselves at this time.
"Companies risk a serious lack of accountability that could see AI efforts halted before they reach production or, worse, cause unintended social risks, reputational damage, and regulatory complications once they are deployed," Singh said.
