Generative artificial intelligence (GenAI) is popping up everywhere in the workplace, but many companies lack the policies and training needed to ensure their deployments don't fail.
Research by technology professional association ISACA shows that staff in almost three-quarters of European organizations are already using AI in their work, yet many organizations lack formal and comprehensive policies governing the use of such technology. Only 17% of organizations have a formal, comprehensive AI policy in place.
Almost half (45%) of those surveyed by ISACA said their organization allows the use of GenAI, up from 29% just six months ago.
But staff seem to be adopting it at a greater rate than their bosses realize, with 62% of those surveyed saying GenAI is being used to create written content, increase productivity and automate repetitive tasks.
Lack of understanding of AI
According to ISACA research, 30% of organizations provide limited AI training to technology employees, while 40% provide no training at all.
The research shows that despite all the hype around generative AI, most business and IT professionals have limited awareness of the technology, with three-quarters (74%) saying they are only somewhat familiar with it, or not very familiar with it at all. Just 24% said they were very or extremely knowledgeable about AI, and 37% described themselves as beginners.
However, this doesn't seem to have stopped staff from worrying about the potential negative effects of AI, with 61% of respondents saying they were very or extremely worried that generative AI could be exploited by bad actors.
The vast majority (89%) of business and IT professionals surveyed by ISACA cited misinformation and disinformation as the biggest risk of AI, but only 21% were confident in their own or their company's ability to detect it.
Only a quarter (25%) feel that their organization pays sufficient attention to ethical standards for AI, and fewer still (23%) think it adequately addresses AI concerns such as data privacy and the risk of bias.
While 38% of workers surveyed expected many jobs to be eliminated by AI over the next five years, far more (79%) said their jobs would be changed by it.
Digital trust professionals were more optimistic about their own field, with 82% saying AI would have a neutral or even positive impact on their career. However, they acknowledged that they would need new skills to succeed, with 86% expecting to have to increase their AI skills and knowledge within two years to win a promotion or even keep their job.
For this study, ISACA surveyed 601 European business and IT professionals. The results were consistent with those of a large-scale international study ISACA also carried out.
Chris Dimitriadis, chief global strategy officer at ISACA, said there is much work to do to understand AI in the workplace. “You can't create value without a deep understanding of technology. If you don't understand technology, you can't really address risk,” he said.
Dimitriadis added that the current state of AI is similar to that of previous emerging technologies. “We have organizations trying to figure it out, trying to create policies, put teams together and build skills,” he said.
“But at the same time, the deployment of this technology is not waiting for all these things to happen. So we see employees using generative AI to create written content, [and] product teams trying out new partnerships with AI providers, without a framework that can help the organization in a meaningful way,” he added.
Dimitriadis warned that while companies are keen to capture the potential of AI and create value, many have not yet focused seriously on training and reskilling employees to use AI safely. “The eagerness to innovate and create something new sometimes outweighs a company's policy structure,” he told Computer Weekly.
In a larger organization, for example, some departments may start using AI without informing senior management. In other cases, time-to-market pressures can delay cybersecurity, assurance and privacy work, he said.
Need for education on AI risks
However, he saw a lack of skills as the biggest reason for the gap between AI use and AI governance. “There is already a huge gap when it comes to cybersecurity, for example. Imagine how this gap could turn into something even more severe with AI,” he said.
This is especially true because GenAI can pose new security risks, depending on the industry and application. Companies that process personal data need to be aware of the risk of AI introducing bias, or of hallucinations adding invented details to records.
There is also the threat of external attackers crafting requests that trick AI systems into revealing sensitive data. Another challenge is that while some jurisdictions are enacting AI-specific regulations, it is difficult for non-experts to understand which rules a given use of AI might violate.
For this reason, he said, organizations need to continually audit the output of their AI and take great care to ensure it behaves as users expect.
According to Dimitriadis, the first step to fixing the lack of policy and oversight for AI is to train the right people within your organization.
“It always starts with people. If you have trained people, they can put the right policies in place. You have to understand the risks first and then create the policies,” he said.
It is also important to ensure that awareness and education about AI extend beyond specialists, he said, so that employees understand the risks associated with using AI and avoid unintentional breaches.
“This is a board discussion in terms of making sure that there is a balance between the value created by the introduction of AI and the risks,” he added.
Part of this will include having a full-time chief information security officer (CISO), as well as trained privacy and risk experts who can interact with the board on issues related to specific organizational operations.
“If you bring a general story to the board about a threat, it's never going to be very persuasive, because it's not put in the context of specific revenue streams within the company,” he said. “It all starts with educating people.”