#Infosec24: Tackling AI risks requires going back to risk management basics

Experts argue that if organizations want to ensure AI is used safely and securely, they should adhere to risk management basics, such as well-written policies, effective training, and clear accountability.

An expert panel on day two of Infosecurity Europe this morning agreed that accountability and training should go hand in hand.

“One of the really important things [to build into] policies is accountability – you need to hold employees responsible for the choices they make. It's good to give them the tools, but employees need to be responsible for knowing they're making the right choices,” argued Sarah Lawson, CISO at UCL.

“If you're going to use it, you need to know how to use it. And as a company, you need to provide people with the correct way to use it. Provide people with the training to do that.”

Read more from Infosecurity Europe: #Infosec24: Deepfake expert warns about “AI tax havens”

Others highlighted the need for modern training programs to help employees use AI responsibly and safely within their organizations.

“To use [GenAI] most effectively, some awareness training will be required. If you ask the question the wrong way, you won't necessarily get the right answer,” said Ian Hill, director of information and cybersecurity at Brockmoor.

Both Hill and GuildHawk CEO Jurga Zilinskiene agreed that prompt engineering will be a key skill going forward.

“We talk about machine learning, but we really need to focus on human learning, which is probably the weakest part when we talk about AI,” Zilinskiene said.

GuildHawk has created a prompt engineering task force in which business professionals within the organization, rather than the technology team, work together to decide what outputs they want from GenAI and design the prompts accordingly.

“Certainly, tech people have a role to play in this, but the reality is the [business] experts are here,” Zilinskiene added.
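
To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of shared prompt template such a task force might maintain; the field names, wording, and structure are illustrative assumptions, not GuildHawk's actual prompts.

```python
# Hypothetical prompt template a business-led task force might standardize.
# All field names and wording are illustrative assumptions.
PROMPT_TEMPLATE = """You are a {role} assistant.
Task: {task}
Constraints: answer only from the vetted source material below.
Source material:
{context}
Required output format: {output_format}
"""

def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Fill the shared template so every team phrases its GenAI
    requests in a consistent, reviewable way."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, context=context, output_format=output_format
    )

# Example: a contract-review expert decides the content, not the phrasing.
print(build_prompt(
    role="contract-review",
    task="Summarize the termination clauses.",
    context="[vetted contract excerpt]",
    output_format="bullet list, plain English",
))
```

The point of a template like this is that domain experts only decide what to ask, while the phrasing that shapes the model's answer is agreed once and reused.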

Data quality is key

Zilinskiene also noted that data quality and governance provide a critical foundation for the safe, secure and optimal use of AI, yet their importance is under-recognized.

“One of the biggest weaknesses is the foundation on which this technology is built. You need strong datasets that are verified and trusted. But who wants to invest in this, which accounts for 80-90% of development?” she argued.
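
As a rough sketch of what that verification investment can look like in practice, the hypothetical Python example below runs basic quality checks on a dataset before it is used for AI work; the required fields and the threshold are assumptions for illustration, not a published standard.

```python
# Minimal sketch of automated checks behind a "verified and trusted"
# dataset. Field names and the threshold are illustrative assumptions.
import csv

REQUIRED_FIELDS = {"id", "text", "source", "reviewed_by"}

def validate_dataset(path: str, max_unreviewed_ratio: float = 0.01) -> bool:
    """Reject a dataset if required fields are absent or too many
    rows lack a human reviewer sign-off."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if not REQUIRED_FIELDS.issubset(reader.fieldnames or []):
            return False
        rows = list(reader)
    if not rows:
        return False
    unreviewed = sum(
        1 for row in rows if not (row.get("reviewed_by") or "").strip()
    )
    return unreviewed / len(rows) <= max_unreviewed_ratio
```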

UCL's Lawson argued that much of the early hype around AI is starting to wane as a result, with users realizing that leading models are “not as good as they had hoped”.


