BFL Canada calls on companies to practice AI safety

As companies continue to incorporate AI assistants into their daily operations, they need to remain mindful of what information they feed into these tools.

So says Chris O'Sullivan, chief information officer at BFL Canada. According to IBM, about 26% of companies worldwide have experienced an AI-related data breach, he said, up from just 13% two years ago.

“Statistics Canada released a study on cyber incidents showing that approximately one in six companies have experienced a cybersecurity incident, and we know that AI-related incidents are part of that.”

This is not companies' fault, he says. AI adoption is happening so fast that organizations cannot keep up with all the precautions they need to put in place.

“Deployment is occurring at such a rapid pace that it is extremely difficult for businesses to keep up despite their best efforts. The adoption of these AI technologies is occurring at perhaps the fastest rate of any technology.”

He added that free AI tools offer weak data protection, and once sensitive information has been submitted, retrieving or removing it often requires additional steps.

His tips for AI safety include setting guardrails for employees, providing training on how to use these tools safely, using enterprise versions of AI tools, and monitoring for risky employee behaviour.

Despite this, O'Sullivan believes that “the potential of AI in terms of productivity and economic growth is enormous. Is it worth it? Does it outweigh the risk? I would certainly say so. The risk is not necessarily in AI; it is in unmanaged AI.”
