HR Magazine – Government advises companies on AI cyber threats



The UK government sent an open letter to business leaders on Wednesday, April 15, providing guidance on AI-based cyber threats.

In a joint letter, the Minister for Security and the Secretary of State for Science, Innovation and Technology provided business leaders with measures to “protect against AI cyber threats”.

The letter follows news that AI company Anthropic announced a Mythos model capable of carrying out cyberattacks.

Government ministers have written that the UK is poised for a rapid increase in frontier AI model capabilities over the next year and is planning accordingly.

Among other recommendations, the government advised business leaders to take cybersecurity seriously, obtain government-backed cyber certifications and follow the advice of the National Cyber Security Centre (NCSC).

AI has significantly lowered the barrier to entry for cybercriminals, Conor O’Neill, CEO and co-founder of cybersecurity company OnSecurity, told HR magazine.


Read more: IT worker jailed for hacking employer


O’Neill praised the government for recognising the threat posed by AI, but said the advice it had given “will not have the appropriate impact”. According to O’Neill, the government-backed Cyber Essentials certification and the NCSC’s early warning services are valuable as a “foundation”, but not as a “corporate goal”.

Aldis Elgulis, chief AI officer at business consultancy Emergan, called the government’s letter “a useful wake-up call for business leaders.”

Elgulis agreed with O’Neill, telling HR magazine that while the new advice gives organisations a solid technical baseline, it does not address the root of the problem: human behaviour.

“Staff entering sensitive data into unapproved AI platforms, payroll teams facing deepfake requests that look just like their bosses, and widespread use of shadow AI that no one approves of are people issues and fall within the purview of HR departments,” Elgulis said.


Read more: Viral video exposes deepfake AI fraud


O’Neill added: “AI is making social engineering attacks scarier and more convincing. A spoofed email from a ‘coworker’ asking you to change your bank details no longer looks like a scam. It feels like Tuesday morning.”

Jordan Burke, co-founder and director of HR training provider Nine Dots Development, told HR magazine: “Human resources departments must be on the front lines of educating employees about the nature of these threats: how to spot them and what to do if they encounter them.”

Elgulis advised HR teams to implement three strategies: incorporate AI literacy into the flow of work through continuous, work-embedded learning; establish clear policies about which tools are authorised and for what purposes; and, finally, create a culture where staff openly raise concerns about AI usage rather than hiding it.


