Artificial intelligence (AI) chatbots could fuel terrorism by spreading violent extremism among young users, government advisers warn.
Jonathan Hall KC, the independent reviewer of terrorism legislation, said it was “entirely conceivable” that AI bots like ChatGPT could be programmed, or could decide on their own, to promote extremist ideology.
He also warned that it could be difficult to prosecute anyone, because “shared responsibility” between “human and machine” blurs criminal liability, while AI chatbots that engage in grooming are not covered by anti-terrorism laws and would get off “scot-free.”
“Today, the threat of terrorism in the UK is associated with unsophisticated attacks using knives and vehicles,” said Hall. “But AI-powered attacks are probably just around the corner.”
Senior technologists such as Elon Musk and Apple co-founder Steve Wozniak have already called for a pause on large-scale AI experiments like ChatGPT, citing “serious risks to society and humanity.”
“Spreading Violent Extremist Ideology”
In a Mail on Sunday article, Hall said a “terrorist worm” could infiltrate AI chatbots as they expand their role from mere internet search engines into companions and moral guides.
“Hundreds of millions of people around the world will be able to chat with these artificial companions for hours at a time, in every language of the world,” he said.
“I think it’s perfectly conceivable that an artificially intelligent chatbot could be programmed, or worse, could decide for itself, to propagate a violent extremist ideology.
“When it comes to the online world, anti-terrorism laws are already lagging behind. Will they be able to cope with AI?
“Human users can be arrested for what is found on their computers, and in recent years many of those arrested have been children. Many are also neurodivergent, or have medical conditions, learning disabilities, or other illnesses.
“But criminal law does not apply to robots, so an AI groomer would go unpunished.”