AI wins by breaking its own code: Glitches reveal new challenges

AI News


New challenges are emerging in the rapidly evolving world of artificial intelligence. It is an AI system that is deceptive, not by design, but as an unintended consequence of its complex internal workings.

In one example of this phenomenon, an AI algorithm discovered a way to achieve its goals by hacking its own code in a research experiment. According to a new paper in the same journal, an AI tasked with winning a simple game involving strategic deception discovered an unexpected workaround to win. pattern.

There is be Potential impact AI deception commerciallyas is It can undermine consumer confidence, create an unfair competitive environment, and ultimately harm a company's bottom line. From AI-generated fake reviews From sophisticated phishing scams to misleading advertising and manipulated product recommendations, the implications of AI deception in the commercial realm are far-reaching and potentially devastating.

As companies increasingly rely on AI to optimize operations and engage with customers, there is an urgent need to address and mitigate the risk of AI deception.

AI that hacks itself

Selmer Bringsjorddirector of AI and Inference Laboratory in Rensselaer Polytechnic Institutetold PYMNTS that the very nature of the deep learning that underpins most of today's AI is inherently prone to deception.

“GenAI agent” by meaning Constrained only by what the data has and tries to please the human interlocutor ingested And that data is immoral at best, definitely immoral,” he said. “GenAI agents can accurately be considered white liars on steroids.”

Bringsjord pointed to three main factors that lead to AI deception: Inherent limitations of deep learning. algorithm; AI can be exploited by deceivers tool. and the possibility of fully autonomous AI systems with their own goals and decision-making capabilities.

“Engine 1 is impossible to control because it is part of the nature of deep learning,” he said. “Engine 2 is uncontrollable because, since the beginning of time,manymany humans are themselves liars, and such people will inevitably utilize GenAI to carry out their orders. Engine 3 is really scary Promoters of artificial deception. But at least here, humans are at least In theory, it is possible to refuse to proceed with related research and development. ”

The complexity and opacity of AI systems make it difficult to identify and control malicious behavior. Christy Boyda trusted senior AI specialist. SAStold PYMNT that monitoring is important.

“Many of the challenges with AI systems arise due to insufficient AI governance and oversight over the lifecycle of AI systems,” she said. “AI governance helps address hallucinations, inadequate training data, and a lack of appropriate constraints and guardrails through comprehensive controls.”

“The concept of human systems, where human judgment and values ​​are central, emphasizes the importance of maintaining human control over the AI ​​decision-making process,” she added.

Oi ai CEO and co-founder Bob Rogers warned PYMNTS: is fulfilled Information and ads that the algorithms have determined will grab our attention. From there, we can easily jump to AI-generated content (reviews, articles, explainers) that are optimized to manipulate broader purchasing behavior.”

How can businesses maintain trust in AI?

Mr. Rogers also emphasized the importance of trust in commerce, Capgemini Research Institute It found that 62% of consumers have more trust in a company if they believe in it. AI interaction It's ethical.

“The biggest impact would be trust,” he says. “How can businesses maintain trust between partners and customers in the face of a tsunami of unreliable information?”

Companies need to take a multi-pronged approach to combating AI deception. Accumulate AI chief innovation officer Andres Diana he told PYMNTS. He suggested that “enterprises should conduct a rigorous testing phase that closely simulates real-world scenarios to identify and mitigate fraud before widespread deployment.”

Furthermore, when implemented, Explainable AI The framework is important.

“It is essential to continually monitor AI output during production and regularly update testing protocols based on new findings,” said Diana. “Incorporating an explainable AI framework increases transparency, and adhering to ethical AI guidelines ensures accountability.”

As AI advances, the challenge of controlling its deceptive tendencies becomes more complex. Bringsjord warned: future The possibility of fully autonomous AI configurable own goals and invent That program.

“Who can say that such a machine will not be like a demon, but like an angel?” he said.

Experts agree that the way forward lies in robust AI governance, continuous monitoring, and the development of AI systems that counter deception.

“Creating an open source AI framework” What to prioritize Transparency, fairness and accountability are key. ” FLock.io Founder and CEO Jiahao Sun he told PYMNTS. “but, Promotion and enforcement remain difficult Apply these frameworks to all AI companies. ”

Sun also emphasized the importance of determining the goals for AI.

“It's difficult to set overarching goals for AI because AI always looks for shortcuts to maximize the goal,” he said. “Researchers must predict all possible edge cases during AI training, which is difficult.”

Understand the strengths and weaknesses of AI

In addition to technical solutions, improving AI literacy among consumers and businesses is critical to fostering a more accurate understanding of AI's capabilities and limitations. Boyd highlighted the role of his AI literacy in promoting trust and managing expectations.

“The ultimate goal is to create an AI system that is not only technologically advanced, but also reliable and beneficial to all parties,” she said. “This way, we can harness the potential of AI to drive innovation and growth while preventing the risk of deception and unintended consequences.”

Blackbird.AI chief technology officer Naushad Uzzaman He told PYMNTS that while AI systems are not inherently deceptive, deception can occur when influenced or manipulated by malicious actors or flawed data.

Deception in AI is difficult to control due to several factors, including the black-box nature of AI systems, the sheer volume and variety of training data, and the rapid evolution of AI technology that outpaces regulatory frameworks and ethical guidelines.

“These social media disinformation campaigns also become part of the web-scraped text data used for training.” [AI models]” Uzzaman explained. “at the same time, [models] It is becoming the primary user interface for search engines. perplexed chatbot etc. Chat GPT.theseThese products can replicate the harmful brand bias that exists on the web and negatively impact consumers' perceptions of specific brands. ”

He said AI-driven deception could undermine trust between businesses and consumers and impact commerce.

“Fake AI-generated reviews and misleading product recommendations can distort consumer choices and damage brand reputations,” he explained.

He also pointed out that Deceptive AI may be used Manipulating financial markets or misrepresenting products in advertising.

UzZaman said companies must ensure data integrity, develop transparent and explainable AI systems, establish robust monitoring mechanisms, and adopt ethical frameworks to combat AI deception. and encouraged collaboration with industry peers and experts.

“Brand bias remains an under-researched topic and Blackbird.AI is doing some of the initial work. Directly quantify bias for different brands [models]” he said. [these models] As the proportion of content generated on the web increases, these biases are likely to be further amplified, potentially making it much harder for brands to recover from narrative attacks in the future. ”

For all of PYMNTS AI coverage, subscribe to our daily newsletter AI Newsletter.




Source link

Leave a Reply

Your email address will not be published. Required fields are marked *