Cybersecurity experts call for greater accountability and transparency in the use of AI

Singapore – When a consumer buys a lock, the seller talks up its quality and says nothing about the product's limitations.

However, if experts expose the lock's vulnerabilities, consumers will judge its value very differently.

This concept also applies to artificial intelligence, says computer and internet security expert Jeff Moss, founder of hacking convention DEF CON.

As the use of AI rapidly becomes more widespread, developers may extol only the virtues of their products, while consumers may neglect to ask whether the AI is safe.

For Moss, greater accountability and transparency are needed to reduce the risks associated with AI by ensuring it is not misused or exploited by criminals.

He expressed his concerns in an interview with The Straits Times on April 29 at the Sands Expo and Convention Center, where DEF CON is being held in Singapore for the first time.

The convention runs in parallel with the Milipol TechX Summit (MTX) 2026, which takes place from April 28 to 30.

Moss, who has held several high-profile cybersecurity positions and was part of the technical consulting team for the hit techno-thriller TV series “Mr. Robot,” said it’s important to discuss accountability in AI.

When a problem arises, he said, the question becomes whether the responsibility lies with the developer, the company that hired the developer, or the user of the technology.

Beyond the unpredictability that comes from a lack of accountability, he added, a lack of transparency about how the technology works and how it is developed also creates opportunities for criminals and nation-states to misuse it.

“When I talk to policymakers, I always encourage them to work towards greater accountability and transparency, because I think people generally make better decisions when they have more information,” he said.

Speaking at MTX, Moss noted that AI has evolved from a novelty into a source of value, with agents able to perform tasks autonomously.

Users retain control by setting parameters and boundaries for these agents, even if they are not infrastructure experts.

For example, he said, agents could be deployed to find the lowest prices on airline tickets instead of relying on airline pricing sites.

In an age where almost every technology has a political element, Moss said he wouldn’t be surprised if AI agents quickly become political as well.

And without regulation or guidance, he added, developers and users can use technology to do whatever they want with little consequence.

He noted that greater accountability and transparency will help society make the most of AI, and that these safeguards will let policymakers weigh trade-offs and decide what risks society is prepared to accept.

“But if everything is opaque, if it’s covered by NDAs, if you can’t study it, if you can’t understand how the AI agent works because it’s proprietary, the more you prevent transparency, the more problems you’re going to have,” he said.

Moss, who served on the Cybersecurity Advisory Board of the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency from 2021 to 2025, also highlighted some of the ethical issues surrounding the use of AI in warfare.

He cited the Israel-Gaza war, saying the Israeli military used an AI program called “Where’s Daddy?” to track targets to their homes before striking them there, which meant their families could be at risk as well.

He said that while AI systems can help identify targets, they do not take traditional ethical considerations into account.

“It provides incredible speed and awareness, (but) if you can’t solve that problem, none of it matters to the moral fabric of how you behave in combat,” he added.

Asked about cyberattacks between nations, Moss said the telecommunications industry is a “jewel” target for political interference via cyberattacks, giving attackers access to technological and social networks, including mobile phones.

“This means that attackers can see all the politicians and who they’re having dinner with. They can use that to determine whether the target is meeting someone’s wife, meeting with a political opponent, (or) spending a lot of time talking to a particular company.”

“So much information could be leaked. Imagine the potential for blackmail,” he said.

In February, all four major telecommunications companies in Singapore were attacked by the state-sponsored cyber-espionage group UNC3886.

However, no sensitive data was accessed or stolen, and critical systems such as the 5G core were not compromised.

Moss said one way to strengthen cybersecurity is to give good hackers (known in cyber circles as “white hats”) more legal space to fight bad guys.

After the 2016 U.S. presidential election, which was dogged by allegations of interference, a friend told Moss that the voting machines in use were insecure.

Moss was surprised to find he could buy the machines on eBay. When he dismantled them, he found they were seriously flawed.

He and his team were allowed to study the devices under a “safe harbor” exemption to the U.S. Digital Millennium Copyright Act, which makes it legal to hack and research items such as medical devices and election technology.

Moss said such laws could allow experts to test and investigate vulnerabilities without fear of litigation.
