The government’s draft national policy on AI appears to be raising more questions for businesses than it answers. Fortunately, the policy is still only a draft and is open for public comment.
Investments in AI have skyrocketed in recent years, driven by the rapid adoption and popularity of OpenAI’s ChatGPT since its launch in November 2022.
As domestic businesses seek ways to make the most of new technologies, governments are under pressure to create rules that foster innovation and investment while protecting rights such as privacy.
Last week, the Department of Communications and Digital Technologies under Minister Solly Malatsi released a draft national AI policy for public comment, acknowledging in the Government Gazette that the policy was likely incomplete.
“Due to the wide range of applications and applicability to almost every conceivable sector, general national policy cannot and should not address every aspect of AI,” said Deputy Director-General Omega Shelembe. “Rather, the main objective of national policy is to identify core principles that will guide sectoral approaches.”
As a result, the most pressing practical issues, such as liability, audit methods, sector-specific rules, penalties and classification criteria, are deferred to a later stage of consultation and guideline development.
Ahmore Burger-Smidt, director and head of regulatory at Werksmans Attorneys, said that while “the vision is compelling”, the policy “often falls short of providing the level of detail that businesses and institutions need to operate reliably, especially when it comes to defining risk categories and setting enforceable compliance standards.”
In the absence of firm rules on the use of AI in the country, companies have taken guidance from legal and other experts on how to become “AI-ready”.
Lizanne Engelbrecht, executive manager of content at legal intelligence platform LexisNexis, explained that companies are using the latest version of the King Code of Corporate Governance to guide their thinking around AI.
The code, known as King V, introduces progressive guidance on technology, information governance and digital risk, placing AI, cybersecurity and data management “firmly on the agenda and responsibility list of boards and executives”.
Darren Olivier, a partner at law firm Adams & Adams, said the biggest concern for companies is whether they own the output produced by AI tools.
Olivier told Business Day that companies need to take a proactive and pragmatic approach to governance, rather than waiting for legal clarity, as AI advances faster than legislation. For example, human contributions to AI-generated work must be tracked to establish ownership under current law.
Some have called for clarity on the new policy, while others have warned of the dangers of overregulation.
The Foundation for Rights of Expression and Equality (Free SA) has expressed concern over the proposed creation of multiple new bodies, including an AI regulator, an ethics commission and an ombud, arguing that these bodies could duplicate existing functions, increase administrative costs and create new barriers for start-ups and small businesses.
Free SA spokesperson Gideon Joubert said: “AI should help accelerate South Africa’s recovery and not become another excuse for over-regulation.”
Similarly, Burger-Smidt said a large number of supervisory bodies could “raise questions about implementation” and cause “fragmentation in an already complex regulatory environment without clear guidance on roles, coordination and resources.”
As companies make greater use of new technologies, Olivier highlighted the risk of sensitive information leaking through the casual use of AI tools. “AI tools can leak sensitive information, which is often the earliest real-world risk. Are we eroding our competitive advantage?” he warned.
For businesses concerned about compliance, Olivier suggested regulators take a pragmatic view, valuing intent and preparation over perfection. “What regulators are generally concerned about is what you did: did you foresee this risk, and what steps did you take?”
