Rwanda promotes safe and responsible AI development

AI For Business


Rwanda has launched a National Artificial Intelligence Policy to guide local companies in developing safe and responsible artificial intelligence (AI).

The policy outlines the East African nation's vision for AI, said Victor Muvuni, a senior official in the ICT ministry. Inclusiveness and ethical adoption are guiding principles in Rwanda as the country seeks to harness AI to improve people's lives. The policy also encourages Rwandan companies to use AI to address the unique challenges facing their citizens.

In addition to the policy, the ICT ministry has set up a National AI Office, whose mandate includes ensuring that local companies adopt AI technologies “responsibly and effectively.”

“This office will serve as a guide for AI [development]. We will continue our journey to address challenges and foster innovation while keeping our cultural and ethical values at the forefront,” Muvuni told local newspaper The New Times.

African countries have accelerated their adoption of AI in recent years as they seek to keep pace with other nations. Some, such as Morocco, are using AI technology in courts to conduct investigations and search archived documents. Others, such as South Africa and Kenya, are using AI to solve specific challenges facing the continent, such as creating climate models to help farmers plan better.

However, Africa faces greater obstacles to AI adoption than other regions, including an underdeveloped structured-data ecosystem, skills shortages, inadequate infrastructure, and restrictive policies that hinder AI development.

Rwanda wants to help AI companies mitigate these challenges, Muvuni said. In addition to the new AI office, the government has tasked the Rwanda Utilities Regulatory Authority with fostering AI development. The authority is also promoting AI principles intended to uphold the integrity of AI companies and protect the rights of citizens.

“The principles include beneficence and non-maleficence to ensure that AI systems not only benefit society but also uphold human dignity and prevent harm,” Muvuni told the media.

AI safety is a global issue: governments in the United States, the European Union, the UK and Asia have called on AI developers to prioritize safety in the development of the technology.

Last month, Google (NASDAQ: GOOGL), Meta (NASDAQ: META) and OpenAI were among the industry leaders who made new pledges in Seoul to prioritize safety in AI development, following a similar pledge made to the Biden Administration last year.

Another challenge facing the sector is data privacy, with companies such as Meta and OpenAI finding themselves in legal trouble for ignoring data laws when training and deploying AI models.

In Rwanda, the country's Data Protection and Privacy Law is key to protecting its citizens in the face of aggressive AI developments. The law, which came into force in 2021, requires companies to obtain consent from citizens before using their data and to be transparent about how they handle and store that data.

For artificial intelligence (AI) to operate within the law and thrive in the face of growing challenges, it needs to integrate enterprise blockchain systems that guarantee the quality and ownership of data inputs, keeping data secure while also ensuring its immutability. Read CoinGeek's article on this emerging technology to learn more about why enterprise blockchain is the backbone of AI.

Watch: Artificial Intelligence Needs Blockchain


New to blockchain? Check out CoinGeek's Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.

