The artificial intelligence (AI) industry is changing rapidly. Innovators, governments, and everyone in between are now turning to AI-powered products and services. AI can greatly improve business operations and the quality of our lives, but with any groundbreaking technology comes risks that can cause harm.
Here are some of the key events that happened in the AI space last week.
Meta suspends generative AI tool in Brazil
Meta (NASDAQ: META) has decided to suspend the use of its generative AI tools in Brazil following a challenge from the Brazilian government over Meta's privacy policies on personal data and AI.
The decision comes after Brazil's National Data Protection Agency (ANPD) suspended Meta's new privacy policy over concerns about how personal data was being used to train the AI system. ANPD asked Meta to update its privacy policy, particularly the section on data processing for training. Meta has suspended its generative AI tool while it continues to engage in discussions with ANPD to find a solution.
Regional and global policies related to AI will be crucial to monitor. Many companies may choose not to operate in certain countries or offer degraded versions of their products to comply with government standards, but either approach could have a significant impact on the business. For example, Brazil is Meta's second-largest market with over 200 million WhatsApp users, and losing the ability to operate and train AI in such a large market could hinder Meta's progress in the region.
Are Japanese companies adopting AI?
A recent survey conducted by Nikkei Research for Reuters found that of 506 Japanese companies surveyed, 40% have no plans to use AI, while around 24% of respondents have already introduced AI into their operations and 35% are planning to do so.
For companies that have adopted or are planning to adopt AI, the primary motivation has been to address talent shortages and reduce labor costs, but hurdles to adoption include employee fears of downsizing, lack of technical expertise, implementation costs, and concerns about reliability.
We hear more about AI's transformative potential than about the reasons why companies are hesitant to adopt it. As the Nikkei survey shows, adopting AI is not always easy. It requires significant expense and skilled personnel, and even those resources do not guarantee the reliability of AI systems, which also require ongoing maintenance. These factors can make adopting AI more costly than maintaining the status quo, putting it out of reach for some companies.
Meta bypasses the European Union with its AI products
Meta has announced that it will not release its AI model “Llama” in the European Union (EU) due to concerns about EU privacy and AI regulation.
“We plan to release a multimodal Llama model in the coming months, but will not release it in the EU due to the unpredictable regulatory environment there,” Meta said.
This decision was influenced by the GDPR, which governs the processing and transfer of EU residents' personal data. Additionally, the EU AI Act, scheduled to come into force in August, imposes several requirements on AI companies that want to operate in the EU or provide services to EU residents. While these regulations prioritize the protection of residents, they also constrain companies and can ultimately deny EU residents access to some of the most innovative technologies.
Complying with the GDPR remains a challenge for many AI companies: EU policy prioritizes protecting residents and their data, but this clashes with the heavy data requirements of many AI systems, putting EU residents at a disadvantage by limiting their access to certain AI products and services.
Nvidia, Amazon, OpenAI and others form coalition for secure AI
The Aspen Security Forum introduced the Coalition for Secure AI (CoSAI), an organization that aims to promote comprehensive security measures for AI. Its founding members include Amazon (NASDAQ: AMZN), Anthropic, Chainguard, Cisco, Cohere, GenLab, Google (NASDAQ: GOOGL), IBM (NASDAQ: IBM), Intel (NASDAQ: INTC), Microsoft (NASDAQ: MSFT), NVIDIA (NASDAQ: NVDA), OpenAI, PayPal (NASDAQ: PYPL), and Wiz.
CoSAI will initially focus on three areas:
- Software Supply Chain Security for AI Systems: Extend SLSA Provenance to AI models, enabling users to manage risk in third-party models and improve AI security by fully assessing the provenance of AI applications.
- Preparing defenders for a changing cybersecurity environment: Develop a defender-side framework to address AI security concerns, identify investments, and scale mitigation strategies in response to offensive cybersecurity advances.
- AI Security Governance: Develop risk and control taxonomy, checklists, and scorecards to guide those responsible for managing and reporting on AI security.
Groups like these raise the question of what they actually accomplish. Like many government initiatives, coalitions and associations often exist mainly to signal to the world that they are discussing and considering the technologies everyone is paying attention to. In many cases, we have yet to see any action come out of these groups beyond thought leadership and services that are not available to consumers or most businesses, and such groups are rarely responsible for the spikes in popularity we see for mainstream products and services.
For artificial intelligence (AI) to function properly within the law and thrive in the face of growing challenges, it needs to integrate enterprise blockchain systems that guarantee the quality and ownership of data inputs, keeping data secure while ensuring its immutability. Check out CoinGeek's coverage of this emerging technology to learn more about why enterprise blockchain is the backbone of AI.