The author is the president of global affairs at Meta.
Underlying the excitement and anxiety about advances in generative artificial intelligence lies a fundamental question: who controls these technologies — the handful of big tech companies with the vast computing power and data needed to build new AI models, or society at large?
This question is at the heart of the policy debate over whether companies should keep their AI models in-house or make them more openly available. As the debate progresses, the case for openness has grown. This is partly practical: it is not sustainable to keep foundational technology in the hands of a few large corporations. It is also partly a matter of open sourcing's track record.
It is important to distinguish between today's AI models and potential future ones. The most dystopian warnings about AI are really about a technological leap — or a series of leaps. There is a world of difference between today's chatbot-style applications of large language models and the supersized frontier models that could, in theory, enable sci-fi-style superintelligence. But we are still at the foot of the mountain, debating the perils that might be found at the summit. If and when these advances become more plausible, they may necessitate a different response. For now, there is time for both the technology and the guardrails to develop.
As with all foundational technologies, from radio transmitters to internet operating systems, AI models have many uses, some predictable and some not. And like any technology, AI will be used by good and bad actors, for good and bad purposes. Responding to that uncertainty cannot rest solely on the hope that AI models will be kept secret. That horse has already bolted: many large language models have already been open sourced, including Falcon-40B, MPT-30B and many of their predecessors. And open innovation is nothing to be feared. The infrastructure of the internet runs on open-source code, as do web browsers and many of the apps we use every day.
The risks associated with AI cannot be eliminated, but they can be mitigated. Here are four steps I think tech companies should take.
First, they need to be transparent about how their systems work. Meta recently released 22 "system cards" for Facebook and Instagram, which give people insight into the AI behind how content is ranked and recommended, without requiring deep technical knowledge.
Second, this openness must be accompanied by collaboration across industry, government, academia and civil society. Meta is a founding member of the Partnership on AI, alongside Amazon, Google, DeepMind, Microsoft and IBM. We participate in its Framework for Collective Action on Synthetic Media, an important step in establishing guardrails around AI-generated content.
Third, AI systems should be stress tested. Ahead of releasing the next generation of its large language model, Llama, Meta is carrying out "red teaming". This process, common in cybersecurity, involves teams taking on the role of adversaries to hunt for flaws and unintended consequences. Meta will submit its latest Llama model to the DEFCON conference in Las Vegas next month, where experts can further analyse its capabilities and stress test it.
It is a false assumption that releasing source code or model weights makes systems more vulnerable. On the contrary, external developers and researchers can identify problems that would take teams holed up in internal silos far longer to find. Researchers stress testing BlenderBot 2, Meta's earlier large language model, found that it could be tricked into remembering false information. As a result, BlenderBot 3 was made more resistant to it.
Finally, companies should share details of their work as it develops, whether through academic papers and publications, open discussion of its benefits and risks, or, where appropriate, making the technology itself available for research and product development.
Openness is not altruism — Meta believes it is in its own interest. It leads to better products, faster innovation and a flourishing market, which benefits Meta as it does many others. Nor does it mean every model can or should be open sourced. There is a role for both proprietary and open AI models.
But ultimately, openness is the best antidote to the fears surrounding AI. It allows for collaboration, scrutiny and iteration. And it gives businesses, start-ups and researchers access to tools they could never build themselves, backed by computing power they could not otherwise access, opening up a world of social and economic opportunity.
