Written by Vincent J. Carchidi
Imagine an artificial intelligence (AI) application capable of meaningful, deliberate communication. I am not talking about the imitation of communication popularized by chatbots powered by large language models (LLMs), most recently embodied in OpenAI’s GPT-4o. I envision an AI model that can productively digest specialized literature, extract and reformulate important ideas, and engage in substantive exchanges with human experts. It is easy to imagine such a model being applied to fields such as medical research. However, the machine learning systems now commanding the world’s attention (generative AI such as ChatGPT, Gemini, Claude, etc.) lack the intellectual resources and autonomy needed to support such applications. Our loftier visions of AI remain science fiction for now.
The drive to master AI in geopolitics is undaunted by this reality. Indeed, the geopolitical “war” over AI that took shape in 2023, joined by countries as diverse as the UK, France, Germany, India, Saudi Arabia, the United Arab Emirates, the United States, and China, was undoubtedly set off by generative AI and machine learning more broadly. Yet some in the AI world believe that machine learning is only the current stage of cutting-edge AI, not its final stage.
Paradigms beyond machine learning’s data-hungry learning strategies are being explored, for example, by the state-backed Beijing Institute for General Artificial Intelligence (BIGAI). As a 2023 Center for Security and Emerging Technology report shows, BIGAI was founded in 2020 partly in response to researchers’ disillusionment with “big data” approaches; its US-educated director, Zhu Songchun, pursues “brain-inspired” AI models. BIGAI’s research theme is “small data, big tasks.”
The strategic importance of “small data” AI is also recognized by Australia’s Kingston AI Group, a group of AI academics that aims to coordinate Australia’s national AI research and education strategy. In a February 2023 statement, the group acknowledged Australia’s relative disadvantage in economic size and in access to the large datasets used to train machine learning models. It therefore recognized the need to develop “small data capabilities” that would enable Australia to compete in “designing AI systems from small datasets.”
Moreover, Prime Minister Narendra Modi, in his June 2023 address to the US Congress, highlighted India’s embrace of technological innovation and touted India’s collaboration with the US through the Initiative on Critical and Emerging Technology (iCET). Equally noteworthy, however, was Modi’s meeting with AI researcher Amit Sheth, director of the University of South Carolina’s Artificial Intelligence Institute.
In December 2023, Sheth laid out his vision of AI’s next phase at India’s third annual Chief Secretaries Conference. The US led the first two phases of AI: “symbolic” AI dominated the first wave, while the now-fashionable “statistical” AI (i.e., machine learning) dominates the second. India can and should “conquer AI Phase III,” Sheth argued. The third wave refers to AI models that can adapt to context. One emerging paradigm in this field, known as neurosymbolic AI, combines techniques from both waves to gain new capabilities. While generative AI matters for India, Sheth told Indian officials that “neurosymbolic AI… will drive the next third phase of AI.”
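To make the neurosymbolic idea concrete, consider a deliberately toy sketch: a statistical “second-wave” component produces an uncertain perception, and a symbolic “first-wave” rule base reasons over it, deferring to a safe default when confidence is low. Every name, rule, and threshold below is hypothetical, chosen purely for illustration; real neurosymbolic systems are far richer.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    label: str         # what the statistical component thinks it sees
    confidence: float  # its probability estimate

def neural_component(pixels: list[float]) -> Perception:
    """Stand-in for a learned classifier (second wave).

    A real system would run a trained network; here we fake a
    confidence score from the mean brightness of the input.
    """
    brightness = sum(pixels) / len(pixels)
    if brightness > 0.5:
        return Perception("stop_sign", confidence=brightness)
    return Perception("background", confidence=1.0 - brightness)

# Symbolic knowledge base (first wave): explicit, human-readable rules.
RULES = {
    "stop_sign": "apply_brakes",
    "background": "continue",
}

def symbolic_component(p: Perception, threshold: float = 0.8) -> str:
    """Apply a rule only when the neural side is confident enough;
    otherwise fall back to a safe default -- a crude stand-in for the
    contextual reasoning the third wave aims for."""
    if p.confidence >= threshold and p.label in RULES:
        return RULES[p.label]
    return "slow_down_and_reassess"  # uncertainty-aware default

if __name__ == "__main__":
    image = [0.9, 0.85, 0.95, 0.8]  # toy "image" as brightness values
    perception = neural_component(image)
    print(perception, "->", symbolic_component(perception))
```

The division of labor is the point: the learned component handles messy perception, while the symbolic component keeps the decision logic explicit and auditable.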
If this conception of AI’s development sounds familiar, it is worth remembering that it originates with the US Defense Advanced Research Projects Agency (DARPA). DARPA distinguishes two waves of AI, in which models are first governed by handcrafted rules and then learn through statistical associations in data; in both waves, however, models lack robust reasoning ability in new situations. DARPA’s “third wave” envisions models capable of “contextual reasoning,” a commitment embodied in its 2018 AI Next campaign to move “beyond second-wave machine learning technologies” (and evident again in its 2022 Assured Neuro Symbolic Learning and Reasoning program).
DARPA's efforts are continually evolving. Still, the tripartite conceptualization of AI is a relic of a pre-ChatGPT era, and American policymakers risk losing sight of its strategic importance.
America’s entrenchment in the second wave
Machine learning will remain an essential element for some time to come for national institutions interested in taking a leading role in AI. Why, then, are nations like China, Australia, and India encouraging research beyond it? Because cutting-edge machine learning techniques do not provide the capabilities needed to support applications like the virtual medical agent imagined above.
But much of U.S. policymakers’ focus on AI is rooted in the era and content of “big data” AI. For example, President Biden’s 2023 Executive Order on Safe, Secure, and Trustworthy AI invokes the Defense Production Act to require companies planning or actively developing “dual-use foundation models” trained beyond a computational threshold of 10^26 floating-point operations (FLOPs) to report such development and testing to the Department of Commerce. The idea, as Paul Scharre puts it, is that computational power is a “crude proxy” for a model’s capabilities. The mandate reflects a widespread belief in the effectiveness of scaling up models, the datasets on which they are trained, and the computing power required to do so.
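To see what that threshold means in practice, a common rule of thumb from the scaling-law literature (an assumption here; the Executive Order does not prescribe this accounting) estimates training compute as roughly six times the parameter count times the number of training tokens. A minimal sketch with hypothetical model configurations:

```python
# Rule-of-thumb training-compute estimate: FLOPs ≈ 6 * N * D, where N is
# parameter count and D is training tokens (a heuristic from the
# scaling-law literature, not the Executive Order's own accounting).

EO_THRESHOLD_FLOPS = 1e26  # reporting threshold in the 2023 Executive Order

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

# Hypothetical model configurations, for illustration only.
models = {
    "70B params, 2T tokens": (70e9, 2e12),
    "1.8T params, 10T tokens": (1.8e12, 10e12),
}

for name, (n, d) in models.items():
    flops = training_flops(n, d)
    verdict = "exceeds" if flops > EO_THRESHOLD_FLOPS else "falls below"
    print(f"{name}: ~{flops:.2e} FLOPs, {verdict} the 1e26 threshold")
```

On this heuristic, a 70-billion-parameter model trained on two trillion tokens sits roughly two orders of magnitude below the threshold; only frontier-scale training runs trigger the reporting requirement.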
Moreover, the Biden administration’s sweeping advanced-computing export restrictions on Chinese companies, implemented in October 2022 and repeatedly tightened since, are premised on the notion that cutting off access to U.S. semiconductor design and manufacturing equipment will prevent China from developing advanced AI models. The implicit assumption is that state-of-the-art AI will indefinitely depend on the massive amounts of data and computing power that characterize today’s machine learning models.
In an early assessment of the October 2022 export restrictions, proponents Martijn Rasser and Kevin Wolf called the controls a “calculated risk,” noting that the pursuit of so-called hybrid AI, or neurosymbolic AI, has the potential to yield breakthroughs in AI that address some of the shortcomings of deep learning.
The observation was apt but belated: BIGAI was established to move beyond machine learning in 2020, long before the Biden administration expanded export controls, and Australia and India, both of which enjoy relatively harmonious relations with the United States, have likewise recognized the importance of hybrid AI research.
By using export controls to strengthen America’s AI lead in certain subfields, most notably natural language processing, the United States has effectively entrenched itself in the second wave of AI (statistical machine learning). Even if this lock-in benefits American industry and national defense in the short and medium term, the long-term future of AI may belong to countries that take a more proactive and deliberate path beyond machine learning. Restricting access to advanced computing tools and talent will therefore not be enough for the United States to maintain its AI advantage.
Emerging paradigms like neurosymbolic AI require concerted efforts to coordinate research at home and with select international partners.
Maintaining and expanding America’s AI superiority
There is preliminary evidence that U.S. policymakers understand the need to engage with the indigenous AI efforts of partner countries, including those with closer-than-comfortable ties to China. A case in point is Microsoft’s recent agreement to invest $1.5 billion in Abu Dhabi-based AI conglomerate G42 following negotiations with the Biden administration. As Mohammed Soliman of the Middle East Institute argued in testimony before the U.S.-China Economic and Security Review Commission in April 2024, the deal reflects in part an honest recognition that countries like the United Arab Emirates are striving to become leaders in AI.
But this recognition is only part of the effort American policymakers must make. Much of the second wave’s fundamental AI research has taken place in the private sector, with companies like Google and OpenAI achieving new milestones in natural language processing. Microsoft, partnered with OpenAI and now G42, cannot be expected, amid the corporate arms race around generative AI, to take the steps necessary to ensure that third-wave AI technologies can support high-stakes applications.
Concerted action by the U.S. government is therefore needed to balance the scales, including coordinating and expanding existing efforts.
A useful model is the 2023 partnership between the Department of Defense and the National Science Foundation to fund the AI Institute for Artificial and Natural Intelligence (ARNI). The partnership funds efforts to connect the major progress made in AI systems to the revolution in our understanding of the brain. ARNI’s interdisciplinarity echoes a chorus of voices on the potential of neurosymbolic AI. Inspired by the human mind’s capacity for reasoning, analogy, and long-term planning, ARNI focuses on building algorithms that support explainable applications and may offer “performance guarantees” and an adaptability that deep learning lacks. Policymakers may thus look to ARNI’s interdisciplinary research and funding scheme as a template for future research tailored to the needs of third-wave AI.
Additionally, small but forward-looking industry stakeholders need to be involved. These include companies like Symbolica, whose team aims to leverage applied mathematics to build explainable models capable of structured reasoning with less training data and computational power, and Verses AI, which its chief scientist Karl Friston says aims to develop models “99% smaller” without compromising quality. Such research could contribute to the foundations of third-wave AI.
Finally, the United States should selectively embrace partnerships that foster hybrid AI research targeting the flaws of today’s AI models. Notably, the rise of “minilaterals” such as the Quadrilateral Security Dialogue and AUKUS is fostering cooperation on emerging technologies. While restraint is needed to prevent advanced technology from falling into adversaries’ hands, the United States should consider initiatives targeting specific areas of hybrid AI research with partners such as South Korea, with which it is already weighing the sharing of advanced military technology.
The United States must take these steps not just to remain competitive in the second wave of AI, but to create and take advantage of the third.
- About the author: Vincent J. Carchidi is a non-resident scholar in the Strategic Technologies and Cyber Security Program at the Middle East Institute. He is also a member of Foreign Policy for America’s 2024 NextGen Initiative. His opinions are his own. Follow him on LinkedIn and X.
- The views expressed in this article are solely those of the author and do not necessarily reflect the views of Geopoliticalmonitor.com.