
The war in Ukraine is showing, at breakneck speed, how AI will be used in warfare. Is the use of artificial intelligence in weapons systems actually new? And how does AI select targets? Expert Katrin J. Yuan has the answers.
Don’t have time? Blue News summarizes for you
- Artificial intelligence (AI) is becoming increasingly pervasive in everyday life, but simple forms of it have long been present in weaponry.
- AI expert Katrin J. Yuan explains the benefits of AI in weapons, how it makes decisions, and how it is networked.
- We also discuss regulations and international differences regarding ethical principles.
AI and Weapons: Is this a new phenomenon?
Katrin J. Yuan: Absolutely not. Anyone who thinks this is “new territory” has missed the last 40 years of strategic military technology. We’re talking about evolution here. HARM anti-radar missiles and even Phalanx CIWS on ships have been highly automated for decades because the human brain is too slow for incoming missiles.
How is it different today?
Previously, it was "if-then" logic – automation. Today, it is pattern recognition and adaptation through AI. We are moving from rigid machines to autonomous systems that learn.
How much is the Ukraine war accelerating this development?
Ukraine is currently the largest live lab for AI in world history. What once took 10 years to develop in the lab is now being repeatedly tested on the front lines in 10 weeks. The scale is changing, and we are seeing a shift from expensive individual systems to “low-cost consumable swarms” – swarms of cheap drones.
What advantages does AI bring on the battlefield?
About the person
Katrin J. Yuan is the founder of the Swiss Future Institute and chair of the AI Future Council. She advocates for responsible AI in education and advises small businesses, foundations, and the United Nations on its application. Yuan is the author of the book AI for Leaders.
Everything is geared toward maximum efficiency. In the world of hypersonic weapons, for example, the human becomes the "bottleneck." AI shrinks OODA loops – the abbreviation stands for observe, orient, decide, act – to milliseconds. AI also helps enormously with target identification: object recognition in spectral ranges invisible to the human eye. War has become an AI competition, and the global AI race is intensifying.
What about electronic warfare?
When the enemy jams the signal, the radio link to the pilot is severed. AI drones do not need that link: they "think" locally – keyword: edge computing – and complete the mission autonomously. This is the ultimate unique selling point on the modern battlefield.
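The edge-computing idea described here can be sketched as a control loop that follows operator commands while the radio link is alive and falls back to onboard logic once it is jammed. This is a minimal, purely illustrative sketch – the names (`RadioLink`, `onboard_policy`, `control_step`) are assumptions of this example, not any real drone API.

```python
# Illustrative sketch of an edge-computing fallback: the drone obeys the
# operator while the link is up and decides locally once it is jammed.
from dataclasses import dataclass


@dataclass
class RadioLink:
    """Hypothetical stand-in for a drone's radio link to its pilot."""
    alive: bool

    def receive_command(self) -> str:
        return "operator_waypoint"


def onboard_policy(sensor_frame: dict) -> str:
    # Runs entirely on the drone's own hardware ("edge computing"):
    # no round trip to a ground station is required.
    return "continue_mission" if sensor_frame.get("target_lost") else "track_target"


def control_step(link: RadioLink, sensor_frame: dict) -> str:
    if link.alive:
        return link.receive_command()    # human in the loop
    return onboard_policy(sensor_frame)  # link jammed: decide locally


print(control_step(RadioLink(alive=False), {"target_lost": False}))  # -> track_target
```

The design point is the fallback itself: jamming severs the first branch, but the second branch needs no radio link at all.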
Did AI drones make their debut in Libya in 2021?
I have had the opportunity to give keynotes on AI at the United Nations and have advocated for the responsible design, training, and use of AI for many years. According to a UN report, the Turkish-built Kargu-2 drone was deployed in Libya in 2021, where AI drones are said to have autonomously hunted humans for the first time. Whether it was truly a first is debatable, but it was a wake-up call for regulators: it shows that this technology is no longer reserved for superpowers.
How do AI weapons “judge”?
Not by moral considerations, but by probability mathematics. The standard: the AI compares sensor data (infrared, visual, radar) against a database. "Is that object 98 percent similar to a T-72 tank?" The decision to engage is based on classification. The problem: the AI does not understand "situations" such as surrender or hesitation – it understands "patterns." For the AI, every outcome, including annihilation, is simply one possible option.
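The classification logic Yuan describes can be reduced to a toy sketch: compare similarity scores against a database and act on a probability threshold. The classes, scores, and the 98-percent threshold below are illustrative assumptions taken from her example, not real system parameters.

```python
# Toy sketch of probability-based target classification: pick the
# best-matching class and only return it if it clears the threshold.

def classify(similarity_scores: dict, threshold: float = 0.98):
    """Return the best-matching class if its score clears the threshold, else None."""
    best_class, best_score = max(similarity_scores.items(), key=lambda kv: kv[1])
    return best_class if best_score >= threshold else None


# "Is that object 98 percent similar to a T-72 tank?"
scores = {"T-72 tank": 0.981, "truck": 0.40, "civilian car": 0.05}
print(classify(scores))  # -> T-72 tank
```

The sketch also makes the interview's failure mode visible: a pattern match carries no context – a surrendering crew produces the same signature and the same score.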
Can AI weapons be tamed with something like “fundamental laws”?
Technically, through geofencing and strict target profiles. At the Swiss Future Institute, we advocate a true fundamental law: participatory governance. That means humans ultimately make the final decision. Anything else risks unintended consequences.
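The two technical safeguards named here – a geofence and a strict target profile – amount to hard preconditions checked before any engagement. The coordinates, target classes, and function names below are invented for illustration only.

```python
# Illustrative sketch of the two constraints: engagement is only allowed
# inside a geographic bounding box AND against a whitelisted target class.

GEOFENCE = {"lat": (48.0, 49.0), "lon": (30.0, 31.0)}  # illustrative bounding box
TARGET_PROFILE = {"T-72 tank", "radar station"}        # illustrative whitelist


def engagement_allowed(lat: float, lon: float, target_class: str) -> bool:
    inside = (GEOFENCE["lat"][0] <= lat <= GEOFENCE["lat"][1]
              and GEOFENCE["lon"][0] <= lon <= GEOFENCE["lon"][1])
    return inside and target_class in TARGET_PROFILE


print(engagement_allowed(48.5, 30.5, "T-72 tank"))     # -> True
print(engagement_allowed(48.5, 30.5, "civilian car"))  # -> False
```

In the participatory-governance model advocated in the interview, a `True` here would still only clear the system for a human's final decision, not replace it.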
Are regulations necessary?
We need international standards similar to those for biological weapons. The problem: an algorithm cannot be inspected as easily as a chemical depot. For me, various threads come together: as a board member, as an AI speaker at the United Nations, as an AI lecturer at the ZHAW and HWZ universities, and as an entrepreneur. I am not interested in isolated knowledge, but in bundling the best knowledge for practical application. Our approach at the Swiss Future Institute and in the AI Future Council is AI-human collective intelligence: we build in ethics by design [ethical principles, editor's note] from early in the development stage – not as an afterthought, but by showing concrete applications, opportunities, and risks, both individually and across industries. Innovation and regulation are interdependent.
AI drones dominate the skies – is the technology also gaining ground on land and at sea?
A "domain merger" is currently underway. On land, armed robots similar to Boston Dynamics' robot dogs take on dangerous urban combat missions. In the water, unmanned underwater vehicles (UUVs) protect or threaten critical infrastructure such as Nord Stream. AI connects these units into "swarm intelligence systems." We are watching a multidimensional chess match in real time.
How much difference is there in national character?
I have lived in various countries and give talks on "AI in China, the US, and Europe – the Global AI Race," so I experience the differences from multiple perspectives. The United States focuses on high-end technology and, at least officially, on ethical guardrails. In China, the pragmatic motto is "civil-military fusion": vast amounts of data flow together without barriers between the private sector and the military, with the goal of AI supremacy by 2030. Russia focuses on electronic warfare and autonomy – acting out of necessity.
And what about Switzerland?
We have traditionally been strong in sensor technology and lead the ethical debate at the highest level. We owe this foundation especially to our world-class universities: ETH Zurich and EPFL Lausanne serve as global talent hotbeds, and Switzerland consistently ranks high in global innovation rankings. While policymakers are sometimes slow to regulate, the private sector has long proven the opposite. Thanks to this symbiosis of academic brilliance and entrepreneurial courage, Switzerland has the potential to become a global pioneer of "responsible autonomy."
Who are the potential players?
A central component of this is the accurate processing of data. Here, Walter Chrismareanu is setting standards with Tipalo: his solutions impressively demonstrate how complex information can be efficiently structured and used in the private sector – a basic requirement for modern AI applications. Switzerland's enormous capacity for innovation is also demonstrated by visionaries such as Benjamin Regener with his NuclearIQ solution: his team is proving how highly specialized data analysis can solve critical safety problems. And Alijana Selimi, CEO of RoboSwiss, is driving the next generation of practical robotics. They are all united in the AI Future Council and stand at the forefront of a movement that combines ethics with efficiency. These players are living proof that Switzerland has the know-how, top-notch research, and entrepreneurial spirit to play a leading role in international competition. We just have to let them do it.
