To make AI work for national security, we need to invest in humans too

ChatGPT is here, and its user base is growing faster than TikTok’s.

In a 2017 survey by AI software company Pega, only 34 percent of respondents said they had used technology that incorporated artificial intelligence (AI). The actual number was much higher: 84 percent of respondents, drawn from the United States, the United Kingdom, France, Germany, the Netherlands, and Australia, had used AI-powered services such as virtual home assistants, chatbots, or software that makes predictive suggestions.

Because AI intersects many aspects of American life, including defense, the United States needs better AI literacy and a deeper understanding of this emerging technology. For example, in October 2022 the Department of Defense issued a formal request for information to identify sources of AI, machine learning, and data science talent, often the first step in the process of soliciting bids and issuing government contracts. In the words of the request: “To scale the [Defense Department’s] AI workforce, it is critical to retain a qualified and experienced workforce that matches industry innovations in both speed and execution.”

This illustrates how AI is becoming a national security issue, and why an AI-educated workforce (and an AI-savvy Pentagon) matters. As AI expands, it will be essential to strengthen US thinking about how humans and AI can learn from each other, and to understand the opportunities, risks, and challenges of AI innovation. AI is a complex technology that is difficult to grasp, but through training, engagement, and investment, the Department of Defense and other federal agencies can help “decode” AI and prepare the nation for a future in which humans will inevitably use AI at scale.

To that end, the United States needs “human-centered AI,” which involves humans throughout the research, design, training, testing, and decision-making processes of AI systems. This approach leverages both machine and human intelligence.

But this country also needs humans, especially humans who understand AI.

The relationship between national security and AI. The Defense Department’s 2018 artificial intelligence strategy defined AI as the ability of machines to perform tasks that normally require human intelligence. The Pentagon hopes AI will strengthen the military, increase operational effectiveness and efficiency, and enhance homeland security.

The U.S. government invested $2.5 billion in AI research and development in fiscal 2022, but the United States isn’t the only country increasing federal spending on artificial intelligence. On military funding for AI, an October 2021 report by the Center for Security and Emerging Technology estimated China’s annual military spending on AI to be in the “low billions of US dollars.” According to National Defense, the magazine of the National Defense Industrial Association, this level of AI funding is comparable to the Pentagon’s. Other countries leading AI investment activity include Israel, the United Kingdom, Canada, India, Japan, Germany, Singapore, and France. Interest in artificial intelligence is clearly growing rapidly, both nationally and globally.

Many people already use AI regularly without even realizing it. Individuals encounter AI through popular virtual assistants such as Apple’s Siri and Google’s Assistant, rapid language translation, the recommendation algorithms of major online platforms such as Amazon and YouTube, and object and person tagging in images. AI does all this without becoming the dystopian superintelligence that critics have warned about for decades. But AI also has its pitfalls. Research has shown that training datasets can amplify bias, that algorithmic decisions can lack transparency and accountability, and that biased criminal justice algorithms can make dubious predictions about sentencing.

We all need to be more aware of the nuances in the relationship between AI and humans, whether we’re tech-loving, tech-hating, or somewhere in between. AI needs humans, and humans need AI.

AI needs humans. Over the past few months, Elon Musk has made big changes to Twitter. He disbanded the human rights team led by Shannon Raj Singh and laid off thousands of Twitter employees, including many in the content moderation department. These teams worked behind computer screens to combat misinformation and disinformation, increase accessibility for people with disabilities, and protect users facing human rights abuses around the world. One team worked on ethical AI and algorithmic transparency.

Humans are crucial in such dynamic environments, whether social or military. After all, AI and its algorithms have limitations: algorithms cannot understand parody, sarcasm, satire, or context the way humans do. Indeed, humans are the foundation of coding processes, AI systems, and platforms.

In 2020, the Department of Defense adopted ethical principles for AI that apply to both combat and non-combat functions. Then-Secretary of Defense Mark T. Esper said, “The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order.” The principles hold that AI should be responsible, equitable, traceable, reliable, and governable. At the core of each principle is the critical role of humans in exercising judgment and striving to minimize unintended consequences and bias.

Humans need AI. AI can complete tasks and outperform humans in some notable areas. For example, AI’s ability to train on large image sets, extract patterns through data mining, and identify features relevant to diagnosis may enable more accurate medical diagnoses, especially in radiology and pathology. Some studies have shown that AI programs can detect breast cancer, including early-stage cancer, on mammograms. Google’s AI can also translate speech while preserving the speaker’s voice, quickly transcribe audio, and proofread text.

Moreover, AI can learn from AI. For example, Google’s AutoML and Microsoft’s DeepCoder can build the next generation of AI. These two machine-learning systems can not only run the code researchers give them but also investigate how that code fits together, how it works, and how to learn from other code. Simply put, AI can absorb massive amounts of data, recognize patterns, and deliver relevant output at an astonishing pace.

AI is not only widespread in daily life; society also cannot ignore the growing use of artificial intelligence in warfare and future conflicts. Semi-autonomous drones guided by human pilots are already being used in the Russia-Ukraine war for surveillance and target identification. One can imagine AI and human operators increasingly cooperating in these conflict environments, especially with advanced drones. The US Switchblade 600, for example, requires a human operator to select targets while watching a live video feed.

One reason people distrust AI is that the algorithms behind it are perceived as “black boxes.” AI can be biased, and the datasets used to make coding decisions and train algorithms often go unexplained. The skills required, limited data quality, and fear of the unknown further complicate efforts to bridge the gap between humans and machines.

Barriers to AI adoption are significant, but not insurmountable. Improving AI literacy will enable current and future AI adopters to develop, deploy, and use the technology responsibly.

Put humans in the driver’s seat. “Human-centered AI” flips the concept of “AI-centered humans” on its head: rather than AI dictating outcomes, humans interact at various stages of the decision-making process and stay in the driver’s seat. For example, when the U.S. Department of Defense adopted its five ethical principles for the use of AI, it brought together AI experts from industry, government, and academia. Additionally, Stanford University’s Institute for Human-Centered Artificial Intelligence hosted its first congressional boot camp on AI last year, with 25 bipartisan congressional staffers discussing recent developments in AI. Such conversations are not isolated within the technical community. Diverse perspectives and expertise, along with increased understanding and awareness of AI and its applications, will enable humans to better assess the risks, opportunities, and limitations of AI.

Some important work has already been done on this front. For example, in June the Association for Computing Machinery held its sixth annual conference on fairness, accountability, and transparency, bringing together computer scientists, social scientists, legal scholars, statisticians, ethicists, and others interested in fairness, accountability, and transparency in socio-technical systems. The association is the world’s largest computing society, and its conferences are widely considered among the most prestigious in the field of human-computer interaction.

Germany takes a comprehensive, evidence-based, human-centered approach to AI, with a strong focus on capacity building. Specifically, Germany’s Federal Ministry for Economic Affairs and Energy has funded a free online course, “Elements of AI,” to increase AI literacy. Users can follow the course at their own pace, and no coding experience or specialized math skills are required. This is a step in the right direction.

Going forward, the United States needs to invest more national attention, funding, and programming in strengthening AI education across federal agencies and civil society. Perhaps more importantly, the U.S. federal government needs to formally develop an AI education strategy that emphasizes both short-term and long-term goals and sets timeline-specific targets. Specifically, U.S. policymakers must prioritize an AI-informed society, ensure transparency, and maximize support for the military.

Some progress has been made in this direction. For example, the Department of Defense’s 2020 AI education strategy highlights the priority areas and skills needed to accelerate AI adoption, from software and coding to data management and infrastructure. The strategy focuses on building AI capabilities, raising AI awareness among senior leaders, and providing training on the responsible use of AI. While this is a good first step, the strategy lacks timeline details.

Last year, the Joint Artificial Intelligence Center rolled out AI education pilot courses for thousands of Defense Department personnel, ranging from general-officer education to coding boot camps. It would be beneficial to extend these efforts beyond the Department of Defense and into annual, five-year, and ten-year plans. The United States would greatly benefit from strong AI education initiatives and investments, especially across the defense, education, homeland security, and other domestic sectors, to enhance national security.

In March 2021, former Google CEO Eric Schmidt and former U.S. Deputy Secretary of Defense Bob Work, who led the National Security Commission on Artificial Intelligence, wrote in the commission’s final report that “America is not prepared to defend or compete in the AI era.” But this doesn’t have to be the United States’ future when it comes to AI. Decoding AI through AI literacy is an important national security issue. AI permeates nearly every aspect of daily life in the United States, and governments, Big Tech, and the general public all have a stake in AI and its social impact.

This entire article was written by ChatGPT. Just kidding! Julie George (a human) wrote it.
