As automated text generators have advanced from fantasy to novelty to genuine tool in rapid and dazzling fashion, they are now inevitably approaching the next stage: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their missions, but the company behind the wildly popular chatbot has remained silent.
OpenAI, the nearly $30 billion R&D powerhouse behind ChatGPT, has drawn an ethical line it says it won’t cross: a public list of businesses it will not pursue, no matter how lucrative, on the grounds that they could harm humanity. Among the many banned use cases, OpenAI says it has preemptively ruled out military and other “high-risk” government applications. Like its rivals Google and Microsoft, OpenAI is eager to proclaim its lofty values, but it is unwilling to seriously discuss what those professed values actually mean in practice, or how, or even whether, they would be enforced.
“If there’s one thing you have to take away from what you’re seeing here, it’s the weakness of letting companies monitor themselves.”
An AI policy expert who spoke to The Intercept said the company’s silence reveals an inherent weakness of self-regulation, which allows companies like OpenAI to appear principled to a public nervous about AI while developing a powerful technology whose full scale is still unknown. The warning came from Sarah Myers West, managing director of the AI Now Institute and a former adviser on AI to the Federal Trade Commission.
The question of whether OpenAI will permit the militarization of its technology is not academic. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The conference brought together attendees from both the private sector and government, including the U.S. Department of Defense and its neighboring spy agencies, who wanted to know how U.S. security agencies could join companies around the world in rapidly adopting machine learning. During a Q&A session, Phillip Chudoba, the National Geospatial-Intelligence Agency’s associate director for capabilities, was asked how his office might leverage AI. He replied at length:
We’re all looking at ChatGPT and how it matures as a useful, and scary, technology. … Our expectation is … that things like GEOINT, AI, ML, analytic AI/ML, and ChatGPT-style capability will evolve to a place where they really collide, predicting things that a human analyst, perhaps because of experience, exposure, and so on, may never have thought of.
Jargon aside, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s primary handler of geospatial intelligence, often referred to as GEOINT. The practice involves crunching vast volumes of geographic information, such as maps, satellite photos, and weather data, to give military and spy agencies an accurate picture of what is happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, The Washington Post reported the findings of NGA documents detailing the surveillance capabilities of the Chinese high-altitude balloon that sparked an international incident earlier this year.
Prohibited Use
But the ambitions of Chudoba’s AI-augmented GEOINT are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: both “military and warfare” uses and “high-risk” government applications are excluded under OpenAI’s usage policies. “If we find that your product or usage does not comply with these policies, we may ask you to make the necessary changes,” the policy reads. “Repeated or serious violations may lead to further action, including account suspension or termination.”
By industry standards, it is a remarkably strong and clear document, one that appears to forswear the bottomless defense spending available to less scrupulous contractors and reads as a fairly cut-and-dried prohibition of exactly what Chudoba envisions for the intelligence community. It is hard to imagine how an agency that keeps tabs on North Korea’s missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense, would not be the very definition of high-risk military decision-making.
The NGA and fellow intelligence agencies seeking to join the AI boom might eventually pursue deals with other firms, but for the time being, few of OpenAI’s competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? However clear-cut OpenAI’s ban on using ChatGPT to crunch foreign intelligence may seem, the company refuses to say so. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked how OpenAI would enforce its usage policy in this case, Beck responded with a link to the policy itself and declined to comment further.
“I think it certainly cuts against everything they’ve told the public about the ways they’re concerned about these risks, as though they’re really acting in the public interest,” Myers West of the AI Now Institute told The Intercept. “If, when it gets down to the details, they’re not willing to steer clear of this sort of potential harm, it shows a kind of flimsiness of that stance.”
Public Relations
Even the most loudly articulated ethical principles in tech have routinely proven to be little more than public relations. Twitter has banned the direct use of its platform for surveillance while simultaneously enabling it, and Google sells its AI services to the Israeli Ministry of Defense even though its AI principles prohibit applications “that cause or are likely to cause overall harm” and those whose purpose contravenes “widely accepted international law and human rights principles.” Microsoft’s public ethics policies cite a “commitment to climate change mitigation,” yet the company helped Exxon analyze oil field data, and it similarly professes a “commitment to vulnerable groups” while selling surveillance tools to American police.
This is a problem OpenAI may be unable to avoid: The data-hungry Department of Defense is increasingly obsessed with machine learning, making ChatGPT and its ilk obviously desirable. The day before Chudoba spoke about AI in Arlington, Kimberly Sablon, speaking at a conference in Hawaii, touted the potential of “large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, the CIA’s director of artificial intelligence, Lakshmi Raman, told the Potomac Officers Club, “This is certainly an inflection point for this technology, and we definitely need to [be exploring] ways in which we can leverage new and emerging technologies.”
Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer at the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “The agency is overwhelmed with information, more than an army of human analysts can handle,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is something more than that: the notion that AI could identify issues that human analysts would miss.” Aftergood said he doubted ChatGPT itself would be suited to that role, but he was less dismissive of the possibility that an underlying machine learning model could sift through huge datasets and come up with inferences. “It will be interesting, and a little scary, to see how that works out,” he added.
Photo: U.S. Army
Convincing Nonsense
One reason that is scary: While tools like ChatGPT can mimic human writing almost instantly, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-looking but entirely bogus responses. This tendency to confidently and convincingly churn out nonsense, a chatbot phenomenon known as “hallucination,” could pose a problem for hard-nosed intelligence analysts. It is one thing for ChatGPT to fabricate the best places to eat lunch in Cincinnati, and another for it to conjure meaningful patterns from satellite imagery of Iran. Moreover, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can at least attempt to explain how they reached their conclusions.
Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University, told The Intercept that feeding a system like ChatGPT brand-new information about the world presents a further obstacle. “Current [large language models] like the one that powers ChatGPT are effectively closed worlds of already digitized data; famously, the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models remains an open problem. So the question of how an LLM would incorporate continually updated real-time data, particularly in the rapidly changing and always chaotic conditions of combat, seems like a big one. And that is not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-driven military intelligence.”
It makes at least business sense for OpenAI not to want to rule out the NGA as a future customer: Government work, especially of the national security variety, is exceedingly lucrative for tech firms. In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle all signed deals with the CIA. Microsoft, which has invested $13 billion in OpenAI and is rapidly integrating the smaller company’s machine learning capabilities into its own products, has itself earned tens of billions of dollars in defense and intelligence work. Microsoft declined to comment.
Still, OpenAI knows that this work could prove highly controversial, both with its staff and with the broader public. OpenAI currently enjoys a global reputation for its dazzling machine learning tools and toys, a glowing public image that partnering with the Pentagon could quickly tarnish. “OpenAI’s righteous self-presentation is consistent with the recent wave of ethics-washing in relation to AI,” Suchman said. “Ethics guidelines create what my U.K. friends call ‘hostages to fortune,’ or things you say that may come back to bite you. The fact that they can’t even respond to press questions like yours suggests that they’re ill-prepared to be held accountable for their own policy.”
