The CEO and the Secretary of Defense: A call for a whole-of-society approach
“The real question is not whether machines think, but whether humans think.”
—B.F. Skinner
“We have to learn to sit down together and talk about a little culture…”
—Sylvia Wynter
The dramatic public battle between Anthropic and the Department of Defense is a microcosm of the many debates boiling around the ethics of AI, but above all it has made clear to everyone the very high stakes of who will control the design, development, and deployment of the technology, including its intended (or unintended) uses. Whatever one’s ethical or philosophical views, it seems self-evident that governments of elected officials, rather than a handful of corporate entities, should decide when, where, and how technologies as powerful and as consequential for humanity as AI are used.
But what does it mean when the government can claim any private company’s technology as its own, by nationalizing it, as the Trump administration has suggested it might, or can seek to scuttle it by designating it a supply-chain security risk for both government and commercial use, as Secretary of Defense Pete Hegseth has threatened to do, infuriating Anthropic CEO Dario Amodei? (The contradiction of claiming that Claude is too dangerous to use while simultaneously using it to arrest former Venezuelan President Nicolás Maduro and to wage war against Iran is lost on no one.)
As a result, many of my students who are interested in founding startups or building technology ask: if I create something that is inconsistent with the current government’s policies and preferences, will the government simply take it away from me or destroy it? Mr. Amodei did not want Claude used in ways that could be seen as violating civil or human rights, such as mass domestic surveillance or autonomous drone warfare. When my students think about mitigating unwanted secondary or unintended uses of their products, whether through design, through safeguards built into the technology itself, or through policy, the scenarios they imagine typically involve malicious actors and malicious intent. But what, they ask, should they do when their own government reserves the right to use the technology they develop to violate human rights? And are they, as creators, complicit in its (mis)use? This is the crisis of conscience that both Einstein and Oppenheimer wrestled with over nuclear weapons.
Stanford HAI admirably requires an Ethics and Society Review (ESR) of applicants for large project grants. The ESR moves beyond institutional review boards, which consider only risks to individual human subjects, to weigh more broadly and thoroughly the anticipated societal impact of a project and what its creators will do to mitigate harm. There are, of course, related debates about the ethics of general-purpose versus purpose-built AI, and some, like OpenAI CEO Sam Altman, have controversially argued that no one can anticipate the harms of a general-purpose technology before the product is “out there.” But the ESR can be an important exercise in moving beyond simple ethics audits toward thinking more deeply about what constitutes so-called responsible, transparent, trustworthy, and accountable AI. We understand full well how problematic these reassuring adjectives can be when they serve as rhetoric to allay consumer fears about AI that is in fact irresponsible, opaque, deceptive, and unaccountable. Nevertheless, ESRs, conducted at the beginning of a project rather than on the back end, remain an important opportunity to think meaningfully about, and in some cases take some ownership of, what happens to what we put into the world, for better or for worse. The final project for my course requires an enhanced version of HAI’s ESR.
But what is the value of an ESR in the face of, say, a president’s power to unilaterally define what constitutes a “lawful use” of AI? Must one cede ownership of one’s technology, or compromise one’s own ethical compass and moral values, in order to comply with military priorities or national security obligations? These are hardly new philosophical questions. But the dispute between Anthropic and the Department of Defense reveals the real-world immediacy of such issues, and the department’s impatience with deliberation. As Maureen Dowd recently put it in a New York Times column, the Pentagon has given Anthropic a choice: “be blackmailed or blacklisted.” It is no surprise that President Trump did not consult Congress before declaring war on Iran. This administration’s adoption of Zuckerberg’s “move fast and break things” mentality, acting seemingly unilaterally and capriciously with breathtaking speed, has effectively disabled the legal and ethical deliberative processes needed to make informed decisions about the use of AI.
Some argue that such decisions should ultimately be left to the executive branch. But what do we do when tech industry lobbyists have bought, for mind-boggling sums of money, such unprecedented access to and enormous influence over presidential decision-making on AI? It is naive to accept at face value the claim that regulation and governance will make the United States the loser in the global AI arms race, or to believe that industry mantras such as “regulation stifles innovation” and “governance castrates competition” are driven solely by concern for some higher social good or by faith in the progress of civilization. After all, many of these narratives are hatched in the industry’s marketing departments and serve commercial interests that benefit directly from the rush to market and the push to deploy.
In that context, the deliberation and critical reflection that should occur in a democratic society about the ethical design and use of AI is artificially framed as unacceptably “slow.” But deliberation commensurate with AI’s importance and impact must proceed at a human pace, not one that locks us into perpetual fast-forward. Moreover, decisions about who decides what about AI should not be the preserve of presidential prerogative or CEO fiat alone; ideally they would involve civil society, technologists, academia (the humanities and social sciences as well as the STEM fields), philanthropy, and all sectors of government: a whole-of-society approach. We are all, after all, stakeholders in such profoundly transformative technologies.
— Michelle Elam, William Robertson Coe Professor of Humanities, Department of English, Stanford University, Senior Fellow, Stanford HAI
