Weasel Words: OpenAI’s Department of Defense contract won’t stop AI-powered surveillance

Applications of AI


OpenAI, the developer of ChatGPT, has understandably faced widespread criticism for its decision to fill the gap created when Anthropic refused the U.S. Department of Defense’s (DoD) demand to eliminate restrictions on the use of its AI for surveillance and autonomous weapons systems. Following protests from both users and employees who did not sign up to support mass government surveillance, initial reports indicate that ChatGPT uninstalls rose nearly 300% after the company announced the deal. OpenAI CEO Sam Altman said the original agreement was “opportunistic and sloppy,” then republished internal memos on social media describing additions to the agreement: “Consistent with applicable law, including the Fourth Amendment to the United States Constitution, the National Security Act of 1947, [and] the Foreign Intelligence Surveillance Act of 1978, AI systems may not be intentionally used for domestic surveillance of U.S. persons and nationals.”

The problem is that the U.S. government does not believe “consistent with applicable law” means “no domestic surveillance.” Instead, the government has repeatedly adopted lax interpretations of “applicable law” that have enabled mass surveillance and widespread violations of civil liberties, and it has then fought tooth and nail to prevent courts from intervening.

After all, many of the world’s most notorious human rights atrocities were historically “legal” under the laws in place at the time.

“Intentionally” also does a lot of work in that sentence. For years, the government has maintained that mass surveillance of Americans simply happens “incidentally” (read: unintentionally) because their communications with people abroad are captured by surveillance programs ostensibly designed solely to collect the communications of people outside the United States.

The company’s contract amendments continue in a similar vein: “For the avoidance of doubt, the Department understands that this restriction prohibits the intentional tracking, monitoring, or surveillance of United States persons or nationals through the acquisition or use of commercially obtained personal information or private information.” Here, “intentional” is a red flag, given that intelligence and law enforcement agencies often rely on incidental collection or commercially purchased data to circumvent stronger privacy protections.

Another example: “AI systems may not be used to conduct unrestricted surveillance of the personal information of Americans, consistent with these authorities. Nor may the systems be used for domestic law enforcement operations, except as permitted by the Posse Comitatus Act and other applicable laws.” What exactly does “unrestricted” mean? And according to whom?

Lawyers sometimes call these “weasel words,” because they create ambiguity that shields one party or the other from actual liability for breach of contract. Similar language appeared in the Anthropic negotiations, where the Pentagon reportedly agreed to abide by Anthropic’s red lines only in “appropriate cases,” with the government likely seeking to publicly commit to restrictions in principle while retaining broad flexibility in practice.

OpenAI also notes that the Pentagon has promised not to allow the NSA to use OpenAI’s tools without a new agreement, and that its deployment architecture will help verify that no red lines are crossed. But secret agreements and technical guarantees have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

Indeed, OpenAI executives may genuinely hope, as they have claimed, to use the company’s contractual relationship with the DoD to ensure that the government uses AI tools only in ways consistent with democratic values. But based on what we know so far, that hope seems deeply naive.

And that naivety is dangerous. In an era when the government embraces extreme and unfounded interpretations of “applicable law,” companies need to do more than take its promises at face value. OpenAI publicly promises to “avoid enabling uses of AI and AGI that harm humanity or unduly concentrate power,” yet we know that enabling mass surveillance does both.

OpenAI is not the only consumer-facing company seeking to profit from government mass surveillance efforts while trying to reassure the public that it is not participating in human rights abuses. Despite this marketing doublespeak, it is clear that companies cannot do both. It is also clear that companies should not hold this much power over our privacy in the first place. We should not have to depend on a small group of people, whether CEOs or DoD officials, to protect our civil liberties.
