U.S. lawmakers are wrestling with what guardrails to put around burgeoning artificial intelligence, but months after ChatGPT came to the attention of the U.S. government, consensus is far from certain.
Interviews with U.S. senators, congressional staffers, AI companies, and interest groups revealed a number of options under debate.
The debate will come into focus on Tuesday, when OpenAI CEO Sam Altman appears before a Senate committee for the first time.
Some proposals focus on AI that could put people’s lives and livelihoods at risk, such as in healthcare and finance. Other possibilities include rules to ensure that AI cannot be used to discriminate or violate someone’s civil rights.
Another question is whether to regulate the developers of AI or the companies that use AI to interact with consumers. And OpenAI, the startup behind the chatbot sensation ChatGPT, has floated the idea of a standalone AI regulator.
While it is unclear which approach will win out, some in the business community, including IBM and the U.S. Chamber of Commerce, favor regulating only critical areas such as medical diagnostics, calling it a risk-based approach.
If Congress decides new legislation is needed, the U.S. Chamber of Commerce’s AI commission will argue that “risk should be determined by impact to individuals,” said Jordan Crenshaw of the Chamber’s Technology Engagement Center. “A video recommendation may not pose as high a risk as decisions made about health or finances.”
So-called generative AI, which uses data to create new content such as ChatGPT’s human-like prose, has skyrocketed in popularity. Concerns have been raised that the rapidly evolving technology could facilitate exam cheating, fuel misinformation, and enable a new generation of scams.
The AI hype has sparked a flurry of meetings, including a visit to the White House this month by the CEOs of OpenAI, its backer Microsoft, and Alphabet, where they met with President Joe Biden.
Congress is doing the same, according to congressional aides and technology experts.
“Broadly, staff across the House and Senate are basically waking up, and everyone is being asked to get a handle on this,” said Jack Clark, co-founder of high-profile AI startup Anthropic, who also attended the White House meeting. “People want to get ahead of AI, partly because they feel they failed to get ahead of social media.”
A key priority for big tech companies as lawmakers become more active is to fight a “premature overreaction,” said Adam Kovacevich, head of the tech industry trade group Chamber of Progress.
And while lawmakers such as Senate Majority Leader Chuck Schumer are determined to tackle AI in a bipartisan way, Congress is in fact polarized, a presidential election looms next year, and lawmakers are occupied with other major issues, such as raising the debt ceiling.
Schumer’s proposed plan calls for independent experts to test new AI technologies before they are released. It also calls for transparency and for providing the government with the data it needs to avert harm.
Government micromanagement
Under a risk-based approach, AI used to diagnose cancer, for example, would face scrutiny from the Food and Drug Administration, while AI used for entertainment would go unregulated. The European Union is moving toward passing similar rules.
But for Democratic Sen. Michael Bennet, who has introduced a bill to create a government AI task force, the focus on risk seems insufficient. He said he favors a “values-based approach” that prioritizes privacy, civil liberties and rights.
A Bennet aide added that risk-based rules may be too rigid and fail to catch dangers such as AI being used to recommend videos that promote white supremacy.
Lawmakers are also debating how best to ensure AI cannot be used to discriminate racially, for example in deciding who qualifies for a low-interest mortgage, according to a person following the debate who was not authorized to speak to reporters.
At OpenAI, staff are considering broader oversight.
Cullen O’Keefe, a research scientist at OpenAI, proposed in an April talk at Stanford University the creation of an agency that would require companies to obtain licenses before training powerful AI models or operating the data centers that power them. O’Keefe said the agency could be called the Office for AI Safety and Infrastructure Security, or OASIS.
Asked about the proposal, Mira Murati, OpenAI’s chief technology officer, said a trusted body could “hold developers accountable” to safety standards. But more important than the mechanism, she said, was agreement on what the standards should be and what risks they were meant to mitigate.
The last major regulator to be created was the Consumer Financial Protection Bureau, established after the 2007-2008 financial crisis.
Some Republicans may be hesitant about regulating AI.
A Senate Republican aide told Reuters, “We should be careful that any proposed AI regulation does not become a government micromanager of computer code, such as search engines and algorithms.”
