OpenAI reassigns top safety executive Madry to role focused on AI reasoning



OpenAI told CNBC that Madry will continue to work on core AI safety efforts in his new role.

He is director of MIT's Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, but is currently on leave from those roles, according to the university's website.

The decision to reassign Madry comes less than a week after a group of Democratic senators sent a letter to OpenAI CEO Sam Altman with "questions about how OpenAI is addressing emerging safety concerns."

The letter, sent on Monday and seen by CNBC, also said, “We are requesting additional information from OpenAI about the steps it is taking to meet its safety commitments, how it is internally evaluating its progress against those commitments, and how it identifies and mitigates cybersecurity threats.”

OpenAI did not immediately respond to a request for comment.

The lawmakers asked the tech startup to answer a series of specific questions about its security practices and financial responsibility by August 13.

This is all part of a summer of growing safety concerns and controversy surrounding OpenAI, which, along with companies like Google, Microsoft, and Meta, is spearheading the generative AI arms race, a market predicted to exceed $1 trillion in revenue within the decade. Companies across industries are rushing to add AI-powered chatbots and agents to avoid falling behind their competitors.

Earlier this month, Microsoft relinquished its observer position on OpenAI's board of directors, saying in a letter seen by CNBC that it was able to step down because it was satisfied with the makeup of the startup's board, which has been revamped in the eight months since the revolt that led to Altman's temporary removal and threatened Microsoft's huge investment in the company.

But last month, a group of current and former OpenAI employees published an open letter expressing concern about the artificial intelligence industry's rapid advancement despite a lack of oversight and the absence of whistleblower protections for those who want to speak out.

“We believe that AI companies have strong economic incentives to avoid effective oversight, and that bespoke structures of corporate governance will not be enough to change this,” the employees wrote at the time.

Days after the letter was made public, sources familiar with the matter confirmed to CNBC that the FTC and Department of Justice plan to launch an antitrust investigation focused on the conduct of OpenAI, Microsoft, and NVIDIA.

FTC Chair Lina Khan described her commission's actions as a “market investigation into investments and partnerships forming between AI developers and major cloud service providers.”

In their June letter, current and former employees wrote that AI companies have "substantial non-public information" about what their technology can do, the scope of safeguards they have in place, and the risk levels of different types of harm the technology poses.

The companies "also understand the serious risks posed by these technologies," they wrote, adding: "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

In May, OpenAI decided to disband its team focused on the long-term risks of AI, just a year after announcing the group, a person familiar with the matter confirmed to CNBC at the time.

Some team members are being reassigned to other teams within the company, said the person, who asked not to be identified.

The team was dissolved after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the company in May. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a back seat to shiny products."

Altman said on X at the time that he was sad to see Leike leave and that OpenAI still had work to do. Shortly after, co-founder Greg Brockman posted a statement on X, attributed to himself and Altman, asserting that the company had "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X at the time. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety, and societal impact.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote. "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources], and it was getting harder and harder to get this crucial research done."

Leike added that OpenAI must become a "safety-first AGI company."

"Building machines smarter than humans is an inherently dangerous endeavor," he wrote at the time. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a back seat to shiny products."

Madry's reassignment was first reported by The Information.


