Tech workers call for increased AI oversight in new open letter



Tech employees are advocating for increased oversight and regulation surrounding the rapidly evolving field of artificial intelligence. Former and current employees of prominent tech giants such as OpenAI, Alphabet's Google (GOOG, GOOGL), and Anthropic have published an open letter expressing concerns about the lack of governance to ensure the safe development of AI technology. Appian (APPN) CEO Matt Calkins appeared on Catalysts to share his thoughts on the effort.

Calkins supported the letter, calling it “an extremely responsible step,” noting that it addressed three distinct areas: safety concerns, the lack of a “more mature set of regulations,” and transparency within AI organizations.

“Fundamentally, it's a question of whether we can trust AI and whether it will be our partner in making decisions as humans and as businesses. At this point, AI is a novelty,” Calkins told Yahoo Finance.

To learn more about expert insights and the latest market trends, click here to watch this full episode of Catalysts.

This post was written by Angel Smith.

Video Transcript

Now, a group of former and current employees from OpenAI, Google DeepMind, and Anthropic have published an open letter warning about the lack of oversight of rapidly expanding AI technology.

They argue that AI companies have strong financial incentives to avoid effective oversight.

So what do questions about AI safety mean for the AI sector, which has driven much of the market rally so far this year?

And what could this push for oversight mean going forward?

Joining us now is Appian CEO Matt Calkins.

Thank you so much for being with us.

We know that your company is constantly using AI as part of its business.

But I would like to hear your thoughts on this open letter and what it means.

If employees at these AI companies are already raising concerns about safety, what does that say about the continued adoption of, and concerns about, AI on the part of major tech companies going forward?

I love this letter.

I think this is an extremely responsible step and one that is necessary to move the conversation forward on AI, saying that there are dangers with AI — and we all know there are dangers with AI — and that we need regulation, and we do need a more mature set of regulations.

Europe has taken a step forward.

America has not yet done the same.

And third, the letter called for transparency, and I think we need much more transparency about what's going on inside AI organizations so that we can all understand new technologies and know how to set the rules.

What stood out to you from the letter?

Is there anything else that isn't already reflected in the price?

Well, I think the letter focuses on some concerns, not all of them, but I want people to think broadly about what we need to regulate in AI. Fundamentally, it's not just the threats they mention; it's whether we can trust AI and whether it can be our partner in making decisions as humans and as businesses. Right now, AI is a novelty; it hasn't yet made its way into our homes and businesses.

And I love it, I love this letter in terms of regulation and transparency.

I think these are good questions, but we need to go beyond the letter and offer assurances and commitments about how AI will be a responsible partner for us.

And it can start with transparency.

I would start by disclosing the data sources used to train AI algorithms.

That's an incredibly important step.

But more than that, personal data must also be respected.

Any use of personal or identifiable information should require consent and compensation; personally identifiable information must be anonymized and permission obtained; and copyrighted material, such as photographs, stories, or this morning's New York Times, must be protected, with consent and compensation required for its use.

I think these four items are the core of what it takes to make AI responsible and more mature, so I would encourage others to join me.

Now, Matt, I want to set the letter aside for a second and ask: as this plays out, what will be the catalyst for creating a safer world?

Do you think that drive will come from within companies?

Is the drive we are seeing now coming from governments and the public sector?

Governments will need to regulate, because we need to know what the rules are.

But companies like ours here at Appian (APPN) have been using AI, and selling AI, for 10 years.

We want to be part of the answer.

We want to be constructive players in creating a better future for AI.

And I think that's the bottom line, ultimately.

Remember when Web 2.0 came along?

This happened a while ago.

Web 2.0 was the second generation of the internet: it became more user-centric, with users exchanging data with websites rather than just receiving data one way.

The future of AI is much the same.

AI 2.0 is coming, where we entrust it with our data and it can tell us something about ourselves and make recommendations based on what it knows about us. To get to AI 2.0, we need trust.

We need to know that AI is a good steward of our own information.

And for that we need regulation.

We need to be clear about what the role of AI is.

We need to know that what we tell our AI, and what it knows about us, is protected.

So Matt, that's it for now, but thank you so much for joining us and for giving us your thoughts on where we might go from here.

That was Matt Calkins, CEO of Appian.


