AI “Incidents” Up 690%; Worst Offenders Are Tesla, Facebook, OpenAI



The first “AI incident” almost caused a global nuclear war. Recent AI-powered glitches, errors, and scams include deepfakes used to influence politics, harmful health advice from chatbots, self-driving cars endangering pedestrians, and more.

According to security firm Surfshark, the worst offenders are Tesla, Facebook, and OpenAI, which together account for 24.5% of all known AI incidents to date.

In 1983, Soviet automated systems detected what they interpreted as incoming nuclear missiles from the United States, bringing the world to the verge of global conflict. This is the first incident in the Surfshark report (although it is debatable whether automated systems from the 1980s count as artificial intelligence). More recently, the National Eating Disorders Association (NEDA) was forced to shut down its chatbot, Tessa, after it gave dangerous advice to people seeking help with eating disorders. Other recent incidents include a self-driving Tesla vehicle failing to yield to a pedestrian at a crosswalk, and a Jefferson Parish resident who was wrongfully arrested by Louisiana State Police after a facial recognition system developed by Clearview AI misidentified him as another individual.

According to Surfshark, these AI incidents are increasing rapidly.

This is not surprising given the significant increase in AI investment and use over the past year. According to software review service G2, search traffic for chatbots increased 261% from February 2022 to February 2023, and the fastest-growing software products in G2’s entire database are AI products.

And while many people remain wary of AI systems, the executives who buy software seem to trust them.

Based on a recent survey of 1,700 software buyers, a G2 representative told me: “Amid widespread skepticism around the use of AI, 78% of respondents said they trust their AI-powered solutions to be accurate and reliable.”

Despite that trust, the Surfshark data (covering 2014–2022) shows that AI incidents averaged 10 per year in the early days but have surged to an average of 79 major incidents per year recently. That’s a 690% growth rate in just six years, and the growth is accelerating.

As of May, 2023 has already seen half as many incidents as all of 2022.

“A recent notable example is the deepfake image of Pope Francis in a white down jacket, demonstrating the incredible realism that AI image generators can achieve,” says Elena Baverskaite, a spokeswoman for Surfshark. “But its impact goes beyond mere entertainment, as AI-enhanced fake news can continue to mislead the public.”

In fact, 83 of the 520 AI incidents in Surfshark’s listings reportedly mention the word “black,” including the wrongful arrest mentioned above and the infamous case in which Facebook’s AI tools classified videos of Black men as “primates.”

Some of that bias is trained into AI systems, but what’s clear is that in the massive rush to add AI to everything, safety and fairness aren’t necessarily key considerations.

As for Facebook, Tesla, and OpenAI, which together account for roughly 25% of recorded AI incidents: Facebook’s challenges include algorithms that failed to catch violent content and scammers using deepfake images and videos on the platform to defraud people. Tesla’s AI problems have generally centered on the company’s Autopilot and Full Self-Driving software, which has caused unexpected braking (including an eight-car pileup in San Francisco in late 2022) and has failed to notice vehicles or people ahead. OpenAI’s challenges generally relate to the privacy of its users as well as the privacy of the data used in training; OpenAI’s technology, as deployed in Bing search, also allegedly made death threats.
