Get smarter about artificial intelligence | Columnist


Edith Cook




Artificial intelligence (AI) is touted as a great advancement, but like many “improvements” in modern life, it's also a deal with the devil.

“Learning Deep Learning” is the title of a technical publication I came across while visiting California. Silicon Valley is home to many companies and conferences specializing in artificial intelligence (AI) products.

The diagrams in this book show how deep learning (DL) is a subset of machine learning (ML), which in turn is a subset of AI. Deep learning, for its part, encompasses deep neural networks (DNNs).

The book runs to approximately 630 pages. Written by a software architect for whom English is a second language, it can be heavy going. Still, a heading in the preface, "Is DL Dangerous?" piqued my interest. To paraphrase:

Unintended consequences:

A 2018 study by Buolamwini and Gebru examined facial recognition systems used in law enforcement. One system achieved 99% accuracy for light-skinned men but only 65% accuracy for dark-skinned women, producing false matches and unfair outcomes that put innocent people at risk of prosecution.

This system remains “commercially available.”

Malicious use:

Dixon's 2019 study of fake pornography found that the technology is used to make pornographic videos appear to feature real people, often celebrities, without their consent.

In other words, AI is fraught with both unintended consequences and deliberate abuse. That these studies date back four or five years tells us such activities have been going on for years, even though ethical AI has become a major focus only recently.

The author states that "DL learns from human-generated data, so there is a risk of learning and even amplifying human biases." He mentions "the need to take a responsible approach to DL and AI" but acknowledges that historically "this topic has been largely ignored." He points readers to the website of the Algorithmic Justice League, which has raised the alarm, but the horse is out of the barn, right?

The foreword by Dr. Anima Anandkumar touches on the same theme. Her prose, too, runs long, but the gist is that it is of paramount importance for all AI engineers to "think critically about the social implications of AI deployments." The picture is dark. She points to the proliferation of harassment, hate speech, and misinformation on social media that is "wreaking havoc" in society. She adds:

Groundbreaking research such as the Gender Shades Project and Stochastic Parrots shows that AI models deployed commercially at scale have highly problematic biases.

Anandkumar advocates banning law enforcement's use of facial recognition until "proper guidelines and testing are in place." The question is: who will design and implement those guidelines?

Meanwhile, internet news serves up headlines like this one: "Audio deepfakes are calling – here's what they are and how to avoid falling victim to fraud. Powerful AI tools available to anyone with an internet connection have made it easy to impersonate someone's voice, increasing the threat of telephone fraud."

The article's authors work on the DeFake Project at the Rochester Institute of Technology. Computing security professor Matthew Wright and computing security research fellow Christopher Schwartz point to chatbots like ChatGPT that generate realistic scripts with adaptive real-time responses: "By combining these techniques with audio generation, deepfakes go from static recordings to living, lifelike avatars capable of convincing phone conversations." Voice phishing, or "vishing," scams increasingly rely on audio deepfakes. In 2019, an energy company was scammed out of $243,000 when criminals imitated the voice of its parent company's boss and ordered an employee to transfer funds to a supplier. In 2022, people lost an estimated $11 million to simulated voices, including those of close personal connections.

Do not rely on caller ID, the researchers caution; it can be spoofed. Also guard any personally identifiable information: your Social Security number, home address, date of birth, phone number, middle name, even the names of your children and pets. Scammers can use all of it to impersonate you to banks, real estate agents, and others.

Another headline reports that an AI hustler stole a woman's face to appear in an ad, and that the law cannot help her.

The Washington Post's Nitasha Tiku explains that AI has created a "new type of identity theft": ordinary people find their faces and words twisted to promote offensive products and ideas. Tiku gives an example.

The 27-year-old content creator was with her husband in a rented cabin in snowy Maine when messages from her followers began trickling in, warning her that a YouTube commercial for an erectile-dysfunction supplement was using her likeness.

The commercial showed her in her real bedroom, wearing her real clothes, but paired in bed with a nonexistent partner suffering from the advertised problem. A fraudster, it appears, had stolen one of her videos and manipulated it with a "new wave of artificial intelligence tools" that can create realistic "deepfakes," an umbrella term for media that has been altered or fabricated by AI.

Because it is easier and cheaper to fake videos based on real content, scammers scour social media for footage that matches their sales pitch's target audience. As experts predict, "there will be an explosion in the number of advertisements created using personal information."

On April 9, 2024, a nonprofit musician-advocacy group published on Medium an open letter signed by more than 200 prominent figures in the music industry, under the headline "How Musicians Are Fighting AI for Fair Pay." The letter implores AI companies not to use the technology to devalue their music.

“Unchecked, AI will create a race to the bottom, devaluing our work, and making it impossible for us to get paid for it,” the letter said.

Earlier this year, on January 30, 2024, Universal Music Group took a stand against AI by pulling its entire catalog from TikTok, where artists have been plagued by rights violations. Its open letter to the artist and songwriter community was headlined:

Why you should take a timeout on TikTok.

TikTok is allowing AI-generated recordings to flood its platform, and is developing tools to enable, promote, and encourage AI-generated music creation on the platform itself, while demanding contractual rights that allow this content to massively dilute the royalty pool for human artists. . . . [This] is nothing short of sponsoring the replacement of artists by AI.

"This hurts," a musician told me. "We can no longer use TikTok to promote our music." Profit margins for musicians are already thin, he said, and some AI tactics can wipe them out.

"Hollywood actors are hurting as well," he said. "Whereas appearing as an extra in a crowded movie scene used to earn you a small income, that work has now been replaced by AI clones."

Meanwhile, a Google search for "phishing" brings up a surprising variety of posts, from cybersecurity companies offering their services to advice pieces like "How to spot and avoid phishing scams."

It turns out that identity fraud is very common in the United States. A few years ago, scammers siphoned $98 million from Facebook and $23 million from Google by sending fake invoices to employees.

Edith Cook, Ph.D., is German by birth and a naturalized U.S. citizen. She worked as a translator before moving to California and has taught at several universities in California, South Dakota, and Tennessee. As a writer, she received the Wyoming Arts Council's Frank Nelson Doubleday Memorial Award and a Professional Development Grant. Visit www.edithcook.com. Her opinions are her own and do not reflect the editorial stance of the Cheyenne Post.




