AI and You: Predicting the Future, New Beatles Songs, and Reimagining Work

AI News


Since the launch of OpenAI’s ChatGPT in November 2022, conversations about conversational and generative AI have become frequent, filled with predictions about the opportunities and challenges ahead.

Regardless of how you feel about it, AI is here to stay and will continue to evolve; it has already profoundly changed the way we live, work, collaborate, brainstorm and create. There is no question about that.

Over the past three months, I’ve delved into all things conversational AI to understand the opportunities and risks, the companies and stakeholders working on new tools and policies, and some of the issues surrounding this new technological frontier. Each week, we’ll highlight some of the notable things happening in the world of AI that we think deserve attention.

Since this is my first In the Loop on AI roundup, I’ve compiled some highlights from the past month or so, with links to the source material so you can get up to speed quickly.

AI may end badly for humanity, or it may not. In March, notable AI researchers and tech executives, including Apple co-founder Steve Wozniak and Twitter owner Elon Musk, signed an open letter calling for a six-month pause on designing and training these powerful and potentially harmful systems, to give the industry time to set safety standards around AI.

“We’ve gotten to a point where these systems are smart enough that they can be used in ways that are dangerous for society,” Yoshua Bengio, an AI pioneer and director of the University of Montreal’s Montreal Institute for Learning Algorithms, told The Wall Street Journal in an interview at the time. “And we still don’t understand them.”

The past two months have seen dueling posts about the potential threats and joys of AI. In a one-sentence open letter signed by luminaries including OpenAI CEO Sam Altman and the godfather of AI, Geoffrey Hinton, experts said AI could pose a “risk of extinction” on par with pandemics and nuclear war. In contrast, venture capitalist and internet pioneer Marc Andreessen, whose firm has backed a number of AI startups, published a nearly 7,000-word post on the subject titled “Why AI Will Save the World.”

This week brings insights from the 119 CEOs across industries who responded to the Yale CEO Summit survey: 42% said AI could potentially destroy humanity (34% said within 10 years, 8% within five), while the remaining 58% said that could never happen and that they’re “not worried,” according to CNN’s summary of the results. In a separate question, 42% of those surveyed said the potential for an AI catastrophe is overstated, while 58% said it isn’t.

I’m glad everything was resolved.

AI doesn’t always paint a pretty picture. What does a CEO look like? Or a drug dealer? Those are questions Bloomberg explored in an article about how text-to-image generators create a highly distorted vision of the world, one even more biased than already-biased humans. After analyzing more than 5,000 images generated by Stable Diffusion (a rival to OpenAI’s DALL-E), Bloomberg found that “the world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.”

“We are essentially projecting a single worldview out into the world, instead of representing diverse kinds of cultures or visual identities,” Sasha Luccioni, a research scientist at AI startup Hugging Face and co-author of a study on bias in text-to-image generative AI models, told Bloomberg. “The question is, who is responsible? The dataset provider? The model trainer? Or the creator?”

All good questions.

The Beatles are back with one final song. Thanks to AI, a new “last” Beatles song featuring the original Fab Four is set to be released this year. Paul McCartney told the BBC in June that AI was used to separate John Lennon’s vocal track from a demo of an unreleased song (rumored to be Lennon’s 1978 recording “Now and Then”).

We already knew it was possible to isolate a vocal track from a recording, which is how we’ve heard Linda McCartney’s exposed backing vocals on “Hey Jude” and Yoko Ono’s, shall we say, “painful” contributions to John Lennon’s recordings.

From the BBC: “Sir Paul had received the demo a year earlier from Lennon’s widow, Yoko Ono. It was on a cassette labelled ‘For Paul’ that Lennon had made shortly before his death in 1980. The lo-fi, embryonic track was recorded mostly on a boombox as the musician sat at a piano in his New York apartment.”

McCartney made so much news with this that he posted a tweet on June 22 reiterating that it’s actually the Fab Four singing and that AI wasn’t used to generate the new vocals.

Will the new Beatles song be good or bad? I don’t know, but what I do know is that it may not be eligible for a Grammy. CNET reporter Nina Raymont noted that the Grammys will consider only human-made music for the 2024 awards show, which airs Jan. 31. “Only human creators are eligible to be submitted for consideration,” according to the Grammys’ new rules. “A work that contains no human authorship is not eligible in any categories.” Artists will still be able to use AI tools to create their music, but the human authorship in submitted work must be “meaningful” and more than minimal.

$5,000 hallucinations: In case you didn’t know, some AI chatbots can “hallucinate.” That’s a polite way of saying they make things up, stating falsehoods as if they were true. Well, two attorneys have painfully learned that hallucinations are a no-go, at least when it comes to filing legal briefs in federal court.

The two lawyers, who used ChatGPT to help prepare a legal brief, were fined $5,000 after the court found that the chatbot had fabricated cases that didn’t exist, which the lawyers then cited as precedent.

U.S. District Judge P. Kevin Castel wrote in a sanctions order that “technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Cats, dogs, work: AI engines like ChatGPT don’t have human-level intelligence and aren’t even as smart as a dog or a cat, Meta’s chief AI scientist Yann LeCun said at the Viva Tech conference in June. That’s because most generative AI engines built on large language models (LLMs) aren’t very intelligent: they’re trained only on language, not on images or video.

“Those systems are still very limited. They don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amounts of text,” LeCun said. “Most of human knowledge has nothing to do with language, so that part of the human experience is never captured by AI.”

For example, he pointed out, an AI system can pass the bar exam to become a lawyer, but it can’t learn to load a dishwasher, something a 10-year-old can master in 10 minutes.

“What that tells you [is that] we are missing something really big … to reach not just human-level intelligence, but even dog intelligence,” LeCun said. Meta is working on training AI on video, which he said is far more complicated than text: “We have no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog-level or cat-level [intelligence].”

Meanwhile, Airbnb CEO Brian Chesky says he isn’t worried about AI taking jobs. He believes AI will help create more startup entrepreneurs, because it saves a lot of the time and money spent on coding work, and you no longer need to be a computer scientist to write code. Here’s an excerpt of Chesky’s remarks, as reported by CNBC:

AI is making Airbnb’s software engineers more efficient, Chesky said, estimating that within the next six months tools like ChatGPT could handle 30% of their day-to-day tasks. That doesn’t mean those engineers become redundant; he argued that the time saved could let them focus on more challenging, more individualized projects.

And computer scientists aren’t the only potential beneficiaries, the Airbnb CEO said: people will be able to have AI build what they imagine, no coding language required.

“I think this is going to create millions of startups … it’s going to be a boon for entrepreneurship,” Chesky said. “Basically, anyone will be able to do the kinds of things that only software engineers could do five years ago.”

The downside for all those software engineers: as Elon Musk said in May, it can be hard to find your job fulfilling “if AI can do your job better than you.”





