Your use of AI is breaking my brain.

A few years ago, while I was covering the rise of AI slop on Facebook, I asked friends and family whether they thought their timelines were being fed AI spam, and if so, to send me examples. A few of them responded, sending me an apparently AI-generated sci-fi landscape, Shrimp Jesus, and lonely, starving children begging for sympathy. But some of my friends also sent me images they thought were AI but weren’t. Their vigilance has reached the point where they think it’s safer to dismiss human-made art and photographs as AI than to risk being fooled by them.

Browsing the internet today, consuming any kind of content, means being subject to all kinds of AI attacks. People think fake things are real and real things are fake. Much has been written about “AI psychosis,” a nonspecific and unscientific diagnosis given to people who have lost themselves in AI. Not much has been said about the cognitive load that other people’s AI use places on the rest of us, and about the insidious nature of having to navigate an internet and a world where lazy AI has permeated everything. Our brains now perform countless calculations each day: Is this AI? Do I care whether it’s AI or not? Why does this sound, look, or read so strangely? Does this person always write like this? Is this actually a human?

I see people being conditioned to expect and ignore AI content. It’s in the Google “AI Overview” that famously told us to put glue on pizza, in engagement-baiting LinkedIn posts, and across your Facebook and Instagram feeds. But more and more, it feels like it’s everywhere, coming from all directions, and completely unavoidable. It’s not just that I’m averse to AI-assisted content, or that I don’t want to be fooled by it. It’s that the whole experience is incredibly unsettling, because my brain has been conscripted as the AI police. You spend your day reading, watching, or listening to something, and then suddenly you realize that something is very off. Simply put, it drives me crazy.

An example: Last week, I was listening to an episode of Everyone’s Talkin’ Money, a podcast I’ve listened to on and off for years, for an episode about taxes, in a desperate attempt to avoid yet more takes on the White House Correspondents’ Dinner shooting (yay). This podcast has been running for years, has a human host named Shari Rush, and has hundreds of episodes. Rush began reading the introductory script: “The change I want you to make today, and this is the change that changes everything, is to start seeing your tax return as information, and not as a bill or a badge of shame.” The script went on and on like this, AI-sounding metaphor after AI-sounding metaphor. My brain shut down and stopped paying attention, and I began to wonder whether Rush had used AI just for the intro script. What about the research? Had she edited the script at all? I turned off the podcast.

Later that day, I was scrolling through the Orioles Hangout forum, a small community of die-hard Baltimore Orioles fans where I’ve lurked for decades. Until recently, it was one of the few places on the internet where you could safely assume nothing was AI. That’s no longer the case. The site’s administrator started using AI to analyze players’ performance and to help write some of his posts. To his credit, he explains how he uses AI and prefaces these posts as AI-assisted analysis. Some of them are interesting. But now, when I browse the forum almost every day, I see discussions between posters who have been there for years that feel either too generic or strangely hollow. A recent post discussing the return timeline for injured players suggested a ridiculously long recovery time. One poster pointed this out, saying position players don’t take the quoted 10–18 months to recover. The original poster replied, “You’re right. 10–18 months was the answer the AI came up with… Consider this a bit of a cautionary tale about trusting AI, and another about the benefits of asking for real medical research on questions like this.” Every day I scroll through the forum, I see people pasting in ChatGPT chats, hooking things up to Gemini, and copy-pasting its answers for other people to see. AI is creeping, inevitably, into a 30-year-old human community for talking about sports.

Of course, I’m not the only one. A friend of mine has been sending me screenshots of texts she received from someone she started dating, which she suspects he wrote with ChatGPT. I’ve received apparently AI-generated apologies and excuses from people trying to get out of social engagements. I once attended a wedding where the speech felt like it was partially generated by AI.

A recent Pew poll showed that people think it’s important to be able to tell whether an image, video, or text was generated by AI, created with the assistance of AI, or made by a human. It also found that a majority of people do not believe they can tell the difference between works created by AI and works created by humans. Studies have repeatedly shown that humans judge AI-generated art and writing more harshly than human work. A study in the Journal of Experimental Psychology found that people’s negative reaction to text they know or perceive to be AI-generated is “very difficult to mitigate” and “very persistent over the study period, across different metrics, contexts, and different types of written content.” Simply put, I’m not the only one who hates AI writing or finds it grating. Even when AI writing is “better,” it often feels bland, weird, and formulaic. As author Eve Fairbanks wrote in a thread the other day: “What matters for AI is not the rhythm, wording, or factual errors; it is that the problems with *all these elements* exist equally and simultaneously.”
