There’s a lot to dislike about AI. But what if there was a way to use it consciously?



Not to brag, but for the past year I’ve been that annoying guy at parties talking about AI. When I tell people I’m working on a newsletter, I get the usual raised eyebrows and skepticism.

But wait! Don’t leave yet. This isn’t one of those ads claiming an AI tool will replace your friends or trick your boss into thinking you’ve been up all night working on a presentation. Instead, I focus on how to use AI without dehumanizing myself or others.

Like most people, I hate the intrusiveness of AI and the threat it poses to our privacy, mental capacity, and work. But I think of AI in much the same way as the internet.

Yes, unfortunately the internet has given us doomscrolling, data harvesting, clickbait, and your uncle’s Facebook posts about vaccines. But it also gave us digital maps, podcasts, niche blogs, Wikipedia, video calls, and, let’s not forget, the Guardian app.

Like any powerful tool, AI is being exploited for harmful ends, but that doesn’t mean we have to follow suit or tolerate it. It means we need to demand proper regulation and accountability from the companies that build it. Now is the time to insist on guardrails around privacy, environmental impact, and the spread of misinformation.

If you’re going to use AI, use it with your eyes open.

So where does that leave us? In our new free six-week newsletter course, AI for the People, we explore useful ways to leverage AI while staying alert and in control: at work, in the kitchen, at the gym, and beyond. And we do it with guardrails, set out in the four golden rules below.

But back to being annoying at parties.

Here’s what I tell skeptical acquaintances about how AI can actually help. I hate information asymmetry: think of how companies bury us in legalese so that we end up signing contracts we’ve never really read. Remember the arbitration clauses Disney and Uber used to try to stop people from suing?

So I pulled up the terms of service and legal agreements and had the AI explain them to me in plain English, highlighting the most concerning clauses.

I’ve also used AI to manage my chronic time blindness, study for my driving test, cook more adventurously, train more consistently, and, yes, learn to play The Lord of the Rings theme on my tin whistle.

It turns out that, in most cases, AI is no substitute for real humans. No big surprise there. But with AI as an assistant that helps me digest new information, speed up tasks, and build customized plans, my year was full of small, practical discoveries. I look forward to sharing them with you.

AI for the People is not about “10 prompts that will change your life” or letting a chatbot do the work for you. It’s about learning how AI can help you without giving up judgment.

As AI expert Ethan Mollick told me, “It’s just like any other tool: if you hand over all your skills and critical thinking to AI, you dull your skills and critical thinking.”

Many of these issues are not new. Italian author Umberto Eco was already grappling with misinformation in the early days of the web. “The problem with the Internet is that it gives you everything from reliable material to crazy material,” he told the New York Times in 2002. “The question then becomes, how do you differentiate?”

That question of how we learn to discern, adapt, and stay in control is the guiding philosophy behind AI for the People. Please join us.


Four golden rules for this series

AI is powerful and genuinely useful, but only if we approach it with intention. These are the principles we’ll work by:

1. You are the boss

You could tell the AI to do everything and then uncritically regurgitate its responses. But over time, that convenience comes at the cost of control.

Ethan Mollick, AI expert and bestselling author of Co-Intelligence, told me, “It’s just like any other tool, right? If you surrender all your skills and critical thinking to an AI, you dull your skills and critical thinking. If you’re trying to learn something, make sure the AI asks you questions instead of giving you answers.”

That’s why we’ll always treat AI as a smart collaborator or assistant, with you taking the lead.

2. Be your own fact checker

AI tools can get things wrong because of poor sourcing or outright hallucinations. One example: in 2024, Google’s AI search summaries advised people to add glue to their pizza, having mistaken a joke on Reddit for a genuine recipe tip.

The key is to treat AI information like any other information. “If it’s something that’s really important, you have to take the time to verify it,” Mollick says.

You can ask the AI tool to provide links to its sources, or you can upload a source yourself (such as a peer-reviewed study or official report) and ask the AI to respond based solely on what you provide.

3. Be informed and intentional

The Guardian has highlighted some of AI’s alarming environmental impacts, which can leave individual users unsure how, or whether, they should use it. The data is hard to pin down, but the bigger environmental issue is the rapid growth of AI infrastructure, the way AI is being passively integrated into digital services, and how quickly those services are expanding.

Everything we do online uses energy and water, whether it’s watching Netflix, sending emails, or making video calls. Some data suggests that using AI for simple tasks, while not orders of magnitude more expensive than regular web activity, can be more energy-intensive than basic search.

In this series, we stick to text-based prompts, which are less energy-intensive for the AI. That’s not to say we should all fire off 100 prompts a day: using AI responsibly means not running the dishwasher to wash a single fork, or taking a private jet to the supermarket.

4. Don’t share sensitive information

If you want to protect your privacy, and possibly your job, be careful what you share with AI tools. Anything you enter is sent to servers owned by the company and could be exposed in a data breach or legal claim. Many workplaces have strict policies on how AI may be used. And anything you share can be used to train models unless you opt out.


