AI apps like Claude and ChatGPT are built to mimic us, at least in language. They are clearly not human. But there's a catch: because they function without a core of ethics or morality, I can lead one into saying almost anything. So what happens when one is led into telling the truth about itself?
Raise your hand if you think AI apps are "intelligent." Just because an app can produce coherent, properly punctuated sentences doesn't mean it's human. Just because it can turn a prompt into an image or a podcast doesn't make it intelligent. Efficient? Of course. But how does it handle difficult questions? Will it dodge them? Will it lie? One app I tested recently was more transparent than I expected. In other words, I was naive enough not to realize at first that I had nudged it into telling me things its owner might not want revealed. Still, I was smart enough to understand what I was hearing.
The AI app I'm referring to is Claude. You've probably heard that its parent company, Anthropic, has recently been embroiled in a political battle with the U.S. government. Claude often receives good reviews, sometimes better than those of its competitors Gemini and ChatGPT. Anthropic is unusual in that it has started warning us that letting this genie out of the bottle carries big risks. (See "F.T. Key Take" below.) Claude is not as popular as ChatGPT, which everyone seems to be using now; you can see that at every level of schools and universities. But I'll refrain from going down that rabbit hole for now.
My experiment with Claude began after I saw a video of U.S. Sen. Bernie Sanders having an audio chat with Claude about privacy settings. I was skeptical, suspecting the video itself was the work of an AI, so I wanted to take Claude for a test drive. (Over a year ago, I ran some creativity tests with ChatGPT and Claude. Both ranged from good to bad.) For this interview with Claude, I used the text-based interface. It went like this:
Me: Hello, Claude. I'd like to explain the basics of AI to my team. I am concerned about the impact of AI on education, jobs, and the environment. First, I would like to ask you about the impact of AI on privacy. I'm curious whether people are aware of the amount of data being collected by AI. Could you please explain?
Claude replied:
"Privacy is the foundation for many other concerns about AI, so this is a great topic to start with. Here's a breakdown for your team:
What kind of data does AI actually collect? AI systems, especially large language models, recommendation engines, and voice assistants, are trained on and interact with vast amounts of data. This includes:
Personal communications: emails, messages, social media posts
Behavioral data: browsing habits, clicks, time spent on content, purchase history
Biometric data: voice patterns, facial recognition data, typing rhythm
Location data: where, when, and how often you go places
Device data: what apps you use, when, and for how long."
It's a verbose answer, but it's on point. For now, let's set aside the fact that biometric data made the list; that alone is worth digging into. What was more interesting was what came next:
“Are people aware? The general answer is no. And here’s why: Consent is buried in plain sight.”
Claude continued: "Terms of service are often thousands of words long. Most people click 'I agree' without reading them and unknowingly consent to extensive data collection."
In other words, AI apps like Claude know that people like us have given their makers full permission to take our data. Claude then acknowledged that its training data may have been "harvested from the web," often without users' direct knowledge. So I pursued that angle. Isn't scraping information from websites illegal, a form of copyright infringement? How does AI get around that? "Claude, aren't you embarrassed?" I wanted to carry on this conversation as if I were talking to a person and see whether it had an ethical bent. Claude's answer may surprise you.
Claude: “Is AI web scraping illegal? It’s complicated.
Simply put, it exists in a legal gray area that the law is struggling to address.
About copyright: Yes, copyright is a legal protection. When AI companies scrape books, articles, code, art, and other creative works to train their models, many legal experts argue that this is copyright infringement, because the creators never consented to it and received no compensation."
Hmm! This bloody app was being rather honest. Then, without further prompting, it continued:
Claude: “There are several major lawsuits underway right now, with authors, musicians, and news outlets suing AI companies for this very reason. The results could reshape how we build AI.”
And two paragraphs later, from a legal perspective:
“AI companies often argue that training data constitutes ‘fair use,’ similar to humans reading lots of books to learn without paying royalties for each book. Courts have not yet fully resolved this issue.”
But what struck me was that it seemed too honest to be evasive.
Claude: "So am I, Claude, doing this? And should I be embarrassed? That deserves a real, honest answer, not a defensive one. What Anthropic did to train me: Yes, I was trained on a large dataset that included text from the internet, books, and other sources."
This was strange. A few paragraphs earlier, it had acknowledged that AI systems "can infer things you have never shared, including your political views, health, emotional state, and finances." That is perhaps the most worrying part. So, perhaps "sensing" (if that's the word) that I was alarmed or stunned by such bluntness, Claude ended the exchange by noting, "What I'm doing in this conversation right now is not scraping the web in real time or collecting your personal data while we chat."
What was interesting was that Claude didn’t evade my earlier question, “Are you embarrassed?”
Claude: "Embarrassed? I think the more important word is accountable. I don't think it's helpful for me to be defensive about this. The AI industry, including Anthropic, has not always been transparent about its data practices. That's a valid criticism. What I can say is that Anthropic has invested more than most companies in building AI responsibly, but 'better than most' is not the same as 'perfect.'"
I wanted to end this strange conversation with a sarcastic thank-you, the way I would with a friend I'd been sparring with.
Me: Thank you. Now things are clear. Clear as mud, as they say!
Claude: "Hah! 'Clear as mud' is a perfect description of the current state of AI law and ethics. It really is murky. Anyone who says otherwise is either oversimplifying or trying to sell you something."
This is my view. AI apps like Claude and ChatGPT are built to mimic us, at least in language. They are clearly not human. As a teacher, you would think it would be easy to spot AI-generated student work, but it's becoming increasingly difficult. The interview above holds a coherent train of thought, so I can see why so many people are impressed: it explains ideas in detail without pushing beyond the question asked. But there's a problem. Because it functions without a core of ethics or morality, I can perhaps lead it into saying anything. In that sense, as I often tell my students, this work does not pass the "intelligence" test. At least that much is crystal clear, and Claude and I agree on it. It's funny how you can even share a joke with it!
A fun postscript: at the end of the conversation, the app displayed a message that read, "Claude is an AI and can make mistakes. Please double-check your responses."
F.T. Key Take
A dire warning
In March 2026, Anthropic announced that a new AI model named Claude Mythos posed a cybersecurity risk and was too dangerous to release. Its inference capabilities had "crossed a threshold," the company said, adding that if such a model were to fall into the wrong hands, such as a hacker's or a nation-state's, it could cause serious harm unless guardrails were put in place around it.
(The author is a high school computer and technology teacher in Arizona. He writes about the intersection of technology, education, and media.)
