AI tools like ChatGPT are everywhere now, and it's easy to see why. They can brainstorm ideas, draft documents, answer questions and even help with everyday planning. The more you use one, the more tempting it becomes to let it handle almost everything.
That doesn't mean it's always the right tool. ChatGPT is an LLM-based chatbot that's genuinely useful for a wide range of tasks, but it can be confidently wrong, inventing details and serving up outdated information. For casual use, that may not be a big deal, but when money (including your taxes), health or legal matters are involved, relying on it can cause bigger problems than it solves.
Knowing when not to use ChatGPT is just as important as knowing how to use it well. Below are 11 situations where relying on an AI chatbot can do more harm than good, so you can avoid the most common pitfalls.
(Disclosure: CNET's parent company Ziff Davis filed a lawsuit against ChatGPT maker OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
1. Diagnosing physical health problems
I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. Potential diagnoses can swing from dehydration and the flu to some kind of cancer. I had a lump on my chest and entered that information into ChatGPT. Lo and behold, it told me I might have cancer. In fact, I have a lipoma, which is not cancerous and occurs in about 1 in every 1,000 people. My licensed doctor told me that.
I'm not saying there are no good uses for ChatGPT when it comes to health. It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you walk in better prepared. That alone can make a doctor's visit feel less overwhelming. However, AI can't order labs or examine you, and it certainly doesn't carry malpractice insurance. Know its limits.
2. Taking care of your mental health
ChatGPT can certainly offer grounding techniques, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET's Corin Cesaric found it helpful for working through her grief, as long as she kept its limitations front of mind. But as someone with a very real, very human therapist, I can tell you that ChatGPT is at best a pale imitation and at worst incredibly risky.
ChatGPT has no lived experience, can't read body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal duties and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work, the hard, messy, human work, to real people who are trained to handle it properly. If you or someone you love is in crisis, dial 988 in the US or call your local hotline.
3. Making immediate safety decisions
If your carbon monoxide alarm starts chirping, don't open ChatGPT to ask whether you're in real danger. Go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew. In a crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of information you feed it, and in an emergency that may be too little, too late. Treat a chatbot as a post-incident explainer, never a first responder.
4. Getting a personalized financial or tax plan
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year and the latest rate changes, its guidance can already be outdated by the time you hit Enter.
I have friends who dump their 1099 totals into ChatGPT to put together a DIY return. A chatbot can't replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not an AI. Also, be aware that anything you share with an AI chatbot will likely become part of its training data, and that includes your income, your Social Security number and your bank routing information.
5. Processing confidential or regulated data
As a tech journalist, I see embargoes land in my inbox every day, but I've never considered tossing one of those press releases into ChatGPT for a summary or further explanation. That's because if I did, the text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.
The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade secret law. It applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn't immune to hackers or security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.
6. Doing anything illegal
This is obvious.
7. Academic misconduct
I'd be lying if I said I never cheated on an exam. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I struggled to memorize in AP Calculus. But with AI, the scale of modern cheating makes that look remarkably tame.
Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and having a license revoked are real risks. It's best to use ChatGPT as a study companion, not a ghostwriter. Besides, if ChatGPT does the work for you, you're only cheating yourself out of an education.
8. Monitoring information and breaking news
Since OpenAI rolled out ChatGPT search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask. However, it won't stream continuous updates on its own. Every refresh requires a new prompt, so when speed matters, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.
9. Gambling
I've actually had luck with ChatGPT, hitting a three-way parlay during the NCAA men's basketball championship, but I wouldn't recommend it to anyone. I've watched ChatGPT hallucinate player statistics, misreport injuries and get win-loss records wrong. I only cashed in because I double-checked every bet against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it to land that win.
10. Drafting a will or other legally binding contract
ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family law rules vary from state to state, and sometimes even county to county, so skipping a witness signature or omitting a notarization clause could get the entire document thrown out. Instead, let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn the checklist into a document that will stand up in court.
11. Making art
This isn't an objective truth, just my own opinion, but I don't think AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT to brainstorm new ideas and help with headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art. It's kind of gross.
