Since ChatGPT was announced in late November 2022, artificial intelligence (AI) tools in the form of generative AI have changed the way people surf the web. While AI tools have made many tasks much easier to perform, they have also raised concerns about data privacy and accuracy.
In an interview with The Indian Express, Dr. Shruti Patil, Director of the Symbiosis Artificial Intelligence Institute, spoke about how people can use AI safely in 2026.
Q: How can we safely incorporate AI into our lives?
Dr. Shruti Patil: The key is for people to understand what they can use AI for. For example, if you want to know what is happening around the world, or to find out whether a certain event has taken place and read the news about it, you can use AI.
If you have manual tasks that you perform repeatedly, you can gradually automate them with the help of AI. For example, suppose you are going on a trip and want to create an itinerary within a certain budget. People spend two or three days researching locations, directions, tourist attractions, temperatures, and more. With ChatGPT, you can do all this in just two minutes. Small tasks like these, which require some decision-making and are based on some research, can be automated.
Content generation and application generation are also areas where AI can be used very effectively. For example, if you want to design an invitation for an event, you don't need to consult a designer. You can simply use AI tools like Gemini and NotebookLM to design and instantly share your invitations.
Q: How should I protect my privacy when using AI tools?
Patil: It is important that people understand what information they can give to AI and what information they should withhold. You should never disclose confidential information about yourself or others, such as anything that could reveal your identity, your financial details, or your passwords. We all use general-purpose large language model products such as ChatGPT, which are trained on data from all over the world. Providing this type of information should therefore be avoided at all costs.
When making financial decisions about investments, you can ask AI about things like current stock trends, but you should also do your own homework and not trust it blindly. If you want to better understand what your doctor tells you during a consultation, you can use AI tools for explanations. But AI cannot replace doctors.
Q: AI is also prone to hallucinations (when AI tools produce information that sounds plausible but is false or inaccurate). How can people protect themselves from this?
Patil: In general, AI tools work well on single-page inputs. If a PDF has hundreds of pages, the AI will hallucinate. So while the free versions of AI models can be used for personal work, they should not be used for office work; paid versions of AI tools are available for that.
AI can hallucinate even on a single-page PDF, so it also depends on how important the data is. These tools are still learning, and as usage increases, the tasks they are given become increasingly complex.
Q: So do you think it's important to cross-check AI results, even for a small but important task?
Patil: Yes, of course. Currently, no tool provides accurate, perfect results. Sometimes you get the right answer, sometimes you don't. The results are inconsistent.
Q: Women are being targeted online using generative AI tools, with men editing themselves into photos and videos of women. What are the responsibilities of AI companies, as well as users, in this regard?
Patil: It is important that countries, rather than users, set AI policies, and all companies providing AI services must enforce them. Certain rules must be built in at the product level to ensure that such things are not allowed.
Even now, if you ask ChatGPT, for example, "When will you die?", it won't answer. Today, ChatGPT recognizes when users become emotionally attached to it, as many teenagers and even older adults do, and it interacts with them almost like a digital human.
India needs a very strong AI policy, especially when it comes to user data privacy, since these AI tools are trained on user data. Governments need to put guardrails in place that specify what types of data may be shared and what types are simply prohibited.
Alistair Augustine is an intern at The Indian Express
