The risks of letting AI plan your next trip

According to a 2024 survey, 37% of respondents reported using AI to help plan their trips, while about 33% said the AI-generated recommendations contained false information.

These problems stem from the way AI generates its answers. According to Rayid Ghani, a professor of machine learning at Carnegie Mellon University, programs like ChatGPT may seem to give you reasonable, useful advice.

“It doesn't know the difference between travel advice, directions and recipes,” said Ghani. “It just knows words. So it keeps spitting out words that make whatever it's telling you sound realistic. That's where a lot of the underlying problems come from.”

Large language models like ChatGPT work by analyzing large collections of text and stringing together words and phrases that are statistically plausible responses. Sometimes that yields completely accurate information. Other times you get what AI experts call “hallucinations”: the tools simply make things up. But AI programs present hallucinations and factual responses identically, so users often struggle to tell fact from fiction.
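To make the point concrete, here is a deliberately tiny sketch (not ChatGPT's actual architecture, and the training sentences are invented for illustration): a bigram model that only tracks which word tends to follow which. It can emit fluent-sounding sequences, but it has no notion of whether what it says is true — only of what is statistically likely.

```python
import random

# Toy "training corpus" (made up for illustration).
corpus = (
    "the sacred canyon is a beautiful hike . "
    "the hidden valley is a scenic trail . "
    "the sacred valley is a famous destination ."
).split()

# Count which word follows which: the entire "knowledge" of this model.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a plausible-sounding word sequence from the bigram counts.

    Every step picks a word that genuinely followed the previous one
    somewhere in the corpus -- so the output always *sounds* real,
    even when the resulting sentence describes a place that doesn't exist.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Because the model can splice "the sacred" onto "valley is a scenic trail", it can confidently describe a "sacred scenic trail" no sentence in its corpus ever asserted — a miniature version of how statistically fluent text can drift from fact.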

In the case of “The Sacred Canyon of Humantei,” Ghani believes the AI program was merely combining words it associated with the region. Similarly, analyzing all that data does not mean a tool like ChatGPT has any useful understanding of the physical world. Mistake a strenuous route for a leisurely city stroll, and you could find yourself scaling a mountainside at 4,000 metres. And that's before outright misinformation even comes into play.

