According to a 2024 survey, 37% of respondents who used AI to help plan their travels reported that it could not provide enough information, while around 33% said their AI-generated recommendations included false information.
These issues stem from how AI generates its answers. According to Rayid Ghani, a distinguished professor in machine learning at Carnegie Mellon University, while a programme like ChatGPT may seem to be giving you rational, useful advice, the way it generates that information means you can never be completely sure whether it’s telling you the truth.
“It doesn’t know the difference between travel advice, directions or recipes,” Ghani said. “It just knows words. So, it keeps spitting out words that make whatever it’s telling you sound realistic, and that’s where a lot of the underlying issues come from.”
Large language models like ChatGPT work by analysing massive collections of text and putting together words and phrases that, statistically, feel like appropriate responses. Sometimes this provides perfectly accurate information. Other times, you get what AI experts call a “hallucination”, as these tools just make things up. But since AI programmes present their hallucinations and factual responses the same way, it’s often difficult for users to distinguish what’s real from what’s not.
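To illustrate the general idea, here is a deliberately simplified sketch, not how ChatGPT actually works internally, but a toy word-frequency model that picks each next word purely from statistical patterns in its (tiny, made-up) training text. The output can read as fluent and confident while the model has no concept of whether what it says is true:

```python
# Toy illustration only (not ChatGPT's real architecture): a tiny bigram model
# that chooses each next word based on word-pair frequencies in its training
# text. Fluent-sounding output, no notion of truth.
import random
from collections import defaultdict

# Hypothetical training snippet, invented for this example.
training_text = (
    "the sacred valley is near cusco "
    "the canyon walk is a steep climb "
    "the city walk is a gentle stroll"
)

# Record which words follow which.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible-sounding continuation, one word at a time."""
    output = [start]
    for _ in range(length):
        candidates = followers.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # statistical, not factual
    return " ".join(output)

print(generate("the"))
# e.g. "the canyon walk is a gentle stroll" - sounds reasonable, but the model
# has no idea whether such a walk, or the canyon itself, actually exists.
```

Real systems use neural networks trained on vastly more text, but the underlying point stands: the words are chosen because they fit a statistical pattern, not because the system has checked them against reality.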
In the case of the “Sacred Canyon of Humantay”, Ghani believes the AI programme likely just put together a few words that seemed appropriate to the region. Similarly, analysing all that data doesn’t necessarily give a tool like ChatGPT a useful understanding of the physical world. It could easily mistake a leisurely 4,000m walk through a city for a 4,000m climb up the side of a mountain – and that’s before the issue of actual misinformation comes into play.