Important points:
- Industry leaders said the growing use of AI agents in logistics raises risks around data security, governance and access that need to be addressed before deployment.
- Executives warned that poor data quality, unclear ownership, and weak interfaces could undermine model reliability and create an opening for fraudulent or malicious activity.
- Experts said companies need stronger oversight, vendor due diligence and employee training to ensure secure data processing and manage limitations and errors in AI systems.
Data security, governance, and system access continue to grow in importance as AI agents begin sending emails, fulfilling orders, and making pricing recommendations. Industry leaders said these issues should be tackled from the beginning.
While AI promises to significantly improve workflows, it also expands the channels through which sensitive information passes.
“For agent tools to work, organizations have to open up email, phone calls and voicemail,” said Eric Rempel, chief innovation officer at Redwood Logistics. “That raises questions of data provenance and governance, and it also creates a security exposure.”
Jonah McIntyre, chief product and technology officer at Trimble, said AI agents must be protected from unauthorized prompts.
“If you have an agent that can receive email, it's like a house with an unlocked door,” he said. “Anything can be thrown at it, and the more the world knows what that agent is capable of, the more things will be thrown at it.”
As the role of AI grows, so too will the importance of access control and monitoring.
“Someone with malicious intent could ask an agent to do something sinister,” McIntyre said. “You could just have an agent like this look at all the percentages and increase them by five points. There would be incredible corruption.”
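One common mitigation for the scenario McIntyre describes is to put a hard policy check between an agent's proposed action and the system that executes it. The sketch below is a hypothetical illustration, not any vendor's actual implementation; the threshold and function names are assumptions.

```python
# Hypothetical guardrail: an agent may propose rate changes, but changes
# beyond a policy limit require explicit human approval before execution.

MAX_RATE_CHANGE_PCT = 2.0  # assumed policy limit, for illustration only


def apply_rate_change(current_rate: float, proposed_rate: float,
                      approved_by_human: bool = False) -> float:
    """Accept an agent-proposed rate only if it stays within policy,
    or if a human has explicitly approved the exception."""
    change_pct = abs(proposed_rate - current_rate) / current_rate * 100
    if change_pct > MAX_RATE_CHANGE_PCT and not approved_by_human:
        raise PermissionError(
            f"Rate change of {change_pct:.1f}% exceeds the "
            f"{MAX_RATE_CHANGE_PCT}% limit; human approval required."
        )
    return proposed_rate
```

Under this design, the across-the-board five-point increase McIntyre warns about would be rejected automatically rather than silently executed.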
Before fleet and logistics companies can implement AI into their operations, they need clean, consistent, and reliable data as a foundation.
Levi Sorenson, AI strategy lead at technology and compliance firm Fleetworthy, said most fleets underestimate the magnitude of the challenge.
“There's a lot of data out there, but it's not all good data,” he said. “There needs to be a whole process of cleaning and promoting the data into a known good dataset so it can be used across the organization.”
Fleetworthy uses tiered datasets internally and pauses projects when a customer's data is too old or inconsistent to support reliable modeling.
“Hygiene isn't magic; it's math,” Sorenson said.
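The tier-promotion process Sorenson describes can be sketched as a filter that only admits records into the known-good set once they pass completeness, plausibility and freshness checks. The field names, rules and cutoff below are illustrative assumptions, not Fleetworthy's actual criteria.

```python
# Hypothetical sketch of promoting raw fleet records into a
# "known good" dataset tier. All fields and rules are assumptions.

from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # assumed staleness cutoff


def is_known_good(record: dict, today: date) -> bool:
    """Promote a record only if required fields are present,
    values are plausible, and the data is recent enough."""
    required = ("unit_id", "odometer_miles", "inspection_date")
    if any(record.get(field) is None for field in required):
        return False
    if record["odometer_miles"] < 0:
        return False
    return today - record["inspection_date"] <= MAX_AGE


def promote(raw_records: list[dict], today: date) -> list[dict]:
    """Return only the records that qualify for the known-good tier."""
    return [r for r in raw_records if is_known_good(r, today)]
```

Pausing a project when too few records survive promotion, as Fleetworthy reportedly does, falls out naturally from this structure: the size of the promoted set is a direct measure of whether the data can support reliable modeling.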
Mark El Khoury, CEO of technology-driven trucking company Aifleet, said shipment data often arrives incomplete or inconsistent, making modeling a challenge.
“We've built checks and balances to validate the information,” he said. “Does it make sense? Does it match what we know? Are the rates correct? It's very difficult to build a good model when the source data is not available.”
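The "checks and balances" El Khoury mentions can be thought of as a battery of sanity rules applied to each incoming load before it reaches a model. The sketch below is a hypothetical illustration of that idea; the fields, threshold and rate comparison are assumptions, not Aifleet's actual checks.

```python
# Hypothetical validation pass over incoming load data: flag records
# that are incomplete or inconsistent with what is already known.


def validate_load(load: dict, known_lane_rate: float) -> list[str]:
    """Return a list of problems; an empty list means the load passes."""
    problems = []
    if load.get("miles", 0) <= 0:
        problems.append("missing or non-positive mileage")
    if load.get("rate") is None:
        problems.append("rate missing from source data")
    elif abs(load["rate"] - known_lane_rate) / known_lane_rate > 0.5:
        # "Does it match what we know?" — compare against a known rate.
        problems.append("rate deviates more than 50% from the known lane rate")
    return problems
```

Routing flagged loads to a person instead of a model is one way to keep bad source data from quietly degrading predictions.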
El Khoury said that many of the AI failures he observed before founding Aifleet were not caused by the models themselves, but by core operational data that was too inconsistent to support reliable modeling.
He added that significant governance risks come from software systems that assume operations will go according to plan, which often is not the case in trucking.
“Many algorithms ignore edge cases,” El Khoury said, adding that all AI systems used in trucking must “continuously change their decisions” to avoid locking fleets into unsafe or unrealistic assumptions.
Even if a system is secure, reliability can be compromised if the underlying data is incomplete or siloed.
NMFTA Chief Operating Officer Joe Orr said that to support interoperability across the industry, the National Motor Freight Transportation Association is working to develop a common application programming interface that will allow carriers, shippers, 3PLs and TMS providers to exchange data securely and consistently.
Data safety depends not only on what a company does internally, but also on its vendors.
Keith Peterson, NMFTA's vice president of operations, said buyers of AI tools need to understand how their data is stored, whether it's segregated or mixed with other customers' information, and whether it's used to train models.
“Will the data be publicly available or private?” he asked.
Additionally, data ownership must be clear. Vendor reliability is also part of the equation.
Ahmed Ebrahim, vice president of strategic alliances at McLeod Software, said third-party AI providers often require deep access to customer-specific data sets, increasing the importance of clearly defined roles and responsibilities for data processing and model behavior.
“The level of depth required to take a specific customer's data and train an AI model for that environment is amazing,” he said.
Data governance is as much about people as it is about data, so employees need to be trained on what they can and cannot share, Fleetworthy's Sorenson said.
“We want to make sure that no one is accidentally leaking customer lists or other information that we don't want made public,” he explained.
However, even the best-managed systems will make mistakes, and McLeod Software CEO Tom McLeod said users should remain cautious. “Don't assume that AI is absolute. It's important to know what AI can and cannot do,” he said.
