
The Southeastern Pennsylvania Transportation Authority piloted a new enforcement tool in Philadelphia in 2023. AI-equipped cameras were mounted on seven buses, and the results were immediate and dramatic: in just 70 days, the cameras flagged more than 36,000 cars blocking bus lanes.
The pilot's results gave the agency, known as SEPTA, valuable data on obstacles along bus routes and insight into the role technology can play in combating these problems.
In May 2025, SEPTA and the Philadelphia Parking Authority officially launched a citywide program. More than 150 buses and 38 trolleys are now equipped with AI systems that use computer vision to detect vehicles blocking bus lanes and scan their license plates. If the system flags a possible violation, a human reviewer checks it before a fine is issued: US$76 in Center City and US$51 elsewhere.
The deployment comes amid impending service cuts and fare hikes as SEPTA faces a $213 million budget shortfall.
I am a professor of information systems and academic director of the Center for Applied AI and Business Analytics at Drexel University's LeBow College of Business. The center's research focuses on how organizations use AI and what that means for trust, fairness and accountability.
In a recent survey by the center of 454 business leaders across industries including technology, finance, health care, manufacturing and government, we found that AI is often deployed faster than the governance needed to ensure it works fairly and transparently.
Our research shows that these governance and oversight gaps are particularly common in public-sector organizations.
That's why I believe it's important for SEPTA to carefully manage its AI enforcement system to earn public trust while minimizing risk.
Fairness and transparency
When cars block bus lanes, they clog traffic. The resulting delays can ruin riders' days, causing missed connections and late arrivals at work. That makes riders feel they can't rely on the transit system.
So if AI enforcement helps keep these lanes clear, it's a win: buses travel faster and commutes get shorter.
But here's the problem: if the system feels unfair or unreliable, that goodwill evaporates. Our study found that over 70% of the organizations surveyed do not fully trust their own data. In the context of public enforcement, whether by a transit agency or a parking authority, that's a warning sign.
Without reliable data, AI-issued tickets can turn efficiency into expensive mistakes: wrongly issued citations that require refunds, staff time lost correcting errors, and even legal challenges. That matters because people are most likely to follow the rules and accept penalties when the process is seen as accurate and transparent.
Another finding from our research caught my attention: only 28% of organizations report having a well-established AI governance model. A governance model provides the guardrails that keep AI systems trustworthy and aligned with human values.
When a private company misuses AI, that's troubling enough. But when a public agency like SEPTA scans a driver's license plate and sends them a ticket, the stakes are higher. Public enforcement carries legal authority and demands a higher standard of fairness and transparency.
The AI label effect
You might ask: "Isn't this ticketing system just like a red-light or speed camera?"
Technically, yes. The system detects rule violations, and humans review the evidence before a citation is issued.
But simply labeling a technology as AI can change how it is perceived. This is known as a framing effect.
Just calling a system AI-driven can make people trust it less. Whether the system is grading papers or screening job applicants, research shows that the exact same process draws more skepticism when AI is mentioned than when it is not. People hear "AI," assume a machine is making the judgment, and start looking for flaws. Even believing the AI is accurate does not close that trust gap.
That perception means public agencies need to pair AI-based enforcement with transparency, visible safeguards and easy ways to challenge mistakes. These measures build confidence in the system.
We've already seen what happens when AI-based enforcement systems malfunction, and how quickly trust can erode. In late 2024, AI cameras on New York City's Metropolitan Transportation Authority buses mistakenly issued thousands of parking tickets, including nearly 900 cases where drivers were actually parked legally.
Even when such errors are rare, they can undermine public confidence in the system.
Build trust in the system
The Organisation for Economic Co-operation and Development, which sets international AI policy standards adopted in dozens of countries, has found that people are most likely to accept AI-driven decisions when they understand how those decisions are made and have clear, accessible ways to challenge mistakes.
In short, AI enforcement tools should work for the public, not just on them. For SEPTA, that could mean:
– Publishing clear bus lane rules and exceptions, so people know what is and isn't allowed.
– Explaining the safeguards, such as the fact that every bus-camera violation is reviewed by Philadelphia Parking Authority staff before a ticket is issued.
– Providing a simple appeals process with administrative review and the right to further appeal.
– Sharing enforcement data, including how many violations are issued and how appeals are resolved.
These steps would demonstrate that the system is fair and accountable, and help shift it from feeling like a ticket machine to feeling like a trustworthy public service.
Murugan Anandarajan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
This commentary is republished from The Conversation. This material from the originating organization/author is of a point-in-time nature and may be edited for clarity, style and length. Mirage.news does not take institutional positions or sides, and all views, positions and conclusions expressed herein are solely those of the author.
