Danny Tobey told a roomful of North Texas business leaders at Convergence AI Dallas that the policy signals shaping artificial intelligence “are not just headlines.” They will influence “where the government will push, where it will invest, and how companies can design their own compliance systems to move as quickly as possible,” said Tobey, an attorney, medical doctor, and exited entrepreneur who has advised at least half the Fortune 10 on AI.
The lawyer chairs the AI and Data Analytics practice at DLA Piper, the global law firm with 90 offices in roughly 45 countries. He and his team built one of the first focused AI practices in the country about eight years ago—and he’s quick to note they wear many hats. “Most of us are also computer scientists, data scientists, and former software founders,” Tobey told the March 31 audience. “We very much love the technology. We are not anti-progress.”
But major companies are investing heavily in AI and not yet seeing the returns they want, Tobey said, while simultaneously “finding themselves opened up to risks of inaccuracy, lack of transparency, bias in data and other things that we’re starting to see ripple out into a highly active litigation environment.” He pointed to the first jury verdict on social media addiction, handed down just days before the event, as “the tip of the iceberg.”
Tobey gave the audience three things to listen for in the fireside chat to follow. First, what winning looks like in concrete terms: “innovation, capacity, infrastructure, readiness, and global versus local standard setting.” Second, how governance fits into that vision. “Not as bureaucracy, not as checklists and fig leaves, but as a real operating system” for scaling AI without scaling unknown risk. And third, what industry can do now, especially in Texas.
“Responsible AI is return on investment,” he said. “If I can leave you with a thought, RAI is ROI.”
With that, he handed the stage to his DLA Piper colleague Sean Fulton and Dean Ball, one of the key architects of America’s AI Action Plan.
The main stage at the Dallas Regional Chamber’s Convergence AI Dallas on March 31, the second day of the two-day conference and the day of Dean Ball’s fireside chat on America’s AI Action Plan. [Photo: Sandra Louz/DRC]
America’s AI Action Plan is a to-do list
Sean Fulton opened the chat by introducing Ball as the “primary designer” of the federal strategy. Released in July 2025, the 28-page document outlines the Trump administration’s plan for maintaining U.S. leadership in artificial intelligence. Fulton didn’t waste time. “We only have 30 minutes for a topic that we could probably talk for days about,” he said. “So we’ll just jump right in.”
Fulton asked where the action plan stands 12 months after release and where it’s headed in the next 12.
Ball was the plan’s primary staff drafter during his time as senior policy advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy. Now a senior fellow at the Foundation for American Innovation, he said he’s “actually quite on the upside, happy with the way that the implementation of the action plan is going.” He pointed to the Export Promotion Program and the adoption of AI in government as bright spots.
Fulton asked what was left on the cutting room floor. Ball said that if his team had had another month or two, they would have gone deeper into AI adoption in heavily regulated industries like financial services and healthcare, which he believes could be transformed by it.
The plan was designed to be different from other government AI strategies, which tend to be, in his words, “very fluffy and airy and high level.” He described it as a credible “to-do list for the federal government to carry out tasks in the near term on behalf of the American people”—within existing statutory authority and existing budgets.
Solving a patchwork problem
In his opening, Tobey “level set” on the AI “state of play”: companies are operating, “like it or not, in a world of global regulation.”
“Most of our companies are now multinational,” he told the audience. “Data does not stop at borders, yet each sovereign nation oftentimes has its own data protection regime and increasingly, its own artificial intelligence regulatory regime.”
Tobey pointed to the EU AI Act as the prime example. “Much like for those of you who are familiar with GDPR, the prior European privacy law, the AI Act in Europe is extraterritorial,” he said. “It purports to cover any company anywhere in the world, no matter where their models and their data are hosted, if the outputs of those models impact citizens in the European Union.” He called the regulatory scope “massive” and noted that penalties at the highest level can reach 7% of a company’s global annual revenue. “Quite extraordinary,” he said.
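To make that ceiling concrete, here is a quick back-of-the-envelope calculation (the revenue figure below is purely hypothetical, not drawn from any real company):

```python
# Back-of-the-envelope math on the EU AI Act's top penalty tier of 7% of
# global annual revenue, as cited by Tobey. The revenue figure is
# illustrative only.
global_annual_revenue = 10_000_000_000  # $10B, hypothetical multinational
max_penalty_rate = 0.07                 # 7% top penalty tier
print(f"Maximum exposure: ${global_annual_revenue * max_penalty_rate:,.0f}")
# Maximum exposure: $700,000,000
```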
The U.S. is seeking “a centralized approach that avoids a patchwork of state laws and allows an open and innovation-focused framework at the top,” Tobey said, with rules around frontier risks like national security and child safety but without burdening companies with heavy regulation.
Fulton asked Ball how the newly released national policy framework fits into the overall action plan.
Ball said one of the leading items of the action plan addresses the issue of state preemption and onerous state AI regulations. “The president wants a national framework for AI. He doesn’t want a state-by-state patchwork.” When President Trump announced the plan, Ball said, that issue “was front and center for him.”
The issue of states “racing ahead to regulate things is a real one,” Ball said, and he pointed to Texas as an example.
What emerged was the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, which took effect January 1, 2026. The law is designed to both foster innovation and encourage private industry investment in the state while protecting individual rights. It establishes regulatory sandboxes—environments where businesses can test AI systems with limited legal liability—along with safe harbors designed to promote innovation rather than just restrict it.
Texas “really did chart a path with regulatory sandboxes, safe harbors, benefits to promote innovation and really a narrower view of how the government can help regulate this,” Tobey said in his opening. TRAIGA stands apart from other state laws that tend to mirror the EU approach.
“Many people think the Texas law could provide one potential framework for national legislation,” he noted.
But it was a process. Ball said he was “dismayed a year ago or so” when the state was considering an earlier version—one he described as “a very, very aggressive, algorithmic discrimination bill that created a centralized regulator.” That bill got walked back. “And so we got a much, I think, lighter touch version in the end,” he said.
Ball’s eye is on the big picture beyond any one state. The U.S. action plan addresses the patchwork itself, he said, and the National Policy Framework for Artificial Intelligence, released by the White House on March 20, was “a really important part of sort of fulfilling that aspect of the action plan.” The framework is a set of legislative recommendations to Congress and a direct follow-up to President Trump’s December 2025 executive order, “Ensuring a National Policy Framework for Artificial Intelligence.”
Still, Ball said, “a compliance patchwork that creates a real maze for deploying firms and for AI developers is not good for anybody, and so that worries me quite a bit.”
The North Star
Fulton followed with a question he hears frequently in his practice: “What am I supposed to do with this patchwork?” referring to clients contending with the EU AI Act and an array of states trying to do their own thing in a fragmented landscape. “Certain states can’t even pass a comprehensive bill, but have these kind of little side bills, kind of addressing various things.” He asked Ball for a “North Star”—something “that’s not going to go away, regardless of what the regulation is.”
Ball admitted the question is a tough one. But “number one,” he said, “if you’re using AI to automate an existing process, you want to make an effort to quantify the sort of current pre-AI level of reliability, level of safety, whatever metric it is you care about.”
If you’re automating a process end-to-end, “we should really want AI to be much better.” Exactly how much better it needs to be depends on the use case, he said.
But the bar should be high. Self-driving cars, while not the only example, are a great one, he said. “Self-driving cars should be like an order of magnitude better than human beings. That’s technological progress. We shouldn’t settle for self-driving cars that are as safe as human drivers. We should settle for them only when they are much safer than human drivers.” The same standard, he said, “is probably true for a lot of automated business processes.”
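Read as a concrete decision rule, Ball’s framing comes down to two steps: quantify the pre-AI baseline for the process, then require the AI system to clear that baseline by a deliberate margin. Here is a minimal sketch of that rule (the metric, the numbers, and the 10x margin below are illustrative assumptions, not figures from the action plan):

```python
# Illustrative sketch of Ball's "North Star": quantify the pre-AI baseline,
# then require the AI system to beat it by an explicit margin.
# All metric names and figures here are hypothetical.

def clears_the_bar(baseline_error_rate: float,
                   ai_error_rate: float,
                   required_improvement: float) -> bool:
    """True if the AI's error rate beats the pre-AI baseline by at least
    the required factor (10.0 = "an order of magnitude better")."""
    return ai_error_rate <= baseline_error_rate / required_improvement

# Hypothetical numbers: humans at 1 serious error per 100,000 operations,
# an AI system at 1 per 2,000,000, judged against a 10x bar.
print(clears_the_bar(1 / 100_000, 1 / 2_000_000, required_improvement=10.0))
# True: the system is 20x better than baseline, clearing the 10x bar
```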
Liability, common law, and the question AI forces us to ask
Fulton then turned to the allocation of liability, asking where it falls as AI integrates into organizations and product stacks.
Ball said he’s not a lawyer—“I’m one of those people who’s, you know, I know just enough about the law to be dangerous”—but he loves common law. “It’s one of the most powerful incentives we have in our society,” he said. “This notion that a person who causes harm to another must be compelled to internalize that negative externality is an extremely important one. And it’s amazing how this body of law has accumulated.”
He described the evolution of his thinking. “I used to be much more of like, I want liability shields as much as I can,” he said. Then, as he got into AI policy, he realized that one area where common law is “really messy right now” is the intersection of the First Amendment and common law liability. Part of the problem, he said, is Section 230. “One of the downsides to this kind of like binary liability shield is that you can’t litigate the interesting things, which is how we accumulate knowledge.”
Ball said he believes Section 230 protections likely do not apply to frontier AI companies—”and I think that’s probably, on the whole, a good thing.” He paused and turned to Fulton. “I’d be actually curious for your thoughts about that,” he said. “Maybe I’m wrong.”
But the flip side of that, he said, is the social media addiction case against Meta and YouTube that Tobey had referenced—the one that produced the first jury verdict on social media addiction just days before the event. “You can see that going in a very bad direction for AI, where juries are second-guessing the algorithmic design of the transformer, or of neural networks or multi-layer perceptrons or some such,” Ball said. “That seems like it could be quite deeply problematic.”
Deployer-side liability and a backstage scenario
With respect to frontier AI and governance, Ball’s sense is that “there are going to be some obligations that end up falling on the developer.” Legal scholars are drawn to that question, he added, because software liability makes for an interesting intellectual problem.
What’s “actually profoundly more interesting and much less talked about … for lack of a better word … is deployer-side liability,” Ball said. “In other words, what are the characteristics of the responsible use of AI?”
It’s worth considering “at the firm level, sort of a business that’s adopting AI,” but also “at the level of the individual, just me using AI agents for some purpose,” he added. “We have a pretty clear sense of what responsible driving is. If you’re on your phone distracted, we kind of all have a sense that that’s not responsible driving. Will we have similar kind of socially constructed notions of what is responsible AI use with time, and will that plug into the common law system in interesting ways? I kind of hope so.”
Fulton put “a finer point on the liability question” with a scenario they’d discussed backstage. Consider a doctor interpreting an MRI who comes to one conclusion and an AI that comes to another. The doctor, who’s been practicing for 20 or 30 years, overrides the AI—and ends up being wrong. “How do you deal with that kind of difference in information expertise versus technology?” Fulton asked.
Ball called it “another thing about common law that doesn’t get talked about that much.” His view: “If a system is quantifiably superhuman in its reliability and safety characteristics, if an AI system is just reliably better than humans, then it may well be de facto negligent not to use it.” Overriding that system, he acknowledged, “gets complicated for various reasons.”
Where the market can’t self-correct
Fulton noted that courts are ill-equipped to handle AI cases due to information asymmetry and asked Ball what actual transparency looks like when it comes to policymakers and frontier model makers.
“The transparency that I have focused the most on is actually probably one that is not directly relevant to many people in this audience,” Ball said—transparency around how frontier AI developers measure and mitigate catastrophic risk.
“We will see models this year that have staggering cyber capabilities,” he said, “that are better than many human cyber security experts at finding vulnerabilities in critical software. And they might well be better than all human experts at that at some point in the relatively near future.” He noted that people also talk about bio risk—”the ability to design novel pathogens using AI systems and all kinds of other threats that will manifest themselves.”
It’s the area he focuses on most, Ball said, because it connects directly to the liability conversation. “Catastrophic tail risk is the thing you shouldn’t expect a common law liability regime or a market-based incentive to solve,” he said.
He pointed to California’s SB 53, the Transparency in Frontier Artificial Intelligence Act, as an example of a legislative approach. The law, which took effect January 1, 2026, requires developers of the most powerful AI models to publicly disclose how they test for and mitigate catastrophic risk. Ball broke with some of his usual allies in supporting it. “Unlike a lot of people on my side of the aisle, I’m in fact supportive of” the bill, he said.
The diffusion challenge
But SB 53 is aimed at the builders. For most of the companies in the room—the ones Tobey described as “not yet seeing the returns that they want”—a different challenge is playing out: How do you adopt AI in a way that actually delivers?
Fulton asked Ball what practical guidance he’d give CEOs, general counsel, and CTOs trying to draw conclusions from the action plan. And does that advice differ for organizations deploying AI versus developing it?
The action plan is called “Winning the Race.” Ball said the problem is how people are defining that race. “I totally take responsibility for this, where I just think we confuse people,” he said.
A key theme throughout the action plan is what policymakers call “diffusion”—getting AI out of the lab and into the economy. But Ball said that idea has narrowed in practice. In governance conversations, diffusion “has actually kind of turned out” to mean getting older, open-source models into familiar use cases—”like getting DeepSeek or some open source model into medical diagnostics or something.”
“There are a lot of people who think that’s what the race is about,” he said.
Ball doesn’t. “The actual really interesting diffusion challenge that we face is, how do you integrate AI—and like, advanced AI, not like sub-frontier, but really frontier AI—how is it going to restructure organizations?” he said. “How will institutions be fundamentally upended by this technology?”
Tobey earlier described an aspect of that restructuring. “We’re moving from an era of chatbot AI, where there is always a human at the receiving end of recommendations, to agentic and physical AI,” he told the audience—”a world where AI is increasingly automated, increasingly impacts the real world and takes actions and makes decisions on behalf of humans, and does so in ways that don’t always stop for human intervention, permission and understanding.”
Companies that built AI governance programs several years ago are now calling him back, he said, because “the old human-in-the-loop rules really don’t work as well when you’re looking at ubiquitous, automated AI agents that are acting 24/7 as digital workers.”
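One way to picture the shift Tobey describes: chatbot-era governance puts a human sign-off in front of every action, while agentic systems need a policy layer that decides which actions can proceed on their own and which get escalated. The sketch below is a hypothetical illustration of that contrast; none of the names or thresholds come from DLA Piper or the speakers:

```python
# Hypothetical contrast between "chatbot era" governance (a human approves
# every action) and one possible pattern for agentic systems, where a
# policy layer auto-approves low-risk actions and escalates the rest.
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high-stakes); hypothetical scale

def human_in_the_loop(action: AgentAction) -> bool:
    """Chatbot-era rule: nothing proceeds without a person signing off."""
    return input(f"Approve '{action.description}'? [y/n] ").strip().lower() == "y"

def policy_gate(action: AgentAction, escalation_threshold: float = 0.7) -> bool:
    """Agentic-era sketch: auto-approve routine actions, escalate the rest.
    A 24/7 digital worker can't wait on a person for every step, so human
    review is reserved for actions above a risk threshold."""
    if action.risk_score < escalation_threshold:
        return True  # proceeds without human intervention
    return human_in_the_loop(action)  # high-stakes actions still escalate
```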
Infrastructure bottlenecks
Fulton noted time was running short and asked Ball what he sees as the real-world bottlenecks to AI implementation in the next 18 months, the biggest risks coming forward, and how the government should address them.
“It’s not going to be surprising to anyone here—probably, we are in Texas after all—but it’s energy,” Ball said. “The grid and the infrastructure side of this.” He added that one area “that’s somewhat under-appreciated is the skilled labor that we’re going to need to build all of that infrastructure, because that, in and of itself, is a big problem.”
He pointed to an example close to home. “The Stargate Project in Abilene, Texas—I think they’re importing skilled laborers from like 48 states or something,” Ball said. “Enormous operations with real shortages of the labor that we need. I think that’s a really, really big challenge.”
But he was clear that these are not dead ends. “We’re going to get through the energy bottleneck,” he said. “We’re going to get through the compute bottleneck, which will also be a real thing.” Even the state-by-state patchwork, Ball said, won’t stop AI’s diffusion—”AI is a really important macro invention.”
What keeps Dean Ball up at night
Ball is confident the practical problems will get solved: energy, labor, compute, the patchwork.
“The thing that keeps me up at night is not any of those things,” he said. “It’s this question of—America feels, you know, we’re in our 250th year now, we feel like, to me, we feel like an old country. Like maybe middle-aged, right? We’re not as young and energetic and hungry as we once were.”
He wondered aloud whether we can absorb what’s coming. “Can our civilization handle the dynamism, or do we just kind of want to go more in a European direction, and just kind of want to, you know, hang out and have a picnic for the next 150 years or something,” he said. “I hope not. I hope we remain a hungry country.”
He spoke to the room: “I love coming to places like Texas, because I think Texas still has that much more than, for example, the East Coast where I live.”
Fulton, with a couple of minutes left, asked Ball for his “sage advice” to the Convergence audience—people excited about AI and its future.
“Be prepared for quite a lot of change,” Ball said. “Dynamism means saying hello to new things, and it also means saying goodbye to some things that we maybe don’t want to say goodbye to.”
He said he’s “quite convinced” the diffusion of AI is going to be for the better. But he was candid about what comes with it. “The diffusion of artificial intelligence is going to mean humanity takes its hand off the wheel a little bit, of processes and mechanisms that we’re not used to not having our hands on the wheel for,” he said. “I also would be lying if I said that I didn’t feel some degree of melancholy about that.”