Texas charts a practical path for legal education in the age of artificial intelligence. Innovate carefully, teach ethics first, and remind all future lawyers that efficiency without integrity is never competence.
Generative AI is being introduced into legal practice faster than any recent technology, and law schools are already testing what it means to use these tools responsibly. The challenge for SMU's Dedman School of Law is not to move forward at full speed, but to teach students that technology should inform, not replace, professional judgment.
Like most law schools, we are learning to balance experimentation with rigor. The goal is not to train students as AI operators, but as evaluators: lawyers who can spot errors, biases, and overconfidence before they become malpractice.
Professors across the university now have access to ChatGPT Enterprise, enabling secure, university-wide experimentation. SMU Law recently joined the Harvey AI project, collaborating with other leading law schools and Harvey's AI legal research platform to study how generative tools can responsibly enhance legal education and practice.
This partnership will enable faculty and students to explore AI in realistic professional contexts, supported by privacy protections and technical training. SMU also requires faculty to adopt one of three AI usage options in every syllabus:
- Ban generative AI entirely
- Allow structured use with attribution and confidentiality requirements
- Create a custom policy for the class
This transparency has prompted meaningful discussions in every classroom about what responsible use looks like, a lesson future lawyers genuinely need.
SMU offers several courses that examine AI from multiple angles of legal practice and theory. “Artificial Intelligence and the Law” explores how regulatory and governance frameworks shape the development of AI systems.
Other courses focus on practical applications: how AI supports legal research and drafting, assists with pre-litigation evaluations, and reshapes discovery and due diligence.
The Legal Analysis, Writing, and Research program begins with traditional research, followed by guided AI integration during the spring practice-readiness event, where law firms demonstrate the use of AI for document review and transactional work. This structure emphasizes that technology enhances, rather than replaces, analytical skill.
The legal profession's own regulators are grappling with these same issues. In February 2025, the State Bar of Texas issued Ethics Opinion 705, comprehensive state-level guidance on attorneys' use of generative AI. The opinion outlines how existing disciplinary rules apply to this rapidly evolving technology.
- Competence (Rule 1.01): Lawyers must understand how AI works and maintain “technical competency” before using it. You don’t have to use AI, but you shouldn’t “unnecessarily back away” from tools that can save your clients time and money.
- Confidentiality (Rule 1.05): Lawyers must avoid disclosing client information to public systems or “self-learning” AI systems without safeguards or client consent.
- Supervision and candor (Rules 5.03, 3.03): Lawyers remain responsible for verifying AI-generated work product and may not blindly rely on or submit unverified output.
- Fees: Lawyers may bill for time spent reviewing and validating AI output, but not for time “saved” through the use of AI. The efficiency belongs to the client.
The message of this opinion is clear. Technological convenience does not diminish ethical responsibility.
This guidance anchors how I teach professional responsibility. Misusing AI (relying on hallucinated cases, uploading sensitive client data, and the like) violates the duties of diligence and candor.
However, refusing to use AI when it can ethically improve accuracy and reduce costs can also raise competency and equity concerns. Competence now includes understanding when technology serves the client's interests and when it does not.
Texas’s cautious approach of pairing innovation with ethical guardrails reflects how law schools are adapting. By anchoring AI guidance in ethics and governance, institutions across the state are positioning themselves as leaders in responsible adoption rather than blindly accelerating it. SMU's model of transparency, attribution, and accountability mirrors the expectations clients place on lawyers in practice: clear disclosure, informed consent, and responsibility for every word they sign.
We want to teach our students to think better, not merely to prompt better. The reliability of the technology is determined by the lawyers who review its output. The future of law will rely less on automation and more on discernment: the ability to examine, stay informed, and act ethically in the face of constant change.
That's why SMU starts with the foundation: understanding precedent, authority, and reasoning before layering AI on top. The discipline of verification—checking sources, confirming citations, and maintaining healthy skepticism—remains the lawyer's first defense against error.
AI will evolve faster than any classroom, but the principles of professional judgment will persist. As Ethics Opinion 705 reminds us, competence, confidentiality, and integrity cannot be delegated to algorithms. The best way to prepare future lawyers is to teach them to think critically before using technology and to remember that the law's most powerful tool remains human judgment.
Karlis Chatman is a professor at SMU Dedman School of Law. She writes about corporate governance, contract law, race, and economic justice for Bloomberg Law's Good Counsel column.
