Yale Law School’s AI Lab



Leading law firms around the world are investing in artificial intelligence (AI) platforms designed to streamline their operations. But AI can also play a more socially conscious role in the legal field, helping expand access to legal services.

Scott Shapiro, the Charles F. Southmayd Professor of Law, co-founded the AI Lab at Yale Law School two years ago to do just that.

“At law school, there were discussions about what to do with AI, questions about the ethics of AI, regulation of AI, pitfalls, promises, etc.,” Shapiro said. “I thought what was missing was actually trying to build tools that were intellectually, ethically, and professionally responsible.”

Shapiro worked closely with Ruzica Piskac, a professor of computer science at the Yale School of Engineering and Applied Science and a co-founder of the lab, to learn about the latest advances in AI and figure out which are best suited to different types of legal problems. His fascination and enthusiasm for AI can surprise people, he said. “People ask if I’m worried that technology will take job opportunities away from law students.”

“My answer is: 99% of Americans act as though they have access to a lawyer, but they don’t,” he said. “We can use these tools to help people access public housing and other types of benefits, or protect them from eviction, when the alternative is AI or nothing.”

Yale News spoke with Shapiro, who is also a professor of philosophy in Yale’s Faculty of Arts and Sciences, about the AI Lab’s focus. Here are five key points.

Shapiro has a background in, and a passion for, computer science.

As an undergraduate at Columbia University, Shapiro majored in computer science for a time before switching to philosophy. (He first encountered the then-nascent field of artificial intelligence in the 1980s.) He revisited his love of the subject in his most recent book, Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks (Farrar, Straus & Giroux, 2023).

Shapiro is also the founding director of the Yale Cybersecurity Lab, which provides teaching facilities for cybersecurity and information technology. And he and Piskac recently began co-teaching a course called “Law and Large Language Models” on how AI can be applied to legal reasoning.

The AI Lab uses theorem provers to build legal reasoning tools.

“People think of AI as ChatGPT, but there’s a lot more to AI than that,” Shapiro said.

ChatGPT is a large language model: a model trained on vast amounts of data to generate human-like language in response to questions and other tasks. It acts as a kind of highly sophisticated guessing machine. While that may be appropriate for certain types of legal problems, it is not the best approach for the rules-based legal problems the lab is working on. For those, the lab uses theorem provers, which work more like calculators.

“These are tools that most people have never heard of, but they are what make computers work,” he said. “The reason computers don’t constantly crash is that every computer program is run through checkers that look for bugs in the code. What we’re doing is, instead of running a theorem prover over computer code, we’re running a theorem prover over legal code.”

He said the technology could provide a tool for navigating the complex rules that often create barriers to accessing public benefits.
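To make the idea concrete, here is a minimal sketch of what "theorem proving over legal code" means: encode a rule as logic, then mechanically check that a claim about it holds in every possible case. The benefit rule below is entirely hypothetical, invented for illustration, and a toy exhaustive checker stands in for a real prover such as Z3 or Isabelle.

```python
# Sketch: verify a property of a (hypothetical) legal rule by checking
# every combination of facts, the brute-force core of theorem proving.
from itertools import product

def entitled(low_income, has_dependents, prior_eviction):
    """Hypothetical housing-benefit rule: low-income applicants qualify
    unless they have a prior eviction, but dependents override that bar."""
    return low_income and (not prior_eviction or has_dependents)

def holds_for_all(claim):
    """Check that a claim about the rule is true for every possible
    assignment of the three facts (a tiny propositional prover)."""
    return all(claim(*facts) for facts in product([False, True], repeat=3))

# Claim: no applicant is ever entitled without being low-income.
claim = lambda li, hd, pe: (not entitled(li, hd, pe)) or li
print(holds_for_all(claim))  # True: the claim is a theorem of this rule
```

Unlike a language model's statistical guess, this answer is guaranteed: either the claim holds in all cases or the checker produces a concrete counterexample assignment.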

The lab has already created several prototype tools.

Once the lab finds the right partners, it will begin rolling out its models. One challenge is determining how best to get the tools to the right users. Is a tool like this best distributed as an app? If so, how do people find it? Should the lab work with legal aid organizations and provide them with QR codes to distribute to clients? These are all questions the lab is working through.

“One of the ideas that I think is really exciting is that we can provide these kinds of tools to pro bono lawyers,” Shapiro said. “Lawyers who want to donate their time on the weekends but don’t know the area of law where help is needed. Having a tool that answers their questions accurately will help them serve clients who are in desperate need of legal services.”

One prototype was developed in collaboration with Yale University’s human resources department.

The lab built a “concierge” named Alfred to help human resources departments streamline tedious tasks, such as answering questions about hiring fellows and employee benefits. In a demonstration video, an employee asks Alfred whether he can use funds in his Health Flexible Spending Account to cover various expenses related to a serious accident that left him hospitalized for several weeks. Alfred responds with a series of questions designed to elicit a legally accurate answer.

Alfred answers the types of questions that HR staff field constantly. “We learned a very important lesson there: it’s not the hard problems that make life miserable for people in human resources and other departments,” Shapiro said. “It’s the simple questions that get asked over and over again.”
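The question-by-question elicitation described above can be sketched as a small guided interview: each answer narrows the rule set until a determination is possible. The FSA rules and question texts below are hypothetical placeholders for illustration, not Yale's actual guidance or Alfred's real logic.

```python
# Sketch of Alfred-style elicitation: ask only for facts still missing,
# then apply a fixed (hypothetical) eligibility rule once all are known.
def fsa_interview(answers):
    """Return (next_question, None) while facts are missing, or
    (None, determination) once the rules can decide."""
    questions = [
        ("medical_expense", "Is the expense for medical care?"),
        ("within_plan_year", "Was it incurred during the plan year?"),
        ("reimbursed_elsewhere", "Was it reimbursed by other insurance?"),
    ]
    for key, text in questions:
        if key not in answers:
            return text, None  # ask the next unanswered question
    eligible = (answers["medical_expense"]
                and answers["within_plan_year"]
                and not answers["reimbursed_elsewhere"])
    return None, ("Eligible for FSA funds" if eligible else "Not eligible")

# Example session: facts accumulate one answer at a time.
facts = {}
question, verdict = fsa_interview(facts)    # first unanswered question
facts.update(medical_expense=True, within_plan_year=True,
             reimbursed_elsewhere=False)
question, verdict = fsa_interview(facts)
print(verdict)  # Eligible for FSA funds
```

Because every determination traces back to explicit rules and recorded answers, this style of tool gives the repeatable, auditable responses the article says HR staff need for questions asked "over and over again."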
