Yale University freshman creates AI chatbot that provides answers on AI ethics

A student and a professor at Yale University have teamed up to create an artificial intelligence chatbot based on the professor's research on ethical AI.

When Nicholas Gertler, a freshman at Yale, wanted to build an artificial intelligence (AI) chatbot based on his professor's research on AI ethics, his professor advised him to temper his expectations.

“I said, 'Prepare to be disappointed,'” said Luciano Floridi, the Yale University professor who leads the practicum in the cognitive science program. “I didn't know whether people were really interested in this subject of a Yale professor who studies digital technology.”

He added: “That turned out not to be the case.”

In the two weeks after LuFlot Bot was released, it received about 11,000 queries from users in more than 85 countries. The bot is not intended to replace general-purpose chatbots like ChatGPT, which can seemingly answer any question. Instead, LuFlot Bot focuses specifically on the ethics, philosophy, and uses of AI, answering questions such as “Is AI harmful to the environment?” and “What regulations govern AI?”

“I never expected this technology to reach people in so many parts of the world,” Gertler said. “This is what happens when you break down this barrier to technology.”

Gertler and Yale have joined the ranks of institutions creating their own large language models (LLMs). Building your own AI has gained traction in recent months as concerns about intellectual property, ethics, and fairness swirl around major generative AI tools like ChatGPT.

Yale chatbot overcomes IP issues

Gertler first started tinkering with artificial intelligence five years ago, when he was 14 and interested in technology. Last fall, during his first semester at Yale, he built his own AI chatbot as a study aid for a midterm in a cognitive science class. He built it using lecture slides and study guides, then had it generate questions similar to those on the exam.

“I just thought this was a really cool experiment,” Gertler said.

In the spring semester, Gertler began talking about chatbots with Floridi, who was immediately interested. Floridi is the founding director of Yale University's Digital Ethics Center, a prominent philosopher, and the author of dozens of research papers and books exploring the ethics of AI.

Gertler, who co-founded an edtech startup called Mylon Education with Rithvik Sabnekar, wanted to create LuFlot Bot to educate users about the ethics of AI.

“He thought that, given the topics I research, it would be natural to make all of this work in philosophy and AI ethics available to the public,” Floridi said.

One of Gertler's main goals for the chatbot was to bridge the digital divide that has widened with successive iterations of ChatGPT, many of which charge subscription fees. LuFlot Bot is free and available to everyone.

“Providing people with sources directly from academia is very important, because having access to the literature is a privilege,” he said. “There are a lot of paywalls, and ideas are usually conveyed in sophisticated language that the general public may not understand.

“The fact that they are now able to gain understanding through this website is very important to me,” he said.

For Floridi, there was also the added bonus of securing intellectual property rights. Many in higher education refuse to let their work be used to train LLMs, where intellectual property protections and copyright rules are often opaque. With a homegrown LLM, it becomes clear what a professor's research will and won't be used for.

“There's a difference between buying something at the store and cooking it yourself: you know the ingredients,” Floridi said. “When you cook it yourself, it might not be better than what you bought, but you know exactly what you're putting in it.”

Several other higher education institutions, including Harvard University, the University of Washington, the University of California, Irvine, and the University of California, San Diego, have created their own internal LLMs for campus-wide use, keeping professors' intellectual property within their institutions.

And as familiarity with the technology grows, universities may become more likely to build their own internal models.

Gertler and Floridi acknowledge that not every professor will create a chatbot based on their own teaching, given the large amount of documentation required to build one, but they say such bots could be useful for both faculty and students in the future.

“This project is emblematic of how safe, secure, and accessible chatbots can be created in a relatively short period of time, so imagine the possibilities if professors were able to create similar bots,” Gertler said. “They've put together lecture slides, study guides, and a ton of questions. They have all this rich data, so it's just a matter of plugging it together to make it more accessible to students.”
