As artificial intelligence (AI) continues to advance rapidly, there has been a surge in AI-powered content creation tools, such as ChatGPT and DALL·E, that provide users with a variety of personalized experiences. But with this growth come concerns about the potential dangers and implications of such tools, from privacy risks to the displacement of human workers.
For example, the previous paragraph was written by ChatGPT, illustrating the blurred line between AI-generated and human-generated content. The image on the right was created by prompting DALL·E to produce an image of the University of Toronto in the style of Van Gogh's Starry Night.
News headlines in recent months have outlined issues related to generative AI tools and their content. Illustrators, graphic designers, photographers, musicians, and authors have expressed concern that generative AI will cost them income and that their work will be used as source material without permission or compensation.
On the academic side, instructors have had to deal with students submitting work written by ChatGPT and are reassessing how best to teach and assess their courses. The University is studying the technology's impact and developing guidelines for students and instructors.
Despite the challenges, many experts say the technology is here to stay and that the focus needs to be on establishing guidelines and safeguards for its use, while others point to its positive potential.
Arts & Science writer Chris Sasaki spoke with six U of T experts about the impact of generative AI tools and the ethical questions raised by the new technology.
Ashton Anderson
Assistant Professor, Department of Computer Science
We are increasingly seeing AI-powered tools for gameplay, text generation, and artistic expression that are designed to simulate a specific person. For example, it's easy to imagine an AI model that plays chess in the style of champion Magnus Carlsen, writes like a famous author, or interacts with students like your favourite teacher's assistant. My colleagues and I call these imitation models, models that mimic specific individuals, and they raise important social and ethical questions across a variety of applications.
Will they be used to trick people into thinking they are dealing with a real person, such as a business colleague, celebrity, or politician? If an imitation model performs well enough to replace the person it imitates, what happens to that person's worth or value? Conversely, what if the model exhibits bad behaviour? What is the impact on the modeled person's reputation? Is consent obtained from the person being modeled? As these tools become part of our daily lives, it is essential to consider all of these questions.
Paul Bloom
Professor, Department of Psychology
What ChatGPT and other generative AI tools are doing today is both very impressive and very scary. I have many questions about their capabilities that I don't know the answers to, and I don't know their limits. That said, I wonder whether there are some things text generators fundamentally cannot do. They can write short pieces or write in a particular person's style, but can they write a full-length book?
Some people doubt they are capable of such a task because of how these tools work: they use deep-learning statistics to generate sentences by predicting what comes next. But they lack the foundations of human thinking, and they will never come close to writing as we do without those foundations. We have mental models, representations of our homes and our friends, and we have memories. Machines have neither. Until they do, they are not human, and they cannot write, illustrate, or create as we can.
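The "predicting what comes next" idea can be illustrated with a toy sketch. The following is a deliberately minimal bigram model, not how ChatGPT actually works internally (modern models use neural networks over vast corpora), but it shows the core statistical principle: continuations are chosen from observed co-occurrence counts, with no world model or memory behind them. The corpus and function names here are invented for illustration.

```python
# Toy bigram "language model": predicts the next word purely from
# co-occurrence statistics, with no understanding of meaning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" twice, "mat" only once
```

The model's "knowledge" is nothing but a table of counts, which is why, as Bloom notes, statistics alone fall short of human-style thinking.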
Paolo Granata
Associate Professor, Book and Media Studies, St. Michael's College; Media Ethics Lab
AI literacy is the key. Whether something is viewed as a threat or an opportunity, the wisest course of action is to understand it. For example, there are tasks that AI does better than humans, so let’s focus on the tasks that humans do better than AI. The emergence of widely accessible generative AI technologies should also motivate educators to rethink teaching methods, assignments, and the learning process as a whole.
AI is an eye-opener. The role of educators in the age of AI must be re-evaluated. Educators should be experience designers, not content providers; in education, context is more important than content. Now that students have access to such a powerful content producer, educators can focus primarily on active-learning approaches.
Valérie Kindarji
PhD student, Department of Political Science
While the public eye has turned to disruptive AI technologies themselves, we can’t forget the people behind the screens using these tools. Our democracy needs informed citizens with access to quality information. Digital literacy is essential to helping us understand these technologies and make the most of them. It empowers us to have access to tools that spark our creativity and help us summarize information in a flash.
But while it's important to know what these tools can do for us, it's equally important to learn and recognize their limitations. In an age of information overload, digital literacy gives us a way to practice critical thinking online, understand the biases that influence the output of AI tools, and become discerning information consumers. As the meaning of literacy continues to evolve with technology, we need to encourage initiatives that help us learn how to navigate the online information ecosystem. Ultimately, we will be better citizens and neighbours.
Catherine Moore
Adjunct Professor, Faculty of Music
Does having the credit “original score made by Google Music” at the end of a movie change your opinion of the score? I don’t think so. Film music is meant to create an emotional impact. That’s what it’s for. And if an AI-created score succeeds in doing that, it has done its job regardless of how it was created.
In addition, the generative AI "composer" raises questions such as: What is sound? What is music? What is a natural sound? What is an artificial sound? These questions go back decades. People have recorded the sounds of machines and of nature, sped them up, slowed them down, manipulated them in all sorts of ways. An entire electro-acoustic music movement grew out of musicians using technology to manipulate acoustic sounds and create something new.
I see the emergence of AI-generated music as part of a natural progression in the long line of music creators who have used new technologies to create and produce.
Karina Vold
Associate Professor, Institute for the History and Philosophy of Science and Technology; Centre for Ethics; Schwartz Reisman Institute for Technology and Society
Advances in these tools are exciting, but they carry many risks. For example, these systems have biases that reflect human biases. When you ask tools like ChatGPT for the names of 10 famous philosophers, you get 10 Western male philosophers. And when you ask for female philosophers, only Western philosophers are named. GPT-4 is OpenAI's attempt to address some of these concerns, but unfortunately it doesn't solve them all.
In his book On Bullshit, [moral philosopher] Harry Frankfurt argues that bullshitters are more dangerous than liars, because liars at least keep track of their lies and know what is true and what is false; bullshitters do not. Well, ChatGPT is a bullshitter: it doesn't care about the truth of what it says. It makes up content and makes up references. And the problem is that it sometimes gets things right, so users begin to trust it, and that's a big concern.
Lawmakers need to catch up on regulation of these generative AI companies. There have been internal reviews by some companies, but that’s not enough. In my view, there should be an ethics review board and even laws regulating this new technology.