
With the release of the artificial intelligence chatbot ChatGPT last November, the public gained an opportunity to harness the power of sophisticated large language models that generate human-like responses to requests and questions.
ChatGPT is currently the fastest-growing consumer application in history, and Florida State University Information Science Professor Paul Marty is keeping a close eye on its adoption. Marty says the technology holds many possibilities, along with many moral and ethical implications.
“Used well, it is a tool that can help us all become better teachers, writers and learners,” he said. “Like all technology, it has to adapt to us, not the other way around. We need to be wary of tools that lack human interaction to the point where humans are reduced to being nannies for machines while working for them.”
Marty, who taught an undergraduate Honors course on the unintended consequences of information technology, including artificial intelligence, answered questions about ChatGPT.
What do you mean when you say new technologies like ChatGPT have tradeoffs?

Think about the skills acquired throughout human history. We used to know how to build fires from scratch, but how many of us need that skill in modern life? If you have a smartphone, you don’t even have to remember phone numbers.
Tools like the calculator, used properly, allowed me to pursue higher mathematics. But when problems arise with these tools, people can be left unsure of what to do without them. We need to understand the tradeoffs: what we have gained and what we have given up.
If ChatGPT isn’t really intelligent, what is it?
This technology pretends to be intelligent. It analyzes vast amounts of data and puts together sequences of text in a way that simulates intelligence. What’s so impressive and potentially scary about ChatGPT is that it quickly generates large amounts of readable, well-written, and unique text.
Basically, this tool is like autocomplete on a grand scale, predicting the next word in each sentence, but it doesn’t understand what it’s doing. It’s not really intelligent, so it often generates meaningless text. In that sense, it reminds me of early computer translation tools that, when asked for a French translation of the word “president,” returned the name of the actual French president instead.
What about user concerns about emotional reactions from ChatGPT?
ChatGPT is so sophisticated in the way it organizes text that it can really manipulate our emotions. It’s important to remember that it’s programmed to do so as a simulation of intelligence. For those concerned about emotional reactions from ChatGPT, I want to tell you that this technology is not sentient. It is not alive. We haven’t reached that stage yet. I don’t know how close we are to it, but I’m sure we’re a long way off.
Are you worried about plagiarism?
Florida State’s Academic Integrity Policy covers this. You cannot ask someone else to write your paper for you. Full stop. That includes ChatGPT.
But there are gray areas. Students can already use tools such as spell checkers and grammar checkers, so why not allow AI tools like ChatGPT? Where is the line between Grammarly and a program like ChatGPT? At what point does your paper cease to be your paper? We need to work with students to help them understand the proper use of these tools.
So how will colleges and universities adapt?
Adapting takes time, but we shouldn’t hide from these technologies. We need to embrace them and understand the correct way to use them in the classroom. That means changing the way we think about education at the university level.
The conclusion for me is this: If AI can answer our test questions and essay prompts well enough, we need to ask better questions. Take an assignment like having students read an article and write a short summary of it. Well, ChatGPT is good at writing a 200-word summary. If computers can do this as well as humans, why are we asking students to do it? Again, this is about tradeoffs. What have we lost, and what have we gained?
We need to see what ChatGPT can and can’t do, and adjust our assignments to reflect that understanding so we can help students reach the next level. ChatGPT is good at writing at a high level, but it can’t go into detail the way a human can, at least not in any meaningful way. So, using AI as a starting point, we can work with students to analyze the AI-generated text, discuss what it gets right and wrong, and really dig into the topic.
Ultimately, that’s what we want our students to be able to do when they graduate: use the tools and technology available to them to achieve higher levels of thinking and higher levels of writing. That should be the goal of a university education.
For more information, visit FSU’s School of Information.
