Can college professors make peace with ChatGPT?


Artificial intelligence has been around for decades. In 1951, computer science pioneer Christopher Strachey created one of the first successful AI programs. Since then, the technology has become embedded in our daily lives, from bank fraud detection to flu season prediction to facial recognition on smartphones.

However, as tools built on large language models, like ChatGPT, become more accessible to the public, AI is attracting new scrutiny, especially over the prospect that students may use it to complete assignments.

Cynthia Firth, professor of electrical and computer engineering at the University of Utah, uses AI in her classroom. She teaches her students how to design antennas using genetic algorithms, which apply a computer-encoded form of natural selection.

Engineers have been using AI techniques like genetic algorithms since the 1960s, so to Firth, ChatGPT and other text-generating bots don’t seem particularly revolutionary, at least in their current form. They are unable to produce data or cite sources accurately, so their ability to assist in producing scientific reports, for example, remains limited.

“By the time you tell ChatGPT what to write, you’ll have just written it,” Firth said.
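The genetic algorithms Firth teaches follow a simple loop: keep the fittest designs, recombine them, and occasionally mutate them. Below is a minimal, hypothetical sketch in Python; the fitness function is a toy stand-in for a real antenna simulator, and all names and parameters are illustrative, not drawn from Firth's course.

```python
import random

def evolve(fitness, n_genes=4, pop_size=30, generations=60, mutation_rate=0.1):
    """Minimal genetic algorithm: evolve a vector of numbers to maximize `fitness`."""
    pop = [[random.uniform(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover + mutation: refill the population from random parent pairs.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(n_genes)] = random.uniform(0, 1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for an antenna simulator: the "best design" has all genes near 0.75.
best = evolve(lambda genes: -sum((g - 0.75) ** 2 for g in genes))
print(best)
```

In a real antenna problem, the genes would encode physical parameters such as element lengths or spacings, and the fitness function would call an electromagnetic simulation rather than a closed-form formula.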

Other professors, such as Utah State University’s Chris Babitz, believe generative AI will cause a “paradigm shift” in college education, not because students may become dependent on it, but because it gives educators an opportunity to re-evaluate their teaching methods.


Cynthia Firth, professor of electrical and computer engineering at the University of Utah, holds an antenna over a pacemaker at the University of Utah offices in Salt Lake City on Tuesday, May 23, 2023. Firth says artificial intelligence has been used to develop antennas for decades.

Christine Murphy, Deseret News

AI raises the bar for teachers and students

“I think the role of professors is not to be afraid that students are cheating. What we should do is be more aware of what it means to teach students in the first place,” said Christa Albrecht-Crane, professor of English at Utah Valley University.

As chair of the university’s writing program committee, Albrecht-Crane has led discussions with colleagues about how to teach students that writing for themselves benefits them.

Part of that effort is to divide a paper into separate tasks such as brainstorming, outlining, drafting, revision, and peer review. ChatGPT can help with all these steps (Albrecht-Crane has introduced ChatGPT as a “collaborator” in the classroom). Breaking the essay into pieces, however, may make students less likely to rely on ChatGPT to write the full text.

“The emphasis is not on the writing artifact, but on the process of creating writing that students feel they are invested in,” Albrecht-Crane said.

Text-generating AI can also encourage professors to design assignments that require more creativity, collaboration, and complex analysis from students. History professor Babitz plans to put this philosophy into practice in his “America in the 1960s” class this fall.

Expecting that students will use ChatGPT to complete “low-level” tasks in class, Babitz will require students to create mock museum exhibits, modeled on those at the National Museum of American History and based on assigned readings, along with social media campaigns to promote those exhibits.

“I mean, it turns out to be a lot harder and more challenging, but it’s probably more meaningful than sitting down and writing three essays on three different books in one semester,” Babitz said.

Some professors see ChatGPT as just another new technology in the same category as calculators and the internet. Before spell checkers and Grammarly, writing required knowledge of spelling and grammar. USU computer science professor John Edwards argues that text-generating AI does the same thing at a higher level: it automates the prose itself.

“But I don’t think that takes away the highest level, the most important part: the way the arguments are framed… the creativity behind the writing,” Edwards said.


Cynthia Firth, professor of electrical and computer engineering at the University of Utah, takes a portrait photo with an antenna at the University of Utah offices in Salt Lake City, Tuesday, May 23, 2023. Firth says artificial intelligence has been used to develop antennas for decades.

Christine Murphy, Deseret News

Incorporate ChatGPT into college curriculum

Midway through this spring semester, Babitz gave an optional assignment to his history of sexuality class. Students entered questions about class content into ChatGPT and analyzed the strengths and weaknesses of its responses.

After the initial surprise wore off, students began to notice flaws in the bot’s output. It was relatively generic, lacked depth, and even had some information wrong. Some students found this to be a useful starting point, but no one said they would trust it to complete the entire assignment.

One possible conclusion from this is that if students understand ChatGPT’s shortcomings, they may be less likely to rely on it for all their work. Albrecht-Crane believes educators have a responsibility to teach students how to use ChatGPT ethically, to prepare them for entry into a workforce likely to be transformed by AI.

What if a student uses ChatGPT for cheating?

Most of these professors conceded that it was inevitable that at least some students would use generative AI to get out of doing their work. And while some believe banning ChatGPT will do more harm than good, others believe it is the most viable way to deter students from becoming dependent on it.

Edwards believes college students should be exposed to AI at some point, but banning it at the course level won’t hurt.

“ChatGPT is not changing the world. It is a step in our decades of progress in using technology in innovative ways,” he said. “It’s certainly a big step, but it’s perfectly fine if English teachers don’t use it.”

Educators concerned about AI robbing students of their critical thinking skills might consider returning to traditional in-class tests rather than giving students open-book, take-home exams.

One of the main problems with banning generative AI in the classroom is that a ban is nearly impossible to enforce. It is easy for students to tweak ChatGPT’s responses to sound like they were written by a human, and AI detection software often concludes that papers students wrote entirely on their own were at least partly generated by AI.

That’s why Edwards and his colleagues at USU are investigating another method of plagiarism detection: keystroke tracking. In addition to their assignments, computer science students submit a log of backspaces, copy-and-pastes, and every other key they press in the process of writing code.

Having a window into a student’s coding process can help professors detect red flags. A large copy-and-paste could suggest that the student is using a code-generating AI program like Copilot. Edwards’ research also shows that students who know their keystrokes are being tracked are less likely to plagiarize.
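The article doesn't describe the format of the logs Edwards' team collects, but the red-flag idea can be sketched with a hypothetical log of (event, text) pairs; the function name, event names, and threshold below are all illustrative assumptions.

```python
def flag_large_pastes(events, threshold=80):
    """Return the text of paste events that are suspiciously long.

    `events` is a hypothetical keystroke log: a list of (kind, text) tuples,
    e.g. ("type", "x"), ("backspace", ""), or ("paste", "<inserted text>").
    """
    return [text for kind, text in events if kind == "paste" and len(text) >= threshold]

# Example log: a few typed characters, then one large paste.
log = [
    ("type", "d"), ("type", "e"), ("type", "f"),
    ("paste", "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)" * 2),
    ("backspace", ""),
]
print(flag_large_pastes(log))
```

A real system would need more nuance, since legitimate pastes (boilerplate, starter code, one's own earlier work) are common; flagged events would be prompts for a human to review, not proof of misconduct.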

Edwards and his team are still working out some of the issues with keystroke logging, such as potential privacy violations and the anxiety it causes some students, but it could prove a more effective approach to AI detection in education.

What will happen when AI evolves?

Many argue that the real concern is not AI in its current state, but the more capable AI to come. Earlier this month, the Center for AI Safety released a statement on the risks of advanced AI that was signed by hundreds of technology experts.

“Reducing the risk of AI-induced extinction should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

Other experts have chosen to step back from AI. Geoffrey Hinton, the “godfather of AI” whose research was integral to the development of software like ChatGPT, quit his job at Google and expressed concern that AI could grow out of control.

He even said it was “not inconceivable” that AI could destroy humanity, according to a Deseret News report.

However, this view is not universal. As an engineer, Firth is optimistic about the future of generative AI. If ChatGPT could eventually generate more accurate information and cite sources, for example, it would be useful and could help students and teachers focus on more advanced tasks.

“It would be really nice if we could spend our time doing something more original,” Firth said.

Edwards argues that if AI becomes disruptive, it’s not because it’s getting too smart, but because humans can’t adapt.

“We need to elevate what makes us human,” he says. “And if we’re continuously improving as humans, I don’t think computers will ever catch up.”
