The essay read like typical 3 a.m. word vomit. The concept was there, but the writing conveyed, at best, a superficial understanding of what we had done that semester, and the discussion only vaguely responded to the prompts. It was the kind of paper that always makes me wonder: Did this student even come to class? Was any of it of value to them?
Except there was no glaring evidence that this was the product of an all-nighter: no grammatical errors, no spelling mistakes, no detours into unrelated examples that seem profound to a student late at night but, in the light of day, read like the result of hitting a bong.
I was, it seemed, looking at my first student essay written by ChatGPT, turned in just before the end of the semester.
I typed some of the text into one of the new AI-writing detectors. But before I even ran the test, I realized the absurdity, and perhaps hopelessness, of what I was trying to do. I now live inside a Turing test: as a human, I can no longer be completely sure whether I'm reading another human's work or a copy-pasted answer churned out by generative AI.
In the fall semester, I worried about essays purchased from the internet. They're hard to police, but they're usually off-topic enough that the students who buy them don't do very well anyway. By the spring semester, the rules of the game had completely changed. By the end of the term, I was watching out for machines writing my students' papers.
After running the test (10.1 percent written by a human, according to the program), I put my head down on the kitchen table. Since January, we have been overwhelmed by an onslaught of technological change that seems to be moving at the speed of light. Text- and image-generating AI (and, to a lesser extent, music- and video-generating AI) are raising ever more questions about what we can trust and what is real.
To keep my sanity, I needed to know whether my internal BS detector and the automated GPT detector were right: that the essay really was ChatGPT's work. In an email to the student, I offered them the chance to disclose whether they had used AI tools, which I had explicitly forbidden in the assignment, and promised no grade penalty or ethical repercussions for what was, at best, a B essay. They had; and, like most attempts at cheating, it happened because they were tired, stressed, and desperate.
Technically, I won my first (known) showdown with the machine. But I didn’t feel victorious.
I study and teach media, politics, and technology, so it is literally my job to help people understand the disruptive potential of new media technologies for civic life.
It also meant that this semester was the most existentially challenging of my 17 years in the classroom, and that includes teaching in D.C. during the 2016 election and the early Trump presidency, and teaching on Zoom early in the pandemic (which strained every molecule of my ADHD brain).
This year, I found myself not only playing whack-a-mole with ChatGPT but also trying to understand what may be the most significant technological change since the introduction of the smartphone. Beyond classroom mechanics, helping my students (and myself) find the language to talk about the changes we're living through, and to develop the questions needed to make sense of it all, feels more urgent than ever.
The disruptive potential of generative AI has me hooked. I was not alone, of course: The Atlantic declared the college essay dead. At my university, I created a pop-up class for students and interdisciplinary faculty to explore the ethics of AI, and convened a series of webinars and conferences to help faculty understand the new leviathan we were suddenly confronting.
During that time, each of my three classes devoted time to teaching about information disorder, or the different ways the information landscape gets polluted, from deepfakes to clickbait to partisan news. But while I could describe the processes and incentives behind creating and consuming misleading content, there were moments when I was completely overwhelmed by the scale and speed with which GPT was already sowing chaos.
"I've got nothing," I told my students in response to the fake Trump arrest photos made by a journalist at a respected investigative journalism outlet (who was, in his words, "just messing around"). We walked through timelines and talked about who is most susceptible to misinformation, but it was a teaching moment in which reality felt completely unreal. Who knew what would come next? (I teach at a Catholic university, so at least the photo of the Pope in a puffer jacket offered some comic relief.)
Still, instead of building their own agency to make sense of what's going on, my students grew fatalistic about misinformation, the attention economy, and Big Tech more generally. They gave up. "Algorithms" and "AI" became the bogeymen of my classes, words that encapsulate anxieties about everything from graduating and job hunting to attacks on LGBTQ rights and abortion.
Hearing my students talk about these new tech bogeymen reminds me of the mistakes we made in criticizing the news media. When the words themselves carry so many meanings and are open to so many interpretations by so many speakers, we needlessly surrender the precision needed to understand nuance, diagnose points of intervention, and separate existential fears from more immediate threats to social justice, the environment, and democracy.
Consider the many meanings of "fake news," from online memes to politicians discrediting factual journalism to satire and late-night shows. It becomes almost impossible to know who is calling which news fake, and even harder to demand accountability.
Moreover, collapsing broad categories and industries into single, unified entities overestimates their ability to influence the masses. Most Americans say they don't trust "the media," imagined as a cabal manipulating the public through some coordinated attempt at mind control.
But a few follow-up questions ultimately yield exceptions for whichever outlets people find credible, whether Fox, the New York Times, or a weird conspiracy channel on YouTube. The media is not monolithic. It is shaped by the desires, decisions, and questions of those who consume it.
In the same way, when it comes to generative AI, if we throw up our hands and lament the possible end of the college essay, the end of the legal profession, or even the end of the human race, we surrender our agency to help determine the technology's possible futures to the most powerful voices invested in it.
In one of my classes this semester, a student led a presentation demonstrating ChatGPT's parlor tricks, asking it why a tomato turns red and how to make salad dressing. The presenter failed to emphasize that ChatGPT can also be wrong. That day, students muttered versions of "there go our jobs."
That kind of despair paralyzes public criticism and lets technology companies go unchecked.
The syllabus for my Surviving Social Media class has a section titled "What Hath God Wrought," after the first telegraph message, devoted to understanding how technological advances shape the future.
In this section of the course, students tackle cryptocurrency, biohacking, robot love, and the unknowns of how our digital lives will carry on after our physical ones end. My undergraduates, none of them technical experts or computer scientists, learn to define the problems these advances pose, assess the current landscape, and identify likely futures at both the individual and societal levels.
What I was trying to get at with the section title is this: Samuel F.B. Morse's first telegraph message, sent in 1844 and itself a riff on a Bible verse, still raises relevant questions. We already have a vocabulary for reclaiming agency in a world that seems to be inching ever closer to a Terminator-style extinction. (On our final assignment, I had to explicitly instruct my students that a PowerPoint slide of nuclear-holocaust imagery was not an acceptable answer to a prompt about worst-case scenarios.)
What I wanted to show my students is that if you break down blanket terms like "AI," "algorithms," and "Big Tech," terms that make it so hard to truly capture this moment, you can ask the same questions we have always asked. The groundwork of our earlier technology criticism has prepared us well for this very moment.
Start with some basics: What kind of artificial intelligence are we talking about, and what are its specific features and uses? Who stands to make money from this particular fork of the technology? Who has the power to promote it?
Or, more simply, we might consider what futurist and tech ethicist Jaron Lanier recently argued in the New Yorker: remembering that we created these technologies as tools empowers us to remember that we have the ability to shape how they are used.
In the end, I decided to treat GPT like a calculator. Most of us used calculators in math class and still didn't get perfect grades. After discovering my first ChatGPT essay, I decided that, going forward, students could use generative AI on assignments as long as they disclosed how and why. That means less banging my head on the kitchen table and, at its best, a lesson of its own.
Future Tense is a partnership of Slate, New America, and Arizona State University, investigating emerging technologies, public policy, and society.
