
ChatGPT Round-Up: AI in the Classroom

It's been six months since my viral tweet about ChatGPT, and the articles I wrote on it are now published. Time to revisit?

It’s been about a half year since the viral tweet about my ChatGPT-in-the-classroom assignment that helped launch this site, and briefly blew up my twitter (I still can’t call it X) account.

The whole thing was such a weird experience. I had not anticipated it at all, and yet, for a few days, the tweet seemed to garner a like every few seconds. It ended up with over 30k likes and 8 million views (Gary Marcus even made a separate post congratulating me).

Even Elon Musk tweeted at me—which I didn’t notice in the hubbub until a friend pointed it out. As my cousin observed, I’m lucky he agreed with what I had done; otherwise his army of trolls would have made life miserable.

(All in all, twitter is so bizarre. This tweet had me feeling like the whole world was looking at me—a nerve-wracking feeling; I ended up uninstalling it from my phone and playing Zelda in the other room almost the entire time—but then you might later see a tweet with a half-eaten jar of expired mayonnaise and the caption “lmao I’m going in” accrue 100k likes and you realize that the internet can’t be predicted or understood in any sane way).

In the aftermath I was contacted by a number of editors from a few magazines, as well as by a radio show, and now that everything on that front has been published, I thought I’d share it here along with some commentary.

Publication with Wired

Wired magazine was the first place to publish a write-up on the assignment. It was, overall, a short overview of what we did and what I hoped students would learn from it.

Don’t Want Students to Rely on ChatGPT? Have Them Use It
It’s easy to forget how little students and educators understand generative AI’s flaws. Once they actually try it out, they’ll see that it can’t replace them.

In my view, the key portion (and the one I write about a lot on this site) was this paragraph:

Both students and educators seem to have internalized the oppressive idea that human beings are deficient, relatively unproductive machines, and that superior ones—AI, perhaps—will supplant us with their perfect accuracy and 24/7 work ethic. Showing my students just how flawed ChatGPT is helped restore confidence in their own minds and abilities. No chatbot, even a fully reliable one, can wrest away my students’ understanding of their value as people. Ironically, I believe that bringing AI into the classroom reinforced this for them in a way they hadn’t understood before. 

I didn’t have space to talk more about the details of the assignment. It’s important, when choosing a topic, to pick something that the students are familiar with, so that they can identify what’s wrong (if anything) with what ChatGPT has produced. They had two choices for the prompt, both related to the Harry Potter books and movies. They could ask ChatGPT to write an essay, making three points and citing sources, arguing either that Harry is a Christ figure, or that Harry is not a Christ figure.

This was something we talked about in class, as I like to use the Harry Potter books and movies to get the students thinking about the relationship of religion, magic, science, and technology. Usually, very few realize how obvious the Christian imagery is in the books, especially in the last one—The Deathly Hallows—which becomes almost Narnia-esque in its overt religiousness. (Harry talking to his father in a garden right before sacrificing himself, getting killed by a satanic figure (who has a snake), going to a place called King’s Cross (the title of the chapter, too), and then resurrecting from the dead—hard to get more blatant!).

Students were allowed to pick either argument and then had to pretend to be the professor. They were to comment on the paper and then answer a questionnaire I provided, the central focus of which was their investigation into whether ChatGPT had confabulated fake sources and quotes. Most students caught it making errors. A few didn’t, but when I looked into it, the AI had indeed made mistakes and fooled them.

I was astounded at how quick the turnaround was for Wired. Gideon Lichfield, their editor-in-chief at the time, asked me to DM him, and then gave me the contact info for one of his editorial teams. I sent a proposal, and within a few days they responded with a request for the piece. I sent them a draft, the editor had it back to me in under 24 hours, and the requested revisions were done in another couple of days. All told, it took only about a week for the piece to be proposed, written, and published (a breakneck pace for most online writing, in my experience).


The Impact of AI on Education

At the same time, I was emailed by Victor Storchan, lead AI/ML researcher at Mozilla (which was cool for me, as a longtime Firefox user). He requested a more open-ended essay about the role of AI in education, which was published in French in the magazine Le Grand Continent, linked below.

ChatGPT et la crise existentielle de l’université | Le Grand Continent
All cheaters? The university has not escaped the irruption of ChatGPT and, more generally, the popularization of artificial intelligence. But hunting down plagiarism is sterile. Instead, we must ask what we expect of our universities. From a perspective informed by s…

I can read French a bit, but cannot write it—not at this level at least. I wrote the article in English and then they translated it.

The English version was posted, with permission, on my site, and ended up being the second post I published here.

ChatGPT and the University’s Existential Crisis
To grapple with ChatGPT in the classroom is also to grapple with the purpose of education: what are its means and what are its ends?

This meditation ended up being more pessimistic in its tone and analysis, as well as more wide-ranging, since I had less editorial oversight than with the other places.

I still stand by this, however. AI can certainly be useful in a lot of contexts, but in education—especially in humanities education—it seems to be having an almost entirely negative impact. Enabling cheating, homogenizing thought, and disseminating misinformation seem to be its primary accomplishments. About the only use for it I’ve had professionally has been to quickly revise my cover letters for job applications—expediting what is only pointless busy-work meant to depress application numbers.

The outlook in this piece is gloomy, but if anything I’m even more pessimistic now. In the bigger magazines I tried to strike a more hopeful, future-oriented tone, but in the long run I am beginning to think that the whole enterprise of defending humanities education is basically dead. The most fruitful course of action at this point might be to just give up. The megamachine has already won.

When faced with the dismantling of entire programs at, say, West Virginia University—the seemingly intractable problem of university administrators around the country themselves disbelieving in the value of humanities education—it’s hard to be anything other than pessimistic.

However, the one bright spot is that, for how little the universities themselves believe in the value of education, students everywhere still hunger and yearn for it. They want to know about history, philosophy, and literature; they like to talk about it (they just, sadly, think that it’s an impractical indulgence they should abstain from, like a dessert). That was nowhere more apparent to me than in the conversations we had about AI in my class.  

Marketplace: An Interview with My Students

This assignment would not have worked out so well without my students and their participation. A number of commenters on the original tweet wanted to know more about what they thought. Fortunately, two of them were able to share their thoughts with the world after we were contacted by Marketplace Tech.

AI-generated college essays are riddled with factual errors
A professor at Elon University assigned his students essay prompts to feed to ChatGPT, but the grades the chatbot received were not great.

In our first conversation, Marketplace only spoke to me. But when we discussed how to do the radio interview with them, I floated the idea of appearing with two of my students: Cal Baker and Fayrah Stylianopoulos.

So many students did a fantastic job on the assignment that it was hard to pick just two to come on the air. But Cal and Fayrah went above and beyond; they were the only two students who, in doing the homework, fully pretended to be the professor, addressing not me (the grader) but ChatGPT itself as though it were a student. Because they did so much work and had so much fun with it, I thought it fitting to invite them to give their perspectives. And, in fact, they were already involved in the conversation: they were the two anonymous students I cited in the viral tweet.

I have a fair amount of experience speaking in front of a crowd—mostly in class, but also on webinars for the C.S. Lewis Foundation, and even a short stint in sports radio for Duke WXDU back in 2019 (a brief but illustrious career cut short due to the pandemic shutdown)—but even so, I was pretty nervous before speaking to Marketplace. They are a big program, with a wide reach.

That said, the experience was great altogether. We talked for 45 minutes with Meghan McCarty Carino, and the subsequent show was edited down to 12 minutes. I’m very happy with how it turned out.  


Co-Written Op-Ed in Scientific American

Cal, Fayrah, and I also co-wrote an op-ed that appeared in Scientific American, where they were able to expand on their ideas.

To Educate Students about AI, Make Them Use It
A college professor and his students explain what they learned from bringing ChatGPT into the classroom

As in the thread and in the Marketplace interview, Cal explained their fear that students’ over-reliance on ChatGPT to expedite homework would have harmful cognitive consequences. Cal argued that producing material for the professor is only the surface-level goal of any homework assignment; the real benefit for students is the mental exercise that goes into the work. If you skip that exercise, you are not going to improve your mind. They wrote:

Effects might include diminished problem-solving abilities, worse memory and even a decrease in one’s ability to learn new things… This will ultimately end up hurting students. If they depend on technology that makes their lives easier in the short term, they will fail to develop their abilities for future work, thereby making their lives more difficult in the long term.

It would be like (in an example I have used before) paying someone else to go to the gym for you.

One of the most popular parts of the tweet-thread was a quote from Fayrah, who worried that human thinking would begin to mirror machine thinking—that is, just searching for the most likely “right answer” and regurgitating it back to the professor. “I’m not worried about AI getting to where we are now,” she answered in the homework. “I’m much more worried about the possibility of us reverting to where AI is.”

In the SciAm piece, Fayrah got a chance to expand on this. “I worry that if students over-rely on machine learning technology,” she explained, “they will learn to think like it, and focus on predicting the most likely 'right answer' instead of thinking critically and seeking to comprehend nuanced ideas.”

Such a focus could have a homogenizing effect on thought.

This is a real problem that we need to consider more thoughtfully than we have been. AIs like ChatGPT have a worldview: they are aligned in such a way as to reproduce the politics and cultural assumptions of those who train them. ChatGPT, for instance, is clearly a benignly tolerant and broadly liberal AI, meant to be encouraging and positive in your interactions with it. It would not act that way unless it had been guided to do so. It is, in fact, quite simple for an AI to adopt a less socially constructive “mindset.” Consider the way that biases in training data can manifest in AIs, as with Amazon’s résumé-screening AI that did not recommend women for jobs because it learned from its training data not to do so.

As of now, the worldview an AI has is the worldview we give it. These systems are not neutral, and if we treat them as though they are objective arbiters of reality—or, as many of my students did, as an infallible oracle—then we are going to be misled into thinking like them. This is why AI literacy is so important as they become more and more a part of our daily lives.


Postscript

This was such an interesting experience and I’m fortunate to have been able to talk about it in such big venues as Wired, Le Grand Continent, Marketplace Tech, and Scientific American.

I did not have a chance to talk much about religion and AI in these pieces, and in the future I hope to look into that more. The problem of the AI “worldview” is something that religious groups themselves have keyed in on, and right-wing organizations like Gab have already professed a desire to create an avowedly religious AI to combat ChatGPT (this should not be surprising—lest anyone forget, evangelical Christians have long been early adopters of technology for proselytism). Furthermore, there is also the way that techno-utopian (and dystopian) feelings about AI can become a sort of religion in their own right. These are topics I hope to write about sometime in the future.