AI and Disruption
I distinctly remember my first encounter with a large language model (LLM). It was sometime in early 2022, and I'd been hearing enough about GPT-3 (a forerunner of ChatGPT) that I wanted to try it out myself. So I took one of my recent ministry newsletters, pasted the first three-quarters of it into the prompt, and asked the model to compose a new conclusion.
I was ready to be impressed if the model could bring the story to a coherent close. I was not ready for what it actually produced, which was something like:
“We are truly grateful for your unwavering support that enables us to continue reaching out and making an impact on young lives like J.'s. This story is a testament to God's unfailing love and pursuit of His children, even in the darkest times…”[1]
What floored me was that the model not only picked up on the concrete events of the story, but also closed the letter by casting vision for the kingdom-building opportunities in campus ministry. In other words, it generated a paragraph that, until that point, I would have assumed could only have been produced by an articulate human writer.
On an emotional level, as I read what the model produced, I experienced that sinking sensation you get in your gut when you discover that something has gone terribly wrong. I may have even said a prayer in that moment along the lines of, “Lord, have mercy.”
Why that response rather than the unalloyed excitement many others have felt? To be sure, I am excited about the technical possibilities offered by LLMs and other forms of AI.[2] I have spent many hours experimenting with LLMs and related tools since that first foray with GPT-3, and I'm convinced that there are many highly beneficial applications for these technologies. But I would also say that my “Lord, have mercy” prayer still captures my overall feeling about AI. I feel this way because I expect AI to be one of the most disruptive technologies of our time. Exploring that idea—disruption—is my purpose in this post.
When I say that AI will be disruptive, I mean that it will bring about changes that increasingly display two characteristics:
- Impact: It will force significant changes to our social, cultural, and economic order.
- Speed: These changes will occur quickly enough that people and institutions will not be able to adapt smoothly, but will experience jarring discontinuity.
When both of these characteristics are present, we experience a technology as disruptive. It's what we experienced in the '80s, when the personal computer made rapid inroads into businesses of all sizes and “computerization” rendered entire categories of employment obsolete. We saw it again in the late aughts and early 2010s, when the smartphone put the world in everyone's pocket and rapidly re-ordered norms about what it means to be present in person with someone.
Having cited those recent examples, let me be clear that “disruptive” is not the same as “destructive” or “morally wrong.” While it's in our nature to experience rapid, significant change negatively, this doesn't mean that disruption is always bad. In other words, I'm not attempting to render a moral judgment on AI technology by labeling it “disruptive.” (It's vital that we learn to make moral judgments about AI, but it's not what I'm trying to do in this particular discussion.)
In subsequent posts I'll flesh out particular ways in which I expect AI to be disruptive, but to illustrate what disruption means, let's consider one example of a disruption that is already well underway: students using LLMs to cheat on tests and papers.
LLMs like ChatGPT have seriously undermined several important techniques that educators rely on for evaluating students' learning. As a philosophy major, I had several courses in which my grade was entirely based on papers, with not a single in-class quiz or exam the entire semester. I had other classes in which take-home tests were common. My professors ran their courses this way because it gave them low-cost yet reliable assessments of how well we understood the material. It was reliable because cheating was hard: in order to cheat on a paper, for instance, you would have to lift large sections of writing from someone else's paper. This is difficult to do, especially without being detected.
The LLM has upended that picture entirely. Now, cheating on a paper or take-home exam—or just about any form of assessment in an online course—is trivial to do and near-impossible to detect. In the blink of an eye, several major tools in our educational system's toolbox were rendered at best problematic, perhaps even invalid.
To assess the impact of this change, it's crucial to realize that this is not merely a technical or logistical concern. Students must now reckon with a temptation toward moral compromise that is much stronger than it was just a few years ago: the risk of cheating is low, the reward is high, and (perhaps even worse) the perception that “everyone else is doing it” could undermine the integrity of the entire grading system. This is not mere speculation. My organization's work is with college students, and they are telling us that cheating is a big part of their moral landscape now. This is a significant impact to arise from a technological change.
As for the role speed plays in this disruption, imagine how different our situation would be if, instead of exploding onto the scene all at once in 2022, the technical capabilities of ChatGPT had appeared gradually over the course of 30 years. In this scenario:
- Most teachers would have experienced, as students, the temptation to cheat with LLMs, and could apply that experience to their decisions about how to grade their courses.
- Students would have grown up learning what LLMs are and how to distinguish proper from improper uses of them, and would perhaps even have absorbed some of the moral consensus that was forming around their use.
- School districts and university faculties would have formed committees to examine the challenge from LLMs and issue recommendations for changes to grading practices.
- Academia as a whole would have had time for conventional thinking about grading to change.
As it is, none of that has happened. The fast uptake of ChatGPT after its public release meant that, for all practical purposes, the technological change happened instantly. Many (most?) educators are still assigning papers and take-home tests as if nothing had changed. (Just the other day, I spoke with a friend who is an instructor for an intro-level class at a major university. She estimated that half of her students are cheating on exams by using ChatGPT to generate answers. But she doesn't have the authority to change how the course is graded, and the professors who have the authority don't see any urgent need to change.)
To really understand what I mean by disruption, however, we need to step back from this particular example to consider what the total effect of LLMs and other generative AI might be. After all, students cheating with LLMs is just one example in one segment of our society, and a relatively clear-cut example at that. But I believe that the nature of generative AI suggests that we will experience disruption in just about every area of life—and that in most cases the disruption will be harder to describe than the cheating example, but no less real.
Footnotes
[1]: I generated this example using the same newsletter I used then, but this time with an open-source model running on my laptop.
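For the curious, here's a rough sketch of how that kind of experiment can be run against a local model. It assumes an Ollama server running locally; the model name ("llama3") and file name are illustrative placeholders rather than the exact setup I used.

```python
# A minimal sketch, assuming the Ollama server is running locally and a
# model such as "llama3" has already been pulled. The model name and
# file name are placeholders, not the exact setup used for the example.
import requests

with open("newsletter.txt") as f:
    newsletter = f.read()

# Give the model the first three-quarters of the letter, as in the
# original experiment, and ask it to compose a new conclusion.
cutoff = int(len(newsletter) * 0.75)
prompt = (
    "Here is the beginning of a ministry newsletter:\n\n"
    + newsletter[:cutoff]
    + "\n\nWrite a concluding paragraph for this newsletter."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
)
print(resp.json()["response"])
```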
[2]: N.B. from the Correct Terminology Department: Since the release of ChatGPT, most mainstream discourse has used the term “AI” as if it referred exclusively to forms of generative AI like LLMs and image generators. A more precise and historically accurate usage recognizes that “artificial intelligence” is a much broader discipline that was well-established within computer science by the mid-20th century. I will try to avoid use of “AI” that obscures this fact, but I am not going to be pedantic about it.