ChatGPT: The Next Firestorm in Education
- History teaches us that it never works to ban new technology from our classrooms—it is more productive to adjust to its implications while exploring its potential.
- If you have not yet tried ChatGPT, you should experiment with its capabilities and consider its long-term consequences.
- AI-driven tools such as ChatGPT promise a range of benefits that will enhance educational delivery, improve student learning, and accelerate the research process.
In 1931, lecturers at the University College of South Wales in the United Kingdom debated the risk of being replaced by gramophones, a record-playing technology that seemed silly to many at the time. Fast forward to the turn of the millennium, when academics feared the consequences of distance education, with some even predicting the end of university education as we knew it. In 2013, professors at San Jose State University in California made headlines after they wrote a strongly worded letter to a Harvard University professor, because they felt his massive open online course infringed on their own curriculum.
In retrospect, we can smile at these modern Luddites, knowing that their concerns did not come to pass. Given that knowledge, we also might wonder: Is history repeating itself in 2023?
The most recent firestorm was ignited in November 2022, when the company OpenAI released ChatGPT (GPT stands for “generative pre-trained transformer”). Using artificial intelligence (AI), ChatGPT can create realistic responses to question prompts, generating everything from outlines to college essays. That awe-inspiring capability has caused many academics to fear that ChatGPT will challenge the very purpose of higher education.
It’s one thing to know that students can use Google to find bits and pieces of other people’s writing to integrate into their own. It’s quite another to know that students can use ChatGPT to cut short the gradual, arduous journey of learning how to think creatively and critically, argue a point, and make moral judgments.
Professors throughout the world are calling for a complete classroom ban on the technology. They’re recommending a return to face-to-face teaching, sit-in examinations, the use of pen and paper for assignments, the use of new AI tools to detect students using AI tools, and so on.
In just a few weeks, the release of ChatGPT has caused instructors at all levels to reach the collective conclusion that the traditional essay, so long used to assess students’ thinking and writing skills, is dead. And because this tool can quickly write and improve computer code, including code used for malware, even professors in computer science are seriously rethinking what they teach and how they assess students.
But as we respond to this new development, we must learn from history and come to a clear conclusion: Banning new technology will fail. We will not win any arms race to develop AI that detects AI. Therefore, we need to embrace it with open arms, as we have done with new technologies in the past, in ways that account for basic human nature.
What’s All the Fuss About?
There’s no denying that a tool like ChatGPT changes the education equation. AI experts both praise its capabilities and warn of its implications. The road that leads us from artificial narrow intelligence, which includes ChatGPT, to artificial general intelligence (AGI) and eventually to artificial super intelligence (ASI) can indeed look scary.
What happens when an exponentially self-improving algorithm reaches AGI (the kind of general intelligence we humans have) and then boosts its intelligence to thousands or millions of times ours? Author Ray Kurzweil calls such ASI the “singularity.” Will the singularity be kind to us? Or will we end up on the losing end, like the characters in sci-fi movies such as The Matrix and The Terminator?
We are far from reaching the singularity, but if you have not yet tried ChatGPT, you should. Then, you must consider its long-term consequences. The AI tool uses sheer computing power—combined with large language models (LLMs)—to produce nearly humanlike text-based responses. Its current capabilities are already making headlines because of their “wow” effect—what will it mean when the next versions of the technology can produce images, voice, music, and video? It is happening. Try OpenAI’s DALL-E 2 to get a glimpse into why artists and designers, too, are getting worried.
The Power—and Limitations—of LLMs
LLMs are generative statistical models of publicly available text: words, parts of words, individual characters, and spaces, as well as exclamation marks, commas, and other punctuation. These machine-learning models are deep neural networks with hundreds of billions of parameters, pre-trained on hundreds of terabytes of text from the internet, covering virtually everything humans have written and made public.
These models are generative: we can ask them questions and get humanlike replies. But the humanity is only apparent. The responses follow a statistical model of the probabilities of certain words appearing after others.
When ChatGPT answers any question, it is really answering this one: “According to the statistical model of human language on which I have been pre-trained, what words are likely to come next, given your prompt?” The model can be fine-tuned to favor or suppress certain types of language (for example, to remove swear words), but the principle is the same. It brings no real-world context or understanding to what you ask.
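To make the idea concrete, here is a toy Python illustration of next-word prediction from word-pair frequencies. It is a deliberate simplification and an invented example: real LLMs use transformer networks over subword tokens and billions of parameters, but the underlying principle of statistical next-token prediction is the same.

```python
# A toy illustration (not how GPT works internally): predict the next word
# purely from how often words follow one another in a small body of text.
from collections import Counter, defaultdict

corpus = (
    "the professor graded the essay the professor praised the essay "
    "the student wrote the essay the student revised the essay"
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most likely continuation of `word`."""
    counts = following[word]
    total = sum(counts.values())
    probabilities = {w: c / total for w, c in counts.items()}
    return max(probabilities, key=probabilities.get)

print(most_likely_next("the"))        # 'essay' (follows "the" half the time here)
print(most_likely_next("professor"))  # 'graded' (ties with 'praised' at 0.5 each)
```

The model has no idea what an essay or a professor is; it only knows which words have tended to follow which. Scaled up enormously, that is the trick behind ChatGPT's fluency.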
The effect is powerful and produces initial reactions of shock and awe. As humans, we find it easy to think that the tool knows, believes, or understands. However, it does not. It is important that we avoid anthropomorphism when interacting with ChatGPT.
A Challenge to Higher Education’s Purpose
Universities offer more than just vocational job training. At their core, they develop citizens who are knowledgeable about the laws of nature and social systems. They teach students to think critically and creatively, communicate clearly and ethically, and collaborate effectively and responsibly.
AI tools like ChatGPT do not change that core mission, but as they become more powerful, their long-term consequences compel us to look at the purpose of higher education on a deeper level. Higher education exists not only to prepare students for careers, but also to help them develop what Aristotle called phronesis, a term that Thomas Aquinas later translated to mean “practical wisdom.”
There are few, if any, shortcuts to becoming practically wise. No one becomes the equivalent of Dumbledore, the wizard and sage in Harry Potter, without years—or even decades—of practice.
Universities are places where students are immersed in experiences designed to lay the foundation of practical wisdom—where students engage in debates and experience failures, where they receive feedback and make moral judgments. Universities can help students learn to make decisions and take actions that are good for their communities, not just for themselves.
Aristotle also wrote that engaging in dramas—comedies and tragedies—helps people develop a habit of phronesis by allowing them to practice making moral judgments and to see the consequences. That’s why drama became a pillar of classical education.
Since we cannot ban technology (at least not in the free world), we must quickly embrace the implications of AI tools for teaching, learning, research, and outreach. We must view AI as a tool we can use to encourage the development of practical wisdom.
AI Can Enhance Business Education
So, what can we do? Let us first remember how, during the pandemic, universities quickly integrated online technologies into the educational experience. Professors learned to design and deliver first online and then hybrid courses; they adopted polling technologies, breakout rooms, and chat forums.
Of course, with these advances, some students also learned better ways to cheat—just as students of every previous generation have done. In response, professors adopted new online tools to identify plagiarism. There is nothing new here—the response to ChatGPT will be no different.
AI technology and tools naturally extend what educators already do in online formats. For example, we can use AI-driven insights to provide more personalized online learning experiences to learners all over the world, based on their locations, backgrounds, needs, and abilities. We can turn to AI to analyze data from students’ learning and provide tailored feedback to improve student performance. The technology also can help us quickly identify relevant content and even design courses.
On this last point, to demonstrate what AI can help us do, I recently prompted ChatGPT to design an outline for a master’s-level course, in preparation for a curriculum committee meeting. It took less than a minute. The AI generated “new” content by drawing on the many existing courses represented in its training data, which was collected up to a given point in time, and compiled a version that met my specifications and prompts. The committee members were impressed not only by the speed of the outline’s creation, but also by its quality.
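For readers who want to try something similar programmatically rather than in the chat interface, here is a minimal sketch. It assumes the OpenAI Python client (version 1.0 or later) and an API key in the OPENAI_API_KEY environment variable; the model name and the course topic in the prompt are illustrative choices, not recommendations.

```python
# A minimal sketch: asking a chat model to draft a course outline.
# Assumes the OpenAI Python client (openai>=1.0) is installed and an
# API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft an outline for a 12-week master's-level course on business "
    "analytics. List weekly topics, learning objectives, and one "
    "suggested assessment per module."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any current chat model would work here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,        # allow some variety while staying on topic
)

print(response.choices[0].message.content)
```

The point is not the specific prompt but the workflow: the instructor supplies the constraints, the model produces a draft, and a human reviews and revises it before anything reaches a committee or a classroom.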
AI Can Support Administration
AI technology and tools already are improving most, if not all, facets of education. University leaders must consider how to adjust academic policies to take advantage of these advancements and explore how their schools can use the technology in the following areas:
- Improving the efficiency and effectiveness of the admissions process.
- Designing curricula that better prepare students for the future workforce—as well as for changes and advances in AI technology.
- Enhancing student learning experiences and improving student learning outcomes.
- Upholding academic standards and preventing academic integrity issues.
- Increasing the accuracy and efficiency of evaluating and assessing student performance.
It is crucial for universities to proactively address these considerations. Administrators must ensure that they effectively leverage AI technology to enhance education and prepare students for the future.
AI Can Push the Frontier of Research
Academics also can enhance their research in fundamental ways, using AI tools and technologies to achieve a wide range of objectives. These include:
- Analyzing vast amounts of data quickly and identifying patterns and trends that would be difficult to discern via traditional research methods.
- Building predictive models that can forecast future trends and identify potential risks or opportunities.
- Analyzing images and videos to better understand human sentiments and behaviors, from pinpointing facial expressions of people in natural environments to tracking foot traffic in a physical space.
- Extracting first-, second-, and multi-order insights from interview transcripts, and even generating continuous, real-time analysis.
- Automating data collection tasks such as survey design, distribution, and analysis, so that researchers can focus on high-level tasks such as interpreting results and identifying insights.
- Improving the quality of data by identifying and removing errors and outliers, reducing bias, and increasing the accuracy of results (a minimal example of outlier screening appears after this list).
- Accelerating the research process by automating data collection and analysis, providing results in near real-time, and enabling faster conclusions and recommendations.
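As a concrete, if modest, illustration of the data-quality item above, here is a short Python sketch that screens a dataset for statistical outliers before analysis. The simulated data, column name, and three-standard-deviation cutoff are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of outlier screening using numpy, pandas, and scipy.
# The data are simulated; in practice this would be real survey output.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulate 200 survey response times (seconds), then inject two bad readings.
response_time = rng.normal(loc=14, scale=2, size=200)
response_time[[10, 50]] = [240.0, 300.0]
df = pd.DataFrame({"response_time": response_time})

# Drop rows more than 3 standard deviations from the mean.
z_scores = np.abs(stats.zscore(df["response_time"]))
cleaned = df[z_scores < 3]

print(f"Dropped {len(df) - len(cleaned)} outlying row(s) of {len(df)}")
```

Routine screening of this kind is exactly the sort of low-level work AI tools can automate, freeing researchers to spend their time on interpretation.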
It is likely that AI will generate entirely new research methods that we have not yet imagined. Researchers and university leaders need to take advantage of AI’s full potential if they are to stay ahead of the curve in their respective fields.
Keep Calm and Step Up
While AI technology will bring many benefits to education and research, it also presents unresolved ethical and legal implications related to privacy and bias. It's likely that new concerns will arise as the technology evolves. But while these concerns are very real, we can manage them if universities, professors, and students shape the use of AI in higher education. Together, we can integrate the technology in a balanced manner that takes ethical, legal, and human factors into consideration.
Furthermore, we must remember that humans tend to overestimate the immediate impact of new technologies and underestimate their long-term consequences. History has taught us that, as long as we approach this challenge with a sense of caution, ChatGPT and its successors can help us design richer, more purposeful education experiences than ever before.