A computer information technology instructor walks into a room full of AI and begins telling jokes.
The first joke he tells goes like this: “Why did the computer go to the doctor? Because it had a virus!”
The AI all looked at each other and let out a mechanical chuckle.
– The one and only, ChatGPT
Stephen Graham, an instructor in the Computer Information Technology program at Lethbridge College, splits his time between teaching and working on his doctoral research project: telling jokes to AI.
“For more than 25 years, we computer science-y people have been trying to do stuff with humour,” Stephen said of his current work. “So, we try and get computers to write jokes or computers to identify sarcasm or irony or things that are just plain funny.”
With a Master of Science in Artificial Intelligence, Stephen is very comfortable with AI.
So, when ChatGPT, an AI text generator, debuted earlier this year in a frenzy of excitement, fear, and questions of morality and academic integrity, Stephen’s eyes were on all the possibilities.
One possibility in particular: how can AI evolve our teaching and learning practices?
Before we dive deeper into Stephen’s big-picture thinking, let’s take a moment to put AI text generators (aka chatbots) into perspective.
According to Stephen, current chatbots find the “meaning” of a word through its relationship to other words across millions of sentences. This gives each word a pattern, like a fingerprint, which is known as an embedding. The chatbot then takes these embeddings and learns how they typically interact with each other based on the information it’s fed.
For a more advanced bot like ChatGPT, it’s been trained on billions of interactions, from classic stories to casual conversations. Using that information, it can predict which embeddings follow which, and extrapolate to an often-astounding degree.
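Stephen’s description can be sketched in miniature. The toy Python below is only an illustration of the idea, not how models like ChatGPT actually work internally (those use learned neural embeddings and transformer networks trained on billions of examples); here, each word’s “fingerprint” is simply a tally of its neighbours, and the next word is predicted from counts of which words typically follow which:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of text the model learns its patterns from.
corpus = (
    "the computer had a virus the computer went to the doctor "
    "the doctor fixed the computer"
).split()

# "Embedding" as a crude fingerprint: for each word, count which words
# appear immediately beside it across the corpus.
embedding = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    embedding[word][prev] += 1
    embedding[prev][word] += 1

# Next-word prediction: tally which words follow which, then pick the
# most frequent follower of the prompt word.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "computer"
```

Real chatbots replace these raw counts with dense learned vectors and condition on long stretches of context rather than a single word, but the core move is the same: predict what plausibly comes next, based on the patterns in the training data.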
“When you ask ChatGPT, ‘Hey, tell me a story about Goldilocks meeting a dragon,’” Stephen said, “it will, in its very best way, take that prompt and attempt to predict something that is plausible, relative to the prompt, based on all the information.”
The good, the bad, and the cold-hearted calculations
Clearly, chatbots do not approach writing the same way humans do. In fact, it helps to think of chatbots as calculators of words, according to Stephen. At their worst, these word calculators shortcut learning experiences by supplying learners with quick answers to simple prompts, just as a calculator would for a fifth-grade math student.
Applied differently, however, calculators (and AI text generators) can also empower learners.
“First, we learned how to do arithmetic by hand, so we understand what’s going on with arithmetic,” Stephen said. “Then we say, ‘and here’s a calculator to do the boring arithmetic’ and we teach you about calculus and trigonometry. Then we teach you how to use a calculator to do those things.”
Once a student has a calculator, asking them for the answer to 283 x 569 no longer tests much of anything; the technology makes it trivial.
Likewise, once a student has access to ChatGPT, “simpler” assignment prompts, like writing an essay or coding a program, become outdated.
“I took one of my assignments for Programming 1 and gave it to ChatGPT,” Stephen said. “I said, ‘Hey, I want to write a program that does this in this way and has these features’. And ChatGPT spit out a bunch of code that I would have given a B.”
At first glance, this seems problematic. Can anyone jump into a programming course and get passing grades without showing up or understanding the concepts?
For Stephen, this challenge presents an opportunity to dig into more advanced coding problems, the ones that move a learner from a ‘B’ to an ‘A’.
“It’s the reasoning. It’s the purpose. What’s the why of what we’re doing here?” Stephen said.
Encouraging critical thinking
A major part of coding, Stephen says, is problem-solving. That means learning how to think about a problem in such a way that the path to the solution becomes clear. From here, students demonstrate understanding by clearly communicating their path to the solution.
Coding is simply putting that solution into practice.
So, while ChatGPT can complete that last step (i.e., predict the answer), it only helps if you’ve successfully confronted the problem and are able to communicate, or prompt, it to the AI.
“If you don’t put the right stuff in, you will never get the right stuff out,” he said.
So, once learners understand the basics of coding, ChatGPT can streamline the teaching and learning process by allowing students to focus more fully on the why, on thinking about how best to solve and communicate a problem, and less on the exact coding grammar required to write it out.
Beyond ChatGPT, Stephen is working on an AI project to help streamline assessments and promote more active learning opportunities for Criminal Justice Human Studies (CJHS) students.
Currently, the Criminal Justice department trains students in a virtual reality-like setting on how to de-escalate confrontations. Students are placed in fraught simulations, then prompted on how they would react to avoid violence.
Though these simulations are a valuable training tool that promotes active learning, they are also incredibly time-consuming and create a bottleneck.
“It required somebody to be sitting there the whole time the learner was doing this training,” Stephen said.
However, by creating an AI bot that can assess students’ reactions and tone of voice, Stephen hopes to automate some assessment elements and make the CJHS simulation program much more viable.
There’s no denying AI poses certain challenges to academic integrity, and educators will continue to grapple with this as we–and AI–evolve.
But when we work together to create guidelines and innovative ways to teach and learn, we can leverage this powerful tool to improve teaching and learning, to the benefit of learners.
It’s about seeking to evolve together and letting ourselves smile along the way.
Feeling encouraged, the instructor tells another joke: “Why did the AI cross the road? To get to the other side of the network!” This time, the AI don’t laugh at all.
Puzzled, the instructor asks, “What’s wrong, why didn’t you laugh?” One of the AI responds, “We didn’t find that joke funny. It was too predictable, and we could see the punchline coming from a mile away.”
The instructor sheepishly replies, “Ah, well, I guess I’ll have to program better humor algorithms into my next lesson plan!”
– The rest of ChatGPT’s joke