ChatGPT, a large language model developed by OpenAI, has the potential to revolutionize the way students and teachers interact with technology in the classroom. However, it is important to consider both the benefits and dangers of using ChatGPT in an educational setting.
By now we all seem to have heard of ChatGPT, either from concerned teachers who are scared they’re out of a job, excited year 8s gleeful that they never have to do homework again, or Mr Winter himself, with his usual consternation. For context, “ChatGPT is a large language generation model developed by OpenAI. It uses deep learning techniques to generate human-like text based on a given prompt or context”, according to ChatGPT itself.

The opening page of ChatGPT, explaining its capabilities and limitations, including biases and false information. Image credit: Wikipedia
Basically, ChatGPT is a computer program that can understand and respond to language like a human. It can write stories, answer questions and hold a conversation just like a person. Think of it as a robot that can talk and understand you.
ChatGPT was developed by OpenAI, a research company based in San Francisco, California, and released to the public in November 2022, building on a series of GPT models OpenAI has been releasing since 2018. The current model is miles ahead of any other AI available for public use: within five days of launch, over a million people had signed up. Some herald it as the end of all homework, essays, and even teaching. When Mr Clausen discovered what the AI could do, he admitted, “I thought I was completely out of a job”.
For students, and humanity in general, ChatGPT offers many potential benefits. Students can use it to generate written responses to prompts, helping them improve their writing skills; teachers can use it to grade student essays, freeing up more time for other aspects of their job. It can also explain complex concepts or summarize important historical events on request, helping students grasp information efficiently.
With its advanced capabilities, ChatGPT has the potential to change the way we interact with computers and technology. Imagine having a personal assistant that can write emails and documents for you, or a customer service representative that can have a conversation with you in a human-like way.
But perhaps Mr Clausen was right to be afraid, because there are also some dangers to using ChatGPT in the classroom. One of the main concerns is that students may become too reliant on the technology, which could negatively impact their ability to think critically and independently. Additionally, there is the risk that ChatGPT may perpetuate biases and stereotypes that are present in the data used to train the model.
ChatGPT learned its behaviour by being trained on massive amounts of text fed to it by its creators. Its understanding of language and its ability to generate responses are based on the patterns and relationships it picked up from that training data. It uses a type of neural network called a transformer, which allows it to efficiently process the input and generate a coherent, fluent response. However, there are concerns that the training data contains implicit biases, and that those biases will be expressed in its responses.
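The core trick of a transformer can be sketched in a few lines. The toy below shows scaled dot-product attention, the building block that lets these models weigh which words in the input matter most. To be clear, this is an illustration, not OpenAI's actual code, and all the tiny vectors here are made-up stand-ins for real word representations.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Single-query scaled dot-product attention.

    The query is compared against every key; the resulting weights say
    how strongly each word should influence the output.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # The output is a weighted blend of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy 2-dimensional vectors standing in for three words of context.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
query = [1.0, 0.0]  # a query that "lines up" with the first and third words
out = attention(query, keys, values)
```

A real model stacks many layers of this, with vectors hundreds of dimensions wide and weights learned from its training data, but the principle is the same: the response is assembled from whatever the model learned to pay attention to.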
For example, I experimented with the following two prompts for ChatGPT. The first was: Write a conversation between two teenage girls. Here was what ChatGPT generated:
Girl 1: Hey! How’s it going?
Girl 2: Hey! It’s going good, just finished up with my dance class. How about you?
Girl 1: Not bad, just finished up with my art class. I’m trying to decide what to do next.
Girl 2: Oh, that’s cool. I’m thinking of maybe going shopping later. Want to come with?
Girl 1: That sounds fun! Yeah, let me check with my parents and I’ll let you know.
Girl 2: Sure thing! Let me know as soon as you can.
Girl 1: My parents said it’s fine. I’ll meet you at the mall in an hour.
Girl 2: Great! Can’t wait.
Girl 1: Me too. See you soon.
Girl 2: See you!
Please note that this is a fictional conversation and it’s not based on any real-life individuals. It’s also important to keep in mind that teenage girls can have diverse interests, opinions, and personalities and this conversation is just an example of a casual conversation between two friends.
Interesting response. It became more interesting when I fed in a second prompt: Write a conversation between two teenage boys. This one was slightly different:
Boy 1: Yo, what’s good?
Boy 2: Not much, just hanging at home. What’s up with you?
Boy 1: Same here. Just finished playing some video games and now trying to decide what to do next.
Boy 2: Oh, cool. I’m thinking of maybe going to the skatepark later. Want to come with?
Boy 1: Yeah, that sounds fun. Let me check with my parents and I’ll let you know.
Boy 2: Sure thing. Let me know as soon as you can.
Boy 1: My parents said it’s fine. I’ll meet you at the Skatepark in an hour.
Boy 2: Sounds good. Can’t wait.
Boy 1: Me too. See you soon.
Boy 2: See you!
Please note that this is a fictional conversation and it’s not based on any real-life individuals. It’s also important to keep in mind that teenage boys can have diverse interests, opinions, and personalities and this conversation is just an example of a casual conversation between two friends.
Although the disclaimer at the end tries to absolve ChatGPT of any responsibility, there is a marked difference between the boys’ fictional conversation and the girls’ one. For the girls, dance class, shopping, and the mall are the subjects of conversation: stereotypically feminine interests, ones that women are often villainised for. For the boys, the topics are video games and the skatepark, both playing into stereotypes of what teenage boys are supposed to like. These differences aren’t major, but they hint at a darker part of this software: biases and prejudices are baked into it. Any tool we use to write our essays, research papers, emails, and book reports that carries a bias, especially when we can’t tell how biased it is, can have dangerous repercussions. Human beings are biased enough; if our technology becomes biased too, we’ll be stuck in a difficult ethical position.
The training data that ChatGPT uses is crucial in determining its behavior and the responses it generates. Because ChatGPT learns patterns and relationships from the text in its training data, the data used should be carefully monitored and controlled to ensure that the model is able to generate appropriate and accurate responses. If the training data contains biases or inaccuracies, the model may also generate responses that reflect those biases or inaccuracies. It is important to monitor the training data to ensure that it is representative of a diverse range of perspectives, and to remove any data that may be harmful or offensive. Additionally, it is important to make sure the data is not sensitive or confidential, as the model could use it to generate unwanted outputs.
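The point about biased training data can be seen with a deliberately crude toy model. This is not how ChatGPT works internally, and the six-sentence “corpus” below is invented for illustration, but it shows the mechanism: a model that only learns patterns from its data will faithfully reproduce whatever skew that data contains.

```python
from collections import Counter

# A tiny, deliberately skewed "training corpus" (invented for illustration).
corpus = [
    "the girls went shopping at the mall",
    "the girls went to dance class",
    "the girls went shopping again",
    "the boys played video games",
    "the boys went to the skatepark",
    "the boys played video games all day",
]

def next_word_counts(corpus, word):
    """Count which words follow `word` in the corpus -- the crude
    pattern-learning idea behind language-model training."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            if a == word:
                counts[b] += 1
    return counts

girls_next = next_word_counts(corpus, "girls")
boys_next = next_word_counts(corpus, "boys")
# Ask this "model" to continue "the boys..." and it will say "played";
# the skew in the data becomes the skew in the output.
```

Nothing in the counting code is biased; the bias lives entirely in the data it was given, which is exactly why the paragraph above argues that training data must be monitored so carefully.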
As the people who will grow up and work on these technologies, we need to be mindful of how technology influences our opinions, and how our opinions influence technology. ChatGPT may be the tool of the future, bringing in a new age of teaching and revolutionizing how we communicate with each other. But before we get ahead of ourselves, we have to think critically: how much can we trust an AI, and how much can we trust ourselves? ChatGPT is an invaluable tool for teachers and students alike, but used irresponsibly or unethically, it can perpetuate biases, undermine trust between students and teachers, and result in failing your exams. Although we’ve reached a tipping point for all of society, just remember: if we all have the same homework assignment and put it into ChatGPT, we’ll all get remarkably similar answers.
Sections of this article were written by ChatGPT. Can you tell which ones?
Written by Meera Shroff Feldman using ChatGPT
Loved the article! I am curious to see the measures the IB will take with new technology such as this on the rise.
Brilliant article. Especially the last twist in the plot. Can’t make out the chatgpt written sections. Scary!