I’m frustrated by the conversation around ChatGPT in higher education.
So far, the conversation has been largely about using the tool as a text generator and fears about how students can use it for “cheating”. I tend to think this is only the tip of the iceberg, and it frustrates me – this convo is still very young, so maybe I just need to give it a chance to develop. I think the more interesting (and likely more disruptive) conversation is about how the tool can be used for meaning making (and the legal issues around intellectual property). Maybe I’m overreacting, though maybe I’m not.
But meaning making is not the topic of the day! No, the topic of the day is “cheating” and everyone is officially freaking out!
Just in the last few days there have been claims of “abject terror” from a professor who was able to “catch” a student “cheating” with ChatGPT (resulting in the student failing the entire course), calls to return to handwritten, in-person essay writing, and an article about the tool’s impacts in higher ed with over 400 comments (at the time of this writing on Dec 29th) almost entirely focused on fears around “cheating”.
Besides the calls for surveillance and policing, the humanized approaches being proposed include talking with students about ChatGPT and updating your syllabus and assignment ideas to include it. But often these ideas involve getting students to use the tool; helping them to see where it can be useful and where it falls down. This is a go-to approach for the humanistic pedagogue and, don’t get me wrong, I think it is head and shoulders above the cop shit approach. Yet there are some parts of this that I struggle with.
I am skeptical of the tech-inevitability standpoint that ChatGPT is here and we just have to live with it. All-out rejection of this tech is appealing to me, as it seems tied to dark ideologies and does seem different, perhaps more dangerous, than what has come before. I’m just not sure how to go about that all-out rejection. I don’t think trying to hide ChatGPT from students is going to get us very far, and I’ve already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.
Anyway, two good-intention issues around working with ChatGPT and students are giving me pause:
It is a data grab
Many (though not all) of the ideas I’ve heard or seen for assignments that use ChatGPT require students to use it themselves, which requires an OpenAI account. An OpenAI account requires identifiable information like an email address or Google account, which means that it can be tracked. Their privacy policy is pretty clear that they can use this info how they want, and that includes third-party sharing and data possibly being visible to “other users” in a way that seems particularly broad.
I have this same issue with any technology that does not have a legal agreement with the university (and I don’t necessarily even trust those that do). But I’ve also long believed that the university is fighting a futile battle if we really think we can stop students or professors from using things that are outside of university contracts.
Some mitigation ideas for the data grab
Note: I’m sure all of my mitigation ideas are flawed. I’m just throwing out ideas, so feel free to call out my errors and to contribute your own ideas in your own blog post or in the comments below.
Don’t ask students to sign up for their own accounts, and definitely don’t force them to. There is always the option of the professor using their own account to demo things for students, and other creative design approaches could be used to expose students to the tool without having them sign up for accounts.
If students want their own accounts, maybe specifically warn them about some of the issues and encourage them to use a burner email address – but only if they choose to sign up.
I’m not sure if having a shared password on one account for the whole class to use breaks some kind of user policy somewhere. This could get the account taken down, but I wonder how far you could take it.
It is uncompensated student and faculty labor potentially working toward job loss
How do humans learn? Well, that is a complex question that we don’t actually have agreement on, but if you will allow me to simplify one aspect of it: we make mistakes, realize those mistakes (often in collaboration with other humans – some of whom are nice about it and others not so much), and then (this part is key) we correct those mistakes. Machine learning is not that different from this kind of human learning, but it gets more opportunities to get things wrong and it can go through that iterative process faster. Oh, and it doesn’t care about niceness.
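If it helps to see that loop in miniature, here is a deliberately tiny, hypothetical sketch – plain gradient descent on a one-weight model, nothing remotely like ChatGPT’s actual scale or training recipe. The point is only the rhythm: guess, measure the mistake, correct it, repeat.

```python
# Toy illustration of the "make a mistake, notice it, correct it" loop.
# This is a generic gradient-descent sketch, NOT how ChatGPT is trained.

def train(examples, lr=0.1, steps=100):
    """Fit a single weight w so that w * x approximates y."""
    w = 0.0  # start out wrong
    for _ in range(steps):
        for x, y in examples:
            guess = w * x          # make a prediction
            error = guess - y      # realize the mistake
            w -= lr * error * x    # correct it, a little at a time
    return w

# The machine repeats this loop thousands of times, faster than any classroom,
# and it never gets its feelings hurt by the correction.
w = train([(1, 2), (2, 4), (3, 6)])  # learns that w is roughly 2
```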
Note: I cannot even try to position myself as some kind of expert on large language models, AI, or machine learning. I’m just someone who has worked in human learning for over 15 years and who has some idea of how computational stuff works. I’ve also watched a few cartoons and I’ve chatted with ChatGPT about machine learning terms and concepts.*
But even with all of its iterations, it seems to me that human feedback is key to its training, and the kind of assignments we would ask students to take part in using ChatGPT are exactly the kind of human fine-tuning that it (and other tools like it) really needs right now to become more factually accurate and to really polish that voice. Machines can go far just on those failing/succeeding loops that they perform themselves, but that human interaction [chef’s kiss]. And that should be worth something.
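To make that human fine-tuning idea concrete, here is a toy, purely hypothetical sketch – emphatically not OpenAI’s actual pipeline – of how thumbs-up/thumbs-down ratings from people could steer which kind of response a model comes to prefer. The “voices” and ratings here are invented for illustration:

```python
# Toy sketch of human feedback steering a model's choices.
# Hypothetical and hugely simplified – not OpenAI's real RLHF pipeline.
import random

random.seed(0)

# Imagine the model can answer in one of a few "voices", each with a
# preference score that human ratings gradually adjust.
voices = {"stiff": 0.0, "chatty": 0.0, "polished": 0.0}

def human_rating(voice):
    # Stand-in for a person clicking thumbs up/down on a response.
    return {"stiff": -1, "chatty": 0, "polished": 1}[voice]

for _ in range(200):
    choice = random.choice(list(voices))          # model tries a voice
    voices[choice] += 0.1 * human_rating(choice)  # feedback nudges its preference

best = max(voices, key=voices.get)  # after enough ratings, "polished" wins out
```

Every student interaction in an assignment is, in effect, a few more rounds of that loop – which is exactly why it seems like it should be worth something.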
When I imagine what a finely tuned version of ChatGPT might look like, I can’t say it feels very comfortable, and I can’t imagine how it does not mean job/income loss in some way or another. Now, it could also mean job creation, but none of us really have any idea.
What we do know is that ChatGPT’s underlying tech is GPT-3 and that OpenAI plans to release an upgraded version, GPT-4, in 2023. Asking students to train the thing that might take away opportunities from them down the road seems particularly cannibalistic, but I also don’t know how you fight something you don’t understand.
Some ideas for mitigating the labor problem
I’m pretty stuck on this one. My go-to solution for labor problems is compensation, but I don’t know how that works here. I’m thinking that we are all getting ripped off every time we use ChatGPT. Even if it ends up making our lives better, OpenAI is now a for-profit (albeit “capped profit”) company, and they stand to make a lot here (barring legal issues). But I don’t think that OpenAI is going to start paying us any time soon. I suppose college credit is a kind of compensation, but that feels hollow. I do think that students should be aware of the possible labor issues, and no one should be forced to use ChatGPT to pass a course.
I just want to end by saying that we need some guidance, some consensus, some … thing here. I’m not convinced that all uses of ChatGPT are “cheating”, and I’m not sure someone should fail an entire course for using it. I mean, sure, if you pop in a prompt and get a 3-second response that you copy and paste – I can’t call that learning, and maybe you should fail that assignment. But if you use it as a high-end thesaurus, or you know your subject and use ChatGPT to bounce ideas off of and are able to call out when it is clearly wrong… Personally, I’d even go so far as getting a first draft from it, as long as you expand on and cite the parts that come from the tool. I’m not sure these uses are the same thing as “cheating”, and if they are, I’ve likely “cheated” in writing this post. I’ve attempted a citation below.
~~~
** Update 1/26/23: after publishing this post, some readers were looking for more mitigation ideas. In response, I published Prior to (or instead of) using ChatGPT with Your Students, a list of classroom assignments focusing on privacy, labor, history, and more around ChatGPT and AI more broadly.
Image by Yvonne Huijbens from Pixabay
*Some ChatGPT was used in the authoring of this blog post, though very little of the text is generated by ChatGPT. I chatted with it a bit as part of a larger process of questioning my own understanding of machine learning concepts, but this also included reading/watching the hyperlinked sources. My interactions with it included questions and responses around “human-in-the-loop” and “zero-shot learning”, but I didn’t use these terms in this post because I worried that they might not be accessible to my audience. I do think that I have a better understanding of the concepts because of chatting with ChatGPT – especially with the “now explain that to me like I was a 10yr old” prompt. One bit of text generation: I asked it to help me find other words/phrases for “spitballing”, and I went with “throwing out ideas”.
Comments
8 responses to “ChatGPT and Good Intentions in Higher Ed”
Thank you for laying it out this way–I think this is likely one of the first pieces we’ll be reading in my course on it to open up the conversation about if/how people should have accounts on ChatGPT and the implications of that account.
Thanks for reading and sharing this post, Lance. I’m really flattered that you will use it in your course.
This is a really fascinating concept – that by using this tool, which is “helping” some of us right now, we may ultimately be training it to take over our jobs in the future. I’m really interested to run this by some high school kids to see whether it takes the shine off the potential of using it to get “easy A’s” … however, since most of them don’t have a great big-picture perspective at the moment, I think it won’t act as a deterrent! Thanks for sharing your thoughts – they are always insightful!
Thanks for reading and for your comment, Kristen. I think you hit the nail on the head with that lack-of-the-big-picture caveat. I wonder if you could spend some time with them on it, though. I found this report on the impact of AI on future workforces from the US-EU Trade and Technology Council. The whole thing might be too much for one lesson with high school students, but maybe you could pull parts of it for quotes in a slideshow or something. Anyway, if you do talk with your students about it, I’d love to hear what they have to say.
[…] wrote about some concerns that good intentioned higher ed instructors, who want to use ChatGPT with their students, might want to think about. There, I mostly cited […]
So essentially, those who build technology (e.g. tractors) are promoting unemployment… just like consuming an apple is keeping another person from consuming that apple. Utterly ridiculous. All human actions, when accumulated, have implications in the short and long term, here and far away. But that is no reason to NOT do anything. We can also ADAPT. Societies adapt. That is how things progress. Otherwise we would be stuck in complete inaction because of fear of externalities.
Thanks for the comment, Perico. Like I said, this could take away jobs but it could also create some – we just don’t know. I simply advocate for proceeding with caution and not accepting the tech hype uncritically.
[…] how “harness the potential and avert the risks of this game-changing technology”. This blog post by Autumm Caines raises some skeptical questions about the discussion around Chat GPT, as well as linking to other […]