Looking for ChatGPT Teaching Advice? Good Pedagogy is Nothing New

There has been a lot of teaching advice circulating since ChatGPT was released in November. As we approach fall term, I wanted to curate some of it but also consider it through a critical lens. I agree with much of the advice out there, but what I agree with is often nothing new; it is the same good teaching advice we have been giving for a long time. I also sometimes hear advice that rests on problematic practices, rooted in poor digital literacies around privacy and data collection, that will likely perpetuate harm.

Advice Professors May Want to Question

ChatGPT is inevitable/resistance is futile/you can’t put the genie back in the bottle – or can you? 

L.M. Sacasas calls it the Borg Complex, and it is a very tempting line of rhetoric to accept or to perpetuate, but I think it does more harm than good. I’m talking about the “it is inevitable”, “resistance is futile”, “you can’t put the genie back in the bottle” narrative. Advice that comes from this perspective should raise an eyebrow (or two), and those perpetuating this line of thought should think twice. Faculty are smart people, and to many of them this comes off as sales rhetoric. Plus, no one likes to feel forced into something, with no choice or control.

Nothing is predetermined, and this technology is shifting every day under pressure from global regulation, market forces, and public opinion. The “it” is doing a lot of work in the “it is inevitable” narrative. We have already seen OpenAI roll back intrusive data collection by allowing users to opt out of contributing their prompts to the training of the model. We have already seen paid tiers introduced, and we know the costs of running these models are significant. So what exactly is it that is here to stay? Is it cost-free access? Is it data collection? Is it integrations with other tools?

Those who perpetuate this narrative often give examples like calculators but leave out examples like educational television, InBloom, and Google Glass, all of which were predicted to “change the educational landscape” and were mostly big flops (a nod to Mr. Rogers and Mr. Dressup as exceptions). This line of thinking is not necessarily wrong; something is changing. But the thing is… something is always changing. Maybe this is harsh, but for me it is just intellectually lazy, and a red flag that the person giving this advice is not thinking very deeply.

Run your assignments through ChatGPT – but wait…

Over and over I see the advice to run your own assignments through ChatGPT to get an idea of the kinds of answers you might get. It is not the worst advice: at its best, it is given to inspire reflection on the assignment and maybe help the instructor realize the need for some redesign. Alas, at its worst it is given in hopes that the professor will develop a sense of ChatGPT’s “voice” so they can recognize it and call out students who are using it to “cheat”.

This advice gives me pause because simply prompting ChatGPT with statements like “revise this text to have a more academic tone” or “rewrite this so that it sounds more conversational” can change that voice entirely. I also think some faculty may want to be careful about prompting ChatGPT with their assignments, knowing that prompts are collected and used to train the model. Those with worries about academic integrity, or who want to hold on to their intellectual property, may want to wait on this advice.
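To make the “voice” problem concrete, here is a minimal sketch of how one follow-up instruction can shift a model’s register. It assumes the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the model name and prompts are just illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Generate a "default voice" answer, then shift its register in one step.
draft = ask("In about 150 words, explain why objects fall at the same rate in a vacuum.")
recast = ask(
    "Rewrite the following so it sounds like an undergraduate writing "
    "informally, with hedges and a conversational tone:\n\n" + draft
)
print(recast)
```

If recognizing a default tone is the whole detection strategy, a single rewrite prompt like this is all it takes to defeat it.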

You have to use the tool to learn the tool – or do you?

In countless articles, websites, and overheard conversations I have heard the question “What can I do to prepare myself?” answered with “Go create an account and start playing with it”. Indeed, I’ve even seen some go so far as to say this is the “only” way to prepare for how AI might affect your class. Now, I want to preface what I’m about to say by stating that if people want to go create an account and play with ChatGPT, go for it. I’ll even admit that this is what I’ve done myself. But there are lots of reasons why someone might not want to, and I’m not sure that encouraging everyone to run out and create an account is such a great idea.

I do think you have to inform yourself about this technology, and I will get to that in a moment, but using it comes with concerns, and those giving this advice rarely talk about them. The data privacy concerns around the companies running the models most of the public has access to are real. Creating an account tied to personal information with any company opens individuals up to data collection and sale, and it perpetuates surveillance capitalism. Just this week, the FTC opened an investigation into OpenAI over the harms the tool may be inflicting. This should especially be considered if you are asking students to create accounts. Additionally, there are real human labor costs and climate costs that some may not want to contribute to. If we are going to advise people to run out, create accounts, and play with this technology, I think we should start with some of these concerns first.

Finally, I will say that using ChatGPT can be a great way to learn about these tools, but I’ve seen this advice backfire many times. The truth is that all of these bots (ChatGPT, Bing, Bard, etc.) are really easy to use poorly; using them in interesting and unique ways takes some thought. Prompt engineering is a bit of an art. I’ve seen many people use poor prompting to explore these tools and then write them off as nothing special, because they stop too soon and don’t ask the right questions in the right way. The sketch below shows the kind of difference a little specificity can make.
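As a hedged illustration of that point, here is a minimal sketch comparing a vague prompt with a constrained one. Again, it assumes the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the prompts and model name are illustrative, not a recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt tends to return generic, listicle-style advice...
print(ask("How do I improve collaboration in my class?"))

# ...while a constrained prompt (role, course, format, limits) gives the
# model something concrete to work with.
print(ask(
    "You are advising a physics professor teaching a 30-student intro "
    "mechanics course with weekly labs. Suggest two collaboration "
    "activities that fit a 50-minute class, and name one likely failure "
    "mode for each."
))
```

Running both and comparing the answers is a quick way to see why people who stop at the first, vague prompt tend to write these tools off.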

Evergreen Teaching Advice

Learn about ubiquitous tools, their uses, and impacts

Every once in a while a technology comes around that impacts the larger society through sheer volume of usage. Think social media, mobile devices, and even the internet itself. These are technologies your students are likely to come in with direct experience of, or at least questions about, so you might want to know a thing or two about them before you get into class. As mentioned above, a lot of folks say to just go out and use the tool, and that is one way. But I’m not here to force anyone to do anything they don’t want to, and I’m also just not convinced it is the best way to learn about generative AI.

There is plenty to be learned by reading about how this tech functions, its various impacts on industries and society, and its pedagogical uses. Beyond reading, there are plenty of demo videos on YouTube and TikTok that actually show you how people are using it. There are free courses on prompt engineering filled with examples of inputs and outputs. You could also partner with a friend or colleague who is using the tool and ask them to show you examples of ways it has benefited or failed them.

Let students know where you stand

Whether you are 100% embracing ChatGPT or rejecting it outright, you should know enough to take some kind of stance, or at least to start a conversation and respond to student questions. Because of the variety of ways it can be used and the variety of ways different classes may approach it, you should make clear what use of ChatGPT means for your class, especially if you consider some uses to be cheating. Don’t be afraid to let students know if you are still making up your mind. It is still really new, and many of us are trying to figure out what it means for our disciplines and fields.

Don’t Freak Out About Cheating

  • Don’t fail all of your students
  • Don’t use GenAI tools as detectors (see the section above about learning how the tools actually work)
  • If you use actual detectors, don’t rely on them alone to determine cheating

Design

There is a place for assignment/course/assessment design, but I think it is trickier than many are making it out to be, and it is going to take some classroom trial and error to really figure out. I do think that in some cases we have let writing stand in for knowing, and this technology will upset that. Some disciplines will move away from assessing through writing and toward projects and presentations. Yes, some will move to surveillance, but I’m hoping that will be kept to a minimum.

Focus on Care

Care about your students. Trust them. Have conversations with them.

Image by Hans from Pixabay

ChatGPT ID/FacDev?

Many of us who work in higher ed have been thinking about ChatGPT since OpenAI dropped free access to it at the end of November. The fancy new chatbot, which can generate essays, responses to qualitative quiz questions, and discussion board posts, has everyone thinking about academic integrity and “cheating”. The tech has been around for several years, but offering access for free, right before finals, caused quite the stir in higher education.

Something I hear a lot of people talk about, but which I feel is still not getting enough attention, is the question of why this tech is free. It is not much of a question, really; everyone seems aware that the tool has been given to the public for free so that massive numbers of people can help train it.

So, like most of these kinds of things, it is not really free. We may be having fun playing with it, but we are exchanging our time, our creativity in writing questions and prompts, and our data for access. You need to create an account, which needs to be tied to an email and, I believe, a phone number. At the bottom of the ChatGPT input screen it clearly reads “ChatGPT Dec 15 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.” But improve for whom, and to what end?

Most of the folks I have heard talk about this hint at how it is being trained by public labor for free, on public data obtained for free, so that eventually it can be used to create corporate products that will likely take away jobs and make billions for its creators. But after throwing this fact out nonchalantly, often with a tone insinuating that it is a no-brainer, they move on to talk about how they used it to generate a rap in the style of Jay-Z, ask it questions about philosophy, or try to get it to mimic student responses to see if they (or their colleagues) could be fooled by it. I realize I’m about to be guilty of doing the same thing here; perhaps I point this out to try to redeem some semblance of integrity. This work continues to put me in a paradox.

OpenAI seems more than aware of the potential economic impacts of all of this, and they have a research agenda around it, but that gives me little comfort. I can’t help but think about my own position in instructional design/faculty development/academic technology.

“Instructional design” (ID) can live in lots of places in the university, and the position takes on a different flavor depending on where it exists. You have a different job if you are an instructional designer in the teaching center vs. the library vs. IT vs. HR. Not all of us work with faculty, and there is variation even among those of us who do. Some IDs are content focused, using skills like graphic design or videography to develop materials. My work has never been very content-creation heavy, though I do like to create content. Working in smaller schools with tight budgets, I mostly consult with faculty, and for many of us this consulting role is a big part of the work. I talk with faculty about their teaching and offer advice about what they can do better.

I talk with them… I offer advice… You see where I’m going here.

This made me wonder what kind of instructional design/faculty development consultant ChatGPT would make, so I decided to have a very basic conversation with it, posing as a faculty member in physics. I copied the transcript of the chat into this Google Doc, and I’m sharing it publicly for reflection by others in the field.

As for my own reflection 

I’ll say that the results of my chat are much like what I have seen in the disciplines: perfectly plausible responses that sound very human but don’t go very deep. And it is in that lack of depth that those of us who do this work will recognize the flaws.

The bot falls down when asked about discipline-specific approaches and when asked for anything that could connect to what other instructors have tried in the past. It glosses over specifics around EdTech and sometimes gets them flat out wrong (the directions it gives for dropping the lowest grades in Canvas sound more like directions for Moodle, personally). I’m not actually a physics professor, so I didn’t (and couldn’t) ask it for specific advice about teaching individual topics in physics. In my experience, it does better when you ask it to narrow its scope, so asking more detailed questions could make a big difference.

Still, the results closely resemble a lot of the faculty development advice I see. Be it on blogs, websites, and listservs, or in what I sometimes hear come out of people’s mouths, much of it is the same basic stuff over and over. Professors are busy, so giving them simple lists and easy answers for improvement is quite common, and ChatGPT mimics these basics pretty well, including attempts at big-picture synthesis. It ends its advice to me about developing as a teacher by saying, “Remember, being a lifelong learner as an educator is an important part of staying current and effective in your teaching practice.”

It’s not surprising. ChatGPT is trained on huge data dumps of the internet (including Reddit, Wikipedia, and websites). I threw the phrase “5 things you can do to improve collaboration in your class” into Google and got back 712,000,000 results of various listicle-type pedagogy puff pieces. With so much of this stuff out there already, maybe it doesn’t matter that a chatbot can regurgitate it too? But I have to wonder what it means for our work.

I’ve been struggling with a kind of imposter syndrome from this work for some time. I say a “kind of” imposter syndrome because I refuse to take all of the blame: I can’t shake the feeling that at least some of it comes from the work itself, that the nature of the work encourages it. So many of us are limited in our opportunities to go deeper or to reflect in more meaningful ways. We are incentivized to create and repeat these easy answers and “best practices”. Since the pandemic, many of our professional development organizations have raised the prices of in-person conferences and rejected accessible virtual options. Meanwhile, professional development funds from our institutions often have not increased, and don’t get me started on how frequently we are throttled in our own attempts to teach directly.

Many of us rely on disclaimers, positioning ourselves in various ways to account for our lack of knowledge or experience in domain-specific areas, with technology, or even with the specifics of teaching. At the beginning of that chat, the bot even gave me its own disclaimer: “As a language model, I don’t have personal experience in instructional design or teaching, but I can provide general information and suggestions based on best practices in the field.” So, some of this is just the nature of the work, but it is depressing nonetheless.

No one really knows yet what ChatGPT means for higher ed, and I’ve not seen much talk about what it means for EdTech, instructional design, or faculty development. We are in a kind of wait-and-see, react-when-we-can moment. I guess I’m hopeful this will open up room for more thoughtful and creative work. But I worry that it will force us to ask some hard questions about what kind of work is meaningful, and that will cause some casualties.

How I got here/More if you want it

If you were paying attention to the chatbot/large language model (LLM) conversation at all before Nov. 30th, or even if you have dug a little deeper since, you have likely heard about this paper by Bender, Gebru, et al. If you haven’t, and you want a critical look at the dangers of this stuff (including the environmental impacts and the perpetuation of biases), it is what you should really be paying attention to. I also found this piece from Jill Walker Rettberg really helpful in understanding the underlying datasets GPT-3 is trained on and in reflecting on the culture they come from. The relationships and evolution between ChatGPT, GPT-3 (as well as 1 and 2), InstructGPT, and all the rest are quite confusing, but this post (from the Walmart Global Tech Blog, of all places) helps a bit. For even more, Lee Skallerup Bessette has created a Zotero library collecting all things ChatGPT in higher ed.

In addition, I ended my last post (which was mostly a reflection on current happenings with Twitter) with a reflection on European Starlings. Yes, starlings, the invasive bird species that are problematic but, as I wrote in that post, also “strange and wonderful for lots of reasons”. I had focused on their murmurations, but another important facet of the starling’s disposition is its proclivity for mimicry, which of course got me thinking about things that can talk but not really understand.

~~~~

Featured Image by Kev from Pixabay