Looking for ChatGPT Teaching Advice? Good Pedagogy is Nothing New

There has been a lot of advice around teaching since ChatGPT was released in November. As we approach fall term I wanted to curate some of it but also consider it through a critical lens. I agree with much of the advice out there, but find that what I agree with is often nothing new; it is the same good teaching advice that we have been giving for a long time. But I also sometimes hear advice rooted in problematic practices, grounded in poor digital literacies around privacy and data collection, that will likely perpetuate harm.

Advice Professors May Want to Question

ChatGPT is inevitable/resistance is futile/you can’t put the genie back in the bottle – or can you? 

L.M. Sacasas calls it the Borg Complex and it is often a very tempting line of rhetoric to accept or to perpetuate, but I think it does more harm than good. I'm talking about the "it is inevitable", "resistance is futile", "you can't put the genie back in the bottle" narrative. Advice that comes from this perspective should raise an eyebrow (or two), and those perpetuating this line of thought should think twice. Faculty are smart people and I think this comes off more as sales rhetoric to many of them. Plus, no one likes to feel like they are being forced into something and that they have no choice or control.

Nothing is predetermined and this technology is shifting and changing every day in response to global regulation, market forces, and public opinion. The "it" is doing a lot of work in the "it is inevitable" narrative. We have already seen OpenAI roll back intrusive data collection features by allowing users to opt out of contributing their prompts to the training of the model. We have already seen paid tiers introduced and we know the costs of running these models are significant. What exactly is it that is here to stay? Is it cost-free access? Is it data collection? Is it integrations with other tools?

Those who perpetuate this narrative will often give examples like calculators but leave out examples like educational television, InBloom, and Google Glass, all of which were predicted to "change the educational landscape" and were mostly big flops (a nod to Mr. Rogers and Mr. Dressup as exceptions). This line of thinking is not necessarily wrong, something is changing, but the thing is… something is always changing. Maybe this is harsh, but for me, it is just intellectually lazy and a red flag that the person giving this advice is not thinking about things very deeply.

Run your assignments through ChatGPT – but wait…

Over and over I see the advice to run your own assignments through ChatGPT to get an idea of what kind of answers you might get. It is not the worst advice: at its best, it is given to inspire reflection on the assignment and maybe help the instructor realize the need for some assignment redesign. Alas, at its worst this advice is given in hopes that the instructor will develop a sense of ChatGPT's "voice" so that they can recognize it and call out students who are using it to "cheat".

This advice gives me pause because simply prompting ChatGPT with statements like "revise this text to have a more academic tone" or even "rewrite this so that it sounds more conversational" can change that voice entirely. I also think that some faculty might want to be careful about prompting ChatGPT with their assignments, knowing that prompts are collected and used to train the model. Those with worries about academic integrity or who want to hold on to their intellectual property may want to wait on this advice.

You have to use the tool to learn the tool – or do you?

In countless articles, on websites, and in overheard conversations I have heard the question "What can I do to prepare myself?" answered with "Go create an account and start playing with it". Indeed, I've even seen some who have gone so far as to say this is the "only" way to prepare for how AI might affect your class. Now I want to preface what I'm about to say by stating that if people want to go create an account and play with ChatGPT then go for it. I'll even admit that this is what I've done myself. But there are lots of reasons why someone might not want to, and I'm not sure that encouraging everyone to run out and create an account is such a great idea.

I do think that you have to inform yourself about this technology, and I will get to that in a moment, but using it comes with concerns and those giving this advice rarely talk about them. Data privacy concerns are real with the companies running the models that most of the public has access to. Creating an account tied to personal information with any company opens individuals up to data collection and sale; it also perpetuates surveillance capitalism. The FTC has just this week opened an investigation into OpenAI over the harms the tool may be inflicting. This should especially be considered if you are asking students to create accounts. Additionally, there are real human labor costs and climate costs that some may not want to contribute to. If we are going to give this advice to run out and create accounts and play with this technology I think that we should start with some of these concerns first.

Finally, I will say that I do think that using ChatGPT can be a great way to learn about these tools but I've seen this advice backfire so many times. The truth is that all of these bots (ChatGPT, Bing, Bard, etc.) are really easy to use poorly, but using them in interesting and unique ways takes some thought. Prompt engineering is a bit of an art; there is a real difference between tossing in a one-line request and giving the bot a role, an audience, and constraints to work within. I've seen many people use poor prompting to explore these tools and then write them off as nothing special because they stop too soon and don't ask the right questions in the right way.

Evergreen Teaching Advice

Learn about ubiquitous tools, their uses, and impacts

Every once in a while a technology comes around that impacts larger society through sheer volume of usage. Think social media, mobile devices, and even the internet itself. These are technologies where your students are likely to come in with direct experience, or at least questions about using them. And so you might want to know a thing or two about it before you get into class. As mentioned above, a lot of folks say just go out and use the tool, and that is one way, but I'm not here to force anyone to do anything they don't want to and I'm also just not convinced it is the best way to learn about generative AI.

There is plenty to be learned by reading about how this tech functions, its various impacts on industries/society, as well as pedagogical uses. Besides just reading about the tool, there are plenty of demo videos on YouTube and TikTok that actually show you how people are using it. There are free courses on prompt engineering which are filled with examples of inputs/outputs. You could also partner with a friend or a colleague who is using the tool and ask them to show you examples of ways it has benefited/failed them.

Let students know where you stand

Whether you are 100% embracing or rejecting ChatGPT, you should know enough to take some kind of stance, or at least start a conversation, or respond to student questions. Because of the variety of ways that it can be used and the variety of ways that different classes may approach it, you should make it clear what use of ChatGPT means for your class, especially if you consider some uses to be cheating. Don't be afraid to let students know if you are still making up your mind about it. It is still really new and many of us are trying to figure out what it means for our disciplines and fields.

Don’t Freak Out About Cheating

  • Don’t fail all of your students
  • Don’t use GenAI tools as detectors (see section about learning how the tools actually work)
  • If you use actual detectors, don't rely on them alone to determine cheating

Design

There is a place for assignment/course/assessment design, but I think that it is trickier than many are making it out to be and that it is going to take some classroom trial and error to really figure out. I do think that in some cases we have let writing stand in for knowing, and this technology will upset that. Some disciplines will move to assess without writing, turning instead to projects and presentations. Yes, some will move to surveillance, but I'm hoping that will be kept to a minimum.

Focus on Care

Care about your students. Trust them. Have conversations with them.

Image by Hans from Pixabay

Hot(take) Chatbot Summer: Considering Value Propositions

They say… one of the keys to successful blogging is regular and systematic publishing of posts. Well dear reader, it has been six months since my last entry and alas I must admit I’m just not that kind of blogger… but here we are. 

The last few posts here really took off and far surpassed anything else I have ever published in this space, also garnering several new subscribers – hello new folks. This is likely because those posts were about generative AI and specifically ChatGPT. I’m of the firm belief that anything published on this topic in the last six months was going to skyrocket but I’m flattered that some found those posts useful. 

Over the last six months I’ve found myself somewhere between fascination and boredom around the bots. 

…They also say, the more something changes the more it stays the same. The headlines say this is a big change in the fabric of the tech landscape but there is a part of me that can't help but feel a little… meh. I'll admit something seems big and earth shaking but something also seems blasé – like we have been here before. Everyone seems to be talking, all at once, and there is a lot of overlap in what is being said. It seems everyone is evangelizing about how this tech will change the world for good or bad, but the thing about the world is that it is in her nature to change no matter what.

I have just not been that interested in adding my voice to the chorus and saying even more of the same. I've been quiet on purpose. I'm not in a rush to push out my next post/article/hottake around generative AI. But I am still reading, I'm still listening, and I'm still thinking. And a six-month update on where my head is at seems… reasonable?

So this is just an update with some of the things on my mind right now regarding generative AI in higher education:

Enterprise Access and Other Integrations

A big part of my past concerns with ChatGPT have been around privacy. Even with numerous examples of what, say, social media companies have done with our data, people still don't seem to have a good sense of platform literacy. Many still sign up for accounts and apps with no regard. It is just an email address, a phone number, oh look I can log in with my Google account – how convenient! [broken heart emoji] This led me to call for better digital literacy/citizenship but I have been doing that for a long time now and it only goes so far. (No shade on others who do that work – I just want more of it).

But word on the street is that enterprise access might be on the way meaning you don’t sign up with a personal account but with an institutionally recognized account. OpenAI mentioned this when they were forced to expand privacy functions of the bot because of the Italy ban stating: “We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users.”

Though there was no mention of "educational" enterprise access, this is an interesting prospect, and I'd have to think that if it is available for businesses, education could sign on if they so desired/can get the lawyers on board. It is a prospect that does give me some hope of relief. I would hope that this would mean some education-specific data safeguards negotiated by university/college higher ups. That there would be some expanded restrictions on the sale of personal data to third parties, for instance, and oh I don't know, maybe that some of that stuff in FERPA would be considered as another level of protection above commercial offerings. But these are just my hopes.

I also admit that I might just be being naive. It is important to call out that this kind of access, if it were to come to pass, would simply be a power shift. Remember, "end users will be managed" by someone (your boss or your boss's boss rather than OpenAI) and the specifics of all of that are likely to be buried in technical deployment details and more contract legalese. And who even knows if that access or language will be accessible/understandable. I've seen folks asking data questions of their institutions go down that rabbit hole only to be met by those in authority who tell them that the language is buried in closed contracts and access to those interfaces is only available to certain administrators. They are then faced with filing a FOIA request to try to get answers. And that's just a great look when you decide to ask for that raise.

So, educational enterprise access is on my mind this summer. I'm wondering what it looks like in terms of licensing and pricing. I'm especially wondering what it will mean for educational data privacy. But I'm also wondering if institutions will be able to train these models with enterprise access to behave in specific ways, the way that some other educational integrations (who have been granted special GPT-4 API access) have done. I imagine recent announcements about lower prices and "function calling" with APIs will further enable things? But I'm wondering what it means when some schools can afford such access and others can't. Some may just prefer to use the free web access but, despite all the inevitability rhetoric, I question how sustainable a free web interface of this tech is given the cost of keeping it running and the regrets of its creators. But I imagine that free training data is worth a ton to them.

Okay, what else…

Cheating and Detectors

Detectors continue to be front of mind for me especially with the Turnitin mess. 

I continue to be flabbergasted by this one. I mean why aren’t more people talking about this? I know there has been a good amount of press but I really don’t know why it is not the top story in every higher ed and edtech outlet. 

In this round of AI arms race madness, in April Turnitin decided to turn on an AI detection feature that they had never tested in a real-world environment and to not allow any of their customers to opt out. Rather than offering this functionality to a small number of test schools and piloting it to get some feedback, they just forced it on everyone.

They claimed it had a 1% false positive rate but, surprise, that proved to be an underestimate once they actually started using it in real schools. They have never provided access to the specifics of their internal research or testing data. They have just expected everyone to take their word for it that they can detect synthetic text (never mind that this is still very much a complex subject area). Oh, and they are not just adding functionality to their product out of the goodness of their hearts – no, this is a paid feature that is "free for now". This is the drug dealer, first-hit-is-free sales tactic if I've ever seen it, but it is worse because these are existing customers who can't opt out.
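Even taking the claimed 1% at face value, the base-rate arithmetic is grim at the scale Turnitin operates. Here is a quick back-of-the-envelope sketch (the volume numbers are hypothetical, just to illustrate the point):

```python
# Back-of-the-envelope: what a 1% false positive rate means at scale.
# All volume numbers below are hypothetical illustrations.

false_positive_rate = 0.01   # Turnitin's claimed rate (reportedly higher in practice)
papers_per_term = 200_000    # hypothetical: submissions across one large university system
share_fully_human = 0.90     # hypothetical: assume most papers involve no AI at all

falsely_flagged = papers_per_term * share_fully_human * false_positive_rate
print(f"Students falsely flagged per term: {falsely_flagged:,.0f}")
# -> Students falsely flagged per term: 1,800
```

Every one of those flags is a real student facing an accusation backed by nothing more than a black-box percentage.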

And the infuriating part is that, even though some pushed back and were able to get them to allow an opt out, most schools (in the US) just let them do it. 

I'm still not sure where cheating begins and resourcefulness or collaboration ends, but I'm skeptical of black boxes that give us nice percentages and make it all look super clear and easy.

Final Thoughts – for now

There are a ton of other things running through my head too. Economic and labor impacts are a big one, as is climate. I especially enjoyed this Marketplace report featuring Hugging Face's climate lead Sasha Luccioni because of the "is it worth it" positioning. She makes the case, for instance, that search is already very good, that it already runs on AI, and that the models it currently uses are far more efficient than LLMs. From the article:

“I’m not against innovation. I don’t think we should all just stop doing AI research. But for me, it’s kind of the basis of that research to say “this thing costs this much” not just in terms of money, but also in terms of planetary and human costs. Then, if that calculation makes sense, then yes, we’re going to use the AI. But currently, we’re not making that logical kind of decision. Right now, it’s more like “why shouldn’t we use the ChatGPT to do web search?” But I think environmental factors should be considered, because the new technology is lot less efficient than the current model.”

I wonder what the “is it worth it” position is for education? 

Of course I’m continuing to pay attention to all the different ways that people are proposing that we use ChatGPT in the classroom. I like some of the ideas. They seem neat. And I’ve seen plenty of people warning students about bias and how the bots can straight up get it wrong sometimes. And that is a good thing. 

But I still continue to see many people skip the privacy issues when talking about these things. Even as companies continue to ban this tech and create all manner of policy around how it can be used by their employees for fear of leaking company information to the bots. But it is just students' personal data – what could happen? With all this talk about teaching students prompting skills I'd think that there would be a place for teaching privacy too – it seems like it would transfer nicely to the workplace.

When is it worth it to use a chatbot in the classroom and when is it just window dressing/look at me being an innovative professor? I’m not exactly sure but it is the question I’m most interested in right now.

So, that is my summer check-in. Perhaps I'll do another in six months – maybe I'll even post on something other than generative AI.

Featured Photo by Ethan Robertson on Unsplash

Prior to (or instead of) using ChatGPT with your students

I have been thinking, reading, and writing a lot about OpenAI’s ChatGPT product over the last month. I’ve been writing from the perspective of instructional design/faculty development/edtech mostly in higher education, though I did dive into a bit of K-12 (which is totally out of my element).

I understand the allure of the tool and the temptation to have students use it. It is new and shiny and everyone is talking about it. It is also scary, and sometimes we can assuage our fears by taking them on directly. 

But I suggested across two other posts that educators might not want to have students directly work with ChatGPT via having them sign up for a free OpenAI account for the following reasons:

  • Student data acquisition by OpenAI
    • Anytime you use a tool that needs an account, the company now has an identifier with which they can tie your use of the site to your identity
    • You need to provide personally identifiable information like an email/phone number/Google account to create your OpenAI account
    • Their terms are quite clear about collecting and using data themselves as well as sharing/selling to third parties
  • Labor Issues 
    • Using ChatGPT is providing free labor to OpenAI in their product development. They are clear about this in their terms and on their FAQ page.
    • I don't want to go down the "robots are coming for our jobs" path, but many people (including the people building these tools) do envision AI having major impacts on the job market. Is it okay to ask students to help train the very thing that might take opportunities from them? It could be creating opportunities too, but shouldn't they understand that?
    • And I didn't mention this in the other posts, but AI has horrible labor practices, exploiting global workers who train these systems. Do we want students to be part of that? Shouldn't they at least know?
  • ChatGPT is not a stable release; it could change or go away at any point. It is estimated that it costs $3 million USD every month to keep it running. What happens to your assignments if it is down/gone?
    • ChatGPT has been released as a “Research Preview” and no one really knows what that is
      • It might be similar to a “Public Beta” or a “Developer’s Beta” but both of these come with an assumption of a public release which we do not have with ChatGPT
    • It is often down or slow because of the large number of users
    • Features are changing all of the time (for instance chat histories have disappeared and reappeared a few times already)

 

After suggesting this I got a good bit of pushback. "But AI is such a big deal Autumm, and it is going to change the world, and students need to be prepared … and… digital literacy and… and… and…"

I hear you, my well-intentioned pedagogue. And yet I still have these concerns. So, here are just some ideas for things you may want to do with your students prior to having them directly use the ChatGPT product with a free OpenAI account – and (I'm kind of hoping) maybe you want to have them do these things instead of using ChatGPT with a free OpenAI account.

Socially annotate OpenAI's privacy policy and terms of service

Wouldn't it be great if students better understood what they were getting themselves into by creating that account with OpenAI? A social annotation activity, using a tool like Hypothesis, on OpenAI's privacy policy and terms of service (TOS) can start this understanding. I've done this several times out on the open web with various collaborators. TOS and privacy policies are dense technical and legal readings, so doing it as a group with inline comments really helps. If you can invite a guest annotator who has a background in law or policy, great; if not, consider assigning a reading before the annotation about what to look for in a privacy policy and how to read a TOS.

*Note – This one can be somewhat problematic if your school does not provide a social annotation tool, as students will likely need to create an account with a social annotation provider who does not have an agreement with your school, and that could be the same problem you are trying to avoid. I do feel better about Hypothesis because they are a non-profit, but you could also get around this by copying the terms/policy and sharing it in your school-supported cloud word processor (Google Docs, MS365, etc.) and just using the commenting feature.

Play the Data, Privacy, and Identity game with your students

Instead of "playing" with ChatGPT (cough, not a toy, cough) in your class you could play the Data, Privacy, and Identity game developed by Jeannie Crowley, Ed Saber, and Kenny Graves. The game was first developed as an in-person activity; read Jeannie's blog post for an overview. Then check out the resource page where you can read instructions and print off cards. Looking for an online version? Since the team published this with a CC 4.0 license, I adapted it into an online version on a simple WordPress site using H5P that requires no login and collects no data.

Discuss big issues around AI like labor and climate

Have a discussion with students about big issues with AI that are likely to affect them. A good overview of the issues with large language models can be found in Bender, Gebru, McMillan-Major, and Shmitchell's 2021 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? A discussion of this paper will set you up to dive deeper on the issues.

Impacts of artificial intelligence on labor speak directly to the world of work that students will graduate into. This report from the US-EU Trade and Technology Council about the impact on future workforces can be a starting point. You may want to break it into sections, and keep in mind that it is US/EU centric. Follow up with (or start with, depending on your context) a more global perspective. You could check out MIT Technology Review's whole series of articles on AI Colonialism or the recent reporting from Time about OpenAI paying workers in Kenya less than $2 a day for grueling work training the model (you will need a content warning for SA and have to figure out how to get around the paywall for the Time exclusive, but other great articles about this report exist, like this one from Chloe Xiang on Motherboard).

Large language models like ChatGPT take a lot of computing power to run, and all of that electricity has a carbon footprint that we are still trying to figure out how to measure. Discussing this with students helps them to understand these impacts. Maybe start with a discussion around this MIT Tech Review article on how Hugging Face is attempting to better measure things.

Conduct a technoethical audit 

If you don't know about all the resources on the Civics of Technology site you are in for a treat. Here I'm specifically going to recommend their EdTech Audit resources, but the site has a great larger curriculum with all kinds of resources. I'm not sure that ChatGPT is really "EdTech", but if you are thinking of having students use it then you are using it as EdTech. I think the questions, handouts, and examples provided here will serve you in getting your students to analyze some of the implications from the articles and activities listed above.

Analyze your data collected from other social media platforms

Check out HestiaLabs' Digipower Academy. They have several tools that run in the browser, collect no data, and allow you to examine and better understand the way social media platforms use your data for targeted advertising. It does require that you request a data export from these various platforms, but they have instructions for how to do that for each platform. After the tools analyze your data they provide you with dashboards and metrics to help you better understand why you are being targeted the way that you are (because we are all being targeted in some way). Don't feel comfortable having students download their own data (can they really secure it)? They have sample data you can run too.

Work Through The People’s Guide to AI

What even is an "algorithm"? What is the difference between AI and machine learning? The People's Guide to AI is a workbook helping you to answer these questions. It is filled with relatable descriptions, activities, prompts, and so much more! You could spend the whole term working through this thing! Written by Mimi Onuoha and Diana Nucera a.k.a. Mother Cyborg, with design and illustration by And Also Too. Licensed CC-NC-SA 4.0, this workbook is also available in print for the affordable price of just $7 USD – and you will want to write in it, so paper copies are not a bad idea.

Learning objectives

These are just some of my ideas for activities and assignments. You can come up with your own but perhaps you might consider the following learning objectives (or something like them) to guide you. 

Prior to creating an account with OpenAI students will:

  • Discuss the value that their personal data holds with various actors (themselves, friends/family, school, corporate, government) 
  • Demonstrate an understanding of typical tech product cycles and compare them to non-typical ones
  • Compare how power is held by various actors (themselves, friends/family, school, corporate, government) 
  • Analyze workforce implications of AI at home and globally  
  • Create a personal data security plan 

 

These are just some ideas, and I'm sure they are flawed in various ways, I'm sure they won't work for every course, and I'm sure some folks are already doing something similar or even better. But the message I'm trying to send here is just to think about some of the larger picture in AI, and to have students think about it, before you have students sign up and start "playing" with something they don't understand.

~~~~

Image by Kevin from Pixabay

  • *This post is especially messy as I accidentally hit publish while drafting late at night. The section Analyze your data collected from other social media platforms was added the next morning. And then later the next evening I added the Work Through the People's Guide to AI section. I just keep thinking of things to add!
  • ** No ChatGPT was used in composing this post

In Defense of “Banning” ChatGPT

The current big news on ChatGPT is around the decisions by K-12 school districts to pause, think, push back and sometimes “ban” ChatGPT. I’ve mostly heard about NYC Department of Ed because they have actually put a “ban” in place but other districts are now considering their approach. What does it mean to “ban” a technology in a school? In this case it means blocking it on their networks and on all school issued devices. Another way of pushing back though is simply spreading the message that this tool is not aligned with the school’s values. 

Responses that I've seen are mostly dismissive of the schools who are considering or implementing a ban and often buy into the techno-inevitability frame: this is the future; you can't fight it. But I'm more sympathetic to these schools' stance. While I do think that they have set themselves up for a Streisand effect, and I realize that there are other ways to access the tool on cell networks and personal devices, I also feel the need to defend this approach.

I know little about K-12 education myself; I mostly work in higher ed. But I do know that K-12 schools block parts of the internet all of the time, and I'm pretty sure that they are often required to do so here in the US to get federal funding. OpenAI's own Terms of Use state that their tools should not be used by anyone under 18, and their Privacy Policy says they are not intended for anyone 13 and younger. Additionally, the NYC Dept of Ed has provisions for lifting the ban for schools who would like to explore the pedagogical possibilities of the tool, so those who have a plan and intention have a pathway to use.

I think others who are sympathetic to banning are doing so because of cheating concerns, but I'm not so interested in the "cheating" angle. I do think that this tool could be used to assist in critical thinking and in the drafting process. But I've worked in edtech for 15 years, and if I know anything, I know that quickly throwing new technology, at scale, into a learning environment is a recipe for disaster. It takes people to develop meaningful curricula around technology use, to imagine harms and try to avoid them, and that takes time. I'm all for slowing this bus down.

I wrote about some concerns that well-intentioned higher ed instructors, who want to use ChatGPT with their students, might want to think about. There, I mostly cited privacy and larger labor concerns, which I think are heightened for K-12. But another concern for both higher ed and K-12 might be that this is a "free for now" product. Some are estimating that it costs $3 million a month to run the thing. They are going to start charging for it at some point. What if it is in the middle of your term?

I’m okay with some schools considering and even deciding to attempt to throttle ChatGPT usage – especially K-12 schools. OpenAI is pretty open about the fact that this whole thing is a big experiment around the effects of releasing ChatGPT on society. They are quoted as telling CNN:

A spokesperson for OpenAI, the artificial intelligence research lab behind the tool, said it made ChatGPT available as a research preview to learn from real-world use. The spokesperson called that step a "critical part of developing and deploying capable, safe AI systems."

OpenAI says that their mission is "to ensure that artificial intelligence is a benefit to all of humanity", but I'm not sure how that tracks with running experiments on the general public (in higher ed this would never pass IRB), so drawing a line at extending this experiment to kids is okay in my book.

~~~~

Image by Gerd Altmann from Pixabay

ChatGPT and Good Intentions in Higher Ed

I’m frustrated by the conversation around ChatGPT in higher education.

So far, the conversation has been largely about using the tool as a text generator and fears around how students can use it for "cheating". I tend to think this is only the tip of the iceberg and it frustrates me – this convo is still very young so maybe I just need to give it a chance to develop. I think the more interesting (and likely disruptive) conversation is around how the tool can be used for meaning making (and legal issues around intellectual property). Maybe I'm overreacting, though maybe I'm not.

But meaning making is not the topic of the day! No, the topic of the day is “cheating” and everyone is officially freaking out!

Just in the last few days there have been claims of "abject terror" by a professor who was able to "catch" a student "cheating" with ChatGPT (resulting in the student failing the entire course), calls to return to handwritten, in-person essay writing, and over 400 comments (at the time of this writing on Dec 29th) almost entirely focused on fears around "cheating" on an article about the tool's impacts in higher ed.

Besides the calls for surveillance and policing, the humanized approaches being proposed include talking with students about ChatGPT and updating your syllabus and assignment ideas to include ChatGPT. But often these ideas include getting students to use it; helping them to see where it can be useful and where it falls down. This is a go-to approach for the humanistic pedagogue, and don't get me wrong, I think it is head and shoulders above the cop shit approach. Yet there are some parts of this that I struggle with.

I am skeptical of the tech inevitability standpoint that ChatGPT is here and we just have to live with it. All-out rejection of this tech is appealing to me, as it seems tied to dark ideologies and does seem different, perhaps more dangerous, than stuff that has come before. I'm just not sure how to go about that all-out rejection. I don't think trying to hide ChatGPT from students is going to get us very far, and I've already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.

Anyway, two issues around working with ChatGPT and students, even with good intentions, are giving me pause:

It is a data grab

Many (though not all) of the ideas I've heard/seen for assignments that use ChatGPT require students to use it directly, which requires an OpenAI account. An OpenAI account requires identifiable information like an email address or Google account, which means that it can be tracked. Their privacy policy is pretty clear that they can use this info how they want, and that includes third-party sharing and data possibly being visible to "other users" in a way that seems particularly broad.

I have this same issue with any technology that does not have a legal agreement with the university (and I don’t necessarily even trust those who do). But I’ve also long believed that the university is in a futile battle if we really think that we can stop students or professors from using things that are outside of university contracts. 

Some mitigation ideas for the data grab

Note: All of my mitigation ideas I’m sure are flawed. I’m just throwing out ideas, so feel free to call out my errors and to contribute your own ideas in your own blog post or in the comments below. 

Don't ask students to sign up for their own accounts, and definitely don't force them to. There is always the option of the professor using their own account to demo things for students, and other creative design approaches could be used to expose students to the tool without having them sign up for accounts.

If students want their own accounts, maybe specifically warn them about some of the issues and encourage them to use a burner email address – but only if they choose to sign up.

I’m not sure if it is breaking some kind of user policy somewhere to have a shared password on one account for the whole class to use. This could get the account taken down but I wonder how far you could take this. 

It is uncompensated student and faculty labor potentially working toward job loss

How do humans learn? Well, that is a complex question that we don't actually have agreement on, but if you will allow me to simplify one aspect of it: we make mistakes, realize those mistakes (often in collaboration with other humans – some of whom are nice about it and others not so much), and then (this part is key) we correct those mistakes. Machine learning is not that different from this kind of human learning, but it gets more opportunities to get things wrong and it can go through that iterative process faster. Oh, and it doesn't care about niceness.

Note: I cannot even try to position myself as some kind of expert on large language models, AI, or machine learning. I'm just someone who has worked in human learning for over 15 years and who has some idea about how computational stuff works. I've also watched a few cartoons and I've chatted with ChatGPT about machine learning terms and concepts*

But even with all of its iterations, it seems to me that human feedback is key to its training, and that the kind of assignments we would ask students to take part in using ChatGPT are exactly the kind of human fine-tuning that it (and other tools like it) really needs right now to become more factually accurate and to really polish that voice. Machines can go far just on those failing/succeeding loops that they perform themselves, but that human interaction [chef's kiss]. And that should be worth something.
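To make that concrete, here is a toy sketch of what a human-in-the-loop feedback cycle looks like. To be clear, this is my simplified illustration, not OpenAI's actual pipeline (the real technique, reinforcement learning from human feedback, is far more involved):

```python
# Toy sketch of a human-in-the-loop feedback cycle (hypothetical and simplified).
# Real systems use reinforcement learning from human feedback (RLHF); this only
# illustrates the basic loop: generate a response, collect a rating, log it for tuning.

feedback_log = []

def generate_response(prompt: str) -> str:
    # Stand-in for the model; in reality this would be a large language model call.
    return f"Model's answer to: {prompt}"

def collect_feedback(prompt: str) -> None:
    response = generate_response(prompt)
    print(response)
    rating = input("Rate this response 1-5: ")  # the human labor in the loop
    feedback_log.append({"prompt": prompt, "response": response, "rating": rating})

# Every classroom prompt, follow-up, and thumbs up/down is, in effect, another
# row in a log like this, later used to nudge the model toward highly rated responses.
collect_feedback("Explain photosynthesis to a 10 year old")
```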

When I imagine what a finely tuned version of ChatGPT might look like, I can't say it feels very comfortable, and I can't imagine how it does not mean job/income loss in some way or another. Now it could also mean job creation, but none of us really have any idea.

What we do know is that ChatGPT's underlying tech is GPT-3 and OpenAI plans to drop an upgraded version, GPT-4, in 2023. Asking students to train the thing that might take away opportunities from them down the road seems particularly cannibalistic, but I also don't know how you fight something you don't understand.

Some ideas for mitigating the labor problem 

I'm pretty stuck on this one. My go-to solution for labor problems is compensation, but I don't know how that works here. I'm thinking that we are all getting ripped off every time we use ChatGPT. Even if it ends up making our lives better, OpenAI is now a for-profit (be it "capped profit") company and they are going to make a lot here (barring legal issues). But I don't think that OpenAI is going to start paying us any time soon. I suppose college credit is a kind of compensation but that feels hollow. I do think that students should be aware of the possible labor issues and no one should be forced to use ChatGPT to pass a course.

I just want to end by saying that we need some guidance, some consensus, some … thing here. I'm not convinced that all uses of ChatGPT are "cheating" and I'm not sure someone should fail an entire course for using it. I mean, sure, you pop in a prompt, get a 3-second response that you copy and paste – I can't call that learning and maybe you should fail that assignment. But you use it as a high-end thesaurus, or you know your subject and use ChatGPT to bounce ideas off of it and you are able to call out when it is clearly wrong… Personally, I'd even go so far as getting a first draft from it, as long as you expand on and cite what parts come from the tool. I'm not sure these uses are the same thing as "cheating", and if they are, I've likely "cheated" in writing this post. I've attempted a citation below.

~~~

** Update 1/26/23 after publishing this post some were looking for more mitigation ideas. In response I published Prior to (or instead of) using ChatGPT with Your Students, which is a list of classroom assignments focusing on privacy, labor, history, and more around ChatGPT and AI more broadly.

Image by Yvonne Huijbens from Pixabay

*Some ChatGPT was used in the authoring of this blog post, though very little of the text is generated by ChatGPT. I chatted with it a bit as part of a larger process of questioning my own understanding of machine learning concepts, but this also included reading/watching the hyperlinked sources. My interactions with it included questions and responses around "human-in-the-loop" and "zero-shot learning", but I didn't use these terms in this post because I worried that they may not be accessible to my audience. I do think that I have a better understanding of the concepts because of chatting with ChatGPT – especially with the "now explain that to me like I was a 10yr old" prompt. One bit of text generation: I asked it to help me find other words/phrases for "spitballing" and I went with "throwing out ideas".

ChatGPT ID/FacDev?

Many of us who work in higher ed have been thinking about ChatGPT since OpenAI dropped free access to it at the end of November. The fancy new chatbot, which can generate essays, responses to qualitative quiz questions, and discussion board prompts, has everyone thinking about academic integrity and "cheating". The tech has been around for several years, but offering access for free, right before finals, has caused quite the stir in higher education.

Something I hear a lot of people talk about, but which I feel is still not getting enough attention, is the question of why this tech is free. It is not much of a question, because it seems everyone is aware that the tool has been given to the public for free so that massive numbers of people can help to train it.

So, like most of these kinds of things, it is not really free. Maybe we are having fun playing with it, but we are exchanging our time, our creativity in writing questions/prompts, and our data for access. You need to create an account which needs to be tied to an email and, I also believe, a phone number. At the bottom of the ChatGPT input screen it clearly reads "ChatGPT Dec 15 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve." But improve for whom and to what end?

Most of the folks who I have heard talk about this hint at how it is being trained by public labor for free, on public data obtained for free, so that eventually it will be used to create corporate products which will likely take away jobs and make billions for the creators. But after throwing this fact out there nonchalantly, often with a tone insinuating that this is a no brainer, they move on and talk about how they used it to generate a rap in the style of Jay-Z, ask it questions about philosophy, or try to get it to mimic student responses so that they can see if they (or their colleagues) could be fooled by it. I realize I'm about to be guilty of doing the same thing here – perhaps I just point this out to try to redeem some semblance of integrity. This work continues to put me in paradox.

OpenAI seems more than aware of the potential economic impacts of all of this and they have a research agenda around it – but this gives me little comfort. I can’t help but think about my own position in instructional design/faculty development/academic technology.

“Instructional design” (ID) can live in lots of places in the university and the position takes on a different flavor depending on where it exists. You have a different job if you are an instructional designer in the teaching center vs the library vs IT vs HR. Not all of us work with faculty but there is even variation between those of us who do. Some IDs are content focused and they use skills like graphic design or videography to develop things. My work has never been very content creation heavy – though I do like to create content. Working in smaller schools with tight budgets I mostly consult with faculty and for many of us this consulting role is a big part of our work. I talk with faculty about their teaching and offer advice about what they can do better. 

I talk with them… I offer advice… You see where I’m going here.

This made me wonder what kind of instructional designer/faculty developer consultant ChatGPT would make, and so I decided to have a very basic conversation with it posing as a faculty member in physics. I copied the transcript of the chat into this Google Doc and I'm sharing it publicly for reflection by others in the field.

As for my own reflection 

I'll say that the results of my chat are much like what I have seen in the disciplines. These are perfectly plausible responses that sound very human, but they don't go very deep, and it is in that lack of depth that those of us who do this work will recognize the flaws.

The bot falls down when asked about discipline-specific approaches and when asked for anything that could connect to what other instructors may have tried in the past. It glosses over specifics around EdTech and flat out gets it wrong sometimes (I think the directions it gives for dropping the lowest grades in Canvas sound more like directions for Moodle, personally). I'm not actually a physics professor so I didn't/couldn't ask it specifics about advice for teaching individual topics in physics. In my experience, it does do better when you ask it to narrow its scope, so asking more detailed questions could make a big difference.

Still, the results are very similar to a lot of the faculty development advice that I see. Be it on various blogs, websites, listservs, or even what I sometimes hear come out of people's mouths – much of it is the same basic stuff over and over. Professors are busy, and so giving them simple lists and easy answers for improvement is quite common, and ChatGPT mimics these basics pretty well and includes attempts at big-picture synthesis. It ends its advice to me about developing as a teacher by saying "Remember, being a lifelong learner as an educator is an important part of staying current and effective in your teaching practice."

It's not surprising. ChatGPT is trained on huge data dumps of the internet (including Reddit, Wikipedia, and websites). I threw the phrase "5 things you can do to improve collaboration in your class" into Google and got a return of 712,000,000 results of various listicle-type pedagogy puff pieces. With so much of this stuff out there already, maybe it doesn't matter that a chatbot can regurgitate it too? But I have to wonder what it means for our work.

I've been struggling with a kind of imposter syndrome from this work for some time. I say a "kind of" imposter syndrome because I refuse to take all of the blame. I can't shake the feeling that at least some of it comes from the work itself; that the nature of the work encourages it. So many of us are limited in our own opportunities to go deeper or to reflect in more meaningful ways. We are incentivized to create/repeat these easy answers/"best practices". After the pandemic we have seen many of our professional development organizations raise prices of in-person conferences and reject accessible virtual options. Simultaneously, professional development funds from our institutions often have not increased, and don't get me started about how frequently we are throttled in our attempts to teach directly ourselves.

Many of us rely on disclaimers and positioning ourselves in various ways to account for our lack of knowledge/experience in domain-specific areas, with technology, or even with the specifics of teaching. In the beginning of that chat, the bot even gave me its own disclaimer: "As a language model, I don't have personal experience in instructional design or teaching, but I can provide general information and suggestions based on best practices in the field." So, some of this is just the nature of the work, but it is depressing nonetheless.

No one really knows yet what ChatGPT means for higher ed, and I've not seen much talk about what it means for EdTech/Instructional Design/Faculty Development. We are in a kind of wait-and-see, react-when-we-can moment. I guess I'm hopeful this will open up room for more thoughtful and creative work. But I worry that it will force us to ask some hard questions about what kind of work is meaningful, and that will cause some casualties.

How I got here/More if you want it

If you were paying attention to this chatbot/large language model (LLM) conversation at all before Nov. 30th, or even if you dug a little deeper since, you likely heard about this paper by Bender, Gebru, et al., but if you haven't and want a critical look at the dangers of this stuff (including the environmental impacts and perpetuating biases), this is what you should really be paying attention to. I also found this piece from Jill Walker Rettberg really helpful in better understanding the underlying datasets that GPT-3 is trained on and reflections on the culture they come from. The relationships and evolutions between ChatGPT, GPT-3 (as well as 1 and 2), InstructGPT and all that are quite confusing, but this post (from the Walmart Global Tech Blog, of all places) helps a bit. For even more, Lee Skallerup Bessette has created a Zotero library collecting all things ChatGPT in higher ed.

In addition, I ended my last post (which was mostly a reflection on current happenings with Twitter) with a reflection on European starlings. Yes, starlings, the invasive bird species who are problematic but, as I noted in that post, are also "strange and wonderful for lots of reasons". I had focused on their murmurations, but another important facet of the starling's disposition is its proclivity for mimicry – which of course got me thinking about things that can talk but not really understand.

~~~~

Featured Image by Kev from Pixabay

Is a space

For some time now I've used "Is a liminal space" as my tagline, and it has always intrigued me how people latch on to the word "liminal" in that little phrase, asking me what it means and why I identify with it. Fewer folks point out that people are not spaces and that spaces, though they may indeed influence people, are not people. Still, I go on using it because I like to make people think and wonder: what the heck does that mean?

Environments do indeed shape us and it has been on my mind more than usual lately. Someone recently asked me that question that seems to circle our field over and over about the differences between “designers” and “technologists” and what is the “right” term for the work. I’ve gotten to the place where those terms mean nothing to me. To understand what you do, in this strange world of edtech/instructional design/faculty development/teaching/administrating/// tell me where you work. That is the only way I will get some idea about what you do. And if what you do doesn’t fit the identity of the space you are working in… just wait. In my experience, you’ll either leave or change. 

Digital environments have been one of my bags for some time now and yes they shape us too. Especially if you share through them and make connections there. But no environment is static and when Twitter was sold to Elon Musk a few months ago I think everyone knew things would change.

I didn’t leave, technically. I’ll likely share this post there. Technically, I’ve been on Mastodon since 2016 but “technically” I’m in a lot of places. It’s messy. I’m messy. 

But when they started selling checkmarks… yeah, I had to go. I've never had a checkmark, but the idea of buying one. It is all just so sad and strange. To see so much widespread top-down abuse there. To scroll my feed there and see all these reports of banned accounts and blocked links from competing platforms, and then, sprinkled in, someone promoting their latest article or webinar. I understand some people have invested years and have tens of thousands of followers and that is hard to let go of. I don't want to throw shade. It is just weird.

I also don’t want to tell anyone what to do but I will say it makes me happy to see familiar folks in other spaces.

I’m pretty privileged there and by privileged I mean invisible. Of course that is not completely true but it is not completely untrue either. I mean it has been a long time since I’ve been an egg but randos in my DMs still seem impressed that Barack Obama follows me. I’m somewhere in-between and not quite loud enough to make a fuss but not translucent enough to feel comfortable existing in a space that just continues to increase in toxicity. But that is strange of me to say too – I can’t say it hasn’t always been toxic – I know that would be a lie. But now it just feels like the call is coming from inside the house more than ever. And to continue to post just feels like a statement of support for things I can’t agree with.

Also, it just feels like a time to try something new. Maybe it won't be as big or as notable (is Barack even on Masto?) but that has never stopped me before. Starting again, and again, and again. That is kind of my thing. Perhaps that is why I'm perpetually on the threshold. It is sad but it is where I'm at.

I can’t help but think of the starling murmurations. You know these, yes? Starlings are strange and wonderful for lots of reasons (equal reasons why they are pests but I’m trying to end on an upbeat here) but one is because of the murmurations that they perform in the sky at sunset. Here in Michigan I see them while driving in the country. How they pull this off is a bit of magic no one really understands the details of – maybe something similar is happening now. 

On Endings

When I was a teenager, neighborhood friends I had connected with moved away, the way that neighborhood friends do. It wasn't far, but it wasn't nearly as close. I have a memory that kind of haunts me of walking around the area the day after they left, passing by their house, feeling this kind of emptiness.

In my early 20’s I fell in love with a coffeehouse in downtown Dearborn. There was coffee of course but also whimsical decor, this chicken wrap with toum, music and poetry in the evenings, and friendly randos who quickly became confidants. It closed for renovations promising to reopen in a few months but it never did.

I enjoy the nature of endings in my work as they tend to allow for reflection and growth. I can also plan for them. They are explicitly expected, timed precisely, and everyone is on board. The term is x weeks long, the class is x hours, the midterm occurs in week x, and the final in week xy. But endings in most of life are never like this.

It feels like there have been a lot of endings over the last yearish. For me personally but also for so many around me. And I'm not exactly sure what to do with it – there has just been so much of it. I want to mourn but I also want to celebrate. I also want to learn and do better next time. I also want to scream and cry. I also want to spit in the face of any asshole who dares come at me with that "better to have loved and lost..." or "when one door closes..." crap.

To err is human. To end is human.

For now, in late autumn, in Michigan, with most of the trees bare and woody, it has been unseasonably warm. I'll prepare for winter and darkness as best I can. I'll plant amaryllis now for a splash of color in a few weeks when the cold is sure to have set in. And I'll wait and watch for some sign of newness.

Image by Terry from Pixabay

So Many Connections: Attending Data, Power, and Pedagogy

I'm excited to have been accepted to participate in Data, Power, and Pedagogy put on by HestiaLabs and Brown University's Information Futures Lab (IFL), running Sept. 27th – 29th in Providence, Rhode Island. It is the first professional development anything I have traveled to in person since the pandemic and I'm so happy that it is NOT a conference. I'm doing a little homework trying to prep for the experience and can't help but find connections to past and current work that seemed worthy of a blog post.

I know HestiaLabs as the most recent project from Paul-Olivier Dehaye, who I know from my past poking around regarding the Cambridge Analytica scandal. In the last few months, I have caught up with Paul and others at HestiaLabs and been impressed with what they are doing. When I saw this opportunity in Providence to workshop some of the things they are working on first-hand I had to apply – especially since I am now remotely teaching Digital Citizenship part-time for College Unbound, which is also located in Providence – and I was really excited to be accepted.

As part of our prep work for the workshop we were asked to make data requests from several social media sites, ride share services, mobile operating systems, etc. and to export the data that we got back. We are bringing those exports to the workshop with us. What are we going to do with them? Well, I have some idea from following what HestiaLabs has been doing, but it turns out IFL's most recent panel discussion gave us a sneak peek too. The panel was after my own heart, asking the question "Can Regulation Solve the Problem of Misinformation?", and at about 51:00 in the video one of the panelists, Mark Scott from Politico, gives a glimpse of what I think we are going to experience this week.

As I found in the past, just obtaining the data exports was a lesson in itself. Every service had an automated process for requesting your data, which is a huge improvement from when I worked with Paul to request my data from Cambridge Analytica back in 2016. All of the platforms we were asked to download from were big ones; I suppose they now have to offer such services under GDPR protections, and an automated system is likely the cheapest way for them to field such requests. For almost all of them I had to initiate a request and then wait a bit before being provided with a raw data file in JSON format (many also offered an HTML format). All of them sent me a notification when my file was ready, with the exception of TikTok. I already had about 5 expired requests from TikTok, as I’ve requested my data from them in the past, but the files expire after something like 4 days, and without a notification that your data is ready it is easy to forget to download it.
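If you have never opened one of these raw exports, the sketch below is roughly how I would start poking at one. It is a minimal Python sketch under a couple of assumptions: the file name export.json is hypothetical (each platform names its archive differently, and some ship a zip you unpack first), and every service structures its data its own way, so this just prints a rough outline of what is inside rather than anything platform-specific.

```python
import json
from pathlib import Path

# "export.json" is a hypothetical file name; each platform names
# its archive differently (and some deliver a zip to unpack first).
EXPORT_FILE = Path("export.json")

def outline(node, depth=0, max_depth=2):
    """Print a rough outline of the export: each key, its type,
    and how many items sit under it."""
    indent = "  " * depth
    if isinstance(node, dict):
        for key, value in node.items():
            size = len(value) if isinstance(value, (list, dict)) else 1
            print(f"{indent}{key}: {type(value).__name__} ({size})")
            if depth < max_depth and isinstance(value, (list, dict)):
                outline(value, depth + 1, max_depth)
    elif isinstance(node, list) and node:
        print(f"{indent}[{len(node)} records; first is a {type(node[0]).__name__}]")
        if depth < max_depth:
            outline(node[0], depth + 1, max_depth)

if __name__ == "__main__":
    with EXPORT_FILE.open(encoding="utf-8") as f:
        data = json.load(f)
    outline(data)
```

Even a crude outline like this makes the point the workshop seems to be after: you can see at a glance just how many categories of your activity a platform has been keeping.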

I’m excited to participate and to be blogging again. I’m seeing connections to my past work questioning regulation and calling for education around data privacy, as well as to applying some of these methods to help us better question the edtech solutions that we use locally. We will see what comes of it all.

Photo by Nancy Hughes on Unsplash

For Mom: Reflections on teaching and learning with family

On the day before what would have been her mother’s 104th birthday, my mother left this world. She didn’t want a service. If we had had a service I would have felt drawn to express how I felt about her, but if you know me well then you know I am not that good at speaking – especially at times when I am emotionally overwhelmed. I suffer from pretty debilitating performance anxiety and, honestly, I’m not sure I would have been able to do it.

I’m more comfortable with the written word, but even that has been hard recently in my grief. The kind people at the funeral home helped us compose her obituary, but we put it together while I was still in shock and it is mostly just facts. This post is not going to be a very good obituary or eulogy, since it is maybe more about me than about her, but I have used public writing as a way of processing my thoughts for several years now, and in my grief I feel drawn to write about her and how she influenced me. I’ve committed this last third of my life to teaching and learning and dedicated this blog to the subject. So reflecting on her as a teacher, as my teacher, seems appropriate for this public forum.

I want to say that a mother is the first and most powerful teacher, but I know that is not always the case. I want to believe that it is often the case, but I don’t know for sure that this is true either. I was lucky that it was the case for me. My mom, Elizabeth Rita Caines (“Rheat” to her friends, and often “mom” to my friends), was by far my greatest champion and advocate. If we think of a teacher as a positive and supportive force who guides someone to discover for themselves rather than forcing ideas or processes on them, one who protects the learner from external and internal obstacles to learning – then she has no rival in my life.

Her love and advocacy for me were so fierce. Well, I mean she was ‘so fierce’ in general, and you didn’t want to cross her, but you really didn’t want to cross her if it had anything to do with me. She often attributed this to the fact that I was her only biological child. She said a psychic once told her that she would “never have children but that all children would be hers”. She loved children, and I know many nieces, nephews, and other children loved her dearly (‘Aunt Rheat’ was a force of her own whom I only knew peripherally). But it wasn’t just the psychic; doctors too told her she could never have children due to a childhood accident on a horse, and so when she became pregnant with me it was a shock and a delight – and a point of much bragging by my father. Before discovering it was a pregnancy she was frightened, having at first been told she might have a tumor, but then an ultrasound revealed that she had a baby instead. She said there was a mailman coming up the stairs to the doctor’s office as she was leaving, and she excitedly told him she was pregnant and did a little dance with him on the steps. On the way home she stopped at a drive-in coney island type place and got a “baby root beer” and drank it for me – I still have the mug.

As a youth, when I expressed questions or an interest in things, she would find resources for me and let me decide for myself – even if those were things that she did not understand or that challenged her own beliefs or upbringing. We didn’t have a ton of books around the house, but a medical encyclopedia and a nature guide outlining plants and animals were often referenced to answer questions. When I expressed interest in music that was “controversial” in some way she would surprise me, buy the album, and we would listen to it together. As I entered high school and began to question the Catholic faith that she raised me in (though she always referred to herself as “one of those backsliding Catholics”), I was nervous in the bookstore to ask her to purchase Zen Mind, Beginner’s Mind and The Three Pillars of Zen, but she did so without question – both of these were way over my head at that age (who am I kidding – they are still way over my head).

Formal schooling was a struggle for me from K-12, but she fought for me at every stage, whether it was struggling to get me up in the morning (oh dear reader, how I fought it) or fighting with teachers and administrators who had all kinds of ideas about how I was different. It seemed they always knew something was not quite standard about my thinking, but they were never really sure what it was. Early on, I attended a private Catholic school for just a few years and they thought that I was “gifted”, but later, after transferring to public school, they said I had a learning disability of some kind, put me in a “special education” room, and threatened to hold me back. She worked with me all summer, using flashcards and worksheets she obtained from the school and the educational sections of the bookstore, and even enlisted my brother to help, till I got to where they wanted me on their standardized measures.

She never graduated high school herself, and my completing this milestone was important to her. I did so, but by the skin of my teeth. Eventually, higher education would be a place where I would thrive, but I tried a few different paths on my way, and at one point my free spirit got brave enough to take on some extensive travel. We were always close and spoke nearly every day, but the 10 ½ hour time difference when I was going to spend nearly three months in India would make that difficult. I was traveling on a shoestring budget and didn’t even know how often I would be able to call – I wasn’t even sure where I would be sleeping. Public libraries had recently started to offer computing stations with internet, and so before I left I took mom down to the Henry Ford Centennial Library and set her up with an email address. We would return several times before I left so that we could practice sending and receiving messages. During this time she often said that she, the teacher, had become the student, but considering how and where I ended up, it seems to me that she was perhaps preparing me for an occupation yet to come.

On the topic of death she was known to say, “I’d have no problem with death if it wasn’t just so damn permanent”. Now that she is gone, the permanence of her absence weighs heavy on me, along with the reminder of the brevity of our existence. She was sick for a long time, and throughout my life various doctors would tell her, and even tell me, that she was not going to live another year or two if she didn’t change her lifestyle, have an operation, or take some medicine. Sometimes she would listen, but often she would not. So even though the nurse told me a few months before she passed that it was near, it still came as a surprise to me. She made it to 81. I am missing her every day.

She was more than my mom, and there was more to her than being my teacher. Things were far from perfect between us; I suppose that is one of the risks of being the kind of teacher who lets the student decide for themselves – you are not always going to agree. But she let me become my own person, and I knew she would always love me no matter what. When I would tell her what a great mom she was, she would often say “all mothers are this way; I’m nothing special”, but I know that was not true. I was lucky to have her as my mom.

Thinking of and collecting artifacts that I have of her (old pics, voicemails, text messages), I remembered that she appears very briefly at the end of a video I made to “un-introduce myself” for CLMOOC back in 2015 – it is one of the few bits of video I have of her, and it fit the topic of this post too well not to share it.

Thank you mom for all you taught me and all I learned from you. I miss you.