
Category : Instructional Design

Hot(take) Chatbot Summer: Considering Value Propositions

They say… one of the keys to successful blogging is regular and systematic publishing of posts. Well, dear reader, it has been six months since my last entry, and alas, I must admit I’m just not that kind of blogger… but here we are.

The last few posts here really took off and far surpassed anything else I have ever published in this space, also garnering several new subscribers – hello, new folks. This is likely because those posts were about generative AI, and specifically ChatGPT. I’m of the firm belief that anything published on this topic in the last six months was going to skyrocket, but I’m flattered that some found those posts useful.

Over the last six months I’ve found myself somewhere between fascination and boredom around the bots. 

…They also say, the more things change, the more they stay the same. The headlines say this is a big change in the fabric of the tech landscape, but there is a part of me that can’t help but feel a little… meh. I’ll admit something seems big and earth-shaking, but something also seems blasé – like we have been here before. Everyone seems to be talking, all at once, and there is a lot of overlap in what is being said. It seems everyone is evangelizing about how this tech will change the world for good or bad, but the thing about the world is that it is in her nature to change no matter what.

I have just not been that interested in adding my voice to the chorus and saying even more of the same. I’ve been quiet on purpose. I’m not in a rush to push out my next post/article/hot take around generative AI. But I am still reading, I’m still listening, and I’m still thinking. And a six-month update on where my head is at seems… reasonable?

So this is just an update with some of the things on my mind right now regarding generative AI in higher education:

Enterprise Access and Other Integrations

A big part of my past concerns with ChatGPT has been around privacy. Even with numerous examples of what, say, social media companies have done with our data, people still don’t seem to have a good sense of platform literacy. Many still sign up for accounts and apps with no regard. It is just an email address, a phone number, oh look, I can log in with my Google account – how convenient! 💔 This led me to call for better digital literacy/citizenship, but I have been doing that for a long time now and it only goes so far. (No shade on others who do that work – I just want more of it).

But word on the street is that enterprise access might be on the way, meaning you wouldn’t sign up with a personal account but with an institutionally recognized account. OpenAI mentioned this when they were forced to expand the privacy functions of the bot because of the Italy ban, stating: “We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users.”

Though there was no mention of “educational” enterprise access, this is an interesting prospect, and I’d have to think that if it is available to businesses, education could sign on if institutions so desired and could get the lawyers on board. It is a prospect that does give me some hope of relief. I would hope that this would mean some education-specific data safeguards negotiated by university/college higher-ups. That there would be expanded restrictions on the sale of personal data to third parties, for instance, and, oh I don’t know, maybe that some of that stuff in FERPA would be considered as another level of protection above commercial offerings. But these are just my hopes.

I also admit that I might just be being naive. It is important to call out that this kind of access, if it were to come to pass, would simply be a power shift. Remember, “end users will be managed” by someone (your boss or your boss’s boss rather than OpenAI), and the specifics of all of that are likely to be buried in technical deployment details and more contract legalese. And who even knows if that access or language will be accessible/understandable. I’ve seen folks asking data questions of their institutions go down that rabbit hole only to be met by those in authority who tell them that the language is buried in closed contracts and that access to those interfaces is only available to certain administrators. They are then faced with filing a FOIA request to try to get answers. And that’s just a great look when you decide to ask for that raise.

So, educational enterprise access is on my mind this summer. I’m wondering what it looks like in terms of licensing and pricing. I’m especially wondering what it will mean for educational data privacy. But I’m also wondering if institutions will be able to train these models with enterprise access to behave in specific ways, the way that some other educational integrations (who have been granted special GPT-4 API access) have done. I imagine recent announcements about lower prices and “function calling” with APIs will further enable things. But I’m wondering what it means when some schools can afford such access and others can’t. Some may just prefer to use the free web access but, despite all the inevitability rhetoric, I question how sustainable a free web interface to this tech is given the cost of keeping it running and the regrets of its creators. But I imagine that free training data is worth a ton to them.
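For readers who haven’t seen it, the “function calling” idea mentioned above works roughly like this: alongside the chat messages, the caller describes a function as a JSON schema, and the model can respond with structured arguments for that function instead of free text – which is what lets integrations wire a bot into their own systems. The sketch below is purely illustrative: `lookup_course_policy` is a hypothetical campus-side function I invented, and no actual API request is made here; it only assembles the kind of payload an integration might send.

```python
# Illustrative sketch only: builds the request payload a hypothetical
# integration might send to a chat-completion API with function calling.
# "lookup_course_policy" is an invented, campus-side function; no network
# call is made anywhere in this snippet.

def build_function_call_request(question: str) -> dict:
    """Assemble a chat-completion payload advertising one callable function."""
    lookup_schema = {
        "name": "lookup_course_policy",  # hypothetical institutional function
        "description": "Fetch an institution's policy text by topic.",
        "parameters": {  # JSON Schema describing the function's arguments
            "type": "object",
            "properties": {
                "topic": {
                    "type": "string",
                    "description": "Policy topic, e.g. 'grading' or 'late work'",
                }
            },
            "required": ["topic"],
        },
    }
    return {
        "model": "gpt-3.5-turbo-0613",  # a model announced with function calling
        "messages": [{"role": "user", "content": question}],
        "functions": [lookup_schema],
    }

payload = build_function_call_request("What is the late-work policy for PHYS 101?")
print(payload["functions"][0]["name"])
```

If the model decides the function is relevant, it returns the arguments (here, a `topic` string) for the caller’s own code to execute – which is exactly the kind of hook that makes institution-specific behavior possible, and raises the licensing and data questions above.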

Okay, what else…

Cheating and Detectors

Detectors continue to be front of mind for me especially with the Turnitin mess. 

I continue to be flabbergasted by this one. I mean, why aren’t more people talking about this? I know there has been a good amount of press, but I really don’t know why it is not the top story in every higher ed and edtech outlet.

In this round of AI arms-race madness, in April, Turnitin decided to turn on an AI detection feature that they had never tested in a real-world environment and to not allow any of their customers to opt out. Rather than offering this functionality to a small number of test schools and piloting it to get some feedback, they just forced it on everyone.

They claimed it had a 1% false positive rate but, surprise, that estimate proved to be low once they actually started using it in real schools. They have never provided access to the specifics of their internal research or testing data. They have just expected everyone to take their word for it that they can detect synthetic text (never mind that detection is still very much a complex subject area). Oh, and they are not just adding functionality to their product out of the goodness of their hearts – no, this is a paid feature that is “free for now”. This is the drug dealer’s first-hit-is-free sales tactic if I’ve ever seen it, but it is worse because these are existing customers who can’t opt out.

And the infuriating part is that, even though some pushed back and were able to get them to allow an opt out, most schools (in the US) just let them do it. 

I’m still not sure where cheating begins and resourcefulness or collaboration ends, but I’m skeptical of black boxes that give us nice percentages and make it all look super clear and easy.

Final Thoughts – for now

There are a ton of other things running through my head too. Economic and labor impacts are a big one, as is climate. I especially enjoyed this Marketplace report featuring Hugging Face’s climate lead Sasha Luccioni because of the “is it worth it” positioning. She makes the case, for instance, that search is already very good, that it already runs with AI, but that the current search models are far more efficient than LLMs. From the article:

“I’m not against innovation. I don’t think we should all just stop doing AI research. But for me, it’s kind of the basis of that research to say “this thing costs this much” not just in terms of money, but also in terms of planetary and human costs. Then, if that calculation makes sense, then yes, we’re going to use the AI. But currently, we’re not making that logical kind of decision. Right now, it’s more like “why shouldn’t we use the ChatGPT to do web search?” But I think environmental factors should be considered, because the new technology is a lot less efficient than the current model.”

I wonder what the “is it worth it” position is for education? 

Of course I’m continuing to pay attention to all the different ways that people are proposing that we use ChatGPT in the classroom. I like some of the ideas. They seem neat. And I’ve seen plenty of people warning students about bias and how the bots can straight up get it wrong sometimes. And that is a good thing. 

But I still continue to see many people skip the privacy issues when talking about these things, even as companies continue to ban this tech and create all manner of policy around how it can be used by their employees for fear of leaking company information to the bots. But it is just students’ personal data – what could happen? With all this talk about teaching students prompting skills, I’d think there would be a place for this – it seems like it would transfer nicely to the workplace.

When is it worth it to use a chatbot in the classroom and when is it just window dressing/look at me being an innovative professor? I’m not exactly sure but it is the question I’m most interested in right now.

So, that is my summer check-in. Perhaps I’ll do another in six months – maybe I’ll even post on something other than generative AI.

Featured Photo by Ethan Robertson on Unsplash

ChatGPT ID/FacDev?

Many of those of us who work in higher ed have been thinking about ChatGPT since OpenAI dropped free access to it at the end of November. The fancy new chatbot, which can generate essays, responses to qualitative quiz questions, and discussion board prompts, has everyone thinking about academic integrity and “cheating”. The tech has been around for several years, but offering access for free, right before finals, has caused quite the stir in higher education.

Something I hear a lot of people talk about, but which I feel is still not getting enough attention, is the question of why this tech is free. It is not much of a question, because it seems everyone is aware that the tool has been given to the public for free so that massive numbers of people can help to train it.

So, like most of these kinds of things, it is not really free. We may be having fun playing with it, but we are exchanging our time, our creativity in writing questions/prompts, and our data for access. You need to create an account, which needs to be tied to an email and, I believe, a phone number. At the bottom of the ChatGPT input screen it clearly reads “ChatGPT Dec 15 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.” But improve for whom and to what end?

Most of the folks who I have heard talk about this hint at how it is being trained by public labor for free, on public data obtained for free, so that eventually it will be used to create corporate products which will likely take away jobs and make billions for the creators. But after throwing this fact out there nonchalantly, and often with a tone insinuating that this is a no-brainer, they move on and talk about how they used it to generate a rap in the style of Jay-Z, ask it questions about philosophy, or try to get it to mimic a student’s responses so that they can see if they (or their colleagues) could be fooled by it. I realize I’m about to be guilty of doing the same thing here – perhaps I just point this out to try to redeem some semblance of integrity. This work continues to put me in a paradox.

OpenAI seems more than aware of the potential economic impacts of all of this and they have a research agenda around it – but this gives me little comfort. I can’t help but think about my own position in instructional design/faculty development/academic technology.

“Instructional design” (ID) can live in lots of places in the university, and the position takes on a different flavor depending on where it exists. You have a different job if you are an instructional designer in the teaching center vs. the library vs. IT vs. HR. Not all of us work with faculty, but there is even variation between those of us who do. Some IDs are content focused, and they use skills like graphic design or videography to develop things. My work has never been very content-creation heavy – though I do like to create content. Working in smaller schools with tight budgets, I mostly consult with faculty, and for many of us this consulting role is a big part of our work. I talk with faculty about their teaching and offer advice about what they can do better.

I talk with them… I offer advice… You see where I’m going here.

This made me wonder what kind of instructional designer/faculty developer consultant ChatGPT would make, and so I decided to have a very basic conversation with it, posing as a faculty member in physics. I copied the transcript of the chat into this Google Doc and I’m sharing it publicly for reflection by others in the field.

As for my own reflection 

I’ll say that the results of my chat are much like what I have seen in the disciplines. These are perfectly plausible responses that sound very human, but they don’t go very deep, and it is in that lack of depth that those of us who do this work will recognize the flaws.

The bot falls down when asked about discipline-specific approaches and when asked for anything that could connect to what other instructors may have tried in the past. It glosses over specifics around EdTech and flat out gets it wrong sometimes (I think the directions it gives for dropping the lowest grades in Canvas sound more like directions for Moodle, personally). I’m not actually a physics professor, so I didn’t/couldn’t ask it for specific advice about teaching individual topics in physics. In my experience, it does do better when you ask it to narrow its scope, so asking more detailed questions could make a big difference.

Still, the results are very similar to a lot of the faculty development advice that I see. Be it on various blogs, websites, or listservs, or even what I sometimes hear come out of people’s mouths – much of it is the same basic stuff over and over. Professors are busy, so giving them simple lists and easy answers for improvement is quite common, and ChatGPT mimics these basics pretty well and even includes attempts at big-picture synthesis. It ends its advice to me about developing as a teacher by saying “Remember, being a lifelong learner as an educator is an important part of staying current and effective in your teaching practice.”

It’s not surprising. ChatGPT is trained on huge data dumps of the internet (including Reddit, Wikipedia, and websites). I threw the phrase “5 things you can do to improve collaboration in your class” into Google and got 712,000,000 results of various listicle-type pedagogy puff pieces. With so much of this stuff out there already, maybe it doesn’t matter that a chatbot can regurgitate it too? But I have to wonder what it means for our work.

I’ve been struggling with a kind of imposter syndrome from this work for some time. I say a “kind of” imposter syndrome because I refuse to take all of the blame. I can’t shake the feeling that at least some of it comes from the work itself; that the nature of the work encourages it. So many of us are limited in our own opportunities to go deeper or to reflect in more meaningful ways. We are incentivized to create/repeat these easy answers/“best practices”. After the pandemic, we have seen many of our professional development organizations raise the prices of in-person conferences and reject accessible virtual options. Meanwhile, professional development funds from our institutions often have not increased, and don’t get me started on how frequently we are throttled in our attempts to teach directly ourselves.

Many of us rely on disclaimers and position ourselves in various ways to account for our lack of knowledge/experience in domain-specific areas, with technology, or even with the specifics of teaching. At the beginning of that chat, the bot even gave me its own disclaimer: “As a language model, I don’t have personal experience in instructional design or teaching, but I can provide general information and suggestions based on best practices in the field.” So, some of this is just the nature of the work, but it is depressing nonetheless.

No one really knows yet what ChatGPT means for higher ed, and I’ve not seen much talk about what it means for EdTech/instructional design/faculty development. We are in a kind of wait-and-see, react-when-we-can moment. I guess I’m hopeful this will open up room for more thoughtful and creative work. But I worry that it will force us to ask some hard questions about what kind of work is meaningful, and that will cause some casualties.

How I got here/More if you want it

If you were paying attention to this chatbot/large language model (LLM) conversation at all before Nov. 30th, or even if you have dug a little deeper since, you likely heard about this paper by Bender, Gebru, et al., but if you haven’t, and want a critical look at the dangers of this stuff (including the environmental impacts and perpetuated biases), this is what you should really be paying attention to. I also found this piece from Jill Walker Rettberg really helpful in better understanding the underlying datasets that GPT-3 is trained on and reflecting on the culture they come from. The relationships and evolutions between ChatGPT, GPT-3 (as well as 1 and 2), InstructGPT, and all that are quite confusing, but this post (from the Walmart Global Tech Blog, of all places) helps a bit. For even more, Lee Skallerup Bessette has created a Zotero library collecting all things ChatGPT in higher ed.

In addition, I ended my last post (which was mostly a reflection on current happenings with Twitter) with a reflection on European starlings. Yes, starlings, the invasive bird species that are problematic but, as I noted in that post, also “strange and wonderful for lots of reasons”. I had focused on their murmurations, but another important facet of the starling’s disposition is its proclivity for mimicry – which of course got me thinking about things that can talk but not really understand.


Featured Image by Kev from Pixabay