I’m frustrated by the conversation around ChatGPT in higher education.
So far, the conversation has been largely about using the tool as a text generator and about fears that students will use it for “cheating”. I tend to think this is only the tip of the iceberg and it frustrates me – this convo is still very young, so maybe I just need to give it a chance to develop. I think the more interesting (and likely more disruptive) conversation is around how the tool can be used for meaning making (and the legal issues around intellectual property). Maybe I’m overreacting, though maybe I’m not.
But meaning making is not the topic of the day! No, the topic of the day is “cheating” and everyone is officially freaking out!
Besides the calls for surveillance and policing, the humanized approaches being proposed include talking with students about ChatGPT and updating your syllabus and assignment ideas to account for it. Often these ideas include getting students to use the tool themselves; helping them to see where it can be useful and where it falls down. This is a go-to approach for the humanistic pedagogue and, don’t get me wrong, I think it is head and shoulders above the cop shit approach. Yet there are some parts of this that I struggle with.
I am skeptical of the tech-inevitability standpoint that ChatGPT is here and we just have to live with it. All-out rejection of this tech is appealing to me, as the tech seems tied to dark ideologies and does seem different, perhaps more dangerous, than what has come before. I’m just not sure how to go about that all-out rejection. I don’t think trying to hide ChatGPT from students is going to get us very far, and I’ve already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.
Anyway, two issues with even the well-intentioned approaches to working with ChatGPT and students are giving me pause:
It is a data grab
Many (though not all) of the assignment ideas I’ve heard or seen require students to use ChatGPT directly, which requires an OpenAI account. An OpenAI account requires identifiable information, like an email address or Google account, which means it can be tracked. Their privacy policy is pretty clear that they can use this info how they want, including third-party sharing and data possibly being visible to “other users” in a way that seems particularly broad.
I have this same issue with any technology that does not have a legal agreement with the university (and I don’t necessarily even trust those that do). But I’ve also long believed that the university is fighting a futile battle if we really think that we can stop students or professors from using things that are outside of university contracts.
Some mitigation ideas for the data grab
Note: All of my mitigation ideas I’m sure are flawed. I’m just throwing out ideas, so feel free to call out my errors and to contribute your own ideas in your own blog post or in the comments below.
Don’t ask students to sign up for their own accounts, and definitely don’t force them to. There is always the option of the professor using their own account to demo things for students, and other creative design approaches could be used to expose students to the tool without having them sign up for accounts.
If students do want their own accounts, specifically warn them about some of these issues and encourage them to use a burner email address – but only if they choose to sign up.
I’m not sure if it is breaking some kind of user policy somewhere to have a shared password on one account for the whole class to use. This could get the account taken down but I wonder how far you could take this.
It is uncompensated student and faculty labor potentially working toward job loss
How do humans learn? Well, that is a complex question we don’t actually have agreement on, but if you will allow me to simplify one aspect of it – We make mistakes, realize those mistakes (often in collaboration with other humans – some of whom are nice about it and others not so much), and then (this part is key) we correct those mistakes. Machine learning is not that different from this kind of human learning, but it gets more opportunities to get things wrong and it can go through that iterative process faster. Oh, and it doesn’t care about niceness.
Note: I cannot even try to position myself as some kind of expert on large language models, AI, or machine learning. I’m just someone who has worked in human learning for over 15 years and who has some idea about how computational stuff works. I’ve also watched a few cartoons and I’ve chatted with ChatGPT about machine learning terms and concepts*
But even with all of its iterations, it seems to me that human feedback is key to its training, and that the kind of assignments we would ask students to take part in using ChatGPT are exactly the kind of human fine-tuning that it (and other tools like it) really needs right now to become more factually accurate and to really polish that voice. Machines can go far just on those failing/succeeding loops that they perform themselves, but that human interaction [chef’s kiss]. And that should be worth something.
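To make that intuition concrete, here is a tiny toy sketch of the loop I mean – this is not OpenAI’s actual training pipeline, just an illustration of how a human correcting an output turns into training data. Everything in it (the toy_model function, the little knowledge dictionary, the example prompts) is invented for the example.

import random

def toy_model(prompt, knowledge):
    # Answer from whatever the toy model has "learned" so far, otherwise guess.
    return knowledge.get(prompt, random.choice(["unsure", "a guess"]))

def human_feedback(answer, truth):
    # A human marks the answer right or wrong and supplies the correction.
    return answer == truth, truth

knowledge = {}  # everything the toy model has picked up from corrections so far
examples = {"capital of France?": "Paris", "2 + 2?": "4"}

for round_number in range(3):  # each classroom interaction is another round
    for prompt, truth in examples.items():
        answer = toy_model(prompt, knowledge)
        correct, correction = human_feedback(answer, truth)
        if not correct:
            knowledge[prompt] = correction  # the human's labor becomes training data
    print(f"after round {round_number + 1}: {knowledge}")

The only point of the sketch is that every correction a person supplies becomes data the system keeps – which is exactly where the labor question comes in.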
When I imagine what a finely tuned version of ChatGPT might look like, I can’t say it feels very comfortable, and I can’t imagine how it does not mean job/income loss in some way or another. It could also mean job creation, but none of us really have any idea.
What we do know is that ChatGPT’s underlying tech is GPT-3 and that OpenAI plans to drop an upgraded version, GPT-4, in 2023. Asking students to train the thing that might take away opportunities from them down the road seems particularly cannibalistic, but I also don’t know how you fight something you don’t understand.
Some ideas for mitigating the labor problem
I’m pretty stuck on this one. My go-to solution for labor problems is compensation, but I don’t know how that works here. I’m thinking that we are all getting ripped off every time we use ChatGPT. Even if it ends up making our lives better, OpenAI is now a for-profit (albeit “capped-profit”) company and they are going to make a lot here (barring legal issues). But I don’t think that OpenAI is going to start paying us any time soon. I suppose college credit is a kind of compensation, but that feels hollow. I do think that students should be aware of the possible labor issues, and no one should be forced to use ChatGPT to pass a course.
I just want to end by saying that we need some guidance, some consensus, some … thing here. I’m not convinced that all uses of ChatGPT are “cheating” and I’m not sure someone should fail an entire course for using it. I mean, sure, you pop in a prompt, get a three-second response that you copy and paste – I can’t call that learning, and maybe you should fail that assignment. But if you use it as a high-end thesaurus, or you know your subject and use ChatGPT to bounce ideas off of and you are able to call out when it is clearly wrong… Personally I’d even go so far as getting a first draft from it, as long as you expand on and cite the parts that come from the tool. I’m not sure these uses are the same thing as “cheating”, and if they are, I’ve likely “cheated” in writing this post. I’ve attempted a citation below.
~~~
** Update 1/26/23: After publishing this post, some readers were looking for more mitigation ideas. In response I published Prior to (or instead of) using ChatGPT with Your Students, which is a list of classroom assignments focusing on privacy, labor, history, and more around ChatGPT and AI more broadly.
*Some ChatGPT was used in the authoring of this blog post, though very little of the text is generated by ChatGPT. I chatted with it a bit as part of a larger process of questioning my own understanding of machine learning concepts, but this also included reading/watching the hyperlinked sources. My interactions with it included questions and responses around “human-in-the-loop” and “zero-shot learning”, but I didn’t use these terms in this post because I worried that they may not be accessible to my audience. I do think that I have a better understanding of the concepts because of chatting with ChatGPT – especially with the “now explain that to me like I was a 10-year-old” prompt. One bit of text generation is when I asked it to help me find other words/phrases for “spitballing” and I went with “throwing out ideas”.
For some time now I’ve used “Is a liminal space” as my tagline, and it has always intrigued me how people latch on to the word “liminal” in that little phrase, asking me what it means and why I identify with it. Fewer folks point out that people are not spaces and that spaces, though they may indeed influence people, are not people. Still, I go on using it because I like to make people think and wonder – what the heck does that mean?
Environments do indeed shape us, and that has been on my mind more than usual lately. Someone recently asked me that question that seems to circle our field over and over, about the differences between “designers” and “technologists” and what the “right” term for the work is. I’ve gotten to the place where those terms mean nothing to me. To understand what you do, in this strange world of edtech/instructional design/faculty development/teaching/administrating///, tell me where you work. That is the only way I will get some idea about what you do. And if what you do doesn’t fit the identity of the space you are working in… just wait. In my experience, you’ll either leave or change.
Digital environments have been one of my bags for some time now and yes they shape us too. Especially if you share through them and make connections there. But no environment is static and when Twitter was sold to Elon Musk a few months ago I think everyone knew things would change.
I didn’t leave, technically. I’ll likely share this post there. Technically, I’ve been on Mastodon since 2016, but “technically” I’m in a lot of places. It’s messy. I’m messy.
But when they started selling checkmarks… yeah, I had to go. I’ve never had a checkmark, but the idea of buying one. It is all just so sad and strange. To see so much widespread top-down abuse there. To scroll my feed and see all these reports of banned accounts and blocked links to competing platforms, and then, sprinkled in, someone promoting their latest article or webinar. I understand some people have invested years and have tens of thousands of followers, and that is hard to let go of. I don’t want to throw shade. It is just weird.
I also don’t want to tell anyone what to do but I will say it makes me happy to see familiar folks in other spaces.
I’m pretty privileged there, and by privileged I mean invisible. Of course that is not completely true, but it is not completely untrue either. I mean, it has been a long time since I’ve been an egg, but randos in my DMs still seem impressed that Barack Obama follows me. I’m somewhere in-between: not quite loud enough to make a fuss, but not translucent enough to feel comfortable existing in a space that just continues to increase in toxicity. But that is strange of me to say too – I can’t say it hasn’t always been toxic – I know that would be a lie. But now it just feels like the call is coming from inside the house more than ever. And to continue to post just feels like a statement of support for things I can’t agree with.
Also, it just feels like a time to try something new. Maybe it won’t be as big or as notable (is Barack even on Masto?) but that has never stopped me before. Starting again, and again, and again. That is kind of my thing. Perhaps that is why I’m perpetually on the threshold. It is sad but it is where I’m at.
I can’t help but think of starling murmurations. You know these, yes? Starlings are strange and wonderful for lots of reasons (equally good reasons why they are pests, but I’m trying to end on an upbeat note here), but one is the murmurations they perform in the sky at sunset. Here in Michigan I see them while driving in the country. How they pull this off is a bit of magic no one really understands the details of – maybe something similar is happening now.
In the beginning of 2017 I first discovered Cambridge Analytica (CA) through a series of videos that included a Sky News report, some of their own advertising, as well as a presentation by their CEO, Alexander Nix. I found myself fascinated by the notion that big data firms focused on political advertising were behind those little Facebook quizzes; that these data firms were creating profiles on people by harvesting their data from the quizzes and combining it with other information about them – basic demographics, voter and districting information, and who knows what else – to create a product for advertisers. I was in the process of refining a syllabus for a class and creating an online community around digital citizenship, so this was of particular interest to me.
My broad interest in digital citizenship is around our rights and responsibilities online, and I was compelled by the thought that we could be persuaded to take some dumb quiz and, through taking that quiz, have our data taken and used in ways that we never expected; in ways outside of our best interests.
I had questions about what we were agreeing to: how much data firms could know about us, what kind of metrics they were running on us, how the data could be shared, and what those messages of influence might look like. I started asking questions, but when the answers started coming in I found myself paralyzed under the sheer weight of how much work it took to keep up with all of it, not to mention the threats of financial blowback. This paralysis made me wonder about the feasibility of an everyday person challenging this data collection, requesting their own data to better understand how they were being marketed to, and, of course, about the security and privacy of the data.
However, much of this conversation is happening from the perspective of advertising technology (adtech), politics, and law. I’m interested in it from the perspective of education so I’d like to intersect the two.
The Request
A few weeks after I found those videos, featured by and featuring Cambridge Analytica, I came across a Motherboard article that gave some history of how the company was founded and how they were hired by several high profile political campaigns. Around this time I also found Paul-Olivier Dehaye of personaldata.io who was offering to help people understand how to apply to get a copy of their data from Cambridge Analytica based on the Data Protection Act (DPA), as the data was being processed in the UK.
My interests in digital citizenship and information/media/digital literacy had me wondering just how much data CA was collecting and what they were doing with it. Their own advertising made them sound pretty powerful, but I was curious about what they had, how much of it I’d potentially given to them through taking stupid online quizzes, and what was possible when it was combined with other data and powerful algorithms.
The original request was not to Cambridge Analytica but rather to their parent company, SCL Elections. There was a form that I had to fill out, and a few days later I got another email stating that I had to submit even more information and GBP £10, payable in these very specific ways.
Out of all of this, I actually found the hardest part to be paying the £10. My bank would only wire transfer a minimum of £50, and SCL told me that my USD check would have to match £10 exactly after factoring in the exchange rate on the day they received it. I approached friends in the UK to see if they would write a check for me that I could pay them back for. I had a trip to London planned and I considered dropping by their offices to give them cash, even though that was not one of the options listed. It seemed like a silly barrier – that a large and powerful data firm could not accept a PayPal payment or something and would instead force me into overpayment or deny my request due to changes in the exchange rate. In the end, PersonalData.io paid for my request and I sent along the other information that SCL wanted.
Response
After I got the £10 worked out with Paul, I heard from SCL pretty quickly saying that they were processing my request, and then a few days later I got a letter and an Excel spreadsheet from Cambridge Analytica that listed some of the data that they had on me.
It was not a lot of data, but I have administered several small learning platforms, and one of the things you learn after running a platform for a while is that you don’t really need a lot of data on someone to make certain inferences about them. I also found the last tab of the spreadsheet disconcerting, as this was the breakdown of my political beliefs. This ranking showed how important, on a scale of 1–10, various political issues were to me, but there was nothing that told me how the ranking was obtained.
Are the results on the last tab from a quiz that I took when I just wanted to know my personality type or which Harry Potter character I most resemble? Are they a ranking based on a collection and analysis of my own Facebook reactions (thumbs up, love, wow, sad, or anger) to my friends’ postings? A collection and analysis of my own postings? I really have no way of knowing. According to the communication from CA, it is these mysterious “third parties” who must be protected more than my data.
In looking to find answers to these questions, Paul put me in touch with Ravi Naik of ITN Solicitors, who helped me issue a response to CA asking for the rest of my data and more information about how these results about me were garnered. We never got a response that I can share, and in considering my options and the potential for huge costs I could face, it was just too overwhelming.
Is it okay to say I got scared here? Is it okay to say I chickened out and stepped away? Cause that is what I did. There are others who are braver than me and I commend them. David Carroll, who I mentioned earlier, just filed legal papers against CA; he followed the same process that I did and is still trying to crowdfund resources. I just didn’t have it in me. Sorry, democracy.
It kills me. I hope to find another way to contribute.
Platform Literacy and Gaslighting
So now it is a year later, the Cambridge Analytica story has hit, and everyone is talking about it. I backed away from this case and asked Ravi not to file anything under my name months ago, and yet here I am now releasing a bunch of it on my blog. What gives? Basically, I don’t have it in me to take on the financial risk, but I still think there is something to be learned, in terms of education, from the process I went through. This story is huge right now, but the dominant narrative is approaching it from the point of view of advertising, politics, and the law. I’m interested in it from the perspective of what I do – educational technology.
The part of boyd’s talk (and her response) that I find particularly compelling, in terms of overlap with this Cambridge Analytica story, is the construct of gaslighting in media literacy. boyd is not the first to use the term gaslighting in relation to our current situation with media but, again, I most often see this presented from the perspective of adtech, law, or politics and not so much from the perspective of education.
If you don’t know what gaslighting is, you can take a moment to look into it, but basically it is a form of psychological abuse between people in close relationships or friendships. It involves an abuser who twists facts and manipulates another person by drawing on that close proximity and the intimate knowledge they hold about the victim – playing on the victim’s fears, wants, and attractions.
One of the criticisms of boyd’s talk, one that I’m sympathetic to, is around the lack of blame that she places on platforms. People often underestimate what platforms are capable of, and I don’t think most people understand the potential of platforms to track, extract, collect, and report on their behaviour.
In her rebuttal to these criticisms, to which I am equally sympathetic, boyd states that she is well aware of the part that platforms play in this problem and that she has addressed it elsewhere. She states that it is not the focus of this particular talk to address platforms, and I’m okay with that – to a point. Too often we attack a critic (for some reason, more often critics of technology) who is talking about a complex problem for not addressing every facet of that problem all at once. It is often just not possible to address every angle at the same time, and sometimes we need to break a problem up into more digestible parts. I can give this one to boyd – that is, until we start talking about gaslighting.
It is exactly this principle of platforms employing personalization, or intimate knowledge of who a person is, that makes the gaslighting metaphor work. We are taking a term that describes a very personal kind of abuse and using it to describe a problem at mass scale. It is the idea that the platform has data which tells it bits about who you are, and that there are customers (most often advertisers) out there who will pay for that knowledge. If we are going to bring gaslighting into the conversation, then we have to address the ability of a platform to know what makes you like, love, laugh, wow, sad, and angry – and to use that knowledge against you.
We don’t give enough weight to what platforms take from us, how they often hide our own data from us, and how they then sell it to third parties (users don’t want to see all that messy metadata… right?). I’m not sure you can even glimpse the possibilities if you are not in the admin position – and who gets that kind of opportunity?
It would be a stretch to call me a data scientist, but I’ve built up some kind of “platform literacy” after a little more than a decade of overseeing learning management systems (LMS) at small colleges. Most people interact with platforms as users, not as admins, so they never get that view. I’m not sure how to quantify my level of platform literacy, but please understand that I’m no whiz kid – an LMS is no Facebook, and in my case we are only talking about a few thousand users. I’m more concerned with making the thing work for professors and students than anything. However, in doing even a small amount of admin work, you get a feel for what it means to consider and care about things on a different level: how accounts are created, how they interact with content and with other accounts, the way accounts leave traces through the content they contribute but also through their metadata, how the platform is always monitoring this, and how, as an administrator, you have access to that monitoring when the user (the person) often does not.
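To give a sense of what those traces can look like, here is a purely hypothetical sketch – the field names, events, and values are invented for illustration, not pulled from any real LMS – of the kind of activity log an admin can see and a user usually cannot.

from datetime import datetime

# Invented records for one hypothetical user; no real system or data is represented.
activity_log = [
    {"user_id": 1042, "event": "page_view", "item": "Week 3 reading",
     "when": datetime(2018, 3, 20, 23, 47), "ip": "10.0.4.17", "device": "mobile"},
    {"user_id": 1042, "event": "quiz_attempt", "item": "Quiz 2",
     "when": datetime(2018, 3, 21, 2, 12), "ip": "10.0.4.17", "device": "mobile"},
]

# Even two rows support inferences: late-night activity, last-minute studying,
# always on the same phone. Scale this up to every click over a semester.
for row in activity_log:
    print(f"{row['when']}: user {row['user_id']} {row['event']} on '{row['item']}'")

Nothing in that sketch is exotic – it is just timestamps and clicks – and that is the point: even mundane metadata, collected continuously and visible only to the admin side, adds up to an intimate picture.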
I don’t think that most LMS admins (at least as LMSs are currently configured) at small colleges are incentivised to go digging for nuanced details in that monitoring unprompted. I do think that platform owners who have customers willing to pay large sums for advertising contracts have more of a motivation to analyze such things.
Educational researchers are incentivised to show greater returns on learning outcomes, and the drum beat of personalized learning is ever present. But I gotta ask: can we pause for a second and think… is there something for education to learn from this whole Cambridge Analytica/Facebook story of personalization and microtargeting in advertising? Look at everything that I went through to try to better understand the data trails I’m leaving behind, and I still don’t have the answers. Look at the consequences that we are now seeing from Facebook and Cambridge Analytica. The platforms that we use in education for learning are not exempt from this issue.
My mind goes back to all the times I’ve heard utopian dreams about making a learning system that is like a social media platform. All the times I’ve seen students who were told to use Facebook itself as a learning tool. So many times I’ve sat through vendor presentations around learning analytics and then, during Q&A, asked “where is the student interface – you know, so the student can see all of this for themselves?” only to be told that was not a feature. All the times I’ve brainstormed the “next generation digital learning environment” only to hear someone say “can we build something like Facebook?” or “I use this other system because it is so much like Facebook”. I get it. Facebook gives you what you want and it feels good – and oh how powerful learning would be if it felt good. But I’m not sure that what it gives you is learning.
In her rebuttal, boyd says that one of the outstanding questions she has after listening to the critics (and thanking them for their input) is how to teach across gaslighting. So it is here that I will suggest we have to bring platforms back into the conversation. I’m not sure how we talk about gaslighting in media without looking at how platforms manipulate the frequency and context with which media are presented to us – especially when that frequency and context are “personalized” and based on intimate knowledge of what makes us like, love, wow, sad, grrrr.
Teaching and learning around this is not about validating the truthfulness of a source or considering bias in the story. Teaching and learning around this is about understanding the how and why of the thing, the platform, that brings you the message. The how and why it is bringing it to you right now. The how and why of the message looking the way that it does. The how and why of a different message that might be coming to someone else at the same time. It is about the medium more than the message.
And if we are going to talk about how platforms can manipulate us through media, we need to talk about how platforms can manipulate us and how some will call it learning. Because there is a lot of overlap here, and personalization is attractive – no really, I mean it is really, really pretty and it makes you want more. I have had people tell me that they want personalization because they want to see advertising for the things that they “need”. I tried to make the case that if they really needed it then advertising would not be necessary, but this fell flat.
Personalization in learning and in advertising is enabled by platforms. Just as there are deep problems with the personalization of advertising, we will find those problems multiplied by tens of thousands when we apply personalization to learning. Utopian views that ignore the problems of platforms and personalization are only going to end up looking like what we are seeing now with Facebook and CA. The thing that I can’t shake is this feeling that the platform itself is the thing we need more people to understand.
What if, instead of building platforms that personalized pathways or personalized content, we found a way to teach platforms themselves, so that students really understood what platforms are capable of collecting, producing, and contextualizing? What if we could find a way to build that platform literacy within our learning systems? Perhaps then, when inside of social platforms, people would not so easily give away their data, and when they did they would have a better understanding of the scope. What if we were really transparent with the data that learning systems have about students and focused on making students aware of the existence of their data and emphasised their ownership over it? What if we taught data literacy to students with their own data? If decades ago we had focused on student agency and ownership over platforms and analytics, I wonder if Cambridge Analytica would have even had a product to sell to political campaigns, let alone ever been a big news story.
I’m not saying this would be a fail-safe solution – solutions come with their own sets of problems – but I think it could be a start. It would mean a change in the interfaces and structures of these systems, but it would mean other things too. Changes in the way we make business decisions when choosing systems and changes in the way we design learning would have to come along with it. But we have to start thinking and talking about platforms to even get started – because the way they are currently configured has consequences.
I’m excited to be presenting a poster at ELI2018 with Sundi Richard on DigPINS – a participatory faculty development experience. Sundi designed DigPINS around the same time that I was designing my first year seminar in digital citizenship – of course we co-founded #DigCiz and digciz.org together so there has been a lot of talk between us about all of these projects.
DigPINS looks at Digital Pedagogy, Identity, Networks, and Scholarship as an online faculty development experience, in a cohort model, over a set time period. It sort of reminds me of a cMOOC, except the focus is not on massive numbers and part of the experience does not happen in the open – the cohort at the school running the course has a backchannel, and since they are often close in physical proximity to one another, they can sometimes just talk to each other on campus.
For our poster we have given a description of each of the defining concepts (the PINS: Pedagogy, Identity, Networks, and Scholarship) on one half, and an interactive description of examples of the activities on the other half. The activities are dynamic and complex – they are not easily put into a box – hence making the poster interactive. How do we make a poster interactive? Well, each activity will be printed separately so that, during explanation, it can be placed along two intersecting continuums: Private/Public and Synchronous/Asynchronous. The far extremes of each of these are hard to get at, and I’m not sure that anything in DigPINS belongs there, but we are hopeful that having the activities as moveable elements will let us better demonstrate their complexity.
A digital version of the poster is embedded below – it is three slides long: Slide 1 is the poster, Slide 2 holds the moveable activities, and Slide 3 is a description.
Some of you know I just took a position at St. Norbert, and one of the big reasons was that I knew they were not just open to but encouraging of really exciting approaches to faculty development like DigPINS. I just finished running my first implementation of DigPINS at St. Norbert. I had a great group of faculty, staff, and librarians who were really thoughtful about their approaches. We had some serious conversations about the good and bad of technology, social media, and mobile access, and their effects on pedagogy, scholarship, and ourselves.
I’m excited to be able to present with Sundi on DigPINS – our next move is to open the curriculum so that others can take the skeleton of the defining concepts and activities and make it their own at their institution. That is coming soon so stay tuned!!!