

Prior to (or instead of) using ChatGPT with your students

I have been thinking, reading, and writing a lot about OpenAI’s ChatGPT product over the last month. I’ve been writing from the perspective of instructional design/faculty development/edtech mostly in higher education, though I did dive into a bit of K-12 (which is totally out of my element).

I understand the allure of the tool and the temptation to have students use it. It is new and shiny and everyone is talking about it. It is also scary, and sometimes we can assuage our fears by taking them on directly. 

But I suggested across two other posts that educators might not want to have students work directly with ChatGPT by signing up for a free OpenAI account, for the following reasons:

  • Student data acquisition by OpenAI
    • Any time you use a tool that requires an account, the company gains an identifier with which it can tie your use of the site to your identity
    • You need to provide personally identifiable information like an email, phone number, or Google account to create your OpenAI account
    • Their terms are quite clear about collecting and using data themselves as well as sharing/selling it to third parties
  • Labor Issues 
    • Using ChatGPT is providing free labor to OpenAI for their product development. They are clear about this in their terms and on their FAQ page.
    • I don’t want to go down the “robots are coming for our jobs” path, but many people (including the people building these tools) do envision AI having major impacts on the job market. Is it okay to ask students to help train the very thing that might take opportunities from them? It could create opportunities too, but shouldn’t they understand that?
    • And I didn’t mention this in the other posts, but the AI industry has horrible labor practices, exploiting global workers who train these systems. Do we want students to be part of that? Shouldn’t they at least know?
  • ChatGPT is not a stable release; it could change or go away at any point. It is estimated to cost $3 million USD every month to keep running. What happens to your assignments if it is down or gone?
    • ChatGPT has been released as a “Research Preview” and no one really knows what that is
      • It might be similar to a “Public Beta” or a “Developer’s Beta” but both of these come with an assumption of a public release which we do not have with ChatGPT
    • It is often down or slow because of the large number of users
    • Features are changing all of the time (for instance chat histories have disappeared and reappeared a few times already)


After suggesting this I got a good bit of pushback. “But AI is such a big deal Autumm, and it is going to change the world, and students need to be prepared … and… digital literacy and… and… and…”

I hear you, my well-intentioned pedagogue. And yet I still have these concerns. So, here are some things you may want to do with your students prior to having them directly use the ChatGPT product with a free OpenAI account – and (I’m kind of hoping) maybe you’ll want to have them do these things instead.

Socially annotate OpenAI’s privacy policy and terms of service

Wouldn’t it be great if students better understood what they were getting themselves into by creating that account with OpenAI? A social annotation activity on OpenAI’s privacy policy and terms of service (TOS), using a tool like Hypothesis, can start building that understanding. I’ve done this several times out on the open web with various collaborators. TOS and privacy policies are dense technical and legal reading, so working through them as a group with inline comments really helps. If you can invite a guest annotator who has a background in law or policy, great; if not, consider assigning a reading before the annotation about what to look for in a privacy policy and how to read a TOS.
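If you want to pull the group’s annotations back out for an in-class debrief, Hypothesis has an open search API. A minimal sketch (the target URL, and the assumption that the class annotated publicly rather than in a private group, are mine):

```python
# Fetch public Hypothesis annotations on a given page for class discussion.
# The annotated URL below is a placeholder; swap in the page your class used.
import requests

resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"uri": "https://openai.com/terms/", "limit": 50},
    timeout=10,
)
resp.raise_for_status()

for row in resp.json()["rows"]:
    user = row["user"]              # e.g. "acct:someone@hypothes.is"
    note = row.get("text", "")[:80] # first 80 chars of the comment
    print(f"{user}: {note}")
```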

*Note – This one can be somewhat problematic if your school does not provide a social annotation tool, as students would likely need to create an account with a social annotation provider that does not have an agreement with your school – which could be the same problem you are trying to avoid. I do feel better about Hypothesis because they are a non-profit, but you could also get around this by copying the terms/policy into your school-supported cloud word processor (Google Docs, MS365, etc.) and just using the commenting feature.

Play the Data, Privacy, and Identity game with your students

Instead of “playing” with ChatGPT (cough, not a toy, cough) in your class, you could play the Data, Privacy, and Identity game developed by Jeannie Crowley, Ed Saber, and Kenny Graves. The game was first developed as an in-person activity: read Jeannie’s blog post for an overview, then check out the resource page where you can read the instructions and print off cards. Looking for an online version? Since the team published the game under a CC 4.0 license, I adapted it into an online version on a simple WordPress site using H5P that requires no login and collects no data.

Discuss big issues around AI like labor and climate

Have a discussion with students about the big issues with AI that are likely to affect them. A good overview of the issues with large language models can be found in Bender, Gebru, McMillan-Major, and Shmitchell’s 2021 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? A discussion of this paper will set you up to dive deeper into the issues.

The impacts of artificial intelligence on labor speak directly to the world of work that students will graduate into. This report from the US-EU Trade and Technology Council about the impact on future workforces can be a starting point. You may want to break it into sections, and keep in mind that it is US/EU centric. Follow up with (or start with, depending on your context) a more global perspective: you could check out MIT Technology Review’s whole series of articles on AI Colonialism, or the recent reporting from Time about OpenAI paying workers in Kenya less than $2 a day for grueling work training the model (you will need a content warning for SA and will have to figure out how to get around the paywall for the Time exclusive, but other great articles about this report exist, like this one from Chloe Xiang on Motherboard).

Large language models like ChatGPT take a lot of computing power to run, and all of that electricity has a carbon footprint that we are still trying to figure out how to measure. Discussing this with students helps them understand these potential impacts. Maybe start with a discussion around this MIT Tech Review article on how Hugging Face is attempting to better measure things.
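For a hands-on angle, students could also measure a small computation of their own. Here is a minimal sketch using the open-source codecarbon Python package (the workload and project name are placeholders, and the numbers it reports are rough estimates – which is itself a useful discussion point):

```python
# Estimate the carbon footprint of a (deliberately tiny) computation.
# pip install codecarbon
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="class-demo")  # name is a placeholder
tracker.start()

# Stand-in workload; in a real demo this could be any computation.
total = sum(i * i for i in range(10_000_000))

emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.8f} kg CO2eq")
```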

Conduct a technoethical audit 

If you don’t know about all the resources on the Civics of Technology site, you are in for a treat. Here I’m specifically going to recommend their EdTech Audit resources, but the site has a great larger curriculum with all kinds of resources. I’m not sure that ChatGPT is really “EdTech”, but if you are thinking of having students use it then you are using it as EdTech. I think the questions, handouts, and examples provided here will help you get your students to analyze some of the implications raised in the articles and activities listed above.

Analyze your data collected from other social media platforms

Check out HestiaLabs’ Digipower Academy. They have several tools, which run in the browser and collect no data, that let you examine and better understand the way social media platforms use your data for targeted advertising. It does require that you request a data export from these various platforms, but they have instructions for how to do that for each platform. After the tools analyze your data, they provide you with dashboards and metrics to help you better understand why you are being targeted the way that you are (because we are all being targeted in some way). Don’t feel comfortable having students download their own data (can they really secure it)? They have sample data you can run too.
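For classes with a little scripting comfort, even a few lines of code can make the same point as the dashboards. This sketch is hypothetical: the file name and structure below are invented for illustration, since every platform formats its export differently:

```python
# Count the ad-interest categories in a downloaded data export.
# "ad_interests.json" and its structure (a flat list of category strings)
# are hypothetical -- adapt to whatever your platform actually exports.
import json
from collections import Counter

with open("ad_interests.json") as f:
    interests = json.load(f)

print("Top 10 categories advertisers target you under:")
for category, count in Counter(interests).most_common(10):
    print(f"  {category}: {count}")
```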

Work Through The People’s Guide to AI

What even is an “algorithm”? What is the difference between AI and machine learning? The People’s Guide to AI is a workbook that helps you answer these questions. It is filled with relatable descriptions, activities, prompts, and so much more! You could spend a whole term working through this thing! Written by Mimi Onuoha and Diana Nucera, a.k.a. Mother Cyborg, with design and illustration by And Also Too, and licensed CC BY-NC-SA 4.0, the workbook is also available in print for the affordable price of just $7 USD – and you will want to write in it, so paper copies are not a bad idea.

Learning objectives

These are just some of my ideas for activities and assignments. You can come up with your own, but perhaps consider the following learning objectives (or something like them) to guide you.

Prior to creating an account with OpenAI students will:

  • Discuss the value that their personal data holds with various actors (themselves, friends/family, school, corporate, government) 
  • Demonstrate an understanding of typical tech product cycles and compare them to non-typical ones
  • Compare how power is held by various actors (themselves, friends/family, school, corporate, government) 
  • Analyze workforce implications of AI at home and globally  
  • Create a personal data security plan 


These are just some ideas, and I’m sure they are flawed in various ways, I’m sure they won’t work for every course, and I’m sure some folks are already doing something similar or even better. But the message I’m trying to send here is this: think about some of the larger picture in AI, and have students think about it, before you have students sign up and start “playing” with something they don’t understand.

~~~~

Image by Kevin from Pixabay

  • *This post is especially messy as I accidentally hit publish while drafting late at night. The section Analyze your data collected from other social media platforms was added the next morning, and then later the next evening I added the Work Through The People’s Guide to AI section. I just keep thinking of things to add!
  • ** No ChatGPT was used in composing this post

In Defense of “Banning” ChatGPT

The current big news on ChatGPT is around the decisions by K-12 school districts to pause, think, push back, and sometimes “ban” ChatGPT. I’ve mostly heard about the NYC Department of Ed because they have actually put a “ban” in place, but other districts are now considering their approach. What does it mean to “ban” a technology in a school? In this case it means blocking it on their networks and on all school-issued devices. Another way of pushing back, though, is simply spreading the message that this tool is not aligned with the school’s values.

The responses I’ve seen are mostly dismissive of the schools that are considering or implementing a ban, and they often buy into the techno-inevitability frame: this is the future; you can’t fight it. But I’m more sympathetic to these schools’ stance. While I do think they have set themselves up for a Streisand effect, and I realize there are other ways to access the tool on cell networks and personal devices, I also feel the need to defend this approach.

I know little about K-12 education myself; I mostly work in higher ed. But I do know that K-12 schools block parts of the internet all of the time, and I’m pretty sure they are often required to do so here in the US to get federal funding. OpenAI’s own Terms of Use state that their tools should not be used by anyone under 18, and their Privacy Policy says they are not intended for anyone 13 or younger. Additionally, the NYC Dept of Ed has provisions for lifting the ban for schools that would like to explore the pedagogical possibilities of the tool, so those who have a plan and intention have a pathway to use it.

I think others who are sympathetic to banning are motivated by cheating concerns, but I’m not so interested in the “cheating” angle. I do think that this tool could be used to assist in critical thinking and in the drafting process. But I’ve worked in edtech for 15 years, and if I know anything, it is that quickly throwing new technology, at scale, into a learning environment is a recipe for disaster. It takes people to develop meaningful curricula around technology use, and to imagine harms and try to avoid them, and that takes time. I’m all for slowing this bus down.

I wrote about some concerns that well-intentioned higher ed instructors, who want to use ChatGPT with their students, might want to think about. There, I mostly cited privacy and larger labor concerns, which I think are heightened for K-12. But another concern for both higher ed and K-12 might be that this is a “free for now” product. Some are estimating that it costs $3 million a month to run the thing. They are going to start charging for it at some point. What if that happens in the middle of your term?

I’m okay with some schools considering and even deciding to attempt to throttle ChatGPT usage – especially K-12 schools. OpenAI is pretty open about the fact that this whole thing is a big experiment around the effects of releasing ChatGPT on society. As CNN reported:

A spokesperson for OpenAI, the artificial intelligence research lab behind the tool, said it made ChatGPT available as a research preview to learn from real-world use. The spokesperson called that step a “critical part of developing and deploying capable, safe AI systems.”

OpenAI says that their mission is “to ensure that artificial intelligence is a benefit to all of humanity”, but I’m not sure how that tracks with running experiments on the general public (in higher ed this would never pass IRB). Drawing a line at extending this experiment to kids is okay in my book.

~~~~

Image by Gerd Altmann from Pixabay

ChatGPT and Good Intentions in Higher Ed

I’m frustrated by the conversation around ChatGPT in higher education.

So far, the conversation has been largely about using the tool as a text generator and fears around how students can use it for “cheating”. I tend to think this is only the tip of the iceberg, and it frustrates me – though this convo is still very young, so maybe I just need to give it a chance to develop. I think the more interesting (and likely disruptive) conversation is around how the tool can be used for meaning making (and the legal issues around intellectual property). Maybe I’m overreacting, though maybe I’m not.

But meaning making is not the topic of the day! No, the topic of the day is “cheating” and everyone is officially freaking out!

Just in the last few days there have been claims of “abject terror” by a professor who was able to “catch” a student “cheating” with ChatGPT (resulting in the student failing the entire course), calls to return to handwritten, in-person essay writing, and an article about the tool’s impacts in higher ed with over 400 comments (at the time of this writing on Dec 29th) almost entirely focused on fears around “cheating”.

Besides the calls for surveillance and policing, the humanized approaches being proposed include talking with students about ChatGPT and updating your syllabus and assignment ideas to include it. But often these ideas involve getting students to use it – helping them to see where it can be useful and where it falls down. This is a go-to approach for the humanistic pedagogue, and don’t get me wrong, I think it is head and shoulders above the cop shit approach. Yet there are some parts of this that I struggle with.

I am skeptical of the tech-inevitability standpoint that ChatGPT is here and we just have to live with it. All-out rejection of this tech is appealing to me, as it seems tied to dark ideologies and does seem different, perhaps more dangerous, than stuff that has come before. I’m just not sure how to go about that all-out rejection. I don’t think trying to hide ChatGPT from students is going to get us very far, and I’ve already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.

Anyway, two good-intention issues around working with ChatGPT and students are giving me pause:

It is a data grab

Many (though not all) of the ideas I’ve heard/seen for assignments that use ChatGPT require students to use ChatGPT, which requires an OpenAI account. An OpenAI account requires identifiable information like an email address or Google account, which means that your use can be tracked. Their privacy policy is pretty clear that they can use this info how they want, and that includes third-party sharing and data possibly being visible to “other users” in a way that seems particularly broad.

I have this same issue with any technology that does not have a legal agreement with the university (and I don’t necessarily even trust those who do). But I’ve also long believed that the university is in a futile battle if we really think that we can stop students or professors from using things that are outside of university contracts. 

Some mitigation ideas for the data grab

Note: All of my mitigation ideas I’m sure are flawed. I’m just throwing out ideas, so feel free to call out my errors and to contribute your own ideas in your own blog post or in the comments below. 

Don’t ask students to sign up for their own accounts, and definitely don’t force them to. There is always the option of the professor using their own account to demo things for students, and other creative design approaches could be used to expose students to the tool without having them sign up for accounts.
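As one concrete (and hedged) illustration of the demo-from-the-instructor’s-account idea: ChatGPT itself has no public API as I write this, but an instructor could screen-share generations from the underlying GPT-3 completions endpoint using only their own key. The model name and prompt below are placeholders, and the library interface changes over time:

```python
# Instructor-only demo: students watch the output; no student accounts needed.
import openai

openai.api_key = "sk-..."  # the instructor's own key; never shared with students

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 stand-in; ChatGPT itself has no API yet
    prompt="Explain Newton's second law to a first-year physics student.",
    max_tokens=200,
)
print(response.choices[0].text.strip())
```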

If students want their own accounts, maybe specifically warn them about some of the issues and encourage them to use a burner email address – but only if they choose to sign up.

I’m not sure if it breaks some kind of user policy somewhere to have a shared password on one account for the whole class to use. This could get the account taken down, but I wonder how far you could take it.

It is uncompensated student and faculty labor, potentially working toward job loss

How do humans learn? Well, that is a complex question that we don’t actually have agreement on, but if you will allow me to simplify one aspect of it: we make mistakes, realize those mistakes (often in collaboration with other humans – some of whom are nice about it and others not so much), and then (this part is key) we correct those mistakes. Machine learning is not that different from this kind of human learning, but it gets more opportunities to get things wrong and it can go through that iterative process faster. Oh, and it doesn’t care about niceness.
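If you want a concrete picture of that loop, here is a toy sketch of my own (deliberately oversimplified, and not how ChatGPT is actually trained): a “machine” that starts with a wrong guess and corrects itself using only the size and direction of its mistakes:

```python
# Guess, err, correct: the machine nudges its guess toward a target
# using nothing but the error it just made.
target = 42.0
guess = 0.0
learning_rate = 0.3

for step in range(50):
    error = guess - target          # realize the mistake
    guess -= learning_rate * error  # correct it (this part is key)

print(round(guess, 2))  # prints 42.0 -- many fast, careless iterations
```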

Note: I cannot even try to position myself as some kind of expert on large language models, AI, or machine learning. I’m just someone who has worked in human learning for over 15 years and who has some idea about how computational stuff works. I’ve also watched a few cartoons, and I’ve chatted with ChatGPT about machine learning terms and concepts*

But even with all of those iterations, it seems to me that human feedback is key to its training, and the kinds of assignments we would ask students to take part in using ChatGPT are exactly the kind of human fine-tuning that it (and other tools like it) really needs right now to become more factually accurate and to really polish that voice. Machines can go far just on those failing/succeeding loops that they perform themselves, but that human interaction [chef’s kiss]. And that should be worth something.

When I imagine what a finely tuned version of ChatGPT might look like, I can’t say it feels very comfortable, and I can’t imagine how it does not mean job/income loss in some way or another. Now, it could also mean job creation, but none of us really has any idea.

What we do know is that ChatGPT’s underlying tech is GPT-3, and OpenAI plans to drop an upgraded version, GPT-4, in 2023. Asking students to train the thing that might take opportunities away from them down the road seems particularly cannibalistic, but I also don’t know how you fight something you don’t understand.

Some ideas for mitigating the labor problem 

I’m pretty stuck on this one. My go-to solution for labor problems is compensation, but I don’t know how that works here. I’m thinking that we are all getting ripped off every time we use ChatGPT. Even if it ends up making our lives better, OpenAI is now a for-profit (albeit “capped-profit”) company, and they are going to make a lot here (barring legal issues). But I don’t think that OpenAI is going to start paying us any time soon. I suppose college credit is a kind of compensation, but that feels hollow. I do think that students should be aware of the possible labor issues, and no one should be forced to use ChatGPT to pass a course.

I just want to end by saying that we need some guidance, some consensus, some … thing here. I’m not convinced that all uses of ChatGPT are “cheating”, and I’m not sure someone should fail an entire course for using it. I mean, sure, if you pop in a prompt and get a 3-second response that you copy and paste – I can’t call that learning, and maybe you should fail that assignment. But if you use it as a high-end thesaurus, or you know your subject and use ChatGPT to bounce ideas off of and are able to call out when it is clearly wrong… Personally, I’d even go so far as getting a first draft from it, as long as you expand on and cite what parts come from the tool. I’m not sure these uses are the same thing as “cheating” – and if they are, I’ve likely “cheated” in writing this post. I’ve attempted a citation below.

~~~~

** Update 1/26/23: After publishing this post, some were looking for more mitigation ideas. In response I published Prior to (or instead of) using ChatGPT with Your Students, which is a list of classroom assignments focusing on privacy, labor, history, and more around ChatGPT and AI more broadly.

Image by Yvonne Huijbens from Pixabay

*Some ChatGPT was used in the authoring of this blog post, though very little of the text was generated by it. I chatted with it a bit as part of a larger process of questioning my own understanding of machine learning concepts, but this also included reading/watching the hyperlinked sources. My interactions with it included questions and responses around “human-in-the-loop” and “zero-shot learning”, but I didn’t use these terms in this post because I worried that they may not be accessible to my audience. I do think that I have a better understanding of the concepts because of chatting with ChatGPT – especially with the “now explain that to me like I was a 10-year-old” prompt. One bit of text generation: I asked it to help me find other words/phrases for “spitballing”, and I went with “throwing out ideas”.

ChatGPT ID/FacDev?

Many of us who work in higher ed have been thinking about ChatGPT since OpenAI dropped free access to it at the end of November. The fancy new chatbot, which can generate essays, responses to qualitative quiz questions, and discussion board prompts, has everyone thinking about academic integrity and “cheating”. The tech has been around for several years, but offering access for free, right before finals, has caused quite the stir in higher education.

Something I hear a lot of people talk about, but which I feel is still not getting enough attention, is the question of why this tech is free. It is not much of a question, because it seems everyone is aware that the tool has been given to the public for free so that massive amounts of people can help to train it.

So, like most of these kinds of things, it is not really free. Maybe we are having fun playing with it, but we are exchanging our time, our creativity in writing questions/prompts, and our data for access. You need to create an account, which needs to be tied to an email and, I believe, a phone number. At the bottom of the ChatGPT input screen it clearly reads “ChatGPT Dec 15 Version. Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve.” But improve for whom, and to what end?

Most of the folks I have heard talk about this hint at how it is being trained by public labor for free, on public data obtained for free, so that eventually it will be used to create corporate products which will likely take away jobs and make billions for the creators. But after throwing this fact out there nonchalantly, often with a tone insinuating that it is a no-brainer, they move on and talk about how they used it to generate a rap in the style of Jay-Z, asked it questions about philosophy, or tried to get it to mimic a student’s responses to see if they (or their colleagues) could be fooled by it. I realize I’m about to be guilty of doing the same thing here – perhaps I just point this out to try to redeem some semblance of integrity. This work continues to put me in a paradox.

OpenAI seems more than aware of the potential economic impacts of all of this, and they have a research agenda around it – but this gives me little comfort. I can’t help but think about my own position in instructional design/faculty development/academic technology.

“Instructional design” (ID) can live in lots of places in the university, and the position takes on a different flavor depending on where it exists. You have a different job if you are an instructional designer in the teaching center vs. the library vs. IT vs. HR. Not all of us work with faculty, and there is variation even among those of us who do. Some IDs are content focused, using skills like graphic design or videography to develop things. My work has never been very content-creation heavy – though I do like to create content. Working in smaller schools with tight budgets, I mostly consult with faculty, and for many of us this consulting role is a big part of our work. I talk with faculty about their teaching and offer advice about what they can do better.

I talk with them… I offer advice… You see where I’m going here.

This made me wonder what kind of instructional designer/faculty developer consultant ChatGPT would make, so I decided to have a very basic conversation with it, posing as a faculty member in physics. I copied the transcript of the chat into this Google Doc, and I’m sharing it publicly for reflection by others in the field.

As for my own reflection 

I’ll say that the results of my chat are much like what I have seen in the disciplines: these are perfectly plausible responses that sound very human, but they don’t go very deep, and it is in that lack of depth that those of us who do this work will recognize the flaws.

The bot falls down when asked about discipline-specific approaches and when asked for anything that could connect to what other instructors may have tried in the past. It glosses over specifics around EdTech and sometimes flat out gets things wrong (I think the directions it gives for dropping the lowest grades in Canvas sound more like directions for Moodle, personally). I’m not actually a physics professor, so I didn’t/couldn’t ask it for specifics about teaching individual topics in physics. In my experience, it does do better when you ask it to narrow its scope, so asking more detailed questions could make a big difference.

Still, the results are very similar to a lot of the faculty development advice that I see. Be it on various blogs, websites, or listservs, or even what I sometimes hear come out of people’s mouths, much of it is the same basic stuff over and over. Professors are busy, so giving them simple lists and easy answers for improvement is quite common, and ChatGPT mimics these basics pretty well, including attempts at big-picture synthesis. It ends its advice to me about developing as a teacher by saying, “Remember, being a lifelong learner as an educator is an important part of staying current and effective in your teaching practice.”

It’s not surprising. ChatGPT is trained on huge data dumps of the internet (including Reddit, Wikipedia, and websites). I threw the phrase “5 things you can do to improve collaboration in your class” into Google and got back 712,000,000 results of various listicle-type pedagogy puff pieces. With so much of this stuff out there already, maybe it doesn’t matter that a chatbot can regurgitate it too? But I have to wonder what it means for our work.

I’ve been struggling with a kind of imposter syndrome from this work for some time. I say a “kind of” imposter syndrome because I refuse to take all of the blame: I can’t shake the feeling that at least some of it comes from the work itself, that the nature of the work encourages it. So many of us are limited in our own opportunities to go deeper or to reflect in more meaningful ways. We are incentivized to create/repeat these easy answers/“best practices”. Since the pandemic, we have seen many of our professional development organizations raise the prices of in-person conferences and reject accessible virtual options. Meanwhile, professional development funds from our institutions often have not increased – and don’t get me started about how frequently we are throttled in our own attempts to teach directly.

Many of us rely on disclaimers, positioning ourselves in various ways to account for our lack of knowledge/experience in domain-specific areas, with technology, or even with the specifics of teaching. At the beginning of that chat, the bot even gave me its own disclaimer: “As a language model, I don’t have personal experience in instructional design or teaching, but I can provide general information and suggestions based on best practices in the field.” So, some of this is just the nature of the work, but it is depressing nonetheless.

No one really knows yet what ChatGPT means for higher ed, and I’ve not seen much talk about what it means for EdTech/Instructional Design/Faculty Development. We are in a kind of wait-and-see, react-when-we-can moment. I guess I’m hopeful this will open up room for more thoughtful and creative work. But I worry that it will force us to ask some hard questions about what kind of work is meaningful, and that will cause some casualties.

How I got here/More if you want it

If you were paying attention to this chatbot/large language model (LLM) conversation at all before Nov. 30th, or even if you have dug a little deeper since, you likely heard about this paper by Bender, Gebru, et al., but if you haven’t, and you want a critical look at the dangers of this stuff (including the environmental impacts and the perpetuation of biases), this is what you should really be paying attention to. I also found this piece from Jill Walker Rettberg really helpful in better understanding the underlying datasets that GPT-3 is trained on and in reflecting on the culture they come from. The relationships and evolutions between ChatGPT, GPT-3 (as well as 1 and 2), InstructGPT, and all that are quite confusing, but this post (from the Walmart Global Tech Blog, of all places) helps a bit. For even more, Lee Skallerup Bessette has created a Zotero library collecting all things ChatGPT in higher ed.

In addition, I ended my last post (which was mostly a reflection on current happenings with Twitter) with a reflection on European starlings. Yes, starlings, the invasive bird species who are problematic but, as I wrote in that post, also “strange and wonderful for lots of reasons”. I had focused on their murmurations, but another important facet of the starling’s disposition is its proclivity for mimicry – which of course got me thinking about things that can talk but not really understand.

~~~~

Featured Image by Kev from Pixabay

Designing for Privacy with DoOO: Reflections after DPL

The thinking for this post comes on the tail end of Digital Pedagogy Lab (DPL) where, despite not being enrolled in any of the data or privacy offerings, concerns about student data and privacy rang loud in my ears. This came from various conversations, but I think it really took off after Jade Davis’ keynote and after Chris G and Bill Fitzgerald visited us in Amy Collier’s Design track to talk about designing for privacy. After the Lab I also came across Matthew Cheney’s recent blog post How Public? Why Public?, where he advocates for public work that is meaningful because it is done in conjunction with private work, and where students use both public and private options depending on what meets the needs of varying circumstances.

A big part of what attracts me to Domain of One’s Own (DoOO) is this possibility of increased ownership and agency over technology, and a somewhat romantic idea I have that this can transfer to inspire ownership and agency over learning. In considering ideas around privacy in DoOO, it occurred to me that one of the most powerful things about DoOO is that it has the capability of being radically, publicly open – but that being coerced into the open, or even going open without careful thought, is the exact opposite of ownership and agency.

In a recent Twitter conversation with Kris Shaffer, he referred to openness and privacy as two manifestations of agency. This struck me as sort of beautiful, and it made me think harder about what we mean by agency, especially in learning and particularly in DoOO. I think that the real possibility of agency in DoOO starts with teaching students what is possible around the capabilities and constraints of digital environments. If we are really concerned about ownership and agency in DoOO, then we have to consider how we will design for privacy when using it.

DoOO does allow for various forms and levels of privacy, which are affected by deployment choices, technical settings, and pedagogical choices. I hear people talk about these possibilities, and even throw out different mixes of these configurations from time to time, but I have never seen them listed out as a technical document anywhere.

So, this is my design challenge: how can I look at the possibilities of privacy for DoOO, refine those possibilities for specific audiences (faculty and students), and then maybe make something that is not horribly boring (as technical documents can be) to convey the message? I do want to be clear that this post is not that – this post is my process in trying to build that, and a public call for reflections on what it could look like or resources that may already exist. What I have so far is really just a first draft after doing some brainstorming with Tim C during some downtime at DPL.

Setting Some Boundaries
This could go in a lot of different directions, so I’m setting some boundaries up front to keep a scope on things. I’d love to grow this idea, but right now I’m starting small to get my head around it. I’m looking to create something digestible that outlines the different levels of privacy around a WordPress install on DoOO. DoOO is so much bigger than just WordPress, I know, but I’m not trying to consider Omeka or other applications – yet. Also, I’m specifically thinking about this in terms of a class or other teaching/learning environment. A personal domain that someone runs on their own, outside of a teaching/learning environment, is another matter with different, more personal, concerns.

Designing for Privacy with DoOO
Right now I’m dividing things up into two broad categories that interact with one another. I need better titles for them, but what I’m calling Privacy Options are standalone settings or approaches that can be implemented across any of the Deployments, which are design and pedagogical choices made at the onset. Each of these also affords and requires different levels of digital skill, and I’m still figuring out how to factor that into the mix. I will start with Deployments because I think that is where this starts in practice.

Deployments:
Deployment 1 – Instructor-controlled blog: With this deployment, an instructor has their own domain where they install WordPress and give the students author accounts (or whatever privilege level makes sense for the course). Digital skills: The instructor needs to be comfortable acting as a WordPress administrator, including theming and account creation. Students gain experience as WordPress authors and in collaborating in a single digital space.
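As an aside, account creation in this deployment can even be scripted through the WordPress REST API, which spares students from handing personal details to anyone but the course site. This is a hedged sketch: the site URL, credentials, and roster are placeholders, and it assumes an Application Password (available in WordPress 5.6+):

```python
# Bulk-create student author accounts on an instructor-controlled site.
import requests

SITE = "https://example-course-site.org"                 # placeholder
AUTH = ("instructor", "xxxx xxxx xxxx xxxx xxxx xxxx")   # application password

students = [
    {"username": "student1", "email": "student1@example.edu"},
    {"username": "student2", "email": "student2@example.edu"},
]

for s in students:
    r = requests.post(
        f"{SITE}/wp-json/wp/v2/users",
        auth=AUTH,
        json={**s, "password": "change-me-on-first-login", "roles": ["author"]},
    )
    r.raise_for_status()
    print(f"created {s['username']} (id {r.json()['id']})")
```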

Deployment 2 – Instructor-controlled multisite: With this deployment, an instructor installs a WordPress multisite on their own domain and each student gets their own WordPress site. Digital skills: Running a multisite is different from running a single install and will require a bit more in the way of a digital skill set, including enabling themes and plugins and setting up subdomains and/or directories. Students can gain the experience of being WordPress administrators rather than just authors, though depending on the options chosen this can be diminished.

Deployment 3 – Student-owned domains: This is what we often think of as DoOO. Each student gets not just a WordPress account or a WordPress site but their own domain. They can install any number of tools, but of course the scope of this document (for now) is just WordPress. Digital skills: One fear I have is that this kind of deployment can be instituted without the instructor having any digital skills. Support for digital skills will have to come from somewhere, but if it is being provided from some other area then the instructor does not need to have the skills themselves. Students will gain skills in cPanel, installing WordPress, and deleting WordPress.

Privacy Options
Privacy Options covers approaches, settings, or plugins that can be used across any of the Deployments:

1 – Visibility settings: WordPress posts and pages have visibility settings for public, password protected, and private. These can be used by any author on their own posts and by admins on all posts and pages.

2 – Private site plugin: Though I have not personally used a private site plugin, I know they exist and can be used to make a whole WordPress site private. Tim mentioned that he has used Hide My Site in the past with success.

3 – Pseudonyms: There is no reason that a full legal name needs to be used. How do we convey the importance of naming to students? I took a stab at this for my day job, but I’m wondering what else can be done.

4 – Search engine visibility setting: This little tick box is located in WordPress under the Reading settings and “discourages search engines from indexing the site”, though WordPress does note that it is up to the search engines to honor this request (see the sketch after this list for a quick way to check what a site is signaling).

5 – Privacy protection at the domain level, to obscure your name and address from a WhoIs lookup. Maybe not a concern if your institution is doing subdomains?

6 – An understanding of how posts and sites get promoted: self-promotion and promotion from others, and how different audiences might be directed to your post or site.
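Here is the quick check promised in option 4: a small sketch (the URL is a placeholder) that looks for the “noindex” directive that the search engine visibility setting adds to a page’s robots meta tag. It is crude – the string could in principle appear in body text – but it makes an invisible setting visible:

```python
# Does this site ask search engines not to index it?
import requests

html = requests.get("https://example-student-site.org", timeout=10).text.lower()

if "noindex" in html:
    print("The site is asking search engines not to index it.")
else:
    print("No noindex directive found; the site is signaling it can be indexed.")
```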

Some Final Thoughts
There is one approach, which I’d actually been leaning toward prior to Digital Pedagogy Lab, that raises questions about how to introduce this. I do worry about the technical barrier that comes with learning about these privacy options. All of the privacy options come with some level of digital skill and/or literacy that needs to be in place or acquired. In addition, I think that often the deployments are made before the privacy options are considered; yes, yes, I know that is not ideal, but it is a reality. Because of this, is it maybe just better to tell faculty and students, in the beginning at least, to think of their DoOO or their WordPress as a public space? Mistakes happen, and are we muddying the waters by thinking of DoOO or WordPress as private spaces, where a simple technical mistake could easily make things public? Most people have so many options for private reflection and drafting; from Google Docs to the LMS, email to private messaging, we have so many tools that are not so radically, publicly open. Is there something to be said for thinking of the domain space as public space and using it for that – at least while building the skills necessary to make it more private?

I don’t have the answers, but I wanted to open the conversation and see what others are thinking. Are there resources that I’m missing, and how can this be created in a way that is easy to understand and digestible? I’m thinking and writing and booking some folks for conversations to keep thinking in this way. Stay tuned and I’ll keep learning transparently.

Big thanks to Tim C and Chris G for giving feedback on a draft of this post.

Photo original by me licensed CC-BY