What Do We Owe Students When We Collect Their Data – a response

It has been a few weeks since we issued our #DigCiz call for thoughts on the question “What do we owe students when we collect their data?” and there have been a few responses. The call is in conjunction with the interactive presentation at the EDUCAUSE Annual Conference that I’ll be helping to facilitate with Michael Berman, Sundi Richard, and George Station. The session will be focused on breakout discussions, both in person and online. We don’t necessarily have “answers” here – the session (and the call) are more about asking the questions and having a discussion. The questions are too big for one session and often there are no easy answers, so we released the call early hoping that people would respond before (or after) the session. I’ve yet to respond to it myself, so I’m going to attempt to do that in this post.

The #DigCiz Call

We want the call to be open to everyone – even those who don’t know a ton about student data collection – and we want people to respond using the tools and media that they like. We have had some great examples already and I want to thank those who have responded so far. I threw the call out to some of our students at SNC and I was super honored that Erica Kalberer responded with an opinion piece. Erica does not study analytics; she is not a data scientist or even a computer science major. She didn’t do any research for her post and it is an off-the-cuff, direct, and raw response from a student perspective – which I love.

Additionally, Nate Angell chose to leave a Hypothesis annotation on the call itself over at digciz.org.

Nate points out that there are many “we”s who are collecting student data and that students often have no idea who the players are that would want to collect their data, let alone what data is being collected and what could be done with it. What do we mean when we ask “What do WE owe students…”? Who is this we? Instructional designers may answer these questions very differently than accreditors, librarians, or even students themselves would. I hope that by hearing from different constituencies we can bring together some common elements of concern.

Framing Things Up

I am really intrigued by our question but I also have some issues with it.

The question is meant to provoke conversation and so in many ways it is purposefully vague and broad. It is not just “we” that could be picked out for further nuance. So many simple definitions could be pulled out of this question. What is meant by “data”, and more specifically “student data”?

What are we talking about here? Is this survey data? Click data from the LMS or other educational platforms? What about passive and pervasive collection that is more akin to what we are seeing from the advertising industry – the kind of stuff that does not just track clicks but tracks where my cursor moves, how fast it moves, where my eyes are on the screen, and text that has been typed into a form but never submitted? What if we are using wearables or virtual reality? Does the data include biometric information like heart rate, perspiration, etc.? Is this personally identifiable information or aggregate data? Some of these examples seem particularly sensitive to me, and it seems like they should all be treated differently depending on context.

We could keep going on…. What is meant by “collect”, “students”, “owe”… a whole blog post could be written just about any one of these things.

Another of my issues is that the question assumes that student data will be collected in the first place. I’m setting that issue aside for this call and presentation because, whether I like it or not, I am part of a field that is collecting student data all of the time. As an instructional designer I make decisions to use technologies that often track data and, to be honest, if I wanted to avoid those technologies completely I’m not sure that I could. Over the course of my career faculty and administrators have often come to me asking to use technologies that collect data in ways that I consider predatory. How do I respond? How do I continue to work in this field without asking this question?

People who know me or follow my work know that over the last few years I have often struggled with considering our responsibilities around student data. Even though I have been thinking about these kinds of questions for a few years now, I don’t think that I will be able to dive into all of the nuance that any of them could bring. (I want to write all the blogs – but time). So I just have to accept that – that is why this is a broader call for reflection and conversation, and why I invite others to respond to the call around things that I may have overlooked.

Though I am still new to this conversation, I’m not so new or naive as to think that there are not already established frameworks and policies for thinking about the ethical implications of student data collection. I’ve been aware of the work that JISC has been doing in this area for some time and had just started a deeper dive into the research when I attended the Open Education Conference in Niagara Falls a few weeks ago.

Somehow I missed that there were two important data presentations back to back. I only caught about three-quarters of the Dangerous Data: The Ethics of Learning Analytics in the Age of Big Data presentation from Christina Colquhoun and Kathy Esmiller from Oklahoma State University, and I got the slides for Billy Minke and Steel Wagstaff’s “Open” Education and Student Learning Data: Reflections on Big Data, Privacy, and Learning Platforms, which I missed completely.

Both of these presentations looked at different policies and ethical frameworks around using student data, which was a goldmine for me. Dangerous Data’s list did not make any claim about the quality of the frameworks, while the Open Education and Student Learning Data presentation did specifically state that their list was curated for policies that they were impressed by.

Open Education and Student Learning Data listed:

Dangerous Data listed:

My Response

I’ve started reading through the policies and frameworks listed above and while I have not had a chance to dive deep into each one of them, I’ve found a lot of overlap with what I have identified as four core tenets that I believe start to answer the question “What do we owe students when we collect their data?” – at least for me, for now. I’m personally identifying with the “we”s of instructional designers, college teachers, IT professionals, librarians (as an official wannabe librarian), and institutions – at least on some level.

I’m still learning myself and I could change my mind, but for the purposes of this post I’m leaning on these four tenets. Before we even start I need to say that, in practice, the answers to the problems that inevitably arise when considering these tenets sometimes come back as “well, that is not really practical” or “the people collecting the data often don’t know that themselves”. In those cases I suggest that we come back to the question “what do we owe students when we collect their data?” and propose that if we can’t give students what they are owed, we think twice before collecting the data in the first place.

I will list these tenets and then describe them a bit.

  • Consent
  • Transparency
  • Learning
  • Value

Consent

This one seems the most important to me and I was shocked to see that not all of the policies/frameworks listed above talk about it. I understand that consent is troubled, often because of transparency – more on that in a bit – but it still strikes me that it needs to be part of the answer.

There is a tight relationship between ownership and consent; there is a need for consent because of ownership. If I own something then I need to give consent for someone else to handle it. But not all of these frameworks recognize that. The Ithaka S+R/Stanford CAROL project, listed above, talks about something called “shared understanding”, where they basically envision that student data is not owned solely by the student but is shared among the school, the vendors, and third parties. In a recent EDUCAUSE Review article some of the framers of the project actually said “the presumption of individual data propriety is wishful thinking”. This, after they put the word “their” in scare quotes (“their” data) when referring to people being in a place of authority over the data about them. Ouch!

I mean, I get what they are doing here. One looks at the Cambridge Analytica/Facebook scandal and says “oh how horrible”, but their response is: you are a fool not to realize that it is happening all of the time. And maybe I am a fool, but I still think it is horrible. The article points to big tech firms, how much data they already have about us, and how much money they have made with those data, and uses that as a justification. But here is the thing: we are talking about students, not everyday users. I think that makes a difference.

In another EDUCAUSE Review article Chris Gilliard points out the extractive nature of web platforms and the problems of using them with students. What of educational platforms? Is it really okay to import the same unethical issues that we have with public web platforms into our learning systems and environments? I’m comforted that most, if not all, of the other frameworks listed above and those that I’ve come across over the years do understand the importance of consent and ownership.

I’ve read broader criticisms of the notion of consent by Helen Nissenbaum (paywalled – sorry) that I found quite persuasive, but even she does not abandon consent completely. Rather, she points out that consent alone, in and of itself, is not the answer. We need more than just consent – especially now when our culture grants consent so easily and thoughtlessly. Nissenbaum’s criticism is of treating consent as a free pass into respectful data privacy. But here I’m thinking of consent in terms of what we owe students – I see it as a starting place and the least of what we owe them.

What do we owe students when we collect their data? We owe them the decency of asking for it and listening if they change their mind.

How we ask for data collection, and how we continue to inform students as it changes, is not easy to answer, and I want to be very careful of oversimplifying this complex issue. I think that, at least in part, it is also an issue of my next tenet – transparency.

Transparency

Asking for consent is no good if you are not clear about what you are asking for consent to do, and if you are not in communication about how your practices are changing and shifting over time. In the policies and frameworks it seems like transparency is sort of a given – even the guys over at Ithaka S+R/CAROL have this one. We need transparency in asking for consent around data collection, as consent sort of implies “informed consent” and we can’t be informed without transparency. But we also need ongoing transparency about the actual data and how it is being used.

I found a blog post from Clint Lalonde published after the 2016 EDUCAUSE Annual Conference that pretty much aligns with how I feel about it:

“Students should have exactly the same view of their data within our systems that their faculty and institution has. Students have the right to know what data is being collected about them, why it is being collected about them, how that data will be used, what decisions are being made using that data, and how that black box that is analyzing them works. The algorithms need to be transparent to them as well. In short, we need to be developing ways to empower and educate our students into taking control of their own data and understanding how their data is being used for (and against) them. And if you can’t articulate the “for” part, then perhaps you shouldn’t be collecting the data.”

What do we owe students when we collect their data? We owe them a clear explanation of what we are doing with it.

But I actually think that Clint takes things a bit further than transparency at the end of that quote and it is there that I would like to break off a bit of nuance between transparency and learning for my third tenet.

Learning

Providing information is not providing understanding, and while I can concede that in consumer technologies providing information for informed consent is enough, I think that we have an obligation to go further in education, and especially in higher education. We have an obligation because these are students and they have come to us to learn. While they will learn from “content” they will learn a lot more from the experience of the life that they lead while they are with us. If that life is spent conforming to and complying with data collection practices that they don’t understand and never comprehend the benefit of, then, at best, they will graduate thinking all data collection is normal and they will be vulnerable to data collection practices from bad actors.

Of course this means that we ourselves need to better understand the data that we are collecting. It means that we need to know what is being collected and how it can be used before we start putting students through experiences where this is happening inside of a black box.

Inside of institutions we need to know what our vendors are doing. We need to create and articulate clear expectations about how we view the responsibilities of vendors around privacy and security. We need to vet their privacy and security policies and continue to check on them over time to see if any of those policies have changed. We need to build a culture of working with reputable companies. Then, we need to build that into the curriculum through increased digital, data, and web literacy expectations.

What do we owe students when we collect their data? We owe them an understanding, an education, about what their data are; what they mean; and what can be done with them.

Collectively, as teachers, librarians, instructional designers, administrators, product developers, institutions, etc. it seems that we will always have a leg up on this though – we will always be in a position of power over students. And so my final tenet has to do with the value of the outcome of data collection.

Value

Finally, if we are collecting student data I think that we should be doing it only when we believe that the benefits to the student outweigh the potential costs to the student. This means putting the student first in the what, when, why, and how of student data collection.

I also need to be clear that I’m not talking about a license to forgo consent, transparency, and learning because we believe we are acting in the student’s best interest. This is not an invitation to become paternalistic or to do whatever we want in the name of value.

My point is that the stakes are too high to be collecting student data for the heck of it, or because the system just does that and we are too busy to read the terms of service, or because someone is just wondering what we could do with it. If we have data we should be using it to benefit students. If we are not using it we should have parameters around storage and, yes, even eventual deletion.

Collecting student data makes it possible to steal or exploit those data; while we can take precautions and implement security measures, no data are as secure as data that were never collected in the first place and, to a lesser extent, data that were deleted. If we are going to collect student data then we have to do something of value with it. Having piles of data stored on systems that no one is doing anything with is wasteful and dangerous. If there is not a clear value in collecting data from students then it should not be collected. If student data has been collected and is not serving any purpose that is valuable to students, and no one can envision a clear reason why it will hold value in the future, then maybe we should discuss deleting it.

Amy Collier speaks to how data collection can particularly impact vulnerable students in Digital Sanctuary: Protection and Refuge on the Web? (at the end of which she presents seven strategies that you should also read – no really, go read them right now – I’ll wait). Collier starts with a quote from Mike Caulfield’s Can Higher Education Save the Web?

“Caulfield noted: “As the financial model of the web formed around the twin pillars of advertising and monetization of personal data, things went awry.” This has created an environment that puts students at risk with every click, every login. It disproportionately affects the most vulnerable students: undocumented students, students of color, LGBTQ+ students, and students who live in or on the edges of poverty. These students are prime targets for digital redlining: the misuse of data to exclude or exploit groups of people based on specific characteristics in their data.”

What do we owe students when we collect their data? We owe them an acknowledgement and an explanation that we are doing something with those data that will bring value to them.

Summation – Trust

Policy is great but I think taboo is stronger.

I can’t get that power difference out of my head. I mean it is like the whole business model of education – knowledge is power and we have more knowledge than you but if you come to us we can teach you. There is this trust to it; this assumption of care. We will teach you – not, we will take advantage of you. And to offer that with one hand and exploit or make vulnerable with the other – yeah…

I’ve been working in educational technology for fifteen years and when I first started there was very little that I heard about ethics. Security, sure – privacy… that was a thing of the past, right? It seems that we are starting to see some repercussions now that are making us pause and I’m hearing more and more about these things.

Still, I see these conversations happening in pockets and while I’m seeing lots of new faces there are ones that are consistently absent. I wonder about new hires just entering the field, especially those in schools with little funding, and what kind of exposure they are given to thinking about these implications. I wonder if a question like “what do we owe students when we collect their data?” ever even comes up for some of them.

There is a whole host of issues now coming to light around surveillance and data extraction. What is happening to trust in our communities and institutions as we try to figure all of this out?

Perhaps more than anything, what we owe students when we collect their data is a relationship deserving of trust.

Don’t forget

So, don’t forget, the #DigCiz call is open for you to respond how you see fit. Share your creation/contribution on the #DigCiz tag on Twitter or in the comments on the #DigCiz post.

We go live Friday, November 2nd at 10 AM Eastern Time with a twitter chat and a video call into the session. Please join us!

~~~

Thanks go out to Chris Gilliard, Doug Levin, Michael Berman, and George Station, all of whom offered feedback on various drafts of this post.

Photo by Taneli Lahtinen on Unsplash

#OpenEd18 Lightning Talk: #DigPINS, We are Open … But sometimes closed

I’ve made it to Open Ed 2018 and I’m excited to present a lightning talk on Friday at 3:30 – 3:45 with Sundi Richard and Joe Murphy on our collaborations with #DigPINS. If you are at the conference please consider coming by and if you are not I’m hoping this blog post will give you a glimpse.

If you don’t know, DigPINS is a faculty development experience, much of which happens in the open, where we collaborate with small cohorts of faculty fully online to discuss issues of Digital (the Dig) Pedagogy, Identity, Networks, and Scholarship (the PINS) over anywhere from three to five weeks.

We have released a template of the curriculum as a model that can be found at https://digpins.org so that is one place to get started but that is just content… #DigPINS is really an opportunity for collaboration and community as we will discuss in the talk.

It basically works from a position of someone at an institution deciding that they are going to run #DigPINS with a cohort of faculty – this could be an instructional designer, a librarian, a technologist… someone interested in faculty development around how we learn in online spaces. This person needs to pick dates, register people, promote it, and ultimately design the thing. Like I said, a template is available at https://digpins.org but, again, that is just content. One of the big design decisions is choosing the open digital environments and the backchannel (this is the ‘closed space’ that we are calling out in the title of this talk).

We have found that the backchannel is important for faculty who are just getting started. They have to have a safe space to communicate and collaborate outside of the public eye while considering and challenging themselves with these heavy notions and the very idea of ‘going open’.

The facilitator should have experience with each of the themes (the PINS) in theory and in practice.

This past summer Joe and I ran the first DigPINS cohorts in conjunction with one another, creating the first inter-institutional cohorts. We had a total of 17 participants and we had to be flexible with one another. We had our own backchannels and our own open hubs.

There are lots of ways to join – the big one is to run your own iteration at your own school with your own cohort but people can also dip in as individuals with any of the open activities and of course on the #DigPINS tag on Twitter. This January there are plans for all three of us to run it with cohorts from January 2nd till the 28th.

I’m embedding our slides below – if you need more info don’t hesitate to leave a comment.

Privacy and Security in DoOO: First attempt at student resource

It has been a wild few months and it feels like there are a lot of things happening at once.

I’m thrilled that at St. Norbert we have gotten our Domains project off of the ground and I’m talking about and working with domains more than ever – which is wonderful.

However, a few months ago after attending DigPed Lab, as those of you who follow regularly will recall, I had some serious questions about how to design for privacy and security with DoOO.

I had some great collaboration around this, from comments on the post to backchannel conversations about what is out there. I would be remiss if I did not particularly give a shout out to Tim C from Muhlenberg College and Evelyn Helminen from Middlebury College, who gave me lots of feedback and resources. And of course to Chris G, who just keeps me thinking about privacy in edtech in general.

I’d had some visions of pulling together a group of people interested in this topic, but I found that things just moved too quickly for me and I needed a resource that I could give to students before I could pull the group together. So, still working on that – if you have a particular need for this please put a fire under me.

I’ve struggled with this topic because it is such a nuanced thing. I love DoOO because of the focus on student ownership and agency. Privacy can be addressed with blanket best practices but that is not the conversation that interests me.

I feel our domains project at SNC is particularly blessed in that we have our Tech Bar. We visited the University of Mary Washington while building it and got a lot of tips from Martha Burtis and the students who work at the Digital Knowledge Center. I’m telling you all of this because I think it is important to contextualize this resource that I’ve built for students.

This first little resource around privacy and security with DoOO that I’ve built is directed at students and is really just meant to give them a taste of what is possible around naming, making pages private, securing sites, etc. I created a little infographic around this and at SNC I printed them up as large bookmarks. The SNC version clearly says that a student can visit the Tech Bar for more information.

I made a more generic version of the resource and slapped an open license on it in case it might be helpful for others with DoOO projects. I’m hoping to think about this more, collaborate with others, and have more thoughts on this as we move forward.

PrivacyAndSecurityStudentsDoOO

Download link

On a somewhat related note, I do want to draw attention to our most recent DigCiz call for engagement, which is a parallel project to an interactive presentation that we will give at the EDUCAUSE Annual Conference. The call for engagement and the presentation basically ask the question “What do we owe students when we collect their data?”. To participate in the call just blog or tweet (Nate Angell even started a hypothes.is annotation of the post). To participate in the presentation, come to the EDUCAUSE Annual Conference session or join the Twitter chat. All the details are on the post.

Designing for Privacy with DoOO: Reflections after DPL

The thinking for this post comes on the tail end of Digital Pedagogy Lab (DPL) where, despite not being enrolled in any of the data or privacy offerings, concerns of student data and privacy rang loud in my ears. This came from various conversations but I think it really took off after Jade Davis’ keynote and after Chris G and Bill Fitzgerald visited us in Amy Collier’s Design track to talk about designing for privacy. After the Lab I also came across Matthew Cheney’s recent blog post How Public? Why Public? where he advocates for public work that is meaningful because it is done in conjunction with private work, and where students use both public and private options depending on what meets the needs of varying circumstances.

A big part of what attracts me to Domain of One’s Own (DoOO) is this possibility of increased ownership and agency over technology, and a somewhat romantic idea I have that this can transfer to inspire ownership and agency over learning. In considering ideas around privacy in DoOO it occurred to me that one of the most powerful things about DoOO is that it has the capability of being radically, publicly open, but that being coerced into the open, or even going open without careful thought, is the exact opposite of ownership and agency.

In a recent Twitter conversation with Kris Schaffer he referred to openness and privacy as two manifestations of agency. This struck me as sort of beautiful and also made me think harder about what we mean by agency, especially in learning and particularly in DoOO. I think that the real possibility of agency in DoOO starts with teaching students what is possible around the capabilities and constraints of digital environments. If we are really concerned about ownership and agency in DoOO then we have to consider how we will design for privacy when using it.

DoOO does allow for various forms and levels of privacy which are affected by deployment choices, technical settings, and pedagogical choices. I hear people talk about these possibilities and even throw out different mixes of these configurations from time to time but I have never seen those listed out as a technical document anywhere.

So, this is my design challenge. How can I look at the possibilities of privacy for DoOO, refine those possibilities for specific audiences (faculty and students), and then maybe make something that is not horribly boring (as technical documents can be) to convey the message? I do want to be clear that this post is not that – this post is my process in trying to build that, and a public call for reflections on what it could look like or resources that may already exist. What I have so far is really just a first draft after doing some brainstorming with Tim C during some downtime at DPL.

Setting Some Boundaries
This could go in a lot of different directions so I’m setting some boundaries up front to keep a scope on things. I’d love to grow this idea but right now I’m starting small to get my head around it. I’m looking to create something digestible that outlines the different levels of privacy around a WordPress install on DoOO. DoOO is so much bigger than just WordPress, I know, but I’m not trying to consider Omeka or other applications – yet. Also, I’m specifically thinking about this in terms of a class or other teaching/learning environment. A personal domain that someone is doing on their own outside of a teaching/learning environment is another matter with different, more personal, concerns.

Designing for Privacy with DoOO
Right now I’m dividing things up into two broad categories that interact with one another. I need better titles for them, but what I’m calling Privacy Options are standalone settings or approaches that can be implemented across any of the Deployments, which are design and pedagogical choices made at the outset. Each of these also affords and requires different levels of digital skills, and I’m still figuring out how to factor that into the mix. I will start with Deployments because I think that is where this starts in practice.

Deployments:
Deployment 1 – Instructor controlled blog: With this deployment an instructor has their own domain where they install WordPress and give the students author accounts (or whatever level of privileges makes sense for the course). Digital Skills: The instructor needs to be comfortable acting as a WordPress administrator, including theming and account creation. Students gain experience as WordPress authors and in collaborating in a single digital space.

Deployment 2 – Instructor controlled multisite: With this deployment an instructor installs a WordPress multisite on their own domain and each student gets their own WordPress site (a configuration sketch follows this list). Digital Skills: Running a multisite is different from running a single install and will require a bit more in the way of a digital skill set, including enabling themes and plugins and setting up subdomains and/or subdirectories. Students can gain the experience of being WordPress administrators rather than just authors, but depending on the options chosen this can be diminished.

Deployment 3 – Student owned domains: This is what we often think of as DoOO. Each student does not just get a WordPress account or a WordPress site but their own domain. They can install any number of tools, but of course the scope of this document (for now) is just WordPress. Digital Skills: One fear I have is that this kind of deployment can be instituted without the instructor having any digital skills. Support for digital skills will have to come from somewhere, but if it is being provided from some other area then the instructor does not need to have the skills themselves. Students will gain skills in cPanel, installing WordPress, and deleting WordPress.
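To make Deployment 2 a little more concrete, here is a minimal sketch – an assumption about a typical setup, not a definitive recipe – of the wp-config.php change that turns a standard single WordPress install into a multisite. From there WordPress exposes a Network Setup screen that generates the remaining constants and rewrite rules for subdomains or subdirectories.

```php
<?php
// wp-config.php (excerpt) – a minimal sketch for Deployment 2.
// Adding this line above the "/* That's all, stop editing! */" comment
// exposes the Network Setup screen (Tools > Network Setup), which then
// generates the remaining multisite constants (MULTISITE,
// SUBDOMAIN_INSTALL, DOMAIN_CURRENT_SITE, etc.) for you to paste back in.
define( 'WP_ALLOW_MULTISITE', true );
```

Once the network exists, each student site is created from the network admin, which is also where themes and plugins get enabled for the whole network.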

Privacy Options
Privacy Options looks at approaches, settings, or plugins that can be used across any of the Deployments:

1 – Visibility settings: WordPress posts and pages have visibility settings for public, password protected, and private. These can be used by any author on their own posts and by admins on any posts and pages (see the sketch after this list).

2 – Private site plugin: Though I have not personally used a private site plugin I know that they exist and can be used to make a whole WordPress site private. Tim mentioned that he has used Hide My Site in the past with success.

3 – Pseudonyms: There is no reason that a full legal name needs to be used. How do we convey the importance of naming to students? I took a stab at this for my day job but I’m wondering what else can be done.

4 – Search engine visibility setting: This little tick box is located in WordPress under the reading settings and “discourages search engines from indexing the site” though it does say that it is up to the search engines to honor this request.

5 – Privacy protection at the domain level to obscure your name and address from a WhoIs lookup. Maybe not a concern if your institution is doing subdomains?

6 – An understanding of how posts and sites get promoted – self-promotion and promotion from others – and how different audiences might get directed to your post or site.
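For anyone curious what a couple of these options look like under the hood, here is a minimal sketch of Privacy Options 1 and 4 written as WordPress PHP. The post ID is a placeholder and doing this from code rather than the dashboard is purely for illustration; it assumes something like a tiny site-specific plugin or WP-CLI’s wp eval-file.

```php
<?php
// A minimal sketch of Privacy Options 1 and 4 from code. Assumes it runs
// inside WordPress (e.g. a tiny site-specific plugin or `wp eval-file`);
// the post ID 42 is a placeholder for illustration.

// Option 1: flip an individual post to "private" so only its author and
// logged-in users who can read private posts (editors/admins) see it.
wp_update_post( array(
	'ID'          => 42,
	'post_status' => 'private',
) );

// Option 4: the "Discourage search engines from indexing this site"
// checkbox under Settings > Reading maps to the blog_public option
// (0 = discourage). As the settings screen notes, it is up to search
// engines to honor the request.
update_option( 'blog_public', 0 );
```

In practice both of these are one click each in the editor’s visibility dropdown and the Reading settings screen; the point of spelling them out is just to show that they are ordinary, inspectable settings rather than anything exotic.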

Some Final Thoughts
There is one approach that I’d actually been leaning toward prior to Digital Pedagogy Lab that raises questions about how to introduce this. I do worry about the technical barrier that comes with learning about these privacy options. All of the privacy options come with some level of digital skill and/or literacy that needs to be in place or acquired. In addition, I think that often the deployments are made before the privacy options are considered; yes, yes, I know that is not ideal but it is a reality. Because of this, is it maybe just better to tell faculty and students, in the beginning at least, to think of their DoOO or their WordPress as a public space? Mistakes happen, and are we muddying the waters by thinking of DoOO or WordPress as private spaces where a simple technical mistake could easily make things public? Most people have so many options for private reflection and drafting: from Google Docs to the LMS, from email to private messaging, we have so many tools that are not so radically, publicly open. Is there something to be said for thinking of the domain space as public space and using it for that – at least while building the skills necessary to make it more private?

I don’t have the answers but I wanted to open the conversation and see what others are thinking. Are there resources that I’m missing and how can this be created in a way that will be easy to understand and digestible? I’m thinking and writing and booking some folks for conversations to keep thinking in this way. Stay tuned and I’ll keep learning transparently.

Big thanks to Tim C and Chris G for giving feedback on a draft of this post.

Photo original by me licensed CC-BY

Platform Literacy in a Time of Mass Gaslighting – Or – That Time I Asked Cambridge Analytica for My Data

Digital Citizenship and Curiosity 

In the beginning of 2017 I first discovered Cambridge Analytica (CA) through a series of videos that included a Sky News report, some of their own advertising, as well as a presentation by their CEO Alexander Nix. I found myself fascinated by the notion that big data firms, focused on political advertising, were behind those little Facebook quizzes; that these data firms were creating profiles on people by harvesting their data from these quizzes and combining it with other information about them like basic demographics, voter and districting information, and who knows what else to create a product for advertisers. I was in the process of refining a syllabus for a class and creating an online community around digital citizenship, so this was of particular interest to me.

My broad interest in digital citizenship is around our rights and responsibilities online and I was compelled by the thought that we could be persuaded to take some dumb quiz and then through taking that quiz our data would be taken and used in other ways that we never expected; in ways that would be outside of our best interests. 

I had questions about what we were agreeing to: how much data firms could know about us, what kind of metrics they were running on us, how the data could be shared, and what those messages of influence might look like. I started asking questions, but when the answers started coming in I found myself paralyzed under the sheer weight of how much work it took to keep up with all of it, not to mention the threats of financial blowback. This paralysis made me wonder about the feasibility of an everyday person challenging this data collection, requesting their own data to better understand how they were being marketed to, and, of course, about the security and privacy of the data.

Cambridge Analytica is again in the news with a whistleblower coming forward to give more details – including that the company was harvesting networked data (that is, not just your data but your friends’ data) from Facebook itself (reactions, personal messages, etc.) and not just the data entered into the quizzes. Facebook has suspended Cambridge Analytica’s accounts and distanced itself from the company. Additionally, David Carroll, a professor from The New School’s Parsons School of Design, filed a legal action this past week against the company in the UK. The story is just going crazy right now and every time I turn around there is something new.

However, much of this conversation is happening from the perspective of advertising technology (adtech), politics, and law. I’m interested in it from the perspective of education so I’d like to intersect the two.

The Request

A few weeks after I found those videos, featured by and featuring Cambridge Analytica, I came across a Motherboard article that gave some history of how the company was founded and how they were hired by several high-profile political campaigns. Around this time I also found Paul-Olivier Dehaye of personaldata.io, who was offering to help people understand how to apply for a copy of their data from Cambridge Analytica under the Data Protection Act (DPA), as the data was being processed in the UK.

My interests in digital citizenship and information/media/digital literacy had me wondering just how much data CA was collecting and what they were doing with it. Their own advertising made them sound pretty powerful but I was curious about what they had, how much of it I’d potentially given to them through taking stupid online quizzes, and what was possible if combined with other data and powerful algorithms.

The original request was not to Cambridge Analytica but rather to their parent company, SCL Elections. There was a form that I had to fill out, and a few days later I got another email stating that I had to submit even more information and a £10 fee (GBP) payable in very specific ways.

[Image: Response from SCL asking for more information from me before they would process my Subject Access Request]

Out of all of this, I actually found the hardest part to be paying the £10. My bank would only wire transfer a minimum of £50, and SCL told me that my US-dollar check would have to match £10 exactly after factoring in the exchange rate on the day they received it. I approached friends in the UK to see if they would write a check for me and I could pay them back. I had a trip to London planned and I considered dropping by their offices to give them cash, even though that was not one of the options listed. It seemed like a silly barrier – that a large and powerful data firm could not accept a PayPal payment or something and would instead force me into overpayment or deny my request due to changes in the exchange rate. In the end, PersonalData.io paid for my request and I sent along the other information that SCL wanted.

Response

After I got the £10 worked out with Paul I heard from SCL pretty quickly saying that they were processing my request, and then a few days later I got a letter and an Excel spreadsheet from Cambridge Analytica that listed some of the data that they had on me.

It was not a lot of data, but I have administered several small learning platforms, and one of the things that you learn after running a platform for a while is that you don’t really need a lot of data on someone to make certain inferences about them. I also found the last tab of the spreadsheet to be disconcerting, as this was the breakdown of my political beliefs. This ranking showed how important, on a scale of 1–10, various political issues were to me, but there was nothing that told me how that ranking was obtained.

Are these results on the last tab from a quiz that I took when I just wanted to know my personality type or which Harry Potter character I most resemble? Is this a ranking based on a collection and analysis of my own Facebook reactions (thumbs up, love, wow, sad, or anger) on my friends’ postings? Is this a collection and analysis of my own postings? I really have no way of knowing. According to the communication from CA, it is these mysterious “third parties” who must be protected more than my data.

[Image: Excerpt from the original response to the Subject Access Request from Cambridge Analytica]

In looking to find answers to these questions, Paul put me in touch with Ravi Naik of ITN Solicitors, who helped me to issue a response to CA asking for the rest of my data and more information about how these results about me were garnered. We never got a response that I can share, and in considering my options and the potential for huge costs I could face, it was just too overwhelming.

Is it okay to say I got scared here? Is it okay to say I chickened out and stepped away? Cause that is what I did. There are others who are braver than me and I commend them. David Carroll, who I mentioned earlier just filed legal papers against CA, followed the same process that I did and is still trying to crowdfund resources. I just didn’t have it in me. Sorry, democracy.

It kills me. I hope to find another way to contribute.

Platform Literacy and Gaslighting

So now it is a year later and the Cambridge Analytica story has hit and everyone is talking about it. I backed away from this case and asked Ravi to not file anything under my name months ago and yet here I am now releasing a bunch of it on my blog. What gives? Basically, I don’t have it in me to take on the financial risk but I still think that there is something to be learned from the process that I went through in terms of education. This story is huge right now but the dominant narrative is approaching it from the point of view of advertising, politics, and the law. I’m interested in this from the perspective of what I do – educational technology.

About a week ago educational researcher and social media scholar danah boyd delivered a keynote at the South by Southwest Education (SXSW Edu) conference where she pushed back on the way we approach media literacy with a focus on critical thinking – specifically in teaching, but this also has implications for scholarship. This talk drew a body of compelling criticism from several other prominent educators including Benjamin Doxtdator, Renee Hobbs, and Maha Bali, which inspired boyd to counter with another post responding to the criticisms.

The part of boyd’s talk (and her response) that I find particularly compelling in terms of overlap with this Cambridge Analytica story is in the construct of gaslighting in media literacy.  boyd is not the first to use the term gaslighting in relation to our current situation with media but, again, often I see this presented from the perspective of adtech, law, or politics and not so much from the perspective of education.

If you don’t know what gaslighting is you can take a moment to look into it but basically it is a form of psychological abuse between people who are in close relationships or friendships. It involves an abuser who twists facts and manipulates another person by drawing on that close proximity and the knowledge that they hold about the victim’s personality and other intimate details. The abuser uses the personal knowledge that they have of the person to manipulate them by playing on their fears, wants, and attractions.

One of the criticisms of boyd’s talk, one that I’m sympathetic to, is around the lack of blame that she places on platforms. Often people underestimate what platforms are capable of and I don’t think that most people understand the potential of platforms to track, extract, collect, and report on your behaviour.

In her rebuttal to these criticisms, to which I am equally sympathetic, boyd states that she is well aware of the part that platforms play in this problem and that she has addressed that elsewhere. She states that it is not the focus of this particular talk to address platforms, and I’m okay with that – to a point. Too often we attack a critic (for some reason more often critics of technology) who is talking about a complex problem for not addressing every facet of that problem all at once. It is often just not possible to address every angle at the same time and sometimes we need to break it up into more digestible parts. I can give this one to boyd – that is, until we start talking about gaslighting.

It is exactly this principle of platforms employing this idea of personalization, or intimate knowledge of who a person is, which makes the gaslighting metaphor work. We are taking this thing that is a description of a very personal kind of abuse and using it to describe a problem at mass scale. It is the idea that the platform has data which tells it bits about who you are and that there are customers (most often advertisers) out there who will pay for that knowledge. If we are going to bring gaslighting into the conversation then we have to address the ability of a platform to know what makes you like, love, laugh, wow, sad, and angry and use that knowledge against you.

We don’t give enough weight to what platforms take from us and how they often hide our own data from us and then sell it to third parties (users don’t want to see all that messy metadata… right?). I’m not sure you even glimpse the possibilities if you are not in the admin position – and who gets that kind of opportunity?

It would be a stretch to call me a data scientist, but I’ve built some kind of “platform literacy” after a little more than a decade of overseeing learning management systems (LMS) at small colleges. Most people interact with platforms as users, not as admins, so they never get that. I’m not sure how to quantify my level of platform literacy, but please understand that I’m no wiz kid – an LMS is no Facebook and in my case we are only talking about a few thousand users. I’m more concerned with making the thing work for professors and students than anything; however, in doing even a small amount of admin work you get a feel for what it means to consider and care about things on a different level: how accounts are created, how they interact with content and with other accounts, the way accounts leave traces through the content they contribute but also through their metadata, and how the platform is always monitoring this – and how, as an administrator, you have access to that monitoring when the user (person) often does not.

I don’t think that most LMS admins (at least as LMSs are currently configured) at small colleges are incentivised to go digging for nuanced details in that monitoring unprompted. I do think that platform owners who have customers willing to pay large sums for advertising contracts have more of a motivation to analyze such things.

Educational researchers are incentivised to show greater returns on learning outcomes and the drum beat of personalized learning is ever present. But I gotta ask: can we pause for a second and think… is there something for education to learn from this whole Cambridge Analytica, Facebook, personalization, and microtargeted advertising story? Look at everything that I went through to try to better understand the data trails that I’m leaving behind, and I still don’t have the answers. Look at the consequences that we are now seeing from Facebook and Cambridge Analytica. The platforms that we use in education for learning are not exempt from this issue.

My mind goes back to all the times I’ve heard utopian dreams about making a learning system that is like a social media platform. All the times I’ve seen students who were told to use Facebook itself as a learning tool. So many times I’ve sat through vendor presentations around learning analytics and then during Q&A asked “where is the student interface – you know, so the student can see all of this for themselves?” only to be told that was not a feature. All the times I’ve brainstormed the “next generation digital learning environment” only to hear someone say “can we build something like Facebook?” or “I use this other system because it is so much like Facebook”. I get it. Facebook gives you what you want and it feels good – and oh how powerful learning would be if it felt good. But I’m not sure that learning is that thing.

In her rebuttal boyd says that one of the outstanding questions that she has after listening to the critics (and thanking them for their input) is how to teach across gaslighting. So, it is here where I will suggest that we have to bring platforms back into the conversation. I’m not sure how we talk about gaslighting in media without looking at how platforms manipulate the frequency and context with which media are presented to us – especially when that frequency and context is “personalized” and based on intimate knowledge of what makes us like, love, wow, sad, grrrr.

Teaching and learning around this is not about validating the truthfulness of a source or considering bias in the story. Teaching and learning around this is about understanding the how and why of the thing, the platform, that brings you the message. The how and why it is bringing it to you right now. The how and why of the message looking the way that it does. The how and why of a different message that might be coming to someone else at the same time. It is about the medium more than the message.

And if we are going to talk about how platforms can manipulate us through media we need to talk about how platforms can manipulate us and how some will call it learning. Because there is a lot of overlap here and personalization is attractive – no really, I mean it is really really pretty and it makes you want more. I have had people tell me that they want personalization because they want to see advertising for the things that they “need”. I tried to make the case that if they really needed it then advertising would not be necessary, but this fell flat.

Personalization in learning and advertising is enabled by platforms. Just as there are deep problems with personalization of advertising, we will find it is multiplied by tens of thousands when we apply it to learning. Utopian views that ignore the problems of platforms and personalization are only going to end up looking like what we are seeing now with Facebook and CA. The thing that I can’t shake is this feeling that the platform itself is the thing that we need more people to understand.

What if instead of building platforms that personalized pathways or personalized content we found a way to teach platforms themselves, so that students really understood what platforms were capable of collecting, producing, and contextualizing? What if we could find a way to build platform literacy within our learning systems so that students understood what platforms are capable of doing? Perhaps then, when inside of social platforms, people would not so easily give away their data, and when they did they would have a better understanding of the scope. What if we were really transparent with the data that learning systems have about students and focused on making the student aware of the existence of their data and emphasised their ownership over their data? What if we taught data literacy to the student with their own data? If, decades ago, we had focused on student agency and ownership over platforms and analytics, I wonder if Cambridge Analytica would have even had a product to sell to political campaigns, let alone ever been a big news story.

I’m not saying this would be a fail-safe solution – solutions come with their own set of problems – but I think it could be a start. It would mean a change in the interfaces and structures of these systems, but it would mean other things too. Changes in the way we make business decisions when choosing systems and changes in the way we design learning would have to be there too. But we have to start thinking and talking about platforms to even get started – because the way they are currently configured has consequences.

Image CC0 from Pixabay

#DigCiz Reflections and a #DigPed Workshop

We just wrapped up a month-long #DigCiz conversation and it was really unlike any of the others.

It was bigger for one thing.

I was informally running Twitter stats in the background and we consistently had between 200 and 400 people for any given week. Not massive by any means, but growing. Though it was bigger than before and though it was online, I’m still adamant that it was not a MOOC – it’s a conversation. A conversation mediated by technology, sure, but a conversation, and not a course, nonetheless.

A #DigPed Workshop

Still, we learned a lot and as part of the continual processing and dissemination of that learning, I’m excited to point out (I’m not really announcing – the site has been up for a while) that Sundi Richard and I will be collaborating in the flesh with participants in a 75-minute workshop during the Digital Pedagogy Lab Institute. The workshop is broad, so even if you did not follow along with #DigCiz but are interested in digital citizenship in higher education and society at large, it will be valuable.

If you are attending the Institute consider coming to our workshop! If you are not attending there is still time because registration is still open (as of the time of this posting anyway).

I realize trying to ask people to attend a whole institute for a 75-minute workshop is a little crazy, but there is so much to be learned at the Institute as a whole! It looks like there is still room in the Data, Networks, and Domains tracks! These are led by some of the smartest people in the room (and by room I mean the internet): Kris Shaffer (Data), Maha Bali and Kate Bowles (Networks), and Martha Burtis (Domains).

And! Even though their tracks are full, hanging with the likes of Amy Collier, Sean Michael Morris, Jesse Stommel, and Chris Friend… Well, c’mon! I mean the prospect of running into these folks in the hallway is super cool in and of itself.

#DigCiz Reflections

Mostly, what I really want with the #DigCiz hashtag is to have a broad conversation about “digital citizenship” that takes a critical look at both “digital” and “citizenship” and that moves beyond things like netiquette and cyberbullying. I think those things are important but I want them to be part of the conversation, not the whole conversation.

I think that we have been pretty successful in creating conversation that does that but it also seems that a bit of a community is growing.

This last round of #DigCiz spurred a bit of a branching out…. meaning that there are all of these little side things that keep popping up even though our planned burst ended weeks ago.

For instance, the other day Dr. Naomi Barnes decided to live-tweet a reading of an article called Towards a Radical Digital Citizenship in Digital Education by Akwugo Emejulu and Callum McGregor using the #DigCiz tag.

This spurred a bunch of us to read it, and wow!! This is exactly the kind of thing that I’m talking about when I say that I want to think about digital citizenship more deeply and more critically.

Besides Naomi’s spontaneous contribution, we also had this cool idea, inspired by Bill Fitzgerald’s and Kristen Eshleman’s week, to do a Hypothesis annotation of a privacy policy. We chose to annotate the Slack privacy policy and it was really enlightening. So many of us are entering into these legal agreements when we use these services without even questioning what we are agreeing to. Using social annotation we can really dig in there and pull out the nuance of these documents for questioning, contextualizing, and clarifying.

Ever since Audrey Watters blocked annotation from her site I’ve been rethinking my use of Hypothesis. I don’t think that Audrey is wrong (it is her site, people) but I also see great benefit from annotating the web. Annotating privacy policies and TOS as a way to better understand them does not feel like I am impinging on anyone’s creative work. We are still doing some work to refine how we do this but I think it has promise.

Then, the other day on Twitter, George Station was talking about Zeynep Tufekci’s new book Twitter and Tear Gas. It turned out Sundi and Daniel, as well as some others, were about to read it. I noodled George on Twitter about doing a #DigCiz book discussion and he took me up on it! I started into the book right away and wow!!! Again, this is more of what I’m looking for when I talk about a deeper look at digital citizenship.

In Short

A big part of why I can’t call DigCiz a MOOC is because I don’t feel like a teacher in DigCiz – I feel more like a learner.

However, I do turn around what I learn in DigCiz and teach it. I am planning a first-year seminar in Digital Identities, Environments, and Citizenship to be taught in the fall, and now I have this exciting opportunity to do the workshop at the Digital Pedagogy Lab Institute with Sundi.

If you are going to be at DPLI consider coming to our workshop. Sundi and I will be presenting together and we will be talking about many of the things that we have learned through these DigCiz conversations. We plan to present different scenarios that encompass facets of digital citizenship and ask participants to think about how we can present these to students for a deeper consideration of digital citizenship.

Also keep an eye on digciz.org ’cause you never know when a DigCiz blast could pop up.

#DigCiz Week 4 – Big Data Big Dreams: waking up about data collection in edtech

It is week 4 of #DigCiz and Kristen Eshleman and Bill Fitzgerald are leading us in a week of discussion around data security and the part that higher education institutions play. In the prompt Kristen questions the framing of EDUCAUSE’s Top 10 IT Issues and information security’s place in the #1 slot, saying “when you read the description from this list, it’s pretty clear that our membership views information security policy not in the service of the individual digital citizen, but in the service of institutional IT systems.” Kristen states that though security breaches may be costly, higher education institutions are not in the business of data security (we are in the business of educating students) and goes on to say “we may be able to address the needs of institutions and individuals more effectively if we reframe the conversation from the lens of digital citizenship.”

This really spoke to me in terms of how our professional organizations frame things to us as professionals. Often training and development from professional organizations is the way that many of us stay abreast of changes in the field. How professional organizations choose to frame these issues shapes how we bring these issues back to our institutions.

In response to the prompt Kristen and Bill held a synchronous video call, and this concern about framing came up again from Chris Gilliard and Amy Collier.

All of this reminded me of something I wrote several months ago about my attendance at the ELI national conference that, at the time, I’d decided not to publish. I was questioning the framing around the professional development I was getting and now, after hearing other colleagues’ similar concerns, it just feels so relevant that I can’t hold back.

I want to say that I felt really blessed to attend the conference and to present on digital citizenship but because of various experiences, which I will outline, I am now asking questions about what educational technology is for and why we are doing this.

“It is not the job of digital pedagogues—or digital aficionados, or digital humanists, or educational technologists, or instructional designers—to force people to go digital. When we make it our mission to convert non-digital folks to our digital purpose, we will not only very likely alienate these valuable colleagues, but we’ll also miss the mark of our true intention: to support learning and scholarship within institutions that, in our heart of hearts, we adore.” – Sean Michael Morris

If the focus of edtech is simply to implement technology for the sake of technology, are we not vulnerable to the money and power backing those solutions? I’m negotiating ideas around how we are influenced in environments of professional development in edtech and what our responsibilities are as professionals, educators, and citizens. It seems to me of critical importance to be aware of how we are influenced in the environments where we place ourselves. I’m contemplating how we bring these experiences back to our institutions and how we influence our campus communities after attending them.

But anyway – onto the lived experience part:

At ELI

Having come from a more traditional IT background and then moving to an academic technology environment I was excited to attend the EDUCAUSE ELI conference. I’d always been told that the big EDUCAUSE main conference, which I have attended many times, was for that traditional IT audience but that ELI was more focused on educators.

While registering for the conference I was surprised to find that I had been automatically opted into being geographically tracked using beacons while I was onsite in Houston at the conference. Mind you, I was opted in by default – I had to specifically indicate that I did not want my physical location tracked. I chose to opt out of this because I didn’t really understand what exactly it all entailed, but I can imagine.

I would imagine this tracking means EDUCAUSE (or ELI as the case might be) knows where I spend my time at the conference. What vendor booths and sessions I attended. If I took lunch at the conference or if I went out. How much time I might have spent in the hallway. Maybe even which of my colleagues, who are also being tracked, I’m spending time with while I’m at the conference.

There are just some key questions that I could not find answers to. These are increasingly the same questions that I keep having about all of these data collection tools, be it facebook and google or educational systems:

  • Do I get access to my data?
  • Who exactly owns these data?
  • Are these data for sale?
  • Could these data be turned over to government agencies – raw or analyzed?
  • Do vendors get access to my data – raw or analyzed?
  • Do I get access to the algorithms that will be applied to my data?
  • Is someone going to explain those algorithms to me – cause I’m not a data scientist.
  • Are the data anonymized?
  • Are these data used only in aggregate or can they be used to target me specifically?
  • How long will these data be retained? – Will they be tracked over time either in aggregate and/or individually?
  • Who has access to these data?

Once I arrived on site I found many participants who had these extra little plastic tabs stuck to their name badges and quickly found out that these were the tracking tags. In several of the session rooms and around the conference in other areas I found mid-sized plastic boxes with handles under chairs and in corners with the name of the beacon company on them.

I don’t remember any information that could have answered the questions I listed above being provided during registration. I did not seek out anyone organizing ELI about this or anyone representing the vendor. However, while I was onsite at ELI this started to bother me enough that I asked plenty of the participants at the conference these kinds of questions. While I mostly got into very interesting conversations, I did not find anyone who could answer those questions for me.

So What?

This bothers me because if educational technology professionals are giving over their data at professional development events geared toward educating us about innovations in educational technology, shouldn’t we be able to answer those questions? Why do so many of us assume benevolence and hand our data over without having those answers?

Many of us might think that we know the entities to whom we are giving our data away, but even if we think that it is a trusted professional organization, companies and organizations are changing all of the time, switching out leadership and missions. Throw in the possibility of the data being sold and we have no idea what is going on with our data.

After attending larger conferences I have felt targeted by vendors, and I have heard horror stories from other female colleagues (who actually have purchasing power) about the lengths vendors will go to just to get a closed-door meeting. I can imagine scenarios where my data is used to the benefit of vendors over my own benefit or that of my institution.

When our professional organizations do not prompt us to think critically about data collection, and when we are automatically opted into turning over our own data without question, it is no wonder we don’t question taking students’ data without informing them. Those who are teaching us about data collection convey that this is normal, and we pass that on to our institutions.

ELI is not alone in this of course; it happens with most of the professional organizations with corporate sponsorship and with most of the corporate digital tools used for education and social interaction. However, I’m concerned when one of the major professional organizations in my field is perpetuating this normalization of data surveillance at a time when we are seeing the rights of our most vulnerable students threatened. Yet I continue to see a proliferation of the mindset that more data is always good, without so much as a mention of who really owns it, how it will be used, and how that usage can change over time.

This was also evident in the first keynote presentation at ELI, from a non-profit called Digital Promise. The CEO, Karen Cator, talked about the many products they are developing, but it was the Learner Positioning System that got me thinking about these issues. Listening to the level of personalization associated with this tool, I could only imagine the amount of data being collected on students who were using it. The presenter made it clear at the beginning that it was the first time she had delivered the talk and that it was a work in progress, but it was hard for me to forgive the absence of any mention of data security and ownership around a project like this. It became just another example of how the conference was glorifying and fetishizing the collection of data without any real critical reflection on what it all means.

Audrey Watters writes about how students have to comply with the collection of their intimate data and don’t even get the choice to opt out. She takes a historical look at how the “big data” of the 1940s was used to identify Jews, Roma, and other ‘undesirables’ so that they could be imprisoned. She writes “Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying ‘students at risk’ and students who are ‘risks.’”

I am concerned that we are creating a culture of unquestioned data collection, so much so that even those who are supposed to be the smartest people on our campuses about these matters give over their data without question. Professionals return to their campuses from events like ELI with the impression that this level of data surveillance is unquestionably good and that data collection is normal.

I believe that big data and personalization engines can be extremely “effective” in education, but sometimes it is precisely this “effectiveness” that makes me question them. The word “effective” communicates a kind of shorter path to success; a quicker way to get to an end goal of some kind. However, the value of that end goal could be nefarious or benevolent. None of us like to think that our campuses could use data for iniquitous ends, but often these negative effects come from models being applied in new ways they were not designed for, or emerge later as reflections of unconscious biases.

We saw this last year when the president of Mount St. Mary’s University was let go after speaking in a disparaging way about at-risk students – wanting to get them out of the pipeline within the first few weeks of classes. I’m sympathetic to the point of view that we want to identify at-risk students so that we can help them stay, but in this situation at-risk students were being identified (by a survey developed by the president’s office) specifically so that they could be encouraged to leave.

I think that we should be asking, and getting students to ask, what success looks like and what the end goal is. I don’t feel like that question has really been answered in higher education. It is really hard to think of data collection as something potentially dangerous when it is an education company or institution and the end goal is “student success”. Of course we all want our students to be successful, but let’s not forget that these data can be put together in various ways.

Let’s also not forget that we are giving students subtle and not so subtle cues about what is acceptable and what is not. Will our students think of asking questions about ownership, security, and privacy around their data once they graduate if we take and keep their data from them while they are with us? Or will they assume benevolence from everyone who asks them for access?

We need more education in our educational technology. Students are tracked and their data are mined all over the web; often I am reminded that we are not going to be able to change that. However, we could provide transparency while they are with us and get them to start asking questions about what data can be gathered about them, how it can be used, and what impacts that might have on their lives.

Wouldn’t it be wonderful if our professional organizations would help us to demand transparency of our personal data so that we could better imagine the possibilities of how it can be used?

Image Credit Ash –  Playing with Fire – Gifted to Subject 

I would like to thank Amy Collier and Chris Gilliard for providing feedback on an early draft of this post. The two of you always make me think deeper.

What is DigCiz and Why I am Not Marina Abramovic: thoughts on theory and practice

Theory

Alec Couros and Katia Hildebrandt just finished a round of facilitation in the #DigCiz conversation where they challenged us to think about moving away from a personal responsibility model of digital citizenship. In a joint blog post they spend time distinguishing digital citizenship from cybersafety and present Joel Westheimer’s work identifying three different types of citizens, ultimately asking what kind of (digital) citizen we are talking about.

Additionally, this week, outside of our #DigCiz hashtag, Josie Fraser blogged about some views around digital citizenship. Here we see Josie, reminiscent of Katia and Alec, making a distinction between digital citizenship and what she identifies as e-safety, but also setting it apart from digital literacy. Josie presents a Venn diagram where digital citizenship is one part of a larger interaction overlapping with e-safety and digital literacy.

In other DigCiz news, this week a group of us (Sundi and I included) who presented at the annual ELI conference in Houston on digital citizenship in the liberal arts published an EDUCAUSE Review article highlighting four different digital citizenship initiatives inside of our institutions.

All of this is on the tails of our first week of #DigCiz, where Mia Zamora and Bonnie Stewart troubled the idea of digital citizenship. In a post about this, Bonnie artfully lays out the conflict between utopian narratives of the web as a tool for democracy and the realities of what I’m more and more just lumping under Shoshana Zuboff’s concept of Surveillance Capitalism, though you could just say it is the general Silicon Valley ethos.

But I want to get back to Katia and Alec’s call to move the conversation beyond personal responsibility. Often, digital citizenship is lumped in with things like digital/information literacy, netiquette, online safety, and a whole host of other concepts. Often these are just variations on issues that existed way before the “digital” but are complicated by the digital.

I’m considering Katia and Alec’s call, reflecting on all of these posts and articles as well as the last year and several months of thinking and conversing about this topic on #DigCiz and I can’t help but feel like we are in the weeds on this concept.

So here it is – my foundational, basic, details-ripped-away, 10,000-foot view of digital citizenship, where things like safety and literacy are part of the model but not the whole thing.

I’ve thought about digital citizenship like this for some time, and Josie’s post reminded me of the idea of representing it as a Venn diagram, and though some of the overlaps are messy I think that is normal.

I really want to focus and drill down on digital citizenship so I put it in the middle and zoom out from there. The factors that I see at play around digital citizenship are environments and people. In terms of people there is the individual and then others. Since this is “digital” citizenship they are digital environments and identities. The items in the overlaps are the messy part. This is draft one.

Draft 1 – Autumm’s Digital Citizenship model CC-BY-ND

This is a really broad model but I think that digital citizenship is a really broad concept and that a narrow model would not do. I think part of the problem that we get into with confusing digital citizenship with digital literacy, cybersafety, netiquette or any other number of similar ideas has to do with narrowly defined models that do not allow for liminality or overlap.

In theory that is… but that brings me to the second half of this post.

Practice

I hope that the web can still exist as a place for community building, artistic expression, and civic discourse, but I fear that this use of it is shrinking under the pressure of its uses as an advertising and surveillance tool.

I worry that as we are used and targeted by systems we become normalized to the experience of being used and targeted, so that using and targeting others does not seem like such a big deal.

 

***

In 1974 performance artist Marina Abramovic produced and performed Rhythm 0.  

I rather like the idea of performance art. Making an artistic statement not through polished practice but rather through the practice of a lived moment.

In Rhythm 0, Abramovic wanted to experiment with giving the public pure access to engage with her actual in-the-flesh self.

She stood for six hours in front of a table with all manner of objects for pleasure and pain with a statement that told the public that they could engage with her however they saw fit.

She was a type of living doll.

Quickly the public forgot that she was a person. She had told them that she was an object after all. So fast they moved from tickling her with the feathers or kissing her on the cheek to cutting her with the razors. She said she was ready to die for this experiment. She said she took full responsibility. One of the objects was a loaded gun. Someone went as far as to put it in her own hand, hold it to her head, and see if they could make her pull the trigger.

But why? Why when given the chance to engage with her would people choose to harm her of all the choices of things that they could do to her?

What happens when we interact with people? Is it about us or is it about them? Are we seeing people with lives and needs and wants and fears and all the messy that is human? Or are we seeing an object that we want to interact with… for our sense of good or bad or pain or pleasure?

I’m not sure much has changed since 1974 when Marina Abramovic first performed this piece. I’m not sure if given the choice between tools of violence and tools of peace that the public will choose peace even today.

I’m not Marina Abramovic

#DigCiz is not Rhythm 0

***

 

I think we need to look at ourselves and our communities and ask why we are engaging with each other. Is it out of a selfish need for engagement? Is there a hope for beneficial reciprocation? Is there a concept of consent being considered? 

I think we need to look at our tools and wonder why we are engaging with them and the companies behind them. As they say, if you are not paying you are probably the product.

Environment shapes identity. Identity shapes other’s identities. I fear that we are shaping each other mindlessly. I fear that we are not just shaping each other but that the predatory environments we use are additionally shaping us.

I think we start to change by knowing ourselves first and then engaging where we think we will find reciprocation, and by reciprocation I don’t mean comments and I don’t mean replies. I mean really trying to listen to one another and getting to know one another. Caring about how we think the other may want to engage and not just satisfying some hunger for engagement.

Going Forward

#DigCiz continues next week and I’m hopeful that we will start to explore these nuances of engagement even deeper as Maha Bali and Kate Bowles take the wheel. Keep an eye on #DigCiz on key social media outlets and digciz.org

Image credit CC0 Dimitris Doukas free on Pixabay

I’d also like to thank Sundi Richard, Maha Bali, and Mia Zamora for looking at a very early draft of this piece and giving much needed feedback. You each help me be better every day – thank you.

Associative Trails Around DigCiz, Fake News, and Microtargeting

Microtargeting: A Digital Citizen’s Perspective

I started writing this post about fake news and microtargeting a few days ago, and then I was reminded that #OpenLearning17 was talking about Vannevar Bush’s As We May Think this week. I began to see how the two might relate. It made this post even longer but I think it was worth it.

Some background if you don’t know: Bush’s article was written in 1945 as the war was ending. He was the Director of Scientific Research and Development during this time so he was all about applying science in warfare. In the article he is envisioning where scientists will put their energies as the war is ending.

Now, as peace approaches, one asks where they [scientists] will find objectives worthy of their best.

The article focuses on the connections we make when we build knowledge. How we associate past discoveries with current ones and tie things together. Bush advocates using technology to track the connections that we make in this process to extend memory for better reflection on those connections. Many credit this article with predicting the Internet.

He uses this term “associative trails” to describe indexing knowledge based on connections that we define. He thinks this is more powerful than typical kinds of indexing like sorting by number or alphabetizing. But I note that this is a much more personalized kind of indexing.
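To make that contrast concrete, here is a tiny sketch of my own (nothing from Bush’s article; the items and links are made up): a conventional index just sorts entries, while an associative trail records the links a particular reader chose to make, and those links can be walked back later.

```python
# A toy illustration (my own, hypothetical) of an "associative trail":
# instead of indexing items alphabetically, record the links a reader
# chooses to make between them, and walk that trail back later.

from collections import defaultdict

trails = defaultdict(list)  # item -> items the reader associated with it

def associate(source, target):
    """Record that the reader linked `source` to `target`."""
    trails[source].append(target)

def walk(start, depth=3):
    """Follow the reader's own associations outward from a starting item."""
    frontier, seen = [start], []
    for _ in range(depth):
        next_frontier = []
        for item in frontier:
            for linked in trails[item]:
                if linked not in seen:
                    seen.append(linked)
                    next_frontier.append(linked)
        frontier = next_frontier
    return seen

# The same three readings, indexed two ways:
alphabetical_index = sorted(["memex", "microfilm", "hypertext"])
associate("memex", "hypertext")
associate("hypertext", "the web")
print(alphabetical_index)  # the generic ordering anyone would get
print(walk("memex"))       # the personal trail this particular reader built
```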

He is advocating for metacognition, that is, realizing what you are thinking and where your trails lie so you can better understand what you are researching, yes, but more importantly your own thought processes. What I am wondering about is what happens when you get the technology part but you leave out the metacognitive part? Bush does not seem to consider this option but I think this is often the world that we live in today.

When I start thinking about fake news and microtargeting I have to ask what if a person does not have access to their associative trails? What if they don’t even realize they are leaving a trail? What if they think that their trail is not so important? What if someone’s trail could be bought and sold? What does the record of all our connections say about us and can it be used in ways that might be exploitive?

I’m not a data scientist. I’m not a journalist. I’m not a librarian.

I am a technologist. I am an educator. I am a person. A person who lives some of her life on the web. I want to say a lot of her life on the web…. But “a lot” is a relative term.

Often it is journalists and librarians who tackle the fake news topic. I think that both of these groups add an important perspective to the conversation, but there is also the perspective of the digital citizen and those who advocate for such concepts: the perspective of someone using the web as a place of expression, a place to learn, to be heard, and to listen to others.

What is microtargeting?

When I bring the idea of microtargeting up I’ll start with something like “well you know they track a lot of your data from the internet to try to influence you” and most often, before I can continue, I hear “oh yes of course I know that”. Then there is the inevitable story of shopping for an item on one site and then continuing to see ads for it on other sites. But that is rather mild and not really what concerns me.

I’m not just talking about the machine realizing that you were looking at a product on another site or that you clicked on something from your email; that is cookies and web beacons, and that is rudimentary stuff.

I’m talking about gathering thousands of data points, combining them, and analyzing them. Everything from shopping history to facebook likes and what church you attend can be gathered and combined with traditional demographics to create a “personalized experience” meant to influence you with emotional and psychological messaging.
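To make the mechanics a little more concrete, here is a minimal, entirely made-up sketch of what “combining data points” can look like in code. The signals, weights, and ad copy are invented for illustration; this is not any real firm’s model, just the general shape of the technique.

```python
# Hypothetical sketch of psychographic microtargeting: fold many small
# signals into a personality-trait estimate, then serve the message
# variant predicted to land hardest. All numbers and messages are invented.

person = {
    "liked_pages": ["small-town news", "hunting gear", "church group"],
    "quiz_answers": {"plans_ahead": 2, "enjoys_crowds": 1},  # 1-5 scale
    "purchases": ["home security system"],
}

def estimate_anxiety(p):
    """Toy scoring: certain likes, answers, and purchases nudge the estimate up."""
    score = 0.5
    if "home security system" in p["purchases"]:
        score += 0.2
    if p["quiz_answers"].get("enjoys_crowds", 3) <= 2:
        score += 0.1
    return min(score, 1.0)

def pick_message(p):
    """Choose the emotional framing predicted to resonate with this person."""
    if estimate_anxiety(p) > 0.6:
        return "Fear-framed ad: 'Protect your family before it's too late.'"
    return "Optimism-framed ad: 'Build a brighter future together.'"

print(pick_message(person))
```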

The big story around microtargeting right now has to do with a little company called Cambridge Analytica (CA) in London. They are the big story because they’ve had well-known wins with customers like the Brexit Leave and Donald Trump campaigns.

In this eleven-minute video from the Concordia Summit, their CEO Alexander Nix explains how they work. In the video Nix explains that demographic and geographic information is child’s play; that the idea of all people from one demographic getting the same message – “all women because of their gender, all African Americans because of their race, all old people because of their age” – is ridiculous. Those things are of course important, but they are only part of the picture; psychographics give a much more complete picture because then you are targeting for personality.

The big shocker, where people feel a little creeped out, is when they learn that CA uses those silly little facebook quizzes (you know, the ones where you click the “connect to facebook” button before you are allowed to take them) to profile your personality. What! Those quizzes are not just there for free for you to have fun with… as they say: if the service is free, consider that you might be the product.

As we may forget

CA is not the only one doing this; they are just the popular story right now, and the quizzing is only part of things. For me the big part is that connection to facebook, which can give the owner of the quiz (be it CA or some other company) access to all of your account information, your likes, your posts, and often much of your friends’ information. Of course, much of your personal and consumer data can be purchased, so throw that into the mix. Imagine aligning all of this data for a person. It is a lot. Often people don’t even realize what they are giving away.

You authorize the connection so that you can take the quiz or play the game or whatever, and then it is over for you – you have had your fun and you move on. But the app still has that connection to your account and will continue to unless you go in and specifically delete it. This means that it can continue to gather data. Apps will vary of course and I can’t speak for any specific one, but I know that all of you are reading the terms of service of each app before you connect it – right?

In this case the user is continuing to make associative trails on facebook through friending and liking. However, they are not using those trails for metacognition. They are not using technology to extend their memory so that they may better reflect on the connections that they are making. Instead they plow forward, forgetting many of the connections and the fact that they have authorized someone or something else to access and track their connection trails. The trails are being harvested by an outside entity and the user, more than likely, has no idea who that entity is – did I mention that they could change the terms of service, the name, or the nature of the app at any moment?
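As a rough sketch of what that lingering authorization means technically, consider the toy example below. This is generic and hypothetical (the endpoint, fields, and token are invented, and it is not facebook’s actual API), but the pattern is the point: once an app has stored the access token you granted, nothing stops it from calling the platform again long after you have forgotten it exists, until you revoke the grant.

```python
# Hypothetical sketch: a "quiz" app keeps the OAuth token you granted it
# and continues pulling profile data on a schedule. The endpoint and fields
# are invented; real platforms differ, but the pattern is the same.

import time
import requests

STORED_TOKEN = "token-you-granted-months-ago"      # persisted by the app
API = "https://api.example-social-network.com/me"  # hypothetical endpoint

def harvest_profile():
    """Fetch whatever the original grant allows: likes, friends, posts."""
    resp = requests.get(
        API,
        params={"fields": "likes,friends,posts"},
        headers={"Authorization": f"Bearer {STORED_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

while True:                    # runs until *you* revoke the app's access
    profile = harvest_profile()
    # ...store it, combine it with purchased consumer data, and so on.
    time.sleep(60 * 60 * 24)   # quietly check back every day
```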

But how much can someone really do with all that data?

I have seen the data scientist folks that I follow look at the CA story a little sideways, and it seems every day there is a new article downplaying the impact CA had on the Trump and Brexit campaigns. Interestingly, though, not too many are saying that the idea behind this – using big data and psychographics to personalize experiences – is invalid. Just that CA might be more hype than payoff.

This much more comprehensive story about the origins of CA in Motherboard states that CA is not releasing any empirical evidence on how much or how little they are affecting the outcomes of campaigns. And though CA is more than happy to tout their wins as proof of their effectiveness, I’ve yet to see anything about their losses, which is a classic vendor ploy.

In this recent Bloomberg article, Cathy O’Neil (the Math Babe) points out that what Trump was doing during the campaign is not uncommon and that the Hillary campaign was also doing it – and that U.S. companies have for decades been tracking personality. O’Neil points out that “To be sure, there’s plenty to be appalled at in this story…. It is just not specific to Trump”. She states that Hillary had access to more data than Trump because she had access to Obama’s archive of data from the previous elections.

But then I think about Bush. As We May Think considers information storage, and to be sure the amount of data is important. However, I think the real meat is in the connections. It is here that I have a hunch that having the right context, or being able to see the right connections, could be more powerful than having more data – well, at least if we are talking about the difference between a lot of data and a whole heck of a lot of data. Did I mention that I am not a data scientist?

Paul Olivier Dehaye has written about how CA was targeting “low information voters” for the Trump campaign. This article hypothesizes that CA used data (citing CA’s claim to have 5,000 data points for every adult American) to specifically look for voters who had a low “need for cognition” for microtargeted political advertising. These are the type of folks who would be more likely not to dig too deep or question stories that were presented to them. These folks are not doing a lot of metacognition. I don’t blame them for this, but I’ll get to that in a bit.

What is real and how can we tell?

As I remember it, when the term fake news first started being thrown around during the campaign it was largely being used to describe sites that were not run by major news organizations or even particular journalists, but rather by individuals who knew how to buy a domain name and hosting and throw up a WordPress site, and who were only interested in click revenue. They would come up with crazy stories and even crazier headlines just to get people to click. As these started to be called out as “fake news” some began to create lists of these sites and place parody and satire sites alongside them.

But then it got more challenging with accusations that major news sources were in fact fake news and that we could tap into “alternative facts” to get to the truth.

Journalists receive training to be sensitive to bias and context and to not let it interfere with their reporting, so they should be more prepared to consider context and fight against bias, especially their own. However, you will never be able to completely remove bias and context; much of it can be hidden and not realized till later. It is here that education is asked to step in and create critical citizens who will hold journalists responsible for what they report and it is here that we see the calls for greater digital and information literacy in regards to fake news.

Fake news, microtargeting, and digital citizenship

Bush envisioned people using technology to extend their memory to be more metacognitive about the connections they were making while they were building knowledge. These seem like rather “high cognition” kinds of folks to me, but what about those “low cognition” kinds of people that Dehaye thinks CA could be after? Who are they?

I mean, I’ll admit that I’m guilty myself. I don’t read every terms of service for every new app I download. I have forgotten that I’d given access to some app only to later find it hanging out in my facebook or accessing the geolocation of my phone. But I think that it is really some of the most vulnerable among us who are at risk here.

What if you work 40/50 hours a week and care for children, parents, or grandparents? What if you have a disability or illness to manage? What if you grew up surrounded by technology and this kind of technology usage is your normal? Do you have time to build all of those literacies? 

Building critical literacies around information and digital technologies takes time. It requires more than just a list of which websites are fake, which are satire, and which are backed by trained journalists. It requires more than a diagram of which news sources lean in which direction politically.

You need the ability to look critically for the nuances of things that could be off. For instance, a .com.co is different from a .com. Kin Lane talks about “domain literacy” and goes much deeper than this basic understanding of domains, but I hope you see what I mean. We need to read the article and then ask whether it is really reporting first-hand or reporting on reporting, as Mike Caulfield points out when he calls for the first step in fact-checking to be not evaluating the source but determining who the source is!
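Even a tiny bit of tooling can help with that kind of nuance. Below is a small sketch of my own, using Python’s standard library, that pulls the hostname out of a link so a lookalike ending in .com.co does not get mistaken for the .com site it imitates; the URLs are hypothetical examples.

```python
# A small sketch: look at the actual hostname before trusting a link.
# The URLs below are hypothetical examples of a real site vs. a lookalike.

from urllib.parse import urlparse

def hostname(url):
    """Return just the host part of a URL, lowercased."""
    return (urlparse(url).hostname or "").lower()

real_story = "https://somenews.com/politics/story-12345"
lookalike = "https://somenews.com.co/politics/story-12345"

for url in (real_story, lookalike):
    host = hostname(url)
    flag = "ends in .com.co, not the same site!" if host.endswith(".com.co") else "plain .com"
    print(host, "->", flag)
```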

Once you determine the true source you need to evaluate it – who wrote this, what are their political leanings, are they being backed by other influences (like money) somewhere? You should click on the article’s links and/or look at its sources and read those articles to get context before you make a definitive decision about its worth. All of this takes access, and knowledge, and constant practice.

Maha Bali writes about how fake news is not our real problem. She points out how fake news is good for critical thinking and states that we need more than just a cognitive approach; what we really need is cross-cultural dialog, learning, and skills. This is where education and community need to step up to the plate.

It seems like a lot, and for me it is a call for better general and liberal education. I think the first step may just be in realizing (and getting students to realize) that my internet is different from your internet. Where possible, it means taking ownership of our own “associative trails” and demanding that ownership when it is kept from us. Finally, it means simply realizing that there are political forces and companies with lots of your data… which has always been the case, but also realizing that they are trying to influence you in increasingly intimate ways.

This article (images and words) is CC-BY Autumm Caines

What is Digital Citizenship?: recap on week 1 and announcing week 2 of #DigCiz

Last week was the first round of conversations that we kicked off about digital citizenship. I started off by presenting on digital citizenship with my colleague Jim Kerr during Martin Luther King Day at Capital. It was nice to have the foundation of what Jim had done last year when he presented this topic on his own at this same event. We used the nine elements of Digital Citizenship as defined by Dr. Mike Ribble on digitalcitizenship.net, which I think can be helpful when presenting the concept to people who are not familiar with it. We Periscoped the session and put it on YouTube – we didn’t do a good job of staying in frame, so the audio is better than the visuals.

This past week also kicked off the #DigCiz conversations, which Sundi and I started as an offshoot of #HumanMOOC.

We have planned alternating forms of conversation between Twitter chats and Google Hangouts (GHO). This first round was a GHO and it was small – just Sundi, Daniel, and me – but that was okay; I think that made for a deeper conversation. We talked about some pretty cool stuff including:

  • What possibilities there might be for creating a digital bystander training
  • Creating journal entries reflecting on who one is as a digital citizen
  • Digital citizenry as a transformational experience
  • Digital citizenry as a chaotic concept as defined by the Cynefin framework

We kept returning to the nine elements of digital citizenry, and while I find the elements helpful in broaching the conversation of digital citizenship, as I start to dive deeper in my own thinking I’m having trouble with them. I’m just finding that digital citizenship is much deeper than any list of themes. In our GHO conversation we talked about how important creating a digital identity is to citizenship. How can you participate if you don’t have an identity? What does it mean to have an identity, and if someone does not have a well-defined identity does that mean that maybe they are new to the digital world – should we take steps to welcome them? But the elements do not mention digital identity except for the element of digital law, noting that stealing someone’s identity in a digital space is illegal. Digital identity seems to fit into the elements of digital communication, etiquette, and literacy, but there is no mention of it.

This week we are continuing this conversation with a Twitter chat, and we have changed the date/time so as not to conflict with the #MOOCMOOC instructional design chat. We will be tweeting on Friday, January 29th at 11am CST/12pm EST using the tag #DigCiz.

The question is “Why is Digital Citizenship Important?” and I’ll take a quick stab here at answering it for myself: I think digital citizenship is important because no man is an island. It has to do with the idea of the public commons and working in public for the betterment of everyone. It has to do with those connections and networks that we talk about so much in learning theory, and it has me wondering about informal learning and how we are learning all of the time as we connect with one another. What does it mean to share space? What does it mean to share ideas?

I know these are big, romantic kinds of questions/reflections… You can join the Twitter chat on Friday, January 29th at 11am CST/12pm EST using #DigCiz and help us explore them.