Platform Literacy in a Time of Mass Gaslighting – Or – That Time I Asked Cambridge Analytica for My Data

Digital Citizenship and Curiosity 

At the beginning of 2017 I first discovered Cambridge Analytica (CA) through a series of videos: a Sky News report, some of the company's own advertising, and a presentation by its CEO Alexander Nix. I found myself fascinated by the notion that big data firms focused on political advertising were behind those little Facebook quizzes; that these firms were creating profiles on people by harvesting data from the quizzes and combining it with other information like basic demographics, voter and districting information, and who knows what else to create a product for advertisers. I was in the process of refining a syllabus for a class and creating an online community around digital citizenship, so this was of particular interest to me.

My broad interest in digital citizenship is in our rights and responsibilities online, and I was compelled by the thought that we could be persuaded to take some dumb quiz and that, through taking it, our data would be captured and used in ways we never expected; in ways outside of our best interests.

I had questions about what we were agreeing to: how much data firms could know about us, what kind of metrics they were running on us, how the data could be shared, and what those messages of influence might look like. I started asking questions, but when the answers started coming in I found myself paralyzed under the sheer weight of how much work it took to keep up with it all, not to mention the threat of financial blowback. This paralysis made me wonder about the feasibility of an everyday person challenging this data collection, requesting their own data to better understand how they were being marketed to, and, of course, about the security and privacy of that data.

Cambridge Analytica is again in the news with a whistleblower coming forward to give more details – including that the company was harvesting networked data (that is, not just your data but your friends') from Facebook itself (reactions, personal messages, etc.) and not just the data entered into the quizzes. Facebook has suspended Cambridge Analytica's accounts and distanced itself from the company. Additionally, David Carroll, a professor at The New School's Parsons School of Design, filed a legal action this past week against the company in the UK. The story is just going crazy right now and every time I turn around there is something new.

However, much of this conversation is happening from the perspective of advertising technology (adtech), politics, and law. I’m interested in it from the perspective of education so I’d like to intersect the two.

The Request

A few weeks after I found those videos, featured by and featuring Cambridge Analytica, I came across a Motherboard article that gave some history of how the company was founded and how they were hired by several high profile political campaigns. Around this time I also found Paul-Olivier Dehaye of personaldata.io who was offering to help people understand how to apply to get a copy of their data from Cambridge Analytica based on the Data Protection Act (DPA), as the data was being processed in the UK.

My interests in digital citizenship and information/media/digital literacy had me wondering just how much data CA was collecting and what they were doing with it. Their own advertising made them sound pretty powerful but I was curious about what they had, how much of it I’d potentially given to them through taking stupid online quizzes, and what was possible if combined with other data and powerful algorithms.

The original request went not to Cambridge Analytica but to their parent company, SCL Elections. There was a form that I had to fill out, and a few days later I got another email stating that I had to submit even more information plus £10 (GBP), payable in a few very specific ways.

 

Response from SCL asking for more information from me before they would process my Subject Access Request

Out of all of this, the hardest part actually turned out to be paying the £10. My bank would only wire transfer a minimum of £50, and SCL told me that my US dollar check would have to match £10 exactly after factoring in the exchange rate on the day they received it. I approached friends in the UK to see if they would write a check for me that I could pay them back for. I had a trip to London planned and considered dropping by SCL's offices to hand over cash, even though that was not one of the options listed. It seemed like a silly barrier: a large and powerful data firm could not accept a PayPal payment or the like, and would instead force me into overpayment or deny my request over changes in the exchange rate. In the end, PersonalData.io paid for my request and I sent along the other information that SCL wanted.

Response

After Paul and I got the £10 worked out, I heard from SCL pretty quickly saying that they were processing my request, and a few days later I got a letter and an Excel spreadsheet from Cambridge Analytica listing some of the data they had on me.

It was not a lot of data, but I have administered several small learning platforms, and one of the things you learn after running a platform for a while is that you don't really need a lot of data on someone to make certain inferences about them. I also found the last tab of the spreadsheet disconcerting, as it was a breakdown of my political beliefs. It showed how important various political issues were to me on a scale of 1-10, but nothing told me how that ranking was obtained.
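To make that inference point concrete, here is a toy sketch of how a handful of 1-10 issue rankings like the ones on that last tab could be collapsed into a crude targeting label. To be clear: the issue names, thresholds, and segment labels are all invented for illustration and are not anything CA actually used.

```python
# Hypothetical example: a few 1-10 issue rankings are enough
# for a naive rule to sort a person into an advertising segment.

def segment(rankings):
    """Map a dict of issue -> importance (1-10) to a coarse segment."""
    if rankings.get("gun_rights", 0) >= 7 and rankings.get("immigration", 0) >= 7:
        return "likely-conservative"
    if rankings.get("environment", 0) >= 7 and rankings.get("healthcare", 0) >= 7:
        return "likely-progressive"
    return "persuadable"  # the segment advertisers pay the most to find

me = {"environment": 9, "healthcare": 8, "gun_rights": 2, "immigration": 3}
print(segment(me))  # -> likely-progressive
```

A real firm would use far more sophisticated models, of course, but the unsettling part is how little input even this crude version needs.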

Are the results on that last tab from a quiz I took when I just wanted to know my personality type or which Harry Potter character I most resemble? Are they a ranking based on a collection and analysis of my Facebook reactions (thumbs up, love, wow, sad, or anger) to my friends' postings? An analysis of my own postings? I really have no way of knowing. According to the communication from CA, it is these mysterious "third parties" who must be protected, more than my data.

Excerpt from the original response to the Subject Access request from Cambridge Analytica

Looking for answers to these questions, Paul put me in touch with Ravi Naik of ITN Solicitors, who helped me issue a response to CA asking for the rest of my data and for more information about how these results about me were produced. We never got a response that I can share, and in considering my options and the potentially huge costs I could face, it was just too overwhelming.

Is it okay to say I got scared here? Is it okay to say I chickened out and stepped away? Cause that is what I did. There are others braver than me and I commend them. David Carroll, who I mentioned earlier, followed the same process that I did, just filed legal papers against CA, and is still trying to crowdfund resources. I just didn't have it in me. Sorry, democracy.

It kills me. I hope to find another way to contribute.

Platform Literacy and Gaslighting

So now it is a year later and the Cambridge Analytica story has hit and everyone is talking about it. I backed away from this case and asked Ravi to not file anything under my name months ago and yet here I am now releasing a bunch of it on my blog. What gives? Basically, I don’t have it in me to take on the financial risk but I still think that there is something to be learned from the process that I went through in terms of education. This story is huge right now but the dominant narrative is approaching it from the point of view of advertising, politics, and the law. I’m interested in this from the perspective of what I do – educational technology.

About a week ago educational researcher and social media scholar danah boyd delivered a keynote at the South by Southwest Education (SXSW Edu) conference where she pushed back on the way we approach media literacy with a focus on critical thinking – specifically in teaching, though this also has implications for scholarship. The talk drew a body of compelling criticism from several other prominent educators, including Benjamin Doxtdator, Renee Hobbs, and Maha Bali, which inspired boyd to counter with another post responding to the criticisms.

The part of boyd’s talk (and her response) that I find particularly compelling in terms of overlap with this Cambridge Analytica story is in the construct of gaslighting in media literacy.  boyd is not the first to use the term gaslighting in relation to our current situation with media but, again, often I see this presented from the perspective of adtech, law, or politics and not so much from the perspective of education.

If you don’t know what gaslighting is you can take a moment to look into it, but basically it is a form of psychological abuse between people in close relationships or friendships. An abuser twists facts and manipulates the victim by drawing on that close proximity and on intimate knowledge of the victim’s personality, playing on their fears, wants, and attractions.

One of the criticisms of boyd’s talk, one that I’m sympathetic to, is around the lack of blame that she places on platforms. Often people underestimate what platforms are capable of and I don’t think that most people understand the potential of platforms to track, extract, collect, and report on your behaviour.

In her rebuttal to these criticisms, to which I am equally sympathetic, boyd states that she is well aware of the part that platforms play in this problem and that she has addressed it elsewhere. She states that it is not the focus of this particular talk, and I’m okay with that – to a point. Too often we attack a critic (for some reason, more often critics of technology) who is talking about a complex problem for not addressing every facet of that problem all at once. It is often just not possible to address every angle at the same time, and sometimes we need to break things up into more digestible parts. I can give this one to boyd – that is, until we start talking about gaslighting.

It is exactly this personalization – this intimate knowledge of who a person is – that makes the gaslighting metaphor work. We are taking a description of a very personal kind of abuse and using it to describe a problem at mass scale. The platform has data that tells it bits about who you are, and there are customers (most often advertisers) out there who will pay for that knowledge. If we are going to bring gaslighting into the conversation then we have to address the ability of a platform to know what makes you like, love, laugh, wow, sad, and angry, and to use that knowledge against you.

We don’t give enough weight to what platforms take from us and how they often hide our own data from us and then sell it to third parties (users don’t want to see all that messy metadata…. right?). I’m not sure you even glimpse the possibilities if you are not in the admin position – and who gets that kind of opportunity?

It would be a stretch to call me a data scientist, but I’ve built some kind of “platform literacy” over a little more than a decade of overseeing learning management systems (LMSs) at small colleges. Most people interact with platforms as users, not as admins, so they never get that view. I’m not sure how to quantify my level of platform literacy, and please understand that I’m no whiz kid – an LMS is no Facebook, and in my case we are only talking about a few thousand users. I’m more concerned with making the thing work for professors and students than anything. However, even a small amount of admin work gives you a feel for what it means to consider and care about things on a different level: how accounts are created, how they interact with content and with other accounts, the way accounts leave traces through the content they contribute but also through their metadata, how the platform is always monitoring this, and how as an administrator you have access to that monitoring when the user (person) often does not.
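As a rough illustration of what that admin-side view looks like, here is a minimal sketch of the per-click event metadata a platform quietly accumulates. The field names here are my own invention, not taken from any real LMS, but the shape is typical.

```python
# A simplified sketch of platform event logging: every action leaves a
# trace an admin can query, though the user usually never sees any of it.
from collections import Counter

events = [
    {"user": "student42", "action": "view_page",  "page": "quiz-3",  "ip": "10.0.0.7"},
    {"user": "student42", "action": "post_reply", "page": "forum-1", "ip": "10.0.0.7"},
    {"user": "student42", "action": "view_page",  "page": "grades",  "ip": "10.0.0.7"},
]

def trace(user):
    """Everything the platform knows about one account's activity."""
    return [e for e in events if e["user"] == user]

# An admin can summarize a person's behaviour in one line.
by_action = Counter(e["action"] for e in trace("student42"))
print(by_action)  # -> Counter({'view_page': 2, 'post_reply': 1})
```

Even this tiny example shows the asymmetry: the query runs on the admin side, and nothing here requires the account holder to ever be told it happened.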

I don’t think that most LMS admins (at least as LMSs are currently configured) at small colleges are incentivised to go digging for nuanced details in that monitoring unprompted. I do think that platform owners who have customers willing to pay large sums for advertising contracts have more of a motivation to analyze such things.

Educational researchers are incentivised to show greater returns on learning outcomes, and the drum beat of personalized learning is ever present. But I gotta ask: can we pause for a second and think… is there something education can learn from this whole Cambridge Analytica, Facebook, personalization, and microtargeted advertising story? Look at everything I went through to try to better understand the data trails I’m leaving behind, and I still don’t have the answers. Look at the consequences we are now seeing from Facebook and Cambridge Analytica. The platforms that we use in education for learning are not exempt from this issue.

My mind goes back to all the times I’ve heard utopian dreams about making a learning system that is like a social media platform. All the times I’ve seen students told to use Facebook itself as a learning tool. So many times I’ve sat through vendor presentations around learning analytics and then, during Q&A, asked “where is the student interface – you know, so the student can see all of this for themselves?” only to be told that was not a feature. All the times I’ve brainstormed the “next generation digital learning environment” only to hear someone say “can we build something like Facebook?” or “I use this other system because it is so much like Facebook”. I get it. Facebook gives you what you want and it feels good – and oh how powerful learning would be if it felt good. But I’m not sure that what feels good here is learning.

In her rebuttal boyd says that one of the outstanding questions that she has after listening to the critics (and thanking them for their input) is how to teach across gaslighting. So, it is here where I will suggest that we have to bring platforms back into the conversation. I’m not sure how we talk about gaslighting in media without looking at how platforms manipulate the frequency and context with which media are presented to us – especially when that frequency and context is “personalized” and based on intimate knowledge of what makes us like, love, wow, sad, grrrr.

Teaching and learning around this is not about validating the truthfulness of a source or considering bias in the story. Teaching and learning around this is about understanding the how and why of the thing, the platform, that brings you the message. The how and why it is bringing it to you right now. The how and why of the message looking the way that it does. The how and why of a different message that might be coming to someone else at the same time. It is about the medium more than the message.

And if we are going to talk about how platforms can manipulate us through media we need to talk about how platforms can manipulate us and how some will call it learning. Because there is a lot of overlap here and personalization is attractive – no really, I mean it is really really pretty and it makes you want more. I have had people tell me that they want personalization because they want to see advertising for the things that they “need”. I tried to make the case that if they really needed it then advertising would not be necessary, but this fell flat.

Personalization in learning and advertising is enabled by platforms. Just as there are deep problems with personalization in advertising, we will find those problems multiplied many times over when we apply it to learning. Utopian views that ignore the problems of platforms and personalization are only going to end up looking like what we are seeing now with Facebook and CA. The thing that I can’t shake is this feeling that the platform itself is the thing that we need more people to understand.

What if, instead of building platforms that personalized pathways or content, we found a way to teach platforms themselves, so that students really understood what platforms are capable of collecting, producing, and contextualizing? What if we could build that platform literacy within our learning systems? Perhaps then, inside of social platforms, people would not so easily give away their data, and when they did they would have a better understanding of the scope. What if we were really transparent with the data that learning systems have about students, focused on making students aware of the existence of their data, and emphasised their ownership over it? What if we taught data literacy to students with their own data? If, decades ago, we had focused on student agency and ownership over platforms and analytics, I wonder if Cambridge Analytica would have even had a product to sell to political campaigns, let alone ever been a big news story.
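One hypothetical sketch of what that transparency could look like: the same event trail an admin can query, exported back to the student who generated it. The function and field names here are invented for illustration; no LMS I know of ships this.

```python
# A sketch of a student-facing "show me my data" export: a complete,
# readable copy of one's own event trail, handed back to its owner.
import json

def export_my_data(user, events):
    """Return a student's own event trail as readable JSON."""
    mine = [e for e in events if e["user"] == user]
    return json.dumps({"user": user, "event_count": len(mine), "events": mine}, indent=2)

events = [
    {"user": "student42", "action": "view_page",  "page": "quiz-3"},
    {"user": "student07", "action": "post_reply", "page": "forum-1"},
]
print(export_my_data("student42", events))
```

The technical part is trivial, which is rather the point: the barrier to this kind of transparency is design priorities, not engineering difficulty.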

I’m not saying this would be a fail-safe solution – solutions come with their own set of problems – but I think it could be a start. It would mean a change in the interfaces and structures of these systems, and it would mean other things too: changes in the way we make business decisions when choosing systems, and changes in the way we design learning. But we have to start thinking and talking about platforms to even get started – because the way they are currently configured has consequences.

Image CC0 from Pixabay

ELI Poster Presentation: DigPINS – A participatory faculty development experience

I’m excited to be presenting a poster at ELI2018 with Sundi Richard on DigPINS – a participatory faculty development experience. Sundi designed DigPINS around the same time that I was designing my first year seminar in digital citizenship – of course we co-founded #DigCiz and digciz.org together so there has been a lot of talk between us about all of these projects.

DigPINS looks at Digital Pedagogy, Identity, Networks, and Scholarship as an online faculty development experience in a cohort model over a set time period. It sort of reminds me of a cMOOC, except the focus is not on massive numbers and part of the experience does not happen in the open: the cohort at the school running the course has a backchannel, and they are often close enough in physical proximity that they can sometimes just talk to each other on campus.

For our poster we have given a description of each of the defining concepts (the PINS: Pedagogy, Identity, Networks, and Scholarship) on one half, and an interactive description of example activities on the other. The activities are dynamic and complex – not easily put into a box – hence making the poster interactive. How do we make a poster interactive? Each activity will be printed separately so that, during explanation, it can be placed along two intersecting continuums: Private/Public and Synchronous/Asynchronous. The far extremes of each are hard to get at, and I’m not sure anything in DigPINS belongs there, but we are hopeful that having these as moveable elements will let us better demonstrate their complexity.

A digital version of the poster is embedded below – it is three slides long: Slide 1 is the poster, Slide 2 holds the moveable activities, and Slide 3 gives a description.

DigPINS Poster

Some of you know I just took a position at St. Norbert, and one of the big reasons was that I knew they were not just open to but encouraging really exciting approaches to faculty development like DigPINS. I just finished running my first implementation of DigPINS at St. Norbert. I had a great group of faculty, staff, and librarians who were really thoughtful about their approaches. We had some serious conversations about the good and bad of technology, social media, and mobile access, and their effects on pedagogy, scholarship, and ourselves.

I’m excited to be able to present with Sundi on DigPINS – our next move is to open the curriculum so that others can take the skeleton of the defining concepts and activities and make it their own at their institution. That is coming soon so stay tuned!!!

 

भक्ति in the Time of MOOC and Yoga in the Open

This past week was the soft start to YogaMOOC. I jumped into the Twitter stream, signed up on the edX side, and signed up to be a research subject. I’m also taking #OpenEdMOOC – it was week 3 this past week and I did not do a blog post.

I’m combining some thoughts about both of these into this one post. They are thoughts about behaviour, research, resilience, metaness, and more. I’m hopeful that I can weave them together in a way that makes sense. If not I’m hopeful that you will not judge me too harshly dear reader.

Persistence and resilience: Bhakti in the time of MOOC

As we work through week 0 of YogaMOOC I’ve been dipping into some of the blog posts (admittedly I have ignored the discussion boards on edX – I feel that this speaks badly of my character somehow, but there it is). Most are little tester posts just to get something out there; a sentence or two just to say “hey, I’m here”. This is valuable, as many are starting blogs (or tags, or categories) specifically for this MOOC. Others, however, are digging in a little deeper and reflecting on past yoga and meditation experiences. I’d like to do that too, but I’ll save it for the next section.

I digress

What really stood out to me as I perused these posts were the occasional ones that reflected on a past yoga or meditation practice that had been abandoned (or diminished) with an interest in rekindling this practice through YogaMOOC. These did not seem to be the majority of the posts but I suppose this featured prominently for me because I know this story well. I have had yoga and meditation practices in the past but they have all fallen off.

This made me think of MOOCs, and of OpenEdMOOC in particular, considering that I did not post a week 3 blog entry, and honestly my week 2 was just a response to someone else, not a response to the content or instructors. Persistence in MOOCs was once a thing of interest to researchers – I think it still is. It is one of those topics I’ve been flirting with for some time but have yet to really fall into – much like my own yoga and meditation practice.

Okay here I go again with the digressing – forgive me…

I make it no secret that in my youth I was a hippy. Along with certain types of music and dress, such a lifestyle comes with certain practices… yoga and meditation fall right in. I love yoga and meditation because of the mind/body connection – it is a liminal space.

This in-between of what makes us human. Yoga is often thought of as just physical postures, and while these are beneficial, as we go deeper we are asked to align the mind as we do them.

Meditation is thought of as just sitting and staring at one’s breath but the body ever persists (with aches and pains and cramps) in this state and we are asked to pull the mind back to a fixed point.

It naturally falls into the religious and philosophical with questions such as ‘what are we’ and ‘what happens to the mind after the body is no longer viable’. And with me being all forehead chakra, such questions appeal to me.

I’ve been to India. I’ve done the silent treatment. I’ve followed the master. I’ve climbed the mountain (okay okay … maybe it was just a foothill but that’s what I’ve got). Finally, I’ve just let myself be – and I’ve seen the echo that creates.

What’s next?

Enough digressing…

Persistence

Resilience

Yoga

MOOCs

भक्ति (Bhakti) is the Sanskrit word for devotion or adoration, which makes me think of persistence and resilience. I’ll admit that I could be conflating things here, but the path is too enticing for me not to follow it. I can’t put my finger on it quite yet, but it is right there…

is it persistence out of blind practice

or out of …

(fine I’ll say it)… love?

Does it matter?

Perhaps these are two paths to the same end?

Which is easier to measure?

Researching the Intimate: Yoga in the Open

I decided to also be a research subject for YogaMOOC, but I have to admit it was difficult for me. I love LINKLab MOOCs and I’ve been a research subject for them before. I’m not sure what was different this time. I filled the stuff out but it just felt more… idk… closer to home this time.

The questions were increasingly intimate. I should have paid better attention but as often happens with me (and I suspect many of us) I just kept filling in answers. As I remember it and have gone back to pick a bit…

The regular demographic stuff, with a particular nod to yoga:

How much do you make a year? How much experience with yoga do you have?

But it was the stuff after that …

The stuff about deliberately noticing sensations on the body when doing mundane tasks. The stuff about being self critical. The part about watching my feelings. The parts about meta… cognition, experience, emotion, … idk… life… that was hard for me – in that it felt really close to home.

I come from a tradition (one in which, admittedly, my bhakti is low) where we live in silence for ten days or so at a time in quiet reflection. We think about these things but we don’t discuss them in the open. In fact, the only dialog is in private consultation with a teacher.

Of course this research is not sharing the results with everyone, but I’m not even sure myself what the research questions are. Did I miss that part? Will I get to reflect on how my data fit into the larger project?

I don’t know – I suppose it is a small thing but I can’t help wonder. I don’t mind sharing but I feel like I’m missing some kind of reciprocation… I still filled the stuff out… but it just feels strange to me…

I’m looking forward to YogaMOOC and OpenEdMOOC and a few other MOOCs that I’m playing with at this time.

These are just some thoughts – let’s see what comes next.

Till next time,
शरद (aka Monsoon) 

Image Credit: Yoga Baby – photographer: me
Check it here

Openness, Humanness, and Connectedness: or We can connect in the Open if you want to

In my last blog post I responded to the question “why does open matter,” posed by #OpenEdMOOC, with the answer “because we are human” – proclaiming that open matters because humans have a need to connect and because openness facilitates connectedness.

I came across an interesting response to my post from Benjamin Stewart that I had not noticed before: Benjamin posted it as an Evernote note, which I did not get a notification about, though I did notice when he posted it to Twitter. Benjamin asks if I’ve gone far enough in my thinking, offering the example that one can connect without being in the open – say, through a clique or a closed group of like-minded individuals.

I appreciate the question – it makes me wonder about how open is open. Can one be open within a clique? Can a closed group have diverse minds? Then I start thinking about Blake and the doors of perception – about how I’ve argued that the doors are more than just open or closed but of many differing natures and states (glass doors, murphy doors, pocket doors… being propped, locked, ajar, and I could go on). But again I’ll stop myself from going down the rabbit hole as this post will be much longer and go in a different direction if I do.

Trying to stay somewhat short and on task, I’ll address Benjamin’s concern by simply restating my premise that open matters because we are human. To extrapolate further: humans have a need to connect, and openness facilitates connectedness… however, I am not stating that humans have a need to connect in the open. I think that Benjamin is absolutely correct when he says that people can connect in closed groups. I would actually take it further and say that people will connect in closed groups. This is not antithetical to my point – I think it actually makes my point.

I’m not one of those who demonize this closed connectedness – I think we need that too, though too much of it can be a problem. I see the problem with it as one of scale more so than it being an issue in the thing itself.

As we enter week 2 of #OpenEdMOOC we are starting the conversation around copyright and how it plays with Open. The problem of Mickey Mouse and copyright has come up because here in the US, every time Mickey draws near the threshold of entering the public domain, lobbyists petition Congress and the date is pushed forward another twenty years. Of course this does not just apply to Mickey but to all copyrighted works.

We don’t have to connect with Mickey – but if we wanted to – well it has to be one way and at a premium.

We don’t have to connect in the Open – but we can if you want to.

But we do have to connect.

Humans need to connect. With ideas, with one another, with language, with images, with emotions, with numbers, with color – I could go on. Extreme cases of humans being deprived of connections are abusive atrocities, but lesser barriers are always going to exist. You can connect within closed systems or you can connect in the open, and I think most people mix it up, depending on their definition of ‘Open’. I like to connect in both, in the broad sense; though when it comes to education I tend toward Open when I can. There are some for whom connecting in the open may be their only option because of money or other circumstance. So, when I think about why Open matters – I’m sure there are lots of reasons, but still, I say it is because we are human.

Why Does Open Matter? because we are human

George Siemens and David Wiley have joined up to do a MOOC on Open Education and I feel as though, with my interest in the subject, I would be remiss if I did not at least dip my toe in…

I always find myself intrigued by those big broad questions and the guys have not let me down by asking in week one “why does open matter”. 

I’ve watched the videos, done some readings, read some of the participant blogs and tweets. Along with some personal reflection here goes my take on things.

The question is a hard one for me because I’m not sure that there are simple definitions for either “open” or “education” and I’m not happy with my own definition for either. Both of these words give me pause and raise more questions than answers for me. However, I will try to not get too “in the weeds” with definitions in this post.

If I have to answer this question simply, this question of why open matters, I would say it matters because we are human. Because it is in the nature of most humans to connect… in some way. It is what we do. (Often to each other, but not always). And openness facilitates connectedness.

It’s a big messy web. And it is bigger than technology. It is bigger than the internet – which is just the latest tool for making connections in the open.

Because it is in our nature to connect, somewhere along the way a premium was put on the ways that we connect – which was mostly through means of expression. An expression of knowledge, let’s say a book, is placed out of reach for most and only given to a few who can afford it. This proves lucrative in terms of growing capital and in terms of growing a certain kind of knowledge but alas not in terms of growing that kind of knowledge that serves the human spirit and it only feeds that human need to connect for a very few.

Simply, open matters because we are human; humans need to connect and openness facilitates connectedness.

However,

Humans grow and thrive through connections but not all connections are equal.

Even if that book is put at a premium – one may not connect with it, but one will connect (to something – someone) nonetheless.

Earlier today Siemens asked an intriguing question.

I do think that we will have gone wrong somewhere if we start thinking that open equals good.

Most of the platforms used to facilitate open connections and open education these days require a person to sign off on terms of service that most never read and few could really understand. Often these platforms harvest the data we generate in satisfying our need for connection, feeding our preferences and leanings back to us at a premium. I don’t find this “good”, yet I have a need to connect and so here I am.

While I think that open matters because humans have a need to connect, fulfilling a need is not always good. We have a need to eat, but if we ate junk food all the time that would not be “good” in terms of our health and wellbeing.

Some may say that junk food is not real food, and others may say that junk food may be good in the moment because it tastes good. So perhaps getting in the weeds with definitions is not so bad after all – but if I had tackled “what is open” or “what is education” this post would have been much different.

I’m looking forward to the next few weeks of the MOOC. I’ve just started a new job in De Pere WI as an Instructional Designer at St. Norbert College and I’ll be seeing if any of my new colleagues may be interested in these questions and others that I’m sure will be coming up in #OpenEdMOOC.

 

#DigCiz Reflections and a #DigPed Workshop

We just wrapped up a month-long #DigCiz conversation and it was really unlike any of the others.

It was bigger for one thing.

I was informally running Twitter stats in the background and we consistently had between 200 and 400 people in any given week. Not massive by any means, but growing. Though it was bigger than before, and though it was online, I’m still adamant that it was not a MOOC – it’s a conversation. A conversation mediated by technology, sure, but a conversation, and not a course, nonetheless.
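The stat-keeping I mention was nothing fancy. As a rough sketch of the idea, here is a toy bit of Python that counts unique participants on a hashtag per week – the sample tweets and field names are invented for illustration; in practice the data would come from a search or archive of the #DigCiz tag:

```python
# Toy illustration of informal hashtag stats; the tweet data is made up.
tweets = [
    {"user": "alice", "week": 1},
    {"user": "bob", "week": 1},
    {"user": "alice", "week": 2},
    {"user": "carol", "week": 2},
    {"user": "bob", "week": 2},
]

def participants_per_week(tweets):
    """Count the unique users tweeting on the tag in each week."""
    weeks = {}
    for t in tweets:
        weeks.setdefault(t["week"], set()).add(t["user"])
    return {week: len(users) for week, users in sorted(weeks.items())}

print(participants_per_week(tweets))  # {1: 2, 2: 3}
```

Counting unique users (rather than raw tweets) is what gives a sense of how many *people* are in the conversation each week.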

A #DigPed Workshop

Still, we learned a lot, and as part of the continual processing and dissemination of that learning, I’m excited to point out (I’m not really announcing – the site has been up for a while) that Sundi Richard and I will be collaborating in the flesh with participants for a 75-minute workshop during the Digital Pedagogy Lab Institute. The workshop is broad, so even if you did not follow along with #DigCiz, if you are interested in digital citizenship in higher education and society at large you will find it valuable.

If you are attending the Institute consider coming to our workshop! If you are not attending there is still time because registration is still open (as of the time of this posting anyway).

I realize asking people to attend a whole institute for a 75-minute workshop is a little crazy, but there is so much to be learned at the Institute as a whole! It looks like there is still room in the Data, Networks, and Domains tracks! These are led by some of the smartest people in the room (and by room I mean the internet): Kris Shaffer (Data), Maha Bali and Kate Bowles (Networks), and Martha Burtis (Domains).

And! Even though their tracks are full, hanging with the likes of Amy Collier, Sean Michael Morris, Jesse Stommel, and Chris Friend… well, come on! I mean, the prospect of running into these folks in the hallway is super cool in and of itself.

#DigCiz Reflections

Mostly what I really want in #DigCiz is to have a broad conversation about “digital citizenship” that takes a critical look at both “digital” and “citizenship” and that moves beyond things like netiquette and cyberbullying. I think those things are important, but I want them to be part of the conversation, not the whole conversation.

I think that we have been pretty successful in creating a conversation that does that, and it also seems that a bit of a community is growing.

This last round of #DigCiz spurred a bit of branching out… meaning that there are all of these little side things that keep popping up even though our planned burst ended weeks ago.

For instance, the other day Dr. Naomi Barnes decided to live-tweet a reading of an article called Towards a Radical Digital Citizenship in Digital Education by Akwugo Emejulu and Callum McGregor using the #DigCiz tag.

This spurred a bunch of us to read it, and wow! This is exactly the kind of thing that I’m talking about when I say that I want to think about digital citizenship more deeply and more critically.

Besides Naomi’s spontaneous contribution, we also had this cool idea, inspired by Bill Fitzgerald’s and Kristen Eshleman’s week, to do a Hypothesis annotation of a privacy policy. We chose to annotate the Slack privacy policy and it was really enlightening. So many of us enter into these legal agreements when we use these services without even questioning what we are agreeing to. Using social annotation we can really dig in there and pull out the nuance of these documents for questioning, contextualizing, and clarifying.

Ever since Audrey Watters blocked annotation from her site I’ve been rethinking my use of Hypothesis. I don’t think that Audrey is wrong (it is her site, people), but I also see great benefit in annotating the web. Annotating privacy policies and TOS as a way to better understand them does not feel like impinging on anyone’s creative work. We are still doing some work to refine how we do this, but I think it has promise.

Then, the other day on Twitter, George Station was talking about Zeynep Tufekci’s new book Twitter and Tear Gas. It turned out Sundi and Daniel, as well as some others, were about to read it too. I nudged George on Twitter about doing a #DigCiz book discussion and he took me up on it! I started into the book right away and wow! Again, this is more of what I’m looking for when I talk about a deeper look at digital citizenship.

In Short

A big part of why I can’t call DigCiz a MOOC is because I don’t feel like a teacher in DigCiz – I feel more like a learner.

However, I do turn around what I learn in DigCiz and teach it. I am planning a first year seminar in Digital Identities, Environments, and Citizenship to be taught in the fall and now I have this exciting opportunity to do the workshop at the Digital Pedagogy Lab Institute with Sundi.

If you are going to be at DPLI consider coming to our workshop. Sundi and I will be presenting together and we will be talking about many of the things that we have learned through these DigCiz conversations. We plan to present different scenarios that encompass facets of digital citizenship and ask participants to think about how we can present these to students for a deeper consideration of digital citizenship.

Also keep an eye on digciz.org because you never know when a DigCiz blast could pop up.

#DigCiz Week 4 – Big Data Big Dreams: waking up about data collection in edtech

It is week 4 of #DigCiz and Kristen Eshleman and Bill Fitzgerald are leading us in a week of discussion around data security and the part that higher education institutions play. In the prompt, Kristen questions the context of EDUCAUSE’s top 10 issues in IT and information security’s place in the #1 slot, saying “when you read the description from this list, it’s pretty clear that our membership views information security policy not in the service of the individual digital citizen, but in the service of institutional IT systems.” Kristen states that though security breaches may be costly, higher education institutions are not in the business of data security (we are in the business of educating students) and goes on to say “we may be able to address the needs of institutions and individuals more effectively if we reframe the conversation from the lens of digital citizenship.”

This really spoke to me in terms of how our professional organizations frame things to us as professionals. Often training and development from professional organizations is the way that many of us stay abreast of changes in the field. How professional organizations choose to frame these issues shapes how we bring these issues back to our institutions.

In response to the prompt, Kristen and Bill held a synchronous video call, and this point came up again from Chris Gilliard and Amy Collier.

All of this reminded me of something I wrote several months ago about my attendance at the ELI annual conference that at the time I’d decided not to publish. I was questioning the framing around the professional development I was getting, and now, after hearing other colleagues’ similar concerns, it feels so relevant that I can’t hold back.

I want to say that I felt really blessed to attend the conference and to present on digital citizenship but because of various experiences, which I will outline, I am now asking questions about what educational technology is for and why we are doing this.

“It is not the job of digital pedagogues—or digital aficionados, or digital humanists, or educational technologists, or instructional designers—to force people to go digital. When we make it our mission to convert non-digital folks to our digital purpose, we will not only very likely alienate these valuable colleagues, but we’ll also miss the mark of our true intention: to support learning and scholarship within institutions that, in our heart of hearts, we adore.” – Sean Michael Morris 

If the focus of edtech is simply to implement technology for the sake of technology, are we not vulnerable to the money and power backing those solutions? I’m negotiating ideas around how we are influenced in environments of professional development in edtech and what our responsibilities are as professionals, educators, and citizens. It seems to me of critical importance to be aware of how we are influenced in the environments where we place ourselves. I’m contemplating how we bring these experiences back to our institutions and how we influence our campus communities after attending them.

But anyway – onto the lived experience part:

At ELI

Having come from a more traditional IT background and then moving to an academic technology environment I was excited to attend the EDUCAUSE ELI conference. I’d always been told that the big EDUCAUSE main conference, which I have attended many times, was for that traditional IT audience but that ELI was more focused on educators.

While registering for the conference I was surprised to find that I had been automatically opted into being geographically tracked using beacons while I was onsite in Houston at the conference. Mind you, I was opted in by default – I had to specifically indicate that I did not want my physical location tracked. I chose to opt out of this because I didn’t really understand what exactly it all entailed, but I can imagine.

I would imagine this tracking means EDUCAUSE (or ELI as the case might be) knows where I spend my time at the conference. What vendor booths and sessions I attended. If I took a lunch at the conference or if I went out. How much time I might have spent in the hallway. Maybe even which of my colleagues, who are also being tracked, that I’m spending time with while I’m at the conference. 
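To make that speculation concrete, here is a hedged sketch of how raw beacon pings could be rolled up into a per-attendee movement profile. Everything here is invented for illustration – the data format, field names, and dwell-time logic are my assumptions, not anything published by EDUCAUSE or a beacon vendor:

```python
# Hypothetical illustration only: badge IDs, locations, and timestamps
# are made up, and real beacon systems surely differ in the details.
from collections import defaultdict

# Each ping: (badge ID, beacon location, timestamp in seconds)
pings = [
    ("badge-042", "vendor-hall", 900),
    ("badge-042", "vendor-hall", 960),
    ("badge-042", "session-room-3", 1500),
    ("badge-017", "session-room-3", 1500),
    ("badge-042", "hallway", 2100),
]

def time_per_location(pings, interval=60):
    """Attribute one ping-interval of dwell time to each observed location."""
    profile = defaultdict(lambda: defaultdict(int))
    for badge, location, _ts in pings:
        profile[badge][location] += interval
    return {badge: dict(locs) for badge, locs in profile.items()}

print(time_per_location(pings)["badge-042"])
# {'vendor-hall': 120, 'session-room-3': 60, 'hallway': 60}
```

Notice that even this crude roll-up answers questions like which vendor booths I lingered at – and that badge-042 and badge-017 pinged the same room at the same time, which is exactly the kind of co-location inference I worry about.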

There are just some key questions that I could not find answers to – and these are increasingly the same questions that I keep having about all of these data collection tools, be it Facebook and Google or educational systems:

  • Do I get access to my data?
  • Who exactly owns these data?
  • Are these data for sale?
  • Could these data be turned over to government agencies – raw or analyzed?
  • Do vendors get access to my data – raw or analyzed?
  • Do I get access to the algorithms that will be applied to my data?
  • Is someone going to explain those algorithms to me – because I’m not a data scientist.
  • Are the data anonymized?
  • Are these data used only in aggregate or can they be used to target me specifically?
  • How long will these data be retained? – Will they be tracked over time either in aggregate and/or individually?
  • Who has access to these data?

Once I arrived on site I found many participants who had these extra little plastic tabs stuck to their name badges and quickly found out that these were the tracking tags. In several of the session rooms and around the conference in other areas I found mid-sized plastic boxes with handles under chairs and in corners with the name of the beacon company on them.

I don’t remember any information that could have answered the questions listed above being provided during registration. I did not seek out anyone organizing ELI about this, or anyone representing the vendor. However, while I was onsite at ELI this started to bother me enough that I asked plenty of the participants these kinds of questions. While I mostly got into very interesting conversations, I did not find anyone who could answer them for me.

So What?

This bothers me because if educational technology professionals are giving over their data at professional development events geared toward educating us about innovations in educational technology, shouldn’t we be able to answer those questions? Why do so many of us assume benevolence and hand our data over without having those answers?

Many of us might think that we know the entities to whom we are giving our data away, but even if we think that it is a trusted professional organization, companies and organizations are changing all of the time, switching out leadership and missions. Throw in the possibility of the data being sold and we have no idea what is going on with our data.

After attending larger conferences I have felt targeted by vendors, and I have heard horror stories from other female colleagues (who actually have purchasing power) about the lengths vendors will go to in order to get a closed-door meeting. I can imagine scenarios where my data is used to the benefit of vendors over my own benefit or that of my institution.

When our professional organizations do not prompt us to think critically about data collection, and when we are automatically opted into turning over our own data without question, it is no wonder we don’t question taking students’ data without informing them. Those who are teaching us about data collection show us that this is normal, and we pass that on to our institutions.

ELI is not alone in this, of course; it happens with most professional organizations with corporate sponsorship and with most of the corporate digital tools used for education and social interaction. However, I’m concerned when one of the major professional organizations in my field perpetuates this normalization of data surveillance at a time when we are seeing the rights of our most vulnerable students threatened. Yet I continue to see a proliferation of the mindset that more data is always good, without so much as a mention of who really owns it, how it will be used, and how that usage can change over time.

This was also evident in the first keynote presentation at ELI, from a non-profit called Digital Promise. The CEO, Karen Cator, talked about the many products they are developing, but it was the Learner Positioning System that got me thinking about these issues. Listening to the level of personalization associated with this tool, I could only imagine the amount of data being collected on the students using it. The presenter made it clear at the beginning that it was the first time she had delivered the talk and that it was a work in progress, but it was hard for me to forgive the absence of any mention of data security and ownership for a project like this. It became just another example of how the conference glorified and fetishized the collection of data without any real critical reflection on what it all means.

Audrey Watters writes about how students have to comply with the collection of their intimate data and don’t even get the choice to opt out. She takes a historical look at how the “big data” of the 1940s was used to identify Jews, Roma, and other ‘undesirables’ so that they could be imprisoned. She writes, “Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying ‘students at risk’ and students who are ‘risks.’”

I am concerned that we are creating a culture of unquestioned data collection – so much so that even those who are supposed to be the smartest people on our campuses about these matters give over their data without question. Professionals return to their campuses from events like ELI with the impression that this level of data surveillance is good and that data collection is normal.

I believe that big data and personalization engines can be extremely “effective” in education, but sometimes it is precisely this “effectiveness” that makes me question them. The word “effective” communicates a kind of shorter path to success; a quicker way to get to an end goal of some kind. However, that end goal could be nefarious or benevolent. None of us like to think that our campuses could use data for iniquitous ends, but often these negative effects come from models being applied in new ways they were not designed for, or emerge later as reflections of unconscious biases.

We saw this last year when the president of Mount St. Mary’s University was let go after speaking in a disparaging way about at-risk students – wanting to get them out of the pipeline within the first few weeks of classes. I’m sympathetic to the point of view that we want to identify at-risk students so that we can help them stay, but in this situation at-risk students were being identified (by a survey developed by the president’s office) specifically so that they could be encouraged to leave.

I think that we should be asking, and getting students to ask, what success looks like and what the end goal is. I don’t feel like that question has really been answered in higher education. It is really hard to think of data collection as something potentially dangerous when it comes from an education company or institution and the end goal is “student success”. Of course we all want our students to be successful, but let’s not forget that these data can be put together in various ways.

Let’s also not forget that we are giving students subtle and not so subtle cues about what is acceptable and what is not. Will our students think of asking questions about ownership, security, and privacy around their data once they graduate if we take and keep their data from them while they are with us? Or will they assume benevolence from everyone who asks them for access?

We need more education in our educational technology. Students are tracked and their data are mined all over the web; often I am reminded that we are not going to be able to change that. However, we could provide transparency while they are with us and get them to start asking questions about what data can be gathered about them, how it can be used, and what impacts that might have on their lives.

Wouldn’t it be wonderful if our professional organizations would help us to demand transparency of our personal data so that we could better imagine the possibilities of how it can be used?

Image Credit Ash –  Playing with Fire – Gifted to Subject 

I would like to thank Amy Collier and Chris Gilliard for providing feedback on an early draft of this post. The two of you always make me think deeper.

What is DigCiz and Why I am Not Marina Abramovic: thoughts on theory and practice

Theory

Alec Couros and Katia Hildebrandt just finished a round of facilitation in the #DigCiz conversation where they challenged us to think about moving away from a personal responsibility model of digital citizenship. In a joint blog post they spend time distinguishing digital citizenship from cybersafety and present Joel Westheimer’s work identifying three different types of citizens, ultimately asking “what kind of (digital) citizen” are we talking about.

Additionally, this week, outside of our #DigCiz hashtag, Josie Fraser blogged about some views on digital citizenship. Here we see Josie, reminiscent of Katia and Alec, making a distinction between digital citizenship and what she identifies as e-safety, but also setting it apart from digital literacy. Josie presents a Venn diagram where digital citizenship is one part of a larger interaction, overlapping with e-safety and digital literacy.

In other DigCiz news, this week a group of us (Sundi and I included) who presented at the annual ELI conference in Houston on digital citizenship in the liberal arts published an EDUCAUSE Review article highlighting four different digital citizenship initiatives inside of our institutions.

All of this is on the heels of our first week of #DigCiz, where Mia Zamora and Bonnie Stewart troubled the idea of digital citizenship. In a post about this, Bonnie artfully lays out the conflict between utopian narratives of the web as a tool for democracy and the realities of what I’m more and more just lumping under Shoshana Zuboff’s concept of Surveillance Capitalism – though you could just call it the general Silicon Valley ethos.

But I want to get back to Katia and Alec’s call to move the conversation beyond personal responsibility. Often, digital citizenship is lumped in with things like digital/information literacy, netiquette, online safety, and a whole host of other concepts. Often these are just variations of issues that existed way before the “digital” but are complicated by the digital.

I’m considering Katia and Alec’s call, reflecting on all of these posts and articles as well as the last year and several months of thinking and conversing about this topic on #DigCiz and I can’t help but feel like we are in the weeds on this concept.

So here it is – my foundational, basic, details-ripped-away, 10,000-foot view of digital citizenship, where things like safety and literacy are part of the model but not the whole thing.

I’ve thought about digital citizenship like this for some time, and Josie’s post reminded me of the idea of representing it as a Venn diagram; though some of the overlaps are messy, I think that is normal.

I really want to focus and drill down on digital citizenship, so I put it in the middle and zoom out from there. The factors that I see at play around digital citizenship are environments and people. In terms of people there is the individual and then others. Since this is “digital” citizenship, they are digital environments and identities. The items in the overlaps are the messy part. This is draft one.

Draft 1 – Autumm’s Digital Citizenship model CC-BY-ND

This is a really broad model, but I think that digital citizenship is a really broad concept and that a narrow model would not do. I think part of the problem we get into, confusing digital citizenship with digital literacy, cybersafety, netiquette, or any number of other similar ideas, has to do with narrowly defined models that do not allow for liminality or overlap.

In theory that is… but that brings me to the second half of this post.

Practice

I hope that the web can still exist as a place for community building, artistic expression, and civic discourse, but I fear that that use is shrinking under the pressure of its use as an advertising and surveillance tool. 

I worry that as we are used and targeted by systems, we have become normalized to the experience of being used and targeted – with the result that using and targeting others does not seem like such a big deal.

 

***

In 1974 performance artist Marina Abramovic produced and performed Rhythm 0.  

I rather like the idea of performance art. Making an artistic statement not through polished practice but rather through the practice of a lived moment.

In Rhythm 0, Abramovic wanted to experiment with giving the public pure access to engage with her actual in-the-flesh self.

She stood for six hours in front of a table with all manner of objects for pleasure and pain with a statement that told the public that they could engage with her however they saw fit.

She was a type of living doll.

Quickly the public forgot that she was a person. She had told them that she was an object, after all. How fast they moved from tickling her with the feathers or kissing her on the cheek to cutting her with the razors. She said she was ready to die for this experiment. She said she took full responsibility. One of the objects was a loaded gun. Someone went as far as to put it in her hand, hold it to her head, and try to make her pull the trigger.

But why? Why when given the chance to engage with her would people choose to harm her of all the choices of things that they could do to her?

What happens when we interact with people? Is it about us or is it about them? Are we seeing people with lives and needs and wants and fears and all the messiness that is human? Or are we seeing an object that we want to interact with… for our sense of good or bad or pain or pleasure?

I’m not sure much has changed since 1974 when Marina Abramovic first performed this piece. I’m not sure if given the choice between tools of violence and tools of peace that the public will choose peace even today.

I’m not Marina Abramovic

#DigCiz is not Rhythm 0

***

 

I think we need to look at ourselves and our communities and ask why we are engaging with each other. Is it out of a selfish need for engagement? Is there a hope for beneficial reciprocation? Is there a concept of consent being considered? 

I think we need to look at our tools and wonder why we are engaging with them and the companies behind them. As they say, if you are not paying, you are probably the product.

Environment shapes identity. Identity shapes others’ identities. I fear that we are shaping each other mindlessly. I fear that we are not just shaping each other but that the predatory environments we use are also shaping us.

I think we start to change by knowing ourselves first and then engaging where we think we will find reciprocation – and by reciprocation I don’t mean comments and I don’t mean replies. I mean really trying to listen to one another and getting to know one another. Caring about how the other may want to engage and not just satisfying some hunger for engagement.

Going Forward

#DigCiz continues next week and I’m hopeful that we will start to explore these nuances of engagement even deeper as Maha Bali and Kate Bowles take the wheel. Keep an eye on #DigCiz on key social media outlets and digciz.org

Image credit CC0 Dimitris Doukas free on Pixabay

I’d also like to thank Sundi Richard, Maha Bali, and Mia Zamora for looking at a very early draft of this piece and giving much needed feedback. You each help me be better every day – thank you.

Thoughts on Sync Video Conversations

It has been a while since I’ve posted – I’ve been dealing with a bit of writer’s block. Theoretically, politically, philosophically my mind is racing too fast and I can’t seem to get any of it down. To try to break the block I’m going to set it all aside and go for a simple technical post.

Through my work with Virtually Connecting I do a lot of synchronous video calls, so I’ve developed some thoughts on formats, techniques, and technologies. Be warned, though – I will NOT be advocating, admonishing, reviewing, or even mentioning by name any specific brand of sync video technology in this post – cause that is how I roll. I see more and more people doing sync video conversations, so I’m hoping this will be helpful or will start further conversation.

Why Sync Video

Sync video is not always the answer. You have little time for reflection in sync conversations, so you need to be more “on”. There is something more vulnerable about sync video than other kinds of technologically mediated communication. Still, as with most things, opening up to that vulnerability brings the possibility of a rich experience. Seeing facial expressions instead of emojis, hearing laughter rather than just a lol; this is the kind of immediate feedback that we should be looking to give, rather than numbers on a dashboard.

Use sync video when you want to have a conversation.

Silence is Golden?

I’ve yet to work with any system that I really felt was 100% synchronous. There is always a little bit of lag. You should know this going in and be okay with it. There might be times where no one is talking because they are still listening to what you just said – because it is taking longer for the audio to reach them. These days it is really much better than in the past, and the lag should not be more than a couple of seconds even on a day when you have a bad connection for whatever reason. However, if it gets too bad you may have to look for other options.

The problem is that those silences do not feel good. They break up the flow and make things seem off. Even in face-to-face conversations we have all had awkward silences; the technology cannot help with those. If you are trained in media studies the feeling is going to be even worse for you, because you have been conditioned to avoid “dead air”. If you are broadcasting or recording your conversation it can be worse still, and you may even face the opposite problem, where everyone is so afraid of dead air that they step on one another trying to avoid it. You don’t have to fear dead air. You can warn your participants about lag before the conversation if they are new to sync video and let them know that natural pauses for reflection and listening are normal and okay.

At the same time, there are moments where hospitality can avoid awkward dead air that is not needed. The most obvious for me is when you have a large group and you ask them to do introductions. They don’t know what order to go in. Sync video gets strange with two people starting at the same time. No one wants to go first and no one wants to step on anyone else, so everyone just sort of sits there looking at one another. Here is where I think you need someone to host. Someone needs to call on people and invite them to introduce themselves. If everyone knows one another really well maybe you don’t need this, but it is still nice. I have tried to organize the order beforehand, or even in the moment, and communicate it out to everyone, but it just never really works for me. People always forget the order, they are not looking at the chat where I posted the order, or they think the order is different than what I said. It just seems so much nicer to me to invite people one by one to introduce themselves.

Audio is Often More Important than Video

When you first start doing sync video you might think it is the video that makes it a rich conversation. Well, yes and no. I have a sync video project right now where students will be paying particular attention to nonverbal cues while someone is telling a story. It is a listening exercise. Here video is going to be key. In the past I’ve done some work with the deaf community around sync video – again, video is key. Is video really needed for your conversation?

If it is not really needed I would still argue that it can make for a richer conversation; so I’m not saying don’t do video unless you absolutely need it. However, technical issues can make it so that you need to back off. Video takes a lot of bandwidth and if you are in a lower bandwidth situation turning off your video can free things up so that you can continue what is important – the conversation.

And the one everyone forgets – I bet you have a phone. If technical issues around video are too much, call the person. One of my favorite stories is when I was set to have a sync video call with Bob Cole and Joe Antonioli, and as the time neared I happened to be backchanneling with Sean Michael Morris when it became clear that it would be great to have him on the call. But Sean was in a coffee shop with really limited wifi. We tried, but it was a no-go. So I was like “hey, you have your phone, don’t you?”… I had to put Sean on speaker and hold the phone up to my mic when he wanted to speak, but you know what – we had that conversation.

Seeing the Person(s)

I think it has something to do with the immediacy of it. And yes, it is visual, but so much of it is the audio in sync with the visual. I think some of it is tactile too – the vibrations someone makes reach my ears in almost real time, or I see someone throw their hair to the side and get some sense of its texture.

Different platforms treat visuals differently, and video may be only one element. Some try to be everything to everyone and break the screen up into boxes: a box for video, a box for sharing documents, a box for chat, a box for listing the participants, a box for drawing on a whiteboard… If you are working with something like this, take time to think about the environment and what is most important. If the conversation is most important, make the video the big box and put it in the center. In that case, look at your lighting and make sure you don’t have a light source, like a window, behind you. Think about your physical environment and your background: tidy up, or even consider things that might be nice to have in the background.

When you are having a conversation with more than one person, different platforms take different approaches. Some privilege the speaker, showing a large video of one camera at a time with smaller thumbnails of everyone else along the bottom; the person who appears on the large part of the screen is determined by the audio – whoever has the mic’s attention has the floor. Other times the video is controlled by a host. Still other platform layouts shrink everyone’s video as members join the call, giving equal screen real estate to all. Which is best? What kind of conversation do you want to have?

I’ll be honest, I’m biased toward the speaker getting the camera. Listening is done with more than the ears, and if someone is speaking I want to see them. I get that it visually minimizes everyone else, and I’m sympathetic to the argument for equality made by those who prefer the layout that shares the space with all. Using that layout I have felt that everyone is present and no one is any larger than anyone else, but I’ve also been distracted by someone who is not speaking fidgeting or something. I like the ability to switch back and forth between these layouts because, to listen best, I like to see.

So, Why Sync Video Again?

You know what I can’t stand – webinars. Why make people come together to listen to you if you are not going to listen to them? Just record the thing and let them listen on their own terms. For me, sync video is about a complex interplay of the communication senses working together to further understanding, and ultimately it is about conversation. It is about seeing but also being seen, about talking but also listening, and about being vulnerable and being brave. It is not for everyone and it is not for every situation. There are many options and there are still many barriers; for me it is about learning and playing with the possibilities afforded through tech, design, and delivery.

Image Credit: Me, Electric Lavender 3, CC0 

Going Clear: A Call for Conversation around the Ethics of Mobile Live Streaming from #OER17 and Other Events

Last year I read Dave Eggers’ The Circle, a dystopian novel about technology, social media, and society gone wrong. By the end of the novel the lead character is engaging in a process called “going clear,” live streaming her entire life save sleeping and going to the bathroom. It is pretty creepy. I’m not ready to go clear, but I do like to live stream from my phone when I’m attending conferences, and I want to evaluate my usage because I think this technology can be overreaching and even dangerous. I’m interested in the ethical pursuit of technology in my work, so I think about this stuff a lot.

My work with Virtually Connecting has made me sensitive to things like making sure, when we go live in a public hallway, that we have a wall behind us and not a walkthrough, so that people passing by do not get caught in the livestream. But it is about more than visuals – a live stream can also pick up background conversations and other audio that others may not want out there.

There have been calls for me to live stream from #OER17 and I’m thrilled to do it. Maha asked Martin Hawksey if a personal mobile livestream would be okay, and he said he had concerns about the wifi bandwidth but that as long as presenters were okay with it, it was fine. I have a good data plan that I can use in the UK, so I think I can still do it without bogging down the conference wifi. Battery life is more of a concern to me than bandwidth at this point.

But I’m wondering about the ethical concerns of live streaming not just conference sessions but also social time.

Consent is huge for me. During a scheduled conference session I can ask the presenter but I think I might also ask the presenter to make an announcement to the audience that the stream is happening.

I’ve had wonderful experiences live streaming social situations – great circumstances where it was just a lot of fun for everyone involved, and people watching have told me that they love it and that it makes them feel like they are really a part of the conference. However, consent can be tricky with a group of people. I’ve had situations where, even though I announced that I was going live, not everyone in the group caught it. And if we are in public, what about strangers who might wander into my shot?

There is also something important about addressing the camera – talking to an audience even if no one is currently watching. It is a hospitality thing, but for me it is really an ethical concern about background audio. There are lots of beautiful visual elements that can be fun to live stream without putting my face in the camera and saying “hey, look at this”. But to just start up a live stream without directing the audience’s audible experience means that you are publicizing the ambient sounds, which I’m fine with if it is birds chirping or street music but a little concerned about when it is background conversations.

What else should I be thinking about as I consider live streaming from #OER17? Much of this is an experiment for me, but how do I do it ethically? What technology should I use? What is the best way to confirm consent? How do I empower people to say no? Has anyone else done this from a conference? What ethical concerns do you have? Live streaming seems like a great tool for open education, but what are the ethical implications?