#DigCiz Week 4 – Big Data Big Dreams: waking up about data collection in edtech

It is week 4 of #DigCiz, and Kristen Eshleman and Bill Fitzgerald are leading us in a week of discussion around data security and the part that higher education institutions play. In the prompt, Kristen questions the context of EDUCAUSE's top 10 issues in IT and information security's place in the #1 slot, saying "when you read the description from this list, it's pretty clear that our membership views information security policy not in the service of the individual digital citizen, but in the service of institutional IT systems." Kristen states that though security breaches may be costly, higher education institutions are not in the business of data security (we are in the business of educating students) and goes on to say "we may be able to address the needs of institutions and individuals more effectively if we reframe the conversation from the lens of digital citizenship."

This really spoke to me in terms of how our professional organizations frame things to us as professionals. Often training and development from professional organizations is the way that many of us stay abreast of changes in the field. How professional organizations choose to frame these issues shapes how we bring these issues back to our institutions.

In response to the prompt, Kristen and Bill held a synchronous video call, and this concern came up again from Chris Gilliard and Amy Collier.

All of this reminded me of something I wrote several months ago about my attendance at the ELI National conference that at the time I'd decided not to publish. I was questioning the framing around the professional development I was getting, and now, after hearing other colleagues' similar concerns, it just feels so relevant that I can't hold back.

I want to say that I felt really blessed to attend the conference and to present on digital citizenship but because of various experiences, which I will outline, I am now asking questions about what educational technology is for and why we are doing this.

"It is not the job of digital pedagogues—or digital aficionados, or digital humanists, or educational technologists, or instructional designers—to force people to go digital. When we make it our mission to convert non-digital folks to our digital purpose, we will not only very likely alienate these valuable colleagues, but we'll also miss the mark of our true intention: to support learning and scholarship within institutions that, in our heart of hearts, we adore." – Sean Michael Morris

If the focus of edtech is simply to implement technology for the sake of technology are we not vulnerable to the money and power that is backing those solutions? I’m negotiating ideas around how we are influenced in environments of professional development in edtech and what our responsibilities are as professionals, educators, and citizens. It seems to me of critical importance to be aware of how we are influenced in the environments where we place ourselves. I’m contemplating how we bring these experiences back to our institutions and how we influence our campus communities after attending them.

But anyway – onto the lived experience part:

At ELI

Having come from a more traditional IT background and then moving to an academic technology environment I was excited to attend the EDUCAUSE ELI conference. I’d always been told that the big EDUCAUSE main conference, which I have attended many times, was for that traditional IT audience but that ELI was more focused on educators.

While registering for the conference I was surprised to find that I had been automatically opted into being geographically tracked using beacons while I was onsite in Houston at the conference. Mind you, I was opted in by default – I had to specifically indicate that I did not want my physical location tracked. I chose to opt out because I didn't really understand what exactly it all entailed, but I can imagine.

I would imagine this tracking means EDUCAUSE (or ELI as the case might be) knows where I spend my time at the conference. What vendor booths and sessions I attended. If I took a lunch at the conference or if I went out. How much time I might have spent in the hallway. Maybe even which of my colleagues, who are also being tracked, that I’m spending time with while I’m at the conference. 

There are just some key questions that I could not find answers to – these are increasingly the same questions that I keep having with all of these data collection tools, be it facebook and google or educational systems:

  • Do I get access to my data?
  • Who exactly owns these data?
  • Are these data for sale?
  • Could these data be turned over to government agencies – raw or analyzed?
  • Do vendors get access to my data – raw or analyzed?
  • Do I get access to the algorithms that will be applied to my data?
  • Is someone going to explain those algorithms to me – cause I’m not a data scientist.
  • Are the data anonymized?
  • Are these data used only in aggregate or can they be used to target me specifically?
  • How long will these data be retained? – Will they be tracked over time either in aggregate and/or individually?
  • Who has access to these data?

Once I arrived on site I found many participants who had these extra little plastic tabs stuck to their name badges and quickly found out that these were the tracking tags. In several of the session rooms and around the conference in other areas I found mid-sized plastic boxes with handles under chairs and in corners with the name of the beacon company on them.

I don’t remember information that could have answered any of the questions I listed above being provided during registration. I did not seek out anyone organizing ELI about this or anyone representing the vendor. However, while I was onsite at ELI this started to bother me enough that I asked plenty of the participants at the conference these kinds of questions. While I mostly got into very interesting conversations, I did not find anyone who could answer those questions for me.

So What?

This bothers me because if educational technology professionals are giving over their data at professional development events geared toward educating us about innovations in educational technology, shouldn’t we be able to answer those questions? Why do so many of us assume benevolence and hand our data over without having those answers?

Many of us might think that we know the entities to whom we are giving our data away, but even if we think it is a trusted professional organization, companies and organizations are changing all of the time, switching out leadership and missions. Throw in the possibility of the data being sold and we have no idea what is going on with our data.

After attending larger conferences I have felt targeted by vendors, and I have heard horror stories from other female colleagues (who actually have purchasing power) about the lengths vendors will go to in order to get a closed-door meeting. I can imagine scenarios where my data is used to the benefit of vendors over my own benefit or that of my institution.

When our professional organizations do not prompt us to think critically about data collection, and when we are automatically opted into turning over our own data without question, it is no wonder we don’t question taking students’ data without informing them. Those who are teaching us about data collection convey that this is normal, and we pass that on to our institutions.

ELI is not alone in this, of course; it happens with most of the professional organizations with corporate sponsorship and with most of the corporate digital tools used for education and social interactions. However, I’m concerned when one of the major professional organizations in my field is perpetuating this normalization of data surveillance in a time when we are seeing the rights of our most vulnerable students threatened. Yet I continue to see a proliferation of this mindset that more data is always good, without so much as a mention of who really owns it, how it will be used, and how that usage can change over time.

This was also evident with the first keynote presentation at ELI from a non-profit called Digital Promise. The CEO, Karen Cator, talked about the many products they are developing, but it was the Learner Positioning System that got me thinking about these issues. Listening to the level of personalization associated with this tool, I could only imagine the amount of data being collected on students who were using it. The presenter made it clear at the beginning that it was the first time she had delivered the talk and that it was a work in progress, but it was hard for me to forgive the lack of any mention of data security and ownership around a project like this. It became just another example of how the conference was glorifying and fetishizing the collection of data without any real critical reflection on what it all means.

Audrey Watters writes about how students have to comply with the collection of their intimate data and that they don’t even get the choice to opt out. She takes a historical look at how “big data” of the 1940s was used to identify Jews, Roma, and other ‘undesirables’ so that they could be imprisoned. She writes “Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying ‘students at risk’ and students who are ‘risks.’”

I am concerned that we are creating a culture of unquestioned data collection so much so that even those who are supposed to be the smartest people on our campuses about these matters give over their data without question. Professionals return to their campuses from events like ELI with an impression that this level of data surveillance is always good without question and that data collection is normal.

I believe that big data and personalization engines can be extremely “effective” in education, but sometimes it is precisely this “effectiveness” that makes me question them. The word “effective” communicates a kind of shorter path to success; a quicker way to get to an end goal of some kind. However, the value of that end goal could be nefarious or benevolent. None of us like to think that our campuses could use data for iniquitous ends, but often these negative effects come from models being applied in new ways they were not designed for, or emerge later as reflections of unconscious biases.

We saw this last year when the president of Mount St. Mary’s University was let go after speaking in a disparaging way about at-risk students – wanting to get them out of the pipeline within the first few weeks of classes. I’m sympathetic to the point of view that we want to identify at-risk students so that we can help them stay, but in this situation at-risk students were being identified (by a survey developed by the president’s office) specifically so that they could be encouraged to leave.

I think that we should be asking, and getting students to ask, what success looks like and what the end goal is. I don’t feel like that question has really been answered in higher education. It is really hard to think of data collection as something potentially dangerous when it is an education company or institution and the end goal is “student success”. Of course we all want our students to be successful, but let’s not forget that these data can be put together in various ways.

Let’s also not forget that we are giving students subtle and not so subtle cues about what is acceptable and what is not. Will our students think of asking questions about ownership, security, and privacy around their data once they graduate if we take and keep their data from them while they are with us? Or will they assume benevolence from everyone who asks them for access?

We need more education in our educational technology. Students are tracked and their data are mined all over the web; often I am reminded that we are not going to be able to change that. However, we could provide transparency while they are with us and get them to start asking questions about what data can be gathered about them, how it can be used, and what impacts that might have on their lives.

Wouldn’t it be wonderful if our professional organizations would help us to demand transparency of our personal data so that we could better imagine the possibilities of how it can be used?

Image Credit Ash –  Playing with Fire – Gifted to Subject 

I would like to thank Amy Collier and Chris Gilliard for providing feedback on an early draft of this post. The two of you always make me think deeper.

What is DigCiz and Why I am Not Marina Abramovic: thoughts on theory and practice

Theory

Alec Couros and Katia Hildebrandt just finished a round of facilitation in the #DigCiz conversation where they challenged us to think about moving away from a personal responsibility model of digital citizenship. In a joint blog post they spend time distinguishing digital citizenship from cybersafety and present Joel Westheimer’s work identifying three different types of citizens, to ultimately ask “what kind of (digital) citizen” we are talking about.

Additionally, this week, outside of our #DigCiz hashtag, Josie Fraser blogged about some views around digital citizenship. Here we see Josie, reminiscent of Katia and Alec, making a distinction between digital citizenship and what she identifies as e-safety, but also setting it apart from digital literacy. Josie presents a Venn diagram where digital citizenship is one part of a larger interaction overlapping with e-safety and digital literacy.

In other DigCiz news, this week a group of us (Sundi and I included) who presented at the annual ELI conference in Houston on digital citizenship in the liberal arts published an EDUCAUSE Review article highlighting four different digital citizenship initiatives inside of our institutions.

All of this is on the heels of our first week of #DigCiz, where Mia Zamora and Bonnie Stewart troubled the idea of digital citizenship. In a post about this, Bonnie artfully lays out the conflict between utopian narratives of the web as a tool for democracy and the realities of what I’m more and more just lumping under Shoshana Zuboff’s concept of Surveillance Capitalism, though you could just say it is the general Silicon Valley ethos.

But I want to get back to Katia and Alec’s call to move the conversation beyond personal responsibility. Often, digital citizenship is lumped in with things like digital/information literacy, netiquette, online safety, and a whole host of other concepts. Often these are just variations of issues that existed way before the “digital” but are complicated by the digital.

I’m considering Katia and Alec’s call, reflecting on all of these posts and articles as well as the last year and several months of thinking and conversing about this topic on #DigCiz, and I can’t help but feel like we are in the weeds on this concept.

So here it is – my foundational, basic, details-ripped-away, 10,000-foot view of digital citizenship, where things like safety and literacy are part of the model but not the whole thing.

I’ve thought about digital citizenship like this for some time, and Josie’s post reminded me of the idea of representing it as a Venn diagram; though some of the overlaps are messy, I think that is normal.

I really want to focus and drill down on digital citizenship, so I put it in the middle and zoom out from there. The factors that I see at play around digital citizenship are environments and people. In terms of people there is the individual and then others. Since this is “digital” citizenship, they are digital environments and identities. The items in the overlaps are the messy part. This is draft one.

Draft 1 – Autumm’s Digital Citizenship model CC-BY-ND

This is a really broad model but I think that digital citizenship is a really broad concept and that a narrow model would not do. I think part of the problem that we get into with confusing digital citizenship with digital literacy, cybersafety, netiquette or any other number of similar ideas has to do with narrowly defined models that do not allow for liminality or overlap.

In theory that is… but that brings me to the second half of this post.

Practice

I hope that the web can still exist as a place for community building, artistic expression, and civic discourse, but I fear that those uses are shrinking under the pressure of its use as an advertising and surveillance tool.

I worry that, as we are used and targeted by systems, we have been normalized to the experience of being used and targeted – resulting in us feeling that using and targeting others is not such a big deal.


***

In 1974 performance artist Marina Abramovic produced and performed Rhythm 0.  

I rather like the idea of performance art. Making an artistic statement not through polished practice but rather through the practice of a lived moment.

In Rhythm 0, Abramovic wanted to experiment with giving the public pure access to engage with her actual in-the-flesh self.

She stood for six hours in front of a table with all manner of objects for pleasure and pain with a statement that told the public that they could engage with her however they saw fit.

She was a type of living doll.

Quickly the public forgot that she was a person. She had told them that she was an object, after all. So fast they moved from tickling her with the feathers or kissing her on the cheek to cutting her with the razors. She said she was ready to die for this experiment. She said she took full responsibility. One of the objects was a loaded gun. Someone went as far as to put it in her hand, hold it to her head, and see if they could make her pull the trigger.

But why? Why when given the chance to engage with her would people choose to harm her of all the choices of things that they could do to her?

What happens when we interact with people? Is it about us or is it about them? Are we seeing people with lives and needs and wants and fears and all the messy that is human? Or are we seeing an object that we want to interact with… for our sense of good or bad or pain or pleasure?

I’m not sure much has changed since 1974 when Marina Abramovic first performed this piece. I’m not sure if given the choice between tools of violence and tools of peace that the public will choose peace even today.

I’m not Marina Abramovic

#DigCiz is not Rhythm 0

***


I think we need to look at ourselves and our communities and ask why we are engaging with each other. Is it out of a selfish need for engagement? Is there a hope for beneficial reciprocation? Is there a concept of consent being considered? 

I think we need to look at our tools and wonder why we are engaging with them and the companies behind them. As they say, if you are not paying, you are probably the product.

Environment shapes identity. Identity shapes others’ identities. I fear that we are shaping each other mindlessly. I fear that we are not just shaping each other but that the predatory environments we use are additionally shaping us.

I think we start to change by knowing ourselves first and then engaging where we think we will find reciprocation – and by reciprocation I don’t mean comments and I don’t mean replies. I mean really trying to listen to one another and getting to know one another. Caring about how we think the other may want to engage and not just satisfying some hunger for engagement.

Going Forward

#DigCiz continues next week and I’m hopeful that we will start to explore these nuances of engagement even deeper as Maha Bali and Kate Bowles take the wheel. Keep an eye on #DigCiz on key social media outlets and digciz.org

Image credit CC0 Dimitris Doukas free on Pixabay

I’d also like to thank Sundi Richard, Maha Bali, and Mia Zamora for looking at a very early draft of this piece and giving much needed feedback. You each help me be better every day – thank you.

Associative Trails Around DigCiz, Fake News, and Microtargeting

Microtargeting: A Digital Citizen’s Perspective

I started writing this post about fake news and microtargeting a few days ago, and then I was reminded that #OpenLearning17 was talking about Vannevar Bush’s As We May Think this week. I began to see how they might relate. It made this post even longer, but I think it was worth it.

Some background if you don’t know: Bush’s article was written in 1945 as the war was ending. He was the Director of Scientific Research and Development during this time so he was all about applying science in warfare. In the article he is envisioning where scientists will put their energies as the war is ending.

Now, as peace approaches, one asks where they [scientists] will find objectives worthy of their best.

The article focuses on the connections we make when we build knowledge. How we associate past discoveries with current ones and tie things together. Bush advocates using technology to track the connections that we make in this process to extend memory for better reflection on those connections. Many credit this article with predicting the Internet.

He uses this term “associative trails” to describe indexing knowledge based on connections that we define. He thinks this is more powerful than typical kinds of indexing like sorting by number or alphabetizing. But I note that this is a much more personalized kind of indexing.

He is advocating for metacognition, that is, realizing what you are thinking and where your trails lie so you can better understand what you are researching, yes, but more importantly your own thought processes. What I am wondering about is what happens when you get the technology part but you leave out the metacognitive part? Bush does not seem to consider this option but I think this is often the world that we live in today.

When I start thinking about fake news and microtargeting I have to ask what if a person does not have access to their associative trails? What if they don’t even realize they are leaving a trail? What if they think that their trail is not so important? What if someone’s trail could be bought and sold? What does the record of all our connections say about us and can it be used in ways that might be exploitive?

I’m not a data scientist. I’m not a journalist. I’m not a librarian.

I am a technologist. I am an educator. I am a person. A person who lives some of her life on the web. I want to say a lot of her life on the web…. But “a lot” is a relative term.

Often it is journalists and librarians that tackle the fake news topic. I think that both of these groups add an important perspective to the conversation but I also think that there is the perspective of a digital citizen and those that advocate for such concepts; the perspective of someone using the web as a place of expression, a place to learn, and to be heard and to listen to others.

What is microtargeting?

When I bring the idea of microtargeting up I’ll start with something like “well you know they track a lot of your data from the internet to try to influence you” and most often, before I can continue, I hear “oh yes of course I know that”. Then there is the inevitable story of shopping for an item on one site and then continuing to see ads for it on other sites. But that is rather mild and not really what concerns me.

I’m not just talking about the machine realizing that you were looking at a product on another site or that you clicked on something from your email – that is cookies and web beacons; that is rudimentary stuff.

I’m talking about gathering thousands of data points, combining them, and analyzing them. Everything from shopping history to facebook likes and what church you attend can be gathered and combined with traditional demographics to create a “personalized experience” meant to influence you with emotional and psychological messaging.

The big story around microtargeting right now has to do with a little company called Cambridge Analytica (CA) in London. They are the big story because they’ve had well-known wins with customers like the Brexit Leave and Donald Trump campaigns.

In this eleven-minute video from the Concordia Summit, their CEO Alexander Nix explains how they work. In the video Nix explains that demographic and geographic information is child’s play; that the idea of all people from one demographic getting the same message – “all women because of their gender, all African Americans because of their race, all old people because of their age” – is ridiculous. Those things are of course important, but they are only part of the picture; psychographics are a much more complete picture because then you are targeting for personality.

The big shocker, where people feel a little creeped out, is when they learn that CA uses those silly little facebook quizzes (you know, the ones that you click the “connect to facebook” button on before you are allowed to take them) to profile your personality. What! Those quizzes are not just there for free for you to have fun with… as they say: if the service is free, consider that you might be the product.

As we may forget

CA is not the only one doing this; they are just the popular story right now, and the quizzing is only part of things. For me the big part is that connection to facebook, which can give the owner of the quiz (be it CA or some other company) access to all of your account information, your likes, your posts, and often much of your friends’ information. Of course, much of your personal and consumer data can be purchased, so throw that into the mix. Imagine aligning all of this data for a person. It is a lot. Often people don’t even realize what they are giving away.

You authorize the connection so that you can take the quiz or play the game or whatever and then it is over for you – you have had your fun and you move on. But the app still has that connection to your account and will continue to unless you go in and specifically delete it. This means that it can continue to gather data. Apps will vary of course and I can’t speak for any specific one but I know that all of you are reading the terms of service of each app before you connect it – right?

In this case the user is continuing to make associative trails on facebook through friending and liking. However, they are not using those trails for metacognition. They are not using technology to extend their memory so that they may better reflect on the connections that they are making. Instead they plow forward, forgetting many of the connections and the fact that they have authorized someone/something else to access and track their connection trails. The trails are being harvested by an outside entity, and the user, more than likely, has no idea who that entity is – did I mention that they could change the terms of service, the name, or the nature of the app at any moment?

But how much can someone really do with all that data?

I have seen the data scientist folks that I follow look at the CA story a little sideways, and it seems every day there is a new article downplaying the impact CA had on the Trump and Brexit campaigns. Interestingly, though, not too many are saying that the idea behind this – using big data and psychographics to personalize experiences – is invalid. Just that CA might be more hype than payoff.

This much more comprehensive story in Motherboard about the origins of CA states that Cambridge is not releasing any empirical evidence on how much or how little they are affecting the outcomes of campaigns. And though CA is more than happy to tout their wins as proof of their effectiveness, I’ve yet to see anything about their losses – which is a classic vendor ploy.

In this recent Bloomberg article, Cathy O’Neil (the Math Babe) points out that what Trump was doing during the campaign is not uncommon and that the Hillary campaign was also doing it – and that U.S. companies have been tracking personality for decades. O’Neil points out that “To be sure, there’s plenty to be appalled at in this story…. It is just not specific to Trump”. She states that Hillary had access to more data than Trump because she had access to Obama’s archive of data from the previous elections.

But then I think about Bush. As We May Think considers information storage, and to be sure the amount of data is important. However, I think the real meat is in the connections. It is here that I have a hunch that having the right context, or being able to see the right connections, could be more powerful than having more data – well, at least if we are talking about the difference between a lot of data and a whole heck of a lot of data. Did I mention that I am not a data scientist?

Paul Olivier Dehaye has written about how CA was targeting “low information voters” for the Trump campaign. This article hypothesizes that CA used data (citing CA’s claim to have 5,000 data points for every adult American) to specifically look for voters who had a low “need for cognition” for microtargeted political advertising. These are the type of folks who would be more likely to not dig too deep or question stories that were presented to them. These folks are not doing a lot of metacognition. I don’t blame them for this, but I’ll get to that in a bit.

What is real and how can we tell?

As I remember it, when the term fake news first started being thrown around during the campaign, it was largely being used to describe sites that were not run by major news organizations or even particular journalists, but rather individuals who knew how to buy a domain name and hosting and throw up a WordPress site, and who were only interested in click revenue. They would come up with crazy stories and even crazier headlines just to get people to click. As these started to be called out as “fake news”, some began to create lists of these sites and place parody and satire sites alongside them.

But then it got more challenging with accusations that major news sources were in fact fake news and that we could tap into “alternative facts” to get to the truth.

Journalists receive training to be sensitive to bias and context and to not let it interfere with their reporting, so they should be more prepared to consider context and fight against bias, especially their own. However, you will never be able to completely remove bias and context; much of it can be hidden and not realized till later. It is here that education is asked to step in and create critical citizens who will hold journalists responsible for what they report and it is here that we see the calls for greater digital and information literacy in regards to fake news.

Fake news, microtargeting, and digital citizenship

Bush envisioned people using technology to extend their memory to be more metacognitive about the connections they were making while they were building knowledge. These seem like rather “high cognition” kind of folks to me but what about those “low cognition” kind of people that Dehaye thinks CA could be after? Who are they?

I mean, I’ll admit that I’m guilty myself. I don’t read every terms of service for every new app I download. I have forgotten that I’d given access to some app, only later to find it hanging out in my facebook or accessing the geolocation of my phone. But I think that it is really some of the most vulnerable among us who are at risk here.

What if you work 40/50 hours a week and care for children, parents, or grandparents? What if you have a disability or illness to manage? What if you grew up surrounded by technology and this kind of technology usage is your normal? Do you have time to build all of those literacies? 

Building critical literacies around information and digital technologies takes time. It requires more than just a list of which websites are fake, which are satire, and which are backed by trained journalists. It requires more than a diagram of which news sources lean in which direction politically.

You need the ability to look critically for the nuance of things that could be off. For instance, a .com.co address is different from a .com address. Kin Lane talks about “domain literacy” and goes much deeper than this basic understanding of domains, but I hope you see what I mean. We need to read the article and ask whether it is really reporting firsthand or reporting on reporting; as Mike Caulfield points out, the first step in fact checking is not evaluating the source but determining who the source actually is!
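As a toy illustration of that domain nuance (my own sketch, not anything from Caulfield or Lane), a few lines of Python show why a .com.co address is not the .com site it resembles. The naive “last two labels” split here only stands in for a real Public Suffix List lookup (e.g. the tldextract package), but it makes the point:

```python
from urllib.parse import urlparse

def naive_registered_domain(url):
    """Naively take the last two labels of the hostname.
    Real code should consult the Public Suffix List; this
    is only to show where the registered domain lives."""
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# A ".com.co" address only *looks* like a ".com" address:
print(naive_registered_domain("http://abcnews.com.co/story"))   # com.co
print(naive_registered_domain("https://abcnews.go.com/story"))  # go.com
```

The two results land under entirely different registries, which is exactly the kind of detail a quick glance at a shared link will miss.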

Once you determine the true source you need to evaluate it – who wrote this, what are their political leanings, are they backed by other influences (like money) somewhere? You should click on the article’s links and/or look at its sources and read those articles to get context before you make a definitive decision about its worth. All of this takes access, and knowledge, and constant practice.

Maha Bali writes about how fake news is not our real problem. She points out how fake news is good for critical thinking and states that we need more than just a cognitive approach; what we really need is cross-cultural dialog, learning, and skills. This is where education and community need to step up to the plate.

It seems like a lot and for me it is a call for better general and liberal education. I think the first step may just be in realizing (and getting students to realize) that my internet is different from your internet. Where possible, taking ownership for our own “associative trails” and demanding that ownership when it is kept from us. Finally, simply realizing that there are political forces and companies with lots of your data… which has always been the case but maybe realizing that they are trying to influence you in increasingly intimate ways.

This article (images and words) is CC-BY Autumm Caines

#MyDigCiz as Critical Experimentation in Opposition to Best Practice: Self-Reflection After #DigPed PEI or why I thought you might care about my soup

It was 2007, and I was just finishing up my BS degree in communication technology when I received a Google Alert on my name one day. Honestly, I had felt a little vain when I set it up, but I saw how it could be helpful, especially considering the uniqueness of my name.

Someone had written a blog post and mentioned me!

I didn’t have a blog and really didn’t know any bloggers so this seemed really strange. I discovered that the post was about Twitter as a tool and explored how people were using it. I was on Twitter. My supervisor at the time had said that it was something that I should check out and so I created an account and started tinkering with it a few months prior.

The blog post was a rant comparing the best ways people were using Twitter with how they should not use it. Two accounts were highlighted and an example was made out of them – yours truly was the prime example of how NOT to use Twitter.

I remember being pretty mortified. I think I killed my account for a while, and I think I changed my name when I eventually reactivated it. After some time I finally got a little mad. I mean, I was a student. I was new to this online platform and so was everyone else; it had only been around for a little over a year. The whole thing was a big experiment as far as I was concerned. So I tweeted some boring stuff. There are worse things in the world. I ended up tweeting a link to the offending post stating “I guess no one cares about my soup”.

I’d like to think that I’ve come a long way in my use of Twitter. But I still use experimentation in dealing with new tools, and I’m sure that I’m not using tools as intended at any given time depending on the context. But I don’t think that I should stop doing that. I may look silly sometimes and I may come off the wrong way, but I learn a lot in doing it, and then I write posts like this one sharing what I’ve learned… and I think that is valuable. I’m not just experimenting in a vacuum. I am thinking about context, I am thinking about different vantage points, and I am thinking about how my uses impact others. But I am experimenting.

Still, I’m prone to getting sucked in by that voice of authority stating the right way to use tools. It is a strange dichotomy. This post is largely about me trying to work that out.

Right now there are a ton of things converging and diverging in my world. I’m just back from the #DigPed PEI conference, where I took the digital literacies track, but it is also the last two weeks of #DigCiz, and Maha Bali has charged us all to define what digital citizenship looks like for us and on whose terms we are encouraging it, using the tag #MyDigCiz. These two things together have me taking a hard look at what I am doing and questioning some of my practices in terms of being a person in the flesh and on the internet. I’m realizing that #MyDigCiz has a lot to do with critical self-reflection and continually trying to understand connections. At the same time I’m realizing that experiments are risky, and not just to myself but to others.

I mean, I am luckier than most. I have a self-reflective nature and a community of scholars that helps me to build digital literacies and consider multiple contexts regularly; it’s called Virtually Connecting. As a community we are in almost constant dialog about what is ethical and what is not. How can we elevate voices that don’t get heard? What is working technically, and how can we adjust environments for better connections? We are thinking about what is happening in the background when we go live and record. Who might walk into the frame, and do they want to be live on the internet? It’s not perfect. We too are experimenting and learning. But we are also thinking critically, adjusting, and persisting.

You’d think this stuff was old hat for me. But it is not. I’m constantly readjusting.

One of my favorite moments at #DigPed PEI was the Twitter chat. I didn’t do much tweeting. Those of us who were more experienced at Twitter grouped up and gathered in the big open room – the Market Square. There were these loungy couches around the perimeter, but some of us gathered in the middle and began a verbal, in-the-flesh conversation/online Twitter chat. I loved this moment so much because it ended up being a great liminal space. Those of us who gathered in the center of the room took time to talk but also time to read Twitter and to tweet. There was tons of “dead air” interspersed with bits of verbal conversation. It wasn’t a show or a presentation, there was no front of the room; it was a conversation among people in the flesh who were on the internet at the same time. It was beautiful for someone like me. People jumped in from that outer ring from time to time while others were just quietly on their computers. The verbal conversation was a great mix of people with varying levels of experience in terms of presenting/attending conferences, but like I said most were pretty established with using Twitter and other forms of social media. Out of this conversation, a few key questions (particular thanks to Audrey Watters) have led me to remember how I’ve developed certain methods around tweeting but also helped me to question some of my approaches as well.

I share a fair amount on the web. Not as much as many on Twitter but more than most in the world. I often filter other people through me via my tweets and I’m sure I deviate from their intended meanings – I am my own person after all. I live tweet many keynote speakers and session presenters. After doing this for a while and being self-reflective about it, I realized that there was a lot to be said for context. Hearing some snippet of what a speaker has said out of context can convey a completely different meaning. Then I have to ask where my interpretation of what a speaker has said starts and where what they actually said (or what they meant in a particular context) ends – and how does the random person who encounters that tweet perceive it? What responsibilities do I have to the content, the connections, and the speaker? What if I hear something wrong and share something that creates confusion? What if I start a conversation between some people that are going to hate one another? What if I say something in public that will hurt someone? If any of this happens how do I atone for it? Will I even realize it?

Upon this realization I remember making a conscious choice to stop using quotes, for the most part, in these kinds of tweets. I did this on purpose. My point in not using quotes is that I’m taking some responsibility for the content of that tweet. I don’t want there to be a perception that I’m quoting directly – unless I do, but that is rare in the scope of my tweets. This is my little indication that I’m doing the best I can in 140 characters to interpret and not mimic. But that’s not written down anywhere – that is not a best practice – that is an Autummism… but the thing is I’m not sure anyone “gets” that but me. I think many people assume the quotes. They assume that I’m transcribing the talk word for word. I don’t know how to assert that I’m a human and not a recording or broadcast device. I’m not trying to be a journalist, I’m not trying to be a camera, I just want to be a person who expresses her experience and uses social media to process that experience… more on that in a bit.

Until a few months ago I also used to never put the handle of the speaker in each tweet. I would send out one tweet at the beginning stating who I was hearing and interpreting, but all subsequent tweets would not reference the speaker. But then there is a problem with attribution – does it look like I’m spewing the speaker’s rhetoric as my own? After getting called on this, it might have been the first time I googled “how to live tweet a keynote speaker” (or something of the like) and, lo and behold, there was a set of “best practices” that sure enough stated you should tag the speaker in each tweet. So I started doing that.

Yes! Best practice to the rescue. Now I can finally start using Twitter right.

But now that I look back I realize that many of those articles were geared toward folks that were doing some kind of media production, were trying to sell something, or were interested in hitting a specific analytic. Wait!… That ain’t me babe. I’m not sure that I’m the intended audience of those articles but I didn’t get that at that time.

The thing about “best practices” is that they are problematic in that they strip nuance out of these contextual experiences. During our conversation some noted that this tagging in every single tweet basically sent the speaker a barrage of notifications, which could be annoying. Furthermore, these had the potential to start side conversations that resulted in even more notifications. These side conversations are complicated by the “meaning problem” that I started to outline earlier: a meaning based on the interpretation of the person reading the tweet, layered on the interpretation of the person who composed and sent it, can be very different from the intended message of the speaker. It feels like you are almost asking for trouble. This problem is going to be present regardless, I guess, but my beef is with the “best” in best practice.

Who is that best for again? And how is the “best practice” better than my experimental Autumm practice? It seems either way the vulnerabilities persist.

Another tool I have started using is mobile live streaming, and I think this does a good job of taking some of the problems I just discussed off of the table. It is pretty clear when I am on camera and when the speaker is on camera, and the speaker just gets to speak for themselves. This technology is fairly new and there are a ton of best practices on the web, mostly geared at video and sound quality and creating an experience for the virtual audience. The trade-off with live streaming versus tweeting is that I don’t live tweet a speaker only for the other people on Twitter who may read those tweets, or for the speaker themselves – I do it for myself too. I gain perspective considering those multiple contexts and constraints. It keeps my mind engaged in a different way than if I were listening without writing, or even if I were listening and taking public or private notes. I know through my work with Virtually Connecting that it is not about creating a great experience for a virtual audience as much as it is about creating a reciprocal experience between those on each side of the camera. Or at least that is what it is about for me and others that I surround myself with… I suppose this is where digital citizenship comes in.

I don’t just use social media and the public internet to channel other people who are speaking at conferences. I don’t just use images, video, and text to speak to an audience. I experiment with these things to learn about myself and the world around me. To explore multiple contexts and points of view. I have public conversations about topics that no one has easy answers for so that I can learn, maybe not the right answers but to perhaps be able to ask better questions, in community. I reflect on my experiences based on various forms of feedback that I receive and I make adjustments. I try to do better. This is not a research project, this is not reporting, this is not a course… this is a part of who I am.

And so when Sundi Richard and I started to ask questions around the idea of digital citizenship, having public conversations using video chat and Twitter seemed like second nature. When we decided to do it again a few months later we purchased web space and gave a home to some things like a schedule and an articulation of the context in which we were interested in talking about digital citizenship. By that time I had also found time to read Rhizo14: A Rhizomatic Learning cMOOC in Sunlight and in Shade by Jenny Mackness and Frances Bell, and it gave me pause. I was in Rhizo15 and found the facilitator Dave Cormier to be attentive and deeply concerned for those who were experimenting with him around a complex topic for which there was no easy, clear-cut answer. However, this paper painted Dave as having control and influence over the group but neglecting the needs of those who were not having a positive experience. That some participants had learned a great deal in the course but that others had been somehow damaged (the paper seems unclear to me on what this damage was, and even feeling deeply embedded in Rhizo15 I’ve never figured it out) and that the facilitator should have had more control or something.

So, I wondered if our little #digciz project should have a disclaimer of sorts, perhaps a set of standards, or a defined code of ethics. I knew that we would never reach the scale of Rhizo14, but I saw no reason why we should not be concerned by the same ethical implications. I wanted to be clear that I was concerned about all those who were going to choose to engage with us, but that at the same time there were some dangers inherent to being on the public internet and that we would not be able to control every connection. I also wanted to be clear that I have a life outside of #digciz and that I would not be able to watch 24/7.

I proposed this to Sundi and we created a page for this but we really struggled with articulation. Eventually, we decided to let the community own it and during the first week we would encourage the participants to build this statement themselves. Our first Twitter chat was more active than either of us had imagined but no one seemed interested in building such a statement. The page remained blank. We continued discussing digital citizenship anyway.

I think this was right for us and for the group that we ended up getting. We could have put on the brakes and refused to continue until we defined a disclaimer, list of ethical points, or a statement of some kind. But we didn’t. We decided to keep experimenting.

I think we all struggled with coming up with some set of standards for several reasons. For one, we are a pretty new group and I don’t think that group members have even really defined what they wanted from the group. As a community comes together and solidifies I think that it sometimes feels a need to define itself, but that takes time. For instance, I had a hand in composing the Virtually Connecting manifesto and point to it often when defining that work. But I also think that our approach to the subject of digital citizenship had something to do with this. We were bypassing some of the best practices on the subject and instead asking questions that were more complex, and so reducing it back to a simple statement or a list of some kind just didn’t seem right.

As for me, I think that #MyDigCiz is somehow rooted in a sense that by creating a list of rules and practices we might give guidelines to some but that those guidelines will not speak for all. That like online, as in the flesh, the complexities around how we live and how we impact each other have more to do with deep fundamental attitudes surrounding relationships, empathy, and an ability to see multiple contexts than they do with following a list.

Of course the rub is that not everyone is ready to be self-reflective digital citizens. And so sometimes we create best practices, community statements, codes of ethics, etc. because we have to start somewhere. I think these are especially important for instance when dealing with young children and I don’t want to be condemning of those efforts (I understand that perhaps I have come off as hypercritical in the past) – they are important and needed – I just think that there is another conversation that is not really being discussed. I guess my point is that I think the best practices are not working by themselves and that we need more.

One thing that I did take from the Sunlight and Shade paper was that online courses including, and maybe especially, MOOCs are not going to be an enlightening experience for everyone…  I think we knew that but research often tells us things that we instinctively know.

What is becoming really clear to me is that none of this is happening in a vacuum. I see the use of public, social, digital tools changing and shaping all of the time. I see them used to commit atrocities and then in other cases used to shine a light on atrocities. I know technology is not neutral, but I also know that people’s use of technology is not neutral either. We are learning from each other and shaping the way that we affect one another through the use of these tools. I see the free experimentation with technology, when done with what John Dewey referred to as the “habit of amicable cooperation”, as an affront to formulaic, prescribed best practices that may only be best for sales numbers and media clicks. I know that the idea of citizenship is a problematic one and that digital citizenship is an even more problematic one. However, I think that we have a better chance at finding a way to live together by developing an ability to see connections than by being able to follow the rules.

~~~

My next stop in this journey will be the Digital Pedagogy Lab Institute at University of Mary Washington and there will be lots of ways that you can participate virtually from Twitter to Virtually Connecting and I’m sure I will live stream a bit. However, I do want to encourage you – if you can by any means – attend in the flesh. I think that this is going to be one of the foremost learning events of the year if you are interested in getting past the hype and taking a close look at your own practice of teaching with digital tools.

Image Credit CC-BY-SA 4.0: Autumm Caines, Market Square UPEI 

What is Digital Citizenship?: recap on week 1 and announcing week 2 of #DigCiz

Last week was the first round of conversations that we kicked off about digital citizenship. I started off by presenting on digital citizenship with my colleague Jim Kerr during Martin Luther King Day at Capital. It was nice to have a foundation of what Jim had done last year when he presented this topic on his own at this same event. We used the nine elements of Digital Citizenship as defined by Dr. Mike Ribble on digitalcitizenship.net, which I think can be helpful when presenting on the concept to people who are not familiar with it. We Periscoped the session and put it on YouTube – we didn’t do a good job of staying in frame, so the audio is better than the visuals.

This past week also kicked off the #DigCiz conversations, which Sundi and I started as an offshoot of #HumanMOOC.

We have planned alternating forms of conversation between Twitter chats and Google Hangouts (GHO). This first round was a GHO and it was small – just Sundi, Daniel, and me – but that was okay; I think it made for a deeper conversation. We talked about some pretty cool stuff including:

  • What possibilities there might be for creating a digital bystander training
  • Creating journal entries reflecting on who one is as a digital citizen
  • Digital citizenry as a transformational experience
  • Digital citizenry as a chaotic concept as defined by the Cynefin framework

We kept returning to the nine elements of digital citizenry, and while I find the elements to be helpful in broaching the conversation of digital citizenship, as I start to dive deeper in my own thinking I’m having trouble with them. I’m just finding that digital citizenship is much deeper than any list of themes. In our GHO conversation we talked about how important creating digital identity is to citizenship. How can you participate if you don’t have an identity? What does it mean to have an identity, and if someone does not have a well defined identity does that mean that maybe they are new to the digital world – should we take steps to welcome them? But the elements do not mention digital identity except under digital law, which notes that stealing someone’s identity in a digital space is illegal. Identity seems to fit into the elements of digital communication, etiquette, and literacy, but there is no mention of it there.

This week we are continuing this conversation with a Twitter chat and we have changed the date/time to not conflict with the #MOOCMOOC instructional design chat. We will be tweeting on Friday, January 29th at 11am CST/12pm EST using the tag #DigCiz.

The question is “Why is Digital Citizenship Important?” and I’ll do a quick stint here to answer it for myself just a bit: I think for me digital citizenship is important because no man is an island. It has to do with the idea of the public commons and working in public to the betterment of everyone. It has to do with those connections and networks that we talk about so much in learning theory and it has me wondering about informal learning and how we are learning all of the time as we connect with one another. What does it mean to share space? What does it mean to share ideas?

I know these are big romantic kinds of questions/reflections… You can join the Twitter chat on Friday, January 29th at 11am CST/12pm EST using #DigCiz and help us explore them.

Introducing #DigCiz: A “place” to discuss digital citizenship – My take aways from #HumanMOOC

What does it mean to be a person on the web? What is it like to think of the web as a place? As a group of people in a place what are our responsibilities to each other?

I just finished up #HumanMOOC and it was a really good dance. I got to wax philosophic about what it means to be human and how we can transfer that human element to online learning. I’m finding more and more folks are talking about how important human relationships are to online learning and it is one of those big questions that I think takes multiple perspectives to figure out.

I’m thankful for #HumanMOOC’s dual layer design that allowed us to “go rogue” as Amy Ostrum called it and do a bunch of participant hangouts along with the scheduled conversations. One of these came about through twitter when I was having a conversation with some folks about digital citizenship and Sundi Richard suggested that we get together and talk about it a little bit.

Prior to the start of that hangout we were playing with the idea of a hashtag for discussing digital citizenship and came up with #DigCiz. I’m interested in continuing the conversation because I’d like to start teaching a first year seminar on digital citizenship in the fall. Some others have expressed interest in the conversation as well so Sundi and I got together the other day and planned a series of chats.

We decided to do alternating synchronous live video chats and twitter chats and to start off really broad and then narrow the topic. If you would like to join us feel free to check out this schedule and let us know if you want to join. Of course use the hashtag any other time or create whatever fun stuffs you would like out of this.

Week 1: What is digital citizenship?
Sync Video Chat
Wednesday, Jan 20th – 11am CST/12pm EST

Week 2: Why is digital citizenship important?
Twitter Chat
Friday, Jan 29th (rescheduled from Wednesday, Jan 27th) – 11am CST/12pm EST

Week 3: What resources around digital citizenship have we found helpful? Are there public resources that are needed and can we create them? 
Sync Video Chat
Wednesday, February 10th – 11am CST/12pm EST

Week 4: Participants choose this topic – Wellness – How do we maintain a healthy citizenry?
Twitter Chat
Tuesday, February 16th – 1pm CST/2pm EST (rescheduled from Wednesday, February 17th – 11am CST/12pm EST)

Week 5: Do we want to continue this conversation? What questions are still unanswered? What kind of timing should we continue with?
Sync Video Chat
Wednesday, February 24th – 1pm CST/2pm EST (rescheduled from 11am CST/12pm EST)