Gehl, R.W. (2013). What’s on your mind? Social media monopolies and noopower

Gehl, R.W. (2013). What’s on your mind? Social media monopolies and noopower. First Monday, 18(3).

On noopower[1] through marketing and repetition extended into ubiquitous social media:

Operating within the larger political economy of advertising–supported media, it is not surprising that Facebook, Google, and Twitter mirror marketing’s penchant for experimentation and repetition. Software engineers working for these firms pore over data about what actions users most commonly take — that is, what is most often repeated within the architectures of the sites. These engineers then constantly tweak their interfaces, APIs, and underlying software to reinforce these actions and to produce (they hope) new ones. The tiny changes in the Google homepage, for example, are akin to ripples on the surface of a body of water caused by motion deep underneath, as software engineers seek to increase the attention and productivity of users of these sites.

and

Real–time data collection on links clicked and videos watched provide marketers with the data they need to experiment with different messages, images, sounds, and narrative structures, allowing them to tailor messages to target publics, and then this process is repeated, ad nauseam, in a cybernetic loop. Behavioral tracking of users allows marketers to repeat messages across heterogeneous Web sites as users visit them, as well as make sales pitches via mobile devices as users travel through space. The messages that result in sales are repeated; those that do not are archived (perhaps they will be useful later). Liking, “+1”ing, or retweeting an ad enters users into a contest to win a trip to the theme park built around the movie that was based on the video game currently being advertised, a game in which the main character must use social media to build a following to solve a crime. All of this is, of course, a marketer’s dream: the observation, experimentation upon, and ultimate modulation of the thoughts of billions, the chance to increase what they call (in some of the most frightening language imaginable) “brand consciousness” over other forms of consciousness and subjectivity. It is the reduction of the scope of thought to a particular civic activity. It is the production of the flexible and always–willing global consumer as the real abstraction of our time. Consumption über alles.

and

Thus, to counter the reductive noopower operating in and through the social media monopolies, activists and technologists must create systems that allow for radical thought and heterogeneous uses, for differences that make a difference. The alternatives to social media monopolies must be built with protocols, interfaces, and databases all designed to promote new political thinking — noopolitical thinking — and to resist reduction of thought to repeated marketing messages of all varieties. We all can agree that this is probably impossible, but we always must keep a better future on our minds as we work with what we have on our minds.

  1. “power over minds, power over thoughts”

edtech fetishism

Additionally, educational technology can be prone to cycles of hype and fetishism, where new tools and applications are rapidly adopted by individuals who are seen as innovators in the field, with little time for thorough or rigorous investigation of the pedagogical strategies that may be enabled by the affordances of these new tools.

Norman, D. (2013). A Case Study Using the Community of Inquiry Framework to Analyze Online Discussions in WordPress and Blackboard in a Graduate Course. (Master’s thesis, University of Calgary). Retrieved from http://darcynorman.net/thesis

on hype cycles and easy answers

David Kernohan published a revised edtech hype cycle, rightly pointing out that it’s not a cycle, and that “progression” to the “plateau of productivity” is not a foregone conclusion. Here’s David’s EduBeardStroke Parabola 2013:

EduBeardStroke Parabola

I’ve seen the Gartner Hype Cycle used quite a bit – I’ve even used it myself in campus briefings and reports. It’s never sat well with me, but I couldn’t articulate why. I mean, Gartner hires The Experts to Make Sense of Things. And this is how they do it. And people understand the simplifications and generalizations, and feel comforted that Everything Will Be OK.

David raises some excellent points. Go read his post.

The Hype Cycle does a few things:

  1. it (over)simplifies a concept, reducing it to a single point on a (curvy) line. The only variable is its position along that line. And it implies that time (which is the typical x-axis) is all that’s needed. Invent a new shiny thing. Put a dot on the line. And wait. Continue waiting. BOOM! It’s moved through The Cycle, and is now resting happily on The Plateau of Productivity. Awesome! The reason people don’t adopt something early on is that they are dullards who just don’t get how awesome things are. And those who adopt it later are merely sheep who finally wised up and incorporated the inevitable product of time marching on.
  2. it implies a single shared context: that Shiny Thing #1 is the same thing for everyone, and has the same impact and risks and costs and benefits. That everyone is moving in lockstep through the inevitable march of time as Shiny Things progress through The Hype Cycle until they reach the point where an organization (it’s always an organization or institution or company – no individuals need apply) is comfortable enough with the risk:benefit ratio and decides to incorporate the inevitable. But this is bullshit. Every organization has a unique context. And individuals matter.
  3. it provides the Easy Answer. Gartner is in the business of selling reports on extremely complicated or chaotic fields – Big Data, Higher Education, etc… – to help organizations and companies understand change. Buy the report, get the latest Hype Cycle, and you’ll see at a glance where things are at, and what your organization or company needs to do. But easy answers aren’t useful. They placate the CYA MBA crowd who mumble things like “due diligence” and “mitigating risk” etc… while not providing an actual analysis of what these changes mean for their own organization or company. “Nope. Gartner said it’s not quite at the peak of inflated expectations, so we have at least 6 months before we have to allocate budget resources to addressing it…” – this is the kind of thing Scott points out with his comment on David’s post, and it’s probably the biggest danger of this kind of thing. All we need to do is buy access to a report, put a dot on a (curvy) line, and BOOM! hey, presto! we’re innovating! we’re adapting to change! To the presentation circuit!

Look at something like Learning Object Repositories – according to The Hype Cycle, that is now a mature/old concept. It should be grazing in the Plateau of Productivity by now. Except that, for most people, it’s not even a concept anymore. So, does it drop back to the Trough of Disillusionment? Does it drop off the curve entirely at some point? Or is it the Productivity Plateau for the folks who can actually use something like a repository? Which context wins?

So. What to do when the crowd-that-is-paid-better-than-me uses The Hype Cycle as gospel when describing innovation and the state of the art? Yeah. I don’t know, either. But I’m more likely to include a reference to Kernohan’s fantastic Parabola than I am to use the Hype Cycle. Again.

“I’m in a glass case of emotion” (or, on Enterprise Solutions on campus)

Brian Lamb wrote a fantastic post linking to Martin Weller’s recent post, which touches on enterprise- vs. twitter-scale support.

My synopsis of the important issues:

  1. People are different. They have different needs, different capabilities, different comfort levels, etc… etc…
  2. Institutions are (relatively) good at offering Enterprise Solutions.
  3. Enterprise Solutions kind of suck for individuals, and for small-scale innovation.

My take on this is that the institutions need to provide a “common ground” so all members of a community have access to core services and functionality. The LMS/VLE does that. Not always well, but the intent is to provide everyone with the ability to manage a course online. To do that at the scale of a modern university (there are nearly 40,000 students in various roles at the UofC, including 31,000 undergrads) means invoking Enterprise Software. So we get things like Peoplesoft as the Student Information System managing course enrolments and the like. And we get things like Blackboard providing the online course environment. Everyone gets to play. Maybe not in the exact way they’d like, but they’re in the game, and they get support to help them along. This is good.

But it leaves out the smaller scale needs. Where does a prof (or student) go when they want to set up their own website? Craft an ePortfolio that doesn’t fit into the tool provided by the Enterprise Software? Do something that involves colouring outside the lines? We need to be able to provide the means to allow, enable, and support that as much as possible.

So we get things like UCalgaryBlogs, wiki.ucalgary.ca, etc… that start with one person sneaking some software onto a campus server, and kind of letting it grow from there.

But that’s not good. It depends on:

  1. having someone able/willing to do that
  2. that person having the ability to find spots on servers to sneak software
  3. having the good fortune to not get reprimanded for 1. and 2.
  4. having that person never, ever leave the university, get sick, or die, or else all of these little sneaky servers become orphaned

How do we provide institutional support for that kind of thing, without relying on arrangements that may not be sustainable? There are already models in place for this. Every web hosting provider on the planet has already solved this problem.

We have an enterprise-class data centre already. All we need to do is implement web hosting functionality akin to Mediatemple etc… Implementation details don’t matter so much. A private campus cloud/grid/cluster? Every member of the campus community gets an account, and can use one-click installers to run whatever is provided, or if they have the ability, they can install whatever software can run on the servers. The metal, OS, and core software would be managed by IT. Support for the core stuff would be relatively straightforward to provide. And the community could support the rest, with the help of IT and others.

And, at the end of a person’s career at the University, they could bundle all of their stuff up and take it with them.

Yes, this totally rips off / builds on the UMW Domain of One’s Own project. They’re doing so many awesome things at that school, we’d be crazy not to model some things after them.

What would the campus web hosting service look like? What kinds of software/platforms would be used? Easy enough to spin up a community process to investigate that…

Enterprise Solutions providing the core services (SIS, LMS/VLE, web hosting), with support provided both centrally, through the campus community, and distributed through The Internet At Large.

Update: A quick napkin-math calculation suggests that providing 5GB of storage per user would cost about $1M per year. This is a non-trivial thing to implement…
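
For the curious, the napkin math works out roughly like this (a sketch only; the ~40,000-user figure comes from the footnote above, and the per-GB cost is an assumed placeholder, not a real quote from campus IT):

```python
# Napkin math for campus-wide web hosting storage.
# Assumptions (placeholders, not real figures): ~40,000 campus users,
# a 5 GB quota each, and a fully-burdened enterprise storage cost of
# roughly $5/GB/year (hardware, backups, redundancy, administration).
users = 40_000
quota_gb = 5
cost_per_gb_year = 5.00  # assumed placeholder

total_gb = users * quota_gb                  # 200,000 GB, or ~200 TB
annual_cost = total_gb * cost_per_gb_year    # ~$1,000,000 per year

print(f"Total provisioned storage: {total_gb / 1000:.0f} TB")
print(f"Estimated annual cost: ${annual_cost:,.0f}")
```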

my edtech predictions for 2013

So, Dr. Bates calls out Audrey Watters for not making predictions for 2013. I’d love to see her predictions. Fair’s fair, though, so here are mine:[1]

  1. Lots of people will do small-scale innovative projects with no funding or resources, because they love trying new things and doing awesome stuff.
  2. Some companies or institutions will “invent” or “discover” something that one or more of these people have been doing, and it will be branded as their own.
  3. This branded “innovation” will become co-opted and corrupted, so that it doesn’t really do anything innovative, or anything other than building the reputation of the “innovators”.
  4. People will hype the crap out of the “innovation” as The Future of Education, and The Saviour (or Disruption) of Universities, and present it at conferences and write papers and travel the presentation circuit explaining it to the masses.
  5. The people from 1. will largely ignore the hype, shrug their shoulders, and continue doing awesome stuff because they enjoy doing awesome stuff.

I feel pretty safe standing behind these predictions for 2013, because that’s the pattern of innovation that’s happened pretty much every year I’ve been playing with edtech.[2]


Footnotes:

  1. some of the details have been left out, but can easily be filled in as an exercise for the reader
  2. which, holy crap, has been for almost 2 decades. which, holy crap, is a lifetime or two in edtech

on empathy

half-baked post alert

This is nothing new, but I’ve been internally coming back to it often enough that it’s worth saying out loud.

We’ve been working on identifying and documenting the needs of our campus community, with respect to an eLearning environment – with the unspoken goal of finding The One True Tool that will serve everyone’s needs. The further-unspoken message being that everyone is (or should be) fundamentally the same, and that by finding and encouraging a single set of “best practices” we’ll be able to help the lesser-able (i.e., different) people to adapt (i.e., conform). There are reasons to encourage conformity – it’s easier to support, easier to implement, cleaner to put into an RFP, etc…

Giulia Forsythe was presenting on creativity at UMW’s Faculty Academy, and mentioned a throwaway comment by an engineering prof who said her visual notes were horribly and dangerously unorganized, and that she should use Visio to keep more organized notes. Because if you want to be a proper note-taker, with an organized mind, you must adopt the tools and techniques of an engineer. Or be dismissed as having an unorganized and cluttered mind.

The pattern is pervasive. “I’ve done this. Everyone should do it just like I did, because I’ve figured it out.” But that doesn’t work.

Michael Wesch has been doing some awesome, inspiring and innovative stuff in his digital ethnography courses. He talks about the stuff he and his students do, and people dutifully write it down as a recipe for them to do the same. But that doesn’t work. People are different. Dr. Wesch nails it – the most important thing we have is empathy. The ability to recognize others’ feelings. To be aware that people are different.

So. How do we move away from silently encouraging conformity, toward recognizing and leveraging diversity? This isn’t just an edtech or eLearning thing. This isn’t just a teaching-and-learning or education thing. How do we encourage and support empathy? How do we avoid the urge to pigeonhole problems into solutions?

Yeah. I don’t know, either.

my hosting/publishing/sharing setup

Time for another reclaim project update, after nuking my Flickr account. What am I running, and what is my workflow? Well, I’m running almost everything on my Hippie Hosting Co-op account, including:

  • my main blog
    • all media is posted there – either in a full blog post, or in the ephemeral media section
    • I use a bunch of plugins, all listed on my colophon page
  • links (running a self-hosted copy of Scuttle)
  • rss reader (Fever°)
  • url shortener (Shaun Inman’s lessn, with no tracking or administration)
  • feed2js, for doing fun things with rss feeds on web pages
  • about mini-site. static html.
  • 1998-style home page, using my instance of feed2js to tie in feeds in a handy dandy dashboard

Most of my posts are made as photos using the WordPress app on my iPhone. I have it in the main app bar, so it’s always just one click away. Photos are lately taken most often using the great 6×7 app on my phone (not owned by Facebook, not tied to any service – all it does is take a photo quickly, and save it to my camera roll where I can quickly post it using the WordPress app).

My default category for new posts is “ephemera” so I don’t have to select any categories when posting photos from my phone. I use a plugin that I wrote, which filters all “ephemera” posts from the front page and main feed so that the 4 subscribers aren’t inundated by photos. I use a second plugin that I wrote, which tells the Twitter WordPress plugin to tweak the tweet announcing new posts – so “ephemera” posts have “(media)” inserted in the twoot to prevent Scott from blowing a gasket at all of the tootbot noise…

Bigger “real” posts (like this one) are written using the WordPress web interface. I used to use MarsEdit, which is really great software and I love it, but WordPress’ interface has gotten good enough that I really don’t need a separate app. And, with the Markdown QuickTags plugin, it’s actually easier and faster to use the native web interface. It also handles media uploads really nicely, which is handy (and the biggest reason I used to use a separate standalone app for writing stuff – the media uploads used to be easier that way).

The only things I’m not hosting myself are my Google account (which isn’t used much, and I still use DuckDuckGo for 99% of my searching because it doesn’t feed the beast), my Facebook account (which only exists because I have family and friends that don’t exist online outside of Facebook), and Twitter (which is like ephemeral social glue).

One nice thing about running everything on my own (co-op hosted) server is that I can back everything up at once. I can use something like rsync to suck my entire hosting account directory onto my laptop, so I’ve got a backup in case Bad Things Happen. That’s hard, or impossible, using distributed hosted services…
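
In practice, that backup can be as simple as one scheduled command. Here’s a minimal sketch of the kind of pull I mean (the account, hostname, and paths are hypothetical placeholders, and a one-line cron job calling rsync directly would work just as well):

```python
#!/usr/bin/env python3
"""Pull an entire shared-hosting account down to a local backup directory.

The account, hostname, and paths below are hypothetical placeholders.
"""
import subprocess

REMOTE = "user@hippiehosting.example:~/"          # hypothetical hosting account
LOCAL = "/Users/dnorman/backups/hippiehosting/"   # hypothetical local directory

# -a preserves permissions and timestamps, -v is verbose, -z compresses
# over the wire, and --delete keeps the local copy an exact mirror.
subprocess.run(["rsync", "-avz", "--delete", REMOTE, LOCAL], check=True)
```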

What have I lost by hosting it myself? Not much. Some of the community connections, perhaps, but most of that has been happening in Twitter anyway, so that’s not a big deal.

What have I gained by hosting it myself? I own it. Nobody can say “hey. we sold our company to these guys. good luck with that.” And nobody – nobody tracks what people do here. I have the raw Apache logs, but those are a crude, essentially anonymous aggregate of activity. Nothing directly feeds Google’s (or any other company’s) machines for tracking and monitoring and monetizing (I don’t use any third-party analytics packages, so there shouldn’t be any tracking except from YouTube and Vimeo hosted videos). That’s worth doing it all myself, right there.

Worst case scenario, if Hippie Hosting Co-op’s orbiting server platform goes offline for some reason, almost everything I publish becomes temporarily unavailable. That’s not really a big risk. The world could do with a little less noise. And, eventually, my stuff would become available again and balance would be restored. Whew.

Update: I thought of some other key tools I use that I’m not hosting myself, but would love to find a way to do so:

  • google docs. no way out of this one, aside from emailing documents around again. nope.
  • evernote (I basically live in this, but am kind of queasy about the amount of my private/secure data that’s residing on a company’s servers somewhere)
  • dropbox – I’ve played with owncloud, but it’s just not as seamless as dropbox, especially for automagically syncing across many devices
  • icloud – likely no way out of this one. it’s an email account and iOS backup, but also tied to Ping, GameCenter, and other things Apple.

and other services that are hosted elsewhere, but that I just don’t care about because they’re meaningless (but I’d consider nuking them just to throw a shoe into the machinery of ubiquitously tracking everyone):

  • linkedin. really? do people actually use this?
  • facebook. it’s full of people. and creepy monitoring/monetizing.
  • likely a bunch of other stuff I can’t remember because they’re silly and meaningless web 2.0 noise…

on the role of the lms in higher education

It’s fashionable to rail against the LMS, to lament the shackles of institutional constraint and to advocate for abandoning the concept in exchange for a DIY nirvana. There’s definitely something to the no-LMS movement, because it emphasizes individual control and grassroots innovation. But, there is also a role for the LMS in higher education. If for no other reason than the simple reality that most instructors, and many students, aren’t ready, willing, or able to forge their own solutions. Nor should they be required to. The DIY hobby craftsperson ethos is inspiring, but institutions have an obligation to provide tools to enable their faculty, students and staff. The LMS is only part of that – but I think it is a core part.

From an institution’s perspective, the LMS is the glue that holds everything together. It provides access to courses. It ties grades to the registrar’s office. It provides a core set of tools and resources for offering courses online. It’s not the whole widget, but it’s an important part of it.

Much of the fun and innovative work happens outside of the LMS – blogs, wikis, collaborative assignment managers, etc… But, even non-LMS platforms start to take on the characteristics of the conventional LMS – tracking students’ activity, providing access to resources, connecting assignments to grades, etc… Even a grassroots No-LMS environment eventually grows to resemble an LMS-like space.

And, there is some interesting innovation going on within the “conventional” LMS – with some really compelling things showing up to support thoughtful course design, effective assignments aligned with the goals of instructors and students, and the potential to blur the traditional boundaries that restrict activities within the LMS. It’s not so much the tool that matters, but what you do with it. I’ve seen profs do really interesting things with our creaky installation of Blackboard 8. And, I’ve seen profs do downright traditional and boring things with our non-LMS campus blogging platform. It’s not the tool that defines creative teaching and learning.

So, I look at the LMS not as shackles of institutional control, but as a means of providing a set of tools to individuals, to ensure that we’re doing a decent job of teaching online. I’m up to my eyeballs with campus LMS requirements documentation, so can’t comment much more than that at the moment. But, I believe the LMS is important as an equalizing platform. And, that people (faculty, students and staff) should be encouraged to explore and build both within and outside the LMS.

basic metadata analysis

Here’s a quick pass at analyzing the basic metadata for the online discussions.

I plotted a few calculated values (Excel pivot tables fracking ROCK, BTW…) to try to compare activity patterns. What’s interesting in this graph is the average wordcount (green line) – low for the Blackboard discussion board threads (the left 5 items) and markedly higher for the 8 student blogs (the right 8 items).

The number of posts in each discussion (dark blue line) is relatively consistent across all discussions. Slightly lower for the WordPress blog sites, but not dramatically so.

Also interesting is the red line – the standard deviation of the “day of course” for posts. It’s a rough estimate of how rapidly posts occur – a low standard deviation indicates the posts occurred relatively close together on the calendar. A high value indicates the posts occurred over a longer spread of days. This suggests that Blackboard posts were added in brief, rapid bursts, while the WordPress posts and comments were posted over longer durations. People kept coming back to blog posts long after they were started. Interesting. There could be a number of reasons for this – it’s easier to see Bb discussion boards all in one place, and easier to forget to check various blogs for activity, etc… Or do students just reflect more, and more deeply, on blogs? Interesting… I’d love to find out the reasons behind the different values…

So… The WordPress discussions occurred over longer periods, using slightly fewer posts/responses, but with dramatically longer posts than were seen in the Blackboard discussions…
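
The numbers above came out of Excel pivot tables, but the same per-discussion summary could be reproduced programmatically with something like this (a sketch; the CSV file and column names are hypothetical stand-ins for the exported metadata):

```python
# Per-discussion summary of the online discussion metadata.
# Assumes a hypothetical CSV with one row per post:
#   platform, discussion, author, day_of_course, wordcount
import pandas as pd

posts = pd.read_csv("discussion_metadata.csv")  # hypothetical export

summary = posts.groupby(["platform", "discussion"]).agg(
    num_posts=("wordcount", "size"),       # posts per discussion (dark blue line)
    avg_wordcount=("wordcount", "mean"),   # average post length (green line)
    day_spread=("day_of_course", "std"),   # std dev of "day of course" (red line)
)

print(summary.round(1))
```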

full online discussion metadata visualization

I’ve finally entered all of the metadata for the online discussions I’m using in my thesis. This includes the person who posted, the date, and the size of each post. I worked through my earlier visualization mockup, and wanted to try it with the full set of data. So, here are the Blackboard discussions (top image) and WordPress blog posts (bottom image):

It’s only the most basic of metadata, but already differences in activity patterns are becoming apparent. Both images are on the same time- and size-scales. The WordPress discussions appear to be using significantly longer posts and comments, spread over much more time. Blackboard discussions appear to use shorter posts, over briefer durations.
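
For anyone wanting to build a similar view, the timeline boils down to a scatter plot per platform, with one dot per post positioned by day of course and sized by wordcount. A rough sketch (not the actual figures; it reuses the same hypothetical CSV and column names as the summary in the previous post):

```python
# Timeline sketch: one dot per post, positioned by day of course,
# sized by wordcount, with one row per discussion. Column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

posts = pd.read_csv("discussion_metadata.csv")  # hypothetical export

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(10, 6))
for ax, platform in zip(axes, ["Blackboard", "WordPress"]):
    subset = posts[posts["platform"] == platform]
    # Map each discussion to its own row on the y-axis so both panels share scales.
    rows = {name: i for i, name in enumerate(sorted(subset["discussion"].unique()))}
    ax.scatter(subset["day_of_course"],
               subset["discussion"].map(rows),
               s=subset["wordcount"] / 10,   # dot area scaled by post length
               alpha=0.5)
    ax.set_title(platform)
    ax.set_ylabel("discussion")
axes[-1].set_xlabel("day of course")
plt.tight_layout()
plt.show()
```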

Next up, I get to code each post for Community of Inquiry model “presences” – as described by indicators for social, cognitive and teaching contributions in the posts. I’ll figure out some way to overlay that information on top of the basic metadata visualization.