In early December, I participated in a Ditchley Conference on power and accountability in the private sector digital economy as an invitee of the Canadian Ditchley Foundation.
(If you are not aware of the Ditchley Foundation and the regular conferences that are held at the Ditchley Park country estate outside of Oxford, please check out the Foundation’s website which contains an impressive list of past conferences with a Director’s Summary capturing the unattributed points of the discussion at each event.)
This conference on the digital economy included a fascinating mix of private sector, public sector and civil society leaders and experts, with few enough of us academics that it had a quite different dynamic from the conferences I'm used to. The plenary sessions, break-out groups, and (especially) the informal conversations over port and claret were expertise-informed, intense, enjoyable and enlightening. I'm afraid I learned much more than I contributed, but I certainly had my thinking challenged by some of the leading minds in the world on issues of digital privacy and security.
This conference attempted to re-balance the concerns raised about government digital surveillance in the wake of the Snowden revelations, by re-orienting the focus to issues of security and privacy as they relate to the reach and influence of major private sector digital companies – Google, Facebook and Twitter being the canonical contemporary examples. Issues of monetization of users’ data, protection of personal privacy, and the increasing reach of algorithmic decision making fueled by data about our choices and behaviours provided the foundation for a wide ranging discussion about the rights of people – and the responsibilities of institutions – in the digital realm. The Director’s Note is available here, and represents an artful summary of the discussions amongst this diverse group.
While not entirely resonant with the Director’s Note, three inter-connected points I attempted to articulate at the conference are:
- data is the cost of free: When we think about how much data companies like those mentioned above have about our choices and behaviours, this is largely a product of the "free" Internet model that has become so dominant. (I self-consciously note that I am typing this on a free blogging service. I also note that I certainly did not predict that this was the direction the Internet was going 20 years ago. But given the rise of ad-blocking, I'm not sure how much longer the "free" Internet is going to last.) The primary reason why people are not concerned about the privacy and security of their Facebook data is that they get a service for (what appears to be) free. Because of the spread of the New Public Management myth that citizens are paying customers of government, we object to governments collecting our data: we don't consider the services we get from the state to be free, because we had to pay for them. But Google? That's ok, because Maps are awesome. And free. Totally free!
- are we willing to pay to be free: would we be willing to pay for previously free services if it would protect our privacy? (Would we pay a small amount for a currently free service in exchange for stronger control over our data?) I have the honour of working on another project with Sir Tim Berners-Lee, and he and his colleagues at the MIT Media Lab are promoting a concept called Crosscloud that I (certainly inaccurately) refer to as "Sir Tim's mea culpa to the Internet", or how to get back to what he initially thought the Internet should be. To oversimplify, Crosscloud is an architecture where – instead of Facebook holding all your Facebook-related data on its servers – you would grant Facebook access to a specific part of your data "pod" in the cloud, for the purposes you determine and for the length of time you want. But related to the "free" Internet issue above, the central challenge of Crosscloud is not technical – it's that it would require people to pay for a service (a secure cloud-based data pod) as an alternative to what they currently get for free. (Also, would Facebook ever play in such an environment? See the point about ad-blocking above, and the long-term sustainability of their model – it's possible they might. Someday.)
- youts are not what you think: there were occasional statements at the conference along the lines of "young people don't care about privacy". This belief has become commonplace, accepted without being examined. While there is much more to be said on this, the basic point I would make is that some not-small number of young people are inventing a kludged version of privacy under their own control. The creation of multiple Facebook accounts (to get around the "oh crap, my parents joined Facebook" problem), anonymous social media profiles, and things like Finstagrams are examples of this privacy-by-stealth movement.
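To make the Crosscloud idea in the second point above more concrete, here is a toy sketch of the pod model: the user's data lives in a store the user controls, and an app is granted scoped, time-limited, revocable access to a slice of it. This is purely illustrative; `DataPod`, its methods, and the paths used are my own invented names, not the actual Crosscloud (or Solid) API.

```python
from datetime import datetime, timedelta

class DataPod:
    """Toy model of a user-controlled data pod: data stays with the
    user, and apps get scoped, time-limited, revocable access grants."""

    def __init__(self):
        self._data = {}    # path -> value, held by the user, not the app
        self._grants = {}  # (app, path) -> expiry time of the grant

    def put(self, path, value):
        self._data[path] = value

    def grant(self, app, path, duration):
        # The user decides which slice of data an app may read, and for how long.
        self._grants[(app, path)] = datetime.now() + duration

    def revoke(self, app, path):
        # Revocation is unilateral: the app loses access immediately.
        self._grants.pop((app, path), None)

    def read(self, app, path):
        expiry = self._grants.get((app, path))
        if expiry is None or datetime.now() > expiry:
            raise PermissionError(f"{app} has no valid grant for {path}")
        return self._data[path]

# A hypothetical social app is granted 30 days of access to one slice of data.
pod = DataPod()
pod.put("profile/friends", ["alice", "bob"])
pod.grant("facebook", "profile/friends", timedelta(days=30))
print(pod.read("facebook", "profile/friends"))
pod.revoke("facebook", "profile/friends")  # after this, reads fail
```

The inversion of the current model is in who holds `_data`: the app queries the pod on the user's terms, rather than the user's data accumulating on the app's servers.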
The basic idea running through these three observations, I think, is that – rather than privacy and security being something that's done for (or to) us by institutions – they are inching towards being re-conceptualized as the responsibility of the individual. Whether we have the technical skills and interest to take up that responsibility is another question.