Being digitally invisible: Policy analytics and a new type of digital divide

Citation: Longo, J., Kuras, E., Smith, H., Hondula, D. M. and Johnston, E. (2017), Technology Use, Exposure to Natural Hazards, and Being Digitally Invisible: Implications for Policy Analytics. Policy & Internet. Early View doi:10.1002/poi3.144


Policy analytics involves the combination of new data sources – e.g., from mobile smartphones, Internet of Everything (IoE) devices, and electronic payment cards – with new data analytics techniques for informing and directing public policy.

The concept of the digital divide has been around for some time now. Whether it focuses on basic ownership of and access to digital tools, or on the ability to use them effectively, the digital divide means that some people are not able to send information into, or receive information across, digital channels. If you haven’t got a computer, you can’t tweet about it.

The access part of the digital divide has diminished in recent years (mostly because of the falling cost of technology needed to get online, and efforts by corporations and governments to put mobile technology into people’s hands at low or no upfront costs), but the broader concept of who is represented online is still of concern to researchers and policymakers.

With the rise of new data sources (often referred to as “big data”) driving the possibilities for policy analytics, work undertaken at the Center for Policy Informatics at Arizona State University in early 2015 came to focus on those who do not use or own devices like smartphones, IoE devices and transaction cards.

We explored the possibility that people may be rendered digitally invisible if the signals from their daily actions are not generated or captured because they don’t carry the devices that “big data” presumes, and therefore don’t figure into policy analytics. Failing to observe the lived experience of those outside the “big data” world may result in policy analytics being biased, and policy interventions being misdirected as a result.

My CPI colleagues Evan Kuras, Holly Smith, Dave Hondula, and Erik Johnston and I set out to determine whether the concept of the digitally invisible could be demonstrated empirically, through an exploratory study conducted with the participation of homeless individuals in Phoenix and the Phoenix Rescue Mission, in the context of extreme heat exposure.

The results of that work have been published in a special issue of the journal Policy & Internet. If you don’t have access to the online version at the publisher, the published version can be accessed here.

Do the digitally invisible exist? Perhaps surprisingly to some, homeless individuals in the United States have very good coverage in terms of mobile phone usage (this is partly a result of government programs, and partly because a mobile phone becomes a crucial technology when you don’t have a fixed address). And public libraries and other access points provide computer resources and Internet access, leveling the digital playing field and lowering cost barriers.

Yet policy analytics rests not on active participation, the focus of the digital divide literature, but on passive data contributions captured through “big data.” We think this is the key idea that distinguishes the digitally invisible from the digital divide.

For those without a smartphone, a bank account, or a credit card, without regular and ubiquitous Internet-connected computer access, living beneath and beyond the network of sensors, monitors, and data capture points, existence is being rendered increasingly invisible. Policy developed using a policy analytics approach will be biased against them, even if unintentionally: policymaking becomes blind to their existence, and policy based on incomplete evidence will not reflect their reality.

We’re at the early stages of the policy analytics movement. But we argue that a contextual awareness and humility should guide the developing policy analytics approach, understanding that it offers only a partial picture of a reality that is influenced by the values we bring to the analysis. We recommend vigilance in looking for those who are hidden, and we will do the same in our future work.

We look forward to your comments.

Review: “A Field Guide to Lies: Critical Thinking in the Information Age”, by Daniel J. Levitin

  • A Field Guide to Lies: Critical Thinking in the Information Age
  • Daniel J. Levitin
  • Publisher: Allen Lane, 2016
  • Amazon.ca listing

While we’re still in the early days of 2017, I’m beginning to think that “gaslighting”, “fake news”, “post-truth”, and “alternative facts” are all possible contenders for word of the year. As these ideas consume us – causing us to ask whether Donald Trump’s supporters think his outright and brazen lies are true, or whether they like him because he lies so boldly and they know he’s lying – Dan Levitin’s “A Field Guide to Lies” is disappointing because it resides in a bygone world of the gentlemen’s agreement: a world typified by the late U.S. Senator Daniel Patrick Moynihan, who was reputed (though likely apocryphally) to have remarked: “everyone is entitled to their own opinion, but not their own facts.” Unfortunately, it is 2017, and Moynihan’s law seems to have been rewritten as “everyone gets to have their own facts, as long as you state them with confidence and call the alternative ‘fake news’.” Also unfortunately, Levitin’s book feels so quaintly 2015, in an era where things move fast and get broken with glee.

I picked up this book hoping for some insight into how to understand this phenomenon where a President of the United States is so comfortable with lying, and so reflexively calls that with which he does not agree “fake news”. Regrettably, I don’t think Levitin anticipated the depths to which our democratic discourse has sunk – I mean, who could have? – and so his book is a useful guide to critically assessing falsehoods, but doesn’t do much for our understanding of “the new normal” of the lie that is embodied by Trump.

Levitin is a neuroscientist and Canadian/American academic of renown, and his goal in writing this book – “how to spot problems with the facts you encounter” (p. ix) – is laudable. He cannot be faulted that the goal-posts were moved between the writing and the publishing, when ideas like “post-truth” and “fake news” seemed to appear from nowhere. The publisher, hoping that the book was incredibly timely, is likely responsible for the title. (Update: I just received Levitin’s book “Weaponized Lies: How to Think Critically in the Post-Truth Era”, as I was interested to see how Levitin would take on this subject in a new book. Unfortunately, “Weaponized Lies” (paperback, March 2017) is 99.9% (I made that statistic up, but it’s close enough) the same book as “A Field Guide to Lies” (hardcover, September 2016), changing the title to take advantage of the buzz around the “post-truth era” but doing little to address what is fundamentally different about this era (the introduction has been modified very slightly).) Levitin has noble objectives in explaining with care and clear prose how a skeptical reader should interpret statistics, information, and assertions in a media story. But if you’re expecting an explanation of this age of bonkers lying? Disappointment awaits, I’m afraid.

Also, for a neuroscientist, the dearth of references to the cognitive factors that influence how we perceive information is odd. Levitin almost reveals the depth of the “believing is seeing” problem when he says “the human brain often makes up its mind based on emotional considerations, and then seeks to justify them” (p. 124). But there is no follow-up on the reasons why we choose to believe obvious untruths: concepts such as motivated reasoning, often identity-based, can help explain why some people believe in climate change while others are convinced it’s a Chinese hoax. And what explains the relative absence of the work of Kahneman and Tversky, when this book is all about the “slow” mode of thinking at a moment when the “fast” mode seems dominant?

Again, as a textbook on how to accurately and skeptically assess statistical findings, the presentation of information, and the validity of assertions, this is very well-written and concise. It will appear on the syllabus of many introductory statistics courses, and students should be grateful for its readability. His chapters on the proper use of statistical techniques, graphical presentation, probabilities, and data collection methods are clear and persuasive. He does miss the new technological advances that seek to address the “how numbers are collected” problems, but the chapter on probabilities will help you interpret what FiveThirtyEight.com meant when it said Hillary “Clinton is a 71 percent favorite to win the election”. Ahem.

It’s in part two of the book that Levitin ventures into more interesting territory, setting out questions about the fundamental nature of knowledge – that is, how we know what we know. Take the moon landing, for instance. How do we know it happened as reported, and isn’t an elaborate hoax perpetrated by an American government determined to win the space race at any cost? “We have three ways to acquire information: We can discover it ourselves, we can absorb it implicitly, or we can be told it explicitly” (p. 123). Option 1 isn’t available to us here (the moon landing, if it happened, happened a long time ago). Option 2 is also out of reach (I assume here he means observing things that happen to others, and inferring the same would happen to us). So we’re left with being told it happened, and believing it because we trust the person who told us.

Ah, thus enters “the death of expertise”, introducing the greatest danger in the era of the bold lie. As a former great American President once said, when asked how he knew that letting a fire burn in a national park was actually good for the environment: “Because smart people told me.” One has trouble imagining the current president, no less fictional to many than President Jed Bartlet, saying he was going to do something because an expert told him to. In the Trumpian era, the expert is derided as either having a hidden agenda or lacking any common sense. With no experts to tell us so, we are left with being comforted by the convincing lie. Knowing how to properly distinguish between the mean, median, and mode will be of little help in a world where reference to the Bowling Green Massacre threatens to sway public opinion.

You’ve likely heard a lack of preparation or sophistication framed as “playing checkers while the other side plays chess.” And bringing the wrong defensive tools to a confrontation is like “bringing a knife to a gunfight.” I’m afraid that studying Levitin’s book in today’s political context would be akin to coming to a gunfight prepared to play chess. You may feel good about your technical abilities, but you’re not playing the same game as your opponent. You’ll lose, while feeling righteous about it.