Computer-Supported Policy Analysis, Past and Future

In a recent draft paper (available here, and below), I’ve speculated that enterprise collaboration systems in government can serve to expand the notion of who in government can contribute to policy analysis; and that open data can expand the idea of policy analysis to those working outside of government.

Comments are welcome from any reader. Note that this draft is currently under review and is intended for: Parson, Edward A. (ed.). forthcoming. A Fine Balance: Expertise, Evidence and Democracy in Policy and Governance, 1960-2011. Toronto: University of Toronto Press.

Computer-Supported Policy Analysis, Past and Future

The use of computer technology in support of the policy analysis function in government has evolved over the post-World War II period, both in response to development and innovation in networked digital computer technology and in response to the changing nature of how policy analysis is conceptualized and practiced. This chapter attempts to sketch that relationship between computers and the practice of policy analysis, and to begin mapping out possible future directions for the practice of policy analysis arising from continuing organizational changes and shifting approaches to governance made possible by recent technology innovations.

Speculation around the role that advances in information and communications technologies (ICTs) might play in improving the practice of policy analysis has tended to follow a technology “hype cycle” (Fenn and Raskino, 2011), in which a predictable trough of disillusionment follows a peak of initially high expectations. With every significant increase in hardware power, software functionality, system reliability and affordability, renewed enthusiasm for the ability of computer technology to transform the process of policy analysis seems to have followed. When the dust of each technological upheaval has settled, however, questions are invariably raised as to whether the practice of policy analysis has been fundamentally transformed by the introduction of technology innovations, or whether all that has changed are the tools by which the public policy analyst carries out their traditional tasks.

The question this chapter explores is whether the future of computer-supported policy analysis will be so unlike its past as to represent a fundamental transformation of practice. Two factors, taken together, signal a possible discontinuity between the past and future of ICT-supported policy analysis: technologies and work-modes that seek to flatten traditional organizational hierarchies through support for collaboration and knowledge sharing amongst knowledge workers within large bureaucracies (enterprise social collaboration), and a growing social and governance movement aimed at increasing transparency through the open publication of government data sources (the open data movement).

In the next section, a brief sketch is presented of the shared sixty-year period that has seen the development of both the field of policy analysis and the networked digital electronic computer. These developments have produced a history of computer-supported policy analysis over the post-World War II period, from the use of the first commercially available computers as large dataset tabulators to today’s networked, mobile ICTs as a key tool of the contemporary policy analyst. Following that sketch, two emerging trends that flow from that history are discussed: the emergence of enterprise social collaboration platforms within large bureaucratic organizations as a means for knowledge workers to connect, collaborate and share knowledge with their colleagues; and the movement to open government data sources to widespread public scrutiny, manipulation and re-purposing. These trends are then considered for their implications for the policy analysis profession: how enterprise social collaboration can serve to flatten the organizational hierarchy and expand the notion of who in the organization might contribute to policy analysis; and how the open data movement can make available to public policy advocates outside of government the data resources that, with appropriate analytical skill, can rival the inside-of-government position of the policy analysis professional.

Policy Analysis and a Brief History of e-Policy

The relationship between computers and policy analysis that has developed over the past sixty years covers a period that, coincidentally, represents the first sixty years for each of these ‘inventions’ of the modern age – at least in their current conception. With their respective births emerging from the destruction of the Second World War, each was seen at the time as an instrument crucial for the enhancement of the human condition (Bush, 1945). Having grown up together, the digital electronic computer and the policy analysis profession are not unlike most siblings, at times benefiting from each other’s advances: the computer is often seen as the golden child, with policy analysis the vacillating brother who has shown moments of brightness but generally has failed to live up to expectations. Indeed, policy analysis has been dealing with an existential crisis for most of its later middle age. Whether the computer’s dominance will continue to overshadow the earlier hopes for policy analysis, or instead give policy analysis new meaning and purpose, is the question this chapter explores.

But before briefly addressing the history of computer-supported policy analysis (and since most readers will have a shared understanding of what is meant by ‘computers’), I should clarify what is meant here by ‘policy analysis’: a core function of government in which a particular type of public servant, often referred to as a ‘policy analyst’, provides support for decision making with the aim of contributing to better decisions than would be made in the absence of such analysis (Quade, 1975). A concept of policy analysis in support of public decisions can be found from ancient civilizations through to the modern age, ‘from Hammurabi to Keynes’ in all but name (deLeon and Overman, 1998; Dunn, 1981). But it was through the publication of Lerner and Lasswell’s edited volume The Policy Sciences (1951) that an integrated, multidisciplinary approach to the study of public problems and the development of recommended solutions first took shape. Harold Lasswell is widely considered to be the founder of the policy sciences, and his post-war writings provide the field with its earliest concepts. Of particular note is Lasswell’s introductory chapter in The Policy Sciences, where he advanced ‘policy analysis’ as a term of art, seeking to differentiate it conceptually from the social sciences generally and ‘political science’ specifically.

As practiced by the individual public servant, policy analysis involves a range of activities including: the identification of public problems and the determination of their extent; the assembling of evidence, and analysis of the problem; the projecting of outcomes and development of strategies for dealing with trade-offs; the construction and evaluation of options for addressing the problem; the assembling of bureaucratic and civil society coalitions necessary for later policy formulation and implementation; the communication of recommendations to support decision-making; and the evaluation of previously adopted policies to determine effectiveness or value (Bardach, 2000; Pal, 2009; Weimer & Vining, 2010). Policy analysts play many roles in the policy analysis process: as information and knowledge manager, decision-support reference source, coordinator and collaborator, boundary agent, advocate, advisor and gatekeeper. Artifacts of the policy analyst’s work may take the form of draft position papers, consultative documents and strategy statements, decision notes and briefing notes, draft Ministerial orders, proposals for new or amended legislation, regulations or programs, formal Cabinet submissions and less-formal Cabinet presentations.

Lasswell’s vision for the policy sciences was based on applying social science knowledge and quantitative methods to the analysis of policy choices, and was strongly influenced by economics. During the first twenty-five years of the policy analysis movement, techniques such as modeling, quantification of data, descriptive statistics, statistical inference testing, operations research and systems analysis, cost-benefit and risk-benefit analysis, Markov analysis, linear programming, dynamic programming, stochastic modeling, Bayesian analysis, quasi-linearization, invariant embedding, and general systems theory became staples of the profession. This ‘golden era’ of the policy analysis movement in the late 1960s was a time when “policy analysis was essentially quantitative analysis” (Yang, 2007: 351). Advances made in the techniques of policy analysis led Brewer (1974: 239) to suggest that the practice, as it neared its second quarter century, may be finally “emerging as an identifiable, respectable, even desirable professional activity.” Theoretical and applied advances were made through the 1960s and 1970s in policy analysis using mathematical equations and computer programming (Quade, 1980). Quantitative methods in policy analysis were not deployed for the sake of demonstrating elegant technical prowess, however, but in response to real social and economic pressures: expansion of the welfare state, economic malaise, emerging awareness of environmental limits, space exploration and the continuation of the Cold War provided new challenges and opportunities for policy analysis. Additional approaches such as system dynamics (e.g., Forrester, 1971; Meadows, et al., 1972; Mesarovic and Pestel, 1974), Integrated Assessment Models (IAMs) for integrating science with policy (Parson and Fisher-Vanden, 1997) and increasingly sophisticated simulation tools (see Wolfson, in this volume) made important advances during the 1970s and 1980s.
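
To give a concrete flavour of the simplest end of that era’s toolkit, the following sketch works through a basic cost-benefit calculation, discounting a hypothetical program’s costs and benefits to net present value. It is written in Python as a modern stand-in for the mainframe programs of the day, and every figure in it is invented for illustration.

```python
# Minimal cost-benefit sketch: net present value of a hypothetical policy program.
# All figures are invented for illustration only.

def npv(cash_flows, discount_rate):
    """Discount a list of (year, amount) cash flows back to present value."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

# Hypothetical program: $10M up-front cost, $3M annual benefit for 5 years.
costs = [(0, -10_000_000)]
benefits = [(year, 3_000_000) for year in range(1, 6)]

net_present_value = npv(costs + benefits, discount_rate=0.05)
print(f"Net present value: ${net_present_value:,.0f}")
# A positive NPV suggests the program's discounted benefits exceed its costs.
```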

Despite the coming of age of policy analysis in the early 1970s, debates over the real, perceived and proposed role of the policy analyst have coloured the profession’s second quarter century. Positivism – the application of logical and mathematical treatment to empirical evidence as the basis for determining authoritative, scientific knowledge – dominated the discipline’s intellectual infrastructure, and the policy analysis movement continued to be influenced by the training, practice and specialization of the academics who taught succeeding generations of policy analysts (Durning 1999). Just as technical policy analysis rooted in quantitative methods was becoming more sophisticated during the 1970s and 1980s, and just as policy analysis had finally found its footing, high-profile failures exposed the limits of policy analysis (Radin, 2006). Critics of positivism argued that the attempt to model social interactions on the natural sciences was mistaken (Amy, 1984), and that policy wisdom should be seen as more than the results of data impressively distilled (Prince, 2007; Wildavsky, 1978). The limits of positivism were also under scrutiny at the same time that the ‘implementation problem’ (i.e., the disconnect between policy making and the action required on the ground to realize the intent of a policy initiative) was being highlighted (Pressman and Wildavsky, 1973). The post-positivist movement led to calls for a balancing of softer skills along with technical mastery (Fischer, 2003), and approaches such as participatory design, stakeholder involvement, citizens’ input, qualitative methods and mixed methodology, amongst others, were advanced. Part of the response to the implementation problem focused on the knowledge gained through the work of the ‘street level bureaucrat’ (Lipsky, 1971). Contemporary policy analysis skills now include: case study methods, interviewing and qualitative data analysis, organizational culture analysis, political feasibility analysis, stakeholder engagement and small-group facilitation.

Whether positivism is still dominant in practice is an open question. There is a rich literature on what policy analysts should do (e.g., Jenkins-Smith, 1982; Jennings, 1987; Torgerson, 1986). But in terms of how policy analysts today actually practice their ‘art and craft’ – beyond the ironic tautology of “policy analysis is what policy analysts do” (Meltsner, 1976: vii) – the empirical evidence is less developed (Dobuzinskis, Howlett & Laycock, 2007). Over a decade ago, Morçöl (2001) found considerable support for positivism among policy professionals, especially among practitioners with educational backgrounds in economics, mathematics, and science. A recent study of the contemporary policy formulation environment and the impact of new technology innovations on the work of the policy analyst (see Longo, 2013) found that, when asked to rank five policy analyst archetypes (‘connector’, ‘entrepreneur’, ‘listener’, ‘synthesizer’, ‘technician’) in order of how they understood and practiced policy analysis, respondents overwhelmingly identified with the ‘synthesizer’ archetype (“consulting various sources to understand how a problem is conceptualized … develop recommended ways to deal with the problem”) and consistently ranked the ‘technician’ archetype (“locating of primary raw data sources in order to undertake statistical policy research”) lowest. Thus, respondents strongly supported a post-positivist, narrative policy analytic approach over quantitative positivist techniques. As will be discussed below, this evolution in what it means to do policy analysis has transpired alongside changes in computer technology that affect how computers are used as policy analytic support tools. This shift also has implications for the future practice of policy analysis, though it seems unlikely that the advance of technology will resolve the positivist / post-positivist stalemate between the rationalist practitioner’s promise of precise (however inaccurate) answers and the post-positivist’s hand-wringing over uncertainty about unknowable outcomes.

Computer-supported policy analysis, the focus of this chapter, inhabits a sparsely populated corner of a vast ‘e-gov’ literature (e-gov being a convenient amalgam of the often undistinguished fields of e-government and e-governance; Marche and McNiven, 2003). A broad definition of e-gov includes all applications of ICTs deployed in service of the business of government and their use in support of public sector and civil society governance activities. Within e-gov generally, a four-concept organizing framework that separates e-democracy, e-policy, e-management and e-service delivery provides specificity for categorizing the different elements, concepts and practices involved (Dobell and Longo, 2011), with computer-supported policy analysis falling within the e-policy category.

The immediate aftermath of World War II saw the development of working electronic digital computers, with the production of commercially available versions following soon after (see e.g., Freed and Ishida, 1995 for a general history). With the combined demands of the welfare state and ongoing Cold War military needs providing a market, large mainframe computers entered the public service in the early 1950s. This first generation of computer-supported policy analysis saw computers helping to perform complex mathematical calculations and to manage large data requirements in support of social welfare policy analysis and service delivery (Gammon, 1954; fn1: Gammon’s 1954 article would likely have remained hidden from any survey of e-gov literature as it, of course, made no use of the term e-government (being written some 40 years before that term emerged) and was cited only once in the academic literature. This early publication was brought to light recently when Richard Heeks suggested via his blog that it could be identified as “the first research paper about e-government” (Heeks, 2011).). With the bipolar transistor replacing the vacuum tube around 1955, a second generation of commercially available computers that were smaller, cheaper and consumed less electricity increased the presence of computer technology in government as elsewhere. The explosion of quantitative technique in the late 1960s described above was driven in large part by the hegemony of positivism and normative economic reasoning, though advances in the computational power of modern computers were instrumental in making the application of these techniques feasible (Bossel, 1977). With the market for the work of policy analysts growing, the tools (computing power) and techniques (quantitative methods) were increasingly in demand. Perhaps the zenith of the integration of computer technology with state governance came with the bold, utopian Project Cybersyn, undertaken in an effort to engineer an economy and society in the service of the Chilean people (Beer, 1974).

Despite their increasing availability, direct access to computers throughout the 1970s was still mediated by experts with programming capabilities, as the skills needed to control a large centralized mainframe computer were still beyond the capacity of most generalists, who could, at most, hope to have ‘dumb terminal’ access. This changed with the development of third-generation computers built on the microprocessor, which led to the microcomputer and ultimately the personal computer (PC). With the availability of desktop PCs at continually lower prices (fn2: IBM announced the PC XT for $7500 in 1983 and Apple debuted the Macintosh at $2400. In 2013 prices, an IBM PC XT would cost approximately $17000 and the Apple Macintosh approximately $5500 (based on Consumer Price Index data from the U.S. Bureau of Labor Statistics). While comparisons to current technology are difficult to make, one could anticipate spending under $1500 for a comparable desktop PC or Apple Macintosh today.), policy analysts throughout most governments were increasingly given first-hand access to computer technology. This happened over a period of years depending on the government, but a general timeline spans the early 1980s (n.b.: 1983 was the year when IBM announced the IBM PC XT and when Time magazine selected the personal computer as its “Machine of the Year”; in 1984 Apple unveiled the Macintosh).
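
The footnote’s price comparison is straightforward index arithmetic. A minimal sketch, assuming approximate annual-average U.S. CPI values (the exact figures would come from the Bureau of Labor Statistics), reproduces the calculation:

```python
# Inflation adjustment via the Consumer Price Index (CPI):
# price_in_2013_dollars = price_in_1983_dollars * (CPI_2013 / CPI_1983)
# CPI values below are approximate annual averages, assumed for illustration.

CPI_1983 = 99.6
CPI_2013 = 233.0

def adjust_to_2013(price_1983):
    return price_1983 * CPI_2013 / CPI_1983

for name, price in [("IBM PC XT", 7500), ("Apple Macintosh", 2400)]:
    print(f"{name}: ${price:,} in 1983 is roughly ${adjust_to_2013(price):,.0f} in 2013 dollars")
# Yields roughly $17,500 and $5,600, in line with the footnote's ~$17,000 and ~$5,500.
```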

This PC era also saw an explosion in commercial competition in operating systems and program applications, each designed to make computers more ‘user-friendly’ and useful to the non-expert. One development of note during the early 1980s PC era was the introduction of the graphical user interface (GUI) as a standard feature of desktop computers, beginning with the Macintosh operating system and followed by the Microsoft Windows interface. ‘Office productivity software’ – which typically included word processing, spreadsheet, presentation, database and drawing programs – became a standard element of desktop computers at this time with the development of AppleWorks, Microsoft Office and similar packages in the mid-1980s.

An important development for the practicing policy analyst undertaking quantitative analysis during the PC era was the desktop availability of statistical analysis programs – notably SPSS (Statistical Package for the Social Sciences) and SAS (Statistical Analysis System). Both products became available on desktop PCs with a GUI and drop-down menus (as opposed to requiring syntax commands) in the 1980s, making their use more accessible to generalists in government. Additionally, PC-based and increasingly ‘user-friendly’ computer simulation tools became accessible to non-experts, both within spreadsheets and as standalone products (e.g., SimHealth was a simulation tool for U.S. health-care system policy analysis; QUEST was a computer-based integrated assessment model and deliberation support tool designed to facilitate discussion about regional sustainability; see Longo, 2003). Desktop ‘decision support tools’ and ‘management information systems’ became increasingly available and further served to put specialized computer tools directly into the hands of analysts and decision makers (Ennals, 1981; Hämäläinen, 1988; Likierman, 1982).
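
As a rough modern analogue to the kind of desktop analysis that SPSS and SAS made accessible to generalists, the sketch below uses Python’s standard scientific libraries (rather than either package’s own syntax) to run descriptive statistics and a simple bivariate regression; the data are invented.

```python
# A rough modern analogue to desktop statistical analysis of the SPSS/SAS era:
# descriptive statistics and a simple bivariate regression. Data are invented.
import numpy as np
from scipy import stats

# Hypothetical per-capita program spending and outcome score for ten regions.
spending = np.array([120, 150, 170, 200, 210, 240, 260, 300, 320, 350])
outcome  = np.array([52,  55,  58,  61,  60,  66,  67,  72,  71,  78])

print("mean spending:", spending.mean(),
      "std dev:", round(spending.std(ddof=1), 1))

result = stats.linregress(spending, outcome)
print(f"slope={result.slope:.3f}, intercept={result.intercept:.1f}, "
      f"r={result.rvalue:.2f}, p={result.pvalue:.4f}")
# The slope estimates the change in outcome per unit of spending; the p-value
# is the familiar test of whether that association could plausibly be zero.
```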

Coupled with the development of the computer as a data processing, office productivity, statistical analysis, simulation and decision support tool in its own right, the linking together of individual computers into a communications network is the other striking feature of the development of ICTs over the post-War period. While the concept of transmitting data between two points predates the development of the digital electronic computer (e.g., using telegraph, telephone and radio media), the modern Internet fundamentally changed the nature of computers from being principally computing devices to being information and communication devices. Starting around 1990, the connecting of government offices to the Internet began to take hold. (fn3: At the same time, the connecting of Internet users to government was also taking place. The early 1990s is generally seen as ushering in a new citizen engagement era with the publicizing of government email addresses (e.g., president@whitehouse.gov was made active in 1993). This was soon followed by the creation of official government websites (http://www.whitehouse.gov/ was launched on October 20, 1994) as static, external communication mechanisms.) Governments began to offer Internet-based email to their employees as early as 1989 (though distributed network email systems had existed for some time before). The emergence of email as a workplace tool allowed policy analysts to communicate electronically rather than through mail, telephone, fax, etc. as previously – both with internal-to-government colleagues and with external actors. Document sharing was often constrained in this early period to plain text transmissions, though increasingly sophisticated encoding protocols allowed for the transmission of more complex documents with their formatting and graphics intact, thus allowing policy analysts to collaborate on formatted documents. With the development of the World Wide Web in 1991, a growing number of Internet-connected computer servers became easily accessible to desktop computer users. With the roll-out of widespread desktop access to the Web to government employees, beginning around the time the first graphical user interface ‘web browser’ became available in 1993, policy analysts gained access to an increasing volume of resources for informing ‘synthesis’-type policy analysis. Since its first appearance in the early 1990s, email has become an indispensable communication and information tool in the policy analysis environment, serving as a document of record in the policy formulation process, allowing for a thoughtful and full response to inquiries from colleagues and stakeholders, and providing a template for answering similar future inquiries (Longo, 2013).

The popularization of ‘the web’ gave rise to a “fairy godmother” period when progressive politicians sought “to associate themselves with the magical effects of her wand”, which promised to transform government and fix a range of administrative inefficiencies (Margetts, 1998: xiii). This ‘information superhighway’ impulse was largely focused on improving citizen service delivery, health care and public education. Following this initial enthusiasm for ‘digital-era governance’ in the web era (Dunleavy, Margetts, Bastow and Tinkler, 2005), and out of the ashes of the 2001 ‘dot-com bubble’, renewed enthusiasm for the Internet began to emerge around 2004 with an approach called ‘Web 2.0’. Web 2.0 connotes a second generation Internet built on the technologies of the first generation web, focused on user control, simple user-publishing of web content, social media communication, participation and peer-to-peer collaboration. Under a Web 2.0 model, the distinction between consumers of information and producers of content is blurred; the one-to-many broadcasting model of the early web now supports many-to-many interactions. Web 2.0 technologies – such as blogs and microblogs, reader commentary, wikis, social networking services, content sharing, collaboration and tagging – continue to grow in popularity and function. While principally used for social activities (e.g., Facebook and Twitter continue to be cited as principal examples of Web 2.0 applications), Web 2.0 has also been deployed in a number of corporate environments in support of operations management, collaboration and knowledge sharing. ‘Gov 2.0’ – the application of Web 2.0 tools and approaches to public sector governance activities – has emerged in recent years as a sub-domain of the e-government literature (Morison, 2010). Where governments have adopted Gov 2.0, it has generally been in support of external communication strategies (e.g., Wyld, 2007) and as a platform for citizen engagement (Chadwick, 2009). In terms of the policy analysis function, however, Gov 2.0 technologies can be deployed to broaden policy development through corporate collaboration and knowledge sharing platforms. These are web-based applications designed for use in a corporate context (as opposed to open access social tools like Facebook and Twitter) that facilitate collaboration without relying on existing formal work-flows or pre-defined hierarchical structures. The tool might be a wiki (a document that any user can change or add to), a blog (a statement, paragraph or longer document that any authorized user can comment on) or a related forum or platform. These workspaces can be used to pose questions, connect to knowledge sources, initiate discussions, or co-create and collaborate on documents (Fyfe and Crookall, 2010). The key is that users can easily start conversations across their entire network, and other users within the organization can join that conversation whether they are known to the originator or not, without the need for corporate approval or technical web support (McAfee, 2006).

Lastly, consider the recent rise of access to email and the World Wide Web using handheld mobile devices, such as smartphones and tablet computers, connected wirelessly to a network. From their first use in the early 1990s, Web and email access was generally limited to desktop or laptop computers with fixed-line Internet service. With the release of the Blackberry smartphone in 2003, governments rapidly adopted these mobile email devices as a way of keeping senior knowledge workers in constant contact regardless of their location. Additional smartphone platforms were released in 2007, and touchscreen tablet computers in 2010. With these devices available to consumers at lower prices than traditional PCs, the movement towards the mobile Internet has accelerated. (fn4: With employees now having mobile access to the Internet on devices that rival in power their workplace-supplied computers, governments are recognizing that prohibiting their use in the workplace is problematic. Instead, some governments are adopting ‘bring your own device’ (BYOD) strategies that seek to accommodate the variety of ways that knowledge workers use ICTs while also ensuring the security and confidentiality of that work (e.g., British Columbia, 2013; United States, 2012).) The impact of wireless email and web access on computer-supported policy analysis is mixed: in many respects, mobile technology has not changed the methods or substance of policy analysis, but has simply increased the speed with which information is communicated between colleagues (and with it the expectations of how quickly requests will be responded to), and has extended the notion of what reasonably constitutes ‘working hours’ and ‘the workplace’ (Longo, 2013). However, mobile technology does represent a continuation of the general trend in ICTs over the past sixty years towards continually smaller, increasingly powerful, ubiquitous and ‘invisible’ devices integrated into everyday work and life. Throughout this evolution of the modern computer, from room-sized installations in which expert operators would manage complex calculations to handheld wireless devices at the disposal of any user, the use of ICTs in support of the practice of policy analysis has also evolved.

The foregoing was intended as a high-level survey of the sweep of history that Rod Dobell has experienced firsthand, in which both ICTs and the practice of policy analysis in government have evolved. While these histories are strongly entwined, the dominant ‘social construction of technology’ interpretation since the 1980s has been that technology has not so much driven public sector organizational change (including the practice of policy analysis) as facilitated the implementation of intended new directions (Bretschneider and Mergel, 2011; Wynne, 2010). Whether the ubiquitous infiltration of the Internet and the emergence of new technology-enabled governance models represent sufficiently transformative developments to cause us to reappraise the social construction perspective will be addressed in the second part of this chapter.

Towards a Renewal of Computer-Supported Policy Analysis

Today, the emergence of a second generation Internet built upon an architecture of user participation, coupled with ubiquitous cloud computing, advances in data analytic capacity and massive data availability, portends a transformation in computer-supported policy analysis. This section briefly considers the possible implications for practicing policy analysts of two emerging phenomena: enterprise social collaboration, and the open data movement.

Enterprise Social Collaboration

The policy analysis process in large government bureaucracies has evolved over its sixty-year history to suit the information needs of those organizations and the hierarchical structures they are built on. Traditionally, governments have dealt with policy problems by breaking the organization up into distinct units, taking a quasi-militaristic hierarchical approach with ministries, divisions and branches, in order to make sense of the breadth of policy issues governments are responsible for and to coordinate the work of their employees. In order to respond to the decision-support needs of Ministers and senior executives, policy analysis assignments cascade through this hierarchy from superiors to subordinates until they become the responsibility of a special class of public servant called ‘policy analysts’. Under the ‘synthesizer’ model of policy analysis that now appears dominant, the individual policy analyst then draws upon the resources at her disposal within the organization and beyond (likely using resources accessed over the Internet) to assemble the information necessary to produce an output such as a briefing note. This document then flows back up through the hierarchy, with each successive management layer approving the work of their subordinates, seeking to add value to what was done before them and serving as an information filtering mechanism. Once the document has worked its way back through the hierarchy, the information is available to the individual it was intended for. This process reflects current practice (Longo, 2013) and is reinforced through the training of policy analysts (e.g., Bardach, 2000). The current technology configuration sketched above seems purpose-built to facilitate this process: email (whether via a mobile device or not) provides the mechanism for communicating the assignment; an Internet-connected PC with a web browser and office productivity software provides the platform upon which the analyst can acquire the material on which to base her analysis and shape that information into a presentable synthesis document; and the organization’s network technology provides the means for transmitting the finished product back up through the hierarchy.

While this system works in many respects, the contemporary policy analysis and briefing process as revealed in practice – assigning and drafting briefing notes based on hierarchical management, and having policy analysts work in isolation on particular assignments – shows some weaknesses, especially in the context of complex public policy challenges that require a coordinated cross-governmental response drawing upon the input of multiple actors. Complex policy problems rarely fall entirely within the organizational divisions established in a government, and individual policy analysts within government are unlikely to have access to the full breadth of relevant intelligence necessary to fully comprehend and address a policy problem, even with access to the resources available via the Internet. (fn5: Another problem with the current policy analysis process lies in the very ICT approach by which the process is currently coordinated, i.e., corporate email systems, individual PCs and network shared disk drives, which have come to be typified by an unmanageable volume of email messages, the inability of recipients to easily distinguish important from unimportant emails, the closed nature of email that makes its contents inaccessible to non-recipients, and uncertainty as to the location of the most up-to-date and authoritative version of a document (Nairn, 2011; Thompson, 2011).)

One approach to dealing with complexity in a public policy context is horizontality: the act of working across the various ministries and divisions of a government in order to harness the organization’s capacity and resources and direct them towards an appropriate response to the complex problem (Parsons, 2004; 6, 2004). One prominent mechanism for addressing the horizontality challenge is the promotion of greater organization-wide collaboration, knowledge sharing and active knowledge seeking amongst a network of knowledge workers. Enterprise collaboration platforms have emerged in recent years as a means for such collaboration and knowledge sharing, and as a possible technology solution for addressing some of the problems that modern knowledge organizations like government face. Examples of the use of enterprise collaboration systems in government continue to grow, including leading work amongst government agencies in Canada and the United States (Akerley, Cowan and Belanger, 2008; Wigand, 2010). Such tools allow knowledge workers within organizations to connect with each other over a social networking platform and, building on those connections, share knowledge and collaborate to greater effect than if they were to work in isolation. By augmenting the organizational social network with a technology-based social networking platform, the knowledge worker is better positioned to access more of the knowledge resources embodied in an organization. Enterprise collaboration tools also propose to help solve the email problem that has come to plague many organizations by moving electronic communications and document sharing into a shared collaborative space.

In some respects, enterprise collaboration systems represent a minor advance over previous knowledge management systems, which were found to have limited effect on corporate performance (Grudin, 1988). Enterprise collaboration systems in contemporary practice will also face adoption challenges, among them: employee uncertainty about whether knowledge can or should be shared across governments; and the possible reluctance of women to share knowledge in male-dominated cultural settings (Longo, 2013). However, in the context of the policy analysis system, these tools can have a potentially profound impact. Collaboration systems built on social networks and embodying principles of openness across the organization can serve to ‘flatten’ traditional hierarchies by providing a platform for bringing knowledge workers at all levels of an organization into a collaborative space. And in the modern government organization, in the age of the ‘street level bureaucrat’, it is hard to imagine a public servant who is not a ‘knowledge worker’. Building on that premise, under an enterprise collaboration system, every workplace interaction, every meeting with a stakeholder group, every transaction with a citizen, every experience in a worker’s past and every new bit of information collected adds to the knowledge resources of the organization. Anyone who doubts that open collaboration systems can effectively solve knowledge and workflow coordination problems need look no further than successes such as Wikipedia or a growing number of open source software products (Benkler, 2006; Shirky, 2008); the challenge for the policy analysis system will be in developing a comparable collaboration infrastructure that can efficiently evaluate policy contributions from outside the traditional policy network systems. Whereas in the past, knowledge management systems sought to capture those information collection moments and organize them in a knowledge repository that other knowledge workers could draw on, enterprise collaboration systems are built on the understanding that knowledge resides in people, not machines (Hinds & Bailey, 2003). And if the system can be engineered so that someone who can benefit from some existing knowledge resource is connected to someone in the organization who has that knowledge, a profound transformation of the organization can occur. In the realm of complex policy problems requiring horizontal solutions, governments can ill afford to waste the knowledge that currently sits untapped throughout their organizations.
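
The kind of seeker-to-holder matching described above can be illustrated with a deliberately simple sketch; real enterprise collaboration platforms are far richer, and the names, profiles and topic tags below are invented.

```python
# A deliberately simple sketch of expertise matching in an enterprise
# collaboration system: route a question to colleagues whose self-declared
# topic tags overlap most with the question's tags. All data are invented.

profiles = {
    "analyst_a": {"watershed management", "first nations consultation"},
    "analyst_b": {"health economics", "program evaluation"},
    "inspector_c": {"aquaculture licensing", "watershed management"},
}

def suggest_contacts(question_tags, profiles, top_n=2):
    """Return up to top_n colleagues ranked by tag overlap with the question."""
    scored = [(len(question_tags & tags), person) for person, tags in profiles.items()]
    scored = [(score, person) for score, person in scored if score > 0]
    return [person for score, person in sorted(scored, reverse=True)[:top_n]]

print(suggest_contacts({"watershed management", "licensing"}, profiles))
# -> the colleagues most likely to hold the relevant, otherwise untapped, knowledge
```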

The Open Data Movement

Governments collect, generate and compile vast amounts of digitized data continually: as a consequence of the business of governing (e.g., taxation and government expenditure data, economic activity data; Cate, 2008; Lieberman, 2002), through records accumulated by regulators and service delivery agencies (e.g., natural environment system conditions, public health; Hodge and Longo, 2002), and through purposeful data-collection activities aimed at fueling policy-oriented research (e.g., census and survey work by public statistics agencies; Dillon, 2010). Building on these traditional approaches to data collection, new sources of rich data now come from expanded deployments of instrument clusters (Barnes et al., 2011), location-based data related to the explosion in mobile web technology (Ratti, Pulselli and Williams, 2006), advances in remote sensing (Khorram, Koch, Wiele and Nelson, 2012) and the increased routinization of administrative data collection (Dawes, 2012), all of which continue to increase the flow and stock of data available to support problem identification and analysis. These advances have been labelled the era of ‘massive data’ (Brown, 2009; Science, 2011) and have given rise to an enthusiastic ‘big data analytics’ movement (LaValle, et al., 2011).

Normally used by government for its own exclusive purposes, or occasionally treated as a commercial product with revenue-generating potential (Klinkenberg, 2003), the masses of data held by governments have become the object of an ‘open data movement’ that has grown in recent years, calling for governments to provide free Internet-based access to these databases (with appropriate action taken to protect privacy and confidentiality). As a political movement, calls for greater openness in government-held data have generated significant momentum in a short period (Ginsberg, 2011). From the advocacy perspective, open data represents a technology-driven political movement fueled by the general access-to-information expectations that web users now hold. Governments’ embrace of ‘open data’, in turn, appears to be motivated by three considerations: that application developers will use government data to enhance citizen services; that government transparency and accountability will be enhanced; and that outside-of-government policy networks will be expanded (Longo, 2011). The focus here is on this third motivation.

Whereas the movement towards enterprise collaboration systems seeks to broaden the notion of who inside the organization can contribute policy-relevant knowledge, the expanded policy networks motivation of the open data movement rests on the idea that ‘policy analysts’ operating outside of government might reveal new public policy relevant insights through the mining of massive open government data, and substantially increase the limited policy analytical capacity in government (Bertot, Jaeger, Munson and Glaisyer, 2010; Eaves, 2010). Working independently or connected through collaborative tools, using powerful data analysis software and traditional quantitative techniques, and assessing multiple data sets in previously unconsidered ways, these outside analysts could generate previously unrevealed insights emerging from a collective policy capacity (Napoli and Karaganis, 2010). There is currently little empirical evidence on who these non-government analysts are and how they are using open data to perform policy analysis, though some preliminary research has found them working as researchers in policy-oriented think tanks, civil society organizations and private sector firms, and as academics, ‘data journalists’, data entrepreneurs, computer developers, ‘hackers’ and citizen hobbyists (Harris, Vaisman and Murphy, 2012). The most prominent approaches to analyzing open data focus on advances in data visualization techniques and increasingly geotagged data, expanding the possibilities for drawing inferences from the spatial and visual representation of data (Viégas and Wattenberg, 2010). (fn6: Despite several efforts to find examples of how independent analysts were using more traditional quantitative methods to analyze open government data, few were found (for an exception, see Reggi and Ricci, 2011).)
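
As a hedged illustration of the exploratory, visualization-led analysis described here, the sketch below loads a hypothetical open dataset of municipal service requests (the file name and column names are invented) and produces a simple visual summary; any real open data portal would require its own field names and documentation.

```python
# Illustrative only: exploratory analysis of a hypothetical open government
# dataset of service requests. File name and column names are invented.
import pandas as pd
import matplotlib.pyplot as plt

# e.g., a CSV downloaded from a municipal open data portal
requests = pd.read_csv("service_requests.csv", parse_dates=["opened_date"])

# Summarize request volume by neighbourhood and by month.
by_area = requests.groupby("neighbourhood").size().sort_values(ascending=False)
by_month = requests.set_index("opened_date").resample("M").size()

by_area.head(10).plot(kind="barh", title="Service requests by neighbourhood (top 10)")
plt.tight_layout()
plt.savefig("requests_by_neighbourhood.png")
# The monthly series (by_month) could likewise be plotted, or joined against
# other open datasets, which is where spatial and visual inference begins.
```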

In a similar way to how concerns over the appropriateness of quantitative techniques for investigating social behaviour and conditions gave rise to the post-positivist critique of the policy sciences, questions regarding the competency and honesty of the use of data by independent public policy advocates must be considered. Without an understanding of the processes that produced the data, it is not possible for the open data analyst to know whether the data are of sufficient quality and appropriate to the analysis being undertaken (Dawes, 2012). While policy analysts and decision-makers inside government do not have an unassailable right to control policy debates, there is a potential downside to the rise of policy advocates armed with massive data and impressive analytical tools: responding to erroneous, tendentious or self-serving claims by policy advocates already occupies a fair amount of time in the contemporary world of the policy analyst (Longo, 2013). If such claims are made more persuasive by being fueled by data and analytics, the challenge to the inside-government policy analyst will be increased. This is not meant as an argument against transparency, as policy analysis based on open data can serve to increase the competitiveness of alternative policy analysis perspectives in the marketplace of ideas, leading ultimately to better solutions (Dobell and Zussman, 1981). However, in order to compete effectively in that arena, and appropriately evaluate solutions emanating from analytic work conducted outside of government, the public sector policy analyst should take note of the growth in big data analytics. Despite the lack of affinity for the policy ‘technician’ archetype noted above (fn7: Despite the preference for ‘synthesizer’ type policy analysis and the eschewing of the ‘technician’ archetype, practicing policy analysts in government appear not to be totally indifferent to large datasets: with the launch of Data BC <http://data.gov.bc.ca>, an initiative of the British Columbia Government to “manage government data as a strategic asset, and provide data services to the citizens of British Columbia and the public sector”, about 25% of the visitors to the website during its first six months of operation came from Government of British Columbia IP addresses (Dominic Seiterle, personal communication, August 13 2012). There is, of course, an irony in public servants using a public-facing Internet site to access data sets contributed by their colleagues in the same government; however, the point still remains that this provides some evidence that ‘data’ is of interest to public servants (though in what capacity and for what purpose these visitors accessed Data BC cannot be deduced).), a revival of policy analysis skills from the discipline’s golden age, updated to account for advances in analytical techniques and tools, seems timely in order to appropriately evaluate and respond to the work of citizen policy analysts.

The inside-government policy analysis system should not see the open data movement simply as a challenge to its hegemony, but as a prompt to consider re-embracing its quantitative roots. As big data policy analysis is promoted outside of government, the potential for it to influence how policy analysis is done inside government is profound. In some ways, the decline of the quantitative methods that once typified the policy analysis profession, following the rise of post-positivism, was a reflection of limitations in the tools of analysis and a lack of data availability. In the massive data / big data analytics era, policy ideas can become subject to continual micro-experimentation in order to propose, pilot, test, evaluate and redesign policy interventions. With a sense of inquisitiveness and openness on the part of the policy analyst profession, the massive data era might serve to revive positivist policy traditions and the use of quantitative techniques inside government, and lead to a new generation of computer-supported policy analysis, moving beyond prospective policy analysis (Rose, 1991) to embrace emerging approaches such as real-time policy experimentation (Paquet, 2009), robust adaptive planning (Lempert, Popper and Bankes, 2002) and massive scenario generation (Davis, Bankes and Egner, 2003). With an appreciation of the lessons of history embodied in the post-positivist perspective (fn8: This is, admittedly, a big “if”. Dawes (2012) expresses the concern that the lessons of the post-positivist reformulation of policy analysis have been lost in the rush to publish open government data that is assumed to be “intrinsically better than processed data, and that data in electronic form suitable for delivery on the Internet is superior to other forms and formats for information.”), a renewed policy analysis profession, more analytically diverse and data-supported, might emerge.
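
What such micro-experimentation might look like analytically can be sketched simply: compare an outcome of interest between a pilot group and a comparison group, and feed the result back into redesign. The data below are simulated and the effect is invented; real policy experiments of course raise design and ethical questions well beyond this illustration.

```python
# Minimal sketch of evaluating a hypothetical policy pilot: compare outcomes
# between a pilot group and a comparison group. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcome scores (e.g., weeks to re-employment) for two groups.
comparison = rng.normal(loc=20.0, scale=5.0, size=200)
pilot      = rng.normal(loc=18.5, scale=5.0, size=200)   # invented 1.5-week improvement

t_stat, p_value = stats.ttest_ind(pilot, comparison)
effect = comparison.mean() - pilot.mean()

print(f"estimated improvement: {effect:.2f} weeks (t={t_stat:.2f}, p={p_value:.4f})")
# In a continual-experimentation setting, this evaluate step would feed back
# into redesigning the intervention and launching the next pilot.
```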

Concluding Thoughts

Computer technology and policy analysis share an intertwined history, both marked by a democratizing trend towards user-orientation and accessibility. As technology development continues, and organizations and governance expectations change, there is potential for a reconfiguration of policy analysis. Enterprise social collaboration can serve to flatten the organizational hierarchy and expand the notion of who in the organization might contribute to policy analysis. The open data movement can serve to revive the quantitative tradition in policy analysis and prompt the inside-of-government policy analyst to reconsider the profession’s drift away from that ‘technician’ tradition. In both cases, a central challenge will be in developing better methods for managing and evaluating these additional contributions to the policy process, lest these new pathways for making policy-relevant contributions simply result in more noise in an already complex environment. While there are some parallels between open source software programming and the managing of policy analysis processes (Shirky, 2012), there do not currently exist automated mechanisms for comparing the value of one interjection vis-à-vis another.

Will the future of computer-supported policy analysis be so unlike its past as to represent a fundamental transformation of practice? Will enterprise social collaboration and the open data movement lead to a fundamental reconfiguration of policy analysis? No, but they will prompt the profession both to reconsider its movement away from quantitative methods and to continue its post-positivist impulse of ‘democratizing’ policy analysis. These twin directions should not be seen as antithetical; Fischer’s (1980) proposal for a post-positivist policy analysis never denied the need for rigorous analysis, but instead called for an appreciation of the presence of normative judgement in the policy analysis process. Both enterprise collaboration systems and the open data movement respond to the post-positivist impulse to democratize policy analysis, dispersing the ability to provide policy-relevant insights throughout governing organizations and to those outside of government. Following both routes can serve to improve the practice of policy analysis.

 

References

Akerley, Marj, Peter Cowan and Anna Belanger. 2008. “Collaborative Revolution.” Canadian Government Executive. October 8, 2008.

Amy, Douglas J. 1984. “Towards a Post-Positivist Policy Analysis.” Policy Studies Journal, 13: 207–211.

Bardach, Eugene. 2000. A practical guide for policy analysis: The eightfold path to more effective problem solving. Berkeley, CA: Berkeley Academic Press.

Barnes, Chris R., Mairi Best, Fern Johnson, L. Pautet, Benoit Pirenne and the Founding Scientists of NEPTUNE Canada. 2011. “NEPTUNE Canada: A real-time view across ocean zones into biota and their environment.” World Conference on Marine Biodiversity, Aberdeen.

Beer, Stafford. 1974. Designing Freedom. Toronto: CBC Learning Systems.

Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale.

Bertot, John Carlo, Paul T. Jaeger, Sean Munson, Tom Glaisyer. 2010. “Engaging the Public in Open Government: Social Media Technology and Policy for Government Transparency.” Technology Mediated Social Participation Workshop Discussion Paper. http://www.tmsp.umd.edu/TMSPreports_files/6.IEEE-Computer-TMSP-Government-Bertot-100817pdf.pdf

Bossel, H. (ed.) 1977. Concepts and Tools of Computer Assisted Policy Analysis. Birkhauser Verlag.

Bretschneider, Stuart I. and Ines Mergel. 2011. “Technology and Public Management Information Systems.” Chapter 12 in Donald C. Menzel and Harvey L. White (eds.) The state of public administration: issues, challenges, and opportunities. New York: M. E. Sharpe.

Brewer, Garry D. 1974. “The policy sciences emerge: to nurture and structure a discipline.” Policy Sciences, 5(3): 239-244.

British Columbia. 2013. “Information Security Program for the BC Government.” Office of the Chief Information Officer. Victoria, BC: Ministry of Citizens’ Services and Open Government. http://www.cio.gov.bc.ca/local/cio/informationsecurity/documents/InformationSecurityProgram.pdf

Brown, David J. 2009. “International Council for Scientific and Technical Information (ICSTI) Annual Conference – Managing Data for Science.” Information Services and Use. Volume 29, Number 4, pp. 103-121.

Bush, Vannevar. 1945. “As We May Think.” Atlantic Monthly.

Cate, F. H. 2008. “Government Data Mining: The Need for a Legal Framework.” Harvard Civil Rights and Civil Liberties Law Review. Volume 43, no. 2, pp. 435-489.

Davis, Paul K., Steven C. Bankes, Michael Egner. 2003. “Enhancing Strategic Planning with Massive Scenario Generation: Theory and Experiments.” National Security Research Division. RAND Technical Report.

Dawes, Sharon. 2012. “A realistic look at open data.” Using Open Data Workshop, Brussels, June 19-20, 2012. http://www.w3.org/2012/06/pmod/pmod2012_submission_38.pdf

deLeon, P. and E.S. Overman. 1998. “A History of the Policy Sciences.” In J. Rabin, W.B. Hildreth, and G.J. Miller (eds.) Handbook of Public Administration, 2nd edition. New York: Marcel Dekker.

Dobell, Rodney A. and David Zussman. 1981. “An evaluation system for government: If politics is theatre, then evaluation is (mostly) art.” Canadian Public Administration. 24(3):404-427.

Dobuzinskis, L., M. Howlett and D. Laycock. 2007. Policy Analysis in Canada: The State of the Art. Toronto: University of Toronto Press.

Dunleavy, Patrick, Helen Margetts, Simon Bastow and Jane Tinkler. 2005. “New Public Management Is Dead—Long Live Digital-Era Governance.” Journal of Public Administration Research and Theory, 16:467–494.

Dunn, W.N. 1981. Public policy analysis: An introduction. Englewood Cliffs, NJ: Prentice-Hall.

Durning, Dan. 1999. “The transition from traditional to postpositivist policy analysis: A role for Q-methodology.” Journal of Policy Analysis and Management, 18: 389–410.

Eaves, David. 2010. “After the Collapse: Open Government and the Future of Civil Service.” In Lathrop, Daniel and Laurel Ruma (eds). 2010. Open Government: Collaboration, Transparency and Participation in Practice.  Sebastopol, CA: O’Reilly Media, Inc.

Ennals, Ken. 1981 “The Management Information System for Ministers in the Department of the Environment.” Local Government Studies, 7:1, 39-46.

Fenn, Jackie and Mark Raskino. 2011. Understanding Gartner’s Hype Cycles, 2011. Gartner, Inc.

Fischer, Frank. 1980. Politics, Values, and Public Policy; The Problem of Methodology. Boulder, Colorado: Westview Press.

Fischer, Frank. 2003. Reframing public policy: discursive politics and deliberative practices. New York: Oxford University Press.

Forrester, J. W. 1971. World Dynamics. Wright-Allen Press.

Fountain, Jane. 2001. Building the Virtual State: Information Technology and Institutional Change. Washington, DC: Brookings Institution Press.

Freed, Les and Sarah Ishida. 1995. The History of Computers. Emeryville, California: Ziff-Davis.

Fyfe, Toby, and Paul Crookall. 2010. Social media and public sector policy dilemmas. Toronto: Institute of Public Administration of Canada.

Gammon, Howard. 1954. “The Automatic Handling of Office Paper Work.” Public Administration Review. Vol. 14, No. 1 (Winter, 1954), pp. 63-73.

Ginsberg, Wendy R. 2011. “The Obama Administration’s Open Government Initiative: Issues for Congress.” CRS Report for Congress 7-5700 R41361 January 28 2011. Washington, D.C.: United States Congressional Research Service.

Grudin, Jonathan. 1988. “Why CSCW applications fail: problems in the design and evaluation of organizational interfaces.” Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work, pp. 85-93.

Hämäläinen, Raimo P. 1988. “Computer Assisted Energy Policy Analysis in the Parliament of Finland.” Interfaces, Vol. 18, No. 4 (Jul. – Aug., 1988), pp. 12-23.

Harris, Harlan, Marck Vaisman and Sean Murphy. 2012. “Data Scientists survey results teaser.” Data Community DC Blog. August 10 2012. http://datacommunitydc.org/blog/2012/08/data-scientists-survey-results-teaser/

Heeks, Richard. 2011. “The First e-Government Research Paper.” ICTs for Development blog. April 30 2011. http://ict4dblog.wordpress.com/2011/04/30/the-first-e-government-research-paper/

Hodge, R. A. and J. M. Justin Longo. 2002. “International monitoring for environmental health surveillance.” Canadian Journal of Public Health. Volume 93, pp. s16-s23

Jenkins-Smith, H. 1982. “Professional Roles of Policy Analysts.” Journal of Policy Analysis and Management 2(1), pp. 88-100.

Jennings, B. 1987. “Policy Analysis: Science, Advocacy, or Counsel?” Research in Public Policy Analysis and Management, vol. 4, pp. 121-134.

Khorram, Siamak, Frank H. Koch, Cynthia F. Wiele, and Stacy A.C. Nelson. 2012. “Future Trends in Remote Sensing.” Remote Sensing (2012): 125-129.

Klinkenberg, Brian. 2003. “The true cost of spatial data in Canada.” The Canadian Geographer, 47: 37–49.

Lasswell, H.D. 1951. “The policy orientation”, pp. 3-15 in D. Lerner and H.D. Lasswell (eds.), The Policy Sciences. Stanford, CA: Stanford University Press.

LaValle, Steve, Eric Lesser, Rebecca Shockley, Michael S. Hopkins, and Nina Kruschwitz. 2011. “Big data, analytics and the path from insights to value.” MIT Sloan Management Review 52, no. 2 (2011): 21-32.

Lempert, Robert, Steven Popper and Steven Bankes. 2002. “Confronting Surprise.” Social Science Computer Review 2002 20

Lieberman, Evan S. 2002. “Taxation data as indicators of state-society relations: possibilities and pitfalls in cross-national research.” Studies in Comparative International Development (SCID) 36.4: 89-115.

Likierman, Andrew. 1982. “Management Information for Ministers: the MINIS System in the Department of the Environment.” Public Administration, Vol. 60 (Summer 1982), pp. 127-142.

Lipsky, Michael. 1971. “Street-level bureaucracy and the analysis of urban reform.” Urban Affairs Review 6.4: 391-409.

Longo, Justin. 2003. “Reflections on the Informal User Testing of a Computer-Based Simulation Tool as a Potential Aid to Policy Analysis”. Paper presented at the WTMC Summer School, The Netherlands. September 9 – 12, 2003.

Longo, Justin. 2011. “#OpenData: Digital-Era Governance Thoroughbred or New Public Management Trojan Horse?” Public Policy and Governance Review. Vol. 2, no. 2, pp. 38-51. Spring 2011.

Longo, Justin. 2013. Towards Policy Analysis 2.0. PhD dissertation, University of Victoria. Ann Arbor: ProQuest/UMI. (Publication No. AAT *******.)

Marche, Sunny, and James D. McNiven. 2003. “E‐Government and E‐Governance: The Future Isn’t What It Used To Be.” Canadian Journal of Administrative Sciences. Vol. 20, no. 1, pp. 74-86.

Margetts, Helen. 1998. Information technology in government: Britain and America. London: Routledge.

McAfee, Andrew P. 2006. “Enterprise 2.0: The Dawn of Emergent Collaboration.” MIT Sloan Management Review. Spring 2006, Vol. 47, no. 3.

Meadows, D. H. et al. 1972. The Limits to Growth. New York: Universe Books.

Meltsner, A.J. 1976. Policy Analysts in the Bureaucracy. Berkeley, CA: University of California Press.

Mesarovic, Mihajlo and Eduard Pestel. 1974. Mankind at the Turning Point: The Second Report to the Club of Rome. New York: E.P. Dutton.

Morçöl, Göktug. 2001. “Positivist beliefs among policy professionals: An empirical investigation.” Policy Sciences, 34: 381-401.

Morison, John. 2010. “Gov 2.0: Towards a User Generated State?” The Modern Law Review 73.4: 551-577.

Nairn, Geoff. 2011. “The trouble with office email.” Financial Times. February 17, 2011.

Napoli, Philip M. and Joe Karaganis. 2010. “On making public policy with publicly available data: The case of U.S. communications policymaking.” Government Information Quarterly. Volume 27, Issue 4, Special Issue: Open/Transparent Government, October 2010, pp. 384-391.

Pal, L.A. 2009. Beyond policy analysis: Public issue management in turbulent times. Toronto: Nelson.

Paquet, Gilles. 2009. Crippling Epistemologies and Governance Failures: A Plea for Experimentalism. Ottawa: University of Ottawa Press.

Parson, E.A. and K. Fisher-Vanden. 1997. “Integrated Assessment Models of Global Climate Change.” Annual Review of Energy and the Environment. Vol. 22, pp. 589-628.

Parsons, Wayne. 2004. “Not Just Steering but Weaving: Relevant Knowledge and the Craft of Building Policy Capacity and Coherence.” Australian Journal of Public Administration, 63: 43–57.

Pressman, Jeffrey L. and Aaron Wildavsky. 1973. Implementation: How Great Expectations in Washington Are Dashed in Oakland; Or, Why It’s Amazing that Federal Programs Work at All, This Being a Saga of the Economic Development Administration as Told by Two Sympathetic Observers Who Seek to Build Morals on a Foundation of Ruined Hopes. Berkeley: University of California Press.

Prince, Michael. 2007. “Soft Craft, Hard Choices, Altered Context: Reflections on Twenty-Five Years of Policy Advice in Canada.” Chapter 7 in Policy Analysis in Canada: The State of the Art. Laurent Dobuzinskis, Michael Howlett, and David Laycock (eds.). IPAC Series in Public Management and Governance. Toronto: University of Toronto Press.

Quade, E.S. 1975. Analysis for public decisions. New York: Elsevier.

Quade, E.S. 1980. “Pitfalls in Formulation and Modeling.” Chapter 3 in Giandomenico Majone and Edward S. Quade (eds.) Pitfalls in Analysis. IIASA and John Wiley & Sons.

Radin, Beryl A. 2006. Challenging the Performance Movement: Accountability, Complexity, and Democratic Values. Washington, DC: Georgetown University Press.

Ratti, Carlo, Dennis Frenchman, Riccardo Maria Pulselli and Sarah Williams. 2006. “Mobile Landscapes: using location data from cell phones for urban analysis.” Environment and Planning B: Planning and Design, Volume 33, pages 727-748.

Reggi, Luigi, and Chiara Ricci. 2011. “Information strategies for open government in Europe: EU regions opening up the data on structural funds.” Electronic Government: 173-184.

Rose, Richard. 1991. “What is Lesson-Drawing?” Journal of Public Policy 11(1): 3–30.

Science. 2011. “Challenges and Opportunities. Introduction to a Special Online Collection: Dealing with Data.” Science 11 February 2011: 331 (6018), 692-693. http://www.sciencemag.org/site/special/data/

Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin.

Shirky, Clay. 2012. “How the Internet will (one day) transform government.” [Video file]. Retrieved from http://www.ted.com/talks/clay_shirky_how_the_internet_will_one_day_transform_government.html

Thompson, Derek. 2011. “The Case For Banning Email at Work.” The Atlantic. December 1, 2011.

Torgerson, D. 1986. “Between Knowledge and Power: Three Faces of Policy Analysis.” Policy Sciences 19, pp. 33-59.

United States. 2012. “Bring Your Own Device: A Toolkit to Support Federal Agencies Implementing Bring Your Own Device (BYOD) Programs.” Product of the Digital Services Advisory Group and Federal Chief Information Officers Council. Washington, D.C.: The White House. August 23, 2012. http://www.whitehouse.gov/digitalgov/bring-your-own-device

Viégas, Fernanda and Martin Wattenberg. 2010. “Case Study: Many Eyes.” In Lathrop, Daniel and Laurel Ruma (eds). Open Government: Collaboration, Transparency and Participation in Practice. Sebastopol, CA: O’Reilly Media, Inc.

Weimer, David and Aidan R. Vining. 2010. Policy Analysis: Concepts and Practice. 5th Edition. London: Longman.

Wigand, F. Dianne Lux. 2010. “Adoption of Web 2.0 by Canadian and US Governments.” Comparative E-Government: 161-181.

Wildavsky, Aaron. 1978. “Policy Analysis is What Information Systems Are Not.” Accounting, Organizations and Society, Vol. 3, No. 1, pp. 77-88.

Wyld, David C. 2007. “The Blogging Revolution: Government in the Age of Web 2.0.” IBM Center for The Business of Government.

Wynne, B. 2010. “Foreword.” In Andrew Feenberg, Between Reason and Experience: Essays in Technology and Modernity. Cambridge, MA: MIT Press.

Yang, Kaifeng. 2007. “Quantitative methods for policy analysis.” Chapter 23 (pp. 349-368) in Frank Fischer, Gerald J. Miller, and Mara S. Sidney, eds. Handbook of Public Policy Analysis: Theory, Politics, and Methods. Boca Raton, FL: CRC Press.

6, Perri. 2004. E-Governance: Styles of Political Judgement in the Information Age Polity. London: Palgrave Macmillan.
