Calls to “do something” about modern digital platforms — Amazon, Facebook, Google, and the like — are endemic today. These companies’ size — among the largest in the world — endows them with the superficial appearance of market power, providing competitors and advocates with the rhetorical basis for antitrust action against them. Moreover, the same high-tech, scale industries that are likely to evoke “big-is-bad” antitrust concerns are also likely to raise important social, legal, and political questions. The telephone and the railroad reshaped society; the computer began a reshaping of society that the personal computer continued and that is still ongoing in today’s Internet era. Such pervasive consequences don’t go unnoticed, nor are they uniformly welcomed.
The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of the tech industry are framed in antitrust terms. Abetted by a growing chorus of scholars on both the left and right, proponents of activist antitrust are now calling for invasive, “public-utility-style” regulation or even the dissolution of the world’s most innovative companies essentially because they seem “too powerful.” Unconstrained by a sufficient number of competitors, the argument goes, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of “democracy.” What is needed, they say, is industrial policy that shackles large companies or mandates more, smaller firms.
But this view contradicts the past century’s worth of economic experience and learning. It would require jettisoning the crown jewel of modern antitrust law — the consumer welfare standard — and returning antitrust to an earlier era in which inefficient firms were protected from the burdens of competition at the expense of consumers. And in doing so it would put industrial regulation in the hands of would-be central planners, shielded from any politically accountable oversight.
Adapting to the changes wrought by these industries is one of the defining challenges of the 21st century. But antitrust law is not the proper vehicle for addressing open-ended issues related to social and political values and disconnected from the economic effects of restraints on competition.
Language, Power and Privacy
Surveillance of citizens is a clear manifestation of government power. The act of surveillance is generally deemed acceptable in a democratic society where it is necessary to protect the interests of the nation and where the power is exercised non-arbitrarily and in accordance with the law. Nevertheless, the practice of surveillance creates stark challenges for the constraining forces of transparency and accountability.
A number of features of surveillance law, surveillance language, and the distribution of power perpetuate the existing surveillance paradigm. Using case studies from the US, the UK, and Ireland, Dr Murphy assesses the techniques used to maintain the status quo of continued surveillance expansion. Though Dr Murphy maintains that the classic principles of transparency and accountability remain the best means available to limit the arbitrary exercise of government power, she evaluates how these principles could be better realised in order to restore power to the people and to maintain an appropriate balance between government intrusion and the right to privacy.
Smart Cities Data Governance - Lessons from Quayside
Smart cities are places in which information technologies and vast quantities of data combine to generate data-driven solutions to a broad range of urban problems. Smart cities initiatives are often conceived of as technology, infrastructure, and/or economic development projects. However, the governance issues raised by smart cities may call for a rethink of the legal and policy infrastructure almost at pace with the emerging and evolving technologies. Drawing from the experience of the still unfolding process around the Waterfront Toronto-Sidewalk Labs Quayside smart city development, this presentation will examine smart cities data governance challenges.
Public Lecture March - Rob Kitchin
The Right to the Smart City
Cities around the world are pursuing a smart cities agenda in which digital technologies are used to manage cities. In general, these initiatives are promoted and rolled-out by governments and corporations and enact various forms of top-down, technocratic governance and reproduce neoliberal governmentality. Despite calls for the smart city agenda to be more citizen-centric and bottom-up in nature, how this translates into policy and initiatives is still weakly articulated and practiced. Indeed, there is little meaningful engagement by key stakeholders with respect to rights, citizenship, social justice, commoning, civic participation, co-creation, ethics, and how the smart city might be productively reimagined and remade. This talk advocates for the Right to the Smart City and considers how to produce a genuinely humanizing smart urbanism, both with respect to setting out a normative vision for smart cities rooted in ideas of fairness, equity, care, democracy and the public good, and enacting this vision through citizen-centric tactics.
In the past few years, a global wave of nationalism has arrived. World leaders have sought to elevate the interests of their respective countries above all others, promising to take control against foreign influences that are blamed for decline. Shifts in national identity have brought about economic policies that reinforce strong sovereignty and nativism. Although much has been written on the changing landscape of national identity and nationalism, little attention has been paid to how this impacts intellectual property (IP) law. This project uses international political economy research as a lens to understand how governments can use patents and other IP rights as a tool for promoting a nationalistic agenda. Using the United States as an example, the author examines how a country’s approach to IP protection can change based on evolving nationalistic views.
May - William Staples
Real-Time Grade Books and the Metric Culture of Schooling
In my book, Everyday Surveillance (2014), I focus on the relatively mundane techniques of keeping a close watch on people – what I have dubbed the 'Tiny Brothers' – that are increasingly present in the workplace, school, home, and community. Nearly all these kinds of 'data sponges' collect quantified measurements regarding an individual's movements, behaviors, and activities. In some cases, these technologies encourage 'self-' or 'participatory monitoring' so that workers, students, and others may use the information collected to improve their own standing. One example of this phenomenon is internet-based student information systems (SIS) that offer students, parents, teachers, and administrators immediate access to detailed student profiles. One feature called 'Student View' permits learners to view their teacher's grade book in real-time. I will report on in-depth interviews with a sample of these school stakeholders focused on how some students engage in intensified 'self-tracking' of performance metrics. Interviewees report that the system encourages high-performing students to obsessively monitor their grades through smartphones and other devices, frequently comparing their performance metrics with those of other students, and generating anxiety for themselves and their parents. Consequently, participant narratives suggest these systems intensify both organizational and 'participatory monitoring' of student performance and foster micro-level assessments of their everyday lives.
Information technologies in healthcare: towards a new geography of right to health
Contemporary healthcare systems are going through the ‘digital turn’, i.e. the progressive incorporation of information technologies in daily practice with the purpose of improving the quality and increasing the efficiency of the healthcare-delivery process. Despite the flourishing of research in this field, legal scholars have shown a clear predilection for a select set of issues, including data protection, confidentiality, licensure, and liability. Nevertheless, there is a need for a more comprehensive approach through which to address the overall impact of digitalisation processes on the organization of healthcare systems. I argue that such an impact may be described in terms of a new geography of the right to health, in which space, places, legal provisions, technological artefacts and relations of power converge, reshaping the content of existing rights and leading to the appearance of new ones.
The Role of Humans in an Age of Intelligent Machines
Artificial intelligence (AI) and the information age are bringing us more knowledge about ourselves and each other than any society has ever known. Yet at the same time they bring machines seemingly more capable of every human endeavour than any human can be. What are the limits of AI? Of intelligence and humanity more broadly? What are our ethical obligations to machines? Do these alter our obligations to each other? What is the basis of our social obligations? In this talk I will argue that there are really only two problems humanity (or any other species) has to solve. These are sustainability and inequality, or put another way, security and power. Or put a third way, how big a pie can we make, and how do we slice up that pie. Life is not a zero-sum game; we and many other species use the security of sociality to construct public goods from which everyone benefits. But still, every individual needs enough pie to thrive, and this is the challenge of inequality. I will argue that understanding these processes is not only essential to surviving the challenges of the climate crisis, but also helps answer the fundamental questions of ethics and social obligation. I will also examine how AI is presently affecting both of these problems. I will close with concrete policy recommendations for managing AI and our society.
Climate Change Tipping Point: Civil Society Turns to Litigation
Civil society with an interest in fighting climate change understands that we are getting dangerously close to a tipping point. The IPCC has given us a stark reminder that we have only 12 years left to prevent dangerous climate change for future generations. With this in mind, people and organisations around the world are losing trust in the actions of both the public and the private sector and are taking the situation into their own hands. Not in the way of a “climate revolution”, although some argue that may be needed at some point, but by placing their attention and their hopes in judges and courts. Against this background, this keynote will discuss climate change litigation in countries where there is little or even no litigation on climate change. It will attempt to understand what constitutes successful climate change litigation and what may need to happen (both from a legal and non-legal perspective) in those countries without, or with only very incipient, litigation in the field of climate change. This presentation builds on the Climate Change Litigation Initiative (C2LI). The latter is led by the Strathclyde Centre for Environmental Law and Governance in collaboration with the University of Geneva and the National University of Singapore. C2LI has reviewed climate change litigation in over 30 countries using a scenario-based methodology and is soon to move into its policy-oriented phase by developing an online platform.
Hostile Design: Philosophy, Architecture, and Policy
The design of public space is rarely an innocent matter. Rules and laws govern what may and may not be done in such spaces (and often, by extension, who is and is not welcome). Sometimes in accord with these laws and rules, the objects of public spaces are designed to deter certain behaviors. As a philosopher of technology, I’ve been working to identify—and sometimes to criticize—the patterns of design used to control public space.
I have developed these ideas within a theoretical perspective called “postphenomenology.” This perspective builds on the philosophical traditions of phenomenology and American pragmatism to understand how technologies shape people’s experience. I’ve been thinking about the many ways that objects in public spaces are used differently by different people, and then also how these same objects are sometimes redesigned by the powerful to close off particular spaces to particular populations.
A contemporary discussion over these issues—involving a disparate group of scholars across a range of disciplines—is now beginning to emerge, one that criticizes the control of public space through design, especially when already vulnerable populations are targeted. I’ve come to call this phenomenon “hostile design,” though other names are also in use, such as “hostile architecture,” “architectural exclusion,” and “defensive architecture.” In my talk for TILT, we’ll consider a variety of examples, from benches designed so that you can’t sleep on them, to garbage bins designed so that you can’t pick from them, to security cameras designed to be so conspicuous that you are intimidated into following the rules. How should we think about the politics and the epistemology of these kinds of technologies? And how should we account for the fact that people not targeted by these designs often fail to take notice of them?
The Internet Jurisdiction Tipping Point
In the context of global warming, scientists have suggested that there are certain ‘tipping points’ that, once reached, could bring the Earth into a state beyond which human efforts to reduce emissions will be increasingly futile.
A similar reasoning is possible in the context of Internet jurisdiction. It seems clear that if we continue along the current course, we will, sooner rather than later, reach somewhat similar tipping points at which the Internet as we know it ceases to exist, and beyond which attempts at reversal are potentially futile.
In more detail, the Internet’s openness, and its role as an enabler and protector of human rights and democratic values, are at risk. The same can be said of the Internet as a contributor towards a fairer and more equal world, and of the Internet as a global communications medium connecting people so as to bring us closer together, ultimately supporting a peaceful coexistence.
All these important characteristics are currently under threat, to varying degrees, from a range of recent developments.
In this talk, Professor Svantesson examines the Internet Jurisdiction Tipping Points as well as the developments (the positive feedback loops) that together are taking us towards these tipping points. He also canvasses some possible responses that may prevent the described gloomy prospect from becoming a reality.