Even as Big Data is used to chart flu outbreaks and improve winter weather forecasts, it continues to generate important policy debates. Watching businesses and advocates argue over the use of “data” to measure human behavior in order to cut through both political ideology and personal intuition, David Brooks declares in The New York Times that the “rising philosophy of the day . . . is data-ism.” Writing for GigaOM, Derrick Harris responds that Brooks’s concerns over data-worship are “really just statistics, the stuff academicians and businesspeople have been doing for years.”
The basic collection of data is nothing new. However, Harris makes the point that there is a considerable difference between “just plain data” and the rise of Big Data. This raises the question of whether the privacy concerns swirling around Big Data differ from the privacy issues we have long faced in the collection of personally identifiable information in substance, or merely in scale. In other words, what technological changes presented by Big Data raise novel privacy concerns?
The substance of Big Data is its scale. As our ability to collect and store vast quantities of information has increased, so too has our capacity to process this data to discover breakthroughs ranging from better health care and a cleaner environment to safer cities and more effective marketing. Privacy advocates argue that it is the scale of data collection that can threaten individual privacy in new ways. According to Jay Stanley, Senior Policy Analyst at the ACLU, Big Data amplifies “information asymmetries of big companies over other economic actors and allows for people to be manipulated.” Data mining allows entities to infer new facts about a person from diverse data sets, threatening individuals with discriminatory profiling and a general loss of control over their everyday lives. Noting that credit card limits and auto insurance rates can easily be crafted on the basis of aggregated data, tech analyst and author Alistair Croll cautions that individual personalization is just “another word for discrimination.” Advocates worry that over time, Big Data will have a chilling effect on individual behavior.
With the proposed new General Data Protection Regulation, European policymakers aim to advance privacy by limiting uses of Big Data that analyze individuals. The regulation’s most recent draft, prepared by Jan Philipp Albrecht, Rapporteur for the LIBE Committee, restricts individual profiling, which it defines as “any form of automated processing of personal data intended to evaluate certain personal aspects relating to a natural person or to analyse or predict in particular that natural person’s performance at work, economic situation, location, health, personal preferences, reliability or behaviour.” This sort of limit on “automated processing” would effectively prohibit much of the analysis that scientists and technologists see as the future of Big Data.
The fundamental problem is that neither individuals nor business, nor government for that matter, has developed a comprehensive understanding of Big Data. As a result, no one has actually balanced the costs and benefits of this new world of data. Individuals remain largely uninformed about how much data is being collected about them. They neither read nor understand lengthy privacy policies, but worry that their information is being used against them rather than on their behalf. Meanwhile, business is struggling to balance new economic opportunities against the “creepy factor,” or concerns that data is somehow being misused. Sometimes consumers adjust to a new stream of data (Facebook’s News Feed), and other times they simply do not (Google Buzz).
Kord Davis, a digital strategist and co-author of The Ethics of Big Data, notes that there is no common vocabulary or framework for the ethical use of Big Data. As a result, individuals and business, along with advocates and government, speak past one another, producing a regime where entities collect data first and ask questions later. Thus, when Big Data opportunities and privacy concerns collide, important decisions are made ad hoc.
The Future of Privacy Forum’s Omer Tene and Jules Polonetsky have previously called for a model in which Big Data’s benefits, for businesses and research alike, are balanced against individual privacy rights. To continue to advance scholarship in this area, FPF and the Stanford Center for Internet and Society invite authors to submit papers discussing the legal, technological, social, and policy implications of Big Data. Selected papers will be published in a special issue of the Stanford Law Review Online and presented at an FPF/CIS workshop, which will take place in Washington, DC, on September 10, 2013. More information is available here.
Joseph Jerome is a Legal and Policy Fellow at the Future of Privacy Forum. A 2011 graduate of the New York University School of Law, he previously served as a fellow at the American Constitution Society for Law & Policy.