Even as it promises breakthroughs in healthcare, the environment, and how individuals understand the world, Big Data may also be a powerful tool in the national security space. On Wednesday, the Journal of National Security Law & Policy, along with the Georgetown Center on National Security and the Law, launched their first symposium by addressing the fundamentals of Big Data and looking at how to establish a policy framework for its use. While video promises to be available soon, I thought summarizing how the event touched on general privacy concerns might be helpful.
Recognizing the tension between a conservative legal profession seeking guidelines and agile technological development, the day began with a panel free of lawyers and full of technologists to address what Big Data is. Palantir’s Matthew Gordon suggested that Big Data is actually a misnomer. Instead, “big data” is simply making insights accessible for the benefit of users rather than purely for the benefit of isolated, discrete databases. Echoing the strategy of Privacy by Design, Gordon recommended organizations “bake in privacy from the start” when working with data.
Comparing today’s Big Data to the revolution caused by the Ford Model T, Johns Hopkins’ Sean Fahey declared that Big Data was important because it had “democratized” data. Today, thanks to cheap storage, excess computational power, and open source initiatives, enterprises of any scale can work with large amounts of data, eliminating the specialized equipment and significant capital investment that were once required to do serious data analytics. This has encouraged the collection of more and more data. Providing an observation that many would return to during the symposium, Fahey noted that in today’s technological environment, deciding what data to throw away has actually become costlier than simply collecting as much as possible.
While security guidelines are often discussed, particularly in the realm of national security, the challenge is that no one has addressed what sort of privacy protections we intend to implement. “We cannot ask a computer to answer whether some analytics violates privacy,” said Professor Daniel Weitzner, an FPF advisory board member, arguing that the problem of Big Data is determining how to inject “human judgment” into privacy questions. Unfortunately for the law, and for the lawyers advising clients about the law, this problem poses difficult ethical questions.
So what sort of legal and policy framework should be established to protect the privacy issues raised by Big Data collection, storage, and analysis? The challenges standing in the way are significant.
Professor Paul Ohm suggested that the “democratization” of data produced a fundamental shift from public data collection to private data collection. At the same time, Big Data has blurred the distinction between public and private. The Federal Trade Commission finds itself trying to police changes to industry privacy policies, while in the national security space the federal government faces the task of changing its own privacy policies. Professor Laura Donohue wondered whether our federated system, divided between private industry and government, is still helpful for protecting privacy today. Pointing to cybersecurity challenges, she cautioned that walls between public and private data could actually harm individual privacy when breaches occur.
Jennifer Granick from Stanford’s Center for Internet and Society argued against any notion that Big Data had somehow created a level playing field for individuals. “There’s no parity for individuals accused of crimes,” she noted, and the benefits of all this information are weighted toward private companies and government rather than the individuals producing the new streams of data. In September, FPF will partner with the Center for Internet and Society on an event addressing this question and how to bring Big Data and privacy together. More information is available here.
After considerable discussion about the national security implications of Big Data, the symposium concluded with a recognition that modern privacy law is about establishing a degree of control vis-à-vis third parties, as Marc Rotenberg of the Electronic Privacy Information Center put it. How that can best be done in a Big Data world is the big question. Citing Microsoft’s “Scroogled” campaign, Elisebeth Cook, a member of the Privacy and Civil Liberties Oversight Board and an attorney at WilmerHale, proposed market-based solutions to privacy infractions. Pointing to California’s stronger privacy rules, a number of privacy advocates suggested forum shopping could produce privacy gains. Moreover, in the face of the European Union’s proposed Data Protection Regulation, there was a recognition that privacy policies may be effectively offshored.
Of course, while it is often taken as a given that the EU is working to strengthen privacy protections, Prof. Weitzner noted that the EU has been less effective at enforcing its rules. Furthermore, while many Big Data players have their eyes on Europe, as privacy gets offshored, foreign legal systems may actually end up producing weaker privacy protections for individuals over the long term. “Perhaps an international solution is the answer?” someone in the audience offered. Either way, the ocean of Big Data continues to rise.
Joseph Jerome is a Legal and Policy Fellow at the Future of Privacy Forum. A 2011 graduate of the New York University School of Law, he previously served as a fellow at the American Constitution Society for Law & Policy.