Posts Tagged ‘privacy’

FPF Founder Christopher Wolf wins Vanguard Award

The entire FPF team is thrilled to congratulate Chris Wolf, FPF Founder and Co-chair and a cherished mentor, on winning the IAPP Vanguard Award. The following remarks were delivered today at IAPP by Brenda Leong, FPF Senior Counsel and Director of Operations.

Pictured: Chris Wolf, FPF Founder and Co-chair, and his Vanguard Award.

Welcome everyone and thank you for joining us this morning. I am honored to present the IAPP Privacy Vanguard Award. This annual award recognizes outstanding leadership, knowledge, and creativity in the field of privacy and data protection.

The recipient of the IAPP Privacy Vanguard Award is someone who has positively impacted the privacy industry through personal and communal achievements throughout their career.

This year’s award winner is known as the “Dean of the Industry” or, as The Washingtonian Magazine deemed him, a “Tech Titan”. He is not only a well-respected lawyer in privacy and data protection, and an accomplished and long-respected leader in the field, but also a treasured friend, colleague, mentor, and role model for those who have had the privilege to know and work with him.

He helped blaze the trail for privacy law in the early days of its modern application, when the Internet and related technologies made clear that existing laws were no longer sufficient. He has had a hand in advising and shaping thinking on many leading-edge issues, including Internet free speech, Internet hate speech, and the parameters of government access to stored information. In doing so, he led the development of a top-ranked privacy law practice at Hogan Lovells US, where he now helps lead a team of 27 full-time privacy lawyers.

His influence on the global privacy agenda led to his co-founding of the Future of Privacy Forum, a DC-based think tank dedicated to advancing responsible data use in commercial and consumer privacy. Working with great vision as part of the FPF team, he helped build it into a thriving community of business, academic, and advocacy thought leaders that has been influential in shaping public policy on many privacy issues.

As a “pioneer in privacy law”, he originated and edited the first privacy law treatise published by the prestigious Practising Law Institute and has written and lectured widely on the subject of privacy law. He is the co-editor of the PLI book, “A Practical Guide to the Red Flag Rules”, the identity-theft-prevention regulations issued by the FTC and financial regulators. He has testified before Congress and the Privacy and Civil Liberties Oversight Board, participated in the 2014 White House Big Data Workshops, and served as a panelist at numerous FTC privacy-related workshops.

While playing this industry-leading role in the development of privacy and tech policy, he also managed to dedicate a major portion of his time and focus to worthy charitable and philanthropic causes such as the Anti-Defamation League – where he serves as the National Civil Rights Chair – and Food and Friends, a Washington-based nonprofit that provides home-delivered meals and nutrition counseling for people with life-challenging illnesses, as well as several other organizations.

Finally, he has played all these roles while prioritizing time for family and friends – he manages to be there as a friend and colleague for an incredible swath of people – with grace and warmth – with diplomacy when required – and always with great class.

Jules Polonetsky – his co-founder at FPF – was very disappointed he couldn’t be here today, but he told me that there was a classic Jewish word that sums up today’s winner – he is a “mensch”…and for those like me whose Yiddish might be a bit rusty – this simply means: a “fine human being”.

It gives me great pleasure to announce, and welcome to the stage, the winner of this year’s Privacy Vanguard Award, Christopher Wolf.

Framing the “Big Data Industry”

For all its hype, discussions about Big Data often still devolve into debates about buzzwords and concepts like business intelligence, data analytics, and machine learning. Hidden in each of these terms are important privacy and ethical considerations. A recent article by Kirsten Martin in MIS Quarterly Executive attempts to bring these considerations to the surface by moving past framing Big Data as merely some business asset or computational technique. Instead, Martin suggests analyzing risks and rewards at a macro-level by looking at the entire Big Data ecosystem, which she terms the Big Data Industry (BDI).

Yes, her paper still largely focuses on the negative impacts of Big Data, but instead of a general sense of doom-and-gloom, her focus is on a systemic analysis of where the data industry faces specific challenges. Though the article is peppered with examples of privacy-invading headlines, like Target’s purported ability to predict pregnancy, her framing is particularly helpful because it largely divorces the “risks” posed by Big Data from individualized company practices, anecdotes, and hypotheticals. Instead, she describes the entire Big Data information supply chain from upstream data sources to downstream data uses. Consumer-facing firms, tracking companies, and data aggregators — or data brokers — work together to exchange information and add more value to different data sources.

Martin breaks down the different negative effects that can impact individuals at different points in the supply chain. She highlights some of the existing concerns around downstream uses of Big Data. For example, she notes that both incorrect and correct inferences about individuals could limit individuals’ opportunities, encourage consumer manipulation, and ultimately be viewed as disrespectful of individual concerns. While these sorts of Big Data harms have long been debated, Martin places them on a spectrum alongside concerns raised by upstream suppliers of data, including poor data quality, biases in the data, and privacy issues in the collection and sharing of information. Analogizing to how food providers have become responsible for everything from labor conditions to how products are farmed, she argues that Big Data Industry players, by choosing and creating supply chains, similarly become “responsible for the conduct and treatment of users throughout the chain.”

By looking at Big Data as one complete supply chain, Martin appears to believe it will be easier for members of the Big Data Industry to identify and monitor economic and ethical issues within the supply chain. Yet problems also exist across this nascent industry. Even if we can effectively understand data supply chains, Martin is perhaps more concerned with the systemic issues she sees in the BDI. Specifically, the norms and practices currently being established throughout the entire data supply chain give rise to “everyone does it” ethical questions, and the BDI, in particular, poses two pivotal ethical considerations.

First, data supply chains may create negative externalities, especially in aggregate. Air pollution, for example, can become a generalized societal problem through global warming, and the harm from actions across the manufacturing industry can be considerably greater than the pollution caused by any individual company. Martin posits that the Big Data Industry presents a similar dynamic, wherein every member that captures, aggregates, or uses information creates costs to society in the form of surveillance. By contributing to a “larger system of surveillance” and by frequently remaining invisible and out of sight to individuals, the BDI may be generating an informational power imbalance. Perhaps because individual companies in the BDI fail to see themselves as part of a larger data ecosystem, few have been put in a position to account for — or even to consider — the possibility that their data practices give rise to such a negative externality.

Second, the Big Data Industry may foster “destructive demand” for consumer-facing companies to collect and sell increasing amounts of consumer data with lower standards. According to Martin, demand can become destructive (1) when a primary market that promises a customer-facing relationship becomes a front for a secondary market, (2) when the standards and quality of the secondary market are lower than those of the primary market, and (3) when consumer-facing companies have limited accountability to consumers for their transactions and dealings in the secondary market. Martin sees a cautionary tale for the BDI in the recent mortgage crisis and the role that mortgage-backed securities played in warping the financial industry. She warns that problems are inevitable as the buying and selling of consumer data becomes more important than “selling an application or providing a service.” Invoking the specter of the data broker boogeyman, Martin argues that consumer-facing organizations lack accountability for their activities in the secondary data market, particularly so long as consumers remain in the dark as to what is going on behind the scenes in the greater BDI.

So how can the Big Data Industry address these concerns? Martin places much of her faith in the hope that organizations like the Census Bureau that have “unique influence,” as well as “providers of key products within the Big Data Industry, such as Palantir, Microsoft, SAP, IBM,” can help shape sustainable industry practices moving forward. These practices would embody a number of different solutions under the rubrics of data stewardship, data integrity, and data due process. Many of the proposals under the first two amount to endorsing additional transparency mechanisms. For example, publicly linking companies through a larger supply chain could create “a vested interest in ensuring others in the chain uphold data stewardship and data due process practices.”

Data due process, on the other hand, would help firms to “internalize the cost of surveillance.” Additional internal oversight and due process procedures would, according to Martin, “increase the cost of holding individualized yet comprehensive data and internalize the cost of contributing to surveillance.” As to what these mechanisms could look like, Martin points to ideas like consumer subject review boards, an idea first popularized at a Future of Privacy Forum event two years ago and one we have continued to expand upon. The call for data integrity professionals mirrors the notion of “algorithmists” who could monitor not just the quality of upstream data sources but also downstream data uses. (As an aside, she chastises business schools, which, even as they race to train Big Data professionals, do not require business students to take courses in ethics.) Effective ethical reviews would require such professionals and could potentially mitigate some of the risks inherent in the data supply chain.

While Martin’s proposals are not a panacea, industry and regulators alike should take her suggestions seriously. Her framing of a greater Big Data Industry provides a path forward for companies — and regulators and watchdogs — to better target their efforts to promote public trust in Big Data. She has identified places in the information supply chain where certain industry segments may need to get “more skin in the game” so to speak. And, at the very least, Martin has moved Big Data from amorphous buzzword to a full-fledged ecosystem with some shape to it.

-Joseph Jerome, Policy Counsel

Public Perceptions on Privacy

Today’s new report by the Pew Research Center gives the lie to the notion that privacy is unimportant to the average American. Instead, the big takeaway is that individuals feel they lack any control over their personal information. These feelings are directed at the public and private sector alike, and suggest a profound trust gap is emerging in the age of big data.

While Pew has framed its report as a survey of Americans’ attitudes post-Snowden, the report presents a number of alarming statistics of which businesses ought to take note. Advertisers take the brunt of criticism, and the entire report broadly suggests that public concerns about data brokers and the opacity of data collection are only growing. Seventy-one percent of respondents say that advertisers can be trusted only some of the time, and 16% say they never can. These numbers hold across every demographic group and, indeed, get worse among lower-income households. Eighty percent of social network users are concerned about their information being shared with unknown third parties. Even as Americans are concerned about government access to personal information, they increasingly support more regulation of advertisers. This support is strong across an array of demographic groups.

Further, even as consumers remain willing to trade personal information in return for access to free Internet services, two-thirds of consumers disagree with the suggestion that online services work better because of their increased access to personal information. More problematic, however, is that 91% of Americans now believe that “consumers have lost control over how personal information is collected and used by companies.” Though this Pew study does not show that privacy values are trumping digital services — and every indication suggests that they are not — it is a likely topic for Pew to return to in the future. It would be interesting to see whether this anxiety translates into action.

However, in the meantime, anxiety about privacy suggests an opportunity for companies to win with consumers simply by providing them with more control. Fully 61% “would like to do more” to protect their online privacy. We have repeatedly called for efforts to “featurize” data and have supported efforts to help consumers engage with their personal information. Many companies already provide meaningful controls on the collection and use of personal information, but the challenge is both making consumers aware of these options and ensuring that taking advantage of these dashboards and toggles is as fun as using a simple app.

So we need more tools to make privacy fun. And industry may also need to do a better job of staying attuned to consumer preferences. Pew reiterates how context-dependent privacy is, and that the value of privacy and consumer interest in protecting their privacy can vary widely from person to person, in different contexts and transactions, and perhaps most pointedly, in response to current events. “[U]sers bounce back and forth between different levels of disclosure depending on the context,” the report argues.

The challenge is ensuring that context is understood similarly by all parties. Part of this is understanding where and when personal information is sensitive. This is a debate that was highlighted at the FTC’s recent big data workshop, and is a theme that increasingly arises in conversations about big data and civil rights. Aside from Social Security numbers, which 95% of respondents considered to be sensitive information, data ranging from health information and phone and email message content to location information and birth date could be viewed as sensitive depending upon the context.

Depending upon context, everything is sensitive or nothing is sensitive. Obviously, this can be a tricky balancing act for consumers to manage. Information management requires users to juggle different online personas, platforms, and audiences. Thus, the door is open for companies either to take certain information off the table or to make a better case for why some sensitive information is invaluable to certain services.

While Pew has not shown whether these privacy anxieties trump other pressing economic or social concerns, the report also suggests that Americans’ perceptions of privacy are heavily intertwined with their understanding of security. Privacy may be amorphous, but security is less so, and being proactive on the one can often be a boon to the other. Positive and proactive public actions on privacy are essential if we are to reverse Americans’ doubts that they can trust sharing their personal information.

-Joseph Jerome, Policy Counsel

“Databuse” as the Future of Privacy?

Is “privacy” such a broad concept as to be meaningless from a legal and policy perspective? On Tuesday, October 14th, the Center for Democracy & Technology hosted a conversation with Benjamin Wittes and Wells Bennett, frequently of the national security blog, Lawfare, to discuss their recent scholarship on “databuse” and the scope of corporate responsibilities for personal data.

Coming from a world of FISA and ECPA, and the detailed statutory guidance that accompanies privacy in the national security space, Wittes noted that privacy law on the consumer side is vague and amorphous, and largely “amounts to don’t be deceptive and don’t be unfair.” Part of the challenge, as a number of privacy scholars have noted, is that privacy encompasses a range of different social values and policy judgments. “We don’t agree what value we’re protecting,” Wittes said, explaining that government privacy policies rest on values and distinctions, such as national borders and citizen/non-citizen status, that mean something.

Important distinctions are much harder to find in consumer privacy. Wittes’ initial work on “databuse” in 2011 was considerably broader and more provocative, applying to all data controllers, first and third party alike, but his follow-up work with Bennett attempted to limit its scope to the duties owed to consumers exclusively by first parties. According to the pair, this core group of duties “lacks a name in the English language” but “describe[s] a relationship best seen as a form of trusteeship.”

Looking broadly at law and policy around data use, including FTC enforcement actions, the pair argue that there is broad consensus that corporate custodians face certain obligations when holding personal data, including (1) obligations to keep it secure, (2) obligations to be candid and straightforward with users about how their data is being exploited, (3) obligations not to materially misrepresent their uses of user data, and (4) obligations not to use them in fashions injurious to or materially adverse to the users’ interests without their explicit consent. According to Wittes, this core set of requirements better describes reality than any sort of “grandiose conception of privacy.”

“When you talk in the broad language of privacy, you promise consumers more than the legal and enforcement system can deliver,” Wittes argued. “If we want useful privacy policy, we should focus on this core,” he continued, noting that most of these requirements are not directly required by statute.

Bennett detailed how data uses fall into three general categories. The first, a “win/win” category, describes cases where the interests of business and consumers align, and he cited the many uses of geolocation information on mobile devices as a good example. The second category reflects cases where businesses directly benefit but consumers face a neutral value proposition, and Bennett suggested online behavioral advertising fits into this second category. Finally, the third category covers uses where businesses benefit at consumers’ expense, and he argued that regulatory action would be appropriate to limit these behaviors.

Bennett further argued that this categorization fits well with FTC enforcement actions, if not the agency’s privacy rhetoric. “FTC reports often hint at subjective harms,” Bennett explained, but most of the Commission’s actions target objective harms to consumers by companies.

However, the broad language of “privacy” distorts what harms the pair believe regulators — and consumers, as well — are legitimately concerned about. Giving credit to CDT for initially coining the term “databuse,” Wittes defines the term as follows:

[T]he malicious, reckless, negligent, or unjustified handling, collection, or use of a person’s data in a fashion adverse to that person’s interests and in the absence of that person’s knowing consent. . . . It asks not to be left alone, only that we not be forced to be the agents of our own injury when we entrust our data to others. We are asking not necessarily that our data remain private; we are asking, rather, that they not be used as a sword against us without good reason.

CDT’s Justin Brookman, who moderated the conversation, asked whether (or when) price discrimination could turn into databuse.

“Everyone likes [price discrimination] when you call it discounts,” Wittes snarked, explaining that he was “allergic to the merger of privacy and antidiscrimination laws.” Where personal data was being abused or unlawful discrimination was transpiring, Wittes supported regulatory involvement, but he was hesitant to see both problems as falling into the same category of concern.

The conversation quickly shifted to a discussion of the obligations of third parties — or data brokers generally — and Wittes and Bennett acknowledged they dealt with the obligations of first parties because it’s an easier problem. “We punted on third parties,” they conceded, though Wittes’ background in journalism forced him to question how “data brokers” were functionally different from the press. “I haven’t thought enough about the First Amendment law,” he admitted, but he wasn’t sure what principle would allow advocates to divine “good” third parties from “bad” third parties.

But if the pair’s theory of “databuse” can’t answer every question about privacy policy, at least we might admit the term should enter the privacy lexicon.

-Joseph Jerome, Policy Counsel

Interest Based Ads and More Transparency

Facebook Ads

Facebook wasn’t doing interest based advertising until now?  Huh?

Most users of Facebook know that the ads they see are selected by Facebook based on information on their profile, what they have “liked” and interests they have selected.  Most have also noticed that if they visit a web site off Facebook like Zappos, they may get “retargeted” ads on Facebook for Zappos. Similarly, Facebook works with online and offline retailers to help them buy ads on Facebook aimed at users who have been their customers.

Today Facebook, with much fanfare, has announced that it is launching an interest based advertising program. What’s new? Well, the one thing Facebook hasn’t been doing is selling ads targeted based on the web sites and apps you use outside of Facebook. An individual advertiser could buy an ad based on your visit to a particular site – but many advertisers couldn’t buy an ad based on your visits to many sites. Now they can.

Got it? Ads on Facebook are selected in an attempt to make them relevant based on your profile and your activity off Facebook. And now Facebook will use more of that off-Facebook activity.

What is also new is a major effort to show users extensive detail about the many categories that are used to select ads, and to let users add or edit many categories of interest. This is one of the most extensive moves we have seen to give users a deep look at the data used to target ads, and it should make some users feel more in control of the experience.

Don’t like it? Click on the icon on every targeted ad and turn off the interest based targeting. On mobile, use the limit ad tracking settings on iOS or Android (which will actually tell all apps, not just Facebook, that you don’t want interest based ads).
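For the technically curious, here is a minimal sketch of how that device-level choice reaches apps, assuming an Android app that uses the Google Play Services ads identifier library. The AdvertisingIdClient class and its isLimitAdTrackingEnabled() method come from that library; the helper wrapper around them is purely illustrative.

import android.content.Context;
import com.google.android.gms.ads.identifier.AdvertisingIdClient;

// Illustrative helper: reads the device-wide "limit ad tracking" signal described
// above. Any app (not just Facebook) can read it and is expected to stop serving
// interest-based ads when the user has turned it on.
public final class AdTrackingPreference {

    // Must be called off the main thread; the lookup queries Google Play Services.
    public static boolean userLimitsAdTracking(Context context) {
        try {
            AdvertisingIdClient.Info info =
                    AdvertisingIdClient.getAdvertisingIdInfo(context);
            return info.isLimitAdTrackingEnabled();
        } catch (Exception e) {
            // Play Services unavailable or the lookup failed; err on the side of
            // treating the user as having opted out.
            return true;
        }
    }
}

An app that serves its own ads would check this flag before building an interest-based ad request; the opt-out icon on individual Facebook ads works at the service level rather than the device level.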
