
by Jennifer McArdle

Internet sites and social media platforms like Google, YouTube, and Facebook have amassed immense amounts of data on individual users, compiling, in essence, individualized virtual footprints. Combining each person’s virtual footprint—their clicks, downloads, purchases, ‘likes’, and posts—with insights from psychology and neuroscience allows search engines and social media platforms to model human behavior and predict current and future interests.

The power of personalized prediction has already been demonstrated in the advertising world. In a much-publicized 2012 media story, Target identified a pregnant teenager before her father did, based simply on her purchase history. Consumer data, when combined with the power of behavioral science, can reveal deeply personal information about individuals, even life-changing events like pregnancy. Corporations use personalized predictions to increase profits. In 2011, Facebook and Google alone made $3.2 billion and $36.5 billion, respectively, by selling personalized advertising space to corporations based on user data. Personalized advertising works, and the market for it is steadily growing.

Media personalization, however, extends beyond corporate advertising to the news. In a New York Times article, Jeffrey Rosen of George Washington University Law School investigated what ‘personalized’ news means for democracy. After clearing the cookies from two of his Internet browsers, Safari and Firefox, Rosen created a ‘democratic Jeff’ and a ‘republican Jeff.’ Within two days, the two browsers, with their two different ‘identities,’ began returning search results that fundamentally differed based on the platforms’ predictions of partisan interests. Similarly, Eli Pariser, in The Filter Bubble, ran an experiment with two left-leaning female colleagues from the Northeast. At the height of the 2010 Deepwater Horizon oil spill, Pariser asked both colleagues to search for ‘BP.’ The first pages of their results differed markedly: one woman’s results returned news of the oil spill, while the other’s returned only investment information on British Petroleum. For the latter, a quick skim of the front-page results would not have confirmed the existence of an ongoing environmental crisis. Google’s predictive, personalized algorithms delivered fundamentally different news.
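The mechanism behind such divergent results can be sketched in miniature. The toy model below is purely illustrative—no real search engine’s algorithm is this simple, and every name, tag, and result in it is invented—but it shows how ranking the same candidate results against different click histories reproduces the divergent ‘BP’ searches described above:

```python
# Illustrative sketch: rank identical search results differently for
# two users, based only on the topic tags of pages they clicked before.

def personalized_rank(results, click_history):
    """Order results by overlap between each result's topic tags
    and the tags of pages the user previously clicked."""
    interest_counts = {}
    for tags in click_history:
        for tag in tags:
            interest_counts[tag] = interest_counts.get(tag, 0) + 1

    def score(result):
        return sum(interest_counts.get(tag, 0) for tag in result["tags"])

    return sorted(results, key=score, reverse=True)

# Hypothetical candidate results for the query 'BP'
results = [
    {"title": "BP oil spill coverage", "tags": ["news", "environment"]},
    {"title": "BP investor relations", "tags": ["finance", "stocks"]},
]

# Two hypothetical users: one clicks news stories, one clicks stock pages
news_reader = [["news"], ["environment"], ["news"]]
investor = [["stocks"], ["finance"], ["stocks"]]

print(personalized_rank(results, news_reader)[0]["title"])  # spill coverage ranks first
print(personalized_rank(results, investor)[0]["title"])     # investment info ranks first
```

The same query, filtered through two different histories, yields two different front pages—which is precisely the filter-bubble effect Pariser observed.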

In a 1982 study, “Experimental Demonstrations of the ‘Not so Minimal’ Consequences of Television News,” Shanto Iyengar highlighted the impact of media on personal perception, demonstrating that media exposure to various issue areas tended to raise the perceived importance of those issues in subjects’ minds. Iyengar called this ‘accessibility bias’. In a world of personalized search engines, news, and social media, it is likely we will fall prey to this accessibility bias. Unlike past accessibility biases, however, today’s will be constructs of our own beliefs.

As political philosopher Hannah Arendt wisely noted, democracy requires a public space, where citizens can meet and exchange diverse opinions—it is within this common space that a robust democratic dialogue takes place, and a commonality can emerge through the differences. Internet personalization erodes these common spaces, making it increasingly unlikely that our ‘democratic and republican Jeffs’ will encounter differing ideas or philosophies.

If Internet personalization today seems troubling from a democratic standpoint, the future looks only more problematic. Yahoo’s CEO and former Google Vice President Marissa Mayer has expressed hope that her company could eventually render the search box obsolete. Eric Schmidt, executive chairman of Google, has forecast that “the next step of search is doing this automatically…When I walk down the street, I want my smartphone to be doing searches constantly—‘did you know?’ ‘did you know?’ ‘did you know?’” In the future, as Pariser notes, your phone will be doing the searching for you.

A future of ubiquitous personalization could be a future of ubiquitous confirmation bias—a world where our beliefs and perceptions are constantly confirmed by our personalized media, locking us into an endless confirmation loop with no corrective feedback. In psychology and cognitive science, confirmation bias leads to statistical error. In a democracy, it could lead to polarization and the failure of democratic dialogue.

In February 2012, the Obama administration released the Consumer Privacy Bill of Rights, which sought to clarify the proper use of virtual consumer data by corporations. While the administration’s Consumer Privacy Bill of Rights is a step in the right direction—helping to ensure virtual privacy in the marketplace—it does nothing to address a netizen’s right to access information free from personalization or bias.

At present, some sites, like Google, allow users to opt out of interest-based ads. However, these measures do not go far enough. Platforms like Facebook, Twitter, and Google govern content visibility, personalization, and data sharing through private algorithms and policies. These algorithms and policies are often opaque or inaccessible to the public, yet they can wield immense influence. Making personalization algorithms transparent and intelligible would allow users to understand how their news is being personalized. That transparency, combined with the ability to opt in and out of personalization, could help ensure an Arendtian public space while still providing corporations with profitable advertising platforms. Personalization does not have to erode democracy. If it remains opaque, however, it may do just that.