By Jennifer McArdle
Internet sites and social media platforms like Google, YouTube, and Facebook have amassed immense amounts of data on individual users, compiling, in essence, individualized virtual footprints. Combining each person’s virtual footprint—their clicks, downloads, purchases, ‘likes’, and posts—with insights from psychology and neuroscience allows search engines and social media platforms to model human behavior and predict current and future interests.
The power of personalized prediction has already been demonstrated in the advertising world. In a much-publicized 2012 media story, Target identified a teenager’s pregnancy before her father did, based simply on her purchasing history. Such data, when combined with the power of behavioral science, can reveal deeply personal information about individuals, even life-changing events like pregnancy. Corporations use personalized predictions to increase profits: in 2011, Facebook and Google alone made $3.2 billion and $36.5 billion, respectively, by selling personalized advertising space to corporations based on user data. Personalized advertising works, and the market for it is steadily on the rise.
Media personalization, however, extends beyond corporate advertising to the news. In a New York Times article, Jeffrey Rosen of George Washington University Law School investigated what ‘personalized’ news means for democracy. After clearing the cookies from two of his Internet browsers, Safari and Firefox, Rosen created a ‘democratic Jeff’ and a ‘republican Jeff.’ Within two days, the two browsers, with their two different ‘identities,’ began returning search results that fundamentally differed based on platform predictions of partisan interests. Similarly, Eli Pariser, in The Filter Bubble, ran an experiment with two left-leaning female colleagues from the Northeast. At the height of the 2010 Deepwater Horizon oil spill, Pariser asked both colleagues to run searches for ‘BP.’ The first pages of their search results differed markedly: one woman’s results returned news of the oil spill, while the other’s returned only investment information on British Petroleum. For the latter of the two, a quick skim of the front-page results would not have confirmed the existence of an ongoing environmental crisis. Google’s predictive, personalized algorithms delivered fundamentally different news.
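The mechanics behind such divergent results can be illustrated with a toy sketch. This is an illustrative assumption, not any platform’s actual algorithm: the same pool of results is re-ranked against each user’s inferred interest profile, so two users issuing the identical query see different front pages.

```python
# Hypothetical illustration of interest-based re-ranking.
# All profiles, tags, and weights here are invented for demonstration.

def rank_results(results, profile):
    """Score each result by the overlap between its topic tags and the
    user's inferred interest weights, then sort best-first."""
    def score(result):
        return sum(profile.get(tag, 0) for tag in result["tags"])
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "BP oil spill: latest on Deepwater Horizon",
     "tags": ["news", "environment"]},
    {"title": "BP share price and investor outlook",
     "tags": ["finance", "investing"]},
]

# Interest weights a platform might infer from past clicks (invented values).
news_reader = {"news": 0.9, "environment": 0.7}
investor = {"finance": 0.9, "investing": 0.8}

print(rank_results(results, news_reader)[0]["title"])  # oil-spill story first
print(rank_results(results, investor)[0]["title"])     # investment story first
```

Real systems weigh many more signals (location, click history, device), but the core design choice is the same: relevance is scored per user, not per query.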
In a 1982 study, Shanto Iyengar highlighted the impact of media on personal perception. In “Experimental Demonstrations of the ‘Not so Minimal’ Consequences of Television News,” Iyengar demonstrated that media exposure to various issue areas tended to raise the perceived importance of those issues in subjects’ minds. Iyengar called this ‘accessibility bias.’ In a world of personalized search engines, news, and social media, it is likely we will fall prey to the same bias. Unlike the accessibility biases of the past, however, today’s will be constructs of our own beliefs.
As political philosopher Hannah Arendt wisely noted, democracy requires a public space where citizens can meet and exchange diverse opinions—it is within this common space that robust democratic dialogue takes place and a commonality can emerge from difference. Internet personalization erodes these common spaces, making it increasingly unlikely that our ‘democratic and republican Jeffs’ will encounter differing ideas or philosophies.
If Internet personalization today seems troubling from a democratic standpoint, the future only seems more problematic. Yahoo’s CEO and former Google Vice President Marissa Mayer has expressed hope that the company could eventually render the search box obsolete. Eric Schmidt, executive chairman of Google, has forecast that “the next step of search is doing this automatically…When I walk down the street, I want my smartphone to be doing searches constantly—‘did you know?’ ‘did you know?’ ‘did you know?’” In the future, as Pariser notes, your phone will be doing the searching for you.
A future of ubiquitous personalization could be a future of ubiquitous confirmation bias—a world where our beliefs and perceptions are constantly confirmed by our personalized media, entering us into an endless confirmation loop with no real feedback. In psychology and cognitive science, confirmation bias leads to systematic error. In a democracy, it could lead to polarization and the failure of democratic dialogue.
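The confirmation loop can be sketched numerically. This is a deliberately simplified, hypothetical model, not a description of any real platform: if the most heavily weighted topic is always shown first, and every view reinforces that weight, even a tiny initial lean hardens into dominance.

```python
# Minimal sketch of a confirmation loop (all weights are invented):
# the user always engages with the top-ranked topic, and each
# engagement nudges that topic's weight upward.

def simulate_feedback(weights, rounds=10, boost=0.1):
    """Return topic weights after `rounds` of self-reinforcing exposure."""
    weights = dict(weights)  # avoid mutating the caller's dict
    for _ in range(rounds):
        top_topic = max(weights, key=weights.get)  # topic shown most prominently
        weights[top_topic] += boost                # engagement reinforces it
    return weights

# A barely perceptible initial lean toward one viewpoint (invented values).
start = {"politics_left": 0.51, "politics_right": 0.49}
end = simulate_feedback(start)
print(end)  # the slight initial lean has widened into dominance
```

With no countervailing signal, the loop only amplifies—the “no real feedback” condition in miniature.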
In February 2012, the Obama administration released the Consumer Privacy Bill of Rights, which sought to clarify the proper use of consumers’ virtual data by corporations. While the administration’s Consumer Privacy Bill of Rights is a step in the right direction—helping to ensure virtual privacy in the marketplace—it does nothing to address a netizen’s right to access information free from personalization or bias.
At present, some sites, like Google, allow users to opt out of interest-based ads. However, these measures do not go far enough. Platforms like Facebook, Twitter, and Google determine content visibility, personalization, and data sharing through private algorithms and policies. These algorithms and policies are often opaque or inaccessible to the public, yet they can wield immense influence. Making personalization algorithms transparent and simple would allow users to understand how their news is being personalized. That, combined with the ability to opt in to or out of personalization, could help ensure an Arendtian public space while still providing corporations profitable advertising platforms. Personalization does not have to erode democracy. If it remains opaque, however, it may do just that.
By Mark Ridinger
For the world’s more than 2 billion Christians, Easter Sunday represents Jesus’ resurrection and ascension into heaven, paving the path for mankind to achieve everlasting life. It is perhaps no coincidence, then, that the producers of the movie Transcendence chose Easter weekend for the release of their film. In it, Johnny Depp plays a brilliant, dying computer scientist who, in a quixotic effort to achieve a version of what is commonly called the Singularity, uploads his mind into a supercomputer. Transcendence, but not of the divine variety. From there on, things get pretty interesting for humankind.
The concept of the Singularity, or artificial intelligence so great that it surpasses human intelligence and understanding, has been discussed for decades, first introduced by the mathematician John von Neumann in the 1950s. Since then, two diametrically opposed views have emerged: the heaven and hell scenarios. One of the biggest cheerleaders for the heaven case is inventor and futurist Ray Kurzweil (now working for Google). The heaven scenario postulates that the Singularity will bring unfathomable gifts to humankind, not only in terms of cures for disease, alleviation of hunger, and limitless energy, but immortality as well, as we will ultimately be able to ditch our fragile, mortal biological host for a durable and everlasting silicon model. Before these wonders are bestowed on us, however, Kurzweil also predicts the rise of a populist anti-technology movement, which he has labeled the New Luddites, descendants of the movement that protested increasing machine automation in 19th-century industrial-age England.
But it is hard to call Bill Joy, cofounder of Sun Microsystems, a Luddite. Yet he is one of the main proponents of the hell scenario, which argues, in short, that the acute, exponential technological explosion that is part of the Singularity would pose a real existential threat to humanity, as it would give unfathomable power to potentially anyone. He thinks it entirely possible that this all leads to the extinction of the human race—or worse—within 25 years.
It is true that technology always produces dislocations and disruptions. The power loom was the focus of the original Luddites, but the “horseless carriage”, the airplane, and the Internet, to name but a few, provoked similar resistance. And for the most part, people adapted. It is dangerous to say it is “different this time”; that almost always proves to be wrong. But is the exponential pace of technological change—whether it leads to the Singularity or not—finally poised to overwhelm the glacial pace of evolutionary change, and those who arose from it, namely humans? Do we have the wisdom and sagacity to handle such a “transcension”, and even if we do, do we really want to leave behind our humanity as we have always known it? And are those opposed to pursuing this, now or in the future, merely technophobic New Luddites?
On March 27, an elderly but healthy woman known to the public only as Anne from Sussex availed herself of medically assisted suicide after traveling from her home in Britain to Switzerland. Although she was 89, it would be hard to dismiss her as some Luddite. She was a former Royal Navy engineer and described her life as “full, with so many adventures and tremendous independence.” Yet she lamented that technology had so taken the “humanity out of human interactions” that we were becoming “robots” who sat in front of screens, and that it was now just too hard to adapt to the “modern world.” She had grown weary of “swimming against the current.”
“If you can’t join them,” she said, “get off.”
Hopefully, Anne from Sussex will turn out to be a rare, unfortunate, and sad victim of the existential ennui that rapid technological change can produce, and not a broader harbinger of things to come. But we might have to wait until the history books are written (if they are written) describing the aftermath of the Singularity—should it occur—to find out whether she was an outlier or a human canary in the coal mine. There is more to take away from this than just a case study of severe “technophobia”: namely, what is mankind’s role in shaping our own destiny? If given the tools to direct our evolution, to merge with an AI in the Singularity, for example, will we do it? Should we do it? And will it even be possible to opt out (short of suicide) if one does not want to “evolve”?
It seems unlikely that we will be able to resist the seduction of achieving the Singularity, if and when it arrives. After all, we are told in Genesis 1 that God said to man: Be fruitful and multiply; fill the earth and subdue it. Is the Singularity the ultimate extension of that biblical passage, and is this “neogenesis” what we have been preordained to achieve?
Christians look to Easter as the promise of everlasting life given to us by Jesus dying for humanity’s sins, and with it, a chance to transcend the chains of our material bodies and take a seat next to God in heaven. It remains to be seen whether mankind’s quest to create and direct our own transcendence, and by so doing become God-like, will end in a heaven or a hell on earth.
By Jen Buss
A year ago, the President announced the BRAIN Initiative, charged specifically as “a bold new research effort to revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” Yet these diseases affect less than 5% of the population.
Neuroscience and technology will affect our entire society, not just people with these diseases. Neuroscience will be able to help
veterans recover and find jobs,
students excel in school and become the best and brightest in the world, keeping us ahead of other countries, and
new industries emerge that will create jobs and new economies.
To do this, we need to expand the current BRAIN Initiative into a National Neurotechnology Initiative (NNTI). We need an initiative that will benefit the public good and serve the national interest. The NNTI will be a national effort that affects the whole population, not just a fraction of it. We need to do something the public can believe in, be proud of, and see results from. Neurotechnology is going to revolutionize the world and have profound effects on the way individuals interact within society and the way societies interact with each other.
The government can enable and guide these changes rather than sit back and watch them happen until it is too late to steer our society. Now is the time to act to create the National Neurotechnology Initiative. This Initiative should
focus federal investment in key research areas,
follow an investment roadmap, and
coordinate these investment efforts through a National Neuroscience and Technology Coordination Office.
Through these three tasks, the government can succeed in expanding the BRAIN Initiative. The National Neurotechnology Initiative is the only solution for the future of neuroscience in our society.
We need “CE” as much as “PE” in our schools.
By Mark Ridinger
When discussing the state of education in America, most talk today revolves around measuring intelligence and trying to improve standardized test performance. IQ tests (which attempt to measure convergent thinking) are frequently used to try to find our brightest students and place them in gifted programs. Intelligence is, of course, an important part of the equation, but what of creativity, of identifying and measuring divergent thinking, and fostering its development? What of Creative Intelligence (CQ)? Our future problem solvers and innovators, be they entrepreneurs, inventors, authors, or researchers, will rely on creative intelligence, and identifying and fostering them early in their education is paramount for America’s future. Unfortunately, we are failing at that endeavor.
The paradigm of merely equating IQ with the skills needed for success is outdated. Current research shows that there is little correlation between intelligence and creativity, except at the lower end of the IQ scale. People can in fact be both highly intelligent and creative, but also intelligent and uncreative, and vice versa. But how do we identify CQ? Dr. E. Paul Torrance has been called the Father of Creativity for his work, which began in the 1960s. His standardized test, the Torrance Tests of Creative Thinking (TTCT), is considered the gold standard for measuring and assessing creative thinking, and it can be administered at any educational level—from kindergarten through graduate work.
Several recent comprehensive reviews of Torrance’s data—spanning decades—have been published. The bottom line is that the TTCT not only identifies creative thinkers but is also a strong predictor of lifetime creative accomplishment. In fact, Indiana University’s Jonathan Plucker determined that the correlation with lifetime creative accomplishment (e.g., inventions, patents, publications) was more than three times stronger for childhood creativity (as measured by the TTCT) than for childhood IQ. Having a validated instrument like the TTCT is so important because alternative means of identifying CQ do not work as well. Expert opinion and teacher nominations have been used, but these methods are prone to errors and biases. For example, students who are already achieving, who have pleasant demeanors, or who have already ranked well on conventional IQ tests tend to be selected, while researchers have shown that highly creative students and divergent thinkers are typically shunned and are at risk of becoming estranged from teachers and other students. In fact, the odds of dropping out increase by as much as 50 percent when creative students are in the wrong school environment.
What else has the review of Torrance’s data shown? Unfortunately, that America seems to be in a CQ crisis. Kyung-Hee Kim, an assistant professor at William and Mary, analyzed 300,000 TTCT results and determined that creativity has been on the decline in the US since 1990. The age group showing the worst decline is kindergarten through sixth grade. The factors behind this decline are not known, but they may include uncreative play (escalating hours spent in front of the TV or videogame console, for example), changing parenting and family dynamics (research suggests that a stable home environment that also values uniqueness is important), and an educational system that focuses too much on rote memorization, standardized curricula, and national standardized testing. Are we stifling divergent thinking in our children for conformity of behavior?
The rest of the world seems to have woken up to the need to foster creativity in the educational process, and initiatives to make the development of creative thinking a national priority are ongoing in England, the EU, and even China. The United States needs a similar national initiative if we hope to stay competitive on the world stage. What is needed is a new approach to learning that still has children mastering necessary skills and knowledge, but through a creative pedagogical approach. We know that creativity can be measured, managed, and fostered; there is no excuse not to implement such a strategy in our school system. Let’s see the creation and deployment of creative exercise classes for our students and the use of creativity tests as additional inclusion criteria for gifted programs. Surely “CE” is at least every bit as important as “PE.”
By Ewelina Czapla
Although we currently fear that our phone or online data is outside our control and subject to searches by both private industry and the government, much more will be at stake in the future: our thoughts and ideas. In recent years, our privacy has been constantly challenged by the development of ever more invasive technologies. While there have been calls for an explicit right to privacy, the addition of such a right to our Constitution may not suffice to protect us in the years to come.
Currently, we produce data by using our phones, computers, and tablets. This data can be so personal as to include the thoughts we have formalized in text. Recent developments in the field of fMRI, however, suggest that it may become possible to read the human mind with accuracy. Looking even further into the future, it is likely that we will be able to interface digitally with the human brain, making the next-generation smartphone an implant. At that point, not only the data we choose to formalize in text but our very thoughts and ideas will be subject to access.
We find ourselves without an explicit right to privacy because of the high rate of technological development and the slow rate of legal development. Our legal system has managed to respond to the Phase 1 impacts of digital communication. However, after decades it is still struggling to address the Phase 2 impacts, while the Phase 3 impacts are on the horizon. Unless action is taken now, we will find ourselves wary not only of our lack of privacy but also of our lack of cognitive liberty. For this reason, we must look beyond the right to privacy alone and call for cognitive liberty: the right to cognitive enhancement, ownership of personal data including our thoughts, and protections for our thoughts similar to those afforded to the spoken word. Only when such changes are made will we be afforded adequate civil liberties to function in the modern world.