
By Ewelina Czapla

Cognitive liberty is defined as sovereign control over one's own consciousness. As the ability to monitor and manipulate the human mind grows with the development of neuroscience and analytical tools, the question of one's right to remain cognitively sovereign will need to be answered. The issue of cognitive liberty raises questions about privacy and freedom of thought, as well as the Fourth and Fifth Amendments to the Constitution. Is cognitive liberty a basic right? Should an individual simply be protected from cognitive manipulation by others, or also be free to use mind-altering technologies to improve their own cognition?

Technological developments such as functional magnetic resonance imaging (fMRI) allow for the scanning and analysis of the brain's activity. fMRI scans can suggest whether an individual is responding honestly to questions or approaches situations with an unconscious bias. Additionally, brain scans can identify areas of abnormality in the brain, such as a spot or cyst, that suggest a propensity to commit a particular kind of crime. These technologies could contribute to decisions regarding a defendant's guilt, a prisoner's sentence, or a date of parole. Should this technology alter criminal and civil procedure? What are the future civilian implications of technologies like fMRI? Could they be used as a pre-screening measure for employment, similar to a drug test?

If our rights were amended to ensure 'cognitive liberty', there would be profound repercussions for commercial and advertising firms, political campaigns, and others who may in the future seek to influence cognition. Should 'cognitive liberty' be granted, the balance of power could arguably shift toward the individual and away from corporations and government. Whether people have the right to engage with the 'fourth estate' free of manipulation is a pressing governance question with broad future ramifications.

by Jenny McArdle

In the year 12,069 (or Foundation Era -79), Hari Seldon predicted the fall of the Galactic Empire, the ensuing turbulence of the interregnum years, and the rise of the 'Foundation', a group of scientific, pseudo-religious, and merchant rulers modeled on Plato's 'philosopher kings'. Seldon did this through the science of psychohistory, which used statistics, history, and sociology to predictively model human behavior. But Seldon did more than foresee the imminent collapse of the Galactic Empire; he used psychohistory to mold the future to his liking, setting in motion the events that would support the rise of these scientific 'philosopher kings' and, eventually, the installation of the Second Empire for the good of humanity.

While Hari Seldon's psychohistory was a literary thread that Isaac Asimov used to bridge the science fiction short stories of the Foundation trilogy, the scientific discipline he invented between 1942 and 1950 now seems strangely clairvoyant.

Indeed, the convergence of big data, psychology, and behavioral science (i.e., cognitive security) is making psychohistory a reality. Big data has allowed scientists to study billions of human interactions at the individual level. MIT's Human Dynamics Laboratory, under the guidance of Alex 'Sandy' Pentland, has discovered that by using computers to analyze mathematical patterns of human interaction, researchers can explain and predict phenomena such as political upsets, flu pandemics, human productive output, and financial crashes. While Pentland views cognitive security as a force for future good, he notes that the ability to track, predict, and potentially control human behavior can also be exploited.

Prometheus has long been a symbol of the human quest for scientific knowledge. Prometheus ensured human progress through the gift of fire, but was sentenced to eternal torment by the Olympian gods for his transgression. Science does at times have overreaching and unintended consequences. Cognitive security can be used as a force for good, but in the wrong hands it can be egregiously misused. Are we to assume that all our future 'psychohistorians' will be motivated by the good of humanity, like Hari Seldon? A brief glance through human history suggests otherwise.

by Jennifer McArdle

Are we entering an era of 'informationized power brokers'? Dystopian science fiction, in books like Red Mars, has presented futures where transnational corporations wield as much power as states, if not more. The convergence of big data, psychology, and neuroscience (i.e., cognitive security) may help enable that reality, creating new 'informationized power brokers'. Internet sites and social media platforms like Google, YouTube, and Facebook have amassed immense amounts of data on individual users, in some cases up to 1,200 pages of data on a single person. This data, when combined with behavioral science and analytics, can effectively model human behavior, giving these new 'informationized power brokers' the ability to 'social engineer' human behavior.

Corporations have long used advertising to influence consumer behavior. The advent of the printing press helped spawn weekly print advertisements in newspapers and periodicals in the 17th and 18th centuries. What is fundamentally different now is that corporations can tailor their message to an individual based on that person's data profile. In a much-publicized 2012 media story, Target identified a teenage girl's pregnancy before her father did, based simply on her purchase history. Consumer data, when combined with the power of behavioral science, can reveal deeply personal things about individuals, even life-changing events like pregnancy. But what power, besides more influential advertising, does this really give corporations? Why does this make them potential 'power brokers'?
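The kind of inference behind the Target story can be illustrated with a toy scoring rule over purchase categories. The items, weights, and threshold below are invented for illustration; the actual model was proprietary and far more sophisticated.

```python
# Toy sketch: inferring a life event from purchase history by summing
# the weights of "signal" items. All items and weights are hypothetical.
SIGNAL_WEIGHTS = {
    "unscented lotion": 2.0,
    "prenatal vitamins": 4.0,
    "cotton balls": 1.0,
    "large tote bag": 1.5,
}

def pregnancy_score(purchases, weights=SIGNAL_WEIGHTS):
    """Sum the weights of signal items found in a purchase history."""
    return sum(weights.get(item, 0.0) for item in purchases)

purchases = ["unscented lotion", "prenatal vitamins", "shampoo"]
score = pregnancy_score(purchases)
print(score)         # 6.0
print(score >= 5.0)  # crosses the (hypothetical) alert threshold -> True
```

Even a crude rule like this shows why seemingly innocuous purchases, taken together, can reveal something a shopper never disclosed.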

Internet corporations, particularly platforms like Facebook and Twitter, have content visibility and data sharing methods that are based on private algorithms and policies. These algorithms and policies are often opaque or inaccessible to the public, yet have immense influence: they control what the public does or does not see. What happens when one of these platforms is biased? A 2012 Nature study noted that during the 2010 US congressional elections, people voted in statistically higher numbers after seeing a civic "go vote" message on Facebook. Could a corporate social media platform use such a tool to selectively target individuals whose views would advance the corporation's interests? Zeynep Tufekci speculates that it is possible and, more importantly, that this would go largely undetected by the public and government. If corporations are able to 'social engineer' the public for corporate benefit, will we be entering an era where the real power brokers are not states but corporations?

Pre-Crime: Part 2 of the Modeling and Profiling Blog

by Mike Swetnam

Many have noted that the greatest threat to the human race in the 21st century is ultra-powerful technology (nuclear weapons, bioweapons, or nanotechnology) in the hands of madmen.

Clearly, there are technologies, nuclear and biological among them, that can destroy all or most of us. And there is no shortage of crazed leaders and despots who would not hesitate to use these weapons if they could.

There is also a rising number of people who simply lose it and go 'postal': people so disenfranchised, so disconnected, and so unhinged that they grab a gun and start shooting. We have had far too many of these incidents this past decade.

Combine the rising number of 'postal' cases with the increasing availability of destructive technology like biotech, and how long will it be before some deranged person cooks up a super virus instead of grabbing a gun?

This is a terrifying prospect!

A crazy person with a gun might kill 20 innocent children, but a crazy person with advanced biotech can kill millions!

We cannot let that happen!

How do we prevent this Marriage of Mayhem: destructive technology married to a crazed, fatalistic personality?

We certainly don't want to wait for it to happen and then simply prosecute the perpetrator. It would be far better to find, identify, and deal with such a person before he commits the deed that kills millions.

Fortunately, we are developing the behavior modeling technology to do this. Today, industry models your buying behavior so well that it can predict what you will buy.

All of us have joined frequent-buyer programs that track what we buy, how often, and what we buy at the same time. This data has been used to profile us, and these profiles of our behavior help industry market to us in very targeted ways.

These profiles also help industry identify and deal with fraud. When someone steals a credit card and attempts to use it, the profile of that fraudulent use is different from the profile of the card owner. Computer programs note the difference and alarms go off. These alarms are used to stop fraud and misuse early. The same mechanism is at work when you travel somewhere new and use your card in unfamiliar ways, prompting the card company to ask you to call in and verify your identity.
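The anomaly-detection logic behind these fraud alarms can be sketched as a simple statistical check. The transaction data, threshold, and location rule below are hypothetical illustrations; real card networks use far richer models.

```python
# Minimal sketch of behavioral fraud detection: flag transactions that
# deviate sharply from a cardholder's historical spending profile.
# All data and thresholds here are hypothetical.
from statistics import mean, stdev

def build_profile(history):
    """Summarize a cardholder's past transaction amounts."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_anomalous(profile, amount, home_city, tx_city, z_threshold=3.0):
    """Flag a transaction whose amount or location breaks the profile."""
    z = abs(amount - profile["mean"]) / profile["stdev"]
    return z > z_threshold or tx_city != home_city

history = [42.50, 38.00, 55.25, 47.10, 51.80]  # typical grocery-sized purchases
profile = build_profile(history)

print(is_anomalous(profile, 49.99, "Arlington", "Arlington"))    # normal -> False
print(is_anomalous(profile, 2400.00, "Arlington", "Arlington"))  # huge amount -> True
print(is_anomalous(profile, 45.00, "Arlington", "Bangkok"))      # new location -> True
```

The point of the sketch is the principle, not the math: once behavior is profiled, any departure from the profile becomes a signal, whether the goal is stopping fraud or predicting something else entirely.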

This same technology can be used to identify the behavior of someone getting ready to go postal. It turns out that the behavior of those approaching the breaking point is fairly identifiable.

As this technology develops and is proven accurate, with low false alarms, how will we use it?

The US Constitution says that one is innocent until proven guilty. Can we legislate the legitimacy of scientific models as proof of potential future guilt?

These are not esoteric questions. Our very survival could be threatened by madmen with access to bad and destructive technology. It is absolutely clear that we cannot wait for them to attempt our destruction to act. It is just as clear that our basic beliefs in innocence until proven guilty will be challenged by these realities.

It’s time for a Constitutional discussion that we have not had for 237 years.

by Mike Swetnam

There is much hue and cry over what is called profiling in law enforcement. Many see the tactic as merely a weak justification for prejudice, and in truth, profiling, particularly by race, has been a tool of prejudice far too often in our history.

Clearly, targeting individuals based on race, sex, or nationality is prejudice. Prejudice is ignorance at its extreme: it is judging a person's character without any facts or knowledge of that character. That is ignorance. That is the opposite of science.

The US Constitution was designed to protect us from such fact-less attacks.

Science is about really understanding things. Science is about facts. A scientifically developed model of criminal activity will help the police find potential criminals using knowledge and science, not prejudice and ignorance.

Modeling human behavior is rapidly becoming a mature science. Many industries model human behavior to understand who will buy which product. This information is used very successfully to target potential buyers and improve the buying experience. Behavior modeling is also used to detect fraudulent credit card and banking activity.

At a macro level, human behavior modeling is a form of profiling. The difference is that the profile is not based on biased ignorance or prejudice; it is based on a scientific understanding of the way humans behave.

That is the difference between what people rightly dislike, prejudice-based profiling, and what we need: scientific models of criminal activity.

The Constitution does not protect individuals against facts.

The Chicago police[1] recently began testing a scientific model of criminal activity. The department claims to use scientifically based models to target individuals who are likely to commit crimes. Some would call this profiling.

The Potomac Institute for Policy Studies (PIPS), and in particular its Center for Revolutionary Scientific Thought, is dedicated to the development of public policy based on science. It is clear to us that scientifically based models of criminal behavior can and will be useful in finding criminals and preventing crime.

A key point is to make sure the models are based on science and not personal prejudices. Without seeing the details of the Chicago model, it is difficult to determine whether it is a scientifically valid model of human behavior or just another attempt at prejudicial law enforcement. We hope it is scientifically based and useful.

Finally, we need to worry about finding criminals or potential criminals before they commit a crime. More on this in the next blog.

[1] http://www.washingtontimes.com/news/2014/feb/20/cpd-goes-minority-report-puts-big-resources-predic/, http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist