We should change our education system to take advantage of how our brains are designed to process information, using current technology. Humans are unique in our thought processes: we recognize patterns in information better than any other animal or technology. We should teach humans to make better use of this ability rather than teaching rote memorization. The education system should teach us how to think rather than drilling the details of subject matter.
At one time in history, information had to be integrated by an individual in a sequential fashion, using memory to carry information from one book or experience to something observed in another book or experience; the details mattered. But now, with massive digital stores of fact available, it is possible to put information together based on affinities that are discovered through automated searches and machine processing. This changes everything. While it doesn’t render memorization completely obsolete, it emphasizes our need to be able to recognize patterns in order to extract meaning from collections of information.
Think about the pre- and post-internet age. What percentage of the population had knowledge of a given topic, say Greek mythology? Before the internet, an instructor was required to teach such topics. With the internet, more people are exposed to any given topic, and thus more people know about it. Access to Greek mythology, for example, has increased so much that a person no longer has to learn it in class, but can learn it independently without an instructor.
One possibility would be to teach everything through problem-based learning: teach students how to learn rather than teaching them the information. Teaching through problem-based learning does not reduce the amount of information students absorb; it actually increases it. When students are taught to ask questions about a topic, they are able to research the details (on the internet and elsewhere), find patterns in the information, and deduce the answers. Even without lectures, they would be learning how to follow a process for teaching themselves.
Accordingly, learning needs to emphasize the ability to recognize patterns, as opposed to memorizing facts. Machines bring together large bodies of information; it is the role of an intelligent person to assemble that information into meaningful understanding by assimilating the patterns. The implications for modalities of education are huge. A lecturer who teaches facts is largely outmoded, since the facts can often be gleaned from simple web searches. It is the interpretation of the facts that are accessed, by means of recognizing patterns in some generalized and approximate fashion, that needs to be sharpened for young and old alike.
The survival of the human race is a serious matter. The United Nations recently drafted a <a href="http://daccess-dds-ny.un.org/doc/UNDOC/GEN/N13/472/83/PDF/N1347283.pdf?OpenElement">resolution</a> that recommends the creation of an International Asteroid Warning Group to help protect society from Near Earth Objects. The resolution will most likely be adopted. Though a small step for the UN, it could be a large move for humanity. The effort is laudable, and further technical and political measures should be considered.
Early this year, a meteor exploded above Chelyabinsk, Russia, <a href="http://www.reuters.com/article/2013/02/15/us-russia-meteorite-idUSBRE91E05Z20130215">injuring 1,000 people</a> with its resulting shock wave. The warning systems we have in place did not detect the asteroid. Luckily, there were no deaths, but such rare events have become a wake-up call for professional and amateur astronomers.
The threat posed by Near Earth Objects (NEO) is real. An asteroid detected in 1997, <a href="http://en.wikipedia.org/wiki/(35396)_1997_XF11">1997 XF11</a>, will pass near Earth in 2028 but could wipe out much of life on the planet if it were to collide with Earth. It is estimated that there are roughly one million asteroids in orbit around the Sun, <a href="http://neo.jpl.nasa.gov/stats/">10,000</a> of which are being tracked by NASA. Small asteroids collide with the planet regularly. Most burn up in the atmosphere. The few that make it through vary in size, composition, and angle of entry. Early warning systems are the best approach to preparing for a collision, since they provide time to deflect the object or evacuate the impact zone. For instance, <a href="http://www.telegraph.co.uk/science/space/10413205/United-Nations-to-lead-efforts-to-defend-Earth-from-asteroids.html">under the new plan</a> the “UN Committee on the Peaceful Uses of Outer Space will monitor detections and help plan a deflection campaign if that is necessary.”
The realization and acceptance by scientists of the inevitability of a collision with Earth is not new. The probability of another asteroid striking Earth is 100%; there is no uncertainty in this calculation. The uncertainty exists in determining <i>when</i> such a significant collision will occur. The product of the probability of collision in any given year and the effect on society of the collision is a calculation fraught with high degrees of uncertainty because it depends on many parameters: diameter, composition, density, velocity, and angle of impact of the asteroid, as well as the timing, population density at the location of impact, and local infrastructure of society.
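The product described above, annual impact probability times societal consequence, is the standard expected-value form of risk. A minimal sketch, using entirely invented numbers (none of these probabilities or consequence figures come from real estimates):

```python
# Illustrative expected-risk calculation: the product of annual impact
# probability and consequence, summed over scenarios. All numbers below
# are invented for demonstration; real estimates carry large
# uncertainties in diameter, composition, velocity, angle, and location.

def expected_annual_loss(scenarios):
    """Sum of (annual probability x consequence) over impact scenarios."""
    return sum(p * c for p, c in scenarios)

# (annual probability, consequence in deaths) -- hypothetical values
scenarios = [
    (1e-2, 1e2),  # small airburst, Chelyabinsk-class
    (1e-4, 1e5),  # city-scale impact
    (1e-8, 1e9),  # globally catastrophic impact
]

print(expected_annual_loss(scenarios))
```

Note how the rare, catastrophic scenario contributes as much expected loss as the frequent, mild one; this is why low-probability, high-consequence events dominate the cost-benefit argument.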
Despite this difficulty, it is widely accepted in scientific communities today that the risk of doing nothing is unacceptable. The cost-benefit analysis, though noisy, is clearly in favor of protecting human life. This raises the question: Do we ensure the survival of future generations by making investments now? And the more difficult question: If so, how many resources should be devoted to these endeavors?
Today we have advanced technologies to sense, track, and predict these projectiles, and to collect, store, and analyze the resulting data on massive scales. For example, RF and laser radar technology, big data analytics, modeling and simulation software, commercial launch capabilities, and autonomous systems have all advanced rapidly in recent years. This past June NASA launched a <a href="http://www.nasa.gov/mission_pages/asteroids/initiative/index.html">grand challenge</a> to gather ideas to mitigate the threat of asteroids, including bold ideas such as solar sails and gravity tractors.
Communicating the science of asteroid physics is also critical to enabling policy implementation. Hollywood movies like Armageddon might be entertaining, but in the end they do little to educate the public, since they are works of fiction bursting with technical flaws. Rational, vetted analysis based on rigorous scientific research, presented in widely understandable language, is the only way to make political headway.
The international support to plan for space rock collision is promising. Private space companies, groups such as the Association of Space Explorers, B612 Foundation, The Lifeboat Foundation, international organizations, and government agencies should partner to confront this task together. With greater collaboration, testing, information sharing, and mock impact scenarios in place we can ensure that we are dedicating resources to a worthy cause.
The ability to predict rare events such as an asteroid collision is achievable, and our human ingenuity will ensure this. In order for our government to fulfill its responsibility to protect its citizens, more federally sponsored research needs to be conducted on asteroid-collision mitigation science and technologies. Surely it is a worthy endeavor of human activity to protect the human race and ensure its longevity. Given that the world spends roughly <a href="http://books.sipri.org/product_info?c_product_id=458">$1.75 trillion</a> annually defending ourselves against one another, wouldn't it be wise to spend a few billion a year to defend ourselves from true external, existential, space-based threats?
The first in a series of three articles on the implications and challenges in Big Data.
The term “Big Data” suggests that the concept, however defined, is a Big Deal. However, the practice of collecting, storing, and analyzing large volumes of data in ways that were not previously possible is both older and bigger than what people typically mean by “Big Data.”
The ability to store and retrieve data, and to convert data into information, is an essential aspect of being human. The development of language, and of oral tradition, is perhaps the most profound development in human history. No one is quite sure when this happened; speech may have emerged a hundred thousand years ago, or even earlier. Certainly by 50,000 years ago, speech had become established in what we now view as the human race. The ability to transmit information derived from observations and experience to others, and to successive generations, through words within the construct of a language completely changed the nature of what human beings could accomplish. Transmission of information supplementing genetic transmission is an evolutionary leap that redefined the meaning of “human.”
Subsequent advances in information transmission came slowly at first, and then with increasing frequency. Around 30,000 years ago, recorded information appeared, first as tallies for keeping accounts (numbers), and then as cave paintings and other markings. Much later (perhaps only 5,000 years ago), written language representing words and ideas appeared in various forms, permitting the recording of history and the development and transmission of cultures and religions in large volumes of written material, rather than merely through oral traditions. The development of writing instruments and paper was important. In more modern times, a huge advance occurred with the development of the printing press, which permitted mass production of written material and the democratization of information. Arguably, the invention of the printing press enabled the Reformation and the Peace of Westphalia, which established the notion of the nation-state separate from local customs and religion. Further democratization occurred with the development of pamphlets and newspapers, and the total amount of stored data greatly increased with the development of photography and then videography. The twentieth century saw the development of computers, microelectronics, personal computers, and digital communication devices. Each advance profoundly changed humanity's interaction with data, and how we assemble, store, and transmit information.
With the further development of microprocessors and digital storage media, another profound change has occurred. What are we to make of the fact that in 1993 an estimated 3% of the world's data was stored digitally, but that by 2007, 94% of all recorded data was digital[i] (and undoubtedly an even higher percentage by 2013)? The figures may be dwarfed by imagery and video, but it is clear that there has been a sea change in the sensing, recording, and retrieval of data, from analog forms to digital representations. And while this transformation is occurring, the sheer volume of data has been growing at a rate that is at least exponential. With new high-capacity disks, RAID storage systems, flash memory and solid state drives, and now readily available cloud storage, it has become feasible to keep and maintain data from all events and activities in people's lives. The same researchers who observed the shift from analog to digital storage also estimate that the amount of data has increased exponentially since 1986, with a compounded annual growth rate of 25%.[ii]
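To put a 25% compounded annual growth rate in perspective, a quick calculation (a sketch, with the growth assumed constant over the whole interval) shows how dramatically volume multiplies over two decades:

```python
# Compound growth at a fixed annual rate: after n years, the total
# volume has multiplied by (1 + rate) ** n. At the cited 25% CAGR,
# data stored in 1986 would have multiplied roughly a hundredfold
# by 2007 (21 years later).

def growth_multiplier(rate, years):
    """Total growth factor after `years` of compounding at `rate`."""
    return (1 + rate) ** years

print(round(growth_multiplier(0.25, 21)))  # 1986 -> 2007: ~108x
```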
The impact of this rapid transformation is, we assert, as profound as any of the historical revolutions that changed humankind's interaction with information, on par with the invention of language. The term “Big Data” grossly understates how big a deal the transformation to massive digital recording really is.
One of the principal impacts of this change is on the method by which we, as a human race, extract information and develop theories to explain phenomena. The scientific method has served mankind well for centuries: an iterative process of observation, hypothesis, devising of tests, and refinement and/or validation. The development of modeling and simulation capabilities backed by massive computer processing power allows for greater use of computational models in the testing phase of the scientific method. But an even greater modification to the scientific method is afforded by the existence of massive stores of digitally recorded sensor data. Instead of using a few observations and developing hypotheses based on human intuition, it is now possible to comb through massive amounts of observed data and to develop hypotheses based on computed correlations.
For example, throughout history, marketers attempted to convince people to buy things based on good guesses as to what might persuade them. Now, online advertisers can observe trails of “clicks” and observe patterns of purchases, to far more easily deduce persuasive patterns. This is the basis of many commercial endeavors that use internet and web tracking.
But scientific discovery can also make use of massive data stores to develop better theories of natural phenomena. Cosmological observations, for example, have enabled us to deduce the presence of planets circling distant stars, based on analysis of patterns of intensity data. Medical data, including statistics over sequenced DNA data, is permitting us to identify genetic causes of certain diseases. Image processing of data gathered from particle accelerators (in particular, the Large Hadron Collider) yields massive amounts of information that has allowed scientists to deduce the existence of the Higgs boson.
And while scientific inquiry has been transformed, the analysis of human sociology has been unleashed through the collection of large amounts of data on nearly every human on the planet. Of course, there are significant concerns about individual privacy, but since companies and authorities are collecting data on all of our transactions, the locations of our devices, and the health and status of our machines, it will soon be possible to track the movements and behavior of just about every human being. The potential to understand human behavior as a function of stimulus and history is both stunning and frightening. If the data is anonymized and analyzed statistically, then great good can come from the analysis. If the data is used to discover and “target” individuals, then there are more sinister possibilities.
In any case, the modification to the scientific method, whether for marketing, science, or sociology, goes deeper than just the amount of data being observed. Instead of having to intuit relationships between observable variables, computer analysis can now look at groups of variables, sometimes correlated across multiple databases, to look for statistical relationships (a form of correlation) between the variables. Algorithmic methods can be used to look for constraints that indicate a statistical relationship between observables. The formulation can be extended to account for “noise” in the observed data by allowing for approximate relationships, and can also be extended to variables that take on discrete values (as opposed to continuous numerical values).
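The core idea can be sketched in a few lines: scan every pair of variables in a dataset and flag the pairs whose correlation exceeds a threshold. This toy example uses Pearson correlation on invented data; real systems add significance testing, corrections for multiple comparisons, and handling of discrete and noisy variables:

```python
from itertools import combinations
from math import sqrt

# A minimal sketch of automated correlation search: compute the Pearson
# correlation for every pair of variables and flag strongly related pairs.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def correlated_pairs(data, threshold=0.9):
    """Return variable pairs with |correlation| at or above threshold."""
    return [
        (a, b)
        for a, b in combinations(sorted(data), 2)
        if abs(pearson(data[a], data[b])) >= threshold
    ]

# Toy data: 'clicks' tracks 'ads' exactly; 'noise' tracks neither.
data = {
    "ads":    [1, 2, 3, 4, 5],
    "clicks": [2, 4, 6, 8, 10],
    "noise":  [5, 1, 4, 2, 3],
}
print(correlated_pairs(data))  # [('ads', 'clicks')]
```

A flagged pair is only a hypothesis of a relationship, not a cause; as the surrounding text notes, the human role is to interpret what the machine surfaces.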
The change is the ability to analyze by algorithmic means large amounts of data, collected using “big data” techniques that store massive amounts of data in cloud-based disks, or on individual flash drives in home laptops, or in massive government databases. Whereas until recently, discovery and analysis progressed largely through empirical means, dependent on an analyst’s ability to intuit relationships in data that is sparse, rare, and displayed through analog means, we can now use computer programs to cull massive amounts of data to suggest relationships in variables that might never have been viewed by humans. Empirical testing and laborious search can become automated analysis through massive databases to suggest relationships that can be used to far more rapidly develop hypotheses of causality and models to describe behavior.
This is a quiet revolution: sitting in the midst of a change that unfolds over a few years masks the momentousness of the event in the course of human history. This massive bulk of digital data, and the information derived from it, can be shared with societies throughout the world, and with generations to come. Not all of the changes will be good. It is often said that information is power, and with the ability to extract and share information derived by automated means, concentrations of power are indeed possible. Already, political parties use massive databases for fundraising and campaigning. Perhaps knowing and understanding human behavior too well will threaten individual choice and independence. From influencing what we buy to dictating for whom we vote, our very selves might become predictions based on correlations of variables.
Lurking behind the development of “big data” analytics is a transformation comparable to the invention of language in its ability to transform the meaning of being human. The opportunities and challenges afforded by this new way of deducing information from observations, observations that have now become massive and digital, can reasonably be called a revolution in humankind's means of executing the scientific method of understanding phenomena.
While our ability to record data and to analyze massive databases is exploding, and while scientists and analysts develop new skills at executing automated data analysis in place of intuition and empirical searches, the question arises: Are we prepared to accept the implications and consequences of this massive revolution in the way humankind handles data? Preparation might imply new policies, new rights, and new approaches to sharing information. Most likely, these policy and procedural reforms will be enacted post facto, as the consequences of the big data revolution unfold. But since at least some of the directions and implications are apparent now, it would make sense to implement policies now that steer the use of data analytics toward the development of knowledge that is beneficial, or at least not harmful, to mankind as a whole.
Today, mass shootings are an all too common occurrence. Sometimes, as in the cases of Sandy Hook and Aurora, these shootings cause a massive public outcry and response. Other times, the response appears almost nonexistent, as with the recent shooting at the Navy Yard. More people died in the Navy Yard shooting than in the Boston bombing, yet the response was muted at best. When is enough enough? What does it take for society and government to change our ways?
Public response to mass shooting events has been mixed, leaving many to throw up their hands and express frustration at changing what appears to be a perennially broken system. However, something has got to give. In all of the most recent lone-wolf mass shootings, information has emerged post-mortem indicating that the shooter likely had a severe mental illness. Often, the perpetrators have fallen through the cracks in the mental health system. Aaron Alexis, the Navy Yard shooter, displayed symptoms of paranoid schizophrenia, and Jared Lee Loughner (Tucson, AZ) was diagnosed with the same disorder. Seung-Hui Cho (Virginia Tech shooting) was diagnosed with severe anxiety disorder. James Holmes saw three mental health professionals prior to the attack and is pleading insanity in court. These cases show a pattern of untreated mental illness leading to violent behavior that harmed innocent US citizens. To reduce the likelihood of mass shooting events in the future, information about a person who poses a risk of committing these acts could be made available to authorities, security clearance officials, gun dealers, and select others with the potential to prevent harmful actions. Increased funding should also be given to neuroscience so that actions and mental health conditions are better understood.
Had authorities and gun dealers accessed the already public information, they might have prevented the Navy Yard and other mass shootings. Currently, work is being performed within the Veterans Administration to determine, based on electronic medical records, whether a patient is a suicide risk. Researchers employ natural language processing to scan electronic medical records, focusing on previous attempts, to determine whether patients are suicide risks. It has been shown that previous suicide attempts are most strongly correlated with future suicide risk. By scanning medical records with search engine technology, researchers have been able to determine (to about 80 percent accuracy) those who go on to commit suicide, and they are working to include other risk factors to further refine the algorithm. If intelligent analytics could be applied to those at risk of carrying out mass shootings, it could greatly improve the safety of all Americans. Unfortunately, most of the current safeguarding analytics are outdated and far better suited to catching cold-war spies than to determining whether someone may be a danger to society.
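To illustrate the kind of record scanning described above (only an invented simplification, not the VA's actual method, which uses far more sophisticated natural language processing and multiple risk factors), a minimal keyword-based flagger might look like this:

```python
import re

# Toy sketch of scanning clinical notes for mentions of a prior attempt.
# The patterns, record IDs, and note text are all hypothetical examples.

RISK_PATTERNS = [
    r"\bprevious (suicide )?attempt\b",
    r"\bprior (suicide )?attempt\b",
]

def flag_high_risk(records):
    """Return IDs of records matching any risk pattern (case-insensitive)."""
    flagged = []
    for record_id, text in records.items():
        if any(re.search(p, text, re.IGNORECASE) for p in RISK_PATTERNS):
            flagged.append(record_id)
    return sorted(flagged)

records = {
    "pt-001": "Patient reports a previous attempt two years ago.",
    "pt-002": "Routine follow-up; no acute concerns noted.",
    "pt-003": "History includes a prior suicide attempt in 2010.",
}
print(flag_high_risk(records))  # ['pt-001', 'pt-003']
```

The real research problem is exactly what this sketch omits: weighing many weak signals statistically rather than matching a handful of phrases, which is why the reported accuracy tops out around 80 percent.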
The information available to authorities, gun dealers, and others should be multifaceted and cover many pieces of the mental health, criminal, and neuroscience puzzle. It should combine records including arrest, criminal, and mental health records, weapons purchases, and access to dangerous chemicals and compounds. While this may seem excessive, the information gathered (except mental health records) is all public. Giving authorities a full history of public information and mental health information on individuals who may prove to be a danger to themselves or others could lead to better decisions about those individuals. If the Rhode Island authorities had had access to Aaron Alexis' complete records, they might have been able to justify, under Rhode Island law, placing him under a 24-hour psychiatric hold. If information on criminal records and mental health (in)stability were accessible to gun dealers, a history of mental illness might have precluded Mr. Alexis from purchasing a gun so quickly. This in turn would have triggered more of a response from his employer and the Navy, and might have prevented the mass shooting. Delaying the purchase of a weapon in such a case is a matter of public safety, not an infringement on rights.
In the case of James Holmes, the psychiatrist in charge of his care was concerned about his behavior and reported it to the campus police. This report was not followed up on. Certain notes and reports submitted by mental health providers about a patient could put authorities on notice, and these notes could in turn warrant a response. The mental health system failed Mr. Holmes, and in turn failed society. Had this data been accessible to gun dealers, it would not have prevented James Holmes from buying all of the weapons used in the Aurora shooting, but it would likely have reduced their number and brought him to the attention of the authorities before the shooting occurred, potentially saving the lives of 12 people. Had Mr. Holmes received mental health treatment before leaving school, this event might have been prevented.
It is important to remember that it isn't just adults who have these tendencies. The recent shooting at a middle school in Sparks, Nevada and another shooting over a decade ago in Columbine, Colorado were committed by minors. This should remind us of the importance of teaching children the dangers associated with these weapons and of keeping weapons properly stored when children are in the home. While children should be allowed, with their parents' permission, to shoot guns at firing ranges and in other appropriate settings (such as skeet or trap shooting, or hunting), children's access to firearms should be very limited. In addition, this is an opportunity to think about the mental health status of our children and how our health care system can better serve them.
The information available to authorities, security clearance officials, and gun dealers may raise flags during background checks on an individual before security clearances are granted or when weapons are purchased. While this flag should not preclude people from conducting these activities, it should serve to make sure that people are receiving the care that they need and are not a threat to themselves or others before conducting activities within the public trust or being allowed to legally purchase a weapon. This may work to not only prevent mass shootings, but may also reduce the number of gun suicides.
More funding for mental health and neuroscience research will help refine our understanding of mental health conditions, to better identify those who are truly likely to offend. By understanding the symptoms and related biology that lead to mass shootings, better prediction and increased prevention are possible. Predictive analytics have allowed us to determine who is at risk for suicide, and may also be able to help us identify lone wolves who endanger the lives of others, reducing the number of mass casualty events and leading to a safer environment for all Americans. The data referred to in this post is collected, but it is not shared across professions toward a common goal. We have the capability to prevent many of these incidents. The question becomes: should we do it?
It's unclear whether the number of mass shootings is increasing. What is certain is both the public outrage and the call to prevent these heinous crimes. What can we do to prevent or predict these violent acts? Often these questions are answered with the suggestion that we need to increase the availability of the personal records of individuals suffering from mental illness. For example, if a person suffering from mental illness suggests that they will commit violent acts, then the records of the mental health professional documenting the case should be available to law enforcement authorities. However, mental health professionals currently do often report such threats of violence to the authorities, yet the acts of violence are not prevented. This occurred in the cases of Aaron Alexis, the Navy Yard shooter, and James Holmes, the Colorado movie theater shooter. This suggests that sharing between mental health professionals and law enforcement ought to be more rigorous and systematic.
Although broad availability of mental health records seems like a practical solution, it does not recognize the need to maintain our longstanding civil liberties. The privacy of mental health records, and health records in general, is explicitly protected under federal law, the Health Insurance Portability and Accountability Act, more commonly known as HIPAA. Who among us would willfully allow our health records to be exported into a database open to the prying eyes of countless bureaucrats, analysts, and other interlopers? As horrible as the recent shootings are, they remain statistically rare events. So, while we're at it, shouldn't we take those health records and do something more with the data to affect many more lives, as well as the cost to society? For example, there are millions of people living with type 2 diabetes, a leading cause of heart disease, stroke, and indeed death, and many millions more at risk. It's estimated that someone diagnosed with type 2 diabetes between the ages of 25 and 44 will likely incur costs of $124,700 over their lifetime. With such a database and analytics, it would be easy to identify those both afflicted and at risk, and to dictate that mandatory exercise, diet, and weight loss regimes be put in place and, if not adhered to, enforced, saving billions of dollars a year and many lives in the process. Do the ends justify the means? Unlike mass shooters, diabetics are common.
This gets to the question debated since this country's founding: What are the costs and ramifications of living in a free and open society? Indeed, the issue is not, and should not be, finding mentally ill people. The issue is civil liberties in an age when governments and industries are rapidly developing technologies that can be, and are being, used to profile and track us, and to identify individuals and their unique behaviors. What rights to privacy, anonymity, and freedom of thought and deed do we still have? All our purchases, attendance at events, and movements on highways, trains, planes, and even buses are monitored and shared in databases with limited legal or technical protection. Yes, we give companies permission to collect this metadata, but did we give them permission to share it with governments or other companies? That is happening today. Just look at the NSA Section 702 and PRISM programs. Vast databases of metadata detailing the calling habits of millions of Americans were shared with a secret federal agency and used to profile, find, and track terrorists.
The reason given for that use of data had noble intent: finding and catching terrorists. Shall we now go and sanction this activity even further? To what degree do we forfeit our freedom for safety (theoretical or otherwise)? Now an argument is being put forth to try and use this technology in some sort of predictive policing scheme. What will be the next reason given after that for searching through the meta-data that details the behaviors and habits of Americans? Will it be something as simple as finding all those who would support a political party in opposition to the one in power, so that they can be targeted, marginalized, or sidetracked?
The new “big data” capabilities that companies like Amazon, Google, and Safeway find so useful in enhancing the customer's experience and their bottom line can become “big brother” frightening when used by a government without strict control. But there is at least one thing we can agree on: the state of our understanding of mental illness is poor, and our therapeutic options are limited. Let's agree to advance that cause, and keep our liberties instead.
Over nearly two decades of work on science and technology policy issues, the Potomac Institute has become a leader in providing meaningful policy options for science and technology, including national security, defense initiatives, and S&T forecasting. The Center for Revolutionary Scientific Thought (CReST) will bring together individuals from a variety of backgrounds to foster discussion on science and technology futures from both an academic and policy perspective. CReST intends to develop new ideas about the future directions of science and technology, formulate strategies on how to achieve revolutionary gains in S&T, provide a forum to discuss the associated policy, ethical, legal and social issues, and inform the public and policymakers.
The Potomac Institute for Policy Studies invests in research on science and technology trends to identify the key developments that could radically change or affect our society and national security. To this end, CReST produces briefings, one-pagers, and full reports on trending science and technology topics, including Big Data Analytics, Neuroscience and Technology, Threats to the Human Race, and Mental Health in the Age of Technology. These reports are intended to help policymakers understand current science and technology in order to drive federal investment in S&T and S&T policy for the good of society and for our nation's security. In addition, CReST will develop larger, more complete reports on specific topics of interest to the defense community. Examples of such reports produced by PIPS in the past include Out of the Box (2001), Biosurveillance (2005), and Neurotechnology Futures (2007).
The Center for Revolutionary Scientific Thought will also focus on bringing Bold Ideas to light. CReST will host activities designed to find and foster big, bold science and technology ideas that address key societal, national, and international issues. A forum to discuss these bold ideas is the Bold Ideas seminar series featuring past and future Nobel Laureates presenting to and holding dialogue with science and technology leaders in a variety of agencies across the government. The CReST Fellows program sponsors extraordinarily talented scientists for a year at the Institute to address a big complex problem with creative complex solutions.
The conversation around advances in science and technology is incomplete without discussion of the respective ethical, legal, and societal implications (ELSI). Each year the Potomac Institute for Policy Studies hosts an ELSI conference on neuroscience and technology. A similar but more general conference will be held addressing emerging science and technologies and their ELSI for society.
The Center for Revolutionary Scientific Thought is composed of Potomac Institute employees and additional Adjunct Fellows. There are three permanent members of CReST: the CEO, Mike Swetnam; the Chief Scientist, Bob Hummel; and the Chief of Staff, Kathryn Schiller-Wurster. This year there are three CReST Fellows: Jennifer Buss, Patrick Cheetham, and Ewelina Czapla. Senior CReST Fellow Mark Ridinger and additional Adjunct Fellows participate in CReST meetings to discuss bold ideas addressing key societal, national, and international science and technology issues. This blog is intended to keep you updated on those conversations and to allow you to pitch in your two cents. Stay tuned for posts describing our discussions, Bold Ideas seminars, current events, and policy recommendations. If you have questions or comments, please contact the CReST Coordinator, Jen Buss.